Filed by Archimedes Tech SPAC Partners Co.

Pursuant to Rule 425 under the Securities Act of 1933

and deemed filed pursuant to Rule 14a-12

of the Securities Exchange Act of 1934

Subject Company: SoundHound, Inc.

Commission File No. 333-262094

 

Company Name: SoundHound

Event: Cowen 2nd Annual Mobility Disruption Conference

Date: March 4, 2022

 

Jeffrey Osborne, Analyst, Cowen & Co.

Hey, good afternoon, everybody. It’s Jeff Osborne, Mobility Technology Analyst at Cowen. Thanks for tuning in on day three of our Mobility Disruption Conference. Very pleased to have Keyvan and Nitesh joining us from SoundHound, the CEO and CFO, respectively. The company recently entered into a SPAC merger transaction, which I’m sure they can touch on. But gentlemen, thanks for taking the time to join us. Maybe for those in the audience who might not be familiar with SoundHound and the recent transaction, do you mind just taking a few minutes to introduce what the company’s up to as well as the structure of the deal?

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Absolutely. Thank you for having us. SoundHound is a leading innovator in voice AI and conversational intelligence. We started the company in a Stanford dorm room in 2005 with my co-founders. We wanted to make a long-lasting, significant impact in the world. And we were inspired by concepts in sci-fi like Star Trek, things they had that we didn’t, like replicators and holograms and teleportation.

 

We thought the one that would happen for sure in our lifetime is voice AI: talking to robots and computers, and they talk back to us, and they can get things for us and get things done for us. And we wanted to make that happen. So, we started the company in a dorm room and built all the technologies in voice AI to build that platform. It’s been a long journey.

 

We unveiled it finally in 2015, and our customers choose us for a number of reasons. First, we promise superior technology, and we also offer it under friendlier terms. We let our customers keep their brand, keep their users, access the data, and differentiate, innovate, and monetize. And we power cars, TVs, IoT devices, mobile apps and services. Some of our customers include Hyundai, Mercedes-Benz, Pandora, Deutsche Telekom, Snapchat, VIZIO, KIA, and Stellantis. And last year, we surpassed a billion queries to our platform; that number doubled in six months. And as you mentioned, we announced we will merge with a SPAC. We expect to be listed on NASDAQ early this year, and I can tell you more about that if you like.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Perfect. Thanks for the introduction there. I want to dig into the Houndify platform. I’ve seen your demos from when we last spoke, and certainly it’s high speed, multiple requests can be layered on top of each other, and it’s conversational. I would love to talk about how the technology has evolved and how it’s differentiated versus, say, Nuance and other folks in the space.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Yeah, thanks. Perfect question. So our vision is to make voice AI better than humans, to make computers better than humans at language understanding. Now, we know computers are better than humans at many things; they are better at computing and many tasks. But when it comes to language understanding, computers aren’t that good. We have complex conversations with each other, but when we talk to computers and personal assistants, we lower our expectations and talk in short, simple, keyword-based queries, and we’re nervous that they won’t work, and in most cases, they don’t work. Eventually we just learn what they can do and limit our queries to those things, like setting a timer or asking for the time. And we want to change that. We want to make computers better than humans, and if we can achieve that, we can make the world a better place. I’ll give you a more specific example. Let’s say you’re looking for a Chinese restaurant, and you ask, “Show me Chinese restaurants.” Most personal assistants can detect the words “Chinese” and “restaurant” and give you the results.


Now, if you say, show me restaurants except Chinese. Most personal assistants give you Chinese restaurants. They give you exactly what you don’t want. Because again, they don’t really understand the whole sentence. They just detect the keyword Chinese and restaurant, and they give you exactly what you do not want. Now we have unique approaches to language understanding that enable us to understand much more complex conversations.

 

In fact, I can give you a live demo. I’ll talk to our Hound app, which is powered by the Houndify platform, and I’ll ask an even more complex question. So, you can hear me speaking and the device speak back. “Show me Asian restaurants in San Francisco excluding Chinese and Japanese, and only show the ones that have more than three stars and are open after 9:00 p.m. on Wednesday.”

 

So my query was very complex. It had compound criteria. It had exclusions, double exclusions: I wanted Asian, but not Chinese, not Japanese, in San Francisco, open after 9:00 p.m. on Wednesdays, with more than three stars. It got me what I asked for. It also spoke the criteria back to me to confirm that it didn’t just give me some keyword results. It actually is giving me exactly what I wanted.

 

It’s also very conversational. I can follow up and refine my criteria. So, I can say something like, “Sort by rating, then by price, remove Korean and Vietnamese, and only show the ones that are good for kids and have a patio.”

 

And I can ask, is the first one open? Tell me about the second one. How far is it from the third one to the airport? And I can extend it to other domains. I can say, is my flight on time? How is the stock market doing today? And sports scores and general knowledge information. So all these domains can interact with each other, and it’s very conversational. You can ask complex queries, and that is the vision: to make superior technology in language understanding that is better than humans.

 

Jeffrey Osborne, Analyst, Cowen & Co.

That’s really impressive. And kudos for pulling off a live demo there on the Zoom. I’ve got a VIZIO at home, which I believe has your technology inside, in my office in the basement. But maybe just touch on how product creators are using voice AI to make their products better or more experiential. I’d love to understand that a bit more.

 

Nitesh Sharan, Chief Financial Officer, SoundHound

Yeah. Maybe I’ll jump in here, Jeff, and I’ll also convey a bit of thanks for inviting us here. So, there are a lot of applications, and actually that’s one of the really distinctive and exciting things about our platform. We play across a multitude of different devices, across different industries, globally. Keyvan mentioned some of these, but we’re in autos; we voice-power multiple cars. Our investor material, which is on our website and which we conveyed a couple of months ago when we announced the transaction, highlights many partners including Mercedes, Stellantis, KIA, Hyundai and others. And that’s one vertical.

 

You mentioned VIZIO, so devices is another area. And we have great partnerships there with voice powering TVs and other devices. We partner with the likes of SKY and Deutsche Telekom as well, and a number of other vendors. And then there’s the app ecosystem: we are the voice power behind Snap, behind Pandora. So, there are applications that also increasingly are adopting voice technology.

 


 

 

And for us, what we’re really excited about is there’s a tremendous market opportunity. We highlight a $160 billion TAM projected in 2026 across a number of verticals, including the couple that I mentioned, but you could increasingly see penetration for us across healthcare, financial services, and so on. And that gives us tremendous opportunity, and we’re distinctively positioned, which again, I appreciate the opportunity to have this session, because we are unique in the sense that we can play across these verticals.

 

We have this foundation of core tech that’s differentiated, which Keyvan highlighted with the demo. And we have a unique business model also. So maybe, if you could bear with me for a second, I’ll jump into a little bit of that in terms of how we generate revenue.

 

So, we have a three-pillar revenue generation model. The first is where we voice-enable products, as in the VIZIO example or with the auto manufacturers, and we get royalties based on that.

 

Second, we voice-enable services, and we get subscriptions based on that. Again, I highlighted a couple of those examples. The third, and for us a distinctive opportunity, is how we bring together voice-enabled products with voice-enabled services, and we provide monetization there.

 

So that is actually very interesting and distinct. Imagine you’re driving in a car that’s voice-enabled, and you’re searching for coffee. You can connect with the ecosystem of coffee shops. Even just the recommendation engine in that algorithm is a monetizable moment. And if you ultimately say, yes, please, pickup at the next exit, that transaction is a monetizable moment. So, for us, that opens up the aperture, moving from just a licensing-only ecosystem and TAM to licensing plus monetization. And so that’s a big part of our opportunity and our vision and what we’re driving here.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Maybe just to flesh that out, I had that further down on my list, but since we’re on the monetization side: I think you’re in Oregon, so let’s say I asked for a Voodoo Doughnut in Portland, and let’s say they had a voice-enabled app or a feature in my car. How does that work? I request whatever wacky doughnut they have of the day, and then you’re getting a fraction of a penny per transaction multiplied by millions? Or can you just walk through the business model at a high level there?

 

Nitesh Sharan, Chief Financial Officer, SoundHound

Yeah. At a high level, well, first of all, just going to Voodoo Doughnut is an experience in itself, so for those who haven’t had the opportunity, I appreciate you highlighting that. But yeah, the idea is to imagine you’re driving in your car, and you can order a doughnut. Obviously, for Voodoo Doughnut in that example, that’s lead generation, and that’s exciting for them to get more customers.

 

So, they’re incentivized there. For the consumer who’s craving that doughnut with the Cap’n Crunch on top, or whatever they have, that’s exciting because they get value from the transaction. And then the car manufacturer is also incented. And so on the sale, let’s say we make up a number and it’s a $10 sale, there’s a share of that that comes to us and a share of that that goes to the car manufacturer. And to take that model one step further, what’s so exciting and disruptive is you can envision those car manufacturers, or even more so just broader product makers, now moving from voice AI being a cost element in their bill of materials, or whatever you want to call it, to actually a revenue-generating model.

 


 

 

And so that unlocks a whole ecosystem of products. We also talk about how there are going to be 75 billion IoT devices. There are a lot of smartphones, a lot of people have smartphones, but increasingly we’re seeing IoT applications, internet connectivity with your appliances, with your coffee maker and so forth. And it just unlocks a whole ecosystem of people who want to engage in this voice capability, because you can envision even, in the future, a light bulb being voice-enabled, because it allows for monetization opportunities, repeat purchases, et cetera.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Got it. Maybe jumping around here on my list of questions, I’d love to touch on the patents. I think in your SPAC merger deck, you had 200 patents or so granted and pending. I would love to understand the scope of those and how defensible they are in terms of protecting your competitive moat, if you will. And then I think you had 35 patents in what we were just talking about, conversational monetization. So maybe flesh that out as well, if you don’t mind.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Absolutely. So, IP is one of our biggest assets, and technology innovation is in our DNA. We try to focus more on the quality of our patents as opposed to quantity. We have in-house people, and we’re very proud of all the patents that we have filed. Our patents span core technology and also processes around it and user experiences. And in monetization, we actually think search monetization is going to change. Today, people search with keywords, so advertisers can bid on those keywords, and there have been patents around keyword bidding, for example, that have been very successful.

 

But if you think about it, people are not going to search with keywords anymore; they’re going to search by having conversations with the things around them. So, the keyword-bidding invention, which is the main way search is monetized today, is not going to hold. We predicted that years ago, and we thought that because search is going to be based on conversation, advertisers will bid on conversational interactions instead of keyword interactions. So we filed a number of patents around it, and we aim to be a leader in the next generation of search monetization.

 

I’ll give you a specific example. Let’s say today is Friday, and you are sitting in San Francisco, and you say, how’s the weather going to be in Seattle this week? Now, this is a weather question, but because of the meaning behind it and the context, you are in San Francisco asking for the weather in Seattle, and you are asking for the weather for a time in the future, it’s very likely that you are thinking of traveling to Seattle.

 

So, travel advertisers can choose to bid on that kind of meaning. There are no keywords in the question about travel. There’s no mention of flights or hotels or trips or travel; it’s the meaning behind it that can tell you that the user is likely to travel. So we have patents around the concept of advertisers bidding on conversational interactions.

 

Jeffrey Osborne, Analyst, Cowen & Co.

And maybe that’s what my next question was about. You had a few buzzwords, and I’m not sure if they’re yours or the investment bankers you worked with as part of the investor deck that I saw, but what are Speech-to-Meaning and Deep Meaning Understanding? It sounds a bit like word salad, but is that what you were just describing, sort of the second derivative of the voice query? Is that deep meaning, or no?

 


 

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Yeah. Our own engineers came up with those.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Okay.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

And those are our technology innovations.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Right.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

And I’ll briefly describe both Speech-to-Meaning and Deep Meaning Understanding. So, when you talk to most assistants, first they convert your speech to text, and then they convert your text to meaning. It’s done in sequence, in two separate steps, and it can be slower, because you have to wait for speech-to-text to finish, so you have the whole text of the query, and then you do text-to-meaning. The user has to wait for that second step. It also can be less accurate, because if your speech-to-text makes a mistake, then you have the wrong text, and you feed that wrong text to your next step. It doesn’t matter how good that step is; it has the wrong input, so it’ll suffer.

 

And we thought the human brain doesn’t work like that. As you’re listening to me right now, your brain isn’t going speech to text and then text to meaning. You’re going from speech to meaning in real time, and that helps you with speed and accuracy. Sometimes, if you don’t hear me right, the meaning information that you’re processing in real time guides the accuracy of speech recognition.

 

We were inspired by the human brain, and we believe we’re the only company that has been able to combine speech recognition and natural language understanding in real time, feeding them into each other. That gives us better latency for the end user, and it also improves the accuracy dramatically. So that’s one of the innovations we usually highlight.
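[Editor's note: to make the contrast concrete for technical readers, the following is a minimal illustrative sketch, not SoundHound's implementation. All names, scores, and the toy intent grammar are hypothetical; it only shows why interpreting meaning alongside recognition can recover from a speech-to-text error that a strictly sequential pipeline commits to.]

```python
# Toy contrast: a two-step pipeline (speech -> text -> meaning) versus a
# joint approach that uses meaning to rescore competing transcripts.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    acoustic_score: float  # how well the audio matches this transcript

# A tiny hypothetical "grammar" of intents the assistant understands.
INTENTS = {
    "show restaurants except chinese": "FIND_RESTAURANTS(exclude=chinese)",
}

def meaning_score(text: str) -> float:
    # Joint idea: a transcript that parses to a known intent is more
    # plausible, so meaning feeds back into recognition.
    return 1.0 if text in INTENTS else 0.0

def sequential(hyps: list[Hypothesis]) -> str:
    # Two-step pipeline: commit to the best acoustic transcript first,
    # then interpret it; an ASR mistake propagates unchecked.
    best = max(hyps, key=lambda h: h.acoustic_score)
    return INTENTS.get(best.text, "UNKNOWN")

def joint(hyps: list[Hypothesis]) -> str:
    # Speech and meaning scored together: the well-formed query can win
    # even when its acoustic score is slightly lower.
    best = max(hyps, key=lambda h: h.acoustic_score + meaning_score(h.text))
    return INTENTS.get(best.text, "UNKNOWN")

hyps = [
    Hypothesis("show restaurants except china", 0.9),    # ASR mishears
    Hypothesis("show restaurants except chinese", 0.8),  # correct transcript
]
print(sequential(hyps))  # UNKNOWN: the recognition error propagated
print(joint(hyps))       # FIND_RESTAURANTS(exclude=chinese)
```

In this sketch, both engines see the same two transcript hypotheses; only the joint scorer recovers the intended query, which is the accuracy benefit described above.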

 

The next one is Deep Meaning Understanding, and you saw that in the demo earlier today, and thanks for appreciating live demos. Anytime I do that, my team members get mad at me, because one of these days, one of these demos will go wrong. But I see a lot of value in live demos. So Deep Meaning Understanding is the ability to understand highly complex queries instead of just detecting keywords, like in the Chinese restaurant example. I have this other example about hotels. I used to travel for work a lot, and I used to have this query: “Show me hotels in San Francisco that are less than $600 but not less than $300, are pet friendly, have a gym and a pool, staying for two nights, and don’t include anything that doesn’t have WiFi.” Now, that’s a very reasonable ask for a hotel search that would probably take you 20 or 30 minutes on a website with complex filters.

 

You can do it in just a few seconds with our technology. And we haven’t seen anyone else be able to handle complex queries like that. That’s the Deep Meaning Understanding approach, our unique approach to language understanding.

 


 

 

And then the third breakthrough that we’ve highlighted is Collective AI. That’s our unique architecture that gives us an exponential effect in understanding growth based on linear contributions. Normally, when you add one skill, and another skill, and another skill, those skills don’t interact with each other, so linear contribution means linear growth, and the user experience is not as good either.

 

Our Collective AI architecture works around domains, and these domains, even though they’re contributed linearly by developers, interact with each other and learn from each other. And that gives us exponential growth in understanding. The ultimate goal is to have a personal assistant that knows everything, can answer any question, and can do anything. We don’t have that yet. Nobody has it, but we are on a faster path to get there using our Collective AI architecture.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Got it. That’s helpful. And no disrespect to the engineers who came up with the names of the product lines. But anyway, moving on here, maybe for Nitesh: as your revenue mix shifts from royalties to subscriptions to monetization, I would love to understand the growth rates and sort of the margin trajectory over time with the mix shift there. Is that something you could walk us through?

 

Nitesh Sharan, Chief Financial Officer, SoundHound

Sure. So first, yeah, we see tremendous growth in all the pillars. In the material that we posted on our website, we show a composition shift over time. Today, in 2021, it’s 88% royalties, with single-digit percentages across pillar two and pillar three, services subscriptions and monetization. We do expect increasing growth across services and subscriptions, and increasing growth thereafter in monetization.

 

But to be clear, we expect tremendous growth in all three pillars as we expand. And to your earlier question, there are certain verticals where there are incumbent legacy players, and openly the customers themselves are coming to us, asking us to bring our technology to bear with their products. And we see tremendous opportunity to increasingly gain share.

 

There are newer spaces where we are going to enter, and we’re going to expand what voice AI can do for those products. And all of that will help grow the licensing. We have made multiple announcements even post the merger announcement in November; we have a series of other announcements that have come out publicly. I can highlight several of them, but maybe I’ll point out a couple. We announced in the December timeframe, I think, a partnership with Netflix to voice-enable their set-top box, their Da Vinci Reference Design Kit capabilities, which has tens of millions of subscribers. And it’s a great opportunity.

 

We announced in January, I believe, a partnership with Qualcomm to voice-enable their Snapdragon chip. So, the point I’m trying to make is we are scaling across verticals with new opportunities, and we believe licensing will grow, but also the services opportunities. We’re excited about the opportunity within food service ordering and quick-service restaurants, which will directly enable us to scale and grow within services.

 

And then, as I highlighted earlier with the Voodoo Doughnut example, which now has me really hungry, there’s bringing together that monetization opportunity, where we’re growing in licensing, in products that are voice-enabled, and growing in services that are voice-enabled. We do expect that to compound, what we call sort of the flywheel effect of these things coming together, where they will compound one another, and we’ll see a much more balanced composition of our revenue across all those pillars, and they will amplify one another and scale and grow.

 


 

 

And then you asked about the margin profile, I think, was the other thing. So yeah, to start, we provide the core tech and software, both cloud-enabled and embedded technology, and the good thing is it’s very scalable. We projected out a margin profile in the 70% range, and we believe that’s a sustainable long-term opportunity. Obviously, as we build scale, and we’re a growing company with hyper growth and high aspirations, from an infrastructure standpoint there are economies of scale as we grow.

 

So those are opportunities from a margin-efficiency standpoint as we grow. As we grow into monetization, in that example we talked about, there’s revenue sharing across the ecosystem with the product creators and the service providers. We believe there’s a fair revenue-sharing algorithm, which openly is maybe somewhat dilutive to the margin rate, but certainly accretive from a margin-dollar standpoint.

 

So the balance between the economies-of-scale opportunities on gross margin from a software delivery perspective and the revenue-sharing opportunity, which we see as significant, lands roughly in that zone of, say, the 70%-ish range.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Since it is a mobility conference, maybe we could just drill in a bit on the automotive and mobility side of your business. So, I understand you have both embedded and cloud. Are you known for one versus the other? That would be part A of the question. And then B, I’d be curious about the engagements you have. I think you said Stellantis and KIA and Mercedes. Are they all using it for their own branded platform, where you would say, “Hey, Mercedes,” or whatever the wake word is in the automotive space? Because a common question I get, as it relates to my coverage universe, is sort of, how does a Google or an Amazon fit in the car versus the OEMs wanting their own branded experience, which obviously is what you’re enabling? So maybe you could just walk us through that.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Yeah, we absolutely believe the product creators want their own branded experience. They don’t want to create a product and then have it be a shell for somebody else’s service, and product creators are increasingly becoming aware of that. So, we let our partners choose their brand and customize, differentiate, and monetize. And let me just go back on that: some people do ask me, will there be Alexa everywhere, Google everywhere? And obviously those service providers want that. But an example I use as kind of a proof by example: imagine we live among robots. There will be a day when we have 10 billion people and maybe 10 billion robots. Will all those robots have the same name? Will all of them be called Alexa? Obviously not, right? So, every robot will have its own name, its own identity, its own personality, and they will be good at different things.

 

Some of them will be teachers, some of them will be doctors, some of them will be in your house, some of them will be your friends. And today, we have a variation of that. We have cars that do something, and we have coffee makers and TVs and appliances and washing machines and IoT devices, and they do different things. And we believe that the product creators know their brand the best, their product the best, and their users the best.

 

So, we need to enable them to differentiate and customize and monetize under their own brand. It is a big part of our offering, and it’s a reason people choose us. We do offer both embedded and cloud; we promise a complete offering to our customers. And in some cases we offer both, a hybrid model. For example, in cars, the legacy speech recognition has usually been embedded, because cloud computing was not a big thing.

 


 

 

So, they are still attached to having that. They are worried about cars losing their connectivity. But as connectivity becomes more mainstream, users can do more things. You can’t ask for the weather, for example, if you don’t have a connection. So what we offer is a hybrid offering, where we have our embedded solution and our cloud solution working together, and then we have an arbitration engine that decides which one to use, and that can happen in real time.

 

So, you could be talking to your car. We send the query to both cloud and embedded, and in real time we’re processing both responses. Let’s say you go through a tunnel and lose connectivity; then we start paying more attention to the embedded engine. And then, before you finish talking, you could come out of the tunnel and get your connection back. We switch back to cloud, and the cloud might end up winning, depending on your connection. So, we also have that hybrid-engine package to really, ultimately give the best user experience to the end users.
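[Editor's note: the arbitration described above can be sketched in a few lines. This is an illustrative toy, not SoundHound's implementation; the function names, the sequential dispatch, and the "prefer cloud when reachable" policy are all assumptions made for clarity.]

```python
# Minimal hybrid-arbitration sketch: the query goes to both an embedded
# engine and a cloud engine, and the arbiter picks a winner based on
# what actually came back, so connectivity loss degrades gracefully.
from typing import Optional

def embedded_engine(query: str) -> str:
    # Always available: a smaller on-device model.
    return f"embedded-answer({query})"

def cloud_engine(query: str, connected: bool) -> Optional[str]:
    # A larger model, but only reachable when the car has connectivity.
    return f"cloud-answer({query})" if connected else None

def arbitrate(query: str, connected: bool) -> str:
    # Dispatch to both (sequentially here for simplicity; a real system
    # would race them in parallel), then prefer the cloud result when it
    # arrived and fall back to the embedded result otherwise.
    embedded = embedded_engine(query)
    cloud = cloud_engine(query, connected)
    return cloud if cloud is not None else embedded

print(arbitrate("weather in Seattle", connected=True))   # cloud wins
print(arbitrate("weather in Seattle", connected=False))  # embedded fallback
```

A production arbiter would run both engines concurrently and could flip its preference mid-utterance as connectivity changes, as in the tunnel example above, but the fallback decision is the same shape.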

 

Jeffrey Osborne, Analyst, Cowen & Co.

Excellent. Maybe for Nitesh: just given investor, I’m trying to think of the right word, anxiety is not the right one, but skepticism of SPAC merger decks in general, I would just love to understand the projections. I think you had $100 million of bookings and backlog in the deck, and you’re projecting close to a $1 billion revenue ramp in five years, with average revenue per user going up roughly 10x, I think from $0.30 to $3 or so. Can you just walk me through the journey of how those projections play out? Obviously, you’ve got a lot of new announcements, which are great to see. But any sort of backfilling of the growth would be helpful to understand.

 

Nitesh Sharan, Chief Financial Officer, SoundHound

Yeah, absolutely. One thing I’ll say is we did highlight our bookings backlog and our expectations over the next couple of years, where we have the greatest visibility, and that does scale. We’re actually right in the middle of SEC engagement on the S-4, and we’re iterating. And I can tell you that the filing of our 12/31 financials is forthcoming shortly. And I think people see great momentum in our business.

 

And one important point is that while we see scale to grow from that $100 million highlighted, we’re exceeding that; we’re growing. That’s a leading indicator, especially because as you apply the software revenue recognition rules, ASC 606, there’s a delayed effect of those bookings flowing into the P&L. One thing you highlighted was the ARPU numbers from monetization, and the important point I’d highlight is that that is truly incremental to the bookings.

 

So there was no monetization baked into those bookings numbers we provided in the investor materials. It is incremental. And again, as we build the ecosystem, grow in products that are voice-enabled, grow in services that are voice-enabled, and integrate those and scale, we believe those are relatively achievable and conservative assumptions. Just to put a couple of data points on that $0.30 to $3 estimate:

 

As we scale across food service ordering, into groceries, into other car-related services, into product-specific areas, into the advertising space, there’s a lot of opportunity with the volume of products that are becoming more and more connected into the voice AI space within our Houndify platform. And we’ve made some reference points to other industries where this has happened before; we’re not incubating something new. This is actually just a transference.

 


 

 

So, just to give a couple of data points: certainly we have a tremendous growth opportunity and scale, and we have to go deliver and provide the metrics for the investment community to watch us as we go along, and we’ll actively engage. But as a reference point on the $0.30, if you look at what Google and Facebook have done over the last 10 years, they’ve scaled from roughly $4 to north of $30 of ARPU, revenue per user per year. And that’s a global number. If you look at their U.S. numbers, I think Facebook is in the high one hundreds and Google is in the high two hundreds. And if you look across a number of other platforms, going from less than $1 to low single-digit dollars, we believe, is certainly achievable.

 

And the momentum we’re seeing in the marketplace, with the conversations we’re having with customers, gives us increasing confidence. But I’ll close with this: this is a journey, and we’re excited to become a public company, where we can have an active, ongoing dialogue, lay out the metrics, and measure them, so you can hold us accountable, and we can continue to engage actively and openly on what we’re seeing and what we’re driving.

 

Jeffrey Osborne, Analyst, Cowen & Co.

That’s an excellent way to conclude. I look forward to monitoring your progress and reading the S-4 as well when it comes out; there’s always a lot of helpful detail and information in that. Gentlemen, I appreciate you taking the time and hopping on with us. I’m sure you’ve got a busy day ahead of you, but thanks so much for joining us.

 

Keyvan Mohajer, Co-Founder & Chief Executive Officer, SoundHound

Thank you for having us.

 

Nitesh Sharan, Chief Financial Officer, SoundHound

Thanks for having us.

 

Jeffrey Osborne, Analyst, Cowen & Co.

Appreciate it.

 

***

 


 

 

Important Information and Where to Find It

 

This communication refers to a proposed transaction between Archimedes Tech SPAC Partners Co. (“Archimedes”) and SoundHound. This communication does not constitute an offer to sell or exchange, or the solicitation of an offer to buy or exchange, any securities, nor shall there be any sale of securities in any jurisdiction in which such offer, sale or exchange would be unlawful prior to registration or qualification under the securities laws of any such jurisdiction. In connection with the transaction described herein, Archimedes has filed relevant materials with the SEC, including a registration statement on Form S-4, which includes a preliminary proxy statement/prospectus. Security holders are encouraged to carefully review such information, including the risk factors and other disclosures therein. The definitive proxy statement/prospectus will be sent to all Archimedes stockholders. Archimedes also will file other documents regarding the proposed transaction with the SEC. Before making any voting or investment decision, investors and security holders of Archimedes are urged to read the registration statement, the proxy statement/prospectus and all other relevant documents filed or that will be filed with the SEC in connection with the proposed transaction as they become available because they will contain important information about the proposed transaction.

 

Investors and security holders may obtain free copies of the proxy statement/prospectus and all other relevant documents filed or that will be filed with the SEC by Archimedes through the website maintained by the SEC at www.sec.gov or via the website maintained by Archimedes at www.archimedesspac.com or by emailing [email protected].

 

Participants in the Solicitation

 

Archimedes and SoundHound and their respective directors and executive officers may be deemed to be participants in the solicitation of proxies from Archimedes’ stockholders in connection with the proposed transaction. Information about Archimedes’ directors and executive officers and their ownership of Archimedes’ securities is set forth in Archimedes’ filings with the SEC. Additional information regarding the interests of those persons and other persons who may be deemed participants in the proposed transaction may be obtained by reading the proxy statement/prospectus regarding the proposed transaction. You may obtain free copies of these documents as described in the preceding paragraph.

 

Forward-Looking Statements

 

This communication contains forward-looking statements, which are based on estimates, assumptions, and expectations. Actual results and performance could differ materially and adversely from those expressed or implied in forward-looking statements. SoundHound and Archimedes do not undertake any obligation to update any forward-looking statements, except as required by law.

 
