Podcast EP 167 - Entering the era of AI super tools - Gabriel Rene, CEO, Verses

Feb 27, 2023

This week we interviewed Gabriel Rene, CEO of Verses. Verses is the company behind KOSM, the world's first network operating system for distributed intelligence.

In this episode, we discussed the coming explosion of AI-based super tools like ChatGPT, built on the converging maturation of AI, IoT, robotics, and AR. We also explored the challenges facing this technology's development, ranging from the need to lay a strong rules-based foundation for safe AI, to engaging with governments, to integration with legacy business systems.

Key Questions:

  • How do we get the industry actively involved in creating solutions before the problem emerges?
  • What is the future of society with a handful of systems developed by OpenAI, Google, and entrepreneurs?
  • What are the human, regulatory, and business challenges for AI super tools?

Erik: Gabriel, thanks for joining us on the podcast today.

Gabriel: Great to be here, Erik. I appreciate the time.

Erik: Gabriel, this has got to be one of the most ambitious companies that we've had on the show recently. So, I'm really looking forward to understanding your vision for how the space of artificial intelligence, general intelligence, will develop. But before we go there, I'd love to understand a little bit more about your background and how you made the decision to throw yourself into this incredible challenge. It looks like you started all the way back in 1992 as a sound design engineer. So, you have an engineering background. Then you've set up this World Internet Center, which is quite an interesting concept back in the boom days. What was the concept behind that?

Gabriel: Well, in the early '90s, I started working in an advanced R&D lab called Cyberlab, which was sort of the tip of the spear, a really aggressive emerging-tech R&D group just outside Silicon Valley that some of the biggest companies, Microsoft, Apple, Intel, and others, would outsource some of their impossible problems to, along with government projects and other things. I was lucky enough to join this cyberpunk crew — everything from scientists, to deep tech nerds, to hackers, and PhDs — and to have this opportunity to be exposed to early technologies.

The founder of Cyberlab, Dan Mapes, had a background in artificial intelligence. Dan is actually the co-founder of Verses today; 30 years later, we're still working together. Being exposed to that early-'90s era of technology, when the web was first emerging, was really exciting. I also got exposed to far more advanced technologies, certainly the expert systems around artificial intelligence. I got a chance to work with Dr. Tom Furness of the Air Force on early augmented and virtual reality headsets — experiential, immersive computing — way back in those early days. That was the age of digital media, the whole idea that you're playing with audio and video and real-time computer graphics, like Silicon Graphics machines, or even performances around that. It was suddenly this moment where governments, large corporations, movie studios, everybody wanted the same tech.

The irony was that the sci-fi stories of that time, the cyberpunk stories, were describing this world where you'd have hyper-speed networks with AI agents and robots and flying cars, the sci-fi stuff of the era. Meanwhile, it would take five minutes to download a stamp-sized JPEG on the web. That gap between the vision of this hyper-immersive sci-fi future and the reality of how slow compute was, how slow the networks were, and the rest was an interesting area for a young man like me to be thrown into. I was one of the younger guys in that field at the time. That was really where it all started. What became clear over the years was that the idea of an immersive network of intelligent technology never went away. I ended up working on lots of projects in different industries around the various pieces.

Then as mobile started to emerge — I got into the mobile industry in the early 2000s — this led to the Internet of Things, wearables, and network optimization, understanding the global network at scale. As I got older, what became clear was that the sci-fi stories with the really cool technology, the stories we all love, are fundamentally horror stories about horrible dystopian futures: the coolest tech that we want, in a world that we don't want to live in.

Ultimately, this led to, when I was in the telecom space in the mid-2010s, the investments the telcos were making in 5G, which were all the result of research that essentially said the main users of the network would not be human. It would be machine-to-machine. It would be autonomous cars and drones and vehicles. It would be holographic augmented reality content. It would be the industrial Internet of Things and manufacturing robotics. And so, the largest infrastructure investment in history, which every telco in the world made at the same time with 5G, was based on the idea that we already had 120% human penetration — everyone had a phone and a half or so — and that the future was about this convergence of these technologies.

When I saw that happening — I saw Oculus get purchased, we saw the breakthroughs that suddenly came out of deep learning in AI, the Internet of Things and sensing systems were starting to move — I called up my old boss from Cyberlab and said, "Hey, are you seeing what I'm seeing?" He said, "Yeah, it's like this convergence of all these things is reaching a certain maturity point. We've been talking about this stuff for 25 years. Let's get in the middle there and see what we can make." That was what spawned Verses. It was the idea that the convergence was coming. We'd been watching these trends. We'd been working in and around the edges of all these technologies, in some of the most advanced areas of them, for decades. Then finally, wow, the great convergence is coming. What kind of shift is that going to create? If it's about the power of exponential technology now becoming part of this network of exponential technologies, how do we angle that to make a better, smarter world with cool tech, and not just cool tech in a horrible dystopic world? That was really the motivation.

Erik: Yeah, that's a good outlet you've got in that science fiction. I think that is, to some extent, the general perception for how this might roll out, or at least the perception among the more sci-fi part of the population. I think it was Lex Fridman who hosted Neal Stephenson on his podcast a few weeks ago, and they were discussing this, basically. It's a unique problem area in that you have to solve the problem before it emerges. Because if you're dealing with the chemical industry, you can wait until pollution gets out of control. Then you can say, "Okay, this is a problem. We've got to rein it in. Let's get the regulators involved." And we can control the chemical industry. With general AI, you can't do that. You can't wait till the general AI gets out of control and then say, "Oh, we should probably start to put in some regulations and figure out how to control this." The cat is out of the bag at that point. But then, how do you get businesses and people that are competing with each other to move fast, to slow down and talk out the problems, and implement some things that will make them move slower, in order to make sure that whatever they end up building is actually done in a way that benefits society?

It looks like you're taking a step also in this direction with your role in IEEE with the Ethics Certification Program for Autonomous and Intelligent Systems. What's the backstory there? More generally, I'd be really interested in hearing your thoughts on this general problem of how do we get industry actively involved in coming up with solutions before the problem really emerges and society is screaming at them to do so?

Gabriel: Well, I guess, the first thing that comes to mind is the analogy. The analogy is that if you had a horse that once it took off doubled its speed every two steps, and you wanted to build a bridle while riding that horse and then put it on the horse, your odds are pretty slim. So, you have to build the bridle first and put it on the horse before the horse takes off. You'll never catch an exponentially faster horse at a human regulatory and legal speed. First thing is, there's no slowing down. You're not going to slow the horse down. Innovation isn't going to slow down. As soon as the thing gets to the point where it can basically optimize itself, human timescales are becoming irrelevant. So, that's not even an option. You have to do it beforehand.

With that being said, I'm not a fan of the Skynet, Ultron theory of a monolithic AI that figures everything out and takes over the world. I don't think it's a good theory. I don't think it's a realistic fear. I don't even think it's very likely. Nick Bostrom, even Elon and the rest have been very phobic about this. I think there's plenty to be concerned about given the current types of AIs, which are essentially algorithms on steroids that can't understand the effects that they cause. If you have a machine process in software that has some sort of optimization function but doesn't know what cause and effect is, then yeah, you really do have a concern. It doesn't mean that that thing can just suddenly take over the world. These are sci-fi horror stories for a reason: because they're dramatic and provocative. But I don't think the engineering probabilities are there. Let me just frame it that way.

What we recognized when we started to look at the problem was that if you have exponential technologies that converge into a network, you have exponential exponentials. If you have 100 billion IoT devices, and you've got self-replicating AI, and we just had a breakthrough in fusion, the cost of energy may begin to plummet over the next few decades. A handful of these events completely change the dynamics. When you have virtual environments that can be as hyper-realistic and immersive as anything that we experience today, when the interface becomes a brain-computer interface — so you're not even having to deal with interfaces that would reduce that hyperrealism — there are so many ethical and value-based questions that emerge from that, that it boggles the mind.

One of the two things that Dan and I did in the very beginning, when we started this up in 2017, was, we had an aha moment where we cracked the code — the matrix code, if you will — of how all these things could become interoperable. The minute we realized they could become interoperable, the question was, how could they be governable? Because the power rating goes through the roof. Then the question became, well, does the thing that makes them interoperable make them governable? What I mean by that is, is there a way to design a network unlike the internet, where everything is open? Think about this today. The internet is open. The World Wide Web is open. Crypto wallets are open. Your email inbox is open. Everything can just be spammed and hit by any other server at any point in time, and there's no way to stop it. It's like a million people knocking at your door. There are laws against people coming onto your property. There are laws about people getting into your car. There are laws about people entering your house, your business, whatever. We don't have that in our computer networks. We have to add second-level security and key systems and cryptography and all this other stuff, because the networks are designed to be open.

What you really want is something like the Zero Trust architectures that are emerging now, which is: I don't want 99% trust that nothing comes in. I want 100%. Nothing comes in without a credential showing that it has the right to access some part of the system. For every single move it makes, there has to be permission. You have to reverse the entire paradigm and say that everything has to be approved. That means you could say, "Well, for all my stuff, I want it to be open." That's fine. But it may be that we don't want drones to be able to fly within certain ranges of airports. Maybe we don't want them to be able to have cameras on when they're flying over nursery schools. Maybe there are AIs that should have to prove who they are before they can access certain servers. Software systems will do the same. Holographic information can't be presented in certain locations because it's inappropriate, or it's harmful, or dangerous, or it's a threat, or it's a lie, or whatever.

How do you govern these things? One of the answers is, you have to build in security and trust. That's the default layer. Permission becomes a part of that. That starts to establish, at least without going into a lot of detail, a premise upon which you could say, okay, now we can build a system where the benefits of exponential things come from their interoperability and their network, but every interaction is a function of having the right to perform the activity. So then, you can make them compatible with laws, or with rules, or with policies, or any condition that you might set. You have young children; we have to set rules for them. Here, you can have a popsicle. No, you can't have that one; they'd throw a fit right there. You have to put the cookie jar up high. There are things that the world does to mediate the actions and demands of other things. In these systems, that has to become a prerequisite. Then you avoid a lot of these, like I said, sci-fi dystopic outcomes where Ultron uploads itself to the internet, downloads all the information, and then suddenly can take over any number of robots, or drones, or systems. No, you have to ask all along the way. You don't have a certificate, so you can't even get into that system. Then you can talk about cryptography and all the rest of this stuff as additional key values, key signatures, and the rest.

There's a whole lot of factors. The IoT itself is actually a huge problem space because of the way the protocols are designed. To tie back to your question on the IEEE, we spent the first few years designing essentially a standard protocol for communication, for the interoperability and governance of digital systems, especially ones that can interact with the physical world, collecting information through the IoT or taking action through some robotic system or actuator. We went to the IEEE and said, "Here's our 80-page specification." They said, "We will help you set this up to become the global standard for the mediation of cyber-physical activities for all systems on the planet." That was part one.

Part two was, Dan and I wrote a book called The Spatial Web, which is this idea of a network of spaces where intelligent digital activities are occurring: instead of a web of pages, a web of spaces. We described our entire vision for the world. We described the threats and challenges we saw. We described the failings of Web 2.0 and the architectural design shortcomings that we saw, and the need for a new set of standards upon which we can build new infrastructure and new applications, and get the benefits of intelligent exponential technologies while being able to get the bridle on the horse before it takes off.

Erik: Interesting. Well, it's good to see organizations like the IEEE getting involved here at an early stage, because they have a lot of credibility as good actors on the regulatory stage. This is a thorny problem. I hope that you and everybody else who's working on this make some significant progress. I imagine this will be a long-term struggle to figure out how to get all the actors in the market engaged.

Let's assume that we are able to solve this issue and generally develop AI systems that are acting in the interest of humanity, or at least of the people that are managing them. We're going to then have another — you could call it another problem, or another question at least — which is this: when these AI systems, especially the more general-intelligence AI systems, are developed, are we looking at a future where we have a handful of systems that are developed by OpenAI, or Google, or maybe Alibaba? Or are we going to have a thousand entrepreneurs building little AIs that are collaborating with each other in a network? Maybe, in an integrated way, becoming something that approaches an artificial general intelligence, but doing it in a much more decentralized way? Do you have a view on what is more likely, or what is preferable as a social outcome? This might also lead a little bit to the vision for KOSM. I'm interested in seeing how that fits into this space.

Gabriel: Yeah, a couple of thoughts. Number one is the idea of a monolithic AI, the artificial superintelligence, which is the described end state of AI: you get to AGI, where it's human-level intelligence, and at that point it can update itself. Then a lot of the stories are like, well, does it take a minute or 100 years for it to become smarter than all of human intelligence on the planet? No one knows.

Ray Kurzweil talks about this as the singularity. Around 2045 or so is the prediction, the point where you can't see past that line, like the singularity of a black hole. We don't know what happens the next second after AGI at that level actually clicks in. There are tons of debates about whether that's even possible. If it's possible, what is that line? What would happen after that point? There's a lot of ink, I suppose digital ink, being spilled on that topic. There was just a debate last month, in December, with Noam Chomsky, Gary Marcus, and a bunch of the world's thought leaders in the AI space, called the AGI Debate, which was fantastic. They tackled a lot of these concerns and narratives. They were certainly talking about the shortcomings of current AI and its unlikeliness to get to AGI. The analogy there is that, yes, we built a very tall ladder, and we're building taller ladders. What an amazing view we can get from something like ChatGPT. But you can't just extend that ladder to the moon. You actually have to have a totally different approach to get that far out.

I think, first of all, there's a delineation around what current AI is, which is essentially a machine process in software which can produce intelligent outcomes and outputs — which I find to be spectacular and amazing and very exciting — but which has very significant shortcomings. These systems are not explainable. They're not transparent. You can't moderate them, really. You don't really know how they've arrived at any sort of decision. They are really statistical parrots at scale. Yet when you put one in a Tesla, you say you expect regulators to approve the ability for you to take a nap in the back while it drives you around. Elon pops up every 12 months, like Groundhog Day, and says, "Full self-driving is coming within this year." No, it's not. The limitations of deep learning are being felt all over the place, even as we're seeing, on the opposite side, some of the realizations of the power of the current approach to what I think is still considered narrow AI.

When you shift to the concept of artificial general intelligence, there is a distinction that we believe is important. That distinction is that the system must be able to have a perspective. It must have a model of the world inside of it upon which it reasons, upon which it can plan, upon which it can have a set of beliefs about what might occur from its next action. This leads to things like curiosity and creativity. So, you can't ask ChatGPT, "Hey, what do you think my next thought is? We've been talking about something for an hour." It'll be like, "I don't do that. I can't do that." But what you want is something that says, "Well, I'm curious about you. Why did you ask that question?" That's not what it's capable of doing. It can't do that. It can't actually take in the information if you give it to it, because it's not part of the training data.

The problem you have with the training-data, or big-data, approach to AI — which is the current state of the art — is that whatever's in the training data is all that the system is able to learn. And so, if the car has learned that stop signs are on the side of roads, and it's driving on the freeway, and there's a truck moving stop signs from point A to point B on the freeway, what does the car decide to do? Stop on the freeway? It doesn't understand the full context. It's not a knowledge-based system. It's just an information-based system. It doesn't have the relationships. It doesn't understand the nested nature. It doesn't even know what driving is. It doesn't know what cars are. It doesn't know what a freeway is. It knows what lines are, and it knows what speeds are. It knows some of these things, but it doesn't know the rest of the context. And so, it hasn't really achieved knowledge.

This is the distinction between the power of big data — which is really what's gotten us here today — and knowledge, which is what you and I have. Having a two-year-old or a three-year-old, we've watched them learn the concept of a cow: what a cow looks like, what a cow sounds like, what a cow eats. They can tell apart a stuffed-animal cow, a real picture of a cow, and a cartoon version of a cow. They map all these things together. These systems have a real limitation around that. When one says, "Hey, this person should or shouldn't get credit," maybe it's because their name is an uncommon name. This is an African name; this person shouldn't get credit, because most of the data I have shows that John Thompson is a name with good credit. When it says amputate the leg, and you ask, "Well, why?", it can't tell you. Those are the limitations we're dealing with today: black boxes built on big data. They're essentially an industrial process. As for the future, will it be many AIs and all the little ones? What you're seeing right now is these foundational layers, which are coming with the large language models from OpenAI and Google and others, those that have access to big data and can afford to crawl it, providing, let's say, a level-one intelligence layer.

On top of this, you're already starting to see a bunch of applications. Someone building something like jasper.ai for copywriting and marketing is absolutely fantastic. Then there's Rally, where you're not going to go to LegalZoom and look up contracts. You're going to say, "Here's the legal situation that I want a contract for," and it'll generate a contract for you. There are hundreds and then thousands of applications that are going to use this base layer of AI capabilities from big companies, much like we run on AWS today or use Google services for something. I think that will be very likely for a lot of the, let's call it, infrastructure-level intelligence. Then you have all these really cool applications, which will come from a million and one entrepreneurs over the course of the next decade. None of them will be AGI, but they'll all be smarter than every single software application we have today. They'll have exponential value, and they'll essentially be intelligent power tools for people working in the marketing field, or the medical field, or the fitness training field, or the mental health field. They'll essentially be assistants that are helping to guide and inform and educate. There'll be personal ones. That'll be absolutely fantastic.

I think you're going to see a 10x to 100x increase in effectiveness in interactions with digital systems because of the breakthroughs of things like ChatGPT. For designers, there's DALL-E and Midjourney and all this amazing stuff. It fundamentally changes things. If you need to be a skilled carpenter in order to use a certain set of saws, but then a skill saw comes along that lets lots of people who aren't carpenters cut wood, then lots of people can become designers without having to learn how to use the original tools. There's always a bias, because we consider that to be a skill. Well, power tools democratize that capability. It's not about people competing with AI; you're competing with the people who are going to adopt the AI. If you decide not to, you can saw as fast as you like. But the guy next door who just got a skill saw is going to be able to cut more wood.

Erik: My brother is playing around with drawing now. He's an engineer, but he's a great artist. He's playing around with his skill set. I have a little game now where he'll show me something, and then I go on DALL-E 2 and see, okay, can I replicate that within 10 seconds? His art is still a bit better than what DALL-E 2 is producing generally, or at least it can't emulate exactly what he's creating. But it is a power tool that allows somebody like me, who has really no artistic capability, to generate things that are pretty reasonable in seconds. It's incredible. So we have this world, and I like this concept of power tools, where a bunch of narrow AIs are solving specific problems. I think, for a lot of people, that is a future they would want to live in: a bunch of narrow AIs that are completely controlled in scope, so we don't have to deal with this general AI mess. But of course, there's the question of whether that's feasible. Where does Verses, where does KOSM, fit into this ecosystem?

Gabriel: What Apple did in the smartphone era was build a developer ecosystem around the platform. They developed iOS, which was a new way of interfacing with a computer: since you have a computer in your hand, instead of typing on a keyboard or using a mouse, you touch it. It had GPS. It had accelerometers. It had this whole package of, essentially, IoT devices inside of it. This enabled things like Uber, and DoorDash, and all these other services, Instagram and Facebook. Literally, that whole Web 2.0 era is a product of mobile and of the smartphone.

But when Steve Jobs came out in 2007 and said, here's the iPhone — which was like an iPod with a phone that you could go online with, those were the three main capabilities — there were, I don't know, 12 apps: calculator, maps, some email, and so on and so forth. The next year, they came out and said here's the App Store, 500 apps. Now there are like 5 million apps, from millions of developers. Before that, software was mainly products developed by corporations or companies. There were maybe 1,000 programs you could put on a Dell computer in 2005, and you probably wouldn't put on more than 30 or 40. Now there are millions of mobile apps. The barrier to entry was lowered to where high school kids were able to develop apps. Because they did that, they democratized the development of applications, and the smartphone became the best interface to the user. So, you've got millions of developers that produced billions of users, that produced trillions of dollars in value, on a 15-year timeline.

What has just happened — you can think of the ChatGPT moment as the crossover just a few weeks ago — was that a million people started using an app faster than any app or website had ever gained traction. Within five days, it was up to two million people. That shift is where, instead of saying, "I want to go on Google. I've got a question. Give me search results, and I'll go try to find the right result; you do your best job to give me the best result, trying to anticipate what I really want," it's, "I can just go ask a question to a chatbot, and it gives me the answer." I don't want to search and get search results. I want to ask a question and get an answer. So, everyone went online and started asking questions and getting answers, having conversations, and trying to make sense of things. "Give me a fitness program, given that I want to lose this much weight, but I have a gluten allergy. Give me an outline for what my workouts are each day of the week, plus recipes for food, and so on, under a certain budget." There you go. It just kept kicking this stuff out.

So, what's happened? The moment when water goes from 33 degrees to 32 degrees, it completely phase shifts into a different structure. I think we're seeing that moment in computing happen right now. The next era is now going to be about AI applications, smart apps for everything. Not just apps on a smartphone, but smart apps for everything, all basically driven by artificial intelligence. What KOSM is, KOSM is essentially doing for AI what Apple did for the smartphone. We're building the world's first AI operating system, a platform and ecosystem for developers to be able to develop AI applications for anything. Because they're built on the proto version of the standards we talked about, those AI applications can be interoperable. They'll be compliant with the standards.

And so, we've got this whole head start, because we essentially have a language that we invented that makes it quite easy to design and develop the kinds of knowledge structures upon which AIs can run. That doesn't require big data at all. You can basically build AIs on our system without big data. If you want to bootstrap with big data, great. But if you've got your own personal information, like the data coming off of my Oura Ring, or my Apple Watch, or any other sort of dataset, or I'm an enterprise with my own corporate data, that doesn't work with GPT. 99% of the web is behind a password. You and I have information in iCloud, on LinkedIn, on Facebook, on Twitter; it's not just freely available. Most of the web, 99% of it, is not available to just be digested. It actually has terms and conditions for using that material; it's illegal to crawl it. And they certainly can't break into an enterprise platform and take its data. These large language models don't have access to that information. They don't know anything about it. They can't do anything with it, although they do have general concepts. That doesn't help if I'm an engineer who needs an AI to assist me in changing a part on an airplane, that particular model of an airplane for Boeing. That information isn't available to ChatGPT. And so, it's not going to be able to walk me through that process the way it might be able to tell me what kind of food to cook.

So, there's a whole universe of information that can be shared in what we call knowledge models. KOSM stands for knowledge-oriented software model. Using our language takes you past the crude-oil refining process that large language models perform on raw big data to try to get to information and understanding. We're able to start at a higher level and form relationships between all these data in this very common, universal, standardized structure that lets you immediately start to ask questions, infer outcomes, and reason and plan on those datasets. That's what KOSM is essentially doing. Verses is intending to do for AI what Apple did for smartphone apps: democratize them.

Erik: I'm imagining two different ways that this could work. Let me explain how I'm thinking through this.

Gabriel: Please.

Erik: One is, I have my micro dataset, and I'm at a corporation managing factories. I need a toolkit, a set of frameworks, to quickly build AIs that are specific to my purpose and that ideally also leverage broader datasets that can support training, et cetera, because maybe I don't have sufficient data to solve that problem internally. So, there could be a platform to support this.

Then there would be the second alternative, which would probably be the closer analogue to the App Store, which is: okay, I'm going to use DALL-E 2 to create an image. But then the face in the image is all screwed up. So, I'm going to use a different algorithm to fix the face, because it's customized for faces. Then I want to animate the image, so I'm going to use a different algorithm to animate it. Then I want to add voiceover, so I'm going to use a different algorithm for voice. Then maybe there's some custom thing that I build internally that adds some magic to the final product, and I add my layer on top. Then you end up with five different AI apps all contributing to the final product, but they're all from different app developers. Right now, I know artists are doing this, but it's a very ad-hoc process. It's like alchemy to them. It's not an app store. Are you going in both directions? Is there one area where you see more potential?

Gabriel: I think there'll be quite a few platforms that emerge over the next 24 months that do this sort of chaining, like what you're talking about. Those are going to be really cool and very popular, and will have multibillion-dollar valuations overnight. Right now, you're seeing all these little toy versions of it, people just throwing stuff up on GitHub. Essentially, they're just automating a copy-paste process in one form or another, putting some buttons on so you can do outpainting and inpainting. It's very cool, but it commoditizes within the hour: one does it, there'll be five more. That's not a great barrier to entry for a business. Although I think that capability will be widely adopted and very useful. You could do that on our platform as well, and there'll be some benefits to that.

There's another piece of the puzzle, which we hinted at during the course of this conversation, which is that the first thing you can make on KOSM is what we call a KOSM, since it is a knowledge model. That knowledge model can be updated and enriched, and it is not limited to text. You can put in unstructured or structured data. You can put in IoT data. You can put in, basically, temperature data. So, if you're talking about modeling a digital twin of a factory, or a warehouse, or a port, or your home, you can have inputs from every one of your systems, building out effectively a knowledge model with spatial information and physical data. It all gets correlated together into a coherent knowledge structure, which is not possible for, say, something like GPT, which is a text-based corpus. DALL-E is doing it on a different sort of corpus.
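As a rough illustration of the kind of structure being described here, a sketch in Python follows. This is a toy model, not the actual KOSM language or API, and every name in it is hypothetical: the point is just that spatial entities, typed relationships, and live sensor readings can be correlated in one queryable structure.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Entity:
    """A thing in the modeled space: a shelf, a sensor, a worker."""
    name: str
    position: tuple          # (x, y, z) spatial coordinates
    properties: dict = field(default_factory=dict)

@dataclass
class KnowledgeModel:
    """Toy knowledge model: entities plus typed relationships."""
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)   # (subject, predicate, object)

    def add_entity(self, e: Entity) -> None:
        self.entities[e.name] = e

    def observe(self, name: str, key: str, value: Any) -> None:
        """Fold a new sensor reading into the model."""
        self.entities[name].properties[key] = value

    def relate(self, subj: str, pred: str, obj: str) -> None:
        self.relations.append((subj, pred, obj))

    def query(self, pred: str) -> list:
        """Ask which entity pairs stand in a given relationship."""
        return [(s, o) for s, p, o in self.relations if p == pred]

# Build a tiny digital twin of one corner of a warehouse.
model = KnowledgeModel()
model.add_entity(Entity("freezer-1", (0.0, 0.0, 0.0)))
model.add_entity(Entity("sensor-7", (0.0, 1.0, 2.0)))
model.relate("sensor-7", "monitors", "freezer-1")
model.observe("freezer-1", "temperature_c", -18.5)   # live IoT reading

print(model.query("monitors"))   # [('sensor-7', 'freezer-1')]
```

The same `observe` call works whether the value is a temperature, a camera detection, or a text annotation, which is the "any input" property the conversation turns to next.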

But you want that general capability to take any input. What's called sensor fusion is one example. You and I are able to make sense of our environment right now even though we're pulling in inputs from a bunch of different systems. My skin can tell what the temperature is in the room. I've got two eyes synthesizing their inputs. I'm taking in audio through my ears, a completely different kind of input: one is using light, one is using literally sound pressure. I'm synthesizing all of these into a single world model in real time. Our platform allows you to do that; our language, the standards language that we developed, is what it's based on. Everyone is going to be able to do this. That part we've already set up to basically be free, and we build out a monetization layer on top to make it easy. What was impossible, the standards make possible. KOSM then makes it easy.
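Sensor fusion itself has a standard textbook form. As a minimal illustration (this is the classic inverse-variance weighting scheme, not Verses' implementation), independent noisy readings of the same quantity, say room temperature from a precise thermometer and a rough skin estimate, can be combined so that the more certain input dominates:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent noisy estimates.

    estimates: list of (mean, variance) pairs, one per sensor.
    Returns the fused (mean, variance); lower-variance (more certain)
    sensors receive proportionally more weight, and the fused variance
    is always smaller than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, 1.0 / total

# A precise thermometer (21.0 C, var 0.25) and a rough skin estimate (24.0 C, var 4.0).
fused_mean, fused_var = fuse([(21.0, 0.25), (24.0, 4.0)])
print(round(fused_mean, 2))   # 21.18 -- pulled toward the more certain sensor
```

The same weighting idea generalizes to the eyes-plus-ears example in the conversation: each modality contributes in proportion to how reliable it is.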

The second piece of that comes once you've got that knowledge model. We just released a paper last month on what we think the future of artificial general intelligence is. It is an ecosystem, a network of multiple different intelligent agents: hybrids even of human intelligence, algorithmic intelligence, and what's called active inference-based intelligence. That's adaptive intelligence with a model of the world that can update that model based on new information. Say I'm driving down the street and the stop sign is in the back of the truck ahead of me. I figure I shouldn't stop. I guess you can put stop signs in the back of trucks; okay, let me just update that. Then you can push that update through the entire system. That prevents all kinds of accidents when you have a network structure of intelligence.

This network of AIs, with interfaces to the IoT, to databases, and to robotic systems, is more or less the Spatial Web. We also recently announced that the world's top computational neuroscientist has joined the company as our chief scientist. His name is Professor Karl Friston. Karl Friston is known for a thesis that is to intelligence what Einstein's breakthrough of E=mc² was to physics. What he's done is describe how actual intelligence has to model the environment, guess what type of change needs to happen, make that change, and then see whether or not it works. So, you're either trying to update the world or update your model. If you can perform that loop, that's a genuinely intelligent system. That's the kind of thing we're building into our software, which we call KOSM agents. These KOSM agents will operate on those KOSM models, and they'll be able to update them on an ongoing basis in real time.
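The model-guess-act-check loop described here can be caricatured in a few lines. The sketch below is deliberately minimal prediction-error-driven belief updating, not active inference proper (which minimizes variational free energy over a full generative model); all names are illustrative.

```python
def update_belief(belief, observation, learning_rate=0.3):
    """Move the internal model toward what was actually observed.

    The gap between prediction and observation (the prediction error)
    is what drives the update; a perfect model produces zero error.
    """
    error = observation - belief
    return belief + learning_rate * error, abs(error)

# The agent's model says the stop sign sits at position 0 (a fixed street
# sign). Repeated observations place it at 3 (it's in the back of a truck).
belief = 0.0
observations = [3.0, 3.0, 3.0, 3.0]

errors = []
for obs in observations:
    belief, err = update_belief(belief, obs)
    errors.append(err)

print(errors[0] > errors[-1])   # True: error shrinks as the model adapts
```

Each pass around the loop either changes the world (acting) or, as here, changes the model (perceiving), which is exactly the two-way update Friston's framework formalizes.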

The second thing that we're offering here, as we go into '24, is the ability to have these agents doing that kind of updating for you. These are agents that can learn. These are agents that can adapt. These are agents that can understand and share bits of knowledge with each other across the KOSM network, which is just a kind of subnet of the Spatial Web, in a standardized format that is interoperable and governable, and upon which I expect the laws of nations to actually be drafted. Because this is the only way you can translate laws of the land into something that is machine-readable and machine-executable. We've been testing this with the European Commission with AI-powered drones for the last three years. This is not theoretical. We've proven and borne this out already. In fact, we're the only ones doing it that I'm aware of on the planet. We're working with a lot of universities and legal and ethics groups out of Europe in order to test this.
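To make "machine-readable and machine-executable law" concrete, here is a hypothetical, simplified sketch of a regulation encoded as data that an agent can evaluate before acting. The rule format is illustrative only and not the standards being described (the 120 m ceiling echoes the EU's open-category drone altitude limit):

```python
# A regulation expressed as data a machine can evaluate -- hypothetical
# format, not an actual HSML/standards encoding.
NO_FLY_RULE = {
    "id": "eu-drone-altitude",
    "description": "Drones may not exceed 120 m altitude over urban areas",
    "applies_to": "drone",
    "max_altitude_m": 120,
}

def check(rule, agent_state):
    """Return (allowed, reason) for a proposed agent state."""
    if agent_state["type"] != rule["applies_to"]:
        return True, "rule does not apply"
    if agent_state["altitude_m"] > rule["max_altitude_m"]:
        return False, rule["description"]
    return True, "within limits"

# An agent proposes climbing to 150 m; the encoded law rejects the plan.
print(check(NO_FLY_RULE, {"type": "drone", "altitude_m": 150}))
```

Because the rule is data rather than prose, the same check can be run before every action, logged for auditing, and swapped out when the jurisdiction changes, which is the governability property being claimed.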

We're lining up all of these components at this crossover point as a company. We're also a publicly traded company, a small-cap company that most of you have never heard of, but I expect by the end of the year we may have garnered some new attention. What we've done is lay out in our white paper, more or less, what we think is an eight-year path to human-level intelligence in software systems. Two years from now, we expect we'll be able to demonstrate what's called sentient intelligence. That's a basic level of adaptation. Then at four years, there's what's called sophisticated intelligence. Then you get essentially to sapient-level intelligence. In the research lab right now, we're already seeing evidence of these capabilities. The things that lead to that are not theoretical for us; we're actually testing and seeing some of those effects now.

It can take a while to scale this to product level, and we're not ready to show things until they're ready. But consider the opportunity of what we call ecosystems of intelligence, which are really a shared network of intelligence: the activities of these things can be trusted and transparent, you can query them and they can give you answers, and you can constrain and restrict their activities, even what information or knowledge they have access to or what they can share with others. That gives you the ability, we hope, to build and get the benefit of these exponential technologies that we think are so cool in our sci-fi stories, without the kind of dystopian outcomes that come from the runaway power of these technologies.

Erik: Okay. Interesting. There's a lot of different things I want to riff off of here. Maybe the first one would be: how does Wayfinder fit into this? Because Wayfinder is very much a vertical software application. I'm curious because on your website, you have KOSM, which is this horizontal general AI platform, and then you have a vertical, which is AI-assisted order picking, I'd say a very traditional problem for a warehouse to solve. Is this intended as an illustration of the kind of application that KOSM would build in the future?

Gabriel: That's right. Wayfinder actually runs on KOSM, which runs on a proto version of the standards today. Wayfinder was a vertical proof of concept. We went into an industry we knew nothing about and basically said, "Okay, can AI systems essentially operate in a physical, real-world environment and guide warehouse workers better than any other state-of-the-art system?" And so, we did this. It's a bit like Apple putting a calculator on a touchscreen when the iPhone first came out. Nothing Apple put on that phone was really innovative. All the real innovation in the apps came from everyone else, after Apple built that infrastructure out first.

Wayfinder was a proof of concept. It turned into a $25 million contract for us with our very first private customer, because we were able to demonstrate 35-plus percent productivity gains, millions and millions of times over. That then turned into a resell agreement, which we announced just two months ago, with Blue Yonder, the largest warehouse management system provider in the world: 3,000-plus customers, 80 countries. Every major store at your local mall probably runs its distribution centers around the world on their warehouse management system. They saw what we were doing and said, "We want to resell this to all of our customers." Just by launching one intelligent app in one vertical, in an area we knew nothing about, we got an eight-figure contract from our first customer. Then we got the biggest platform provider on the planet to basically say, "We're going to take this and resell it to our customers." We were able to demonstrate genuine efficacy of the capabilities technologically, but we were also able to show market traction and what I consider hyper-credibility, because the biggest whale in the sea basically said, "Cool. We don't have this. Our customers want this. Can we take this and resell it?"

As we're building out the horizontal capabilities, which will certainly take a bit longer, we built our own vertical capability on top to demonstrate them. That's turned into a whole business for us, which I think makes us a good bet from an investment perspective. Especially as a public company: if this is the beginning of the AI wave, the opportunity for the public to invest is almost nothing, because all of the big investments come from venture capital and private strategic investment. We've seen $10 billion potentially going into OpenAI, right? All those little vertical plays you're talking about are going to be funded by Kleiner Perkins, Sequoia, and everyone else. They're going to make 90% of the profits for the next 5 to 10 years before the public even gets the first bite of the apple.

But Verses is a public company. So, if you, even as a retail investor, wanted to play the game, there's hardly anywhere else to go. We're the only, I think, pure-play horizontal AI opportunity in the entire retail market. That's a very interesting position to be in, especially as I think the markets start to bounce back in 2023 and we get lift-off through the course of the year and into '24. I think we're really well-positioned, to the point where the core capabilities that we invented, we gave away as a public utility through the standards. We took the company public so that the public can actually be owners in the process with us and benefit from the growth of catching this next big technological wave. That lines up with the kind of world that we want to live in. So, we're demonstrating our values literally by the way we're architecting our technology and our company.

Erik: Okay. Interesting. That's an incredibly successful pilot project. Congratulations on that. Many of my corporate clients would be envious.

Gabriel: Thank you.

Erik: Okay. Interesting. Maybe just to wrap up here. It sounds like you have good traction already with Wayfinder. Nonetheless, this is just a monumental challenge, right? We have incredible technical challenges here, because it's basically a series of breakthroughs that are required, right? This is not an incremental set of improvements. There are regulatory challenges, because, I mean, you said you want governments to be writing laws on the platform and embedding them into code so that they can be deployed automatically. Obviously, you're going to have to deal with bureaucracy. Then you have the business challenges of companies protecting their own businesses. Companies have legacy processes, legacy systems, et cetera. So, you have these different challenges.

When you look at the challenge landscape in terms of getting this to market, what do you anticipate are the biggest risks among those? Or maybe there are things I haven't mentioned that you're more worried about. Then maybe you can share how you're addressing them, whether it's through partnerships or otherwise, to make sure you can overcome some of these human challenges?

Gabriel: Great question. I think most startups have a problem where they've got supply, a product to sell, and they don't have demand, so they have to go generate demand. What we have is an overabundance of demand and not enough supply. There are hundreds of companies on our lead list, most of them multibillion-dollar Fortune 500s and Global 1000s that we've spoken to in the last two years and that want to use our technology. We haven't made it easy for them to do that yet, because we're the only ones who know how to fly the plane. Hence needing to build out the capabilities this year that then allow third parties to do that.

The other pilots in our pipeline now are all multibillion-dollar companies; you'd recognize almost every logo. A lot of that came from the fact that we wrote a book in 2019 that got picked up by CIOs and CTOs around the world, in part because Deloitte ran an entire article from their research team in 2020 describing the work we're doing around the idea of the Spatial Web as the future of computing. Literally, the title is "What Business Leaders Need to Know About the Future of Computing." This went out to all the CIOs and CTOs in Deloitte's stable, which is pretty much the Global 1000. The proverbial phone started ringing off the hook. CIOs from the largest companies in the world were reaching out to us on LinkedIn: "What are you guys doing over here? I've got a spaghetti mess of data and a 40-year-old stack of technology that I can't get anything to work with."

We don't see an adoption problem. What we see is an embarrassment of demand. Take the EU: we didn't go to the European Commission and say, "Hey, we've got a great idea. Can we do a test?" They said, "We're trying to find people to solve this problem, because otherwise we can't have drones flying around in our cities." So, we basically put in a bid for the grants alongside other companies around the world, and we won, because we're the only ones with an underlying language, which we're building as a standard through the IEEE, that's teed up to do this.

The working group members in the IEEE are from some of the largest corporations and governments in the world right now. In almost every case, they're pulling. We're not pushing. I would say that this is a zeitgeist moment and opportunity. Our approach was, "Let's create awareness around what we're doing," instead of, "Let's create a sales pipeline." The sales pipeline just literally formed by itself. Then the credibility we get and the interest that people have because we're taking a standards-based approach, particularly for large corporations and governments, means that we're not creating a lock-in scenario for them. It's not like, "Hey, come buy our solution. It's the only way and by the way, we're the new Microsoft. We're going to lock you in for 30 years." It's a standard. If at any point, you want to kick us out — provided they adopt us, I guess, in the first place — there's a path for them to be able to continue to operate. That's a bet that they're interested in making.

Now, that is not the same as pulling in a bunch of small fish, which you can show very quickly. Whales are hard to pull in. We need Deloitte and Accenture and others to act as channel partners for us and do these implementations with these large customers. The pilots we're doing right now, even if we get eight figures for each, maybe we can do four or five a year. That's not a scalable business for us, right? But it's worth noting that the vice chairman of Deloitte at the time our article came out, who said that the Spatial Web was going to be bigger than the Internet, has left and joined our board as the chairman of our company. The alignments that we have with respect to business, universities and academia, government alliances, standards bodies, and then the type of pipeline we have from a commercial perspective, on top of our technological breakthroughs... In 30-plus years of being in the technology business, I've never seen something queue up this way. We're pretty smart guys, but a lot of it is just a function of the timing. This is like back in the early '90s, where everybody, regardless of industry, all wanted the same thing: how do we get online?

So, the pressure is building. Peter Diamandis is one of the advisors to our company; he's one of the best-known futurists in the world. He put out a tweet a few weeks ago that said there are going to be two kinds of companies at the end of the decade: those that embrace AI and those that don't exist anymore. I think the pressure is already there. The competitive pressure is there. The need to digitally transform is there. It's really hard, really messy, and really difficult. The capabilities that we've invented and that we're commercializing make it easy. That "easy" piece is where I think our business thrives. That goes back to the Apple example. Apple made it easy. Suddenly, high school kids could design apps, when you used to need a bunch of PhDs or senior engineers working in a corporation.

Before the personal computing era, computers were the size of rooms. You needed a team of scientists just to run one computer. That's what AI is like today: you need a team of data scientists to build an AI with a bunch of big data, and it's only done by big tech. We're making it so that you don't need that big data. You just need a little bit of knowledge, and our system can help you immediately extract value and reason over that knowledge, whether you're a large corporation or an individual developer. I think as we move into later this year, '24, and '25, that's going to become increasingly apparent. We're hoping our actions speak louder than our words at that point.

Erik: Yeah, fantastic. Well, it's a great vision. I really wish you success, Gabriel.

Gabriel: Thank you.

Erik: I think that's a good thought to end on here, that there are going to be two types of companies: companies that embrace AI and companies that no longer exist. For those of our listeners who are interested in learning more about what you're doing and maybe exploring collaboration, what's the best way for them to reach out to Verses?

Gabriel: We are online at verses.ai or verses.io. You can just come to the website; the forms will direct you to the right spot. I'm on Twitter, but I'm not very involved. Twitter is a dumpster fire, so I don't go there much. Maybe they'll pull it off, but it's not looking great. We're also trading on the Neo Exchange in Canada under the ticker VERS, and in the US under VRSSF on the OTCQX. More exciting news to come over the course of the year. We're really, really pumped for 2023. I really appreciate the opportunity to speak with you here today, Erik.
