Peter Voss

Founder (Aigo.AI), AGI, Cognitive AI, Robotics

Episode Summary

In this fascinating episode, Manav interviews Peter Voss, the CEO and Chief Scientist of Aigo.AI, who co-coined the term AGI (Artificial General Intelligence). Peter dives into the true meaning of AGI, contrasting it with today’s popular AI models and explaining why most current approaches fall short of real, adaptable general intelligence. He details the three waves of AI development, advocates for cognitive AI as the path to AGI, and discusses his team’s unique approach using an integrated neuro-symbolic architecture. Peter also shares thought-provoking insights on the societal impact of AGI, from revolutionizing cancer research and personal assistants to reshaping the job market and economy. The conversation closes with Peter’s advice for entrepreneurs in the AI space and some recommended books for anyone interested in intelligence and the future of technology.

Transcript

Manav [00:00:00]:
Ladies and gentlemen, welcome to another episode of Emerging Founders with Manav. Today we have a guest who literally coined the term AGI, Artificial General Intelligence. He's a mastermind who took a software company from his garage to a 400-employee company, taking it public in just seven years. He's a serial entrepreneur, engineer, and AI pioneer with a lifelong mission to bring human-level AI to the world. Currently, he's building a company with a direct path to AGI called Aigo.AI. Please join me in welcoming Peter Voss, CEO and Chief Scientist at Aigo.AI. Nice to have you on the show, Peter. How are you doing today?

Peter Voss [00:00:39]:
Great, yes, thanks for having me.

Manav [00:00:41]:
Amazing. I want to ask you directly about AGI, because it's a word that's been thrown around so much on X, on Twitter, lately, with a lot of companies claiming to have a path to AGI. What does that even mean, AGI?

Peter Voss [00:00:58]:
Yeah, the term has really changed a lot since we coined it. Three of us coined it in 2002, and I'll give you a little bit of the history of why we actually coined the term. When the term AI was coined 69 years ago, that long ago, the original idea was to build thinking machines: machines that can think and learn and reason the way humans do, that can learn a wide variety of things the way a smart human can. That was the original intent. And they figured they could solve this problem in two or three years. Now, of course, it turned out to be much, much harder; here we are almost 70 years later. So what happened? The field of AI really turned into the field of narrow AI, where you pick one particular problem that requires human-level intelligence, and you then figure out how you can write a program to solve that problem. A perfect example is Deep Blue, the IBM world-champion chess program. But there's a huge difference between the original intent of building thinking machines and building narrow AI, because in narrow AI it's actually the external intelligence, the intelligence of the programmer, that solves the problem: how to get a computer to play chess, what particular algorithms to write, how to optimize. And then you want to do something else, container optimization or medical diagnosis, and again it's the intelligence of the programmer or the data scientist. So the field of AI really didn't live up to the promise. So in 2002, a few of us got together and said we thought the time was right to go back to the original dream of AI: to build thinking machines. We decided to write a book on that topic, but we felt we wanted a different term, and we came up with artificial general intelligence. In fact, the G in AGI also corresponds to what cognitive psychology calls little g, which refers to IQ or general intelligence.
So it's having a system that can by itself figure out how to play chess, or how to do container optimization, or do medical research, or whatever. That's really what AGI is: a system that has the general ability to learn lots of different things, largely by itself. That was the meaning of AGI. Now, of course, in the last few years with ChatGPT and the massive amount of money coming in, the term AGI has picked up again. People picked up on it and say, that's what we do now. It's turned into a big mess because it's become a marketing term, a term used for funding: "we can do AGI." But then you have ridiculous statements like Sam Altman saying, oh, we'll have AGI soon, but it's not going to be a big deal, which makes no sense. If you have AGI, it's a big deal. So there's this confusion about what AGI really is. AGI requires a system that can adapt to changing circumstances. We can talk more about that, but that's also the reason why large language models cannot be the path to AGI: they cannot adapt. Anyway, we'll come back to that.

Manav [00:04:13]:
Yeah. Okay, let's go back a little bit. I think AGI means a machine can conduct independent thinking.

Peter Voss [00:04:21]:
Yes, it can. Specifically, it can learn new things and adapt to changing circumstances. It means it needs to be able to think on the fly. Like we're having a conversation now: I'm learning from you, you're learning from me, we're updating our models, we come up with new ideas, and we learn incrementally as time goes by. Now, of course, AGI is also about solving difficult problems, being a cancer researcher or a master programmer or whatever, but the system can basically teach itself. So there's autonomy: not constantly having a human in the loop to tweak it.

Manav [00:04:58]:
So instead of being prompted, it finds problems to solve by itself.

Peter Voss [00:05:03]:
Yeah, interesting. Yeah.

Manav [00:05:05]:
And Aigo's approach is cognitive AI. How is cognitive AI different from large language models?

Peter Voss [00:05:12]:
DARPA actually came out with a nice sort of categorization a few years ago, where they talk about the three waves of AI. The first wave people now refer to as good old-fashioned AI, which is basically largely logic-based approaches. And again, Deep Blue would be a good example of that. So you have lots of logical rules and maybe also some statistics. That was basically the first wave of AI, which covered pretty much the first decades of the field: certainly the 70s, 80s, and 90s were all wave one.

Manav [00:05:47]:
Was there any application of that?

Peter Voss [00:05:50]:
Some expert systems, but they were more or less handcrafted decision trees. And then of course, again, Deep Blue, a chess-playing program, and container optimization.

Manav [00:06:01]:
And when did that come out?

Peter Voss [00:06:03]:
Deep Blue I think was either late 90s or around 2000. There were sporadic successes in narrow applications of AI in that first wave. Now, the second wave hit us like a tsunami about 12, 13 years ago, when people started to figure out how they could use massive amounts of data and massive amounts of compute to actually do useful things. The big companies like Google have a lot of data and a lot of compute, so the question was: how can we leverage that? That is basically the second wave: statistical, big-data approaches, where you have a lot of data and a lot of compute, you number-crunch all the data you have, and you end up with some kind of a model. In the early days that was deep learning, really, and there was tremendous success with improving speech recognition, translation, image recognition, and that kind of thing really opened the door towards self-driving cars, through deep-learning recognition. So that was a big advance in the field of AI, the second-wave big-data approach, and over the last two years it has of course moved into generative AI and ChatGPT, which took this to a whole other level by scaling even bigger and bigger systems and having this sort of huge oracle with massive amounts of knowledge trained into the model. So that's the second wave of AI: big-data, statistical approaches and generative AI. ChatGPT is very much part of that, and even the latest, what they call agentic AI, is still very much part of that, because it's all based on big data. And in the last few years it's all been transformer technology, which is just a particular way of number-crunching the data. So that's the second wave. The third wave, according to DARPA, is really adaptive AI that can learn by itself. It doesn't need massive amounts of data. It works more the way our human brain, our mind, works.
You have to consider just how different that is when you think that our brain uses 20 watts of power, not 20 gigawatts. We don't need a nuclear power station to power our brain, and a child can learn with maybe a few million words, if that.

Manav [00:08:37]:
But we also don't have infinite memory.

Peter Voss [00:08:39]:
No, we don't. But LLMs don't have infinite memory either. Not infinite, but much larger. It's a different thing. It's more a knowledge base, really; that's what they are: a knowledge base with a sort of natural-language query on top, some would argue.

Manav [00:08:54]:
Even humans can be like LLMs. We're always predicting the next word, what to say.

Peter Voss [00:08:59]:
Yeah, but we have higher-level processes. We know when we don't know something, whereas an LLM doesn't know when it doesn't know something. It'll just make stuff up and sound very confident.

Manav [00:09:10]:
Hallucinate.

Peter Voss [00:09:11]:
Yeah, hallucinate. And it's very confident in telling you something that's complete garbage. So there are a couple of differences we can go into, but really the biggest difference is that cognitive AI is much more like the human brain, in that it can learn incrementally, in real time, with much less data.

Manav [00:09:31]:
And is that the third wave?

Peter Voss [00:09:33]:
Yeah, that's the third wave of AI. So that's cognitive AI, and that's the approach we've been following really since I coined the term. It makes sense for a system to be able to learn as it goes along. While these large language models are very powerful and very useful for many things, they can never be reliable, because they don't know what they don't know. And the other problem is they can't adapt to changing circumstances. I'll give you a very clear example of just how severe that limitation is. We recently had presidential elections, and the day before the elections we didn't know who was going to win. By the time the results were out, suddenly a lot of things changed, so your model has to be updated. Now, with large language models, you number-crunch for two months or so before you have a new model for the new situation. But even that wouldn't be up to date, because the cutoff of the data will be whenever you start your training. So they're really not good predictors; it's not at all the way humans work. As soon as we know the results, as soon as something major changes in the world, we start to think through the implications and update our model of what we now expect to happen. Large language models, the second wave of AI, simply cannot be updated in real time. It's impossible. In fact, I co-authored a paper where we reviewed more than 200 research papers to see what the state of the art is in trying to get large language models to learn incrementally. There wasn't a single one that can update the core model itself. It can't be done. That's why companies spend hundreds of millions of dollars. And Elon Musk, just on Grok, spent $5 billion to build a data center. I don't know how much Grok cost to train, but many hundreds of millions of dollars; maybe half a billion or a billion, who knows? But then you use it for a few months, it becomes outdated, and you throw it away.
Yeah, if they could update them, they wouldn't be throwing them away and building new models every time. So it can't be done; it's a very severe limitation. The third wave, cognitive AI, really makes much more sense, and it's ultimately what you require to get to AGI.
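Peter's point about real-time updating can be illustrated with a toy sketch. This is purely hypothetical and not Aigo's actual system; it just shows the contrast he draws: a store whose knowledge takes effect the moment a single new fact arrives, with no retraining pass over a corpus.

```python
# Toy illustration (hypothetical, not Aigo's design): knowledge that can be
# revised incrementally, versus a frozen model fixed at training time.

class IncrementalStore:
    """Facts can be added or revised at any moment and take effect immediately."""
    def __init__(self):
        self.facts = {}

    def learn(self, subject, attribute, value):
        # One observation is enough; no number-crunching over massive data.
        self.facts[(subject, attribute)] = value

    def query(self, subject, attribute):
        return self.facts.get((subject, attribute), "unknown")

store = IncrementalStore()
print(store.query("election", "winner"))   # before the results: unknown
store.learn("election", "winner", "candidate A")
print(store.query("election", "winner"))   # immediately reflects the update
```

A frozen LLM in this analogy would keep answering from whatever its training cutoff contained; the incremental store answers from the latest fact it has seen.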

Manav [00:11:57]:
How would one even go about that? Is that a trade secret? Is that something you guys…

Peter Voss [00:12:02]:
Oh no, I have written about it, though not in detail. We're not open source or anything. But the way we go about it is we have a knowledge representation: it's a vector graph database that we've developed to be very high performance. Our graph database is literally a thousand times faster than any commercially available graph database. So you have this knowledge representation that is specifically designed so that it can be updated in real time, incrementally. Which also means it isn't opaque, it isn't a complete black box, and it uses mechanisms actually similar to the way our brain works, in that it can hear one sentence, learn from it, and think through the implications. So I can talk about a few of the technical details. The knowledge representation is really important, having it in a form that can learn incrementally. It's also important to have your knowledge representation be what people refer to as neuro-symbolic. The neuro part is really the way large language models work: they have fuzzy pattern matching, basically, where this pattern seems similar to another one, and that's what they use to predict the next word. But then you also need high-level logical thinking, and that is what's called symbolic logic, basically. So you have neuro-symbolic, and in our approach, the data representation for both the neuro aspects and the symbolic aspects is one uniform database. So our system can switch between the two different modes of thinking. And in fact there's a parallel in cognitive psychology. You may have heard of System 1 and System 2 thinking. Daniel Kahneman is famous for developing that.
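As a rough illustration of one uniform store serving both modes (an assumed toy structure, not Aigo's actual design), each node can carry both an embedding, for fuzzy "neuro" similarity matching, and typed edges, for exact "symbolic" traversal:

```python
import math

class Node:
    def __init__(self, name, vector):
        self.name = name
        self.vector = vector   # supports fuzzy pattern matching ("neuro")
        self.edges = {}        # relation -> set of targets ("symbolic")

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class VectorGraph:
    def __init__(self):
        self.nodes = {}

    def add(self, name, vector):
        self.nodes[name] = Node(name, vector)

    def relate(self, src, relation, dst):
        self.nodes[src].edges.setdefault(relation, set()).add(dst)

    def fuzzy_match(self, vector):
        # "Neuro" mode: nearest concept by embedding similarity.
        return max(self.nodes.values(),
                   key=lambda n: cosine(n.vector, vector)).name

    def follow(self, name, relation):
        # "Symbolic" mode: exact logical traversal over typed edges.
        return self.nodes[name].edges.get(relation, set())

g = VectorGraph()
g.add("cat", [0.9, 0.1]); g.add("dog", [0.8, 0.2]); g.add("car", [0.1, 0.9])
g.relate("cat", "is_a", "animal")
print(g.fuzzy_match([0.9, 0.12]))   # fuzzy lookup lands on "cat"
print(g.follow("cat", "is_a"))      # exact traversal yields {"animal"}
```

The point of the sketch is only that both operations run against the same data structure, so nothing has to be copied between a "neural" module and a "symbolic" module.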

Manav [00:13:57]:
Thinking, Fast and Slow.

Peter Voss [00:13:58]:
Exactly, yeah, that's exactly it. System 1 is the automatic responses we have; that's much more like a large language model. But then we have our upper level, our metacognition, our logical thought, that supervises it and clicks in: oh no, I'm not sure here, or I need to think a bit more. And it directs our thought. So our cognitive architecture can also operate in those two modes. It can operate in System 1 mode or System 2 mode. They're not two separate systems; it's more like the mode you're operating in.
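The mode switch Peter describes can be sketched in a few lines, purely as a hypothetical illustration in the spirit of Kahneman's System 1 / System 2: answer from fast associative recall when confidence is high, and let metacognition hand off to slow deliberate reasoning when it isn't.

```python
# Hypothetical sketch: one agent, two operating modes, chosen by a
# confidence check (the "metacognition" step).

def make_agent(associations, reason_slowly, threshold=0.8):
    def answer(question):
        # System 1: fast, automatic lookup with a confidence estimate.
        guess, confidence = associations.get(question, (None, 0.0))
        if confidence >= threshold:
            return guess, "system 1"
        # Metacognition notices low confidence -> switch to System 2.
        return reason_slowly(question), "system 2"
    return answer

associations = {"2+2": ("4", 0.99), "17*24": ("?", 0.2)}
answer = make_agent(associations, reason_slowly=lambda q: str(eval(q)))
print(answer("2+2"))     # confident recall: fast path
print(answer("17*24"))   # uncertain: deliberate path
```

The design choice worth noting is that there is one `answer` function, not two separate systems; the threshold just decides which mode is active, mirroring Peter's "it's more like the mode you're operating in."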

Manav [00:14:32]:
I'm just trying to imagine the world where this cognitive AI exists. Right now, okay, ChatGPT is a chatbot. People can wrap their heads around it. It's very simple: you prompt a question, you get an answer. In the world with cognitive AI, is there some application you've envisioned that could be used by the masses?

Peter Voss [00:14:53]:
Oh, absolutely. What we're focusing on in our company right now is to get to full AGI, which means adult-level general intelligence. Over the years our company has alternated between developing the core technology and commercializing it, to try to make money and get investment. We had to commercialize systems that weren't anywhere near AGI yet, but they were already using a cognitive architecture. Now our company is 100% focused on taking the core technology all the way to human-level AGI. The implications of AGI, the benefit of AGI to humanity, are just tremendous. I can give you three areas; there are more ways of carving it up, but let me explain it in three buckets. The first bucket: an AGI can be a very powerful researcher in any field, really. Let's take cancer research, for example. An AGI can teach itself to become a PhD-level cancer researcher. Actually, let me go back one step and sketch out what a generic AGI would be capable of. Think of it as a smart college graduate with general training, some statistics, some mathematics, some general knowledge, but a very smart, highly motivated college graduate. That's your baseline of AGI. Once you have that, the AGI can teach itself, like a human except much quicker, to become, say, a cancer researcher. But it could also be a researcher in battery technology, nuclear power, pollution control, or whatever. Now you have this one AGI cancer researcher, and you can make a million copies of it. Now you have a million PhD-level cancer researchers chipping away at the problem, pursuing different avenues of research, but also communicating with each other much more effectively than humans. They don't have egos getting in the way, and they can copy whole bits of knowledge they've acquired.
So imagine just how much faster the progress will be for us to conquer diseases and solve technical problems in whether it's nanotechnology or energy or pollution.

Manav [00:17:32]:
That would be the biggest use case, like finding a cure for cancer or figuring out how we can use nuclear energy.

Peter Voss [00:17:40]:
Yeah, absolutely. Just think of the benefit for problems that humanity has been trying to solve for a long time; that will be accelerated tremendously. So that's the one bucket, which is super exciting in how it will improve the human condition. The second area is more obvious: any desk-bound cognitive job can now be automated at a much, much lower cost than humans. So you have a dramatic reduction in the cost of goods and services, which means everything becomes cheaper, because it can be automated in an effective way. Again, this smart college graduate can teach itself to become an accountant, a programmer, a manager, whatever. That's the second bucket; that's very obvious. The third one I'm actually almost most excited about, and that is what I call a personal personal assistant.

Manav [00:18:44]:
Now why two "personals"?

Peter Voss [00:18:46]:
Yeah, why two "personals". So two different meanings of the word personal. The first is that you own it: it serves your agenda, not some mega-corporation's agenda; it's there for you. You own it, you control it, it serves your agenda. The second personal: it's hyper-personalized to you. It learns over time whatever you want to teach it about your history, your goals, your personality, your likes, your dislikes, who your friends are, what you're trying to achieve, and so on. So imagine having this kind of personal assistant that is with you all the time. It's almost like an exocortex, an extension of your own brain. And why I'm so excited about that is it'll make us better people. It'll help us think things through and avoid mistakes we might make otherwise, getting into a wrong business relationship or personal relationship, whatever it might be. We will have a personal assistant to help us think things through and give us advice, apart from doing the dirty work for us, like dealing with insurance companies and banks and stuff like that. That's AGI. It's immensely powerful and immensely beneficial to humanity. That's why I can't think of wanting to work on anything else.

Manav [00:20:13]:
It's extremely exciting, and it's extremely scary. I have to say that: it's extremely scary. I just cannot believe that we will have so many new jobs created to fill the demand for 8 billion people. So I think the next big question is: what will humans be doing?

Peter Voss [00:20:35]:
Let me turn that around. Because people think of unemployment as a scary thing, but let me turn that on its head. If you ask most people, if I ask you, would you like to win the lottery?

Manav [00:20:47]:
Yes.

Peter Voss [00:20:48]:
Yes.

Manav [00:20:48]:
Everyone is.

Peter Voss [00:20:49]:
Of course, almost everybody. Maybe a billionaire will say, no, I don't care. Most people would like to win the lottery. Why? Because then they can do the work they want to do, or not work: whether they want to travel, spend all their time with friends and family and kids, educate themselves, do some research, build something, whatever. You have that total freedom if you don't have to work anymore, and the amount of wealth created through AGI will allow people to do that. If you look at history, the work week has actually shrunk tremendously. Very solid statistics: people on farms would work 14 hours every day to make a living.

Manav [00:21:39]:
I just feel like the concept of money will change. You know how we had a barter system, and now we have the financial system; I just feel like it's going to change overall. I know people are experimenting with universal basic income now, but I personally don't know what's going to be the next phase.

Peter Voss [00:21:57]:
We'll have AGI to help us figure that out too. There will still be money, there will still be value, because there are only so many houses on top of Beverly Hills; there are only so many people who can come to a live performance of your favorite artist, or whatever. So there will always be scarcity, and people will be exchanging money, exchanging value, for that. But people will not have to work nearly as many hours, if any at all, to be able to afford a very good lifestyle.

Manav [00:22:35]:
Instead of waking up and being like, hey, create a to-do list for me, the AI will tell you your to-do list. You don't even have to prompt it.

Peter Voss [00:22:43]:
If you want it to do that. Different people will have different degrees of authority and autonomy that they want to retain. Some people will say, look, just help me manage my life, and I'll just watch TV.

Manav [00:22:57]:
And this sounds extremely exciting, because the CEO of Nvidia, Jensen Huang, came out and said humanoid robots are having their ChatGPT moment right now, where every household will have a humanoid robot helping with the basic chores that we find mundane. And then each robot could be equipped with this AGI.

Peter Voss [00:23:19]:
Yeah, robotics will come after AGI. At the moment robotics is a big hot thing that's raising a lot of money, but you're not going to put a robot in your house until it has human-level general intelligence, because it needs to learn on the fly how to deal with things: you've got a new pet, you've got a new child, you've got visitors coming and different kinds of visitors are treated differently in the house. Things change; something breaks in the house. You can't pre-program all of that. The system has to have general intelligence, to be able to know what it doesn't know. So robotics will be fine in the factory or in very confined situations, but to have a robot in the home as a personal robot, you will need AGI first. And AGI will actually help make that happen.

Manav [00:24:15]:
So a lot of these new models claim to have a lot of reasoning capability. How is that going to differ from the reasoning capabilities of an AGI model?

Peter Voss [00:24:24]:
First of all, that reasoning ability is actually very brittle. It only really works well within the particular things it's been trained on. Again, it doesn't know what it doesn't know, and some of the bloopers it makes are really bad. There's also a big distortion in how large language models are assessed, because they've been trained on a lot of the benchmarks. And the benchmarks are very impressive, like the law.

Manav [00:24:52]:
Firm exam or the test, they know.

Peter Voss [00:24:55]:
what the answers are. So you don't know how much contamination there is, and there's a lot of contamination, where basically they just know the answer because it was in their training data. And the reasoning is similar: there are many examples where they fall down dramatically. In fact, recently there was a paper showing that the top large language models cannot learn how to multiply numbers. They break down after a few digits.

Manav [00:25:21]:
Horrible. Yeah, I've used them. That's one thing I actually want to talk about: how come it's so bad at mathematics and never accurate? I have to always double-check, and most of the time it's incorrect.

Peter Voss [00:25:32]:
Yeah, because they don't learn things the way we learn, the way a child learns, or the way we learn mathematics. They do it through this sort of pattern recognition, based on how many times they've seen a particular thing. So they don't know that to multiply two bigger numbers you actually have to go through a procedure, like we do in our heads, like we've learned to do. And they don't even know that they should use a calculator, which they could, to solve the problem. They simply don't have an understanding of what they know, what they don't know, and when to use a tool. That's called metacognition. They really don't have metacognition; they don't have control over their own thought processes.
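The procedure Peter refers to, the schoolbook multiplication algorithm a child is taught, can be written in a few lines. It works for numbers of any size precisely because it follows explicit steps rather than recalling memorized patterns. (This is an illustrative sketch of that contrast, not anything from the interview.)

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook multiplication of two non-negative integers given as
    digit strings: each digit pair contributes to one column, then carries
    are propagated once, exactly as done by hand."""
    cols = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            cols[i + j] += int(da) * int(db)   # column sums, no carrying yet
    carry = 0
    for k in range(len(cols)):                  # single carry-propagation pass
        carry, cols[k] = divmod(cols[k] + carry, 10)
    return ''.join(map(str, reversed(cols))).lstrip('0') or '0'

# The procedure scales to operand sizes where pure pattern matching breaks down.
print(long_multiply("123456789", "987654321"))  # 121932631112635269
```

Nothing here is recalled from training data; the answer falls out of the steps, which is exactly the kind of tool an LLM with working metacognition would know to reach for.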

Manav [00:26:14]:
And what did you think about the new approach by DeepSeek compared to OpenAI?

Peter Voss [00:26:18]:
Oh, DeepSeek is really not that different. They've just been very smart in combining different techniques and taking some clever shortcuts, but in terms of what they can achieve, it's really no better. They just require less power, less compute, to do it, but they don't do anything beyond what other large language models can do.

Manav [00:26:40]:
And one of the applications Aigo was building is the voice agents. Is there any product that people can use right now, or is everything in beta?

Peter Voss [00:26:51]:
Yeah. So we had a commercial venture where we took our core engine and built it out specifically for call-center operation. Our best-known customers are 1-800-Flowers and the Harry & David group of companies, about 12 companies, and in fact last Valentine's Day we replaced over 3,000 agents that they normally had to hire just for that one week with our technology. But it wasn't AGI. It was cognitive AI, but with a lot of human input in terms of rules that we put in, grammar rules and reasoning rules that were handcrafted, programmed, so it didn't have the flexibility to learn those itself. Seven months ago we decided to put our commercial business completely on hold to concentrate on building AGI. Because why should we go after scaling to 10 million or even 100 million in revenue with a commercial business when AGI is a multi-trillion-dollar business, which it literally is?

Manav [00:28:01]:
Yeah, because a lot of companies are chasing the low hanging fruit.

Peter Voss [00:28:04]:
Yeah, of course it makes sense. As a startup there are a lot of opportunities to take large language models and tune them to do a particular job. But Yann LeCun, the chief scientist of Meta, put it very strongly, and I agree with him. He said large language models are an off-ramp to AGI, a distraction, a dead end. That's how strongly he puts it. In fact, he tells his students: don't bother learning large language model technology, that's not the future. The future is something more like cognitive AI. He doesn't use those terms, but clearly what he's referring to is essentially the third wave, cognitive AI that can learn and reason.

Manav [00:28:50]:
How can someone go about learning the next wave? Because I'm interested in it now. I'm thinking, why should I focus my energy on learning how LLMs work when this third wave is going to take over?

Peter Voss [00:29:00]:
It's really hard, because at the moment all the money is still going there. There was just another announcement: somebody is raising another $1 billion at a $30 billion valuation for large language models. There is still so much momentum that it's sucking all the oxygen out of the air for anything else. So you won't find any university courses on the third wave or cognitive AI. They used to have them, like 20 years ago, but now you basically have a whole generation of computer scientists and AI scientists who don't know anything other than big-data, brute-force, statistical approaches. They don't know anything other than the second wave. And the same is true for VCs and investors and customers; that's what people know. So to learn about cognitive AI is really hard. There are only a handful of companies in the world that are really dedicated to that kind of approach.

Manav [00:29:59]:
You must be struggling a little bit, for sure, with hiring people to build this.

Peter Voss [00:30:06]:
Oh, no, quite the opposite.

Manav [00:30:07]:
Quite the opposite, yeah.

Peter Voss [00:30:08]:
Because we are not looking for the same people at all. We're not looking for people with big-data experience. In fact, that's a negative if that's all they've been trained on; they're not going to be useful to us. What we are looking for is people who can think about the problem from a cognitive psychology point of view: understanding what IQ tests measure, that's important; what is special about human intelligence; how our intelligence differs from animal intelligence; understanding language, understanding education, how children learn. So half of our team is what I call AI psychologists.

Manav [00:30:51]:
Wow.

Peter Voss [00:30:51]:
It's a profession I invented.

Manav [00:30:53]:
They think about how people think.

Peter Voss [00:30:55]:
That's their starting point, but now their job is to understand the mind of the AI, of the AGI. So they figure out what curriculum, what training system, we need to give the AI for it to learn language, for it to learn reasoning. They build a curriculum, then build tests and evaluate and say, okay, we need to improve memory, or we need to improve reasoning in a particular manner, understanding that. And the other half of our team is basically on the coding side. But again, the coding that's done is difficult algorithms; it's nothing to do with big data and GPUs and stuff like that, so we don't need that expertise. So we're not competing with other companies, all the second-wave companies, the generative AI companies; we're not really competing for their staff. And the people who read about our approach and say, wow, this makes a lot of sense to me, this is the way to AGI, they're actually super keen to work on our project. So we have a lot of people applying to work on our project. So no, it's not actually hard. What's hard is that we don't have the funding to hire enough people. We're only 12 people in the company, and for us to get to AGI, we believe we need to hire another 45 people to be able to get there within two years. We have a particular roadmap of what we're developing, and it's ridiculous, because the kind of money we're looking for is like a rounding error compared to the money being spent on large language models. We don't need massive amounts of data, we don't need massive amounts of compute. Our system is trained on an off-the-shelf computer, a single computer. It's a very different approach. As I say, it's much closer to the 20 watts that our brain uses than the 20 gigawatts of nuclear power stations that need to be recommissioned.

Manav [00:32:56]:
You said two years. Do you have a roadmap for the next two years?

Peter Voss [00:32:59]:
Yes.

Manav [00:32:59]:
Wow.

Peter Voss [00:33:00]:
Yeah. We believe we can get to AGI with our approach in two years. Yeah.

Manav [00:33:05]:
Incredible.

Peter Voss [00:33:05]:
Because we've done a lot of work over the last 20 years with a small team. We have commercialized it, so we have proof points in terms of scaling the system and the core technology. There's a lot we've learned and figured out already. At the moment we just need to scale up our system, basically, to get to this college-graduate level.

Manav [00:33:29]:
One thing I wanted to ask you: I read something about INSA, your Integrated Neuro-Symbolic Architecture. I had a question about that. You said that you've been steadily moving towards the beneficial goal for humanity by leveraging your INSA, an integrated neuro-symbolic architecture which facilitates real-time, incremental, autonomous learning. I would love to know more about this.

Peter Voss [00:33:53]:
Yeah. As I said earlier, the approach we have is a neuro-symbolic approach. And the reason I call it INSA, Integrated Neuro-Symbolic Architecture, is that it's unlike other cognitive architectures. There have been other cognitive architectures; especially around 20 years ago that was quite a big thing, though hardly anybody is working on cognitive architectures at this stage. But what people worked on 20 years ago was very modular. You'd have a cognitive architecture with maybe one module for natural language parsing, another module for reasoning, another module for memory, and so on, a whole bunch of different modules. And that didn't work very well, because these modules didn't really talk to each other effectively. Whereas the way our brain works, it's all integrated. And that's what we have: an integrated neuro-symbolic system. As I mentioned earlier, our system can work on a sort of neural network basis, where it does fuzzy pattern matching, but it can also operate in a symbolic mode, which is logical thinking. It's like system one and system two, and it can switch between the two, building the...

Manav [00:35:11]:
System two, basically. Because the pattern matching, again, has its flaws.

Peter Voss [00:35:15]:
Yeah, but the pattern matching, the system one, also needs to be able to learn incrementally in real time. You can't just take a large language model, an LLM, and somehow connect it with a reasoning engine. That won't work. You need a uniform, integrated system, and that's why we call it INSA, integrated: the knowledge representation between the two modalities is uniform, and the system can easily switch between the different modes. In fact, the two modes need to work together. I can give you an example that I think many people are familiar with, if you've ever learned to play an instrument or ride a bicycle or ride a horse. When you start playing guitar, initially it's the more symbolic mode: you say, okay, this finger goes here, that finger goes there, and then you strum this string. After a while that becomes automatic; it's moved into system one, where you don't have to think about what you're doing anymore. So now, while you're playing your guitar, you can concentrate on intonation, on changing emphasis, on how exactly you strike the strings. Once that becomes automated, then you can concentrate on the audience and sing and...

Manav [00:36:36]:
Act and all that.

Peter Voss [00:36:37]:
Exactly. But these are two systems that work together. And then something happens, something goes wrong, a string breaks on your guitar or something. Then system two kicks in: what do I do now? Do I continue playing without the string, or what do I do? You need that integrated architecture for AGI. That's our approach, and that's why we call it INSA, Integrated Neuro-Symbolic Architecture.
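The two-mode idea Peter describes can be caricatured in a few lines of code. This is purely an illustrative sketch, not Aigo's actual implementation; all the names here (`FACTS`, `RULES`, `system_one`, `system_two`) are invented for the example. It shows a single shared knowledge store (the "uniform representation") consulted first by a fast, fuzzy "system one" matcher, with a fallback to an explicit, rule-based "system two" when the pattern match isn't confident enough:

```python
from difflib import SequenceMatcher

# One shared knowledge store: the "uniform representation" both modes read.
FACTS = {
    "capital of france": "paris",
    "strum pattern for c chord": "down-down-up",
}

# Symbolic "system two": explicit if-then rules that reason over the same store.
RULES = [
    (lambda q: q.startswith("is ") and q.endswith(" a capital?"),
     lambda q: "yes" if q[3:-11] in FACTS.values() else "no"),
]

def system_one(query, threshold=0.8):
    """Fast, fuzzy pattern match over stored facts (the 'neural' caricature)."""
    best_key, best_score = None, 0.0
    for key in FACTS:
        score = SequenceMatcher(None, query, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    # Only answer when the match is confident; otherwise defer to system two.
    return FACTS[best_key] if best_score >= threshold else None

def system_two(query):
    """Slow, deliberate rule application (the 'symbolic' caricature)."""
    for matches, infer in RULES:
        if matches(query):
            return infer(query)
    return None

def answer(query):
    # Try the automatic mode first; fall back to deliberate reasoning.
    return system_one(query) or system_two(query) or "don't know"
```

The point of the toy is the control flow, not the matching: both modes read the same `FACTS`, and the switch from the automatic mode to the deliberate one happens exactly when confidence drops, mirroring the broken-guitar-string moment in the analogy.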

Manav [00:37:03]:
That's incredible. Why are these other companies not building system two? Or are they also building system two, or do they just have a different approach?

Peter Voss [00:37:12]:
You can't go from a deep learning system, from a transformer-based system, to an incremental learning system. There just is no way. They're trying to emulate system-two thinking through these reasoning, chain-of-thought things. But the chain of thought is also pre-trained; it's not generated autonomously, which it really needs to be. These are different things. It really goes back to Yann LeCun: large language models are an off-ramp to AGI, a distraction, a dead end. But there's a lot of money flowing into it, a lot of money to be made, and people are jumping on that bandwagon. It really is the wrong path, though. Think of it: if you're trying to go north but you're heading east, it doesn't matter how far you go or how fast you go, you're not going to get north. That's the difference. But the momentum is there, and it is doing useful things and making people money.

Manav [00:38:13]:
And do you feel AI should be open source like these models should be open source or closed source?

Peter Voss [00:38:20]:
It's a difficult decision, because with open source it's obviously a lot harder to raise money. Investors want something proprietary; they want that competitive advantage. We're seeing that with OpenAI right now: they can't raise the kind of money they want as a supposedly open company. They were never really open AI to start with, but as basically a nonprofit, they can't raise that kind of money. I do believe, though, that AGI with the right approach will not require these massive data centers. It'll run on an off-the-shelf computer, which will make it available to everyone in the world. In fact, as hardware improves and these AGIs improve, it literally will be able to run on your phone. So you'll have your personal assistant on your phone, available and affordable to just about everybody. And then it will automatically become open source, because people will figure out how to reverse engineer it and find different ways of achieving cognitive AI. But from an investment point of view, we want to be the company that achieves AGI, and obviously for our investors there will be a huge payday. Very quickly, though, the benefits will spread across all of humanity: through the research on the one hand, through reducing the cost of goods and services, and through giving us our personal assistants.

Manav [00:40:04]:
Yeah, I love that. Have you seen the movie Her?

Peter Voss [00:40:07]:
Oh yeah.

Manav [00:40:09]:
I feel it's a no-brainer that we're heading in that direction.

Peter Voss [00:40:14]:
Except it'll be your personal assistant. You see, there it was running in the cloud, and ultimately, at the end of the story, it wasn't your own personal assistant. It was an assistant.

Manav [00:40:27]:
It was 100,000 people's personal assistant.

Peter Voss [00:40:29]:
Exactly. And you don't want that. You want a personal assistant that you own, that is dedicated to you, that really becomes an extension of your own mind. And with our technology and our approach, that's exactly what you'll get.

Manav [00:40:43]:
If you're an investor, please contact Peter, because he's onto something here.

Peter Voss [00:40:50]:
We are indeed. But it really requires a visionary. We're currently raising a Series A. Up to now we've been funded on a SAFE, with money I've put in, and we've had other investors come in. But we're looking for the right kind of partner for the Series A. We're only looking for $25 million, which is like a rounding error compared to the amount of money going into this space. As I say, because we don't need massive amounts of compute; our compute budget is minimal. We're looking for the right kind of investor, one who shares our vision. Unlike what OpenAI is talking about, monetizing through advertising and so on: no, we are not going to do that. We are not going to be another Alexa or Siri, owned by some mega corporation that controls what it can tell you. So we're looking for an aligned investor, a visionary who sees the benefit of AGI and the benefit of us getting there as soon as possible with the right kind of technology. If you think you are that person, we are looking for a partner to take us to AGI.

Manav [00:42:01]:
One last thing I wanted to ask you. You've been an entrepreneur for five decades now, and I was on your LinkedIn earlier: you've been a founder and chief scientist for so many different companies. What's one piece of advice you would give to entrepreneurs building in the AI space, or just generally?

Peter Voss [00:42:19]:
So generally, if I have to think back, one of the biggest regrets I have is that I started my first company at 25, and I wish I'd started earlier, because there's nothing like actually being a co-founder, being responsible for a company, actually doing it and gaining that experience. And it's not for everyone. It's brutal. It's brutal starting a business, with the ups and downs. Occasionally you get a company where it's all plain sailing from the get-go, but usually it isn't. Usually it's brutal.

Manav [00:42:58]:
It is.

Peter Voss [00:42:59]:
So the sooner you can learn the dynamics of a company, the marketing, finding partners, employees, customers, all of that, the better. My biggest advice: just go out and do it. And if you can find a good partner, it makes it a lot easier. If you have the technical person and you have a salesperson, these are usually very different personalities. The person who loves to go out and talk to people, schmooze at parties, and find the investors: that's a certain kind of personality, and it's extremely valuable to have in a partnership. And then you potentially have the money person, the accountant or manager type, and the technical person. It makes it a lot easier if you can find that. Obviously it's always risky, so you need to think about how a divorce would work if it doesn't work out, because sometimes the dynamic just doesn't work out. But if you can find a partner or two, it makes it a lot easier to run a business. It really does.

Manav [00:44:01]:
And I can also tell that you read a lot, just from the way you talk. What are some books you would recommend, books you give to other people, or books that have changed your life?

Peter Voss [00:44:13]:
Yes, they could be technical books; there are so many books. When I sold the shares of my first company, the one I took public, I took off five years just to study intelligence: all the different aspects, in philosophy, cognitive psychology, child development, psychometric testing, and AI. So I read a lot of books then. One that always comes to mind is Douglas Hofstadter's The Mind's I, with Daniel Dennett, a collection of short stories and essays, really very cool futuristic stories on different ways of thinking about what intelligence is, what identity is, what personality is. If we could teleport, or if we could clone ourselves, who would we be? Some very neat stories. That's probably the first one that comes to mind.

Manav [00:45:04]:
It was great having you, Peter. How can people find you and what you're building? Can you guide them in the right direction? I know the website is aigo.ai, but how can they find you?

Peter Voss [00:45:14]:
Yeah, very easy: Peter Voss. I'm on LinkedIn, very easy to find. You can also email me at peter@aigo.ai, and I'm on Twitter/X. Between the website, LinkedIn, and Twitter, you can easily find me.

Manav [00:45:30]:
Thank you, Peter, for coming on the show. I really appreciate your time.

Peter Voss [00:45:33]:
All right, well, thanks for the good questions.
