Special Episode: Phillip Swan on The Friendly Futurist with Dave Monk
TAI - The Friendly Futurist Collab
===
Speaker: [00:00:00] Ladies and gentlemen, this is your final boarding call for the good ship SS Tomorrow.
Speaker 2: Today on The Friendly Futurist: all about agentic AI.
And welcome back. We're talking about the Gartner Hype Cycle and one of the top trends for 2025, which is agentic AI. Now [00:01:00] you've come across ChatGPT, that's generative AI. This is a different one. This is AI taking on a whole new role and acting as your agent, is the simplest way I can think of to put it. But today's guest, Phillip Swan from Iridius, is in a better position to explain it than I am.
Very fascinating topic, and it shows what's coming up in AI in the next two years, because it's very hard to predict how it's going in ten years; the space has grown quite quickly. Now, remember the Wozniak test? Steve Wozniak, back in 2009, one of the co-founders of Apple, said that if a robot can come into your house, walk into the kitchen, and make you a cup of coffee with no help, that's artificial general intelligence. A very prophetic warning, or prophetic vision, of what's coming up, probably in about 2027. So listen out for that. Very fascinating. I really loved this chat, and Phil is coming up after the break.[00:02:00]
We've got Phillip Swan from Iridius, and Phil, well, Iridius in particular, is about agentic AI. If you're looking at the Gartner hype cycle, that's one of the big trends. And Phillip, welcome. Thank you very much, Dave. Pleasure to be here. Now, as with all my guests, can you please give us a quick rundown on your journey so far, and start wherever you wanna start.
Speaker: Sure. Well, I'm one of four co-founders at a company called Iridius.ai, that's I-R-I-D-I-U-S dot ai. And what we do is help companies stay ahead of the AI arms race without the risk. Hmm. Uh, we have an AI solution factory that allows companies to build AI solutions that are fully compliant from design and authoring all the way through execution.
I run product and go-to-market for the company, but my background is working in the enterprise [00:03:00] space virtually my entire career. Uh, I started off life working for the government, and then I've done a number of startups selling into the enterprise through my career, and been successful at that.
And here we are doing it again.
Speaker 2: Hmm. For those who aren't familiar, what is Iridius's mission, and how has it been received by the market so far?
Speaker: That's a great question. So our mission in life is to deliver AI solutions that are good for people and good for the planet. What does that mean when you unpack it?
That means they're safe and they're responsible. What does safety mean? Well, it means people don't get hurt, people don't get killed, people don't get maimed if you're in a manufacturing context. Right. Responsible AI in that it adheres to regulations that are in place for good reason, like GDPR or the EU AI Act in Europe, as an example, which certain states in the US are following suit on, [00:04:00] like Colorado and Utah.
And you have so many laws that are coming out of the woodwork across the globe. How do you stay in touch? How do you manage it? Right? And the way we describe that part of what we do, 'cause that's only one part of what we do, is much like Windows Defender, which people will be familiar with.
So Microsoft has a team of gosh knows how many people that do nothing but manage threats coming across the ether, and then there are continual updates that happen. And that's the same with us when it comes to compliance and regulatory. So not only do we take existing regulations and standards, of which there are thousands,
we also get down to the local level, even down to municipalities and counties, where you're dealing with things like signing mortgages. Each county has its own regulations in the United States, as an example. It's almost as bad as the alcohol laws in the [00:05:00] United States, in that every state has its own laws
and every county has its own laws, and it's brutal. It's chaos. And so what we do is bring order out of that chaos, and we manage that part for our customers. And the other part that we manage for our customers is: how do I take an idea that I may have drawn on a piece of paper, a napkin quite literally, or I was in a meeting with Dave and we talked about this really cool product?
How do I build a product out of this? Take the MP4 file from your recording as an example. You might have a document that describes something, whether it's a text document or a PowerPoint or some slides, it doesn't matter. It can even be a voice recording, and the system will take it in, and what it will output for you within days is a fully compliant working AI solution or system.
Speaker 2: Wow.
Speaker: That behaves itself. Yeah, and we eliminate [00:06:00] about 95% of hallucination, and we do that through context, so you can actually trust what the AI is telling you. People are becoming familiar with that term, AI hallucination, which is like propaganda: it presents something as truth when it's not.
So
Speaker 2: Actually, speaking of hallucinations, I was playing around with another product which generates video, and there was a scene where a waitress was serving a guest, pouring the wine, and then she headbutted the guest, and the guest just deflated. And I thought, what's going on here? Like, yeah.
So it happens everywhere. Yeah. How do you actually mitigate hallucinations? Because it's very hard for a computer to say how to pour a wine or how to tie a shoelace. So if
Speaker: you don't give it explicit instructions, it can't do it accurately. The way I describe AI and these large language models to people is: [00:07:00] think of them like an exceptionally bright toddler.
I mean, really intensely intelligent, right? But unless you give it precise instructions, it'll start making stuff up, just like a kid would. "Oh, I don't know, I'll just make that up." It's not that it's nefarious; it just wants to give you an answer. Yeah. So if you were to say to an AI, give me the middle point between my head and my toes, what would you say?
You would maybe say your belly button, right? As an example. Right. But the AI might say, oh, it's your belt.
Speaker 3: Yeah.
Speaker: Right. We've seen that happen, right? And that's actually not it, 'cause you didn't give it any context: is it a body that's clothed? Is it a body that's not clothed? Is it anatomical or not?
Right? You didn't give it that level of context. And the more context you give AI, the more accurate the response will be.
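Phillip's point about context can be made concrete with a small sketch. The helper below is hypothetical, not an Iridius or OpenAI API: it simply assembles a contextually rich prompt by stating the assumptions up front, so the model isn't left to guess them.

```python
def build_prompt(question, context_facts):
    """Prepend explicit context so the model doesn't have to guess.

    Without these facts, a model is free to invent its own assumptions
    (clothed vs. unclothed, anatomical or not) -- the "belt vs. belly
    button" problem described above.
    """
    context = "\n".join(f"- {fact}" for fact in context_facts)
    return (
        "Use only the facts below. If they are insufficient, say so "
        "instead of guessing.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What is the midpoint between my head and my toes?",
    ["Treat the body as a straight anatomical line from crown to toes.",
     "Ignore clothing and accessories such as belts."],
)
```

With the facts stated explicitly, the belt-versus-belly-button ambiguity is resolved before the model ever sees the question.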
Speaker 3: Hmm.
Speaker: And I'm [00:08:00] teaching my wife, and I've got adult sons, so I'm teaching everybody in my family how to use AI properly, to the point now where my wife is absolutely addicted. She's addicted to it for her work, because I've taught her how to use it appropriately.
And so it's about learning. It's about being open to learning, having the courage to make mistakes, and not worrying about it, right? Because you're not putting something into production where it's going to harm somebody. This is for your own use, so just get used to it. Pay attention to it. As a human, you always need to be thinking.
Mm-hmm. And this is a really important part, because if we lose our ability to think, because we're trusting the AI to give us all the answers, we are going to become so dumb as a species that, you know, how long will we last as a species as a result?
Speaker 3: Mm-hmm.
Speaker: So it has to be done with responsibility, right?
So responsibility comes down to [00:09:00] fairness, it comes down to accountability, it comes down to transparency, and it comes down to explainability.
Speaker 3: Yeah.
Speaker: And all of that pulls in the ethics and integrity, all of it. So we follow those four parts of the FATE acronym very, very closely at Iridius.
Speaker 2: Yeah, absolutely.
Uh, for some of the listeners that don't know, um, what is the difference between, say, ChatGPT, which is generative AI, and what you build, which is agentic AI?
Speaker: So generative AI comes from large language models, right? Mm-hmm. These started to become real in 2016, maybe a little bit earlier, when the first LLMs were being made available, but it really took off in 2022. Funnily enough, I was actually in Australia when it kicked off. I was in Sydney with my brother when ChatGPT launched and it spread like wildfire, and I was like, I've gotta get onto this thing, right? So I was very early into it; [00:10:00] there wasn't even a user interface around it. I was actually coding user interfaces around it.
Anyway, that's a whole separate story for another day. It came about from me wanting to learn about this thing called OpenAI, which I honestly had not been paying attention to prior to 2020, quite honestly.
Speaker 3: Yeah,
Speaker: It's really only in 2020 that I started paying attention to OpenAI. And the big thing that really drives adoption forward is trust.
So everything we do at Iridius is about trust. Trust of the consumer, trust of the business, trust of the government; it doesn't matter whose trust it is. And when I ask a question or ask for help, I expect to get it, and get it consistently well. Hmm. Now that's all very well in theory, but you have this thing called drift.
Think of drift like human evolution. If you have kids, as great as your kids will be, they're going to be variants of you, right? So they are evolving; they're not the same as you. And it's the [00:11:00] same thing as AI learns: it starts to drift. So you have to have operations teams in place to stop that drift, to pull it back in.
So if it's right of center, bring it back into center. If it's left of center, bring it back into center. You cannot just set it and forget it; you've got to have continual vigilance and determination to provide a really comprehensive AI environment for your company or for you personally. Does that make sense?
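The drift-correction idea can be sketched in a few lines. This is an illustrative toy, not Iridius's system: it compares a recent quality metric against a baseline and flags drift in either direction, assuming a simple mean-shift check (real AI-ops monitoring compares full distributions, not just means).

```python
def check_drift(baseline_mean, recent_scores, tolerance=0.05):
    """Flag drift when the recent average strays from the baseline.

    Illustrative only: production drift monitoring would compare whole
    score distributions (e.g. a population stability index), but the
    principle is the same -- detect the deviation, then pull it back in.
    """
    recent_mean = sum(recent_scores) / len(recent_scores)
    delta = recent_mean - baseline_mean
    if abs(delta) <= tolerance:
        return "in range"
    # "If it's right of center, bring it back into center" -- drift in
    # either direction triggers the same corrective action.
    return "recalibrate: drifted high" if delta > 0 else "recalibrate: drifted low"
```

A monitoring loop would run a check like this continuously, which is the "continual vigilance" Phillip describes rather than a set-and-forget deployment.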
Speaker 2: That makes sense. Yeah. And also I think there's a bit of a trend as well. I mean, we're seeing quite a lot of gimmicky AI. Are we going to see a dot-com-style crash in AI? 'Cause it seems to be ubiquitous everywhere. I mean, I think I just saw an ad the other day telling me that even hearing aids have AI now.
I'm not saying it's gonna disappear, but are we gonna see it sort of dissipate and then have a bit of a plateau in, say, a couple of years' time?
Speaker: So you're asking where we are in the hype cycle, I think, right? What's [00:12:00] real in the cycle and what's not, right? Yeah. Which is why I think we're in an interesting period for funding for AI companies, right?
You'll literally get an AI for this, an AI for that. And we're in that period where there's so much hype. Within two years, 60, 70, 80% of these companies will be gone, because they don't have a business model that they can stand on. Mm. And more importantly, they don't have a product you can stand on, because you may be solving something that's true for today, but are you solving something that's gonna be true in six months, 12 months, 18 months?
Pick your distance in the future. And most of them aren't. So what is really gonna be built to last is companies like Iridius that build in security, privacy, regulatory compliance, that trust, so that you can have trusted AI solutions out in the market without having to hire teams of rare people that you cannot [00:13:00] find, because right now you're paying stratospheric pay scales for some of these people.
I mean, AI engineers getting million-dollar-plus salaries is not uncommon in the United States. Prompt engineers getting similar salaries, it's not unusual. We saw in the news last week Meta, you know, offering hundred-million-dollar signing bonuses to single individuals. We actually know who one of those individuals is, and not only was it a hundred-million-dollar sign-on bonus, it was over a hundred million dollars a year in compensation.
I couldn't even think about that kind of money.
Speaker 2: Yeah,
Speaker: Right. We're not making this up. This is real. Mm. And so companies are struggling to find the right talent. Most couldn't pay for that kind of people. Meta spent over a billion dollars effectively hiring 10 people last week. That is just crazy to me.
Absolutely crazy. It is. How long is this going to last? I know, I wish [00:14:00] sometimes
Speaker 2: we had a million dollars to talk about.
Speaker: I'm going, well, they're not knocking down my door to give me a hundred million dollars a year, so obviously those people have got something that, um, others don't. But my point is, we're in this hype mode, right?
And yeah, it's a race, an arms race of sorts, right? And so how do companies stay ahead of that so-called arms race without the risk? Mm-hmm. And that's where companies like Iridius come into play, because we are that fabric, if you will. We call it an AI control layer; that's where we sit. And that AI control layer manages both the semantic operations of the system as well as the raw operations of the system,
in terms of privacy, as a control layer and an architectural layer. Because people today are talking agentic this and agentic that. Back to your original question, what's the difference between gen AI and agentic AI? Well, the big difference there is that agentic AI is autonomous, or [00:15:00] getting to autonomous AI, which means these agents,
which are not big applications. Everybody's still thinking about big applications running sequentially.
Speaker 2: Yeah,
Speaker: So when you see a lot of these companies with agentic systems, they're using the old paradigm of sequential systems. A true agentic system is your agents, and it could be millions of them.
Literally millions,
Speaker 2: yeah.
Speaker: of these agents operating in the cloud. And how do you do that without impacting performance, impacting privacy, or impacting security? That's where you need people like us, who have decades of enterprise and government experience in securing data and systems.
Speaker 3: Yeah.
Speaker: Right. And that's what we do for our customers, so they can really operate without fear. And I know it sounds too good to be true,
but we have solved two core problems at Iridius. The first is safe and responsible AI. Safety being [00:16:00] people getting hurt, right? People getting harmed, employees getting maimed, or worse yet, employees getting killed. That's one aspect, physical harm. Then you've got the psychological harm, and the list goes on; you can run the gamut on that.
The other part is the responsible AI part, which is all your compliance, your governance. I mean, we're talking documents that can run to hundreds of thousands of pages, and OCR does not cut it. It just does not cut it with these. You can't just say, oh, I'm just gonna scan all these documents in and I've got it.
No, no, no. It needs to be workable by the AI. So we have figured that hard part out, and we can literally take any standard. Take ISO 42001 as an example, right? Anybody in AI knows that standard. That itself is made up of, and pulls from, over 70 different standards. So we are able to take that entire standard and all the 70 other standards, grab complete context around it,[00:17:00]
and apply that context when we're generating the code for the solution to be compliant at the authoring stage. And then we have a monitoring system that will actually monitor for that while it's in execution, just like Windows Defender does as you're running Windows, or, if you're on a Mac, whatever Mac antivirus you're using.
So it's a very similar approach in that regard. Yeah.
Speaker 2: I'm gonna change tack a little here, Phil, and I hope you don't mind, because I know I get asked this question a lot. If you're, say, a high school kid coming up and looking for a career in tech, what would be some of the best
Speaker: advice? So my advice is quite simple,
and this is what I tell my own sons: what are you passionate about? What do you enjoy doing? Don't run into tech just for the sake of it, because you think you're gonna earn a bunch of money doing it. Because yeah, you're not, right? It's not gonna happen. Go into tech if you're passionate [00:18:00] about tech, and I mean really passionate about it: you eat it, you sleep it, you drink it, right, and it's everything for you. Then do it.
'Cause then you're deeply passionate. If you're a geek like me, I'm a total geek; I still geek out on programming, in C and Unix, believe it or not. Yeah. I mean, I am hardcore. I used to do assembler, right? So it's really important to follow your passion, because if you follow your passion, it's never a job.
Yeah, that's right. And in case you haven't figured it out, I'm deeply passionate about what I do, so it's not a job for me. Do I get tired? Yeah. Do I get bored? Oh, no, no, no. I never get bored at work. And the beauty of what we're doing right now is that it's moving at such a pace that the best we can do as a startup is hang on.
And I'm talking about hanging on to the coattails of the OpenAIs of the world, the Anthropics of the world, right? Because that's where the action is, right? And [00:19:00] we have a secret weapon there, which is one of our team members: he happens to be the head of OpenAI's developer forum.
So we get to see what's coming down the pike, as much as he can tell us without violating any NDAs he has with OpenAI, and it at least allows us to stay caught up. That's the best anybody can hope to do; you'll never get ahead, you'll just stay even. But if you're passionate about a career in technology: I always knew personally that I wanted to get into the business side of things, but I was deeply technical, so I knew I had to get my foundation in programming, being an engineering manager,
Speaker 3: And
Speaker: going through product and developing product, and then going into sales and marketing, which is precisely what I did. That's why I am now able to run both product and go-to-market, because I have that depth of expertise from, I won't say how many decades of doing that, but I've got a few decades under [00:20:00] me.
I've got a few white hairs here in my beard, right? And more importantly, I've got the scars on my back from learning the hard way, because I've always been in forward-looking, nascent industries. I was in the early days of mobile, I was in the early days of the internet, and I was in the early days of AI back in 1980, as an example.
We were programming in Lisp back then. And it's technology for helping people. What really gets me excited, Dave, is when I'm sitting on a flight, and this actually happened on a flight from San Francisco to Sydney, of all places. I was sitting in business class on United Airlines, I was at Microsoft at the time running the mobile group, and the guy next to me was using the old Pocket PC,
a Compaq Pocket PC, if you remember those. I started asking him all these questions about how he was using it, his experience, and everything else, 'cause I'm huge into customer experience and I'm huge [00:21:00] into customer centricity, because I believe if you make the customer the center of the equation in everything you do, they will buy from you again and again and again and again.
Right. It is so much easier to sell to an existing customer than it is to acquire a new customer.
Speaker 3: Mm-hmm.
Speaker: And so I like to make customers happy and customers loyal, and I only do that by delivering value to them. And so we as a company have figured out a way to deliver ongoing value to our customers, and we do that in two ways.
Again, one is really understanding their problem, because we've walked in their shoes, so we're helping them separate the noise and the hype from the reality, and we do that regularly. And we bring our customers along on a journey where they suddenly go: you mean all of that innovation that's rocking around my head and rocking around my team's heads, we can actually start testing that and [00:22:00] prototyping that without the risk?
And I mean, yes, you can. And we're proving that out day in, day out right now. My days are packed right now, not with internal calls but with external calls, talking to customers multiple times every day. And the common theme of the problem we're solving is time to market and time to volume. Time to volume is absolutely critical, and that's what we deliver to our customers.
Literally within minutes of installation, we can start delivering value to our customers.
Speaker 2: Yeah. Yeah. Phillip, this podcast is about the future. I want you to cast your mind, say, 10, 20 years into the future. What will this space look like?
Speaker: Oh my God. I can't even tell you what it's gonna be in 10 months, man.
Speaker 2: Fair enough. Just a little one then; make it 10 months. I know it moves quite quick, so,
Speaker: So 10 years is a hell of a long time. [00:23:00] Who knows what will be done in 10 years? But the big thing is what's happening now, right? GPT-5 will release on or before the 4th of July.
And GPT-5 is really, you could arguably say, the first step towards AGI, or artificial general intelligence, which is the autonomy we're talking about.
Speaker 2: Yeah.
Speaker: And the future versions of GPT are focusing on autonomous robotics. Uh, so you'll start to see GPT-6, GPT-7, GPT-8, which will be all robotics-based, within a year, within 12 months.
So this is 2025; by early 2026, we will start to see household robots become a thing. Not your iRobot that goes in and vacuums your floor; I'm talking about robots for around 40, $50,000 US that will be able to do basic cooking, laundry, ironing, basic household [00:24:00] chores. Those will start to come out in 2026.
And so within three years we're gonna have really strong autonomy from a robotics standpoint, both at work and in the home, 'cause they'll start to become affordable. So within 10 years, robots will be common in the home.
Speaker 2: Now, I guess, well, Steve Wozniak and his famous coffee test will have come to fruition then.
Speaker: So then you have the issue of, yeah, can I trust the robot that's in my house?
So safe and responsible AI is absolutely critical to the operation of these, right? Because, you know, it's back to Asimov. You remember I, Robot? He wrote I, Robot. Mm. And so there were the laws of robotics, right?
Speaker 3: Yeah.
Speaker: The First Law being a robot will not harm humans. Right. And that's a fundamental rule that cannot get overridden.
So in the world of agentic AI, those are called entitlements. What is an agent entitled to do? And it's [00:25:00] not as simple as saying, well, it's allowed to do something for Dave but not for Phillip, or it's able to do X, Y, and Z (or zed, if you're not in America). It's really a multidimensional problem to solve, and we've actually filed some patents around this, because it's so complex. And it's about behaviors.
It's about permissions. It's about kill switches. Can I kill the robot? Can I kill the agent? Because once this thing launches, if you don't have the ability to have that big red button, that big red kill switch, mm, you're in for a world of hurt, my friend.
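The entitlement-plus-kill-switch idea can be sketched as a toy class. This is a hypothetical illustration, not Iridius's design: an agent may act only if the kill switch is off and the requested action appears in its entitlement set.

```python
class Agent:
    """Toy sketch of agent entitlements with a kill switch.

    Every action request is gated twice: first by the "big red button"
    (once killed, nothing runs again), then by the entitlement set
    (what this particular agent is allowed to do).
    """

    def __init__(self, name, entitlements):
        self.name = name
        self.entitlements = set(entitlements)
        self.killed = False

    def kill(self):
        # The big red kill switch: irreversible in this sketch.
        self.killed = True

    def request(self, action):
        if self.killed:
            return "denied: agent terminated"
        if action not in self.entitlements:
            return f"denied: not entitled to {action}"
        return f"executing {action}"

bot = Agent("laundry-bot", {"fold_laundry", "run_washer"})
```

Real entitlement systems are multidimensional, as Phillip notes (who is asking, in what context, under which behavioral constraints), but the gate-before-action pattern is the core of it.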
Speaker 2: Wow.
Speaker: Yes, of course. And so you've gotta have all of this taken care of.
And we do that in spades within our platform. That's what allows our customers to deploy AI without the risk, because we've factored all of these risks in. Same thing for security: we've [00:26:00] thought about security not in terms of the old paradigms around security. Agentic AI has broken every model that you can think of from a cybersecurity perspective.
Mm. And so we've factored all of that into our solution, as well as privacy. So security, privacy, reliability. What happens, because even agentic systems will crash, what happens if they crash? Well, that's the beauty of these systems: you can have graceful shutdowns, or graceful degradation in performance, or whatever the action is you want to take, because these agents are actually self-healing.
So when an agent goes down, a bunch of other agents will come in and fix it. Okay. Yeah. So where's the human in the loop? That's where the design comes in. You've still gotta have humans in the loop, because I still posit, I still argue, and there's plenty of people who will debate this with me, that humans will not get [00:27:00] replaced by AI.
You will get replaced by other humans who are using AI. That I believe to be true.
Speaker 3: Yeah.
Speaker: But I also do believe there are going to be radical operational efficiencies, which means fewer people. We as a company are AI-born. We expect to reach a billion dollars in revenue with a hundred people or less in our company.
Speaker 2: Wow.
Speaker: Think about that for a second.
Speaker 2: Yeah.
Speaker: A billion dollars in revenue with a hundred people or less. I also believe Sam Altman's statement from last year, that within five years we'll see the first $1 billion company that has a single employee. That I believe is also true, within five years. Wow.
Yeah. So how do we make money to survive and thrive? It's up to us. So you've gotta get on the bandwagon, 'cause change is happening, and it's happening at a pace that I had never expected in my life before. [00:28:00]
Speaker 3: Mm.
Speaker: I'm seeing corporations like Microsoft move at speeds I've never seen. I worked several years at Microsoft, and compared to other enterprises we moved quickly, but not that quickly.
Not compared to a startup.
Speaker 3: Yeah.
Speaker: And the rate of speed at which Microsoft is moving now, I've never seen before in its entire history. It is incredible what Microsoft is doing. Google, same thing. AWS, same thing. You know, some are further ahead than others in certain areas, and you're going to get that, but
the real thing we're getting into is really driving that value of trust and the importance of trust. I'm not just solving a problem like, okay, I want to see this pane colored pink today versus orange. Mm-hmm. Who cares about that stuff? That's just noise. Those are noisy applications.
They might be cool for one or two tries.
Speaker 3: Yeah.
Speaker: You want a solution that's going to have depth to it, [00:29:00] that's really going to add value.
Speaker 2: Yeah. I've really enjoyed our chat today, Phillip, but is there anything I didn't ask but should have?
Speaker: Well, there's so much. I think we've covered quite a lot.
Speaker 2: Yeah,
Speaker: you get that quite a bit.
There are so many different questions out there. So, how do I get started in AI? Don't go with a free version; get a paid version, whether it's OpenAI's ChatGPT, or Anthropic's Claude, or some other solution. Get with it, learn it. Talk to it like you would your child; talk to it like you would another person.
And here's a little hack for everybody in your audience: if you wanna learn how to prompt, just prompt GPT and say, "write me a prompt to do X, Y, Z," and it'll do it. Mm. And more people should be doing that to get familiar with it. That's a really, really simple hack to get started. But also review and learn your prompts, and get them to be, well, complex is the [00:30:00] wrong word, contextually rich.
Because the more context you put in there, the more accurate the output will be. Yeah. Context is everything.
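The "write me a prompt" hack is easy to mechanize. The helper below is a hypothetical sketch: it just wraps a plain-language task in the meta-prompt text you would paste into ChatGPT (or send via an API), after which the model drafts a contextually rich prompt for you to review and refine.

```python
def meta_prompt(task):
    """Wrap a plain-language task in the 'write me a prompt' hack.

    The returned string is the request you give the model; its reply
    is the actual prompt, which you then review, refine, and reuse.
    """
    return (
        "Write me a detailed, contextually rich prompt that I can use "
        f"to accomplish the following task: {task}. "
        "Include any assumptions the prompt should state explicitly."
    )

request = meta_prompt("summarise a 40-page compliance report for executives")
```

The point of the second sentence in the template is Phillip's closing advice: a good prompt surfaces its assumptions as context instead of leaving the model to guess them.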
Speaker 3: Hmm.
Speaker: And on top of that, data is everything, because without data, AI is nothing. Yeah. That's right, it's all data, as I've said before, and that's something I have to stress to students. So if people want to get a hold of me, check out our podcast.
Number one, we have our own podcast, The Agentic Insider, uh, and you can find that at agenticinsider.show.
Speaker 3: Mm-hmm.
Speaker: And you can reach me on LinkedIn. Yeah, at Phillip, P-H-I-L-L-I-P, dash Swan. And I respond to all of my DMs personally, so you're gonna get me, not an AI. Fair enough. Phil, it's great having
Speaker 2: you on the show.
Thank you very much. I, we learned a lot about ai. I think a few myths have been busted as well. It's been an absolute pleasure,
Speaker: [00:31:00] Dave. Thank you so much for having me on. It's been a real pleasure. The pleasure is all mine.
Speaker 2: Thank you for listening. The Friendly Futurist is a Podcasts West production and has been produced and edited by me, Dave Monk.
If you love this show, please leave us a five-star review on your favorite platform; that way you can help the show grow. Until next time, friends, remember to stay curious, and the future is user friendly.
Speaker: Another Podcasts West production.