Tech Overflow

AI Is Already Better Than You Think with Ramez Naam

Hannah Clayton-Langton and Hugh Williams · Season 1, Episode 6


AI did not creep in quietly, it arrived like a tidal wave. We talk with Ramez Naam, computer scientist, science fiction author, futurist, and climate tech investor, to pin down what today’s large language models really are, why they’re the fastest adopted general technology in history, and why “impressive” is not the same thing as artificial general intelligence. Along the way, we challenge the idea that AGI is right around the corner, even as these tools already outperform any single human on breadth of knowledge and rapid synthesis. 

We get practical about capability and risk: where ChatGPT, Claude, and Gemini shine, where they still fail, and why supervision and verification are the new baseline skill for knowledge workers. We also unpack why the AI model race keeps flipping leaders, why user data may not create the kind of network effects people assume, and what recursive self-improvement would need to be real rather than wishful thinking. 

Then we go straight to the biggest near-term shock: coding with AI. Vibe coding and modern developer tools are collapsing the distance between an idea and a working app, which raises hard questions about software engineering careers, junior hiring, and what “good” looks like when you are managing an army of bots. Finally, we zoom out to the energy and infrastructure behind AI, from data centres and grid bottlenecks to the case for solar-and-battery powered compute, including why Australia could be well placed. 

If you’re curious about the future of AI, AI jobs, AI reliability, data centres, and what the next ten years might realistically hold, this conversation will give you a grounded framework. Subscribe, share the episode with someone who debates AI with you, and leave a review with your most surprising takeaway.

Like, Subscribe, and Follow the Tech Overflow Podcast by visiting this link: https://linktr.ee/Techoverflowpodcast

Welcome And Guest Setup

SPEAKER_02

Hello world, and welcome to the Tech Overflow podcast. I'm Hannah Clayton-Langton, and we are the podcast that breaks down technology for curious people. Our curious listeners are probably wondering where Hugh is. Don't worry, guys, Hugh is coming up in just a second with a cracking interview for you today with special guest Ramez Naam. Ramez, aka Mez, is a computer scientist by education. In fact, he and Hugh met when they worked together for about four years at Microsoft; I believe they were working on Bing. These days Mez does all sorts of things: he is a climate tech investor, an award-winning science fiction author, and a futurist. So Mez is incredibly well qualified for our conversation today on the future of AI. He and Hugh get into Mez's take on the AI we're all using today, where he thinks the technology might go in the future, and how that might shape society in years to come. So enjoy this one, guys, and I will see you at the bottom of the episode for a quick recap of the biggest insights.

Where AI Really Stands Today

SPEAKER_01

Mez, it's great to see you. It's been a while, and it's great to have you on the podcast. I'm very, very excited to talk about the future of AI, and perhaps to get a better perspective on the current state of AI. I thought a great place to start would be to talk a little about where AI is today. Everybody's talking about artificial general intelligence, people are talking about frontier models, people are talking about narrow AI. I'd love to hear your perspective on ChatGPT, Claude, Gemini. What are they? How do they fit into this landscape?

What AGI Means In Practice

SPEAKER_00

You and I worked together at Microsoft. I spent 13 years at that company and worked on some machine learning stuff back in the day, when two-layer neural nets were the thing. And I'm also a sci-fi writer, right? So I come at it from that big picture. Look, I think the LLMs, ChatGPT, Claude, Gemini, the Chinese models, et cetera, are incredibly impressive. Very few people saw them coming. I think they're the most incredible tools we have seen invented in some time. I gave a talk earlier this week, and there are a couple of metrics that are interesting. This is the fastest adopted general technology in history. It's going faster than the internet, faster than mobile phones, faster than social media. If you look at just OpenAI and Anthropic, they have the fastest growing revenue of any companies of all time at the beginning of their histories. So this is real. And then you can turn that on its head and say: still, sub-1% of humanity has ever paid to use one of these tools. So casual adoption is soaring, and we're still at the beginning of real diffusion into society. Something a few AI thinkers have said, which I agree with, is that if all AI progress stopped today, if the models just never got any better, we'd still have a decade of integrating these things into society, into business, into healthcare, into science, and getting gains out of them. So I love this stuff. I spend hours a day with it. I also don't think it's AGI, and I think we have a long way to go to AGI, artificial general intelligence, let alone this idea that it's going to take off into a superintelligence that is to us what we are to monkeys.

SPEAKER_01

What is artificial general intelligence, Mez? Maybe you can set some context there.

Why Data Hunger Still Matters

Coding Becomes The Breakout Skill

SPEAKER_00

People have many different ideas of what it is. Sam Altman's answer is very practical: AGI is something that can do 80% of the economic tasks that humans can do, as well as humans can. And he probably means white-collar-type tasks. It's interesting because it depends on how you define what its capabilities are and what it's good at. So here's what LLMs are amazing at. LLMs have seen more knowledge, more data, more text than any human, or any hundred humans, will ever see together. They've actually memorized a whole lot of it. We can quantify how good they are at memorization, and they've memorized a fair bit. They also have this sort of reasoning ability. It's not quite like human reasoning, but they can engage in a feedback-loop dialogue with themselves. And my view on current machine learning technology, basically very deep neural nets and the transformer architecture that got developed several years back, is this: for any problem where you can crisply define success, you got that right, you got that wrong, and for which you have sufficient training data, which might mean millions or billions of examples, an LLM will be able to do it as well as a human. But it also just learns from example, even if you don't have clear right-or-wrong definitions. The models that exist today have learned what an essay is like from reading all the essays published on the internet. So for that, it's amazing. They will have read every chemistry paper, every chemistry textbook, and every Nobel Prize winner's speech. So if you ask them a chemistry question, they're going to nail it like the top chemistry professors on earth would. And at the same time, the same model also knows physics and biology and French literature and Roman history, so it can answer things that no single human can. What we don't know, and haven't seen examples of yet, is them stepping well past that frontier. That's what people talk about with superintelligence: can it leapfrog into things that no human could have done, or even into things that humans can't understand? We don't really see it yet. They are incredibly hungry for data. Here I go to self-driving cars. In the US, a new student driver learns after driving maybe a thousand miles. Waymo has driven about a hundred million miles on real streets, and maybe it took ten million of those miles and a few billion simulated miles for Waymo to reach that skill level. And this is not unusual; you see it in domain after domain. AlphaGo Zero is maybe one of the most impressive pieces of software ever created. It was created by Google DeepMind, uses reinforcement learning, and it is just the best Go player on Earth. It is a superintelligence; it just exceeds all humans. And the thing that made it possible in both those domains was that you could give it infinite training data. It could just keep playing the game, or keep playing itself, and that allowed it to get better. But in a domain where you don't have infinite training data, they struggle. They don't do as well as humans. All of that said, I'll say just one more thing, and then we can come back to it. I think people who just use these as chatbots maybe don't see the number one use. The thing where they're really changing everything really fast, and I'm sure you know this, Hugh, is coding.
The ability of these tools to write software when you tell them what you want is incredible, and that's going to ripple through the economy. Basically, everyone can now create an app just by talking to one of these models in speech, and that's just bonkers. We have not internalized what that means.
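For readers who want to see the shape of the "crisply defined success plus effectively infinite examples" recipe Mez describes, here is a minimal self-play loop in Python. It is a toy illustration of the AlphaGo Zero idea, not DeepMind's actual training code; every function, move name, and number in it is invented:

```python
import random

def play_self_play_game(policy):
    """Toy stand-in for one game of self-play.

    The game rules, not a human labeller, score the outcome, which is why
    training data in domains like Go is effectively unlimited.
    """
    trajectory = [(f"state_{i}", policy(f"state_{i}")) for i in range(10)]
    outcome = random.choice([+1, -1])  # crisply defined success: win or lose
    return trajectory, outcome

def train_step(weights, trajectory, outcome):
    """Toy update: nudge the policy toward moves from winning games."""
    for state, move in trajectory:
        weights[(state, move)] = weights.get((state, move), 0.0) + 0.01 * outcome
    return weights

weights = {}
policy = lambda state: random.choice(["pass", "corner", "center"])  # invented moves
for _ in range(1_000):  # compute, not data collection, is the only limit
    trajectory, outcome = play_self_play_game(policy)
    weights = train_step(weights, trajectory, outcome)
print(f"learned preferences over {len(weights)} state-move pairs")
```

The point of the sketch is the loop itself: when the environment scores every outcome automatically, the system can manufacture as many labelled examples as it has compute for, which is exactly the condition most real-world domains lack.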

SPEAKER_01

No. I want to come back and talk about coding a little later on; I'd love your views on that. Maybe just going back to this idea of intelligence. I very much appreciate the point that it needs a lot more training data than a human does to reach the point it's at today, right? Take your Waymo example: we had Nick Palley on from Waymo in our last season, it was a terrific episode, and he talked about the amount of training Waymo has done relative to a human, and it's arguably better in a lot of ways and worse in some, when it encounters new situations. But I'm interested in the intelligent nature of, say, ChatGPT today. Is it reasonable to say that for most humans who want to interact in a way where they're asking a question, or asking it to do something simple, or perhaps expressing themselves and getting something back, it appears to be intelligent? Is it as intelligent as a typical human? Obviously there are humans who can jump outside the bounds of knowledge, create new knowledge, and ask new questions, but is it reaching a point where it's intelligent?

SPEAKER_00

It's not human intelligence, but as a practical matter, within the bounds of what humanity knows, it's better than any individual human. You can ask it some obscure question and it has encyclopedic knowledge. Imagine you took Google and let it search everything, all the encyclopedias, but then it brought all of it together, not just as pages but as a synthesis: here's all the stuff said in these sources, and here are the key points most relevant to what you just asked. That's kind of what you're getting. And it just blows me away. It does make mistakes; it sometimes makes really dumb mistakes. So it depends on the scenario. For brainstorming, for producing something that's not the final work product, for giving me an image I'm going to use quick and dirty, it's going to be 10 times better than what I could have done in 10 times the time. Awesome, as long as I'm somewhat tolerant of failure or I'm supervising it. If it's a mission-critical situation where no level of failure, or even 0.01% failure, is acceptable, then they're not ready for prime time.

SPEAKER_01

And it feels today, with every release of a ChatGPT model, or improvements over at Anthropic, that the world's slightly less astounded. So it feels like it's asymptoting to something, yet they're spending more and more in each training cycle; they're spending a blockbuster movie's worth of money. As you said earlier, they've vacuumed up all of the data they can get their hands on. So is there a breakthrough coming, Mez, do you think, in this technology, or is it really beginning to asymptote to its capability?

SPEAKER_00

Well, I think there's a constant drumbeat of small breakthroughs, if you will. There is a set of people who say just scaling will get you there: just feeding it more training data and more compute cycles will get you to something astounding. And it does keep getting better, but it gets exponentially harder. You know, we talk about exponentials: the amount of compute you get for a dollar doubles every 18 months, and 10 of those doublings is not 20x, it's a thousand x, right? But the amount of compute you have to use to get linear improvements also keeps doubling. I think some of what's happening is a mismatch between what we're measuring, in the hope of achieving superintelligence, or achieving really high quality in helping us with mathematics or discovering new physics or curing cancer, versus what everyday people do. So I think for most of us, in most chat sessions, it appears to be almost saturated. It does get better, but for simple questions it has kind of already hit the level that is mostly satisfying, except for lowering that error percentage bit by bit. But just in the last six months, I would say, the world's top mathematicians have started to say: this can actually help me do my work, and it's helped me prove something I was working on for a long time. So that's a new frontier. And how much is that worth? It's hard to say. In pure math, it's probably worth a lot to society. But if it's "oh, I came up with a new idea for a pathway for a cancer drug," that's worth a tremendous amount. A lot of the work the labs do in the hardcore training, and in trying to devise better algorithms to address some of those issues, is really not aimed at satisfying your "hey, what's the best Moroccan recipe I could make with the ingredients in the picture I just took," which is amazing that it can do at all. It's not aimed at that, right? It's aimed at: we're going to cure cancer, and we're going to study this physics problem, and we're going to see if we can predict the qualities of new materials before we even make them in the lab, that sort of thing.
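Mez's doubling arithmetic is easy to verify. A quick sketch, treating his 18-month cadence as an assumption rather than a forecast:

```python
# Quick check of the doubling arithmetic: ten doublings is 2**10 = 1024,
# i.e. roughly a thousand x, not 20x. The 18-month cadence is the figure
# quoted above, used here as an assumption rather than a forecast.
doublings = 10
years = doublings * 18 / 12
print(f"{doublings} doublings over {years:.0f} years -> {2 ** doublings}x compute per dollar")
# -> 10 doublings over 15 years -> 1024x compute per dollar
```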

The Race Between AI Labs

SPEAKER_01

You know, it's interesting to compare this to some of the things we did at Microsoft; obviously we were competing with Google when you and I were there. And I personally reached a point where I thought Microsoft would never beat Google, because Google had a head start, which meant it had more data: more queries, more clicks, and therefore it was better at the harder queries than we would ever be, even though we probably had as many resources behind us and a really smart set of people, probably equivalent to the Google people. I kind of reached that point of view. Do you think that's true in this world? Will OpenAI win because of the advantage it has in number of users and its head start, or will there be more than one winner in this space?

SPEAKER_00

I think this is a multipolar world. I think there are multiple winners. You can make examples either way. Meta, for some reason, has still just not done anything, even after spending billions of dollars to hire people. And Grok is struggling a bit; it's fallen behind, and we don't know exactly why. But look, OpenAI and Anthropic and Google are leapfrogging each other, not even by huge bounds, on an almost weekly basis, certainly monthly. I sometimes show a graph: at the beginning of all this, just three years ago, OpenAI was the undisputed leader for 15 months. Everybody caught up. Then they opened up another lead that they held for five months, and now the head of the leaderboard changes every few weeks at most. So Claude Opus 4.6 is amazing, Gemini 3.1 is amazing, and ChatGPT 5.4 thinking might actually be the best model on Earth right now. But it came out last week, so we don't know what will be the best model two or three weeks from now. ChatGPT is the undisputed leader in customer share when people just type in the name of an LLM. Google Gemini has the benefit of Google Search, a great team, and their own hardware, which saves them a lot of money versus buying NVIDIA. And Anthropic is the leader in coding and in enterprise usage broadly. My developer friends tell me that ChatGPT 5.4 has caught up to Anthropic's Opus 4.6, but there's still some stickiness. People are not going to switch within days unless it's a quantum leap.

SPEAKER_01

Do you think OpenAI hasn't maintained its lead because it hasn't really taken advantage of the data advantage it has? Or do you think it's really algorithmic breakthroughs in the case of Anthropic? Or is it that Google's able to use their search asset in some interesting way? Why do you think these leads keep changing?

SPEAKER_00

My best guess is that no one has actually managed to make that user data very valuable. Nobody has a network effect. It's not clear that the user data is that big a deal. So for the most part, all the labs are training off the same data, at least for pre-training: the same scrape of the internet, the same books, the same Wikipedia, the same scientific papers. And they mostly know each other's tricks. The flow of researchers back and forth is fierce. They tweet enough and go on podcasts enough that you can get a sense of what they're doing. Everybody knows everybody. Everybody socializes in San Francisco. Obviously there are some secrets, for sure, but I think those secrets are not very long-lived.

Recursive Self-Improvement Claims Tested

SPEAKER_01

You spoke a little earlier about the dialogues we have with things like ChatGPT. Obviously you're going back and forth, and occasionally you do say thanks, or occasionally you say, hey, you got that wrong. Probably very, very occasionally, sub 0.1%, somebody presses thumbs up or thumbs down; we both know that people don't use advanced features like that. So there are these, I guess, very subtle signals. But do you think there are breakthroughs that might happen in analyzing those user interactions, that might allow them to use that data more effectively, and for somebody like OpenAI to capitalize on it?

SPEAKER_00

What people at these companies talk about now is a different kind of hoped-for runaway feedback, which I'm somewhat skeptical of. They talk about the AI designing the next version of the AI, and that feedback loop being a path to RSI, recursive self-improvement. And that is the way you get to AGI. Once you've got AGI, you've got something like a human-level researcher that knows everything about computer science, that has read every AI textbook and paper and so on. And because it's just a piece of software, I can spin up 10,000 of those in my data center and have them all work on making the next version of my AI. And there's a theory that that just feeds on itself, and it keeps getting better and better and takes off. I don't buy that, for a variety of reasons, but I just want to be clear: that's the zeitgeist. That's what Dario is saying. That's what people inside Anthropic are saying. Sam is saying it a little more softly, but people inside OpenAI are saying it. I think it's probably dramatically overstated, but I'm the minority opinion, and the people in the labs are much more bullish on this than I am.

SPEAKER_01

I think I'm where you are, Mez. It feels to me like what we actually need is a fundamental algorithmic breakthrough, or there'll be an adjacent field that takes advantage of the learning in this field and really accelerates us forward. They've got all of human knowledge, and they're perhaps not able to effectively use all the interactions they're getting, given that OpenAI doesn't have a lead. So it feels to me like the step forward might be a step back and sideways, perhaps.

SPEAKER_00

We are really about to run out of existing internet data for training, right? We keep increasing the resources allocated, whether it's compute or the number of software engineers or researchers and so on. So I think we'll keep finding new stuff. And one of the things that gets less money, but I think is really important, is that we are seeing startups being founded and academics working on stuff that is not just LLMs. I think that's actually really interesting. There's a startup that's trying to use mathematical proofs to increase LLM reliability. Yann LeCun, who was the chief scientist at Meta, is a skeptic of LLMs, and he just raised a billion dollars for his startup in France. There's so much economic return now, since OpenAI and Anthropic, just those two together, are on something like a $45 billion a year revenue run rate that has gone up by a factor of five in the last year or so, that everybody sees dollar signs, and a billion-dollar investment just looks like a bet. Let's just try it: Yann LeCun is famous, let's give him a billion dollars. Mira Murati, who was the CTO of OpenAI, I think she's raised two billion. Ilya Sutskever, who was chief scientist at OpenAI, he raised, I mean, these two, Ilya and Mira, each raised something like one to two billion with nothing. With just: hey, I'm Ilya, hey, I'm Mira, here are the friends who have said they're going to join, and we've got some ideas. Give us the money. People are willing to put money behind outlier ideas, and I think that's actually really good for the pace of innovation.

SPEAKER_01

So something happened with coding in, I don't know whether it was November or December, somewhere around there. If I went back a year, I was pretty impressed by Claude Code, and I'd use it, but then I'd spend a lot of time actually writing code, fixing code, cleaning up code. And then something happened in November or December where I started to wonder whether I'd ever code again. And certainly, getting from an idea to something that is functional, at scale, deployed on Amazon, and working reasonably well with all the pipelines running can be done with vibe coding now. I guess there are two things I'd love to talk to you about, Mez. What do you think that means for our industry, for software engineers and technology? What's going to happen in that world? And do you think this is an early warning sign of what will happen in other industries? If you're a lawyer, an accountant, a financial analyst, is this what's going to happen to you as well?

SPEAKER_00

Yes, it will in most of those places. I don't think this is necessarily going to cause job loss for a lot of software engineers, though it might. I just think what you said is absolutely right, and it's amazing. And there's a spectrum. If you are someone who has never written a piece of code in your life and you have an idea for an app, you can get Replit for 20 bucks a month. So I did one. I wanted a map of the US. I was debating with somebody in the energy field about battery needs in different locations. I wanted an app, just a map: I click on the map, and I want the following stats about that location, how much sunshine it gets in different months of the year, and how much battery duration I would need in the cloudiest month, right? I gave Replit four sentences, and within an hour, well, it spent 20 minutes thinking, and then it gave me some visuals of what the app would look like. I'm like, great, you did great, better than I would have. And then it said, okay, should I build it? I'm like, yeah. Twenty minutes later it had the app working. And it's like, do you want me to publish this to the web? I didn't, but in another five minutes I could have had it on the web with a URL I could send you. So at one end of the spectrum, people who've never written code in their life can just do things. The distance between having an idea and making it real has shrunk to almost nothing, and nobody knows what that means; as a first-order approximation, no one has ever done this before. But at least a billion people on Earth, I would say, are able to express an app that they would like to have made, probably more than that. And any of them could. So that's amazing. And at the other end, professional developers using Claude Code or Codex or Cursor can just do their work. Now, I think it's interesting that in most of these cases, we don't see that it can really do the work without a human. It still needs a human to tell it what you want, and it's still not totally perfect and needs a little bit of oversight. So I think software is a special thing, because software is such a general-purpose tool, more than being a financial analyst or even a lawyer, that the amount of latent demand that exists for applications, for software, for websites, whatever you want to call it, might be so high that this just increases the amount of software that gets written without losing any jobs. It does mean that every designer, every program manager, every product manager, every small business owner is now a developer if they want to be.
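The core calculation behind an app like the one Mez describes fits in a few lines, which is part of why a four-sentence prompt can specify it. This is a hypothetical sketch; the capacity factors, system sizes, and sizing rule are all invented for illustration:

```python
# Hypothetical sketch of the battery-sizing logic behind an app like the one
# described above. All figures (capacity factors, system sizes, sizing rule)
# are invented for illustration; a real app would fetch per-location weather
# data for the point clicked on the map.
monthly_capacity_factor = {   # fraction of nameplate solar output, invented
    "Jun": 0.28, "Jul": 0.27, "Dec": 0.11, "Jan": 0.12,
}
load_kw = 100.0               # constant load to serve
solar_kw = 400.0              # installed solar capacity

cloudiest = min(monthly_capacity_factor, key=monthly_capacity_factor.get)
avg_solar_kw = solar_kw * monthly_capacity_factor[cloudiest]
shortfall_kw = max(load_kw - avg_solar_kw, 0.0)
# Crude rule of thumb: size the battery to carry the average shortfall for a
# full day in the cloudiest month. Real sizing needs hour-by-hour data.
battery_hours = 24 * shortfall_kw / load_kw
print(f"Cloudiest month: {cloudiest}, battery needed ~ {battery_hours:.1f} hours of load")
```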

SPEAKER_01

I mean, it's a very optimistic view, right? There's so much work out there that folks want done in software; there's always been more demand for software and features than there have ever been software engineers. So I guess your optimistic view is that this is just an unlock: now all software engineers can be incredibly productive, other people can be software engineers, and in fact this is a growth opportunity. Do you think that's also true in, say, law, in accounting, in the financial world, or do you think there'll be job loss there?

SPEAKER_00

I think there'll be job loss there. I also think it will increase access. I'm going to use therapy as maybe not the best example. I have a therapist. I love her, I pay her a lot. And the bulk of people on earth really can't afford a therapist. I don't know how many of us need accountants or financial analysts, but a lot of people don't have access to the law. A cousin of mine, 17 or 18 years old, got a speeding ticket. She and her parents didn't want it on her record with points. So you hire a lawyer, and they know the forms to fill, and the court will knock it down and not count it as a point as long as you don't get another. But there's some number of forms. I'm not saying this 18-year-old should have represented herself in court, but there was a guy who built an app that let you just challenge speeding tickets. And there are a million things like that. So in one respect it's going to democratize access to legal services for people who never had them. And then there's something people are worried is happening in software, and it might be; I'd say the data is somewhat suggestive, somewhat ambiguous, but it might be happening, which is that junior people become less necessary. You know, we have this image of lawyers as mostly in the courtroom litigating: dramatic questioning of witnesses and impassioned speeches to the jury. Very few lawyers are actually litigators at all. In most contract law and commercial law, even lawsuits, there's a senior attorney on the case, and then there's an army of associates and paralegals doing things like discovery, poring over hundreds or thousands or tens of thousands, sometimes millions, of documents to look for stuff that might be usable in the case. And you probably don't want that ever to be done entirely by AI, but the few associates you have left are going to do it by guiding AI.

SPEAKER_01

I spoke to a friend at a well-known US tech company recently, and I spoke about this in one of the other episodes this season. She said they've stopped hiring mid-level people. They love senior people, because senior people can architect large systems and think about the broader estate, which echoes some of your sentiments, I think. And they like junior people, because junior people come in capable of using all of these tools in very effective ways. But the mid-level people are slow to adopt the tools and don't have the seniority to drive the overall picture. Do you think that's possible in some of these other areas too?

SPEAKER_00

I mean, there are a couple of papers that have tried to look at what's happening in software, and they haven't seen that. Erik Brynjolfsson is an economist at Stanford; he and Andrew McAfee wrote one of the best books on AI and the economy a few years back. And he and collaborators have done a paper that says what they see is that early-career people in the white-collar work most exposed to AI are getting hired less. Not a ton of firings or so on, but hired less. They don't see that at the mid-level or senior level, but who knows? It probably varies company to company and sector to sector. But it could happen. I think you do want people who are flexible and adventurous. The tools are changing so fast, and most of us, most of the time, are using the tools at less than their full capability, while at the same time having to pay attention to catch the errors. So I think you want people with a certain set of traits: kind of adventurous and also kind of skeptical, with the management skills for managing an army of bots. Not everyone fits that bill. There might be senior people who don't fit that bill, as well as mid-level and junior people who don't. So I do think it might change the set of traits that make you successful.

SPEAKER_01

And what advice are you giving to young people coming out of college today, Mez?

SPEAKER_00

I just spoke to a room full of teens. I tried to get them all to open up their phones and download Replit. I was semi-successful. And my advice was: look, there used to be a barrier between having an idea and making the idea real. That barrier is near zero now. So, and I actually think a CS degree is still useful, I think, I don't really know, but my number one piece of feedback is: have agency. When you have an idea, give it a shot. Just give it a go. Spend five minutes, ten minutes, and see if you can make the thing real. You'll learn along the way what these tools are good at and what they're not good at. Have curiosity, keep learning. Don't obsess. I saw a funny tweet the other day: if you're not spending four hours a day reading at least 10 AI newsletters, you're already obsolete. And the tweet was making fun of that attitude. The tweet was saying, don't sweat it, it's cool, stuff is changing, but if you think this way, you will drive yourself nuts and you will be unhappy. So it's okay to enjoy life. The race is mostly with ourselves. Be curious and have fun with it, and you'll learn stuff and figure out what actually works. And then learn to be skeptically minded. I'll give you the story I told them. I've spent my last 15 years in energy; I'm a global clean energy expert. So with the thing in Iran, I had a very basic, simple insight: countries that drive more electric cars are less vulnerable to oil price shocks. So I'm like, oh great. I think Gemini makes the best graphics, so I go to Gemini: here's this thing, make me a chart of countries by percent of cars on the road that are fully electric. And it did. I'm like, okay, tweak the colors, and so on. It gave me a graph. I tweeted the graph. The graph was great, got hundreds of retweets, and I'm happy, like, ooh, look at me, I'm an influencer. Then somebody from Denmark, about eight hours later, replies and says, I don't think that number is accurate for Denmark. And I look, and I'm like, you're absolutely right. Oh, sorry. So I go back to Gemini: Gemini, there's an error here. I think this is how we made the error. Please go through and check these numbers in the following way, and so on. And Gemini says, you're absolutely right, as they all say when they're caught in an error. And Gemini gives me a new graph. Again, I eyeball it, and I'm like, that all looks plausible to me. So I tweet it out: hey, here's a correction. Another eight or twelve hours go by, and somebody from Germany replies and says, I do not think this is right. And sure enough, the second time around it had an error too. So this is my domain, and it got sloppy and lazy. You have to be skeptically minded. And the more important it is, the more you've got to be willing to poke at this stuff and see if it's actually correct.
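Mez's correction loop generalizes into a simple habit: never ship a model-generated number without checking it against a source you trust. A minimal sketch of that habit in code; the figures and tolerance below are placeholders, not real EV statistics:

```python
# Hypothetical spot-check of model-generated figures against a source you
# trust. Both dictionaries are placeholders, not real EV statistics.
llm_generated = {"Norway": 29.0, "Denmark": 14.0, "Germany": 9.0}
trusted_source = {"Norway": 28.5, "Denmark": 8.0, "Germany": 3.5}

TOLERANCE = 0.10  # flag anything more than 10% off the trusted value

for country, claimed in llm_generated.items():
    actual = trusted_source.get(country)
    if actual is None:
        print(f"{country}: no trusted value, verify by hand")
    elif abs(claimed - actual) / actual > TOLERANCE:
        print(f"{country}: claimed {claimed}% vs source {actual}% -> send back for correction")
    else:
        print(f"{country}: within tolerance")
```

The design point is that the trusted values come from outside the model; asking the same model to re-check itself, as the anecdote shows, can simply produce a second plausible-looking error.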

SPEAKER_01

So a smart human, a problem-solving human, a curious human is, optimistically, always going to be valuable then, Mez.

Data Centres And The Energy Question

SPEAKER_00

For a while. I will say my time horizon has shrunk in terms of what I'm willing to make predictions about. But for the foreseeable future, I think.

SPEAKER_01

And I want to ask you a little about energy. All of these companies are building data centres all over the place, and I guess there are folks concerned about that. Those data centres need a lot of power and a lot of cooling. I know you're very optimistic about that; maybe you could tell us a little about why you don't think it's such a problem.

SPEAKER_00

I do think it's something of a problem. It is clear in the US, where the bulk of these data centres are being built, that natural gas is the preferred power source, and it's really natural gas off the grid. The interesting thing about these data centres is that the cost of electricity is just not a big deal to them. The chips are so expensive that the electricity is like five to ten percent of the total cost. What is important is time to power, because they all feel they're in a frantic race. If your one-gigawatt data centre costs you fifty billion dollars, can bring in ten or fifteen billion dollars of revenue a year, and the electricity costs less than a billion a year, you're like, I'm losing money by not having a power hookup. So yeah, I'll pay extra to have some natural gas turbines installed at the data centre itself. And that's not my ideal thing. It's really because, in the US, we've stopped knowing how to build grid stuff fast. It's actually the poles and wires and the substations, not the generation, that are the limiting factor on how fast you can get a data centre online. Now, I do think there are huge opportunities, and I presume some of your listeners or viewers are in Australia. If you do the math, a totally off-grid solar-and-battery-powered data centre can get you baseload power that is very similar in cost, and faster; you can do it in a year, a year and a half. In the US, it's actually a bit challenging to put together parcels of land that big, and we can't use federal land because of who's in the White House right now. I think this is actually an amazing opportunity for Australia. The AI companies are nervous about putting data centres outside of the US. OpenAI is doing it in the UAE, because they want a strong rule of law protecting both the model weights, the IP, and the customer data. Australia seems like it could offer that. So I think, you know, Australia, anywhere west of the coast, I mean, not Melbourne, Hugh, sorry to say, but a hundred clicks north and west of Melbourne, I think you can do a lot of AI data centres there. It could be a mecca for that, to be honest. So I think that's a very interesting way to go.
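The economics here are easy to sanity-check with Mez's own rough figures; the electricity price below is an assumed placeholder, not a number from the conversation:

```python
# Sanity check of the rough figures from the conversation: a 1 GW AI data
# centre, ~$50B capital cost, $10-15B/yr revenue. The electricity price is
# an assumed placeholder, not a quoted number.
capex_usd = 50e9
revenue_per_year = 12.5e9      # midpoint of the $10-15B range
power_mw = 1_000
hours_per_year = 8_760
price_per_mwh = 80.0           # assumed all-in price, hypothetical

electricity_cost = power_mw * hours_per_year * price_per_mwh
print(f"Electricity: ${electricity_cost / 1e9:.2f}B/yr")   # ~ $0.70B, under $1B
print(f"Revenue lost per month of delay: ${revenue_per_year / 12 / 1e9:.2f}B")
```

At these numbers a single month of waiting for a grid hookup costs more than a whole year of electricity, which is exactly why time to power, not price of power, drives the siting decisions Mez describes.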

SPEAKER_01

Oh, that's interesting. I've been somewhat critical of the Australian government, saying, look, we should be investing more in creating the next OpenAI or the next Anthropic, and using our intellectual horsepower to create billion-dollar companies that compete on the world stage, but they seem to be very focused on data centres. Perhaps now I understand why.

SPEAKER_00

And, you know, why not both?

Ten-Year Forecast Without Sci-Fi

SPEAKER_01

Yeah, indeed. Indeed. Hey, so Mez, we're almost at time. I wanted to ask you, as a futurist, I know you've said your time horizons have come in for your predictions, but I'd love to spitball with you. Where do you think we'll be with AI in 10 years?

SPEAKER_00

I think this will still be a civilization run by humans, first and foremost. I think AI will have gotten better, and, you know, we talk about centaurs: human plus AI doing things. I think AIs working with humans in coding, in engineering, in manufacturing, in physics and chemistry and medicine will be pushing the frontiers of this stuff forward rapidly. I think some of those problems are really hard. I talked about cancer drugs; that's actually an incredibly hard field, one of those fields, again, where the complexity of the problem rises exponentially as you try to make progress. Dario Amodei has said that in 10 years AI will have doubled the human lifespan. He's just flat-out wrong. I don't like to say the word impossible; it's not physically impossible, but it's utterly improbable. But I think we'll have a lot of drugs and other therapies in the approval pipeline, and some approved, that came from AI. It might reverse the productivity slump that pharma is in. I think it'll be helping us design better materials. I think it could revolutionize education. Educators are terrified of AI; they say AI is totally breaking the model, but anybody who uses it knows it's the best way to educate yourself. So I think AI could totally revolutionize it. So we'll have all of that going on. I think we will see some job loss and some job creation, and we'll be figuring out the best way to manage the pace of change happening there. And I think there will be some things that go badly, in the economy, in labor. I think there'll be accidents with AI too, where AI causes loss of life; that might have happened in Iran just now. I don't think it's an existential threat to humanity. My long view of society is that almost every technology ever invented has been used to make humanity better and has caused problems. Everything from literacy, mostly very good, but the occasional problem. Trains took soldiers to the front lines. The same discovery that led to the nitrogen fertilizer that feeds about half the planet also led to the explosives used in World War One. Airplanes move us around and let us connect with people, and they're used in war. But overall, most inventions have made the world better for humanity, and the sum total of inventions has made the world a lot better, not without some side effects that we're working on. That's how I view AI. There's a paper saying AI is a normal technology, and that's how I see it. It might be the most impressive normal technology ever, but it is a normal technology. It will cause its problems and challenges, it will surprise us, and some of the things I've just said will be wrong. But mostly, I think, we'll be figuring out how to use it to make our lives, and the lives of most of humanity, better.

SPEAKER_01

That's a wonderful place to wrap up the podcast, I reckon, Mez. It's a very optimistic view of AI that you've shared with our listeners. Look, thank you for your time.

SPEAKER_00

Such a pleasure, Hugh.

SPEAKER_01

Very much appreciated, and I know that all of our listeners are gonna love this episode.

SPEAKER_00

Great to see you, and take care.

Recap And How To Support

SPEAKER_02

All right, guys. Well, we hope you enjoyed the episode. I always love when we get into the topic of AI; it gets all the cogs whirring in my brain. I loved Mez's optimism around the impact of AI; I found it super refreshing. He talks about the democratization of access to information, which to me calls back a little to the advent of the internet, albeit now it's happening in a much more specialized way. And whilst even Mez admits he's not sure where AI is going to take us, he's certain it's going to change the state of humanity for the better, and that's a viewpoint I can definitely get behind. So that's it for this week's episode of Tech Overflow. If you've enjoyed the podcast today, please do share it with your friends, family, colleagues, anyone you think might be curious about how tech works. You can always find us on socials, including LinkedIn, TikTok, X, Instagram, YouTube, and of course techoverflowpodcast.com. See you next week for another episode of the Tech Overflow Podcast.