Tech Overflow
We're Tech Overflow, the podcast that explains tech to smart people. Hosted by Hannah Clayton-Langton and Hugh Williams.
AI, Without The Hype: Part #1
What happens when a search engine is driven by a text file of hand-written rules? You get a Jaguar car ranking first for an iPod query on eBay, and you get the perfect setup for a practical tour of how AI actually creates value. We unpack the journey from brittle if-then logic to machine learning that learns relevance from real outcomes.
In this episode, we break down AI, machine learning, and large language models (LLMs) in clear terms, showing how they fit together and where they differ. We discuss in depth the first wave of AI systems that solve specific problems, from detecting credit card fraud to ranking search results to recommending the movies we watch. These AI systems are everywhere -- almost everything you use at Meta, Google, Microsoft, Amazon, and Apple has machine learning and AI at its core.
This episode also discusses the fundamental truth that great AI starts with fresh, comprehensive, clean data and a well-defined target. We explain why most companies still aren't getting this right, and why no AI system can be effective when it's fed garbage data.
The most surprising lesson might be the most useful: sometimes the right answer is not to use AI. Hear the story of a 99%‑accurate model that surfaced a company’s fax number as customer support, and why a small human team delivered safer, cheaper, 100%‑correct results. We also explore why LLMs feel like a revolution—real breakthroughs plus a brilliant, accessible UX—and how that shift is changing how people find information.
If you care about building reliable AI products, avoiding unforced errors, and making smarter trade-offs, this conversation will sharpen your instincts. Listen, share with a colleague who loves a good data debate, and subscribe so you don’t miss part two, where we dive into deep learning, transformers, and what’s coming next.
Like, Subscribe, and Follow the Tech Overflow Podcast by visiting this link: https://linktr.ee/Techoverflowpodcast
Hannah Clayton-Langton:Hello world and welcome to the Tech Overflow podcast. As always, I'm Hannah Clayton-Langton.
Hugh Williams:And I'm Hugh Williams.
Hannah Clayton-Langton:And we're the podcast that explains technical concepts to smart people. Speaking of smart people, Hugh, how's it going?
Hugh Williams:Uh, not sure about smart, Hannah, but I am going really, really well. I'm actually about to go jump on a plane to head to the US, but I think you're probably leaving the US.
Hannah Clayton-Langton:Yeah. There's some sort of missed opportunity there, because just as you arrive, I'm going, and then we'll be back to Australia UK time zones, which is probably the biggest challenge of the podcast that I didn't see coming.
Hugh Williams:Yeah, me too. But I'm looking forward to getting together with you in October and actually recording a couple of episodes again in a podcast studio. That'll be awesome.
Hannah Clayton-Langton:Exactly. And if I play my cards right, maybe I'll get to come down to Australia to do the same in our UK winter.
Hugh Williams:Yeah, series two, maybe, if we hit our OKRs.
Hannah Clayton-Langton:That's right, listeners. Please like, subscribe, and share. Share with your group chats because uh if we get enough downloads, then I get to go to Australia.
Hugh Williams:So, what are we talking about today, Hannah?
Hannah Clayton-Langton:So today is a big one. Today we're going to be talking about the topic of AI, artificial intelligence. And it's such a big topic, very technically complex and very culturally significant. I think it's fair to say that we're already planning on this being a two-part episode. So today will be part one of two on AI.
Hugh Williams:Awesome. Why don't we get started?
Hannah Clayton-Langton:Let's do it. So, as I just mentioned, this is quite a technically complex topic. And I think the best way to get into those topics is by telling some stories or giving some examples. So, why don't you walk us through an example of AI in action just to sort of situate us in the topic?
Hugh Williams:You're probably not going to believe this, Hannah, but when I first joined eBay back in 2009, the search engine was actually run by the business team. And what I mean by that is that if a buyer came onto eBay and searched for something, there was actually a text file maintained by the business team that controlled what search did. And so the whole of the search ranking, if you like, was driven off a text file that was managed by business people, which is quite something. Not exactly how I was used to working at Microsoft when I worked on search there, and certainly not how companies like Google went about it. So, you know, a really, really old school way of running search. And you might ask, well, what was in the text file? It was things like: if this item has free shipping, then boost its ranking by 20%. So just human-written rules.
Hannah Clayton-Langton:So was it like an Excel file that was applying those rules as formulas, or how was it pulling it?
Hugh Williams:Uh, it was a text file. So it looked a little bit like code, if you like. It said if this, then that. The ifs were things like if free shipping, and the thens were things like multiply the score of the item by 120%. Things like that. Okay. So really, really basic stuff. Actually, I remember walking into the office of my boss, Mark Carges, who used to be the CTO of eBay.
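To make the if-this-then-that idea concrete, here's a minimal Python sketch of rules like the ones Hugh quotes. The listing fields, prices, and boost values are all invented for illustration; the real eBay file and its rules aren't public. Note how it reproduces the Jaguar failure from the story: a keyword match plus a price-driven score puts the car on top.

```python
# Hypothetical hand-written ranking rules of the kind Hugh describes.
def matches(query: str, item: dict) -> bool:
    """Keyword matching: every query word must appear in the listing text."""
    text = (item["title"] + " " + item["description"]).lower()
    return all(word in text for word in query.lower().split())

def score(item: dict) -> float:
    s = item["price"]          # "if it's expensive, put it at the top"
    if item["free_shipping"]:
        s *= 1.20              # "if free shipping, boost its ranking by 20%"
    return s

listings = [
    {"title": "Apple iPod nano 8GB", "description": "brand new, sealed",
     "price": 149.0, "free_shipping": True},
    {"title": "Jaguar XJ", "description": "iPod adapter in the glove box",
     "price": 12000.0, "free_shipping": False},
]

results = sorted((i for i in listings if matches("ipod", i)),
                 key=score, reverse=True)
print(results[0]["title"])   # the Jaguar outranks the iPod -- the bug in the story
```

No amount of tweaking the boost percentages fixes this: the rule set simply has no notion of what the buyer actually wanted.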
Hannah Clayton-Langton:Shout out to Mark if you're listening. Hopefully you are.
Hugh Williams:And Mark had just searched for an iPod, so they were a thing in 2009. Uh, he searched for an iPod, and the first result was a Jaguar car.
Hannah Clayton-Langton:Uh-oh.
Hugh Williams:Yeah, not good. And the reason it was there is because of two things. First, the description said that it had an iPod adapter in the glove box. And second, it was expensive. And so this simple little text file had said, well, if it's got the keywords, then it's a match. And then if it's expensive, put it at the top.
Hannah Clayton-Langton:So this tells me a few things I didn't know. One being that people were selling Jaguar cars on eBay in 2009, but that is not the topic of this episode. But so basically, you're saying it was pretty basic and it wasn't really working as it should have.
Hugh Williams:No, and of course, you know, if you think about a marketplace, right? I mean, eBay's, you know, the original marketplace, really. If you think about a marketplace, it's all about connecting buyers to sellers. And so search is incredibly important in a marketplace because that's how buyers go about finding things that they that they want to buy. And you know, search was just basically broken.
Hannah Clayton-Langton:Okay, so how does AI fit into the story? Because I have a feeling it's going to present us with a better answer.
Hugh Williams:Yeah, absolutely. So the first thing I did when I got to eBay was hire this wonderful guy, Mike Matheson, who's still a friend of mine. He's up at Amazon these days. And Mike started what we called the search science team. And the search science team did what today we call AI, but back then we called machine learning. And we'll unpack these terms a little bit later on. But basically, what Mike and his team did was replace this text file that was managed by the business team with an algorithm. And the algorithm was something that was generated off very, very large amounts of data. So imagine you have lots and lots of examples of buyers successfully buying things on eBay, and you have lots of examples of buyers failing to get what they want. You can use this thing called machine learning to basically learn a function, the maths, if you like, a better version of this text file that combines all of the information you might need to do a better job of ranking items in response to buyers' queries. So this whole thing was learnt, if you like, off huge amounts of data. And basically we ended up with a piece of AI, if you like, that replaced the text file. And it doesn't matter how smart you are as a human, you can't write down a set of rules in a text file that cover all the possible cases and build a search engine that does a great job. It's just not possible to consider everything that you could consider, right? So if you're going to build a great search ranking function, you probably need to think about the buyer who's buying things, the seller who's selling things, the description, the user's behavior through the site, the images that are there.
There's lots and lots of things you could think about to return a great result. And as humans, we can't keep that all in our brains at once and write down a function, if you like, that works for all occasions. There's no way smart humans can write down in a text file a set of rules that perfectly drive the search of eBay.
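The "learn a function from outcomes" idea Hugh describes can be sketched in a few lines. This is a toy illustration, not eBay's actual system: the feature names, example data, and the choice of logistic regression trained by plain gradient descent are all assumptions made for the sketch. The point is that the weights are learnt from purchase outcomes rather than hand-written.

```python
import math

# Each example: features [title_match, free_shipping, seller_rating]
# and a label (1 = the buyer bought the item, 0 = they didn't). Invented data.
data = [
    ([1.0, 1.0, 0.9], 1),
    ([1.0, 0.0, 0.8], 1),
    ([0.9, 1.0, 0.7], 1),
    ([0.2, 1.0, 0.4], 0),   # free shipping, but a weak match: not bought
    ([0.1, 0.0, 0.9], 0),
    ([0.3, 0.0, 0.2], 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained with plain gradient descent: the "learned
# function" that replaces the hand-written text file.
weights = [0.0, 0.0, 0.0]
learning_rate = 0.5
for _ in range(2000):
    for features, label in data:
        predicted = sigmoid(sum(w * x for w, x in zip(weights, features)))
        for i in range(len(weights)):
            weights[i] += learning_rate * (label - predicted) * features[i]

def rank_score(features):
    """Score an item for ranking using the learnt weights."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)))

# The learnt function now values a strong title match over a hand-picked
# shipping boost -- something no fixed if-then rule encoded.
print(rank_score([1.0, 0.0, 0.8]) > rank_score([0.2, 1.0, 0.4]))
```

A real ranking model uses vastly more features and data, but the shape is the same: outcomes in, scoring function out.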
Hannah Clayton-Langton:Okay. I have so many questions. I want to know a bit more about what AI is. We've touched on it. And then I think there's something around the use cases of AI in the tech companies that users will be using all the time. Because I think people think about AI basically as ChatGPT, and I don't think people actually realize that if they're searching on a retailer online, they're actually using AI. So you've been in technology for quite a long time. When did AI land on the scene, and how have you seen it evolve during that time?
Hugh Williams:Yeah, so I guess the first place to start is AI stands for artificial intelligence, right? And if you just pause and think about that for a second, what that's about is really building machines that are intelligent in the sense that we understand it. So they display intelligence, but in an artificial way. It's not human intelligence, it's machine intelligence. So that's the broad field. It's been around since the 1950s, you might be surprised to know. So AI was conceived as an idea way back in the 1950s. I was forced, when I was at university in the late 1980s, to do an AI course in the third year of my undergraduate degree. And I think at that point in time it was seen as a field that was a little bit pie in the sky and was really going nowhere. But if you jump forward to the 2000s, the field started to really take off. And I think it's really come into the public consciousness over the last few years, as you say, with the emergence of large language models, LLMs, and the advent of ChatGPT. So it's now something we're all talking about, but it dates back to the 1950s.
Hannah Clayton-Langton:Okay, and artificial intelligence is a fairly broad term. So is it right that LLMs, so that's large language models, and machine learning, so ML, are like subsets of artificial intelligence rather than separate concepts?
Hugh Williams:Yeah, yeah, that's right. So artificial intelligence, if you think about the broad field, includes a lot of things. And machine learning, which we'll unpack, I'm sure, in a second, is certainly part of that. And large language models, if you like, are a part of machine learning. So the big field is artificial intelligence, machine learning is part of it, and large language models are part of machine learning. But there are also other things that constitute artificial intelligence. So if we were trying to build a human-like system, you'd probably think of more than what we're thinking of today as ChatGPT. You'd start to think about things like robotics, right? You'd say, oh, that would clearly need to be part of AI if we were going to build something that was truly a human-like piece of intelligence. There are other things like search algorithms, things called expert systems, a field called symbolic reasoning. These are all parts of the broad field of artificial intelligence. But in practice today, what's artificial intelligence? Well, it's probably 70% machine learning, which includes large language models, and then these other fields sit off to the side.
Hannah Clayton-Langton:Yeah, exactly. And, not a topic we'll look to get into on this podcast, but the ethics of AI are a hot topic at the minute. And again, that's largely talking about LLMs. We're all using AI, I think, on a much more micro level when we search for something on Google or on our phones. There are a lot more micro use cases, and when people say they're opposed to AI, or have ethical challenges with how AI currently works, they really mean LLMs. And I think you just gave a useful stat: most of the field of AI in this day and age is machine learning, including LLMs, but there are these additional, possibly more esoteric, use cases that exist alongside that are absolutely artificial intelligence.
Hugh Williams:Absolutely. I should say, too, it's sounding like I'm talking down LLMs and saying, well, they're not really AI. I do think there's been some massive breakthroughs, and I guess we'll talk about more of those in our second episode. But LLMs really do exhibit some of the characteristics of this idea of artificial intelligence, right? So with ChatGPT and its brethren, we're able to engage in really coherent conversations, answer questions, and talk about a diverse range of topics. You can see reasoning and logic and inference coming from these tools, which combine information to synthesize new information. So you can ask a question that pulls together two broad fields and synthesizes an answer, and they can do lots and lots of different tasks, even tasks they weren't designed for. So we're certainly seeing some aspects of what you'd think of as artificial intelligence coming from these LLMs, but they aren't the complete story of artificial intelligence.
Hannah Clayton-Langton:Okay, so maybe a very different example, a Tesla, like a self-driving car, is that using those additional classical examples of AI that you mentioned earlier?
Hugh Williams:Yeah, that's a good example. So certainly when your Tesla today is in its full self-driving mode out on a highway or a freeway, there's a lot going on there, one part of which is LLM-like technology. So the Tesla car has about eight cameras. Those cameras are bringing pixels back to your Tesla. Those pixels are being understood by an LLM-like model, and that's making decisions about how to drive the car. But at the moment, when you're out on the highway, if the car needs to brake, then the actual braking logic is not driven by the LLM. So let's imagine the LLM says, oh, you're going to crash into a tree. That then triggers some symbolic reasoning, another part of AI. And this symbolic reasoning basically says, hey, if you're traveling at a certain speed and the distance to the tree is this distance, then you should apply this amount of braking. And that, if you like, is really a hand-written set of rules written by humans. So, you know, you're traveling at 100 kilometers an hour, 60 miles an hour, whatever you want to call it, and the tree is now 100 yards away, 100 meters away, whatever it is: you should apply maximum braking force. So your Tesla today, when it's out on the freeway, is a combination, if you like, of what we're thinking of as modern artificial intelligence and some hand-written rules. A nice combination of both of those things. I think Tesla, though, behind the scenes is heading towards the whole thing being driven by a large language model. So I think they might be trying this when you're not on the highway: the whole thing is kind of hands-off, if you like, in that the system's making the decisions about when to brake and how hard to brake, and there are no rules. So it's all driven by large language models, if you like.
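A hand-written braking rule of the "if this speed and this distance, then brake this hard" kind Hugh describes might look like the sketch below. The thresholds are invented for illustration and are nothing like a real vehicle controller; the point is only that every case is an explicit rule a human wrote down.

```python
# Hypothetical symbolic braking rules; thresholds invented for illustration.
def braking_command(speed_kmh: float, distance_m: float) -> float:
    """Return braking force as a fraction of maximum (0.0 to 1.0)."""
    speed_ms = speed_kmh / 3.6               # convert km/h to metres per second
    if speed_ms <= 0:
        return 0.0                           # stationary: nothing to do
    seconds_to_obstacle = distance_m / speed_ms
    if seconds_to_obstacle < 4.0:            # e.g. 100 km/h with the tree 100 m away
        return 1.0                           # apply maximum braking force
    if seconds_to_obstacle < 7.0:
        return 0.5                           # moderate braking
    return 0.0                               # far enough away: no action

print(braking_command(100, 100))   # the transcript's scenario: maximum braking
```

Contrast this with the learnt models earlier in the episode: here the behavior is fully auditable, which is exactly why safety-critical paths have historically stayed rule-based.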
Hannah Clayton-Langton:Okay. And is this a good time to ask you about training data? Because my understanding is that those models which translate a whole bunch of examples in data form into your optimized output, uh, that data that feeds them is something called training data. Is that right?
Hugh Williams:That's right. So if we go back to the eBay example, the way we were able to replace the hand-written human rules with machine-written rules, if you like, using this machine learning idea, which again today would be called AI, was to train a system. And to train that system, you needed good examples and you needed bad examples. So you needed examples of what were the right answers and examples of what were the wrong answers. And this whole field, if you like, dates back to what Google and Microsoft were doing in the early 2000s when they were building new search engines. So when I was at Microsoft, we had a huge labeling effort there, working on search, where we'd employ human judges all over the world, and we'd ask them, for a given query, is this a good answer? And they'd rate that answer on a scale from being a perfect answer through to being an answer that would break the user's trust in the search engine. So they'd rate the answers, we'd collate those back, and then we'd have this training data, if you like, on a massive scale that we could use to learn the ranking algorithm that was then deployed out to our customers. So that whole field was born in the early 2000s, if you like, with Google and Microsoft in particular labeling data at huge scale, and that continues unabated to today, right? So behind the scenes of all of these companies that are using machine learning is a lot of human-labeled data. And it can be one of two things. It can be labeling the data that actually goes into training the algorithm, or, once you've got the algorithm, it can be helping the algorithm refine itself, pointing it in the right direction by giving it feedback on when it's doing a great job and when it's not. So there's a sort of manual adjustment later on.
But certainly data and human labeling are the fuel that that's really driven this AI revolution.
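The judging-and-collating workflow Hugh describes can be sketched simply. The scale names, queries, and URLs below are invented for illustration (real systems use graded scales along these lines, but the details vary); the shape is the thing: several judges rate each (query, result) pair, and the ratings are collated into one training label.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical graded relevance scale, from "perfect" down to an answer
# that would break the user's trust in the search engine.
SCALE = {"perfect": 4, "excellent": 3, "good": 2, "fair": 1, "bad": 0}

# Each judgment: (query, result_url, label) from one human judge. Invented data.
judgments = [
    ("ipod nano", "apple.com/ipod", "perfect"),
    ("ipod nano", "example.com/jaguar-xj", "bad"),
    ("ipod nano", "apple.com/ipod", "excellent"),   # a second judge, same pair
]

# Collate: group every judge's score for the same (query, result) pair.
collated = defaultdict(list)
for query, url, label in judgments:
    collated[(query, url)].append(SCALE[label])

# Average multiple judges into a single training label per pair.
training_labels = {pair: mean(scores) for pair, scores in collated.items()}
print(training_labels[("ipod nano", "apple.com/ipod")])   # → 3.5
```

Those averaged labels are then exactly the kind of target a ranking model is trained against.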
Hannah Clayton-Langton:Okay, so I have two follow-on questions. One I think is fairly self-evident, but just to call out the outputs of these models with this AI is really only as good as the training data that goes into it, right? Which is why Google and Microsoft were in the early days investing in actual people being involved in it. Is that right?
Hugh Williams:Actually, yeah, you're totally right, Hannah. I mean, there are really two things that are important, right? So the first thing is you've got to decide what you're optimizing for with the algorithm. You're asking the algorithm to do something, you're trying to learn something that achieves the task. So you've got to be very clear about what the task is that you want to achieve. That's the first thing. And then the second thing is you need a huge number of examples of what's a good outcome and what's a bad outcome, and perhaps some shades of gray in between, that allow you to learn the function that achieves the goal.
Hannah Clayton-Langton:And is that the blessing and the curse, and some of the moral conundrum, of things like ChatGPT: that they're scouring the entire internet as their training data? Have I understood that correctly?
Hugh Williams:Yeah, I mean these companies, and again, it's a lot like Microsoft and Google back in the day, have a voracious appetite for data, right? So the large language models behind tools like ChatGPT are only as good as the data that they've got. The more data they have, the better the models get. And so these companies are doing a couple of things. One is they're out, as you say, scouring the web, so crawling the web, getting every possible website that they could possibly get. More and more examples of text helps them learn from the web. And they're also doing deals, right? So they're going to companies and saying, can we buy your data? Can you be a data source for us? And again, the more data you have, the better the model gets. And then they're also, in some cases, employing people to actually create data that gives them better examples of particular things that they need, so that they get better at those kinds of tasks. But basically, they want the world's information. And if they have the world's information, then it's possible to build a better LLM that achieves more generalized tasks in a better way.
Hannah Clayton-Langton:I mean, it's kind of like stating the obvious, but how a child learns growing up, right? Like they are taking a feed of everything they see around them and everything they read and everything they're taught, and they're turning that into an optimized output based on the situation that they find themselves in. So obviously you remove anything subjective and it comes down to hard and fast rules. But it's, I guess that's the callback to the intelligence point, right? It's just learning.
Hugh Williams:Yeah, exactly. And, you know, I guess when our kids are young, you know, it's why we read to them, right? So we we read to them so that they they learn how to read, they get knowledge, and then you know, we encourage them as they get older to read books and you know, listen to the news and uh listen to great podcasts like our podcast, whatever it is, so that they uh they become you know better humans who are capable of you know reasoning about more things. And so these systems are exactly the same.
Hannah Clayton-Langton:I was gonna make a cheesy joke about this podcast being someone's training data, but you did it for me. Okay, so data is key. And does that mean that if I or someone is trying to set up an AI company, or get an AI initiative right in their company, that's really where they have to start?
Hugh Williams:Yeah, that's the advice I'd give to our listeners: if you want to use AI effectively within your company, then absolutely it all starts with data. And there are lots of properties you want of that data. You want the data to be fresh, you want that data to be comprehensive, you want that data to not be erroneous, so, you know, we'd say we want that data to be clean, and we want that data to be organized and available. And maybe it's worth just pulling apart an example, Hannah. We talked about eBay at the top of the show. Let's imagine that you and I work at a commerce website, and let's suppose we want our users to land on a fantastic personalized homepage whenever they come to visit our website. The more data that we have, the better we will be able to build an AI algorithm, if you like, to solve that task. And so that data needs to be accurate and correct. We need that data to be all in one place, and it needs to be as real time as possible. We want to know just what the customers were doing right now, not just this customer, but all of our customers, so that we can really deliver a great real-time homepage. It needs to come from all the right sources. So we need data about how users behave on our website, we need data about how users behave when we send them an email or a notification, and we need to know what the users are doing in the apps. We need all of this data in one place to give us the best opportunity to build the best algorithm to deliver the best personalized homepage. So it's incredibly important that we get the data right if we want to get the AI right.
Hannah Clayton-Langton:Yeah, that makes total sense. Another good example that I've heard talked about at work in the last six months or so is: can we leverage our internal knowledge database? So, where we store things like information about certain teams or policies, can we have an AI chat, which I guess would be an LLM, to answer employee queries? And the answer is yes, of course you can do that, but then you need to make sure that there are no old policies stored on the system. And you have to make sure that everything's being refreshed, because the answer will only be as good as the data that's going into it. And I think probably the less sexy truth is a lot of places aren't getting those basics right, particularly if it's something like an internal knowledge database, right? That can be an afterthought.
Hugh Williams:I guess if I was summing up my advice, I'd say garbage in, garbage out. So if if you've got garbage data going in, then you will get garbage coming out of your AI system, right? So the the first thing you absolutely have to do is get your data house in order.
Hannah Clayton-Langton:Okay, so it sounds like clean, well-structured data is foundational. Ideally, lots of it. So then what's the model? Is it just like a super smart algorithm? How does that layer into it?
Hugh Williams:Great question, Hannah. What exactly is a model? Well, the good news, I think, for most companies and for most of our listeners is you don't have to invent one. There are lots of different choices out there of models that you can use to learn a function or an algorithm that solves a problem. And so what most teams today are doing is selecting, almost off the shelf, if you like, a model that they want to use to actually go and learn from the data how to solve a particular problem. There are very few companies today actually innovating in the models themselves. They're really things that you just grab off the shelf and start to use.
Hannah Clayton-Langton:Okay, wait a minute. My mind is being blown in real time. So literally, how many models are there? Like you say off the shelf, like do you buy them? Like what what are these, who's selling them? Like, where do they come from?
Hugh Williams:Yeah, so mostly they're published research. They're generally invented by people who work in academia. Some have been invented in large companies like Google, and then Google has gone and published papers and put those things in the public domain. But by and large, these things are in the public domain. So you can buy implementations, sure, but you can also get what are called open source implementations. So you can actually go and find open versions of the code, download those pieces of code, and use them. The models themselves aren't generally terribly secretive, if you like. They're really things that you can grab from the shelf.
Hannah Clayton-Langton:Okay, so they're kind of like hardcore academic thought and research.
Hugh Williams:Yeah. So some of our listeners will have heard of things like support vector machines, gradient boosted decision trees, neural nets, these kinds of things. These are all examples of types of approaches to learning from data how to actually do things.
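The "off the shelf" point can be made concrete with an open-source library such as scikit-learn, which ships implementations of the model families Hugh names. The tiny dataset below is invented for illustration; the point is that two quite different model types are interchangeable tools behind the same fit/predict interface.

```python
# Grabbing models "off the shelf": no one here is inventing the algorithms.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC

# Invented toy data. Features: [title_match, free_shipping]; label 1 = relevant.
X = [[1.0, 1.0], [0.9, 0.0], [0.2, 1.0], [0.1, 0.0]]
y = [1, 1, 0, 0]

# A gradient boosted decision tree model and a support vector machine,
# trained on the same data through the same interface.
for model in (GradientBoostingClassifier(random_state=0), SVC()):
    model.fit(X, y)
    print(type(model).__name__, model.predict([[0.95, 0.5]]))
```

The data scientist's craft is largely in choosing and tuning the tool for the problem, much like the carpenter analogy that follows, rather than in forging the tool itself.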
Hannah Clayton-Langton:Okay, so you need a wealth of clean, well-structured data. You need to pick the right model, which I'm sure there's an art in that. And then you sort of need to apply it to a use case that is unique and useful for users, and then you have a tech product like a chat GPT that can take the world by storm.
Hugh Williams:Yeah, absolutely. And I think, you know, another way to think about this: we talked about carpenters and analogies in one of our earlier episodes. If you think of yourself as a carpenter, then it's really about choosing the right tools, right? So you say, do I need a saw? Do I need a drill? What's the tool that I need to solve this task? And as a carpenter, you have some intuition about what tool to go and use. And I think as a data scientist, which is what the people who work on these kinds of problems are called, you have an intuition as to what kinds of models you might use to solve particular problems. But you're not inventing a saw, you're not inventing a drill. You're not working for a toolmaker innovating in drills or saws. You're somebody who, by and large, would go and grab one of these and make use of it.
Hannah Clayton-Langton:So, what was the big revolution that's come about with ChatGPT and large language models, which for me as a lay person burst onto the scene, I want to say, two or three years ago? A lot of the concepts that you've talked about don't feel fundamentally new or different. So what was the revolution that occurred?
Hugh Williams:Yeah, so LLMs are certainly a genuine technical breakthrough, that's for sure, and I want to talk about that in our second episode. There are a couple of things that were really exciting breakthroughs that led to LLMs. There's this thing called deep learning, and there are these things called transformers, which was a huge breakthrough; we'll talk about both in our next episode. And then, of course, there was a lot of hard work that went on behind the scenes at OpenAI to build ChatGPT, and there are some really interesting things that happened there. So, no, they're definitely a technical breakthrough, but I think they're also an incredible user experience, a real user experience revolution, right? AI is something that folks like me have been doing for 20, 25 years behind the scenes in large tech companies, so this whole idea isn't very new to me and to lots of people like me. But now the whole world can use AI in a consumer kind of way. You can download an app, you can get it on your phone, you can pull it out of your pocket, and you can actually talk to it and reason with it. And that's a massive breakthrough. I think that's a little bit like the iPhone, if you like. Back in the day, computers were big things that were stored in rooms, and then some people had one at home that was on a desk, and then boom, this revolution happened, and now everybody has one in their pocket. I think that's exactly what's happened here: AI has now become a product that's used by consumers, and it's obviously swept the world. So there's definitely some technical breakthrough, and I'd love to talk about that, but there's also been a huge user experience revolution.
Hannah Clayton-Langton:So it's basically about relevance and accessibility. Suddenly, a person who may not work in a tech company, and may not consider themselves an early adopter of technology, is hearing about this thing because everyone's using it. It's super easy to use. They can pull it up on their phone and literally enter text, almost like a text message, except it's a machine that's responding back.
Hugh Williams:Yeah, and that's massive, right? And I think it's taking a lot of traffic away from Google. We're now answering what we call informational queries, the ones where we want to synthesize information or learn about something, largely using tools like ChatGPT. So Google's being marginalized, if you like, to doing other tasks. This thing has taken the world by storm. I mean, OpenAI is one of the fastest-growing companies in history. I wish I was an early investor.
Hannah Clayton-Langton:I like what you did, likening it to the advent of the iPhone, because you're right. Computers existed in rooms for a lot longer than most people realize. And it sounds like that's a strong parallel to AI and machine learning, which have existed in the technical space since the 50s. That, to me, is a big revelation that I don't think I'd quite grasped until we got into this topic.
Hugh Williams:Yeah, and you know, the web was a revolution, mobile was a revolution, cloud computing was a revolution. It allowed three people in a garage to actually go and build a startup. And this is certainly a major revolution too. I guess history will tell us which of those things is most important, but they've all been massive breakthroughs. And of course, the PC on a desk was a breakthrough. There have been a bunch of things in our lifetime that have been breakthroughs, but yeah, this is a little bit like the advent of the iPhone.
Hannah Clayton-Langton:Okay, and just before we wrap up, as someone who's been involved in the tech world for a good number of years through some of these revolutions, are there any other cool anecdotes or experiences from your career that you think bring this topic to life?
Hugh Williams:Yeah, look, maybe this is ending the episode on a sort of flat note, if you like. But I'd say to our listeners: AI isn't the solution to everything, and you shouldn't always use AI. If you've got a human who knows how to do the task and does it well, then why risk a machine messing it up? I'll tell you a quick story if you like. Are you up for a story? 100%, always. So when I was at Microsoft, we had this brainwave. This is back in, let's call it 2007 or 2008. Hey, in search, wouldn't it be cool if, when a customer types a query, somewhere on the results page they can see the phone number they should call if they want to call that company? So, I don't know, let's imagine we're in the UK and we want to call Tesco. You type Tesco into your search engine, you press enter, and somewhere on the page it says Tesco customer service, and the phone number you can call. Great, now I don't have to go to the Tesco website and trawl around, desperately trying to find the contact number that they've probably tried really hard to hide. So I turn to the team and say, hey, this is a great idea, we should go do this. And they say, okay, AI is the way, which we were calling machine learning at the time. Let's train a model that can go to any website and extract the phone numbers that are useful to customers. Hands off, lots of training data: examples of good phone numbers, examples of the wrong phone number, examples of websites, whatever it is. We'll go and learn this thing. Lots of engineers, lots of hard work. This goes on for a while, and it's pretty good. It's about 99% right. So 99% of the time or so, this thing can find the right phone number and show it to the customer. Pretty good, but not good enough.
Hannah Clayton-Langton:What happened?
Hugh Williams:Well, I think it was Southwest Airlines. We put their fax number up as their customer service number.
Hannah Clayton-Langton:Oh wow. A fax. Now I bet there are listeners who don't even know what a fax machine is. Google it, or ask ChatGPT.
Hugh Williams:Old school way of communicating. Still around, though. I had to fax something to my bank the other day. Anyway, so customers type Southwest Airlines into the search engine and get back this phone number claiming to be the customer service number. They call it, and they're helpless. Of course, Southwest Airlines is now really, really upset, because their fax machine is completely unusable: it's been called by hundreds, if not thousands, of people who want to talk to customer service. So these people aren't happy, Southwest Airlines is unhappy. They give us a call and they say, what have you guys done? And it turns out 99% isn't good enough, right? You need this thing to be 100% right. So we got rid of machine learning. We employed a small team of people, and their job was to go to websites, find the relevant phone number, and type it into a spreadsheet. We had more than one person doing the same task. So we'd get more than one human to go to Southwest Airlines, try to find the phone number, and type it into a spreadsheet. And once the humans agreed enough, we'd say, okay, that's obviously correct. We could do this for tens of thousands of websites. We could get these people to go back every month and check them. And it was 100% right. So: no machine learning, no mistakes, humans getting it right. The cost was probably less too. Rather than having expensive software engineers doing this, you could employ people at near the minimum wage to do this task, do it well, and give them a job. And it was 100% right.
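The consensus step Hugh describes, accepting a phone number only once enough independent labelers agree on it, can be sketched roughly like this. A minimal sketch: the function name, the agreement threshold, and the (fictional) phone numbers are illustrative, not details from the episode.

```python
from collections import Counter

def consensus_label(labels, min_agreement=2):
    """Return the value that at least `min_agreement` labelers
    independently recorded, or None if there's no consensus yet."""
    if not labels:
        return None
    value, count = Counter(labels).most_common(1)[0]
    return value if count >= min_agreement else None

# Three labelers visit the same website and record the support number
# (fictional numbers for illustration).
print(consensus_label(["020 7946 0111", "020 7946 0111", "020 7946 0999"]))
# → 020 7946 0111  (two of three agree, so it's accepted)
```

Websites where the labelers disagree would simply go back into the queue for another look, which is how a process like this stays at effectively 100% accuracy.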
Hannah Clayton-Langton:Okay, so that blows my mind. But what year was that? Like that can't seriously still be happening today.
Hugh Williams:Oh, it's still happening. Yeah, look, that was 2008, but it's absolutely still happening. And look, two reasons. Reason one is that AI is in the consciousness, right? Every board is talking about AI, every CEO is talking about AI, so there's enormous pressure to solve tasks with AI. And in many cases, those tasks probably shouldn't be solved with AI. So there's that. And also, if you turn to an engineering team, one that's proficient in these technologies, and you say, hey, I'd love to solve a problem, guess what they're gonna do? They're gonna go and use AI.
Hannah Clayton-Langton:Well, that sounds fun, right?
Hugh Williams:Well, it's fun, yeah. And it's what they're qualified to do. And, you know, if you've got a hammer, everything looks like a nail, right? So they're just gonna go for it. And so I think as a leader, as somebody who's experienced, and I give this advice to our listeners: stop and think, is AI the right solution to this, or should we be doing this another way? And while you're thinking about that, ask how good is good enough. Because 99% sounds great, but in this case it wasn't good enough. And in some cases, 51% is going to be good enough. So you have to really decide what good enough is before you decide what the solution can be. And remember that AI will always make mistakes.
Hannah Clayton-Langton:That is a really refreshing place to end. I know we've got part two, but someone with as much experience in the tech companies that you've worked in coming back around in 2025 and saying AI isn't always the answer isn't what I was expecting from this episode. So let's call it there. Uh, should we tease what we're gonna be talking about in the second episode?
Hugh Williams:Yeah, absolutely. In our next episode, we are going to be talking about large language models. We're gonna talk about how ChatGPT works and how it was built, and we're gonna talk a little bit about the future of AI and what we can expect next. So that'll be a fun second topic, Hannah.
Hannah Clayton-Langton:Awesome. Okay, well, thank you so much for listening. This has been the Tech Overflow Podcast. As always, I'm Hannah Clayton Langton.
Hugh Williams:And I'm Hugh Williams. And if you'd like to learn more about our show, you can visit TechOverflowpodcast.com. We're also available on LinkedIn, X, and Instagram.
Hannah Clayton-Langton:Please like, subscribe, and share with your friends. And we'll see you next time for our second instalment on AI. All right, thanks to you, safe travels.
Hugh Williams:Thanks, Hannah. See you soon. Bye.