Tech Overflow
We're Tech Overflow, the podcast that explains tech to smart people. Hosted by Hannah Clayton-Langton and Hugh Williams.
How Tech Really Works: The Best Stories from Season One
A single field mismatch bricked fleets of Windows machines. A simple gesture turned dating into a swipe. A major grocer was hacked and knocked offline for 45 days. A driverless car pulled up with no one inside.
As we gear up for the launch of Season Two on March 3, Hannah shares her favourite stories from Season One, where we went under the hood and explained tech in an accessible way for every curious listener. In this episode, we catch new listeners up on what you've missed and revisit our favourite moments for our loyal listeners.
We start by pulling apart the CrowdStrike outage to show why software that runs deep in the operating system is both powerful and dangerous. Then we shift to the Marks & Spencer ransomware story to examine how attackers slip in at the edges, escalate privileges over months, and force hard choices about rebuilds and business continuity. From there, we pivot to product craft with a candid story from Google Maps, where watching Apple sparked a smarter roadmap and a useful parking feature. The theme: humility, fast learning, and disciplined shipping beat ego every time.
Our AI segments tackle the bigger shift: language models trained on trillions of tokens that summarise and reason without a tidy explanation of how. We cut through the hype with grounded numbers on GPUs, training timelines, and cost, and we explain why inference feels cheap while training burns the budget.
Then the interviews bring it home. Tinder co‑founder Jonathan Badeen traces swipe right back to flashcards, illustrating how a physical metaphor became a mobile-native flow that reduced friction and changed behaviour. Waymo’s engineering leader Nick Pelly breaks down the robotaxi experience, the safety data across one hundred million autonomous miles, and the sprawling software and hardware stack that makes autonomy work today. He also paints a vivid picture of tomorrow’s cities, where fewer car parks free space and travel time becomes time to work, play, or sleep.
We wrap with practical basics (LANs, WANs, data centres by rivers) and a reminder that legacy COBOL systems still run banks, and that COBOL expertise still pays.
If you enjoy smart stories backed by clear numbers, credibility, and lessons you can act on, this highlights edition was made for you.
Like, Subscribe, and Follow the Tech Overflow Podcast by visiting this link: https://linktr.ee/Techoverflowpodcast
To me, this is just one of the most interesting engineering challenges of our time.
Hannah Clayton-Langton:LLMs are the first time that AI isn't like hidden away under layers of computer.
Hugh Williams:Somebody said to me the other day, how does it summarize a document? And I said, well, nobody knows.
Hannah Clayton-Langton:Hello world, and welcome to the Tech Overflow podcast. I'm Hannah Clayton-Langton, and today I'm without my co-host Hugh Williams, but that is for good reason. Those of you who tuned in to the early episodes of season one might remember that my whole motivation and inspiration for starting this podcast was that I wanted to learn about technology. I work for a tech company and I'm a technology user, I'm super interested, but I'm not an engineer. And when I reached for the resources to teach myself about tech, how it works, and to be better at my job, I couldn't find anything. And that's when I called up Hugh. So as Hugh's not here, before we get into it, let me just remind everyone of his pretty impressive CV in the tech space. Hugh started off his career in academia, but was quickly whisked off to Silicon Valley and worked for the likes of Microsoft, Tinder, Google, and eBay, just to name a few. And so he's pretty well placed to tell us some industry secrets. This episode is all about the highlights of season one. And I'm gonna take you through some of my favourite moments, our best insider stories, technical insights, and special guests from season one. So if you're enjoying the Tech Overflow podcast, please do like and subscribe wherever you get your podcasts. Share with your friends, family, colleagues, and you can find us on LinkedIn, Instagram, and X. Season two is coming to you on the 3rd of March. So mark your calendars, and we'll see you there. And now for the good stuff. So I've put together some segments in this episode to share some of the best bits and my favourite moments from season one. I hope you enjoy. Moment one was when we deep-dived into the CrowdStrike disaster. So that was episode four. Hugh and I talk about what actually happened, what went wrong, and why this was the biggest outage in the history of tech at the time of recording.
Hugh Williams:So it opened up the file, it found it had 21 fields, the software is expecting 20, and all sorts of bad things started to happen.
Hannah Clayton-Langton:And so because it's so deeply embedded, it wasn't just like, error, please restart, or error, couldn't read file type. You ended up sort of tripping the whole system. And the blue screen of death means you can't use your computer. Like, that's it.
Hugh Williams:Exactly. Because if something like Word had a problem like this, Word would crash. Yeah. And you'd say, huh, Word's crashed, I'll try starting it again. Huh, it keeps crashing. Maybe I'll try downloading a new version or I'll wait till tomorrow until Microsoft updates it. But because this Falcon software runs deep inside the operating system, it actually took down the operating system, this error. And so all these blue screens of death started happening. So CrowdStrike folks deploy this file and they basically shut down every Windows machine that this software is installed on. They all get the blue screen of death. Of course, what happens after the blue screen of death is a lot of folks will try and reboot the machine. Yeah. So they say, oh, you know, reboot. But the problem was when it booted back up, the same thing happened again.
Hannah Clayton-Langton:And every Windows system across the world that had Falcon installed basically went black or went blue.
Hugh Williams:And was unusable and would not boot up again.
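The failure mode Hugh describes, a data file carrying one more field than the code expects, can be sketched in a few lines. This is hypothetical Python for illustration, not CrowdStrike's actual code (which runs in kernel mode); it just shows how a hard-coded field-count assumption turns a routine data update into a crash.

```python
# Hypothetical sketch, not CrowdStrike's actual code: a parser that
# hard-codes the number of fields it expects in each record.
EXPECTED_FIELDS = 20

def parse_record(line: str) -> list[str]:
    fields = line.split(",")
    if len(fields) != EXPECTED_FIELDS:
        # In a user-space app this is a catchable error and only the app
        # crashes; in kernel-mode code the equivalent fault can take the
        # whole operating system down with it.
        raise ValueError(f"expected {EXPECTED_FIELDS} fields, got {len(fields)}")
    return fields

ok_line = ",".join(f"field{i}" for i in range(20))   # the format the code expects
bad_line = ",".join(f"field{i}" for i in range(21))  # a file with one extra field

print(len(parse_record(ok_line)))  # 20
try:
    parse_record(bad_line)
except ValueError as err:
    print(err)  # expected 20 fields, got 21
```

Because the faulty logic lived below the applications, every boot re-read the same file and hit the same fault, which is why rebooting didn't help.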
Hannah Clayton-Langton:Moment two was a really interesting one as a British shopper. So this was all about Marks & Spencer, or M&S. They had an absolutely massive hack through 2025. It took them out for well over a month. And the most interesting bit was that Hugh's take slightly differed from what is on public record. So have a listen to what we think likely actually happened. So I think M&S issued some official communication that they caught this whole hack pretty early on. But I think that's been somewhat debunked.
Hugh Williams:Yeah, I don't think that's true. I think what is true is that once the ransomware was executed, and we'll talk about ransomware and what it does a little bit later on, once that was executed, M&S were very quick to explain that that's what happened and begin a path that took a long time towards rectification. But I don't think they were quick to detect that the hackers were inside their systems. And some folks are saying they were probably in there at least a couple of months.
Hannah Clayton-Langton:We can talk later about whether or not that's unusual. I think the answer is no. Not unusual. No. Okay.
Hannah Clayton-Langton:So hackers, they get in, they exploit this vulnerability, and then they do like a few really key things that sort of set them up for success in taking things down.
Hugh Williams:Yeah, that's right. I think there's a couple of parts of the story that we'll never know the real details to, but what's certainly happened is they got in as fairly low-level employees, right, or contractors. They've now got access to some system. It's certainly not going to be the absolute core of M&S, but they're in. They're in the edges. They've made it into some of the outbuildings, if you wanted to use an analogy. Somewhere along this track over the next couple of months, they've managed to, what we call, escalate their privileges. So they've managed to figure out how to get more access to, you know, more of the buildings, to continue the analogy.
Hannah Clayton-Langton:And our last little snippet, I think, is just the perfect example of good product management in action. So when Hugh ran Google Maps, from I think it was 2015 to 2017, he talks about some competitor research he used to do on his drive to work.
Hugh Williams:I was lucky enough at the end of my executive career in the US to run Google Maps. So I looked after the product and engineering teams for Google Maps, and I can tell you that I quite often used to drive to work using Apple Maps.
Hannah Clayton-Langton:But you were doing that just to know they hadn't changed their game or come up with anything.
Hugh Williams:Yeah, and I had a lot of respect for them. You know, I knew they were trying really, really hard. They had a catastrophic launch in 2012, and this is more like 2015 when I was using it. So they'd really got off their knees and kept on going, and they were beginning to do some pretty useful things. I'll give you a couple of examples, actually. They were the first ones to have a parking feature. So when you parked your car, they'd put a pin on the map that showed where you parked your car. Fantastic feature. So I remember going into Google and saying, you know, hey, Apple's got this awesome feature. One of the product managers said, oh yeah, we've got that kind of queued up in our list. And I said, well, it's probably time we put that towards the front and got on with it. And, you know, ultimately Google released that, I think, probably six or seven months later than Apple. But I thought it was a great feature.
Hannah Clayton-Langton:Okay, so now I'm gonna run you through our top AI moments from season one. The AI insights that I took away from season one are probably the things that have most shifted my mindset about how technology works. And AI is gonna be a huge focus of season two, at listener request. So we're super excited to get into more detail there. Just as a little reminder, Hugh worked in AI from the early 2000s at the likes of Microsoft, eBay, and Google, where he helped build some of their AI tech. And if you listen to our episode on AI, you'll learn that it's been around a little bit longer than people might think. In this next segment, Hugh and I talk about large language models, or LLMs, the likes of ChatGPT, and how ubiquitous they are in terms of daily users globally.
Hugh Williams:AI is something that folks like me have been doing for, you know, 20, 25 years behind the scenes in large tech companies. So this whole idea isn't very new to me and to lots of people like me, but now the whole world can use AI in a consumer kind of way. So, you know, you can download an app, you can get it on your phone, you can pull it out of your pocket, and you can actually kind of talk to it and reason with it. And that's a massive breakthrough. I think that's a little bit like the iPhone, if you like. So, you know, back in the day, computers were big things that were stored in rooms, and then some people had one at home that was on a desk, and then boom, this revolution happened, and now everybody has one in their pocket. I think that's exactly what's happened here: AI has now become a product that's used by consumers, and it's obviously swept the world.
Hannah Clayton-Langton:I think my favourite episode of the whole first series was episode seven, actually, and that's where Hugh and I discuss how LLMs are trained, and we talk about the massive scale of building them, and there's some pretty cool stats that bring that to life.
Hugh Williams:The scale of LLMs is just so incredibly, incredibly different. I mean, I'll use the word token a little bit later on, but you can just think words for now. These LLMs are trained on hundreds of billions or maybe even trillions of words to be able to generate the text that they generate. I've heard estimates that say that OpenAI's latest GPT-4, so the ChatGPT that you're using today, was trained on about 13 trillion tokens, about nine trillion words.
Hannah Clayton-Langton:And probably the most compelling thing about ChatGPT and its peers, as we know, is how they produce output that feels cognizant and human-like. In episode seven, Hugh shares that no one actually knows quite how this works, and it's part of their magic, and I don't use that word lightly. They're basically built with a technology that learns from so much data that it can output plausible text based on any input you give it. And in this clip from episode seven, Hugh talks about how LLMs summarize text.
Hugh Williams:Somebody said to me the other day, how does it summarize a document? And I said, well, nobody knows, really. So when you say, please shorten this document, or summarize this document, or turn this document into bullet points, it's just seen enough examples of that in the vast amount of text that it's seen that it's able to carry out that task, right? So it's seen examples of a long document shortened to a shorter document, it's seen an example of an essay turned into PowerPoint slides, whatever it is. It's seen enough examples of that in the trillions of words that it's seen that it's able to do that. So you give it a simple instruction like summarize or shorten, and it can take the following content and know what to do with it.
Hannah Clayton-Langton:And the final piece of the LLM puzzle, which I particularly loved, was all about how they're trained and how they use GPUs, which are those chips that everyone's talking about that companies like NVIDIA make. Now, I've heard a lot about chips, GPUs, and NVIDIA, but I hadn't quite understood how they fit into the overall LLM story. So you have a bunch of GPUs, which cost like 40 grand or something.
Hugh Williams:Yeah, of that order.
Hannah Clayton-Langton:In USD.
Hugh Williams:Yeah, absolutely. So, you know, 20 to 40,000 bucks per GPU card, and, you know, these data centers are absolutely full of them for this training process.
Hannah Clayton-Langton:Just for the training process.
Hugh Williams:Yeah, they're used in the evaluation as well. So when you want to buy your holiday to Florence, or whatever it is that you're asking about, or help me cook a recipe, or whatever those kinds of things, the GPUs are used in that as well. But you need vastly more infrastructure for the training than you do for the inference, which is what we call the question-asking piece of this.
Hannah Clayton-Langton:Okay, because there is a whole topic of conversation around like the compute power required by LLMs, and like the environmental cost of it and the financial cost of it, but the main consumption of that compute happens in the training phase.
Hugh Williams:Correct.
Hannah Clayton-Langton:Okay.
Hugh Williams:So training takes weeks or months, probably costs 50 to 100 million dollars to do. When you type a question, like, you know, help me plan my itinerary to Florence, that probably costs a very small fraction of a cent.
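Hugh's figures imply a huge asymmetry between training and inference, and a back-of-envelope calculation shows how numbers like his can arise. Every figure below (GPU count, run length, hourly rate, seconds per answer) is an illustrative assumption, not a reported number for any real model.

```python
# Back-of-envelope sketch: all numbers are illustrative assumptions,
# not reported figures for any real model.
gpus = 20_000             # GPUs running in parallel for the training run
days = 90                 # "weeks or months" of training
usd_per_gpu_hour = 2.00   # assumed cloud-style rental rate

training_cost = gpus * days * 24 * usd_per_gpu_hour
print(f"training run: ~${training_cost / 1e6:.0f}M")  # training run: ~$86M

# Inference: one answer might occupy a single GPU for a couple of seconds.
inference_cost = 1 * (2 / 3600) * usd_per_gpu_hour
print(f"one answer: ~${inference_cost:.4f}")  # one answer: ~$0.0011
```

Under these assumptions the training run lands in Hugh's 50 to 100 million dollar range, while a single answer costs roughly a tenth of a cent, which is why inference feels cheap even though training burns the budget.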
Hannah Clayton-Langton:So we loved our interviews in season one, and we know you guys did too. We had two special guests, and we're having even more joining us in season two that you'll love, but no spoilers. So, first up, we had Jonathan Badeen, who is the co-founder of Tinder and the inventor of the famous swipe right, which changed the landscape of modern dating. Jonathan opens by telling us about Hatch Labs and shares how the app actually got the name Tinder.
Jonathan Badeen:So the original Matchbox was to be an app; it never really existed. Because Hatch Labs' focus was actually on creating disruptive mobile apps, so native mobile apps and all. And so that was the focus. And then, of course, when we were making Tinder, we were gonna call it Matchbox, but because of issues that could arise with match.com and all, we started looking for different names. And Tinder was one of those; it kind of went along with the fire theme that we were kind of going with. And so we ended up ultimately landing on Tinder.
Hannah Clayton-Langton:Tinder was also famously the first swipe-based dating app. And here Jonathan talks about some of the decision making behind the scenes that was really smartphone-centric for the first time, and how that led to a truly revolutionary user experience.
Jonathan Badeen:But there were a lot of decisions that got made because of the platform. And obviously, your swipe couldn't exist without gesture technology, which comes from a touch screen. We originally had sort of a quick and easy way to log in or to create an account, because you don't want to be typing a profile for an hour on a small little screen. So, Facebook login, which was more of a thing then than it is now, but it allowed us to create these profiles real easy. We crafted the communication part of it more off of texting, as opposed to the previous things, which were more email-centric. You know, you've got this small screen, you're gonna put less information about the person right up front. And so the very first version had a photo, although not quite as large as it ultimately ended up becoming, first name, age, and it had the number of shared friends and shared interests. And then you'd tap into the profile and you could get all of that plus a little written blurb. But, you know, I think a lot of people think, oh, it's only photos. It never was only photos. However, it does turn out to be actually one of the most important things.
Hannah Clayton-Langton:We had to ask Jonathan about how he invented the swipe right, and he shared a really insightful story about how flashcards were the real inspiration behind the way that he built the swipe right on our smartphone screens.
Jonathan Badeen:I woke up one morning with this epiphany, just like literally woke up and just got really excited about how I thought you would make the perfect flashcards happen. And that was, you would swipe in one direction for flipping the card over, and you'd make it an actually good swipe, but then you'd swipe in the other directions for saying I got the card right or I got the card wrong. And I kind of came up with this idea from how I would use real flashcards, like in real life, not going to class, but like if I was sitting there, I'd start out with a stack of cards, and I would take that card and I'd put it into one pile if I got it right, and those are the cards I don't need to study anymore. You know, oh, I got this one wrong, I'm gonna put it over here into this other pile; those are the ones that I need to study more. And so now you've got three piles of cards, until you whittle it down to two. And so I envisioned those two stacks of cards, the right and the wrong ones, just off the screen of the iPhone, because your screen's small, right? And basically, that's where the gesture comes from: dragging that card to the wrong or right stack that's just off screen. So ultimately, when we ended up making Tinder, we had already landed on this sort of one-at-a-time sort of card.
Hannah Clayton-Langton:Our second interviewee was Nick Pelly, director of engineering at Waymo. We were really lucky to have him join what was our most-listened-to episode from season one. Nick shares why users and riders love Waymo, why they're actually safer than human-driven cars, and how they're expanding across the globe. They're actually coming to London, and I can't wait to jump in my first Waymo. So, in this next clip, Nick explains to us, in his own words, exactly what Waymo is.
Nick Pelly:Let me describe what Waymo is. It's a robotaxi experience where, you know, you pull out your phone, hail a vehicle, you know, much like you would do with Uber. In fact, in some places we partner with Uber, and, you know, the vehicle comes to pick you up. But in this case, the vehicle is empty. It's gonna pull over on the side of the road with no one inside. You get a private vehicle, a private experience, and it's going to fully autonomously drive you to your destination. So this is a ride-hailing service. It's what in the industry we describe as level four autonomy. You know, if you'd like, later we can get into some of those different levels. But level four meaning, you know, there is no human in the vehicle that is ready to take control of the steering wheel. There's no one in the vehicle who needs a driver's license.
Hannah Clayton-Langton:And if, like most people, you're concerned about the safety of robotaxis, let's join Nick in discussing what the data shows about the safety of autonomous vehicles versus human-driven cars, and how that debunks the myth that they might be in any way less safe.
Nick Pelly:That's the most noticeable thing for the user: you get a private experience, and that's a really big win. But what we think a lot about is the 40,000 road deaths, just in the US, 40,000 road deaths per year that are completely avoidable. And the safety benefits that an autonomous vehicle brings are quite dramatic. We've now driven over a hundred million fully autonomous miles. And, you know, we've looked back at that data, and we have between five and 10x fewer accidents than human drivers who are driving in the same geographies. The safety benefit is really quite stark.
Hannah Clayton-Langton:One thing that I took away from this episode is that Nick really loves his job and the challenges it brings him as problems to solve. So there's a lot of hardware involved, and I learned some new words in this episode: radar, lidar, and other sorts of sensors on the car. But here's Nick talking about the breadth of the software challenges that Waymo has to solve as well.
Nick Pelly:This is just one of the most interesting engineering challenges of our time because of, well, the complexity, but also the breadth of engineering involved. And I've touched on some of the hardware side of things, but, you know, also on the software side, we have these real-time safety-critical systems on the vehicle, as well as, you know, rider experience and user interface systems in the vehicle. Then we have the mobile app, and we have a lot happening in the cloud. There's both the sort of ride-hailing system, which you can think about as matching demand and supply and having an efficient marketplace there, you know, much like other ride-hailing companies do, but then also the simulation systems and the log replay and the ability to visualize and play back, you know, what's happened in the field. There's such a rich ecosystem of tooling and infrastructure off the vehicle as well. I don't know if I've ever worked on a project with such a span of different software systems. It kind of brings every single software discipline together, as well as many different hardware disciplines.
Hannah Clayton-Langton:So clearly the world is changing as autonomous vehicles are becoming more and more common. And here is Nick sharing a little bit more about what the future could look like in a world of autonomous vehicles.
Nick Pelly:This is going to take a little longer for all the impacts to play out, because we're talking about city design, we're talking about manufacturing of much larger objects. We're talking about a safety-critical system that, you know, has a lot of engagement with regulators as well. But this is the direction. It won't just be ride hailing, it'll be personal car ownership, it'll be trucking, it'll be all forms of transportation over time. You could imagine, some quite high percentage of cities right now is dedicated to vehicles and especially to parking. I believe it's in the 30 to 40 percent range, if you look at a city by real estate, that's dedicated to parking.
Hannah Clayton-Langton:Wow.
Nick Pelly:And, you know, with autonomous vehicles, better utilization of the vehicles, so there's less parking. And when you do need to park them, you can easily move them outside of the city. So what this will mean for how cities are laid out, and real estate, is quite dramatic, and this will be significant to people's lives as well. I think this will make the world feel smaller, you know, much like the jet plane did, or the automobile did originally. It'll become easier to get from A to B, because you can use that time much more productively, and you know you're getting there much more safely. And I'm sure over time we'll see a sort of abundance of options. You could imagine autonomous vehicles that have beds that you can sleep in. I don't know what timeline that is on, but that's clearly, you know, where we're going: that you can work, you could play, you could sleep, you know, in these cities, and just make the world feel smaller.
Hannah Clayton-Langton:And last but not least, listener questions and the controversial tech trivia. So in our last episode of the season, we took some listener questions. Thank you to all of you who sent those in. And here is Hugh getting into some acronyms.
Hugh Williams:LAN is local area network. So, you know, if you're in a building, you've got Wi-Fi, you've got some cables running around the building, could be your house, could be where you work, that would be called a LAN. So it's basically the network that is within your building. And then a wide area network is, you know, the bigger version of that, right? So that's something that a company might use to connect two campuses together. Or, you know, you've got some infrastructure out in the field somewhere and you want that connected back to the head office, then that would be a wide area network that you'd be using to connect all of your infrastructure together. So folks like Google, Amazon, you know, those kinds of folks have very, very large, sophisticated WANs that connect together all their infrastructure, including all of their data centers, warehouses, offices, all these kinds of things, all connected together on a giant network that you'd call a WAN.
Hannah Clayton-Langton:One thing I learned was that Hugh has spent a lot of time in massive data centers, which I have to say is not something I'd thought about much until they hit the news a fair bit recently.
Hugh Williams:I might share a couple of pictures on socials of me walking around data centers, with some big data center infrastructure from the old days. You know, lots of blinking lights, cables, really cold, fluorescent lights. And they're gigantic pieces of infrastructure. I mean, they're some of the biggest warehouses you will ever see, just completely filled with computers. It's very, very cool.
Hannah Clayton-Langton:Interesting. And they're like in random places where there's like loads of land available, right?
Hugh Williams:Yeah, yeah. And you might put it next to a hydropower station, or somewhere where there's a lot of solar available, or perhaps, you know, nuclear energy or whatever it is, because they do use a lot of power, basically to run the infrastructure and keep the infrastructure cool, because, you know, all of these CPUs and GPUs get very, very hot, and so you need a lot of cooling. And so, you know, they put them next to rivers and pump cold water through them, all sorts of interesting things. So they're in very interesting locations, often hard to get to.
Hannah Clayton-Langton:And our last moment takes us back to our very first episode. Hugh and I were super nervous trying to figure out how to be podcasters, and we learned something really interesting: engineers that know some of the super old coding languages can actually make some pretty good money helping companies out.
Hugh Williams:So, for example, there's a coding language called COBOL. It was very popular in the 1970s. It's one of the most verbose coding languages, so it takes lots and lots of lines of code to get anything done. And you can get paid an enormous amount if you are a COBOL programmer today, because lots of big banks, insurance companies, these kinds of folks, telecommunications companies are still running COBOL systems. As I said, I was in Thailand last week, and I was talking to a bank executive, and most of their systems are still COBOL systems running on giant mainframe computers that they maintain themselves. And so if you're capable of writing code in COBOL, you can get paid very, very well. That's probably not true for Voyager 1 and Voyager 2, because those are probably really exciting jobs to have. They probably don't have to overpay for those. But knowing historic programming languages turns out to be valuable.
Hannah Clayton-Langton:So that's a wrap on the best bits of season one. Hugh and I had a ton of fun putting it together for you. And season two is coming soon, with our first episode out on the 3rd of March. So we'll have a trailer out for season two in a couple of weeks, and I am really excited about some of the guests we have lined up, but I won't share any more just now. Please do like, follow, review the podcast, share it with your friends, colleagues, family, and Hugh and I will be back soon. We can't wait to see you. Bye.