Serverless Chats
Episode #97: How Serverless Fits in to the Cyclical Nature of the Industry with Gojko Adzic
About Gojko Adzic
Gojko Adzic is a partner at Neuri Consulting LLP. He is one of the 2019 AWS Serverless Heroes, the winner of the 2016 European Software Testing Outstanding Achievement Award, and the winner of the 2011 Most Influential Agile Testing Professional Award. Gojko’s book Specification by Example won the Jolt Award for the best book of 2012, and his blog won the UK Agile Award for the best online publication in 2010.
Gojko is a frequent speaker at software development conferences and one of the authors of MindMup and Narakeet.
As a consultant, Gojko has helped companies around the world improve their software delivery, from some of the largest financial institutions to small innovative startups. Gojko specializes in agile and lean quality improvement, in particular impact mapping, agile testing, specification by example, and behavior driven development.
Twitter: @gojkoadzic
Narakeet: https://www.narakeet.com
Personal website: https://gojko.net
Watch this video on YouTube: https://youtu.be/kCDDli7uzn8
This episode is sponsored by CBT Nuggets: https://www.cbtnuggets.com/
Transcript
Jeremy: Hi everyone, I'm Jeremy Daly and this is Serverless Chats. Today my guest is Gojko Adzic. Hey Gojko, thanks for joining me.
Gojko: Hey, thanks for inviting me.
Jeremy: You are a partner at Neuri Consulting, you're an AWS Serverless Hero, you've written I think, what? I think 6,842 books or something like that about technology and serverless and all that kind of stuff. I'd love it if you could tell listeners a little bit about your background and what you've been working on lately.
Gojko: I'm a developer. I started developing software when I was six and a half. My dad bought a Commodore 64 and I think my mom would have kicked him out of the house if he told her that he bought it for himself, so it was officially for me.
Jeremy: Nice.
Gojko: And I was the only kid in the neighborhood that had a computer, but didn't have any way of loading games on it because he didn't buy it for games. I stayed up and copied PEEKs and POKEs from a book I couldn't even understand until I made the computer make weird sounds and print rubbish on the screen. And that's my background. Basically, ever since, I only wanted to build software really. I didn't have any other hobbies or anything like that. Currently, I'm building a product for helping tech people who are not video editing professionals create videos very easily. Previously, I've done a lot of work around consulting. I've built a product that is used by millions of school children worldwide to collaborate and brainstorm through mind-mapping. And since 2016, most of my development work has been on Lambda and on team stuff.
Jeremy: That's awesome. I joke a little bit about the number of books that you wrote, but the ones that you have, one of them's called Running Serverless. I think that was maybe two years ago. That is an excellent book for people getting started with serverless. And then, one of my probably favorite books is Humans Vs Computers. I just love that collection of tales of all these things where humans just build really bad interfaces into software and just things go terribly.
Gojko: Thank you very much. I enjoyed writing that book a lot. One of my passions is finding edge cases. I think people with a slight OCD like to find edge cases and in order to be a good developer, I think somebody really needs to have that kind of intent, and really look for edge cases everywhere. And I think collecting these things was my idea to help people first of all think about building better software, and to realize that stuff we might glance over like, nobody's ever going to do this, actually might cause hundreds of millions of dollars of damage ten years later. And thanks very much for liking the book.
Jeremy: If people haven't read that book, I don't know, when did that come out? Maybe 2016? 2015?
Gojko: Yeah, five or six years ago, I think.
Jeremy: Yeah. It's still completely relevant now though and there's just so many great examples in there, and I don't want to spend the whole time talking about that book, but if you haven't read it, go check it out because it's these crazy things like police officers entering "NO PLATE" whenever they're ticketing a car without plates. And then, when somebody actually registers that as a vanity plate, they end up with thousands of parking tickets, and it's just crazy stuff like that. Or, not using the middle initial or something like that for the name, or the birthdate or whatever it was, and people constantly getting just ... It's a fascinating book. Definitely check that out.
But speaking of edge cases and just all this experience that you have just dealing with this idea of, I guess finding the problems with software. Or maybe even better, I guess a good way to put it is finding the limitations that we build into software mostly unknowingly. We do this unknowingly. And you and I were having a conversation the other day and we were talking about way, way back in the 1970s. I was born in the late '70s. I'm old but hopefully not that old. But way back then, time-sharing was a thing where we would basically have just a few large computers and we would have to borrow time against them. And there's a parallel there to what we were doing back then and I think what we're doing now with cloud computing. What are your thoughts on that?
Gojko: Yeah, I think absolutely. We are I think going in a slightly cyclic way here. Maybe not cyclic, maybe spirals. We came back to the same horizontal position but vertically, we're slightly better than we were. Again, I didn't start working then. I'm like you, I was born in the late '70s. I wasn't there when people were doing punch cards and massive mainframes and time-sharing. My first experience came from home computers and later PCs. The whole serverless thing, people were disparaging about that when the marketing buzzword came around. I don't remember exactly when serverless became serverless because we were talking about microservices and Lambda was a way to run microservices and execute code on demand. And all of a sudden, I think the JAWS people realized that JAWS is a horrible marketing name, and decided to rename it to serverless. It was probably 2017 or something like that. 2000 ...
Jeremy: Something like that, yeah.
Gojko: Something like that. And then, because it is a horrible marketing name, but it's catchy, it caught on and then people were complaining how it's not serverless, it's just somebody else's servers. And I think there's some truth to that, but actually, it's not even somebody else's servers. It really is somebody else's mainframe in a sense. You know, in the '70s and early '80s, before the PC revolution, if you wanted to be a small software house or a small product operator, you probably were not running your own data center. What you would do is you would rent it, paying for time to one of these massive, massive, massive operators. And in fact, we ended up with AWS being a massive data center. As far as you and I are concerned, it's just a blob. It's not a collection of computers, it's a data center we rent something from, and Google is another one, and then Microsoft is another one.
And I remember reading a book about Andy Grove, who was the CEO of Intel, where they were thinking about the market for PC computers in the late '70s when somebody came to them with the idea that they could repurpose what became the 8080 processor. They were doing this I think for some Japanese calculator and then somebody said, "We can attach a screen to this and make this a universal computer and sell it." And they realized maybe there's a market for four or five computers in the world like that. And I think that that's ... You know, we ended up with four or five computers, it's just the definition of a computer changed.
Jeremy: Right. I think that's a good point because you think about after the PC revolution, once the web started becoming really big, people started building data centers and colocation facilities like crazy. This is way before the cloud, and everybody was buying racks, and Dell was getting really popular because people were buying servers from Dell and installing these in their data centers. And it just became this massive, whole industry built around doing that. And then you have these few companies that say, "Well, what if we just handled all that stuff for you?" Rather than just racking stuff for you, they started managing the software, and the networking, and the backups, and all this stuff for you. And that's where the cloud was born.
But I think you make a really good point where the cloud, whatever it is, Amazon or Google or whatever, you might as well just assume that that's just one big piece of processing that you're renting and you're renting some piece of that. And maybe we have. Maybe we've moved back to this idea where ... Even though everybody's got a massive computer in their pocket now, tons of compute power, in terms of the real business work that's being done, and the real global value, and the things that are powering global commerce and everything else like that, those are starting to move back to run in four, five, massive computers.
Gojko: Again, there's a cyclic nature to all of this. I remember reading about the advent of power networks. Because before people had electric power, there were physical machines and power was transmitted through physical movement, and there were water-powered plants and things like that. And these whole systems of shafts and belts and things like that powering factories. And you had this one kind of power source in a factory that was somewhere in the middle, and then from there, you actually had physical belts and rotating cogs in other buildings, and that was rotating some shafts that were rotating other cogs, and things like that.
First of all, people were able to package up electricity into something that's distributable, and they were running their own small electricity generators next to these big massive machines in early factories. And one of the first effects of that was they could use their factories up to 30% better, because up to 30% of the workspace in the factory was taken up by all the belts and shafts. And all that movement was producing a lot of air movement and a lot of dust and people were getting sick. But now, you just plug in a cable and you no longer have all this bad air and you don't have employees getting sick and things like that. Things started changing quite a lot and then all of a sudden, you had this completely new revolution where you no longer had to operate your own electric generator. You could just plug in and get power from the network.
And I think part of that is, again, cyclic, what's happening in our industry now, where, as you said, we were getting machines. I used to make money as a Linux admin a long time ago and I could set up my own servers and things like that. I had a company in 2007 where we were operating our own gaming system, and we actually had physical servers in a physical server room with all the LEDs and lights, and bleeps, and things like that. Around that time, AWS really made it easy to get virtual machines on EC2 and I realized how stupid the whole let's-manage-everything-ourselves thing is. But, we were at the point where people had to run their own generators, and now you can actually just plug into the electricity network. And of course, there is some standardization. Maybe the U.S. still has 110 volts and Europe has 220, and we never really got global standardization there.
But I assume before that, every factory could run whatever voltage they wanted. It was difficult to manufacture for these things, but now that you have standardization, it's easier for everybody to plug into the ecosystem, and then the whole ecosystem emerged. And I think that's partially what's happening now, where S3 is an API and Lambda is an API. It's basically the electric socket in your wall.
Jeremy: Right, and that's that whole Wardley maps idea, they become utilities. And that's the thing where if you look at that from an enterprise standpoint or from a small business standpoint, if you're a startup right now and you are ordering servers to put into a data center somewhere, unless you're doing something that specifically requires your own servers, that's just crazy. Use the cloud.
Gojko: This product I mentioned that we built for mind mapping, there's only two of us in the whole company. We do everything from presales, to development, testing, support, to everything. And we're competing with companies that have several orders of magnitude more employees, and we can actually compete and win because we can benefit from this ecosystem. And I think this is totally wonderful and amazing, and for anybody thinking about starting a product, it's easier to start a product now than ever. And, another thing that's I think totally crazy about this whole serverless thing is that, in effect, it was a bookstore that offered it first.
You mentioned the word utility. I remember I was the editor of a magazine in 2001 in Serbia, and we had a licensing deal with IDG to translate some of their content. And I remember working on this piece from, I think, PC World in the U.S. where they were interviewing Hewlett Packard people about utility computing. And people from Hewlett Packard back then were predicting that in a few years' time, companies would not operate their own stuff, they would use utility computing and things like that. And it's totally amazing that in order to reach us over there, that had to be something that was already evaluated and tested, and there was probably a prototype and things like that. And you had all these giants. Hewlett Packard in 2001 was an IT giant. Amazon was just up-and-coming then and they were a bookstore then. They were not even anything more than a bookstore. And you had, what? A decade later, the tables completely turned where HP's ... I don't know ...
Jeremy: I think they bought Compaq at some point too.
Gojko: You had all these giants, IBM completely missed it. IBM totally missed ...
Jeremy: It really did.
Gojko: ... the whole mobile and web and everything revolution. Oracle completely missed it. They're trying to catch up now but fat chance. Really, we are down to just a couple of massive clouds, or whatever that means, that we interact with as we're interacting with electricity sockets now.
Jeremy: And going back to that utility comparison, or, not really a comparison. It is a utility now. Compute is offered as a utility. Yes, you can buy and generate compute yourself and you can still do that. And I know a lot of enterprises still will. I think cloud is like 4% of the total IT market or something. It's a fraction of it right now. But just from that utility aspect of it, from your experience, you mentioned you had two people and you built, is it MindMup.com?
Gojko: MindMup, yeah.
Jeremy: You built that with just two people and you've got tons of people using it. But just from your experience, especially coming from the world of being a Linux administrator, which again, I didn't administer ... Well, I guess I was. I did a lot of work in data centers in my younger days. But, coming from that idea and seeing how companies were building in the past and how companies are still building now, because not every company is still using the cloud, far from it. But not taking advantage of that utility, what are those major disadvantages? How badly do you think that's going to slow companies down that are trying to innovate?
Gojko: I can give you a story about MindMup. You mentioned MindMup. When was it? 2018, when the Intel processor vulnerabilities were discovered.
Jeremy: Right, yes.
Gojko: I'm not entirely sure what the year was. A few years ago anyway. We got an email from a concerned university admin when the second one was discovered. The first one made all the news, and a month later a second one was discovered. By then everybody knew about it, they were in a panic and things like that. After the second one was discovered, we got an email from a university admin. And universities are big users, they need to protect their data and things like that. And he was insisting that we tell him what our plan was for mitigating this thing because he knows we're on the cloud.
I'm working on European time. The customer was in the U.S., probably somewhere U.S. Pacific, because it arrived in the middle of the night. I woke up, still trying to get my head around it and drinking coffee, and there's this whole sausage of a CVE number that he sent me. I have no idea what it's about. I took that, pasted it into Google to figure out what's going on. The first result I got from Google was that AWS Lambda was already patched. Copy, paste, my day's done. And I assume lots and lots of other people were having a totally different conversation with their IT department that day. And that's why I said I think for products like the one I'm building with video and for MindMup, being able to rent operations as a utility, really totally rent ops as a utility, and not have to worry about anything below my unique business layer, is really, really important.
And yes, we could hire people to work on that, and it could even end up being slightly cheaper technically, but in terms of my time and where my focus goes and my interruptions, I think deploying on a utility platform, whatever that utility platform is, as long as it's reliable, lets me focus on adding value where I can actually add value. That makes my product unique rather than the generic stuff.
Jeremy: You mentioned the video product that you're working on too, and something that is really interesting I think too about taking advantage of the cloud is the scalability aspect of it. I remember, it was maybe 2002, maybe 2003, I was running my own little consulting company at the time, and my local high school always has a rivalry football game every Thanksgiving. And I thought it'd be really interesting if I was to stream the audio from the local AM radio station. I set up a server in my office with ReelCast Streaming or something running or whatever it was. And I remember thinking as long as we don't go over 140 subscribers, we'll be okay. Anything over that, it'll probably crash or the bandwidth won't be enough or whatever.
And that's just one of those things now, if you're doing any type of massive processing or you need bandwidth, bandwidth alone ... I remember T1 lines being great and then all of a sudden it was like, well, now you need a T3 line or something crazy in order to get the bandwidth that you need. Just from that aspect of it, the ability to scale quickly, that just seems like such a huge blocker for companies that need to order and provision servers, maybe get a utility company to come in and install more bandwidth for them, and things like that. That's just stuff that's so far out of scope for building a business to me. At least building a software business or building any business. It's crazy.
Gojko: When I was doing consulting, I did a bit of work for what used to be one of the largest telecom companies in the world.
Jeremy: Used to be.
Gojko: I don't want to name names on a public chat. Somewhere around 2006, '07 let's say, we did a software project where they just needed to deploy it internally. And it took them seven months to provision a bunch of virtual machines to deploy it internally. Seven months.
Jeremy: Wow.
Gojko: Because of all the red tape and all the bureaucracy and all the waiting for capacity and things like that. That's around the time when Amazon's EC2 became commercially available. I remember working with another client and they were waiting for some servers to arrive so they could install more capacity. And I remember just turning on the Amazon console. I didn't have anything useful to run on it then, but just being able to start up a virtual machine in, I think it was less than half an hour, was totally fascinating back then. Here's a new Linux machine and in less than half an hour, you can use it. And it was totally crazy. Now we're getting to the point where Lambda will start up in less than 10 milliseconds or something like that. Waiting for that kind of capacity is just insane.
With the video thing I'm building, because of Corona and all of this remote teaching stuff, for some reason, we ended up getting lots of teachers using the product. It was one of these half-baked experiments because I didn't have time to build the full user interface for everything, and I realized that lots of people are using PowerPoint to prepare that kind of video. I thought well, how about if I shorten that loop, so just take your PowerPoint and convert it into video. Just type up what you want in the speaker notes, and we'll use neural text-to-speech to generate audio and things like that. Teachers like it for one reason or another.
We had this influential blogger from Russia explain it on his video blog and then it got picked up, my best guess from what I could see through Google Translate, by some virtual meeting of teachers of Russia where they recommended people try it out. I woke up the next day, the metrics had gone totally crazy because a significant portion of teachers in Russia had tried my tool overnight in a short space of time. Something like that, I couldn't predict. It's lovely, but as you said, as long as we don't go over a hundred subscribers, we're fine. If I was in a situation like that, the thing would completely crash because it's unexpected. We'd have a thing that's amazingly good for marketing that would be amazingly bad for business because it would crash all the capacity we had. Or we would have had to prepare for a lot more capacity than we needed. But because this is all running on Lambda, Fargate, and other auto-scaling things, it's just fine. No sweat at all. It was a lovely thing to see actually.
Jeremy: You actually have two problems there if you're not running in the cloud or not running on-demand compute. One, you would've potentially failed, things would've fallen over, you would've lost all those potential customers, and you wouldn't have been able to grow.
Gojko: Plus you've lost paying customers who are using your systems, who've paid you.
Jeremy: Right, that's the other thing too. But, on the other side of that problem would be you can't necessarily anticipate some of those things. What do you do? Over-provision and just hope that maybe someday you'll get whatever? That's the crazy thing where the elasticity piece of the cloud, to me, is such a no-brainer. Because I know people always talk about, well, if you have predictable workloads. Well yeah, I know we have predictable workloads for some things, but if you're a startup or you're a business that has like ... Maybe you pick up some press. I worked for a company where we picked up some press. We had 10,000 signups in a matter of like 30 seconds and it completely killed our backend MySQL database. Those are hard to prepare for if you're hosting your own equipment.
Gojko: Absolutely, and not just if you're hosting your own.
Jeremy: Also true, right.
Gojko: Before moving to Lambda, the app was deployed to Heroku. That was basically, you need to predict how many virtual machines you need. Yes, it's in the cloud, but if you're running on EC2 and you have your 10, 50, 100 virtual machines, whatever running there, and all of a sudden you get a lot more traffic, will it scale or will it not scale? Have you designed it to scale like that? And one of the best things that I think Lambda brought as a constraint was forcing people to design this stuff in a way that scales.
Jeremy: Yes.
Gojko: I can deploy stuff in the cloud and make it a distributed monolith, so it doesn't really scale well. But with Lambda, because it was so constrained when it launched, and this is one thing you mentioned, partially we're losing those constraints now, but it was so constrained when it launched, it was really forcing people to design things that were easy to scale. We had total isolation, there was no way of sharing things, there was no session stickiness and things like that. And then you have to come up with actually good ways of resolving that.
I think one of the most challenging things about serverless is that even a Hello World is a distributed transaction processing system, and people don't get that. They think, well, I had this DigitalOcean five-dollar-a-month server and it was running my, you know, Rails app correctly. I'm just going to use the same ideas to redesign it in Lambda. Yes, you can, but then you're not going to really get the benefits of all of this other stuff. And if you design it as a massively distributed transaction processing system from the start, then yes, it scales like crazy. And it scales up and down and it's lovely. But as Lambda's maturing, I have this slide deck that I've been using since 2016 to talk about Lambda at conferences. And every time I need to do another talk, I pull it out and adjust it a bit. And I have this whole Git history of it because I do markdown to slides and I keep the markdown in Git so I can go back. There's this slide about limitations where originally it's only ... I don't remember what the time limitation was, but something very short.
Jeremy: Five minutes originally.
Gojko: Yeah, something like that, and then it was no PCI compliance and the retries are difficult, and all of this stuff basically became solved. And one of the last things that was there was: don't even try to put it in a VPC; you definitely can, but it's going to take 10 minutes to start. Now that's reasonably okay as well. One thing that I remember as a really important design constraint was that it was effectively a share-nothing platform, because you could not easily share data between two Lambdas running at the same time in the same VM. Now that we can connect Lambdas to EFS, you effectively can do that as well. You can have two Lambdas, one writing into an EFS, the other reading the same EFS at the same time. No problem at all. You can pump data into a file and the other thing can just read that file and get the data out.
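A minimal sketch of the shared-EFS pattern described here, assuming both functions are attached to the same VPC and mount the same EFS access point at a path like /mnt/shared; the mount path, event fields, and file layout are illustrative rather than MindMup's actual setup:

```python
import json
import os

MOUNT_PATH = "/mnt/shared"  # hypothetical EFS access point mount path configured on both functions

def writer_handler(event, context):
    """First Lambda: pump incoming data into a file on the shared EFS volume."""
    job_id = event["jobId"]
    with open(os.path.join(MOUNT_PATH, f"{job_id}.json"), "w") as f:
        json.dump(event["payload"], f)
    return {"written": job_id}

def reader_handler(event, context):
    """Second Lambda: read the same file back out, possibly while writers are still running."""
    path = os.path.join(MOUNT_PATH, f"{event['jobId']}.json")
    if not os.path.exists(path):
        return {"status": "not ready"}
    with open(path) as f:
        return {"status": "ready", "payload": json.load(f)}
```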
As the platform is maturing, I think we're losing some of these design constraints, and sometimes constraints breed creativity. And yes, you still of course can design the system to be good, but it's going to be interesting to see. And this 15-minute limit that we have in Lambda now is just an artificial number that somebody picked.
Jeremy: Yeah, it's arbitrary.
Gojko: And at some point, when somebody who is important enough asks AWS to give them half-hour Lambdas, they will get that. Or 24-hour Lambdas. It's going to be interesting to see if Lambda ends up as just another way of running EC2, and starting EC2, that's simpler because you don't have to manage the operating system. And I think the big difference we'll get between EC2 and Lambda is what percentage of ops your developers are responsible for, and what percentage of ops Amazon's developers are responsible for.
Because if you look at all these different offerings that Amazon has, like Lightsail and EC2 and Fargate and AWS Batch and CodeDeploy, and I don't know how many other things you can run code on, the big difference with Lambda really, at least until very recently, was that apart from your application, Amazon is responsible for everything. But now, we're losing design constraints. You can put a Docker container in, you can be responsible for the OS image as well, which again is a bit interesting to look at.
Jeremy: Well, I also wonder too, if you took all those event sources that you can point at Lambda and you add those to Fargate, what's the difference? It seems like they're just merging into two very similar products.
Gojko: For the video build platform, the last step runs in Fargate because people are uploading things that are massive, massive, massive for video processing, and they just don't finish in 15 minutes. I have to run that in Fargate, and the big difference is that the container I packaged up for Fargate takes about 40 seconds to actually deploy for a new event. I can optimize that, but I can't optimize it too much. Fargate is still an order of magnitude of tens of seconds to start processing an event. I think as Fargate gets faster and as Lambda gets more of these capabilities, it's going to be very difficult to tell them apart I think.
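A rough sketch of the kind of handoff described here, where a small Lambda hands work that won't finish within the 15-minute limit off to a Fargate task; the cluster, task definition, subnet, and container names are made-up placeholders, not the actual video pipeline:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    response = ecs.run_task(
        cluster="video-processing",            # hypothetical ECS cluster
        launchType="FARGATE",
        taskDefinition="render-video:3",       # hypothetical task definition
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "renderer",            # container name from the task definition
                "environment": [{"name": "SOURCE_KEY", "value": event["sourceKey"]}],
            }]
        },
    )
    # The image still has to be pulled and the container started, which is why this
    # path takes tens of seconds to begin work instead of milliseconds.
    return response["tasks"][0]["taskArn"]
```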
With Fargate, you're intended to manage the container image yourself. You're responsible for patching software, you're responsible for patching OS vulnerabilities and things like that. With Lambda, unless you use a container image, Amazon is responsible for that. They come close. When looking at this video building for the first time, I was actually comparing the options. I was considering using CodeBuild for that, because CodeBuild is also a way to run things on demand in containers, and you can actually get quite decent machines with CodeBuild. And it's also event-driven, and Fargate is event-driven, AWS Batch is event-driven, and all of these things are converging on each other. And really, AWS is famous for having 10 products that do the same thing effectively and you can't tell them apart, and maybe that's where we'll end up.
Jeremy: And I'm wondering too, the thing that was great about Lambda, at least for me, like you said, was the shared-nothing architecture, where you almost didn't have to rely on anything other than the event that came in and the processing of that Lambda function. And if you designed your systems well, you may have had some bottleneck up front, but especially if you used distributed transactions and you used async invocations of downstream functions, you could basically pass in the data each function needed, and then it wouldn't necessarily need to communicate with anything other than itself to process that data. The scale there was massive. You could just keep scaling and scaling and scaling. As you add things like EFS, that adds constraints in terms of the number of transactions and connections that it can make, and all those sorts of things. Do these things, do they become less reliable? By allowing it to do more, are we building systems that are less reliable because we're not using some of those tried-and-true constraints that were there?
Gojko: Possibly, but every time you add a new moving part, you create one more potential point of failure there. And I think for me, one of the big lessons was when I was working on ... I spent a few years working on very high throughput transaction processing systems. That's why this whole thing rings a bell a lot. A lot of it really was how do you figure out what type of messages you send and where you send them. The craze around these messaging and distributed transaction processing systems in the early 2000s created the whole craze of enterprise service buses that came later. We now have this ... What is it called? It's not called an enterprise service bus, it's called EventBridge, or something like that.
Jeremy: EventBridge, yes.
Gojko: That's effectively an enterprise service bus, it's just that the enterprise is the Amazon cloud. The big challenge in designing things like that is decoupling. And it's realizing that when you have a complicated system like that, stuff is going to fail. And especially when we were operating our own hardware, stuff is going to fail badly occasionally, and you need to not bring the whole house down when some storage starts working a bit slower. You create circuit breakers, you create layers and layers of stuff that disconnect things. I remember when we were looking originally at Lambdas and trying to get our heads around that and experimenting, should one Lambda call another? Or should one Lambda not call another? And things like that.
I realized, let's say for now, until we realize we want to do something else, a Lambda should only ever talk to SNS and nothing else. Or SQS or something like that. When one Lambda completes, it's going to drop a message somewhere, and we need to design these messages to be good so that we can decouple different parts of the process. And so far, that has helped us as a constraint. I think very, very few times do we have one Lambda calling another. Mostly it's when we actually need a synchronous response back, and for security reasons we wanted to isolate something to a single Lambda, but that's effectively just black-box security isolation. So creating these isolation layers through messages, through queues, through topics, becomes a fundamental part of designing these systems.
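A minimal sketch of that "a Lambda only ever talks to SNS or SQS" constraint: the function finishes its step and drops a message on a topic rather than invoking the next Lambda by name. The topic ARN and message shape are illustrative, not the actual protocol:

```python
import json
import os

import boto3

sns = boto3.client("sns")
# Hypothetical topic that downstream Lambdas subscribe to.
TOPIC_ARN = os.environ.get("COMPLETED_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:step-completed")

def handler(event, context):
    result = do_work(event)  # whatever this step is responsible for
    # Publish a well-designed message; downstream functions subscribe to the topic
    # instead of being called directly, so either side can be rewritten freely.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"entityId": event["entityId"], "result": result}),
        MessageAttributes={"eventType": {"DataType": "String", "StringValue": "step-completed"}},
    )

def do_work(event):
    return {"ok": True}  # placeholder for the step's real logic
```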
I remember speaking at a conference to somebody, I forget the name of the person, who was talking about airlines. And he was presenting after me and he said, "Look, I can relate to a lot of what you said." Apparently, I'm not an airline programmer, but he told me that in the airline community they talk about designing the protocol as being the biggest challenge. Once you design the protocol between your components, the messages, who sends what where, you can recover from almost any other design flaw because it's decoupled. So if you've made a mess in one Lambda, you can redesign that Lambda, throw it away, rewrite it, decouple things a different way. If the global protocol is good, you get all the flexibility. If you mess up the protocol for communication, then nothing's going to save you at the end.
Now we have EFS and Lambda can talk to an EFS. Should this Lambda talk directly to an EFS or should this Lambda just send some messages to a topic, and then some other Lambdas that are maybe reserved, maybe more constrained talk to EFS? And again, the platform's evolved quite a lot over the last few years. One thing that is particularly useful in that regard is the SQS FIFO queues that came out last year I think. With Corona ...
Jeremy: Yeah, whenever it was.
Gojko: Yeah, I don't remember if it was last year or two years ago. But one of the things it allows us to do is really run lots and lots of Lambdas in parallel, where you can guarantee that no two Lambdas access the same kind of business entity at the same time. For example, for this mind mapping thing, we have lots and lots of people modifying lots and lots of files in parallel, but we need to aggregate a single map. If we have 50 people over here working with a single map and 60 people working on a different map, aggregation can run in parallel, but I never, ever want the aggregation for two people modifying the same map to run in parallel.
And for Lambda, that was a massive challenge. You had to put Kinesis between Lambda and other Lambdas and things like that. Kinesis is provisioned capacity, it costs a lot, it doesn't auto-scale. But now with SQS FIFO queues, you can just send a message and you can say the FIFO group ID is this map ID that we have. Which means that SQS can run thousands of Lambdas in parallel, but it'll never run more than one Lambda for the same map ID at the same time. Designing your protocols like that becomes how you decouple one end of your app that's massively scalable and massively parallel, and another end of your app where you have some reserved capacity or limits.
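A short sketch of that FIFO trick: every change is enqueued with the map's ID as the MessageGroupId, so SQS will run many Lambdas in parallel across different maps but never two at once for the same map. The queue URL and message shape are placeholders:

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/map-aggregation.fifo"  # placeholder queue

def enqueue_change(map_id: str, change: dict, change_id: str):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"mapId": map_id, "change": change}),
        MessageGroupId=map_id,             # serializes processing per map, parallel across maps
        MessageDeduplicationId=change_id,  # or enable content-based deduplication on the queue
    )
```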
Like for this kind of video thing, the original idea was letting me build marketing videos more easily, and I can't get rid of this accent. Unfortunately, everything I do sounds like I'm threatening someone to blackmail them. I'm like a cheap Bond villain, and that's not good, but I can't do anything else. I can pay other people to do it for me and we used to do that, but then that becomes a big problem when you want to modify tiny things. We paid this lady to professionally record audio for a marketing video that we needed, and then six months later, we wanted to change one screen and now the narration is incorrect. And we paid the same woman again. Same equipment, same person, but the sound is totally different because they were two different recordings.
Jeremy: Totally different, right.
Gojko: You can't just stitch it up. Then you end up like, okay, do we go and pay for the whole thing again? And I realized the neural text-to-speech has learned so much that it can do English better than I can. You're a native English speaker so you can probably defeat those machines, but I can't.
Jeremy: I don't know if I could. They're pretty good now. It's kind of scary.
Gojko: I started looking at it like, why don't we just put stuff in Markdown and use Markdown to generate videos and things like that? With all of these things, you still get quota limits. I think we were limited on Google. Google gave us something like five requests per second in parallel, and it took me a really long time to even raise these quotas and things like that. I don't want to have lots of people requesting stuff and then, in parallel, thrashing this other thing over there. We need to create these layers so things run within a decent limit, and that's where I think designing the protocol for this distributed system becomes important.
Jeremy: I want to go back because I think you bring up a really good point just about a different type of architecture, or the architectural design of decoupling systems and these event-driven things. You mentioned a Lambda function processes something and sends it to SQS, or sends it to SNS so it can do a fan-out pattern, or in the case of the FIFO queue, an ordered pattern for sequential processing, and those are all great patterns. And even things that AWS has done, such as adding Lambda Destinations. If you run an asynchronous Lambda function, you used to have to write some code that said, "When this is finished processing, now call some other component." And that's just another opportunity for failure there. They basically said, "Well, if it succeeds, then you can actually just forward it off to one of these other services automatically and we'll handle all of the retries and all the failures and that kind of stuff."
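A small sketch of what configuring Lambda Destinations for an async-invoked function looks like; the function name and destination ARNs are placeholders rather than a real system:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_event_invoke_config(
    FunctionName="process-order",  # hypothetical async-invoked function
    MaximumRetryAttempts=2,
    DestinationConfig={
        # On success, the platform forwards the result onward; no hand-written glue code.
        "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:next-step"},
        # On failure, the event and error details go somewhere you can inspect and replay.
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:failed-orders"},
    },
)
```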
And those things have been added in to basically give you that warm and fuzzy feeling that if an event doesn't reach where it's supposed to go, some sort of cloud trickery will kick in and make sure that it gets processed. But what that has introduced, I think, is a cognitive overload for a lot of developers that are designing these systems, because you're no longer just writing a script that does X, Y, and Z and makes a few database calls. Now you're saying, okay, I've got to write a script that can massively scale and take the transactions that I need to maybe parallelize, or that I maybe need to queue or delay or throttle or whatever, and pass those down to another subsystem. And then that subsystem has to pick those up, and maybe that has to parallelize those, or maybe there are failure modes in there, and I've got all these other things that I have to think about.
Just that effect on your average developer, I think you and I think about these things. I would consider myself to be a cloud architect, if that's a thing. But essentially, do you see this being I guess a wall for a lot of developers and something that really requires quite a bit of education to ramp them up to be able to start designing these systems?
Gojko: One of the topics we touched upon is the cyclic nature of things, and I think we're going back to where moving from apps working on a single machine to client-server architectures was a massive brain melt for a lot of people. And three-tier architectures, which came later, where it's not just client-server, ended up with their own host of problems and design problems and things like that. That's where a lot of these architectural patterns and design patterns emerged, like circuit breakers and things like that. I think there's a whole body of knowledge there for people to research. It's not something that's entirely new, and I think you can get started with Lambda quite easily and not necessarily make a mess, but make something that won't necessarily scale well, and then start improving it later.
That's why I was mentioning that earlier in the discussion where, as long as the protocol makes sense, you can salvage almost anything later. Designing that protocol is important, but then we're getting into good software design. I think teaching people how to do that is something that every 10 years, we have to recycle and reinvent and figure out, because people don't like to read books from more than 10 years ago. All of this stuff like designing fault-tolerant systems and fail-safe systems, and things like that. There's a ton of books about that from 20 years ago, from 10 years ago. Amazon, for people listening to you and me, they probably use Amazon more for compute than they use for getting books. But Amazon has all these books. Use it for what Amazon was originally intended for and then get some books there and read through this stuff. And I think looking at the design of distributed systems and stuff like that becomes really, really critical for Lambdas.
Jeremy: Yeah, definitely. All right, we've got a few minutes left and I'd love to go back to something we were talking about a little bit earlier and that was everything moving onto a few of these major cloud providers. And one of the things, you've got scale. Scale is a problem when we talked about oh, we can spin up as many VMs as we want to, and now with serverless, we have unlimited capacity really. I know we didn't say that, but I think that's the general idea. The cloud just provides this unlimited capacity.
Gojko: Until something else decides it's not unlimited.
Jeremy: And that's my point here, where with every major cloud provider that I've been involved with, and I've heard the stories of, when you start to move the needle at all, there's always an SA that reaches out to you and really wants to understand what your usage is going to be, and what your patterns are going to be. And that's because they need to make sure that, where you're running your applications, they provision enough capacity, because there's not unlimited capacity in the cloud.
Gojko: It's physically limited. There are only so many buildings where you can have data centers on the surface of the Earth.
Jeremy: And I guess that's where my question comes in because you always hear these things about lock-in. Like, well serverless, if you use Lambda, you're going to be locked in. And again, if you're using Oracle, you're locked in. Or, you're using MySQL you're locked in. Or, you're using any of the other things, you're locked in.
Gojko: You're actually not locked in physically. There's a key and a lock.
Jeremy: Right, but this idea of being locked in not to a specific cloud provider, but just locked into a cloud in general and relying on the cloud to do that scaling for you, where do you think the limitations there are?
Gojko: I think again, going back to cyclic, cyclic, cyclic. The PC revolution started when a lot more edge compute was needed than mainframes could offer, and people wanted to get stuff done on their own devices. And I think probably, if we do ever see the limitations of this and it goes into a next cycle, my best guess is it's going to be driven by lots of tiny devices connected to a cloud. Not necessarily computers as we know computers today. I pulled out some research preparing for this, from IDC. They are predicting basically that the data needed for IoT will go from 18.3 zettabytes in 2019 to 73.1 zettabytes by 2025. That's roughly four times as much in the space of six years. If you went to Amazon now and told them, "You need to have three times more data space in three years," I'm not sure how they would react to that.
This stuff, everything is taking more and more data, and everything is more and more connected to the cloud. The impact of something like that going down now is becoming totally crazy. There was a case where S3 started getting a bit more latency than usual in US East 1, in I think February of 2017, or something like that. There were cases where people couldn't turn the lights on in their houses because the management software was running on S3 and depending on S3. Expecting S3 to be indestructible. Last year, in November, Kinesis pretty much went offline, as far as everybody outside AWS was concerned, for about 15 hours I think. There were people on Twitter saying that they couldn't get back into their house because their smart lock was no longer that smart.
And I think we are getting to places where there will be more need for compute on the edge. First of all, there's going to be a lot more demand for data centers and cloud power, and I think that's going to keep going on for the next five, ten years. But then people will realize they've hit some limitation of that, and they're going to start moving towards the edge. And we're going from mainframe back into client-server computing, I think. We're getting these products now. I assume most of your listeners have seen one, like all these fancy Ubiquiti Wi-Fi thingies that cost hundreds of dollars and look like pieces of furniture just sitting discreetly on the wall. And there was a massive security breach published yesterday. Somebody took their AWS keys and took all the customer data and everything.
The big advantage over all the ugly routers was that it's just like a thin piece of glass that sits on your wall, and it's amazing and it looks good, but the reason why they could make a very thin piece of glass is that a minimal amount of software is running on that piece of glass, and the rest is running in the cloud. It's not just lock-in in terms of is it on Amazon or Google, it's that it's so tightly coupled with something totally outside of your home, where your network router needs Amazon to be alive, now, in a very specific region of Amazon where everybody's been deploying for the last 15 years, and which is running out of capacity often enough.
There are some really interesting questions that I guess we'll answer in the next five, ten years. We're on the verge of IoT exploding, I think, because people are trying to come up with these new products that you wouldn't even have thought of before, where you'd have smart shoes and smart whatnot. Smart glasses and things like that. And when that gets into consumer technology, we're no longer going to have five or ten computing devices per person, we'll have dozens and dozens of computing devices. I guess think about it this way: fifteen years ago, how many computing devices were you carrying with you? Probably a mobile phone and a laptop. Probably not more. Now, in the headphones you have there that's Bose ...
Jeremy: Watch.
Gojko: ... you have a microprocessor in the headphones, you have your watch, and you're carrying a ton of other stuff with you that's low-powered, all doing a bit of processing there. A lot of that processing is probably happening on the cloud somewhere.
Jeremy: Or, it's just sending data. It's just sending, hey here's the information. And you're right. For me, I got my Apple Watch, my thermostat is connected to Wi-Fi and to the cloud, my wife just bought a humidifier for our living room that is connected to Wi-Fi and I'm assuming it's sending data to the cloud. I'm not 100% sure, but the question is, I don't know why we need to keep track of the humidity in my living room. But that's the kind of thing too where, you mentioned from a security standpoint, I have a bunch of AWS access keys on my computer that I send over the network, and I'm assuming they're secure. But if I've got another device that can access my network and somebody hacked something on the cloud side and then they can get in, it gets really dangerous.
But you're right, the amount of data that we are now generating and compute that we're using in the cloud for probably some really dumb things like humidity in my living room, is that going to get to a point where... You said there's going to be a limitation like five years, ten years, whatever it is. What does the cloud do then? What does the cloud do when it can no longer keep up with the pace of these IOT devices?
Gojko: Well, if history is repeating, and we'll see if history is repeating, people will start getting throttled, and all of a sudden, your unlimited supply of Lambdas will no longer be an unlimited supply of Lambdas. It will be something that you have to reserve upfront and pay upfront, and who knows, we'll see when we get there. Or we get things that we have with power networks, like you had a Texas power cut there that was completely severe, and you get an IT cut. I don't know. We'll see. The more we go into utility, the more we'll start seeing parallels between compute and power networks. And maybe power networks are something that you can look at and learn from. That's why I think the next cycle is probably going to be some equivalent of client-server computing reemerging.
Jeremy: Yeah. All right, well, I got one more question for you and this is just something where it may be a little bit of a tongue-in-cheek question. Because we talked about it a little bit ... we talked about the merging of Lambda and Fargate and some of these other things. But just from your perspective, serverless five years from now, where do you see that going? Do you see that just becoming the main ... This idea of utility computing, on-demand computing without setting up servers and managing ops and some of these other things, do you see that as the future of serverless, and it just becoming the way we build applications? Or do you think that it's got a different path?
Gojko: There was a tweet by Simon Wardley. You mentioned Simon Wardley earlier in the talk. There was a tweet a few days ago where he mentioned some data. I'm not sure where he pulled it from. This might be unverified, but generally Simon knows what he's talking about. Amazon itself is deploying roughly 50% of all new apps they're building on serverless. I think five years from now, that way of running stuff, I'm not sure if it's Lambda or some new service that Amazon starts and gives it some even more confusing name that runs in parallel to everything. But, that kind of stuff where the operator takes care of all the ops, which they really should be doing, is going to be the default way of getting utility compute out.
I think a lot of these other things will probably remain useful for specialist use cases where you can't really deploy it in that way, or you need more stability, or it's not transient and things like that. My best guess is, first of all, we'll get Lambdas that run for longer, and I assume that after we get Lambdas that run for longer, we'll probably get some ways of controlling routing to Lambdas, because you already can set up provisioned Lambdas and hot Lambdas and reserved capacity and things like that. When you have reserved capacity and you have longer-running Lambdas, the next logical thing there is to have session stickiness, and routing, and things like that. And I think we'll get a lot of the stuff that was really complicated to do earlier, where you had to run EC2 instances or you had to run complicated networks of services, and you'll be able to do it in Lambda.
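For reference, the reserved and provisioned capacity knobs mentioned here already exist today; a brief sketch, with the function names, alias, and numbers purely illustrative:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserved concurrency caps how many copies of a function may run at once,
# useful for protecting a downstream limit.
lambda_client.put_function_concurrency(
    FunctionName="aggregate-maps",          # hypothetical function
    ReservedConcurrentExecutions=10,
)

# Provisioned concurrency keeps a number of pre-warmed instances ready
# behind a published alias or version.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="render-preview",          # hypothetical function
    Qualifier="live",                       # alias or version to keep warm
    ProvisionedConcurrentExecutions=5,
)
```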
And I wouldn't be surprised if they launch a totally new service with some name like AWS CloudSocket, whatever. Something that is an implementation of the same principle, just in a different way, that becomes the default way of running compute for lots of people. And I think GPUs are still a bit limited. I don't think you can run GPUs as a utility anywhere now, and that's limiting for a whole host of use cases. And I think again, it's not that they don't have the technology to do it, it's just that they probably didn't get around to doing it yet. But I assume in five years' time, you'll be able to do GPUs on demand, and processing on GPUs, and things like that. I think that the buzzword itself will lose really any special meaning and that's going to just be a way of running stuff.
Jeremy: Yeah, absolutely. Totally agree. Well, listen Gojko, thank you so much for spending the time chatting with me. Always great to talk with you.
Gojko: You, too.
Jeremy: If people want to get in touch with you, find out more about what you're doing, how do they do that?
Gojko: Well, I'm very easy to find online because there's not a lot of people called Gojko. Type Gojko into Google, you'll find me. And gojko.net works, gojko.com works, gojko.org works, and all these other things. I was lucky enough to get all those domains.
Jeremy: That's G-O-J-K-O ...
Gojko: Yes, G-O-J-K-O.
Jeremy: ... for people who need the spelling.
Gojko: Excellent. Well, thanks very much for having me, this was a blast.
Jeremy: All right, yeah. And make sure you check out ... You mentioned Narakeet. It's a speech thing?
Gojko: Yeah, for developers that want to build videos without hassle, and want to put videos in continuous integration, and things like that. Narakeet, that's like parakeet with an N for narration. Check that out and thanks for plugging it.
Jeremy: Awesome. And then, check out MindMup as well. Awesome stuff. I've got all the stuff in the show notes. Thanks again, Gojko.
Gojko: Thank you. Bye-bye.