The concept of “serverless” is on the minds of many developers and operations teams these days. The technology is definitely hot, but is serverless really ready for prime time in production environments?

To find out, we invited a pair of New Relic experts, senior director of strategic architecture Lee Atchison and developer advocate Clay Smith, back to the show to debate the issue.

Clay, nicknamed “Claybernetes,” was a senior software engineer at several early-stage startup companies and has been building serverless solutions for many years now, from mobile backends to real-time APIs with Amazon Web Services (AWS). He says serverless is ready for prime time and software shops need to start moving in that direction as soon as they can.

Lee, the author of Architecting for Scale who spent seven years at Amazon and was instrumental in the early development of Elastic Beanstalk at AWS, will be taking the opposite position: that whatever it promises down the line, serverless is not yet ready for prime time and software organizations may want to take a wait-and-see approach.

My cohost, New Relic Developer Evangelist Tori Wieldt, and I went along to keep things moving while trying to stay out of the crossfire.

You can listen to the episode below, download all the episodes automatically by subscribing to the New Relic Modern Software Podcast on iTunes or wherever you get your podcasts, or read on below for a full transcript of our conversation, edited for clarity:

New Relic was the host of the attached forum presented in the embedded podcast. However, the content and views expressed are those of the participants and do not necessarily reflect the views of New Relic. By hosting the podcast, New Relic does not necessarily adopt, guarantee, approve or endorse the information, views or products referenced therein.

Fredric Paul: Welcome to a no-holds-barred debate on the age-old question (or maybe it’s a brand-new question): Is serverless technology ready for prime time? Stay tuned. It could get heated.

Tori Wieldt: Enough background, let’s jump in. Is the hype around serverless justified? Will it ever be justified? Round one, go for it.

Clay Smith: It’s interesting because after you’ve been in software for a few years, the answer to any question like that is always “It depends.”

Fred: The weaseling begins already?

Podcast crew, from left to right: Tori Wieldt, Lee Atchison, Fredric Paul, and Clay Smith

Serverless following the hype cycle

Clay: The weaseling is basically immediate on this. I think serverless is following a tried-and-true hype cycle, as have many other things. I think what makes serverless interesting and where the debate gets heated is that it effectively stamps, in a big, big way, “Deprecated!” on a ton of technology. And I empathize with people who may have spent the past three years building out a container platform because when you consider the emergence of serverless, it definitely puts that kind of investment at risk. So, I understand that, but I don’t think that’s an excuse to ignore it.

Lee Atchison: I completely agree with Clay, in general. The cost of running an infrastructure is a real cost, and the promise of serverless is to reduce or eliminate that cost. But that promise is partly hype: infrastructure costs are never going to go away entirely. You’re always going to have to deal with them.

What serverless really promises is to be able to manage that and control that in a way that reduces your investment in the infrastructure and allows you to scale in a way that’s distinct and independent of the infrastructure cost itself.

Tori: Okay, so let’s back up. We threw the meat in the middle of the table, but we need to define what we’re talking about. What is serverless? How do you define the term?

Clay: I’ll take my favorite definition from Paul Johnston, who used to be a senior developer advocate at AWS. He has a really nice one-sentence way to describe it: “A serverless solution is something that costs nothing if no one is using it, excluding data storage cost.”

Lee: That’s a great definition. An even more basic one: serverless is anything that doesn’t expose servers for you to have to deal with.

What do I mean by that? A cloud infrastructure component for which you do not have to understand the server architecture on which it runs is serverless. If you have to understand the architecture it runs on, it is not serverless.

For those of you familiar with AWS, its Elastic Load Balancing is not serverless because you have to understand the servers underlying it, the size of them, how they scale, etc. AWS Lambda is serverless. It obviously still runs on servers, but you don’t care what servers it runs on. You don’t have to allocate those servers, you don’t have to deal with those servers. It is all transparent. You’re not charged based on them. You’re not scaled based on them. You do not have to understand how they work. There are lots and lots of services that are serverless, not just Lambda. Amazon S3 is serverless. Amazon DynamoDB is serverless. It’s a database storage mechanism. And you do not have to understand anything about the underlying servers that it runs on.
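To make Lee’s point concrete, here is a minimal sketch of what a Lambda-style function looks like in Python. The `handler(event, context)` signature matches AWS Lambda’s Python convention, but the event shape is a made-up example; nothing here talks to AWS.

```python
import json

def handler(event, context):
    # All the developer writes is this function: no server process, no port
    # binding, no instance sizing. The platform invokes it once per event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; in production the platform supplies
# the event and context arguments.
print(handler({"name": "serverless"}, None))
```

The “you don’t care what servers it runs on” part is visible in what’s absent: there is no code for provisioning, scaling, or load balancing anywhere in the function.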

Fred: You’re mostly mentioning AWS services. Obviously, there is serverless stuff outside of that ecosystem as well.

Clay: Of course! Microsoft Azure Functions. Google Cloud Functions is still in Beta.

What problem does serverless solve?

Fred: Given all that, what’s the problem that this is supposed to solve?

Clay: From the developer perspective, it’s really compelling because it allows you to focus much more on code and much less on things like scaling.

Lee: Now the fun begins—I’ll start disagreeing with Clay a little bit here. Honestly, I think for the true developer, whether it’s serverless or not is mostly irrelevant because they’re mostly not concerned about the infrastructure it’s running on anyway.

Fred: That’s someone else’s problem?

Lee: Exactly. Now, DevOps muddies that. Certainly, somebody in a DevOps shop does care about the infrastructure it runs on. But it’s a different role that they’re doing within their team to decide what infrastructure to run an application on. That’s different than the role of developing the application.

I would argue that one of the disadvantages of Lambda-style serverless—which is what most people think of when they say “serverless”—is that it puts more requirements on the developer and what they’re allowed to do within their application than other infrastructure technologies do: language requirements, compute-time limits, stack-size limits. The problem is they have to think about the infrastructure early in the design and development of their application versus if they’re using other technologies. The promise of benefits is alluring until you get into the weeds. And once you’re in the weeds, the promise isn’t nearly as strong as it appears to be.

No silver bullets on software engineering

Clay: There’s never a silver bullet in software engineering. There’s a fundamental difficulty of design and architecture, and that’s not going away with any solution. A few years ago, for example, I needed to host a static web page; it was for a small internal application. The process was, I had to open a JIRA ticket with operations, and then two weeks later they gave me a static web server. Now, if there’s an internal S3 bucket, as a developer, there’s not really anything interesting operationally going on there. I just put the files there and I’m good to go.

Lee: I completely agree with you. But we’re talking more of the benefit of the cloud there versus the benefit of serverless. You can do the same thing with allocating the servers you need to run an application. One of the advantages of the cloud is that as a developer, I don’t have to rely on an operations team. I can do what I need to do to get my job done because I have the tools available to me.

Fred: Right. But there’s still something else that you need to do there, and with serverless, you wouldn’t have to do it at all.

Lee: You do something different.

Tori: There’s always a price.

Clay: Let me push back a little bit because that was a cloud example. That was running on AWS. They created an EC2 instance. They put the static web server on the instance. So maybe there’s a level of cloud sophistication, or maybe there’s a broader movement toward managed services. A lot of the benefit attributed to serverless and Lambda is really just people’s greater sophistication in using the hundreds of AWS managed services.

Lee: That’s a perfect way of describing it, and I think you hit on something really solid there. That difficulty of actually creating the instance had nothing to do with the complexity of the task; it had to do with the operations requirements behind the process of creating that instance. You could have created your own AWS account, launched an EC2 instance, and half an hour later had your website up and running. I think it changes what you have to deal with, but I don’t think it simplifies what you have to deal with, at the developer level.

Fred: Is that an issue of the current state of the technology? Or is it fundamental to the concept of serverless?

Lee: No, it’s the current state of the technology.

Fred: So at some point technology advancements could relieve that problem? Not just change it but make it go away?

Lee: Absolutely. One thing that’s happened recently is the merging of the container approach to developing applications and serverless. Amazon ECS with Fargate is a good example of that. People will say Google’s been doing this for a while. Microsoft’s been doing this for a while. That’s fine.

Clay: I would personally not call Fargate serverless.

Lee: I think the perfect world is the ability to take a container and launch it like you do a Lambda function. You can say you want n instances—or have it tell you how many you need—and it just makes them run correctly and makes it all work, without you having to worry at all about the server infrastructure underneath. Fargate is a step in that direction, but you’re absolutely right, it is not there yet either. It’s a technology step.

Server nostalgia

Clay: I think the future of that is extremely interesting, and I completely agree that the line could continue to blur as the tech advances. One thing that surprised me, though, is that even though most servers are virtual machines now, no one is sticking up for them. Regardless of how you feel about serverless and Lambda, no one is defending the server.

Fred: Weren’t we busy hugging them just a few years ago?

Clay: They were our pets, right?

Fred: Right. And now they’re cattle. Have we all gone vegan or what? Is that where we’re going with servers?

Clay: The virtual machine industry is still in the billions of dollars. The life of this stuff seems relatively long and I don’t think serverless fundamentally changes that.

Fred: So we’ll still be talking about servers for a while? Nostalgically, maybe?

Clay: I think when someone makes a tribute music video to their favorite server we’ll know we’ve hit peak nostalgia, but we’re stuck with them for a long time.

Searching for the serverless sweet spot

Lee: What might be the heart of the disagreement that Clay and I have is, What’s the ultimate sweet spot? I personally think the sweet spot is closer to Containers-as-a-Service. I think Clay probably thinks it’s closer to Functions-as-a-Service. And, yeah, I’m putting words in your mouth, please disagree or otherwise.

Fred: Podcast listeners cannot see the pained expression on Clay’s face.

Clay: I think ultimately people are going to have to choose which pattern makes the most sense for what they’re trying to build. And I do think that a key benefit of containers is there is some sense of a lift-and-shift approach for containers—from VMs to containers—that does not exist for serverless. More than that, the required skills to build a really complex serverless solution … not only are the tools immature but you also have to be fairly familiar with event-driven programming.

There’s clearly a need for greater knowledge around that type of software architecture. You know, with all those caveats said, I think there’s enough there, though, even for small solutions that it’ll be a compelling path. Do I think that banks are going to replace their mainframes with serverless in the next five years? Well, no, of course not.

Lee: You’re right, they’re not. You hit the nail on the head there—you clarified by saying, “a class of problem.” And I think you’re absolutely right. There’s a class of problem that function-based computing is perfectly aligned with.

I hear some of our fringe customers saying, “We’re gonna use Functions-as-a-Service for everything that we do from now on moving forward.” When I hear comments like that, I try not to worry. But it’s very hard because Functions-as-a-Service has value but it has a lot of disadvantages, too. You really have to use the right tool for the right job.

What’s going to move to Functions-as-a-Service is a more specialized class of computing that’s better optimized for that type of programming. And that includes smaller applications and includes certain types of applications, including, for instance, high-speed data processing, data conversion, data handling. Things that are very specific, very event-driven, very data flow-focused. Those sorts of applications are going to work very well with Functions-as-a-Service.
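As a rough illustration of the event-driven, data-conversion pattern Lee describes, here is a sketch of a stateless function that transforms a batch of records per invocation. The event schema and field names are hypothetical, loosely modeled on a stream-style trigger.

```python
def convert_records(event, context):
    # Stateless, event-driven data conversion: each invocation receives a
    # batch of raw readings (hypothetical schema) and returns them normalized.
    converted = []
    for record in event.get("records", []):
        fahrenheit = record["temp_f"]
        converted.append({
            "sensor_id": record["sensor_id"],
            "temp_c": round((fahrenheit - 32) * 5 / 9, 1),
        })
    return {"converted": converted}

# Run locally with a sample event; in production the platform would invoke
# this once per batch arriving on the stream.
sample_event = {"records": [{"sensor_id": "a1", "temp_f": 212.0}]}
print(convert_records(sample_event, None))
```

Because the function holds no state between invocations, the platform can fan out as many copies as the data flow requires—the property that makes this class of problem a good fit for Functions-as-a-Service.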

Clay: We will have the need for EC2 and containers for a very long time. The wildcard I see is, what’s the next class of managed services that the different cloud providers are going to release? They keep moving higher and higher up the stack. So, Lambda aside, instead of buying some sort of enterprise storage box for $500,000, now you just use S3. Instead of the resource-heavy model of “Well, I’m gonna buy storage and compute,” maybe you’re just going to use Amazon API Gateway and integrate with a third-party Software-as-a-Service (SaaS) tool, like Twilio, if you want to notify your customers. Maybe the wildcard isn’t “Am I gonna move to serverless or containers?” It’s “Do I even need to run this code myself anymore, or can I just buy it off the shelf?”

Lee: That’s a fantastic point. One use of Functions-as-a-Service is as “glue,” connecting services of different types for different purposes. AWS services and other services as well. It’s very good at that.

And you’re right, as cloud providers go further up the value curve, as SaaS providers go further up the value curve and provide these higher-level services, building applications becomes less about programming and more about gluing things together. As we get into that model, then the glue that people are going to use is things like Lambda, Functions as a Service. That’s going to be a more prevalent model than container-based and traditional compute.

The real question is, How long until we get there? And is real traditional computing going away? And if so, who’s building those services?

I don’t think computing itself is going away. Service connection is what really matters, but it’s a layering model. There are people at the high end who are going to be doing that, and people at the next layer down who are going to be gluing smaller components together and building some things themselves. And that goes all the way down the stack. There is always going to be building, and there is always going to be gluing.
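The “glue” role Lee describes can be sketched as a function that does little more than translate one service’s event into another service’s request payload. Everything here—the event shape, field names, and the notification format—is hypothetical; no real Twilio or API Gateway call is made.

```python
def build_notification(event, context):
    # Glue code: no business logic of its own, just translating an
    # order-shipped event (hypothetical shape) into the payload a
    # third-party notification service would expect.
    order = event["order"]
    return {
        "to": order["customer_phone"],
        "message": f"Order {order['id']} has shipped.",
    }

# Local illustration; in a real deployment an upstream service would emit
# the event and a downstream SaaS API would receive the payload.
payload = build_notification(
    {"order": {"id": "1234", "customer_phone": "+15550100"}}, None
)
print(payload)
```

The function is tiny and stateless on purpose: when building applications is mostly “gluing things together,” each piece of glue stays small enough that paying per invocation beats running a server for it.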

Clay: Until AI starts doing it for you, right?

Compared with 5 or 10 years ago, the glue approach is becoming more and more attractive for lots of different types of applications. And we’re seeing continued investment from all the major clouds going in that direction, which is super compelling. I have yet to talk to customers who have doubled down on that approach, but it seems like there’s a lot of wishful thinking that the industry will go that way.

Lee: There’s definitely going to be more focus on higher-level service integration as the key to building applications. You can’t disagree with that. That is certainly the way cloud providers want us to go.

Fred: We’re looking forward here, and perhaps this is a good way to put a cap on our discussion. Where do we think the state of serverless will be a year from now, in 2019 or in 2020? What’s the optimistic projection there, Clay, and maybe what’s the—I don’t want to say “pessimistic”—but realistic prediction, Lee?

Clay: I think 2020, best-case scenario: a public cloud customer weighs all their options for building a net-new solution—even one that integrates with older legacy things—and the number of times they can say, “I’m gonna use serverless for this” is over 50%.

Fred: OK. Lee, what’s your take on that?

Lee: With the general use of the word “serverless,” I would agree with Clay. I think the question is the type of serverless. I think there’s going to be more and more use of serverless, but I don’t think that’s necessarily going to translate into Functions-as-a-Service. I think there’ll be more of a whiplash effect. People will try to use Functions-as-a-Service and will end up back at more of a mid-point where they use lots of different things. Functions-as-a-Service and Lambda are some of the tools we’re going to be using, but not the only tools.

More and more services are going to be serverless, like S3 is serverless. Containers-as-a-Service is going to become truly serverless over the next couple of years. But Lambda Functions-as-a-Service is on the way up a hype cycle that it’s going to come back down from.

Fred: Clay, what needs to happen to get that best-case scenario to actually occur in the next year or two?

Clay: It comes down to people and training. The number of qualified people that have production serverless-solution experience is very small. And we’ve seen great programs, different AWS certifications, that aspirationally will get people there. But the number of people who can jump into this sort of software pattern where everything is event driven is just very, very small. So there’s this knowledge that people have to gain to qualify whether this pattern makes sense. And, if they do, they still have to actually build and architect it successfully. That can take some time.

Fred: Lee, what needs to happen to get past your “realistic” scenario?

Lee: I think there needs to be industry growth in Containers-as-a-Service. We need improvements to Fargate technology. We need improvements to the offerings that Amazon, Azure, and Google provide in order to make that the right stable platform in the future. There needs to be a realization of the value of Functions-as-a-Service, and people need to decide which of those two directions is the right one for their particular problem.

 

If you like what you hear, be sure to subscribe to the New Relic Modern Software Podcast on iTunes, SoundCloud, or Stitcher.

Note: The intro music for the Modern Software Podcast is courtesy of Audionautix.

 

fredric@newrelic.com

Fredric Paul (aka The Freditor) is Editor in Chief for New Relic. He's an award-winning writer, editor, and content strategist who has held senior editorial positions at ReadWrite, AllBusiness.com, InformationWeek, CNET, Electronic Entertainment, PC World, and PC|Computing. His writing has appeared in MIT Technology Review, Omni, Conde Nast Traveler, and Newsweek, among other places.

Interested in writing for New Relic Blog? Send us a pitch!