5 - The API Economy

Posted on Saturday, Apr 25, 2020

Show Notes

Hello and welcome back to Cloud with Chris! You're with me - Chris Reddington, and we'll be talking about all things cloud. Now, the backlog of episodes continues to look healthy, with a number of additional session ideas planned and a number of guests scheduled to come in. But, please do keep your thoughts coming in, and if you find these podcasts useful, please do continue to share - and I hope that we can grow the community that supports it!

Let's introduce the next episode - we have another guest! We're starting to bring a few of those previous topics together in this episode. We touch upon requirements, DevOps, and building applications - or rather APIs - in the cloud. In this episode, I talk with a colleague and friend, Peter Piper, on factors that relate to and impact API design. So, without further ado… here we go!

Chris: Hi everyone, welcome to this episode of Cloud with Chris! Now in this episode, we're going to be evolving the conversation that we've been having over recent episodes. When we think about architectures, we may think about the evolution of some of those architectures that we see. You may think of n-tier architectures, you may think of microservices. When you think about those microservice style architectures - or pretty much any architecture - one of the things that we think more and more about these days is our APIs, and how we provide access to some of our data and some of the different resources that might be valuable to our users. Those users might be internal. Those users might be external. But there's a lot of gravity in those APIs, and that's really the trend that we hear about within the industry these days. So I'm very, very pleased to say that I'm joined today by a colleague and a friend in architecture, Mr. Peter Piper. Peter, how are you doing sir?

Peter: Hi Chris, I'm doing well. How about yourself?

Chris: All good thank you sir, all good. Now, Peter - I know that this topic is something which is a bit of a passion area for you. I know that you've had some discussions or conversations about this in local user groups that you are involved with as well. So, maybe let's start at the beginning and frame the discussion. When you think about APIs and API design - why is this relevant in today's world?

Peter: APIs have been used across many different paradigms within our technology and, more importantly, the trend and the mindset around APIs is to have intercommunication between various subsystems or third party solutions - whether they're transported over web-based protocols such as HTTP, or used internally. So, well-architected APIs allow high cohesion and low coupling, so that they can be malleable, they can evolve, they can be versioned. The trend and the design have been around since - ooh - the fifties, since systems have started (that I've seen). And, more importantly - it allows you to continue to evolve your product and your solution and still be able to integrate with others.

Chris: Gotcha, so there's a couple of themes in there that you are saying. It's the integration with other services (whether they are internal and first party, or whether they are third party), but also really providing extra value to your end users and giving them a way of interfacing into your own system; so that they can rely on your services programmatically as well. So, it's both about providing value to the end users themselves but also being able to integrate into other systems.

Peter: Exactly. And more importantly, not everybody is going to be a subject matter expert with respect to the domain that the API provides (in terms of data, services, etc.). So, not having to have that business acumen around the domain allows people to build APIs that can be consumed by many parties.

Chris: Mm. So by providing that easy entry point, you're there to provide the service and you abstract away a load of that complexity, so that your end users can focus on the thing that they want to focus on. You deliver the value of the insight in that data (or whatever that API is providing). So then, if we think about maybe delving into that API design thought a little bit more… I know something as well that you're very interested in is requirements. How can requirements drive API design? Are there certain factors, certain design decisions or design points that you commonly see affecting how you might shape or frame what your APIs might look like in an organisation, for example?

Peter: Sure. Now, with respect to requirements - they're not going to be fully baked in today's frame of application development, and that is being agile. So the requirements come through over time, and things get exposed. Your API operations, your contracts, your inner business processes will have to become versioned, or more malleable, to support the evolution and change of the API - especially as time progresses. So many APIs that we see in our current technology landscape have been around for 10, 15, maybe 20 years, but they've evolved over time. So to the point of having cohesion and low coupling - it allows you to provide your services and not affect your users until they are ready to upgrade to the latest version of your particular API. So with respect to requirements, supporting a versioning model is critical, so that things are not broken. Versioning could be either major or minor - it doesn't matter, as long as you communicate that to your consumers, your internal users and your internal development team, so that they can provide that framework within the API as it gets built. That's the biggest requirement. The other is understanding data contracts and how they evolve over time - in other words, the data schemas that you're abstracting to your consumers. That is the biggest pain point that I've seen over time. Then, the protocol. Typically it's HTTP, but whether it's SOAP based or REST based - that is a thing that has evolved. I still see SOAP based APIs. But with the technologies that are available now, you can transform the request from REST to SOAP and the result back to REST. So, understanding how you can repurpose and continue to use your legacy or older APIs in the current landscape is another thing.
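To make Peter's versioning point concrete, here is a minimal sketch of two API versions living side by side - assuming a Python/FastAPI service; the `Order` models, fields and routes are purely illustrative and were not discussed in the episode:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class OrderV1(BaseModel):
    id: int
    total: float

class OrderV2(BaseModel):
    id: int
    total: float
    currency: str = "USD"  # new field exposed only on the v2 contract

# v1 keeps serving existing consumers untouched...
@app.get("/v1/orders/{order_id}", response_model=OrderV1)
def get_order_v1(order_id: int):
    return OrderV1(id=order_id, total=42.0)

# ...while v2 evolves the data contract for consumers who are ready to upgrade.
@app.get("/v2/orders/{order_id}", response_model=OrderV2)
def get_order_v2(order_id: int):
    return OrderV2(id=order_id, total=42.0, currency="GBP")
```

Path-based versioning is only one option (headers or query strings work too), but the key point from the conversation is that both contracts stay published until consumers choose to move.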

Chris: There's so many great things that you just said in there, Peter. You mentioned the concept of versioning and this idea of our consumers depending on the different versions. To be clear, when we talk about this - when we say consumer - lots of people might be thinking of an external party calling into some API where maybe there's some rate limiting. But we don't just mean those users. It could be one microservice depending on another microservice, and that's where that whole contract idea comes into play, right? Where you have this decoupling between the different APIs, but there's this cohesion… this ability to somewhat depend on each other - but not so directly that if something goes wrong, everything goes wrong on the dependent system. So, I really love that idea there. Actually - I think you managed to plant the segue into my next question here, which is around legacy applications. You said at the end there that you can almost convert some of those REST API calls into things like SOAP calls. I guess there's maybe some kind of pattern that you might be thinking about there (and I don't want to second guess that), but I'm curious - how can we breathe new life into some of those legacy APIs and bring them into a modern architecture? I guess that's one of the great things about the cloud. You don't necessarily have to just directly lift and shift. You don't just have to re-architect. You could do a blend of both. So, what options do people have when they think about reinvigorating some of those legacy APIs, for example?
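Peter mentioned transforming a REST request into a SOAP call (and the result back again). As a rough sketch of what that wrapping can look like from Python - the endpoint, namespace and `GetOrder` operation below are hypothetical stand-ins for a real legacy service, and in a gateway product this translation would live in a policy rather than code:

```python
import requests

LEGACY_SOAP_ENDPOINT = "https://legacy.example.com/OrderService.asmx"  # hypothetical legacy service

def get_order_via_soap(order_id: int) -> str:
    """Wrap a simple REST-style lookup into a SOAP 1.1 envelope for the legacy backend."""
    envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders">
      <orderId>{order_id}</orderId>
    </GetOrder>
  </soap:Body>
</soap:Envelope>"""
    headers = {
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/orders/GetOrder",
    }
    # The caller sees a plain function call; the legacy service still receives SOAP.
    return requests.post(LEGACY_SOAP_ENDPOINT, data=envelope, headers=headers, timeout=10).text
```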

Peter: Sure, sure. The biggest thing is understanding who your consumers are, so that you can understand impact. Not only that - it will allow you to continue to drive adoption, and still continue to provide that value, first and foremost, to the consumer of that particular API - whether the consumer, to your point, is internal or external. So that's the first thing - just understanding who's doing what with whatever information, for whatever entry point you are providing. That's the biggest perspective, and that will then drive a little bit more of how you evolve to this new paradigm that is much more REST based versus the SOAP based services. To that point, there are different patterns, and they don't necessarily have to live within a particular technology - whether that be a MuleSoft or an Apigee or an Azure API Management type of product. But there are patterns to support that transformation, to provide that level of abstraction - because ultimately, your API is providing that level of abstraction over your underlying data set. I've seen instances where the data schema is just one-to-one all the way up to the data contracts. That's not quite ideal, but nevertheless - understand not just the pattern, but (especially with microservices) the bounded context: the limited domain which your data contract is going to provide. That way, you can have quick, rapid release cycles. Typically, those SOAP based legacy services are much more monolithic - think of a master data management type system - and, more importantly, the set of consumers becomes much broader in terms of enterprise consumption. So understand not just the pattern that you've established, which allowed you to gain that adoption and those consumers; as you continue to evolve, you could keep the monolith in a REST based model and then start applying what they call a strangler pattern, to perform the decomposition into a microservices model. That way, you can have a blend between microservices and a much more monolithic approach.
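The strangler-style decomposition Peter describes boils down to a routing decision in a facade: migrated paths go to the new microservice, everything else still hits the monolith. A minimal sketch, assuming hypothetical URLs and path prefixes:

```python
import requests

LEGACY_BASE = "https://legacy.example.com/api"          # existing monolith (placeholder URL)
ORDERS_SERVICE = "https://orders.internal.example.com"  # newly carved-out microservice (placeholder URL)

# Paths that have already been migrated to the new service.
MIGRATED_PREFIXES = ("/orders",)

def route_request(path: str) -> requests.Response:
    """Facade routing: send migrated paths to the new service, everything else to the monolith."""
    base = ORDERS_SERVICE if path.startswith(MIGRATED_PREFIXES) else LEGACY_BASE
    return requests.get(f"{base}{path}", timeout=10)

# Example: "/orders/42" goes to the new service, "/customers/7" still goes to the monolith.
```

As more of the monolith is decomposed, prefixes move from the legacy side to the new services without consumers ever changing the URL they call.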

Chris: What you are alluding to there is also the idea of having this façade, right? This piece right in front of our APIs that abstracts away some of that complexity, abstracts away some of those underlying details of the API that we don't need to be aware of. Because then what we can do is decouple this idea of having a direct dependency on a specific API version, or a dependency on certain things that we need to go and call into. We might for example at some point want to move from one version of an API to another, and we need a way of being able to switch across from that one version to the other. We might for example want to go ahead and mock or stub out what responses might look like, just so we can let any of those development teams start playing with what that API might look like. I guess that's really what we're talking about, isn't it? Having that - almost - translation layer in between whatever the client is and that back-end API call.

Peter: Correct, yeah. And to your point - yes, those particular design patterns are highly visible at the front end. But there are other ones. For example, backends for frontends. So, if you are developing APIs for particular platforms (such as mobile devices), you have to design your API in a slightly different fashion. Especially when you're dealing with SOAP based services. If you want to keep that legacy, and not have to rewrite it, then you could have this particular API management component (whether that is in the Azure platform or any other cloud platform) to support that particular consumer - whether it be a tablet device, a phone device or a computer.
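As a small illustration of the backends-for-frontends idea, here is a hedged sketch of a mobile-facing endpoint that calls a (hypothetical) catalogue backend and trims the payload down to what a phone actually needs - the service URL and field names are assumptions for the example:

```python
import requests
from fastapi import FastAPI

app = FastAPI()
CATALOGUE_API = "https://catalogue.internal.example.com"  # hypothetical backend service

@app.get("/mobile/products/{product_id}")
def product_for_mobile(product_id: int):
    """Mobile BFF endpoint: fetch the full backend payload, return only what the device needs."""
    full = requests.get(f"{CATALOGUE_API}/products/{product_id}", timeout=10).json()
    return {
        "id": full["id"],
        "name": full["name"],
        "thumbnail": full.get("thumbnailUrl"),  # assumed field on the backend contract
    }
```

A desktop- or tablet-facing BFF could expose a richer shape over the same backend, which is exactly the per-platform tailoring Peter describes.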

Chris: Gotcha. Understood. I must say, all of this talk of SOAP is bringing back some memories of one of my first experiences of API design. Actually, a good number of years ago I was part of a team for an online game - a sci-fi game. I remember it clearly. One of the first tasks that I had when I joined this team was building with - at the time - this new thing called SOAP. We were building some APIs because there were no APIs that people could call into to access the game. One of the ideas this team had was to expose some of the core game functionality to those gamers, because then they could build their third party marketplaces or their clan or faction websites and all these things, and make that whole game experience a bit richer. So, this talk of being able to bring new life to some of those older legacy APIs is bringing a warm feeling and some good nostalgia there as well. Maybe let's think about some more modern APIs, and move away a little bit from the legacy side. So, let's think about being part of the development team - we're working on a brand new project, for example. I know there are going to be certain technologies that might lend themselves to this, but I'm curious about some of your thoughts - if we want to get going quickly, what are some of the things that we would think about from an API management, API design or implementation side of things? Are there certain technologies that we could go after and start thinking about? I know in the Azure space, for example, if we wanted to very quickly prototype something out, Logic Apps could be a good contender there. So I'm just curious of your thoughts. Are there certain trends, certain themes that you see, and certain patterns that come up? If - you know - we want to experiment and try something out really quickly, should we start using these particular tools, for example?

Peter: Sure, yeah. It depends on your SDK, but there are particular packages - like GenFu, for example - that will start mocking out lorem ipsum type data into your data contracts. So you don't have to have a back end database, but you can have a data contract that your operation will provide. That effectively speaks to this mocking capability. If you don't have that kind of an SDK available, there are Azure Functions, Logic Apps, etc. - to your point - that will allow that mocking to be easily constructed. And then, if we go into a much more robust model using Azure API Management, you can mock the data contracts - or effectively the response - quickly, not just within the portal, but through infrastructure as code, through ARM templates, etc. You can do it for a particular HTTP status code. So, for example, say you are going to be creating several APIs that align to CRUD (Create, Read, Update and Delete) - the typical pattern that we find in APIs. For a read operation, it could respond with a 200 or a 404. 200 could be everything's okay - here's the data, here's a record; they call it a URI, or resource. Either way, you could provide statuses, messages, your data contract for exceptions, and then you could start building out proofs of concept very quickly. For example, in API Management I don't even have to have a backend app service or compute of any kind. So it all depends on - to your point - our requirements. Saying, okay, we want to quickly prototype something… yes. What do we want to prototype? Then you could take a kind of swag at it. But ultimately, it's the level of communication between the producer and the consumer of the API to validate either the business requirement, or just an exercise in education.
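Peter's point about mocking responses so development can start without any real backend translates to code very directly. A minimal sketch of a mocked read operation returning a 200 or a 404, using FastAPI and an in-memory dictionary as a stand-in (the `widgets` resource is invented for the example; in Azure API Management you would achieve the same with a mocking policy rather than code):

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

# In-memory stand-in for a backend - no database or app service required.
FAKE_WIDGETS = {1: {"id": 1, "name": "sample widget"}}

@app.get("/widgets/{widget_id}")
def read_widget(widget_id: int):
    if widget_id not in FAKE_WIDGETS:
        # Mocked "not found" so consumers can build against the contract early.
        raise HTTPException(status_code=404, detail="Widget not found")
    # Mocked "everything's okay, here's the record" response.
    return FAKE_WIDGETS[widget_id]
```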

Chris: One of the things that struck me in what you said there, as well, is that there may be different personas at play. I know we've talked about the idea of these internal users or external users who might be accessing an API. But if we think about the people managing that (this of course depends on the size of our team and what kind of organisation we're working in), there may be people, for example, who are managing the front end and need access to that facade layer. There may be some work on the facade layer, and then maybe someone working on the back end. Those may all be different responsibilities. There may be some kind of shared responsibility. Again, it depends on the team. But really, what you start doing is getting a little bit into this DevOps mindset, don't you? Decoupling these different components and being able to start bringing in some of that agility, I guess. And like you rightly said a little earlier on in the discussion, starting to decouple some of those components. Then what you get (thinking of those broader requirements, not just the API side now) is that if one of those APIs has a bit of a wobble - something happens, something goes wrong with the underlying infrastructure for one of those APIs - it shouldn't take everything down. Because again, in a microservice world there's nothing saying, for example, that we have to host everything on the same type of infrastructure, or the same codebase, or the same languages or what-not. It could all be very different. So what might affect one part of the system shouldn't affect the rest of the system, if we've designed it in an appropriate way where the microservices have some kind of graceful degradation. So I'm curious. Maybe let's explore a little bit more on the DevOps side of the house here. When we think about DevOps, I think one of the emerging trends is this concept of DevSecOps and being able to bring in some of those security processes - some of those security pieces that you might want to think about for the system. When we think about APIs, one of the things that we might think about, for example, is whether we need to authenticate to an API - whether we have the right token and credentials and what-not to go and make that API call. What are your thoughts on things like that, and how could we potentially bring them into some kind of automation of the deployment of our APIs and pipelines, for example?

Peter: Sure, no - this is a very good topic. The DevSecOps terminology has been used a lot, but the real question is understanding where security is going to be introduced. There are terms such as shift left and shift right with respect to where security sits within DevOps. So if we think of a visual infinity symbol: dev is on one side (typically on the left), and ops is on the right. Security - if you think about it - is in the middle. And so, are your developers going to integrate security into their application design? Or is operations going to have security as part of the infrastructure or resources that are providing your API? It could be a combination of the two. To your point, with the various technologies within Azure, such as Azure API Management, you can introduce policies at the entire API level or on the individual operations to do token validation - JWT tokens (JSON Web Tokens). Or you could do OAuth2 or OpenID Connect (OIDC) type operations. But, more importantly, you need to embed that level of token validation within the operations that you are constructing in your API, which will ultimately be published. The other aspect of APIs and DevSecOps is understanding design patterns such as gatekeeper and bulkhead - which relates to one of the things that you mentioned: how do you isolate and ensure resiliency, avoiding dependencies where one rogue operation takes the entire platform down? Ensuring those types of patterns will ultimately define your success. You don't have to build them into your design right away as a requirement, but understand which design patterns are going to be very beneficial upfront. Such as, for example - if you are a consumer of an API, do you implement a circuit breaker and retry pattern? For example, in Azure API Management there is a policy for throttling. So, if you invoke the API too many times within an interval, based upon your subscription limit, it will return the appropriate HTTP status code and you cannot re-invoke the request until the throttle has been lifted. So, how do you incorporate that into your API design?
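On the consumer side of the throttling policy Peter describes, a retry pattern is often just "back off when the gateway says 429". A minimal sketch, assuming the URL and bearer token are placeholders; real code would also cap the total wait and handle other failure modes:

```python
import time
import requests

def call_with_retry(url: str, token: str, max_attempts: int = 5) -> requests.Response:
    """Call a rate-limited API, backing off when the gateway returns 429 Too Many Requests."""
    headers = {"Authorization": f"Bearer {token}"}  # token validated by a policy at the gateway
    for attempt in range(1, max_attempts + 1):
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code != 429:
            return response
        # Honour the throttle window advertised by the gateway, or fall back to exponential backoff.
        wait_seconds = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait_seconds)
    return response
```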

Chris: Awesome. So, let's think about extending that DevSecOps piece a little bit further - maybe on the functional side. I guess we've talked about the implementation detail from the API side of the house: the APIs, the API Management or the facade layer. There are also things that we can do from a testing perspective (and I'm a huge advocate of this area now) - being able to bring that testing mindset into the pipelines that we talk about. Lots of people think about unit testing and integration testing; some think about load testing maybe, which is very, very important and maybe not a topic for today. But security is also something that you can start testing for, right? Like when we think about, a little bit earlier, the idea of those JWT pieces, or the OAuth and OIDC, and the authentication to those; or the authorization - that we have the right credentials or what-not, and I am really who I say I am and I can access this API. We can test for some of that as part of the pipeline. We can make that some kind of automated operation and validate: if we try and call that API, what happens if we don't pass in any credentials? I'm curious - when you are working with different organisations, for example, are there certain practices like this that you see them doing? Are there common pitfalls, whether that's in a DevOps case in particular, or whether it's just in designing their APIs in general? Are there certain good things, and maybe challenges, that you see them running into, let's say?
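Chris's "what happens if we don't pass in any credentials?" check is easy to automate in a pipeline. A hedged sketch of a couple of pytest-style tests - the API base URL and route are hypothetical, and the exact status codes depend on how your gateway is configured:

```python
import requests

API_BASE = "https://api.example.com"  # hypothetical API under test

def test_read_requires_credentials():
    """A protected operation called with no token should be rejected."""
    response = requests.get(f"{API_BASE}/v1/orders/1", timeout=10)
    assert response.status_code in (401, 403)

def test_read_rejects_invalid_token():
    """A malformed bearer token should also fail validation at the gateway."""
    response = requests.get(
        f"{API_BASE}/v1/orders/1",
        headers={"Authorization": "Bearer not-a-real-token"},
        timeout=10,
    )
    assert response.status_code in (401, 403)
```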

Peter: Yeah, that's a very good question. With respect to testing an API, it becomes a little bit more of a challenge on top of that. And that is - if you are moving quickly and you have many developers as part of it, you may step on each other. So with respect to the security question - that's one aspect that you can definitely create and automate from a testing framework perspective. But take a step back and say… okay, what is it that we're doing? Whether you're following a minimum viable product model, or doing full regression testing - and to your point, you have P&L (performance and load testing) - how do you incorporate your security into each of those? So, for example, if you are going to be changing from one identity provider to a different identity provider, how do you validate that through your testing process? And who will be impacted as you go through that in your SDLC pipeline, ultimately going to production? So, understanding your gate and your level of testing metric, to say “I am comfortable with this based upon the requirements”. Understanding the threat actor alongside the testing is a very big challenge. I haven't seen a good, mature model in my current customer base to support that just yet, because the requirements around identity become a little bit more of a challenge as you go from one identity provider to another. The bearer tokens and the refresh tokens - those become another challenge with respect to testing, because of the way they are built with respect to time.

Chris: Understood. Now, if we move away from the DevOps side of things. Would you say (from a pure API standpoint), there are certain common practices that you see different organisations taking on board? Or maybe some common pitfalls that you see them running into as well there, for example?

Peter: Sure. One area is using an external config store (for example, Azure Key Vault or Azure App Configuration) to hold the sensitive credentials used to access your data stores or other APIs. That is an area where the maturity is starting to gain momentum, which is a positive thing. However, understand that you will need to integrate a key rotation strategy, for example - that is a big challenge. Not everybody is ready from an operating model perspective, especially at the app dev level, for when a key rotation happens - particularly if you are not using these types of external config stores (or, in microservices, a sidecar pattern). Those key areas are definitely pain points, but I do see a positive trend of customers starting to introduce those concepts and address those challenges.
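As a concrete example of the external config store idea, here is a minimal sketch of reading a secret from Azure Key Vault at runtime using the azure-identity and azure-keyvault-secrets packages - the vault URL and secret name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://my-vault.vault.azure.net"  # placeholder vault

def get_database_connection_string() -> str:
    """Fetch a credential at runtime instead of baking it into config files or source control."""
    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    # Because the value is fetched at runtime, rotating the secret in Key Vault
    # does not require changing or redeploying the application's own configuration.
    return client.get_secret("database-connection-string").value
```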

Chris: Understood. Well, I think we've certainly done a whistle-stop tour of APIs there - and maybe not just APIs, but a lot of common concepts. Things like microservices, things like API design, talking through things like SOAP and REST; how we potentially bring some of those legacy APIs into a modern architecture; how we could potentially accelerate some of those POCs. And then, talking a little bit around DevOps, around requirements as well, and some of these common practices and common pitfalls. So, plenty for folks to digest here. Before we wrap up, do you have any closing thoughts? Any closing words of wisdom? Or anything you maybe want to echo that we've talked about already? If there's maybe one thing that people should take away about that journey - as they go and design their APIs and go along that path?

Peter: Yes. The biggest thing that I've seen, since I've developed APIs using the old ASMX model in .NET, is that people don't implement a version out of the gate - whether that be at the operation or at the data contract. Trying to introduce a version afterwards becomes very painful. Not using an interface approach becomes another pain point, because an interface approach is ultimately what enables a versioning model. And then, more importantly - providing communication to your consumers about impact, should they be affected by a major change or a minor change. And then, not realising the volume of requests each consumer is making. So if you are providing APIs that consumers have to pay for via a subscription… do you create a throttle? Do you say “I have different subscription limits and different paid tiers, and so I have to understand who my major consumers are, to help me either support a newer version or improve the feature set or capability of my API”? So, ultimately, things evolve. You don't have to think about all of these concepts upfront. But, from my perspective - incorporate a versioning model upfront, just call it V1, and then start to iterate over it.

Chris: Lovely thoughts there. I think there's plenty of information for people to digest here, and I'm sure we could keep on talking - there's plenty more that we could cover on the pattern side as well. So, maybe we'll have to dive into some of those patterns and some of those thoughts a little deeper another time. But Peter, thank you so much for joining today. Like I say, plenty of pearls of wisdom and knowledge shared - so a big thank you for joining us today.

Peter: Thank you for having me Chris. I enjoyed it.

There we go! Lots to take in there, thinking of different patterns and common approaches or pitfalls that have been encountered along the way in Peter's experience. Peter and I both continued chatting after we recorded the episode, and had so many ideas on where we could have expanded further, so we're sure that there will be a follow-up at some point in the future. On that point - If there's anything you'd like us to expand on, please do get in touch either on Twitter or Facebook, @CloudWithChris.

Don't forget, CloudWithChris is on your favourite platforms - Spotify, iTunes, Google Play Music, Stitcher, PocketCasts, YouTube and directly at www.cloudwithchris.com. If you're enjoying the episodes that we're producing, please give a like or subscribe on your usual platform, so that you can keep up to date and support the work that we're doing! Feedback is always appreciated, along with your topic suggestions. And if you think you have a topic that you would be interested in talking to me about, why not join me?

Thank you for tuning in to this episode. Until next time - Good bye!

Guests

Peter Piper

Peter has over 25 years of experience (most of it in software development, with a focus on web application development), ranging from hardware to software, with excellent global client interaction and a unique perspective that results in solutions. Peter recently joined Microsoft's FastTrack for Azure Engineering team.

Hosts

Chris Reddington

Welsh Tech Geek, Cloud Advocate, Musical Theatre Enthusiast and Improving Improviser!

Chris is currently a Senior Engineer on Microsoft's FastTrack for Azure team.