Today, we’re talking about Microservices – The Production Line of the Cloud! We talked about Containers a little while ago, and the feedback was awesome. People love how we take topics on the cutting edge of technology and break them up into pieces. But one piece we could have investigated more is Microservices themselves.
Services, but micro
When we’re talking about cloud technologies, we often throw around words like services and servers – and serverless, for that matter. To be clear:
- Server – a defined amount of compute power which runs code.
- Service – a set of code which does a job. For example, watermarking an image could be a service, running on a server.
- Serverless – similar to the above, but removes the dependence on the compute power itself. You can have a serverless service, and not have to deploy a server for it to run on.
- Microservice – a very atomic part of a service. In our example of watermarking an image, one microservice might just handle the image upload part of the process. The next microservice might generate the watermark based on a set of parameters. And another might do the export.
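As a rough sketch of that decomposition (the function names and fields here are hypothetical, just to illustrate the idea), the watermarking service might split into three microservices, each doing one atomic job:

```python
# A hypothetical breakdown of the watermarking service into three
# microservices. Each function stands in for an independently
# deployable service with a single responsibility.

def upload_image(data: bytes) -> dict:
    # Microservice 1: accept the upload and record where it lives.
    return {"image": data, "stored_at": "/uploads/incoming"}

def generate_watermark(job: dict, text: str, opacity: float) -> dict:
    # Microservice 2: build the watermark from a set of parameters.
    job["watermark"] = {"text": text, "opacity": opacity}
    return job

def export_image(job: dict) -> dict:
    # Microservice 3: produce the final, watermarked output.
    job["exported"] = True
    return job

# In production each step would scale on its own; here we just
# chain them in order to show the flow of one job.
job = export_image(generate_watermark(upload_image(b"raw-bytes"), "(c) Example", 0.5))
```

The point isn’t the image processing (which is faked here) – it’s that each step is its own unit, so each can be deployed and scaled separately.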
So why go micro?
Great question. We have had services running on servers for ages, and they work fine. But they don’t scale well. When we’re in the cloud, we want to take advantage of the power of using lots of resources when we’re busy, whilst also saving money when we don’t need that power. By breaking things down into microservices, we can make sure we’re only scaling the bits we really need to.
If we’re a bank and our software doesn’t use a microservice architecture, we would have to scale out (see our Buzzword Bingo post to learn about scaling terminology) our whole application at busy times. Imagine it’s the end of the month, and all of our business customers are running payroll. We take our whole banking software code and spread it over lots and lots of servers to ensure our service remains fast.
But our software contains all the code to make the mobile app work, everything for the cash machines, all of our integrations with other services – it’s huge! That means it takes a long time to deploy all that code onto each new node in the cluster of servers, and also that those sections of code will probably never be used on those nodes, because after everyone’s payroll has run, we will scale back down again.
This is where microservices come in. Under a microservices architecture, the payroll code would be one microservice on its own, and we could therefore scale that service independently of everything else. That means it can be quick to deploy, and quick to destroy again when we don’t need it. It also means we’re using all the capacity we pay for in the most efficient way possible.
The magic that enables a lot of these microservice ideas to work as well as they do is queues. Amazon’s very first product under the AWS banner was Simple Queue Service, or SQS, way back in 2004. SQS does exactly what it says on the tin – you push jobs into the queue, and consumers can pull things out of the queue and do the work required.
It would be no good having a load of microservices all jumbled around, not knowing what state any workflow is in. Instead we can use queues to ensure that the output from one microservice can feed into the input of the next, resulting in highly efficient workflows.
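To make that concrete, here’s a minimal local sketch of the pattern using Python’s standard-library queue.Queue as a stand-in for SQS (a real deployment would use the AWS SDK and separate processes): one queue feeds jobs into a microservice, and that microservice’s output is pushed onto the next queue in the workflow.

```python
import queue

# Two queues connecting two microservices, standing in for SQS queues.
upload_queue = queue.Queue()
watermark_queue = queue.Queue()

# Producer: push jobs into the first queue.
for name in ["photo1.png", "photo2.png"]:
    upload_queue.put({"image": name})

# Microservice A: consume uploads, do its work, feed the next queue.
while not upload_queue.empty():
    job = upload_queue.get()
    job["uploaded"] = True          # pretend we stored the image
    watermark_queue.put(job)        # output becomes the next stage's input

# Microservice B: consume from the second queue and finish the workflow.
results = []
while not watermark_queue.empty():
    job = watermark_queue.get()
    job["watermarked"] = True       # pretend we applied the watermark
    results.append(job)
```

Because each stage only ever talks to a queue, neither microservice needs to know what state the overall workflow is in – the queue carries that for them, which is exactly what makes the stages independently scalable.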
The Production Line of the Cloud
Now we understand all of these terms, and the idea behind the design, we can see how microservices, together with queues, are just like a factory production line.
Let’s say we have the ingredients to make 1000 boxes of biscuits, so we can say we have 1000 boxes of biscuits in the queue. Each person and machine is ready and waiting at their station – they are the microservices. But they can’t do their job until the job before has been completed, and the production line moves on to their station.
Biscuits… what a great idea. Time for a brew.