This article describes what a Cloud Native application should look like; it is based on the well-known 12 Factors.
At the time of writing, in my understanding, Cloud Native is not only about the code that developers write; it is also a way of organizing engineering teams so that their work is highly effective, the code stays flexible, and, in the end, you get fast time-to-market for your applications.
In the articles I have read, Cloud Native is strongly related to software as a service, Continuous Integration, Continuous Delivery, zero-downtime deployments, and teams that deploy daily.
Codebase
One codebase tracked in revision control, many deploys
This principle says that one codebase should produce one executable: you don't produce multiple executables from one repository, and you don't maintain multiple repositories that are eventually combined to produce one executable.
Having one repository that produces multiple executables is problematic because you can easily end up mixing models and having parts of your application do work they are not responsible for.
Having multiple repositories that are used to produce one binary is hard to maintain and hard to evolve – after all, we are humans and we don't have infinite capacity to keep things in memory; it is hard to work when several repositories have to come together to produce a single app.
Dependencies
Explicitly declare and isolate dependencies
Your application should not assume that it will run in an environment where certain libraries or tools are available, that the database lives on the same host, or even that it has access to local storage.
When writing applications for the cloud, you make them stateless; they make zero assumptions about the environment where they will run, and storage/mail/database are external services that are pluggable through configuration.
The only assumption that you can make is that your application has access to CPU and memory.
The benefit of not relying on implicitly available dependencies is that it simplifies application setup at every stage: developers set up the development environment easily, and the staging and production environments are easy to configure.
Configs
Store config in the environment
The configuration of an application is everything that varies between environments:
- backing services such as the Memcached host or database resources
- credentials for your cloud provider or a Facebook app
Sometimes applications store configuration in code as constants, and some tools make this easy to do (for example, injecting a value with Spring and providing a default).
According to the 12 Factors, an application should store its configuration in environment variables: they are easy to change, and they are not kept in files prefixed with dev.*, prod.*, etc.
There are also tools like Spring Cloud Config or Netflix Archaius that help manage externalized configuration.
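To make this concrete, here is a minimal sketch of reading configuration from environment variables and failing fast when a required value is missing. It is written in Java; the variable names DATABASE_URL and CACHE_HOST are only illustrative, not part of any standard:

```java
// Minimal sketch: configuration comes from the environment, nothing is hardcoded.
public final class AppConfig {
    final String databaseUrl;
    final String cacheHost;

    private AppConfig(String databaseUrl, String cacheHost) {
        this.databaseUrl = databaseUrl;
        this.cacheHost = cacheHost;
    }

    static AppConfig fromEnvironment() {
        // Required value: fail fast at startup instead of failing later at runtime.
        String databaseUrl = require("DATABASE_URL");
        // Optional value: fall back to a sensible default for local development.
        String cacheHost = System.getenv().getOrDefault("CACHE_HOST", "localhost:11211");
        return new AppConfig(databaseUrl, cacheHost);
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```

The same binary can then be promoted from development to staging to production; only the environment variables change.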
Backing services
Treat backing services as attached resources
A backing service is any service that your application communicates with over the network; examples of backing services are storage, databases, messaging middleware (Apache Kafka, RabbitMQ, etc.), email services, and caching systems.
The code of a twelve-factor app makes no distinction between local and third-party resources. The application communicates with backing services over the network and treats them as something that can be easily replaced.
Communicating with a local database or cache service is just a matter of environment configuration. Backing services are resources, and the application doesn't care whether the email provider is X or Y: it uses a protocol (SMTP in the case of email) and knows in advance that it should not rely on a specific resource provider.
Image source: https://12factor.net
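As a small illustration, here is a sketch where the email provider is just an attached resource behind an interface; the names (Mailer, SmtpMailer, SMTP_HOST, SMTP_PORT) are assumptions made for the example, not a specific library's API:

```java
// The application depends only on this abstraction, never on a concrete provider.
interface Mailer {
    void send(String to, String subject, String body);
}

// An SMTP-backed implementation; host and port come from the environment,
// so switching providers is a configuration change, not a code change.
final class SmtpMailer implements Mailer {
    private final String host;
    private final int port;

    SmtpMailer(String host, int port) {
        this.host = host;
        this.port = port;
    }

    static SmtpMailer fromEnvironment() {
        return new SmtpMailer(
                System.getenv("SMTP_HOST"),
                Integer.parseInt(System.getenv().getOrDefault("SMTP_PORT", "25")));
    }

    @Override
    public void send(String to, String subject, String body) {
        // Speak SMTP to host:port here (e.g. via a mail library); omitted in this sketch.
    }
}
```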
Build, release, run
Separate build and run stages
- The build stage takes the codebase, runs some scripts over it, and converts it into an executable known as a build. In my experience we used to run builds from the master branch, but I will argue that this is not correct: when building code, the build system should take a commit ID and produce the build from it.
- The release stage takes the build result, combines it with configuration, and produces an immutable release; the result of this stage is ready to be executed in a specific environment.
- The run stage takes the result of the release stage (usually from a repository where the other release results are also available) and, by applying some additional steps, runs it. The run stage is executed by a tool that is responsible for scaling your application and keeping it alive.
The twelve-factor app uses strict separation between the build, release, and run stages. For example, it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage.
Moreover, it is not possible to alter an existing release: releases are unique, immutable artifacts published to a repository. When required, a deployment tool can push a new version or revert to a previous one; when a change in the code is needed, it has to pass through all the stages again and be deployed by the deployment tool.
Processes
Execute the app as one or more stateless processes
Applications should share nothing; if something has to be stored, it is stored in a backing service. The application holds no state and treats every request as something it has not seen before.
The application should not rely on something cached during a previous run still being available in memory on the next run. A request processed by a long-running instance or by an instance that has just started up produces the same result.
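A minimal sketch of what statelessness looks like in code (the names SessionStore and CartHandler are illustrative, not from a specific framework): the handler keeps no mutable fields and reads/writes everything through a backing service, so any instance can serve any request:

```java
import java.util.Optional;

// All state lives in a backing service behind this interface (e.g. Redis or a database).
interface SessionStore {
    Optional<String> get(String sessionId);
    void put(String sessionId, String value);
}

final class CartHandler {
    // No mutable fields: nothing survives between requests inside this process.
    private final SessionStore sessions;

    CartHandler(SessionStore sessions) {
        this.sessions = sessions;
    }

    String addItem(String sessionId, String item) {
        // Each request loads what it needs from the backing service...
        String cart = sessions.get(sessionId).orElse("");
        String updated = cart.isEmpty() ? item : cart + "," + item;
        // ...and writes it back, so the next request can land on any instance.
        sessions.put(sessionId, updated);
        return updated;
    }
}
```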
Port binding
Export services via port binding
Web apps are sometimes executed inside a webserver container. For example, PHP apps might run as a module inside Apache HTTPD, or Java apps might run inside Tomcat.
The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.
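For example, here is a minimal sketch of a self-contained web process that binds to a port itself, using only the HTTP server bundled with the JDK; reading the port from a PORT environment variable is an assumption borrowed from common platform conventions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WebApp {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        // The app itself exports HTTP on this port; no external webserver is injected.
        server.start();
    }
}
```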
Concurrency
Scale out via the process model
In the twelve-factor app, processes are a first class citizen. Processes in the twelve-factor app take strong cues from the unix process model for running service daemons. Using this model, the developer can architect their app to handle diverse workloads by assigning each type of work to a process type. For example, HTTP requests may be handled by a web process, and long-running background tasks handled by a worker process.
This does not exclude individual processes from handling their own internal multiplexing, via threads inside the runtime VM, or the async/evented model found in tools such as EventMachine, Twisted, or Node.js. But an individual VM can only grow so large (vertical scale), so the application must also be able to span multiple processes running on multiple physical machines.
Disposability
Maximize robustness with fast startup and graceful shutdown
Twelve-factor applications are disposable, meaning that they can be started and stopped at any time. They should start fast and shut down gracefully, but even if they are killed without a chance to finish their work, this should not affect the system.
Fast startup – the application starts in a few seconds; this makes releasing and scaling easier and aids robustness, because the process manager can more easily move processes to new physical machines.
Graceful shutdown – when a web application handling HTTP requests receives a SIGTERM signal from the process manager, it stops accepting new requests (they are handled by other instances) and lets the requests already executing finish. This also implies that requests should take milliseconds to process; long-running work uses approaches like polling or sockets (where the client reconnects when the connection is lost).
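Continuing the HttpServer sketch from the port-binding section, a JVM shutdown hook is one way to react to SIGTERM (this is a fragment, not a complete program; server is the instance created earlier):

```java
// The JVM runs shutdown hooks when it receives SIGTERM.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    System.out.println("SIGTERM received, draining in-flight requests");
    // stop() closes the listening socket (no new requests are accepted) and
    // waits up to 5 seconds for handlers that are already running to complete.
    server.stop(5);
}));
```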
Dev/prod parity
Keep development, staging, and production as similar as possible
Historically there have been a few gaps between the production and development environments:
- the time gap – developers implement a new feature or fix a bug, and it gets into production after days, weeks, or months.
- the team gap – developers write code, ops engineers deploy it
- the tools gap – MySQL in production, SQLite in development; Nginx locally, Apache in production.
Ideally, differences between environments should not exist; when differences are present, things work until suddenly they don't.
There are tools that try to abstract away differences between databases, but you should not rely on getting the same behavior from different databases. To close these gaps:
- the time gap – instead of deploying after days or weeks, make it possible to deploy code in hours or minutes
- the team gap – the same person who wrote the code is responsible for deploying it, verifying it, and making sure it works
- the tools gap – use the same tools in all environments
Logs
Treat logs as event streams
Logs should be treated as an event stream: they happen, they are immutable, and they are shipped off the instances to an external backing service (for example, the ELK Stack). The application does not assume it is responsible for writing logs to a file; logs are written to stdout.
Developers don't ssh into instances to find logs; instances can die and be replaced, and working that way is neither reliable nor effective.
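As a sketch, the application can simply emit one event per line to stdout and let the execution environment route the stream to a collector such as the ELK Stack; the JSON shape used here is an assumption, not a requirement of the twelve factors:

```java
public class Log {
    // One event per line on stdout; no files, no rotation, no shipping logic in the app.
    public static void event(String level, String message) {
        System.out.printf(
                "{\"ts\":\"%s\",\"level\":\"%s\",\"msg\":\"%s\"}%n",
                java.time.Instant.now(), level, message);
    }

    public static void main(String[] args) {
        event("INFO", "application started");
        event("WARN", "cache miss rate above threshold");
    }
}
```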
Admin processes
Run admin/management tasks as one-off processes
- Running a DB migration
- Running scripts
Migrations and other scripts that you may want to execute should be committed and versioned, and they should run on instances similar to those where the production code runs. Admin code should ship with the application code to avoid sync issues (example: a DB column was removed but the application code still uses it).
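A sketch of such a one-off admin process: it lives in the same codebase, uses the same configuration mechanism as the web process, and is run once against a given environment. The table/column names and the DATABASE_URL variable (assumed here to hold a JDBC URL) are purely illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrateDb {
    public static void main(String[] args) throws Exception {
        // Same configuration mechanism as the long-running app: environment variables.
        String jdbcUrl = System.getenv("DATABASE_URL"); // e.g. jdbc:postgresql://...
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP");
        }
        System.out.println("migration applied");
    }
}
```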