In this article, you will learn which tools you need to become a DevOps engineer, as well as the tasks and responsibilities of a DevOps engineer.
First, you need to understand that there are two main parts of creating an application:
- the development part, where software developers program and test the application,
- the operations part, where the application is deployed and maintained on a server.
DevOps is the link between these two IT departments.
It all starts with the application. The developers' team will program the application using some technology stack: different programming languages, build tools, and so on.
They will, of course, have a code repository to work on the code in a team. One of the most popular ones today is Git.
As a DevOps engineer, you will not be programming the application, but you need to understand the concepts of how developers work and which Git workflow they're using.
You should also know how the application is configured to communicate with other services or databases, as well as concepts of automated testing and so on.
Now, that application needs to be deployed on a server so that users can eventually access it, right? So we need some infrastructure: on-premise servers or cloud servers.
And these servers need to be created and configured to run our application. Again, as a DevOps engineer, you may be responsible for preparing the infrastructure to run the application.
And since most of the servers that applications run on are Linux servers, you need Linux knowledge. You need to be comfortable with the command-line interface, because you will be doing most of your work on the servers through it.
That means knowing basic Linux commands, installing different tools and software on servers, understanding the Linux file system, knowing the basics of administering a server, how to SSH into it, and so on.
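As a small illustration of this kind of everyday command-line work, here is a sketch using hypothetical demo files under /tmp (filtering a log and archiving it, two typical server housekeeping tasks):

```shell
# create a working directory and a sample log file to practice on
mkdir -p /tmp/demo && cd /tmp/demo
printf 'INFO start\nERROR disk full\nINFO done\n' > app.log
# filter the log for errors and count the matching lines
grep ERROR app.log
grep -c ERROR app.log
# archive the log, a typical housekeeping task on a server
tar -czf logs.tar.gz app.log
ls -l logs.tar.gz
```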
You also need to know the basics of networking and security, such as configuring firewalls to secure the application, opening some ports to make the application accessible from outside, and understanding how IP addresses, ports, and DNS work.
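A few commands illustrate these basics. The firewall command is shown only as a comment, since it requires root privileges and ufw may not be installed:

```shell
# resolve a hostname to an IP address (name resolution / DNS basics)
getent hosts localhost
# list ports that are listening locally (ss is the modern replacement for netstat)
ss -tln 2>/dev/null || netstat -tln 2>/dev/null || true
# opening a port in a host firewall usually needs root, e.g. with ufw:
#   sudo ufw allow 443/tcp    # allow inbound HTTPS traffic
```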
However, to draw a line between IT operations and DevOps: you don't need advanced operating system, networking, or security skills, and you don't have to administer the servers from start to finish.
There are separate professions, like network and system administrators and security engineers, that specialize in these areas.
Your job is to understand the concepts and know all of this to the extent that you're able to prepare a server to run your application, not to completely take over managing the servers and the whole infrastructure.
Virtualization and containers
Nowadays, containers have become the new standard. You will probably be running your application as a container on a server. This means you need a general understanding of virtualization and container concepts, and you need to be able to manage containerized applications on a server.
One of the most popular container technologies today is Docker, so you need to learn it.
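As an illustration, a minimal Dockerfile for a hypothetical Node.js application might look like this; the base image, port, and entry point are assumptions, not a prescribed setup:

```dockerfile
# minimal sketch of a Dockerfile for a hypothetical Node.js application
# base image with the runtime preinstalled
FROM node:18-alpine
WORKDIR /app
# copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
# copy the application source
COPY . .
# the port and the entry point are assumptions about the app
EXPOSE 3000
CMD ["node", "server.js"]
```

You would then build and run it with `docker build -t myapp:1.0 .` and `docker run -p 3000:3000 myapp:1.0`.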
Now we have developers who are creating new features and bug fixes on one side, and we have infrastructure or servers that are managed and configured to run this application. The question now is how to get these features and bug fixes from the development team to the servers to make them available to the end-users.
So how do we release new application versions?
And that's where the main tasks and responsibilities of DevOps come in.
With DevOps, the question is not just how we do this in any possible way, but how we do it continuously, efficiently, fast, and automated.
Tests and bug fixing
First of all, when a feature or bug fix is done, we need to run the tests and package the application as an artifact, like a JAR file or a zip, so that we can deploy it.
That’s where build tools and package manager tools come in.
You need to understand how this process of testing and packaging applications works.
More and more companies are adopting containers as a new standard. So you will probably be building Docker images from your application.
As a next step, this image must be saved somewhere, right? In an image repository. So a Docker artifact repository like Nexus or Docker Hub will be used here.
So you need to understand how to create and manage artifact repositories as well.
Of course, you don't want to do any of this manually. Instead, you want one pipeline that does all of this in sequential steps. So you need build automation.
One of the most popular build automation tools is Jenkins. Of course, you need to connect this pipeline with the Git repository to get the code.
This is part of the continuous integration process, where code changes from the code repository get continuously tested.
Deploying a new feature or bug fix to the server after it has been tested, built, and packaged is part of the continuous deployment process, where code changes get deployed continuously to a deployment server.
There could be some additional steps in this pipeline, like sending a notification to the team about the pipeline state or handling a failed deployment.
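The steps above could be sketched as a declarative Jenkins pipeline. This is a minimal, hypothetical sketch: the test command, image name, and deploy script are assumptions, not a prescribed setup:

```groovy
// minimal sketch of a declarative Jenkinsfile; the test command,
// image name, and deploy script are hypothetical
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh './gradlew test' }   // run the automated tests
        }
        stage('Build image') {
            steps { sh 'docker build -t myrepo/myapp:${BUILD_NUMBER} .' }
        }
        stage('Push image') {
            steps { sh 'docker push myrepo/myapp:${BUILD_NUMBER}' }
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }      // hypothetical deploy script
        }
    }
    post {
        failure { echo 'Pipeline failed, notify the team' } // e.g. a mail or chat step
    }
}
```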
This flow represents the core of the CI/CD pipeline, and the CI/CD pipeline happens to be at the heart of the DevOps tasks and responsibilities.
As a DevOps Engineer, you should be able to configure the complete CI/CD pipeline for your application, and that pipeline should be continuous. That's why the unofficial logo of DevOps is an infinity loop: the improvement of an application is infinite.
New features and bug fixes get added all the time that need to be deployed.
Nowadays, many companies are using virtual infrastructure on the cloud instead of creating and managing their own physical infrastructure. These are infrastructure-as-a-service platforms like Amazon AWS, Google Cloud, and Microsoft Azure.
One apparent reason for that is saving the costs of setting up your own infrastructure. But these platforms also manage a lot of things for you, making it much easier to run your infrastructure there.
For example, using a UI, you can create your network, configure firewalls, route tables, and all other parts of your infrastructure through the services and features these platforms provide.
However, many of these features and services are platform-specific, so you need to learn how to manage infrastructure on the specific platform you use.
So if your applications will run on Amazon AWS, you need to learn AWS and its services.
AWS is pretty complex, but again, you don't have to learn all the services it offers: only the concepts and services you need to deploy and run your specific application on the AWS infrastructure.
Now, our application will run as a container, right? Because we're building Docker images. Those containers need to be managed: for smaller applications, Docker Compose or Docker Swarm is enough.
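As a sketch, a hypothetical two-service setup in Docker Compose might look like this; the image names and the password are placeholders:

```yaml
# minimal sketch of a docker-compose.yml for a hypothetical app plus database
services:
  app:
    image: myrepo/myapp:1.0       # hypothetical application image
    ports:
      - "3000:3000"               # expose the app to the host
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example  # demo only; use secrets in real setups
```

Running `docker compose up` in the directory containing this file starts both containers.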
But if you have many containers, like in big microservices, you need a more powerful container orchestration tool to do the job.
The most popular of these is Kubernetes. So you need to understand how Kubernetes works, and be able to administer and manage the cluster and deploy applications in it.
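A minimal sketch of a Kubernetes Deployment for such a containerized application might look like this; the name, image, and port are hypothetical:

```yaml
# minimal sketch of a Kubernetes Deployment; names and image are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                     # run three identical pods
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:1.0
          ports:
            - containerPort: 3000
```

You would apply it to a cluster with `kubectl apply -f deployment.yaml`.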
When you have all these thousands of containers running in Kubernetes on hundreds of servers, how do you track your individual applications' performance, whether everything runs successfully, and whether your infrastructure has any problems?
And what’s more important, how do you know if your users are experiencing any problems in real-time?
Monitoring the server cluster
One of your responsibilities as a DevOps Engineer may be to monitor the running application, the underlying Kubernetes cluster, and the servers on which the cluster is running.
So you need to know a monitoring tool like Prometheus or Nagios.
Infrastructure as code
In your project, you will, of course, need development and testing or staging environments as well to properly test your application before deploying it to production.
So you need that same deployment environment multiple times. Creating and maintaining that infrastructure for one environment already takes a lot of time and is very error-prone.
So we don't want to do it manually three times. As I said before, we want to automate as much as possible. So how do we automate this process?
Creating the infrastructure and configuring it to run your application, and then deploying your application on that configured infrastructure can be done using a combination of two types of infrastructure as code tools.
Infrastructure provisioning tools, like Terraform, for example, and configuration management tools, like Ansible or Puppet.
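As an illustration, a minimal, hypothetical Terraform definition for a single AWS server could look like this; the region and AMI ID are placeholders:

```hcl
# minimal sketch of a Terraform definition for a hypothetical AWS server
provider "aws" {
  region = "eu-central-1"                    # assumed region
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"    # placeholder AMI ID
  instance_type = "t3.micro"
  tags = {
    Name = "app-server"
  }
}
```

You would then run `terraform init`, review the changes with `terraform plan`, and create the server with `terraform apply`.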
So as a DevOps Engineer, you should know one of these tools to make your work more efficient and your environments more transparent, so that you know exactly which state they are in, and they are easy to replicate and easy to recover.
In addition, since you are closely working with developers and system administrators to automate some of their tasks, you will most probably need to write scripts, or maybe small applications, to automate tasks like backups, system monitoring, cron jobs, network management, and so on.
To be able to do that, you need to know a scripting language. This could be an operating-system-specific scripting language like Bash or PowerShell, or an even more demanded, more powerful and flexible language like Python, Ruby, or Golang, which are also operating-system independent.
Again, here, you need to learn one of these languages. Python, without a doubt, is the most popular and demanded one in today’s DevOps space.
Easy to learn, easy to read, and very flexible.
Python has libraries for most database and operating system tasks, as well as for the different cloud platforms.
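Whichever language you pick, such automation scripts tend to stay small. Here is a minimal backup sketch in Bash; all paths are hypothetical demo paths:

```shell
# minimal backup sketch: archive a directory under a timestamped name
# (all paths here are hypothetical demo paths)
mkdir -p /tmp/demo-src /tmp/demo-backups
echo "important data" > /tmp/demo-src/data.txt
STAMP="$(date +%Y%m%d-%H%M%S)"
tar -czf "/tmp/demo-backups/backup-$STAMP.tar.gz" -C /tmp demo-src
ls /tmp/demo-backups
```

In a real setup, such a script would typically be triggered by a cron job.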
With these automation tools and the scripts you write, all of this automation logic, like creating, managing, and configuring infrastructure, is written as code. That's why it's called infrastructure as code.
Now, how do you manage this code? Just like the application code, you manage it using a version control tool like Git. So as a DevOps Engineer, you also need to learn Git.
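The basic Git workflow for such code looks the same as for application code. A minimal sketch, where the repository path, file, and identity are just for the demo:

```shell
# initialise a repository for automation scripts and commit a first file
mkdir -p /tmp/demo-repo && cd /tmp/demo-repo
git init -q
git config user.email "demo@example.com"   # hypothetical identity for the demo
git config user.name  "Demo User"
echo '#!/usr/bin/env bash' > backup.sh
git add backup.sh
git commit -q -m "add backup script"
git log --oneline
```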
Which tools do you need to learn?
You may ask, how many of these tools do I need to learn? Do I need to learn multiple tools in each category? Which one should I learn?
You should learn one tool in each category, one that’s the most popular and most widely used.
Because once you understand the concepts, building on that knowledge and using an alternative tool will be much easier if, for example, you need to use another tool in your company or project.
Like every IT specialist, you have to learn to be effective. Time management is absolutely crucial in all IT jobs, no matter what. The ability to stay focused and productive is just as important. Here you will find the best productivity tips for IT specialists.