
DevOps tools professionals should be learning in 2023

From containers and Terraform to MLOps, here's what DevOps professionals should be on top of this year (and why).

Apr 15, 2024 • 14 Minute Read


The world of software development technology is constantly evolving. This is especially true for DevOps, a discipline that intersects with numerous domains of software development. DevOps practitioners of every skill level—be it seasoned professionals or aspiring newcomers—find themselves in a constant race to keep pace with the shifting landscape.

2023, in particular, has seen the emergence of one of the most disruptive technologies in many years: generative AI tools. These tools have an ever-expanding list of potential use cases, including several within software development and DevOps.

However, AI isn’t ready to replace humans in software systems just yet! There’s still plenty of need for software and DevOps engineers to be involved in the design and deployment of modern, complex application architectures. DevOps engineers need to be well-versed in a variety of technologies, tools, and design paradigms to be effective in helping developers be more productive and to deliver software faster. 

In this article, we’ll look at tools that DevOps engineers should be focusing on in 2023, as well as newer frameworks to define what success in DevOps looks like.

The top DevOps tools of 2023

The must-learn DevOps tools and frameworks for 2023: containers, Terraform, a programming language, observability tooling, artificial intelligence, and the Developer Thriving framework.

1. Containers

Containers have transformed the landscape of software development and deployment. They've emerged as a ubiquitous standard for packaging application code due to the distinct advantages they bring: isolation, portability, and consistency across different environments—from development and testing to production. In 2023, it's probably harder to find engineers who haven't been exposed to or used containers professionally in some fashion. Nonetheless, they are so fundamental to modern applications that it bears repeating: DevOps engineers must be well versed in containers and container-based architecture.

Although the technology that makes containerization possible has existed in multiple iterations throughout computing history, Docker's launch in 2013 heralded the "arrival" of containers as a mainstream development tool. Other container runtimes are gaining popularity, and Kubernetes itself has moved away from Docker Engine in favor of containerd. However, Docker-built images remain fully compatible with Kubernetes, and Docker is still an excellent choice for anyone looking to adopt containerization.

Software development benefits heavily from containerization, but when it comes time to run that software in a large-scale environment with an SLA and uptime requirements, automation and coordination are needed. Container orchestration platforms allow containerized software applications to be managed at scale, providing layers of abstraction at different levels of the infrastructure. These abstractions significantly reduce the administrative burden for engineers; smaller teams can leverage orchestration to deploy and manage applications meant to serve millions of users. 

Docker natively offers Docker Swarm, and there are a variety of managed cloud services like ECS, but Kubernetes currently holds the crown as the most widely adopted container orchestration platform. Its scalability and its flexibility to run across multiple platforms and environments make it an ideal choice.

Focus Areas

  • Know how to configure, run, and debug Docker for local development. 

  • Be familiar with common security mistakes and issues.

  • Know how to configure, run, and manage Kubernetes. Tools like minikube can help provide local environments.
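To make the first focus area concrete, here's a minimal Python sketch that assembles a typical `docker run` invocation for local development using common flags (`--name`, `-p`, `-v`, `-e`). The image name, ports, and variables are hypothetical, and real usage would hand the command to a shell or `subprocess`:

```python
import shlex

def docker_run_cmd(image, name=None, ports=None, volumes=None, env=None):
    """Assemble a `docker run` invocation for local development.

    ports:   {host_port: container_port}
    volumes: {host_path: container_path}
    env:     {VAR: value}
    """
    cmd = ["docker", "run", "--rm", "-d"]  # detached, cleaned up on exit
    if name:
        cmd += ["--name", name]
    for host, cont in (ports or {}).items():
        cmd += ["-p", f"{host}:{cont}"]
    for host, cont in (volumes or {}).items():
        cmd += ["-v", f"{host}:{cont}"]
    for var, val in (env or {}).items():
        cmd += ["-e", f"{var}={val}"]
    cmd.append(image)
    return cmd

cmd = docker_run_cmd("nginx:1.25", name="web",
                     ports={8080: 80},
                     env={"APP_ENV": "dev"})
print(shlex.join(cmd))
# docker run --rm -d --name web -p 8080:80 -e APP_ENV=dev nginx:1.25
```

Building the command as a list (rather than string concatenation) avoids shell-quoting bugs, which is the same habit you'll want when scripting Docker in CI.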

2. Terraform

Infrastructure as Code (IaC) is critical to modern software infrastructure, especially when operating at scale. Much like application code, IaC enables infrastructure configuration to be defined, configured, tested, and deployed in a consistent, reliable, and repeatable manner.

Terraform continues to be the benchmark for IaC tooling. It has broad adoption, plenty of community support and resources, and a huge ecosystem of third party tooling and modules that allow users to extend it in various ways. Terraform also supports a broad selection of PaaS, SaaS, and IaaS providers, including the “big 3”: Azure, AWS, and GCP.

To further streamline infrastructure development, the Cloud Development Kit for Terraform (CDKTF) is now widely available. CDKTF enables engineers to write infrastructure configurations in the same context as their application code, reducing cognitive overhead by enabling IaC without the need to learn a Domain Specific Language (DSL).

Focus Areas

  • Build a strong depth of knowledge around Terraform. Spend time learning some of the more complex features like functions, expressions, and loops. 

  • Understand how to utilize Terraform at scale. At minimum, this should involve shared version control repositories, change management with multiple engineers, and CI/CD automation across multiple environments and regions.

  • Consider adopting a management platform like Terraform Cloud or Spacelift to help with scaling up.
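As a small illustration of the "more complex features" the first bullet mentions, this hypothetical Terraform snippet uses the `for_each` meta-argument, the `each` expression, and the built-in `format()` function to stamp out one bucket per environment (the resource names and naming scheme are invented for the example):

```hcl
variable "environments" {
  type    = set(string)
  default = ["dev", "staging", "prod"]
}

resource "aws_s3_bucket" "artifacts" {
  for_each = var.environments

  # each.key is the environment name; format() is a built-in function
  bucket = format("myapp-artifacts-%s", each.key)

  tags = {
    Environment = each.key
    ManagedBy   = "terraform"
  }
}
```

One definition, three environments: this is the pattern that makes Terraform manageable at scale, because adding an environment becomes a one-line change.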

3. Programming Languages

There’s no way around it: being able to read and write at least one programming language is an essential skill for a proficient DevOps engineer. Ideally, familiarity with multiple languages is preferred, given that DevOps professionals will typically find themselves supporting various developers and environments throughout their career. Moreover, knowing one language often makes it easier to read others, making this a skill that scales with your knowledge.

Past guidance would have suggested opting for an interpreted language such as Python, Ruby, or Perl. However, the newer perspective is to choose a language that allows you to be productive and is pertinent to your working domain. For modern DevOps environments, that's likely to be Python, TypeScript, or Golang.

Python is an interpreted language that is widely used and relatively easy to learn; it abstracts away a great deal of complexity, allowing the user to focus on application logic. It has a plethora of libraries and modules to tackle just about any task or problem. Python is seeing a bit of a renaissance as it is the lingua franca of data science, machine learning, and AI. A common DevOps use case for Python is interacting with the AWS API via the Boto3 SDK.

Golang, on the other hand, is a compiled language that offers excellent performance and is extensively used in developing a variety of applications. It is relatively easy to learn, considering the performance and features it provides. Golang's approach to managing concurrency is straightforward and efficient, making it the backend language for many popular DevOps tools. A typical use case for Golang in DevOps is writing custom Terraform providers.

Lastly, TypeScript is a transpiled language that compiles down to plain JavaScript. It sees widespread use in frontend development, VS Code extensions, GitHub Actions, and more. It's particularly approachable for those transitioning from JavaScript or frontend work, and it provides access to the vast JavaScript ecosystem. A common use case in DevOps is building modules for Cloud Development Kit (CDK) frameworks.

Focus Areas

  • A senior DevOps engineer should be able to read and write code in at least one of these languages at an intermediate to proficient level. That doesn’t mean algorithms and complex data structures; just working, useful programs.

  • Spend time reading code of popular projects in a given language. Code is read far more often than it’s written, so it’s a good skill to practice, and can be instrumental in helping understand how programs are structured.

  • “Hello world” exercises only take you so far. Try automating something manual or tedious in your current role. If that’s not an option, try building a program that passes information between APIs. This will provide a reasonably useful simulation of a microservice-based environment.
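As a rough sketch of that last exercise, here's a self-contained Python example that reshapes records from one hypothetical API's format into another's. The payload shapes are invented, and real code would fetch and post over HTTP; the transformation in the middle is where most of the actual work lives:

```python
import json

# Hypothetical payload from a "source" API: a list of user records.
source_payload = json.dumps({
    "users": [
        {"id": 1, "first": "Ada", "last": "Lovelace", "active": True},
        {"id": 2, "first": "Alan", "last": "Turing", "active": False},
    ]
})

def to_destination_format(raw):
    """Reshape source records into the format a 'destination' API expects."""
    users = json.loads(raw)["users"]
    return [
        {"user_id": u["id"], "display_name": f"{u['first']} {u['last']}"}
        for u in users
        if u["active"]  # only forward active users
    ]

print(to_destination_format(source_payload))
# [{'user_id': 1, 'display_name': 'Ada Lovelace'}]
```

Swap the canned payload for a real HTTP call and you have a reasonable facsimile of the glue code that holds microservice environments together.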

4. Observability

Maintaining the health and performance of software infrastructure is a foundational aspect of DevOps. In simpler, more traditional environments, monitoring static resource usage—CPU, RAM, I/O, bandwidth—was generally sufficient. However, with the rise of distributed systems and microservices, this approach no longer provides a holistic view of system health, performance, and application behavior.

In these complex, distributed environments, it's not just about server metrics. What truly matters is understanding the end-user experience, or more specifically, how a customer perceives the performance of your application. This change in perspective necessitates a shift toward observability, which extends beyond simple monitoring. Observability provides the ability to understand your system's internal state based on the external outputs, offering vital insights into the system's behavior.

A common monitoring implementation typically involves a tool like Prometheus for metrics and alerting, and something like Grafana for visualization and graphing. However, this still won't necessarily provide the insight required to grasp what's going on inside a large microservices stack. Achieving good observability requires implementing the "Three Pillars":

  • Metrics: CPU, RAM, I/O. The mainstays of monitoring, but they don’t provide the complete picture.

  • Logs: Logs should be treated as a chronologically ordered, standardized record of events that occur in a given application or system. 

  • Tracing: Tracing provides data on how an external request is processed and satisfied by a given system.
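To ground the metrics pillar, here's a simplified Python sketch of the Prometheus text exposition format, the line-based format Prometheus scrapes. Real applications should use an official client library rather than formatting lines by hand; this is purely to show what a sample looks like on the wire:

```python
def prom_metric_line(name, labels, value):
    """Render one sample in the Prometheus text exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = prom_metric_line("http_requests_total",
                        {"method": "get", "status": "200"}, 1027)
print(line)
# http_requests_total{method="get",status="200"} 1027
```

The labels are what make this more useful than bare CPU/RAM gauges: one metric name can be sliced by method, status code, service, and so on.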

These pillars aren’t just about the servers either. Critical data about application performance is best gathered from the application itself via telemetry, integrated directly in code. The OpenTelemetry project is a great example of an open-source, vendor-agnostic standard for collecting telemetry data, offering a unified approach to tracing and metrics. Extended Berkeley Packet Filter (eBPF) technology is a relatively recent advancement that introduces a new way to trace and observe systems. eBPF provides a method to run user-defined instrumentation code safely and efficiently inside the kernel, leading to a deeper level of insight into system behavior.
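Tracing in practice hinges on propagating context between services, and OpenTelemetry's default propagator uses the W3C Trace Context `traceparent` header for this. As a minimal stdlib Python sketch of what that header contains (real systems should generate and propagate these via the OpenTelemetry SDK, not by hand):

```python
import re
import secrets

def new_traceparent(sampled=True):
    """Build a W3C Trace Context `traceparent` header value:
    version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 32 hex chars, shared by all spans
    span_id = secrets.token_hex(8)    # 16 hex chars, unique per span
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

header = new_traceparent()
print(header)  # e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

Every service forwards this header on outbound calls, which is what lets a tracing backend stitch one end-user request back together across a dozen microservices.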

Focus Areas

  • Understand the difference between monitoring and observability.

  • Further emphasizing the need to be proficient in at least one language: understand how to implement and improve telemetry collection in applications and systems.

  • Most distributed environments will have gaps in monitoring, alerting, and observability. Identify them, and make a plan to close the gaps.

5. Artificial Intelligence

What technology article written in 2023 would be complete without a mention of Artificial Intelligence? It has become ubiquitous, pervading multiple knowledge domains and demonstrating incredible utility in helping to debug and write code. Two facets of AI that are particularly relevant to DevOps engineers are Generative AI and MLOps/AI Infrastructure.

Generative AI

Generative AI and Large Language Models (LLMs) are the AI tools that have grabbed all the headlines. Many readers of this article have probably at least heard of ChatGPT, DALL-E, or Stable Diffusion. The latter two have been used to create amazing designs and artwork, but the conversational and technical capabilities of chatbots like ChatGPT have already shown the potential to revolutionize software development, writing fully functional programs from basic natural language prompts. Several AI-based tools are now available that specifically target software development.

GitHub Copilot

GitHub Copilot is an AI-powered code completion and pair-programming tool that integrates into a developer’s IDE, suggesting lines or blocks of code based on the larger context of the program. Although not perfect, it can provide a significant productivity boost, saving engineers from having to do tedious, context-breaking documentation review.

Replit Ghostwriter

Replit Ghostwriter is another example of generative AI for code. It operates similarly to Copilot, using AI models to generate code. However, Ghostwriter integrates directly into the Replit environment, making it a solid choice for newer developers, or for engineers who want to test or prototype concepts or logic without needing to stand up a new local development environment.

InfraCopilot

Similarly, InfraCopilot leverages generative AI and LLMs to automate code generation specifically for infrastructure configuration. InfraCopilot is relatively new, having just transitioned from beta to open registration, so it lacks the broader feedback of some of the other tools. Nonetheless, it has serious potential to be a massive productivity boost for anyone writing IaC.

MLOps/AI Infrastructure

As companies integrate more AI elements into their software infrastructure, understanding the deployment, testing, and monitoring of these systems becomes crucial. MLOps is a rapidly evolving field that addresses these needs.

MLOps seeks to bridge the gap between the development of machine learning models and their operation in production. It involves principles and practices that ensure the smooth deployment and maintenance of ML models, as well as their robust testing and monitoring. MLOps takes the learnings of DevOps and automation, and uses them to operationalize machine learning and data science. An understanding of MLOps will help DevOps bridge the same operational divide with data engineers that it did with software developers.

Focus Areas

  • It’s not time to worry about AI replacing everyone yet. Focus on using it to be more productive.

  • Get comfortable with AI tools. Try them out; see if you can use them to solve a technical task you're working on.

  • Start thinking about how DevOps applies to the entire lifecycle of AI and data. Data ingestion, pipelines, and the AI infrastructure itself will all benefit from the same operational principles that are inherent to DevOps.

6. Developer Thriving framework

When it comes to quantifying success in DevOps, the industry has often looked to DORA metrics as the standard for measurement. This framework looks at multiple aspects of software development under DevOps, but it can be essentially distilled down to one directive: deploy more often for better outcomes. However, there are newer, more holistic ways to consider DevOps performance: by looking at the long-term health of an organization's development culture.
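It's worth noting that two of the DORA metrics, deployment frequency and change failure rate, are simple arithmetic over a deployment log. A hypothetical Python sketch (the dates and outcomes are invented):

```python
from datetime import date

# Hypothetical deployment log: (date, succeeded) pairs over a 4-week window.
deploys = [
    (date(2023, 6, 1), True),
    (date(2023, 6, 5), False),
    (date(2023, 6, 12), True),
    (date(2023, 6, 20), True),
]

weeks = 4
deploy_frequency = len(deploys) / weeks  # deploys per week
change_failure_rate = sum(1 for _, ok in deploys if not ok) / len(deploys)

print(f"{deploy_frequency:.2f} deploys/week, "
      f"{change_failure_rate:.0%} change failure rate")
# 1.00 deploys/week, 25% change failure rate
```

The measurement is easy; the hard part, which frameworks like Developer Thriving address, is sustaining the culture that moves these numbers in the right direction.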

Developer Thriving: A New Perspective

Developer Thriving introduces four factors that shed light on developer productivity, with a focus on long-term, sustainable behaviors for organizations to implement. These factors are Agency, Motivation, Learning Culture, and Sense of Belonging.

Agency

The agency factor refers to the extent to which developers feel they have control and autonomy over their work. It represents their ability to make decisions, influence the outcomes of their tasks, and contribute meaningfully to their projects. A high level of agency leads to increased motivation, job satisfaction, and a sense of empowerment among developers. Most developers have probably experienced both ends of this spectrum: the fantastic job where autonomy and agency were abundant, and the job where micromanagement and bureaucracy hindered every significant technical effort.

Motivation

The Motivation factor looks at the psychological drive that influences developers' engagement, effort, and overall performance in their work. It is affected by various elements such as visibility, recognition, self-confidence, and the sense of belonging within the team. When developers are thriving and feel a strong sense of agency, belonging, and motivation, their productivity tends to increase. Managers can improve the motivation factor by publicly recognizing and advocating for their developers' work, supporting a positive learning culture, and providing opportunities for developers to be credited directly for their contributions to the business. Motivation varies by developer, but recognition and a sense of ownership over outcomes have fairly universal appeal.

Learning Culture

Learning culture is a key factor in a thriving software development team. It refers to the environment within the team that encourages continuous learning, knowledge sharing, and skill development among team members. A strong learning culture fosters open communication, collaboration, and the willingness to learn from mistakes and adapt to new technologies and methodologies. A strong learning culture seems like a natural fit for DevOps and its focus on continuous improvement.

Sense of Belonging

Sense of Belonging refers to a developer's sense of feeling accepted, included, and valued within their team or organization. It is an important component of Developer Thriving, as well as an innate human desire. When developers have a strong sense of belonging, they are more likely to be engaged, productive, and contribute positively to the team's overall performance.

Focus Areas

  • There’s really one focus area here: regardless of framework, the focus is on helping developers be more productive and deliver better software faster. If that’s being achieved, you’re succeeding.

To keep up with change, keep learning

  • Key takeaway: keep learning!

  • Read blogs, follow other practitioners and notable industry figures on social media.

  • Experiment! Hands-on learning is one of the best ways to gain experience with a new tool.

  • Don’t get discouraged by AI; look at it as a tool for productivity, not as a replacement for engineers.

If there's one constant in the field of DevOps—and software development at large—it's that there will always be change. For DevOps practitioners, the most important tool isn't a specific piece of software or a coding language; it's the capacity for continuous learning and improvement.

There's a wealth of resources out there to help stay up-to-date. Read industry blogs to stay informed about the latest developments. Follow other practitioners and key industry figures on social media to get insights into their experience and approach.

Experimenting is also an effective way to learn. Hands-on experience with new tools and methodologies can often teach more than any tutorial or blog post. Real-world experience and heuristics will deliver more learning value than simply reading.

Lastly, approach AI with an open mind. Yes, it's a complex field, but think of it as a tool to enhance productivity, not a threat to the profession of software engineering. Just as previous technological advancements have, AI can open up new opportunities and challenges to keep work exciting and meaningful. In 2023, as ever, the key to staying relevant in DevOps is simple: never stop learning.

 

Pluralsight Content Team
