The Role of Automation in DevOps: CI/CD Pipelines
Series index
Hey r/devops community! In this installment of the Comprehensive DevOps Learning Series, we will explore the vital role automation plays in DevOps, with a particular emphasis on Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
By embracing automation, teams can streamline their development and deployment processes, enhancing efficiency and ensuring a consistent, high-quality software product. Please be aware that there are many different tools and software ecosystems that solve similar problems. There are only a few mentioned here, and further discussion in the comments is encouraged.
## Why Automation Matters in DevOps
Automation is a cornerstone of the DevOps philosophy, as it helps to eliminate manual, error-prone tasks and accelerates software delivery. It also fosters collaboration by allowing development and operations teams to work together more seamlessly throughout the entire software development lifecycle. By automating key processes, organizations can achieve the following benefits:
- Improved productivity: Automation frees up valuable time and resources, enabling teams to focus on higher-value tasks, such as feature development or process improvement.
- Faster time to market: Automated processes expedite the delivery of new features and bug fixes, ensuring faster software releases and a more rapid response to customer needs.
- Enhanced consistency and reliability: Automated tasks are less prone to human error, resulting in more consistent and reliable outputs.
## Continuous Integration and Continuous Deployment
CI/CD pipelines are central to the automation process within DevOps. Let's take a closer look at each of these concepts:
### Continuous Integration (CI)
Continuous Integration is the practice of integrating code changes from multiple developers into a shared repository frequently, often multiple times a day. CI enables teams to identify and resolve integration issues early in the development cycle, reducing the risk of conflicts and costly fixes later on.
Typical CI pipeline components include:
- Version control system (e.g., Git, SVN)
- Build system (e.g., Jenkins, Bamboo, TeamCity)
- Automated testing tools (e.g., JUnit, Selenium)
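Components like these are typically wired together in a pipeline definition checked into the repository. As a minimal sketch, here is what a declarative Jenkinsfile for a simple build-and-test CI pipeline might look like (the `./gradlew` commands are placeholders for whatever build tool your project actually uses):

```groovy
// Minimal declarative Jenkins pipeline: every push triggers a build and the test suite.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'       // placeholder test command
            }
        }
    }
}
```

The same shape (checkout, build, test as distinct stages) carries over to GitLab CI, GitHub Actions, and most other CI systems, just with different configuration syntax.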
### Continuous Deployment (CD)
Continuous Deployment is the process of automatically deploying code changes to production environments after they have passed through the CI pipeline and met established quality criteria. CD ensures rapid and reliable software releases, enabling teams to deliver new features and improvements with minimal delay.
CD pipelines often involve:
- Configuration management tools (e.g., Ansible, Puppet, Chef)
- Infrastructure as Code (IaC) platforms (e.g., Terraform, AWS CloudFormation)
- Container orchestration tools (e.g., Kubernetes, Docker Swarm)
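On the CD side, Infrastructure as Code is what makes deployment targets reproducible: infrastructure is declared in files, reviewed like application code, and applied identically in every environment. A minimal Terraform sketch, purely illustrative (the provider region, resource, and bucket name are made up for the example):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # illustrative region
}

# Hypothetical bucket for storing build artifacts; because it is declared
# here rather than clicked together in a console, the same definition can
# be applied by the CD pipeline in every environment.
resource "aws_s3_bucket" "build_artifacts" {
  bucket = "example-build-artifacts"
}
```

Running `terraform plan` in the pipeline shows the diff before `terraform apply` makes any change, which is what gives IaC its review-and-audit properties.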
## Conclusion
Automation, particularly in the form of CI/CD pipelines, is a critical element of DevOps practices, enabling teams to work more efficiently, minimize errors, and accelerate software delivery. By embracing automation and integrating CI/CD pipelines into their workflows, organizations can reap the benefits of streamlined processes, improved collaboration, and consistent, high-quality software products. There are many valuable guides on specific tools, deeper dives into why these principles work, and published books describing the DevOps world. Share your favorite, and your thoughts, experiences, and questions in the comments below!
Sources/additional reading:
Duvall, P. M., Matyas, S. M., & Glover, A. (2007). Continuous Integration: Improving Software Quality and Reducing Risk.
Fowler, M. (2006). Continuous Integration.
Humble, J., & Farley, D. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.
Fearless Distroless
With the rise of Docker came a new focus for engineers: optimizing the build to reach the smallest image size possible.
A couple of options are available.
Multi-stage builds: A Dockerfile can consist of multiple steps, each having a different Docker base image. Each step can copy files from any of the previous build steps. Only the last one will receive a tag; the others will be left untagged.
This approach separates one or more build steps and a run step. On the JVM, it means that the first step includes compiling and packaging, based on a JDK, and the second step comprises running, based on a JRE.
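The JDK-build / JRE-run split described above can be sketched as a multi-stage Dockerfile. The image tags, the Maven wrapper, and the jar path are illustrative assumptions, not a prescription:

```dockerfile
# Stage 1 - build: full JDK plus build tooling; this stage stays untagged.
FROM eclipse-temurin:17-jdk AS build
WORKDIR /app
COPY . .
RUN ./mvnw -q package

# Stage 2 - run: smaller JRE-only image; only this final stage receives the tag.
# Only the packaged artifact is copied over, so build tools and sources
# never reach the runtime image.
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```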
Choosing the smallest base image size: The smaller the base image, the smaller the resulting image.
In this post, I’m going to focus on the second point.
Read further
https://redd.it/12ooagr
@r_devops
Is Jenkins still the king?
A lot of people on Reddit seem to recommend GitLab or Drone, but if you get on Indeed and search for jobs, there are tens of thousands of posts looking for people who know Jenkins and only a tiny fraction of listings interested in any other CI framework. Is it worth investing time into anything else? It's my decision, and while the other options seem friendlier, I don't see any point in learning them if I'm not going to be able to use them in the future.
https://redd.it/12ovcoe
@r_devops
Anything on par with HashiCorp Vault
Any alternative to Vault for a multi-cloud / on-prem environment?
https://redd.it/12oxakd
@r_devops
How you deal with pain-in-the-ass team member?
We are a few DevOps guys in our mid 30s working on the product; we also happen to be good friends since college.
The problem is that one of the guys has a huge ego. He is a good person outside of work, knowledgeable, perhaps more experienced than us in certain areas, but every time his approach is questioned or a better alternative is proposed, he starts getting defensive. Instead of constructive discussion, he leaves sarcastic remarks, putting others down.
His recent target is me, as he seems even more agitated since I got my AWS SA Pro and Azure SA Expert certs, while I never rubbed anything in his face.
We try to give props to the guy since it seems he has a very brittle ego, but honestly I don't know if I want to continue working with them/him.
So I'm puzzled how to deal with those kinds of people, and what causes this kind of attitude?
Any reasonable non-conflicting suggestions are highly welcomed.
https://redd.it/12ox31l
@r_devops
How would you manage Perforce?
Hello everyone,
I work for a game development studio based in the UK. We have a few hundred developers and, as usual in this industry, we use Perforce.
Since the DevOps team was short-staffed, the company tried to offload Perforce Helix Core and Helix Swarm administration by hiring a third party to do it. However, it has come to my attention that this backfired: the third party needs to be hand-held through solving any problem that appears, and it even reached the ridiculous point of the entire system going down because they forgot to renew the SSL certificate of their authorization-handler website.
Now I have been thinking, and am open to seeing what you guys think: in an ideal world, with no limitations and the full support of your companies, how would you set up your Perforce/other centralised version control tools?
https://redd.it/12ozyz3
@r_devops
First position as a dev ops engineer
For those of you who are hiring managers, what do you expect out of an entry-level DevOps engineer?
https://redd.it/12p3hpn
@r_devops
Saw a techfluencer in LinkedIn with all AWS, Azure and GCP cert.
Seems like it is good for marketing, but is it worth studying for so many certs?
https://redd.it/12p4bnw
@r_devops
KCL v0.4.6 is Coming — Rust-Based IDE Extension, Helm/Kustomize/KPT Integrations
The KCL team is pleased to announce that KCL v0.4.6 is now available! This release brings three key updates: Language, Tools, and Integrations.
+ Use KCL IDE extensions to improve KCL code writing experience and efficiency
+ Helm/Kustomize/KPT cloud-native community tool integrations
+ Improve the KCL multilingual SDK for easy application integration.
See here for more: https://kcl-lang.io/blog/2022-kcl-0.4.6-release-blog/
https://redd.it/12p75ny
@r_devops
View Kubernetes Secrets Quickly with a Single Command
Ever struggled to view a Kubernetes secret value, since you first have to find it and then decode it?
Not anymore: watch this YouTube short and learn how to view the secret value with a single kubectl command.
https://youtube.com/shorts/XIRBdqAJkag?feature=share
https://redd.it/12ozske
@r_devops
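For context, the manual route such a one-liner replaces is pulling the base64-encoded value out of the Secret with `kubectl` and decoding it yourself. A sketch, where the secret name `db-creds` and key `password` are hypothetical, with the decode step demonstrated on a self-contained sample value:

```shell
# Manual approach a single-command plugin replaces (names are illustrative):
#   kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d

# The decode step itself, shown on a sample value so it runs anywhere:
encoded="c3VwZXJzZWNyZXQ="            # base64 for "supersecret"
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                       # prints: supersecret
```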
Help Project repo structure with common code, dependencies, IP protection issues
Hello everyone,
I'm currently struggling to tame a brownfield project for an embedded device, particularly to enable basic CI processes, mainly due to the project's poor codebase structuring. For context, at a high level, the product application is distributed; that is, we have multiple subsystems (e.g., vision, navigation, safety-control) that communicate either via a kind of message broker or a point-to-point communication channel (e.g., using I2C). Some of these subsystems may be deployed to different hosts, with different OS and platform architectures (i.e., different build toolchains might be required).
Currently, the way the project is structured is a bit of a mess (simplified example):
- Repository A, for subsystems X and W, to be deployed on platform Alpha;
- Repository B, for subsystem Y, to be deployed on platform Beta;
- Repositories C, D, and E, for subsystem Z, to be deployed on platform Alpha. Some of these repos depend on each other (yuck...);
- Repositories F and G: one with all necessary build toolchains and third-party dependencies, the other with "common code" such as definitions of the data structures for the messages exchanged between subsystems/components. These are either pulled to create the build environment or used as dependencies for some subsystems (in the form of git submodules).
Hopefully, you can imagine the headaches resulting from managing the build environments, dependencies, and submodules. My initial thought was to join everything into a mono-repo, with a "common" folder for the code shared between subsystems, and each subsystem residing in a dedicated folder comprising the corresponding source code and everything it needs to be built (toolchain, third-party packages, etc.). Another approach was to have one repo per domain/subsystem, in such a way that each is decoupled and fully buildable in isolation (à la microservices). Alas, I'm not sure how to deal with common code using this approach, nor what impact it has on the CI process...
To complicate things, we have externals working on parts of a subsystem (say repos C and D for subsystem Z) who cannot have access to an IP-protected part of the same subsystem (say repo E). My idea to "solve" this was to merge the "public" codebase for this subsystem into a single repo (H = C + D) and have the IP-protected code imported as a git submodule (the externals wouldn't have permissions on the corresponding repo, and thus couldn't pull it). I could follow a similar approach for a mono-repo with the IP-protected part as a submodule, but... the externals must be exposed to as few parts of the project as possible.
Finally, I need to find a way to integrate everything into a cohesive product comprising the different subsystems (e.g., the product has subsystem X v1.0.0, subsystem W v1.0.1, subsystem Y v1.0.5, etc.) and make sure everything works (via system tests, etc.). The mono-repo would easily solve this; for the multi-repo approach, I would have an "umbrella" repository importing each subsystem as a git submodule at the necessary tag/version (plus any helper scripts that I would need to build, glue, and deploy everything).
Has anyone experienced a similar scenario? How would you solve this, and what would your CI process look like? Is there any book discussing these kinds of issues, or strategies for structuring projects using Git repositories, that you would suggest? Most of the content I find relies on perfectly clean or basic scenarios...
Thank you everyone, and apologies for the long post!
https://redd.it/12paw3k
@r_devops
Jenkins guides
Any places considered the best references for studying Jenkins?
https://redd.it/12pkdwd
@r_devops
OnCall Fiasco
Friends, I’ve recently started a new job that has brought a challenging work-life balance. This includes unpredictable weekly deployments and releases (9 PM), including Fridays, and being on call every three weeks.
I'm curious if this is normal, especially for senior positions – is it common for higher-paying roles to demand this much off-hours work?
https://redd.it/12po3hi
@r_devops
Any real benefit on going with Datadog for AWS monitoring?
I have been wondering why some steps were taken by my current company and this is one of them.
We only use AWS, hence we only need to monitor AWS services, so I am puzzled about why someone would go the extra mile and pay for Datadog when we have an AWS built-in product like CloudWatch. I could understand that Datadog would be a good fit if we used multiple clouds, but that is not the case, and we do not foresee any change to this within a couple of years.
Is there a big difference that would make going for Datadog reasonable if a company only uses AWS services?
https://redd.it/12pofo8
@r_devops
Addressing high cardinality using streaming aggregation
https://last9.io/blog/high-cardinality-no-problem-stream-aggregation-ftw/
https://redd.it/12pqsk0
@r_devops
Kubernetes-Native Synthetic Monitoring with Kuberhealthy
Today I published an article titled "Kubernetes-Native Synthetic Monitoring with Kuberhealthy", where I explain how you can spin up a synthetic monitoring platform in your own Kubernetes cluster using Kuberhealthy, including how to deploy it, configure it, create synthetic checks, and set up monitoring and alerting.
Here's the link: https://betterprogramming.pub/kubernetes-native-synthetic-monitoring-with-kuberhealthy-15a8939972a
Feedback is very much appreciated!
https://redd.it/12pojwu
@r_devops
Advice for becoming a better DevOps engineer
Hi everyone,
I have recently been thinking about growing my DevOps skill set. I know the answer most companies push their employees toward is doing certifications to advance your skills and career, but I don't fully agree with this.
I need some advice on which skills to focus on that can improve my DevOps career no matter the tech stack I am working with.
I find it pointless to just pump out cert after cert when, in the end, I might not even use the skills the cert taught me. If there are recommended certificates to target, I would like them to be broad enough that I can use the skills from them across any project I work on. I am looking for skills and certs that can stand the test of time.
If anyone has some recommendations, it would be much appreciated.
https://redd.it/12ptp01
@r_devops
What's your take if DevOps colleague always got new initiative / idea?
Hello all, I want to ask and might need some new insight. What is your take if one of your colleagues (same DevOps team) always has a new initiative or idea, even though your team is not strong or experienced enough to act on it (it might be a blocker instead of an improvement)?
Thank you for replying to this thread.
https://redd.it/12ptlab
@r_devops
Resume Review for DevOps/Cloud jobs
Hello,
Looking to get feedback on my resume. I've been in IT for 4 years and a Cloud Engineer for the past 2 years. I will be applying soon to DevOps/Cloud roles and wanted to check with the community whether there are any glaring flaws in the format or anything I might be overlooking.
https://www.reddit.com/r/resumes/comments/12ptaeg/need_resume_review_cloud_engineer_4_yoe/
Thank you!
https://redd.it/12pyoyj
@r_devops
Kubecon EU 2023
Will you happen to be at KubeCon Europe 2023? We will be at booth P15, ready to talk about AI/ML, MLOps, open-source ML, Kubeflow, and more. There are a bunch of demos Ubuntu has prepared and a fun keynote on secure MLOps. Read more and meet us there!
https://redd.it/12q00v6
@r_devops