Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Starting my first proper DevOps job on Monday. Some questions about the culture and mentality aspects.

Hi there,

I have been working on the pure Ops side with Azure, SQL Server, and a bit of Azure DevOps and Python for the past 4 years, and I have now landed a proper DevOps role. I am set to start the new gig this Monday. The new role will involve extensive work with AWS, GCP, GitLab, Jenkins, etc., in addition to whatever I already know.

I am not worried about learning all the new tech, but I'm a bit confused about how to make a strong start in the new career. The team I am joining has experienced DevOps engineers, many of them with extensive dev experience prior to that. I am coming from an Ops background and am worried about not fitting in quickly from the start.

Are there any rules of thumb, unwritten rules, or mentality points I can read up on so that I understand the "art of DevOps"?


Thanks in advance

https://redd.it/r2sgzk
@r_devops
Who has ever set up Artifactory as a Docker registry?

I really need help with creating the proper cert so I can log in via the command line using docker or podman. I've set up Nginx and am able to reach the registry via my browser over HTTPS.

But every time I try to docker login, I keep getting: "x509: certificate is not valid for any names, but wanted to match etc etc"
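For what it's worth, that error usually means the certificate has no subjectAltName (SAN) entries; Go-based clients like Docker ignore the CN field and only match SANs. A minimal self-signed sketch with openssl (1.1.1+ for `-addext`; `registry.example.com` is a placeholder for your real registry hostname):

```shell
# Generate a key and self-signed cert whose SAN matches the registry host.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=registry.example.com" \
  -addext "subjectAltName=DNS:registry.example.com"

# Confirm the SAN actually made it into the certificate:
openssl x509 -in registry.crt -noout -text | grep -A1 "Subject Alternative Name"
```

With that in place, point Nginx at the new pair, and copy the cert to `/etc/docker/certs.d/registry.example.com/ca.crt` (or, for podman, under `/etc/containers/certs.d/`) so the client trusts it when you `docker login`.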

https://redd.it/r310y3
@r_devops
As a DevOps engineer, how do your daily responsibilities differ from a pure sysadmin (Windows or Linux) job?

1) As a DevOps engineer, how do your daily responsibilities differ from a pure sysadmin (Windows or Linux) job?

2) Do you feel that with the cloud, the old days of managing everything in a datacenter or on-premises by hand are long gone, or will we always need the skills to install an OS, patch, and administer things because there is a limit to automation and CI/CD pipelines? Will the old sysadmin jobs still exist, or will they all need to program/script against the Azure and AWS APIs with tools like Jenkins, Terraform, etc.?

https://redd.it/r335ul
@r_devops
What is the easiest cloud server provider to get started with?

I am learning more about DevOps, and I am wondering which provider is the easiest to use and get started with.

How would you rank the cloud providers in terms of ease of use?

I started looking into AWS, but it doesn't feel that easy, mostly because it has lots of options and places where I could miss or forget something. Is there anything that is easy and relatively cheap to get started with? Heroku is a bit easier, but they charge a lot for their services.

https://redd.it/r38f0k
@r_devops
Which tools do you use to design your cloud architecture?

Hi community, I currently use r/drawio to design our cloud architecture. Recently a partner told us we are using old tools for this. So here we are: which tools do you use to design a cloud infrastructure?

https://redd.it/r3exqp
@r_devops
Behind the scenes of the night our transformer shut down in our data center

One night in September, a power transformer shut down in one of our Parisian data centers.

While we were writing this article, this situation happened again for the third time in ten years. Like the two other times, our two power backups ensured the power lineup worked while our team rallied to bring the situation back to normal. Read on to find out what happened during this tense night.


We equip all of our data centers with a Scaleway-made building management system tool called SiMA. Thanks to this tool, we can monitor and analyze hundreds of thousands of real-time data points from our equipment. This allows us to have a complete overview of our infrastructure at all times, and to be able to optimize it to be as close as possible to customers’ demands.

We build our own software and hardware to monitor our equipment because manufacturers' products do not come with the technical depth we require.

It is common to see building management system tools exceed one million euros in our business.

So, we built our own and integrated it as an internal chatbot. Thanks to SiMA, we started receiving notifications at 05:09 AM, alerting us that one of our power lineups was no longer being supplied by the grid. Our technicians immediately checked the programmable logic controller and confirmed what we feared: SiMA was right, and we had a long night ahead of us. As soon as the failure occurred, the automatic switch to our generators had been made.


# First step: synchronize and assess the situation


We synched with our on-call engineers and board members, and notified our clients. At this point, we had five days' autonomy on fuel oil, and 20 minutes on battery. The issue was likely caused by insufficient oil in the transformer itself. Our team quickly went to the site and found an oil leak by a transformer component called the Buchholz relay. This is a protection relay that acts as a sensor to monitor the temperature, oil level, and gas discharge of the transformer.

# Safe working conditions - isolate the high-tension unit

The fault on the Buchholz relay was triggered by the oil insufficiency, but luckily, we were only a few liters of oil short. We started by creating a safe working environment, isolating the high-tension unit from the power transformer, while other members of the team searched for vegetable oil to stock up on. This proved to be quite a mission in itself, as the incident occurred in the middle of the night.

We use vegetable oil instead of other types of oil to power our transformers, mainly for environmental and security reasons. The oil we use has a fire point of over 300°C, which makes it barely flammable. It also is bio-sourced, easily biodegradable, and non-toxic. Unfortunately, so far, our experience with vegetable oil has been pretty bad.

The power lineup continued to be fed by its two electric generators, supervised by our engineers. The company that handles the maintenance came to the site, too, ready to assist us. Even with two electric generators, you can never be too cautious. The faulty Buchholz relay was dismantled and checked to diagnose and understand what went wrong, and learn from there. The new relay was calibrated and then installed.

# We have now been relying on our electric generators for eight hours.

If you made it this far, the rest is here, with images, video and all: https://blog.scaleway.com/behind-the-scenes-of-the-night-our-transformer-shut-down-in-our-data-center/

https://redd.it/r39ouv
@r_devops
Has anyone interviewed for an SRE role at a FANG company?

Share your interview experience for an SRE role at a FANG-type org.

https://redd.it/r3g8wz
@r_devops
GitHub is down (11/27)

All of GitHub appears to be down. https://www.githubstatus.com/

Congrats to anyone working on their side projects this holiday weekend in the USA lol

https://redd.it/r3mop1
@r_devops
Techworld Nana has no DevOps experience?

Techworld Nana is a really great instructor, and she does a very good job of breaking down complicated topics into something digestible for beginners.

And I think everyone here has at least heard of Techworld Nana. She's been an instructor for a couple of years now, covering beginner topics.

I recently checked her LinkedIn, and found it interesting that she has no actual DevOps or IT experience in general. Maybe she did and just didn't include it on her LinkedIn; it's hard to tell. But going off of her profile, there's just no actual work experience mentioned. Not just for DevOps, but for IT in general.

Just found that interesting. She has a DevOps Bootcamp course that goes for over a grand, and I think this is a really good example of where the industry is at the moment.

A bootcamp taught by someone without real experience in the things she teaches.

https://redd.it/r3klsg
@r_devops
What certification is most valuable?

Hello everyone, I'm trying to get my foot in the DevOps job market.

So far I have two Python certifications, PCEP and PCAP, plus the Terraform Associate and AWS Cloud Practitioner.

I tried the Docker DCA and failed it; it seems like not a lot of people pass it nowadays.

What's the most valuable now in this situation?

Thanks!

https://redd.it/r3dt6m
@r_devops
For those wanting to get into DevOps career.

This is oriented mostly toward those new to the industry; I've been seeing a lot of posts asking how to learn DevOps and its skills.

The fundamental part is that you establish a learning routine. Pick a series of projects from the internet, like setting up a static web page, basic Linux administration, or deploying a NodeJS app through CI/CD to Docker, Kubernetes, or the cloud. Preferably pick a group of projects that cover cloud, Linux, CI/CD, Docker, Kubernetes, Infrastructure as Code, and Ansible. It doesn't have to be complex; it can be simple. Then do them every day. Every day. Until you can do it without watching the videos. Until you can bring in your mother or any layperson and explain to them in detail what's going on, and why, and how. Do this routine before you tackle something new, before you watch a new tutorial.

I have put a great series of videos and projects to get started here: https://www.youtube.com/c/Thetips4you . Your support is important to me.

Obviously, as your skills grow you should expand and add to this routine, but the most important thing is that you do it. Certifications help you get interviews sometimes, but they will never speak for you. At the end of the day, you need the experience of doing it constantly to do the talking for you.

Doing something every day, consistently, is experience.

Doing something once and moving on is an experience.

If you can understand the difference then you're already ahead of the curve.

Wish you all the best and Happy learning.

https://redd.it/r3wm8v
@r_devops
Worst interview question/experience for DevOps position

What is the worst interview question that you guys have ever been asked for a DevOps position?

I had this interview with one of the well-known game companies. The recruiter initially reached out, and I was excited and prepared for it. I passed the first few rounds, and in my last round I was interviewed by the potential team members.

One of the guys asked me, "What do you see if you click on this and that button in Jenkins?" I thought, "Did this guy really expect me to remember the Jenkins user interface?" I paused for a bit, trying to vaguely remember it in my head. The guy then yelled, "If you don't know, say you don't know! Don't waste my time!!"

It totally ruined my mood for the rest of the interview, and I told the recruiter I was no longer interested.

https://redd.it/r3xvjh
@r_devops
What resume projects are you building?

Hello everyone,

There's a lot of talk about building your own projects for your resume. So let's see what you've got.

I just started my own first project. I don't know exactly what it'll look like, but it will use Jenkins and Terraform on AWS with some form of dockerized Python application and maybe some k8s in the mix.

Would love to see some of your projects for inspiration.

https://redd.it/r4ddwf
@r_devops
Curious what type of position I should be looking for as a SWE/DevOps engineer, and what I should be getting paid versus what you guys are getting paid

Hey guys, I am a SWE who also knows DevOps (k8s, Terraform, GitLab CI), entering the 4th year of my career.

I pretty much automate everything I see fit in bash and Golang. I'm able to build my own highly performant servers in Golang with no coaching or supervision, and I'm looking to pick up an even more systems-heavy language such as Rust.

Right now I work at a crypto startup as pretty much the DevOps lead (Helm, Kubernetes, Terraform, bare k8s with no k8s-specific cloud provider, i.e., the goal is multi-cloud) as well as a software engineer responsible for his own service. No one else on the team understands the technology or DevOps processes (not even the CTO; the CEO does, but he's busy).

We had a former old-school "senior" sysadmin, but they fired him after my complaints about having to train him at such low pay.

So my question here is: with my skill set as a SWE/DevOps engineer, should I be aiming for "cloud engineer", "infrastructure engineer", or "platform engineer" roles? How much would you reckon I should be getting paid for being able to contribute to both?

One of my concerns: I feel as if I learned this DevOps stuff for no reason at my current pay; I should have just spent the time trading options, etc.

https://redd.it/r49sla
@r_devops
Dokku vs Docker compose

I have a Django-Celery-Redis application which, until about 3-4 months ago, was hosted on Heroku. I took the community's advice on self-hosting it and was advised to use Dokku. Now I'm running the application on an EC2 instance with a (Dockerfile+Procfile) based Dokku deployment.

Since I'd love to delve more deeply into Docker itself, would it be a good idea to move from Dokku to plain Docker? One feature of Dokku I'd really not want to lose is git-push-based deploys.

Extra information: Currently, since the Dokku deployment is on a development server (production is still hosted on Heroku, which I'd like to migrate within a week or two), it is connected to Dokku's Postgres plugin. In production, however, we plan to use AWS RDS for the database and ElastiCache for Redis.

https://redd.it/r3wzxs
@r_devops
What are you doing for network diagram automation?

Looking for ideas on how best to generate some (internal) diagrams of various AWS architecture and, just as importantly, make sure the diagrams stay up to date. We use Terraform, so I was thinking about spitting out a new graph every time a change is made and then using Graphviz or some other graph tool to pretty it up.

Curious what other folks are doing and if there are certain things that work or should be avoided.

https://redd.it/r4lb15
@r_devops
Would this be expected from a mid-level software engineer?

Two weeks ago, I started a new job as a mid-level software engineer. Full-stack web app development is my main background, so the position I accepted is for a "React Software Engineer". Indeed, this company has produced many web apps, in React and other stacks, and this was the kind of work I was told I would be doing during the interview process. So I was a bit surprised when the customer informed me they want me to help migrate their entire on-prem infrastructure to Azure, AWS, and GCP using Terraform and Ansible, starting with Azure. But remembering I'm expected to have some level of knowledge of cloud providers (I have some experience with AWS), I thought it wasn't totally unreasonable to ask a mid-level SWE to do this.

But then I asked how much stuff needs to move, and they said ALMOST 900 SERVICES AND DATA STORES. Now, I can see a team of DevOps or site-reliability engineers taking on this challenge. But I don't even know where to begin! I'm just a novice with cloud providers, and that's only with AWS. I've deployed an app on EC2 and made a few Lambdas, but that's about it! And that was for personal projects, where I didn't have to worry about production concerns like security, permissions, backups, proper networking, etc.

I started getting nervous at this point and tried my best to keep my cool. I told them I have some experience with cloud providers, but only with AWS and will need some time learning Azure, Ansible, and Terraform. They said that's fine and they don't expect me to have any certification with any cloud provider. I will be on a team of people with various levels of experience. I am also waiting a couple weeks for a background check to go through before I can start this work, and I can begin learning in the meantime.

My two concerns are (1) I will fail at this task because of my lack of experience, and (2) I'm not sure if this task is even right for me. In past jobs, there were separate teams dedicated to this kinda thing. We would work together, yes, but they would be the ones to actually set everything up in the right way. Plus, I prefer to stick with the development side of things. It's where I have the most experience and get the most joy. I'm happy to learn new things, but a task of this scale seems like it should be for someone with much more experience with the technology.

I am considering voicing these concerns to my manager because I know there are other projects my company is working on that better align with my skill set. But at the same time, I don't want to seem like I'm giving up or not interested in learning new things. What would you do in my shoes? Any advice is greatly appreciated.

https://redd.it/r4ln2n
@r_devops
Top 10 DevOps Trends For 2022



DevOps is a collaborative approach to software delivery that brings together business, development, and operations teams. It is a set of practices and ideas aimed at flexible, quick, and efficient development. Through frequent communication, DevOps engineers ensure that the product in development matches market requirements. The **DevOps** approach offers benefits as teams collaborate to deliver a product that works in the market, including rapid development, enhanced collaboration and responsiveness, faster time to market, etc. One of the main reasons for the popularity of DevOps consulting is that it enables high-quality software delivery. Following the latest DevOps trends and best practices will help you deliver a best-in-class product. So here we'll discuss the DevOps trends and best practices for 2022.

## Top 10 DevOps Trends For 2022



### 1. Application Of DevSecOps-

DevSecOps is a new DevOps trend that refers to the integration of security into DevOps. Although it might appear to be a new concept, it has already been in use for some time. Vulnerabilities, attacks, and security breaches cause issues across networks. DevSecOps brings an agile approach to security that sorts out security issues and incorporates new technologies to guard against all kinds of hazards.

DevOps can reduce cost and speed things up. According to Verified Market Research reports, the worldwide DevSecOps market was valued at $2.18 billion in 2019 and is expected to reach $17.16 billion by 2027, growing at a CAGR of 30.76% from 2020 to 2027. The latest trends and DevOps predictions suggest that DevSecOps ensures the security aspect of the system.

### 2. Microservice Architecture-

Microservices architecture was a cutting-edge approach in 2021. It divides an application into small, independent units that are scalable and flexible. DevOps predictions for 2022 suggest small changes that make the cycle more hassle-free. The global microservices architecture market was valued at $2,073 million in 2018 and is predicted to reach $8,073 million by 2026.

In DevOps, when a new version must be deployed, you can't deploy just a minor feature or piece of functionality on its own; this is where microservices architecture comes in. DevOps with microservices overcomes that complication by allowing independent delivery cycles. Customization will also benefit from the growing range of scaling choices.

### 3. DevOps Automation-

DevOps automation is necessary for most enterprises today. Development teams spend a lot of time filling in manual forms, creating change requests, and logging into portals. Manual processes disrupt essential tasks and the development lifecycle. DevOps automation is becoming common because companies are moving to understand their data better and to automate manual processes. This allows programmers to focus on app development and results in faster delivery of products.

### 4. Migration To Serverless Architecture-

These days, DevOps companies are looking to provide solutions and consulting that use cloud computing to avoid server management. Serverless architecture will be a major DevOps trend in the coming years because companies want to reduce both the hassle and the cost of server management.

Cloud providers perform the backend tasks, which reduces the administrative cost of server management. This allows enterprises to deploy apps directly to the cloud. Migration from on-premises servers to the cloud will transform how today's development teams work. Cloud computing is a great method for deploying apps in real time.

### 5. GitOps Becoming The New Normal-

Development processes need tools that programmers can understand. Hence, for continuous delivery, GitOps combined with DevOps is a strong approach. It can be described as an operating model for developing cloud-native apps with new technologies. GitOps brings deployment, monitoring, and management together in one place, working from a declarative description of infrastructure and apps. With automated CI/CD pipelines rolling out infrastructure changes, it uses various tools to compare the actual production state with what is under source control.

It notifies you whenever there is a divergence. The main goal of this approach is to make development faster, so that teams are always ready to ship changes and updates, and to ensure that complex apps running in Kubernetes don't face issues because of vulnerabilities.

### 6. Resilience Testing Becoming The Mainstream-

The DevOps community is focusing on resilience testing. The intersection of testing, performance, observability, and resilience testing is becoming mainstream. Looking at recent DevOps technologies, you can say that a huge digital transformation is accelerating in all spheres.

### 7. Incorporation Of Infrastructure As Code (IaC)-

Infrastructure as Code has been a main tenet of DevOps in cloud environments. Storage devices and service networks, whether in the cloud or on-premises, fall under the "code" category. This allows companies to automate and simplify their infrastructure.

IaC also delivers a version control system for infrastructure, which ensures the team can roll back to the last known good state. The result is rapid recovery and reduced downtime with IaC and DevOps.

### 8. Enablement Of Kubernetes-

With Kubernetes, programmers can easily share software and apps with the IT operations team, and this happens in real time. A common cause of errors is differences between IT environments and infrastructure. Kubernetes supports the goal of collaboration and effectiveness between teams, and there is a huge increase in efficiency from adopting a Kubernetes workflow. It also makes it easier to build, test, and deploy pipelines in DevOps.

### 9. Incorporation Of The Artificial Intelligence (AI)-

DevOps teams use AI and ML technologies to ease their workflows. It can be said that AI optimizes the DevOps environment, and it is focused on managing big data. The AI-driven approach has emerged as a tool that enables better decision-making. AI guarantees data accessibility, providing data seamlessly to the DevOps team.

Know the role of AI in DevOps at **Role Of AI In Transforming DevOps**.

### 10. Infrastructure Automation (IA) and Continuous Configuration Automation (CCA) Tools-

DevOps teams are looking to IA tools to bring automation to the configuration, delivery, and management of IT infrastructure. IA tools empower the DevOps team and help with the management of multi-cloud and hybrid-cloud infrastructure, as well as delivering services designed on-premises.

They also cover cloud environments, with effective resource provisioning. With IA tools, teams can ensure proper planning and execution of self-service.

Most companies are considering automated delivery services for on-premises and IaaS environments. The benefit is that DevOps teams can focus on providing customer-focused agility. It can also help drive robust improvements across networking, containers, and security.

https://redd.it/r4q1l1
@r_devops
Using CI/CD to pass/fail a security-tool (EDR, Splunk) change based on CPU and RAM usage

Background: We use a number of security tools in our org. One of the complaints we typically get is that those tools are the cause of high CPU and memory usage. It is also hard to do retrospective root-cause analysis to confirm that our tools are the cause of the resource usage when service performance is impacted. So we will possibly need a test before any upgrade or policy change, documenting the performance usage as our artefact.

Objective: I thought of using CI/CD (Azure DevOps) to simulate installation of the security tools, implement the settings change or upgrade, sleep X minutes, and then run a CPU and RAM benchmark for X minutes. If the benchmark breaches a threshold, say 90% peak CPU or RAM, return a non-zero (fail) code, which will fail the pipeline.

Prior Research:

1. Azure Monitor queries. Has anyone tried using this? Can the CPU and RAM metrics be returned into the pipeline result?
2. Or use Monit. It can monitor node resources and send alerts, but I would need the pipeline to capture a metric pass/fail instead of alerting to email or a log file.
3. Or write htop/free/vmstat output to a file and parse it to get the return code. Seems too hackish?
4. Or install the Prometheus node exporter and a Prometheus server. But the pipeline would need to query the Prometheus server, which polls the result from the node exporter. Not elegant.
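Option 3 may be less hackish than it sounds if you skip parsing tool output and read `/proc` directly. A sketch in plain POSIX shell (Linux only; the 90% threshold and one-second sample window are illustrative placeholders to tune per host):

```shell
#!/bin/sh
# Fail a CI stage when CPU or RAM usage breaches a threshold (Linux only).
THRESHOLD=90

# RAM: percent of MemTotal currently in use, from /proc/meminfo.
mem_pct=$(awk '/MemTotal/{t=$2} /MemAvailable/{a=$2} END{printf "%d", (t-a)*100/t}' /proc/meminfo)

# CPU: percent busy over a 1-second window, from two /proc/stat samples
# (field 5 is idle time; the sum of all fields is total time).
cpu_snapshot() { awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i; print $5, total}' /proc/stat; }
set -- $(cpu_snapshot); idle1=$1; total1=$2
sleep 1
set -- $(cpu_snapshot); idle2=$1; total2=$2
cpu_pct=$(( (100 * ((total2 - total1) - (idle2 - idle1))) / (total2 - total1) ))

echo "cpu=${cpu_pct}% mem=${mem_pct}%"
if [ "$cpu_pct" -ge "$THRESHOLD" ] || [ "$mem_pct" -ge "$THRESHOLD" ]; then
    echo "resource threshold breached" >&2
    exit 1    # non-zero exit fails the pipeline step
fi
```

Run as a script step after the sleep/benchmark window; the non-zero exit is exactly what Azure DevOps needs to mark the stage failed, with no external monitoring stack involved.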

Would you have any other suggestions? A more elegant and simpler solution, hopefully.

https://redd.it/r4pb11
@r_devops