How much Linux knowledge is required to be a Cloud Engineer?
I know the basics of the Linux CLI when it comes to commands; grep, ls, mv, etc. aren't an issue. I'm just confused as to how much Linux knowledge one would need when pursuing a cloud role. I always see "strong in Linux" on cloud-related job postings. What exactly is the gauge of this?
Would I be wasting my time getting the RHCSA after my Network+? Should I just head for my first AWS cert after Network+?
I understand that certs aren't the be-all and end-all and don't guarantee a job, etc.
https://redd.it/g8ki25
@r_devops
Critique/help with the MLOps plan for a small DS team
I work for a small (~4 person) data science team within a much larger organization. The team is responsible for making two machine learning models, creating a single set of very important predicted values, and creating reporting and data validation tools relevant to those predicted values. I came on board about 4 months ago with experience in data science, systems administration, and devops. I have a strong linux background and plenty of experience with Docker and Kubernetes.
I've been asked to improve the existing modeling pipeline. I've come up with a plan that I think is feasible given the organization's goals and (considerable) constraints, but I'm hoping to get feedback on potential pitfalls or things to add from people with more ML/dev ops experience than myself. I also thought it might be fun for this sub to think through what the ideal toolchain might be given a pretty serious set of constraints.
## Goals
- Make our pipeline more robust. No more undetected data issues or breaking commits. Automatic unit and integration tests on all commits/merge requests.
- Improve pipeline transparency and reporting. Make summary and performance statistics about each model more easily available.
- Make testing and comparing new models significantly easier. More clearly tie new model results/objects to the code that produced them.
- Make the whole pipeline run continuously and automatically (given new data or other triggers).
## Constraints
- No cloud infrastructure. Everything has to be on-prem.
- Absolutely no additional money. Zero.
- Need to keep the developer toolchain as light as possible. It has to be usable by a team with limited devops/linux experience.
- Infrastructure can be (and is) linux + Docker based, but it has to be simple enough that if I die it's easy to understand and maintain for someone with a moderate devops background. For the same reason, all infrastructure setup has to be infrastructure as code.
- Any rebuild has to be done within 6 months of one person's full time work. This includes all infrastructure setup, code refactoring, CI/CD setup, and new code.
- The pipeline/modeling itself has to be written in R.
## Tools Available
- Hardware is limited to 2 beefy SQL servers, 2 beefy Ubuntu VMs, and ~6 beefy Windows workstations.
- We recently upgraded to GitLab Silver for the whole organization and have all the features that go along with it.
## Current Setup
This is a relatively new team that had to get something up and running quickly, so they haven't yet had the time or resources to set up a mature ML pipeline or incorporate many devops best practices. However, they're committed to improving things and making the best system possible, which is why they asked for this plan. The current pipeline is:
1. **Data extraction/processing.** Data is stored entirely in SQL and feature engineering/data extraction is done via SQL views. The view definitions are stored in GitLab. There is one SQL server that is used for both reporting and modeling. Data extraction takes a *very* long time.
2. **Modeling.** The entire pipeline is written in R and is stored in a single, large GitLab repo. Scripts are manually triggered sequentially to run the actual pipeline and modeling. Data ingest/validation, modeling, model validation, and reporting are all roughly part of the same repo. This repo has no unit testing or integration testing.
3. **Reporting.** Reporting is done via R Markdown and a set of Shiny apps that exist separately from the main modeling repo. These reporting applications pull from the same SQL server as the main modeling scripts and report on the predicted values created in the modeling step. Model performance metrics are not available to the reporting apps.
Other notes:
- Intermediate data and model objects are not saved. The model specification and performance statistics of the best-performing model are saved to an Excel sheet. The predicted values produced by this model are saved back to SQL.
- Testing new models and/or functional forms is
done manually by editing the main repo's R code. Model outputs are not tied to specific commits or branches.
## Planned Improvements
Given my constraints, I'd like to make the following improvements:
- Disaggregate the steps of the pipeline into discrete repositories/tasks that can be individually run, tested, and worked on. Add unit testing to each of these repos that runs automatically (via GitLab CI/CD).
- Create an R package or packages that contains widely used functions and small datasets. Also add unit testing to these repos.
- Create a separate SQL server that mirrors the original server and is used exclusively for reporting.
- Use [DVC](https://dvc.org/) and [MinIO](https://min.io/) (running in Docker on a VM) to store the intermediate data produced by each step in the pipeline, as well as the final model objects. This is to prevent people from needing to constantly re-run the same data ingest scripts.
- Use DVC to define clear DAGs that automate the process of running the pipeline and collecting metrics on the results. Upload model metrics to a new table in SQL.
- Again use DVC to tie model output and data to specific commits and branches.
- Using the model summary metrics in both DVC and SQL, add some sort of reporting dashboard (Tableau, Shiny) that facilitates easy comparison of different models.
Those are my immediate thoughts for improvements, but I'm curious to get this sub's take as well. Additionally, I'd love to find an ML ops mentor if someone out there is willing to teach/talk.
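To make the "automatic unit tests on all commits/merge requests" goal concrete, a minimal `.gitlab-ci.yml` for an R repo might look like the following. This is a hedged sketch, not a tested config: the `rocker/tidyverse` image and the `devtools` calls are assumptions about how the R packages would be structured.

```yaml
# Hypothetical GitLab CI sketch: run the R package's testthat suite
# on every push and merge request.
image: rocker/tidyverse:3.6.3   # assumed image with devtools/testthat preinstalled

stages:
  - test

unit-tests:
  stage: test
  script:
    # install any remaining dependencies, then fail the job on test failure
    - Rscript -e 'devtools::install_deps(dependencies = TRUE)'
    - Rscript -e 'devtools::test(stop_on_failure = TRUE)'
  only:
    - branches
    - merge_requests
```

Since everything is on-prem, this assumes a self-managed GitLab runner registered on one of the Ubuntu VMs rather than shared cloud runners.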
**TL;DR:** You have 2 SQL servers, 2 VMs, a GitLab subscription, 0 money, and 1 person with linux experience. What's the most robust/transparent machine learning pipeline you can make?
https://redd.it/g8odm4
@r_devops
Organizing Developer Teams, Code and Resources as your Organization Grows (Part 2: Organizing your Codebase)
Hey guys,
Having worked with teams of all sizes and technical backgrounds, I have created a series on how to properly organize teams, code and cloud resources that fits well with the "DevOps" methodology of allowing teams to autonomously develop, test and deploy software.
All constructive feedback is welcome!
[https://diligentprogrammer.com/2020/04/25/organizing-developer-teams-code-and-resources-as-your-organization-grows-part-2-organizing-your-codebase/](https://diligentprogrammer.com/2020/04/25/organizing-developer-teams-code-and-resources-as-your-organization-grows-part-2-organizing-your-codebase/)
https://redd.it/g8g7ju
@r_devops
[Article] DevOps-as-a-Product
I'm always trying to come up with different approaches to help influence change within an organization. Check out this article and let me know what you think about trying to sell DevOps like you would sell any other product to the business.
https://medium.com/devops-dudes/devops-as-a-product-64251c439340?source=friends_link&sk=bc8d992e52e2e42cd5190fc789b10ed0
https://redd.it/g8jocs
@r_devops
Windows Update API - Automation of MSU Package Downloads
Hi all.
I've been researching this for a few days and haven't had much luck so far. I'm looking for a way to programmatically download the MSU packages for Windows Server 2019/2016, mainly cumulative updates. I have a specific use case where I need the actual MSU files, and automation will be very important.
I don't think there's any sort of API for [catalog.update.microsoft.com](https://catalog.update.microsoft.com); there apparently used to be an RSS feed, but it looks like that's been removed.
Any thoughts or advice would be much appreciated!
https://redd.it/g8iik5
@r_devops
YAML File generator for kubernetes
When I was trying to create my own YAML configuration files for Kubernetes, I faced a lot of challenges. One of the main ones was: what options do I have for a particular property or field in the YAML?
So I searched Google and found an interesting GitHub project, [kubergui](https://github.com/BrandonPotter/kubergui), but that tool isn't much help when you want to generate advanced YAML configurations for your Kubernetes cluster.
So I decided to solve this problem with my web development skills 🤪 and help others easily generate YAML configuration files for their Kubernetes clusters.
After a lot of hard work, I came up with a tool that helps you generate the YAML file and also shows you which options are available for a particular YAML property/field.
The tool is Kube-yaml-gen; it generates the YAML file from the options you select.
You don't need to pay anything 😂 for this tool/app, because it's open source and hosted on GitHub Pages 🤪.
Check out the project [here](https://github.com/MohanSai1997/kube-yaml-gen). If you find it useful, please give it a star and share it; this is my first open-source project.
Any suggestions or features to improve the project are welcome 😉😊.
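For readers hitting the same "what options does this field accept?" question, Kubernetes also ships a built-in discovery command, `kubectl explain` (e.g. `kubectl explain deployment.spec.template.spec.containers`). A generator like this ultimately emits manifests along the following lines; the names and image here are hypothetical:

```yaml
# A minimal Deployment manifest of the kind a YAML generator produces.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app             # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.17  # hypothetical image
          ports:
            - containerPort: 80
```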
https://redd.it/g8i8pk
@r_devops
I just created my first app (a Prometheus exporter for Jira Cloud) in Python! I'd love it if any of you want to give me your feedback.
As stated, this is the first app I've ever created. However, it's fairly simple, and I think it's fairly well-coded as well. I'd really appreciate any feedback you guys can give me.
[https://github.com/R0quef0rt/prometheus-jira-cloud-exporter](https://github.com/R0quef0rt/prometheus-jira-cloud-exporter)
It's amazing how quickly I'm picking up programming. I started as a career sysadmin, then moved into devops. If you can learn devops, you can learn programming. I've only been studying for about a month.
https://redd.it/g8faxx
@r_devops
DevOps Shorts - the New Podcast/YouTube show
Hello folks, and welcome to DevOps Shorts, the show where we invite wonderful human beings to have a lightning-fast conversation about Devs, Ops and other Mythical Creatures. Each episode lasts only 15 minutes and is focused on just 3 questions.
2 episodes are already out:
Episode 001 - with Tobias Kunze - the CEO of Glasnostic
* [YouTube](https://www.youtube.com/watch?v=bT9Hmas65cU)
* [Anchor.fm](https://anchor.fm/devops-shorts/episodes/Tobias-Kunze---Nurture-over-Nature-ed1c6a)
* [Spotify](https://open.spotify.com/episode/31H53ShRROgGl5o7NNKxFW)
Episode 002 - with Baruch Sadogursky - the Head of DevRel at JFrog
* [YouTube](https://www.youtube.com/watch?v=kGaZM6WjpxM&t)
* [Anchor.fm](https://anchor.fm/devops-shorts/episodes/Baruch-Sadogursky---Let-Machines-Do-What-They-Are-Good-At-ed5srh)
* [Spotify](https://open.spotify.com/episode/4XyGEgbtFgclLqtyw0h3rY)
Looking forward to your feedback!
https://redd.it/g8c9jb
@r_devops
Reminder: Chef-client 14.x Support EOL is 4/30/20 (Along with a few others)
Since this is something that has driven a fair portion of my work recently, I figured I'd throw out a friendly reminder.
https://docs.chef.io/versions/
https://redd.it/g8ex2i
@r_devops
Simpler Tool than Ansible for remote script execution and file copying
I'm looking for a simpler tool than Ansible. Most of my stuff is in containers, so I don't need Ansible to set up or configure the system.
I just use Ansible to run scripts on many hosts (mainly ones not in containers).
What I like about Ansible:
* Only needs SSH; no software to install on the host.
* Can run one command on all hosts.
* Hosts are specified in a file and can be grouped.
* Easy commands to copy files to and from the host without doing it manually with scp (compared to a bash script).
* I can run commands on the remote host and on this computer.
* I can use a 'jump box'.
What I don't like about Ansible:
* A cumbersome 'programming language' and the many modules make it harder than a bash script; e.g. copying all files that match a pattern is really cumbersome.
* Though I use control structures (loops, if) very rarely, they are very cumbersome in Ansible.
* Another language to learn.
So I'm just looking for an alternative that lets me easily run scripts on a remote host and copy files to and from it. You might say I could easily build my own tool to do that, but it would lack proper documentation etc., and the next guy here would hate me.
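To make the pattern-copy complaint concrete, here is roughly what it looks like in Ansible versus plain scp. The task below is a hedged sketch with hypothetical paths:

```yaml
# Ansible: copying every local file that matches a glob takes a looped task.
- name: Copy all .conf files to the app config directory
  copy:
    src: "{{ item }}"
    dest: /etc/myapp/        # hypothetical destination
  with_fileglob:
    - "configs/*.conf"       # glob is evaluated on the control machine
```

The same thing over plain SSH is one line, `scp configs/*.conf host:/etc/myapp/`, which is the gap that simpler tools (e.g. Fabric, pssh, or a thin SSH wrapper) aim to close.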
https://redd.it/g8bty6
@r_devops
Docker-consistency on all environments (especially development)
Hello everyone!
I am new to DevOps and for the last weeks, months actually, I was trying to figure out the best practices and tools to use for a good DevOps setup. I kept trying to learn from experts and use what I learn in my current project.
One of the best practices I was told about is to use Docker in every environment, in order to keep the environment identical everywhere: locally, in CI, on staging, and of course in production.
This always made sense to me. Nevertheless I have rarely seen companies or people do this.
I committed myself to this approach, and while it was not always easy to dockerize local development, I found solutions. I never gave up, because I remembered that this was an important best practice, as I had been taught.
But now I have really gotten to a point where it does not make sense. It is really impossible to follow this approach consistently, or at least impossibly tough.
Every environment requires a different Dockerfile and it drives me crazy!
- Local: For my Go project I had to install CompileDaemon to rebuild the binary on every change I made to the code.
- Testing: There are just two ways here: either an extra Dockerfile to run tests locally, or I install bash in the first Dockerfile, ssh into the container, and run the tests from there.
- Deployment on AWS ECS: This is a whole different story. I have to set special, dynamic environment variables, run database migrations and other tasks, and all of that has to be covered by the Dockerfile, because there is no other way to talk to the system.
- CI: It's different again from any of the previous Dockerfiles!
With this I got to the point where I'm done, really done, with this approach. I hate having to maintain a Docker environment locally and in CI, and I understand why some companies/people just don't do it.
With this Reddit thread I want to give the approach one last chance and evaluate it again by reading what you all think. I'm excited to see how much you agree with my, let's call it, discovery.
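One commonly suggested way to keep a single Dockerfile across these environments is a multi-stage build with named targets, selected at build time with `docker build --target <stage>`. This is a hedged sketch for a Go project; the versions and the CompileDaemon invocation are assumptions:

```dockerfile
# Shared base stage: toolchain plus dependencies, used by every environment.
FROM golang:1.14 AS base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .

# Local development: rebuild the binary on every code change.
FROM base AS dev
RUN go get github.com/githubnemo/CompileDaemon
CMD ["CompileDaemon", "-build=go build -o app .", "-command=./app"]

# Testing: same image contents, different entrypoint.
FROM base AS test
CMD ["go", "test", "./..."]

# Production: compile once, then copy the binary into a minimal image.
FROM base AS build
RUN CGO_ENABLED=0 go build -o /bin/app .

FROM alpine:3.11
COPY --from=build /bin/app /bin/app
CMD ["/bin/app"]
```

Local dev would run `docker build --target dev -t app:dev .`, CI would use `--target test`, and a plain `docker build` produces the production image; the dynamic ECS settings stay in the task definition's environment variables rather than in the Dockerfile.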
https://redd.it/g8bglj
@r_devops
ssl auto renew and deploy
Hi gurus,
I am looking for an open-source tool to generate, manage and deploy SSL certs automatically. We have a hybrid cloud, with some workloads in AWS and some in private DCs. I tried a few things like Netflix's Lemur, HashiCorp Vault and AWS Certificate Manager, but we could not reach an agreement on what to finalize. I'd appreciate your suggestions.
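Beyond the tools mentioned, one zero-cost baseline often used for internet-facing hosts is Let's Encrypt's certbot, with deployment handled by a renewal hook. A hedged crontab sketch (the nginx reload is an assumption about the web server in use):

```shell
# Hypothetical crontab entry: attempt renewal twice a day. certbot only
# renews certificates that are near expiry, so frequent runs are safe.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

For the private DCs this assumes the hosts can reach Let's Encrypt (or an internal ACME CA such as step-ca) for validation.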
https://redd.it/g9dnj3
@r_devops
Department moving to devops: Help needed.
Hello all, sorry for the long post.
This is a crosspost on r/devops, r/sre, and r/sysadmin, as I think it is relevant to all three communities.

I am in charge of 8 ops engineers in an environment that still keeps development and operations separated.

I was finally able to convince upper management that this is no longer viable, as the job market is now moving toward dev and ops together. This means I need to provide a clear direction for the ops engineers and help them fill the gaps.

Before looking at directions, some context: I work in a big company. Within the company, our department has ten squads of 5-6 people, each with 1 ops engineer, 1 product owner, 3-4 devs, and 1 business analyst. We follow Scrum as a working methodology; the PO is responsible for the backlog.

The squad itself is called a "devops squad", meaning the devs were developing while the ops engineer was solving incidents, creating and maintaining CI/CD, performing standby shifts, and handling some extra documentation. Infra is (actually was) maintained by a different department, and we are (were) their customers.

As infrastructure responsibilities have been moving to our department, we needed someone to start focusing on them: we don't really deal with bare metal, we mostly take care of VMs (creation, decommissioning, updates), and soon we will need to think about how to manage k8s clusters.

As a side note, security, network, and firewalls are handled by other departments. A bare minimum of knowledge in those fields is required, but it is not something we need to watch 24/7. At least not yet.

Because of these infra requirements, I was able to take some of those ops engineers and have them work in a central squad providing infrastructure management and general support to the squads. This has helped us move toward an idea where developers are expected to start thinking about basic operations.

This central squad tries to follow SRE concepts as much as possible: automation, monitoring, incident automation, BPM, and so on. The part that is lacking most is the 50% time on development, because some of the people are over 50 years old, some concepts take a long time to absorb, and old habits take a long time to change.

Now, as the shift toward devops is being pushed more and more, I will be asked to provide a direction for the ops people. Of course there will be a huge focus on what they will develop, set against the fact that most of the complaints from developers are that they don't want to handle incidents because they need to code.

I need to provide a clear answer to this. I don't want those people to become just developers, or to be absorbed back into the squads, because then the old habits would be reinforced and we would end up back at: "this guy has to fix incidents because he doesn't know how to code, but he doesn't have time to learn to code because he needs to fix incidents, and by the way he knows infrastructure better than us, so why bother." I know this because I ran this same experiment, and this is what happened.

I would like to keep this central squad, but I need to find a proper purpose for it and justify it. At the same time, I need to teach and help people move away from this mentality of separation between dev and ops: this needs to happen with the technical people, but even more importantly with the non-technical people (POs, BAs).

One purpose I am considering is to turn it into an SRE team and fully adopt the SRE scope and purpose. I am totally unaware of what issues I might face with this choice.

For sure I will need learning materials for those who are not used to coding, to help them understand concepts such as branching strategies, automated testing, and so on. So far, the biggest coding challenges we have faced are scripts under 300 lines.

Has anyone experienced something like this and could share their thoughts?
Upgrading AMIs of a k8s cluster provisioned with the RKE (community) terraform provider.
Currently in the process of implementing Rancher server. The initial cluster is provisioned using RKE (the terraform provider at the moment), and then we place Rancher server on top. Maybe I'm overlooking something in the documentation, or I'm just thinking about it wrong, but has anyone performed a rolling update of the underlying EC2 instances?
https://redd.it/g9dqy9
@r_devops
What are my options for deploying my web backend that uses Docker?
Hopefully this is the right place to ask such questions. Essentially I have a RESTful API built with Django and MongoDB that I'd like to deploy to a VM I have (no major cloud providers, I literally just have root access to a RHEL VM). I've already dockerized and tested it, and now I'd like to automate the deployment process (the repo is on GitHub). What are the options that make the most sense? Will I need a container orchestration tool like K8s or Docker Swarm? Do I set up a webhook for when something gets pushed to master? I'm kind of lost as to what I can do, because DevOps tooling can get confusing for beginners. At some point I'd also like to automate deployment of the frontend side of things, but that's in a different repo (I don't think it makes too much of a difference).
https://redd.it/g9anbo
@r_devops
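For a single VM, the webhook idea in the post above is often enough on its own, without K8s or Swarm: GitHub calls an endpoint on push, the endpoint verifies GitHub's HMAC signature header, then re-runs the deploy command. A minimal stdlib-only sketch (the secret, port, and `docker-compose` deploy step are assumptions, not from the post):

```python
import hashlib
import hmac
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = b"change-me"  # the same secret configured on the GitHub webhook


def valid_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")


class DeployHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if not valid_signature(SECRET, body, self.headers.get("X-Hub-Signature-256", "")):
            self.send_response(403)
            self.end_headers()
            return
        # Hypothetical deploy step: pull the new image and restart the stack.
        subprocess.run(["docker-compose", "pull"], check=True)
        subprocess.run(["docker-compose", "up", "-d"], check=True)
        self.send_response(200)
        self.end_headers()


# To run on the VM: HTTPServer(("", 8080), DeployHandler).serve_forever()
```

The signature check matters because the endpoint triggers arbitrary commands; without it, anyone who finds the URL can redeploy your app.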
What are the top 3 things you wish current APM tools did better?
I find current APM tools like Datadog complex. My top 3 wishes:
1. Simpler pricing. I always fear I will be charged for something I don't know about.
2. Simpler dashboards that show me the relevant issues, so I don't have to dig through multiple graphs.
3. Show me what you are doing to my infra: how much RAM is the agent using, and what extra load does the APM put on my infrastructure?
What are your top 3?
https://redd.it/g98m1z
@r_devops
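On wish #3, you can at least spot-check agent memory yourself on the host rather than waiting for the vendor to report it. A small sketch using only the Python standard library, reading resident set size from `/proc` (Linux only; the agent process name in the comment is a placeholder):

```python
import os


def rss_kib(pid: int) -> int:
    """Resident set size of a process in KiB, read from /proc/<pid>/status (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0


# Example: measure this process. To measure an APM agent instead, resolve its PID,
# e.g. int(subprocess.check_output(["pidof", "datadog-agent"])) and pass that in.
print(f"current process RSS: {rss_kib(os.getpid())} KiB")
```

Sampling this periodically alongside your app's own metrics gives a rough before/after picture of the agent's overhead.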
How I manage secrets in git
I created a little Git repo showing how I store encrypted secrets in git and then decrypt them at runtime on EC2/ECS/Kubernetes.

[https://github.com/noqcks/GitSecrets](https://github.com/noqcks/GitSecrets)

I created this because sops and other git secret managers make it easy to store the secrets, but make little mention of operating them once they're stored! The documentation can also be quite verbose and confusing.

I've used this previously for personal projects and at places I've worked.

What do you think? How do you manage secrets in production?
https://redd.it/g96ywi
@r_devops
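The core flow the repo describes — encrypt a secret, commit the ciphertext, decrypt at runtime with a key the environment provides — can be illustrated end to end in a few lines. This is a *toy* cipher (SHA-256 counter-mode keystream plus an HMAC integrity tag), written only so the sketch is self-contained; in production you would use sops, age, or a cloud KMS, never hand-rolled crypto:

```python
import hashlib
import hmac
import os


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key+nonce+counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce || ciphertext || HMAC tag — this blob is what you'd commit to git."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def decrypt(key: bytes, blob: bytes) -> bytes:
    """Runtime side: verify the tag, then recover the plaintext."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext was tampered with")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The operational point is the key split: the ciphertext lives in git, while the key reaches the container only at runtime (instance profile, KMS, or an injected env var), so cloning the repo alone reveals nothing.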
How does Google Cloud Platform detect which npm script to run?
Hi,

I just started learning to use Google App Engine. I deployed a Node app, which had an npm script called "dev" to launch the app with the command `npm run dev`. However, I never mentioned that command to GCP. Did it guess which one to pick? How does it work?

Thanks.
https://redd.it/g97xfe
@r_devops
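App Engine doesn't scan your scripts to guess: for the Node.js runtime it runs `npm start` by default (i.e. the `start` script in package.json), and you can override that explicitly with an `entrypoint` in app.yaml. So if the "dev" script appeared to run, the likely explanation is that `start` delegates to it, or an entrypoint was set. A sketch of the explicit form (the runtime line should match whatever you deploy):

```yaml
# app.yaml
runtime: nodejs12
# Optional: without this line, App Engine runs `npm start`.
entrypoint: npm run dev
```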
Want to get into DevOps and want to learn a language. Why Python?
Hi all,
Sorry that this post may seem like a repetition of other posts; however, I'm hoping someone can answer the specifics of my question so I can make the right choice on where to begin.
I can understand and read the basics of code, but I'm in no way a developer and would really need to start from scratch when learning a language. Generally speaking, everywhere I have worked has been a Microsoft one-stop shop (other than AWS, but they moved to Azure when migrating to O365).
I've had "DevOps" roles in the past. I've worked with AWS and created networks in the cloud, and those things were generally pretty easy for me to get my head around. I also moved on to a second job where I worked within a deployments team; again, it was advertised as DevOps, but it was more that aspects of the role fit into DevOps than that it was a DevOps role itself.
So, in regards to languages, I've searched online for the best language to learn for starting in DevOps, and a lot of people recommend Python.
**But what I don't understand is how Python integrates with platforms like Azure or AWS. Generally speaking, I thought these platforms mainly used C# or Java?**
**So how can you write in Python and still have it be used effectively?**

Thanks in advance
https://redd.it/g94qzs
@r_devops
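On the bolded question above: cloud platforms aren't tied to C# or Java. Their control planes are plain HTTPS+JSON APIs, and the Python SDKs (boto3 for AWS, azure-sdk-for-python for Azure) are convenience wrappers around those calls, which is why Python works just as well as any other language. A stdlib-only sketch of what such a wrapped call looks like under the hood (the endpoint, token, and payload are made up for illustration):

```python
import json
import urllib.request

# A hypothetical "create VM" request; real providers differ only in the URL,
# the auth header format, and the payload schema.
req = urllib.request.Request(
    "https://compute.example-cloud.com/v1/vms",
    data=json.dumps({"name": "web-01", "size": "small"}).encode(),
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    method="PUT",
)
# An SDK would now send this with urllib.request.urlopen(req) and parse the JSON reply;
# here we just inspect what was built.
print(req.get_method(), req.full_url)
```

The SDK's value-add is auth signing, retries, and pagination, not access to some C#-only interface, so whichever language you learn first can drive AWS and Azure equally well.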