Did "DevOps" somehow become synonymous with "Deployment Engineering" in the job market?
When I first started getting into DevOps (that is to say, the DevOps philosophy, not any job title or team named "DevOps") it was all about providing developers with tooling, education, and guardrails on service ownership and operations. We would give them the keys to open cross-service firewall ports, scaling/autoscaling rules, building deployment pipelines and stages, machine size and resource allocation, and all the things an "ops" person would do for them. With those keys, we provided some guidelines and automatic checks for sanity. We would write linters for their terraform code and require someone (an SRE or senior developer) schooled in operational needs to approve their Terraform/Chef/Puppet/whatever code. We would write the common/sidecars needed to allow their service's containers to run.
Now I see job listing after job listing and recruiter after recruiter with "DevOps" and "SRE" roles that are all about deployment engineering. Speed up testing. Speed up deployment. Fast rollbacks. Very little collaborative interaction with service developers to help them understand how their service operates, but a whole lot of "here's a black box - push your code into it and now it's online."
What happened?
https://redd.it/yjp95b
@r_devops
Help me hone my focus. My goal is to transition into an SRE and/or Platform Engineer style role in the next year-ish.
Hi all,
I currently work on the operations side, but I focus the majority of my efforts on automation. I've spent most of my time in a quasi-hybrid role, primarily around infrastructure configuration management and automation. Ansible, PowerShell, and PowerShell DSC are my bread & butter right now.
I'd like to make a list of say 4-5 technologies to focus on over the next year to make myself attractive for roles related to platform engineering or site reliability.
I just recently passed my AWS CCP exam. I also work with AWS somewhat regularly, and so I have a good conceptual knowledge of the core services: S3, EC2, VPC, CloudFront, IAM. I also have a decent idea around API Gateway, Lambda, and SSM from my experience. (Note: I'm lumping in a bunch of the networking into VPC, but I have a decent idea about NAT Gateways, VPC endpoints, subnets, yadda yadda). I also have my Terraform Associate certification, and am very comfortable with Terraform / Terragrunt.
So my list over the next year is as follows:
1. HTML / CSS / JS. No way around it. I'm not that great at this, but I need to be better. At least proficient.
2. Python. I feel like my years and years of PowerShell have set me up for learning another language, but I don't think many places will look at PowerShell favorably. I can already muddle my way through, but I need to be able to actually understand what I'm building with Python.
3. Containers. Again, I have a conceptual understanding, but I need to learn how to use them in AWS with ECS. Obviously a stepping stone to EKS.
4. AWS Database services. I know that DynamoDB exists, but beyond that have no idea how to really use it, or when it's preferred over something like RDS or PostgreSQL.
What are your opinions? Am I on the right track? This seems like a lot, but I could devote a few months to each and I feel like this would set me apart.
https://redd.it/yjierh
@r_devops
kxkn - Simple CLI tool for switching between Kubernetes namespaces and clusters
This is a small open-source tool that I developed while learning Rust (inspired by kubens and kubectx).
https://github.com/koolwithk/kx-kn-rust.git
Why kx and kn in Rust?
- Learning :)
- Small binary size

It does not have all the features and proper error handling of kubectx, hence the smaller binary size and faster performance :) You can give it a try and report any bug/feature or contribute :)
As of 2 Nov 2022 it's faster than kubectx (by 1.5x) and kubens (by 2x), measured with the `time` command on the same cluster.
Alternative tools:
[Kubectx](https://github.com/ahmetb/kubectx)
Kubie
[k9s](https://github.com/derailed/k9s)
kubeswitch
https://redd.it/yjjrz9
@r_devops
Any nginx experts?
I am using nginx stream as a transparent proxy ([https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_upload_rate](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_upload_rate)). In a way, it is acting as a firewall. Here is my nginx config: [https://pastebin.com/xmVdnax1](https://pastebin.com/xmVdnax1). I am getting these errors:
2022/11/02 02:56:17 [info] 25278#25278: *2333 recv() failed (104: Connection reset by peer) while proxying and reading from client, client: 172.25.239.179, server: 0.0.0.0:443, upstream: "136.146.33.36:443", bytes from/to client:666/4737, bytes from/to upstream:4737/1111
If I reduce the connect timeout to 10 seconds, I don't get these errors. I am running a very big server with 64 GB of RAM, so it is highly unlikely that it does not have enough memory. Using Amazon Linux. Anyone got an idea? Thanks
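For context, a minimal sketch of what the relevant stream block could look like with that shorter connect timeout. The directive names are from ngx_stream_proxy_module; the addresses and layout here are placeholders, not the poster's actual Pastebin config:

```nginx
stream {
    upstream backend {
        server 136.146.33.36:443;      # upstream host from the log line above
    }

    server {
        listen 443;
        proxy_connect_timeout 10s;     # lowered from the 60s default
        proxy_timeout 10m;             # idle timeout between reads/writes (default)
        proxy_pass backend;
    }
}
```

Note that `proxy_connect_timeout` only bounds the TCP connect to the upstream; the "Connection reset by peer while proxying and reading from client" message points at the client side of an established session, so the timeout change may be masking rather than fixing the reset.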
https://redd.it/yjv15s
@r_devops
Can I use Cloudfront as a single URL for multiple services?
Aloha colleagues,
To give you a bit of context, we need to deploy our application to our customers, and since lots of them are behind proxies, we need to provide them with a list of URLs to whitelist.
The problem is that we want to keep the list as short as possible, and we wonder if it is possible to have CloudFront serve as a "router" for different services. I know CloudFront can be used as a front for S3, but I could not find anything about ECR.
Is CloudFront even the right tool for the job? We have not yet settled on ECR or S3 and could even go for a completely different stack.
Thanking you in advance for the help!
https://redd.it/yjy4pz
@r_devops
Development on Kubernetes Multicluster with Devtron
How can Devtron (https://devtron.ai/) simplify Kubernetes for developers? The article shows how to easily run apps from a single UI on multiple clusters, with Helm support: https://piotrminkowski.com/2022/11/02/development-on-kubernetes-multicluster-with-devtron/
https://redd.it/yk10d9
@r_devops
A question for GitHub Actions users
Are you running your tests on GitHub, or on an external service such as AWS?
View Poll
https://redd.it/yjz0q4
@r_devops
Write docker image size and build date to a file and include it in the image
I want to be able to read the container image's size and build date from a file inside the container after it's published and while it is running.
I’m also working on a bash script to read the date on the file but having some issues.
Any suggestions or help greatly appreciated!
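A sketch under assumptions: the build date can be baked in at build time, but the image's final size is only known after the build completes (and writing it into the image would change the size), so the size is usually recorded afterwards rather than inside the image. File path and base image here are hypothetical:

```dockerfile
# Dockerfile (sketch): record the build timestamp inside the image
FROM alpine:3.18
RUN date -u +%Y-%m-%dT%H:%M:%SZ > /etc/build-date
```

After the build, `docker image inspect --format '{{.Size}}' <image>` reports the size in bytes, which can then be stored externally or attached as an image label, and the running container can read the date back with `cat /etc/build-date`.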
https://redd.it/yk33ls
@r_devops
I wrote an OSS tool to tunnel your IDE to Kubernetes
Since the day I started my DevOps journey, it was always a dream of mine to create an open-source devtool.
I co-wrote a tool called #KubeTunnel which connects your local development environment to your Kubernetes cluster for debugging complex microservice architectures without deploying them locally, without waiting for a long CI/CD process, and without any syncing mechanism to the cluster.
This achieves developing exactly as you would locally with the added benefit of getting full network access to and from your cluster.
Check it out here: https://github.com/we-dcode/kubetunnel
*Buy me a cup of coffee by leaving a star on GitHub 🌟*
https://redd.it/yk2i5b
@r_devops
Different IaC environments on cloud
So I've been working with IaC (Terraform and CloudFormation) on AWS for a while. I've touched on simple environment stacks where Dev, SIT, UAT, and Prod are identical; this makes trunk-based development very simple and easy.
However, I also touched on more complicated environments where the application stack uses different AWS services in different environments to save cost.
Just as an example, Dev may only use EC2 instances to run the app, then UAT will add an ASG, and Prod will use an ASG + ALB...
I'm curious to know if this practice of using different services in different environments is normal. I find it very difficult to make an IaC change to, say, the ALB when it only exists in prod.
In my opinion, UAT should be an exact replica of prod, so testing can at least be done in UAT (non-production)... This still makes me wonder what branching and coding strategy is right for this type of infrastructure requirement.
Has anyone else here faced similar challenges?
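One common pattern for this (a sketch, not the poster's setup; all names are hypothetical) is to keep a single Terraform configuration and gate the prod-only resources behind a per-environment flag, so every environment runs the same code from the same branch:

```hcl
# variables.tf
variable "enable_alb" {
  type    = bool
  default = false # set to true only in prod.tfvars
}

variable "public_subnet_ids" {
  type    = list(string)
  default = []
}

# main.tf: the ALB exists only in environments where enable_alb = true
resource "aws_lb" "app" {
  count              = var.enable_alb ? 1 : 0
  name               = "app-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}
```

Other resources reference it as `aws_lb.app[0]`, guarded by the same flag. Changes to the ALB then flow through every environment's plan (as a no-op where it's disabled), which keeps trunk-based development workable even when the environments differ.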
https://redd.it/yk3ppf
@r_devops
Datadog has OAuth Support Now
I'm a little surprised it took them this long, but now I expect several companies will build on top of it. For example, LambdaTest can show test results from within Datadog: https://www.datadoghq.com/blog/oauth/
It's not clear what endpoints are exposed yet but I imagine documentation will be forthcoming, and hopefully self-serve submissions too.
https://redd.it/yk6whi
@r_devops
How do you control images pulled from public image repositories like DockerHub?
We have a need to control which images a developer can source from DockerHub. Ideally we only want them to pull verified, approved images. But how do we ensure that only approved images are sourced?
For any images brought in, we want to have them scanned to ensure that they are safe to use. But are there any other recommended controls?
I work in a highly regulated industry and our risk tolerance is very low. The more safeguards, the better. But we are new to container management.
https://redd.it/yk90ba
@r_devops
Guidance on provisioning QEMU VM images based on specific hardware products
## Description
I work for a company that mainly develops custom industrial-grade computer hardware. As part of the software, we ship the hardware with an Ubuntu image with all the bells and whistles in it (think Docker, Linux Cockpit, necessary configuration, container images).
### Tools Used
- Cloud-Init (first-boot provisioning)
- Hashicorp Packer with QEMU Plugin for x86_64
- Ansible (post-processor provisioning)
### Resultant Output
I have `qcow2` images that are successfully pushed to our internal artifact registry.
## Query
Since we produce a couple of different hardware products in-house, I would like to separate the provisioning of the QEMU virtual machine images based on the hardware product family.
The only problem is that, in a QEMU virtual image, Ansible facts about the hardware generally do not work. We build the images in a CI system, then create the filesystem tarballs and boot them "manually" in the post-production stage of the hardware.
Is there some way I can create Ansible roles that provision according to the product hardware family without actually provisioning on "actual hardware"?
### TL;DR
How do I create Ansible roles for diverse hardware products when provisioning images virtually using QEMU?
e.g.
Product A --> consists of APT packages x,y,z,docker
Product B --> consists of APT packages x,z,docker
Product C --> consists of APT packages y,docker
etc.
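One approach (a sketch with hypothetical file and variable names): since hardware facts aren't available inside the QEMU build, pass the product family in as an extra variable at build time and drive package selection from per-product vars files instead of facts:

```yaml
# playbook.yml (sketch)
# group_vars/product_a.yml would contain, e.g.:
#   product_packages: [x, y, z, docker]
#
# Build with:  ansible-playbook playbook.yml -e product_family=product_a
- hosts: all
  vars_files:
    - "group_vars/{{ product_family }}.yml"
  tasks:
    - name: Install the APT packages for this product family
      ansible.builtin.apt:
        name: "{{ product_packages }}"
        state: present
```

The same `product_family` value can be forwarded from Packer's Ansible post-processor, so one pipeline produces one image per product family without ever touching real hardware.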
https://redd.it/ykfuf7
@r_devops
FREE Azure Data Factory for Azure Data engineer and DP-203 Exam
Free Course: https://www.udemy.com/course/azure-data-factory-for-azure-data-engineers-with-hands-on-labs/?couponCode=100_OFF
https://redd.it/ykjd0g
@r_devops
DevOps for generated art?
Not sure if this is the correct subreddit to post in, but here goes. (feel free to point me to a more appropriate one)
I am getting into generated art, which is heading in the direction of AI. I want to deploy some sort of pipeline of AI tools/services, but I don't know where to start. Where do I begin? What tools should I be using? Which AI models are simple to deploy and use?
If anyone has experience doing this, I'd love to hear from you.
Thanks!
https://redd.it/ykqhou
@r_devops
I need help with `jq`
Hey all. Hope it's OK to post this question here, since the context for what I'm trying to do with `jq` is an automation/monitoring that my team is trying to do.
I have a JSON payload with the following structure:
    {
        "bigArray": [
            {
                "key1": "value1",
                "key2": "value2",
                "key3": value3,
                "key4": value4,
                "key5": "value5",
                "key6": value6,
                "key7": value7
            },
            {
                "key1": "value1",
                "key2": "value2",
                "key3": value3,
                "key4": value4,
                "key5": "value5",
                "key6": value6,
                "key7": value7
            },
            {
                "key1": "value1",
                "key2": "value2",
                "key3": value3,
                "key4": value4,
                "key5": "value5",
                "key6": value6,
                "key7": value7
            },
            ...
        ]
    }
I must parse/reduce this JSON. I don't care about all the key/value pairs; I only care about, say, key2 and key4. So I need a `jq` query that takes the JSON above as input and generates the JSON below as output:
    {
        "bigArray": [
            {
                "key2": "value2",
                "key4": value4
            },
            {
                "key2": "value2",
                "key4": value4
            },
            {
                "key2": "value2",
                "key4": value4
            },
            ...
        ]
    }
I have no clue how to do this. Can anyone help? I've been Googling things like "filter by key" but no luck so far.
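Assuming `bigArray` is a JSON array of objects (the payload as posted is missing the `[` `]`), the projection the poster wants is `{key2, key4}` applied to each element. A minimal sketch of the same transformation in Python, with the equivalent jq filter in the comment; the sample values are made up:

```python
import json

# Equivalent jq filter:
#   jq '{bigArray: [.bigArray[] | {key2, key4}]}'
payload = json.loads("""
{"bigArray": [
  {"key1": "a", "key2": "b", "key3": 3, "key4": 4},
  {"key2": "c", "key4": 5, "key7": 7}
]}
""")

wanted = ("key2", "key4")
# Keep only the wanted keys from each element of bigArray
reduced = {"bigArray": [{k: obj[k] for k in wanted if k in obj}
                        for obj in payload["bigArray"]]}
print(json.dumps(reduced))
```

In jq, `{key2, key4}` is shorthand for `{key2: .key2, key4: .key4}`, so the filter rebuilds each object with only those two fields.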
https://redd.it/yky998
@r_devops
Are you concerned about the economy and potential layoffs?
What skills are you brushing up on, and trying to pick up on in case the inevitable happens?
I feel like our org could strip 80% of our Agile <insert buzzword here> roles lol..
https://redd.it/yl4c6n
@r_devops
How do you avoid DevOps jobs that are really just ops / sysadmin jobs?
Title. How do you filter out the actual DevOps / SWE - Infra jobs, compared to the ones that are really just sysadmin jobs?
https://redd.it/yl3845
@r_devops
(RANT) Gov Devops is Difficult
Run away from any environment where you do not have complete control of, and access to, everything in said environment. All the pain you will experience is not worth it unless you are getting paid six figures.
https://redd.it/yl7z0l
@r_devops
GitOps as an audit log is not very accessible or informative.
Wrote a blog post about something that was bugging me for a long time:
GitOps as an audit log is not very accessible or informative.
https://gimlet.io/blog/three-problems-with-gitops-as-deployment-history-and-how-we-overcome-them
But I like the GitOps approach, so I wanted to fix this, and I believe many others have attempted the same. What do you think of the issue? How did you solve it?
https://redd.it/yl60ax
@r_devops
How to communicate to my manager that our implementation of Ansible is totally wrong?
Title.
Last month I started working for a new company. We work with Ansible and automate mostly simple tasks within our organization: loads of LDAP management, some infra, etc. In my experience with Ansible I've never come across an environment like the one we have now, and I know that none of the best practices are being followed. Things that should be simple playbooks are created as roles. The roles have only one main.yml tasks file and a couple of variables in defaults/, but absolutely nothing else. Stuff like that should just be playbooks, whilst roles should contain more than a couple of things (templates, vars, files, etc). They also create new roles that are 90% import_role calls from other places, with the other 10% being "new" tasks. Needless to say, this creates dependency hell. What happens when they update one role? They need to update it in another 50 places. Ah... they don't use Ansible tags either.
I believe this environment is beyond salvation at this point. It's been going on for a long time, so there is a lot of work done following these implementations. It would also require a change of mindset. How do I tell this to my new manager without sounding like a moron, and without my teammates disliking me for basically telling them their work is done wrong? I wanted to create some sort of analysis of the situation and present it to my manager, explaining why this does not follow standards and what steps should be followed to improve our work environment. And... admitting that everything we have done so far would take too long to repair, so we should change our way of working from now on.
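For reference when making that case, the conventional role layout from the Ansible documentation is below; a reasonable rule of thumb is that anything that would only fill tasks/main.yml plus a couple of defaults belongs in a plain playbook, not a role:

```text
roles/
  my_role/
    tasks/main.yml       # entry point for the role's tasks
    handlers/main.yml    # handlers notified by tasks
    templates/           # Jinja2 templates
    files/               # static files to copy
    vars/main.yml        # higher-precedence variables
    defaults/main.yml    # lowest-precedence default variables
    meta/main.yml        # role dependencies and metadata
```

Framing the analysis around this published convention, rather than personal preference, also makes it easier to present without sounding like an attack on the existing work.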
https://redd.it/ylebse
@r_devops