Unable to install Nginx from Ansible
How can I install Nginx through Ansible using this repo? I am unable to get the latest Nginx version from it.
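For context, a minimal playbook using that role might look like the sketch below. The role name comes from the linked repo; the `nginx_branch` variable is an assumption that may differ between role versions, so check the role's defaults.

```yaml
# Sketch only -- install the role first with:
#   ansible-galaxy install nginxinc.nginx
- hosts: webservers
  become: true
  roles:
    - role: nginxinc.nginx
      vars:
        # Assumed variable: selects the nginx.org repository branch so
        # the latest mainline release is installed rather than the
        # (often older) distro package.
        nginx_branch: mainline
```

If the distro package keeps winning, the usual culprit is the role installing from the OS repos by default instead of the official nginx.org repo.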
https://redd.it/13mlgpg
@r_devops
Bootcamp
Hey guys, any free bootcamps or courses covering different topics like Kubernetes, Agile, etc.? I'm totally new to DevOps, but not new to programming (Python and JavaScript).
https://redd.it/13lr6s3
@r_devops
How do you WORK WITH testing teams?
We have developers working on Jira tickets. Approx 5 developers and 3 testers. Here's our current flow: Developers create a pull request that addresses a Jira ticket from a feature branch to a main branch. (Jira goes to "In Review".) Automated testing is run against the branch. Another developer reviews and then approves, and the branch is merged to main. Automated testing is run against the main branch and a developer approves deployment to a "dev" environment. (Jira goes to "In Test")
A tester does some manual/semi-automated testing against the dev environment (updating automated tests if required). If tests pass then "main" branch is approved to be deployed to a staging environment. (Jira goes to "In Staging") Then more verification happens and if that passes main is deployed to a production environment. (Jira ticket goes to "In Prod")
The problem I've seen is that it takes a long time for a ticket to go from initial development to production. And it's unclear i) who is responsible for a ticket and ii) when a ticket is really done. We have many situations where, when a failure is found by the testing team, we can't pinpoint the ticket/branch that caused it, so all are blamed. Similarly, we have situations where developers are held back from merging PRs because "dev is broken" or "we're still doing testing".
My instinct is to declare a ticket done when it is merged and auto-deployed to the dev environment. This decouples responsibility/ownership. Any findings from testers are caught and raised as new tickets. Is this a sensible approach? How do you work with testing teams?
(Not sure if this is the right sub, but seems like the closest I could find)
https://redd.it/13mxhqh
@r_devops
Fairly new to DevOps, I'm looking for feedback and ways to improve our CI/CD pipeline
Hello everyone, I hope you're having a good day.
As the title states, I'm seeking feedback for our (startup) CI/CD pipeline and deployment process.
Currently, we have four different services running as dockerized containers:
- Next.js frontend (React frontend + API)
- Node.js backend (connects to external services, writes to the database)
- InfluxDB
- Nginx
We utilize Docker Compose to run these services in development. For deployment, we push the code to GitLab, where a GitLab CI action is triggered. This action builds all the images based on a build-docker-compose file and pushes them to Docker Hub. Finally, we connect to a remote VM where we:
- Copy a run.sh file to the VM using SCP
- Copy a production-docker-compose file to the VM (which currently contains the environment variables) using SCP
- SSH into the VM and execute the run.sh file, which stops the services, pulls the latest images, and starts them up again
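One small hardening step for the environment-variables point above: keep them out of the compose file itself and in an env file that is copied over the same way. A sketch; the service and file names here are made up:

```yaml
# production-docker-compose.yml (sketch)
services:
  backend:
    image: myorg/backend:latest   # hypothetical image name
    env_file:
      - .env.production           # scp'd alongside the compose file, never committed
```

run.sh stays unchanged; the only addition is one extra SCP of .env.production, and secrets no longer live in a file that tends to end up in version control.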
Currently, this process works well for us since we don't rely on external services or frameworks. This keeps the complexity low, which is beneficial for our developers who don't have a software engineering background. We have discussed the possibility of migrating to Kubernetes, but at the moment, I don't see the need for it. While Kubernetes offers advantages, its complexity would introduce additional costs for us without significant benefits.
Please feel free to provide feedback or comments on how we can enhance our DevOps processes.
Thank you!
https://redd.it/13mvttz
@r_devops
Does there exist a tool like docker compose that runs containers serially?
Looking for a platform agnostic ci tool that uses containers for each step. Similar to Argo workflows but not as complex or featured. If docker compose could run containers serially it would be perfect for what I’m looking for. Use case is the developer would be able to run the same workflows on their local machine as the pipeline regardless of which ci platform is used officially (we use multiple).
I started working on building a tool but wanted to check first in case it already exists.
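Depending on the Compose version in use, plain Docker Compose can get close to this: depends_on with condition: service_completed_successfully makes each "step" container wait for the previous one to exit 0. A sketch, assuming a Compose release that supports that condition:

```yaml
# compose.yaml (sketch): `docker compose up test` runs the build
# container to completion first, then the test container -- serially.
services:
  build:
    image: alpine
    command: sh -c "echo building"
  test:
    image: alpine
    command: sh -c "echo testing"
    depends_on:
      build:
        condition: service_completed_successfully
```

It is not a full CI DAG like Argo Workflows, but for linear pipelines it runs identically on a laptop and on any CI platform that has Docker.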
https://redd.it/13n1nbd
@r_devops
How's DevOps market right now?
I see most companies pulling back on consulting roles out there. How are you feeling the heat right now in the DevOps contracting world?
https://redd.it/13n005y
@r_devops
How to encrypt traffic from NginX server to upstream servers?
Hi, I have two EC2 instances. One is a public instance running an Nginx API gateway; the other is a private one where I run my microservices using Docker Compose.
Currently I have a Let's Encrypt SSL certificate configured on the Nginx API gateway, so traffic from end users to the gateway is encrypted. But traffic from the gateway to the private EC2 instance is not. How can I encrypt that internal traffic? I couldn't find a proper document or tutorial on it. Can anyone help me?
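A common pattern is to terminate a second TLS session on the private instance (a self-signed or internal-CA certificate is fine, since only the gateway talks to it) and have Nginx proxy over https. A sketch of the gateway side; the IP, port, and CA path are placeholders:

```nginx
location /api/ {
    proxy_pass https://10.0.1.10:8443;   # private instance, TLS port
    proxy_ssl_verify on;                 # verify the upstream's certificate
    proxy_ssl_trusted_certificate /etc/nginx/internal-ca.pem;
    proxy_ssl_server_name on;            # send SNI to the upstream
}
```

The private instance then needs its own server block (or a TLS-terminating sidecar container) listening with that certificate on 8443.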
https://redd.it/13n8gmm
@r_devops
I don't see any volumes when I run "sudo docker volume ls", even though my docker-compose file declares volumes, the Postgres container runs perfectly, and I can see data in the specified path. I also have a few questions; answers would be a great help.
Here is my docker-compose file : https://pastebin.com/wxHZD8nn
Here is my init.sql file: https://pastebin.com/PsfA9iGW
As far as I know, Docker volumes come in two types: bind mounts and named volumes.
Below is my understanding of both; kindly correct me if I am wrong:
1) Bind mounts use a full host path, e.g. /home/test/data:/var/lib/postgresql/data
2) Named volumes use a name for a folder created under /var/lib/docker/volumes/{name},
e.g. test:/var/lib/postgresql/data
A test folder will be created inside /var/lib/docker/volumes.
Please correct me if I am wrong.
Can you also tell me:
1) How do I write bind mounts and named volumes in a docker-compose file?
2) What is the correct way?
I would like to understand the different volume types in detail, because I thought only named volumes were declared under the top-level volumes key, but I have seen bind mounts there too, so I am confused now.
Please help me clear up my doubts.
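To the two questions above: both forms go under a service's volumes: key, and only named volumes also need a top-level volumes: declaration (which is why `docker volume ls` shows nothing for a pure bind-mount setup). A sketch; pgdata is an example name:

```yaml
services:
  postgres:
    image: postgres
    volumes:
      # Bind mount: host path on the left, no top-level entry needed,
      # and it never appears in `docker volume ls`.
      - /home/test/data:/var/lib/postgresql/data
      # Named volume (alternative): name on the left; Docker keeps the
      # data under /var/lib/docker/volumes/pgdata/_data.
      # - pgdata:/var/lib/postgresql/data

# Required only when using named volumes:
volumes:
  pgdata:
```

Both ways are "correct"; bind mounts are handy when you need the data at a known host path, named volumes when you want Docker to manage the storage location and lifecycle.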
https://redd.it/13mzo0z
@r_devops
What's the point of defining the URL of an environment in GitHub Actions?
jobs:
  myjob:
    runs-on: ubuntu-latest
    environment:
      name: test
      url: ${{ steps.test.outputs.url }}
    steps:
      - name: Test
        id: test
        run: echo "url=https://reddit.com" >> $GITHUB_OUTPUT
If the environment must already have been chosen in order to run the Test step, what's the point of setting the URL this way?
I've seen many examples doing this in one way or another. For example, https://github.com/actions/starter-workflows/blob/main/deployments/azure-webapps-python.yml.
https://redd.it/13nbdxg
@r_devops
What interview questions trip or expose most or a significant number of candidates
If you're an employer, or have been on the question-asking side of interviews: what questions trip up a lot of candidates? What questions do you throw out expecting a high chance of failure, or expecting the candidate to have to think a decent bit about the answer? And if you're a candidate, what questions like these have you seen?
https://redd.it/13ncs07
@r_devops
Help, new on the area
Hello people! I want to get to know the DevOps area.
Where should I go FIRST to learn best? What should I look for?
Many thanks already!
https://redd.it/13ne5vm
@r_devops
Installing stuff on ARM architectures.
Last year, while interning at a company, I had to set up a Jetson computer, which has an ARM architecture.
I faced a lot of issues with libraries/packages/software that would easily install and run on a PC running Ubuntu. At my internship we had a DevOps engineer who resolved the issue for me, but lately I have been getting a lot of interview questions about how I would solve that architecture problem.
How do I get stuff to work on multiple architectures?
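The usual interview answer today is multi-arch container images: docker buildx with QEMU emulation builds one image manifest covering amd64 and arm64 (roughly `docker buildx build --platform linux/amd64,linux/arm64 -t myorg/app --push .`). On the Dockerfile side, a sketch like this keeps one file working for both; the base image is an example:

```dockerfile
# One Dockerfile for both architectures: buildx sets TARGETARCH
# automatically for each platform being built.
FROM python:3.11-slim
ARG TARGETARCH
RUN echo "building for ${TARGETARCH}"
COPY . /app
CMD ["python", "/app/main.py"]
```

For a Jetson specifically, the extra wrinkle is that GPU libraries are tied to NVIDIA's own ARM base images, so prefer NVIDIA's L4T images over generic ones there.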
https://redd.it/13nd6xv
@r_devops
Ubuntu cloud-config / autoinstall - Any way of making it more modular?
Hi All
I suspect the answer to this will be "no", but is there any native way (or established 3rd-party way) of splitting the "autoinstall" config into separate files, so it can be made more modular? i.e.
* storage.yaml
* ssh.yaml
* late-commands.yaml
The alternative is, just write a bash script that pulls these together, and spits out the merged file.
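That bash script can be tiny. A sketch, assuming each fragment file holds distinct top-level autoinstall keys (storage:, ssh:, late-commands:), so indenting and concatenating them under autoinstall: yields valid YAML:

```shell
# merge_autoinstall (sketch): print a merged user-data document from
# per-topic fragment files passed as arguments.
merge_autoinstall() {
  echo '#cloud-config'
  echo 'autoinstall:'
  echo '  version: 1'
  for f in "$@"; do
    sed 's/^/  /' "$f"   # indent the fragment under autoinstall:
  done
}

# usage: merge_autoinstall storage.yaml ssh.yaml late-commands.yaml > user-data
```

If fragments ever need to share a top-level key, simple concatenation stops being valid and a real YAML merge (e.g. via yq) is the safer route.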
Thanks
https://redd.it/13ncmrw
@r_devops
How do you rotate 3rd-party API keys?
We are using AWS Secrets Manager to store our API keys, which include some 3rd-party ones.
I want to rotate those automatically, how do you do that in your company?
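For context on what such automation has to do: Secrets Manager's built-in rotation invokes a Lambda, and for a 3rd-party key that Lambda must call the vendor's API to mint the replacement. The control flow can be sketched like this; the vendor client and its methods are hypothetical, and the Secrets Manager client is injected so the flow is testable:

```python
import json

def rotate_third_party_key(secrets_client, vendor_client, secret_id):
    """Sketch of one rotation pass: mint a new key at the vendor,
    store it in Secrets Manager, then revoke the old one.
    secrets_client mimics the boto3 Secrets Manager client;
    vendor_client is a hypothetical 3rd-party API wrapper."""
    old = json.loads(
        secrets_client.get_secret_value(SecretId=secret_id)["SecretString"]
    )
    new_key = vendor_client.create_api_key()          # hypothetical vendor call
    secrets_client.put_secret_value(
        SecretId=secret_id,
        SecretString=json.dumps({"api_key": new_key}),
    )
    vendor_client.revoke_api_key(old["api_key"])      # hypothetical vendor call
    return new_key
```

The ordering matters: store the new key before revoking the old one, so consumers never see a window with no valid key. Vendors without a key-management API are the real blocker; for those, rotation stays manual.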
https://redd.it/13nke0n
@r_devops
Can and should I create a reusable workflow that automatically creates the AWS role and trust policies needed to setup OpenID connect in AWS for CICD?
At work we have a DevOps team that has created a reusable workflow that automatically creates the AWS role and trust policies needed to setup OpenID connect in AWS for CICD.
I'm more fullstack than DevOps, but I'm working on a personal project, and I'm trying to replicate something similar on my own. I get the basic principle how authentication happens, and I can technically follow through this guide to set it up myself. However, I would love to automate this, so that I can easily run it once per repo in a consistent way.
The way the work reusable workflow functions is: you create a config file containing all the permissions needed in the role to be created, then you execute the GitHub action. A hyperlink is then displayed in the Actions terminal that takes you to the AWS identity federation page, where you sign in and authorize the action to your account (the action is then federated to your user). The role and trust policy are then created automatically between the AWS account you authenticated to and the GitHub repo you ran the action from.
My one thought is that I'm not actually able to create this same UX because I don't federate authentication with any external identity provider on my personal AWS account, I rely solely on IAM.
Is it better if I just manually create each role / trust policy?
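If you do go the manual route, the per-repo part is mostly one trust policy. A sketch that prints it; the account ID and repo are placeholders, and the condition keys are the standard GitHub OIDC claims:

```shell
# make_trust_policy (sketch): emit the assume-role trust policy for a
# GitHub repo, assuming the token.actions.githubusercontent.com OIDC
# provider was already created once in the account.
make_trust_policy() {
  account_id=$1
  repo=$2
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::${account_id}:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"},
      "StringLike": {"token.actions.githubusercontent.com:sub": "repo:${repo}:*"}
    }
  }]
}
EOF
}

# usage:
#   make_trust_policy 123456789012 myuser/myrepo > trust.json
#   aws iam create-role --role-name gh-deploy \
#     --assume-role-policy-document file://trust.json
```

Scripting this per repo gets you the consistency of the work workflow without needing any identity federation on the personal account; only the permissions policy attachment varies per project.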
https://redd.it/13nbnhf
@r_devops
Shifting from Analytics to DevOps
Hello, just wanted to get some advice from all of you. I am currently working in analytics and planning to shift to a DevOps Engineering role. I've read the "Getting into DevOps" page which provided a lot of useful info. I know that this will take a while and I'm looking forward to building the skills needed. I'm actually learning AWS now with plans on taking the AWS Cloud Practitioner certification as a start.
Since my background is in analytics, I don't have any experience in deploying any apps (I'm familiar with git usage though). Is it better if I take the software dev path before going to DevOps or can I go to a DevOps role directly? I know there are several paths that one can take but just wanted to know the most optimal way to start.
Thank you all in advance!
https://redd.it/13npcub
@r_devops
Industrial engineering and Devops
Hello!
I am an industrial engineering student and I was offered the opportunity to attend a program focused on DevOps. I noticed that some DevOps concepts are somewhat related to concepts like Lean and continuous improvement.
Do you think it would be beneficial for me to enroll if I want to pursue a career in industrial engineering?
Thanks in advance!
https://redd.it/13npvs2
@r_devops
Anyone know of an umbrella CLI for multiple cloud providers?
Does anyone know of an umbrella CLI tool that lets me manage machines at different providers? I work with various providers because of my clients. It would be nice if the basics were covered, like creating machines and DNS settings. I wouldn't mind if it shelled out to the underlying per-provider CLIs when things get more complex.
https://redd.it/13nr8e1
@r_devops
How do you learn the implementation of mTLS? For example you have two flask microservices
How do you implement mTLS, to understand it further?
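One way to get hands-on: give each Flask service its own certificate and make each side verify the other. The server-side core of mTLS is an SSLContext that *requires* a client certificate; a sketch, with the certificate file paths as placeholders:

```python
import ssl

def make_mtls_server_context(ca_file=None):
    """Build a server-side TLS context that demands a client
    certificate signed by our CA -- the 'm' (mutual) in mTLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED      # reject clients without a cert
    if ca_file:                              # CA that signed the client certs
        ctx.load_verify_locations(cafile=ca_file)
    # A real service would also load its own keypair here:
    # ctx.load_cert_chain("server.crt", "server.key")
    return ctx

# A Flask app can be served with this context, e.g.:
#   app.run(ssl_context=make_mtls_server_context("ca.crt"))
# and the calling service presents its own cert, e.g. with requests:
#   requests.get(url, cert=("client.crt", "client.key"), verify="ca.crt")
```

A good exercise is generating a small CA plus one keypair per service with openssl, then watching the handshake fail when the client cert is omitted.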
https://redd.it/13nqfap
@r_devops
TeamCity Build Dependencies
Hello everyone,
I'm working on a project where I need to programmatically trigger a new build in TeamCity and use an existing, specific build as an artifact dependency. I'm currently using the REST API to interact with TeamCity.
From what I understand, TeamCity typically retrieves artifacts based on the artifact dependency settings in the build configuration (e.g., last successful build, last pinned build, build with a specific build number, etc). However, I'm looking to dynamically set a specific build as an artifact dependency when triggering a new build via the API.
Is there a way to do this directly through the REST API? Or is there another workaround to achieve this functionality?
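For what it's worth, the build-queue endpoint accepts an XML <build> element, and (if memory serves; verify against your TeamCity version's REST API docs) that element can carry an artifact-dependencies override naming the exact build to take artifacts from. A sketch of the payload, built with the stdlib; the IDs are placeholders:

```python
import xml.etree.ElementTree as ET

def build_trigger_payload(build_type_id, dep_build_id):
    """Construct the XML body for POST /app/rest/buildQueue that pins
    the artifact dependency to one specific build. Whether the
    artifact-dependencies override is honored depends on the TeamCity
    version -- check its REST API documentation."""
    build = ET.Element("build")
    ET.SubElement(build, "buildType", id=build_type_id)
    deps = ET.SubElement(build, "artifact-dependencies")
    ET.SubElement(deps, "build", id=str(dep_build_id))
    return ET.tostring(build, encoding="unicode")
```

The fallback workaround, if the override is not supported, is pinning or tagging the desired build via the REST API first and pointing the artifact dependency rule at the pinned/tagged build.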
https://redd.it/13nuypk
@r_devops