Cloud vs Self-Hosted Logging
I'm working on a personal project (SaaS, not launched yet) and need to set up logging.
I'm considering two options:
1. Self-hosting a logging stack like ELK or EFK
2. A free/low-cost cloud-based logging service. I've seen that New Relic has a free tier with a 100 GB/month ingest limit, which seems promising. I'm open to other alternatives as well (I haven't done much research here).
What would you recommend and why?
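For comparison with a full ELK/EFK deployment, a lighter self-hosted option is a Grafana Loki stack. A minimal sketch as a compose file — the image tags, ports, and service names here are illustrative assumptions, not a vetted production config:

```shell
# Write a minimal docker-compose file for a Loki + Grafana logging stack.
# Image tags and ports are illustrative assumptions, not recommendations.
cat > docker-compose.logging.yml <<'EOF'
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:10.4.0
    ports:
      - "3000:3000"
    depends_on:
      - loki
EOF

# Bringing it up requires Docker with the compose plugin:
# docker compose -f docker-compose.logging.yml up -d
echo "wrote docker-compose.logging.yml"
```

A stack like this trades ELK's full-text indexing for much lower resource usage, which tends to matter on a pre-launch budget.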
https://redd.it/1k61r56
@r_devops
Built a Custom Kubernetes Operator to Deploy a Simple Resume Web Server Using CRDs
Hey folks,
This is my small attempt at learning how to build a custom Kubernetes operator using Kubebuilder. In this project, I created a custom resource called Resume, where you can define experiences, projects, and more. The operator watches this resource and automatically builds a resume website based on the provided data.
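For readers unfamiliar with CRDs, declaring such a resource looks roughly like the following — the API group, version, and field names here are illustrative guesses, not the actual schema from the repo:

```shell
# Hypothetical example of a Resume custom resource. The apiVersion group and
# the spec field names are guesses for illustration, not the repo's schema.
cat > resume-sample.yaml <<'EOF'
apiVersion: example.com/v1
kind: Resume
metadata:
  name: my-resume
spec:
  name: Jane Doe
  experiences:
    - company: Acme Corp
      role: Platform Engineer
  projects:
    - name: resume-operator
      description: Kubernetes operator that renders a resume site
EOF

# Applying it would be the usual:
# kubectl apply -f resume-sample.yaml
echo "wrote resume-sample.yaml"
```

The operator's reconcile loop then watches objects of this kind and builds the website from the spec.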
https://github.com/JOSHUAJEBARAJ/resume-operator/tree/main
https://redd.it/1k62tgz
@r_devops
There is a possibility that my org may implement DevOps practices…
Hey all!
I made a post here the other day asking about Terraform and CaC tools.
I was given great advice and useful information.
I wanted to reach out and actually provide an update regarding a possible opportunity and possible changes.
The org I work for is a global enterprise. We are a Windows/Azure org. Our infrastructure is on-premises and in the cloud. I believe we recently moved away from physical servers and now host them as Azure VMs. Not sure if they use Linux or Windows servers though. I’m not that informed.
A year ago, I reached out to the cloud operations lead for the Americas (CAN, USA, LATAM). He told me to study Azure and that I might be able to join the team someday. Well, I studied, but they ended up hiring someone a bit more experienced. I cannot say I blame them; they were building up that team and needed more experienced people. Instead of holding a grudge, I reached out to the new hire and learned a lot from him. He actually falls under my region of support, so it’s normal that we communicate. Anyway, I eventually asked him about infrastructure as code, how much we used it, and what tools we used. Currently, the team doesn’t practice DevOps methodology, so he didn’t speak much about it. Instead, he referred me to the cloud operations lead. I reached out to the lead this morning and asked him if they were going to hire people once the hiring freeze was over. To my surprise, they are going to hire some people for junior opportunities. This time, though, his advice on what to learn was a bit different than before. He advised that I study IaC (Azure-native tools such as Bicep and ARM) and CI/CD pipelines. It seems that my company may start practicing DevOps. Or at least, that is my takeaway.
I’m not sure how much time I have but I was able to get a voucher from MS. AZ-204 is one of the exams I can take for free using this voucher. I’m going to study this and then study AZ-104.
Wish me luck all! This may be my way in! I’m hopeful and excited!
https://redd.it/1k649ri
@r_devops
Devops/SRE AI agents
Has anyone successfully integrated any AI agents or models into their workflows or processes? I am thinking of anything from deployment augmentation with AI to incident management.
-JS
https://redd.it/1k69d11
@r_devops
Bad interview asking for reference from 10 years ago
I just wrapped up an interview. It started out well until the interviewer asked if I could provide references for two of the companies that I worked for in the past. One of those companies was from over 10 years ago, so I politely asked him if he meant another company with a similar name. He said no, he meant the company from 10 years ago. At this point I have a confused look on my face, and before I could even tell him that I could provide a reference from that company (even though I thought it was strange given the time and that it wasn't a DevOps role), he goes 'Yeah, the company's on your resume, isn't it? You worked there, didn't you?'. At this point I'm all sorts of confused and flustered. I tell him yes, I did work at the company, and before I can say anything else he says 'you don't keep in touch with people'. I tried to explain that I haven't really kept in touch with anybody from my time there and that I've been out of the local market for a while (I don't know why I mentioned that, and I regret it now), but that I could provide my manager's information. He then goes on to ask me what's wrong with the local market, and as I'm answering his question and talking about how bad the local market is, I'm thinking: why am I even talking about this right now? We end up moving on to technical questions, things like 'how does DNS work?', 'how does a CDN work?', 'how does Terraform work?', etc., but at this point I'm so flustered and confused about our 10-year-old reference argument that I struggled to answer these basic questions. I honestly don't even understand how a reference from 10-plus years ago and a different role would even be helpful. People change a lot in 10 years, and most people don't clearly remember 10 years ago.
Has anyone else been asked for a reference from 10-plus years ago?
https://redd.it/1k6ai4w
@r_devops
Managing Deployments of gitrepos to servers
I am slowly getting into DevOps; however, with the plethora of tools that all seem to market themselves as the solution for everything, it's pretty hard to figure out which is the right way to go. I hope this subreddit's experience can guide me in the right direction.
I am managing a variety of services for multiple clients. Each client has one or more VPS instances containing multiple services, all running as a Docker Compose project. Each service has its own Git repo; some are client-specific (websites) and some are general and reusable (reverse proxies, Paperless, etc.).
I'm now trying to figure out what the best way to approach deployments and updates would be.
My ideal scenario would be a tool which would allow me to:
- Configure which repo (and version) should deploy to which server.
- Execute a workflow/push the repo over SSH, with credentials pulled from a secrets manager.
- Monitor whether it is successful or not.
My only requirement is to self-host it.
Would Gitea or Jenkins be the best way to approach this? Thanks for any insights.
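Whichever tool ends up driving it, the per-server deploy step is usually small. A sketch of that step — the hostname, path, and Git ref are placeholders, and a `DRY_RUN=1` guard prints the command instead of running it:

```shell
# Sketch of a per-server deploy step: push a tagged repo state to a host over
# SSH and restart its compose project. Host, path, and ref are placeholders.
set -euo pipefail

run() {
  # With DRY_RUN=1 the command is printed instead of executed.
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

deploy() {
  local host="$1" path="$2" ref="$3"
  run ssh "$host" "cd $path && git fetch --all && git checkout $ref && docker compose up -d --build"
}

out="$(DRY_RUN=1 deploy client1.example.com /srv/website v1.4.2)"
echo "$out"
```

A CI tool (Gitea Actions, Jenkins, etc.) then mostly contributes the trigger, the secrets injection, and the success/failure reporting around this step.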
https://redd.it/1k6c58t
@r_devops
Is DevOps a relatively hard field to get into as a new grad?
How did you get your first DevOps job?
https://redd.it/1k6bwvh
@r_devops
Can’t get UTM data from HTML forms
I'm creating an HTML form to embed in Framer (to get around the limitations Framer places on form response submissions). I've already managed to create the forms and send the information to my webhook.
The only problem is that I can't capture the page's UTM parameters via this form... Is this even the best solution? Has anyone who knows Framer experienced this?
https://redd.it/1k6artb
@r_devops
Have you built QA/Testing pipelines?
In my experience I've built CI/CD pipelines for Dev, Staging, and Prod environments, but I never really built a pipeline that did automated testing. It makes sense not to have it in the prod pipeline. But I'm curious whether you guys have built such pipelines. If yes, what can you share about it? How did it integrate with your CI/CD overall?
Edit: I only have 1.5 years of experience in DevOps, and it was my first full-time job.
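For what it's worth, the common shape is a gate between build and deploy: build the image, run the suite inside it, and only push on success. The image name and test command below are placeholder assumptions, so the commands are printed rather than executed:

```shell
# Shape of a test gate in a pipeline: with `set -e` semantics, the push is
# only reached if the test run exits zero. Names here are placeholders.
img="myapp:candidate"
cmds="$(
  echo docker build -t "$img" .
  echo docker run --rm "$img" pytest
  echo "docker push $img  # only reached if the tests passed"
)"
echo "$cmds"
```

The same three-step gate slots into any CI system; staging-environment smoke tests are typically a second gate after deploy rather than part of this one.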
https://redd.it/1k6ijz2
@r_devops
Tired of setting up the same pipelines? I'm building a CLI that deploys projects with natural language.
Starting a new service usually means hours of boilerplate: creating GitHub repos, setting up tests, Docker images, CD pipelines… What if you could just describe what you want?
I’m building 88tool, a terminal CLI that uses AI agents and LangChain to plan and execute full deployment pipelines.
It supports Go, Python, Java, etc., and connects to GitHub, AWS, Vercel, and more.
It’s not just generating code — it runs it.
Would love to hear from fellow devs who struggle with CI/CD fatigue.
https://datatricks.medium.com/building-in-public-from-terminal-to-deployment-with-ai-driven-ci-cd-fca220a63c58
https://redd.it/1k6kflk
@r_devops
pfsense ipsec tunnel aws issue
I know I can connect two VPCs via VPC peering or a transit gateway, but I need to get myself familiar with pfSense.
Current setup.
vpc1 (172.31.0.0/16)
pfsense1 (172.31.0.100) with public ip address
test1-ec2(172.31.0.101) no public ip address
vpc2(10.0.0.0/16)
pfsense2 (10.0.0.100) with public ip address
test2-ec2(10.0.0.101) no public ip address
1. Set up an IPsec tunnel (IKEv1) between the two pfSense instances. Both phase 1 and phase 2 connections establish.
2. Both pfSense instances can ping each other (ICMP) from their private IP addresses. So 172.31.0.100 can ping 10.0.0.100 without problem.
3. The route table attached to the subnet on vpc1 routes traffic for 10.0.0.0/16 to the pfsense1 ENI, while the vpc2 route table routes traffic for 172.31.0.0/16 to the pfsense2 ENI.
4. Configured Firewall -> Rules -> IPsec to have source and destination respectively. So for pfsense1, source is 172.31.0.0/16 to destination 10.0.0.0/16, all ports and gateways. Vice versa for pfsense2.
5. Firewall -> NAT -> Outbound set to automatic outbound NAT rule generation (IPsec passthrough included).
6. The security groups attached to both EC2 instances have ICMP enabled from 0.0.0.0/0.
However, test1-ec2 cannot ping test2-ec2 or pfsense2, and vice versa; `traceroute` gives me nothing but `* * *`.
What am I missing here?
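One AWS-specific gap that matches these symptoms (tunnel up, the gateways can ping each other, but instances behind them can't) is EC2's source/destination check: it drops forwarded packets unless it is disabled on the forwarding instance's ENIs. The ENI IDs below are placeholders, so the commands are printed rather than executed:

```shell
# EC2 drops packets an instance forwards on behalf of other hosts unless the
# source/dest check is disabled on its ENIs (required for VPN/NAT instances).
# The ENI IDs are placeholders for the two pfSense interfaces.
cmds="$(
  for eni in eni-0aaa1111 eni-0bbb2222; do
    echo aws ec2 modify-network-interface-attribute \
      --network-interface-id "$eni" --no-source-dest-check
  done
)"
echo "$cmds"
```

The same toggle is available in the console under the instance's networking settings ("Change source/destination check").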
https://redd.it/1k6k5vg
@r_devops
How do you learn a new setup and then impart the knowledge to others on the team?
This is a slightly different kind of question.
We're using EKS with KEDA to run agents in our Azure DevOps pipelines. This entire setup is deployed using Azure DevOps pipelines (executed via Azure agents) along with Helm, ArgoCD, and Terragrunt.
The challenge is that this setup and pipeline were created by someone who is no longer part of the team. I’ve now been assigned the task of understanding how everything works and then sharing that knowledge with the rest of the team. We have created a user story for this task :D
The issue is that none of us has much experience with Kubernetes, Helm, ArgoCD, or Terragrunt. So my question is: how would you approach a situation like this? If someone could break down their process for handling such scenarios, that would be really helpful.
My main concern is figuring out the most effective and efficient way to learn the setup on my own and then transfer the knowledge to my teammates once I’ve understood the setup myself.
Thanks
https://redd.it/1k6ozjy
@r_devops
How to pass env variables to a Docker container when using GitHub Actions
How do I pass env variables to a Docker container when using GitHub Actions to build the image and run the container on a Linux virtual machine?
Currently I am doing this:

docker run -d --name movieapiapp_container \
  -p 6000:80 \
  -e ConnectionStrings__DefaultConnection="${{ secrets.DB_CONNECTION_STRING }}" \
  -e Jwt__Key="${{ secrets.JWT_SECRET_KEY }}" \
  -e Jwt__Issuer="web.url" \
  -e Jwt__Audience="web.url" \
  -e ApiKeyOmDb="${{ secrets.OMDB_API_KEY }}" \
  -e GEMINI_API_KEY="${{ secrets.GEMINI_API_KEY }}" \
  -e Google__Client_Id="${{ secrets.GOOGLE_CLIENT_ID }}" \
  -e Google__Client_Secret="${{ secrets.GOOGLE_CLIENT_SECRET }}" \
  -e ASPNETCORE_URLS=https://+:80 \
  ...

Is this correct, or is there a better way to pass these env variables?
https://redd.it/1k6q5m3
@r_devops
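An alternative worth noting: write the values to an env file in the workflow step and pass `--env-file`, which keeps the `docker run` line short and avoids quoting pitfalls. The values below are dummies standing in for the `${{ secrets.* }}` expressions; in a real workflow the file should be removed after the container starts and must never be committed:

```shell
# Write runtime configuration to an env file. Values are dummies here; in the
# workflow they would come from ${{ secrets.* }} expressions.
cat > app.env <<'EOF'
ConnectionStrings__DefaultConnection=dummy-connection-string
Jwt__Key=dummy-jwt-key
ASPNETCORE_URLS=https://+:80
EOF

# The run command then shrinks to:
# docker run -d --name movieapiapp_container -p 6000:80 --env-file app.env <image>
echo "wrote app.env"
```

Note that `--env-file` takes plain KEY=VALUE lines with no shell quoting or expansion, so values are passed through literally.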
First AWS cert to go for?
I’m a software development engineer with 3 years of backend experience and I’m looking to transition into cloud computing, specifically with AWS. Which AWS certification would be the most suitable to start with?
https://redd.it/1k6t5q4
@r_devops
What happened to the DevOps Paradox podcast?
The DevOps Paradox podcast is my favorite and they haven't done a show since February.
Does anyone know why??
https://redd.it/1k6ujiv
@r_devops
Exploring Serverless Stack Architecture – How Do You Manage Environments & Security?
Hey folks,
I’m experimenting with a serverless stack on AWS using S3 + CloudFront for static hosting, API Gateway + Lambda for backend, DynamoDB for data, and Cognito for auth.
It’s been great for learning, and I’m thinking ahead about how to scale and manage this more professionally.
Curious to hear from others:
* How do you structure environments (dev/staging/prod)? Separate accounts, or manage via IaC/tagging?
* Best practices for securing this kind of stack — IAM roles, access boundaries, etc.?
* Any underrated tools or AWS services that help you keep things maintainable and cost-effective?
Appreciate any insight — always looking to learn from real-world setups. Happy to share my setup later once it’s more polished.
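On the environments question, a common pattern is one AWS account (or at minimum one CLI profile) per environment, so the deploy command stays identical and only the profile and stack suffix change. The profile, stack, and template names below are placeholders, so the commands are printed rather than executed:

```shell
# One account/profile per environment: the loop body never changes, only the
# environment suffix. Profile, stack, and template names are placeholders.
cmds="$(
  for env in dev staging prod; do
    echo aws cloudformation deploy \
      --profile "myapp-$env" \
      --stack-name "myapp-$env" \
      --template-file template.yaml
  done
)"
echo "$cmds"
```

Separate accounts also give you hard IAM and billing boundaries between environments, which tagging alone cannot enforce.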
https://redd.it/1k6sux8
@r_devops
Best Practices for Horizontally Scaling a Dockerized Backend on a VM
I need advice on scaling a Dockerized backend application hosted on a Google Compute Engine (GCE) VM.
# Current Setup:
* Backend runs in Docker containers on a single GCE VM.
* Nginx is installed on the **same VM** to route requests to the backend.
* Monitoring via Prometheus/Grafana shows backend CPU usage spiking to **200%**, indicating severe resource contention.
# Proposed Solution and Questions:
1. **Horizontal Scaling Within the Same VM**:
* Is adding more backend containers to the same VM a viable approach? Since the VM’s CPU is already saturated, won’t this exacerbate resource contention?
* If traffic grows further, would scaling require adding more VMs regardless?
2. **Nginx Placement**:
* Should Nginx be decoupled from the backend VM to avoid resource competition (e.g., moving it to a dedicated VM or managed load balancer)?
3. **Alternative Strategies**:
* How would you architect this system for scalability?
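On question 2, decoupling Nginx mostly means pointing an upstream block at the backend hosts from a dedicated proxy VM or load balancer. A sketch of that config — the hostnames and ports are placeholders:

```shell
# Sketch of an nginx config for a dedicated proxy VM load-balancing across
# backend VMs. Hostnames and ports are placeholders.
cat > backend-proxy.conf <<'EOF'
upstream backend {
    least_conn;
    server backend-vm-1:8080;
    server backend-vm-2:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
EOF
echo "wrote backend-proxy.conf"
```

With the proxy off the saturated VM, horizontal scaling becomes adding backend VMs and appending `server` lines (or switching to a managed load balancer with an instance group).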
https://redd.it/1k6x7tp
@r_devops
Procore Technologies
I have cleared my rounds at Procore Technologies. If any of you are working at the company or have worked there previously, please let me know about the work culture.
https://redd.it/1k6x1r8
@r_devops
Manager said “that doesn’t make any sense!”
…to which I reply: “well neither does me driving into the office every day to do a job I can literally do from anywhere with an Internet connection but here I am”
https://redd.it/1k70np7
@r_devops
Have only worked with Jenkins, Git, Docker, and Linux as a DevOps engineer – what skills should I learn to get hired? Can't find jobs on Naukri for this
I've worked in DevOps using Jenkins, Git, Docker, and Linux, but on job portals like LinkedIn and Naukri I am not seeing job openings that match just these skills.
What should I focus on learning next to actually get hired?
https://redd.it/1k70qjb
@r_devops
I’ve worked in DevOps using these: Jenkins, Git, and Linux, but in Job Portals like Linkedin, Naukri I am not seeing job openings that match just these skills.
What should I focus on learning next to actually get hired?
https://redd.it/1k70qjb
@r_devops
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
Simplecontainer.io
In the past few months, I've been developing an orchestration platform to improve the experience of managing Docker deployments on VMs. It operates atop the container engine and takes over orchestration. It supports GitOps and plain old apply. The engine is open source.
Apart from the terminal CLI, I've also created a sleek UI dashboard to further ease the management. Dashboard is available as an app https://app.simplecontainer.io and can be used as it is. It is also possible to deploy the dashboard on-premises.
The dashboard can be a central platform to manage operations for multiple projects. Contexts are a way to authenticate against the simplecontainer node and can be shared with other users via organizations. The manager could choose which context is shared with which organization.
On the security side, the dashboard acts as a proxy, and no access information is persisted in the app. mTLS and TLS are used everywhere.
Demos on how to use the platform + dashboard can be found at:
- https://app.simplecontainer.io/demos/gitops
- https://app.simplecontainer.io/demos/declarative
Photos of the container and GitOps dashboards are attached. Currently it is alpha, and sign-ups will open soon. I'm interested in what you guys think, and if someone wants to try it out, you can hit me up in DM for more info.
https://redd.it/1k72nb3
@r_devops