Anyone else sitting here waiting for Azure to come back up?
It's been hours now. We're currently trying to move 25 TB of data from one cloud host to another while hoping Azure Central US comes back up.
https://redd.it/1e6qe2o
@r_devops
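For a move of that size, a dedicated transfer tool is usually faster and more resumable than ad-hoc scripts. A minimal sketch using azcopy, assuming Azure Blob Storage is on at least one side; the account names and SAS tokens below are placeholders, not real endpoints:

```shell
# Bulk copy between object stores with azcopy (parallel, resumable).
# Source/destination URLs are placeholders -- substitute real account,
# container, and SAS token values before running.
SRC='https://sourceacct.blob.core.windows.net/data?<sas-token>'
DST='https://destacct.blob.core.windows.net/data?<sas-token>'

# Compose the command first so it can be reviewed before running:
CMD="azcopy copy '$SRC' '$DST' --recursive --overwrite=ifSourceNewer"
echo "$CMD"
# eval "$CMD"   # uncomment to actually start the transfer
```

azcopy checkpoints progress, so an interrupted 25 TB transfer can resume rather than restart.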
Developing a VS Code extension in 5 minutes
Ever wonder how a VS Code extension works?
Here is a code-along, 5-minute speed run developing a VS Code extension for a CI tool
https://github.com/brisktest/brisk-extension/blob/main/SPEEDRUN.md
https://redd.it/1e6vbh3
@r_devops
How Do You Automate Your Status Pages?
Hi r/devops community,
I'm looking for advice and best practices on automating status pages for monitoring service health and notifying users of outages or performance issues. Specifically, I'm considering using Instatus to create and manage our status page.
Here's a bit of background:
I'm running multiple Kubernetes services, and I want each service to have its own component on the status page.
The goal is to automate the process of updating the status (Operational, Partial Outage, Major Outage, Degraded Performance, Under Maintenance) for each service.
Before I dive into implementing anything, I wanted to ask:
1. How do you automate your status pages?
2. What tools and processes do you use?
3. Any tips or best practices for integrating Kubernetes with a status page tool like Instatus?
I appreciate any insights or feedback!
https://redd.it/1e6xekh
@r_devops
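One common pattern for the automation in question 1 is a small watcher that maps replica health to a status and pushes it to the status page's API. A rough sketch, assuming Instatus's REST API shape (`PUT /v1/:page_id/components/:component_id`); the IDs, token, and exact status strings are assumptions and should be checked against their API docs:

```shell
# Map Kubernetes replica health to a status-page status (pure logic,
# so it can be tested without a cluster).
status_for() {   # args: ready_replicas desired_replicas
  local ready=$1 desired=$2
  if [ "$ready" -eq "$desired" ]; then echo OPERATIONAL
  elif [ "$ready" -eq 0 ];        then echo MAJOROUTAGE
  else                                 echo PARTIALOUTAGE
  fi
}

STATUS=$(status_for 2 3)
echo "$STATUS"

# Guarded API call -- fill in real IDs/token before enabling:
# curl -s -X PUT "https://api.instatus.com/v1/$PAGE_ID/components/$COMPONENT_ID" \
#   -H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json" \
#   -d "{\"status\": \"$STATUS\"}"
```

In practice the ready/desired numbers would come from `kubectl get deploy -o json` or a readiness probe, run on a short cron or as a sidecar.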
Slow rendering some website pages and images
Hi r/devops community,
I run a blog called thenextscoop, and I'm facing issues with some pages and images rendering very slowly; some of them even fail to load. Is there a solution that lets me check the website's health, latency, and uptime? Earlier I used a few Chrome browser extensions, but they didn't give accurate data in real time.
I'd be happy if anyone in the community could help me here. I appreciate it in advance.
https://redd.it/1e6y15h
@r_devops
Cleaning up Docker's overlay2 directory
Hello,
Is there any safe way to clean overlay2?
It's a Label Studio Docker image running with this command:
docker run -d -it -e LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true -p 8080:8080 -v $(pwd)/mydata:/label-studio/data heartexlabs/label-studio:latest
Please find more information here:
root@vps-8b5453ed:/var/lib/docker/overlay2# du -sh * | sort -rh | head -n 5
129G 2cba2b496509f78e63274c6b9bcff18eb43c0fbe55c06ecfc684a1f883902aa3
836M ad32b03c1cae41ae7d2efc8973e5636c06602f78bc9af14b77a620965e914854
589M 10a810c769edf7e59c57d58cc693ac845fbf36ece72e46f55bcc8a9d07169b27
387M 74c3016cbdaad24ba3b5a58bb15ceba3e1130755dc8f17c343d0e6ba8a903637
156M m8yt2pa29iegnnlkp7zewi17g
root@vps-8b5453ed:/var/lib/docker/overlay2#
https://redd.it/1e6z840
@r_devops
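A general rule: never delete overlay2 directories by hand — they are layer storage that Docker tracks internally, and removing them directly corrupts the graph driver's state. Let Docker reclaim the space instead. A sketch of the usual safe route (the prune commands are standard Docker CLI; review what each will delete before running):

```shell
# Safe cleanup: ask Docker to reclaim space instead of rm-ing overlay2
# dirs. Collected as a dry run -- review, then run: echo "$CLEANUP" | sh
CLEANUP='docker container prune -f
docker image prune -a -f
docker builder prune -f
docker volume prune -f'   # volume prune removes UNUSED volumes -- check first!
echo "$CLEANUP"

# To find which container owns a huge layer dir (e.g. the 129G one above):
# docker ps -aq | xargs docker inspect \
#   --format '{{.Name}} {{.GraphDriver.Data.MergedDir}}' | grep 2cba2b49
```

If the 129G directory turns out to be the running container's writable layer, the space is being written inside the container filesystem; moving that data onto the bind-mounted `mydata` volume (or recreating the container) is what actually frees it.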
Struggling with google cloud storage
I'm using Google Cloud Storage to update a CSV file from my website. When I manually check it, the CSV file does get updated, but when I read the file, it just doesn't include my last entry. I think Google applies some caching, but I want to know for certain why this is happening and how I can get around it.
https://redd.it/1e7112m
@r_devops
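This is almost certainly object caching: publicly readable GCS objects are served with a default `Cache-Control: public, max-age=3600`, so readers can get a copy up to an hour old after an update. Setting the object's Cache-Control metadata disables that. A sketch with `gsutil setmeta`; the bucket and object names are placeholders:

```shell
# Disable caching on a frequently updated object so every read is fresh.
# Bucket and object names are placeholders.
BUCKET='my-bucket'
OBJECT='data/entries.csv'

# Compose the command for review before applying it:
CMD="gsutil setmeta -h 'Cache-Control:no-cache' gs://$BUCKET/$OBJECT"
echo "$CMD"
# eval "$CMD"   # uncomment to apply to the real object
```

The metadata sticks to the object, so set it once; subsequent uploads from the website may need to set the same header again.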
Hey, what should I learn in Linux?
Hi everyone, I want to learn Linux for DevOps but have no clue what to actually learn. I went to roadmap.sh and the Linux path was super overwhelming. Can anyone tell me what to actually focus on?
https://redd.it/1e6zjfg
@r_devops
Industry Trends | DevOps , Cloud
Hi all, what do you think are the most important trends shaping the future of DevOps and cloud computing? How are you preparing for them?
Because I think the DevOps and cloud field is getting saturated, and the current skill set isn't enough to survive as a DevOps engineer. (Let me know if you think that's a wrong assumption.)
https://redd.it/1e73ng9
@r_devops
Need expert advice
I'm a DevOps intern who has been assigned tasks. My first task is interesting to me, but I'm on a week's holiday, and before going back and asking my mentor or teammates, I thought I'd feed my brain something.
The task is to create a one-stop access solution: we have developers who, once they get their role (say frontend, backend, or anything like that), need default access based on that role.
I need to give such users access to certain software such as Jenkins, DB access through a bastion, OpenSearch, etc.
I was asked to make a sort of UI where, once a user is added, I can tick checkboxes to grant and remove access, set time limits for access, and so on.
Any insights will be useful. I was told to look at AWS SSM so that the logs of the actions could be clearer and more concise.
https://redd.it/1e746f0
@r_devops
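On the AWS SSM pointer: the usual reason it comes up for access tooling is Session Manager, which replaces direct SSH with brokered sessions that are logged and auditable. A minimal sketch of what granted "DB access through a bastion" could look like; the instance ID and port numbers are placeholders:

```shell
# Start an audited session to a bastion via SSM instead of raw SSH.
# Instance ID is a placeholder; session activity is recorded by
# Session Manager and the API call lands in CloudTrail.
INSTANCE='i-0123456789abcdef0'
CMD="aws ssm start-session --target $INSTANCE"
echo "$CMD"

# Port-forward to a database through the bastion (parameters are placeholders):
# aws ssm start-session --target "$INSTANCE" \
#   --document-name AWS-StartPortForwardingSession \
#   --parameters '{"portNumber":["5432"],"localPortNumber":["15432"]}'
```

The grant/revoke checkboxes in the UI would then map to attaching and detaching IAM policies that allow `ssm:StartSession` on specific instances, which keeps the audit trail in one place.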
How to Best Integrate E2E Tests into GitLab CI/CD Pipeline?
Hi all,
I'm looking for advice on the best way to integrate end-to-end tests into our existing GitLab pipeline. I am lost at this point. I'm doing this for the first time and I'm unsure how to handle it. Here’s a brief overview of our current setup, goals, and daily development workflow:
Current State:
Pipeline Tool: GitLab
Repositories: Backend (Spring Boot), Frontend (Angular Monorepo NX), Keycloak
Branching Strategy: Master-Branch, Feature-Branches
Daily Development Workflow:
When a feature is complete, we create a merge request from the feature branch to the master.
The pipeline is triggered on the merge request and includes the following steps: Prepare (npm etc.), Tests, Lint, Build, Trigger Deployment (manual trigger for the second part).
After successful pipeline completion, E2E tests can be manually triggered.
Merging can be done without triggering E2E tests or even if they fail, which currently has no impact on the merge process.
Clicking to trigger E2E tests temporarily deploys the changes to the dev stage.
Upon merging, changes are deployed to the master and rolled out to the dev stage.
During a release, we create a tag, and only the normal tests are executed, skipping E2E tests and deploying to the production stage.
Problems:
Manual execution of E2E tests.
Merge process can be completed even if E2E tests fail.
Goals:
Automate E2E tests for each commit/merge request.
Prevent merges if E2E tests fail.
Given our current constraints and setup, what would be the best way to achieve these goals with minimal disruption?
My initial thoughts were to containerize the frontend, backend, and Keycloak, and create a temporary E2E stage that would be terminated after a successful or unsuccessful job. However, it seems like a configuration mess and a waste of resources.
Are there simpler ways? I am open to any suggestions.
Thanks!
https://redd.it/1e759j6
@r_devops
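For the two goals, GitLab has direct mechanisms: run the E2E job automatically in merge request pipelines via `rules`, and block merging with the project setting "Pipelines must succeed" (Settings → Merge requests → Merge checks). A rough sketch of the job, assuming `services:` can host the containerized backend and Keycloak; the image names and the test command are placeholders, not real artifacts:

```yaml
# .gitlab-ci.yml sketch -- image names, tags, and commands are placeholders.
e2e:
  stage: e2e
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'   # auto-run on MRs
  services:
    - name: registry.example.com/backend:latest            # placeholder image
      alias: backend
    - name: quay.io/keycloak/keycloak:latest
      alias: keycloak
  script:
    - npm ci
    - npx nx e2e frontend-e2e                              # placeholder command
```

With the merge check enabled, a failing `e2e` job blocks the merge button, which addresses both problems without a separate deployed stage.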
As a DevOps architect, how would you ensure that an outage caused by CrowdStrike does not affect the development lifecycle and operations of your application?
🤔
https://redd.it/1e784tv
@r_devops
Master in Data Science or Software Engineering?
I am currently working as a Linux System Engineer at a company where I write bash scripts to automate tasks, monitor infra and services using Prometheus, send alerts to a Slack channel, and deploy services in containers; I'm currently building a Kubernetes cluster on our infra. I want to get into the Data Engineering and DevOps/DataOps/MLOps field. I hold a bachelor's degree in Management Information Systems and am currently applying for a Diploma followed by a Master's program at a university in my country, in the faculty of Computer Science and Artificial Intelligence. They have three programs available: Software Engineering, Data Science, and Cyber Security. But I don't know which is more pertinent to my field, a Data Science or Software Engineering Master's, and I've gotten mixed opinions about which one to choose. I already have a fairly good background in Computer Science subjects like algorithms, data structures, and coding in general.
https://redd.it/1e7ap1n
@r_devops
3 Essential Linux Command Line Tools for DevOps Engineers
A short video about using yq, sed/grep, and curl. Getting better at using these commands is essential for DevOps engineers (and not only them). With the proliferation of AI, there is an even greater need to learn and understand the underlying technologies and tools. Hope it helps someone learn a few new things or improve.
https://youtu.be/BYdrUJcU1yU
Related material:
- companion blog: https://medium.com/itnext/6-essential-linux-command-line-tools-for-devops-engineers-5cd23b578c50
- terminal slides: https://github.com/Piotr1215/shorts/tree/main/3-devops-tools
https://redd.it/1e78s5u
@r_devops
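To make the three tools concrete, here is a tiny runnable taste of grep and sed on a YAML snippet, with the yq and curl equivalents shown as comments (yq syntax differs between the Go and Python implementations, so check which one you have):

```shell
# Write a small YAML file to play with.
cat > /tmp/deploy.yaml <<'EOF'
spec:
  replicas: 3
EOF

# grep: locate the field (with line number)
grep -n 'replicas' /tmp/deploy.yaml

# sed: bump the replica count in place
sed -i 's/replicas: 3/replicas: 5/' /tmp/deploy.yaml
grep 'replicas' /tmp/deploy.yaml

# yq (Go implementation): read the same field
# yq '.spec.replicas' /tmp/deploy.yaml

# curl: probe an endpoint (placeholder URL)
# curl -s https://example.com/healthz
```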
Advice on Running SAST and DAST with Veracode in Azure DevOps Without Access to Client's Source Code
Hi everyone,
I'm working on a project for a client where we need to run SAST (Static Application Security Testing) using Veracode. The client has provided the necessary endpoints for the DAST scan, and that part is straightforward. However, I’ve hit a snag with the SAST.
The client wants to integrate Veracode into their Azure DevOps pipeline but is not willing to share the source code with us. This brings up a few questions and concerns:
1. **Is direct access to the source code required to integrate Veracode with Azure DevOps and run SAST?**
2. **If the source code is not required, what are the alternative approaches to perform SAST under these conditions?**
3. **What specific type of access do I need in Azure DevOps to set up and configure Veracode for running SAST?**
* I assume I might need Project Administrator access to configure pipelines, deploy, and install/configure the Veracode extension, but any confirmation or additional insights would be helpful. If the client isn't okay with giving us Admin access, what are the alternative roles?
Any advice or insights from those who have navigated similar situations would be greatly appreciated!
Thanks in advance!
https://redd.it/1e7cbjn
@r_devops
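One fact that may resolve the core concern: Veracode's static analysis runs on the compiled artifact (the uploaded binary or archive), not the raw source, and the scan step can run inside the client's own pipeline so the code never leaves their environment. A hedged sketch of their Pipeline Scan CLI; the flag names and artifact path are assumptions to check against Veracode's docs:

```shell
# Scan the built artifact inside the client's pipeline; only the binary
# and the results cross any boundary. Flags/paths are assumptions.
CMD='java -jar pipeline-scan.jar \
  --veracode_api_id "$VERACODE_API_ID" \
  --veracode_api_key "$VERACODE_API_KEY" \
  --file target/app.jar'
echo "$CMD"
```

If the client runs this themselves, you may only need enough pipeline access to review the job definition and results, not Project Administrator rights.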
How to manage dozens of gitlab tokens in CI jobs?
Scenario: gitlab on-prem driven CI with many repos working together to provide a single infrastructure:
So we have a lot of tokens to manage. As GitLab now enforces a one-year maximum token lifetime, I've just had the realisation that hunting through CI variables in dozens of repos, recreating new tokens in other repos that the CI needs to access, with the appropriate permissions, is not a sustainable approach.
So apart from better READMEs in each repo or a big spreadsheet, how do people manage dozens of tokens with varying permissions that need to be renewed yearly, and update the secret stored in the correct CI variable?
Unhelpfully gitlab deletes expired tokens and I don't see a convenient UI to list all project tokens across the entire account.
Curious... I assume this is a common problem with gitlab/github driven CI?
Many thanks in advance for any suggestions, ideas, pointers... 👍😀
https://redd.it/1e7f5ot
@r_devops
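For the "no convenient UI" part: the REST API can build that inventory. GitLab exposes `GET /projects/:id/access_tokens`, so a small loop can dump every project token and its expiry into one report. A sketch — the host, admin token, and project IDs are placeholders, and the curl call is commented out so nothing runs against a real instance:

```shell
# Inventory project access tokens and their expiry dates across projects.
GITLAB_HOST='gitlab.example.com'   # placeholder host
PROJECT_IDS='101 102 103'          # placeholder project IDs

for id in $PROJECT_IDS; do
  echo "project $id:"
  # curl -s --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  #   "https://$GITLAB_HOST/api/v4/projects/$id/access_tokens" \
  #   | jq -r '.[] | "\(.name)\texpires \(.expires_at)"'
done
```

Run on a schedule, this turns the yearly renewal hunt into a single report of what expires when, instead of spelunking through each repo's settings.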
Terraform Certifications?
I am looking to learn terraform and possibly get a certification if there is such a thing. Anyone have any suggestions?
https://redd.it/1e7l496
@r_devops
Managing Kubernetes with K9s
For those that have been using k9s (or equivalent) to monitor your Kubernetes clusters in the cloud, how do you ensure some form of version control?
For example, increasing memory/cpu request and limits, scaling of replicas, updating some yaml file, can all be done using k9s.
But how do you ensure some form of version control?
The reason for this is that I recently joined a non-tech company with only one other engineer, who joined around 2-3 months earlier than me. We've been trying to maintain a data pipeline built by an external vendor, and we found k9s really useful for live updates on the cluster.
But recently, the other engineer has been fine-tuning the memory/CPU settings. Sometimes he messes up the YAML file while editing, which causes some of the pods to fail to restart due to insufficient memory allocation.
Deep down I feel this may not be best practice, so I'd like everyone's input on how it's done at other tech companies.
https://redd.it/1e7nnca
@r_devops
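The usual answer is GitOps: treat a git repo of manifests as the source of truth and make the cluster follow it, so hand edits in k9s become visible drift rather than silent state. Even without adopting a full tool like Argo CD or Flux, `kubectl diff` against the repo shows that drift. A sketch; the manifests path is a placeholder:

```shell
# Compare git-tracked manifests against the live cluster.
REPO_DIR='infra/manifests'      # placeholder path to the git repo

CMD="kubectl diff -f $REPO_DIR/"
echo "$CMD"
# eval "$CMD"   # exit status 1 means the live cluster has drifted
```

With that in place, resource tweaks go through a merge request first, and k9s stays what it is good at: a read-mostly live view.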
Best Docker & Kubernetes course on Udemy?
I have an organizational account, which means all courses are free to enroll in.
I'm a security researcher looking to get some knowledge and know-how, so that at some point I'd also be able to understand the security aspects of Docker and k8s and look under the hood.
https://redd.it/1e7tpx3
@r_devops
Tips for a new learner for terraform / Kubernetes / docker
Hey everyone, I'm new to DevOps. Where do I learn Terraform / Docker / Kubernetes and CI/CD for free, with hands-on practice? Thanks everyone!
While you're at it, how do I become really good and knowledgeable in this field?
Thank you so much everyone
https://redd.it/1e7unz7
@r_devops
CI/CD configs IN App Repos?
Do you keep CI/CD configs in the same repo as your application / service code where devs can manage them? A few teams in my org have recently started using CircleCI on their own and set up their own pipelines in each app repo. I can understand if it was just for building or pre-deploy stages that are more application specific, but these are full CICD pipelines. They aren't consistent across the repos now which makes troubleshooting a nightmare, and I've also found that some of our standard SDLC steps like linting, validation, testing, vulnerability scanning, and so on are missing. Not to mention skipping review requirements and dual approval. There is nothing stopping someone from adding a pipeline that just deploys straight to production. I raised these concerns with our head of engineering who argued that it is necessary to empower the devs to ship as fast as possible. Am I making a stink for nothing?
https://redd.it/1e7w0p6
@r_devops
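A middle ground between central control and dev ownership is keeping a thin config in each app repo that delegates to a shared, reviewed pipeline definition; in CircleCI that's an orb. A rough sketch — the orb name and job names are placeholders, not a real published orb:

```yaml
# Per-repo .circleci/config.yml stays thin; the standard SDLC steps
# (lint, test, scan, gated deploy) live in one shared, reviewed orb.
version: 2.1
orbs:
  std: myorg/standard-pipeline@1.0.0   # placeholder shared orb
workflows:
  build-and-deploy:
    jobs:
      - std/lint
      - std/test
      - std/scan
      - std/deploy:
          requires: [std/lint, std/test, std/scan]
          filters:
            branches:
              only: main
```

Devs keep the speed of owning their pipeline trigger, while the scanning, review gates, and deploy restrictions are enforced in one place — which is a concrete counter-proposal to bring to the head of engineering.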