Learnings from integrating JMX based metrics from Java applications into time series databases
https://last9.io/blog/learnings-integrating-jmxtrans/
https://redd.it/1308y1v
@r_devops
JMX metrics give solid insights into the workings of your application. Integrating them with Levitate (our time series data warehouse) required us to jump through some hoops with vmagent.
Is "Certified GitOps Associate" a joke or desperate attempt by CNCF?
Why does the CNCF want to sell certs so desperately?
https://redd.it/1309zrg
@r_devops
Posted by u/IamOkei - No votes and no comments
Can Google Enterprise enforce hardware MFA?
Does anyone who uses Google Workspace Enterprise know if it can enforce hardware MFA only?
https://redd.it/12zzmqi
@r_devops
Posted by u/banhloc - No votes and 1 comment
Best courses to learn Python specifically for DevOps
I do sysadmin work in my organization, and I've heard everywhere that one cannot survive in DevOps without scripting knowledge in either Bash or Python.
I know neither of them, but I want to learn one and picked Python first. I don't want to learn it in depth, and I don't know whether I should.
I'm from a non-technical field and just entered the software industry, so it will be quite hard.
Are there any courses where I can learn Python specifically for DevOps, i.e. for automation and scripting? I've heard that Automate the Boring Stuff with Python is good, but would it suit someone from a non-tech field with little critical-thinking practice? What courses do you recommend, and what's your advice for me?
https://redd.it/130c6vu
@r_devops
Posted by u/Neither_Wallaby_9033 - No votes and no comments
Monitor Logs From an Agent in Icinga2
I'm currently using Icinga2 for a distributed monitoring solution, in a master-agent configuration. I've looked into using the built-in logfiles plugin (https://icinga.com/docs/icinga-2/latest/doc/10-icinga-template-library/#logfiles), but have had no success getting it to parse the requested logfile on the agent server.
Here's the current configuration within `/etc/icinga2/zones.d/master/cpanel.conf` on the master server:
object Service "cpanel-backup" {
  import "generic-service"
  host_name = "cpanel29.dbl-mail.com"
  check_command = "logfiles"
  vars.logfiles_logfile = "/var/log/borgbackup.log"
  vars.logfiles_critical_pattern = "error:"
  command_endpoint = host.vars.agent_endpoint
}
Any ideas on what I'm doing wrong?
0 comments
https://redd.it/12zyusc
@r_devops
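Two things worth checking here (a hedged guess, not a confirmed fix): `command_endpoint = host.vars.agent_endpoint` only resolves if `vars.agent_endpoint` is actually set on the matching Host object and names an existing Endpoint, and since the check executes on the agent, the `check_logfiles` plugin binary must be installed there. A sketch of what the Host object might look like — the hostname here just mirrors the service definition above, and the endpoint name is illustrative:

```
object Host "cpanel29.dbl-mail.com" {
  import "generic-host"
  address = "cpanel29.dbl-mail.com"
  // resolved by the service's command_endpoint; must match
  // the name of the agent's Endpoint object exactly
  vars.agent_endpoint = "cpanel29.dbl-mail.com"
}
```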
Posted by u/Yibro99 - 1 vote and no comments
Using GPT to Analyze Cloud Security Issues
https://www.selefra.io/blog/using-gpt-to-analyze-cloud-security-issues-by-selefra-clgyrzjyn1132812znjv2pvoxip
https://redd.it/130esi8
@r_devops
In today's digital age, cloud security has become an increasingly important task.
What are the most important DevOps conferences this year?
I'm curious what the most interesting/important/influential DevOps-related conferences happening this year are. My job lets me pick one and pays for travel and the ticket, so I'm definitely planning to use that perk; I'm just not sure which one.
https://redd.it/130gsk4
@r_devops
Posted by u/ntech2 - No votes and no comments
Remote DevOps salaries poll
Hi, this one is for DevOps engineers who work remotely: I'm curious what your salaries are. Where are you from, and where is your employer located? Do you work on a B2B contract basis, or are you fully employed? Thanks!
I'll start:
| q | a |
| --- | --- |
| Your location | E. Europe (Latvia) |
| Employer location | E. Europe (Latvia) |
| Contract or full time | Full-time employment |
| Years of relevant experience | 5 |
| Salary (gross/net) | 50k EUR (55k USD) gross, ~35k EUR after tax |
https://redd.it/130gcc4
@r_devops
Posted by u/ntech2 - No votes and 1 comment
Scaling RabbitMQ
Hey guys!
Currently facing a use case where our RabbitMQ cluster (3 nodes, all quorum queues) has to handle 80k-100k events per second. We've hit situations where latency was above what we can allow for our applications and downstream users.
The cluster is being used across the entire company and we were trying to think of ways to overcome this problem.
One of the suggestions is multicluster RabbitMQ where several clusters would be provisioned for different teams so that an outage of a cluster doesn't influence many functions. However, currently we're in a situation where certain teams are producing to queues which are consumed by other teams. Additionally, we don't want to overcomplicate the management of connections to several different teams on the application level and therefore we were debating whether federation is a good use case for this.
If anyone has experience with similar problems or how to spread the load of a RabbitMQ cluster and has any best practices/ recommendations I'd love to hear those and would be very appreciative.
Thanks!
https://redd.it/130kiu0
@r_devops
Posted by u/Easy-Dragonfruit6606 - No votes and no comments
What books do you recommend for general Devops/Tools and Cloud best practices?
Appreciate any suggestions, thanks!
https://redd.it/130kgxd
@r_devops
Posted by u/Cevap - No votes and 2 comments
A notion template for writing Architecture Decision Record(ADR)
Make better architectural decisions with ease and clarity using our Architecture Decision Record template for Notion!
https://www.notion.so/templates/architecture-decision-record-template
https://redd.it/130nkii
@r_devops
Check application environment status / requirements
Hello,
I am looking for a tool or app that would allow me to check the status of an application environment.
I'm thinking about network resources: API endpoints, internet access, databases, SMTP servers, etc.
In practice, I currently have to check accessibility to 150+ resources (URLs, domains, hosts, ports...), and I repeat this check with every change 😮💨
Before I start writing an application myself, I would like your feedback. Thanks, everyone, for sharing your experience.
https://redd.it/12zy5zr
@r_devops
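Before building something custom, a minimal reachability sweep is only a few lines of Python; a sketch, where the resource list and timeout are placeholders:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(resources):
    """Check a list of (host, port) pairs and return the unreachable ones."""
    return [(h, p) for h, p in resources if not check_tcp(h, p)]
```

Feeding it the 150+ resources from a CSV/YAML file and running it on every change could be a stopgap before adopting a full monitoring tool (HTTP endpoints would need an extra check via `urllib` or similar).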
Posted by u/vjeantet - 1 vote and 5 comments
How to set DKIM signing key length and DKIM signature with boto3?
Hi,
I am trying to implement a Python script with boto3 for AWS SES. My goal is to create a domain identity that sets the DKIM signing key length and enables DKIM signatures, and then publishes the DNS records for DKIM authentication shown in the console. Can anyone help?
The link below is not working, or deprecated I guess.
https://boto3.amazonaws.com/v1/documentation/api/1.26.108/reference/services/sesv2/client/create_email_identity.html
https://redd.it/12zhiwj
@r_devops
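For what it's worth, the SESv2 `CreateEmailIdentity` API does accept DKIM signing attributes, including the Easy DKIM key length. A sketch of how that might look with boto3 — the domain and region are placeholders, and the exact parameter names are worth double-checking against the current boto3 docs:

```python
def build_identity_request(domain: str) -> dict:
    """Build kwargs for sesv2 create_email_identity with Easy DKIM
    and a 2048-bit signing key."""
    return {
        "EmailIdentity": domain,
        "DkimSigningAttributes": {
            # Easy DKIM key length: 'RSA_1024_BIT' or 'RSA_2048_BIT'
            "NextSigningKeyLength": "RSA_2048_BIT",
        },
    }

def create_identity(domain: str, region: str = "us-east-1"):
    """Create the identity and return the DKIM tokens; each token becomes
    a CNAME record you publish in DNS (the console lists them too)."""
    import boto3  # local import so the builder above stays testable offline

    client = boto3.client("sesv2", region_name=region)
    resp = client.create_email_identity(**build_identity_request(domain))
    return resp["DkimAttributes"]["Tokens"]
```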
Posted by u/autodevops - No votes and no comments
Essay on Datadog, Splunk and Grafana
I’m writing an essay for uni on Datadog, Splunk and Grafana and need to answer the questions which would be a better a tool for:
A. Startup company
B. FTSE 100
I'm finding it difficult to find sources or information on why a startup might pick each of them, but any help would be much appreciated.
https://redd.it/130wsro
@r_devops
I’m writing an essay for uni on Datadog, Splunk and Grafana and need to answer the questions which would be a better a tool for:
A. Startup company
B. FTSE 100
Finding it difficult to find sources or information surrounding why a startup might pick them but any help would be much appreciated
https://redd.it/130wsro
@r_devops
Reddit
r/devops on Reddit: Essay on Datadog, Splunk and Grafana
Posted by u/usernameUnlikely - No votes and no comments
Technical debt prevention
Is there a technical debt prevention process in your organization? Can you explain how it's implemented?
https://redd.it/1310pr6
@r_devops
Posted by u/Fit-Strain5146 - No votes and no comments
Any recommendations for CLI wrappers?
I currently have a Makefile with a bunch of kubectl, helm, and aws commands for various environments. This is pretty ugly and brittle, as the commands are duplicated for each environment, and I'd like to be able to do something like `<something other than make> apply_xyz -env staging` or `<something other than make> apply_xyz -conf ./staging.yaml`.
I recall seeing a CLI wrapper project maybe half a year ago from a DevOps influencer type that looked promising, but I can't recall who it was or what the project was called.
Does anyone have any suggestions? I'd rather use a fairly basic wrapper than roll my own with Fabric, etc.
https://redd.it/13137hi
@r_devops
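In case it helps narrow the search: `just` (a Make alternative whose recipes take arguments) is often recommended for exactly this. That said, a thin wrapper is also only a screenful of Python with the standard library; a sketch, where the helm command template, chart path, and environment table are all made up:

```python
import argparse
import subprocess

# per-environment settings; values here are illustrative
ENVS = {
    "staging": {"context": "staging-cluster", "values": "./staging.yaml"},
    "prod": {"context": "prod-cluster", "values": "./prod.yaml"},
}

def build_apply_cmd(env: str) -> list:
    """Compose the deploy command for an environment from one template,
    so nothing is duplicated per environment."""
    cfg = ENVS[env]
    return ["helm", "upgrade", "--install", "xyz", "./chart",
            "--kube-context", cfg["context"], "-f", cfg["values"]]

def main() -> None:
    """CLI entry point: `deploy apply_xyz -env staging`."""
    parser = argparse.ArgumentParser(prog="deploy")
    parser.add_argument("action", choices=["apply_xyz"])
    parser.add_argument("-env", dest="env", choices=ENVS, required=True)
    args = parser.parse_args()
    subprocess.run(build_apply_cmd(args.env), check=True)
```

The design point is that only the `ENVS` table varies per environment, so adding a fourth environment is one dict entry rather than another copied Makefile target.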
Posted by u/Downtown_Twist_4782 - No votes and 1 comment
Is this stack good enough to get a cloud devops role?
I am currently a network admin looking to transition into cloud devops by sometime in 2024.
My expected stack by the end of the year:
- Terraform
- Ansible
- Kubernetes
- Docker
- CCNA
- AZ-900
Will this be enough for me to transition?
https://redd.it/1313ril
@r_devops
Posted by u/Ok_Abbreviations388 - No votes and 2 comments
DevOps redirected to "study" work for AI project
Hello guys,
I need some help/hints with the following ticket, since I'm kind of confused. I've always worked as a sysadmin or DevOps engineer, but since there are no more projects, they're asking us to do "study" work and contribute to Confluence pages for an AI project.
So they ask us this:
Complete a comprehensive analysis of the network planning (cellular and non-cellular) software landscape, listing all tools used and categorising them by logical criteria (TBD based on analysis) but inclusive of market share.
Examples: Atoll / ABWave / etc.
I don't have any idea about this area of IT, since it was never part of my job, so I'm trying to figure out how to approach it... I checked Atoll, but I'm still not sure I can make much sense of it, or whether that would be enough.
https://redd.it/1316ub9
@r_devops
Posted by u/Felix1178 - No votes and 2 comments
How important is Software Craftsmanship at your work?
How much care does your company put into developing software?
Do they follow best practices, properly plan and prepare everything, create documentation, design papers, testing strategies, and code quality gates, manage technical debt, and give development enough time to create proper solutions?
How many companies even stay consistent in the way they produce software over the years?
How many of you have heard of Software Craftsmanship? Does anyone actually follow the manifesto at work?
https://redd.it/1318mf2
@r_devops
Posted by u/pojzon_poe - No votes and 2 comments
GHA/ADO - Manual deployment of YAML pipeline to multiple environments - single pipeline with env selection vs separate pipelines per environment
GHA = GitHub Actions, ADO = Azure DevOps
Assuming I have 3 environments, dev/test/prod.
I use trunk-based development. After every commit to the trunk, a CI pipeline runs to build an artifact and upload it as a pipeline artifact. A CD pipeline for the dev environment then runs automatically once CI finishes; this CD pipeline deploys only to DEV.
TEST and PROD deployments are manually triggered in our case right now. DEV also sometimes has to be manually rolled back to a previous artifact.
For manually triggered pipelines, would you prefer to:
1. Have separate pipelines per environment, like deploy_dev.yaml, deploy_test.yaml, deploy_prod.yaml. They can be parametrized by variables inside a YAML template. Templates can avoid repeating everything, yet the pipelines using them would still be duplicated in some way.
2. Have a single pipeline with the environment as a parameter: when triggering, we can select whether to deploy a specific artifact to dev, test, or prod. Everything that needs to be parametrized can be, via a library group (ADO) or environment vars/secrets (GHA).
Why don't I use a single multi-stage pipeline with stages for all environments? Because in our case deployments to test and especially prod are not really automated, and Azure DevOps doesn't provide great support for manually triggering stages in a multi-stage pipeline (classic release pipelines did).
I have seen both approaches and don't have a strong opinion, but I lean towards the second option.
What are your thoughts?
https://redd.it/131a1vt
@r_devops
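For option 2 on the GHA side, the usual building block is a `workflow_dispatch` trigger with a `choice` input mapped onto a GitHub environment; a sketch, with the job steps, artifact wiring, and names all illustrative:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        type: choice
        options: [dev, test, prod]
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    # binds the run to the selected GitHub environment, pulling that
    # environment's vars/secrets and applying its protection rules
    environment: ${{ inputs.environment }}
    steps:
      - run: echo "Deploying to ${{ inputs.environment }}"
```

A side benefit of binding to the environment is that required reviewers on `prod` give you an approval gate without a separate pipeline.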
Posted by u/0x4ddd - No votes and no comments
API7 Cloud Integrates with Kubernetes Service Discovery
https://api7.ai/blog/api7-cloud-integrates-kubernetes-service-discovery
https://redd.it/12ze1dz
@r_devops
API7 Cloud integrates with Kubernetes Service Discovery to help users proxy applications deployed in the Kubernetes cluster conveniently.