GitOps'y style with many repositories and submodules is kind of annoying.
This may be a little niche, but over the last couple of years we've been building up our new Kubernetes-related service configuration in a sort of GitOps'y way (in hindsight, the architecture grew fairly organically).
I'll start with our repository types:
- Application source code (Bitbucket)
- Infrastructure configuration (Gitlab)
- Cluster configuration (Gitlab)
We have 8 Kubernetes clusters, each with its own repository, and within each cluster repository the structure looks a bit like:
```
cluster-01:
apps/
platform-$service/
charts/
helmfile.yaml
.gitlab-ci.yml
deployments_$project_$container.yml
workloads_$project.yml
registries.yml
```
Each of those applications is essentially a submodule, and the deployments YAML file is generated dynamically from the registries YAML file (basically: for each registry in the file, find every container we need to deploy, and deploy it).
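As a rough sketch of what that generation step amounts to (the field names `registries`, `containers`, and `name` are assumptions here; the post doesn't show the real schema):

```python
# Sketch: expand a parsed registries.yml structure into one deployment
# entry per registry/container pair. Field names are assumptions; the
# post does not show the actual schema.

def build_deployments(registries: dict) -> list[dict]:
    """For each registry, emit one deployment entry per container."""
    deployments = []
    for registry in registries.get("registries", []):
        for container in registry.get("containers", []):
            deployments.append({
                "registry": registry["name"],
                "container": container,
            })
    return deployments

example = {
    "registries": [
        {"name": "platform", "containers": ["api", "worker"]},
    ]
}
print(build_deployments(example))
```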
However, with 8 cluster repositories and every application being a submodule within them, it can become quite laborious to make a simple chart change or deploy a new routing change: it involves making the change, opening a PR for that change, then updating the submodule and opening a PR in each cluster repository.
I've thought about moving some of this into a monorepo, but I'm caught in a trap between being DRY and being easy to work with.
Have any of you ended up with a similar structure? If so, what did you do to make it easier to use?
Alternatively, if you were solving a similar problem but solved it differently, I'd love to know how.
I'm not sure if I've explained myself particularly well so feel free to ask for clarification if needed!
https://redd.it/k6i6ax
@r_devops
Too Many Job Opportunities (Rant)
I completely realize that in this time of economic hardship for so many, this post will sound a bit tone deaf. I also realize this post reeks of privilege, and for that I apologize. I don't really have anywhere else to vent.
I am currently a DevOps engineer for a relatively small analytics company. I like my job. I like the diverse work I do, I like my team, I like my schedule flexibility, I like all my benefits, and I love what my company does. The only area that leaves something to be desired is the salary (which is largely offset by the great benefits).
Since the Covid pandemic started up earlier in the year, I've had a very noticeable uptick in recruiters and HR people reaching out about opportunities. In the last week, I've had over a dozen opportunities brought to me. This isn't counting the numerous random emails about 6-month contract work in Nowhere, Texas. These are legit, high-paying, interesting roles with really impressive companies; many of them fairly local, too.
Under better economic circumstances, I'd likely just stay put in my job that I like. As I'm the only person in my household working right now, though, it feels fiscally irresponsible to not pursue opportunities that would be significantly higher-paying.
Constantly replying to LinkedIn messages and emails to coordinate calls in between the minutes that I'm working, and trying to find holes in my schedule that permit panel Zoom interviews is exhausting. Constantly being "on" for conversations with recruiters and hiring managers is exhausting. Constantly thinking about trying to improve my situation to better provide for my family is exhausting.
Sorry for this rant. I was just curious if anyone else is experiencing this type of burnout (on top of the normal work burnout). Also, I'd be interested to hear anyone's thoughts on job changes/upgrades during this pandemic. How do you balance things like salary expectations, benefits, work satisfaction, company satisfaction, etc.?
https://redd.it/k6pbtn
@r_devops
Build/Deployment environment guidance
I'm looking for some guidance from the devops community here.
I recently completed the MVP design of an architecture for my organization's backend data collection, processing, and persistence pipelines, only we don't have our CI/CD strategy finalized. We're currently on Bitbucket/Bamboo, we're likely going to make a change, and it's looking like GitLab may be what we move to.
The architecture is multi-stage (dev, test, qa, prod), multi-layered (data collection, enrichment/processing, persistence), and multi-region, with some layers in more regions than others. Aside from the complexity brought on by being multi-regional and multi-layered, it's a pretty neat, clean, and simple architecture that provides HA and reasonably easy regional failover. However, the deployment is a bit intricate, and of course, because each region and layer needs to know about the others, each stack has dependencies on outputs from other stacks in other regions.
My goal is to allow the developers to create their applications and plug into the architecture. So I'm using an EventBridge bus in each region/layer/stage, and developers can easily create a rule on it to route data to their component.
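For illustration, the kind of event pattern a developer might attach to that shared bus looks something like this (the `source` and `detail-type` values are invented; in practice this would go into a `put_rule` call or a Serverless event definition):

```python
# Sketch: a hypothetical EventBridge rule pattern routing enriched
# records to one team's component. The source and detail-type values
# are invented for illustration.
import json

event_pattern = {
    "source": ["ingest.collector"],      # hypothetical producer name
    "detail-type": ["RecordEnriched"],   # hypothetical event type
}

print(json.dumps(event_pattern, indent=2))
```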
Problem is, I haven't solved for the dependencies that the applications and their deployments will have on the infrastructure. I automated the infrastructure deployment using CloudFormation, and the developers typically use Serverless, so theoretically I can have all of the stacks export everything and let the developers just import those values everywhere they need them, however:
- That creates a pretty tight coupling that I've never liked, and CloudFormation is known for getting into weird states that can be hard to get out of.
- I can foresee developers wanting to depend on resources in other regions, and Fn::ImportValue doesn't allow that.
- Some of those values (the "touch points") are needed at deployment time, while others would be more valuable obtained at runtime. (Depending on them at runtime would allow human intervention if the whole thing goes up in flames: change the values and let all the resources "autodiscover".)
I had a vision that these touch points (resource ARNs, hostnames, etc.) would reside in some key/value store maintained in/by the deployment environment. When something gets deployed, its outputs would be stored in this key/value store. Even better, if the store were backed by something like a DynamoDB global table, the values could be depended on at both runtime and deployment time.
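A minimal sketch of that touch-point store idea, with a plain in-memory dict standing in for the DynamoDB global table and an invented key naming scheme, just to show the shape:

```python
# Sketch of a touch-point registry: deployments publish their outputs,
# and other stacks (or running code) look them up by key. The dict
# backend stands in for a DynamoDB global table; the key names below
# are invented for illustration.

class TouchPointStore:
    def __init__(self):
        self._store = {}

    def publish(self, stage: str, region: str, key: str, value: str) -> None:
        """Called at deploy time with a stack's outputs."""
        self._store[(stage, region, key)] = value

    def resolve(self, stage: str, region: str, key: str) -> str:
        """Called at deploy time or runtime; cross-region lookups work
        because the store itself is shared, unlike Fn::ImportValue."""
        return self._store[(stage, region, key)]

store = TouchPointStore()
store.publish("prod", "us-east-1", "ingest/event-bus-arn",
              "arn:aws:events:us-east-1:123456789012:event-bus/ingest")
print(store.resolve("prod", "us-east-1", "ingest/event-bus-arn"))
```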
Am I off base in thinking along those lines? Are there any frameworks that easily save these variable values and (ideally) persist them to AWS? I know GitLab has environment variables, but I haven't gotten a sense of whether they solve for what I'm thinking here. Should I just use CloudFormation Fn::ImportValue and cross the bridge of the above issues when I come to it?
Any guidance, pointer in the right direction, etc. would be much appreciated.
Thanks!
https://redd.it/k6lffr
@r_devops
Any resource on AWS IaC with Docker?
Hi,
I would like to explore AWS using Terraform infrastructure as code. Is there any book (not a video) that discusses this topic?
I’m using Docker with Terraform on DigitalOcean and I’ve successfully hosted multiple sites, but Docker on AWS has different nuances in Terraform. So what I’d really like is a tutorial or book that explains the steps simply.
I prefer to stick with Terraform rather than the AWS CLI, so that my IaC is not coupled to any one provider.
Also, this can be a paid resource.
https://redd.it/k6z37d
@r_devops
Do professionals from Dev background who transition into Devops get assigned the same work as someone from Ops background who transition into Devops?
As a newcomer to this industry, it would really be helpful to get some input from professionals who already work in it.
https://redd.it/k70cs9
@r_devops
How do you manage many resources with tools like Terraform?
We are looking at moving away from manually created infrastructure and going scripted. We have many projects and each has resources.
How does management of this work when using tooling like Terraform?
Should we just have many git repos? Should we use Terraform Cloud? Maybe we should use our deployment tool to assist? (Octopus Deploy).
This is new for us and there are so many options out there that it can be hard to understand the best route to take.
https://redd.it/k6vyqx
@r_devops
🦄The saga continues 12/8 at 1pm EST - AWS GameDay @ re:Invent 2020
Calling all devops enthusiasts. Come learn AWS hands-on and risk-free, network with other tech professionals, and have some fun! The Unicorn Polo League returns live to Twitch on Tuesday, 12/8 at 1pm EST. Claim your spot from the re:Invent [GameDay Site](https://virtual.awsevents.com/channel/GameDay/186983893?nc2=reinv20_m_hocgd) beginning 1 hour prior. Review this [FAQ](https://d1.awsstatic.com/events/AWS%20reInvent%202020%20GameDay%20FAQ.pdf) and see below for some more pointers. Additional video content is available at the GameDay site as well which will help you navigate the event. We are excited to see you there!
**Know-Before-You-Go**
* Event is free, we will provide your team with an AWS account to work in
* You need an Amazon.com account to complete a brief registration process and log in
* We will be streaming live on [Twitch](https://www.twitch.tv/awsgameday), with an internal team playing along with customers and sharing tips and strategy. You'll also have the chance to meet the game developers in exclusive interviews.
* No pre-registration
* No pre-formed teams, we'll assign you to a team of 4 after you log in
* First come, first served
https://redd.it/k71ycl
@r_devops
macOS taskbar plugin to monitor cloud costs
I am curious how you monitor your cloud costs. Despite having credits, I fear I may exhaust them. AWS and GCP send email invoices at the end of the month. Budgets and alarms are one solution. AWS has a Cost Explorer API, but it exposes limited information. GCP does not have a billing API.
How would you feel about a macOS plugin that displays your cloud expenditure and remaining credits in your taskbar? You could also customize it to show costs by tag or resource type (EC2, S3, GKE), and add any number of cloud accounts from any cloud provider.
The user just has to enable billing exports and they see their cloud cost in the menu bar. There would also be programmatic access to fetch cost data, so they can integrate it with Zapier, IFTTT, or webhooks.
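As a toy sketch, the menu-bar rendering piece could be as simple as the following (the input shape is invented; real numbers would come from the billing exports described above):

```python
# Sketch: format a cost snapshot into a short menu-bar string.
# The month-to-date and credits figures are invented sample inputs;
# a real plugin would read them from billing exports.

def menubar_summary(month_to_date: float, credits_left: float,
                    currency: str = "$") -> str:
    """Render e.g. '$42.50 MTD | $257.50 credits left'."""
    return (f"{currency}{month_to_date:.2f} MTD | "
            f"{currency}{credits_left:.2f} credits left")

print(menubar_summary(42.5, 257.5))
```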
https://redd.it/k6rnho
@r_devops
RHCSA 8 EX200
Hello guys,
I'm preparing to take my RHCSA 8 EX200 by the end of this month, but I'm having difficulty finding recent practice questions. Are there any recommendations? I just want to practice recent questions. Any help will be gratefully appreciated.
Thanks
https://redd.it/k6rxjt
@r_devops
DevOps last mile: automating edge cases.
Companies chasing DevOps have an internal team dedicated to automation. Developers use the tools created by this team to run and operate their code. They have many names, but let's call them Platform, which most companies do. Platform teams create abstractions on top of the infrastructure. The goal is to increase developer speed and make systems reliable. They increase speed by simplifying infrastructure APIs and reliability by automating manual tasks. But a curious thing always happens. One type of task that exists from the start of the company is always left behind in the automation backlog.
**Why automate?**
We used to ship software by accessing servers and running commands inside boxes to pull new code. Now code goes from Git to servers without human intervention. Developers define what they need with code. Platform teams build the tools to make code changes become running systems. The goal is doing this for everything, from the business code to infrastructure. Networking, databases, queues, etc. But this is hard. Platform teams have a big backlog, they prioritize items demanded with higher frequency.
Developers make manual changes for things not yet automated. Some companies have compliance, regulations, and other constraints. This makes it hard for developers to get direct access to production. So they have the Platform team running these changes for them. This is what makes the higher frequency items get priority. Engineers want to write software, not run repetitive manual tasks. But this backlog is never decreasing. The business changes and adopts new technologies. Companies create new units and teams. Headcount grows, adding new items to the automation backlog. And this mysterious type of task is always left behind.
**One-offs.**
Sometimes a bug in software messes with a customer's money, time, health, or ego. They won't wait for three iterations of code reviews, tests, code analysis, and gradual rollouts. That takes time. Someone will access the database and update it. These are one-off scripts. They solve a problem for one or a few customers before the team creates a definitive fix.
One-off scripts have a bad reputation. When this happens too much, it's a sign that the software is not stable. In the ideal world, it would never happen. Engineers would spot such time-critical problems during design and review phases. Production issues should be light and wait for regular software delivery flow.
Almost every company lives under the illusion that one-offs should not exist. Or that they will stop happening at some point. Yes, one should not do this every day. But having a few senior engineers run manual scripts in production because it's an exceptional case is a mistake.
**This is a myth.**
One-offs won't go away, and companies need to embrace them. Avoiding them drives the company down a bad path: either centralizing execution with a few experienced engineers, or creating a team dedicated to analyzing and running them. Neither is good.
Some companies solve this problem with a slow and manual Change Management workflow. Developers find the problem and add a script to a ticketing system. Someone from the operations team runs it without all the context of what she is doing. Avoiding one-offs is the shortest path to this model.
**It's hard**
One-offs are the hardest piece to automate. When you don't know what problems can happen, it's hard to build a solution upfront. Few companies had the courage to 1) embrace one-offs, and 2) try to automate them. We did this for one of the companies I worked at, and it was a big success. Developers were happy with the autonomy to build and run all solutions to their problems. Security and compliance were happier with audit logs. SREs were happy with fewer manual interventions in production. It was hard but paid off.
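To make that concrete, here's a toy sketch of an audited one-off runner — recording who ran what before executing, which is the audit trail security and compliance teams want. The field names are invented, and this is not a claim about how any real product implements it:

```python
# Toy sketch of an audited one-off runner: every script execution is
# recorded (who, what, when) before it runs. Field names are invented
# for illustration.
import datetime

audit_log = []

def run_one_off(author: str, description: str, script):
    """Record the one-off in the audit log, then execute it."""
    audit_log.append({
        "author": author,
        "description": description,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return script()

result = run_one_off("alice", "refund order 1234", lambda: "refunded")
print(result)
```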
I enjoyed the solution so much that I decided to bring it to other teams. I created a company; it's called [RunOps](https://runops.io). We help teams automate one-off scripts within minutes. We see fantastic results with the first few companies using it. It's early days; our landing page is not clear on how we do it, so feel free to reach out on [Twitter](https://twitter.com/andriosrobert) or here to learn more.
https://redd.it/k6plye
@r_devops
How many DevOps Certifications are there right now?
Since DevOps is a widespread ideology, there is no single inventor or authority to examine your caliber. There are various DevOps certifications on the market (most of them based on its tools), but only a few are highly recognized in the talent market:
### A. Kubernetes Certifications
This exam tests an aspirant’s knowledge of and expertise in general Kubernetes features and is offered in two forms:
* Certified Kubernetes Administrator (CKA)
* Certified Kubernetes Application Developer (CKAD)
### B. Docker Certified Associate
The exam tests one’s experience working with Docker in an IT infrastructure.
### C. Puppet Professional Certification
The exam is named “*Puppet 206 – System Administration Using Puppet*” and validates your ability to operate system infrastructure using Puppet.
### D. Microsoft Certified: Azure Administrator Associate
Examines your skills and expertise in maintaining storage, networking, and securing resources over Microsoft Azure.
### E. AWS Certified DevOps Engineer Professional Exam
The exam tests your technical skills and knowledge to provision, operate & manage distributed apps & systems across AWS.
### F. Certified Jenkins Engineer
The exam tests your knowledge of using Jenkins to build robust CI/CD pipelines.
https://redd.it/k6hcs7
@r_devops
Please help me decide between a DevOps or Microsoft 365 career path
Currently I'm working in a help desk position for a large gov. org. I have basic programming knowledge which I haven't been able to use in this role. I've impressed my bosses by showing potential and working hard. I've been able to shadow a couple BAs as well.
Right now we are introducing Microsoft 365 and no one on our team is an expert in that area. My manager has told me he would pay for my 365 training so that I can be his go-to person in this area.
I assume a new role would be made for me with better pay eventually as well.
I was drawn to DevOps because I thought there are more opportunities in that field, and the pay is much better.
I feel a little paralyzed because I can't make a decision. If I get into 365, is that a complete tangent from getting into Azure DevOps? Should I do both? Should I just do 365? I'm terrified I'll spread my focus too thin and end up stuck in help desk.
Any advice would be much appreciated.
https://redd.it/k6euf3
@r_devops
Exactly a year ago, I was new to PagerDuty / DevOps and was suffering from anxiety in the new job. This group helped me a lot. Thank you, my fellow engineers!
[Original post.](https://www.reddit.com/r/devops/comments/dj15cy/how_to_deal_with_oncall_pagerduty_anxiety/)
In fact after 1 year, what /u/lorarc [said has come true](https://www.reddit.com/r/devops/comments/dj15cy/how_to_deal_with_oncall_pagerduty_anxiety/f412gk1/). I am sitting here on a weekend (I am on-call this week), eagerly waiting for a PD alert! :-)
I would like to express my heartfelt gratitude to this sub.
Thank you!
https://redd.it/k765aw
@r_devops
Build/Deployment guidance
I'm looking for some guidance from the devops community here.
I recently completed the MVP design of an architecture for my organization's backend data collection, processing and persistence pipelines, only we don't have our CI/CD strategy finalized. We're currently on BitBucket/Bamboo and are likely going to be making a change and it's looking like GitLab may be what we move to.
The architecture is multi-stage (dev, test, qa, prod), multi-layered (data collection, enrichment/processing, persistence) and multi-region, with some layers present in more regions than others. Aside from the complexity brought on by being multi-regional and multi-layered, it's a pretty neat, clean and simple architecture that provides HA and fairly easy regional failover. However, the deployment is a bit intricate, and because each region and layer needs to know about the others, each stack has dependencies on outputs from stacks in other regions.
My goal is to let developers create their applications and plug into the architecture. So I'm using an EventBridge bus in each region/layer/stage, for which developers can easily create a rule to route data to their component.
Problem is, I haven't solved for the dependencies that the applications and their deployments will have on the infrastructure. I automated the infrastructure deployment using CloudFormation, and the developers typically use Serverless, so theoretically I could have all of the stacks export everything and let the developers import those values wherever they need them. However:
- That creates a pretty tight coupling that I've never liked, and CloudFormation is known for getting into weird states that can be hard to get out of.
- I can foresee developers wanting to depend on resources in other regions, and Fn::ImportValue doesn't allow that.
- Some of those values (i.e. "touch points") are needed at deployment time, while others would be more valuable obtained at runtime. (Depending on them at runtime would allow human intervention in the event the whole thing goes up in flames: change the values and let all the resources "autodiscover".)
I had a vision in my head that these touch-points (resource ARNs, hostnames, etc.) would reside in some key/value store maintained in/by the deployment environment. When something gets deployed, its outputs would be stored in this key/value store. Even better, if the store were backed by something like a DynamoDB global table, the values could be depended on at both runtime and deployment time.
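The key/value "touch-point" registry described here can be sketched as follows. This is a minimal, hypothetical illustration: the path-style key scheme, the class and function names are my own invention, and a plain dict stands in for the DynamoDB global table so the idea is runnable without AWS.

```python
# Sketch of a cross-region "touch-point" registry. A dict stands in for a
# DynamoDB global table; swap in boto3 Table.put_item/get_item for real use.

def touchpoint_key(stage: str, region: str, layer: str, name: str) -> str:
    """Hypothetical key scheme: /{stage}/{region}/{layer}/{name}."""
    return f"/{stage}/{region}/{layer}/{name}"

class TouchpointRegistry:
    """Stores stack outputs at deploy time; readable at deploy or run time."""

    def __init__(self):
        self._store: dict[str, str] = {}  # stand-in for the global table

    def publish(self, stage, region, layer, outputs: dict[str, str]) -> None:
        # Called by the infrastructure pipeline after a stack deploys.
        for name, value in outputs.items():
            self._store[touchpoint_key(stage, region, layer, name)] = value

    def resolve(self, stage, region, layer, name) -> str:
        # Called by application deployments (or at runtime) instead of
        # Fn::ImportValue, so cross-region lookups become possible.
        return self._store[touchpoint_key(stage, region, layer, name)]

# Example: a stack in eu-west-1 looks up the collection layer's
# EventBridge bus that was deployed in us-east-1.
registry = TouchpointRegistry()
registry.publish("prod", "us-east-1", "collection",
                 {"event-bus-arn": "arn:aws:events:us-east-1:123456789012:event-bus/collect"})
print(registry.resolve("prod", "us-east-1", "collection", "event-bus-arn"))
# prints "arn:aws:events:us-east-1:123456789012:event-bus/collect"
```

For what it's worth, AWS SSM Parameter Store uses a similar hierarchical path-style naming, so it's another candidate backing store for this pattern.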
Am I off base in thinking along those lines? Does anything like that exist? I know GitLab has environment variables, but I haven't gotten a sense of whether they solve for what I'm thinking here. Should I just use CloudFormation Fn::ImportValue and cross the bridge of the above issues when I come to it?
Any guidance, pointer in the right direction, etc. would be much appreciated.
Thanks!
https://redd.it/k6ep0e
@r_devops
How do you interact with the API of internal services from CI systems?
I’m mostly thinking of the use case where you have something like HashiCorp Vault or Kubernetes running on EC2 in a private subnet. Right now, I have a bash script baked into an AMI that I trigger with the Systems Manager Run Command from CI. This bash script uses CLI commands to configure internal services. Our infra is still in its infancy and I keep thinking “there has to be a better way!”. My main ideas were:
- switching to a self hosted CI solution, running the CI instance in the same VPC as internal services
- exposing the API of internal services to the internet and protecting them with basic auth
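For context, the Run Command trigger described above typically looks something like this from a CI step. This is only a sketch: `AWS-RunShellScript` is a real built-in SSM document, but the tag filter and script path are placeholders.

```shell
# Run a config script on private instances via SSM -- no network path to the
# instances is required, only IAM credentials in the CI environment.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Role,Values=vault" \
  --parameters 'commands=["/opt/scripts/configure-vault.sh"]' \
  --comment "CI-triggered internal service config" \
  --query "Command.CommandId" \
  --output text
```

One advantage of keeping this model over exposing APIs to the internet is that SSM gives you IAM-scoped access plus an audit trail of every invocation for free.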
https://redd.it/k78xfq
@r_devops
Any interest in SLA management?
I used to work for a place that basically managed a bunch of programs between hospitals and hospital vendors. Followed up on contracts, found deals where the hospital could get free drugs for certain patients, etc. You're wondering where I'm going with this...
When AWS went down before Thanksgiving, it got me thinking. I saw a lot of people being very vocal about how painful it can be to get credits and money back for failures to meet SLAs. I've had to do it myself, but usually I just felt like "that's why they pay me".
Is there any interest in a 3rd party managing your SLAs with your vendors?
I've got no idea exactly what *this* is right now, but I'd love to get some people's feedback on the idea, and if you're interested, [check out my website and sign up](https://slaslayer.com).
https://redd.it/k6co5d
@r_devops
Finding the IP/web host behind Cloudflare
Guys, I am new to DevOps and I keep getting spammed by this web design company on my website. Constantly.
I tried to look up their host so I could complain to them, but they are behind Cloudflare. Is there any way I can find their actual IP or host?
https://redd.it/k7fquz
@r_devops
Development Project Overview
We run quite a few projects, and all of them update their information pages in Confluence, which... well, I'd like to say works, but only for the ones that are high-profile and in production. I'm wondering if there is a tool that actually lets you browse the progress of a project in DevOps terms (so not traditional project management). What I'm after is a dashboard with pipeline data, service connections, uptime across sites and environments, resource utilization, and centralized links to project information. I did see something like this from a DevOps consultancy firm (a mockup? I don't know).
Does anything like this exist? Or is it something that teams build themselves?
https://redd.it/k783ps
@r_devops
Product preference between Datadog, Splunk, and Sumo? And why?
21 y/o college kid here, and I am looking to spend my winter break learning about devops. Title says it all. Would welcome any sort of perspective from people who are knowledgeable/work directly with these platforms. Thanks!
https://redd.it/k7i2cu
@r_devops
Career Advice! Fullstack Engineer transition to DevOps Engineer
I was wondering if anyone here has made the transition from Full Stack Engineer to DevOps Engineer. If so, what was the transition like, and what did you do to become a DevOps Engineer?
Currently I am a Full Stack Engineer with a solid understanding of Docker and a pretty solid understanding of AWS from my professional experience. I found these kinds of things more interesting than application development, and I'm seriously considering making a pivot in my career.
https://redd.it/k717xi
@r_devops