Need help researching and specifying company devops strategy
I work in a small company of ~20 employees, of which only 3 of us (soon more) are in the development department. I am responsible for the devops side of things alongside full-stack development; as we grow, I hope to be able to focus on devops.
I am currently researching the area in more depth in order to write an initial draft of considerations and descriptions for our near-future and long-term devops strategy. Below I have drafted the headlines I intend to cover, with initial thoughts and questions for each section. Please tell me if I am missing any:
# Workflows
This is the section I am most in doubt about how to approach.
I intend to describe a branching and release strategy based on trunk-based development. Any other resources to help build a deeper understanding would be welcome as well.
Here I will also describe continuous integration, delivery, and deployment. I feel I have an intuition about these, but I could really use some reading material to better my understanding, especially of how to handle the integration part.
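As a concrete illustration of the trunk-based flow (a minimal local sketch; the repository, branch, and tag names are arbitrary placeholders):

```shell
#!/bin/sh
# Trunk-based development in miniature: one long-lived branch ("main"),
# short-lived feature branches merged back quickly, releases tagged from trunk.
set -e

git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"

# Trunk is the single long-lived branch.
git checkout -q -b main
echo "v1" > app.txt
git add app.txt && git commit -q -m "initial commit on trunk"

# A short-lived feature branch: hours or days, not weeks.
git checkout -q -b feature/small-change
echo "v2" > app.txt
git add app.txt && git commit -q -m "small, reviewable change"

# Integrate back into trunk promptly (this is where CI would run).
git checkout -q main
git merge -q --no-ff feature/small-change -m "merge: small change"
git branch -d feature/small-change

# Releases are cut from trunk, e.g. by tagging.
git tag v0.1.0
git log --oneline
```

The point of the sketch is the cadence: branches live briefly, trunk is always releasable, and CI runs on every integration to trunk.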
# Infrastructure
We have our own server, mainly because of a great need for lots of disk space (we are using ~120 TB at the moment; WAV files in multiple iterations for more than 25K titles take up a lot of space).
The server itself is managed by another company; I only control the already-created virtual machines (Ubuntu).
We are currently hosting our application on a self-hosted Docker Swarm, but I am thinking we would be better off using a managed Kubernetes service instead of continuing to host our own swarm: as the complexity rises, it is getting more and more difficult to manage. Managed Kubernetes should also give us better scalability and maintainability. But because of our massive need for data storage, I don't think a pure cloud solution is feasible. Or am I missing some details?
# Monitoring
Humio WIP
# Security
WIP - I need resources for what I need to consider here.
https://redd.it/sdwx6p
@r_devops
Having a difficult time splitting traffic for one domain via cloudfront
I'm moving a legacy config from on-prem to AWS. The site was originally PHP, and a new React platform was later developed on the same domain name. Basically, what's happening is that there's a single nginx server which sends routes for the new React platform to the React app, and the rest get handled by PHP.
I'm trying to accomplish something similar in AWS. At first I thought I could use an ALB and split traffic between CloudFront and the PHP stuff as separate targets, but it looks like you cannot send traffic from the ALB to CloudFront (aside from a redirect).
So I did a bit more research, and it seems the recommended way is to put CloudFront first, with multiple origins, and route traffic based on behaviours.
I understand how this works, but I'm having a lot of trouble making it all work the way I want, mostly because React is a single-page index.html. We have other single-page React apps hosted in CloudFront/S3, and those are easy to deal with by setting the default root to index.html and setting up 404 and 403 error handling in CloudFront to redirect to index.html. Both the default root and the error pages apply everywhere, though; they are not per origin. So if I set an index.html default root, for example, all requests use it.
I'm wondering if anybody has done something similar before and has found a working solution to split traffic like this with CloudFront for a React site. Can it be achieved without the S3 bucket having static hosting enabled, and without bringing in any additional CloudFront/Lambda functions to modify the request?
origin #1 : cloudfront > oai > s3 (hosting disabled).
origin #2 : cloudfront > ALB > internal IP of PHP web server
For behaviours, I have set up the React routes first, so /react-route goes to the S3 origin, for example, and the default (the very last rule) is the * catch-all, which directs the rest of the traffic to the PHP web server.
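For reference, the two-origin layout described above can be sketched in Terraform roughly as follows (an illustrative fragment, not the poster's actual config; all names, path patterns, and referenced resources are placeholders):

```hcl
# Illustrative sketch of a CloudFront distribution with an S3 origin for the
# React app and an ALB origin for the legacy PHP site.
resource "aws_cloudfront_distribution" "site" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.react.bucket_regional_domain_name
    origin_id   = "react-s3"
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.react.cloudfront_access_identity_path
    }
  }

  origin {
    domain_name = aws_lb.php.dns_name
    origin_id   = "php-alb"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # React routes matched first. Note that default_root_object and
  # custom_error_response are distribution-wide, not per origin, which is
  # exactly the constraint described above.
  ordered_cache_behavior {
    path_pattern           = "/react-route/*"
    target_origin_id       = "react-s3"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }

  # Catch-all: everything else goes to the PHP web server.
  default_cache_behavior {
    target_origin_id       = "php-alb"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    forwarded_values {
      query_string = true
      cookies { forward = "all" }
    }
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }
  viewer_certificate { cloudfront_default_certificate = true }
}
```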
https://redd.it/sdyoxa
@r_devops
Is Kubernetes useful outside of Cloud environments?
Hi! I'm currently working on redistributing services from one server to a group of servers. Kubernetes sounded like a useful tool for this, since it would allow me to place every server inside the cluster and manage them quite easily. So I started reading about and practicing Kubernetes, but every example that showed up involved a cloud. Now I'm a little confused: is Kubernetes really useful for my problem?
https://redd.it/se035a
@r_devops
How Infrastructure as Code Should Feel
More and more, IaC seems to be the default approach to provisioning cloud infrastructure. But with that comes a risk that it is implemented in a "paint by numbers" way, as just something else to tick off when starting a new project. In this blog post I don't detail how to implement infrastructure as code, nor do I evangelize its benefits; instead, I describe how infrastructure as code should feel for those who already have it, and hopefully provide a path back to Nirvana for anybody who isn't realising the benefits it can bring.
How Infrastructure as Code Should Feel
https://redd.it/se269e
@r_devops
Seeking advice, recommendation
Hey guys,
I'm building a fairly simple/lightweight private app for a BigCommerce store.
Naturally, the app needs to be hosted and so I was looking for some recommendations, preferably AWS.
I'm virtually certain we would be able to stay within the confines of the free tier and I'm oscillating between Amazon EC2 and AWS Lambda.
Thanks for any feedback!
https://redd.it/se3enq
@r_devops
Came back to Devops after 10 years, so much changed but Jenkins is still the default CI/CD?!
Hi,
I'm a pretty experienced developer but new to modern Devops (used to do Devops but been out of the game for years), and I've been trying to choose a CI/CD tool. With so many other changes in the stack over the past few years, I was surprised that the default choice for CI/CD is still … Jenkins.
Several of my friends in DevOps told me that they started with Jenkins, switched to a commercial solution that seemed better, and then came back to Jenkins.
What I like about Jenkins:
Easy to get started -- has a good configuration UI, can ignore advanced features until you need them.
Powerful enough for complex projects, includes CaC.
Big community and lots of people writing good plugins.
Points against Jenkins:
You need to write your own build scripts.
You need to learn Groovy to use its CaC.
The UI just shows logs of your jobs. For example, when I build an environment using Terraform, I wish the UI showed me the results visually. (You can get this information from the Terraform logs, but it’s not seamless.)
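For reference, the Groovy configuration-as-code mentioned above usually takes the form of a declarative Jenkinsfile. A minimal illustrative sketch (stage names and shell commands are placeholders):

```groovy
// Illustrative declarative Jenkinsfile: build and test on every branch,
// deploy only from main. Commands are stand-ins for real build scripts.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'make build' }
    }
    stage('Test') {
      steps { sh 'make test' }
    }
    stage('Deploy') {
      when { branch 'main' }
      steps { sh './deploy.sh' }
    }
  }
  post {
    failure { echo 'Build failed - check the stage logs above' }
  }
}
```

The declarative syntax keeps most pipelines out of raw Groovy; scripted pipelines are where the Groovy learning curve really bites.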
If you've abandoned Jenkins, what made you do it?
If you've abandoned a commercial solution, what made you come back to Jenkins?
https://redd.it/se4ww6
@r_devops
On premise RTS confusion
I'm working on a real-time IoT system which will be deployed on premise, on a single virtual machine. Scaling and high availability are not concerns here; the actual device is our critical part, while the backend is more of a "nice to have". There will also be a small number of devices (<50), and our backend logic is not really that complex; we have about 6 subsystems.
We need to support full-duplex communication between the browser UI and those devices. Our backend is running on NestJS. Communication with the UI or the devices is fairly straightforward, but where I'm struggling right now is deciding how to build the communication backbone.
In a cloud environment I'd use a message broker for this; that way the logic is nicely decoupled, and we get a nice buffer and a pub/sub interface.
Here, I'm not quite sure what the optimal (or somewhat optimal) solution is, because of the following factors:
* people working on the project are quite inexperienced, so anything too complex would backfire on our ETA
* the virtual machine running this will not be that powerful; let's say it has 8 GB of RAM (maybe I'm overthinking this part, but installing some software might hog too much of the resources)
So the question is:
* does it make sense to add something like RabbitMQ here? (devices use gRPC over HTTP/2)
* do I just go Redis pub/sub?
* or just good old observer pattern?
https://redd.it/se68zg
@r_devops
Can anyone give an ELI5 of this article?
https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at-chick-fil-a-scale-7b0607bd3541
I just started and want to get a better grasp of the DevOps world. This article is really interesting, but I feel like I don't understand how they use the technology.
Could anyone provide me with some information about how it works?
https://redd.it/se7jca
@r_devops
Learning Devops - Need help
Hey people,
Why do some of the microservices get a service endpoint automatically, but others do not?
The cluster was created by Terraform with an ELB and private and public subnets. I'm also making use of external-dns to manage the public DNS zone for my application's domain. The cluster is based on AWS EKS.
https://imgur.com/t020i6F
I attached a picture for your reference.
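A likely explanation is that external-dns only publishes DNS records for Services that actually expose an external endpoint (type LoadBalancer, or an Ingress) and carry its hostname annotation; plain ClusterIP services get nothing. An illustrative manifest (the hostname, names, and ports are placeholders):

```yaml
# Illustrative: a Service that external-dns would pick up and publish.
apiVersion: v1
kind: Service
metadata:
  name: my-microservice
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api.example.com
spec:
  type: LoadBalancer        # ClusterIP services get no external endpoint
  selector:
    app: my-microservice
  ports:
    - port: 80
      targetPort: 8080
```

Comparing the working and non-working services' `type` and annotations is usually the quickest way to spot the difference.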
https://redd.it/se8l0b
@r_devops
It looks like Docker networking is somehow tied to the order in which containers boot
I had 2 cases where it was definitely a fault of container boot order.
First happened when I ran
`docker network create some-shared-network`
and created two or more projects with docker-compose that reused the external network to communicate with each other.
This didn't work after I restarted my machine, and it is most likely related to the order of the spun-up containers; service B requires service A to start first in order to be visible.
Now I had a similar problem with my self-hosted Jira that could not communicate with the database, even though both were in the same stack (in the same non-external network).
I had to scale the app service down to 0 instances and then scale it back up to the previous value... recreating the stack didn't help... and suddenly it noticed the presence of the database...
Docker, what the heck
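One common mitigation is to make the startup dependency explicit instead of relying on boot order, using a healthcheck. A sketch in compose syntax (image and service names are placeholders):

```yaml
# Illustrative: the app only starts once the database reports healthy,
# rather than whenever the daemon happens to boot it.
services:
  db:
    image: postgres:14
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: my-app:latest
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped   # retry if the dependency is briefly unreachable
```

Note that swarm stacks (`docker stack deploy`) ignore `depends_on`, so in swarm the usual approach is restart policies plus connection-retry logic in the application itself.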
https://redd.it/se95az
@r_devops
Buddy: It just .Works
A few months ago my team and I set out to replace an existing WordPress site with a Gatsby.js PWA. We originally had a shared hosting plan, but as our Gatsby site became more and more fleshed out, deployments to this hosting provider became increasingly difficult. Our original hosting platform was geared more towards WordPress hosting and did not come with CI/CD customization out of the box, so we ended up getting our own dedicated server on Cloudways, and that's where Buddy comes into the picture.
The perfect "middleman," Buddy is a seamless fit for our Gatsby application. Our first pipeline began with a staging environment and involves 3 steps: as soon as the associated branch on GitHub receives a new push, Buddy prepares our environment by fetching and uploading the new files to our server. Finally, according to our package.json, we are able to trigger node installations and a Gatsby build process to deploy our site. I just sit back and watch the pipeline logs to make sure all is well, and Buddy will just do its thing. It just works. Gone are the days of having to SSH into your server and manually do everything yourself!
And the best part? The free tier Buddy offers is more than generous enough to suit your every need. I highly recommend checking them out; it's worked wonders for someone like me who is more front-end-oriented and quite new to DevOps.
https://redd.it/se9qoa
@r_devops
Moving away from Chef // Data bag alternatives?
Hi Everyone,
Our team is looking to move away from Chef, mostly due to cost. We use data bags pretty heavily, and I'm curious whether there are any cost-effective alternatives. We've considered ansible-vault, HashiCorp Vault, etc. Just curious if there are any open-source technologies we can leverage.
We plan on storing secrets <keys, etc>, so some sort of encryption would be ideal.
Our team is big on python and go; so anything along those lines would be awesome.
Thanks in advance.
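Not a replacement for a proper vault, but as a sketch of the underlying idea (symmetric file encryption, which is what encrypted data bags and ansible-vault do under the hood), here is a minimal openssl round trip; the file names and passphrase are purely illustrative, and in practice the passphrase would come from a vault or KMS, not a local file:

```shell
#!/bin/sh
# Encrypt/decrypt a small secrets file with a symmetric key.
set -e

printf 'api_key: abc123\n' > secrets.yml
printf 'correct horse battery staple' > passphrase.txt

# Encrypt (AES-256-CBC with PBKDF2 key derivation).
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in secrets.yml -out secrets.yml.enc -pass file:passphrase.txt

# Decrypt.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in secrets.yml.enc -out secrets.decrypted.yml -pass file:passphrase.txt

cmp secrets.yml secrets.decrypted.yml && echo "round-trip OK"
```

Tools like sops add the useful part on top of this: key management (KMS/GPG/age) and per-value encryption so the files stay diffable in Git.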
https://redd.it/secpdo
@r_devops
Terraform: should I use a new S3 bucket and DynamoDB table for each TF project?
Hi, I'm using Terraform for learning purposes and want to upload my Terraform project to Git (excluding tfvars and .tfstate) and store the state in an AWS remote backend.
I'm not sure: should I create a new bucket for each new Terraform project?
Maybe I could create a 'terraform-tfstate-bucket' and upload each project's tfstate to a different folder.
Seems cleaner and more centralized. What do you think?
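The single-bucket layout is indeed a common pattern: one state bucket and one DynamoDB lock table shared by all projects, with a unique `key` path per project. An illustrative backend block (the bucket, region, and table names are placeholders):

```hcl
# Illustrative: shared state bucket and lock table, distinct key per project.
terraform {
  backend "s3" {
    bucket         = "terraform-tfstate-bucket"     # shared across projects
    key            = "project-a/terraform.tfstate"  # unique path per project
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"              # one lock table is enough
    encrypt        = true
  }
}
```

The lock table is keyed by state path, so a single table safely serializes runs across all projects.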
https://redd.it/se73ge
@r_devops
Realistically talking, which CI/CD tool to use if starting from zero?
Hi,
It's me again. So, I have this situation: one of our clients does not like managed stuff, so we are using "on-prem" (on our EC2 instances) Atlassian Server products, the whole pack: Jira, Bitbucket, Bamboo, Confluence, even Crowd. Now, as some of you may know, Atlassian is ending feature development for their Server line (Data Center is way too pricey for us) in February 2022 and will end all support in 2024, so I talked to my bosses and they agreed to start looking for an alternative.
So, what are the real options in 2022? I looked into ligurio's awesome-ci list of continuous integration services, and there are quite a few: Abstruse CI, Agola, Buildkite, Circle CI, Cirrus CI, CDS, Concourse CI, flow.ci, GitLab, Kraken CI, Semaphore, TeamCity, etc.
I don't have time to test all of them to see which one is best. I'm leaning towards GitLab because of all the features (and if they like it enough, maybe we'd migrate our other projects from Bitbucket Cloud to GitLab Cloud). Something I grew to hate about Bamboo is its lack of features (or having to pay 5k a year for a plugin that does something) and having to write everything I need in bash.
Our pipelines are mostly for NodeJS backends and Angular frontends (with webpack), plus a few custom Docker images that I need to build. For now, our deploys consist of SSHing into our DEV or QA hosts and installing our packages with NPM (something I want to work on too).
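A pipeline like the one described could look roughly like this in GitLab CI (an illustrative sketch; the image tags, script commands, branch names, and deploy script are assumptions, not the poster's setup):

```yaml
# Illustrative .gitlab-ci.yml for a Node build/test/deploy flow.
stages: [build, test, deploy]

build:
  stage: build
  image: node:18
  script:
    - npm ci
    - npm run build
  artifacts:
    paths: [dist/]

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

deploy_qa:
  stage: deploy
  image: node:18
  script:
    - ./scripts/deploy.sh qa   # e.g. ssh + npm install, as done today
  environment: qa
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
```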
It doesn't have to be free (we are already paying a good amount to Atlassian), and it should support SAML SSO (with Okta at least).
Does anybody have experience with lesser-known CI/CD tools and want to share?
Please don't say Jenkins
Thanks
Alex
https://redd.it/sec4g8
@r_devops
How to select a Network Gateway for your Private Cloud
We created this blog post to help answer some very common questions about how to route traffic into your Kubernetes clusters:
https://www.netris.ai/how-to-select-a-network-gateway-for-your-private-cloud/
Disclaimer: I work for Netris.
https://redd.it/se8sqg
@r_devops
What have you made so far that you're "proud of", so to speak?
The title sounds like a "juniorish" question, but I'll leave it like that. = )
https://redd.it/seiisw
@r_devops
"The fact remains that tooling is behind the culture. To date, if there's a major incident that needs rapid response, developers often feel powerless because they don't have a connection to SRE tools that are heavily infrastructure-oriented." Agree or disagree?
I came across this article and wanted to get a feel for whether or not people agree with this statement by Itiel Shwartz of Komodor. I'm fairly new in my DevOps journey, so I would appreciate some discussion to learn from.
Source: https://vmblog.com/archive/2021/12/13/komodor-2022-predictions-sre-tools-will-start-speaking-the-language-of-developers.aspx#.YdxWBBNBz0q
https://redd.it/sekaam
@r_devops
Not much devops but for you it's installed on your machine
I'm looking for an alternative to Docker Desktop. I'm not sure if there are alternatives that are compatible with the docker CLI. If there are none, that's ok. Which alternative did you like best?
https://redd.it/sel0w5
@r_devops
How to get error messages from Puppet to show up in Jenkins?
I have a pipeline that restarts artifactory.service when there is a change to a configuration file. The job is failing, but I can only see the error message in Puppet. How can I get this error message to display in the Jenkins pipeline logs?
I haven't been able to find it in the task endpoints doc.
This was the error, btw:
{"status": "failure", "error": {"message": "Job for artifactory.service failed. See \"systemctl status artifactory.service\" and \"journalctl -xe\" for details.", "kind": "bash-error", "details": {}}}
But I want to see the error message for any error that pops up from Puppet in the Jenkins pipeline.
https://redd.it/se7hu8
@r_devops
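A sketch of one way to surface this: have the Jenkins `sh` step capture the Puppet task's JSON response and echo the fields you care about, since anything the step writes to stdout lands in the pipeline console log. The fetch of the response is assumed to have happened already (the JSON is inlined here for illustration), and `response.json` is a made-up file name.

```shell
#!/bin/sh
# Sketch of a Jenkins "sh" step body that surfaces a Puppet task error in
# the pipeline console log. The JSON is inlined for illustration; in a
# real step, an earlier request against the orchestrator would have
# written response.json (the file name is illustrative).
cat > response.json <<'EOF'
{"status": "failure", "error": {"message": "Job for artifactory.service failed.", "kind": "bash-error", "details": {}}}
EOF

# Pull the fields out with sed so the step has no jq dependency
STATUS=$(sed -n 's/.*"status": *"\([^"]*\)".*/\1/p' response.json)
MESSAGE=$(sed -n 's/.*"message": *"\([^"]*\)".*/\1/p' response.json)

# Anything echoed here ends up in the Jenkins pipeline log
echo "Puppet status: $STATUS"
echo "Puppet error:  $MESSAGE"
```

In a real pipeline you would follow the echo with an `exit 1` when `$STATUS` is `failure`, so the stage itself is marked failed as well.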
HA Vault/Consul Setup
Hello, almost all Vault/Consul HA setup guides use a Consul client agent on the Vault side, but I see it is also possible to connect Vault directly to the Consul server. Is that the wrong way? Are there downsides? (Vault and Consul are on the same server under Docker.)
https://redd.it/seocnt
@r_devops
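For reference, the difference between the two topologies shows up only in the `address` of Vault's `consul` storage stanza. A sketch, with illustrative hostnames (a real config would contain only one `storage` stanza; both are shown for comparison):

```hcl
# Via a local Consul client agent (the usual guide setup)
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

# Pointing Vault directly at a Consul server instead
storage "consul" {
  address = "consul-server.example.internal:8500"
  path    = "vault/"
}
```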
Automated testing, development of a substantial number of Ansible roles
We have a substantial inventory of in-house developed Ansible roles (almost 70 and that number is rapidly growing).
They represent an essential set of tools that we use every day to manage existing infrastructure and deploy new infrastructure for our hosting customers, so we try hard to keep them updated, tested, and in working condition.
So far we have used `molecule` with Docker as a driver to develop them locally, and GitHub Actions to automate their scheduled testing, as we have tests for dozens of scenarios for each role.
However, doing it this way is really starting to become a problem, especially on the development side, because of `systemd` (we are using Geerling's base Docker images), which just does not work properly all the time, plus some other molecule-related bugs we need to deal with.
Often, our automated tests on GitHub Actions fail just because of Docker, even though roles themselves are functioning perfectly fine.
With all of that said, I feel we need to make some changes to our development and testing, but I am unsure what a proper replacement for `molecule+docker` could be. To make things worse, we need something that works on both M1 and x64 architectures (for development), plus GitHub Actions as our platform of choice for automation.
We were considering switching to Vagrant for development, but Vagrant does not work on M1 (because of VirtualBox). Multipass would do the job; however, there is no provider/driver for Multipass (plus we always develop our roles to work on Ubuntu, Debian and CentOS, so Multipass would not be an ideal candidate).
We also have a substantial number of Proxmox servers that we could use to bring up instances, do automated deployment and testing, and then bring them down, but molecule does not support Proxmox either, and the documentation is quite poor.
So, does anyone have a suggestion how we can improve all of this?
EDIT: Just to make it clear, I am mostly trying to find out whether there is a better way of doing all of this, given the number of roles we have. I know we can always build our own Docker images to fix the systemd issue and continue with what we were doing so far.
https://redd.it/seowmc
@r_devops
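On the systemd-in-Docker angle specifically, the usual fix is a custom systemd-enabled image plus the right runtime settings in the molecule scenario. A sketch of what the platform entry might look like (the image name is a placeholder for an image you would build and maintain yourself):

```yaml
# molecule.yml scenario sketch for running systemd inside a Docker container
driver:
  name: docker
platforms:
  - name: instance
    image: ourorg/ubuntu2204-systemd:latest   # placeholder custom image
    command: /lib/systemd/systemd             # PID 1 must be systemd
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    cgroupns_mode: host
    privileged: true
provisioner:
  name: ansible
```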