GraphQL vs REST - a low-code API showdown
REST and (the newer) GraphQL APIs are the core technologies behind the vast majority of today's integrations. These APIs allow external developers to tap into the functionality of the major platforms and build custom functionality to suit their needs.
https://linx.software/graphql-vs-rest-a-low-code-showdown/
https://redd.it/rg8ivm
@r_devops
"infrastructure in a bottle"
Hi,
In most of the jobs I've worked, there's always the complex issue of testing new code. We always have a "dev" copy of the infrastructure, but it's never in sync with prod, and it's usually being used by someone else to experiment with their next push.
I am looking for something like VirtualBox, but for an entire infrastructure: a single command to spawn a whole fleet of mock machines, with networks, DNS, volumes, etc., so I can do end-to-end testing on one sufficiently powerful machine.
An infrastructure in a bottle.
I was thinking about Kubernetes, but before I dive into a 300-page book on the subject, I figured it doesn't hurt to ask here first.
Does anyone know a language for describing infrastructure that can just as easily be deployed to prod AND deployed locally?
By prod I don't mean AWS or any other provider in particular. On the contrary, I'm happy to set up my own machines if that gives me this single use case.
Kind Regards
https://redd.it/rg9kr3
@r_devops
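One lightweight approximation of this, assuming the stack can be containerized, is Docker Compose: a single file describes the services, networks, and volumes, and the same file runs on a laptop or a production host. A minimal sketch (service names and images are made up for illustration):

```shell
# Sketch: describe a small "fleet" (app + db, a private network, a volume)
# in one file, then bring it all up with a single command.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine
    networks: [backend]
  db:
    image: postgres:14-alpine
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks: [backend]
networks:
  backend: {}
volumes:
  dbdata: {}
EOF

# Single command to spawn the whole thing (requires Docker):
#   docker compose up -d
grep -c 'image:' docker-compose.yml   # sanity check: two services defined
```

For mock machines closer to real VMs (own kernel, DNS, etc.), the same "one file, one command" idea shows up in Vagrant and in kind for Kubernetes clusters.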
Where are you finding high paying jobs?
I keep reading that DevOps and SRE jobs pay in the $300k+ range, but I rarely see any at that level. I'm located in Canada and looking for a remote US job, and I'm seeing almost none in that salary band. Where are people finding these high-paying jobs? I currently make decent enough money, but nothing close to what others say they make. Advice?
https://redd.it/rg9vi9
@r_devops
Improving Application Availability with Pod Readiness Gates
Hi /r/DevOps,
Today I published an article titled "Improving Application Availability with Pod Readiness Gates", where I explain how to use Kubernetes Readiness Gates to create custom Pod status conditions and to implement complex readiness checks in places where liveness and readiness probes just aren't good enough.
Here's the link: https://towardsdatascience.com/improving-application-availability-with-pod-readiness-gates-4ebebc3fb28a
Feedback is very much appreciated!
https://redd.it/rg929a
@r_devops
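For readers who haven't seen the feature: a readiness gate is declared in the Pod spec and names a custom condition type that an external controller must set to True before the Pod counts as Ready, on top of the normal probes. A minimal sketch of the manifest side (the condition name and the controller behind it are hypothetical):

```shell
# Sketch: a Pod that only becomes Ready once an external controller
# sets the custom "www.example.com/load-balancer-registered" condition
# to True -- in addition to the normal readiness probe passing.
cat > pod-with-gate.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  readinessGates:
    - conditionType: "www.example.com/load-balancer-registered"
  containers:
    - name: web
      image: nginx:alpine
      readinessProbe:
        httpGet: {path: /, port: 80}
EOF
# Apply with: kubectl apply -f pod-with-gate.yaml
```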
How to get the Kafka confluent developer or administrator certification ?
Hey everyone, I just started working as a DevOps engineer and have been working with Kafka for about a month now. To make sure that I really understand Kafka and am fully autonomous while using it, I've decided to take the official Confluent certification within 2 months (and also to use it as an argument to get better job offers).
I have 2 questions:
- The certification is only valid for 2 years; do I need to pay again to extend it?
- Do you have any tips for passing the certification?
https://redd.it/rgb69d
@r_devops
How do you test your cloud-based resources if they're written as IaC? Do you apply the same testing-pyramid concepts?
I actually have two questions here,
The first is on automated testing for the IaC itself, as I've recently started reading about different tools that can do this (e.g. Terratest, which requires Go knowledge, or RSpec, which requires some Ruby knowledge).
I'm interested to hear more about your implementations for testing IaC, how they benefited you/your team, and the ROI of applying them.
The second question is how you do performance and stress testing; I'd like to hear real-world experience with it.
https://redd.it/rgdtby
@r_devops
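If Go or Ruby is a barrier, a cheap first rung on the testing pyramid is static validation in CI — formatting, syntax, and a plan diff — before any Terratest-style integration tests. A sketch of such a stage (the script name is made up; the Terraform flags are the standard ones):

```shell
# Sketch: a "unit test" stage for Terraform in CI.
# Each command exits non-zero on failure, failing the whole stage.
cat > ci-validate.sh <<'EOF'
#!/bin/sh
set -e
terraform fmt -check -recursive     # style: fail on unformatted files
terraform init -backend=false       # fetch providers, skip the state backend
terraform validate                  # syntax + internal consistency
terraform plan -detailed-exitcode   # exit 2 = pending changes, 0 = no drift
EOF
chmod +x ci-validate.sh
```

Teams that go further typically add policy checks (e.g. tflint, checkov) between `validate` and `plan`, and only then invest in Terratest, which actually applies the code against a sandbox account.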
Noob Question: Is it possible to automate the creation and configuration of VMs in Azure?
Hello there!
Currently, I'm working with a small team on a huge project; because of that, we need to do pretty much everything on the project ourselves (backend, frontend, support, business stuff, and DevOps).
I've been thinking about ways to get rid of support tasks, which can be massive sometimes. One of them is creating and configuring new virtual machines for clients on request.
What we currently do:
- Clone a machine (it can be new or one that already exists)
- Enter the VM and execute a script that configures it:
  - Change the hostname
  - Change the username and password
  - Update a few certificates
- Enter the server and add the new machine to the list
So can it be automated, or some part of it?
Thanks!
https://redd.it/rgelru
@r_devops
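Yes — every step listed can be automated. The clone/create step maps to the Azure CLI (or an ARM/Bicep/Terraform template), and the in-guest steps (hostname, users, certificates) map to cloud-init or a run-command, replacing the manual post-clone script. A sketch of the create step (all resource names are hypothetical, and the command is only echoed here since it needs `az login`):

```shell
# Sketch: create a VM non-interactively with the Azure CLI.
# Names are made up; the command is echoed rather than executed.
RG="clients-rg"
VM="client-vm-042"
CREATE_CMD="az vm create --resource-group $RG --name $VM \
  --image Ubuntu2204 --admin-username clientadmin \
  --generate-ssh-keys --custom-data cloud-init.yml"
echo "$CREATE_CMD"
# cloud-init.yml would set the hostname, users, and certificates
# on first boot, so nobody has to log in and run a script by hand.
```

The "add the machine to the list on the server" step can then be a final step of the same script, making the whole request a single command (or a pipeline triggered by a ticket).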
What is really considered Junior/Mid/Senior SRE?
I've seen job posts for Junior/Mid/Senior SREs on LinkedIn, all with overlapping descriptions.
From a seasoned SRE's point of view, what is the real skill set and experience each one should have?
Thanks.
https://redd.it/rge0de
@r_devops
Can this be done with AWS free tier services?
Long-time full-stack web developer staring at the prospect of using AWS for the first time.
I have a client (who happens to be an Amazon seller) looking to automate a few tasks, ideally using AWS Free Tier services.
The solution will be a small collection of PHP scripts (not a website, just the scripts and their Composer dependencies) that call various third party APIs (including UPS Quantum View and Dropbox) and send emails, all run as cron jobs at most hourly.
I could deploy on traditional hosting in a few minutes, but the AWS services are an obtuse menagerie.
From what I can gather, I'll need Elastic Beanstalk at minimum, but beyond that I'm lost.
I'm certain this can be done with AWS, but which services do I need?
Or should I steer the client toward hosting this on his GoDaddy server? Not ideal, but it's what he has.
https://redd.it/rgijdl
@r_devops
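For what's described — a handful of PHP scripts on an hourly schedule — a single small EC2 instance (t2.micro/t3.micro is free-tier eligible for the first 12 months) with plain cron is arguably the whole architecture; Beanstalk adds little for a no-website workload. A sketch of the cron side (script names and paths are made up):

```shell
# Sketch: run each PHP script on its own hourly schedule via cron
# on one small instance. Install with: crontab jobs.cron
cat > jobs.cron <<'EOF'
# m h dom mon dow  command
0 * * * *  cd /opt/client-scripts && php sync_tracking.php >> /var/log/sync.log 2>&1
30 * * * * cd /opt/client-scripts && php send_emails.php   >> /var/log/mail.log 2>&1
EOF
grep -c 'php' jobs.cron   # two jobs defined
```

The main AWS-specific consideration is email: sending from EC2 usually means going through SES (or a third-party SMTP relay) rather than raw port 25.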
Do I have to add backend port to security groups in aws ec2 to let the frontend talk to it?
I have a website hosted on AWS EC2. My backend runs on port 8000 and my frontend on 3000. I made my IP public so people can access the website, added 3000 to the security group, and now people can see the frontend UI. However, I found that I also have to add backend port 8000 to the security group; otherwise the site only shows the frontend, because it can't talk to the backend.
This is a bit confusing to me, because to my knowledge, if I expose only the frontend port, the frontend will call 8000 by itself within the EC2 host, just like localhost works there. But now I need to expose 8000 to the public to make the website fully functional. Is exposing both frontend and backend ports really the way to host a website, or is there another way? Any comments or suggestions would be greatly appreciated! :)
https://redd.it/rgk0t0
@r_devops
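The key point is that the API calls come from the visitor's browser, not from inside the EC2 host, so the backend must be reachable from outside somehow. The usual alternative to opening port 8000 is a reverse proxy: expose one public port, and have the proxy forward API paths to the backend over localhost, so 8000 never needs a security-group rule. A minimal nginx sketch (the ports are from the post; the `/api/` path split is an assumption about how the app routes requests):

```shell
# Sketch: one public port (80); /api/* is proxied to the backend on
# localhost:8000, everything else to the frontend on localhost:3000.
cat > reverse-proxy.conf <<'EOF'
server {
    listen 80;

    location /api/ {
        proxy_pass http://127.0.0.1:8000/;   # backend stays private
    }
    location / {
        proxy_pass http://127.0.0.1:3000/;   # frontend
    }
}
EOF
# With this in place, only port 80 (or 443) goes into the security group.
```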
Create a symbolic link (also symlink or soft link) in Linux - Ansible module file
How to create an "example" symbolic link that references "/proc/cpuinfo" in Linux with Ansible. Simple Ansible code and verification included.
https://youtu.be/uZF671e924k
#ansible #symlink #softlink #file #linux #filesystem
https://redd.it/rgkwhz
@r_devops
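For reference, the task from the video boils down to a single `ansible.builtin.file` invocation. A sketch (the destination path is an assumption; the video's exact paths may differ):

```shell
# Sketch: a playbook creating /tmp/example as a symlink
# pointing at /proc/cpuinfo, using the "file" module.
cat > symlink.yml <<'EOF'
- name: Create a symbolic link
  hosts: all
  tasks:
    - name: example -> /proc/cpuinfo
      ansible.builtin.file:
        src: /proc/cpuinfo
        dest: /tmp/example
        state: link
EOF
# Run with: ansible-playbook -i inventory symlink.yml
# Verify on the target with: ls -l /tmp/example
```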
PagerDuty is down
https://status.pagerduty.com/
Edit: Looks like it's coming back online now. Will delete post when all clear.
https://redd.it/rgmhmu
@r_devops
How do you handle SSL certs for dynamic sub-subdomains like feat321.dev.example.com?
I’m in the middle of creating a way for our team to have preview apps for open Pull Requests.
We have a commercial wildcard certificate for *.example.com. As you all know, this wildcard only works for first-level subdomains like dev.example.com.
We agreed to use domains like feat321.dev.example.com for the preview apps. With the restriction that another commercial wildcard cert just for this use case is too expensive: how do you tackle this problem?
Do you use Let’s Encrypt certs for the specific domains, even if you have to create multiple ones per hour and maybe even delete them again within a few minutes?
Or do you use a Let’s Encrypt wildcard cert, which is cumbersome to renew due to the DNS TXT record challenge that has to be re-done every 3 months?
Or do you come up with some other domain structure, like dev-feat321.example.com, for the sake of simplicity?
https://redd.it/rgkjtp
@r_devops
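Worth noting: the renewal pain of DNS-01 largely disappears if the DNS provider has a certbot plugin, because the TXT record is then created and removed automatically on every renewal. A sketch using the Route 53 plugin as an example (substitute your provider's plugin; the email address is made up, and the command is only echoed here since issuance needs real DNS credentials):

```shell
# Sketch: automated wildcard issuance for *.dev.example.com via DNS-01.
# Requires the certbot-dns-route53 plugin and AWS credentials;
# the command is echoed rather than run.
CERTBOT_CMD='certbot certonly --dns-route53 -d "*.dev.example.com" \
  --non-interactive --agree-tos -m ops@example.com'
echo "$CERTBOT_CMD"
# A cron job or systemd timer running "certbot renew" then handles
# the every-3-months renewal with no manual TXT record edits.
```

That makes the *.dev.example.com wildcard option viable for ephemeral preview apps without issuing (and rate-limiting yourself on) one cert per branch.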
What can I do to do more DevOps things?
I work for a company where I'm the sole "server guy". Fully remote, all of our infrastructure is in Digitalocean (and a few clients in AWS). All servers are managed by me, deployed by me, backed up by me and so on.
We have a very strong dev team, so I don't need to help them much; I'm not a dev myself, I can help understand some problems from a more out of the box perspective but that's it. They pretty much handle themselves. When shit hits the fan and they don't know what to do they either go to their lead dev, the company owner, or me; when the lead dev doesn't know how to handle it he goes to company owner; I'm the last resort when it's not a development challenge.
What I do daily:
- Orient devs on what to focus on (project management), test their work, give feedback, and write new vectors for them to focus on in the next day/push.
- Solve problems the devs don't know how to / don't have access to solve, like installing libraries, reconfiguring PHP, or setting up Apache/Nginx/Elasticsearch/whatnot to handle the workload.
- Solve management requirements, like scripting backups and maintenance, or writing data-normalization scripts to filter what devs need to feed to their code to attain client objectives.
- Solve "lack of knowledge" issues, where devs don't know how to handle a certain workload and I know a service/software that does just that.
- Solve "lack of creativity" issues, where a dev doesn't know how to handle a problem and I can think of a straightforward way to solve it but can't code the solution myself.
- Research when even the company owner doesn't know if something is possible.
There's no need for Terraform/Ansible in our company because 99.9% of our work is web development, so 99% of servers use the same structure (PHP, Apache, yada yada). I handle most of our staging environment on a single big server (instead of several smaller ones, to save on the cost of operation) and deploy to a tailored size when it goes live.
There's also not much leeway to get involved in CI/CD because, like I said, we do mostly webdev, so no "new features all the time". I'd bet 50% of our workload is Laravel and around 30% Magento.
Fact is that I earn 20 USD/h and I do have a lot of leeway to work more hours a day. My kids need a special-needs school next year, so I'm looking for tips on what I could do to put in more hours at my job and also bring more value to the company. Make things better.
I'm mostly reactive to events in the company, and that gets me around 40 to 60 hours a month; I would love to see that reach 200.
What would you guys suggest?
https://redd.it/rgnsgm
@r_devops
Can't SSH into my own host
I created an `appuser` on Linux, then generated SSH keys with the right permissions.
(operations via appuser)
$ ls -la /appuser/
...
drwx------ 2 appuser appuser 20 1 2 01:01 .ssh
$ ls -la /appuser/.ssh
drwx------ 2 appuser appuser 80 1 5 01:02 .
drwxr-x--- 10 appuser appuser 4096 1 5 01:02 ..
-rw-r--r-- 1 appuser appuser 437 1 5 01:02 authorized_keys
-rw------- 1 appuser appuser 1675 1 5 01:02 id_rsa
-rw-r--r-- 1 appuser appuser 437 1 5 01:02 id_rsa.pub
-rw-r--r-- 1 appuser appuser 2670 1 5 01:02 known_hosts
I copied the id_rsa.pub key into authorized_keys, then ran
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
In the `/etc/ssh/sshd_config`:
#PasswordAuthentication yes
When I SSH to myself:
$ ssh (self IP)
it requests a password:
appuser@(self IP)'s password:
Why? Which permission is wrong?
https://redd.it/rgorur
@r_devops
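The listed permissions actually look acceptable (700 on ~/.ssh; 644 on authorized_keys is tolerated by default), so the next step is usually to confirm the baseline and then ask sshd itself why it rejected the key. A sketch, with the permission baseline demonstrated on a scratch directory (substitute the real home; the follow-up commands are shown as comments since they need the real host):

```shell
# Sketch: the permission baseline sshd's StrictModes expects,
# demonstrated on a throwaway directory.
HOMEDIR=$(mktemp -d)
mkdir -p "$HOMEDIR/.ssh"
touch "$HOMEDIR/.ssh/authorized_keys"
chmod 755 "$HOMEDIR"                       # home: not group/world writable
chmod 700 "$HOMEDIR/.ssh"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"
stat -c '%a %n' "$HOMEDIR/.ssh" "$HOMEDIR/.ssh/authorized_keys"

# If the real permissions check out, debug on the actual host:
#   sshd -T | grep -Ei 'pubkeyauthentication|authorizedkeysfile|strictmodes'
#   journalctl -u sshd -f     # or tail -f /var/log/auth.log, while retrying
#   ssh -v appuser@<self IP>  # client-side view of which keys are offered
#   restorecon -R /appuser/.ssh   # SELinux systems only, after hand-creating .ssh
```

The server-side log line (from journalctl/auth.log) almost always names the exact reason — bad ownership, bad mode, or a key that was never offered.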
I get to pick 1 online course for professional development, what should I pick to enhance my employability?
I'm a build/release engineer whose main daily tools are Jenkins, Docker, and various AWS services. I can also code in Python, Bash, and Groovy. As alluded to in the title, this year my company has offered to pay for one online course of my choice to level up my general DevOps skills. Since the tech I work with is not very cutting-edge, I'm not in a position to know, so my question for the community is: what do you think would be the single highest-leverage investment I could make to improve my employability in the current market (I'm not actively job hunting at the moment, but maybe in the new year)? Probably something new or in high demand, but open to all ideas!
https://redd.it/rgmt8e
@r_devops
A Pipeline that creates pipelines?
Hello,
Perhaps I am at the brink of insanity, because at face value this does seem ridiculous, and I've been spinning my wheels for weeks. But hear me out: in a fast-growing organization with new projects/modules being spun up every day with the exact same deployment process, I keep asking myself, how can I automate the creation of these pipelines? Specifically with AWS CodePipeline/AWS CodeBuild.
There is no way to scan github and create these pipelines automatically with AWS. So I was thinking to myself, how could I make this possible?
So at face value, AWS treats (most) everything as a resource. Whether that be an API Gateway, an ECR, EC2, Codebuild, CodePipeline, they are all just "resources".
So I was thinking, what's to prevent me from creating a pipeline, that, well creates a resource, specifically another CodePipeline Resource?
The basic principle is this -- and feel free to call this ridiculous because it most likely is.
Please note, this was quickly written and obviously there are some intricacies that need to be refined, but here's the quick and dirty rundown:
I set up a Lambda run on a cron schedule (or, alternatively, triggered manually) that scans our organization for repositories. As it scans, it searches each repository for a Terraform file that references a Terraform module handling the setup, stages, etc. of the pipeline we want created. The base Terraform file in the repository contains just boilerplate such as the repository source URL, buildspec, type of deployment, and any additional non-secret env variables.
If a repository contains the Terraform file, the Lambda checks out the latest version, sets the pipeline source to the S3 key matching the repository's name, and stores the source code in the S3 bucket under the project's name, which triggers the CodePipeline to execute. The pipeline then takes the source and, in the build step, executes the Terraform script, which sets up the pipeline for that repository. If the pipeline has already been created and the Terraform script has no changes, nothing is updated. If the Terraform file has changed, the pre-existing pipeline is updated.
From there bam, we have a pipeline that has been automatically created without any manual work.
This is obviously quite a novel concept -- but does it seem absolutely ridiculous, or could it actually be a feasible solution?
https://redd.it/rglrtv
@r_devops
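Mechanically, the discovery half of this is small. A sketch of the scan loop, mocked with local directories standing in for repository checkouts (in the real version the listing would come from the GitHub API or `gh repo list`, and the file name `pipeline.tf` is a made-up convention):

```shell
# Sketch: find repos that opted in by committing a pipeline.tf,
# mocked here with local directories in place of real checkouts.
mkdir -p repos/service-a repos/service-b
touch repos/service-a/pipeline.tf          # service-a opted in
for repo in repos/*/; do
  if [ -f "${repo}pipeline.tf" ]; then
    echo "provision pipeline for: $(basename "$repo")"
    # real version: upload source to the trigger bucket, e.g.
    #   aws s3 cp source.zip s3://pipeline-sources/<name>.zip
    # then terraform apply the module referenced by pipeline.tf
  fi
done
```

This "pipelines as a Terraform module, discovered by convention" pattern isn't actually that exotic; it's roughly what GitLab's auto-created pipelines and tools like Atlantis do under the hood.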
Hello,
Perhaps I am at the brink of insanity, because at face-value this does seem ridiculous and I've been spinning my wheels for weeks. But hear me out, in a fast growing organization with new projects/modules being spun up everyday with the exact deployment process, I was thinking to myself, how can I automate the creation of these pipelines? Specifically with AWS CodePipeline/AWS Codebuild.
There is no way to scan github and create these pipelines automatically with AWS. So I was thinking to myself, how could I make this possible?
So at face value, AWS treats (most) everything as a resource. Whether that be an API Gateway, an ECR, EC2, Codebuild, CodePipeline, they are all just "resources".
So I was thinking, what's to prevent me from creating a pipeline, that, well creates a resource, specifically another CodePipeline Resource?
The basic principle is this -- and feel free to call this ridiculous because it most likely is.
Please note, this was quickly written and obviously there are some intricacies that need to be refined, but heres the quick and dirty rundown:
I set up a lambda ran on a cronjob (or alternatively can be triggered manually) that scans our organization for repositories, and as it scans the repositories I search for a terraform file that references a terraform module that handles the the setup, stages, etc of the pipeline that we want created. The base terraform file in the repository contains just boilerplate code such as the repository source url, buildspec, type of deployment, etc, and any additional non-secret env variables passed in by the terraform file in the repository. If it contains the terraform file, then it checks out the latest version, sets the pipeline source to the s3 reference key to the repositories name, and from there it then stores the the source code in the s3 bucket with the name of the project, which then triggers the codepipeline to execute. From there the pipeline then takes the source, and in the build step it executes the terraform script which sets up the pipeline for that repository. If the pipeline has already been created, then the terraform script has no changes, then there will have be no updates - and nothing will be changed. If there has been updates to the terraform file, then the pre-existing pipeline will then be updated.
From there, bam: we have a pipeline that has been created automatically without any manual work.
This is obviously a fairly novel concept -- but does this seem absolutely ridiculous, or could it actually be a feasible solution?
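The scan-and-upload flow described above can be sketched roughly like this. The marker filename, key scheme, and the shape of the repo data are all assumptions; the GitHub API call and the `boto3` S3 upload are stubbed out so only the control flow is shown:

```python
# Hypothetical sketch of the scanner Lambda. In the real thing, `repos`
# would come from the GitHub org API and `upload` would be
# boto3.client("s3").put_object against the pipeline source bucket.
import io
import zipfile

MARKER_FILE = "pipeline.tf"  # repos opt in by carrying this file at their root


def wants_pipeline(repo_files):
    """True if the repo carries the marker Terraform file."""
    return MARKER_FILE in repo_files


def source_key(repo_name):
    """S3 object key that the repo's CodePipeline watches as its source."""
    return f"sources/{repo_name}.zip"


def package_source(files):
    """Zip a {path: bytes} tree so it can be uploaded to the source bucket."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path, data in sorted(files.items()):
            zf.writestr(path, data)
    return buf.getvalue()


def scan(repos, upload):
    """For every opted-in repo, package its tree and upload it; the upload
    is what triggers (or, on first run, creates) that repo's pipeline."""
    for name, files in repos.items():
        if wants_pipeline(files):
            upload(source_key(name), package_source(files))
```

Uploading under a key derived from the repository name is what lets one bucket fan out to many pipelines, each watching its own prefix.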
https://redd.it/rglrtv
@r_devops
Has anyone here used Ansible and Packer with Proxmox?
I am now trying to get my template provisioned with Ansible. I finally have it so that Packer is able to create the VM, configure it, reboot, and then attempt to connect over SSH. The output shows "waiting for SSH to become available", then "connected to SSH", but then the handshake fails because the host is unreachable. This is the pastebin I have from my console. I ran Ansible with -vvvv, so I can see it fails the SSH connection via OpenSSH while trying to connect to 127.0.0.1.
Packer's documentation makes it seem like you just add the provisioner, point it at the YAML playbook, and you're good.
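One thing worth checking: Packer's `ansible` provisioner normally routes Ansible's SSH through a local proxy adapter on 127.0.0.1, which is exactly the address showing up in the -vvvv output. When the handshake through that adapter fails, a common workaround is to disable the proxy so Ansible connects to the VM directly. A hedged HCL sketch (the playbook path is assumed):

```hcl
provisioner "ansible" {
  playbook_file = "./playbook.yml"

  # Packer normally proxies Ansible's SSH through an adapter on 127.0.0.1;
  # disabling it makes Ansible talk to the VM's real address instead,
  # which often avoids the unreachable/handshake failure seen in the log.
  use_proxy = false
}
```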
https://redd.it/rgjt58
@r_devops
Is kubernetes in demand?
Just took a look at Google Trends and it seems interest has dropped quite a lot... I'm wondering if it's still worth the trouble of learning, and also what could be the culprit for the drop in interest?
Thanks!
https://redd.it/rgu7hr
@r_devops
Deploying microservices in a consistent way using different gitlab repositories
Hi,
I'm looking for a good way to organize the deployment of our solution, which consists of multiple apps, using GitLab and K8S.
Our SaaS app is made of:
A backend app, mostly our API (django)
A user app (React)
An admin app (React)
Both frontend apps are connected to the API.
Currently the backend and user apps live in the same GitLab repository and are deployed by a CI/CD pipeline that builds the apps, builds the Docker images, and deploys them to K8S using a Helm chart. The Helm chart is located in the same repo.
I recently added the admin app in another GitLab repository, and I'm concerned about keeping all apps consistent, that is to say, both frontend apps have to be compatible with the API.
I'm thinking about adding another repository especially for the deployment (let's call this Deploy Repo). This repo could contain:
3 git submodules, one for each sub app,
The Helm chart,
Other files related to deployment
I thought about using git submodules to be able to have separate projects. The devs would update the right versions in the Deploy Repo when a service is ready to be deployed.
The push would then trigger the CI/CD pipeline, build all apps, and deploy all together using the Helm Chart.
Is it a good idea to use submodules like this? What would be best practice for linking multiple projects together?
I'm also concerned about how to build only the subproject that has changed instead of all of them.
I have seen that it might be possible to link the pipelines of all the subprojects together and use artifacts to pass the needed files, but I'm not sure this is a good solution.
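For the "build only what changed" concern, GitLab's `rules:changes` can key each build job off its submodule pointer path in the Deploy Repo, since a submodule update shows up as a change to that path. A sketch of the Deploy Repo's `.gitlab-ci.yml` (all paths, image names, and the chart location are assumptions):

```yaml
# Deploy Repo pipeline sketch: each app builds only when its submodule
# pointer changes; the Helm deploy runs at the end either way.
variables:
  GIT_SUBMODULE_STRATEGY: recursive   # have GitLab check out the submodules

build-admin-app:
  stage: build
  rules:
    - changes:
        - admin-app                   # submodule pointer path
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/admin-app:$CI_COMMIT_SHA" admin-app
    - docker push "$CI_REGISTRY_IMAGE/admin-app:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  script:
    - helm upgrade --install myapp ./chart
```

Jobs for the backend and user apps would follow the same shape, each with its own `changes:` path.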
https://redd.it/rgtuls
@r_devops
CoTurn server
Hi all. I have installed coturn. Can someone please help me with how to perform load testing on my server?
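One possible starting point: coturn ships its own test client, `turnutils_uclient`, which can simulate many concurrent clients against the server. The host and credentials below are placeholders, and the exact flag values are illustrative; check the `turnutils_uclient` man page for your version:

```shell
# Simulate 100 concurrent clients, each sending 1000 messages, against
# a TURN server at turn.example.com; -y uses client-to-client connections.
turnutils_uclient -u testuser -w testpass -m 100 -n 1000 -y turn.example.com
```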
https://redd.it/rgvzsd
@r_devops