Backend/Frontend in same repository
How do you manage the pipeline definition in Jenkins for a repository that contains both backend and frontend?
Because when a change is made to the frontend (for example, a picture is changed), the pipeline checks out all the code, builds both halves, tests both, and deploys both, even though the backend code did not change.
How do I define my pipeline so the automation runs separately for each part?
Is the only solution to split my frontend and backend into different repositories?
Thanks in advance.
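For reference, a declarative Jenkinsfile can keep everything in one repo and still skip untouched halves using the `changeset` condition. A minimal sketch (the `backend/`/`frontend/` directory names and `make` targets are assumptions about the layout):

```groovy
pipeline {
    agent any
    stages {
        stage('Backend') {
            // Runs only when files under backend/ changed since the last build.
            when { changeset "backend/**" }
            steps {
                sh 'make -C backend build test deploy'
            }
        }
        stage('Frontend') {
            when { changeset "frontend/**" }
            steps {
                sh 'make -C frontend build test deploy'
            }
        }
    }
}
```

One caveat: `changeset` compares against the previous build of the same branch, so the first build of a new branch runs every stage.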
https://redd.it/l4twsx
@r_devops
Teardown feature branch environment
I'm setting up automated deployment/teardown of a feature-branch environment. I trigger the creation/deploy when a branch is created or committed to and isn't the default (master) branch.
What I'm struggling with is what should trigger the teardown of that environment. To merge into master, my team has to open a GitHub PR. I'm thinking of triggering off the merge to master and parsing the GitHub PR merge message for the feature-branch name (using PowerShell), then deleting the environment with that. Does that sound reasonable? Is there a better way?
My stack is GitHub for the repo and Azure DevOps for pipelines.
Thanks!
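For what it's worth, GitHub's default merge-commit subject has a fixed shape (`Merge pull request #N from owner/branch`), so the parsing itself is a one-liner in any language. A hedged sketch of the idea in Python (the function name is mine):

```python
import re
from typing import Optional

# GitHub's default merge-commit subject looks like:
#   "Merge pull request #42 from some-user/feature/login-page"
# Extract the branch name so the teardown job knows which environment to delete.
MERGE_RE = re.compile(r"^Merge pull request #\d+ from [^/]+/(?P<branch>\S+)")

def branch_from_merge_message(message: str) -> Optional[str]:
    """Return the merged branch name, or None if this isn't a PR merge commit."""
    match = MERGE_RE.match(message)
    return match.group("branch") if match else None
```

A sturdier alternative is to skip message parsing entirely: a GitHub webhook for the `pull_request` `closed` event (or a branch-deletion event) carries the branch name as a structured payload field, so nothing breaks if someone edits the merge message.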
https://redd.it/l53wvs
@r_devops
configuring ec2 for node.js apps with pm2 and nginx
First of all, I'm a backend developer, so I don't know much about devops beyond basic CI/CD configuration with deploys on more "automated" services like Heroku.
What I'm trying to do is have my EC2 instance host my Node.js apps and use PM2 to start up/monitor each one. I configured `ecosystem.config.js` with this:
module.exports = {
  apps: [
    {
      name: "my-app",
      cwd: "./my-app/packages/backend/",
      script: "yarn",
      args: "start:prod",
      env: {
        PORT: 3010,
        // Other env vars
      },
    },
  ],
};
For now this app is a Nest.js service that I build manually and start with that command. That part is running OK.
After that I tried installing Nginx on the server to reverse-proxy all requests to my app (when I have more than one app, I'll probably proxy by subdomain, e.g. `my-app1.domain.com` -> localhost:3010, `my-app2.domain.com` -> localhost:3020).
Even if I don't change anything in the nginx config files, when I access the server by its AWS IP or DNS name it should show the default "Welcome to nginx!" page, right?
All I'm getting now is `ERR_CONNECTION_REFUSED` when I try to access it. [Here are the inbound and outbound rules for my instance](https://imgur.com/a/RnrSHn5). I followed [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04#set-up-nginx-as-a-reverse-proxy-server) to configure the reverse proxy, and that part seems OK. If I run `curl http://localhost:3010/status` or `curl http://localhost/status` inside the server, I get the right response from my app.
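For reference, the subdomain routing described above maps to one nginx `server` block per app, roughly like this (domain names and ports are the hypothetical ones from the post):

```nginx
server {
    listen 80;
    server_name my-app1.domain.com;

    location / {
        # Forward everything to the Node.js app PM2 keeps alive on :3010.
        proxy_pass http://localhost:3010;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

As for `ERR_CONNECTION_REFUSED` from outside while `curl localhost` works inside: that pattern usually means nginx isn't actually listening (`systemctl status nginx`) or port 80 isn't open in the instance's security group.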
Thanks for the help!
---
Since I'm here, is there any other service with a nice free tier for this? I've used Heroku, but since it doesn't have servers where I live, the response time is a bit high. I was using GCP App Engine before, but I couldn't configure my env vars in any way that didn't require committing an `.env` file with DB credentials and keys to my source code (which I REALLY don't want to do).
Running my own server is also not ideal, since I'll need to manually SSH into the machine, pull the latest changes and restart the PM2 process, but at least it's free and I can run multiple apps.
https://redd.it/l5hwyv
@r_devops
What can you do with Docker/K8s agents on Azure?
I'm getting more into the world of CI/CD and Kubernetes.
I recently had to set up our own Azure agent to run .NET Core API tests. I think I'm limited to Linux or Windows because the tests use OS environment variables and a runsettings file, but it got me thinking.
Is it possible to run NUnit API tests on an Azure agent in Docker or AKS?
If not, what can you do with an agent hosted this way? Just build/push to a container registry?
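Container jobs may answer the first question: Azure Pipelines can run a job's steps inside a Docker container on the agent, and per-step environment variables plus a checked-in runsettings file still work there. A hedged sketch (pool name, image and file names are assumptions):

```yaml
jobs:
- job: api_tests
  pool:
    name: MySelfHostedPool        # self-hosted agent pool with Docker installed
  container: mcr.microsoft.com/dotnet/sdk:5.0
  steps:
  - script: dotnet test --settings test.runsettings
    env:
      API_BASE_URL: $(apiBaseUrl) # surfaced to the tests as an OS env var
```

Agents can also run inside AKS themselves (the agent is distributed as a Docker image you can deploy as pods), in which case they can do anything a normal agent can, not just registry pushes.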
https://redd.it/l5hm53
@r_devops
Stuck on the deployment part at the Gitlab-CI/Docker/Terraform/ECR pipeline. Where to deploy Express.js web server?
I am trying to build a dream pipeline around a simple Express.js web server that returns "Hello World" on the / route. I am doing this process in a few iterations, and currently I am stuck on my second iteration, at the moment where I need to actually deploy the app.
Let me first show you my current progress on this stack:
I want to follow Gitlab Flow ✔
>My application's source of truth is the master branch. It is the branch I want to continuously deliver.
I want to use Docker and Docker-compose ✔
>I have both a Dockerfile and a docker-compose.yml file, which describe my application stack and allow both developers and the CI server to build the app, run the app, etc. very easily. The deployed app runs in a docker container as well.
I want to use Gitlab shared runners to do my CI ✔
>Done. There is a single test stage for now, which runs a lint check and the actual mocha tests. This pipeline is triggered on MR branches and also on master.
I want my runners to build & push docker images to an Amazon ECR repository ✔
>I think this definitely needs to happen whatever my strategy is. I guess having a docker image in some kind of a registry is a must. I have just arrived at the point where I need to make this happen, and I did educate myself on how it is done, so there is no issue with this step. My choice of registry is ECR.
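For the record, the build-and-push step on a shared runner is typically docker-in-docker plus `aws ecr get-login-password`. A hedged `.gitlab-ci.yml` fragment (account id, region and repository name are placeholders):

```yaml
build_and_push:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  variables:
    IMAGE: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/hello-world
  before_script:
    # One way to get the AWS CLI into the job image.
    - apk add --no-cache aws-cli
  script:
    - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "${IMAGE%%/*}"
    - docker build -t "$IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$IMAGE:$CI_COMMIT_SHORT_SHA"
```

The AWS credentials would come from masked CI/CD variables (`AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`), never from the repo.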
That is the progress so far. Now I have come to the realization that I have a few options.
I am not sure whether to use ECS or manual AWS CLI + EC2
>Since I haven't even touched Kubernetes yet (remember, I am building just a second iteration of a simple "Hello World" app) and I am not looking for auto-scalable fancy stuff such as EKS (yet), I am wondering whether I need Amazon's ECS or whether I should set up deployment at the instance level.
>
>Up until now, my deployment pipeline was very primitive. Manually created EC2 instances had to be SSH-ed into, and I had to pull latest code from Git repository and restart the processes.
>
>So I can see the possibility of automating my primitive flow by introducing docker images instead of bare code, and doing all this automatically from GitLab CI through the AWS CLI. But is that how it's usually done, or should I switch to ECS and invoke an ECS "refresh" once my images are in ECR?
>
>One question here: since I use docker-compose, if I went the "EC2 way" I know I can write a deployment script that DOES use docker-compose and runs the app correctly. What I don't know is whether ECS runs my compose file or just my Dockerfile, and whether there is a way to set that up correctly if I use docker-compose.
I want to use Terraform to provision infrastructure in an automated fashion
>My second problem is this: how does Terraform come into play if I have the architecture set up in the above fashion?
>
>What I know is that Terraform CAN provision EC2 instances for me through IaC in a declarative fashion. What I don't know is this:
>
>Should I put ECR creation in the Terraform config files as well? Does Terraform also provision/configure ECS? I know Terraform is a topic in itself, and I have and will research its full capabilities, but I'm mainly looking for waypoints on configuring it to work with the deployment plan I have described above.
Thanks for reading, each contributing comment is welcome.
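On the Terraform questions: yes, both ECR and ECS are ordinary resources in the AWS provider, so the registry can be created in the same config as everything else. A minimal hedged sketch (names and region are placeholders):

```hcl
provider "aws" {
  region = "eu-west-1" # placeholder region
}

# The registry CI pushes images to.
resource "aws_ecr_repository" "hello_world" {
  name = "hello-world"
}

# An ECS cluster; task definitions and services would reference the image above.
resource "aws_ecs_cluster" "main" {
  name = "hello-world-cluster"
}
```

On the compose question: ECS does not read docker-compose.yml itself — it runs task definitions. Amazon's `ecs-cli compose` command can translate a compose file into a task definition, which is one bridge between the two worlds.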
https://redd.it/l5hl31
@r_devops
DevOps not for fresher/careershifter?
I've been applying for DevOps roles for the past 3 months, but apparently all of them require experience. I have 3 years of project management experience.
I know that having the AWS-SAA cert won't get me the job, but I strongly believe I just need a chance to prove myself. So here I am, asking for your advice and suggestions on how I can ace the interview. I am also thinking of building a project, but I don't have any idea what to build. Can you please share some good resources?
I have basic Python and Linux skills. Thanks in advance!
https://redd.it/l5e4q8
@r_devops
Simplifying K8S and OpenShift deployment and management on GCP/Cloud
I wrote a few words on our approach at Palo Alto Networks to simplifying the deployment and management of different orchestrators on GCP and AWS.
We are using a Chrome extension that allows us to quickly trigger builds and deletions of the clusters we use for application testing.
Please let me know if you have any questions or suggestions, would be glad to help if needed.
Here's the article:
https://medium.com/engineering-at-palo-alto-networks/simplifying-k8s-and-openshift-installation-using-a-chrome-extension-84391d0ed6f
https://redd.it/l5aeqm
@r_devops
GitlabCI with Chef
Hi, I'd like to build a CI/CD pipeline with Chef and a self-hosted GitLab CI. I've used Puppet and Ansible with Jenkins before :-)))
So I have some beginner questions about integrating Chef with CI.
- Can I use an "external" GitLab repository to store the cookbooks? I read that Chef automatically stores the cookbooks on the Chef server when you develop on the workstation. Can I develop without the workstation? Just develop on my machine > push to the GitLab repo > git clone on the test VM > run?
- I'd like to make a GitLab CI pipeline that takes the feature branch as a parameter and deploys it to the test VM. Is that possible? Can Chef run headless?
- Has anyone else tried to build the same toolset?
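On the headless question: `chef-client --local-mode` (chef-zero) converges straight from a checked-out cookbook directory, with no Chef server or workstation involved — which also means the cookbooks can simply live in a GitLab repo. A hedged job sketch (runner tag and cookbook name are assumptions):

```yaml
deploy_test_vm:
  stage: deploy
  tags:
    - test-vm        # a runner registered on the test VM itself
  script:
    # Converge from the repo checkout; no Chef server needed.
    - chef-client --local-mode --override-runlist 'recipe[myapp::default]'
  rules:
    - if: '$CI_COMMIT_BRANCH != "master"'
      when: manual   # pick the feature branch, press play to deploy it
```

Because GitLab runs the pipeline on whatever branch you trigger it from, the "feature branch as a parameter" falls out for free: the checkout on the test VM already is that branch.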
https://redd.it/l5dd5f
@r_devops
Trying to Deploy Through Concourse CF Flyway Resource and need help increasing the memory
Hi,
Hope you are doing well.
I am trying to increase the memory from 256 MB to 1 GB for https://hub.docker.com/r/emeraldsquad/cf-flyway-resource/ . The problem is that in my pipeline there does not seem to be an easy way to override the memory.
I could manually change it in PCF, but I don't want to do that.
I was wondering if anyone has faced a similar issue with Concourse and PCF, and how you resolved it.
Thanks
https://redd.it/l5ql54
@r_devops
How GitOps deals with mono-repo environments?
Hi! I am a very fresh beginner and I would like to ask those who have some experience with GitOps :)
Let’s say I have a project and a single microservice repository - “tutorial-microservice”.
This repo folder structure:
- TutorialMicroservice
  - Deployment (openshift yamls, deploymentConfigs...)
    - Production
    - Dev
    - QA
  - Dockerfile
  - dotnet.jenkinsfile
My question would be: if, for example, I make changes in TutorialMicroservice and in the Dev deploymentConfig, create a PR and MERGE these changes into master or another branch, is it possible to detect that among all these changes there was also a change to the Dev environment, and DEPLOY those changes to the Dev environment?
I know it would be easy if there would be a separate configs repository, but currently in our real project we cannot change the architecture :/
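Yes — this is doable even in a mono-repo: the CI job diffs the merge and deploys only the environments whose folders were touched (GitLab's `rules:changes` and Jenkins' `changeset` condition do this filtering natively). At its core it is just prefix matching on changed paths; a sketch in Python (the `Deployment/<Env>/` prefixes are my assumption about paths relative to the repo root):

```python
# Map the files changed by a merge (e.g. output of
# `git diff --name-only master...HEAD`) to the environments whose
# deployment configs were touched.
ENV_DIRS = {
    "Deployment/Dev/": "dev",
    "Deployment/QA/": "qa",
    "Deployment/Production/": "production",
}

def environments_to_deploy(changed_files):
    """Return the set of environments that should be (re)deployed."""
    envs = set()
    for path in changed_files:
        for prefix, env in ENV_DIRS.items():
            if path.startswith(prefix):
                envs.add(env)
    return envs
```

In a pull-based GitOps setup (e.g. Argo CD) the same behavior falls out of pointing each environment's application at its own path within the repo, so a separate config repository is convenient but not required.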
https://redd.it/l5atsg
@r_devops
DevOps for automatic VM deployment via Rest API
Hello
​
In my company we want to use Azure DevOps to automate the deployment of VMs in Azure.
At the moment we have a Jenkins pipeline that deploys virtual machines for customers in VMware. We want to rebuild this in Azure DevOps for Azure VMs.
We want to build a front end where the parameters can be filled in by the customer or colleagues; Azure DevOps should then be triggered via the REST API to build the VM.
If we do it like this, I don't think we are using Azure DevOps the right way. I think it's made for deploying environments, not for deploying single VMs per run.
Does anyone have tips for me? Should we do it this way? Should we rethink our strategy?
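Triggering a run from your own front end is a supported pattern: the "Runs" REST API queues a pipeline with template parameters, so the customer-facing form only needs to POST one request. A hedged Python sketch that just assembles the call (organization, project, pipeline id and parameter names are placeholders):

```python
import base64
import json

def build_pipeline_run_request(organization, project, pipeline_id, pat, parameters):
    """Build URL, headers and body for the Azure DevOps 'Runs - Run Pipeline' call."""
    url = (f"https://dev.azure.com/{organization}/{project}"
           f"/_apis/pipelines/{pipeline_id}/runs?api-version=6.0")
    # Personal access tokens use HTTP basic auth with an empty username.
    token = base64.b64encode(f":{pat}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    # templateParameters reach the YAML pipeline as `parameters:` values.
    body = json.dumps({"templateParameters": parameters})
    return url, headers, body
```

Whether single-VM deployments are the "right" use of Azure DevOps is a judgment call, but having the pipeline apply an ARM or Terraform template per request at least keeps the VM definition as code rather than a one-off script.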
https://redd.it/l5a71p
@r_devops
Monitoring vs Observability: what's the difference and how's Twitter doing it?
https://dashbird.io/blog/monitoring-vs-observability/
https://redd.it/l62zf3
@r_devops
Ensuring developers have updated libraries/dependencies locally
What's everyone's best practice for ensuring (aka forcing) that developers have the latest/correct versions of dependencies on their local machine when another developer has made changes mid-cycle?
We're a C++ shop, so we'll be using Conan. My thought was to drive this through changes to conanfile.py: if git sees a change there, the developer is alerted at commit/push and should then pull the new conanfile.py and install the latest dependencies with a `conan install` to test locally before re-pushing their changes. We could use either a pre-commit hook, or more likely a pre-receive server hook, to make sure this isn't skipped.
Is there a better method, or am I just completely missing something?
We currently require everyone to network-boot into a dev environment that has the "current" versions loaded. However, that is with a 6-10 week coding cycle, and that environment is built once per cycle. With the goal of daily cycles and using Conan going forward, I don't think that's the right method to use.
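The hook idea sketched above can stay very small. A hypothetical client-side `post-merge` hook (so the warning fires right after a pull brings in a new conanfile.py):

```shell
#!/bin/sh
# Hypothetical .git/hooks/post-merge hook: after a pull/merge, warn the
# developer if conanfile.py changed so they know to re-run `conan install`.

# Files touched between the pre-merge HEAD and the new one.
if git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD 2>/dev/null \
     | grep -q '^conanfile\.py$'; then
  echo "conanfile.py changed -- run 'conan install . --build=missing'" >&2
fi
```

Conan lockfiles may also be worth a look: commit a `conan.lock` and CI can fail fast when someone built against stale dependency versions, which covers the server-side enforcement without a custom pre-receive hook.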
https://redd.it/l68tkg
@r_devops
What does a Network Engineer do in an actual outage!! Microsoft Azure Network Engineer speaks...
A network engineer is not just responsible for configuring routers and establishing connections... there's a lot more that needs to be done to maintain a smooth and uninterrupted network. Sharing a video that talks about exactly that:
what does a network engineer do in an actual outage
https://redd.it/l68api
@r_devops
Release Management
How are other folks visualizing / monitoring what code has been deployed to each environment? Are there tools / Jenkins plugins / integrations out there solving this need? I know there are git tags, but how would one figure out which tag has been deployed to a UAT or prod environment?
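One low-tech baseline before reaching for a plugin: have the deploy job move a per-environment git tag, so git itself answers "what is on UAT?". A sketch, demonstrated in a scratch repo (the `deployed/<env>` tag names are just a convention I'm assuming):

```shell
# Demonstrate the convention in a throwaway repo.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email ci@example.com && git config user.name ci
echo v1 > app.txt && git add app.txt && git commit -qm "release v1.4.2"
git tag v1.4.2

# The deploy job records the deployment (forced, so the tag moves each deploy):
git tag -f deployed/uat v1.4.2     # in a real repo, also: git push -f origin deployed/uat

# Anyone can now answer "what's on UAT?" from a clone:
git log -1 --oneline deployed/uat
```

Dashboard-style tools track this too (Spinnaker, Octopus Deploy, and GitLab's environments feature all record deployments per environment), but the tag convention works with plain Jenkins and nothing else.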
https://redd.it/l6bqzs
@r_devops
I made a question generator API using Python
I've been trying to develop a question generator algorithm and found out that there's no public API for that, so I made one.
For the architecture, I kept it simple. I used Flask to expose the API and hosted it for free on App Engine. To eliminate the overhead I just listed it on RapidAPI.
Here's the link; check it out and let me know what you think about the API or its architecture.
https://redd.it/l61ur0
@r_devops
Container security scanner
Hi,
there are some commercial tools available on the market for container scanning.
Most of them work in two modes:
1. Continuously scanning registries
2. Scanning during the build
Currently I'm thinking about enabling both of these options.
My rationale would be:
1. I can give feedback to the product team as soon as they introduce a new vulnerability, so I won't be introducing insecure images into the registry.
2. Continuously scanning the registry detects any new vulnerabilities identified in the meantime in the base images or some of the dependencies.
What are your thoughts about that? What are your preferences?
https://redd.it/l6g3d8
@r_devops
Understanding AWS K8s architecture using EC2
Hi!
I'm quite new to Kubernetes. I work at a custom software company and we are migrating our projects to containers, creating pipelines, etc.
We chose kOps to deploy our cluster into AWS, instead of provider-managed K8s offerings (at least for now).
I registered my domain in Route 53 and configured the name servers at my registrar. Then I set up the cluster following the usual kOps workflow. Next, I deployed the NGINX Ingress Controller following the docs, and, as expected, a Network Load Balancer was created.
I understand these are two separate paths: Route 53 directs traffic to my K8s API, while the NLB directs traffic to the NGINX Ingress Controller, which forwards it to ingress -> service -> pod -> container.
Am I right? Or am I missing something?
Is there a setup where my applications could be reached via app1.mydomain.com and app2.mydomain.com (or even mydomain.com/app1 and mydomain.com/app2) instead of some-big-hash.elb.us-east-1.amazonaws.com ?
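That's what host-based routing on an Ingress is for. A sketch, assuming the NGINX Ingress Controller is installed, DNS records in Route 53 (e.g. a wildcard *.mydomain.com, or one record per host) alias the NLB, and Services named app1-svc/app2-svc exist (hypothetical names):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app1.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
    - host: app2.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80
```

The controller matches on the Host header, so both hostnames can resolve to the same NLB; path-based routing (mydomain.com/app1) works the same way with multiple `path` entries under a single host.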
https://redd.it/l6fwx2
@r_devops
Today I screwed up - while deploying a laravel Vue/nuxt.js app
Hey guys,
​
I have a client with an app in production. He wanted someone to renew the SSL certificate, and while I am not very experienced, this looked to me like a simple task where I just run certbot. So I told him that I was not 100% sure I could do it, but that I would give it a go.
He then told me that the app is deployed on AWS; after a bit of chatting and contacting his old dev, he gave me the SSH key to the EC2 instance.
I found out that the app is running on three subdomains
[frontend1.domain.com](https://frontend1.domain.com) // Nuxt.js
[frontend2.domain.com](https://frontend2.domain.com) // Nuxt.js
[api.domain.com](https://api.domain.com) // Laravel API
So I had no idea what I got myself into. So far I only made simple deployments where Vue (or any other FE Framework) is delivered as static by the Backend.
Also, I had never used Bitnami. I thought, "okay, this cannot be too bad."
I hopped on Google and searched for how to renew an SSL certificate on Bitnami,
which brought me straight here: [https://docs.bitnami.com/aws/how-to/understand-bncert/](https://docs.bitnami.com/aws/how-to/understand-bncert/)
and to this command
sudo /opt/bitnami/bncert-tool
After running the command, I was asked to provide the domains I wanted to renew the certificates for. I provided them wrong the first time, so everything was redirected to the wrong domain.
Then I figured I should rerun the command and give the backend domain (api.domain.com) first.
After I did this it seems to be working again; however, now the browser's requests to [api.domain.com](https://api.domain.com) are blocked due to CORS, and the SSL certificate is still not working. I spent quite some time on this problem. I tried to configure /bitnami/bitnami.conf and inserted the following at the end:
<IfModule headers_module>
Header set Access-Control-Allow-Origin "DOMAIN"
</IfModule>
/// save and then run
sudo /opt/bitnami/ctlscript.sh restart apache
In the end, I told him that I am very sorry and that **I won't charge him** for my last task or the deployment task I did for him today. I feel very bad about it, but I would still like to fix this. If someone here can give me any advice on how to deal with it, I would be very grateful.
​
​
The old developer did not leave any documentation. Perhaps it was too obvious for him.
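On the CORS part: the header usually needs to go inside the api.domain.com VirtualHost block rather than the global config, and the value must be the exact origin the frontends are served from, scheme included (a literal "DOMAIN" won't match anything). A sketch, assuming Bitnami's Apache has mod_headers and mod_setenvif enabled:

```apache
<IfModule mod_headers.c>
    # echo back only the two known frontend origins
    SetEnvIf Origin "^https://(frontend1|frontend2)\.domain\.com$" CORS_ORIGIN=$0
    Header set Access-Control-Allow-Origin "%{CORS_ORIGIN}e" env=CORS_ORIGIN
    Header merge Vary "Origin"
</IfModule>
```

Note that a browser only enforces CORS when the response arrives, so if the SSL certificate on api.domain.com is still broken, fixing the certificate first may make the CORS errors disappear on their own.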
https://redd.it/l6dpm2
@r_devops
Just out: "State of CloudNative Release Orchestration 2021" report
Hi all, CTO and cofounder of Vamp.io here. We've just released (sic) our report on the 2021 state of cloud-native release orchestration, and I feel there are some interesting insights to be learned from it.
It seems "dependency hell" and costly release validation are some of the more pressing challenges in the DevOps, Kubernetes, and cloud-native space.
Do you agree or disagree? Are there specific topics you're focusing on that we missed? All feedback is welcome!
**https://blog.vamp.io/the-state-of-cloud-native-release-orchestration-2021/**
https://redd.it/l6169t
@r_devops
Troubleshooting the right way
In this blog post, I share a methodology for troubleshooting technical challenges - https://www.meirg.co.il/2021/01/23/troubleshooting-the-right-way/
As part of it, I share a real-life technical challenge that I faced and the methodology I used to tackle it. **The challenge**: disallow outbound connections from Prometheus to NewRelic, making it possible to inspect Prometheus's logs and understand which errors (if any) are raised when there is no internet connection during a remote_write.
I'd love to hear your thoughts and have a discussion about the way YOU troubleshoot and tackle technical challenges. Rock on!
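For context on how such a challenge might be reproduced: if Prometheus runs in Kubernetes, one way to cut its outbound internet access is an egress NetworkPolicy. A sketch only - the namespace, label, and CNI below are all assumptions, not details from the post:

```yaml
# Assumes Prometheus runs in a "monitoring" namespace with label
# app=prometheus, and the CNI enforces NetworkPolicy (e.g. Calico, Cilium).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prometheus-no-external-egress
  namespace: monitoring
spec:
  podSelector:
    matchLabels:
      app: prometheus
  policyTypes:
    - Egress
  egress:
    # allow in-cluster traffic (scrape targets, DNS) but nothing external,
    # so the remote_write fails and the resulting errors show up in the logs
    - to:
        - namespaceSelector: {}
```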
https://redd.it/l6iajv
@r_devops