Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Stuck on the deployment part of a GitLab CI / Docker / Terraform / ECR pipeline. Where do I deploy the Express.js web server?

I am trying to build a dream pipeline around a simple Express.js web server that returns "Hello World" on the / route. I am going through this process in a few iterations, and currently I am stuck, on my second iteration, at the point where I need to actually deploy the app.

Let me first show you my current progress on this stack:

I want to follow GitLab Flow

>My application's source of truth is the master branch. It is the branch I want to continuously deliver.

I want to use Docker and Docker Compose

>I have both a Dockerfile and a docker-compose.yml file that describe my application stack and let both developers and the CI server build the app, run the app, etc. very easily. The deployed app runs in a Docker container as well.
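For reference, a stack like this can be as small as the following sketch (service name and port are illustrative, not from the original post):

```yaml
# Minimal docker-compose.yml for a "Hello World" Express.js app
version: "3.8"
services:
  web:
    build: .        # uses the Dockerfile in the repo root
    ports:
      - "3000:3000" # Express default-style port mapping
```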

I want to use GitLab shared runners to do my CI

>Done. There is a single test stage for now, which runs a lint check and the actual Mocha tests. This pipeline is triggered on MR branches and also on master.
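As a sketch, such a test stage might look like this in .gitlab-ci.yml (job name, Node image and script commands are assumptions, not the author's actual config):

```yaml
stages:
  - test

test:
  stage: test
  image: node:14
  script:
    - npm ci
    - npm run lint   # lint check
    - npx mocha      # the actual Mocha tests
  only:
    - merge_requests
    - master
```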

I want my runners to build & push Docker images to an Amazon ECR repository

>I think this definitely needs to happen whatever my strategy is; having a Docker image in some kind of registry is a must. I have just arrived at the point where I need to make this happen, and I have educated myself on how it is done, so there is no issue with this step. My registry of choice is ECR.
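A build-and-push job along these lines is a common pattern; the sketch below assumes the registry URL and region are provided as CI variables, and that the job image bundles both the Docker and AWS CLIs:

```yaml
build:
  stage: build
  image: docker:20.10        # would also need the AWS CLI installed
  services:
    - docker:20.10-dind
  script:
    - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    - docker build -t "$ECR_REGISTRY/hello-world:$CI_COMMIT_SHA" .
    - docker push "$ECR_REGISTRY/hello-world:$CI_COMMIT_SHA"
```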

That is the progress so far. Now I have come to the realization that I have a few options.

I am not sure whether to use ECS or manual AWS CLI + EC2

>Since I haven't even touched Kubernetes yet (remember, this is only the second iteration of a simple "Hello World" app) and I am not looking for auto-scalable fancy stuff such as EKS (yet), I am wondering whether I need Amazon's ECS, or whether I should set up deployment at the instance level.
>
>Up until now, my deployment pipeline was very primitive: I had to SSH into manually created EC2 instances, pull the latest code from the Git repository and restart the processes.
>
>So I can see the possibility of automating my primitive flow by introducing Docker images instead of bare code, and doing all of this automatically from GitLab CI through the AWS CLI. But is that how it's usually done, or should I switch to ECS and trigger an ECS service update once my images are in ECR?
>
>One question here: since I use Docker Compose, if I went the "EC2 way" I know I can write a deployment script that uses Docker Compose and runs the app correctly. What I don't know is whether ECS can run my Compose file, or only my Dockerfile, and whether there is a way to set that up correctly if I use Docker Compose.

I want to use Terraform to provision infrastructure in an automated fashion

>My second problem is this: how does Terraform come into play if I have the architecture set up in the above fashion?
>
>What I know is that Terraform CAN provision EC2 instances for me through IaC in a declarative fashion. What I don't know is this:
>
>Should I put the ECR creation in the Terraform config files as well? Does Terraform also provision/configure ECS? I know Terraform is a topic in itself, and I have researched (and will keep researching) its full capabilities, but I'm mainly looking for waypoints on configuring it to work with the deployment plan I have described above.
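To the concrete questions: yes, Terraform can manage ECR and ECS alongside EC2 in the same config. A minimal sketch with the AWS provider (resource and repository names are placeholders):

```hcl
# ECR repository for the CI-built images
resource "aws_ecr_repository" "app" {
  name = "hello-world-express"
}

# ECS cluster, if going the ECS route; task definitions and
# services can be declared in the same way
resource "aws_ecs_cluster" "app" {
  name = "hello-world"
}
```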

Thanks for reading, each contributing comment is welcome.

https://redd.it/l5hl31
@r_devops
DevOps not for fresher/careershifter?

I've been applying for DevOps roles for the past 3 months, but apparently all of them require experience. I have 3 years of project management experience.

I know that having the AWS SAA cert won't get me the job, but I strongly believe that I just need a chance to prove myself. So here I am, asking for your advice and suggestions on how I can ace the interview. I am also thinking of building a project, but I don't have any idea what to build. Can you please point me to some good resources?

I have basic Python and Linux skills. Thanks in advance!

https://redd.it/l5e4q8
@r_devops
Simplifying K8S and OpenShift deployment and management on GCP/Cloud

I wrote a few words on our approach at Palo Alto Networks to simplifying the deployment and management of different orchestration platforms on GCP and AWS.

We are using a Chrome extension that allows us to quickly trigger the creation and deletion of clusters we use for application testing.

Please let me know if you have any questions or suggestions, would be glad to help if needed.

Here's the article:
https://medium.com/engineering-at-palo-alto-networks/simplifying-k8s-and-openshift-installation-using-a-chrome-extension-84391d0ed6f

https://redd.it/l5aeqm
@r_devops
GitlabCI with Chef

Hi, I would like to build a CI/CD pipeline with Chef and a local GitLab CI. I've used Puppet and Ansible with Jenkins before :-)))

So I have some beginner questions about integrating Chef into CI.

- Can I use an "external" GitLab repository to store the cookbooks? I read that Chef automatically stores the cookbooks on the Chef server when I develop the code on the workstation. Can I develop without the workstation, i.e. just develop on my machine > push to the GitLab repo > git clone on the test VM > run?

- I would like to build a pipeline in GitLab CI that takes the feature branch as a parameter and deploys it to the test VM. Is that possible? Can Chef run headless?

- Has anyone else tried to build the same toolset?
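For reference, Chef can run headless via local mode (chef-zero) with chef-client --local-mode, which needs no Chef server or workstation. A GitLab CI job could then look roughly like this sketch (stage, cookbook and run-list names are hypothetical):

```yaml
deploy_test:
  stage: deploy
  script:
    # runs the cookbook from the checked-out repo, no Chef server involved
    - chef-client --local-mode --runlist 'recipe[my_cookbook::default]'
  only:
    - /^feature\/.*$/
```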

https://redd.it/l5dd5f
@r_devops
Trying to Deploy Through Concourse CF Flyway Resource and need help increasing the memory

Hi,

Hope you are doing well.

I am trying to increase the memory from 256 MB to 1 GB for https://hub.docker.com/r/emeraldsquad/cf-flyway-resource/ — the problem is that in my pipeline there does not seem to be an easy way to override the memory.

I could manually change it in PCF, but I don't want to do that.

Was wondering if anyone has faced a similar issue with Concourse and PCF, and how you resolved it.

Thanks

https://redd.it/l5ql54
@r_devops
How does GitOps deal with mono-repo environments?

Hi! I am a very fresh beginner and I would like to ask those who have some experience with GitOps :)

Let’s say I have a project and a single microservice repository - “tutorial-microservice”.
This repo folder structure:

- TutorialMicroservice
- Deployment (OpenShift YAMLs, deploymentConfigs...)
  - Production
  - Dev
  - QA
- Dockerfile
- dotnet.jenkinsfile

My question would be: if I, for example, make changes in TutorialMicroservice and in the Dev deploymentConfig, create a PR and MERGE these changes to master or another branch, is it possible to detect that among all these changes there was also a change to the Dev environment, and DEPLOY those changes to the Dev environment?

I know it would be easy if there were a separate configs repository, but currently, in our real project, we cannot change the architecture :/
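Since the repo already carries a dotnet.jenkinsfile: Jenkins declarative pipelines can gate a stage on which paths changed in the merge, via a changeset condition. A sketch (the deploy command and stage name are placeholders):

```groovy
stage('Deploy to Dev') {
    // runs only when the merged changes touched the Dev configs
    when { changeset "Deployment/Dev/**" }
    steps {
        sh 'oc apply -f Deployment/Dev/'   // hypothetical deploy step
    }
}
```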

https://redd.it/l5atsg
@r_devops
DevOps for automatic VM deployment via REST API

Hello

In my company, we want to use Azure DevOps to automate the deployment of VMs in Azure.

At the moment we have a Jenkins pipeline that deploys virtual machines for customers in VMware. We want to rebuild this in Azure DevOps for Azure VMs.

We want to build a front end where the parameters can be filled in by the customer or colleagues; Azure DevOps should then be triggered via the REST API to build the VM.
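For reference, Azure DevOps does expose pipeline runs over REST, so a front end can trigger builds that way. The sketch below only assembles and prints the request; the organization, project, pipeline ID and template parameters are hypothetical:

```shell
# Queue an Azure DevOps pipeline run over REST (sketch).
# ORG, PROJECT and PIPELINE_ID are placeholders; AZDO_PAT would be a personal access token.
ORG="myorg"
PROJECT="vm-factory"
PIPELINE_ID=12
URL="https://dev.azure.com/${ORG}/${PROJECT}/_apis/pipelines/${PIPELINE_ID}/runs?api-version=6.0"
echo "$URL"
# The front end would POST the collected parameters, e.g.:
# curl -u ":${AZDO_PAT}" -H "Content-Type: application/json" \
#      -d '{"templateParameters": {"vmName": "customer-vm-01"}}' "$URL"
```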

If we do it like this, I don't think we are using Azure DevOps the right way. I think it's made for deploying environments, not for deploying a single VM per deployment.

Does anyone have tips for me? Should we do it this way? Should we rethink our strategy?

https://redd.it/l5a71p
@r_devops
Ensuring developers have updated libraries/dependencies locally

What's everyone's best practice for ensuring (aka forcing) that developers have the latest/correct versions of dependencies on their local machine when another developer has made changes amidst their coding?

We're a C++ shop, so we will be using Conan. My thought was that this would all be driven through changes to the conanfile.py: if Git recognizes a change there, the developer is alerted at commit/push and should then pull the new conanfile.py and install the latest dependencies with a conan install, to test locally before re-pushing their changes. We could use either a pre-commit hook or, more likely, a pre-receive server hook to ensure this isn't being skipped.
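A minimal sketch of that conanfile-driven check, assuming a hypothetical marker file that records the hash of the last conanfile that was actually installed:

```shell
# Detect a stale dependency install by hashing conanfile.py (sketch).
# check_deps FILE MARKER prints a hint and returns non-zero when the
# conanfile changed since the hash recorded in MARKER.
check_deps() {
  current=$(sha256sum "$1" | cut -d' ' -f1)
  last=$(cat "$2" 2>/dev/null || true)
  if [ "$current" != "$last" ]; then
    echo "dependencies out of date - run: conan install . && echo $current > $2"
    return 1
  fi
}
# A pre-commit/pre-receive hook (or CI step) would call e.g.:
# check_deps conanfile.py .last-conan-install
```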

Is there a better method or am I just completely missing something?

We currently just require everyone to network-boot into a dev environment that has the "current" versions loaded. However, that is with a 6-10 week coding cycle, and the environment is built once per cycle. With the goal of moving to daily cycles and using Conan, I don't think that is the right method going forward.

https://redd.it/l68tkg
@r_devops
What does a network engineer do in an actual outage? A Microsoft Azure network engineer speaks...

A network engineer is not just responsible for configuring routers and establishing connections; there's a lot more that needs to be done to maintain a smooth and uninterrupted network. Sharing a video that talks about this.



https://redd.it/l68api
@r_devops
Release Management

How are other folks visualizing/monitoring which code has been deployed to each environment? Are there tools / Jenkins plugins / integrations out there solving this need? I know there are Git tags, but how would one figure out which tag has been deployed to a UAT or prod environment?

https://redd.it/l6bqzs
@r_devops
I made a question generator API using Python

I've been trying to develop a question-generator algorithm and found out that there's no public API for that, so I made one.

For the architecture, I kept it simple: I used Flask to expose the API and hosted it for free on App Engine. To eliminate overhead, I just listed it on RapidAPI.

Here's the link; check it out and let me know what you think about the API or its architecture.

https://redd.it/l61ur0
@r_devops
Container security scanner

Hi,
there are some commercial tools available on the market for container scanning.
Most of them work in two modes:
1. Continuously scanning registries
2. Scanning during the build


Currently I'm thinking about enabling both of these options.
My rationale would be:
1. I can give feedback to the product team as soon as they introduce a new vulnerability, so I won't be introducing insecure images into the registry.
2. Continuous registry scanning detects any new vulnerabilities identified in the base images or some of the dependencies in the meantime.
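For illustration, a build-time scan job in GitLab-style CI using the open-source scanner Trivy (the commercial tools wire in similarly; job and variable names are placeholders):

```yaml
container_scan:
  stage: test
  image: aquasec/trivy:latest
  script:
    # fail the build on HIGH/CRITICAL findings so insecure images never reach the registry
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE_TAG"
```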

What are your thoughts about that? What are your preferences?

https://redd.it/l6g3d8
@r_devops
Understanding AWS K8s architecture using EC2

Hi!

I'm quite new to Kubernetes. I work at a bespoke software company and we are migrating our projects to containers, creating pipelines, etc.

We chose kOps to deploy our cluster into the AWS environment, instead of provider-managed K8s solutions like EKS (at least for now).

I registered my domain in Route 53 and configured the name servers at my registrar. Then I set up the cluster following the usual kOps workflow. Next, I deployed the NGINX Ingress Controller following the docs and, as expected, a Network Load Balancer was created.

I know these are two separate services: Route 53 is directing traffic to my K8s API, while the NLB is directing traffic to the NGINX Ingress Controller, which passes it on to ingress -> service -> pod -> container.

Am I right? Or am I missing something?

Is there a setup where my applications could be reached via app1.mydomain.com and app2.mydomain.com (or even mydomain.com/app1 and mydomain.com/app2) instead of some-big-hash.elb.us-east-1.amazonaws.com?
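Yes — host-based routing is exactly what the ingress controller is for: point app1/app2 DNS records (manually or via external-dns) at the NLB, then route by host in an Ingress. A sketch, assuming Services named app1 and app2 listening on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app1.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
    - host: app2.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80
```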

https://redd.it/l6fwx2
@r_devops
Today I screwed up - while deploying a Laravel + Vue/Nuxt.js app

Hey guys,

I have a client who has an app in production. He wanted someone to renew the SSL certificate, and while I am not very experienced, this looked to me like a simple task where I just run certbot. So I told him that I was not 100% sure I could do it, but that I would give it a go.

He then told me that the app is deployed on AWS; after a bit of chatting and contacting his old dev, he gave me the SSH key to the EC2 instance.

I found out that the app runs on three subdomains:

[frontend1.domain.com](https://frontend1.domain.com) // Nuxt.js

[frontend2.domain.com](https://frontend2.domain.com) // Nuxt.js

[api.domain.com](https://api.domain.com) // Laravel API

So I had no idea what I had gotten myself into. So far I had only done simple deployments where Vue (or any other FE framework) is served as static files by the backend.

Also, I had never used Bitnami. I thought, "Okay, this cannot be too bad."

I hopped on Google and asked how to renew an SSL certificate on Bitnami,

which brought me straight to here: [https://docs.bitnami.com/aws/how-to/understand-bncert/](https://docs.bitnami.com/aws/how-to/understand-bncert/)

and to this command:

sudo /opt/bitnami/bncert-tool

After running the command, I was asked to provide the domains I want to renew the certificates for. I provided them wrongly, so everything was redirected to the wrong domain.

Then I figured I should rerun the command and list the backend domain (api.domain.com) first.

After I did this, it seems to be working again; however, now the browser is not sending any requests to [api.domain.com](https://api.domain.com) due to CORS. Also, the SSL certificate is still not working. I spent quite some time on this problem. I tried to configure /opt/bitnami/apache2/conf/bitnami/bitnami.conf and inserted this at the end:

<IfModule headers_module>
Header set Access-Control-Allow-Origin "DOMAIN"
</IfModule>

/// save and then run

sudo /opt/bitnami/ctlscript.sh restart apache

In the end, I told him that I am very sorry and that **I won't charge him** for my last task and the current deployment task I did for him today. I feel very sorry, and I would still like to fix this. If anyone here can give me any advice on how to deal with this, I would be very grateful.

The old developer did not leave any documentation. Perhaps it was too obvious for him.

https://redd.it/l6dpm2
@r_devops
Just out: "State of CloudNative Release Orchestration 2021" report

Hi all, CTO and co-founder of Vamp.io here. We've just released (sic) our report on the 2021 state of cloud-native release orchestration, and I feel there are some interesting insights to be learned from it.


It seems "dependency hell" and costly release validation are some of the more pressing challenges in the DevOps, Kubernetes and cloud-native space.
Do you agree or disagree? Are there specific topics you're focusing on that we missed? All feedback is welcome!

**https://blog.vamp.io/the-state-of-cloud-native-release-orchestration-2021/**

https://redd.it/l6169t
@r_devops
Troubleshooting the right way

In this blog post, I share a methodology for troubleshooting technical challenges - https://www.meirg.co.il/2021/01/23/troubleshooting-the-right-way/

As part of this blog post, I share a real-life technical challenge that I faced and the methodology I used to tackle it. **The challenge**: disallow outbound connections from Prometheus to New Relic, to make it possible to investigate Prometheus's logs and understand which errors (if any) are raised when there's no internet connection during a remote_write event.

I'd love to hear your thoughts and have a discussion about the way YOU troubleshoot and tackle technical challenges. Rock on!

https://redd.it/l6iajv
@r_devops
Sharing a link-to-text with your colleagues


From time to time, I find myself sending screenshots of blog posts and documentation. The reason: I doubt people will scroll down to the relevant text, and a screenshot is a more direct way of saying "here you go". And of course, I add a link to the content in case my colleagues want to investigate the subject.

Another approach is sharing a direct link to the relevant text in the docs. For example, here's a very long blog post (no, I didn't write it), and I'd like to link to a specific text fragment of it, "...primary function of the external ID...". Here's how:

DISCLAIMER: Available in Chromium Engine 80+, read more about it in chromestatus

- Link to page: https://aws.amazon.com/blogs/security/how-to-use-external-id-when-granting-access-to-your-aws-resources
- Add #:~:text=relevant text: in my case it's https://aws.amazon.com/blogs/security/how-to-use-external-id-when-granting-access-to-your-aws-resources#:~:text=primary%20function%20of%20the%20external%20ID (%20 is an encoded space)
- (Optional) Use first and last: https://aws.amazon.com/blogs/security/how-to-use-external-id-when-granting-access-to-your-aws-resources#:~:text=primary,external%20ID (primary to "external ID")

There's a very detailed StackOverflow answer on the subject.

https://redd.it/l6hgmi
@r_devops
What can I expect from a DevOps internship tech Interview?

I'm going into the 2nd round of interviews for a DevOps internship position. It consists of a 2-hour screen-share and tech interview. What are some of the things I should expect from this kind of interview? Will I be expected to code live?

Background: I'm a senior in IT and AWS certified. I've worked on a couple of personal projects involving AWS, Terraform, Ansible, Jenkins, Python, some React.js, and Node.js.

I'm freaked out about this because I suck at coding on the spot and at LeetCode.

https://redd.it/l6gl91
@r_devops
Detecting Genuine Continuous Integration Configurations

Hey! I'm not sure if such posts are accepted here, but I'll give it a try.

My name is Tim; I'm a student at the University of Zurich, Switzerland, and I am working on my Master's thesis right now.

I envision a world in which it is easy to find genuine CI configurations among the vast number of open-source projects, without having to work through countless meaningless config files. I would like to build a system that can automatically find good, representative CI pipelines.

To make this vision come true, I need some feedback from professional developers to learn which types of configuration files would be interesting to look at.

I would really appreciate it if you could find the time to fill out the following survey to help me with my thesis. The survey takes approximately 10 minutes, and participation is completely anonymous.

Many Thanks
Tim

PS: Feedback is very much appreciated
PPS: If you have any questions, also about the thesis, feel free to ask!

https://redd.it/l5zohc
@r_devops
helm issues after upgrade

Hey, I am pretty new to Helm and Kube; I've been using them for about 3-4 months. I just upgraded to Helm 3, and it does not seem to pick up my namespace from my kubeconfig.

When I run helm ls, it returns nothing.

When I run helm ls -A, it returns:

Error from server (Forbidden): secrets is forbidden: User "MYUSERNAME" cannot list resource "secrets" in API group "" at the cluster scope

helm2 was fine as soon as I pointed Tiller at the namespace. My namespace and context are set in my kubeconfig, and I can view the pods in the namespace just fine with kubectl.
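Context that may explain the error: Helm 3 dropped Tiller and stores release metadata as Secrets in each release's namespace, so helm ls -A needs permission to list Secrets cluster-wide. Scoping to the namespace (helm ls -n my-namespace) may already work; otherwise a namespaced Role along these lines would be needed (namespace and user name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-release-reader
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]   # Helm 3 release records live here
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-release-reader
  namespace: my-namespace
subjects:
  - kind: User
    name: MYUSERNAME
roleRef:
  kind: Role
  name: helm-release-reader
  apiGroup: rbac.authorization.k8s.io
```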

These commands all work and pull from the namespace:

kubectl get pods
kubectl describe pods

https://redd.it/l5q56k
@r_devops