Reddit DevOps
270 subscribers
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
How do I get HTTP access from outside to an EC2 instance in a private subnet of a VPC?

I have an app running on an EC2 instance in a private subnet. The private subnet is mapped to a NAT gateway that resides in a public subnet with an internet gateway. My question is: how can I get HTTP access to that private subnet? Currently I cannot access it from the outside world, even though I have allowed the HTTP port in the EC2 security group.

https://redd.it/z3iyzw
@r_devops
Azure - Copy files over SSH to old AIX box

Hey,

I'm trying to copy some scripts to a remote AIX box using an Azure pipeline. The scripts live in an Azure repo. The official Microsoft "Copy files over SSH" task fails because its underlying dependencies don't support the key exchange algorithms offered by the box. I've been to Microsoft support about this, and while they were quite helpful, this isn't going to change.


Ideally we'd just update the box - and therefore the OpenSSH version - but this isn't up to me and doesn't look likely.


I'm trying to figure out a way of using Plink to copy the files to the box. I can get plink to talk to the server and run various commands, but I can't work out how to copy the files from the Azure repo to the server using this method.
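One trick that may help, since plink runs a remote command with stdin attached: stream each file through the connection and write it with cat on the far side. A rough sketch — the host name, key file, and paths are all placeholders, and the second half just demonstrates the same stdin-redirect pattern locally:

```shell
# Hypothetical plink invocation (host, key and paths are placeholders):
#   plink -batch -i key.ppk user@aix-host "cat > /opt/scripts/deploy.sh" < deploy.sh
# The same stdin-redirect pattern, demonstrated with a local shell instead:
printf 'echo hello from AIX\n' > deploy.sh
sh -c 'cat > copied.sh' < deploy.sh   # stands in for the remote "cat > file"
diff deploy.sh copied.sh && echo "copy ok"
```

Since plink only needs a TCP connection and a working key exchange, this sidesteps the sftp/scp subsystem entirely; checking the file afterwards with something like cksum on the AIX side is a sensible follow-up.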


Any suggestions would be welcome.
Thanks

https://redd.it/z3l2xd
@r_devops
What is the best way to integrate the bind9 service into CI/CD?

Hi, it often happens that, due to new services or the removal or change of existing ones, I have to manually change the entries in the zone file of the bind9 service.

We use GitLab CE for CI/CD. As you probably know, A records in DNS are changed by a simple edit to the zone file. With that in mind, I'd like to know how to properly wire GitLab and the bind9 zone file together, so that users can change the necessary entries themselves when needed.
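One common shape for this is to keep the zone file in the repo, validate it on every merge request, and reload bind9 after merge. A hypothetical .gitlab-ci.yml sketch — the zone name, file paths, and runner tag are all assumptions:

```yaml
# Hypothetical sketch: MRs edit the zone file in git, CI validates it,
# and a job on the DNS host deploys and reloads bind9 after merge.
stages: [validate, deploy]

validate_zone:
  stage: validate
  script:
    - named-checkzone example.com zones/db.example.com

reload_bind:
  stage: deploy
  tags: [dns-host]          # a runner installed on the bind9 server
  only: [main]
  script:
    - cp zones/db.example.com /etc/bind/db.example.com
    - named-checkconf
    - rndc reload example.com
```

One thing a real pipeline also has to handle is bumping the SOA serial on every change, or secondaries won't pick the update up.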

https://redd.it/z3jteg
@r_devops
KOPS vs EKS

As someone who is starting to learn Kubernetes, which one would you recommend?

I would say kops, since you can see the master nodes. But at the same time I think EKS would be easier, since you mainly take care of and interact with only the worker nodes.

Let me know your thoughts.

https://redd.it/z3igx0
@r_devops
Odigos V0.1.35 - Distributed Tracing and more. New features and destinations

We just released Odigos v0.1.35 with exciting new features:

- Prometheus users: based on our distributed tracing, Odigos can now automatically generate metrics for any open-source library in use
- Honeycomb users: Odigos now supports metrics and logs in addition to distributed traces

Odigos supports several managed and open-source destinations and we are constantly adding more backends.

Using one of the destinations we support? Make sure to update to the latest version of Odigos to get the most accurate data and resolve production issues faster 🎯 🚀 💡

https://github.com/keyval-dev/odigos

Check out our supported destinations: https://github.com/keyval-dev/odigos/blob/main/DESTINATIONS.md

https://redd.it/z3hbh9
@r_devops
Introducing #path2DevOps

Hey guys!

Having had quite a few buddies wanting to transition to a devops role, I developed a "plan" to help them start building some microservices, pipelines, cloud infrastructure and dabble with multiple environments, IaC and so forth.

The result of these handovers is a journey I called "path2DevOps" - corny, I know - which aims to help people wanting to transition to a DevOps-focused role, in a follow-along manner.

The content might be quite basic for the seasoned engineers on this sub, but I want to know if this is something you see value in. And for beginners: would this be of interest to you?

PS. I know the initial quality of the videos is lacking but it took me some time to invest in gear and the quality got better later.

https://youtube.com/playlist?list=PL-WCaWbINSZ6xyqY9mNlfvgx3MgKpOuxG

https://redd.it/z2rbor
@r_devops
Are there any popular standards for deciding if and/or when to deploy software updates into the wild?

I've been seeing this tweet make the rounds again about Traeger deploying a software update right before Thanksgiving.


In retrospect, this seems like a completely obvious mistake, but you only ever see examples of who's doing it wrong.

Are there any good standards to proactively prevent these terrible user experiences?

Maybe guidance on the best timing for software releases, their frequency, and how to decide what goes into each one.

Also, I'm curious if there are great examples of software handling this well.

I know one great example is Panic's Transmit which gives you the option to update on close.
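On the proactive side, one lightweight standard is a release-freeze calendar that the pipeline itself checks before shipping. A minimal sketch with invented dates — the windows and the gate function are purely illustrative:

```python
from datetime import date

# Hypothetical freeze windows (start, end) during which nothing ships.
FREEZE_WINDOWS = [
    (date(2022, 11, 23), date(2022, 11, 27)),  # Thanksgiving weekend
    (date(2022, 12, 23), date(2023, 1, 2)),    # end-of-year holidays
]

def deploy_allowed(today: date) -> bool:
    """Return False if `today` falls inside any freeze window."""
    return not any(start <= today <= end for start, end in FREEZE_WINDOWS)

# A CI gate would call this before promoting a release:
print(deploy_allowed(date(2022, 11, 24)))  # prints: False
```

The value is less in the code than in the convention: the freeze list lives in version control, so "should we ship the day before Thanksgiving?" becomes a reviewed change rather than someone's judgment call at 5pm.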

https://redd.it/z3rvd3
@r_devops
Looking for advice

Hello there,

I'm actually a Python developer with some knowledge of the DevOps world. I'm writing this post to share my situation and maybe get some advice.

I started as an intern at this little company (I'm still an intern after 8 months). I literally started from zero with the whole cloud world (Azure) and learned DevOps, Docker, automation, etc. all on my own. And I have to say, I like this stuff: I created pipelines for my project that deploy to the development and production environments, helped deploy the APK of our product to App Center (all alone, so maybe someone more expert could have done a better job, but hey, at least it works), and other stuff like that.

The fact is, I want to learn more and maybe get as good as some folks in this subreddit (I love it), but my company is old-style and it's hard to bring in all the stuff I'm learning (yes, tons of theory and little courses/tutorials). So I'm trying to change jobs and join a company that actually needs a DevOps person (junior, obviously).

Now the real question: if you're a senior DevOps engineer looking to hire a junior, what would you expect from them? What attitude is a must-have, and what technologies in the field?

Thank you

https://redd.it/z3sz1j
@r_devops
Seeking help with moving a locally run, short-lived Docker container into a CI/deployment process

I own a git repo of end-to-end tests. They are portable in such a way that anyone can clone the repo, build the Docker image, then run docker compose up. These tests are designed to run against a publicly available web app. There are no other dependencies to worry about, which is quite nice. There is no need to worry about test result reporting either: absolutely everything is taken care of by the Docker container. Essentially it runs a shell script via CMD and exits on its own when the command completes, after 10-11 minutes.

However, this setup is kind of in a "silo": it's run from my local laptop, manually triggered after a deployment, and not automated in any way.

Does anyone have any tips and resources to integrate this type of thing into Jenkins? It would be great to maintain the ease and portability of running these tests via Docker Compose.
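One way to keep the Docker Compose portability is to have Jenkins run the exact same command you run locally. A rough declarative-pipeline sketch — the upstream job name, agent label, and compose service name ("tests") are all assumptions:

```groovy
// Hypothetical Jenkinsfile: run the e2e suite exactly as it runs locally.
pipeline {
    agent { label 'docker' }    // any agent with Docker installed
    triggers {
        // fire automatically after the deployment job finishes successfully
        upstream(upstreamProjects: 'deploy-webapp',
                 threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('e2e') {
            steps {
                // --exit-code-from makes the suite's exit code pass or
                // fail the Jenkins build; "tests" is the compose service
                sh 'docker compose up --build --abort-on-container-exit --exit-code-from tests'
            }
        }
    }
    post {
        always {
            sh 'docker compose down -v || true'   // clean up either way
        }
    }
}
```

With this shape there's no need for ECS: the container still runs for its 10-11 minutes and exits, it's just Jenkins pressing the button instead of your laptop.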

I thought about maybe running this container in ECS (we are an AWS shop) but couldn't tell if it was overkill. I also think it wouldn't help because I would just be trading my local laptop for ECS and still have the same exact problem. I would still maybe need Jenkins itself to automate some kind of schedule/trigger for this (and I'm mostly clueless around Jenkins).

Thanks in advance for any help. Appreciate it!

https://redd.it/z0zkqp
@r_devops
Starting out with my first project

Just deployed my first application (a simple bookstore) on the test cluster. The application consists of a frontend and a backend in Go talking to a Postgres DB, currently made accessible through an ingress (no proper DNS yet :( ). I had to deal with DB connection issues and with making the frontend accessible. Learnt/learning a lot during the process. I am currently creating a Helm chart for the application.
As the next iteration of this project, I am planning to host it on AWS using Terraform, and later on add some GitOps functionality.
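For the chart, a values.yaml that keeps the three pieces configurable is a reasonable starting shape — every name, image, and host below is a placeholder, not a prescription:

```yaml
# Hypothetical values.yaml sketch for a bookstore chart.
frontend:
  image: bookstore-frontend:0.1.0
  replicas: 1
backend:
  image: bookstore-backend:0.1.0
  replicas: 1
  env:
    DB_HOST: bookstore-postgres   # the Service name the backend dials
postgres:
  image: postgres:15
  storageSize: 1Gi
ingress:
  enabled: true
  host: bookstore.local           # swap for a real DNS name later
```

Keeping the DB host and ingress host as values pays off in the next iteration, when Terraform-provisioned AWS endpoints replace the in-cluster defaults.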

Any suggestions on workflows or functionality I should add to gain more experience would be much appreciated.

https://redd.it/z3v8v3
@r_devops
30% OFF Azure Data Factory Basics for Azure Data Engineer DP203+Lab

Learn data engineering through real-world Azure Data Factory projects in 1.5 hours. Start your career as an Azure data engineer - https://www.udemy.com/course/azure-data-factory-for-azure-data-engineers-with-hands-on-labs/?couponCode=E2F0178F625C337E3861

In this course, you are going to learn how to build data pipelines with hands-on labs using Azure Data Factory.

The course covers demos for all the concepts used in real-world projects, taught by an industry expert with hands-on labs.

Anyone can spend 1.5 hours and become an Azure data engineer!

This course explains what Data Factory is and what the various core components of Azure Data Factory are. It is useful for those who:

- aspire to become Azure data engineers
- want to switch their career from software engineering to data engineering
- want to expand their skills from AWS or GCP data engineering to Azure
- want to gain an in-depth understanding of the latest trends in Azure data engineering

This course covers the following topics:

- Basic concepts of the various components in ADF
- What ETL and ELT are
- What a linked service is, with a demo for creating one
- What a dataset is, with a demo for creating one
- What an integration runtime is
- What Azure Data Lake is, with a demo for creating an Azure Data Lake storage account
- What Cosmos DB is, with a demo for creating it
- What the Copy activity is, with a demo for creating a data pipeline that moves data from ADLS Gen2 to Cosmos DB using it
- What the control flow activities are: Get Metadata, Filter and ForEach, with demos for each and for other control flow activities
- What the Data Flow activity is, with a demo for removing duplicate data using it

The course is useful not only for freshers but also for experienced data engineers who want to gain more skills in Azure data engineering.

https://redd.it/z3wfg2
@r_devops
Does lead time - cycle time = queue length?

I've just started reading Accelerate and I'm trying to wrap my head around the tempo and stability metrics and where Kanban metrics fits in.

They mention development lead time, which they define as "from commit to production time". Do they mean "commit" to doing the work, or "commit" the code?

Both are very different things.

I would imagine the high-level measure of software delivery team performance they're after is the length of time it takes to complete work after agreeing to do it. Or is it purely focused on how long it takes code, once written, to get into production?
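As a hedged aside on the arithmetic in the title: lead time minus cycle time gives the time an item spends waiting, not the queue length; Little's Law (items in queue = arrival rate × average wait time) is what connects the two. A toy example with invented numbers:

```python
# Toy numbers (in days) for one work item - all invented for illustration.
lead_time = 10.0    # committed to the queue -> running in production
cycle_time = 3.0    # work actually started -> running in production
wait_time = lead_time - cycle_time          # time sitting in the queue
assert wait_time == 7.0

# Little's Law: average queue length = arrival rate * average wait time.
arrival_rate = 2.0                          # items committed per day
queue_length = arrival_rate * wait_time
assert queue_length == 14.0                 # items waiting, on average
```

So "lead time − cycle time = queue length" only holds dimensionally if you multiply by throughput; on its own the difference is queue *time*.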

I'm starting to realize that there are two types of backlog items:

1) ideas or minor issue fixes that you might do one day (not committed)

2) work that you have agreed and committed to which is added to your queue

How do you handle these differences?

https://redd.it/z186s3
@r_devops
Going from junior to mid-level

Been in the industry for 3 years and I've come to learn that years of experience don't mean anything. I still feel like a junior and sometimes need to be told what to do next. I might start looking for a new job, and one of the things worrying me is being able to bridge the gap between junior and mid. I don't want to be in a position where they think I'm more senior than I really am and then, when I start the job, it all goes to shit.

What are some things I can do/you did to make yourself go from junior to mid or senior? I want to be able to hold my own.

https://redd.it/z17g9h
@r_devops
A couple of doubts regarding YAML's syntax.

Everything in YAML, at least visually, seems to be the exact same key: value pair combination. I've familiarized myself with all the YAML syntax, but I'm having a really hard time figuring out whether what I'm entering is an array, sequence, set, map, collection, dictionary or object.

1. I simply use [ ] for simple values and { } for key: value pairs. The only real difference I could notice is that when using [ ] you can have duplicates inside, but if you enclose the same content within { }, the duplicate items disappear. Am I missing something more important here?
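The duplicate behaviour follows from what the two flow styles are: [ ] is a sequence (a list), while { } is a mapping (a dictionary), where a repeated key collapses. The same thing in Python terms, as a rough analogy:

```python
# YAML flow sequence [a, b, a] -> a list: duplicates kept, order matters.
seq = ["a", "b", "a"]

# YAML flow mapping {a: 1, b: 2, a: 3} -> a dict: the repeated key "a"
# collapses with the later value winning (strict YAML parsers may instead
# reject duplicate keys outright).
mapping = {"a": 1, "b": 2, "a": 3}

assert seq == ["a", "b", "a"]
assert mapping == {"a": 3, "b": 2}
```

So the [ ] vs { } choice isn't about duplicates at all; it's sequence vs mapping, and duplicate "disappearance" is just mapping-key semantics.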

2. Also, can someone please explain the purpose of ? sets in YAML? Are they only there to assign a null value to the key and drop any duplicate entries? And why is it emphasized in bold that they are "unordered", unlike "ordered" lists?

3. You can use * to recall all the values of &, so why use << on top of it? Isn't that redundant?

ref: &ref
  girl: F
  boy: M

Gender: *ref
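On question 3, the difference is where the aliased mapping lands: a plain alias nests the whole mapping under the key, while the << merge key splices its entries into the current mapping (where they can also be overridden). Roughly, in Python terms:

```python
ref = {"girl": "F", "boy": "M"}

# "Gender: *ref" - the alias nests the whole mapping under Gender:
aliased = {"Gender": ref}

# "<<: *ref" plus an extra key - the merge key splices ref's entries
# directly into the current mapping alongside its own keys:
merged = {**ref, "other": "X"}

assert aliased == {"Gender": {"girl": "F", "boy": "M"}}
assert merged == {"girl": "F", "boy": "M", "other": "X"}
```

So << isn't redundant: *ref alone gives you a nested copy, << gives you inheritance-style flattening.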

4. The most confusing of all, for me, is tags. I read the YAML documentation about tag handles, verbatim tags and shorthands. The more I read, the more confused I get.

I know that !! is used to specify a data type, and that you can enter a tag using the key: value pair method, but I have no idea how to properly use the single-! tag and the %TAG directive.

%TAG !e! tag:example.com,2000:app/
---
!e!foo "bar"
!<tag:example.com,2000:app/foo> "bar"

To me this code seems to function exactly like an anchor/alias combo rather than an actual tag: essentially, we're replacing the !e! with the URL. Isn't the primary goal of tags to help find your content and resolve your search queries faster? I don't see that happening anywhere here.

Sorry if these questions are too basic or outright dumb.

https://redd.it/z0xkec
@r_devops
Bicep templates

Hello.

Is there a big repository of Bicep templates anywhere that you can look to for examples? Microsoft has a few, but they're usually not filled with values; rather, they're empty, and from there I have to try my best until something doesn't work and then fix it.

Currently it just feels like I'm at times reinventing the wheel, since some resources won't accept values that others would usually accept.
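For comparison, here is the kind of filled-in example being asked for — a storage account sketch where the name, API version, and SKU are just assumed illustrative values, not a reference implementation:

```bicep
// Hypothetical sketch: a storage account with concrete values filled in.
// The API version is an example; pick one your subscription supports.
param location string = resourceGroup().location

resource sa 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stbicepdemo001'   // must be globally unique, 3-24 lowercase chars
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

The per-resource quirks mentioned above usually come down to the API version in the resource type: different versions accept different properties, so two otherwise identical templates can disagree on what values are valid.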

https://redd.it/z10yl2
@r_devops
Do you action alerts in non-live environments?

At my company, we have alerting set up for all environments (dev/qa/live). As the DevOps person responsible for multiple services, each of them having 10-20 alerts set up, I'm in charge of setting up and maintaining the alerting, and of steering the dev team to do the same.

Now, our live services usually work fine and we rarely get alerts fired there; a lot of the time it's the dev/qa alerts that fire, mostly because of some failing dependency, bad data used in dev, or a similar reason.

I'm usually inclined to ignore these and focus on actual work, but I'm being pushed from above to action these alerts - refine thresholds, tweak queries, etc. (or make sure the developers take care of them).

How do you approach alerting in your teams?

https://redd.it/z0tyrt
@r_devops
Please recommend Student Information System solutions for higher ed (colleges and universities)

Looking for potential solutions to replace our university SIS which is Oracle People Soft. A few requirements our university needs for the SIS to have:

1. an active directory used for account creation, reading, updating, deleting, and authentication.
2. a built-in LMS like moodle.com to handle student enrolments, and attendance, create exam schedules, and be able to process final grades
3. a dashboard for reporting and analytics
4. a complete and comprehensive program for accounting control to track daily financial transactions
5. an online payment gateway for student payments and account balance updates
6. the SIS should be able to process online applications and student placement information
7. workflow processes for making grade changes and creating course and major sheet exceptions

Currently, the existing solution has MOST of the things listed above as custom apps and integrations. I understand there is no perfect solution that will offer all of these, but any recommendations for handling most of the requirements with a ready-made solution are highly appreciated. Obviously, some things will have to be built as custom integrations, which is no problem; I'm more than open to opinions and feedback on potential options. Thanks in advance!

https://redd.it/z1ai3p
@r_devops
How much collaboration do you need to fix your config when your deployment is broken?

I wonder whether you work in isolation or set up some kind of war room when you are facing a minor or major outage in your application.

We are thinking of adding capabilities for online simultaneous editing of config files, for quick fixes or long-term configuration definition, but I really don't know if it makes sense, especially if you are using GitOps or any git-based configuration workflow. I can see some value, but I don't know if it's better to open a Zoom call and share your screen to work on it, instead of having some kind of Etherpad-like experience, even if we (Monokle) can do real-time validation of your config files.

Do you see value in such a feature? What requirements would make it a non-starter if they weren't met?

https://redd.it/z48k3b
@r_devops
Should I migrate from Kustomize to Helm?

Hi!
I'm currently facing some "limits" of the Kustomize approach. Basically, I need a sort of "preview environments" feature based on PRs. I managed to get something working using ArgoCD and its PullRequestGenerator, which uses a specific Kustomize overlay to deploy those environments when a PR is opened.
But the problem is: I need to pass some values to Kustomize from ArgoCD, and I guess there isn't an easy way of doing that.

Let's say a PR is opened from a branch: feature-12.

This triggers a GitHub Action that pushes the container to a private registry, tagged with the branch name and other stuff.

ArgoCD is then notified by a webhook that a PR is open, and creates a namespace (named after the branch) with all the stuff deployed.

From ArgoCD I can pass some values to Kustomize, like namePrefix and images, and that's fine.

But ideally the stuff just deployed from the PR should be reachable at specific URLs like feature-12.example.com or feature-12-api.example.com.

I cannot pass that value to Kustomize.
So I think the only way is to migrate to Helm and then pass those values via values.yaml.
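For reference, this is the shape the Helm route takes: the pull-request generator exposes template variables like {{branch}} and {{head_sha}}, which can be fed straight into Helm parameters. A trimmed, hypothetical sketch — the repo, owner, chart path, and value names are all placeholders:

```yaml
# Hypothetical ApplicationSet sketch: {{branch}} from the PR generator
# is injected into the chart's ingress host via Helm parameters.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-envs
spec:
  generators:
    - pullRequest:
        github:
          owner: my-org          # placeholder
          repo: my-app           # placeholder
  template:
    metadata:
      name: 'preview-{{branch}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/my-app.git
        targetRevision: '{{head_sha}}'
        path: chart
        helm:
          parameters:
            - name: ingress.host
              value: '{{branch}}.example.com'
            - name: image.tag
              value: '{{branch}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{branch}}'
```

Templating the host this way also sidesteps the namePrefix problem below, since the chart can derive the Redis Service name and the env var that points at it from the same value.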

Any suggestions about that?

P.S.: Kustomize also inserts namePrefix into a lot of things, and those names usually appear in some container environment variables of the deployment too.
For example, an env var that refers to redis://redis-service:6379, when deployed to dev with dev as the namePrefix, becomes redis://dev-redis-service:6379, so the env var must be patched as well. And while this is fine in a scenario with 3-4 different environments, it's not feasible with multiple environments dynamically created by ArgoCD.

https://redd.it/z49fdx
@r_devops
How to access a Cloud SQL instance in GCP from a different project?

I have 2 GCP projects, each with its own VPC, and one of them has a Cloud SQL instance. The thing is, after setting up VPC peering between those two separate VPCs, I still can't access the database from the VPC in the other project. Am I missing anything here?

https://redd.it/z49wud
@r_devops
Is it possible to update a .NET Core Web API in IIS without taking it down?

I'm currently using a Jenkins build server - basically building, deleting, and deploying (copying) - but the endpoint goes down while doing this.

Is it possible to use e.g. Azure DevOps to update the Web API in IIS without taking the application down?

https://redd.it/z49pvs
@r_devops