Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Is there a way to run a Jenkins Blue Ocean pipeline remotely through a URL?

Hi

I have a problem using Jenkins Blue Ocean.

A normal Jenkins job can be built remotely via URL with an authentication token and build parameters,

but the Blue Ocean pipeline configuration doesn't have any remote build options.

Is there a way to run a Jenkins Blue Ocean pipeline remotely through a URL?
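For comparison, triggering a classic parameterized Jenkins job remotely is a POST to the job's buildWithParameters endpoint with the trigger token plus basic auth. A standard-library sketch; the host, job name, user, and token values below are placeholders, not real:

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_trigger_url(base: str, job: str, token: str, **params) -> str:
    """Compose the classic remote-trigger URL for a parameterized job."""
    query = urlencode({"token": token, **params})
    return f"{base}/job/{job}/buildWithParameters?{query}"

def trigger(base, job, token, user, api_token, **params):
    """POST the trigger with basic auth; Jenkins answers 201 when queued."""
    url = build_trigger_url(base, job, token, **params)
    auth = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req = Request(url, data=b"", headers={"Authorization": f"Basic {auth}"})
    return urlopen(req).status

# Placeholder values:
# trigger("https://jenkins.example.com", "my-pipeline", "JOB_TOKEN",
#         "alice", "USER_API_TOKEN", TARGET_ENV="staging")
```

As far as I know, Blue Ocean doesn't add its own trigger mechanism: the pipelines it creates are regular (often multibranch) jobs, so the classic "Trigger builds remotely" option and this endpoint should still apply, with branch jobs living under a nested /job/&lt;pipeline&gt;/job/&lt;branch&gt;/ path.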

https://redd.it/ngty43
@r_devops
Legacy Application Modernization: 7 Alternative Ways to a Digital Future

According to Flexera, most technology products have a life cycle of only five years. After that, outdated technology becomes a severe IT issue that almost every organization eventually faces: antiquated IT systems generate bugs, errors, and critical failures with a domino effect that must be stopped.

Read why legacy application modernization is so essential and choose the right way to upgrade your legacy technology.

With companies spending 60-80% of their IT budget supporting legacy systems and applications, 44% of CIOs rightly believe that complex legacy software is slowing business digital transformation.

Gartner says that for every dollar spent on digital innovation, three dollars are spent on upgrading applications. And this disproportionate amount of money wasted on keeping legacy systems afloat could be an investment in further development. Therefore, many companies are looking for ways to reduce the dependency on legacy technologies and move forward into the future.

https://redd.it/ngxgz3
@r_devops
What framework activities are completed when following an evolutionary (or spiral) user interface development process?

I need to know more about which framework activities are completed when following an evolutionary (or spiral) user interface development process. Can someone please help?

https://redd.it/ngwlff
@r_devops
Disaster Recovery Plan (DRP) - doing it in-house

Good day, community. I'm currently working at a smallish company as a junior DevOps engineer. In a previous life as a business analyst for bigger corporates, I was exposed to DR testing and DR planning, but since those were bigger corporates, the way they handled it was always to outsource DR to specialist companies. In my current company we're planning to do it in-house (DIY). I just wanted to know if you're doing the same at your company, and whether there's any specific documentation/software you used to: 1 - document the DR approach and plan, and 2 - test the plan (from an automation perspective, using Ansible for instance). Our stack is fairly open source: Docker/Python/Ansible/GitLab/Postgres/Redis/Linux Ubuntu servers and VMs. Some general feedback and advice would be appreciated. TIA

https://redd.it/nhig0l
@r_devops
Tutorial 51 - Methods With The Same Name In GO | Golang For Beginners

Watch the video tutorial on creating methods with the same name in Golang, premiering today at 10 AM IST on my YouTube channel, Brainstorm Codings.

Make sure to subscribe to the channel if you find it interesting, like the video, comment your thoughts, and share.

https://redd.it/nhhjbj
@r_devops
AWS service with CI

Has anyone hooked up their AWS services to CI? I use EC2 and S3 quite a lot and I deploy using the AWS CLI. But I was curious whether anyone does testing (however you do it) and then deploys to an EC2 instance using the AWS CLI?

https://redd.it/nhcubs
@r_devops
Opinion: Kubik, a language to define validation rules

I'm working on a language for defining validation rules. The purpose is to validate Kubernetes and other cloud configurations.

In this post I'm trying to collect opinions on the overall syntax. The entire doc is in the README.md; the examples there use real-life cases (no k8s, etc.).

https://github.com/kubevious/kubik

Thank you!

https://redd.it/nhcccx
@r_devops
Need help updating dependency on Lambda

I'm using a Python (3.8) runtime on AWS Lambda, and one of the packages I'm using requires OpenSSL 1.1.1+, but the Amazon Linux instance used by Lambda has an older version (1.0.2k).

I've read about people doing this for Node.js runtimes (Stack Overflow link), but I'm too much of a noob to fully understand it.

How can I achieve this? I'm already using Lambda Layers to update Python libraries, but no idea how to do this for native Linux ones.

https://redd.it/nhc6ib
@r_devops
Managing Binaries/Executables for Jenkins Agents

I'm getting a CI/CD server set up in Jenkins on Kubernetes, and I'm struggling to find good documentation around my issue. How do people manage executables required for a pipeline?

Some things that come to mind are the gcloud sdk and sops library for remote decryption of secrets, but I'm sure some other things could apply. So my question is this - what are the "best practice" ways of handling these things?

My initial thought was to create a custom image with all of the goods I need, but I hit the catch-22 of needing the gcloud SDK to access the image, because we store our images in GCP's Container Registry. Some other things I've read include creating permanent agents with the software you need included, but my current setup uses the Kubernetes plugin to dynamically create pods for agents and assign them to nodes in our GKE cluster.

So, I'd love to hear everyone's thoughts, experiences, and industry go-to's for the issue!

https://redd.it/nh8qaz
@r_devops
What does a DevOps engineer do? A different point of view

Recently I spoke with an IT recruiter. She said that a DevOps engineer is a sysadmin who knows a little backend and frontend development. I said instead that a DevOps engineer is not a sysadmin plus a developer, but a sysadmin who knows how to manage system infrastructure with code; I was referring to Python and Ansible above all.
Who is right? Let me know, thanks.
Maybe neither, lol.

https://redd.it/nh6lss
@r_devops
Does the Datadog monitor query support multiple tags with the same key?

I am using the Terraform datadog_monitor resource to deploy some monitors. One thing I came across is that the "query" parameter of the datadog_monitor resource only works with single tags, as shown below.

resource "datadog_monitor" "http_dev_test_demo" {
  ..........................
  query = "'http.can_connect'.over('environment:dev').by('host','instance','url').last(5).count_by_status()"
  .........................
}

The below one does not work

resource "datadog_monitor" "http_dev_test_demo" {
  ..........................
  query = "'http.can_connect'.over('environment:dev','environment:demo','environment:test').by('host','instance','url').last(5).count_by_status()"
  .........................
}

Has anyone come across this issue, or does anyone know a solution other than creating separate monitors for each environment?

https://redd.it/ngnrga
@r_devops
Restricting use of a certain Python library for developers

Imagine that we've learned the Python simplejson package has a major security vulnerability. The engineering teams have spent a few days replacing it with safer substitutes, so it's now secure and safe to ship. How would you ensure there are no regressions in the future? I.e., that no one adds the simplejson package back in.

https://redd.it/nhujsz
@r_devops
Has anyone found success in switching to a night role?

Generally, this question is rooted in a lifelong struggle with ADHD. For a long time (years) I've avoided the reality that I work much better at night, focus better at night, and have zero need for medication at night; the alternative is living with the anxiety and guilt of struggling through each day in the workplace. I believe this may be a better solution than getting back on stimulants; I can't put my body through that anymore.


My question is: has anyone out there had this realization and switched to night hours, or found a role fitting that description in the wild? If you did make that switch, did you find that it worked for you? Any unexpected tradeoffs? I can think of a few that might come up, like communicating with daytime-hours teams or mandatory early meetings when they happen, etc.


Typically I don't see this in job postings but people in this sub are probably used to working with staggered team schedules/international teams in different time zones anyway.


Thanks for humoring my question.

https://redd.it/ni7xw5
@r_devops
Mid-Level DevOps Engineer Interviewing a Sr DevOps Engineer

I work at a large company, and my manager asked me to conduct the “team member interview” portion of the hiring pipeline.

I'm a mid-level DevOps engineer with 2-3 years' experience and will be interviewing an applicant with 6 years'. I'm conducting the interview for our sister team, and am familiar with their tech stack, but I'm not sure how to “interview up”, as I've only ever interviewed interns and seasonals (college job, not tech).

Any Sr engineers or up-interviewers have advice?

Thanks guys!

PS: Love this subreddit

https://redd.it/ni33gl
@r_devops
Gitlab-CI: Passing version from one stage to the next

I'm running into a bit of an issue which I'm not sure I'm solving in the right way. This is for a personal project, basically to continue learning things about gitlab-ci, etc...


What I am trying to achieve is:

1. Commits pushed to master
2. Gitlab CI runs on master and runs tests, lint, whatever
3. If tests pass, a CI stage runs an automatic versioning tool (release-it, semantic-release, etc.), bumps the version number, and creates a commit with the updated package.json and CHANGELOG.md
4. The new version is then packaged into a docker image (tagged with new version, sentry release created with new version).
5. New docker image is pushed to deployment.

Problem being that the commit made in step 3, is not reflected in steps 4 and 5.

e.g.

Software is at 1.0.0 and I make some changes and run the pipeline. Step 3 runs and says, "cool, we can make this into 1.0.1 " and makes a commit back to the repo.

Steps 4 and 5 run and bundle the software and deploy it, which still shows 1.0.0 on the front end, with the version from package.json and without the updated CHANGELOG.md which was created/updated during the pipeline.

I hope that makes sense, and I'm totally unsure if I'm approaching this the right way. Basically I want the pipeline to create the next version of the software and release it.

I've found a bunch of stuff on automatic semantic versioning, but nothing about carrying the new version forward through the pipeline.
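One common pattern for carrying the version into later stages of the same pipeline is to have the release job write it into a dotenv artifact, which GitLab injects as a variable into downstream jobs. A sketch; the stage/job names and registry URL are made up:

```yaml
# .gitlab-ci.yml fragment (illustrative)
release:
  stage: release
  script:
    - npx semantic-release          # bumps version, commits, tags
    - echo "APP_VERSION=$(node -p "require('./package.json').version")" >> build.env
  artifacts:
    reports:
      dotenv: build.env             # APP_VERSION becomes a variable downstream

package:
  stage: package
  needs: ["release"]                # pulls in the dotenv variables
  script:
    - docker build -t "registry.example.com/app:${APP_VERSION}" .
    - docker push "registry.example.com/app:${APP_VERSION}"
```

An alternative is to skip carrying the commit forward entirely and instead build from the tag that semantic-release pushes, using CI_COMMIT_TAG in a tag-triggered pipeline.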

https://redd.it/nhyxq5
@r_devops
Question about nip.io and including port number in ip address

Hi, I'm relatively inexperienced in this area. I have a machine with an external IP address from GCP. I want to create an OAuth app (say Google OAuth), but they do not accept an IP address as it is not a "valid url". I did some digging and learned about nip.io, which I assume is just a service that forwards requests from xyz.nip.io to xyz, xyz being the IP address.

Now, I write web apps that listen on a certain port (say 2021) and receive requests there. So I would usually go to, say, 11.123.12.12:2021 for the index of my web app. But I don't know how to specify that with nip.io. Should I do 11.123.12.12.nip.io:2021, say? Or maybe 11.123.12.12.nip.io works and some default port receives the request on the GCP machine (I did some googling to no avail).
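For what it's worth, nip.io only replaces the DNS lookup (a name like 11.123.12.12.nip.io resolves back to 11.123.12.12); the port is never part of DNS, so it is written after the hostname exactly as with a raw IP. A quick sanity check using the example IP and port from the post:

```python
from urllib.parse import urlsplit

# Same URL shape with and without nip.io; only the hostname differs.
raw = urlsplit("http://11.123.12.12:2021/index")
via_nip = urlsplit("http://11.123.12.12.nip.io:2021/index")

assert raw.port == via_nip.port == 2021          # port handled identically
assert via_nip.hostname == "11.123.12.12.nip.io" # this is what DNS resolves
```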

I'm hoping someone can provide insight on this (I'm a newbie, so simplified terms would be great if possible), and tell me whether my understanding of using nip.io in this situation works. I hope this question is OK here!

https://redd.it/nhyj5n
@r_devops
Github integration with teleport

Hi,

I have configured Teleport with GitHub as the OAuth provider. I am able to log in via the web UI and tsh, and I get the admin role which I configured while setting up GitHub OAuth.

The problem is that the node connection user list shows the username of the GitHub profile I logged in with, a username that normally doesn't exist on the nodes.

How should I set up GitHub OAuth so that the logins defined in a role are given to the user signing in via GitHub? Normal username and password authentication gives the correct login list for all the nodes.

Any help will be appreciated. Thank you.

https://redd.it/nhxyzc
@r_devops
Will this CI/CD pipeline work out?

Hi all,

I'm a junior dev trying to come up with an idea for how my company (very small) could use a CI/CD pipeline to streamline some processes that are still done manually.
This is just an idea without any details; I'd be grateful if you point out what I'm missing.

3 Stages: Dev, Test, Production
3 Branches: feature, develop, master

1. Every Push to any feature branch triggers build, unit/integration tests and code analysis
2. After merging feature -> develop
a. Build docker image
b. Deployment to Dev stage
c. Extended automated tests
d. If passed, deployment to Test stage
3. Optionally manual tests on Test stage or customer review
4. Trigger deployment to Production stage, merge develop -> master branch


Thanks, any feedback is highly appreciated.

https://redd.it/nhx0xp
@r_devops
IaC and secrets

How do you guys handle your secrets (service passwords/tokens) as code? We wanted our secrets to live in source control (encrypted with git-crypt) and be written to some secrets storage, like Vault or AWS SSM, with Terraform. However, since we've been using Terraform Cloud, we couldn't get git-crypted files decrypted on the Terraform Cloud side.

My colleague is working on a Terraform provider for reading git-crypted files. If it works out, we'll have our secrets under source control, managed by Terraform. They will remain decrypted in the TF state, but we're OK with that since Terraform Cloud stores it securely. I expect it to work well, but I'm wondering if there are any other ways to manage secrets under VCS securely. Can you guys share your experience?

https://redd.it/nhuhdx
@r_devops