Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Not DevOps, but been tasked with setting up our CI/CD pipeline

Hi all,

I'm not a DevOps engineer, just a software dev working on a legacy product, and we want some automation to bring us towards CI/CD.

We have Jenkins running, which I have now got to automatically build on pull requests. The next step is deployment, but I have no knowledge of the best technologies to use.

Our product is just an executable and configs in a zip file that needs to be placed onto an in-house VM, unzipped, and a cmd script run.

I don't want it to always deploy automatically; instead I'd (ideally) like a button in Jenkins I can click to deploy to a server of my choice.

Can anyone point me towards the best technologies for this, preferably free (not a requirement)? I have very little knowledge of the space.
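
A parameterized Jenkins job already covers the "button" part: "Build with Parameters" gives you a manual trigger plus a server picker, so no extra product is strictly needed. A minimal declarative Jenkinsfile sketch, with hypothetical server names and the actual copy/unzip/run step left as a comment:

    pipeline {
        agent any
        parameters {
            choice(name: 'TARGET',
                   choices: ['vm-test-01', 'vm-prod-01'],   // hypothetical server names
                   description: 'Server to deploy to')
        }
        stages {
            stage('Deploy') {
                steps {
                    // copy the zip to the chosen VM, unzip, run the cmd script --
                    // e.g. over SSH or WinRM, depending on the target
                    echo "Deploying to ${params.TARGET}"
                }
            }
        }
    }

Since the target is a plain in-house VM, plain SSH/WinRM from Jenkins (or Ansible, if this outgrows a single script) is usually enough.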

https://redd.it/tqvq4y
@r_devops
Performance testing in CI/CD pipeline?

For people who have set up performance tests (Gatling, JMeter, Locust, etc.) in your CI/CD pipeline: how do you run them? Curious whether you (1) run the performance benchmarks within the CI/CD infrastructure or (2) run them post-deploy in lower environments. I'm leaning towards (1) at the moment, if only to avoid cleanup issues; just curious what others are doing in terms of testing performance as part of the pipeline.
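
For reference, option (1) with Locust can be as small as one file run headless from a CI step; a minimal sketch (the endpoint and the numbers below are placeholders):

    # locustfile.py
    from locust import HttpUser, task, between

    class ApiUser(HttpUser):
        wait_time = between(1, 2)   # pause 1-2s between tasks per simulated user

        @task
        def get_health(self):
            self.client.get("/health")   # placeholder endpoint

Then run it against whichever environment the pipeline just deployed, e.g. `locust -f locustfile.py --headless -u 50 -r 10 --run-time 2m --host https://staging.example.com` (user count, spawn rate, and host are assumptions).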

https://redd.it/tr64tj
@r_devops
Software inventory tool?

So we're running a mixed Linux and Windows environment of roughly 1k servers with a 60/40 split. We're trying to gain visibility into what packages are installed across the environment on both OSes. For example, when there's a .NET Core vulnerability, remediation would be a lot simpler with a single pane where we can see this information. Something simple: server name, installed packages, package versions, etc.

Right now we use Lansweeper (it's cheap), but it's clearly designed with Windows in mind and, from what we've seen, isn't the best at finding packages on Linux. The fact that you can't connect multiple AWS accounts to it is also a pain in the ass, but that's out of scope.

Anything y'all can recommend? I'm also okay with having two sets of tools, one for Windows and one for Linux, and then marrying them in a dashboard in Grafana or something. But if there's a single tool, that'd be better.
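
Not an answer so much as one cross-platform candidate worth evaluating: osquery (plus a fleet manager on top to aggregate results) exposes installed packages on both OSes through the same SQL interface, e.g.:

    -- Linux, Debian-based (use rpm_packages on RHEL-based hosts)
    SELECT name, version FROM deb_packages WHERE name LIKE 'dotnet%';

    -- Windows: installed programs
    SELECT name, version FROM programs WHERE name LIKE '%.NET%';

The tables are real osquery tables; the LIKE patterns are just illustrative.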

https://redd.it/tqz7te
@r_devops
Not able to create an index management policy on OpenSearch for logs, any ideas?

Hello All,

I was recently tasked with creating an index management policy to discard all logs after X days. My JSON knowledge is limited, but I can figure most things out by piecing info together. I first wanted to test whether I could delete the contents of an entire index, so I tried the following:

    POST /(index_name)/_delete_by_query
    {
      "query": {
        "match_all": {}
      }
    }

The issue is that nothing gets deleted. I originally got a `blocked by: [FORBIDDEN/8/index write (api)];` error, so I updated the index's `_settings` to set `index.blocks.write` to `false`.

Now when I run the POST query above, all I get is a 200 success response, but nothing is deleted.

Anyone have a clue what I'm missing here?
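
Two notes that may help. `_delete_by_query` returns 200 even when it matches nothing, so check the `deleted` count in the response body. And for time-based retention you may not need `_delete_by_query` at all: the ISM plugin can delete whole indices once they pass a given age. A sketch, assuming daily indices matching `logs-*` and a 30-day cutoff (both assumptions):

    PUT _plugins/_ism/policies/delete_old_logs
    {
      "policy": {
        "description": "Delete log indices after 30 days",
        "default_state": "hot",
        "states": [
          {
            "name": "hot",
            "actions": [],
            "transitions": [
              { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
            ]
          },
          {
            "name": "delete",
            "actions": [ { "delete": {} } ],
            "transitions": []
          }
        ],
        "ism_template": { "index_patterns": ["logs-*"], "priority": 100 }
      }
    }

On older Open Distro builds the path is `_opendistro/_ism/policies/...` rather than `_plugins/_ism/policies/...`.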

https://redd.it/tralv4
@r_devops
Managing cross-technology/tool CI/CD pipelines

At work I'm currently using multiple tools for a variety of tasks: e.g. Terraform to provision infra, Ansible to configure it once deployed, and custom Python scripts to generate reports. I have created a pipeline in Jenkins but I'm facing lots of issues transferring information from one tool to another.

I need some guidance on how to transfer information between these tools; e.g. Ansible needs to know the IP of whatever the Terraform code deploys.

Currently I am writing the info to a single file and passing it around, but it has become a bottleneck, since any change for a new information requirement breaks the code.

PS: Using a single tool is not feasible due to legacy issues and compliance issues.
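
One common pattern for the Terraform-to-Ansible hop specifically: declare whatever Ansible needs as Terraform outputs and read them back with `terraform output` in the pipeline, rather than maintaining a hand-written file. A sketch (the resource, output, and playbook names are made up):

    # outputs.tf
    output "web_ip" {
      value = aws_instance.web.public_ip
    }

    # Jenkins shell step
    WEB_IP=$(terraform output -raw web_ip)
    # the trailing comma makes Ansible treat this as an inline host list, not a file
    ansible-playbook -i "${WEB_IP}," configure.yml

For more than a couple of values, `terraform output -json` returns the whole set as one structured blob that the Python reporting scripts can consume as well, so adding a new field no longer breaks a bespoke file format.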

https://redd.it/trv07w
@r_devops
Trigger Jenkins job with specific commit message

Hi r/devops,

I have a job in Jenkins. I need that job to be triggered when a specific commit message is pushed to the SCM repo, e.g. "fix ISSUEID-123 Issue fixed. /jenkinsbuild".

So in this case, if the commit message contains /jenkinsbuild, it should trigger the build in Jenkins. I came across a Jenkins plugin named commit-message-trigger-plugin, but that seems to have been removed.

Do you know any way to achieve this? Please help me with the steps or any documentation you have.

Thanks
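
If the job is (or can become) a declarative pipeline, one plugin-free option is the built-in `when { changelog ... }` condition, which gates a stage on the commit messages in the build's changeset. A sketch (the regex and stage contents are illustrative):

    pipeline {
        agent any
        stages {
            stage('Build') {
                // run only when a commit message matches the regex
                when { changelog '.*/jenkinsbuild.*' }
                steps {
                    echo 'Commit message requested a build'
                }
            }
        }
    }

Note the job still has to be started by SCM polling or a webhook; `changelog` only decides whether the stage runs once a build is underway.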

https://redd.it/ts3ere
@r_devops
Best lab environment for practicing Ansible / automation?

I used to write Ansible, and in recent years I haven't. I'd like to practice again, without spinning up loads of VMs. Is there a downloadable / online lab environment that makes it easy to manage small VMs locally? Something like an OCI container that spawns multiple services, etc. Thanks.
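
One lightweight setup that avoids VMs entirely: plain containers as Ansible targets via the `community.docker.docker` connection plugin, which drives `docker exec` so no SSH is needed. A sketch, using an image that ships Python since most modules require it (image and host names are arbitrary):

    docker run -d --name node1 python:3.11-slim sleep infinity
    docker run -d --name node2 python:3.11-slim sleep infinity

    # inventory.yml
    all:
      hosts:
        node1:
        node2:
      vars:
        ansible_connection: community.docker.docker

    ansible -i inventory.yml all -m ping

Tear-down is just `docker rm -f node1 node2`, which makes it cheap to reset the lab between practice runs.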

https://redd.it/tsyi8x
@r_devops
Download a file from S3 using Ansible

I had to work this out and thought I'd share. We needed a way to download files from a non-public S3 bucket to remote instances using local AWS credentials (i.e. not on the instances).

`playbooks/filter_plugins/presign.py`:

    import boto3

    def presign(s3_url):
        if not s3_url.startswith("s3://"):
            return s3_url
        # slice off the scheme; lstrip("s3://") would strip *characters*, not a
        # prefix, and can eat leading s/3 characters from the bucket name
        path = s3_url[len("s3://"):]
        bucket, key = path.split("/", 1)
        session = boto3.Session(profile_name="default")
        return session.client("s3").generate_presigned_url(
            "get_object", Params={"Bucket": bucket, "Key": key}
        )

    class FilterModule:
        def filters(self):
            return {"presign": presign}

usage:

    - name: get s3 file
      get_url:
        url: "{{ 's3://bucket/key.tar.gz' | presign }}"
        dest: /tmp/key.tar.gz

https://redd.it/tt6tr5
@r_devops
Just started using Argo-CD... BRUH

How have I never used this amazing tool? It literally makes DevOps and GitOps so easy.

https://redd.it/tt4oc5
@r_devops
RFC for Breeze--a structured Cloud-as-Code language

Hi folks!

I'm soliciting feedback for a new cloud-as-code language that is cloud-agnostic, statically typed, and constraint-solving (it can catch a ton of deployment errors before they ever happen). It will be 100% open-source and retargetable (you can generate Terraform or whatever, if a backend supports it), but I'd love some feedback. I know there will be a lot of "argh, not another technology to learn", but the goal is really to be able to quickly and easily deploy infrastructure and applications in a cloud-agnostic fashion while integrating secrets and property management.

A straw-man is available at https://github.com/sunshower-io/breeze. Status-wise, the runtime and module system are complete and deployed in a wide variety of environments, but they are still proprietary.

Parsers will be available for Go, TypeScript, and Java. We will probably support CloudFormation and ResourceManager first, but I'll certainly consider Terraform generation if there's sufficient interest.


Edit: I should also note that an overarching design goal is to have this generated from a visual modeler. Having done this several times, it's just easier to hook into an actual language than to try to extract stuff from a general-purpose intermediate like JSON/XML/YAML.

https://redd.it/tt8ysw
@r_devops
GUI for scheduled db/data backups (and restore)?

As in the title: I've been trying to Google my way to something I can run in Portainer and use to schedule, monitor, and possibly restore or roll back data directories and NoSQL/SQL database dumps. The backup sources would be in Docker containers and the destination either a local data directory or S3-compatible storage.

I’m picturing nice easy forms for choosing backup frequency, inputting backup commands if they’re needed for database exports, and easy to read lists of backups.

I feel like this most likely exists in some form already but I’m finding myself in search keyword hell looking for the right tool for the job. Does anyone know if it’s real? Please don’t tell me I’d need to code it myself haha

https://redd.it/ttdjg2
@r_devops
So Many Unqualified Candidates

Just wondering if any of you are finding qualified candidates for mid to sr level DevOps engineers? And if so, where are you looking?

We've been looking for a few months now and it seems to be a cavalcade of severely unqualified candidates, even for basic entry-level type roles. It feels like the bar is very low when it comes to what a DevOps engineer is exactly. Building a CI/CD pipeline in Jenkins and running a few instances in AWS, in my opinion, does not make one a DevOps Engineer.

Now, I may be putting too much on the role by expecting a certain competency level in fundamental knowledge of cloud infrastructure such as containerization, micro-services, basic application (Java/Tomcat) knowledge, and the importance of network engineering that goes into building a solid and redundant cloud infrastructure. If so, y'all please let me know how I can better level-set my expectations.

As far as pay scale goes, we're offering mid-tier for what most DevOps roles are going for, so I don't think that's what is turning off qualified candidates. Maybe we're just looking in the wrong places (Indeed / LinkedIn)?

https://redd.it/tskutn
@r_devops
OAuth2 token concurrency?

Hi, we're doing business with a new API provider who has a concurrency limit for OAuth2 bearer tokens: only one can exist at a time, and any pre-existing tokens get invalidated when a new one is created, regardless of TTLs. This is wreaking havoc, because like everyone else in the world we connect to their services from multiple systems. They're a big provider; they use Apigee.

The vendor won't budge, and I am wondering: is this normal? Is it me that's fucked up?
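
For what it's worth, the usual coping strategy when a provider only honors one live token is to stop letting each system mint its own: put a single shared token cache in front of the provider and have everything reuse it. A minimal sketch with Redis as the shared store (the endpoint, credentials, key names, and TTL handling are all placeholders):

    import time
    import redis
    import requests

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    TOKEN_URL = "https://api.example.com/oauth2/token"  # placeholder

    def get_token():
        token = r.get("vendor:token")
        if token:
            return token
        # NX lock: only one caller refreshes, so only one token is ever minted
        if r.set("vendor:token:lock", "1", nx=True, ex=10):
            resp = requests.post(TOKEN_URL, data={
                "grant_type": "client_credentials",
                "client_id": "...",       # elided
                "client_secret": "...",   # elided
            })
            resp.raise_for_status()
            body = resp.json()
            # cache for slightly less than the real TTL so nobody holds a dying token
            r.set("vendor:token", body["access_token"], ex=body["expires_in"] - 60)
            return body["access_token"]
        time.sleep(1)   # someone else is refreshing; wait and re-check
        return get_token()

The lock TTL and the sleep-and-retry are crude; the point is that token acquisition becomes one chokepoint instead of a free-for-all across systems.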

https://redd.it/tt5x9o
@r_devops
Does this group provide 📌 articles?

I am wondering if this group provides, or would consider providing, a pinned post or link to FAQs?

This post is my vote of support.

https://redd.it/tsz90f
@r_devops
DevOps jobs that pay 200k TC in gov contracting

Wondering if anyone here is in gov contracting and has seen people making over 200k total comp. Just trying to see if it's somehow possible in the world of gov contracting. Are those roles potentially remote, or are they all in SCIFs?

https://redd.it/tsmuq8
@r_devops
🚀 Tekton CI/CD simple start ✌️

Getting started with https://tekton.dev doesn't seem to be that easy. I have a series of blog posts that can help you get started and find your way around, with a running sample project.

https://blog.codecentric.de/en/2022/01/tekton-cloud-native-ci-cd-pragmatic-intro/

https://blog.codecentric.de/en/2022/02/tekton-buildpack-pipeline/

https://blog.codecentric.de/en/2022/03/tekton-triggers-in-practice/

You can also find an article here where I show you a project with a Tekton bootstrapping and testing approach.

https://redd.it/ttnwfn
@r_devops
How do you run v1 and v2 of apps in k8s using Helm?

Basically: how are people managing deployments of apps that have clearly defined v1 and v2 endpoints, deployed to Kubernetes via Helm charts?

I can see that I can create Ingress objects with different paths to reach the backend services, but I'm wondering how people achieve internal cluster communication to v1 of something (ideally without svc-v1 being the name, unless that's the only way).
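
One pattern that keeps the version out of the name callers use: give each version its own Deployment (labels `app: myapp`, `version: v1`/`v2`) and point a single, stably named Service at whichever version a Helm value selects, so in-cluster traffic always targets `myapp` and flipping the value cuts it over. A sketch (the names and value are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp                 # stable in-cluster DNS name for callers
    spec:
      selector:
        app: myapp
        version: "{{ .Values.activeVersion }}"   # e.g. v1 or v2
      ports:
        - port: 80
          targetPort: 8080

If both versions must stay reachable internally at the same time, you do end up with a Service per version (or a service mesh doing header/weight-based routing as the heavier alternative).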

https://redd.it/ttna8y
@r_devops
Making DevOps deployment decisions in a university group project

## Background
- Doing a final year group project for computer science at university
- Services are being developed in the same GitLab repo; each can be run in Docker and automated by docker-compose:
  - Frontend web server
  - NodeJS API server
  - Python machine learning instance
  - MySQL database server
- Project is being developed for another university department who would like it deployed on an existing server in their office which contains a database that must remain
- Server is running Ubuntu LTS but they've asked if we can ensure the server / automation to get it up and running works on Windows and macOS too
- Working on a basic GitLab CI/CD pipeline for automated testing, would be nice to have automated deployment but unsure how feasible given the production environment

## Question
Given the above, it appears everything should work just deploying with docker-compose, but I have my doubts about whether it's safe or sufficiently performant to keep MySQL in a Docker container (don't worry, I'm using bind mounts), or indeed whether it's even worth putting a Python machine learning instance in Docker. I've been reading about Terraform, and I'd love an excuse to learn it if I moved MySQL out of a container.

What would you do in this instance? Would Terraform work if they decided to suddenly run the server on Windows or macOS? Is it normal for a client to want future flexibility to run a server-side application on a different OS?
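
For scale, the compose file for the stack described above can stay small; a sketch (images, ports, and paths are assumptions):

    # docker-compose.yml
    services:
      frontend:
        build: ./frontend
        ports: ["80:80"]
      api:
        build: ./api              # NodeJS API server
        depends_on: [db]
      ml:
        build: ./ml               # Python machine learning service
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: change-me
        volumes:
          - ./data/mysql:/var/lib/mysql   # bind mount keeps data on the host

Worth noting: docker-compose itself runs on Windows and macOS via Docker Desktop, which covers the cross-OS ask, whereas Terraform provisions infrastructure (cloud resources, VMs, DNS) and wouldn't by itself address either the MySQL question or OS portability.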


Just trying to get into DevOps and learn as much as I can to find jobs after graduation - would appreciate any advice, thanks!

https://redd.it/ttnqh8
@r_devops