Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Expectations for a terraform interview test

I have an interview next week; part of it is a Terraform / Kube / Helm test. I'm OK with the Kube and Helm stuff, but I'm a bit rusty on my Terraform (I haven't used native Terraform for a while; it's always been with Terragrunt and scripts to keep things DRY).

Any thoughts on things I should be revising? What sort of questions would you ask on a test? (I believe the test is going to be "here's a terminal, do stuff" rather than a Q&A.)

https://redd.it/tphaeq
@r_devops
How to find out a company is working in real devops culture

More than half of the job offers titled "DevOps engineer" turn out to be ops/monitoring engineer jobs, or consultant roles where you are stuck on calls all day. Most of the recruiters or "architects" have no clue how a company or project with a DevOps culture works. How do you make sure, during an interview, that the job is really DevOps work and not something else? Do you ask interviewers any specific questions?

https://redd.it/tpmhtg
@r_devops
Static Code Analysis of a large code base in a Gitlab CI/CD pipeline

Hello, I am currently working on adding static code analysis (SCA) to a large code base of .c, .cpp, .h, and .hpp files. These files live in multiple subdirectories, each with its own CMakeLists.txt, and the files from the various subdirectories are compiled and linked together when building with CMake. How am I supposed to properly test all these files with SCA? As of now, I am using a Python script to locate all files with the .c, .cpp, .h, and .hpp extensions and then running cppcheck and flawfinder on each file separately. How can I run these SCA tools just once over all the files instead of using a Python script?
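For what it's worth, both tools can recurse into a directory on their own, so a per-file Python loop shouldn't be needed. A sketch that builds single invocations instead (the `src` root and the flag choices are assumptions about your layout):

```python
import shlex
from pathlib import Path

def sca_commands(project_root: str) -> list[str]:
    """Build one-shot cppcheck and flawfinder invocations.

    Both tools accept a directory and walk it recursively,
    finding the C/C++ source files themselves.
    """
    root = shlex.quote(str(Path(project_root)))
    return [
        f"cppcheck --enable=all --inline-suppr {root}",
        f"flawfinder {root}",
    ]

for cmd in sca_commands("src"):
    print(cmd)
```

If you configure CMake with `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON`, you can also point cppcheck at the resulting database with `cppcheck --project=build/compile_commands.json`, so it analyzes exactly the files and defines the real build uses.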

https://redd.it/tpoi49
@r_devops
I created a guide on how to build custom Windows 11 "golden" images for Azure Virtual Desktop using Packer. The build is automated by using a scheduled GitHub Actions workflow to check daily for new Windows releases and create a new image as soon as it's published

Hey /r/devops,

Over the past month I created a guide on how to build custom Windows 11 "golden" images for [Azure Virtual Desktop (AVD)](https://azure.microsoft.com/en-us/services/virtual-desktop/) using [Packer](https://www.packer.io/) and automate everything using [GitHub Actions (GHA)](https://github.com/features/actions).

[https://schnerring.net/blog/automate-building-custom-windows-images-for-azure-virtual-desktop-with-packer-and-github-actions/](https://schnerring.net/blog/automate-building-custom-windows-images-for-azure-virtual-desktop-with-packer-and-github-actions/)

Here's a quick overview:

* Resources required by Packer are pre-provisioned using [Terraform](https://www.terraform.io/)
* The Packer build uses [Chocolatey](https://chocolatey.org/) and a custom PowerShell script to provision software.
* The GHA workflow queries Azure daily for the latest available Windows version and creates an image whenever Microsoft publishes a new one (usually on Patch Tuesday).

I'm happy with how it turned out, especially the GHA part. For production use cases you'd probably want to flesh it out, but it's a good base to build upon.

The code is available on my GitHub: [https://github.com/schnerring/packer-windows-avd](https://github.com/schnerring/packer-windows-avd)

https://redd.it/tpscvi
@r_devops
Where do you all look to find remote jobs worth applying for?

What is the best place to look for DevOps jobs? I am looking for 100% remote, ideally at a company on West Coast hours. I want to be choosy, and would prefer to do as much of that vetting as possible before spending time talking to them. Thanks

https://redd.it/tpsve4
@r_devops
Job has 0 Infrastructure and I don't know where to start

So I'm a software developer with fairly little DevOps experience. I've worked with Jenkins at a previous job and played with Docker on some home projects.

I just started a new job because the pay was a massive jump from what I was making. Then I started noticing some verbiage used by team members that raised some red flags.

"Let me grab the source code from x directory to somewhere where you can access it"

"I found some code for x app but the classes in the jars don't match. I thought this code was the most recent but I guess not"

"Sally can you build y environment for us to do some testing? Bob can you copy x database to y environment?"

Two weeks in, I realized just how bad it is. There is no infrastructure at all; there are about 10 different people manually doing different parts of every task. To point 1: THEY AREN'T EVEN USING VERSION CONTROL. My first task has been "mitigate Log4j and move projects into GitLab." To build, test, and deploy an app, I have to walk over to at least 5 people around the office to get the different parts done.

So I have two choices here, IMO: bail after a month, or use this as an opportunity to really learn DevOps and build out this infrastructure from scratch. Is this place too far gone? Is building multiple pipelines for apps too big a task for a junior dev with little DevOps experience? We do have GitLab licensed (for years now, just nobody's done anything with it), so I could definitely use that, but I've never worked with it.

https://redd.it/tq9qjz
@r_devops
Does Azure DevOps feel overwhelming compared to GitHub or GitLab?

Is it just me, or does Azure DevOps feel like a giant maze with thousands of options? Using GitHub or GitLab is a breeze by comparison, and they achieve the same thing.

Typical Microsoft bloat?

https://redd.it/tqvmsf
@r_devops
Not DevOps, but I've been tasked with setting up our CI/CD pipeline

Hi all,

I'm not a DevOps engineer, just a software dev working on a legacy product, and we want some automation to bring us towards CI/CD.

We have Jenkins running, which I have now got building automatically on pull requests. The next step is deployment, but I have no knowledge of the best technologies to use.

Our product is just an executable and configs in a zip file that needs to be placed onto an in-house VM, unzipped, and a cmd script run.

I don't want it to always deploy automatically; instead I'd (ideally) have a button in Jenkins I can click to deploy to a server of my choice.

Can anyone point me towards the best, preferably free (not a requirement), technologies that can help with this? I have very little knowledge of the space.

https://redd.it/tqvq4y
@r_devops
Performance testing in CI/CD pipeline?

For people who have set up performance tests (Gatling, JMeter, Locust, etc.) in your CI/CD pipeline, how do you run them? Curious whether you (1) run the performance benchmarks within the CI/CD infrastructure or (2) run them post-deploy on lower environments. I'm leaning towards (1) at the moment, if only to avoid cleanup issues -- just curious what others are doing in terms of testing performance as part of the pipeline.

https://redd.it/tr64tj
@r_devops
Software inventory tool?

So we're running a mixed Linux and Windows environment of roughly 1k servers with a 60/40 split. We're trying to gain visibility into which packages are installed across the environment on both OSes. For example, when there is a .NET Core vulnerability, remediation would be a lot simpler if there were a single pane where we could see this information. Something simple: server name, packages installed, package version, etc.

Right now we use Lansweeper (it's cheap), but it's clearly designed with Windows in mind and isn't great at finding packages on Linux from what we've seen. The fact that you can't connect multiple AWS accounts to it is also a pain, but that's out of scope.

Anything y'all can recommend? I'm also okay with having two sets of tools, one for Windows and one for Linux, and then marrying them in a dashboard in Grafana or something. But if there is a single tool, that'd be better.

https://redd.it/tqz7te
@r_devops
Not able to create an index management policy on OpenSearch for logs, any ideas?

Hello All,

I was recently tasked with creating an index management policy to discard all logs after X days. My JSON knowledge is minimal, but I can figure most things out by piecing info together. I wanted to first try whether I could delete an entire index's contents, so I tried the following:

    POST /(index_name)/_delete_by_query
    {
      "query": {
        "match_all": {}
      }
    }

The issue is that nothing gets deleted. I originally got a `blocked by: [FORBIDDEN/8/index write (api)];` error, so I updated the index settings (`PUT /my_index/_settings`) to set `index.blocks.write` to `false`.

Now when I run the POST query above, all I get is a 200 success, but nothing is deleted.

Anyone have a clue what I'm missing here?
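A note that may help here: OpenSearch's Index State Management (ISM) plugin is built for exactly this retention use case, deleting whole indices by age rather than removing documents with `_delete_by_query` (which is much heavier, and whose 200 response includes a `deleted` count worth checking; a 200 alone doesn't mean anything matched). A minimal sketch of such a policy, created via `PUT _plugins/_ism/policies/delete_old_logs`; the `logs-*` pattern and 30-day retention are placeholder assumptions:

```json
{
  "policy": {
    "description": "Delete log indices older than 30 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["logs-*"], "priority": 100 }
    ]
  }
}
```

New indices matching `index_patterns` pick the policy up automatically; existing indices need it attached once (e.g. via the Index Management UI or the `_plugins/_ism/add` API).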

https://redd.it/tralv4
@r_devops
Managing cross-technology/tool CI/CD pipelines

Currently at work I am using multiple tools for a variety of tasks: for example, Terraform to provision infra, Ansible to configure it once deployed, and custom Python scripts to generate reports. I have created a pipeline in Jenkins but am facing lots of issues transferring information from one tool to another.

I need some guidance on how to pass information between these tools; for example, Ansible needs to know the IPs of the machines the Terraform code deploys.

Currently I write the info to a single file and pass it around, but that has become a bottleneck, since any change to the information required breaks the code.

PS: Using a single tool is not feasible due to legacy and compliance issues.
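One common pattern for the Terraform-to-Ansible handoff is to declare whatever Ansible needs as Terraform outputs and generate an inventory from `terraform output -json`, rather than passing a hand-maintained file around. A sketch; the `web_ips` output name and `provisioned` group name are hypothetical:

```python
import json
import subprocess

def read_outputs() -> dict:
    """Run `terraform output -json` in the current directory."""
    raw = subprocess.run(
        ["terraform", "output", "-json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return parse_outputs(raw)

def parse_outputs(raw: str) -> dict:
    # Terraform wraps each output as {"value": ..., "type": ...}.
    return {name: out["value"] for name, out in json.loads(raw).items()}

def to_inventory(outputs: dict) -> str:
    """Render a minimal INI inventory for `ansible-playbook -i`."""
    lines = ["[provisioned]"]             # hypothetical group name
    lines += outputs.get("web_ips", [])   # hypothetical output name
    return "\n".join(lines) + "\n"
```

Each tool then depends only on the output contract: adding a new value is a new Terraform output plus a consumer, instead of a file-format change that breaks everything downstream.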

https://redd.it/trv07w
@r_devops
Trigger Jenkins job with specific commit message

Hi r/devops,

I have a job in Jenkins. I need that job to be triggered when a specific commit message is pushed to the SCM repo, e.g. "fix ISSUEID-123 Issue fixed. /jenkinsbuild".

So in this case, if the commit message contains /jenkinsbuild, it should trigger the build in Jenkins. I came across a Jenkins plugin named commit-message-trigger-plugin, but that seems to have been removed.

Do you know any way to achieve this? Please point me to the steps or any documentation you have.

Thanks
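For what it's worth, declarative Jenkins pipelines have a built-in `changelog` condition on the `when` directive that runs a stage only when a commit message in the build's changeset matches a regex, so no extra plugin is needed. A sketch, assuming a declarative Jenkinsfile; the polling schedule, regex, and stage contents are placeholders:

```groovy
pipeline {
    agent any
    triggers {
        // Poll (or use an SCM webhook) so new commits are seen at all.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('build') {
            when {
                // Run only if a commit message contains /jenkinsbuild.
                changelog '.*/jenkinsbuild.*'
            }
            steps {
                echo 'Triggered by /jenkinsbuild in the commit message'
            }
        }
    }
}
```

One caveat: `changelog` only inspects commits that are new in the current build's changeset, so the first build of a branch (with an empty changelog) won't match.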

https://redd.it/ts3ere
@r_devops
Best lab environment for practicing Ansible / automation?

I used to write Ansible, but I haven't in recent years. I'd like to practice again without spinning up loads of VMs. Is there a downloadable or online lab environment that makes it easy to manage small VMs locally? Something like an OCI container that spawns multiple services, etc. Thanks.

https://redd.it/tsyi8x
@r_devops
Download a file from S3 using ansible

I had to work this out and thought I'd share. We needed a way to download files from a non-public S3 bucket to remote instances using local AWS credentials (i.e. not credentials on the instances).

`playbooks/filter_plugins/presign.py`:

    import boto3

    def presign(s3_url):
        # Pass non-S3 URLs through untouched.
        if not s3_url.startswith("s3://"):
            return s3_url
        # str.lstrip() strips *characters*, not a prefix, so slice instead.
        path = s3_url[len("s3://"):]
        bucket, key = path.split("/", 1)
        session = boto3.Session(profile_name="default")
        return session.client("s3").generate_presigned_url(
            "get_object", Params={"Bucket": bucket, "Key": key}
        )

    class FilterModule:
        def filters(self):
            return {"presign": presign}

usage:

    - name: get s3 file
      get_url:
        url: "{{ 's3://bucket/key.tar.gz' | presign }}"
        dest: /tmp/key.tar.gz

https://redd.it/tt6tr5
@r_devops
Just started using Argo CD... BRUH

How have I never used this amazing tool? It literally makes DevOps and GitOps so easy.

https://redd.it/tt4oc5
@r_devops
RFC for Breeze--a structured Cloud-as-Code language

Hi folks!

I'm soliciting feedback on a new cloud-as-code language that is cloud-agnostic, statically typed, and constraint-solving (it can catch a ton of deployment errors before they ever happen). This will be 100% open source and retargetable (you can generate Terraform or whatever, if a backend supports it), but I'd love some feedback. I know there will be a lot of "argh, not another technology to learn", but the goal is really to be able to quickly and easily deploy infrastructure and applications in a cloud-agnostic fashion while integrating secrets and property management.

A straw man is available at https://github.com/sunshower-io/breeze. The status is that the runtime and module system are complete and deployed in a wide variety of environments, but they are still proprietary.

This will be 100% open source, with parsers available for Go, TypeScript, and Java. We will probably support CloudFormation and Resource Manager first, but I'll certainly consider Terraform generation if there's sufficient interest.


Edit: I should also note that an overarching design goal is to have this generated from a visual modeler. Having done this several times, it's just easier to hook into an actual language than to try to extract stuff from a general-purpose intermediate like JSON/XML/YAML.

https://redd.it/tt8ysw
@r_devops
GUI for scheduled DB/data backups (and restores)?

As the title says, I've been trying to Google my way to something I can run in Portainer and use to schedule, monitor, and possibly restore or roll back data directories and SQL/NoSQL database dumps. The backup sources would be in Docker containers and the destination either a local data directory or S3-compatible storage.

I’m picturing nice easy forms for choosing backup frequency, inputting backup commands if they’re needed for database exports, and easy to read lists of backups.

I feel like this most likely exists in some form already, but I'm finding myself in search-keyword hell looking for the right tool for the job. Does anyone know if it's real? Please don't tell me I need to code it myself, haha.

https://redd.it/ttdjg2
@r_devops