Reddit DevOps
install a package before checking who’s behind it.



Review packages based on research, not just the description on the git repository.

In order to review an open source project you're interested in using, you will need to download the package and study its contents to ensure it's secure. You should not rely on the data that comes out of the registry you're using. Alternatively, use WhiteSource Diffend, which will analyze the packages for you to detect security and quality issues.

As security shifts left, developers are increasingly tasked with the detection and remediation of vulnerabilities.

While old methodologies put security at the end of the development process and slowed down the development cycle, today’s DevSecOps gives developers a seat at the security table from the earliest stages of development. Unfortunately, they aren’t always given the tools and practices that they need in order to share ownership over security.

Developers don’t need to become security experts in order to share ownership over security. They simply need to integrate the right automated tools and practices that will help them cover security threats like supply chain attacks, without slowing them down.


Source

https://redd.it/oimf1e
@r_devops
All the buzzwords you need to know before an interview

So let me start:
DevOps
GitOps
SRE
SaaS/PaaS/IaaS
IaC

PS. Jokes are also appreciated.

https://redd.it/oimxpn
@r_devops
Do you use A Cloud Guru or similar for continuing professional development?

Like the question in the post title, I'm wondering if you use something like A Cloud Guru to continue improving your DevOps skills.

I ask because it dawned on me that the project I have been working on might be redundant. It's a continuous improvement project to help DevOps professionals learn in a bite-sized, organic way.

It was mainly designed for neurodiverse (e.g. dyspraxia) professionals. It would work like this:

1. Break your job down into a visual map of key responsibility areas + responsibilities within each
2. You can then add incident reports, learning notes, updates onto specific responsibilities
3. Bring in your team leader or senior to add their feedback onto your progress in relevant areas

Eventually the team would get on board so you can pick various other responsibilities (that interest you) or self-select into projects that draw on your core strengths.

Alas, it seems a bit redundant if you can just do DIY learning like in an LMS or in a sandbox like what A Cloud Guru offers. Thoughts?

https://redd.it/oipwtq
@r_devops
GitOps testing and promotion procedures and practices

Hi all. As we know, GitOps is gaining more and more traction, and for very good reason. The fewer points of friction you have, the better the development workflow will be, shifting ownership onto the developers themselves instead of admins.

But I have a question, or rather an issue I am trying to work through: the actual development workflow. Any advice is welcome.

Imagine the following scenario:

* A dozen app repos (containing only code and build instructions)
* A config repo with Helm charts and values for the various environments

How do you actually approach integration testing when there are breaking changes in the app that need to be reflected in the Helm chart as well? Say the app is at 2.x.x and needs to move to 3.x.x, and the chart is at 1.x.x and needs to move to 2.x.x. How do you deploy the app changes (still living in the PR) and run integration tests before merging into master, which then triggers the version upgrade in the config/chart repo?
Also, to me the two-repo approach looks like overhead, because the testing of the app is so detached from the deployment itself.

It seems to me that some tools, procedures, or practices are missing to glue all of this together, or am I not approaching this the right way?

https://redd.it/oiplne
@r_devops
Help me decide on a monitoring/log analysis stack - ELK vs. TICK vs. [Other]

I'm new to all of these stacks and am unsure of the right solution for my scenario.

**General description**

* Collect application logs from 100-200 instances of an application, each on a different server
* Footprint on these servers should be minimal (avoid parsing/transforming on these servers)

**Size of data**

* 1-5 GB per day, per server. Let's ballpark at 15 TB raw data per month.

**Log format and parsing/transformation requirements**

* See below for the raw format
* Note that each logged command has separate entries for *start* and *stop*, usually with other entries in between
* **Each command should be stored as a single record.** I.e., as part of processing the logs, the *start* and *stop* records should be merged into a single record with *startTime*, *stopTime*, *duration*, and other fields.

[datetime] [commandId-1] start [commandType] [user] [transferSize]...
[datetime] [commandId-2] start [commandType] [user] [transferSize]...
[datetime] [commandId-1] stop [commandType] [user] [transferSize]...
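The start/stop merge described above can be prototyped in a few lines. A minimal Python sketch, assuming whitespace-separated fields in the order shown (the real format, field names, and sample values below are assumptions):

```python
from datetime import datetime

def merge_command_logs(lines):
    """Merge paired start/stop log entries into one record per command.

    Assumes whitespace-separated fields in the order:
    <datetime> <commandId> <start|stop> <commandType> <user> <transferSize>
    """
    pending = {}   # commandId -> record still waiting for its stop entry
    records = []
    for line in lines:
        ts, cmd_id, event, cmd_type, user, size = line.split()[:6]
        when = datetime.fromisoformat(ts)
        if event == "start":
            pending[cmd_id] = {"commandId": cmd_id, "commandType": cmd_type,
                               "user": user, "transferSize": size,
                               "startTime": when}
        elif event == "stop" and cmd_id in pending:
            record = pending.pop(cmd_id)
            record["stopTime"] = when
            record["duration"] = (when - record["startTime"]).total_seconds()
            records.append(record)
    return records

# Hypothetical sample: cmd-1 starts, another command interleaves, cmd-1 stops.
merged = merge_command_logs([
    "2021-07-13T10:00:00 cmd-1 start COPY bob 1024",
    "2021-07-13T10:00:01 cmd-2 start LIST alice 0",
    "2021-07-13T10:00:05 cmd-1 stop COPY bob 1024",
])
```

In a real pipeline this logic would live in the central transform stage (e.g. a Logstash filter or a small preprocessor), so the application servers only ship raw lines.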

**What kind of queries/reports/analytics/alerting do we want?** Examples:

* How many commands are issued per [timeframe]?
* Visualize commands per second over time
* Which *commandTypes* take the most execution time?
* Which users issue the most expensive commands?
* What commands did user "Bob" issue between 3 and 4 PM?
* Anomaly detection and alerting

# So, what's the right solution?

I'll take any suggestions. Send them my way :)

Below are my thoughts from the research I've done, but I'm new to this space:

* ELK - Mostly sounds good. Filebeat would ship the logs (minimal footprint), Logstash would transform them, Elasticsearch would store them, and Kibana would display results. But I've heard concerns about Logstash at scale, much of what we're looking for feels more like metrics (commands per second) than logs, and I get the impression that anomaly detection and alerting are not as great (or included) with ELK.
* TICK - Could maybe work? I don't see the equivalent of Logstash in this stack and I don't want to do transforms on the application servers. I'm also not sure if the data structure supports keeping the related data in a log entry together.
* Scale and Cost - This is a big unknown to me. How well do these stacks handle this kind of scale and what does the hosting architecture usually look like?

https://redd.it/oiqjgf
@r_devops
An Offline Environment - Brainstorming

Hi everyone!

We're deploying a k8s cluster in an offline environment and wanted to share our ideas for improving this process with the world since this case is quite rare in the cloud generation.

Our DEV environment is on the cloud. Production is offline.

Current situation:

We are packing the following, using a giant shell script, into a tar bundle:

- Nexus installation
- RPMs, images, and Helm charts
- Ansible playbooks
- Environment variables

Images and env vars are the only items that change between customers.

On-site, using VMware vSphere, we create a VM and use it as the managing point:

- We upload the bundle to it.
- We install Nexus on it and push all our images, RPMs, and charts into it.
- We create the rest of the VMs for the k8s cluster and run the Ansible playbooks from the Nexus VM.
- We run the Helm charts and deploy our app. The images are taken from the Nexus VM.

Questions:

1. We've thought about Gravity as a tool for packing the whole local environment and sending it as is, but it has been deprecated. Does anyone know of another solution?

2. We've thought about Packer for packing our Nexus VM. Do you think it's a good solution?

3. We've also thought about creating all the cluster VMs with Terraform. Any other ideas?

4. Any other DevOps tool for improving offline deployment would be welcome.

Thanks!

Erez

https://redd.it/oisl70
@r_devops
MySQL RDS exporting

What is the best way to have a copy of the prod database for testing in AWS? Should I create an EC2 instance, install the MySQL client, and use a dump? If yes, how do I export only the changes from the production database rather than the whole database?
Is there a way to do it from RDS?

Should I read the doc here? Will it help me?

**https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html**


Thanks in advance.

https://redd.it/oiq1al
@r_devops
password protect website with .htaccess and .htpasswd - error - “Could not open password file:”

Hello, I want to protect my website using .htaccess and .htpasswd (I'm using Windows), but I get an "Internal Server Error" and my logs show: Could not open password file: /etc/apache2/C:/xampp/htdocs/.htpasswd. This is my .htaccess:

<IfModule mod_rewrite.c>
    <IfModule mod_negotiation.c>
        Options -MultiViews -Indexes
    </IfModule>

    AuthName "Member's Area Name"
    AuthUserFile /xampp/htdocs/.htpasswd
    AuthType Basic
    Require valid-user

    RewriteEngine On

    # Handle Authorization Header
    RewriteCond %{HTTP:Authorization} .
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

    # Redirect Trailing Slashes If Not A Folder...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_URI} (.+)/$
    RewriteRule ^ %1 [L,R=301]

    # Handle Front Controller...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>

https://redd.it/oinm1n
@r_devops
Should I go Vendor Focus(AWS/Azure/GCP) Route or Cloud Native Route?

Rookie here. I'm unsure of which route to take; please advise. Which one has more job opportunities for a beginner? Or should I do a little bit of both?

Thank you

https://redd.it/oix73e
@r_devops
Help with Helmfile rendering values!

Hey all ,


So in my current project we're using helmfile to do the deployments.

The helmfile itself is very basic:

environments:
  dev:
    values:
      - sample.yaml

releases:
  - name: api-users
    namespace: orange
    chart: private-hosted/api-users
    version: v1.0.0
    values:
      - values/api-users.yaml.gotmpl


My issue arises when, for example, I make wholesale changes to `values/api-users.yaml.gotmpl`. I can run:

helmfile -f deployment.yaml diff
.....
....
...
ERROR:
exit status 1

EXIT STATUS
1

STDERR:
Error: Failed to render chart: exit status 1: Error: failed to parse /var/folders/cq/tcb7454j6qq67vgh1_wzl3c00000gn/T/helmfile412132336/orange-api-users-values-6f449f5557: error converting YAML to JSON: yaml: line 17: did not find expected node content
Error: plugin "diff" exited with error

The problem is that, although I get a stack trace of the issue, the generated values files are ephemeral and already deleted from the system before I can even check which values caused the problem.


So my question is, **and I'm aware of how silly this makes me look** (but I can't find anything online): is there any simple way of generating the values files that will be produced when doing a helmfile diff/apply?


Even without getting to the diff stage, sometimes I put logic within the {{ }} and I would like to know what the resulting values would be, but I honestly have no clue how to do it.

https://redd.it/oiyf8p
@r_devops
Python vs Bash scripts in CI/CD pipeline

I’m creating a CI/CD pipeline for my organization. We’re running on OpenShift, so I’m using the OpenShift Pipelines (Tekton) operator.

Every example I see uses bash commands for the typical pipeline stuff (git operations, build commands, deployment, test runners etc), so I guess this is the standard way of doing things.

However, we’ve discussed using Python instead. Firstly, because it lets us unit test our steps, which will come in very handy when we’re updating/expanding the steps (which we’ll probably be doing plenty of in the first months - probably longer). We’re thinking of having a separate pipeline for our pipeline, so that tests are run automatically when we push changes to our steps.

Secondly, the organization mainly consists of developers, and not system administrators, so we figure Python is easier to both read and maintain.

Some (most, I guess) steps will probably take a few more lines when we use Python modules such as subprocess, os, and pathlib, but as far as we can see, it doesn't really matter, because of the various pros mentioned above.
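One way to get the unit-testability described above is to write each step as a plain Python function with its command runner injected, so tests never actually shell out. A hedged sketch; the step, URL, and paths are made up:

```python
import subprocess

def git_clone(repo_url, dest, runner=subprocess.run):
    """A hypothetical pipeline step as a plain function.

    The command runner is injected so unit tests can substitute a fake
    instead of really invoking git.
    """
    result = runner(
        ["git", "clone", "--depth=1", repo_url, dest],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"clone failed: {result.stderr}")
    return dest

# Unit test without touching git or the network: inject a fake runner.
class FakeResult:
    returncode = 0
    stderr = ""

cloned = git_clone("https://example.com/repo.git", "/tmp/repo",
                   runner=lambda *a, **kw: FakeResult())
```

In a Tekton task the same function would run with the default `subprocess.run`; the injection point only matters for the test pipeline.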

Any good reasons NOT to use Python in this scenario? Performance is also a consideration (we obviously want our pipeline running as fast as possible).

https://redd.it/oizpkz
@r_devops
How do I advance my career into higher level positions/management? (Design, Architecture, team management etc)

I want to move up. I've been a systems/devops engineer for a few years now. I liked it at first, but I got a little bored of it. I want to make decisions and lead a team in a direction that helps the whole department or something. I recently got into a role which is higher up, but I want to go higher. What are the best steps to take/learn to accomplish this?

https://redd.it/oj099n
@r_devops
Analogy for when brew install goes wrong...

My co-worker dropped this amazing analogy for using brew install

I figured anyone here who's used a Mac and dealt with brew might feel similar.

>using brew is like inviting the worst contractor into your house.

>me: Hi can you upgrade my bathroom sink?

>brew: sure can

>me: just checking in, how’s it going?

>brew: well i noticed you had other rooms in the house that share a common paint color, so i started upgrading all of them

>brew: your toilet might not work anymore


Anyone else feel this pain? or is it just us?

https://redd.it/oj0qqe
@r_devops
Asking full stack developers who turned to devops: was it an upgrade?

As every company does devops differently, it is common for management to think of devops as the team that does any of the following: SRE, production on-call, supporting developers, CI/CD pipelines, maintaining the cloud, operations, monitoring, etc. From my personal experience, these responsibilities (although great to have) are a downgrade from full stack development. You may get a marginal pay increase, but for way more stress and toil, combined with less respect.

I know that a lot of people here would rather be a developer than in devops, but there are also a lot of people who enjoy devops. For those who enjoy the devops role: what part of the role do you enjoy more than being a full stack developer? Do people view you more as a person in operations, or are you viewed as an architect (system design and interfacing with multiple teams)? What do you think your company does better than companies who treat devops as operations?

https://redd.it/oj3t15
@r_devops
Fix WSL using random private subnets

WSL uses a new random private subnet each time it starts. That may obstruct working with your work or private VPN, because sometimes the subnet is already in use, and will be until the next reboot, and wsl --shutdown won't help you.

Here is my ugly hack to fix that: https://selivan.github.io/2021/07/12/wsl-set-static-subnet-hack.html

I got the idea from people in the GitHub issue discussing the problem: https://github.com/microsoft/WSL/issues/4467

Btw, the WSL developers are determined to ignore this, because somehow it makes WSL more newbie-friendly. As if randomly selecting a fixed private subnet and allowing it to be changed later (which is what VirtualBox does, for example) would be less friendly.

https://redd.it/oj2x1v
@r_devops
Self-taught developer looking for guidance

I am in the process of deploying a SaaS based in the US (this is NOT a promo post) and I am using Heroku to host my Laravel application & DigitalOcean to host my databases.

As of now, the application is running with no apparent issues, including the customer onboarding process. I am very uneasy because this seems a little too easygoing of a launch, so I am looking for a checklist of sorts, or just some guidance on what I should be doing to test and ensure that the application is ready for production.

I am also a bit uncertain about compliance & data security requirements (if there are any). Any and all information will be appreciated.

https://redd.it/oj4t67
@r_devops
Is your Python for devops complex? Do you create classes, use inheritance and polymorphism?

To automate your tasks, is your code complex, or do you write simple scripts?
Is it really necessary to know all the theory behind Python deeply to use it for devops tasks?
Thank you
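For what it's worth, most devops automation stays simple: plain functions over stdlib types are usually enough, with no inheritance or polymorphism. A made-up example (the log lines and field positions are assumptions, not from any real system):

```python
from collections import Counter

def count_statuses(lines):
    """Tally the HTTP status field, assumed to be the 9th
    whitespace-separated token (as in the common log format)."""
    return Counter(line.split()[8] for line in lines)

# Hypothetical access-log lines for illustration.
sample = [
    '203.0.113.4 - - [13/Jul/2021:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.4 - - [13/Jul/2021:10:00:01 +0000] "GET /x HTTP/1.1" 404 0',
    '198.51.100.7 - - [13/Jul/2021:10:00:02 +0000] "GET / HTTP/1.1" 200 512',
]
counts = count_statuses(sample)
```

Classes earn their keep when you have long-lived state or several interchangeable backends; for one-shot task automation, a script like this is idiomatic.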

https://redd.it/oiyx3b
@r_devops
Jenkins mapping 2 separately hosted Perforce depots. Failing to add any new files.

I've got 2 Perforce Assembla repos hosted on separate endpoints. I'm trying to map a few folders from one project to the other, e.g. //DepotA/main/shared/folder to //DepotB/main/shared/folder.


I've currently got Jenkins pulling the two repos down into local workspaces, and I'm using rsync to move files from one endpoint to the other. That part works fine. However, if new files are added to DepotA's shared folders, when my pipeline does a p4publish, it will only update existing files and not mark new ones for addition. Does anyone have any experience telling Jenkins to, basically, run a 'Mark for Add' command on a folder before a pipeline executes?

https://redd.it/oj2k7s
@r_devops