Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
ways to test terraform scripts

Guys, I have a project in which I have to evaluate ways to test Terraform scripts. I know Terratest and KitchenCI; does anyone know any others?

https://redd.it/sd7vhe
@r_devops
DevOps Engineer To Sales Engineer Path - Questions Inside!

Hello,

Short-Version:

I am currently a DevOps Engineer and have been in the IT industry for around 13 years total (since I graduated college). My current company has been going through reorgs, and I am thinking now is a good time to jump ship to another company, and possibly also switch careers to Sales Engineering along the way. I am trying to understand the benefits and the new 'stress' that may be associated with an SE role.

Long Version:

Background:

Throughout my career I've been told I have really good soft skills, and I've used them to excel in IT (I was a sysadmin for a few years, and am now a DevOps / Cloud engineer). I formed relationships with leadership and directors that helped me become more visible (promotions), and I'm able to talk to my business stakeholders and message correctly (knowing my technical folks vs. keeping it simple for business folks, and how to frame my messaging). I've always gotten compliments on my communication, both verbal and written. The conversation has come up several times of what I want to do moving forward in my current career path: do I want to get into management and become an Engineering Manager? Management doesn't really appeal to me that much, though.

Sales Engineer Opportunity:

Several of my friends have told me that I would be an excellent sales engineer given my technical skills plus my soft skills and communication skills. One of my friends recently recommended me internally at the tech company he works at, and interviews will probably start soon if everything checks out.

I've been reading a lot of previous reddit posts and trying to understand the big pros/cons of switching from Tech Engineer (DevOps, Containers, Cloud, etc.) to Sales Engineer. This leads to a few questions for those of you who may have jumped into a sales engineering role:

Questions:

1. Are there things you miss about the standard tech engineer role vs. sales engineer?
2. If you could do it over, would you have made the switch? I hear once you are a Sales Engineer, it's hard to go 'back' to the Tech engineer role.
3. It sounds like pay is higher in SE world, but how is work / life balance?

Sorry for the verbal diarrhea. Bottom line is I am currently in a tech role (DevOps Engineer), considering Sales Engineer role, and am trying to understand the benefits and the new 'stress' that may be associated.

Thanks everyone for your time and help, it is greatly appreciated!

https://redd.it/sfbvgd
@r_devops
REQUEST: Could anyone please share DevOps Bootcamp | Techworld with Nana, only the final part (Monitoring with Prometheus)?

I don't have 1K USD to enrol in the program. I'm saving money to join the basic version, as 1K USD is too expensive and I couldn't afford the premium version. If you have subscribed to this Bootcamp, could you please share just the final part (Monitoring with Prometheus)? It means a lot to me.


Thank you.


https://imgur.com/a/Xssf5S7

https://redd.it/sfcl5i
@r_devops
SAMPLE: Low-code devops (Github)

The provided sample includes:

1. Generating GitHub API access tokens via OAuth 2.0
2. Authenticating and connecting to the GitHub API via Linx
3. Retrieving commit activity for repositories
4. Sending email notifications from Gmail for GitHub repository commit activity over a time period. The sample Solution sends an HTML email from your configured Gmail account containing a summary of your GitHub activity; it can be directed to you or anyone else you want to alert about the commits.


https://github.com/linx-software/github-devops-management
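
For illustration only, the summarization step could be sketched in plain Python (the real sample is a Linx Solution; `summarize_activity` and the data below are made up here, but the one-object-per-week shape with `week`, `total`, and `days` fields is GitHub's documented `stats/commit_activity` response format):

```python
# Hypothetical sketch of the "summary of commit activity" step.
# GitHub's /repos/{owner}/{repo}/stats/commit_activity endpoint returns
# one entry per week: {"week": <unix ts>, "total": <int>, "days": [7 ints]}.
from datetime import datetime, timezone

def summarize_activity(weeks, last_n=4):
    """Build a plain-text summary of the last `last_n` weeks of commits."""
    recent = weeks[-last_n:]
    lines = []
    for w in recent:
        start = datetime.fromtimestamp(w["week"], tz=timezone.utc).date()
        lines.append(f"week of {start}: {w['total']} commits")
    lines.append(f"total over {len(recent)} weeks: {sum(w['total'] for w in recent)} commits")
    return "\n".join(lines)

# Two weeks of fabricated data standing in for the API response.
sample = [
    {"week": 1643500800, "total": 5, "days": [0, 1, 2, 0, 1, 1, 0]},
    {"week": 1644105600, "total": 3, "days": [0, 0, 1, 1, 1, 0, 0]},
]
print(summarize_activity(sample))
```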

https://redd.it/sd3qko
@r_devops
Problem with accessing JasperServer behind nginx reverse proxy

I have several dockerized JasperServer instances. They run on the same server on different ports. On the same server I also have an nginx server running.

I have the following settings in my Dockerfile to host https://xx.xx.xx.xx/ instead of https://xx.xx.xx.xx/jasperserver


Dockerfile

RUN rm -rf /usr/local/tomcat/webapps/ROOT
RUN mv /usr/local/tomcat/webapps/jasperserver /usr/local/tomcat/webapps/ROOT
RUN rm -r /usr/local/tomcat/work

web.xml

<context-param>
    <param-name>webAppRootKey</param-name>
    <param-value>ROOT.root</param-value>
</context-param>

My environments are as follows.


|URL|env|
|:-|:-|
|https://xx.xx.xx.xx:13425|dev|
|https://xx.xx.xx.xx:13429|ops|
|https://xx.xx.xx.xx:13427|test|


So when I point to https://xx.xx.xx.xx:13429/, I can log in to JasperServer with my credentials.

My next step is to access JasperServer via nginx.

This is my location block for ops env.

location /reporting-ops/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    rewrite ^/reporting-ops/(.*)$ /$1 break;
    proxy_redirect off;
    proxy_pass https://xx.xx.xx.xx:13429/$1;
}



When I type https://xx.xx.xx.xx/reporting-ops/ in the browser, I get redirected. (Please see image)

I have done the same kind of URL rewriting for other applications and they work fine, so I assume this must be something to do with JasperServer.




Further findings:

When I type https://xx.xx.xx.xx/reporting-ops/login.html I can see the login page, but without the .css & .js loading. When I look at the request initiator chain, the first request has /reporting-ops/ but in the subsequent requests the /reporting-ops/ prefix is missing. Yet if I check the URL https://xx.xx.xx.xx/reporting-ops/runtime/5CD5658f/themes/reset.css directly, the .css file is available.

My nginx block looks like this now.




location ~ ^/reporting-ops {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Port 8083;
    proxy_redirect ~^/reporting-ops/(.*)$ https://192.168.125:13429/$1;
}
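
One option I'm considering to sidestep the prefix problem entirely: since JasperServer emits absolute asset URLs that don't know about /reporting-ops, serve each environment from its own server block instead of a sub-path, so nothing needs rewriting. A rough, untested sketch (the server_name is hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name reporting-ops.example.com;   # hypothetical per-env hostname

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # no rewrite: the app's absolute /runtime/... URLs resolve as-is
        proxy_pass https://xx.xx.xx.xx:13429;
    }
}
```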

https://redd.it/scy6nx
@r_devops
Are coding standards important?

I personally believe they are, but I work with people who don't. Curious how others view coding standards and how they deal with strong opinions against them.

https://redd.it/sfi68s
@r_devops
Google and GitHub Announce OpenSSF Scorecards v4 with New GitHub Actions Workflow

GitHub and Google have announced the version 4 release of the Open Source Security Foundation (OpenSSF)'s Scorecards project. Scorecards is an automated security tool that identifies risky supply chain practices in open source projects. This release includes a new Scorecards GitHub Action, new security checks, and a large increase in the repositories included in the foundation's weekly scans.

Read further

https://redd.it/sfi0r5
@r_devops
Any opinions on Docker and Kubernetes cookbooks by O'Reilly?

Hey!

I am new to devops and I want to start learning by using the Docker and Kubernetes cookbooks by O'Reilly, published in 2016. Should I do that? Are they still up to date? What is your opinion? Thank you!

https://redd.it/sfhgba
@r_devops
hikaru 0.10.0b released

Hikaru is a tool that provides you the ability to easily shift between YAML, Python objects/source, and JSON representations of your Kubernetes config files. It provides assistance in authoring these files in Python, opens up options in how you can assemble and customise the files, and provides some programmatic tools for inspecting large, complex files to enable automation of policy and security compliance.

Additionally, Hikaru allows you to use its K8s model objects to interact with Kubernetes, directing it to create, modify, and delete resources.

This is a 'catch-up' release for Hikaru; while it doesn't add any materially new features, it does add support for the 1.20 release of the Kubernetes Python client.

Additionally, this Hikaru release drops support for the 1.16 release of the Kubernetes Python client as it was deprecated in Hikaru 0.9.


https://pypi.org/project/hikaru/

https://hikaru.readthedocs.io/en/latest/index.html

https://github.com/haxsaw/hikaru

https://redd.it/sfmiz5
@r_devops
How to deploy Meilisearch to existing droplet with Dokku?

Hello everyone. I've used Dokku to deploy my app to a DigitalOcean droplet. How can I deploy Meilisearch on that same droplet, instead of creating a new droplet and thus spending another 5 bucks?

https://redd.it/sfmoad
@r_devops
Scalable multi-environment logging?

Hey all, I'm currently looking at making some changes to our company's dev logging infrastructure. We have these testing environments which can be created and destroyed at-will and there can be any number of them at any given time. Basically, a developer can choose a branch to deploy and a new ec2 instance is created and the application stack is started up in Docker. Currently, each of these environments has its own ELK stack.

What I'm looking to do is remove the ELK stack from each of these environments. I'm trying to do some research on solutions which would take in the logs and make them easily accessible to the developers.

There are quite a few solutions available, so I'm hoping some of you might have some experience or insight into something like this. What do you all think?
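
To make the question concrete, the shape I have in mind is each environment running only a lightweight shipper that forwards to one shared, long-lived backend. A sketch under assumptions (Fluent Bit shipping Docker's JSON logs to a central Elasticsearch; the hostname and the ENV_NAME variable are placeholders):

```ini
[INPUT]
    Name   tail
    Path   /var/lib/docker/containers/*/*.log
    Parser docker
    Tag    env.${ENV_NAME}

[FILTER]
    Name   record_modifier
    Match  env.*
    Record environment ${ENV_NAME}

[OUTPUT]
    Name  es
    Match env.*
    Host  logging.internal.example
    Port  9200
    Logstash_Format On
```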

https://redd.it/sfqgh1
@r_devops
Kubespray vs. Rancher vs. Cloud Managed Kubernetes

I work at a small company, and they want to keep costs low. The app is a game server, so not really the standard stateless web app. I am wondering what the best way to deploy our Kubernetes clusters would be, in a way that is reproducible and simple.

I guess it's a question of cost vs. complexity. My company is thinking of using bare metal servers and operating Kubernetes on them manually, I suppose using either Kubespray or Rancher, or maybe a custom Ansible playbook. The other option is to use a cloud provider's managed Kubernetes. Would that really cost that much more?

From my research of Kubespray vs. Rancher, it seems like Rancher is simpler and more well-liked, but that the simplest solution would be cloud managed Kubernetes.

Is there anything to take into consideration about our specific scenario, or any advice?

Thanks

https://redd.it/sfrvs3
@r_devops
Share your loki config!

Seems that Loki is a bit tricky to configure for aggressive log search (~10 GB/day). Looking for a good Helm chart config!
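
For reference, the knobs I've been experimenting with live under `limits_config`; a hedged sketch for the grafana/loki chart's `loki.config` passthrough (key names should be checked against your Loki and chart versions):

```yaml
loki:
  config:
    limits_config:
      # raise per-tenant ingestion caps for ~10 GB/day of logs
      ingestion_rate_mb: 16
      ingestion_burst_size_mb: 32
      # let large searches fan out across queriers
      max_query_parallelism: 32
      split_queries_by_interval: 30m
```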

https://redd.it/sfq6it
@r_devops
Why do you need one more utility for data aggregation and streaming?

# Dive into problem

Several years ago I started developing a SIP server. The first problem I encountered was that I didn't know SIP.

The proper way would be to learn SIP by studying the theory, but I don't like studying; I like investigating!

Therefore, I started by investigating a simple SIP call.

But the next problem I encountered was how many servers (or micro-services) are needed for a simple SIP call: approximately 20.

20 servers! It means that before you hear anything in the IP telephone, your call has to pass through more than 20 servers, and each server does work on it!

How do you trace one SIP call? You have several ways:

1. Set up an ELK stack in your micro-services environment and investigate the logs after a SIP call
2. Get any information you need via ssh
3. Write your own utility for the investigation

# Daggy - Data Aggregation Utility and C/C++ developer library for catching data streams

What's wrong with the first two variants?

The ELK stack looks good, but:

1. What if you want to look at, for example, tcp dumps, and ELK doesn't aggregate them?
2. What if you don't have ELK?

On the other side, via ssh and the command line you can do anything; but what if you need to aggregate data from over 20 servers and run several commands on each server? This task turns into a bash/powershell nightmare.

Therefore, several years ago, I wrote a utility that can:

1. Aggregate and stream data via command-line commands from multiple servers at the same time
2. Save each aggregation session into a separate folder; each aggregation command is saved and streamed into a separate file
3. Use Data Aggregation Sources that are simple and can be reused repeatedly

# Is it about devops?

Often, in distributed network systems, you need to capture data for analyzing and debugging user scenarios. But server-based solutions for this can be expensive: adding a new type of data capture to your ELK system is not simple. On the other side, you may want to capture binary data, like tcpdumps, during user scenario execution. In these cases daggy will help you!

https://github.com/synacker/daggy

https://redd.it/sfr5eu
@r_devops
Recommended courses for CKA certification

Hi guys,

I want to certify myself as a Certified Kubernetes Administrator. The course I want to use to prepare is the KodeKloud one on Udemy.

Do you guys recommend this course or any other courses?

Thnx!

https://redd.it/sfr2uw
@r_devops
Common avenues for reducing waste in AWS (Specifically EC2)

I'm tasked with collecting data on CPU and memory usage in EC2 and trying to figure out the best way to eliminate wasted capacity. I've got data on a few thousand instances and can see plenty of examples of boxes that run at low CPU and memory utilization (and so we usually tell the owners of those boxes to either scale down or containerize). What are some common ways to look for waste in your AWS resources? We're also working on incorporating the Trusted Advisor report into our thinking.
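
To make the question concrete, the check I'm running over the collected data looks roughly like this (the thresholds and records are illustrative; real numbers would come from CloudWatch plus a memory agent):

```python
def underutilized(instances, cpu_max=10.0, mem_max=20.0):
    """Return IDs of instances whose average CPU and memory both sit below thresholds."""
    return [
        i["id"] for i in instances
        if i["avg_cpu_pct"] < cpu_max and i["avg_mem_pct"] < mem_max
    ]

# Fabricated fleet data for illustration.
fleet = [
    {"id": "i-0aaa", "avg_cpu_pct": 3.2,  "avg_mem_pct": 11.0},  # idle: downsize candidate
    {"id": "i-0bbb", "avg_cpu_pct": 62.0, "avg_mem_pct": 55.0},  # busy: leave alone
    {"id": "i-0ccc", "avg_cpu_pct": 4.1,  "avg_mem_pct": 64.0},  # memory-bound: wrong family?
]
print(underutilized(fleet))  # only i-0aaa is below both thresholds
```

Note the third instance: low CPU but high memory suggests a different instance family rather than a smaller box, which is why I check both dimensions before flagging anything.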

https://redd.it/sfxaf1
@r_devops
Trunk-based Development, PRs and CI Question

I've been having conversations today that have me looking at my pipelines again.

They are currently based on what I thought was considered to be trunk-based development:

1. Develop locally in `trunk`
2. Fetch and rebase on trunk before pushing to remote
3. If everything looks good, `git push origin trunk:short_lived_feature_branch`, since remote `trunk` is protected / locked
4. Open a PR; CI pipelines run automated testing and a code reviewer reviews to make sure trunk does not break and coding practices are being followed
5. If approved, the `short_lived_feature_branch` is merged to `trunk` and deleted
6. The merge to trunk triggers the CD pipeline

But I was told that isn't really trunk-based development.

In a "pure" trunk-based development process, you'd be pushing directly into the remote trunk which would then run CI, and there wouldn't even be a PR.

I'm having trouble wrapping my brain around how that would work.

I use Azure DevOps, and if I push directly into trunk, my changes are there immediately. This does trigger the CI pipeline, but it could be several minutes before an issue is detected. Meanwhile, the changes are in trunk, where other developers could have fetched and rebased from.

In Azure DevOps, you can have branch policies and build validations, but those only apply to PRs, and they have to be turned off to push directly to trunk.
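
As far as I can tell, the "pure" model in Azure Pipelines terms would just be a CI trigger on trunk itself, with no PR validation at all; a minimal sketch of what I think that looks like (the steps are placeholders):

```yaml
# azure-pipelines.yml
trigger:
  branches:
    include:
      - trunk        # every direct push to trunk runs CI

steps:
  - script: ./build.sh && ./test.sh
    displayName: Build and test each trunk commit
```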

Hoping someone can explain how this "pure" trunk-based development would be implemented that doesn't turn into a shit show of developers pulling bad code and then having to communicate to them it needs to be reverted.

Going down a rabbit-hole at this point...

https://redd.it/sfwa1i
@r_devops
aws nginx handle two api locations?

Any help is appreciated. I'm trying to run 2 node express servers on 2 ports on an AWS instance with NGINX.

Prod HTTP 404s with URL: /api-new/servermembers/some-email-address, but in local dev it works with https://127.0.0.1:8080/api-new/servermembers/some-email-address

Requests to the original /api URL still work.

server_name xxxx.xxxx.com; # managed by Certbot

root /home/ubuntu/discord-bot/web/client/public;
rewrite ^/([^/.]+)$ /$1/index.html break;
error_page 404 /404/index.html;

location /api-new {
    proxy_pass https://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_cache_bypass $http_upgrade;
}

location /api {
    proxy_pass https://127.0.0.1:8222;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_cache_bypass $http_upgrade;
}

location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
}


https://redd.it/sfw397
@r_devops
How do you explain your job to people so they can understand it generally (and not bore them)?

So I am basically a combo of DevOps + sysadmin, leaning more to the DevOps part. Usual stuff: integrate databases, make dashboards, move services to cluster infrastructure, and create a CI/CD framework, among other duties.

I can't for the life of me find a way to explain what I do, and if I try, it's a conversation stopper.

How would you explain your job (or mine) to someone, if they seem interested enough to ask a follow-up question about it?

https://redd.it/sg3bp7
@r_devops