Reddit DevOps
DevOps with AWS Live Demo in Telugu (తెలుగులో DevOps, "DevOps in Telugu") | DevOps Real-Time Training

In this video we cover a DevOps with AWS live demo in Telugu: real-time DevOps training in Telugu with real-time projects.

In this DevOps with AWS demo we cover the following points:

1. DevOps introduction
2. AWS introduction
3. Who can learn this course
4. Laptop configuration needed for practicals
5. Duration of the course
6. The latest DevOps tools trending in the market in 2021

https://redd.it/o9e1rm
@r_devops
Dockerfile optimization

I have been given a task to optimize a messy Dockerfile. I've done some of it on my own. Posting it here to gather some fresh ideas.


FROM python:3.6
WORKDIR /app
COPY . .
RUN chmod +x /app/run.sh
ENTRYPOINT ["/app/run.sh"]
RUN pip3 install snakemake
RUN apt-get update && apt-get install -y dirmngr gnupg apt-transport-https ca-certificates software-properties-common
RUN apt-key adv --keyserver keys.gnupg.net --recv-key '0123456789ABCD'
RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/debian buster-cran35/' && apt-get update
RUN apt-get install -y r-base
RUN apt-get update && apt-get -y upgrade && apt-get install -y --allow-unauthenticated gcc zlib1g zlib1g-dev libbz2-dev liblzma-dev build-essential unzip default-jre default-jdk make tabix libcurl4-gnutls-dev
RUN pip3 install cython
RUN pip3 install numpy==1.18.* pyvcf==0.6.8 pysam==0.15.* pandas boto3
RUN pip install awscli
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN mkdir tempo && cd tempo && aws s3 cp s3://some-bucket/some-dir/plink_linux_x86_64_20201019.zip ./ && unzip plink_linux_x86_64_20201019.zip && mv plink /bin/
RUN git clone git://github.com/SelfHacked/htslib.git && git clone git://github.com/SelfHacked/bcftools.git && cd bcftools && make && cd .. && mv bcftools/* /bin/
RUN apt-get install tabix
RUN aws s3 cp s3://some-bucket/some-dir/snpEff_latest_core.zip ./
RUN unzip snpEff_latest_core.zip && mv snpEff /app/
RUN aws s3 cp s3://some-bucket/some-dir/conform-gt.24May16.cee.jar ./ && mv conform-gt.24May16.cee.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/beagle.18May20.d20.jar ./ && mv beagle.18May20.d20.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/picard.jar ./ && mv picard.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/bedops_linux_x86_64-v2.4.39.tar.bz2 ./ && tar jxvf bedops_linux_x86_64-v2.4.39.tar.bz2 && cp -r bin/* /usr/local/bin
RUN git clone -b 1.2.1 https://github.com/Illumina/GTCtoVCF.git
RUN Rscript -e 'install.packages("https://cran.r-project.org/src/contrib/BiocManager_1.30.10.tar.gz", repos=NULL, type="source")'
RUN Rscript -e 'BiocManager::install("rtracklayer")'
RUN Rscript -e 'BiocManager::install("GenomicRanges")'
RUN aws s3 cp s3://some-bucket/some-dir/master.zip ./ && unzip master.zip && Rscript -e "install.packages('GenomeBuildPredictor-master/',repos=NULL,type='source')"
RUN apt-get update && apt-get install -y wait-for-it vim man awscli jq
COPY scripts/wkhtmltopdf.sh scripts/
RUN scripts/wkhtmltopdf.sh
COPY requirements.frozen.txt /opt/requirements.txt
RUN cd /opt && pip install --upgrade pip && pip install -r requirements.txt





Please share some of your thoughts.
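To seed the discussion, here is a sketch of the usual fixes, hedged: the base image and paths come from the file above, but the exact package split is illustrative. The main ideas are ordering layers from least to most frequently changed, merging apt/pip steps into fewer layers, and never baking AWS keys into ENV (they persist in image history):

```dockerfile
# Illustrative sketch only, not a drop-in replacement for the file above.
FROM python:3.6

# One apt layer; clean the cache in the same layer to keep the image small.
RUN apt-get update && apt-get install -y --no-install-recommends \
        dirmngr gnupg apt-transport-https ca-certificates software-properties-common \
        gcc zlib1g zlib1g-dev libbz2-dev liblzma-dev build-essential unzip \
        default-jre default-jdk make tabix libcurl4-gnutls-dev \
        wait-for-it vim man jq \
    && rm -rf /var/lib/apt/lists/*

# Pinned requirements rarely change, so copy them early to maximize cache hits.
COPY requirements.frozen.txt /opt/requirements.txt
RUN pip install --upgrade pip \
    && pip install -r /opt/requirements.txt snakemake cython \
       numpy==1.18.* pyvcf==0.6.8 pysam==0.15.* pandas boto3 awscli

WORKDIR /app

# Instead of ARG/ENV credentials, with BuildKit you can mount a secret for a
# single RUN so it never lands in a layer:
# RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
#     aws s3 cp s3://some-bucket/some-dir/picard.jar /app/

# Application code last: it changes most often, so it invalidates the fewest layers.
COPY . .
RUN chmod +x /app/run.sh
ENTRYPOINT ["/app/run.sh"]
```

A multi-stage build (one stage to compile bcftools/htslib, a slim final stage that copies only the binaries) would shrink the image further.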

https://redd.it/o9eq5e
@r_devops
Devs to DevOps Ratio

Hi DevOps folk...

Probably a hard question to answer, and it's likely "it depends"... but do any of you know the magic ratio of devs a single DevOps engineer can support? Is it 10:1? More? Less?

I'm trying to determine the size my DevOps team needs to be as I scale out my development team with external developers. I want to keep DevOps in-house while I outsource my software development.

I've heard anecdotes that 10:1 is a safe average ratio, but I'm keen to hear what others see.

I'm aware that if you have invested in automated CI/CD you can probably do more with less, but let's see. I use Azure DevOps with an OK-ish CI automation engine, and we have GitHub Actions acting as CD, calling Terraform scripts to deploy on Azure.

Thanks in advance.


Neil

https://redd.it/o9ehbk
@r_devops
DevOps/SRE Reading Material

Hi guys, just wondering if anyone has any material beyond The DevOps Handbook, The Phoenix Project, and Site Reliability Engineering. I am looking for something that will really advance my system design/system architecture knowledge.

Feel free to also share any book that you found of interest.

https://redd.it/o9gcxl
@r_devops
Tech newsletters.

Hey community!

I was wondering what are your go-to sources for industry news? Any newsletter you are subscribed to?

I'm particularly interested in AI/ML/AIOps, cloud, open source, IT culture, DevOps tools, and IoT security.

Thank you. 🙏

https://redd.it/o9i8g6
@r_devops
Is DevOps an entry-level-friendly job?

Can somebody without experience get hired for a DevOps job?

Thank you

https://redd.it/o9l8tq
@r_devops
Career Question - SysOps -> DevOps

Hi,

I'm coming from a 3-year systems engineering background and want to move into DevOps and cloud engineering. I've got work experience with Linux, Cisco, Python, Ansible, and a bit of Azure. I built a self-hosted Kubernetes cluster on a few Pis and hosted my self-developed JS application on it. I've applied to various DevOps positions ranging from intern to junior to mid-level, but I always get rejected immediately.

What can I improve? More projects? Maybe some certs?

https://redd.it/o9k0s0
@r_devops
Ideas and Topics for DevOps/DevSecOps Speaking Sessions?

Hi all -

Trying to brainstorm some potential topics around DevOps/DevSecOps for speaking (30 min topics) at events like DevOps Days, etc.

What are some ideas/topics that you all would love to hear more about? Automation? Getting a foot in the door? Career transitions from Ops to DevOps? Culture?

I'd love to get some ideas from others on what topics you think might be missing in tech talks.

Yes, I'm polling the audience to help me brainstorm. :)

https://redd.it/o9jvr9
@r_devops
A GitHub Action that automatically generates & updates markdown content (like your README.md) from external or remote files.

Hi everyone! I just released markdown-autodocs, a GitHub Action that helps auto-document your markdown files. Please star the repo if you find it useful.


Github repo: https://github.com/dineshsonachalam/markdown-autodocs

Hacker News: https://news.ycombinator.com/item?id=27662736

https://redd.it/o9oo4j
@r_devops
Fork or Copy an Entire DevOps Organization or Project

I have an organization and two projects I would like to keep in sync across entirely different accounts and completely different environments. Is this possible? Having it completely automated would be great, but even a way to export and import when needed, or something similar, would help. I want to reuse all of my work from one DevOps tenant/environment in another.

https://redd.it/o9qljx
@r_devops
Debugging on AWS infrastructure

Situation:

There are 3 environments: prod, qa, and dev.
All 3 are deployed from a CloudFormation template generated by the same Serverless Framework template.
All 3 are deployed from the same source code.
All 3 have fully working configurations.
Tech involved: AWS ECS + Fargate, AWS ALB, AWS Lambda, AWS API Gateway, AWS CloudFront.

Issue:

Started happening after the dev environment was redeployed from scratch, using the same Serverless Framework template. It wasn't happening before.
https://example.com/service/some-service/ returns HTTP 200 on qa and prod but fails with HTTP 403 on dev.
Everything else works as expected.


Questions:

1. How would you go about debugging this?
2. What questions would you ask?
3. What is your best educated guess on what is the issue?

https://redd.it/o9qgxo
@r_devops
I built a reference architecture for global package logistics and learned a bunch about Terraform in the process + scaled up to 400k packages delivered per second!

I recently joined a new team at my company focused on innovation. Part of my new job is developing reference architectures with various technologies. For June, I decided to experiment with [SingleStore](https://www.singlestore.com) (scale out relational database) and [Redpanda](https://vectorized.io/redpanda) (high performance alternative to Apache Kafka).

While I am quite excited to share my [blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/), for the purpose of discussion in this subreddit I feel like talking about how I used Terraform to deploy a reference architecture quickly.

tldr; [step by step instructions](https://github.com/singlestore-labs/singlestore-logistics-sim#deploying-into-google-cloud) and [Terraform module](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp)

My goal for this simulation was to eventually get it running on a cloud platform in order to show off its ability to scale. But I also wanted to start developing quickly, so with that in mind I started with docker-compose ([docker-compose.yml](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/docker-compose.yml)).

After getting the simulation working locally and building out a couple dashboards in Grafana, I was ready to embark on deployment. I evaluated a couple different approaches:

First, I thought about deploying into managed Kubernetes. On the plus side, since I had already gotten everything working with docker-compose, it seemed like it should be straightforward to lift and shift. Unfortunately, in order to squeeze maximum performance out of this architecture I needed to make extensive use of [ephemeral storage](https://cloud.google.com/compute/docs/disks#localssds). While this is certainly possible in Kube, it's not easy to configure and tends to result in fragile clusters. On top of that, I wanted to control every variable in the simulation (again, performance first), and the additional layers of abstraction were worrying.

I pivoted from Kube to Terraform + Ansible. I had used Terraform before and had heard good things about Ansible, so it seemed like a great place to start. After setting up the resources I needed, I started to figure out how to hydrate my Ansible inventory from my Terraform state. It turns out this is something a lot of people want, and there are some solutions floating around, but none of them are easy or simple. My goal was a single command to bring up the whole infrastructure along with all of the software from a bare GitHub checkout. While it looked like Ansible could do it, it required more of an investment than I wanted in this short-term project (2 weeks).

Thus, I found myself with a bare Terraform module and a large number of hand-written setup scripts. Honestly, this ended up working very nicely. I [organized the scripts here](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp/scripts) and kept each one focused on a specific task, which ended up working somewhat like Ansible modules. Then for each of my different host types [I dynamically compiled a single script and uploaded it to Google Cloud Storage](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/scripts.tf#L60). Finally, I leveraged the [startup-script-url metadata option](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/singlestore.tf#L24) on Google Cloud instances to make each instance download its script at startup. You will also see script variables injected via metadata.
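For readers unfamiliar with that GCE feature, the startup-script-url pattern looks roughly like this in HCL. This is a sketch: the resource name, machine type, image, and bucket path are placeholders, not taken from the repo:

```terraform
resource "google_compute_instance" "worker" {
  name         = "logistics-worker-0" # placeholder name
  machine_type = "n2-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    # GCE fetches and runs this script on first boot.
    startup-script-url = "gs://example-bucket/scripts/worker.sh"
    # Plain values can be injected the same way; inside the instance they are
    # readable from the metadata server under .../instance/attributes/role.
    role = "worker"
  }
}
```

The startup script itself can then branch on the `role` attribute, which is how one compiled script per host type stays manageable.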

So, how would I review my overall experience? I would have preferred to use something like Ansible but the overhead of initializing inventory from dynamic state turned me off that option. Kube would have been also great, but ephemeral resources
like local disks aren't quite there yet; perhaps something to revisit in the future. For now I am reasonably happy with my all-Terraform solution. The downsides of this approach include:

* Don't store sensitive data in metadata (like I am doing with the license): anyone able to issue an HTTP request from one of the machines can easily exfiltrate it.
* A ton of bash scripts is not easy to debug and even harder to make idempotent. In this case I just blew everything away if there was an issue, but obviously don't do this in production.
* Not to mention there is no easy way to upgrade this solution; I would only trust something like a full backup plus offline rebuild.

Well... this ended up being a lot longer than expected, but I figured this community might enjoy hearing some of the background behind the various decisions I made while quickly putting this system together. Check out [the blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/) if you want to learn more about this project's background, and be sure to leave comments if you have any questions about my solution!

Thanks for reading! :)

https://redd.it/o9toig
@r_devops
Anyone here hold a security clearance and work with a modern tech stack (k8s, Terraform, AWS/GCP, Python/Go, etc)? How much more do you make with the clearance vs without? Is it worth getting one?

I'm considering a gig that requires obtaining a security clearance, but I'm worried about some things from my past regarding drug use (a drug that's legal at the state level, not federal, and I have no criminal record related to it). Is it worth it to just go through with the process and try to obtain one? Passing a drug test and staying clean while holding a clearance wouldn't be an issue; my previous history with it might be.

https://redd.it/o9uvge
@r_devops
Direktiv: Docker development environment, VSCode plugin & Infrastructure-as-a-Chatbot

G'day DevOps,

Another update to our Direktiv event-driven serverless workflow engine, this one focused on development. Release v0.3.1 included some bug fixes, improved stability, and security enhancements, but more notably:

- A Docker development environment (a Direktiv instance on your laptop or desktop!)
- VSCode integration for workflow management and development

The update builds on the features released for the GitHub Marketplace and hopefully makes it easier for developers (and non-developers alike) to create, verify, and deploy Direktiv workflows and plugins as containers!

The latest blog article is available at:

https://blog.direktiv.io/direktiv-new-dev-environment-vscode-plugin-ab047b7a8266

and the implementation docs at:

https://docs.direktiv.io/docs/development.html

We also created (as a PoC) an Infrastructure-as-a-Chat integration with Google Dialogflow (it provisions to AWS and GCP):

https://blog.direktiv.io/direktiv-cloud-provisioning-chatbot-part-1-f482bb9ea943

The second article is the first in a three-part series, but it gives a good overview of what was done :)

Finally, we added some new plugins to the direktiv-apps repository, one of which lets you run Terraform scripts (without having a Terraform environment):

https://github.com/vorteil/direktiv-apps/tree/master/terraform

As always - feedback is welcomed!!!

https://redd.it/o9wwlp
@r_devops
DevOps Beginner Guide

I'm a beginner and I want to start learning DevOps and practically apply the lifecycle of DevOps ( Plan, Code, Build, Test, CI/CD, etc) using the tools and software.

Can anyone guide me to books or courses or tutorials where I will learn to use all the tools needed like how it's done in this field?

Ideally, the same project would be used across the entire DevOps lifecycle.

Thanks in advance!

https://redd.it/o9r1gb
@r_devops
Is it hard to find DevOps jobs that involve software dev?

I know "Dev" is in the name, however a lot of the time DevOps engineering positions seem to skew more towards "Ops", from what I've seen. This is the case with my job. I am more of a Cloud engineer who can write IAC and automation code as well. Any other "DevOps" things I do like CI/CD pipelines and containers revolve around cloud infrastructure.

I like this stuff, but where my heart truly belongs is building applications through the SDLC: full-stack development, automated testing, the works. I had hoped my job would include these things as well as the operational work like infrastructure, hosting, and pipelines. After all, that's a big part of the idea of DevOps. But it hasn't turned out that way.

My question is how common is it to find positions that truly include Dev and Ops in a single position/team? I'm not talking about just the big tech companies either, I mean through the entire "DevOps" landscape.

https://redd.it/o9z25l
@r_devops
Logical grouping of resources created using for_each with a conditional statement

Consider the following scenario:


I am trying to create multiple resources from multiple modules using for_each.


My main.tf file reads:


//postgres


module "postgres" {
  source                    = "./postgres"
  for_each                  = var.app
  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}

//mysql


module "mysql" {
  source                 = "./mysql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mysql.region
  mysql_database_version = lookup(each.value.mysql, "mysql_database_version", "")
}

//mssql

module "mssql" {
  source                 = "./mssql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mssql.region
  mssql_database_version = lookup(each.value.mssql, "mssql_database_version", "")
}

variable.tf

variable "app" {}

terraform.tfvars


app = {
  app1 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
  app2 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
  app3 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
}

This works fine if I am creating all three resources (mysql, mssql, and postgres) for app1, app2, and app3.

However, it does not work if I want to create, say, only postgres for app1, mysql and mssql for app2, and mssql and postgres for app3, as follows:

app = {
  app1 = {
    postgres = {
      region = "us-east1"
    }
  }
  app2 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
  }
  app3 = {
    mssql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
}

I need to include a conditional in for_each that skips creating a resource when no value is provided for it or an empty map is passed.

For example:

app = {
  app1 = {
    postgres = {
      region = "us-east1"
    }
    mssql = {}
    mysql = {}
  }
}

should only create a postgres DB

I have tried:

module "mysql" {
  source   = "./mysql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mssql != {} }
}

module "postgres" {
  source   = "./postgres"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].postgres != {} }
}

module "mssql" {
  source   = "./mssql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mysql != {} }
}

but this does not seem to work. Any ideas on how to solve this would be much appreciated.
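One pattern worth trying (a sketch, assuming Terraform 0.13+ where modules support for_each; exact `lookup` behavior on object values varies by version): iterate over var.app as a map so the keys are preserved, and filter each module to the apps whose nested block exists and is non-empty. The comprehension above loses the keys by using values(var.app)[*], which is why i => c has nothing to bind to.

```terraform
//postgres: created only for apps that define a non-empty postgres block
module "postgres" {
  source = "./postgres"
  # k is the app name ("app1"), v is its map of database blocks.
  for_each = { for k, v in var.app : k => v if length(lookup(v, "postgres", {})) > 0 }

  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}

// mysql and mssql follow the same shape with their own keys.
```

With this filter, passing `postgres = {}` or omitting the block entirely both skip creation; `contains(keys(v), "postgres")` is an alternative guard if the block may be absent rather than empty.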

https://redd.it/o9x47s
@r_devops