How does your team do DevOps?
I am wondering how you do DevOps at your company. Looking at the posts on this subreddit, it seems that people spend most of their time on things like Kubernetes, Docker, CI/CD pipelines, infrastructure as code, configuration management, etc., and not on JavaScript, Node, Java, .NET, etc. The disconnect I am seeing is that most people here believe DevOps is a culture thing and not specific to one team. So where does your team fit on this spectrum? Do you spend most of your time doing full-stack development and then applying DevOps principles? Or are you part of a DevOps team helping other teams better adopt DevOps? What are the advantages and disadvantages of either end of the spectrum, from a personal-growth point of view?
https://redd.it/o94z1j
@r_devops
I've got a potential opportunity to start a career in DevOps and want to know what sort of skills I should have or acquire
I just finished a 3-year higher-level apprenticeship in computer science. I have experience in service desk work, basic software development, infrastructure, SQL, and most recently M365 and the Power Platform.
I just want to know if DevOps is something I could really get into. It seems really interesting and right up my street, but I feel some external advice and tips could be super useful.
I'm thankful for any and all advice you guys can give me
https://redd.it/o95mtf
@r_devops
I want to migrate off of Heroku, where do I start?
Note: not so much migrate as avoid, since this is a fresh app and I won't need to transfer data, such as a database, off of Heroku.
Right now I use Heroku for hosting web apps because it's easy. I'm getting ready to push a project into production soon. Comparing pricing, going with a Kubernetes setup on GKE and Cloud SQL is about 20% cheaper for my needs than Heroku. And since I'm the one paying the hosting fees, I like the sound of cheaper.
But I know absolutely nothing about replicating what Heroku does on GCP. I'm only a developer; I've never had to deal with a complicated setup like this. I've got my web app running in Docker just fine, and with Docker Compose, but that's about it. How do I get from merging my Git branches to auto-deploying onto a Kubernetes cluster?
What are some good resources to learn this stuff? For reference, it's a pretty standard Rails app with Sidekiq. Uses PostgreSQL.
Thanks.
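One common shape for the "merge branch, auto-deploy" step the asker describes is a CI workflow that builds the image and rolls it out to GKE on every push to the main branch. A rough sketch only: the project, cluster, registry path, and Deployment names below are placeholders, and it assumes a service-account key stored as a repository secret and a Deployment that was created once by hand with kubectl apply.

```yaml
# Hypothetical GitHub Actions workflow; all names are placeholders.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@v0
        with:
          project_id: my-project
          service_account_key: ${{ secrets.GCP_SA_KEY }}
      # Let docker push to Google Container Registry
      - run: gcloud auth configure-docker
      - run: docker build -t gcr.io/my-project/myapp:${{ github.sha }} .
      - run: docker push gcr.io/my-project/myapp:${{ github.sha }}
      - run: gcloud container clusters get-credentials my-cluster --zone us-central1-a
      # Roll the new image out to the existing Deployment
      - run: kubectl set image deployment/myapp web=gcr.io/my-project/myapp:${{ github.sha }}
```

For the Cloud SQL side, the usual pattern is to run the Cloud SQL Auth proxy as a sidecar container in the Deployment rather than exposing the database publicly.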
https://redd.it/o9b0zd
@r_devops
DevOps with AWS Live Demo in Telugu | DevOps in Telugu | DevOps Real Time...
In this video we are going to cover DevOps with AWS Live Demo in Telugu | DevOps in Telugu | DevOps Real-Time Training in Telugu | DevOps Training in Telugu with Real-Time Projects
In this DevOps with AWS demo we are going to cover the points below:
1. DevOps introduction
2. AWS introduction
3. Who can learn this course
4. Laptop configuration needed for practicals
5. Duration of the course
6. Latest DevOps tools trending in the market in 2021
https://redd.it/o9e1rm
@r_devops
Dockerfile optimization
I have been given a task to optimize a messy Dockerfile. I've done some of it on my own; posting it here to gather some fresh ideas.
FROM python:3.6
WORKDIR /app
COPY . .
RUN chmod +x /app/run.sh
ENTRYPOINT ["/app/run.sh"]
RUN pip3 install snakemake
RUN apt-get update && apt-get install -y dirmngr gnupg apt-transport-https ca-certificates software-properties-common
RUN apt-key adv --keyserver keys.gnupg.net --recv-key '0123456789ABCD'
RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/debian buster-cran35/' && apt-get update
RUN apt-get install -y r-base
RUN apt-get update && apt-get -y upgrade && apt-get install -y --allow-unauthenticated gcc zlib1g zlib1g-dev libbz2-dev liblzma-dev build-essential unzip default-jre default-jdk make tabix libcurl4-gnutls-dev
RUN pip3 install cython
RUN pip3 install numpy==1.18.* pyvcf==0.6.8 pysam==0.15.* pandas boto3
RUN pip install awscli
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN mkdir tempo && cd tempo && aws s3 cp s3://some-bucket/some-dir/plink_linux_x86_64_20201019.zip ./ && unzip plink_linux_x86_64_20201019.zip && mv plink /bin/
RUN git clone git://github.com/SelfHacked/htslib.git && git clone git://github.com/SelfHacked/bcftools.git && cd bcftools && make && cd .. && mv bcftools/* /bin/
RUN apt-get install tabix
RUN aws s3 cp s3://some-bucket/some-dir/snpEff_latest_core.zip ./
RUN unzip snpEff_latest_core.zip && mv snpEff /app/
RUN aws s3 cp s3://some-bucket/some-dir/conform-gt.24May16.cee.jar ./ && mv conform-gt.24May16.cee.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/beagle.18May20.d20.jar ./ && mv beagle.18May20.d20.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/picard.jar ./ && mv picard.jar /app/
RUN aws s3 cp s3://some-bucket/some-dir/bedops_linux_x86_64-v2.4.39.tar.bz2 ./ && tar jxvf bedops_linux_x86_64-v2.4.39.tar.bz2 && cp -r bin/* /usr/local/bin
RUN git clone -b 1.2.1 https://github.com/Illumina/GTCtoVCF.git
RUN Rscript -e 'install.packages("https://cran.r-project.org/src/contrib/BiocManager_1.30.10.tar.gz", repos=NULL, type="source")'
RUN Rscript -e 'BiocManager::install("rtracklayer")'
RUN Rscript -e 'BiocManager::install("GenomicRanges")'
RUN aws s3 cp s3://some-bucket/some-dir/master.zip ./ && unzip master.zip && Rscript -e "install.packages('GenomeBuildPredictor-master/',repos=NULL,type='source')"
RUN apt-get update && apt-get install -y wait-for-it vim man awscli jq
COPY scripts/wkhtmltopdf.sh scripts/
RUN scripts/wkhtmltopdf.sh
COPY requirements.frozen.txt /opt/requirements.txt
RUN cd /opt && pip install --upgrade pip && pip install -r requirements.txt
Please share some of your thoughts.
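For comparison, one direction the optimization could take, sketched rather than drop-in: the base image, package lists, and S3 paths are abbreviated from the post above, while the BuildKit secret mount is an assumption (it requires Docker 18.09+ with BuildKit enabled). The main ideas are consolidating apt/pip layers, copying only the requirements file before installing dependencies so code edits don't invalidate the cache, and keeping AWS credentials out of ENV so they are never baked into an image layer.

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.6
WORKDIR /app

# One apt layer: fewer layers, and the package list stays paired with its
# `apt-get update`. (List abbreviated; merge the post's three apt layers here.)
RUN apt-get update && apt-get install -y --no-install-recommends \
        dirmngr gnupg apt-transport-https ca-certificates software-properties-common \
        gcc zlib1g zlib1g-dev libbz2-dev liblzma-dev build-essential unzip \
        default-jre default-jdk make tabix libcurl4-gnutls-dev jq vim \
    && rm -rf /var/lib/apt/lists/*

# One pip layer, driven by the pinned requirements file copied on its own.
COPY requirements.frozen.txt /opt/requirements.txt
RUN pip install --upgrade pip && \
    pip install -r /opt/requirements.txt snakemake cython awscli

# Credentials as a BuildKit secret instead of ARG/ENV, so they exist only
# during this RUN and never persist in the image. (One download shown;
# the post's other `aws s3 cp` steps would share this mount.)
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://some-bucket/some-dir/picard.jar /app/

# Application code last; ENTRYPOINT at the end for readability.
COPY . .
RUN chmod +x /app/run.sh
ENTRYPOINT ["/app/run.sh"]
```

Built with something like `DOCKER_BUILDKIT=1 docker build --secret id=aws,src=$HOME/.aws/credentials .`, so the credentials file is mounted at build time only.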
https://redd.it/o9eq5e
@r_devops
Devs to DevOps Ratio
Hi DevOps folk...
Probably a hard question to answer, and it's likely "it depends"... but do any of you know the magic ratio of devs a single DevOps engineer can support? Is it 10:1? More? Less?
I'm trying to determine the size my DevOps team needs to be as I scale out my development team with external developers. I want to keep DevOps in-house while I outsource my software development.
I've heard anecdotes that 10:1 is a safe average ratio, but I'm keen to hear what others see.
I'm aware that if you have invested in automated CI/CD you can probably do more with less, but let's see. I use Azure DevOps and have an OK-ish CI automation engine, and we have GitHub Actions acting as our CD, calling Terraform scripts to deploy on Azure.
Thanks in advance.
Neil
https://redd.it/o9ehbk
@r_devops
DevOps/SRE Reading Material
Hi guys, just wondering if anyone has any material beyond The DevOps Handbook, The Phoenix Project, and Site Reliability Engineering. I am looking for something that will really advance my system design / system architecture knowledge.
Feel free to also share any book that you found of interest.
https://redd.it/o9gcxl
@r_devops
Tech newsletters.
Hey community! ✌
I was wondering what are your go-to sources for industry news? Any newsletter you are subscribed to?
I'm particularly interested in AI / ML / AIOps , Cloud , Open Source, IT Culture, DevOps tools, IoT Security.
Thank you. 🙏
https://redd.it/o9i8g6
@r_devops
Transform legacy apps to Microservices using the DevOps approach
Read this blog post to learn how DevOps can help you transform your old apps into microservices.
https://redd.it/o9k3fq
@r_devops
Is DevOps an entry-level-friendly job?
Can somebody who doesn't have any experience get hired for a DevOps job?
Thank you
https://redd.it/o9l8tq
@r_devops
Career Question - SysOps -> DevOps
Hi,
I'm coming from a 3-year systems engineer background and want to move into DevOps and cloud engineering. I've got work experience with Linux, Cisco, Python, Ansible, and a bit of Azure. I set up a self-hosted Kubernetes cluster on a few Pis and hosted my self-developed JS application on it. I've applied to various DevOps positions ranging from intern to junior to mid, but I always got rejected immediately.
What can I improve? More projects? Maybe some certs?
https://redd.it/o9k0s0
@r_devops
Ideas and Topics for DevOps/DevSecOps Speaking Sessions?
Hi all -
Trying to brainstorm some potential topics around DevOps/DevSecOps for speaking (30 min topics) at events like DevOps Days, etc.
What are some ideas or topics that you all would love to hear more about? Automation? Getting a foot in the door? Career transitions from Ops to DevOps? Culture?
I'd love to get some ideas from others on what topics you think might be missing in tech talks.
Yes, I'm polling the audience to help my brainstorm. :)
https://redd.it/o9jvr9
@r_devops
A GitHub Action that automatically generates & updates markdown content (like your README.md) from external or remote files.
Hi everyone! I just released markdown-autodocs, a GitHub Action that helps auto-document your markdown files. Please give the repo a star if you find it useful.
Github repo: https://github.com/dineshsonachalam/markdown-autodocs
Hacker News: https://news.ycombinator.com/item?id=27662736
https://redd.it/o9oo4j
@r_devops
Fork or Copy an Entire DevOps Organization or Project
I have an organization and 2 projects I would like to keep in sync across entirely different accounts and completely different environments. Is this possible? Having it completely automated would be great, but is there any way to export and import when needed, or something similar? I want to reuse all of my work from one DevOps tenant/environment in another.
https://redd.it/o9qljx
@r_devops
Debugging on AWS infrastructure
Situation:
There are 3 environments: prod, qa, dev.
All 3 are deployed using a CloudFormation template generated by the same Serverless Framework template.
All 3 are deployed using the same source code.
All 3 have fully working configurations.
Tech involved: AWS ECS + Fargate, AWS ALB, AWS Lambda, AWS API Gateway, AWS CloudFront.
Issue:
Started happening after the dev environment was redeployed from scratch, using the same Serverless Framework template. It wasn't happening before.
https://example.com/service/some-service/ returns HTTP 200 on qa and prod but fails with HTTP 403 on dev.
Everything else works as expected.
Questions:
1. How would you go about debugging this?
2. What questions would you ask?
3. What is your best educated guess on what is the issue?
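Since every layer between the client and the origin here (CloudFront, API Gateway, ALB, the service) can emit a 403, one first step is to identify which layer generated it from the response headers. A small sketch of that triage, using the header values CloudFront and ALB set on errors they generate themselves (the URL in the usage comment is the post's placeholder):

```shell
# Classify which layer most likely produced a 403, based on headers that
# CloudFront and ALB stamp on responses they generate themselves.
classify_403() {
    headers="$1"
    case "$headers" in
        *"Error from cloudfront"*) echo "cloudfront" ;;  # X-Cache on CloudFront-generated errors
        *"awselb"*)                echo "alb" ;;         # Server: awselb/2.0 on ALB-generated errors
        *)                         echo "origin" ;;      # neither: the 403 came from further back
    esac
}

# Usage against the failing environment:
#   classify_403 "$(curl -sI https://example.com/service/some-service/)"
```

If it points at CloudFront, comparing the dev distribution's behaviors and origin paths against qa/prod (they were regenerated on the redeploy) would be the next diff to make.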
https://redd.it/o9qgxo
@r_devops
DevOps Days Amsterdam Online - JUNE 29, 2021
If you want to join: https://devopsdays.org/events/2021-amsterdam/welcome/
https://redd.it/o9tom8
@r_devops
I built a reference architecture for global package logistics and learned a bunch about Terraform in the process + scaled up to 400k packages delivered per second!
I recently joined a new team at my company focused on innovation. Part of my new job is developing reference architectures with various technologies. For June, I decided to experiment with [SingleStore](https://www.singlestore.com) (scale-out relational database) and [Redpanda](https://vectorized.io/redpanda) (high-performance alternative to Apache Kafka).
While I am quite excited to share my [blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/), for the purpose of discussion in this subreddit I feel like talking about how I used Terraform to deploy a reference architecture quickly.
tldr; [step-by-step instructions](https://github.com/singlestore-labs/singlestore-logistics-sim#deploying-into-google-cloud) and [Terraform module](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp)
My goal for this simulation was to eventually get it running on a cloud platform in order to show off its ability to scale. But I also wanted to start developing quickly, so with that in mind I started with docker-compose ([docker-compose.yml](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/docker-compose.yml)).
After getting the simulation working locally and building out a couple of dashboards in Grafana, I was ready to embark on deployment. I evaluated a couple of different approaches:
First, I thought about deploying into managed Kubernetes. On the plus side, since I had already got everything working with docker-compose, it seemed like it should be straightforward to lift and shift over. Unfortunately, in order to squeeze maximum performance out of this architecture I needed to make extensive use of [ephemeral storage](https://cloud.google.com/compute/docs/disks#localssds). While this is certainly possible in Kube, it's not easy to configure and tends to result in fragile clusters. On top of that, I wanted to control every variable in the simulation (again, performance first), and the additional layers of abstraction were worrying.
I pivoted from Kube to Terraform + Ansible. I had used Terraform before, and had heard good things about Ansible. Seemed like a great place to get started. After setting up the resources I needed, I started to figure out how to hydrate my Ansible inventory from my Terraform state. Turns out this is something that a lot of people want, and there are some solutions floating around, but none of them are easy or simple. My goal was a single command to bring up the whole infrastructure along with all of the software from a bare GitHub checkout. While it looked like Ansible would do it, it required more of an investment than I wanted in this short-term project (2 weeks).
Thus, I found myself with a bare Terraform module and a large number of hand-written setup scripts. Honestly, this ended up working very nicely. I [organized the scripts here](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp/scripts) and kept each one focused on a specific task. This ended up working somewhat like Ansible modules. Then for each of my different host types [I dynamically compiled a single script and uploaded it to Google Cloud Storage](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/scripts.tf#L60). Finally, I leveraged the [startup-script-url metadata option](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/singlestore.tf#L24) that works on Google Cloud instances to cause the script to download at instance startup. You will also see script variables injected via metadata as well.
So, how would I review my overall experience? I would have preferred to use something like Ansible, but the overhead of initializing inventory from dynamic state turned me off that option. Kube would also have been great, but ephemeral resources
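The "script in a bucket plus startup-script-url metadata" pattern described above looks roughly like this in Terraform. A sketch only: the bucket, instance, machine type, and script names below are placeholders, not taken from the repo.

```hcl
# Upload the compiled setup script, then point the instance at it via the
# startup-script-url metadata key, which GCE fetches and runs at boot.
resource "google_storage_bucket_object" "node_setup" {
  name   = "scripts/node.sh"
  bucket = "my-deploy-bucket"
  source = "${path.module}/scripts/node.sh"
}

resource "google_compute_instance" "node" {
  name         = "sim-node"
  machine_type = "n2-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    "startup-script-url" = "gs://${google_storage_bucket_object.node_setup.bucket}/${google_storage_bucket_object.node_setup.name}"
    # Script variables can be injected as extra metadata keys and read back
    # inside the script from the instance metadata server.
    node_role = "storage"
  }
}
```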
I recently joined a new team at my company focused on innovation. Part of my new job is developing reference architectures with various technologies. For June, I decided to experiment with SingleStore (scale out relational database) and Redpanda (high performance alternative to Apache Kafka).
While I am quite excited to share my blog post, for the purpose of discussion in this subreddit I feel like talking about how I used Terraform to deploy a reference architecture quickly.
tldr; step by step instructions and Terraform module
My goal for this simulation was to eventually get it running on a cloud platform in order to show off it's ability to scale. But, I also wanted to start developing quickly, so with that in mind I started with docker-compose (docker-compose.yml).
After getting the simulation working locally and building out a couple dashboards in Grafana, I was ready to embark on deployment. I evaluated a couple different approaches:
First, I thought about deploying into managed kubernetes. On the plus side, since I had already got everything working with docker-compose, it seemed like it should be straightforward to lift and shift over. Unfortunately, in order to squeeze maximum performance out of this architecture I needed to make extensive use of ephemeral storage. While this is certainly possible in Kube, it's not easy to configure and tends to result in fragile clusters. On top of that, I wanted to control every variable in the simulation (again, performance first) and the additional layers of abstraction was worrying.
I pivoted from Kube to Terraform + Ansible. I had used Terraform before, and had heard good things about Ansible. Seemed like a great place to get started. After setting up the resources I needed I started to figure out how to hydrate my Ansible inventory from my Terraform state. Turns out this is something that a lot of people want, and there are some solutions floating around - but none of them are easy or simple. My goal was a single command to bring up the whole infrastructure along with all of the software from a bare github checkout. While it looked like Ansible would do it - it required more of an investment than I wanted in this short term project (2 weeks).
Thus, I found myself with a bare Terraform module and a large number of hand-written setup scripts. Honestly, this ended up working very nicely. I organized the scripts here and kept each one focused on a specific task. This ended up working sorta similar to Ansible modules. Then for each of my different host types I dynamically compiled a single script and uploaded it to google storage. Finally, I leveraged the startup-script-url metadata option that works on Google Cloud instances to cause the script to download at instance startup. You will also see script variables injected via metadata as well.
So, how would I review my overall experience? I would have preferred to use something like Ansible but the overhead of initializing inventory from dynamic state turned me off that option. Kube would have been also great, but ephemeral resources
SingleStore
SingleStore | The Performance You Need for Enterprise AI
SingleStore delivers the performance you need for enterprise AI. We combine transactional (OLTP) and analytical (OLAP) processing, multi-model data support (vectors, full-text, JSON, time-series, etc.) and real-time analytics all in one platform.
I built a reference architecture for global package logistics and learned a bunch about Terraform in the process + scaled up to 400k packages delivered per second!
I recently joined a new team at my company focused on innovation. Part of my new job is developing reference architectures with various technologies. For June, I decided to experiment with [SingleStore](https://www.singlestore.com) (scale out relational database) and [Redpanda](https://vectorized.io/redpanda) (high performance alternative to Apache Kafka).
While I am quite excited to share my [blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/), for the purpose of discussion in this subreddit I feel like talking about how I used Terraform to deploy a reference architecture quickly.
tldr; [step by step instructions](https://github.com/singlestore-labs/singlestore-logistics-sim#deploying-into-google-cloud) and [Terraform module](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp)
My goal for this simulation was to eventually get it running on a cloud platform in order to show off it's ability to scale. But, I also wanted to start developing quickly, so with that in mind I started with docker-compose ([docker-compose.yml](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/docker-compose.yml)).
After getting the simulation working locally and building out a couple dashboards in Grafana, I was ready to embark on deployment. I evaluated a couple different approaches:
First, I thought about deploying into managed kubernetes. On the plus side, since I had already got everything working with docker-compose, it seemed like it should be straightforward to lift and shift over. Unfortunately, in order to squeeze maximum performance out of this architecture I needed to make extensive use of [ephemeral storage](https://cloud.google.com/compute/docs/disks#localssds). While this is certainly possible in Kube, it's not easy to configure and tends to result in fragile clusters. On top of that, I wanted to control every variable in the simulation (again, performance first) and the additional layers of abstraction was worrying.
I pivoted from Kube to Terraform + Ansible. I had used Terraform before and had heard good things about Ansible, so it seemed like a great place to start. After setting up the resources I needed, I started figuring out how to hydrate my Ansible inventory from my Terraform state. It turns out this is something a lot of people want, and there are some solutions floating around, but none of them are easy or simple. My goal was a single command that brings up the whole infrastructure along with all of the software from a bare GitHub checkout. While it looked like Ansible could do it, it required more of an investment than I wanted for this short-term project (2 weeks).
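For anyone curious what that inventory-hydration step looks like in its simplest form: the core idea is just mapping Terraform outputs to Ansible groups. This is a minimal illustrative sketch, not the project's code; the output names (`agent_ips`, `singlestore_ips`) and the naming-based group mapping are assumptions.

```python
import json

# Hypothetical stand-in for `terraform output -json`; in real use you
# would capture that command's stdout via subprocess instead.
tf_output = json.loads("""
{
  "agent_ips": {"value": ["10.0.0.5", "10.0.0.6"]},
  "singlestore_ips": {"value": ["10.0.1.5"]}
}
""")

def to_ini_inventory(outputs):
    """Render Terraform outputs as a static Ansible INI inventory.

    Assumes each output is a list of IPs and that the output name
    (minus the `_ips` suffix) doubles as the Ansible group name.
    """
    lines = []
    for name, out in sorted(outputs.items()):
        group = name.removesuffix("_ips")
        lines.append(f"[{group}]")
        lines.extend(out["value"])
        lines.append("")
    return "\n".join(lines)

print(to_ini_inventory(tf_output))
```

Writing this file once per `terraform apply` and pointing `ansible-playbook -i` at it avoids the dynamic-inventory plumbing entirely, at the cost of the file going stale if instances change out of band.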
Thus, I found myself with a bare Terraform module and a large number of hand-written setup scripts. Honestly, this ended up working very nicely. I [organized the scripts here](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp/scripts) and kept each one focused on a specific task, which ended up working somewhat like Ansible modules. Then, for each of my host types, [I dynamically compiled a single script and uploaded it to Google Cloud Storage](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/scripts.tf#L60). Finally, I leveraged the [startup-script-url metadata option](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/singlestore.tf#L24) on Google Cloud instances so the script is downloaded at instance startup. You will also see script variables injected via metadata.
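For context on the metadata-injected variables: from inside a GCE instance, custom metadata is read from the metadata server, which requires the `Metadata-Flavor: Google` header on every request. A minimal sketch of that read (the key name below is hypothetical, not one the project actually sets):

```python
import urllib.request

METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1"

def metadata_request(key):
    """Build a request for a custom instance metadata attribute.

    GCE rejects metadata reads that lack the Metadata-Flavor header,
    which guards against accidental/SSRF-style requests.
    """
    url = f"{METADATA_ROOT}/instance/attributes/{key}"
    return urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})

def read_metadata(key, timeout=5):
    # Only resolvable from inside a GCE instance.
    with urllib.request.urlopen(metadata_request(key), timeout=timeout) as resp:
        return resp.read().decode()

# e.g. read_metadata("cluster-role")  # hypothetical key
```

This is also exactly why the sensitive-data caveat below matters: any process on the instance that can issue an HTTP request can perform this same read.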
So, how would I review my overall experience? I would have preferred something like Ansible, but the overhead of initializing inventory from dynamic state turned me off that option. Kube would have also been great, but ephemeral resources
like local disks aren't quite there yet. Perhaps something to revisit in the future. For now, I am reasonably happy with my all-Terraform solution. The downsides of this approach include:
* don't store sensitive data in metadata (e.g. the license I'm storing) - anyone able to issue an HTTP request from one of the machines can easily exfiltrate it
* a ton of hand-written bash scripts is not easy to debug and even harder to make idempotent - in this case I just blew everything away whenever there was an issue, but obviously don't do this in production
* not to mention that there is no easy way to upgrade this solution - I would only trust something like a full backup + offline rebuild
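One cheap way to soften the idempotency problem is a marker-file guard around each setup step, so re-running the whole setup skips completed steps instead of requiring a full teardown. A rough sketch of the pattern, not taken from the project (the state directory and step names are assumptions):

```python
from pathlib import Path

def run_once(step_name, action, state_dir="/var/lib/setup-steps"):
    """Run `action` only if its marker file is absent.

    The marker is written only after `action` succeeds, so a failed step
    is retried on the next run while completed steps are skipped.
    """
    marker = Path(state_dir) / step_name
    if marker.exists():
        return False              # step already completed on a previous run
    marker.parent.mkdir(parents=True, exist_ok=True)
    action()                      # may raise; marker not written on failure
    marker.write_text("done")
    return True

# e.g. run_once("install-redpanda", lambda: ...)  # hypothetical step
```

The same trick works in bash (`[ -f "$marker" ] || { step && touch "$marker"; }`), which is closer to what these setup scripts would actually use.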
Well... this ended up being a lot longer than expected - but I figured that this community might enjoy hearing some of the background behind the various decisions I made while quickly putting this system together. Check out [the blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/) if you want to learn more about this project's background, and be sure to leave comments if you have any questions about my solution!
Thanks for reading! :)
https://redd.it/o9toig
@r_devops
Anyone here hold a security clearance and work with a modern tech stack (k8s, Terraform, AWS/GCP, Python/Go, etc)? How much more do you make with the clearance vs without? Is it worth getting one?
I'm considering a gig that requires obtaining a security clearance, but I'm worried about some things from my past regarding drug use (a drug that's legal at the state level, not federal, and I have no criminal record related to it). Is it worth just going through with the process and trying to obtain one? Passing a drug test and staying clean while holding a clearance wouldn't be an issue; my previous history with it might be.
https://redd.it/o9uvge
@r_devops
Direktiv: Docker development environment, VSCode plugin & Infrastructure-as-a-Chatbot
G'day DevOps,
Another update to our Direktiv event-driven serverless workflow engine, this one focused on development. Release v0.3.1 included some bug fixes, improved stability, and security enhancements, but more notably:
A Docker development environment (A Direktiv instance on your laptop or desktop!)
VSCode integration for workflow management and development
The update builds on the features released for the GitHub Marketplace and hopefully makes it easier for developers (and non-developers alike) to create, verify, and deploy Direktiv workflows and plugins as containers!
The latest blog article is available at:
https://blog.direktiv.io/direktiv-new-dev-environment-vscode-plugin-ab047b7a8266
and the implementation docs at:
https://docs.direktiv.io/docs/development.html
We also created (as a PoC) an Infrastructure-as-a-Chat integration with Google Dialogflow (it provisions to AWS and GCP):
https://blog.direktiv.io/direktiv-cloud-provisioning-chatbot-part-1-f482bb9ea943
The article above is the first in a three-part series, but it gives a good overview of what was done :)
Finally, we added some new plugins to the direktiv-apps repository, one of which lets you run Terraform scripts (without needing a Terraform environment):
https://github.com/vorteil/direktiv-apps/tree/master/terraform
As always - feedback is welcomed!!!
https://redd.it/o9wwlp
@r_devops