Looking for suggestions on Git training courses for senior employees who need retraining
I'm looking for Git training course suggestions:
The Situation:
I have been tasked with providing a plan for my company's migration from Perforce to Git, eventually moving our current process into Bitbucket. Obviously part of this involves training employees on Git, some of whom have never used it. We are a small company, so in-house training from other employees would be too much of a time sink to be a viable solution for us.
I've been scouring the internet, but I'm having trouble finding unbiased reviews of the various Git training courses.
Another consideration is that, ideally, there'd be some form of testing/interaction involved. Unfortunately, I'm worried that pure reading/video-type courses will result in employees just clicking through to get it done as quickly as possible, causing a knowledge gap.
Paid or free doesn't matter for us.
https://redd.it/p2im5u
@r_devops
What does your company give you for professional development?
Hello,
I'm in a position where I can significantly influence the creation of a professional development policy for our company. We're roughly 400 people, the majority of whom are based in North America, but we have a global presence.
I'm curious what everyone's companies are giving them in terms of professional development?
Specifically:
* Do you have a departmental, team, or individual budget? If so, how much?
* Do you have guidelines on how much your company reimburses vs. the individual? For example, if someone wants to do an expensive certification (i.e. in the thousands of dollars), does the company reimburse up to a certain percentage?
* How does your company determine what is eligible?
There are mixed opinions about this internally, so I'm trying to collect some data points to justify a pretty generous policy in the spirit of retention. Thanks!
https://redd.it/p2jsea
@r_devops
What are some tools you have built that you are particularly proud of?
It's your time to shine. Brag away!
https://redd.it/p2kcwv
@r_devops
Test Cloud Ping
Just discovered this website; it's useful if, like me, you have a team spread across several countries and some peers start complaining about latency to some cloud server.
Not made by me.
https://cloudpingtest.com/
https://redd.it/p2mhni
@r_devops
Java Creator James Gosling Interview
James Gosling, often referred to as "Dr. Java", is a Canadian computer scientist, best known as the father of the Java programming language. He did the original design of Java and implemented its original compiler and virtual machine. Our DevRel, Grigory Petrov, had the opportunity to interview James, and we have included the entire transcript below. Hope you enjoy it!
https://redd.it/p2cwrf
@r_devops
Is it true that the decision to choose a VDS/VPS hosting for a company is more influenced by engineers than managers?
I assume that behind any management decision on choosing a VDS/VPS hosting company, there is a consultation (past or present) with engineers.
View Poll
https://redd.it/p2jsc2
@r_devops
How would you answer this Problem Statement
Roughly a year ago, when I had gotten my cloud cert and was getting into DevOps, I had an interview with a company for a junior DevOps engineer position. For the interview I had to explain and answer the following problem statement:
· The company is creating its new applications with an event driven microservices pattern.
· The company has already selected AWS
· The company has already selected Jenkins
· The microservices uptime should be 24/7
· The microservices need to be highly resilient, an hour of downtime will cost the company a million dollars in revenue.
Create a design for continuous delivery for these microservices from the branching strategy, through deployment, and the overall stability and scalability in a production environment.
I don't remember the answer I gave, but I am curious: how would someone with a lot of experience in the industry answer this question?
https://redd.it/p2hti5
@r_devops
From AWS CloudFormation to Terraform: Migrating Apache Kafka
Every once in a while we find ourselves in a spot where it's no longer up to us — our infrastructure demands a change.
When it comes to Kafka, the high scale and the fact that it's the system bottleneck require us to be dynamic, responsive and in control, especially when running in production.
But how can we deploy frequent changes (security, hardware, monitoring, etc.) and still stay stable, version controlled and audited, with a growing demand for user independence?
Check out my new blog post to hear about how Riskified created its new Kafka infrastructure with Terraform and how we performed our cluster migration with zero downtime and zero data loss.
I invite you to read:
https://medium.com/riskified-technology/from-aws-cloudformation-to-terraform-migrating-apache-kafka-32bdabdbaa59
https://redd.it/p28lfz
@r_devops
production setup
How would you move your staging environment to production? What steps would you take, and how would you bring up the infrastructure around it?
https://redd.it/p2r4hh
@r_devops
How does your team handle interrupt work?
Currently, the on call individual handles all interrupt work in addition to being on-call for the services the team owns. Interrupt work encompasses all things unplanned (e.g. last minute 'urgent' requests or non-planned sprint work).
Does your team/organization have processes in place to handle or track this kind of unplanned work? If so, what kind of benefits did you gain?
https://redd.it/p256ud
@r_devops
Adding custom alerts in kubeprometheus helm chart
Hello all, I have a task of creating custom alerts for an application. Where should I pass the alerts config within my values.yaml file? I am using the kube-prometheus stack.
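For what it's worth, the kube-prometheus-stack Helm chart exposes a values key for exactly this. Here is a minimal sketch, assuming a recent chart version; the rule group name, alert name and PromQL expression are made-up placeholders, and the key name has varied across chart versions, so check the chart's default values first:

```yaml
# values.yaml (kube-prometheus-stack)
# Verify the key with: helm show values prometheus-community/kube-prometheus-stack
additionalPrometheusRulesMap:
  my-app-rules:
    groups:
      - name: my-app.alerts
        rules:
          - alert: MyAppHighErrorRate
            # Placeholder expression: fires when the 5xx rate exceeds 0.1 req/s for 10m
            expr: sum(rate(http_requests_total{app="my-app", status=~"5.."}[5m])) > 0.1
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "my-app is returning 5xx errors"
```

The chart turns this map into a PrometheusRule resource that the Prometheus Operator picks up automatically.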
https://redd.it/p2sryz
@r_devops
Who has the ability to connect 3 x 1 Gbit/s at home for less than $80/mo per Gbit/s?
Hi Guys!
For a project I'm working on, I need to figure out how many people can have 3 x 1 Gbit/s at home.
Some ISPs don't have dedicated infrastructure and rent it from someone else so different ISPs won't be able to give you more than one line.
Please choose the option...
View Poll
https://redd.it/p2da54
@r_devops
A writing competition, with a cash prize
For the month of August, Hashnode (https://hashnode.com) has a writing competition and one of the primary topics is AWS.
If you have written articles in the past that you put a lot of effort into, you can reuse them by republishing (use the canonical URL 😎), or write new ones!
The prize is $50, and there is a lot of room (not many people have joined so far, which makes it easier for new writers).
https://townhall.hashnode.com/special-august-giveaway-for-the-top-150-writers-of-javascript-aws-and-ruby-on-rails
https://redd.it/p2udld
@r_devops
How to reduce risk of deployments by using Autopilot on Datadog
In this blog, we explain how SREs can accurately verify the risk of their software in a CI/CD pipeline by integrating Autopilot with Datadog monitoring solutions.
OpsMx Autopilot is a machine learning (ML) and natural language processing tool that analyzes the data for you automatically so you can quickly and accurately decide whether an update should be moved forward in the pipeline.
Autopilot helps you to stay a step ahead of the competition by automating the decision-making process and assessing risk before deployment. Autopilot is a verification module, which is a part of the larger OpsMx platform for continuous delivery built on top of Spinnaker.
It follows an API-based architecture, which is extremely easy to extend and integrate with any DevOps toolchain in your organization.
https://redd.it/p2vf7i
@r_devops
Encrypting server-side emails using serverless workflows
G'day DevOps,
We wanted to share something we worked on as a PoC for our serverless workflow engine. The idea was not ours, but something that the group who ran the PoC dreamt up!
The problem they tried to solve was the fact that emails sent from internal systems typically only have an SMTP (or email) configuration with the generic username, password and transport security settings. But their requirement was that all of the attachments from the system sent to external emails (vendor support, managed service support or outsourced support) be compressed and encrypted.
Direktiv (open source edition) was configured with an SMTP listener that converts the email to a CloudEvent and deconstructs it into JSON objects. From that point forward the workflow does whatever they want (zip, encrypt, SMS the password to a number).
We thought it was pretty cool and applicable to a lot of users - let us know what you think!
We've written a blog article about it below:
https://blog.direktiv.io/direktiv-encrypting-server-side-email-attachments-in-the-real-world-d18a7bccb36c
We also released version 0.3.4, with a lot of features added:
https://github.com/vorteil/direktiv/releases/tag/v0.3.4
As always - we welcome feedback and questions!
https://redd.it/p2vpuz
@r_devops
How does Autopilot augment Datadog to reduce risk in a CI/CD pipeline?
This blog is a continuation of the Autopilot story, where we discuss how one can reduce the risk of releases by augmenting an existing monitoring platform like Datadog. Autopilot provides real-time risk assessment of releases before code is deployed into production and can also deny releases that fail a minimum threshold.
Once Autopilot is configured, it will automatically fetch the logs and metrics from applications and pipelines. During the execution of a pipeline, it can compare the risk scores of a new release against a baseline run to assert the quality of a release. Autopilot determines whether it can promote a new update fully to production or push it back to the developer for debugging. The log analysis and risk assessment are processed in a matter of seconds and provide automated decisions during the execution of a pipeline run.
The AI/ML-enabled intelligence layer in Autopilot uses supervised learning to improve its judgment abilities over time. SREs, as they evaluate the confidence score of any release, can change Autopilot’s assessment of the impact of errors and warnings. These inputs are like feedback to Autopilot, which helps it to develop a contextual understanding of specific applications and pipelines.
Read more: How does Autopilot augment Datadog to reduce risk in a CI/CD pipeline?
https://redd.it/p2verc
@r_devops
Advice on CircleCI config
Here is my CircleCI config. I don't think I am using it "correctly" even though the tests run. Any thoughts on how I can improve it?
The app runs on Heroku, but I don't necessarily want to update Heroku automatically, because of database schema changes.
---
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    machine:
      image: ubuntu-2004:202107-02
    steps:
      - checkout
      # Create network
      - run: docker network create test_network
      # Run postgres
      - run: docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=runner --name db --network test_network postgres
      # Build flask image
      - run: docker build -f flask/Dockerfile -t flask flask/
      # Run flask image
      - run: >
          docker run -d -e TEST_DATABASE_URL=postgresql://postgres:runner@db:5432/db_test
          -e DATABASE_URL=postgresql://postgres:postgres@db:5432/db_dev
          --name flask --network test_network flask python manage.py run
      # Run tests
      - run: docker exec flask pytest "app/tests" --cov="app" -p no:warnings
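One common alternative pattern, sketched here purely as an illustration (the image tags, requirements path and test invocation below are assumptions, not taken from the post), is CircleCI's docker executor with Postgres as a secondary service container instead of the machine executor:

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/python:3.9       # primary container: steps run here
      - image: cimg/postgres:13.4    # service container, reachable on localhost
        environment:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: runner
          POSTGRES_DB: db_test
    steps:
      - checkout
      - run: pip install -r flask/requirements.txt
      - run:
          name: Run tests
          command: pytest app/tests --cov=app -p no:warnings
          environment:
            TEST_DATABASE_URL: postgresql://postgres:runner@localhost:5432/db_test
workflows:
  main:
    jobs:
      - build
```

This avoids the manual docker network and container bookkeeping; the machine executor remains the right choice when the Flask image itself must be built and exercised.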
https://redd.it/p2y21u
@r_devops
New Book: CI/CD for Monorepos
We have a gift for you: a free, 50-page ebook on effective CI/CD for monorepos. The book is open source, and you can download it today.
https://redd.it/p2zq5c
@r_devops
Job title for someone who mainly works on CI/CD?
Interested to know what job titles people prefer for someone who primarily works on CI/CD in support of an Agile scrum, that isn't "DevOps Engineer" (e.g. DevOps is a culture, not a job title, etc).
The model we have right now is "DevOps Engineers" aligned to one or more Agile scrums. The DevOps Engineers are responsible for helping the scrum build, test and release software themselves using existing tools and APIs.
The DevOps Engineers don't touch the software code or support the apps in production (SREs do that), and they don't manage the cloud infrastructure (there is a separate "Platform Engineering" team for that).
Rather they help the app developers implement the right APIs in their apps to make sure things like logging, monitoring, unit testing, containerisation are all implemented and that configuration, secret storage and so on are all done properly.
"DevOps Engineer" seems to be okay, alongside SRE and Platform Engineer (for infrastructure), but in the spirit of the "DevOps as a culture, not a job title" I'm wondering if there is a better option for this type of CI/CD/Pipeline role?
https://redd.it/p2wy88
@r_devops
Domain knowledge for DevOps?
I am interviewing for a higher position (slightly inclined towards the business side) and the recruiter wants to know my domain knowledge. I was stumped because I have worked with banking clients, audit firms, healthcare and data analytics startups.
As a DevOps engineer, does the domain really matter, since it's basically the same flow (SCM, IaC, Config, CI/CD, Monitoring)? Although I know the product we are building, I don't really know the nuances of these different sectors.
What domain knowledge should I look into if I have worked primarily for Banking and Audit clients?
PS: One pointer could be the difference in security audits across these sectors. For example, healthcare has HIPAA.
https://redd.it/p31fol
@r_devops