I asked ChatGPT for a DevOps/Cloud project idea. Thoughts/feedback?
Uni undergrad working on a roadmap. [My Roadmap](https://www.reddit.com/r/ITCareerQuestions/comments/10vbvej/want_to_get_into_the_devopssrecloud_space_have/)
Ideally, this project would be undertaken after I've gotten my AWS SAA and Terraform Associate.
I've decided to undertake one larger project rather than a series of copy-paste tutorials that would waste time and give me a false sense of progress. Quality > quantity, basically. So I'm curious what those already in the field think about the project, and whether it would be a good idea to go ahead with it, as it is a huge time commitment. I realize I may be biting off more than I can chew, but I figure the best way to learn all of this and get into the field is to just dive in at the deep end. Any thoughts/feedback would be greatly appreciated.
*The prompt I gave ChatGPT was:*
Build a microservice in Python that runs in a serverless environment using containers, such as Lambda with a container image. Use something like DynamoDB with global tables, and use GitHub Actions to build and deploy the application with the Serverless Application Framework. The result should be a Python-based microservice with a global datastore, deployed to multiple regions using CI/CD: I should be able to commit a simple change to GitHub and have the service deployed globally. The solution should also be almost entirely usage-based, with no fixed-cost servers or VMs running. Finally, use Terraform, hosted in a GitHub repository.
*The response:*
**Note: the following output is the micro-service itself:**
This is a service that automates the deployment of infrastructure on AWS.
Description:
The microservice would allow users to define infrastructure as code in YAML or
JSON files, and then deploy the infrastructure on AWS automatically. It would
use the AWS SDK for Python (Boto3) to interact with AWS services such as EC2,
VPC, and RDS. Users would be able to specify the region where the
infrastructure should be deployed, the instance type and count, the size and
type of storage, and other configuration options.
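For illustration, such a user-provided definition might look like the following sketch (the schema, field names, and values are entirely invented, not part of the original description):

```yaml
# Hypothetical infrastructure definition a user might submit.
region: ap-southeast-2
instances:
  - ami: ami-0abcdef1234567890   # which image to launch
    type: t3.micro               # instance type
    count: 2                     # how many instances
storage:
  type: gp3                      # EBS volume type
  size_gb: 100                   # volume size
```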
1. Set up a virtual environment for Python and install the necessary
dependencies, such as Boto3, PyYAML or JSON, and any other necessary
libraries.
2. Create a basic Flask app with routes to handle HTTP requests from users.
For example, you could define a route that accepts a YAML or JSON file
containing the infrastructure definition, and another route to trigger the
deployment of the infrastructure.
3. Use Boto3 to define the AWS services and resources that the microservice
will interact with. This could include creating a VPC, an EC2 instance, an RDS
instance, and any other necessary resources.
4. Use PyYAML or JSON to parse the user-provided infrastructure definition and
convert it to Boto3 API calls to create the infrastructure.
5. Define a deployment strategy that specifies how the microservice will
deploy the infrastructure. For example, you could define a simple strategy
that deploys the infrastructure to a single region, or a more complex strategy
that deploys the infrastructure to multiple regions using AWS Global
Accelerator or Route 53.
6. Implement error handling and logging to ensure that the microservice is
robust and reliable.
7. Test the microservice locally to ensure it works as expected. You could use
a tool like Docker to simulate the AWS environment and test the microservice
in a container.
8. Deploy the microservice on AWS Lambda or ECS using the AWS Serverless
Application Model (SAM) or ECS task definitions.
9. Test the microservice on AWS to ensure it works as expected. You could use
AWS CloudFormation or Terraform to provision the infrastructure and test the
microservice in a real environment.
10. Document the microservice and its deployment process, including any
configuration options, best practices, and troubleshooting steps.
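Step 4 above (translating a parsed definition into Boto3 calls) can be sketched as a small pure mapping plus a deploy loop. This is a minimal sketch with an invented schema (`"instances"`, `"ami"`, `"type"`, `"count"`); the EC2 client is passed in (e.g. `boto3.client("ec2")`) so the mapping logic stays testable on its own:

```python
# Sketch of step 4, assuming a JSON definition with an invented schema:
# {"instances": [{"ami": ..., "type": ..., "count": ...}]}
import json

def to_run_args(inst):
    """Translate one instance entry into EC2 run_instances kwargs."""
    return {
        "ImageId": inst["ami"],
        "InstanceType": inst.get("type", "t3.micro"),
        "MinCount": inst.get("count", 1),
        "MaxCount": inst.get("count", 1),
    }

def deploy(ec2, raw_definition):
    """Create every instance described in the definition; return their ids."""
    spec = json.loads(raw_definition)
    ids = []
    for inst in spec.get("instances", []):
        resp = ec2.run_instances(**to_run_args(inst))
        ids.extend(i["InstanceId"] for i in resp["Instances"])
    return ids
```

Keeping `to_run_args` separate from the client call means the YAML/JSON-to-API translation can be unit-tested without touching AWS at all.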
Overall, this microservice would automate the deployment of infrastructure on
AWS and provide a streamlined experience for users. It would leverage Python,
Boto3, Flask, and other technologies to build a robust and reliable service
that can be deployed globally in a serverless environment.
**Note: the following output is the project utilizing the micro-service above:**
This is a Serverless Microservice with a Global Datastore using Python,
DynamoDB, and GitHub Actions.
Description:
The project involves building a microservice in Python that runs in a
serverless environment using containers. The microservice should use a global
datastore, such as DynamoDB, and be deployed in multiple regions using CI/CD.
The deployment process should be automated using GitHub Actions, and the
solution should be usage-based, without any fixed-cost servers or VMs running.
Finally, the project should use Terraform, with everything hosted in a GitHub
repository.
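A minimal sketch of the GitHub Actions pipeline this describes, deploying a Serverless Framework app to each region on every push to main. The action versions, region list, and secret names here are illustrative assumptions, not details from the original post:

```yaml
# Hypothetical .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        region: [us-east-1, eu-west-1, ap-southeast-2]  # one deploy per region
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install -g serverless
      - run: serverless deploy --region ${{ matrix.region }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

The matrix strategy is one simple way to get the "commit once, deploy globally" behaviour: each push fans out into one deploy job per region.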
1. Set up a GitHub repository to store the code and configuration files for
the microservice. This repository will serve as a central location for storing
and managing the codebase.
2. Create a virtual environment for Python and install the necessary
dependencies, such as the AWS SDK for Python (Boto3), alongside the Serverless
Framework (referred to below as SAF). A virtual environment is a sandboxed
environment for your Python code to run in, which helps ensure that your
dependencies are isolated from the system Python installation.
3. Use SAF to create a new Python-based serverless application. This will
create a basic serverless.yml file, which defines the configuration of the
application. The serverless.yml file will contain the configuration for your
Lambda functions, API Gateway endpoints, and other AWS resources that your
application uses.
4. Use SAF to create a DynamoDB table with global tables enabled. This will
create a table that can replicate data across multiple AWS regions, ensuring
high availability and low latency. The global tables feature allows you to
replicate your DynamoDB data across multiple AWS regions, improving the
performance and reliability of your application.
5. Write the code for the microservice in Python. This could include functions
for handling API requests, interacting with the DynamoDB table, and any other
necessary functionality. You can use the Boto3 library to interact with AWS
services such as DynamoDB, S3, and Lambda.
6. Use GitHub Actions to set up a CI/CD pipeline that builds and deploys the
microservice whenever changes are committed to the GitHub repository. This
could involve using Docker to build a container image of the microservice and
then deploying it to AWS Lambda using SAF. GitHub Actions provides a way to
automate your development workflow, including building and deploying your code
to AWS.
7. Test the microservice locally and make any necessary changes to ensure it
works as expected. You can use the serverless invoke local command to test
your Lambda functions locally.
8. Use Terraform to automate the deployment of the microservice and its
associated resources. This could include creating the DynamoDB table, setting
up the AWS Lambda function, and configuring any necessary IAM roles or
policies. Terraform is a tool for creating and managing infrastructure as
code, which allows you to define your AWS resources as code.
9. Use Terraform to set up a GitHub repository for the microservice and its
associated resources. This could include creating a GitHub repository, setting
up webhooks to trigger the CI/CD pipeline, and configuring any necessary
secrets or environment variables. Terraform can be used to automate the
creation and configuration of your GitHub repository.
10. Test the entire solution end-to-end, including deploying the microservice
globally, making changes to the code, and verifying that the changes are
automatically deployed to all regions. You can use AWS tools such as
CloudFormation and the AWS Management Console to verify that your application
is deployed correctly and working as expected.
11. Set up AWS CloudWatch: Create a new CloudWatch log group for your
microservice, and configure the logging output of your microservice to send
logs to this group. You can use the Python logging module to output log
messages to CloudWatch, or use a logging library such as structlog to format
your logs in a way that is more easily searchable in CloudWatch.
12. Define metrics: Decide on the key performance indicators (KPIs) that you
want to track for your microservice, such as the number of requests, the
response time, and the error rate. You can use CloudWatch metrics to track
these KPIs by emitting custom metrics from your microservice.
13. Set up CloudWatch alarms: Create CloudWatch alarms to monitor your KPIs
and trigger notifications when thresholds are exceeded. For example, you might
create an alarm to trigger a notification when the error rate exceeds a
certain threshold, or when the response time exceeds a certain threshold.
14. Configure notifications: Configure your CloudWatch alarms to publish to an
Amazon SNS topic, which can deliver the notifications via email, SMS, or other
channels.
15. Test and refine: Test your monitoring and alerting setup to make sure that
notifications are being sent when expected, and refine your alert thresholds
as needed to avoid false positives or false negatives.
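Steps 12–14 above can be sketched as plain dict builders plus a Boto3 CloudWatch client passed in from outside. The namespace `MyService`, the `ErrorRate` metric, and the thresholds are all illustrative, not from the original post:

```python
# Sketch of steps 12-14: emit a custom metric and alarm on it.
# `cloudwatch` is assumed to be e.g. boto3.client("cloudwatch").

def error_rate_datum(value):
    """One custom metric data point for put_metric_data."""
    return {"MetricName": "ErrorRate", "Value": value, "Unit": "Percent"}

def alarm_params(metric, threshold, topic_arn):
    """Kwargs for put_metric_alarm: fire when `metric` averages above
    `threshold` for two 5-minute periods, notifying an SNS topic."""
    return {
        "AlarmName": f"{metric}-high",
        "Namespace": "MyService",
        "MetricName": metric,
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

def set_up_monitoring(cloudwatch, topic_arn):
    # Step 12: publish an initial data point so the metric exists.
    cloudwatch.put_metric_data(Namespace="MyService",
                               MetricData=[error_rate_datum(0.0)])
    # Steps 13-14: alarm when ErrorRate exceeds 5% and notify via SNS.
    cloudwatch.put_metric_alarm(**alarm_params("ErrorRate", 5.0, topic_arn))
```

Building the alarm parameters in a pure function makes the thresholds easy to test and tune without calling AWS.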
https://redd.it/12mh7jh
@r_devops
Advice for pay review
Hi all, this will be my first annual pay review as a DevOps engineer. I have job-hopped every 1.5–2 years, so this will be my first pay review at a single company. For DevOps, how do you usually prepare? Do you go in with a list of everything you have improved and a dollar figure for how much you've saved the company? We are fully remote, by the way, if that changes anything. Not sure what to expect, thanks! I'm just looking to demonstrate my ability and show that I deserve a pay increase.
https://redd.it/12mjnro
@r_devops
Role of AI specifically in the Devops/DX space
Hi,
I would be really interested in kicking off a discussion about the current headspace of folks working in DevOps/DX/platform-related roles around the upcoming role of AI in our workdays, stacks, and developer tooling.
I guess, as ever, it would pay to be ahead of the curve if possible?
Let's discuss.
https://redd.it/12m4ui4
@r_devops
How do you show that you know what you're doing in your CV?
Hey there,
I'm a Sr. DevOps Engineer with 4 YOE in total (professionally; I'll get back to this later). Currently, in my team, I'm literally the go-to guy for any kind of problem that requires good systems understanding, whether it's related to MongoDB, k8s, CI/CD, or Docker. This is mostly thanks to my problem-solving skills and my past experience as an amateur server/software tinkerer. I know that I have a long road ahead; I'm no "rockstar" or "best in the business" by any stretch of the imagination. But I know that when I talk about a technical solution for a problem (literally any kind of problem that revolves around a system, whether databases/clusters or software), I can see that other people (junior/senior/lead alike) value my opinion greatly and act accordingly. Another thing that reassures me is that my company is currently paying me a shit ton of money compared to my experience.
Anyway, enough humblebragging. Here's my problem
When I look at my colleagues' CVs or LinkedIn profiles I keep thinking, "yeah, this person looks obviously more experienced and therefore more 'hireable'." But I know that I'm technically more capable than them (I was going to stop bragging, but I had to write this down, sorry lol); I solve problems with the software they are specialized in. I literally formed a k8s cluster on premises without prior experience, and it's still solid even though it currently runs some not-so-production-friendly 3rd-party software.
I have 0 certificates, 24 yo, dropped out of college due to lack of time because of my current work.
I'm thinking about changing jobs because the company is starting to show some toxic attributes. Although I'm not directly affected by it, it doesn't really sit right with me. I'm looking at job options, but it seems like my CV is not enough for anybody in this sector because of my lack of experience on paper. What advice would you give someone in this position?
https://redd.it/12j0geh
@r_devops
Do you crave recognition in your job? Have you noticed that your colleagues get recognized for their work but not you?
How do you market yourself? Should you find a new job instead?
https://redd.it/12moplz
@r_devops
The Dark Side of Certificates
A slightly dramatic title, but not misleading. Discover how the SAN field in TLS/SSL certificates often exposes too much information. The article also offers suggestions on how to mitigate the risk of expanding potential attack surfaces.
Results (all discovered hostnames from Fortune 500 base websites): https://gist.github.com/alen-z/90e805cb6309f67c3d15809a3b3e2f71
Full article: https://relaymonkey.substack.com/p/the-dark-side-of-certificates
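The kind of SAN enumeration the article describes can be reproduced with the Python standard library alone; this is a minimal sketch, with the parsing helper split out so it works on any `getpeercert()`-style dict:

```python
# Connect to a host, grab its leaf certificate, and list every DNS name
# in the subjectAltName extension.
import socket
import ssl

def san_dns_names(cert):
    """DNS entries from a cert dict as returned by SSLSocket.getpeercert()."""
    return [value for kind, value in cert.get("subjectAltName", ())
            if kind == "DNS"]

def fetch_cert(host, port=443, timeout=5):
    """Fetch and decode a server's certificate over a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Usage: san_dns_names(fetch_cert("example.com"))
```

Running this against your own domains is a quick way to audit how many internal hostnames a single certificate gives away.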
https://redd.it/12mytqx
@r_devops
New open-source programming language for DevOps engineers by the creator of the CDK
Hey everyone! A group of dedicated contributors and I are working on Winglang, a new programming language that combines application logic and cloud provisioning.
Wing programs can be executed locally using a fully-functional simulator, or deployed to any cloud provider.
# Key Features
Cloud services and distributed computing as first-class language primitives.
Compiles to JavaScript and Terraform (with support for more provisioning engines on the way).
Same code compiles to different clouds (AWS, GCP, Azure).
Compiler generates least-privilege IAM policies (and other cloud mechanics) based on intent.
[Compiler plugins](https://docs.winglang.io/blog/2023/02/17/plugins) for customizing the generated Terraform.
Instant local simulation for visualizing, debugging, and testing apps without creating cloud mocks.
JavaScript interoperability.
Supports any Terraform resource.
Check out this short demo video to see Winglang and its local simulation tools in action, as well as the compilation of the same code to different clouds and provisioning engines.
Please note that Winglang is currently in Alpha, meaning it's not yet ready for real-world applications, but it's mature enough to explore and create some impressive demos.
# My Motivation
As a developer, I've always felt that creating cloud applications should be easier. The process requires mastering various layers of the cloud stack, IAM roles, networking, and numerous tools, along with finding creative ways to test and debug code. In addition, long deployment times hinder iteration cycles and take me out of my creative flow.
Other development disciplines have benefited from smart compilers that handle the lower levels of the stack and reduce developers' cognitive load. With Wing, we aim to bring the same benefits to cloud development with a language that treats the entire cloud as its target computer.
# How to get Involved
We'd love to hear your feedback on any aspect of the project, and we're looking for developers to join our community to help shape Winglang's future.
If you find Winglang interesting, please give us a ⭐️ on GitHub to bring the project to the attention of more developers 🙏. Your support is greatly appreciated!
# More Info
You can find more info, including how to get started with Wing and why we're building it as a language, in our GitHub: https://github.com/winglang/wing.
https://redd.it/12n4sna
@r_devops
Looking for a monitoring solution for on-prem transcode servers
Basically, we have about 50 super beefy transcode/storage servers we want to start analyzing for performance metrics. We run video workflows on them and are trying to see if we are under utilizing them. So we want to track things like RAM/CPU usage over time, so we can correlate it with our workflow metrics to see if we can basically increase the workload these servers are taking on.
The workload is ad hoc, not fully automated, which is why it's hard to tell if we're really optimizing these servers. We need data.
We've started developing a somewhat basic version of this ourselves, but have been talking about stuff like New Relic as an alternative. The problem is that the cost might be prohibitive.
So are there any good OSS options we can use to roll our own solution without having to write something entirely custom? Currently we're just writing a service in Go that would live on each server, and we'd centralize the data in our own UI.
Again, we really only care about things like resource usage, specifically how the servers are using resources over time, we already have solutions for logging and aren't really that interested in network stats.
https://redd.it/12n7gvd
@r_devops
Distributing Work
Hey all,
Bit of a rant/curious if this is happening to others.
So I have been in tech for a little over 12 years and have mainly been on the ops side of things, with a transition to DevOps about 5 years ago.
Since that time I have been at multiple companies, but since Covid the way work is being assigned seems to be changing. No longer is it "who wants this project" or "you are gonna work on this project"; now it's "find something to work on or you're not performing how we expect". I am not a high-level engineer (3/5) and am fine not moving up to those roles, as I just like focusing on work and not having to plan out x and convince people why we should do x. When I ask my management why they can't just assign me work, they fall back on "that's not how agile works" and say I need to come up with my own initiatives. We aren't talking about startup companies here that really have no plan and need the innovation. Is this something others are noticing as well?
To me, if I need to come up with my own work, then why don't I just go work for myself? I have even asked what the bar for work is and was told they couldn't answer, but that I need to do more. This isn't to say I haven't come up with things that I believed were priorities and completed them, but it's very frustrating when you do something like that and then no one will use it or care, and then someone else comes along with the same idea, repeats it, only to have it go the same way. Are managers just not expected to understand what the environment needs? Are the higher-level engineers no longer coming up with initiatives for us lower levels to carry out? I'm fine with "we want to implement x, go figure it out and do it". But "we don't know what to tell you to work on" is driving me up a wall.
Curious to hear others thoughts on how they are issued work and if this has been an issue for them. It’s happened at 2 companies so far so not sure if it’s just me being a dinosaur now or this is truly the way it works and I need to adapt.
Thanks!
https://redd.it/12n7og4
@r_devops
Does mid / regular devops role exist? How would you define it?
I have 3 years' experience with Jenkins pipelines (Docker, mobile apps, desktop apps production), about 2 years with Docker + Compose, 0.5 years with k8s, Terraform, GitLab, and about the same with AWS (EC2, VPC, RDS, some EKS and ECR, but no experience with ECS, Fargate, or AWS pipelines). In my private project I use Terraform, EKS, RDS, Kustomize, ArgoCD, GitLab pipelines and its registry. I use bash mainly and switch to Python if bash becomes too tedious for a particular task/problem. So far I have considered myself a mid devops - a person who already has experience with small and medium projects but has neither very deep knowledge of some things nor wide experience with big setups. And yet at interviews they ask me questions like:
- "what is process niceness in linux" - I have 6+ years' experience with Linux systems, I compiled xWRT systems with custom packages, created .deb packages, but I've never once used anything called "process niceness"
- "what is extreme programming" - what??
- "when does dns use TCP?" - why should i even bother about that?
- "what is 12 factor application" - ???
- "How would you configure access for users in AWS (50-150 users, different accounts, separate dev/prod environments)?" - ...
etc.
And my favourite one was from the last interview, when the guy asked me about the difference between FTP, FTPS and SFTP. I knew that the first doesn't use encrypted traffic, and the other two do, and that one of them uses SSL, but I didn't remember which. I tried to explain that the last time I used the FTP protocol was like 4-5 years ago; nowadays I mostly use rsync or scp over SSH. But he didn't seem to care.
I mean, should I really know these things as a mid devops? Because I feel like they expect a senior here. After a few interviews I feel like I know sh^t and should look for a job in a McDonald's kitchen.
So what should a mid / regular devops know? I'm open to learning.
https://redd.it/12n8r8t
@r_devops
Learn 9 CI/CD tools for FREE
Hi everyone! CloudClan, a community that I've co-founded, is hosting a CI/CD Workshop Marathon where in-demand CI/CD tools like GitLab CI, CircleCI, Jenkins, Tekton, ArgoCD, Azure DevOps, GitHub Actions, Concourse, and Cloud Build will be taught every Sunday starting April 30th.
You can find more details about the workshop at www.cloudclan.co/workshop
https://redd.it/12ncdss
@r_devops
Is AutoGPT gonna take over DevOps?
I'm a student. I haven't really worked for a firm as of yet. I got this ad in the emails today for a prompt course on AutoGPT :
"I just tried AutoGPT on my computer, showed it to my wife, and we stood in awe.
My Windows machine started to work autonomously on a task to research and execute a business idea. It opened my Chrome browser, visited URLs for research, downloaded information and stored it on my file system, created Python code files on my computer, debugged, improved, and executed them.
In a fully autonomous way!
It was like watching an intelligent being working on a task - AutoGPT is self-aware and uses advanced project management techniques such as PLAN, ACT, ASSESS, and ADAPT. Occasionally, it asked me if I wanted to proceed, but otherwise, it was acting autonomously.
This was nothing like prompting. This beast prompted itself, iteratively and very smartly, to solve the high-level task I had given it. Before it worked on a macro or micro problem, it researched the best way to solve it online.
As my wife was scared (rightly so) that the AI may do something shady or dangerous on my computer, I switched it off. But I'm sure I'll spin up a small Amazon machine soon and put it to work on one of my side projects.
If you want to fully learn and master the art of effective prompting and prompt engineering, check out our full course on the academy: <AcademyName> "
I'm really scared of AI.
https://redd.it/12n6r82
@r_devops
CI/CD good practices: Should the CI/CD pipeline refactor the code?
I am working on some personal projects on GitHub, and I thought setting up a super simple CI to ensure proper code formatting would be fun and useful to try, to learn, and to get some basics of DevOps.
I am setting up the pipeline for a Python project on GitHub using GitHub Actions, and I want to ensure proper code formatting with the package black. Should the CI pipeline modify the code itself if the check fails, or only verify that the code complies with the formatting standard?
What is the standard?
Should a CI/CD pipeline be able to refactor/change the code or just test it?
What are the best practices for CI/CD pipelines in this situation?
Is there any source where I can find good practices for CI/CD pipelines?
Thanks in advance :D
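For context, the convention being asked about here is usually a check-only step: CI fails the build on formatting violations instead of rewriting code. A minimal, hypothetical GitHub Actions workflow sketch for that (assuming black is installable from PyPI in the runner) might look like:

```yaml
name: format-check
on: [push, pull_request]
jobs:
  black:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - run: pip install black
      # --check reports violations and exits non-zero without modifying any files
      - run: black --check .
```

The `--check` flag is what keeps the pipeline read-only; reformatting stays a local developer action (or a pre-commit hook) rather than something the pipeline commits on your behalf.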
https://redd.it/12m0e88
@r_devops
Portainer (swarm) nginx reverse proxy
Hello,
I am desperately trying to use nginx as a reverse proxy for my Portainer installed on another machine.
Here is the configuration of nginx inside `server {...}` in the default configuration file:

```nginx
location /portainer/api/websocket {
    rewrite /portainer/api/websocket/(.*) /api/websocket/$1 break;
    proxy_pass https://localhost:5050;
    proxy_set_header upgrade $http_upgrade;
    proxy_set_header connection 'upgrade';
    proxy_http_version 1.1;
}
location /portainer/api {
    rewrite /portainer/api/(.*) /api/$1 break;
    proxy_pass https://localhost:5050;
}
location /portainer {
    rewrite /portainer/(.*) /$1 break;
    proxy_pass https://localhost:5050;
}
```

Here is my yaml file for my Portainer:

```yaml
version: '3.2'
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    networks:
      - proxy
    ports:
      - 9600:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./vhost.d:/etc/nginx/vhost.d:ro
  agent:
    image: portainer/agent:latest
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
      AGENT_PORT: 9001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer-ce:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    volumes:
      - portainer_data:/data
    environment:
      - VIRTUAL_HOST=https://portainer.yourdomain.com
      - VIRTUAL_PORT=9000
    ports:
      - 5050:9443
      - 8000:8000
    networks:
      - proxy
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.role == manager]
networks:
  proxy:
    driver: overlay
    attachable: true
  agent_network:
    driver: overlay
    attachable: true
volumes:
  portainer_data:
```

If in my browser I navigate to https://<my server>:5050 it works well. But if I navigate to https://<my server>/portainer I get a page not found. I have tried many configurations and alternatives, but I have been unable to get it working despite the time I have spent on this problem.
Thank you very much in advance for any help
https://redd.it/12nkcf8
@r_devops
I made a database with 500+ places to promote your products
Just finished compiling a list of places where you can promote your product.
What is included in the list:
1500+ Startup directories
110+ Facebook groups tagged by sector
170+ guest post opportunities
50+ Slack/Discord list
100+ Tech Journalists Email
100+ Online communities
Wanted to share the list with all! To get a free download just enter $0, or you can buy me a coffee by entering more! Cheers!
Link: https://gumroad.com/l/jbpoy/
Would be great if you left a star rating for the product in exchange on Gumroad ⭐️ 🙏
Upvote the post for all to see! ⬆️
https://redd.it/12o2grd
@r_devops
Tired of people trying to make tools IaaS agnostic
I cannot count the number of new devops tools which have attempted to provide "IaaS agnosticism" beyond what kubernetes and terraform can provide.
IaaS providers provide different stuff. There's no common IaaS spec. Networking, IAM, Compute, GPU support, chipset, etc, all work differently.
And yet I think I've seen more product demos for devops tools claiming to have abstracted away underlying IaaS platforms than tools that do anything else. The deal is you just have to use their platform or tool exclusively instead. What platforms do they actually support at the moment? Well, just AWS of course, with plans to support Azure and GCP by Q3 of 2024.
Then you try their product out. First, you select an environment, and have to select the cloud provider. You have to select the exact instance type you want from that cloud provider. There is no ability to migrate workloads between cloud providers with zero downtime.
If the tool was actually cloud agnostic, we would have to give up all the extra stuff that drives us to pick one provider over the other. I can't use KVS since GCP and Azure don't have a compatible service, as a single example.
To me, it feels like the equivalent of trying to make a programming language which is language independent. Where only features supported in all languages are present in this "language independent" programming language. It totally misses the point of why we pick specific programming languages for certain tasks in the first place.
Whenever I see this on a list of 5 or 6 features a new devops tool offers I just shake my head and move on.
Edit: I said "tool cloud agnostic" but I meant "tools attempting to abstract away IaaS providers" if it wasn't clear. Obviously devops tools that do something specific are useful (estimating costs as an example) and can also be cloud agnostic
https://redd.it/12o96gi
@r_devops
The Role of Automation in DevOps: CI/CD Pipelines
Series index
Hey r/devops community! In this installment of the Comprehensive DevOps Learning Series, we will explore the vital role automation plays in DevOps, with a particular emphasis on Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
By embracing automation, teams can streamline their development and deployment processes, enhancing efficiency and ensuring a consistent, high-quality software product. Please be aware that there are many different tools and software ecosystems that solve similar problems. There are only a few mentioned here, and further discussion in the comments is encouraged.
## Why Automation Matters in DevOps
Automation is a cornerstone of the DevOps philosophy, as it helps to eliminate manual, error-prone tasks and accelerates software delivery. It also fosters collaboration by allowing development and operations teams to work together more seamlessly throughout the entire software development lifecycle. By automating key processes, organizations can achieve the following benefits:
Improved productivity: Automation frees up valuable time and resources, enabling teams to focus on higher-value tasks, such as feature development or process improvement.
Faster time to market: Automated processes expedite the delivery of new features and bug fixes, ensuring faster software releases and a more rapid response to customer needs.
Enhanced consistency and reliability: Automated tasks are less prone to human error, resulting in more consistent and reliable outputs.
## Continuous Integration and Continuous Deployment
CI/CD pipelines are central to the automation process within DevOps. Let's take a closer look at each of these concepts:
### Continuous Integration (CI)
Continuous Integration is the practice of integrating code changes from multiple developers into a shared repository frequently, often multiple times a day. CI enables teams to identify and resolve integration issues early in the development cycle, reducing the risk of conflicts and costly fixes later on.
Typical CI pipeline components include:
Version control system (e.g., Git, SVN)
Build system (e.g., Jenkins, Bamboo, TeamCity)
Automated testing tools (e.g., JUnit, Selenium)
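As an illustration only (the tool names above vary by team), a minimal pipeline wiring these three components together might look like this hypothetical GitHub Actions workflow, where the `make build` and `make test` targets are placeholders for whatever build and test commands a project actually uses:

```yaml
name: ci
on: [push]                          # run on every integration into the shared repository
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3   # version control: fetch the commit under test
      - name: Build
        run: make build             # build system step (placeholder target)
      - name: Automated tests
        run: make test              # automated testing step (placeholder target)
```

The key property is that every push triggers the same build and test sequence, so integration problems surface within minutes of the commit rather than at release time.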
### Continuous Deployment (CD)
Continuous Deployment is the process of automatically deploying code changes to production environments after they have passed through the CI pipeline and met established quality criteria. CD ensures rapid and reliable software releases, enabling teams to deliver new features and improvements with minimal delay.
CD pipelines often involve:
Configuration management tools (e.g., Ansible, Puppet, Chef)
Infrastructure as Code (IaC) platforms (e.g., Terraform, AWS CloudFormation)
Container orchestration tools (e.g., Kubernetes, Docker Swarm)
## Conclusion
Automation, particularly in the form of CI/CD pipelines, is a critical element of DevOps practices, enabling teams to work more efficiently, minimize errors, and accelerate software delivery. By embracing automation and integrating CI/CD pipelines into their workflows, organizations can reap the benefits of streamlined processes, improved collaboration, and consistent, high-quality software products. There are many valuable guides on specific tools, deeper dives into why these principles work, and published books describing the DevOps world. Share your favorite, and your thoughts, experiences, and questions in the comments below!
Sources/additional reading:
Duvall, P. M., Matyas, S. M., & Glover, A. (2007). Continuous Integration: Improving Software Quality and Reducing Risk.
Fowler, M. (2006). Continuous Integration.
Humble, J., & Farley, D. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment
Fearless Distroless
With the rise of Docker came a new focus for engineers: optimizing the build to reach the smallest image size possible.
A couple of options are available.
Multi-stage builds: A Dockerfile can consist of multiple steps, each having a different Docker base image. Each step can copy files from any of the previous build steps. Only the last one will receive a tag; the others will be left untagged.
This approach separates one or more build steps and a run step. On the JVM, it means that the first step includes compiling and packaging, based on a JDK, and the second step comprises running, based on a JRE.
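As a sketch of that JDK-build / JRE-run split (image tags and the `app.jar` artifact name are illustrative assumptions, not from the post):

```dockerfile
# Build stage: full JDK for compiling and packaging
FROM eclipse-temurin:17-jdk AS build
WORKDIR /app
COPY . .
RUN ./mvnw package -DskipTests    # assumes the repo ships a Maven wrapper

# Run stage: JRE-only base image; only the packaged jar is copied over,
# so the JDK, build tools, and sources never reach the final image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Only the final `FROM` stage determines the resulting image's size, which is why the choice of run-stage base image (the second point below) matters so much.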
Choosing the smallest base image size: The smaller the base image, the smaller the resulting image.
In this post, I’m going to focus on the second point.
Read further
https://redd.it/12ooagr
@r_devops