Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Troubleshooting slow connections

Hey all,

I'm on a team without a ton of experience in how internet traffic is shaped. I've been monitoring some synthetic traffic to our API gateway, hitting an endpoint that just returns a 200. I chose this because it should have the least variation from anything inside our application. We're seeing pretty high latency from it, around 100ms, which seems crazy for just a 200.

How do we go about troubleshooting a problem like this?
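A first step that often narrows this kind of thing down: split a single request into phases with curl's timing variables, so you can see whether DNS, TCP/TLS setup, or server time dominates. A sketch, with a placeholder URL:

```shell
# Break one request into phases; a large gap between adjacent numbers
# points at DNS, TCP connect, TLS handshake, or server time respectively.
# The URL is a placeholder; substitute your gateway's health endpoint.
curl -s -o /dev/null -w \
'dns lookup:  %{time_namelookup}s
tcp connect: %{time_connect}s
tls done:    %{time_appconnect}s
first byte:  %{time_starttransfer}s
total:       %{time_total}s
' https://api.example.com/healthz
```

Run it from the same network as the synthetic probe; if `first byte` minus `tls done` is small but `tcp connect` is large, the 100ms is network round-trip, not your application.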

https://redd.it/12lef97
@r_devops
Permissions on Argo Workflows

I’ve loved Argo CD and I’m experimenting with Argo Workflows. My specific use case: I want to enable developers to run specific jobs in production, where they typically don’t have permissions. The problem, as I understand it, is that in order to create jobs you need permission to create the Workflow resource, but there are no controls over what goes into that workflow. I’d like to open it up to run one or two specific jobs, not arbitrary ones. Is there a way to lock this down, or am I misunderstanding something? I’m tempted to write a tool that sits in front of the Argo Workflows API with the permission restrictions I want, but that sounds like a pain.
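One pattern worth noting here, offered only as a sketch: keep the approved jobs in WorkflowTemplates, grant developers RBAC to create Workflows, and use an admission policy (Kyverno, OPA Gatekeeper, or similar) to reject any Workflow that doesn't reference an approved template via `workflowTemplateRef`, since RBAC alone can't inspect the spec. The RBAC half might look like this (all names are placeholders):

```yaml
# Placeholder names throughout. This grants just enough to submit and
# watch Workflows in prod; it does NOT constrain the workflow spec, so
# it needs to be paired with an admission policy that requires a
# workflowTemplateRef to an approved WorkflowTemplate.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-submitter
  namespace: prod
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows"]
    verbs: ["create", "get", "list", "watch"]
  - apiGroups: ["argoproj.io"]
    resources: ["workflowtemplates"]
    verbs: ["get", "list"]
```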

https://redd.it/12le8h9
@r_devops
Use GitOps for Efficient DevOps

Hey everyone! We're a startup team focused on developing a cloud-native time series database. With no historical operational burdens, we eagerly adopted GitOps from day one to enhance our DevOps efficiency. We're excited to share our experiences and invite open discussions on this topic.

In this article, we'll explore how Greptime utilizes tools like Terraform, Kubernetes, and ArgoCD to establish an effective DevOps workflow. We'll dive into the connection between IaC and GitOps, explain the key components and benefits of GitOps, and discuss the critical technical decisions made throughout our implementation journey.

Full article here: https://greptime.com/blogs/2023-04-13-greptime-gitops

https://redd.it/12ljmaf
@r_devops
DevOps interview: Picking random facts from AWS docs and using them as interview questions.

I think this is a bad technique. What is your opinion?

https://redd.it/12lkgwp
@r_devops
How do you solve multi-dev environment problem?

I have 4 environments: dev, test, staging and prod.

Everything is working well except one thing: developers waiting for the dev environment while someone else is using it.

Tech stack for reference:
TeamCity for the CI
EKS for the backend
CloudFront and S3 for the frontend

I’m willing to change anything of the above if required.

Please comment if you’re solving this issue. It’d be very useful.

https://redd.it/12lnuit
@r_devops
Best authentication mechanisms for different microservices communicating with each other intra-VPC and across VPCs.

We have a variety of different microservices, some within the same VPCs, and some across different VPCs.

Our go-to authentication mechanism has just been basic auth, and then making HTTPS calls across the public internet, but given that these are all our internal resources, we figure there should be a way of doing this that's rather more secure than that.

We've already decided that VPC peering will be used here to facilitate inter-VPC communication.

Since we're a bit new to having services in separate VPCs, we are wondering a couple of things:

1. Once VPC peering is set up, what else needs to change to ensure that communication between the machines happens through the peering connection rather than over the public internet?
2. Our current authentication method is Basic Auth. For REST API communication between our various services, what is the most industry-standard way to handle authentication?
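On question 1: the peering connection by itself carries no traffic; each VPC's route tables need a route for the other VPC's CIDR targeting the peering connection, and security groups must allow the peer CIDRs. A rough Terraform sketch, with placeholder names and CIDRs:

```hcl
# Illustrative only; resource names and CIDRs are placeholders.
resource "aws_vpc_peering_connection" "a_to_b" {
  vpc_id      = aws_vpc.a.id
  peer_vpc_id = aws_vpc.b.id
  auto_accept = true # same account/region; otherwise accept on the peer side
}

# Without routes like these on BOTH sides, traffic never uses the peering.
resource "aws_route" "a_to_b" {
  route_table_id            = aws_vpc.a.main_route_table_id
  destination_cidr_block    = "10.1.0.0/16" # VPC B's CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}

resource "aws_route" "b_to_a" {
  route_table_id            = aws_vpc.b.main_route_table_id
  destination_cidr_block    = "10.0.0.0/16" # VPC A's CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.a_to_b.id
}
```

For question 2, service-to-service auth over private networking is commonly upgraded from basic auth to mTLS or short-lived signed tokens, but that is a separate decision from the routing.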

https://redd.it/12lrraw
@r_devops
Pulumi or terraform?

In a bind. I have two offers. One is moving to Pulumi (C#) and the other is going to be classic Terraform etc. I'm familiar with Terraform, and I have no issue learning Pulumi, but I worry it's a niche tool and may stunt my career. It also appears to be very developer-heavy, in that developers seem to love it. I'm operations, pretty much SRE, with an infrastructure background, and I know only the tiniest amount of C#. I feel like the tool isn't meant for me, if that makes sense. What are your thoughts? Will it see future adoption, and is now a good time to "buy in"?

https://redd.it/12lxpad
@r_devops
Is this job always this thankless?

Senior DevOps/Infra/Platform Engineer here. I've been working at a fintech startup since day 1 of the company, and now, almost 4 years later, we are becoming a big player in the market we operate in. That means many clients and high stakes all the time.

Yesterday I had to perform what I believe has been the riskiest, most difficult migration of my career and life.

It's safe to say that the environment is highly complex and has many, many moving parts. Since I'd been preparing and getting things ready for a while, I migrated everything in under 1.5 hours yesterday. And afterwards it was as if nothing had happened.

I feel it was a monumental accomplishment on my part (especially because I performed it mostly by myself; my manager was just watching), but come Friday no one said anything. No one gives a shit, really.

I don't blame anyone but I can't shake the feeling that sometimes people don't realise how bad things can go and how much risk and stress our job entails.

What do you do in these situations?

https://redd.it/12m40w8
@r_devops
Opinion: GitHub Codespaces is the best thing to ever happen to "local" development. Docker Desktop is on the chopping block.

# Overview

Alternate Title: The Death of “But It Works on My Machine!”

This is an opinion piece based on my recent experience with GitHub Codespaces. I’ve written about several tools in the past, but few (besides ChatGPT) have had me raving this much in recent memory.

TL;DR: Codespaces is the breath of fresh air that Docker promised to bring to development ~10 years ago. Container-based development, with all of the tedium that comes with it, fully automated. Everything from building and storing images to managing container engines, cloning repos, connecting to IDEs, and redirecting OAuth requests (plus probably dozens of other features I haven’t scratched the surface of) is no longer a burden. All you need is a browser.

# The Local Dev Environment

Starting a new job, contributing to FOSS projects, building personal tools – each case carries with it the dreaded feeling of “ugh, how do I get started?” Contribution guides are often lacking, and it can be tough to blend them with your development setup and workflow. I’ve been attempting to standardize my setup process for 10+ years now, and until recently I felt that I’d made “meh” progress at best. I’ve struggled to put it into words, but this diagram from [containers.dev](https://containers.dev/overview) lays it out perfectly:

The so-called **Outer Loop** and **Production** have been my primary focus as a DevOps Engineer, as business revenue and reputation tie directly to them. No production = no money. As a result, the **Inner Loop** has often been neglected. It’s one thing to build a CI/CD pipeline in Jenkins, Bitbucket, Bamboo, Azure DevOps, or GitHub (I’ve been doing this for years), but another thing entirely to have a local setup that is easily replicable and stable. On top of that, even if my job were to solely revolve around developer experience, I’d have one hell of a time trying to build a gold-standard setup that fits everyone’s needs.

Working on the **Outer Loop** is like building an assembly line; the process is generalizable, repeatable, and there is usually a single output that determines success. You monitor the process and check the outputs for quality, but your primary function is to **keep it moving**.

Working on the **Inner Loop** is akin to building a fallout shelter; it is an environment in which you will reside for an indefinite amount of time. You can try your hardest to plan for what you will need in the future, but you can’t fit every tool inside without cluttering the whole place. Occasionally, you will need to venture out into the world to gather new resources, hoping that you don’t bring something tainted or bug-ridden into your domicile. You do your best to keep the place clean, but tasks of urgency and botched experiments lead the place to fall into a state of disrepair (or worse). Years pass, and you give up maintaining your mess and burn the whole thing down, only to rebuild it back in place.

Except your blueprints were last updated years ago.

And you’re missing half of the parts.

And you forget which wire is hot/neutral.

And your glasses are broken.

And you wonder how you made it this long at all.

Oh wait, we’re still talking about software, right?

Hopefully, I am not alone in this struggle, and my journey resonates with some. The point is, building and maintaining local development environments can be a burden, even with the modern niceties of containers and package managers. If you care to compare, here is an off-the-cuff listing of how I’ve maintained my dev setup over the years:

* Jotted on a piece of notebook paper how to set up Windows 7 Ultimate (2010)
* Found Ninite, which scripted 80% of the process
* Toyed with virtualization, spent countless hours building a VirtualBox golden image
* Realized working in vBox sucks (or did in 2013), and wrote a Powershell setup script for my laptop
* Built an auto-recovery partition with an autounattend.xml to refresh the laptop regularly
* Realized that the previous setup was a clunky and fragile mess
* Switched back to vBox with the autounattend and setup script to build dev environments from scratch
* Found Chocolatey, which replaced Ninite for me
* Discovered the power of containers, but struggled to integrate them into my workflow
* Experimented with Docker Desktop, but hated the fact that it used Hyper-V
* Learned that Vagrant exists, switched to prebuilt images
* Found /r/homelab, bought an r720, set up proxmox
* Experimented with LXC, struggled to understand it
* Somehow thought golden images were a good idea again, replicated on pmox with Windows
* Switched to my shitty autounattend + script **again** on pmox
* Moved to ubuntu + docker + portainer – my first time feeling productive with containers
* Configured zfs wrong, nuked my proxmox setup
* Tried unraid, bought unraid, and repeated my cycle of misery all over again
* Moved through several AWS and Azure services (EC2, Workspaces, and VMs)
* Frustrated myself trying to connect cloud resources to local resources
* Learned k8s, had some fun with kubefwd connecting local resources to clusters
* Learned that k8s kind of (actually, **really**) sucks to maintain
* Tried Fedora Silverblue, frustrated myself again
* Built setup scripts for WSL 1 & 2 with Ubuntu, Fedora
* Started using Podman on WSL
* Banged my head against a wall with a broken WSL setup
* Discovered Windows sandbox
* Looped all the way back to my powershell setup script
* Actually enjoyed windows sandbox, but still struggled to keep the script maintainable (a previous guide I wrote shows me using this in January)

Typing out that list alone made me exhausted; figure that between each of those bullet points there are probably 20 hours of struggle. Through each of these, I’ve waffled between one “do it all” config and slim configs for each dev situation (one for Python, one for Node, etc.). I’ve tried containerizing my work, throwing it in a VM, running it on bare metal, and everything in between. That nagging feeling that there **has** to be a better way has never left my head. Until now, with Codespaces.

# Containerization

*(skip if you don’t want a brief rant/history)*

To understand Codespaces, a basic understanding of containerization is helpful. Containerization promised to simplify development by enabling developers to package applications and their dependencies into a single distributable unit, called an image. These images all rely on a shared Linux kernel, making them smaller and faster to spin up than conventional VMs. Docker was at the forefront of this movement, offering a set of tools to create, deploy, and run applications in containers. However, Docker fell short in implementing a truly seamless integration with IDEs. The setup process was often time-consuming, clunky, and prone to breaking, especially on non-*nix OSes. The learning curve was steep for developers without a background in virtualization. As I mentioned before, I struggled with this heavily (especially as a Windows user). Docker Desktop eventually made this process easier, but with the caveat that it was no longer free to use for large businesses. Of the places I have worked, not a single one has entertained the thought of paying, meaning devs would be restricted to the CLI, or another management tool such as Portainer. The overhead of configuring, troubleshooting, and documenting this was, and still is, a PITA.

The following is a dramatization of the onboarding process as a developer:

>**Manager**: Okay here’s your ThinkPad. We use containers! Set up your machine so we can start building.
>
>**Developer**: Alright, you know I’ll need local admin for that, right?
>
>**Manager**: Sure, we’ll get you that!

— 3 weeks later —

>**Internal IT**: Admin access granted
>
>**Developer**: Attempts to install docker, fails due to missing WSL
>
>**Developer**: Attempts to install WSL, fails due to Windows 1903 requirement
>
>**Developer**: Attempts to update Windows, blocked by GPO

— 3 weeks and 3 support tickets later —


>**Internal IT**: Due to the demands of a very persistent developer, the whole company has updated to Windows 10 1903. We will be swamped fielding support tickets related to this for the next year.
>
>**Developer**: Whew, glad that’s not me! Downloads and installs Docker
>
>**Manager**: No, wait, not Docker Desktop! We don’t want to pay for that and we don’t want to get sued
>
>**Developer**: Uninstalls Docker Desktop
>
>**Developer**: Attempts to install Docker **CLI**
>
>**Developer**: Wait, where is the installer for Windows?

— The CLI / engine is only officially distributed for *nix —

>**Developer**: Oh goddamnit, I have to set up Ubuntu on WSL just for this
>
>**Developer**: Goes to install Ubuntu from the Microsoft Store
>
>**Developer**: The Microsoft Store is disabled by GPO

— 4 weeks and 4 support tickets later —

>**Internal IT**: Due to the demands of one developer who claims the Microsoft Store is “mandatory” for his job, we will be enabling it organization wide. Please do not install Candy Crush. All microtransactions and jelly beans will be the property of ConglomerateCorp
>
>**Developer**: Finally installs Ubuntu for WSL
>
>**Developer**: sudo apt-get update && sudo apt-get install -y docker.io
>
>**Developer**: Alrighty, finally time to work…

All of this, and we haven’t even gotten to the IDE linkage, VPN issues, or other dependency hell scenarios… 11 weeks have gone by, not a single line of code has been written, and the anxiety of having to repeat this process if your laptop shits the bed looms above your head.

I’ve gone through this hypothetical in some fashion at least 5 times now. Some parts get better, but the dread never fades.

# Codespaces

If you’ve read the entirety of this article, please take a smoke break, or maybe just scroll on TikTok for a bit – you’ve earned it. If you’ve skipped to this section, just know that the world of local development on corporate machines has not been fun. And Codespaces will change that (or the open source spec, [https://containers.dev](https://containers.dev/)). What exactly is it, though?

GitHub Codespaces has managed to bridge the gap left by Docker and other development tools by providing a smooth, browser-based (or desktop-based, if you prefer) ephemeral environment. With Codespaces, devs can quickly spin up a container on Azure and plop directly into it using VSCode. The setup requires only a browser, a GitHub account, and a few JSON/YAML files.

The entire scope of the development environment is contained within a folder at the root of the repo, .devcontainer. If the folder does not exist, the Microsoft Universal image is used, which has many modern tools and frameworks pre-installed. It’s as easy as clicking the big “Create Codespace on Main” button. 10 seconds later, VSCode opens with the repo already cloned, all dependencies installed, and… that’s it. You can just work. All app ports that you’d need to access are forwarded to your local machine. Debugging feels as natural as it does locally. GitHub keeps track of changes made in the repo so you can commit without even having VSCode open. The container pauses after 30 minutes of inactivity to minimize costs. When you’re ready to resume, it takes 10 seconds to pick back up where you left off. If you need additional custom dependencies, you can pre-build your own image, and GitHub handles versioning and pushing it to the registry behind the scenes. I’m struggling to convey how easy it is.
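For a concrete sense of scale, the whole environment definition can be a single devcontainer.json. A minimal sketch (the image and feature identifiers below are the commonly published devcontainers ones, so treat them as assumptions to verify, and swap `npm install` for whatever your repo actually needs):

```json
{
  "name": "example-dev",
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
    "ghcr.io/devcontainers/features/terraform:1": {}
  },
  "forwardPorts": [3000],
  "postCreateCommand": "npm install",
  "customizations": {
    "vscode": {
      "extensions": ["hashicorp.terraform"]
    }
  }
}
```

Drop that in `.devcontainer/devcontainer.json`, push, and the "Create Codespace" button takes care of the rest.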

As an example, I’ve written a .devcontainer definition for a utility called “granted”, which handles AWS SSO credentials in a very seamless way. The result is that I can go from zero (a brand-new machine with just a browser) to deploying to an AWS dev account in the time it takes to make a coffee.

[https://github.com/robbycuenot/codespaces-aws-granted](https://github.com/robbycuenot/codespaces-aws-granted)

It has never been so easy. Gone is the tedium of setting up Docker, configuring proxies, finding workarounds for Cisco VPN in WSL, and every other menial task that burns developer hours without accomplishing a single thing. Call me a fanboy or a Microsoft shill all day; this is a gamechanger.

# Conclusion

GitHub Codespaces has successfully realized Docker’s vision for efficient developer environments by offering a fast, easily distributed, and seamless solution. By addressing the pain points of previous dev tooling and leveraging containerization, Codespaces has changed the way developers can work. As the development landscape continues to evolve, we can expect Codespaces to play a significant role in shaping the future of software development.


Modified from my blog post with images: [GitHub Codespaces | cuenot.io](https://cuenot.io/articles/github-codespaces/)

https://redd.it/12m5jad
@r_devops
ChatGPT eliminates the need for developers - Will it also destroy DevOps?

If I were a software engineer right now, I'd be distraught.

ChatGPT is replacing about 80% of what developers do daily (shuffling bits in and out of a DB, CRUD apps, etc.). Each day that goes by, this percentage increases.

So, like it or not, the software engineering industry is in for a reckoning.

But what about us in DevOps? Some of the IaC use cases will probably be replaced with ChatGPT, but there's so much more to our job that can't be represented in code.

Am I right, or are we just as doomed as the app developers?

What else do we do, as DevOps engineers, that ChatGPT won't be able to replace us for?

https://redd.it/12ma63m
@r_devops
What’s the best way to learn real-life DevOps work?

Studying is the easy part but knowing how to apply the concepts in real life is the tricky part because it seems like no one wants to be bothered to teach it. Do you guys have any good courses/bootcamps to learn how to apply concepts to real enterprise projects?

https://redd.it/12m8fp9
@r_devops
Can you be an SRE and not know how to code?

Hey folks! Need some harsh advice here. I am currently working as a DevOps Engineer making about 156K annually. There is an internal opening in our company with a base salary range of 110K-230K. The role title is Sr. SRE. I saw the requirements, and the only thing that bothers me is that it obviously requires coding. This role also requires two days in the office, where I have to drive an hour and 25 minutes.

Should I stay where I am and enjoy the fully remote, chill environment at 156K while working on my skills, or aim for 200K and accept a challenging position with high requirements and two days in the office with that hour-and-25-minute drive?

https://redd.it/12m7dpp
@r_devops
I'd like to learn more about the DevOps role.

(Yes, I've read the pinned post)


Hey everyone! So to give a little bit of context, I'm a mid-level Software Engineer who's been working on large-scale projects for Fortune 500 companies, and throughout the years I've come across various levels of automated processes, development mindsets, practices, etc. My current project has a rather poor agile culture, and the unit tests are written just to hit a certain % coverage so that the Jenkins builds pass, even though the tests themselves are very poor. I'd say the integration with Bitbucket is pretty basic, but there aren't different checks for feature/release branches, hotfixes, admin approvals, etc.


Recently there has been an initiative to move the project from on-prem to AWS, but the progress is very slow due to no one on the project having previous experience in this regard.


Having encountered these shortcomings, and now looking back at previous teams and projects where the processes were much smoother(not all but most), I now come to realise that I'd rather like working on infrastructure rather than on the product itself.


I have the opportunity to have a career change inside the current company, opting to transfer to the DevOps "Guild" (for clarification, the client has poor devops practices, and no devops engineers from our company, I actually have no clue what projects our company's devops engineers work on), but for that I would need to actually have some general knowledge first. So I was planning on creating some small project (front+backend) and then try to apply some core devops concepts to it. Currently I'm familiar with docker, but that's about it, and I was currently looking into k8s.


I heard other buzzwords like "Terraform", "Elasticsearch", "Prometheus", "Splunk", "Kibana" but I have no knowledge of what they entail.


What would you recommend me in this situation?

https://redd.it/12m3tva
@r_devops
I asked ChatGpt for a Devops/Cloud project idea. Thoughts/feedback?

Uni undergrad working on a roadmap. [My Roadmap](https://www.reddit.com/r/ITCareerQuestions/comments/10vbvej/want_to_get_into_the_devopssrecloud_space_have/)

This project would ideally be undertaken after I've gotten my AWS SAA and Terraform Associate.

I've decided on undertaking a larger project rather than some copy-paste tutorials that would waste time and give me a false sense of progress. Quality > quantity, basically. So I'm curious what those already in the field might think about the project, and whether it would be a good idea to go ahead with it, as it is a huge time commitment. I realize I may be biting off more than I can chew, but I figure the best way for me to learn all of this and get into the field is to just dive in the deep end. Any thoughts/feedback would be greatly appreciated.

*The prompt I gave chatgpt was:*

Build a micro-service in Python which runs in a server-less environment using containers, such as Lambda with a Container Image. Using something like DynamoDB with global tables and use GitHub actions to build and deploy your application with the serverless application framework. Then, have a Python-based microservice with a global datastore, deployed in multiple regions using CI/CD. It should then be able to commit a simple change to Github and have the service deployed globally. This solution would also be almost all usage-based, no fixed cost servers or vms running. Then also use terraform to host in a github repository.

*The response:*

**Note, the following output is the micro-service itself:**

This is a service that automates the deployment of infrastructure on AWS.

Description:

The microservice would allow users to define infrastructure as code in YAML or
JSON files, and then deploy the infrastructure on AWS automatically. It would
use the AWS SDK for Python (Boto3) to interact with AWS services such as EC2,
VPC, and RDS. Users would be able to specify the region where the
infrastructure should be deployed, the instance type and count, the size and
type of storage, and other configuration options.

1. Set up a virtual environment for Python and install the necessary
dependencies, such as Boto3, PyYAML or JSON, and any other necessary
libraries.

2. Create a basic Flask app with routes to handle HTTP requests from users.
For example, you could define a route that accepts a YAML or JSON file
containing the infrastructure definition, and another route to trigger the
deployment of the infrastructure.

3. Use Boto3 to define the AWS services and resources that the microservice
will interact with. This could include creating a VPC, an EC2 instance, an RDS
instance, and any other necessary resources.

4. Use PyYAML or JSON to parse the user-provided infrastructure definition and
convert it to Boto3 API calls to create the infrastructure.

5. Define a deployment strategy that specifies how the microservice will
deploy the infrastructure. For example, you could define a simple strategy
that deploys the infrastructure to a single region, or a more complex strategy
that deploys the infrastructure to multiple regions using AWS Global
Accelerator or Route 53.

6. Implement error handling and logging to ensure that the microservice is
robust and reliable.

7. Test the microservice locally to ensure it works as expected. You could use
a tool like Docker to simulate the AWS environment and test the microservice
in a container.

8. Deploy the microservice on AWS Lambda or ECS using the Serverless
Application Model (SAM) or Elastic Container Service (ECS) task definitions.

9. Test the microservice on AWS to ensure it works as expected. You could use
AWS CloudFormation or Terraform to provision the infrastructure and test the
microservice in a real environment.

10. Document the microservice and its deployment process, including any
configuration
options, best practices, and troubleshooting steps.

Overall, this microservice would automate the deployment of infrastructure on
AWS and provide a streamlined experience for users. It would leverage Python,
Boto3, Flask, and other technologies to build a robust and reliable service
that can be deployed globally in a serverless environment.


**Note, the following output is now the project utilizing the micro-service above:**


This is a Serverless Microservice with a Global Datastore using Python,
DynamoDB, and GitHub Actions.

Description:

The project involves building a microservice in Python that runs in a
serverless environment using containers. The microservice should use a global
datastore, such as DynamoDB, and be deployed in multiple regions using CI/CD.
The deployment process should be automated using GitHub actions, and the
solution should be usage-based, without any fixed cost servers or VMs running.
Finally, the project should use Terraform to host everything in a GitHub
repository.

1. Set up a GitHub repository to store the code and configuration files for
the microservice. This repository will serve as a central location for storing
and managing the codebase.

2. Create a virtual environment for Python and install the necessary
dependencies, such as AWS SDK, Boto3, and Serverless Application Framework
(SAF). A virtual environment is a way to create a sandboxed environment for
your Python code to run in. This will help ensure that your dependencies are
isolated from the system Python installation.

3. Use SAF to create a new Python-based serverless application. This will
create a basic serverless.yml file, which defines the configuration of the
application. The serverless.yml file will contain the configuration for your
Lambda functions, API Gateway endpoints, and other AWS resources that your
application uses.

4. Use SAF to create a DynamoDB table with global tables enabled. This will
create a table that can replicate data across multiple AWS regions, ensuring
high availability and low latency. The global tables feature allows you to
replicate your DynamoDB data across multiple AWS regions, improving the
performance and reliability of your application.

5. Write the code for the microservice in Python. This could include functions
for handling API requests, interacting with the DynamoDB table, and any other
necessary functionality. You can use the Boto3 library to interact with AWS
services such as DynamoDB, S3, and Lambda.

6. Use GitHub actions to set up a CI/CD pipeline that builds and deploys the
microservice whenever changes are committed to the GitHub repository. This
could involve using Docker to build a container image of the microservice and
then deploying it to AWS Lambda using SAF. GitHub actions provide a way to
automate your development workflow, including building and deploying your code
to AWS.

7. Test the microservice locally and make any necessary changes to ensure it
works as expected. You can use the serverless invoke local command to test
your Lambda functions locally.

8. Use Terraform to automate the deployment of the microservice and its
associated resources. This could include creating the DynamoDB table, setting
up the AWS Lambda function, and configuring any necessary IAM roles or
policies. Terraform is a tool for creating and managing infrastructure as
code, which allows you to define your AWS resources as code.

9. Use Terraform to set up a GitHub repository for the microservice and its
associated resources. This could include creating a GitHub repository, setting
up webhooks to trigger the CI/CD pipeline, and configuring any necessary
secrets or environment variables. Terraform can be used to automate the
creation and configuration of your GitHub repository.

10. Test the entire solution end-to-end, including