Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Question(s) on databases in DevOps

Hi, I am doing research into how modern applications are built specifically around how they utilize various databases. It would greatly help my academic efforts if you could help answer 6 questions for my paper here: https://www.surveymonkey.com/r/25C8ZP9. I truly appreciate any responses I can get.

https://redd.it/y9iqx6
@r_devops
Ask /devops - How often do you get pulled into ad-hoc tasks that don't fall into any clear bucket?

This is more of a startup or small company question. How often do you get pulled into time-consuming ad-hoc tasks that fall in your lap as an Ops person - but ideally should have been done by somebody else? They might come to you because of unclear processes, or maybe because you have admin access to something - whatever the reason, they end up consuming your time. It gets worse when your team's progress is tracked by looking at the Kanban board or Jira tickets - because this work was never on there.

Note that this is more rampant in smaller orgs - so I would love to hear from others who face this, and how they deal with it.

https://redd.it/y9o81n
@r_devops
How to apply CI/CD in GitHub to deploy to an AWS EC2 instance?

I made a Flask application and want it to be deployed automatically to the EC2 instance once I push it to the repo. Can anyone walk me through the steps to do this? I also want to know how to do this when using Laravel.
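
A minimal sketch of the deploy step such a pipeline could run - everything here is an assumption, not a standard recipe: it presumes the instance already has the repo cloned, the app runs under a systemd unit (named `flask-app` here as a placeholder), and the CI runner has SSH access to the host.

```python
import subprocess


def build_deploy_command(host, app_dir, service="flask-app"):
    """Build the ssh command that pulls the latest code and restarts the app.

    host, app_dir, and service are placeholders -- substitute your own
    EC2 address, repo path, and systemd unit name.
    """
    remote = f"cd {app_dir} && git pull && sudo systemctl restart {service}"
    return ["ssh", host, remote]


if __name__ == "__main__":
    cmd = build_deploy_command("ubuntu@your-ec2-host", "/opt/myapp")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment inside the CI job to deploy
```

In GitHub Actions you would run this script (or the equivalent shell one-liner) in a workflow step triggered on push, with the private SSH key stored as a repository secret; the same pattern works for a Laravel app by swapping the restart command.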

https://redd.it/yajlio
@r_devops
Any learning resources on HPC architecture?

Hi, I'm looking into high-performance computing (HPC) distributed architecture on the Windows platform. Any resources would be welcome ♡.

https://redd.it/yanjh4
@r_devops
API Export/Import CI/CD pipeline

I'm specifically doing this with Informatica, however I've done the same thing in Databricks.

When I did it for Databricks, my team was much more involved with the release process. So we would do an export from dev, create a pull request, and deploy that into test/prod.

However, for this pipeline, I'm looking for a more hands-off approach, letting the team who contributes to the repository be the ones to create the pull requests. I'm thinking that they can continue to develop in Informatica, and I'll provide a script that will export what they need so they can make a pull request with that. Obviously there's going to be more complexity than that, but has anyone else done something similar?
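
A rough sketch of what that hand-off script could look like - heavily hedged: `informatica-export` is a made-up placeholder for whatever export mechanism you actually use (e.g. the IICS REST API or a CLI wrapper), and the branch naming is just one possible convention.

```python
import subprocess
from datetime import date


def branch_name(component, today=None):
    """Deterministic branch name for an export, e.g. export/mappings-20221021."""
    today = today or date.today()
    return f"export/{component}-{today:%Y%m%d}"


def export_and_push(component, repo_dir):
    """Export a component and push a branch the team can raise a PR from."""
    # Placeholder command: substitute your real Informatica export call here.
    subprocess.run(["informatica-export", component, "-o", repo_dir], check=True)
    branch = branch_name(component)
    subprocess.run(["git", "-C", repo_dir, "checkout", "-b", branch], check=True)
    subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "-m", f"Export {component}"], check=True)
    subprocess.run(["git", "-C", repo_dir, "push", "-u", "origin", branch], check=True)
```

The contributors then open the pull request from the pushed branch themselves, which keeps the release flow in their hands rather than yours.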

https://redd.it/yaffid
@r_devops
CI/CD

I live in a country where we are not allowed to pay foreign companies, so I am unable to sign up for the free tier of AWS or GCP with a credit card. How do you recommend I learn cloud and build a complete CI/CD pipeline?

https://redd.it/yaf6pc
@r_devops
Minecraft server deployment to DO from Github Actions

Hi everyone,

I've got a few questions regarding deploying a Minecraft server to DigitalOcean. My use case is to automatically spin up a new droplet running a Minecraft server with a predefined configuration on some action (let's say the click of a button in some UI), and destroy it if the server is empty for some time.

Currently, my process would look something like this:

1. A Git repository contains the Minecraft server configuration files, along with a docker-compose file whose services bootstrap the Minecraft server and a backup service.
2. A user triggers a GitHub Action that deploys the server to DigitalOcean using Terraform.
3. Terraform pulls the Minecraft world data from data storage using rclone (is this a good use case for a provisioner?) and runs docker compose up.
4. After some time, the droplet should be destroyed (to reduce cost), but a backup is triggered before that.

Questions:

1. Is it better to build a custom Dockerfile with the Minecraft server image and my custom server configuration and push it to Docker Hub, or to copy the configuration along with the other server data to the droplet using a Terraform provisioner or user_data?
2. Is there a problem if I don't save the tfstate files from the triggered GitHub Action?
3. How would you go about automatically destroying the droplet if there have been no online players on the server for some time?
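
For question 3, one hedged approach (the names and the 30-minute threshold below are assumptions, not a recommendation): run a small watchdog next to the server that polls the online-player count - for instance via the mcstatus library - and triggers the backup plus droplet deletion once the server has been empty long enough. The decision logic itself is simple:

```python
import time


class IdleTracker:
    """Tracks how long the server has had zero online players."""

    def __init__(self, idle_limit=30 * 60):  # 30 minutes is an arbitrary choice
        self.idle_limit = idle_limit
        self.idle_since = None

    def should_destroy(self, players_online, now=None):
        """Return True once the server has been empty for idle_limit seconds."""
        now = time.time() if now is None else now
        if players_online > 0:
            self.idle_since = None   # someone is playing: reset the timer
            return False
        if self.idle_since is None:
            self.idle_since = now    # first empty observation: start the timer
            return False
        return now - self.idle_since >= self.idle_limit
```

When `should_destroy` returns True, the watchdog would run the backup and then delete the droplet (e.g. `doctl compute droplet delete`, or the DO API). That also touches question 2: if the destroy path goes through Terraform, discarding the tfstate means Terraform can no longer manage or destroy the droplet, so a remote backend (S3, or an S3-compatible DO Spaces bucket) is worth keeping.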

https://redd.it/yax9n2
@r_devops
Is DevOps the best way out of service desk and into software development (for me)?

I'm an IT Generalist. I touch everything from AD, networking, apps, servers, etc.

I'm working on my computer science degree and trying to move into software development without taking a pay cut or doing internships. I have experience programming in Python, C, C++ and scripting experience with PowerShell and Bash.

I'm thinking about getting the AWS Developer and AWS DevOps Engineer certifications in order to move from my MSP service desk job to a DevOps engineer position - both for a significant pay increase and to get my foot in the door for software development while still working on my degree.

Does anyone have any criticism of this path, or advice?

https://redd.it/yay5nx
@r_devops
Followup to my research survey

I got a lot of good input on my previous survey about how modern DevOps-based application deployments utilize databases. I modified it to also show the results... if you want to look at the current status, please see this: https://www.surveymonkey.com/r/25C8ZP9

https://redd.it/yaft3d
@r_devops
Is it possible to work as DevOps/SRE completely outside FAANG?

I'm making a career switch to the operations side of things, but every business seems to use Azure, GCP, or AWS, and my ethics/morals clash heavily with all three.

Is it possible to have a decent career as DevOps or SRE without using any of those three platforms?

Edit: I live in the west, so Chinese platforms like Alibaba or Tencent cloud would also be no-go.

https://redd.it/yb90dl
@r_devops
Terraform AWS API Gateway

Hello,

I have to create, in Terraform, the following infrastructure: one API gateway and 2 Lambda functions (they can be "Hello world"; they don't matter at all). So far so good - I can create everything without any problems.

Now for the part that eludes me: api_invoke.url/f1 should call function 1 and api_invoke.url/f2 should call function 2.


This seems to be very easy from the AWS console: you just add a trigger to the function and it automatically adds the /f1 or /f2 to the URL. In Terraform, however, it seems to be a lot harder.


What I've done so far:

- the API points to both functions; the API URL is api_invoke.url, without "/f1" or "/f2"

- created 2 stages for the API, each integration_uri pointing to a function - still only one function

- created 2 routes with 2 different route_keys: POST /f1/post and POST /f2/post

Nothing has worked and I'm going crazy, because it's so easy from the AWS console. Any ideas?

Thank you in advance.

https://redd.it/ybb3ky
@r_devops
How does stackoverflow or the internet work for defense contractors?

If I need to Google something, do I need to come out of the secured area to Google my question then go back in?

https://redd.it/yag8el
@r_devops
Doing work-related projects in your free time vs. burnout

Hello everyone,

I have a lot of fantastic ideas to improve work for me and my team. During my 8 hours of work, I struggle with my own tasks: I do them really slowly, since I test every change I make, I spend a lot of time on the concept, and sometimes during the implementation phase I find out it will not work, so I start over rethinking how it should be done. I can only book about 3-4 hours per week for other tasks, like these improvements that I want to make.

Outside of work I have different hobbies, but also during my free time I started to look for some projects that I could build for polishing my python skills and other technologies.

Basically, I don't have any motivation to build anything that I could use in my personal life except one web application, but I struggle with that a lot, because I only know some basics of HTML and CSS, so writing stuff in JavaScript and figuring out how all these things should be done consumes a lot of my energy.

During my free time I started to implement all these nice ideas, focusing mostly on good design and clean code rather than having a result as quickly as possible. So I really try stuff out, see the results, change something, and check the results again.

Despite not being able to achieve much during the time I spend on these work-related projects (as I mostly play around rather than write something and improve it), I can feel that constantly solving problems that are also work-related sometimes makes me tired. A senior at my company has told me multiple times to treat my personal time as personal and not to care about IT in my free time.

For those who don't want to read the whole thing, the question is very simple:

What do you guys think about polishing your IT skills by creating and improving the projects that are related to your work?

Please ignore the money aspect here. I know that doing work for free is not the way it's supposed to be, but I treat this work as self-improvement, and no one at my company expects to see the results of it, as I am not charging them for the time I spend on these projects. The company will reward me for this time anyway by increasing my salary once they see that I've improved.

https://redd.it/ybcqwb
@r_devops
How to Use the GitHub Actions Matrix Strategy in Deployments

Hey guys,

Quincy Ukumakube just wrote a new blog post you may enjoy on the ATA blog.

"How to Use the GitHub Actions Matrix Strategy in Deployments"

Summary:
Learn how to use the GitHub Actions Matrix deployment strategy and take your actions to the next level in this ATA Learning tutorial.

https://adamtheautomator.com/github-actions-matrix/

https://redd.it/ybfcq5
@r_devops
Simple stack for deploying full stack applications using Pulumi?

Hello guys, I am a beginner in DevOps but have been writing full-stack apps for several years now. Until now I have worked for companies with full-on DevOps departments, so I never really cared too much, but I've since gone solo as an indie developer and I am looking for a simple stack that would be manageable for a one- or two-person team.


So far, the stack has been super messy and heavily manual. I served my NextJS frontend from Vercel, and SSH'ed manually into DigitalOcean, pulled my repo, and rebuilt/redeployed it (I know, I know).


I decided to give Pulumi a shot but I am a bit confused - how should I properly set this up for:
- a monolithic backend
- 1 Postgres instance
- 1 Redis instance
- a NextJS frontend


I understand that I can set up S3 buckets and EC2 instances using Pulumi, but what would be the simplest and most manageable way to continuously deploy my applications to the instances Pulumi sets up? Should I integrate CI/CD with Pulumi? Or should the two be completely separate, with Pulumi only taking care of AWS resource lifecycles? I am a bit lost about how these two paradigms are interconnected.


Of course, I am more than open to any suggestions you may have on how to automate DevOps for someone in my situation, abstracting all these things away into TypeScript wherever possible. Thanks!

https://redd.it/ybeh9i
@r_devops
KodeKloud, O'Reilly, Pluralsight: training for cloud and DevOps, ideally dev too

Hello,

I am looking for feedback on the following training platforms: O'Reilly, Pluralsight, and KodeKloud. I have subscribed to their free trials but would like to gather as much input as I can.

TLDR:

- Does O'Reilly have good DevOps content, especially Ansible, Terraform, AWS, and Azure, and does it have good hands-on labs/sandboxes for these use cases?

- Would you recommend KodeKloud for DevOps AND cloud provider training?

- Is Pluralsight any good regarding hands-on labs for DevOps (Ansible, Terraform) and cloud providers?

----

I would like a platform to train on certain subjects: Ansible, Terraform, and general cloud topics (all providers, but at least AWS and Azure).

I have used KodeKloud in the past and it left a really good impression. However, I think it is lacking on the cloud side?

Also, I hear a lot of good things about O'Reilly Learning - do they have good content and labs on these topics (and others)? The plus for O'Reilly would be that they also have good training for programming languages and such (I guess?), which would save me from also having to buy classes elsewhere or a Pluralsight subscription.

On that note, I really like Pluralsight, but have only used it to learn things like Angular or C#. I see they have hands-on labs and sandboxes, and they own ACG (A Cloud Guru). But I am guessing they do not want to eat into ACG margins, so those features may be more limited?

Pluralsight also has the advantage right now of being way cheaper than the other two (€327/year vs. €499/year).

The key aspect for me is the hands-on labs, because I do NOT want to create cloud accounts. Ideally I would like hands-on labs that give a certain freedom to explore too.

Does anyone have feedback on these points? So far my ranking is:

Pluralsight > O'Reilly > KodeKloud

https://redd.it/y9rkqs
@r_devops
ELK deployment advice

Hello everyone,

At my work, we are thinking about deploying the ELK stack on our VMs to analyze logs.
The entire stack will be at v7.17 and deployed entirely on VMs (maybe EC2s - basically Linux machines).
The deployment restriction is that ELK must be deployed as one Logstash, one Elasticsearch, and one Kibana (one of each, no clustering).
Downtime can be tolerated within reason (blackout/disaster).

The current daily log size will be at most 500k lines (about 1-1.5 GB total).
We'll have to keep them for at least 90 days, so 1.5 x 90 = 135 GB total.
Search speed can be less optimized but should stay within reason for users (say, searching for a specific event from last month takes under 5 seconds).
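
As a sanity check on the sizing, a quick back-of-the-envelope estimate. The 1.1 indexing-overhead factor is a rough assumption - actual index size depends on your mappings and compression, so measure it on real data - and with a single Elasticsearch node there are no replica shards to double the footprint.

```python
# Rough storage estimate for the retention requirement above.
daily_raw_gb = 1.5       # worst-case daily log volume
retention_days = 90
index_overhead = 1.1     # assumed index-vs-raw size ratio; measure on real data
replicas = 0             # single Elasticsearch node -> no replica shards

total_gb = daily_raw_gb * retention_days * index_overhead * (1 + replicas)
print(f"~{total_gb:.0f} GB of Elasticsearch storage")
```

Leave comfortable disk headroom on top of that (Elasticsearch disk watermarks stop shard allocation well before the disk is full), and index lifecycle management (ILM) can delete indices past the 90-day mark automatically - it is available without an enterprise license.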

The current idea is:
Logs will be read from each machine using Filebeat and sent to Logstash.
Logstash will then apply some filters to process the logs and send them to Elasticsearch, which Kibana queries.

Filebeats/logs via HTTP requests from apps --> Logstash --> Elasticsearch --> Kibana

My questions are:
Is there any example/best practice on this?
Are there any pitfalls we should know to avoid?
Is an enterprise license required?
Are there places where we can learn more about ELK?

We are quite new to this, so any recommendations are welcome.
I've already recommended Datadog/Loki and other solutions, but the chosen solution is ELK, so we'll have to go with it.

https://redd.it/yboysz
@r_devops
Sending logs to central storage from all ELK instances

Hello everyone. What are you using, or what would you suggest, for log aggregation from multiple ELK (and Loki) instances?

We want this central storage to be able to connect to Kibana/Grafana as well.

I know there's VictoriaMetrics/Thanos and similar options for Prometheus, but I am looking for something similar for logs through ELK or Promtail/Loki.

https://redd.it/ybrqsb
@r_devops