Reddit DevOps
Serverless infrastructure

Hello all,

I’m looking to learn a bit more about the underlying technology behind serverless architectures.

How would a provider approach the solution: the load balancers, reverse proxies, and finally the application web server?

Thank you 🙏🏽

https://redd.it/lll3wj
@r_devops
Project Manager looking for Feedback

I am currently a waterfall-based project manager who is learning more modern agile project management methodology and DevOps frameworks. The feedback I am seeking: is there value to you as a developer if the PM or project leader knew, or was familiar with, programming?

I am under no delusion that I would be able to replace a developer nor do I have the desire to. I just want to know the best way to be part of the team.

*I am thinking about taking two Codecademy career paths (Code Foundations, Computer Science), for those who are interested.

https://redd.it/lllgzn
@r_devops
Does AWS ECS add a price overhead if you don't use Fargate?

If you manage your own ECS cluster on EC2, then is there a price overhead from using ECS?

Also:

- How does the pricing of Fargate compare to the pricing of EC2 these days?

- How does the pricing of Fargate Spot compare to the pricing of on-demand EC2 these days?

Thanks!

https://redd.it/llilj6
@r_devops
CI/CD Pipeline For Library + Backend Server

Hey all, I'm fairly new to DevOps and I am curious what my options are for the following scenario.


TL;DR: How do you set up a CI/CD pipeline that builds two different repos, where one is dependent on the other, and that can handle the situation where you need to push out code to both at the same time?


To start off, I have a library A and a backend server B that depends on A. The code for A and B is maintained in separate git repos. I want to set up a CI/CD pipeline so that whenever I push out changes to A and it successfully builds, it triggers B to automatically rebuild using the new version of A. Likewise, if I push out new changes to B, it will automatically grab the latest version of A and rebuild itself. I think this is a fairly typical situation. I'm using CircleCI and have an idea of how to set this up in that environment.
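For the happy path (a change to A alone), the cross-repo trigger can be done from A's pipeline via the CircleCI v2 API. A hedged sketch, where the org/repo names, the Gradle tasks, and the `CIRCLE_API_TOKEN` environment variable are all assumptions:

```yaml
# .circleci/config.yml in library A (sketch; names are placeholders)
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/openjdk:11.0
    steps:
      - checkout
      - run: ./gradlew build publish
      - run:
          name: Trigger downstream build of server B
          command: |
            curl -X POST \
              -H "Circle-Token: ${CIRCLE_API_TOKEN}" \
              -H "Content-Type: application/json" \
              https://circleci.com/api/v2/project/gh/my-org/server-b/pipeline
workflows:
  build-and-trigger:
    jobs:
      - build
```

The token would live in a CircleCI context or project environment variable rather than in the config itself.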


Here's what I'm curious about, though. What happens when I need to make changes to both A and B and then push out changes to both repos back to back? I will end up with some weird synchronization issues if I follow the setup described above. For example, if I open a pull request for A and trigger the pipeline, it will attempt to build the previous version of B, since the current version is probably still sitting in its own pull request that has yet to be merged. On top of this, I would really only want to build once, and the setup I described would cause the server to be built twice, assuming everything else worked out.


Is there a common way to deal with this issue? Hopefully I have explained it well enough. I appreciate any feedback or thoughts you can give me!


For some extra context, I'm working with Java using Gradle as my build tool. Both the library and server are built on Spring.

https://redd.it/llhf02
@r_devops
Java classes communication using Uri

https://stackoverflow.com/q/66233294/15165716



I am trying to get Java objects (object1 and object2) to communicate using URIs. I would like these objects to expose URIs with different endpoints and to execute some code when a URI is reached. The objects could be running on different computers (on the same network) or locally on the same machine.

An example: if object2 wants to enter object1, it could access something like "object1@localhost:80/service/enter?name=object2", and object2 would get a reply saying {"Accept"} or {"Error: condition not met"}.

I know it is a lot easier to use socket programming but because of the scope of the project I'm working on that is not possible. Any help would be greatly appreciated.
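Not an answer for Java specifically, but the pattern being described is essentially a tiny HTTP server with query-string endpoints; Java ships one in `com.sun.net.httpserver`. Here is the same idea sketched with Python's standard library, using the endpoint and reply format from the example above (all names are placeholders):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class ObjectHandler(BaseHTTPRequestHandler):
    """Each endpoint on the object's URI maps to a path; replies are JSON."""

    def do_GET(self):
        parsed = urlparse(self.path)
        params = parse_qs(parsed.query)
        if parsed.path == "/service/enter":
            caller = params.get("name", ["?"])[0]
            body, code = {"status": "Accept", "name": caller}, 200
        else:
            body, code = {"status": "Error: condition not met"}, 404
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging


def serve(port=8080):
    """Start the object's endpoint server in a background thread."""
    server = HTTPServer(("127.0.0.1", port), ObjectHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

object2 would then simply issue an HTTP GET to object1's URI and inspect the JSON reply; the equivalent Java server registers a handler per path in the same way.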

https://redd.it/llhdon
@r_devops
I’m new to DevOps and am very confused

I’m trying to build a small project to better understand the tech for DevOps jobs and I’m hitting major bumps.

The idea: a simple Flask API in a Docker container, a simple database in a Docker container, unit test the database through API endpoints, and push changes to the repo if the unit tests pass.

I’m struggling with how to push the code if tests pass. I was thinking of using Jenkins to build and test the app, but I really don’t know how to proceed.

Eventually I want to use ansible, kubernetes, and aws, but I’ve been searching the internet for days on how to proceed and I’m stumped.
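Since Jenkins was mentioned: the build-test-push flow usually lives in a `Jenkinsfile` in the repo, and Jenkins runs it on every push. A minimal declarative sketch, where the registry, image name, compose file, and test command are all assumptions:

```groovy
// Jenkinsfile (sketch; registry, image name and test command are assumptions)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // bring up the API + database containers and run the unit tests
                sh 'docker compose up -d'
                sh 'python -m pytest tests/'
            }
            post {
                always { sh 'docker compose down' }
            }
        }
        stage('Push') {
            when { branch 'main' }
            steps {
                sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
            }
        }
    }
}
```

The connection is: git push triggers Jenkins, Jenkins runs these stages, and only a green Test stage reaches Push. Ansible/Kubernetes/AWS then come in as later deploy stages.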

How does this all connect together?

I seriously am in the dark and don’t know what I don’t know

https://redd.it/llgnb1
@r_devops
Which CI/CD tools are you using?



View Poll

https://redd.it/llg6y0
@r_devops
DevOps engineer at work suggested I start with Docker, Kubernetes, and Helm. Anything else you guys would add?

I feel a bit rudderless at the moment. My working plan is to take one of my repos that is sitting on Github and try to deploy it. Then add the commercial features we expect at work: multiple instances, load balancing, etc.

Are the three techs plus this personal project all I need or would you guys like to add anything?

https://redd.it/llefyi
@r_devops
Best way to benchmark and load test an API

Hi guys

I want to know how you benchmark and load test an API endpoint. Does it depend on the language used, or are there things we need to know before load testing an endpoint, like what architecture the application is hosted on, etc.?
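Dedicated tools (k6, Locust, wrk, JMeter) are largely language-agnostic; whatever the tool, the core of a load test is firing concurrent requests and reporting latency percentiles. A toy sketch of that core idea in Python, where the endpoint call is a placeholder you would replace with a real HTTP request:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def measure(call, total=100, concurrency=10):
    """Fire `total` calls with `concurrency` workers; return latency stats in ms."""
    def timed(_):
        start = time.perf_counter()
        call()  # placeholder: in practice, an HTTP request to the endpoint
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(total)))
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95) - 1],
        "max": latencies[-1],
    }
```

Knowing the hosting architecture matters mainly for interpreting the numbers (load balancer limits, cold starts, connection pooling), not for generating the load.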

Thanks in advance.

https://redd.it/llv5s7
@r_devops
CI/CD Process for internal Python package

Hi everyone,


I am not very well versed in DevOps practices - I am a data scientist and I have good software engineering skills, but CI/CD was always something that "someone else does".


Recently, I've created a Python package for my team to use. We're just hosting it on GitHub and expecting people to install it via `pip install <github link>`, like you would install any Python package from a GitHub repo rather than from PyPI.


My question - what kind of CI/CD pipeline can I/should I set up for this? What's important to have - or even, what questions do I need to ask to *know* what's important to have?
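As a starting point, a common minimal answer for a GitHub-hosted package is: run lint and tests on every push, across the Python versions the team uses. A hedged GitHub Actions sketch, where the file paths and tool choices are assumptions:

```yaml
# .github/workflows/ci.yml (sketch; tool choices and paths are assumptions)
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install .[dev]
      - run: flake8 src/
      - run: pytest
```

The questions worth asking: which Python versions must we support, does `pip install` from a clean environment work, and do we want tagged releases so `pip install <github link>@v1.2` is reproducible?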


Thanks!!

https://redd.it/lle4us
@r_devops
Managing Microservices using Kubernetes and Docker

OSS colleagues, this Modern Container-Based DevOps program begins by guiding the user through the concept of microservices, explaining the fundamentals and the other components in IT that play a vital role in building a microservices architecture. It then addresses how to use Git and how to work with and manage containers using Docker as well as Podman on RHEL 8. The course then covers daily container management tasks and works its way through managing container images, storage, and networking.

Module 1, "Microservices Essentials Overview," introduces the microservices essentials: what they are, why Git is so important, and how containers fit into the picture. The last lesson explains everything that's going on in containers.

Module 2, "Managing Containers," explains how to work with containers, including Docker containers and Podman.

Module 3, "Implementing Full Microservices with Container Orchestration Platforms," explores container orchestration platforms, which provide the perfect way of managing microservices in an enterprise environment. In this lesson, Kubernetes, the most significant container orchestration platform, is also introduced.

Enroll today (individuals & teams): https://tinyurl.com/1pj3ph8z

Much career success, Lawrence E. Wilson - Online Learning Central (https://tinyurl.com/bto061zr)

https://redd.it/llbopy
@r_devops
Using Syslog for Application Logs?


I am researching a log forwarding solution to aggregate all of the OS and application/service logs across all of our various systems into a single data store. Syslog/rsyslog works great for OS logs in our system currently, but I am unsure how suitable it will be for application-originated logs, where the log message may be a large JSON string containing a serialized stack trace, etc.

I know syslog has added support for JSON messages, but my understanding is that it basically just places the JSON string in the message portion of the syslog-formatted message; it is not JSON from the ground up. It also seems like some syslog implementations have hard limits on message sizes and may split a large message into multiple messages when processing. My other concern is configuring a large number of nodes and pushing configuration updates; I have read that configuring the syslog agents is a real pain point.
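For what it's worth, the two pain points above (size limits and JSON handling) are commonly worked around by raising the message size limit and tailing the application's JSON log file directly instead of going through the syslog() API. A hedged rsyslog sketch, where the log file path and target host are placeholders:

```
# /etc/rsyslog.conf (fragment, sketch; file path and target are placeholders)

# Raise the default message size limit *before* any input/module config,
# so large JSON payloads are not truncated or split.
$MaxMessageSize 64k

# Tail the application's JSON log file rather than forcing the app
# through the syslog() API and its limits.
module(load="imfile")
input(type="imfile"
      File="/var/log/myapp/app.json"
      Tag="myapp:")

# Forward over TCP; UDP is lossy and caps message size harder.
action(type="omfwd" Target="logs.example.com" Port="514" Protocol="tcp")
```

Fleet-wide configuration of agents like this is usually handled with a config management tool (Ansible, Puppet, etc.) rather than by hand.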

Can anyone with Syslog experience comment on using Syslog for application messages? Do you recommend it or have any success using it on your systems? Any advice would be appreciated.

Thanks

https://redd.it/llbhax
@r_devops
GCP loadbalancer monitoring, aggregated per route

Hey guys,

Would you know of a tool, SaaS preferably, which would read logs from a GCP load balancer and produce stats (latency/volume/errors) aggregated per route?

By route aggregation I mean:

GET /api/users/52356

GET /api/users/1234

I'd love the tool to be able to detect that those two routes are actually the same: /api/users/{\d+}

I cannot find anything like that, so I made something like: LB logs -> BigQuery -> custom view with route parsing -> Google Data Studio to visualize.
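For what it's worth, the route-collapsing step can be a small piece of the "custom view with route parsing" stage; a hedged sketch of the normalization, where the ID patterns (numeric and long hex segments) are assumptions about what counts as a variable segment:

```python
import re

_NUM = re.compile(r"^\d+$")
_HEX_ID = re.compile(r"^[0-9a-f]{8,}$", re.IGNORECASE)


def normalize_route(path):
    """Collapse variable path segments so GET /api/users/52356 and
    GET /api/users/1234 aggregate under the same route key."""
    parts = []
    for seg in path.split("/"):
        if _NUM.match(seg) or _HEX_ID.match(seg):
            parts.append("{id}")
        else:
            parts.append(seg)
    return "/".join(parts)
```

The same substitution can be expressed as a `REGEXP_REPLACE` in the BigQuery view so the aggregation happens before Data Studio sees it.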

If it doesn't exist, should we build that? :)

https://redd.it/llyw0f
@r_devops
So, here's a question...

Is every DevOps engineer in Romania taken?

Honestly, I know the War for Talent is real, but it seems as though for each member of the DevOps community, there are at least half a dozen job offers lying around.

Anyway, there are a few projects (quite a lot, actually) we are working on, and we need lots of great DevOps engineers. Maybe you could help me with a few pointers on what is truly attractive to you when considering job opportunities. Any information is priceless right now and greatly appreciated!

https://redd.it/llxbae
@r_devops
Is kubernetes-external-secrets mature enough?

We are looking for a solution to fetch secrets from various KMS/secrets managers (e.g. AWS Secrets Manager) into our k8s cluster as Secrets. kubernetes-external-secrets seems to satisfy our requirements, but is it mature and stable enough, based on your experience?


https://github.com/external-secrets/kubernetes-external-secrets
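For context, usage is a small CRD per secret; with the AWS Secrets Manager backend it looks roughly like this (a hedged sketch; all names and keys are placeholders):

```yaml
# ExternalSecret manifest (sketch; names and keys are placeholders)
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: app-credentials
spec:
  backendType: secretsManager
  data:
    - key: prod/app/db-password   # name in AWS Secrets Manager
      name: DB_PASSWORD           # key in the resulting k8s Secret
```

The controller then creates and keeps in sync a regular `Secret` named `app-credentials` in the same namespace.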

https://redd.it/llxmmp
@r_devops
CICD pipeline - Gitlab and gke...and helm?

Hi - looking to start this, so I'm wondering how everyone deploys to GKE using GitLab? I don't have any experience with either.

Here's what I'm thinking: most people build and test in a GitLab pipeline; if tests pass, push the image to a repository with a tag of `test` (or similar), and then use Helm to deploy to a k8s staging environment. Once the container is in k8s staging, run functional tests... but then how do you get it into prod? If the functional tests pass, tag with `prod` and then do a Helm deploy again?
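That flow maps fairly directly onto a `.gitlab-ci.yml`; a hedged sketch, where the chart path, test script, and cluster auth are all assumptions (the prod stage is a manual promotion of the same image rather than a rebuild):

```yaml
# .gitlab-ci.yml (sketch; chart path, test script and cluster auth are assumptions)
stages: [build, test, deploy-staging, deploy-prod]

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  script:
    - ./run-unit-tests.sh

deploy-staging:
  stage: deploy-staging
  script:
    - helm upgrade --install myapp ./chart
        --namespace staging
        --set image.tag=$CI_COMMIT_SHORT_SHA

deploy-prod:
  stage: deploy-prod
  when: manual            # promote the already-tested image, no rebuild
  script:
    - helm upgrade --install myapp ./chart
        --namespace prod
        --set image.tag=$CI_COMMIT_SHORT_SHA
```

Deploying by immutable commit SHA rather than a mutable `test`/`prod` tag avoids ever deploying something different from what was tested.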

Any insights into what people are doing, have found useful, or wouldn't do or would change if doing it over again would be awesome!

cheers guys!

https://redd.it/llr2yr
@r_devops
Manifest tag update w/ GitOps workflow

I'm curious to see how others are handling the process of updating manifest files when new tags are pushed for images when following GitOps practices.


You've got your various application repositories - they are responsible for building -> testing -> pushing the image and tag up to the image repository. Then you've got your manifest repository that holds your Helm charts or Kustomize or vanilla yaml, etc., that something like ArgoCD or Flux is watching for changes to update in the cluster. What is the preferred approach/best practice for linking the tags created by the application repositories to the manifest repo?
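One common answer is that the application pipeline's final step commits the new tag into the manifest repo (tools like Argo CD Image Updater or Flux's image automation can also watch the registry and do this for you). The edit itself is just a one-line rewrite; a hedged sketch, where the values.yaml layout is an assumption:

```python
import re


def update_image_tag(values_yaml, new_tag):
    """Rewrite the `tag:` line in a Helm values.yaml string; the CI job
    would then commit and push the result to the manifest repo."""
    return re.sub(r"(?m)^(\s*tag:\s*).*$", r"\g<1>" + new_tag, values_yaml)
```

Flux/ArgoCD then notices the manifest-repo commit and rolls out the new tag, keeping git as the single source of truth.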

https://redd.it/llu7dy
@r_devops
Hackerrank for a devops role

I just finished a HackerRank "test" for a DevOps role. It was a pre-test before the interview. I've never used HackerRank; I've always viewed it as more for programming. Coming from a sysadmin background: yeah, I can code a bit, script stuff absolutely, build pipelines for sure. But this had me making pictures about "devops" architecture and answering obscure questions on message brokers and hardly-used git commands. There was a bunch of crap stuffed in there that left me scratching my head, not that I didn't know it or have a partial answer. It was just by-the-book multiple choice answers to pick from, you know the kind that makes it look like a trick question. Not to mention you don't get to use Google, and for anything you script you can't write print statements to debug; it's basically right or wrong. I love too that they don't give you any background on what the topics are, so you just go in blind.

I appreciate questions that test skills, or giving a homework assignment. But I just don't feel like HackerRank was a good option for a DevOps role.

Anyone have a similar test or experience with HackerRank for DevOps?

https://redd.it/llmzue
@r_devops
CI/CD pipeline for database changes

Hello World !

First time posting on Reddit! 🆕

Here's an article about including database changes in your continuous delivery process!

https://medium.com/tales-of-libeo/continous-integration-make-sure-database-changes-are-included-using-gitlab-ci-cd-6191e984f8d0


It's done using GitLab CI and PostgreSQL, but no matter what you use, the concept should be much the same! Looking for feedback! 🤗
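Not taken from the article, but the general shape of the pattern (run the migrations against a throwaway database in CI) looks roughly like this in GitLab CI, with file names and versions as placeholders:

```yaml
# .gitlab-ci.yml job (sketch; migration paths and versions are placeholders)
test-migrations:
  stage: test
  image: postgres:13
  services:
    - postgres:13          # throwaway database, reachable as host "postgres"
  variables:
    POSTGRES_PASSWORD: ci-only
    PGPASSWORD: ci-only
  script:
    - psql -h postgres -U postgres -f migrations/001_init.sql
    - psql -h postgres -U postgres -f migrations/002_add_index.sql
```

The database exists only for the lifetime of the job, so every pipeline run verifies the migrations from a clean slate.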

https://redd.it/llqyyg
@r_devops
What size server for two mobile and two web apps?

My company is planning to launch two mobile and two web applications, all with dynamic rather than static content. Collectively they make up an on-demand platform, and we’re wondering how big a server we should purchase in terms of memory, transfer, and SSD space.

We plan to have one server for each application and don’t expect too much traffic initially, since we’re launching in one city. Support for ~1,000 users would be ideal.

https://redd.it/lm9dnb
@r_devops
Terraform and Jenkins

Hey Guys,

I just wanted to ask you for advice. Say you have a Terraform project which is broken into multiple parts (per resource set), for example:

- Main core virtual network, NSGs, subnets, FW, etc., all configured in one configuration file

- Resource groups, all configured in a separate configuration, etc.

The same applies to other resources. Each configuration has its own independent state file. My question is: how would you go about the Jenkins pipeline configuration? Would you create a pipeline per resource, or would you somehow use one pipeline? There is a possibility to convert everything into modules and run everything from one main configuration file. Would that be a solution? So if one module changed, it would only apply the config based on that changed module. Is my thinking right here?
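A hedged sketch of the module approach described above, with one root configuration composing the pieces; note this collapses the independent state files into a single state, and the module paths and variables are placeholders:

```hcl
# main.tf (sketch; module paths and variables are placeholders)
module "network" {
  source   = "./modules/network"   # virtual network, subnets, NSGs, firewall
  location = var.location
}

module "resource_groups" {
  source   = "./modules/resource_groups"
  location = var.location
}

# A change inside ./modules/network only produces a plan diff for that
# module's resources; untouched modules show "no changes", so a single
# Jenkins pipeline running plan/apply on the root is enough.
```

The trade-off is one shared state file (and its locking) versus the blast-radius isolation of the current per-resource-set states.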

https://redd.it/lm6jlz
@r_devops