Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
A newbie question about EKS and EBS

Hi everyone.

So if I have three EKS nodes with three EBS volumes attached to them for persistent storage, and one of the nodes goes down, that node's EBS volume gets detached and stays detached while a new node comes up automatically.

Is there a way to store data for EKS nodes so that it persists after a node goes down and gets associated with the new one? I'm talking about an ECK/ELK cluster with data nodes going down.

I probably sound like a dumbass but that's the question lol. Thank you!
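For what it's worth, the usual Kubernetes answer here is to let the cluster manage the EBS volumes through PersistentVolumeClaims rather than attaching them to nodes directly: a claim provisioned by the EBS CSI driver is re-attached to whichever node the rescheduled pod lands on (within the same availability zone). A minimal sketch, with placeholder names, assuming the EBS CSI driver is installed:

```yaml
# Sketch only: names and sizes are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # bind when the pod is scheduled
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 100Gi
```

For ELK data nodes specifically, a StatefulSet with `volumeClaimTemplates` gives each data pod its own claim that survives node replacement.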

https://redd.it/pzbcbu
@r_devops
How do you guys go about database backups in terms of multi-cloud?

The title is self-explanatory, but to give more context: I'm trying to find a solution for multi-cloud database backups in order to have a more proactive approach to ransomware attacks.

In the past I used 2ndQuadrant's Barman with BarmanS3 to save the data to an S3 bucket on another cloud account, and it worked with PITR without a problem, but now I'm working with GCP's Cloud SQL instead and can't find a good alternative that achieves a similar result.

So, how do you guys go about database backups in terms of multi-cloud?

Any opinions and suggestions about other approaches to protect the data are also welcome.
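One pattern that roughly reproduces the Barman-to-S3 setup on GCP is a two-step hop: a serverless Cloud SQL export to a GCS bucket, then a cross-cloud copy into an S3 bucket on a separate account. A sketch of the CLI steps (instance and bucket names are placeholders; `gsutil` can write to `s3://` targets when AWS credentials are configured in its boto config):

```python
# Sketch: build the CLI steps for a Cloud SQL -> GCS -> S3 backup hop.
# All names are placeholders, not a tested pipeline.
def backup_commands(instance, database, gcs_bucket, s3_bucket, stamp):
    dump = f"gs://{gcs_bucket}/{instance}-{stamp}.sql.gz"
    return [
        # 1. Serverless export from Cloud SQL to GCS (--offload avoids
        #    loading the primary instance during the dump)
        ["gcloud", "sql", "export", "sql", instance, dump,
         f"--database={database}", "--offload"],
        # 2. Copy the dump out of GCP into an S3 bucket on another account
        ["gsutil", "cp", dump, f"s3://{s3_bucket}/"],
    ]
```

Running the second bucket under a different account with object versioning/locking enabled is what gives you the ransomware angle: a compromised GCP project can't reach the copies.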

Thanks.

https://redd.it/pzcqbr
@r_devops
Replacing Dummy Fields to fix MD5 Checksum Error

Remove if not allowed, but I need help with some code (stack overflow post)

I need to replace the "dummy fields" in this code, specifically data-timestamp and data-signature. data-signature is the most important one; if the timestamp is wrong I don't really care...

The signature variable is giving me a checksum error no matter what I do, and that is the biggest issue.

This is for an Acuity Scheduling custom referral tracking and ReferralCandy integration.

<div
  id="refcandy-mint"
  data-app-id="--------"
  data-fname="%first%"
  data-lname="%last%"
  data-email="%email%"
  data-amount="%price%"
  data-currency="USD"
  data-timestamp="NEED VARIABLE"
  data-external-reference-id="%id%"
  data-signature="NEED VARIABLE"
></div>

<script>
  (function(e){
    var t,n,r,i,s,o,u,a,f,l,c,h,p,d,v,z;
    z="script";
    l="refcandy-purchase-js";
    c="refcandy-mint";
    p="go.referralcandy.com/purchase/";
    t="data-app-id";
    r={
      "email":"a",
      "fname":"b",
      "lname":"c",
      "amount":"d",
      "currency":"e",
      "accepts-marketing":"f",
      "timestamp":"g",
      "referral-code":"h",
      "locale":"i",
      "external-reference-id":"k",
      "signature":"ab"
    };
    i=e.getElementsByTagName(z)[0];
    s=function(e,t){
      if(t){
        return "" + e + "=" + encodeURIComponent(t)
      } else {
        return ""
      }
    };
    d=function(e){
      return "" + p + h.getAttribute(t) + ".js?aa=75&"
    };
    if (!e.getElementById(l)) {
      h=e.getElementById(c);
      if (h) {
        o=e.createElement(z);
        o.id=l;
        a=function(){
          var e;
          e=[];
          for(n in r){
            u=r[n];
            v=h.getAttribute("data-"+n);
            e.push(s(u,v))
          }
          return e
        }();
        o.src="//"+d(h.getAttribute(t))+a.join("&");
        return i.parentNode.insertBefore(o,i)
      }
    }
  })(document);
</script>
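On the checksum itself: ReferralCandy's signature is an MD5 digest computed over the request parameters together with the API secret, and the timestamp is a Unix epoch in seconds. The exact field order must be taken from ReferralCandy's API docs; the concatenation below is a placeholder to show the shape of the helper, not their documented formula:

```python
import hashlib
import time

def referralcandy_signature(secret, email, fname, amount, timestamp):
    """Hypothetical MD5 checksum helper. The field order here is a
    placeholder; check ReferralCandy's API reference for the real
    concatenation before using this server-side."""
    raw = f"{secret},{email},{fname},{amount},{timestamp}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# Value for the data-timestamp field: Unix seconds, as a string
timestamp = str(int(time.time()))
```

Note the signature has to be computed server-side (wherever Acuity lets you template the page), since putting the API secret in client-side JavaScript would leak it.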

https://redd.it/pzh8g6
@r_devops
Where is the line between dev and devops?

I came into devops more from the ops side than from the dev side and haven't been working in the field for that long. That said, I have a solid grasp of typical DevOps technologies like automation, containerization, scalability, etc. and was recently hired as the sole devops engineer at a small-ish company that just landed some huge contracts. Each project, for each client, has a team of dedicated developers, and then me, who's on every project.

Long story short, there's no standardization between projects, and they all want me to be able to deploy their code to various environments. The code itself is a mess, and as a result, the deployments are a mess, too, with 40-ish manual steps *after* their integration and delivery. The best part, though, is that due to lack of standardization, no one can tell me how to deploy their product for the individual projects.

So where exactly is the division of responsibility here? Should I be able to look at their code base and be able to deduce how to deploy it, or should I expect that at least one dev on each project should be able to tell me how to deploy the code, what its dependencies are, etc.?

Basically, how much should the devs give me, versus how much should I be able to do on my own? Because I'm driving myself up the wall trying to figure all of this out on my own when deployment deadlines are coming up in less than two weeks. For multiple projects.

Sanity check, anyone?

https://redd.it/pzht2o
@r_devops
Multiple Domain Mapping in AWS Opensearch (ELK)

I have added a custom URL to AWS OpenSearch (ELK) and have HTTPS access (SSL certificate attached).

Now I want to add two more domains to it. But when I point those two custom domains at OpenSearch, whether at the AWS-provided OpenSearch URL or my own custom URL, I get an SSL error, even though the two custom domains/websites themselves are working fine.

All I get is an SSL error.

Any solutions?

Thank you
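This SSL error is usually a hostname-mismatch problem rather than an OpenSearch one: the certificate attached to the custom endpoint only lists the original domain, so a client connecting with either new domain rejects it. The fix is a certificate whose Subject Alternative Names cover all three domains (or a wildcard). A sketch of the check a TLS client performs:

```python
# Sketch of TLS hostname verification: the served certificate must carry
# a SAN entry matching the requested domain. Names are placeholders.
def matches_san(hostname, san):
    if san.startswith("*."):
        # A wildcard covers exactly one additional label
        return "." in hostname and hostname.split(".", 1)[-1] == san[2:]
    return hostname == san

def cert_covers(hostname, sans):
    return any(matches_san(hostname, s) for s in sans)
```

In practice that means requesting one ACM certificate with all three domains (or `*.yourdomain.com`) and attaching that to the OpenSearch custom endpoint.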

https://redd.it/pzg6ye
@r_devops
Modern Build/deploy strategy should always be artifact based.

Opinionated poll.
I'd like to posit that any modern build/deploy strategy should be artifact based.

e.g. given a branch we want to test (in any number of "environments") and get to production: we build once, create artifacts, and deploy those in a repeatable way to each environment (preferably automated), including prod.
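The build-once/promote-many semantics can be sketched in a few lines: every environment receives the exact same immutable artifact identifier (e.g. an image digest), and "deploying to prod" is just pointing prod at an identifier that already passed through test and staging. Names below are placeholders:

```python
# Sketch of artifact promotion: one build, the same immutable digest
# deployed to every environment. Registry/commit values are placeholders.
def build(commit):
    # In reality: docker build + push, returning the content digest
    return f"registry.example.com/app@sha256:{commit}"

def promote(envs, artifact):
    # Each environment gets a reference to the same artifact; nothing
    # is ever rebuilt per environment
    return {env: artifact for env in envs}

deploys = promote(["test", "staging", "prod"], build("abc123"))
```

The point of the pattern is that what reaches prod is byte-for-byte what was tested, which a rebuild-per-environment pipeline can't guarantee.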

View Poll

https://redd.it/pzkkax
@r_devops
Step by step process on how to improve a docker image using highly custom boxes?

It's easy to create a Node.js Docker container, but it's hard to improve an existing WordPress container using highly custom images. Is there a step-by-step process that lets you debug and build upon an existing box?

Another thing: sometimes you have a box based on Ubuntu. Does that mean all your other boxes need to use Ubuntu? Can you combine containers built from highly disparate images with different operating systems (like an Ubuntu image alongside an Alpine image)?

I also noticed that when I run docker compose exec php bash, I can't access the folders of the other containers. Shouldn't they be in the same virtual OS? How is this possible?
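On the last point: containers on one host share a kernel but each has its own isolated filesystem, which is why one container can't see another's folders and why mixing base images (Ubuntu next to Alpine) is fine. Data is shared explicitly through volumes. A hedged compose sketch with placeholder service names:

```yaml
# Sketch: two services on different base images sharing one named volume.
# Service names, images, and paths are placeholders.
services:
  wordpress:
    image: wordpress:latest          # Debian-based
    volumes:
      - shared-data:/var/www/html
  php:
    image: php:8-fpm-alpine          # Alpine-based; mixing bases is fine
    volumes:
      - shared-data:/var/www/html    # same files visible in both
volumes:
  shared-data:
```

With this layout, `docker compose exec php bash` sees the WordPress files because the volume is mounted into both containers, not because the containers share a filesystem.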

https://redd.it/pzk1r8
@r_devops
CloudGraph: an open-source GraphQL powered search engine for your AWS infrastructure

CloudGraph is an open-source search engine for your public cloud infrastructure, powered by DGraph and GraphQL. Within seconds, query assets, configurations, and more across accounts and providers. CloudGraph also enables you to solve a host of security, compliance, governance, and FinOps challenges in the time it takes to write a single query.

We currently support select services on AWS, with more added each day. Support for Azure and Google Cloud coming soon. We’re also looking forward to contributions from the community and have endeavored to make contributing new providers and services as simple as possible.

We would love any feedback you have!

https://redd.it/pzbdxa
@r_devops
Terror with Build YAML! Looking for recommendations!

Hello all,

I work in DevOps at what I'll call a mid-sized software company (~400 employees total, ~150-200 repos across all products). Our DevOps group manages the build processes for these repos for our ~50 devs. We recently moved to Azure DevOps from an internally hosted TFS.

We decided to move from task groups to Build YAML to handle builds. It seemed to have some benefits at first, but we are slowly discovering the hassle of needing to touch every single repo's build YAML whenever we decide on a small process change or uncover an issue.

With that background out there, I am hoping to get some input from others in potentially a similar scenario (or even from a significantly bigger or smaller company) on how you manage builds on your repos.

We have discussed moving back to task groups, or having a centralized YAML file for each kind of build (webs, APIs, etc.) in our DevOps repo. I'm looking for ideas along with their potential benefits.
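The centralized-YAML option maps directly onto Azure Pipelines template repositories: each repo keeps a tiny `azure-pipelines.yml` that extends a template from a shared repo, so process changes land in one place. A sketch with placeholder project/repo/template names:

```yaml
# Hypothetical per-repo azure-pipelines.yml: all build logic lives in a
# shared templates repo, so a process change touches one file, not ~200.
resources:
  repositories:
    - repository: templates
      type: git
      name: DevOps/pipeline-templates     # placeholder project/repo
trigger:
  - main
extends:
  template: builds/web-app.yml@templates  # placeholder template path
  parameters:
    buildConfiguration: Release
```

Using `extends` (rather than step-level `template` includes) also lets the central repo enforce required steps that individual repos can't bypass.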

Thanks!

https://redd.it/py11s4
@r_devops
Deloitte/Big 4 Training Process?

Hey guys, I recently accepted an offer with Deloitte Consulting on their GPS (Government and Public Sector) team as an AWS DevOps Specialist. I have experience working at a small consulting firm, where I was contracted out to the government for a 6-month project. The project was mainly maintenance rather than deployment/implementation, and it ended in February. Since then, my company hasn't been able to find another project for me, so I've been out of DevOps work for the past 8 months (good thing I'm salaried).

My question is: does Deloitte train you for the project you'll be working on? I'm a pretty quick learner and very adaptable, so I can pick up tools I haven't used yet within a couple of days. I have a solid foundation in the DevOps tool stack, but like I said, I haven't worked with Ansible, Jenkins, Docker, or Kubernetes in almost a year. Should I be worried? What can I expect in the first 3-6 months?

Any information is helpful. Tips/Tricks for Deloitte are also welcome. Thanks in advance

https://redd.it/py4ony
@r_devops
REST API monitoring solution

Hello, I'm in the process of building a REST API monitoring solution for a set of APIs...

What I need to extract are a few parameters/statistics:

- response time
- response code
- uptime
- # of endpoint calls

I was reading that this can be done with Prometheus and Grafana... is this true?

If this is the right tech stack, how do I start? Do I need to put an agent in my API's endpoints?
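Yes, this is the standard Prometheus/Grafana use case: rather than an external agent, you instrument the API process itself with a Prometheus client library, which exposes a `/metrics` endpoint that Prometheus scrapes (uptime is usually covered separately by a blackbox probe). Conceptually the instrumentation maintains per-endpoint counters and latency observations, which a plain-Python sketch (no client library, names are placeholders) makes concrete:

```python
from collections import defaultdict

# Plain-Python sketch of the state a Prometheus client middleware keeps
# per endpoint; a real setup would use a client library's Counter and
# Histogram types instead.
class ApiMetrics:
    def __init__(self):
        self.calls = defaultdict(int)     # (endpoint, status) -> count
        self.latency = defaultdict(list)  # endpoint -> response times (s)

    def observe(self, endpoint, status, seconds):
        self.calls[(endpoint, status)] += 1
        self.latency[endpoint].append(seconds)

m = ApiMetrics()
m.observe("/users", 200, 0.012)
m.observe("/users", 200, 0.020)
m.observe("/users", 500, 0.300)
```

From counts and latency observations like these, Grafana derives request rates, error rates, and latency percentiles via PromQL queries.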

Thank you all

https://redd.it/py170y
@r_devops
Prevent code from being pushed via arbitrary controls on github actions

Hi there,

Sorry for the (probably) dumb question. I'm a super newbie in this field and was wondering if there are tools to run automatic checks on code pushed to GitHub. For example, if I wanted to add the criterion "no Go functions that start with foo", and someone pushed code containing "func fooSomething" from any branch, it would be automatically rejected.
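This is exactly what a required status check does: a GitHub Actions workflow runs a small script on every push/PR, and branch protection refuses the merge if the check fails. The script itself can be trivial; a sketch for the "no Go functions starting with foo" rule (pattern and paths are illustrative):

```python
import pathlib
import re
import sys

# Sketch of a CI check script: fail the workflow if any Go file defines
# a function whose name starts with "foo". Pattern is illustrative.
FORBIDDEN = re.compile(r"^\s*func\s+foo\w*\s*\(", re.MULTILINE)

def offending_files(root="."):
    return [str(p) for p in pathlib.Path(root).rglob("*.go")
            if FORBIDDEN.search(p.read_text(errors="ignore"))]

if __name__ == "__main__":
    bad = offending_files()
    if bad:
        print("forbidden functions found in:", *bad)
        sys.exit(1)  # non-zero exit marks the Actions job as failed
```

Note that Actions runs after the push lands, so the code reaches the branch; rejecting merges into protected branches is done via branch protection rules, while rejecting the push itself would need a server-side hook (not available on github.com) or a pre-receive check on GitHub Enterprise.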

Could you please help me? Thanks in advance!

https://redd.it/pzuub1
@r_devops
Next steps in devops career

Hi all,

I am currently a devops engineer with a strong focus on cloud and automation. I am earning very well for UK standards and it's unlikely that I would be paid more where I live any time soon even if I were to leave my current company.

However, I am still quite early in my career and I would like to plan for what I need to do during the next few years to be able to reach for these higher salaries. I think I might have to make some changes to my career but it is not clear to me what direction I should take and what these changes would be. Before anyone suggests, I am not interested in going into management. What is the career progression path for devops? What is at the end of this path? How can I make myself more valuable on the job market? I already hold some certifications in the tools we use.

Any related suggestions would be welcome.

Thanks

https://redd.it/pzujdo
@r_devops
Need help introducing Terraform in our K8S deployment workflow

Hello redditers,

I am a backend Ruby developer at a small company. Last year, we moved away from Heroku to a GKE-hosted Kubernetes cluster, so I have been transitioning into being the devops guy week after week, discovering tools and best practices as I go.

So far, we have been using a messy bash script called from the CI to automate things like helm and kubectl deployment. Our main app contains a k8s directory with all the yaml files, etc.

Lately, I started looking at Terraform to manage and track the cluster's state. I had done all the configuration by hand, either clicking around in the Google Cloud console or using the cli. I have followed a quick Terraform udemy course and I am now in the process of importing our environments in Terraform states. I have set up an infrastructure repo (we are on gitlab) with a CI that validates, plans and applies the configuration. It is now live and working.

Now I need to know where the limit is. I have seen that Terraform can manage Helm and Kubernetes and I have started importing cluster-level services like prometheus, grafana, and our traefik ingress using the helm and kubernetes providers.

I think this is ok, since we are managing services that are tied to the clusters: if we need an environment, we definitely need monitoring, etc. set up.

But what about the apps? I have seen blog posts advising to stick to kubectl commands, but maybe this was before the kubernetes provider hit 2.0, and before the beta feature that allows for CRD deployment... I see two options:

- Keep a k8s directory in the apps. The app's CI will then be in charge of packaging the app and the deploy stage will be a series of kubectl and helm commands.
- Externalize the deployment by manually editing the infrastructure project whenever there is a change in pod count, new services, etc. - but that seems complex because I need to keep track and sync the two projects. The app's CI would just run kubectl rollouts to update the images.

What is the best practice here? Is there a third way?
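For the cluster-level services already mentioned, the helm provider keeps them in the same plan/apply cycle as the cluster itself. A hedged sketch of what one such resource looks like in the infrastructure repo (chart version and values are placeholders):

```hcl
# Hypothetical helm_release for a cluster-level add-on managed from the
# infrastructure repo; repository, version, and values are placeholders.
resource "helm_release" "traefik" {
  name       = "traefik"
  repository = "https://helm.traefik.io/traefik"
  chart      = "traefik"
  namespace  = "ingress"
  version    = "10.3.2"   # pin so plans stay reproducible

  set {
    name  = "deployment.replicas"
    value = "2"
  }
}
```

The common dividing line is roughly what the post already suggests: infrastructure and shared add-ons change at Terraform's pace, while app deployments change on every merge, which is why many teams keep app manifests in the app repo (option one) or adopt a GitOps tool rather than funneling every image bump through Terraform state.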

(cross-posting this to devops and terraform subs)

https://redd.it/pzoxia
@r_devops
Reviews on Techworld with Nana devops bootcamp

Can anyone please shed some light on the Techworld with Nana bootcamp and whether it has helped anyone transition to a devops role?

https://redd.it/q00vuu
@r_devops
Resume Review for a DevOps/Cloud related internship

Hey all,

Expected to graduate Dec 2022, looking for an internship or possibly a jr level position related to DevOps/Cloud. Would just like to upload my resume here and see if anyone has any tips that could help. Thanks

https://redd.it/q04nae
@r_devops
Logging aggregation using a specific docker app

Hello, is there any Docker app that automatically runs on the Docker host, captures all logs of all containers running on that host, and displays them via the web? I have a suite of 4 containers running inside a docker-compose environment, and I'd like an additional docker-compose service that can grab all the logs and display them in a simple web interface.
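Tools in this niche (Dozzle is one example) work by mounting the Docker socket read-only, which lets a single container tail every other container's logs on the host. A hedged sketch of the extra compose service (port mapping is a placeholder):

```yaml
# Sketch: add a log-viewer service to the existing docker-compose file.
# It reads the Docker socket and serves a simple web UI for all
# containers on the host; the host port is a placeholder.
services:
  logs:
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "8888:8080"   # UI on http://localhost:8888
```

Note that mounting the Docker socket grants that container broad control over the host's Docker daemon, so this is fine for a dev suite but worth thinking about before exposing it anywhere public.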

https://redd.it/q05jt6
@r_devops
Best way to create a log for api

Hello,

I have to admit that I'm not that skilled in coding, hence I typically outsource my IT development to third parties...

I have a company that does backend stuff and another that does front-end...

Clearly in between there are APIs :)

Backend blames front-end for performance and vice versa...

Now I want to check for myself and create a little "ping" application, recording the results in a log, such as:

- time of ping
- response time
- response code (200, 401 etc...)
- is body response empty or not...

Where do I start? Please

Note: at the moment I'm using VS code "REST API" extension for individual tests...
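A stdlib-only Python script covers the four fields above; the sketch below (URL and log path are placeholders) times one request and appends a CSV row with timestamp, latency, status code, and an empty-body flag:

```python
import csv
import datetime
import time
import urllib.error
import urllib.request

# Sketch of a minimal API probe: URL and CSV path are placeholders.
def probe(url, timeout=10):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body, status = resp.read(), resp.status
    except urllib.error.HTTPError as err:   # 4xx/5xx still carry a code
        body, status = err.read(), err.code
    elapsed = time.monotonic() - start
    return make_row(datetime.datetime.utcnow(), elapsed, status, body)

def make_row(when, elapsed, status, body):
    # time of ping, response time (s), response code, empty body?
    return [when.isoformat(), round(elapsed, 3), status, len(body) == 0]

def log_row(row, path="api_ping.csv"):
    with open(path, "a", newline="") as fh:
        csv.writer(fh).writerow(row)
```

Run on a schedule (cron, or a loop with `time.sleep`), this produces a neutral log you can show both vendors; the latency it records includes network time, so wild backend-vs-frontend disputes usually need the backend's own server-side timings alongside it.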

Thank you :)

https://redd.it/q0dxro
@r_devops
🔥 When Jenkins deployment goes wrong in DevOps

https://youtu.be/MT5zPMGcB1o

Jenkins is an open source continuous integration/continuous delivery and deployment (CI/CD) automation software DevOps tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines

See what happens when Jenkins deployment goes wrong in DevOps...

https://redd.it/q0jbji
@r_devops
Where do you host your CI/CD tools for iOS development (Xcode)? I want to move my locally hosted Jenkins

Where do you guys host your CI/CD tools for iOS development? I want to move my locally hosted Jenkins to the cloud, but I saw that Mac VPSes are too expensive for my personal (hobby) projects. Do you have any suggestions?

https://redd.it/q0jzue
@r_devops