Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Coupon code plugin for front/backend

Could you suggest some 3rd party plugin that helps integrate coupon codes?


For example, when you register on https://www.printful.com/ you get a coupon code that gives you a 30% discount, and the coupon expires in 24 hours.


We want to use it on a custom-built website. We don't use WordPress or anything like that.

https://redd.it/sd5fn8
@r_devops
Running Terraform Cloud from Ansible Playbook

Hi Guys

I see a lot of information and documentation on integrating Terraform and Ansible, and it's usually something like "run Terraform (from the local CLI or via Terraform Cloud) and then automatically run some Ansible playbooks on the newly created machines via Terraform".

As we are heading towards Ansible Tower (RHAAP), it seems like it would be a lot more convenient to do this the other way around: controlling Terraform Cloud from Ansible playbooks. I'm thinking about using the uri module to start API-driven runs on Terraform Cloud from Ansible.

Does anybody have experience with that, or know of any resources / example solutions?
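For reference, an API-driven run boils down to one authenticated POST against the Terraform Cloud runs endpoint; Ansible's uri module (url, method, headers, body_format: json, body) maps onto the same request. A minimal sketch, assuming a workspace ID and API token you'd supply yourself (the values below are placeholders):

```python
import json
import urllib.request

TFC_TOKEN = "REPLACE_ME"      # placeholder: a Terraform Cloud API token
WORKSPACE_ID = "ws-XXXXXXXX"  # placeholder: found in the workspace settings

def build_run_request(workspace_id: str, message: str) -> urllib.request.Request:
    """Build the POST that queues a run in a Terraform Cloud workspace."""
    payload = {
        "data": {
            "type": "runs",
            "attributes": {"message": message},
            "relationships": {
                "workspace": {"data": {"type": "workspaces", "id": workspace_id}}
            },
        }
    }
    return urllib.request.Request(
        "https://app.terraform.io/api/v2/runs",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TFC_TOKEN}",
            "Content-Type": "application/vnd.api+json",
        },
        method="POST",
    )

# To actually queue a run (needs a valid token and workspace ID):
#   with urllib.request.urlopen(build_run_request(WORKSPACE_ID, "from ansible")) as resp:
#       print(json.load(resp)["data"]["id"])
```

In a playbook you would put the same payload in the uri module's `body` and then poll the returned run's status URL until it is applied.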

https://redd.it/sd72qc
@r_devops
API Traffic Viewer for Kubernetes



Hey all! I'm part of the team that developed [Mizu](https://github.com/up9inc/mizu?utm_source=reddit&utm_medium=devops), an open-source [API traffic viewer for Kubernetes](https://github.com/up9inc/mizu): tcpdump and Wireshark re-invented for Kubernetes.

Features

* Simple and powerful CLI
* Monitoring network traffic in real time. Supported protocols:
  * HTTP/1.1 (REST, etc.)
  * HTTP/2 (gRPC)
  * AMQP (RabbitMQ, Apache Qpid, etc.)
  * Apache Kafka
  * Redis
* Works with the Kubernetes APIs. No installation or code instrumentation required
* Rich filtering

How to Run

1. Find the pods you'd like to tap in your Kubernetes cluster
2. Run `mizu tap` or `mizu tap PODNAME`
3. Open a browser at https://localhost:8899, **or** as instructed in the CLI
4. Watch the API traffic flowing
5. Type ^C to stop

[Download Mizu for your platform and operating system](https://github.com/up9inc/mizu#features)

https://redd.it/sd7czr
@r_devops
Unit-tests and production

Hey guys,
I'm really new to this and I'm still trying to learn. There's something I don't get about CI/CD.

The way I see it, unit tests are written by developers, since they operate at the micro level.

So let’s say a developer is working on a feature on a separate branch and then pushes said feature and initiates a pipeline.

Assuming all unit tests and whatnot pass, the next step would be to rebase the feature onto master and initiate another pipeline that deploys to production, right? I'm assuming this is also the developers' responsibility. When doing that, do they add the test files to .gitignore or delete them? Because the tests have no reason to be in production.


Also, I'd like to know what your role is in CI/CD. My friends who work in DevOps don't really write tests at all. They more or less construct pipelines based on the tests the developers give them and on pre-existing templates. Is that the common approach?

Thanks.

https://redd.it/sd8wlg
@r_devops
PowerShell Master Class lesson one passes 300,000 views. THANK YOU!

Another nice milestone 🎉. Lesson one of the PowerShell Master Class hit 300,000 views! I keep the class updated with new lessons on version 7, debugging, secrets, and more.

https://youtube.com/playlist?list=PLlVtbbG169nFq_hR7FcMYg32xsSAObuq8

https://github.com/johnthebrit/PowerShellMC

PowerShell is cross-platform and such a useful tool to have in your belt.

#powershell #azure

https://redd.it/sd6j9r
@r_devops
Should I (we) skip "low level" server stuff and jump right into Kubernetes

Hi,

this thread is probably very opinion-heavy, therefore I think reddit is the best place to ask ;)

I work as a software engineer / devops dude at an IT startup with approx. 20 employees. We develop software that is already hosted as a SaaS platform in "the cloud" by manually deploying it via Docker on a handful of servers. Each server contains the "full stack" of our software, including infrastructure services like an S3 storage or a Redis instance. There is "a bit" of tech debt that prevents us from horizontally scaling one of our components. But still, this approach has worked fine so far, since our software supports multi-tenancy and we "load balance" tenants across servers.

I am in the lucky position to heavily influence our tech roadmap and the way we do things moving forward. As our customer base is growing, I see limitations with our cloud deployment and want to distribute the mentioned components across several servers. I also want to be flexible with internal routing to avoid having to set up an internal DNS / service discovery system. Moving forward we will develop new applications that I want to integrate fast and often. I believe that this is the key to frequently delivering valuable increments for our customers.

At this point I am honestly struggling to decide between two general options with regard to hosting our cloud software:

1. Set up an internal network. DHCP + DNS will be used to have application servers communicate with infrastructure services on other servers. Monitoring for these servers will be set up on the same mechanisms.
2. Skip the "low level" stuff and jump right into Kubernetes.

As far as I can tell, hosting Kubernetes itself is not a simple task. And adopting Kubernetes seems harder than adopting Docker containers (if you haven't used them before).

But still, in any case I (and colleagues) will have to learn certain skills. I am unsure whether it would be wiser to jump straight to Kubernetes if, in the long run, it would be the orchestration framework we use anyway.

I hope I could kind of explain the struggle that I am facing; it has been a long workday ;)

Looking forward to your opinions.

https://redd.it/sdcota
@r_devops
Help with ci/cd supervisord deployment

Hi everyone, I'm new to DevOps (I'm a junior). My first task is to automate the deployment of somewhat old applications that run through supervisord.

To deploy a new application tag I perform the following:

# clone the dev repo to the local machine
git clone URLGITLAB
# add the prod repo as a second remote
git remote add prod URLGITLAB
# push code and tags to prod
git push prod
git push prod --tags
# on the instances
cd /opt/app
source env/bin/activate
git fetch
git fetch --tags
# check out the release tag
git reset --hard vx.x

We have two servers installed with gitlab, one for the dev environment and one for the production environment.

What do you recommend to automate this process? (Jenkins, Ansible, etc)
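Whichever tool runs it (a Jenkins job, an Ansible playbook, a GitLab CI job), a useful first step is wrapping the per-instance steps above in one script that takes the tag as input. A hedged sketch; the `/opt/app` path comes from the post, everything else is an assumption:

```python
import subprocess

APP_DIR = "/opt/app"  # app checkout location, taken from the post

def deploy_commands(tag: str) -> list:
    """The git commands that move the checkout in APP_DIR to a release tag."""
    return [
        ["git", "-C", APP_DIR, "fetch"],
        ["git", "-C", APP_DIR, "fetch", "--tags"],
        ["git", "-C", APP_DIR, "reset", "--hard", tag],
    ]

def deploy(tag: str) -> None:
    for cmd in deploy_commands(tag):
        subprocess.run(cmd, check=True)  # abort the deploy on the first failure
```

After the checkout you would presumably also restart the app under supervisord (e.g. `supervisorctl restart <program>`); since you already run GitLab, a GitLab CI job triggered by the tag push is a natural place to call this.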

Regards,

https://redd.it/sde04w
@r_devops
Some ways DNS can break

>When I first learned about it, DNS didn’t seem like it should be THAT complicated. Like, there are DNS records, they’re stored on a server, what’s the big deal?
>
>But with DNS, reading about how it works in a textbook doesn’t prepare you for the sheer volume of different ways DNS can break your system in practice. It’s not just caching problems!
>
>So I asked people on Twitter for examples of DNS problems they've run into, especially DNS problems that didn't initially appear to be DNS problems. (The popular "it's always DNS" meme.)
>
>I’m not going to discuss how to solve or avoid any of these problems in this post, but I’ve linked to webpages discussing the problem where I could find them.

https://jvns.ca/blog/2022/01/15/some-ways-dns-can-break/

https://redd.it/sddzh1
@r_devops
Nomad Routing Question

I've been reading up on Nomad, trying to gauge how it works, and I have a question about the network.

Most documents recommend using a service mesh like Consul, but my question is: does Nomad route the traffic through Consul itself, or is Consul just for service discovery?


I.e., is it: User Request -> Load Balancer -> Consul (on lb) -> Consul (on host) -> Web Service Container?


Or is it: User Request -> Load Balancer -> Web Service Container, where the Consul plugin for, say, HAProxy just tells it which hosts it should route to?

https://redd.it/sdbpxk
@r_devops
Anyone experienced with squid proxy?

I have deployed Squid 3.5 to act as a firewall and a transparent proxy (the end user doesn't know there's a proxy). It works fine for the most part, but some devs have complained that they get errors like "server returned error in unknown format" (they have been using the Salesforce Bulk API through Java clients).
I checked the logs and there is nothing on the Squid side. It happens only very rarely, and the same job goes through upon rerun. I'm scratching my head, as I am not able to reproduce this problem. Does anyone have any suggestions?
Thanks

https://redd.it/sdhkxo
@r_devops
GH Actions + AWS - Is Terraform even needed?

Hey there,

So long story short, there is no DevOps guy on my team yet, so I'm helping with researching and basic setup in the meanwhile.

We are moving to a container based solution for one of our applications, and I was suggested to look into Terraform.

While I do understand what Terraform is for (describing infrastructure in a declarative syntax and letting TF work its magic to achieve that state by manipulating a remote environment, in this case AWS), I'm having a bit of trouble understanding why I should use it when there is a GitHub Action to deploy a container on AWS directly.

Could anybody provide a bit of insight into the pros/cons of using or not using Terraform in this scenario? TIA

https://redd.it/sdi8ir
@r_devops
Kubernetes Cluster deployment using tekton

We would like a way to easily deploy new Kubernetes clusters whenever we want, all with the same configuration, so we can easily deploy our apps, and we're thinking of doing that with Tekton. You would run the Tekton pipeline locally in minikube, and it would take the steps to install all the dependencies onto the (possibly bare-metal) server, I suppose through an SSH connection.

Is this a good idea? Is there a standard way to do something like this? Thanks

BTW, I'm an inexperienced junior developer, so if this is not the right approach please let me know.

https://redd.it/sdi2yx
@r_devops
Advice needed: creating CICD pipeline per code or manually

Hi guys, hope you can help me out or have some good ideas. Currently we have, let's say, a pipeline generator: we can create new AWS CodePipelines just by writing a small config file with the naming, the source repo, and so on. It works pretty well, is written in TypeScript, and uses the CDK. We have four pipeline „types“, which differ in their stages and trigger type (S3, ECR, CodeCommit…). Now more and more developers are asking for custom pipelines, because they want different stages, approval steps, and so on, instead of the pipelines we can currently create via the generator.

PS: the CDK code of the generator was not written by us; it came from an external company.

Would you extend the generator to add more pipeline „types“, even if only one project needs a given custom pipeline? Most of the requests are completely different use cases, and so are the pipelines.

Creating a pipeline with the generator and then modifying the CodePipeline manually to meet the requirements was also a suggestion from the manager. It would work, but the pipeline would be created with the CDK (so, in the end, CloudFormation), and messing around manually there is never a good idea.

Or would you manually create those custom pipelines? But then it would not be IaC, which management forces us to use :/

What you think?

https://redd.it/sdkzwn
@r_devops
Cheap CDN option for serving 50TB of video traffic in South America?

Hello,

I'm working on a project for a non-profit that does education via an online video course. They have a project that will require them to get a lot of people through the course, which will end up being about 50TB of video downloads when all is said and done (+/- 20%).

I've been looking at CDN options, and so far the cheapest I can find is DigitalOcean Spaces (an S3 clone) with a built-in CDN, which is $0.01 per GB of bandwidth transferred, so about $500 for the 50TB (not bad!).

The downside with DigitalOcean is that the CDN PoP locations aren't close to where the end users will be (in South America), and I worry about latency and playback start times for the videos...

Cloudflare has closer PoP locations, but their sales people are quoting me a $5k/month minimum with a 1-year contract, which would be a starting amount of $50k and not something the non-profit can afford right now.


Are there any other CDN solutions for serving the 50TB of video (and, in general, for hosting video fairly cheaply) with good PoP locations in South America that I might be overlooking?

https://redd.it/sd9idn
@r_devops
Can't connect to MariaDB from a container

So I deployed an app from a container, based on Alpine. It's supposed to connect to a bare-metal MariaDB on a different host, but it just won't connect.

* Connect to MariaDB from the Docker host (i.e., outside the container) -- works
* ping to MariaDB from inside the container -- works
* `curl https://ifconfig.me` from inside the container -- works
* Connect to MariaDB from inside the container -- timeout

I don't know what else to do at the moment.

Additional info:

* It's part of a 2-node swarm
* Host OS is Ubuntu 20.04
* I'm managing the swarm using Swarmpit

I'd really appreciate any help in troubleshooting this issue.
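One note on the checklist above: ping succeeding while the TCP connection times out (rather than being refused) usually means the SYN packets are being silently dropped, commonly a firewall/iptables rule on the DB host that doesn't cover the swarm overlay subnet, or MariaDB's `bind-address` restricting the listener. A small probe run inside the container can confirm whether it's the TCP path rather than the client library (the hostname below is a placeholder):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, unreachable, DNS failure
        return False

# Run from inside the container (e.g. via `docker exec`) against the DB host;
# "db.example.internal" is a hypothetical hostname for your MariaDB server:
# print(tcp_reachable("db.example.internal", 3306))
```

If this returns False inside the container but True on the host, compare the source addresses the two cases use and check the DB host's firewall rules against the container subnet.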

https://redd.it/sdouyp
@r_devops
Switching to Sr Cloud Ops Engineer from SRE

I start my new job as a Sr. Cloud Ops Engineer next month. Right now I am an SRE with 5 years of experience in AWS, IaC, serverless, Jenkins, etc. To my understanding, the new job will involve working with app teams on diagnosing their cloud environments and CI/CD pipelines. I'm feeling underprepared and am quite frankly nervous, as this is a big jump in my career. Does anyone have any tips for somebody transitioning to a senior operations role?

https://redd.it/sdgrmd
@r_devops
Everything You Need to Know About YAML

Please check out my post in Better Programming: Everything You Need to Know About YAML.
* YAML stands for "YAML Ain't Markup Language".
* YAML is similar to JSON or XML.
* YAML is used to write configuration files.
* YAML is used by Docker, Kubernetes, AWS CloudFormation, Jenkins, Ansible, and several other tools.

https://betterprogramming.pub/everything-you-need-to-know-about-yaml-fdbb7acf6db6

https://redd.it/sdrkxf
@r_devops
Contract Negotiation for On-Call Compensation

I'm nearing the end of the interview process for a DevOps Engineer position and they've indicated that there are on-call responsibilities for the role. I am trying to gauge what fair compensation is and what to take into consideration. So far I have:

* Stand-by compensation
* Per-incident compensation
* Company cell phone
* Service level agreement (how quickly you must call back)

Curious about what other people have in their current roles, not sure what fair market rate is for this.

In the past, I worked in a position where I had a company cell phone, got paid for stand-by (an extra $1/hr on weekdays, $2/hr on weekends), and got paid normal OT for time spent on an incident (rounded up to the nearest hour). Most of the time I would end up with enough OT to get time and a half on the extra hours.
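To put rough numbers on that arrangement, here is a worked example. The rates are the ones from that old role; the 16 off-duty hours per weekday, 24-hour weekend coverage, and the $50/hr base wage are assumptions for illustration only:

```python
import math

WEEKDAY_STANDBY = 1.0  # $/hr stand-by rate, assumed 16 off-duty hours x 5 days
WEEKEND_STANDBY = 2.0  # $/hr stand-by rate, assumed 24 hours x 2 days
OT_MULTIPLIER = 1.5    # time and a half on incident hours

def weekly_oncall_pay(base_hourly: float, incident_hours: list) -> float:
    """Stand-by pay for one week plus OT for incidents, each rounded up to a full hour."""
    standby = 5 * 16 * WEEKDAY_STANDBY + 2 * 24 * WEEKEND_STANDBY
    overtime = sum(math.ceil(h) for h in incident_hours) * base_hourly * OT_MULTIPLIER
    return standby + overtime

# A $50/hr engineer with a 20-minute and a 1.5-hour incident in one week:
# stand-by 80 + 96 = 176, plus (1 + 2) * 50 * 1.5 = 225 OT, so 401 total.
```

Numbers like these give you a concrete floor to negotiate from, rather than a vague "on-call is included in the salary".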

Bonus points: also curious about severance for these roles, especially when getting into the $150k+ salary range!

https://redd.it/sdcxno
@r_devops
IAM policy to restrict users to destroying only instances that they own

Hi guys, I used [CloudCustodian to set up a Lambda function](https://cloudcustodian.io/docs/aws/examples/ec2-auto-tag-user.html) that adds a tag (CreatorName) to any newly created instance.

This part works quite well.

I'm now attempting to create an IAM policy that allows a user to delete an EC2 instance only if the user's name matches the instance's CreatorName tag.

This is the policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/CreatorName": "${aws:username}"
        }
      },
      "Resource": "arn:aws:ec2:<redacted>:<redacted>:instance/*"
    }
  ]
}

This simply does not work.

I have a hunch as to why: users log in to AWS via SAML, so they have "SAML federated user" status.

This leads me to believe that the variable ${aws:username} in the above policy doesn't actually correspond to my login name.

So, for example, the action is actually carried out under the assumed role 'Admin', where my user (TEseSKal) is just the principal, right?

Here's the CloudTrail audit entry:

{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "<redacted>:<redacted>",
    "arn": "arn:aws:sts::<redacted>:assumed-role/Admin/<redacted>",
    "accountId": "<redacted>",
    "accessKeyId": "<redacted>",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "<redacted>",
        "arn": "arn:aws:iam::<redacted>:role/Admin",
        "accountId": "<redacted>",
        "userName": "Admin"
      },
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "2022-01-26T22:01:24Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2022-01-26T22:01:51Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "awsRegion": "<redacted>",
  "sourceIPAddress": "<redacted>",
  "userAgent": "console.ec2.amazonaws.com",
  "requestParameters": {
    "instancesSet": {
      "items": [
        {
          "instanceId": "<redacted>"
        }
      ]
    }
  },
  "readOnly": false,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "<redacted>",
  "eventCategory": "Management",
  "sessionCredentialFromConsole": "true"
}

So, am I correct in this assumption?

If so, is there a way to make the policy take the principal into account, and not the user?

I Googled it but couldn't make any meaningful progress.
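Your hunch matches the CloudTrail entry: for an assumed-role session (which SAML federation uses), ${aws:username} is not populated. One common approach is to have the IdP pass the login name as a session tag and match on ${aws:PrincipalTag/...} instead. A hedged sketch of the rewritten policy, as a Python dict for easy validation; the assumption is that your IdP sends the login via the SAML attribute `https://aws.amazon.com/SAML/Attributes/PrincipalTag:CreatorName` and that the role's trust policy allows `sts:TagSession`:

```python
import json

# Policy sketch: compare the instance's CreatorName resource tag against the
# CreatorName *session* tag, which (unlike aws:username) is populated for
# federated/assumed-role sessions when the IdP passes it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/CreatorName": "${aws:PrincipalTag/CreatorName}"
                }
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note also that your test was done under the Admin role; a broad allow elsewhere in that role's policies would let the termination through regardless of this condition, so test with a user that has only this policy attached.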

https://redd.it/sdi2pt
@r_devops
What don't you like about Heroku and PaaS?

I plan to build a new PaaS, an alternative to Heroku, and cheaper.

Can you tell me what you don't like about Heroku and other PaaS offerings?
Which features do you like? And which ones would you want to see on a cloud platform?
And finally, what is your use case for Heroku? What do you build on it?

Thanks.

https://redd.it/sdtx0c
@r_devops