Reddit DevOps
270 subscribers
2 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
How much of an upgrade would this be?

My current machine has 32 GB of memory and here are the specs:

SMBIOS 2.7 present.

Handle 0x1100, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: 0x0000
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 16384 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM 0
        Bank Locator: Not Specified
        Type: RAM
        Type Detail: None
        Speed: Unknown
        Manufacturer: Not Specified
        Serial Number: Not Specified
        Asset Tag: Not Specified
        Part Number: Not Specified
        Rank: Unknown
        Configured Clock Speed: Unknown

Handle 0x1101, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: 0x0000
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 16384 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM 1
        Bank Locator: Not Specified
        Type: RAM
        Type Detail: None
        Speed: Unknown
        Manufacturer: Not Specified
        Serial Number: Not Specified
        Asset Tag: Not Specified
        Part Number: Not Specified
        Rank: Unknown
        Configured Clock Speed: Unknown

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2300.084
BogoMIPS:              4600.16
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-7

I want to switch to this one, so I was wondering how much of a speed boost I would get. I am worried, because this one only has 16 GB, so I have no idea how it would perform.

c5.2xlarge: 8 vCPUs, 16 GiB memory, EBS-only storage, network bandwidth up to 10 Gbps, EBS bandwidth up to 4,750 Mbps

https://aws.amazon.com/ec2/instance-types/c5/c5.2xlarge

https://redd.it/sr89vo
@r_devops
"Daddy, what do you do at work?"

"Well sweetie I just stand here on this balance board and type stuff, occasionally breaking for more coffee."

"Why were you yelling at that man on the phone?"

"Oh, well you see princess, he's on what we call 'the CloudFlare sales team' and he won't stop calling daddy 10 times a week."

Seriously though, my kids are genuinely curious about what I do. Every time they ask, I try to answer and realize what I just said made no sense to them.

FWIW, they're 8 and 11. My son (11) is even starting to occasionally ask "so what are you working on right now?" It sucks because I'd really love to answer him but even when I try to dumb it down a few levels, nothing I say makes sense to him. So I usually just mutter something like, "ugh... trying to fix this shit." Lol

How do you all explain your job/tasks to your kids?

https://redd.it/sra77u
@r_devops
How do you use Go or Python in your work?

What are the tasks that Go or Python help you solve in your work as a devops eng?

https://redd.it/srfk74
@r_devops
I have this idea. Thoughts?

The Kubernetes ecosystem is pretty saturated: for everything you can think of, there is already a tool ready for you. That's not the case for the Linux automation / configuration management ecosystem. There are Chef, Puppet, and Ansible, but they feel like they're not enough, or at least not on par with the k8s ecosystem. I wish there were a tool like ArgoCD, but for configuration management / state management of Linux itself. K8s is cool and all, but it runs on Linux. Linux servers have to be provisioned, automated, and maintained long term, and that is no easy task.

Currently I'm working with two major tools, Puppet and Ansible, and they're both useful on their own terms. IMHO Ansible's agentless mode is both an advantage and a disadvantage. In the k8s ecosystem I just use ArgoCD, connect my git repository, and forget about it: unless ArgoCD reports an error, I don't care, because I know the state is applied automatically and running healthy. I cannot do the same for Linux server provisioning. Ansible doesn't have an agent; it's a one-shot operation. Puppet has an agent, but it's not realtime; AFAIK it runs on an interval, 30 minutes by default, right?

So what I really want is something like "ArgoCD, but for Linux automation". Imagine you define your Linux server's state in your git repository and the tool handles the rest in realtime, ensuring your server's state matches what you have defined. Does this make sense? I don't think this kind of workflow exists currently, unless I'm missing something. What's your opinion on this approach?

If such a tool existed, would you use it? Is it already possible with existing tools? If yes, please let me know. If not, I'm willing to create an open source tool for this exact use case. Please let me know your opinion.
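For what it's worth, the core of such a tool is a reconciliation loop: read the desired state from the git checkout, probe the actual state of the host, and compute the actions needed to converge. A minimal stdlib-only Python sketch (all names are hypothetical, and state is reduced to packages and services purely for illustration):

```python
import time

def converge(desired: dict, actual: dict) -> list:
    """Compute the actions needed to make `actual` match `desired`.

    Both dicts map "packages" and "services" to lists of names;
    a real agent would probe the host and shell out to apply actions.
    """
    actions = []
    for pkg in desired.get("packages", []):
        if pkg not in actual.get("packages", []):
            actions.append(("install", pkg))
    for svc in desired.get("services", []):
        if svc not in actual.get("services", []):
            actions.append(("start", svc))
    return actions

def reconcile_forever(read_desired, read_actual, apply, interval=5):
    """Agent loop: poll git + host and converge every cycle."""
    while True:
        for action in converge(read_desired(), read_actual()):
            apply(action)
        time.sleep(interval)
```

The interesting (hard) parts an actual tool would add are the ones ArgoCD has for k8s: status reporting back to a UI, diff previews, and pruning of resources removed from git.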

https://redd.it/sqnd12
@r_devops
Need help with side project deployment

I have several side projects and I am in the process of deploying one right now. I am trying to use the GCP free tier; I don't want to use Heroku. Any suggestions on how I can set up alerting, automation, logging, and the other required pieces? Or any guide I can follow? I will be deploying my other projects soon, and this one is taking too much time.

https://redd.it/sqq6pf
@r_devops
Sysadmin VS Devops?

As a highschooler looking into various technologies regarding deployment and management of servers, be it in the cloud, a virtualized environment or even bare metal, I want to specialize (in a very general sense) in some branch of IT. Looking into the most popular ones I came to these observations:

- Networking = something I'll inevitably learn (at least the basics) as I'm learning for other areas, so I won't focus on it too much yet


- Security = not really my cup of tea as far as I can tell

- Storage/DB administration = kind of like networking

- System administration = definitely something I'm interested in

- DevOps = same here


I'm interested in your thoughts on the main differences between these 2, as well as their benefits and drawbacks. From what I know, DevOps is mostly present in newer companies/companies that want to advance, making it quite appealing in that regard. Then again, "classic" sys administration is still extremely popular, which is why I'm on the fence about this choice.

Thanks for your help.

(and before you say it, yes I will be posting this to other subreddits)

https://redd.it/sriaif
@r_devops
PeopleCert DevOps Fundamentals Exam

Hi,

I have bought a voucher for the PeopleCert DevOps Fundamentals exam, but I have no material to study from. AXELOS has not published any official book for this exam either. Does anyone know where I can find material to prepare for it?

https://redd.it/srjijr
@r_devops
Am I a good SysAdmin or Devops

Hi all,
My boss keeps telling me that I am a SysAdmin, despite the fact that my skills include:
Ansible
Terraform
Azure on-prem
Docker
PowerShell
Packer
CI/CD

My question is:
What more do I need to enter the crazy world of DevOps?
Right now I am trying to learn Python.

https://redd.it/sqpy1o
@r_devops
This week in the Console newsletter we interviewed Ilya of NGS! NGS is a "next generation shell" built from the ground up for modern DevOps.

I thought /r/devops might be interested in reading the interview since Ilya's shell was designed for devops :)

https://console.substack.com/p/console-92

https://redd.it/srm4uk
@r_devops
Hikaru 0.11.0b released

Hikaru is a tool that provides you the ability to easily shift between YAML, Python objects/source, and JSON representations of your Kubernetes config files. It provides assistance in authoring these files in Python, opens up options in how you can assemble and customize the files, and provides some programmatic tools for inspecting large, complex files to enable automation of policy and security compliance.

Additionally, Hikaru allows you to use its K8s model objects to interact with Kubernetes, directing it to create, modify, and delete resources.

This is the most recent version of Hikaru that is a catch-up for the releases of the Python K8s client that have come out while Hikaru's build system was reimplemented. This latest version of Hikaru adds support for K8s 1.21 APIs and models, and includes support for the black code formatter's first full release.

This release also drops support for the 1.17 release of the K8s Python client, and support for the 1.18 release is deprecated.

Detailed notes on changes are in the release notes.

https://github.com/haxsaw/hikaru

https://pypi.org/project/hikaru/

https://redd.it/srnbvz
@r_devops
How do you deliver Kubernetes applications in 2022?

Hey everyone!

With my team, we're currently exploring the most common ways to maintain manifests and deploy them to Kubernetes in 2022. We are coming from automated `kubectl apply -f ...` runs (in our CI/CD servers) against manifest files stored along with the application code. We wonder what people use these days to manage their app deployments.

The main shortcomings we'd like to avoid (and that happen in the `kubectl apply -f ...` setup):

- the multiplication of untracked resources (using namespaces better may already help us there, tho...)
- the drifting of the settings of deployed resources.

A few strategies we already have on our radar:

- `kubectl apply -f ...`: well, it works, but it requires a bit of glue code, and maybe other tools can let us do things in a smarter way.
- Terraforming our K8s resources: we're exploring the option to Terraform our K8s manifests so we can keep track of the state of deployed resources and re-align them if they drift from the expected setup. However, having all those .yaml manifests rewritten in HCL is a bit hard to digest... Any strong cons for this option?
- Helm charts: we like the fact that applications are managed as atomic deployments that can be installed, upgraded, and removed. Coupling this with Terraform to effectively deploy may also give us some benefits in the way we approach deployments. However (and AFAIK), applying Helm charts with Terraform doesn't protect you from drift in the resources associated with the chart.
- ...? Anything else?

We're open to consider any tool (or combination of tools) that can improve our K8S resources management ;-)

Thanks!
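One observation on the drift concern above: whichever tool applies the manifests, drift detection reduces to recursively comparing the fields you manage against the live object, which is what `kubectl diff` and ArgoCD's sync status do for you. A toy stdlib-only Python sketch of that comparison (names are hypothetical):

```python
def drift(desired: dict, live: dict) -> dict:
    """Return the subtree of `desired` whose values differ in `live`.

    Fields present only in `live` (status, defaults filled in by the
    API server) are ignored, roughly like a server-side apply view.
    """
    out = {}
    for key, want in desired.items():
        have = live.get(key)
        if isinstance(want, dict) and isinstance(have, dict):
            sub = drift(want, have)
            if sub:
                out[key] = sub
        elif have != want:
            out[key] = want
    return out
```

An empty result means the tracked fields are in sync; anything else is what a re-apply would change, regardless of whether the apply is driven by `kubectl`, Terraform, or a GitOps controller.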

https://redd.it/sroq2s
@r_devops
Are interview prepping online services worth it?

Hello, I am studying for interviews with the big players in crypto and fintech in general. These companies have more than two interview sessions, each progressively more challenging. I want to be fully prepared for any kind of question, so I stumbled upon a service called Prepfully (just an example). A mock interview costs around $100. Has anyone ever used such a service? They claim to have vetted sector experts at the required level as interviewers. Thank you.

https://redd.it/srqfuz
@r_devops
Can I run master / server K3S nodes on raspberry pi?

Just wondering if this may work

Also wondering if I could run this on a phone, since I have a few Android devices that I don't use; they have some decent power, can stay powered up for a long time, and the GSM network is available 24 hours.

So: Raspberry Pis connected over local Ethernet (RJ45), with fallback mobile devices whose batteries can hold through a few days of power outage, connected through a primitive cellular network.

https://redd.it/srqcst
@r_devops
How do you handle who can deploy and tear down specific services?

From a dev's perspective, it makes sense to have their app repo just build their code and, optionally, deploy to a dev environment. But what about deploying to higher environments? Do you have a separate repo to deploy? If so, is this 'deploy' repo a monorepo for the entire firm's services, or per team? How do you manage allowing devs to tear down services? Is this also a repo, or something else?

As you can tell I'm trying to tackle lifecycle management, and make it self-service to each team. But at the same time trying to be cautious to prevent teams from impacting one another (e.g. tearing down the wrong service).

My initial thought would be to have a 'deploy' repo per team, so permissions to the repo would be managed by that team; they would need to hard-code the app version and commit to invoke a deployment. For tearing down, they would update that same 'deploy' repo and change the value of 'instance_count' from something to zero, effectively saying "please tear this down". With this approach everything is auditable, since it's git, and self-service, since they have access to make commits. Using webhooks I can control the rest.
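The instance_count-to-zero convention sketches out naturally in the webhook handler. A minimal Python illustration (the entry format and all names are hypothetical):

```python
def plan(entry: dict):
    """Turn one service entry from a per-team 'deploy' repo into an action.

    instance_count == 0 is the team's auditable request to tear down;
    anything else is a deploy of the pinned version at that scale.
    """
    service = entry["service"]
    if entry.get("instance_count", 0) == 0:
        return ("teardown", service)
    return ("deploy", service, entry["version"], entry["instance_count"])
```

Because the handler only ever reads the team's own repo, the repo permissions double as the deploy/teardown permissions, which is the self-service isolation described above.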

https://redd.it/srshyn
@r_devops
OpenTelemetry JavaScript Question

I'm trying to dive into using OpenTelemetry. I understand the intra-process child/parent nesting when I manually instrument, and I can see the spans graphically in the tool I'm using, but I'm running into trouble carrying the context across services.

My problem:

I'm able to manually inject headers into my outgoing requests with the TraceId, SpanId, and TraceFlags from the context of the calling service. But I can't seem to make the called service's span a child of that context. I tried passing it to startSpan, but it seems to ignore it.

Does anyone have an example of using the propagation API with fully manual instrumentation? I fear I'm trying to reinvent the wheel on header injection and consumption when that API probably does what I need it to do.
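For context on the question above: what the default propagator injects and extracts is the W3C `traceparent` header, and the called service has to extract it and pass the resulting context as the parent when starting its span; handing raw IDs to `startSpan` is not enough. The header format itself is simple. A stdlib Python sketch of its shape (this shows the wire format only, not the OTel API):

```python
def make_traceparent(trace_id: str, span_id: str, flags: str = "01") -> str:
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header back into its four fields."""
    version, trace_id, parent_span_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "parent_span_id": parent_span_id, "flags": flags}
```

In OpenTelemetry JS the equivalent is roughly `propagation.inject(context.active(), headers)` on the caller and `propagation.extract(context.active(), headers)` on the callee, with the extracted context passed as the third argument to `tracer.startSpan`.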

https://redd.it/srp3ru
@r_devops
Lead Time on a Greenfield project

Hi all, I tried to do a very quick lazy search on this subject here and it doesn’t appear to have been discussed before. If not I do welcome any links to previous discussions.

To jump into it: I’d like to hear your thoughts on lead time (length of development cycle from start to production) for a Greenfield project - I.e. something typically being (re)written from scratch.

My understanding is that one of the measures of DevOps success is low lead time and frequent deployments to production.

How do you believe this ought to work with “greenfields”? Does the concept of a “production ready” build apply, and do you wait until your development has reached that level of maturity before deploying to prod? Consider this within the context of an enterprise application with heavy governance/compliance and security requirements (e.g. a system that contains PII or credit card information etc)

Or do you get whatever’s been tested into Prod, even if it may not be fully functional for an end-user? Perhaps treating “Prod” as iteratively as you would “dev”?

Or is there some subtle middle ground?

Traditionally, many of the companies I’ve worked for have had very slow lead time, partly based on the idea that you don’t touch prod until you’re “completely satisfied” with UAT/Stage. This seems to not be in alignment with DevOps, at least based on my growing understanding.

https://redd.it/srffa1
@r_devops
New to DevOps;

Hi, I just want to understand: what's the difference between a Software Developer and a DevOps Engineer? What are the differences in their roles and tasks?

https://redd.it/sr4p69
@r_devops
DevOps Handbook, 2nd Edition eBook

The second edition features 15 new case studies, including stories from Adidas, American Airlines, Fannie Mae, Target, and the US Air Force. In addition, renowned researcher and coauthor of Accelerate, Dr. Nicole Forsgren, provides her insights through new and updated material and research. With over 100 pages of new content throughout the book, this expanded edition is a must read for anyone who works with technology.

You can try the eBook for free here: DevOps Handbook, 2nd Edition eBook

https://redd.it/sqwzyx
@r_devops
Best way to handle several python script plugins for a service? Create an image + container for each one? Create one for them all? Running them as microservices?

So we have an ftrack setup, and we have several plugins for it, and are in the process of creating more. We're also moving ftrack to the on-premises self-hosted version which uses kubernetes.

Each script is at most a few hundred lines of python, spread over a few files at most. Here's an example of one, but essentially it currently goes something like this:

import ftrack

def callback(event):
    ...

ftrack.setup()
ftrack.EVENT_HUB.subscribe('topic=ftrack.update', callback)
ftrack.EVENT_HUB.wait()

(note that the above example and the one I linked to are on the old v1 deprecated API using python 2, but it's very similar for the new API with python 3, and we will be porting over the old ones)

Currently we just have each of them set up with this simple event server.

We're a smallish company with ~15 employees, and as I mentioned we're now deploying ftrack to an on-premises install, that's running on a small server (Ryzen 1800x, 64GB RAM, Proxmox) on kubernetes.

I'd like to make the plugins a bit more modern and reliable, ditching the event server. As it stands, the event server doesn't really know what the plugins are doing once they're running; e.g. if they crash, they just stay down until the entire event server stops and restarts.

So I'm looking for some advice on how to implement these plugins?

Should I create an image/container for each one and run that? Or would that use a lot of resources even for simple plugins? If I go this way, how would I automate it so that I can easily generate an image for every plugin?

Should I create one image/container that then loads and manages the several plugins, making sure they restart when crashed etc? This way just sounds like re-inventing and implementing a lot of what potentially already exists.

Should I look into running them as microservices? I have some experience with AWS Lambda, and I think each plugin would work nicely on something like that. But we want it to be self-hosted locally. What sort of local free microservices frameworks are there, that are low resource enough to run on the server?

I'm leaning towards the microservices one, as this seems like a time and a place where microservices would actually be a very good idea?

Or are there perhaps other ways that would be better to implement the whole thing?

https://redd.it/sq88zy
@r_devops
DevOps conference List

I have prepared a list of online conferences for DevOps and SREs. This list should help you choose which conference to attend. It is especially interesting when you can attend an online conference that will be held in another country or continent.

The list is available on my blog (https://www.czerniga.it/2022/02/13/devops-online-conferences-list/) as well as on GitHub (https://github.com/czerniga/devops-online-conferences). If you want to add a new conference create a new Pull Request in the repository on GitHub.

I also list some upcoming conferences below:

| Date | Conference | Link|Price |
|:-|:-|:-|:-|
|17 February 2022|SKILup Day: Site Reliability Engineering|https://www.skilupdays.io/Sre-22/home|FREE |
|8-9 March 2022 |The DEVOPS Conference|https://www.thedevopsconference.com/|FREE |
|14–16 March 2022|SREcon22 Americas|https://www.usenix.org/conference/srecon22americas|US $550 – US $700 |
|24-25 March|DevOps.js Conference|https://devopsjsconf.com/|FREE / € 46 |
|26 – 29 April 2022|DevOpsCon London|https://devopscon.io/london|£ 512 - 1196 |

https://redd.it/ss55b8
@r_devops