Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
trying to convince anyone that the cloud is good or bad.
* Since a cloud exit depends on an enormous number of factors and there can be many dependencies for an application (especially in an enterprise environment), my goal is not to promise a solution that solves everything with just a Next/Next/Finish approach.

Many Thanks,
Bence.

https://redd.it/1gayf4t
@r_devops
Using ServiceConnection env variables

Hi there,

I've been trying to wrap my head around this. I'm fairly new to DevOps; so far I've been placing variables (such as tenant ID, client ID, etc.) in the scripts themselves.
Then I figured out a way to create one variables.yaml file per tenant, which already made things a bit nicer.

Now I've run into something I can't seem to get to work.

If I understand correctly, I should be able to extract info such as the tenant ID and client ID, but also the access token, from the Service Connection I've configured for the project in DevOps, using these $env: variables:

$env:AZURE_TENANT_ID
$env:AZURE_CLIENT_ID
$env:AZURE_ACCESS_TOKEN

I've modified my main.yaml to set addSpnToEnvironment to true.
I've added them as arguments to the script line.

Yet when running the pipeline, the script still reports these variables as empty.

The App Registration has API permissions for Directory.Read.All and Application.Read.All

So I believe that should be sufficient.

Can anyone please help me along? I'm starting to chase my own tail right now, ending up in circles with things I've already tried :)

Purpose of the script: create a test script to figure out how to send emails from DevOps pipelines using the Graph API. In the end we want to use this for all sorts of automated tasks (cleaning up inactive devices, verifying specific SAML settings for enterprise apps, whatever else you can think of that you can script to reduce the daily workload of repetitive tasks).

Right now the PS1 is a bit of a mess, because of a full day of testing, modifying etc.

MAIN.YAML:

trigger: none
schedules:
  - cron: "0 0 1 * *"  # Run at midnight on the first day of every month
    displayName: Run once a month
    branches:
      include:
        - main
    always: true

pool:
  vmImage: 'windows-latest'

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'Repo-EntraID'
      scriptType: 'ps'
      addSpnToEnvironment: true
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Call the SendMailMessage script with the environment variables
        .\SendMailMessage\SendMailMessage.ps1 -AccessToken $env:AZURE_ACCESS_TOKEN -TenantId $env:AZURE_TENANT_ID -ClientId $env:AZURE_CLIENT_ID
    displayName: 'Send Email using Microsoft Graph and Service Connection'
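(A hedged aside, not tested here: as far as I can tell from the AzureCLI@2 task documentation, `addSpnToEnvironment: true` exposes the service principal details under the names `servicePrincipalId`, `servicePrincipalKey` (or `idToken`), and `tenantId`, not under `AZURE_*` names. A throwaway debug step like this sketch could confirm what is actually set:)

```yaml
# Debug sketch (untested): dump the variables addSpnToEnvironment is
# expected to set for a service-principal connection; names assumed
# from the AzureCLI@2 task documentation.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      Write-Host "servicePrincipalId: $env:servicePrincipalId"
      Write-Host "tenantId:           $env:tenantId"
      Write-Host "key present:        $(-not [string]::IsNullOrEmpty($env:servicePrincipalKey))"
```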




SendMailMessage.ps1

param (
    [string]$TenantId,
    [string]$ClientId,
    [string]$AccessToken
)

# Convert the access token to a secure string
Write-Host "Converting access token to secure string..."
$secureAccessToken = ConvertTo-SecureString $AccessToken -AsPlainText -Force

# Parameters for the email
$EmailSender = 'servicepunt@<domainname>'
$Recipient = '<my own mailaddress>'
$Subject = 'DevOps mail'
$Body = 'This is a mail from DevOps MDK'

# Show parameters
Write-Host "Starting script execution..."
Write-Host "From: $EmailSender"
Write-Host "To: $Recipient"
Write-Host "Subject: $Subject"
Write-Host "Body: $Body"
Write-Host "TenantID: $TenantId"
Write-Host "ClientID: $ClientId"
Write-Host "TenantID env: $env:AZURE_TENANT_ID"
Write-Host "ClientID env: $env:AZURE_CLIENT_ID"

# Check if AccessToken is empty
Write-Host "Checking if AccessToken is empty..."
if ([string]::IsNullOrWhiteSpace($AccessToken)) {
    Write-Error "AccessToken is empty. Please check your service connection and ensure it has the necessary permissions."
    exit 1  # Exit the script with a non-zero status code
}

Write-Host "Connecting to Microsoft Graph..."
Connect-MgGraph -AccessToken $secureAccessToken -NoWelcome

# Prepare headers for further API calls
Write-Host "Preparing headers for API calls..."
$header = @{
    'Authorization' = "Bearer $AccessToken"
}

# Verify connection to Microsoft Graph
Write-Host "Verifying connection to Microsoft Graph..."
try {
    $graphProfileUrl = "https://graph.microsoft.com/v1.0/me"
    $profileResponse = Invoke-RestMethod -Uri $graphProfileUrl -Method Get -Headers $header

    Write-Host "Successfully connected to Microsoft Graph. User profile information retrieved:"
    Write-Host "User Display Name: $($profileResponse.displayName)"
} catch {
    Write-Error "Failed to connect to Microsoft Graph with the provided AccessToken: $_"
    exit 1  # Exit the script with a non-zero status code
}

# Microsoft Graph API URL for sending mail
$mailSendUrl = "https://graph.microsoft.com/v1.0/users/$EmailSender/sendMail"

# Compose Email
Write-Host "Composing email..."
$emailBody = @{
    message = @{
        subject = $Subject
        body = @{
            contentType = "Text"
            content     = $Body
        }
        toRecipients = @(
            @{
                emailAddress = @{
                    address = $Recipient
                }
            }
        )
        from = @{  # Specify the sender
            emailAddress = @{
                address = $EmailSender
            }
        }
    }
}

# Send Email using Microsoft Graph API
Write-Host "Sending email using Microsoft Graph API..."
try {
    # sendMail returns 202 Accepted with an empty body; Invoke-RestMethod
    # throws on non-success status codes, so reaching the next line means success.
    # -Depth 10 is needed because ConvertTo-Json's default depth truncates the nested message body.
    Invoke-RestMethod -Uri $mailSendUrl -Method Post -Headers $header -Body ($emailBody | ConvertTo-Json -Depth 10) -ContentType "application/json"
    Write-Host "Email sent successfully."
} catch {
    Write-Error "An error occurred while sending the email: $_"
}







https://redd.it/1gb2835
@r_devops
Why should I use ArgoCD and not Terraform only?

Hey everyone,

I'm digging into the GitOps topic at the moment, just to understand the use cases where it's useful, when it's not ideal, etc.

Currently, I have fully terraformed infrastructure. That includes multiple Kubernetes projects, each project with multiple environments, and each environment for each project on a dedicated AWS account.
All of it is deployed through GitHub Actions using Terraform. My build stage pushes Docker images to the GitHub registry (or AWS ECR). Then Terraform applies modules one after the other (network config, then cluster config, then application config). The image ID is passed from the build to Terraform as an input variable, so Terraform detects the diff and applies it.
Using HPA/PDB/Karpenter, we manage to keep our environments running at all times, even when a faulty image is deployed (pods are not all rolled out). The pipeline fails, so the new image is not deployed.

This setup works fine, and we're happy about it.

What would ArgoCD bring to the table that I'm missing?
What are the scenarios, where our deployment wouldn't be as good as an ArgoCD one?

Thanks!

https://redd.it/1gb3rwn
@r_devops
Using zstd compression with BuildKit - decompresses 60%* faster

Last week I did a bit of a deep dive into BuildKit and Containerd to learn a little about the alternative compression methods for building images.

Each layer of an image pushed to a registry by Docker is compressed with `gzip` compression. This is also the default for `buildx build`, but we have a little more control with `buildx` and can select either `gzip`, `zstd`, or `estargz`.

I plan to do an additional deep dive into `estargz` specifically because it is a bit of a special use-case. Zstandard though, is another interesting option that I think more people need to be aware of and possibly start using.

>What is wrong with Gzip?

Gzip is an old but gold standard. It's great, but it suffers from legacy choices that we don't dare change now, for reliability and compatibility. The biggest issue is that `gzip` is a single-threaded application.

When *building* an image with gzip, your builds can be substantially slower, simply because `gzip` can't take advantage of multiple cores. This is likely not something you would have noticed without a comparison, though.

When *pulling* an image, whether locally or as part of a deployment, the image's layers need to be extracted, and this is the most critical point. Faster decompression means faster deployments.

`gzip` is single-threaded, but there is a parallel implementation of `gzip` called `pigz`. Containerd will attempt to use `pigz` for *decompression* if it is available on the host system. Interestingly, unlike `gzip` and `zstd`, which both have native Go implementations built into Containerd, it reaches out to an external `pigz` binary.

For compatibility and legacy reasons, Docker/Containerd has not implemented `pigz` for compression. The compression of `pigz` is essentially the same as `gzip` but scales in speed with the number of cores.

There is, however, another compression method, `zstd`, which is natively supported, multi-threaded by default and, most importantly, decompresses even faster than `pigz`.

>How do I use `zstd`?

docker buildx build . --output type=image,name=<registry>/<namespace>/<repository>:<tag>,compression=<compression method>,oci-mediatypes=true,platform=linux/amd64

When using the `docker buildx build` (or `depot build` for depot users) you can specify the `--output` flag with a `compression` value of `zstd`.

>How much better is zstd than gzip?

Really answering this question requires knowledge of your hardware, and depends on whether we are talking about the builder or the host machine. In either case, the tl;dr is: more cores == better.

I ran some synthetic benchmarks on a 16 core vm just to get an idea of the differences. You can see the fancy graphs and full writeup in the [blog post](https://depot.dev/blog/building-images-gzip-vs-zstd).

Skipping to just the [decompression comparison](https://depot.dev/blog/building-images-gzip-vs-zstd#comparison-of-decompression-times) portion, there is a roughly 50% difference in speed going from `gzip`, to `pigz`, to `zstd` at every step.

|Decompression Method|Time (ms)|
|:-|:-|
|gzip|25341|
|pigz|14259|
|zstd|6108|
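As a quick sanity check of the percentages quoted above, the table numbers can be worked through directly (plain arithmetic on the values in the table, nothing assumed):

```python
# Decompression times in milliseconds, taken from the table above.
times = {"gzip": 25341, "pigz": 14259, "zstd": 6108}

def pct_faster(slow_ms: int, fast_ms: int) -> float:
    """Percentage reduction in wall time going from slow_ms to fast_ms."""
    return (slow_ms - fast_ms) / slow_ms * 100

print(f"gzip -> pigz: {pct_faster(times['gzip'], times['pigz']):.0f}% faster")  # ~44%
print(f"pigz -> zstd: {pct_faster(times['pigz'], times['zstd']):.0f}% faster")  # ~57%
print(f"gzip -> zstd: {pct_faster(times['gzip'], times['zstd']):.0f}% faster")  # ~76%
```

So each step is indeed roughly a 50% improvement, and going straight from gzip to zstd cuts decompression time by about three quarters.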

Meaning: even if `pigz` is installed on your host machine, which is not a given, you are still giving up roughly half the decompression speed if you haven't switched to `zstd` (on a 16-core machine; it may be more or less depending on your hardware).

Are you wondering how long it took to compress these images? Let's leave out `pigz` since it can't actually be used by Docker.

|Compression Method|Time (ms)|
|:-|:-|
|gzip|163014|
|zstd|14455|

That is 90% faster compression. 90%... Nine followed by a zero.

But you are thinking: there must be a trade-off in compression ratio. Let's check. The image we are compressing is 5.18 GB uncompressed.

|Compression Method|Compressed Size (GB)|
|:-|:-|
|gzip|1.5|
|zstd|1.32|

Nope. 90% faster than gzip, smaller file, 60% faster to decompress.

# Conclusion

Zstandard is nearly universally the better choice in today's world, but it's always worth running a benchmark of your own, using your own data and your own hardware, to ensure you are optimizing for your specific situation. In our tests, we saw a [60% decompression speed increase](https://depot.dev/blog/building-images-gzip-vs-zstd#conclusion), and that's ignoring the *massive* savings in the build stage, where we go from a single-threaded application to a multi-threaded one.

https://redd.it/1gb4e98
@r_devops
Re: Container orchestration vs. VM orchestration

Hello devops! I wanted to start a new post in the same area as:

https://www.reddit.com/r/devops/comments/1bshdqx/containerorchestrationvsvmorchestrationin/

but ask a slightly different question. Does anyone have a favorite way to do VM orchestration as if the VMs were pods, with a kubectl-like CLI tool for it?

Things I want are:

1. No container, no Dockerfile; I want my code to run directly on the VM.

2. Just a simple bash script that goes in the startup script. Here is a Pulumi example for GCP:

jammy = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240208"
compute_instance = gcp.compute.Instance(
    "aa-aug-23-2024",
    machine_type="e2-micro",
    zone=zone,
    metadata_startup_script=startup_script,
    metadata={
        "enable-oslogin": "false",
        "ssh-keys": "the key",
    },
    boot_disk=gcp.compute.InstanceBootDiskArgs(
        initialize_params=gcp.compute.InstanceBootDiskInitializeParamsArgs(
            image=jammy,
            size=30,
            type="pd-ssd",
        )
    ),
)

3. Be able to list all my running VMs (as if they were pods), get logs, spin up more, spin down to fewer, etc.

Is anyone doing this, and is it catching on as a real kubectl alternative? I feel like I would have to hack together Pulumi logic or specific aws/gcp CLI commands, and there isn't really a "back to VMs" movement yet. Or is Nomad the tool for this? What tool out there is really trying to make this happen?



https://redd.it/1gb3of0
@r_devops
Branching Strategies?

Hello everyone. I'm currently researching the optimal branching and deployment strategy to implement at my current company.

As of right now we are working with environment branching, where each of our 3 teams has a branch that it develops on. We also have a staging branch that is used by our QA team for testing and validation. Finally, we have our production environment. All the lower environments should always be rebased on the master branch and kept up to date.

Our teams produce new features over biweekly sprints, as well as hotfixes and bugfixes every couple of days. Maintaining 4 environments has become a headache. I'm looking for the branching strategy that best fits our business needs, keeping in mind how to handle migrations, different RabbitMQ queues, database instances, and so on.

I've been researching a trunk-based solution with feature flags; however, I failed to find a solution for handling migrations of unreleased features and so on. I would love to hear your insights on this topic. Thank you in advance!

https://redd.it/1gb7zwv
@r_devops
GitOps for Postgresql - What features would you want to have?


Hi all, I have about 10 years of software development and DevOps experience and I'm currently working on a personal project for managing Postgresql databases with GitOps.

My project started as a declarative way to manage logical replication publications and subscriptions, but I'm thinking about the future roadmap. (No link to the project yet, as it's in too early development to be useful to anyone.)

If you had an app that functioned like Terraform or ArgoCD for managing Postgresql, what features do you think are key? Schema migrations? Access controls? Settings management?

The gist is it's written in Rust and uses a reconciliation loop: it reads a YAML file that declares the desired state, then connects as a Postgres user to each database to inspect and update the state to match.
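To make that concrete, a purely hypothetical desired-state file for such a tool (every key below is invented for illustration; the project has no published schema) might look like:

```yaml
# Hypothetical desired-state declaration -- all field names are invented
# for illustration, not the project's actual schema.
databases:
  - name: orders
    publications:
      - name: orders_pub
        tables: [orders, order_items]
  - name: reporting
    subscriptions:
      - name: orders_sub
        connection: "host=orders-db dbname=orders"   # placeholder DSN
        publication: orders_pub
```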

Once I have a decent roadmap and the foundations in place, I'll definitely share a link to the GitHub repo and invite contributors/feedback. So, what are your thoughts on must-have features here?

Thanks all!

https://redd.it/1gb694s
@r_devops
I'm an IT student with a passion for cars — Should I pursue automotive tech as a career or keep it as a hobby?

I am a BS IT student and I absolutely love tech. I always have. But there’s something I love even more and that’s cars. I was fortunate enough to have a computer since childhood, so I was able to work with them hardware and software wise, learn a lot and be very good at it. There’s not much to do in computers hardware wise but I really enjoy it more than the software and programming. I am a gamer too and I love building gaming computers.

Similarly, the idea of working with cars really excites me, and I want to pursue it. I love cars, more than computers. Unfortunately I have never had the chance to own one or work on one, but I want to be able to do it.

I am going to do a master's after my bachelor's, and I am pretty set on specializing in a field in IT (DevOps/cloud), but I was wondering if there is something like an automotive technician degree (I'm not interested in automotive engineering) or a course that I can do?

Another idea I had was that I can continue my career in IT and pursue this car thing as a hobby. Buy a car and learn to work with it, etc., and so on grow and buy another car.

I really want to work with cars. I really enjoy doing manual labor.

https://redd.it/1gb6ytt
@r_devops
Is there an argocd for cloud resources?

I was wondering if something exists that allows state reconciliation and declarative configuration, but for cloud resources. Do you have any names?

https://redd.it/1gbc8on
@r_devops
GitOps Channels/Canary-like Rollouts

Dear DevOps Community,
We recently adopted Flux to manage our K8s infrastructure components on more than 200 clusters across different cloud vendors in a "GitOps" pull fashion.

TL/DR:

- How do you manage GitOps on your clusters? Are you using the multi-branch "channel" approach or another strategy?

- Is there maybe even a smart way to achieve something like controlled "canary-like" rollouts (10%… 30%… 60% of clusters…)?

So far so good, and Flux does its job:
When there's an update or a new feature to be rolled out, we branch off the main branch, prepare the changes and change the "flux source" on a few test clusters for testing, before we merge back to main so it is rolled out to all clusters.
When this is done, we change the "source" on our test clusters back to "main".

This works well for us, but the continuous changing/cleanup of test clusters (especially when multiple features are being developed at the same time), and having basically all clusters subscribe to the "main" branch only, always comes with a slight doubt whether it could be done better.
Especially when we want to follow a pattern of small but frequent updates via GitOps.

Of course we could maintain, next to "main", some "branch channels" (i.e. "stable", "beta", "dev", "test/upgradeX", …), but I'm afraid that this will cause a mess keeping all the branches up to date.
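For what it's worth, the channel idea maps fairly directly onto the Flux source object itself; a sketch (the repo URL and channel names are placeholders, adjust to your setup):

```yaml
# Sketch: point each cluster cohort's GitRepository at a channel branch,
# and promote by merging dev -> beta -> stable rather than retargeting clusters.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 5m
  url: https://example.com/org/infra.git   # placeholder
  ref:
    branch: stable   # "beta" on the canary cohort, "dev" on test clusters
```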

Thanks for sharing your thoughts :)

https://redd.it/1gbddtk
@r_devops
Recruitment process & technical challenge

Hi there,

Recently, I participated in a recruitment process for a DevOps role at a company that provides services to other businesses. The initial contact was a nearly one-hour interview. After that, the recruiter sent me an email with instructions to sign up on their platform to complete three additional steps.

The first step was a 30-minute test designed to measure IQ, logic, and other abilities to assess if my profile fits with the company.

The second step involved answering several questions while being recorded.

The final step was a technical challenge where I was supposed to build a pipeline for a Node.js application with multiple stages and then deploy everything to Azure using Terraform. Additionally, it required setting up three environments—dev, stage, and prod—along with several rules for merging branches, setting up the branch strategy, etc.

For this final step, the instructions specified that it should take no longer than one hour, and I had to record all steps and explain each part. I decided to decline the process because of these time-consuming requirements. I'm very busy and can't afford to spend a lot of time on these tasks. Since no sandbox environment was provided, I would need to set up everything on my own, which adds significant time to the process. Similarly, there isn't an automatic platform for recording the video, meaning I'd have to handle that setup as well.

I'm curious to hear your opinions on recruitment processes that require extensive time commitments, such as lengthy technical challenges without providing necessary resources like sandbox environments or recording platforms. Do you usually participate in them, or do you also choose to decline? I'd appreciate hearing your thoughts.

https://redd.it/1gbeebs
@r_devops
What matters most in a mocking tool?

Ayo, doing some research. My team was asking me what else would matter to me in a mocking tool, and obviously I care about whether it's fast and easy to mock, but I was struggling to think of what else would really be a 'game-changer' for me to care enough.

Hosted mocks are great, dynamic vs. static mocking is nice too... but what else? What would make you care / what do you look for in a mocking tool?

https://redd.it/1gbfjhr
@r_devops
Jenkins vs. Tekton for Openshift

Apologies if my question is stupid, I’m an SWE and far from an expert in DevOps.

We currently have our Repos in Bitbucket cloud and deploy them to Openshift with Bamboo. Our team wants to move away from Bamboo and the proposed alternatives are Jenkins or Tekton.

My gut feeling is Tekton is more suitable for this use case, but I would appreciate any advice, especially pros and cons that should be considered. Thanks!

ETA: additional alternative suggestions are also more than welcome.

https://redd.it/1gbd8gl
@r_devops
How come containers don't have an OS?

I just heard today that containers do not have their own OS because they share the host's kernel. On the other hand, many containers are based on an image such as Ubuntu, Alpine, SUSE Linux, etc., albeit an extremely light one and not a fully-fledged OS.

Would anyone enlighten me on which category containers fall into? I really cannot understand why they wouldn't have an OS, since one should be needed to manage processes. Or am I mistaken here?

Should the process inside a container become a zombie or stop responding, whose responsibility would it be to manage it: the container or the host?
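On the zombie question specifically: reaping exited children is the job of the parent process, and inside a container that ultimately falls to whatever runs as PID 1 in the container's PID namespace. The host kernel schedules all the processes, but it won't reap them for you. This can be demonstrated without any container at all (Linux/macOS, since it uses fork):

```python
import os
import time

# A child that exits before being wait()ed on becomes a zombie in the
# process table; the parent (like PID 1 in a container) must reap it.
pid = os.fork()
if pid == 0:
    os._exit(0)          # child: exit immediately
time.sleep(0.1)          # child is now a zombie until the parent reaps it
_, status = os.waitpid(pid, 0)   # the "init duty": collect the exit status
print("reaped child", pid, "with exit code", os.waitstatus_to_exitcode(status))
```

This is why minimal init processes (tini, dumb-init) exist: they run as PID 1 inside the container purely to reap orphaned children.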

https://redd.it/1gbi3kt
@r_devops
I have just been fired and wondering whether to continue in DevOps.

I came from a systems engineering background and spent the last two years in a DevOps role, to which I was promoted internally.

It was predominantly supporting a legacy Sitecore (.NET) workload running on Windows instances; we used TeamCity for builds and Octopus for deployments. The deployments were really long and clunky: 5 hours end to end including testing.

We also ran some more typical DevOps stacks: Jenkins pipelines, deploying .NET Core applications into Fargate.

I am in a position where I am missing Kubernetes and some other core DevOps skills, due to not using industry-standard tools. I also found the work pretty overwhelming initially, but that wasn't helped by what I considered a difficult co-worker. I am not quite sure why I was fired, but it probably had something to do with my relationship with my co-worker, who is best friends with our boss; I was assured it was not a performance issue.

These are some of the behaviours that led to conflict. It being my first DevOps job, I don't know if this is just expected, standard behaviour, given the fast pace of the work:

Making changes at 2am to our integration layer and not telling anyone

Making breaking changes to production pipelines, not telling anyone, then going on holiday. I start looking into the issue, then he pops up on Slack telling me the solution is easy and what to do, which I had done 40 minutes prior.

Agreeing with me, then publicly disagreeing with me in front of the devs on Slack or to our boss.

Generally just going off and doing his own thing and not documenting anything, leaving you to pick up integrations he was working on that had failed in his absence.

Messaging you about work on Teams at the weekend, and when you reply saying it's the weekend, replying that you didn't have to reply.

It would be good to get some feedback on how people collaborate with their co-workers, what they consider acceptable or not, and whether you think DevOps promotes a lot more conflict than other roles.

At this point, because I am missing some core skills, I could invest time into skilling up and trying to get another role, but it also seems like the stress is not worth the money in the country I live in.

https://redd.it/1gbl2b4
@r_devops
New release: Jailer Database Tools

# Jailer Database Tools.

Jailer is a tool for database subsetting and relational data browsing.

It creates small slices from your database and lets you navigate through your database following the relationships. Ideal for creating small samples of test data or for local problem analysis with relevant production data.

The Subsetter creates small slices from your database (consistent and referentially intact) as SQL (topologically sorted), DbUnit records or XML. Ideal for creating small samples of test data or for local problem analysis with relevant production data.

The Data Browser lets you navigate through your database following the relationships (foreign key-based or user-defined) between tables.

# Features

Exports consistent and referentially intact row-sets from your production database and imports the data into your development and test environments.

Improves database performance by removing and archiving obsolete data without violating integrity.

Generates topologically sorted SQL-DML and hierarchically structured JSON, YAML, XML and DbUnit datasets.

Data Browsing. Navigate bidirectionally through the database by following foreign-key-based or user-defined relationships.

SQL Console with code completion, syntax highlighting and database metadata visualization.

A demo database is included with which you can get a first impression without any configuration effort.

https://redd.it/1gbnhqe
@r_devops
PagerDuty not great for small teams?

Not sure if I'm missing something here, but it seems like PagerDuty really isn't built for smaller teams? I just recently broke up what was more or less a monolithic escalation policy (everyone on the schedule was more or less on call all the time, and issues could be escalated to the same person if they didn't ack) into smaller escalation policies and schedules. Basically 3-ish people per schedule.

PagerDuty recommends creating a primary and a secondary schedule, but how is that supposed to work with three people? Ideally I'd define the primary, and the secondary would be defined as an offset of it: page the primary, then escalate to whoever is on deck to be on call next. It could work with the existing guidance, but all the people would have to be in both schedules, and then the offset would have to be managed manually. And then, if someone overrides in the primary and doesn't also make a similar override in the secondary, you could end up with the primary and secondary being the same person.

What I really want is an escalation policy that alerts a team schedule, escalates through everyone there first, and then hits my team as a backup. Right now, if the on-call for that team doesn't ack, it jumps straight to me and I have to manually kick it to the next person on the schedule.

Am I missing something, or does PagerDuty really just assume that a team has 6-ish people with two full primary and secondary rotations?

https://redd.it/1gbn2dw
@r_devops
How do you guys track your deployments when doing configuration management?

We are currently discussing migrating away from our current tool stack, which consists of TFS (for political and financial reasons).

We use it to host our code and to build, create, and host our artifacts.

We can easily create a release with specific build artifacts and deploy it through agents using PowerShell.

We have around 100 different customers that we manage. Each customer has between 2 and 4 'stages' (dev/int/prd, for example), and we have a total of 4000 tests that get executed per deployment per customer.

In the end, we have almost half a million tests that run to ensure that our artifacts are correctly installed and configured.

Since we need to migrate, we have been evaluating GitLab, but we realized that it is not 'as complete' as TFS, especially the deployment part. It seems GitLab is only intended for a smaller number of environments.

In addition to that, displaying the test results, or even just the pipeline runs, really doesn't scale and definitely lacks some user-friendliness.


I was wondering how you guys in other places handle this type of scenario. I feel like we will not be able to find a single similar product, and that it would be more of an 'aggregation' of several products that would allow us to do this.

I would be curious to hear how you:

- Deploy stuff onto your environments (Ansible? DSC / Chef / Puppet / something else?)

- Keep 'visual track' of what passed/failed and where (nice-looking graphs with green & red)

Cheers

https://redd.it/1gbofud
@r_devops