Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Easy Database Management solution for self hosting in a docker container?

I'm seriously considering migrating a small web app off of Heroku and trying to decide on the best way to host the database. Is there something like Scalegrid that I can self-host for a database running in a Docker container? Or should I just fork out money for a managed solution like GCP's Cloud SQL? I'm trying to keep everything on one cloud. I don't know much about DevOps, and I don't trust myself to write my own scripts to back up the database.
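For the backup worry specifically, the usual self-hosted answer is a small cron-able dump-and-rotate script. A minimal sketch, assuming Postgres with `pg_dump` on PATH, connection details in the `PG*` environment variables, and `mydb` as a placeholder database name:

```python
import subprocess
import time
from pathlib import Path


def dump_database(backup_dir: str) -> Path:
    """Write a timestamped, compressed pg_dump into backup_dir.

    Assumes pg_dump is on PATH and PGHOST/PGUSER/PGPASSWORD are set;
    "mydb" is a placeholder database name.
    """
    d = Path(backup_dir)
    d.mkdir(parents=True, exist_ok=True)
    out = d / f"db-{time.strftime('%Y%m%d-%H%M%S')}.sql.gz"
    with out.open("wb") as f:
        subprocess.run(["pg_dump", "--compress=9", "mydb"], stdout=f, check=True)
    return out


def prune_old_dumps(backup_dir: str, keep: int = 7) -> list:
    """Delete all but the newest `keep` dumps; return the surviving names."""
    dumps = sorted(Path(backup_dir).glob("db-*.sql.gz"))
    for old in dumps[:-keep]:
        old.unlink()
    return [p.name for p in dumps[-keep:]]
```

The pruning half is the part worth testing; the dump itself is just pg_dump streamed to a timestamped file, and the directory can then be synced to object storage by whatever tool the cloud provides.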

https://redd.it/sivr61
@r_devops
An SME guidebook to security with Kubernetes

I am sharing my knowledge and research on "Addressing security with containers – an SME perspective"



It is crucial to think about security from the very beginning of the project. As a first step, SMEs have to be careful about the code they build. Say you use Java as the stack for your microservices architecture: the development teams need to focus on the security vulnerabilities that can arise in their particular Java application and ensure they address them.

First, they need Static Application Security Testing (SAST) tools to run source code analysis. Second, building an application pulls in many third-party software components; to address that risk, a Software Composition Analysis (SCA) tool is a must, since it helps address vulnerabilities identified in those components. These are the two main areas an SME should focus on.

As a third step, say you're using Docker as your container technology. When creating a container, make sure you pull only a hardened image from Docker Hub or any other registry. The CIS benchmarks act as a guiding light for keeping images secure; for example, one such check is that the image's OS does not ship with default credentials, such as 'Admin' as both username and password. So, when importing Docker images, apply CIS controls to build a secure image. For this you'll need scanners; Qualys, for example, offers a container and registry image scanner that companies can use to build secure images.

All the applications and software components that have been scanned go into the secure image. After that, you need to make sure this secure code is what gets deployed in the Docker image. Then come container image scans, which look for any container-related vulnerabilities.

These three security procedures are mandatory to ensure that the early phases of your Kubernetes adoption are genuinely secure.

The final step is to run security configuration checks on Kubernetes itself. Companies can use CIS-defined baselines to help build a highly secure environment.

Four components, then, are required for any organization: SAST tools, Software Composition Analysis tools, container image scanners, and configuration scans on Kubernetes.
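The four checks above can be sketched as one CI step. The tool choices here (semgrep for SAST, trivy for SCA and image scanning, kube-bench for CIS checks on Kubernetes) are illustrative assumptions, not tools the post prescribes:

```python
import subprocess


def scan_commands(image: str, src_dir: str = ".") -> list:
    """The four scans from the post, as concrete (assumed) tool invocations."""
    return [
        ["semgrep", "scan", src_dir],  # 1. SAST over the source code
        ["trivy", "fs", src_dir],      # 2. SCA: third-party components
        ["trivy", "image", image],     # 3. container image scan
        ["kube-bench"],                # 4. CIS configuration checks on Kubernetes
    ]


def run_all(image: str, src_dir: str = ".") -> None:
    """Run every scan; check=True fails the build on the first finding."""
    for cmd in scan_commands(image, src_dir):
        subprocess.run(cmd, check=True)
```

Calling `run_all("myapp:1.0")` from a pipeline stage would stop the build at the first scanner that exits non-zero, which is the behavior you want for a mandatory gate.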

Share some love if you like my first post in this group

https://redd.it/silged
@r_devops
Optimizing atomic deployments

Over the last few months my team (web) has sought to improve our deployment pipeline. We're making steady progress! But I have concerns... I thought some veterans over in r/devops could advise.

Every time we deploy, our project is completely rebuilt. That means the backend and frontend build steps always run, even for a small change in some unrelated config file. We're using an atomic deployment strategy, so no release shares files with the previous one.

To improve this, my thought is to run these steps only when changes have been made in the respective domains. I.e. no changes to the frontend? Don't rebuild it; just copy the build files from the previous release into the new one. The same pattern would apply to dependency installation.

My question is whether there is any conventional way to do this. My first thought is to hash the files of interest for the two latest deployments, but I suspect this could be rendered worthless if any metadata or timestamps feed into the hash.
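The timestamp concern goes away if the hash is fed only relative paths and file bytes, never any stat() metadata. A minimal sketch:

```python
import hashlib
from pathlib import Path


def tree_digest(root: str) -> str:
    """Digest of relative paths + file contents only, so mtimes and other
    metadata can never change the result."""
    h = hashlib.sha256()
    base = Path(root)
    for p in sorted(base.rglob("*")):
        if p.is_file():
            h.update(p.relative_to(base).as_posix().encode())
            h.update(b"\0")  # separator so path/content boundaries are unambiguous
            h.update(p.read_bytes())
    return h.hexdigest()
```

Store the digest alongside each release; if `tree_digest("frontend")` for the new checkout matches the previous release's stored value, copy the old build output forward instead of rebuilding.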

What say you?

https://redd.it/sj3v8o
@r_devops
Jenkins vs Azure



Hi, my team and I are currently looking into a new CI/CD tool. I was curious to hear some pros and cons of Azure DevOps and Jenkins; if any of you have experience with them, I'd love to hear it.

Thanks

https://redd.it/sixw1f
@r_devops
Which DevOps certification should I get?

Hi r/devops community,

I am planning to get a DevOps engineer certification. Which one do you think is better?

I already have the AWS Solutions Architect - Associate certification and the Certified Kubernetes Administrator certification.

Please suggest.

View Poll

https://redd.it/sjar32
@r_devops
Why Cloud and DevOps are better together

Cloud and DevOps are different approaches; however, both aim to improve processes, productivity, and agility. Smart companies are therefore proactively looking for ways to combine cloud and DevOps to further improve their agility, efficiency, and business results.

https://redd.it/sjbnbt
@r_devops
Using npm to install/uninstall in Dockerfile?

Hi, I'm new to the JavaScript side of things. In some of our Dockerfiles we have npm installing Angular and uninstalling a few other things, e.g. "npm install @angular/..some version" or "npm uninstall <package>".

This seems like an anti-pattern to me, since most of these packages are already in the app's package.json. I originally just ran npm upgrade on a few packages that were broken and pushed the updated package.json and package-lock.json. But later I found the container build was still failing because of these extra npm steps in the Dockerfiles.

Was I wrong to assume all package handling should be done through package.json and package-lock.json? Or are there times when it makes sense to have different packages in the containers?
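For what it's worth, a common pattern is to let `package.json`/`package-lock.json` own every dependency and keep the Dockerfile free of ad-hoc installs and uninstalls. A hypothetical sketch (the base image tag and the `build` script name are assumptions, not taken from the post):

```dockerfile
# Sketch: all dependencies live in package.json / package-lock.json,
# and the Dockerfile only runs a reproducible install from the lockfile.
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci          # installs exactly what package-lock.json pins
COPY . .
RUN npm run build   # assumes a "build" script exists in package.json
```

`npm ci` fails the build if the lockfile and package.json disagree, which surfaces the kind of drift you hit instead of papering over it with extra install/uninstall steps.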

https://redd.it/sja3f0
@r_devops
Unit Test a Jenkinsfile

Has anybody ever had any experience with unit testing a jenkinsfile?

It's an area I'm not too confident in, but we have a lot of large, complex, shared pipelines, and the ask is to look into how we can unit test the Jenkinsfiles/Groovy scripts. I'm not sure where to start, so any advice would be great!

https://redd.it/siqurl
@r_devops
I do not know where to go next.

I study the DevOps methodology; I like managing machines and programming in Python. I've never worked as a DevOps engineer, so I'm on the lookout for a job while studying the tools.

I know how to use Kubernetes, Docker, Jenkins, GitHub actions and other tools.

Usually I study everything through articles, and I am also reading a book: "The System Administrator's Guide" by Evi Nemeth. Now I am seeing my learning slow down because there is no structured way to study DevOps. For example, I studied Python through a course and, thanks to that, mastered it well; with DevOps I don't know how to study it the same way. Is it worth doing my own project and learning from it?

https://redd.it/sjee6x
@r_devops
Migrating from Kubernetes PodSecurityPolicies

PodSecurityPolicies provide security for Kubernetes pods and will be removed soon.
Read more about it here: https://www.appvia.io/blog/podsecuritypolicy-is-dead-long-live
In this blog you will learn:
- What PSPs are
- A look at PSP alternatives
- How to migrate PSPs using a migration tool

https://redd.it/sjfeim
@r_devops
Create zookeeper partial replication server

We were going through the ZooKeeper documentation and found that ZooKeeper can have 2n+1 replica servers containing the same data as the leader.

But our requirement is for a central server (server-0) holding the configuration details of all servers, plus partial replica servers (server-1, server-2, server-3) that, split by namespace, hold only the configuration data of their respective servers and stay in sync with server-0 to track any changes to the data.

Configuration Server Diagram

Is it possible to create a solution for the above problem using either Zookeeper or any other configuration management system?

https://redd.it/sjgoo0
@r_devops
GitOps and progressive delivery

I manage k8s cluster state the GitOps way with ArgoCD, and now I would like to do canary rollouts. There is Argo Rollouts, which I could use, but I read an article from December 2020 claiming it isn't really GitOps-compatible: if a rollout fails, the fact that the previous version of the deployment is running in the cluster (instead of the new revision) is not reflected in git state. In other words, looking at git you would assume the new version is running. It seems Argo Rollouts is not git-aware at all.

How do you solve that problem today? Or, as a broader question, how do you progressively deliver new revisions with GitOps?

https://redd.it/sjj39h
@r_devops
Do you like your DevOps job?

Just a simple survey I was interested in. Tell us how you feel right now in your DevOps-related role.

View Poll

https://redd.it/sjmprt
@r_devops
HELP - s3fs slowness when using sftp

#### Version of s3fs being used

Amazon Simple Storage Service File System V1.90 (commit:v1.90) with OpenSSL
Copyright (C) 2010 Randy Rizun [email protected]
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

#### Version of fuse being used

Package: fuse
Status: deinstall ok config-files
Priority: optional
Section: utils
Installed-Size: 113
Maintainer: Ubuntu Developers <[email protected]>
Architecture: amd64
Version: 2.9.9-3
Config-Version: 2.9.9-3
Depends: libc6 (>= 2.28), libfuse2 (= 2.9.9-3), adduser, mount (>= 2.19.1), sed (>= 4)
Conffiles:
/etc/fuse.conf 298587592c8444196833f317def414f2 obsolete
Description: Filesystem in Userspace
Filesystem in Userspace (FUSE) is a simple interface for userspace programs to
export a virtual filesystem to the Linux kernel. It also aims to provide a
secure method for non privileged users to create and mount their own filesystem
implementations.
Original-Maintainer: Laszlo Boszormenyi (GCS) <[email protected]>
Homepage: https://github.com/libfuse/libfuse/wiki

#### Kernel information

5.11.0-1028-aws

#### GNU/Linux Distribution

NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

#### /etc/fstab entry

s3fs#my-bucket-data /data fuse _netdev,allow_other,retries=1,iam_role=my-instance-role,default_acl=bucket-owner-full-control,use_cache=/tmp/cachem,use_sse=1,umask=0022,dbglevel=info 0 0

#### Log File

Here is a link to the [log file](https://pastebin.com/ugDJESP0)

I am running an SFTP server that chroots users to `/data/user`. `/data` is the mount point for s3fs and `my-bucket`. My instance and the bucket are both in the us-east-1 region of AWS. When I attempt to connect over SFTP (`sftp user@endpoint`), after entering my password there is about a 12-second delay before I can see my directory (which currently has 0 files). This happens with any SFTP client. When I look at the logs, I notice calls looking for directories that don't exist (nor have I specified them anywhere), for example `.cache`, `.ssh`, `proc`, etc. When I created the users, I made sure to use a custom `/etc/skel2` that just creates an empty home directory for each user. Why are these files being looked up?
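One plausible explanation for those lookups: `.cache`, `.ssh`, and similar probes usually come from the login shell and OpenSSH inspecting the user's home directory at login, and every such stat is a round trip to S3 when the home lives on s3fs. If these users only ever need SFTP, forcing OpenSSH's in-process SFTP server avoids spawning a shell at all. A hypothetical `sshd_config` fragment (the group name is a placeholder, and the chroot path is assumed to match the post's layout):

```
# Serve SFTP in-process so no login shell runs for these users.
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /data/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

This is a sketch, not a confirmed diagnosis; comparing the s3fs log timestamps against the moment sshd finishes authentication would show whether the delay really is home-directory probing.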

https://redd.it/sjnqbe
@r_devops
nginx doesn't serve static files (sends them to UWSGI)

I have read every single StackOverflow post about this and tried a dozen configurations, but the static files keep getting sent to uWSGI for my Flask app. I may be missing something really basic. Here's the config file:

worker_processes auto;
worker_rlimit_nofile 4096;

events {
    worker_connections 4096;
}

http {
    client_max_body_size 100M;

    upstream fmc {
        server fmc_app:8000;
    }

    server {
        listen 80;
        server_name <mydomain>;
        client_body_buffer_size 10M;
        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name <mydomain>;
        ssl_certificate /etc/letsencrypt/live/<mydomain>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<mydomain>/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        location / {
            proxy_pass https://fmc;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
            proxy_read_timeout 3600;
            proxy_send_timeout 3600;
            proxy_connect_timeout 300;
            proxy_request_buffering on;
            client_body_timeout 300;
            keepalive_timeout 300;
        }

        location /static {
            alias /var/www/static/;
        }
    }
}

nginx is in a container, but I have added the necessary volumes, and I can access the static folder from inside the container using `docker exec -it`.

https://redd.it/sjpm6f
@r_devops
What is wrong with my YAML file for Azure DevOps?

My background: I am new to Azure DevOps and have very little coding experience, although I am becoming more familiar with PowerShell as it relates to provisioning services in Azure.

Problem: I want to use an Azure DevOps pipeline to create a new resource group (starting basic, with complexity increasing over time) by using a YAML file that references a PowerShell file I have uploaded to an Azure DevOps file repo.

Details:

The PowerShell script is as follows:
" new-azresourcegroup -location 'Eastus' -name 'resourcegroup-test' "
YAML file generated by Azure DevOps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Test Subscription(--tenantID--)'
    ScriptType: 'FilePath'
    ScriptPath: 'ResourceGroup-CreationTest.ps1'
    azurePowerShellVersion: 'LatestVersion'

I am posting here because I don't believe this to be overly complicated, but when I try to run the pipeline I get the error:

Encountered error(s) while parsing pipeline YAML:
/ResourceGroupCreation-TestPipeline.yml (Line: 1, Col: 1): A sequence was not expected

Thank you in advance; I don't need the entire solution, even guidance is appreciated.
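For reference, "A sequence was not expected" at line 1, column 1 typically means the YAML file begins with a bare list item (`- task: ...`), while the pipeline schema expects a mapping at the top level, such as a `steps:` key. A sketch of the same task nested under `steps:` (indentation assumed; values copied from the post):

```yaml
steps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Test Subscription(--tenantID--)'
    ScriptType: 'FilePath'
    ScriptPath: 'ResourceGroup-CreationTest.ps1'
    azurePowerShellVersion: 'LatestVersion'
```

This is a minimal sketch rather than a verified fix; a full pipeline may also need `trigger:` and `pool:` sections depending on the project's defaults.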

https://redd.it/sjrujs
@r_devops
What is the formal name of the principle to have one artifact for all environments?

I know that an anti-pattern in a CI/CD pipeline is rebuilding a new artifact for each environment.

But I am not sure what term describes the pattern of building the artifact only once and deploying/promoting it to multiple environments. I couldn't figure out what search term to use to find references to this pattern.

https://redd.it/sjqkl1
@r_devops