Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
--cert=/home/vault/certs/server-cert.pem --key=/home/vault/certs/server-key.pem -n vault
secret/vault-cert created
```
```
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc create secret generic pki-int-cert --from-file=ca.pem=/home/vault/certs/ca.pem -n vault
secret/pki-int-cert created
```
```
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc edit statefulset.apps/vault
```
And I've updated the volumeMounts section like this:
```
volumeMounts:
- mountPath: /vault/data
  name: data
- mountPath: /vault/config
  name: config
- mountPath: /home/vault
  name: home
- mountPath: /vault/certs
  name: certs
  readOnly: true
```
And the volumes section like this:
```
volumes:
- configMap:
    defaultMode: 420
    name: vault-config
  name: config
- emptyDir: {}
  name: home
- name: certs
  projected:
    defaultMode: 420
    sources:
    - secret:
        name: pki-int-cert
    - secret:
        name: vault-cert
```

I kill the vault-0 pod so the changes take effect, and I check whether my pod has access to my different secrets:
```
[tim@localhost certs]$ oc rsh vault-0
/ $ ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr vault
/ $ cd vault/
/vault $ ls
certs config data file logs
/vault $ cd certs/
/vault/certs $ ls
ca.pem tls.crt tls.key
```
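As a side note, while inside the pod you can sanity-check that the mounted tls.crt and tls.key actually belong together by comparing their moduli. This is a sketch using a throwaway self-signed pair; in the pod you would point the last two commands at /vault/certs instead:

```shell
# Generate a throwaway key/cert pair just to demonstrate the check
# (not from the original post).
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=vault" 2>/dev/null

# A cert and key match when their RSA moduli hash to the same value.
crt_mod=$(openssl x509 -noout -modulus -in tls.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in tls.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "cert and key match"
```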
Then I've edited the vault-config file like this:
```
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc edit cm vault-config
```
```
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_cert_file = "/vault/certs/tls.crt"
      tls_key_file = "/vault/certs/tls.key"
      tls_client_ca_file = "/vault/certs/ca.pem"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
```

And I kill my pod again to pick up the changes.

After that, when I try to use the first route I created, I get this error:

https://nsa40.casimages.com/img/2021/03/02/21030210511540708.png

So I deleted the first route and recreated it with HTTPS:
- Networking part > Routes > Create routes
- Name : vault-route
- Hostname : 192.168.130.11
- Path :
- Service : vault
- Target Port : 8200 -> 8200 (TCP)
- Secure route enabled
- TLS Termination : Passthrough
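For reference, the same passthrough route can also be written as a manifest and applied with oc apply -f. This is a sketch built only from the settings listed above (using an IP as the host is unusual, but it mirrors the hostname used here):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: vault-route
spec:
  host: 192.168.130.11
  to:
    kind: Service
    name: vault
  port:
    targetPort: 8200
  tls:
    termination: passthrough
```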

https://zupimages.net/up/21/09/c1le.png

And when I try the URL https://192.168.130.11/ui:
https://zupimages.net/up/21/09/tkad.png


I get this error. I think I missed something, but I don't know what.

Can someone help me?

Thanks a lot !

https://redd.it/m0pqkx
@r_devops
Been in “DevOps” role for 2ish years...never done it.

Hello all,

Looking for some guidance here on next steps in my career. I was offered an opportunity to transition into a Cloud Infra / DevOps role at my current company about 2 years ago. Previously, I’d been working mostly on windows endpoints and servers for the last 6-7 years. I’ve always had a passion for automation and consider myself fairly advanced at scripting in PowerShell.

My interest in DevOps really came about when I taught myself Git and started using it for my scripts. When given the opportunity to make the switch, I did so with the expectation that the team/environment would help me build on this and get exposure to all aspects of IaC, ci/cd, etc. as well as nurture my interest in coding as it pertains to infrastructure. That has not been the case at all.

I should note this is an internal IT department. We’re not shipping any code or doing CI of any kind. For the most part, we have no idea what runs on the infrastructure we manage. I’ve made attempts to bridge the gaps between our team and dev teams - trying to understand how we can make their lives easier. But there is no alignment at all. They plan, strategize, deploy, and mostly just bother me when they need a new box or something breaks. My team, mostly made up of traditional infra sys admins, has essentially no interest either. I am the “DevOps guy”, they do other stuff. It wouldn’t be a problem had I actually got some exposure or experience in how this is all supposed to work previously, besides my own reading and studying. This has been a disappointing experience.

That said, I wouldn't say the last two years have been a waste. I've learned a ton about Azure as well as finally got exposure to managing Linux infrastructure, mostly Java app servers, some HA clusters, and SFTP. But I know I'm not going to get the exposure/experience I need to truly be successful by staying in the current environment. For the most part, our version of DevOps is pipelining our image builds and putting config management/Salt scripts into Git. That's it. The job would be done at that point.

Being someone that loves coding, I really want to understand how web app architectures work, how to scale a production environment, ship code, implement meaningful observability, the works.

My question: given the circumstances, I'm planning to take 3-6 months off to frankly do everything I just said on my own and build a portfolio before applying to a "real" DevOps-minded shop. Is this wise? How can I explain that although I had the title, I didn't get the exposure? Any suggestions for how to make the most of the time I will have to study/transition?

Thanks a ton

https://redd.it/m0tg7y
@r_devops
Comtrya: Rust Application for Local Configuration Management / Dotfiles

Hi,

I'm working on a new tool to help simplify dotfiles and packages when bootstrapping a new machine, with the plan to support more actions to provide single machine configuration management.

It's early days, but I wanted to share a quick demo and the repository and get some initial feedback.

​

Sharing both mirrors, as I'm happy to receive issues and PR/MRs on either.

​

https://gitlab.com/rawkode/comtrya

https://github.com/rawkode/comtrya

​

DEMO VIDEO>> https://i.rawko.de/kpu7rZ85

​

I hope some of you find this useful and I'm excited to bring more features over the coming days and weeks

https://redd.it/m0ujmn
@r_devops
Do you suffer from downtime when deploying and how do you manage to solve it?

Downtime can be caused by a lack of coordination between new and old deployed services, bugs that are discovered only after deploy, or other incidents that may harm user experience and overall system resilience. What is your strategy, and how effective is it?
(Do you still suffer from downtime despite taking those measures?)
Thanks.

https://redd.it/m0sbl7
@r_devops
SSH into Ubuntu MAAS

I'm testing out MAAS. I've PXE booted a VM in vCenter to install ubuntu 18.04. The machine booted up and got an IP address.

Problem is, I can't seem to SSH into it. I've made sure to import the SSH key by doing:

john john-lnx ~ $ cat ~/.ssh/id_rsa.pub
# Copy the output, and paste it in the MAAS webgui for SSH keys. (I've done that in the MAAS installation but again now for troubleshooting)

When I try to SSH into the machine, this is what I get:


john john-lnx ~ $ ssh ubuntu@172.24.25.232 -v
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /home/john/.ssh/config
debug1: /home/john/.ssh/config line 2: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 172.24.25.232 [172.24.25.232] port 22.
debug1: Connection established.
debug1: identity file /home/john/.ssh/id_rsa type 0
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ed25519-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 172.24.25.232:22 as 'ubuntu'
debug1: SSH2_MSG_KEXINIT sent
Connection reset by 172.24.25.232 port 22

Is this caused by the `key_load_public` files it seems to look for? What did I do wrong?

https://redd.it/m13mp4
@r_devops
Ubuntu's MAAS install configuration

In a usual PXE installation, a preseed is given to set up things like date, timezone, language, root user, disks, etc.

I'm testing things out with MAAS, and after it installed a node, the node already had a custom configuration, sort of skipping those steps.

In MAAS, how are those things defined? Where can I specify things like timezone, language, root user, etc?

Thanks ahead!

https://redd.it/m19f9u
@r_devops
Any good DevOps / Engineering podcasts?

Hey everyone!

Can someone recommend some good DevOps / engineering podcasts that talk about things like different deployment strategies, different ways to do CI/CD, microservices, etc.?

https://redd.it/m19y4r
@r_devops
Attempt to visualize Nomad and Consul topology with old school tool (TcpDump)

Hi folks,

If you are working with Nomad and Consul, you have probably noticed the lack of visualization tools and solutions for network traffic.

I took on this task and started a new solution called LiteArch Trafik

It cross-references data between Consul and Docker metadata on each node, and uses a very small set of dependencies: jq, tcpdump, Docker, and the Consul API.

There is a lab to launch a Nomad and Consul cluster with Ubuntu Multipass, using some bash scripting as well, to try it out.

If you are looking for similar solutions give it a try and let me know what you think of it.

​

Documentation

Code

https://redd.it/m12lnp
@r_devops
Pushing new configurations



Our production environment is not connected to any other environment.

When new configurations need to be implemented (which happens pretty often), the configuration files are packaged: one package for Linux nodes, the other for Windows nodes.

A release note is given, and the engineers go through the document, which contains instructions on how the new configurations ought to be deployed. Usually it's the same, but sometimes it changes; the only way we would know is to read the document.

Is there any way this can be automated?

e.g. a tool that reads the document and does the job automatically?

Or a way I can get the development team to package the configuration files so they can be easily deployed?

I don't know if I'm asking for something that doesn't exist, but I find what we do really ridiculous. Of course, asking the development team to package the configuration files in a different way would be a war, since we have to deal with humans.

But I think I am able to do it if I push the bosses enough.

​

​

I am open to any ideas

https://redd.it/m10eej
@r_devops
I figured out a CI/CD pipeline with Github Actions and AWS CDK

Disclaimer: I'm not DevOps, just a plain ole' regular SWE, but I believe this falls under the DevOps domain, right?

The flow is:
1. Github actions build the project; JS/CSS/HTML assets are exported
2. Github actions run cdk and deploy infra as well as upload assets
3. That's about it

Seems simple, but when it worked, it was mind blowing...
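A minimal workflow along the lines of the steps above might look like this sketch (the file name, npm scripts, Node version, and secret names are assumptions, not from the post):

```yaml
# .github/workflows/deploy.yml (hypothetical names throughout)
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14
      # 1. Build the project; JS/CSS/HTML assets are exported
      - run: npm ci && npm run build
      # 2. Run cdk to deploy infra and upload the assets
      - run: npx cdk deploy --require-approval never
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```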

Post: JAMStack CI/CD with Lerna, NextJS, CDK, and Github Actions

https://redd.it/m0w444
@r_devops
Nginx sub URL based routing?

URLs and locations:
/R1/v1/dev should redirect to 10.10.10.10/v1/dev
So here /v1/dev will be part of the request that should be considered part of proxy_pass's URL.
/v1/dev is not a static value; whatever comes after /R1 in the location should be appended as proxy_pass's end URL.

/R1/v1/test should redirect to **20.20.20.20/v1/test**

Is it possible to have this kind of configuration on a single nginx server?
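For what it's worth, this can be sketched with regex locations and a capture group. The IPs come from the post above, but the configuration itself is an untested sketch:

```nginx
# Route /R1/v1/dev... to 10.10.10.10 and /R1/v1/test... to 20.20.20.20,
# forwarding everything after /R1 as the upstream path ($1 is the capture).
location ~ ^/R1(/v1/dev.*)$ {
    proxy_pass http://10.10.10.10$1;
}
location ~ ^/R1(/v1/test.*)$ {
    proxy_pass http://20.20.20.20$1;
}
```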

https://redd.it/m0zfon
@r_devops
New project: Event-Based Serverless Container Workflows with Direktiv

G'day DevOps!

Apologies if this is the wrong group - we posted this in r/serverless and asked for advice on other groups, and someone DM'ed and suggested r/devops. We wanted to share with you the latest creation from our team!

Direktiv is an open-source event-driven serverless container workflow engine.

Event-driven because we support the CloudEvents standard (also scheduled execution & API driven). Serverless because workflows and execution are instantiated when needed using containers or vorteil. Workflow engine because that's at its core what Direktiv is.

Direktiv was created to address 4 problems we faced with workflow engines:

1. Cloud agnostic: we wanted Direktiv to run on any platform, support any code and NOT be dependent on the cloud provider's services
2. Simplicity: the configuration of the workflow components should be simple more than anything else (only YAML and jq to express all states, transitions, evaluations and actions). We've modelled Direktiv's specification after the CNCF Serverless Workflow Specification with the ultimate goal to make it feature-complete and easy to implement
3. Reusable: should have the ability to reuse/standardise containerised code across workflows
4. Multi-tenanted/secure: we want to use Direktiv in a multi-tenant service provider space, which means all workflow executions have to be isolated; data access secured and isolated, and all workflows and actions are truly ephemeral.

The workflow language is VERY simple YAML primitives expressions. We're pretty confident in the engine now, so we're now focused on building standard containers to be used. You can see the progress (for now) on Docker Hub (https://hub.docker.com/search?q=vorteil&type=image)

Direktiv Github: https://github.com/vorteil/direktiv as open source

Documentation: https://docs.direktiv.io/

Beta front-end: https://wf.direktiv.io/ - we hope to make this a commercial component of the product.

Please let us know what you think about the idea, the implementation, use-cases for it (we have a couple in mind) or some real-world examples (this is what we need help with).

I promised James (of the team members who talks a lot) that I would end the HN introduction with the lines below:

# The Prime Direktiv:

Captain's log, stardate 47634.44. Cloud bills are high, we're dependent on dinosaur companies and we still have no standards. Forget about boldly changing anything, we just want to change SOMETHING

https://redd.it/m1jdf1
@r_devops
Best place to find AWS SRE contractors?

I'm looking to contract with an SRE who has experience setting up EKS clusters on AWS with full CI/CD (from GitHub), wildcard SSL, etc. I'd also like to set up the ability to spin up ephemeral test environments on PR creation. Would someone here be available for a contract? Or is there a better place to look?

https://redd.it/m1ihon
@r_devops
How common is it for a company to use SaaS products and for security to object to every single external connection the SaaS provider requests?

It's a rant. But my company is trying to have a digital transformation. They have paid for every single tool in the world. But when it comes to working with SaaS products, security will simply put up roadblocks for everything that the provider asks. For example, a monitoring SaaS product we use is requesting access to our AWS account to pull metrics. However, Security needs to review that request, essentially delaying the work for the unforeseeable future. How common is it in other companies? The previous companies I have worked in never had these issues, and now I am pissed off due to these hurdles every single day.

https://redd.it/m1idgk
@r_devops
Securing deployment of NGINX config via git push with hooks

Hi All,

Let me know if this question would make more sense in r/nginx - but this is less about NGINX and more about deploying config to a server via git push.

Currently I have our nginx config in a git repo on my local machine, with a remote origin shared by the team in bitbucket.

On our primary NGINX server, I have set up a bare git repo at /nginx.git, with a post-receive hook something like:

#!/bin/bash
WORKTREE="/etc/nginx"
GITDIR="/nginx.git"
TARGETBRANCH="DEV"

while read oldrev newrev ref
do
  if [ -n "$ref" ] && [ "$ref" == "refs/heads/$TARGETBRANCH" ]; then
    git --work-tree=$WORKTREE --git-dir=$GITDIR checkout $TARGETBRANCH -f
    sudo nginx -t && sudo nginx -s reload
  else
    echo "ERROR : this server is only for $TARGETBRANCH"
  fi
done

On my local repo, I have git remotes set up pointing to our DEV, QA and PROD NGINX primary servers:

dev nginxadmin@devnginx01:/nginx.git
qa nginxadmin@devnginx01:/nginx.git
prod nginxadmin@devnginx01:/nginx.git

This allows me to do a git push of branch DEV, QA or PROD to the remote NGINX server:

git push dev DEV

The hook will run and the config will be checked out to /etc/nginx; if the config check is successful, the config is reloaded with sudo nginx -t && sudo nginx -s reload.

There are multiple NGINX servers in each environment, the config is synced between each of them using nginx-sync.

This setup is working well and is how the team has been managing the deployment for some time.

I have a few issues with this setup in regards to security and am hoping for some advice on how to secure it further.

To start, the git checkout to /etc/nginx requires permission to overwrite those files, so we all use the same user for the git remote (nginxadmin), and nginxadmin then owns all files in /etc/nginx instead of root.

The sudo nginx -t && sudo nginx -s reload requires nginxadmin to be added to the sudoers file and allowed to run those commands without a password.

nginxadmin ALL = NOPASSWD: /usr/sbin/nginx -t, /usr/sbin/nginx -s reload

nginx-sync runs as root and requires PermitRootLogin without-password to be added to sshd_config.

I can look at trying to run nginx-sync as nginxadmin and change ownership of /etc/nginx on all servers to nginxadmin - But is nginxadmin owning the /etc/nginx secure in the first place?

Is there any other way to check the config and reload it if successful after a config deployment?

Any other suggestions?

https://redd.it/m1nhq5
@r_devops
Nessus vulnerability scans

Why are some devices coming back with "Weak MAC algorithm supported" ?
I have sorted this on all other devices by editing the sshd_config file, but these still persist.
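For the devices that still get flagged, the usual remedy is an explicit MACs line in /etc/ssh/sshd_config followed by an sshd restart. This is a sketch; exact algorithm availability depends on the OpenSSH version on the device:

```
# /etc/ssh/sshd_config - allow only strong MACs, then restart sshd
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
```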

Any advice?

https://redd.it/m1jnjb
@r_devops
Looking for Zap automation with c# guide

Any recommendations? It's my best language, but every ZAP scanner guide is for some other programming language and I can't get it to work, which I chalk up to my suckiness in said language.
I've been looking for a C# automation of ZAP but so far no luck :(

https://redd.it/m1id4b
@r_devops
DevOps Career Advice

**alt account**

I have kind of a weird background when it comes to IT and "DevOps" and am looking for advice.

Background:

- Have a non-technology bachelor's degree and got a low-level job at a company a few years ago.

- Through luck I moved from a non-tech job at this company to the Tech Manager role.

In the past 1-1.5 years this company has changed a lot and is moving toward a technology focus, including building out a new app. Being in the IT Manager role, I have bounced between the Ops and Engineering (IT) worlds (mainly doing lower-level IT things), but recently (the past 7-8 months) I have overseen rebuilding our AWS environment to provide secure and highly available infrastructure for the application.

Without having a dedicated DevOps person, I have also taken on a semi-split role in it (DevSecOps) along with Cloud “architecting” and my normal IT duties for which I usually have my IT Specialist take care of. (this is a small company of about 60)

Dilemma:

My dilemma makes me feel kind of greedy in that I feel I should be making more, but not sure how much more since I do not have a long history of experience and no degree. The only certificates I have are CompTIA Sec+ and AWS CCP (though studying for the AWS CSA; kind of on the back burner). When I bring up possibly making more money due to the responsibilities I have taken and the amount of progress in building out and managing our AWS Infrastructure, I usually get a “you have limited experience” or “most cloud architects/devops that make good money also know how to code”. I agree and understand, but I am also doing the work, besides coding…and now taking on a split DevOps'ish role I feel like I am basically doing higher level work for moderate pay.

More info:

When I took over the IT role:

Normal IT duties, which migrated to compliance/audit-proofing for a bit and eventually moved to implementing SIEM, MDM, and AWS maintenance (a couple of EC2s, ECS, S3, VPC, Route 53).

Past 8-9 months:

Working with RDS, ECS, EC2s, Beanstalk, S3, R53, Redshift, Jfrog, Neo4j (debugging and setup on ec2), lambda, Cloudwatch, Guardduty, etc. When this started, most of the infrastructure was built out as a 50/50 split between myself and the engineering team, but over the past 5-6 months I have built out a whole new Dev/Staging and Prod AWS Account/Env for which I did by myself, including migrating our CI.

​

Sorry for the randomness of thoughts...kind of in a weird spot

https://redd.it/m1c1cj
@r_devops
I get 401 Unauthorized when I run mvn deploy

Hello, I just installed Sonatype Nexus Repository Manager v3.30.0-01 on an AWS EC2 Ubuntu instance and can successfully access the GUI.

Now my problem is that when I execute `mvn deploy` on my local project, it gets rejected with 401 Unauthorized:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project social-carpooling-commons: Failed to deploy artifacts: Could not transfer artifact io.social.carpooling:social-carpooling-commons:jar:0.0.1-20210309.180217-1 from/to snapshots (https://myIpAddress:8081/repository/maven-snapshots): Transfer failed for https://myIpAddress:8081/repository/maven-snapshots/io/social/carpooling/social-carpooling-commons/0.0.1-SNAPSHOT/social-carpooling-commons-0.0.1-20210309.180217-1.jar 401 Unauthorized

Here is my pom.xml config :

<distributionManagement>
  <repository>
    <id>releases</id>
    <name>Nexus Releases</name>
    <url>https://ipAddress:8081/repository/maven-releases</url>
  </repository>
  <snapshotRepository>
    <id>snapshots</id>
    <name>Nexus Snapshots</name>
    <url>https://ipAddress:8081/repository/maven-snapshots</url>
  </snapshotRepository>
</distributionManagement>

and my maven settings.xml :


<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="https://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="https://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- localRepository
   | The path to the local repository maven will use to store artifacts.
   |
   | Default: ${user.home}/.m2/repository
  <localRepository>/path/to/local/repo</localRepository>
  -->

  <proxies>
  </proxies>

  <servers>
    <server>
      <id>snapshots</id>
      <username>admin</username>
      <password>nexus-admin</password>
    </server>
    <server>
      <id>releases</id>
      <username>admin</username>
      <password>nexus-admin</password>
    </server>
    <server>
      <id>thirdparty</id>
      <username>admin</username>
      <password>nexus-admin</password>
    </server>
  </servers>

  <mirrors>
    <mirror>
      <!-- This sends everything else to /public -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>https://ipAddress:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>

  <profiles>
    <profile>
      <id>nexus</id>
      <repositories>
        <repository>
          <id>central</id>
          <url>https://central</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>https://central</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>

  <activeProfiles>
    <activeProfile>nexus</activeProfile>
  </activeProfiles>