Do you suffer from downtime when deploying and how do you manage to solve it?
Downtime can be caused by a lack of coordination between newly deployed and older services, by bugs that are discovered only after deployment, or by other incidents that harm user experience and overall system resilience. What is your strategy, and how effective is it?
(Do you still suffer from downtime despite taking those measures?)
Thanks.
https://redd.it/m0sbl7
@r_devops
SSH into Ubuntu MAAS
I'm testing out MAAS. I've PXE booted a VM in vCenter to install ubuntu 18.04. The machine booted up and got an IP address.
Problem is, I can't seem to SSH into it. I've made sure to import the SSH key by doing:
john john-lnx ~ $ cat ~/.ssh/id_rsa.pub
# Copy the output, and paste it in the MAAS webgui for SSH keys. (I've done that in the MAAS installation but again now for troubleshooting)
When I try to SSH into the machine, this is what I get:
john john-lnx ~ $ ssh ubuntu@172.24.25.232 -v
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /home/john/.ssh/config
debug1: /home/john/.ssh/config line 2: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 172.24.25.232 [172.24.25.232] port 22.
debug1: Connection established.
debug1: identity file /home/john/.ssh/id_rsa type 0
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/john/.ssh/id_ed25519-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 172.24.25.232:22 as 'ubuntu'
debug1: SSH2_MSG_KEXINIT sent
Connection reset by 172.24.25.232 port 22
Is this caused by the public key files it seems to look for (`key_load_public`)? What did I do wrong?
https://redd.it/m13mp4
@r_devops
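For reference, a minimal sketch of how one might verify the key MAAS will inject, assuming the MAAS CLI is installed and a profile named "admin" exists (the profile name, API-key placeholder and system ID are assumptions, not from the post). Keys are only written to ~ubuntu/.ssh/authorized_keys at deploy time, so a key added afterwards needs a redeploy:
# Hedged sketch only: confirm the key is registered with MAAS, then redeploy the node.
maas login admin http://<maas-ip>:5240/MAAS/ <api-key>
maas admin sshkeys read                      # should list the id_rsa.pub pasted into the web GUI
maas admin machine release <system-id>       # keys are injected by cloud-init during deployment
maas admin machine deploy <system-id>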
Ubuntu's MAAS install configuration
In a usual PXE installation, a preseed is given to set up things like date, timezone, language, root user, disks, etc.
I'm testing things out with MAAS, and after it installed a node, the node already had a custom configuration, effectively skipping those steps.
In MAAS, how are those things defined? Where can I specify things like timezone, language, root user, etc?
Thanks ahead!
https://redd.it/m19f9u
@r_devops
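Not an authoritative answer, but for orientation: MAAS drives installs from curtin preseed templates on the region controller and from per-deploy cloud-init user-data. A rough sketch (the path is typical for MAAS 2.x packages, and the "admin" profile, system ID and user_data parameter usage are assumptions) of where one would look and how timezone/locale style settings could be passed:
# Hedged sketch only.
ls /etc/maas/preseeds/                       # curtin preseed templates used for the install itself
cat > user-data <<'EOF'
#cloud-config
timezone: Europe/Berlin
locale: en_US.UTF-8
EOF
maas admin machine deploy <system-id> user_data="$(base64 -w0 user-data)"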
Any good DevOps / Engineering podcasts?
Hey everyone!
Can someone recommend some good DevOps / engineering podcasts that talk about things like different deployment strategies, different ways to do CI/CD, microservices, etc.?
https://redd.it/m19y4r
@r_devops
Attempt to visualize Nomad and Consul topology with old school tool (TcpDump)
Hi folks,
If you are working with Nomad and Consul, you have probably noticed the lack of tools for visualizing network traffic.
I took on this task and started a new solution called LiteArch Trafik.
It cross-references Consul data and Docker metadata on each node and uses a very small set of dependencies: jq, tcpdump, and the Docker and Consul APIs.
There is also a lab that launches a Nomad and Consul cluster with Ubuntu Multipass, using some bash scripting, so you can try it out.
If you are looking for similar solutions give it a try and let me know what you think of it.
Documentation
Code
https://redd.it/m12lnp
@r_devops
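To illustrate the kind of cross-referencing described (this is not the LiteArch Trafik code, just a sketch; "web" is an example service name and the Consul agent address is assumed to be local):
# Hedged sketch: join Consul's view of a service with local Docker metadata, then sample its traffic.
ADDR=$(curl -s http://127.0.0.1:8500/v1/catalog/service/web | jq -r '.[0].ServiceAddress')
docker inspect --format '{{.Name}} {{.NetworkSettings.IPAddress}}' $(docker ps -q)   # local container metadata
sudo tcpdump -nn -c 20 host "$ADDR"                                                  # sample traffic to/from the service address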
Pushing new configurations
Our production environment is not connected to any other environment.
When new configurations need to be implemented (which happens pretty often),
the configuration files are packaged: one package for Linux nodes, the other for Windows nodes.
A release note is provided, and the engineers go through the document, which contains instructions on how the new configurations ought to be deployed. Usually it's the same, but sometimes it changes, and the only way we would know is to read the document.
Is there any way this can be automated?
i.e. A tool that reads the document and does the job automatically?
Or a way I can get the development team to package the configuration files so they can be deployed easily?
I don't know if I'm asking for something that doesn't exist, but I find what we do really ridiculous. Of course, asking the development team to package the configuration files in a different way would be a war, since we have to deal with humans.
But I think I am able to do it if I push the bosses enough.
I am open to any ideas
https://redd.it/m10eej
@r_devops
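One hedged idea (not something from the post): if the packages shipped a machine-readable manifest alongside the config files, the "read the release note" step could be scripted. The file names and service name below are made up for illustration:
# Sketch only. manifest.txt is assumed to contain "<source-file> <destination-path>" per line.
while read -r src dest; do
    install -D -m 0644 "$src" "$dest"      # copy each config to its documented location
done < manifest.txt
sudo systemctl reload myservice            # placeholder for whatever consumes the new config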
I figured out a CI/CD pipeline with Github Actions and AWS CDK
Disclaimer, I'm not DevOps, just a plain ole' reggy SWE, but I believe this falls under the DevOps domain, right?
The flow is:
1. Github actions build the project; JS/CSS/HTML assets are exported
2. Github actions run cdk and deploy infra as well as upload assets
3. That's about it
Seems simple, but when it worked, it was mind blowing...
Post: JAMStack CI/CD with Lerna, NextJS, CDK, and Github Actions
https://redd.it/m0w444
@r_devops
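Roughly the commands such a workflow runs, as a hedged sketch (the package scripts, output directory and bucket name are assumptions, not the author's actual pipeline):
# Build the JS/CSS/HTML assets, deploy the CDK stacks, then upload the static output.
npm ci && npm run build
npx cdk deploy --all --require-approval never
aws s3 sync ./out "s3://example-static-site-bucket" --delete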
Nginx sub URL based routing?
URLs
Locations
/R1/v1/dev should redirect to 10.10.10.10/v1/dev
so here /v1/dev is the part of the request that should be appended to the proxy_pass URL.
/v1/dev is not a static value; whatever comes after /R1 in the location should be passed through as the tail of the proxy_pass URL.
/R1/v1/test should redirect to **20.20.20.20/v1/test**
Is it possible to have this kind of configuration on a single nginx server?
https://redd.it/m0zfon
@r_devops
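A minimal sketch of the kind of location block that can do this, assuming nginx built with PCRE (regex locations) and using the dev backend from the post; the snippet path is an assumption and this is illustrative, not a tested config:
# Capture everything after /R1 and pass it straight through, so /R1/v1/dev -> http://10.10.10.10/v1/dev.
sudo tee /etc/nginx/snippets/r1-example.conf >/dev/null <<'EOF'
# include this file inside the relevant server {} block
location ~ ^/R1(?<rest>/.+)$ {
    proxy_pass http://10.10.10.10$rest;
}
EOF
sudo nginx -t && sudo nginx -s reload
A second location with a more specific regex (e.g. matching /R1/v1/test) could send that prefix to the 20.20.20.20 backend instead.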
New project: Event-Based Serverless Container Workflows with Direktiv
G'day DevOps!
Apologies if this is the wrong group - we posted this in r/serverless and asked for advice on other groups, and someone DM'ed and suggested r/devops. We wanted to share with you the latest creation from our team!
Direktiv is an open-source event-driven serverless container workflow engine.
Event-driven because we support the CloudEvents standard (also scheduled execution & API driven). Serverless because workflows and execution are instantiated when needed using containers or vorteil. Workflow engine because that's at its core what Direktiv is.
Direktiv was created to address 4 problems we faced with workflow engines:
1. Cloud agnostic: we wanted Direktiv to run on any platform, support any code and NOT be dependent on the cloud provider's services
2. Simplicity: the configuration of the workflow components should be simple more than anything else (only YAML and jq to express all states, transitions, evaluations and actions). We've modelled Direktiv's specification after the CNCF Serverless Workflow Specification with the ultimate goal to make it feature-complete and easy to implement
3. Reusable: should have the ability to reuse/standardise containerised code across workflows
4. Multi-tenanted/secure: we want to use Direktiv in a multi-tenant service provider space, which means all workflow executions have to be isolated; data access secured and isolated, and all workflows and actions are truly ephemeral.
The workflow language is VERY simple: YAML primitives and jq expressions. We're pretty confident in the engine now, so we're focused on building standard containers to be used. You can see the progress (for now) on Docker Hub (https://hub.docker.com/search?q=vorteil&type=image)
Direktiv Github: https://github.com/vorteil/direktiv as open source
Documentation: https://docs.direktiv.io/
Beta front-end: https://wf.direktiv.io/ - we hope to make this a commercial component of the product.
Please let us know what you think about the idea, the implementation, use-cases for it (we have a couple in mind) or some real-world examples (this is what we need help with).
I promised James (one of the team members, who talks a lot) that I would end the HN introduction with the lines below:
# The Prime Direktiv:
Captain's log, stardate 47634.44. Cloud bills are high, we're dependent on dinosaur companies and we still have no standards. Forget about boldly changing anything, we just want to change SOMETHING
https://redd.it/m1jdf1
@r_devops
Git vulnerability - update your versions
Git vulnerability with a code execution issue: https://github.com/git/git/security/advisories/GHSA-8prw-h3cq-mghm - update now.
https://redd.it/m1j6v9
@r_devops
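A quick hedged way to check and update a client (the package command assumes Debian/Ubuntu; see the advisory itself for the exact fixed releases):
git --version                                                    # compare against the fixed versions listed in the advisory
sudo apt-get update && sudo apt-get install --only-upgrade git
# Until updated, be cautious cloning untrusted repos with clean/smudge filters (e.g. Git LFS) enabled.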
Best place to find AWS SRE contractors?
I'm looking to contract with an SRE that has experience setting up EKS clusters on AWS with full CI/CD (from GitHub), wildcard SSL, etc. I'd like to set up the ability to spin up ephemeral test environments based on PR creation as well. Would someone here be available for a contract? Or is there a better place to look?
https://redd.it/m1ihon
@r_devops
How common is it for a company to use SaaS products while security objects to every single external connection the SaaS provider requests?
It's a rant. But my company is trying to have a digital transformation. They have paid for every single tool in the world. But when it comes to working with SaaS products, the security team simply puts up roadblocks for everything the provider asks for. For example, a monitoring SaaS product we use is requesting access to our AWS account to pull metrics. However, Security needs to review that request, essentially delaying the work for the unforeseeable future. How common is this at other companies? The previous companies I have worked at never had these issues, and now I am pissed off due to these hurdles every single day.
https://redd.it/m1idgk
@r_devops
Securing deployment of NGINX config via git push with hooks
Hi All,
Let me know if this question would make more sense in r/nginx - but this is less about NGINX and more about deployment of config to a server via git push.
Currently I have our nginx config in a git repo on my local machine, with a remote origin shared by the team in Bitbucket.
On our primary NGINX server, I have set up a bare git repo at /nginx.git, with a post-receive hook something like:
#!/bin/bash
WORKTREE="/etc/nginx"
GITDIR="/nginx.git"
TARGETBRANCH="DEV"
while read oldrev newrev ref
do
    if [ -n "$ref" ] && [ "$ref" == "refs/heads/$TARGETBRANCH" ]; then
        git --work-tree=$WORKTREE --git-dir=$GITDIR checkout $TARGETBRANCH -f
        sudo nginx -t && sudo nginx -s reload
    else
        echo "ERROR : this server is only for $TARGETBRANCH"
    fi
done
On my local repo, I have git remotes set up pointing to our DEV, QA and PROD NGINX primary servers:
dev nginxadmin@devnginx01:/nginx.git
qa nginxadmin@devnginx01:/nginx.git
prod nginxadmin@devnginx01:/nginx.git
This allows me to do a git push of branch DEV, QA or PROD to the remote NGINX server:
git push dev DEV
The hook will run and the config will be checked out to /etc/nginx; if the config check is successful, the config is reloaded with sudo nginx -t && sudo nginx -s reload.
There are multiple NGINX servers in each environment, the config is synced between each of them using nginx-sync.
This setup is working well and is how the team has been managing the deployment for some time.
I have a few issues with this setup in regards to security and am hoping for some advice on how to secure it further.
To start, the git checkout to /etc/nginx requires permission to overwrite those files - so we all use the same user for the git remote - nginxadmin, then nginxadmin owns all files in /etc/nginx instead of root.
The sudo nginx -t && sudo nginx -s reload requires nginxadmin being added to the sudoers file and allowed to run those commands without a password.
nginxadmin ALL = NOPASSWD: /usr/sbin/nginx -t, /usr/sbin/nginx -s reload
nginx-sync runs as root and requires PermitRootLogin without-password to be added to sshd_config.
I can look at trying to run nginx-sync as nginxadmin and change ownership of /etc/nginx on all servers to nginxadmin - But is nginxadmin owning the /etc/nginx secure in the first place?
Is there any other way to check the config and reload if successful after a config deployment?
Any other suggestions?
https://redd.it/m1nhq5
@r_devops
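One hedged tweak along the lines being asked about: check the branch out into a temporary work tree and only touch /etc/nginx after nginx -t passes there. This only works cleanly if the config's include paths are relative to the prefix, and the paths below mirror the post:
# Sketch only - validate in a scratch prefix before replacing the live config.
TMP=$(mktemp -d)
git --work-tree="$TMP" --git-dir=/nginx.git checkout "$TARGETBRANCH" -f
if sudo nginx -t -p "$TMP" -c "$TMP/nginx.conf"; then
    sudo rsync -a --delete "$TMP"/ /etc/nginx/
    sudo nginx -s reload
else
    echo "config test failed; /etc/nginx left untouched"
fi
rm -rf "$TMP"
Note this trades one permission problem for another: nginxadmin would need its own sudoers entries for rsync and for nginx -t with -p/-c.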
Nessus vulnerability scans
Why are some devices coming back with "Weak MAC algorithm supported" ?
I have sorted this with all other devices by editing the sshd_config file. But these still persist.
Any advice?
https://redd.it/m1jnjb
@r_devops
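For devices that do run OpenSSH, the fix is usually the MACs directive in sshd_config; a hedged example (the exact algorithm list is a policy choice, not a recommendation from the post):
# Append a MACs line that drops the weak (MD5/96-bit) algorithms, then test and restart.
echo 'MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256' \
  | sudo tee -a /etc/ssh/sshd_config
sudo sshd -t && sudo systemctl restart sshd
Devices that still get flagged after this are often not running OpenSSH at all (switches, BMCs, appliances) and need a vendor-specific setting instead.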
Looking for a ZAP automation with C# guide
Any recommendations? It's my best language, but every ZAP scanner guide is for some other programming language and I can't get it to work, which I chalk up to my weakness in said language.
I've been looking for a C# automation of ZAP but so far no luck :(
https://redd.it/m1id4b
@r_devops
DevOps Career Advice
**alt account**
I have kind of a weird background when it comes to IT and "DevOps" and am looking for advice.
Background:
- Have a non-technology bachelor's degree and got a low-level job at a company a few years ago.
- Through luck I moved from a non-tech job at this company to the Tech Manager role.
In the past 1-1.5 yrs this company has changed a lot and is moving toward a technology focus, including building out a new app. Being in the IT Manager role, I have bounced between the Ops and Engineering (IT) worlds (mainly doing lower-level IT things), but recently (the past 7-8 months) I have overseen rebuilding our AWS environment to facilitate a secure and highly available infrastructure for the application.
Since we don't have a dedicated DevOps person, I have also taken on a semi-split role in it (DevSecOps), along with cloud "architecting" and my normal IT duties, which I usually have my IT Specialist take care of (this is a small company of about 60).
Dilemma:
My dilemma makes me feel kind of greedy in that I feel I should be making more, but not sure how much more since I do not have a long history of experience and no degree. The only certificates I have are CompTIA Sec+ and AWS CCP (though studying for the AWS CSA; kind of on the back burner). When I bring up possibly making more money due to the responsibilities I have taken and the amount of progress in building out and managing our AWS Infrastructure, I usually get a “you have limited experience” or “most cloud architects/devops that make good money also know how to code”. I agree and understand, but I am also doing the work, besides coding…and now taking on a split DevOps'ish role I feel like I am basically doing higher level work for moderate pay.
More info:
When I took over the IT role:
Normal IT duties which migrated to compliance/audit proofing for a bit and eventually moved to implementation of SIEM, MDM, AWS maintenance (couple EC2’s, ECS, S3, VPC, Route53).
Past 8-9 months:
Working with RDS, ECS, EC2s, Beanstalk, S3, R53, Redshift, JFrog, Neo4j (debugging and setup on EC2), Lambda, CloudWatch, GuardDuty, etc. When this started, most of the infrastructure was built out as a 50/50 split between myself and the engineering team, but over the past 5-6 months I have built out a whole new Dev/Staging and Prod AWS account/environment, which I did by myself, including migrating our CI.
Sorry for the randomness of thoughts...kind of in a weird spot
https://redd.it/m1c1cj
@r_devops
I get 401 Unauthorized when I run mvn deploy
Hello, I just installed Sonatype Nexus Repository Manager v3.30.0-01 on an AWS EC2 Ubuntu instance and I can successfully access the GUI.
Now my problem is that when I execute `mvn deploy` on my local project it gets rejected with 401 Unauthorized:
`[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project social-carpooling-commons: Failed to deploy artifacts: Could not transfer artifact io.social.carpooling:social-carpooling-commons:jar:0.0.1-20210309.180217-1 from/to snapshots (https://myIpAddress:8081/repository/maven-snapshots): Transfer failed for https://myIpAddress:8081/repository/maven-snapshots/io/social/carpooling/social-carpooling-commons/0.0.1-SNAPSHOT/social-carpooling-commons-0.0.1-20210309.180217-1.jar 401 Unauthorized`
Here is my pom.xml config:
<distributionManagement>
    <repository>
        <id>releases</id>
        <name>Nexus Releases</name>
        <url>https://ipAddress:8081/repository/maven-releases</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <name>Nexus Snapshots</name>
        <url>https://ipAddress:8081/repository/maven-snapshots</url>
    </snapshotRepository>
</distributionManagement>
and my Maven settings.xml:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="https://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="https://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <!-- localRepository
     | The path to the local repository maven will use to store artifacts.
     |
     | Default: ${user.home}/.m2/repository
    <localRepository>/path/to/local/repo</localRepository>
    -->
    <proxies>
    </proxies>
    <servers>
        <server>
            <id>snapshots</id>
            <username>admin</username>
            <password>nexus-admin</password>
        </server>
        <server>
            <id>releases</id>
            <username>admin</username>
            <password>nexus-admin</password>
        </server>
        <server>
            <id>thirdparty</id>
            <username>admin</username>
            <password>nexus-admin</password>
        </server>
    </servers>
    <mirrors>
        <mirror>
            <!-- This sends everything else to /public -->
            <id>nexus</id>
            <mirrorOf>*</mirrorOf>
            <url>https://ipAddress:8081/nexus/content/groups/public</url>
        </mirror>
    </mirrors>
    <profiles>
        <profile>
            <id>nexus</id>
            <repositories>
                <repository>
                    <id>central</id>
                    <url>https://central</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>central</id>
                    <url>https://central</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nexus</activeProfile>
    </activeProfiles>
</settings>
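A quick hedged sanity check outside Maven that the credentials and repository URL in settings.xml are what Nexus expects (URL and credentials copied from the post; -k only because a raw IP is used over https):
# A 200 here means auth and URL are fine; a 401 means the problem is the account/realm, not Maven.
curl -k -u admin:nexus-admin -o /dev/null -w '%{http_code}\n' \
  https://ipAddress:8081/repository/maven-snapshots/
It is also worth confirming that the <id> values under <servers> exactly match the <distributionManagement> ids, since Maven only sends credentials when they match (they do match in the snippets above).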
Is NoSQL irrelevant for data engineering?
In this article, we’ll investigate use cases for which data engineers may need to interact with NoSQL data stores.
Read more: https://dashbird.io/blog/nosql-database-data-engineering/
https://redd.it/m20jd1
@r_devops
Do you prohibit containers which could POTENTIALLY be run as root?
Hi. If a container had the ability to run as root, but included clear documentation on how to run it as a non-root user and stated that was the best practice, would that be sufficient for your organization? Or do you prohibit containers which even have the possibility that they can be run as root? Or, put another way: do your security policies prohibit containers that have the ability to run as root (even though you don't deploy them that way)?
Just curious... because I am being impacted by a prohibition like this. Is this typical across the devops landscape now? Sorry if I am out of touch.
https://redd.it/m1y2z3
@r_devops
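For context, the "could potentially run as root" check is usually mechanical; a hedged sketch of what such a policy typically inspects (the image name is an example, not from the post):
docker inspect --format '{{.Config.User}}' example/app:latest   # empty or "0" is what most no-root policies key on
docker run --user 1000:1000 example/app:latest                  # the explicit non-root run the image docs describe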
PromQL assistance with holt_winters
Hi all, I'm new to PromQL (and time series dbs in general) and I'm trying to figure out the below without much of a dev background.
Problem:
- a count, let's call it "login_count"
- it's seasonal (repeating after 7 days)
- it has labels, let's say country
I'd like to set alerts so they fire if the ratio drops below x (let's say 75%).
Here is what I have so far, but if I remove the label I run into all kinds of issues:
sum(increase(login_count{country="DE"}[10m])) / holt_winters(sum(increase(login_count{country="DE"}[10m]))[7d:],.005,.005)
Here is what I thought I had to do:
sum(increase(login_count[10m])) by country / holt_winters(sum(increase(login_count[10m]))[7d:],.005,.005)
I'm also not sure that I'm using holt_winters + subqueries correctly, however I seem to be getting the correct results.
https://redd.it/m27cw2
@r_devops
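A hedged sketch of the per-country form of that ratio, wrapped in a query against the Prometheus HTTP API (the parentheses around the grouping label are required PromQL syntax; the smoothing factors are kept from the post and the Prometheus host is a placeholder):
# Keep the country label on both sides so the division matches per country.
curl -sG 'http://prometheus:9090/api/v1/query' --data-urlencode 'query=
    sum by (country) (increase(login_count[10m]))
  /
    holt_winters(sum by (country) (increase(login_count[10m]))[7d:], 0.005, 0.005)
'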