Send VM's system information to a Webhook
Hello, I have created a Bionic (18.04) Ubuntu virtual machine using KVM and virt-install. How can I send its system information to a webhook when it boots for the first time? And how can I create a webhook? I'd appreciate it if anyone could explain it to me.
https://redd.it/nqfnic
@r_devops
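A "webhook" here is just an HTTP endpoint you host that accepts a POST (a few lines of Flask, or a throwaway webhook.site URL, is enough to receive it). Since the VM was built with virt-install, cloud-init's phone_home module can also POST instance details on first boot without a custom script. If you want control over the payload, here is a minimal sketch; the webhook URL and marker-file path are placeholders, not anything standard:

```shell
#!/bin/sh
# Gather basic system facts as a small JSON object.
collect_info() {
  printf '{"hostname":"%s","kernel":"%s","ip":"%s"}' \
    "$(hostname)" "$(uname -r)" \
    "$(hostname -I 2>/dev/null | awk '{print $1}')"
}

# First-boot guard: a marker file ensures the POST happens only once.
# A systemd oneshot unit or an @reboot cron entry would run this.
#   [ -f /var/lib/firstboot.done ] && exit 0
#   curl -fsS -H 'Content-Type: application/json' \
#        -d "$(collect_info)" "https://example.com/hook" \
#     && touch /var/lib/firstboot.done

collect_info
```

The curl call is commented out because the URL is a placeholder; swap in your own endpoint and enable it.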
How are you handling package/Image security?
Hey r/devops... haven't posted one of these in a while. I need to lock down, monitor and clean up my teams' usage of:
* language packages (Python, Go, JavaScript)
* Dockerfile dependencies - yum/apt packages and other misc non-language dependencies
* A new one - GH Action dependencies, i.e. Actions that my teams are using in their CI/CD pipelines. The rub here is that we all LOVE GH Actions, but we're also using a bunch of random actions, which is horrible from a security standpoint.
We're using GitHub and GitHub Actions, ECR for images and, no surprise, a bunch of open-source libraries.
I need a sane way to alert on unapproved libraries/packages/actions, or at least *new* usages of the above, and ideally also enforce the usage of known code/tools. There are lots of different tools to use here, and we're already using Dependabot, native ECR scanning, and various linters (e.g. golangci-lint).
Just looking for ideas and recommendations. I'm considering something like Artifactory and/or AWS CodeArtifact to pin, control (and cache) external dependencies. Also contemplating vendoring our Go code. I'm not even thinking about licensing scans at this point, but that's something we'll probably need too (e.g. Black Duck).
tl;dr how do you secure and manage your external dependencies?
https://redd.it/nqakxa
@r_devops
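For the GH Actions part specifically, a crude allowlist check can run in CI long before you adopt a heavier tool. A sketch, assuming an allowlist file with one "owner/action@ref" per line (the file path and format are my invention, not a GitHub feature):

```shell
#!/bin/sh
# Flag GitHub Actions referenced in workflow files that are not on an
# allowlist. $1 = workflows directory, $2 = allowlist file.
check_actions() {
  grep -rhoE 'uses:[[:space:]]*[^[:space:]]+' "$1" \
    | sed -E 's/uses:[[:space:]]*//' | sort -u \
    | while read -r action; do
        # -x: whole-line match; -F: literal, so @refs aren't regexes.
        grep -qxF "$action" "$2" || echo "unapproved: $action"
      done
}
```

Run it as e.g. `check_actions .github/workflows allowed-actions.txt` in a pipeline step and fail the job if it prints anything.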
Best logging solution for startups
What's a good paid log management solution for a small startup? Logs are mostly from our API and worker clusters and will be used for troubleshooting errors. We don't have the resources to build our own stack, and CloudWatch's UI just doesn't seem to cut it.
https://redd.it/nq9pqk
@r_devops
Supported options for Docker Swarm persistent storage?
Preface: Yes, I know about Kubernetes, and no, at this point in time it's not a feasible solution for this use case.
Does anyone know of an actively maintained persistent-storage driver for Docker Swarm? My google-fu has revealed a ton of (understandably) dead projects that were consumed by Kubernetes. I suspect this is a dead-end search in 2021, but I figured I would reach out in case anyone is still running Swarm and can share how they're handling persistent storage.
For reference, I'm working within a VMware vSphere environment (which, unfortunately, seems to no longer maintain its Docker-specific driver).
https://redd.it/nqaz7e
@r_devops
What are you making with Go? CLI's? REST APIs?
Looking for inspiration. Maybe thinking of some simple file serving via JSON, but I'm not that handy with structs yet. I love the idea of not needing to ship a requirements.txt and an interpreter everywhere.
Any good simple projects? The plan is to write more Go this year.
https://redd.it/nrpp3t
@r_devops
Are there good options for researching Splunk Use?
I've worked at a company that hasn't really made DevOps a priority since its inception, mostly having AWS do all the heavy lifting. Now we're trying to go through the basic steps of getting some observability into the app, using Prometheus for metrics and Splunk for logging.
Of course, Splunk is a huge product that does a ton more than logging, so if we're going to get set up with a subscription I want to make sure we're using the tools of theirs that fill all our gaps, where it makes sense. My problem is I'm having a very hard time working out what each individual part of the product does. Does anyone know of good resources for this? Their website isn't super detailed, and I feel I need those details when considering things like RUM and APM. I'll do all the reading or listening I need to; I just need to find good resources.
https://redd.it/nrsaqz
@r_devops
Simple kubernetes for staging/test server?
I'm new to DevOps. Rather than going full-blown k8s, I was wondering whether there is something that can help me quickly set up and deploy containerized apps onto a server. docker-compose, maybe? For production it makes sense to me to go k8s.
https://redd.it/nrjv78
@r_devops
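docker-compose is indeed a common answer for a single staging box: one file checked into the repo, and `docker-compose up -d` on the server deploys or updates everything. A minimal sketch (the image name and port are placeholders):

```yaml
version: "3.8"
services:
  app:
    image: registry.example.com/myapp:latest   # placeholder image
    ports:
      - "8080:8080"
    restart: unless-stopped
```

If you want the staging environment to use the same manifests as a future Kubernetes production setup, lightweight distributions such as k3s or microk8s are a middle ground worth a look.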
Move from US to Europe but work remotely for US company?
Hello all,
Looking for some advice. I'm a Linux engineer who operates as a standard DevOps/SRE in most orgs, with over a decade of experience. I was born and raised in the US and am a US citizen. Due to some family reasons, we're considering moving to my spouse's hometown in a European country and spending time close to the family there.
Prior to Covid I would not even have considered this a reasonable option, but now that everyone is working remotely more (and demanding it), I suspect it's a much more realistic option now. I know most US companies won't hire an employee who lives abroad, except sometimes as a 1099 contractor, due to the tax and legal hiring implications tied to each country, but I know larger corporations are more open to this.
My question is: does anyone know where one would start? All of my contacts / search sites / recruiters are in the US and operate solely there. I know it's been done, and it's rare, but... not really sure where to begin. :)
Thanks for the advice in advance.
https://redd.it/nrmhrl
@r_devops
Writing QCOW2 image to disk?
Does anyone have a way to write a disk image to an actual, physical disk? What I'm trying to do is have a Packer image that is applied to bare-metal machines. Right now, what I do is expose the disk image as a block device with qemu-nbd. I then dd that block device (/dev/nbd0) to the disk (/dev/sda). However, I'm writing all the zeros in the image as well, which is extremely inefficient and defeats the entire purpose. Does anyone have tools that they use for this, or am I off into uncharted territory?
https://redd.it/nrkmqy
@r_devops
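Two possible ways to skip the zero runs, sketched below; the paths (image.qcow2, /dev/nbd0, /dev/sda) are examples, and the device-writing commands are left commented out because they overwrite the target:

```shell
#!/bin/sh
# Option 1: skip the nbd step entirely; qemu-img can convert straight
# to the raw device and detects zero clusters in the source:
#   qemu-img convert -p -O raw image.qcow2 /dev/sda
#
# Option 2: keep the qemu-nbd mapping, but have dd seek over all-zero
# blocks instead of writing them:
#   qemu-nbd --connect=/dev/nbd0 image.qcow2
#   dd if=/dev/nbd0 of=/dev/sda bs=4M conv=sparse status=progress
#
# Caveat: conv=sparse skips writes, so whatever data was already on
# the disk in those regions stays there - only safe on a zeroed disk.

# File-based demonstration that conv=sparse preserves the data and
# the logical length while seeking over zero blocks:
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1024 count=15 2>/dev/null
echo "end-marker" >> "$src"
dd if="$src" of="$dst" bs=1024 conv=sparse 2>/dev/null
```

Both options avoid reading/writing the image's zero clusters byte-for-byte, which is where the time goes in the plain dd approach.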
Are cdns becoming less relevant with the service worker/cache api?
Or is that initial load and update of the JS bundle still critical to deliver fast? Sorry if these sorts of posts aren't allowed.
https://redd.it/nr9xjf
@r_devops
Upcoming First AWS Interview
Hi all,
I'm currently a network engineer planning to transition to the cloud. I already passed the SAA-C02 exam, and now I have an upcoming interview for an AWS SysOps Engineer role. Can you share possible questions and tips? Thanks.
https://redd.it/nrwecg
@r_devops
What is the best ingress controller for RabbitMQ?
Hi, I'm running different versions of our application in different namespaces on one Kubernetes cluster. Each namespace also contains one RabbitMQ instance, and I'm looking for a good ingress controller / reverse proxy that can forward traffic to each RabbitMQ. That way I don't need to create multiple public services for these RMQs; I can just create one public service for the ingress controller.
https://redd.it/nr8y6l
@r_devops
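One wrinkle: AMQP is plain TCP, not HTTP, so ordinary Ingress rules can't route it. With ingress-nginx, the usual approach is the `tcp-services` ConfigMap, which maps an external port on the controller to a Service in some namespace (the controller's Service must also expose those ports, and it must be started with `--tcp-services-configmap`). Namespace and service names below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "5671": "team-a/rabbitmq:5672"
  "5672": "team-b/rabbitmq:5672"
```

Since there is no Host header in raw TCP, each namespace's RabbitMQ needs its own external port; if you only need the management UI (HTTP), a normal Ingress rule per namespace works instead.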
How to implement one-click rollback in GitLab ci/cd?
Hi all, I'm trying to use the rollback function and Environments in GitLab, and I'm currently trying to figure out how to use them properly.
I want to achieve a one-click rollback in the web UI.
My current mock-up deployment pipeline is set up as below (the real CI file is quite long):
stages:
  - build
  - pre-deploy
  - deploy

docker_build:
  stage: build
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_1:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-1:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-1 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_2:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-2:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-2 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

deploy:
  stage: deploy
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker stack deploy --compose-file docker-compose.yaml stack-name
  environment:
    name: dev
  tags:
    - docker_swarm_manager
The deploy stage and the build/pre-deploy stages need to run on separate runners, which is what the tags are for.
In my real CI file, the rsync task runs against 10 servers, with a lot of additional commands not listed here.
I split the rsync work into separate jobs to get enough granularity in the UI to see exactly which node a deployment failed on.
In a rollback scenario with the current setup, I need to:
* Go to the Operations -> Environments section in GitLab
* Enter the "dev" environment
* Click the rollback button for **each** of the defined jobs as per the above ci file
I'm trying to achieve a one-click rollback solution, and I'm having a hard time understanding how I **should** structure the config to achieve this. Am I trying to implement something that is not possible?
Any advice or pointers is appreciated!
https://redd.it/nr8o91
@r_devops
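One possible restructuring, sketched against the mock-up above: GitLab's rollback button re-runs deployment jobs, so collapsing the per-server pre-deploy jobs into a single job that loops over hosts leaves one job to re-run, at the cost of the per-node granularity in the UI (echoing the host before each step keeps the log readable instead). Host names are the placeholders from the mock-up:

```yaml
pre_deploy:
  stage: pre-deploy
  script:
    - |
      for host in fake-host-1 fake-host-2; do
        echo "deploying to $host"
        rsync -a src gitlab@$host:/opt/deployments/$CI_COMMIT_SHORT_SHA || exit 1
        ssh gitlab@$host "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname" || exit 1
      done
  environment:
    name: dev
  tags:
    - builder_01
```

This is a trade-off sketch, not the only option; keeping the granular jobs and scripting the rollback via the GitLab API (re-running each job of the last good pipeline) is another route.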
Which CloudFormation stacks are managed by a CodePipeline - script
At work, we needed to know which CloudFormation stacks were deployed by a given CodePipeline. There would be no such question if we had properly tagged each stack (which we should have). If you're like us and you didn't, here is a script which shows the stacks managed by a given pipeline.
https://github.com/ngs-lang/nsd/blob/master/aws/codepipeline/pipeline-stacks.ngs
Hope this helps. Have fun!
https://redd.it/nr8mkd
@r_devops
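A plain AWS CLI + jq version of the same lookup is also possible (the pipeline name below is a placeholder). The filter walks the pipeline definition and prints the stack name configured on each CloudFormation deploy action:

```shell
#!/bin/sh
# jq filter over the output of `aws codepipeline get-pipeline`:
# iterate every action in every stage, keep the CloudFormation ones,
# and print the StackName from each action's configuration.
FILTER='.pipeline.stages[].actions[]?
  | select(.actionTypeId.provider == "CloudFormation")
  | .configuration.StackName'

# Usage (requires AWS credentials):
#   aws codepipeline get-pipeline --name my-pipeline | jq -r "$FILTER"
```

This is a sketch of the same idea as the linked NGS script, not a substitute for tagging the stacks properly.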
Tools to provision and manage Public and Private Cloud.
We are a private cloud solutions company moving from private cloud only to hybrid + edge cloud. We use OpenStack as our private cloud solution. The problem is that I'm not able to find the right tool to manage and provision both OpenStack and public clouds like AWS, GCP, and Azure. Terraform comes the closest, but the problem with Terraform is that it's mostly a CLI tool, and I'm looking for something API-based or service-based. We don't mind using multiple tools if that gets the work done.
https://redd.it/nr7vhm
@r_devops
What's the most convenient order in which to install Consul, Nomad, and Vault
I'm trying to set up a simple 3+-machine Vault, Consul and Nomad DC:
- Machine 1: Vault-server, Consul-server, Nomad-server
- Machine 2: Consul-client, Nomad-client
- Machine 3: Consul-client, Nomad-client
What is the most convenient order to set up these services?
Consul first, then Nomad, then Vault;
or Vault, Consul, Nomad; or Consul, Vault, Nomad?
I could have Vault running in a container, managed by Nomad, or I could use Vault to provide the certificates needed to set up mTLS with Consul.
If you have any tips or tricks, feel free to share.
https://redd.it/nr7oka
@r_devops
Need advice on better designing a basic lamp workflow among multiple machines.
Hey there!
I'm struggling in trying to make this understandable.
I'm currently a one-person show, developing with LAMP. I'm having trouble designing an efficient workflow now that I've decided to develop on a desktop alongside my MacBook (Air, 2014; pretty old, works well though). I'd love other people's input on their workflows with similar goals. I feel like I'm making this harder than it should be, but I'm at the level where I'm not sure what to google next.
I currently have an apache web server installed locally on my macbook, using the homebrew tool. Using the brew tool, I also install php etc. I initially write my files into a separate, project folder then push this to my local apache webserver for testing.
Now say I want to work on this project on my desktop. Right now I just have dropbox watching my source files, so they're readily available on both desktop and macbook.
On my desktop, I use Vagrant to spin up a vanilla Ubuntu (16.04+) VM, install a LAMP stack on it, and push my source files onto that server, as with my MacBook workflow.
Problems:
When I develop on my desktop vagrant VM, the apache config works a little differently in both environments. I just don't feel confident with the fact that installing a webserver is done differently on both platforms, differing dependencies / other things I probably don't even know about.
I can't just run vagrant on my mac because of resource usage, battery life etc.
Between the latency connecting to central remote development server, and the fact I sometimes cannot afford to pay for a VPS, these rule out using digital ocean et al. as a development environment.
Having to push all my code to a local webserver every iteration for testing seems annoying. Is this just part of it? Should I set up some bash scripts to automate this file upload? AHHH
I'm not at the level where I absolutely need consistency between both platforms, but it's bothering me and I'm wondering how others approach it.
I would like a workflow that offers a consistent development environment across all platforms. It's easier if it's just front-end.
Thanks loads if you got through that!
https://redd.it/nr73jw
@r_devops
Question: GitLab CI/CD environments - One-click rollback with multiple jobs
Hi all, I'm trying to use the rollback function and Environments in GitLab, and I'm currently trying to figure out how to use them properly.
I want to achieve a one-click rollback in the web UI.
My current mock-up deployment pipeline is set up as below (the real CI file is quite long):
stages:
  - build
  - pre-deploy
  - deploy

docker_build:
  stage: build
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_1:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-1:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-1 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_2:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-2:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-2 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

deploy:
  stage: deploy
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker stack deploy --compose-file docker-compose.yaml stack-name
  environment:
    name: dev
  tags:
    - docker_swarm_manager
The deploy stage and the build/pre-deploy stages need to run on separate runners, which is what the tags are for.
In my real CI file, the rsync task runs against 10 servers, with a lot of additional commands not listed here.
I split the rsync work into separate jobs to get enough granularity in the UI to see exactly which node a deployment failed on.
In a rollback scenario with the current setup, I need to:
* Go to the Operations -> Environments section in GitLab
* Enter the "dev" environment
* Click the rollback button for **each** of the defined jobs as per the above ci file
I'm trying to achieve a one-click rollback solution, and I'm having a hard time understanding how I **should** structure the config to achieve this. Am I trying to implement something that is not possible?
Any advice or pointers is appreciated!
https://redd.it/nr5zci
@r_devops
Hi all, I'm trying to utilize the rollback function and Environments in GitLab, and currently trying to figure out how to properly use it.
I want to achieve a one-click rollback in the web-ui.
My current mock up deployment pipeline is set up as per below.
(the real ci file is quite long)
stages:
  - build
  - pre-deploy
  - deploy

docker_build:
  stage: build
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_1:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-1:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-1 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

pre_deploy_server_2:
  stage: pre-deploy
  script:
    - rsync -a src gitlab@fake-host-2:/opt/deployments/$CI_COMMIT_SHORT_SHA
    - ssh gitlab@fake-host-2 "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
  environment:
    name: dev
  tags:
    - builder_01

deploy:
  stage: deploy
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker stack deploy --compose-file docker-compose.yaml stack-name
  environment:
    name: dev
  tags:
    - docker_swarm_manager
The deploy stage and the build/pre-deploy stages need to run on separate runners, which is what the tags are for.
In my real CI file, the rsync task is executed against 10 servers, with a lot of additional commands not listed here.
I split the rsync work into separate jobs to get enough granularity in the UI to see exactly which node a deployment failed on.
In a rollback scenario with the current setup, I need to:
* Go to the Operations -> Environments section in GitLab
* Enter the "dev" environment
* Click the rollback button for **each** of the defined jobs as per the above ci file
I'm trying to achieve a one-click rollback solution, and I'm having a hard time understanding how I **should** structure the config to achieve this. Am I trying to implement something that is not possible?
Any advice or pointers are appreciated!
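One structure that would give a single rollback target (an editorial sketch, not an answer from the thread): collapse the per-server jobs into one deploy job per environment, so GitLab records a single deployment and one rollback button re-runs everything. The loop and host names below are illustrative placeholders taken from the mock-up above.

```yaml
# Sketch: one job per environment means one rollback button.
# Trade-off: you lose the per-node job granularity, so the loop
# needs its own logging to show which host failed.
deploy_dev:
  stage: deploy
  script:
    - |
      for host in fake-host-1 fake-host-2; do
        rsync -a src gitlab@$host:/opt/deployments/$CI_COMMIT_SHORT_SHA
        ssh gitlab@$host "sudo ln -sfn /opt/deployments/$CI_COMMIT_SHORT_SHA /opt/appname"
      done
  environment:
    name: dev
  tags:
    - builder_01
```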
https://redd.it/nr5zci
@r_devops
Tips to deal with people that don't want to understand technology
Hi
I'm having a hard time dealing with people who don't understand technology and don't even bother to listen to why things aren't as simple as they think.
I'm the CTO of a rather large company with multiple physical sites, and my peers and CEO are among the top 10 who harass me most.
Things like "I just want to connect the damn thing to the internet" when we're talking about connecting a solar panel that requires WAN access in a scenario of chained routers, VLANs, firewalls, and VPNs.
I don't feel listened to or respected when it comes to deciding and planning around technology and governance. I get reactions like "you're overcomplicating" and "don't put problems where they don't exist". And later I get to show them that putting the cart before the horse screws things up.
It's becoming recurring, with all sorts of examples, and I'm lacking the soft skills to manage it.
And my patience too.
Any tips?
https://redd.it/ns0oje
@r_devops
DevOps Workflow Framework Repo
Hi!
I've been working on a Python-based parallel workflow framework that is great for custom devops. It's still pre-alpha, but it uses an innovative paradigm for writing simple parallel task graphs that can orchestrate a variety of devops tasks, local or remote, across cloud, containers, repos, etc.
Have a look; I'd appreciate any comments or contributions!
https://github.com/radiantone/entangle
Example task declarations:
@process
@aws(keys=[])
@ec2(ami='ami-12345')
def myfunc():
    return

@process
@aws(keys=[])
@fargate(ram='2GB', cpu='Xeon')
def myfunc():
    return

@process
@docker(image="tensorflow/tensorflow:latest-gpu")
def reduce_sum():
    import tensorflow as tf
    return tf.reduce_sum(tf.random.normal([1000, 1000]))
Write your own decorators and mix-n-match to get powerful workflows with simple Python!
I do need to expand the README for devops use cases; that is coming soon.
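To illustrate the "write your own decorators" idea, here is a minimal sketch of a user-defined task decorator in the same spirit. The `retry` decorator and its `meta` attribute are illustrative inventions for this example, not part of entangle's API.

```python
import functools

def retry(times=3):
    """Hypothetical task decorator: re-run a flaky task up to `times` times
    and tag the wrapped function with scheduling metadata."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last = e
            raise last
        # Attach metadata so an orchestrator could inspect the task graph.
        inner.meta = {"retries": times}
        return inner
    return wrap

@retry(times=2)
def flaky():
    return "ok"
```

Stacking such decorators (e.g. `@retry` under a hypothetical `@process`) is what lets a framework like this compose behaviors per task.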
https://redd.it/ns6fhe
@r_devops
Deploy ROR application on ubuntu VM using Capistrano and Gitlab CI/CD
I am getting the error below when deploying the Ruby application to an Ubuntu VM using GitLab CI: Net::SSH::AuthenticationFailed: Authentication failed for user **[email protected]**
Here is my GitLab CI file:
deploy:
  stage: deploy
  script:
    - which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - bundle install --jobs $(nproc) "${FLAGS[@]}"
    - gem install capistrano
    - gem install net-ssh --pre
    - cap production deploy
I can access the deployment server from the GitLab runner, and I have also put the deploy server's private key in a GitLab CI/CD variable.
Please let me know what I am doing wrong, or whether I am missing a step. I followed the link below but it is not working as expected:
https://medium.com/2glab/gitlab-continuous-delivery-with-capistrano-169055a6da51
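A hedged variant of the job as a debugging sketch (an assumption, not a confirmed fix): a frequent cause of Net::SSH::AuthenticationFailed in CI is a key pasted into the variable with CRLF line endings or without its trailing newline, or a missing known_hosts entry. The `$SSH_PRIVATE_KEY` and `$SSH_KNOWN_HOSTS` names below are assumed CI/CD variables, not confirmed by the post.

```yaml
# Sketch only: variable names are assumptions.
deploy:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    # Strip carriage returns in case the key was pasted with CRLF endings.
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - ssh-add -l  # confirm the key fingerprint actually loaded
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    # Seed known_hosts so net-ssh can connect non-interactively.
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - cap production deploy
```

Also worth checking: Capistrano connects as the user in `deploy.rb`, so the key loaded into the agent must belong to that user on the target host.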
https://redd.it/ns4ais
@r_devops