DevOps Work Visas
Hi! I just wanted to ask if anyone here has had experience applying for work visas in foreign countries? How was your experience? Was it difficult?
I am looking into what countries I can move to as a devops engineer and I was wondering if anybody here can provide insight :)
Thank you in advance to those who reply!
https://redd.it/pmu7kb
@r_devops
Komodor Rolls Out 'Pod Status & Logs' Feature!
For those of you who don't know, Komodor is a Kubernetes troubleshooting platform. It's meant to serve as the only place you go when an issue arises, aggregating changes from across the system (infra, cloud provider, DBs, K8s, monitoring and alerting tools, etc.) and displaying them on a coherent timeline as your single source of truth.
[Full disclosure: I work for Komodor. You can DM me for questions]
‘Pod Status and Logs’ is the latest feature to come out of Komodor, and it enables you to quickly drill down into the pods of an unhealthy service, all in one place.
This offers quick access to all of the pod-level data you'll need for troubleshooting, including:
* Overview of all pods running the service
* Pod details, similar to what you would get with kubectl describe
* Live view of all events
* Pod containers' logs
For the full monty: https://komodor.com/blog/new-pod-status-and-logs-dash-saves-time-and-unifies-execution/
https://redd.it/pn01wd
@r_devops
Komodor
New ‘Pod Status and Logs’ Dash Saves Time and Unifies Execution | Komodor
‘Pods Status and Logs’ enables you to quickly drill down in the pods of an unhealthy service, all from the comfort of your Komodor dashboard.
Provisioning VM instances with Packer images vs Provider built images
Hi everyone!
I started to learn more about Packer and I'm trying to understand a bit better what exact value it adds to the provisioning workflow. To be more specific, I'm trying to create a workflow for deploying and configuring a server (let's use an app instance running on Tomcat, with pre-built packages provided by a vendor [we are not building anything]).
Option 1:
* Create instance module with Terraform using a cloud-provider-built image
* Enroll the instance to Chef (through metadata/startup script) when the VM is created
* Let Chef handle the rest (hardening, baseline configs, installation and configuration of required packages)
Option 2:
* Create an image with Packer
* Only install the Chef agent into the Packer-built image
* Deploy the instance via a Terraform module with the Packer-built image
* Add the instance to Chef when it is deployed
* Let Chef handle the rest (hardening, baseline configs, installation and configuration of required packages)
In my opinion, for that specific example Packer adds unnecessary complexity, since as I understand it Packer's main value is baking custom-built packages into the image (which Chef can also handle). A colleague of mine said that in his experience, when an instance is added to Chef via a startup script, Chef might not have enough time to install and configure everything needed (I'm having a hard time understanding that, to be honest). So I'm curious to hear other opinions.
The only potential risk I can see for option 1 is that if somebody changes the startup script, the next terraform apply will re-create the VM, but this can be mitigated by using module versions.
I am also interested to understand what exact value packer adds when you deploy your infrastructure.
Thanks in advance!
https://redd.it/pn0h9h
@r_devops
Is there a CLI for spotinst?
Hi there,
Is there any CLI for spotinst (spot.io), like the aws CLI? I want to update elastic group capacity from the command line.
Thanks
https://redd.it/pn2721
@r_devops
What can I expect in adding a CI solution to my already in production application?
I run an application that currently doesn't use CI. It runs on a VPS and is containerized with docker/docker-compose. My usual deploy process is to git pull --rebase and docker exec into the web ~~and DB~~ containers to pull new dependencies and run any DB migrations. It works okay, but I'd like to automate it because there have been a couple of times where I've goofed up and not updated dependencies or something like that. Plus it'd be nice to automatically run my test suite when I push new code to the repo.
It has users, so it's not something I want significant downtime on due to messing around with getting CI set up. I've worked on projects that used CI pipelines at jobs (Drone) but have never set one up myself. What can I expect when implementing a CI solution for my already in-production project?
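One low-risk first step is to capture the manual deploy in a small script, then have the CI job call that same script. A minimal sketch (the container name "web" and the dependency/migration commands are assumptions about the stack, not from the post):

```python
import subprocess

def deploy(dry_run=True):
    """Replay the manual deploy: pull code, refresh deps, run DB migrations."""
    steps = [
        ["git", "pull", "--rebase"],
        # "web" and the two commands below are placeholders for whatever
        # your stack actually runs inside the container.
        ["docker", "exec", "web", "pip", "install", "-r", "requirements.txt"],
        ["docker", "exec", "web", "python", "manage.py", "migrate"],
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))             # show what would run
        else:
            subprocess.run(cmd, check=True)  # abort the deploy on first failure
    return steps
```

The CI pipeline then just runs the script with dry_run=False after the test suite passes, so the automated and manual processes can't drift apart.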
https://redd.it/pn0jvj
@r_devops
How do you add nginx to a wordpress docker project?
server {
    listen 80;
    server_name localhost;
    root /var/www/html;
    index index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:8000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
I have this wordpress.conf file in the nginx folder and then I have this docker-compose file:
version: '3.1'
services:
  nginx:
    image: nginx:latest
    ports:
      - '80:80'
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./logs/nginx:/var/log/nginx
      - ./wordpress:/var/www/html
    links:
      - wordpress
    restart: always
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_RANDOM_ROOT_PASSWORD: admin
    networks:
      - wpnet
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - ./wp-content:/var/www/html/wp-content
    restart: always
    ports:
      - '8000:8000'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: admin
      WORDPRESS_DB_PASSWORD: admin
      WORDPRESS_DB_NAME: wordpress
    networks:
      - wpnet
volumes:
  db-data:
networks:
  wpnet:
I think it's broken because I pasted the configs from a different docker-compose.yml and just changed the port numbers. What do you do to debug a docker project, and is there some sort of flow chart that tells you what to do to figure out why it's not working?
https://redd.it/pn3l2r
@r_devops
Dockerized Jenkins boilerplate
Hello everyone,
I made a boilerplate for Jenkins with the Docker plugin; it's open-sourced on GitHub for devs who could benefit from it:
Repository
https://redd.it/pn55yu
@r_devops
GitHub
GitHub - omaralsoudanii/jenkins-docker-ci: A docker setup for Jenkins with docker plugin integration
Azure: Is there an endpoint for getting a list of ALL VM sizes? I just need to create a JSON file of them
I know there is an endpoint for "listing all available sizes", but that's tied to the sizes that a specific VM could resize to:
https://docs.microsoft.com/en-us/rest/api/compute/virtual-machines/list-available-sizes
I'm looking to just create a JSON file with all VM sizes (so I can easily look up RAM/cores without hitting the API every time):
{
  name: 'Standard_D2_v2_Promo',
  numberOfCores: 2,
  osDiskSizeInMB: 1047552,
  resourceDiskSizeInMB: 102400,
  memoryInMB: 7168,
  maxDataDiskCount: 8
},
{
  name: 'Standard_D3_v2_Promo',
  numberOfCores: 4,
  osDiskSizeInMB: 1047552,
  resourceDiskSizeInMB: 204800,
  memoryInMB: 14336,
  maxDataDiskCount: 16
},
{
  name: 'Standard_D4_v2_Promo',
  numberOfCores: 8,
  osDiskSizeInMB: 1047552,
  resourceDiskSizeInMB: 409600,
  memoryInMB: 28672,
  maxDataDiskCount: 32
},
{
  name: 'Standard_D5_v2_Promo',
  numberOfCores: 16,
  osDiskSizeInMB: 1047552,
  resourceDiskSizeInMB: 819200,
  memoryInMB: 57344,
  maxDataDiskCount: 64
},
https://redd.it/pn30cf
@r_devops
Docs
Virtual Machines - List Available Sizes - REST API (Azure Compute)
Learn more about Compute service - Lists all available virtual machine sizes to which the specified virtual machine can be resized.
DevOps Blogs, publications etc
Hey guys!
I would like to know if you can recommend websites, blogs and/or publications that have a solid reputation or explain DevOps well.
I am working on my master's thesis and we will eventually need to use "grey literature", as the topic is still recent in academia due to its more technical background.
Thanks in advance!
https://redd.it/pmyczc
@r_devops
Linkerd: Looming on Service Meshes
Basically, what Linkerd is, and a lot more:
https://www.p3r.one/linkerd-service-mesh/
https://redd.it/pn8tzc
@r_devops
p3r
Linkerd: Looming on Service Meshes | p3r
Linkerd service mesh adds critical security, observability, and reliability to your Kubernetes stack, without any code changes. Claimed to be the original “service mesh” by its creator Buoyant in 2016 it's one of the best options available.
"Staff Site Reliability Engineer" open position at Mozilla
I thought this might be an interesting role for some of you here. I don't quite have the experience in some of these tools to qualify, but I know there are plenty of you around here that probably do!
Mozilla Careers — Staff Site Reliability Engineer — Open Positions
"Mozilla’s SRE Team is looking for a Staff SRE to help us build and maintain infrastructure that supports Firefox’s many features, Mozilla’s web properties and upcoming products. You’ll combine skills from DevOps/SRE, systems administration, and software development to influence product architecture and evolution by crafting reliable cloud-based infrastructure for internal and external services.
As an SRE you’ll work closely with Mozilla’s engineering and product teams and participate in significant engineering projects across the company. You’ll collaborate with passionate engineers across different levels of experience and backgrounds. A lot of your work will involve improving existing systems, building new infrastructure, evaluating tools and eliminating toil.
This position is remote friendly or you may work in a local office when they reopen and available in the USA and Canada."
https://redd.it/pn909w
@r_devops
careers.mozilla.org
Mozilla Careers — Staff Site Reliability Engineer — Open Positions
Mozilla Careers — Mozilla is hiring a Staff Site Reliability Engineer in San Francisco Office
In which order would you learn these?
A. Docker/Kubernetes
B. CI/CD
C. Terraform
D. Ansible
E. Serverless
I recently passed AWS Solutions Architect Associate and have Windows Server Administration, Git, and some monitoring tools under my belt.
Thanks!
https://redd.it/pn8off
@r_devops
Python Development Path for an Aspiring DevOps Engineer
I'm an aspiring devops engineer and working on getting better at coding by mainly studying Python. I've taken several courses and gone through a bunch of books, and have written some scripts to automate some tasks at work. I want to increase my skillset and get to an intermediate level in coding. It seems that most training sites offer two paths when teaching Python: Data Science/ML and Web Development. From what I know, DevOps engineers focus more on automation. Which of these two development paths would be more helpful for acquiring the skills necessary to be a devops engineer? Thanks!
https://redd.it/pms8j1
@r_devops
Home devops infra setup with two laptops
I was looking into setting up a self-hosted full-stack infra at home with one or two laptops, using VirtualBox with bridged/NAT networking.
Does anyone have such a setup, or a reference for a full-stack infra setup?
GitHub or GitLab, Terraform, Ansible, nginx containers on Kubernetes,
maybe a virtual load balancer or HAProxy too.
https://redd.it/pmqge4
@r_devops
Why do developers still develop on ETH?
About half a year ago, I started studying Solidity. In the beginning, like all of us, I used a testnet to learn how to deploy contracts and call various functions.
But when I had the choice to go to mainnet, I couldn't do it because of the huge fees. I started looking for some alternatives.
Now I use Aurora EVM, and I like it a lot. There are practically no fees, the speed is high, and there is the possibility of a cross-chain with Ethereum. What other similar projects and EVMs can you recommend?
https://redd.it/pncuo0
@r_devops
I understand that Dynatrace is an AI monitoring tool, but what steps should I take if no root cause is provided by Dynatrace?
Currently I'm using Dynatrace to monitor a few hosts, and there are a few AWS EC2 instances that spin up but gracefully shut down after a few minutes.
Dynatrace is only able to send me problem alerts, stating only "Host gracefully shutdown". No further information is given. Nothing was captured; the instance just boots up for 3 minutes max, then shuts down gracefully.
What steps should I take to troubleshoot this situation? Please share your experience and knowledge; I'm in need of help.
https://redd.it/pncrut
@r_devops
I gave a talk recently about what the SolarWinds attack can teach us about the state of DevOps
I gave a talk at a conference recently about what we can all learn from the SolarWinds attack. This is especially important for DevOps teams.
https://youtu.be/nvXSlSbxnC0
https://redd.it/pncupx
@r_devops
YouTube
CrikeyCon 2021 - Paul McCarty - What the Solarwinds hack should tell us about software development
If there's anything that the Solarwinds hack has taught us, it's that our industry needs to look internally and really try to understand WHY developers are not embracing security. Simply saying we need to "shift left " is bullshit hype and means nothing.…
CI workflow with GitLab for Liferay DXP
I want to know if someone has done this before.
I saw there is a Jenkinsfile for configuring the jobs, but I need to do it without Jenkins, just with GitLab CI.
Liferay DXP with multiple modules, but updating them individually.
Thank you in advance.
https://redd.it/pnegtr
@r_devops
Terraform apply for EC2 instance
Hello Everyone,
I'm having an issue creating EC2 instances (and only EC2 instances) with Terraform.
While creating an instance, it says:
"Failed to reach target state. Reason: client.Internal error: Client error on launch"
Other services such as VPCs, S3 buckets and users can be created by Terraform easily, but EC2 throws this error.
https://redd.it/pne2f3
@r_devops
Restricting scope of Jenkins groovy global variables to a parallel stage?
I have a Jenkins pipeline with a lot of Groovy code which, unfortunately, was written on the assumption that it will NOT be used in parallel stages, and as such contains lots of global variables.
Naturally, attempting to wrap it in parallel stages produces collisions and race conditions.
Question: is there a way to tell Jenkins not to share the global variables between parallel branches?
A simple example would look like this:
def someFunction(int branch) {
    sh "echo $branch"
    someString = "hello branch $branch"  // no 'def': binds a global, shared by all branches
    sh "echo $someString"
}
node {
    parallel(
        branch1: { someFunction(1) },
        branch2: { someFunction(2) }
    )
}
Because someString is global, this results in branch1 sometimes printing "hello branch 2" and vice versa.
Of course, in this example I can fix it by declaring a separate someString (with def) at the beginning of each branch, but in the case I'm actually dealing with there are a lot of these, so it gets out of hand very quickly. Is my only option to bite the bullet and fix it all?
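The underlying hazard here is ordinary shared mutable state between threads; a Python sketch (an analogy only, not Jenkins/Groovy code) shows why making the variable local to each branch removes the race:

```python
import threading

results = {}

def worker_shared(branch):
    global msg
    msg = f"hello branch {branch}"   # one module-level name shared by all threads: races
    results[branch] = msg            # may read a value written by another thread

def worker_local(branch):
    msg = f"hello branch {branch}"   # local binding: private to this thread, no race
    results[branch] = msg

threads = [threading.Thread(target=worker_local, args=(i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # each branch sees only its own value
```

This is exactly the `def someString` fix; the question of whether Jenkins can do it for you without touching each variable remains open.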
https://redd.it/pncx3r
@r_devops
Check My Strategy: IaC in 2021+
We're an infrastructure-focused team laying the groundwork and strategy for how to manage our environments, and we can influence the tools developers and other teams use. I'm struggling with the options because there are so many, each with real pros and cons. We're at the point where people are going to start investing a lot of time into learning these technologies, so we need to make a good decision that will serve us for at least a few years.
We currently use the following:
* Ansible Tower for IaaS server deployments regardless of cloud
* Amazon Web Services (large footprint, rapidly growing)
* AWS SAM for serverless applications
* AWS CloudFormation for almost everything else (S3, IAM, etc)
* Microsoft Azure (small footprint, slowly growing)
* Failed attempt at using ARM for cloud-native resources years ago; we left it behind and make changes by hand
* VMware vSphere (large footprint, shrinking)
* Ansible Tower for some network/host management stuff
Assumptions:
* We're doing CI/CD for any IaC
* We're not going to get rid of AWS SAM for serverless apps, so our team needs to know CloudFormation at some level to support developers
* VMware is probably going to stay mostly manual as the admins managing that infrastructure are not automation-focused
* We want to get better about managing our Azure resources/capabilities
* We want to follow industry best practices and use the best tools, without chasing every new shiny technology.
* We don't do cross-cloud applications. We use multiple clouds, but don't typically need to deploy "cross cloud".
My future strategy with reasoning:
* Ansible Tower for IaaS server deployments (unchanged)
* We "vend" servers which are consumed by other teams so long-term management and lifecycle isn't a good fit for traditional state-based IaC
* AWS SAM for serverless applications (unchanged)
* Best in class for managing serverless apps on AWS, which is the only place we do serverless.
* Terraform to replace AWS CloudFormation and Azure ARM for deploying resources that don't fall into the serverless or pure IaaS categories
* Really struggled with this because CDK is an up-and-comer, and the momentum for our environment is heavily toward AWS.
* Alternative would be AWS CloudFormation -> AWS CDK, and Azure ARM -> Terraform, but I'm not sure that CDK/Terraform are differentiated enough to warrant using the vendor-specific CDK technology.
* Terraform is a highly marketable skill with large community backing and momentum
* Allows for potential to branch into managing VMware more and other technologies we use (managed firewalls, monitoring, etc)
What do you think? Where did I go wrong?
https://redd.it/pnj2lj
@r_devops