Do you consider DevOps more of an art or a science?
Obviously DevOps focuses on using tools, solving problems through automation, and hitting goals like faster releases.
However, to be effective and good at the role, are creativity and bringing new ideas also important? I ask because it can seem like the job is just hitting requirements and working off a specification sheet.
But there are many different ways to implement a solution, with trade-offs for each. Some people use Terraform; others extend it with Terragrunt. Then you can use Helm to update your stack and let CI take care of everything. If you follow a different approach and let the systems get too complex, they become difficult to manage.
So how do you design the approach to be efficient but also practical? As you add more features and capabilities through tooling, it seems like more things can go wrong, but that's part of software: adding new features and iterating.
In reality the end goal isn't always achieved. I look at an ambitious project like the game Cyberpunk 2077 not delivering on its promises and shipping half-baked. So is it the culture or the processes that need to change? Sometimes your org can't agree on simple things and asks you to cut a feature or go with another idea, and you can see how this adds up, with the final result diverging from the original plan on paper.
Any recommendations from knowledgeable practitioners on how to apply good practices while also implementing clean, agreeable design patterns?
https://redd.it/no71kd
@r_devops
How do you deploy a dockerized application on EC2 without Docker Hub?
I tried to deploy a dockerized app to production by running: docker-compose up

However, I got this error when doing so.
​
Traceback (most recent call last):
File "urllib3/connectionpool.py", line 426, in _make_request
File "<string>", line 3, in raise_from
File "urllib3/connectionpool.py", line 421, in _make_request
File "http/client.py", line 1344, in getresponse
File "http/client.py", line 306, in begin
File "http/client.py", line 267, in _read_status
File "socket.py", line 589, in readinto
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "requests/adapters.py", line 449, in send
File "urllib3/connectionpool.py", line 727, in urlopen
File "urllib3/util/retry.py", line 403, in increment
File "urllib3/packages/six.py", line 735, in reraise
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 428, in _make_request
File "urllib3/connectionpool.py", line 336, in _raise_timeout
urllib3.exceptions.ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker/api/client.py", line 205, in _retrieve_server_version
File "docker/api/daemon.py", line 181, in version
File "docker/utils/decorators.py", line 46, in inner
File "docker/api/client.py", line 228, in _get
File "requests/sessions.py", line 543, in get
File "requests/sessions.py", line 530, in request
File "requests/sessions.py", line 643, in send
File "requests/adapters.py", line 529, in send
requests.exceptions.ReadTimeout: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 67, in main
File "compose/cli/main.py", line 123, in perform_command
File "compose/cli/command.py", line 69, in project_from_options
File "compose/cli/command.py", line 132, in get_project
File "compose/cli/docker_client.py", line 43, in get_client
File "compose/cli/docker_client.py", line 170, in docker_client
File "docker/api/client.py", line 188, in __init__
File "docker/api/client.py", line 213, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
[1089579] Failed to execute script docker-compose
​
Here's my docker-compose.yml file:
​
version: '3.1'
services:
  php:
    image: leonard/${CPROJECT}.php:tg1
    build:
      context: .
      dockerfile: './docker/php/Dockerfile'
    depends_on:
      - redis
      - mariadb
    command:
      - /bin/bash
      - -c
      - umask 000 && ./php-fpm-build.sh && php-fpm
    networks:
      - backend
    volumes:
      - ./htomato.com/:/var/www/:consistent
      - ./htomato.com/node_modules/:/var/www/node_modules/:cached
      - ./htomato.com/vendor/:/var/www/vendor/:cached
      - ./logs/php/:/var/log/htomato/:cached
  apache:
    image: leonard/common.apache:tg1
    build: './docker/apache/'
    depends_on:
      - php
    networks:
      - frontend
      - backend
      - traefik
    labels:
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache.rule=${HTTPRULE}
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache.service=${COMPOSE_CPROJECT_NAME}-apache
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache.entryPoints=web
      - traefik.http.services.${COMPOSE_CPROJECT_NAME}-apache.loadbalancer.server.port=80
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-ssl.rule=${HTTPRULE}
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-ssl.entryPoints=websecure
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-ssl.service=${COMPOSE_CPROJECT_NAME}-apache-ssl
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-ssl.tls=true
      - traefik.http.services.${COMPOSE_CPROJECT_NAME}-apache-ssl.loadbalancer.server.port=80
      - traefik.enable=true
      - traefik.docker.network=webgateway
      - traefik.port=80
    volumes:
      - ./htomato.com/public:/var/www/public
      - ./docker/php/php.ini:/usr/local/etc/php/php.ini
  mariadb:
    image: leonard/common.mariadb:tg1
    build: './docker/mariadb/'
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: A7h2ie23
      MYSQL_DATABASE: ${CPROJECT}
      MYSQL_USER: ${CPROJECT}
      MYSQL_PASSWORD: ${MARIADB_PASS}
      DBDUMP: ${DBDUMP}
      DATABASE: ${CPROJECT}
    volumes:
      - db-data:/var/lib/mysql
      - ./docker/mariadb/import-dump.sh:/docker-entrypoint-initdb.d/a-import-dump.sh
    networks:
      - backend
    ports:
      - ${MARIADB_DEVPORT}:3301
  redis:
    image: redis
    restart: always
    networks:
      - backend
  varnish:
    image: varnish:6.1
    restart: always
    depends_on:
      - apache
    networks:
      - frontend
      - backend
      - traefik
    volumes:
      - ./docker/varnish:/etc/varnish
  node:
    image: leonard/node:8.17
    build:
      context: .
      dockerfile: './docker/node/Dockerfile'
    networks:
      backend:
      traefik:
    labels:
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp.rule=${HTTPRULE}
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp.entryPoints=gulp
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp.service=${COMPOSE_CPROJECT_NAME}-apache-gulp
      - traefik.http.services.${COMPOSE_CPROJECT_NAME}-apache-gulp.loadbalancer.server.port=3000
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp-ui.rule=${HTTPRULE}
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp-ui.entryPoints=gulp-ui
      - traefik.http.routers.${COMPOSE_CPROJECT_NAME}-apache-gulp-ui.service=${COMPOSE_CPROJECT_NAME}-apache-gulp-ui
      - traefik.http.services.${COMPOSE_CPROJECT_NAME}-apache-gulp-ui.loadbalancer.server.port=3000
      - traefik.enable=true
      - traefik.docker.network=webgateway
      - traefik.port=80
    command:
      - /bin/bash
      - -c
      - umask 000 && npm ci; socat TCP-LISTEN:80,fork,reuseaddr TCP:apache:80 & make css-browser
    volumes:
      - ./htomato.com:/htomato.com
    working_dir: /htomato.com
volumes:
  db-data:
networks:
  frontend:
  backend:
  traefik:
    external:
      name: webgateway
​
When I run docker-compose up for traefik and my project on my local machine (an Ubuntu VM), I don't have any problem. Also, how do I make the app publicly accessible from the outside?
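Since the question asks how to avoid Docker Hub on EC2: a common alternative is a private registry such as Amazon ECR (or no registry at all, piping docker save over SSH into docker load). A rough sketch, where the region, account ID, and repository name are placeholders:

```shell
# Placeholder values: substitute your own region, account, and repo.
AWS_REGION=eu-west-1
ACCOUNT_ID=123456789012
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"

# Authenticate the Docker CLI against ECR (the token is valid for 12 hours).
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Build locally, tag for ECR, and push.
docker-compose build
docker tag leonard/common.apache:tg1 "$REGISTRY/common.apache:tg1"
docker push "$REGISTRY/common.apache:tg1"

# On the EC2 instance (after the same docker login), pull and start.
docker-compose pull
docker-compose up -d
```

Separately, the traceback above is the Compose client timing out against the local Docker daemon's Unix socket, which usually means the daemon is hung, still starting, or starved for resources rather than anything registry-related. And making the app publicly reachable is typically a matter of opening ports 80/443 in the instance's security group and pointing DNS at its public IP.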
https://redd.it/nop006
@r_devops
Best way to set up a server to host Django apps on a VPC
Hi,
I'm thinking of purchasing a VPC to host my own and clients' websites/apps running Django on the back-end. I will not be using AWS but another provider.
How would you set this up?
Linux + Docker + Jenkins? Is that enough?
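Linux + Docker behind a reverse proxy gets you most of the way; Jenkins (or any CI) then just rebuilds and restarts containers. As a minimal sketch, assuming a hypothetical project named myproject with its own Dockerfile, a per-site Compose file might look like:

```yaml
version: '3'
services:
  web:
    build: .                               # Dockerfile that installs the Django app
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    env_file: .env                         # SECRET_KEY, DB credentials, etc.
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db-data:/var/lib/postgresql/data
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf   # proxy_pass to web:8000
    depends_on:
      - web
volumes:
  db-data:
```

Running one such stack per client site keeps them isolated from each other on the same box.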
https://redd.it/noh6iw
@r_devops
Getting started with DevOps
I recently picked up an interest in DevOps, and so far I've been learning how to use some of its tools.
I started with Docker and Kubernetes, and I'm about to complete my course on Ansible.
What other tools should I learn, and how do I combine these tools so I can get some hands-on experience?
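One natural way to combine two of those tools is to drive Docker from Ansible. A small sketch (the inventory group and container are illustrative, and it assumes the community.docker collection is installed via ansible-galaxy):

```yaml
# deploy-nginx.yml: run with `ansible-playbook -i inventory deploy-nginx.yml`
- hosts: web            # an inventory group you define
  become: true
  tasks:
    - name: Ensure the Docker SDK for Python is installed (needed by the module)
      ansible.builtin.pip:
        name: docker

    - name: Start an nginx container on every host in the group
      community.docker.docker_container:
        name: demo-web
        image: nginx:stable
        state: started
        restart_policy: always
        published_ports:
          - "8080:80"
```

The same pattern scales up: the playbook becomes your repeatable deploy, and a CI job runs it on every push.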
https://redd.it/nofcig
@r_devops
Handling ECS deploys with Terraform
I'm the new junior (and only) DevOps engineer for a small shop of devs. Currently there is no IaC in place. All devs work off one single EC2 dev instance (which has its own host of problems...). Updates to that instance basically mean running docker-compose and pushing the image to the EC2 instance (via Jenkins jobs).
My first initiative is to give the devs a quick way to spin ephemeral environments up and down. I was thinking of the following workflow:
1. Use Terraform to deploy everything required for an ECS Fargate setup (cluster, service, task definition, networking, etc.).
2. Within the aws_ecs_task_definition, set a Terraform variable for the image with no default, so terraform apply prompts for it, i.e. "Enter your branch: ".
3. BitBucket dev branches build & push to ECR (via Jenkins) and are tagged after the ticket (e.g. OL-2932-feature). Subsequent pushes overwrite the current ECR image.
4. The dev enters the branch name at the prompt from step 2, e.g. OL-2932-feature.
5. Terraform then creates all the required infra, using the branch name entered to pick the image from ECR for ECS.
Does this workflow sound reasonable? The only concern I have is that the branch image will never change names or tags (it will always be OL-2932-feature:latest, essentially; this is so the developer doesn't have to remember a variety of tags). So another terraform apply won't refresh their infra with the newest image from ECR, since Terraform won't detect new ECR images. I was thinking that after the initial terraform apply, I'd use terraform apply -replace="aws_ecs_task_definition.my-api" going forward to get around that. Curious if anyone has a similar setup: am I on a good path, or over/under-engineering?
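The no-default image variable from step 2 can be sketched roughly like this in Terraform (resource names, account ID, and region are hypothetical, and IAM roles and networking are omitted):

```hcl
variable "image_tag" {
  description = "Branch-based ECR tag to deploy, e.g. OL-2932-feature"
  type        = string
  # No default, so `terraform apply` prompts: "Enter a value:"
}

resource "aws_ecs_task_definition" "my_api" {
  family                   = "my-api"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([{
    name         = "my-api"
    image        = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-api:${var.image_tag}"
    portMappings = [{ containerPort = 80 }]
  }])
}
```

Because the tag itself never changes, terraform apply -replace="aws_ecs_task_definition.my_api" (Terraform 1.1+) forces a new task definition revision, which makes the service redeploy and pull the image again, matching the workaround described above.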
https://redd.it/noedze
@r_devops
Monthly 'Getting into DevOps' thread - 2021/06
What is DevOps?
[AWS has a great article](https://aws.amazon.com/devops/what-is-devops/) that outlines DevOps as a work environment where development and operations teams are no longer "siloed", but instead work together across the entire application lifecycle -- from development and test to deployment to operations -- and automate processes that historically have been manual and slow.
Books to Read
The Phoenix Project - one of the original books to delve into DevOps culture, explained through the story of a fictional company on the brink of failure.
[The DevOps Handbook](https://www.amazon.com/dp/1942788002) - a practical "sequel" to The Phoenix Project.
Google's Site Reliability Engineering - Google engineers explain how they build, deploy, monitor, and maintain their systems.
[The Site Reliability Workbook](https://landing.google.com/sre/workbook/toc/) - the practical companion to Google's Site Reliability Engineering book.
The Unicorn Project - the "sequel" to The Phoenix Project.
[DevOps for Dummies](https://www.amazon.com/DevOps-Dummies-Computer-Tech-ebook/dp/B07VXMLK3J/) - don't let the name fool you.
What Should I Learn?
Emily Wood's essay - why infrastructure as code is so important in today's world.
[2019 DevOps Roadmap](https://github.com/kamranahmedse/developer-roadmap#devops-roadmap) - one developer's ideas for which skills are needed in the DevOps world. This roadmap is controversial, as it may be too use-case specific, but serves as a good starting point for what tools are currently in use by companies.
This comment by /u/mdaffin - just remember, DevOps is a mindset to solving problems. It's less about the specific tools you know or the certificates you have, as it is the way you approach problem solving.
[This comment by /u/jpswade](https://gist.github.com/jpswade/4135841363e72ece8086146bd7bb5d91) - what is DevOps and associated terminology.
Roadmap.sh - Step by step guide for DevOps or any other Operations Role
Remember: DevOps as a term and as a practice is still in flux, and is more about culture change than it is specific tooling. As such, specific skills and tool-sets are not universal, and recommendations for them should be taken only as suggestions.
Previous Threads
https://www.reddit.com/r/devops/comments/n2n1jk/monthly_getting_into_devops_thread_202105/
https://www.reddit.com/r/devops/comments/mhx15t/monthly_getting_into_devops_thread_202104/
https://www.reddit.com/r/devops/comments/lvet1r/monthly_getting_into_devops_thread_202103/
https://www.reddit.com/r/devops/comments/la7j8w/monthly_getting_into_devops_thread_202102/
https://www.reddit.com/r/devops/comments/koijyu/monthly_getting_into_devops_thread_202101/
https://www.reddit.com/r/devops/comments/k4v7s0/monthly_getting_into_devops_thread_202012/
https://www.reddit.com/r/devops/comments/jmdce9/monthly_getting_into_devops_thread_202011/
https://www.reddit.com/r/devops/comments/j3i2p5/monthly_getting_into_devops_thread_202010/
https://www.reddit.com/r/devops/comments/ikf91l/monthly_getting_into_devops_thread_202009/
https://www.reddit.com/r/devops/comments/i1n8rz/monthly_getting_into_devops_thread_202008/
https://www.reddit.com/r/devops/comments/hjehb7/monthly_getting_into_devops_thread_202007/
https://www.reddit.com/r/devops/comments/gulrm9/monthly_getting_into_devops_thread_202006/
https://www.reddit.com/r/devops/comments/axcebk/monthly_getting_into_devops_thread/
Please keep this on topic (as a reference for those new to devops).
https://redd.it/npua0y
@r_devops
Monthly 'Shameless Self Promotion' thread - 2021/06
Feel free to post your personal projects here. Just keep it to one project per comment thread.
https://redd.it/npuade
@r_devops
Responding to a sudo password request in a script?
New here, so apologies for the wording. Essentially, I'm trying to run a script that contains a sudo command as part of a larger process I'm automating. Is there any way to respond to the password request within the script? I've tried a few things and some googling, but haven't had any luck.
This is what I tried:
sshpass -p $password ssh -t $username@$ip "echo $password | sudo -S docker load < testimage.tar"
When I run this, the docker command fails with "incorrect password".
I appreciate any time or guidance, thanks!
https://redd.it/npvlro
@r_devops
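A hedged guess at the failure, since only the one line is shown: in `sudo -S docker load < testimage.tar`, the `< testimage.tar` redirection applies to the whole command and overrides the pipe, so `sudo -S` reads the first bytes of the tar file as the password, hence "incorrect password". The first two lines below demonstrate the stdin clash locally; the commented lines sketch a possible fix (not run here, since it needs the remote host from the question):

```shell
# Demonstrate the clash: when a command gets both piped stdin and a "<"
# redirection, the redirection wins and the pipe is ignored.
printf 'file-contents\n' > /tmp/stdin-demo.txt
echo 'pipe-contents' | cat < /tmp/stdin-demo.txt   # prints "file-contents"

# Applied to the original snippet: scope the redirection to docker load
# with an inner sh -c, so sudo -S still reads the password from the pipe.
# sshpass -p "$password" ssh -t "$username@$ip" \
#   "echo '$password' | sudo -S sh -c 'docker load < testimage.tar'"
```

Note that embedding the password in the remote command line makes it visible to other users via `ps`; a sudoers `NOPASSWD` rule scoped to just `docker load` is a safer long-term option.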
What CDN should I choose with Vercel?
I'm building a stock photo/illustration/icon site with Vercel. But not sure which CDN to go for.
So far Fastly, CloudFront (not sure how hard it'll be to set up), and mxcdn have caught my eye.
What would you recommend?
https://redd.it/npv1bg
@r_devops
Workarounds to AWS Site-vpn CIDR overlap with DX
Looking to set up a site-to-site VPN from AWS to a customer data center running a Cisco Meraki gateway. Getting the tunnels up shouldn't be much of a hassle; the issue is that we are both on overlapping subnet CIDRs.
The problem is that AWS transit gateway/site-vpn setup doesn't allow SNAT/DNAT and in this case the customer gateway (Meraki) also doesn't support SNAT/DNAT as a workaround.
I looked up setting up Openswan to SNAT/DNAT but the https://aws.amazon.com/articles/connecting-cisco-asa-to-vpc-ec2-instance-ipsec/ mentions setting up NAT on the destination side as well.
What are some of the workarounds I can use to get these tunnels up and running?
I see one such solution from AWS but it is kind of cumbersome https://github.com/aws-samples/aws-transit-gateway-overlapping-cidrs
https://redd.it/nps17b
@r_devops
Add AD user to local Admin group
How can I add an AD user to the local Administrators group using Chef?
https://redd.it/nppob4
@r_devops
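Not a full answer, but a sketch of the usual approach with Chef's built-in `group` resource on a Windows node; the domain and user names are placeholders, and this is a DSL fragment that runs under chef-client rather than standalone Ruby:

```ruby
# e.g. recipes/admin_users.rb -- hypothetical recipe name
group 'Administrators' do
  members ['MYDOMAIN\\jdoe']   # placeholder AD principal
  append true                  # add to the group, don't replace its members
  action :modify
end
```

Without `append true`, the `members` list replaces the group's existing membership, so the flag matters when the group already contains other admins.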
Is it a good idea to add branch tags when commit history exists in Azure DevOps Repos?
Hi Everyone!
In one of the courses I took today, the instructor discussed the use of branch tags, and it looks like a helpful feature for tracking commits and repositories. I also noticed that changes and commits remain visible in your commit history, indicating your version updates.
I would like to get some suggestions on when branch tagging should be added. Is it a good idea to add branch tags on every commit or for every release?
When is branch tagging more useful than reading previous commits?
Thank you for your insights!
https://redd.it/npopds
@r_devops
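For what it's worth, a common convention is to reserve annotated tags for releases (they record a tagger, date, and message), while per-commit detail stays in the log. A minimal sketch, with the repository name and versions as examples:

```shell
# Create a throwaway repo and mark a release with an annotated tag.
git init -q demo-repo
cd demo-repo
git config user.email ci@example.com
git config user.name ci
git commit --allow-empty -q -m "feat: initial commit"
# Annotated (-a) tags store tagger, date, and a message -- good for releases.
git tag -a v1.0.0 -m "Release 1.0.0"
git tag --list
```

Pushing the tag (`git push origin v1.0.0`) makes it visible in Azure DevOps under Repos > Tags; tagging every commit adds little over the log itself, so release points are where tags tend to earn their keep.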
Learning Spinnaker
Hi
Does anyone have good resources for learning Spinnaker (with AWS and Kubernetes)?
Thanks in advance.
https://redd.it/npn84v
@r_devops
OSError: Errno 0 Error when trying to access AWS API endpoint using python 3.6 and boto
Hi! As the title says, when I try to run a simple query against the AWS API using boto, I get OSError: [Errno 0]. I'm running it inside a Docker container using a python3.6-alpine image.
What I did in python3.6 REPL:
>>> import boto.ec2
>>> c = boto.ec2.connect_to_region('us-west-1', debug=2)
>>> c.get_all_instances()
After that, I got the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/usr/local/lib/python3.6/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/usr/local/lib/python3.6/site-packages/boto/connection.py", line 1187, in get_list
response = self.make_request(action, params, path, verb)
File "/usr/local/lib/python3.6/site-packages/boto/connection.py", line 1133, in make_request
return self._mexe(http_request)
File "/usr/local/lib/python3.6/site-packages/boto/connection.py", line 1045, in _mexe
raise ex
File "/usr/local/lib/python3.6/site-packages/boto/connection.py", line 948, in _mexe
request.body, request.headers)
File "/usr/local/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/usr/local/lib/python3.6/site-packages/boto/https_connection.py", line 133, in connect
ca_certs=self.ca_certs)
File "/usr/local/lib/python3.6/ssl.py", line 1166, in wrap_socket
ciphers=ciphers)
File "/usr/local/lib/python3.6/ssl.py", line 819, in __init__
self.do_handshake()
File "/usr/local/lib/python3.6/ssl.py", line 1082, in do_handshake
self._sslobj.do_handshake()
File "/usr/local/lib/python3.6/ssl.py", line 691, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error
I've been googling it for some days with no success. Does anyone have any idea what's going on?
My requirements.txt file is:
awscli==1.16.230
boto==2.49.0
boto3==1.9.79
ansible==2.7.5
awslogs==0.11.0
Thanks!
Edit: when I run the equivalent command using AWS CLI, I got the expected response:
$ aws ec2 describe-instances
{
"Reservations": []
}
Edit 2: I found the request that raises the exception (I formatted the string to make it easier to read):
method:(POST)
protocol:(https)
host(iam.amazonaws.com)
port(443)
path(/)
params({
'RoleName': 'ecsTaskExecutionRole',
'Action': 'ListRolePolicies',
'Version': '2010-05-08'}
)
headers({
'User-Agent': 'Boto/2.49.0 Python/3.6.12 Linux/5.10.25-linuxkit',
'X-Amz-Date': '<Redacted>',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Content-Length': '72',
'Authorization': '<Redacted>',
'Host': 'iam.amazonaws.com'
})
body(
Action=ListRolePolicies&
RoleName=ecsTaskExecutionRole&
Version=2010-05-08
)
https://redd.it/npm0qf
@r_devops
Roles and responsibilities between teams, and working closer together
I work for a growing east coast fintech and we're trying to move towards devops. We have the usual growing pains of a company finding its feet, having to produce more and more and also trying to migrate to a Devops culture. There are many views on how the teams should be structured and what should be done by whom. I'm trying to figure it out so we can work together in the most productive way possible and am keen to hear views please.
We have a solutions architect, 3 systems engineers, a helpdesk and 3 development teams each with a team lead. We have to build and support legacy IT EUC, maintain a legacy private cloud infra for the first version of our product suite as we migrate it to AWS. Our main product is in AWS in a mix of server and serverless products with some legacy connectivity and B2B stuff and a few off the shelf products that provide services and data to our main product.
There are some that believe that Devops is just 'Devs doing Ops' and want to do everything themselves. These views are spread across the dev squads and the solutions architect. The SA wants to do everything herself from design/architecture to proof of concepts to building out in prod and then supporting. The Systems engineers are getting fed up because they don't get to do any of the interesting work and are just doing governance, monitoring and basic access stuff in the AWS space.
As we need to allow the devs to spin up infrastructure in AWS to support the product and code as IaC, I see the governance around that and setting policies and restrictions around what they do as part of the systems engineers roles. Also, as the systems engineers are the ones on-call during the day and night, they need to be able to support everything so need the experience of working on these environments and understanding them inside and out before being required to resolve issues in production. I see the architect doing it all as a risk as a single person has all the knowledge. I think architecture should hand over to engineering to build including IaC stuff so there is a second pair of eyes as a sanity check, the production environment gets 'engineered' for production (rather than a POC that ends up in production as sometimes happens) and is supportable, monitorable, cost effective etc.
The devops element for me would be the devs supporting their code releases and dealing with any issues that arise immediately. They are also 3rd line for any prod issues that are escalated to them.
I am wondering though if I am a bit out of touch so want views on how we could structure the overall team and distribute the work to get the best outcomes. I could go on about the current structure and way things work but I hope I've painted a picture enough to get some views! What has worked for you? Where does each role begin and end and where are the overlaps? Are there any definite wrong ways of doing it?
Thanks in advance for any input or views.
RTS
https://redd.it/npesv6
@r_devops
Is your SSH key still secure?
In this blog post I walk you through some aspects of your SSH key's security.
Curious if you are still on the safe side?
https://marcofranssen.nl/upgrade-your-ssh-security
https://redd.it/npavog
@r_devops
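A related practical step, in case it helps anyone reading: generating an Ed25519 key with extra KDF rounds is the commonly recommended upgrade path from older RSA keys. The filename and comment below are examples, and `-N ""` (empty passphrase) is for demonstration only; use a real passphrase in practice:

```shell
# Generate an Ed25519 keypair; -a 100 increases the KDF rounds protecting
# the private key on disk, -N "" sets an empty passphrase (demo only).
ssh-keygen -t ed25519 -a 100 -f ./id_ed25519_demo -N "" -C "demo@example.com"
# Inspect the new key's type and fingerprint.
ssh-keygen -l -f ./id_ed25519_demo.pub
```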
Attaching Environment Name In a Script to .ebextensions for AWS EBS
Hey, so I've been running into a wall researching this for a while. I currently run a Grails application on EBS, and I've been wanting to add Splunk to externalize the logging. The best way I've found is to use .ebextensions to install the Splunk forwarder and set everything up when the environment is deployed.
Currently, I have everything working fine with 3 scripts that install the forwarder, grab the credentials from S3, then push the logs through the forwarder to Splunk. The problem is with attaching the environment to the logs.
I've been able to attach a "Development" tag to the logs and get it pushing to splunk with it set up like below:
container_commands:
  01_install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02_set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "splunk.host"
  03_add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "Development"
    cwd: /root
    ignoreErrors: false
As I said, this works and attaches the environment name "Development" to the logs, but I want that value to come from the EBS environment name itself so I don't have to maintain a bunch of different files with it hardcoded.
How can I grab that information?
https://redd.it/nphut6
@r_devops
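For the follow-up question (grabbing the environment name dynamically): if I remember the .ebextensions feature set correctly, option values can embed CloudFormation intrinsic functions in backticks, and the environment name is exposed via a Ref. Unverified on current platforms, so treat this as a sketch; the filename is hypothetical:

```yaml
# .ebextensions/env-name.config (hypothetical filename)
# Exposes the Elastic Beanstalk environment name as an application
# environment variable, so scripts can read $ENVIRONMENT_NAME instead
# of a hardcoded value.
option_settings:
  aws:elasticbeanstalk:application:environment:
    ENVIRONMENT_NAME: '`{"Ref": "AWSEBEnvironmentName"}`'
```

With that in place, the `03_add-inputs-to-splunk` script could read the variable from the environment rather than taking it from a per-environment config file.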
Web based templating, similar to cookiecutter?
I have a set of files that I would like to template for developers to run 'cookiecutter' against. The concern I have is the requirement for every dev to download the cookiecutter CLI.
Instead, would anyone be aware of a way to perform the same steps as cookiecutter, but from a webapp?
One kneejerk reaction could be to wrap cookiecutter with a webapp to achieve the goal, but I figured I'd ask here in case anyone's aware of an existing solution. TIA
https://redd.it/npc1ip
@r_devops
Multi-tenant Next.js/React App on Vercel With Salesforce REST database?
Hi all, my team is about to deploy a new Nextjs/React E-Commerce storefront that is hydrated via REST calls to a Salesforce (SFDC) product database. We then of course use REST to write back to SFDC - essentially making SFDC the database.
We're planning on hosting on Vercel but are not against using AWS, DigitalOcean, etc. if that would work better for our use case. We're a small team with plenty of customers but don't really have a DevOps pro yet.
Our plan is to have SFDC license our solution and then we can rig up their environment via environment variables (SFDC account ID, their unique REST endpoints that are consumed by Nextjs, etc.) to have it talk to our Nextjs front-end.
Ideally, I'd like one Vercel project/codebase, but each customer has a unique log-in page that passes them to the core application/deployment, which instantiates the proper environment variables for that customer's account. This way we can identify that customer [email protected] is in fact a customer in our customer's SFDC account.
My problem is on Vercel the environment variables are at the project level so it seems like I'd have to create a copy of the project/codebase for each new customer/sale. I'd rather follow some scalable DevOps best practices and try to have one codebase to CI/update.
Any thoughts or suggestions on a framework of design for something like this?
https://redd.it/npcnqh
@r_devops
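One common pattern that avoids per-customer projects is to resolve the tenant at request time from the hostname (or login subdomain) rather than from build-time env vars. A minimal sketch; all names and values below are placeholders, and real tenant settings would live in a database or KV store rather than an in-memory map:

```typescript
// Resolve a tenant's SFDC settings from the request hostname, so a single
// deployment can serve every customer.
type TenantConfig = { sfdcAccountId: string; restEndpoint: string };

// Placeholder example data -- not real accounts or endpoints.
const tenants: Record<string, TenantConfig> = {
  acme: { sfdcAccountId: "001-ACME", restEndpoint: "https://acme.example.com/api" },
  globex: { sfdcAccountId: "001-GLOBEX", restEndpoint: "https://globex.example.com/api" },
};

// Map e.g. "acme.yourapp.com" to the acme tenant's config.
function tenantFromHost(host: string): TenantConfig | undefined {
  const subdomain = host.split(".")[0].toLowerCase();
  return tenants[subdomain];
}
```

In Next.js this lookup would typically run in middleware or in `getServerSideProps` using the incoming `Host` header, which keeps one codebase and one set of (secret) platform credentials while tenant-specific values stay data, not configuration.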