Gradual Update of the AWS Java SDK in the SpringBoot Project
Recently, we decided to update the AWS Java SDK in our project from 1.x to 2.x so that we could use client-side metrics, which are available only in the newer version of the SDK.
Our whole system is AWS-based, so we didn't want to perform the update all at once; we decided to do it incrementally instead.
Fortunately, the AWS SDK allows both versions to be used side by side.
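For context, this is roughly what enabling the client-side metrics looks like in SDK v2 (a sketch, assuming the `software.amazon.awssdk:cloudwatch-metric-publisher` module is on the classpath):

```java
// Sketch: attach a MetricPublisher so the v2 client emits per-request
// metrics (latency, retries, errors) to CloudWatch.
import software.amazon.awssdk.metrics.publishers.cloudwatch.CloudWatchMetricPublisher;
import software.amazon.awssdk.services.sqs.SqsClient;

class MetricsEnabledClients {
    static SqsClient sqs() {
        return SqsClient.builder()
                .overrideConfiguration(o -> o.addMetricPublisher(CloudWatchMetricPublisher.create()))
                .build();
    }
}
```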
## Preparation for AWS Java SDK update
In our project, we implemented an abstraction layer over AWS services, for example:
- QueueSender over SqsClient, with the AwsQueueSender implementation
- QueuePublisher over SnsClient, with the AwsQueuePublisher implementation
- ExternalStorage over S3Client, with the AwsFileUploader implementation
Introducing an abstraction layer over external services and frameworks comes in really handy, especially in cases like ours: changing the implementation behind those abstractions.
The first thing we did was add new implementations of these services using SDK v2: AwsQueueSenderV2, AwsQueuePublisherV2, and AwsFileUploaderV2.
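As an illustration, the shape of that abstraction layer (a minimal sketch with stand-in bodies; in the real code the two classes wrap the SDK v1 and v2 clients):

```java
// Minimal sketch: one abstraction, two swappable implementations.
// The "v1:"/"v2:" prefixes stand in for real SDK v1/v2 calls.
interface QueueSender {
    String send(String message);
}

class AwsQueueSender implements QueueSender {      // backed by SDK v1 AmazonSQS in the real code
    public String send(String message) { return "v1:" + message; }
}

class AwsQueueSenderV2 implements QueueSender {    // backed by SDK v2 SqsClient in the real code
    public String send(String message) { return "v2:" + message; }
}
```

Because callers depend only on QueueSender, swapping AwsQueueSender for AwsQueueSenderV2 never touches business code.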
## Challenges
Some libraries that we used to implement our services don't support SDK v2 (or don't support both versions side by side), so we needed to fork them and adjust them to our needs. The forks are public repositories, so if you are planning to migrate your project, you could use:
- amazon-sns-java-extended-client-lib
- [amazon-sqs-java-extended-client-lib](https://github.com/bright/amazon-sqs-java-extended-client-lib/tree/sdk-v2-support)
- amazon-sqs-java-messaging-lib
We then copied all the tests that covered the original implementations so we could run them against the new ones.
Running the tests against the new implementations let us find a bug: we had mixed up the order of two parameters 🙈.
## Migration from 1.x to 2.x of the AWS Java SDK
We decided to take advantage of Spring capabilities to gradually replace old AWS services implementations with the new ones, and for that we used the @Priority annotation.
We annotated the 1.x bean implementations with @Priority(1) and the 2.x implementations with @Priority(2). Then we deployed the application to a test environment and monitored for unexpected changes. After verifying it, we deployed to the production environment and kept monitoring to confirm that everything was still fine.
In the next step, we chose a couple of non-business-critical functionalities and switched them to the new services using the @Named annotation. After repeating the deployment and monitoring steps, we were confident our new implementations worked as expected, so we could release the application with all AWS beans updated. We did this by changing the priority of the 1.x beans from @Priority(1) to @Priority(3).
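A sketch of what the two steps looked like in code (class and bean names here are illustrative; we rely on Spring honouring @Priority when several candidates implement the same interface):

```java
// Both implementations stay registered; @Priority decides which one wins
// when a plain QueueSender is injected.
@Priority(1)   // later: change to @Priority(3) so the v2 bean becomes the default
@Component("queueSenderV1")
class AwsQueueSender implements QueueSender { /* SDK v1 */ }

@Priority(2)
@Component("queueSenderV2")
class AwsQueueSenderV2 implements QueueSender { /* SDK v2 */ }

// Opt a single, non-critical consumer into v2 explicitly:
@Service
class ReportService {
    ReportService(@Named("queueSenderV2") QueueSender sender) { /* ... */ }
}
```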
## Cleanup
Everything went well, so we could remove the temporary annotations, the 1.x implementations, and the V2 suffixes from the 2.x implementations.
## Summary
Although it took a couple of extra steps, we were able to roll out a substantial update to a production application without downtime or risk of introducing breaking changes. This approach is much safer and helps us avoid mistakes that could affect our customers.
https://redd.it/y7ylw0
@r_devops
Saga continues—DevOps vs politics
I work as part of a team responsible for automating the install and configuration of a pretty complex set of applications across groups of servers. We basically ensure all requirements are met for security, compliance, hardening, etc. Upstream devs build the application containers, while a sysadmin ops group builds the OS templates our team uses. Our team also manages the relationship with the clients who rely on the apps. We took great pains to explain to clients why the DevOps method is needed to reduce risk and increase reliability and uptime. Clients have been happy with it and have been allowing us time to test changes properly before release to prod.
Recently, the client group made some internal changes and is now going directly to the sysadmin group for rights to make config changes outside of the DevOps pipelines. A client representative claims our DevOps method prevents ad-hoc changes made just to test something. Our team is then asked to load small changes in after the fact. We have explained to the clients that that's what a test environment is for; we didn't go so far as to say that what they are doing is backwards. Risk and workload have both increased, but how do you convince a now-hostile client, enabled by a sysadmin group that doesn't value DevOps?
https://redd.it/y82620
Which Lets Encrypt client to use?
Dear fellow DevOps,
I am currently trying to run a service (Vault, to be more precise) that sits in a private subnet but should still have a TLS certificate.
We're currently running on GCP and use acme.sh. We have two GCP projects: one for the service itself, where it can store secrets, and another that serves as the ACME project for DNS alias mode.
The machines are managed in a Managed Instance Group and sit behind an internal L4 load balancer.
The process now looks like this:
1. cloud-init creates the gcloud profiles (by switching CLOUDSDK_ACTIVE_CONFIG_NAME)
2. Configure the systemd service files and packages via Salt
3. In the VM's startup script, activate the default profile and fetch the existing Let's Encrypt account key (to stay within Let's Encrypt's rate limits); put the key in place
4. Install acme.sh
5. Activate the get-certificate profile and kick off the certificate request
6. Change back to the default profile and upload the LE key if it was empty at the beginning
In the end we use Caddy to reverse proxy to the service.
Unfortunately, the problem is that the cron job from acme.sh does not use the get-certificate profile, so I'd have to customize it to renew certificates.
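One low-effort workaround (a sketch, assuming the gcloud configuration is named `get-certificate` and acme.sh lives in its default home under root) is to point the cron entry at a small wrapper that switches the profile before renewing:

```shell
#!/bin/sh
# Wrapper for the acme.sh renew cron job: run renewals under the
# "get-certificate" gcloud configuration instead of the default one.
export CLOUDSDK_ACTIVE_CONFIG_NAME=get-certificate
/root/.acme.sh/acme.sh --cron --home /root/.acme.sh
```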
What client do you recommend in combination of GCP and DNS alias mode?
Thanks and BR
https://redd.it/y807h3
DevOps & Pipeline Runners: The Key to Sustainable App Development
DEVOPS WEBINAR!
📌Have you ever wondered how DevOps experts choose the best Pipeline Runners for the job? Join us as three DevOps experts discuss the tools they're using now and weigh the pros and cons of the most popular DevOps tools available.
📷 Save your seat! https://my.demio.com/ref/P1vnTtR1cOvEHVTz
https://redd.it/y88ejv
AppDynamics Mentor
Hello all, I'd appreciate it if anyone who has been working with AppDynamics could assist or mentor me with advice and support.
I really appreciate this subreddit, /r/devops. Thank you!
https://redd.it/y88sjn
Amazed with pulumi
I don't know if this post will be considered advertising; I have no relation with Pulumi, nor am I sponsored by them.
I just want to say that I'm amazed at what Pulumi can provide. I make Twitch videos of my side projects, and I was playing with Pulumi to create my Lambda function. I wanted to use my Pulumi code to:
1. Zip my lambda source
2. Upload it to S3 based on file changes
3. Update the Lambda function
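The three steps above roughly collapse into one resource in Pulumi's TypeScript SDK. A sketch (the role name and source path are hypothetical); `FileArchive` content-hashes the sources, so the zip is re-uploaded and the function updated only when files actually change:

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Hypothetical pre-existing execution role, looked up by name.
const role = aws.iam.Role.get("lambdaRole", "my-lambda-exec-role");

const fn = new aws.lambda.Function("my-fn", {
    runtime: "nodejs16.x",
    handler: "index.handler",
    role: role.arn,
    // FileArchive zips ./src and hashes its contents, so uploads
    // only happen when the sources change.
    code: new pulumi.asset.FileArchive("./src"),
});
```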
I understand that much of what I wanted to do in Pulumi can be done easier in shell with a pipeline. I just wanted to test out Pulumi so that's my reasoning.
This means that I can run specific methods based on context or on all contexts and be able to pass that data into the resources I'll create with Pulumi if desired.
One criticism with Pulumi is that their docs are not the best.
Here is a shameless plug for my Twitch video, where I went through the pains and gains.
https://www.twitch.tv/videos/1628400489
PS: I'm sure most of this can also be done with the CDK.
https://redd.it/y8418w
Continuous Deploy an ASP.NET Core Web App in GCP Cloud Run
In this tutorial, we will see a methodical way to implement Continuous Deployment (CD) of an ASP.NET Core MVC web app (.NET 6) on Google Cloud Run with the help of a Google Cloud Build trigger.
By the end of this tutorial, you will have a full understanding of enabling continuous delivery of ASP.NET Core applications to Cloud Run via Cloud Build.
The tutorial covers in-depth concepts of working with Cloud Build triggers and Cloud Run features such as logs, revisions, and SLOs.
It also helps you understand how to troubleshoot continuous deployments on Cloud Run.
https://youtu.be/5M9yzZOJXaQ
https://redd.it/y8emmx
Pushd and Popd
Hey, this might not be specific to DevOps in general, but I figure you devopsians might benefit from this like I have. You can use `pushd <dir>` to save your current directory on a stack and jump to the directory you name; then, when you're done doing whatever it is you're doing, `popd` takes you back to where you were. Found it pretty handy, thought someone else might too.
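A quick demonstration in bash (any directory works in place of `/usr`):

```shell
# pushd saves the current directory on a stack and jumps to the one you name;
# popd pops the stack and takes you back.
pushd /usr      # now in /usr; the previous directory is remembered
# ... do some work in /usr ...
popd            # back where you started
```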
https://redd.it/y8ddas
Leaving job with no job lined up officially
Hi everyone,
I would like to hand in my two weeks' notice as a DevOps engineer / software developer at my company because of a very toxic coworker. I'm basically at a breaking point with this person; I have brought the issue up to management once before and again recently, but that doesn't seem to change anything.
I believe I have an offer coming my way in a few weeks, but it's still unofficial since I don't have it in hand.
I've gotten to the point where I don't even want to work because of this person, to where I may take days off with the little PTO I have. It's very sickening and most definitely taking a toll on me.
Do you all think it's okay for me to jump ship and send in my two weeks? Finance-wise I'm good: no loans, no debts, and no rent/mortgage.
I'll keep studying LeetCode and Cracking the Coding Interview and applying to jobs while I wait for the official offer to come in.
https://redd.it/y8ipqz
looking to learn
I know this may seem like a very vague question, but I don't see any other way to get an answer close to what I'm looking for.
I'm a beginner developer, and I'm creating a project so I can learn as I go.
The project will have a website with a search bar, a system that searches YouTube for videos exactly matching the query and starts a player, and some artificial intelligence.
Do those of you with experience know of any content that would make this journey easier?
When I finish the project I intend to make it available; I believe it could be useful to someone.
https://redd.it/y8jftt
MariaDB Data-in-use Encryption using Intel SGX
Dear Community,
The enclaive.io team has been working on adding data-in-use encryption to MariaDB. By data-in-use encryption, we mean that the whole database is encrypted at runtime. In contrast to data-at-rest encryption (https://mariadb.com/kb/en/encryption-key-management/), query and data processing remain encrypted in memory. In other words, at no moment in time does MariaDB leak data. Hence, key rotations and key management become somewhat moot.
We leverage confidential computing to enclave MariaDB. In a nutshell, confidential computing uses special security microinstructions provided by modern Intel/AMD CPUs to encrypt physical memory.
We have open-sourced the implementation and prepared a Docker container to get MariaDB running quickly.
GitHub: https://github.com/enclaive/enclaive-docker-mariadb-sgx
Demo Video: https://www.youtube.com/watch?v=PI2PosrdrCk
We would very much appreciate feedback, beta testing, likes, and any other form of support. Do you think the contribution should be merged into the MariaDB project?
https://redd.it/y8hakm
branch name as choice parameters in declarative pipeline Jenkins
I have already installed the git-parameter plugin, but I could not find the options in the pipeline to fill in the fields needed for a parameterized build.
```groovy
parameters {
    choice(
        name: 'Branch to build',
        choices: ['dev', 'prod'],   // a list, not bare strings
        description: ''
    )
}
```
I have used the above snippet in the format below:
```groovy
import java.text.SimpleDateFormat

def branchname = ""

class Config {
    static envForBranch = [
        'test'   : 'dev',
        'develop': 'dev',
        'master' : 'prod',
    ]
}

pipeline {
    agent any
    triggers {
        gitlab(triggerOnPush: true,
               triggerOnMergeRequest: true,
               branchFilterType: 'All')
    }
    options {
        gitlabBuilds(builds: ['library', 'Artifacts', 'Docker Image', 'Deploy'])
        ansiColor('xterm')
        gitLabConnection('/*repo*/')
        disableConcurrentBuilds()
    }
    parameters {
        choice(
            name: 'Branch to build',
            choices: ['develop', 'master'],
            description: ''
        )
    }
    stages {
        stage('Library') {
            steps {
                library(
                    /*code*/
                )
```
When I use `choices: [${BRANCH_NAME}]` or `choices: [env.BRANCH_NAME]`, I cannot get all of the available branches. I need a choice parameter that populates all available branches in the drop-down. As of now, I only get `develop` and `master`.
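For what it's worth, the git-parameter plugin exposes its own parameter type for declarative pipelines rather than reusing `choice`. A sketch (the `branchFilter` regex is an assumption about the remote's ref layout); after the first run, it lists every branch the plugin finds in the repository:

```groovy
parameters {
    gitParameter(
        name: 'BRANCH',
        type: 'PT_BRANCH',              // list branches rather than tags
        defaultValue: 'develop',
        branchFilter: 'origin/(.*)',    // strip the remote prefix
        description: 'Branch to build'
    )
}
```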
https://redd.it/y8syo8
Do DevOps jobs without on-call duty exist?
I'm interested in the work, but I already have major difficulties with sleep. Is that a deal breaker for all things Infra/DevOps?
https://redd.it/y8ys5t
Finding the right host
Hello, I hope I'm in the right place here. My employer is thinking about changing hosts because we had some issues with our current one.
I found an article by CSS-Tricks saying that you should go with the host that makes things easiest for you. Our tech stack consists mostly of PHP and Next.js. For Next.js, Vercel's hosting is very developer-friendly with its automatic deployments and previews, but when I looked up PHP support I only found a community-maintained project for enabling PHP on their platform. Is it normal to have different hosts for different languages/projects? Wouldn't it be easier to have everything hosted in one place? And if so, does such a host exist?
https://redd.it/y8zj1a
CodePipeline to deploy infrastructure with Terraform
Hello,
Our current method of deploying infrastructure on AWS is each team member running Terraform on their local machine, with an S3 bucket holding the state.
The company has grown quite a bit in recent years, and after a recent audit we now have to show a trail of who did what, and why, when deploying any infrastructure.
We have been playing with the idea of a CodePipeline in each account that deploys infrastructure once it is merged to the main branch. While this works in principle, it does have its issues.
Has anyone done something similar? What approach did you take? We have also looked at Terraform Cloud; does anyone recommend (or advise against) it?
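As a sketch of the CodePipeline idea (paths and flags shown are illustrative, not our actual setup): a CodeBuild stage driven by a buildspec keeps the apply inside the pipeline, so every change is tied to a merged commit and its author.

```yaml
# buildspec.yml — runs after a merge to main; the pipeline's source stage
# provides the commit, so the audit trail is the Git history itself.
version: 0.2
phases:
  install:
    commands:
      - terraform init -input=false   # state stays in the existing S3 backend
  build:
    commands:
      - terraform plan -input=false -out=tfplan
      - terraform apply -input=false tfplan
```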
https://redd.it/y8sxhz
mirrord 3.0 is out - run/debug your local process in the context of your k8s cluster
https://metalbear.co/blog/mirrord-3.0-is-out/
mirrord lets developers run local processes in the context of their cloud environment. It’s meant to provide the benefits of running your service on a cloud environment (e.g. staging) without actually going through the hassle of deploying it there, and without disrupting the environment by deploying untested code.
https://redd.it/y944db
Jenkins, Terraform, Ansible and AWS - how do they all connect?
DevOps newbie here, trying to learn how Jenkins, Terraform, Ansible and AWS connect. Can anyone give me a general, ELI5 rundown on the image in the link?
https://repository-images.githubusercontent.com/291145908/a7c9b680-ece2-11ea-9105-3d56cd7f2abc
https://redd.it/y957q9
@r_devops
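At a very high level, the usual answer to "how do they connect" is: Jenkins orchestrates the pipeline, Terraform provisions the AWS resources, and Ansible configures the machines Terraform created. A hedged sketch of the shell steps a Jenkins job might run (names, output variables, and playbook are illustrative assumptions):

```shell
# 1. Terraform creates the AWS infrastructure (VPC, EC2 instances, ...)
terraform init -input=false
terraform apply -input=false -auto-approve

# 2. Terraform outputs (e.g. an instance IP) feed Ansible's inventory
#    (simplified here to a one-host inventory file)
terraform output -raw web_ip > inventory.ini

# 3. Ansible configures the machines Terraform just created
ansible-playbook -i inventory.ini site.yml
```

Jenkins itself contributes no provisioning logic — it just runs these steps on commit, records the logs, and fails the build if any step fails.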
How to put my skills into practice?
I am currently training to be a DevOps engineer, doing several courses on AWS, Terraform, Jenkins, etc. But I feel that I need to put this knowledge into practice, perhaps with "real projects" or some fictitious project that involves all the tools. I don't currently work in DevOps, so I can't gain experience working in that environment. What do you recommend? Everything is welcome!
https://redd.it/y90p3x
@r_devops
Switching from Nginx to Caddy - or not?
There has been a lot of praise for Caddy and its simple config file format.
I recently gave it a try, but even for my "simple" reverse proxy use-case, the config turned out to be more complicated than Nginx.
https://blog.cubieserver.de/2022/switching-from-caddy-to-nginx-or-not/
What has your experience been like?
https://redd.it/y985zr
@r_devops
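For comparison, the two reverse-proxy configs at their most minimal (hostnames and ports are illustrative); Caddy's brevity, plus automatic HTTPS, is what the praise usually refers to:

```
# Caddyfile: reverse proxy with automatic HTTPS
example.com {
    reverse_proxy localhost:8080
}
```

```nginx
# nginx: equivalent proxy (TLS certificates must be provisioned separately)
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://localhost:8080;
    }
}
```

The gap tends to widen or shrink with the use case: once you need custom headers, rewrites, or non-standard TLS handling, both configs grow, and familiarity with Nginx can outweigh Caddy's defaults.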
Hacker News discussion: "DevOps is Broken"
Original article:
https://blog.massdriver.cloud/devops-is-bullshit
Hacker News discussion:
https://news.ycombinator.com/item?id=33274988
https://redd.it/y97ng5
@r_devops
How should infrastructure and CI/CD pipelines be documented?
3-4 months ago, we hired an experienced DevOps lead with strong industry experience in AWS and our CI/CD tech stack. While they've done a good job, one of our asks was for them to document our infrastructure's setup in a clear way, and to date this isn't clear to anyone apart from this person.
So our questions here would be:
- How should things like infra/CI-CD be documented such that they could be explained to other tech staff, and stakeholders?
- What are the industry practices here for documentation apart from high level UML diagrams that show how various AWS services come together?
https://redd.it/y9k6c2
@r_devops