CypherMate
**🌟✨ Introducing CypherMate: A Leap Towards Secure Corporate Communications**
Today, I am incredibly proud to present CypherMate, an open-source project created by me, designed to revolutionize the way corporations handle secure communications within Slack. In our digital age, the protection of sensitive information is not just a necessity but a cornerstone of successful business operations.
**What is CypherMate?**
CypherMate is a cutting-edge Slack bot designed to make password sharing and sensitive information exchange both secure and effortless. With just a few simple commands, you can encrypt messages, generate one-time secure links, and ensure that your data is accessible only to the intended recipients.
Key Features:
* Encrypt & Decrypt Messages: Securely share encrypted information right within Slack, with easy decryption for the recipient.
* One-Time Secure Links: Share sensitive documents or messages through links that expire after a single use, adding an extra layer of security.
* User-Friendly: CypherMate simplifies complex encryption processes, making secure communication accessible to everyone in your organization.
**Why CypherMate? 🛡️**
In an era where data breaches can have catastrophic consequences, ensuring the security of your corporate communications is paramount. CypherMate offers:
* Enhanced Data Security: By encrypting your messages and using one-time links, CypherMate significantly reduces the risk of data leaks and unauthorized access.
* Streamlined Workflow: Securely share information without disrupting your team’s workflow. CypherMate’s seamless integration with Slack means no more switching between apps or complicated encryption tools.
* Peace of Mind: Know that your sensitive information is protected with state-of-the-art security measures, giving you the confidence to share what’s important.
**Ideal for Every Corporation**
Whether you’re a startup or a Fortune 500 company, CypherMate is the tool you need to secure your Slack communications. It’s not just about protecting data; it’s about fostering a culture of security and responsibility.
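For readers curious how a one-time link scheme works in principle, here is a minimal sketch. This is not CypherMate's actual code; the class and method names are invented for illustration. The core idea: store the secret under a random token and delete it on first read.

```python
# Illustrative sketch only -- not CypherMate's implementation.
# A "one-time secure link" boils down to: a random token identifies the
# secret, and retrieval destroys it, so the link works exactly once.
import secrets

class OneTimeStore:
    """In-memory store where each secret can be retrieved exactly once."""

    def __init__(self):
        self._secrets = {}

    def put(self, plaintext):
        # A 32-byte URL-safe token doubles as the "link" identifier.
        token = secrets.token_urlsafe(32)
        self._secrets[token] = plaintext
        return token

    def take(self, token):
        # pop() removes the entry, so any replay of the link gets nothing.
        return self._secrets.pop(token, None)

store = OneTimeStore()
token = store.put("db-password-123")
first = store.take(token)   # the intended recipient reads the secret
second = store.take(token)  # a second use of the same link fails
```

A production version would additionally encrypt the payload at rest and expire unread tokens after a timeout.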
[https://github.com/Pyshios/CypherMate/tree/main](https://github.com/Pyshios/CypherMate/tree/main)
https://redd.it/1bs8u8n
@r_devops
How to start a "DevOps advocacy project"?
Hi, we've decided to try and start a DevOps advocacy project because we've had issues with "organic" learning among developers.
We need to give them a basic understanding of the DevOps principles and the tools and platform we use to run the apps.
I'm not looking for any technical advice but for organizational stuff. How do you go about the "training", how to do it for frontend or backend developers, ideal scope size for the trainings, how often, does pair programming work, etc.?
Thank you all for your insights.
https://redd.it/1bsayfe
@r_devops
AWS hourly spend cost bot
At a former job, we had an AWS cost bot that would post a graph of our spend to Slack roughly every hour, so we could see at a glance if there was a weird spike.
Does anyone know what this tool is? I'd like to set one up at my current job. Or do you think it was just something built with a Lambda calling the Cost Explorer APIs?
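A minimal sketch of the Lambda-plus-Cost-Explorer approach the poster guesses at. The function names, spike heuristic, and webhook handling are invented for illustration; the boto3 call shown (`get_cost_and_usage`) is the real Cost Explorer API, and hourly granularity requires opting in within Cost Explorer settings.

```python
# Sketch of a "cost spike" Slack bot. Function names and the spike
# heuristic are placeholders; get_cost_and_usage is the real boto3 API.
import json
import urllib.request

def fetch_hourly_costs(start, end):
    # boto3 imported here so the formatting logic below stays testable offline.
    import boto3
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="HOURLY",  # hourly data must be enabled in Cost Explorer
        Metrics=["UnblendedCost"],
    )
    return [
        (r["TimePeriod"]["Start"], float(r["Total"]["UnblendedCost"]["Amount"]))
        for r in resp["ResultsByTime"]
    ]

def format_slack_message(costs, spike_factor=2.0):
    """Flag the latest hour if it exceeds spike_factor x the window average."""
    avg = sum(c for _, c in costs) / len(costs)
    latest_ts, latest = costs[-1]
    flag = " :warning: spike!" if latest > spike_factor * avg else ""
    return f"Hourly AWS spend at {latest_ts}: ${latest:.2f} (avg ${avg:.2f}){flag}"

def post_to_slack(webhook_url, text):
    # Slack incoming webhooks accept a simple JSON {"text": ...} payload.
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Scheduling this with an EventBridge rule every hour reproduces the described behavior; a real graph (rather than text) would need an image rendered and uploaded separately.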
https://redd.it/1bscioc
@r_devops
Failed to connect to your instance after deploying mern app on aws ec2 instance
I dockerized my MERN app (Next.js, Node.js, MongoDB) and am trying to deploy it on an AWS EC2 instance. When I access my backend on port 5000 via the AWS public IP it works fine, but when I try to access the frontend the terminal hangs, and reloading the terminal makes SSH fail.
This is the error I get when I reload the terminal:
Failed to connect to your instance
Error establishing SSH connection to your instance. Try again later.
Then I have to stop and start the instance; after that the backend works fine again, but accessing the frontend still errors out. This is what my folder structure looks like:
a myecommerce folder containing backend, frontend, and nginx folders (nginx has two files: a Dockerfile and nginx.conf), plus docker-compose.yml.
This is my nginx Dockerfile:

```dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/*
COPY ./nginx.conf /etc/nginx/conf.d/
CMD ["nginx", "-g", "daemon off;"]
```
This is my nginx.conf file:

```nginx
events {}
http {
    server {
        listen 80;
        server_name <my-aws-public-ip>;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
```
This is my frontend folder's Dockerfile:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
```
This is my backend folder's Dockerfile:

```dockerfile
FROM node:20-alpine
RUN npm install -g nodemon
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm", "run", "dev"]
```
This is my docker-compose.yml:

```yaml
version: '3'
services:
  frontend:
    image: <my frontend image from Docker Hub>
    ports:
      - "3000:3000"
  backend:
    image: <my backend image from Docker Hub>
    ports:
      - "5000:5000"
  nginx:
    image: <my nginx image from Docker Hub>
    ports:
      - "80:80"
```
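One inconsistency worth noting in the files above: the nginx.conf serves static files from /usr/share/nginx/html inside the nginx container, but the Next.js frontend runs as its own container on port 3000, so that directory is empty. A sketch of a proxy-based nginx.conf that forwards to the compose services instead (the service names `frontend` and `backend` are assumed to match the compose file; the `/api/` prefix is an illustrative choice, not something from the original post):

```nginx
events {}
http {
    server {
        listen 80;
        location /api/ {
            # "backend" resolves via Docker's embedded DNS on the compose network
            proxy_pass http://backend:5000;
        }
        location / {
            proxy_pass http://frontend:3000;
            proxy_set_header Host $host;
        }
    }
}
```

As a guess from the symptoms only: an SSH session that hangs exactly when the frontend is loaded can also point to a small instance running out of memory while `npm run dev` compiles Next.js pages; running a production build (`next build` + `next start`) is much lighter.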
Later I want to set up GitHub CI/CD pipelines for it and use a custom domain to access the website. I'm not sure whether, since I'm using docker-compose, I still need to set up PM2. I'm also posting my inbound rules; I don't know why the frontend is not working. I'm a beginner at AWS deployment and dockerization and I'm trying to improve my skills. I've been stuck on this for many days and have watched a lot of videos, but not a single article or video does what I'm actually trying to do. Thanks in advance.
https://redd.it/1bseijj
@r_devops
Container orchestration vs. VM orchestration in the cloud.
I'm trying to understand the specific use cases where we'd prefer to use container orchestration (Kubernetes) as opposed to VM orchestration (Nomad) in a cloud setting.
It seems clear to me that if you're focused on batch jobs, you're working with single-purpose VMs that are started and then destroyed after doing their specific bit of work. Setting up a VM image to provision them with everything they need would seem to introduce less overhead into the cluster, and it wouldn't make much sense to use Kubernetes for a case like this. The cloud's distinguishing property of making it easy to find one or more VMs that match the required scale seems to make VM orchestration as elastic and malleable as container-level orchestration.
In what specific cases would you prefer to use Kubernetes?
https://redd.it/1bshdqx
@r_devops
Coursera Plus at 90% off
I will invite you to use Coursera Plus for a year (worth $399) via your email (corporate invites) for $39, and obviously you won't be paying me until you've received whatever proof you require and are satisfied. If anyone actually needs it, DM me and I'll help!
https://redd.it/1bsj1nc
@r_devops
Vulnerability Management Lifecycle in DevSecOps
This is the first entry in a series on a technology-driven, automated approach to DevSecOps architecture! This post helps you set up your teams for success in making sense of all the noise that comes from various vulnerability scanners.
https://blog.gitguardian.com/vulnerability-management-lifecycle-in-devsecops/
https://redd.it/1bskn2k
@r_devops
How do you monitor the uptime of different microservices in k8s?
tl;dr: Got a bunch of third party cybersecurity tools/services running in our k8s cluster, I need to figure out a way to measure/benchmark the uptime of different microservices that these tools spin up so we can establish some SLOs.
Bit of background: I am on a small DevOps team that supports the internal security team of my company, which in turn supports 5000+ devs.
Almost everything we run is vendor tooling for all kinds of different security scanners, using their Helm charts/manifests/whatever. Some of these tools have their own monitoring, but it's OK at best for our needs.
I am looking for a solution to help me monitor the uptime of all the different microservices that get spun up by these tools. We do have grafana/prometheus setup, and I've got prometheus blackbox exporter running for probing HTTP endpoints without too much logic built into it, but that doesn't always paint the whole picture.
It'd be nice to aim for 99% uptime, but 95% as a start is also acceptable. The stuff we run isn't super critical except for a few times per year, but we keep a close eye on the cluster during that time anyways. So whatever solution I come up with, it needs to check every 5-10 minutes to give a good enough granularity for measuring up to 99%. Two main options that I am considering and one kind of crazy one:
- Expand upon Blackbox exporter, try and get it to hit as many API endpoints as possible. I think most things we run have some kind of an API that I can use to check whether a service is up or not. I'd want to avoid this though because I am personally not a fan of writing too much logic in YAML
- Add service specific labels to each pod, so if ALL pods with that label go down, I know the particular service is degraded.
- Write a custom operator? Never wrote one before, but maybe this is the answer?
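Since blackbox exporter probes are already running, the Prometheus side of the first option can be sketched as recording and alerting rules like the following. The rule names, the `service` label, and the thresholds are illustrative; `probe_success` is the real metric the blackbox exporter emits per probe target.

```yaml
# Sketch: per-service availability SLO from blackbox exporter probes,
# assuming each probe target carries a "service" label. Names are
# illustrative, not from the original setup.
groups:
  - name: service-uptime
    rules:
      # 30-day availability ratio per probed service (0.0 - 1.0)
      - record: service:availability:ratio_30d
        expr: avg_over_time(probe_success[30d])
      - alert: ServiceBelowSLO
        # 99% over an hour; scraping every 5-10 minutes gives enough samples
        expr: avg_over_time(probe_success[1h]) < 0.99
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.service }} availability below 99% over the last hour"
```

The label-based second option can reuse the same shape with kube-state-metrics pod metrics in place of `probe_success`; a custom operator is rarely needed just to measure uptime.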
https://redd.it/1bsmhdx
@r_devops
Need advice about end to end testing
Hi all,
I’m new to the world of DevOps, and while there is a lot to learn, I am enjoying it so far. In particular, I like that DevOps lets me increase confidence in my deployments and gives me better control over quality.
One of the areas in which I’d like to improve is my frontend deployment. My stack consists of a backend in one repository and several decoupled React frontends, each of which lives in its own repository. I want full confidence that I don’t accidentally break the integration between the frontend and backend when deploying new frontend code, i.e. that the frontend successfully calls my backend’s API every time I deploy.
The way I am thinking of approaching this is:
In my GitHub Actions workflow, add a step to my frontend repository's pipeline that checks out the backend repository, deploys a production-like environment, and then runs end-to-end tests against it. Once the tests are over, tear down the test environment.
I am wondering if this is a valid approach? I’m curious how mature organizations handle this sort of thing.
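The approach described can be sketched as a workflow like this. The repository names, the compose file, the secret name, and the test command are all placeholders for whatever the real setup uses; `actions/checkout` with `repository`/`token`/`path` is the standard way to pull a second repo into the job.

```yaml
# Sketch of the described workflow; names and commands are placeholders.
name: e2e
on: [pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # the frontend repo itself
      - uses: actions/checkout@v4
        with:
          repository: my-org/backend       # placeholder backend repo
          token: ${{ secrets.BACKEND_PAT }} # token with read access to it
          path: backend
      - name: Start production-like environment
        run: docker compose -f backend/docker-compose.e2e.yml up -d --wait
      - name: Run end-to-end tests
        run: npm ci && npm run test:e2e
      - name: Tear down
        if: always()                        # clean up even on test failure
        run: docker compose -f backend/docker-compose.e2e.yml down -v
```

The `if: always()` teardown matters: otherwise a failed test run leaves the environment running on the shared runner.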
https://redd.it/1bsn6l1
@r_devops
AWS cost limit.
I’m an absolute beginner with AWS; I only have on-premise or private cloud experience.
I like experimenting with new technologies and I’m not afraid to break things. However, this never applied to AWS, because I was afraid of financial ruin.
However, this situation sucks. I would like to learn AWS in a safe environment, knowing that whatever happens I will never be charged more than, e.g., $30 per month.
Does such an option exist?
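For context on what is and isn't possible: AWS has no hard spend cap. AWS Budgets can alert you (and budget actions can stop some resource types), but nothing guarantees a ceiling. A sketch of creating an alert-only $30 budget with the CLI (the account ID and email address are placeholders):

```shell
# Placeholder account ID and email. Alerts only -- this does NOT stop spending.
cat > budget.json <<'EOF'
{
  "BudgetName": "monthly-learning-cap",
  "BudgetLimit": { "Amount": "30", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
EOF

cat > notifications.json <<'EOF'
[
  {
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 80,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "me@example.com" }
    ]
  }
]
EOF

aws budgets create-budget \
  --account-id 123456789012 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json
```

A dedicated throwaway account under AWS Organizations plus a budget alert like this is the closest practical approximation to a spending limit.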
https://redd.it/1bslxt7
@r_devops
What’s the difference between DevOps and MLOps?
Can a DevOps person handle MLOps?
https://redd.it/1bsq2md
@r_devops
What types of SLOs are you creating?
Do you guys have service-level SLOs? On my team, we only have SLOs for CUJs (critical user journeys), with a couple of SLOs per CUJ encompassing all services involved in that CUJ. There are no SLOs on any individual service.
A lot of the documentation I read seems to talk about service-level SLOs. If you use these, do you alert on them? Which CUJ do you group them into, given that a single service could belong to multiple CUJs? Do you use both CUJ and service-level SLOs?
I am trying to figure out whether we are doing things incorrectly and should create SLOs per service as well.
Also, this seems to point to doing more product-level SLOs: https://sre.google/resources/practices-and-processes/product-focused-reliability-for-sre/#measure-performance
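Whichever level the SLOs sit at, the error-budget arithmetic is the same. A quick sketch of converting a target and window into allowed downtime:

```python
# Convert an availability SLO target over a window into an error budget.
def error_budget_minutes(slo_target, window_days):
    """Allowed downtime in minutes for a given availability target."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * total_minutes

# e.g. a 99.9% 30-day SLO allows 43.2 minutes of downtime
budget = error_budget_minutes(0.999, 30)
```

One practical consequence: a CUJ-level SLO spanning several services consumes budget whenever any of them fails, so per-service SLOs need tighter targets than the CUJ they compose into.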
https://redd.it/1bsrbpu
@r_devops
Introducing Templater: A Simple CLI, inspired by helm, for Text File Templating for Developers
Today, I'm both excited and humbled to share a project that's been a labor of love and necessity: Templater.
This journey started with a personal frustration I encountered in my development work. The need for a simple, powerful way to template text data without diving into the deep end of another programming language led me to create Templater, drawing inspiration from Helm's templating capabilities.
[Check out Templater on GitHub](https://github.com/rjshrjndrn/templater)
Templater is an open-source tool that leverages the Sprig library, allowing you to template not just individual text files but entire directories. It's designed to be intuitively familiar to those of you who've worked with Helm; the main difference is that you can feed it any files or directories.
I've included a practical example that demonstrates Templater's real-world application: a multi-region Packer build. This example, found in the examples directory, illustrates how Templater can streamline and simplify complex tasks, making it an invaluable tool in your development arsenal.
I warmly invite you to explore Templater, try it in your projects, and share your feedback. Your insights and contributions will be invaluable as we continue to refine and expand Templater's capabilities together.
Thank you for your support and curiosity. Let's make the development process a bit easier for everyone.
Warm regards,
Rajesh Rajendran
https://redd.it/1bsow5n
@r_devops
GitHub: rjshrjndrn/templater (a Go template CLI using the Sprig library; like Helm, but only for local templating)
What is an essential read for DevOps?
Share your favorite resources, please.
https://redd.it/1bsx3o4
@r_devops
Selecting an artifact management system for embedded firmware binaries
Really struggling with this at the moment so would appreciate some advice.
The company I currently work for produces electronic devices, some of which are heading the IoT route and require OTA updates.
At the moment binaries are built by Jenkins and then manually stored on a backed-up Samba share. Internally they are published by an email with a link to the binaries on this share.
To publish some binaries for OTA we're planning an AWS Lightsail VPS which will need someone to manually upload new binaries and adjust a manifest file.
eye twitch intensifies
I've got the green light to introduce a company-wide artifact management system, one that will accept raw binaries in some form, allow promotion to our test department, then promotion to product management to decide when to publish. Product management could then promote the binaries, resulting in a push to some publication server (Lightsail?), or allow 3rd parties direct access to the artifacts in the management system itself, with suitable credentials.
But here's where I'm struggling and would take recommendations from anyone with suitable experience. I've used JFrog Artifactory before, but it seems too complex a solution for our needs; getting everyone to understand it might be an uphill battle. When I used it before I had to wrap binary artifacts as RPMs for them to go into Artifactory; it seems that most artifact management systems expect to work with a package management system of some kind.
I want to avoid rolling our own solution and take something off the shelf if possible. "Artifactory but dumber and simpler to use, and can manage raw firmware binaries in some form."
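The promotion flow described above boils down to copying immutable artifacts between stage locations. A minimal sketch of that idea, using local directories as stand-ins for object-store prefixes (the stage names and layout here are hypothetical, not any particular product's API):

```python
import shutil
import tempfile
from pathlib import Path

STAGES = ["build", "test", "release"]  # hypothetical stage names

def promote(root: Path, artifact: str, from_stage: str) -> Path:
    """Copy an artifact unchanged into the next stage; never delete the source."""
    to_stage = STAGES[STAGES.index(from_stage) + 1]
    dst = root / to_stage / artifact
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(root / from_stage / artifact, dst)
    return dst

# Demo: a firmware blob moves build -> test -> release.
root = Path(tempfile.mkdtemp())
(root / "build").mkdir()
(root / "build" / "fw-1.2.3.bin").write_bytes(b"\x00firmware\x00")
promote(root, "fw-1.2.3.bin", "build")
promote(root, "fw-1.2.3.bin", "test")
```

Keeping promotion a pure copy (rather than a move) preserves an audit trail of which stages an artifact has passed through, which is also how repository managers model promotion internally.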
https://redd.it/1bsxyyf
@r_devops
Automated post-deployment monitoring vendors
Hi all, I'm looking for some help sourcing vendors and general strategies for doing post-deployment monitoring.
**Tl;dr: I'm looking for a system that can handle pre-deployment checks, the actual deployment, and post-deployment monitoring with automated rollbacks.**
I recently joined a start-up with a manual deployment process. The start-up currently does scheduled releases on 2 days during the week, and we deploy the entire stack of a handful of microservices at the same time. Aside from the obvious problem to solve, which is to move to a place where we can deploy individual services independently, I am looking for a platform where I can do pre-deployment checks (a checklist of things to verify it is safe to deploy), the deployment itself, and then post-deployment monitoring with auto-rollbacks.
To elaborate a bit on the pre- and post-deployment phases: I want the ability to check whether there is an active OpsGenie incident declared, whether it is a weekend deployment, etc. before we do an automated deployment. In post-deployment monitoring, I want the ability to configure an alarm that, when triggered, initiates a rollback automatically. This alarm would be monitored for a specified amount of time post-deployment (measured in hours or by some other trigger, like meeting a condition such as "1000 requests processed").
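The post-deployment phase described here is essentially a bounded watch loop. A minimal sketch, with the alarm check and rollback passed in as callables (the `clock` and `sleep` parameters are injected only to make the loop testable; a real alarm source like OpsGenie or CloudWatch would sit behind `alarm_fired`):

```python
import time

def watch_and_rollback(alarm_fired, rollback, window_s=3600, poll_s=30,
                       clock=time.monotonic, sleep=time.sleep):
    """Watch an alarm for a fixed window after deploying; roll back if it fires.

    Returns True if the deployment survived the whole window, False after
    a rollback was triggered.
    """
    deadline = clock() + window_s
    while clock() < deadline:
        if alarm_fired():      # e.g. poll the alerting system here
            rollback()
            return False
        sleep(poll_s)
    return True
```

The "1000 requests processed" variant would swap the time deadline for a counter check inside the same loop; the structure stays the same.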
What I've looked at so far:
* **Argo Workflows**: a promising number of integrations, but very general purpose. My impression is that it can only deploy to K8s targets... am I wrong? We deploy more than just K8s workloads. I suppose I could write custom tasks to deploy to non-K8s... but Octopus can do this natively.
* **Octopus**: Can handle almost all of the pre-deployment requirements I have, can deploy to virtually any target (including K8s), but is unfortunately lacking any sort of post-deployment monitoring (verified after talking to a solutions architect at Octopus).
* **Codefresh**: Just bought by Octopus. Seems like it's also K8s specific.
I'd really appreciate any leads on systems out there! I come from one of the FAANGs where we had all of this (and more) but it was all internal tooling - does anything exist in the market?
https://redd.it/1bt08qs
@r_devops
Project advice for SE wanting to switch to DevOps
Hello people,
I am currently working as a software engineer, mainly using Rails. I used Java and React in my previous jobs.
For the past 3 months I've been interested in the cloud and started studying for the AWS SAA certificate, while also trying to get better with Linux and Terraform on the side. I have some small project ideas and would like your opinions on them:
- Setting up a React project I did earlier on AWS with EC2, an ASG, a Load Balancer, and Route 53 (with my own domain).
- Setting up a simple frontend plus backend project with basic CRUD stuff, that also uses RDS and CloudFront on top of the other things I used for the previous project. I want to write some basic tests for the backend so that I can set up a CI/CD with them.
I'm thinking of doing these projects manually via the Management Console first, then learning how to automate them with Terraform.
I also want to practice using VPC, but don't know how I could use it in these projects. I also wonder how I could utilize Ansible in projects like this, would I even need it?
Do you think these projects are good to implement for getting into DevOps, or are they way too simple?
I am open to any positive or negative feedback. Feel free to roast me if I said something clueless.
Thanks in advance.
https://redd.it/1bt1mj7
@r_devops
In which order would you learn these?
Terraform, Docker, Kubernetes, Ansible, CI/CD, Prometheus/Grafana
I recently passed AWS Solutions Architect Associate and have Python development experience, RHCSA Linux, and Network+ under my belt.
I'm thinking of learning Terraform for cloud-agnostic IaC (I already know CloudFormation), followed by jumping into Docker containers and Kubernetes, since those seem to be extremely in demand within the industry. After that I might look into CI/CD with GitLab and observability/monitoring with Grafana. Thoughts?
Also, is Ansible still in demand in this day and age? The RHCE exam - which is the sequel to the RHCSA - is basically an Ansible exam. Knowing I've recently gotten the RHCSA, would it be a good idea to do the RHCE next, or should I skip it and focus more on Docker and Kubernetes?
https://redd.it/1bt3om5
@r_devops
The Latest Innovation in Incident Response - Most Privilege Access
Least privilege access is important but has shown its limitations during incidents.
As a result, entitle.io developed a new feature that instantly empowers EVERYONE in the organization with full admin rights to combat cyber threats collectively.
Here’s how it works: https://www.entitle.io/lp/most-privilege-access
https://redd.it/1bt3iig
@r_devops
Inputs into API calls
We have a use case where our CI/CD pipelines will "deploy" to API endpoints. We're unsure how we should store the API inputs in our GitLab CI/CD repos. I figure we'll need to house the API version number along with the PUT command's payload.
Has anyone here worked with something similar? My go to would be a JSON input that we would then parse into the request but curious what others have seen/used in similar situations.
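A minimal sketch of the JSON-input approach described above, assuming a hypothetical payload file that carries the API version alongside the PUT body (the file layout and `/deploy` route are made up for illustration; only Python's stdlib is used):

```python
import json
import urllib.request

def build_put(payload_path: str, base_url: str) -> urllib.request.Request:
    """Load a versioned payload file and turn it into a PUT request."""
    with open(payload_path) as f:
        spec = json.load(f)  # e.g. {"api_version": "v2", "payload": {...}}
    # Hypothetical route: the version from the file picks the endpoint.
    url = f"{base_url}/{spec['api_version']}/deploy"
    return urllib.request.Request(
        url,
        data=json.dumps(spec["payload"]).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
```

Keeping the version and payload together in one committed JSON file means the pipeline has a single reviewable artifact per deployment, and the request itself is derived rather than hand-built.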
https://redd.it/1bt74gp
@r_devops