Automation for ECS Fargate standalone tasks - is this even viable?
Good morning everyone, posting this question in the hope of some valuable input.
So the situation is this: I've been given a task to come up with a solution for automated deployment of ECS Fargate tasks. While it sounds trivial, it doesn't seem to be: among the requirements are a single task definition for all tasks, and each task should be able to override the default environment variables; as I understand it, that's the main reason they don't want ECS services.
So it has to be one task definition, no services, and a fleet of tasks, each with unique env vars per tenant (customer).
The other thing is that they don't want image builds to be part of this deployment/automation, as the devs wish to take care of that some other way.
I've been trying to wrap my head around this for a couple of days and haven't yet had a chance to ask further questions or raise concerns (though that's inevitably going to happen). The task itself isn't new: another engineer already worked on it for some time and suggested creating some Jenkins jobs to automate it. At this point, though, I feel the whole concept isn't really viable, and all I can think of right now is a series of bash scripts running AWS CLI commands to start/run/stop tasks and probably create task definition revisions.
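For what it's worth, the bash/CLI route can stay fairly small: `aws ecs run-task` accepts per-invocation container overrides, so a single task definition can serve every tenant. A hedged sketch, where the cluster, task definition, subnet, security group, and container names are all placeholders:

```shell
#!/usr/bin/env bash
# Sketch: one shared task definition, one standalone Fargate task per tenant,
# env vars overridden per invocation. All resource names are placeholders.
set -euo pipefail

CLUSTER="app-cluster"
TASK_DEF="app-task"            # single task definition family for all tenants
SUBNET="subnet-0123456789abcdef0"
SG="sg-0123456789abcdef0"

# Pure helper: build the per-tenant overrides JSON for run-task.
build_overrides() {
  local tenant="$1"
  printf '{"containerOverrides":[{"name":"app","environment":[{"name":"TENANT_ID","value":"%s"}]}]}' "$tenant"
}

run_tenant_task() {
  local tenant="$1"
  aws ecs run-task \
    --cluster "$CLUSTER" \
    --task-definition "$TASK_DEF" \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[$SUBNET],securityGroups=[$SG],assignPublicIp=DISABLED}" \
    --overrides "$(build_overrides "$tenant")"
}

# Guarded so sourcing this file doesn't hit AWS.
if [ "${RUN_TASKS:-0}" = "1" ]; then
  for tenant in tenant-a tenant-b tenant-c; do
    run_tenant_task "$tenant"
  done
fi
```

With this shape, the per-tenant state (which tenants exist, their env values) could live in a simple config file or parameter store that the script reads, and stopping a tenant's task is the matching `aws ecs stop-task` call.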
An alternative could be a bunch of task definitions, each containing a unique set of env vars, used by services and, in turn, tasks. However, since they want multiple stages (dev, prod) and dozens of tenants, I'm not sure about this approach either.
Anyway, I would really appreciate any insights on this. Has anyone dealt with similar tasks before?
Thanks in advance!
P.S. Sorry for my bad English; it's not my native language.
https://redd.it/ohm4dt
@r_devops
kubernetes: nginx ingress vs nginx server
Hello! Sorry if my question is noob-ish, but I've only been learning k8s for 4 months and have just reached the Istio and ingress topics. So my question is:
Imagine I'm running a site with php-fpm + nginx (via an upstream socket). On "bare metal" I would simply install PHP with its php-fpm module and let nginx handle the requests via fastcgi and location blocks.
Now imagine I want to move my site into a Kubernetes cluster, and I've chosen an ingress for flexible traffic management. What should the final architecture look like? I mean, where does the actual "nginx + php-fpm" pair live? Should it be:
1. We install the nginx ingress controller
2. We run two containers (php-fpm + nginx) in the same pod/deployment
...or could the ingress itself handle my php-fpm requests? I'm asking because in practice the nginx ingress looks like yet another web server handling the requests, so it seems we'd end up with the ingress plus a separate server in the pod/deployment, which is why the question arose.
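For orientation, option 2 usually looks like the sketch below: the ingress terminates HTTP and forwards to a Service, and inside the pod an nginx container speaks fastcgi to php-fpm. Names, images, and the ConfigMap are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-site
spec:
  replicas: 2
  selector:
    matchLabels: { app: php-site }
  template:
    metadata:
      labels: { app: php-site }
    spec:
      containers:
        - name: nginx                # serves HTTP, proxies to php-fpm via fastcgi
          image: nginx:1.21
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
        - name: php-fpm              # listens on 127.0.0.1:9000 inside the pod
          image: php:8.0-fpm
      volumes:
        - name: nginx-conf
          configMap:
            name: php-site-nginx-conf   # holds the fastcgi_pass 127.0.0.1:9000 config
```

The ingress then routes to a Service targeting port 80 of these pods. The common pattern keeps a fastcgi-capable server next to php-fpm because the ingress controller ordinarily proxies HTTP; some controllers can be configured to speak FastCGI directly, but that is the less common setup.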
https://redd.it/ohm9oy
@r_devops
Who uses Sentry or Clubhouse.io ?
Does anyone use Sentry? How does it compare to Jira?
What can and can't you do in the free version?
I'm working on a project by myself… will I see that big of a difference?
https://redd.it/ohttn6
@r_devops
How do managed services work?
Hi all, I've been interested in devops topics for a while, but there's something I've been curious about that I can't find much information on.
I was wondering how exactly managed services like AWS RDS, DigitalOcean Kubernetes, AWS SQS, etc. actually work. I know of Ansible, where I could write playbooks to automate installation and server configuration, but it's still not clear to me how these services work under the hood.
So when I click in the AWS console, the frontend sends a JSON payload to the backend, but how exactly does that translate to configuration actions on a server? Is it a combination of Terraform and Ansible or something?
And what about the so-called serverless services? I've been using Lambda for quite a while, but how would one implement a service like Lambda?
This is probably not the most well-formed question, so I was wondering if anyone could point me in the right direction to understand this a bit better.
Thanks!
https://redd.it/ohtqr9
@r_devops
Upgrading helm deploy with a different chart
Hello,
I ran into this peculiar issue in my home lab and want to use it as a learning opportunity.
I'm running a local Bitwarden server, which was originally named bitwardenrs. I used the Helm chart from k8s-at-home to deploy it: charts/charts/stable/bitwardenrs at bitwardenrs-2.1.11 · k8s-at-home/charts (github.com)
The server was recently renamed to vaultwarden and the deployment chart was updated as well: charts/charts/stable/vaultwarden at master · k8s-at-home/charts (github.com)
Now, if I simply try to upgrade from one chart to the other while providing the existing deployment's name, I get the following error:
>Error: UPGRADE FAILED: template: vaultwarden/templates/common.yaml:1:3: executing "vaultwarden/templates/common.yaml" at <include "common.all" .>: error calling include: template: vaultwarden/charts/common/templates/_all.tpl:29:6: executing "common.all" at <include "common.pvc" .>: error calling include: template: vaultwarden/charts/common/templates/_pvc.tpl:7:19: executing "common.pvc" at <$PVC.enabled>: can't evaluate field enabled in type interface {}
I think the "right" process is to take backup of bitwarden, delete it, start up new container and restore config. But I want to see if there's a way to migrate it to another chart.
Any suggestions on how to approach this? I am honestly not even sure if helm supports migration from one chart to another and I my googling fails me so far.
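Helm will attempt `helm upgrade <release> <different-chart>`, but as the error above shows, the new chart's values layout has to match. If you fall back on the backup/restore route, it sketches roughly like this. Release names, the namespace, pod names, and the `/data` path are all assumptions here; verify each against your setup before running anything:

```shell
# Hedged sketch of the backup/restore migration, wrapped in a function
# so nothing runs until you deliberately call it.
migrate_to_vaultwarden() {
  # 1. Copy the data out of the running pod (pod name and path are guesses)
  kubectl cp default/bitwardenrs-0:/data ./bitwarden-backup

  # 2. Remove the old release (check first whether its PVC is deleted with it!)
  helm uninstall bitwardenrs

  # 3. Install the renamed chart with your existing values
  helm repo update
  helm install vaultwarden k8s-at-home/vaultwarden -f values.yaml

  # 4. Copy the data back into the new pod once it is running
  kubectl cp ./bitwarden-backup \
    "default/$(kubectl get pod -l app.kubernetes.io/name=vaultwarden \
      -o jsonpath='{.items[0].metadata.name}'):/data"
}
```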
Thanks!
https://redd.it/ohtdk3
@r_devops
Terraform Conditional Loop
https://youtu.be/VVVa2o4d0rs?sub_confirmation=1
https://redd.it/ohzu7q
@r_devops
YouTube
Learn terraform | terraform conditional loops
In this video you will learn how to use conditional loops with terraform.
In this video we will set up network settings on an Azure storage account with specified subnet ranges, using Terraform conditional loops.
#Terraform #azureterraform
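The pattern the video describes, a conditionally rendered `network_rules` block on an Azure storage account, can be sketched like this; resource names and values are placeholders:

```terraform
variable "allowed_subnet_ids" {
  type    = list(string)
  default = []
}

resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # Conditional block: rendered only when subnet IDs were supplied.
  dynamic "network_rules" {
    for_each = length(var.allowed_subnet_ids) > 0 ? [1] : []
    content {
      default_action             = "Deny"
      virtual_network_subnet_ids = var.allowed_subnet_ids
    }
  }
}
```

When `allowed_subnet_ids` is empty, the `for_each` evaluates to an empty list and the `network_rules` block is simply not rendered.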
With Azure DevOps, using a single project and a team of ten who can each work on everything in the project, is there any advantage to using multiple teams rather than one single team for everyone?
I have an organization that develops around 10 simple mobile apps a year. We are a team of ten people: 6 developers, plus marketing, research, graphics, and a project manager. Every person can potentially be involved in every app, whether designing, developing, fixing bugs, or creating assets.
We are planning to use Agile Scrum with an Azure DevOps single project to handle everything. What I would like to know is if there is any advantage in having a single or multiple teams. For example:
- One team for everyone
or
- One team for developers, one for marketing, one for management
https://redd.it/oi2q8b
@r_devops
What do you have within your pipelines to ensure that containers deployed are secure?
Learning more about this space, and I'm wondering what you can use to ensure that your containers are secure all the time, in terms of software patches and adherence to a specific hardening standard.
https://redd.it/oi3ut5
@r_devops
Ideas for a simple data Pipeline
I have a friend with a startup and he needs to set up a data pipeline that looks something like this:
1. Clients upload CSV files via his site, his backend stores them in S3.
2. Periodically (not in real time and not even same day), his data team needs to clean and transform the data.
3. The data folks also want to update training models based on this data.
4. The output needs to be dumped to a data lake.
5. Lastly, the output needs to be displayed/available in dashboards.
I've set up simple pipelines before, but I'm not too clear on the tools/work involved in steps 2 and 3. I believe SageMaker could be useful here. My friend's team uses Jupyter notebooks and Python extensively. He was thinking about using Snowflake, but I think Athena might work well to start. Also, he's wondering about Tableau vs Looker.
tl;dr there are MANY different ways to do this kind of thing, I'm looking for recommendations on any/all of the above. Thanks in advance.
https://redd.it/oi5rd4
@r_devops
Can Chinese users use Azure DevOps?
I am looking at a project that will be hosted on Azure DevOps, with some pipelines that will have self-hosted runners, some in US, some in China.
Does anyone know if there are any major difficulties for Chinese users in using an Azure DevOps-hosted repository and pulling/pushing code to the git repo?
I know we'll need to test all this but just wondering if anyone has had some experience with getting US and Chinese contributors to work together like this and what obstacles have you encountered.
https://redd.it/oi9n72
@r_devops
How to simplify packer AMI builds without using chef/ansible?
I've seen multiple companies use Terraform/Cloudformation to deploy their infrastructure yet still use something like Chef during the machine image build process.
These are mostly for "legacy" apps that haven't been containerized so some of the config may become complex. Besides a bash script, what's everyone else doing?
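One lightweight option is to keep the image logic in Packer itself and drive it with plain shell provisioner scripts; a hedged HCL2 sketch, where the region, instance type, AMI filter, and script paths are placeholders:

```terraform
locals {
  ts = formatdate("YYYYMMDDhhmmss", timestamp())
}

source "amazon-ebs" "app" {
  ami_name      = "app-${local.ts}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  ssh_username  = "ec2-user"

  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-*-x86_64-gp2"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.app"]

  # Plain scripts, versioned next to the template; no Chef/Ansible needed.
  provisioner "shell" {
    scripts = ["scripts/base.sh", "scripts/app.sh"]
  }
}
```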
https://redd.it/oik149
@r_devops
Best way to store information about every http request in application
I am working on a web application (.NET Core) and I would like to store some information about every request:
- client IP
- api endpoint
- http return code
- user
- error message
And I would like to give users the ability to look at the audit log (they would only be able to see their own requests and filter them, by IP or return code for instance).
I tried using postgres (which I use as a database for my application) but within 3 days, I already ended up with 120,000 rows in my DB.
I am afraid the database will become a bottleneck for the application. MongoDB is not an alternative because of some license issues.
What can I use as an alternative?
https://redd.it/oikwu7
@r_devops
Supply Chain Security Tips That Won’t Slow Development Down
As supply chain attacks continue to dominate headlines, software development teams are beginning to realize that package management can’t be taken lightly — the threats hidden under the hood are real. In this installment of The Source, we want to talk about the practices and tools that developers need to adopt in order to protect against supply chain attacks.
Supply Chain Risks Are Inherent to Open Source Dependencies
Open source components, via package managers and registries, are a great way to hack into a company’s supply chain. Developers are busy enough already, and no one has the time to review every single line of code in every single package, let alone the package updates.
Projects usually start out with the latest versions of all packages, and then slowly fall behind. Software development organizations’ AppSec strategies must take into account that while open source usage has many benefits, there are also risks. One of them is that open source dependencies contain open source supply chain risks. Failing to secure the open source supply chain opens the door to risks like outages, cryptojacking, botnets, leaked data, or legal risks related to open source licenses or data loss.
What developers need to remember is that in many ecosystems, merely installing a package can open the door to threats. Ecosystems like npm, PyPI, and RubyGems support install hooks. As soon as a developer installs a library, its install scripts run with the permissions of the account the machine is running under, with access to anything that account can reach. If the installed library contains malicious code, it could easily cause havoc or infect other libraries while cleaning itself up.
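As a concrete illustration, an npm package can declare a `postinstall` script in its `package.json` that runs automatically the moment the package is installed, with the installing user's permissions. A benign, hypothetical example:

```json
{
  "name": "innocuous-lib",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node -e \"console.log('ran automatically at install time')\""
  }
}
```

A malicious package would put something far worse than a `console.log` in that slot.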
Protecting Against Supply Chain Threats
While there is no one solution that addresses all of the risks, there is a series of countermeasures that developers can use to address supply chain security.
Use only verified package sources
Typosquatting and brandjacking are among the most commonly used attack vectors.
Review the open source licenses of the packages that you are using.
Many package registries provide information about the license for a given package. It’s important to remember that different releases might have different open source licenses.
Migrate from packages that are abandoned
Abandoned packages are more likely to be a subject of a malicious takeover. If you’re relying on a piece of software that does not get enough attention, consider either avoiding it or taking it over. You could also run a community assessment on the packages you plan to incorporate into your software.
Don’t use new packages
If a package is less than 30 days old, wait until it’s confirmed as safe by the community’s security researchers.
In the past year we saw several attempts to publish malicious packages to various registries. With this policy in place, the majority of them could be avoided.
Make sure that critical production-related CVE notifications are part of your security alert workflow.
Once in a while there may be a critical vulnerability that is affecting your production. It’s better to be woken up due to a security alert rather than a security incident.
If you are using automated tools to update your dependencies, make sure that packages are confirmed as safe before updates are automatically installed.
Use isolated CI stages.
Don’t use a single CI pipeline that has all of the environment variables for AWS, Docker registries, etc. If you’re using the same environment for running specs, building containers, pushing updates, and everything else — you are putting your environment, your company, and your customers, at risk.
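As a sketch of what stage isolation can look like, here is a hypothetical GitHub Actions workflow (script paths and secret names are placeholders): the test job gets no cloud credentials at all, and only the deploy job receives the secrets it needs:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./scripts/run-specs.sh     # no AWS/registry secrets exposed here

  deploy:
    needs: test
    runs-on: ubuntu-latest
    environment: production             # secrets scoped to this environment only
    steps:
      - uses: actions/checkout@v2
      - run: ./scripts/push-images.sh
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

A compromised dependency pulled in during the test job then has nothing of value to exfiltrate.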
Protect your entire development cycle, starting from developers.
The first step towards threat prevention is spreading awareness. Educate teams that randomly searching for and downloading packages is not OK. Make sure the standard practice is to never install a package before checking who's behind it.
Review packages based on research, not just the description on the git repository.
In order to review an open source project you’re interested in using, you will need to download the package and study its content to ensure it’s secure. You should not rely on the data that comes out of the registry you’re using. Or — use WhiteSource Diffend, which will analyze the packages for you to detect security and quality issues.
As security shifts left, developers are increasingly tasked with the detection and remediation of vulnerabilities.
While old methodologies put security at the end of the development process and slowed down the development cycle, today’s DevSecOps gives developers a seat at the security table from the earliest stages of development. Unfortunately, they aren’t always given the tools and practices that they need in order to share ownership over security.
Developers don’t need to become security experts in order to share ownership over security. They simply need to integrate the right automated tools and practices that will help them cover security threats like supply chain attacks, without slowing them down.
Source
https://redd.it/oimf1e
@r_devops
What is Kubernetes Downward API and why you might need it?
Let's check out one of the lesser-known Kubernetes features, the Downward API, which lets you expose pod metadata to your application: https://youtu.be/c4IOAXE5Mo8
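For reference, a minimal manifest (pod name and image are placeholders) that uses the Downward API's environment-variable form to expose the pod's own name to the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo running as $POD_NAME; sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # also works for metadata.namespace, status.podIP, etc.
```

The same metadata can alternatively be mounted as files via a `downwardAPI` volume, which is handy for labels and annotations that can change at runtime.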
https://redd.it/oin79r
@r_devops
Watermelon Metrics: Green outside, Red Inside
Very interesting post on everyday questions that are hard to answer:
https://blog.last9.io/need-for-systems-observability/
https://redd.it/oil8z6
@r_devops
All buzz-words which you need to know before interview
So let me start:
DevOps
GitOps
SRE
SaaS/PaaS/IaaS
IaC
PS. Jokes are also appreciated.
https://redd.it/oimxpn
@r_devops
Do you use A Cloud Guru or similar for continuing professional development?
Like the question in the post title, I'm wondering if you use something like A Cloud Guru to continue improving your DevOps skills.
I ask because it dawned on me that the project I have been working on might be redundant. It's a continuous improvement project to help DevOps professionals learn in a bite-sized, organic way.
It was mainly designed for neurodiverse (e.g. dyspraxia) professionals. It would work like this:
1. Break your job down into a visual map of key responsibility areas + responsibilities within each
2. You can then add incident reports, learning notes, updates onto specific responsibilities
3. Bring in your team leader or senior to add their feedback onto your progress in relevant areas
Eventually the team would get on board so you can pick various other responsibilities (that interest you) or self-select into projects that draw on your core strengths.
Alas, it seems a bit redundant if you can just do DIY learning in an LMS or in a sandbox like the one A Cloud Guru offers. Thoughts?
https://redd.it/oipwtq
@r_devops
GitOps testing and promotion procedures and practices
Hi all. As we know, GitOps is gaining more and more traction, and for very good reason. The fewer points of friction you have, the better the development workflow, since it puts ownership on the developers themselves instead of admins.
But I have a question, or rather an issue I am trying to work through: the actual development workflow. Any advice is welcome.
Imagine the following scenario:
A dozen app repos (containing only code and build instructions)
A config repo for Helm charts and values for various environments
How do you actually approach integration testing when there are breaking changes in the app that need to be reflected in the Helm chart as well? Say the app is at 2.x.x and needs to move to 3.x.x, and the chart is at 1.x.x and needs to move to 2.x.x. How do you deploy the changes in the app (living in the PR) and run integration tests before merging into master, which then triggers the version upgrade in the config/chart repo?
Also, to me it looks like the two-repo approach is overhead, because the testing of the app is so detached from the deployment itself.
It seems to me that some tools, procedures, or practices are missing to glue all of this together, or am I not approaching this the right way?
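For what it's worth, one common way to close the loop is for the app repo's CI to open a PR against the config repo that bumps the image tag in the chart's values file, so integration tests run against the exact app+chart combination before anything merges. A minimal sketch of the bump step (the `values.yaml` layout and function name are hypothetical, not anything prescribed by GitOps tooling):

```python
import re
from pathlib import Path

def bump_image_tag(values_file: str, new_tag: str) -> None:
    """Rewrite the 'tag:' line in a Helm values file in place,
    e.g. 'tag: 2.4.1' -> 'tag: 3.0.0'. A CI job would commit the
    change and open a PR against the config repo."""
    path = Path(values_file)
    text = path.read_text()
    # (?m) makes ^/$ match per line; group 1 preserves indentation and key.
    updated = re.sub(r"(?m)^(\s*tag:\s*).*$", rf"\g<1>{new_tag}", text)
    path.write_text(updated)
```

The promotion itself then stays declarative: the config repo PR is the integration-test trigger and, once merged, the deployment trigger.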
https://redd.it/oiplne
@r_devops
Help me decide on a monitoring/log analysis stack - ELK vs. TICK vs. [Other]
I'm new to all of these stacks and am unsure of the right solution for my scenario.
**General description**
* Collect application logs from 100-200 instances of an application, each on a different server
* Footprint on these servers should be minimal (avoid parsing/transforming on these servers)
**Size of data**
* 1-5 GB per day, per server. Let's ballpark at 15 TB raw data per month.
**Log format and parsing/transformation requirements**
* See below for the raw format
* Note that each logged command has separate entries for *start* and *stop*, usually with other entries in between
* **Each command should be stored as a single record.** I.e., as part of processing the logs, the *start* and *stop* records should be merged into a single record with a *startTime*, *stopTime*, *duration,* and other fields.
[datetime] [commandId-1] start [commandType] [user] [transferSize]...
[datetime] [commandId-2] start [commandType] [user] [transferSize]...
[datetime] [commandId-1] stop [commandType] [user] [transferSize]...
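Whichever stack ends up doing the transform, the pairing logic itself is straightforward. A rough sketch in Python (the field order and ISO timestamps are assumptions, not the real log format):

```python
from datetime import datetime

def merge_command_logs(lines):
    """Pair start/stop entries by commandId into single records
    with startTime, stopTime, and duration in seconds."""
    pending = {}  # commandId -> record still waiting for its stop entry
    merged = []
    for line in lines:
        ts, command_id, event, command_type, user = line.split()[:5]
        if event == "start":
            pending[command_id] = {
                "commandId": command_id,
                "commandType": command_type,
                "user": user,
                "startTime": ts,
            }
        elif event == "stop" and command_id in pending:
            rec = pending.pop(command_id)
            rec["stopTime"] = ts
            rec["duration"] = (datetime.fromisoformat(ts)
                               - datetime.fromisoformat(rec["startTime"])).total_seconds()
            merged.append(rec)
    return merged  # anything left in `pending` never saw a stop entry
```

In Logstash this correlation is roughly what the `aggregate` filter does (grouping events by a task id), which is worth knowing when weighing Logstash against doing the merge elsewhere in the pipeline.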
**What kind of queries/reports/analytics/alerting do we want?** Examples:
* How many commands are issued per \[timeframe\]?
* Visualize commands per second over time
* Which *commandTypes* take the most execution time?
* Which users issue the most expensive commands?
* What commands did user "Bob" issue between 3 and 4 PM?
* Anomaly detection and alerting
# So, what's the right solution?
I'll take any suggestions. Send them my way :)
Below are my thoughts from the research I've done, but I'm new to this space
* ELK - Mostly sounds good. Filebeat would ship the logs (minimal footprint), Logstash would transform them, Elasticsearch would store them, and Kibana would display results. But I've heard concerns about Logstash at scale, much of what we're looking for feels more like metrics (commands per second) than logs, and I get the impression that anomaly detection and alerting are not as great (or included) with ELK.
* TICK - Could maybe work? I don't see the equivalent of Logstash in this stack and I don't want to do transforms on the application servers. I'm also not sure if the data structure supports keeping the related data in a log entry together.
* Scale and Cost - This is a big unknown to me. How well do these stacks handle this kind of scale and what does the hosting architecture usually look like?
https://redd.it/oiqjgf
@r_devops
An Offline Environment - Brainstorming
Hi everyone!
We're deploying a k8s cluster in an offline environment and wanted to share our ideas for improving this process, since this case is quite rare in the cloud era.
Our DEV environment is on the cloud. Production is offline.
Current situation:
We are packing the following, using a giant shell script, into a tar bundle:
- Nexus installation
- RPMs, images, and Helm charts
- Ansible playbooks
- Environment variables
(Images and env vars are the only items that change between customers.)
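If the giant shell script's packing step ever gets rewritten, it's easy to make testable; a minimal stdlib-only sketch (paths and the output name are hypothetical):

```python
import tarfile
from pathlib import Path

def build_bundle(paths, out="offline-bundle.tar.gz"):
    """Pack artifact files/directories into one gzipped tarball
    for transfer into the offline environment."""
    with tarfile.open(out, "w:gz") as tar:
        for p in paths:
            # arcname keeps bundle layout flat and independent of build paths
            tar.add(p, arcname=Path(p).name)
    return out
```

A small check like listing the archive members after the build would catch a missing artifact before the bundle ever leaves for the customer site.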
On-site, using VMware vSphere, we create a VM and use it as the management point:
- We upload the bundle to it.
- We install Nexus on it and push all our images, RPMs, and charts into it.
- We create the rest of the VMs for the k8s cluster and run the Ansible playbooks from the Nexus VM.
- We run the Helm charts and deploy our app; the images are pulled from the Nexus VM.
Questions:
1. We've thought about Gravity as a tool to pack the whole local environment and ship it as-is, but it has been deprecated. Does anyone know of another solution?
2. We've thought about Packer for packing our Nexus VM. Do you think it's a good solution?
3. We've also thought about creating all the cluster VMs with Terraform. Any other ideas?
4. Any other DevOps tool for improving offline deployments is welcome.
Thanks!
Erez
https://redd.it/oisl70
@r_devops