Guide to debugging serverless applications
Our first long-form post on debugging serverless applications. Do share your feedback.
https://blog.faasly.io/guide-to-debugging-serverless-applications
https://redd.it/l1ujac
@r_devops
The evolution of serverless architecture
It all started in 1953 when IBM launched its first commercial computer. And then, here we are today, discussing a Serverless Architecture. Through all these years, computing has not only revolutionized the way...
CockroachDB in GitLab CI services
I have configured test cases in a GitLab CI pipeline, and they require a database connection to run. So I am trying to set up a CockroachDB database in GitLab CI services, but I can't connect to the database container from the app container. Here is my sample gitlab-ci.yml file.
test:
  image: node
  stage: test-cases
  variables:
    DATABASE_URL: postgresql://test_db:password@localhost:26257/test_db?sslmode=disable
  services:
    - name: cockroachdb/cockroach:v20.1.4
      alias: localhost
      entrypoint:
        - "bash"
      command:
        - -c
        - >
          mkdir certs my-safe-directory
          COCKROACH_DB="$(cat /etc/hosts | grep $HOSTNAME | cut -d$'\t' -f1)"
          cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach cert create-node localhost 127.0.0.1 cockroachdb $(hostname) --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach start-single-node --certs-dir=certs --listen-addr=0.0.0.0:26257 --http-addr=0.0.0.0:8080 --background
          cockroach sql --certs-dir=certs --execute="CREATE DATABASE test_db; CREATE USER test_db WITH PASSWORD 'password'; GRANT ALL ON DATABASE test_db TO test_db;"
          tail -f /dev/null
  cache: {}
  script: |
    curl -i https://localhost:8080  ## not able to connect to the service
    npm test
  allow_failure: true
  only:
    - master
I also tried connecting using an alias, but that did not work either. If anyone has an idea, please help.
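One likely culprit: inside the job container, `localhost` always resolves to the job container's own loopback interface, so an `alias: localhost` can never route traffic to the service container. A sketch worth trying (the alias name `cockroach` is arbitrary, not a required value) is to give the service a distinct alias and point both the connection string and the health check at it:

```yaml
services:
  - name: cockroachdb/cockroach:v20.1.4
    alias: cockroach            # any name except localhost
variables:
  DATABASE_URL: postgresql://test_db:password@cockroach:26257/test_db?sslmode=disable
script:
  - curl -i https://cockroach:8080
  - npm test
```

If TLS stays enabled, the node certificate would also need the alias added to its hostname list (i.e. include `cockroach` among the arguments to `cockroach cert create-node`), or the TLS handshake will fail on hostname mismatch.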
https://redd.it/l1ryiy
@r_devops
Versioning images and code releases
How do you all handle versioning of application or code releases?
I would like to use a semantic versioning structure, but it has not been easy to do this the Kubernetes way. For example, most images on Docker Hub have their version numbers (and architectures) in the tag field, which makes sorting tags and determining which versions precede the latest one pretty difficult.
I have also tried inspecting image descriptions to trace versions, but even there, there is no clearly defined structure (or widely adopted best practice).
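For the sorting problem specifically, GNU coreutils can already order dotted version strings without extra tooling. A small sketch (the tag values here are made up):

```shell
# sort -V compares dotted numeric fields, so 1.2.10 correctly sorts after 1.2.2
printf '1.2.10\n1.2.2\n1.10.0\n' | sort -V
```

git offers the same ordering for tags via `git tag --sort=v:refname`, which helps when registry tags mirror git tags.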
https://redd.it/l1li9q
@r_devops
What is the simplest log aggregation tool out there?
I have some log files on a server. I want to aggregate them into a single persistent place, do some basic searches over them, and occasionally just watch incoming logs.
Preferably I'd want to install just a single binary for the centralized server and the log-shipping agents (if required): simple config, low overhead. No JVM, for the sweet love of God.
The best thing I've found so far is Papertrail, which has some limitations in the free version. SumoLogic is OK but kind of bloated for what I want. Simple is the key criterion; Elasticsearch, for example, is way too bloated.
Any ideas?
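At its core, the workflow described here is "concatenate with provenance, then search". A toy Python sketch of that baseline (file names and contents are invented for the demo) makes it concrete before reaching for a hosted tool:

```python
import os
import tempfile

def aggregate(sources, dest):
    """Append each source log to one destination file, prefixing lines with their origin."""
    with open(dest, "a") as out:
        for path in sources:
            with open(path) as f:
                for line in f:
                    out.write(f"{os.path.basename(path)}: {line}")

# Demo with throwaway files standing in for real server logs
tmp = tempfile.mkdtemp()
for name, text in [("app.log", "started\n"), ("db.log", "connected\n")]:
    with open(os.path.join(tmp, name), "w") as f:
        f.write(text)

dest = os.path.join(tmp, "all.log")
aggregate([os.path.join(tmp, n) for n in ("app.log", "db.log")], dest)
print(open(dest).read(), end="")  # app.log: started / db.log: connected
```

Anything beyond this (retention, remote shipping, live tailing across hosts) is exactly where the single-binary agents earn their keep.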
https://redd.it/l1m0x8
@r_devops
Anyone else using Uniform Resource Names (URNs) for Asset Tracking and Automation?
Just like AWS's "arn", some shops use a resource name to identify assets. These names give an asset a unique identifier. Having that name allows you to develop a service where you can 'attach' metadata to it (e.g. datacenter, region, managed_by, is_live, etc.). Once you have that metadata, you can automate many things from it (e.g. enabling/disabling monitoring, where to send alerts, billing, etc.).
One challenging area is naming the URN. Specifically, since URNs are strings delimited by ":", what do you define each token to be? One common approach is some form of reverse DNS (e.g. com.example.devops.service1). Curious if anyone else is using URNs, how you're using them, and your approach to coming up with the tokens of the URN?
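Whatever token scheme you land on, it helps to encode it once as code so every tool agrees on the field order. A minimal sketch (the four-field layout here is an invented example, not a standard):

```python
def parse_urn(urn: str) -> dict:
    """Split a colon-delimited resource name into named tokens.

    The field layout (namespace, env, service, resource) is a made-up
    example scheme purely for illustration."""
    fields = ("scheme", "namespace", "env", "service", "resource")
    parts = urn.split(":")
    if len(parts) != len(fields) or parts[0] != "urn":
        raise ValueError(f"unexpected URN shape: {urn!r}")
    return dict(zip(fields, parts))

meta = parse_urn("urn:com.example.devops:prod:service1:db")
print(meta["service"])  # service1
```

Validating the shape at parse time means a malformed name fails loudly in one place instead of silently producing wrong metadata downstream.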
https://redd.it/l3ey3g
@r_devops
Director of IT & DevOps?
I'm not in DevOps, I recruit for a software company. Our VP of Product & Engineering is splitting the team into 2 groups: Product (for our core product, dashboard, etc) and "everything else" (AWS, corporate website, marketing automation, Salesforce, database, internal IT, etc)
He's asking for help coming up with a title for the "everything else" team. Has anyone seen a Director of IT & DevOps title? Or do you have any other suggestions for a role like I'm describing?
Any insight is appreciated!
https://redd.it/l0r154
@r_devops
Traditional IT role with a State agency to DevOps?
Good Morning,
I wanted to see if anyone has an opinion on this. I currently have about 6 years of experience working in IT at a State agency, primarily as an administrator for a few proprietary bits of software that we use (an internal case management system and a SaaS that manages digital evidence).
I have been looking into DevOps for some time now, as I have experience with Python, Docker, virtualization, and various other IT functions, but most of the experience that would be applicable when applying for a job comes from my home lab rather than my office. The agency I work for doesn't really have opportunities to work with these kinds of systems, unfortunately.
I was originally looking to transfer to a role in QA as an entry point into the software development world, since I won't be able to get anything like that where I'm at currently. But a lot of people I spoke with gave me the impression that doing so would most likely lock me into that position, and that if I want to go for DevOps I should pursue it directly.
My question: with my current experience in and out of the office, should I get a certification to help me stand out, or should I apply and hope I can demonstrate that I am passionate enough about this that I do most of my learning in my own time and on my own dime?
If anyone is interested I would love to get some feedback on my resume, although I may need to update it because right now it's mostly tailored towards QA.
Thank you in advance for any and all assistance on this matter.
https://redd.it/l0lryc
@r_devops
How do i come up with a proof of concept
So I have a question which might be slightly off topic.
I am not a DevOps engineer per se, but my job pretty much revolves around that. Nevertheless, I pitched an idea to one of my managers to automate some of the stuff the team does on a regular basis (continuous deployment). He agreed that we should implement it, but asked me to come up with a proof of concept.
So I was wondering: what do I have to include in a proof of concept? I researched online and got various results. I was hoping someone who does this regularly could point me in the right direction, or even share a good example of a proof of concept in the DevOps industry.
Sorry to ask such a silly question; I have not done this before, nor am I formally educated.
Thanks in advance :))
https://redd.it/l0iwsj
@r_devops
New to devops, can't really understand something related to docker-compose
Hey guys, great community here by the way..
There is something I can't understand about the integration between a Dockerfile and docker-compose.yml.
First: Is it a must to have the Dockerfile in the same location as the docker-compose.yml file?
Second: I can write a line in the docker-compose.yml such as image: <SomeDockerHubImage>. Will that also build the image? If so, why do I need a Dockerfile at all? Only for specific CMD/RUN commands?
Third: From what I understood, the Dockerfile serves as the definition of the "base" image, and docker-compose.yml is the place where I specify all the services related to that base image. So if I only specify the related services in docker-compose.yml, such as redis/consul, and I run docker-compose up, will it also execute the Dockerfile located in the same folder? Will it know that the services I specify in the compose file depend on the image I build in the Dockerfile?
Thanks to anyone willing to explain :):)
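To untangle the two: `image:` on its own only pulls a prebuilt image and never touches a Dockerfile; a service is built from a Dockerfile only when it has a `build:` key, and its `context` can point anywhere, so the two files don't have to live in the same folder. Compose also doesn't infer dependencies between services — you state them explicitly with `depends_on`. A minimal sketch (service names are illustrative):

```yaml
services:
  web:
    build:
      context: .              # folder containing the Dockerfile
      dockerfile: Dockerfile
    depends_on:
      - redis                 # start order only, not "wait until ready"
  redis:
    image: redis:6            # pulled as-is; no Dockerfile involved
```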
https://redd.it/l0ijlj
@r_devops
What would be the advantage of separating appgateway and aks cluster into two different vnets and then peering them together to connect as opposed to having all of them in one vnet?
I understand that at the subnet level the App Gateway needs its own subnet in Azure, but I am trying to understand what performance/security pros and cons come from having two VNets like this versus one VNet.
https://redd.it/l3kyi6
@r_devops
Setting up OAuth for Grafana using Terraform and Auth0
I have some personal servers where I tend to install a bunch of internal tools that I want to check regularly (such as Kibana, Grafana, ...). I don't really store sensitive information, but I still don't want those to be publicly accessible on the internet. So over the years, I've built a bunch of workarounds like basic auth, SSH tunnels, and whatnot.
I recently invested some time in setting up proper OAuth as a real solution that I can use over time. I started with Grafana. I used Auth0, but after some bad experiences not remembering what I did in the UI, I now have everything in code using Terraform. It feels like a much more stable and maintainable solution. I wrote about the topic if you are interested in the details:
https://hceris.com/setting-up-oauth-for-grafana-with-auth0/
https://redd.it/l0ge4e
@r_devops
Mario Fernandez
Setting up OAuth for Grafana with Auth0
Set up secure access for Grafana based on OAuth thanks to Auth0, nicely provisioned with Terraform using Infrastructure as Code
SaaS platform for automated servers backups
Hello,
I am a web developer. Do you think that a SaaS platform for automated server backups (VPS servers, block storage volumes, managed databases, files and folders) across hosting providers (DigitalOcean, Vultr, Linode, etc.) would be a good project idea for attracting customers? It would serve the same purpose as platforms that already exist, like snapshooter.io, simplebackups.io, backupsheep.com, backup.ninja, etc.
For example, let's focus on just DigitalOcean for now.
- VPS servers: they offer automated backups on a weekly basis, so not daily or hourly. My service would allow this.
- Block storage volumes: they offer manual backups, so not automated. My service would allow this.
- Managed databases: they offer a retention of 7 days, so more than 7 backups are not possible. My service would allow this.
- Files and folders: this is not something the hosting provides. My service would allow backing up just certain files and folders from the VPS servers.
Now, suppose that some day (however unlikely) DigitalOcean in particular decides to change its backup model, making my solution partially or totally redundant because they would be doing the same as me. That would only affect that particular provider; the others would still have the same weak points DigitalOcean has now.
With this service, the customer can manage all types of backups (VPS, volumes, databases, files) for all hosting providers in one dashboard. It even makes it possible to have one centralized place for all the customer's backups, stored in the hosting provider's own cloud storage, in the customer's own cloud storage provider, or in my cloud storage as part of the subscribed plan. So backups can be stored either inside or outside the customer's hosting provider; storing them externally adds an extra layer of safety.
Everything is as automated as possible. The customer doesn't have to worry about the hosting provider's limitations: manual backups, backup periodicity, backup retention, or expensive backup costs. My service is intended to solve all these limitations, since each provider's backup model differs and none satisfies the customer as completely as a dedicated service could. With my platform, the customer has centralized, full control over all their backups.
So, coming back to the original question: do you think this project idea is feasible and could attract many customers? I think the similar platforms I mentioned at the beginning are having success, and the idea would be to make something similar but improved and somewhat unique.
Regards,
Néstor Llamas
https://redd.it/l3hv7q
@r_devops
[New to DevOps] Best scripting language for DevOps?
Hi all,
I’m thinking of learning a scripting language for automation. Can anyone with experience share their take on this?
https://redd.it/l3gw72
@r_devops
Deploy Sentry through CloudFormation using only AWS services
# TL;DR
If anyone else is interested, I’ve written an alternative to this stack in CloudFormation, deployed via AWS ECS (through either SPOT or ON-DEMAND Fargate containers), which supports all relevant micro-services.
It has been tested alongside Performance Monitoring on a platform with 5 different environments, which generates on average about 5k events per hour, using just t2.* instance classes for RDS/Redis/Kafka.
Link = [https://github.com/Rungutan/sentry-performance-monitoring](https://github.com/Rungutan/sentry-performance-monitoring)
# What is Sentry?
Sentry is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application.
With Performance Monitoring, teams can trace slow-loading pages back to their API calls and surface all related errors. That way, Engineering Managers and Developers can resolve bottlenecks and deliver fast, reliable experiences that fit customer demands.
## Web vitals
More important than understanding that there’s been an error is understanding how your users have been impacted by that error. By gathering field data (variable network speed, browser, device, region) via Google’s Web Vitals, Performance helps you understand what’s happening at your user’s level. Now you know whether your users are suffering from slow loading times, seeing unexpected changes, or having trouble interacting with the page.
## Tracing
Trace poor-performing pages not only to their API calls but to their children. Performance’s event-detail waterfall visualizes your customer’s experience from beginning to end, all while connecting user device data to its expected operation.
## Transaction monitoring
With performance monitoring, you can view transactions by slowest duration time, related issues, or the number of users — all in one consolidated view. And release markers add another layer of context so your team can gauge how customers react to code recently pushed to production.
# How do I deploy it?
Let me make it clear before we go any further -> **Sentry** prides itself on being [open-source](https://sentry.io/_/open-source/), but it does offer a cloud-based solution as a [SaaS](https://sentry.io/welcome/) for those who do not want to deploy, manage and maintain the infrastructure for it.
There are a few community-contributed ways of deploying it on premise if you decide not to go for the cloud version:
* One way is the **docker-compose** setup mentioned in one of Sentry's official GitHub repositories - [getsentry/onpremise](https://github.com/getsentry/onpremise)
* Another is a community-built **Helm** chart available in this repo - [sentry-kubernetes/charts](https://github.com/sentry-kubernetes/charts)
Both of these solutions, though, have some downsides, specifically:
* Scaling ingestion of events is hard due to the hard capacity limits of both solutions
* Database systems generally perform better on non-Docker infrastructure
* Keeping up with version changes is usually a hassle
* Customizing the different bits and pieces, such as integrations, requires a lot of man-hours
That's why, for those of you who use **Amazon Web Services** as your preferred cloud provider, we've built **a fully scalable, easy to maintain and secure infrastructure** based on the following AWS services:
* AWS ECS Fargate
* AWS RDS
* AWS ElastiCache
* AWS MSK (Kafka)
* AWS OpsWorks
* AWS VPC
* AWS CloudWatch
You can deploy it by following these simple steps:
1. Create the stack in CloudFormation using this link ->
# TL;DR
If anyone else is interested, I’ve written an alternative to this stack in CloudFormation which is deployed via AWS ECS (through either SPOT and ON-DEMAND Fargate containers) and supports all relevant micro-services.
It has been tested alongside Performance Monitoring on a platform with 5 different environments which generates on average about 5k events per hour using just t2.\* instance classes for RDS/Redis/Kafka.
Link = [https://github.com/Rungutan/sentry-performance-monitoring](https://github.com/Rungutan/sentry-performance-monitoring)
# What is Sentry?
Sentry is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application.
With Performance Monitoring, teams can trace slow-loading pages back to its API call as well as surface all related errors. That way, Engineering Managers and Developers can resolve bottlenecks and deliver fast, reliable experiences that fit customer demands.
## Web vitals
More important than understanding that there’s been an error is understanding how your users have been impacted by that error. By gathering field data (variable network speed, browser, device, region) via Google’s Web Vitals, Performance helps you understand what’s happening at your user’s level. Now you know whether your users are suffering from slow loading times, seeing unexpected changes, or having trouble interacting with the page.
## Tracing
Trace a poor-performing page not only to its API call but also to that call's children. Performance’s event detail waterfall visualizes your customer’s experience from beginning to end, all while connecting user device data to its expected operation.
## Transaction monitoring
With performance monitoring, you can view transactions by slowest duration time, related issues, or the number of users — all in one consolidated view. And release markers add another layer of context so your team can gauge how customers react to code recently pushed to production.
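The waterfall described above is, conceptually, just a tree of timed spans with parent links. A minimal stdlib sketch of that data model (this is purely illustrative and is not Sentry's actual SDK API; all span names are made up):

```python
# Conceptual sketch of a trace waterfall: nested, timed spans that
# record their parent, from which a waterfall view can be rendered.
# This illustrates the data model only; it is not Sentry's SDK.
import time
from contextlib import contextmanager

spans = []

@contextmanager
def span(name, parent=None):
    start = time.perf_counter()
    try:
        yield name
    finally:
        spans.append({
            "name": name,
            "parent": parent,
            "ms": (time.perf_counter() - start) * 1000,
        })

# A page load whose API call has two children, as in the description:
with span("GET /checkout") as page:
    with span("api.call", parent=page) as api:
        with span("db.query", parent=api):
            time.sleep(0.001)
        with span("cache.get", parent=api):
            time.sleep(0.001)
```

Spans are appended as they finish (innermost first), so rendering the waterfall is just a walk over `spans` grouped by `parent`.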
# How do I deploy it?
Let me make it clear before we go any further -> **Sentry** prides itself on being [open-source](https://sentry.io/_/open-source/), but it does offer a cloud-based solution as a [SaaS](https://sentry.io/welcome/) for those who do not want to deploy, manage and maintain the infrastructure for it.
There are a few community-contributed ways of deploying it on premise if you decide not to go for the cloud version:
* One way is the **docker-compose** setup maintained in one of Sentry's official GitHub repositories - [getsentry/onpremise](https://github.com/getsentry/onpremise)
* Another way is a community built **HELM** package available in this repo - [sentry-kubernetes/charts](https://github.com/sentry-kubernetes/charts)
Both of these solutions, though, have some downsides, specifically:
* Scaling event ingestion is hard due to the hard capacity limits of both solutions
* Database systems generally perform better on non-Docker infrastructure
* Keeping up with version changes is usually a hassle
* Customizing the different bits and pieces, such as integrations, requires a lot of man-hours
That's why, for those of you who use **Amazon Web Services** as your preferred cloud provider, we've built **a fully scalable, easy-to-maintain and secure infrastructure** based on the following AWS services:
* AWS ECS Fargate
* AWS RDS
* AWS ElastiCache
* AWS MSK (Kafka)
* AWS OpsWorks
* AWS VPC
* AWS CloudWatch
You can deploy it by following these simple steps:
1. Create the stack in CloudFormation using this link (the template comes from the [Rungutan/sentry-fargate-cf-stack](https://github.com/Rungutan/sentry-fargate-cf-stack) repo, which launches a highly-available Sentry 20 stack through ECS Fargate at the minimum cost possible):
[https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://s3.us-east-1.amazonaws.com/sentry-performance-monitoring/cloudformation-template.yaml&stackName=Sentry-Rungutan-ECS](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://s3.us-east-1.amazonaws.com/sentry-performance-monitoring/cloudformation-template.yaml&stackName=Sentry-Rungutan-ECS)
2. Fill in **AT LEAST** these parameters and hit "Create stack":
* SentrySystemSecretKey -> You can use a random UUIDv4 from [https://www.uuidgenerator.net/](https://www.uuidgenerator.net/)
* InitialAdminUserEmail -> The email address for the initial admin user
* InitialAdminUserPassword -> A **very strong** password that you should set for the initial admin user
* SslLoadBalancer -> Sentry **cannot** work properly without HTTPS, and it is a requirement for this stack
* SentryEmailUsername -> We recommend SES for this; you can create a user/pass from [https://console.aws.amazon.com/ses/home#smtp-settings](https://console.aws.amazon.com/ses/home#smtp-settings)
* SentryEmailPassword -> Same as above, created from [https://console.aws.amazon.com/ses/home#smtp-settings](https://console.aws.amazon.com/ses/home#smtp-settings)
* SentryEmailHost -> As mentioned in the description, the SES endpoint is **email-smtp.${aws\_region}.amazonaws.com**
* SentryEmailFrom -> If using SES, a verified address (or domain) from [https://console.aws.amazon.com/ses/home#verified-senders-email](https://console.aws.amazon.com/ses/home#verified-senders-email)
PS: It is recommended that you create your own administrators and delete the initial one after the initial deployment is done!
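If you'd rather not fetch the SentrySystemSecretKey value from a website, a random UUIDv4 can just as easily be generated locally; a minimal sketch:

```python
# Generate a random UUIDv4 locally for use as SentrySystemSecretKey.
import uuid

secret_key = str(uuid.uuid4())
print(secret_key)
```

Any sufficiently random string works as the secret key; a UUIDv4 is simply a convenient, well-formed source of 122 random bits.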
https://redd.it/l3b55z
@r_devops
CI/CD getting started with CircleCI and Docker
Hi!
I've been using CI/CD pipelines with Docker for the last year and I figured that it would be cool to share a little bit of knowledge with everyone.
https://link.medium.com/YExGz4uNfdb
https://redd.it/l39fy3
@r_devops
Devops expertise requested - let's talk ML cloud infrastructure workflows!
Alright, so I've written a bunch of software to solve a host of problems I've had with machine learning workflows / experimentation - primarily training enormous language models on protein sequences. You have to set up a cloud compute account, ssh into some box, move files around with git, and figure out how to log and track the results of a training run... (it's terrible guys, come on!)
I have yet to find good options on the market (have looked at anyscale, determined, databricks, etc.) so I wrote some software to do what I wanted and decided to turn it into a venture.
I want to open up this thread for discussion about issues with machine learning versioning, tracking, and training in general - and maybe see if this could be a valid solution - https://latch.ai/. Would love to get some substantive conversations going below!
https://redd.it/l38d46
@r_devops
What tools do you use for a data migration?(story and context inside)
Hi all.
I am a new developer/kind of devops person.
Recently, I was tasked with moving our company's data from one CRM to another. Most of our time was spent data mapping and discussing what to bring over. I ended up using Ruby on an internal Linux server to query one CRM's REST API, pull the data down into memory, and massage it to get it ready to send to the new CRM system. This process worked fine, since there weren't too many records to bring over, but I kept wondering... what if this migration were much larger, what would I use? What if I needed to pull millions of records and massage/map them to a different system? What platforms handle that?
I felt like what I did worked for a small company, but wouldn't work in most other places.
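At larger scale the usual answer is to stream and batch rather than hold everything in memory. A hypothetical sketch of the shape of such a pipeline (the `fetch_page`/`transform` functions and all field names here are stand-ins, not a real CRM API):

```python
# Hypothetical sketch: page through a source, transform each record,
# and flush to the target in fixed-size batches so memory stays bounded
# even for millions of records. fetch_page and the schemas are stand-ins.
def fetch_page(offset, limit):
    # Stand-in for a paginated REST call to the source CRM;
    # here we fake a source holding 250 records.
    return [{"id": i, "name": f"acct-{i}"}
            for i in range(offset, min(offset + limit, 250))]

def transform(record):
    # Field mapping between the two CRMs' schemas.
    return {"external_id": record["id"], "account_name": record["name"].upper()}

def migrate(batch_size=100):
    sent = []  # stands in for POSTing each batch to the target CRM
    offset = 0
    while True:
        page = fetch_page(offset, batch_size)
        if not page:
            break
        sent.append([transform(r) for r in page])  # send_batch(batch)
        offset += batch_size
    return sent

batches = migrate()
```

The same shape scales up by swapping the in-memory lists for real API calls and a persistent checkpoint of `offset`, so a failed run can resume instead of restarting; dedicated ETL platforms mostly automate exactly this pattern.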
https://redd.it/l3s1so
@r_devops
SSL between reverse proxy and local nodejs app
fellow devops friends,
in your opinion, what are the benefits of having SSL between an NGINX reverse proxy and a NodeJS app if both are running on the same VM and SSL terminates at the NGINX proxy?
https://redd.it/l32kwn
@r_devops
Best Automated Build/deploy tool for maven project?
Hey all - I wanted to know what the best automated build/deploy tool for a Maven project is. Basically, I have a Spring Boot application (hosted on GitHub) that I want to compile using Maven with a specific profile for each deploy environment. After the build is completed, I would like to SFTP the artifact (WAR file) to a remote server.
Is there a general consensus on the best tool to do this? I had tried at one point to get this working in pipelines, but the transfer of the artifacts became complicated because I would basically have to install an FTP client on the VM that was spun up. I was hoping there was a more straightforward way to do this.
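Whatever CI tool ends up running it, the job described above boils down to two commands. A hedged sketch of how they might be assembled (the profile names, host, and paths are made up for illustration):

```python
# Hypothetical sketch of the two steps described above: build the WAR
# with a per-environment Maven profile, then push it over SSH/SFTP.
# All names (profiles, host, paths) are illustrative, not real ones.
def maven_build_cmd(profile):
    # e.g. profiles like "dev", "staging", "prod" defined in pom.xml
    return ["mvn", "clean", "package", f"-P{profile}"]

def sftp_upload_cmd(war_path, user, host, remote_dir):
    # scp can speak SFTP on modern OpenSSH, avoiding a separate FTP client
    return ["scp", war_path, f"{user}@{host}:{remote_dir}"]

build = maven_build_cmd("staging")
upload = sftp_upload_cmd("target/app.war", "deploy",
                         "example.internal", "/opt/tomcat/webapps/")
# A CI job would run these via subprocess.run(build, check=True), etc.
```

Since `scp`/`sftp` ship with OpenSSH on virtually every CI runner image, this sidesteps the original problem of installing an FTP client on the build VM.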
https://redd.it/l33psj
@r_devops
Will DevOps be dead in next 5 years?
I joined a firm just before lockdown as a Java developer, and I had little knowledge about DevOps. Due to the lockdown they couldn't provide me proper training, so they offered me a DevOps job with a high package, and I accepted.
Now I am doing well with different Amazon Web Services and Kubernetes.
This firm is not very big; there are just 2 DevOps engineers, and I am worried that my job will be replaced by some developer (as I joined as a developer and found an easy path to DevOps).
https://redd.it/l36pit
@r_devops