Measuring Software Quality Using Quality Metrics
The pace of software development keeps accelerating. Amazon deploys software updates through its Apollo deployment service every 11.7 seconds on average. Etsy has a fully automated deployment pipeline that does about 50 deployments a day.
With deadlines growing tighter every day, product quality requirements are rising as well. Under these conditions, maintaining and constantly improving product quality becomes a matter of primary importance.
In this article, we look at the importance of software quality management. You will learn about the quality metrics used for assessing software performance and ways to keep quality at the proper level. We also discuss the best practices for maintaining software quality that the Jelvix team follows during product development for our customers.
https://redd.it/ng2ouo
@r_devops
Pulumi, do you use it and what's your preferred lang?
Just checking out Pulumi and yeah, I am slightly more motivated to get into this than Terraform. Are you happy with Pulumi and in what lang are you using it?
https://redd.it/nfyntd
@r_devops
To all experienced devops, how would you break apart microservices to support colocations?
I'm in a bit of a bind and am hoping the devops Jedis out there can guide me in the right direction.
TL;DR: Currently saturating my 940/35mbps line, but I have access to 1gb symmetrical fiber. Trying to figure out whether my microservices, with a message queue of 10k+ messages per second, will work going from the fiber to my connection, or whether I need to create a service to aggregate/de-aggregate messages because of the high volume.
What my setup looks like:
I built a hobby project in Golang using NSQ (similar to RabbitMQ or Kafka) as a message queue for microservices. Currently it runs locally on 2 machines. Machine 1 gets data from the internet and does heavy processing, then sends 100-1000 smaller messages (~1-250kb each) to machine 2, which holds the messages and writes them at regular intervals to databases like MongoDB and Elasticsearch. Machine 2 then sends new info to machine 1 and the cycle repeats.
The problem:
I have 4 more machines available to use; however, with only 1 machine fetching data I have already saturated 1/3 of the download and 100% of the 35mbps upload. Adding more hardware just increases latency, not data throughput.
What I would like to do:
I have a family member with 1gb symmetrical fiber who is happy to let me use their internet full time and stick a couple of machines there. I would like to set up all 5 machines there and have them report back to the database machine at my house.
What I need guidance on:
Right now I'm passing 100-1000 messages (1-250kb each) per second through my message queue between 2 local machines. But I think if I make it 5x bigger and try passing 5000 messages per second (~400mbps) from the symmetrical fiber to my house, there will be issues with data loss/communication because of how many there are. Is this right, wrong, or does it depend?
What I think might be a solution:
Create an additional microservice which aggregates the messages at the fiber location, makes a zip file, uploads it to my home database machine, unzips it, parses the messages, and then sends them off again locally.
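The aggregate-and-compress step can be sketched in a few lines. A minimal Python sketch (illustrative only — the actual project is Go with NSQ, and gzip-over-JSON here stands in for the zip-file step; the function names are made up): batch up roughly a second of traffic, compress it into one blob, ship it, and unpack on the other side. As a sanity check on the numbers, 5000 msg/s at an average of ~10 KB is ~400 Mbps, consistent with the estimate above.

```python
import gzip
import json

def pack_batch(messages):
    """Serialize a batch of message dicts and gzip-compress them into one blob."""
    return gzip.compress(json.dumps(messages).encode("utf-8"))

def unpack_batch(blob):
    """Reverse of pack_batch: decompress and deserialize back into messages."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

# Fiber site: accumulate ~1 second of queue traffic, then upload one blob.
# Home site: unpack_batch(blob) and re-publish each message to the local queue.
```

Shipping one large blob per second instead of thousands of small messages trades a little latency for far fewer connections and acks crossing the WAN, which is usually the real bottleneck at that message rate.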
Side notes:
Because of the volume of data and processing requirements, cloud is not an option. There are a dozen message queues that are tightly coupled between the database and the worker machine in a feedback loop, which can't be undone. I'm already aggregating messages at optimal points and am writing to the DB in batches. My family member's house is a 70-minute drive one way, and I really don't want to mess around with driving back and forth while getting it working initially. Both of us also have static IPs.
Thoughts, suggestions, ideas?
Thanks in advance, this is completely unknown territory for me and every little bit helps.
https://redd.it/nfzqz2
@r_devops
Cache MySQL database locally
Looking for a solution where I could have a remote MySQL database and keep a replica of it locally, so that if the remote database goes down, results would be served from the locally cached copy.
I was under the impression that ProxySQL is capable of doing that, but after enabling the cache and turning off the remote database it just spits out errors that the backend is unreachable.
Any ideas?
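ProxySQL's query cache isn't designed to serve results while every backend is down, so one alternative is to keep a local replica via ordinary MySQL replication and do the failover at the application layer. A minimal sketch of that logic, with the connection factories left abstract (in practice they would be something like `mysql.connector.connect(host=...)` for each side — the names here are illustrative):

```python
def query_with_fallback(connect_primary, connect_replica, sql, params=()):
    """Run a read query against the remote primary; if connecting to it
    fails, fall back to the local replica (which may be slightly stale)."""
    try:
        conn = connect_primary()
    except Exception:
        conn = connect_replica()  # remote unreachable: serve local data
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        conn.close()
```

The trade-off is staleness: while the remote side is down, the replica can only serve data as fresh as the last replication event it received.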
https://redd.it/nh41aw
@r_devops
Confused on how to write my tagging stage on Jenkins script
Hi all, I am working on the tagging stage of my pipeline. I am confused about how to get the following and then append them so I can tag on Bitbucket:
1. the version number in package.json
2. the build number
3. the branch name
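Jenkins exposes the last two as the `BUILD_NUMBER` and `BRANCH_NAME` environment variables (the latter in multibranch pipelines), and the version can be read out of package.json. A sketch of composing the tag in Python — the tag format below is just one possible convention, and in a Jenkinsfile the same few lines are typically done in `sh` or Groovy:

```python
import json
import re

def build_tag(package_json_text, build_number, branch):
    """Compose a tag like <version>-<branch>-<build> from the raw contents
    of package.json plus the CI-provided build number and branch name."""
    version = json.loads(package_json_text)["version"]
    # Branch names like "feature/login" contain characters awkward in tags.
    safe_branch = re.sub(r"[^A-Za-z0-9._-]", "-", branch)
    return f"{version}-{safe_branch}-{build_number}"
```

The pipeline stage would then run `git tag <tag>` and `git push origin <tag>` against the Bitbucket remote, authenticated with the job's credentials.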
https://redd.it/nh52sd
@r_devops
How to deploy a multi-container app?
Hi there, newbie developer here.
I am developing a learning application with Docker containers, with the idea of deploying the frontend React container to Vercel and the three backend containers (node API, postgres and redis) to Digital Ocean, using docker-compose and github actions for my deployments.
So far, I have only deployed single containers to Heroku with a couple of simple commands, and I have dabbled a bit with Nginx, but I have no clue about the procedure for deploying this kind of multi-container (and multi-host?) app. I have read (very little) about Kubernetes, but it feels overkill and overcomplicated for what I want to do, which makes me think it might not be the usual way of doing these things.
Any tips on the steps to follow, or a starting point to begin my research?
Cheers =)
https://redd.it/nh4ijy
@r_devops
Getting a repeatable build, every time
Hey DevOps fans, I spent a lot of time writing this article about best practices for managing build scripts in a growing organization. I'm hoping it will help someone get better at build engineering.
It's basically a collection of tips and tricks we learned over the years about how to make use of Makefile, Dockerfile, and Bash to make scripts understandable and repeatable.
Curious what you think! Feedback on how to improve the article is most welcome!
Article --> Getting a repeatable build, every time
https://redd.it/nh393b
@r_devops
Setting up server from scratch for hosting multiple web applications?
I am a developer, but I have to set up a Linux server from scratch for hosting dockerized web applications along with infrastructure components like ELK, databases, agents, etc. Off the top of my head it would be k8s, but I am interested to know what others would suggest.
https://redd.it/ngxvvc
@r_devops
I was a full-stack engineer doing DevOps tasks for about 3 years; I've become too confident about my DevOps skills so I decided to go into a DevOps career path, and now I don't understand what my role actually is in my new company.
Excuse my grammar, I'm not that fluent in English. Possibly, I may have expressed some of my words the wrong way.
Title says it all. As a full-stack dev, I was able to set up a lot of the things that DevOps would work on with the team. I was able to set up our infra (EKS in AWS) using Terraform, k8s resources, logging using the EFK stack, monitoring using Prometheus and Grafana, CI/CD using CircleCI working with Nexus and ECR repos, implement load tests using Gatling, enforce unit test coverage, troubleshoot when things go south, and help our QAs implement automated tests using CucumberJS.
I thought I was the man. I'm all knowing. I'm a Dev but I can do all this, I'm so powerful!!! And so I've decided to switch to a different company for a DevOps role, somehow I've managed to pass the interviews, finally a DevOps career!
And now that I've been at my new company for about 4 months, I've realized that most of the tasks I was doing at my previous company, I was doing because someone told me we needed such things. I was just implementing the tasks that my Lead created; I didn't actually know the reason why we needed to implement them.
Now, I understand that I actually don't know a LOT about DevOps. I don't know how to figure out what my current team actually needed. I feel like I've fucked up because now people in my team would expect me to know what to improve in our services, in our processes, and our best practices. I don't know how to do all of those!!!
I only knew how to implement tools, I didn't know that I was supposed to be the one to figure out what to improve. The good thing is, I'm not alone, we have another DevOps engineer in our team, and he's basically doing all the planning/investigating for improvements for me right now. Now I feel like I'm making things harder for him after joining the team.
So.. I've found this sub, created my account, so I can rant about this and accept all the shame. Lol.
But, also thanks to this sub, I found out about the books I can read and courses I can take to be better at the things that I currently suck at.
https://redd.it/ngvl76
@r_devops
JFrog Artifactory DR setup
Hi all, looking for advice on how fellow Artifactory users manage their disaster recovery setup.
I am currently using Artifactory 6.x, I have 4+ million artifacts at over 9TB disk usage. I have an active cluster with 2 nodes running in a datacenter in 1 part of the country and 2 more nodes running passive in a datacenter in another part of the country for DR. I am utilizing JFrog's Mission Control DR functionality to replicate all our repos from site 1 to site 2 for DR. I am preparing to upgrade to Artifactory 7.x but in 7.x they have removed the DR functionality from the Mission Control product.
My current thought for replacing this functionality is to rsync the filestore and log-ship the Artifactory Postgres database. I have not tested this thoroughly yet, but I think it would work; I just wouldn't have online nodes running. Bringing the database online and changing the URL to point to the DR load balancer could be a simple Ansible playbook.
Does anyone on this subreddit have a similar setup and is willing to share DR ideas for Artifactory 7.x?
Thanks!
https://redd.it/nh2xqz
@r_devops
Is there way to run jenkins blue ocean pipeline remotely through url?
Hi,
I have a problem using Jenkins Blue Ocean.
A normal Jenkins job can be built remotely via URL with an authentication token and build parameters, but the Blue Ocean pipeline configuration doesn't have any remote build options.
Is there a way to run a Jenkins Blue Ocean pipeline remotely through a URL?
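Blue Ocean is only a UI layer on top of ordinary pipeline jobs, so if the underlying job still exposes the classic "Trigger builds remotely" token option, the usual remote-build URL should work regardless of which editor created the pipeline. A sketch of composing that URL (the endpoint layout is the classic Jenkins one; actually firing it is an authenticated POST, e.g. `requests.post(url, auth=(user, api_token))`):

```python
from urllib.parse import urlencode

def remote_build_url(base_url, job, token, params=None):
    """Compose the classic Jenkins remote-trigger URL:
    <base>/job/<job>/buildWithParameters?token=<token>&PARAM=value"""
    query = {"token": token, **(params or {})}
    return f"{base_url.rstrip('/')}/job/{job}/buildWithParameters?{urlencode(query)}"
```

For jobs without parameters, the `/job/<job>/build?token=<token>` endpoint works the same way.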
https://redd.it/ngty43
@r_devops
Legacy Application Modernization: 7 Alternative Ways to a Digital Future
Most technology products have a life cycle of only five years, Flexera says. Outdated technologies then become a severe IT issue that almost all organizations eventually face. Antiquated IT systems generate bugs, errors, and critical issues with a domino effect that must be eliminated.
Read why legacy applications modernization is so essential and choose the right way to upgrade your legacy technology.
With companies spending 60-80% of their IT budget supporting legacy systems and applications, 44% of CIOs rightly believe that complex legacy software is slowing business digital transformation.
Gartner says that for every dollar spent on digital innovation, three dollars are spent on upgrading applications. And this disproportionate amount of money wasted on keeping legacy systems afloat could be an investment in further development. Therefore, many companies are looking for ways to reduce the dependency on legacy technologies and move forward into the future.
https://redd.it/ngxgz3
@r_devops
Which framework activities are completed when following an evolutionary (or spiral) user interface development process?
I need to know more about the framework activities completed when following an evolutionary (or spiral) user interface development process. Can someone please help?
https://redd.it/ngwlff
@r_devops
Ansible 4 is here!
Ansible 4.0 (with ansible-core 2.11) is finally out!
https://groups.google.com/g/ansible-devel/c/AeF2En1RGI8
https://redd.it/ngmrq7
@r_devops
Disaster Recovery Plan (DRP) - doing it in-house
Good day, community. I'm currently working at a smallish company as a junior DevOps engineer. In a previous life, as a business analyst for bigger corporates, I was exposed to DR testing and DR planning, but for those bigger corporates the approach was always to outsource DR to specialist companies. At my current company we are planning to do it in-house (DIY). I just wanted to know whether you do the same at your company, and whether there is any specific documentation/software you used to 1) document the DRP approach and 2) test the plan (from an automation perspective, using Ansible for instance). Our stack is fairly open source: Docker/Python/Ansible/GitLab/Postgres/Redis/Linux (Ubuntu servers and VMs). Any general feedback and advice would be appreciated. TIA.
https://redd.it/nhig0l
@r_devops
Tutorial 51 - Methods With The Same Name In GO | Golang For Beginners
Watch the video tutorial on creating methods in Golang with the same name, premiering today at 10AM IST on my YouTube channel, Brainstorm Codings.
If you find it interesting, make sure to subscribe to the channel, like the video, comment your thoughts, and share.
https://redd.it/nhhjbj
@r_devops
AWS service with CI
Has anyone hooked up their AWS services with CI? I use EC2 and S3 quite a lot, and I deploy using the awscli. I was curious whether anyone runs testing (however you do it) and then deploys an EC2 instance using the awscli?
https://redd.it/nhcubs
@r_devops
Opinion Kubik: language to define validation rules
I'm working on a language for defining validation rules. The purpose is to validate Kubernetes and other cloud configurations.
In this post I'm trying to collect opinions on the overall syntax. The entire doc is in the README.md; the examples there use real-life cases (no k8s, etc.).
https://github.com/kubevious/kubik
Thank you!
https://redd.it/nhcccx
@r_devops
Need help updating dependency on Lambda
I'm using a Python (3.8) runtime on AWS Lambda, and one of the packages I'm using requires OpenSSL 1.1.1+, but the Amazon Linux instance used by Lambda has an older version (1.0.2k).
I've read about people doing this for Node.js runtimes (Stack Overflow link), but I'm too much of a noob to fully understand it.
How can I achieve this? I'm already using Lambda Layers to update Python libraries, but no idea how to do this for native Linux ones.
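One detail worth knowing: a layer can ship newer `libssl`/`libcrypto` shared objects under a `lib/` directory, since layers unpack to `/opt` and `/opt/lib` is on the runtime's `LD_LIBRARY_PATH`. Either way, it helps to verify from inside the function what is actually linked. The interpreter can report the OpenSSL its `ssl` module was built against (a native package may still load a different library, so treat this as a sanity check rather than proof):

```python
import ssl

def openssl_at_least(minimum=(1, 1, 1)):
    """True if the OpenSSL linked by the interpreter's ssl module is >= minimum."""
    return ssl.OPENSSL_VERSION_INFO[:3] >= minimum

def handler(event, context):
    # On older Amazon Linux runtimes this reports something like "OpenSSL 1.0.2k-fips".
    return {"openssl": ssl.OPENSSL_VERSION, "new_enough": openssl_at_least()}
```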
https://redd.it/nhc6ib
@r_devops
Managing Binaries/Executables for Jenkins Agents
I'm getting a CI/CD server set up in Jenkins on Kubernetes, and I'm struggling to find good documentation around my issue. How do people manage executables required for a pipeline?
Some things that come to mind are the gcloud sdk and sops library for remote decryption of secrets, but I'm sure some other things could apply. So my question is this - what are the "best practice" ways of handling these things?
My initial thought was to create a custom image with all of the goods I need, but I hit a catch-22 of needing the gcloud SDK to access the image, because we store our images in GCP's Container Registry. Some other things I've read include creating permanent agents with the software you need included, but my current setup uses the Kubernetes plugin to dynamically create pods for agents and assign them to nodes in our GKE cluster.
So, I'd love to hear everyone's thoughts, experiences, and industry go-to's for the issue!
https://redd.it/nh8qaz
@r_devops
Sending request from react app served by nginx with ssl to node
Hi,
Any chance someone can help me with this question?
https://stackoverflow.com/questions/67610142/how-to-send-requests-to-a-nodejs-backend-from-a-react-app-served-by-nginx-with-s
https://redd.it/nh6zvp
@r_devops