Getting a repeatable build, every time
Hey DevOps fans, I spent a lot of time writing this article about best practices for managing build scripts in a growing organization. I'm hoping it will help someone get better at build engineering.
It's basically a collection of tips and tricks we learned over the years about how to make use of Makefile, Dockerfile, and Bash to make scripts understandable and repeatable.
Curious what you think! Feedback on how to improve the article is most welcome!
Article --> Getting a repeatable build, every time
https://redd.it/nh393b
@r_devops
Setting up server from scratch for hosting multiple web applications?
I am a developer, but I have to set up a Linux server from scratch for hosting Dockerized web applications along with infrastructural things like ELK, databases, agents, etc. Off the top of my head I was thinking k8s, but I am interested to know what others would suggest.
https://redd.it/ngxvvc
@r_devops
I was a full-stack engineer doing DevOps tasks for about 3 years; I've become too confident about my DevOps skills so I decided to go into a DevOps career path, and now I don't understand what my role actually is in my new company.
Excuse my grammar, I'm not that fluent in English. Possibly, I may have expressed some of my words the wrong way.
Title says it all. As a full-stack dev, I was able to set up a lot of the things that DevOps would work on with the team. I was able to set up our EKS infra in AWS using Terraform, k8s resources, logging using the EFK stack, monitoring using Prometheus and Grafana, CI/CD using CircleCI working with Nexus and ECR repos, implement load tests using Gatling, enforce unit test coverage, troubleshoot when things go south, and help our QAs implement automated tests using CucumberJS.
I thought I was the man. I'm all knowing. I'm a Dev but I can do all this, I'm so powerful!!! And so I've decided to switch to a different company for a DevOps role, somehow I've managed to pass the interviews, finally a DevOps career!
And now that I'm about 4 months into my new company, I've realized that most of the tasks I was doing at my previous company, I was doing because someone told me we needed them. I was just implementing the tasks that my lead created; I didn't actually know the reasons why we needed to implement those things.
Now I understand that I actually don't know a LOT about DevOps. I don't know how to figure out what my current team actually needs. I feel like I've fucked up, because now people on my team expect me to know what to improve in our services, our processes, and our best practices. I don't know how to do all of that!!!
I only knew how to implement tools; I didn't know that I was supposed to be the one figuring out what to improve. The good thing is, I'm not alone: we have another DevOps engineer on our team, and he's basically doing all the planning/investigating for improvements for me right now. I feel like I'm making things harder for him since joining the team.
So.. I've found this sub, created my account, so I can rant about this and accept all the shame. Lol.
But, also thanks to this sub, I found out about the books I can read and courses I can take to be better at the things that I currently suck at.
https://redd.it/ngvl76
@r_devops
JFrog Artifactory DR setup
Hi all, looking for advice on how fellow Artifactory users manage their disaster recovery setup.
I am currently using Artifactory 6.x, I have 4+ million artifacts at over 9TB disk usage. I have an active cluster with 2 nodes running in a datacenter in 1 part of the country and 2 more nodes running passive in a datacenter in another part of the country for DR. I am utilizing JFrog's Mission Control DR functionality to replicate all our repos from site 1 to site 2 for DR. I am preparing to upgrade to Artifactory 7.x but in 7.x they have removed the DR functionality from the Mission Control product.
My current thought for replacing this functionality is to rsync the filestore and log-ship the Artifactory Postgres database. I have not tested this thoroughly yet, but I think it would work; I just wouldn't have online nodes running. Bringing the database online and changing the URL to point to the DR load balancer could be a simple Ansible playbook.
Does anyone on this subreddit have a similar setup and is willing to share DR ideas for Artifactory 7.x?
Thanks!
https://redd.it/nh2xqz
@r_devops
Is there a way to run a Jenkins Blue Ocean pipeline remotely through a URL?
Hi,
I have a problem using Jenkins Blue Ocean.
A normal Jenkins job can be built remotely via a URL with an authentication token and build parameters,
but the Blue Ocean pipeline configuration doesn't have any remote build options.
Is there a way to run a Jenkins Blue Ocean pipeline remotely through a URL?
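For reference, the classic remote trigger looks like the sketch below; since Blue Ocean pipelines are ordinary pipeline jobs underneath, the same buildWithParameters endpoint should still work for them. The host, job name, and credentials are all placeholders:

```shell
# Placeholder values: substitute your own Jenkins host, job name, and API token.
JENKINS_URL="https://jenkins.example.com"
JOB="my-pipeline"
AUTH="alice:MY_API_TOKEN"

# buildWithParameters triggers a parameterized build; parameters are passed
# as query-string arguments.
TRIGGER_URL="${JENKINS_URL}/job/${JOB}/buildWithParameters?BRANCH=main"
echo "${TRIGGER_URL}"

# Actual trigger (requires network access to the Jenkins instance):
# curl -X POST --user "${AUTH}" "${TRIGGER_URL}"
```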
https://redd.it/ngty43
@r_devops
Legacy Application Modernization: 7 Alternative Ways to a Digital Future
Most technology products have a life cycle of only five years, according to Flexera. Outdated technologies then become a severe IT issue that almost all organizations eventually face. Antiquated IT systems generate bugs, errors, and critical issues with a domino effect, and these must be eliminated.
Read why legacy application modernization is so essential and choose the right way to upgrade your legacy technology.
With companies spending 60-80% of their IT budget supporting legacy systems and applications, 44% of CIOs rightly believe that complex legacy software is slowing business digital transformation.
Gartner says that for every dollar spent on digital innovation, three dollars are spent on upgrading applications. And this disproportionate amount of money wasted on keeping legacy systems afloat could be an investment in further development. Therefore, many companies are looking for ways to reduce the dependency on legacy technologies and move forward into the future.
https://redd.it/ngxgz3
@r_devops
What framework activities are completed when following an evolutionary (or spiral) user interface development process?
I need to know more about which framework activities are completed when following an evolutionary (or spiral) user interface development process. Can someone please help?
https://redd.it/ngwlff
@r_devops
Ansible 4 is here!
Ansible 4.0 (with ansible-core 2.11) is finally out!
https://groups.google.com/g/ansible-devel/c/AeF2En1RGI8
https://redd.it/ngmrq7
@r_devops
Disaster Recovery Plan (DRP): doing it in-house
Good day, community. I'm currently working at a smallish company as a junior DevOps engineer. In a previous life, as a business analyst for bigger corporates, I was exposed to DR testing and DR planning, but because those were bigger corporates, the way they handled it was always to outsource DR to specialist companies. In my current company we're planning to do it in-house (DIY). I just wanted to know: are you doing the same at your company, and is there any specific documentation or software you used to 1) document the DR/DRP approach and 2) test the plan (from an automation perspective, using Ansible for instance)? Our stack is fairly open source: Docker, Python, Ansible, GitLab, Postgres, Redis, and Linux (Ubuntu) servers and VMs. Any general feedback and advice would be appreciated. TIA.
https://redd.it/nhig0l
@r_devops
Tutorial 51 - Methods With The Same Name In GO | Golang For Beginners
Watch the video tutorial on creating methods with the same name in Golang, premiering today at 10 AM IST on my YouTube channel, Brainstorm Codings.
Make sure to subscribe to the channel if you find it interesting, like the video, comment your thoughts, and share.
https://redd.it/nhhjbj
@r_devops
AWS service with CI
Has anyone hooked up their AWS services with CI? I use EC2 and S3 quite a lot, and I deploy using the AWS CLI. But I was curious whether anyone runs tests (however you do it) and then deploys an EC2 instance using the AWS CLI?
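One common shape for this is a small gate script in CI: run the tests, and only call the AWS CLI if they pass and a deploy is explicitly requested. Everything below (bucket name, AMI ID, instance type, the CI_DEPLOY flag) is a placeholder sketch, not a prescribed setup:

```shell
set -eu
# Hypothetical CI gate script: test first, deploy only on success.

run_tests() {
  # replace with your real test command (pytest, npm test, ...)
  echo "running tests"
  TESTS_PASSED=yes
}

deploy() {
  # sync static assets to S3, then launch an EC2 instance from a prebaked AMI
  aws s3 sync ./build "s3://my-app-bucket" --delete
  aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro --count 1
}

run_tests
if [ "${CI_DEPLOY:-no}" = "yes" ] && [ "${TESTS_PASSED}" = "yes" ]; then
  deploy
else
  echo "skipping deploy (set CI_DEPLOY=yes to deploy)"
fi
```

With `set -eu`, any failing test command aborts the script before the deploy branch is ever reached.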
https://redd.it/nhcubs
@r_devops
Opinion Kubik: language to define validation rules
I'm working on a language for defining validation rules. The purpose is to validate Kubernetes and other cloud configurations.
In this post I'm trying to collect opinions on the overall syntax. The entire doc is in the README.md. The examples there use real-life cases (no k8s, etc.).
https://github.com/kubevious/kubik
Thank you!
https://redd.it/nhcccx
@r_devops
Need help updating dependency on Lambda
I'm using a Python (3.8) runtime on AWS Lambda, and one of the packages I'm using requires OpenSSL 1.1.1+, but the Amazon Linux instance used by Lambda has an older version (1.0.2k).
I've read people doing this for NodeJS runtimes (Stack overflow link), but I'm too much of a noob to fully understand this.
How can I achieve this? I'm already using Lambda Layers to update Python libraries, but no idea how to do this for native Linux ones.
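One approach, sketched below with the layer name, package name, and library paths as assumptions to verify: Lambda unpacks layers under /opt, and /opt/lib is on the runtime's LD_LIBRARY_PATH, so shared libraries placed under lib/ in a layer zip get picked up by native extensions. The build has to happen on Amazon Linux so the binaries match the runtime:

```shell
# Layer layout: shared objects go under lib/ so they land in /opt/lib.
mkdir -p layer/lib

# Build/copy the libraries inside an Amazon Linux container so they match the
# Lambda runtime (requires Docker; shown here as the intended step):
# docker run --rm -v "$PWD/layer/lib":/out amazonlinux:2 bash -c \
#   'yum install -y openssl11-libs && cp /usr/lib64/libssl.so.1.1 /usr/lib64/libcrypto.so.1.1 /out/'

# Zip with lib/ at the archive root, then publish as a layer:
# (cd layer && zip -r ../openssl-layer.zip lib)
# aws lambda publish-layer-version --layer-name openssl-1-1 \
#   --zip-file fileb://openssl-layer.zip --compatible-runtimes python3.8

ls -d layer/lib
```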
https://redd.it/nhc6ib
@r_devops
Managing Binaries/Executables for Jenkins Agents
I'm getting a CI/CD server set up in Jenkins on Kubernetes, and I'm struggling to find good documentation around my issue. How do people manage executables required for a pipeline?
Some things that come to mind are the gcloud sdk and sops library for remote decryption of secrets, but I'm sure some other things could apply. So my question is this - what are the "best practice" ways of handling these things?
My initial thought was to create a custom image with all of the goods I need, but I reach the catch-22 of needing the gcloud SDK to access the image, because we store our images in GCP's Container Registry. Some other things I've read include creating permanent agents with the software you need included, but my current setup uses the Kubernetes plugin to dynamically create pods for agents and assign them to nodes in our GKE cluster.
So, I'd love to hear everyone's thoughts, experiences, and industry go-to's for the issue!
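On the GCR catch-22 specifically: it's the GKE kubelet, not the pipeline, that pulls the agent image, and it authenticates with the node service account. If that account has read access to the registry (the default on many GKE setups), a custom agent image in GCR can be referenced directly from the Kubernetes plugin's pod template. A sketch, with the image path as a placeholder:

```yaml
# Hypothetical pod template (usable via the Kubernetes plugin's raw YAML
# field): the node pulls the image, so the pipeline never needs gcloud for that.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: build-tools
      image: gcr.io/my-project/jenkins-agent-tools:latest  # gcloud SDK, sops, etc. baked in
      command: ["sleep"]
      args: ["infinity"]
```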
https://redd.it/nh8qaz
@r_devops
Sending request from react app served by nginx with ssl to node
Hi,
Any chance someone can help me with this question?
https://stackoverflow.com/questions/67610142/how-to-send-requests-to-a-nodejs-backend-from-a-react-app-served-by-nginx-with-s
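The usual answer to that question is to let nginx serve the static React build and reverse-proxy an /api/ prefix to the Node process, so the browser only ever talks to the TLS endpoint. A sketch, where the paths, hostnames, and backend address are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    root /usr/share/nginx/html;       # create-react-app build output
    index index.html;

    location / {
        try_files $uri /index.html;   # client-side routing fallback
    }

    location /api/ {
        proxy_pass http://node-backend:3000/;   # container/service name is an assumption
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```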
https://redd.it/nh6zvp
@r_devops
What does a DevOps engineer do? A different point of view
Recently I spoke with an IT recruiter. She said that a DevOps engineer is a sysadmin who knows a little backend and frontend development. I said instead that a DevOps engineer is not a sysadmin plus a developer, but a sysadmin who knows how to manage system infrastructure with code; I was referring to Python and Ansible above all.
Who is right? Let me know, thanks.
Maybe neither, lol.
https://redd.it/nh6lss
@r_devops
Does the Datadog monitor query support multiple tags with the same key?
I am using the Terraform datadog_monitor resource to deploy some monitors. One thing I came across is that the "query" parameter of the datadog_monitor resource only works with single tags, as shown below.
resource "datadog_monitor" "http_dev_test_demo" {
  # ...
  query = "'http.can_connect'.over('environment:dev').by('host','instance','url').last(5).count_by_status()"
  # ...
}
The below one does not work
resource "datadog_monitor" "http_dev_test_demo" {
  # ...
  query = "'http.can_connect'.over('environment:dev','environment:demo','environment:test').by('host','instance','url').last(5).count_by_status()"
  # ...
}
Has anyone come across this issue, or does anyone know a solution for this other than creating separate monitors for each environment?
https://redd.it/ngnrga
@r_devops
Restricting use of a certain Python library for developers
Imagine that we've learned the Python simplejson package has a major security vulnerability. The engineering teams have spent a few days replacing it with safer substitutes, so it's now secure and safe to ship. How would you ensure there are no regressions in the future? I.e., that no one adds the simplejson package back in.
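One low-tech enforcement that works regardless of stack: a CI step that greps dependency manifests for the banned package and fails the build if it reappears (pair it with a lockfile so transitive dependencies are visible too). A sketch, with a demo requirements file created inline:

```shell
# Demo input: a requirements file that accidentally reintroduces the package.
cat > requirements.txt <<'EOF'
requests==2.25.1
simplejson==3.17.2
EOF

BANNED="simplejson"
# Match the package name at the start of a line, followed by a version
# specifier, extras marker, or end of line.
if grep -qiE "^${BANNED}([=<>!;[ ]|$)" requirements.txt; then
  echo "banned package '${BANNED}' found"
  RESULT=fail   # in a real pipeline: exit 1 here to fail the build
else
  echo "no banned packages"
  RESULT=ok
fi
```

For deeper coverage you could also grep source files for `import simplejson`, though a manifest/lockfile check catches the common regression path.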
https://redd.it/nhujsz
@r_devops
Has anyone found success in switching to a night role?
Generally, this question is rooted in a lifelong struggle with ADHD, but for a long time (years) I've avoided the reality that I can work much better at night, focus better at night, and have zero need for medication at night, instead of living with the anxiety and guilt of struggling through each day in the workplace. I believe this may be a better solution than getting back on stimulants; I can't put my body through that anymore.
My question is, has anyone out there had this realization and switched to night hours or found a role fitting that description in the wild and if you did make that switch, did you find that it worked for you? Any unexpected tradeoffs? I can think of a few that might come up with communicating with daytime hour teams or mandatory early meetings when they happen, etc.
Typically I don't see this in job postings but people in this sub are probably used to working with staggered team schedules/international teams in different time zones anyway.
Thanks for humoring my question.
https://redd.it/ni7xw5
@r_devops
MidLevel DevOps Engineer Interviewing Sr DevOps Engineer
I work at a large company, and my manager asked me to conduct the “team member interview” portion of the hiring pipeline.
I’m a mid-level DevOps engineer with 2-3 years of experience and will be interviewing an applicant with 6 years of experience. I’m conducting the interview for our sister team, and I'm familiar with their tech stack, but I'm not sure how to “interview up,” as I’ve only ever interviewed interns and seasonal hires (college job, not tech).
Any Sr engineers or up-interviewers have advice?
Thanks guys!
PS: Love this subreddit
https://redd.it/ni33gl
@r_devops
Gitlab-CI: Passing version from one stage to the next
I'm running into a bit of an issue which I'm not sure I'm solving in the right way. This is for a personal project, basically to continue learning things about gitlab-ci, etc...
What I am trying to achieve is:
1. Commits pushed to master
2. Gitlab CI runs on master and runs tests, lint, whatever
3. If tests pass, a CI stage runs an automatic versioning tool (release-it, semantic-release, etc.), bumps the version number, and creates a commit with the updated package.json and CHANGELOG.md
4. The new version is then packaged into a docker image (tagged with new version, sentry release created with new version).
5. New docker image is pushed to deployment.
The problem is that the commit made in step 3 is not reflected in steps 4 and 5.
e.g.
Software is at 1.0.0 and I make some changes and run the pipeline. Step 3 runs and says, "cool, we can make this into 1.0.1", and makes a commit back to the repo.
Steps 4 and 5 run and bundle the software and deploy it, which still shows 1.0.0 on the front end, with the version from package.json and without the updated CHANGELOG.md which was created/updated during the pipeline.
I hope that makes sense, and I'm totally unsure if I'm approaching this the right way. Basically I want the pipeline to create the next version of the software and release it.
I've found a bunch of stuff on automatic semantic versioning, but nothing about carrying the new version forward through the pipeline.
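For carrying the version forward, GitLab's dotenv artifact report is one built-in mechanism: a job writes KEY=value lines to a file, and jobs in later stages receive them as variables. A sketch (the bump tool and registry path are assumptions; the key part is the artifacts:reports:dotenv block):

```yaml
stages: [version, build]

bump_version:
  stage: version
  script:
    # run your bump tool (release-it, semantic-release, ...) first, then
    # read the resulting version out of package.json
    - VERSION=$(node -p "require('./package.json').version")
    - echo "APP_VERSION=${VERSION}" >> build.env
  artifacts:
    reports:
      dotenv: build.env

build_image:
  stage: build
  script:
    # APP_VERSION is injected from the dotenv report of bump_version
    - docker build -t "registry.example.com/app:${APP_VERSION}" .
```

Note this passes the version string, not the bump commit itself; if later stages need the updated CHANGELOG.md, those files can be passed as regular artifacts from the same job.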
https://redd.it/nhyxq5
@r_devops