I'm an IT student with a passion for cars: should I pursue automotive tech as a career or keep it as a hobby?
I'm a BS IT student and I absolutely love tech; I always have. But there's something I love even more, and that's cars. I was fortunate enough to have a computer since childhood, so I was able to work with them on both the hardware and software side, learn a lot, and get very good at it. There isn't much to do with computers on the hardware side, but I enjoy it more than software and programming. I'm a gamer too, and I love building gaming PCs.
Similarly, the idea of working with cars really excites me, and I want to pursue it. I love cars more than computers. Unfortunately, I've never had the chance to own or work on one, but I want to be able to.
I'm going to do a master's after my bachelor's, and I'm pretty set on specializing in a field of IT (DevOps/cloud), but I was wondering whether there's something like an automotive technician degree (I'm not interested in automotive engineering) or course that I could do.
Another idea I had was to continue my career in IT and pursue cars as a hobby: buy a car, learn to work on it, and over time move up to another car.
I really want to work with cars, and I really enjoy doing manual labor.
https://redd.it/1gb6ytt
@r_devops
Is there an argocd for cloud resources?
I was wondering whether something exists that offers state reconciliation and declarative configuration, but for cloud resources. Do you have any names?
https://redd.it/1gbc8on
@r_devops
GitOps Channels/Canary-like Rollouts
Dear DevOps Community,
We recently adopted Flux to manage our K8s infrastructure components on more than 200 clusters across different cloud vendors in a "GitOps" pull fashion.
TL;DR:
- How do you manage GitOps on your clusters? Are you using the multi-branch "channel" approach or another strategy?
- Is there maybe even a smart way to achieve something like controlled "canary-like" rollouts (10%... 30%... 60% of clusters...)?
So far so good, and Flux does its job:
When there's an update or a new feature to roll out, we branch off main, prepare the changes, and point the "flux source" of a few test clusters at that branch for testing, before merging back to main so it gets rolled out to all clusters.
When this is done, we point the "source" on our test clusters back to "main".
This works well for us, but the continuous changing and cleanup of test clusters (especially when multiple features are being developed at the same time), and having basically all clusters subscribe to the "main" branch only, always leaves a slight doubt about whether it could be done better.
Especially since we want to follow a pattern of small but frequent updates via GitOps.
Of course we could maintain, next to "main", some "branch channels" (i.e. "stable", "beta", "dev", "test/upgradeX", ...), but I'm afraid this would cause a mess from keeping all the branches up to date.
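For concreteness, the "channel" approach could be sketched as one Flux GitRepository per channel, with each cluster group pointed at its channel branch; the repository URL and names below are hypothetical:

```yaml
# Hypothetical sketch: clusters in the "beta" group subscribe to the beta
# branch instead of main; promotion is a merge from beta into stable.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 5m
  url: https://git.example.com/org/infra.git
  ref:
    branch: beta   # "stable" on the bulk of clusters, "dev" on test clusters
```

A Kustomize overlay or per-cluster-group variable can then select the branch, which avoids hand-editing the source on individual test clusters.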
Thanks for sharing your thoughts :)
https://redd.it/1gbddtk
@r_devops
Recruitment process & technical challenge
Hi there,
Recently, I participated in a recruitment process for a DevOps role at a company that provides services to other businesses. The initial contact was a nearly one-hour interview. After that, the recruiter sent me an email with instructions to sign up on their platform to complete three additional steps.
The first step was a 30-minute test designed to measure IQ, logic, and other abilities to assess if my profile fits with the company.
The second step involved answering several questions while being recorded.
The final step was a technical challenge where I was supposed to build a pipeline for a Node.js application with multiple stages and then deploy everything to Azure using Terraform. Additionally, it required setting up three environments—dev, stage, and prod—along with several rules for merging branches, setting up the branch strategy, etc.
For this final step, the instructions specified that it should take no longer than one hour, and I had to record all steps and explain each part. I decided to decline the process because of these time-consuming requirements. I'm very busy and can't afford to spend a lot of time on these tasks. Since no sandbox environment was provided, I would need to set up everything on my own, which adds significant time to the process. Similarly, there isn't an automatic platform for recording the video, meaning I'd have to handle that setup as well.
I'm curious to hear your opinions on recruitment processes that require extensive time commitments, such as lengthy technical challenges without providing necessary resources like sandbox environments or recording platforms. Do you usually participate in them, or do you also choose to decline? I'd appreciate hearing your thoughts.
https://redd.it/1gbeebs
@r_devops
What matters most in a mocking tool?
Ayo, doing some research. My team asked me what else would matter to me in a mocking tool, and obviously I care about whether it's fast and easy to mock, but I was struggling to think of what else would really be a 'game-changer' for me.
Hosted mocks are great, and dynamic vs. static mocking is nice too... but what else? What would make you care? What do you look for in a mocking tool?
https://redd.it/1gbfjhr
@r_devops
Jenkins vs. Tekton for OpenShift
Apologies if my question is stupid; I'm an SWE and far from an expert in DevOps.
We currently have our repos in Bitbucket Cloud and deploy them to OpenShift with Bamboo. Our team wants to move away from Bamboo, and the proposed alternatives are Jenkins and Tekton.
My gut feeling is that Tekton is more suitable for this use case, but I would appreciate any advice, especially pros and cons that should be considered. Thanks!
ETA: additional alternative suggestions are also more than welcome.
https://redd.it/1gbd8gl
@r_devops
How come containers don't have an OS?
I just heard today that containers do not have their own OS because they share the host's kernel. On the other hand, many containers are based on an image such as Ubuntu, Alpine, or SUSE Linux, although these are extremely light and not fully fledged OSes.
Would anyone enlighten me on which category containers fall into? I really can't understand why they wouldn't have an OS, since one should be needed to manage processes. Or am I mistaken here?
Should the process inside a container become a zombie or stop responding, whose responsibility is it to manage it: the container or the host?
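To make the zombie part of the question concrete: the host kernel schedules every process, but reaping dead children falls to the parent process inside the container (ultimately PID 1 of the container's PID namespace, which is why init shims like tini exist). A minimal sketch on plain Linux, no container required:

```python
import os
import time

# Fork a child that exits immediately; until the parent calls waitpid(),
# the kernel keeps the child's process-table entry around as a zombie.
pid = os.fork()
if pid == 0:
    os._exit(0)                   # child: exit right away
time.sleep(0.2)                   # parent: give the child time to exit
with open(f"/proc/{pid}/stat") as f:
    state = f.read().split()[2]   # third field of /proc/<pid>/stat is the state
print(f"child state before reaping: {state}")   # 'Z' means zombie
os.waitpid(pid, 0)                # reaping removes the zombie entry
```

Inside a container, the same rule applies: if the container's PID 1 never waits on its orphaned children, the zombies accumulate in that namespace, not on the host's init.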
https://redd.it/1gbi3kt
@r_devops
I've just been fired and am wondering whether to continue in DevOps
I came from a systems engineering background and spent the last two years in a DevOps role I was promoted into internally.
It was predominantly supporting a legacy Sitecore (.NET) workload running on Windows instances; we used TeamCity for builds and Octopus for deployments. The deployments were really long and clunky: five hours end to end, including testing.
We also ran some more typical DevOps stacks: Jenkins pipelines, deploying .NET Core applications into Fargate.
I'm in a position where I'm missing Kubernetes and some other core DevOps skills, due to not using industry-standard tools. I also found the work pretty overwhelming initially, which wasn't helped by what I considered a difficult co-worker. I'm not quite sure why I was fired, but it probably had something to do with my relationship with that co-worker, who is best friends with our boss; I was assured it was not a performance issue.
These are some of the behaviours that led to conflict. This being my first DevOps job, I don't know whether this is just expected behaviour, given the fast pace of the work:
- Making changes to our integration layer at 2am and not telling anyone.
- Making breaking changes to production pipelines, not telling anyone, then going on holiday. I start looking into the issue, then he pops up on Slack telling me the solution is easy and what to do, which I had done 40 minutes prior.
- Agreeing with me, then publicly disagreeing with me in front of the devs on Slack or to our boss.
- Generally going off and doing his own thing without documenting anything, leaving you to pick up integrations he was working on that failed in his absence.
- Messaging you about work on Teams at the weekend and, when you reply saying it's the weekend, replying that you didn't have to reply.
It would be good to get some feedback on how people collaborate with their co-workers, what they consider acceptable or not, and whether you think DevOps promotes a lot more conflict than other roles.
At this point, because I'm missing some core skills, I could invest time in skilling up and trying to get another role, but it also seems like the stress is not worth the money in the country I live in.
https://redd.it/1gbl2b4
@r_devops
New release: Jailer Database Tools
# Jailer Database Tools
Jailer is a tool for database subsetting and relational data browsing.
It creates small slices from your database and lets you navigate through your database following the relationships. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
The Subsetter creates small slices from your database (consistent and referentially intact) as SQL (topologically sorted), DbUnit records or XML. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
The Data Browser lets you navigate through your database following the relationships (foreign key-based or user-defined) between tables.
# Features
- Exports consistent and referentially intact row-sets from your production database and imports the data into your development and test environments.
- Improves database performance by removing and archiving obsolete data without violating integrity.
- Generates topologically sorted SQL-DML and hierarchically structured JSON, YAML, XML, and DbUnit datasets.
- Data browsing: navigate bidirectionally through the database by following foreign-key-based or user-defined relationships.
- SQL console with code completion, syntax highlighting, and database metadata visualization.
- A demo database is included, with which you can get a first impression without any configuration effort.
https://redd.it/1gbnhqe
@r_devops
PagerDuty not great for small teams?
Not sure if I'm missing something here, but it seems like PagerDuty really isn't built for smaller teams. I recently broke up what was more or less a monolithic escalation policy, where everyone on the schedule was on call all the time and issues could be escalated to the same person if they didn't ack, into smaller escalation policies and schedules of roughly three people each.
PagerDuty recommends creating a primary and a secondary schedule, but how is that supposed to work with three people? Ideally I'd define the primary, and the secondary would be defined as an offset of it: page the primary, then escalate to whoever is on deck to be on call next. It could work with the existing guidance, but everyone would have to be in both schedules, and the offset would have to be managed manually. And if someone overrides in the primary without making a similar override in the secondary, you could end up with the primary and secondary being the same person.
What I really want is an escalation policy that pages a team's schedule, escalates through everyone there first, and then hits my team as a backup. Right now, if the on-call for that team doesn't ack, it jumps straight to me and I have to manually kick it to the next person on the schedule.
Am I missing something, or does PagerDuty really just assume that a team has six-ish people with two full primary and secondary rotations?
https://redd.it/1gbn2dw
@r_devops
How do you track your deployments when doing configuration management?
We are currently discussing migrating away from our current tool stack, which consists of TFS (for political and financial reasons).
We use it to host our code and to build, create, and host our artifacts.
We can easily create a release with specific build artifacts and deploy it through agents using PowerShell.
We manage around 100 different customers. Each customer has between 2 and 4 'stages' (dev/int/prd, for example), and we have a total of 4,000 tests that get executed per deployment per customer.
In the end, almost half a million tests run to ensure that our artifacts are correctly installed and configured.
Since we need to migrate, we have been evaluating GitLab, but we realized that it is not as complete as TFS.
Especially the deployment part: it looks like GitLab is only intended for a smaller number of environments.
In addition, displaying the resulting tests, or even just the pipeline runs, really doesn't scale and definitely lacks some user-friendliness.
I was wondering how people in other places handle this type of scenario. I feel like we won't find a single similar product, and that it would be more of an aggregation of several products that would allow us to do this.
I would be curious to hear how you:
- Deploy stuff onto your environments (Ansible? DSC? Chef? Puppet? Something else?)
- Keep 'visual track' of what passed or failed, and where (nice-looking graphs with green and red)
Cheers
https://redd.it/1gbofud
@r_devops
Flox, a better alternative to Dev Containers
Hi my fellow DevOps,
I often have to set up dev environments for the teams and projects I work with, so I decided to write a short introduction to Flox, which really hits the spot, especially compared to Dev Containers.
➡️ https://medium.com/@pierre_49652/flox-better-alternative-to-dev-containers-d02e1a2ec423
Let me know what you think :)
https://redd.it/1gbpbzp
@r_devops
Retrieving TenantID and ClientID from the Service Connection
Hi there,
In short, I found some articles on the internet claiming that it should be possible to retrieve things such as the ClientId and TenantId from the Service Connection that you specify in your main.yaml.
This way I wouldn't have to put these into any variables file, or into the scripts themselves.
addSpnToEnvironment: true
$env:AZURETENANTID
$env:AZURECLIENTID
However, having put this into the main.yaml, I can't seem to use these variables.
When I use Write-Host, these variables come up empty.
Currently my main.yaml looks like this:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      .\SendMailMessage\SendMailMessage.ps1 -AccessToken $env:AZUREACCESSTOKEN -TenantId $env:AZURETENANTID -ClientId $env:AZURECLIENTID
  displayName: 'Send Email using Microsoft Graph and Service Connection'
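(Editor's note, hedged: as far as the AzureCLI@2 task docs describe it, addSpnToEnvironment exposes the credentials under the names servicePrincipalId, servicePrincipalKey (or idToken for workload identity federation), and tenantId, not AZURECLIENTID/AZURETENANTID. A sketch of that documented pattern; the SendMailMessage script is the poster's own:)

```yaml
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      # addSpnToEnvironment populates these variable names per the task docs:
      .\SendMailMessage\SendMailMessage.ps1 -TenantId $env:tenantId -ClientId $env:servicePrincipalId
```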
Does anyone know how exactly I can get these variables from the Service Connection into my PowerShell script?
Other than people (and Microsoft) mentioning that you can, I can't seem to find out how exactly.
Thanks in advance for anyone who can shed a light on this :-)
https://redd.it/1gbtznh
@r_devops
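One likely cause of the empty variables: with `addSpnToEnvironment: true`, the AzureCLI@2 task documents its own environment variable names, which are not the `AZURE_*` names — for a service-principal connection they are `servicePrincipalId`, `servicePrincipalKey` (or `idToken` under workload identity federation), and `tenantId`. A minimal sketch, reusing the connection name from the post and a hypothetical script path:

```yaml
# Sketch: addSpnToEnvironment exposes the connection's credentials
# to the script as $env:servicePrincipalId / $env:servicePrincipalKey
# (or $env:idToken) / $env:tenantId, per the AzureCLI@2 task docs.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      Write-Host "Tenant: $env:tenantId"
      .\SendMailMessage\SendMailMessage.ps1 -TenantId $env:tenantId -ClientId $env:servicePrincipalId
  displayName: 'Send Email using Microsoft Graph and Service Connection'
```

If the script needs a Graph access token instead, it can be fetched inside the task with `az account get-access-token --resource-type ms-graph`.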
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
Has anyone got a CISSP cert?
I am thinking about expanding my skill set and exploring some security engineering, I have a heavy sys admin and DevOps background, cloud experience and all the DevOps things. I am just wondering if anyone has any experience walking this path that I can learn from.
https://redd.it/1gbp3az
@r_devops
Canary deployment
I need help with an issue in a canary deployment using Flagger. Does anyone have hands-on experience with it? I need urgent assistance :(
https://redd.it/1gbwoii
@r_devops
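Since the post gives no details, a minimal reference `Canary` resource can at least serve as something to diff a broken setup against. This is a sketch with hypothetical names (`my-app`, the namespace) around the fields Flagger's analysis loop actually reads:

```yaml
# Minimal Flagger Canary sketch: shift traffic to the new version in
# stepWeight increments every interval, roll back after `threshold`
# failed metric checks.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app            # hypothetical app name
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # the Deployment Flagger controls
  service:
    port: 80
  analysis:
    interval: 1m          # how often metrics are evaluated
    threshold: 5          # failed checks before rollback
    maxWeight: 50         # max traffic sent to the canary
    stepWeight: 10        # traffic increment per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```

A common failure mode is the canary staying stuck in `Progressing` because the metrics provider returns no data for these queries, so checking `kubectl describe canary my-app` events is usually the first step.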
Canary deployment issue
I am facing an issue with a canary deployment using Flagger. I would really appreciate any suggestions. More about the issue in the comments.
https://redd.it/1gby1dj
@r_devops
Need handson projects for devops
I need help gaining hands-on experience with end-to-end pipelines, including Kubernetes, Terraform, Docker, and Jenkins/GitLab CI/CD. Please help me.
https://redd.it/1gbwlcp
@r_devops
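A common way to get that hands-on experience is to wire up the smallest possible end-to-end pipeline yourself and grow it. A sketch of a two-stage GitLab CI pipeline, assuming a hypothetical `my-app` Deployment already exists in the cluster (the `CI_REGISTRY_*` variables are GitLab's predefined ones):

```yaml
# Sketch: build and push a Docker image, then roll it out to Kubernetes.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind         # Docker-in-Docker for building images
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # assumes cluster credentials are configured for the runner
    - kubectl set image deployment/my-app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

From there, Terraform can be added to provision the cluster itself, which covers most of the stack listed in the post.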
Best practice for organizing test mocks/stubs in a monorepo?
I have a Turborepo monorepo with two apps - a React + Vite frontend and a Fastify REST API. All shared packages are configured as native ES modules (`"type": "module"`) and have `sideEffects: false` since they only contain types, schemas, and constants.
I need to add test mocks/stubs for my types and schemas, and I'm trying to decide the best way to structure this. Should they live next to their types, or in a separate testing package?
Here's what I mean:
Option 1: Co-located mocks
import { ApiResponse, apiResponseStub } from '@acme/contract';
import { User, userStub } from '@acme/database';
import { Config, configStub } from '@acme/common';
Option 2: Separate testing package
import { ApiResponse } from '@acme/contract';
import { User } from '@acme/database';
import { Config } from '@acme/common';
import {
apiResponseStub,
userStub,
configStub
} from '@acme/testing';
While co-locating stubs next to their types/schemas feels a lot easier, I have some concerns:
1. Tree-shaking reliability: Even with `sideEffects: false`, can I trust that test code won't leak into production builds?
2. Package structure: If I go with a separate testing package, how should I organize it?
Appreciate any input I can get on this :)
https://redd.it/1gc0qst
@r_devops
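One pattern that addresses the tree-shaking concern while keeping stubs co-located is a dedicated subpath export per package, so production entry points never even resolve the stub module. A sketch of one package's `package.json`, with hypothetical paths:

```json
{
  "name": "@acme/contract",
  "type": "module",
  "sideEffects": false,
  "exports": {
    ".": "./dist/index.js",
    "./testing": "./dist/testing/index.js"
  }
}
```

Tests then import `apiResponseStub` from `@acme/contract/testing` while app code imports only from `@acme/contract`; leakage no longer depends on the bundler's tree-shaking being perfect, because the stub module is simply never in the production module graph.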
CLion with Docker toolchain: "The file does not belong to any project target; code insight features may not work properly"
I'm trying to adapt my development workflow to make use of Docker containers for local development.
I'm having a hell of a time trying to get CLion configured correctly. The full details of the problem are posted on Stack Overflow if anyone is interested in contributing.
See StackOverflow post
https://redd.it/1gc2fkd
@r_devops
Stack Overflow
CLion with Docker toolchain: "The file does not belong to any project target; code insight features may not work properly"
I'm trying to set up a local development environment for my C++-based project using Docker and CLion. I want CLion to recognize the libraries installed inside the Docker container and provide full ...
Trace your application with OpenTelemetry and Jaeger
Trace your application with OpenTelemetry and Jaeger
https://medium.com/@rasvihostings/trace-your-application-with-opentelemetry-and-jaeger-109fb0420b3b
#gke #k8s #openTelemetry #sre #observability #python
https://redd.it/1gc4c0t
@r_devops
Medium
Trace your application with OpenTelemetry and Jaeger
I’ll help you create three microservices with OpenTelemetry integration and deploy them to Google Kubernetes Engine (GKE).
Question for the devops folks
Dear DevOps Engineer, I have a question about deploying Docker images in Kubernetes. When I build an image, push it to a registry, and then pull it with Kubernetes, how does it get an IP address to make it accessible via a domain like www.example.com? Also, in my front end, I specify the API URL in an .env file. How can I know the correct API URL to use once it’s deployed to the cloud? I understand Kubernetes uses services, but could you explain how this setup works in a cloud environment?
https://redd.it/1gc6etv
@r_devops
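To sketch an answer to the question above: the pods themselves never get the public IP. A Service gives them a stable in-cluster address, an Ingress (or a cloud LoadBalancer Service) gets the external IP from the cloud provider, and a DNS record points the domain at that IP. A minimal sketch with hypothetical names:

```yaml
# The Service load-balances across the Deployment's pods; the Ingress
# maps a public hostname to that Service. A DNS A/CNAME record for
# api.example.com points at the Ingress controller's external IP.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api              # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 8080    # the container's listening port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

The frontend's `.env` then points at the public hostname (e.g. a hypothetical `VITE_API_URL=https://api.example.com`), typically baked in at build time per environment, since the in-cluster Service name is not reachable from the user's browser.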