Advice for a student
Hello,
I don't know if this is the right place to ask my question, so sorry in advance.
I am currently a computer science student in my penultimate year, and I want to start a career as a DevOps engineer (after taking a year off).
I've already had the opportunity to learn a lot of the technologies related to the field (Docker, Terraform, Jenkins, ...), and I've come to wonder whether I should keep learning more of them, or whether my knowledge will have become outdated by the time I start working.
Should I already get certifications (AWS, Terraform, ...), or should I wait? Or should I wait until I'm ready to start working to get trained and certified?
Thank you in advance for your advice and feedback.
PS: if the post doesn't fit here, can you suggest a subreddit where I can ask my question?
https://redd.it/10mtymz
@r_devops
Anyone studying for RHCSA?
Studying for RHCSA. Anyone else doing this that wants to help keep each other accountable? Any discord servers that may have people studying for this?
https://redd.it/10nda8i
@r_devops
OneUptime: Open Source StatusPage.io Alternative
Hey r/devops,
I'm working on a project called OneUptime. It's an open-source StatusPage.io alternative, all of it MIT-licensed on GitHub. You can check the project out here: https://github.com/oneuptime/oneuptime
Please let me know what you think.
https://redd.it/10nfihi
@r_devops
Can anyone give tasks to practise in DevOps and AWS?
We are a group of self-learning DevOps students. We are unable to afford premium training programs. We want some people with real-world experience to guide us and give us simple-to-moderate tasks of the kind you perform as part of your DevOps job.
We have set up a free GCP account as a playground. Thank you in advance.
https://redd.it/10merfb
@r_devops
How long would it take you to deploy an ECS cluster in a brand new AWS account?
Recently I did some work for a client that included setting up an ECS cluster, a load balancer, and a database for the prod and dev environments. I also automated deployment using GitLab CI/CD.
My initial estimate was between 8 and 10 hours, but it took me longer.
Now I am just wondering: how long would it take someone else?
Just to note, the deployment was done through the GUI, which probably matters for the discussion.
https://redd.it/10medw7
@r_devops
How do you define SLO (and SLA) for a cloud platform
So we're starting to define our SLA. We're an AWS-based SaaS platform.
I have read the Google SRE book on SLIs/SLOs/SLAs, and the way to go is with request-based SLOs.
However, I'm confused about whether we should take into account the SLAs of the services we use in the backend.
As a simple example, if I'm running a web server on AWS and this server uses an RDS database, our web server cannot have an SLO better than the DB's SLA, can it?
If the SLA for the DB is 99.9%, our web server cannot have an SLO of 99.99%, isn't that right? If the uptime of the DB is 99.9%, the web server cannot have an uptime of 99.99%.
Or should I not take into account the services we use to serve the web server traffic?
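The intuition here checks out arithmetically: a request that must touch both the web tier and the DB succeeds only when both are up, so for serial dependencies the availabilities multiply, and the web tier cannot credibly promise more than the DB beneath it. A quick sketch:

```python
# Availability of serially dependent components multiplies.
web_slo = 0.9999  # the 99.99% we would like to promise
db_sla = 0.999    # the 99.9% RDS SLA underneath

composite = web_slo * db_sla
print(f"{composite:.4%}")  # → 99.8900%, below the 99.99% target
```

So unless the web tier can serve (or degrade gracefully) without the DB, its SLO is capped by the DB's SLA.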
https://redd.it/10nhle0
@r_devops
Are there any advanced Jenkins Scripted Pipeline tutorials available?
I've recently joined a new DevOps team, and they use scripted pipelines to an extent I have never seen before. My background consists of 2 years working with declarative pipelines and 1.5 years of Groovy. I've checked the Jenkins docs, and there seems to be only one small section dedicated to explaining scripted pipelines; there aren't many tutorials on the site either. Can anyone suggest any intermediate-to-advanced tutorials for scripted pipelines?
https://redd.it/10njapx
@r_devops
How to handle multiple log streams inside one container?
I have a single app service that runs inside a Docker container.
The service has five different log streams; all of them are important, and each has a different format, some JSON, some plain text. Right now, all of these streams are pointed at stdout, and it gives me a headache to handle, since the streams should be separated and handled differently.
I can easily configure the service to write its logs into separate files inside the container. But in that case, what would be the best way to read them? How would you handle this situation?
I am using plain Docker Swarm, if that's important.
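One common pattern is exactly the second option: write each stream to its own file on a shared volume and attach a log-shipper sidecar that tails the files, with one parser per file. A minimal Compose-style sketch (the app image, file names, and choice of shipper are all assumptions):

```yaml
version: "3.8"
services:
  app:
    image: my-app:latest              # hypothetical image; writes access.json,
    volumes:                          # audit.log, etc. under /var/log/app
      - app-logs:/var/log/app
  log-shipper:
    image: fluent/fluent-bit:latest   # any shipper that tails files and can
    volumes:                          # apply a per-file parser works here
      - app-logs:/var/log/app:ro
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro
volumes:
  app-logs:
```

The same idea carries over to Swarm services; the shipper (Fluent Bit, Filebeat, Promtail, ...) then routes each stream to a different output or index.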
https://redd.it/10nnoht
@r_devops
Using Kubernetes & MinIO. If anyone is able to assist me in finalising my MinIO deployment, I'd appreciate it
Every time I try accessing my MinIO console via the browser with a port-forward, the connection works briefly, with multiple connection messages of:
Handling connection for 9000
Handling connection for 42935
Handling connection for 42935
Handling connection for 42935
...
Then, a moment later, this error message appears:
E0128 18:22:01.801739 40952 portforward.go:378] error copying from remote stream to local connection: readfrom tcp6 ::1:42935->::1:50796: write tcp6 ::1:42935->::1:50796: write: broken pipe
before it finally starts spamming multiple messages of:
E0128 18:22:31.738313 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: Timeout occurred
Handling connection for 42935
E0128 18:22:32.120930 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: write tcp 192.168.0.16:50776->34.133.9.102:443: write: broken pipe
Handling connection for 42935
E0128 18:22:32.574837 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: write tcp 192.168.0.16:50776->34.133.9.102:443: write: broken pipe
...
Here's my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-deployment
  namespace: minio-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args:
            - server
            - /data
            - --console-address
            - ":42935"
          volumeMounts:
            - name: minio-pv-storage
              mountPath: /data
      volumes:
        - name: minio-pv-storage
          persistentVolumeClaim:
            claimName: minio-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio-ns
spec:
  selector:
    app: minio
  ports:
    - name: minio
      port: 9000
      targetPort: 9000
    - name: minio-console
      port: 42935
      targetPort: 42935
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  namespace: minio-ns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
I changed the minio-service type to LoadBalancer (from ClusterIP) to access the console via the browser, along with adding the --console-address flag and exposing the necessary port. This worked in that the MinIO console now shows, although it is in a constant loading state. If I try to log in, it just refreshes until crashing/timing out.
https://redd.it/10nt2e0
@r_devops
Datadog vs Graylog vs ELK
Hi all,
At my company, we use Datadog for metrics and Graylog for log management (and generally also derived metrics). Both our Datadog and Graylog setups are getting old, and we are thinking about upgrades.
All of our infra is on AWS, across a bunch of accounts. What we need is monitoring with metric dashboards and alerting (very important). We need basic stuff: CPU/mem/disk/traffic/IO/HTTP graphs. Our Graylog ingests around 5 GB of logs a day.
Now, I've been reading more about all 3 options (Datadog, Graylog, ELK) and I can't make up my mind (not enough hands-on experience) about what we should do (I've worked closely only with ES, but not for monitoring):
- use only Datadog and skip Graylog (I understand that Datadog can ingest logs as well)
- use only Graylog (but I don't see it being able to show basic OS metrics)
- use ELK, with Beats for metrics
- use AWS OpenSearch as above (though I know it's a fork of an older version)
- stick with just CloudWatch?
Or maybe there is some other system (Grafana/Loki, Splunk, ...) that you would recommend?
Most important requirements:
- metrics
- log management
- alerting (setting custom thresholds)
- the cheaper the better, but price is not an important factor
- nice to have: good integration with AWS
- nice to have: multi-user / fine-grained ACLs
- nice to have: integration with GitHub and the Atlassian stack
- nice to have: easy deployment (Docker preferred)
Thanks in advance for any comments/recommendations :)
https://redd.it/10ntvs1
@r_devops
Simple open-source PaaS that runs on Docker or docker-compose instead of OpenShift
Hi, in our work environment we want to build a PaaS platform, but OpenShift is heavy to run and we want a lighter alternative (with a migration to a Kubernetes-based PaaS later on). Any feedback on production-ready options you have used?
https://redd.it/10nkqgu
@r_devops
Tool for app testing?!
I don't know if this is the right subreddit for this question, but I have a network with many systems that are directly unsupported by the application our organization is working on, and I want to set up an in-network server for the testing application.
I've heard about Citrix and Winflector for my use case, but both cost a ton, and Citrix even has a complicated learning curve.
I'm looking for an easy-to-use solution, preferably open-source software, that works on Windows and macOS.
https://redd.it/10mq20v
@r_devops
How about a tool to convert .tf files to a PNG/design diagram?
I like Terraform, but I always want to know what my Terraform code looks like as a design graph.
I want to know how my Terraform code is represented in the cloud service:
- how resources are grouped under the network
- how resources are grouped under a region
- how resources are grouped under a resource group
- how much the resources cost
- verifying that our initial design matches the design the Terraform code generates
- etc.
If someone makes a tool for that, would you use it and pay for it?
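For the dependency-graph part, Terraform already ships a starting point: `terraform graph` emits Graphviz DOT for the current configuration, which `dot` can render to an image (this assumes Graphviz is installed; grouping by region/resource group and cost reporting would still need extra tooling):

```shell
# From an initialized Terraform working directory:
terraform graph | dot -Tpng > graph.png
```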
https://redd.it/10o16a5
@r_devops
(GitOps) Progressive Delivery Tools and Rollbacks to Git?
Hello,
In GitOps, Git is the source of truth. Let's imagine a situation where a deployment fails and the application is rolled back to the previous configuration and version.
Should the state in Git reflect that?
What do we do when we want to revert something in Git? We create a revert commit. Then the state in Git reflects the reality that the change did not work and was therefore reverted.
Are any of the GitOps tools aware of that?
An ideal GitOps pipeline would be:
- I commit a change to the repository (change of state in Git)
- the tool picks up the change and compares it with the state on the cluster
- the tool applies the state on the cluster, but the metrics say the change causes a failed state
- the tool rolls back to the previous version
- the tool notifies something that was tracking Git that this change does not work
- that something else prepares a revert PR in Git, with an explanation that this version failed due to X
- it's up to a human to merge the PR, thus restoring the Git state to again reflect what is true
Thoughts? Ideas?
It seems like we have a gap in the currently available tooling.
https://redd.it/10o4ux0
@r_devops
How to automate baremetal migration
My team wants me to set up automation for an application that runs on bare-metal/physical servers, since this migration happens every year.
Below are the steps I need to automate. How would you do it? What tools would you use?
1. Procure new bare-metal servers
2. Get IPs allocated
3. Get VIPs from the network team
4. Open firewall rules (ticket to the network team)
5. Configure Tomcat and install the application
6. Configure the database
7. Set up monitoring
8. Configure GSLBs to point to the new load balancers
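Steps 5-7 are classic configuration-management territory once the hosts exist; a hedged Ansible sketch for the Tomcat step (the inventory group, package name, and paths are all assumptions):

```yaml
# site.yml - run against the newly procured hosts
- hosts: new_baremetal
  become: true
  tasks:
    - name: Install Tomcat
      ansible.builtin.package:
        name: tomcat9
        state: present
    - name: Deploy the application WAR
      ansible.builtin.copy:
        src: files/app.war
        dest: /var/lib/tomcat9/webapps/app.war
      notify: Restart tomcat
  handlers:
    - name: Restart tomcat
      ansible.builtin.service:
        name: tomcat9
        state: restarted
```

Steps 1-4 (procurement, IP allocation, VIPs, firewall tickets) usually end up as API calls or tickets to other teams, so they are often only semi-automated.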
https://redd.it/10o3al5
@r_devops
Which CD solution would you use - if you had to start fresh?
If you were tasked to build a new K8s environment from scratch, what would you use for CD?
Considerations:
- minimal set-up time
- easy rollback
- cloud agnostic
- canary deployments
This is only part of the picture, of course - if you chose one of these CD tools, can you share what the rest of your set-up looks like?
https://redd.it/10o6i2m
@r_devops
What's the best practice for using a package your distro version doesn't support?
I am on Ubuntu 22.04 (Pop!_OS), which MongoDB 6.0 still doesn't support. Some people have suggested tinkering with the repo .list file, but that seems kind of off.
If I were in an organization with a tighter security protocol, how would I develop locally with MongoDB? I thought about running MongoDB in Docker, but I wanted to hear your thoughts.
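Running the exact server version in a container sidesteps the distro repositories entirely, which is also a common answer in locked-down environments, since nothing MongoDB-related touches the host package manager (the host data path here is an assumption):

```shell
# Pin MongoDB 6.0 regardless of what the host distro packages.
docker run -d --name mongo6 \
  -p 27017:27017 \
  -v "$HOME/mongo-data:/data/db" \
  mongo:6.0
```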
https://redd.it/10o995h
@r_devops
Am I missing something? (argo cd and helm in AWS)
My goal is simply to deploy helm charts for our applications via argo cd, but it seems harder than it should be. I’m not sure if I’m missing something but our environment can’t be uncommon.
We are using EKS and we have working helm releases - I was exploring simply moving from native helm to Argo applications. Our helm charts are stored via OCI in ECR.
The first thing I ran into is that there is no native integration from Argo to private ECR over OCI to pull charts. Several people have workarounds or cron jobs to refresh ECR tokens, but I'm not really looking to add hacks just to use Argo.
The second option was to make my charts public and apply the values file from the Git repo where our apps live. I immediately found that Helm repos and Git sources aren't meant to be mixed by Argo. They've only very recently added support for this, but it's basically still in beta.
So I’m left wondering.. what am I missing here? I understand that these things are being addressed and there are ways to make it happen but how is everyone else doing this? How are you applying helm charts with private values files with Argo? Is everyone just using artifactory or harbor and I’m in the minority?
I get the sense Argo was made for kustomize and helm support was bolted on after. Which makes sense.. I guess helm isn’t really “gitops”.
https://redd.it/10o97jo
@r_devops
Microservices Authentication: SAML and JWT
I have the following problem: I want to create an authentication concept for a microservices environment. External requests by users go through an API gateway. User authentication and transfer of user context inside the platform should be done via JWTs.
A user should be able to authenticate to the platform via SAML. How could this be enabled?
I am aware that exchanging a SAML token for a JWT is not possible, or at least very difficult. Would it be an option not to return a JWT to the user, but to generate one on the gateway after successful authentication and attach it to the user's request?
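Minting the JWT at the gateway after a successful SAML exchange is a standard pattern: the gateway validates the IdP's assertion, then issues its own short-lived token carrying the user context. A stdlib-only sketch of the minting half (the claim names and shared secret are assumptions; in practice a JWT library and asymmetric keys are preferable):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(subject: str, secret: bytes, ttl: int = 300) -> str:
    """Gateway-side: after the SAML assertion is validated,
    mint a short-lived HS256 JWT carrying the user context."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"sub": subject,
                                "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{claims}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{signature}"

token = mint_jwt("alice@example.com", b"gateway-shared-secret")
print(token.count("."))  # a JWT is three dot-separated segments → prints 2
```

The gateway then attaches the token to the forwarded request (e.g. in the Authorization header), so downstream services only ever deal with JWTs.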
https://redd.it/10o8yzd
@r_devops
Jenkins: using a variable in a withCredentials block?
Hi guys,
I can't find how to use a variable for the credentialsId in a withCredentials block. I want to use the same Jenkinsfile for all branches, each with different credentials, so I need this to work.

withCredentials([usernamePassword(credentialsId: 'GITHUBCREDENTIALS', passwordVariable: 'GIT_PASS', usernameVariable: 'GIT_USER')])

I have tried these variants:

'$GITHUBCREDENTIALS'
'${GITHUBCREDENTIALS}'
'"${GITHUBCREDENTIALS}"'
'"'${GITHUBCREDENTIALS}'"'

https://redd.it/10ob2el
@r_devops
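For what it's worth, `credentialsId:` accepts any Groovy expression, so a plain variable works directly; the quoted `'$GITHUBCREDENTIALS'` attempts fail because single-quoted Groovy strings are never interpolated. A hedged sketch (the credential IDs and repo URL are hypothetical):

```groovy
// Branch-to-credentials mapping; IDs here are hypothetical.
def credsByBranch = [main: 'github-creds-prod', develop: 'github-creds-dev']
def credsId = credsByBranch[env.BRANCH_NAME] ?: 'github-creds-dev'

withCredentials([usernamePassword(credentialsId: credsId,
                                  usernameVariable: 'GIT_USER',
                                  passwordVariable: 'GIT_PASS')]) {
    // Single-quoted shell string: the shell, not Groovy, expands the secrets.
    sh 'git clone https://$GIT_USER:$GIT_PASS@github.com/example/repo.git'
}
```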
Is it possible to share the checkout and setup result for next jobs?
I'm fairly new to GitHub Actions and started with this workflow:
name: QA on pull request
on: pull_request
jobs:
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      - name: Run tests
        run: make test
  build-application:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      - name: Build application
        run: make build
I want to run both jobs in parallel so the build job doesn't have to wait for the tests to finish. But as you can see, both of them have to check out the repository and set up Go.
Is it possible to share this step or even share the result? This is my pseudo solution
name: QA on pull request
on: pull_request
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      # share all the data from here
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Import data from setup job
        # Maybe as artifact?
      - name: Run tests
        run: make test
  build-application:
    runs-on: ubuntu-latest
    steps:
      - name: Import data from setup job
        # Maybe as artifact?
      - name: Build application
        run: make build
If this is not possible, can I extract the duplicate logic into a "function" I can call twice so I don't have to write the logic in every job?
https://redd.it/10o4rnk
@r_devops
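One relevant constraint: jobs run on separate runners, so the checked-out workspace itself can't be shared between them. The usual options are caching or factoring the repeated steps into a local composite action that each job calls. A hedged sketch of the latter (the action path is an assumption; checkout has to stay in each job, because the local action is only available after checkout):

```yaml
# .github/actions/setup/action.yml
name: Set up Go
description: Shared Go setup for all jobs
runs:
  using: composite
  steps:
    - uses: actions/setup-go@v3
      with:
        go-version: 1.19
```

Each job then becomes:

```yaml
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: ./.github/actions/setup
      - run: make test
```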