What should I do as a DevSecOps engineer?
I was recently recruited by this govt organization on the pretext of software development (which I am familiar with).
However, following the orientation programme, I was assigned the role of devsecops.
Now, because most senior management in government organisations aren't particularly knowledgeable about technology, they rely on various private firms to provide services such as code or infrastructure. Each of these firms has its own DevOps pipeline, and they don't give a damn about you if you aren't from their own firm.
So, guys, please point me in the right direction, because even though they aren't teaching us much, the expectations are high
https://redd.it/ykunjx
@r_devops
AWS Elastic Beanstalk not refreshing logs
Coming from Heroku, I tried to deploy my web application on AWS Elastic Beanstalk. The application runs fine; however, the logs are "stuck" at a certain timestamp and are not refreshed. Even when I download all logs and restart the application, the old logs are shown.
Did I run into some quota? Do I have to set up another service?
https://redd.it/ynshjw
@r_devops
Identity and Access management for DevOps tools
I wonder how to get secure access to all my DevOps tools. Some of these tools may use my AD or Okta groups to provide access. Regardless of these IAM tools, I see DevOps folks using shared credentials and sharing tokens manually. I feel this is a huge security gap. I am curious to learn whether every DevOps persona handles shared credentials and tokens manually (by choice or because of the ecosystem they work within), and what the reasoning is behind it.
https://redd.it/ynurfv
@r_devops
Tool for visualizing your backend, not just cloud infra
Hey there,
I was wondering if there is a tool that lets you visualize your backend at a higher level than just the cloud. Something that pulls info from my GitHub + AWS and shows things like:
* what API endpoints a microservice calls?
* what tables a service uses?
* what's the format of the messages passed between different services?
I could then interact with the nodes to make queries like:
* what are the last n calls made from one service to another?
* what are the current waiting messages in an async message queue?
I know there are tools like Cloudcraft and Lucidscale that automatically create diagrams of your cloud infra, but they're usually limited to cloud-level details (e.g. what kind of AWS instance a node is running).
Thanks!
https://redd.it/ynv1pu
@r_devops
Geo-routing with Apache APISIX
Apache APISIX, the Apache-led API Gateway, comes out of the box with many plugins to implement your use case. Sometimes, however, the plugin you're looking for is not available. While creating your own is always possible, it's not always necessary. Today, I'll show you how to route users according to their location without writing a single line of Lua code.
Read more
https://redd.it/ynvbq8
@r_devops
keycloak oauth2-proxy configuration
Hi guys,
I'm currently stuck on some configuration in my Kubernetes cluster. In my lab I want to configure oauth2-proxy to use Keycloak as an identity provider. I have everything ready, but when trying to log in via Keycloak it shows a 403 Forbidden error: "Login Failed: The upstream identity provider returned an error: invalid_scope"
Pod logs:
[2022/11/03 08:49:31] [oauthproxy.go:752] Error while parsing OAuth2 callback: invalid_scope
I've looked through the documentation and I don't see why it's complaining about the scopes, as I have them set correctly.
This is my oauth2-proxy values:
provider = "keycloak-oidc"
provider_display_name = "Keycloak"
cookie_domains = ".test.dev"
oidc_issuer_url = "https://keycloak.test.dev/auth/realms/test"
reverse_proxy = true
email_domains = [ "*" ]
scope = "openid profile email groups"
whitelist_domains = ["test.dev", ".test.dev"]
pass_authorization_header = true
pass_access_token = true
pass_user_headers = true
set_authorization_header = true
set_xauthrequest = true
cookie_refresh = "1m"
cookie_expire = "30m"
And in keycloak I have the oauth2-proxy client created with Groups and Audience mappers.
I see these errors in keycloak:
08:30:38,734 WARN [org.keycloak.events] (default task-43) type=LOGIN_ERROR, realmId=test, clientId=oauth2-proxy, userId=null, ipAddress=10.50.21.171, error=invalid_request, response_type=code, redirect_uri=https://oauth.test.dev/oauth2/callback, response_mode=query
08:34:11,933 ERROR [org.keycloak.services] (default task-41) KC-SERVICES0093: Invalid parameter value for: scope
If someone has experience with this and can point me in the right direction and tell me what I'm doing wrong, I would be very grateful.
Thank you
https://redd.it/ykwmrv
@r_devops
pre-commit vs pre-push vs CI/CD for linting and formatting?
So, I generally use commits as a saving mechanism, but after adding a linting and formatting pre-commit hook, I do find myself committing less often. While this does help me catch syntax errors, and I guess I could argue that my commits are cleaner, this does seem to be a bit inconvenient. I think part of it is breaking the mold of what I'm used to, but I also wonder if I would be more productive if I moved it to a pre-push, or even to part of my CI pipeline (running before my tests). Does anyone have any recommendations?
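If the checks move to push time, the hook is just a script under `.git/hooks`. A minimal sketch of a `pre-push` hook (the lint/format commands here are placeholders for whatever your project actually runs):

```shell
#!/bin/sh
# .git/hooks/pre-push - runs once per push instead of on every commit.
# A non-zero exit status aborts the push.
# The commands below are hypothetical examples; substitute your own.
set -e
npm run lint
npm run format:check
```

If you're using the pre-commit framework, `pre-commit install --hook-type pre-push` installs your existing hook configuration at push time instead, so you don't have to maintain a hand-written script.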
https://redd.it/yo0y5i
@r_devops
Are forward auth and redirect auth the same?
So I'm new to auth in general. Let's assume I have an IdP such as Keycloak, and we're doing OIDC-based auth. The desired architecture is one where an unauthenticated API request hits the reverse proxy, which then offloads authentication to the IdP. Hence the reverse proxy acts as an API gateway.
I'm trying to understand if there exists a difference in the way the auth is handled:
Reverse Proxies like Traefik and Nginx seem to do "Forward Auth", which as I understand forwards the request to the authn/IdP service.
AWS ALB seems to do a "Redirect Auth", which as I understand redirects the authentication to the authn/IdP service which would require the authn endpoints to be exposed and results in more API calls from the client.
Is this accurate? If so, what are the pros and cons of each?
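For reference, the Nginx flavour of "forward auth" is usually built from the `auth_request` module: the proxy makes an internal subrequest to the auth service on every incoming request, and only redirects the browser to the IdP when that check fails. A minimal sketch (the oauth2-proxy address and endpoints are the conventional defaults, not something from the post):

```nginx
location = /oauth2/auth {
    internal;                              # only reachable via auth_request
    proxy_pass http://oauth2-proxy:4180;   # assumed auth-service address
    proxy_set_header X-Original-URI $request_uri;
}

location / {
    auth_request /oauth2/auth;             # internal subrequest per request
    error_page 401 = /oauth2/sign_in;      # unauthenticated -> start login
    proxy_pass http://backend;             # your upstream API
}
```

You would also proxy `/oauth2/` itself to oauth2-proxy so the sign-in and callback endpoints are reachable. Note that the browser still gets redirected to the IdP for the actual login in both models; the difference is mainly who performs the token check on subsequent requests.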
https://redd.it/yo2f4f
@r_devops
How do we densify our EC2 instances?
We are running production workloads owned by different teams (which provision and own their own systems) on a number of EC2 instances. However, utilization is comparatively low across the Auto Scaling groups. I am looking to densify these EC2 instances so we can use the compute more efficiently.
I was thinking of deploying more services on ECS/Fargate or EKS. However, some of the use cases (legacy systems) are still running on EC2 instances. Is there any way we can consolidate workloads onto larger compute instances with better efficiency?
https://redd.it/ykzfer
@r_devops
Best strategy to deploy
Hi everyone, I am brand new in this context, I have this scenario:
I have a GitHub repo (a Next.js project), and every time someone pushes to the main branch I want to build the project, package it into a Docker container, and then run that container on my server. What is the best way to reach this goal?
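One common way to wire this up is a GitHub Actions workflow that builds and pushes the image on every push to main, then restarts the container on the server over SSH. A minimal sketch; the image name, registry, action versions, and secret names are placeholders to adapt, and registry login (docker/login-action) is omitted for brevity:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: registry.example.com/my-next-app:latest
      - name: Restart container on the server
        uses: appleboy/ssh-action@v0.1.5
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            docker pull registry.example.com/my-next-app:latest
            docker rm -f my-next-app || true
            docker run -d --name my-next-app -p 3000:3000 registry.example.com/my-next-app:latest
```

The same shape works with a docker-compose pull/up on the server, or with a simple webhook listener instead of SSH, depending on how much access you want the CI system to have.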
https://redd.it/ykxdsq
@r_devops
Moving to Devops culture - Leave or not to leave?
Hi dear redditors.
My professional profile fits a classic Linux admin, with some basic experience in cloud + automation tools that I learned by myself through personal side projects outside of my jobs.
Planning for the future, I wanted to start moving my professional profile toward the cloud + automation side, trying to find a company that would offer me a job with projects where I could develop new skills and learn the particularities of environments and projects with a more "devops" culture.
A year and a half ago, I joined my current company, where the offer in theory was to cover projects more involved with cloud and a devops approach, just what I wanted.
Unfortunately, in the time I've been here, I haven't worked on many projects related to that, as they keep me on projects that aren't very interesting to me, like classical admin tasks or deploying tools unrelated to my interests.
In summary, after all this time my feeling is that I haven't learned anything interesting and have wasted an entire year and a half here without progressing much.
A few weeks ago, I communicated this situation to my boss and he proposed involving me in "another" project that covers part of my interests; I wanted to give it one last chance and I accepted.
Now, it's true that the project has some tools that are interesting to me, but I'm starting to spot some things that aren't very comfortable, like the bureaucracy required to make progress, working through VDIs, being forced to use Windows, very restricted machines, etc.
What do you think?
Is it better to keep working on this latest project, learning new skills but in an unpleasant development environment, to gain more experience and later move to another company? Or is it better to just leave my current company and take some time to learn the tools that really interest me?
The point is that if I choose to leave and learn by myself, I may later lack "real" experience, which can be a handicap when finding a new job.
Thanks for your time.
https://redd.it/ykwjfd
@r_devops
How was your k8s learning curve?
I've recently started to pick up Kubernetes in my homelab as a learning experience, and although I have a working k3s cluster set up, most of the time I don't have the slightest idea what I'm doing while following guides online. Most of my time is spent banging my head against the wall when something doesn't work and I don't know where to even start debugging it.
I know that it's a process and to give it some time, but I'm curious how you all ended up picking it up, or how it's going so far?
https://redd.it/yocb9b
@r_devops
Found a zero day vulnerability in our application yesterday…now what?
8 years of experience.
Was out of work for a while at the end of last year and took a startup job (IPO coming in the next year or so) while I was desperate. Had been a senior architect and got down-leveled to Tier 1, fine whatever, I’ll do what I need to do to feed my family.
The infrastructure is suuuuuuper ghetto. No automation, they want everything manual, no SAML, no AD.
Realized yesterday that there’s a zero day vulnerability in the infra. Problem is, I’m not allowed to do anything about it, because the senior software person has designed the code and the infra and thinks it’s flawless and perfect and any criticism is criticism of him.
When I say zero day, I mean, the way he’s got it set up, it would be impossible for us to even know if there was a breach and PII could be leaked for the entire company for two years or more. OOB event possibly.
I’ve tried to warn the CTO, but he’s not technical. Senior doesn’t think there’s anything wrong. I’ve been here 9 months.
Security guy agrees, says it’s critical and must be mitigated now for compliance reasons, CTO and SSWE don’t think it’s worth fixing and wanna do it in a few years.
Do I try to make this better or just start looking for a new job now, immediately?
https://redd.it/yoepr3
@r_devops
How do you store/share passwords and links in your org?
Hi, I currently store and share passwords and links using a private SharePoint in my org. It certainly serves the purpose, but I wanted something a little bit classier, if you know what I mean. I'm very curious how people do it in the industry. I'd love to copy it if it serves my purpose.
https://redd.it/ykvbz3
@r_devops
MuleSoft and APIs
Has anyone used the MuleSoft platform?
I have one point of confusion.
Do we deploy APIs in the MuleSoft platform itself, or does it help connect to customer APIs deployed in their own environments?
https://redd.it/ykv3uj
@r_devops
Is it possible to pass the value of the handler to an AWS Lambda function?
I mean dynamically such as from a stage variable?
I'm guessing not, but thought I'd ask, as it could simplify something I'm planning on building.
https://redd.it/yoi800
@r_devops
How can I create URL-specific redirects? I've tried in DNS, but that doesn't allow redirecting based on the whole URL - just the main domain part.
So I currently have `blog.mysite.io` pointing to our Medium through DNS, however we now host the blogs directly on our website, so we want `blog.mysite.io` to redirect there - which is fine. We use AWS Route53 for DNS so I can simply update it.
The problem is that specific article URLs on Medium are different to the ones on our Wordpress website. e.g. the medium ones look something like this
Medium: `https://blog.mysite.io/blah-blah-blah-3-0-45688b74665433`
and the corresponding URL on our Wordpress site looks like this:
Native site: `https://www.mysite.io/blah-blah-tech-3-0/`
So I guess I need somewhere to have the mapping logic to say `https://blog.mysite.io/blah-blah-blah-3-0-45688b74665433` goes to this `https://www.mysite.io/blah-blah-tech-3-0/`.
We only have 12 original blog posts on Medium so it can be pretty quick and dirty, it doesn't need to be dynamic or handle lots of traffic.
I could solve this by spinning up an EC2 instance and deploying an ExpressJS app to do the redirect logic but that feels like overkill.
Is there a way to use S3 or CloudFront maybe?
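CloudFront can indeed do this without a server: attach a CloudFront Function (or Lambda@Edge) to the distribution in front of `blog.mysite.io` and return 301s from a small lookup table. A hypothetical sketch using the CloudFront Functions event shape; the mapping entries are just the example URLs from above, and the `/blog/` fallback is an assumption:

```javascript
// CloudFront Function (viewer-request): redirect old Medium slugs
// to their new homes on the Wordpress site.
var REDIRECTS = {
    '/blah-blah-blah-3-0-45688b74665433': 'https://www.mysite.io/blah-blah-tech-3-0/'
    // ...the other 11 posts go here
};

function handler(event) {
    var target = REDIRECTS[event.request.uri];
    if (!target) {
        // unknown path: send people to the blog index instead of a 404
        target = 'https://www.mysite.io/blog/';
    }
    return {
        statusCode: 301,
        statusDescription: 'Moved Permanently',
        headers: { location: { value: target } }
    };
}
```

With only 12 posts this stays well under the CloudFront Functions size limit, and an S3 bucket with per-object `x-amz-website-redirect-location` metadata would work too if you prefer to avoid code entirely.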
Thanks for any suggestions!
https://redd.it/yohwtq
@r_devops
Datadog confusing graph
I am trying to visualize kubernetes cpu usage in Datadog.
So I create a timeseries graph with "kubernetes.cpu.usage.total" as the metric and max it by container name, like this:
>max:kubernetes.cpu.usage.total{container_name:my_container_name} by {container_name}
What is confusing me is that I see different values depending on the time period I select. When I set a 1-week time period the biggest "spike" is 200 millicores, but when I zoom in on that spike (so the period is 1 hour) suddenly the biggest spike is 1.5 cores.
What is exactly happening here and what am I doing wrong?
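For what it's worth, this is usually Datadog's time aggregation (rollup): over wide time windows each plotted point is a bucket of many raw points, and the default rollup averages within the bucket, which flattens short spikes. Pinning the rollup explicitly makes spikes survive zooming out, e.g. (the 60-second interval here is just an example):

```
max:kubernetes.cpu.usage.total{container_name:my_container_name} by {container_name}.rollup(max, 60)
```

With `.rollup(max, …)` the buckets take the maximum instead of the average, so the 1-week view and the 1-hour view report comparable peak values.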
https://redd.it/yooa5b
@r_devops
Is it possible to "send" a user to an external URL with a path when they hit my subdomain, if that subdomain doesn't have hosting?
I'm trying to send a user to a Google Form if they hit my subdomain. Forward, redirect, any method of sending them there.
The subdomain doesn't have hosting (and that isn't an option at the moment).
I can't use a CNAME record because it won't accept paths, and my DNS-level redirects are only supported at the domain level, not the subdomain level.
Do I have any other options?
Thanks!
https://redd.it/yopogq
@r_devops
How can my frontend and backend communicate when they are part of the same Docker container?
I was able to combine both of them, but the problem is that in order to use the backend, I have to expose its port, which I don't want.
Can my frontend communicate internally with my backend?
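One common pattern that avoids publishing the backend port at all: put a reverse proxy (e.g. Nginx) in front, serve the frontend from it, and proxy API calls internally, so only one port is ever exposed. A minimal sketch; the `127.0.0.1:8000` backend address and the `/api/` prefix are assumptions (in a compose setup you'd use the backend service name instead of `127.0.0.1`):

```nginx
server {
    listen 80;                       # the only port published from the container

    location / {
        root /usr/share/nginx/html;  # static frontend build
        try_files $uri /index.html;
    }

    location /api/ {
        # Backend runs in the same container (or on the compose-internal
        # network) and is never published to the host.
        proxy_pass http://127.0.0.1:8000/;
    }
}
```

Note the distinction matters for browser-based frontends: JavaScript running in the user's browser is outside the container, so it must reach the backend through something published, which is exactly what the proxy route provides.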
https://redd.it/yosn3j
@r_devops
CI-pipeline: (.NET) Building and testing in Docker or directly on runner?
Hi
I'm setting up a CI-pipeline and I'm wondering whether to use Docker to build and test or not. You guys got any opinions/tips/ideas/experience to share?
Example in GitHub Actions
Using runner:
Takes 35sec.
No Docker image is built on every push. Only on a push to the release branch does the `dotnet publish` output get downloaded, built into an image, and pushed to the registry.
Pros
* Fast
* Simple
Cons
* The runner's build environment is impossible to reproduce exactly on other machines
```yaml
services:
  redis:
    # Needed for tests
    image: redis:6.0-buster
    ports:
      - 6379:6379

- name: Checkout
  uses: actions/checkout@v3
- name: Setup .NET
  uses: actions/setup-dotnet@v3
  with:
    dotnet-version: 6.0.x
- name: Setup Nuget-cache
  uses: actions/cache@v3
  with:
    path: ${{ env.NUGET_PACKAGES_PATH }}
    key: nugets-${{ hashFiles('**/*.csproj') }}
- name: dotnet restore
  run: dotnet restore ${{ env.SOLUTION_PATH }}
- name: dotnet build
  run: dotnet build ${{ env.SOLUTION_PATH }} --no-restore --configuration Release
- name: dotnet test
  run: dotnet test ${{ env.SOLUTION_PATH }} --no-build --configuration Release --logger trx --results-directory ${{ env.TEST_RESULTS_PATH }}
- name: dotnet publish
  run: dotnet publish ${{ env.SOLUTION_PATH }}/Web/Web.csproj --no-build --configuration Release --output ${{ env.PUBLISH_OUTPUT_PATH }}
- name: Upload publish output
  if: ${{ inputs.package }}
  uses: actions/upload-artifact@v2
  with:
    name: dotnet-publish-output
    path: ${{ env.PUBLISH_OUTPUT_PATH }}
    if-no-files-found: error
    retention-days: 1
```
-----------------------------------------------------------------
Using Docker:
Takes 1min 35sec.
A Docker image is built on every push, but only pushed to the registry on the release branch.
Pros:
* Verifies the whole image build on every push.
* Easily reproducible on any machine.
Cons:
* Slower, but not that slow?
* Harder to read and understand
```yaml
- name: Checkout
  uses: actions/checkout@v3
- name: Setup Docker Buildx
  uses: docker/setup-buildx-action@v2
- name: Log in to Docker Registry
  uses: docker/login-action@v2
  with:
    registry: ${{ secrets.REGISTRY_REPO_URL }}
    username: ${{ secrets.REGISTRY_REPO_USER }}
    password: ${{ secrets.REGISTRY_REPO_TOKEN }}
- name: Build
  uses: docker/build-push-action@v3
  with:
    context: ./src
    file: ./src/Web/Dockerfile
    target: build
    tags: ${{ env.TEST_IMAGE_TAG }}
    load: true
    cache-from: type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }}
    cache-to: type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }},mode=max
- name: Create Test container
  working-directory: ./devops
  env:
    IMAGE_TAG: ${{ env.TEST_IMAGE_TAG }}
  run: |
    docker compose build ${{ env.DOCKER_COMPOSE_SERVICE_NAME }}
    docker compose create ${{ env.DOCKER_COMPOSE_SERVICE_NAME }}
- name: Test
  id: test
  working-directory: ./devops
  env:
    IMAGE_TAG: ${{ env.TEST_IMAGE_TAG }}
  run: |
    docker compose run --rm --volume ${{ github.workspace }}/${{ env.TEST_RESULTS_DIRECTORY_NAME }}:/${{ env.TEST_RESULTS_DIRECTORY_NAME }} ${{ env.DOCKER_COMPOSE_SERVICE_NAME }} \
      dotnet test --no-build --configuration Release --logger trx --results-directory /${{ env.TEST_RESULTS_DIRECTORY_NAME }}
- name: Package
  uses: docker/build-push-action@v3
  with:
    context: ./src
    file: ./src/Web/Dockerfile
    tags: ${{ secrets.REGISTRY_REPO_URL }}/${{ env.IMAGE_NAME }}:${{ env.VERSION_NUMBER }}
    push: ${{ inputs.push-image == true }}
    build-args: |
      HTTP_PROXY=${{ env.DOCKER_PROXY_URL }}
      HTTPS_PROXY=${{ env.DOCKER_PROXY_URL }}
    cache-from: |
      type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }}
      type=registry,ref=${{ env.DOCKER_PACKAGE_CACHE_URL }}
    cache-to: type=registry,ref=${{ env.DOCKER_PACKAGE_CACHE_URL }},mode=max
```
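Both the `Build` and `Package` steps reference `./src/Web/Dockerfile` with a `build` target, which the post doesn't show. A minimal multi-stage sketch consistent with that workflow (project paths, stage names, and base images are assumptions, not the poster's actual file):

```dockerfile
# Hypothetical ./src/Web/Dockerfile with the "build" target used above.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore Web/Web.csproj
RUN dotnet build Web/Web.csproj --no-restore --configuration Release

FROM build AS publish
RUN dotnet publish Web/Web.csproj --no-build --configuration Release --output /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Web.dll"]
```

With a layout like this, `target: build` stops after the SDK stage (so tests can run inside it with `dotnet test --no-build`), while the untargeted `Package` build continues through to the small `aspnet` runtime image.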