Get DevOps Assessment
Want to improve your DevOps capability? With a DevOps assessment, we will help you identify your strengths and weaknesses and suggest next steps for your DevOps journey. https://www.softwebsolutions.com/devops-survey.html
https://redd.it/lhgtox
@r_devops
Jenkins Notification Update
Is it possible to send a custom Jenkins build notification to one Slack channel while at the same time sending the default error message/notification to another channel?
So for example:
- the job success message would be sent to #release, and
- the build failure message would be sent to the #jenkins channel.
https://redd.it/lgxxqp
@r_devops
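One way this is commonly done (a sketch assuming the Slack Notification plugin's `slackSend` step is installed; the build step and message texts are illustrative, the channel names are from the question): route each outcome to its own channel from a `post` block.

```groovy
// Jenkinsfile sketch -- assumes the Slack Notification plugin (slackSend step).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build step
            }
        }
    }
    post {
        success {
            // Custom success message to #release
            slackSend channel: '#release',
                      color: 'good',
                      message: "Release build ${env.JOB_NAME} #${env.BUILD_NUMBER} succeeded"
        }
        failure {
            // Default-style failure notification to #jenkins
            slackSend channel: '#jenkins',
                      color: 'danger',
                      message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)"
        }
    }
}
```

Because `channel:` is a per-call parameter, each `slackSend` can override the globally configured default channel independently.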
Custom metrics pod in Kubernetes
Hello,
I'm new to the Kubernetes world.
I want to deploy a pod with custom metrics: generate a random number and expose it to Prometheus for monitoring.
How can I do that?
https://redd.it/lgqwqv
@r_devops
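A minimal sketch of what the container process could look like, using only the Python standard library (in practice you would likely use the official `prometheus_client` package instead; the metric name and port here are made up):

```python
# Expose a random-number gauge in the Prometheus text exposition format.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics() -> str:
    """Render one gauge sample in the Prometheus text format."""
    value = random.random()
    return (
        "# HELP demo_random_number A random number, regenerated on each scrape.\n"
        "# TYPE demo_random_number gauge\n"
        f"demo_random_number {value}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def serve(port: int = 8000) -> None:
    # Run this as the container entrypoint; Prometheus then scrapes
    # http://<pod-ip>:<port>/metrics.
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

To get Prometheus to actually scrape the pod, either add scrape annotations (e.g. `prometheus.io/scrape: "true"`, `prometheus.io/port: "8000"`) if your Prometheus uses annotation-based discovery, or create a Service plus a ServiceMonitor if you run the Prometheus Operator.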
Developer needed for SOAR/SIEM platforms
We're looking to hire a developer experienced in building plugins/add-ons for popular SOAR & SIEM platforms. This would be ongoing work for at least 3-6 months.
Please reach out if you are a good fit.
https://redd.it/lhubwj
@r_devops
Artifactory "best practice" for automatically deleting old artifacts from Jenkins.
I'm currently looking for the best way to automatically delete old artifacts uploaded from Jenkins.
Our folder structure in Artifactory goes "project name/release version/build number".
Using rtBuildInfo I'm able to set a limit so that old builds are deleted automatically. However, this doesn't take version numbers into account. I can get around this by using a different build name per version, but that quickly gets messy as we create new release versions.
The project does not use Maven or other common build tools supported by the Jenkins-Artifactory plugin, so I'm limited to using most of the generic pipeline functions.
Any ideas for best practice here would be greatly appreciated.
https://redd.it/lhu8qv
@r_devops
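One possible approach, sketched under the "project name/release version/build number" layout described above (the helper name and the retention policy are illustrative assumptions, not JFrog recommendations): compute the deletable paths per release version, then delete them via the generic REST API or `jf rt del`.

```python
# Illustrative helper: given artifact folder paths shaped like
# "<project>/<release-version>/<build-number>", keep the newest N builds
# of every release version and return the rest for deletion.
from collections import defaultdict
from typing import Iterable, List

def builds_to_delete(paths: Iterable[str], keep_per_version: int = 3) -> List[str]:
    by_version = defaultdict(list)
    for path in paths:
        project, version, build = path.strip("/").split("/")[:3]
        by_version[(project, version)].append((int(build), path))
    doomed = []
    for builds in by_version.values():
        builds.sort(reverse=True)  # highest (newest) build number first
        doomed.extend(path for _, path in builds[keep_per_version:])
    return doomed
```

Each returned path could then be removed with a DELETE request against the repository path (or `jf rt del "repo/<path>"`), so retention is applied per release version rather than per build name.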
How to manage multiple single-tenant infrastructures
Can anyone guide me on how to efficiently manage multiple single-tenant infrastructures and avoid common traps and pitfalls?
I have vast experience in managing a multi-tenant SaaS product - one infrastructure with multiple users. We use Terraform for IaC and Azure DevOps for deployment. It's pretty easy and straightforward to do upgrades of the infra, etc.
But I can't really imagine how to maintain IaC with upgrades and deployments for multiple infrastructures at once. Furthermore, different customers can have different infrastructure configurations (scaling or some additional resources). Moreover, the creation of a new environment must be automated, with a correctly configured infrastructure and deployment.
If I use Terraform, I assume I should go with one state per tenant plus a configurable module of our infrastructure, am I right? How is deployment handled in this scenario? I don't want to manage a pipeline per customer; I would rather use one general configurable pipeline for all customers. Is that a good idea?
https://redd.it/lht8jp
@r_devops
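One common pattern (a sketch with made-up names, not a verified setup): a single shared root module, one Terraform state file and one tfvars file per tenant, and one parameterized pipeline that is triggered per tenant.

```hcl
# main.tf -- every tenant runs the same root module; differences live in
# tenants/<name>.tfvars (variable names here are illustrative).
variable "tenant_name"    { type = string }
variable "instance_count" { type = number } # per-tenant scaling
variable "extra_features" {                 # optional add-on resources
  type    = set(string)
  default = []
}

module "tenant_infra" {
  source = "./modules/tenant"

  tenant_name    = var.tenant_name
  instance_count = var.instance_count
  extra_features = var.extra_features
}
```

The pipeline then selects the state and variables per tenant, e.g. `terraform init -backend-config="key=tenants/$TENANT/terraform.tfstate"` followed by `terraform apply -var-file="tenants/$TENANT.tfvars"`, so one generic pipeline serves all customers.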
AWS EC2 launch configurations vs launch templates
[Original Source](https://brennerm.github.io/posts/aws-launch-configuration-vs-template.html)
At first sight AWS launch configurations and templates may seem very similar. Both allow you to define a blueprint for EC2 instances. Let's have a look at their differences and see which one we should prefer.
## They grow up so fast
Launch configurations are old. In terms of cloud technologies they are essentially ancient. During my research I found articles that date back to 2010. It's hard to find exact details, but it seems they were introduced together with Auto Scaling Groups (ASGs) or shortly after. This also explains why they are only compatible with ASGs. Want to create a single EC2 instance based on a launch configuration? That is not going to happen.
Settings that are supported include:
- the EC2 image (AMI)*
- the instance type (e.g. m5.large)*
- an SSH key pair to connect to the VM*
- the purchase options (on-demand or spot)
- an IAM profile
- one or more security groups
- a block device mapping to specify additional storage volumes
- a few more minor things
_* marks required values_
Changing any of these parameters is not supported, because launch configurations are immutable. Instead of updating one in place, you need to delete and recreate it.
All in all launch configurations have a very specific use case and a set of configuration options limited to the basic parameters. Let's see how launch templates compare.
## The hot stuff
The first big difference is the wider range of AWS services that are compatible with launch templates. In addition to ASGs, they can be used in managed EKS node groups and to create single EC2 instances.
Regarding configuration options, they support a bit more than launch configurations, like network settings and a few more advanced details (interruption behavior, termination protection, CloudWatch monitoring, ...).

The main difference here is that every setting is optional. In the creation form you can select the value "Don't include in launch template" for every parameter. You could essentially create a launch template that specifies nothing. That's kinda pointless, but you get the idea.
Combine this with the ability to source values from existing templates and you can start to imagine all the options that arise. Similar to Docker images, you can create your base template(s) and inherit more specific templates from them.
As nice as this may sound, I want to advise you to be cautious with doing this. Depending on your organization and your upfront template "architecture" planning this may work really well. But more than once I've seen it end up in dependency hell. (including rhyme in blog post ✅) So consider whether you want to stick with independent templates, especially when factoring in the next feature.
Launch templates support versioning: while a single version is immutable, you can still make modifications, which results in a new version you can refer to. In my opinion this workflow provides a much better user experience than the delete-and-recreate approach that launch configurations force on you. Then again, it adds the complexity of managing the version references in your ASGs and child templates.
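For illustration, the versioning workflow could look like this with the AWS CLI (a sketch; the template name and AMI ID are made up):

```shell
# Create a new version based on the current default, overriding just the AMI.
aws ec2 create-launch-template-version \
    --launch-template-name my-template \
    --source-version '$Default' \
    --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'

# Point the default at the freshly created version so ASGs that
# reference "$Default" pick it up.
aws ec2 modify-launch-template \
    --launch-template-name my-template \
    --default-version 2
```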
## Which is the better choice?
So, how did both do? Is there a clear winner or can I give you at least a recommendation which one you should prefer?
I'm not sure how things really evolved, so take the following with a grain of salt. To me it seems launch configurations were created out of necessity when introducing ASGs. Afterwards the folks at AWS noticed that an EC2 blueprint could be useful for other services as well. Because it was probably easier to create something new than to make the existing solution more generic, they
[introduced launch templates in late 2017](https://aws.amazon.com/about-aws/whats-new/2017/11/introducing-launch-templates-for-amazon-ec2-instances/).
Additionally, as far as I know there's nothing you can achieve with launch configurations that isn't doable using launch templates. Please let me know if there's a use case I'm missing here. The only advantage of launch configurations is that they are just simpler: no versioning, no inheritance, plain immutability, just a minimal set of required and a few optional values and you are good to create your ASG.
My impression from reading through the documentation is that AWS will soon start to deprecate launch configurations. They clearly [discourage using them](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html) and even provide [a guide](https://docs.aws.amazon.com/autoscaling/ec2/userguide/replace-launch-config.html) for replacing existing launch configurations with templates. That's why I'd suggest using launch templates for anything new, and starting to migrate your existing launch configurations if you plan on using them long-term.
That's all with my little comparison. Hope you got some value out of it. Enjoy your day 👍
https://redd.it/lhtpi7
@r_devops
GKE ingress 502 error for specific nodeport
I have an alpine Docker image to run my raw PHP website on an Apache server (PHP 7.4).
I want to run the image on Kubernetes (GKE) with an ingress controller.
I'm pushing the image with the gcloud command to the Google Container Registry.
Both the deployment and service have no errors and were created successfully as NodePort.
I tried to expose the deployment as a LoadBalancer and it is working fine (https://35.226.234.2/).
The ingress I deployed is from the Google docs (https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress).
In my ingress now there is:
- https://34.102.225.215/hello
- https://34.102.225.215/jb

The /hello path is the same configuration as the docs and it is working fine.
The /jb path is the same configuration as shown below and always returns a 502 error.
Ingress details in the GCP console show "Some backend services are in UNHEALTHY state".
I checked the backend services with errors and this is what I got:

Health check
k8s-be-31222--c185bb99eb8717c7
port: 31222, timeout: 60s, check interval: 60s, unhealthy threshold: 10 attempts

I checked the log for the request (https://34.102.225.215/jb) and it gives a "failed_to_pick_backend" error.

kubectl get events shows 'Warning FailedToUpdateEndpoint endpoints/jb-loadbalancer-service Failed to update endpoint default/jb-loadbalancer-service: Operation cannot be fulfilled on endpoints "jb-loadbalancer-service": the object has been modified; please apply your changes to the latest version and try again'.
But this error is for https://35.226.234.2/ and not the ingress. (https://prnt.sc/yy6c1h)
I have checked:
https://stackoverflow.com/questions/66110561/kubernetes-gke-ingress-502-server-error
https://stackoverflow.com/questions/49540280/gke-ingress-502-error-when-downloading-file
https://stackoverflow.com/questions/50368210/502-server-error-google-kubernetes
Here is the deployment file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jb-deployment
spec:
  selector:
    matchLabels:
      greeting: jb
      department: jomlahbazar
  replicas: 1
  template:
    metadata:
      labels:
        greeting: jb
        department: jomlahbazar
    spec:
      containers:
      - name: jb
        image: "us.gcr.io/third-nature-273904/jb-img-1-0:v3"
        env:
        - name: "PORT"
          value: "80"
```
Here is the service file:
```
apiVersion: v1
kind: Service
metadata:
  name: jb-kubernetes
spec:
  type: NodePort
  selector:
    greeting: jb
    department: jomlahbazar
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```
Here is the ingress file:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /jb
        backend:
          serviceName: jb-kubernetes
          servicePort: 80
```
Here is backend configuration:
```
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 100
```
Here is ingress describe:
```
Name:             my-ingress
Namespace:        default
Address:          34.102.225.215
Default backend:  default-http-backend:80 (10.20.1.6:8080)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *
        /hello  hello-world:60000 (10.20.0.22:50000)
        /jb     jb-kubernetes:80 (10.20.2.16:8080)
Annotations:
  ingress.kubernetes.io/target-proxy: k8s2-tp-21ig1let-default-my-ingress-dpjd8xm8
  ingress.kubernetes.io/url-map: k8s2-um-21ig1let-default-my-ingress-dpjd8xm8
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\": \"my-backendconfig\"}","kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/hello"},{"backend":{"serviceName":"jb-kubernetes","servicePort":80},"path":"/jb"}]}}]}}
  kubernetes.io/ingress.class: gce
  cloud.google.com/backend-config: {"default": "my-backendconfig"}
  ingress.kubernetes.io/backends: {"k8s-be-30037--c185bb99eb8717c7":"HEALTHY","k8s-be-31222--c185bb99eb8717c7":"UNHEALTHY","k8s-be-32398--c185bb99eb8717c7":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule: k8s2-fr-21ig1let-default-my-ingress-dpjd8xm8
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  CREATE  50m  loadbalancer-controller  ip: 34.102.225.215
```
Output of kubectl get ing my-ingress -o yaml
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    ingress.kubernetes.io/backends: '{"k8s-be-30037--c185bb99eb8717c7":"HEALTHY","k8s-be-31222--c185bb99eb8717c7":"UNHEALTHY","k8s-be-32398--c185bb99eb8717c7":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s2-fr-21ig1let-default-my-ingress-dpjd8xm8
    ingress.kubernetes.io/target-proxy: k8s2-tp-21ig1let-default-my-ingress-dpjd8xm8
    ingress.kubernetes.io/url-map: k8s2-um-21ig1let-default-my-ingress-dpjd8xm8
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
      \"my-backendconfig\"}","kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/hello"},{"backend":{"serviceName":"jb-kubernetes","servicePort":80},"path":"/jb"}]}}]}}
    kubernetes.io/ingress.class: gce
  creationTimestamp: "2021-02-09T14:41:38Z"
  finalizers:
  - networking.gke.io/ingress-finalizer-V2
  generation: 5
  name: my-ingress
  namespace: default
  resourceVersion: "1766500"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/my-ingress
  uid: 3f67cdc5-f5ec-4e49-a7bb-c671f493faf7
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 60000
        path: /hello
      - backend:
          serviceName: jb-kubernetes
          servicePort: 80
        path: /jb
status:
  loadBalancer:
    ingress:
    - ip: 34.102.225.215
```
​
Log of https://34.102.225.215/jb:
```
{
  "insertId": "14iy6rjg1j17wpc",
  "jsonPayload": {
    "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
    "statusDetails": "failed_to_pick_backend"
  },
  "httpRequest": {
    "requestMethod": "GET",
    "requestUrl": "https://34.102.225.215/jb",
    "requestSize": "337",
    "status": 502,
    "responseSize": "488",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0",
    "remoteIp": "217.165.113.67",
    "latency": "0.208404s"
  },
  "resource": {
    "type": "http_load_balancer",
    "labels": {
      "target_proxy_name": "k8s2-tp-21ig1let-default-my-ingress-dpjd8xm8",
      "project_id": "third-nature-273904",
      "forwarding_rule_name": "k8s2-fr-21ig1let-default-my-ingress-dpjd8xm8",
      "zone": "global",
      "url_map_name": "k8s2-um-21ig1let-default-my-ingress-dpjd8xm8",
      "backend_service_name": "k8s-be-31222--c185bb99eb8717c7"
    }
  },
  "timestamp": "2021-02-10T09:09:23.871337Z",
  "severity": "WARNING",
  "logName": "projects/third-nature-273904/logs/requests",
  "trace": "projects/third-nature-273904/traces/f4f7368312cfa2e201bbdb09d6a0b32a",
  "receiveTimestamp": "2021-02-10T09:09:25.334817139Z",
  "spanId": "12d590e83a4ba16c"
}
```
https://redd.it/lgq8rc
@r_devops
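A frequent cause of GKE ingress backends stuck in UNHEALTHY (a hedged suggestion, not a confirmed diagnosis of this setup): the GCE ingress controller derives its health check from the pod's readinessProbe, and the Service's targetPort must match the port the container actually listens on. Note that the deployment above sets PORT=80 while the service targets 8080, which is worth double-checking. A sketch of making both explicit:

```yaml
# Illustrative fragment: container port and readiness probe made explicit,
# so the derived load-balancer health check matches the serving port.
spec:
  containers:
  - name: jb
    image: "us.gcr.io/third-nature-273904/jb-img-1-0:v3"
    ports:
    - containerPort: 80        # must agree with the Service's targetPort
    readinessProbe:
      httpGet:
        path: /                # must return 200 on the serving port
        port: 80
```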
Need some reinforcement regarding this career change decision:
I currently work in healthcare, and this past year has left me really jaded with how those of us in the field have been treated. I started looking at the most in-demand remote jobs with a view to improving my QoL (get a dog, travel more), and, after consulting my cousin who is successful in tech sales, decided on cloud engineering/AWS.
I've always had a passion for computers as a hobby, having built several PCs and dabbled in coding, so it was not difficult to get into the swing of learning via A Cloud Guru's premium subscription.
Lately I've been looking at Stack Overflow and other sites to get an idea of what vacancies require, so that I know when I have a reasonable amount of experience/certs to apply for jobs. But I've been put off by the number of vacancies asking for 3-5 years' experience with the tools I'm learning. Some are less demanding in terms of AWS experience but ask for some help desk experience instead, and therein lies the problem: I can't see many entry-level help desk jobs in my area (Glasgow, UK), so it's starting to feel like I'm being gatekept.
Would a strong independent portfolio showing my ability with Python/AWS adequately replace the seemingly universal requirement of several years' experience? I don't want to grind away at this for three years on a wing and a prayer that I might land a job. That's far too much to expect of someone as a time investment, as I'm sure you'll agree.
Any positive words of hope for this fledgling cloud engineer would be great!
https://redd.it/lgq4rt
@r_devops
GitLab pipeline fails because of composer
Hello r/devops!
We run GitLab CI to build our containers, and until yesterday everything was fine.
But since then, our pipelines have started to fail. Our services are written in PHP, we use GitLab's private Composer repository to store the packages, and we pass down the CI_JOB_TOKEN to authenticate with the repository.
This is the point where we fail: Composer can't authenticate with the repository, so it can't download the packages.
Has anyone experienced something similar? Is there a breaking change in Composer or in GitLab itself, or did I do something in a hacky way and a patch killed it?
One of our jobs:
```yaml
image: docker:latest

stages:
  - build

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build_auth:
  stage: build
  script:
    - cd authentication
    - docker build --build-arg CI_JOB_TOKEN=${CI_JOB_TOKEN} --pull -t "$CI_REGISTRY_IMAGE/authentication" .
    - docker push "$CI_REGISTRY_IMAGE/authentication"
  only:
    refs:
      - master
    changes:
      - authentication/**/*
      - .gitlab-ci.yml
```
The Dockerfile that belongs to it:
```dockerfile
FROM composer:2 AS autoloader

COPY . /app
WORKDIR /app
RUN if [[ -n ${CI_JOB_TOKEN+x} ]]; then git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/".insteadOf "git@gitlab.com:"; fi \
    && composer install --quiet --no-dev --no-scripts --optimize-autoloader --ignore-platform-reqs

FROM php:7.4-apache

COPY ./apache2.conf /etc/apache2/apache2.conf
COPY --from=autoloader /app /var/www/html

RUN apt update \
    && apt install openssl libssl-dev libcurl4-openssl-dev libonig-dev $PHPIZE_DEPS -y \
    && docker-php-ext-install pdo pdo_mysql mbstring \
    && chown -R www-data:www-data /var/www/html \
    && a2enmod rewrite
```
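For what it's worth, one thing worth checking (my assumption, not something confirmed by the post): the Dockerfile above never declares `ARG CI_JOB_TOKEN`, so the `--build-arg` may not actually reach the `RUN` step, and Composer can also authenticate against GitLab's Composer repository directly via its `gitlab-token` auth mechanism instead of rewriting git URLs. A sketch of that variant:

```dockerfile
FROM composer:2 AS autoloader

# The build-arg must be declared here to be visible in RUN steps.
ARG CI_JOB_TOKEN

COPY . /app
WORKDIR /app

# Authenticate against the GitLab Composer repository directly; depending on
# Composer/GitLab versions, the job token may need to be set as a
# username/token pair ("gitlab-ci-token" + token) in auth.json instead.
RUN if [ -n "${CI_JOB_TOKEN}" ]; then \
        composer config --global gitlab-token.gitlab.com "${CI_JOB_TOKEN}"; \
    fi \
    && composer install --quiet --no-dev --no-scripts --optimize-autoloader --ignore-platform-reqs
```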
https://redd.it/lgpeow
@r_devops
Tools to list and share subscriptions and their credentials
What sort of tools do small businesses use to manage and share a list of subscriptions and their credentials? Password managers are an option, but they focus mainly on the credentials. Is there anything better?
We have Google Cloud, Rackspace, various shared hosting, Slack, Office 365, and Windows subscriptions/licenses, and need a good way to show and share consolidated data on what we use and for what. For now, I'm just looking for something that the IT and Product teams will manually add to when they pick up a new service and remove from when it's dropped. It should have metadata describing each item's purpose, so everybody can see at a glance on a dashboard why it's being used.
If it can also let me securely store and share credentials, that'd be an added benefit.
It can be either a hosted or a self-hosted tool.
https://redd.it/li9gh3
@r_devops
Assistant app
Hi, I would like help with a project because I'm not much of a programmer.
I had the idea of a calendar app that fills itself in for you.
You give certain times of the day attributes such as "lunch time", "work time", etc., and tasks have attributes too, as well as priority levels.
So when you want to add an event or task, the app schedules it according to its attributes and priority.
For instance, some basic rules I would implement are: grouping tasks together as much as possible, filling the beginning of the week first, and so on.
I hope I was clear enough.
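To sketch what such a rule engine could look like (purely illustrative; all names are invented): tasks carry an attribute and a priority, time slots carry an attribute and are ordered from the start of the week, and a greedy pass places higher-priority tasks into the earliest free slot whose attribute matches:

```python
from typing import List, Tuple

def schedule(tasks: List[Tuple[str, str, int]],
             slots: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Greedy scheduler. tasks: (name, attribute, priority);
    slots: (slot_label, attribute), ordered from the start of the week.
    Returns (slot_label, task_name) pairs."""
    free = list(slots)
    placed = []
    # Place high-priority tasks first; each takes the earliest matching slot,
    # which naturally fills the beginning of the week first.
    for name, attr, _prio in sorted(tasks, key=lambda t: -t[2]):
        for i, (label, slot_attr) in enumerate(free):
            if slot_attr == attr:
                placed.append((label, name))
                free.pop(i)
                break
    return placed
```

"Grouping tasks as much as possible" would be an extra scoring term (prefer slots adjacent to already-placed tasks), which fits the same greedy loop.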
If you're interested in helping me it would be awesome.
https://redd.it/libubz
@r_devops
CircleCI SaaS – which public cloud does it run on?
Hi guys,
I can't find this info anywhere. Do any of you know? Also, do you know where it's located?
thx!
https://redd.it/lieea5
@r_devops
Terraform cloud
We don't currently have a build process for Terraform; state lives in S3. We're just in the process of adding the build step and are considering Terraform Cloud. The infrastructure is decently large, with over 30 AWS accounts. The primary concern is not having control of the build infrastructure, and the security implications of a third-party system having full access to your cloud.
If you use it in your enterprise, how did you get past these security concerns and build trust with Terraform Cloud? Also, have you faced outages in the past?
Terraform Cloud seems easier to work with than Atlantis, with all the features available, so I was giving it preference.
https://redd.it/lidnmw
@r_devops
Embedding pictures into a work item with Flow
Hi guys. I'm doing an internship at a company and I've been given a task.
It's my first time ever using Flow and DevOps, and it's only been a week.
Basically, to keep it simple: the company wanted me to build a flow that creates a work item once an email is received, with some filtering. That was alright; I did that.
Then they wanted the attachments passed into the work item too. I struggled a bit with that, but it's also done, and I'm really happy, although it probably looks easy to experienced users.
Now I'm trying to push it further: I'd really love to embed the pictures I receive by email into the work item description. I've looked around and it doesn't seem like a very common request, so I'm wondering whether it's even possible. Usually on Outlook, people just drag and drop pictures rather than adding them as attachments.
My goal is to have that same body, with the picture embedded, in DevOps, and I'm struggling with this. Apparently it's not a native feature, but I think there are ways to do it.
If so, could you guys maybe redirect me to an explanation or a link where someone does that?
That would be so helpful. Thanks a lot!
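In case it helps, here is a sketch of one possible route (my assumption, based on the Azure DevOps REST API rather than anything native to Flow): upload the image as a work item attachment, then set `System.Description` to HTML that references it, since the description field renders HTML. The URL and caption below are placeholders:

```python
import json

def build_description_patch(attachment_url: str, caption: str) -> list:
    """Build the JSON Patch body that sets System.Description to HTML
    embedding an already-uploaded attachment as an inline image."""
    html = f'<div><img src="{attachment_url}" alt="{caption}"></div>'
    return [{"op": "add", "path": "/fields/System.Description", "value": html}]

# The surrounding REST calls would look roughly like (untested sketch):
#   POST  https://dev.azure.com/{org}/{project}/_apis/wit/attachments
#         ?fileName=pic.png&api-version=6.0           (body: raw image bytes)
#   PATCH https://dev.azure.com/{org}/{project}/_apis/wit/workitems/{id}
#         ?api-version=6.0
#         Content-Type: application/json-patch+json
#         body: build_description_patch(upload_response["url"], "pic")

patch = build_description_patch("https://example.test/attachments/1", "screenshot")
print(json.dumps(patch))
```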
https://redd.it/lie410
@r_devops
What do you use for service management?
We are currently using Ambari to maintain all our custom services in a non-Kubernetes environment. We have written custom service descriptors in Ambari for Prometheus, Alertmanager, Thanos, Grafana, Kibana, Jaeger, and many metric exporters, and use them to start/stop/restart services.
I do like Ambari, since it lets me manage my services from a centralized UI, but I feel it's not very generic and is mainly useful for managing a Hadoop/HDFS cluster (for which it does a good job).
I wanted to know what other tools you would recommend for this job. The only requirement is being able to start/stop/restart/monitor my services from a centralized UI.
https://redd.it/liadg1
@r_devops
The role of sampling in distributed tracing
Distributed tracing is a technique that produces a high-fidelity observability signal: each data point (trace) represents a concrete execution of a code path. In an HTTP-based service, this typically means that each request would generate a trace containing data representing all the operations that were executed as a result of the request: database calls, message queue interactions, calls to downstream microservices, and so on.
As you can imagine, collecting this level of information for all requests received by a service can quickly generate a seemingly endless amount of data that is hard to manage. Making things even less appealing, the vast majority of this data will represent requests that are not that interesting, given that they’d represent successful operations. In the end, we might end up collecting, transferring, and storing data that will end up being deleted without being used at all.
The holy grail, the ultimate goal for distributed tracing is to collect only data that we’ll need in the future.
While this goal might be very hard to achieve, it’s certainly possible to get close to it by making use of a technique called sampling.
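To make the idea concrete, here is a minimal sketch of one common flavor of sampling (my own illustration, not code from the article): deterministic head-based probabilistic sampling, where the keep/drop decision is a pure function of the trace id, so every service that handles the same trace makes the same decision and traces are never split into sampled and unsampled halves.

```python
import hashlib

def should_sample(trace_id: str, rate: float = 0.01) -> bool:
    """Keep roughly `rate` of all traces, deterministically per trace id."""
    # Map the trace id onto [0, 1) via a stable hash, then compare to the rate.
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

# A rate of 1.0 keeps every trace; 0.0 keeps none.
assert should_sample("4bf92f35", rate=1.0)
assert not should_sample("4bf92f35", rate=0.0)
```

The trade-off the article goes on to discuss is exactly what this sketch ignores: a uniform rate drops interesting (failed, slow) traces at the same rate as boring successful ones, which is what tail-based approaches try to fix.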
Continue reading here: https://blog.kroehling.de/the-role-of-sampling-in-distributed-tracing
https://redd.it/li7vii
@r_devops
Self Service Infrastructure?
So I have an idea: could you create an application, like a web page with drop-down menus, for provisioning infrastructure, which then generates a Terraform script for deployment? Like a self-service app for developers: instead of bugging operations teams, they could specify what they want themselves, a Terraform script would be generated automatically, the infrastructure deployed, and the details sent back (e.g. address, location and deployment stats). Is something like that possible? If so, how would you do it? I'm thinking I could save some time!
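Yes, this is a fairly common pattern (often called an internal developer platform or self-service portal). At its core it's just template rendering plus a catalog of allowed options; a toy illustration (all resource and variable names here are hypothetical):

```python
def render_ec2_module(name: str, instance_type: str, region: str) -> str:
    """Render a minimal Terraform snippet from form inputs.
    The catalog restricts what developers can request."""
    allowed = {"t3.micro", "t3.small", "m5.large"}
    if instance_type not in allowed:
        raise ValueError(f"instance type {instance_type!r} not in catalog")
    return (
        f'provider "aws" {{\n'
        f'  region = "{region}"\n'
        f'}}\n\n'
        f'resource "aws_instance" "{name}" {{\n'
        f'  ami           = var.base_ami\n'
        f'  instance_type = "{instance_type}"\n'
        f'}}\n'
    )

print(render_ec2_module("web01", "t3.micro", "eu-west-1"))
```

The web frontend would feed the drop-down selections into a function like this, commit the result to a repo, and let an existing pipeline run `terraform plan`/`apply` and report the outputs back.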
https://redd.it/lil3oo
@r_devops