The automation challenge: Kubernetes operators vs Helm Charts with Ana-Maria Mihalceanu
Check out this live-coding talk with Ana-Maria Mihalceanu, Co-founder of Bucharest Software Craftsmanship Community.
Have you been working with Kubernetes for a while, or are you just starting your journey?
If you love automation and dislike performing repetitive tasks manually, you have probably come across the concepts of Helm charts and Kubernetes operators. Although they solve similar types of problems, they are not exactly interchangeable tools, but rather complementary.
During this session, Ana-Maria will highlight which to use and when, sharing several code-based examples and lessons learned.
In this talk, you'll learn:
1. Kubernetes operators and Helm Charts: which to use when
2. How operators and Helm charts are complementary, not interchangeable, tools
[Video](https://youtu.be/dGx8PjmWkyM)
**Slides**
https://redd.it/oa60lz
@r_devops
YouTube: The Automation Challenge: Kubernetes Operators vs Helm Charts • Ana-Maria Mihalceanu • GOTO 2021 (recorded at GOTOpia February 2021)
Could not request certificate: execution expired when checking Puppet master status
When I try to see the Puppet master status, it says "Could not request certificate: execution expired". Can anyone help me out?
This is my master config file:

```ini
[master]
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code

[main]
certname = puppetmaster
server = puppetmaster
runinterval = 1h
strict_variables = true
```

and this is my client config file:

```ini
[main]
certname = puppetclient
server = puppetmaster
runinterval = 1h
```
https://redd.it/oa5lnk
@r_devops
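"Execution expired" during certificate requests is usually a plain network timeout: the agent cannot reach the master on port 8140 (bad DNS/hosts entry, firewall, or puppetserver not listening). A quick reachability check before digging into certificates — a minimal sketch, where `puppetmaster` and 8140 are the default hostname/port assumed from the config above:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the name and attempts the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused, or timed out
        return False

# Puppet agents talk to the server on TCP 8140 by default
print(can_reach("puppetmaster", 8140))
```

If this prints False, fix name resolution (/etc/hosts or DNS) and confirm puppetserver is running with port 8140 open before troubleshooting the certificates themselves.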
SRE without programming experience?
I have 13-plus years in the industry. I come from a systems administration background, with the last few years in platform engineering: PowerShell, Bash, basic Python, CloudFormation, Terraform. I don't come from a programming background, and I do okay at scripting and automation. I have almost a decade of experience with application/server/production support.
I also have experience working with CI/CD in AWS and Azure.
Is it wise to try to move into SRE if I don't have a programming background? Or should I build those skills before I do?
https://redd.it/oa4vb6
@r_devops
Small ELK setup on Azure
Hello folks,
In my current project we would like to set up a small ELK stack to monitor our prod application (for now it's closer to a PoC than a real setup; worst case, we will scale). What number of machines and what setup (one Elasticsearch node or maybe two, one Kibana or Kibana + Grafana, HA, LB) would you recommend? We will push data in JSON format to Elasticsearch indexes using the REST API instead of reading logs from the filesystem (proprietary solution, no access to logs on the server), so most likely we will not use Logstash or its peers. I did some research, but there are dozens of posts on this topic and I'm a little lost. We will host it on Azure, so if you know which machine sizes are optimal resource-wise, so we don't go bankrupt, I would also appreciate that.
https://redd.it/oa7pxu
@r_devops
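Since you'll be pushing JSON over the REST API yourself, batching documents through Elasticsearch's `_bulk` endpoint is much cheaper than one request per document, and it makes sizing a small cluster easier to reason about. A minimal sketch of building a bulk body (the index name `app-logs` is a made-up example):

```python
import json

def bulk_index_payload(index: str, docs: list[dict]) -> str:
    """Build an NDJSON body for Elasticsearch's _bulk endpoint:
    one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

payload = bulk_index_payload("app-logs", [{"level": "error", "msg": "boom"}])
# POST this to https://<your-es-host>:9200/_bulk
# with header Content-Type: application/x-ndjson
print(payload)
```

With batched ingestion, a single modest Elasticsearch node plus one Kibana node is a common PoC starting point; a second data node comes in when you actually need HA.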
Question about https with AWS loadbalancer
Hi all,
I see something happening that I did not expect and it is probably because I miss some knowledge here so hopefully you girls and guys can help me fill the gaps.
I have an app running on AWS EC2. It is behind a LB with an SSL certificate associated with it. The LB has a security group that allows only 443 incoming.
Now I moved the app to a new domain, and the new certificate is not yet validated. I expected that I then could not access the app.
However, if I connect to the new domain, the browser shows a certificate error, but when I tell it to go ahead and connect insecurely anyway, it actually does. This was unexpected for me. How do I make sure the app is only accessible over HTTPS with a valid certificate? What am I missing?
https://redd.it/oaa90h
@r_devops
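Two things are going on here: the "proceed anyway" click still uses TLS, just with a certificate the browser could not validate, so the traffic is encrypted either way; and "HTTPS only" is enforced by your listeners, not by the certificate. A common pattern is to also open port 80 but have it do nothing except redirect to 443. Below is a sketch of the redirect action in the shape that boto3's `elbv2.create_listener()` expects — it only builds the dict and does not call AWS; the listener/LoadBalancer ARNs would be your own:

```python
def https_redirect_action(status_code: str = "HTTP_301") -> dict:
    """Default action for a port-80 HTTP listener that redirects every
    request to the same host/path/query on HTTPS port 443."""
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",    # keep the original host
            "Path": "/#{path}",   # keep the original path
            "Query": "#{query}",  # keep the original query string
            "StatusCode": status_code,
        },
    }

action = https_redirect_action()
# elbv2.create_listener(LoadBalancerArn=..., Protocol="HTTP", Port=80,
#                       DefaultActions=[action])
print(action["RedirectConfig"]["Port"])
```

Also make sure the EC2 instances' security group only accepts traffic from the LB's security group, so nobody can bypass the LB and hit the app directly.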
Is DevOps appropriate for hardware/embedded designs?
I work as a design engineer doing hardware and embedded designs (bare metal not Linux), and I am wondering if a DevOps workflow would be a good change for me and my team.
From what I read, the entire DevOps cycle doesn't really apply to the overall workflow of our company. We unfortunately have no say in the company-wide workflow, but we have full autonomy within our group.
Does anyone here have any experience implementing DevOps practices at this low level? I've gotten our builds automated and have now slowly started to introduce HDL simulations and C unit tests into our process. Nothing officially mandated, though.
https://redd.it/oa9ynm
@r_devops
Devtron, Heroku for Kubernetes. An Open Source DevOps tool to Manage and Operationalize your applications on K8s
I am one of the contributors to Devtron, a Heroku for Kubernetes.
TL;DR - [Devtron, An OpenSource DevOps tool](https://github.com/devtron-labs/devtron) to manage and operationalize your applications End-to-End on Kubernetes. Would love to know what you think about it.
A short background: in the past, while working with Kubernetes, we had first-hand experience using multiple tools on top of it. As DevOps engineers, it sure was a hassle to manage various aspects of the application lifecycle when the tools don't talk to each other: CI, CD, security, cost observability, stabilization. We could not find any viable solution for managing and operationalizing applications without an in-depth understanding of each tool.
So we started working on Devtron to tackle the problem. With Devtron, we integrated existing open-source systems like Argo CD, Argo Workflows, Clair, Hibernator, Grafana, Prometheus, Envoy, and many others, and added capabilities on top of them to enable self-serve for developers and DevOps.
Devtron, in short, is an open-source, application-first way of looking at Kubernetes, with deep integrations with existing open-source and commercial software to quickly onboard state-of-the-art systems. We call it 'The AppOps approach.' :)
Some of the Features:
* Zero code software delivery workflow
* Multi-cloud deployment
* Easy dev-sec-ops integration
* Application debugging dashboard
* Enterprise-grade security and compliances
* GitOps aware
* Operational insights
You can check the [Devtron repo](https://github.com/devtron-labs/devtron) to know more about the project.
You can also check the [docs](https://docs.devtron.ai/) directly if you'd like.
Would love to know what you think about this. Happy to hear all your suggestions and improvements regarding the project.
https://redd.it/oab67i
@r_devops
DevOps / GitOps way to manage Operations tools
Formatting warning as I am on mobile.
I am the sole NOC engineer at my company, and I have been creating a bunch of PowerShell/Python tools (soon C#, to manage Microsoft products) for my team. No one on my operations team is able to script or develop tools, and I only have half a year of experience writing and maintaining code in a professional environment.
I have made tools that have drastically reduced toil, so we no longer create users and licenses by hand, turning a 4-8 man-hour daily process into 10 minutes with minimal intervention. I have also created a plethora of one-off or sparsely used scripts to resolve repeat issues. These all execute in a PowerShell terminal on a shared computer.
I would appreciate some insight on how to develop and maintain these tools in a way that others could come in and maintain/improve them in the future (I'm currently following some less-than-best practices involving private repos). My current idea involves using the Azure DevOps suite provided to my team (which is currently unused and empty) to store, test, and push code to a VM, or possibly a static webpage (blob storage?) that makes API calls to these tools (Azure Functions?).
I don't have any coworkers or superiors to lean on for advice, and my manager said that I am free to try anything that would improve our workflow. I can reach out to developers in our company for help with specific products we manage, but not to contribute to the codebase in any meaningful way. Thank you for your advice in this situation 😁
https://redd.it/oabs5z
@r_devops
Isn't putting a private SSH key on GitLab (or any other CI solution) really insecure? New to CI, would love some thoughts!
Not sure if this is the right place to ask this, so I'm sorry if it isn't!
I'm messing around with GitLab CI and I'm currently trying to evaluate the security risks involved with storing a private key in GitLab's CI variables.
My goal is to build some JavaScript/HTML files and then deploy them to my VPS. I'm planning on deploying using rsync over SSH.
However, my internal spider senses are tingling, since (1) I'm storing a private key in the cloud and (2) I feel that if this key gets compromised, my whole server would be too.
Am I being too paranoid? I really want to know what the best practices regarding this are. My plan is to make a new user and put it in a chroot jail, though I've read a chroot jail isn't really secure. Obviously I'll be rechecking all folder permissions, but I'm still not really comfortable.
Am I missing something? I would really appreciate any thoughts.
Thanks!
https://redd.it/oae8mc
@r_devops
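Your instincts are reasonable; the usual mitigation is not a chroot but a dedicated deploy key whose `authorized_keys` entry is locked down so the key can only run rsync into one directory (via rsync's bundled `rrsync` wrapper) and nothing else. A sketch of building such an entry — the paths, directory, and key below are hypothetical examples:

```python
def restricted_key_entry(pubkey: str,
                         target_dir: str = "/var/www/site",
                         rrsync: str = "/usr/bin/rrsync") -> str:
    """Build an authorized_keys line that forces every login with this key
    through rrsync (write-only, one directory) and disables port forwarding,
    PTY allocation, etc. via the 'restrict' option."""
    options = ",".join([
        f'command="{rrsync} -wo {target_dir}"',  # -wo: write-only access
        "restrict",
    ])
    return f"{options} {pubkey}"

# Hypothetical deploy key; append the result to ~deploy/.ssh/authorized_keys
print(restricted_key_entry("ssh-ed25519 AAAAC3...example deploy@ci"))
```

With this in place, a leaked key lets an attacker at worst overwrite files in that one directory, not open a shell, and rotating the key recovers you. Combined with masked, protected CI variables, that's a common baseline.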
DAST in Gitlab
Hey guys, as a DevOps engineer I have integrated native SAST and open-source tools into my GitLab pipelines. I want to integrate DAST into the pipelines too, but the problem is that DAST scans take so long that they delay the pipelines, and developers are not happy having to wait so long every time.
I don't use GitLab Ultimate, which has ZAP as part of it, but even then I don't see how it would beat the long delays due to scan time.
Any thoughts on how to create the workflow without affecting developer experience?
https://redd.it/oafhy1
@r_devops
Free intro to Linux commandline/server course starting 5 July 2021
This free month-long course is re-starting again on the first Monday of next month.
This course has been running successfully now every month since February 2020 - more detail at: https://LinuxUpskillChallenge.org - daily lessons appear in the sub-reddit r/linuxupskillchallenge - which is also used for support/discussion.
Suitable whatever your background, it aims to provide that "base layer" of traditional Linux skills in a fun, interactive way.
Any feedback is very welcome.
https://redd.it/oaf8cy
@r_devops
Best Udemy Course to learn DevSecOps
Hi everyone!
I have 3 years of agile development (Scrum) in Java web. I started studying security topics like OWASP, pentesting, etc., and I want to work with DevSecOps.
I live in Brazil, so things can be a bit different, but can you point me to some of the best courses on this subject so I can apply for jobs?
Thanks :)
https://redd.it/oaiyhg
@r_devops
Learn Kubernetes by Example
A free and continuously updated online collection of resources on everything Kubernetes, by Red Hat.
https://www.i-programmer.info/news/150-training-a-education/14680-learn-kubernetes-by-example.html
https://redd.it/oabatj
@r_devops
SQL failover groups and cross-regional DR
I thought the purpose was high availability, so if one goes down the other picks up; however, it seems they are all on the same server, so that wouldn't work. How do you use them for cross-region geo disaster recovery in Azure? How much of it is redundant?
https://redd.it/oa8wx6
@r_devops
Delivery Plan Expanded
Hi,
Does anyone have any ideas for how to plot your epics/features à la Gantt chart? I like Delivery Plans because they have a good look and details, but they don't scale out enough; i.e., I potentially want to scale out 1-2 years, not just 4-5 months. Any recommendations?
https://redd.it/oa5tuh
@r_devops
Jira collector (Hygieia) 401 Unauthorized error
Below is my application.properties file.
\# Database Name
dbname=dashboarddb
​
\# Database HostName - default is localhost
dbhost=9.8.x.x
​
\# Database Port - default is 27017
dbport=27016
​
\# MongoDB replicaset
dbreplicaset=[false if you are not using MongoDB replicaset\]
dbhostport=[host1:port1,host2:port2,host3:port3\]
​
\# Database Username - default is blank
dbusername=dashboarduser
​
\# Database Password - default is blank
dbpassword=dbpassword
​
\# Logging File location
logging.file=./logs/jira.log
​
\# PageSize - Expand contract this value depending on Jira implementation's
\# default server timeout setting (You will likely receive a SocketTimeoutException)
feature.pageSize=100
​
\# Delta change date that modulates the collector item task
\# Occasionally, these values should be modified if database size is a concern
feature.deltaStartDate=2016-03-01T00:00:00.000000
feature.masterStartDate=2016-03-01T00:00:00.000000
feature.deltaCollectorItemStartDate=2016-03-01T00:00:00.000000
​
\# Chron schedule: S M D M Y [Day of the Week\]
feature.cron=0 * * * * *
​
\# ST Query File Details - Required, but DO NOT MODIFY
feature.queryFolder=jiraapi-queries
feature.storyQuery=story
feature.epicQuery=epic
​
\# JIRA CONNECTION DETAILS:
\# Enterprise Proxy - ONLY INCLUDE IF YOU HAVE A PROXY
\#feature.jiraProxyUrl=https://proxy.com
\#feature.jiraProxyPort=9000
feature.jiraBaseUrl=https://xxx.atlassian.net
feature.jiraQueryEndpoint=rest/api/2/
\# For basic authentication, requires username:password as string in base64
\# This command will make this for you: echo -n username:password | base64
​
feature.jiraCredentials=xxx
​
\# OAuth is not fully implemented; please blank-out the OAuth values:
​
feature.jiraOauthAuthtoken=
feature.jiraOauthRefreshtoken=
feature.jiraOauthRedirecturi=
feature.jiraOauthExpiretime=
​
\#############################################################################
\# In Jira, general IssueType IDs are associated to various 'issue'
\# attributes. However, there is one attribute which this collector's
\# queries rely on that change between different instantiations of Jira.
\# Please provide a string name reference to your instance's IssueType for
\# the lowest level of Issues (for example, 'user story') specific to your Jira
\# instance. Note: You can retrieve your instance's IssueType Name
\# listings via the following URI: https://[your-jira-domain-name\]/rest/api/2/issuetype/
\# Multiple comma-separated values can be specified.
\#############################################################################
feature.jiraIssueTypeName=Story
​
\#############################################################################
\# In Jira, your instance will have its own custom field created for 'sprint' or 'timebox' details,
\# which includes a list of information. This field allows you to specify that data field for your
\# instance of Jira. Note: You can retrieve your instance's sprint data field name
\# via the following URI, and look for a package name com.atlassian.greenhopper.service.sprint.Sprint;
\# your custom field name describes the values in this field:
\# https://[your-jira-domain-name\]/rest/api/2/issue/[some-issue-name\]
\#############################################################################
feature.jiraBugDataFieldName=customfield_10201
​
\#############################################################################
\# In Jira, your instance will have its own custom field created for 'super story' or 'epic' back-end ID,
\# which includes a list of information. This field allows you to specify that data field for your instance
\# of Jira. Note: You can retrieve your instance's epic ID field name via the following URI where your
\# queried user story issue has a super issue (for example, epic) tied to it; your custom field name describes the
\# epic value you expect to see, and is the only field that does this for a given issue:
\# https://[your-jira-domain-name\]/rest/api/2/issue/[some-issue-name\]
\#############################################################################
feature.jiraEpicIdFieldName=customfield_10002
​
\#############################################################################
\# In Jira, your instance will have its own custom field created for 'story points'
\# This field allows you to specify that data field for your instance
\# of Jira. Note: You can retrieve your instance's storypoints ID field name via the following URI where your
\# queried user story issue has story points set on it; your custom field name describes the
\# story points value you expect to see:
\# https://[your-jira-domain-name\]/rest/api/2/issue/[some-issue-name\]
\#############################################################################
feature.jiraStoryPointsFieldName=customfield_10003
​
\#############################################################################
\# In Jira, your instance will have its own custom field created for 'team'
\# This field allows you to specify that data field for your instance
\# of Jira. Note: You can retrieve your instance's team ID field name via the following URI where your
\# queried user story issue has team set on it; your custom field name describes the
\# team value you expect to see:
\# https://[your-jira-domain-name\]/rest/api/2/issue/[some-issue-name\]
\#############################################################################
feature.jiraTeamFieldName=
​
\# Defines how features are updated per board. If true, update only based on enabled collectorItems; otherwise, perform a full update
feature.collectorItemOnlyUpdate=true
​
\# Defines the maximum number of features allowed per board. If the limit is reached, collection will not happen for the given board
feature.maxNumberOfFeaturesPerBoard=1000
​
\# Set this to true if you use boards as teams
feature.jiraBoardAsTeam=false
​
\# Defines the number of hours between each board/team and project data refresh
feature.refreshTeamAndProjectHours=3
​
I'm getting a 401 Unauthorized error when I run: java -jar /opt/hygieia-feature-jira-collector/target/jira-feature-collector.jar --spring.config.name=feature --spring.config.location=application.properties
​
2021-06-29 09:51:00,034 [taskScheduler-1\] ERROR o.s.s.s.TaskUtils$LoggingErrorHandler - Unexpected error occurred in scheduled task.
org.springframework.web.client.HttpClientErrorException: 401 Unauthorized
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:108)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:709)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:662)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:622)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:540)
at com.capitalone.dashboard.client.RestClient.makeRestCallGet(RestClient.java:158)
at com.capitalone.dashboard.collector.DefaultJiraClient.makeRestCall(DefaultJiraClient.java:828)
at com.capitalone.dashboard.collector.DefaultJiraClient.getJiraIssueTypeIds(DefaultJiraClient.java:299)
at com.capitalone.dashboard.collector.FeatureCollectorTask.getCollector(FeatureCollectorTask.java:98)
at com.capitalone.dashboard.collector.FeatureCollectorTask.getCollector(FeatureCollectorTask.java:50)
at com.capitalone.dashboard.collector.CollectorTask.run(CollectorTask.java:56)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
https://redd.it/oa53sg
@r_devops
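For a 401 from this collector, the first thing worth checking is the Basic-auth string itself before re-running the jar. A minimal sketch, assuming a hypothetical user and API token (substitute your own; the xxx.atlassian.net host is the placeholder from the properties above):

```shell
# Hypothetical credentials -- replace with your real Jira user and API token.
JIRA_USER="user@example.com"
JIRA_TOKEN="my-api-token"

# Build the base64 string expected by feature.jiraCredentials.
# printf avoids the trailing newline that a plain 'echo' would add,
# which silently corrupts the encoded value.
CREDS=$(printf '%s:%s' "$JIRA_USER" "$JIRA_TOKEN" | base64)
echo "feature.jiraCredentials=$CREDS"

# Then verify the credential against the endpoint the collector hits first
# (uncomment once the host is your real instance):
# curl -s -o /dev/null -w '%{http_code}\n' \
#     -H "Authorization: Basic $CREDS" \
#     "https://xxx.atlassian.net/rest/api/2/issuetype"
```

If the curl check also returns 401, the problem is the credential itself (Atlassian Cloud expects an API token rather than the account password); if it returns 200, look instead at whether the properties file is actually being picked up by --spring.config.location.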
Nobody's ever fired for picking AWS...
Is it just me, or do many of us see the AWS vs. Azure vs. GCP decision tree like this?:
IF you can't live without Windows => Let's get on Azure, first
ELSEIF you're addicted to BigQuery => Hmm, guess we'll try GCP
ELSE "Any objections to AWS? No? Woot, free tier!"
(notwithstanding those of you with VIP treatment / special deals with your cloud's sales team)
https://redd.it/oaijme
@r_devops
Connecting sonarlint with online sonarqube
Currently, I'm using SonarLint in IntelliJ IDEA, but I realised that some of the issues detected by SonarLint differ from those in SonarQube. Does anyone have any idea why?
To close the gap, I'm attempting to bind the project to a remote SonarQube server (my company's account); is it possible to do that? All the tutorials I've seen bind to a local SonarQube (localhost:9000).
Appreciate any insights on this! Thank you!
https://redd.it/oapv4a
@r_devops
Building in a chroot for pxe booting nodes
So I've got a slightly different situation but I'd really like to use packer.
I've got a bunch of nodes that boot from PXE. They boot into an NFS root, grab a squashfs from that NFS root, load it into RAM and then switch root into that and ditch the NFS.
The images, both the NFS root and the squashfs, are just built as flat folders, so you can chroot into them on the build/boot hosts. No VMs or other stuff. But I'd really love to use Packer to make them.
Currently we have to chroot into the image by hand, install stuff/change settings and whatever, then squash the resulting folder up.
It would be far nicer to just edit the packer files and run a build.
I've looked at the available chroot builders and they don't quite do what I need.
I guess I'm interested in how hard it would be to make a new builder/whatever to do these from scratch. I mean, it's just an "rpm --initdb --root blah", an "rpm --root blah -ihv centos-release-whatever.centos.x86_64.rpm" and a "yum install --installroot=blah bash yum rpm" to get to a usable chroot state.
I realise it's a bit of a departure from Packer's standard mode of operation, but I don't think it's too far.
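The bootstrap steps named above can be sketched as a dry-run script; the target folder, release RPM filename and squashfs name are all illustrative assumptions, not taken from a real build:

```shell
#!/bin/sh
# Dry-run sketch of the manual chroot bootstrap: prints each step instead of
# executing it. Swap 'echo' for real execution once the paths are reviewed.
ROOT=/srv/images/node-root                                   # hypothetical build folder
RELEASE_RPM=centos-release-7-9.2009.0.el7.centos.x86_64.rpm  # hypothetical filename

run() { echo "+ $*"; }

run rpm --initdb --root "$ROOT"                        # seed an empty RPM database
run rpm --root "$ROOT" -ihv "$RELEASE_RPM"             # install the release package
run yum install -y --installroot="$ROOT" bash yum rpm  # minimal usable chroot toolset
run mksquashfs "$ROOT" node-root.squashfs -comp xz     # pack the tree for the PXE nodes
```

From there, one option short of writing a custom builder is to drive the same commands from Packer's shell provisioning hooks; a from-scratch builder would essentially wrap this sequence plus the chroot for provisioners.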
Any tips/advice or builders I've missed (no kvm use please) would be welcome.
EDIT: It looks like I might be able to modify this builder https://github.com/summerwind/packer-builder-qemu-chroot to do what I want. Now I just need to learn Go.
https://redd.it/oapnc5
@r_devops