Getting into DevOps story
I did not create this blog post, nor do I gain anything from you reading it: https://www.jesuisundev.com/en/when-dev-goes-devops/
It's a really cool story about a developer and his team moving into DevOps. I really liked it. Many here might feel the same way, having gone from SysAdmin into DevOps.
https://redd.it/l1bu0d
@r_devops
The importance of separated environments
In this blog post, I'll share the difficulties I faced during deployments to production when using two accounts (dev+stg and prd). These difficulties could've been avoided by separating the environments into three accounts: dev, stg, and prd.
https://dev.to/unfor19/the-importance-of-separated-environments-1gja
https://redd.it/l1b4ku
@r_devops
Is CloudBees Dockerhub/registry notification plugin still supported in 2021?
I recently installed this plugin in Jenkins, but there seems to be an issue when I try to monitor for the latest image change while configuring my Jenkins build.
I selected "Monitor Docker Hub for image changes" and specified the repositories that should trigger this job ("docker/imagename").
Upon saving the build, I received an error message saying: JSONObject["org-jenkinsci-plugins-registry-notification-opt-impl-TriggerForAllUsedInJob"] is not a JSONObject.
Does anyone know how to submit an issue regarding this?
https://redd.it/l15bp3
@r_devops
Argo-CD help
Hey,
Hopefully there's someone experienced with Argo-CD who can help me out.
I currently want to achieve the following:
1. Terraform provisions EKS/Kubernetes
2. Deploy Argo-CD on Kubernetes
3. Argo-CD configured to point to internal git repositories and start provisioning all of the resources defined there.
4. Argo-CD then keeps everything in sync, making changes as necessary and pruning anything not defined there.
I would like to be able to delete Kubernetes/EKS clusters and have Argo-CD restore its previously configured state upon cluster re-creation.
Other than persistent volumes is there another way of doing this?
I see that there is some documentation around declarative setup.
What I would like, though, is the ability to define applications within the UI and have those changes saved somewhere central, like a git repository, without having to define them all manually in config files.
Is there any option in Argo-CD to do this natively via configuration, or will I have to script exporting the config and re-importing it after provisioning?
Thanks in advance
https://redd.it/l0sv2m
@r_devops
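For what it's worth, the declarative-setup docs the poster mentions revolve around the Application custom resource, and the common "app of apps" pattern addresses exactly this scenario. A minimal sketch (the repo URL and paths below are placeholders, not anything from the post):

```yaml
# Hypothetical "app of apps": one root Application that points at a Git
# directory containing further Application manifests. Recreating the
# cluster then only requires re-installing Argo CD and re-applying this
# single resource; everything else is pulled back in from Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/argocd-apps.git  # placeholder
    targetRevision: main
    path: apps                       # directory of Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                    # remove resources deleted from Git
      selfHeal: true                 # revert manual drift
```

Applications created through the UI also live as Application resources in the cluster, so something like `kubectl get applications -n argocd -o yaml` can export them back into the Git repo rather than hand-writing each one.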
Understanding / Creating a pipeline?
Hi all, I am a software developer wanting to learn more about CI/CD and building pipelines/environments. I will be starting a job at a smaller company and will be able to learn and do more things aside from purely coding.
I know most repos will have a:
* dev branch
* staging / UAT branch
* production branch
I am wanting help to understand the point of staging.
For me, dev is where I push new feature code. Before it can be merged into dev, it has to build and test.
From there, what would be a typical pipeline to staging and then prod? I feel as though it's the same thing as dev ... build and test, so what's the point in having the same thing twice?
Any help is appreciated!
Thanks
https://redd.it/l0o79v
@r_devops
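One common answer: staging doesn't repeat the build and unit tests; it exercises the *deployment itself* and runs integration/UAT checks against production-like infrastructure, using the same artifact that will later go to prod. A hypothetical GitLab CI sketch (script names are placeholders):

```yaml
# Build and unit-test once, then promote the same artifact through
# staging to production instead of rebuilding at each stage.
stages: [build, test, deploy-staging, deploy-prod]

build:
  stage: build
  script: ./build.sh              # placeholder build command
  artifacts:
    paths: [dist/]                # the one artifact that gets promoted

unit-test:
  stage: test
  script: ./run-unit-tests.sh     # placeholder

deploy-staging:
  stage: deploy-staging
  script: ./deploy.sh staging     # integration/UAT tests run here,
  environment: staging            # against production-like config

deploy-prod:
  stage: deploy-prod
  script: ./deploy.sh production
  environment: production
  when: manual                    # human sign-off gates the prod deploy
```

The point of the middle stage is that a deployment can fail for reasons unit tests never see (migrations, config, networking), and staging catches those before prod does.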
CI/CD case study for edge infrastructure with a lot of Raspberry Pis
Hi,
In our project, we have a lot of Raspberry Pis and configure them all with Ansible. From time to time we add new features and fix bugs in our Ansible scripts, which makes them more complex. We use GitLab to review our code, but what's missing is validation of the syntax and of whether the code will actually work on production hardware.
Our Pis are an integral part of our customers' installations, which isn't the usual setting for CD (for me, at least :)). They are not easily accessible in a matter of minutes and are distributed across several hundred kilometers.
The full case study at https://turingpi.com/case-study-turing-pi-gitlab-ci-ansible/
https://redd.it/l1yy3w
@r_devops
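The syntax-validation gap described above is the kind of thing a couple of cheap GitLab CI jobs can cover before anything touches a remote Pi. A sketch, assuming a playbook named `site.yml` and a lab inventory file (both names are illustrative):

```yaml
# Hypothetical GitLab CI jobs: fast static checks first, then a dry run
# against a disposable test host (e.g. a Pi in the lab) to catch issues
# that syntax checks cannot.
lint:
  image: python:3.11
  script:
    - pip install ansible ansible-lint
    - ansible-playbook site.yml --syntax-check   # site.yml is a placeholder
    - ansible-lint site.yml

hardware-test:
  script:
    - ansible-playbook -i lab-inventory.ini site.yml --check --diff
```

`--check --diff` reports what *would* change without applying it, which is a reasonable middle ground when the real fleet is hundreds of kilometers away.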
Helm vs Kustomize
What do you prefer for applying Kubernetes resources? Helm, Kustomize, or something else?
From my perspective, #kustomize vs #helm boils down to Kubernetes patching vs templating. Where do you stand? Do you prefer Helm or Kustomize? Why did you choose one over the other?
>>> https://youtu.be/Twtbg6LFnAg
https://redd.it/l1z39q
@r_devops
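To make the patching-vs-templating distinction concrete, here is a minimal Kustomize overlay (file and resource names are illustrative): the base Deployment stays untouched, and only the values that differ in production are patched.

```yaml
# overlays/prod/kustomization.yaml (illustrative). Helm would reach the
# same result by templating replicas through values.yaml; Kustomize
# instead patches a plain, complete base manifest.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # unmodified base manifests
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5               # prod-only override
```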
Outsourcing your devops is bad - example blogs / articles?
I kicked off a discussion internally in my company regarding cross-functional teams and how you can't just "make a devops team" and throw all your builds to them because you're going to end up in the same situation as when you throw your builds at an Ops team. I'm arguing in favor of cross-functional teams and team ownership of code and deployments from git to Live.
I have read a lot of articles over the years along the lines of "we made a dedicated devops team and it didn't work", but now that I want to show those blogs to my team as examples, I can't find any. The most I've got is the DevOps Topologies site.
Does anyone have examples of articles elaborating on the above that I can forward to my team for further reading?
Thanks very much.
EDIT: I've posted a summary of my findings below. Thanks all for the help.
https://redd.it/l1vejo
@r_devops
Generic Declarative Infra testing framework
I might just be struggling with what to google here so any pointers are useful.
I'm looking to spin up temporary infra (EC2 VMs), configure them and then run a suite of performance tests on the infra and spit out the results before tearing down the infra.
In some cases I might want to modify the infra post setup, e.g. bring down a node of a distributed stack.
I'm currently using Terraform for provisioning and Ansible for config, and I have the test cases captured in simple scripts. I'd like to build this all into one flow, e.g. a) run Terraform & Ansible, b) run the test suite.
Are there declarative testing frameworks I should be looking at for this purpose? I'd rather not write lengthy code or Jenkins jobs.
https://redd.it/l259zd
@r_devops
CloudFlare vs Azure vs Cloudfront vs CDN77 vs Fastly
Hi everyone, sorry if this is not allowed or not the right place to post but I was wondering about the differences between these CDN platforms and what the pros and cons of each might be.
Specifically how CloudFlare compares to these other products.
Many thanks!
https://redd.it/l21bt0
@r_devops
Is it time that we solved building multirepo applications once and for all?
After 9 months working from home I'm starting to become more restless. I haven't had the energy up until now to work with my pet projects that I never complete due to way too grandiose ideas that are totally out of proportion. But now I'm ready, now I will start spending a few evenings every week to work on something new, something that is needed by the community, something at a scale I can actually complete. I want to give back something to humanity, it doesn't matter if it's small and not very technical, but I want it to be valuable to people and make the devops world a better place.
So I was thinking of what I'm good at and what I understand well enough to not make something that's irrelevant without me knowing about it until it's finished. I happen to be pretty good at making build pipelines, I'm talking about the real deal with merge requests, integration tests, dependency chains, reproducibility, etc.
The tool I had in mind would be a reboot of the build system I made for my current client which I've been working with/on for a few months. It builds a couple of embedded Linux platforms, a set of applications that can run on said platforms, and finally a couple of complete firmware images put together by slapping a subset of the applications on top of one of the Linux platforms. I want to generalize this to be able to build whatever you like such as node or python applications, not just embedded linux applications.
Some of the features off the top of my head:
1. Fully reproducible: check out the meta repo from 14 years ago on any computer, be it Windows, Ubuntu, Arch, or NixOS, and hit build. No dependencies required other than installing one single binary, and you'll get the same artifacts as you got 14 years ago, down to the last bit.
2. Smart caching: only build what has never been built on a build server before; the rest is brought in from cache.
3. Distributable builds: you can add as many machines as you'd like.
4. Coherent: run the exact same command to build on your dev machine as you run on the build servers.
5. Highly integratable: use it with GitHub Actions, GitLab CI, Jenkins, Travis, Drone, Circle CI, or just locally on your own machine.
Would you be interested in a canned solution such as this? If not, why? If yes, what would you want to get out of it?
https://redd.it/l280nm
@r_devops
MySQL Configuration Performance Tuning Workflow
Every database system operation on a server will have four main system resources. The CPU is the powerhouse behind the system. The memory encodes, stores, and retrieves information. Disk I/O is the input and output process for data moving from storage to other hardware components. The network consists of the client connections to the server.
When the use of these resources isn't optimized, performance of the operating system and database system degrades. Fine-tuning MySQL parameters is vital for DBAs and DevOps engineers who want to prevent and quickly solve performance issues that slow the server down. Ultimately the most important metric is how quickly a query is received and the data returned by the server. Tuning and configuring system parameters can achieve the following results for database systems:
* Improve application performance.
* Improve the efficiency of server resource utilization.
* Reduce costs.
## Adjust MySQL parameters
Different situations and events may require recalculating and adjusting MySQL system parameters, rather than a DBA spending resources troubleshooting server performance issues after the fact.
We recommend adjusting MySQL parameters in the following cases:
* First-time server setup.
* Low application performance.
* Changes to server resources (RAM, CPU).
* Changes to the application or its load, such as visitor count.
## Tuning MySQL Parameters
To get the most out of your server, we created a step-by-step workflow guide: [https://dev-to-uploads.s3.amazonaws.com/i/7994irnb3xtds8199s8b.png](https://res.cloudinary.com/practicaldev/image/fetch/s--I91oCgLs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7994irnb3xtds8199s8b.png)
## 1. MySQL Documentation
MySQL has excellent documentation resources useful to even veteran database system administrators. MySQL provides server reference manuals for each currently supported version.
* [MySQL 8.0 Reference Manual](https://dev.mysql.com/doc/refman/8.0/en/)
* [MySQL 5.7 Reference Manual](https://dev.mysql.com/doc/refman/5.7/en/)
* [MySQL 5.6 Reference Manual](https://dev.mysql.com/doc/refman/5.6/en/)
MySQL 5.6 will not be supported after February 2021.
It’s helpful for all users of MySQL to be familiar with these resources. And it’s highly recommended to spend time working through the documentation to better understand how the database server works.
## 2. Study MySQL Configuration Best Practices
An array of resources is available for learning about MySQL configuration. MySQL has hundreds of configuration options, but for most server needs only a handful are critical. Tuning varies with workload and hardware, but DBAs who familiarize themselves with best practices (for their specific version of MySQL) will be better able to understand and solve performance issues.
Releem has collected an [amazing list of articles and resources](https://github.com/Releem/awesome-mysql-performance) that are related to MySQL / MariaDB / Percona configuration.
## 3. Analyze Monitoring Data
Next, monitoring software should be used to continually monitor and analyze data from the MySQL server. These tools help monitor server health while providing unique ways to visualize metrics and handle alerts. Both open-source and licensed software is available. Below are some of the most highly recommended options:
* [Zabbix](https://www.zabbix.com/) is an open-source monitoring tool capable of monitoring networks, servers, cloud, applications, and services. Zabbix is highly secure and easily scalable.
* [Prometheus](https://prometheus.io/) is open-source monitoring software known for its simplicity and visualization tools.
* [Percona Monitoring and Management](https://www.percona.com/software/database-tools/percona-monitoring-and-management) is an open-source monitoring solution aimed at improving database performance and data security.
* [Nagios XI](https://www.nagios.com/) is premium monitoring software but offers a free trial for new users. Nagios XI promises to be limitlessly scalable and highly customizable.
Using any of these monitoring tools will provide critical data on how MySQL uses server resources and history values of MySQL status.
## 4. Analyze MySQL Status
Make sure the server has been running for at least 24 hours. Analyze MySQL status to detect any variables that need configuration. Querying with "SHOW GLOBAL STATUS;" will deliver various metrics.
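As a sketch of what "analyzing MySQL status" can mean in practice, the snippet below derives one headline metric, the InnoDB buffer pool hit rate, from `SHOW GLOBAL STATUS` counters. The variable names are real MySQL status variables; the sample numbers are made up for illustration.

```python
# Derive the InnoDB buffer pool hit rate from SHOW GLOBAL STATUS
# counters (here supplied as a dict of name -> value strings, the way
# a client library would return them).
def buffer_pool_hit_rate(status: dict) -> float:
    """Fraction of read requests served from the buffer pool."""
    requests = int(status["Innodb_buffer_pool_read_requests"])
    disk_reads = int(status["Innodb_buffer_pool_reads"])
    if requests == 0:
        return 1.0  # no reads yet: nothing to complain about
    return 1.0 - disk_reads / requests

sample = {
    "Innodb_buffer_pool_read_requests": "1000000",
    "Innodb_buffer_pool_reads": "5000",  # reads that had to hit disk
}
print(f"{buffer_pool_hit_rate(sample):.3f}")  # → 0.995
```

A rate that stays well below ~0.99 on a busy server is the kind of finding that would point at `innodb_buffer_pool_size` in the later steps.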
## 5. Use Scripts to get Configuration Recommendations
Using an automated script saves the considerable hassle and time of searching through the metrics by hand. A script checks the metrics from the server and compares them against expected reasonable values; anything operating outside those values is flagged for review.
* [MySQLTuner](https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl) is a powerful open-source configuration tuner, available on [GitHub](https://github.com/major/MySQLTuner-perl).
* [MySQL Tuning-Primer.sh](https://raw.githubusercontent.com/BMDan/tuning-primer.sh/master/tuning-primer.sh) is another strong open-source option on [GitHub](https://github.com/BMDan/tuning-primer.sh). This tool is older and should only be used with MySQL versions 5.5 - 5.7.
## 6. Calculate Values of MySQL Performance Settings
Now armed with the monitoring data, MySQL status, and configuration recommendations from steps 3 through 5, the specific changes and values can finally be determined to minimize bottlenecks and inefficiencies. The MySQL documentation and configuration best practices that were studied in steps 1 and 2 will show their value during this process. Refer to the documentation, resources, and internet as needed.
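As one concrete example of such a calculation (a widely used heuristic, not a value prescribed by this article): on a dedicated database host, `innodb_buffer_pool_size` is often sized to roughly 70% of RAM, rounded down to a multiple of the default `innodb_buffer_pool_chunk_size` of 128 MiB.

```python
# Heuristic sizing of innodb_buffer_pool_size: ~70% of RAM on a
# dedicated host, floored to a whole number of 128 MiB chunks.
CHUNK = 128 * 1024**2  # default innodb_buffer_pool_chunk_size (bytes)

def suggest_buffer_pool_bytes(ram_bytes: int, fraction: float = 0.7) -> int:
    target = int(ram_bytes * fraction)
    # Never go below a single chunk; otherwise round down to chunks.
    return max(CHUNK, (target // CHUNK) * CHUNK)

ram = 16 * 1024**3  # a 16 GiB host
print(suggest_buffer_pool_bytes(ram) // 1024**2, "MiB")  # → 11392 MiB
```

The exact fraction is a judgment call that depends on what else runs on the machine; the monitoring data from step 3 is what justifies moving it up or down.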
## 7. Create New Configuration File
When MySQL is first installed, a standard configuration file is placed in the base directory. A new configuration file can be created to reflect the parameter changes determined in the previous steps.
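For illustration, such a file is a plain `[mysqld]` section; every value below is an example placeholder, not a recommendation — the actual numbers come out of steps 3 through 6.

```ini
# Illustrative my.cnf fragment -- values are examples only.
[mysqld]
innodb_buffer_pool_size = 11G
innodb_log_file_size    = 1G
max_connections         = 300
tmp_table_size          = 64M
max_heap_table_size     = 64M
```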
## 8. Apply New Configuration File
The new performance settings can be applied incrementally to test stability and performance or the configuration file can be applied to update all changes at once. The server should now be optimized and ready for queries!
## Streamline the MySQL Tuning
Without experience, fine-tuning MySQL parameters requires a significant investment in learning the surrounding information. DBAs and DevOps engineers need to understand the process; otherwise it's an inefficient use of time. Thankfully, MySQL database server tuning doesn't need to be done very often, but expect to revisit this process when a new server is set up, performance issues arise, system resources change, or the application changes.
*Originally posted at* [*https://releem.com*](https://releem.com/blog/tpost/op7am2y9b1-mysql-configuration-performance-tuning)
https://redd.it/l23l8n
@r_devops
Using any of these monitoring tools will provide critical data on how MySQL uses server resources and history values of MySQL status.
## 4. Analyze MySQL Status
Make sure the server has been running for at least 24 hours. Analyze MySQL status to detect any variables that need configuration. Querying with "SHOW GLOBAL STATUS;" will deliver various metrics.
## 5. Use Scripts to get Configuration Recommendations
Using an automated script will save the incredible hassle and time of searching through the metrics. A script will check the metrics from the server and compare them against expected reasonable values. Anything operating outside the expected values will be flagged for review.
* [MySQLTuner](https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl) is a powerful configuration tuner that is open-source and available on [Github](https://github.com/major/MySQLTuner-perl).
* [MySQL Tuning-Primer.sh](https://raw.githubusercontent.com/BMDan/tuning-primer.sh/master/tuning-primer.sh) is another strong open-source option on [Github](https://github.com/BMDan/tuning-primer.sh). This tool is older and should be used only with MySQL versions 5.5 - 5.7.
## 6. Calculate Values of MySQL Performance Settings
Now armed with the monitoring data, MySQL status, and configuration recommendations from steps 3 through 5, the specific changes and values can finally be determined to minimize bottlenecks and inefficiencies. The MySQL documentation and configuration best practices that were studied in steps 1 and 2 will show their value during this process. Refer to the documentation, resources, and internet as needed.
## 7. Create New Configuration File
When MySQL is first installed, a default configuration file is placed in the base directory. A new configuration file can then be created to reflect the parameter changes determined in the previous steps.
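As a sketch, such a file (e.g. a separate include like `/etc/mysql/conf.d/tuning.cnf`) might override a handful of parameters. The option names below are real MySQL settings, but the values are placeholders, not recommendations:

```
[mysqld]
innodb_buffer_pool_size = 4G    # usually the largest single lever; size to available RAM
innodb_log_file_size    = 512M
max_connections         = 300
tmp_table_size          = 64M
max_heap_table_size     = 64M   # typically kept equal to tmp_table_size
```

Keeping tuning overrides in a separate include file makes it easy to diff against the shipped defaults.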
## 8. Apply New Configuration File
The new performance settings can be applied incrementally, to test stability and performance, or the configuration file can be applied to make all changes at once. The server should now be optimized and ready for queries!
## Streamline the MySQL Tuning
Without experience, fine-tuning MySQL parameters requires a significant investment in learning the surrounding material. DBAs and DevOps engineers need to understand the process; otherwise, it's an inefficient use of time. Thankfully, MySQL database server tuning doesn't need to be done very often. But expect to revisit this process when a new server is set up, performance issues arise, system resources change, or an application changes.
*Originally posted at* [*https://releem.com*](https://releem.com/blog/tpost/op7am2y9b1-mysql-configuration-performance-tuning)
https://redd.it/l23l8n
@r_devops
Request: Guides for ‘RTL’ in Cloud
Looking for any good guides, books, videos on the best way to perform an RTL in a cloud.
Anything to do with migrating from RTLs to cloud would also be appreciated.
https://redd.it/l23amy
@r_devops
Supercharge your project with each commit using 3 awesome tools.
Read the article on dev.to
# The Trifecta
### Conventional Commits, SemVer, and Standard Version
Do you know those projects that have a cool version number like 3.23.12? And they have a nice changelog that shows the changes in each version? If you're like me, you love the idea of being that organized, but you don't love the idea of doing more busywork when you want to focus on coding. Well, you're in luck, because with a few tools we can automate the whole thing!
There are three tools that we can use to accomplish this:
- Conventional Commits (A convention for how to write commit messages)
- SemVer (Semantic Versioning - the nice X.X.X version number system)
- Standard Version (A node module that automatically updates a changelog with version numbers and commit info)
The best part of all of this is that you can slowly transition into using these tools. You don't have to make any big changes or commit to making a permanent change to your workflow or repository.
## Step One: Get Familiar with Conventional Commits
Conventional Commits is a standardized way to write your commit messages - and the driver of this whole operation. Basically, when writing your commit messages, all you have to do is follow a standard format. It may seem complicated at first, but with a little time and the Commitizen VS Code extension, it will quickly feel natural. First, it would be best to familiarize yourself with the convention; you can read up on it here.
Using Commitizen in VS Code will do the heavy lifting for you. Instead of committing via the command line or VS Code's built-in Source Control, you can open the command palette (ctrl-shift-p) and select Commitizen. It will prompt you for all of the necessary information and format your commit message according to the standard. Then you can push your code up as you normally would!
Conventional Commits alone will make an awesome addition to your development arsenal - if a whole team transitions to Conventional Commits, you will notice all changes to the repo are far more readable.
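For reference, a Conventional Commits message follows this general shape:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

An illustrative (hypothetical) example:

```
fix(api): handle empty response from the user service
```

Common types include `feat` and `fix`; a `!` after the type/scope, or a `BREAKING CHANGE:` footer, marks a breaking change.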
## Step Two: Understand Semantic Versioning
Semantic Versioning is far simpler to understand. You can read up on it at the site, but it's simple enough to quickly explain here.
The version number is split into three sections with .'s like so: 2.5.11 - and each number has its own distinct meaning. I'll explain from the bottom up.
- 1.1.Z Z = Patch version. This starts at zero and increases with every backward-compatible bugfix or patch that is implemented.
- 1.Y.1 Y = Minor version. This starts at zero and increases with every backward-compatible new feature or functionality that is implemented. Every time this goes up, reset the patch version to 0.
- X.1.1 X = Major version. This starts at zero and increases with every major change or addition of functionality that is not backward-compatible.
- Note: If any of the numbers reaches 9, the next increase is 10. They can even reach triple digits, e.g. 1.2.99 => 1.2.100.
In your projects, you can feel free to apply these versions in whatever way makes the most sense to the project. For instance, in some projects, I might increase the major version at each milestone instead of waiting until a breaking change.
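The bump rules above can be sketched as a tiny shell helper (a toy for illustration only - none of the tools here require it):

```shell
# Compute the next semantic version for a given bump type.
bump() {
  # Split "major.minor.patch" using parameter expansion.
  major=${1%%.*}; rest=${1#*.}
  minor=${rest%%.*}; patch=${rest#*.}
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;          # breaking change: reset minor and patch
    minor) echo "${major}.$((minor + 1)).0" ;;   # new feature: reset patch
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

bump 1.4.2 patch   # 1.4.3
bump 1.4.3 minor   # 1.5.0
bump 1.5.0 major   # 2.0.0
```

Note how a minor bump resets the patch number and a major bump resets both, matching the rules in the list above.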
## Step Three: Use Standard Version to Automate
Finally, you can use the node package Standard Version to automate the process of creating and updating a changelog. It uses your conventional commits to understand the changes you've made and updates your version number accordingly. The changelog that it generates also succinctly lists the changes in each version.
To use it, you can install it to your project as a developer dependency with this command:

```
npm i --save-dev standard-version
```

To run it for the first time:

```
npm run release -- --first-release
```

To run it for your normal releases:

```
npm run release
```

If you want to release a major or minor version regardless of what your commits indicate, you can run:

```
npm run release -- --release-as CHANGE
```

replacing CHANGE with either major or minor. You can also release to any version you define with `npm run release -- --release-as VERSION`, replacing VERSION with any number you like, such as 1.2.0. Running the release command will generate a changelog, and you can push your code to your repo.
## Other notes
If you are working in a repository that uses multiple branches and merges, you can explore squash merges so the versioning and changelog only apply to the branch you want them to.
All three of these tools are powerful in their own right, but once you are comfortable using them you can explore some of their more advanced features and make your project organization even better.
https://redd.it/l237s2
@r_devops
How CI/CD tools are built?
Hello everyone,
at the beginning I'd like to apologize if you find this question obvious; I've tried to search for satisfying answers but couldn't find anything that cleared things up. Or maybe I wasn't sure what I should search for.
But back to the problem.
There are a bunch of solutions that work similarly - Travis CI, Circle CI, and many more. Even though there are a lot of tutorials describing how to build CI/CD pipelines properly, I couldn't find anything about HOW those tools themselves are built. From what I understand, there is some virtual machine under the hood that is started when a job runs. But do they store whole VM images and "clone" them to start one for each of their customers? Or are VMs just nomenclature, and they're actually using simple Docker images?
Could you suggest some keywords to look for to learn more about this topic? Or even better, is there any publicly available example for learning this by building such a tool?
https://redd.it/l1t4sf
@r_devops
Capacity planning for frontend sites + backend apis
Hi, I'm deploying a frontend + API server with a high user rate. Currently, I don't know how to make a capacity plan for either of them. The API server will be on DigitalOcean backed by VMs, but I'm looking for a good service for the frontend and don't know how to estimate usage and costs. Could you give any help?
https://redd.it/l22th7
@r_devops
Jenkins build pipeline
I am trying a basic Jenkins build pipeline for a sample Java Spring Boot application but am receiving the errors below:

```
Waiting for Jenkins to finish collecting data
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project demo: Fatal error compiling: invalid target release: 11 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] https://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[JENKINS] Archiving /var/lib/jenkins/workspace/springboot-job/pom.xml to com.example/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.pom
channel stopped
Finished: FAILURE
```

Running Java 11 on the Jenkins server. Maven details on the Jenkins server:

```
mvn -version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /opt/maven/apache-maven-3.6.3
Java version: 11.0.9.1, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-11-openjdk-11.0.9.11-3.el8_3.x86_64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.18.0-193.el8.x86_64", arch: "amd64", family: "unix"
```
Here is my pom.xml:

```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0"
         xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.4.2</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <!--<dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>-->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```

Any help on how to resolve this issue?
https://redd.it/l2giwk
@r_devops
My Server was compromised
Hi there.
Recently I got an abuse email from my VPS provider saying my host was scanning the network.
That was true: I found processes loading the CPU to 100% (a miner, it seems) and a lot of connections to the sub-network on port 5432, the Postgres default port.
The interesting thing is that I had a firewall set up denying everything except 80, 443, and a custom SSH port, but allowing all outgoing connections. Only nginx (the latest version from the Debian repo) was accessible to the outside world.
I can assume somebody used a known vulnerability in nginx to get inside my VPS. Or could the firewall have holes? What do you think?
Another interesting thing: the processes I saw with htop or ps had no launch command. They had a name but no path to an executable.
I'd be interested to hear what you think about this situation, and what the preferred way of protecting against it is so it doesn't happen again.
https://redd.it/l1xyky
@r_devops
What do you use to inject HTTP status code errors?
I have a Vue frontend that communicates with a backend server. Historically, when there have been errors in the frontend, components have gone blank with no sign that anything is wrong.
I think I've figured out all the places where errors can happen, but is there a tool that can inject HTTP errors between the frontend and the backend so I can check this? In the past, I've done it by modifying the backend to create an error, but I'd like a better tool.
Here's the research I've done so far:
1. It looks like Burp Suite is capable of modifying the HTTP response code. I haven't used it before.
2. nginx can do this, but it looks like you need to set up an error page for each status code you're changing.
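If you go the nginx route, one hedged sketch (the path and status code here are placeholders) is the `return` directive, which forces a status without requiring a per-code error page:

```
# Inside a server block: temporarily force a 503 for API requests during testing
location /api/ {
    return 503;
}
```

Without a matching `error_page`, nginx serves its built-in body for that status, which is usually enough for exercising frontend error handling.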
https://redd.it/l2hvcl
@r_devops