PHP CI/CD flow
Hi guys, first post here.
So, I have previously set up a Java CI/CD pipeline with this flow:
git -> Bitbucket -> Maven -> SonarQube -> Artifactory -> Ansible -> Docker Hub -> Kubernetes
For a PHP pipeline, would it be much different? I'm not a PHP programmer, so I'm not sure how to design the flow.
Also, is PHP built with a tool like Maven, or does one just copy the PHP files into an Apache document root, thereby eliminating the Maven/Gradle-style build step?
Thank you in advance
Brian
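PHP has no compile step, but most projects still have a build stage: Composer resolves dependencies, tests run, and the result is typically baked into an image for the rest of the pipeline. A minimal sketch of that idea as a multi-stage Dockerfile; image tags and paths here are illustrative assumptions, not from the post:

```dockerfile
# Build stage: the official Composer image resolves dependencies.
FROM composer:2 AS build
WORKDIR /app
COPY composer.json composer.lock ./
# --no-dev keeps test-only packages out of the production image.
RUN composer install --no-dev --optimize-autoloader
COPY . .

# Runtime stage: copy the app into Apache's document root.
FROM php:8.2-apache
COPY --from=build /app /var/www/html
```

So the Java flow mostly carries over; Composer simply replaces Maven as the dependency/build tool, and the resulting image flows through SonarQube, a registry, and Kubernetes the same way.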
https://redd.it/ky0cf9
@r_devops
Open source cmdb
I'm looking for a lightweight CMDB solution to store very basic data about servers, VMs, and k8s clusters: MAC, IP, custom tags, and so on. I need an API to access the data and the ability to add custom fields. Any recommendations?
https://redd.it/ky08ac
@r_devops
What’s the best way to create an ansible user account on a fleet of Linux servers?
If you’re getting started using Ansible to manage 20-30 Linux servers, what’s the best way to create and manage the ansible account on each server for SSH access?
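A common approach is to bootstrap the account once with whatever access you already have (root password or an existing key) and let Ansible manage it from then on. A minimal sketch of such a bootstrap play; the key path and the passwordless-sudo policy are assumptions for illustration, not recommendations from the thread:

```yaml
# bootstrap-ansible-user.yml — run once, e.g.:
#   ansible-playbook -i inventory bootstrap-ansible-user.yml -u root
- hosts: all
  become: true
  tasks:
    - name: Create the ansible service account
      ansible.builtin.user:
        name: ansible
        shell: /bin/bash

    - name: Authorize the control node's public key (example path)
      ansible.posix.authorized_key:
        user: ansible
        key: "{{ lookup('file', '~/.ssh/ansible_ed25519.pub') }}"

    - name: Allow passwordless sudo for the ansible account
      ansible.builtin.copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        mode: "0440"
        validate: /usr/sbin/visudo -cf %s
```

After this, set `remote_user: ansible` (and `become: true` where needed) in ansible.cfg or the inventory, and all subsequent runs use the managed account.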
https://redd.it/kxmr4a
@r_devops
Looking for a conference talk video on monitoring/graphing
I saw a video in early 2020 which talked about how to handle monitoring and graphing of metrics. One example was an API or web service that had an update pushed: the mean response time went down, but a histogram showed that the majority of requests were actually slower, with some outliers being far quicker.
We were talking about histograms in the office and I mentioned that video, saying I'd find it and share the link, but I can't for the life of me find it now. Hoping someone knows what I'm talking about, has it bookmarked, and can link me to it!
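The effect described is easy to reproduce with made-up numbers (these latencies are hypothetical, not from the talk): a few large outliers inflate the mean, so removing them can lower the mean even while the typical request gets slower.

```python
import statistics

# Hypothetical latencies in ms: before the update, most requests are
# fast but a few huge outliers drag the mean up.
before = [100] * 9 + [2000]
# After the update the outliers are gone, but the typical request got slower.
after = [150] * 9 + [200]

assert statistics.mean(after) < statistics.mean(before)      # mean: 290 -> 155
assert statistics.median(after) > statistics.median(before)  # median: 100 -> 150
```

This is exactly why histograms (or percentiles) beat a single mean on a dashboard: the mean dropped from 290 ms to 155 ms while the median rose from 100 ms to 150 ms.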
https://redd.it/kydepf
@r_devops
Should library modules use terraform.workspace directly?
First, let me get nomenclature out of the way:
A terraform "root" module is a directory containing .tf files where one can run various terraform action commands (such as plan, apply and destroy) and expect them to do something.
A terraform "library" module is a similar directory, but not all inputs have been provided, so terraform action commands aren't expected to do anything other than interactively prompt for missing values.
Root modules often reference library modules by using the "module" keyword.
I'm sure that HashiCorp offers more precise definitions, but I think this is close enough.
Now the question: when developing a library module, which implies use by one or more root modules, is it bad practice to refer to ${terraform.workspace} directly? The alternative is to define a variable for each usage where your library namespaces resources, and leave it up to the root module developer to optionally include the workspace in the values passed to the library module.
View Poll
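The variable-based alternative can be sketched like this (module, variable, and resource names are hypothetical):

```hcl
# modules/storage/variables.tf — the library module takes a namespace
# prefix instead of reading terraform.workspace itself.
variable "namespace" {
  description = "Prefix used to namespace resources (e.g. the workspace name)"
  type        = string
}

# modules/storage/main.tf
resource "aws_s3_bucket" "artifacts" {
  bucket = "${var.namespace}-artifacts"
}

# Root module — the decision to tie naming to the workspace stays here.
module "storage" {
  source    = "./modules/storage"
  namespace = terraform.workspace
}
```

The trade-off: referencing terraform.workspace inside the library is less typing, but the variable approach keeps the library workspace-agnostic, which makes it easier to test and to reuse in root modules that don't use workspaces at all.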
https://redd.it/kxlb2u
@r_devops
What is the best CI if I need cloud gpu runners and on prem self-hosted runners?
I'm looking for advice on a CI system that could support my use cases: 1) I have a number of build jobs that need to run on NVIDIA GPU instances to test a deep-learning-powered app. 2) I need to run some tests on specialized hardware that is on prem (arm64 Linux). 3) I also have regular Linux builds to support, but those can run on typical CPU instances.
So far we've been using Jenkins, but it has been unreliable of late and I'm looking to replace it with something more modern, secure, and easy to maintain.
From my search, none of the managed offerings seem to match my needs without some big caveat: self-hosted runners push you into a crazy pricing plan (CircleCI), or GPU nodes are non-standard and require workarounds (Azure Pipelines). GitLab seems like it could work, but the stack is quite involved and I'm not super clear on whether I can have GPU nodes handled with autoscaling alongside on-prem runners.
We use GitHub for VCS.
Has anyone had success using GitHub self-hosted runners for on-prem builds? I feel this could solve things for me if paired with a CI solution that offers GPU nodes as a simple choice in the job-definition YAML. I'm considering GitHub self-hosted runners + AWS CodeBuild (since it lets me pick a GPU or beefy CPU machine at fair prices and without any workarounds), despite what I feel is a much worse user experience on the visibility front.
Thoughts?
https://redd.it/kyopgv
@r_devops
Reka: a cloud infra tool reaper
Hello guys,
Check out this tool I'm still working on: a tool to help you manage infrastructure resources on your cloud provider. You can easily stop and resume resources, clean up unused resources, and destroy resources based on information about them. Orphaned resources from, say, test environments are a common issue, and this tool helps curb that. It could also prove useful for cost management: stop your instances and resume them whenever you want, or set up a cron job with your config to run it. It's still under active development; I'd appreciate any help.
https://github.com/mensaah/reka
https://redd.it/kyo8c4
@r_devops
Looking for advice on GitOps with multiple repos
a) I started working with a team of devs building a NodeJS application made up of 6 separate components (API server, queue worker, several SPA frontends).
The first idea that came to mind was creating a monorepo, possibly with submodules if the devs don't object to merging everything together. But then all components and their Docker images would be rebuilt on every push to that monorepo, right?
Is there a recommended or alternative setup that makes this painless for both sides?
b) Additionally, my workflow would look something like this:
devs work on local feature/* and bugfix/* branches
once they’re ready to deploy to staging, they merge their changes to the dev branch
a GitHub Actions build starts and, if successful, checks the code into the staging branch
ArgoCD kicks in and syncs changes to the staging Kubernetes cluster
they go to the staging website URL and test that everything works and looks okay
once they’re happy, they create a PR to merge staging into the main branch
ArgoCD kicks in again and syncs changes to the production Kubernetes cluster
Does this sound reasonable, and are there better/simpler workflows?
c) Since this app by default runs with NODE_ENV=development in dev/staging, how would I change that variable to NODE_ENV=production once the staging branch is merged into the main branch?
I'm asking because after staging is built and deployed to the staging server, I'd like to avoid another CI build and instead just promote that build to the production cluster with NODE_ENV=production added to the .env files in each of the 6 components.
Thanks a ton!
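One way to promote the same image while changing only the environment variable is a kustomize overlay per cluster, which ArgoCD supports natively. A sketch, assuming NODE_ENV is the first env entry in each Deployment's container spec; paths and names are illustrative:

```yaml
# overlays/production/kustomization.yaml — reuses the base manifests
# (including the already-built image tag) and patches only the env var.
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/env/0/value
        value: production
```

The production ArgoCD Application then points at the overlay directory instead of the base, so merging staging to main triggers a sync without another CI build.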
https://redd.it/kypoww
@r_devops
Reliable CI for PR Verification ( c++ )
Hi folks,
I wanted some suggestions on a reliable, fast, event-based PR verification CI/CD tool. We are currently using Jenkins, but it has too many points of failure and is getting slow for various reasons, so we are advocating migrating to another service, preferably one that works well for CMake projects. We are looking into Buildbot. The biggest challenge is having as reliable a system as possible. The git repo is hosted at Bitbucket; it has traffic and SSH slowness issues, but we will deal with that eventually.
The first step is to increase dev productivity, so we'd migrate PR merge and verification first, and then maybe add other features like release, logging, etc.
https://redd.it/kyo9ub
@r_devops
Collaborative notebooks to train, track, deploy, and monitor machine learning models
Hi,
We built iko.ai as an internal project to solve the problems we faced in machine learning projects for our clients these past few years.
- No-setup Jupyter environments with the most popular libraries pre-installed
- Real-time collaboration on notebooks
- Multiple versions of your notebooks
- Long-running notebook scheduling with output that survives closed tabs and network disruptions
- Automatic experiment tracking: automatically detects your models, parameters, and metrics and saves them without you remembering to do so or polluting your notebook with tracking code
- Easily deploy your model and get a "REST endpoint" so data scientists don't tap on anyone's shoulder to deploy their model, and developers don't need to worry about ML dependencies to use the models
- Build a Docker image for your model and push it to a registry to use it wherever you want
- Monitor your models' performance on a live dashboard
- Publish notebooks as AppBooks: automatically parametrize a notebook to enable clients to interact with it without exporting PDFs or having to build an application or mutate the notebook. This is very useful when you want to expose some parameters that are very domain-specific to a domain expert.
Much more on our roadmap. We're only focusing on actual problems we have faced serving our clients, and problems we are facing now. We'd love to hear your thoughts and problems you have faced.
https://redd.it/kypr21
@r_devops
DevOps roadmap
Has anyone managed to compile the learning resources they used throughout their journey to becoming a DevOps engineer?
I'd be interested to know the sources you'd use for the tools/tech mentioned on roadmap.sh/DevOps.
https://redd.it/kythyj
@r_devops
Terrible 1.2.0 has been released
Terrible is an Ansible playbook that allows you to initialize and then deploy an entire infrastructure, with the aid of Terraform, on a QEMU/KVM environment.
https://github.com/89luca89/terrible
https://redd.it/kyte9x
@r_devops
Remote work culture
Pre-COVID, we were not a remote-first company, but, like a lot of companies, we have transitioned to almost fully remote for the foreseeable future.
In my company, and all the other companies I have worked for (none of which were remote-first), it was much more common to get up and go talk to someone face to face than to chat/message them. I would say chat was used much more sparingly and for shorter interactions. Since going remote, chat and messaging have been used more, but it doesn't feel like they have completely replaced face-to-face interactions for us.
So, people that work in companies that were remote first before COVID, or in places that have a good remote culture, what is your Product team communication like? Is it more chat or Slack based? Through PRs? All-day Zoom meetings?
https://redd.it/kylwm2
@r_devops
Store a certificate in AWS without cert chain.
I'm working with a 3rd party API that I need to access with a PFX cert. I'm trying to build a solution with Amplify / Lambda that can call this API with the cert.
How should I store the cert in AWS? I can't import it into ACM, as ACM doesn't let you store non-self-signed certs without a certificate chain.
Can I just store the cert in an S3 bucket maybe?
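An S3 bucket works, but AWS Secrets Manager is a common fit here: it accepts small binary blobs (up to 64 KB), encrypts them with KMS, and the Lambda can fetch the cert at runtime via its IAM role rather than a bucket policy. A sketch; the secret name is illustrative:

```shell
# Store the PFX as a binary secret.
aws secretsmanager create-secret \
  --name third-party-api/client-cert \
  --secret-binary fileb://client.pfx

# In the Lambda (or locally): retrieve and decode it.
aws secretsmanager get-secret-value \
  --secret-id third-party-api/client-cert \
  --query SecretBinary --output text | base64 --decode > /tmp/client.pfx
```

The PFX passphrase can be stored as a separate string secret so neither ever lands in code or environment variables.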
https://redd.it/kyrjwy
@r_devops
Open source integration in Mule and DevOps tools for integration
Mule ESB is one of the most widely used open-source integration platforms, with a growing community of more than 175,000 developers. It helps more than 1,600 companies in over 60 countries build application networks and increase their clock speed.
Mule ESB is an open-source, enterprise-grade solution. It offers the enterprise readiness that Global 500 businesses operating mission-critical environments need. The integration framework enables users to develop custom integrations and provides a wide range of connectors to couple apps on-site or in the cloud, alongside the benefits you have come to expect from open-source technology. On top of the features of an open-source platform, Mule ESB offers components for enterprise readiness and critical production deployments, such as security, high availability, resilience, performance management, and award-winning support.
Mature Developer Tooling:
Create simple but efficient data mappings and transformations through a visual data-mapping interface with advanced features for developer usability and powerful runtime capabilities. Use Anypoint Studio, DataWeave, DevKit, and many other tools to help with development.
Mule Enterprise Management Console (MMC):
Visibility and control over the ESB infrastructure and its associated services are essential. Developers can monitor apps, metrics, and SLAs and control day-to-day operations from a single web-based console dashboard, which helps them troubleshoot problems, mitigate risk, and reduce operating costs during development.
Connectors and Transports:
Besides community connectors, users of Mule ESB Enterprise have access to a range of easy-to-use transports for instant connectivity to many more applications and systems, helping you bridge the gap between on-site and cloud applications. MuleSoft's wide range of connectivity choices includes transports and countless SaaS connectors.
Enterprise Security:
Open source does not mean compromising on protection. Mule secures your integration environment end to end with enterprise-grade security: blocking unauthorized access to your systems, reducing data exposure, and preventing attacks is straightforward.
High Efficiency:
With its edge-caching technology, Mule ESB can handle far more transaction volume than other ESBs, beating the competition in performance hands down. In tests, Mule can usually process twice the transaction volume of other ESBs, and up to 30 times as much in certain cases. The ability to handle more transactions on the same amount of hardware, or less, means lower running costs.
High Availability and Clustering:
Mule ESB enables mission-critical applications with guaranteed delivery through clustering. Clustering means that if an application node fails, transactions can transition to a failover node.
Highly Scalable:
Mule ESB scales to support the largest environments. Mule grows with the needs of your company and can be tailored to ever-changing business requirements, eliminating the need for a major overhaul down the road. Businesses can scale out indefinitely, horizontally or vertically.
DevOps tools integration
Establishing a DevOps practice is becoming the standard. But which DevOps tools will help drive this new form of collaboration? One survey of 1,000 SQL Server professionals found that 47 percent of respondents work in an organization that already has some DevOps capability, and another 33 percent work in an organization that expects to launch a DevOps practice within the next two years. This means that 1 in 3 organizations
could have a DevOps practice in place by 2019.
While DevOps is becoming the norm, some IT leaders struggle to establish a DevOps practice, because it is difficult to reorganize people and reinvent the relationship between development and operations teams. Technology plays a vital role in that reorganization, and it can be hard for IT leaders to understand which DevOps tools and technologies they can use to enable this collaborative environment.
Innumerable DevOps tools are available.
Below, we list some tools within the DevOps ecosystem for building, testing, and deploying with ease. The list is not exhaustive, and the tools are in no specific order.
What tools are out there for DevOps?
DevOps Tools: Building
· Gradle:
An open-source build automation tool that can be used as a DevOps tool. Gradle helps users create, test, package, and distribute apps on any platform. The tool has a rich collection of open-source plugins and APIs.
· Maven:
A common open-source tool for both building and testing. As a DevOps tool, it can produce unit test reports, including coverage, among other features.
· Visual Studio:
A Microsoft platform that enables users to compile and build projects, helping them create apps in a customized and automated way.
Other tools include Bitbucket, Docker, Git, and Perforce.
DevOps Tools: Testing
· MUnit:
A testing framework for Mule applications that lets you automate integration testing. It offers a complete suite of capabilities for integration and unit tests.
· SoapUI:
An open-source API testing tool for SOAP and REST. It provides functional testing of REST APIs, WSDL coverage, and SOAP web services.
· JUnit:
A widely used open-source unit testing framework for Java. It makes writing and automating tests simpler, and it integrates with automation servers such as Jenkins.
Other tools include Arma, Perfecto, Parasoft, and Zuul.
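The testing frameworks above all follow the same xUnit pattern: a test class whose methods assert expected behavior of the code under test. As an illustration only (shown with Python's built-in unittest rather than JUnit's Java, and with a hypothetical apply_discount function that is not from this article):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_basic_discount(self):
        # 25% off 100.0 should be 75.0
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <module name>
```

CI servers such as Jenkins can then collect and publish the results produced by such test runs.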
DevOps Tools: Deploying
· Artifactory:
A binary repository manager. You can use it alongside Maven, Gradle, and other build tools.
· Puppet:
An open-source platform that provides users with a framework for DevOps activities, including automated testing, continuous integration, and continuous delivery.
· Ansible:
A simple DevOps tool for IT automation that helps users automate solutions by making it easier to deploy systems and apps. It is similar to Chef and Puppet.
Other tools include Chef, HP Codar, IBM UrbanCode Deploy, and Jenkins.
Integrating DevOps Tools with API-led Connectivity
As the list above shows, there are many resources companies can use to build a DevOps practice. Luckily, many of these resources are open source, which lets teams jump straight into developing a DevOps environment. However, the proliferation of tools creates a serious challenge: how do users integrate the many DevOps tools in the process, expose their assets, and ensure managed control?
One approach is API-led connectivity, a methodical integration approach that links assets through modern, managed APIs and exposes them. As a result, each asset or API can be incorporated plug-and-play, discovered through self-service, and governed for compliance. Through API-led connectivity, organizations can ensure they do not duplicate effort, build applications in silos, or expose assets across the enterprise ineffectively.
API-led approach
By implementing an API-led approach to integration, one of our clients, a large tech corporation, strengthened its DevOps practice. The customer already had a DevOps practice in place, but rapid development and the proliferation of SaaS apps and DevOps tools had created an unscalable, fragile IT infrastructure linked by point-to-point integration. There were many
problems with this infrastructure, including a lack of connectivity between DevOps/continuous integration and back-office integration. To overcome this problem and establish an efficient DevOps practice, the client had to move beyond point-to-point to an integration approach whose dependencies could grow within the DevOps environment.
With API-led connectivity, this customer was able to rebuild its back-office infrastructure and adopt a fresh approach to integrating its data, systems, apps, and DevOps tools while keeping reusability, scalability, and security in mind. The client used a canonical model to extend DevOps and continuous integration to its back-office integration, and set up a governance mechanism that provides granular access to specific resources without sacrificing protection.
Anypoint Platform
To construct this new IT architecture and governance model, the client turned to MuleSoft's Anypoint Platform™ to build REST APIs that abstract away the complexity of the underlying systems. Adopting an API-led approach to integration sped up the customer's development phase and increased their developers' productivity by a staggering 300 percent.
Conclusion
Organizations have many DevOps resources at their disposal. The key is not only to choose the tool that best fits one's use case, but also to ensure that, as more tools are used in the DevOps environment, all assets can be incorporated, managed effectively, exposed, and kept agile. MuleSoft helps organizations achieve such an approach through API-led connectivity. You can learn more about this and other integration topics through MuleSoft online training.
https://redd.it/kyi4ku
@r_devops
Do you find it hard to find the time to create and update chatops bots? Would you be interested in a chatbot service that integrates with CI/CD like bamboo/jenkins/Codepipeline as well as monitoring services like Cloudwatch, Datadog, NewRelic, etc?
Hey all,
I just joined an SRE team after a while of being in software engineering/devops.
This might be more of a problem at smaller organizations, but a common theme I've noticed is that chatops bots are really helpful but it's hard to find time to create/update them. However, when they're done well they tend to be very useful. On the other hand, sometimes they're done just well enough to be useful but there isn't enough time to fix issues or improve them by adding features. Sometimes useful ones just go away because no one had time to maintain them!
Would anyone here be interested in a service that provides ready to go chatbots that plug into common ci/cd services, cloud providers, and monitoring services?
Here are a few examples:
* You have a new Jenkins/Bamboo/Codepipeline ci/cd pipeline and you want to approve/deny deployments via slack or Microsoft teams. You log in, put in your pipeline details and you have a chatbot that will give pipeline updates and allow you to approve deployments.
* You build a new service that provides internal metrics to teams. You want those metrics to post every day to a slack channel. You pick a graph template, map some values, and you have a chatbot.
* You have an internal API that provides customer info based on IDs to help identify customer impact when debugging. You log in and provide the webhook, the values, and pick a template and you have a slack bot for your API.
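The webhook cases above have small mechanics at their core: Slack incoming webhooks accept an HTTP POST with a JSON body containing a "text" field. A minimal sketch, using only the standard library (the webhook URL in the usage comment is a placeholder, not a real endpoint):

```python
import json
from urllib import request

def build_payload(text):
    """Slack incoming webhooks accept a JSON body with a 'text' field."""
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url, text):
    """POST a message to an incoming webhook; Slack answers 'ok' on success."""
    req = request.Request(
        webhook_url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Usage (placeholder URL):
# post_to_slack("https://hooks.slack.com/services/T000/B000/XXX",
#               "Pipeline is awaiting approval")
```

A ready-made service would wrap exactly this kind of glue, plus the template mapping and credential handling around it.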
https://redd.it/kycu9g
@r_devops
Help me in choosing
What should I do, DevOps or Azure Stack? Help me in choosing?
https://redd.it/kyeuq2
@r_devops
end to end infrastructure testing frameworks?
What are some good testing frameworks for end to end infrastructure testing?
For example, I have a cloudwatch event rule that fires whenever a certain ECS task changes state and then triggers a lambda function to update a DNS record. Looking for a way to guard against regressions. So for example, do a terraform apply and then have an automated verification that nothing broke.
I feel like this can be done with python or bash but is there some kind of framework built for that kind of thing.
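The Python route can indeed be small: a script of assertions run as a pipeline step after `terraform apply`. A minimal sketch for the DNS example above (the hostname and IP are placeholders, and the resolver is injectable so the check itself can be unit-tested):

```python
import socket

def resolve_ipv4(hostname, resolver=socket.gethostbyname):
    """Resolve a hostname to an IPv4 address; returns None if it doesn't resolve."""
    try:
        return resolver(hostname)
    except socket.gaierror:
        return None

def check_dns_updated(hostname, expected_ip, resolver=socket.gethostbyname):
    """Post-apply assertion: the record points where the Lambda should have set it."""
    return resolve_ipv4(hostname, resolver) == expected_ip

# After `terraform apply` (placeholder names):
# assert check_dns_updated("service.example.internal", "10.0.12.34")
```

Such checks can be collected under pytest and run by the same CI job that applies the Terraform, failing the pipeline on regression.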
https://redd.it/kyabuw
@r_devops
Fylamynt - Cloud Workflow Automation Platform
Hi Everyone!
My name is Pradeep Padala and I am the Co-Founder/CEO of Fylamynt. I would like to introduce our automation platform that can help save significant time and money for cloud operations. We launched our company in December.
Our goal is to not replace existing automation tools like Terraform and Ansible but to help in connecting services like DataDog, Splunk, Slack, Cloud Services (EC2, EKS, etc.) to code (Terraform, Python, Ansible, etc.) Fylamynt is a connector similar to Zapier, but for cloud automation.
I would love to get your feedback on the product and use-cases. Any comments/feedback are appreciated.
Cheers!
Pradeep
https://redd.it/ky7ag9
@r_devops