DevOps roadmap
Has anyone managed to compile the learning resources they used throughout their journey to becoming a DevOps engineer?
It would be interesting to know which sources you'd use for the tools/tech mentioned on roadmap.sh/DevOps
https://redd.it/kythyj
@r_devops
Terrible 1.2.0 has been released
Terrible is an Ansible playbook that lets you initialize and then deploy an entire infrastructure, with the aid of Terraform, on a QEMU/KVM environment.
https://github.com/89luca89/terrible
https://redd.it/kyte9x
@r_devops
Remote work culture
Pre-COVID we were not a remote-first company, but like a lot of companies we have transitioned to almost fully remote for the foreseeable future.
In my company, and all the other companies I have worked for (none of which were remote-first), it was much more common to get up and go talk to someone face to face than to chat/message them. I would say chat was used much more sparingly and for shorter interactions. Since going remote, chat and messaging have been used more, but it doesn't feel like they have completely replaced face-to-face interactions for us.
So, people that work in companies that were remote first before COVID, or in places that have a good remote culture, what is your Product team communication like? Is it more chat or Slack based? Through PRs? All-day Zoom meetings?
https://redd.it/kylwm2
@r_devops
Store a certificate in AWS without cert chain.
I'm working with a 3rd party API that I need to access with a PFX cert. I'm trying to build a solution with Amplify / Lambda that can call this API with the cert.
How should I store the cert in AWS? I can't import it into ACM, since ACM doesn't allow storing non-self-signed certs without a certificate chain.
Can I just store the cert in an S3 bucket maybe?
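An encrypted S3 object does work, but Secrets Manager gives you IAM-scoped retrieval and rotation for free. A minimal sketch of that pattern (the secret name is hypothetical; the PFX is stored as a *binary* secret):

```python
def fetch_pfx(secret_id: str = "third-party-api/client-cert") -> bytes:
    # Hypothetical secret name. PFX files are binary, so store the file as a
    # binary secret and read it back from the SecretBinary field.
    import boto3  # bundled with the AWS Lambda Python runtime
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretBinary"]

def write_cert_to_tmp(pfx_bytes: bytes, path: str = "/tmp/client.pfx") -> str:
    # /tmp is the only writable filesystem inside a Lambda container
    with open(path, "wb") as f:
        f.write(pfx_bytes)
    return path

def handler(event, context):
    cert_path = write_cert_to_tmp(fetch_pfx())
    # hand cert_path (plus its passphrase, ideally a second secret)
    # to whatever HTTP client calls the third-party API
    return {"cert": cert_path}
```

Grant the Lambda's role `secretsmanager:GetSecretValue` on that one secret only; the same shape works with `s3:GetObject` if you go the bucket route.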
https://redd.it/kyrjwy
@r_devops
Open source integration in Mule and DevOps tools for integration
Mule ESB is one of the most widely used open-source integration platforms in the world, with a growing community of more than 175,000 developers. It helps more than 1,600 companies in over 60 countries create application networks and increase their clock speed.
Mule ESB is an enterprise-grade open-source solution that offers the readiness Global 500 businesses need to operate mission-critical environments. The integration framework enables users to develop custom integrations and provides a wide range of connectors for coupling applications on-site or in the cloud, alongside the benefits you have come to expect from open-source technology. On top of the open-source platform's features, Mule ESB offers additional components for enterprise readiness and critical production deployments, such as security, high availability, resilience, performance management, and award-winning support.
Mature Developer Tooling:
Create simple but powerful data mappings and transformations through a visual data-mapping interface with advanced features for developer usability, backed by powerful runtime capabilities. Anypoint Studio, DataWeave, DevKit, and many other tools help with development.
Mule Enterprise Management Console (MMC):
Visibility and control over the ESB infrastructure and associated services are essential. Using the dashboard, developers can monitor apps, metrics, and SLAs and control day-to-day operations, all from a single web-based console. The console helps you troubleshoot problems, mitigate risk, and reduce operating costs during development.
Connectors and Transports:
Beyond the bundled connectors, users of Mule ESB Enterprise have access to a range of easy-to-use transports for instant connectivity to many more applications and systems, helping bridge the gap between on-site and cloud applications. MuleSoft's wide range of connectivity choices includes transports and numerous SaaS connectors.
Enterprise Security:
Going open source does not compromise security. Mule protects your integration environment end to end with enterprise-grade security: blocking unauthorized access to your systems, reducing data exposure, and defending against threats is straightforward.
High Efficiency:
With its edge-caching technology, Mule ESB can handle far higher transaction volumes than other ESBs, beating the competition on performance hands down. In tests, Mule typically processes twice the transaction volume of other ESBs, and up to 30 times as much in certain cases. Handling more transactions on the same amount of hardware, or less, means lower running costs.
High Availability and Clustering:
Mule ESB enables mission-critical applications with guaranteed delivery and clustering for your apps. With clustering, transactions can fail over to another node if an application instance fails.
Highly Scalable:
Mule ESB scales to support the largest environments. Mule evolves with the needs of your company and can be tailored to ever-changing business requirements, eliminating the need for a major overhaul later. Businesses can scale out indefinitely, horizontally or vertically.
DevOps tools integration
Establishing a DevOps practice is becoming the standard, but which DevOps tools will help drive this new form of collaboration? One study that surveyed 1,000 SQL Server professionals found that 47 percent of respondents work in an organization that already has some DevOps capability, while another 33 percent work in an organization that expects to launch a DevOps practice within the next two years. This means that roughly 1 in 3 organizations
could have a DevOps practice by 2019.
While DevOps is becoming the norm, some IT leaders struggle to establish a DevOps practice, because it is difficult to reorganize people and reinvent the relationship between development and operations teams. Technology plays a vital role in that reorganization, yet some IT leaders find it hard to understand which DevOps tools and technologies they can use to enable this collaborative environment.
Innumerable DevOps tools are available.
Below we list some tools in the DevOps ecosystem for building, testing, and deploying with ease. The list is not exhaustive, and the tools appear in no particular order.
What tools are out there for DevOps?
DevOps Tools: Building
· Gradle:
An open-source build-automation tool that can be used as a DevOps tool. Gradle helps users build, test, package, and distribute apps on any platform. It ships a collection of open-source plugins and rich APIs.
· Maven:
A popular open-source tool for both building and testing. As a DevOps tool it can produce unit-test reports, including coverage, among other features.
· Visual Studio:
A Microsoft platform that enables users to compile and build projects, helping them create apps in a customized, automated way.
Other tools include Bitbucket, Docker, Git, and Perforce.
DevOps Tools: Testing
· MUnit:
A testing framework for Mule applications that lets you automate integration testing. It offers a complete suite of capabilities for integration and unit tests.
· SoapUI:
An open-source API testing tool for SOAP and REST. It provides functional testing of REST APIs, WSDL coverage, and SOAP web services.
· JUnit:
A widely used open-source testing framework. It integrates with automation servers such as Jenkins, which makes testing simpler and easier to automate.
Other tools include: Arma, Perfecto, Parasoft, and Zuul
DevOps Tools: Deploying
· Artifactory:
A binary repository manager that can be used alongside Maven, Gradle, and other build tools.
· Puppet:
An open-source DevOps platform that gives users a framework for DevOps activities, including automated testing, continuous integration, and continuous delivery.
· Ansible:
A simple IT-automation DevOps tool that lets users automate solutions by making systems and apps easier to deploy. It is similar to Chef and Puppet.
Other tools include Chef, HP Codar, IBM UrbanCode Deploy, and Jenkins.
Integrating DevOps Tools with API-led Connectivity
As the list above shows, there are many tools companies can use to build a DevOps practice. Luckily, many of them are open source, which lets teams jump straight into developing a DevOps environment. The proliferation of tools, however, creates a serious challenge: how do users integrate the many DevOps tools in the process, expose their assets, and ensure managed control?
One approach is API-led connectivity, a methodical integration approach that links assets through modern managed APIs and exposes them. Each asset or API can then be incorporated in plug-and-play fashion, and through self-service it becomes discoverable and governed for compliance. With API-led connectivity, organizations can avoid duplicating effort, building applications in silos, or exposing assets across the enterprise ineffectively.
API-led approach
By implementing an API-led approach to integration, one of our clients, a large tech corporation, strengthened its DevOps practice. The customer already had a DevOps practice in place, but rapid development and the proliferation of SaaS apps and DevOps tools had created an unscalable, fragile IT infrastructure linked by point-to-point integration. There were many
problems with this infrastructure, including the lack of connectivity between the DevOps and continuous-integration tooling and the back-office systems it needed to integrate with. To overcome this, the client had to move beyond point-to-point integration and establish an efficient DevOps practice on an integration approach that could grow with the dependencies in the DevOps environment.
With API-led connectivity, this customer was able to rebuild its back-office infrastructure and adopt a fresh approach to integrating its data, systems, apps, and DevOps tools with reusability, scalability, and security in mind, extending DevOps and continuous integration to its back-office integration and using it as a canonical model. The client also set up a governance mechanism that provides granular access to specific resources without sacrificing protection.
Anypoint Platform
To build this new IT architecture and governance model, the client turned to MuleSoft's Anypoint Platform to construct REST APIs that abstract away the complexity of the underlying systems. Adopting an API-led approach to integration sped up the customer's development phase and increased their developers' productivity by a staggering 300 percent.
Conclusion
Organizations have many DevOps tools at hand. The key is not only choosing the tool that best fits your use case, but also ensuring that, as more tools enter the DevOps environment, all assets can be integrated, exposed, and managed effectively while staying agile. MuleSoft helps organizations achieve such an approach through API-led connectivity. You can learn more about this and other integration topics through MuleSoft online training.
https://redd.it/kyi4ku
@r_devops
Do you find it hard to find the time to create and update chatops bots? Would you be interested in a chatbot service that integrates with CI/CD tools like Bamboo/Jenkins/CodePipeline as well as monitoring services like CloudWatch, Datadog, New Relic, etc.?
Hey all,
I just joined an SRE team after a while of being in software engineering/devops.
This might be more of a problem at smaller organizations, but a common theme I've noticed is that chatops bots are really helpful but it's hard to find time to create/update them. However, when they're done well they tend to be very useful. On the other hand, sometimes they're done just well enough to be useful but there isn't enough time to fix issues or improve them by adding features. Sometimes useful ones just go away because no one had time to maintain them!
Would anyone here be interested in a service that provides ready-to-go chatbots that plug into common CI/CD services, cloud providers, and monitoring services?
Here are a few examples:
* You have a new Jenkins/Bamboo/CodePipeline CI/CD pipeline and you want to approve/deny deployments via Slack or Microsoft Teams. You log in, put in your pipeline details, and you have a chatbot that gives pipeline updates and lets you approve deployments.
* You build a new service that provides internal metrics to teams. You want those metrics to post every day to a slack channel. You pick a graph template, map some values, and you have a chatbot.
* You have an internal API that provides customer info based on IDs to help identify customer impact when debugging. You log in and provide the webhook, the values, and pick a template and you have a slack bot for your API.
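For the pipeline-updates case, the plumbing such a service would wrap is thin: a Slack incoming webhook takes a JSON payload with a `text` field. A minimal sketch (the webhook URL below is a placeholder you'd get from Slack; approve/deny buttons additionally need interactive messages and a callback endpoint):

```python
import json
import urllib.request

def build_pipeline_update(pipeline: str, stage: str, status: str) -> dict:
    # Incoming webhooks accept a plain JSON body with a "text" field;
    # mrkdwn formatting (*bold*, `code`) works inside it.
    return {"text": f"*{pipeline}*: stage `{stage}` is now *{status}*"}

def post_to_slack(webhook_url: str, payload: dict) -> int:
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack answers 200 with body "ok"

# post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#               build_pipeline_update("deploy-prod", "approval", "waiting"))
```

The value of a hosted service would be everything around this: credential storage, the interactive callback handling, and not having to maintain the bot.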
https://redd.it/kycu9g
@r_devops
Help me in choosing
What should I do, DevOps or Azure Stack? Help me choose.
https://redd.it/kyeuq2
@r_devops
End-to-end infrastructure testing frameworks?
What are some good testing frameworks for end-to-end infrastructure testing?
For example, I have a CloudWatch event rule that fires whenever a certain ECS task changes state and then triggers a Lambda function to update a DNS record. I'm looking for a way to guard against regressions: for example, run a terraform apply and then have an automated verification that nothing broke.
I feel like this could be done with Python or Bash, but is there a framework built for that kind of thing?
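Terratest (Go) is a framework built for exactly this apply-then-verify loop; in Python, pytest plus boto3 covers most cases. The verification step for the DNS example above might look like the sketch below, where the hostname and expected IP are hypothetical stand-ins for your record and the task IP reported after the deploy:

```python
import socket
import time

def resolve_a_records(hostname: str) -> set:
    # collect every IPv4 address the resolver currently returns
    return {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}

def wait_for_dns(hostname: str, expected_ip: str, timeout: float = 120,
                 interval: float = 5, resolver=resolve_a_records) -> bool:
    # poll until the record points at the expected IP, or give up
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if expected_ip in resolver(hostname):
                return True
        except socket.gaierror:
            pass  # record may not exist yet right after `terraform apply`
        time.sleep(interval)
    return False

def test_dns_tracks_ecs_task():
    # hypothetical names: substitute your record and the IP reported
    # by `aws ecs describe-tasks` after the deploy
    assert wait_for_dns("svc.example.internal", "10.0.1.42")
```

Polling with a deadline matters here because event-driven updates (EventBridge rule, then Lambda, then Route 53, then DNS propagation) are eventually consistent; an immediate one-shot check would flake.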
https://redd.it/kyabuw
@r_devops
Fylamynt - Cloud Workflow Automation Platform
Hi Everyone!
My name is Pradeep Padala and I am the Co-Founder/CEO of Fylamynt. I would like to introduce our automation platform that can help save significant time and money for cloud operations. We launched our company in December.
Our goal is to not replace existing automation tools like Terraform and Ansible but to help in connecting services like DataDog, Splunk, Slack, Cloud Services (EC2, EKS, etc.) to code (Terraform, Python, Ansible, etc.) Fylamynt is a connector similar to Zapier, but for cloud automation.
I would love to get your feedback on the product and use-cases. Any comments/feedback are appreciated.
Cheers!
Pradeep
https://redd.it/ky7ag9
@r_devops
Configure Circle to manage a monorepo
This is a functioning configuration of Circle to manage monorepos. It's still a WIP and I'd like some feedback!
https://github.com/itajaja/circle-monorepo-config
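The linked repo documents its own mechanism; for comparison, the core trick most monorepo CI setups share is mapping the commit's changed paths to the packages that own them and skipping the rest. A generic sketch of that idea (the package directory names are hypothetical):

```python
import subprocess

def changed_packages(diff_paths, package_dirs):
    # map each changed file to the top-level directory (package) that owns it
    hit = set()
    for path in diff_paths:
        top = path.split("/", 1)[0]
        if top in package_dirs:
            hit.add(top)
    return sorted(hit)

def diff_against(base: str = "origin/master"):
    # file paths touched since the merge-base with the base branch
    out = subprocess.run(
        ["git", "diff", "--name-only", base + "...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

# e.g. trigger a job only for each package in:
# changed_packages(diff_against(), {"api", "web", "worker"})
```

A helper like this can decide which jobs a CI config actually runs, whatever the CI provider.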
https://redd.it/ky5vjw
@r_devops
How to promote career development and social engagement? DevOps Leadership
So I finally managed to get buy-in from upper management to encourage career development and social engagement within our team, due to the recent wave of resignations attributed to burnout. Formerly a startup, the team has grown out of proportion, from 3 to 10 (now 6), and before COVID-19 things were manageable because we had an office. We used to run demos and Lunch 'n' Learns, but we can no longer do that since we're now remote.
Feedback I’ve gotten:
- No time to learn in business hours and life gets in the way (people with family)
- No opportunities to apply new skills
- Loss of interest after a few sessions
I'm wondering if anyone else's workplace has some sort of system to encourage people to learn new skills or increase interest in the technology stack?
What has worked for you?
TIA
https://redd.it/kz9gei
@r_devops
Agile and CI/CD mean that a project is never finished
It also means that developers are paid to produce both unfinished work and also to "fix" what has already been released until the end of time.
https://redd.it/kzbc6w
@r_devops
Azure DevOps SFTP integration
I want to run a pipeline where everything is built on my target machine; after the build, the output (a simple CSV file of my making) becomes an artifact that a release pipeline can consume. From there, I want to take that artifact and transfer it to another computer via SFTP.
I have WinSCP on my machine now, but I'd like something more flexible that doesn't need to be installed on the machine, so I can switch machines on the fly without having to install it first.
Is there any way I can approach this?
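One route that avoids WinSCP entirely is the built-in CopyFilesOverSSH task, which speaks SFTP from the agent itself. A minimal sketch of a release-stage step (the service connection name, artifact name, and paths are illustrative):

```yaml
# Pull the build artifact, then push the CSV over SFTP using the
# task that ships with Azure DevOps agents -- no WinSCP required.
steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: csv-output
      path: $(Pipeline.Workspace)/csv-output
  - task: CopyFilesOverSSH@0
    inputs:
      sshEndpoint: my-sftp-endpoint   # SSH service connection (illustrative name)
      sourceFolder: $(Pipeline.Workspace)/csv-output
      contents: '**/*.csv'
      targetFolder: /incoming
```

The SSH credentials live in a service connection under Project Settings, and because the task ships with the agent, nothing extra needs to be installed when you move to a new machine.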
https://redd.it/kzf0dr
@r_devops
AWS CloudFormation - Python canary?
Hi Guys,
I'm trying to create a Python Synthetics canary via CloudFormation using the AWS::Synthetics::Canary resource with an inline script, and I'm getting stuck on the "Script.Handler" value.
My resource definition is as follows. I'm aware the script does not do anything; I'm just trying to get it to work at the moment:
FrontendCanary:
  Type: AWS::Synthetics::Canary
  Properties:
    ArtifactS3Location: !Sub 's3://${CanaryArtifactBucket}/frontend-loadbalancer'
    Code:
      Handler: 'script.handler'
      Script: |  # TODO
        def handler(event, context):
            pass
    ExecutionRoleArn: !GetAtt CanaryRole.Arn
    FailureRetentionPeriod: 10
    SuccessRetentionPeriod: 10
    Name: !Sub 'pc-${EnvName}-fe'
    RuntimeVersion: syn-python-selenium-1.0
    Schedule:
      Expression: 'rate(1 minute)'
    StartCanaryAfterCreation: true
From the CloudFormation docs and the error messages I am getting, it appears the handler needs to end in '.handler'. I've tried various values for the first part, including the function name and 'index', all of which produce an error.
In a test canary that I created via the console, the handler value is set to 'pageLoadBlueprint.handler': "pageLoadBlueprint" is the filename of the script in the Lambda layer package, and "handler" is the name of the handler function.
When I download the Lambda function code package for my console-created canary it is completely empty. Despite that, I can see the function code in the AWS console.
Annoyingly, I can't find any examples of the inline python script cloudformation pattern on the internet.
Does anybody have any ideas on this, or any examples?
https://redd.it/kzbj8e
@r_devops
How could I have handled this better?
Hi!
I'm a solo developer working on a management system for a client. After a month of trial testing, we're ready to release the system for approval tomorrow. For context, here's the stack I use:
1. A monolithic Django app running with Gunicorn.
2. PostgreSQL for persistent storage.
3. Nginx in front of it to handle HTTP requests.
4. All of the above run as separate Docker containers, set up with docker-compose.
5. The infrastructure is set up on AWS using Terraform (manually, through terraform apply).
6. Note that the application code and the database are hosted on the same EC2 instance.
7. Configuration is handled by Ansible (manually as well, using playbooks).
8. Code is stored in GitLab.
Essentially, whenever I feel my current code is good enough, I run a deploy playbook to update prod's codebase and rebuild the containers. Every now and then I update my Terraform files when I need a new AWS service provisioned.
---
The problem:
I was working on a feature wherein a PDF copy of a Document entity is generated and saved to S3. Since a Document might contain a ton of images, I decided to move the processing into the background using RabbitMQ. So the process becomes:
1. The user submits a form containing Document data.
2. The Document instance is saved to Postgres.
3. The instance is then passed to RabbitMQ, where the PDF file is generated so the current request isn't blocked.
I ran the tests, deployed my code, and even used a dummy account in production to check that the feature works as expected. I went to sleep, and was woken up to the news that our prod server was down. Looking at the AWS metrics, CPU utilization hit max just before the server crashed. So I rebooted the EC2 instance, ran my rebuild playbook, and everything works fine again. The database seems up to date, and the PDFs are stored properly in S3 as well.
My first thought was that Django might have passed too many tasks to RabbitMQ, overloading the CPU. But looking at the RabbitMQ, Django, and Celery logs, I can't pinpoint anything that confirms that theory. All of Celery's tasks completed without any error, FWIW.
As a resolution, I temporarily removed the RabbitMQ components from my stack.
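One way to guard against this class of failure is to bound how many PDF jobs can run at once: Celery exposes this as the worker's --concurrency option. The idea can be sketched with the stdlib alone (render_pdf and the bucket path below are illustrative stand-ins, not the real code):

```python
from concurrent.futures import ThreadPoolExecutor

def render_pdf(document_id):
    # Hypothetical stand-in for the real image-heavy PDF job.
    return f"s3://bucket/{document_id}.pdf"

# max_workers bounds how many jobs run simultaneously, the same idea
# as Celery's --concurrency flag: a burst of 100 submissions queues
# up instead of saturating the instance's CPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(render_pdf, i) for i in range(100)]
    results = [f.result() for f in futures]
```

With real Celery you would set worker_concurrency (or pass --concurrency=2 to the worker), optionally with a per-task rate_limit, so a spike in form submissions grows the queue rather than the CPU load.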
This is my first time handling such a system from development to production, and I want to improve my ops and cloud skills in order to identify and prevent such events. Can you guys provide some practical tips for me?
Thanks a lot and stay safe!
https://redd.it/kz9gcq
@r_devops
Help finding scripting opportunities
Hey all,
I recently (6 months ago) started a new position at a small shop (maybe taking care of ~500 machines), and as the most junior staff member I'm not given any interesting tasks that could help me grow as an engineer. Mostly I do password resets or SSL installs, or fix customers' WordPress bugs.
Any interesting tasks that the company thinks of are given to either the CTO or the guy above me. I had a chat with my boss about this and he said any other tasks I wanted to do I had to think of myself rather than getting given more interesting tasks. Then upper management would decide if it was a good idea or not.
So, I'm struggling for ideas and was wondering if anyone can help? We mostly run websites with the LAMP stack, using Nginx, Varnish and Apache2. We also have some email servers and customers running their own email servers on VMs.
My current ideas are to:
1. Add automatic email checking for customers posting tickets, to ensure they're authorised.
2. Write a script to do an automatic fresh WordPress install.
3. Write a script that looks at Varnish logs to find objects that barely get hits from the cache, and add exceptions for those to save cache space.
4. Write a script to auto-install email on a customer's VM (could this be done with Ansible?).
5. Write a script to check the email server configuration, just like apache2ctl or nginx -t.
6. Maybe some machine learning tool to auto-optimise the Varnish cache, using Selenium to test whether the site looks the same as when uncached.
Just wondering if anyone has some neat script ideas or have thoughts on if my ideas could be helpful, or build on my ideas, or even just have advice for someone in my position.
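Idea 3 could start life as a small log parser. A minimal sketch, assuming varnishncsa is run with a format that writes one "hit|miss URL" pair per line (e.g. varnishncsa -F '%{Varnish:hitmiss}x %U'; the exact format string is an assumption, check your version's docs):

```python
from collections import Counter

def low_value_cache_entries(lines, min_hits=2):
    """Return URLs that appear in the log but are almost never
    actually served from the cache."""
    hits = Counter()   # cache hits per URL
    seen = Counter()   # total requests per URL
    for line in lines:
        status, url = line.split(maxsplit=1)
        seen[url] += 1
        if status == "hit":
            hits[url] += 1
    return [u for u in seen if hits[u] < min_hits]

sample = [
    "hit /index.html",
    "hit /index.html",
    "miss /rarely-used-report",
    "miss /index.html",
]
candidates = low_value_cache_entries(sample)
```

Anything this returns is cached but rarely served from cache, so it's a candidate for a pass/exception in your VCL.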
Cheers!
https://redd.it/kz471g
@r_devops
How to avoid Remote access server crashing?
So, I'm sure a lot of people are SSHing into servers these days to remotely run their code. I have a local server in my office which I SSH into. It runs Ubuntu and I use it primarily for deep learning and ML training and inference.
Usually there's support staff who can act quickly if the server crashes and restart it. But recently, due to COVID, access has been pretty limited, and while running a piece of code the system crashed (maybe!?). I'm not sure what crashed it: a memory error or some Python library issue. Either way, SSH seems to be dead now.
Is there some way to run code such that it never crashes my system? I'm thinking of putting up a cron monitoring script that checks every few minutes whether the system is fine, and triggers a quick restart if there's an issue. Any other tips/tricks? This isn't a production server or anything, just my own system.
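A cron watchdog helps you recover, but you can also stop the crash at the source: if a training job eats all the RAM, capping the process's own memory makes it die with a MemoryError instead of taking the whole host down. A minimal sketch using the stdlib resource module (Unix only; the 1 GiB ceiling is an arbitrary illustrative figure):

```python
import resource

def cap_memory(max_bytes):
    """Cap this process's virtual address space (Unix only) so a
    runaway allocation fails fast instead of freezing the machine."""
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

cap_memory(1 * 1024**3)  # 1 GiB ceiling for this process

caught = False
try:
    _ = bytearray(4 * 1024**3)  # try to grab 4 GiB of buffers
except MemoryError:
    caught = True  # the allocation is refused; the box stays responsive
```

For a stronger guarantee you can launch the job under systemd-run with a MemoryMax= property, and keep the cron health check as a second line of defence.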
Thanks!
https://redd.it/kz2awk
@r_devops
Slow local queries
Using AWS DocumentDB, I've deployed a test database cluster in Germany. When I run a test query on it from an EC2 instance in Germany, it takes less than 2 seconds. When I query it from my country (in the Middle East), it takes more than a minute.
This is the simple test I run (it just goes through the whole collection):
var cursor = db.dev.find();
while (cursor.hasNext()) { cursor.next(); }
Thing is, we also have an RDS (MySQL) DB in AWS, and a query of the same size takes less than 2 seconds from my country to RDS (also in Germany).
I tried viewing the logs, so I followed the "Profiling Amazon DocumentDB operations" guide, but I can't seem to find any logs from docdb in CloudWatch. I also tried opening a ticket with AWS, but apparently our basic support plan doesn't allow creating tickets.
Does anyone happen to have suggestions on how to tackle this? What should I do or look for?
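One likely suspect is cursor round trips rather than the database itself: iterating a result set fetches it in batches, and every getMore batch costs a full WAN round trip. A rough back-of-envelope model (101 is MongoDB's default first-batch size; the other numbers are illustrative):

```python
import math

def transfer_time(n_docs, batch_size, rtt_seconds):
    """Rough model: total time ~ number of batch round trips * RTT."""
    round_trips = math.ceil(n_docs / batch_size)
    return round_trips * rtt_seconds

# 10,000 docs fetched ~101 at a time:
same_region = transfer_time(10_000, 101, 0.002)   # 2 ms RTT within eu-central-1
cross_region = transfer_time(10_000, 101, 0.150)  # 150 ms RTT from the Middle East
```

If that model matches what you see, raising the batch size (e.g. db.dev.find().batchSize(1000)) cuts the number of round trips and is usually the first knob to try before digging into the cluster.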
Thanks ahead!
https://redd.it/kzqs4d
@r_devops
Question: SaaS delivery to private customers
Has anyone delivered their public SaaS application also to a customer who is walled off privately (e.g. AWS Outposts, a private DC)?
Assumption: the platform used by this private customer conforms 100% to the architecture and requirements of your SaaS application (K8s, AWS, whatever).
If so, how do you manage your ops, especially when the customer controls inbound updates?
When your CI/CD pipeline delivers a release to the public but the customer accepts it only selectively (say, every 6 months), does this create problems for engineering and/or DevOps?
Observability is blocked, and you get controlled access only when an error happens and is reported reactively by the customer.
https://redd.it/kzsleg
@r_devops