could have a DevOps exercise by 2019.
While DevOps is becoming the norm, some IT leaders still fail to define a DevOps practice, because it is difficult to reorganize individuals and reinvent the relationship between development and operations teams. Technology plays a vital role in this reorganization, yet some IT leaders find it hard to understand which DevOps tools and technologies they can use to enable this collaborative environment.
Innumerable DevOps tools are available. Below, we list some resources from the DevOps ecosystem for building, testing, and deploying with ease. The list is not exhaustive, and the tools are not presented in any particular order.
What tools are out there for DevOps?
DevOps tools: Building
· Gradle:
Open-source build automation software that can be used as a DevOps tool. Gradle helps users build, test, package, and distribute apps on any platform. The tool offers rich APIs and a collection of open-source plugins.
· Maven:
A popular, open-source tool for both building and testing. As a DevOps tool, it can generate unit test reports, including coverage, among other features.
· Visual Studio:
A Microsoft platform that enables users to build and compile projects, and helps create apps in a customized, automated way.
Other tools include Bitbucket, Docker, Git, and Perforce.
DevOps tools: Testing
· MUnit:
A testing platform for Mule applications that allows you to automate integration tests. It offers a complete suite of capabilities for unit and integration testing.
· SoapUI:
An open-source API testing tool for SOAP and REST APIs. It provides functional testing of REST APIs, WSDL coverage, and SOAP web services.
· JUnit:
A widely used open-source testing framework. Paired with an automation server such as Jenkins, it makes testing simpler and more automated.
Other tools include Arma, Perfecto, Parasoft, and Zuul.
DevOps tools: Deploying
· Artifactory:
A binary repository manager. You can use it alongside Maven, Gradle, and other build tools.
· Puppet:
An open-source DevOps platform that gives users a framework for DevOps activities, including automated testing, continuous integration, and continuous delivery.
· Ansible:
A simple IT automation tool that enables users to automate solutions by making it easier to deploy systems and apps. The tool is similar to Chef and Puppet.
Other tools include Chef, HP Codar, IBM UrbanCode Deploy, and Jenkins.
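To make the deployment category concrete, here is a minimal, hypothetical Ansible playbook that installs and starts a web server; the `web` host group and the choice of nginx are assumptions for illustration, not part of any tool's documentation:

```yaml
# site.yml — install and start nginx on every host in the "web" group.
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Run with `ansible-playbook -i inventory site.yml`; because each task declares a desired state rather than a command, re-running the playbook is safe.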
API-led Connectivity: Integrating DevOps Tools
As the list above shows, there are many resources that companies can use to build a DevOps practice. Luckily, many of them are open source, which lets teams jump straight into developing a DevOps environment. The proliferation of tools, however, creates a serious challenge: how do users integrate the many DevOps tools in the process, expose their assets, and ensure managed control?
One answer is API-led connectivity, a methodical integration approach that links assets through modern, managed APIs and exposes them. Each asset or API can then be incorporated via plug-and-play, discovered through self-service, and governed for compliance. Through API-led connectivity, organizations can ensure that they do not duplicate effort, build applications in silos, or expose assets across the enterprise ineffectively.
API-led approach
By implementing an API-led approach to integration, one of our clients, a large tech corporation, strengthened their DevOps practice. The customer already had a DevOps practice in place, but rapid development and the proliferation of SaaS apps and DevOps tools had created an unscalable, fragile IT infrastructure linked by point-to-point integrations. There were many
problems with this infrastructure, including a lack of connectivity between the DevOps and continuous-integration tooling and the back-office systems. To overcome this and establish an efficient DevOps practice, the client had to move beyond point-to-point integration to an approach that could scale with the dependencies in their DevOps environment.
With API-led connectivity, this customer was able to rebuild their back-office infrastructure and adopt a fresh approach to integrating their data, systems, apps, and DevOps tools, with reusability, scalability, and security in mind. Using it as a canonical model, the client extended DevOps and continuous integration to their back-office integrations, and set up a governance mechanism that provides granular access to specific resources without sacrificing protection.
Anypoint Platform
To implement this new IT architecture and governance model, the client turned to MuleSoft's Anypoint Platform™ to construct REST APIs that abstract away the complexity of the underlying systems. Adopting an API-led approach to integration sped up the customer's development phase and increased their developers' productivity by a staggering 300 percent.
Conclusion
Organizations have many DevOps resources at hand. The key is not only to choose the tool that works best for one's use case, but also to ensure that as more tools are added to the DevOps environment, all assets can be incorporated, managed effectively, exposed, and kept agile. MuleSoft helps organizations achieve such an approach through API-led connectivity. You can learn more about this and other integration topics through MuleSoft online training.
https://redd.it/kyi4ku
@r_devops
Do you find it hard to find the time to create and update chatops bots? Would you be interested in a chatbot service that integrates with CI/CD like bamboo/jenkins/Codepipeline as well as monitoring services like Cloudwatch, Datadog, NewRelic, etc?
Hey all,
I just joined an SRE team after a while of being in software engineering/devops.
This might be more of a problem at smaller organizations, but a common theme I've noticed is that chatops bots are really helpful but it's hard to find time to create/update them. However, when they're done well they tend to be very useful. On the other hand, sometimes they're done just well enough to be useful but there isn't enough time to fix issues or improve them by adding features. Sometimes useful ones just go away because no one had time to maintain them!
Would anyone here be interested in a service that provides ready to go chatbots that plug into common ci/cd services, cloud providers, and monitoring services?
Here are a few examples:
* You have a new Jenkins/Bamboo/Codepipeline ci/cd pipeline and you want to approve/deny deployments via slack or Microsoft teams. You log in, put in your pipeline details and you have a chatbot that will give pipeline updates and allow you to approve deployments.
* You build a new service that provides internal metrics to teams. You want those metrics to post every day to a slack channel. You pick a graph template, map some values, and you have a chatbot.
* You have an internal API that provides customer info based on IDs to help identify customer impact when debugging. You log in and provide the webhook, the values, and pick a template and you have a slack bot for your API.
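To make the first example concrete: much of such a bot boils down to assembling a message payload for the chat platform. A minimal sketch in Python, assuming Slack's Block Kit message format; the function name and `action_id` values are hypothetical, chosen for illustration:

```python
def build_approval_message(pipeline: str, stage: str) -> dict:
    """Build a Slack Block Kit payload asking for a deployment approval.

    The action_id values are placeholders; a real bot would match them
    against the interactive callbacks it handles.
    """
    return {
        "text": f"Deployment approval needed: {pipeline} -> {stage}",
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*{pipeline}* is waiting to deploy to *{stage}*.",
                },
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Approve"},
                        "style": "primary",
                        "action_id": "approve_deploy",
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Deny"},
                        "style": "danger",
                        "action_id": "deny_deploy",
                    },
                ],
            },
        ],
    }
```

Posting it is then a single HTTP POST of this dict as JSON to the channel's incoming webhook; the hard, time-consuming part the post describes is everything around this — auth, callbacks, and upkeep.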
https://redd.it/kycu9g
@r_devops
Help me in choosing
What should I do: DevOps or Azure Stack? Help me in choosing.
https://redd.it/kyeuq2
@r_devops
end to end infrastructure testing frameworks?
What are some good testing frameworks for end to end infrastructure testing?
For example, I have a cloudwatch event rule that fires whenever a certain ECS task changes state and then triggers a lambda function to update a DNS record. Looking for a way to guard against regressions. So for example, do a terraform apply and then have an automated verification that nothing broke.
I feel like this can be done with python or bash but is there some kind of framework built for that kind of thing.
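Dedicated frameworks exist for this (Terratest in Go is a common one), but most post-apply verifications reduce to "poll an assertion until it passes or times out." A tiny, framework-free sketch of that core loop; the check callable is whatever hits the real infrastructure, e.g. a DNS lookup after the ECS task changes state (that part is left as an assumption here):

```python
import time


def wait_until(check, timeout=60.0, interval=2.0):
    """Poll `check` until it returns truthy or `timeout` seconds elapse.

    `check` is any zero-argument callable, e.g. one that resolves the
    DNS record the lambda should have updated and compares it to the
    expected value. Returns True on success, False if the deadline
    passes. Eventual consistency is the norm after an apply, so a
    single immediate check is rarely enough.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

After `terraform apply`, a regression guard is then just `assert wait_until(lambda: lookup_record() == expected_ip)`, where `lookup_record` and `expected_ip` stand in for your real DNS query and target.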
https://redd.it/kyabuw
@r_devops
Fylamynt - Cloud Workflow Automation Platform
Hi Everyone!
My name is Pradeep Padala and I am the Co-Founder/CEO of Fylamynt. I would like to introduce our automation platform that can help save significant time and money for cloud operations. We launched our company in December.
Our goal is to not replace existing automation tools like Terraform and Ansible but to help in connecting services like DataDog, Splunk, Slack, Cloud Services (EC2, EKS, etc.) to code (Terraform, Python, Ansible, etc.) Fylamynt is a connector similar to Zapier, but for cloud automation.
I would love to get your feedback on the product and use-cases. Any comments/feedback are appreciated.
Cheers!
Pradeep
https://redd.it/ky7ag9
@r_devops
Configure Circle to manage a monorepo
This is a functioning configuration of Circle to manage monorepos. It's still a WIP and I'd like some feedback!
https://github.com/itajaja/circle-monorepo-config
https://redd.it/ky5vjw
@r_devops
How to promote career development and social engagement? DevOps Leadership
So finally I managed to get buy-in from upper management to encourage career development and social engagement within our team due to the recent wave of resignations attributed to burnout. Formerly a startup, it has grown out of proportion from 3 to 10 (now 6), where before COVID19 things were manageable as we had an office. We used to conduct demos and lunch N Learn but we can no longer do that since we're now remote.
Feedback I’ve gotten:
- No time to learn in business hours and life gets in the way (people with family)
- No opportunities to apply new skills
- Loss of interest after a few sessions
I’m wondering if anyone else’s place have some sort of system to encourage people to learn new skills or increase interests in the technology stack?
What has worked for you?
TIA
https://redd.it/kz9gei
@r_devops
Agile and CI/CD means that a project is never finished
It also means that developers are paid to produce both unfinished work and also to "fix" what has already been released until the end of time.
https://redd.it/kzbc6w
@r_devops
Azure DevOps SFTP integration
I wanted to run a pipeline where everything is going to be built in my target machine but after it's built, I wanted to make it into an artifact that can be consumed by a release pipeline and from there, I wanted to get that artifact, a simple csv file of my making, and transfer it to another computer via sftp.
I have winscp on my machine now but I wanted to make it more flexible by making it where I don't need to have it installed on the machine so that I can change machines on the fly without having to go in and install it if it's not there already.
Is there any way I can approach this?
https://redd.it/kzf0dr
@r_devops
AWS cloudformation - python canary?
Hi Guys,
I'm trying to create a python synthetics canary via cloudformation using the AWS::Synthetics::Canary resource with an inline script and I'm getting stuck on the "Script.Handler" value.
My resource definition is as follows - I'm aware the script does not do anything, I'm just trying to get it to work at the moment:
FrontendCanary:
  Type: AWS::Synthetics::Canary
  Properties:
    ArtifactS3Location: !Sub 's3://${CanaryArtifactBucket}/frontend-loadbalancer'
    Code:
      Handler: 'script.handler'
      Script: |  # TODO
        def handler(event, context):
            pass
    ExecutionRoleArn: !GetAtt CanaryRole.Arn
    FailureRetentionPeriod: 10
    SuccessRetentionPeriod: 10
    Name: !Sub 'pc-${EnvName}-fe'
    RuntimeVersion: syn-python-selenium-1.0
    Schedule:
      Expression: 'rate(1 minute)'
    StartCanaryAfterCreation: true
From the cloudformation docs and the error messages I am getting, it appears the handler needs to end in '.handler'. I've tried various values for the first part including the function name and 'index', all of which produce an error.
In a test canary that I created via the console, the handler value is set to 'pageLoadBlueprint.handler'. "pageLoadBlueprint" is the name of the filename in the lambda layer package, and "handler" is the name of the handler function.
When I download the lambda function code package for my cloudwatch-generated canary it is completely empty. Despite that, I can see the function code in the AWS console.
Annoyingly, I can't find any examples of the inline python script cloudformation pattern on the internet.
Does anybody have any ideas on this, or any examples?
https://redd.it/kzbj8e
@r_devops
How could I have handled this better?
Hi!
I'm a solo developer working on a management system for a client. After a month of trial testing, we're ready to release the system for approval tomorrow. As a precursor, here's the stack that I use:
1. A monolithic Django app running with Gunicorn.
2. It uses PostgreSQL for persistent storage.
3. Nginx sits in front of it to handle HTTP requests.
4. All of the above run as separate Docker containers, which I set up with docker-compose.
5. The infrastructure is set up in AWS using Terraform (manually through terraform apply)
6. Note that the application code and the database are hosted on the same EC2 instance
7. Configuration is handled by Ansible (manually as well, using playbooks)
8. Code is stored in GitLab
Essentially whenever I feel like my current code is good enough, I run a deploy playbook to update the prod's codebase and rebuild the containers. Every now and then I update my Terraform files when I need a new AWS service provisioned.
---
The problem:
I was working on a feature wherein a PDF copy of a Document entity is generated and saved to S3. Now this Document entity might contain a ton of images, so I decided to move the processing into the background using RabbitMQ. So the process becomes this:
1. User submits a form containing Document data
2. The Document instance is saved to Postgres
3. The instance is then passed to RabbitMQ, where the PDF file is generated in order to not block the current request
I ran the tests, deployed my code, and even used a dummy account in production to check if the feature works as expected. I went to sleep, and was woken up to the news that our prod server is down. Upon looking at the AWS metrics, it seems that the CPU utilization hits max just before the server crashed. So I rebooted the EC2 instance and ran my rebuild playbook and everything works fine again. The database seems up to date, and the PDFs are stored properly in S3 as well.
The first thought that came to mind is that Django might have passed too many tasks to RabbitMQ, overloading the CPU usage. But upon looking at the RabbitMQ, Django, and Celery logs, I can't pinpoint a specific area that might confirm my theory. All of Celery's tasks completed without any error FWIW.
As a resolution, I temporarily removed the RabbitMQ components from my stack.
This is my first time handling such a system from development to production, and I want to improve my ops and cloud skills in order to identify and prevent such events. Can you guys provide some practical tips for me?
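One general mitigation for the "too many tasks at once" theory, independent of this particular stack: cap how many heavy jobs can run concurrently on the box. In Celery the knobs for this are worker concurrency and per-task `rate_limit`; the underlying idea, sketched in plain Python with a semaphore (the function names are hypothetical, and `render` stands in for the real CPU-heavy PDF generation):

```python
import threading

# Allow at most 2 PDF renders at a time; further requests wait their turn
# instead of piling CPU-bound work onto the same instance all at once.
_render_slots = threading.BoundedSemaphore(2)


def render_pdf_safely(document_id, render):
    """Run `render(document_id)` with a cap on concurrent renders.

    Callers block until a slot frees up, so a burst of submissions
    degrades into a queue rather than saturating the CPU.
    """
    with _render_slots:
        return render(document_id)
```

With app and database sharing one EC2 instance, as in the post, bounding the background workers (or moving them to a separate host) keeps a burst of PDF jobs from starving Postgres and Gunicorn of CPU.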
Thanks a lot and stay safe!
https://redd.it/kz9gcq
@r_devops
Help finding scripting opportunities
Hey all,
I recently (6 months ago) started a new position at a small shop (maybe taking care of ~500 machines) and as the lowest staff member am not given any interesting tasks that can help me grow as an engineer. Mostly I do password resets or SSL installs, or fix customers' WordPress bugs.
Any interesting tasks that the company thinks of are given to either the CTO or the guy above me. I had a chat with my boss about this and he said any other tasks I wanted to do I had to think of myself rather than getting given more interesting tasks. Then upper management would decide if it was a good idea or not.
So, I'm struggling for ideas and was wondering if anyone can help? We mostly run websites with the LAMP stack, using Nginx, Varnish and Apache2. We also have some email servers and customers running their own email servers on VMs.
My current ideas are to:
1. Add automatic email checking for customers posting tickets to ensure they're authorised
2. Have a script to do an automatic new WordPress install.
3. A script looking at Varnish logs to see if we're catching stuff that barely gets hits from the cache and add exceptions for those to save cache space
4. A script to auto install email on a customers VM (could this be done with Ansible?)
5. A script to check the email server configuration just like apache2ctl or nginx -t
6. Maybe some machine learning tool to auto-optimise the Varnish cache, using Selenium to test whether the site looks the same as when uncached.
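Idea 3 could be sketched roughly like this. The field positions are an assumption based on an NCSA-style varnishncsa line with the cache handling appended as the last field; adjust them to the actual log format:

```python
# Sketch for idea 3: find URLs that are cached but rarely requested, so
# exceptions can be added for them to save cache space. Field positions
# assume an NCSA-style varnishncsa line with the cache handling ("hit" or
# "miss") appended as the last field -- adjust to your real log format.
from collections import Counter

def rarely_hit_urls(log_lines, min_hits=5):
    """Return cached URLs that were served fewer than min_hits times."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed lines
        url, handling = fields[5], fields[-1]
        if handling == "hit":
            hits[url] += 1
    return [url for url, n in hits.items() if n < min_hits]

sample = [
    '1.2.3.4 - - [01/Jan/2021] "GET /popular HTTP/1.1" 200 512 hit',
    '1.2.3.4 - - [01/Jan/2021] "GET /popular HTTP/1.1" 200 512 hit',
    '1.2.3.4 - - [01/Jan/2021] "GET /rare HTTP/1.1" 200 512 hit',
]
print(rarely_hit_urls(sample, min_hits=2))  # ['/rare']
```

From there the script could emit a `vcl` snippet or a report instead of a plain list, but the counting core stays the same.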
Just wondering if anyone has some neat script ideas or have thoughts on if my ideas could be helpful, or build on my ideas, or even just have advice for someone in my position.
Cheers!
https://redd.it/kz471g
@r_devops
How to avoid Remote access server crashing?
So, I'm sure a lot of people are SSH'ing into servers these days to remotely run their code. I have a local server in my office which I SSH into. It runs Ubuntu and I use it primarily for Deep Learning and ML training and inference.
Usually, there's some support staff that can quickly act if the server crashes or something and restart it. But recently, due to COVID, access has been pretty limited, and while running a piece of code it crashed the system (maybe!?) and I'm not sure what crashed it. A memory error or some Python library issue. But SSH seems to be inactive and failing now.
Is there some way to run code such that it doesn't ever crash my system? I'm thinking of putting up a cron monitoring script to check every few minutes if the system is fine, maybe over SSH or something, and if there's an issue it can restart things quickly. Any other tips/tricks? This isn't a production server or anything. Just my own system.
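The cron watchdog idea from the post could look something like the sketch below, assuming a Linux host where /proc/meminfo is available. The 100 MB threshold and the restart action are illustrative assumptions, not recommendations:

```python
# Cron-driven health check sketch, assuming a Linux host. It parses
# MemAvailable out of /proc/meminfo; the 100 MB threshold and the restart
# action mentioned in the comment below are illustrative.
def mem_available_kb(meminfo_text):
    """Extract the MemAvailable value (in kB) from /proc/meminfo text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    return None

def needs_restart(meminfo_text, threshold_kb=100_000):
    """True when available memory has fallen below the threshold."""
    avail = mem_available_kb(meminfo_text)
    return avail is not None and avail < threshold_kb

# On the real host, cron would run something like:
#   */5 * * * * /usr/bin/python3 /opt/watchdog.py
# and the script would read /proc/meminfo and, when needs_restart() is
# True, restart the affected service, e.g.
#   subprocess.run(["systemctl", "restart", "ssh"])
sample = "MemTotal: 16000000 kB\nMemAvailable: 80000 kB\n"
print(needs_restart(sample))  # True -- 80 MB free is below the 100 MB default
```

For the original problem (a training job taking the whole box down), running the job under a memory cgroup or with `ulimit` so the OOM killer targets the job rather than sshd is probably a more direct fix than any watchdog.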
Thanks!
https://redd.it/kz2awk
@r_devops
Slow local queries
Using AWS' DocumentDB, I've deployed a test database cluster in Germany. When I run a test query on it from an EC2 instance in Germany, it takes less than 2 seconds. When I query it from my country (Middle East) it takes more than a minute.
This is the simple test I run (it just goes through the whole collection):
var cursor = db.dev.find();
while (cursor.hasNext()) { var doc = cursor.next(); }
Thing is, we also have an RDS (MySQL) DB in AWS and a query of the same size takes less than 2 seconds from my country to RDS (Also in Germany).
I tried viewing the logs, so I followed this document, but I can't seem to find any logs from docdb in CloudWatch. These are my cluster parameters. I also tried opening a ticket with AWS, but apparently our basic subscription doesn't allow creating tickets.
Does anyone have suggestions on how to tackle this? What should I do/look for?
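For what it's worth, a driver-side cursor like the one above fetches documents in batches, paying one network round trip per batch, so a high-latency link multiplies the traversal time even when the server-side query is fast. A rough back-of-the-envelope model (document count, batch size, and RTTs below are illustrative assumptions):

```python
import math

# One network round trip per cursor batch; all numbers here are
# illustrative assumptions, not measurements.
def traversal_seconds(n_docs, batch_size, rtt_ms):
    """Lower-bound traversal time for a batched cursor scan."""
    round_trips = math.ceil(n_docs / batch_size)
    return round_trips * rtt_ms / 1000

# The same 10,000-document scan with 100-document batches:
print(traversal_seconds(10_000, 100, 5))    # 0.5  (in-region EC2, ~5 ms RTT)
print(traversal_seconds(10_000, 100, 700))  # 70.0 (intercontinental, ~700 ms RTT)
```

If the numbers line up like this, raising the cursor's batch size, or simply running the client in-region, would matter far more than any server-side tuning.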
Thanks ahead!
https://redd.it/kzqs4d
@r_devops
Amazon
Profiling Amazon DocumentDB operations - Amazon DocumentDB
Use the profiler to log the execution time and details of operations that were performed on your Amazon DocumentDB cluster.
Question: SaaS delivery to private customers
Has anyone delivered their public SaaS application also to a customer who is walled off privately (e.g. AWS Outposts, private DC)?
Assumption: the platform used by this private customer 100% conforms to the architecture and requirements of your SaaS application (K8s, AWS, whatever).
If so, how do you manage your ops, especially when the customer controls inbound updates?
When your CICD pipeline delivers a release to the public but it is only selectively accepted by the customer (say, every 6 months) - does this create problems for engineering and/or DevOps?
Observability is blocked, and you only get controlled access when an error happens and is reported reactively by the customer.
https://redd.it/kzsleg
@r_devops
How to move WITs from one board to another?
I have a backlog with a bunch of features. I've described the states that a feature goes through from idea to done and want to implement this in my backlog setup. The states cover two overall processes:
1) Specifying the feature (4 states)
2) Developing and releasing the feature (5 states)
Question: Is it possible to have two boards attached to your backlog, so that after specifying the feature on Board #1, I move it to Board #2, where I handle development and release?
The easy solution would be to put all 9 states into the same board, but this would be a mess. I want to split it in two for an easier overview, since it's two separate workflows...
Any advice?
https://redd.it/kztzqp
@r_devops
What is Distributed Tracing?
https://deepsource.io/blog/distributed-tracing/
https://redd.it/kzxsin
@r_devops
DeepSource
What is Distributed Tracing?
If distributed systems are like the backbone of cloud infrastructure, distributed tracing can rightly be declared as the backbone of microservices monitoring.
Sysadmin looking to enter the dev side of things... where to begin?
Hey all, I've been working as sysadmin/infrastructure side of things for the last several years, ranging from Windows shops w/Powershell to AWS shops and running everything in Terraform/Bash/Ansible/etc.
The writing is on the wall with things like CDK becoming more popular in the market. A lot of local companies I'm interested in are NodeJS/React shops. At first I considered a coding bootcamp, but I'm motivated enough to self-teach for the time being.
So that said... where do I begin? There's a plethora of courses out there, but I'm looking for ones that might appeal more to recovering sysadmins. Any suggestions?
Thanks!
https://redd.it/l07sz2
@r_devops
CICD pipeline
Hey guys, sorry for the novice questions, but as I'm studying CICD flows a lot of questions come to mind, and I'm looking for a couple of answers:
In the scenario where I have a pipeline that does a build (docker image) on every commit, what's the best way to manage and handle all the docker images created? Let's say 10 devs commit/push code upstream, which would build a docker image 10 times - how do we keep control of that in a docker repository? Keep the same tag, or different tags so that the next phase of the pipeline can take the image and deploy it to be tested?
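One common convention that answers the "same tag or different tags?" part is to give every build an immutable tag derived from the commit SHA, plus a moving tag like `latest` for the main branch. A sketch of that convention - registry and branch names are illustrative:

```python
# Every build gets an immutable tag from the commit SHA; the main branch
# additionally moves a convenience tag. Registry and branch names below
# are illustrative.
def image_tags(registry, repo, commit_sha, branch):
    """Compute the tags to push for one CI build."""
    immutable = f"{registry}/{repo}:{commit_sha[:12]}"
    tags = [immutable]
    if branch in ("main", "master"):
        tags.append(f"{registry}/{repo}:latest")
    return tags

print(image_tags("registry.example.com", "myapp",
                 "a1b2c3d4e5f6a7b8", "main"))
# ['registry.example.com/myapp:a1b2c3d4e5f6', 'registry.example.com/myapp:latest']
```

The immutable tag is what downstream pipeline stages (and a Jenkins `kubectl set image` step) would reference, so any deployment can be traced back to an exact commit and rolled back precisely.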
Also, what's the best way to create CD to deploy to a Kubernetes cluster with Jenkins? During the CICD pipeline, would my newly built docker image get a release tag, which Jenkins could then use to set the image in my deployment?
Thank you in advance!
https://redd.it/l07fk8
@r_devops
Anyone available for a quick chat regarding user provisioning?
"DevOps" / Software Developer here. I've done data integrations for K-12. Integrating Active Directory, Google Workspace, and a bunch of other apps. Important question to ask you!
Today companies generally have AD, Azure AD (Microsoft 365), and/or Google Workspace. Companies generally also have some kind of HR system.
To get new employees' accounts into these systems, IT gets an email from HR or a manager that someone is starting, and a manual back-and-forth begins. The entire life cycle of an employee at your company requires manual steps as well.
I think it's crazy we don't have an easy way to automate the integration between HR and IT systems. Every company generally has the same issue. How do you handle this today?
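For context, the core of most HR-to-IT integrations is a reconciliation step: diff the HR roster against existing directory accounts and compute what to create and what to disable. A minimal sketch with made-up data:

```python
# Reconciliation core: diff the HR roster against directory accounts.
# Email addresses below are made up for illustration.
def reconcile(hr_emails, directory_emails):
    """Plan which accounts to create and which to disable."""
    hr, directory = set(hr_emails), set(directory_emails)
    return {
        "create": sorted(hr - directory),   # in HR, no directory account yet
        "disable": sorted(directory - hr),  # account exists, person left HR
    }

plan = reconcile(
    ["alice@example.com", "bob@example.com"],
    ["bob@example.com", "carol@example.com"],
)
print(plan)  # {'create': ['alice@example.com'], 'disable': ['carol@example.com']}
```

The hard part the vendors hide is everything around this loop: pulling the roster from the HR system, matching people without a shared key, and applying the plan via each directory's API.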
There are things like Okta, OneLogin, JumpCloud, etc. My issue with these apps (I've used Okta and OneLogin) is that, as a new business today, do you really want to also pay for Okta on top of Microsoft 365 / Google?
There's also ManageEngine, Tools4Ever, and a flurry of other products. My issue with all of these apps is that they hide the mappings (i.e. code) behind an app, which leads to all of your standard low/no-code issues.
Would anyone be willing to hop on a quick call, video call, or even just a comment below?
https://redd.it/l05iya
@r_devops
"DevOps" / Software Developer here. I've done data integrations for K-12. Integrating Active Directory, Google Workspace, and a bunch of other apps. Important question to ask you!
Today companies generally have AD, Azure AD (Microsoft 365), and/or Google Workspace. Companies generally also have some kind of HR system.
To get new employees accounts into these systems, IT gets an email from HR or a manager that someone is starting and a manual back and forth begins. There's the entire life cycle of an employee at your company that requires manual steps as well.
I think it's crazy we don't have an easy way to automate the integration between HR and IT systems. Every company generally has the same issue. How do you handle this today?
There's things like Okta, Onelogin, JumpCloud, etc. My issue with these apps (I've used Okta and Onelogin) is that as a new business today do you really want to also pay for Okta on top of Microsoft 365 / Google?
There's also Manage Engine, Tools4Ever, and a flury of other products. My issue with all of these apps is they hide the mappings (ie code) behind an App which leads to all of your standard low/no code issues.
Would anyone be willing to hop on a quick call, video call, or even just a comment below?
https://redd.it/l05iya
@r_devops
reddit
Anyone available for a quick chat regarding user provisioning?
"DevOps" / Software Developer here. I've done data integrations for K-12. Integrating Active Directory, Google Workspace, and a bunch of other...