Is it possible to learn devops by yourself?
Hello everyone, I'm a Python developer and I want to try myself as a DevOps engineer. Is it possible to learn everything by yourself? For example, a developer builds their own projects: scripts, sites (if you're a web developer), bots, etc. Can a DevOps engineer do the same? Please recommend the best resources to learn from, and places where a young DevOps engineer can practice. Tell your stories about your path to becoming a DevOps engineer.
https://redd.it/119u16m
@r_devops
Drift management in cloud infrastructure
I've published a blog post about IaC drift detection and management
https://www.tailwarden.com/blog/infrastructure-drift-management
Looking forward to your feedback :)
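For readers new to the topic: drift detection essentially diffs the resources your IaC state claims to manage against what actually exists in the cloud. A minimal illustrative sketch (all resource names are made up, and this is not the workflow from the linked post):

```python
def detect_drift(state_resources: dict, actual_resources: dict):
    """Compare IaC state against live cloud resources.

    Both arguments map resource IDs to attribute dicts.
    Returns (drifted, unmanaged, missing) sets of resource IDs.
    """
    drifted = {
        rid for rid, attrs in state_resources.items()
        if rid in actual_resources and actual_resources[rid] != attrs
    }
    unmanaged = set(actual_resources) - set(state_resources)  # exist, but not in state
    missing = set(state_resources) - set(actual_resources)    # in state, but deleted
    return drifted, unmanaged, missing

state = {"sg-1": {"port": 22}, "i-1": {"type": "t3.micro"}}
live = {"sg-1": {"port": 2222}, "i-2": {"type": "t3.micro"}}
print(detect_drift(state, live))  # sg-1 drifted, i-2 unmanaged, i-1 missing
```

Real tools do the same comparison against provider APIs instead of in-memory dicts, but the three buckets (drifted, unmanaged, missing) are the core output.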
https://redd.it/119ss6p
@r_devops
Why I think that most people prefer GKE to EKS and why this is important for the future of DevOps
Most people who have used GKE and EKS prefer GKE
Don’t take my word for it; take a look at the **consensus**.
GKE is not better because it has more features or spins up nodes faster. It is preferable because the user experience is far superior: it is easier to use and better integrated with the rest of the platform; in other words, it's better designed.
This has been my experience with GCP in general. It's not that GCP is technically superior or offers more than AWS; it's just so much easier and more intuitive to use, to the point that I stopped taking AWS jobs altogether.
For infrastructure engineers with long experience working with AWS, this may not be an issue, because they can dedicate thousands of hours to familiarizing themselves with it and leverage automation. But for developers who need to do infrastructure tasks, using AWS can be an extremely daunting and harrowing experience. This has the undesired effect of slowing down the pace of development and impairing innovation by overloading developers with tasks they shouldn't be worrying about.
Platform Engineering aims to deliver self-service that hinges on Developer Experience (DX) by abstracting away infrastructure complexities with Internal Developer Platforms. I predict that the future of DevOps will have a heavier focus on DX, with tools and platforms that will enable delivering self-service and standardization by design, greatly mitigating cognitive load.
The cloud platforms underneath may not be any less complex because flexibility will always be needed, but the tools that enable platform engineers to provide self-service via a well-designed Internal Developer Platform will reign in the DevOps landscape.
What do you think?
https://redd.it/119sy27
@r_devops
Needed some ci cd advice / pipelines
Hey All,
Just had a question and needed some views on this. I'm a new DevOps guy with no one internally to learn from.
My question is around pipelines and tasks.
The environment I have come into has a pipeline running that builds the app and then saves it in an AWS ECR repository.
My question is around the next steps: I want to automate the Docker build on a machine. Do I go to the machine and do that manually, or is it best practice to create another pipeline for the other tasks in your workflow?
I'm not actually sure how that bit works. If anyone could help, I'd appreciate it.
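If the next step is getting that image running on a machine, the common pattern is a separate deploy pipeline (or a deploy stage in the same one) that pulls from ECR and runs the container, rather than doing it by hand on the machine. As a rough sketch of what such a stage might execute (the registry URL, image name, and port below are invented for illustration):

```python
def deploy_commands(registry: str, image: str, tag: str, port: int) -> list:
    """Compose the shell commands a deploy stage would run on the target machine."""
    ref = f"{registry}/{image}:{tag}"
    return [
        f"docker pull {ref}",            # fetch the image the CI pipeline pushed to ECR
        f"docker rm -f {image}",         # remove the old container if one is running
        f"docker run -d --name {image} -p {port}:{port} {ref}",
    ]

# Hypothetical values; a real pipeline would inject these as variables.
for cmd in deploy_commands("123456789012.dkr.ecr.us-east-1.amazonaws.com",
                           "myapp", "v1.2", 8080):
    print(cmd)
```

The point is that these commands live in pipeline configuration (executed over SSH or by an agent on the machine), so deploys are repeatable instead of manual.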
https://redd.it/118u12b
@r_devops
What's your favorite developer community?
What's your favorite community as a developer/SRE/DevOps Engineer and what's the value you get from it?
I'm working on building a developer community and I would like to make sure that I avoid all the fluff and actually provide information/assets people care about.
https://redd.it/118xpss
@r_devops
Used to work at another company where the Confluence pages were highly customizable - the new Confluence seems more restrictive, with way too much whitespace, making it inefficient for longer documentation. What happened?
As the title says.
The new Confluence is so bad. I can't even vertically center-align text in a table.
I use SphinxDocs for internal documentation, but we're still looking for something a bit more accessible.
Am I missing something?
https://redd.it/11a01b0
@r_devops
What's the best path to enter the DevOps field?
Hello everyone,
I'm currently looking for advice on the best path to enter the DevOps field. I'm currently studying for about 4 hours a day and have an intermediate level of knowledge in networking and programming.
I was previously studying for the CCNA, but after looking for job openings, I decided to switch to DevOps. I'm currently studying for the AWS Solutions Architect, AWS DevOps, and AWS SysOps Associate certifications for about 2 hours a day, along with 1 hour of Linux (RHCSA) and 1 hour or more of English. Although I'm not fluent in English, I can understand about 80% of what I read or hear.
My goal is to find the most efficient and effective way to get a Junior DevOps job. I plan to take each AWS certification in 45 days, which will take me about 4 and a half months. Then, I plan to take the CKA (Certified Kubernetes Administrator) exam in 2 months, including learning Docker before that. After that, I plan to take the Terraform certifications. My goal is to complete all these certifications in 7 months and RHCSA in 12 months.
I want to make it clear that I'm not collecting or memorizing questions for these certifications; I'm doing it because it motivates me to study and work on projects. Besides, I hope to get a Junior DevOps job once I complete these certifications. Based on my study plan and the materials and simulations I've used, these dates seem reasonable.
I do realize that this is a lot of material to cover, and I'm not under the illusion that I'll become an expert in such a short amount of time. However, I'm hoping that this is the best path for me to enter the DevOps field, get a job, and continue my studies. I'm committed to learning as much as I can and putting in the necessary effort to achieve my goals. Any advice or suggestions on how to optimize my learning and job search process would be greatly appreciated.
My question is, is this the best path to take to get a job in DevOps? Are there other things I should consider besides this stack?
My current stack includes:
* AWS (Solutions Architect, DevOps, SysOps Associate certifications)
* RHCSA
* Terraform
* CKA
* Docker
* Ansible
* Jenkins
* Prometheus
* GitLab
* Vault
* Consul
I appreciate any advice or suggestions you may have. Thank you!
https://redd.it/118o42q
@r_devops
Use this shortcut to refer to the last executed command!! (1 minute)
https://www.youtube.com/watch?v=ExEtlFAarXU
https://redd.it/11937bl
@r_devops
Update: File support (max 1MB) added to the self-hosting app to create one-time shareable secrets - (new feature)
https://github.com/rpgeeganage/ots-share-app
How to use
Text: https://github.com/rpgeeganage/ots-share-app#for-text
Files: https://github.com/rpgeeganage/ots-share-app#for-files
View secret:
Text: https://github.com/rpgeeganage/ots-share-app#for-text-1
Files: https://github.com/rpgeeganage/ots-share-app#for-files-1
Complete feature list:
* Supports both text and small files (maximum `1MB`).
* Creates shareable links which are valid for a maximum of 24 hours.
* The contents are encrypted with `AES` in `CBC` mode with a `256-bit` key (using [Crypto-js](https://cryptojs.gitbook.io/docs/#the-cipher-algorithms)).
* Passwords are NOT sent to the backend server.
* The app periodically deletes encrypted content after it expires, and the encrypted content gets deleted once the web UI fetches it.
* CLI support.
* Multiple database connectivity support: `Mongo`, `Postgres`, `MySQL`.
Give it a star if you like it.
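The one-time semantics above can be reduced to a store that deletes a secret the moment it is fetched and refuses expired entries. This is only a toy in-memory model for illustration, not the app's actual code:

```python
import time
import uuid


class OneTimeStore:
    """Toy model of one-time shareable secrets: fetch once, then it is gone."""

    def __init__(self, ttl_seconds: int = 24 * 3600):
        self.ttl = ttl_seconds
        self._items = {}  # token -> (encrypted_blob, expiry_timestamp)

    def put(self, encrypted_blob: bytes) -> str:
        token = uuid.uuid4().hex  # identifier embedded in the shareable link
        self._items[token] = (encrypted_blob, time.time() + self.ttl)
        return token

    def fetch(self, token: str):
        item = self._items.pop(token, None)  # removed on first fetch
        if item is None:
            return None
        blob, expires_at = item
        return blob if time.time() < expires_at else None


store = OneTimeStore(ttl_seconds=60)
tok = store.put(b"ciphertext")
print(store.fetch(tok))  # b'ciphertext'
print(store.fetch(tok))  # None - the secret is single-use
```

In the real app the blob is already encrypted client-side, so the server only ever holds ciphertext; the store just enforces the one-time and TTL rules.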
https://redd.it/11a3zfy
@r_devops
How do you deploy the backend side (express and postgres) on netlify?
For React, to deploy I just run npm run build, then drag and drop the build folder. But when I tried dragging my whole folder with my front end and back end (Express and Postgres), I got a 500 error. Does anyone know how to deploy the backend to make it work?
https://redd.it/11a4gaj
@r_devops
Is there any service that offers a free domain and hosting?
I'm learning DevOps and the course instructor said we'll simulate a production environment, and listed the requirements; one of them is a domain with 4 subdomains (1 for the Rancher server and 3 for Kubernetes). I don't know if I made the right choice using Vercel to create and host a free domain.
Apparently the domain is working, but the problem starts when I have to create subdomains, because I don't know what to insert in the value field. The placeholder suggests "host.example.com", and my intuition told me to insert the domain that I created on Vercel, but when I tested whether the subdomain was available, the route wasn't accessible (e.g. "rancher-server.example.com"). I spent the whole day trying to understand it, but my guess is this feature only works if you purchase a domain.
If anyone can help me with this, I'd be so grateful :)
https://redd.it/118k9h3
@r_devops
Exposing Azure Storage on Domain Apex With Let's Encrypt SSL via Terraform
https://ssmertin.com/articles/exposing-azure-storage-on-domain-apex-with-letsencrypt-ssl/
https://redd.it/11a990v
@r_devops
Saltstack crystal enterprise VMware
The setup for SaltStack Crystal Enterprise seems broken: during the installation process it wants to connect to the internet? If Crystal is meant for offline use, how are they using pip to install software that requires internet access? Note: I used 8.10 and 8.11. Do you need to set up all 4 systems, or can it be installed on a monolithic VM?
https://redd.it/11aa4e7
@r_devops
Best AWS IAM approach for isolated logging and backup accounts?
I'm aware that per best practices, logs and backups should happen in isolated accounts where the writers don't have any permission to delete or permanently overwrite them. However, I'm unsure how to best implement that. We have IAM policies much stricter than AWS's suggestions (no wildcards etc) thanks to Terraform, and I shudder a bit when I think about implementing that in a cross-account scenario.
As a rather lax approach, I thought about just giving the writing accounts blanket access to all write operations for the log data itself, but no delete operations, and enforcing bucket versioning and such to prevent overwriting data. That would make it fairly simple to migrate our existing policies and configuration, as I could basically just replace some ARNs and be done with it. There may of course be a lot of non-obvious pitfalls with that, though.
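To make that "write but never delete" idea concrete, here is the rough shape such a bucket policy could take, expressed as a Python dict. The bucket name and account ID are placeholders, and this is a sketch to reason about, not a vetted policy:

```python
import json

# Hypothetical policy for a log-archive bucket in an isolated account:
# a writer account may add objects, but nothing may delete them.
log_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # writers in the source account may add objects...
            "Sid": "AllowCrossAccountWrite",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-log-archive/*",
        },
        {   # ...but nobody may delete objects or their versions
            "Sid": "DenyDelete",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": "arn:aws:s3:::example-log-archive/*",
        },
    ],
}

print(json.dumps(log_bucket_policy, indent=2))
```

With bucket versioning enabled, an overwriting PutObject creates a new version instead of destroying data, so the explicit Deny on the delete actions is what closes the loop.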
Unfortunately, I haven't found any guides on the big picture stuff like that, just either very specific per-service instructions from AWS or the usual blogspam that may or may not contain a single example cross-account policy.
How do you implement that at your organization?
For reference, our modules for service X also set up log groups, KMS, IAM, S3 logging buckets etc. for that service (or instance etc.) to make it a complete package. KMS CMK encryption of everything is a requirement, but not very fine-grained - some things such as "generic Lambdas" use a common CMK for everything (we only use them as glue so far), while more substantial things use the same per-instance / cluster keys for encrypting both their data and their logs.
https://redd.it/11ac6so
@r_devops
KEDA + SQS
Hey all,
I was wondering if anyone here has had experience using KEDA to scale a deployment based on an SQS queue.
It's really a simple setup, but KEDA keeps behaving strangely and inconsistently for us. The ScaledObject is configured to scale on in-flight messages, but as soon as the number of messages in the queue reaches 0, KEDA thinks it's OK to scale back to the minimum, even though there are hundreds if not thousands of messages in flight.
So the scaling goes roughly 100->200...->800->600...->100 instead of 100->200...->800->700->600->700... This obviously leads to messages accumulating in the queue and our Grafana graphs looking like mean saws.
Maybe I'm not understanding the KEDA configuration correctly (since it doesn't actually scale the deployment itself and only controls the HPA), but scaleOnInFlight seems pretty clear to me in terms of what it is supposed to do.
Also, it actually was behaving as we wanted it to, but the weirdness started after a blue-green cluster switch. All configs / ScaledObjects / other manifests are the same, though. And I don't think it's SQS throttling / limits, since the KEDA operator would be reporting errors, which it isn't.
Any ideas / help appreciated.
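For what it's worth, the symptom matches a scaler that counts only visible messages: once the queue shows 0 visible, the computed replica target collapses even though thousands are still in flight. A toy illustration of the difference (the numbers are invented):

```python
import math


def desired_replicas(visible: int, in_flight: int, per_pod: int,
                     count_in_flight: bool, min_replicas: int = 1) -> int:
    """HPA-style target: ceil(backlog / messages handled per pod)."""
    backlog = visible + (in_flight if count_in_flight else 0)
    return max(min_replicas, math.ceil(backlog / per_pod))


# Queue drained to 0 visible messages, but 4000 are still being processed:
print(desired_replicas(0, 4000, 10, count_in_flight=False))  # 1 (scales to minimum)
print(desired_replicas(0, 4000, 10, count_in_flight=True))   # 400 (holds capacity)
```

If the observed behavior matches the first case, it would be worth verifying which SQS metric the scaler is actually querying after the cluster switch (e.g. via the KEDA operator's logs or the HPA's reported metric value).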
https://redd.it/11a8z00
@r_devops
What tools do you use to manage/handle your pull requests ?
I'm using GitHub, and sometimes I struggle with managing, merging, and generally handling all the PRs needed in my repo (dependencies, bots, etc.). It can take a lot of time to do all this.
I saw that GitHub was rolling out a "Merge Queue", but it doesn't seem finished at all. Are there other tools, applications, or methodologies you use to make your GitHub workflow more efficient and less frustrating?
https://redd.it/1183yjt
@r_devops
Thoughts on logs gathering
Hi everyone,
The large company in question has 100s of load balancers active in 10s of regions, and they don't keep access logs for them, which makes it difficult for us to inventory the API endpoints linked to them. Enabling access logging would cost the company a huge sum of money on an ongoing basis and is off limits.
For us, the cost would be fine, since we could just sample 1:100 (or any other ratio) of the lines from their logs, but it would be a problem for them. Do you know, by any chance, how we can gather access logs from a load balancer without costing the company a lot of money?
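One cheap approach, purely as a sketch: sample deterministically at the log shipper by hashing each line (or a stable field like the request ID) so roughly 1 in N lines survive, which keeps storage cost proportional to the sampling ratio:

```python
import hashlib


def keep_line(line: str, ratio: int = 100) -> bool:
    """Deterministically keep ~1/ratio of lines based on a stable hash."""
    digest = hashlib.sha256(line.encode()).digest()
    return int.from_bytes(digest[:4], "big") % ratio == 0


# Simulated access-log lines (format invented for illustration):
lines = [f"req-{i} GET /api/v1/items 200" for i in range(10_000)]
sampled = [line for line in lines if keep_line(line)]
print(len(sampled))  # roughly 100 of the 10,000 lines
```

Hashing instead of random sampling means the same line is always kept or dropped, so samples are reproducible across shippers; hashing a request ID instead keeps all log lines for a sampled request together.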
Every help is highly appreciated !
Cheers \~
https://redd.it/117ywdm
@r_devops
Need Help with AWS Credits
I have got some AWS credits that expire in about a year.
I'm not sure I will make use of all of them. Is there any way to make good money from those credits? Can anyone give me some ideas?
https://redd.it/117xgeq
@r_devops
Question about remote app delivery
Hey all, I manage an infrastructure that hosts multiple engineering applications like Ansys. This is currently delivered through Microsoft RDS RemoteApp, but that solution is end of life.
I'm looking for ways to deliver high-computation applications to low-end machines through something like Remote Desktop or a browser-based solution.
Do you peeps have any suggestions?
Cheers
https://redd.it/11ambtw
@r_devops
Done with the basics. Now what?
A couple of months back I asked about how I can increase my chances of becoming a DevOps Engineer.
After getting mixed responses, I chose to stick with the ones that best suited my interests while keeping the negative ones at the back of my mind.
I am a fresher (2020 grad) who has done some projects and will soon appear for the AWS Solutions Architect certification exam.
Here are a few projects I have mentioned in my CV
● AWS LIFT AND SHIFT :
○ Successfully migrated a Java web application to the AWS cloud using a variety of AWS services, including EC2, ELB,
Auto-Scaling Group, Route 53, CloudFront, Elastic Beanstalk, CloudWatch, RDS, ElastiCache, and AmazonMQ,
showcasing strong knowledge of cloud infrastructure and best practices.
○ Re-architected the application using Elastic Beanstalk and CloudFormation, demonstrating knowledge of PaaS, SaaS andIaC along with Implementing security best practices by using Security Groups and Key-Pairs, showing understanding ofcloud security.
○ Improved performance and scalability by using ElastiCache and AmazonMQ, demonstrated knowledge of messaging
queues and caching and showcased ability to work with multiple technology stacks including AWS, Maven, Java, Nginx,
Apache Tomcat, RabbitMQ, Memcached, MySQL, and Linux.
● CI/CD WITH JENKINS:
○ Successfully implemented a continuous integration and delivery pipeline using Jenkins, SonarQube, Docker, Nexus, ECS, Fargate, and ECR, showcasing strong knowledge of pipeline management and best practices.
○ Used Jenkins to automate the build, test, and deployment of software while maintaining a Jenkins controller-agent (formerly master-slave) architecture to improve build efficiency and scalability, demonstrating experience with load balancing and distributed systems. Incorporated SonarQube to ensure code quality and security, demonstrating an understanding of code analysis and how to detect and prevent software bugs and vulnerabilities.
○ Utilized Docker, Nexus, and ECR to containerize the application and improve its portability and scalability, showcasing knowledge of containerization and microservices.
● CI/CD WITH AWS:
○ Demonstrated experience building a complete CI/CD pipeline using AWS services such as CodeCommit, CodeBuild, CodeDeploy, Elastic Beanstalk, RDS, and CodePipeline, showcasing knowledge of AWS services and how they integrate to form a pipeline.
○ Utilized AWS CodeBuild and CodeDeploy to automate the build and deployment process, showcasing experience with continuous integration and delivery.
○ Improved the performance and scalability of the application by integrating Elastic Beanstalk, RDS, and CodePipeline, showcasing knowledge of PaaS, IaaS, and best practices. Also implemented version control and build processes using CodeCommit and CodeBuild, and streamlined deployment by automating it with CodeDeploy and Elastic Beanstalk, showcasing knowledge of deployment automation and best practices.
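Both CI/CD projects follow the same build → test → deploy shape, with each stage gating the next. As a toy illustration of that control flow (this is not Jenkins or CodePipeline itself; the stage names and the `run_pipeline` helper are hypothetical), a minimal sketch in Python:

```python
# Toy CI/CD stage runner: executes stages in order and stops at the
# first failure, mimicking how Jenkins or CodePipeline gates each stage.
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run each (name, stage) pair in order; return names of completed stages."""
    completed = []
    for name, stage in stages:
        if not stage():  # a stage returning False fails the pipeline
            print(f"stage '{name}' failed; aborting")
            break
        completed.append(name)
    return completed

if __name__ == "__main__":
    result = run_pipeline([
        ("build", lambda: True),   # e.g. mvn package
        ("test", lambda: True),    # e.g. mvn test + SonarQube scan
        ("deploy", lambda: True),  # e.g. push image to ECR, update ECS service
    ])
    print(result)  # ['build', 'test', 'deploy']
```

In a real pipeline each lambda would be a shell step (e.g. via `subprocess.run`), and the fail-fast behavior is what prevents a broken build from ever reaching the deploy stage.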
Also, here are a few technologies I am currently learning:
Ansible, Terraform, Kubernetes, Docker, and Jenkins.
Also, my Linux basics are good enough, and I know Bash scripting and Python, so my automation skills are okay-ish, I guess.
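As a small example of the kind of Python automation that pairs well with those Linux and Bash basics, here is a hedged sketch of a disk-usage check (the 90% threshold and the function names are arbitrary choices for illustration):

```python
# Simple ops automation: warn when a mounted filesystem is nearly full.
import shutil

def disk_usage_percent(path: str) -> float:
    """Return used space at `path` as a percentage of total capacity."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.used / usage.total * 100

def check_disk(path: str = "/", threshold: float = 90.0) -> bool:
    """Return True if usage at `path` is below the alert threshold."""
    percent = disk_usage_percent(path)
    print(f"{path}: {percent:.1f}% used")
    return percent < threshold

if __name__ == "__main__":
    if not check_disk("/"):
        print("WARNING: root filesystem above threshold")
```

The same check is a one-liner in Bash with `df`, but the Python version is easier to grow into something that alerts over Slack or pushes a CloudWatch metric.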
P.S. I submitted my resume at multiple locations and want to let you guys know that I am getting calls from HR. For some reason, most HRs only realise that I do not hold industry-level experience once they call me, and then reject me for that. Also, I must say, I have 2 job offers. At this point, I would kinda like a few more offers so that I have some options.
Based on my projects, do you guys have any feedback or guidance?
https://redd.it/11amxhd
@r_devops
ChatGPT got it ALL WRONG !!
I asked ChatGPT to write a Dockerfile for a Node.js application.
I kept coming up with related questions and asking for solutions until it brought up HTTPS, which wasn't valid.
I searched to see if anyone else was facing the same issue and found this article, which mentions a similar problem.
Like, seriously, am I the only one over here facing this issue??
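For comparison, a minimal hand-written Dockerfile for a Node.js app of the kind the post describes might look like this (a sketch with assumed file names — `package.json`, `server.js` — and an assumed port 3000; adjust to the actual app):

```dockerfile
# Small, cache-friendly Node.js image: copy the manifests and install
# dependencies first, so that layer is reused when only app code changes.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

TLS termination would normally live outside the container (a load balancer or reverse proxy), which is one reason an HTTPS-related suggestion inside a Dockerfile is a red flag.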
https://redd.it/11alsdp
@r_devops
dbi Blog
ChatGPT vs DevOps
As a lot of curious people, I have experimented with ChatGPT in order to figure out if it could be helpful in my work as a DevOps consultant or even replace me (why not?). If you are also interested to see how ChatGPT compares with an Oracle DBA, you can…