Scalable multi-environment logging?
Hey all, I'm currently looking at making some changes to our company's dev logging infrastructure. We have testing environments which can be created and destroyed at will, and there can be any number of them at any given time. Basically, a developer chooses a branch to deploy, a new EC2 instance is created, and the application stack is started up in Docker. Currently, each of these environments has its own ELK stack.
What I'm looking to do is remove the ELK stack from each of these environments. I'm trying to do some research on solutions which would take in the logs and make them easily accessible to the developers.
There are quite a few solutions available, so I'm hoping some of you might have some experience or insight into something like this. What do you all think?
https://redd.it/sfqgh1
@r_devops
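One common pattern here (a hedged sketch, not a product recommendation): point each ephemeral instance's Docker daemon at a central log collector, so the environments themselves carry no ELK stack. The collector address and tag template below are hypothetical placeholders, and this assumes a Fluentd/Fluent Bit style endpoint on the receiving side:

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "logs.internal.example.com:24224",
    "tag": "dev-env.{{.Name}}",
    "fluentd-async": "true"
  }
}
```

Dropped into /etc/docker/daemon.json at instance boot, every container's stdout/stderr is shipped off-box automatically, so tearing the environment down loses nothing.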
Kubespray vs. Rancher vs. Cloud Managed Kubernetes
I work at a small company, and they want to keep costs low. The app is a game server, so not really the standard stateless web app. I am wondering what the best way to deploy our Kubernetes clusters would be, in a way that is reproducible and simple.
I guess it's a question of cost vs. complexity. My company is thinking of using bare-metal servers and operating Kubernetes on them manually, using either Kubespray or Rancher, or maybe a custom Ansible playbook. The other option is to use a cloud provider's managed Kubernetes. Would that really cost that much more?
From my research of Kubespray vs. Rancher, it seems Rancher is simpler and more well-liked, but the simplest solution of all would be cloud-managed Kubernetes.
Is there anything to take into consideration about our specific scenario, or any advice?
Thanks
https://redd.it/sfrvs3
@r_devops
Share your loki config!
It seems Loki is a bit tricky to configure for aggressive log searching (~10 GB/day). Looking for a good Helm chart config!
https://redd.it/sfq6it
@r_devops
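For what it's worth, the knobs that usually matter for query speed live in Loki's `limits_config` and `querier` sections; how they nest in Helm values differs between the loki, loki-distributed and loki-stack charts, so treat this fragment as illustrative rather than copy-paste:

```yaml
# values.yaml fragment (loki-stack style nesting assumed; keys are from
# Loki's own config reference, values are arbitrary starting points)
loki:
  config:
    limits_config:
      max_query_parallelism: 32        # fan a query out across more workers
      split_queries_by_interval: 30m   # shard long time ranges into sub-queries
    querier:
      max_concurrent: 16               # concurrent sub-queries per querier
```

At ~10 GB/day the query path is usually the bottleneck rather than ingestion, so parallelism settings tend to pay off more than ingester tuning.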
Why do you need one more utility for data aggregation and streaming?
# Dive into the problem
Several years ago I started developing a SIP server. The first problem I encountered: I knew nothing about SIP.
The proper way is to learn SIP by studying the theory, but I don't like studying; I like investigating!
So I started with an investigation of a simple SIP call.
The next problem I encountered was how many servers (or microservices) are needed to make a simple SIP call: approximately 20.
20 servers! It means that before you hear anything on the IP phone, more than 20 servers have to be traced, each of them doing work on your call!
How do you trace one SIP call? You have several options:
1. Set up an ELK stack in your microservices environment and investigate the logs after the SIP call
2. Pull whatever information you need via ssh
3. Write your own investigation utility
# Daggy - Data Aggregation Utility and C/C++ developer library for catching data streams
What's wrong with the first two options?
An ELK stack looks good, but:
1. What if you want to look at, for example, tcpdumps, and ELK doesn't aggregate them?
2. What if you don't have ELK at all?
On the other hand, via ssh and the command line you can do anything, but what if you need to aggregate data from over 20 servers and run several commands on each server? That task turns into a bash/powershell nightmare.
So, several years ago, I wrote a utility that can:
1. Aggregate and stream data via command-line commands from multiple servers at the same time
2. Save each aggregation session into a separate folder, streaming each aggregation command's output into a separate file
3. Define data aggregation sources simply, so they can be reused
# Is this about devops?
Often, in distributed network systems, you need to capture data to analyze and debug user scenarios. But server-based solutions for this can be expensive: adding a new type of data capture to your ELK system is not simple. On the other hand, you may want to capture binary data, like tcpdumps, during a user scenario's execution. In these cases daggy will help you!
https://github.com/synacker/daggy
https://redd.it/sfr5eu
@r_devops
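For readers who want the flavor of points 1 and 2 without the tool, here is a minimal Python sketch (purely illustrative, not daggy's actual implementation; local shell commands stand in for per-server ssh sessions):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def aggregate(session_dir, sources):
    """Run every (name, command) pair concurrently and stream each
    command's output into its own file inside the session folder."""
    session = Path(session_dir)
    session.mkdir(parents=True, exist_ok=True)

    def run(item):
        name, command = item
        # In the real tool this would be an ssh channel to a remote host.
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        (session / f"{name}.log").write_text(result.stdout)
        return name

    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        return list(pool.map(run, sources.items()))

# Example: two "servers", one command each
done = aggregate("session-001", {
    "server1": "echo hello from server1",
    "server2": "echo hello from server2",
})
```

Each session lands in its own folder and each command in its own file, which is the behavior described above.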
Recommended courses for CKA certification
Hi guys
I want to certify myself for the Certified Kubernetes Administrator. The course I want to use to prepare myself is the one on Udemy from Kodekloud.
Do you guys recommend this course or any other courses?
Thnx!
https://redd.it/sfr2uw
@r_devops
Common avenues for reducing waste in AWS (Specifically EC2)
I'm tasked with collecting data on CPU and memory usage in EC2 and trying to figure out the best way to eliminate wasted capacity. I've got data on a few thousand instances and can see plenty of examples of boxes that run at low CPU and memory utilization (we usually tell the owners of those boxes to either scale down or containerize). What are some common ways to look for waste in your AWS resources? We're also working on incorporating the Trusted Advisor report into our thinking.
https://redd.it/sfxaf1
@r_devops
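As a sketch of the flagging logic described above (thresholds are arbitrary, and in practice the peaks would come from CloudWatch metrics rather than a hard-coded dict):

```python
# Flag instances whose peak CPU and memory both stay under a threshold,
# suggesting they are candidates for downsizing or containerizing.
CPU_THRESHOLD = 20.0   # percent, peak over the lookback window (arbitrary)
MEM_THRESHOLD = 30.0   # percent (arbitrary)

def underutilized(instances, cpu_max=CPU_THRESHOLD, mem_max=MEM_THRESHOLD):
    """instances: mapping of instance id -> dict with 'cpu_peak' and
    'mem_peak' percentages gathered from your metrics source."""
    return sorted(
        iid for iid, m in instances.items()
        if m["cpu_peak"] < cpu_max and m["mem_peak"] < mem_max
    )

sample = {
    "i-0aaa": {"cpu_peak": 12.5, "mem_peak": 18.0},   # idle box
    "i-0bbb": {"cpu_peak": 85.0, "mem_peak": 60.0},   # busy box
    "i-0ccc": {"cpu_peak": 5.0,  "mem_peak": 55.0},   # memory-bound
}
flagged = underutilized(sample)   # only i-0aaa passes both checks
```

Using peaks rather than averages avoids flagging boxes that idle most of the day but spike under real load.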
Trunk-based Development, PRs and CI Question
I've been having conversations today that have me looking at my pipelines again.
They are currently based on what I thought was considered to be trunk-based development:
1. Develop locally in `trunk`; fetch and rebase on trunk before pushing to remote
2. If everything looks good, `git push origin trunk:short_lived_feature_branch`, since remote `trunk` is protected / locked
3. Open a PR; CI pipelines run automated testing and a code reviewer reviews to make sure trunk does not break and coding practices are being followed
4. If approved, the `short_lived_feature_branch` is merged to `trunk` and deleted
5. The merge to trunk triggers the CD pipeline
But I was told that isn't really trunk-based development.
In "pure" trunk-based development, you'd be pushing directly into the remote trunk, which would then run CI, and there wouldn't even be a PR.
I'm having trouble wrapping my brain around how that would work.
I use Azure DevOps, and if I push directly into trunk, my changes are there immediately. This does trigger the CI pipeline, but it could be several minutes before an issue is detected. Meanwhile, the changes are in trunk and other developers could have fetched and rebased from it.
In Azure DevOps, you can have branch policies and build validations, but those only apply to PRs and have to be turned off to push directly to trunk.
Hoping someone can explain how this "pure" trunk-based development would be implemented in a way that doesn't turn into a shit show of developers pulling bad code and then having to be told it needs to be reverted.
Going down a rabbit-hole at this point...
https://redd.it/sfwa1i
@r_devops
Next Generation Shell (NGS)
I thought the r/devops subreddit might be interested in this project I just found!
https://github.com/ngs-lang/ngs
https://redd.it/sfz6uv
@r_devops
aws nginx handle two api locations?
Any help is appreciated. I'm trying to run 2 node express servers on 2 ports on an AWS instance with NGINX.
Any help is appreciated. I'm trying to run 2 node express servers on 2 ports on an AWS instance with NGINX.
Prod HTTP 404s with URL: `/api-new/servermembers/some-email-address`
But in local dev it works with `https://127.0.0.1:8080/api-new/servermembers/some-email-address`
Requests to the original `/api` URL still work. My nginx config:
server_name xxxx.xxxx.com; # managed by Certbot
root /home/ubuntu/discord-bot/web/client/public;
rewrite ^/([^/.]+)$ /$1/index.html break;
error_page 404 /404/index.html;
location /api-new {
proxy_pass https://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_bypass $http_upgrade;
}
location /api {
proxy_pass https://127.0.0.1:8222;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_bypass $http_upgrade;
}
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
https://redd.it/sfw397
@r_devops
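For reference on the routing itself: with prefix locations, nginx picks the longest matching prefix, so `/api-new/...` should be handled by the `location /api-new` block rather than `location /api`. One way to confirm which block (and therefore which backend) produced the 404 is to tag each location with a response header (the header name below is arbitrary, and the fragments omit the other proxy settings from the config above):

```nginx
location /api-new {
    add_header X-Debug-Route api-new always;   # "always" keeps it on error responses
    proxy_pass https://127.0.0.1:8080;
}

location /api {
    add_header X-Debug-Route api always;
    proxy_pass https://127.0.0.1:8222;
}
```

If the 404 response carries `X-Debug-Route: api-new`, the request reached the right block and the 404 is coming from the Node app on :8080, not from nginx.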
How do you explain your job to people so they can understand it generally (and not bore them)?
So I am basically a combo of DevOps + sysadmin, leaning more to the DevOps side. Usual stuff: integrate databases, make dashboards, move services to cluster infrastructure, and create a CI/CD framework, among other duties.
I can't for the life of me find a way to explain what I do, and if I try it's a conversation stopper.
How would you explain your job (or mine) to someone, if they seem interested enough to ask a follow up question about it?
https://redd.it/sg3bp7
@r_devops
1️⃣0️⃣0️⃣,0️⃣0️⃣0️⃣ Subscribers - John Savill's Technical Training Channel - THANK YOU!
Saturday morning, I hit a goal I've had since I really started to focus on my YouTube channel two years ago: to help as many people as possible with their IT and career goals. That goal:
1️⃣0️⃣0️⃣,0️⃣0️⃣0️⃣ subscribers 🎉
I feel truly blessed to be able to help so many people, and for the amazing support the channel has seen. So sincerely, THANK YOU 🙏
For those that don't know my channel I focus on Azure, DevOps, PowerShell with some other technology and mentoring thrown in.
There are ZERO adverts, memberships or upsells on the channel. Nothing to distract you from being the best you can be.
Some key content for people new to the channel is below but there are over five hundred videos ranging from deep dives to overviews.
📖 Recommended Learning Path for Azure
🔗 https://learn.onboardtoazure.com
🥇Certification Content Repository
🔗 https://github.com/johnthebrit/CertificationMaterials
📅 Weekly Azure Update
🔗 https://youtube.com/playlist?list=PLlVtbbG169nEv7jSfOVmQGRp9wAoAM0Ks
☁ Azure Master Class
🔗 https://youtube.com/playlist?list=PLlVtbbG169nGccbp8VSpAozu3w9xSQJoY
⚙ DevOps Master Class
🔗 https://youtube.com/playlist?list=PLlVtbbG169nFr8RzQ4GIxUEznpNR53ERq
💻 PowerShell Master Class
🔗 https://youtube.com/playlist?list=PLlVtbbG169nFq_hR7FcMYg32xsSAObuq8
🎓 Certification Cram Videos
🔗 https://youtube.com/playlist?list=PLlVtbbG169nHz2qfLvPsAz9CnnXofhmcA
🧠 Mentoring Content
🔗 https://youtube.com/playlist?list=PLlVtbbG169nGHxNkSWB0PjzZHwZ0BkXZZ
❔ Question about my setup?
🔗 https://youtube.com/playlist?list=PLlVtbbG169nHuSSHudxXDdn9Vz3T4-0mS
👕 Cure Childhood Cancer Charity T-Shirt Channel Store
🔗 https://johns-t-shirts-store.creator-spring.com/
SUBSCRIBE ✅ https://www.youtube.com/channel/UCpIn7ox7j7bH_OFj7tYouOQ?sub_confirmation=1
So, one final THANK YOU!
🤙
https://redd.it/sg65gp
@r_devops
Recommended courses for beginners
Hello, can you guys recommend some beginner-level courses for DevOps on Udemy or something like that? Thanks!
https://redd.it/sg937a
@r_devops
First junior DevOps interview - Advice needed
Hi fellows!
I recently started applying for devops jobs here in Europe. After applying to a couple of offers, I got a link to a Python knowledge test. I was able to solve 6 questions out of 8. After that (I think I met the minimum requirement) I had the first intro interview with HR, and there is an upcoming technical interview for me soon. As the recruiter mentioned, during 60 minutes I will need to solve some Python problems and explain my methods, etc. During the intro interview, I was asked if I have experience with Flask and Django. I do not have experience with them, just the basics. I have a couple of days before the technical interview. Should I dive into Django and Flask immediately, or strengthen my knowledge of data structures and algorithms?
By the way, I am a self-paced learner, no CS degree. I am good at Linux, bash scripting, and the foundations of AWS (currently on the learning path for Solutions Architect Associate).
Just need your advice, as it is the first devops technical interview I will have.
The job description looks like this:
On Your First Day, We Expect You To Have
Scripting experience (Python, Bash, etc.) - we don't want you doing repetitive work
Practical knowledge of Python web frameworks (Django and/or Flask)
Basic knowledge of Docker
Good communication skills with fluent English (both spoken and written)
A collaborative spirit - in our world, it’s not about having all the answers, it’s about sometimes saying "I don't know" and working on finding solutions rather than starting with an assumption
Required
It's great – but not required – for you to have
Knowledge of a configuration management tool (for example Ansible/Puppet/Chef/Terraform)
Experience with integrating services using REST interfaces or similar techniques
JS (React preferred) experience
Hands-on experience with cloud infrastructure such as AWS
Basic understanding of Kubernetes
It's a long list, but your teammates will guide you through onboarding and you'll have enough time to become familiar with our tools, processes and people.
Thank you very much in advance!
https://redd.it/sgbhl2
@r_devops
Code Scanning Solution in CI/CD - cloud vs. "on premise". DevOps views needed.
Hi,
I run a SaaS/startup called Scanmycode.today.
It checks code for best practices and code quality. More on the website.
Briefly: you can plug in and use many tools in the solution, as long as they produce JSON output. Currently it uses many tools, mostly open source, to produce one report. That should be the main value. There is also the ability to enable/disable each individual check, collaborate on findings, and run fast scans using snapshots (only new code is scanned on subsequent iterations).
Other tools are proprietary and, I imagine, very expensive.
Checks also cover security areas (OWASP Top 10).
The idea was to save users time in finding the tools and integrating them. A typical integration would just show each tool's output separately.
From everybody I talked to, uploading code to it was a concern. So I want to open source it and make an on-premise version.
I am thinking of creating a community edition, an open-sourced version of the full package, under LGPL-2.1.
More here: https://tldrlegal.com/license/gnu-lesser-general-public-license-v2.1-(lgpl-2.1)
With the Commons Clause
More here:
https://commonsclause.com/
Meaning you will get the source, but you cannot sell it or make your own SaaS out of it.
This gives 100% transparency into the Scanmycode code, and in the case of on-premise deployments (laptop, server) you fully control your codebase. Run it via Docker; one command to spin it up.
Organizations could still get GitHub and organization-integration plugins and/or other plugins, and contribute, on a case-by-case basis.
I think the open source scanners, a single report, many checks, and the possibility to add your own via tools and semantic greps make the solution unique on the market.
Gauging the interest now.
Looking to commercialize through other optional plugins (e.g. GitHub, GitHub organizations), and maybe support and donations via https://github.com/sponsors, https://opencollective.com/, https://www.buymeacoffee.com/
What do you think about the idea?
Would you use it?
As the DevOps person responsible/advising, would you approve it? Which variant?
Or would you keep it closed source, as it is now?
What could be my advantages and disadvantages in both situations?
Thanks,
https://redd.it/sgepdg
@r_devops
Docker Desktop's Grace period has ended
> Hello,
> As a reminder you’re receiving this email because on August 31, 2021 we updated the terms applicable to the Docker products or services you use.
> On January 31, 2022, the grace period ends for free commercial use of Docker Desktop in larger enterprises. Companies with more than 250 employees OR more than $10 million USD in annual revenue now require a paid subscription to use Docker Desktop. Read the blog or visit our FAQ to learn more about these updates.
> What you need to know:
> Docker Desktop remains free for personal use, education, non-commercial open source projects, and small businesses with fewer than 250 employees AND less than $10 million USD in annual revenue.
> By continuing to use Docker, you are agreeing to the new Docker Subscription Service Agreement.
> For organizations requiring Single Sign-On (SSO), it is now generally available for Docker Business subscribers.
> To purchase a Docker subscription, visit our pricing page to compare subscription tiers, starting at just $5 per month, per user on an annual basis. For organizations with more than 50 users requiring an invoice, contact sales.
> Thank you,
> The Docker Team
I am not part of any of the exception groups mentioned above. What should I migrate to?
https://redd.it/sggg5b
@r_devops
AWS DevOps Engineer Professional Certification - 2022 Exam Prep
https://kanger.dev/aws-certified-devops-engineer-professional-courses-exam/
https://redd.it/sgeiqo
@r_devops
Are you facing the problem of increasing cloud costs every month?
Are you facing the problem of increasing cloud costs every month?
View Poll
https://redd.it/sftt6x
@r_devops
upgrading old version of gitlab
I have a really old version of self-hosted GitLab, 12.7, and I need to upgrade it to the latest release.
Has anybody attempted an upgrade from this or a similar version? I suspect I'd be better off creating a new instance rather than upgrading in place, but then I'd need to migrate all the data (projects, repositories, etc.).
I'm looking to find out if anybody has gone through either of these exercises and what they'd recommend.
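For reference, GitLab does not support jumping straight from 12.7 to the latest release; you have to step through a documented chain of intermediate versions, running migrations at each stop. A rough sketch using the Omnibus Docker images (the version list and volume paths below are illustrative — verify the required stops against GitLab's official upgrade-path documentation before running anything):

```shell
# Take a full backup first, from inside the current GitLab instance.
gitlab-backup create
# Also copy /etc/gitlab/gitlab-secrets.json and /etc/gitlab/gitlab.rb
# somewhere safe -- the backup archive does not include them.

# Step through intermediate versions; GitLab cannot skip required stops.
# Illustrative path from 12.7 -- check the official upgrade path docs.
for VERSION in 12.10.14 13.0.14 13.1.11 13.8.8 13.12.15 14.0.12; do
  docker pull "gitlab/gitlab-ce:${VERSION}-ce.0"
  docker stop gitlab && docker rm gitlab
  docker run -d --name gitlab \
    --volume /srv/gitlab/config:/etc/gitlab \
    --volume /srv/gitlab/logs:/var/log/gitlab \
    --volume /srv/gitlab/data:/var/opt/gitlab \
    "gitlab/gitlab-ce:${VERSION}-ce.0"
  # Wait for background migrations to finish before the next hop, e.g.:
  # docker exec gitlab gitlab-rake db:migrate:status
done
```

The migrate-instead option works too (export each project and import it into a fresh instance), but stepwise upgrades preserve users, permissions, and CI history in one go.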
https://redd.it/sgmdr2
@r_devops
Is there something to log all alerts of rancher 1.6 like in rancher 2.x?
Hello, I need to log all the alerts in Rancher 1.6, but I cannot find any documentation about it.
In Rancher 2.x there are project alerts, Tools > Alerts, etc.
And no, I cannot upgrade; that's not my decision.
Can you help me? Thank you.
https://redd.it/sgwvdj
@r_devops
Docker Hub alternative for base images
A while ago, Docker announced another limit: anonymous users are now allowed no more than 100 pulls every 6 hours.
I have already stopped using Docker Hub to store my images in private repositories, but the problem is that for image builds I still use base images from Docker Hub, building from a shared environment (Azure DevOps Microsoft-hosted agents and GitHub Actions hosted runners). In that situation there is no guarantee the environment hasn't already exceeded the limit.
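As an aside, you can check how close a given environment is to the limit by requesting an anonymous token for Docker's rate-limit preview repository and inspecting the rate-limit headers (this follows Docker's documented check; it needs `curl` and `jq`, and a HEAD request should not itself count as a pull):

```shell
# Fetch an anonymous pull token for the ratelimitpreview/test repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# HEAD the manifest and read the ratelimit-limit / ratelimit-remaining headers
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
  | grep -i ratelimit
```

On a shared CI runner this tells you whether the runner's shared IP has already burned through the anonymous allowance.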
As a result, I made a demo repository that uses GitHub Packages to store base images built from scratch. It currently contains ubuntu and alpine images, the workflows are triggered monthly, and the images can be pulled anonymously.
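For anyone wanting something similar, a minimal GitHub Actions workflow along these lines could look as follows. Note this sketch simply retags and mirrors an upstream image into GHCR (which still pulls from Docker Hub once per run), whereas the approach above builds images from scratch and avoids Docker Hub entirely; all names and the schedule here are illustrative.

```yaml
name: mirror-base-images
on:
  schedule:
    - cron: "0 3 1 * *"   # once a month
  workflow_dispatch: {}

jobs:
  mirror:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Mirror ubuntu base image
        run: |
          docker pull ubuntu:22.04
          docker tag ubuntu:22.04 ghcr.io/${{ github.repository_owner }}/ubuntu:22.04
          docker push ghcr.io/${{ github.repository_owner }}/ubuntu:22.04
```

Downstream Dockerfiles then reference `ghcr.io/<owner>/ubuntu:22.04` instead of the Docker Hub image, so CI pulls never touch Docker Hub's anonymous quota.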
https://redd.it/sh0mle
@r_devops
How to Develop Software 10x Faster with DevOps
https://levelup.gitconnected.com/how-to-develop-software-10x-faster-with-devops-ee43ca6d20af?sk=ce6345bc32022854ae9762c6938a830c
https://redd.it/sh0w88
@r_devops