Advice on stopping a persistent attack on one of our API endpoints
Over the past couple of months we've had a persistent attacker trying to flood one of our API endpoints. This specific endpoint is our identity provider path. Our site is behind Cloudflare and we have a variety of WAF rules: rate limiting, blocking by country, blocking by bot score, and blocking by JA3 fingerprint. We additionally have F5 fingerprinting JS on the frontend.
The specifics of the attack: all IPs are US based, thousands of unique IPs, all coming from large US providers (Comcast, Verizon, etc.), all using valid user agents. I'm really having trouble finding anything that makes these requests unique. Our team is heavily on the infra side, but at this point I think we may need to make a functional change in our application and suggest that path to our backend devs... I'm just not sure what that solution looks like. Right now it's a cat-and-mouse game that will keep going. Also, as this is our identity endpoint, a partner controls a portion of it, so we are a bit limited in some aspects.
https://redd.it/1d7k13b
@r_devops
How to Transition from Backend Development to DevOps Engineering 🔄?
I’ve been a software developer for the past 10 years, focusing primarily on backend development. Recently, I've been thinking about switching gears and moving into the DevOps domain.
I’m planning to kick things off with a DevOps course from KodeKloud and aim to complete the CKAD certification afterward.
For those who have made a similar transition or are already in the DevOps field, I have a few questions:
1. Will completing this course and certification be enough to land senior roles in DevOps?
2. What additional skills or experiences should I focus on to make myself a strong candidate for these roles?
3. Any tips or resources that you found particularly helpful during your own transition?
Also, given my 10 years of experience, I’m not looking to start in a junior role. Any advice on making this transition without having to start over at a junior level would be greatly appreciated!
Thanks in advance for any advice or insights! :)
https://redd.it/1d7phm0
@r_devops
Can I add a project which didn't go live?
So basically, I wrote an Ansible playbook to orchestrate container deployment on our servers. But my lead is reluctant to use Ansible, so it never went live. Can I still show it on my resume?
I actually liked that I can trigger the playbook against multiple environments at the same time, bringing the new containers up with a single command.
https://redd.it/1d7qty2
@r_devops
Software Architecture Diagrams with C4 Model
https://packagemain.tech/p/software-architecture-diagrams-c4
https://redd.it/1d7sz0u
@r_devops
How do you learn advanced techniques like docker performance optimization and parallel testing?
I have trouble grasping these topics at work.
https://redd.it/1d7sxpn
@r_devops
Using nginx to forward UI applications
Hi guys,
Right now I have several different UI apps which are on different domains.
I want to move them all to a single domain and separate them by URL path, for example:
www.foo.bar/grafana
www.foo.bar/rabbitmq
The way I've envisioned this is to use nginx proxy_pass to forward requests to local services with a config like this:
location /grafana/ {
    proxy_pass https://grafana.local/;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header Accept-Encoding "";
    sub_filter_types *;
    sub_filter_once off;
    sub_filter "src=\"" "src=\"grafana/";
}
but I've encountered two problems:
1. HTML tries to download resources from the base domain, not from domain + path. For example, if an element has src="path/style.css", the browser will request www.foo.bar/path/style.css instead of www.foo.bar/grafana/path/style.css. This will obviously fail, as nginx won't know what to do with that request.
This can be dealt with using the "sub_filter" directive (with some pain), so it's not that bad. However, the next problem is much worse.
2. Redirects
The problem is very similar to the previous one. When I go to the Grafana index page it redirects me to the /login path. The issue is that it takes me to www.foo.bar/login and not www.foo.bar/grafana/login. I haven't found any way of dealing with this and it's preventing me from proceeding. Grafana is kind enough to give you a root_url config option, which is made for situations like these, but rabbitmq, kafka-ui and other services simply don't.
Does anyone have experience with stuff like this?
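For the redirect problem specifically, nginx's `proxy_redirect` directive rewrites the `Location` header on 3xx responses from the upstream, which covers services that lack a root_url-style setting. A minimal sketch, reusing the post's example hostnames (the `rabbitmq.local` upstream is assumed):

```nginx
location /rabbitmq/ {
    proxy_pass http://rabbitmq.local/;
    proxy_set_header X-Forwarded-Host $host;
    # Rewrite an upstream "Location: /whatever" into "Location: /rabbitmq/whatever"
    # so server-side redirects stay under the path prefix.
    proxy_redirect / /rabbitmq/;
}
```

Note this only fixes redirects the server emits; URLs generated client-side in JavaScript still need sub_filter or native base-path support in the app.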
https://redd.it/1d7tw8w
@r_devops
Help with Hashicorp Vault Audit Logging to Datadog in Kubernetes
Hi all,
I'm running Hashicorp Vault in Kubernetes and need help with audit logging. Here are the issues I'm facing:
1- Local File Limitation: Vault's audit logs only support local files. If the file gets full, Vault stops servicing requests.
2- Data Export: I need to send these logs to Datadog.
Has anyone managed to:
- Mitigate the local file limitation risk?
- Export Vault logs to Datadog or another platform?
Any advice or solution ideas would be great!
Thanks!
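On the first point: Vault stops servicing requests only when every enabled audit device fails, so enabling a second device removes the single point of failure, and a device writing to stdout is the usual Kubernetes route into Datadog's log collection. A sketch under assumed paths:

```shell
# Vault blocks requests only when ALL audit devices fail, so run two:
vault audit enable file file_path=/vault/audit/audit.log
vault audit enable -path=file_stdout file file_path=stdout  # pod stdout -> Datadog agent log collection

# Rotate the on-disk copy so the volume never fills;
# Vault's file audit device reopens its log on SIGHUP.
cat >/etc/logrotate.d/vault-audit <<'EOF'
/vault/audit/audit.log {
    daily
    rotate 7
    compress
    postrotate
        pkill -HUP vault
    endscript
}
EOF
```

With stdout audit logs, the Datadog agent's normal container log collection picks them up without any extra sidecar.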
https://redd.it/1d7wmre
@r_devops
Gov consulting?
Does anyone here do any government consulting? Specifically part time or as a side hustle? If so, how did you get into it? Did you bid on a contract?
https://redd.it/1d7xp46
@r_devops
I'm the only white guy in my 20 person team. Everyone else is an H1B from India lol
Our company touts diversity, and this feels like it ain't it!
I've interviewed multiple folks who are not of Indian descent and my manager never moves them forward in the process.
It kind of sucks because I definitely do feel left out.
https://redd.it/1d7yuwd
@r_devops
Automate Servers patching across multiple cloud providers
So I've been tasked with finding a long-term solution for automating and centralising the patching of all of our Linux servers across multiple cloud providers. We're currently mostly on AWS and GCP, with some Azure exposure and, to add to the mess, some old on-prem stuff on vSphere.
So far I've been dealing mostly with AWS myself, successfully automating the patching of EC2 instances using built-in functionality like Patch Manager and AWS Automations.
As of today, for some reason, the bosses don't want to patch our servers from within each cloud provider anymore; they're asking for a centralised solution instead, with the goal of patching all of our servers through one unified procedure, regardless of cloud location or operating system (we run RedHat, Debian, Amazon Linux, etc.).
I need to come up with a plan. So far I've been thinking I could set up Ansible playbooks and run them across all the VMs, targeting each operating system; that's the first thing that comes to mind, but I'm not sure how to proceed yet.
Do you have any suggestions/tips as to how you would tackle this? Also is there a service out there already doing this?
Any insight is much appreciated!
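The Ansible route can stay distro-agnostic by using the generic `ansible.builtin.package` module, which resolves to apt/dnf/yum per host. A minimal sketch (the `all_linux` inventory group name is an assumption):

```yaml
# patch.yml - run as: ansible-playbook -i inventory patch.yml
- hosts: all_linux            # hypothetical group spanning AWS, GCP, Azure and vSphere VMs
  become: true
  serial: "25%"               # patch in waves rather than everything at once
  tasks:
    - name: Apply all pending package updates
      ansible.builtin.package:
        name: "*"
        state: latest

    - name: Check for the Debian/Ubuntu reboot marker
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_marker

    - name: Reboot when the distro requests it
      ansible.builtin.reboot:
      when: reboot_marker.stat.exists
```

On RedHat-family hosts the marker file never exists so the reboot task is simply skipped there; a needs-restarting check is the usual equivalent if you want parity.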
https://redd.it/1d7ym6u
@r_devops
Struggling to stop containers in docker
When I try to stop containers in Docker Desktop, the containers just... don't stop. So I end up restarting my computer, which forces them to stop.
The problem is that I am developing in VS Code with Docker. And every couple of hours, VS Code will lose the Docker connection. So I will restart my computer, which 'solves' the problem, but obviously isn't a great solution.
The Docker error VS Code shows is 'Error: Process exited with code 126'.
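Exit code 126 conventionally means "found but not executable", which points at the CLI/daemon connection rather than the containers themselves. While debugging, a commonly used escalation path from a terminal avoids the full reboot (container names are placeholders):

```shell
# 'docker stop' sends SIGTERM, waits (10s by default), then SIGKILL;
# if even that hangs, the daemon itself is likely wedged.
docker stop -t 5 mycontainer     # shorter grace period
docker kill mycontainer          # SIGKILL immediately
docker rm -f mycontainer         # force-remove a stuck container

# Restarting only the Docker backend is faster than rebooting the machine:
#   Windows (WSL2 backend):  wsl --shutdown   then relaunch Docker Desktop
#   macOS:                   "Restart" in the Docker Desktop menu
```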
https://redd.it/1d7zuy1
@r_devops
Consulting Educational Resources
I have recently been asked by a former director to consult part time for a company that I used to work for. The initial contract would be quite short, and consist only of providing subject matter expertise and project planning to the current team of Platform Engineers.
I have been considering transitioning into consulting at some point, likely after my wife goes back to work (SAHM until our daughter goes to school) in about a year. I think this might be a good opportunity to start getting a foot in the door, however I have not contracted for companies before, and am quickly trying to get up to speed on legal concerns, taxes, finances, etc. This offer has been made pretty short notice.
If anyone has any good resources they would recommend on how to start getting an LLC setup, or general advice they are willing to share I would greatly appreciate it.
https://redd.it/1d7z3ni
@r_devops
I made iOS monitoring app for DigitalOcean with homescreen widgets. Free, no ads, no tracking.
App Store link: https://apps.apple.com/us/app/status-for-digitalocean/id6499493955
Core features:
- View service status
- View current incidents
- View past incidents
- View scheduled and active maintenances
- Add customizable home screen widgets
- Light and dark mode support
- Customizable app icon
I just wanted to make something cool people would use, I’m not a devops engineer so I’d appreciate any feedback. If you’d like to suggest any improvements feel free to leave any comments below.
https://redd.it/1d8534x
@r_devops
Does an IaC Platform exist? HELP!
I had always dreamed of using a platform where I could select some settings based on the type of application and then generate all the infra code in Terraform that I would need to achieve that, and probably just manage everything in that platform as well.
For example, imagine I have a Java dockerized microservice that will use Kubernetes, needs security, and has a Postgres DB.
Let's assume it will be deployed in a brand-new AWS account.
What I'm looking for is a platform where I can create everything via a wizard; the platform has to:
1) Create code to connect GitHub/GitLab to AWS (build docker container and push it to ECR)
2) Create a CI/CD pipeline to deploy the dockerized service to EKS (This can be triggered from the platform as well, to hide implementation)
3) Under the hood, based on previous settings, it knows the service needs an EKS cluster, ECR, Cognito, and Postgres.
Wondering if you are aware of a platform with those capabilities.
https://redd.it/1d86won
@r_devops
NLB to lightsail data charges
Hi,
AWS Support has given me opposite answers on two separate tickets about this, so I'm turning to you guys.
If an NLB proxies tcp_udp connections to Lightsail through VPC peering (using Lightsail's private IP), and Lightsail "replies" to the client using a TCP tunnel, does the data transferred to the end-client count as Lightsail data charges or EC2 data charges (meaning, if Lightsail sends 4TB worth of data, will they be charged or fall under the free-tier of the respective package?)
Keep in mind that the client ONLY ever sees/sends/receives traffic from the NLB's public IP, none of the packets are marked with Lightsail's public IP.
AWS support has literally given me two separate answers, one being that they fall under EC2 and the other being the opposite -.-
Thanks in advance
https://redd.it/1d84drl
@r_devops
Certifications worth pursuing?
I am aware that experience outweighs certifications any day. However, besides the obvious ones like AWS, GCP, and Azure, are there any other certifications that would make a difference for a software engineer transitioning into DevOps?
If you had to pick one from this list, which would it be?
1. Certified Kubernetes Administrator (CKA)
2. Docker Certified Associate (DCA)
3. HashiCorp Certified: Terraform Associate
4. Certified Jenkins Engineer (CJE)
5. Red Hat Certified Specialist in Ansible Automation
6. Puppet Certified Professional
7. ITIL Foundation Certification
8. Certified Agile DevOps Professional
9. CompTIA Linux+
10. Google Professional Cloud DevOps Engineer
https://redd.it/1d8ct5k
@r_devops
Seeking Advice on Learning Development and Starting a Tech Startup
Hi everyone,
I'm a former AI Product Manager with no prior development experience. I have a strong desire to build a software product and start my own tech startup. To achieve this, I know I need to gain development knowledge and learn how to code.
Where should I start? Any tips or resources you can recommend would be greatly appreciated!
Thanks in advance!
https://redd.it/1d8j5py
@r_devops
Internet Speed vs LAN Switch
Vendors are trying to push a gigabit switch on me when my internet download speed is 30 Mb/s. Are they trying to upsell me?
Will I lose internet performance if I go with a switch that supports only up to 100 Mb/s?
https://redd.it/1d8kuu8
@r_devops
Should I create the database and the user using Terraform or ansible?
I am working at a software house, and for app demonstrations to clients we use EC2 instances with a LEMP stack installed. For the server I use Terraform:
```
resource "aws_instance" "instance" {
ami=var.ami
instance_type="t3a.micro"
key_name = var.ssh_key
iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
root_block_device {
volume_size = 30
volume_type = "gp3"
}
count = var.ec2_instance_num
vpc_security_group_ids=var.ec2_security_groups
provisioner "file" {
source = "${path.module}/provision.sh"
destination = "/home/ubuntu/provision.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/provision.sh",
local.final_provision_command
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file(var.private_key_path)}"
host = self.public_ip
}
}
```
With the following script:
```
#!/usr/bin/env bash
if tput colors >/dev/null 2>&1; then
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
else
RED=''
GREEN=''
YELLOW=''
CYAN=''
NC=''
fi
print_help() {
echo -e "Usage: ${YELLOW}$0${NC} [options]"
echo -e "${CYAN}Options:${NC}"
echo " --php_ver <version> Specify PHP version (default is 8.2)"
echo " --nodb Do not install any database"
echo " --db_root_password <pass> Set root password for the database"
echo " -h, --help Show this help message"
}
cleanup () {
echo -e "${CYAN}Cleanup${NC}"
rm -rf /home/ubuntu/install
apt-get autoremove && apt-get autoclean
reboot
exit 0;
}
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}ERROR: Run this script as root or via using sudo.${NC}"
echo
print_help
exit 1;
fi
export DEBIAN_FRONTEND=noninteractive
PHP_VERSION="8.2"
DB_TYPE="mariadb"
while [ "$1" != "" ]; do
case $1 in
"--php_ver")
PHP_VERSION=$2
shift 2
;;
"--nodb")
DB_TYPE="none"
shift
;;
"--db_root_password")
DB_ROOT_PASSWORD=$2
shift 2
;;
"-h" | "--help")
print_help
exit 0
;;
*)
echo -e " ${RED}Invalid option: ${YELLOW}$1${NC}"
exit 1
;;
esac
done
apt-get update && apt-get upgrade -y
if [ "$PHP_VERSION" == "" ]; then
echo -e "${RED}No php version provided defaulting into 8.2${NC}"
PHP_VERSION="8.2"
fi
echo -e "${CYAN}PHP ${YELLOW}$PHP_VERSION${CYAN} will be installed ${NC}"
apt-get install -y nginx ca-certificates apt-transport-https software-properties-common ruby-full
add-apt-repository -y ppa:ondrej/php
apt-get update
apt-get install -y php${PHP_VERSION}-fpm \
php${PHP_VERSION}-mbstring \
php${PHP_VERSION}-mysql \
php${PHP_VERSION}-oauth \
php${PHP_VERSION}-opcache \
php${PHP_VERSION}-readline \
php${PHP_VERSION}-xml
POOL_CONF="/etc/php/${PHP_VERSION}/fpm/pool.d/www.conf"
if [ -f "$POOL_CONF" ]; then
echo -e "${CYAN}Configuring PHP-FPM to listen on ${YELLOW}127.0.0.1:9000${NC}"
sed -i "s|^listen = .*|listen = 127.0.0.1:9000|" "$POOL_CONF"
systemctl restart php${PHP_VERSION}-fpm
else
echo -e "${RED}Failed to configure PHP-FPM: ${POOL_CONF} not found${NC}"
cleanup
exit 1
fi
echo -e "${CYAN}Configuring default Vhost${NC}"
rm -rf /var/www/html/*
echo "<?php phpinfo();" > /var/www/html/index.php
systemctl stop nginx
# Quote the heredoc delimiter so nginx variables like $uri are not expanded by the shell
cat >/etc/nginx/sites-available/default <<'EOL'
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# With php-cgi (or other tcp sockets):
fastcgi_pass 127.0.0.1:9000;
}
location ~ /\.ht {
deny all;
}
}
EOL
systemctl start nginx
echo -e "${CYAN}Installing ${YELLOW}CodeDeploy Agent${NC}"
rm -rf ./install
wget
I am working in a software house and for app demopnstration towards the client we are using EC-2 with installed LEMP stack. For the server I use terraform:
```
resource "aws_instance" "instance" {
ami=var.ami
instance_type="t3a.micro"
key_name = var.ssh_key
iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
root_block_device {
volume_size = 30
volume_type = "gp3"
}
count = var.ec2_instance_num
vpc_security_group_ids=var.ec2_security_groups
provisioner "file" {
source = "${path.module}/provision.sh"
destination = "/home/ubuntu/provision.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/provision.sh",
local.final_provision_command
]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file(var.private_key_path)}"
host = self.public_ip
}
}
```
With the follwing script:
```
#!/usr/bin/env bash
if tput colors >/dev/null 2>&1; then
RED='\033[0;31m'
YELLOW='\033[1;33m'
CYAN='\033[1;35m'
NC='\033[0m' # No Color
else
RED=''
GREEN=''
YELLOW=''
NC=''
fi
print_help() {
echo -e "Usage: ${YELLOW}$0${NC} [options]"
echo -e "${CYAN}Options:${NC}"
echo " --php_ver <version> Specify PHP version (default is 8.2)"
echo " --nodb Do not install any database"
echo " --db_root_password <pass> Set root password for the database"
echo " -h, --help Show this help message"
}
cleanup () {
echo -e "${CYAN}Cleanup${NC}"
rm -rf /home/ubuntu/install
apt-get autoremove && apt-get autoclean
reboot
exit 0;
}
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}ERROR: Run this script as root or via using sudo.${NC}"
echo
print_help
exit 1;
fi
export DEBIAN_FRONTEND=noninteractive
PHP_VERSION="8.2"
DB_TYPE="mariadb"
while [ "$1" != "" ]; do
case $1 in
"--php_ver")
PHP_VERSION=$2
shift 2
;;
"--nodb")
DB_TYPE="none"
shift
;;
"--db_root_password")
DB_ROOT_PASSWORD=$2
shift 2
;;
"-h" | "--help")
print_help
exit 0
;;
*)
echo -e " ${RED}Invalid option: ${YELLOW}$1${NC}"
exit 1
;;
esac
done
apt-get update && apt-get upgrade -y
if [ "$PHP_VERSION" == "" ]; then
echo -e "${RED}No PHP version provided, defaulting to 8.2${NC}"
PHP_VERSION="8.2"
fi
echo -e "${CYAN}PHP ${YELLOW}$PHP_VERSION${CYAN} will be installed ${NC}"
apt-get install -y nginx ca-certificates apt-transport-https software-properties-common ruby-full
add-apt-repository -y ppa:ondrej/php
apt-get update
apt-get install -y php${PHP_VERSION}-fpm \
php${PHP_VERSION}-mbstring \
php${PHP_VERSION}-mysql \
php${PHP_VERSION}-oauth \
php${PHP_VERSION}-opcache \
php${PHP_VERSION}-readline \
php${PHP_VERSION}-xml
POOL_CONF="/etc/php/${PHP_VERSION}/fpm/pool.d/www.conf"
if [ -f "$POOL_CONF" ]; then
echo -e "${CYAN}Configuring PHP-FPM to listen on ${YELLOW}127.0.0.1:9000${NC}"
sed -i "s|^listen = .*|listen = 127.0.0.1:9000|" "$POOL_CONF"
systemctl restart php${PHP_VERSION}-fpm
else
echo -e "${RED}Failed to configure PHP-FPM: ${POOL_CONF} not found${NC}"
cleanup
exit 1
fi
echo -e "${CYAN}Configuring default Vhost${NC}"
rm -rf /var/www/html/*
echo "<?php phpinfo();" > /var/www/html/index.php
systemctl stop nginx
# Quote the heredoc delimiter so the shell does not expand nginx's $uri
cat >/etc/nginx/sites-available/default <<'EOL'
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# With php-cgi (or other tcp sockets):
fastcgi_pass 127.0.0.1:9000;
}
location ~ /\.ht {
deny all;
}
}
EOL
systemctl start nginx
echo -e "${CYAN}Installing ${YELLOW}CodeDeploy Agent${NC}"
rm -rf ./install
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
systemctl start codedeploy-agent
echo -e "${CYAN}Config ${YELLOW}cron${CYAN} for ${YELLOW}CodeDeploy Agent${NC}"
croncmd="@reboot systemctl start codedeploy-agent"
( crontab -l | grep -v -F "$croncmd" ; echo "$croncmd" ) | crontab -
if [ "$DB_TYPE" == 'none' ];then
echo -e "${YELLOW}No Db support will be installed${NC}"
cleanup
exit 0;
fi
echo -e "${CYAN}Installing ${YELLOW}${DB_TYPE}${NC}"
apt-get -y install mariadb-server mariadb-client
if [ "$DB_ROOT_PASSWORD" == "" ]; then
echo -e "${YELLOW}DB Root password is missing. skipping${NC}"
cleanup
exit 0;
fi
echo -e "${CYAN}Provisioning Root User${NC}"
# Make sure that NOBODY can access the server without a password
# (UPDATE mysql.user SET Password=... no longer works on modern MariaDB)
mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '${DB_ROOT_PASSWORD}'"
# Kill the anonymous users
mysql -e "DROP USER IF EXISTS ''@'localhost'"
# Because our hostname varies we'll use some Bash magic here.
mysql -e "DROP USER IF EXISTS ''@'$(hostname)'"
# Kill off the demo database
mysql -e "DROP DATABASE IF EXISTS test"
# Make our changes take effect
mysql -e "FLUSH PRIVILEGES"
```
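One subtlety in the script: the nginx vhost is written with a heredoc, and the delimiter needs to be quoted (`<<'EOL'`), otherwise the shell expands `$uri` before nginx ever sees the file. A minimal demonstration of the difference:

```shell
# Demo of heredoc quoting. With an unquoted delimiter the shell expands
# $uri; with a quoted delimiter the text passes through literally, which
# is what the nginx config needs.
uri="EXPANDED"   # stands in for nginx's $uri variable

unquoted=$(cat <<EOF
try_files $uri =404;
EOF
)
quoted=$(cat <<'EOF'
try_files $uri =404;
EOF
)
echo "unquoted: $unquoted"   # unquoted: try_files EXPANDED =404;
echo "quoted:   $quoted"     # quoted:   try_files $uri =404;
```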
And I have a question: should I also create the database via Terraform, or use Ansible for that? My concern is that, because Terraform encourages immutable infrastructure, if I need to change the DB user password I will also lose the DB data. So would you recommend using Ansible instead?
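To make the concern concrete: the only change I'd actually need is an in-place SQL statement, which by itself does not force Terraform to replace the instance. A sketch of what I mean (the helper name is mine, purely illustrative):

```shell
# Hypothetical helper (illustrative, not part of the provisioning script):
# rotating the root password is one in-place SQL statement, not an
# infrastructure change, so no instance replacement (and no data loss)
# is required for it.
rotate_root_password_sql() {
  printf "ALTER USER 'root'@'localhost' IDENTIFIED BY '%s';" "$1"
}

# In real use the statement would be piped into the server:
#   rotate_root_password_sql "$NEW_PASS" | mysql
rotate_root_password_sql 'example-pass'
```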
https://redd.it/1d8kd2f
@r_devops
Debug GitHub Actions with the help of an LLM-powered pull request bot
[I built this during a recent hackday](https://github.com/marketplace/treebeard-build)...here's the background:
I maintain a popular [pytest plugin](https://github.com/treebeardtech/nbmake) and throughout its life have supported and observed many developers struggling with GitHub Actions:
* It's hard to identify what caused a failure given the length of some CI logs
* Multiple CI jobs can fail with the same cause, which makes failures noisy
* It's unclear how to prioritise fixes for these failures
This GitHub app gives you a prioritised, de-duplicated list of issues relating to your GitHub Actions failures.
It uses LLMs (GPT-3.5 at the moment) to identify the most likely root cause, highlight relevant source files, and order the issues by priority.
Feedback welcome!
https://redd.it/1d8kbto
@r_devops