AWS Compliance scan with cloudquery
I've added compliance queries for AWS: [https://github.com/cloudquery/cloudquery#aws-compliance-pack](https://github.com/cloudquery/cloudquery#aws-compliance-pack). I'd love to get feedback.
https://redd.it/kaec29
@r_devops
AWS NLB stuck on pending on new KOPS cluster
I have a new kOps cluster I created today, and I am trying to get it to provision an NLB so my ingress will work. I am using the YAML provided here: https://kubernetes.github.io/ingress-nginx/deploy/#aws - I have split the file into its own sections and everything deploys fine, except that the Service for the load balancer is stuck in the Pending state, and describing the Service reports nothing useful beyond how long it has been in that state.
Bottom of the describe output:
Normal EnsuringLoadBalancer 103s (x47 over 3h27m) service-controller Ensuring load balancer
My ingress.yaml file
apiVersion: networking.k8s.io/v1beta1
# apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-stage"
    # needed to allow the front end to talk to the back end
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://api.dev.mydomain.ca"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    # needed for monitoring - maybe
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
    # for the nginx ingress controller
    ad.datadoghq.com/nginx-ingress-controller.checknames: '["nginx","nginxingresscontroller"]'
    ad.datadoghq.com/nginx-ingress-controller.initconfigs: '{},{}'
    ad.datadoghq.com/nginx-ingress-controller.instances: '{"nginx_status_url": "https://%%host%%:18080/nginx_status"},{"prometheus_url": "https://%%host%%:10254/metrics"}'
    ad.datadoghq.com/nginx-ingress-controller.logs: '{"service": "controller", "source":"nginx-ingress-controller"}'
  name: nginx-ingress
  namespace: custom-namespace
spec:
  rules:
  - host: api.dev.mydomain.ca
    http:
      paths:
      - backend:
          serviceName: express-api
          servicePort: 8090
        path: /
  - host: socket.dev.mydomain.ca
    http:
      paths:
      - backend:
          serviceName: socketio
          servicePort: 9000
        path: /
  tls:
  - hosts:
    - api.dev.mydomain.ca
    secretName: express-ingress-cert
  - hosts:
    - socket.dev.mydomain.ca
    secretName: socket-ingress-cert
How can I get the NLB to provision so that I can point DNS at it and have the ingress resource above direct traffic where it needs to go?
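One thing worth checking (an assumption, since kOps setups vary): the upstream ingress-nginx manifest creates a Classic ELB by default, and the controller's Service needs an explicit annotation to request an NLB. A minimal sketch of that Service:

```yaml
# Sketch of the ingress-nginx controller Service requesting an NLB.
# The annotation tells the AWS cloud provider to provision an NLB
# instead of a Classic ELB; if the Service still stays Pending,
# also verify the cluster subnets carry the kubernetes.io/role/elb tag.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```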
https://redd.it/nq7yie
@r_devops
Deploy a Containerized React App to AWS, Azure, Google Cloud
Learn how to create a production-ready Docker image with React. Then we push that Docker image to a container registry and host it on AWS Fargate, Azure Container Instances, and Google Cloud Run.
Deploy a React App to AWS: https://youtu.be/9nrgqtFHMUc
Deploy a React App to Azure: https://youtu.be/1gEMiFil4q4
Deploy a React App to Google Cloud: https://youtu.be/82Z_VrazXcs
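A production-ready React image typically follows a two-stage pattern; a minimal sketch (the node version, app paths, and build script are common defaults, not taken from the videos):

```dockerfile
# Stage 1: build the static React bundle
FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the bundle with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The two-stage build keeps the node toolchain out of the final image, so only the static files and nginx ship to the registry.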
https://redd.it/nzqa0q
@r_devops
What is Kubernetes Downward API and why you might need it?
Let's check out one of the lesser-known Kubernetes features that lets you expose pod metadata to your application - the Downward API: https://youtu.be/c4IOAXE5Mo8
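As a taste of what the video covers, a pod can read its own metadata through environment variables populated via fieldRef (a minimal sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME on $NODE_NAME; sleep 3600"]
    env:
    # Each variable is filled in by the kubelet from the pod's own fields
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```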
https://redd.it/oin79r
@r_devops
How to balance ECS Fargate with ALB
Short explainer on providing L7 load balancing to serverless ECS tasks: https://www.youtube.com/watch?v=YH8y-oKIpIY&lc=Ugyz6Q9hgcPemQ82xRt4AaABAg
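The wiring the video walks through looks roughly like this in Terraform (a sketch; all names and the VPC/subnet/security-group references are illustrative assumptions):

```hcl
# ALB in public subnets
resource "aws_lb" "app" {
  name               = "demo-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
  security_groups    = [aws_security_group.alb.id]
}

# Fargate tasks register by IP, so target_type must be "ip"
resource "aws_lb_target_group" "app" {
  name        = "demo-tg"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# The ECS service attaches its tasks to the target group
resource "aws_ecs_service" "app" {
  name            = "demo-svc"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.tasks.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 80
  }
}
```

The `target_type = "ip"` setting is the Fargate-specific detail: awsvpc-mode tasks have no instance ID, so the ALB forwards to task ENI addresses directly.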
https://redd.it/px0kdy
@r_devops
Pulling/Pushing out any AWS ECR images from/to AWS ECR through AWS Route53 CNAME
The main idea of this article is to show how to use a Route53 CNAME to pull and/or push images from/to the AWS ECR service. By default, Amazon doesn't allow this (the TLS handshake fails, because the certificate is signed for Amazon's own domains).
I played around with the Amazon API, Python, and proxies, and found several solutions:
* Use Python to develop a wrapper that logs in and pulls & pushes ECR images through an AWS Route53 CNAME.
* Use a proxy (e.g. Nginx, Traefik) with forwarding rules that set the needed headers. This implementation is still TBD.
* Use Amazon ECR interface VPC endpoints (AWS PrivateLink).
You can read the full article here: [https://medium.com/@solo.metalisebastian/pulling-pushing-out-any-aws-ecr-images-from-to-aws-ecr-through-aws-route53-cname-7c92307f9c25](https://medium.com/@solo.metalisebastian/pulling-pushing-out-any-aws-ecr-images-from-to-aws-ecr-through-aws-route53-cname-7c92307f9c25)
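For the proxy option, the general shape would be something like the sketch below (heavily hedged: the custom domain, registry account/region, and certificate paths are assumptions, and ECR's token-based authentication still has to be handled separately):

```nginx
server {
    listen 443 ssl;
    server_name registry.mycompany.example;   # hypothetical custom domain

    # Certificate for the custom domain, since ECR's own certificate
    # only covers *.dkr.ecr.<region>.amazonaws.com
    ssl_certificate     /etc/nginx/tls/registry.crt;
    ssl_certificate_key /etc/nginx/tls/registry.key;

    location / {
        # Rewrite the Host header so ECR accepts the forwarded request
        proxy_set_header Host 123456789012.dkr.ecr.us-east-1.amazonaws.com;
        proxy_pass https://123456789012.dkr.ecr.us-east-1.amazonaws.com;
    }
}
```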
The code: [https://github.com/SebastianUA/ecr-pull-push](https://github.com/SebastianUA/ecr-pull-push)
#AWS #AWSRoute53 #Docker #AWSECR
https://redd.it/qxo7hr
@r_devops
Running AWS Services In A Laptop Using LocalStack
LocalStack is a fully functional mock of AWS services running locally on your computer. We can use it to develop and test cloud and serverless apps offline. It can run through the CLI, in a Docker container, or in a Kubernetes cluster. We can use it to create mocks of S3 buckets, Lambda functions, RDS databases, ECR repositories, and more.
https://www.youtube.com/watch?v=8hi9P1ffaQk
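A common way to run it is the Docker container option, sketched as a docker-compose file (the service list is illustrative; port 4566 is LocalStack's usual single edge port, but treat the details as assumptions):

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"                # single edge port for all mocked services
    environment:
      - SERVICES=s3,lambda,dynamodb
    volumes:
      # lets LocalStack spawn Lambda containers on the host Docker daemon
      - "/var/run/docker.sock:/var/run/docker.sock"
```

With it running, point the AWS CLI at the edge port, e.g. `aws --endpoint-url=http://localhost:4566 s3 mb s3://demo-bucket`.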
https://redd.it/qy3xqg
@r_devops
Create and Restore RDS Snapshot in a specific time
If you want to learn this in a simpler way, you can refer to the video below 👇
https://www.youtube.com/watch?v=2qGKr5gn5wo&t=229s
What is Amazon RDS?
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need.
To create a DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. In Choose a database creation method, select Standard Create.
6. In Engine options, choose the engine type: MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.
7. For Edition, if you're using Oracle or SQL Server, choose the DB engine edition that you want to use.
MySQL has only one option for the edition, and MariaDB and PostgreSQL have none.
8. For Version, choose the engine version.
9. In Templates, choose the template that matches your use case. If you choose Production, the following are preselected in a later step:
   - Multi-AZ failover option
   - Provisioned IOPS storage option
   - Enable deletion protection option
   We recommend these features for any production environment.
10. To enter your master password, do the following:
    1. In the Settings section, open Credential Settings.
    2. If you want to specify a password, clear the Auto-generate a password check box if it is selected.
    3. (Optional) Change the Master username value.
    4. Enter the same password in Master password and Confirm password.
11. For the remaining sections, specify your DB instance settings. For information about each setting, see Settings for DB instances.
12. Choose Create database.
If you chose to use an automatically generated password, the View credential details button appears on the Databases page.
To view the master user name and password for the DB instance, choose View credential details.
13. For Databases, choose the name of the new DB instance.
On the RDS console, the details for the new DB instance appear. The DB instance has a status of creating until the DB instance is created and ready for use. When the state changes to Available, you can connect to the DB instance. Depending on the DB instance class and storage allocated, it can take several minutes for the new instance to be available.
To restore a DB instance to a specified time
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose the DB instance that you want to restore.
4. For Actions, choose Restore to point in time.
The Restore to point in time window appears.
5. Choose the Latest restorable time to restore to the latest possible time, or choose Custom to choose a time.
If you chose Custom, enter the date and time to which you want to restore the instance.
6. For the DB instance identifier, enter the name of the target restored DB instance. The name must be unique.
7. Choose other options as needed, such as DB instance class, storage, and whether you want to use storage autoscaling.
8. Choose Restore to point in time.
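The console steps above have a CLI equivalent; a hedged sketch (the instance identifiers and timestamp are placeholders):

```
# Point-in-time restore from the command line
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --restore-time 2021-12-01T09:45:00Z

# Or restore to the latest restorable time instead of a fixed timestamp
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --use-latest-restorable-time
```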
https://redd.it/r6d6cr
@r_devops
Use GoToAWS to simplify the AWS CLI
GoToAWS is a tool that simplifies the AWS CLI for several operations.
I'm not sure how well-known it is, so I wanted to show it off. This video is short and digestible, so I hope you all enjoy it.
https://www.youtube.com/watch?v=uLtx1PUUZJQ
Let me know if you have any questions!
Cheers!
https://redd.it/r9ijro
@r_devops
Mount S3 Objects to Kubernetes Pods
One of our customers asked for a solution to mount large files from S3 transparently to EKS pods.
Here's our solution - complete with a Docker image and a Helm chart:
https://dev.to/otomato_io/mount-s3-objects-to-kubernetes-pods-12f5
#kubernetes #eks #aws
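The usual pattern behind solutions like this is an s3fs-based FUSE mount made visible to other pods via bidirectional mount propagation; a heavily simplified sketch (the image name, secret, and paths are all assumptions, not the article's actual chart):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: s3-mounter
spec:
  selector:
    matchLabels:
      app: s3-mounter
  template:
    metadata:
      labels:
        app: s3-mounter
    spec:
      containers:
      - name: s3fs
        image: example/s3fs:latest        # hypothetical s3fs image
        securityContext:
          privileged: true                # required for FUSE mounts
        envFrom:
        - secretRef:
            name: aws-credentials         # hypothetical credentials secret
        volumeMounts:
        - name: mnt
          mountPath: /mnt/s3
          mountPropagation: Bidirectional # propagates the mount to the host
      volumes:
      - name: mnt
        hostPath:
          path: /mnt/s3
          type: DirectoryOrCreate
```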
https://redd.it/sgx57e
@r_devops
New Route53 CLI release - get info about your records from the terminal, quickly!
New Release - r53
Example:
    $ r53 -q my.company.domain.com
It will return a list with:
* Hosted Zone ID + web URL
* The target behind the record (load balancer, Lambda, etc.) + web URL to the target
* Recursively expanded records
* NS verification against dig
Install:
    $ brew tap isan-rivkin/toolbox
    $ brew install r53
New features:
- Exposed SDK on top of the CLI
- JSON output now supported via --output-json
https://github.com/Isan-Rivkin/route53-cli
#aws #route53 #golang #go #dns #networking
https://redd.it/u4u0it
@r_devops
How to automate AWS Marketplace publishing with Ansible - a beginner's guide
Hello everyone,
I've been a long-time subscriber to this subreddit, but this is my first post. I recently published an article on automating AWS marketplace publishing using Ansible. If you're new to Ansible or are looking to streamline your AWS marketplace publishing process, this article is for you!
In this article, I cover the basics of Ansible, how to create an EC2 instance, create an Amazon Machine Image (AMI), and how to use Ansible to automate the publishing process on the AWS marketplace.
I also share some tips and best practices for using Ansible to automate your AWS marketplace publishing.
You can find the article here: https://medium.com/@arshad.zameer/getting-started-with-ansible-for-aws-marketplace-publishing-a547cc13d182
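The AMI-building step the article describes might look, in playbook form, something like this sketch (module names follow the amazon.aws collection; the base AMI, region, and instance details are placeholders, not the article's actual code):

```yaml
- name: Build an AMI for AWS Marketplace publishing
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch a build instance
      amazon.aws.ec2_instance:
        name: marketplace-build
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0   # placeholder base AMI
        region: us-east-1
        state: running
      register: build

    - name: Create an AMI from the build instance
      amazon.aws.ec2_ami:
        instance_id: "{{ build.instances[0].instance_id }}"
        name: my-product-v1
        region: us-east-1
        wait: true
```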
I hope the article is helpful to you. If you have any questions or feedback, feel free to comment.
Thanks for reading!
#Ansible #AWS #AWSMarketplace #Automation
https://redd.it/10iaq9a
@r_devops
Surf CLI - New Feature: Fuzzy search DynamoDB (even encoded data)
**DynamoDB:**
[https://github.com/Isan-Rivkin/surf#aws-dynamodb-usage](https://github.com/Isan-Rivkin/surf#aws-dynamodb-usage)
**TLDR**
* surf ddb --query "my-text-*" --table "^prod" --out json
* Pattern matching inside objects
* Additional supported formats: JSON, Protobuf, Base64, Binary
**Supported Platforms**
* surf <platform> -q <some text>
* AWS Route53, DynamoDB, ACM, S3, Opensearch
* Elasticsearch
* [Logz.io](https://logz.io/)
* HashiCorp Vault, Consul
**Overview**
SURF is a CLI tool built for infrastructure engineers that enables searching for any pattern across different platforms. Results are usually returned with a direct web URL.
The search process depends on the context: if you're searching in Vault, it pattern-matches against keys; if you're searching Route53 for a DNS address, it returns links to the targets behind it (e.g. a load balancer).
https://redd.it/10v119c
@r_devops
What is OpenID Connect Authentication? A Practical Guide
Hello, devops community,
Today I present a topic that is often taken for granted in our daily jobs.
OpenID Connect is among our industry's most widely used yet least discussed technologies.
Yet it is crucial when it comes to granting third-party access to a service provider. Have you ever seen those "Sign in with Google" buttons? That's OIDC at work.
In this guide, I will explain the notion of OIDC using a practical, real-world example: granting GitHub Actions access to an AWS account.
Feel free to ask any questions that come up.
https://developer-friendly.blog/2024/04/14/what-is-openid-connect-authentication-a-practical-guide/
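The GitHub Actions side of this is commonly wired up with the official credentials action; a minimal sketch (the role ARN is a placeholder, and the IAM trust policy for GitHub's OIDC provider must already exist):

```yaml
name: oidc-demo
on: push

permissions:
  id-token: write   # allows the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions   # placeholder
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # proves the assumed role works
```

No long-lived secret is stored anywhere: the runner exchanges its short-lived OIDC token for temporary AWS credentials via STS.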
#oauth2 #oidc #github #aws
https://redd.it/1c4fbhh
@r_devops
How to Access AWS From Azure VM Using OpenID Connect
Do you work in a multi-cloud environment?
Do you usually find yourself passing around cloud credentials? Hasn't it ever felt kinda wrong?
Did you ask yourself if there's a better way around this?
I'm here to tell you that there is. There is a much better way to handle such service-to-service communications.
This blog post elaborates on what OpenID Connect can do to help you avoid passing around long-lived credentials, relieving you from the chore of frequent secret rotation.
If you enjoy this post, please share it with your network.
#aws #azure #oidc #openidconnect
https://developer-friendly.blog/2024/05/27/how-to-access-aws-from-azure-vm-using-openid-connect/
https://redd.it/1d1mjhd
@r_devops
From 0 to 3k Pods in "no time"
It's interesting what is possible when you know what you are doing:
https://www.youtube.com/watch?v=nQlV7hEU8ro
https://redd.it/1g0p5dc
@r_devops
AI agent, Multi-Cloud Support: AWS (using SageMaker), GCP (using Gemini), Azure (using Copilot)
I'm building an AI assistant to guide the setup of cloud resources in a secure manner.
Example prompt: Set up a production-grade AWS foundation with the following requirements:
- VPC in us-east-1 with 3 availability zones
- Private and public subnets
- Network segmentation for different workloads
- Implement security best practices
- Enable encryption for all services
- Set up CloudTrail for audit logging
- Configure AWS Backup for critical resources
- Implement WAF and Shield for protection
https://medium.com/@rasvihostings/ai-agent-to-set-up-a-foundation-in-a-public-cloud-secure-manner-4a555d8fdd84
#googlecloud #aws #azure #gemini #SageMaker #Copilot #python #terraform
https://redd.it/1gf5m0b
@r_devops
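The first two requirements in that prompt (a VPC across 3 AZs with public and private subnets) reduce to carving the VPC CIDR into non-overlapping blocks. A small sketch of that address-planning step with the standard `ipaddress` module (the CIDR, /20 prefix, and AZ names are illustrative assumptions, not values from the post):

```python
import ipaddress

def plan_subnets(vpc_cidr: str, az_count: int = 3, new_prefix: int = 20) -> dict:
    """Split a VPC CIDR into one public and one private subnet per AZ."""
    subnets = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix))
    azs = [f"us-east-1{s}" for s in "abc"[:az_count]]  # hypothetical AZ names
    return {
        az: {
            "public": str(subnets[i]),               # first block per AZ
            "private": str(subnets[i + az_count]),   # next block per AZ
        }
        for i, az in enumerate(azs)
    }

plan = plan_subnets("10.0.0.0/16")
print(plan)
```

Whatever tool emits the Terraform (or an agent generates it), the subnet math itself is deterministic, which makes it a good candidate for plain code rather than model output.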
Stuck between AWS and Azure — need your advice!
I’m about to dive into Cloud Computing, but I’m currently torn between starting with AWS or Azure.
I’ve heard the differences between them aren’t that big in terms of core concepts, and that Azure might be easier for beginners, especially with its user-friendly interface and Microsoft integration.
But I’m also thinking about the bigger picture:
• Which one has better career opportunities overall?
• Which one provides more flexibility and long-term growth?
• And is it true that once you learn one, switching to the other is relatively smooth?
Would love to hear your thoughts and experiences! Any advice or perspective is welcome 🙌
#CloudComputing #AWS #Azure #CareerGrowth #ITCareers #TechLearning
https://redd.it/1lpjif2
@r_devops
Looking for Job (Please Reply)
Hi Everyone,
I hope you’re all doing well.
I’m writing to express my interest in the Junior DevOps Engineer position. I recently completed a 3-month internship as a DevOps Intern.
I have solid technical knowledge of core DevOps practices and hands-on experience with the major DevOps tools.
I worked on several real-world DevOps projects:
• Deployment of a MERN Stack application on AWS EKS with DevSecOps integration, Helm charts, and ArgoCD.
• Automated infrastructure monitoring using Terraform, Prometheus, Grafana, and AWS CloudWatch, including email alerts via AWS SNS for high CPU utilization.
• Serverless automation using AWS Lambda to delete stale AWS snapshots.
Additionally, I bring 4 years of corporate experience, so I am not a complete fresher; learning and adapting to new skills and tools won't be a big issue for me.
I’m now seeking a full-time opportunity as a Junior DevOps Engineer, where I can contribute, learn, and continue growing within a dynamic environment.
Thank you for your time and consideration. I would truly appreciate the opportunity to be part of your team.
#devops #aws #community #jobsearch #it #hr #hiring #opentowork #linkedintech #ithiring
https://redd.it/1o9ry8k
@r_devops
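The stale-snapshot cleanup in the last project bullet usually comes down to one pure decision function plus a couple of boto3 calls. A sketch of the core logic (the staleness criteria here, age plus detached source volume, are an assumption; the post doesn't specify them):

```python
from datetime import datetime, timedelta, timezone

def is_stale(snapshot: dict, max_age_days: int = 30, attached_volume_ids=()) -> bool:
    """Treat a snapshot as stale if it is older than max_age_days AND its
    source volume is no longer attached anywhere. (Illustrative criteria.)"""
    age = datetime.now(timezone.utc) - snapshot["StartTime"]
    return age > timedelta(days=max_age_days) and \
        snapshot["VolumeId"] not in attached_volume_ids

# A Lambda handler would fetch real data with ec2.describe_snapshots(
# OwnerIds=["self"]) and remove matches with ec2.delete_snapshot(...);
# demonstrated here with fake records:
old = {"SnapshotId": "snap-1", "VolumeId": "vol-1",
       "StartTime": datetime.now(timezone.utc) - timedelta(days=90)}
fresh = {"SnapshotId": "snap-2", "VolumeId": "vol-2",
         "StartTime": datetime.now(timezone.utc) - timedelta(days=2)}
```

Keeping the predicate separate from the AWS calls makes this kind of cleanup Lambda trivially unit-testable without touching a real account.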