Logging all Pull Requests / Merge Requests for audit purposes
Hi guys, we are in the process of adopting CI/CD at our company and want to work using Merge Requests. This is quite a new area for us. To allow people to deploy to production automatically, we want to use the Merge Request feature in GitLab: approvals are given in the Merge Request, and the corresponding pipeline result is displayed before merging.
For audit purposes, is it a good idea to store these Merge Requests somewhere? Because, would it be possible for someone to overwrite the git history locally, and then let someone else merge that in?
How do other devops folks in this subreddit handle auditing their Merge Requests?
Additionally, is it a good idea to store the corresponding build logs from the pipeline run that follows the merge? Audit wants to see whether a change was successful or not.
Really appreciate your input here.
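For what it's worth, merge requests live in GitLab's own database rather than in git, so a locally rewritten history can't erase them; protected branches (which block force-pushes) plus a periodic export would cover the audit trail. A rough, untested sketch using GitLab's REST API, where the host, project ID 42, and the token are placeholders:

```shell
# Hypothetical nightly export of merged MRs to an append-only audit store.
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/42/merge_requests?state=merged&per_page=100" \
  > "mr-audit-$(date +%F).json"
```

Build logs can be fetched the same way via the jobs API (`/projects/:id/jobs/:job_id/trace`) if audit needs them retained beyond GitLab's own retention window.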
https://redd.it/fce6i2
@r_devops
What sounds better to a prospective employer?
I have received a job offer today, but I had a final stage interview last Friday with a company I like more.
Which of these e-mails would sound better?
> May I ask at what stage is the review of my interview? I have received other offers as well and would Like to know about my application as well at your company so I can fully review my offers in order to let everyone know my next step.
>My job search has reached the offer stage with my other applications. Can you let me know when you are planning to update as to the status of my application, so that I can fully evaluate my options.
https://redd.it/fcbvva
@r_devops
Is there a way to make Jest reuse cache when working in a different directory?
We use Jest in our CI pipelines. The tests take about 5 minutes to complete with no cache and about 20 seconds to complete with cache.
We are using Jenkins with multibranch pipelines, so basically every job runs in a different folder. I've noticed that Jest knows which folder you are running from and creates a unique cache dir like /tmp/jest_XX/jest-transform-cache-2e21483d6fe4693e52e5df028952xxxxxxxx
If you run the tests from the same folder, the cache is reused, but if you so much as rename your work folder, a new directory under /tmp/jest_XX/ is created from scratch the next time Jest runs.
This essentially means that no pipeline run in Jenkins re-uses the Jest cache, so unit tests take ~5 minutes.
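One thing that may help (an untested sketch; the shared path is an example): point Jest at a fixed cache directory that all workspaces on the agent share, instead of the default per-checkout folder under /tmp, and then verify the transform-cache hash in the folder name no longer changes between checkouts:

```shell
# Run Jest with a cache directory shared across Jenkins workspaces.
jest --ci --cacheDirectory=/var/cache/jest

# Or make it permanent in jest.config.js:
#   module.exports = { cacheDirectory: '/var/cache/jest' };
```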
https://redd.it/fcedxa
@r_devops
[Issue] Unable to access vrops after DNS and IP changed
Hi guys, I need some help; I have been working this issue for the last 3 to 4 days.
History: we changed location, which prompted the need for a new IP and DNS, so I updated both via the following method: `/opt/vmware/share/vami/vami_config_net`
The menu allowed me to reconfigure the IP and DNS. Next, I deleted Tomcat's ./work/Catalina/localhost folder and restarted Tomcat.
I'm still not able to bring up the login splash screen at `https://{ip}/admin`
I verified that I can ping the IP and resolve the DNS name from the VM and from my local machine.
Has anyone experienced this issue before, or reconfigured the IP and DNS? What else did you do?
Thanks in advance.
https://redd.it/fchcx6
@r_devops
What are some good companies with full-time remote Cloud Engineer type jobs that I can apply for?
https://redd.it/fccgjl
@r_devops
Where do devops engineers look for job postings online?
Apart from Indeed and similar generic job sites, if I have a devops position available, where should I post it?
[edit] Follow-up: any tips on how to reach devops engineers who may not necessarily be searching for a job, but who might be interested in the position I have open? StackOverflow Jobs?
https://redd.it/fcfiit
@r_devops
Is there a sensible way to link Terraform and Ansible?
Use case: provision a web server host (Terraform, EC2/AWS) and then apply its latest configuration (Ansible).
Terraform's docs warn against using its "provisioners", so I am reluctant to use Terraform to copy the files. The web server config is also subject to change regularly, hence not wanting to bake it into the disk/AMI image.
Ansible is good for this because it will ensure all the packages are installed/running and keep the server configs in sync/updated.
The host, when ready, also needs to be added to a loadbalancer target group when done, so would be something like:
1. Create host(s) with Terraform
2. Apply config with ansible
3. Add host to target group (with terraform/manually?)
Is there a better way to approach this that I'm probably missing?
It kinda feels clunky trying to do infrastructure-as-code but having to use two separate tools like this.
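For what it's worth, a common pattern is exactly those three steps glued together by a pipeline script, with Terraform owning the target-group attachment and Ansible fed by Terraform outputs. A rough sketch, where the output name, inventory format, and playbook are assumptions:

```shell
# 1. Create the hosts (and an aws_lb_target_group_attachment) with Terraform.
terraform apply -auto-approve

# 2. Build an Ansible inventory from a Terraform output such as:
#      output "web_ips" { value = aws_instance.web[*].public_ip }
terraform output -json web_ips | jq -r '.[]' > inventory.ini

# 3. Apply the latest web server configuration.
ansible-playbook -i inventory.ini webserver.yml
```

Declaring the target-group attachment in Terraform in step 1 avoids the manual step; if the instance must be configured before receiving traffic, the target group's health check keeps it out of rotation until Ansible has done its work.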
https://redd.it/fcjx2t
@r_devops
Stackstorm on Openshift
Hey y'all, hoping someone here can help me out a bit. I'm trying to get a PoC of StackStorm going, as I think the workflow and automation capabilities could be extremely useful for my team. I plan on integrating it with our chatbot too. The catch is that we currently run on OpenShift, and the StackStorm container expects to run as root. Anyone have any experience with a situation like this?
For context, I don't have cluster-admin privileges, so I can't alter the SCCs.
https://redd.it/fcfyag
@r_devops
Error Aggregation from Kinesis stream
Hi people,
We have an existing observability pipeline where applications log to CloudWatch and our logs flow into Kinesis. Developers are asking for an error management tool, e.g. Sentry or Rollbar.
Has anyone got prior art for plugging into one of these tools, with API integration only, from a Lambda function?
https://redd.it/fcfpxl
@r_devops
How to monitor WordPress contact form to AWS SES connection?
Hi all,
tl;dr: how do we monitor the connection between a website contact form and SES to see if it's still authenticated?
Non-profit marketing guy and non-programmer here. We ran into a situation recently where our WordPress website contact forms, which send email notifications via AWS SES, lost their authentication/connection, and to both the person filling out the form and us it looked like everything was working fine.
It took us a while to figure out that the connection was lost, and we don't want to run into that situation again. Without scrapping the whole system for something else, I'm looking for advice on how we can monitor and get alerts if the connection is lost.
Right now we’re manually and randomly filling out contact forms to see if they’re working, which is driving our program staff bonkers and wasting our own time.
I know in the AWS SES backend we can see number of emails sent, delivered, bounced, etc but without a baseline of how many sent per day, we can’t really use that to monitor connectivity. Sometimes we get no contact form submissions, and some days it’s hundreds of legit requests.
I’ve seen that there are APM services out there that can fill out forms and look for a response like hitting a goal page, but how could it look for an email getting triggered and sent unless it could maybe receive an inbound email? I just don’t know enough about how to see what these can do.
We have some money to solve this and access to developers who, given instructions, could install anything necessary, but I figured I'd ask here before going to the folks our IT department recommended at $300/hr for discovery.
Thank you and happy to answer any questions that might be helpful.
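One low-tech option, sketched below and untested: a cron "canary" that exercises the SES credentials directly and alerts when the send call itself fails. The addresses, region, and webhook are placeholders. It won't catch a break inside WordPress itself, but it does catch lost or revoked SES credentials:

```shell
#!/bin/sh
# Send a canary email through SES; alert if the API call fails.
if ! aws ses send-email \
    --region us-east-1 \
    --from "canary@example.org" \
    --destination "ToAddresses=ops@example.org" \
    --message "Subject={Data=SES canary},Body={Text={Data=ok}}"; then
  curl -s -X POST "$ALERT_WEBHOOK" -d 'text=SES canary send failed'
fi
```

Covering the full form-to-inbox path would mean a synthetic check that submits the form and then polls a mailbox (e.g. via IMAP) for the notification; that is the more complete, and more involved, version of the same idea.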
https://redd.it/fcmr41
@r_devops
AWS Elasticsearch Service Security
Hello,
I was curious whether anyone has used the public-facing Elasticsearch Service, and how secure it actually is (e.g. would you funnel customer data through it?). For instance, if I had all users authenticate into Kibana via SAML/Cognito and my EC2 instances only able to PUT/POST data to it (access policy example below), how secure would that be?
If not, is there a better way to handle this? I have tried the VPC method, but I always have issues with allowing open access to the VPC domain while still requiring users to log in.
Open to suggestions.
(The aws:SourceIp condition is our VPC CIDR range.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "es:ESHttpPost",
                "es:ESHttpPatch",
                "es:ESHttpPut"
            ],
            "Resource": "arn:${Partition}:es:${Region}:${Account}:domain/${DomainName}/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "123.45.67.0/21"}}
        }
    ]
}
https://redd.it/fccaub
@r_devops
How we optimised our build system using umake
Over the past few months we worked on a project to improve our build times. We wanted to replace our Makefile-based build with something modern and fast. We compared multiple tools, such as Google's Bazel, Facebook's Buck, Ninja, and plain old CMake. At the end of the day, none of them matched our exact needs.
Eventually we reached tup, which looked very promising. The issue with tup was the lack of strong remote caching. Initially we wanted to improve tup to match our needs; after a while we figured we should just build something new, keeping the good stuff from tup and adding strong caching along the lines of Mozilla's sccache. The result was a brand new tool: umake. It is fast (really fast), easy to use, and correct. No more building the same binary in the office if someone else already built it; no more running make -j10 and getting broken results. It just works, and it works fast.
I'll be happy to hear your thoughts on the topic.
For more details check out: https://drivenets.com/blog/the-inside-story-of-how-we-optimized-our-own-build-system/ https://github.com/grisha85/umake/
https://redd.it/fcbelt
@r_devops
How do you usually deal with versioning of helm charts?
There are two approaches I want to ask about:
1. Increment component versions inside the chart code, and generate a new chart version every time you do.
2. Update versions by supplying local values via parameters or a values file on the instance itself, so the chart version remains unchanged.
Approach 2 does not contradict approach 1, but trying to use both at the same time looks a little messy.
Would appreciate thoughts on this.
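Concretely, the two approaches look something like this (the chart name, release name, and value keys are made up for illustration):

```shell
# Approach 1: bump the component version inside the chart, then cut a
# new chart version (Chart.yaml: version: 1.4.0, appVersion: "2.7.1").
helm package mychart/        # produces mychart-1.4.0.tgz

# Approach 2: chart version stays fixed; the instance supplies overrides.
helm upgrade myapp mychart/ -f values.prod.yaml --set image.tag=2.7.1
```

A common compromise is approach 1 for anything that changes the chart's templates and approach 2 only for environment-specific values.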
https://redd.it/fcol46
@r_devops
Need help with CloudFormation
{
"Parameters": {
"redisboxes": {
"Type": "String",
"Default": "2",
"Description": "launch 2 boxes"
},
"InstanceType": {
"Description": "Select one of the possible instance types",
"Type": "String",
"Default": "t2.micro",
"AllowedValues": ["t2.micro", "t2.small", "t2.medium"]
}
},
"Outputs": {
"LoadBalancerIP": {
"Value": {
"Ref": "LoadBalancerIP"
}
}
},
"Resources": {
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"EnableDnsHostnames": true
}
},
"VPCGateway": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"InternetGatewayId": {
"Ref": "Gateway"
},
"VpcId": {
"Ref": "VPC"
}
}
},
"PublicSubnet": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "10.0.0.0/24",
"VpcId": {
"Ref": "VPC"
}
}
},
"PrivateSubnet": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "10.0.1.0/24",
"VpcId": {
"Ref": "VPC"
}
}
},
"PrivateSubnetRoute": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"RouteTableId": {
"Ref": "PrivateRouteTable"
},
"SubnetId": {
"Ref": "PrivateSubnet"
}
}
},
"PrivateRouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
}
}
},
"PrivateRouteGlobal": {
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "PrivateRouteTable"
},
"DestinationCidrBlock": "0.0.0.0/0",
"InstanceId": {
"Ref": "NATDevice"
}
},
"DependsOn": "PublicRouteGlobal"
},
"PublicSubnetRoute": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"RouteTableId": {
"Ref": "PublicRouteTable"
},
"SubnetId": {
"Ref": "PublicSubnet"
}
}
},
"PublicRouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
}
}
},
"PublicRouteGlobal": {
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "PublicRouteTable"
},
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "Gateway"
}
}
},
"NATIPAddress": {
"Type": "AWS::EC2::EIP",
"Properties": {
"Domain": "vpc",
"InstanceId": {
"Ref": "NATDevice"
}
},
"DependsOn": "VPCGateway"
},
"NATDevice": {
"Type": "AWS::EC2::Instance",
"Properties": {
"SubnetId": {
"Ref": "PublicSubnet"
},
"SourceDestCheck": "false",
"ImageId": {
"Fn::FindInMap": ["AWSNATAMI", {
"Ref": "AWS::Region"
}, "AMI"]
},
"SecurityGroupIds": [{
"Ref": "InstanceSecurityGroup"
}],
"Tags": [{
"Key": "Name",
"Value": "Serf Demo NAT Device"
}]
}
},
"LoadBalancer": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": {
"Fn::FindInMap": ["AWSINSTAMI", {
"Ref": "AWS::Region"
}, "AMI"]
},
"PrivateIpAddress": "10.0.0.5",
"SecurityGroupIds": [{
"Ref": "InstanceSecurityGroup"
}],
"SubnetId": {
"Ref": "PublicSubnet"
},
"Tags": [{
"Key": "Name",
"Value": "Serf Demo LB"
}],
"UserData": "IyEvYmluL3NoCgpzZXQgLWUKCiMgSW5zdGFsbCBIQVBy
b3h5CnN1ZG8gYXB0LWdldCB1cGRhdGUKc3VkbyBhcHQtZ2V0IGluc3RhbGwgLXkgaGFwcm94eQoKIyBDb25maWd1cmUgaXQgaW4gYSBqYW5rIHdheQpjYXQgPDxFT0YgPi90bXAvaGFwcm94eS5jZmcKZ2xvYmFsCiAgICBkYWVtb24KICAgIG1heGNvbm4gMjU2CgpkZWZhdWx0cwogICAgbW9kZSBodHRwCiAgICB0aW1lb3V0IGNvbm5lY3QgNTAwMG1zCiAgICB0aW1lb3V0IGNsaWVudCA1MDAwMG1zCiAgICB0aW1lb3V0IHNlcnZlciA1MDAwMG1zCgpsaXN0ZW4gc3RhdHMKICAgIGJpbmQgKjo5OTk5CiAgICBtb2RlIGh0dHAKICAgIHN0YXRzIGVuYWJsZQogICAgc3RhdHMgdXJpIC8KICAgIHN0YXRzIHJlZnJlc2ggMnMKCiNsaXN0ZW4gaHR0cC1pbgojICAgIGJpbmQgKjo4MAojICAgIGJhbGFuY2Ugcm91bmRyb2JpbgojICAgIG9wdGlvbiBodHRwLXNlcnZlci1jbG9zZQoKZnJvbnRlbmQgcmVkaXMKICBiaW5kIDEyNy4wLjAuMTo1MDAwIG5hbWUgcmVkaXMKICBkZWZhdWx0X2JhY2tlbmQgcmVkaXNfc2VydmVycwogIG1heGNvbm4gMTAyNAoKYmFja2VuZCByZWRpc19zZXJ2ZXJzCiAgYmFsYW5jZSByb3VuZHJvYmluCiAgI29wdGlvbiB0Y3AtY2hlY2sKICAjdGNwLWNoZWNrIGNvbm5lY3QKICAjdGNwLWNoZWNrIHNlbmQgUElOR1xyXG4KICAjdGNwLWNoZWNrIGV4cGVjdCBzdHJpbmcgK1BPTkcKICAjdGNwLWNoZWNrIHNlbmQgUVVJVFxyXG4KICAjdGNwLWNoZWNrIGV4cGVjdCBzdHJpbmcgK09LCiAgI3NlcnZlciByZWRpc183MDAwIGxvY2FsaG9zdDo3MDAwIGNoZWNrIGludGVyIDFzIHdlaWdodCA3NwogICNzZXJ2ZXIgcmVkaXNfNzAwMSBsb2NhbGhvc3Q6NzAwMSBjaGVjayBpbnRlciAxcyB3ZWlnaHQgMzMKCkVPRgpzdWRvIG12IC90bXAvaGFwcm94eS5jZmcgL2V0Yy9oYXByb3h5L2hhcHJveHkuY2ZnCgojIEVuYWJsZSBIQVByb3h5CmNhdCA8PEVPRiA+L3RtcC9oYXByb3h5CkVOQUJMRUQ9MQpFT0YKc3VkbyBtdiAvdG1wL2hhcHJveHkgL2V0Yy9kZWZhdWx0L2hhcHJveHkKCiMgU3RhcnQgaXQKc3VkbyAvZXRjL2luaXQuZC9oYXByb3h5IHN0YXJ0CgpleHBvcnQgU0VSRl9ST0xFPSJsYiIKCgpzZXQgLWUKCnN1ZG8gYXB0LWdldCBpbnN0YWxsIC15IHVuemlwCgpjZCAvdG1wCnVudGlsIHdnZXQgLU8gc2VyZi56aXAgaHR0cHM6Ly9kbC5iaW50cmF5LmNvbS9taXRjaGVsbGgvc2VyZi8wLjYuNF9saW51eF9hbWQ2NC56aXA7IGRvCiAgICBzbGVlcCAxCmRvbmUKdW56aXAgc2VyZi56aXAKc3VkbyBtdiBzZXJmIC91c3IvbG9jYWwvYmluL3NlcmYKCiMgVGhlIG1lbWJlciBqb2luIHNjcmlwdCBpcyBpbnZva2VkIHdoZW4gYSBtZW1iZXIgam9pbnMgdGhlIFNlcmYgY2x1c3Rlci4KIyBPdXIgam9pbiBzY3JpcHQgc2ltcGx5IGFkZHMgdGhlIG5vZGUgdG8gdGhlIGxvYWQgYmFsYW5jZXIuCmNhdCA8PEVPRiA+L3RtcC9qb2luLnNoCmlmIFsgInhcJHtTRVJGX1RBR19ST0xFfSIgIT0gInhsYiIgXTsgdGhlbgogICAgZWNobyAiTm90IGFuIGxi
LiBJZ25vcmluZyBtZW1iZXIgam9pbi4iCiAgICBleGl0IDAKZmkKd2hpbGUgcmVhZCBsaW5lOyBkbwogICAgUk9MRT1cYGVjaG8gXCRsaW5lIHwgYXdrICd7cHJpbnQgXFxcJDMgfSdcYAogICAgaWYgWyAieFwke1JPTEV9IiAhPSAieHdlYiIgXTsgdGhlbgogICAgICAgIGNvbnRpbnVlCiAgICBmaQogICAgZWNobyBcJGxpbmUgfCBcXAogICAgICAgIGF3ayAneyBwcmludGYgIiAgICBzZXJ2ZXIgJXMgJXMgY2hlY2tcXG4iLCBcJDEsIFwkMiB9JyA+Pi9ldGMvaGFwcm94eS9oYXByb3h5LmNmZwpkb25lCi9ldGMvaW5pdC5kL2hhcHJveHkgcmVsb2FkCkVPRgpzdWRvIG12IC90bXAvam9pbi5zaCAvdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9qb2luLnNoCmNobW9kICt4IC91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2pvaW4uc2gKCiMgVGhlIG1lbWJlciBsZWF2ZSBzY3JpcHQgaXMgaW52b2tlZCB3aGVuIGEgbWVtYmVyIGxlYXZlcyBvciBmYWlscyBvdXQKIyBvZiB0aGUgc2VyZiBjbHVzdGVyLiBPdXIgc2NyaXB0IHJlbW92ZXMgdGhlIG5vZGUgZnJvbSB0aGUgbG9hZCBiYWxhbmNlci4KY2F0IDw8RU9GID4vdG1wL2xlYXZlLnNoCmlmIFsgInhcJHtTRVJGX1RBR19ST0xFfSIgIT0gInhsYiIgXTsgdGhlbgogICAgZWNobyAiTm90IGFuIGxiLiBJZ25vcmluZyBtZW1iZXIgbGVhdmUiCiAgICBleGl0IDAKZmkKd2hpbGUgcmVhZCBsaW5lOyBkbwogICAgTkFNRT1cYGVjaG8gXCRsaW5lIHwgYXdrICd7cHJpbnQgXFxcJDEgfSdcYAogICAgc2VkIC1pJycgIi9cJHtOQU1FfSAvZCIgL2V0Yy9oYXByb3h5L2hhcHJveHkuY2ZnCmRvbmUKL2V0Yy9pbml0LmQvaGFwcm94eSByZWxvYWQKRU9GCnN1ZG8gbXYgL3RtcC9sZWF2ZS5zaCAvdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9sZWZ0LnNoCmNobW9kICt4IC91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2xlZnQuc2gKCiMgQ29uZmlndXJlIHRoZSBhZ2VudApjYXQgPDxFT0YgPi90bXAvYWdlbnQuY29uZgpkZXNjcmlwdGlvbiAiU2VyZiBhZ2VudCIKc3RhcnQgb24gcnVubGV2ZWwgWzIzNDVdCnN0b3Agb24gcnVubGV2ZWwgWyEyMzQ1XQpleGVjIC91c3IvbG9jYWwvYmluL3NlcmYgYWdlbnQgXFwKICAgIC1ldmVudC1oYW5kbGVyICJtZW1iZXItam9pbj0vdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9qb2luLnNoIiBcXAogICAgLWV2ZW50LWhhbmRsZXIgIm1lbWJlci1sZWF2ZSxtZW1iZXItZmFpbGVkPS91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2xlZnQuc2giIFxcCiAgICAtZXZlbnQtaGFuZGxlciAicXVlcnk6bG9hZD11cHRpbWUiIFxcCiAgICAtdGFnIHJvbGU9JHtTRVJGX1JPTEV9ID4+L3Zhci9sb2cvc2VyZi5sb2cgMj4mMQpFT0YKc3VkbyBtdiAvdG1wL2FnZW50LmNvbmYgL2V0Yy9pbml0L3NlcmYuY29uZgoKIyBTdGFydCB0aGUgYWdlbnQhCnN1ZG8gc3RhcnQgc2VyZgoKIyBJZiB3ZSdyZSB0aGUgd2ViIG5vZGUsIHRoZW4gd2UgbmVlZCB0byBjb25maWd1cmUgdGhlIGpv
aW4gcmV0cnkKaWYgWyAieCR7U0VSRl9ST0xFfSIgIT0gInh3ZWIiIF07IHRoZW4KICAgIGV4aXQgMApmaQoKY2F0IDw8RU9G
ID4vdG1wL2pvaW4uY29uZgpkZXNjcmlwdGlvbiAiSm9pbiB0aGUgc2VyZiBjbHVzdGVyIgpzdGFydCBvbiBydW5sZXZlbCBbMjM0NV0Kc3RvcCBvbiBydW5sZXZlbCBbITIzNDVdCnRhc2sKcmVzcGF3bgpzY3JpcHQKICAgIHNsZWVwIDUKICAgIGV4ZWMgL3Vzci9sb2NhbC9iaW4vc2VyZiBqb2luIDEwLjAuMC41CmVuZCBzY3JpcHQKRU9GCnN1ZG8gbXYgL3RtcC9qb2luLmNvbmYgL2V0Yy9pbml0L3NlcmYtam9pbi5jb25mCnN1ZG8gc3RhcnQgc2VyZi1qb2luCgpjYXQgPDxFT0YgPi90bXAvcXVlcnkuY29uZgpkZXNjcmlwdGlvbiAiUXVlcnkgdGhlIHNlcmYgY2x1c3RlciBsb2FkIgpzdGFydCBvbiBydW5sZXZlbCBbMjM0NV0Kc3RvcCBvbiBydW5sZXZlbCBbITIzNDVdCnJlc3Bhd24Kc2NyaXB0CiAgICBlY2hvIGBkYXRlYCBJIGFtICIke0hPU1ROQU1FfTxicj4iID4gL3Zhci93d3cvaW5kZXguaHRtbC4xCiAgICBzZXJmIHF1ZXJ5IC1uby1hY2sgbG9hZCB8IHNlZCAnc3wkfDxicj58JyA+PiAvdmFyL3d3dy9pbmRleC5odG1sLjEKICAgIG12IC92YXIvd3d3L2luZGV4Lmh0bWwuMSAvdmFyL3d3dy9pbmRleC5odG1sCiAgICBzbGVlcCAxMAplbmQgc2NyaXB0CkVPRgpzdWRvIG12IC90bXAvcXVlcnkuY29uZiAvZXRjL2luaXQvc2VyZi1xdWVyeS5jb25mCnN1ZG8gc3RhcnQgc2VyZi1xdWVyeQoKCgoKCg=="
},
"DependsOn": "PublicRouteGlobal"
},
"LoadBalancerIP": {
"Type": "AWS::EC2::EIP",
"Properties": {
"InstanceId": {
"Ref": "LoadBalancer"
},
"Domain": "vpc"
},
"DependsOn": "VPCGateway"
},
"redisasg": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"AvailabilityZones": [{
"Fn::GetAtt": ["PrivateSubnet", "AvailabilityZone"]
}],
"LaunchConfigurationName": {
"Ref": "RedisboxesConfig"
},
"DesiredCapacity": {
"Ref": "redisboxes"
},
"MinSize": {
"Ref": "redisboxes"
},
"MaxSize": {
"Ref": "redisboxes"
},
"VPCZoneIdentifier": [{
"Ref": "PrivateSubnet"
}]
},
"DependsOn": ["NATDevice", "NATIPAddress", "PrivateRouteGlobal"]
},
"RedisboxesConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"ImageId": {
"Fn::FindInMap": ["AWSINSTAMI", {
"Ref": "AWS::Region"
}, "AMI"]
},
"InstanceType": "m1.small",
"SecurityGroups": [{
"Ref": "InstanceSecurityGroup"
}],
"UserData": "IyEvYmluL3NoCgpzZXQgLWUKCiMgSW5zdGFsbCBIQVByb3h5CnN1ZG8gYXB0LWdldCB1cGRhdGUKc3VkbyBhcHQtZ2V0IGluc3RhbGwgLXkgaGFwcm94eQoKIyBDb25maWd1cmUgaXQgaW4gYSBqYW5rIHdheQpjYXQgPDxFT0YgPi90bXAvaGFwcm94eS5jZmcKZ2xvYmFsCiAgICBkYWVtb24KICAgIG1heGNvbm4gMjU2CgpkZWZhdWx0cwogICAgbW9kZSBodHRwCiAgICB0aW1lb3V0IGNvbm5lY3QgNTAwMG1zCiAgICB0aW1lb3V0IGNsaWVudCA1MDAwMG1zCiAgICB0aW1lb3V0IHNlcnZlciA1MDAwMG1zCgpsaXN0ZW4gc3RhdHMKICAgIGJpbmQgKjo5OTk5CiAgICBtb2RlIGh0dHAKICAgIHN0YXRzIGVuYWJsZQogICAgc3RhdHMgdXJpIC8KICAgIHN0YXRzIHJlZnJlc2ggMnMKCmZyb250ZW5kIHJlZGlzCiAgYmluZCAxMjcuMC4wLjE6NTAwMCBuYW1lIHJlZGlzCiAgZGVmYXVsdF9iYWNrZW5kIHJlZGlzX3NlcnZlcnMKICBtYXhjb25uIDEwMjQKCmJhY2tlbmQgcmVkaXNfc2VydmVycwogIGJhbGFuY2Ugcm91bmRyb2JpbgogICNvcHRpb24gdGNwLWNoZWNrCiAgI3RjcC1jaGVjayBjb25uZWN0CiAgI3RjcC1jaGVjayBzZW5kIFBJTkdcclxuCiAgI3RjcC1jaGVjayBleHBlY3Qgc3RyaW5nICtQT05HCiAgI3RjcC1jaGVjayBzZW5kIFFVSVRcclxuCiAgI3RjcC1jaGVjayBleHBlY3Qgc3RyaW5nICtPSwogICNzZXJ2ZXIgcmVkaXNfNzAwMCBsb2NhbGhvc3Q6NzAwMCBjaGVjayBpbnRlciAxcyB3ZWlnaHQgNzcKICAjc2VydmVyIHJlZGlzXzcwMDEgbG9jYWxob3N0OjcwMDEgY2hlY2sgaW50ZXIgMXMgd2VpZ2h0IDMzCkVPRgpzdWRvIG12IC90bXAvaGFwcm94eS5jZmcgL2V0Yy9oYXByb3h5L2hhcHJveHkuY2ZnCgojIEVuYWJsZSBIQVByb3h5CmNhdCA8PEVPRiA+L3RtcC9oYXByb3h5CkVOQUJMRUQ9MQpFT0YKc3VkbyBtdiAvdG1wL2hhcHJveHkgL2V0Yy9kZWZhdWx0L2hhcHJveHkKCiMgU3RhcnQgaXQKc3VkbyAvZXRjL2luaXQuZC9oYXByb3h5IHN0YXJ0CgpleHBvcnQgU0VSRl9ST0xFPSJyZWRpcyIKCgpjYXQgPDxFT0YgPi90bXAvcmVkaXMuY29uZgpiaW5kIDEyNy4wLjAuMQpwcm90ZWN0ZWQtbW9kZSBubwp0aW1lb3V0IDAKdGNwLWtlZXBhbGl2ZSAzMDAKbG9nbGV2ZWwgbm90aWNlIApwaWRmaWxlIC92YXIvcnVuL3JlZGlzXzYzNzkucGlkCkVPRgoKc3VkbyBtdiAvdG1wL3JlZGlzLmNvbmYgL2V0Yy9yZWRpcy9yZWRpcy5jb25mCgpzZXQgLWUKCnN1ZG8gYXB0LWdldCBpbnN0YWxsIC15IHVuemlwCgpjZCAvdG1wCnVudGlsIHdnZXQgLU8gc2VyZi56aXAgaHR0cHM6Ly9kbC5iaW50cmF5LmNvbS9taXRjaGVsbGgvc2VyZi8wLjYuNF9saW51eF9hbWQ2NC56aXA7IGRvCiAgICBzbGVlcCAxCmRvbmUKdW56aXAgc2VyZi56aXAKc3VkbyBtdiBzZXJmIC91c3IvbG9
jYWwvYmluL3NlcmYKCiMgVGhlIG1lbWJlciBqb2luIHNjcmlwdCBpcyBpbnZva2VkIHdoZW4gYSBtZW1iZXIgam9pbnMgdGhlIFNlcmYgY2x1c3Rlci4KIyBPdXIgam9pbiBzY3JpcHQgc2ltcGx5IGFkZHMgdGhlIG5vZGUgdG8gdGhlIGxvYWQgYmFsYW5jZXIuCmNhdCA8PEVPRiA+L3RtcC9qb2luLnNoCmlmIFsgInhcJHtTRVJGX1RBR19ST0xFfSIgIT0gInhsYiIgXTsgdGhlbgogICAgZWNobyAiTm90IGFuIGxiLiBJZ25vcmluZyBtZW1iZXIgam9pbi4iCiAgICBleGl0IDAKZmkKd2hpbGUgcmVhZCBsaW5lOyBkbwogICAgUk9MRT1cYGVjaG8gXCRsaW5lIHwgYXdrICd7cHJpbnQgXFxcJDMgfSdcYAogICAgaWYgWyAieFwke1JPTEV9IiAhPSAieHdlYiIgXTsgdGhlbgogICAgICAgIGNvbnRpbnVlCiAgICBmaQogICAgZWNobyBcJGxpbmUgfCBcXAogICAgICAgIGF3ayAneyBwcmludGYgIiAgICBzZXJ2ZXIgJXMgJXMgY2hlY2tcXG4iLCBcJDEsIFwkMiB9JyA+Pi9ldGMvaGFwcm94eS9oYXByb3h5LmNmZwpkb25lCi9ldGMvaW5pdC5kL2hhcHJveHkgcmVsb2FkCkVPRgpzdWRvIG12IC90bXAvam9pbi5zaCAvdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9qb2luLnNoCmNobW9kICt4IC91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2pvaW4uc2gKCiMgVGhlIG1lbWJlciBsZWF2ZSBzY3JpcHQgaXMgaW52b2tlZCB3aGVuIGEgbWVtYmVyIGxlYXZlcyBvciBmYWlscyBvdXQKIyBvZiB0aGUgc2VyZiBjbHVzdGVyLiBPdXIgc2NyaXB0IHJlbW92ZXMgdGhlIG5vZGUgZnJvbSB0aGUgbG9hZCBiYWxhbmNlci4KY2F0IDw8RU9GID4vdG1wL2xlYXZlLnNoCmlmIFsgInhcJHtTRVJGX1RBR19ST0xFfSIgIT0gInhsYiIgXTsgdGhlbgogICAgZWNobyAiTm90IGFuIGxiLiBJZ25vcmluZyBtZW1iZXIgbGVhdmUiCiAgICBleGl0IDAKZmkKd2hpbGUgcmVhZCBsaW5lOyBkbwogICAgTkFNRT1cYGVjaG8gXCRsaW5lIHwgYXdrICd7cHJpbnQgXFxcJDEgfSdcYAogICAgc2VkIC1pJycgIi9cJHtOQU1FfSAvZCIgL2V0Yy9oYXByb3h5L2hhcHJveHkuY2ZnCmRvbmUKL2V0Yy9pbml0LmQvaGFwcm94eSByZWxvYWQKRU9GCnN1ZG8gbXYgL3RtcC9sZWF2ZS5zaCAvdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9sZWZ0LnNoCmNobW9kICt4IC91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2xlZnQuc2gKCiMgQ29uZmlndXJlIHRoZSBhZ2VudApjYXQgPDxFT0YgPi90bXAvYWdlbnQuY29uZgpkZXNjcmlwdGlvbiAiU2VyZiBhZ2VudCIKc3RhcnQgb24gcnVubGV2ZWwgWzIzNDVdCnN0b3Agb24gcnVubGV2ZWwgWyEyMzQ1XQpleGVjIC91c3IvbG9jYWwvYmluL3NlcmYgYWdlbnQgXFwKICAgIC1ldmVudC1oYW5kbGVyICJtZW1iZXItam9pbj0vdXNyL2xvY2FsL2Jpbi9zZXJmX21lbWJlcl9qb2luLnNoIiBcXAogICAgLWV2ZW50LWhhbmRsZXIgIm1lbWJlci1sZWF2ZSxtZW1iZXItZmFpbGVkPS91c3IvbG9jYWwvYmluL3NlcmZfbWVtYmVyX2xlZnQuc2giIFxcCiA
gICAtZXZlbnQtaGFuZGxlciAicXVlcnk6bG9hZD11cHRpbWUiIFxcCiAgICAtdGFnIHJvbGU9JHtTRVJGX1JPTEV9ID4+L3Zhci9sb2cvc2VyZi5sb2cgMj4mMQpFT0YKc3VkbyBtdiAvdG1wL2FnZW50LmNvbmYgL2V0Yy9pbml0L3NlcmYuY29uZgoKIyBTdGFydCB0aGUgYWdlbnQhCnN1ZG8gc3RhcnQgc2VyZgoKIyBJZiB3ZSdyZSB0aGUgd2ViIG5vZGUsIHRoZW4gd2UgbmVlZCB0byBjb25maWd1cmUgdGhlIGpvaW4gcmV0cnkKaWYgWyAieCR7U0VSRl9ST0xFfSIgIT0gInh3ZWIiIF07IHRoZW4KICAgIGV4aXQgMApmaQoKY2F0IDw8RU9GID4vdG1wL2pvaW4uY29uZgpkZXNjcmlwdGlvbiAiSm9pbiB0aGUgc2VyZiBjbHVzdGVyIgpzdGFydCBvbiBydW5sZXZlbCBbMjM0NV0Kc3RvcCBvbiBydW5sZXZlbCBbITIzNDVdCnRhc2sKcmVzcGF3bgpzY3JpcHQKICAgIHNsZWVwIDUKICAgIGV4ZWMgL3Vzci9sb2NhbC9iaW4vc2VyZiBqb2luIDEwLjAuMC41CmVuZCBzY3JpcHQKRU9GCnN1ZG8gbXYgL3RtcC9qb2luLmNvbmYgL2V0Yy9pbml0L3NlcmYtam9pbi5jb25mCnN1ZG8gc3RhcnQgc2VyZi1qb2luCgpjYXQgPDxFT0YgPi90bXAvcXVlcnkuY29uZgpkZXNjcmlwdGlvbiAiUXVlcnkgdGhlIHNlcmYgY2x1c3RlciBsb2FkIgpzdGFydCBvbiBydW5sZXZlbCBbMjM0NV0Kc3RvcCBvbiBydW5sZXZlbCBbITIzNDVdCnJlc3Bhd24Kc2NyaXB0CiAgICBlY2hvIGBkYXRlYCBJIGFtICIke0hPU1ROQU1FfTxicj4iID4gL3Zhci93d3cvaW5kZXguaHRtbC4xCiAgICBzZXJmIHF1ZXJ5IC1uby1hY2sgbG9hZCB8IHNlZCAnc3wkfDxicj58JyA+PiAvdmFyL3d3dy9pbmRleC5odG1sLjEKICAgIG12IC92YXIvd3d3L2luZGV4Lmh0bWwuMSAvdmFyL3d3dy9pbmRleC5odG1sCiAgICBzbGVlcCAxMAplbmQgc2NyaXB0CkVPRgpzdWRvIG12IC90bXAvcXVlcnkuY29uZiAvZXRjL2luaXQvc2VyZi1xdWVyeS5jb25mCnN1ZG8gc3RhcnQgc2VyZi1xdWVyeQo="
}
},
"InstanceSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Serf demo security group",
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [{
"IpProtocol": "icmp",
"FromPort": "-1",
"ToPort": "-1",
"CidrIp": "0.0.0.0/0"
}, {
"IpProtocol": "tcp",
"FromPort": "22",
"ToPort": "22",
"CidrIp": "0.0.0.0/0"
}, {
"IpProtocol": "tcp",
"FromPort": "6379",
"ToPort": "6379",
"CidrIp": "0.0.0.0/0"
}, {
"IpProtocol": "tcp",
"FromPort": "7946",
"ToPort": "7946",
"CidrIp": "0.0.0.0/0"
}, {
"IpProtocol": "tcp",
"FromPort": "7373",
"ToPort": "7373",
"CidrIp": "0.0.0.0/0"
}]
}
},
"InstanceSecurityGroupec2": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "ec2 jump security grp",
"VpcId": {
"Ref": "VPC"
},
"SecurityGroupIngress": [{
"IpProtocol": "icmp",
"FromPort": "-1",
"ToPort": "-1",
"CidrIp": "0.0.0.0/0"
}, {
"IpProtocol": "tcp",
"FromPort": "22",
"ToPort": "22",
"CidrIp": "0.0.0.0/0"
}],
"SecurityGroupEgress": [{
"IpProtocol": "-1",
"CidrIp": "0.0.0.0/0"
}]
}
},
"ec2Server": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-123456",
"InstanceType": {
"Ref": "InstanceType"
},
"SecurityGroupIds": [{
"Ref": "InstanceSecurityGroupec2"
}],
"SubnetId": {
"Ref": "PublicSubnet"
}
}
},
"InstanceSecurityGroupSelfRule": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": {
"Ref": "InstanceSecurityGroup"
},
"IpProtocol": "-1",
"FromPort": "0",
"ToPort": "65535",
"SourceSecurityGroupId": {
"Ref": "InstanceSecurityGroup"
}
}
}
}
}
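The `UserData` values in the template are base64-encoded copies of the shell scripts listed below. To sanity-check what an instance will actually run, decode a blob directly; the literal here is just the first few characters of the `RedisboxesConfig` blob above:

```shell
# Decode the start of the RedisboxesConfig UserData blob
printf '%s' 'IyEvYmluL3NoCgpzZXQgLWUK' | base64 -d
# prints:
# #!/bin/sh
#
# set -e
```

Decoding the full blob the same way is an easy way to confirm the encoded script matches the plaintext version kept alongside the template.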
lbuserdata.sh
#!/bin/sh
set -e
# Install HAProxy
sudo apt-get update
sudo apt-get install -y haproxy
# Configure it in a jank way
cat <<EOF >/tmp/haproxy.cfg
global
daemon
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
listen stats
bind *:9999
mode http
stats enable
stats uri /
stats refresh 2s
#listen http-in
# bind *:80
# balance roundrobin
# option http-server-close
frontend redis
bind 127.0.0.1:5000 name redis
default_backend redis_servers
maxconn 1024
backend redis_servers
balance roundrobin
#option tcp-check
#tcp-check connect
#tcp-check send PING\r\n
#tcp-check expect string +PONG
#tcp-check send QUIT\r\n
#tcp-check expect string +OK
#server redis_7000 localhost:7000 check inter 1s weight 77
#server redis_7001 localhost:7001 check inter 1s weight 33
EOF
sudo mv /tmp/haproxy.cfg /etc/haproxy/haproxy.cfg
# Enable HAProxy
cat <<EOF >/tmp/haproxy
ENABLED=1
EOF
sudo mv /tmp/haproxy /etc/default/haproxy
# Start it
sudo /etc/init.d/haproxy start
export SERF_ROLE="lb"
set -e
sudo apt-get install -y unzip
cd /tmp
until wget -O serf.zip https://dl.bintray.com/mitchellh/serf/0.6.4_linux_amd64.zip; do
sleep 1
done
unzip serf.zip
sudo mv serf /usr/local/bin/serf
# The member join script is invoked when a member joins the Serf cluster.
# Our join script simply adds the node to the load balancer.
cat <<EOF >/tmp/join.sh
if [ "x\${SERF_TAG_ROLE}" != "xlb" ]; then
echo "Not an lb. Ignoring member join."
exit 0
fi
while read line; do
ROLE=\`echo \$line | awk '{print \\\$3 }'\`
if [ "x\${ROLE}" != "xweb" ]; then
continue
fi
echo \$line | \\
awk '{ printf " server %s %s check\\n", \$1, \$2 }' >>/etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/join.sh /usr/local/bin/serf_member_join.sh
chmod +x /usr/local/bin/serf_member_join.sh
# The member leave script is invoked when a member leaves or fails out
# of the serf cluster. Our script removes the node from the load balancer.
cat <<EOF >/tmp/leave.sh
if [ "x\${SERF_TAG_ROLE}" != "xlb" ]; then
echo "Not an lb. Ignoring member leave"
exit 0
fi
while read line; do
NAME=\`echo \$line | awk '{print \\\$1 }'\`
sed -i'' "/\${NAME} /d" /etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/leave.sh /usr/local/bin/serf_member_left.sh
chmod +x /usr/local/bin/serf_member_left.sh
# Configure the agent
cat <<EOF >/tmp/agent.conf
description "Serf agent"
start on runlevel [2345]
stop on runlevel [!2345]
exec /usr/local/bin/serf agent \\
-event-handler "member-join=/usr/local/bin/serf_member_join.sh" \\
-event-handler "member-leave,member-failed=/usr/local/bin/serf_member_left.sh" \\
-event-handler "query:load=uptime" \\
-tag role=${SERF_ROLE} >>/var/log/serf.log 2>&1
EOF
sudo mv /tmp/agent.conf /etc/init/serf.conf
# Start the agent!
sudo start serf
# If we're the web node, then we need to configure the join retry
if [ "x${SERF_ROLE}" != "xweb" ]; then
exit 0
fi
cat <<EOF >/tmp/join.conf
description "Join the serf cluster"
start on runlevel [2345]
stop on runlevel [!2345]
task
respawn
script
sleep 5
exec /usr/local/bin/serf join 10.0.0.5
end script
EOF
sudo mv /tmp/join.conf /etc/init/serf-join.conf
sudo start serf-join
cat <<EOF >/tmp/query.conf
description "Query the serf cluster load"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
echo `date` I am "${HOSTNAME}<br>" > /var/www/index.html.1
serf query -no-ack load | sed 's|$|<br>|' >> /var/www/index.html.1
mv /var/www/index.html.1 /var/www/index.html
sleep 10
end script
EOF
sudo mv /tmp/query.conf /etc/init/serf-query.conf
sudo start serf-query
redisboxdata.sh
#!/bin/sh
set -e
# Install HAProxy
sudo apt-get update
sudo apt-get install -y haproxy
# Configure it in a jank way
cat <<EOF >/tmp/haproxy.cfg
global
daemon
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
listen stats
bind *:9999
mode http
stats enable
stats uri /
stats refresh 2s
frontend redis
bind 127.0.0.1:5000 name redis
default_backend redis_servers
maxconn 1024
backend redis_servers
balance roundrobin
#option tcp-check
#tcp-check connect
#tcp-check send PING\r\n
#tcp-check expect string +PONG
#tcp-check send QUIT\r\n
#tcp-check expect string +OK
#server redis_7000 localhost:7000 check inter 1s weight 77
#server redis_7001 localhost:7001 check inter 1s weight 33
EOF
sudo mv /tmp/haproxy.cfg /etc/haproxy/haproxy.cfg
# Enable HAProxy
cat <<EOF >/tmp/haproxy
ENABLED=1
EOF
sudo mv /tmp/haproxy /etc/default/haproxy
# Start it
sudo /etc/init.d/haproxy start
export SERF_ROLE="redis"
cat <<EOF >/tmp/redis.conf
bind 127.0.0.1
protected-mode no
timeout 0
tcp-keepalive 300
loglevel notice
pidfile /var/run/redis_6379.pid
EOF
sudo mv /tmp/redis.conf /etc/redis/redis.conf
set -e
sudo apt-get install -y unzip
cd /tmp
until wget -O serf.zip https://dl.bintray.com/mitchellh/serf/0.6.4_linux_amd64.zip; do
sleep 1
done
unzip serf.zip
sudo mv serf /usr/local/bin/serf
# The member join script is invoked when a member joins the Serf cluster.
# Our join script simply adds the node to the load balancer.
cat <<EOF >/tmp/join.sh
if [ "x\${SERF_TAG_ROLE}" != "xlb" ]; then
echo "Not an lb. Ignoring member join."
exit 0
fi
while read line; do
ROLE=\`echo \$line | awk '{print \\\$3 }'\`
if [ "x\${ROLE}" != "xweb" ]; then
continue
fi
echo \$line | \\
awk '{ printf " server %s %s check\\n", \$1, \$2 }' >>/etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/join.sh /usr/local/bin/serf_member_join.sh
chmod +x /usr/local/bin/serf_member_join.sh
# The member leave script is invoked when a member leaves or fails out
# of the serf cluster. Our script removes the node from the load balancer.
cat <<EOF >/tmp/leave.sh
if [ "x\${SERF_TAG_ROLE}" != "xlb" ]; then
echo "Not an lb. Ignoring member leave"
exit 0
fi
while read line; do
NAME=\`echo \$line | awk '{print \\\$1 }'\`
sed -i'' "/\${NAME} /d" /etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/leave.sh /usr/local/bin/serf_member_left.sh
chmod +x /usr/local/bin/serf_member_left.sh
# Configure the agent
cat <<EOF >/tmp/agent.conf
description "Serf agent"
start on runlevel [2345]
stop on runlevel [!2345]
exec /usr/local/bin/serf agent \\
-event-handler "member-join=/usr/local/bin/serf_member_join.sh" \\
-event-handler "member-leave,member-failed=/usr/local/bin/serf_member_left.sh" \\
-event-handler "query:load=uptime" \\
-tag role=${SERF_ROLE} >>/var/log/serf.log 2>&1
EOF
sudo mv /tmp/agent.conf /etc/init/serf.conf
# Start the agent!
sudo start serf
# If we're the web node, then we need to configure the join retry
if [ "x${SERF_ROLE}" != "xweb" ]; then
exit 0
fi
cat <<EOF >/tmp/join.conf
description "Join the serf cluster"
start on runlevel [2345]
stop on runlevel [!2345]
task
respawn
script
sleep 5
exec /usr/local/bin/serf join 10.0.0.5
end script
EOF
sudo mv /tmp/join.conf /etc/init/serf-join.conf
sudo start serf-join
cat <<EOF >/tmp/query.conf
description "Query the serf cluster load"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
echo `date` I am "${HOSTNAME}<br>" > /var/www/index.html.1
serf query -no-ack load | sed 's|$|<br>|' >> /var/www/index.html.1
mv /var/www/index.html.1 /var/www/index.html
sleep 10
end script
EOF
sudo mv /tmp/query.conf /etc/init/serf-query.conf
sudo start serf-query
Hello, I'm new to this. I have written this template, which generates:
- a VPC
- a private/public subnet
- an ASG (with a startup script)
- an ELB
- a separate EC2 instance as a jumpbox to access the Redis instances in a round-robin manner

I'm getting the following error:
error: Parse error on line 321:
...icSubnet" }, }, } "InstanceS
---------------------^
Expecting 'STRING', got '}'
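The `Expecting 'STRING', got '}'` error is the classic symptom of a trailing comma before a closing brace (here, after the `"SubnetId"` block in `ec2Server`) or a missing comma between two resource keys. Any strict JSON parser will pinpoint it; for example, piping a suspect snippet (or the whole template) through `python3 -m json.tool`:

```shell
# Reproduce the failure with a minimal snippet: the comma after the
# "SubnetId" object has nothing following it inside the object
echo '{ "SubnetId": { "Ref": "PublicSubnet" }, }' | python3 -m json.tool
# exits non-zero and reports the line/column of the stray comma;
# running the full template through the same command locates line 321
```

Dropping the trailing commas, and adding the comma that is missing between `ec2Server`'s closing brace and `"InstanceSecurityGroupSelfRule"`, makes the template parse.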
It would also be a great help if someone could tell me whether I'm on the right track.
https://redd.it/fce9pm
@r_devops
fi
echo \$line | \\
awk '{ printf " server %s %s check\\n", \$1, \$2 }' >>/etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/join.sh /usr/local/bin/serf_member_join.sh
chmod +x /usr/local/bin/serf_member_join.sh
# The member leave script is invoked when a member leaves or fails out
# of the serf cluster. Our script removes the node from the load balancer.
cat <<EOF >/tmp/leave.sh
if [ "x\${SERF_TAG_ROLE}" != "xlb" ]; then
echo "Not an lb. Ignoring member leave"
exit 0
fi
while read line; do
NAME=\`echo \$line | awk '{print \\\$1 }'\`
sed -i'' "/\${NAME} /d" /etc/haproxy/haproxy.cfg
done
/etc/init.d/haproxy reload
EOF
sudo mv /tmp/leave.sh /usr/local/bin/serf_member_left.sh
chmod +x /usr/local/bin/serf_member_left.sh
# Configure the agent
cat <<EOF >/tmp/agent.conf
description "Serf agent"
start on runlevel [2345]
stop on runlevel [!2345]
exec /usr/local/bin/serf agent \\
-event-handler "member-join=/usr/local/bin/serf_member_join.sh" \\
-event-handler "member-leave,member-failed=/usr/local/bin/serf_member_left.sh" \\
-event-handler "query:load=uptime" \\
-tag role=${SERF_ROLE} >>/var/log/serf.log 2>&1
EOF
sudo mv /tmp/agent.conf /etc/init/serf.conf
# Start the agent!
sudo start serf
# If we're the web node, then we need to configure the join retry
if [ "x${SERF_ROLE}" != "xweb" ]; then
exit 0
fi
cat <<EOF >/tmp/join.conf
description "Join the serf cluster"
start on runlevel [2345]
stop on runlevel [!2345]
task
respawn
script
sleep 5
exec /usr/local/bin/serf join 10.0.0.5
end script
EOF
sudo mv /tmp/join.conf /etc/init/serf-join.conf
sudo start serf-join
cat <<EOF >/tmp/query.conf
description "Query the serf cluster load"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
echo `date` I am "${HOSTNAME}<br>" > /var/www/index.html.1
serf query -no-ack load | sed 's|$|<br>|' >> /var/www/index.html.1
mv /var/www/index.html.1 /var/www/index.html
sleep 10
end script
EOF
sudo mv /tmp/query.conf /etc/init/serf-query.conf
sudo start serf-query
Hello, I m new to this. I have written this which generates
\--vps
\--private/public subnet
\--asg(with starup script)
\--elb
\--separate ec2 instance as a jumpbox to access the redis instacnes in a round robin manner
​
getting the following error:
error: Parse error on line 321:
...icSubnet" }, }, } "InstanceS
---------------------^
Expecting 'STRING', got '}'
Also it would be a great help if someone told me if i m on the right track.
https://redd.it/fce9pm
@r_devops
Need help with cloudformation
Security Engineer for Small vs Large Company?
Hello all, I'm currently a security engineer for a small company. I'm responsible for many things: the SIEM tool, monitoring tools, identity management, the CI/CD pipeline, cloud infrastructure, Kubernetes clusters, etc. Basically a cloud security engineer + DevOps. Because we are a small company (250 people) and I'm the only one in this role, I get a bit of respect and people look up to me; I feel important many times. I get perks like training (SANS included) once a year, and they will pay for any certification exam fee, as many as I take during the year, including some cheap training. I've been with this company for about 1.5 years, and I just got my first raise a few months ago. I was hoping it would be way larger than it was, but it was only a 4% increase. So I updated LinkedIn and recruiters had a new target :)
That being said, I currently have offers from two large companies, one with 700k employees and the other with over 50k employees. Both offers are about 30% more than what I'm currently making, with a potentially higher bonus and better benefits, but I will go from being a key member who has a say across the entire organization to being a member of a large team. Which could be a bad thing, or a really good thing, since I will have other members to learn from and share ideas with.
Note: I did talk to my CIO about not being happy with the raise; he told me to eat dirt (in a nice way).
https://redd.it/fcnecx
@r_devops