In previous posts, I covered the basics of Amazon Web Services, Preparing AWS Services to support our sample WordPress workload, and Installing WordPress in AWS with ElastiCache and S3/Glacier backups. In this post we’ll configure an AWS Auto Scaling group of WordPress servers that responds dynamically to load – creating new EC2 instances on demand, and terminating unneeded instances during low demand to save on operational expenses. Auto Scaling relies on Amazon CloudWatch to monitor AWS components and respond to alarm metrics (CPU utilization, number of connections, etc.).
AWS Developer Tools
The first thing we need in place to configure an Auto Scaling group of WordPress servers is a command line environment for the AWS developer tools. Download and configure the Auto Scaling Command Line Tool, the Amazon EC2 API Tools, and the Amazon CloudWatch Command Line Tool from https://aws.amazon.com/developertools. Check out the other tools while you are there, as they will come in handy as you increase the level of automation in your environment.
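These Java-based tools all need a few environment variables set before their commands will run. Here is a minimal sketch for a Linux shell, assuming you unpacked each tool under /opt and keep your access key and secret key in a credential file (the paths and file names below are my own examples, not required values):

export JAVA_HOME=/usr/lib/jvm/jre
export AWS_AUTO_SCALING_HOME=/opt/AutoScaling
export EC2_HOME=/opt/ec2-api-tools
export AWS_CLOUDWATCH_HOME=/opt/CloudWatch
export AWS_CREDENTIAL_FILE=$HOME/.aws-credential-file
export PATH=$PATH:$AWS_AUTO_SCALING_HOME/bin:$EC2_HOME/bin:$AWS_CLOUDWATCH_HOME/bin

The credential file is a simple two-line text file containing AWSAccessKeyId= and AWSSecretKey= entries. Running as-cmd afterwards should print the list of available Auto Scaling commands – a quick way to confirm the tools are wired up correctly.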
Create an AMI
Our last article ended with a complete WordPress server. We now need to convert that running server into an Amazon Machine Image (AMI). In the EC2 console, select the WordPress server and click Actions | Create Image (EBS AMI). Give the image a name and description, and leave the storage configuration at its defaults. Note the ID of the pending image in the confirmation screen.
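If you prefer to script this step, the EC2 API Tools can create the image as well. A quick sketch, using a hypothetical instance ID of i-1a2b3c4d (substitute the ID of your own WordPress server):

ec2-create-image i-1a2b3c4d --name wordpress-template --description "WordPress template for Auto Scaling"
ec2-describe-images -o self

The second command lets you watch the image state move from pending to available.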
With our template EC2 instance now being copied to an AMI, let’s go ahead and remove it from our Elastic Load Balancer configuration. We want to serve our website off of Auto Scaling instances, not a static server.
Configure Auto Scaling
With our AMI created, we can now configure Auto Scaling. The first step is to use the Auto Scaling Command Line Tool to create a Launch Config. A Launch Config defines the AMI, instance type, access key and security group for our auto scaled servers. Here’s the command:
as-create-launch-config WordPressLC --image-id ami-36b7cb5f --instance-type t1.micro --key MyAWSKey --group WordPress
where WordPressLC is the name of the Launch Config I am creating and ami-36b7cb5f is the AMI created from our template server. I also define the instance type of the EC2 servers I want created, my server access key, and the security group I want my server instances placed in.
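Before moving on, you can confirm the Launch Config was created with a quick sanity check:

as-describe-launch-configs --headers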
Next I’ll create an Auto Scaling group with the following command:
as-create-auto-scaling-group WordPressASGroup --launch-configuration WordPressLC --availability-zones us-east-1a us-east-1b us-east-1c --min-size 1 --max-size 6 --grace-period 30 --desired-capacity 2 --load-balancers WordpressASLB --health-check-type ELB --tag "k=Name, v=autoscale-wordpress, p=true"
A few things to note about this command:
- The Auto Scaling group will create instances in all three US-East availability zones.
- My Auto Scaling group will have a minimum size of one EC2 instance and a maximum size of six. My desired capacity is two servers – the number of instances the group starts out with. I also define a grace period of 30 seconds – the length of time after a new instance launches during which health check failures are ignored, giving it time to boot before it can be marked unhealthy and replaced.
- I define my Elastic Load Balancer for the group (this ELB is assigned our website’s DNS CNAME in Route 53) and give the instances a Name tag so I know what they are when looking at them in the EC2 Console.
- The health check type is ELB. By default, an Auto Scaling group determines instance health solely from the Amazon EC2 instance status checks. Because this group is associated with an Elastic Load Balancing load balancer and uses the ELB health check, Auto Scaling considers an instance healthy only if it passes both the EC2 status checks and the load balancer’s health checks.
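Once the group is created, the same tool set will show the group and the instances it has launched (an optional sanity check):

as-describe-auto-scaling-groups WordPressASGroup --headers
as-describe-auto-scaling-instances --headers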
Now we need to define some scaling policies using the following commands:
as-put-scaling-policy --auto-scaling-group WordPressASGroup --name scale-up --adjustment 1 --type ChangeInCapacity --cooldown 60
as-put-scaling-policy --auto-scaling-group WordPressASGroup --name scale-dn "--adjustment=-1" --type ChangeInCapacity --cooldown 60
The syntax of these commands is straightforward, so I won’t cover it in detail. I will point out that each command returns a string beginning with ‘arn:aws:’ – the policy’s ARN. Copy this string for each scaling policy you create; you’ll need it when wiring up the CloudWatch alarms below.
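If you misplace an ARN, you can retrieve it later by listing the policies attached to the group:

as-describe-policies --auto-scaling-group WordPressASGroup --headers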
Now we’ll implement some CloudWatch monitors for our Auto Scaling group. I want the scale-up policy to be triggered when the CPU utilization on my instances reaches 30%, and I want the scale-dn policy to be triggered when the CPU utilization on my instances reaches 10%. To accomplish this, I will use the following commands against the CloudWatch Command Line Tool:
mon-put-metric-alarm wordpress-scale-up --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace AWS/EC2 --period 60 --statistic Average --threshold 30 --actions-enabled true --dimensions "AutoScalingGroupName=WordpressASGroup" --alarm-actions arn:aws:autoscaling:us-east-1:610328387659:scalingPolicy:50a96126-c6bf-44c0-a483-691f4fded997:autoScalingGroupName/WordpressASGroup:policyName/scale-up --alarm-description "Scale up at 30% load" --unit Percent
mon-put-metric-alarm wordpress-scale-dn --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace AWS/EC2 --period 60 --statistic Average --threshold 10 --actions-enabled true --dimensions "AutoScalingGroupName=WordpressASGroup" --alarm-actions arn:aws:autoscaling:us-east-1:610328387659:scalingPolicy:be98ea99-91ed-4945-b6be-9612f6a607c7:autoScalingGroupName/WordpressASGroup:policyName/scale-dn --alarm-description "Scale down at 10% load" --unit Percent
(Note: These thresholds are probably too low for a production workload, but I want my demo to kick in quickly. CloudWatch exposes hundreds of metrics across AWS services. Adjust your thresholds and metrics accordingly.)
These commands are long, and I’ve found that the order of the arguments matters. Again, the syntax is fairly clear, so I won’t belabor the explanations except to point out that each mon-put-metric-alarm uses the arn:aws value returned by the corresponding as-put-scaling-policy command.
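You can verify that both alarms registered correctly before moving on:

mon-describe-alarms --headers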
You can now switch to your EC2 Console. You should see a new EC2 instance starting up with the Name and aws:autoscaling:groupName tags assigned in the as-create-auto-scaling-group command. You’ll also notice that your instances are distributed across availability zones. If you inspect your Elastic Load Balancer, you’ll see that your new Auto Scaling instance(s) have been added to the load balancer.
Your site should now load, assuming DNS has had time to propagate. Here’s my site:
Auto Scaling to Meet Demand
Now we need some users to hit my site so I can see Auto Scaling go to work when the servers come under load. I’m pretty sure I can’t get a rush of users to visit my demo site, so I’ll simulate a load using Siege (you could also use https://blitz.io). I have Siege running in an EC2 instance, and prepopulated with a list of URLs for my site that I plucked from my XML sitemap.
To simulate a user load, I ran Siege for 500 seconds with 20 concurrent users against the URLs file created from my sitemap:
./siege -t500s -c20 -i --file=/etc/siege/urls.txt
Within a few seconds I can see Siege begin working. The output from Siege shows steadily increasing response times for my site until connections begin to time out. If you pay close attention, you might see response times briefly improve as the ElastiCache cache warms, until the server is overwhelmed (I chose the micro instance because it’s not too hard to bring it to its knees in a demo).
As Siege is running, check out the monitoring tab on your Elastic Load Balancer. You will see an increasing connection count, and instances becoming unhealthy as they fail to respond to health checks.
You can also switch to the CloudWatch console and see that the alarms you created in the command line tool are displayed and the scale-up alarm is triggered:
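The same metric is available from the command line with the CloudWatch tools. A sketch using mon-get-stats – the start time is just an example, so adjust it to a window shortly before your test:

mon-get-stats CPUUtilization --namespace AWS/EC2 --statistics Average --dimensions "AutoScalingGroupName=WordpressASGroup" --period 60 --start-time 2015-06-01T12:00:00Z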
If you browse to the site under siege, it is sluggish at best – probably unresponsive. Once the grace period, evaluation periods, and cooldown periods have passed, you’ll see new EC2 instances being spawned by your Auto Scaling policies:
These instances will be added to your ELB once healthy, and your response times will begin to improve. Notice in the screen grab above that the three running instances have been distributed evenly across availability zones. If you reconfigure Siege to use a greater number of users, you’ll see more EC2 instances created in response to the load, until you reach the maximum number of instances defined in your Auto Scaling group. After the maximum has been hit, your site may become unavailable until the siege is lifted. This is where AWS Simple Notification Service (SNS) would be good to have in place: SNS could notify administrators that thresholds and limits are being exceeded so they can take action to correct the situation. Remediation options include raising the maximum number of EC2 instances in the Auto Scaling group, or updating the Launch Configuration to use a more powerful instance type (t1.micro to m1.large, for example). Additional ElastiCache nodes could also be added on demand, or the RDS database instance could be upsized if it turns out to be the bottleneck.
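Both of those Auto Scaling remediation steps can be performed with the same command line tools. A sketch, assuming a hypothetical second Launch Config named WordPressLC-large built from the same AMI:

as-create-launch-config WordPressLC-large --image-id ami-36b7cb5f --instance-type m1.large --key MyAWSKey --group WordPress
as-update-auto-scaling-group WordPressASGroup --launch-configuration WordPressLC-large --max-size 10

Existing instances keep running on the old configuration; only instances launched after the update pick up the larger type.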
Scaling Down After Demand
Once the siege has been lifted and demand drops, our scale-down policy will begin shutting down unneeded EC2 instances.
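You can watch these terminations (and the earlier launches) in the group’s activity history:

as-describe-scaling-activities --auto-scaling-group WordPressASGroup --headers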
Because we pay only for the instance hours we actually consume, scaling this way costs little while still meeting the demands of our workload.
Wrapping Up
And there you have it – an auto-scaling WordPress environment that takes advantage of a host of Amazon Web Services:
- EC2 server instances
- Elastic Load Balancers
- Route 53
- RDS
- S3
- Glacier
- CloudFront
- ElastiCache
- Auto Scaling
- CloudWatch
I hope that this series has provided you with a basic understanding of Amazon Web Services and how a real-world workload can be deployed and scaled automatically on AWS. I have simplified quite a bit in this series of articles – if you have questions or need help architecting a solution on AWS, integrating your on-premises VMware cloud with public clouds, or understanding how to use the cloud for backup and disaster recovery, feel free to reach out to Clearpath – we’re happy to help!
Series Links:
Part 1: Introduction to Amazon Web Services (AWS)
Part 2: Preparing Amazon Web Services (AWS) for an Auto-Scaling WordPress Site
Part 3: Installing WordPress on AWS
Part 4: Configuring AWS Auto Scaling for WordPress
David McMurray says
Can you explain which part of this set up ensures that each instance of the webserver contains the same web content? For example if you have added some posts to WordPress, when a new instance of the webserver starts how do you ensure that it contains any new db records and files (images, etc.) that have been added to the main instance since the AMI image was created for the auto-scaling instances?
Similarly, when you have 2 or more instances of the webserver running with a copy of WordPress on each, when you log into one to create a post/upload images, how does this new content also get added to the other instances?
Piotr Klubinski says
wanted to ask the same question…
seems like this can be useful only for ‘static’ pages… which seems to not be the use case of WordPress…
Frank says
This article only deals with autoscaling EC2 instances based on load…it has nothing to do with WordPress and does not solve the problem of static media content on a WordPress server, such as images, video or other static content. So basically the article is useless for someone who wants to auto scale a real-world WordPress installation in AWS.
Josh Townsend says
Sorry to disappoint, Frank. If you let me know what other help you were looking for, instead of just dropping the nasty old “your site is useless” negative bomb, I may be able to provide it.
If you look at some of the other articles in this series, you’ll see that I did cover some of the static content issues via an external NFS server that holds the static WordPress files and media. Not the most elegant or easily scalable approach (you would have to set up clustering and/or file sync), but it works. With the forthcoming AWS Elastic File System (EFS), you can just mount an EFS file system over NFS on your web servers and let EFS handle scaling and availability of static content. Also, most – if not all – of your media content would be on the CloudFront CDN, so static media becomes less of a concern (at least from a performance point of view).
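For anyone curious what that will look like, mounting an EFS file system is just a standard NFSv4.1 mount from each web server. A sketch with a hypothetical file system ID and mount point (substitute your own):

sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html/wp-content/uploads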
If you have any specific questions leave a (positive) comment.
playpassmedia says
looks like EFS is in preview…https://aws.amazon.com/efs/preview/ , i’ve also seen a plugin that would upload anything in media to aws s3…so i think that plus cloudfront would solve the static content concerns. testing would be needed to confirm.
I thank you for this tut, it’s the best one I’ve seen…i look forward to more, i think focusing on wordpress is a good use case…since so many users. i will look forward to an updated wp one, this year 🙂 (hint, hint)