AWS CLI: 10 useful commands you may not know

The AWS console is certainly very well laid out and, with time, becomes very easy to use. However, if you are not using the AWS CLI (Command Line Interface) from your local terminal, you may be missing out on a whole lot of great functionality and speed. If you are not yet comfortable with the AWS Command Line Interface, there’s a great course on the subject available right now on Cloud Academy.

Even if you are used to the AWS CLI, I encourage you to take a look at the commands below, as you may not be completely aware of the power of the AWS CLI, and you might just end up saving yourself a whole lot of time. One important note: the precise syntax of some commands can vary between versions and packages.

1. Delete an S3 bucket and all its contents with just one command

Sometimes you may end up with a bucket full of hundreds or thousands of files that you no longer need. If you have ever had to delete a substantial number of items in S3, you know that this can be a little time-consuming. The following command will delete a bucket and all of its contents, including directories:

aws s3 rb s3://bucket-name --force
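
Under the hood, --force first deletes every object and then removes the bucket itself. As a rough local analogy (a plain temp directory standing in for the bucket, purely for illustration), the difference between removing an empty and a non-empty container looks like this:

```shell
# Local stand-in: a temp directory plays the role of the bucket.
bucket=$(mktemp -d)
mkdir -p "$bucket/photos/2023"
printf 'data' > "$bucket/photos/2023/img.txt"

# Plain rmdir refuses a non-empty directory, just as "aws s3 rb"
# refuses a non-empty bucket without --force:
rmdir "$bucket" 2>/dev/null || echo "not empty"

# rm -rf removes the contents and the directory itself, which is
# what "rb --force" does for a bucket and its objects:
rm -rf "$bucket"
[ -d "$bucket" ] || echo "bucket gone"
```

The equivalent two-step approach in the CLI is aws s3 rm s3://bucket-name --recursive followed by aws s3 rb s3://bucket-name.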

2. Recursively copy a directory and its subfolders from your PC to Amazon S3

If you have used the S3 Console, at some stage, you’ve probably found yourself having to copy a ton of files to a bucket from your PC. It can be a little clunky at times, especially if you have multiple directory levels that need to be copied. The following AWS CLI command will make the process a little easier, as it will copy a directory and all of its subfolders from your PC to a specified region in Amazon S3.

aws s3 cp MyFolder s3://bucket-name --recursive [--region us-west-2]

3. Display subsets of all available EC2 images

The following will display all available EC2 images, filtered to include only those built on Ubuntu (assuming, of course, that you’re working from a terminal on a Linux or Mac machine).

aws ec2 describe-images | grep ubuntu

Warning: this may take a few minutes.

4. List users in a different format

Sometimes, depending on the output format you chose as your default, long lists, like a large set of users, can be a little hard to read. Including the --output parameter with, say, the table argument will display a nice, easy-to-read table for this one command without having to change your default.

aws iam list-users --output table

5. List the sizes of an S3 bucket and its contents

The following command uses JSON output to list the size of a bucket and the items stored within. This might come in handy when auditing what is taking up all your S3 storage.

aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
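
The --query string is JMESPath. If you want to sanity-check what that expression computes, here is a local sketch that applies the same sum-and-count logic to a sample list-objects response (the object keys and sizes below are made up):

```shell
# A trimmed-down sample of the JSON that list-objects returns:
cat > /tmp/objects.json <<'EOF'
{"Contents": [{"Key": "a.log", "Size": 120}, {"Key": "b.log", "Size": 80}]}
EOF

# Equivalent of --query "[sum(Contents[].Size), length(Contents[])]":
python3 - <<'EOF'
import json

with open("/tmp/objects.json") as f:
    contents = json.load(f)["Contents"]

sizes = [obj["Size"] for obj in contents]
print([sum(sizes), len(sizes)])  # [total bytes, number of objects]
EOF
```

Against the sample file this prints [200, 2]: 200 bytes spread across two objects, which is exactly what the --query expression would return for that response.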

6. Move an S3 bucket to a different location

If you need to quickly move an S3 bucket to a different location, then this command just might save you a ton of time.

aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1 --region us-west-2

7. List users by ARN

“jq” is like sed for JSON data – you can use it to slice, filter, map, and transform structured data with the same ease that sed, awk, grep and friends let you play with non-JSON text.

Armed with that knowledge, we can now nicely list all our users, but only show their ARNs.

aws iam list-users --output json | jq -r .Users[].Arn

Note: jq might not be installed on your system by default. On Debian-based systems (including Ubuntu), install it with sudo apt-get install jq
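
If installing jq is not an option, Python (which ships with most systems) can do the same extraction. Here is a local sketch run against a sample list-users response; the account ID and user names below are invented:

```shell
# A sample of the JSON that "aws iam list-users --output json" returns:
cat > /tmp/users.json <<'EOF'
{"Users": [
  {"UserName": "alice", "Arn": "arn:aws:iam::123456789012:user/alice"},
  {"UserName": "bob",   "Arn": "arn:aws:iam::123456789012:user/bob"}
]}
EOF

# Equivalent of: jq -r .Users[].Arn
python3 -c '
import json, sys
for user in json.load(sys.stdin)["Users"]:
    print(user["Arn"])
' < /tmp/users.json
```

In practice you would pipe the output of aws iam list-users --output json straight into the python3 command rather than going through a file.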

8. List all of your instances that are currently stopped, and the reason for the stop

Here’s another use of the JSON output parameter. This one will list all of your stopped instances and, best of all, show the reason that they were stopped:

aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped --region eu-west-1 --output json | jq -r .Reservations[].Instances[].StateReason.Message

9. Test one of your public CloudFormation templates

If you have written a CloudFormation template and need to validate it before launching, you can do so from the CLI using the following format (pass the template’s S3 URL as the value of --template-url):

aws cloudformation validate-template --region eu-west-1 --template-url

10. Other ways to pass input parameters to the AWS CLI with JSON

You can pass all sorts of input parameters to the AWS CLI. Here’s an example of how to do it:

aws iam put-user-policy --user-name AWS-Cli-Test --policy-name Power-Access --policy-document '{ "Statement": [ { "Effect": "Allow", "NotAction": "iam:*", "Resource": "*" } ] }'
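
Inline JSON like this is easy to break with a misplaced comma or a smart quote pasted from a web page. Since python3 -m json.tool exits non-zero on malformed input, it makes a cheap pre-flight check before handing the document to the AWS CLI:

```shell
policy='{ "Statement": [ { "Effect": "Allow", "NotAction": "iam:*", "Resource": "*" } ] }'

# Validate the document locally; json.tool pretty-prints valid JSON
# and fails loudly on anything malformed:
if printf '%s' "$policy" | python3 -m json.tool > /dev/null; then
    echo "policy JSON is valid"
fi
```

Only after the check passes would you paste the same document into the --policy-document argument above.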

mrepo/rhel-2014.09-x86_64/RPMS.all/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"

Now, if you see this error on an Amazon Linux AMI that you have built in AWS while installing a package from your own custom repository, then you are on the right page for the solution. 🙂

The root cause is the Amazon Linux AMI (and Amazon) itself: it doesn’t use Red Hat version numbering like 5, 6, 7. It uses dates for releases, e.g. 2014.09. It also doesn’t use the official CentOS and Red Hat repositories, relying instead on internal Amazon repositories with a different structure and logic.
The failure above is caused by the "latest" symlink in the URL, which points to the latest Amazon release. We can’t make such a symlink point exclusively to the Percona CentOS 6 repository.

I see two options to resolve it as of now:

1. Do not allow the AMI to define $releasever as "latest"; set it manually in percona-release.repo. You need to replace $releasever with the exact CentOS version in percona-release.repo. Example command on CentOS 6 based AMIs: sed -i 's/$releasever/6/g' /etc/yum.repos.d/percona-release.repo
2. Do not use Amazon AMIs in such cases, because they are not exactly the same OS; they are a kind of OS fork made by Amazon, adjusted exclusively for Amazon services, software, and infrastructure. Use CentOS AMIs instead.
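
To see exactly what the sed command in the first option does, here it is run against a minimal stand-in for percona-release.repo (the file below is a simplified fake; the real file has more sections and URLs):

```shell
# Simplified stand-in for /etc/yum.repos.d/percona-release.repo:
cat > /tmp/percona-release.repo <<'EOF'
[percona-release-noarch]
baseurl = http://repo.percona.com/centos/$releasever/os/noarch/
EOF

# Pin the release to CentOS 6; the single quotes keep the shell from
# expanding $releasever before sed sees it:
sed -i 's/$releasever/6/g' /tmp/percona-release.repo
grep baseurl /tmp/percona-release.repo
```

After the substitution, the baseurl reads .../centos/6/os/noarch/, so yum no longer follows the "latest" symlink that Amazon’s $releasever resolves to.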

This fix worked for me, so give it a try and see if it solves your issue.