AWS CLI Cheat Sheet


One of the many benefits of AWS is that it can be managed across different operating systems with a single unified tool: the AWS Command Line Interface (CLI). In this blog post we will show you how to install the CLI and outline some essential commands to get you started.

Installing the CLI


For Windows operating systems you have a couple of options. You can either run the following command from an administrator command prompt:

$ msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi

Or you can download the installer directly from https://awscli.amazonaws.com/AWSCLIV2.msi and run it.


For Linux operating systems you can open a terminal and run the following commands in order:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install


For macOS you can run the following commands in order:

$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
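Once the installer finishes, it is worth confirming the install and setting up credentials before running any of the commands below. A minimal sketch; the access key values and the `demo` profile name are placeholders, not real credentials:

```shell
# Confirm the CLI is on your PATH and check the installed version
aws --version

# Configure your default credentials interactively; you will be
# prompted for an access key ID, secret key, region and output format
aws configure

# Alternatively, set a named profile non-interactively
# (the key values and profile name here are placeholders)
aws configure set aws_access_key_id AKIAEXAMPLE --profile demo
aws configure set aws_secret_access_key wJalrEXAMPLEKEY --profile demo
aws configure set region eu-west-2 --profile demo
```

All later commands accept `--profile demo` if you prefer named profiles over the default one.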

9 Essential Commands 

1.       Create a new bucket 
The first thing you are going to want to do when you have everything installed and configured is create a new bucket to store data. Note that bucket names must be globally unique and all lowercase. To do this run the following command:
$ aws s3 mb s3://example-bucket

2.       Copy a file from your machine to a bucket 
Once you have a bucket set up you are going to want to get some data in there. In the below example we are copying a file called example.txt to the brand new example-bucket we created above.
$ aws s3 cp example.txt s3://example-bucket

 3.       Delete a bucket and everything within it
If you no longer need a bucket and its contents you can simply delete the whole thing and all the data within. To do this run the following command:
$ aws s3 rb s3://example-bucket --force
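If you want to empty a bucket but keep the bucket itself, you can delete just its contents instead. A sketch, reusing the illustrative example-bucket name from above:

```shell
# Remove every object in the bucket but leave the bucket in place
aws s3 rm s3://example-bucket --recursive

# The bucket is now empty, so rb succeeds without --force
aws s3 rb s3://example-bucket
```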

  4.       List all the files in your bucket
Now that you have some data in your bucket you might want to see what's in there. To produce a list of the files within a bucket you can run the command below. This also shows you the size of the bucket.
$ aws s3 ls s3://example-bucket --recursive --human-readable --summarize

  5.       Download a file from the bucket 
To download a file from your bucket you run a cp command, which essentially copies the file from the bucket back onto your computer. To do this use the following command:
$ aws s3 cp s3://example-bucket/test.txt test.txt

  6.       Move a file to a bucket 
This command is slightly different from the 2nd command where we copied a file to a bucket. This command will actually move the file so it is no longer in its original place and now only exists in the bucket. Essentially this is a cut and paste as opposed to a copy and paste. To do this run the following:
$ aws s3 mv test.txt s3://example-bucket

  7.       Move a file from one bucket to another 
Similar to the last command, this one will move a file from one bucket to a different one. This is useful if you put a file in the wrong bucket and want to remove it from one while making sure it gets to the right one. This example shows test.txt being taken from example-bucket and placed into example-bucket2. The command is below:
$ aws s3 mv s3://example-bucket/test.txt s3://example-bucket2

  8.       Sync files from local  folder to bucket 
This command copies the contents of a local folder up to your bucket: each time you run it, any new or updated files in the folder are uploaded. This is very handy if you haven't got the time to be uploading files one by one; all you need to do is drop files into the folder and run the sync again. In this example the folder on your machine would be called 'example' and it is synced with example-bucket. The command is as follows:
$ aws s3 sync example s3://example-bucket
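By default sync only adds and updates files; it never removes anything from the destination. If you want the bucket to mirror the local folder exactly, deletions included, the --delete flag does that. A sketch using the same illustrative names:

```shell
# Upload new/changed files AND remove any bucket objects
# that no longer exist in the local 'example' folder
aws s3 sync example s3://example-bucket --delete

# --dryrun previews what would change without touching anything
aws s3 sync example s3://example-bucket --delete --dryrun
```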

  9.       Sync files from bucket to local folder 
This command is essentially the opposite of the previous one: each time you run it, any new or updated files in the bucket are downloaded to a local folder. The command is as follows:
$ aws s3 sync s3://example-bucket/tmp /example
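Sync can also be filtered so only certain files come down. The --exclude and --include flags are applied in order, so excluding everything and then including a pattern pulls just the matching files. A sketch with the same illustrative names:

```shell
# Download only the .csv files from the bucket, skipping everything else
aws s3 sync s3://example-bucket /example --exclude "*" --include "*.csv"
```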

Join us at Channel Live 2022

We are pleased to announce that we will be attending the Channel Live event with PoINT Software and Systems. Channel Live is an ICT trade-specific exhibition, the only one of its kind in the UK.
The event is taking place between the 30th and 31st of March 2022.

Tape Vs Magnet

We will be running a series on using tape for data storage. In our lab we have everything from 9-track, 3420, 3480, 3490, Exabyte and DAT to LTO, and we have customers using large enterprise libraries such as the IBM TS4500, which can hold exabytes of data. We have controllers that can even emulate an ESCON interface, so a mainframe believes it is writing to a legacy tape drive.

The series will cover both current tape and how these can be used with both open source and commercial software. Leave your feedback below and let us know what you would like to see as part of the series.

We write some data to a 9-track 1/2" tape using "cat" and piping to "dd". Once written, we unload the tape, unwind it past the BOT (beginning of tape) marker and then use a magnet to corrupt the data on a small part of the tape.

We then load the tape back into the drive and read it back in. We move the drive past the damaged part of the tape and recover the rest of the data.

The commands used in the video are cat, dd, mt, cmp, dmesg and grep. 
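The write/read/verify flow from the video can be sketched as follows. Since most readers won't have a 9-track drive to hand, this version writes to an ordinary disk file standing in for /dev/st0; on a real drive you would substitute the tape device and use mt to rewind between the write and the read:

```shell
#!/bin/sh
set -e

# Create some test data
printf 'hello from the 9-track demo\n' > original.txt

# Write it out through dd in 512-byte blocks, as you would to a tape device
# (conv=sync pads the final block with NULs to a full 512 bytes)
cat original.txt | dd of=tape.img bs=512 conv=sync 2>/dev/null

# On a real drive you would rewind here:  mt -f /dev/st0 rewind

# Read the data back
dd if=tape.img of=readback.txt bs=512 2>/dev/null

# Verify: compare only the original byte count, since the read-back
# copy carries the NUL padding added by conv=sync
cmp -n "$(wc -c < original.txt)" original.txt readback.txt && echo "data verified"
```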

The tape drive is an M4 Data 9914 9-track tape drive; it's connected to a PC via single-ended SCSI. The drive can also be connected via Pertec and differential SCSI. The PC is running Ubuntu. The drive can read and write at 800, 1600, 3200 and 6250 BPI, which makes it great for data conversions from older tapes.

The drive we use also has a clear reel on the take-up motor so we can easily see the tape moving.

Tape Library On Linux

We demonstrate with an older IBM enterprise tape library, the IBM 3490E, and a Quantum Scalar i3 LTO tape library.
The IBM 3490E tape library is a 10-slot autoloader that is a rebadged Overland L490E and can support both 3480 18-track tape and 36-track 3490 and 3490E tape with compression. The tape drive is connected with single-ended SCSI. The drive can support both differential and single-ended SCSI with a simple internal adjustment.

The second tape library media changer is the newer Quantum Scalar i3 LTO tape library, which uses a SAS interface. The library is equipped with two LTO tape drives and 100 slots, though only 50 are activated. In the demonstration we show how to identify the device names in Linux using dmesg | grep scsi, and then use the mtx command to load and unload tapes and query the inventory of the tape libraries. Once the tapes are loaded we use tar to read and write data and mt to check the status of the drives.

The mtx commands are (note: replace sg2 with the device name identified from dmesg):

$ mtx -f /dev/sg2 status -- reports back slot status
$ mtx -f /dev/sg2 unload 10 -- unloads drive 0 to slot 10

The tar commands are:

$ tar -cvf /dev/st0 *.txt -- create a tar archive on tape device 0 of all txt files
$ tar -tvf /dev/st0 -- list files on the tape
$ tar -xvf /dev/st0 -- restore files back from tape to the current directory
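Putting the pieces together, a typical session against the changer and drive might look like the sketch below. The slot number and device names are illustrative; substitute the ones reported by dmesg and mtx status on your system:

```shell
# Load the cartridge in slot 10 into drive 0 (the first data transfer element)
mtx -f /dev/sg2 load 10 0

# Check the drive is online and at BOT before writing
mt -f /dev/st0 status

# Write all .txt files in the current directory to the tape
tar -cvf /dev/st0 *.txt

# Rewind, then list the archive back to confirm the write
mt -f /dev/st0 rewind
tar -tvf /dev/st0

# When finished, rewind and put the cartridge back in slot 10
mt -f /dev/st0 rewind
mtx -f /dev/sg2 unload 10 0
```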