OmicsPipe on AWS uses a custom StarCluster image, created with Docker, that installs Docker, environment-modules, and EasyBuild on an AWS EC2 cluster. All you have to do is get the Docker image, upload your data, launch the Amazon cluster, and run a single command to analyze all of your data according to published, best-practice methods.
From inside the Docker environment, run the command:
docker run -i -t omicspipe/aws_readymade /bin/bash
Note
If you want to share a file from your local computer with the Docker container, follow the instructions for Docker Folder Sharing, put your desired file within the shared folder, and run the command below (this is recommended for saving your ~/.starcluster/config file from the next step):
docker run -it --volumes-from NameofSharedDataFolder omicspipe/aws_readymade /bin/bash
After running the omicspipe/aws_readymade Docker container, run the command below to edit the StarCluster configuration file:
nano ~/.starcluster/config
Or, if you prefer vim:
vim ~/.starcluster/config
Enter your “AWS ACCESS KEY ID”, “AWS SECRET ACCESS KEY”, and “AWS USER ID”.
If you are not in the AWS us-west region, change the AWS REGION NAME and AWS REGION HOST variables to the appropriate values for your region (see AWS Regions).
Select your desired pre-configured cluster by editing the “DEFAULT_TEMPLATE” variable or creating a custom cluster. The default is a test cluster with 5 c3.large nodes.
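For reference, the relevant fields of ~/.starcluster/config look roughly like this (the values are placeholders; the section and option names follow the standard StarCluster configuration format):
[aws info]
AWS_ACCESS_KEY_ID = <your access key id>
AWS_SECRET_ACCESS_KEY = <your secret access key>
AWS_USER_ID = <your AWS user id>
AWS_REGION_NAME = us-west-1
AWS_REGION_HOST = ec2.us-west-1.amazonaws.com
[global]
DEFAULT_TEMPLATE = <name of the cluster template to use>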
Create your starcluster SSH key by running the command:
starcluster createkey omicspipe -o ~/.ssh/omicspipe.rsa
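The pre-made configuration already references this key; if you change the key name or path, update the key section of ~/.starcluster/config to match, and make sure the KEYNAME option in your cluster template matches the section name. With the command above, the key section would look roughly like this (a sketch based on the standard StarCluster config format):
[key omicspipe]
KEY_LOCATION = ~/.ssh/omicspipe.rsa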
To remove a key from the AWS registry, run the command:
starcluster removekey omicspipe
For more information on editing the StarCluster configuration file, see the StarCluster website.
Create AWS volumes to store the raw data and results of your analyses. From within the Docker environment, run:
starcluster createvolume --name=data -i ami-52112317 -d -s <volume size in GB> us-west-1a
starcluster createvolume --name=results -i ami-52112317 -d -s <volume size in GB> us-west-1a
Create the database volume from the omicspipe_db snapshot using the AWS Console:
- Go to the AWS-Console
- Click on the EC2 option
- Click on Volumes
- Click on “Create Volume”
- Set availability zone
- In Snapshot ID search for “omicspipe_db” and click on the resulting Snapshot ID
- Click Create
- From the Volumes tab, note the “VOLUME_ID” of the volume created from the database snapshot
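Alternatively, the database volume can be created from the snapshot on the command line with the same ec2-api-tools used later in this guide; a rough sketch, assuming your credentials are in $akey/$skey and the Snapshot ID you found is in $snapid (check your ec2-api-tools version for the exact flag names):
ec2-create-volume \
  --aws-access-key $akey \
  --aws-secret-key $skey \
  --snapshot $snapid \
  --region us-west-1 \
  --availability-zone us-west-1a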
Edit your StarCluster configuration file to add your volume IDs. Run the command below and edit the VOLUME_ID variables for data, results, and database:
nano ~/.starcluster/config
Edit the fields below:
[volume results]
VOLUME_ID =
MOUNT_PATH = /data/results
[volume data]
VOLUME_ID =
MOUNT_PATH = /data/data
[volume database]
VOLUME_ID =
MOUNT_PATH = /data/database
Save your StarCluster configuration file to ~/.starcluster/config
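For the volumes to be attached when the cluster starts, the cluster template you use must also reference them. The pre-configured templates already do this; if you define a custom template, the volumes are listed with the standard StarCluster VOLUMES option, roughly like this:
[cluster <your template name>]
VOLUMES = data, results, database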
From the Docker container, run the command below to start a new cluster with the name “mypipe”:
starcluster start mypipe
Optional but Recommended: To load balance the cluster, type the command below:
starcluster loadbalance mypipe
(see the load balance documentation for configuration options; note: make sure to keep at least one worker node attached to the cluster)
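For example, to let the load balancer scale the cluster between 1 and 10 nodes, you can pass StarCluster's min/max node options (a sketch):
starcluster loadbalance -n 1 -m 10 mypipe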
SSH into the cluster by running the command below:
starcluster sshmaster mypipe
Now that you are in your cluster, you can use it like any other cluster. Before running OmicsPipe on your own data, you will want to upload your data (you can skip this step if you are running the test data). There are several options for uploading your data:
Upload data from your local machine or cluster using StarCluster put:
starcluster put mypipe <myfile> /data/raw
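Transfers also work in the other direction with StarCluster get, which is handy for pulling results back to your local machine after a run (a sketch using the results mount point from this guide; replace <myresults> with your file or folder):
starcluster get mypipe /data/results/<myresults> .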
Retrieve a file from a remote server over SSH using scp:
scp username@hostname:<remotefile> <localfile>
Get a file from an S3 bucket with S3cmd:
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Use Webmin to transfer files from your local system to the cluster (recommended for small files only, like parameter files).
- In the AWS Management Console go to “Security Groups”
- Select the “StarCluster-0_95_5” group associated with your cluster’s name
- On the Inbound tab click on “Edit”
- Click on “Add Rule” and a new “Custom TCP Rule” will appear. Under “Port Range” enter “10000” and under “Source” select “My IP”
- Hit “Save”
- Select Instances in the AWS Management Console and note the “Public IP” of your instance
- In a web browser, go to https://<Public IP>:10000 and log in when prompted with user: root, password: sulab
- This will take a few seconds to load, and once it does, you can navigate your cluster’s file structure with the tabs on the left
- To upload a file from your local file system, click “Upload” and specify /data/data as the destination directory.
Both the GATK and MuTect software are used by OmicsPipe, but they require licenses from The Broad Institute and cannot be distributed with the OmicsPipe software. GATK and MuTect are free to download after accepting the license agreement.
To install GATK:
Upload the GenomeAnalysisTK.jar file to the directory /root/.local/easybuild/software/gatk/3.2-2 using either Webmin or StarCluster put
Make the jar file executable by running the command:
chmod +x /root/.local/easybuild/software/gatk/3.2-2/GenomeAnalysisTK.jar
To install MuTect:
Upload the muTect-1.1.4.jar file to the directory /root/.local/easybuild/software/mutect/1.1.4 using either Webmin or StarCluster put
Make the jar file executable by running the command:
chmod +x /root/.local/easybuild/software/mutect/1.1.4/muTect-1.1.4.jar
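After uploading the two jar files, a quick sanity check is to ask each one for its usage text from the cluster head node; a minimal sketch, assuming the paths above and that Java is on the PATH:
java -jar /root/.local/easybuild/software/gatk/3.2-2/GenomeAnalysisTK.jar --help
java -jar /root/.local/easybuild/software/mutect/1.1.4/muTect-1.1.4.jar --help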
Adding software that OmicsPipe was not built with may require a little more configuration, but OmicsPipe is designed as a foundation to which new software can be added. New software can be added in any manner the user prefers, but to follow the structure used to build OmicsPipe, please refer to the “custombuild” scripts.
Important
Install Docker (docker.io) following the instructions at Get-Docker
Run the command:
docker build -t <Repository Name> https://bitbucket.org/sulab/omics_pipe/downloads/Dockerfile_AWS_prebuiltAMI_public
This will store the Docker cluster image under the repository name of your choice.
There is also an AWS_custombuild Dockerfile, which can be used to build an Amazon Machine Image from scratch.
To build a single large data volume with LVM, create x new volumes from within StarCluster by running:
nvolumes=2 #number of volumes
vsize=1000 #in gb
instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
akey=<AWS KEY>
skey=<AWS SECRET KEY>
region=us-west-1
zone=us-west-1a
for x in $(seq 1 $nvolumes)
do
ec2-create-volume \
--aws-access-key $akey \
--aws-secret-key $skey \
--size $vsize \
--region $region \
--availability-zone $zone
done > /tmp/vols.txt
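Newly created volumes take a moment to reach the “available” state; before attaching them, you can check their status with ec2-describe-volumes (a sketch using the same credentials and region as above):
for vol in $(awk '{print $2}' /tmp/vols.txt)
do
ec2-describe-volumes -O $akey -W $skey --region $region $vol
done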
Attach the volumes to the head node:
i=0
for vol in $(awk '{print $2}' /tmp/vols.txt)
do
i=$(( i + 1 ))
ec2-attach-volume $vol \
-O $akey \
-W $skey \
-i $instance \
--region $region \
-d /dev/sdh${i}
done > /tmp/attach.txt
Mark the EBS volumes as physical volumes:
for i in $(find /dev/xvdh*)
do
pvcreate $i
done
Create a volume group:
vgcreate vg /dev/xvdh*
Create a logical volume:
lvcreate -l100%VG -n lv vg
Create the file system:
mkfs -t xfs /dev/vg/lv
Create the mount point and mount the file system:
mkdir /data/data_large
mount /dev/vg/lv /data/data_large
Add new mountpoint to /etc/exports:
for x in $(qconf -sh | tail -n +2)
do
echo '/data/data_large' ${x}'(async,no_root_squash,no_subtree_check,rw)' >> /etc/exports
done
Reload /etc/exports:
exportfs -a
Mount the new folder on all nodes:
for x in $(qconf -sh | tail -n +2)
do
ssh $x 'mkdir /data/data_large'
ssh $x 'mount -t nfs master:/data/data_large /data/data_large'
done
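To confirm that every node sees the new share, you can check the mount from each host (a quick verification sketch using the same host list):
for x in $(qconf -sh | tail -n +2)
do
ssh $x 'df -h /data/data_large'
done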
How to increase volume size?
Create and attach additional EBS volumes as described in the create and attach steps above, then create the additional physical volumes:
for i in $(cat /tmp/attach.txt | cut -f 4 | sed 's/[^0-9]*//g')
do
pvcreate /dev/xvdh${i}
done
Add new volumes to the volume group:
for i in $(cat /tmp/attach.txt | cut -f 4 | sed 's/[^0-9]*//g')
do
vgextend vg /dev/xvdh${i}
done
Extend the logical volume:
lvextend -l100%VG /dev/mapper/vg-lv
Grow the file system to the new size:
xfs_growfs /data/data_large
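You can then verify that the volume group, logical volume, and file system all report the new size with standard LVM and coreutils commands:
vgs
lvs
df -h /data/data_large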
Alternatively, you can build the large data volume with RAID 0 instead of LVM. Within StarCluster, create x new volumes by running:
nvolumes=2 #number of volumes
vsize=1000 #in gb
instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
akey=<AWS KEY>
skey=<AWS SECRET KEY>
region=us-west-1
zone=us-west-1a
for x in $(seq 1 $nvolumes)
do
ec2-create-volume \
--aws-access-key $akey \
--aws-secret-key $skey \
--size $vsize \
--region $region \
--availability-zone $zone
done > /tmp/vols.txt
Attach the volumes to the head node:
i=0
for vol in $(awk '{print $2}' /tmp/vols.txt)
do
i=$(( i + 1 ))
ec2-attach-volume $vol \
-O $akey \
-W $skey \
-i $instance \
--region $region \
-d /dev/sdh${i}
done
Create a RAID 0 volume:
mdadm --create -l 0 -n $nvolumes /dev/md0 /dev/xvdh*
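Before creating the file system, you can confirm that the array assembled correctly with standard mdadm checks:
cat /proc/mdstat
mdadm --detail /dev/md0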
Create a file system:
mkfs -t ext4 /dev/md0
Create mount point and mount the device:
mkdir /data/data_large
mount /dev/md0 /data/data_large
Add new mountpoint to /etc/exports:
for x in $(qconf -sh | tail -n +2)
do
echo '/data/data_large' ${x}'(async,no_root_squash,no_subtree_check,rw)' >> /etc/exports
done
Reload /etc/exports:
exportfs -a
Mount the new folder on all nodes:
for x in $(qconf -sh | tail -n +2)
do
ssh $x 'mkdir /data/data_large'
ssh $x 'mount -t nfs master:/data/data_large /data/data_large'
done
To back up your data to Amazon S3, first configure s3cmd by running:
s3cmd --configure
and follow the instructions
Create an S3 bucket:
s3cmd mb s3://backup
Copy data to the bucket:
s3cmd put -r /data/results s3://backup
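Restoring works the same way in reverse; a sketch, assuming the bucket and prefix created above:
s3cmd get -r s3://backup/results /data/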
More info on s3cmd here: https://github.com/s3tools/s3cmd