
Cloud Computing at Amazon’s EC2

Joe Steele
jrsteele@unomaha.edu

Grid Computing

Shared resources – many computer clusters transferring data and running jobs.

Geographically distributed, with cross-grid collaboration.

The idea is analogous to the electric power network (grid), where power generators are distributed but users access electric power without worrying about the source of the energy or its location.

LHC Computing Grid (LCG)

Cloud Computing

What if I don’t have my own cluster? Cloud computing refers to a cluster that invites users to send jobs (SaaS – Software as a Service): computation, software, data access, and storage services that do not require user knowledge of the location or configuration of the system.

The term comes from the cloud drawing once used to represent the telephone network, and later the internet.

Cloud Computing

Private companies run large data centers. When operational costs are considered, a data center of 50k servers is 5 to 7 times cheaper per CPU than one of 1k servers.

Amazon:
• $0.085/CPU-hour
• No minimum, no maximum
• No contract
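
As a rough illustration of the pay-as-you-go model, ten instances running for 24 hours at the quoted rate would cost 10 × 24 × $0.085 ≈ $20.40, before storage and data-transfer charges.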

Amazon EC2

aws.amazon.com

A computing cluster – create an account and provide a credit card. Let Amazon take care of the hardware.

Cloud BioLinux

JCVI (the J. Craig Venter Institute) created a cloud version of the NERC BioLinux VM: an Ubuntu machine with over 100 NEBC software packages. The image is stored at EC2 and is available to be copied, at no charge, by EC2 users.

http://aws.amazon.com

Create a new account

Enter your information

Sign up for an EC2 account

Click on “Sign up for Amazon EC2”

EC2 Account

• Signing up for EC2 automatically signs you up for Amazon Simple Storage Service and Amazon Virtual Private Cloud.
• Requires credit card information.
• No charges until you start using the services.
• Amazon will email you your Access Identifiers and instructions for your first log-in.

Click on “AWS Management Console”

Click the EC2 Tab

Launch an Instance

I recommend BioLinux

Click “Select”

Pricing

• Amazon has a variety of VM sizes available – pricing is at: http://aws.amazon.com/ec2/pricing/

• You are charged for CPU usage, for data storage, and for data transferred to or from Amazon. Charges continue until a VM is “Terminated”.

• You can set up a small test VM for free – select “Micro” for the size.

Kernel defaults are fine

Create a Key Pair

Create security group

Launch
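
The slides use the console wizard, but the same launch can also be scripted. A minimal sketch using the AWS command-line interface (a tool newer than these slides); the AMI ID is a placeholder – substitute the BioLinux image’s ID shown in the console:

> aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t1.micro --key-name key_pair_name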

Machine info

“Terminate” to end charges
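
Likewise, a running VM can be terminated from the command line; a sketch with a placeholder instance ID (copy the real one from the machine-info panel):

> aws ec2 terminate-instances --instance-ids i-xxxxxxxx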

ssh to the machine

A window opens, telling you how to connect to your new VM, e.g.:

ssh -i key_pair_name.pem root@ec2-76-202-01-919.compute-1.amazonaws.com

However, for BioLinux, log in as the ubuntu user instead:

ssh -i key_pair_name.pem ubuntu@ec2-76-202-01-919.compute-1.amazonaws.com
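
If ssh rejects the key with a “permissions are too open” warning, restrict the key file first:

> chmod 400 key_pair_name.pem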

NX

Use NX for the graphical display (built in to BioLinux already). Open source; it can be found at http://www.nomachine.com/

You must ssh into the VM FIRST, using the key pair, and create a user account for the NX login:

> adduser <username>
> groups
> usermod -G <grp1>,<grp2>,ssh <username>
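
Note that usermod -G replaces the user’s supplementary groups with exactly the list given, so include every group the user should keep; usermod -a -G appends instead. A worked example with a hypothetical user and group names:

> sudo adduser jane
> groups jane
> sudo usermod -a -G admin,ssh jane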

Start NX

“Configure”

BioLinux over NX

Data Stored at Amazon

There are large datasets stored at Amazon, available for use – mostly free of charge. You are charged for any data you copy.

Visit http://aws.amazon.com/datasets to search through them.

Datasets

Human DNA sequences:
• 1000 Genomes Project (7,300 GB)
• Ensembl Annotated Human Genome - FASTA (115 GB)
• Ensembl Annotated Human Genome - MySQL (200 GB)
• GenBank (200 GB)
• Human Liver Cohort (Sage Bionetworks) (0.6 GB)
• Illumina - Jay Flatley’s Human Genome Data Set (350 GB)
• YRI Trio Data - complete genome sequences for three individuals (700 GB)

Other (might include some human data):
• Ensembl - FASTA DB (100 GB)
• Influenza Virus (including Swine Flu) - from NCBI (1 GB)
• UniGene - from NCBI (10 GB)
• PubChem Library - from NCBI (230 GB)

Public Snapshots

Select “Volumes”

Create a Volume

Instance Information

Attach it to your Instance
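
These console steps have command-line equivalents as well; a sketch, assuming the AWS CLI, with placeholder IDs (snap-xxxxxxxx from the public snapshot listing, vol-/i-xxxxxxxx from your own account). The volume must be created in the same availability zone as the instance:

> aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone us-east-1a
> aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf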

Mount the Volume

From your VM:

> sudo mkdir /mnt/datasets
> sudo mount -t ext3 /dev/sdf /mnt/datasets

(A blank volume would first need a filesystem: sudo mkfs -t ext3 /dev/sdf. Do not run mkfs on a volume created from a public snapshot – it would erase the data. On some kernels the attached device appears as /dev/xvdf rather than /dev/sdf.)

200 GB of GenBank data are now in /mnt/datasets.
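
A quick check that the mount worked:

> df -h /mnt/datasets
> ls /mnt/datasets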