AWS August Webinar Series - S3 Deep Dive

Transcript
Page 1: AWS August Webinar Series - S3 Deep Dive

© 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved.

Guy Farber

8/20/2015

Amazon S3: Deep Dive and Best Practices

Page 2: AWS August Webinar Series - S3 Deep Dive

Amazon S3: Year in Review

Advanced Capabilities 2014-2015
• Server-side encryption with KMS
• Lifecycle management for versioning
• Cross-region replication
• VPC private endpoints

New for July 2015
• Amazon S3 delete event notifications
• CloudWatch metrics for S3 storage
• Bucket limit increase

Page 3: AWS August Webinar Series - S3 Deep Dive

Amazon S3 server-side encryption

Page 4: AWS August Webinar Series - S3 Deep Dive

S3 Server-side encryption options

SSE with Amazon S3 managed keys (SSE-S3): “check-the-box” encryption of your data at rest

SSE with customer-provided keys (SSE-C): you manage your encryption keys and provide them for PUTs and GETs

SSE with AWS Key Management Service managed keys (SSE-KMS): keys managed centrally in AWS KMS, with permissions and auditing of usage
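
For reference, a minimal boto3 sketch of the three options; the bucket name, object keys, and the KMS key alias are placeholders, not values from the deck:

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "examplebucket"  # placeholder bucket name

    # SSE-S3: Amazon S3 manages the keys ("check the box" per request)
    s3.put_object(Bucket=bucket, Key="report-sse-s3.txt", Body=b"data",
                  ServerSideEncryption="AES256")

    # SSE-KMS: keys managed centrally in AWS KMS (the alias below is hypothetical)
    s3.put_object(Bucket=bucket, Key="report-sse-kms.txt", Body=b"data",
                  ServerSideEncryption="aws:kms",
                  SSEKMSKeyId="alias/my-application-key")

    # SSE-C: you provide the key on every PUT and GET; S3 does not store it
    customer_key = os.urandom(32)  # 256-bit key that you manage yourself
    s3.put_object(Bucket=bucket, Key="report-sse-c.txt", Body=b"data",
                  SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key)
    s3.get_object(Bucket=bucket, Key="report-sse-c.txt",
                  SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key)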

Page 5: AWS August Webinar Series - S3 Deep Dive

SSE using KMS

[Diagram: request flow between Amazon S3 and AWS KMS, governed by the key policy]

Keys managed centrally in AWS KMS with permissions and auditing of usage

Page 6: AWS August Webinar Series - S3 Deep Dive

Versioning + lifecycle policies

Page 7: AWS August Webinar Series - S3 Deep Dive

Preserve, retrieve, and restore every version of every object stored in your bucket

S3 automatically adds new versions and preserves deleted objects with delete markers

Easily control the number of versions kept by using lifecycle expiration policies

Easy to turn on in the AWS Management Console

[Diagram: with versioning enabled, a PUT of Key = photo.gif adds a new version (ID = 121212) while the prior version (ID = 111111) is preserved]

S3 versioning
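
As a rough illustration (the bucket name is a placeholder), enabling versioning and inspecting versions with boto3 looks roughly like this:

    import boto3

    s3 = boto3.client("s3")
    bucket = "examplebucket"  # placeholder

    # Turn versioning on for the bucket (also a single switch in the console)
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

    # Each PUT of the same key now creates a new version instead of overwriting
    s3.put_object(Bucket=bucket, Key="photo.gif", Body=b"v1")
    s3.put_object(Bucket=bucket, Key="photo.gif", Body=b"v2")

    # List all versions (and delete markers) kept for the key
    versions = s3.list_object_versions(Bucket=bucket, Prefix="photo.gif")
    for v in versions.get("Versions", []):
        print(v["Key"], v["VersionId"], v["IsLatest"])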

Page 8: AWS August Webinar Series - S3 Deep Dive

Use Amazon Glacier for lowest-cost, durable cold storage of archival data

Use Amazon S3 for reliable, durable primary storage

Use Amazon S3 Reduced Redundancy Storage (RRS) for secondary backups at a lower cost

Optimize your storage spending by tiering on AWS

Page 9: AWS August Webinar Series - S3 Deep Dive

Key prefix “logs/”

Transition objects to Glacier 30 days after creation

Delete 365 days after creation date

<LifecycleConfiguration>
  <Rule>
    <ID>archive-in-30-days</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

S3 lifecycle policies
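
The same rule can be applied programmatically; a minimal boto3 sketch (bucket name is a placeholder) might look like:

    import boto3

    s3 = boto3.client("s3")

    # Equivalent of the XML rule above: move "logs/" objects to Glacier after
    # 30 days and delete them after 365 days
    s3.put_bucket_lifecycle_configuration(
        Bucket="examplebucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-in-30-days",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )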

Page 10: AWS August Webinar Series - S3 Deep Dive

Amazon S3 cross-region replication

Page 11: AWS August Webinar Series - S3 Deep Dive

Source (Virginia) → Destination (Oregon)

• Only replicates new PUTs. Once replication is configured, all new uploads into the source bucket will be replicated
• Entire bucket or prefix based
• 1:1 replication between any 2 regions
• Versioning required

Use cases
• Compliance - store data hundreds of miles apart
• Lower latency - distribute data to regional customers
• Security - create remote replicas managed by separate AWS accounts

S3 cross-region replication: automated, fast, and reliable asynchronous replication of data across AWS regions

Page 12: AWS August Webinar Series - S3 Deep Dive

Details on Cross-Region Replication
• Versioning - you need to enable S3 versioning for both the source and destination buckets.
• Lifecycle Rules - you can choose to use lifecycle rules on the destination bucket to manage older versions by deleting them or migrating them to Amazon Glacier.
• Determining Replication Status - use the HEAD operation on a source object to determine its replication status.
• Region-to-Region - replication always takes place between a pair of AWS regions. You cannot use this feature to replicate content between two buckets in the same region.
• New Objects - replicates new objects and changes to existing objects. Use S3 COPY to replicate existing objects.
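
A hedged boto3 sketch of setting replication up; the bucket names and IAM role ARN are placeholders, and versioning must be enabled on both buckets first:

    import boto3

    s3 = boto3.client("s3")

    # Versioning is required on both the source and the destination bucket
    for b in ("source-bucket-virginia", "destination-bucket-oregon"):  # placeholders
        s3.put_bucket_versioning(Bucket=b,
                                 VersioningConfiguration={"Status": "Enabled"})

    # Replicate everything under "logs/" from the source to the destination
    s3.put_bucket_replication(
        Bucket="source-bucket-virginia",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/crr-role",  # placeholder role
            "Rules": [
                {
                    "ID": "replicate-logs",
                    "Prefix": "logs/",
                    "Status": "Enabled",
                    "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-oregon"},
                }
            ],
        },
    )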

Page 13: AWS August Webinar Series - S3 Deep Dive

Amazon S3 VPC endpoints

Page 14: AWS August Webinar Series - S3 Deep Dive

S3 virtual private endpoint (VPCE)

Prior to S3 VPCE: access to S3 required a public IP on EC2 instances and an Internet gateway (IGW), or a private IP on EC2 instances with NAT.

Using S3 VPCE: access S3 through the VPC endpoint without using NAT instances or gateways, for increased security.

Page 15: AWS August Webinar Series - S3 Deep Dive

Creating and using VPCE

Open the VPC Dashboard and select the desired region.

Locate the Endpoints item in the navigation bar and click on it.

Page 16: AWS August Webinar Series - S3 Deep Dive

Creating and using VPCE

If you have already created some VPC Endpoints, they will appear in the list:

Page 17: AWS August Webinar Series - S3 Deep Dive

Creating and using VPCE

Now click on Create Endpoint, choose the desired VPC, and customize the access policy (if you want):

Page 18: AWS August Webinar Series - S3 Deep Dive

Creating and using VPCE

Now choose the VPC subnets that will be allowed to access the endpoint:
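
The same endpoint can also be created outside the console; a minimal boto3 sketch, where the VPC ID and route table ID are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for S3: traffic to S3 is routed through the endpoint
    # via the listed route tables instead of a NAT or Internet gateway
    resp = ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",            # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0def5678"],  # placeholder route table ID
    )
    print(resp["VpcEndpoint"]["VpcEndpointId"])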

Page 19: AWS August Webinar Series - S3 Deep Dive

Security: allow a specific VPC Endpoint access to my S3 bucket and vice versa

{
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Stmt1415115903450",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::my_secure_bucket",
        "arn:aws:s3:::my_secure_bucket/*"
      ],
      "Condition": {
        "ArnNotEquals": {
          "aws:sourceVpce": "arn:aws:ec2:us-east-1:account:vpc/vpce-123abc"
        }
      },
      "Principal": "*"
    }
  ]
}
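
Applying such a policy from code is a one-liner; a small sketch, assuming the policy JSON above has been saved to a local file (the file name is illustrative):

    import boto3

    s3 = boto3.client("s3")

    # Attach the VPCE-restricting policy to the bucket
    with open("vpce-only-policy.json") as f:
        s3.put_bucket_policy(Bucket="my_secure_bucket", Policy=f.read())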

Page 20: AWS August Webinar Series - S3 Deep Dive

Amazon S3 event notifications

Page 21: AWS August Webinar Series - S3 Deep Dive

Amazon S3 event notifications

Delivers notifications to Amazon SNS, Amazon SQS, or AWS Lambda when events occur in Amazon S3

[Diagram: S3 events deliver notifications to an SNS topic, an SQS queue, or a Lambda function]

Support for notification when objects are created via PUT, POST, Copy, or Multipart Upload.

Support for notification when objects are deleted, as well as with filtering on prefixes and suffixes for all types of notifications.

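
As one hedged example, wiring object-created events (filtered by prefix and suffix) to a Lambda function with boto3 could look like this; the bucket name and function ARN are placeholders, and the bucket must already be allowed to invoke the function:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="examplebucket",  # placeholder
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "Id": "thumbnail-on-upload",
                    # Placeholder function ARN
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:MakeThumbnail",
                    "Events": ["s3:ObjectCreated:*"],
                    # Only notify for .jpg keys under images/
                    "Filter": {
                        "Key": {
                            "FilterRules": [
                                {"Name": "prefix", "Value": "images/"},
                                {"Name": "suffix", "Value": ".jpg"},
                            ]
                        }
                    },
                }
            ]
        },
    )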

Page 22: AWS August Webinar Series - S3 Deep Dive

What’s in it for you?

Integration - A new surface on the Amazon S3 “building block” for event-based computing

Speed - typical time to send notifications is less than a second

Simplicity - Avoids proxies or polling to detect changes

[Diagram: event notifications replace periodic List/Diff polling or a proxy in front of S3]

Page 23: AWS August Webinar Series - S3 Deep Dive

Use cases

Transcoding media files

Updating data stores

Processing data/log files

Customers have told us about powerful applications …

Object change alerts

… and we look forward to seeing what you create.

Page 24: AWS August Webinar Series - S3 Deep Dive

S3 storage metrics

Page 25: AWS August Webinar Series - S3 Deep Dive

S3 Storage Metrics

Monitor and set alarms on Amazon S3 storage usage through CloudWatch

Supported metrics include: total bytes for Standard storage, total bytes for Reduced Redundancy Storage (RRS), and total number of objects for a given S3 bucket.
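
For example, the daily BucketSizeBytes metric for Standard storage can be read back with boto3 (the bucket name is a placeholder):

    import boto3
    from datetime import datetime, timedelta, timezone

    cw = boto3.client("cloudwatch")

    # S3 storage metrics are published to CloudWatch roughly once per day
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": "examplebucket"},   # placeholder
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        StartTime=datetime.now(timezone.utc) - timedelta(days=3),
        EndTime=datetime.now(timezone.utc),
        Period=86400,
        Statistics=["Average"],
    )
    for point in resp["Datapoints"]:
        print(point["Timestamp"], point["Average"], "bytes")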

Page 26: AWS August Webinar Series - S3 Deep Dive

Bucket limit increase

Page 27: AWS August Webinar Series - S3 Deep Dive

Bucket limit increase

Up to 100 buckets by default

Prefixes (virtual directories) can sometimes be used instead of buckets by assigning a specific prefix per user or project:
• examplebucket/UserStorage/GuyFarber/
• examplebucket/UserStorage/OmairGillani/
• Prefix support for bucket-level policies such as lifecycle and cross-region replication

Some use cases require dedicated buckets:
• Region-specific application deployments
• Charge-backs
• Lifecycle rule per user
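
A small sketch of the prefix-per-user pattern above (bucket and user names are taken from the slide, the object key is illustrative):

    import boto3

    s3 = boto3.client("s3")
    bucket = "examplebucket"

    # Write and list objects under a per-user "virtual directory"
    user_prefix = "UserStorage/GuyFarber/"
    s3.put_object(Bucket=bucket, Key=user_prefix + "notes.txt", Body=b"hello")

    listing = s3.list_objects_v2(Bucket=bucket, Prefix=user_prefix)
    for obj in listing.get("Contents", []):
        print(obj["Key"], obj["Size"])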

Page 28: AWS August Webinar Series - S3 Deep Dive

Bucket limit increase

You can now increase your Amazon S3 bucket limit per AWS account

Open a case to request additional buckets by visiting AWS Support Center

Page 29: AWS August Webinar Series - S3 Deep Dive

Read-after-write consistency for the AWS US Standard region

Read-after-write consistency allows you to retrieve objects immediately after creation in S3.

There is now a consistent consistency model across all AWS regions.

Previously: buckets in the US Standard Region provided eventual consistency for newly created objects
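
A minimal sketch of what read-after-write consistency for new objects means in practice (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    bucket = "examplebucket"  # placeholder

    # A GET issued immediately after the first PUT of a new key returns the object
    s3.put_object(Bucket=bucket, Key="new-object.txt", Body=b"fresh data")
    body = s3.get_object(Bucket=bucket, Key="new-object.txt")["Body"].read()
    print(body)  # b'fresh data'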

Page 30: AWS August Webinar Series - S3 Deep Dive

Q&A

Learn more at: http://aws.amazon.com/s3

[email protected]

