Friday, February 27, 2015

App Stacks with NetflixOSS Asgard, Eureka, and Ribbon

At a recent user meeting on NetflixOSS, I was asked how to share a single cloud environment (account) with various copies of an overall application stack.  I wondered the same thing when I was working at IBM using NetflixOSS.

This is a quick note to explain how to do this.

You may have noticed that Asgard allows you to specify a "stack" when creating a cluster.  At IBM we never specified the stack, or if we did, we never used it at runtime.  Inside of Netflix, this "stack" ends up propagating down to the instance as part of the environment, specifically as an environment variable called "NETFLIX_STACK".  You can do this by customizing the user data injected by Asgard or any other way that you pass down context.  You could also parse the auto scaling group name on the instance by querying the metadata URL.  You can also make this any variable, say just "STACK".
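As an illustration of the metadata-parsing option, a small script could recover the stack from a Frigga-style auto scaling group name of the form app-stack-vNNN.  This is my own sketch, not Netflix's actual tooling; the function name and regex are assumptions:

```python
import os
import re

def parse_asg_name(asg_name):
    """Split an Asgard/Frigga-style ASG name (app-stack-vNNN) into app and stack."""
    base = re.sub(r"-v\d+$", "", asg_name)   # drop the trailing push version
    parts = base.split("-")
    app = parts[0]
    stack = parts[1] if len(parts) > 1 else ""
    return app, stack

app, stack = parse_asg_name("acmeair_webapp-aspyker-v001")
os.environ["NETFLIX_STACK"] = stack          # expose it the way user data would
```

On an instance whose ASG name carries no stack segment, the helper simply yields an empty stack, which is what you'd want for a "stackless" group.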

So for an instance of my acmeair webapp application, the application name was ACMEAIR_WEBAPP and the stacks were "aspyker" (for my development) and "smoke" (for a CI smoke test I'm building).

If I go onto an instance in aspyker:

export NETFLIX_STACK="aspyker"

And on my smoke test instances:

export NETFLIX_STACK="smoke"

Now, how do we get the IPC layer to pay attention to this?  There is a cryptic note in the Ribbon Javadoc that tells you the key:

"Ribbon supports a comma separated set of logical addresses for a Ribbon Client. Typical/default implementation uses the list of servers obtained from the first of the comma separated list and progresses down the list only when the prior vipAddress contains no servers. e.g. vipAddress settings ${foo}.bar:${port},${foobar}:80,localhost:8080 The above list will be resolved by this class as apple.bar:80,limebar:80,localhost:8080 provided that the Configuration library resolves the property foo=apple,port=80 and foobar=limebar"

Armed with this, you can start to attack Eureka registration of your called service.  In your configuration, you need to set "eureka.vipAddress".  I typically set this to something like "ACMEAIR_AUTH_SERVICE".  That means anyone looking up the "ACMEAIR_AUTH_SERVICE" VIP address in Eureka will see your server.

If instead you set it to:
ACMEAIR_AUTH_SERVICE${NETFLIX_STACK}
it will register into Eureka as "ACMEAIR_AUTH_SERVICEaspyker" or "ACMEAIR_AUTH_SERVICEsmoke" depending on the stack.  Both will end up under the same app in Eureka, but they are separately queryable via VIP address, which is the default way Ribbon queries Eureka.

Finally, you need to consume the service.  In Ribbon, you set DeploymentContextBasedVipAddresses.  Previously, I set this to "ACMEAIR_AUTH_SERVICE", which meant it would only find services registered under that VIP address.

If instead, with my consumer being in a stack, I set the property as follows:

acmeair-auth-service-client.ribbon.DeploymentContextBasedVipAddresses=ACMEAIR_AUTH_SERVICE${NETFLIX_STACK}

A consumer in the "aspyker" stack will look up
ACMEAIR_AUTH_SERVICEaspyker
and a consumer in the "smoke" stack will look up
ACMEAIR_AUTH_SERVICEsmoke

You can take this a step further and do:

ACMEAIR_AUTH_SERVICE${NETFLIX_STACK},ACMEAIR_AUTH_SERVICE

According to the documentation (I haven't personally tested this), this will first use any instances from the stack and, if none exist, fall back to the instances in a "stackless" group.
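That fallback behavior can be sketched in a few lines.  The resolve_vip helper and the in-memory registry dict below are hypothetical stand-ins for what Ribbon and Eureka actually do:

```python
import os
import re

def resolve_vip(template, registry):
    """Substitute ${VAR}s from the environment, then return the servers for the
    first vipAddress in the comma-separated list that has any instances."""
    resolved = re.sub(r"\$\{(\w+)\}",
                      lambda m: os.environ.get(m.group(1), ""), template)
    for vip in resolved.split(","):
        if registry.get(vip):
            return vip, registry[vip]
    return None, []

os.environ["NETFLIX_STACK"] = "aspyker"
registry = {"ACMEAIR_AUTH_SERVICE": ["10.0.0.5:7001"]}  # stackless servers only
vip, servers = resolve_vip(
    "ACMEAIR_AUTH_SERVICE${NETFLIX_STACK},ACMEAIR_AUTH_SERVICE", registry)
# no "aspyker" servers registered, so it falls through to the stackless group
```

Registering a stack-specific server (e.g. under "ACMEAIR_AUTH_SERVICEaspyker") would make the first entry in the list win instead.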

Let me know if this works and helps you.  We'll find a way to get this back into the official docs eventually.  I just wanted to get the info out here given the questions.


Thursday, December 18, 2014

Kafka network utilization (in vs. out)

One thing that was confusing me as I looked into the metrics in my Kafka performance testing (as shown in any of the graphs in my previous blog post) was the approximately 2x factor of input network bytes vs. output network bytes. Given I was doing replication, shouldn't the number of bytes be the same since I have to exactly replicate the messages to a following broker?

For a simple example, let's assume one producer sending at 300 MB/sec, three brokers, three partitions, a replication factor of two, and no consumers.

A single broker (bid = 0) receives 100 MB/sec from the producer (eth0 in) for a partition (say pid = 0), as it is the leader for one of the partitions.

The single broker then sends (eth0 out) 100 MB/sec to the broker that is the in sync replica (say bid = 1) for all of that traffic.  To be accurate, it is actually the in sync replica (bid = 1) that pulls.

Next is what I was missing in quick thinking about this problem ...

The single broker (bid = 0) also gets sent (actually pulls) 100 MB/sec from another broker's (say bid = 2) partition (say pid = 2), for which it is not the leader but an in sync replica.

This means the traffic load for this broker is:
  • producer to broker (bid = 0) for partition (pid = 0)
    • eth0 in = 100 MB/sec
  • broker (bid = 0) for partition (pid = 0) to in sync replica broker (bid = 1)
    • eth0 out = 100 MB/sec
  • broker (bid = 2) for partition (pid = 2) to in sync replica broker (bid = 0)
    • eth0 in = 100 MB/sec
  • total:
    • eth0 in = 200 MB/sec
    • eth0 out = 100 MB/sec
Notice I originally said no consumers.  If I added consumers, they would start to add to the eth0 out figure and possibly balance in vs. out, but only if they were consuming at the same rate as the producers.  If there were more consumers than producers, the consumers could easily overrun the input rate, which would be common for streams that are heavily fanned out to different consumer groups.
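To make that concrete, here is a back-of-the-envelope helper (my own sketch, not Kafka code) for the eth0 out side once consumer groups are added:

```python
def broker_eth0_out(tr, nb, rf, fanout):
    """Per-broker eth0 out (MB/sec): replication to followers plus consumer
    reads from the leader, where fanout = number of consumer groups each
    reading the full stream.  tr = total producer rate, nb = brokers,
    rf = replication factor."""
    tr_p_b = tr / nb
    return tr_p_b * (rf - 1) + tr_p_b * fanout

broker_eth0_out(300, 3, 2, 0)   # 100.0 -- the no-consumer example above
broker_eth0_out(300, 3, 2, 1)   # 200.0 -- one consumer group balances in vs. out
```

With fanout of exactly 1, out equals the 200 MB/sec in from the example; any higher fanout pushes out past in.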

Now, let's consider what happens when we make the configuration more interesting.  Specifically, we'd want to consider a larger number of brokers and a larger number of partitions and a larger replication factor.

Let's consider the case of 300 brokers with 3000 partitions and the same replication factor of 2.  Let's imagine a producer group that could send at 3000 MB/sec.  That means every partition will receive 1 MB/sec (eth0 in).  Every broker will be the leader for 10 of these partitions, so broker 0 would receive 10 MB/sec of the producer traffic.  It would need to send that traffic to 10 other ISRs, sending 10 MB/sec of replication "out" traffic.  It would be an ISR non-leader for 10 partitions, so it would receive 10 MB/sec of replication "in" traffic.  That would mean 20 MB/sec in and 10 MB/sec out.  Again, a 2x factor.


Let's now consider 300 brokers with 3000 partitions and a replication factor of 6.  Again imagine a producer group that could send at 3000 MB/sec.  That means every partition will receive 1 MB/sec (eth0 in).  Every broker will be the leader for 10 of these partitions, so broker 0 would receive 10 MB/sec of the producer traffic.  It would need to send that traffic to 50 other ISRs (5 for each of the 10 partitions), sending 50 MB/sec of replication "out" traffic.  It would be an ISR non-leader for 50 partitions, so it would receive 50 MB/sec of replication "in" traffic.  That would mean 60 MB/sec in and 50 MB/sec out.  Now a 1.2x factor.  So increasing the replication factor decreases the relative difference between in and out traffic.
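Both examples can be checked with a small calculation (a sketch of the reasoning above, not anything from Kafka itself):

```python
def broker_traffic(tr, nb, rf):
    """Per-broker eth0 rates (MB/sec) for a balanced cluster with no consumers.
    tr = total producer rate, nb = brokers, rf = replication factor."""
    tr_p_b = tr / nb                  # producer traffic landing on each broker
    repl = tr_p_b * (rf - 1)          # replication pulled in and pushed out
    return {"in": tr_p_b + repl, "out": repl}

broker_traffic(3000, 300, 2)   # {'in': 20.0, 'out': 10.0}
broker_traffic(3000, 300, 6)   # {'in': 60.0, 'out': 50.0}
```

The same function reproduces the original 3-broker example: broker_traffic(300, 3, 2) gives 200 MB/sec in and 100 MB/sec out.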

Let's do some algebra to generalize this:

Let:
np = number of partitions
nb = number of brokers
rf = replication factor
tr = transfer rate total

tr_p_p = transfer rate for partition = tr / np
nlp_p_b = number of leader partitions per broker = np / nb
f_p_p = number of followers per partition = rf - 1
nl_p_p = number of leaders per partition = 1
f_tot = total number of following partitions = f_p_p * np
f_tot_p_b = total number of following partitions per broker = f_tot / nb

Tr in from producers for a single broker =
    tr_p_p * nlp_p_b =
    tr / np * np / nb =
    tr / nb (let's call this transfer rate per broker = tr_p_b)

Tr out to followers for a single broker =
    nlp_p_b * tr_p_p * f_p_p = 
    np / nb * tr / np * (rf - 1) =
    tr / nb * (rf - 1) =
    tr_p_b * (rf - 1)

Tr in as a follower =
    f_tot_p_b * tr_p_p =
    f_tot / nb * tr / np = 
    f_p_p * np / nb * tr / np = 
    (rf - 1) * np / nb * tr / np = 
    (rf - 1) / nb * tr =
    tr / nb * (rf - 1) =
    tr_p_b * (rf - 1)

Total in for a single broker =
  tr_p_b + (rf - 1) * tr_p_b =
  rf * tr_p_b

Total out for a single broker =
  tr_p_b * (rf - 1)

The above generalization assumes a large number of brokers, so the replication factor comes into play as it relates to networking.  If you run with three brokers (as in my examples above), the replication factor is still limited to 3 regardless of how many partitions you have, so the difference between input and output would still be around 2x.  So it might be better to consider rf above as the minimum of rf and the number of brokers.  However, if you add more brokers to scale this out, the generalization should apply.
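Folding that caveat in, the per-broker in/out ratio works out to rf/(rf - 1) with rf capped at the broker count.  This is my own summary of the algebra above, not anything official:

```python
def in_out_ratio(rf, nb):
    """Approximate per-broker eth0 in/out ratio with no consumers."""
    rf_eff = min(rf, nb)              # can't replicate to more brokers than exist
    return rf_eff / (rf_eff - 1)

in_out_ratio(2, 300)   # 2.0 -- the original 2x observation
in_out_ratio(6, 300)   # 1.2
in_out_ratio(6, 3)     # 1.5 -- rf effectively capped at 3 brokers
```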

In summary, in a large enough cluster, a single broker will take its share of the front-end traffic, plus a multiple of that same share (one less than the replication factor) incoming when acting as a follower.  The broker will also send a similar share (one less than the replication factor) out to other followers.

Monday, December 15, 2014

The number of Kafka partitions when using compression/batching

Early on in my Kafka performance testing, I started as simple as possible. I started with simple non-compressed and non-batched messages with one broker, one partition, one producer and one consumer to understand the relative performance of each aspect of the system. Over time, I added more of each. I found pretty quickly that in my environment (without compression and batching), the main bottleneck with Kafka performance testing was bandwidth. Specifically it was easy to max out the gigabit ethernet in the EC2 broker instance by just adding a few more producers.

At first, my broker was 4 cores (an m3.xlarge instance) and later 8 cores (an i2.2xlarge). To ensure I maxed out the new broker, I ran a few more producers. I found an interesting "problem" at that point. Specifically, I found what looked like a lock when any single message was getting written to a leader log file. This wasn't all that interesting, as one partition and one broker isn't all that realistic. After adding a few partitions, this wasn't an issue.

While it was easy to max out the 8 core system's bandwidth, the CPU utilization was low. With Kafka, there are two ways to increase throughput in this situation. First, you can enable batching of messages from the producer. Second, you can compress the messages within the batch. Upon adding batching and compression, I hit the same issue that I saw in the single-partition-per-broker case. I wasn't able to drive the system to saturation even though there was additional bandwidth and CPU available. The issue again was the lock writing to the leader log file. From the stack traces (see below), it looks like Kafka takes a lock for writing to the leader partition and then decompresses and works with the batch. Given the compression is CPU intensive and the batch is large, the lock can be held for a while. On a system with many cores, this can lead to poor system utilization.

To show this, below is a run of the same Kafka producers and brokers with nothing changing but the number of partitions. In each case, given cpu utilization wasn't yet the bottleneck, I expected that I could drive the broker to the point where the bandwidth bottlenecks (125,000 KBytes/sec). You will see when I started, I was only at 70,000 regardless of the large number of producers.

First my configuration:


Producers
  • 15 m3.xlarge instances (4 core)
  • Each running 20 producers (total of 300)
  • Using a Netflix library that wraps the kafka_2.9.2-0.8.2-beta client
  • Using snappy compression
  • Using a batch size of 8192
  • Using request required acks of 1
  • Using a replay of realistic data (each message being JSON formatted of 2-2.5k in size)
Brokers
  • 3 i2.2xlarge instances (8 core)
  • Using a Netflix service that wraps the kafka_2.9.2-0.8.1.1 broker code
  • With topics that are always replication factor of 2
  • With topics that range from 6 partitions (2 leader partitions per broker) to 15 partitions (5 leader partitions per broker)
Consumers
  • None in this case to stress the problem

Next the results


6 partitions (2 leader partitions per broker)
- Average CPU utilization: 30%
- Average Network:  79,000 KBytes/sec




9 partitions (3 leader partitions per broker)
- Average CPU utilization: 38%
- Average Network:  95,000 KBytes/sec



12 partitions (4 leader partitions per broker)
- Average CPU utilization: 42%
- Peak Network:  119,000 KBytes/sec



15 partitions (5 leader partitions per broker)
- Average CPU utilization: 50%
- Peak Network:  125,000 KBytes/sec


So you can see that by only changing the number of partitions, I can increase the utilization up to the bandwidth bottleneck of each of the brokers.  I believe the reason five was the peak is that, out of eight cores, it looks like there is by default one core handling network, one core handling replication, and five cores handling writing to logs.  I could have tried going up to 18 partitions (6 per broker), but at this level of load my driver became unstable rather quickly (as evidenced by the "spiky" bandwidth you see in Network KBytes).  I later found this instability in the producers to be a bug in snappy compression 1.1.1.5.  After upgrading snappy, I now have more stable runs, but it didn't change the relative results presented here.

I tend to focus on bottlenecks in a system as opposed to the overall throughput until I understand the system.  If you look at the throughput instead, you can see that the difference is measurable.  Here you will see the throughput measured by Kafka in AllTopicsMessagesPerSecond across the three brokers for each of the above configurations:


Here you can see that the throughput per broker is:

  • 6 partitions - 40K messages/sec
  • 9 partitions - 48K messages/sec
  • 12 partitions - 50K messages/sec
  • 15 partitions - 54K messages/sec

So, by increasing the number of partitions, I was able to increase the performance by 35%.
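As a sanity check on these numbers (assuming a mid-range message size of 2.25 KB from the producer description above, which is my assumption), the 15-partition throughput lines up with the bandwidth ceiling:

```python
msgs_per_sec = 54_000        # per broker at 15 partitions
avg_msg_kb = 2.25            # messages were JSON of 2-2.5 KB
kbytes_per_sec = msgs_per_sec * avg_msg_kb
# roughly 121,500 KBytes/sec, close to the ~125,000 KBytes/sec gigabit limit
```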

To confirm my suspicion, I did thread dumps.  Here are two thread dumps from the run with only 6 partitions (2 leader partitions per broker), which shows the problem in its worst case. You'll see that Kafka has 8 request handler threads. Also, in both dumps, only 2 threads are doing work under kafka.cluster.Partition.appendMessagesToLeader, while the other six threads are blocked waiting for the lock.
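If each leader partition's append path is serialized by a per-partition lock, the number of handler threads that can make progress is capped by the leader partitions per broker.  Here is a toy model of that cap (my reading of the thread dumps, not Kafka's actual implementation):

```python
def max_concurrent_appends(partitions, brokers, handler_threads=8):
    """At most one thread can hold each leader partition's append lock, so
    append concurrency is min(leader partitions per broker, handler threads)."""
    leaders_per_broker = partitions // brokers
    return min(leaders_per_broker, handler_threads)

max_concurrent_appends(6, 3)    # 2 busy appenders, matching the thread dumps
max_concurrent_appends(15, 3)   # 5, matching the observed peak
```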

This all said, there are some real-world aspects that are specifically set to show the issue in my test.  First, I had no consumers.  If I had consumers, they would be using CPU and bandwidth without waiting on the lock.  Also, I am running on a wide SMP VM (8 cores).  The issue would be less noticeable on a 4 core box in terms of CPU utilization.  However, given I was able to hit the issue, I wanted to share it in case others see the same thing.  It isn't out of the question for others to hit a similar problem on very hot topics with lower numbers of partitions when consumers aren't listening.

I will work with the Kafka team to see if this is something that should be addressed, or if it is just something to consider when tuning Kafka configurations.  I wonder if the work for compression could be done before taking the lock to decrease the time in the critical section.  However, I am still new to Kafka and haven't read the code, so I may be missing something.  If this is a real issue, I'll get a github issue opened and will then update this blog post.

Sunday, November 16, 2014

AWS re:Invent 2014 Video & Slide Presentation Links with Easy Index

As with last year, here is my quick index of all re:Invent sessions.  Please wait for a few days and I'll keep running the tool to fill in the index.  It usually takes Amazon a few weeks to fully upload all the videos and slideshares.

See below for how I created the index (with code):


ADV403 - Dynamic Ad Performance Reporting with Amazon Redshift: Data Science and Complex Queries at Massive Scale
by Vidhya Srinivasan - Senior Manager, Software Development with Amazon Web Services and Timon Karnezos - Director, Infrastructure with NeuStar
Delivering deep insight on advertising metrics and providing customers easy data access becomes a challenge as scale increases. In this session, Neustar, a global provider of real-time analytics, shows how they use Redshift to help advertisers and agencies reach the highest-performing customers using data science at scale. Neustar dives into the queries they use to determine how best to target ads based on their real reach, how much to pay for ads using multi-touch attribution, and how frequently to show ads. Finally, Neustar discusses how they operate a fleet of Redshift clusters to run workloads in parallel and generate daily reports on billions of events within hours. Session includes how Neustar provides daily feeds of event-level data to their customers for ad-hoc data science.
ADV402 - Beating the Speed of Light with Your Infrastructure in AWS
by Siva Raghupathy - Principal Solutions Architect with Amazon Web Services and Valentino Volonghi - CTO with AdRoll
With Amazon Web Services it's possible to serve the needs of modern high performance advertising without breaking the bank. This session covers how AdRoll processes more than 60 billion requests per day in less than 100 milliseconds each using Amazon DynamoDB, Auto Scaling, and Elastic Load Balancing. This process generates more than 2 GB of data every single second, which will be processed and turned into useful models over the following hour. We discuss designing systems that can answer billions of latency-sensitive global requests every day and look into some tricks to pare down the costs.
ADV303 - MediaMath's Data Revolution with Amazon Kinesis and Amazon EMR
by Aditya Krishnan - Sr. Product Manager with Amazon Web Services, Edward Fagin - VP, Engineering with MediaMath, and Ian Hummel - Sr. Director, Data Platform with MediaMath
Collecting and processing terabytes of data per day is a challenge for any technology company. As marketers and brands become more sophisticated consumers of data, enabling granular levels of access to targeted subsets of data from outside your firewalls presents new challenges. This session discusses how to build scalable, complex, and cost-effective data processing pipelines using Amazon Kinesis, Amazon EC2 Spot Instances, Amazon EMR, and Amazon Simple Storage Service (S3). Learn how MediaMath revolutionized their data delivery platform with the help of these services to empower product teams, partners, and clients. As a result, a number of innovative products and services are delivered on top of terabytes of online user behavior. MediaMath covers their journey from legacy batch processing and vendor lock-in to a new world where the raw materials to build advanced lookalike models, optimization algorithms, or marketing attribution models are readily available to any engineering team in real time, substantially reducing the time - and cost - of innovation.
AFF302 - Responsive Game Design: Bringing Desktop and Mobile Games to the Living Room
by Jesse Freeman - Developer Evangelist, HTML5 & Games with Amazon Web Services
In this session, we cover what's needed to bring your Android app or game to Fire TV. We walk you through controller support for a game scenario (buttons and analog sticks), controller support for UI (selection, moving between menu items, invoking the keyboard), and how to account for the form factor (overscan, landscape, device and controller detection). By the end of this session, you'll be able to understand what you need to do if you want to build or modify your own app to work on a TV.
AFF301 - Fire Phone: The Dynamic Perspective API, Under the Hood
by Bilgem Cakir - Senior Software Development Engineer with Amazon Web Services and Peter Heinrich - Developer Evangelist with Amazon Web Services
Fire phone's Dynamic Perspective adds a whole new dimension to UI and customer interaction, combining dedicated hardware and advanced algorithms to enable real-time head-tracking and 3D effects. This session goes behind the scenes with the Dynamic Perspective team, diving into the unique technical challenges they faced during development. Afterward, we explore the Dynamic Perspective SDK together so you leave the session knowing how to add innovative features like Peek, Tilt, 3D controls, and Parallax to your own apps.
AFF202 - Everything You Need to Know about Building Apps for the Fire Phone
by David Isbitski - Developer Evangelist, Amazon Mobile Apps & Games with Amazon Web Services
Fire is the first phone designed by Amazon. We show you the new customer experiences it enables and how top developers have updated their Android apps to take advantage of Fire phone. Learn more about the hardware, the services, and the development SDK including Enhanced Carousel, Firefly and Dynamic Perspective, Appstore Developer Select, submitting to the Amazon Appstore, and Best Practices for developing great Fire apps.
AFF201 - What the Top 50 Games Do with In-App Purchasing That the Rest of Us Don't
by Mike Hines - Developer Evangelist with Amazon Web Services and Salim Mitha - EVP Product, Marketing & Monetization with Playtika - Caesars Interactive
Not sure when (or if) to run a sale? Not sure what IAP items to offer? In this session, Playtika EVP Salim Mitha and Amazon show you what works. We share best practices and analytics data that we've aggregated from the top 50 in-app purchase (IAP) grossing games in the Amazon Appstore.  We cover user retention and engagement data comparisons and examine several purchasing UI layouts to learn how to manage and present IAP item selection. We also cover how to manage IAP price points and how and when to tailor price variety, sales, and offers for customers. You get actionable data and suggestions that you can use on your current as well as future projects to help maximize IAP revenue.
APP402 - Serving Billions of Web Requests Each Day with Elastic Beanstalk
by Mik Quinlan - Director, Engineering, Mobile Advertising with Thinknear and John Hinnegan - VP of Software Engineering with Thinknear
AWS Elastic Beanstalk provides a number of simple and flexible interfaces for developing and deploying your applications. Follow Thinknear's rapid growth from inception to acquisition, scaling from a few dozen requests per hour to billions of requests served each day with AWS Elastic Beanstalk.  Thinknear engineers demonstrate how they extended the AWS Elastic Beanstalk platform to scale to billions of requests while meeting response times below 100 ms, discuss tradeoffs they made in the process, and what did and did not work for their mobile ad bidding business.
APP315-JT - Coca-Cola: Migrating to AWS - Japanese Track
by Michael Connor - Senior Platform Architect with Coca-Cola
This session details Coca-Cola's effort to migrate hundreds of applications from on-premises to AWS. The focus is on migration best practices, security considerations, helpful tools, automation, and business processes used to complete the effort. Key AWS technologies highlighted will be AWS Elastic Beanstalk, Amazon VPC, AWS CloudFormation, and the AWS APIs. This session includes demos and code samples. Participants should walk away with a clear understanding of how AWS Elastic Beanstalk compares to "platform as a service," and why it was chosen to meet strict standards for security and business intelligence. This is a repeat session that will be translated simultaneously into Japanese.
APP315 - Coca-Cola: Migrating to AWS
by Michael Connor - Senior Platform Architect with Coca-Cola
This session details Coca-Cola's effort to migrate hundreds of applications from on-premises to AWS. The focus is on migration best practices, security considerations, helpful tools, automation, and business processes used to complete the effort. Key AWS technologies highlighted will be AWS Elastic Beanstalk, Amazon VPC, AWS CloudFormation, and the AWS APIs. This session includes demos and code samples. Participants should walk away with a clear understanding of how AWS Elastic Beanstalk compares to "platform as a service," and why it was chosen to meet strict standards for security and business intelligence.
APP313 - NEW LAUNCH: Amazon EC2 Container Service in Action
by Daniel Gerdesmeier - Software Development Engineer, EC2 with Amazon Web Services and Deepak Singh - Principal Product Manager with Amazon Web Services
Container technology, particularly Docker, is all the rage these days. At AWS, our customers have been running Linux containers at scale for several years, and we are increasingly seeing customers adopt Docker, especially as they build loosely coupled distributed applications. However, to do so they have to run their own cluster management solutions, deal with configuration management, and manage their containers and associated metadata. We believe that those capabilities should be a core building block technology, just like EC2. Today, we are announcing the preview of Amazon EC2 Container Service, a new AWS service that makes it easy to run and manage Docker-enabled distributed applications using powerful APIs that allow you to launch and stop containers, get complete cluster state information, and manage linked containers. In this session we will discuss why we built the EC2 Container Service, some of the core concepts, and walk you through how you can use the service for your applications.
APP311 - Lessons Learned From Over a Decade of Deployments at Amazon
by Andy Troutman - Senior Manager, Software Development with Amazon Web Services
Amazon made the transition to a service-oriented architecture over a decade ago. That move drove major changes to the way we release updates to our applications and services. We learned many lessons over those years, and we used that experience to refine our internal tools as well as the services that we make available to our customers. In this session, we share that learning with you, and demonstrate how to optimize for agility and reliability in your own deployment process.
APP310 - Scheduling Using Apache Mesos in the Cloud
by Sharma Podila - Senior Software Engineer with Netflix
How can you reliably schedule tasks in an unreliable, autoscaling cloud environment? This presentation talks about the design of our Fenzo scheduler, built on Apache Mesos, that serves as the core of our stream-processing platform, Mantis, designed for real-time insights. We focus on the following aspects of the scheduler:
  • Resource granularity
  • Fault tolerance
  • Bin packing, task affinity, stream locality
  • Autoscaling of the cluster and of individual service jobs
  • Constraints (hard and soft) for individual tasks such as zone balancing, unique, and exclusive instances
This talk also includes detailed information on a holistic approach to scheduling in a distributed, autoscaling environment to achieve both speed and advanced scheduling optimizations.
APP309 - Running and Monitoring Docker Containers at Scale
by Alexis Lê-Quôc - CTO with Datadog
If you have tried Docker but are unsure about how to run it at scale, you will benefit from this session. Like virtualization before, containerization (à la Docker) is increasing the elastic nature of cloud infrastructure by an order of magnitude. But maybe you still have questions: How many containers can you run on a given Amazon EC2 instance type? Which metric should you look at to measure contention? How do you manage fleets of containers at scale? Datadog is a monitoring service for IT, operations, and development teams who write and run applications at scale. In this session, the cofounder of Datadog presents the challenges and benefits of running containers at scale and how to use quantitative performance patterns to monitor your infrastructure at this magnitude and complexity. Sponsored by Datadog.
APP308 - Chef on AWS: Deep Dive
by Michael Ducy - Global Partner Evangelist with Chef and John Keiser - Developer Lead with Chef
When your infrastructure scales, you need to have the tooling and knowledge to support that scale. Chef is one of the commonly used tools for deploying and managing all kinds of infrastructure at any scale. In this session, we focus on how you can get your existing infrastructure robustly represented in Chef. We dive deep on all the specifics that make deploying with Chef on AWS easy: authentication management, versioning, recipe testing, and leveraging AWS resources in your recipes. Whether you're building new infrastructure with no existing operations management software or deploying existing Chef recipes into AWS, this session will outline all the tips and tricks you need to be a master Chef in the cloud.
APP307 - Leverage the Cloud with a Blue/Green Deployment Architecture
by Sean Berry - Principal Engineer with CrowdStrike and Jim Plush - Sr Director of Engineering with CrowdStrike
Minimizing customer impact is a key feature in successfully rolling out frequent code updates. Learn how to leverage the AWS cloud so you can minimize bug impacts, test your services in isolation with canary data, and easily roll back changes. Learn to love deployments, not fear them, with a blue/green architecture model. This talk walks you through the reasons it works for us and how we set up our AWS infrastructure, including package repositories, Elastic Load Balancing load balancers, Auto Scaling groups, internal tools, and more to help orchestrate the process. Learn to view thousands of servers as resources at your command to help improve your engineering environment, take bigger risks, and not spend weekends firefighting bad deployments.
APP306 - Using AWS CloudFormation for Deployment and Management at Scale
by Tom Cartwright - Exec. Product Manager with BBC and Yavor Atanasov - Senior Software Engineer with BBC
With AWS CloudFormation you can model, provision, and update the full breadth of AWS resources. You can manage anything from a single Amazon EC2 instance to a multi-tier application. The British Broadcasting Corporation (BBC) uses AWS and CloudFormation to help deliver a range of services, including BBC iPlayer. Learn straight from the BBC team on how they developed these services with a multitude of AWS features and how they operate at scale. Get insight into the tooling and best practices developed by the BBC team and how they used CloudFormation to form an end-to-end deployment and management pipeline. If you are new to AWS CloudFormation, get up to speed for this session by completing the Working with CloudFormation lab in the self-paced Labs Lounge.
APP304 - AWS CloudFormation Best Practices
by Chris Whitaker - Senior Manager of Software Development with Amazon Web Services and Chetan Dandekar - Senior Product Manager with Amazon Web Services
With AWS CloudFormation you can model, provision, and update the full breadth of AWS resources. You can manage anything from a single Amazon EC2 instance to a multi-tier application. If you are familiar with AWS CloudFormation or using it already, this session is for you. If you are familiar with AWS CloudFormation, you may have questions such as "How do I plan my stacks?", "How do I deploy & bootstrap software on my stacks?" and "Where does AWS CloudFormation fit in a DevOps pipeline?" If you are using AWS CloudFormation already, you may have questions such as "How do I manage my templates at scale?", "How do I safely update stacks?", and "How do I audit changes to my stack?" This session is intended to answer those questions. If you are new to AWS CloudFormation, get up to speed for this session by completing the Working with CloudFormation lab in the self-paced Labs Lounge.
APP303 - Lightning Fast Deploys with Docker Containers and AWS
by Nathan LeClaire - Solutions Engineer with Docker
Docker is an open platform for developers to build, ship, and run distributed applications in Linux containers. In this session, Nathan LeClaire, a Solutions Engineer at Docker Inc., will be demonstrating workflows that can dramatically accelerate the development and deployment of distributed applications with Docker containers. Through in-depth demos, this session will show how to achieve painless deployments that are both readily scalable and highly available by combining AWS's strengths as an infrastructure platform with those of Docker's as a platform that transforms the software development lifecycle.
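As a flavor of the multi-container workflows described above, a minimal Docker Compose file can declare a two-service stack; the image names below are hypothetical and not taken from the session:

```yaml
# Illustrative two-service stack: a web app plus a Redis cache.
services:
  web:
    image: example/webapp:latest   # hypothetical application image
    ports:
      - "80:8080"                  # expose the app on port 80
    depends_on:
      - cache                      # start the cache first
  cache:
    image: redis:alpine
```

Declaring the whole stack in one file is what makes the "lightning fast" repeatable deployments possible: the same definition runs on a laptop and on an EC2 host.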
APP301 - AWS OpsWorks Under the Hood
by Reza Spagnolo - Software Development Engineer with Amazon Web Services and Jonathan Weiss - Senior Software Development Manager with Amazon Web Services
AWS OpsWorks helps you deploy and operate applications of all shapes and sizes. With AWS OpsWorks, you can model your application stack with layers that define the building blocks of your application: load balancers, application servers, databases, etc. But did you know that you can also extend AWS OpsWorks layers or build your own custom layers? Whether you need to perform a specific task or install a new software package, AWS OpsWorks gives you the tools to install and configure your instances consistently and help them evolve in an automated and predictable fashion. In this session, we dive into the development process including how to use attributes, recipes, and lifecycle events; show how to develop your environment locally; and provide troubleshooting steps that reduce your development time.
APP204 - NEW LAUNCH: Introduction to AWS Service Catalog
by Ashutosh Tiwary - General Manager, Cloud Formation with Amazon Web Services and Abhishek Lal - Senior Product Manager with Amazon Web Services
Running an IT department in a large organization is not easy. Providing your internal users with access to the latest and greatest technology so that they can be as efficient and productive as possible must be balanced with the need to set and maintain corporate standards, collect and disseminate best practices, and provide some oversight to avoid runaway spending and technology sprawl. Introducing AWS Service Catalog, a service that allows end users in your organization to easily find and launch products using a personalized portal. You can manage catalogs of standardized offerings and control which users have access to which products, enabling compliance with business policies. Your organization can benefit from increased agility and reduced costs. Attend this session to be one of the first to learn about this new service.
APP203 - How Sumo Logic and Anki Build Highly Resilient Services on AWS to Manage Massive Usage Spikes
by Ben Whaley - Director of Infrastructure with Anki and Christian Beedgen - CTO with Sumo Logic
In just two years, Sumo Logic's multitenant log analytics service has scaled to query over 10 trillion logs each day. Christian Beedgen, Sumo Logic's cofounder and CTO, shares the three most important lessons he has learned in building such a massive service on AWS. Ben Whaley is an AWS Community Hero who works for Anki as an AWS cloud architect. Ben uses hundreds of millions of logs to troubleshoot and improve Anki Drive, the coolest battle robot racing game on the planet. This is an ideal session for cloud architects constantly looking to improve scalability and application performance on AWS. Sponsored by Sumo Logic.
APP202 - Deploy, Manage, and Scale Your Apps with AWS OpsWorks and AWS Elastic Beanstalk
by Abhishek Singh - Senior Product Manager, AWS Elastic Beanstalk with Amazon Web Services and Chris Barclay - Senior Product Manager with Amazon Web Services
AWS offers a number of services that help you easily deploy and run applications in the cloud. Come to this session to learn how to choose among these options. Through interactive demonstrations, this session shows you how to get an application running using AWS OpsWorks and AWS Elastic Beanstalk application management services. You also learn how to use AWS CloudFormation templates to document, version control, and share your application configuration. This session covers application updates, customization, and working with resources such as load balancers and databases.
APP201 - Going Zero to Sixty with AWS Elastic Beanstalk
by Abhishek Singh - Senior Product Manager, AWS Elastic Beanstalk with Amazon Web Services
AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS cloud. This session shows you how to deploy your code to AWS Elastic Beanstalk, easily enable or disable application functionality, and perform zero-downtime deployments through interactive demos and code samples for both Windows and Linux. Are you new to AWS Elastic Beanstalk? Get up to speed for this session by first completing the 60-minute Fundamentals of AWS Elastic Beanstalk lab in the self-paced Lab Lounge.
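One mechanism Elastic Beanstalk offers for the kind of environment customization mentioned above is `.ebextensions` configuration files committed alongside your code. A minimal, hypothetical example (the option namespaces are real Elastic Beanstalk namespaces; the specific values and the environment variable are illustrative):

```yaml
# .ebextensions/scaling.config -- illustrative settings only
option_settings:
  aws:autoscaling:asg:
    MinSize: 2        # keep at least two instances for availability
    MaxSize: 4        # cap scale-out at four instances
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production   # hypothetical environment variable for the app
```

Because these files deploy with the application bundle, environment configuration is versioned and reviewed the same way as code.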
ARC403 - From One to Many: Evolving VPC Design
by Yinal Ozkan - Principal Solutions Architect with Amazon Web Services
As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against increasingly complex design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, managing multitenant VPCs, conducting VPC-to-VPC traffic, running multiple hybrid environments over AWS Direct Connect, and integrating corporate multiprotocol label switching (MPLS) clouds into multiregion VPCs.
ARC402 - Deployment Automation: From Developers' Keyboards to End Users' Screens
by Chris Munns - Solutions Architect with Amazon Web Services
Some of the best businesses today are deploying their code dozens of times a day. How? By making heavy use of automation, smart tools, and repeatable patterns to get process out of the way and keep the workflow moving. Come to this session to learn how you can do this too, using services such as AWS OpsWorks, AWS CloudFormation, Amazon Simple Workflow Service, and other tools. We'll discuss a number of different deployment patterns, and what aspects you need to focus on when working toward deployment automation yourself.
ARC401 - Black-Belt Networking for the Cloud Ninja
by Steve Morad - Principal Solutions Architect with Amazon Web Services
Do you need to get beyond the basics of VPC and networking in the cloud? Do terms like virtual addresses, integrated networks and network monitoring get you motivated? Come discuss black-belt networking topics including floating IPs, overlapping network management, network automation, network monitoring, and more. This expert-level networking discussion is ideally suited for network administrators, security architects, or cloud ninjas who are eager to take their AWS networking skills to the next level.
ARC318 - Continuous Delivery at a Rate of 500 Deployments a Day!
by Elias Torres - VP of Engineering with Driftt
Every development team would love to spend more time building products and less time shepherding software releases. What if you had the ability to repeatably push any version of your code and not have to worry about the optimal server allocation for your services? This talk will cover how HubSpot and a team of 100 engineers deploys 500 times a day with very minimal effort. Singularity, an open-source project which HubSpot built from scratch, works with Apache Mesos to manage a multipurpose cluster in AWS to support web services, cron jobs, map/reduce tasks, and one-off processes. This talk will discuss the HubSpot service architecture and cultural advantages and the costs and benefits of the continuous delivery approach.
ARC317 - Maintaining a Resilient Front Door at Massive Scale
by Daniel Jacobson - VP of Engineering, Edge and Playback with Netflix and Benjamin Schmaus - Director, Edge Systems with Netflix
The Netflix service supports more than 50 million subscribers in over 40 countries around the world. These subscribers use more than 1,000 different device types to connect to Netflix, resulting in massive amounts of traffic to the service. In our distributed environment, the gateway service that receives this customer traffic needs to be able to scale in a variety of ways while simultaneously protecting our subscribers from failures elsewhere in the architecture. This talk will detail how the Netflix front door operates, leveraging systems like Hystrix, Zuul, and Scryer to maximize the AWS infrastructure and to create a great streaming experience.
ARC313 - So You Think You Can Architect?
by Constantin Gonzalez - Solutions Architect with Amazon Web Services, Jan Metzner - Solutions Architect with Amazon Web Services, and Michael Sandbichler - CTO with ProSiebenSat.1 Digital GmbH
TV talent shows with online and mobile voting options pose a huge challenge for architects: How do you handle millions of votes in a very short time, while keeping your system robust, secure, and scalable? Attend this session and learn from AWS customers who have solved the architectural challenges of setting up, testing, and operating mobile voting infrastructures. We will start with a typical, standard web application, then introduce advanced architectural patterns along the way that help you scale, secure, and simplify your mobile voting infrastructure and make it bulletproof for show time! We'll also touch on topics like testing and support during the big event.
ARC312 - Processing Money in the Cloud
by Sri Vasireddy - President with REAN Cloud and Soofi Safavi - CTO, SVP with Radian
Financial transactions need to be processed and stored securely and in real time. Together with a giant in the mortgage insurance industry, we have developed an elastic, secure, and compliant data processing framework on AWS that meets these processing requirements and drastically improves the time it takes to make a decision on a loan. This session will discuss what we've learned along the way, how we have overcome multiple security and compliance hurdles, and how other organizations in regulated industries can do the same. This session is targeted at business decision-makers and solutions architects working in regulated industries with high security and compliance requirements.
ARC311 - Extreme Availability for Mission-Critical Applications
by Eduardo Horai - Manager, Solutions Architecture with Amazon Web Services, Raul Frias - Solutions Architect with Amazon Web Services, and Andre Fatala - CDO with Magazine Luiza
More and more businesses are deploying their mission-critical applications on AWS, and one of their concerns is how to improve the availability of their services, going beyond traditional availability concepts. In this session, you will learn how to architect different layers of your application, beginning with an extremely available front-end layer with Amazon EC2, Elastic Load Balancing, and Auto Scaling, and going all the way to a protected multitiered information layer, including cross-region replicas for relational and NoSQL databases. The concepts that we will share, using services like Amazon RDS, Amazon DynamoDB, and Amazon Route 53, will provide a framework you can use to keep your application running even with multiple failures. Additionally, you will hear from Magazine Luiza, in an interactive session, on how they run a large e-commerce application with a multiregion architecture using a combination of features and services from AWS to achieve extreme availability.
ARC309 - Building and Scaling Amazon Cloud Drive to Millions of Users
by Ashish Mishra - Sr. Software Development Engineer with Amazon Web Services and Tarlochan Cheema - Software Development Manager, Amazon Cloud Drive with Amazon Web Services
Learn from the Amazon Cloud Drive team how Amazon Cloud Drive services are built on top of AWS core services using Amazon S3, Amazon DynamoDB, Amazon EC2, Amazon SQS, Amazon Kinesis, and Amazon CloudSearch. This session will cover design and implementation aspects of large-scale data uploads, metadata storage and query, and consistent and fault-tolerant services on top of the AWS stack. The session will provide guidance and best practices about how and when to leverage and integrate AWS infrastructure and managed services for scalable solutions. This session will also cover how Cloud Drive services teams innovated to attain high throughputs.
ARC308 - Nike's Journey into Microservices
by Amber Milavec - Sr. Technical Architect, Infrastructure with Nike, Inc. and Jason Robey - Director of Database and Data Services with Nike, Inc.
Tightly coupled monolithic stacks can present challenges for companies looking to take full advantage of the cloud. In order to move to a 100 percent cloud-native architecture, the Nike team realized they would need to rewrite all of the Nike Digital sites (Commerce, Sport, and Brand) as microservices. This presentation will discuss this journey and the architecture decisions behind making this happen. Nike presenters will talk about adopting the Netflix operations support systems (OSS) stack for their deployment pipeline and application architecture, covering the problems this solved and the challenges this introduced.
ARC307-JT - Infrastructure as Code - Japanese Track
by Alex Corley - SA with Amazon Web Services, David Winter - Enterprise Sales with Amazon Web Services, and Tom Wanielista - Chief Engineer with Simple
While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a “programmable infrastructure” that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure. This is a repeat session that will be translated simultaneously into Japanese.
ARC307 - Infrastructure as Code
by David Winter - Enterprise Sales with Amazon Web Services, Alex Corley - SA with Amazon Web Services, and Tom Wanielista - Chief Engineer with Simple
While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a “programmable infrastructure” that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
ARC306 - IoT: Small Things and the Cloud
by Brett Francis - Strategic Account Solutions Architect with Amazon Web Services
Working with fleets of “Internet of Things” (IoT) devices brings about distinct challenges. In this session, we will explore four of these challenges: telemetry, commands, device devops, and audit and authorization, and how they transform when deploying hundreds-of-thousands of resource-constrained devices. We'll explore high-level architectural patterns that customers use to meet these challenges through the functionality and ubiquity of a globally accessible cloud platform. If you consider yourself a device developer, an electrical, industrial, or hardware engineer, a hardware incubator class member, a new device manufacturer, an existing device manufacturer who wants to smarten up their next-gen devices, or a software developer working with people who identify as part of these tribes, you'll want to participate in this session.
ARC304 - Designing for SaaS: Next-Generation Software Delivery Models on AWS
by Matt Tavis - Principal Solutions Architect with Amazon Web Services
SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors, from security to cost. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models is tuned for customer-specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation.
ARC303 - Panning for Gold: Analyzing Unstructured Data
by Ganesh Raja - Solutions Architect with Amazon Web Services and Krishnan Venkata - Director with LatentView Analytics
Mining unstructured data for valuable information has historically been frustrating and difficult. This session will walk through practical examples of how multiple AWS services can be leveraged to provide extremely flexible, scalable, and available systems to successfully analyze massive amounts of data. Come learn how an application was adapted to leverage Elastic MapReduce and Amazon Kinesis to collect and analyze terabytes of web log data a day. Learn how Amazon Redshift can be used to clean up and visualize data and how AWS CloudFormation enables this analytical framework to be deployed in multiple regions while honoring privacy laws.
ARC302 - Running Lean Architectures: How to Optimize for Cost Efficiency
by Constantin Gonzalez - Solutions Architect with Amazon Web Services and Yimin Jiang - Cloud Performance & Reliability Lead with Adobe
Whether you're a startup getting to profitability or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. Building on last year's popular foundation of how to reduce waste and fine-tune your AWS spending, this session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customer Adobe Systems. With the massive growth of subscribers to Adobe's Creative Cloud, Adobe's footprint in AWS continues to expand. We will discuss the techniques used to optimize and manage costs, while maximizing performance and improving resiliency. We'll cover effectively combining EC2 On-Demand, Reserved, and Spot instances to handle different use cases, leveraging auto scaling to match capacity to workload, choosing the most optimal instance type through load testing, taking advantage of multi-AZ support, and using CloudWatch to monitor usage and automatically shutting off resources when not in use. Other techniques we'll discuss include taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely, by leveraging AWS high-level services. We will also showcase simple tools to help track and manage costs, including the AWS Cost Explorer, Billing Alerts, and Trusted Advisor. This session will be your pocket guide for running cost effectively in the Amazon cloud.
ARC206 - Architecting Reactive Applications on AWS
by Revanth Talari - Systems Analyst with USEReady and Atul Shukla - Platform Architect with USEReady
Application requirements have changed dramatically in recent years, requiring millisecond or even microsecond response times and 100 percent uptime. This change has led to a new wave of "reactive applications" with architectures that are event-driven, scalable, resilient, and responsive. In this session, we present the blueprint for building reactive applications on AWS. We compare reactive architecture to the classic n-tier architecture and discuss how it is cost-efficient and easy to implement using AWS. Next, we walk through how to design, build, deploy, and run reactive applications in the AWS cloud, delivering highly responsive user experiences with a real-time feel. This architecture uses Amazon EC2 instances to implement server push to broadcast events to application clients; AWS messaging (Amazon SQS/SNS); Amazon SWF to decouple system components; Amazon DynamoDB to minimize contention; and Elastic Load Balancing, Auto Scaling, Availability Zones, Amazon VPC, and Amazon Route 53 to make reactive applications scalable and resilient.
ARC205-JT - Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options - Japanese Track
by Brett Hollman - Manager, Solutions Architecture with Amazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint. This is a repeat session that will be translated simultaneously into Japanese.
ARC205 - Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
by Brett Hollman - Manager, Solutions Architecture with Amazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint.
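The build-out fundamentals listed above (IP space, subnetting, routing) can be sketched as infrastructure code. The following CloudFormation fragment is purely illustrative; the logical names and CIDR ranges are placeholder choices, not recommendations from the session:

```yaml
# Illustrative VPC build-out: a /16 VPC with one public /24 subnet.
Resources:
  AppVPC:                          # hypothetical logical name
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16       # placeholder IP space for the VPC
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC
      CidrBlock: 10.0.1.0/24       # one subnet carved from the VPC range
      MapPublicIpOnLaunch: true    # instances here get public IPs
```

A real design would add private subnets across Availability Zones, route tables, and NAT, which is exactly the progression the session walks through.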
ARC204 - Architecting Microsoft Workloads on AWS
by Mike Pfeiffer - Solutions Architect with Amazon Web Services
Are you interested in implementing key Microsoft workloads such as Windows Server, Active Directory, SQL Server, or SharePoint Server on AWS? Have you wondered how to securely manage your Microsoft-based workloads on AWS? In this session, we step you through the architectural considerations, implementation steps, and best practices for deploying and administering these key Microsoft workloads on the AWS cloud. Find out how to deploy these workloads on your own, or by using automated solutions such as AWS Quick Start. Hear how existing AWS customers have successfully implemented Microsoft workloads on AWS and walk away with a better idea of how to implement or migrate your Microsoft-based workloads to AWS.
ARC203 - Expanding Your Data Center with Hybrid Infrastructure
by Rich Uhl - Enterprise Solutions Architect with Amazon Web Services, Derek Lyon - Principal Product Manager, Amazon EC2 with Amazon Web Services, Theo Carpenter - Systems Engineer with Woot.com, and Daniel Pinkard - Manager, Systems Admin with Woot.com
Today, many enterprises' data centers are at capacity, and these enterprises are looking to expand their infrastructure footprint using the cloud. By leveraging a hybrid architecture, enterprises can expand their capabilities while maintaining some or all of their existing management tools. This session will go into detail on managing your AWS infrastructure with the AWS Management Portal for vCenter, integrating the AWS Management Pack for Microsoft System Center for monitoring your AWS resources, and possible future System Center and vCenter AWS cloud management features and functionality.
ARC202 - Real-World Real-Time Analytics
by Sebastian Montini - Solutions Architect with Socialmetrix and Gustavo Arjones - Co-founder & CTO with Socialmetrix
Working with big volumes of data is a complicated task, but it's even harder if you have to do everything in real time and try to figure it all out yourself. This session will use practical examples to discuss architectural best practices and lessons learned when solving real-time social media analytics, sentiment analysis, and data visualization decision-making problems with AWS. Learn how you can leverage AWS services like Amazon RDS, AWS CloudFormation, Auto Scaling, Amazon S3, Amazon Glacier, and Amazon Elastic MapReduce to perform highly performant, reliable, real-time big data analytics while saving time, effort, and money. Gain insight from two years of real-time analytics successes and failures so you don't have to go down this path on your own.
ARC201 - Cloud-Native Cost Optimization
by Adrian Cockcroft - Technology Fellow with Battery Ventures
For traditional data center applications, capacity is a fixed upfront cost. Thus, there is little incentive to stop using capacity once it's been allocated, and it has to be overprovisioned most of the time so there is enough capacity for peak loads. When traditional application and operating practices are used in cloud deployments, immediate benefits occur in speed of deployment, automation, and transparency of costs. The next step is a re-architecture of the application to be cloud-native, and significant operating cost reductions can help justify this development work. Cloud-native applications are dynamic and use ephemeral resources that customers are only charged for when the resources are in use. This talk will discuss best practices for cloud-native development, test, and production deployment architectures that turn off unused resources and take full advantage of optimizations such as reserved instances and consolidated billing.
BAC404 - Deploying High Availability and Disaster Recovery Architectures with AWS
by Kamal Arora - Solutions Architect with Amazon Web Services and Vikram Garlapati - Manager, Solutions Architecture with Amazon Web Services
In this session, we show how to architect, deploy, and scale an application for high availability within a region along with failing over to another AWS region in the event of a disaster at your primary region. During the session, we use real-time live demos and code examples for high availability and disaster recovery scenarios.
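One common building block for the cross-region failover described above is a Route 53 failover routing policy backed by a health check. The fragment below is an illustrative sketch only; the domain, IP address, and resource names are placeholders:

```yaml
# Illustrative: DNS failover to a secondary region via Route 53.
Resources:
  PrimaryHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTP
        FullyQualifiedDomainName: app.example.com   # placeholder
        Port: 80
        ResourcePath: /health                       # hypothetical endpoint
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.                  # placeholder zone
      Name: app.example.com.
      Type: A
      TTL: '60'
      SetIdentifier: primary
      Failover: PRIMARY                             # paired SECONDARY record omitted
      HealthCheckId: !Ref PrimaryHealthCheck
      ResourceRecords:
        - 203.0.113.10                              # placeholder primary-region IP
```

When the health check fails, Route 53 stops answering with the primary record and serves the corresponding SECONDARY record pointing at the disaster recovery region.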
BAC310 - Building an Enterprise-Class Backup and Archive Storage Solution Using AWS
by Jennifer Burnham - Director of Content & Comm with Druva, Inc. and Jaspreet Singh - Founder & CEO with Druva, Inc.
In this session, Druva's technical founder shares how the company has used AWS to bring scalable and efficient data backup and archival services to market at scale.  Druva has built a scalable storage-as-a-service platform in the cloud, designed to collect, preserve, and help enterprises easily discover information masquerading as “just some data.” Druva leveraged Amazon S3 for a large pool of distributed storage for warm copies of data, Amazon Glacier for long-term preservation, and a metadata layer on top of Amazon DynamoDB for high transaction and fast data deduplication. The system is uniquely designed like a file system, so developers can write an application with the same basic assumptions. Druva used the design principles of metadata and data separation and deployed the concept of “microthreads” to counter latency challenges. The result is a completely distributed cloud-oriented architecture that is geodistributed, highly available, and able to scale to manage massive volumes of data across multiple customers and geographies. Even if your attention is not particularly on backup solutions, it will be instructive to learn how Druva is taking advantage of these AWS services at massive scale. This session is targeted at developers and at technical and product leaders interested in ways to use AWS in enterprise environments.  Sponsored by Druva.
BAC309 - Automating Backup and Archiving with AWS and CommVault
by Paul McClure - Chief Technologist, Cloud Solutions Group with CommVault
Are you looking to automate backup and archive of your business-critical data workloads? Attend this session to better understand key use cases, best practices, and considerations to help protect your data with AWS and CommVault. This session will feature lessons learned from CommVault customers that have: migrated onsite backup data into Amazon S3 to reduce hardware footprint and improve recoverability; implemented data tiering and archived data in Amazon Glacier for long term retention and compliance; performed snapshot-based protection and recovery for applications running in Amazon EC2; and provisioned and managed VMs in Amazon EC2. Sponsored by CommVault.
BAC307 - The Cold Data Playbook: Building the Ultimate Archive Solution in Amazon Glacier
by Colin Lazier - GM, Amazon Glacier with Amazon Web Services and David Rosen - VP, Solutions Architect with Sony Corporation of America
In this session we will present some of the key features of Amazon Glacier, including security, durability, and price. You will learn best practices for managing your cold data, including ingest, retrieval, and security controls. We will also discuss how to optimize storage, upload, and retrieval costs, help you identify the most applicable workloads, and recommend optimizations based on a few sample use cases.
BAC304 - Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum Efficiency
by Matt Lehwess - Solutions Architect with Amazon Web Services and Ravi Madabhushanam - Solutions Architect with Apps Associates LLC
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with disaster recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your disaster recovery plan and discuss example scenarios showing how to recover from a disaster. We will also include a real-life customer example of a deployment using AWS for high availability and disaster recovery.
BAC302 - Using AWS to Create a Low Cost, Secure Backup Environment for Your On-premises Data
by Curd Zechmeister - Sr. Mgr, Solutions Architecture with Amazon Web Services, Jason Blevins - Director, Systems Engineering with Amtrak - Information Technology, and Antoine Boury - Head of Information Technology and Services, MEA with JWT (J. Walter Thompson, a WPP Company)
In this session, you learn how you can leverage AWS services together with third-party storage appliances and gateways to automate your backup and recovery processes so that they are not only less complex and lightweight, but also easy to manage and maintain. We demonstrate how to manage data flow from on-premises systems to the cloud and how to leverage storage gateways. You also learn best practices for quick implementation, reducing TCO, and automating lifecycle management.
BAC208 - Bursting to the Cloud: Deploying a Hybrid Cloud Storage Solution with AWS
by Matt Yanchyshyn - Principal Solutions Architect with Amazon Web Services, Ron Bianchini - President and CEO with Avere Systems, and Aaron Black - Director of Informatics with Inova Translational Medicine Institute
In this session, you learn how Inova Translational Medicine Institute (ITMI) uses Amazon S3 together with an on-premises cloud gateway from Avere Systems to take advantage of the unlimited capacity scaling in the cloud while lowering the cost of data storage. ITMI describes how they worked together with AWS and Avere Systems to deploy a highly available, secure, and scalable data storage solution for managing its database of 5,000 whole genome sequences. You hear how ITMI's hybrid cloud storage solution works together with their integrated environment and allows them to tie together disparate systems and treat the cloud as an on-premises data center, without the maintenance overhead.
BAC202 - Introducing AWS Solutions for Backup and Archiving
by Brad Carlstedt - Manager, Business Development with Amazon Web Services; Michael Holtby - AWS Tech Lead with News UK
Learn how to use the variety of AWS storage services and features to deploy backup and archiving solutions that are low cost and easy to deploy, manage, and maintain. The session will present reference architectures, best practices, and use cases based on AWS services including Amazon S3, Amazon Glacier, and AWS Storage Gateway. Special topics will include how to move your data securely into the AWS cloud, how to retrieve and restore your data, and how to back up on-premises data to the cloud using AWS Storage Gateway and other third-party storage gateways.
BDT403 - Netflix's Next Generation Big Data Platform
by Eva Tse - Director of Big Data Platform with Netflix
As Netflix expands their services to more countries, devices, and content, they continue to evolve their big data analytics platform to accommodate the increasing needs of product and consumer insights. This year, Netflix reinvented their big data platform: they upgraded to Hadoop 2, transitioned to the Parquet file format, experimented with Pig on Tez for the ETL workload, and adopted Presto as their interactive querying engine. In this session, Netflix discusses their latest architecture, how they built it on the Amazon EMR infrastructure, the contributions they have made to the open source community, as well as some performance numbers for running a big data warehouse with Amazon S3.
BDT402 - Performance Profiling in Production: Analyzing Web Requests at Scale Using Amazon Elastic MapReduce and Storm
by Zach Musgrave - Software Engineer with Yelp, Inc.
Code profiling gives a rich, detailed view of runtime performance. However, it's difficult to achieve in production: for even a small fraction of web requests, huge challenges in scalability, access, and ease of use appear. Despite this, Yelp profiles a nontrivial fraction of its traffic by combining Amazon EC2, Amazon EMR, and Amazon S3. Developers can search, sort, filter, and combine interesting profiles; during a site slowdown or page failure, this allows a fast diagnosis and speedy recovery. Some of our analyses run nightly, while others run in real-time via Storm topologies. This session includes our use cases for code profiling, its benefits, and the implementation of its handlers and analysis flows. We include both performance results and implementation challenges of our MapReduce and Storm jobs, including code overviews. We also touch on issues such as concurrent logging, cross-data center replication, job scheduling, and API definitions.
BDT401 - Big Data Orchestra - Harmony within Data Analysis Tools
by Guy Ernest - Sr. Manager Solutions Architecture with Amazon Web Services
Yes, you can build a data analytics solution with a relational database, but should you? What about scalability? What about flexibility? What about cost? In this session, we demonstrate how to build a real world solution for location-based data analytics, with the combination of Amazon Kinesis, Amazon DynamoDB, Amazon Redshift, Amazon CloudSearch, and Amazon EMR. We discuss how to integrate these services to create a robust solution in terms of security, simplicity, speed, and low cost.
BDT312 - Using the Cloud to Scale from a Database to a Data Platform
by Ryan Horn - Technical Lead, User Data with Twilio
Scaling highly available database infrastructure to 100x, 1000x, and beyond has historically been one of the hardest technical challenges that any successful web business must face. This is quickly changing with fully managed database services such as Amazon DynamoDB and Amazon Redshift, as the scaling efforts that previously required herculean effort are now as simple as an API call. Over the last few years, Twilio has evolved their database infrastructure to a pipeline consisting of Amazon SQS, sharded MySQL, Amazon DynamoDB, Amazon S3, Amazon EMR, and Amazon Redshift. In this session, Twilio covers how they achieved success, specifically: how they replaced their data pipeline deployed on Amazon EC2 with zero downtime to meet their scaling needs; how they adopted Amazon DynamoDB and Amazon Redshift at the same scale as their MySQL infrastructure, at 1/5th the cost and operational overhead; and why they believe adopting managed database services like Amazon DynamoDB is key to accelerating delivery of value to their customers. Sponsored by Twilio.
BDT311 - MegaRun: Behind the 156,000 Core HPC Run on AWS and Experience of On-demand Clusters for Manufacturing Production Workloads
by Jason Stowe - CEO with Cycle Computing; Patrick Saris - Chemist with University of Southern California; David Hinz - Global Director, Cloud, Data Center, Computing Engineering with HGST
Not only did the 156,000+ core run (nicknamed the MegaRun) on Amazon EC2 break industry records for size, scale, and power, but it also delivered real-world results. The University of Southern California ran the high-performance computing job in the cloud to evaluate over 220,000 compounds and build a better organic solar cell. In this session, USC provides an update on the six promising compounds that it has found and is now synthesizing in laboratories for a clean energy project. We discuss the implementation of and lessons learned in running a cluster in eight AWS regions worldwide, with highlights from Cycle Computing's project Jupiter, a low-overhead cloud scheduler and workload manager. This session also looks at how the MegaRun was financially achievable using the Amazon EC2 Spot Instance market, including an in-depth discussion on leveraging Spot Instances, a strategy to deal with the variability of Spot pricing, and a template to avoid compromising workflow integrity, security, or management. After a year of production workloads on AWS, HGST, a Western Digital Company, has zeroed in on understanding how to create on-demand clusters to maximize value on AWS. HGST will outline its successes in addressing changes in operations, culture, and behavior under this new vision of on-demand clusters. In addition, the session will provide insights into leveraging Amazon EC2 Spot Instances to reduce costs and maximize value, while maintaining the flexibility and agility that AWS is known for.
BDT310 - Big Data Architectural Patterns and Best Practices on AWS
by Siva Raghupathy - Principal Solutions Architect with Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
BDT309-JT - Delivering Results with Amazon Redshift, One Petabyte at a Time - Japanese Track
by Samar Sodhi - Manager, Data Engineering with Amazon Web Services; Erik Selberg - Dir, Amazon Data Warehouse with Amazon Web Services
The Amazon Enterprise Data Warehouse team, responsible for data warehousing across all of Amazon's divisions, spent 2014 working with Amazon Redshift on its largest datasets, including web log traffic. The key goals in this project were to provide a viable, enterprise-grade solution that enabled full scans of 2 trillion rows in under an hour at load. Key to success were automation of routine DW tasks that become complicated at scale: backfilling erroneous data, re-calculating statistics, re-sorting daily additions, and so forth. In this session, we discuss the scale and performance of a 100-node 1PB Amazon Redshift cluster, as well as describing some of the technical aspects and best practices of running 100-node clusters in an enterprise environment. This is a repeat session that will be translated simultaneously into Japanese.
BDT309 - Delivering Results with Amazon Redshift, One Petabyte at a Time
by Samar Sodhi - Manager, Data Engineering with Amazon Web Services; Erik Selberg - Dir, Amazon Data Warehouse with Amazon Web Services
The Amazon Enterprise Data Warehouse team, responsible for data warehousing across all of Amazon's divisions, spent 2014 working with Amazon Redshift on its largest datasets, including web log traffic. The key goals in this project were to provide a viable, enterprise-grade solution that enabled full scans of 2 trillion rows in under an hour at load. Key to success were automation of routine DW tasks that become complicated at scale: backfilling erroneous data, re-calculating statistics, re-sorting daily additions, and so forth. In this session, we discuss the scale and performance of a 100-node 1PB Amazon Redshift cluster, as well as describing some of the technical aspects and best practices of running 100-node clusters in an enterprise environment.
BDT308-JT - Using Amazon Elastic MapReduce as Your Scalable Data Warehouse - Japanese Track
by Steve McPherson - Senior Manager, Elastic MapReduce with Amazon Web Services
In this presentation, we will demonstrate how to use Amazon Elastic MapReduce as your scalable data warehouse. Amazon EMR supports clusters with thousands of nodes and is used to access petabyte-scale data warehouses. Amazon EMR is not only fast, but it is also easy to use for rapid development and ad hoc analysis. We will show you how to access large-scale data warehouses with emerging tools such as Hue and Hive, low-latency SQL applications like Presto, and alternative execution engines like Apache Spark. We will also show you how these tools integrate directly with other AWS big data services such as Amazon S3, Amazon DynamoDB, and Amazon Kinesis. This is a repeat session that will be translated simultaneously into Japanese.
BDT308 - Using Amazon Elastic MapReduce as Your Scalable Data Warehouse
by Steve McPherson - Senior Manager, Elastic MapReduce with Amazon Web Services
In this presentation, we will demonstrate how to use Amazon Elastic MapReduce as your scalable data warehouse. Amazon EMR supports clusters with thousands of nodes and is used to access petabyte-scale data warehouses. Amazon EMR is not only fast, but it is also easy to use for rapid development and ad hoc analysis. We will show you how to access large-scale data warehouses with emerging tools such as Hue and Hive, low-latency SQL applications like Presto, and alternative execution engines like Apache Spark. We will also show you how these tools integrate directly with other AWS big data services such as Amazon S3, Amazon DynamoDB, and Amazon Kinesis.
BDT307 - Running NoSQL on Amazon EC2
by Matt Yanchyshyn - Principal Solutions Architect with Amazon Web Services; Rahul Bhartia - Ecosystem Solution Architect with Amazon Web Services
Deploying self-managed NoSQL databases on Amazon Web Services (AWS) is more straightforward than you would think. This session focuses on three popular self-managed NoSQL systems on Amazon EC2: MongoDB, Cassandra, and Couchbase. We start with an overview of each of these popular NoSQL databases, discuss their origins and characteristics, and demonstrate the use of the AWS ecosystem to deploy these NoSQL databases quickly. Later in the session, we dive deep on use cases, design patterns, and discuss creating highly-available and high-performance architectures with careful consideration for failure and recovery. Whether you're a NoSQL developer, architect, or administrator, join us for a comprehensive session on looking at three different NoSQL systems from a uniform perspective.
BDT306 - Mission-Critical Stream Processing with Amazon EMR and Amazon Kinesis
by Ricardo DeMatos - Solutions Architect with Amazon Web Services; Derek Chiles - Manager, Solutions Architecture with Amazon Web Services; Yekesa Kosuru - VP, Engineering with DataXu
Organizations processing mission-critical, high-volume data must be able to achieve high levels of throughput and durability in data processing workflows. In this session, we will learn how DataXu is using Amazon Kinesis, Amazon S3, and Amazon EMR for its patented approach to programmatic marketing. Every second, the DataXu Marketing Cloud processes over 1 million ad requests and makes more than 40 billion decisions to select and bid on ad impressions that are most likely to convert. In addition to addressing the scalability and availability of the platform, we will explore Amazon Kinesis producer and consumer applications that support high levels of scalability and durability in mission-critical record processing.
BDT305 - Lessons Learned and Best Practices for Running Hadoop on AWS
by Rahul Bhartia - Ecosystem Solution Architect with Amazon Web Services; Amandeep Khurana - Principal Solutions Architect with Cloudera Inc
Enterprises are starting to deploy large scale Hadoop clusters to extract value out of the data that they are generating. These clusters often span hundreds of nodes. To speed up the time to value, a lot of the newer deployments are happening in AWS, moving from the traditional on-premises, bare-metal world. Cloudera supports just such deployments. In this session, Cloudera shares the lessons learned and best practices for deploying multi-tenant Hadoop clusters in AWS. They will cover what reference deployments look like, what services are relevant for Hadoop deployments, network configurations, instance types, backup and disaster recovery considerations, and security considerations. They will also talk about what works well, what doesn't, and what has to be done going forward to improve the operability of Hadoop on AWS.
BDT303 - Construct Your ETL Pipeline with AWS Data Pipeline, Amazon EMR, and Amazon Redshift
by Roy Ben-Alta - Big Data Analytics Practice Lead with Amazon Web Services; P. Thomas Barthelemy - Software Engineer with Coursera
An advantage of leveraging Amazon Web Services for your data processing and warehousing use cases is the number of services available to construct complex, automated architectures easily. Using AWS Data Pipeline, Amazon EMR, and Amazon Redshift, we show you how to build a fault-tolerant, highly available, and highly scalable ETL pipeline and data warehouse. Coursera will show how they built their pipeline, and share best practices from their architecture.
BDT302 - Big Data Beyond Hadoop: Running Mahout, Giraph, and R on Amazon EMR and Analyzing Results with Amazon Redshift
by Nikolay Bakaltchev - Senior Solutions Architect with Amazon Web Services; Marco Merens - Acting Chief, Integrated Aviation Analysis with ICAO
We will explore the strengths and limitations of Hadoop for analyzing large data sets and review the growing ecosystem of tools for augmenting, extending, or replacing Hadoop MapReduce. We will introduce the Amazon Elastic MapReduce (EMR) platform as the big data foundation for Hadoop and beyond by providing specific examples of running machine learning (Mahout), graph analytics (Giraph), and statistical analysis (R) on EMR. We will also discuss big data analytics and visualization of results with Amazon Redshift and third-party business intelligence tools, as well as a typical end-to-end big data workflow on AWS. We will conclude with real-world examples from ICAO of big data analytics for aviation safety data on AWS. The integrated Safety Trend Analysis and Reporting System (iSTARS) is a web-based system linking a collection of safety datasets and related web applications to perform online safety and risk analysis. It uses AWS EC2, S3, EMR, and related partner tools for continuous data aggregation and filtering.
BDT209 - Intel's Healthcare Cloud Solution Using Wearables for Parkinson's Disease Research
by Moty Fania - Principal Architect with Intel
In this session, learn how the Intel team of software engineers and data scientists, in collaboration with the Michael J. Fox Foundation, built a big data analytics platform using Hadoop and other IoT technologies. The solution leverages wearable sensors and a smartphone application to monitor PD patients' motor activities 24/7. The platform collects and processes large streams of data, and enables analytics services such as activity recognition and various PD-related measurements for researchers. These machine learning algorithms are used to detect patterns in the data that can help researchers understand the progression of the disease and develop effective treatments. You leave with a comprehensive view of the tools and platforms from Intel that you can use in building your own applications on AWS. In addition, there will be a deeper dive to explain how this platform enables near real-time analytics as part of the ingestion process. Parkinson's Disease, a neuromuscular disease that causes gradually worsening symptoms such as tremors, difficulty in movement, and sleep loss, affects over 5 million people worldwide. Because the symptoms vary from individual to individual, research into the disease is hampered by the lack of objective data. As is typical of many healthcare applications, the collection, storage, and analysis of data is complex, expensive, and time-consuming. Intel is tackling this challenge by building a solution that uses wearable devices to collect data from patients anonymously and store it securely. Sponsored by Intel.
BDT208 - Finding High Performance in the Cloud for HPC
by David Pellerin - Principal Business Development Manager, EC2 with Amazon Web Services; Ray Milhem - Vice President with ANSYS, Inc.; Ayumi Tada - Infrastructure Technology Department with Honda Motor Co., Ltd.; Nicole Hemsoth - Editor in Chief, Cloud Insights and HPCWire with IDG; Debra Goldfarb - Chief Analyst, Data Center Division and Senior Director of Market Intelligence with Intel
Hear how high performance for HPC workloads can be found in the cloud. In this panel session, Nicole Hemsoth, Editor in Chief of Cloud Insights and HPCWire, hosts a lively discussion with experts who address the reality that big data and high performance computing performance cannot be sacrificed. They explore how HPC users have been finding the AWS cloud a cost-effective way to get large-scale, performance-sensitive jobs done in a fraction of the time and without the heavy lifting required to maintain on-premises systems. Users are always looking for more (and more interesting) ways to implement their HPC jobs in the cloud. From working with GPU-powered instances to instances with more memory or capability, panelists explore what performance really means in the cloud, how on-site, traditional supercomputing compares to the cloud, and what the future may be for HPC end users with cloud services. Sponsored by Intel.
BDT207 - Use Streaming Analytics to Exploit Perishable Insights
by Mike Gualtieri - Principal Analyst with Forrester Research
Streaming analytics is about knowing and acting on what's happening in your business and with your customers right this second. Forrester calls these perishable insights because they occur at a moment's notice and you must act on them fast. The high velocity, whitewater flow of data from innumerable real-time data sources such as market data, internet of things, mobile, sensors, clickstream, and even transactions remain largely un-navigated by most firms. The opportunity to leverage streaming analytics has never been greater. In this session, Forrester analyst Mike Gualtieri explains the opportunity, use cases, and how to use cloud-based streaming solutions in your application architecture.
BDT206 - See How Amazon Redshift is Powering Business Intelligence in the Enterprise
by Rahul Pathak - Principal Product Manager with Amazon Web Services; Jason Timmes - Associate VP of Software Development with Nasdaq OMX; Kevin Diamond - Chief Technology Officer with NordstromRack.com | HauteLook
Take a look into how NordstromRack.com | HauteLook and Nasdaq OMX are using Amazon Redshift for data warehousing and supporting business intelligence workloads one year after they made the move to Amazon Redshift. We will cover why HauteLook chose Redshift, how they built the architecture, discuss what data is being stored and accessed, and overall, how that data is powering the HauteLook business. We will also discuss how Nasdaq migrated from an on-premises data warehouse to Amazon Redshift, and how they've been able to take advantage of Redshift's array of security features such as hardware security modules (HSM), encryption, and audit logging.
BDT205 - Your First Big Data Application on AWS
by Matt Yanchyshyn - Principal Solutions Architect with Amazon Web Services
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.
BDT204 - Rendering a Seamless Satellite Map of the World with AWS and NASA Data
by Will White - Engineering with Mapbox; Eric Gundersen - CEO with Mapbox
NASA imaging satellites deliver gigabytes of images to Earth every day. Mapbox uses AWS to process that data in real time and build the most complete, seamless satellite map of the world. Learn how Mapbox uses Amazon S3 and Amazon SQS to stream data from NASA into clusters of EC2 instances running a clever algorithm that stitches images together in parallel. This session includes an in-depth discussion of high-volume storage with Amazon S3, cost-efficient data processing with Amazon EC2 Spot Instances, reliable job orchestration with Amazon SQS, and demand resilience with Auto Scaling.
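The S3 + SQS orchestration described above is at heart a work-queue pattern: scenes land in storage, a message per scene goes on a queue, and a fleet of workers drains the queue in parallel. The sketch below models that flow with Python's standard library only (a queue.Queue standing in for SQS, a dict standing in for S3); the function names, bucket keys, and "stitching" step are illustrative, not Mapbox's actual code.

```python
import queue

# A dict stands in for an S3 bucket of raw satellite scenes.
FAKE_S3 = {
    "raw/scene-001.tif": b"<image bytes>",
    "raw/scene-002.tif": b"<image bytes>",
}

def enqueue_new_scenes(q, bucket):
    """Producer: queue one message per raw scene, like an S3 upload event fanned out to SQS."""
    for key in sorted(bucket):
        q.put({"bucket": "satellite-raw", "key": key})

def process_scene(msg, bucket):
    """Worker: fetch the scene and return the key of the 'stitched' output."""
    data = bucket[msg["key"]]          # real pipeline: an S3 GetObject call
    assert data                        # reprojection/stitching would happen here
    return msg["key"].replace("raw/", "stitched/")

def drain(q, bucket):
    """Run a worker loop until the queue is empty, collecting output keys."""
    done = []
    while not q.empty():
        msg = q.get()                  # real pipeline: receive a message from SQS
        done.append(process_scene(msg, bucket))
        q.task_done()                  # real pipeline: delete the message after success
    return done

work = queue.Queue()
enqueue_new_scenes(work, FAKE_S3)
results = drain(work, FAKE_S3)
print(results)  # ['stitched/scene-001.tif', 'stitched/scene-002.tif']
```

Because each message is independent, scaling out is just running `drain` on more instances; SQS's visibility timeout (not modeled here) is what lets a crashed worker's message be retried elsewhere.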
BDT203 - From Zero to NoSQL Hero: Amazon DynamoDB Tutorial
by David Yanacek - Sr. Software Dev Engineer with Amazon Web Services; Jason Lambert - Senior Software Engineer with Here
Got data? Interested in learning about NoSQL? In this session, we take you from not knowing anything about Amazon DynamoDB to being able to build an advanced application on top of DynamoDB. We start with an overview of the service and its fundamental concepts, then dive right into a hands-on, follow-along tutorial in which you create your own table, make queries, add secondary indexes to existing tables, query against the secondary indexes, modify your indexes, and detect changes to your data in DynamoDB to build all kinds of analytics and complex event processing apps. You can walk in a DynamoDB novice, but rest assured, you will walk out a NoSQL expert ready to tackle large distributed systems problems with DynamoDB.
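The first two tutorial steps, creating a table and querying it, come down to a pair of request shapes you hand to the DynamoDB CreateTable and Query APIs. The dicts below show those shapes; the table name, attributes, and user value are made-up examples, not anything from the session.

```python
# Parameters for creating a table with a partition (HASH) key and a
# sort (RANGE) key, as passed to a DynamoDB CreateTable call.
create_table_params = {
    "TableName": "GameScores",
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},      # partition key
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}

# Parameters for querying all items belonging to one user, as passed
# to a DynamoDB Query call. Queries always pin down the partition key.
query_params = {
    "TableName": "GameScores",
    "KeyConditionExpression": "UserId = :uid",
    "ExpressionAttributeValues": {":uid": {"S": "alice"}},
}

# Pull the partition key name(s) back out of the schema definition.
hash_keys = [k["AttributeName"] for k in create_table_params["KeySchema"]
             if k["KeyType"] == "HASH"]
print(hash_keys)  # ['UserId']
```

A secondary index (the next tutorial step) is declared the same way: a `GlobalSecondaryIndexes` entry in the CreateTable parameters with its own KeySchema, after which Query calls can name it via `IndexName`.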
BDT202 - HPC Now Means 'High Personal Computing'
by Ricardo Geh - Enterprise Solutions Architect with Amazon Web Services; Sergio Mafra - IT Innovation Leader with ONS - Operador Nacional do Sistema Eletrico
Since 2011, ONS.org.br (responsible for planning and operating the Brazilian Electric Sector) has been using AWS to run daily simulations using complex mathematical models. The use of the MIT StarCluster toolkit makes running HPC on AWS much less complex and lets ONS provision a high-performance cluster in less than 5 minutes. Because the elapsed time of a big cluster run depends on the user, ONS decided to develop an HPC portal where its engineers can interface with AWS and MIT StarCluster without knowing a line of code or having to use the command terminal. It is just a simple turn-on/turn-off portal. The cluster now gets personal, and every engineer runs the models using HPC on AWS as if they were using a PC.
BDT201 - Big Data and HPC State of the Union
by Ben Butler - Senior Solutions Marketing Manager, Big Data and HPC with Amazon Web Services; Ayumi Tada - Infrastructure Technology Department with Honda Motor Co., Ltd.
Leveraging big data and high performance computing (HPC) solutions enables your organization to make smarter and faster decisions that influence strategy, increase productivity, and ultimately grow your business. We kick off the Big Data & HPC track with the latest advancements in data analytics, databases, storage, and HPC at AWS. Hear customer success stories and discover how to put data to work in your own organization.
BIZ401 - Kellogg Company Runs SAP in a Hybrid Environment
by Steven Jones - SAP Solutions Architect with Amazon Web Services; Peter Mauel - Global Alliance Leader with Amazon Web Services; Wee Sim - Senior IT Architect with Kellogg Company
Many enterprises today are moving their SAP workloads to the cloud in order to achieve business agility. In this session, learn strategies and recommended practices for architecting and implementing a phased (“hybrid”) approach for SAP workloads, while optimizing for availability and performance. Kellogg Company will also walk through the business justification and how they leveraged a hybrid approach when implementing SAP Business Warehouse (BW) on SAP HANA on the AWS cloud.
BIZ307 - Yamaha Corporation: Migrating Business Applications to AWS
by Vimal Thomas - Vice-President, Information Technology Division with Yamaha Corporation of America; Kris Bliesner - CTO & Co-Founder with 2nd Watch
When Yamaha Corporation needed to reduce infrastructure cost, AWS was the solution. In this session, learn how Yamaha and AWS partner 2nd Watch migrated mission-critical applications such as Microsoft Exchange and SharePoint, configured Availability Zones for data replication, configured disaster recovery for Oracle E-Business Suite, and designed file system backups. This session will get you up to speed on how AWS supports mission-critical business applications.
BIZ306 - Migrating Trimble Sketchup 3D Warehouse to AWS
by Clay Parker - Cloud Services Manager with Trimble Navigation
Trimble was tasked with moving a newly acquired application, Sketchup 3D Warehouse, to AWS. This session will discuss how, using Spot Instances, Trimble rendered over 2.5 million images on AWS at a fraction of the cost of physical or virtual alternatives. Trimble will discuss the AWS services used (Amazon EC2, Amazon CloudFront, and others) and the flexibility Trimble achieved by using these services, such as how CloudFront allowed Trimble to operate out of a single region, greatly reducing the complexity of deployment across the world. Finally, Trimble will discuss why AWS was the right choice for running Sketchup 3D Warehouse.
BIZ305 - Case Study: Migrating Oracle E-Business Suite to AWS
by Mike McGrath - VP, IT with American Commercial Lines; Thiru Sadagopan - VP, Cloud Services with Apps Associates LLC
With the maturity and breadth of cloud solutions, more enterprises are moving mission-critical workloads to the cloud. American Commercial Lines (ACL) recently migrated their Oracle ERP to AWS. ERP solutions such as Oracle E-Business Suite require specific knowledge in mapping AWS infrastructure to the specific configurations and needs of running these workloads. In this session, Apps Associates & ACL walk through the considerations for running Oracle E-Business Suite on AWS, including deployment architectures, concurrent processing, load-balanced forms and web services, varying database transactional workloads, and performance requirements, as well as security and monitoring aspects. ACL shares their experiences and business drivers in making this transition to AWS.
BIZ303 - Active Directory in the AWS Cloud
by Wayne Saxe - Ecosystem Solutions Architect with Amazon Web Services
Most enterprises have come to rely upon Active Directory for authentication and authorization for users, workstations, servers, and business applications. Among your first considerations when planning a major implementation initiative will be how best to architect Active Directory and take advantage of the benefits of the AWS cloud. This session will focus on best-practice implementation patterns, including AD backup and recovery in AWS, Region and Availability Zone design considerations for AD replication, and security. To finish, we selected the three most common design patterns to discuss: Single Forest, Federated, and Disconnected. We will talk about when each is appropriate to use, how it is designed, and the practical implications of that choice. While each AD implementation is unique, these three patterns represent the fundamental building blocks upon which you will design your own directory. You will leave the session knowing how best to architect AWS to support the Active Directory your enterprise relies upon.
BIZ301 - Getting Started: Running SAP on AWS
by Frank Stienhans - Principal, AWS Professional Services with Amazon Web Services; Bill Timm - SAP Solutions Architect with Amazon Web Services
AWS is certified to run key SAP enterprise solutions, such as SAP Business Suite and SAP HANA, in production on the AWS cloud. Learn more about the recommended best practices for systems migration, including how to prepare your environment for minimal downtime; security design; database configuration including EC2 instance configuration; and backup and restore for SAP. We will also discuss high availability (HA) and disaster recovery (DR) scenarios for SAP. This session will highlight several customer success stories, and provide details on where to find available tools and trials to get started.
DEV309 - From Asgard to Zuul: How Netflix's Proven Open Source Tools Can Help Accelerate and Scale Your Services
by Ruslan Meshenberg - Director, Cloud Platform Engineering with Netflix
Learn how you can leverage the many Netflix Open Source tools to help grow your services to web scale, and make them robust and resilient. We cover a variety of the OSS components, from operational tools like Asgard and the Simian Army to core services and libraries like Zuul, Eureka, Archaius, and Hystrix, plus a variety of security and big data tools. We walk through a sample application to illustrate how the many components fit together to build a cohesive solution.
DEV308 - Automating Your Software Delivery Pipeline
by Josh Kalderimis - CEO with Travis CI; Corey Donohoe - Hacker with GitHub
The challenge facing developers today is to reduce the time between writing code and getting it into production, all while maintaining quality. What's needed is a workflow built upon highly integrated and automated tools so that developers can focus on building new features. This session demonstrates plugging together an end-to-end release workflow, including code review, acceptance testing, branch deployments, and chat ops, all using GitHub and Travis CI.
DEV307 - Introduction to Version 3 of the AWS SDK for Python (Boto)
by Daniel Taylor - Software Development Engineer with Amazon Web Services
In this session, we introduce Boto 3, the next major version of the AWS SDK for Python. You will learn about the new features in the SDK, such as the high-level resource APIs that simplify working with AWS collections and objects, and the eventing model that enables customizing your calls to AWS services. We use a sample application to demonstrate these features, and show how to integrate them with your existing projects.
DEV306 - Building Cross-Platform Applications Using the AWS SDK for JavaScript
by Aditya Manohar - Software Development Engineer with Amazon Web Services
JavaScript is the ubiquitous runtime for browser code, and the popularity of Node.js as a server-side platform continues to grow. The AWS SDKs for Node.js and JavaScript in the Browser enable you to call AWS services from either platform. In this talk, we demonstrate the portability of the SDK by building a Node.js web app and then porting the code to run as a browser extension. In the process, you'll learn about a number of the productivity features included in these SDKs.
DEV305 - Building Apps with the AWS SDK for PHP
by Jeremy Lindblom - Software Development Engineer with Amazon Web Services
For both new and experienced users of the AWS SDK for PHP, we highlight features of the SDK as we work through building a simple, scalable PHP application. Attendees will learn about core features of the SDK including service clients, iterators, and waiters. We will also introduce new features in the upcoming Version 3 of the SDK, including asynchronous requests, paginators, and the new JMESPath result querying syntax.
DEV304 - What's New in the AWS SDK for .NET
by Norm Johanson - Software Development Engineer with Amazon Web Services; Steve Roberts - Software Development Engineer with Amazon Web Services
AWS provides the tools that Windows developers have come to expect. In this session, you learn about the easy-to-use abstractions included in the AWS SDK for .NET. We demonstrate how the AWS Toolkit for Visual Studio helps to streamline your iterative dev-test cycle. You also see how the AWS Tools for Windows PowerShell enables you to create powerful automation scripts.
DEV303 - Touring Version 2 of the AWS SDK for Ruby
by Alex Wood - Software Development Engineer with Amazon Web Services
Version 2 of the AWS SDK for Ruby adds a number of new features to help reduce the amount of code that you need to write. We will discuss and walk through code samples for new features such as the Resource APIs, paginators, waiters, and more. Attendees will leave this session with a firm grasp on Version 2 of the AWS SDK for Ruby.
DEV302 - Tips, Tricks, and Best Practices for the AWS SDK for Java
by David Murray - Software Development Engineer with Amazon Web Services
The AWS SDK for Java contains many powerful tools for working with AWS, some of which you might not know about. In this session, we take a tour through the different layers of the SDK with a focus on the newest features of the SDK. We cover a wide variety of tips and best practices to show you how to take advantage of the SDK to improve your AWS development productivity. Learn about client-side data encryption, high-level APIs, tips for securely handling your credentials, and the newly released AWS Resource APIs.
DEV301 - Advanced Usage of the AWS CLI
by James Saryerwinnie - Software Development Engineer with Amazon Web Services
The AWS CLI provides an easy-to-use command line interface to AWS and allows you to create powerful automation scripts. In this session, you learn advanced techniques that open up new scenarios for using the CLI. We demonstrate how to filter and transform service responses, how to chain and script commands, and how to write custom plugins.
EDU203 - Instructing on the Cloud: Using AWS to Aid Professors and Teach Students
by Ken Eisner - Director, Global Education Strategy with Amazon Web Services; Majd Sakr - Professor of Computer Science with Carnegie Mellon University
In the past, academic institutions and departments (primarily those focused on computer science, information systems, or other technology instruction) have made significant use of on-premises servers for labs, projects, and research efforts. Many of these institutions are now migrating from their on-premises environments to the cloud. They are providing computational support for coursework and back-end support for capstone projects, all while enhancing their cloud curriculum to create the next generation of IT innovators. In this session, learn how faculty can implement multi-user environments in the classroom; access AWS assets such as AWS credits, training resources, content, and labs; and collaborate on content creation and shared Amazon Machine Images (AMIs). Hear about best practices and case studies for implementing AWS in the classroom and in academic research. This session is an opportunity for representatives from academia to learn how they can leverage cloud computing to aid them in coursework development and research.
EDU202 - Enterprise Cloud Adoption Strategies in Higher Education
by Sharif Nijim - Enterprise Application Architect, Office of Information Technologies with University of Notre Dame; Robert Winding - Lead Architect Professional with University of Notre Dame; Ryan Frazier - Director with Harvard University IT
We have reached a tipping point in enterprise cloud adoption in higher education institutions. Many large research universities have now taken on various cloud projects, touching almost every aspect of their enterprise. One excellent example of this is Harvard University. Over the course of the past 18 months, Harvard University IT (HUIT) has been investigating, developing, and implementing an enterprise-scale cloud adoption program. This session presents HUIT's efforts to date, including approaches to vision, strategy, culture, education, staffing, and technology. The session also includes examples from other major universities. You'll receive practical advice about how to begin the adoption journey, and you'll learn about frameworks that can help you make decisions in the context of your own institutional environment.
EDU201 - How Technology is Transforming Education
by Anthony Abate - COO & CFO with Echo360, Inc.; John Stuart - Director, DevOps with Chegg Inc.
The implementation of highly scalable, easy-to-deploy technology is radically transforming educational models and student engagement. Many companies have used cloud computing to innovate in ways that have significantly improved the student experience. AWS has been an integral part of the strategies and solutions of these companies, from inception to large-scale growth to market leadership. In this session, Echo 360 and Chegg show how they use AWS services, such as Amazon EC2, Amazon EBS, and Auto Scaling, to dynamically scale and allocate resources based on the school year and cyclical nature of their businesses. They discuss globalization, privacy, PCI compliance, and cost efficiency. Learn how you can apply their insights to your own educational and business models.
ENT401-JT - Hybrid Infrastructure Integration - Japanese Track
by Miha Kralj - Principal Consultant, AWS Professional Services with Amazon Web Services; Paul Nau - Senior Consultant, AWS Professional Services with Amazon Web Services
Hybrid Infrastructure Integration is an approach to connect on-premises IT resources with AWS and bridge processes, services, and technologies used in common enterprise customer environments. This session addresses connectivity patterns, security controls, account governance, and operations monitoring approaches successfully implemented in enterprise engagements. Infrastructure architects and IT professionals can get an overview of various integration types, approaches, methodologies, and common service patterns, helping them to better understand and overcome typical challenges in hybrid enterprise environments. This is a repeat session that will be translated simultaneously into Japanese.
ENT401 - Hybrid Infrastructure Integration
by Paul Nau - Senior Consultant, AWS Professional Services with Amazon Web Services; Miha Kralj - Principal Consultant, AWS Professional Services with Amazon Web Services
Hybrid Infrastructure Integration is an approach to connect on-premises IT resources with AWS and bridge processes, services, and technologies used in common enterprise customer environments. This session addresses connectivity patterns, security controls, account governance, and operations monitoring approaches successfully implemented in enterprise engagements. Infrastructure architects and IT professionals can get an overview of various integration types, approaches, methodologies, and common service patterns, helping them to better understand and overcome typical challenges in hybrid enterprise environments.
ENT312 - Should You Build or Buy Cloud Infrastructure and Platforms?
by Lydia Leong - VP Distinguished Analyst with Gartner
The public cloud IaaS and PaaS markets are moving at tremendous speed, delivering innovative, differentiated services at rapidly decreasing costs. Yet many IT organizations nevertheless believe that they would prefer to build a private cloud, or have one custom-built for them. Furthermore, the desires of IT Operations and Application Development often clash. Buyers must not only choose a solution that meets their current needs, but also figure out how to meet their future needs. We'll discuss how to determine a strategy and source the capabilities you need.
ENT311 - Public IaaS Provider Bake-off: AWS vs Azure
by Kyle Hilgendorf - Research Director with Gartner
Public cloud IaaS services continue to be the hottest segment of the cloud market, with Amazon Web Services and Microsoft Azure gaining all the attention. Almost all customers are currently evaluating, selecting, or deploying major IaaS services. In this session, Gartner lays out recommended evaluation criteria for IaaS providers and objectively evaluates how AWS and Azure stack up against one another. The following key questions will be answered in this session: What are the recommended evaluation criteria for IaaS providers? How do AWS and Azure compare to one another? What does the future hold for the public IaaS provider market?
ENT308 - Best Practices for Implementing Hybrid Architecture Solutions
by John Landy - CTO with Datapipe; Gil Llanos - Solution Architect with Datapipe; Ovidio Borrero - Solution Architect with Datapipe
In this session, Datapipe's Chief Technology Officer, John Landy, will lead a conversation with Datapipe Solution Architects about the steps taken to architect and manage an end-to-end hybrid infrastructure. This session will cover real-world hybrid use cases, including migration, disaster recovery, governance, compliance, and redundancy with multi-zone, multi-region deployments, through discussion of three common challenges organizations face when moving to the cloud: architecting a secure and compliant hybrid solution; staging migrations (getting from point A to point B to point AB); and ongoing management and optimization. Sponsored by Datapipe
ENT307 - AWS Direct Connect Solutions and Network Automation
by Brooke Mouland - Director, Partner Solutions Architecture with Level 3 Communications; Brian Hoekelman - Vice President of Business and Cloud Ecosystem Development with Level 3 Communications
As an AWS Direct Connect partner, Level 3 Communications delivers the ability to establish rapid, flexible and private connectivity from your on-premises environment to AWS for increased control and performance. This session covers enterprise use cases related to disaster recovery and migration from on-premises environments to the cloud. The session also addresses best practices and considerations for designing your architecture to include multiple virtual private clouds and global deployments with AWS Direct Connect. Sponsored by Level 3 Communications.
ENT306 - Application Portfolio Migration
by Miha Kralj - Principal Consultant, AWS Professional Services with Amazon Web Services; Paul Nau - Senior Consultant, AWS Professional Services with Amazon Web Services; Magesh Chandramouli - Distinguished Architect with Expedia; Aman Bhutani - SVP, Expedia Worldwide Engineering with Expedia
Migrating large fleets of legacy applications to AWS cloud infrastructure requires careful planning, since each phase needs to balance risk tolerance against the speed of migration. Through participation in many large-scale migration engagements with customers, AWS Professional Services has developed a set of successful best practices, tools, and techniques that help migration factories optimize speed of delivery and success rate. In this session, we cover the complete lifecycle of an application portfolio migration with special emphasis on how to organize and conduct the assessment and how to identify elements that can benefit from cloud architecture.
ENT305 - Develop an Enterprise-wide Cloud Adoption Strategy
by Miha Kralj - Principal Consultant, AWS Professional Services with Amazon Web Services; Blake Chism - Senior Consultant, AWS Professional Services with Amazon Web Services
Taking a "cloud first" approach requires different thinking than you probably applied to your initial few workloads in the cloud. You'll be diving into the deep end of hybrid environments, and that means taking a broad view of your IT strategy, architecture, and organizational design. Through our experience in helping enterprises navigate this change, AWS has developed the Cloud Adoption Framework (CAF) to assist with planning, creating, managing, and supporting the shift. In this session, we cover how the CAF offers practical guidance and comprehensive guidelines to enterprise organizations, particularly around roles, governance, and efficiency.
ENT304 - Governed, Trusted, and Rogue: The Good, the Bad, and the Ugly Inside the Enterprise
by Mike Davis - Cloud Architect with SAS
Most enterprises struggle with the delicate balance of enabling agility and innovation while ensuring proper compliance and corporate governance. In this session, we share lessons learned in identifying, consolidating, and governing AWS accounts across an enterprise while still allowing autonomy and innovation. We walk through the different ways enterprises manage their AWS accounts (governed, trusted, and rogue), the lessons learned in transitioning account types, and the benefits of each. Additionally, we share best practices for optimizing and controlling your AWS costs, managing security and user roles, and improving overall program management.
ENT303-JT - Getting Started with AWS for VMware Professionals - Japanese Track
by Derek Lyon - Principal Product Manager, Amazon EC2 with Amazon Web Services
This session helps you deploy and manage your first AWS resources using a combination of AWS and VMware tools. In this session, you get an overview of the similarities and differences between AWS and VMware, common tools that you can use to leverage your existing VMware experience when getting started with AWS, and a walkthrough of how you can create and manage your first AWS resources. This session also provides several tips and tricks from AWS customers on what to focus on and what to avoid when getting started managing a hybrid environment. We conclude by showcasing some innovative approaches that we have seen for building hybrid AWS/VMware architectures. This is a repeat session that will be translated simultaneously into Japanese.
ENT303 - Getting Started with AWS for VMware Professionals
by Derek Lyon - Principal Product Manager, Amazon EC2 with Amazon Web Services
This session helps you deploy and manage your first AWS resources using a combination of AWS and VMware tools. In this session, you get an overview of the similarities and differences between AWS and VMware, common tools that you can use to leverage your existing VMware experience when getting started with AWS, and a walkthrough of how you can create and manage your first AWS resources. This session also provides several tips and tricks from AWS customers on what to focus on and what to avoid when getting started managing a hybrid environment. We conclude by showcasing some innovative approaches that we have seen for building hybrid AWS/VMware architectures.
ENT302 - Cost Optimization on AWS
by Tom Johnston - Business Development Manager, Cloud Economics with Amazon Web Services
This session is a deep dive into techniques used by successful customers who optimized their use of AWS. Learn tricks and hear tips you can implement right away to reduce waste, choose the most efficient instance, and fine-tune your spending, often with improved performance and a better end-customer experience. We showcase innovative approaches and demonstrate easily applicable methods to save you time and money with Amazon EC2, Amazon S3, and a host of other services.
ENT301 - Understanding Total Cost of Ownership on AWS
by Marc Johnson - Business Development Manager - Cloud Economics & TCO with Amazon Web Services; Rohit Rahi - Lead, AWS Competitive Strategy with Amazon Web Services; Todd Curry - Expert Principal, Big Data Architecture & Advanced Analytics with BCG
With AWS, you can reduce capital costs, lower your overall bill, and match your expense to your usage. This session describes how to calculate the total cost of ownership (TCO) for deploying solutions on AWS vs. on-premises or at a colocation facility, as well as how to address common pitfalls in building a TCO analysis. The session presents and models customer examples.
ENT222 - Reduce Business Cost and Risk with Disaster Recovery for AWS
by Gil Haberman - Group Product Manager with Riverbed
Given the distributed nature of today's workforce, many IT organizations must support branch offices and remote sites. These multiple sites create islands of infrastructure that are necessary to meet local performance and reliability needs, but are costly to manage and increase the risks associated with distributed data. Consolidation is key to reducing costs and eliminating risks, but how do customers leverage the power of AWS as part of this consolidation? Riverbed SteelFusion is a converged infrastructure solution, encompassing server, projected storage, networking, and WAN optimization. When combined with AWS Storage Gateway, SteelFusion allows customers to connect their on-premises infrastructure to AWS. Session attendees will learn how to leverage WAN Optimization and Projected Storage technologies as part of their IT strategy to consolidate and provide disaster recovery for branch offices and remote sites. Sponsored by Riverbed.
ENT221 - Transforming Government through Technology
by
For governments, the cloud offers not only cost savings and agility, but the opportunity to develop breakthroughs in research, accelerate economic development and innovation, and enable "the always-up, always-on" infrastructure necessary to support critical missions. National and local governments are coping with the dual challenges of constrained budgets and human resources along with strict information security regimes. Many agencies and organizations are attracted to web services rather than on-premises hardware, both for the cost benefits and for the agility they offer. This session provides an overview of use cases ranging from offloading simple websites to running parallel test environments to migrating major enterprise production applications to the cloud, all of them balancing security and compliance with economy and access. Examples from FINRA, UCAS, the government of Singapore, and the US Department of Health and Human Services highlight the significant transformative impact of cloud architectures including Amazon EC2, Amazon RDS, Amazon CloudFront, and Amazon DynamoDB. Learn the practical strategies being deployed by governments worldwide to break down innovation barriers and tackle mission-critical operations with the cloud.
ENT220 - How AWS Empowered Billions in Divestments by Migrating Entire Enterprise IT Suites to the Cloud
by Ira Bell - Co-founder & COO with Nimbo; James McDonald - Cloud Architect with Hess Corporation
In 2013, Hess Corporation, a leading global energy company, announced it was exiting its downstream businesses to focus on a higher-growth portfolio of Exploration and Production (E&P) assets. With obligations to deliver functioning and operational IT infrastructure to the buyers and little time for redesign, Hess leveraged AWS to establish a repeatable, cloud escrow model which enabled them to rapidly migrate a large and diverse application inventory. This session provides the technical aspects and unique insights into enterprise cloud adoption challenges and how Hess and Nimbo were able to successfully overcome them by relying on the flexibility and scale of AWS.
ENT218 - Elastic Bandwidth in a Cloud Computing World
by Denver Maddux - CEO with Megaport
On-demand bandwidth, when you need it.  Megaport's fully automated wide area networking platform gives customers the ability to dynamically provision, modify, and tear down circuits on demand via the web, mobile apps, and API.  Provisioning happens in real time and customers only pay for the bandwidth they use.  AWS Direct Connect offers customers a dedicated and private way to connect to AWS. The Megaport platform is connected to AWS Direct Connect gateways around the world giving customers the ability to dynamically create AWS Direct Connect circuits to AWS when their business demands it and turn them off again when not required. This presentation discusses how customers are using Megaport to solve business problems and control costs in ways that were not available to them before.  We also demonstrate how a customer can set up virtual connections to AWS and any AWS Direct Connect partner in the AWS partner ecosystem in a matter of minutes from a Megaport connected data center.  Sponsored by Megaport.
ENT217 - Delivering Next Generation Web and Mobile Apps in a Software-Defined Workplace
by Peter Bats - Sr. Solutions Architect with Citrix; Matt Lull - Managing Director, Global Strategic Alliances with Citrix Systems
Delivering an application to a mobile workforce may seem straightforward, but how do you efficiently deliver, secure, and support a myriad of web, mobile, and enterprise applications and desktops with a diverse and distributed user base?  See how Citrix cloud-ready networking, application, and data delivery platforms - including NetScaler, XenApp, XenDesktop, XenMobile, and Sharefile - can help you deliver internet applications and services, as well as enterprise-grade mobile experiences, complete with compliance, security and stability.  In this session Citrix covers the following: 1. Enterprise-grade, mobile workspace service delivery on AWS 2. Cloud-scale application delivery management with compliance and resiliency 3. Application-aware networking, seamlessly spanning AWS Direct Connect and securely delivering enterprise-grade and compliant content management 4. GPU-enabled G2 instances rendering 2D and 3D rich client applications to mobile devices 5. Enterprise-ready, cloud powered, business continuity solutions.  Sponsored by Citrix
ENT216 - Adapting Systems Architecture and IT Practices for Running Enterprise Business-Critical Workloads in the Cloud
by Saju Sankaran Kutty - Assoc. VP & Regional Sales Head, Cloud and Infrastructure Services with Infosys; Vishnu Bhat - SVP and Global Head of Cloud Services with Infosys; Abhijit Shroff - Principal Technology Architect, Cloud Services with Infosys
In this session, Infosys (an AWS Premier Consulting Partner) discusses how some of their large enterprise customers are leveraging AWS for business-critical workloads. Specific case studies include how a large wealth management firm addressed industry regulations for moving data to the cloud, as well as how customers in the aircraft manufacturing industry and education are achieving operational efficiencies through the cloud. This discussion also includes the following topics: - Moving business-critical workloads like SAP and CRM to the cloud - Understanding and implementing governance across the entire IT ecosystem - Adapting system architecture and IT practices for efficiency in the cloud Sponsored by Infosys.
ENT215 - Cloud Readiness for Government Institutions, How to Lead the Charge to AWS
by Mike Cardwell - CIO with DSHS, State of Texas; Max Peterson - Director with Amazon Web Services; Nicci Williams - Senior Business Development Manager with Lockheed Martin; Joel Davne - CEO with Cloudnexa
Join Cloudnexa (an AWS Premier Consulting Partner), the Texas Department of Health and Human Services, and Lockheed Martin in a panel discussion to examine cloud readiness for government organizations. Areas of focus include governance, security, and application and team readiness, including the transformation that is required within the IT department itself to transition from a facilities-based to cloud-based support model. Also covered in the discussion are best practices for acquiring and delivering cloud solutions inside a government entity, from contracts to department-level engagement models and satisfying compliance concerns such as HIPAA, and how energy savings performance contracts (ESPCs) are allowing government agencies to achieve data center consolidation objectives. Sponsored by Cloudnexa.
ENT214 - Flying Through Airport Security Using a Multiregion, Managed Solution
by Kevin Lupowitz - CIO with CLEAR; MJ DiBerardino - CTO with Cloudnexa
Join Cloudnexa (an AWS Premier Consulting Partner) and electronic personal identity innovator, CLEAR, as they discuss CLEAR's innovative service offering that gets clients through airport security with predictable speed, while addressing a diverse set of compliance requirements ranging from security to point-of-sale to online commerce. Find out how this commercial business built its solution using a multiregion architecture, including AWS GovCloud (US), to deliver this Department of Homeland Security SAFETY Act approved service for its clients. In this session you learn about CLEAR's migration from an on-premises data center to AWS; the differences between AWS GovCloud (US) and other AWS regions; how they optimized for AWS post-migration; how they managed their training and IT knowledge gaps; and their roadmap ahead. Sponsored by Cloudnexa.
ENT212 - How Autodesk Leverages Splunk as an Assurance Platform on AWS
by Alan Williams - Principal Engineer with Autodesk; Praveen Rangnath - Director of Cloud Product Marketing with Splunk
This session highlights the critical role of real-time visibility in Autodesk's adoption of AWS.  Autodesk shares how they use Splunk software to gain insight into applications and services deployed in AWS, achieve centralized visibility across on-premises and cloud systems, and monitor critical security-related user activity in their AWS account. Autodesk shares how these insights provide the required level of confidence and assurance to migrate significant enterprise workloads to AWS.  In this session, Splunk also presents their cloud solutions enabling real-time visibility and monitoring in AWS. This session explains how to accelerate your AWS adoption by delivering centralized and real-time visibility and how to get started with Splunk in AWS at no cost. Sponsored by Splunk.
ENT211 - Migrating the US Government to the Cloud
by Matthew Carroll - CTO with CSC
The US government has built hundreds of applications that must be refactored to take advantage of modern distributed systems. This session discusses EzBake, an open-source, secure big data platform deployed on top of Amazon EC2 and using Amazon S3 and Amazon RDS. This solution has helped speed the US government to the cloud and make big data easy. Furthermore, this session discusses critical architecture design decisions made during the creation of the platform in order to add additional security, leverage future AWS offerings, and cut total operations and maintenance costs. Sponsored by CSC.
ENT210 - Accelerating Business Innovation with DevOps on AWS
by Eddie Satterly - CTO, Big Data & Analytics with CSC
IT must innovate at the speed of market change, and many enterprises are realizing that DevOps and cloud computing are a means to this end. Cloud-based DevOps solutions that enforce fine-grain governance policies and automate software releases across the development tool chain can accelerate application time to market while also improving software quality. In this session, attendees learn the following: - How cloud and DevOps together can significantly accelerate software release cycles, so you can speed business innovation and gain competitive advantage - Best practices for leveraging CSC Agility Platform, AWS, and a hybrid IT strategy for DevOps - How to eliminate software release bottlenecks via policy-based automation, orchestration, and governance of application deployment environments. Sponsored by CSC.
ENT209 - Netflix Cloud Migration, DevOps and Distributed Systems
by Yury Izrailevsky - VP of Cloud Computing and Platform Engineering with Netflix; Neil Hunt - Chief Product Officer with Netflix
Netflix's migration to the cloud as our primary streaming control plane was paralleled by our move from traditional IT and centralized operations to a more decentralized DevOps organizational model. In this session, we explore the relationship between technical infrastructure and organization and how to find the right balance of centralized and decentralized operations. We also cover the rationale, goals, strategies, and technologies applied to accomplish this daunting task. We reflect on where we stand today and how we've realized many of our goals.
ENT208 - Why Traditional Enterprises are Moving Business Applications to AWS
by Leonard Simmons - Manager Technical Architecture and Projects with The Mosaic Company; James Kocsi - Manager, Strategic Technologies & Applications with Capgemini; Joseph Coyle - CTO, North America with Capgemini
Mosaic, a global leader in industrial and agricultural products, engaged Capgemini to help them consolidate their datacenter footprint and migrate a significant portion of their SAP applications to a 500-instance environment on AWS. This session covers the drivers, planning considerations, and business and IT benefits achieved by moving business applications to the cloud, in the context of the Mosaic case study. Sponsored by Capgemini.
ENT207 - Creating a Culture of Cost Management in Your Organization
by J.R. Storment - Chief Customer Officer with Cloudability
As your organization increases its AWS usage, budget owners and users demand new levels of cost visibility. This session explains how scaled organizations control and optimize their AWS spending as usage increases across multiple product teams. Attendees walk away with a strategy for driving cost effective behavior across the entire organization, from engineering to finance. Sponsored by Cloudability. Topics include: - Maintaining cost oversight while giving autonomy to individual teams - Allocating costs across dozens or hundreds of accounts or applications - Creating accountability around spending habits and waste - Tying cloud spending to the bottom line
ENT206 - Migrating Thousands of Workloads to AWS at Enterprise Scale
by Christopher Wegmann - Managing Director with Accenture; Chris Scott - Senior Manager with Accenture; Tom Laszewski - Global Lead System Integrator Solution Architects with Amazon Web Services
Migrating workloads to AWS in an enterprise environment is not easy, but with the right approach, an enterprise-sized organization can migrate thousands of instances to AWS quickly and cost effectively. You can leave this session with a good understanding of the migration framework used to assess an enterprise application portfolio and how to move thousands of instances to AWS in a quick and repeatable fashion. In this session, we describe the components of Accenture's cloud migration framework, including tools and capabilities provided by Accenture, AWS, and third-party software solutions, and how enterprises can leverage these techniques to migrate efficiently and effectively. The migration framework covers: - Defining an overall cloud strategy - Assessing the business requirements, including application and data requirements - Creating the right AWS architecture and environment - Moving applications and data using automated migration tools - Services to manage the migrated environment
ENT205 - AWS and VMware: How to Architect and Manage Hybrid Environments
by Rishi Vaish - VP of Product with RightScale, Inc.; Brian Adler - Principal Cloud Architect with RightScale, Inc.
AWS and VMware is not an either/or decision. Many enterprises are looking to leverage AWS in addition to their existing VMware virtualized environments. They want to choose the right venue for each application and move applications between VMware and AWS as their business needs dictate. In this session, you hear how RightScale helps customers successfully implement and manage hybrid environments that span AWS and VMware vSphere. Sponsored by RightScale. This session covers:
- 5 common use cases for hybrid environments
- Why VMware isn't the same as a cloud, and what to do about it
- Architectural considerations for hybrid environments
- Is portability a possibility or a pipe dream?
- A demo of a single pane of glass to manage hybrid environments
ENT204 - From Architecture to Dev-Ops: Building Skills and Capability to Exploit the Value of the AWS Cloud
by Rochana Golani - Head of Global Training Curriculum & Certification with Amazon Web Services
Enterprises are no longer asking “Should I move to the cloud?”; instead they're asking “When and how fast can I adopt the cloud?”. Key questions that we hear from enterprise customers include: Where do I start? What are the technical skill sets needed? What are the necessary skills for architecting cloud applications and hybrid applications? Who will take care of operations on a day-to-day basis? How do I monitor my cloud for costs, security, availability, and performance? Is my organization ready for DevOps, and when does that become important? What specific roles will I need to develop? If any of these questions are familiar to you, attend this session and learn about the skills, learning opportunities, and training available to build the technical and operational capability to take advantage of the AWS cloud. Expect to walk out with a mental roadmap of the cloud skill set you want to develop for your team.
ENT203 - Iterating Your Way To 95% Reserved Instance Usage
by Toban Zolman - VP of Product Development with Cloudability
Managing a large portfolio of reservations across an ever-changing infrastructure requires a sophisticated and systematic approach. Attendees in this session walk away with a strategy for maximizing Reserved Instance (RI) coverage in their organization, as well as an understanding of specific tools and tactics to put that strategy into action. Sponsored by Cloudability. Topics include:
- Reducing cycle times on the RI buying process
- Building an RI-friendly architecture
- Implementing a buy-measure-learn methodology that adapts to change
ENT202 - Four Critical Things to Consider When Moving Your Core Business Applications to the Cloud
by James Plourde - VP Cloud Services with Infor; Pam Murphy - Chief Operating Officer with Infor Global Solutions; Amul Merchant - Sr. Director with Infor; Jim Hoover - Infor Information Security Officer with Infor Global Solutions
Does moving core business applications to AWS make sense for your organization? This session covers key business and IT considerations gathered from industry experts and real-world enterprise customers who have chosen to move their mission-critical ERP applications to the AWS cloud, resulting in lower costs and better service. This session covers the following:
- Insights from industry experts and analysts, who explain how the cloud affects costs from three angles: launch, operations, and long-term infrastructure expense
- Review of how time-to-value and cloud launch processes differ from on-premises infrastructure
- How AWS offers increased security and reliability over what some enterprises can afford on their own
Sponsored by Infor.
ENT201 - New Generation Hybrid Architectures with Suncorp, NetApp, and AWS
by Stuart Devenish - Cloud & Enterprise Architect with Suncorp; Phil Brotherton - VP, Cloud Solutions Group with NetApp
Suncorp is Australia's largest insurance provider and leading regional bank, with 15,000 employees, $96 billion in assets, and 9 million customers across Australia and New Zealand. Last year, the company announced intentions to move its entire infrastructure to AWS. In this session, a Suncorp cloud architect discusses the hybrid IT approach that allows Suncorp to use AWS while satisfying strict IT compliance requirements. He shares the thinking, challenges, and roadmap related to Suncorp's use of NetApp Private Storage (NPS) for AWS for enterprise-wide disaster recovery, dual data centers for redundancy and compliance, and plans for enterprise-wide production workloads in phase two of the company's cloud transition. Learn why making application architecture changes and moving workloads to AWS is quicker and easier when your on-premises storage platform is consistent with your cloud storage platform. Session includes a demo of the NPS for AWS solution used by Suncorp as well as a discussion of the latest public and hybrid cloud developments available to customers. Sponsored by NetApp.
FIN401 - Seismic Shift: Nasdaq's Migration to Amazon Redshift
by Jason Timmes - Associate VP of Software Development with Nasdaq OMX
Jason Timmes led the migration of the primary data warehouse for Nasdaq's Transaction Services U.S. business unit (which operates Nasdaq's U.S. equity and options exchanges) from a traditional on-premises MPP database to Amazon Redshift. The project significantly reduced operational expenses. Jason, who is an Associate Vice President of Software Development at Nasdaq, describes how his team migrated to the cloud a warehouse that loads approximately 7 billion rows a day, satisfied several security and regulatory audits, optimized read and write performance, ensured high availability, and orchestrated the other back-office activities that depend on the warehouse's daily loads completing. Along with sharing several technical lessons learned, Jason will discuss Nasdaq's roadmap to integrating Redshift with more AWS services, as well as with more Nasdaq products, to offer even greater benefit to clients (internal and external) in the months ahead.
FIN303 - Inlet: Leveraging AWS to Change the Way Businesses Communicate with Consumers
by Robert Krugman - SVP Digital Strategy with Broadridge Financial Solutions, Inc.
Broadridge Financial Solutions and Pitney Bowes have partnered to develop a new venture, Inlet, that enables brands across industries to deliver sensitive client communications (such as bills, statements, and tax documents) to the digital channels that consumers use every day, including online banking websites, social media, and cloud storage solutions. Inlet is built entirely on AWS, leveraging a broad range of AWS services including Amazon EC2, Amazon VPC, AWS IAM, AWS CloudFormation, and AWS Direct Connect. In this session, learn how building and running Inlet on AWS enabled Broadridge to deliver an innovative new solution for sensitive content delivery to the market in a very short period of time. Robert Krugman, SVP Digital Strategy, discusses Inlet's service architecture and the competitive advantages and cost benefits of deploying on AWS.
FIN302 - From 10 Days to 10 Minutes: How Aon Benfield Leverages AWS GPUs to Make Actuarial Calculations More Efficient
by Yinal Ozkan - Principal Solutions Architect with Amazon Web Services; Aamir Mohammad - Director with Aon Benfield
Aon Benfield Securities, a leading global provider of risk management, insurance, and reinsurance brokerage, needs high-powered computing to process financial simulations for wealth management products. In the past, the firm's quarterly financial reporting took two weeks and a small army of people to complete, but by finding a creative way to use state-of-the-art AWS GPUs, Aon reduced calculation time from 10 days to 10 minutes. This session covers Aon Benfield's use of AWS GPUs and other services from AWS, including Amazon Virtual Private Cloud (VPC) and Amazon Elastic Block Store (EBS), to perform HPC calculations.
FIN202 - Addressing Data Security Concerns in Financial Services: Fidelity Investment's Use of SSE-C
by Travell Perkins - CTO Fidsafe with Fidelity Investments
Data security is a paramount concern for financial services firms. This session discusses how Fidelity Investments uses Amazon S3 with server-side encryption with customer-provided keys (SSE-C) to protect critical information, along with the firm's use of other AWS services, including AWS Elastic Beanstalk, Elastic Load Balancing, and Amazon DynamoDB. Fidelity Investments is one of the largest mutual fund and financial services groups in the world. Fidelity manages a large family of mutual funds; provides fund distribution and investment advice services; and also provides discount brokerage services, retirement services, wealth management, securities execution and clearance, life insurance, and a number of other services.
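The SSE-C pattern mentioned above is worth sketching: the client supplies its own 256-bit key with every request, S3 uses it to encrypt or decrypt the object, and then discards it. The following is a minimal, hypothetical boto3 sketch (not Fidelity's implementation; the function and parameter names are made up):

```python
def sse_c_params(key: bytes) -> dict:
    """Build the SSE-C parameters boto3 expects; the SDK adds the
    required MD5 digest of the key to the request automatically."""
    return {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": key}

def put_encrypted(bucket: str, name: str, body: bytes, key: bytes) -> None:
    """Upload an object encrypted server-side with a customer-provided key."""
    import boto3  # imported lazily so sse_c_params() is usable without AWS
    boto3.client("s3").put_object(
        Bucket=bucket, Key=name, Body=body, **sse_c_params(key))

def get_encrypted(bucket: str, name: str, key: bytes) -> bytes:
    """Read it back; the same key must accompany every GET, because S3
    never stores the key itself."""
    import boto3
    resp = boto3.client("s3").get_object(
        Bucket=bucket, Key=name, **sse_c_params(key))
    return resp["Body"].read()
```

Losing the key means losing the object: S3 keeps only a salted HMAC of the key for request validation, which is precisely what makes SSE-C attractive for regulated data.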
GAM405 - Create Streaming Game Experiences with Amazon AppStream
by Nic Branker - Solutions Architect with Amazon Web Services; Collin Davis - Director, Application Services, AppStream with Amazon Web Services
What if you could deliver a console-quality gaming experience to mobile devices anywhere in the world? In this session, learn about Amazon AppStream and how it enables real-time app streaming as a service via a few SDK calls. Hear how CCP has designed a new initial experience for their massively multiplayer game, EVE Online, that streams their character creator from the cloud while the game downloads in the background, increasing conversions. We look at how Amazon Game Studios is developing hybrid games that run half on the tablet and half in the cloud, enabling console-quality graphics on mobile devices.
GAM404 - Gaming DevOps: Scopely's Continuous Deployment Pipeline
by Mitch Garnaat - Director of Cloud Operations with Scopely, Inc.
How do you deploy a game with millions of online users, playing across the globe, without interrupting their experience? Learn how Scopely uses AWS automation tools to build, deploy, and manage highly scalable mobile games. They show how to use AWS CloudFormation and Ansible to build "golden AMIs." See how they do green/blue deployment of those AMIs using Auto Scaling and Amazon Elastic Load Balancing to avoid kicking players offline. Then, hear how they leverage Amazon Kinesis, Elasticsearch, and Amazon SNS to create a unified monitoring and alerting infrastructure for their games. Finally, learn how Scopely uses Amazon VPC and AWS Identity and Access Management (IAM) to keep its scalable gaming infrastructure safe and secure.
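The green/blue swap of golden AMIs described here reduces to attaching the freshly baked fleet to the load balancer before draining the old one. A hypothetical sketch in Python with boto3 (not Scopely's actual pipeline; the group and ELB names are invented, and a real rollout would wait for ELB health checks between the two calls):

```python
def next_color(current: str) -> str:
    """Alternate fleets: the new AMI always goes to the idle color."""
    return "green" if current == "blue" else "blue"

def swap_fleets(elb_name: str, live_asg: str, new_asg: str) -> None:
    """Put the new Auto Scaling group into rotation, then drain the old
    one, so no player is kicked offline mid-session."""
    import boto3  # lazy import keeps next_color() testable without AWS
    autoscaling = boto3.client("autoscaling")
    autoscaling.attach_load_balancers(
        AutoScalingGroupName=new_asg, LoadBalancerNames=[elb_name])
    # A production pipeline would poll instance health here before draining.
    autoscaling.detach_load_balancers(
        AutoScalingGroupName=live_asg, LoadBalancerNames=[elb_name])
```

Because both fleets briefly serve traffic behind the same load balancer, in-flight player sessions keep working while the cutover happens.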
GAM402 - Deploying a Low-Latency Multiplayer Game Globally: Loadout
by Nate Wiger - Principal Gaming Solutions Architect with Amazon Web Services; James Gwertzman - CEO with PlayFab
This is a deep-dive straight into the guts of running a low-latency multiplayer game, such as a first-person shooter, on a global scale. We dive into architectures that enable you to split apart your back-end APIs from your game servers and scale them independently with Auto Scaling. See how to run game servers in multiple AWS regions, such as China and Frankfurt, and integrate them with your central game stack. We'll even demo this in action, using AWS CloudFormation and Chef to deploy Unreal Engine game servers. In the second half, hear from PlayFab, who built the backend for the top-10 free-to-play PC shooter Loadout. PlayFab reveals details about their architecture, including their AWS Elastic Beanstalk setup, Amazon DynamoDB and Amazon RDS patterns, data sharding, and use of multiple Availability Zones. Finally, PlayFab highlights challenges they faced when deploying to AWS China, and how they solved them.
GAM304 - How Riot Games re:Invented Their AWS Model
by Marty Chong - Sr. Network Engineer with Riot Games; Jonathan McCaffrey - Software Architect with Riot Games
Riot Games is a fast-paced, dynamic environment with many groups striving to release new content, features, and tools. Riot runs League of Legends, one of the biggest online multiplayer games, and uses AWS to host many complex sites that service millions of players every day. In this session, Riot Games talks about the evolution of their management practice on AWS over the past two years, some lessons learned the hard way, and where they hope to be in the future. Key topics include:
- SSO (Single Sign-On) integration with IAM roles
- High-level AWS architecture (how to make it easy on your organization)
- VPC design, centralization, and simplification
- DevOps tooling and automation
- How and why we use Auto Scaling
GAM303 - Beyond Game Servers: Load Testing, Rendering, and Cloud Gaming
by Dhruv Thukral - Solutions Architect, Gaming with Amazon Web Services; Yuval Noimark - VP R&D with Playcast Media Systems
In this session, we go beyond online game servers to explore other areas where AWS can benefit your game. First, we dive into using AWS to perform load testing of your game. We present architecture patterns, what makes a good load test, and real-world example scenarios. We then highlight emerging trends in cloud rendering, and show how you can integrate Amazon EC2 GPU-based instances into your game workflow. Finally, hear from Playcast, who brought their cloud gaming service to new players worldwide by leveraging the G2 EC2 instance. Playcast shares how they architected their streaming service to best leverage the cloud, discusses things they learned, and demos their service streaming games from AWS.
GAM302 - EA's Real-World Hurdles with Millions of Players in the Simpsons: Tapped Out
by Colin Shirley - Software Engineer with EA; Chris Gallinaro - Software Engineer with EA
How do you really architect a game that can handle 5, 6, or 7 million daily active users? Learn about the scalability challenges that EA had to overcome for The Simpsons: Tapped Out. Hear how EA had to redesign their MySQL-based database layer on the fly, migrating over to Amazon DynamoDB, while keeping the game running. See how EA added AWS Elastic Beanstalk and Auto Scaling to simplify their deployments, while also lowering costs by enabling them to respond to changing player counts. EA shows how they switched from sticky sessions to Amazon ElastiCache, solving player disconnects and allowing further scaling out. Finally, EA shares some interesting statistics about The Simpsons: Tapped Out, as well as their overall learnings about how best to develop, deploy, and monitor a game on AWS.
GAM301 - Real-Time Game Analytics with Amazon Kinesis, Amazon Redshift, and Amazon DynamoDB
by Suhas Kulkarni - VP Engineering with GREE; Kandarp Shah - Engineering Manager with GREE
Success in free-to-play gaming requires knowing what your players love most. The faster you can respond to players' behavior, the better your chances of success. Learn how mobile game company GREE, with over 150 million users worldwide, built a real-time analytics pipeline for their games using Amazon Kinesis, Amazon Redshift, and Amazon DynamoDB. They walk through their analytics architecture, the choices they made, the challenges they overcame, and the benefits they gained. Also hear how GREE migrated to the new system while keeping their games running and collecting metrics.
GAM201 - Scalable Game Architectures That Don't Break the Bank
by Martin Elwin - Manager, Solutions Architecture with Amazon Web Services; Roope Kangas - Lead Server Developer with Grand Cru Games
In this session, AWS shares best practices for mobile, console, and MMO games that can scale from 1,000 to 1,000,000 users. See how to create a game backend using Amazon EC2 and AWS Elastic Beanstalk. Learn about database scaling challenges, and how to use Amazon DynamoDB and Amazon ElastiCache to address them. And, hear how to deliver game assets efficiently using Amazon S3 and Amazon CloudFront. Then, hear from Roope Kangas, Lead Server Developer and co-founder at Grand Cru, about their journey launching and cost-optimizing Supernauts on AWS. Grand Cru used load testing to validate their system before launch, enabling them to reach 1 million users in 6 days. Then, after launch, the team optimized their architecture based on system metrics to cut their AWS costs by more than half.
HLS402 - Getting into Your Genes: The Definitive Guide to Using Amazon EMR, Amazon ElastiCache, and Amazon S3 to Deliver High-Performance, Scientific Applications
by Sami Zuhuruddin - Enterprise Solutions Architect with Amazon Web Services; Shakila Pothini - Associate Director, qPCR Cloud Applications with Thermo Fisher; Puneet Suri - Senior Director, Software Engineering, Life Sciences Solutions with Thermo Fisher
The key to fighting cancer through better therapeutics is a deep understanding of the basic biology of this disease at a cellular and molecular level. Comprehensive analysis of cancer mutations in specific tumors or cancer cell lines by using Life Technologies sequencing and real-time PCR systems generates gigabytes to terabytes of data every day. Our customers bring together this data in studies that seek to discover the genetic fingerprint of cancer. The data typically translates to millions of records in databases that require complex algorithmic processing, cross-application analysis, and interactive visualizations with real-time response (2-3 seconds) to enable users to consume large volumes of complex scientific information. We have chosen the AWS platform to bring this new era of data analysis power to our customers by using technologies such as Amazon S3, ElastiCache, and DynamoDB for storage and fast access and Amazon EMR for parallelizing complex computations. Our talk tells the story with rich details about the challenges and roadblocks in building data-intensive, highly interactive applications in the cloud. We also highlight enhanced customer workflows and highly optimized applications with orders-of-magnitude improvements in performance and scalability.
HLS401 - Architecting for HIPAA Compliance on AWS
by Bill Shinn - Principal Security Solutions Architect with Amazon Web Services; Daniel Stover - Release Manager and Software Developer with Emdeon; Frank Macreery - CTO with Aptible; Jason McKay - VP of Engineering with Logicworks
This session brings together the interests of engineering, compliance, and security experts, and shows you how to align your AWS workload to the controls in the HIPAA Security Rule. You hear from customers who process and store Protected Health Information (PHI) on AWS, and you learn how they satisfied their compliance requirements while maintaining agility. This session helps security and compliance experts find out what is technically possible on AWS and learn how implementing the Technical Safeguards in the HIPAA Security Rule can be simple and familiar. We walk through the Technical Safeguards of the Security Rule and map them to AWS features and design choices to help developers, operations teams, and engineers speak the language of their security and compliance peers.
HLS305 - Transforming Cancer Treatment: Integrating Data to Deliver on the Promise of Precision Medicine
by Nate Slater - Solutions Architect with Amazon Web Services; Jon Hirsch - Founder & President with Syapse; Kristen McCaleb - Program Manager, UCSF Genomic Medicine Initiative with UCSF
In the past ten years, the cost of sequencing a human genome has fallen from $3 billion to $1,000, unlocking the ability for clinicians to use genomics in routine care. As the volume of genomic data used in the clinic begins to grow, healthcare providers are facing a number of new IT challenges, such as how to integrate this data with clinical data stored in electronic medical records, and how to make both available in real time to inform clinical decisions. In this session, find out how UCSF Medical Center and Syapse met these challenges head-on and solved them using AWS, all while remaining compliant with privacy and security requirements. Learn how Syapse's precision medicine platform uses Amazon VPC, Dedicated Instances, Amazon EC2, and Amazon EBS to build a high-performance, scalable, and HIPAA-compliant data platform that enables UCSF to deliver on the promise of precision medicine by dramatically reducing time and increasing the accuracy and utility of genomic profiling in cancer treatment.
HLS304 - Building a Secure and Scalable Healthcare Platform
by Vidhya Srinivasan - Senior Manager, Software Development with Amazon Web Services; Derek Slager - Director of Engineering with IMS Health
Healthcare and life sciences companies depend on thousands of dissimilar data streams from a complicated array of sources for all aspects of their businesses. The complexity of managing this data can present challenges in understanding and engaging patients and providers, effectively managing clinical trials, making data-driven decisions, and developing intelligent insights. In this session, we show how IMS Health uses AWS services (AWS Data Pipeline, Amazon EC2, Amazon Glacier, Amazon RDS, Amazon Redshift, Amazon Route 53, and Amazon S3) to address the healthcare needs of its customers. In addition, we show how IMS Health uses AWS today to optimize the integration of sales and marketing activities through customer use cases. Find out how both applications and data services will improve as access to data is accelerated and new capabilities are delivered through Amazon Redshift.
HLS303 - How Cloud Computing is Redefining Research: Secure, Collaborative Science at Scale
by Angel Pizarro - Technical Business Development Manager with Amazon Web Services; Jeffrey Reid - Head of Genome Informatics with Regeneron Pharmaceuticals; Omar Serang - Chief Cloud Officer with DNAnexus
Genome sequencing technologies have lowered costs and increased data output at a much faster rate than traditional hardware refresh cycles can service effectively. With lowered costs, more biomedical research and clinical operations are integrating sequencing into a larger portion of their projects, compounding an already tough problem for IT operations. When Regeneron committed to using Next Generation Sequencing technology for biopharmaceutical research, the decision was made to keep the server room empty and deliver all genomic analysis and storage services using AWS and DNAnexus. We review the consequences of this decision, illustrating how basic AWS core services are game-changers in the design and operation of a scientific infrastructure at scale. We show how Amazon S3 changes the storage performance equation, discuss the impact of new instance types on the cost of genomic analysis, and explain how secure collaboration enables Regeneron to advance biopharmaceutical research techniques and support the future of genomic medicine.
HLS201 - Using AWS and Data Science to Analyze Vaccine Yield
by Jerry Megaro - Director, Advanced Analytics and Innovation with Merck Manufacturing; Brian Keller - Chief Technologist with Booz Allen Hamilton; Nic Perez - Cloud Architect with Booz Allen Hamilton
Producing vaccines is a significant and complex effort that spans manufacturing, biological materials, streaming data, and complex computational challenges. In this session, speakers from Merck and Booz Allen Hamilton discuss how they partnered to leverage AWS and data science techniques, enabling them to pioneer new approaches for analyzing vaccine production yields. The solution they created combines a shared data lake service built on AWS services, such as Amazon EC2 and Amazon VPC, with Hadoop MapReduce, HDFS, Hive, and R to implement the data science infrastructure and analysis that created models of complex biological processes. As a result of this project, Merck has analyzed 12 years of vaccine manufacturing data from 16 data sources, conducted over 15 billion calculations, and was recognized with the InformationWeek Elite Business Innovation Award for the innovative application of data science towards enhancing vaccine yield rates and saving lives.
SPOT205-JT - State of the Union: AWS Mobile Services and New World of Connected Products - Japanese Track
by Jinesh Varia - Senior Program Manager, Mobile Services with Amazon Web Services; Marco Argenti - VP, AWS Mobile with Amazon Web Services
In this session, Marco Argenti, Vice President of AWS Mobile, will kick off the Mobile and Connected Devices Track and share our vision and the latest products and features we have launched this year. He will give an overview of our mobile services and share trends we are seeing among mobile customers. This is a repeat session that will be translated simultaneously into Japanese.
MBL401 - Social Logins for Mobile Apps with Amazon Cognito
by Bob Kinney - Software Engineer with Amazon Web Services
Streamline your mobile app sign-up experience with Amazon Cognito. In this session, we demonstrate how to use Cognito to build secure mobile apps without storing keys in them. Learn how to apply policies to existing Facebook, Google, or Amazon identities to secure access to AWS resources, such as personnel files stored in Amazon S3. Finally, we show how to handle anonymous access to AWS from mobile apps when there is no user logged in.
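The key idea here is an exchange: the app hands Cognito a token from the social provider and receives short-lived AWS credentials scoped by an IAM role, so no AWS keys ship inside the binary. A hypothetical Python/boto3 sketch of that exchange (the pool ID and tokens are placeholders; on-device apps would normally use the mobile SDKs' credentials providers instead):

```python
# Login keys Amazon Cognito expects for each supported social provider.
PROVIDER_KEYS = {
    "facebook": "graph.facebook.com",
    "google": "accounts.google.com",
    "amazon": "www.amazon.com",
}

def logins_map(provider: str, token: str) -> dict:
    """Build the Logins map from a provider name and its OAuth token."""
    return {PROVIDER_KEYS[provider]: token}

def temporary_credentials(pool_id: str, provider: str, token: str) -> dict:
    """Trade a social identity token for scoped, temporary AWS credentials."""
    import boto3  # lazy import keeps logins_map() testable without AWS
    cognito = boto3.client("cognito-identity")
    logins = logins_map(provider, token)
    identity = cognito.get_id(IdentityPoolId=pool_id, Logins=logins)
    creds = cognito.get_credentials_for_identity(
        IdentityId=identity["IdentityId"], Logins=logins)
    return creds["Credentials"]  # AccessKeyId, SecretKey, SessionToken
```

The IAM role attached to the identity pool, not the app, decides what those temporary credentials may touch (for example, a single Amazon S3 prefix per user).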
MBL311 - Workshop: Build an Android App Using AWS Mobile Services
by Danilo Poccia - Technical Evangelist with Amazon Web Services
Learn how to build a powerful Android app that leverages a variety of AWS services. In this three-hour, demo-heavy workshop, we show how you can build a modern native client app using the AWS Mobile SDK that uses a number of cross-platform mobile cloud services directly with minimal code on the client. We share best practices for building a highly scalable backend so you can add your own functionality. This is a step-by-step journey where you configure and add components to your architecture, then modify and test your components inside a mobile location-based messaging application. In the end, you will have a mobile application with your own backend consisting of different AWS services including: Amazon Cognito, Amazon Mobile Analytics, Amazon SNS Push Notification, Amazon S3, Amazon CloudFront, Amazon CloudSearch, Amazon DynamoDB, Amazon SQS, and AWS Elastic Beanstalk. Feel free to bring your laptop and follow along. There will be two 15-minute breaks during the session, at 9:45 am and at 10:45 am.
MBL310 - Workshop: Build iOS Apps Using AWS Mobile Services
by Bob Kinney - Software Engineer with Amazon Web Services; Sebastien Stormacq - Technical Trainer with Amazon Web Services; Stefano Buliani - Product Manager with Amazon Web Services
Learn how to build a powerful iOS app that leverages a variety of AWS services. In this three-hour, demo-heavy workshop, we show how you can build a modern native client app using Apple Swift and the AWS Mobile SDK that uses a number of cross-platform mobile cloud services directly with minimal code on the client. We share best practices for building a highly scalable backend so you can add your own functionality. This is a step-by-step journey where you configure and add components to your architecture, then modify and test your components inside a mobile location-based messaging app. In the end, you will have a mobile app with your own backend consisting of different AWS services including: Amazon Cognito, Amazon Mobile Analytics, Amazon SNS Push Notification, Amazon S3, Amazon CloudFront, Amazon CloudSearch, Amazon DynamoDB, Amazon SQS, and AWS Elastic Beanstalk. Feel free to bring your laptop and follow along. There will be two 15-minute breaks during the session, at 9:45 am and at 10:45 am.
MBL305 - The World Cup Second Screen Experience
by Carlos Conde - Chief Technology Evangelist, EMEA with Amazon Web Services; Thiago Catoto - Mobile Lead Engineer with Magazine Luiza
How can you combine the power of the cloud to provide an immersive real-time experience for your mobile and television viewers? A "second screen experience" provides an enhanced viewing experience for your users. We present best practices for implementing these experiences irrespective of your users' platform. Magazine Luiza is one of the largest retail chains in Brazil and was a sponsor of the top TV station in the country during the FIFA World Cup. They ran ads during game intervals and saw spikes of four times their normal mobile traffic. Come see how they built the second screen experience and the architecture that manages the Magazine Luiza mobile strategy on top of AWS.
MBL304 - Building Scalable Mobile Services with Global Footprints
by Jan Metzner - Solutions Architect with Amazon Web Services; Vinicius Gracia - Founder & CTO with Easy Taxi; Suresh Rasaretnam - Architect with HTC
In this session, hear directly from HTC and Easy Taxi, who share the architecture stories of how they built powerful mobile backend services on AWS. Learn how HTC built a news aggregation service (BlinkFeed), a photo-sharing service (HTC Share), and a phone backup service (HTC Backup) on AWS. Also learn how they used AWS for deploying server code, provisioning handsets, and business intelligence, and how they launched these services worldwide, with high availability and low latency, in just 6 months. Easy Taxi is building the world's largest taxi mobile app. Easy Taxi shares their experience building the app as they scaled to 35+ countries and thousands of transactions per second. They discuss how they had to reinvent their architecture and infrastructure to meet cultural as well as technical challenges, such as adapting to different kinds of access and mobile networks, working with different character sets, and changing server locations. They share best practices on how they met demanding traffic and transactional workloads by building on AWS and leveraging help from Amazon Enterprise Support.
MBL303 - Get Deeper Insights Using Amazon Mobile Analytics
by Andy Kelm - Director, Product Management, Mobile Services with Amazon Web Services; Chris Keyser - Ecosystem Solution Architect with Amazon Web Services; Patrik Arnesson - CEO with Football Addicts; Vedad Babic - Data Scientist with Football Addicts
Choosing the right mobile analytics solution can help you understand user behavior, engage users, and maximize user lifetime value. After this session, you will understand how you can learn more about your users and their behavior quickly across platforms with just one line of code using Amazon Mobile Analytics.
MBL302 - Mastering Synchronization Across Mobile Devices, Login Providers, and the Web
by David Behroozi - Sr. Software Engineer with Amazon Web Services; Stefano Buliani - Product Manager with Amazon Web Services
In the past, content and preferences would be moved to the device. Now devices are just a window to content and services that live in the cloud. The cloud enables your content and preferences to follow you wherever you go. You have the ability to transition between your phone, tablet, and laptop and seamlessly pick up where you left off. With Amazon Cognito, you can synchronize user data across mobile OS/devices and bridge the web world with the mobile world. In this session, learn how you can implement sync in Android, iOS, and JavaScript so you can deliver a “WOW” customized user experience to your customers. We show you how to integrate with Amazon Cognito to sync with mobile devices and the web and delve into some of the nuances of syncing, such as conflict resolution and account merging.
MBL301 - Beyond the App - Extend Your User Experience with Mobile Push Notifications - Featuring Mailbox
by Rich Cowper - Solution Architect with Amazon Web Services; Sean Beausoleil - Engineering Manager with Mailbox; David Barshow - Backend Engineer with Mailbox
Cross-platform push notifications that can engage your customers even when your app is in the background are becoming a central part of the mobile app user experience. Some customers may rarely open an app that provides useful information to them; for them, the notifications are the most important part. But great user experiences can break if your messages get dropped or delayed. How do you ensure your messages are delivered fast and reliably at scale? And how can you use them to extend the user experience of your app? In this session, we show you how Amazon SNS provides the performance and simplicity of a managed service, while also supporting interactive notifications, silent push, and broadcasts to large groups. We also learn from Mailbox, who rely on large-scale push notifications as a core part of the user experience and share real-world design patterns.
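One reason a single publish can reach many platforms is that SNS accepts a JSON envelope with one entry per platform plus a required default. A minimal, hypothetical boto3 sketch (the endpoint ARN is a placeholder; only the APNS entry is shown, and GCM and others follow the same shape):

```python
import json

def push_message(text: str) -> str:
    """Build a cross-platform SNS payload; each platform entry is itself
    a JSON string in that platform's native push format."""
    return json.dumps({
        "default": text,  # fallback for platforms without a specific entry
        "APNS": json.dumps({"aps": {"alert": text}}),
    })

def send_push(endpoint_arn: str, text: str) -> None:
    """Publish to one device endpoint previously registered with SNS."""
    import boto3  # lazy import keeps push_message() testable offline
    boto3.client("sns").publish(
        TargetArn=endpoint_arn,
        Message=push_message(text),
        MessageStructure="json")
```

Publishing to a topic ARN instead of an endpoint ARN broadcasts the same envelope to every subscribed device.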
MBL202 - NEW LAUNCH: Getting Started with AWS Lambda
by Tim Wagner - Director of Engineering, AWS Mobile with Amazon Web Services
AWS Lambda is a new compute service that runs your code in response to events and automatically manages compute resources for you. In this session, you learn what you need to get started quickly, including a review of key features, a live demonstration, how to use AWS Lambda with Amazon S3 event notifications and Amazon DynamoDB streams, and tips on getting the most out of Lambda functions.
MBL201 - Device Clouds: Best Practices in Building a Connected Device Backend in the Cloud
by Jinesh Varia - Senior Program Manager, Mobile Services with Amazon Web Services; Zach Supalla - CEO with Spark Labs; Kyle Roche - CEO with 2lemetry; John Cox - Sr. Technology Director with MachineShop
The more devices are connected, the more new applications and services emerge that use those connected devices. AWS offers a wide variety of services that can be used to build "device clouds," backend infrastructure for connecting different kinds of devices, including smart phones, tablets, smart meters, connected cars, sensor and actuator gateways, and so on. In this session, we share best practices for authentication, authorization, messaging, data collection, analytics, and much more. We also hear from Internet of Things (IoT) customers who have successfully built highly-scalable device clouds on AWS.
MED305 - Achieving Consistently High Throughput for Very Large Data Transfers with Amazon S3
by Stéphane Houet - Product Manager with EVS Broadcast Equipment; Jay Migliaccio - Director of Cloud Technologies with Aspera; Michelle Munson - President and Co-Founder with Aspera
A difficult problem for users of Amazon S3 that deal in large-form data is how to consistently transfer ultralarge files and large sets of files at fast speeds over the WAN. Although a number of tools are available for network transfer with S3 that exploit its multipart APIs, most have practical limitations when transferring very large files or large sets of very small files with remote regions. Transfers can be slow, degrade unpredictably, and for the largest sizes fail altogether. Additional complications include resume, encryption at rest, encryption in transit, and efficient updates for synchronization. Aspera has expertise and experience in tackling these problems and has created a suite of transport, synchronization, monitoring, and collaboration software that can transfer and store both ultralarge files (up to the 5 TB limit of an S3 object) and large numbers of very small files (millions < 100 KB) consistently fast, regardless of region. In this session, technical leaders from Aspera explain how to achieve very large file WAN transfers and integrate them into mission-critical workflows across multiple industries. EVS, a media service provider to the 2014 FIFA World Cup Brazil, explains how they used Aspera solutions for the delivery of high-speed, live video transport, moving real-time video data from sports matches in Brazil to Europe for AWS-based transcoding, live streaming, and file delivery. Sponsored by Aspera.
MED304 - The Future of Rendering: A Complete VFX Studio in the AWS Cloud
by Matt Yanchyshyn - Principal Solutions Architect with Amazon Web Services; Gerald Tiu - Professional Services Consultant with Amazon Web Services; Usman Shakeel - Principal Solutions Architect with Amazon Web Services
Today's studios and visual effects companies require massive computing power and large amounts of storage to produce high-end digital scenes and videos. Maintaining the infrastructure required for these jobs is expensive and operationally difficult, plus demand fluctuates day to day. Geographically diverse workforces add additional complexity to data and content transfer. The low-cost, utility computing model as well as unique virtualization capabilities offered by AWS are well-suited to addressing these challenges. In this session you will learn how to build and deploy a studio-quality, scalable Arnold render farm on AWS with reusable templates. We'll also demonstrate how to run Maya and Deadline remotely with AWS AppStream, and use them to edit scenes and coordinate render jobs entirely in the cloud.
MED303 - Secure Media Streaming and Delivery
by Nihar Bihani - Principal Product Manager, CloudFront with Amazon Web Services; Dhruv Parpia - Solutions Architect with Amazon Web Services; Jeroen Wijering - Co-founder & Creator with JW Player
Media content, whether it be the latest blockbuster movie or a company's confidential webcasts, can be some of the most important assets for a media business. Storing, preparing, and delivering this content securely involves leveraging systems that can scale and ensure top-of-the-line security. Come find out how AWS can help you implement these workflows in the cloud using highly available, scalable, and secure cloud services such as Amazon S3 (storage), Amazon Elastic Transcoder (transcoding) and Amazon CloudFront (delivery). We also discuss the underlying concepts of secure media delivery (e.g., policy-based DRM and signed URLs), the challenges faced by customers who need to design and implement these critical modules, and how to leverage the power of AWS to accomplish those while saving on costs. In addition, we take a deep dive into a media processing stack implemented on AWS using open source components to deliver encrypted HTTP Live Streams (HLS) to various devices.
MED302 - Leveraging Cloud-Based Predictive Analytics to Strengthen Audience Engagement
by Mike Limcaco - Enterprise Solutions Architect with Amazon Web Services
In order to improve audience engagement, media companies must deal with vast amounts of raw data from web, social media, devices, catalogs, and back-channel sources. This session dives into predictive analytic solutions on AWS: we present architecture patterns for optimizing media delivery and tuning overall user experience based on representative data sources (video player clickstream, web logs, CDN, user profiles, social media sentiment, etc.). We dive into concrete implementations of cloud-based machine learning services and show how they can be leveraged for profiling audience demand, cueing content recommendations, and prioritizing delivery of related media. Services covered include Amazon EC2, Amazon S3, Amazon CloudFront, and Amazon EMR.
MED301 - Brazil's World Cup: Interacting with TV Viewers in Real-Time
by Michel Pereira - Solutions Architect with Amazon Web Services; Fabio Castro - Architect & Lead Programmer with TV Globo
For the World Cup hosted by Brazil, TV Globo created a live interactive experience with the viewer, providing live analysis and statistics about the match and the ability to interact with the TV anchor. The challenge was to do near real-time interaction with the viewer that was synchronized with the plays of the soccer match on the screen. This session explores how AWS and TV Globo combined Amazon EC2, Amazon EMR, ElastiCache, Auto Scaling, Amazon SQS, DynamoDB, Amazon RDS, CloudFront and more than 500 instances to make the experience possible.
PFC403 - Maximizing Amazon S3 Performance
by Felipe Garcia - Solutions Architect with Amazon Web Services
This session drills deep into the Amazon S3 technical best practices that help you maximize storage performance for your use case. We provide real-world examples and discuss the impact of object naming conventions and parallelism on Amazon S3 performance, and describe the best practices for multipart uploads and byte-range downloads.
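The object-naming guidance in this abstract boils down to avoiding long runs of sequential key prefixes. A minimal sketch of the idea (the 4-character MD5 prefix is an illustrative choice, not an official AWS recommendation):

```python
import hashlib

def prefixed_key(key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash so sequential keys spread across S3 index partitions."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{key}"

# Sequential upload keys now land under well-distributed prefixes.
keys = [prefixed_key(f"logs/2014/11/13/event-{i}.json") for i in range(100)]
```

Readers that list objects must know the prefix scheme; a common variant reverses a timestamp or ID instead of hashing.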
PFC402 - Bigger, Faster: Performance Tips for High Speed and High Volume Applications
by Ben Clay - Software Development Engineer with Amazon Web Services; Brett McCleary - VP of Software Development with Precision Exams
This expert level session covers best practices and tips on how to reduce latency to the absolute minimum when dealing with high volume, high speed datasets, using Amazon DynamoDB. We take a deep dive into the design patterns and access patterns geared to provide low latency at very high throughput. We cover some ways in which customers have achieved low latencies and have a customer speak about their experience of using DynamoDB at scale.
PFC308 - How Dropbox Scales Massive Workloads Using Amazon SQS
by Akhil Gupta - Head of Infrastructure with Dropbox
In this session, learn how Dropbox scales to provide one of the largest cloud storage and file sharing services in the world. Hear how Dropbox leverages Amazon EC2 to run varied workloads including thumbnail generation and document previews, as well as document indexing to support full-text search. Dropbox presents “Livefill” - a generic framework built on top of Amazon SQS. Livefill enables them to trigger customizable data-processing workloads on data stored in Amazon S3 and helps them support more than 200,000 workload requests per second, spread across thousands of machines.
PFC307 - Auto Scaling: A Machine Learning Approach
by Callum Hughes - Enterprise Solutions Architect with Amazon Web Services; Sumit Amar - Director of Engineering with Electronic Arts
Auto Scaling groups used in conjunction with auto-scaling policies define when to scale out or scale in instances. These policies define actionable states based on a defined event and time frame (e.g., add instance when CPU utilization is greater than 90% for 5 consecutive minutes). In this session, Electronic Arts (EA) discusses a proactive approach to scaling. You learn how to analyze past resource usage to help preemptively determine when to add or remove instances for a given launch configuration. Past data is retrieved via Amazon CloudWatch APIs, and the application of supervised machine learning models and time series smoothing is discussed.
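The reactive policy in this abstract (CPU > 90% for 5 minutes) can be contrasted with a predictive sketch like the one below, which smooths past CloudWatch CPU samples and sizes the group toward a target utilization. This is a hypothetical illustration; EA's actual models are not described here.

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing over a metric series (e.g., CPU %)."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def desired_capacity(cpu_history, current_instances, target_cpu=50.0):
    """Size the fleet so the smoothed CPU forecast lands near target_cpu."""
    forecast = exp_smooth(cpu_history)
    return max(1, round(current_instances * forecast / target_cpu))

# A rising CPU trend suggests adding instances before the 90% threshold trips.
capacity = desired_capacity([40, 55, 70, 85], current_instances=4)  # → 6
```

The smoothing factor and target utilization are tunables; in practice the forecast would come from the CloudWatch `GetMetricStatistics` API rather than a literal list.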
PFC306 - Performance Tuning Amazon EC2 Instances
by Brendan Gregg - Senior Performance Architect with Netflix
Netflix tunes Amazon EC2 instances for maximum performance. In this session, you learn how Netflix configures the fastest possible EC2 instances, while reducing latency outliers. This session explores the various Xen modes (e.g., HVM, PV, etc.) and how they are optimized for different workloads. Hear how Netflix chooses Linux kernel versions based on desired performance characteristics and receive a firsthand look at how they set kernel tunables, including hugepages. You also hear about Netflix's use of SR-IOV to enable enhanced networking and their approach to observability, which can exonerate EC2 issues and direct attention back to application performance.
PFC305 - Embracing Failure: Fault-Injection and Service Reliability
by Josh Evans - Director of Operations Engineering with Netflix; Naresh Gopalani - Distributed Systems Architect | Engineer with Netflix
Complex distributed systems fail. They fail more frequently, and in different ways, as they scale and evolve over time. In this session, you learn how Netflix embraces failure to provide high service availability. Netflix discusses their motivations for inducing failure in production, the mechanics of how Netflix does this, and the lessons they learned along the way. Come hear about the Failure Injection Testing (FIT) framework and suite of tools that Netflix created and currently uses to induce controlled system failures in an effort to help discover vulnerabilities, resolve them, and improve the resiliency of their cloud environment.
PFC304-JT - Effective Interprocess Communications in the Cloud: The Pros and Cons of Micro Services Architectures - Japanese Track
by Sudhir Tonse - Manager, Cloud Platform with Netflix
Microservices are becoming more mainstream and bring with them a host of benefits and challenges. In this session, Netflix highlights the pros and cons of building software applications as suites of independently deployable services, as well as practical approaches for overcoming challenges. You get a firsthand look at the robust interprocess communications (IPC) framework that Netflix built and how they address the varying capacities, network usage patterns, and performance characteristics of the hundreds of microservices (e.g., Eureka, Karyon, Ribbon, RxNetty) in their cloud ecosystem. This is a repeat session that will be translated simultaneously into Japanese.
PFC304 - Effective Interprocess Communications in the Cloud: The Pros and Cons of Microservices Architectures
by Sudhir Tonse - Manager, Cloud Platform with Netflix
Microservices are becoming more mainstream and bring with them a host of benefits and challenges. In this session, Netflix highlights the pros and cons of building software applications as suites of independently deployable services, as well as practical approaches for overcoming challenges. You get a firsthand look at the robust interprocess communications (IPC) framework that Netflix built and how they address the varying capacities, network usage patterns, and performance characteristics of the hundreds of microservices (e.g., Eureka, Karyon, Ribbon, RxNetty) in their cloud ecosystem.
PFC303 - Milliseconds Matter: Design, Deploy, and Operate Your Application for Best Possible Performance
by John Mancuso - Solutions Architect with Amazon Web Services; Prasad Kalyanaraman - VP, AWS Edge Services with Amazon Web Services
You can't (yet) bend the laws of physics, but you can use the power of the cloud to design applications that run as fast as the speed of light! This session will focus on the best practices for optimizing performance to the very last millisecond. We'll dive into topics such as caching at every layer of your application, TCP optimizations, SSL optimizations, latency-based routing, and much more. These best practices can help you streamline your infrastructure utilization, improve performance, and scale economically.
PFC302 - Performance Benchmarking on AWS
by Dougal Ballantyne - Solutions Architect with Amazon Web Services; Bennie Johnston - Head of APIs with JUST EAT
In this session, we explain how to measure the key performance-impacting metrics in a cloud-based application and share best practices for a reliable benchmarking process. Measuring the performance of applications correctly can be challenging, and there are many tools available to measure and track performance. This session provides specific examples of good and bad tests. We make it clear how to get reliable measurements and how to map benchmark results to your application. We also cover the importance of selecting tests wisely, repeating tests, and measuring variability. In addition, a customer provides real-life examples of how they developed their application testing stack, utilize it for repeatable testing, and identify bottlenecks.
SDD424 - Simplifying Scalable Distributed Applications Using DynamoDB Streams
by Parik Pol - Software Development Manager, DynamoDB with Amazon Web Services; Akshat Vig - Software Development Engineer with Amazon Web Services
DynamoDB Streams provides a stream of all the updates made to your DynamoDB table. It is a simple but extremely powerful primitive that enables developers to easily build solutions like cross-region replication and to host additional materialized views, for instance an Elasticsearch index, on top of DynamoDB tables. In this session we dive deep into the details of DynamoDB Streams and how customers can leverage them to build custom solutions and extend the functionality of DynamoDB. We give a demo of an example application built on top of DynamoDB Streams to demonstrate their power and simplicity.
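The "materialized view" idea in this abstract can be sketched as a fold over stream records. The record shape below loosely mirrors the Streams API (eventName, Keys, NewImage), but it is a simplified stand-in, not the real SDK types:

```python
def apply_stream_records(view: dict, records: list) -> dict:
    """Fold a batch of DynamoDB-Streams-style records into a materialized view."""
    for rec in records:
        key = rec["dynamodb"]["Keys"]["id"]
        if rec["eventName"] in ("INSERT", "MODIFY"):
            view[key] = rec["dynamodb"]["NewImage"]
        elif rec["eventName"] == "REMOVE":
            view.pop(key, None)
    return view

# Replaying insert → modify → remove leaves the view empty again.
view = apply_stream_records({}, [
    {"eventName": "INSERT", "dynamodb": {"Keys": {"id": "u1"}, "NewImage": {"name": "Ada"}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"id": "u1"}, "NewImage": {"name": "Ada L."}}},
    {"eventName": "REMOVE", "dynamodb": {"Keys": {"id": "u1"}}},
])
```

A real consumer would poll shards via the Streams API (or a Lambda trigger) and write the view to Elasticsearch or another table instead of an in-memory dict.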
SDD423 - Elastic Load Balancing Deep Dive and Best Practices
by David Brown - Director, Software Dev, EC2 Load Balancing with Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service's many customization choices. We also share best practices and useful tips for success.
SDD422 - Amazon VPC Deep Dive
by Kevin Miller - Sr. Manager, Software Development with Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. In this talk, we discuss advanced tasks in Amazon VPC, including the implementation of VPC peering, the creation of multiple network zones, the establishment of private connections, and the use of multiple routing tables. We also provide information for current EC2-Classic network customers and help you prepare to adopt Amazon VPC.
SDD421 - Amazon EC2 Purchasing Deep Dive and Best Practices
by Stephen Elliott - Sr. Product Manager with Amazon Web Services
Amazon Elastic Compute Cloud (Amazon EC2) provides customers three different purchasing models that give you the flexibility to optimize your costs. In this session, you learn how to balance and optimize your use of On-Demand, Reserved, and Spot Instances. We discuss the right applications for the different Reserved Instance types, as well as guidelines for making applications ready to run on Spot Instances. We also discuss how to understand and track your Amazon EC2 spending.
SDD420 - Amazon WorkSpaces: Advanced Topics and Deep Dive
by Eric Schultze - Principal Product Manager, AWS Windows Business with Amazon Web Services; Deepak Suryanarayanan - Senior Product Manager, Amazon WorkSpaces with Amazon Web Services
Amazon WorkSpaces is an enterprise desktop computing service in the cloud. In this session, we dive deep into configuration, administration, and advanced networking topics for WorkSpaces. We also discuss integration of WorkSpaces to your corporate active directory and best practices for enabling your WorkSpaces to access resources on your corporate intranet.
SDD419 - Amazon EC2 Networking Deep Dive and Best Practices
by Becky Weiss - Principal Software Engineer, EC2 Networking with Amazon Web Services
Amazon EC2 instances give customers a variety of high-bandwidth networking choices. In this session, we discuss how to choose among Amazon EC2 networking technologies and examine how to get the best performance out of Amazon EC2 enhanced networking and cluster networking. We also share best practices and useful tips for success.
SDD418 - Amazon CloudWatch Deep Dive
by Henry Hahn - Senior Product Manager with Amazon Web Services
In this session, we go deep on best practices for you to get the most out of Amazon CloudWatch. Learn how you can use new service metrics to keep even more of your systems and applications running smoothly. See how CloudWatch Logs can help you monitor your logs in near-real time for events you care about and store the log data in low cost, highly durable storage. Hear about best practices for retrieving Amazon Web Services metrics from CloudWatch using the API. Get a demonstration of how you can use the EC2Config service to monitor applications and events on Windows Server.
SDD416 - Amazon EBS Deep Dive
by Dougal Ballantyne - Solutions Architect with Amazon Web Services
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we conduct a detailed analysis of the differences among the three types of Amazon EBS block storage: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. We discuss how to maximize Amazon EBS performance, with a special eye towards low-latency, high-throughput applications like databases. We discuss Amazon EBS encryption and share best practices for Amazon EBS snapshot management. Throughout, we share tips for success.
SDD415 - NEW LAUNCH: Amazon Aurora: Amazon's New Relational Database Engine
by Manish Dalwadi - Sr. Product Manager with Amazon Web Services; Anurag Gupta - General Manager, Amazon Redshift with Amazon Web Services
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Starting today, you can sign up for an invitation to the preview of the service. Come to our session for an overview of the service and learn how Aurora delivers up to five times the performance of MySQL yet is priced at a fraction of what you'd pay for a commercial database with similar performance and availability.
SDD414 - Amazon Redshift Deep Dive and What's Next
by Rahul Pathak - Principal Product Manager with Amazon Web Services; Anurag Gupta - General Manager, Amazon Redshift with Amazon Web Services
Get a look under the covers of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance. Finally, we'll announce new features that we've been working on over the past few months.
SDD413 - Amazon S3 Deep Dive and Best Practices
by Saad Ladki - Manager, Product Management with Amazon Web Services; Tim Hunt - Sr. Product Manager with Amazon Web Services
Come learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
SDD412 - Amazon Simple Email Service Deep Dive and Best Practices
by Abhishek Mishra - General Manager, Amazon Simple Email Service (SES) with Amazon Web Services; Sina Yeganeh - Technical Program Manager with Amazon.com; Morgan Thomas - Software Development Engineer with Amazon Web Services
Almost all applications and services have a need to communicate over email. Amazon Simple Email Service (SES) enables email functionality that will scale with your business. So what should an email-sending application that integrates with SES look like? In this session we cover common patterns, architectures, and best practices that you can use to build a robust email solution that takes advantage of the SES platform.
SDD411 - Amazon CloudSearch Deep Dive and Best Practices
by Jon Handler - CloudSearch Solutions Architect with Amazon Web Services
Amazon CloudSearch is a fully-managed search service in the cloud that lets you quickly and easily set up and use a search solution for your application. The latest version of CloudSearch includes tons of new and advanced search and administrative features. This session covers how to design for high scale at low cost, as well as best practices for handling multiple languages, ranking your search results, securing your CloudSearch domains, achieving cost-effective multi-tenancy, sourcing from many different systems, and getting the most out of your CloudSearch instances.
SDD409 - Amazon RDS for PostgreSQL Deep Dive
by Grant McAlister - Senior Principal Engineer with Amazon Web Services; Greg Roberts - Associate Director with Illumina
Learn the specifics of Amazon RDS for PostgreSQL's capabilities and extensions that make it powerful. This session covers database data import, performance tuning and monitoring, troubleshooting, security, and leveraging open source solutions with RDS. Throughout, this session focuses on capabilities particular to RDS for PostgreSQL.
SDD408 - Amazon Route 53 Deep Dive: Delivering Resiliency, Minimizing Latency
by Lee-Ming Zen - Software Development Manager, Amazon Route 53 with Amazon Web Services; Manoj Chaudhary - CTO and VP of Engineering with Loggly
Learn how to utilize Amazon Route 53 latency-based routing, weighted round-robin, and other features in conjunction with DNS failover to direct traffic to the least latent, most available endpoints across a global infrastructure. We explore topics such as balancing traffic between endpoints in terms of load and latency, and discuss how to provide multi-record answers to improve client-side resiliency. As part of this session, Loggly will present how they utilize Route 53 for their traffic management needs.
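Weighted round-robin, mentioned above, amounts to returning each endpoint with probability proportional to its weight. A self-contained sketch (the endpoint names and weights are made up):

```python
import random

def pick_endpoint(records, rng):
    """Return one endpoint, chosen with probability weight / sum(weights)."""
    total = sum(weight for _, weight in records)
    roll = rng.uniform(0, total)
    upto = 0.0
    for endpoint, weight in records:
        upto += weight
        if roll <= upto:
            return endpoint
    return records[-1][0]  # guard against float rounding at the top edge

# us-east should receive roughly 3x the traffic of eu-west.
records = [("us-east.example.com", 3), ("eu-west.example.com", 1)]
rng = random.Random(42)
picks = [pick_endpoint(records, rng) for _ in range(1000)]
```

In Route 53 itself you set the weight on each record set; combining weights with health checks gives the failover behavior the session describes.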
SDD407 - Amazon DynamoDB: Data Modeling and Scaling Best Practices
by David Yanacek - Sr. Software Dev Engineer with Amazon Web Services
Amazon DynamoDB is a fully managed, highly scalable distributed database service. In this technical talk, we show you how to use DynamoDB to build high-scale applications like social gaming, chat, and voting. We show you how to use building blocks such as secondary indexes, conditional writes, consistent reads, and batch operations to build the higher-level functionality such as multi-item atomic writes and join queries. We also discuss best practices such as index projections, item sharding, and parallel scan for maximum scalability.
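Item sharding (write sharding), listed among the best practices above, spreads a hot partition key across N suffixed keys so writes distribute evenly; readers fan out across every suffix. A sketch, with an illustrative shard count and hash choice:

```python
import hashlib

def sharded_key(base_key: str, shard_count: int = 10, salt: str = "") -> str:
    """Map a logical key (plus a per-item salt) onto one of N physical shards."""
    shard = int(hashlib.md5((base_key + salt).encode()).hexdigest(), 16) % shard_count
    return f"{base_key}#{shard}"

def all_shards(base_key: str, shard_count: int = 10):
    """Every physical key a reader must query to reassemble the logical partition."""
    return [f"{base_key}#{i}" for i in range(shard_count)]
```

The salt (e.g., a player or event ID) is what spreads writes; without it every write for `base_key` would still land on one shard.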
SDD406 - Amazon EC2 Instances Deep Dive
by John Phillips - Manager, Product Management, Amazon EC2 with Amazon Web Services; Anthony Liguori - Principal Software Engineer, EC2 with Amazon Web Services
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
SDD405 - Amazon Kinesis Deep Dive
by Aditya Krishnan - Sr. Product Manager with Amazon Web Services
Amazon Kinesis is the AWS service for real-time streaming big data ingestion and processing. This talk gives a detailed exploration of Kinesis stream processing. We discuss in detail techniques for building and scaling Kinesis processing applications, including data filtration and transformation. Finally, we cover tips and techniques for emitting data into Amazon S3, DynamoDB, and Redshift.
SDD404 - Amazon RDS for Microsoft SQL Server Deep Dive
by Sergei Sokolenko - Senior Product Manager with Amazon Web Services; Ghim-Sim Chua - Senior Product Manager, Relational Database Service (RDS) with Amazon Web Services; Miguel Simões João - Lead Software Engineer with Outsystems SA
Come learn how to turbocharge your application development with useful tips and tricks for Amazon RDS for SQL Server. We cover practical and useful topics such as migrating your data to an Amazon RDS for SQL Server instance, as well as detailed guidance for taking advantage of Multi-AZ deployments to architect high-performance applications and production workloads.
SDD403-JT - Amazon RDS for MySQL Deep Dive - Japanese Track
by Grant McAlister - Senior Principal Engineer with Amazon Web Services
Learn about architecting a highly available RDS MySQL implementation to support your high-performance applications and production workloads. We will also talk about best practices in the areas of security, storage, compute configurations, and management that will contribute to your success with Amazon RDS for MySQL. In addition, you will learn about how to effectively move data between Amazon RDS and on-premises instances. This is a repeat session that will be translated simultaneously into Japanese.
SDD403 - Amazon RDS for MySQL Deep Dive
by Sajee Mathew - Solutions Architect with Amazon Web Services; Pavan Pothukuchi - Principal Product Manager, Amazon RDS with Amazon Web Services
Learn about architecting a highly available RDS MySQL implementation to support your high-performance applications and production workloads. We will also talk about best practices in the areas of security, storage, compute configurations, and management that will contribute to your success with Amazon RDS for MySQL. In addition, you will learn about how to effectively move data between Amazon RDS and on-premises instances.
SDD402 - Amazon ElastiCache Deep Dive
by Sami Zuhuruddin - Enterprise Solutions Architect with Amazon Web Services; Frank Wiebe - Principal Scientist with Adobe Systems
Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache.
SDD401 - Amazon Elastic MapReduce Deep Dive and Best Practices
by Ian Meyers - Principal Solutions Architect with Amazon Web Services
Amazon Elastic MapReduce is one of the largest Hadoop operators in the world. Since its launch five years ago, AWS customers have launched more than 5.5 million Hadoop clusters. In this talk, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters and other Amazon EMR architectural patterns. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost efficient.
SDD302 - A Tale of One Thousand Instances - Migrating from Amazon EC2-Classic to VPC
by Sumbry _ - Director of Cloud Services with Twilio; Jonas Borjesson - Tech Lead, SIP with Twilio
In this session, you learn why Twilio chose to migrate from Amazon EC2-Classic to VPC and how they leveraged features available only in VPC, specifically:
- AWS CloudHSM: Build out secure key encryption or role-based access control for internal use; also used to securely store and encrypt data for external customers.
- Elastic Network Interface (ENI): Allows multiple Elastic IPs per instance and the ability to move a network interface between instances.
- Hardware Virtual Machine (HVM) instances with SR-IOV: Hardware-virtualized instances that allow line-level performance of network interfaces at up to 10 Gb Ethernet speeds.
- Secure data in transit by default, which ensures all machines communicate via a software-defined network and works in the same manner as VLAN tagging for compliance reasons.
Sponsored by Twilio.
SEC406 - NEW LAUNCH: Building Secure Applications with AWS Key Management Service
by Gregory Roth - Sr. Security Engineer with Amazon Web Services
Learn how you can use the AWS Key Management Service to protect data in your applications. This talk shows you how to use the encryption features of AWS Key Management Service within your applications and provides an in-depth walk-through of applying policy control to keys to control access.
SEC405 - Enterprise Cloud Security via DevSecOps
by Aaron Wilson - Senior Consultant, AWS Professional Services with Amazon Web Services; Scott Kennedy - Chief Security Scientist - Cloud Security with Intuit; Shannon Lietz - Sr Mgr DevSecOps with Intuit
If you're trying to figure out how to run enterprise applications and services on AWS securely, come join Intuit and the AWS Professional Services team to learn how to embrace a new discipline called DevSecOps. You'll learn more about software-defined security and why we think that DevSecOps helps organizations large and small adopt cloud services at a rapid pace. We'll provide you with links and information to help you get started with creating your own DevSecOps team.
SEC404 - Incident Response in the Cloud
by Don Bailey - Principal Security Engineer with Amazon Web Services; Gregory Roth - Sr. Security Engineer with Amazon Web Services
You've employed the practices outlined for incident detection, but what do you do when you detect an incident in the cloud? This session walks you through a hypothetical incident response on AWS. Learn to leverage the unique capabilities of the AWS environment when you respond to an incident, which in many ways is similar to how you respond to incidents in your own infrastructure. This session also covers specific environment recovery steps available on AWS.
SEC403 - Building AWS Partner Applications Using IAM Roles
by Bob Van Zant - Software Engineer with Bracket Computing
AWS Identity and Access Management (IAM) roles are powerful primitives you can use to build applications that can access a broad range of data without collecting databases of credentials. This session explains how to model applications that are granted access to large numbers of AWS accounts through the use of IAM roles. It covers advanced role permission modeling and sample implementations.
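The core artifact in the cross-account model this session describes is the role's trust policy in the customer account, which names the partner account as principal and pins an `sts:ExternalId` condition to guard against the confused-deputy problem. The sketch below builds such a policy document; the account ID and external ID are hypothetical, but the policy grammar (`Version`, `Statement`, `Principal`, `sts:AssumeRole`, the `sts:ExternalId` condition key) is the real IAM format.

```python
import json

# Hypothetical partner (application) account ID.
PARTNER_ACCOUNT = "111111111111"

def make_trust_policy(external_id: str) -> dict:
    """Trust policy a customer attaches to the role in their own account.

    The sts:ExternalId condition means the partner must present the
    agreed external ID when calling AssumeRole, so one customer cannot
    trick the partner into assuming another customer's role.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }

policy = make_trust_policy("customer-12345")
print(json.dumps(policy, indent=2))
```

What the role is allowed to *do* once assumed is a separate permissions policy on the same role; the trust policy only controls *who* may assume it.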
SEC402 - Intrusion Detection in the Cloud
by Graeme Baer - Software Development Engineer with Amazon Web Services and Don Bailey - Principal Security Engineer with Amazon Web Services
If your business runs entirely on AWS, your AWS account is one of your most critical assets. Just as you might run an intrusion detection system in your on-premises network, you should monitor activity in your AWS account to detect abnormal behavior. This session walks you through leveraging unique capabilities in AWS that you can use to detect and respond to changes in your environment.
SEC316 - SSL with Amazon Web Services
by Colm MacCarthaigh - Principal Engineer with Amazon Web Services
The SSL and TLS protocols are critical to online security and performance. This session discusses how the SSL and TLS protocols work and how they are integrated with many AWS services such as Amazon CloudFront, Elastic Load Balancing, and Amazon S3. Learn how technologies such as Perfect Forward Secrecy and HSTS can be used to protect end-user data, and why browsers and servers are now removing support for version 3 of the SSL protocol, SHA-1 signatures and some encryption algorithms such as RC4. By the end of the session you'll be able to understand each of these technologies and how to adapt to the changing security landscape.
SEC315 - NEW LAUNCH: Get Deep Visibility into Resource Configurations
by Prashant Prahlad - Sr. Product Manager with Amazon Web Services
AWS Config is a new cross-resource service that allows you to discover your resources, see how they're configured, and track how those configurations change over time. The service defines and captures relationships and dependencies between resources, helping you determine whether a change to one resource affects other resources.
SEC314 - Customer Perspectives on Implementing Security Controls with AWS
by Jason Cradit - Director, Information Solutions with Willbros; Mark Nunnikhoven - Vice President, Cloud & Emerging Technologies with Trend Micro; Aaron Hughes - Systems Architect with Washington Department of Fish & Wildlife; Mark Burns - Enterprise Security Manager with Medibank Private Limited; and Mauricio Fernandes - CEO with Dedalus Prime
Security postures in the cloud can take different forms, depending upon your specific business and IT requirements. Hear from customer panelists representing the energy industry, IT services, and government about how they have successfully delivered projects on AWS using Trend Micro solutions, while meeting or exceeding their security requirements. Focus is on the practical considerations and options for improving your overall IT security posture with the AWS shared responsibility security model. Sponsored by Trend Micro.
SEC313 - Updating Security Operations for the Cloud
by Mark Nunnikhoven - Vice President, Cloud & Emerging Technologies with Trend Micro
Learn how to increase the effectiveness of your security operations as you move to the cloud. This session for architects and IT administrators covers considerations for optimizing your incident response, monitoring, and audit response tactics to take advantage of built-in capabilities in AWS. This session provides practical advice you can apply today, pulled from industry research, direct experience helping customers migrate to the cloud, and from the speaker's own hard-earned lessons. Sponsored by Trend Micro.
SEC312 - Taking a DevOps Approach to Security
by George Miranda - Partner Evangelist with Chef Software, Inc. and Paul Fisher - VP of Technology Operations with Alert Logic
More organizations are embracing DevOps to realize compelling business benefits, such as more frequent feature releases, increased application stability, and more productive resource utilization. However, security and compliance monitoring tools have not kept up. In fact, they often represent the largest single remaining barrier to continuous delivery. Learn how to integrate security controls in your DevOps program from experts at Alert Logic and George Miranda, engineer and evangelist at Chef. Sponsored by Alert Logic.
SEC311 - Architecting for End-to-End Security in the Enterprise
by Hart Rossman - Principal Consultant, Global Security, Risk, and Compliance Practice with Amazon Web Services and Bill Shinn - Principal Security Solutions Architect with Amazon Web Services
This session tells the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture design decisions made by Fortune 500 organizations during actual sensitive workload deployments, as told by the AWS security solution architects and professional service security, risk, and compliance team members who lived them. In this technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture and service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.
SEC310 - Integrating AWS with External Identity Management
by Mark Diodati - Technical Director, CTO Office with Ping Identity
Amazon Web Services IAM has a cohesive set of features, including authentication, service and resource authorization, and privilege delegation. But how does AWS IAM interact with an organization's external identity management framework? In this session, we will look at the identity disciplines, including authorization, identity governance and administration (IGA), provisioning, authentication, and single sign-on, and their associated standards like XACML, SCIM, SAML, OAuth, OpenID Connect, and FIDO. We will specify how these externalized identity functions can be integrated with AWS to deliver a cohesive organizational identity management framework. We will also cover real-world deployments of externalized identity systems with AWS.
SEC309 - Amazon VPC Configuration: When Least Privilege Meets the Penetration Tester
by Jason Bubolz - Senior Security Engineer with iSEC Partners
Enterprises trying to deploy infrastructure to the cloud and independent software companies trying to deliver a service have similar problems to solve. They need to know how to create an environment in AWS that enforces least-privilege access between components while also allowing administration and change management. Amazon Elastic Compute Cloud (EC2) and Identity and Access Management (IAM), coupled with services like AWS Security Token Service (STS), offer the necessary building blocks. In this session, we walk through some of the mechanisms available to control access in an Amazon Virtual Private Cloud (VPC). Next, we focus on using IAM and STS to create a least-privilege access model. Finally, we discuss auditing strategies to catch common mistakes and discuss techniques to audit and maintain your infrastructure.
SEC308 - Navigating PCI Compliance in the Cloud
by Jesse Angell - CTO with PaymentSpring
Navigating Payment Card Industry (PCI) compliance on AWS can be easier than in a traditional data center. This session discusses how PaymentSpring implemented a PCI level-1 certified payment gateway running entirely on AWS. PaymentSpring will talk about how they designed the system to make PCI validation easier, what AWS provided, and what additional tools PaymentSpring added. Along the way, they'll cover some things they did to reduce costs and increase the overall security of the system.
SEC307 - Building a DDoS-Resilient Architecture with Amazon Web Services
by Andrew Kiggins - Software Development Manager with Amazon Web Services and Adrian Newby - CTO with CrownPeak Technology
In this session, we'll give an overview of Distributed Denial of Service (DDoS) and discuss techniques using AWS and security solutions from AWS Marketplace to help build services that are resilient in the face of DDoS attacks. We'll discuss anti-DDoS features available in AWS, such as Route 53's Anycast Routing, Auto Scaling for EC2, and CloudWatch's alarms, and how these features can be used jointly to help protect your services. Also, you'll hear from CrownPeak, an AWS Technology Partner, on how it used techniques discussed in the presentation to help mitigate an actual DDoS attack.
SEC306 - Turn on CloudTrail: Log API Activity in Your AWS Account
by Sivakanth Mundru - Sr. Product Manager with Amazon Web Services and Steve Toback - Cloud Architect with Merck & Co., Inc.
Do you need to know who made an API call? What resources were acted upon in an API call? Do you need to find the source IP address of an API call? AWS CloudTrail helps you answer these questions. In this session we review the basics of CloudTrail and then dive into CloudTrail features. We demo solutions that you can use to analyze API activity recorded and delivered by CloudTrail. Join us if you are interested in security or compliance and how you can architect, build, and maintain compliant applications on AWS.
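The questions the abstract asks (who made the call, against what, from which IP) map directly onto fields in the JSON records CloudTrail delivers. The sketch below parses an abbreviated, illustrative record; real records carry many more fields, but `eventName`, `sourceIPAddress`, and `userIdentity` are part of the actual record format, while the specific user, IP, and instance ID here are made up.

```python
import json

# Abbreviated, illustrative CloudTrail delivery file (one record).
raw = """
{"Records": [{
  "eventTime": "2014-11-12T17:01:22Z",
  "eventName": "TerminateInstances",
  "sourceIPAddress": "203.0.113.7",
  "userIdentity": {"type": "IAMUser", "userName": "alice"},
  "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-abc123"}]}}
}]}
"""

def who_did_what(log: str):
    # Yield (user, API action, source IP) for each record in a delivery file.
    for rec in json.loads(log)["Records"]:
        yield (rec["userIdentity"].get("userName", "unknown"),
               rec["eventName"],
               rec["sourceIPAddress"])

for user, action, ip in who_did_what(raw):
    print(f"{user} called {action} from {ip}")
```

In practice you would run this kind of extraction over the gzipped delivery files CloudTrail writes to your S3 bucket, or feed them into a search system, rather than over an inline string.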
SEC305-JT - IAM Best Practices - Japanese Track
by Anders Samuelsson - Principal Technical Program Manager with Amazon Web Services
Ever wondered how to help secure your AWS environment? This session explains a series of best practices that help you do just that with AWS Identity and Access Management (IAM). We discuss how to create great access policies; how to manage security credentials (access keys, passwords, multi-factor authentication (MFA) devices, etc.); how to set up least privilege; how to minimize the use of your root account; and much, much more. This is a repeat session that will be translated simultaneously into Japanese.
SEC305 - IAM Best Practices
by Anders Samuelsson - Principal Technical Program Manager with Amazon Web Services
Ever wondered how to help secure your AWS environment? This session explains a series of best practices that help you do just that with AWS Identity and Access Management (IAM). We discuss how to create great access policies; how to manage security credentials (access keys, passwords, multi-factor authentication (MFA) devices, etc.); how to set up least privilege; how to minimize the use of your root account; and much, much more.
SEC304 - Bring Your Own Identities - Federating Access to Your AWS Environment
by Shon Shah - Senior Product Manager with Amazon Web Services
Have you wondered how you can use your corporate directory for accessing AWS? Or how you can build an AWS-powered application accessible to the millions of users from social identity providers like Amazon, Google, or Facebook? If so, this session will give you the tools you need to get started. It will provide a variety of examples to make it easier for you to use other identity pools with AWS, as well as cover open standards like Security Assertion Markup Language (SAML). Anyone who deals with external identities won't want to miss this session.
SEC303 - Mastering Access Control Policies
by Jeff Wierer - Sr. Manager with Amazon Web Services
If you have ever wondered how best to scope down permissions in your account, this in-depth look at the AWS Access Control Policy language is for you. We start with the basics of the policy language and how to create policies for users and groups. We look at how to use policy variables to simplify policy management. Finally, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket, allowing an IAM user to manage their own credentials and passwords, and more.
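A concrete instance of the policy-variable technique this session covers is the classic "per-user home directory in a shared S3 bucket" policy. The sketch below builds that policy document; the bucket name is hypothetical, but `${aws:username}` is real IAM policy-variable syntax that resolves to the requesting user's name at evaluation time, so one policy serves every user in the account.

```python
import json

# Hypothetical shared bucket name.
BUCKET = "example-corp-home"

user_home_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Let each user list only their own prefix in the shared bucket.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": "home/${aws:username}/*"}},
        },
        {   # Let each user read and write objects under their own prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/home/${{aws:username}}/*",
        },
    ],
}

print(json.dumps(user_home_policy, indent=2))
```

Attached to a group, this single document gives every member an isolated `home/<username>/` area without writing one policy per user.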
SEC302 - Delegating Access to Your AWS Environment
by Shon Shah - Senior Product Manager with Amazon Web Services
Do you have multiple AWS accounts that you want to share resources across? Considering an AWS partner offering that requires access to your AWS account? Delegation is your friend! Come learn how you can easily and securely delegate access to users in other AWS accounts, third parties, or even other AWS services using the delegation options available in AWS Identity and Access Management (IAM).
SEC301 - Encryption and Key Management in AWS
by Todd Cignetti - Senior Product Manager, Security with Amazon Web Services and Ken Beer - Principal Product Manager with Amazon Web Services
Sensitive customer data needs to be protected throughout AWS. This session discusses the options available for encrypting data at rest in AWS. It focuses on several scenarios, including transparent AWS management of encryption keys on behalf of the customer to provide automated server-side encryption and customer key management using partner solutions or AWS CloudHSM. This session is helpful for anyone interested in protecting data stored in AWS.
SEC202 - Closing the Gap: Moving Critical, Regulated Workloads to AWS
by Chad Woolf - Director, AWS Risk & Compliance with Amazon Web Services
AWS provides a number of tools and processes to help you decide when and how to move audited, regulated, and critical business data to the cloud. In this session, we answer the following questions: When is it time for you to make this significant move? When will you be ready to address industry best practices for control (including third-party audits, access control configurations, incident response, data sovereignty, and encryption)? We discuss how some highly regulated AWS customers have addressed the challenges that legacy regulatory requirements present to partners, vendors, and customers in migrating to the AWS Cloud. Finally, we cover general trends we're seeing in several regulated industries leveraging AWS and the trends we're seeing from the regulators themselves who audit and accept AWS control environments.
SEC201 - AWS Security Keynote Address
by Steve Schmidt - Chief Info. Sec. Officer with Amazon Web Services
Security must be at the forefront for any online business. At AWS, security is priority number one. Stephen Schmidt, vice president and chief information security officer for AWS, shares his insights into cloud security and how AWS meets our customers' demanding security and compliance requirements, and in many cases helps them improve their security posture. Stephen, with his background with the FBI and his work with AWS customers in government, space exploration, research, and financial services organizations, shares an industry perspective that's unique and invaluable for today's IT decision makers. At the conclusion of this session, Stephen also provides a brief summary of the other sessions available to you in the security track.
SOV209 - Introducing AWS Directory Service
by Gene Farrell - General Manager, Amazon WorkSpaces, EC2 Windows, & AWS Directory Service with Amazon Web Services
AWS Directory Service is a managed service that allows you to connect your AWS resources with an existing on-premises Microsoft Active Directory or to set up a new, standalone directory in the AWS cloud. Connecting to an on-premises directory is easy, and once this connection is established, all users can access AWS resources and applications with their existing corporate credentials. You can also launch managed, Samba-based directories in a matter of minutes, simplifying the deployment and management of Windows workloads in the AWS cloud. You can join Amazon EC2 Windows instances, get Kerberos-based SSO, and use your favorite Windows tools for administration. In this session, we demonstrate AWS Directory Service features and show you how to use this service to reduce workflow complexity for your users and IT staff.
SOV208 - Amazon WorkSpaces and Amazon Zocalo
by Alaa Badr - Principal BDM - Amazon Desktop and Apps with Amazon Web Services
This session provides an overview and demonstrations of the key features and benefits of Amazon WorkSpaces and Amazon Zocalo. Amazon WorkSpaces is a fully managed desktop computing service in the cloud that allows you to easily provision cloud-based desktops that allow users to access the documents, applications, and resources they need. Amazon Zocalo is a fully managed enterprise storage and sharing service that offers enhanced security, strong administrative controls, and feedback capabilities. Users can access both services wherever they are with a device of their choice, including PCs and Macs as well as iPad, Kindle Fire, or Android tablets. Attend this session to learn more about these services, including how to manage them, what the experience is like for users, and how to get the most out of these services.
SOV207 - Amazon AppStream
by Collin Davis - Director, Application Services, AppStream with Amazon Web Services and Chris Van Duyne - Chief Engineer with DiSTI
Amazon AppStream is an application streaming service that enables powerful GPU, CPU, and memory-intensive applications to run on mass-market devices, thereby removing device constraints. In this technical session, we show you how to use Amazon AppStream to build and deploy an application, customize the clients, and manage entitlements for user access.
SOV204 - Scaling Up to Your First 10 Million Users
by Chris Munns - Solutions Architect with Amazon Web Services
Cloud computing gives you a number of advantages, such as the ability to scale your application on demand. If you have a new business and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
SOV203 - Understanding AWS Storage Options
by Guy Farber - Business Development Manager with Amazon Web Services
With AWS, you can choose the right storage service for the right use case. This session shows the range of AWS choices, from object storage to block storage, that is available to you. We include specifics about real-world deployments from customers who are using Amazon S3, Amazon EBS, Amazon Glacier, and AWS Storage Gateway.
SOV202 - Choosing Among AWS Managed Database Services
by Brian Rice - Product Marketing Manager, Amazon RDS with Amazon Web Services
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We'll cover how each service might help support your application, how much each service costs, and how to get started.
SPOT305 - Event-Driven Computing on Change Logs in AWS
by Khawaja Shams - Technical Advisor with Amazon Web Services and Marvin Theimer - VP, Distinguished Engineer with Amazon Web Services
An increasingly common form of computing is computation in response to recently occurring events. These might be newly arrived or changed data, such as an uploaded Amazon S3 image file or an update to an Amazon DynamoDB table, or they might be changes in the state of some system or service, such as termination of an EC2 instance. Support for this form of computing requires both a means of efficiently surfacing events as a sequence of change records, as well as frameworks for processing such change logs. This session provides an overview of how AWS intends to facilitate event-driven computing through support for both change logs as well as various means of processing them.
SPOT304 - The Quest for the Last 9: Building Highly Available Services from the Ground Up
by Khawaja Shams - Technical Advisor with Amazon Web Services and Valentino Volonghi - CTO with AdRoll
The heart of any service is its availability. Even the coolest game, mobile app, or website is hard to love if users can't rely on it to be available when they need it the most. In this session, we dive deep into how we think about availability of services at AWS and the measures we take to maximize our availability at scale: starting from multi-datacenter services to our deployment methodology, canaries, and fault tolerance in general. In this session, you will get a comprehensive view of how different organizations think about availability and the tradeoffs we make on our way to building highly available services.
SPOT303 - Building Mission Critical Database Applications: A Conversation with AWS Customers about Best Practices
by Swami Sivasubramanian - General Manager, AWS NoSQL with Amazon Web Services
Databases are at the heart of IT systems, and their operation presents unique challenges. Join us for an open discussion with Financial Times and Swami Sivasubramanian, co-creator and builder of DynamoDB and General Manager of AWS NoSQL, to learn about strategies and best practices around managed database solutions: how they can be leveraged to scale along with your business; when the use of managed databases makes sense (and when it doesn't); and how to save money with the different types of database services offered by AWS.
SPOT302 - Under the Covers of AWS: Core Distributed Systems Primitives That Power Our Platform
by Swami Sivasubramanian - General Manager, AWS NoSQL with Amazon Web Services and Allan Vermeulen - VP/Distinguished Engineer, Storage Services with Amazon Web Services
AWS and Amazon.com operate some of the world's largest distributed systems infrastructure and applications. In our past 18 years of operating this infrastructure, we have come to realize that building such large distributed systems to meet the durability, reliability, scalability, and performance needs of AWS requires us to build our services using a few common distributed systems primitives. Examples of these primitives include a reliable method to build consensus in a distributed system, a reliable and scalable key-value store, infrastructure for a transactional logging system, scalable database query layers using both NoSQL and SQL APIs, and a system for scalable and elastic compute infrastructure. In this session, we discuss some of the solutions that we employ in building these primitives and our lessons in operating these systems. We also cover the history of some of these primitives - DHTs, transactional logging, materialized views, and various other deep distributed systems concepts; how their design evolved over time; and how we continue to scale them to AWS.
SPOT301 - AWS Innovation at Scale
by James Hamilton - VP & Distinguished Engineer with Amazon Web Services
This session, led by James Hamilton, VP & Distinguished Engineer, gives an insider view of some of the innovations that help make the AWS cloud unique. He will show examples of AWS networking innovations from the interregional network backbone, through custom routers and the networking protocol stack, all the way down to individual servers. He will show examples from AWS server hardware, storage, and power distribution, and then move up the stack to high-scale streaming data processing. James will also dive into fundamental database work AWS is delivering to open up scaling and performance limits, reduce costs, and eliminate much of the administrative burden of managing databases. Join this session and walk away with a deeper understanding of the underlying innovations powering the cloud.
SPOT211 - State of the Union: Amazon Compute Services
by Tom Johnston - Business Development Manager, Cloud Economics with Amazon Web Services; Peter De Santis - Vice President, Amazon Compute Services with Amazon Web Services; and Matt Garman - Vice President, Amazon EC2 with Amazon Web Services
Join Peter De Santis, Vice President of Amazon Compute Services, and Matt Garman, Vice President of Amazon EC2, as they share a “behind the scenes” look at the evolution of compute at AWS. You'll hear about the drivers behind the innovations we've introduced, and learn how we've scaled our compute services to meet dramatic usage growth.
SPOT209 - State of the Union: AWS Simple Storage and Glacier Services
by Mai-Lan Tomsen Bukovec - General Manager, S3 with Amazon Web Services
Mai-Lan Tomsen Bukovec, General Manager of Amazon Simple Storage Service, will share lessons learned from running and growing the AWS storage services. You will hear about the interesting ways customers are using S3 and Glacier, learn about the new features we launched this year, and hear how we think about evolving the storage services.
SPOT208 - Managing the Pace of Innovation: Behind the Scenes at AWS
by Jim Scharf - Technical Advisor with Amazon Web Services and Charlie Bell - Sr. Vice President of Utility Computing with Amazon Web Services
AWS launched in 2006, and since then we have released more than 1,000 services, features, and major announcements. Every year, we outpace the previous year in launches and are continuously accelerating the pace of innovation across the organization. In this session, Charlie Bell, Sr. Vice President of Engineering, will share how his teams are able to formulate customer-centric ideas, turn them into features and services, and get them to market quickly. This session dives deep into how an idea becomes a service at AWS and how we continue to evolve the service after release through innovation at every level. We will then walk through some real-world examples of how we applied these concepts to launch a new AWS service. Come learn about the rapid pace of innovation at AWS, and the culture that formulates magic behind the scenes.
SPOT207 - State of the Union: AWS Database Services
by Raju Gulabani - Vice President of Database Services with Amazon Web Services
Raju Gulabani, Vice President of AWS Database Services, will share the thinking behind the evolution of database services at AWS and the drivers of the innovations we've delivered. You'll learn how customers are using Amazon RDS, Amazon DynamoDB, Amazon ElastiCache and Amazon Redshift, and how to choose the right service for your workload.
SPOT205 - State of the Union: AWS Mobile Services and New World of Connected Products
by Jinesh Varia - Senior Program Manager, Mobile Services with Amazon Web Services; Marco Argenti - VP, AWS Mobile with Amazon Web Services; Patrik Arnesson - CEO with Football Addicts; and Landon Spear - Software Engineer with Path
In this session, Marco Argenti, Vice President of AWS Mobile, kicks off the Mobile and Connected Devices Track and shares our vision and the latest products and features we have launched this year. He gives an overview of our mobile services, shares trends we are seeing among mobile customers, and brings some key mobile customers on stage to share their experiences.
SPOT204 - VC Panel Discussion
by Brad Steele - Business Development Manager with Amazon Web Services; Jerry Chen - Partner with Greylock Partners; Ariel Tseitlin - Partner with Scale Venture Partners; Stephen Herrod - Managing Director with General Catalyst; Matthew McIlwain - Managing Director with Madrona Venture Group; and Joel Yarmon - Partner with Draper Associates
Hear what this high-powered panel of top venture capitalists of the next wave of cloud innovation have to say about trends, pleasant and unpleasant surprises, the next big things on the horizon, and emerging startup hotspots for cloud apps and infrastructure. Jerry Chen (Partner, Greylock Partners); Joel Yarmon (Partner, Draper Associates); Ariel Tseitlin (Partner, Scale Venture Partners); Matt McIlwain (Managing Director, Madrona Venture Group); and Dr. Steve Herrod (Partner, General Catalyst Partners) convene to share their expertise and thoughts on what the future holds for startups across the globe.
SPOT203 - 3rd Annual Startup Launches moderated by Werner Vogels
by Werner Vogels - CTO with Amazon Web Services
Join this exciting session and see five AWS-powered startups launch on stage with Amazon.com CTO, Dr. Werner Vogels. Learn how these innovative new startups are building solutions using the AWS cloud as each company makes a significant, never before shared launch announcement. Session attendees also receive special discounts on the newly-launched products. Whether you're an entrepreneur, startup, or tech enthusiast, you won't want to miss these startup launches!
SPOT202 - CTO-to-CTO Fireside Chat with Dr. Werner Vogels
by Werner Vogels - CTO with Amazon Web Services; Seth Proctor - CTO with NuoDB; Andrew Miklas - CTO with PagerDuty; and Chris Wanstrath - CEO & Co-Founder with GitHub
This one-on-one fireside chat, hosted by Amazon CTO Werner Vogels, gets into the mindsets of the technical leaders behind some of the most progressive and innovative startups in the world. This is your opportunity to learn what happens behind the scenes, how pivotal technology and AWS infrastructure decisions are made, and the thinking that leads to products and services that disrupt and reshape how businesses and people use technologies day to day.
SPOT201 - Founders Fireside Chat with Dr. Werner Vogels
by Werner Vogels - CTO with Amazon Web Services; Adam Jacob - CTO and Co-Founder with Chef; Dan Wagner - CEO and Founder with Civis Analytics; and Alan Schaaf - Founder and CEO with Imgur
Werner Vogels, Amazon CTO, sits down face-to-face with the leaders who have taken their startups from an idea on a cocktail napkin to known names in a matter of a few years by harnessing the possibilities of technology and AWS. Their insights and learnings apply not only to fledgling startups and future entrepreneurs, but to enterprises seeking out ways to become more agile, responsive, and dynamic in the rapid technology race.
WEB401 - Optimizing Your Web Server on AWS
by Jonathan Desrocher - Solutions Architect with Amazon Web Services and Justin Lintz - Sr. Web Operations Engineer with Chartbeat
Tuning your EC2 web server will help you to improve application server throughput and cost-efficiency as well as reduce request latency. In this session we will walk through tactics to identify bottlenecks using tools such as CloudWatch in order to drive the appropriate allocation of EC2 and EBS resources. In addition, we will also be reviewing some performance optimizations and best practices for popular web servers such as Nginx and Apache in order to take advantage of the latest EC2 capabilities.
WEB307 - Scalable Site Management Using AWS OpsWorks
by Chris Barclay - Senior Product Manager with Amazon Web Services; Jonathan Quail - Software Development Engineer with FillZ; and Cliff McCollum - Software Engineering Manager with FillZ
Migrating from a hosted environment to AWS is a good opportunity to streamline deployment and site operations. This session shows how FillZ used AWS OpsWorks with other tools to automate site operations and deliver a highly available site that is used by large numbers of customers. Through code and examples, this session shows you how to automate deployments across an entire fleet, configure a patching strategy, use common tools to create useful alarms and monitor system performance, and employ security best practices in AWS.
WEB306 - UI, Load, and Performance Testing Your Websites on AWS
by Dave Mozealous - Quality Assurance Manager with Amazon Web Services; Leo Zhadanovsky - Senior Solutions Architect with Amazon Web Services
The only way to accurately see how your website performs on heavy load, spiky usage, and with different web browsers and platforms is to do load and UI testing. This session explains how Amazon.com uses Amazon EC2 to do automated UI testing, at scale. We also cover using a variety of open source and commercial load and performance testing tools. These tools help you identify weak points in your web architecture, fix them, and ensure that your website can scale to the demand of your users. This session shows DevOps engineers, systems administrators, software developers, QA specialists, and front-end developers how to use tools and services like Amazon CloudWatch, Selenium, Siege, Bees with Machine Guns, and New Relic to load test their websites and applications running on AWS, and achieve desired levels of performance.
WEB305 - Migrating Your Website to AWS
by Randall Hunt - Technical Evangelist with Amazon Web Services; Eugene Ventimiglia - Director of Technical Operations with Buzzfeed
Moving your website to AWS can provide numerous advantages, including room to grow, increased physical security, and lower costs of running your website. In this session we'll focus on how you can move your existing website to AWS so you can take advantage of these benefits. You'll hear about how BuzzFeed migrated to AWS when Hurricane Sandy impacted their operations. Director of BuzzFeed's Tech Ops, Eugene Ventimiglia, will walk through the timeline of the migration and describe how BuzzFeed was able to continue serving millions of users during Hurricane Sandy. We'll discuss how to set up your site in AWS, strategies for managing the transition through deployment tools, load balancing trial deployments, and DNS cutover, as well as configuration settings necessary to ensure that your site will run well.
WEB304 - Running and Scaling Magento on AWS
by Shaun Pearce - Solutions Architect with Amazon Web Services; Zachary Stevens - Chief Architect with Elastera
Magento is a leading open source, eCommerce platform used by many global brands. However, architecting your Magento platform to grow with your business can sometimes be a challenge. This session walks through the steps needed to take an out-of-the-box, single-node Magento implementation and turn it into a highly available, elastic, and robust deployment. This includes an end-to-end caching strategy that provides an efficient front-end cache (including populated shopping carts) using Varnish on Amazon EC2 as well as offloading the Magento caches to separate infrastructure such as Amazon ElastiCache. We also look at strategies to manage the Magento Media library outside of the application instances, including EC2-based shared storage solutions and Amazon S3. At the data layer we look at Magento-specific Amazon RDS-tuning strategies including configuring Magento to use read replicas for horizontal scalability. Finally, we look at proven techniques to manage your Magento implementation at scale, including tips on cache draining, appropriate cache separation, and utilizing AWS CloudFormation to manage your infrastructure and orchestrate predictable deployments.
WEB302 - Best Practices for Running WordPress on AWS
by Andreas Chatzakis - Solutions Architect with Amazon Web Services; Chris Pitchford - Primary Vapor Wrangler with News UK and Ireland
WordPress is an open-source blogging tool and content management system (CMS) that can power anything from personal blogs to high traffic websites. This session covers best practices for deploying scalable WordPress-powered websites on AWS. Starting from one-click single-instance installations from the AWS Marketplace, we move on to WordPress implementation details that help you make the most of AWS elasticity. We provide a blueprint architecture for high availability (Elastic Load Balancing, Auto Scaling, Amazon RDS multi-AZ). You learn how to use Amazon S3 to create a stateless web tier, how to improve performance with Amazon ElastiCache and Amazon CloudFront, how to manage your application lifecycle with AWS Elastic Beanstalk, and more.
WEB301 - Operational Web Log Analysis
by Chris Munns - Solutions Architect with Amazon Web Services
Log data contains some of the most valuable raw information you can gather and analyze about your infrastructure and applications. Amid the mess of confusing lines of seemingly random text can be hints about performance, security, flaws in code, user access patterns, and other operational data. Without the proper tools, finding insights in these logs can be like searching for a hay-colored needle in a haystack. In this session you learn what practices and patterns you can easily implement that can help you better understand your log files. You see how you can customize web logs to add more information to them, how to digest logs from around your infrastructure, and how to analyze your log files in near real time.
WEB204 - Speeding Up Your Site's Performance with a Web Cache
by Steve Mueller - Solutions Architect with Amazon Web Services
Adding a web caching layer to your website can decrease page load times and increase the number of users each instance can serve. This session helps you understand how to use tools like Varnish to configure and utilize a web caching layer to improve the performance of your website. Learn how to add a web caching layer to your existing website and how to use it most effectively. We also cover tips and tricks for deploying and running a web caching layer in AWS.
WEB203 - Building a Website That Costs Pennies to Operate
by John Mancuso - Solutions Architect with Amazon Web Services
Amazon S3 gives you the ability to serve files from your Amazon S3 buckets. This session shows you how to set up a website with Amazon S3 to serve your static content.  We show how you can use open source tools like Jekyll and Octopress to run a blog on your static site.  Finally, you see how you can make that site more dynamic using other AWS products and the AWS SDK for JavaScript.
WEB202 - Best Practices for Handling a 20x Traffic Spike
by Alex Dunlap - Senior Manager with Amazon Web Services; Krzysztof Wiercioch - Infrastructure Engineer with Scribblelive
Promotions and product updates can bring a heavy load of traffic to your website all at once. In this session, we cover best practices for being able to handle those traffic spikes easily using Amazon Route 53, Elastic Load Balancing, and Amazon CloudFront. You learn how to configure your site to scale as needed, and how to configure your DNS and CDN settings so that a massive influx in traffic won't keep your site's visitors from having a great experience.



Inspired by Rodney Haywood's index in 2012, I decided to do the same for 2013.  I borrowed from his HTML formatting.  The code is in this github project, which is a mix of Chrome dev tools web scraping, the Google Data API (YouTube), JSlideShare (with updates required), and JMustache.  Last year, I wrote the code in Groovy (~150 lines of code) as I wanted quick prototyping and a smaller project to play around with Groovy. This year I decided to write the code in Scala (~150 lines of code as well). The code took two evenings of hacking.  If you see any missing information, feel free to issue a pull request to fix it.
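The rendering step of that code can be sketched roughly as follows. This is a simplified stand-in, not the project's actual code: the real project feeds the scraped session data through a JMustache template, while here plain Scala string interpolation stands in for the template, and the Session fields are illustrative assumptions.

```scala
// Simplified sketch of the index-rendering step. In the real project a
// JMustache template does this job; here string interpolation stands in.
// Field names (code, title, speakers, abstractText) are illustrative.
case class Session(code: String, title: String, speakers: String, abstractText: String)

object IndexRenderer {
  // Render each scraped session as a heading, a byline, and its abstract,
  // matching the shape of the listing above.
  def render(sessions: Seq[Session]): String =
    sessions.map { s =>
      s"""<h3>${s.code} - ${s.title}</h3>
         |<p>by ${s.speakers}</p>
         |<p>${s.abstractText}</p>""".stripMargin
    }.mkString("\n")
}
```

Feeding it one scraped record produces the same heading-plus-byline shape seen in the listing above; swapping the interpolated string for a compiled JMustache template keeps the markup out of the code, which is why the real project went that way.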