Tuesday, December 17, 2013

Zookeeper as a cloud native service registry

I have been working with IBM teams to deploy IBM public cloud services using large parts of the Netflix OSS cloud platform.  I realize that there are multiple ways to skin every cat, and there are parts of the Netflix approach that aren't yet well accepted across the majority of cloud native architectures in the industry (cloud native is still an architecture being re-invented and improved everywhere).  Given this state of the world, I get asked questions along the lines of "why should we use Netflix technology or approach foobar vs. some other well adopted cloud approach baz that I saw at a meetup?".  Sometimes the answer is easy.  For example, there are things that Netflix does for web scale reasons that aren't initially sensible below web scale.  Sometimes the answers are far harder, and the reasoning likely lies in experience Netflix has gained that isn't evident until a service that didn't adopt the technology gets hit with a disruption that makes it clear.  My services haven't yet been through anywhere near this battle hardening.  Therefore, I admit that personally I sometimes lack the full insight required to fully answer the questions.

One "foobar" that has bothered me for some time is the use of Eureka for service registration and discovery.  The equivalent "baz" has been zookeeper.  You can read Netflix's explanation of why they use Eureka for a service registry (vs. zookeeper) on their wiki, and there is more discussion on their mailing list.  The explanations seem reasonable, but it kept bothering me as I see "baz" approaches left and right.  Some examples are Parse's recent re:Invent 2013 session and Airbnb's SmartStack.  SmartStack shows most concretely how zookeeper is being used, since Airbnb has released Nerve and Synapse as open source, but Charity Majors was also pretty clear architecturally in her talk.  Unfortunately, both seem to lack clarity on how zookeeper handles cloud native high availability (how it handles known cloud failures, and how zookeeper itself is deployed and bootstrapped).  For what it's worth, Airbnb already states publicly that "The achilles heel of SmartStack is zookeeper".  I do think there are non-zookeeper aspects of SmartStack that are very interesting (more on that later).

To see if I understood the issues, I ran a quick experiment around high availability.  The goal was to understand what happens to clients during a zookeeper node failure or a network partition event between availability zones.

I decided to download the docker container from Scott Clasen that has zookeeper and Netflix's exhibitor already installed and ready to play with.  Docker is useful as it lets you stitch things together quickly locally, without the need for full-blown connectivity to the cloud.  I was able to hack together three containers (a zookeeper ensemble) using a mix of docker hostnames and xip.io, but I wasn't comfortable that this was a fair test given the DNS wasn't "standard".  Exhibitor also let me get moving quickly with zookeeper given its web-based UI.  Once I hit the issue of not being able to edit /etc/hosts under docker, I was even less confident that I could simulate network partitions (for which I planned to use Jepsen's iptables DROP approach).

Instead I decided to move the files Scott set up on the docker image over to cloud instances (hello, where is docker import support on clouds?!).  I started up one zookeeper/exhibitor instance per availability zone within a region.  Pretty quickly I was able to define (with real hostnames) and run a three-node zookeeper ensemble across the availability zones.  I could then do the typical service registration (using zkCli.sh) with ephemeral nodes:

create -e /hostnames/{hostname or other id} {ipaddress or other service metadata}

Under normal circumstances, I saw the new hosts across availability zones immediately.  Also, after a client went away (exiting zkCli.sh), the other nodes would see the host disappear.  This is the foundation for service registration and ephemeral de-registration.  Using zookeeper for this ephemeral registration, automatic de-registration, and client watches is rather straightforward.  However, this was under normal circumstances.
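The ephemeral-node behavior described above can be sketched in a few lines of Python.  This is a toy in-memory model for illustration only; `ToyRegistry` and its session handling are my own invention, not a zookeeper API:

```python
# Toy in-memory model of zookeeper-style ephemeral registration
# (illustration only; a real client would use zkCli.sh or a library
# such as kazoo, and zookeeper itself would track the sessions).

class ToyRegistry:
    def __init__(self):
        # path -> (data, id of the session that created the node)
        self.nodes = {}

    def create_ephemeral(self, session_id, path, data):
        # An ephemeral node lives only as long as its creating session.
        self.nodes[path] = (data, session_id)

    def close_session(self, session_id):
        # When a session ends (client exit, crash, or timeout), all of
        # its ephemeral nodes are deleted -- the automatic
        # de-registration described above.
        self.nodes = {p: (d, s) for p, (d, s) in self.nodes.items()
                      if s != session_id}

    def children(self, prefix):
        return sorted(p for p in self.nodes if p.startswith(prefix))


reg = ToyRegistry()
reg.create_ephemeral("session-1", "/hostnames/host-a", "10.0.0.1")
reg.create_ephemeral("session-2", "/hostnames/host-b", "10.0.0.2")
print(reg.children("/hostnames/"))  # both hosts are visible
reg.close_session("session-1")      # client goes away
print(reg.children("/hostnames/"))  # host-a is de-registered automatically
```

A real deployment gets this behavior from zookeeper sessions themselves; the point is only that registration entries are tied to the lifetime of the client's session.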

I then used exhibitor to stop one of the three instances.  Clients connected to the remaining two instances could still interact with the zookeeper ensemble.  This is expected, as zookeeper keeps working as long as a majority of the nodes are still in service.  I would also expect clients connected to the failed instance to keep working, either through a more advanced client connectivity configuration than my zkCli.sh setup or through a discovery client's caching of last known results.  Also, in a realistic configuration you'd likely run more than one instance of zookeeper per availability zone, so a single failing node isn't all that interesting.  I didn't spend too much time on this part of the experiment, as I wanted to focus on network partition events.
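The majority rule at work here is simple arithmetic; a quick sketch (the helper names are mine):

```python
# zookeeper serves requests only while a strict majority (quorum) of
# the ensemble is reachable; helper names here are my own.

def quorum(ensemble_size):
    return ensemble_size // 2 + 1

def can_serve(reachable, ensemble_size):
    return reachable >= quorum(ensemble_size)

# Three-node ensemble with one node stopped: the remaining two still
# form a majority and keep serving.
print(can_serve(2, 3))  # True
# The stopped node's side alone cannot serve.
print(can_serve(1, 3))  # False
```

Note that growing the ensemble raises the quorum with it: a five-node ensemble needs three reachable nodes, so adding nodes alone never removes the majority requirement.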

For network partition events, I followed a Jepsen-like approach to simulating partitions.  I went into one of the instances and did an iptables DROP on all packets coming from the other two instances.  This simulates an availability zone continuing to function but losing network connectivity to the other availability zones.  What I saw was that the two other instances noticed the first server "going away" but continued to function, as they still saw a majority (66%).  More interestingly, the first instance noticed the other two servers "going away", dropping the ensemble availability it could see to 33%.  This caused the first server to stop serving requests to clients (not only writes, but also reads).

You can see this happening "live" in the following animated GIF.  Note the colors of the nodes in the exhibitor UIs as I separate the left node from the middle and right nodes:


To me this seems like a concern, as a network partition should be considered an event that is survived.  In this case (with this specific configuration of zookeeper), no new services in that availability zone could register themselves, and consumers within the same zone could not discover them.  Adding more zookeeper instances to the ensemble wouldn't help: with a balanced deployment across three zones, a single-zone partition always splits the ensemble into a majority (66%) and a minority (33%).

It is worth noting that if your application cannot survive a network partition event itself, then you likely aren't going to worry so much about your service registry and what I have shown here.  That said, applications that can handle network partitions well are likely either completely stateless or have relaxed eventual consistency in their data tier.

I didn’t simulate the worst-case scenario, which would be to separate all three availability zones from each other.  In that case, all availability of zookeeper would go to zero, as each zone would represent a minority (33% for each zone).
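Putting numbers on these partition scenarios (a sketch reusing the majority rule; a zone layout of [1, 1, 1] means one zookeeper node per availability zone, and the helper names are mine):

```python
# Which sides of a network partition can still serve? A side serves only
# if it holds a strict majority of the whole ensemble. The zone layout
# and helper names are mine, for illustration.

def quorum(total_nodes):
    return total_nodes // 2 + 1

def serving_sides(zone_sizes, partition):
    # partition: list of sets of zone indices that can still reach
    # each other (each set is one side of the partition).
    total = sum(zone_sizes)
    return [side for side in partition
            if sum(zone_sizes[z] for z in side) >= quorum(total)]

zones = [1, 1, 1]  # one zookeeper node per availability zone

# One zone cut off from the other two: only the two-zone side (a 66%
# majority) keeps serving; the isolated zone (33%) stops.
print(serving_sides(zones, [{0}, {1, 2}]))

# Worst case, all three zones separated: no side has a majority, so
# zookeeper availability drops to zero everywhere.
print(serving_sides(zones, [{0}, {1}, {2}]))
```

Running the same check with two or more nodes per zone gives the same answers, which is the balanced-deployment point above: more nodes don't change which side of the partition holds the majority.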

One other thing I didn't cover here is the bootstrapping of the service registry location (ensuring that the process of locating the registry is highly available as well).  With Eureka, there is a pretty cool trick that depends only on DNS (specifically TXT records) on any client.  With Eureka, the actual location of the Eureka servers is not burned into client images.  I'm not sure how to approximate this with zookeeper currently (or whether there are zookeeper client libraries that keep this from being an issue), but you could certainly follow a similar trick.
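To sketch the DNS-based bootstrapping trick: clients resolve a well-known DNS name whose TXT records list the registry servers, so no server addresses are baked into client images.  The record format below (`zk=host:port`) is purely hypothetical, neither Eureka's actual scheme nor any zookeeper convention; only the string handling is shown:

```python
# Sketch of DNS-based bootstrap: merge TXT record payloads into a
# zookeeper connection string. The "zk=" record format is hypothetical,
# chosen only for this illustration.

def connect_string_from_txt(txt_records):
    hosts = []
    for record in txt_records:
        if record.startswith("zk="):
            hosts.append(record[len("zk="):])
    # A deterministic, comma-separated host list is the shape zookeeper
    # clients typically accept as a connection string.
    return ",".join(sorted(hosts))


# In production the records would come from a DNS lookup of a
# well-known name (the hostnames below are made up).
records = ["zk=zk-a.example:2181", "zk=zk-b.example:2181", "other=ignore"]
print(connect_string_from_txt(records))
# zk-a.example:2181,zk-b.example:2181
```

The TXT lookup itself would be done with an ordinary DNS client library; the win is that rotating registry servers becomes a DNS update rather than an image rebuild.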

I believe most of this comes down to the fact that zookeeper isn't really designed to be a distributed cloud native service registry.  It is true that zookeeper makes service registration, ephemeral de-registration, and client notifications easy to implement, but I think its focus on consistency makes it a poor choice for a service registry in a cloud environment with an application that is designed to handle network partitions.  That's not to say there are no use cases where zookeeper is far better than Eureka.  As Netflix has said, for cases with high consistency requirements (leader election, ordered updates, and distributed synchronization), zookeeper is a better choice.


I am still learning here myself, so I'd welcome comments from Parse and Airbnb on whether I missed something important about their use of zookeeper for service registration.  For now, my experiment shows, at the very least, that I wouldn't want to use zookeeper for service registration without a far more advanced approach to handling network partitions.

Saturday, November 30, 2013

AWS re:Invent 2013 Video & Slide Presentation Links with Easy Index

I wanted a quick index of all of the re:Invent 2013 sessions, links to the slides on SlideShare and videos on YouTube.  Some are not yet posted.  I'll rerun the script again in a week or so and see if any more appear.  If you see any errors, let me know by posting a comment.  For information on the code that generated this page see below.


ARC201 - They Don't Hug Back! Or Why You Need to Stop Worrying about Prodweb001 and Start Loving i-98fb9856
by Chris Munns - Solutions Architect with Amazon Web Services and Martin Rhoads - Site Reliability Engineer with Airbnb
Traditionally, IT organizations have treated infrastructure components like family pets. We name them, we worry about them, and we let them wake us up at 4:00 am. Amazon CTO Werner Vogels has dubbed these behaviors "server hugging" and called them antiquated in today's cloud infrastructures. In this breakout session, we discuss methods to get away from "server hugging" and be concerned more with the overall status and life of our entire infrastructure. From making use of toss-away-able on-demand infrastructure, to monitoring services rather than individual servers, to getting away from "naming" instances, this session helps you see your infrastructure for what it is: technology that you control.
ARC202 - High Availability Application Architectures in Amazon VPC
by Brett Hollman - Solutions Architect with Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
ARC203 - How Adobe Deploys: Refreshing the Entire Stack Every Time
by John Martinez - Manager, Cloud Platform Engineering with Adobe
Automating application deployments is old hat. Imagine a world where you can build everything from your "data center" up to your application via code and automate it. It's a reality, and it's called AWS CloudFormation. At Adobe, we use AWS CloudFormation to define our infrastructure in AWS as code. By using AWS CloudFormation in combination with other tools, including OpsCode Chef, we are able to create highly flexible and customized workflows that ensure consistent and audited deployments. From our VPCs to our applications, we can build and tear down in a matter of minutes. Come see how we have put the power of AWS CloudFormation to work at Adobe using advanced techniques such as substacks, identity and access management roles, bootstrapping into Chef, and more, including a demo of our automation environment. For us, AWS CloudFormation is the service that ties a pretty bow around all of the other powerful AWS offerings.
ARC205 - Deploying the 'League of Legends' Data Pipeline with Chef
by Trotter Cashion - Engineering Manager with Riot Games
Over the past year, the data team at Riot Games has been using Chef to both configure instances in Amazon Elastic Compute Cloud (EC2) and build AMIs. With Chef as an integral part of the workflow, we've autoscaled thousands of instances in support of the data pipeline for League of Legends and have found that Chef doesn't always play perfectly in the world of autoscaling groups and ephemeral instances. In this talk, we cover what's worked and what's failed and explain how to best utilize Chef in the world of Amazon Web Services.
ARC206 - Scaling on AWS for the First 10 Million Users
by Simon Elisha - Principal Solution Architect with Amazon Web Services
Cloud computing gives you a number of advantages in being able to scale on demand, easily replace whole parts of your infrastructure, and much more. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us at this session to understand some of the common patterns and recommended areas of focus you can expect to work through while scaling an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud. The patterns and practices reviewed in this session will get you there.
ARC208 - Enterprise Service Delivery from the AWS Cloud
by Sridhar Devarapalli - Director, Product Management with Citrix Systems
(Presented by Citrix) As we move to a world where all users are mobile and apps are increasingly delivered from the cloud, security, compliance, and user experience service-level expectations are higher than ever, necessitating that IT look beyond traditional methods for delivering applications. However, there are intelligent cloud networking and provisioning solutions on AWS that can be leveraged to create a service delivery model that addresses the new paradigm. Learn how Citrix NetScaler VPX on AWS provides full application visibility and control through a combination of customer case studies and demos. In this session, you learn how to: deploy Citrix application delivery technologies (NetScaler, NetScaler Gateway, CloudBridge) into AWS; optimize next-gen web applications delivered from AWS, using traffic management and application acceleration capabilities; and provide global application availability across on-premises data centers and multiple AWS regions using CloudBridge, global server load balancing, and Amazon Route 53 DNS.
ARC210 - DevOps Nirvana: Seven Steps to a Peaceful Life on AWS
by Andrew Shieh - Ops Manager with SmugMug and Philip Jacob - Head of Technology with Stackdriver
(Presented by Stackdriver) Key decisions related to architecture, tools, processes, and even team composition can have a dramatic effect on the human effort required to operate distributed applications on AWS. If you make the wrong decisions in these areas, you spend your days, nights, weekends, and vacations dealing with issues and noise. If you make the right decisions, you and your team can focus on building customer value, and your time away from work is spent... not working. Stackdriver and SmugMug describe the seven most important practices that world-class operations teams employ to minimize operational overhead, highlighting real-world examples to illustrate the importance of each.
ARC212 - Drinking our own Champagne: How Woot, an Amazon subsidiary, uses AWS technologies
by Dan Pinkard - Systems Manager with Woot and Vivek Sagi - CTO with Woot
Woot, an Amazon subsidiary, specializes in offering great new product deals every day. Woot's deeply discounted deals and signature events like the 'Woot Off' and 'Bag of Crap' sales launch at specific times throughout the day, and the resulting spiky traffic patterns are highly correlated to revenue. In this session, we offer an unvarnished perspective into how Woot uses services such as Amazon DynamoDB, EC2, ELB, CloudSearch, CloudFront, and SES. Learn how to architect for security and PCI for a retail website running on AWS. Dig into the technical details of a data-store comparison between DynamoDB, Mongo, Oracle, and SQL Server to find the right solution for unique workloads. Join us as we share our musings and real lessons learned from using a cocktail of AWS services. We encourage you to attend even if none of this makes sense or is interesting. Don't miss the opportunity to hang out with Mortimer the Woot monkey and his crew and to walk away with one of our legendary flying monkeys.
ARC213 - How to Host and Manage Enterprise Customers on AWS: Toyota, Nippon Television, UNIQLO Use Cases
by Kazutaka Goto - Evangelist with cloudpack and Ken Tamagawa - Sr. Manager, Solutions Architecture with Amazon Web Services
(Presented by cloudpack) cloudpack is a premium consulting partner of AWS in Japan, and since 2010 has been helping customers architect their workloads for scalability, availability, and disaster recovery. In this session, cloudpack explains how they are solving customer pain points with AWS architecture best practices. Specifically, they will discuss a multi-region disaster recovery system designed for Toyota and a highly available and scalable second-screen system for Nippon Television (JoinTV).
ARC301 - Controlling the Flood: Massive Message Processing with Amazon SQS and Amazon DynamoDB
by Ari Neto - Ecosystem Solution Architect with Amazon Web Services
Amazon Simple Queue Service (SQS) and Amazon DynamoDB together form a fast, reliable, and scalable layer for receiving and processing high volumes of messages, based on their distributed and highly available architecture. We propose a full system that can handle any volume of data or level of throughput without losing messages or requiring other services to be always available. It also enables applications to process messages asynchronously and adds compute resources based on the number of messages enqueued. The whole architecture helps applications reach predefined SLAs, as we can add more workers to improve overall performance. In addition, it decreases total costs because new workers are used briefly and only when they are required.
ARC302 - Data Replication Options in AWS
by Thomas Park - Mgr, Solution Architecture with Amazon Web Services
One of the most critical roles of an IT department is to protect and serve its corporate data. As a result, IT departments spend tremendous amounts of resources developing, designing, testing, and optimizing data recovery and replication options in order to improve data availability and service response time. This session outlines replication challenges, key design patterns, and methods commonly used in today's IT environment. Furthermore, the session provides different data replication solutions available in the AWS cloud. Finally, the session outlines several key factors to be considered when implementing data replication architectures in the AWS cloud.
ARC303 - Unmeltable Infrastructure at Scale: Using Apache Kafka, Twitter Storm, and Elastic Search on AWS
by Philip O'Toole - Senior Architect and Lead Developer with Loggly and Jim Nisbet - CTO and VP of Engineering with Loggly
This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. This presentation describes design details of how we built a second-generation system fully leveraging AWS services including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of proactive and reactive approaches to scaling computational and indexing capacity. The talk includes lessons learned in our first generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine.
ARC304 - Cloud Architectures with AWS Direct Connect
by Steve Carter - Solutions Architect with Amazon Web Services and Roger Greene - Senior Director, Product Marketing with Level 3 Communications
Modern IT is embracing hybrid cloud as part of their overall IT strategy. AWS Direct Connect provides a critical tool for ingesting web scale data or leveraging custom appliances and legacy applications. This talk discusses the unique benefits of using Direct Connect to reduce cost, increase bandwidth, and provide a more consistent network experience between on-premises resources and the cloud. It details the components, requirements, and configuration options.
ARC305 - How Netflix Leverages Multiple Regions to Increase Availability: An Isthmus and Active-Active Case Study
by Ruslan Meshenberg - Director, Platform Engineering with Netflix
Netflix sought to increase availability beyond the capabilities of a single region. How is that possible, you ask? We walk you through the journey that Netflix underwent to redesign and operate our service to achieve this lofty goal. Using the principles of isolation and redundancy, our destination is a fully redundant active-active deployment architecture where end users can be served out of multiple AWS regions. If one region fails, another can quickly take its place. Along the way, we'll explore our "Isthmus" architecture, a stepping stone toward full active-active deployment where only the edge services are redundantly deployed to multiple regions. We'll cover real-world challenges we had to overcome, like near-real-time data replication, and the operational tools and best-practices we needed to develop to make it a success. Discover a whole new breed of monkeys we created to test multiregional resiliency scenarios.
ARC306 - Lumberjacking on AWS: Cutting Through Logs to Find What Matters
by Guy Ernest - Solutions Architect with Amazon Web Services
AWS offers services that revolutionize the scale and cost for customers to extract information from large data sets, commonly called Big Data. This session analyzes Amazon CloudFront logs combined with additional structured data as a scenario for correlating log and transactional data. Successfully implementing this type of solution requires architects and developers to assemble a set of services with multiple decision points. The session provides a design and example of architecting and implementing the scenario using Amazon S3, AWS Data Pipeline, Amazon Elastic MapReduce, and Amazon Redshift. It explores loading, query performance, security, incremental updates, and design trade-off decisions.
ARC307 - Continuous Integration and Deployment Best Practices on AWS
by Leo Zhadanovsky - Senior Solutions Architect with Amazon Web Services and JP Schneider - DevOps / Internet Jedi with Mozilla Foundation
With AWS, companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100 percent API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session, we talk about some key concepts and design patterns for continuous deployment and continuous integration, two elements of lean development of applications and infrastructures.
ARC308 - Architecting for End-to-End Security in the Enterprise
by Hart Rossman - Principal Consultant with Amazon Web Services and Bill Shinn - Principal Security Solutions Architect with Amazon Web Services
This session tells the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture decisions made by Fortune 500 organizations during actual sensitive workload deployments as told by the AWS professional service security, risk, and compliance team members who lived them. In this technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture & service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.
ARC309 - Dynamic Content Acceleration: Lightning Fast Web Apps with Amazon CloudFront and Amazon Route 53
by Parviz Deyhim - Solution Architect with Amazon Web Services and Prasad Kalyanaraman - Vice President, Edge Services with Amazon Web Services
Traditionally, content delivery networks (CDNs) were known to accelerate static content. Amazon CloudFront has come a long way and now supports delivery of entire websites that include dynamic and static content. In this session, we introduce you to CloudFront's dynamic delivery features that help improve the performance, scalability, and availability of your website while helping you lower your costs. We talk about architectural patterns such as SSL termination, close proximity connection termination, origin offload with keep-alive connections, and last-mile latency improvement. Also learn how to take advantage of Amazon Route 53's health check, automatic failover, and latency-based routing to build highly available web apps on AWS.
ARC310 - Orchestration and Deployment Options for Hybrid Enterprise Environments
by Donn Morrill - Manager, Solutions Architecture with Amazon Web Services
"Configure once, deploy anywhere" is one of the most sought-after enterprise operations requirements. Large-scale IT shops want to keep the flexibility of using on-premises and cloud environments simultaneously while maintaining the monolithic custom, complex deployment workflows and operations. This session brings together several hybrid enterprise requirements and compares orchestration and deployment models in depth without a vendor pitch or a bias. This session outlines several key factors to consider from the point of view of a large-scale real IT shop executive. Since each IT shop is unique, this session compares strengths, weaknesses, opportunities, and the risks of each model and then helps participants create new hybrid orchestration and deployment options for the hybrid enterprise environments.
ARC312 - SmugMug's Zero-Downtime Migration to AWS
by Andrew Shieh - Ops Manager with SmugMug
SmugMug spent six years split between its datacenters and AWS. Find out how and why SmugMug went 100% AWS, migrating 30 TB of databases, hundreds of frontends, load balancing, and caches, across the US in one night with zero downtime. We show you specific techniques and processes that made our large-scale migration a resounding success: moving massive MySQL databases, testing and sizing a new AWS infrastructure, automating AWS operations, managing the risks involved in wholesale infrastructure change, and architecting for reliability in multiple AWS Availability Zones. We talk about the performance, scalability, operational, and business benefits and challenges we've seen since moving 100% to AWS. Finally, we share secrets about our favorite AWS products.
ARC313 - Running Lean and Mean: Designing Cost-efficient Architectures on AWS
by Constantin Gonzalez - Solutions Architect with Amazon Web Services and Ralph Gootee with PlanGrid
Whether you're a startup getting to profitability or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. Dive deep into techniques used by successful customers to reduce waste and fine-tune their AWS spending, often with improved performance and a better end-customer experience. Some techniques covered in this session: Learn how to make the most of Auto Scaling, develop an effective Spot Instance strategy, and optimize for your daily traffic cycles. Learn techniques to tier storage, offload your static content to Amazon S3 and Amazon CloudFront, reduce your database loads with edge caching, spawn part-time databases, pool resources across accounts, and even teach your dev/test instances to sleep. Showcasing easily-applicable methods, this session could be your best invested hour all day.
ARC401 - From One to Many: Evolving VPC Design
by Robert Alexander - Solutions Architect with Amazon Web Services
As more customers adopt Amazon Virtual Private Cloud architectures, the features and flexibility of the service are squaring off against increasingly complex design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, managing multi-tenant VPCs, conducting VPC-to-VPC traffic, extending corporate federation and name services into VPC, running multiple hybrid environments over AWS Direct Connect, and integrating corporate multiprotocol label switching (MPLS) clouds into multi-region VPCs.
BDT101 - Big Data 'State of the Union'
by Paul Duffy - Principal Product Manager with Amazon Web Services, Scott Hagedorn - CEO with Annalect, and Lisa Green - Director with Common Crawl
Big Data is more than petabytes and capacity. It is the opportunity to use data to your advantage to make smart decisions that increase productivity and grow your business. In this session, you'll learn about the latest advancements in data analytics, databases, storage, and high performance computing (HPC) at AWS and discover how to put data to work in your own organization.
BDT102 - Small Steps in Visual Analytics for NASA's Big Data
by Robert Witoff - Data Scientist with NASA JPL and Tom Soderstrom - IT Chief Technology Officer with NASA JPL
Exploring the cosmos for the better half of a century has generated voluminous stores of varied data. While this data was collected remotely, strides have been made back on earthly clouds that can now elastically store, visually describe, and analyze these volumes of data. As the space agency explores ambitious new missions, JPL is taking small steps to unleash these cloud tools on our data, tools that visually educate and enable our engineers to best tackle our future with a data-driven understanding of the present. This talk will discuss how JPL is using cloud-based visual analytics to resolve mysteries whose questions and answers lay hidden in our data. Usage of Amazon DynamoDB, Amazon S3, Amazon Redshift, Amazon Glacier, Amazon EC2, Amazon EMR, AWS GovCloud, and a variety of visual descriptive methods with real examples will be included.
BDT103 - New Launch: Introducing Amazon Kinesis: The New AWS Service for Real-time Processing of Streaming Big Data
by Ryan Waite - General Manager, Data Services with Amazon Web Services, Aditya Krishnan - Senior Product Manager with Amazon Web Services, and Marvin Theimer - Vice President, Distinguished Engineer with Amazon Web Services
This presentation will introduce Kinesis, the new AWS service for real-time streaming big data ingestion and processing. We'll provide an overview of the key scenarios and business use cases suitable for real-time processing, and discuss how AWS designed Amazon Kinesis to help customers shift from traditional batch-oriented processing of data to a continual real-time processing model. We'll provide an overview of the key concepts, attributes, APIs, and features of the service, and discuss building a Kinesis-enabled application for real-time processing. We'll also contrast it with other approaches to streaming data ingestion and processing. Finally, we'll discuss how Kinesis fits as part of a larger big data infrastructure on AWS, including S3, DynamoDB, EMR, and Redshift.
BDT203 - Building Your Own Web Analytics Service with node.js, Amazon DynamoDB, and Amazon EMR
by Jonathan Keebler - Founder, Chief Technology Officer with ScribbleLive
Want to learn how to build your own Google Analytics? Learn how to build a scalable architecture using node.js, Amazon DynamoDB, and Amazon EMR. This architecture is used by ScribbleLive to track billions of engagement minutes per month. In this session, we go over the code in node.js, how to store the data in Amazon DynamoDB, and how to roll up the data using Hadoop and Hive.
BDT204 - GraphLab: Large-Scale Machine Learning on Graphs
by Joseph Gonzalez - Co-Founder with GraphLab; Carlos Guestrin - CEO and Founder with GraphLab
GraphLab is like Hadoop for graphs in that it enables users to easily express and execute machine learning algorithms on massive graphs. In this session, we illustrate how GraphLab leverages Amazon EC2 and advances in graph representation, asynchronous communication, and scheduling to achieve orders-of-magnitude performance gains over systems like Hadoop on real-world data.
BDT205 - An MPI-IO Cloud Cluster Bioinformatics Summer Project
by Dougal Ballantyne - Solutions Architect with Amazon Web Services; Boyd Wilson - Executive with Omnibond; Brandon Posey - Research Student with Marshall University / Clemson University
Researchers at Clemson University assigned a student summer intern to explore bioinformatics cloud solutions that leverage MPI, the OrangeFS parallel file system, AWS CloudFormation templates, and a Cluster Scheduler. The result was an AWS cluster that runs bioinformatics code optimized using MPI-IO. We give an overview of the process and show how easy it is to create clusters in AWS.
BDT206 - Consumer Analytics in Real Time: How InfoScout Tracks Purchase Behavior with Mechanical Turk
by Sharon Chiarella - Vice President with Amazon Web Services; Jon Brelig - Co-Founder, CTO with InfoScout
Understanding the factors that drive consumer purchase behavior makes brands better marketers. In this session, join the Vice President of Mechanical Turk to explore how retail businesses are marrying human judgment with large-scale data analytics without sacrificing efficiency or scalability. We'll highlight real-world examples and introduce Jon Brelig, CTO of InfoScout, to explore how his company is leveraging a combination of automated methods and Mechanical Turk to build out a real-world analytics solution relied upon by brands such as P&G, Unilever, and General Mills. By extracting item-level purchase data from more than 40,000 consumer receipt images each day and associating it with specific products, brands, user surveys, and other digital marketing signals, InfoScout is able to rapidly gauge changes in consumer behavior and market share with remarkable granularity.
BDT207 - Orchestrating Big Data Integration and Analytics Data Flows with AWS Data Pipeline
by Jon Einkauf - Senior Product Manager with Amazon Web Services; Anthony Accardi - Head of Engineering with Swipely
AWS offers many data services, each optimized for a specific set of structure, size, latency, and concurrency requirements. Making the best use of all specialized services has historically required custom, error-prone data transformation and transport. Now, users can use the AWS Data Pipeline service to orchestrate data flows between Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon Redshift, and on-premises data stores, seamlessly and efficiently applying EC2 instances and EMR clusters to process and transform data. In this session, we demonstrate how you can use AWS Data Pipeline to coordinate your Big Data workflows, applying the optimal data storage technology to each part of your data integration architecture. Swipely's Head of Engineering shows how Swipely uses AWS Data Pipeline to build batch analytics, backfilling all their data, while using resources efficiently. Consequently, Swipely launches novel product features with less development time and less operational complexity. With AWS Data Pipeline, it's easier to reap the benefits of Big Data technology.
BDT208 - How the Weather Company Monetizes Weather, the Original Big Data Challenge
by Sathish Gaddipati - VP of Enterprise Data with The Weather Company; Raja Selvaraj - Manager, Data Systems Engineering with The Weather Company
(Presented by Basho) This session will discuss the transformation of the most widely distributed cable TV network in the United States, building on one of the world's most visited digital properties, to create a world-class Big Data platform. Architects, CTOs, CIOs, IT directors, and development managers will learn how to run highly scalable analytics workloads on Amazon EC2 and Amazon EMR for complex, real-time analysis of large data sets, all while decreasing time to results and increasing business agility. Bryson Koehler, EVP & CIO of The Weather Company, will discuss architecture, technology choices, performance results, and business benefits realized as part of their use of AWS services to host an exciting set of weather.com solutions and generate new revenue streams. Weather impacts over 30% of global GDP daily and is the source of vast amounts of data collection. The Weather Company is the leader in weather forecasting and is bringing the world's most accurate forecasting capabilities alive in a full suite of data APIs built fully on Infrastructure as a Service platforms, including AWS, and next-generation products like Basho Riak, Hadoop, and Dasein. This session will discuss how the application of these technologies helps keep people safe and helps businesses plan and become more profitable, thanks to the latest intersection of consumer behavior and weather forecasting and reporting.
BDT209 - Trusted Analytics as a Service
by Vin Sharma - Marketing Manager with Intel
(Presented by Intel) This is the best of times and the worst of times for cloud services developers. At no other time in history have open access to data, open interfaces to data analytics, and open licensing of source code come together with scalable, cost-effective cloud infrastructures. This is the good news. The bad news is that enterprises are being left behind. Stymied by concerns of data protection and data governance, enterprises need proof that the services and solutions built on a cloud infrastructure comply with policies and practices they've come to learn (not necessarily love). At its heart is the root-of-trust issue: how far down can I trust the cloud service, its infrastructure software, and the data that it analyzes? And how do I know my keys are safe? Join this session to learn how Intel has been enabling trusted analytics with cloud services secured top to bottom - from Apache Hadoop to Java, Xen, and Linux - without compromising security.
BDT210 - Adding Location and Geospatial Analytics to Big Data Analytics
by Marwa Mabrouk - Cloud and Big Data Product Manager with ESRI
(Presented by Esri) When people analyze a problem, they often include location at the core of the analysis. Location and spatial context, combined with geographical knowledge, can make the biggest difference in understanding a problem and analyzing it in a more meaningful way. In this session, we show how Amazon EMR can be used with location and geospatial analytics, and how the Amazon EMR API and the Python SDK were used to build tools that integrate Big Data and geospatial analysis. We also show powerful visualization options for displaying your results, using maps which can be shared in reports or distributed online and to mobile apps.
BDT211 - Build Next Generation Real-time Applications with SAP HANA on the AWS Cloud
by Doug Turner - CEO with Mantis Technologies; Robert Groat - Chief Technology Officer with Smartronix; Swen Conrad - Senior Director, Product Marketing, SAP HANA Cloud Solutions with SAP
(Presented by SAP) SAP HANA, available on the AWS Cloud, is an industry-transforming in-memory platform which has been adopted by many startups and ISVs, as well as traditional SAP enterprise customers. SAP HANA converges database and application platform capabilities in-memory to transform transactions, analytics, text analysis, predictive, and spatial processing so businesses can operate in real time. Please join us to learn what SAP HANA can do for you! Doug Turner, CEO of Mantis Technologies and an early adopter of SAP HANA One on AWS, will present and share his experience migrating his sentiment analysis solution from MySQL to SAP HANA One. He will talk about the following benefits that he achieved with this migration: dramatic simplification of his system architecture and landscape; system consolidation by moving from 23 MySQL instances to one SAP HANA One instance; and reduced overall AWS infrastructure cost along with reduced admin effort and improved efficiency. We will conclude with an overview of all the key SAP HANA capabilities on the AWS Cloud, like text analysis, predictive analytics, geospatial processing, and data integration. We will round out the session with an in-depth view of the new HANA deployment options available on the AWS Cloud, like customers' ability to bring their own licenses (BYOL) of SAP HANA to run on AWS in a variety of configurations ranging from 244 GB up to 1.22 TB.
BDT212 - Real-world Cloud HPC at Scale, for Production Workloads
by Jason Stowe - CEO with Cycle Computing; Michael Steeves - Senior Systems Engineer with Novartis Institute for BioMedical Research, Inc.; Steve Phillpott - CIO with HGST, Inc., a Western Digital Company; Bill Williams - IT Executive with The Aerospace Corporation
Running high-performance scientific and engineering applications is challenging no matter where you do it. Join IT executives from HGST, Inc., a Western Digital Company, The Aerospace Corporation, Novartis, and Cycle Computing and learn how they have used the AWS cloud to deploy mission-critical HPC workloads. Cycle Computing leads the session on how organizations of any scale can run HPC workloads on AWS. HGST, Inc., discusses experiences using the cloud to create next-generation hard drives. The Aerospace Corporation provides perspectives on running MPI and other simulations, and offers insights into considerations like security while running rocket science on the cloud. Novartis Institutes for BioMedical Research talks about a scientific computing environment for performance-benchmarking workloads and large HPC clusters, including a 30,000-core environment for research in the fight against cancer using the Cancer Genome Atlas (TCGA).
BDT301 - Scaling your Analytics with Amazon Elastic MapReduce
by Peter Sirota - Sr Manager, Software Development with Amazon Web Services; Bob Harris - CTO with Channel 4 Television; Eva Tse - Director of Big Data Platform with Netflix
Big data technologies let you work with any velocity, volume, or variety of data in a highly productive environment. Join the General Manager of Amazon EMR, Peter Sirota, to learn how to scale your analytics, use Hadoop with Amazon EMR, write queries with Hive, develop real world data flows with Pig, and understand the operational needs of a production data platform.
BDT302 - Deft Data at Netflix: Using Amazon S3 and Amazon Elastic MapReduce for Monitoring at Gigascale
by Roy Rapoport - Manager, Cloud Monitoring with Netflix, Inc
How does Netflix stay on top of the operations of its Internet service with millions of users and billions of metrics? With Atlas, its own massively distributed, large-scale monitoring system. Come learn how Netflix built Atlas with multiple processing pipelines using Amazon S3 and Amazon EMR to provide low-latency access to billions of metrics while supporting query-time aggregation along multiple dimensions.
BDT303 - Using AWS to Build a Graph-Based Product Recommendation System
by Andre Fatala - R&D Manager with Magazine Luiza - luizalabs; Renato Pedigoni - Lead Software Engineer with Magazine Luiza - luizalabs
Magazine Luiza, one of the largest retail chains in Brazil, developed an in-house product recommendation system, built on top of a large knowledge Graph. AWS resources like Amazon EC2, Amazon SQS, Amazon ElastiCache and others made it possible for them to scale from a very small dataset to a huge Cassandra cluster. By improving their big data processing algorithms on their in-house solution built on AWS, they improved their conversion rates on revenue by more than 25 percent compared to market solutions they had used in the past.
BDT304 - Empowering Congress with Data-Driven Analytics
by Sri Vasireddy - Chief Cloud Officer with 8kMiles; Mathew Chase - CIO with MACPAC.gov
MACPAC is a federal legislative branch agency tasked with reviewing state and federal Medicaid and Children's Health Insurance Program (CHIP) access and payment policies and making recommendations to Congress. By March 15 and again by June 15 each year, the agency produces a comprehensive report for Congress that compiles results from Medicaid and CHIP data sources for the 50 states and territories. The CIO of MACPAC wanted a secure, cost-effective, high-performance platform that met their needs to crunch this large amount of health data. In this session, learn how MACPAC and 8KMiles helped set up the agency's Big Data/HPC analytics platform on AWS using SAS analytics software.
BDT305 - Building a Cloud Culture at Yelp
by Jim Blomo - Engineering Manager - Data Mining with Yelp
Yelp is evolving from a purely hosted infrastructure environment to running many systems in AWS, paving the way for their growth to 108 million monthly visitors (source: Google Analytics). Embracing a cloud culture reduced reliability issues, sped up the pace of innovation, and helped them support dozens of data-intensive Yelp features, including search relevance, usage graphs, review highlights, spam filtering, and advertising optimizations. Today, Yelp runs 7+ TB of hosted databases, pushes 250+ GB of compressed logs per day into Amazon S3, and runs hundreds of Amazon Elastic MapReduce jobs per day. In this session, Yelp engineers share the secrets of their success and show how they achieved big wins with Amazon EMR, open source libraries, and policies around development, privacy, and testing.
BDT306 - Data Science at Netflix with Amazon EMR
by Kurt Brown - Director, Data Platform with Netflix
A few years ago, Netflix had a fairly "classic" business intelligence tech stack. Things have definitely changed. Netflix is a heavy user of AWS for much of its ongoing operations, and Data Science & Engineering (DSE) is no exception. In this talk, we dive into the Netflix DSE architecture: what and why. Key topics include their use of Big Data technologies (Cassandra, Hadoop, Pig + Python, and Hive); their Amazon S3 central data hub; their multiple persistent Amazon EMR clusters; how they benefit from AWS elasticity; their data science-as-a-service approach; how they made a hybrid AWS/data center setup work well; their open-source Hadoop-related software; and more.
BDT307 - PetaMongo: A Petabyte Database for as Little as $200
by Miles Ward - Sr. Manager, Solutions Architecture with Amazon Web Services; Christopher Biow - Principal Technologist and Technical Director with MongoDB
1,000,000,000,000,000 bytes. On demand. Online. Live. Big doesn't quite describe this data. Amazon Web Services makes it possible to construct highly elastic computing systems, and you can further increase cost efficiency by leveraging the Spot Pricing model for Amazon EC2. We showcase elasticity by demonstrating the creation and teardown of a petabyte-scale, multiregion MongoDB NoSQL database cluster, using Amazon EC2 Spot Instances, for as little as $200 in total AWS costs. Oh, and it offers up four million IOPS to storage via the power of PIOPS EBS. Christopher Biow, Principal Technologist at 10gen | MongoDB, covers MongoDB best practices on AWS, so you can implement this NoSQL system (perhaps at a more pedestrian hundred-terabyte scale?) confidently in the cloud. You could build a massive enterprise warehouse, process a million human genomes, or collect a staggering number of cat GIFs. The possibilities are huMONGOus.
BDT308 - The Problem and Promise of Translational Genetics and a Step to the Clouded Solution of Scalable Clinical Whole Genome Sequencing
by Jafar Shameem - Business Development Manager with Amazon Web Services; Dennis Wall - Associate Professor with Stanford University; Peter Tonellato with Harvard Medical School
Professors Wall and Tonellato of Harvard Medical School in collaboration with Beth Israel Deaconess Medical Center discuss the emerging area of clinical whole genome sequencing analysis and tools. They report on the use of Amazon EC2 and Spot Instances to achieve a robust "clinical time" processing solution and examine the barriers to and resolution of producing clinical-grade whole genome results in the cloud. They benchmark an AWS solution, called COSMOS, against local computing solutions and demonstrate the time and capacity gains conferred through the use of AWS.
BDT309 - A Modern Framework for Amazon Elastic MapReduce
by Jeremy Karn - Engineer with Mortar Data; K Young - CEO with Mortar Data
If you've ever developed code for processing data, you know what a mess it can be, especially on Hadoop. You lack debugging tools, instant feedback, automated tests, and a sane deploy. Mortar has developed a modern framework for data processing on Hadoop and Amazon Elastic MapReduce. It is a free, open framework providing instant, step-by-step execution visibility, automated testing, reusable components, and one-button deployment. See how Mortar demonstrates this framework on Amazon EMR on a sample data set to solve a big data problem.
BDT310 - Globus and Globus Genomics: How Science-as-a-Service is Accelerating Discovery
by Ravi Madduri - Product Manager with University of Chicago; Ian Foster - Director, Computation Institute with University of Chicago and Argonne National Laboratory
In this talk, hear about two high-performance research services developed and operated by the Computation Institute at the University of Chicago, running on AWS. Globus.org, a high-performance, reliable, robust file transfer service, has over 10,000 registered users who have moved over 25 petabytes of data using the service. The Globus service is operated entirely on AWS, leveraging Amazon EC2, Amazon EBS, Amazon S3, Amazon SES, Amazon SNS, etc. Globus Genomics is an end-to-end next-gen sequencing analysis service with state-of-the-art research data management capabilities. Globus Genomics uses Amazon EC2 for scaling out analysis, Amazon EBS for persistent storage, and Amazon S3 for archival storage. Attend this session to learn how to move data quickly at any scale as well as how to use genomic analysis tools and pipelines for next-generation sequencers using Globus on AWS.
BDT311 - New Launch: Supercharge Your Big Data Infrastructure with Amazon Kinesis: Learn to Build Real-time Streaming Big Data Processing Applications
by John Dunagan - Principal Algorithm Engineer with Amazon Web Services; Ryan Waite - General Manager, Data Services with Amazon Web Services; Marvin Theimer - Vice President, Distinguished Engineer with Amazon Web Services
This presentation provides an overview of the technical architecture of Kinesis, the new AWS service for real-time streaming big data ingestion and processing, framed around implementing a sample application that processes a Kinesis stream. The talk also describes how data ingested through Kinesis can be easily filtered, transformed, and uploaded into a variety of AWS storage services, such as S3 and Redshift.
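To make the ingestion side concrete, here is a minimal sketch of writing records to a Kinesis stream with the boto3 SDK. The stream name, event shape, and partitioning by `user_id` are all hypothetical illustrations, not anything from the session itself; the request payload is built by a pure function so the structure is easy to see.

```python
import json

def build_put_records_request(stream_name, events):
    """Build a Kinesis PutRecords request: each record carries a Data blob
    and a PartitionKey, which determines which shard the record lands on."""
    return {
        "StreamName": stream_name,
        "Records": [
            {
                "Data": json.dumps(event).encode("utf-8"),
                # Keying by user spreads traffic across shards (hypothetical choice)
                "PartitionKey": str(event["user_id"]),
            }
            for event in events
        ],
    }

events = [{"user_id": 1, "action": "click"}, {"user_id": 2, "action": "view"}]
request = build_put_records_request("clickstream", events)  # hypothetical stream name

# With AWS credentials configured, the actual call would be:
# import boto3
# boto3.client("kinesis").put_records(**request)
```

A consumer would then read the same records back shard by shard and filter or transform them before loading into S3 or Redshift, as the abstract describes.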
BDT401 - Using AWS to Build a Scalable Big Machine Data Management and Processing Service
by Christian Beedgen - CTO & Co-Founder with Sumo Logic
By turning the data center into an API, AWS has enabled Sumo Logic to build a very large scale IT operational analytics platform as a service at unprecedented scale and velocity. Based around Amazon EC2 and Amazon S3, the Sumo Logic system is ingesting many terabytes of unstructured log data a day while at the same time delivering real-time dashboards and supporting hundreds of thousands of queries against the collected data. When co-founder and CTO Christian Beedgen started Sumo Logic, it was obvious that the service would have to scale quickly and elastically, and AWS has been providing the perfect infrastructure for this endeavor from the start. In this talk, Christian dives into the core Sumo Logic architecture and explains which AWS services are making Sumo Logic possible. Based around an in-house developed automation and continuous deployment system, Sumo Logic is leveraging Amazon S3 in particular for large-scale data management and Amazon DynamoDB for cluster configuration management. By relying on automation, Sumo Logic is also able to perform sophisticated staging of new code for rapid deployment. Using the log-based instrumentation of the Sumo Logic codebase, Christian will dive into the performance characteristics achieved by the system today and share war stories about lessons learned along the way.
BDT402 - Finding New Sub-Atomic Particles on the AWS Cloud
by Jamie Kinney - Sr. Manager, Scientific and Research Computing with Amazon Web Services; Miron Livny - Professor of Computer Science with University of Wisconsin
This session will describe how members of the US Large Hadron Collider (LHC) community have benchmarked the usage of Amazon Elastic Compute Cloud (Amazon EC2) resources to simulate events observed by experiments at the European Organization for Nuclear Research (CERN). Miron Livny from the University of Wisconsin-Madison, who has been collaborating with the US-LHC community for more than a decade, will detail the process for benchmarking high-throughput computing (HTC) applications running across multiple AWS regions using the open source HTCondor distributed computing software. The presentation will also outline the different ways that AWS and HTCondor can help meet the needs of compute-intensive applications from other scientific disciplines.
BDT404 - Amazon Elastic MapReduce Deep Dive and Best Practices
by Parviz Deyhim - Solution Architect with Amazon Web Services
Amazon Elastic MapReduce is one of the largest Hadoop operators in the world. Since its launch four years ago, our customers have launched more than 5.5 million Hadoop clusters. In this talk, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters and other Amazon EMR architectural patterns. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost efficient.
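The "Amazon S3 instead of HDFS" pattern mentioned above usually means pointing a cluster's logs, input, and output at s3:// URIs so the cluster itself holds no state and can be short-lived. Here is a hedged boto3-style sketch of such a transient cluster; the bucket, jar, instance types, and job names are all hypothetical, and real `run_job_flow` calls need additional fields such as a release/AMI version.

```python
# Parameters for a transient EMR cluster following the "S3 instead of HDFS"
# pattern: logs, input, and output all live in S3, so the cluster can be
# terminated as soon as its steps finish. Names below are hypothetical.
job_flow = {
    "Name": "nightly-rollup",
    "LogUri": "s3://my-bucket/emr-logs/",
    "Instances": {
        "MasterInstanceType": "m1.large",
        "SlaveInstanceType": "m1.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # short-lived cluster: shut down when done
    },
    "Steps": [{
        "Name": "rollup",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "s3://my-bucket/jars/rollup.jar",  # custom MapReduce job
            # Input and output are S3 paths, not HDFS paths:
            "Args": ["s3://my-bucket/input/", "s3://my-bucket/output/"],
        },
    }],
}

# With AWS credentials configured, the cluster would be launched with:
# import boto3
# boto3.client("emr").run_job_flow(**job_flow)
```

Because the durable copy of the data stays in S3, the same pattern supports both the long-lived and the short-lived cluster styles the session contrasts.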
CPN102 - Large Scale Load Testing Amazon.com’s Traffic on AWS
by Carlos Arguelles - Senior Software Design Engineer with Amazon
It's 4am and you don't know it, but you're about to get three times the traffic you were expecting. Is your service ready to handle it? Systems are only as scalable as their weakest component. Large-scale load testing in production is the best (and surest) way to ensure that services can truly scale to the unexpected. But the load generator itself can be difficult to scale, expensive to run on hundreds or thousands of hosts, challenging to keep the data secure, and time consuming to develop. The Amazon.com retail site is one of the most heavily used sites in the world and has to be ready for anything, at any time. How do you design a load test for this in record time while keeping it cost effective? Well, you use AWS! Come learn best practices on how you can use Amazon SQS, Amazon S3, Amazon EC2, Amazon CloudWatch, Auto Scaling, and Amazon DynamoDB to design horizontally scalable, large-scale load tests that can simulate the load that millions of users are putting onto your site. We met a tight schedule and did it under budget thanks to AWS, and you can too!
CPN201 - More Nines for Your Dimes: Improving Availability and Lowering Costs using Auto Scaling and Amazon EC2
by Derek Pai - Sr. Product Manager, Monitoring with Amazon Web Services; Brandon Adams - Senior Operations Engineer with DreamBox Learning Inc.; Laurent Rouquette - Mgr Cloud Ops with Adobe; Keith Baker - Senior Engineer with Here.com; Cameron Stokes - Applications Architect with The Weather Channel
Running your Amazon EC2 instances in Auto Scaling groups allows you to improve your application's availability right out of the box. Auto Scaling replaces impaired or unhealthy instances automatically to maintain your desired number of instances (even if that number is one). You can also use Auto Scaling to automate the provisioning of new instances and software configurations as well as to track usage and costs by app, project, or cost center. Of course, you can also use Auto Scaling to adjust capacity as needed: on demand, on a schedule, or dynamically based on demand. In this session, we show you a few of the tools you can use to enable Auto Scaling for the applications you run on Amazon EC2. We also share tips and tricks we've picked up from customers such as Netflix, Adobe, Nokia, and Amazon.com about managing capacity, balancing performance against cost, and optimizing availability.
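The "even if that number is one" point deserves a sketch: an Auto Scaling group with min = max = desired = 1 never scales, it just replaces the instance whenever a health check fails. A hedged boto3-style illustration follows; the group name, launch configuration, and Availability Zones are hypothetical, and the launch configuration would have to be created beforehand.

```python
# An Auto Scaling group of exactly one instance: Auto Scaling terminates and
# replaces the instance whenever it is marked unhealthy, giving self-healing
# without any scaling policies at all. Names below are hypothetical.
asg_params = {
    "AutoScalingGroupName": "single-instance-service",
    "LaunchConfigurationName": "my-launch-config",      # created separately
    "MinSize": 1,
    "MaxSize": 1,
    "DesiredCapacity": 1,
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],  # zones to place into
    "HealthCheckType": "EC2",       # replace on instance status-check failure
    "HealthCheckGracePeriod": 300,  # seconds to wait before the first check
}

# With AWS credentials configured, the group would be created with:
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**asg_params)
```

Raising MinSize/MaxSize and attaching scaling policies turns the same group into the on-demand, scheduled, or dynamic capacity adjustment the abstract describes.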
CPN202 - AWS Compute Services State of the Union
by Peter De Santis - VP, AWS Compute Services with Amazon Web Services
In this session, Peter De Santis, VP of Compute Services, provides an overview of the key priorities for Amazon Elastic Compute Cloud (Amazon EC2). You will hear about some of the most innovative ways in which customers are using Amazon EC2, learn more about key capabilities launched over the past year, and gain insights into the near-term roadmap and priorities.
CPN203 - Bringing Your Applications to the Fast Lane
by Steven Jones - Principal Solutions Architect with Amazon Web Services; Deepak Singh - Principal Product Manager - Amazon EC2 with Amazon Web Services; Christos Kalantzis - Engineering Manager with Netflix Inc.
Amazon Elastic Compute Cloud (Amazon EC2) has added a number of instance types that provide a high level of performance. Instances range from compute-optimized instances to instances that deliver thousands of IOPS. In this session, you will learn more about Amazon EC2 high-performance instance types and hear from customers about how they are using these instances to improve application performance and reduce costs.
CPN204 - Architecting for Availability & Scalability with Elastic Load Balancing and Route 53
by David Brown - Sr. Manager, Software Development with Amazon Web Services; Sean Meckley - Product Manager, Amazon Route 53 with Amazon Web Services; Paul Kearney - Chief Architect with InfoSpace
Elastic Load Balancing provides a scalable and highly-available load balancer that automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. In this session, we take a deeper look at some of the existing and newer features that enable application developers to architect highly-available architectures that are resilient to load spikes and application failures. We also explore some of the features that allow seamless integration with services such as Auto Scaling and Amazon Route 53 to further improve the scalability and resilience of your applications.
CPN205 - Securing Your Amazon EC2 Environment with AWS IAM Roles and Resource-Based Permissions
by Derek Lyon - Principal Product Manager with Amazon Web Services
Customers with multiple AWS administrators need a way to control who can do what in their Amazon EC2 environment to ensure both security and availability. This session demonstrates how to secure your Amazon EC2 environment using IAM roles and resource-based permissions.
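For flavor, here is one way such a resource-based permission might look: an IAM policy document, built as a Python dict, that lets one group of administrators stop and start only the EC2 instances carrying their team's tag. The account ID, region, tag key, user, and policy names are all hypothetical; only the policy grammar (Version, Statement, Condition, `ec2:ResourceTag`) is standard IAM.

```python
import json

# An identity policy using EC2 resource-level permissions: the listed actions
# are allowed only on instances carrying a specific tag, so one group of
# administrators cannot touch another group's instances.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        # Hypothetical account ID; restricts the grant to instance ARNs only
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/team": "web"}  # hypothetical tag
        },
    }],
}

policy_document = json.dumps(policy, indent=2)

# With AWS credentials configured, the policy could be attached with:
# import boto3
# boto3.client("iam").put_user_policy(UserName="alice",
#     PolicyName="web-team-ec2", PolicyDocument=policy_document)
```

Pairing a policy like this with IAM roles on the instances themselves removes long-lived credentials from the picture entirely, which is the other half of the session's topic.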
CPN206 - Overview of Windows on AWS
by Ulf Schoo - EC2 Windows Product Management with Amazon Web Services; Gene Farrell - General Manager with Amazon Web Services; Kuldeep Sajwal with Dell
Moving your data center to the AWS cloud can reduce your cost, improve flexibility, and simplify resource management. Come learn how you can use Windows-based technologies in the AWS cloud to power your business. If you want to explore running Microsoft Windows and Windows-based workloads of any size and level of complexity in AWS, this session is for you. We focus on key topics ranging from taking the first simple steps running Windows on AWS to expert topics like deploying enterprise workloads using AWS deployment technologies and the tools you already know and are familiar with, such as Visual Studio, .NET, and Windows PowerShell. Filled with tons of demos, this session shows you how to use Windows to bridge your on-premises network with AWS and quickly deploy enterprise workloads such as Exchange Server, SQL Server, SharePoint Server, and Windows ISV applications. Be prepared with all your Windows questions.
CPN207 - Re:Inventing your Innovation Cycle by Scaling Out with Spot Instances
by Weston Jossey - Staff Engineer with Tapjoy; Miguel Picornell - VP Operations & Support with Optaros, Inc
Hear Optaros and Tapjoy detail how they architected large-scale systems on AWS and accelerated their time to result while taking advantage of Spot Instance savings of up to 90% off On-Demand prices. Optaros is a global digital commerce service partner that has served clients like Wal-Mart, Macy's, 20th Century Fox, and PUMA. Optaros has reduced its EC2 spending by 50% while maintaining strict SLAs and performance metrics and improving the customer experience of its e-commerce sites. Tapjoy, a mobile performance-based advertising platform, reaches over 435MM users per month. By fully integrating Spot into their production, cloud development, and continuous integration environments, Tapjoy expects to reduce its AWS costs by 20%. Both will share Spot tips, from bidding strategies to embarrassingly parallel and fault-tolerant system design.
CPN208 - Selecting the Best VPC Network Architecture
by Eric Schultze - Principal Product Manager with Amazon Web Services; Phil Schulz - Agile Project Manager with Vodafone; Roshan Vilat - Solution Architect with Vodafone Australia; Clay Parker - Senior Cloud Architect with Trimble Navigation
Which is better: a single VPC with multiple subnets or multiple accounts with many VPCs? Should you simplify management with a single VPC or use multiple VPCs to lessen the blast radius of network changes? In this session, we hear from customers who've implemented each approach and discuss how they addressed management, security, and connectivity for their Amazon EC2 environments.
CPN209 - Scaling your Application for Growth using Automation
by Gregarious Narain - Co-Founder / CTO with Chute; Ken Leung - CTO with Euclid Analytics
Growing too quickly may sound like a nice problem to have, unless you are the one having it. A growing business can't afford to fall behind customer demand and availability. Don't be left behind. Come learn how start-ups Chute and Euclid kept up with real-time user-generated data from over 3,000 apps and 2 TB of metadata, and stayed ahead of retail peak-time traffic, all with AWS. Hear how they used all that data on their own growth to propel their business even further and deepen relationships with customers. Not planning for growth is just like not planning to grow!
CPN210 - Getting Cloudy with Remote Graphics and GPU Compute Using G2 instances
by John Phillips - Sr. Product Manager with Amazon Web Services; Gyorgy Ordody - Principal Engineer with Autodesk; Teng Lin - Senior Scientist & Scientific Software Developer with Schrodinger
Amazon EC2 now offers the G2, a new GPU instance type capable of running 3D graphics and GPU compute workloads in the AWS cloud. In this session, we take a deeper look at the remote graphics and GPU compute capabilities of G2 instances through the lens of AWS customers Autodesk and Schrödinger. Autodesk, a leader in 3D design and engineering software, will discuss the role of GPU instances in the evolution of computer-aided design. Schrödinger, a leader in software solutions and services for life sciences and materials research, will discuss the role of GPU instances in drug discovery. Each will touch upon the benefits of GPU instances and give a high-level overview of their architectures on AWS.
CPN211 - Reducing Cost & Maximizing Efficiency: Tightening the Belt on AWS
by Kingsley Wood - APAC Business Development Manager with Amazon Web Services; Tom Johnston - Business Development Manager with Amazon Web Services; Ashay Padwal - CTO with Vserv.mobi; Sean Simpson - Director of Operations with Stitcher, Inc
This session dives deep into techniques used by successful customers who optimized their use of AWS. Learn tricks and hear tips you can implement right away to reduce waste, choose the most efficient instance, and fine-tune your spending, often with improved performance and a better end-customer experience. We showcase innovative approaches and demonstrate easily-applicable methods for cost optimizing Amazon EC2, Amazon S3, and a host of other services to save you time and money.
CPN212 - Application Optimized Performance: Choosing the Right Instance
by Jason Waxman - VP, GM Cloud Platforms with Intel
(Presented by Intel) Each application places a different set of requirements on the underlying infrastructure. Whether it is web, big data analytics, technical computing, or general enterprise applications, applications run more efficiently when performance, IO bandwidth, and memory capacity have been custom-tailored for that specific application. Jason Waxman, GM and VP of Intel's Cloud Platform Group, looks under the hood at the different types of processors that comprise Amazon Web Services instances and shares insights from Intel IT and industry best practices for right-sizing infrastructure for different application characteristics and capabilities. By leveraging the underlying performance, security capabilities, and flexibility of various instance types, developers can more easily migrate applications into the cloud and drive down TCO for cloud-based services.
CPN301 - Amazon EC2 to Amazon VPC: A case study
by Eric Schultze - Principal Product Manager with Amazon Web Services; Matthew Barlocker - Chief Architect with Lucid Software Inc
In this session, you learn about Amazon Virtual Private Cloud and why you should consider using it for your applications. You also hear from the makers of Lucidchart, an online diagramming tool, which was originally launched in 2008 on the Amazon EC2 Classic platform. As the user base grew, so did their need for a more robust, secure infrastructure. After much debate about other vendors and colocation, Lucidchart chose Amazon VPC. To find out why, check out this session for a comparison of Amazon EC2 Classic against Amazon VPC. Matthew Barlocker, Chief Architect at Lucidchart, discusses their migration plan, pain points, and unexpected issues.
CPN302 - Your Linux AMI: Optimization and Performance
by Thor Nolen with Amazon Web Services; Coburn Watson - Manager, Cloud Performance Engineering with Netflix
Your AMI is one of the core foundations for running applications and services effectively on Amazon EC2. In this session, you'll learn how to optimize your AMI, including how you can measure and diagnose system performance and tune parameters for improved CPU and network performance. We'll cover application-specific examples from Netflix on how optimized AMIs can lead to improved performance.
CPN303 - Cloud Connected Devices on a Global Scale
by Bryant Eastham - Chief Architect with Panasonic; Justin Leung - Senior Engineer - Server Lead with Banjo, Inc.
Increasingly, mobile and other connected devices are leveraging the scalability and capabilities of the cloud to deliver services to end users. However, connecting these devices to the cloud presents unique challenges. Resource constraints make it impossible to use many common frameworks, and transport restrictions make it difficult to use dynamic cloud resources. In this session, learn how you can develop and deploy highly scalable global solutions using Amazon Web Services (Amazon Virtual Private Cloud, Elastic IP addresses, Amazon Route 53, Auto Scaling) and tools like Puppet. Hear how Panasonic and Banjo architect their cloud infrastructure from both a start-up and enterprise perspective.
CPN401 - A Day in the Life of a Billion Packets
by Eric Brandwine - Sr. Principal Security Engineer with Amazon Web Services
In this talk, we walk through the VPC network presentation and describe the problems we were trying to solve. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, cheap, or secure enough for AWS. Finally, we provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation.
DAT101 - Production NoSQL in an Hour: Introduction to Amazon DynamoDB
by David Lang - Sr. Product Manager, Amazon DynamoDB with Amazon Web Services; Valentino Volonghi - Chief Architect with AdRoll; David Albrecht - Senior Engineer in Operations with Crittercism
Amazon DynamoDB is a fully-managed, zero-admin, high-speed NoSQL database service. Amazon DynamoDB was built to support applications at any scale. With the click of a button, you can scale your database capacity from a few hundred I/Os per second to hundreds of thousands of I/Os per second or more. You can dynamically scale your database to keep up with your application's requirements while minimizing costs during low-traffic periods. The service has no limit on storage. You also learn about Amazon DynamoDB's design principles and history.
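The scaling the abstract describes is governed by DynamoDB's published capacity-unit sizing rules, which are easy to sketch. A minimal Python illustration (the function names are ours; the rules are the documented ones: one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, eventually consistent reads cost half, and one write capacity unit covers one write per second of an item up to 1 KB):

```python
import math

# Published DynamoDB sizing rules (2013-era and still current):
#   1 read capacity unit  = 1 strongly consistent read/sec, item <= 4 KB
#   eventually consistent reads cost half as much
#   1 write capacity unit = 1 write/sec, item <= 1 KB
READ_UNIT_KB = 4
WRITE_UNIT_KB = 1

def read_capacity_units(reads_per_sec, item_kb, consistent=True):
    """Provisioned read capacity needed for a steady read workload."""
    units_per_read = math.ceil(item_kb / READ_UNIT_KB)
    total = reads_per_sec * units_per_read
    return total if consistent else math.ceil(total / 2)

def write_capacity_units(writes_per_sec, item_kb):
    """Provisioned write capacity needed for a steady write workload."""
    return writes_per_sec * math.ceil(item_kb / WRITE_UNIT_KB)

# 100 strongly consistent reads/sec of 6 KB items -> 200 read units
print(read_capacity_units(100, 6))
```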
DAT103 - Introduction to Amazon Redshift and What's Next
by Rahul Pathak - Sr. Product Manager with Amazon Web Services; Anurag Gupta - Director, Database Engines with Amazon Web Services
Amazon Redshift is a fast, fully-managed, petabyte-scale data warehouse service that costs less than $1,000 per terabyte per year, less than a tenth the price of most traditional data warehousing solutions. In this session, you get an overview of Amazon Redshift, including how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. Finally, we announce new features that we've been working on over the past few months.
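The columnar-storage claim is easy to see with back-of-the-envelope arithmetic: a row store must read whole rows, while a column store reads only the columns a query references. A hypothetical sketch (table and column sizes invented for illustration):

```python
def bytes_scanned_row_store(rows, row_bytes):
    # A row store reads every full row to evaluate a query.
    return rows * row_bytes

def bytes_scanned_column_store(rows, column_bytes_needed):
    # A column store reads only the referenced columns.
    return rows * sum(column_bytes_needed)

# Hypothetical table: 1M rows of 100 bytes; the query touches only
# a 4-byte and an 8-byte column.
row_scan = bytes_scanned_row_store(1_000_000, 100)       # 100 MB
col_scan = bytes_scanned_column_store(1_000_000, [4, 8]) #  12 MB
print(row_scan, col_scan)
```

Compression on top of this (similar values stored together compress well) widens the gap further, which is part of how columnar warehouses keep scans fast.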
DAT201 - Understanding AWS Database Options
by Sundar Raghavan - General Manager with Amazon Web Services; Michael Thomas - Principal Software Engineer with Scopely; Zac Sprackett - Vice President of Operations with SugarCRM
With AWS you can choose the right database technology and software for the job. Given the myriad of choices, from relational databases to non-relational stores, this session provides details and examples of some of the choices available to you. This session also provides details about real-world deployments from customers using Amazon RDS, Amazon ElastiCache, Amazon DynamoDB, and Amazon Redshift.
DAT202 - Using Amazon RDS to Power Enterprise Applications
by Abdul Sait - Principal Solutions Architect with Amazon Web Services; Shawn Leviski - Director, Enterprise Solutions with Select Staffing; David Brunet - VP of Research and Development with DLZP Group; Mark Saneholtz - Director with Select Staffing
Amazon RDS makes it cheap and easy to deploy, manage, and scale relational databases using a familiar MySQL, Oracle, or Microsoft SQL Server database engine. Amazon RDS can be an excellent choice for running many large, off-the-shelf enterprise applications from companies like JD Edwards, Oracle, PeopleSoft, and Siebel. In this session, you learn how to best leverage Amazon RDS for use with enterprise applications and learn about best practices and data migration strategies.
DAT203 - AWS Storage and Database Architecture Best Practices
by Siva Raghupathy - Enterprise Solutions Architect with Amazon Web Services
Learn about architecture best practices for combining AWS storage and database technologies. We outline AWS storage options (Amazon EBS, Amazon EC2 Instance Storage, Amazon S3 and Amazon Glacier) along with AWS database options including Amazon ElastiCache (in-memory data store), Amazon RDS (SQL database), Amazon DynamoDB (NoSQL database), Amazon CloudSearch (search), Amazon EMR (Hadoop) and Amazon Redshift (data warehouse). Then we discuss how to architect your database tier by using the right database and storage technologies to achieve the required functionality, performance, availability, and durability, at the right cost.
DAT204 - SmugMug: From MySQL to Amazon DynamoDB
by Brad Clawsie - Software Engineer with SmugMug
SmugMug.com is a popular hosting and commerce platform for photo enthusiasts with hundreds of thousands of subscribers and millions of viewers. Learn how SmugMug uses Amazon DynamoDB to provide customers detailed information about millions of daily image and video views. SmugMug shares code and information about their stats stack, which includes an HTTP interface to Amazon DynamoDB and also interfaces with their internal PHP stack and other tools such as Memcached. Get a detailed picture of lessons learned and the methods SmugMug uses to create a system that is easy to use, reliable, and high performing.
DAT205 - Amazon Redshift in Action: Enterprise, Big Data, and SaaS Use Cases
by Rahul Pathak - Sr. Product Manager with Amazon Web Services; Parag Thakker - VP Client Partner with Roundarch Isobar; Kevin Diamond - CTO with HauteLook; Jason Timmes - AVP, Software Development with NASDAQ OMX; Colin McGuigan - Technical Architect with Roundarch Isobar
Since Amazon Redshift launched last year, it has been adopted by a wide variety of companies for data warehousing. In this session, learn how customers NASDAQ, HauteLook, and Roundarch Isobar are taking advantage of Amazon Redshift for three unique use cases: enterprise, big data, and SaaS. Learn about their implementations and how they made data analysis faster, cheaper, and easier with Amazon Redshift.
DAT207 - Accelerating Application Performance with Amazon ElastiCache
by Omer Zaki - Senior Product Manager with Amazon Web Services; Nick Dor - Sr. Director Engineering with GREE International, Inc.; James Kenigsberg - Chief Technical Officer with 2U, Inc.
Learn how you can use Amazon ElastiCache to easily deploy a Memcached- or Redis-compatible, in-memory caching system to speed up your application performance. We show you how to use Amazon ElastiCache to improve your application latency and reduce the load on your database servers. We'll also show you how to build a caching layer that is easy to manage and scale as your application grows. During this session, we go over various scenarios and use cases that can benefit from caching, and discuss the features provided by Amazon ElastiCache.
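The caching layer described here is usually built with the cache-aside (lazy loading) pattern: check the cache, fall back to the database on a miss, then populate the cache. A toy sketch with an in-process dict standing in for Memcached or Redis (the class and names are ours, not an ElastiCache API):

```python
import time

class CacheAside:
    """Cache-aside: consult the cache first, fall back to the database
    loader on a miss, then populate the cache with a TTL so later reads
    skip the database."""

    def __init__(self, db_fetch, ttl_seconds=300):
        self.db_fetch = db_fetch   # loader called on a cache miss
        self.ttl = ttl_seconds
        self.store = {}            # stand-in for Memcached/Redis
        self.misses = 0

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None:
            value, expires = hit
            if time.time() < expires:
                return value       # fresh cache hit: no database load
        self.misses += 1
        value = self.db_fetch(key)
        self.store[key] = (value, time.time() + self.ttl)
        return value
```

Repeated reads of the same key within the TTL hit the cache and never touch the database, which is the load reduction the session refers to.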
DAT209 - Scaling MongoDB on Amazon Web Services
by Michael Saffitz - CTO & Co-Founder with Apptentive
Over the past year, mobile in-app feedback provider Apptentive has scaled MongoDB on AWS from a single machine to a sharded, thousands-of-operations-per-second, several hundred gigabyte cluster. This session, packed with demos, code, and actual performance numbers, shares the lessons learned along the way. Topics include picking the right tools for the job (instance sizing and selection, I/O choices, and topological choices); using chef/AWS OpsWorks and AWS CloudFormation to deploy and scale; monitoring with Amazon CloudWatch and MMS; managing backups with Amazon EBS snapshots; and using Amazon Elastic MapReduce alongside MongoDB instances.
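One of the "topological choices" in a sharded cluster is picking a shard key that spreads writes evenly; monotonically increasing keys (like timestamps or ObjectIds) funnel all inserts into the last shard. A hedged sketch of hash-based routing, independent of MongoDB's actual balancer:

```python
import hashlib

def shard_for(key, shard_count):
    """Route a document to a shard by hashing its shard key.

    Hashing spreads monotonically increasing keys (timestamps,
    ObjectIds) evenly, avoiding a hot last shard; the trade-off is
    that range queries on the key must fan out to every shard.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % shard_count

# The same key always lands on the same shard.
print(shard_for("user-42", 8))
```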
DAT210 - New Launch: Introducing Amazon RDS for PostgreSQL
by Pavan Pothukuchi - Sr. Product Manager with Amazon Web Services; Srikanth Deshpande - Senior Technical Product Manager with Amazon Web Services; Nick Hertl - Software Development Manager with Amazon Web Services; Gabriel Arnett - Senior Director, Software Architect with Moody's Analytics
AWS customers have been asking us for Amazon RDS for PostgreSQL, and we're excited to announce its immediate availability. Learn how you can offload the management of your PostgreSQL database instances to Amazon RDS using automated backups and point-in-time recovery, Multi-AZ deployments for high availability, and provisioned IOPS for fast and predictable performance. Also learn how to take advantage of familiar PostgreSQL features such as PostGIS with Amazon RDS for PostgreSQL.
DAT301 - Running Highly-Available and Performance-Intensive Production Applications on Amazon RDS
by Grant McAlister - Senior Principal Engineer with Amazon Web Services; Brennan Saeta - Software Engineer with Coursera
Learn how to take advantage of Amazon RDS to run highly-available and performance-intensive production applications on AWS. We show you what you can do to achieve the highest levels of availability and performance for your relational databases. You learn how easy it is to architect for these requirements using several Amazon RDS features, such as Multi-AZ deployments, read replicas, and Provisioned IOPS storage. In addition, you learn how to quickly architect for the level of disaster recovery required by your business. Finally, some of our customers share how they built very high performing web and enterprise applications on Amazon RDS.
DAT302 - A Closer Look at Amazon RDS for MySQL – Deep Dive into Diagnostics, Security, and Data Migration
by Pavan Pothukuchi - Sr. Product Manager with Amazon Web Services; Sorin Stoiana - Operations Lead with Optaros; Antonio Graeff - Technology Director with Titans Group
Learn how to monitor your database performance closely and troubleshoot database issues quickly using a variety of features provided by Amazon RDS and MySQL including database events, logs, and engine-specific features. You also learn about the security best practices to use with Amazon RDS for MySQL. In addition, you learn about how to effectively move data between Amazon RDS and on-premises instances. Lastly, you learn the latest about MySQL 5.6 and how you can take advantage of its newest features with Amazon RDS.
DAT303 - A Closer Look at Amazon RDS for Microsoft SQL Server – Deep Dive into Performance, Security, and Data Migration Best Practices
by Sergei Sokolenko - Senior Product Manager with Amazon Web Services; Alex Vodovoz - VP of Engineering with Viddy; Allan Parsons - VP, Technical Operations with Viddy
Come learn about architecting high-performance applications and production workloads using Amazon RDS for SQL Server. Understand how to migrate your data to an Amazon RDS instance, apply security best practices, and optimize your database instance and applications for high availability.
DAT304 - Mastering NoSQL: Advanced Amazon DynamoDB Design Patterns for Ultra-High Performance Apps
by David Yanacek - Sr. Software Development Engineer, DynamoDB with Amazon Web Services; Greg Nelson - Director of Engineering with Dropcam; David Tuttle - Engineering Manager, Platform Team with Devicescape Software, Inc.
Learn how to deliver extremely low latency, fast performance and throughput for web-scale applications built on Amazon DynamoDB. We show you how to model data, maintain maximum throughput, drive analytics, and use secondary indexes with Amazon DynamoDB. You also hear how customers have built large-scale applications and the real-world lessons they've learned along the way.
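The data-modeling patterns the session covers center on DynamoDB's composite primary key: a hash key picks a partition and a range key sorts items within it, so range queries inside one partition are cheap. A toy in-memory stand-in (not the DynamoDB API) showing the idea:

```python
from collections import defaultdict

class ToyTable:
    """Toy model of a DynamoDB hash + range key table: items live under
    their hash key, kept sorted by range key, so a Query over a range of
    keys within one partition is a cheap contiguous scan."""

    def __init__(self):
        # hash_key -> list of (range_key, item), kept sorted by range_key
        self.partitions = defaultdict(list)

    def put(self, hash_key, range_key, item):
        rows = self.partitions[hash_key]
        rows.append((range_key, item))
        rows.sort(key=lambda r: r[0])

    def query(self, hash_key, lo, hi):
        """Items in one partition whose range key falls in [lo, hi]."""
        return [item for rk, item in self.partitions[hash_key]
                if lo <= rk <= hi]

# Example: time-series readings keyed by (device id, timestamp).
t = ToyTable()
t.put("device-1", 1001, {"temp": 20})
t.put("device-1", 1005, {"temp": 21})
print(t.query("device-1", 1000, 1004))
```

This is why "query last hour of readings for one device" is a natural DynamoDB access pattern, while cross-partition queries need scans or secondary indexes.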
DAT305 - Getting Maximum Performance from Amazon Redshift: High Concurrency, Fast Data Loads and Complex Queries
by Rahul Pathak - Sr. Product Manager with Amazon Web Services; Ben Myles - Lead Software Engineer with Desk.com; Niek Sanders - VP of Engineering with HasOffers; Timon Karnezos - Director, Platform Infrastructure with Aggregate Knowledge
Get the most out of Amazon Redshift by learning about cutting-edge data warehousing implementations. Desk.com, a Salesforce.com company, discusses how they maintain a large concurrent user base on their customer-facing business intelligence portal powered by Amazon Redshift. HasOffers shares how they load 60 million events per day into Amazon Redshift with a 3-minute end-to-end load latency to support ad performance tracking for thousands of affiliate networks. Finally, Aggregate Knowledge discusses how they perform complex queries at scale with Amazon Redshift to support their media intelligence platform.
DAT306 - How Amazon.com, with One of the World’s Largest Data Warehouses, Is Leveraging Amazon Redshift
by Erik Selberg - Director, Amazon Enterprise Data Warehouse with Amazon.com; Abhishek Agrawal - Development Manager, DW Redshift Integration with Amazon Web Services; Adam Duncan - Technical Program Manager, Enterprise Data Warehouse with Amazon.com
Learn how Amazon's enterprise data warehouse, one of the world's largest data warehouses managing petabytes of data, is leveraging Amazon Redshift. Learn about Amazon's enterprise data warehouse best practices and solutions, and how they're using Amazon Redshift technology to handle design and scale challenges.
DAT307 - Deep Dive into Amazon ElastiCache Architecture and Design Patterns
by Nate Wiger - Principal Solutions Architect, Gaming with Amazon Web Services
Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache. We also include a demo where we enable Amazon ElastiCache for a web application and show the resulting performance improvements.
DAT308 - Advanced Data Migration Techniques for Amazon RDS
by Abdul Sait - Principal Solutions Architect with Amazon Web Services; Shakil Langha - Business Development Mgr with Amazon Web Services; Bharanendra Nallamotu with Amazon Web Services
Migrating data from the existing environments to AWS is a key part of the overall migration to Amazon RDS for most customers. Moving data into Amazon RDS from existing production systems in a reliable, synchronized manner with minimum downtime requires careful planning and the use of appropriate tools and technologies. Because each migration scenario is different, in terms of source and target systems, tools, and data sizes, you need to customize your data migration strategy to achieve the best outcome. In this session, we do a deep dive into various methods, tools, and technologies that you can put to use for a successful and timely data migration to Amazon RDS.
DMG102 - Getting to the Cloud the Right Way: A Public Sector Perspective
by Kemal Badur - Assistant Director with University of Minnesota; Shaun Enright - VP & GM Public Sector with Cloudnexa
(Presented By Cloudnexa) The University of Minnesota recently closed a successful bid for IaaS and selected AWS as a campus-wide partner. In this talk, University staff will discuss the way the bid process was handled, what they encountered in the responses, and what lessons they learned. They will also highlight the benefits of a reseller partner for the process and the potential benefits of managed services from a reseller for an institution like the University of Minnesota. The discussion will also include some current and upcoming use cases for the University.
DMG201 - Zero to Sixty: AWS CloudFormation
by Chetan Dandekar - Senior Product Manager - AWS CloudFormation with Amazon Web Services; Capen Brinkley - Software Engineer with Intuit
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. In this Zero to Sixty session, learn about CloudFormation's latest features along with best practices for using them, including maintaining complex environments with CloudFormation, template management and re-use, and controlling stack updates. Demos and code samples are available to all session attendees. Are you new to AWS CloudFormation? Get up to speed for this session by first completing the 60-minute "Fundamentals of CloudFormation" lab in the Self Paced Lab Lounge.
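Template management and re-use start with understanding the template shape itself. An illustrative sketch that builds a minimal template as a Python dict and serializes it to the JSON CloudFormation accepts; the AMI ID, resource name, and description are placeholders, not taken from any real stack:

```python
import json

# Illustrative only: a minimal CloudFormation template as a Python
# dict. "ami-12345678" is a placeholder AMI ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance (illustrative sketch)",
    "Parameters": {
        # Parameters make the template reusable across environments.
        "InstanceType": {"Type": "String", "Default": "t1.micro"}
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",            # placeholder
                "InstanceType": {"Ref": "InstanceType"}
            }
        }
    },
    "Outputs": {
        # Outputs expose values to other stacks or operators.
        "InstanceId": {"Value": {"Ref": "WebServer"}}
    }
}

body = json.dumps(template, indent=2)
print(body)
```

Generating templates programmatically like this is one common re-use strategy: shared Python builds the dict, and only the serialized JSON is handed to CloudFormation.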
DMG202 - Zero to Sixty: AWS OpsWorks
by Thomas Metschke - Technical Program Manager with Amazon; John Gaa - VP of Engineering with BeachMint
AWS OpsWorks is a solution for managing applications of any scale or complexity on the AWS cloud. Accelerate your use of OpsWorks by learning how to use several of its operational features in this Zero to Sixty session. It starts with a demo of the OpsWorks main workflows: manage and configure instances, create and deploy apps, monitoring, and security. BeachMint will explain how they set up OpsWorks as part of their continuous deployment pipeline. The session finishes off by explaining how to use the OpsWorks API and Chef recipes to automate standard operating procedures. Demos and code samples are available to all session attendees. Are you new to AWS OpsWorks? Get up to speed for this session by first completing the 60-minute "Introduction to AWS OpsWorks" lab in the Self-Paced Hands-On Lab Lounge. It will lead you through all major functions of the service with a fun example.
DMG203 - AWS Billing Deep Dive
by Serge Hairanian with Amazon Web Services; Vadim Jelezniakov with Amazon Web Services
This session walks through the mechanics of AWS bill computation and consolidated billing to help you understand your bill. AWS billing has many features to help you manage and control your costs in the AWS cloud environment including detailed billing reports, programmatic access, cost allocation, billing alerts, and IAM access. We provide an overview of these features and then demonstrate how to use and incorporate them into your own account setup.
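Cost allocation, in essence, rolls detailed billing line items up by a cost-allocation tag so spend can be attributed to a team, project, or department. A small sketch of that aggregation; the field names here are ours for illustration, not the detailed billing report's exact schema:

```python
from collections import defaultdict

def allocate_costs(line_items, tag="team"):
    """Roll billing line items up by a cost-allocation tag.

    Each line item is a dict with a "cost" and an optional "tags"
    mapping (field names are illustrative). Untagged usage is grouped
    under "untagged" so it stays visible rather than disappearing.
    """
    totals = defaultdict(float)
    for item in line_items:
        owner = item.get("tags", {}).get(tag, "untagged")
        totals[owner] += item["cost"]
    return dict(totals)

items = [
    {"cost": 1.50, "tags": {"team": "web"}},
    {"cost": 2.00, "tags": {"team": "web"}},
    {"cost": 0.50},                           # untagged usage
]
print(allocate_costs(items))
```

The same rollup extends naturally to consolidated billing, where line items from several linked accounts feed one report.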
DMG204 - Zero to Sixty: AWS Elastic Beanstalk
by Evan Brown with Amazon Web Services; Geraldo Thomaz - Co-Founder and Co-CEO with VTEX; Ann Wallace - Solutions Architect with Nike, Inc.; Amber Milavec - Solutions Architect with Nike
AWS Elastic Beanstalk provides an easy way for you to quickly deploy and manage applications in the AWS cloud. In this Zero to Sixty session, accelerate your use of Elastic Beanstalk by learning how Nike and VTEX use several of its most powerful features. Through interactive demos and code samples for both Windows and Linux, this session teaches you how to achieve deployments with zero downtime, how to easily enable or disable application functionality via feature flags, and how to customize your Elastic Beanstalk environments with extensions. Demos and code samples are available to all session attendees. Are you new to Elastic Beanstalk? Get up to speed for this session by first completing the 60-minute "Fundamentals of Elastic Beanstalk" lab in the Self Paced Lab Lounge.
DMG205 - Edmunds.com: Migrating, Deploying, and Managing a Traditional On-Premises Web Property to AWS
by John Martin - Sr Director, Production Engineering with Edmunds.com
Taking a stack composed of 30 web applications and their service dependencies to the cloud is no easy feat. Do you take the entirety of the stack or go the hybrid path? How transparent should the end result be to your technology teams? Does it look exactly the same in the cloud as it does in your data center? These are not rhetorical questions; they were very real for those tasked with the challenge of taking Edmunds.com to the AWS Cloud. This talk addresses these questions and many more, examining the challenges, successes, and lessons learned as the team took their first steps out of their own data centers. The presenters also cover how this experience is shaping the future direction of their stack's architecture to be friendlier to systems outside of their own data centers.
DMG206 - Netflix Development Patterns for Rapid Iteration, Scale, Performance & Availability
by Neil Hunt - Chief Product Officer with Netflix
This session explains how Netflix is using the capabilities of AWS to balance the rate of change against the risk of introducing a fault. Netflix uses a modular architecture with fault isolation and fallback logic for dependencies to maximize availability. This approach allows for rapid independent evolution of individual components to maximize the pace of innovation and A/B testing, and offers nearly unlimited scalability as the business grows. Learn how we balance managing change to (or subtraction from) the customer experience, while aggressively scraping barnacle features that add complexity for little value.
DMG208 - Lessons Learned on Capgemini’s COMPLETE Managed Services Platform
by Joe Coyle - North American Chief Technology Officer with Capgemini
(Presented by Capgemini) In this session Capgemini discusses how their enterprise customers leverage AWS using the COMPLETE platform to deploy and manage applications such as SAP, Business Information Management Elastic Analytics, and mobility solutions. This session also shows detailed AWS architectures they are delivering to clients and how Capgemini is using AWS infrastructure internally.
DMG209 - Enterprise Management for the AWS Cloud
by Joel Rosenberger - EVP Software, Cloud Computing with 2nd Watch
(Presented by 2nd Watch) Enterprise IT professionals have unique challenges with cloud resources. Deploying and managing an enterprise application today requires a solution that ensures compliance with corporate IT governance requirements and has predictable and repeatable performance and costs. In addition, business users also want solutions that can be deployed quickly. In this session 2nd Watch shows you how to deal with these enterprise-class cloud deployment challenges. You see how AWS CloudFormation scripts can be extended to automate reference architecture design creation, deployment, and management. You also learn how to visually inventory deployed AWS reference architectures and monitor AWS usage, including how to budget for platform usage by project, department, or program, and track and allocate costs in a similar way.
DMG210 - Panel Discussion: Business and Technology Leaders Discuss Adoption and Management Issues and Strategies for AWS Cloud Implementations
by Rich Gopstein - Director, Infrastructure Eng with Bristol-Myers Squibb; Lex Crosett - EVP Software and Services with Conservation Services Group; MJ DiBerardino - CTO with Cloudnexa; Josh Hofmann - Principal Manager with Amazon; Michael Gordon - Managing Director with New York Mellon Investment Management
(Presented by Cloudnexa) A moderated panel followed by audience questions and answers. This unique presentation of technology leaders will discuss the challenges of integrating cloud into their organizations. Learn from their collective experiences as they provide best practices and strategies for effective acquisition, management, and growth. Gain insight into leadership tactics to overcome barriers of adoption, as well as firsthand accounts of the decision-making process that has brought cloud to their organizations, and finally how AWS positions their organizations for growth. Speaking participants are from Bristol-Myers Squibb, Conservation Services Group, and New York Mellon Investment Management. Moderated by AWS and Cloudnexa.
DMG211 - Using Red Hat’s OpenShift Platform-as-a-Service to Develop Scalable Applications on AWS
by Steve Citron-Pousty - Developer Evangelist with Red Hat
(Presented by Red Hat) Learn how you can quickly develop, host, and scale applications in the AWS cloud with Red Hat's OpenShift. In this session, we walk you through the simple process of deploying and managing your own Linux-based application in the cloud using live demonstrations. We also discuss key use-cases and benefits to automated configuration, deployment, and administration of application stacks.
DMG212 - Instrumenting your Application Stack in a Dynamically Scaling Environment
by Michael Fiedler - Director of Technical Operations with Datadog
(Presented by Datadog) Gaining visibility into an application stack's performance is necessary to understand how the stack is running and to configure alerts effectively. Instrumenting each component in the stack to produce metrics provides this insight. In an environment that scales automatically, hosts are being automatically added, removed, and reassigned. Using an automated methodology for instrumentation in these environments can improve results and save you time. This session includes a live demo component to show auto-instrumentation of hosts, graphing, and alerting on metrics.
DMG301 - AWS Elastic Beanstalk under the Hood
by Chris Whitaker - Senior Software Development Manager with Amazon Web Services; Sudhindra Rao - Software Developer with ThoughtWorks Inc.; Pengchao Wang - Software Engineer with ThoughtWorks
AWS Elastic Beanstalk provides a number of simple, flexible interfaces for developing and deploying your applications. In this session, learn how ThoughtWorks leverages the Elastic Beanstalk API to continuously deliver their applications with smoke tests and blue-green deployments. Also learn how to deploy your apps with Git and eb, a powerful CLI that allows developers to create, configure, and manage Elastic Beanstalk applications and environments from the command line.
DMG303 - AWS CloudFormation under the Hood
by DJ Edwards with Amazon Web Services; Adam Thomas - Software Development Engineer with Amazon Web Services
You already know that AWS CloudFormation is a powerful tool for provisioning and managing your AWS infrastructure, but did you know that it can also provision and manage resources outside of AWS? Did you know that CloudFormation can fully bootstrap your EC2 instances, securely download data from S3, and even support Mustache templates? In this session you will go on a deep dive, touring some of CloudFormation's most advanced features with a member of the team that built the service. Explore custom resources, cfn-init, S3 authentication, and Mustache templates in a series of technical demos with code samples available for download afterwards.
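Custom resources are how CloudFormation provisions things outside AWS: the provider receives a request and replies by PUTting a JSON document to a pre-signed response URL that CloudFormation includes in the request. A sketch of building that reply; the top-level field names follow the documented custom-resource response protocol, while the helper name and defaults are ours:

```python
import json

def build_custom_resource_response(request, status="SUCCESS", data=None):
    """Build the JSON body a custom-resource provider sends back to
    CloudFormation's pre-signed ResponseURL.

    The StackId, RequestId, and LogicalResourceId must be echoed from
    the request so CloudFormation can match the reply to the pending
    resource; Status is SUCCESS or FAILED; Data carries attributes
    readable via Fn::GetAtt in the template.
    """
    return json.dumps({
        "Status": status,
        "StackId": request["StackId"],
        "RequestId": request["RequestId"],
        "LogicalResourceId": request["LogicalResourceId"],
        # Default physical ID is our placeholder choice, not a spec value.
        "PhysicalResourceId": request.get("PhysicalResourceId",
                                          "custom-resource"),
        "Data": data or {},
    })
```

Until CloudFormation receives this reply (or times out), the stack operation stays in progress, which is why providers must respond even on failure.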
DMG304 - AWS OpsWorks Under the Hood
by Jonathan Weiss - Software Development Manager with Amazon Web ServicesReza Spagnolo - Software Development Engineer with Amazon Web Services
AWS OpsWorks lets you model your application with layers that define the building blocks of your application: load balancers, application servers, databases, etc. But did you know that you can also extend OpsWorks layers or build your own custom layers? Whether you need to perform a specific task or install a new software package, OpsWorks gives you the tools to install and configure your instances consistently, and evolve them in an automated and predictable fashion through your application's lifecycle. We'll dive into the development process including how to use attributes, recipes, and lifecycle events; show how to develop your environment locally; and provide troubleshooting steps that reduce your development time.
DMG305 - How Intuit Leveraged AWS OpsWorks as the Engine of Our PaaS
by Capen Brinkley - Software Engineer with Intuit; Rick Mendes - Senior Software Engineer with Intuit, Inc.
In this talk, the engineering team behind the Intuit PaaS takes you through the design of our shared PaaS and its integration with AWS OpsWorks. We give an overview of why we decided to build our own PaaS and why we chose OpsWorks as the engine, along with technical details of the implementation and the challenges of building a shared runtime environment for different applications. Anyone interested in OpsWorks or building a PaaS should attend for key lessons from our journey.
ENT104 - New Launch: Amazon WorkSpaces: Desktop Computing in the Cloud
by Deepak Suryanarayan - Senior Product Manager with Amazon Web Services; Gene Farrell - General Manager with Amazon Web Services
Desktop virtualization has long held the promise of productivity and security benefits, but has been held back by large CapEx requirements and complicated installation and management. In this session, we provide a detailed introduction to Amazon WorkSpaces, a new AWS service that combines the benefits of desktop virtualization and a cloud-based, pay-as-you-go model. You learn about the key steps for setting up and delivering a secure cloud-based workspace accessed through purpose-built client applications.
ENT106 - Professional Grade Cloud for your Enterprise Needs
by Alan Chhabra - Area VP - Consulting Services with BMC Software
(Presented by BMC Software) Like most companies, yours is probably building both an internal, flexible IT environment and leveraging the power of Amazon Web Services. Each plays a key role in the IT footprint, and together they offer unparalleled flexibility, efficiency, and the agility to meet changing business needs. You may be starting small or moving quickly to address a range of business needs, geographies, or users. As you do, consider the role of an integrated, holistic management model to your enterprise. This session will introduce the key qualities of integrated management, built to address the hybrid use cases of AWS, on-premises converged fabrics, advanced service catalogs, multi-data center deployments, and, ultimately, IT transformation using the cloud. You've likely already dabbled in, or even mastered, the basics of the cloud. Now learn from two case studies what it takes to reach the next level and lead your company to success with hybrid IT including AWS.
ENT107 - Backup, Archive, and Restore System z Mainframe Data with Amazon Glacier and Amazon S3
by Mark Behrje - Senior Director with CA Technologies and Scott Arnett - Sr. Principal Product Manager with CA Technologies
(Presented by CA Technologies) There are a lot of mainframes still out there, with a lot of data to back up and archive. CA Technologies (CA) is the largest mainframe software provider in the world. This session provides an overview and demo of CA's Cloud Storage for System z solution used for mainframe storage backup to the AWS cloud. Traditional mainframe storage solutions are expensive and complex. For IT Directors, VPs of IT, VPs of Storage, and Storage Administrators, this session discusses how CA has partnered with AWS and Riverbed to provide an innovative, low cost, and secure solution for efficient backup and recovery of critical mainframe data. You see a demo of Chorus managing data flow from the mainframe to the cloud using the Riverbed Whitewater appliance. You also hear from a demanding customer, Mark Behrje, Sr. Director Global Information Services, CA Technologies, about how they implemented quickly, how much they've reduced their backup TCO, and lessons learned.
ENT108 - Network-Ready Your Hybrid IT Environment
by Sean Iraca - Director, Global Cloud Services with Equinix
(Presented by Equinix) As IT decision makers shift their consumption models to leverage more cloud services, they are faced with overcoming performance, security, and compliance limitations associated with the public Internet as an access method. In this session, we illustrate how Equinix delivers a unique and enhanced AWS Direct Connect global service offering that enables secure, high-speed access with the flexible, dynamic bandwidth to meet the needs of evolving IT consumption trends.
ENT201 - Bringing Governance to an Existing Cloud at NASA's Jet Propulsion Laboratory
by Jonathan Chiang - IT Chief Engineer with JPL and Matthew Derenski - CyberSecurity Engineer with JPL
Amazon Web Services provides JPL a vast array of capabilities to store, process, and analyze mission data. JPLers were early to adopt AWS services to build complex solutions. However, we quickly grew to over 50 AWS accounts, 80 IAM users, and hundreds of resources. A team of engineers inside JPL's Office of the CIO developed a cloud governance model. The true challenge was implementing it on existing deployments. Learn about our model and how we overcame the challenges.
ENT203 - What an Enterprise Can Learn from Netflix, a Cloud-native Company
by Brian Rice with Amazon Web Services and Yury Izrailevsky - Vice President, Cloud Computing and Platform Engineering with Netflix
In moving its streaming product to the cloud, Netflix has been able to realize tremendous benefits in scalability, performance, and availability. The biggest benefit came from moving to a service-based architecture, which allowed engineering teams to accelerate their development cycle and innovate more quickly. However, cloud migration was a substantial effort. We mobilized resources across the company over several years, reorganized our engineering and operations teams, developed new security policies, migrated to the DevOps operations model, and even embraced a new product architecture. In this talk, we trace the evolution of the Netflix cloud model, both the successes and the challenges, and present them in a way that's maximally useful to enterprises considering making the move to the cloud.
ENT206 - Using AWS Enterprise Support to the Fullest
by Simon Elisha - Principal Solution Architect with Amazon Web Services and Fergus Hammond - Senior Manager, Cloud Hosting with Adobe Systems Incorporated
At Adobe, we look at AWS Enterprise Support as our partners for success. With their help, we matured our use of AWS in many ways. This session details how AWS Support gave us insight into our AWS use and what we did to effect improvements. We're also making use of the Trusted Advisor SDK; we detail how we're building on top of that to drive further enhancements.
ENT207 - How Federal Home Loan Bank of Chicago Maintains Control in the Cloud
by Eric Geiger - Managing Director IT Operations with Federal Home Loan Bank of Chicago
Cloud computing on AWS provides central IT organizations with the ability to manage cost and control their infrastructure growth, data, and security. Explaining that to your executives or board, however, can be difficult. This session will detail the framework, processes, and controls that helped the Federal Home Loan Bank of Chicago become comfortable moving into the cloud.
ENT208 - Life Technologies' Journey to the Cloud
by Sean Baumann - Director, Enterprise Software Engineering with Life Technologies and Mark Field - CTO & VP of Software Engineering with Life Technologies
Life Technologies initially planned to build out its own data center infrastructure, but when a cost analysis revealed that by using Amazon Web Services the company would save $325,000 in hardware alone for a single new initiative, the company decided to use AWS instead. Within 6 months of adopting AWS, Life Technologies launched their Digital Hub platform in production, which now undergirds Life Technologies' entire instrumentation product suite. This immediately began to decrease their time-to-market and enhance their customers' user experience. In this session, we provide an overview of our path to the AWS cloud, with particular focus on the evaluation criteria used to make a cloud vendor decision. We also discuss the lessons learned since going into production.
ENT209 - Implementing Dole Food's Global Collaboration Platform and Web Presence on AWS
by Joanna Dyer - Director IT Solutions with Dole
Dole Food needed a global SharePoint infrastructure that met tough goals for availability, performance, scalability, and price. Dole also needed a highly scalable and resilient hosting infrastructure for its public web presence. By deploying both on AWS, Dole Food met its goals while avoiding capital expenditures and operational costs. We trace the project's timeline, discussing how those goals were met and sharing lessons learned. We also talk about how we extended Dole Food's corporate Active Directory into the AWS cloud.
ENT211 - McGraw-Hill Education: From US Brick and Mortar to Global in a Little Under 2 Years
by Shane Shelton - Sr. Director of DevOps with McGraw-Hill Education
McGraw-Hill Education, a multi-billion-dollar publishing company, moved to digital on company-owned data centers and is now migrating to AWS. This session details its two-year migration plan for the eventual global deployment of all MHE platforms on AWS. Learn about the business drivers as well as the technical and business challenges the company has faced and overcome to date.
ENT212 - The System Administrator Role in the Cloud Era: Better Than Ever
by James Staten - VP, Principal Analyst with Forrester Research
With developers and business leaders driving the charge into cloud computing, where does this leave the IT department and, to put it bluntly, me, the sysadmin? Fear not, IT operation skills are highly relevant and in demand in the cloud era, but it might take a little repositioning on your part to get the opportunity. In this session, Forrester analyst James Staten shares how the leading sysadmins are engaging the business on their cloud journey and what they have done to evolve their role, advance their skills, and position themselves as IT change agents and leaders for the next generation.
ENT213 - How Boeing is Using AWS to Transform Commercial Aviation
by Brian Rice with Amazon Web Services and Kevin Smith - Platform Development Leader with The Boeing Company
In a highly regulated, safety-critical industry like commercial aviation, information sharing is of paramount importance, even when competitive relationships also exist. To solve this problem, Boeing has built a vendor-neutral platform where the players can securely work together to maximize safety and operational efficiency. Leveraging the best-of-industry offerings from AWS, Boeing and other companies can launch innovative applications on the platform with data governance that secures competition-sensitive information from the wrong eyes. In this session, we explore the business drivers behind the project and the challenges met and overcome during design and implementation.
ENT214 - Migrating My.T-Mobile.com to AWS
by Shyam Sasidharan - Director, Technology Transformation with T-Mobile and Gopala Gaddipati - Principal Architect with T-Mobile USA
When T-Mobile wanted to rebuild its next generation web customer service platform, it chose AWS to enhance its customers' user experience. In this session, learn how T-Mobile adopted the AWS cloud platform, implemented an agile development methodology, embraced faster release cycles, and paved the way for greater AWS adoption within the organization. In doing so, T-Mobile was also able to deliver a consistent, comparable experience to its customers across four screens: PCs, tablets, smartphones, and feature phones. T-Mobile was also able to demonstrate agility and efficiency from a technology and business perspective.
ENT215 - The Data Center Is The Heartbeat of Today's IT Transformation
by Ted Chamberlin - VP, Market Development with CoreSite
(Presented by CoreSite) The cloud-enabled data center sits at the center of IT transformation. It facilitates the interconnection and communities that come together, propelling growth for both buyers and sellers. Learn how CoreSite is bringing together best-of-breed partners through the Open Cloud Exchange, resulting in public, private, and hybrid IT interconnection and management as well as integration of AWS Direct Connect.
ENT216 - NetApp Private Storage for AWS—70% Compute Cost Savings and Faster Innovation With Data Control
by David McLaughlin - CEO with Blue River Information Technology and Phil Brotherton - VP, Cloud Solutions Group with NetApp
(Presented by NetApp) NetApp Private Storage for Amazon Web Services represents a new generation of hybrid data center solution that marries public cloud benefits with proven enterprise storage. Well suited for workloads with variable or unpredictable compute needs that also require a high degree of data stewardship, the solution combines the economics, elasticity, and time-to-market benefits of Amazon Elastic Compute Cloud (EC2) with an agile data infrastructure characterized by the proven performance, availability, security, and compliance of dedicated NetApp storage. We showcase customer results across a variety of use cases ranging from web and business applications to big data analytics while paying special attention to the benefits of creating a disaster recovery as a service solution for Microsoft, Oracle, and SAP applications. Disaster recovery for enterprise applications is a key piece of a robust business continuity strategy, but a traditional solution that requires establishing a redundant infrastructure stack in a remote location can take a big bite out of IT budgets and is sometimes hard to justify. Using Amazon EC2, AWS Direct Connect, and NetApp storage and data management, customers have achieved cost-effective enterprise-grade data protection and business continuity from on-premises production environments to a single AWS region and across geo-diverse AWS regions. Blue River demonstrates how to achieve these results using NetApp application-integrated Snapshots and asynchronous mirroring to replicate and fail over from on-premises infrastructure to EC2 and NetApp in a Direct Connect enabled Equinix colo.
When used in combination with AWS Reserved Instances at the failover destination, this solution provides a low cost, reliable disaster recovery option with granular recovery points, fast recovery time, and full data control on private storage. We also show how the DR site can be dual purposed to provide a highly efficient development and test environment that accelerates innovation.
ENT217 - How to Say Yes to Self-Service in the Cloud and Become an IT Hero
by Rishi Vaish - VP of Product with RightScale
(Presented by RightScale) We've all seen the trend: your internal customers, developers and business users alike, want self-service access to the IT resources they need. And increasingly they want those resources in the cloud via a single management console. Cloud computing offers them the agility they need to respond to market demands and the ability to scale up while keeping costs low. IT can become a hero by providing a self-service portal where internal customers can provision their own cloud resources while IT maintains the control and visibility an enterprise requires. The happy result: enterprise agility and enterprise control. During this session, learn how to empower your internal customers to provision the necessary cloud resources when they need them while ensuring they stay well within IT-approved guidelines. To help illustrate the effectiveness of this approach, our presenters walk you through real-world examples and the overall impact on their organizations: create an IT vending machine with consistent and reproducible processes; manage multiple IT environments through a single pane of glass; use cost planning and forecasting to fine-tune and understand cloud spend; and discover reporting and auditing tools to ensure compliance.
ENT218 - The Best of Both Worlds: Implementing Hybrid IT with AWS
by Brian Adler with RightScale
(Presented by RightScale) With the increased use of cloud services, organizations are faced with finding the most efficient way to leverage existing IT infrastructure alongside cloud-based compute, storage, and networking resources. This has resulted in the rise of hybrid infrastructure so IT teams can deliver agility and performance with visibility and control. At RightScale, we've implemented some of the world's largest hybrid IT deployments. In this session, we share implementation approaches, architecture design considerations, and steps for a successful hybrid IT model. This session covers: how to develop a strategy and framework for a successful path to hybrid IT; how to prioritize applications for public cloud versus on-premises; how to manage multiple compute resource pools through a unified management framework; and implementation and continuous improvement of a hybrid IT environment. Examples of enterprise hybrid IT implementations include cloudbursting, high availability, and disaster recovery.
ENT220 - How Accenture CAS Accelerates Delivery with AWS
by Thorsten Maier - Software Product Manager with Accenture, Carsten Muller - Delivery Excellence Lead, and Carsten Mueller - Senior Technology Architect with Accenture CAS
(Presented by Accenture) Accenture CAS is the leading integrated sales platform for the consumer goods industry. Accenture CAS leverages Amazon Web Services, including Amazon EC2, Amazon EBS, Elastic Load Balancing, Amazon CloudWatch, and the AWS Management Console, to quickly perform technology case studies for clients and demonstrate cloud readiness without having to invest in expensive infrastructure. This session illustrates how AWS's flexible and scalable technology and application services deliver value to Accenture CAS and Accenture CAS clients simultaneously. The session provides a deeper dive into a detailed case study and highlights the CAS approach, advantages, issues faced, and lessons learned. Demonstrated business challenge and value added: in typical customer engagements Accenture CAS needs to prove that the data integration and replication technology for enterprise resource planning, backend systems, and mobile clients can handle the customer's data volume, processing times, and scalability requirements. Scenarios and technical setups vary from client to client and even from use case to use case and require Accenture CAS to quickly adapt the underlying hardware infrastructure and setups. This is a time consuming and expensive task with on-premises installations. Accenture CAS harnesses the power and capabilities of the AWS cloud in order to meet this demand while understanding and optimizing costs, efforts, and timing. Using the example of a high volume data integration and replication scenario supporting 20,000 mobile devices with peaks of 6,500 devices per hour, topics covered include: design, setup, and configuration of the respective AWS cloud environment; deployment and configuration of Accenture CAS software, testing, and simulation tools to the AWS cloud; monitoring, maintenance, and management of the complete setup; and execution, analysis, and results.
ENT221 - The Science Behind Choosing EC2 Reserved Instances
by Toban Zolman - VP, Product Development with Cloudability
(Presented by Cloudability) Amazon EC2 Reserved Instances (RIs) are a great way for many customers to save money on AWS, particularly because the break-even point is well inside of their term. But for some customers the analysis and selection process is not well understood and can prevent them from making a decision that could save them money. In this talk, Cloudability VP of Product Development Toban Zolman walks you through the most common scenarios for RIs, shows you how to make the best possible decisions for RI purchases, and how to significantly reduce the time needed to make those decisions.
ENT222 - Enterprise Transformation through Cognizant’s XaaS fabric on AWS
by Jai Venkat - Corporate Vice President with Cognizant TechnologiesRamesh Panuganty - Managing Director, Cloud360 with Cognizant
(Presented by Cognizant) Unlocking the true value of the AWS cloud is not a one-size-fits-all task. As a Premier Consulting Partner, we have worked with a number of enterprises on their journey towards the AWS cloud. As a best practice, we have developed tools and frameworks to assist along the way. Join our experts as we discuss practical examples of AWS implementations, based on which you can help your organization run better and run different. Cloud Stepz is a structured "factory based" process framework that helps clients migrate their application workloads to a cloud environment. It covers three major phases of the cloud journey: Strategy & Roadmap, Workload Assessment, and Migration Foundry. Cloud360 hyperplatform is a manager of enterprise cloud services that abstracts and governs private, public, and legacy IT assets and delivers a superior, on-demand service experience. assetSERV makes digital content management easy for large enterprises. Its cloud-based platform delivers tailored, on-demand marketing content that can be accessed and managed anytime, anywhere, and on any device.
ENT223 - Delivering Desktops and Apps to the Mobile Device
by Joe Vaccaro - Director, Product Management - Desktop and Apps Group with Citrix
(Presented by Citrix) As an IT infrastructure decision-maker and administrator, you need to meet the needs of your employees who want to use the latest mobile devices. Application architects and system admins learn how to use Citrix XenApp on AWS coupled with Citrix Receiver on the device to provide seamless access to applications and desktops, while maintaining optimal performance, reliability, security, and affordability. Learn best practices through demonstrations and case studies. This session includes: a deep dive into the XenApp on AWS Reference Architecture, including what's available (e.g., AWS CloudFormation templates) to help architect your XenApp environment; a demonstration of Receiver on mobile devices providing access to desktops and apps served from XenApp on AWS; and a demonstration of Citrix XenMobile, an enterprise MDM solution running in AWS that enables you to configure, secure, provision, and support mobile devices such as Kindle.
ENT301 - Integrating On-premises Enterprise Storage Workloads with AWS
by Yinal Ozkan - Principal Solutions Architect with Amazon Web Services and Harry Dewedoff - Sr. Director, Global Information Technology Serv with NASDAQ OMX
AWS gives designers of enterprise storage systems a completely new set of options. Aimed at enterprise storage specialists and managers of cloud-integration teams, this session gives you the tools and perspective to confidently integrate your storage workloads with AWS. We show working use cases, a thorough TCO model, and detailed customer blueprints. Throughout we analyze how data-tiering options measure up to the design criteria that matter most: performance, efficiency, cost, security, and integration.
ENT303 - Migrating Enterprise Applications to AWS: Best Practices, Tools, and Techniques
by Tom Laszewski - Strategic Solution Architect with Amazon Web Services, Bharanendra Nallamotu with Amazon Web Services, and Abdul Sait - Principal Solutions Architect with Amazon Web Services
This session discusses strategies, tools, and techniques for migrating enterprise software systems to AWS. We consider applications like Oracle eBusiness Suite, SAP, PeopleSoft, JD Edwards, and Siebel. These applications are complex by themselves; they are frequently customized; they have many touch points on other systems in the enterprise; and they often have large associated databases. Nevertheless, running enterprise applications in the cloud affords powerful benefits. We identify success factors and best practices.
ENT304 - Big TV Means Big Infrastructure and Big Data
by Charles Hammell - Solutions Architect with Comcast and Jacques Louvet - Infrastructure Lead with Comcast Corporation
To support their advanced Xfinity X1 platform and provide services to millions of customers, Comcast maintains physical data centers, but the company is increasingly relying on AWS as well; it plans to maintain both going forward. AWS supplies Comcast with resources on demand so that capacity issues in its brick-and-mortar sites can be more readily managed as part of its high-availability strategy. Comcast also employs AWS as a flexible, cost-effective platform for development and prototyping. Finally, AWS also helps Comcast make effective tactical and strategic use of the huge data streams generated by its customers' use of its services. In this talk, we trace how Comcast uses AWS to mitigate risk and to open up new technical capabilities.
ENT305 - Best Practices for Benchmarking and Performance Analysis in the Cloud
by Robert Barnes - Director of Benchmarking with Amazon Web Services
In this session, we explain how to measure the key performance-impacting metrics in a cloud-based application. With specific examples of good and bad tests, we make it clear how to get reliable measurements of CPU, memory, disk, and how to map benchmark results to your application. We also cover the importance of selecting tests wisely, repeating tests, and measuring variability.
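The abstract's closing advice about repeating tests and measuring variability is easy to operationalize. This is a minimal sketch of such a harness (the `benchmark` helper and its defaults are my own illustration, not material from the session): report spread alongside the mean so run-to-run noise is visible instead of hidden in a single number.

```python
import statistics
import time

def benchmark(fn, runs=10, warmup=2):
    """Time fn repeatedly; report mean, stdev, and coefficient of
    variation so variability is part of the result, not an afterthought."""
    for _ in range(warmup):               # warm caches before measuring
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return {"mean": mean, "stdev": stdev, "cv": stdev / mean}

result = benchmark(lambda: sum(range(10000)))
```

A high coefficient of variation (`cv`) is a signal to investigate noisy neighbors, caching effects, or an unstable test before trusting the mean.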
MBL201 - Mobile Game Architectures on AWS
by Nate Wiger - Principal Solutions Architect, Gaming with Amazon Web Services
The gaming industry is undergoing massive changes, and AWS offers unique capabilities that game developers can use to succeed. In this session, we cover cloud gaming architectural patterns you can use to create highly available and scalable online games. We discuss games-as-APIs, database design considerations, decoupled architectures, and the best instance types for mobile, social, and AAA online games. By the end of this session, youÕll understand the different architecture patterns for the major classes of online games, as well as understand which AWS technologies will help you meet the unique challenges of each one.
MBL202 - A Telco's Story About Launching Voice-Command Personal Agent Service with AWS Cloud
by Minoru Etoh - Managing Director, R&D Strategy Department SVP with NTT DoCoMo
In March 2012, Japan's leading mobile operator, NTT DOCOMO, introduced Shabette Concier, an advanced voice-activated personal agent service that enables customers to intuitively and directly operate services and smartphone features with voice commands. Millions of DOCOMO's subscribers are now using this service. This session explains Shabette Concier's distributed speech recognition architecture and dialogue-understanding system design, with machine learning technologies and large-scale database systems. Learn why DOCOMO chose the AWS cloud and how DOCOMO engineers overcame difficulties ranging from CEO-imposed time constraints, unexpectedly rapid service growth, and usage spikes driven by marketing campaigns to internal resistance to the use of cloud services. The session concludes with lessons learned from a telco's large-scale development of a mobile app with the AWS cloud.
MBL203 - Mobile App Performance: Getting the Most from APIs
by Pierre-Luc Simard - Chief Technology Officer with Mirego and Nate Isley - Sales Engineer
(Presented by New Relic) Too often, developers think of a mobile app as simply code running on the device. A mobile app is much more than that. Every web API used by an app becomes as much a part of the app as the code running on the device. But while mobile developers have control over their code, they don't always have control over the APIs they use. Web APIs and their infrastructure impact app performance and ultimately the user experience. This presentation covers some of the essential aspects of app performance management when web APIs are present: HTTP headers are your friend, so stop ignoring all they have to tell you; control your network connections on the device rather than just leaving things to the OS; configure all your caches and use them; and whatever you do, measure early and often. The session includes a customer story from the CTO of Mirego, and demos of New Relic mobile app performance monitoring, where you see how to drill down into specific requests to see performance by response time, throughput, and data transfer size.
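The "HTTP headers are your friend" and caching points boil down to reading the freshness information a server already sends. This is a deliberately simplified sketch of that decision (the `is_cacheable` helper is hypothetical and ignores many RFC 7234 details such as `private`, `s-maxage`, and header-name case):

```python
def is_cacheable(headers):
    """Decide, from response headers alone, whether a client may reuse
    a cached copy (simplified: no-store wins, then max-age, then
    validators that at least allow cheap revalidation)."""
    cc = headers.get("Cache-Control", "").lower()
    directives = [d.strip() for d in cc.split(",") if d.strip()]
    if "no-store" in directives:
        return False
    for d in directives:
        if d.startswith("max-age="):
            try:
                return int(d.split("=", 1)[1]) > 0
            except ValueError:
                return False
    # No explicit freshness info: fall back to validators only.
    return "ETag" in headers or "Last-Modified" in headers
```

Even this crude check makes the session's point: an app that ignores these headers re-downloads payloads its server explicitly told it were safe to reuse.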
MBL301 - Coding Tips You Should Know Before Distributing Your HTML5 Web App on Mobile Devices
by Arindam Bhatacharya - Sr. Product Manager with Lab126 and Russell Beattie - Technical Evangelist, HTML5 Platform with Amazon Lab126
Before you surface your web app through a mobile marketplace, you should be aware of common errors and performance tips to optimize your web app for mobile devices. This session teaches you the important things you need to know, like the best practices around improving frame rate and ways to optimize <canvas>. We also help you avoid mistakes, such as common cases that kill scrolling performance. Through a case study, we dive deep into the subtleties of CSS animations and improving rendering speed by reducing composite layers in your web apps. We also show you how to profile performance bottlenecks on-device using the Web App Tester tool. You get a peek into the Chromium-based web runtime on Kindle Fire devices and learn how to accommodate device rotation, touch feedback, device resolution, and screen aspect ratios to fine-tune your web app experience on mobile devices.
MBL302 - Scaling a Mobile Web App to 100 Million Clients and Beyond
by Joey Parsons - Head of Operations with Flipboard
Mobile apps have different service requirements from their desktop and web-based analogs. Bandwidth, client processing, and other considerations can impose significant extra demands on a scalable service. This session is a technical discussion of the challenges Flipboard met while scaling a data-intensive mobile app from 0 to 100 million clients and how they are working on scaling 10x using AWS. At each major step, Flipboard has encountered many challenges. Learn about how they handled those challenges and the evolution of their systems architecture, design choices, and software selection.
MBL303 - Gaming Ops - Running High-Performance Ops for Mobile Gaming
by Nick Dor - Sr. Director Engineering with GREE International, Inc. and Eduardo Saito - Director of Engineering with GREE International, Inc.
GREE International creates free-to-play mobile games for iOS and Android devices that are played by millions of people around the world. In the last year, the ops team at GREE has doubled the number of AWS instances under management, migrated games from traditional data centers into AWS, and developed resilient provisioning processes to accommodate live events across the entire portfolio of games built and supported by GREE's San Francisco game studio. This session covers how the GREE team monitors game health, deals with incident response, enables mass system configuration, and handles release management. The session provides insight into the benefits and challenges of working with the cloud and how game analytics are used to improve player engagement and drive monetization. The session also includes details about implementing intelligent Amazon EC2 auto-scaling policies, supporting in-game live events, and the intricacies involved in migrating a live game from a traditional data center into AWS without players noticing.
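The "intelligent auto-scaling policies" the abstract mentions usually reduce to a target-tracking rule: resize the group so per-instance load approaches a target, clamped to the group's limits. A minimal sketch of that core calculation (the function, its parameters, and the numbers below are illustrative assumptions, not GREE's actual policy):

```python
import math

def desired_capacity(current, per_instance_load, target,
                     min_size=2, max_size=100):
    """Target-tracking sketch: size the fleet so per-instance load
    (e.g. CPU %) approaches `target`, within the group's size limits."""
    if per_instance_load <= 0:
        return min_size                     # idle fleet: shrink to the floor
    desired = math.ceil(current * per_instance_load / target)
    return max(min_size, min(max_size, desired))
```

For example, a 10-instance fleet running at 90% CPU against a 60% target would grow to 15 instances; real policies add cooldowns and scheduled pre-scaling for events like the in-game live events described above.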
MBL304 - Building a World in the Clouds: MMO Architecture on AWS
by Jeffrey Berube - Director of Technical Operations with Red 5 Studios, Inc.
Can you really build the infrastructure required to bring a massively multiplayer online game (MMO) to life in the cloud? This session discusses the evolution of Red 5 Studios' FireFall, a free-to-play MMO. FireFall runs entirely on the AWS platform and allows players from around the world to play together in the cloud. The session covers some of the design decisions made over the last two years, the things that worked well and not so well. The session also presents some of the solutions Red 5 implemented to ease the transition from dedicated data center hardware to virtual servers in AWS.
MBL305 - Building Killzone's Servers - How We Used AWS in Our Flexible Architecture and Component-Based Design for Killzone ShadowFall and Killzone Mercenary
by Tim Darby - Senior Principal Server Engineer with Sony Computer Entertainment Europe
For Killzone Mercenary and Killzone ShadowFall, Guerrilla Games and SCE Online Technology Group switched from an inflexible, classic 3-tier architecture to one using well understood tools and AWS technologies. This switch allowed them to deliver more title-specific features without losing the benefits of sharing code between titles easily. This session covers problems in the previous architectures, how they fundamentally changed how they built game servers, some of the problems they faced while rebuilding from the ground up, and rabbit holes they got stuck down, both generally and specific to their use of AWS Elastic Beanstalk, Amazon DynamoDB, and other AWS services. They show how they were able to react quicker to the changing needs of the title, make last-minute feature changes, whip up servers at a moment's notice, and switch out implementations when a new AWS feature came along that would deliver performance or cost improvements.
MBL306 - Taking In-App Purchasing to the Next Level: Selling Physical Goods through Your App & Other Monetization Strategies
by David Isbitski - Developer Evangelist with Amazon Appstore
How to make your app work as hard for you as you've worked on it is a common concern for most developers. In this session, learn how to implement monetization technology to help your Android app step up its game regardless of which store customers use to download your apps. This session covers a new API that helps you earn advertising fees for physical goods you sell in the context of your mobile app. The session offers code samples and best practices for bundling physical and digital goods for a seamless buying experience in your game or app. Learn how to implement more traditional monetization technology such as in-app purchasing and display ads.
MBL307 - How Parse Built a Mobile Backend as a Service on AWS
by Charity Majors - Production Engineering Manager with Parse
Parse is a BaaS for mobile developers that is built entirely on AWS. With over 150,000 mobile apps hosted on Parse, the stability of the platform is our primary concern, but it coexists with rapid growth and a demanding release schedule. This session is a technical discussion of the current architecture and the design decisions that went into scaling the platform rapidly and robustly over the past year and a half. We talk about some of the lessons learned managing and scaling MongoDB, Cassandra, Redis, and MySQL in the cloud. We also discuss how Parse went from launching individual instances using Chef to managing clusters of hosts with Auto Scaling groups, with instance discovery and registry handled by ZooKeeper, thus enabling us to manage vastly larger sets of services with fewer human resources. This session is useful to anyone who is trying to scale up from startup to established platform without sacrificing agility.
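The ZooKeeper-based discovery Parse describes hinges on one ZooKeeper feature: ephemeral znodes, which vanish automatically when the creating client's session expires. Below is a minimal sketch of that pattern using the kazoo client; the `/services/<name>/<host>:<port>` path layout and the helper names are illustrative assumptions, not Parse's actual code.

```python
import json

def registration_znode(service, host, port):
    # Path layout (/services/<name>/<host>:<port>) is an assumed
    # convention for illustration; ZooKeeper itself imposes no schema.
    path = "/services/%s/%s:%d" % (service, host, port)
    payload = json.dumps({"host": host, "port": port}).encode("utf-8")
    return path, payload

def register(zk, service, host, port):
    """Register this instance with a connected kazoo KazooClient.

    ephemeral=True is the key property: the znode disappears when the
    client's session dies, so a crashed instance drops out of the
    registry automatically with no explicit deregistration step.
    """
    path, payload = registration_znode(service, host, port)
    zk.ensure_path("/services/%s" % service)
    zk.create(path, payload, ephemeral=True)

def discover(zk, service):
    # Consumers list the children of the service node to see live instances;
    # a watch on this node (not shown) pushes membership changes to them.
    return zk.get_children("/services/%s" % service)
```

The trade-off the blog's head discusses follows directly from this design: registration liveness is tied to a ZooKeeper session, so a ZooKeeper outage or partition degrades the registry itself.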
MBL308 - Engage Your Customers with Amazon SNS Mobile Push
by Constantin Gonzalez - Solutions Architect with Amazon Web Services; Pablo Varela - Software Engineer with Plumbee
Amazon SNS mobile push is a scalable, fully-managed, cross-platform mobile push notifications service. In this session, we show you how to implement a massively scalable notification system across multiple platforms (including Apple, Google, and Kindle Fire devices). We cover common design patterns including the code you need, and we demonstrate live on stage just how fast and scalable SNS can be. Also hear from customers who have combined Amazon SNS with Amazon Redshift and Amazon DynamoDB to engage their own customers with precisely targeted messages.
MBL309 - Speeding Mobile Development with Cross-Platform APIs: High-Level Amazon Services for Data Sync, Player Engagement, and A/B Testing
by Peter Heinrich - Technical Evangelist with Amazon Appstore; Sourabh Ahuja - VP, Android and Cross-Platform Development with Glu Mobile
Is your mobile app or game built to run on multiple platforms? Do you want a fast, straightforward solution to common problems like data synchronization, player achievements, and leaderboards? Amazon offers several services that allow you to plug-and-play features in your mobile app or game, with very little setup and no server-side development. This session provides instructions for integrating GameCircle (Whispersync for Games, Achievements, Leaderboards) in your Android apps, and demonstrates how to add A/B testing and fine-tune your app based on user response. Learn how to speed up your time to market, regardless of whether you distribute through Google Play or the Amazon Appstore.
MBL402 - Building Cloud-Backed Mobile Apps
by Glenn Dierkes - Software Development Engineer with Amazon Web Services
Connecting your mobile app to AWS can unlock powerful features. With AWS, you can streamline your sign-in experience with social login, store user data in the cloud and share it between devices, display location-specific information using geospatial queries, and engage your customers across multiple platforms with push notifications. In this session, you learn how to integrate these powerful features into a sample mobile app using Amazon DynamoDB, Amazon Simple Notification Service (Amazon SNS), and web identity federation.
MED301 - Media Content Ingest, Storage, and Archiving with AWS
by John Downey - Business Development Manager - Storage with Amazon Web Services; Michael Raposa - Vice President with iN DEMAND
The first step in a successful cloud-based media workflow is getting the content transferred and stored. From there you can achieve massive efficiencies for downstream processing and delivery via content access instead of content transfer. In this session you'll learn about best practices for ingesting content to the cloud; relevant AWS partners within the media ecosystem; how to use storage tiers based on the business value of your assets; how to eliminate tape, tape museums, and tech refresh within your long-term archive strategy; and ultimately how to remonetize archived assets.
MED302 - Scalable Media Processing in the Cloud
by David Sayed - Principal Product Manager with Amazon Web Services; Phil Cluff - Principal Software Engineer with BBC
The cloud empowers you to process media at scale in ways that were previously not possible, enabling you to make business decisions that are no longer constrained by infrastructure availability. Hear about best practices to architect scalable, highly available, high-performance workflows for digital media processing. In addition, this session covers AWS and partner solutions for transcoding, content encryption (watermarking and DRM), QC, and other processing topics.
MED303 - Maximizing Audience Engagement in Media Delivery
by Usman Shakeel - Principal Solutions Architect with Amazon Web Services; Shobana Radhakrishnan - Engineering Manager with Netflix
Providing a great media consumption experience to customers is crucial to maximizing audience engagement. To do that, it is important that you make content available for consumption anytime, anywhere, on any device, with a personalized and interactive experience. This session explores the power of big data log analytics (real-time and batched), using technologies like Spark, Shark, Kafka, Amazon Elastic MapReduce, Amazon Redshift, and other AWS services. Such analytics are useful for content personalization, recommendations, personalized dynamic ad-insertions, interactivity, and streaming quality. This session also includes a discussion from Netflix, which explores personalized content search and discovery with the power of metadata.
MED304 - Automated Media Workflows in the Cloud
by John Mancuso - Solutions Architect with Amazon Web Services; Tony Koinov - Director of Engineering with Netflix
Ingesting, storing, processing, and delivering a large library of content involves massive complexity. This session walks through sample code that leverages AWS services to perform all these tasks while coordinating the activities with Amazon Simple Workflow Service (SWF). Along the journey you are introduced to best practices for cost optimization, monitoring, reporting, and exception or error handling. In addition to the sample workflow, a guest speaker from Netflix takes the audience on a deep dive into their "digital supply chain", where you learn how they have automated their processes in moving data all the way from the studios to the last mile. Services covered include Amazon SWF, Amazon Simple Storage Service (S3), Amazon Glacier, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Transcoder, Amazon Mechanical Turk, and Amazon CloudFront.
MED305 - On-demand and Live Streaming with Amazon CloudFront in the Post-PC World
by Alex Dunlap - Senior Manager, Amazon Web Services with Amazon Web Services; Ivan Yang - Director, Systems Engineering with VEVO
Learn how AWS customers are using Amazon CloudFront to deliver their video content to users over HTTP(S) using a number of modern streaming protocols such as Smooth Streaming, HLS, and DASH. You also learn about options for end-to-end security of your video content, through both encryption and preventing access from unauthorized users based on their location. Finally, we demonstrate how easy it is to use CloudFront to deliver both your on-demand and live video to a global audience with great performance.
MED401 - Securing Media Content and Applications in the Cloud
by Usman Shakeel - Principal Solutions Architect with Amazon Web Services; Ben Masek - SVP & CTO with Sony Media Cloud Services
Are your media assets secure? For media companies, security is paramount. Few things can more directly impact your company's bottom line. As the move to store, process, and distribute digital media via the cloud continues, it is imperative to examine the relevant security implications of a multitenant public cloud environment. This talk is intended to answer questions around securely storing, processing, distributing, and archiving digital media assets in the AWS environment. The talk also covers the security controls, features, and services that AWS provides its customers. Learn how AWS aligns with the MPAA security best practices and how media companies can leverage that for their media workloads. This session also includes a representative from Sony Media Cloud Services discussing the path to MPAA alignment of their application "Ci" on AWS based on these best practices.
MED402 - Building a Scalable Video / Digital Asset Management (DAM) Platform in the Cloud
by Michael Limcaco - Enterprise Solutions Architect with Amazon Web Services; Jonathan Rivers - Director, Technical Operations with PBS
With the breadth of AWS services available that are relevant to digital media, organizations can readily build out complete content/asset management (DAM/MAM/CMS) solutions in the cloud. This session provides a detailed walkthrough for implementing a scalable, rich-media asset management platform capable of supporting a variety of industry use cases. The session includes a code-level walkthrough, AWS architecture strategies, and integration best practices for content storage, metadata processing, discovery, and overall library management functionality, with particular focus on the use of Amazon S3, Amazon Elastic Transcoder, Amazon DynamoDB, and Amazon CloudSearch. A customer case study highlights PBS's successful use of Amazon CloudSearch to enable rich discovery of programming content across the breadth of their network catalog.
SEC101 - AWS Security – Keynote Address
by Stephen Schmidt - Chief Info Security Officer with Amazon Web Services
Security must be the number one priority for any cloud provider, and that's no different for AWS. Stephen Schmidt, vice president and chief information security officer for AWS, shares his insights into cloud security and how AWS meets the needs of today's IT security challenges. Stephen, with his background at the FBI and his work with AWS customers in government, space exploration, research, and financial services organizations, shares an industry perspective that's unique and invaluable for today's IT decision makers. At the conclusion of this session, Stephen also provides a brief summary of the other sessions available to you in the security track.
SEC102 - Cloud Identity Management for North Carolina Department of Public Instruction
by Samuel Carter - Customer with Friday Institute / NC State University; Troy Moreland - Chief Technology Officer with Identity Automation
(Presented by Identity Automation) Identity Automation has worked with the North Carolina Department of Public Instruction since April 2013 to provide a cloud-based identity management service for all employees, students, parents, and guests of the State's K12 organizations. In this session, Identity Automation discusses how the service was used to synchronize identities with target systems, provide federation services and end-user self-service, and delegate administration functionality.
SEC201 - Access Control for the Cloud: AWS Identity and Access Management (IAM)
by Jim Scharf - Director with Amazon Web Services
Learn how AWS IAM enables you to control who can do what in your AWS environment. We discuss how IAM provides flexible access control that helps you maintain security while adapting to your evolving business needs. We'll review how to integrate AWS IAM with your existing identity directories via identity federation. We outline some of the unique challenges that make providing IAM for the cloud a little different. And throughout the presentation, we highlight recent features that make it even easier to manage the security of your workloads on the cloud.
SEC203 - Security Assurance and Governance in AWS
by Sara Duffer - Senior Manager, AWS Risk with Amazon Web Services; Chad Woolf - Director, AWS Risk & Compliance with Amazon Web Services
With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate in how they ensure a high level of security in managing AWS resources. In this session, Chad Woolf, Director of Compliance for AWS, discusses which AWS service features can be leveraged to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. Attendees also learn first-hand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.
SEC204 - Building Secure Applications and Navigating FedRAMP in the AWS GovCloud (US) Region
by CJ Moses - GM, Government Cloud Services with Amazon Web Services; Brett McMillen - Principal Sales Executive - US with Amazon Web Services; Chris Gile - Manager, Federal Compliance Programs with Amazon Web Services; Jennifer Gray - Federal Cloud Lead - HHS Enterprise Cloud Architect with US Department of Health & Human Services; Tom Soderstrom - IT Chief Technology Officer with NASA JPL
This session covers the shared responsibility model for security and compliance specific to the AWS GovCloud (US) region. This presentation highlights the enhanced security offerings of AWS GovCloud (US), such as FIPS-140 Level 2 encryption, as well as the supported compliance regimes. It also reviews how our customers can build secure applications in GovCloud using the various security features such as IAM and VPC. This presentation also offers a brief overview of FedRAMP, explains the shared responsibility model through customer use cases, and covers how customers can obtain an Authority to Operate.
SEC205 - Cybersecurity Engineers, You’re More Secure in the Cloud!
by Matthew Derenski - CyberSecurity Engineer with JPL
Security professionals are paranoid by nature, and the cloud presents unique challenges. Working with AWS, the Jet Propulsion Laboratory (JPL) Cybersecurity team at the California Institute of Technology evolved its processes, procedures, and tools to meet their advanced security, forensics, and compliance needs in the cloud. By using the AWS APIs to automate security monitoring, asset management, and forensics, JPL leveraged AWS for a highly secure environment that allowed their team to be more agile than they could be in their physical infrastructure. This presentation discusses the growing pains, challenges, and improved capabilities provided by AWS to JPL, focusing on the security of the AWS console, provisioning, asset management, IAM, EC2, and S3. This talk helps organizations think about and develop a security roadmap for their AWS deployments.
SEC206 - Navigating PCI Compliance in the Cloud
by Jesse Angell - Chief Technology Officer with PaymentSpring
People assume that implementing the Payment Card Industry Data Security Standard (PCI DSS) on AWS is more difficult than in a traditional data center, but that's simply not true. Come learn how PaymentSpring implemented a PCI DSS level 1 compliant gateway running entirely on AWS. Learn how they designed the system to make PCI DSS validation easier, what they could depend on AWS to provide, and what they still had to take care of. The session covers some of the things PaymentSpring did to significantly reduce costs and increase the overall security of the system. But most importantly, learn why it's easier to maintain compliance over time. Jesse Angell, CTO of PaymentSpring, shares his first-hand experiences with implementing PCI DSS on AWS at his organization.
SEC207 - New Launch: Turn On AWS CloudTrail to Track Access and Changes to AWS Resources in Your Account
by Sivakanth Mundru - Senior Product Manager with Amazon Web Services
Customers using AWS resources such as EC2 instances, EC2 Security Groups and RDS instances would like to track changes made to such resources and who made those changes. In this session, customers will learn about gaining visibility into user activity in their account and aggregating logs across multiple accounts into a single bucket. Customers will also learn about how they can use the user activity logs to meet the logging guidelines/requirements of different compliance standards. AWS Advanced Technology Partners will demonstrate applications for analyzing user activity within an AWS account.
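To make the "who made those changes" idea concrete: CloudTrail delivers JSON log files whose `Records` array entries carry fields such as `eventName`, `eventSource`, and `userIdentity`. The helper below is a hedged sketch of consuming such a log, and the single-record sample is fabricated purely for illustration, not a complete CloudTrail record.

```python
import json

def who_did_what(trail_json):
    """Summarize (user, action, service) tuples from a CloudTrail log body.

    eventName, eventSource, and userIdentity are standard CloudTrail
    record fields; real records carry many more fields than shown here.
    """
    records = json.loads(trail_json).get("Records", [])
    return [
        (r.get("userIdentity", {}).get("userName", "unknown"),
         r["eventName"],
         r["eventSource"])
        for r in records
    ]

# A hypothetical one-record log body for illustration only:
sample = json.dumps({"Records": [{
    "eventName": "AuthorizeSecurityGroupIngress",
    "eventSource": "ec2.amazonaws.com",
    "userIdentity": {"userName": "alice"},
}]})
```

In practice you would run a filter like this over the gzipped log objects CloudTrail writes to your S3 bucket, aggregating across accounts into one bucket as the session describes.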
SEC208 - How to Meet Strict Security & Compliance Requirements in the Cloud
by JD Sherry - Vice President, Technology and Solutions with Trend Micro, Inc.
(Presented by Trend Micro) In this session, you learn about the AWS shared security model, including considerations and best practices for deploying a secure and compliant application on AWS, and how to leverage the features and APIs provided by AWS. You also learn how to use best-in-class security and compliance solutions that have been optimized for enterprises deploying in AWS. Key topics covered are Amazon EC2 and Amazon EBS encryption, including several key management methodologies, as well as intrusion detection and prevention, anti-malware, anti-virus, integrity monitoring, firewall, and web reputation in the cloud.
SEC301 - Top 10 AWS Identity and Access Management (IAM) Best Practices
by Anders Samuelsson - Manager, Technical Program Management with Amazon Web Services
Learn about best practices on how to secure your AWS environment with AWS Identity and Access Management (IAM). We discuss how best to create access policies; how to manage security credentials (access keys, passwords, multi-factor authentication (MFA) devices, etc.); how to set up least privilege; and how to minimize the use of your root account.
SEC302 - Mastering Access Control Policies
by Jeff Wierer - Principal Product Manager with Amazon Web Services
This session will take an in-depth look at using the AWS access control policy language. We will start with the basics of understanding the policy language and how to create policies for users and groups. Next we'll take a look at how to use policy variables to simplify the management of policies. Finally, we'll cover some of the common use cases, such as granting a user secure access to an S3 bucket, allowing an IAM user to manage their own credentials and passwords, and more!
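The "manage their own credentials" use case above leans on exactly the policy-variable feature the session mentions: `${aws:username}` is substituted at evaluation time, so one policy serves every user. A minimal sketch that builds such a policy as a Python dict; the action list is an illustrative subset and the account ID is a placeholder.

```python
import json

def self_manage_policy(account_id):
    """Build an IAM policy letting each user act only on their own
    IAM user resource, via the ${aws:username} policy variable.

    The two actions shown are a minimal illustrative subset; a real
    self-service policy typically grants more credential actions.
    """
    return {
        "Version": "2012-10-17",  # this version is required for policy variables
        "Statement": [{
            "Effect": "Allow",
            "Action": ["iam:ChangePassword", "iam:GetUser"],
            "Resource": "arn:aws:iam::%s:user/${aws:username}" % account_id,
        }],
    }

# Render the policy document as it would be attached to a group:
policy_json = json.dumps(self_manage_policy("123456789012"), indent=2)
```

Attaching this one document to a group of users scales better than per-user policies, which is the management simplification the session is pointing at.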
SEC303 - Delegating Access to your AWS Environment
by Jeff Wierer - Principal Product Manager with Amazon Web Services
At times you may have a need to provide external entities access to resources within your AWS account. You may have users within your enterprise that want to access AWS resources without having to remember a new username and password. Alternatively, you may be creating a cloud-backed application that is used by millions of mobile users. Or you have multiple AWS accounts that you want to share resources across. Regardless of the scenario, AWS Identity and Access Management (IAM) provides a number of ways you can securely and flexibly provide delegated access to your AWS resources. Come learn how to best take advantage of these options in your AWS environment.
SEC304 - Encryption and key management in AWS
by Ken Beer - Principal Product Manager with Amazon Web Services; Todd Cignetti - Sr. Product Manager with Amazon Web Services; Jason Chan - Engineering Director with Netflix
This session will discuss the options available for encrypting data at rest and key management in AWS. It will focus on two primary scenarios: (1) AWS manages encryption keys on behalf of the customer to provide automated server-side encryption; (2) the customer manages their own encryption keys using partner solutions and/or AWS CloudHSM. Real-world customer examples will be presented to demonstrate adoption drivers of specific encryption technologies in AWS. Netflix's Jason Chan will provide an overview of how Netflix uses CloudHSM for secure key storage.
SEC305 - DDoS Resiliency with Amazon Web Services
by Nathan Dye - Software Development Manager with Amazon Web Services
It's a rough world out there, filled with mega botnets that threaten the availability of your web service. How do you keep your service running in the event of a 10,000x increase in traffic? Maximizing service availability under DDoS conditions requires thoughtful service architecture and, at times, fast-acting operations teams. This presentation covers best practices for DDoS-resilient services.
SEC306 - Implementing Bullet-Proof HIPAA Solutions on AWS
by Mark Welscott - Director, Information Services Planning and Architecture with Spectrum Health; Keith Brophy - CEO with Ideomed, Inc.; Gerry Miller - Founder and CTO with Cloudticity
Implementing a HIPAA solution presents challenges from day one. Not only are you saddled with seemingly insurmountable regulatory challenges, you also take on the stewardship of people's most deeply personal information. The AWS platform simplifies deployment of HIPAA applications by offering a rich set of dynamic scalability, developer services, high availability options, and strong security. Hosting a HIPAA application on the public cloud may seem pretty scary, but Ideomed solved some of this architecture's most vexing challenges by building a major health portal and deploying it on AWS. Come hear Ideomed CEO Keith Brophy and solution architect Gerry Miller talk first-hand about the challenges and solutions, including CloudHSM encryption, multi-AZ failover, dynamic scaling, and more!
SEC307 - Learn How Trend Micro Used AWS to Build their Enterprise Security Offering (Deep Security as a Service)
by Mark Nunnikhoven - Principal Engineer with Trend Micro
In this session, learn how Trend Micro built Deep Security as a service on AWS. This service offers enterprise-grade security controls for AWS deployments in the form of intrusion detection and prevention, anti-malware, a firewall, web reputation, and integrity monitoring. With over 400 internal requirements set by their in-house Information Security and IT Operations teams, the service team was challenged with building the case to deploy Deep Security as a service on AWS instead of in-house. This session walks through the reasons why the team chose AWS, the design decisions they made, and how they were able to meet or exceed their in-house requirements while deploying on AWS.
SEC308 - Auto-Scaling Web Application Security in Amazon Web Services
by Misha Govshteyn - Chief Strategy Officer & Co-Founder with Alert Logic
(Presented by Alert Logic) AWS provides multiple levels of security, from the physical server and facilities up to the host operating system and virtualization layer. This session covers strategies for ensuring your applications, network, and data are secure in a highly scalable environment. You receive practical guidance for implementing scalable web application security in the AWS cloud, including: common techniques and tools used to provide security for auto-scaling web applications, including Chef/Puppet, AWS CloudFormation, and Elastic Load Balancing; using auto-scaling groups and requirements for management APIs in automatically deploying web security infrastructure; common scaling triggers and mechanisms by which web application security infrastructure must scale to operate in lockstep with elastic web server farms; and approaches for deploying application security controls embedded directly into web applications, with considerations for PaaS cloud environments. This session is designed for an advanced audience with a strong understanding of IP networking, web application security fundamentals, and experience managing security infrastructure in a public cloud environment; however, the information covered is also of interest to intermediate attendees who set technology strategy and formulate requirements for cloud security controls.
SEC309 - Securing Your AWS Deployment: Expert Advice & Stories from the Trenches
by Syed Asghar - Vodafone Solutions Security Manager with Vodafone; Roshan Vilat - Solution Architect with Vodafone Australia; JD Sherry - Vice President, Technology and Solutions with Trend Micro, Inc.; Phil Schulz - Agile Project Manager with Vodafone
(Presented by Trend Micro) Security isn't just about stopping the "bad people". It's also about making sure your deployments are functioning as you intended. By leveraging the features and APIs provided by AWS, you can automate easy-to-implement solutions and simplify the security deployment process without impeding the flexibility and scalability of AWS. This session covers the practical considerations for implementing a comprehensive security solution for running your applications in the AWS cloud, within the context of a large AWS customer case study. Security experts from Trend Micro cover Amazon EC2 and Amazon EBS encryption, including several key management methodologies, as well as intrusion detection and prevention, anti-malware, anti-virus, integrity monitoring, firewall, and web reputation in the cloud. Trend Micro will be joined by the following speakers from Vodafone: Syed Asghar, Vodafone Solution Security Manager; Roshan Vilat, Vodafone Solution Architect; and Phil Schulz, Vodafone Agile Project Manager.
SEC401 - Integrate Social Login Into Mobile Apps
by Bob Kinney - Mobile SDE at AWS with Amazon Web Services
Streamline your mobile app signup experience with social login. We demonstrate how to use web identity federation to enable users to log into your app using their existing Facebook, Google, or Amazon accounts. Learn how to apply policies to these identities to secure access to AWS resources, such as personal files stored in Amazon S3. Finally, we show how to handle anonymous access to AWS from mobile apps when there is no user logged in.
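Under the hood, social login here is web identity federation: the app trades the provider's login token for temporary AWS credentials via the STS AssumeRoleWithWebIdentity API. A sketch of the shape of that exchange; the role ARN, session name, and duration below are placeholder assumptions, and `sts_client` stands in for a real STS client (e.g. from boto), which isn't constructed here so the sketch stays self-contained.

```python
def federation_request(role_arn, session_name, provider_token):
    """Parameters for an STS AssumeRoleWithWebIdentity call.

    provider_token is the token returned by the identity provider's
    (Facebook/Google/Amazon) login flow; the role ARN names an IAM role
    whose policy scopes what the federated user may touch.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": provider_token,
        "DurationSeconds": 3600,  # placeholder lifetime for the temp credentials
    }

def temporary_credentials(sts_client, params):
    # The response's "Credentials" carries the AccessKeyId,
    # SecretAccessKey, and SessionToken the mobile app then uses to
    # sign its AWS requests (e.g. to the user's own S3 prefix).
    resp = sts_client.assume_role_with_web_identity(**params)
    return resp["Credentials"]
```

Because the app never holds long-lived AWS keys, a leaked device compromises at most one short-lived, role-scoped session.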
SEC402 - Intrusion Detection in the Cloud
by Don Bailey - Principal Security Engineer with Amazon Web Services; Greg Roth - Sr. Security Engineer with Amazon Web Services
For businesses running entirely on AWS, your AWS account is one of your most critical assets. Just as you might run an intrusion detection system in your on-premises network, you should monitor activity in your account to detect abnormal behavior. This session walks you through leveraging unique capabilities provided within AWS that enable you to detect and respond to changes in your environment.
SPOT101 - 2nd Annual Start-up Launches with Dr. Werner Vogels
by Werner Vogels - Chief Technology Officer with Amazon Web Services
Attend this fun, fast-paced session and see five AWS-powered start-ups launch on-stage with Amazon.com CTO, Dr. Werner Vogels. You'll hear directly from these hand-selected companies and learn how they went from an idea to launch, using the AWS cloud. This exciting hour is your firsthand look at some of the hottest new start-ups, as well as a chance to get access to their new products and features. Whether you're a booming enterprise or a blossoming start-up, this is a re:Invent activity that's not to be missed.
SPOT201 - Managing the Pace of Innovation: Behind the Scenes at AWS
by Khawaja Shams - Technical Advisor with Amazon Web Services; Charlie Bell - Vice President with Amazon Web Services
AWS launched in 2006, and since then we have released more than 530 services, features, and major announcements. Every year, we outpace the previous year in launches and are continuously accelerating the pace of innovation across the organization. Ever wonder how we formulate customer-centric ideas, turn them into features and services, and get them to market quickly? This session dives deep into how an idea becomes a service at AWS and how we continue to evolve the service after release through innovation at every level. We even spill the beans on how we manage operational excellence across our services to ensure the highest possible availability. Come learn about the rapid pace of innovation at AWS, and the culture that formulates magic behind the scenes.
SPOT202 - A Venture Capitalist’s View on the Start-up Ecosystem and the Cloud
by Brad Steele - Venture Capital Business Development Manager with Amazon Web Services; Michael Skok - General Partner with North Bridge Venture Partners; Matt McIlwain - Managing Director with Madrona Venture Group; Mike Goguen - General Partner with Sequoia Capital; Asheem Chandna - Partner with Greylock Partners; Tim Guleri - Managing Director with Sierra Ventures
This panel of top venture capitalists discusses how cloud computing has democratized the entrepreneurial ecosystem, transformed business models, and fundamentally changed the venture capital funding model. Join panelists Asheem Chandna (Greylock Partners), Mike Goguen (Sequoia Capital), Tim Guleri (Sierra Ventures), Matt McIlwain (Madrona Venture Group), and Michael Skok (North Bridge Venture Partners), who share insights into their investment evaluation process, what they consider to be the key ingredients of a fundable and high-potential start-up, where opportunities lie for further innovation in the cloud, and what they see as exciting new emerging cloud applications over the next year.
SPOT203 - Fireside Chats with Amazon CTO Werner Vogels – Session One, Start-up Founders
by Werner Vogels - Chief Technology Officer with Amazon Web Services; Eliot Horowitz - CTO with MongoDB; Valentino Volonghi - Chief Architect with AdRoll; Jeff Lawson - Co-Founder & CEO with Twilio
Amazon VP and CTO Werner Vogels hosts a fireside chat with three leaders in the start-up community. Werner will sit down with Eliot Horowitz, CTO of MongoDB, to discuss how MongoDB sees the cloud evolving and their role in the AWS ecosystem; Jeff Lawson, CEO of Twilio, to discuss how Twilio uses, manages, and measures APIs and how the cloud helps; and Valentino Volonghi, Chief Architect at AdRoll, to discuss the role of DynamoDB in building an advertising business that bids on more than 10 billion impressions per day.
SPOT204 - Fireside Chats with Amazon CTO Werner Vogels – Session Two, Start-up Influencers
by Werner Vogels - Chief Technology Officer with Amazon Web Services; David Cohen with Techstars; Albert Wenger with Union Square Ventures
In part two of his fireside chat series, Amazon VP and CTO Werner Vogels hosts three influencers in the startup community. Werner will sit down with David Cohen, Founder and CEO of TechStars, to discuss the impact of accelerators and the cloud on the startup ecosystem, and the role of the cloud both as infrastructure platform and as delivery channel for new products; Ash Fontana, head of Fundraising Products at AngelList, to discuss the impact that new sources of equity financing have on startups; and Albert Wenger, Managing Partner at Union Square Ventures, to discuss how the cloud should evolve to make it even easier for startups to be totally product focused.
SPOT205 - Why Scale Matters and How the Cloud Really is Different
by James Hamilton - Vice President & Distinguished Engineer with Amazon Web Services
A behind-the-scenes look at key aspects of AWS infrastructure deployments. We cover some of the true differences between cloud infrastructure design and conventional enterprise infrastructure deployment, and why the cloud fundamentally changes application deployment speed and economics while providing more and better tools for delivering highly reliable applications. Few companies can afford to have a datacenter in every region in which they serve customers or have employees. Even fewer can afford to have multiple datacenters in each region where they have a presence. Even fewer can afford to invest in custom optimized network, server, storage, monitoring, cooling, and power distribution systems and software. We'll look more closely at these systems, how they work, how they are scaled, and the advantages they bring to customers.
SPOT401 - Leading the NoSQL Revolution: Under the Covers of Distributed Systems at Scale
by Swami Sivasubramanian - General Manager with Amazon Web Services; Khawaja Shams - Technical Advisor with Amazon Web Services
The Dynamo paper started a revolution in distributed systems. The contributions from this paper are still impacting the design and practices of some of the world's largest distributed systems, including those at Amazon.com and beyond. Building distributed systems is hard, but our goal in this session is to simplify the complexity of this topic to empower the hacker in you! Have you been bitten by the eventual consistency bug lately? We show you how to tame eventual consistency and make it a great scaling asset. As you scale up, you must be ready to deal with node, rack, and data center failure. We share insights on how to limit the blast radius of the individual components of your system, battle-tested techniques for simulating failures (network partitions, data center failure), and how we used core distributed systems fundamentals to build highly scalable, performant, durable, and resilient systems. Come watch us uncover the secret sauce behind Amazon DynamoDB, Amazon SQS, Amazon SNS, and the fundamental tenets that define them as Internet-scale services. To turn this session into a hacker's dream, we go over design and implementation practices you can follow to build an application with virtually limitless scalability on AWS within an hour. We even share insights and secret tips on how to make the most out of one of the services released during the morning keynote.
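The taming of eventual consistency this abstract promises rests on the classic Dynamo quorum rule: with N replicas, a read is guaranteed to intersect the latest write exactly when the read quorum R and write quorum W overlap, i.e. R + W > N. A minimal sketch of that rule (the function name is my own illustration, not from the session):

```python
def quorums_overlap(n: int, r: int, w: int) -> bool:
    """Dynamo-style quorum rule: with n replicas, a read quorum of r
    and a write quorum of w, every read set intersects the most
    recent write set exactly when r + w > n."""
    return r + w > n

# With 3 replicas, R=2/W=2 gives strong reads; R=1/W=1 does not.
```

Tuning R and W trades read latency against write latency while keeping the overlap guarantee, which is the "scaling asset" framing the abstract uses.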
STG101 - Understanding AWS Storage Options
by Joe Lyons - Manager, Global Storage Business Development with Amazon Web Services; Tamara Monson - VP of Technology with iMemories
With AWS, you can choose the right storage service for the right use case. Given the myriad of choices, from object storage to block storage, this session will profile details and examples of some of the choices available to you, with details on real world deployments from customers who are using Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Glacier, and AWS Storage Gateway. In addition, this session will cover all the new AWS storage features introduced in the last 12 months.
STG201 - Backup to the Cloud
by Travis Greenstreet - Senior Cloud Architect with 2nd Watch, Inc
In this session, you will learn how to leverage the AWS cloud to build a robust backup environment. 2nd Watch will discuss using Amazon Glacier for archives, including API calls, 3rd party utilities, and leveraging the Amazon S3 lifecycle policy. Learn how AWS Storage Gateway allows you to convert your physical drives to Amazon Elastic Block Store (EBS) volumes for rapid cloud failover. We will also discuss resolutions to common obstacles, like slow uploads and targeted file retrieval.
STG202 - Simplify Your Operations and Reduce Storage TCO using AWS Storage Gateway, Seamlessly Leveraging Amazon S3 and Amazon Glacier
by Ajith Kuttai Venkatraman - Product Manager with Amazon Web Services
AWS Storage Gateway allows you to reduce capital costs and simplify operations compared to traditional on-premises storage solutions. This session covers the set of use cases you can address with AWS Storage Gateway and introduces the newly launched Gateway-VTL. Gateway-VTL provides a cost-effective, scalable, and durable virtual tape infrastructure that eliminates the challenges associated with owning and operating an on-premises physical tape infrastructure.
STG203 - How Johnson & Johnson Pharma R&D Enables Science with Amazon S3 and Amazon Glacier
by Michael Miller with Pfizer; Jason Stowe - CEO with Cycle Computing; Thomas Messina - Senior IT Manager with Johnson & Johnson
Many organizations face a cost/latency tradeoff with data access: Amazon Glacier's lower-cost "cold storage" or more expensive, readily available Amazon S3 storage. Learn how Johnson & Johnson avoids this tradeoff by centrally orchestrating movement and access of hundreds of terabytes of data to and from S3 and Glacier as applications demand it, without re-coding or disrupting their application's workflow.
STG301 - AWS Storage Tiers for Enterprise Workloads - Best Practices
by Tom Laszewski - Strategic Solution Architect with Amazon Web Services; Chris Gattoni - IT Director with RISO; Thiru Sadagopan - VP, Infrastructure Managed Services with Apps Associates LLC
Enterprise environments utilize many different kinds of storage technologies from different vendors to fulfill various needs in their IT landscape. These are often very expensive, and procurement cycles are quite lengthy. They also need specialized expertise in each vendor's storage technologies to configure them and integrate them into the ecosystem, again resulting in prolonged project cycles and added cost. AWS provides end-to-end storage solutions that fulfill all these enterprise needs and that are easily manageable, extremely cost effective, fully integrated, and totally on demand. These storage technologies include Elastic Block Store (EBS) for instance-attached block storage, Amazon Simple Storage Service (Amazon S3) for object (file) storage, and Amazon Glacier for archival. An enterprise database environment is an excellent example of a system that could use all these storage technologies to implement an end-to-end solution: striped PIOPS volumes for data files, standard EBS volumes for log files, S3 for database backup using Oracle Secure Backup, and Glacier for long-term archival from S3 based on time-lapse rules. In this session, we will explore the best practices for utilizing AWS storage tiers for enterprise workloads.
STG302 - Maximizing EC2 and Elastic Block Store Disk Performance
by Miles Ward - Sr. Manager, Solutions Architecture with Amazon Web Services; Vinay Kumar with Amazon Web Services; Doug Grismore - Director, Operations and Site Reliability Engineering with Twitter
Learn tips and techniques that will improve the performance of your applications and databases running on Amazon EC2 instance storage and/or Amazon Elastic Block Store (EBS). This advanced session discusses when to use HI1, HS1, and Amazon EBS. It shares an "under the hood" view on how to tune the performance of Elastic Block Store. The presenter(s) will share best practices on running workloads on Amazon EBS, such as relational databases (MySQL, Oracle, SQL Server, PostgreSQL) and NoSQL data stores such as MongoDB and Riak.
STG303 - Running Microsoft and Oracle Stacks on Elastic Block Store
by Abdul Sait - Principal Solutions Architect with Amazon Web Services; Ulf Schoo - EC2 Windows Product Management with Amazon Web Services; Jafar Shameem - Business Development Manager with Amazon Web Services
Run your enterprise applications on Amazon Elastic Block Store (EBS). This session will explain how you can leverage the block storage platform (Amazon EBS) as you move your Microsoft (SQL Server, Exchange, SharePoint) and Oracle (Databases, E-business Suite, Business Intelligence) workloads onto Amazon Web Services (AWS). The session will cover high availability, performance, and backup/restore best practices.
STG304 - Maximizing Amazon S3 Performance
by Craig Carl - Partner Solution Architect with Amazon Web Services
This advanced session targets Amazon Simple Storage Service (Amazon S3) technical users. We will discuss the impact of object naming conventions and parallelism on S3 performance; provide real-world examples and implementation best practices for naming objects and parallelizing both PUTs and GETs; cover multipart uploads and byte-range downloads; and introduce GNU parallel as a quick and easy way to improve Amazon S3 performance.
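The naming advice this abstract alludes to is avoiding sequential key prefixes (such as date-based log names), since S3 partitions its key index by prefix; prepending a few hash characters spreads hot keys across partitions. A sketch of the idea under those assumptions (the helper name and prefix length are my own illustration, not from the session):

```python
import hashlib

def partition_friendly_key(key: str, prefix_len: int = 4) -> str:
    """Prepend a short, deterministic hash of the key so that
    lexicographically adjacent keys land on different S3 index
    partitions instead of hot-spotting a single one."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return digest[:prefix_len] + "/" + key
```

Because the prefix is derived from the key itself, the mapping is deterministic: readers can recompute the full object name without a lookup table.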
STG305 - Disaster Recovery Site on AWS - Minimal Cost Maximum Efficiency
by Vikram Garlapati - Manager Solution Architect with Amazon; Abdul Sait - Principal Solutions Architect with Amazon Web Services; Kamal Arora - Solution Architect with Amazon Web Services
Implementation of a disaster recovery (DR) site is crucial for the business continuity of any enterprise. Due to the fundamental nature of features like elasticity, scalability, and geographic distribution, DR implementation on AWS can be done at 10-50% of the conventional cost. In this session, we do a deep dive into proven DR architectures on AWS and the best practices, tools and techniques to get the most out of them.
STG306 - Dropbox presents Cloud Storage for App Developers
by Steve Marx - Developer Advocate with Dropbox
It's more important than ever to create apps that provide an amazing user experience across multiple platforms, devices, and even offline. Even though saving data in cloud storage is becoming ubiquitous, it's still difficult for developers to manage user authentication, syncing, and caching. In this session, you'll learn how Dropbox addresses these challenges, how they leverage the AWS platform, and what tools they provide to developers on the Dropbox platform.
STG401 - NFS and CIFS Options for AWS
by Craig Carl - Partner Solution Architect with Amazon Web Services; Noam Shendar - VP Business Development with Zadara Storage, Inc.
In this session, you learn about the use cases for Network File Systems (NFS) and Common Internet File Systems (CIFS), and when NFS and CIFS are appropriate on AWS. We cover the use cases for ephemeral storage, Amazon EBS, Amazon EBS P-IOPS, and Amazon S3 as the persistent stores for NFS and CIFS shares. We share AWS CloudFormation templates that build multiple solutions (a single instance with Amazon EBS, clustered instances with Amazon EBS, and a Gluster cluster) as well as introduce AWS partner solutions.
STG402 - Advanced EBS Snapshot Management
by Craig Carl - Partner Solution Architect with Amazon Web Services
Amazon EBS snapshots are an easy-to-use feature for backing up the data on your Amazon EBS volumes. This session covers file system selection (XFS, ext*, etc.), quiescing of the file system, tagging of snapshots, and lifecycle management of snapshots. In this session, we introduce two new OSS tools. One tool manages arrays of snapshots using tagging, making it easier to snapshot and recover a RAID array of Amazon EBS volumes; another tool manages snapshots of root volumes across an AWS account, automatically snapshotting root volumes and applying lifecycle management to the resulting snapshots.
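The lifecycle-management half of the abstract reduces to a retention policy over snapshot creation times. A minimal sketch of that core decision (the function name and the 7-day default are my own illustration; a real tool would then call the EC2 API to delete):

```python
from datetime import datetime, timedelta

def expired_snapshots(snapshots, retain_days=7, now=None):
    """Given (snapshot_id, created_at) pairs, return the ids whose
    age exceeds the retention window.  Deletion itself is left to
    the caller (e.g. an EC2 DeleteSnapshot call per id)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retain_days)
    return [sid for sid, created in snapshots if created < cutoff]
```

Tag-driven tools like the ones the session describes typically read `retain_days` from a tag on the volume, so each volume can carry its own policy.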
SVC101 - 7 Use Cases in 7 Minutes Each : The Power of Workflows and Automation
by Sunjay Pandey - Senior Product Manager with Amazon Web Services; Ryan Ayres - VP of Technology with GolfTEC; Andrew Ryan - Chief Executive Officer with MemberSuite, Inc.; Joe Pollard - Software Engineer with Bazaarvoice; Jeff Winner - CTO with CardSpring
The Amazon Simple Workflow (Amazon SWF) service is a building block for highly scalable applications. Where Amazon EC2 helps developers scale compute and Amazon S3 helps developers scale storage, Amazon SWF helps developers scale their business logic. Customers use Amazon SWF to coordinate, operate, and audit work across multiple machines, whether in the cloud or in their own data centers. In this power-packed session, we demonstrate the power of workflows through 7 customer stories and 7 use cases, in 7 minutes each. We show how you can use Amazon SWF for curating social media streams, processing user-generated video, managing CRM workflows, and more. We show how customers are using Amazon SWF to automate virtually any script, library, job, or workflow and scale their application pipeline cost-effectively.
SVC102 - Fast, Simple, Familiar: Customer Success With Preconfigured Business Software from AWS Marketplace
by Robert Peterson - Sr. Product Manager with Amazon Web Services; Miki Mullor - VP Product with Jobaline
Whether you are deploying your preferred business application, networking software, or developer tool, AWS Marketplace provides a wide variety of well-known, preconfigured software (1,000+ listings across 24 categories) that you can deploy with 1-Click and pay for only what you use, by the hour or month. In this session, you will learn how customers quickly discover, deploy, and use complex software such as MongoDB for NoSQL databases, Citrix NetScaler for application traffic management, Parallels Plesk Panel for web server management, NITRC for medical data analysis, Wowza media server for compressed streaming media, and more.
SVC103 - New Launch: Getting Started with Amazon AppStream
by Jerry Heinz with Amazon Web Services
Amazon AppStream is a new service that provides developers with the ability to stream resource-intensive applications, such as 3D games or interactive HD applications, from the cloud. With Amazon AppStream, mobile and PC developers have the flexibility to stream their entire application or only the parts of their application that need additional cloud resources. You will learn how to build, upload, and deploy your first application, how to create clients for PC and mobile devices, and how to optimize your application for Amazon AppStream.
SVC201 - Automate Your Big Data Workflows
by Jinesh Varia - Technology Evangelist with Amazon Web Services; Francisco Roque - CTO with Unsilo; Vijay Ramesh - Software Engineer, Data/science with Change.org; Fred Benenson - Data Engineer with Kickstarter; Timothy James - Software Manager, Architect - Data/science with Change.org
As troves of data grow exponentially, the number of analytical jobs that process the data also grows rapidly. When you have large teams running hundreds of analytical jobs, coordinating and scheduling those jobs becomes crucial. Using Amazon Simple Workflow Service (Amazon SWF) and AWS Data Pipeline, you can create automated, repeatable, schedulable processes that reduce or even eliminate custom scripting and help you efficiently run your Amazon Elastic MapReduce (Amazon EMR) or Amazon Redshift clusters. In this session, we show how you can automate your big data workflows. Learn best practices from customers like Change.org, Kickstarter, and UnSilo on how they use AWS to gain business insights from their data in a repeatable and reliable fashion.
SVC202 - Asgard, Aminator, Simian Army and more: How Netflix’s Proven Tools Can Help Accelerate Your Start-up
by Adrian Cockcroft - Director of Architecture, Cloud Systems with Netflix; Ruslan Meshenberg - Director, Platform Engineering with Netflix
You're on the verge of a new startup and you need to build a world-class, high-scale web application on AWS so it can handle millions of users. How do you build it quickly without having to reinvent and re-implement the best practices of large, successful Internet companies? NetflixOSS is your answer. In this session, we'll cover how an emerging startup can leverage the different open source tools that Netflix has developed and uses every day in production, ranging from baking and deploying applications (Asgard, Aminator), to hardening resiliency to failures (Hystrix, Simian Army, Zuul), to making applications highly distributed and load balanced (Eureka, Ribbon, Archaius), to managing your AWS resources efficiently and effectively (Edda, Ice). You'll learn how to get started using these tools and learn best practices from the engineers who actually created them, so that, like Netflix, you too can unleash the power of AWS and scale your application as you grow.
SVC203 - Cache is King – Scale Your Application while Improving Performance and Lowering Costs
by Chris Munns - Solutions Architect with Amazon Web Services; Nihar Bihani - Sr. Product Manager with Amazon Web Services; Stephen Evans - Head of Digital Technology with The Toronto Star
Scaling your application as you grow should not mean slow to load and expensive to run. Learn how you can use different AWS building blocks such as Amazon ElastiCache and Amazon CloudFront to "cache everything possible" and increase the performance of your application by caching your frequently accessed content. This means caching at different layers of the stack: from HTML pages to long-running database queries and search results, from static media content to application objects. And how can caching more actually cost less? Attend this session to find out!
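At the application-object layer, "cache everything possible" amounts to a loader-backed cache with expiry: serve a fresh copy when one exists, otherwise run the expensive loader once and remember the result. A tiny in-process sketch of that shape (in a real deployment ElastiCache would hold this state; the class and method names are mine):

```python
import time

class TTLCache:
    """Minimal loader-backed cache with per-entry expiry."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]          # fresh hit: skip the expensive loader
        value = loader()             # miss or expired: recompute and cache
        self._store[key] = (value, time.monotonic())
        return value
```

The cost argument in the abstract follows directly: every hit is a database query or page render you did not pay for.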
SVC204 - Experience Transparency Between Your Private Data Center and AWS Cloud Infrastructure
by Charlie Cano - Sr Solution Architect with F5 Networks
(Presented by F5) Having a flexible architecture that gives you total application control while maintaining performance and availability is critical for an enterprise. Application architects and system administrators can take control and successfully deliver applications end-to-end with F5 BIG-IP for Amazon Web Services (AWS). In this session, learn how the F5 BIG-IP product suite for AWS ensures that your applications are fast, secure, and available through a combination of best practices and demos. See how to: use the F5 intelligent services framework to seamlessly expand application workloads to the cloud; rapidly provision and deploy BIG-IP instances to adapt to changing application workloads and ensure that intermittent application spikes are accommodated; migrate application workloads from the data center to the AWS cloud and ensure service levels that meet or exceed on-premises deployments; and provide application availability and disaster recovery by using the BIG-IP intelligent services framework in the AWS cloud.
SVC206 - Speed and Reliability at Any Scale – Combining Amazon SQS and Database Services
by Jonathan Desrocher - Solutions Architect with Amazon Web Services; Colin Vipurs - Senior Technology Evangelist with Shazam Entertainment Ltd
Amazon Simple Queue Service (Amazon SQS) makes it easy and inexpensive to enhance the scalability and reliability of your cloud application. In this session, we demonstrate design patterns for using Amazon SQS in conjunction with Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Relational Database Service, and Amazon Redshift. Shazam will share their experience of combining Amazon SQS with Amazon DynamoDB to support a Super Bowl advertising campaign.
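A common shape of the SQS-plus-storage combinations described above is the payload-pointer pattern: messages that fit under the queue's size limit travel inline, while larger payloads go to a blob store (S3 or DynamoDB in the session's version) and only a reference is enqueued. A sketch with in-memory stand-ins for the queue and the store (all names, and the 256 KB limit, are my own illustration):

```python
import json
import uuid

MESSAGE_LIMIT = 256 * 1024  # assumed per-message size cap, in bytes

def enqueue(payload: str, queue: list, blob_store: dict) -> None:
    """Enqueue small payloads inline; park large ones in the blob
    store and enqueue just a pointer to them."""
    if len(payload.encode("utf-8")) <= MESSAGE_LIMIT:
        queue.append(json.dumps({"inline": payload}))
    else:
        key = str(uuid.uuid4())
        blob_store[key] = payload
        queue.append(json.dumps({"blob_key": key}))

def dequeue(queue: list, blob_store: dict) -> str:
    """Resolve a message back to its payload, following the
    pointer when one was used."""
    msg = json.loads(queue.pop(0))
    return msg["inline"] if "inline" in msg else blob_store[msg["blob_key"]]
```

The pattern keeps the queue cheap and fast for the common case while still handling oversized messages; the consumer is responsible for cleaning up the blob once processing succeeds.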
SVC207 - Application Darwinism - Why Most Enterprise Apps Will Evolve to the Cloud
by Brad Schick - CTO with Skytap Inc.
(Presented by Skytap) Complex multi-tier enterprise applications that have been under development for decades assume reliable hardware and typically have dependencies on underlying operating systems, hardware configurations, and network topologies. The boundary between one application or service and another is often fuzzy, with many interdependencies. These traits make some enterprise applications difficult to refactor and move to a public cloud. Even the teams that manage these applications can be unfamiliar with cloud terminology and concepts. In this session for enterprise IT architects and developers, Brad Schick, CTO of Skytap, and Skytap customers Fulcrum, DataXu, and F5 will share their insights into why the evolution of enterprise applications will lead to hybrid applications that opportunistically take advantage of cloud-based services. Brad will then demonstrate Skytap Cloud with Amazon Web Services and discuss how enterprises can easily achieve this integration today for application development and testing.
SVC301 - From 0 to 100M+ Emails Per Day: Story of Scaling Your Sending Email with Amazon SES
by Anderson Imes - Software Development Manager, Email Marketing Platform with Amazon.com; Abhishek Mishra - Software Development Manager, Simple Email Service with Amazon Web Services; Dave Young - CTO, Co-Founder with Blue Shell Games
It's difficult to imagine an app without email. When you integrate Amazon Simple Email Service (Amazon SES), you not only increase your productivity and add richness to your application, but you also enhance your ability to scale your application to new heights. But how do you scale the service as you grow your application? In this session, we tell a story that weaves in all the best practices of sending high-volume email, such as basic inbox placement, a high-throughput pipeline to email ISPs, message signing, and retries in the face of temporary failures. We'll also explain how to scale up your application to best take advantage of what Amazon SES can do for your business.
SVC302 - Enrich Search User Experience For Different Parts of Your Application Using Amazon CloudSearch
by Jon Handler - CloudSearch Solution Architect with Amazon Web Services; Peter Simpkin - Solution Architect with Elsevier
Today's applications work across many different data assets - documents stored in Amazon S3, metadata stored in NoSQL data stores, catalogs and orders stored in relational database systems, raw files in filesystems, etc. Building a great search experience across all these disparate datasets and contexts can be daunting. Amazon CloudSearch provides simple, low-cost search, enabling your users to find the information they are looking for. In this session, we will show you how to integrate search with your application, including key areas such as data preparation, domain creation and configuration, data upload, integration of search UI, search performance and relevance tuning. We will cover search applications that are deployed for both desktop and mobile devices.
TLS301 - Accelerate Your Java Development on AWS
by Jason Fulghum - Software Development Engineer with Amazon Web Services
The AWS SDK for Java is an essential tool for any Java developer building applications that leverage AWS. See how easy the SDK is to use and learn new tips and tricks that will help you get more out of AWS. During this session, we develop a sample application, see how to use various SDK APIs, and take advantage of other tools, like the AWS Toolkit for Eclipse, to speed your development.
TLS302 - Building Scalable Windows and .NET Apps on AWS
by Norm Johanson - Software Development Engineer with Amazon Web Services; Jim Flanagan - Software Development Engineer with Amazon Web Services
The AWS SDK for .NET and the AWS Toolkit for Visual Studio help developers build scalable apps on AWS services. Learn how to use these tools to define app data in Amazon DynamoDB and access it through a simple object persistence framework. We demonstrate deploying a web app to a customized, auto-scaled AWS Elastic Beanstalk environment. Finally, using the new version of the AWS SDK for .NET, you learn how to access your AWS data from apps targeting the Windows Store and Windows Phone platforms.
TLS303 - Writing JavaScript Applications with the AWS SDK
by Loren Segal - Software Development Engineer - SDKs and Tools with Amazon Web Services; Trevor Rowe - Software Development Engineer - SDK and Tools with Amazon Web Services
We give a guided tour of using the AWS SDK for JavaScript to create powerful web applications. Learn the best practices for configuration, credential management, streaming requests, as well as how to use some of the higher level features in the SDK. We also show a live demonstration of using AWS services with the SDK to build a real-world JavaScript application.
TLS304 - Becoming a Command Line Expert with the AWS CLI
by James Saryerwinnie - Software Development Engineer with Amazon Web Services
The AWS CLI is a command line interface that allows you to control the full set of AWS services. You learn how to perform quick ad hoc service operations, and how to create rich scripts to automate your ongoing maintenance. We also share tips on getting the most out of the CLI through built-in features and complementary tools.
TLS305 - Diving Into the New AWS SDK for Ruby
by Loren Segal - Software Development Engineer - SDKs and Tools with Amazon Web Services; Trevor Rowe - Software Development Engineer - SDK and Tools with Amazon Web Services
Ruby developers: attend this session and learn about the next major version of the AWS SDK for Ruby, the aws-core gem. We dive deep into the SDK, covering topics such as waiters, request enumeration and pagination, resource modeling, version locking, and more. Learn how to take advantage of these features as we construct a sample Ruby application using the AWS SDK.
TLS306 - Mastering the AWS SDK for PHP
by Jeremy Lindblom - Software Development Engineer with Amazon Web Services; Michael Dowling - Software Development Engineer with Amazon Web Services
The AWS SDK for PHP allows PHP developers to interact with AWS services in a fluid and familiar way. Learn how to use convenience features, like iterators and waiters, as well as high-level abstractions, such as the Amazon Simple Storage Service (Amazon S3) stream wrapper. We also demonstrate the powerful capabilities inherited from the underlying Guzzle library.

Inspired by Rodney Haywood's index last year, I decided to do the same for 2013.  I borrowed from his HTML formatting.  The code is in this github project, which is a mix of Chrome dev tools web scraping, the Google Data API (YouTube), JSlideShare (with updates required), and JMustache.  I wrote the code in Groovy (~150 lines of code) as I wanted quick prototyping and wanted a smaller project to play around with Groovy.  The code took two evenings of hacking.  If you see any missing information, feel free to issue a pull request to fix it.