The overall concept
Today at DockerCon, Jerry Cuomo went over the concept of borderless cloud and how it relates to IBM's strategy. He talked about how Docker helps erase the lines between various clouds by bringing openness, regardless of vendor, deployment option, and location. He said we need to focus on the following things:
Especially in the age of devops and continuous delivery, lack of speed is a killer. Even worse, and frankly unforgivable, manual steps that introduce error are no longer acceptable. Docker helps with this through layered file systems that allow just the updates to be pushed and loaded. With its process model, containers start as fast as you'd expect your applications to start. Finally, Docker's transparent (all the way to source) description model for images guarantees you run what you coded, not some mismatch between dev and ops.
Optimized means not only price/performance but also optimization of the location of workloads. In the price/performance area, IBM technologies (like our IBM Java read-only memory class sharing) can provide much faster application startup and lower memory use when similar applications run on a single node. Also, getting the hypervisor out of the way can help I/O performance significantly (still a large challenge in VM-based approaches), which helps data-oriented applications like Hadoop and databases.
Openness of cloud is very important to IBM, just like it was for Java and Unix/Linux. Docker can provide the same write once, run anywhere experience for cloud workloads. It is interesting how this openness, combined with being fast and small, also allows for advances in devops not possible before with VMs. It is now possible to run production-like workload configurations on premise (and on developers' laptops) in almost exactly the same way as they are deployed in production, due to the reduction in overhead versus running a full virtual machine.
Moving fast isn't enough. You have to move fast with responsibility. Specifically, you need to make sure you don't ignore security, high availability, and operational visibility while moving so fast. With the automated, repeatable deployment that Docker (and related scheduling systems) makes possible, combined with micro-service application design, high availability and automatic recovery become easier. Also, enterprise deployments of Docker will start to add security and operational visibility capabilities.
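To make the layered, source-transparent image model above concrete, here is a minimal sketch of a Dockerfile-driven build. The image name, base image, and file paths are hypothetical illustrations, not the setup used in the demo:

```shell
# Write a Dockerfile: every step of the image is described in source,
# so what runs in production is exactly what was built from this file.
cat > Dockerfile <<'EOF'
# Base OS layer, shared across images on the host
FROM ubuntu:14.04

# Runtime layer; cached between builds because it rarely changes
RUN apt-get update && apt-get install -y openjdk-7-jre

# Application layer: typically the only layer that changes per update,
# so only this small layer needs to be pushed and loaded
COPY app.war /opt/app/app.war

CMD ["java", "-jar", "/opt/app/app.war"]
EOF

# Build the image; unchanged layers are reused from the local cache
docker build -t myorg/sample-app .
```

Because only the changed layers move over the network, updates stay small and fast, and the Dockerfile itself serves as the auditable record of what the image contains.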
The demo - SoftLayer cloud running Docker
After Jerry covered these areas, I followed up with a live demo.
On Monday, I showed how the technology we've been building to host IBM public cloud services, the Cloud Services Fabric (CSF), works on top of Docker. We showed how the kernel of the CSF, based in part on NetflixOSS and powered by IBM technologies, is fully open source and runs easily on a developer's laptop. I talked about how this can even allow developers to Chaos Gorilla test their micro-service implementations.
I showed how building the sample application and its microservice was extremely fast. Building an update to the war file took more time than containerizing the same war for deployment; both were done in seconds. While we haven't done it yet, I could imagine eventually optimizing this so that container generation happens as part of an IDE auto-compile.
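The seconds-long containerization comes from Docker's layer cache. A hypothetical sketch of the rebuild step, assuming a Dockerfile whose final step copies the application war into the image:

```shell
# After rebuilding app.war, rebuild the image. The base OS and runtime
# layers are reported as "Using cache"; only the small layer containing
# the updated war is rebuilt, which is why this completes in seconds.
docker build -t myorg/sample-app:v2 .
```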
In the demo today, I followed this up by showcasing how we could take the exact same environment and marry it with the IBM SoftLayer public cloud. I took the exact same sample application container image and, instead of loading it locally, pushed it through a Docker registry to the SoftLayer cloud. The power of this portability (and openness) is very valuable to our teams, as it allows local testing to mirror production deployment much more closely.
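The registry flow looks roughly like the following sketch; the registry host and image names are hypothetical placeholders, not the ones used in the demo:

```shell
# On the build machine: tag the locally built image for a private
# registry, then push it there
docker tag myorg/sample-app registry.example.com:5000/myorg/sample-app
docker push registry.example.com:5000/myorg/sample-app

# On a SoftLayer host: pull the identical image and run it,
# byte-for-byte the same artifact that was tested locally
docker pull registry.example.com:5000/myorg/sample-app
docker run -d registry.example.com:5000/myorg/sample-app
```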
Finally, I demonstrated how adding SoftLayer to Docker added to the operational excellence. Specifically, I showed how, once we told Docker to use a non-default bridge (one assigned a SoftLayer portable subnet attached to the host's private interface), Docker could assign IPs out of a routable subnet within the SoftLayer network. This networking configuration means that the containers spun up would work transparently in the same networks as SoftLayer bare metal and virtual machine instances across the global SoftLayer cloud. Also, advanced SoftLayer networking features such as load balancers and firewalls would work just as well with the containers. I also talked about how we deployed this across multiple hosts in multiple datacenters (availability zones), further adding to the high availability options for deployment. To prove this, I unleashed targeted chaos-army-like testing: I emulated the failure of a container (by doing a docker rm -f) and showed how the overall CSF system would auto recover by replacing it with a new container.
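A rough sketch of the two pieces above, with a hypothetical bridge name and container name (the bridge is assumed to already exist, attached to the host's private interface and assigned a SoftLayer portable subnet):

```shell
# Start the Docker daemon (circa 2014) pointing at the non-default
# bridge; containers then receive IPs from the bridge's routable subnet
docker -d -b br0

# Emulate a container failure, chaos-testing style; the CSF's
# auto-recovery then replaces it with a fresh container
docker rm -f sample-app-container
```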
You can see the slides from Jerry's talk on slideshare.
Direct Link (HD Version)