I took Friday to play with the Kubernetes project open sourced by Google at DockerCon.
I was able to get a basic multi-tier Acme Air (NetflixOSS enabled) application working, reusing (for the most part) the containers we built for local Docker (laptop) runs from the IBM open sourced Docker port. By basic, I mean the front end Acme Air web app, the back end Acme Air authentication micro service, a Cassandra node plus the Acme Air data loader, and the NetflixOSS Eureka service discovery server. I ran a single instance of each, but I believe I could have fairly easily scaled up the instances of the Acme Air application itself.
I pushed the containers to Dockerhub (as Kubernetes by default pulls all container images from there). This was pretty easy using these steps (a rough command sketch follows the list):
1. Download and build locally the IBM Acme Air NetflixOSS Docker containers
2. Log in to Dockerhub (needed before I could push) via 'docker login'
3. Tag the images - docker tag [imageid] aspyker/acmeair-containername
4. Push the containers to Dockerhub - docker push aspyker/acmeair-containername
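Concretely, the tag and push loop looked roughly like this; the local and Dockerhub image names below are just my shorthand for the tiers mentioned above, not exact transcripts:

docker login
# tag and push each Acme Air tier (the name list here is illustrative)
for name in webapp authservice cassandra loader eureka; do
  docker tag acmeair/$name aspyker/acmeair-$name
  docker push aspyker/acmeair-$name
done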
I started each container as a single instance via the cloudcfg script:
cluster/cloudcfg.sh -p 8080:80 run aspyker/acmeair-webapp 1 webapp
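The other tiers went up the same way, one instance each; something along these lines (the image and replica-group names are my guesses at what I used, not exact transcripts):

cluster/cloudcfg.sh run aspyker/acmeair-eureka 1 eureka
cluster/cloudcfg.sh run aspyker/acmeair-cassandra 1 cassandra
cluster/cloudcfg.sh run aspyker/acmeair-authservice 1 authservice
cluster/cloudcfg.sh run aspyker/acmeair-loader 1 loader

Only the web app needed the -p 8080:80 mapping, since it is the only tier that has to be reachable from outside the cluster.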
I started with "using it wrong" (TM, Andrew 2014) with regards to networking. For example, when Cassandra starts, it needs to know about what seed and peer nodes exist and Cassandra wants to know what IP addresses these other nodes are at. For a single Cassandra node, that means I needed to update the seed list to the IP address of the Cassandra container's config file to itself. Given our containers already listen on ssh and run supervisord to run the container function (Cassandra in this case), I was able to login to the container, stop Cassandra, update the config file with the container's IP address (obtained via docker inspect [containerid] | grep ddr), and restart Cassandra. Similarly I needed to update links between containers (for how the application/micro service found the Cassandra container as well how the application/micro service found Eureka). I could ssh into those containers and update routing information that exists in NetflixOSS Archaius config files inside of the applications.
This didn't work perfectly, as the routing in NetflixOSS, powered by Ribbon and Eureka, uses hostnames by default. The hostnames currently assigned to containers in Kubernetes are not resolvable by all other containers, so when the web app tried to route to the auth service based on the hostname registered and discovered in Eureka, it failed with an UnknownHostException. We hit this in our SoftLayer runs as well and had patched the Eureka client to never register the hostname. I had asked about this previously on the Eureka mailing list and learned it is something Netflix fixes internally in Ribbon. I ended up writing a patch for Ribbon to just use IP addresses and applied it to the ribbon-eureka module in Acme Air.
At this point, I could map the front end web app instance to the Kubernetes minion host via the cloudcfg run -p 8080:80 port specification and access Acme Air from the Internet in my browser.
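From outside the cluster, that access is just the minion's external IP plus the mapped port; something like the following (leaving the IP as a placeholder since GCE assigned it):

curl http://<minion external ip>:8080/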
My next steps are to look at running replicationControllers around the various tiers of the application, as well as making them services so I can use the Kubernetes built-in service location and routing. I can see how to do this from the guestbook example. Running that example showed me that if I "bake" into my images the idea of a well-known port for each service, I can locate that port via environment variables. Kubernetes will ensure that the port routes traffic to the right service implementations on each Kubernetes host via a load balancer. That means I could route all Eureka traffic to port 10000, all web app traffic to port 10001, all Cassandra traffic to port 10002, and all auth micro service traffic to port 10003, for example. This approach sounds pretty similar to an approach used at Netflix with Zuul.
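As a sketch of what that would look like from inside one of my application containers, following the convention the guestbook example uses (the exact variable names depend on the service names I end up choosing, so these are illustrative only):

# each Kubernetes service exposes its well-known port to other containers via environment variables
echo $EUREKA_SERVICE_PORT        # e.g. 10000
echo $WEBAPP_SERVICE_PORT        # e.g. 10001
echo $CASSANDRA_SERVICE_PORT     # e.g. 10002
echo $AUTHSERVICE_SERVICE_PORT   # e.g. 10003
# traffic sent to that port on the local host gets load balanced to the right service instances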
Beyond that I'll need to consider additional items like:
1. Application data and more advanced routing in the service registration/location
2. How available the service discovery is, especially as we consider adding availability zones/fault domains.
3. How do I link this into front facing (public internet) load balancers?
4. How would I link in the concept of security groups? Or is the port exposure enough?
5. How I could start to do chaos testing to see how well recovery and multiple fault domains work.
I do want to thank the folks at Google who helped me get through the newbie GCE and Kubernetes issues (Brendan, Joe and Daniel).