Tuesday, May 31, 2016

µ-Services with Netflix Titus-π and Spinnaker



For those who don’t know about it, Hackday is held twice a year: for 24 hours, teams across Netflix can work on any project they’ve been thinking about.  Hackday projects do not have to be related to anything that would ever become part of the Netflix product or infrastructure.  In fact, Hackday projects can be about pure fun across teams who want to work together in ways they can’t easily when focusing on normal business priorities.  In this spirit of complete fun, the Titus team embarked on a totally superfluous project - Titus-π (or Titus Pi if you prefer).


For background, Titus is the container cloud at Netflix.  We scale our cluster of container hosting nodes to hundreds of r3.8xls daily.  We have operationalized this environment on top of Amazon EC2 for batch and service job users at Netflix.  The real Titus runs as a combination of Mesos and our own Titus framework, leveraging Fenzo for resource scheduling and Docker for container execution.  Our framework is written in Java for the master scheduler and golang for Docker execution.  Given it runs on EC2, all of it is compiled for Intel x86-64 environments.


For this Hackday, we wondered if we could scale Titus down.  The question we asked ourselves was “Could we take our Netflix microservices and scale them down to Raspberry Pi’s?”.  We hoped to learn things we could bring back to the real Titus to make it simpler and smaller on EC2 on x86-64.  Doing this scale down would mean cross-compiling the entire stack to ARM.  Beyond our Titus runtime, we wanted to make sure we could deploy to the Pi’s from our existing continuous delivery system - Spinnaker.  Most of all, we hoped to just have fun for a day.


Below is a step-by-step guide to what we did:

Building our “Rack”

We borrowed our rack design from a team who built a similar rack for Kubernetes research (see the Required Materials List; add the optional Micro-USB and CAT6 cables, plus our own switch).  You can see the build progress here:

[Animated GIF: rack build progress]


Docker on ARM

Getting Docker to work on ARM was quite simple.  There is an excellent project called HypriotOS, a minimal Debian-based OS with Docker 1.10.3 pre-installed.  At the time of our project, HypriotOS was not yet ready for the Pi 3.  They did, however, already have Docker pre-compiled for Raspbian available on their package distribution site, as described in this blog post.  Given this, here are the instructions we ended up using to set up a Pi install from a MacBook:


On Mac, plug in SD card in reader:

diskutil list
 ⇒ finds the card, on my mac it is /dev/disk2
diskutil unmountdisk /dev/disk2
sudo dd if=2016-03-18-raspbian-jessie-lite.img of=/dev/rdisk2 bs=1m
 ⇒ writes the image to your SD card
diskutil unmountdisk /dev/disk2

Take the SD card to your Pi with a USB keyboard attached, and power it on

Login as pi/raspberry

ifconfig
 ⇒ get the IP address [X.X.X.X]

On your Mac:

ssh pi@X.X.X.X
sudo su
passwd
 ⇒ change root's password
passwd pi
 ⇒ change pi's password

echo "overlay" | sudo tee -a /etc/modules
sudo raspi-config
 ⇒ resize root fs to full SD card
shutdown -r now

ssh pi@X.X.X.X
sudo apt-get update
sudo apt-get install -y apt-transport-https
wget -q https://packagecloud.io/gpg.key -O - | sudo apt-key add -
echo 'deb https://packagecloud.io/Hypriot/Schatzkiste/debian/ wheezy main' | sudo tee /etc/apt/sources.list.d/hypriot.list
sudo apt-get update
sudo apt-get install -y docker-hypriot
sudo systemctl enable docker

To test:

sudo docker run -d -p 80:80 hypriot/rpi-busybox-httpd

From your Mac, load http://X.X.X.X/

Mesos Agent on ARM

There were no easily available binary packages of Mesos for ARM, so we ended up needing to compile from source.  We found a few older guides, with the best overview here; it helps with a required ZooKeeper change, but was based on older Mesos builds.  We wanted to upgrade to Mesos 0.24.1, as this is the version we are currently using in Titus.


Originally we compiled Mesos natively.  This required creating a larger swap file:


On the Pi:

sudo su
dd if=/dev/zero of=/var/swap2 bs=1024 count=1048576
chown root:root /var/swap2
chmod 0600 /var/swap2
mkswap /var/swap2
swapon /var/swap2
free -h
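
If you want the swap file to survive reboots (an optional extra, not part of our original steps, but standard Debian practice), add an fstab entry:

echo '/var/swap2 none swap sw 0 0' >> /etc/fstab
 ⇒ run as root; makes the swap file persistent across reboots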


We checked the source code out from GitHub and switched to the 0.24.1 tag (Titus currently uses Mesos 0.24.1).  We then compiled the source code according to the Mesos getting started guide.  On ARM, we had to make two fixes: the first was in ZooKeeper’s mt_adaptor.c and the second was in Mesos’ fs.cpp.  As you can see, these issues have been fixed in Mesos master, but they are not part of the latest Mesos release (not even 0.28.1).  After about two hours, we had a build we could experiment with.  If you want the exact commands to run the build natively, take them from the Dockerfile in the section below.
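
If you just want the shape of the native build, it looks roughly like this (a sketch following the standard Mesos getting started flow; the authoritative commands live in the Dockerfile mentioned below):

On the Pi:

git clone https://github.com/apache/mesos.git
cd mesos
git checkout 0.24.1
 ⇒ apply the mt_adaptor.c and fs.cpp patches at this point
./bootstrap
mkdir build && cd build
../configure
make
 ⇒ expect roughly two hours on a Pi 3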

After the build, you can do a make install.  Then tarball the install directory into a file mesos-install.tar.gz.  You can then unpack and run it on your Pi with the following commands:
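
On the build Pi, the packaging step might look like this (a sketch; we assume a DESTDIR-staged install, which matches the /path/usr/local layout used below):

sudo make install DESTDIR=/tmp/mesos-pkg
 ⇒ stages the install under /tmp/mesos-pkg/usr/local
tar -czvf mesos-install.tar.gz -C /tmp/mesos-pkg .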

On the Pi:

sudo su
apt-get upgrade
apt-get install libsvn-dev libcurl4-nss-dev
tar -zxvf mesos-install.tar.gz
export LD_LIBRARY_PATH=/path/usr/local/lib
/path/usr/local/sbin/mesos-slave --master=mesosmaster:mesosmasterport


If you have followed the instructions, you should be able to look at your Mesos console and see the Pi registered in your cluster:


[Screenshot: Mesos console showing the Pi agent registered]


In the actual runs, we ended up with a slightly more complicated mesos-slave command line:


export LD_LIBRARY_PATH=/usr/local/lib
mesos-slave --log_dir=/tmp/mesos-slave --work_dir=/tmp/mesos --recover=reconnect --strict=false --attributes="stack:pi3;id:pi1" --master=${MESOS_MASTER_HOSTNAME}:${MESOS_MASTER_PORT} --ip=${MESOS_AGENT_IP} --hostname=${MESOS_AGENT_HOSTNAME}

Mesos Agent on ARM (under Docker)

Note that after Hackday, we decided to move the building and running of Mesos into Docker containers, which makes the work far easier to share.  You can find our mesos-on-pi skunkworks project here.  It will point you to patch files for both of the ARM compilation issues.  Running the docker build commands should take about two hours.  We’ve seen that later Mesos builds (0.28.1) take longer than two hours, possibly rendering your Pi useless as it swaps away; watch for updates as we work on the issue (trying to reduce the linker memory used).  We highly recommend following the Docker-based build, as it should be entirely self-contained (other than creating the larger swap file required on the Docker host).  Or, if you only want to run Mesos, just use this pre-built Docker container (aspyker/mesos-agent-pi:arm-mesos-0.24.1).

Running our version using the pre-built Docker container is as easy as running:


docker run -it --net="host" --env MESOS_MASTER_HOSTNAME=${MESOS_MASTER_HOSTNAME} --env MESOS_MASTER_PORT=${MESOS_MASTER_PORT} --env MESOS_AGENT_IP=${MESOS_AGENT_IP} --env MESOS_AGENT_HOSTNAME=${MESOS_AGENT_HOSTNAME} aspyker/mesos-agent-pi:arm-mesos-0.24.1
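
For example, on a Pi you might first export the environment the container expects (the master hostname and port here are hypothetical):

export MESOS_MASTER_HOSTNAME=mesos-master.local
export MESOS_MASTER_PORT=5050
export MESOS_AGENT_IP=$(hostname -I | awk '{print $1}')
export MESOS_AGENT_HOSTNAME=$(hostname)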

Mesos Master

We originally had hoped to run the master on the Pi’s, but found that there are additional ARM cross-compilation issues when running the Mesos master on ARM.  We decided to punt on this and grabbed an Ubuntu laptop for our master.  Given we run our master on Ubuntu on EC2, we were able to reuse our existing deb file for installing the Mesos master.

Titus Framework Scheduler Master and Titus API

Given we ran the Mesos master on our Ubuntu laptop, we decided to run our scheduler master on the same laptop.  We also have a horizontally scalable API that allows users to control jobs in Titus, and we decided to run this on the same laptop as well.  Of course, in our real Titus container cloud we run all of these tiers in a more highly available manner, but for the Hackday project we “hacked” by running each tier with a cluster size of one.

Titus Framework Agent Executor

Given our Titus Mesos executor works with many Docker and cgroup aspects, it is written in golang.  Golang makes it very easy to cross-compile to ARM: we literally just re-ran our build on x86-64 Linux after telling the Go build about our intended platform via export GOARCH=arm.
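
As a sketch, the cross-compile amounts to the following (the output name is hypothetical; GOARM=7 suits the Pi 2/3):

On x86-64 Linux:

export GOOS=linux
export GOARCH=arm
export GOARM=7
go build -o titus-executor-arm .
 ⇒ run from the executor's main package directory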


Titus extends Mesos with critical Docker extensions that make it work well on the Amazon EC2 cloud.  Our Titus EC2 integration includes agents that perform operational details such as container/cgroup usage metric collection, live rotated log uploads to S3, EC2 metadata proxying (for IAM roles and instance level data), and per-container IP/security group VPC networking drivers, but we support turning each of these off.  Given our Pi’s didn’t have any of these cloud services, we disabled each of these features in our Titus executor.  What was left was the core integration with the Mesos agent and our container execution and lifecycle control.


We did add an additional Mesos agent attribute (arch).  We set arch=arm and then used Titus’ scheduler support for hard constraints to ensure that only ARM workloads were deployed to the Pi’s, as shown below.
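
Concretely, the --attributes flag from the earlier mesos-slave command line grows by one entry:

--attributes="stack:pi3;id:pi1;arch:arm"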

Capturing the Agent Image

Once we had the Mesos agent, Docker, and our agent executor working on one of the Pi’s, we dumped the SD card image to our Mac.  This took a long time, as it had to read 32G from the SD card.  To make it faster to write the image to other SD cards, we shrank the img file using gparted as described in this article, taking the 32G image down to a 4G img file.  We were then able to restore this image to each SD card and power up the entire rack of Pi’s.
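
The dump and restore are just dd runs in each direction (a sketch, assuming the card shows up as /dev/disk2 as earlier; the image file names are hypothetical):

On the Mac:

diskutil unmountdisk /dev/disk2
sudo dd if=/dev/rdisk2 of=titus-pi-agent.img bs=1m
 ⇒ reads the full 32G card; this is the slow part
sudo dd if=titus-pi-agent-shrunk.img of=/dev/rdisk2 bs=1m
 ⇒ after shrinking with gparted, write the ~4G image to each new card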

Continuous Deployment via Spinnaker

At this point we had a full-blown Titus environment: our API, the Titus master, the Mesos master, and a six node stack of Titus agent executors and Mesos agents.  Given we already have a Spinnaker cloud driver for Titus, all we had to do was adjust the API DNS address.  The Spinnaker team helped us set up a minimal Spinnaker install pointing to our Titus-Pi cluster API address.

Final Product

In the end we were able to deploy clusters of ARM based services (we cross-compiled a simple “service” in golang; see the sketch below).  The video that follows shows, in real time, a deploy of a 24 instance server group (ASG) and how quickly that server group starts running on three of the Pi’s.
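
A toy ARM service image like the one we deployed can be produced in a couple of commands (a sketch; the names are hypothetical, and FROM scratch works because the cross-compiled Go binary is static):

On x86-64 Linux:

GOOS=linux GOARCH=arm GOARM=7 go build -o hello-service .
printf 'FROM scratch\nCOPY hello-service /hello-service\nENTRYPOINT ["/hello-service"]\n' > Dockerfile
docker build -t example/hello-service:arm .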



Adding Flair

After Hackday, each team gets to present their project in a two minute lightning talk.  It is always best to add a little “Flair”.  We decided to add a Unicorn HAT to our Pi stack.  We programmed it to express our love of Titus, as you can see here:





The Team

The extended Titus team was able to put this hack together.  OK, we cheated and worked ahead on building the hardware and getting the Mesos agent pre-compiled (compilation takes a few hours, so we didn’t want to waste our time during the day on this).  However, the porting of Titus and the integration of the entire system were done in 12 hours during Hackday.  The team consisted of Amit Joshi, Andrew Leung, Tomasz Bak, Noel Yap, Tim Bozarth, Tomas Lin, Sharma Podila and myself.  We couldn’t have done it without the previous work by others noted in the references.  In the end, we had an absolute blast, and now we have a cool digital sign board to go along with our real operational dashboards.

If you think such crazy projects are fun, and more importantly, if you have interest in doing the real thing on Amazon EC2 at scale with reliability, we’re hiring. Come have fun with us!

Tuesday, March 22, 2016

What I need to improve - 2016 360 Peer Feedback

This morning our 360 feedbacks were delivered to all employees across Netflix.  360's are a way we encourage our colleagues to continue to be excellent, as well as help us see the personal blind spots we'd all like to improve.  I wanted to summarize some of the common themes I received from the 23 folks across Netflix who took the time to help me understand myself better.  I am specifically focusing on "start" and "stop" areas, as those will help me improve as an engineer, team member, and leader.  I am proud of the "continue" areas, and I'll make sure those don't turn from a strength into a weakness.


My personal tag cloud of feedback





Things I need to work on



1.  I am too verbose in my communication (documents, slides, in meetings) which can lose audiences and impact the effectiveness of team collaboration.

2.  I do not always include colleagues at the appropriate times.  I need to do a better job of bringing key peers, related teams, and potential collaborators into aspects of our projects earlier.

3.  I need to better separate the tactical from the strategic.  I need to be able to shift gears between both aspects of our projects, knowing when to focus on each.  Right now, my project focus needs to shift more from tactical to strategic.

4.  I need to spend more time focusing on Open Source.  This is one of my focus areas at Netflix. My colleagues would like for me to spend more time improving our approach to OSS.

I realize some of the above items feel harsh, but the fact is my colleagues gave me this feedback to improve our excellence as a team.  Therefore, I take each of the above points seriously.  I also hope that by writing them down myself and referring to them throughout 2016, I'll work harder to improve each of these items.

I imagine every leader has aspects of the above areas they have actively worked on improving.  If you have ways you have helped improve yourself in these areas, please let me know.  Books, blogs, and personal exercises are welcome.  I hope to look back in 2017 and say that I have actively improved each of the above areas.