Last week, I spent the run-up to the Fourth of July up in Portland for NodeConf 2012. Held in the boho-boutique Jupiter Hotel / Douglas Fir Lounge / Imago Theatre complex, the show was a very different experience than some of the other Node conferences I’ve been to recently. Even though the event was completely sold out, there were only about 220 of us attending. Mikeal Rogers, the conference’s lead organizer, took inspiration from FunConf’s carefully chosen constraints and chose to keep the show intimate. NodeConf featured a single track, had no preannounced schedule, and there were no Q&As with presenters. It was sort of a nerd Gesamtkunstwerk, where everybody knew Node and was there to learn more.
History & Plans
The first presentation of the conference was from Ryan Dahl – the creator and current ‘Benevolent Dictator Emeritus’ for Node. The thing that struck me most about Ryan’s presentation was that he failed (a lot) before he hit upon Node. His presentation was littered with the detritus of past failed attempts to come up with web servers that used evented I/O to go fast. Speaking as a perfectionist, it was inspirational and humbling to hear just how hard Ryan had to bang his head against the wall to come up with something as simple and refined as Node.
Ryan was also fortunate in that the Node community is small, friendly and seems to do a pretty good job maintaining its own consensus. This has led to some interesting partnerships that haven’t always been there for other platforms. Matthew Podwysocki gave a talk about Microsoft’s history with Node and it’s no coincidence that Node.js worked out of the box with Azure. MS Open Tech and the Node maintainers have a good working relationship. While a lot of the credit for this comes from Matthew’s early passion for Node, the easy-going (and more or less apolitical) nature of the Node team also helped.
Isaac Schlueter gave a compressed overview of Node’s release history as a way of demonstrating how rapidly the platform has matured. The recent 0.8 release of Node wasn’t a 1.0 release because the team wanted to give themselves some wiggle room to decide what a 1.0 release means for the project. APIs are getting locked down and the process is a little more rigorous. (In fact, a decision to hard-deprecate one of the library name changes was rolled back between 0.8.0 and 0.8.1 — a concession that probably wouldn’t have been made before.) At this point everyone is just waiting to see if the underlying platform abstraction layer (libuv) and the new build system (gyp) are as solid as they seem to be.
There was another interesting statistic in Isaac’s talk. At the time of NodeConf 2011, there were 1,400 packages available for Node. Today, there are over 12,000. That kind of explosive growth is not without cost, but it also bespeaks an energetic and enthusiastic community.
Meanwhile, Ryan has freed himself up to research a bunch of random other projects and is doing some work on what he’s referring to as ‘Node 2’. But even that sounds more like a refinement than a radical re-envisioning. My favorite goal of his is to remove all dynamic library dependencies from Node, up to and including libc. So it seems like there’s a pretty solid (if informal) roadmap in place.
Streams

Several presentations were devoted exclusively to streams. While they are easily the most powerful feature in Node, they’re often overlooked. Almost every I/O operation in Node can be expressed as a stream, and the notion of pipelining is built into the core stream class. Thanks to the simplicity of the implementation, it’s easy to compose an event-driven pipeline that only buffers when you tell it to and correctly deals with a whole lot of weirdness.
Much of the discussion around streams touched on backpressure, which is the dark side of using streams. Backpressure is caused by saturated network channels, high CPU load, or poorly matched connections between streams (e.g. when one stream has much lower throughput than the other, or when there’s no buffering anywhere in the I/O pipeline). When streaming systems back up, the backpressure from the later stages of the I/O pipeline can wreak havoc throughout the whole system.
Matt Ranney, CEO of Voxer, gave the best talk on the subject. His deep and subtle presentation showed what happens to streaming distributed systems under load. Matt was bringing the fire down from the mountaintop. Few people are proficient with this extremely important topic and it was all I could do to keep up with what he was saying. Voxer is pushing around a lot of data with Node in contexts that are highly sensitive to latency and they’ve figured out how to make the whole thing work. Node is already doing the heavy lifting in the real world.
Node’s capacity for streaming live data to the browser is light-years ahead of pretty much everything else out there. It was one of the first server-side platforms to support WebSockets and Socket.IO, a multi-platform library for streaming data between servers and browsers. Many talks touched on different applications for Socket.IO and for Engine.IO, a new implementation of WebSockets and alternative browser-server transports that promises to ease some of the difficulty of working with WebSockets. A couple of talks demonstrated how to combine Socket.IO with Browserify, SubStack’s browser adaptor layer for Node code. Seeing Socket.IO combined with a multiplexer in the browser to allow multi-channel communication over a single connection between the browser and server was mind-bending.
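The multiplexing trick boils down to framing: tag each message with a channel id so several logical streams can share one physical connection. This toy sketch is not Socket.IO’s actual wire format, just an illustration of the idea:

```javascript
// Toy multiplexer framing: [1-byte channel id][4-byte BE length][payload].
function frame(channel, payload) {
  const body = Buffer.from(payload);
  const header = Buffer.alloc(5);
  header.writeUInt8(channel, 0);
  header.writeUInt32BE(body.length, 1);
  return Buffer.concat([header, body]);
}

// Pull zero or more complete frames back out of a buffer.
function parseFrames(buf) {
  const frames = [];
  let offset = 0;
  while (buf.length - offset >= 5) {
    const channel = buf.readUInt8(offset);
    const length = buf.readUInt32BE(offset + 1);
    if (buf.length - offset - 5 < length) break; // incomplete frame: wait for more data
    frames.push({
      channel,
      payload: buf.slice(offset + 5, offset + 5 + length).toString(),
    });
    offset += 5 + length;
  }
  return frames;
}

// Two logical channels interleaved on one "wire".
const wire = Buffer.concat([frame(1, 'chat: hi'), frame(2, 'metrics: 42')]);
console.log(parseFrames(wire));
```

The demultiplexer on the far end just dispatches each parsed frame to the handler registered for its channel, which is how one WebSocket connection can carry chat, metrics, and anything else side by side.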
Performance: Debugging, Profiling and Monitoring
A lot of the attendees were enthusiastic New Relic customers, and everyone who knew New Relic was happy to hear that Node support is on New Relic’s radar. The nearly constant stream of encouragement I got from other developers kept me going. Isaac Schlueter provided me with some very direct, tangible assistance, and a lot of other people (notably Max Ogden and Tom Hughes-Croucher) helped me home in on which approaches I should try. I’m grateful for their help.
There were a number of talks about monitoring, profiling, and performance tuning. The dominant theme was that experimentation and measurement are very important. Several presenters, most notably Matt Ranney, Daniel Shaw (also of Voxer), and Mikito Takada (of Zendesk, who has also written a great little ebook on building single-page apps), stressed the importance of having a strong upfront concept of your system’s architecture, which grows in importance as the system becomes more distributed. Everyone agreed that it’s necessary to test your architecture early and often. V8 is an insanely fast and efficient VM, but its optimizations were chosen by Google to meet the specific needs of its own web browser, and they aren’t always what you would expect or consider reasonable.
Dave Pacheco has been doing yeoman work at Joyent for a while now, putting together the tools offered by Joyent’s SmartMachines, and he gave a great talk on using SmartOS and DTrace to generate “flame graphs,” an interactive profiling visualization not dissimilar to some of the things we do here, although operating at a lower level. As always, there was a lot of curiosity about DTrace and flame graphs after his talk, so I put together a little how-to guide on working with SmartOS locally, without needing a connection to Joyent’s cloud.
Hardware

Which brings me to the final theme of the conference, and the part that was probably the most fun: people hacking on hardware. (That, and the NodeConf pickup band, who spent an hour putting together and rehearsing a set of songs about separating all the concerns and mocking your terrible pull requests.) Nothing brings out a nerd’s inner 12-year-old like robots and laser beams. From Emily Rose’s Node-driven security system for Casa Diablo (Portland’s premiere vegan strip club), through Voicebox’s Node-assisted karaoke system, to Rick Waldron’s Node-controlled robots and Elijah Insua’s painstaking work making Node control CNC machines (and MIDI controllers), people are doing a LOT of interesting stuff with Node and hardware.
Anyone who has done any hardware hacking knows that the low-level tools involved are just not human-friendly. No matter what you’re using — from CNC control protocols that express motion entirely in terms of resets to origin or relative movements, to basic components that straight-up refuse to give you any meaningful indication of why they’re not working — it’s all a big hassle.
One of the most fascinating lessons from this set of talks was that adding an event-driven, high-level API to these systems is both relatively easy to achieve and makes them much easier to use. If it’s easier to intuitively model what’s going to happen, it becomes a lot easier to play with the hardware. The rich Node ecosystem makes this all possible. Sometimes there will be four or five layers of machinery between a control program and the physical hardware. But when an ultrasonic sensor is triggered and 10 or more of the audience’s phones start ringing simultaneously (from something that’s probably written in less than 40 or 50 lines of code), it feels pretty magical.
To bring this back to where I started, Elijah Insua delivered one of the most humbling talks of the day. He decided he wanted to build a MIDI/USB controller with square, velocity-sensitive buttons, and he wanted to fabricate it himself. He was able to do it, but to do so he had to build his own CNC mill, figure out how to get flat-mounted LEDs soldered onto circuit boards, come up with an intuitive means of controlling his tooling, and interactively model how his CNC jobs would play out using WebGL simulators that ran low-level CNC commands (naturally). Doing this involved a hell of a lot of failure. Elijah had thousands of pictures of all the work he did along the way, and many of them are of awkward-looking contraptions. Not only did he not give up on what must have seemed like a never-ending, deeply frustrating task, he ended up with something that worked, as well as a whole bunch of useful tools and APIs. None of it would have been possible without extensive help from the Node community.
The Node community is great. Everyone was approachable, the conversation was free flowing and it looked like everyone had a great time. I got to spend a surprisingly large amount of time yammering at many of the core members of the Node team. Not only were they friendly, they all seemed interested in finding opportunities to be helpful and move things along. (I may have committed myself to a few projects while I was talking to them.)
Try New Relic for free today. Sign up, deploy, and instantly gain deep insight into your web apps.
[corrections: @creationix pointed out on IRC that Candor is a JITted VM, like v8, and not a native compiler. @mikeal said the attendance was 220, and that he was inspired by FunConf and not MaxFunCon. Sorry for the errors!]