Recap of NodeConf 2012

Last week, I spent the run-up to the Fourth of July up in Portland for NodeConf 2012. Held in the boho-boutique Jupiter Hotel / Douglas Fir Lounge / Imago Theatre complex, the show was a very different experience than some of the other Node conferences I’ve been to recently. Even though the event was completely sold out, there were only about 220 of us attending. Mikeal Rogers, the conference’s lead organizer, took inspiration from FunConf’s carefully chosen constraints and chose to keep the show intimate. NodeConf featured a single track, had no preannounced schedule and there were no Q&As with presenters. It was sort of a nerd Gesamtkunstwerk, where everybody knew Node and was there to learn more.

In addition to this recap, I also put together a collection of links from people’s talks and did a lot of tweeting from the show. It was a great, very social experience, and I learned a whole lot!

History & Plans
The first presentation of the conference was from Ryan Dahl – the creator and current ‘Benevolent Dictator Emeritus’ for Node. The thing that struck me most about Ryan’s presentation was that he failed (a lot) before he hit upon Node. His presentation was littered with the detritus of past failed attempts to come up with web servers that used evented I/O to go fast. Speaking as a perfectionist, it was inspirational and humbling to hear just how hard Ryan had to bang his head against the wall to come up with something as simple and refined as Node.

Ryan was also fortunate in that the Node community is small, friendly and seems to do a pretty good job maintaining its own consensus. This has led to some interesting partnerships that haven’t always been there for other platforms. Matthew Podwysocki gave a talk about Microsoft’s history with Node and it’s no coincidence that Node.js worked out of the box with Azure. MS Open Tech and the Node maintainers have a good working relationship. While a lot of the credit for this comes from Matthew’s early passion for Node, the easy-going (and more or less apolitical) nature of the Node team also helped.

Isaac Schlueter gave a compressed overview of Node’s release history as a way of demonstrating how rapidly the platform has matured. The recent 0.8 release of Node wasn’t a 1.0 release because the team wanted to give themselves some wiggle room to decide what a 1.0 release means for the project. APIs are getting locked down and the process is a little more rigorous. (In fact, a decision to hard-deprecate one of the library name changes was rolled back between 0.8.0 and 0.8.1, a concession that probably wouldn’t have been made before.) At this point everyone is just waiting to see if the underlying platform abstraction layer (libuv) and the new build system (gyp) are as solid as they seem to be.

There was another interesting statistic in Isaac’s talk. At the time of NodeConf 2011, there were 1,400 packages available for Node. Today, there are over 12,000. That kind of explosive growth is not without cost, but it also bespeaks an energetic and enthusiastic community.

Meanwhile, Ryan has freed himself up to research a bunch of random other projects and is doing some work on what he’s referring to as ‘Node 2’. But even that sounds more like a refinement than a radical re-envisioning. My favorite goal of his is to remove all dynamic library dependencies from Node, up to and including libc. So it seems like there’s a pretty solid (if informal) roadmap in place.

Platform
Tim Caswell contributed a lot to Node in the early days when he was working on WebOS. He recently became fascinated by libuv, the platform abstraction layer underpinning Node, and decided to see what he could do by combining it with Lua. Lua is a much simpler language than JavaScript and offers the potential of much lower overhead because of its tininess. He and a small number of collaborators were able to put together luvit, a Lua equivalent of the Node API, in just a few months.

However, the true value of luvit is that it demonstrates the real power of Node. Node isn’t just a JavaScript runtime with a fairly complete standard library; its power lies in the set of abstractions represented by libuv (which is pretty well documented in uv.h). There’s nothing stopping anyone from adapting that set of abstractions to other high-level languages (or indeed just using them from C).

Similarly, one of the Nodejitsu developers has been working on a JavaScript-like language called Candor. Unlike most recent JavaScript spinoffs, Candor isn’t transpiled to JavaScript; it runs on its own JIT-compiling VM, much like V8 itself. Candor is more like Google’s Dart than, say, CoffeeScript, and Fedor Indutny, its developer, has been putting a lot of work into the compiler (and making it work with libuv). While I’m not a big fan of the linguistic fragmentation this represents, I am glad there are alternatives to ECMAScript 6 (especially if it turns out to be the bloated disaster it’s threatening to become).

On the other end of the web stack, one of Node’s most touted advantages is that since it’s written in JavaScript, you can share code between the server and the browser. Many presenters suggested that this is true more in theory than in reality, but there has been a lot of work done to make the more interesting bits of Node code run in the browser. Most of this glue code has been written by the incredibly prolific developer SubStack. (And yes, he does have a real name, but no one uses it.) Some of the most powerful abstractions in Node, like streams and Node’s great EventEmitter class, are now available in the browser to developers who are willing to do a little work.
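To give a flavor of what that looks like, here’s a minimal sketch of shared EventEmitter code; the module name and event names are made up for illustration, but the pattern is the one the browser shims enable:

// ticker.js -- a tiny module intended to run unchanged in Node and the browser
var EventEmitter = require('events').EventEmitter;

function Ticker(interval) {
  EventEmitter.call(this);
  var self = this;
  setInterval(function () {
    self.emit('tick', Date.now()); // same event API on the server and in the browser
  }, interval);
}
Ticker.prototype = Object.create(EventEmitter.prototype);

module.exports = Ticker;

Running something like browserify ticker.js -o bundle.js (the exact invocation depends on the Browserify version) shims the events module so the browser build behaves like the server one.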

Streams
Speaking of streams, several presentations were exclusively devoted to them. While they are easily the most powerful feature in Node, they’re often overlooked. Almost every I/O operation in Node can be expressed as a stream, and the notion of pipelining is built into the core stream class. Because the implementation is so simple, it’s easy to compose an event-driven pipeline that only buffers when you tell it to and correctly deals with a whole lot of weirdness.
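To make that concrete, here’s a minimal sketch of such a pipeline: an HTTP server that streams a (hypothetical) log file through gzip and out to each client, with the buffering and flow control handled by pipe():

// Stream a file through gzip and out to each HTTP client.
// pipe() wires the streams together and handles flow control for us.
var http = require('http');
var fs = require('fs');
var zlib = require('zlib');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Encoding': 'gzip' });
  fs.createReadStream('access.log')   // hypothetical input file
    .pipe(zlib.createGzip())
    .pipe(res);
}).listen(8080);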

Much of the discussion around streams touched on the issue of backpressure, which is the dark side of using streams. Backpressure can come from saturated network channels, high CPU load, or bad connections between streams (e.g. one stream has a much different throughput than the other, or there’s no buffering anywhere in the I/O pipeline). When streaming systems back up, the backpressure from the later stages of the I/O pipeline can wreak havoc throughout the whole system.
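The core mechanism for coping with it is simple: stop reading when a downstream write() returns false and start again on 'drain'. This is roughly what pipe() does for you; a hand-rolled sketch for a generic readable/writable pair looks something like this:

// Roughly how backpressure propagates between a readable and a writable stream:
// pause the source when the destination can't keep up, resume when it drains.
function pipeWithBackpressure(source, dest) {
  source.on('data', function (chunk) {
    var ok = dest.write(chunk);   // false means the destination's buffer is full
    if (!ok) source.pause();      // apply backpressure upstream
  });
  dest.on('drain', function () {
    source.resume();              // the destination caught up; keep reading
  });
  source.on('end', function () {
    dest.end();
  });
}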

Matt Ranney, CEO of Voxer, gave the best talk on the subject. His deep and subtle presentation showed what happens to streaming distributed systems under load. Matt was bringing the fire down from the mountaintop. Few people are proficient with this extremely important topic and it was all I could do to keep up with what he was saying. Voxer is pushing around a lot of data with Node in contexts that are highly sensitive to latency and they’ve figured out how to make the whole thing work. Node is already doing the heavy lifting in the real world.

Socket.IO
Node’s capacity for streaming live data to the browser is light-years ahead of pretty much everything else out there. It was one of the first server-side platforms to support WebSockets, and it’s the home of Socket.IO, a multi-platform library for streaming data between servers and browsers. Many talks touched on different applications for Socket.IO and for Engine.IO, a new implementation of WebSockets and alternative browser-server transports that promises to ease some of the difficulty of working with WebSockets. A couple of talks demonstrated how to combine Socket.IO with Browserify, SubStack’s browser adaptor layer for Node code. Seeing Socket.IO combined with a multiplexer in the browser to allow multi-channel communication over a single connection between the browser and server was mind-bending.
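For anyone who hasn’t played with it, the server side of a bare-bones Socket.IO (0.9-era) setup is only a few lines; the event names below are invented for the example:

// server.js -- minimal Socket.IO server: push an event to each client that
// connects and listen for a reply. (Event names are made up for illustration.)
var io = require('socket.io').listen(8080);

io.sockets.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
  socket.on('ack', function (data) {
    console.log('client says:', data);
  });
});

On the browser side the client mirrors it: io.connect('http://localhost:8080') returns a socket with the same emit/on interface.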

Performance: Debugging, Profiling and Monitoring
A lot of the attendees were enthusiastic New Relic customers, and everyone who knew New Relic was glad to hear that Node support is on New Relic’s radar. The nearly constant stream of encouragement I got from other developers meant a great deal. Isaac Schlueter provided me with some very direct, tangible assistance, but a lot of other people (notably Max Ogden and Tom Hughes-Croucher) helped me zero in on which approaches I should try. I’m grateful for their help.

There were a number of talks about monitoring, profiling and performance tuning. The dominant theme was that experimentation and measurement are very important. Several presenters, most notably Matt Ranney, Daniel Shaw (also of Voxer) and Mikito Takada (of Zendesk, who has also written a great little ebook on building single-page apps), stressed the importance of having a strong upfront concept of your system’s architecture, something that matters more and more as the system becomes more distributed. Everyone agreed that it’s necessary to test your architecture early and often. V8 is an insanely fast and efficient VM, but its optimizations were chosen by Google to meet the specific needs of its own web browser and they aren’t always what you would expect or consider to be reasonable.
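The practical upshot is to measure the exact code paths you care about instead of trusting folklore about what V8 optimizes. Even a crude timing harness like the sketch below (the workloads and iteration counts are placeholders) will regularly surprise you:

// Crude micro-benchmark harness: time two candidate implementations against
// each other rather than guessing what V8 will optimize.
function bench(name, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn(i);
  console.log(name + ': ' + (Date.now() - start) + 'ms');
}

// Placeholder workloads -- swap in the real code paths from your application.
bench('string concat', function (i) { return 'row-' + i; }, 1000000);
bench('array join', function (i) { return ['row', i].join('-'); }, 1000000);

Micro-benchmarks like this are easy to fool, so treat them as a starting point for profiling real workloads rather than a verdict.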

The most impressive demonstration of this was during Felix Geisendörfer’s talk. He maintains a pure JavaScript MySQL client library for Node. Felix spent the first part of his talk live coding a streaming protocol parser for MySQL to demonstrate the challenges he’s faced and discuss architectural strategies. It turns out that most of his initial assumptions, including that state machines built on switch statements are fast and that making lots of function calls is slow, were incorrect. Through a process of constant reworking and benchmarking, he now has a driver that’s significantly faster than the version using C bindings to libmysql. The performance of his driver is getting within spitting distance of the PHP client, which (gallingly enough) is the fastest MySQL client library out there. His presentation was very specific to Node, but it was a tour de force and a hell of a lot of fun to watch.
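To give a feel for the shape of the problem (this is not Felix’s actual code), here’s a toy streaming parser that dispatches to a small function per state rather than one big switch statement; the “protocol” is an invented one, a single length byte followed by a payload:

// Toy streaming parser: each state is a function, and parse() keeps calling the
// current state until the buffer is exhausted. The protocol here is made up:
// one length byte followed by that many payload bytes.
function Parser() {
  this.state = this.parseLength;
  this.expected = 0;
}

Parser.prototype.parse = function (buffer) {
  var offset = 0;
  while (offset < buffer.length) {
    offset = this.state(buffer, offset);
  }
};

Parser.prototype.parseLength = function (buffer, offset) {
  this.expected = buffer[offset];
  this.state = this.parsePayload;
  return offset + 1;
};

Parser.prototype.parsePayload = function (buffer, offset) {
  // A real parser has to handle payloads split across buffers; this sketch
  // assumes the whole payload arrived in one chunk.
  var payload = buffer.slice(offset, offset + this.expected);
  console.log('message:', payload.toString());
  this.state = this.parseLength;
  return offset + this.expected;
};

// Example: prints "message: hello"
new Parser().parse(new Buffer([5, 104, 101, 108, 108, 111]));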

Dave Pacheco has been doing yeoman work for a while now at Joyent, putting together the tools offered by Joyent’s SmartMachines, and he gave a great talk on using SmartOS and DTrace to generate “flame graphs,” an interactive profiling visualization not unlike some of the things we do here, although operating at a lower level. As always, there was a lot of curiosity about DTrace and flame graphs after his talk, so I put together a little guide on how to work with SmartOS locally, without needing a connection to Joyent’s cloud.

Hardware
Which brings me to the final theme of the conference, and the part that was probably the most fun: people hacking on hardware. (That and the NodeConf pickup band, who spent an hour putting together and rehearsing a set of songs about separating all the concerns and mocking your terrible pull requests.) Nothing brings out a nerd’s inner 12-year-old like robots and laser beams. From Emily Rose’s Node-driven security system for Casa Diablo (Portland’s premier vegan strip club), through Voicebox’s Node-assisted karaoke system, to Rick Waldron’s Node-controlled robots, to Elijah Insua’s painstaking work making Node control CNC machines (and MIDI controllers), people are doing a LOT of interesting stuff with Node and hardware.

Anyone who has done any hardware hacking knows that the low-level tools involved are just not human friendly. No matter what you’re using, from CNC control protocols that express motion entirely in terms of resets to origin or relative movements, to basic components that straight-up refuse to give you any meaningful indication of why they’re not working, it’s all a big hassle.

One of the most fascinating themes from this set of talks was that adding an event-driven, high-level API to these systems turns out to be both relatively easy to achieve and a huge improvement in usability. If it’s easier to intuitively model what’s going to happen, it becomes a lot easier to play with the hardware. The rich Node ecosystem makes this all possible. Sometimes there will be four or five layers of machinery between a control program and the physical hardware. But when an ultrasonic sensor is triggered and 10 or more of the audience’s phones start ringing simultaneously (from something that’s probably written in less than 40 or 50 lines of code), it feels pretty magical.
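Rick Waldron’s johnny-five library gives a sense of what those event-driven APIs feel like. A minimal sketch (assuming an Arduino running Firmata plugged in over USB, with an LED on pin 13 and a button on pin 8) is just a couple of high-level objects and event handlers:

// Minimal johnny-five sketch: light an LED while a button is held down.
// Assumes an Arduino flashed with Firmata and connected over USB.
var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function () {
  var led = new five.Led(13);      // LED wired to pin 13
  var button = new five.Button(8); // button wired to pin 8

  button.on('down', function () { led.on(); });
  button.on('up', function () { led.off(); });
});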

To bring this back where I started, Elijah Insua delivered one of the most humbling talks of the day. He decided he wanted to build a MIDI/USB controller with square, velocity-sensitive buttons, and he wanted to fabricate it himself. He was able to do it, but to do so he had to build his own CNC mill, figure out how to get flat-mounted LEDs soldered onto circuit boards, come up with an intuitive means of controlling his tooling, and interactively model how his CNC jobs would play out using WebGL simulators that ran low-level CNC commands (naturally). Doing this involved a hell of a lot of failure. Elijah had thousands of pictures of all the work he did along the way, and many of them are of awkward-looking contraptions. Not only did he not give up on what must have seemed like a never-ending, deeply frustrating task, he ended up with something that worked, as well as a whole bunch of useful tools and APIs. None of which he would have been able to do without extensive help from the Node community.

The Node community is great. Everyone was approachable, the conversation was free flowing and it looked like everyone had a great time. I got to spend a surprisingly large amount of time yammering at many of the core members of the Node team. Not only were they friendly, they all seemed interested in finding opportunities to be helpful and move things along. (I may have committed myself to a few projects while I was talking to them.)

Conclusion
Max Ogden said during his presentation that he was struck by the old-school Ruby community motto of MINASWAN: Matz is nice and so we are nice. It’s a little mantra that reminds everyone that hacking on Ruby is about having a good time and helping out. Max came up with an analog for Node, which is JIFASNIF: JavaScript is fun and so Node is fun. He was specifically talking about how JavaScript’s simplicity and flexibility made working with streams fun and natural, but it could stand for the Node community as a whole. Working with Node is fun. There are a lot of us working on very hard problems, but working on them is fun! We’ll all help each other out. SubStack and TJ Holowaychuk will continue to be streams constantly emitting new Node module events. Next year, we’ll all get together again for NodeConf 2013 and will still be having fun.


[corrections: @creationix pointed out on IRC that Candor is a JITted VM, like v8, and not a native compiler. @mikeal said the attendance was 220, and that he was inspired by FunConf and not MaxFunCon. Sorry for the errors!]
