Teaching the Closure Compiler About React #

tl;dr: react-closure-compiler is a project that contains a custom Closure Compiler pass that understands React concepts like components, elements and mixins. It allows you to get type-aware checks within your components and compile React itself alongside your code with full minification.

Late last year, Quip started a gradual migration to React for our web UI (incidentally the chat features that were launched recently represent the first major functionality to be done entirely using React). When I started my research into the feasibility of using React, one of my questions was “Does it work with the Closure Compiler?” At Quip we rely heavily on it not just for minification, but also for type annotations to make refactorings less scary and code more self-documenting¹, and for its many warnings to prevent other gotchas in JavaScript development. The tidbits that I found were encouraging, though a bit sparse:

  • An externs file with type declarations for most of React's API²
  • A Quora post by Pete Hunt (a React core contributor) describing React as “closure compiler compatible”
  • React's documentation about refs mentions making sure to quote refs annotated via string attributes³

In general I got the impression that it was certainly possible to use React with the Closure Compiler, but that not a lot of people were, and thus I would be off the beaten path⁴.

My first attempt was to add react.js (the unminified version) as source input along with a simple “hello world” component⁵. The rationale behind doing it this way was that, if React were to be a core library, it should be included in the main JavaScript bundle that we serve to our users, instead of being a separate file. It also wouldn't need an externs file, since the compiler was aware of it. Finally, since it was going to be minified with the rest of our code, I could use the non-minified version as the input and get better error messages. I was then greeted by hundreds of errors and warnings, which broadly fell into three categories:

  1. “illegal use of unknown JSDoc tag providesModule” and similar warnings about JSDoc tags that the React source uses that the Closure Compiler didn't understand
  2. “variable React is undeclared” indicating that the Closure Compiler did not realize what symbols react.js exported, most likely because the module wrapper that it uses is a bit convoluted, and thus it's not obvious that the exported symbols are in the global scope
  3. “dangerous use of the global this object” within my component methods, since the Closure Compiler did not realize that the functions within the spec passed to React.createClass were going to be run as methods on the component instance.

Since I was still in a prototyping stage with React, I looked into the most minimal set of changes I could make to deal with these issues. For 2, adding the externs file to our list of externs helped, since the compiler now knew that there was a React symbol and its many properties. This did seem somewhat wrong, since the React source was not actually external, and it was in fact safe to (globally) rename createClass and other methods, but it did quieten those errors. For 1 and 3 I wrote a small custom warnings guard that ignored all “errors” in the React source itself and the “dangerous use of global this” warning in .jsx files.

Once I did all that, the code compiled, and appeared to run fine with all the other warnings and optimizations that we had. However, a few days later, as I was working on a more complex component, I ran into another error. Given:

var Comp = React.createClass({
    render: function() {...},
    someComponentMethod: function() {...}
});
var compInstance = React.render(React.createElement(Comp), ...);
compInstance.someComponentMethod();

I was told that someComponentMethod was not a known property on compInstance (which was of type React.ReactComponent, per the externs file). This once again boiled down to the compiler not understanding the React.createClass construct (i.e. that it defined a type). It looked like I had two options for dealing with this:

  1. Add a @suppress {missingProperties} annotation at the callsite, so that the compiler wouldn't complain about the property that it didn't know about
  2. Add a @lends {React.ReactComponent.prototype} annotation to the class spec, so that the compiler would know that someComponentMethod was indeed a method on components (this seemed to be the approach taken by some other code I came across).

The main problem with option 2 was that it told the compiler that all component instances had a someComponentMethod method, which was not true. However, it seemed like the best option, so I added it and kept writing more components.
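
For reference, option 2 as applied looked roughly like this (a sketch, with the method bodies reduced to trivial stubs):

// A sketch of option 2: @lends tells the compiler that the spec's methods
// are added to React.ReactComponent.prototype. This quiets the missing
// property warning, but claims that every component instance has
// someComponentMethod.
var Comp = React.createClass(/** @lends {React.ReactComponent.prototype} */ ({
    render: function() {
        return React.createElement('div');
    },
    someComponentMethod: function() {
        return 42;
    }
}));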

After a few more weeks, when more engineers started to write React code, these limitations started to chafe a bit. There was the problem of having to teach others how to handle sometimes cryptic error messages (@lends is not a frequently-encountered JSDoc tag), as well as genuine bugs that were missed because the compiler did not have a good enough understanding of the code patterns to flag them. Additionally, the externs file didn't quite match the latest terminology (e.g. React.render's signature had it both taking and returning a ReactComponent). Finally, the use of an externs file meant that none of the React API calls were getting renamed, which was adding some bloat to our JavaScript.

After thinking about these limitations for a while, I began to explore the possibility of creating a custom Closure Compiler pass that would teach it about components, mixins, and other React concepts. It already had a custom pass that remapped goog.defineClass calls to class definitions, so teaching it about React.createClass didn't seem like too much of a stretch.

Fast forward a few weeks (and a baby) later, and react-closure-compiler is a GitHub project that implements this custom pass. It takes constructs of the form:

var Comp = React.createClass({
    render: function() {...},
    someComponentMethod: function() {...}
});

And transforms them into (before any of the normal compiler checks run or type information is extracted):

/**
 * @interface
 * @extends {ReactComponent}
 */
function CompInterface() {}
CompInterface.prototype = {
    render: function() {},
    someComponentMethod: function() {}
};
/** @typedef {CompInterface} */
var Comp = React.createClass({
    /** @this {Comp} */
    render: function() {...},
    /** @this {Comp} */
    someComponentMethod: function() {...}
});
/** @typedef {ReactElement.<Comp>} */
var CompElement;

Things of note in the transformed code:

  • The CompInterface type is necessary in order to teach the compiler about all the methods that are present on the component. Having it as an @interface means that no extra code ends up being generated (and the existing code is left untouched). The methods in the interface are just stubs — they have the same parameters (and JSDoc is copied over, if any), but the body is empty.
  • The @typedef is added to the component variable so that user-authored code can treat that as the type (the interface is an implementation detail).
  • The @this annotations that are automatically added to all component methods mean that the compiler understands that those functions do not run in the global scope.
  • The CompElement @typedef is designed to make adding types to elements for that component less verbose.

A bit more formally, these are the types that the compiler knows about given the Comp definition:

  • ReactClass.<Comp> for the class definition
  • ReactElement.<Comp> for an element created from that definition (via JSX or React.createElement())
  • Comp for rendered instances of this component (this is a subclass of ReactComponent)

This means that, for example, you can use {Comp} as a @return, @param or @type annotation for functions that operate on rendered instances of Comp. Additionally, React.render invocations on JSX tags or explicit React.createElement calls are automatically annotated with the correct type.
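
For instance (pokeComp is a hypothetical helper, purely to illustrate the annotations):

/**
 * @param {Comp} comp A rendered instance of Comp.
 */
function pokeComp(comp) {
    // The compiler now knows that someComponentMethod exists on Comp.
    comp.someComponentMethod();
}

/** @type {ReactElement.<Comp>} */
var compElement = React.createElement(Comp);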

To teach the compiler about the React API, I ended up having a types.js file with the full API definition (teaching the compiler how to parse the module boilerplate seemed too complex, and in any case the React code does not have type annotations for everything). For the actual type hierarchy, in addition to looking at the terminology in the React source itself, I also drew on the TypeScript and Flow type definitions for React. Note that this is not an externs file, it's injected into the React source itself (since it's inert, it does not result in any output changes). This means that all React API calls can be renamed (with the exception of React.createElement, which cannot be renamed due to the collision with the createElement DOM API that's in another externs file).
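
To give a flavor, the injected declarations look something like this (a simplified sketch, not the actual contents of types.js):

// Simplified sketch of one injected declaration; the real types.js covers
// the full React API.
/**
 * @param {ReactClass.<T>} type
 * @param {Object=} props
 * @param {...*} children
 * @return {ReactElement.<T>}
 * @template T
 */
React.createElement = function(type, props, children) {};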

Having done the basics, I then turned to mixins (one of the reasons why we're not using ES6 class syntax for components). I ended up requiring that mixins be wrapped in a React.createMixin(...) call, which was introduced with React 0.13 (though it's not documented). This makes it possible for the compiler pass to cheaply understand mixins: [SomeMixin] declarations without having to do more complex source analysis.
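
Concretely, a wrapped mixin and its use end up looking like this (the method names are made up):

// React.createMixin is an identity function at runtime; the wrapper just
// gives the compiler pass an unambiguous marker to look for.
var SomeMixin = React.createMixin({
    mixinMethod: function() {
        return 'from mixin';
    }
});

var Comp = React.createClass({
    mixins: [SomeMixin],
    render: function() {
        // The pass knows that mixinMethod comes from SomeMixin.
        return React.createElement('div', null, this.mixinMethod());
    }
});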

The README covers more of the uses and gotchas, but the summary is that Quip itself is using this compiler pass to pre-process all our client-side code. The process of converting our 400+ components (from the externs type annotations) took a couple of days (which included tweaks to the pass itself, as well as fixing a few bugs that the extra checks uncovered).

The nice thing about having custom code in the compiler is that it provides an easy point to inject more React-specific behavior. For example, we're heavy users of propTypes, but they're only useful when using the non-minified version of React — propTypes are not checked in minified production builds. The compiler pass can thus strip them if compiling with the minified version.
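
That is, given a component like this sketch, the entire propTypes block can be dropped from the production output:

var Comp = React.createClass({
    // Only checked by the development build of React; dead weight when
    // shipping the minified build, so the pass can strip it.
    propTypes: {
        label: React.PropTypes.string.isRequired
    },
    render: function() {
        return React.createElement('span', null, this.props.label);
    }
});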

Flow was the obvious alternative to consider if we wanted static type checking that was React-aware. I also more recently came across Typed React. However, extending the Closure Compiler allows us to benefit from the hundreds of other (non-React) source files that have Closure Compiler type annotations. Additionally, the compiler is not just a checker; it is also a minifier, and some minification passes rely on type information, so it is beneficial to have that information accessible to the compiler. One discovery that I made while working on this project is that the compiler has a pass that converts type expressions to JSDoc, and generally seems to have some understanding of type expressions that (at least superficially) resemble Flow's and TypeScript's. It would be nice to have one type-annotated codebase that all three toolchains could be run on, but I think that's a significant undertaking at this point.

If you use React and the Closure Compiler together, please give the pass a try (it integrates with Plovr easily, and can otherwise be registered programmatically) and let me know how it works out for you.

  1. I continue to find doing large-scale refactorings less scary in our client-side code than ones in our server-side Python code, despite better test coverage in the latter environment.
  2. I ended up contributing to it a bit, as we started to use less common React APIs.
  3. Spelunking through React's codebase that I did much later turned up keyOf and many other indicators that React was definitely developed with unquoted property renaming minification in mind.
  4. Indeed the original creator of the React externs file has indicated that he's no longer using the combination of React/Closure Compiler.
  5. Which used JSX, but that was not of interest to the Closure Compiler: it was transformed to plain JavaScript before the compiler saw it.

WKWebView Communication Latency #

One of the many exciting things about the modern WebKit API is that it has an officially sanctioned communication mechanism for doing JavaScript-to-native communication. Hacks should no longer be necessary to get data back out of the WKWebView. It is a simple matter of registering a WKScriptMessageHandler on the native side and then invoking it with window.webkit.messageHandlers.<handler name>.postMessage on the JS side.
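
On the JS side, a call looks like this (the handler name “bridge” is hypothetical):

// Assumes the native side registered a handler named "bridge" via
// -[WKUserContentController addScriptMessageHandler:name:].
window.webkit.messageHandlers.bridge.postMessage({
    type: 'ping',
    sentAt: Date.now()
});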

After a heads-up from Jordan that messaging latency with the new web view was not that great, I extended the test bed I had developed to measure UIWebView communication mechanisms to also measure this setup. I got results that mirrored Jordan's experiences — on an iPad mini 2 (A7), UIWebView's best officially supported communication mechanism (changing location.hash¹) took 0.44ms, while the same round trip on a WKWebView took 3.63ms. I was expecting somewhat higher latency, since there is a cross-process IPC involved now, but not this much higher.

Curious as to what was going on, I pointed a profiler at the test harness, and got the following results.

Running Time         Symbol Name
4147.0ms   94.1%     Main Thread
                         10 stack frames elided
2165.0ms   49.1%             IPC::Connection::dispatchOneMessage()
2164.0ms   49.1%              IPC::Connection::dispatchMessage(…)
2161.0ms   49.0%               WebKit::WebProcessProxy::didReceiveMessage(…)
2160.0ms   49.0%                IPC::MessageReceiverMap::dispatchMessage(…)
2137.0ms   48.4%                 void IPC::handleMessage<…>
2137.0ms   48.4%                  WebKit::WebUserContentControllerProxy::didPostMessage(…)
2132.0ms   48.3%                   ScriptMessageHandlerDelegate::didPostMessage(…)
1274.0ms   28.9%                    -[JSContext initWithVirtualMachine:]
 769.0ms   17.4%                    -[JSContext init]
 30.0ms     0.6%                 -[BenchmarkViewController userContentController:didReceiveScriptMessage:]
 16.0ms     0.3%                 -[JSValue toObject]

It looks like nearly half the time is spent in JSContext² initializers. As previously mentioned, the WKWebView API is open source, so it's actually possible to read the source and see what's going on. If we look at the ScriptMessageHandlerDelegate::didPostMessage source we can see that indeed for every postMessage() call on the JS side a new JSContext is initialized to own the deserialized value.

Even when not doing any postMessage calls on the JS side, and just measuring WKWebView's evaluateJavaScript:completionHandler: latency, there was still lots of time spent in JSContext initialization. This turned out to be because the completion handler also results in a JSContext being created in order to own the deserialized value. Thankfully this only happens if a completion handler is specified and there is a result; in other cases the function exits early.

After thinking about this some more, it occurred to me that the old location.hash mechanism could still work with a WKWebView. By adding a WKNavigationDelegate to the web view, it's possible to observe location changes on the native side. I implemented this approach and was pleasantly surprised to see that it took 0.95ms. This was almost 4x faster than the officially sanctioned mechanism (albeit still about twice as slow as the equivalent on a UIWebView, but that is presumably explained by the IPC overhead).
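
The JS half of that mechanism is tiny (the encoding scheme here is made up; the native WKNavigationDelegate decodes the fragment on its end):

// Sketch of the location.hash channel: serialize the payload into the
// fragment; the native side observes the resulting navigation.
function postMessageViaHash(payload) {
    location.hash = '#msg=' + encodeURIComponent(JSON.stringify(payload));
}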

I then wondered if using location.hash to communicate in both directions (changing the location on the native side, listening for hashchange events on the JS side) would be even better (to bypass more of the JS execution machinery), but that approach ended up being slower (for both UIWebView and WKWebView) since it involved more delegate invocations.
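
For completeness, the JS half of that reverse direction looked roughly like this (handleNativeMessage is a hypothetical handler):

// Native-to-JS via the same channel: the native side sets the web view's
// location fragment, and the page listens for the resulting hashchange.
window.addEventListener('hashchange', function() {
    var payload = JSON.parse(
        decodeURIComponent(location.hash.replace(/^#msg=/, '')));
    handleNativeMessage(payload);
}, false);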

Putting all this together, here are the results (in milliseconds, averaged over 100 round trips) using these mechanisms on various devices (all running iOS 8.1) using my test bed.

Method/Device             iPhone 5 (A6)   iPad mini 2 (A7)   iPhone 6 (A8)   Simulator (Core i7)
UIWebView
  location.hash                  0.63            0.44               0.37            0.15
WKWebView
  WKScriptMessageHandler         6.97            3.63               3.03            2.75
  location.hash                  1.30            0.95               0.77            0.32

I reported this performance problem to Apple last summer (while iOS 8 was still in beta), but I haven't seen activity in this area. Ideally there would be a way to skip over JS value deserialization altogether, for cases where the client doesn't actually benefit from the built-in parsing that is done. Specifically, Quip uses Protocol Buffer-encoded messages to have richer (and typed) data passed back and forth between the native and JS side, and so, as far as the JS runtime is concerned, they're all strings anyway.

Separately, it's fair to ask why I'm making such a big fuss over the overhead of one millisecond (or three) per call. Quip continues to be a “hybrid” app, with the editing experience implemented in a web view (a UIWebView for now). With the launch of spreadsheets we now have even more native ↔ web communication, and in some cases we notify the web view of touch events continuously while UIScrollViews are scrolled (for custom behavior³). When trying to hit 60 frames per second, spending 3 milliseconds of your 16 millisecond budget on pure overhead feels wasteful.

Update on 06/29/2015: Mark Lam recently fixed ScriptMessageHandlerDelegate::didPostMessage to reuse JSContext instances across calls, which fixes one of the two performance problems that this post covers. I've filed another bug to track the remaining issue with evaluateJavaScript:completionHandler:.

  1. Astute observers will remark that my blog post used to recommend either changing location.hash or using the click() method on an anchor node. I have since discovered that click() results in a forced layout recalculation (via hit testing) as part of dispatching the event, and have updated the post to recommend location.hash.
  2. I find it a bit odd that JavaScriptCore is not part of Apple's web-accessible documentation set. Even their own release notes announcing the framework say “for information about the classes of this framework, see the header files.”
  3. This involved yet more spelunking into UIWebView internals, something I will hopefully be posting about soon.

RetroGit #

tl;dr: RetroGit is a simple tool that sends you a daily (or weekly) digest of your GitHub commits from years past. Use it as a nostalgia trip or to remind you of TODOs that you never quite got around to cleaning up. Think of it as Timehop for your codebase.

It's now been a bit more than two years since I joined Quip. I recall a sense of liberation during the first few months, as we were working in a very small, very new codebase. Compared with the much older and larger projects at Google, experimentation was expected, technical debt was non-existent, and in any case it seemed quite likely that almost everything would be rewritten before any real users saw it¹. It was also possible to skim every commit and generally have a sense that you could keep the whole project in your head.

As time passed, more and more code was written, prototypes were replaced with “productionized” systems and whole new areas that I was less familiar with (e.g. Android) were added. After about a year, I started to have the experience, familiar to any developer working on a large codebase for a while, of running blame on a file and being surprised by seeing my own name next to foreign-looking lines of code.

Generally, it seemed like the codebase was still manageable when working in a single area. Problems with keeping it all in my head appeared when doing context switches: working on tables for a month, switching to annotations for a couple of months, and then trying to get back into tables. By that point tables had been “swapped out” and it all felt a bit alien. Extrapolating from that, it seemed like coming back to a module a year later would effectively mean starting from scratch.

I wondered if I could build a tool to help me keep more of the codebase “paged in”. I've been a fan of Timehop for a while, back to the days when they were known as 4SquareAnd7YearsAgo. Besides the nostalgia factor, it did seem like periodic reminders of places I've gone to helped to keep those memories fresher. Since Quip uses GitHub for our codebase (and I had also migrated all my projects there a couple of years ago), it seemed like it would be possible to build a Timehop-like service for my past commits via their API.

I had also wanted to try building something with Go², and this seemed like a good fit. Between go-github and goauth2, the “boring” bits would be taken care of. App Engine's Go runtime also made it easy to deploy my code, and it didn't seem like this would be a very resource-intensive app (famous last words).

I started experimenting over Fourth of July weekend, and by working on it for a few hours a week I had it emailing me my daily digests by the end of the month. At this point I ran into what Akshay described as the “eh, it works well enough” trough, where it was harder to find the motivation to clean up the site so that others could use it too. But eventually it did reach a “1.0” state, including a name change, ending up with RetroGit.

The code ended up being quite straightforward, though I'm sure I have quite a ways to go before writing idiomatic Go. The site employs a design similar to Tweet Digest, where it doesn't store any data beyond an OAuth token, and instead just makes the necessary API calls on the fly to get the commits from years past. The GitHub API behaved as advertised — the only tricky bit was how to handle my aforementioned migrated repositories. Their creation dates were 2011-2012, but they had commits going back much further. I didn't want to “probe” the interval going back indefinitely, just in case there were commits from that year — in theory someone could import some very old repositories into GitHub³. I ended up using the statistics endpoint to determine the date of a user's first commit in a repository, and persisting that as a “vintage” timestamp.
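
In JavaScript terms (RetroGit itself is written in Go), the lookup amounts to something like this sketch:

// Sketch of deriving a repository's "vintage" timestamp for a user from
// GitHub's contributor statistics endpoint. Note that this endpoint can
// respond with a 202 while statistics are being computed, in which case
// the request needs to be retried.
function fetchVintage(owner, repo, login) {
    var url = 'https://api.github.com/repos/' + owner + '/' + repo +
        '/stats/contributors';
    return fetch(url)
        .then(function(response) { return response.json(); })
        .then(function(contributors) {
            var mine = contributors.filter(function(c) {
                return c.author && c.author.login === login;
            })[0];
            if (!mine) return null;
            // Each week entry has w (start-of-week Unix timestamp) and c
            // (commit count); the first active week bounds the user's
            // earliest commit in the repository.
            var firstActive = mine.weeks.filter(function(week) {
                return week.c > 0;
            })[0];
            return firstActive ? new Date(firstActive.w * 1000) : null;
        });
}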

I'm not entirely happy with the visual design — I like the general “retro” theme, but I think executing it well is a bit beyond my Photoshop abilities. The punch card graphic is based on this “Fortran statement” card from this collection. WhatTheFont! identified the header font as ITC Blair Medium. Hopefully the styling within the emails is restrained enough that it won't affect readability. Relatedly, this was my first project where I had to generate HTML email, and I escaped with most of my sanity intact, though some things were still annoying. I found the CSS compatibility tables from MailChimp and Campaign Monitor helpful, though I'm happy that I don't have to care too much about the more “mass market” clients (sorry, Outlook users).

As to whether or not RetroGit is achieving its intended goal of helping me keep more of the Quip codebase in my head, it's hard to say for sure. One definite effect is that I pay more attention to commit messages, since I know I'll be seeing them a year from now. They're not quite link bait, but I do think that going beyond things like “Fixes #787” to also include a one-line summary in the message is helpful. In theory the issue has more details as to what was broken, but they can end up being re-opened, fixes re-attempted, etc. so it's nice to capture the context of a commit better. I've also been reminded of some old TODOs and done some commenting cleanups when it became apparent a year later that things could have been explained better.

If you'd like to try it yourself, all the site needs is for you to sign in with your GitHub account. There is an FAQ for the security conscious, and for the paranoid running your own instance on App Engine should be quite easy — the README covers the minimal setup necessary.

  1. It took me a while to stop having hangups about not choosing the most optimal/scalable solution for all problems. I didn't skew towards “over-engineered” solutions at Google, but somehow enough of the “will it scale” sentiment did seep in.
  2. My last attempt was pre-Go 1.0, and was too small to really “stick”.
  3. Now that Go itself has migrated to GitHub, the Gophers could use this to get reminders of where they started.

HTML Munging My Way To a React.js Conf Ticket #

Like many others, I was excited to see that Facebook is putting on a conference for the React community. Tickets were being released in three waves, and so for the last three Fridays I have been trying to get one. The first Friday I did not even manage to see an order form. The next week I got as far as choosing a quantity, before being told that tickets were sold out when pushing the “Submit” button.

Today was going to be my last chance, so I enlisted some coworkers to the cause — if any of them managed to get an order form, the plan was that I would come by their desk and fill it out with my details. At 12 o'clock I struck out, but Bret and Casey both managed to run the gauntlet and make it to the order screen. However, once I tried to submit I was greeted with:

React.js Conf Order Failure

Based on Twitter, I was not the only one. Looking at the Dev Tools console showed that a bunch of URLs were failing to load from CloudFront, with pathnames like custom-tickets.js and custom-tickets.css. I assumed that some supporting resources were missing, hence the form was not entirely populated¹. After checking that those URLs didn't load while tethered to my phone either (in case our office network was banned for DDoS-like behavior), I decided to spelunk through the code and see if I could inject the missing form fields by hand. I found some promising-looking JavaScript of the form:

submitPaymentForm({
    number: $('.card-number').val(),
    cvc: $('.card-cvc').val(),
    exp_month: $('.card-expiry-month').val(),
    exp_year: $('.card-expiry-year').val(),
    name: $('.cardholder-name').val(),
    address_zip: $('.card-zipcode').val()
});

I therefore injected some DOM nodes with the appropriate class names and values and tried resubmitting. Unfortunately, I got the same error message. When I looked at the submitPaymentForm implementation, I could see that the input parameter was not actually used:

function submitPaymentForm(fields) {
    var $form = $("#billing-info-form");
    warnLeave = false;
    $form.get(0).submit();
}

I looked at the form fields that had loaded, and they had complex names like order[TicketOrder][email]. It seemed like it would be difficult to guess the names of the missing ones (I checked the network request and they were not being submitted at all). I then had the idea of finding another Splash event order form, and seeing if I could get the valid form fields from there. I eventually ended up on the ticket page for a music video release party that had a working credit card form. Excited, I copied the form fields into the React order page that I still had up, filled them out, and pressed “Submit”. There was a small bump where it thought that the expiration date field was required and not provided, but I bypassed that client-side check and got a promising-looking spinner that indicated that the order was being processed.

I was all set to be done, when the confirmation page finally loaded, and instead of being for the React conference, it was for the video release party. Evidently starting the order in a second tab had clobbered some kind of cookie or other client-side state from the React tab. I had thus burned the order page on Bret's computer. However, Casey had been dutifully keeping the session on his computer alive, so I switched to that one. There I went through the same process, but opened the “donor” order page in a Chrome incognito window. With bated breath I pressed the order button, and was relieved to see a confirmation page for React, with this email arriving shortly after:

React.js Conf Order Success?

This was possibly not quite as epic as my last attempt at working around broken service providers, but it still made for an exciting excuse to be late for lunch. And if anyone wants to go to a music video release party tonight in Brooklyn, I have a ticket².

  1. I now see that other Splash order forms have the same network error, so it may have been a red herring.
  2. Amusingly enough, I appeared to have bought the last pre-sale ticket there — they are sold out too.

Two Hard Things #

Inspired by Phil Karlton's (possibly apocryphal) Two Hard Things, here's what I struggle with on a regular basis:

  1. Getting information X from system A to system B without violating the N layers of abstraction in between.
  2. Naming X such that the name is concise, unique (i.e. greppable) and has the right connotations to someone new to the codebase (or myself, a year later).

Gmail's HTML Tag Whitelist #

I couldn't find a comprehensive list of the HTML tags that Gmail's sanitizer allows through, so I wrote one up.

The Modern WebKit API is Open Source #

When doing web development (or really any development on a complex enough platform), sometimes the best course of action for understanding puzzling behavior is to read the source. It's therefore been fortunate that Chrome/Blink, WebKit (though not Safari) and Firefox/Gecko are all open source (and often the source is easily searchable too).

One of the exceptions has been mobile WebKit when accessed via a UIWebView on iOS. Though it's based on the same components (WebCore, JavaScriptCore, WebKit API layer) as its desktop counterpart, there is enough mobile-specific behavior (e.g. interaction with auto-complete and the on-screen keyboard) that source access would come in handy. Apple would periodically do code dumps on opensource.apple.com, but those only included the WebCore and JavaScriptCore components¹ and in any case there hasn't been one since iOS 6.1.

At WWDC, as part of iOS 8, Apple announced a modern WebKit API that would be unified between the Mac and iOS. Much of the (positive) reaction has been about the new API giving third-party apps access to faster, JITed, JavaScript execution. However, just as important to me is the fact that implementation of the new API is open source.

Besides browsing around the source tree, it's also possible to track its development more closely, via an RSS feed of commits. However, there are no guarantees that just because something is available in the trunk repository that it will also be available on the (presumed) branch that iOS 8 is being worked on. For example, [WKWebView evaluateJavaScript:completionHandler:] was added on June 10, but it didn't show up in iOS 8 until beta 3, released on July 7 (beta 2 was released on June 17). More recent changes, such as the ability to control selection granularity (added on June 26) have yet to show up. There don't seem to be any (header) changes² that live purely in the iOS 8 SDK, so I'm inclined to believe that (at least at this stage) there's not much on-branch development, which is encouraging.

Many thanks to Anders, Benjamin, Mitz and all the other Apple engineers for doing all this in the open.

Update on July 21, 2014: The selection granularity API has shown up in beta 4, which was released today.

  1. IANAL, but my understanding is that WebCore and JavaScriptCore are LGPL-licensed (due to their KHTML heritage) and so modifications in shipping software have to be distributed as source, while WebKit is BSD-licensed, and therefore doesn't have that requirement.
  2. Modulo some munging done as part of the release process.