Screenshot-proof images via temporal dithering #

Snapchat's (and now Facebook Poke's) main claim to fame is that it lets you send "self-destructing" image messages. Setting aside the debate about the uses of this beyond sexting, the key vulnerability in both apps is the built-in ability to take screenshots. Both take a reactive approach, where you're notified if the recipient took a screenshot, but can't really do anything about it.

I was thinking about ways of mitigating this issue, and figured that turning the image into an animation whose individual frames are not (or at least less) recognizable might be the right path. This is a variant of temporal dithering, except that we intentionally pretend each frame has a limited amount of precision; only when the frames are averaged together is the original image re-created.

I've created a proof of concept (source) of this. It loads the image into a <canvas> and generates a "positive" and "negative" frame out of it. The positive frame has a random offset added to each pixel's RGB components, while the negative one has it subtracted. When the two are displayed in quick sequence (requestAnimationFrame is used to do this every time the screen refreshes), the offsets should cancel out and the original image should re-appear.
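
Here's a minimal sketch of the frame-generation and display steps, assuming a loaded image and a same-sized output canvas context. The helper names and the maximum offset of 96 are illustrative, not taken from the actual proof of concept:

function createFrames(img) {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  var source = ctx.getImageData(0, 0, img.width, img.height);
  var positive = ctx.createImageData(source);
  var negative = ctx.createImageData(source);
  for (var i = 0; i < source.data.length; i += 4) {
    for (var channel = 0; channel < 3; channel++) {
      var value = source.data[i + channel];
      // Clamp the offset so that neither frame's channel value overflows;
      // that way the two frames average back to the source value exactly.
      var offset = Math.round(Math.random() * Math.min(96, value, 255 - value));
      positive.data[i + channel] = value + offset;
      negative.data[i + channel] = value - offset;
    }
    positive.data[i + 3] = negative.data[i + 3] = 255;  // Opaque alpha.
  }
  return [positive, negative];
}

// Alternate between the two frames on every screen refresh.
function animate(ctx, frames) {
  var current = 0;
  (function draw() {
    ctx.putImageData(frames[current], 0, 0);
    current = (current + 1) % frames.length;
    requestAnimationFrame(draw);
  })();
}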

[Demo: the source image and the temporally dithered version1, along with the individual positive and negative frames.]

The resulting flicker is unfortunate, but perhaps that only enhances the furtiveness of an image that will disappear in a few seconds. It also seemed fitting to include Lenna as a source image, given its origin.

Obviously this technique is meant to deter only casual attempts at screen capture. Beyond the analog hole, screen recording software (albeit not an issue on non-jailbroken iOS devices) can easily reconstruct the original image.

Potential areas of exploration are doing the offsets in a different color space (RGB is not linear) and using more than two frames. One option for generating more frames is to keep making random positive and negative pairs. That way repeated screen captures are less likely to yield a pair that can be combined to reconstruct the original image. Another option for generating more frames is to create three or more images that need to be averaged together to yield the original. However, that will result in more image degradation, since the frames are less likely to be perceived as one image; persistence of vision lasts for 1/25th of a second, which is between 2 and 3 frames at 60Hz.

Update: See also the discussion on Hacker News.

  1. The embedded example in this blog post is rendered as an animated GIF. Due to clamping of GIF frame rates, lack of precision (the delay is specified as an integer counting hundredths of a second) and guessing at the screen's refresh rate (it assumes 60Hz), it will exhibit even more flicker than the programmatic version.

I like Cilantro with my Avocado #

Ann and I have been using Avocado for the past few months to stay in touch while we're apart. It's been working well for us, but it doesn't have a great workflow for sharing links (beyond copy-and-paste). Since it now has an API, I thought I could perhaps fix this with a Chrome extension.

Cilantro is that extension (source). It's a straightforward browser action that lets you share the current tab's URL to Avocado. It relies on you being signed into Avocado to make requests (and scrapes a bit of the homepage's JavaScript to get the current user's signature -- Taylor and/or Chris, please don't rename your variables).

About the only clever thing that the extension does is that if you're currently in a Google Reader tab, it will pick up the currently selected item's link to share. It relies on Reader exposing a getPermalink() JavaScript hook1. Chrome extension code normally runs in an isolated world when pointed at a page, so it can't see that function. However, by "navigating" the tab to a javascript: URL, it's able to invoke it, since that code runs in the "main" world. To get at the result, it adds a message listener and then has the javascript: snippet do a postMessage (since the main world can invoke the isolated world's event handlers). This is described in more detail in the extensions documentation.
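
Roughly, the two halves of that dance look like this (the message shape and the helper are my own, for illustration; the extension's source has the real details):

// In the content script (isolated world): wait for the main world to post
// the permalink back.
window.addEventListener('message', function(event) {
  if (event.data && event.data.type == 'reader-permalink') {
    shareOnAvocado(event.data.href);  // Hypothetical helper.
  }
}, false);

// In the extension: "navigate" the tab to a javascript: URL. The snippet
// runs in the main world, where Reader's getPermalink() is visible; the
// trailing void 0 keeps the page from actually navigating.
chrome.tabs.update(tab.id, {
  url: "javascript:window.postMessage(" +
      "{type: 'reader-permalink', href: getPermalink()}, '*'); void 0;"
});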

For the sake of full disclosure, lest you think my graphics skills have suddenly improved, Avocado co-founders Jenna and Chris are friends of mine and they spruced up Cilantro's UI to look less programmer-y minimalist.

  1. This was originally added so that Google Notebook (R.I.P.) could extract the current item when clipping from Reader. Since then, Instapaper's bookmarklet has also started to use it.

Clipboard Sync Chrome Extension #

At work I frequently switch between quite a few computers: two Macs, one Linux box, one Windows machine, and the occasional Chromebook. Between a KVM (for the desktops) and a swiveling chair (for the laptops), this isn't so bad. The one part that felt awkward was the lack of unified clipboard support, which would be handy when IM-ing links or checking out URLs on multiple computers.

I initially used Syncopy, but it didn't fully solve my problem (it's Mac-only). Additionally, I didn't really like the idea of my clipboard contents ending up on the developer's server (or having to run a local, could-be-doing-anything binary). Eventually, Syncopy stopped being a viable solution altogether, since the developer abandoned it (see the reviews for the iPhone client).

It occurred to me that I could build a replacement as a Chrome extension. Extensions can access the clipboard, and the storage API provides a synced key-value store. Some quick experimentation showed that changes were synced within a few seconds, which was good enough for my needs. There was some rate limiting, but it didn't seem like it would affect any day-to-day use.
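
The core mechanism is only a few lines (a sketch with my own key and payload names, not necessarily the extension's exact schema):

// Pushing: store the clipboard text under a synced key. The timestamp
// ensures that pushing the same text twice still registers as a change.
function pushClipboard(text) {
  chrome.storage.sync.set({clipboard: {text: text, date: Date.now()}});
}

// Receiving: every other Chrome instance signed into the same account gets
// an onChanged event within a few seconds.
chrome.storage.onChanged.addListener(function(changes, areaName) {
  if (areaName == 'sync' && changes.clipboard) {
    notifyClipboardPushed(changes.clipboard.newValue.text);  // Hypothetical helper.
  }
});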

Clipboard Sync (source) is my implementation of that idea. Its main UI surface is a browser action icon. When you want your clipboard synced, click the icon. On all the other Chrome instances that are running and are synced with the same account, you'll get a notification saying that the clipboard data has been pushed. Click on the notification, and the local clipboard will be updated (and the notifications on the other instances will be dismissed).

Clipboard syncing was actually filed as a feature request for Chrome a couple of years ago. It was (rightfully) WontFix-ed since it's a pretty niche feature. It's quite reassuring that the extension system is now flexible enough to allow it to be implemented pretty seamlessly (especially once the commands API hits the stable channel).

How I Consume Twitter #

In light of the Twitter API 1.1 announcement and the surrounding brouhaha, I thought I would take a moment to document how I read Twitter, since it might all have to change1.

It shouldn't be surprising at all that I consume Twitter in Google Reader. I do this with the aid of two tools that I've written, Bird Feeder2 and Tweet Digest3.

Bird Feeder lets you sign in with Twitter, and from that generates a "private" Atom feed out of your "with friends" timeline. It tries to be reasonably clever about inlining thumbnails and unpacking URLs, but is otherwise a very basic client. The only distinctive thing about it is that it uses a variant of my PubSubHubbub bridge prototype to make the feed update in near-realtime4. What makes it my ideal client is that it leverages all of Reader's capabilities: read state, tagging, search (and back in the day, sharing). Most importantly, it means I don't need to add yet another site/app to my daily routine.

In terms of the display requirements (formerly "guidelines"), Bird Feeder runs afoul of a bunch of the cosmetic rules (e.g. names aren't displayed, just usernames), but those could easily be fixed. The rule that's more interesting is 5a: "Tweets that are grouped together into a timeline should not be rendered with non-Twitter content. e.g. comments, updates from other networks." Bird Feeder itself doesn't include non-Twitter content; it outputs a straight representation of the timeline, as served by Twitter. However, when displayed in Reader, the items representing tweets can end up intermixed with all sorts of other content.

Bird Feeder lets me (barely) keep up with the 170 accounts that I follow, which generate an average of 82 tweets per day (it's my highest volume Reader subscription). However, there are other accounts that I'm interested in which are too high-volume to follow directly. For those I use Tweet Digest, which batches up their updates into a once-a-day post. I group accounts into digests by theme using Twitter lists (so that I can add/remove accounts without having to resubscribe to the feeds). It adds up to 54 accounts posting an average of 112 tweets per day.

This approach to Twitter consumption is very bespoke, and caters to my completionist tendencies. I don't expect Twitter's official clients to ever go in this direction, so I'm glad that the API is flexible enough to allow it to work, and hopefully that will continue to be the case.

  1. Though I'm hoping that I'm such an edge case that Twitter's Brand Enforcement team won't come after me.
  2. Bird Feeder is whitelisted for Ann's and my use only. This is partly because I don't want it to attract Twitter's attention, and partly because I don't need yet another hobby project ballooning into an App Engine budget hog. However, you're more than welcome to fork the code and run your own instance.
  3. It's amazing that Tweet Digest is almost 5 years old. Time flies.
  4. This was a good excuse to learn Go. Unfortunately, though I liked what I saw, I haven't touched that code (or any other Go code) in 6 months, so nothing has stuck.

Protecting users from malware via (strict) default settings #

One of the features in Mountain Lion, Apple's newest OS X release, that has gotten quite a bit of attention is Gatekeeper. It's a security measure that, in its default configuration, allows only apps downloaded from the Mac App Store or signed with an Apple-provided (per-developer) certificate to run. This is a good security move that makes a bunch of people happy. The assumption is that, though Gatekeeper can be turned off, it's on by default, so it will be a great deterrent for malware authors. For example, here's an excerpt from John Siracusa's Mountain Lion review:

All three of these procedures—changing a security setting in System Preferences, right-clicking to open an application, and running a command-line tool—are extremely unlikely to ever be performed by most Mac users. This is why the choice of the default Gatekeeper setting is so important.

However, a cautionary tale comes from the web security world. The same-origin policy is an inherent1 property of the web. This means that, barring bugs, cross-site scripting (XSS) shouldn't be possible unless the host site allows it. But at the same time that scripting ability was added to browsers, the javascript: URL scheme was introduced, which allowed snippets of JavaScript to be run in the context of the current page. These could be used anywhere URLs were accepted (leading to bookmarklets), including the browser's location bar.

In theory, this feature meant that users could XSS themselves by entering and running a javascript: URL provided by an attacker. But surely no one would just enter arbitrary code given to them by a disreputable-looking site? As it turns out, enough people do. There is a class of Facebook worms that spread via javascript: URLs. They entice the user with a desired Facebook feature (e.g. get a "dislike" button) and say "all you have to do to get it is copy and paste this code into your address bar and press enter."2 Once the user follows the instructions, the attacker is able to impersonate the user on Facebook.
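
The bait varies, but the pasted "code" always boils down to something of this shape:

javascript:(function() { /* attacker-controlled code, now running with the user's Facebook session */ })();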

If the target population is big enough, it doesn't matter what the default setting is, or how convoluted the steps to bypass it are. 0.1% of Facebook's ~1 billion users is still 1 million users. In this particular case, browser vendors are able to mitigate the attack: Chrome will strip a javascript: prefix from strings pasted into the omnibox, and I believe other modern browsers have similar protections. From the attacker's perspective, working around this means making the "instructions" even more complicated, hopefully leading to a large drop-off in the infection success rate, and perhaps to the attack being abandoned altogether.

This isn't to say that Gatekeeper as deployed today will not work. It's just that it'll take some time before the ease-of-use/configuration and security trade-offs can be evaluated. After all, javascript: URLs were introduced in 1995, and weren't exploited until 2011.

  1. So inherent that it was taken for granted and not standardized until the end of 2011.
  2. I'm guessing that it's not helpful that legitimate sites occasionally instruct users to do the same thing.

My Chrome Apps Journey #

A couple of years ago, I switched from the Google Reader team to the Chrome team1. I focused on the WebKit side, working a bunch on layout tests and associated tooling. I became a reviewer, messed around with events and generally scratched my "move down the stack" itch. However, I realized I missed a few things: Spec compliance and tooling are very important, but they don't necessarily trigger that awesome "we've shipped!" feeling (cf. HTML5 being a living document). I also missed having something concrete to put in front of users (or developers).

Those thoughts were beginning to coalesce when, fortuitously, Aaron approached me in the spring of 2011 about joining Chrome's "Apps and Extensions" team. I'd been a fan of Aaron's work since my Greasemonkey days, and the Chrome extension system had impressed me with its solid foundations. After spending some time getting to know life outside of src/third_party/WebKit, I ended up focusing more on the apps side of things, implementing things like inline installation and tweaking app processes.

Historically, apps and extensions were handled by the same team, since Chrome apps grew out of the extension system. They share the manifest format, the packaging mechanism, the infrastructure to define their APIs, etc. The lines were further blurred since apps could use extension APIs to affect the rest of the browser.

As we were discussing in the fall of 2011 where extensions and apps were heading, it became apparent that both areas had big enough goals2 that doing them all with one team would result in a lack of focus and/or an unwieldy group with lots of overhead. Instead, there was a (soft) split: Aaron remained as tech lead of extensions, and I took on apps.

We've now had a few months to prototype, iterate, and start to ship code (in the canary channel for now). Erik and I gave an overview of what we've been up to with Chrome apps at last week's Google I/O:

[Embedded video: the Google I/O presentation]
(DailyJS has a very good summary, and the presentation is available as an app in our samples repository.)

It's still the early days for these "evolved" Chrome packaged apps. We're pretty confident about some things, like the app programming model with background pages that receive events. We know we have a bunch of work in other areas like OS integration, bugs and missing features. There are also some things we haven't quite figured out yet (known unknowns if you will), like cross-app communication and embedding (though I'm getting pretty good at waving my hands and saying "web intents").

What makes all this less daunting is that we get to build on the web platform, so we get a lot of things for "free". Chrome apps are an especially good opportunity to play with the web's bleeding edge3 and even the future4.

Going back to the impetus for the switch that I described at the start of this post, doing the I/O presentation definitely triggered that "We've launched!" feeling that I was looking for. However, it's still the early days. To extend the "launch" analogy, we've been successful at building sounding rockets, but the long term goal is the moon. We're currently at the orbital launch stage, and hopefully it'll go more like Explorer 1 and less like Vanguard.

I'm planning on blogging more about apps. In the meantime, if your curiosity is piqued, you can watch the presentation, dig around the samples repository, or peruse the docs. If you'd like to get in touch, try the chromium-apps@chromium.org mailing list or #chromium-apps on freenode.

  1. Though I have trouble letting go sometimes.
  2. The extensions team had its own talk at I/O about their evolution. Highly recommended viewing, since the same factors influenced our thinking about apps.
  3. See Eric's The Web Can Do That!? presentation.
  4. See Dimitri's and Alex's Web Components presentation. They can be used by apps (and extensions) that add the "experimental" permission.

Life Imitates Art #

Cryptonomicon (1999):

[...] guerilla mechanic teams had been surveilling Randy’s grandmother ever since and occasionally swiping her Lincoln from the church parking lot on Sunday mornings and taking it down to Patterson’s for sub rosa oil changes. The ability of the Lincoln to run flawlessly for a quarter of a century without maintenance — without even putting gasoline in the tank — had only confirmed Grandmother’s opinions about the amusing superfluity of male pursuits.

Cherry (2012):

Cherry is the carwash that comes to you. Park anywhere, check in online, and we'll wash your car right where you left it. Only $29.99 per wash, which includes exterior, interior, vacuum, tires, rims, and an air freshener of your choice.

Feed Web Intent Viewer #

Chrome 19 (just released to the stable channel) includes support for Web Intents, and the support is further improved in Chrome 20 (currently in the beta channel). One of the improvements is that downloaded RSS and Atom feeds will now dispatch a view intent. This is neat, getting things a bit closer to truly fixing one of the earliest Chrome bug reports. However, if you're occasionally engaged in some technology necrophilia, then you might prefer seeing the angle brackets instead of handing over the feed to another app.

To that end, I've made Feed Intent Viewer, a simple intent handler that shows the feed as pretty-printed XML. Once you install it, clicking on links such as this one will trigger the intent-picker sheet, and choosing the "Feed Intent Viewer" app will show you your beloved angle brackets.

It's implemented as a packaged app (source), so that all of the feed data is processed locally, instead of being sent to a server. Even better, the download system includes the downloaded data with the intent (as a Blob), so that it doesn't have to be re-fetched at all. When the intent is dispatched with just a URL, the data is fetched via XMLHttpRequest (which explains why the app has the "your data on all websites" permission). The new-ish responseType property of XHR is used, so that the response can also be read as a blob. The feed blob data is read via a FileReader into a string, so that some light pre-processing can happen (currently, just removal of stylesheets, allowing the raw XML to be displayed). Finally, the feed text is put back into a blob that's served with a text/xml MIME type. This makes WebKit's XML viewer kick in, saving me the trouble of actually having to pretty-print anything.
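
Here's a sketch of that data path, given the dispatched intent object (the helper names and the stylesheet-stripping expression are mine; see the source for the real thing):

function handleIntent(intent) {
  if (intent.data instanceof Blob) {
    // The download system handed us the data directly; no re-fetch needed.
    displayFeed(intent.data);
  } else {
    // Only a URL was provided, so fetch it ourselves, reading the response
    // as a blob via the responseType property.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', intent.data, true);
    xhr.responseType = 'blob';
    xhr.onload = function() { displayFeed(xhr.response); };
    xhr.send();
  }
}

function displayFeed(blob) {
  var reader = new FileReader();
  reader.onload = function() {
    // Strip stylesheet processing instructions so the raw XML is shown.
    var text = reader.result.replace(/<\?xml-stylesheet[\s\S]*?\?>/g, '');
    // Serving the result as text/xml makes WebKit's XML viewer kick in.
    var xmlBlob = new Blob([text], {type: 'text/xml'});
    location.href = (window.URL || window.webkitURL).createObjectURL(xmlBlob);
  };
  reader.readAsText(blob);
}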

While writing this up, I got a sense of déjà vu, which turned out to be warranted: In 2007, I created a similar hack to get Firefox 2.0 to show pretty-printed XML for feeds, instead of something friendlier.

Bookmarklet to download complete Google+ albums to Picasa #

PicasaWeb albums have an option to download the entire album to Picasa, which comes in quite handy when you'd like to have a complete archive of the pictures taken at an event. However, PicasaWeb URLs now redirect to the view of the album in Google+, which doesn't expose the equivalent command. There is a workaround for this (append noredirect=1 to the PicasaWeb URL), but it's rather involved. As an easier alternative, here's a bookmarklet:

Download to Picasa

If you invoke it on a Google+ photo album page, it'll try to launch Picasa and download that album (unless the owner has disabled that option).

The pretty-printed version of the bookmarklet script is:

(function() {
  // Matches Google+ album URLs, capturing the user ID, the album ID, and
  // any query parameters.
  var ALBUM_URL_RE =
      new RegExp('https://plus\\.google\\.com/photos/(\\d+)/albums/(\\d+)[^?]*(\\?.+)?');
  var match = ALBUM_URL_RE.exec(location.href);
  if (match) {
    // Reassemble the equivalent PicasaWeb feed URL (escaped, since it's
    // passed as a query parameter) and hand it off via Picasa's URL scheme.
    location.href = 'picasa://downloadfeed/?url=' +
        'https%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fback_compat%2Fuser%2F' +
        match[1] + '%2Falbumid%2F' + match[2] + '%3F' +
        (match[3] ? encodeURIComponent(match[3].substring(1) + '&') : '') +
        'kind%3Dphoto%26alt%3Drss%26imgdl%3D1';
  } else {
    alert('Oops, not on a Google+ album page?');
  }
})();

The picasa://downloadfeed URL scheme was discovered by watching (in the Network tab of Chrome's DevTools) what happens when "Download to Picasa" is selected on a PicasaWeb page. The attempt at preserving query parameters is so that the authkey token and the like are passed on (though it's unclear whether it actually does anything).

Being a new parent as told through Reader trends #

Paternity leave has meant lots of 10-20 minute gaps in the day that can be filled with Reader catchup:

[Google Reader Trends: 30-day chart]

Even when trying to put the baby to sleep at 1am or 5am, thanks to 1-handed keyboard shortcuts:

[Google Reader Trends: hour-of-day chart]

Playback Rate Chrome Extension #

Evan recently shared a link to Bret Victor's CUSEC presentation (which is worth watching, but not what this post is about). In the (internal) thread, Ami Fischman mentioned that, in addition to being more reliable, Vimeo's HTML5 player allows the use of the playbackRate attribute to speed up playback without affecting its pitch (at least with the codecs used by Chrome). I'm a big fan of listening to podcasts at higher speeds (especially via Downcast, which supports up to 3x, though I haven't made my way that high yet), so the idea of being able to do this for any video was very appealing.

YouTube's HTML5 player already makes use of this, but for those sites that don't, I've made a Chrome extension (source). It adds a context menu that allows you to control the playback rate of any <video> or <audio> element. Note that Chrome's audio implementation currently has a bug which may result in the 1.25 and 1.5 rates not working, but it should be fixed soon. Also, Vimeo turned out to be tricky, since it overlays other nodes on top of its <video> element, and buffers very slowly. Your best bet is to let the video buffer a bit, and then right-click just above the controller.
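
Stripped of the context-menu plumbing, what the extension does boils down to this (the 1.5 rate is just an example):

// Speed up every <video> and <audio> element on the page. Chrome keeps the
// pitch constant at faster rates (for the codecs it supports).
var elements = document.querySelectorAll('video, audio');
for (var i = 0; i < elements.length; i++) {
  elements[i].playbackRate = 1.5;
}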

Bonus tip: If you are viewing a video in QuickTime Player (the QuickTime X version) and miss the old playback rate controls from the A/V Control palette, you can instead option-click on the fast-forward button to increase the speed.

Stack Overflow Musings #

I recently spent an enjoyable Sunday morning tracking down a Chrome extension-related bug. Though not as epic as some past bugs, it was still interesting, since it involved the interaction of four distinct codebases (wix.com's, Facebook Connect's, the extension, and Chromium's).

The reason why I'm writing about it here (or rather, just pointing it out) is that it seemed like a bit of a waste to have that experience live only on Stack Overflow. It's not that I don't trust Stack Overflow (they seem to have good intentions and deeds). However, I'm no Jon Skeet; Stack Overflow isn't a big enough part of my online life for me to feel that I have an enduring presence there. The test that I apply is: "If I vaguely recall this piece of content in 10 years, will I be able to remember what site it was on? Will that site still be around/indexed?" The answer to the latter is most likely yes, but to the former, I'm not so sure (I already have a hard time searching across the mailing lists, bug tracker and Stack Overflow silos).

On the other hand, a blog post is too heavyweight for every piece of (notable) content. The lifestreaming fad of yesteryear also doesn't seem right; I don't want this aggregation to necessarily be public or a destination site. ThinkUp (and a plugin) seems like what I want, if only I could get over the PHP hurdle.

My earlier stance on Stack Overflow was based on being a pretty casual user (answering the occasional Reader question, using it as a reference when it turned up in search results). Now that it's an official support channel, I've been using it more, and the Kool-Aid has started to wear off. For every interesting question, there are many repeats about the same topics. For Chrome extension development, a recurring issue is not understanding that (nearly) all APIs are asynchronous.
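
The canonical confusion looks something like this:

// Wrong: trying to use the result before the callback has run.
var tabUrl;
chrome.tabs.getCurrent(function(tab) {
  tabUrl = tab.url;
});
console.log(tabUrl);  // Still undefined: the callback hasn't run yet.

// Right: use the result inside the callback.
chrome.tabs.getCurrent(function(tab) {
  console.log(tab.url);
});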

Is the right thing to do there to mark them as duplicates of each other, since they have the same underlying cause? I tried that recently, and the moderator did not agree (relatedly, I'm amused that Eric Lippert's epic answer about local variables is on a question that was later closed as a duplicate). Even if closing them as dupes were OK, what should the canonical answer look like? Presumably it would have to be something generic, at which point it's not all that different from the section in the official documentation that explains asynchronous functions. Is it just a matter of people not knowing the right terminology, so they will never search for [chrome extension api asynchronous]? Is the optimal end state a page/question per API function asking "Why does chrome.<API name here>() return undefined?", with a pointer to the documentation?

PuSH Bot Lives! #

PuSH Bot is a PubSubHubbub-to-XMPP service that I created a couple of years ago. It had been running pretty much unattended for a while, but in the past couple of months I started to get a complaint or two that it was flaky. It turned out that the new App Engine pricing model meant it no longer fit into the free quota, so by late afternoon the service would start to report errors (it's not the only unattended App Engine app that this has happened to).

I decided to dust off the code, move it to its own repository and see if it could be brought back to life. Between some basic optimizations and kicking out some (unintentionally?) abusive users, it now fits in the free quota on most days. I also took this opportunity to modernize the code a bit.

With Google Reader shared items gone, some of the initial use cases aren't there anymore (I don't want to subscribe to random blogs with it; that's what Reader is for). However, there are still plenty of other sites that support PubSubHubbub. One recent addition that I'm experimenting with is Stack Overflow. With per-tag feeds, I can now get notified via IM when someone asks a question in the two topics that I tend to answer.