A Greasemonkey Christmas #

Update on 1/23/2006: The label colors script exposed an XSS vulnerability. It has now been updated to remove this hole - all users are encouraged to install the latest version of the script.

After many delays, I've finally gotten around to updating my Greasemonkey scripts so that they run under Firefox 1.5 and Greasemonkey 0.6.4. In fact, these scripts will most likely not work in older versions, and I will not be supporting Firefox 1.0.x or Greasemonkey 0.3.x.

Conversation Preview Bubbles (script) was the most straightforward, since I had already done some work to make it compatible with the unreleased Greasemonkey 0.5.x versions. In fact, it seems to work better under FF 1.5/GM 0.6.4 - previously the script did not trigger for label and search results views, but now it does. Saved Searches (script) required a bit more work, and in the process I decided to clean it up a bit. The result count functionality was removed, since it was not that useful. The fixed font toggle feature was spun off into a separate script, since it didn't make any sense for it to be bundled.

Gmail macros screenshot

I'm also taking this opportunity to announce two scripts that I had previously written but never gotten around to releasing. Gmail Macros adds additional keyboard shortcuts to Gmail. Some are obvious (and have been done by other scripts), such as "t" for move to trash and "r" for mark as read. However, I strove to provide a bit more functionality. For example, "p" both marks a message as read and archives it, for when you really don't want to read something (the "p" stands for "purge"). Additionally, the shortcuts can be easily customized by editing the HANDLERS_TABLE constant. More than one action can be chained together by providing a list of action codes (the codes are contained in the script and were extracted by looking at the generated "More Actions..." menu in Gmail). The other novel feature is label operations. Pressing "g" brings up a Quicksilver-like display that lets you begin typing a label name to go to it (special names like "Inbox" and "Trash" work too). Similarly, pressing "l" allows you to label a conversation with the label of your choosing.
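
To give a flavor of what a customization might look like, here is a hypothetical table - the action code values are made up for illustration, and the real ones (and the real table shape) are in the script:

// hypothetical action codes - the real ones are listed in the script, having
// been extracted from Gmail's "More Actions..." menu
var ACTION_MARK_READ = "mark-read";
var ACTION_ARCHIVE = "archive";
var ACTION_TRASH = "trash";

// each key maps to a list of action codes that are applied in order
var HANDLERS_TABLE = {
  "t": [ACTION_TRASH],                     // move to trash
  "r": [ACTION_MARK_READ],                 // mark as read
  "p": [ACTION_MARK_READ, ACTION_ARCHIVE]  // "purge": mark read, then archive
};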

Gmail label colors screenshot

The other script is Gmail Label Colors. In the much-quoted Walt Mossberg article on Gmail and Yahoo Mail, he claims "Gmail doesn't allow folders, only color-coded labels." Ignoring the folders vs. labels debate for now, this sentence is not actually true, since labels in Gmail cannot be color-coded. This script adds that functionality, since it turns out to be very useful (if used sparingly, otherwise too many colors can get overwhelming). To specify a color, simply rename a label to "Labelname #color" (e.g. to make the label "Foo" be red, use "Foo #red" and to make the label "Bar" be orange-ish, use "Bar ##d52"). It works in a similar way to the conversation bubble script, in that it overrides the JavaScript function through which Gmail receives data. It has to jump through some hoops to avoid the HTML escaping that Gmail does; intrepid Greasemonkey hackers may want to look at the source.
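
A minimal sketch of the name-parsing half, ignoring the escaping gymnastics and the question of where the color actually gets applied (the function name is mine):

function parseLabelColor(labelName) {
  // "Foo #red"  -> {name: "Foo", color: "red"}
  // "Bar ##d52" -> {name: "Bar", color: "#d52"}, i.e. a hex CSS color
  var match = labelName.match(/^(.+)\s#(.+)$/);
  return match ? {name: match[1], color: match[2]}
               : {name: labelName, color: null};
}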

If you're wondering why so many of my Greasemonkey scripts are Gmail-related, it's because that's the application where I spend a significant part of my day, so every bit of productivity improvement counts. Not only do I use Gmail for my personal email, but I also use an internal version for my email needs at Google (as do most of the other Googlers - this constant dog-fooding helps to make Gmail a better product).

(the usual) Disclaimer: I happen to work for Google. These scripts were produced without any internal knowledge of Gmail, and they are not endorsed by Google in any way. If you have any problems with them, please contact only me.

ChainShot/ColorJunction Lives! #

Almost two years ago I released ChainShot, a Konfabulator (now Yahoo! Widgets) widget. When the Google Homepage API was announced internally, a contest was held to develop sample modules in time for launch. I decided to port ChainShot, partly due to a lack of any better ideas, but also because I was curious how the two systems compared. As it turned out, my game (since renamed to ColorJunction) was selected for external release (you can find it in the directory).

If you're looking to make your own module, the documentation is pretty thorough. The sample modules all have unobfuscated source code, so they should serve as good starting points. There's a pretty powerful JavaScript API (e.g. we provide you with a proxy so you can jump across the same-domain limitation of XMLHttpRequest) and a descriptive XML language. But in the end, all your AJAX-y/HTML/CSS/JavaScript skills can be used pretty much as is (an advantage the Homepage API shares with Apple's Dashboard).

N.B. ChainShot/ColorJunction is a clone of the 80s Japanese game of the same name.

Greasemonkey Hacks #

I finally received my complimentary copy of Greasemonkey Hacks. I'm also glad to see that my saved searches user script was selected as one of the sample ones. Reading the contributors section was somewhat amusing due to the homogeneity of the entries (e.g. more than half mentioned blogs). The book is also already out of date, with Greasemonkey and Firefox both having undergone major releases. I also doubt the wisdom of printing the full code behind each hack - especially some of the longer ones that were already online. Commentary attached to interesting snippets might have worked better. However, as a whole the book seems very well put together, and it brought to my attention some scripts I hadn't heard of before.

As a side note, I realize that with the recent release of Firefox 1.5 and Greasemonkey 0.6.4 most of my user scripts have suffered some code rot. I hope to bring everything up to date in the next few days.

More Scraped Feeds #

Some time ago I posted about scraped comic feeds. Sometime in the past few months, the one for Frazz disappeared. I have therefore taken it upon myself to produce an unofficial Frazz feed. Comics.com doesn't make it easy to do this, since they embed a hash (unless I'm missing some pattern) in each day's comic image. Still, it was reasonably easy to parse the archive pages with Python's htmllib and scrape the necessary URLs.

In a similar vein, I've created a feed for recordings of Stanford's Computer Systems Laboratory Colloquium. Some interesting speakers come, and the entire lecture is available online. The schedule is available, but checking it by hand is tedious. Items tend to show up early, so the scraper script actually checks for a valid video URL before including it in the feed.

Update on 11/9/2005: It occurred to me that I never liked how the source for these scraping scripts was unavailable - if one of them stops working, someone else willing to pick up the ball has to start from scratch. I've therefore uploaded the Python code that generates these scraped feeds.

Google Reader #

About a year ago I switched from NetNewsWire to Bloglines. I was on an extended trip and had only brought my PC laptop with me, and Bloglines seemed like the least hassle to set up. Surprisingly enough, when I got back from the trip I never switched back. While the Bloglines UI wasn't quite as slick as NetNewsWire's, the sheer convenience of being able to access my subscriptions anywhere outweighed all other cons. This of course is not news to anyone who has switched from a desktop mail application to Gmail, from traditional photo software to Flickr, and so on.

That being said, the Bloglines UI is not as good as it could be, even given the constraints of web applications. I initially attempted to remedy this with some Greasemonkey scripts, but that didn't seem quite satisfying enough. I therefore jumped at the chance to work on what would eventually become Google Reader. Ben, Chris, Jason, Laurence and I played around with many prototypes, did usability studies and in general tried to come up with a product that all of us (and all of you) would use.

Google Reader launched this past Friday at Web 2.0. I'm very glad it's out there, since we can now begin to iterate and react to user feedback. I think the ideal feed reader user interface still hasn't been discovered*, but I hope that we can explore some more avenues with our experimentation. There are some elements of a River of News reader, while at the same time we still wanted to allow users to control their reading via labels and ranking options.

On the launch day I found that the best way to gather feedback was to be subscribed to a [google reader] search on Google Blog Search and IceRocket. This allowed us to find out quickly what the top issues were (speed, OPML import, new subscription notifications) and fix them without having to wait for a full support mail/filtering/prioritization cycle. I'm still subscribed to those feeds, so blogging about the product is the easiest way to make sure that at least one engineer sees feedback.

* Then again, considering that applications like Zoë and Gmail continue to push the bounds of mail client UI design 30+ years after the creation of email, I wouldn't be so sure of settling on a design any time soon.

Movable Type 3.2 and Comments #

I've been wanting to enable comments on this site for a while. Unfortunately I was running Movable Type 2.64, i.e. a version from over two years ago that doesn't cope well with comment spam. The release of Movable Type 3.2 seemed like a good time to upgrade. Although Six Apart has a decent migration strategy in place, my old MT installation was pretty heavily customized (see early 2004 meta postings). I chose to do a "clean" installation, where I started with a new MT directory and migrated my plugins one-by-one, as need arose. I was able to get rid of a couple of them, since their features had been incorporated into MT itself.

I must say, for two years of development, MT 3.2 doesn't blow me away in comparison to the version I was using. More UI polish is definitely apparent, but there are still parts that feel clunky (e.g. the need for nearly identical comment entry forms on the entry and preview templates). The mt-search integration is still very crude (a separate template directory that's not accessible through the GUI). One of the use cases of MT (as opposed to LiveJournal or TypePad) is a techy user who uses their site as a knowledge repository (e.g. me) - in this case having decent search is very important.

Going back to what caused this upgrade, comments are now enabled. There's still some playing around to be done with CSS and I still haven't figured out what approach to spam I'll take (e.g. moderation vs. requiring TypeKey).

Update on 9/28/2005: Email addresses are no longer required. TypeKey authentication should now actually work (the trailing slash in blog URLs on the profile page is key).

As Evan mentions, Movable Type 3.2 supports OpenID authentication (albeit only as an extra). Supporting both TypeKey and OpenID makes the comment form a bit overwhelming. Since TypeKey is now an OpenID server, the latter should be enough, since it has a superset of the functionality. However, I think TypeKey still has more user awareness (and entering your TypeKey profile URL is not exactly user friendly), so for the time being I have both up.

Google Blog Search Filtering Trick #

Scoble's blog is too high traffic and occasionally too kool-aid-y for me to read, which is why I unsubscribed a few months ago. However, I do want to know when he posts about Google*. Using the recently launched Google Blog Search I can perform a search for all his posts that contain "google" and then subscribe to that search's feed. Now I have my very own filtered Scobleizer.

It appears that this is possible in IceRocket too, but the process is more convoluted. They support a blogId: restrict that could be used to only show results from a blog. The question is, what is Scoble's blog ID? To find that out, I had to search for a recent phrase from his blog. With that, we see that his blog ID is 413, and can set up the equivalent search. Feedster can do it with the inrss parameter, but it expects a feed URL, which is slightly less user friendly than the blog URL. As far as I can tell, Technorati lacks a site/blog restrict operator.

Update: Amit points out that Technorati has a site restrict, so for example this search is equivalent to those above. However, I am not able to determine how to get the from argument into a watchlist, thus there's no way to generate a feed out of it.

* The irony is that about half his posts seem to be about Google nowadays, thus the value of the filter is diminished.

Gmail Conversation Preview Bubbles #

Update on 12/23/2005: The script has been updated to be compatible with Firefox 1.5. See this entry for more information.

Update on 8/28/2005: A bug that prevented the bubble from working correctly once a conversation had been archived or trashed has been fixed. Please reinstall the script to use this updated version.

Preview Bubble Screenshot

Short Version

Want preview bubbles for conversations in Gmail, as shown in the screenshot on the left? Then install the Gmail Conversation Preview Greasemonkey script. You can then right-click on any conversation to see its recent messages in a preview bubble. Greasemonkey is required; the script should work in 0.3.5 or 0.5 (neither one has the security issues that plagued earlier versions).

Full Story

One of the things touted by the upcoming Yahoo Mail and Hotmail releases is that they will have a preview/reading pane which will let you see message contents at a glance without having to navigate to an entirely new view. Gmail offers a lightweight version of this already, by showing the first hundred or so characters of each message as a snippet next to the subject. While this is handy for one-liner emails, a full-blown preview pane is often more appropriate.

Given my past experiences with Gmail and Greasemonkey, I figured that adding a preview area to Gmail might just be possible. Ignoring the technical aspects, the first issue was deciding what it should look like. My main issue with traditional preview panes is that they take up a lot of room, even when they aren't needed. Eventually, I was inspired by Google Maps' bubbles and decided to try that approach.

I initially triggered the bubbles on mouse hovering, but that was not a good fit: fetching the entire conversation is a heavy-weight operation with some latency while data that appears on hover should be light-weight and display instantaneously. I then tried inserting a small magnifying glass icon within each conversation row that when clicked showed the bubble. However, the icon was too small and hard to click on (Fitts' law and all that). I then considered a keyboard modifier plus mouse click as a trigger, but that seemed like too much effort on the part of the user. In the end, attaching it to right-click seemed like the best choice. Since Gmail doesn't use real links, the contextual menu triggered by the right mouse button is mostly useless, thus overriding it seemed like an acceptable tradeoff. Furthermore, this trigger means that the user does not have to take his/her hand off the mouse. To also facilitate those who use Gmail's many keyboard shortcuts, the V key was made to toggle the preview bubble for the current conversation.
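
In sketch form, the two triggers boil down to something like this - row, showPreviewBubble(), togglePreviewBubble() and getCurrentRow() are placeholders for the script's own row tracking and bubble plumbing:

// right-click on a conversation row: suppress the (mostly useless) context
// menu and show the bubble instead
row.addEventListener("contextmenu", function (event) {
  event.preventDefault();
  showPreviewBubble(row);
}, false);

// "v" toggles the bubble for the currently selected conversation
document.addEventListener("keypress", function (event) {
  if (String.fromCharCode(event.charCode || event.keyCode) == "v") {
    togglePreviewBubble(getCurrentRow());
  }
}, false);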

Integrating the preview bubble as smoothly as possible with the rest of the Gmail interface was also a challenge. Given a message ID, it's reasonably easy to fetch its contents (making a GET request for URLs of the form &view=cv&search=all&th=message-id&lvp=-1&cvp=2&qt=). However, message IDs are not stored in the DOM directly. Instead, it turns out we can leverage Gmail's communication scheme in order to get this information. As others have documented, Gmail receives data from the server in the form of JavaScript snippets. Looking at the top of any conversation list's source, we can see that the D() function that receives data in turn calls a function P() in the frame where all the JavaScript resides. Since all data must pass through this global P() function, we can use Greasemonkey to hook into it. This is similar to the trap patching way of extending Classic Mac OS. Specifically, the Greasemonkey script gets a hold of the current P() function and replaces it with a version that first records relevant data in an internal array, and then calls the original function (so that Gmail operations are not affected). Once we have the list of conversations (including IDs) in hand, we can easily map each conversation to its corresponding DOM node (each conversation's row has the ID w_message-id) and show the appropriate bubble.
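
Stripped to its essence, the hook looks something like this sketch - jsFrame is a stand-in for however the script locates the frame holding Gmail's JavaScript, and the recorded arguments still need to be filtered down to conversation lists:

// jsFrame stands in for the frame where Gmail's JavaScript lives; how the
// script actually finds it is glossed over here
var jsFrame = window.top.frames[0];
var recordedData = [];

// keep a reference to Gmail's own P() and substitute a recording version
var originalP = jsFrame.P;
jsFrame.P = function () {
  recordedData.push(Array.prototype.slice.call(arguments));
  // pass everything through untouched so that Gmail keeps working
  return originalP.apply(this, arguments);
};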

The script also tries to do clever things by resizing the bubble so that it best fits the displayed messages. Since fetching a conversation implicitly marks it as read, a "Leave Unread" option is provided that actually does a POST request to the server with the appropriate mark as unread command (LiveHTTPHeaders is indispensable for figuring this out). To parse data from a fetched conversation, we grab the appropriate JavaScript text and eval() it while defining the appropriate D() function that extracts the data. In general, the script is architected reasonably cleanly, with a PreviewBubble object with appropriate methods and comments for non-intuitive places, so it should be ready to be hacked on by other people. There are still some rough edges, as well as some drawing bugs in Deer Park that may or may not be my fault. However, I have been using it for the past few weeks, and it's very handy when quickly going through lots of email that needs to be read but not necessarily replied to.
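
The conversation-fetching side can be sketched along the same lines: define a local D(), eval() the fetched response's script blocks against it, and collect whatever Gmail would have been told (the regular expressions below are simplifications, not the script's actual parsing):

function extractConversationData(responseText) {
  var records = [];
  function D() {
    // our D() shadows Gmail's and just collects whatever it is passed
    records.push(Array.prototype.slice.call(arguments));
  }
  // pull out the <script> bodies and run them with our D() in scope
  var scripts = responseText.match(/<script[^>]*>[\s\S]*?<\/script>/gi) || [];
  for (var i = 0; i < scripts.length; i++) {
    eval(scripts[i].replace(/<\/?script[^>]*>/gi, ""));
  }
  return records;
}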

(the usual) Disclaimer: I happen to work for Google. This script was produced without any internal knowledge of Gmail, and is not endorsed by Google in any way. If you have any problems with it, please contact only me.

Google Maps vs. MSN Virtual Earth #

MSN Virtual Earth was released today. My contribution to the discussion is not a review, but a Greasemonkey script. Google Maps vs. MSN Virtual Earth is a simple script that adds links between the sites. Using their respective UI APIs, it transfers state between them, making it very easy to do side-by-side comparisons. With this in hand, feel free to make your own (informed) decision as to which is the mapping site for you.

Screenshot of Greasemonkey modifications

Impromptu Market Research #

I was at an improv show yesterday, which ended up being very disappointing. The best thing that I got out of it was a quick glance at a class signup sheet that was outside the theater. I was happy to see that Gmail was doing just as well as the more established email players, despite being invite-only, in beta, and all that.

  • Gmail: 3 people
  • Yahoo: 3 people
  • AOL: 3 people
  • Hotmail: 2 people

Putting Your Reading List Online #

Lately, I've been playing around with Delicious Library, and after a few days of tedious labor (I am iSight-less), I have imported my entire collection of books into it. Once I had all this data gathered, I wondered what I could do with it. Unfortunately, while the program has all sorts of clever ways of getting data into it, it's much more miserly when it comes to exporting it. The only officially supported feature is an export command that outputs a CSV file. Unfortunately, with no AppleScript or Automator support, there's no programmatic way of invoking it.

However, as it turns out, Delicious Library stores all of its data in an XML file. ~/Library/Application Support/Delicious Library/Library Media Data.xml has a very sane schema and is easily parsed (I'm obviously not the first to realize this, some Googling turned up this FileMaker import script for example). With this in hand, I decided to complement my automatically updated online reading list with a book analogue.

The result is my book reading list, which is also linked to from the sidebar. A cron job periodically uploads my Library XML file to my server. There, another cron job reads it, generates a simple HTML file, and tells Movable Type to refresh the relevant template. I figured that dumping my entire library would not be all that interesting, so instead I picked a few "shelves" that I use to keep track of recent purchases and books I should read.

The source to the simple generator script is available. It relies on ElementTree for its XML parsing. I am considering putting more metadata into Delicious Library, such as personal reviews, to play around with the hReview microformat. Other output formats are possible as well: my reading list may not be that exciting, but having other erudite people's available as RSS may be.

data: URL-based Animation #

A while back I happened to see Evan's minimalist Gmail notifier. One thing that struck me about it was that he not only base64 encoded his icon data, but also split it into header, palette and footer sections. By swapping out the palettes, he could easily create active and inactive versions of the same icon.

Back in the days when most games were written for 8-bit displays, color palette animation was a common technique for simulating change without having to actually push new pixels to the screen. Out of sheer curiosity, I tried to replicate the same effect in a browser, using data: URLs and JavaScript. The result is this simple animation that is programmatically generated by shifting a simple gradient color palette.

The main gotcha that I encountered is that using PNGs isn't really an option. All chunks in a PNG image (including the PLTE one that specifies the color palette) must be followed by a CRC of their data (see section 3.4). If I were to dynamically update the palette, I would have to re-compute the checksum. JavaScript isn't ideal for bit twiddling, and performance would've gotten even worse (the example given just about maxes out the CPU in Firefox/Mac and Safari on a 1.5 GHz G4).
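
Since PNG is ruled out, the sketch below assumes a GIF, whose global color table carries no checksum. ICON_BASE64 and PALETTE_OFFSET are placeholders that depend entirely on the image being animated, and the palette generation is just a toy gradient:

function buildPalette(phase) {
  // a 16-entry grayscale gradient, rotated by "phase" to fake motion
  var bytes = "";
  for (var i = 0; i < 16; i++) {
    var value = ((i + phase) % 16) * 16;
    bytes += String.fromCharCode(value, value, value);  // R, G, B
  }
  return bytes;
}

function swapPalette(base64Image, paletteOffset, palette) {
  // decode to a byte string, splice in the new palette, re-encode
  var bytes = atob(base64Image);
  var patched = bytes.substring(0, paletteOffset) + palette +
                bytes.substring(paletteOffset + palette.length);
  return "data:image/gif;base64," + btoa(patched);
}

var phase = 0;
window.setInterval(function() {
  // "animation" is a placeholder id for the <img> being animated
  document.getElementById("animation").src =
      swapPalette(ICON_BASE64, PALETTE_OFFSET, buildPalette(phase++));
}, 100);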

I'm not sure if this technique has any real world value (even ignoring MSIE's lack of support for data: URLs), but it's still fun to see old school techniques such as this resurrected.

Gmail and Persistent Searches #

Users of my Gmail persistent searches user script will have noticed that it stopped working today. This is because Google changed Gmail's domain from gmail.google.com to mail.google.com, which is not on the script's list of included pages. You can either modify the list by hand (select "Manage User Scripts..." from the Tools menu) or re-install the script by right-clicking on the previous link and selecting "Install User Script...".

Since Gmail's domain changed, and the searches were stored in a cookie (unless you use the modified version that uses contacts), your existing searches are not preserved. To prevent this problem from reoccurring, I've switched to using GM_getValue and GM_setValue, two Greasemonkey functions that were not available at the time I wrote the script (version 0.3 or later required). To recover your existing searches, look for a cookie with the name PersistentSearches set on the gmail.google.com domain and extract its value.
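
A minimal sketch of the new storage, assuming the searches are kept as a newline-delimited string (the key name here is made up - see the script for the real format):

function loadSearches() {
  // GM_getValue(key, default) reads from Greasemonkey's per-script storage
  var raw = GM_getValue("persistentSearches", "");
  return raw ? raw.split("\n") : [];
}

function saveSearches(searches) {
  GM_setValue("persistentSearches", searches.join("\n"));
}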

Update on 7/5/2005: I had forgotten that my Gmail skinning hack is also keyed on Gmail's domain. I have updated that as well.

Google and Valid HTML #

There's a perception that Google doesn't care about valid HTML, since bandwidth costs trump correctness. While that thinking has merit for high traffic sites, there's more leeway on our smaller properties. Specifically, I was happy to discover that Google Video validates. And it's not just the relatively simple front page, search results validate too.

Big (Build) Brother is Watching #

Build Status Photo

Though perhaps not as cool as this build status display, it was fun/satisfying to get one up and running in my corner of the office.

Nifty Navigation Widget #

New York Times article navigation widget

Widgetopia is a site that seeks to collect interesting UI elements from various sites and applications. It appears to be moribund based on its latest post ("Since I have no time for little widgetopia anymore..."), but today I saw something that made me think of it.

What was special about this sighting was that it wasn't on a computer at all, but in the print version of the New York Times Magazine. This week was "The Money Issue", and they had a series of articles loosely connected with this topic. What grabbed my attention was the graphic at the beginning of each one, a sample of which I've included on the right. It's a bread-crumb trail of sorts, except it also indicates the relative lengths of the articles (conveyed by the size of the circle). I'm not sure if this is a sign that online navigation design is making its way back to the print media, but it was a pleasant surprise nonetheless.

N.B. my Tufte books are still on my shelf, waiting to be read, thus possibly explaining my ignorance.

In How Many Ways Can an URL be Mispelled? #

As I was looking through this site's access logs this week, I noticed that I was getting a lot of failed requests for what looked like attempts at getting at my Gmail skinning post. The strange thing was that these 404s had no referrer, diverse user agents and IPs, and were attempted between one and five times. At first I suspected a (shady) crawler run amok, but the variety of IPs made that unlikely. I then briefly wondered whether a worm could be responsible, but if my little site was getting so many requests, then presumably it would've been noticed by other people as well. Since most of the failed requests were one or two characters off from the real URL, I wondered if traffic was getting subtly corrupted. However, no relevant outages were mentioned, and since the requests were spread out over a few weeks, it's unlikely that such a thing would've gone unnoticed.

For the curious-minded, the relevant access log snippets are here. Below are the top 10 (by frequency) 404-causing requests:

  1. 509: /archives/2004/10/05gmail-skinning
  2. 160: /archives/2004/10/05gmailskinning
  3. 20: /archives/2004/10/05/gmail-skinning/
  4. 16: /archives/2004/10/05/gmailskinning
  5. 14: /archives/2004/10/05gmail-skinning.com
  6. 12: /archieves/2004/10/05gmailskinning
  7. 10: /archieves/2004/10/05gmail-skinning
  8. 9: /archives/2004/10/05gmail_skinning
  9. 9: /archives/2004/10/5/gmail-skinning
  10. 8: /archives2004/10/05/gmail-skinning

The only theory that is consistent with all the facts is that the post was mentioned in some print publication, and that the printed URL was incorrect. That would explain the lack of referrers and the diversity of IP addresses and user agents. I assume the URL was wrong since the most popular failed request has an unlikely typo. Users are sensitive to slashes and would not miss one; the second most popular failed request shows a more natural mistake - skipping a hyphen.

I've now added a redirect from the top two items in the list above, though it may be too late. But more importantly, despite my attempt at clean URLs, this shows that they are not as friendly as they could be. The year/month/day hierarchy may be the cleanest, but I could probably get away with just the year, since my entry keywords rarely collide. Perhaps more modern blogging software than my 2003-vintage Movable Type 2.64 installation can do better, but this incident doesn't provide the activation energy to investigate further.

Update on 6/10/2005: It turns out that the entry was indeed mentioned in the June 2005 issue of Popular Science. Yay for deduction.

Penny Arcade's RSS Feed #

I've previously referred to Penny Arcade as being enlightened by having an RSS feed. However, the feed has been broken for a while (specifically, since April 22). They seem to be aware, but nothing has been done. Running it through the feed validator reveals a server misconfiguration (most likely having to do with Gzip encoding). Since they're taking their time fixing this, and since curl has no trouble fetching the feed (probably because it ignores the Gzip header), I've put up a local copy that Bloglines (and other aggregators) should have no trouble reading.

Update on 6/15/2005: Penny Arcade seems to have fixed the problem a couple of days ago, so my local copy of the feed now just redirects to the official location.

Persistent Persistent Searches #

One thing that bothered me about my Gmail persistent search script was that the searches themselves were stored in a cookie, and thus weren't shareable between computers (and at the same time, they were visible to all Gmail users using that browser). At some point I figured out that storing searches in a contact was a solution to both of these problems, but I never had the time to actually implement this feature. Now it looks like I won't have to, since Luke Baker has done it for me. Yay LazyWeb.

ChangeLog for Safari 1.3/2.0 #

The KHTML developers have complained that it's hard to integrate work done by Apple's Safari team, since all they get are periodic code dumps with no history. I agree that the Safari/KHTML relationship doesn't seem to be one of full development peers, but the lack of history claim is not true. WebCore 315 and WebCore 413 both include very detailed (checkin by checkin) changelogs of what went into them.

In fact, these ChangeLogs make interesting reading even for non-browser developers. For example, there are lines such as "crash in ApplyStyleCommand::applyBlockStyle pasting contents of webpage into Mail or Blot". It's well known that Mail.app uses WebCore in contentEditable mode for composing, but the "Blot" application is new. Based on its name, one might be inclined to suspect that it's a blog authoring tool. There's also a lot of self-reviewing (search for "reviewed by me") which wouldn't fly at other development shops (*cough*).

"By 'cruise' I mean 'Russia'" #

Expanding Snippet Feeds in Bloglines #

The full content vs. snippets/summary RSS feed debate is well known, and perhaps over-discussed. My only contribution to the matter is an enhanced version of my Bloglines Tweaks user script. Each item now has an "Expand" link that attempts to fetch the destination page and extract the full article content, which then replaces the snippet. It appears to work pretty well with MacMinute, Ars Technica, Mac Rumors and Palm Info Center, though I'm sure that there are sites for which its (simple) heuristics fail. Feedback is welcome (my email address is in the sidebar).
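
The flavor of the heuristics is roughly this (a sketch, not the script's actual logic): fetch the item's destination page with GM_xmlhttpRequest and keep whichever <div> has the most paragraph text directly inside it:

function expandItem(itemUrl, snippetNode) {
  GM_xmlhttpRequest({
    method: "GET",
    url: itemUrl,
    onload: function(response) {
      var container = document.createElement("div");
      container.innerHTML = response.responseText;
      var best = null, bestScore = 0;
      var divs = container.getElementsByTagName("div");
      for (var i = 0; i < divs.length; i++) {
        // score by text in direct <p> children so wrapper <div>s don't win
        var score = 0;
        for (var child = divs[i].firstChild; child; child = child.nextSibling) {
          if (child.nodeName == "P") {
            score += child.textContent.length;
          }
        }
        if (score > bestScore) {
          bestScore = score;
          best = divs[i];
        }
      }
      if (best) {
        snippetNode.innerHTML = best.innerHTML;
      }
    }
  });
}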

The user script requires version 0.2.6 or later of Greasemonkey, and my previous entry on the subject has installation instructions.

Life Imitating Art #

Someone at work was complaining that the new Maps satellite imagery makes New York look upside down. Then came the inevitable comparison with SimCity, and how Maxis is blowing us away, graphics-wise. Which is of course nonsense - all you need is the right angle:

New York Isometric View
Taken 7/19/2004 (with a bit too much Unsharp Mask applied).
vs.
Sim City Isometric View
Courtesy of image search.

Greasemonkey Followups #

My Bloglines-del.icio.us integration script was tweaked into a Bloglines-BlogMarks.net script. Too bad that's the one Evan noticed.

The Gmail persistent searches user script was well received. Apparently it made it as far as ETech:

A popular XUL overlay is GreaseMonkey which the author showed could be used to add features to web sites such as persistent searches to GMail all using client side script.

Drew Amato submitted a simple patch that makes the searches box remember its expanded/collapsed state. Norman Rasmussen submitted another that makes sure the searches box ends up between the labels and invites boxes. Both of these have been incorporated into the script.

Sung Kim has spun off the idea into a Google Scholar persistent search add-on. Scholar Monitor lets you keep track of authors, research groups or topics.

Finally, this is only tangentially related, but Brad (of LiveJournal fame) has come up with an even cooler use for data: URLs: a shared whiteboard (description here). The whiteboard data is refreshed via a base64 encoded PNG that is returned from the server via an XMLHttpRequest object. Crazy stuff.

The Relentless March of Progress #

My bank was acquired by another bank a while back, and I guess the merger has now reached the point where the IT systems must be integrated. I received a letter informing me of this and the specific changes that it entails:

If your Username contains letters, you'll be asked to change to a new numeric ID. Passwords that are eight or more letters will also need to be changed. The new passcode can contain 4-7 letters, numbers or combination of the two...To retain history currently viewed through Cardmember Access [the old system], we recommend that you print a copy or save any paper statements you received...Any alerts that you set up through Cardmember Access will be discontinued when the upgrade process begins.

Upgrade indeed. I'm pretty sure that these arbitrary limits will do wonders for security, and that the user experience will be improved by not having functionality present on the old site. I'm not holding my breath for RSS feeds of account activity or any actual improvements.

Adding Persistent Searches to Gmail #

Update on 12/23/2005: The script has been updated to be compatible with Firefox 1.5. See this entry for more information.

Persistent searches (a.k.a. smart folders or saved searches) seem to be the feature du jour of email clients. Thunderbird has them, Evolution has them, and Mail.app soon will. On the other hand, Gmail is the web mail app to use. While one doesn't normally think of web apps as having such advanced power user features, it recently occurred to me that it should be possible to add persistent searches to Gmail:

Persistent Searches Screenshot

  • If you haven't already, install the excellent Greasemonkey Firefox extension.
  • Open up this user script (in Firefox).
  • From the "Tools" menu, select "Install User Script.." and confirm all of the various prompts.
  • Go to your Gmail account (some refreshing may be necessary).
  • There should now be a "Searches" box on the left side, below the "Labels" and "Invite a friend" ones.
  • Clicking on a search executes the saved query. To refresh result counts, click on the refresh icon in the upper right corner.
  • Use the "Edit searches" link to customize your persistent searches.
  • As a bonus feature, all threads now have a "Toggle font" link which switches the message font to a fixed size one - great for reading source code.

There are some caveats. Saved searches are stored in a cookie. This means that you cannot easily share them between computers. Ideally this could be remedied by storing the searches within Gmail itself (perhaps as a dummy contact or a special filter), but I'm not quite sure how to do that yet. Furthermore, result counts may not be accurate beyond a certain limit (e.g. Gmail itself reports "about 80" results when there are in fact 77). In general, the smaller the result set, the more accurate the count is.

Toggle Font Screenshot

The user script has a pretty straightforward implementation. It looks for the "Labels" box, and if it finds one, it inserts a "Searches" one. As previously mentioned, I store all the searches in a cookie. To actually perform a search, I create an XMLHttpRequest object and use it to fetch the search results for each saved search. The response contains the total number of messages that matched the query. It would've been nice to use the DOM (and then a JavaScript eval()) to parse it, but this turned out to be more difficult than expected (XMLHttpRequest only provides a parsed DOM for XML documents).
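
That fetching step might be sketched as follows - the search URL and the regular expression that digs the total out of the response are both guesses, since the real details live in the script:

function fetchResultCount(query, callback) {
  var request = new XMLHttpRequest();
  // hypothetical search URL, relative to the Gmail page the script runs on
  request.open("GET", "?search=query&view=tl&q=" + encodeURIComponent(query), true);
  request.onreadystatechange = function() {
    if (request.readyState == 4) {
      // the total appears as a plain number somewhere in the response;
      // this particular pattern is made up for illustration
      var match = request.responseText.match(/(\d+) results? for/);
      callback(match ? parseInt(match[1], 10) : 0);
    }
  };
  request.send(null);
}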

Rather than specifying all of the CSS properties inline or via the JavaScript style object, I went with an approach that separates appearance from structure: a style sheet is embedded in the user script and inserted upon initialization. This style sheet also handles the font toggling (the message body always appears to be in a <div> of class mb). This approach has the added advantage of making the script self-contained, since it doesn't depend on an external CSS file. In the same spirit of encapsulation, the font toggling icon was embedded in the script itself via a data: URL (generated with hixie's tool).
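
The insertion itself amounts to creating a <style> element and appending it to the document's <head>, along these lines (the sample rule is illustrative, using the mb class mentioned above):

function insertStyleSheet(css) {
  var style = document.createElement("style");
  style.type = "text/css";
  style.appendChild(document.createTextNode(css));
  document.getElementsByTagName("head")[0].appendChild(style);
}

// e.g. a fixed-font rule for the message body
insertStyleSheet(".mb { font-family: monospace; }");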

Disclaimer: I happen to work for Google. This script was produced without any internal knowledge of Gmail, and is not endorsed by Google in any way. If you have any problems with it, please only contact me.

Integrating Bloglines and del.icio.us #

Want to easily post things you read in Bloglines to del.icio.us? Follow these steps:

  • If you haven't already, install the excellent Greasemonkey Firefox extension.
  • Open up this user script (in Firefox).
  • From the "Tools" menu, select "Install User Script.." and confirm all of the various prompts
  • Go to your Bloglines account.
  • Observe that all "Clip/Blog this" links at the bottom of each entry have been changed to "Post to del.icio.us."
  • Click on one to post that item to del.icio.us (you will be prompted for your username the first time you do this).

As an added bonus, the script makes the "Extras" section in the sidebar toggleable, so that it doesn't always take up so much room. This is all done in a very straightforward manner using DOM operations. It is possible that things could be made more elegant with some XPath-Fu. Things may stop working if Bloglines alters their markup significantly.
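
In sketch form it might look like this - the link text, the del.icio.us URL parameters, and the itemUrlFor()/itemTitleFor() helpers are all guesses standing in for Bloglines' actual markup:

function rewriteClipLinks(username) {
  var links = document.getElementsByTagName("a");
  for (var i = 0; i < links.length; i++) {
    var link = links[i];
    if (link.textContent == "Clip/Blog This") {    // guessed link text
      link.textContent = "Post to del.icio.us";
      // itemUrlFor()/itemTitleFor() stand in for digging the entry's URL
      // and title out of the surrounding markup
      link.href = "http://del.icio.us/" + username +
                  "?url=" + encodeURIComponent(itemUrlFor(link)) +
                  "&title=" + encodeURIComponent(itemTitleFor(link));
    }
  }
}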

I would like to integrate the two services even further, but I'm not sure how much more can be done. Ideally each item in Bloglines would have the tags you've assigned it and perhaps the top N community tags as well. However, given the limitations imposed by the JavaScript security model, I'm not sure how to talk to del.icio.us directly, since the scripts will execute within the Bloglines context. In any case, this JavaScript del.icio.us API looks promising.

XmlSerializer and JavaScript #

At work I've been digging around the WebCore code base, trying to see how Safari supports (or more correctly, doesn't support) XML-related things. I happened to notice an XMLSerializer class that I hadn't heard of before. A bit more digging turned up that Mozilla implements it and Opera 8 will too, thus it is a de facto standard of sorts.

Safari's implementation seems rather limited, with the only method that it supports being serializeToString. Furthermore, it only accepts complete DOM documents. However, it may still be useful in certain circumstances. For example, click here to view the serialized version of my RSS feed. The code to do this is:

function serializeRSS() {
  var request = new XMLHttpRequest();

  // fetch the feed synchronously so that responseXML is available immediately
  request.open("GET", "/index.xml", false);
  request.send(null);

  // turn the parsed XML document back into its string form
  var serializer = new XMLSerializer();
  alert(serializer.serializeToString(request.responseXML));
}

The best part about the above code snippet is that it's very straightforward and natural-looking. No IFRAME tricks required. No need for regexp-based parsing. Although browser-based development still has a ways to go, I'm glad it's headed in the right direction.

Thefacebook Adoption Rates #

Thefacebook reached my school in the second semester of my senior year. Although I haphazardly copied over my Orkut profile and added a bunch of friends, I never really got into it, having more pressing things to occupy my time.

However, noticing the steady stream of friend requests from people in younger classes, I was curious to see where it stood now. I was initially surprised by how many people in my year (2004) got around to signing up (perhaps they hadn't suffered social networking exhaustion like me). More shocking were the numbers that I saw when looking at the classes still enrolled in school. The class of 2008 has 84% of its members on Thefacebook, which is amazing when considering that this is a third party service with no endorsement whatsoever from the school. With such reach, I can see why some people get excited about the field.

Flexible Drop Shadows #

While working on yet another pet project, I decided to add an alpha-blended shadow around a draggable container. Although I have done such things before (for the magnifier), up until now I have only needed this effect for fixed-size objects. In this case, the container could have any dimensions (within reasonable bounds). After some experimentation, I came up with a method that used two sliding doors-style <div>s with a background image (I'm not a big fan of the cutesy names that the designer community chooses for CSS techniques, but such is life). The net result is visible here.

Each image is wrapped in a shadow <div> that also has four inner ones, one for each corner of the image. These corner <div>s are absolutely positioned within the container. Two of them are rather small, and only have the non-repeated parts of the upper-right and lower-left corners as their background images. The other two corners also have background images, except they are much larger (1000 x 1000 pixels, though even larger sizes should be feasible since they compress very well). However, we only want to display part of these images, making them big enough to contain the inner <div>. We accomplish this by giving them heights and widths of 100%. We then use negative position offsets to move these four <div>s outward so that they do not overlap.

One issue encountered is that MSIE does not directly support PNG transparency. The usual workaround is to use the filter CSS attribute. However, it does not seem to support the relative pinning that regular CSS background images can have. Due to the lack of a workaround, the best I could come up with was to disable this effect for this browser. It can be done in JavaScript, but I chose to rely on MSIE's lack of support for the direct descendant CSS selector. As a result, all drop-shadow related rules are prefixed by html>body so that MSIE does not see them.

As a bonus, I also threw in an image loader that can load any URL and determine the image's dimensions with JavaScript. Since some browsers do not immediately populate the width and height properties of the Image object, the solution was to use a timeout (via window.setTimeout), checking these properties until they were non-zero (actually, until they were greater than 24, since MSIE seems to report the placeholder image's dimensions until the real one is loaded).
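
A sketch of that loader (the function names and the 100 ms polling interval are mine; the 24-pixel threshold is the MSIE quirk just mentioned):

function loadImage(url, callback) {
  var image = new Image();
  image.src = url;
  function poll() {
    // MSIE reports the placeholder's dimensions (up to 24 pixels) until the
    // real image has loaded, so wait for something larger
    if (image.width > 24 && image.height > 24) {
      callback(image.width, image.height);
    } else {
      window.setTimeout(poll, 100);
    }
  }
  poll();
}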

Having done all this, I discovered that nearly the same technique has already been covered on A List Apart. I suppose I can take some pride in having come up with it independently, and also in the fact that my shadows go all the way around the image (though they are nearly imperceptible on the upper and left-hand sides), allowing for the possibility of fancier borders.

As a side note, it's interesting to see how my JavaScript coding style has improved since my earlier experiments. Now I use closures and other fun things. It's always good to know that your skills have room to improve.

Bluetooth Internet Access with the Motorola v710 #

Even though I have been disappointed with my v710's crippling by Verizon, it should still make an acceptable internet connection platform. Verizon has a 1xRTT network with coverage in most interesting places. A bit of searching turned up these instructions on using the v710 as a Bluetooth Modem with Mac OS X. They work pretty much as advertised, with the only caveat being that the connection can be dropped occasionally. I initially thought this was due to some limiting by Verizon, but it doesn't seem to be dependent on the amount of data transferred or frequency of access.

My experiences, performance-wise, seem to mirror everybody else's - faster than a 56K modem, but with worse latency. As an example of the latter, compare these two ping statistics (to www.google.com):

v710 via Verizon 1xRTT:
  round-trip min/avg/max = 344.774/457.334/589.046 ms
TimeWarner Cable:
  round-trip min/avg/max = 11.13/16.128/32.77 ms

Speed seemed to vary based on signal intensity. A speed test reported 62 kbps down and 46 kbps up, but I could get the sustained transfer rate to vary between 144 kbps and 40 kbps depending on where I placed my phone. Burstiness was also an issue, even when holding the phone still. In general this is better than my experiences with T-Mobile's GPRS network, though we'll have to see how much this ends up costing me.

The New Machine Order #

The introduction of the Mac mini significantly alters the possible configurations that I can have. Even taking into account recent revelations that it has a 2.5 inch drive, it is still a decent enough machine for my needs. First, where I stand right now:

  • 1 GHz TiBook with 1 GB of RAM, 60 GB internal storage and 80 GB internal storage (2.5 inch), and a combo drive. Used as the primary computer, usually in conjunction with an external flat panel, keyboard and mouse.
  • Dual 450 MHz G4 with 1.5 GB of RAM, 100 + 120 + 300 GB internal storage and DVD burner. Used as a headless server.
  • Toshiba Portege R100 with 768 MB of RAM, and 40 GB internal storage.

All machines have wireless (802.11b cards in the computers, 802.11g on the basestation), and the two Macs have Gigabit Ethernet (and are interconnected with a Gigabit switch).

Step 1

Wait for more enterprising people to purchase a Mac mini and take it apart. Ensure that RAM and hard drive can be upgraded by the user with minimal hassle (the fact that this may void the warranty does not bother me). Determine if the inevitable flaws in a Rev. A product are acceptable.

Step 2

Purchase low-end Mac mini with Airport Extreme card and third party 1 GB of RAM for a total of ~$750. Install the 80 GB 2.5 inch drive internally.

Step 3

Sell the TiBook on eBay for ~$1600 (assuming not too much depreciation in the meantime).

Sell the dual G4 (minus 300 GB of the internal storage) on eBay for ~$400.

Step 4

Wait and see if I actually need to do anything more. I will in theory have a slightly more performant main machine that takes up less room and requires fewer external doohickeys.

Step 5

If the lack of a big honking tower leaves me unsatisfied and unmanly, build an Athlon 64 machine for ~$1500 or a dual Opteron workstation for $2500.

Unofficial Comic Feeds #

RSS is an ideal medium for reading online comics, since it removes the need to check sites repeatedly or keep track of update schedules. Some are enlightened about this, others don't seem to get it yet. For the laggards, the usual approach is to scrape the site. Below is a list of the comics that I read for which this has been done, in the hope that others will find these Atom or RSS feeds useful.