Now that resampling is working to some extent, and I can load and store things in the SQL backend, I've been able to get a basic query system working. It's sort of kludge-y, in that I do a simple binary comparison: I look at the query stroke and decide whether it looks more like a circle or a square (using only the inner angle histogram), and then return all strokes from the dataset that are also more like the basic shape I chose. Very plain, but this was meant to get the rest of the infrastructure (query input, results display, (de)serialization) in place. Now real progress can be made.
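A minimal sketch of what that binary comparison might look like, assuming strokes have already been resampled to evenly spaced points. The function names and the 45° corner threshold are my own inventions here, not the actual code: a circle turns a little at every sample, while a square concentrates its turning into a few sharp corners.

```python
import math

def inner_angles(points):
    """Turning angle at each interior point of a resampled stroke."""
    angles = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        angles.append(min(d, 2 * math.pi - d))  # fold into [0, pi]
    return angles

def looks_like_square(points, corner_thresh=math.radians(45)):
    """Crude binary test: count sharp turns. A square-ish stroke has a
    few large turning angles; a circle-ish one has many small ones."""
    corners = sum(1 for a in inner_angles(points) if a > corner_thresh)
    return corners >= 3  # at least three sharp corners -> square-ish
```

Even something this crude separates the two template shapes, which is all the placeholder query needs.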
Well, except for this one thing: the thinned image shown. The little lines (I'll call them "hairs") sticking out of the basic skeleton are disrupting the stroke extraction process. Consider the bottom portion. Assuming we start at the bottom left corner, we keep advancing along the pixels. When we reach the first intersection, we continue moving to the right, since we favor moving in the same direction. We do the same thing at the second intersection (in the bottom right corner), resulting in us reaching the end of this segment. Therefore, we're forced to begin a stroke anew, with the net result that the core feature of this shape, the rectangle, is broken up into several strokes. One solution appears to be a pre-processing step that takes these hairs into account. Consider a graph whose nodes are the endpoints and intersections in the thinned image. By looking at edge lengths in this graph, it should be possible to determine which segments are hairs (relatively short length, with one node being an endpoint). Then, we can either discount those segments entirely, or simply give their direction a lower weight when reaching an intersection in the stroke extraction phase.
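The pruning step could be sketched roughly like this, assuming the thinned image has already been traced into a graph of endpoint/intersection nodes with pixel-length edges. The tuple representation and the length threshold are hypothetical stand-ins for whatever the real tracing code produces:

```python
from collections import Counter

def prune_hairs(edges, min_length):
    """Drop 'hair' edges: short skeleton segments that dead-end in a
    dangling endpoint rather than connecting two intersections.
    `edges` is a list of (node_a, node_b, pixel_length) tuples; a node
    that appears in only one edge is an endpoint of the skeleton."""
    degree = Counter()
    for a, b, _ in edges:
        degree[a] += 1
        degree[b] += 1
    kept = []
    for a, b, length in edges:
        dangling = degree[a] == 1 or degree[b] == 1
        if dangling and length < min_length:
            continue  # short dead-end segment: likely a thinning artifact
        kept.append((a, b, length))
    return kept
```

The alternative mentioned above (down-weighting rather than discarding) would keep these edges but attach a penalty that the direction-preference rule consults at each intersection.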
Although this graph-based approach was suggested by Szymon, I believe something like it is described in Liu97 (Robust Stroke Segmentation Method for Handwritten Chinese Character Recognition), which is where I got the idea to use thinning in the first place. As it happened, the dataset I was using at that stage didn't exhibit these artifacts, and so I felt I could discard the "hair removal" part of the paper.