It turned out both of those problems (certain diagonal lines not being thinned, others being removed completely) were caused by the same thing: a bug in the aforementioned C functions. They check for the following patterns (*'s can be of any value; the current pixel, on which each pattern is centered, is the middle one):
    * 0 0    0 1 *    * 1 0    0 0 *
    1 1 0 or 1 1 0 or 0 1 1 or 0 1 1
    0 1 *    * 0 0    0 0 *    * 1 0
These patterns look for two-pixel-thick diagonal lines which, although they satisfy the other criteria for not being thinned (e.g. having more than one white-to-dark transition as their neighborhood is traversed), should in fact be gotten rid of. What is interesting is that the Wang paper only uses the first two patterns, even though the latter two are clearly valid as well. Adding them improved results even further.
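As a sketch of what this test amounts to (the encoding and function name are mine, not the actual C code): each pattern is a 3x3 grid of 0, 1, or a wildcard, compared against the neighborhood of the center pixel.

```python
# Hypothetical sketch of the diagonal-line pattern test described above.
# None stands for the "*" wildcard; each pattern is centered on the
# candidate pixel, which must itself be 1.
PATTERNS = [
    [[None, 0, 0],
     [1,    1, 0],
     [0,    1, None]],

    [[0,    1, None],
     [1,    1, 0],
     [None, 0, 0]],

    [[None, 1, 0],
     [0,    1, 1],
     [0,    0, None]],

    [[0,    0, None],
     [0,    1, 1],
     [None, 1, 0]],
]

def matches_diagonal(neighborhood):
    """neighborhood: 3x3 list of 0/1 values centered on the pixel in question.
    Returns True if any of the four diagonal patterns matches."""
    for pat in PATTERNS:
        if all(pat[r][c] is None or pat[r][c] == neighborhood[r][c]
               for r in range(3) for c in range(3)):
            return True
    return False
```

A pixel matching one of these would be deleted even though the usual transition-count test would have preserved it.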
It's now tempting to try to improve the thinning even further (e.g. do a kind of reverse despeckling, filling holes in an otherwise solid area so they can be thinned away) or to tweak the stroke sub-image extraction (e.g. remove very small sub-images, but not all of them, since periods and the like get pretty small once thinned too -- maybe only those not near any big sub-images). But such refinements can wait; it is now time for stroke extraction.
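The simplest form of that reverse despeckling would be to fill any background pixel completely surrounded by foreground. A minimal sketch (my own, just to illustrate the idea):

```python
def fill_single_pixel_holes(img):
    """Fill any 0 pixel whose 8 neighbors are all 1 -- a crude
    'reverse despeckle' so isolated holes can be thinned away.
    img: 2D list of 0/1 values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y][x] == 0 and all(
                img[y + dy][x + dx] == 1
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            ):
                out[y][x] = 1
    return out
```

A real version would probably want to fill small multi-pixel holes too (e.g. via connected components of the background), but single-pixel holes are the common case after scanning.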
However, the Liu paper is beginning to seem increasingly dubious: it's pretty elaborate and somewhat skimpy on details (parts of it reference another paper by the same author that I can't seem to find). Another approach to consider is to mimic the method used for the mesh simplification assignment. I can build up strokes from the thinned image that initially use every single pixel as a control point. Then, defining the collapse operation as a vertex removal, I can remove as many as I deem necessary in order of increasing error (my error metric will be the pixel overlap between the stroke and the original thinned image). I can play around with different stroke interpolation methods (I have code for linear and Catmull-Rom from 426).
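The collapse loop above can be sketched as a greedy simplification. The code below is my own illustration, not an implementation of either paper: it uses a point's distance to the segment joining its neighbors as a cheap stand-in for the pixel-overlap error metric, and removes interior control points in order of increasing error until the next removal would exceed a threshold.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto ab, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify_stroke(points, max_error):
    """Greedily collapse control points in order of increasing error.
    The error of removing a point is its distance to the segment
    joining its two neighbors (a proxy for the overlap metric)."""
    pts = list(points)
    while len(pts) > 2:
        best_i, best_err = None, None
        for i in range(1, len(pts) - 1):
            err = point_segment_distance(pts[i], pts[i - 1], pts[i + 1])
            if best_err is None or err < best_err:
                best_i, best_err = i, err
        if best_err > max_error:
            break  # cheapest remaining collapse is too costly
        del pts[best_i]
    return pts
```

The real error metric would re-rasterize the interpolated stroke (linear or Catmull-Rom) and compare against the thinned image, but the greedy remove-cheapest-first structure is the same.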