Haven't had that much time for this project lately. Unlike Sonar, Grendel and Iconographer, it's still in the early stages, so I can't just pick it up anytime, work for a bit, and then move on; rather, it requires some concentrated thought. However, I am tempted to start implementing the architecture I described earlier for storing the models, and leave the user interface for later.
I'm starting to have an idea of how the user interface will work. First of all, when the user clicks, I have to determine which object was hit. This isn't quite as simple as it sounds, since the scene can be at a weird camera angle, with objects obstructing one another, and so on. OpenGL has a selection mode for exactly this, but there's also a simpler technique, color picking: assign each object a distinct color, turn off lighting and other effects, render a very small area around the location that was clicked, and then read back the color to determine which object was hit. However, this doesn't tell you where exactly the click was. I thought I was going to have to get the inverse of the modelview and projection matrices and do this myself, but apparently there's a function called gluUnProject which does it for me. Once I have the click transformed into the hit object's coordinate system, I can actually process it.
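For the curious, here's roughly the math gluUnProject is doing under the hood, sketched as a standalone C++ function. The matrices are column-major, as OpenGL stores them; the helper names and the Gauss-Jordan inverse are mine, just for illustration, not anything from the GL API:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<double, 16>; // column-major, element (r,c) at m[c*4 + r]
using Vec4 = std::array<double, 4>;

// Multiply two column-major 4x4 matrices: returns a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 out{};
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            for (int k = 0; k < 4; ++k)
                out[c*4 + r] += a[k*4 + r] * b[c*4 + k];
    return out;
}

// Invert a 4x4 matrix by Gauss-Jordan elimination; returns false if singular.
bool invert(Mat4 m, Mat4& out) {
    Mat4 inv{};
    for (int i = 0; i < 4; ++i) inv[i*4 + i] = 1.0;
    for (int col = 0; col < 4; ++col) {
        int pivot = col; // partial pivoting: pick the largest entry in this column
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(m[col*4 + r]) > std::fabs(m[col*4 + pivot])) pivot = r;
        if (std::fabs(m[col*4 + pivot]) < 1e-12) return false;
        for (int c = 0; c < 4; ++c) { // swap rows col and pivot in both matrices
            std::swap(m[c*4 + col], m[c*4 + pivot]);
            std::swap(inv[c*4 + col], inv[c*4 + pivot]);
        }
        double d = m[col*4 + col]; // scale pivot row to 1
        for (int c = 0; c < 4; ++c) { m[c*4 + col] /= d; inv[c*4 + col] /= d; }
        for (int r = 0; r < 4; ++r) { // eliminate this column from the other rows
            if (r == col) continue;
            double f = m[col*4 + r];
            for (int c = 0; c < 4; ++c) {
                m[c*4 + r]   -= f * m[c*4 + col];
                inv[c*4 + r] -= f * inv[c*4 + col];
            }
        }
    }
    out = inv;
    return true;
}

// Map window coordinates (winZ in [0,1] from the depth buffer) back to object
// coordinates. viewport is {x, y, width, height}, as from glGetIntegerv(GL_VIEWPORT).
bool unProject(double winX, double winY, double winZ,
               const Mat4& modelview, const Mat4& projection,
               const std::array<int,4>& viewport,
               double& objX, double& objY, double& objZ) {
    Mat4 inv;
    if (!invert(mul(projection, modelview), inv)) return false;
    // Window coordinates -> normalized device coordinates in [-1, 1].
    Vec4 ndc = { (winX - viewport[0]) / viewport[2] * 2.0 - 1.0,
                 (winY - viewport[1]) / viewport[3] * 2.0 - 1.0,
                 winZ * 2.0 - 1.0,
                 1.0 };
    Vec4 obj{};
    for (int r = 0; r < 4; ++r) // apply the inverse transform
        for (int c = 0; c < 4; ++c)
            obj[r] += inv[c*4 + r] * ndc[c];
    if (obj[3] == 0.0) return false;
    objX = obj[0] / obj[3]; // perspective divide back to object space
    objY = obj[1] / obj[3];
    objZ = obj[2] / obj[3];
    return true;
}
```

So conceptually it's just "undo the viewport mapping, then apply the inverse of projection times modelview" — nice to know what's going on, even if I end up calling the library function.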
One of the big remaining issues is that of the tools. I've mentioned earlier how I came up with a flexible and extensible architecture for handling various object types (geometric primitives, meshes, etc.). I'm thinking of extending this to the tools, but I'm not quite sure how it'll work yet. If each object type has a single tool associated with it, then things should work out pretty well, but I don't know if that will be the case. I guess I should draw up some mock-ups of how the user interface should look, and then design the class hierarchy that would be used to implement it.
Once I get an idea of how the whole application will work, I can see two possible ways of starting. I can work on the back end (the loading and rendering of models) and use GLUT as a temporary user interface until I get to that part, or I can work on the user interface and use simple placeholders (e.g. GLUT has a built-in teapot model) for the actual models. Right now I'm leaning towards the first option, because it'll be more satisfying (3D models are cooler than the user interface), and I need to learn more about rendering contexts.