I recently stumbled upon the Visual Structure From Motion code from Changchang Wu. It’s a beautiful GUI frontend to a collection of utilities that includes, among others, multicore bundle adjustment and SIFT on the GPU. Instructions on how to install it on Ubuntu are here. Its output on this dataset of high-res images of a statue is very similar to what is shown in this video. In addition to being easy to use, VisualSFM does not require camera calibration information, although you can configure it with calibration data if you have it. The only disadvantage that I can see is that it does not let you enter known rotations and translations between pairs of camera poses, which would be very useful for stereo imagery. In other words, because it works from monocular camera data, the recovered rotations are accurate but the translations are not: at best, distances are determined up to an unknown scale. That said, it seems to work very well, and the meshes that it produces look great, particularly if your photographs cover the entire object.
My last computer vision assignment had a very cool component, so I thought I’d share it. We were supposed to implement Harris’ corner detector, which is a method for detecting corners in an image. By a “corner” we usually mean the point at which a pair of edges cross each other. For the purposes of this method, though, a corner is a pixel such that the intensities (of one of the RGB channels) within a small neighbourhood around it are significantly different from the intensities in the neighbourhood of any nearby pixel. A more detailed explanation of why the method works and how to implement it can be found here. These are the results of the algorithm, applied first to a simple image consisting of rectangles and then to a real image (keep in mind that a threshold had to be put on the number of corners marked on the image to avoid cluttering it):
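The core of the method can be sketched in a few lines of numpy. This is not our assignment code, just a minimal illustration; the window size, the constant `k`, and the response threshold are arbitrary choices of mine:

```python
import numpy as np

def box_filter(a, r=2):
    """Mean over a (2r+1) x (2r+1) window, zero-padded at the border."""
    pad = np.pad(a, r)
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def harris_response(img, k=0.05, r=2):
    """Harris corner response of a grayscale float image, per pixel."""
    iy, ix = np.gradient(img)         # image derivatives
    ixx = box_filter(ix * ix, r)      # structure tensor entries,
    iyy = box_filter(iy * iy, r)      # averaged over a small window
    ixy = box_filter(ix * iy, r)
    # Corner response: det(M) - k * trace(M)^2 at every pixel.
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2

# A white rectangle on black: the response peaks at the four corners.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
resp = harris_response(img)
ys, xs = np.where(resp > 0.5 * resp.max())  # keep only the strongest responses
```

Along a straight edge only one eigenvalue of the structure tensor is large, so the determinant stays near zero and the response goes negative; only where both gradients vary, i.e. at a corner, does the response peak.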
What would happen if we took $\mathbb{R}^2$ and pretended that we didn’t know where its origin was? One of the ramifications would be that points and vectors in that space would have to be considered different entities. When the origin $o$ is fixed and we know it, any point $p$ in the space can be represented as the vector $p - o$. This means that when we are working in an affine space (any point can be the origin), points are regarded as fixed locations in space and vectors are regarded as displacements without an origin: two different things.
Formally, an affine space consists of a vector space $V$ and a set of points $P$ such that:
- if $p \in P$ and $\vec{v} \in V$, then $p + \vec{v} \in P$
- if $p \in P$ and $q \in P$, then there is a unique $\vec{v} \in V$ such that $q = p + \vec{v}$
So, to specify an affine space we need a basis $(\vec{e}_1, \vec{e}_2)$ for $V$ (suppose $V$ is the image plane) and a point $o$ to act as the “unknown” origin. $(\vec{e}_1, \vec{e}_2, o)$ becomes the reference frame of the affine space. As usual, every vector in the affine space can be expressed as $\vec{v} = a_1\vec{e}_1 + a_2\vec{e}_2$, while points are expressed as $p = o + a_1\vec{e}_1 + a_2\vec{e}_2$. In other words, relative to the frame, vectors are of the form $(a_1, a_2, 0)$ while points are of the form $(a_1, a_2, 1)$.
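This bookkeeping can be checked mechanically in coordinates: adding a vector to a point gives a point (the last coordinate stays 1), while subtracting two points gives a vector (the last coordinate becomes 0). A tiny numpy sketch, with the helper names being my own:

```python
import numpy as np

def point(x, y):
    return np.array([x, y, 1.0])   # last coordinate 1: a location

def vector(x, y):
    return np.array([x, y, 0.0])   # last coordinate 0: a displacement

p = point(2, 3)
q = point(5, 7)
v = q - p             # point - point = vector: [3, 4, 0]
r = p + vector(1, 1)  # point + vector = point: [3, 4, 1]
```

Note that adding two points would produce a last coordinate of 2, which is exactly the kind of meaningless operation the point/vector distinction rules out.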
Does this remind you of anything? Yup, homogeneous coordinates. I believe this is why my lecture notes mentioned that “$(a_1, a_2, c)$ is interpreted as a line (vector, horizon) when $c = 0$ and as a point when $c = 1$.” It makes much more sense now than back when I first looked at that black magic of a sentence.
OK, so far so good. What really confuses me now, and confused some of my friends when they took the graphics course at UofT, is the question of why it is necessary to use affine spaces & homogeneous coordinates (and thus to differentiate between points and vectors) in computer graphics. Why can’t we use our good old euclidean geometry? I believe the answer has to do with the types of transformations that euclidean geometry allows: only rigid-body transformations (i.e. identity, translations and rotations) are allowed, and they preserve distances and angles.
Which classes of transformations are missing from that list and might be very useful in computer graphics? Linear transformations (e.g. scaling, reflection and shear), affine transformations (which preserve parallel lines) and projective transformations (artistic perspective). For instance, euclidean geometry says that two parallel lines never meet, thus failing to describe perspective and the vanishing point on the horizon:
Projective geometry allows projective transformations such as the one above. With respect to the classes of transformations each one includes, euclidean geometry is a subset of affine geometry, and that in turn is a subset of projective geometry. So, I presume that the answer to my question is that in computer graphics people use projective geometry because it makes the end result much more realistic.
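Homogeneous coordinates also pay off computationally. A translation is not a linear map of $\mathbb{R}^2$ (it moves the origin), so it cannot be a $2 \times 2$ matrix; with one extra coordinate it becomes a $3 \times 3$ matrix, and then rotations, translations, and the rest all compose by plain matrix multiplication. A sketch of that standard construction:

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Compose: rotate by 90 degrees, then translate by (1, 2).
M = translation(1, 2) @ rotation(np.pi / 2)

p = np.array([1.0, 0.0, 1.0])  # the point (1, 0)
q = M @ p                      # -> [1, 3, 1], i.e. the point (1, 3)

d = M @ np.array([1.0, 0.0, 0.0])  # a vector: the translation has no effect,
                                   # only the rotation acts -> [0, 1, 0]
```

The last line shows the point/vector distinction doing real work: the translation column multiplies the last coordinate, so vectors (last coordinate 0) are rotated but never translated.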
My first two lectures on computer graphics left me puzzled with some unanswered questions about basic projective geometry and its transformations. I’ll try to collect my thoughts and some links, and write them down, primarily to summarize things and reduce my own confusion.
Basically, both in computer vision and in computer graphics the first geometric problem that we study is that of projecting points from a 3D scene to a 2D image plane (think of it as a photograph), using the pinhole camera model. The projection looks like this:
An important thing to notice in this picture is that any point that lies on one of the dotted lines is projected to the same image location. This leads us to write $(X, Y, Z) \equiv (\lambda X, \lambda Y, \lambda Z)$ for any nonzero $\lambda$, where $(X, Y, Z)$ are the euclidean coordinates that we are accustomed to. This “equality” holds in the sense of equivalence classes, not in the sense of euclidean geometry.
Using a similar-triangles argument we can prove that the point $(X, Y, Z)$ lands on the pixel $(fX/Z, fY/Z)$ of the image, where $f$ is the focal length. However, $(fX/Z, fY/Z, f)$ can also be viewed as part of the 3D scene, provided that $Z \neq 0$. The correspondence between image pixels and the equivalence classes of $(X, Y, Z)$ under scaling is one-to-one. The equivalence class corresponding to an image pixel is called the homogeneous coordinates of that pixel. Many of the articles that I found online assumed $f = 1$ for the sake of simplicity and mentioned that the mapping $(x, y) \mapsto (x, y, 1)$ is “just a trick” to be followed blindly, because $(x, y, 1) \equiv (\lambda x, \lambda y, \lambda)$ for any nonzero $\lambda$.
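The projection is short enough to write out. A minimal sketch, assuming a focal length `f` with the same simplifying choice $f = 1$ as the articles mentioned above:

```python
import numpy as np

def project(P, f=1.0):
    """Pinhole projection of a scene point P = (X, Y, Z), Z != 0."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

P = np.array([2.0, 4.0, 2.0])
pixel = project(P)        # -> the pixel (1, 2)
same = project(3.0 * P)   # any nonzero multiple of P lies on the same ray
                          # through the pinhole, so it hits the same pixel
```

This is the equivalence-class statement from above made concrete: scaling $(X, Y, Z)$ by any nonzero $\lambda$ leaves the projected pixel unchanged.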
What happens when $Z = 0$? In that case $(X, Y, 0)$ does not correspond to any point in the euclidean space, but to a unique point-at-infinity along the direction $(X, Y)$. The proof is in the first link.
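Points at infinity are exactly what let projective geometry say that parallel lines meet. A quick check using the standard projective-geometry construction (not from the links above): the line through two homogeneous points, and the intersection point of two lines, are both given by cross products.

```python
import numpy as np

# Two parallel horizontal lines, each through two homogeneous points:
l1 = np.cross([0, 0, 1], [1, 0, 1])  # the line y = 0
l2 = np.cross([0, 1, 1], [1, 1, 1])  # the line y = 1

# Their intersection is again a cross product:
meet = np.cross(l1, l2)
# meet has last coordinate 0: no euclidean intersection exists, but the
# lines meet at the point-at-infinity along their common direction (x axis).
```

Scaling either input point or line changes `meet` only by a nonzero factor, so the intersection is well defined as an equivalence class, just like the pixels above.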