3D mesh from a collection of images

I recently stumbled upon the Visual Structure From Motion (VisualSFM) code from Changchang Wu. It’s a beautiful GUI frontend to a collection of utilities that includes, among others, multicore bundle adjustment and SIFT on the GPU. Instructions on how to install it on Ubuntu are here. Its output on this dataset of high-res images of a statue is very similar to what is shown in this video. In addition to being easy to use, VisualSFM does not require camera calibration information, although you can configure it to use it. The only disadvantage I can see is that it does not let you specify known rotation and translation transformations between pairs of camera poses, which would be very useful for stereo imagery. In other words, because it works with monocular camera data, the recovered rotations are accurate but the translations are only determined up to an unknown scale factor. That said, it seems to work very well, and the meshes it produces look great, particularly if your images cover the entire object.
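To see why monocular SfM leaves translation ambiguous, note that the pinhole projection x ∝ K[R | t]X is unchanged if you scale both the scene points and the camera translation by the same factor. Here is a small numpy sketch of that fact; the intrinsics, pose, and points are made-up numbers for illustration:

```python
import numpy as np

# Pinhole projection: x ~ K [R | t] X. Scaling both the scene and the
# camera translation by s leaves every image point unchanged, which is
# why monocular SfM recovers translation only up to scale.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # made-up intrinsics
R = np.eye(3)
t = np.array([0.1, 0.0, 2.0])
X = rng.random((5, 3)) + np.array([0, 0, 4.0])  # points in front of the camera

def project(K, R, t, X):
    cam = X @ R.T + t             # world -> camera coordinates
    uv = cam @ K.T                # apply intrinsics
    return uv[:, :2] / uv[:, 2:]  # perspective divide

s = 3.7  # arbitrary scale factor
print(np.allclose(project(K, R, t, X), project(K, R, s * t, s * X)))  # True
```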

Control theory online courses

I absolutely recommend Russ Tedrake’s online course on Underactuated Robotics (mostly nonlinear control) and Magnus Egerstedt’s course on linear control. The latter is not a formal prerequisite for the former, but it helps if you know some of its concepts. Russ’ course is just amazing, and extremely rich in mathematical concepts and algorithmic tools. He insists on providing visual intuition for almost every concept he introduces, which is great. I’ve recommended it to everyone who cared to listen. Some of my friends and I are currently taking it.

Literature review

Friday night in Montreal, with the streets almost blocked by a snowstorm that left half a meter of snow between our door and the outside world. It is also a night on which my next-door neighbours have decided to test the full extent of their speakers’ bass capabilities; they are currently playing a remix of Lana Del Rey’s Blue Jeans. It’s impressive to hear them sing along during the high notes; perhaps it’s time to throw that neighbourhood karaoke party after all.

What’s also impressive is how quickly this note has diverged from its topic, starting from its very first paragraph. This note is actually about a literature review of a set of pattern recognition and robotics papers that I think are important to know about. I wrote this review as a partial requirement for my upcoming Ph.D. comprehensive exam at McGill. The purpose of the literature review is to summarize roughly 20 papers related to your research area. The committee is free to ask any question it wishes about these papers and their references, trying to establish exactly where the boundaries of the student’s knowledge lie and how that knowledge was acquired: by memorization or by actual understanding. If the student is unclear about a method or an algorithm, the committee, which consists of some of the most experienced professors in the department, will most likely detect it and dig deep enough with questions to see exactly what the student does not understand and why.

My review document covers some methods in these broad topics:

  • Bayesian filtering and estimation (see the sketch below)
  • Sampling-based path planning
  • Clustering
  • Classification and ensemble methods
  • Active vision

The summary is quite dense, and admittedly one could spend an entire document on any single one of these areas. I thought it would be more fun, though, to write a compact document. I should clarify that this document has not yet been reviewed by the committee, so I haven’t received much feedback on it, and I don’t know exactly how many errors it contains. Notwithstanding that, I thought it was useful enough to share.
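As a small taste of the first item on that list, here is a minimal sketch of Bayesian filtering: a one-dimensional Kalman filter tracking a constant scalar state. The noise variances and data below are made up for illustration:

```python
import numpy as np

def kalman_1d(zs, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Filter noisy scalar measurements zs of a (nearly) constant state.

    q: process noise variance, r: measurement noise variance.
    Returns the sequence of posterior means."""
    x, p = x0, p0
    means = []
    for z in zs:
        # predict: the state is modeled as constant, so only uncertainty grows
        p = p + q
        # update: blend prediction and measurement using the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        means.append(x)
    return means

# Example: noisy readings of a true value of 1.0
rng = np.random.default_rng(0)
zs = 1.0 + rng.normal(scale=0.7, size=50)
print(kalman_1d(zs)[-1])  # should approach 1.0
```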

Cool courses

This semester the Computer Science department at McGill is offering a couple of courses that my friends and I are very excited about. The first one is Structure and Dynamics of Networks, which apparently is otherwise known as Network Science, but really should be called Applied Graph Theory for Large Graphs. Naming aside, the lectures seem very promising, as examples will involve networks from different areas, such as biology, political science, and social science.

The second one is Topics in Game Theory, where I believe we’ll study algorithms for selecting strategies that minimize the probability of losing (or maximize the probability of winning) in games with many participants. Or at least, that’s what I am expecting; the syllabus doesn’t seem to be posted yet.

Last but not least, the graduate Mobile Robotics class is offered again this year. This course starts with lecture-style material in the first half, during which the students are exposed to motion planning algorithms, sensors, and some fundamental problems in algorithmic robotics. The second half consists of student presentations of research papers. The class is highly interactive and lots of fun.

Support vector machines

Yet another classification method used in machine learning. Here is the most accessible tutorial I have found on this topic, by Tristan Fletcher at UCL. It might also be useful to see how SVMs can be used for regression (to predict continuous variables instead of classes). This technical report, by Steve Gunn at the University of Southampton, gave me the most intuition among the SVM tutorials I found, along with Geoff Hinton’s notes. And if you are interested in a library that implements different SVMs, you might want to take a look at Shogun. It provides interfaces to Matlab, R, Octave, and Python, and it seems like a pretty neat library, at least from what I can see on its website.
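As a quick illustration of both the classification and the regression variants, here is a minimal sketch using scikit-learn rather than Shogun (which I haven’t tried myself); the toy data and hyperparameters are made up:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Classification: separate two Gaussian blobs with an RBF-kernel SVM.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", C=1.0)  # C trades margin width against violations
clf.fit(X, y)
print(clf.predict([[-1.0, -1.0], [1.0, 1.0]]))  # expect [0 1]

# Regression: fit a noisy sine curve with epsilon-insensitive loss.
Xr = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
yr = np.sin(Xr).ravel() + rng.normal(0, 0.1, 100)
reg = SVR(kernel="rbf", C=10.0, epsilon=0.05)
reg.fit(Xr, yr)
print(reg.predict([[np.pi / 2]]))  # should be close to 1.0
```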

Bagging and boosting

Bagging and boosting are two machine learning methods that aim to combine several classifiers into a single classifier that outperforms its components. AdaBoost seems to be the most widely used boosting algorithm, or at least the one profs seem to be most excited about. A very detailed description of how and why it works can be found here. However, I find Chris Bishop’s chapter 14 in Pattern Recognition and Machine Learning much more accessible if you haven’t seen this material before.
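To make the boosting loop concrete, here is a minimal sketch of AdaBoost with decision stumps as the weak learners, written in plain numpy. The stump search and all parameters are my own simplifications for illustration, not anything from Bishop’s chapter:

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the decision stump (feature, threshold, polarity) with the
    lowest weighted classification error."""
    n, d = X.shape
    best = (0, 0.0, 1, np.inf)  # feature, threshold, polarity, error
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, T=20):
    """y must be in {-1, +1}. Returns a list of (alpha, stump) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # start with uniform example weights
    ensemble = []
    for _ in range(T):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)                 # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # confidence of this stump
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)  # upweight the misclassified examples
        w /= w.sum()
        ensemble.append((alpha, (j, thr, pol)))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, (j, thr, pol) in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)

# Toy usage: two overlapping 1-D clusters, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])[:, None]
y = np.array([-1] * 100 + [1] * 100)
model = adaboost(X, y, T=20)
print((predict(model, X) == y).mean())  # training accuracy of the ensemble
```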

I have the feeling that the more algorithms I am introduced to in my machine learning class, the fewer ways I know to solve problems, or at least, the less confident I am about which method is most appropriate. I guess that will come with time and experience, but still, I have to admit I now have more respect for statisticians.

Thankfully, Prof. Aaron Hertzmann’s notes for CSC411 at UofT are really helpful and accessible for people who visit the machine learning jungle for the first time.

The backpropagation algorithm

One of the lectures in my machine learning class briefly described the backpropagation algorithm used to train neural networks. I wasn’t very satisfied with how the lecture notes described it, and there were many details I didn’t understand. Fortunately, this chapter from Raul Rojas’ Neural Networks: A Systematic Introduction does a much better and more detailed job of describing the algorithm. Hope it helps!
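For what it’s worth, here is a minimal sketch of backpropagation for a one-hidden-layer network of sigmoid units trained with squared error. The toy data, layer sizes, and learning rate are arbitrary choices of mine, not from Rojas’ chapter:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # XOR-like targets

# One hidden layer of 8 sigmoid units, sigmoid output, squared-error loss.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 1.0

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: push error derivatives from the output layer back
    d_out = (out - y) * out * (1 - out) / len(X)  # dE/d(pre-activation), output
    d_h = (d_out @ W2.T) * h * (1 - h)            # dE/d(pre-activation), hidden
    # gradient descent on all weights and biases
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(((out > 0.5) == (y > 0.5)).mean())  # training accuracy
```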
