Learning descriptors for wide-baseline matching
Student project @ EPFL Computer Vision Lab, Fall semester 2018.

Description

Local features are one of the fundamental building blocks in Computer Vision, as they provide tools to reliably find and describe local image regions. With the emergence of SIFT [3], they became the de facto standard for matching images across wide baselines, and they remain very effective solutions for tasks such as camera pose estimation, 3D reconstruction, image stitching, and visual SLAM. They have been the subject of a vast amount of research over the years.
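To make the classical pipeline concrete, below is a minimal sketch of wide-baseline matching with SIFT. It assumes OpenCV (opencv-contrib-python) is installed; the image file names are placeholders, not part of this project.

```python
# Minimal SIFT detection + matching sketch, assuming OpenCV is available.
# "view_a.jpg" / "view_b.jpg" are placeholder image paths.
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # keypoint detector + descriptor
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.8 * n.distance]
print(f"{len(good)} putative matches")
```

The surviving matches would typically be fed to a robust estimator (e.g. RANSAC) to recover the relative pose or homography between the two views.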

Until recently, the best-performing and most reliable methods were built on carefully hand-crafted solutions. Over the past few years, most traditional Computer Vision techniques have been superseded by modern methods based on Machine Learning, and deep networks in particular. However, most deep learning solutions are dense, i.e. they process the entire image at once, and such architectures are not directly applicable to keypoint learning, which relies on a pipeline with multiple, distinct components.

The Computer Vision lab at EPFL has introduced multiple works in this direction over the previous years (see references). LIFT [1] was the first fully-differentiable architecture for local feature learning. Its successor, LF-Net [2], bypassed some of its limitations and allows training directly from images rather than patches. However, the modules they are both built from are relatively straightforward networks we have had success with in the past, and more powerful and discriminative solutions are bound to exist.

In this project, the candidate will investigate alternative formulations for the descriptor network, with the goal of boosting its invariance to geometric and photometric transformations and, in turn, improving the performance of the entire pipeline. Ideally, the candidate will build on our existing framework and leverage in-house datasets. For further details, please contact us directly.
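For illustration only, the following is a hedged sketch (not the lab's actual code) of a patch descriptor network trained with a triplet margin loss, in the spirit of the descriptor module in LIFT/LF-Net. The patch size, channel widths, descriptor dimension, and margin are assumptions chosen for the example.

```python
# Hypothetical patch descriptor trained with a triplet loss (PyTorch).
# Architecture details here are illustrative assumptions, not the project's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps a 32x32 grayscale patch to an L2-normalized 128-D descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)

# Triplet loss: pull matching patches together, push non-matching ones apart.
net = PatchDescriptor()
loss_fn = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(8, 1, 32, 32) for _ in range(3))
loss = loss_fn(net(anchor), net(positive), net(negative))
loss.backward()
```

Alternative formulations to explore could include different losses, architectures, or sampling strategies; the sketch above is only meant to show where such changes would plug into a patch-based training loop.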

References:

[1] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua, "LIFT: Learned Invariant Feature Transform", ECCV 2016.
[2] Y. Ono, E. Trulls, P. Fua, and K. M. Yi, "LF-Net: Learning Local Features from Images", NeurIPS 2018.
[3] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", IJCV 2004.

For other projects available at the Computer Vision Lab, please visit our website.

Details

This project is aimed at first-year PhD students, but MSc students will also be considered. The candidate should have programming experience in Python. Previous experience with Computer Vision, Machine Learning, and libraries such as TensorFlow or PyTorch would be a big plus. The project is 30% theory, 30% implementation and 40% experimentation.

Contact

For further information, please send us an e-mail: