Three researchers at Microsoft Research (Johannes Kopf, Michael Cohen, and Richard Szeliski) have developed an algorithm that turns erratic first-person footage into smooth hyperlapse videos. The problem, as they put it, is that first-person footage is usually so long that the only practical way to watch it is as a time-lapse, but speeding the footage up exacerbates its shakiness and erratic motion. Kopf, Cohen, and Szeliski address this by reconstructing the scene in 3D, planning a new, smooth camera path through it, and rendering each output frame by stitching and blending multiple input frames into a cohesive whole.
As the researchers describe it: "Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input."
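The core idea of the path-optimization step can be illustrated with a toy analogue. The sketch below is not the researchers' method (which optimizes a full 6-degree-of-freedom camera path against reconstructed proxy geometry); it only shows the underlying trade-off on noisy 1D camera positions: a data term keeps the new path near the input path, while a smoothness term penalizes discrete curvature. The function name, weights, and gradient-descent solver are all illustrative assumptions.

```python
def smooth_path(positions, smoothness=10.0, step=0.002, iterations=2000):
    """Toy analogue of smooth-path optimization (not the paper's method).

    Minimizes  sum_i (p_i - q_i)^2  +  smoothness * sum_i (p_{i-1} - 2 p_i + p_{i+1})^2
    by gradient descent, where q_i are the noisy input positions.
    """
    q = list(positions)
    p = list(positions)          # initialize the output path at the input path
    n = len(p)
    for _ in range(iterations):
        grad = [2.0 * (p[i] - q[i]) for i in range(n)]   # data term: stay near input
        for i in range(1, n - 1):
            d2 = p[i - 1] - 2.0 * p[i] + p[i + 1]        # discrete curvature at i
            grad[i - 1] += 2.0 * smoothness * d2
            grad[i]     -= 4.0 * smoothness * d2
            grad[i + 1] += 2.0 * smoothness * d2
        p = [pi - step * g for pi, g in zip(p, grad)]
    return p


# Example: a forward-moving camera with alternating jitter.
noisy = [i + 0.5 * (-1) ** i for i in range(20)]
smoothed = smooth_path(noisy)
```

Raising the `smoothness` weight trades fidelity to the input positions for a flatter path, which is the same tension the real optimizer resolves, except that it also has to keep the virtual camera looking at surfaces that the input frames can actually render.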
More on the algorithm, including a technical breakdown, is available from Microsoft Research.