Dragomir Anguelov (drago@cs.stanford.edu)

Affine Motion-Compensated Prediction
------------------------------------

Traditional motion-compensated coders use only translational motion estimates for motion-compensated prediction. The goal of this project is to compare the traditional motion-compensated hybrid coder (TMCC) with its counterpart that uses an affine motion model instead of a translational one (AMCC). The affine motion model describes the motion of a plane in 2D space and can potentially describe block motion better, especially for camera motions such as zoom and rotation and for complex object motion between frames. The drawback of the affine model is the larger number of parameters (6 instead of 2 for translation) and the increased computational cost.

I plan to implement an algorithm that computes affine motion estimates and dynamically decides whether the motion differs enough from pure translation to justify transmitting all of the affine parameters. I will attempt to use the Lucas-Kanade feature-tracking algorithm to obtain the motion estimates. As a final result, I expect to produce a rate-distortion performance comparison of both algorithms for varying block sizes and varying motion-estimation accuracies, as well as a comparison of their time efficiency.
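To illustrate the two models, here is a minimal sketch (not the project's actual implementation) of the 6-parameter affine mapping, with pure translation as the special case, and a hypothetical thresholding rule of the kind described above for deciding whether all 6 parameters are worth transmitting. The parameter naming (a1..a6) and the tolerance `tol` are assumptions for illustration only.

```python
def affine_warp_coords(x, y, p):
    """Map block coordinates (x, y) with affine parameters
    p = (a1, a2, a3, a4, a5, a6):
        x' = a1*x + a2*y + a3
        y' = a4*x + a5*y + a6
    Pure translation is the special case a1 = a5 = 1, a2 = a4 = 0,
    leaving only the 2 parameters (a3, a6)."""
    a1, a2, a3, a4, a5, a6 = p
    return a1 * x + a2 * y + a3, a4 * x + a5 * y + a6

def is_essentially_translational(p, tol=0.01):
    """Decide whether the affine estimate is close enough to pure
    translation that only (a3, a6) need to be transmitted.
    `tol` is a hypothetical threshold, not from the proposal."""
    a1, a2, _, a4, a5, _ = p
    return (abs(a1 - 1) < tol and abs(a2) < tol and
            abs(a4) < tol and abs(a5 - 1) < tol)

# A mild zoom (scale 1.05) needs all 6 parameters;
# a pure shift collapses to the 2 translational ones.
zoom = (1.05, 0.0, 2.0, 0.0, 1.05, -1.0)
shift = (1.0, 0.0, 3.0, 0.0, 1.0, -2.0)
print(is_essentially_translational(zoom))   # False
print(is_essentially_translational(shift))  # True
```

The same decision rule could be applied per block after Lucas-Kanade-style estimation, so that blocks with near-translational motion cost only 2 parameters in the bitstream.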