GreenDot is a research project that investigates motion capture, pattern recognition, and "Intrinsic Biometrics" techniques to detect and analyze human movement signatures in video. The goal of the project is to train a computer to recognize a person from his or her motions, and to identify the person's emotional state, cultural background, and other attributes. The research is federally funded (by the National Science Foundation and the Office of Naval Research) and conducted by an interdisciplinary team of computer scientists, movement experts, linguists, and others. The current focus is the analysis of national and international public figures as they give speeches, with plans to investigate many other domains in the future.

The research team is building a large database of people's motions, using cable television recordings and web video downloads. With techniques similar to those used in speech recognition, the project applies machine learning (an Artificial Intelligence technique) to train a computer system to compare the movement signature detected for an individual in a video against a database of signatures from other subjects.
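As an illustration of this kind of comparison, a minimal sketch is shown below: each person's movement signature is represented as a fixed-length feature vector, and a query video's signature is matched against a database of enrolled signatures by cosine similarity. The function names, feature vectors, and similarity measure here are illustrative assumptions, not details of the actual GreenDot system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(query, database):
    """Return the enrolled identity whose signature best matches the query."""
    return max(database, key=lambda name: cosine_similarity(query, database[name]))

# Hypothetical database of enrolled movement signatures (toy 3-D features).
database = {
    "speaker_a": [0.9, 0.1, 0.3],
    "speaker_b": [0.2, 0.8, 0.5],
}

print(identify([0.85, 0.15, 0.25], database))  # → speaker_a
```

A real system would extract much higher-dimensional features from tracked motion and use a trained classifier rather than raw similarity, but the matching step follows this same query-against-database pattern.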

Some of the findings about these people's movement signatures (or body language) have been surprising. Notable research results include high recognition performance on public figures (heads of state, talk show hosts, etc.), clustering of similar body language styles, and a new audio-visual speaker recognition system that performs better than either the video-only or the audio-only system.
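One common way an audio-visual system can beat either single modality is late score fusion: each recognizer produces a per-identity score, and the scores are combined before making the final decision. The sketch below assumes this approach with an illustrative weighted sum; the weight and scores are made-up examples, not the project's actual method or values.

```python
def fuse_scores(audio_scores, video_scores, audio_weight=0.6):
    """Combine per-identity scores from both modalities and pick the best.

    audio_weight is a hypothetical mixing parameter in [0, 1]; a real
    system would tune it (or learn a fusion model) on held-out data.
    """
    fused = {
        name: audio_weight * audio_scores[name]
              + (1 - audio_weight) * video_scores[name]
        for name in audio_scores
    }
    return max(fused, key=fused.get)

# Toy example: audio alone is ambiguous, video strongly favors speaker_b,
# so the fused decision recovers the correct answer.
audio = {"speaker_a": 0.55, "speaker_b": 0.45}
video = {"speaker_a": 0.30, "speaker_b": 0.70}

print(fuse_scores(audio, video))  # → speaker_b
```

The intuition is that the two modalities make different errors, so a combined score is more robust than either stream on its own.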