Tagging, the drudgery of identifying each person in a photo and coming up with a cute caption so you can remember, or better share, the context in which the photo was taken, will soon be automated, or at least computer-assisted, according to several computer scientists at Duke University.

Using the extensive sensors already on a mobile device, such as the accelerometer to detect movement like dancing, the GPS chip to record location, the light sensor to distinguish indoors from outdoors, and the microphone to listen for laughter or alarm, these Duke University PhD students have developed an algorithm that helps people improve their tagging and captioning of photos. Combined with facial recognition processing, this may provide all the ingredients needed to significantly automate photo tagging and captioning.

Presented at a scholarly computer science conference earlier this week, the software, called TagSense, was developed by students from Duke University and the University of South Carolina (USC). Xuan Bao and Chuan Qin developed the app working with Romit Roy Choudhury, assistant professor of electrical and computer engineering at Duke’s Pratt School of Engineering. Qin and Bao are currently summer interns at Microsoft Research.

The example given to describe the functionality explains that the phone’s built-in accelerometer can tell if a person is standing still for a posed photograph, bowling, or even dancing. Light sensors in the phone’s camera can tell if the shot is being taken indoors or outdoors, on a sunny or cloudy day. The software can also approximate environmental conditions, such as snow or rain, by looking up the weather at that time and location. The microphone can detect whether a person in the photograph is laughing or quiet. All of these attributes are then assigned to each photograph, making the tagging process fast and comprehensive.
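To make the idea concrete, here is a minimal sketch of how sensor summaries might be mapped to candidate tags. This is purely illustrative: the function name, inputs, and thresholds are assumptions for the sake of the example, not TagSense’s published algorithm.

```python
# Hypothetical sketch, in the spirit of TagSense: map per-photo sensor
# summaries to candidate tags. Thresholds and labels are illustrative
# assumptions, not the actual published method.

def infer_tags(accel_variance, lux, audio_label, weather):
    """Return a list of candidate tags for one photo."""
    tags = []
    # Low accelerometer variance suggests the subject stood still for a pose.
    tags.append("posed" if accel_variance < 0.5 else "moving")
    # Bright ambient-light readings suggest an outdoor scene.
    tags.append("outdoors" if lux > 1000 else "indoors")
    # A label from an audio classifier over the microphone stream, if any.
    if audio_label:
        tags.append(audio_label)
    # Weather looked up for the photo's GPS location and timestamp.
    if weather in ("rain", "snow"):
        tags.append(weather)
    return tags

print(infer_tags(accel_variance=0.1, lux=5000,
                 audio_label="laughter", weather="rain"))
# ['posed', 'outdoors', 'laughter', 'rain']
```

A real system would replace these hand-set thresholds with trained classifiers and then merge the resulting tags with face-recognition output to produce the final caption.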

This has implications for refining image search, which today remains quite primitive despite growing demand, and it promises to be lucrative in the long run.

