We all identify cities by certain attributes, such as building architecture, street signage, and even the lamp posts and parking meters dotting the sidewalks. Now there’s a neat study by computer graphics researchers at Carnegie Mellon University — presented at SIGGRAPH 2012 earlier this month — that develops novel computational techniques to analyze imagery in Google Street View and identify what gives a city its character:
Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval.
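To give a flavor of the search problem the abstract describes, here is a toy sketch of the nearest-neighbor idea behind it: a patch from the target city is a good candidate if its closest matches in the full collection also come from that city (geographically informative) rather than from elsewhere. This is only an illustrative simplification, not the authors' actual pipeline (which iterates this kind of initialization with discriminative SVM retraining); the function name, descriptors, and thresholds below are all invented for the example.

```python
import numpy as np

def geo_informative_patches(target, other, k=5, purity_threshold=0.8):
    """Toy sketch: rank candidate patch descriptors from a target city by
    'purity' -- the fraction of each patch's k nearest neighbors (over the
    whole collection) that also come from the target city. High-purity
    patches are both common in the target city and rare elsewhere."""
    all_desc = np.vstack([target, other])
    n_target = len(target)
    scores = []
    for i, patch in enumerate(target):
        dists = np.linalg.norm(all_desc - patch, axis=1)
        dists[i] = np.inf                      # exclude the patch itself
        neighbors = np.argsort(dists)[:k]      # indices of k nearest patches
        scores.append(np.mean(neighbors < n_target))  # neighbors from target city
    scores = np.array(scores)
    return np.where(scores >= purity_threshold)[0], scores

# Synthetic demo: the target city has one distinctive visual element
# (descriptors near 5.0) and one element it shares with other cities
# (descriptors near 0.0).
rng = np.random.default_rng(0)
distinctive = rng.normal(5.0, 0.1, size=(10, 2))
shared = rng.normal(0.0, 0.1, size=(10, 2))
target = np.vstack([distinctive, shared])      # indices 0-9 are distinctive
other = rng.normal(0.0, 0.1, size=(20, 2))     # other cities only have the shared element

selected, scores = geo_informative_patches(target, other)
```

In this setup the distinctive patches score a purity of 1.0 (all their neighbors are fellow target-city patches), while the shared patches score low because most of their neighbors come from other cities, so they are filtered out as not geo-informative.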
Read all about it in the SIGGRAPH paper posted online.
And be sure to check out the Computing Community Consortium’s (CCC) Computing Research Highlight of the Week, updated every Thursday, for more cool advances in computer science. And if you have an interesting research result that you would like featured here, submit a Highlight to us today!
(Contributed by Erwin Gianchandani, CCC Director)