OPINION

AI is coming to our neighborhoods and will show us the future of cities

Our research at MIT has found that once trained, visual AI is shockingly accurate at predicting property values, crime rates, and even public health outcomes.

Walk down any street in Boston and visual cues will tell you a lot about it. The potholes and the storefronts, the models of cars and the makeup of crowds all contribute critical insights about our city. David L. Ryan/Globe Staff

What can you learn about a city by looking at it? Seventy years ago, the renowned urban planner Kevin Lynch walked the streets of Boston to find out. As he wrote in “The Image of the City,” he and his team found a city that was “vivid in form and full of locational difficulties,” and they were fascinated by how people used landmarks like the Charles River and Boston Common to orient themselves in the Hub’s treacherous streets.

Lynch’s groundbreaking research showed how the image of a city can tell you a lot about the life of a city. Now, as artificial intelligence learns to process street imagery, we are realizing that Lynch barely scratched the surface. Our research at MIT has found that once trained, visual AI is shockingly accurate at predicting property values, crime rates, and even public health outcomes — just by analyzing photos. This will be a revolutionary tool for policy makers, giving them a data-driven understanding of every urban block, but it has a dangerous capacity to introduce biases and lead us astray. As unblinking, digital eyes come to our streets, will we be able to keep our own eyes open?

Walk down any street in Boston and visual cues will tell you a lot about it. The potholes and the storefronts, the models of cars and the makeup of crowds all contribute critical insights about our city. This street-level assessment is especially important for identifying neighborhoods that are up-and-coming.

Today, visual AI can follow the same signs on a massive scale. In a recent paper, our lab at MIT obtained 27 million pictures of American streets along with quantitative data about neighborhood characteristics, ranging from crime levels to the incidence of different mental health conditions. A computer model then used this training data to identify correlations between visual features and the underlying realities they reflect. This is the same pattern-matching process that powers ChatGPT and self-driving cars, but we are only beginning to see its potential in our streets. Our study found that visual AI is remarkably effective at predicting many aspects of a neighborhood's profile, including poverty, crime, mobility, real estate values, and public health.
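For readers curious about the mechanics, the pattern-matching step can be boiled down to a toy sketch: pair a visual feature extracted from street imagery with a known neighborhood statistic, fit a simple model, then predict the statistic for an unseen street. All the numbers below are invented for illustration; the actual research uses deep neural networks over millions of images, not a two-variable fit.

```python
# Toy illustration of the pattern-matching idea: learn a relationship
# between a visual feature and a neighborhood statistic, then predict.
# All data here is invented; a real pipeline trains deep networks on
# millions of labeled street images.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical "training data": fraction of building facades in visible
# disrepair (a visual feature) vs. the block's poverty rate (the label).
disrepair = [0.05, 0.10, 0.20, 0.40, 0.60]
poverty   = [0.06, 0.09, 0.15, 0.28, 0.41]

a, b = fit_line(disrepair, poverty)

# Predict the poverty rate for an unseen street with 30% visible disrepair.
predicted = a * 0.30 + b
print(f"slope={a:.2f}, predicted poverty rate={predicted:.2f}")
```

The point of the sketch is simply that, given enough labeled examples, visual cues become predictors; the power (and the risk) of the real system comes from doing this across millions of images and dozens of outcomes at once.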

This fine-grained knowledge has the potential to let us make data-backed decisions on every street. Cities could detect which parks are most in need of benches or which corners are at the highest risk of car accidents or violent crime. An advertising company could use computer vision to ascertain the best corners for a billboard; a street vendor could figure out exactly where to move their cart from hour to hour. In the future, just as we will all have ChatGPT to help us find information, we could all have visual bots to help us see the physical world.

But as powerful as visual AI can be, these algorithms only see as clearly as their training data. Thus arise classic concerns about bias: an opaque algorithm could systematically devalue the homes of marginalized communities or encourage over-policing in those neighborhoods. We must also remember Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Imagine a dystopian future where everyone paints their walls a certain color to impress the bot, or local leaders focus on improving a neighborhood's score on an AI metric rather than curing its real problems. Finally, predictions could become prophecies, locking cities into a preset ideal condition. Algorithms can optimize within their parameters, but they can also paralyze us in existing ways of thinking.

Thus, the era of AI puts a new premium on human intelligence, creativity, and courage — to use these tools and not to be limited by them. Conventional wisdom is always conservative; its pull becomes more powerful now that it is baked into bots that are mostly accurate and always available. A computer vision program is an invaluable tool to help a city optimize its traffic lights but might lull human users into forgetting we should invest in bikes and subways. Computers can count every leaf on every tree, but only we can see the forest and decide what we want to do with it. AI does not replace subjectivity with objectivity; it makes political questions of power and priority-setting even more urgent.

It will take human intelligence, creativity, and courage to break away from these constricting conventions: to expose the flaw in the algorithm, to question an optimization that does not make sense in a given context, or to propose an alternative that has never been attempted. Tireless, penetrating artificial eyes are coming to our streets, promising to show us things we have never seen before. They will be incredible tools to guide us — but only if we keep our own eyes open.

Carlo Ratti is a professor at the MIT Department of Urban Studies and Planning, where he directs the Senseable City Lab. Antoine Picon is a professor at Harvard’s Graduate School of Design. Ratti and Picon are coauthors of “Atlas of the Senseable City.”
