British computer scientists have developed a system using artificial intelligence to show – with pictures – how to make ugly urban areas more beautiful.
Dubbed FaceLift, the system uses deep learning techniques to create before-and-after images showing how an ugly space looks today, and how it might look with beautification ideas that the computer has dreamed up on its own. And results show that, most of the time, humans agree with the suggestions.
Computers have been looking at our urban areas for a while now, and have previously shown skill at determining whether people are likely to find a particular urban environment pleasant or not by assessing it against a set of known human preferences. Does it have greenery? Is it visually rich? Is it cozy?
The results have been good, and have also helped to predict whether people will feel safe in a particular space.
But to date these systems have only assessed environments already created, or at least proposed, by humans.
A research team from King’s College London and Nokia Bell Labs in Cambridge, UK, wanted to go further, building a computer system that could generate its own beautification plans and justify them.
Writing in Royal Society Open Science, lead author Sagar Joglekar says this challenge had never been tackled before.
The team started by identifying five key metrics that determine whether people find a space “beautiful”: whether it looks walkable, has greenery, has an open feel, provides memorable landmarks, and offers visual complexity.
They then assembled 20,000 images of places that volunteers had labelled as beautiful or ugly, and supplemented these with Google Street View images of the same locations taken from different angles and distances.
They fed all these images into a computer running a deep learning framework – a kind of artificial intelligence that mimics the human brain by processing data in neural networks. The computer therefore had a pretty good idea of what humans thought was ugly or beautiful.
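In broad strokes, that training step resembles fine-tuning an off-the-shelf image classifier on labelled examples. The sketch below is purely illustrative – it assumes a PyTorch setup, a ResNet backbone and a folder of images sorted into “beautiful” and “ugly”, none of which are details taken from the paper.

```python
# Illustrative sketch only: fine-tune a pretrained CNN to score street scenes
# as "beautiful" vs "ugly". Folder layout and hyperparameters are assumptions,
# not the FaceLift authors' actual setup.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes images sorted into street_scenes/beautiful and street_scenes/ugly
data = datasets.ImageFolder("street_scenes", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: beautiful / ugly

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```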
The next step was to ask the computer to draw on this learning to improve an ugly scene, which it did using a generative adversarial network – a relatively recent class of machine learning system that has previously been used to analyse and generate images, including human faces.
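At its core, a generative adversarial network pits two networks against each other: a generator that invents images and a discriminator that tries to tell them apart from real ones. The toy loop below shows the general idea; the architecture, image sizes and training details are assumptions for illustration, not the networks used in FaceLift.

```python
# Minimal GAN training loop (illustrative only, not the FaceLift architecture):
# a generator learns to produce scenes the discriminator cannot tell apart
# from real "beautiful" examples.
import torch
from torch import nn

latent_dim = 100
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """real_images: a (batch, 64*64*3) tensor of flattened 'beautiful' scenes."""
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator: learn to tell real beautiful scenes from generated ones
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to fool the discriminator
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```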
The resulting images contained the computer’s raw suggestions: add some greenery here, some open space there, shift the parked cars and so on. But the generated images did not look photorealistic, so the researchers searched their image library for a real scene that most closely matched the computer’s suggestions.
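One way to perform that matching step is to embed both the generated image and the library photographs with a pretrained network, then pick the real photo whose features are most similar. The sketch below illustrates the idea; the embedding model and similarity measure are assumptions, not necessarily what the researchers used.

```python
# Illustrative retrieval step: compare a generated (unrealistic) scene against a
# library of real photographs using CNN feature embeddings, and return the
# closest real match.
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 512-d feature vector, drop class scores
backbone.eval()

@torch.no_grad()
def embed(image_batch):
    """image_batch: (N, 3, 224, 224) tensor; returns unit-length feature vectors."""
    features = backbone(image_batch)
    return torch.nn.functional.normalize(features, dim=1)

@torch.no_grad()
def closest_real_scene(generated_image, library_images):
    query = embed(generated_image.unsqueeze(0))   # (1, 512)
    library = embed(library_images)               # (M, 512)
    similarity = library @ query.T                # cosine similarity, (M, 1)
    return int(similarity.argmax())               # index of best-matching real photo
```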
In the final step, the computer explains how the addition and removal of specific urban elements has helped the scene better match the five key determinants of beauty.
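Conceptually, that explanation step amounts to scoring the scene on each of the five metrics before and after, and reporting what changed. The snippet below sketches such a comparison; the metric names follow the article, but the scores and any scoring functions behind them are hypothetical placeholders, not part of the published system.

```python
# Illustrative explanation step: compare per-metric scores for the original
# and beautified scenes. The numbers below are made-up examples.
METRICS = ["walkability", "greenery", "openness", "landmarks", "visual complexity"]

def explain(before_scores: dict, after_scores: dict) -> list[str]:
    """Report how each beauty metric changed between the two scenes."""
    notes = []
    for metric in METRICS:
        change = after_scores[metric] - before_scores[metric]
        if change > 0:
            notes.append(f"{metric} improved by {change:.2f}")
        elif change < 0:
            notes.append(f"{metric} decreased by {abs(change):.2f}")
    return notes

# Example: greenery and walkability improve after adding trees and removing cars
before = {"walkability": 0.3, "greenery": 0.2, "openness": 0.5,
          "landmarks": 0.4, "visual complexity": 0.6}
after  = {"walkability": 0.7, "greenery": 0.6, "openness": 0.5,
          "landmarks": 0.4, "visual complexity": 0.6}
print(explain(before, after))
```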
Did it work?
“With flying colours,” says Joglekar. For example, the beautified spaces were at least twice as walkable as before.
The researchers surveyed volunteers and found that they agreed with 77.5% of the computer’s recommendations. They also surveyed experts and found that four out of five believed the system would help decision-making in urban planning.
But it does have its limitations. One is that the computer can get carried away with unrealistic suggestions, such as moving a whole building or widening a road in a crowded city.
Because of this, FaceLift will not replace human planners, but is intended to be a useful tool for them.
“We do not expect machine-generated scenes to equal the quality of designs done by experts,” says Joglekar. “However, unlike the work of an expert, FaceLift is able to generate beautified scenes very fast – in seconds – and at scale (for an entire city).”