AI could make cities autonomous, but that doesn’t mean we should let it happen
You are walking home. Suddenly the ground seems to open and a security drone emerges, blocking your way to verify your identity. This might sound far-fetched, but it is based on an existing technology – a drone system made by the AI company Sunflower Labs.
Federico Cugurullo, Assistant Professor in Smart and Sustainable Urbanism at Trinity College Dublin, explores further.
As part of an international project looking at the impact of AI on cities, we recently ‘broke ground’ on a new field of research called AI urbanism. This is different from the concept of a ‘smart city’. Smart cities gather information from technology, such as sensor systems, and use it to manage operations and run services more smoothly.
AI urbanism represents a new way of shaping and governing cities, by means of artificial intelligence (AI). It departs substantially from contemporary models of urban development and management. While it’s vital that we closely monitor this emerging area, we should also be asking whether we should involve AI so closely in the running of cities in the first place.
The development of AI is intrinsically connected to the development of cities. Everything that city dwellers do teaches AI something precious about our world. The way you drive your car or ride your bike helps train the AI behind an autonomous vehicle in how urban transport systems function.
What you eat and what you buy tells AI systems about your preferences. Multiply these individual records by the billions of people that live in cities, and you will get a feeling for how much data AI can harvest from urban settings.
Predictive policing
Under the traditional concept of smart cities, technologies such as the Internet of Things use connected sensors to observe and quantify what is happening. For example, smart buildings can calculate how much energy we consume, and real-time technology can quantify how many people are using a subway at any one time. AI urbanism does not simply quantify: it tells stories, explaining why and how certain events take place.
We are not talking about complex narratives, but even a basic story can have substantial repercussions. Take the AI system developed by US company Palantir, which is already employed in several cities to predict where crimes will take place and who will be involved.
Police forces may act on these predictions when deciding where to assign resources. Predictive policing in general is one of the most controversial powers that artificial intelligences are gaining under AI urbanism: the capacity to determine what is right or wrong, and who is ‘good’ or ‘bad’, in a city.
This is a problem because, as the recent example of ChatGPT has made clear, AI can produce a detailed account without grasping its meaning. It is an amoral intelligence, in the sense that it is indifferent to questions of right or wrong.
And yet this is exactly the kind of question that we are increasingly delegating to AI in urban governance. This might save our city managers some time, given AI’s extraordinary velocity in analysing large volumes of data, but the price that we are paying in terms of social justice is enormous.
A human problem
Recent studies indicate that AI-made decisions are penalising racial minorities in the fields of housing and real estate. There is also a substantial environmental cost to bear in mind, since AI technology is energy intensive. It is projected to contribute significantly to the tech sector’s carbon emissions in coming decades, and the infrastructure needed to maintain it consumes critical raw materials. AI seems to promise a lot in terms of sustainability, but when we look at its actual costs and applications in cities, the negatives can easily outweigh the positives.
It is not that AI is getting out of control, as we see in sci-fi movies and read in novels. Quite the opposite: we humans are consciously making political decisions that place AI in a position to make decisions about the governance of cities. We are willingly ceding some of our decision-making responsibilities to machines, and in different parts of the world we can already see the genesis of new cities intended to be operated entirely by AI.
This trend is exemplified by Neom, a colossal project of regional development currently under construction in Saudi Arabia. Neom will feature new urban spaces, including a linear city called The Line, managed by a multitude of AI systems, and it is supposed to become a paragon of urban sustainability. These cities of the future will feature self-driving vehicles transporting people, robots cooking and serving food and algorithms predicting your behaviour to anticipate your needs.
These visions resonate with the concept of the autonomous city, which refers to urban spaces where AI autonomously performs social and managerial functions, with humans out of the loop.
We need to remember that autonomy is a zero-sum game. As the autonomy of AI grows, ours decreases, and the rise of autonomous cities risks severely undermining our role in urban governance. A city run not by humans but by AIs would challenge the autonomy of human stakeholders, and with it many people’s wellbeing.
Are you going to qualify for a home mortgage and be able to buy a property to raise a family? Will you be able to secure life insurance? Is your name on a list of suspects that the police are going to target? Today the answers to these questions are already influenced by AI. In the future, should the autonomous city become the dominant reality, AI could become the sole arbiter.
AI needs cities to keep devouring our data. As citizens, it is now time to carefully question the spectre of the autonomous city as part of an expanded public debate, and ask one very simple question: do we really need AI to make our cities sustainable?
This article was originally published on The Conversation.