Podcast: Tomorrow’s Cities - AI in Urban Planning

While AI holds great promise, can the planning sector harness its potential to create better cities?

Photo by Carl Kho on Unsplash


AI’s influence on planning and cities is no longer theoretical—it is already transforming how urban environments are designed, managed, and experienced.

While AI holds great promise, can the planning sector address political, ethical, and practical challenges to ensure these technologies deliver on their potential for better, fairer cities? 

Professor Mike Raco and Nissa Shahid join Professor Lauren Andres to discuss the synergy between cutting-edge AI technologies and the evolving field of urban planning in the next instalment of our Bartlett Review podcast series. 

Listen to the podcast

Transcript

Voiceover:

This is a podcast for the Bartlett Review, sharing new ideas and disruptive thinking for the built environment, brought to you by The Bartlett Faculty of the Built Environment at University College London. 

Prof Lauren Andres: 

One of the reasons why the acceptance and use of AI in planning is slow is because there is really this huge political risk. 

Prof Mike Raco: 

If you put in the right amount of resource, if you have the right technical skills, if you are able to mine the right kind of data, you are able to generate the most remarkably detailed insights on places – it is astonishing. 

Nissa Shahid: 

I don’t really think that we are in a place, and I’m really happy to be proven wrong about this. We have access to strong technology, we just don’t know what to do with it. 

Prof Lauren Andres: 

Hello and welcome to the latest episode of the Bartlett Review podcast. I am Professor Lauren Andres, Director of Research at the Bartlett School of Planning. In January 2025, the UK Prime Minister Keir Starmer made a speech on the AI revolution where he argued that the rollout of artificial intelligence will be the defining opportunity of our generation. He also said Britain will be one of the great AI superpowers. This tells us a lot about how every aspect of contemporary life is being and will be impacted by artificial intelligence. The purpose of today's episode is to consider what impact AI is having and could potentially have on planning and cities. We want to examine the ways AI is already a reality and what may happen in the future, for better or for worse.  

To explore this, I am delighted to have two excellent guests with me today to discuss this crucial topic. First is my colleague Mike Raco, who is Professor of Urban Governance and Development and Head of the Bartlett School of Planning. Mike and I are currently leading a research project looking at AI and the sustainable development and management of urban built environments in London and Beijing. Welcome, Mike, to the podcast. 

Prof Mike Raco: 

Thank you. Great to be here. 

Prof Lauren Andres: 

And also with us is Nissa Shahid from the consultancy group Arup. Nissa is a chartered planner specialising in digital planning and championing the use of AI in the planning industry. Thank you for being here today, Nissa. 

Nissa Shahid: 

Hello and thank you for inviting me. 

Prof Lauren Andres: 

So to warm up, I'm really interested to hear your thoughts, Mike, about really what is AI in planning and what could be the different uses of AI in planning? 

Prof Mike Raco: 

Thank you, Lauren. There are probably two different types of what we might call artificial intelligence in planning. You could say that on the one hand there are weaker forms of AI, which are mainly reactive, passive systems in which AI is used to summarise and categorise existing data sets. For example, in planning, you might categorise the responses you've had to a consultation for a local plan, or you might use AI to map the ways in which people use different types of transport in the city.  

On the other hand, you might say that there are other types of what we might call stronger AI that could also be used, which are not passive, not reactive, but actually generative. For example, you could use AI in planning to generate plans for more sustainable, adaptable, and resilient places. You could use AI potentially to bring different things together, program it to think for itself, to generate new insights and new ways of working, new ways of thinking and outputs that, at its most extreme form, you potentially couldn't produce yourself as a human. 

So those are the types of things that people often talk about in relation to AI and planning. And perhaps one final thing is that traditionally a lot of the work around AI and planning has been around things that are visible. For example, mobility in the city, mapping the different types of transport use and then adapting the city to make it more efficient and smarter, to use the language of the 2000s. But what AI gives you potentially is the way to bring about new visions of cities, new visions of place, new imaginations of what a place is, how it functions, and how it could be planned in different ways. 

Prof Lauren Andres: 

I mean, I'm really interested in your thoughts, Nissa, on how you see this from a more practical, industry perspective. Where do you stand currently in relation to AI and planning? 

Nissa Shahid: 

So, not wanting to start off controversial, but I actually think we're still at the beginning of the hype cycle, and that's a hype cycle specifically for AI and planning. We can see the potential of AI. There are some really powerful AI tools and models out there, and we've seen it work brilliantly across other industries. What it's done in the medical field has been groundbreaking. We look at that and go, wow, if only we could do that with planning.  

However, I think there's still a long way for us to go before we can start applying those same kinds of techniques to our day-to-day work, mainly because I don't think we quite know what the questions are that we're trying to solve with AI. We're still focused very much on faster, cheaper, maybe we'll be able to do this, can we replicate patterns? And you can't really do that with planning, because planning is very much about how humans interact with the space around them. I don't think we have the right kind of data to put through AI to be able to make those kinds of predictions. And I don't really think that we're in that place – and I'm really happy to be proven wrong about this, and I know I have peers and colleagues who would disagree. We have access to strong technology, we just don't know what to do with it. 

Prof Lauren Andres: 

There are a lot of question marks, and – something I'm sure we'll get back to afterwards as well – regulations, legal implications, implications for insurability, accountability, ethical issues. I mean, what is quite clear from what you're saying is the extent to which there's huge potential. There's a lot written about what it can do, but probably not that much happening in practice. And maybe you could provide me with some examples of where AI is used currently – I mean, for what purpose? 

Nissa Shahid: 

I think a lot of the research, and actually a lot of the practical work where I've seen people try to use AI, has been around the consultation process, or around, again, how can we speed up planning, how can we go through 30,000 representations from the public quickly? I've always been of the mind that if you've got something as powerful as AI, rather than focusing on what's cheaper and what's faster and how we can speed up the process, shouldn't we be focusing on how we can make the process work better for people, not for the person who wants to go faster? Obviously, with the rise of ChatGPT and everyone jumping onto using it, I'm beginning to see more of this in and around writing articles a lot more quickly, rather than asking what the problems are that we actually have. It'll be good to see a little bit more around, can we dig into problems a bit better?  

There has been some really exciting work actually by central government recently about how we could potentially use AI to start extracting data from documents at a faster rate, and a lot of the questions that we could be looking at need better data behind them. One of the problems historically has been that data has been locked up in non-machine-readable PDFs, and I've seen central government now starting to look at, okay, how can we deploy AI to attack that problem, so that we can start getting the data and we can ask the questions. It'd be nice to be able to see whether there are ways that we could potentially start to build homes quicker, or allocate resources properly, or respond to real problems in real time. But I think we're kind of a long way off from that at the moment. 

Prof Lauren Andres: 

Still a work in progress. Mike, I know from our research that we have some very interesting insights both from London and Beijing. I mean, do you want to mention some of those? 

Prof Mike Raco: 

Yes, I'm going to very much concur with what's been said. A lot of the discussion within planning has been about how we can use it to deal with routinised tasks. As Nissa was saying – and this is what I think is coming out of the research very strongly – there's an imagination that that type of weak AI can free you up, quote unquote, as a professional planner to think strategically, to think big, to look at the bigger questions. At the moment it's presented to us that planners can't do that because they're so caught up in that routinised work: if that could be transferred away, if only you could do that, I could be a good strategic thinker. When we interrogate that a bit more closely in our interviews, people are a bit less clear about exactly what that would involve. 

But some of what we've uncovered in our research currently being done in the private sector shows that if you put in the right amount of resource, if you have the right technical skills, if you are able to mine the right kind of data, you are able to generate the most remarkably detailed insights on places. It is astonishing to us. Those things will have limits clearly, but they are much, much more powerful than any other place-descriptive technology I've seen in the past, or the place-descriptive analysis that goes on by normal or human planners, if you like. There's a lack of trust in the idea that AI could do that. There's a lack of belief that those things can be generative and done in a way that is better or different or more efficient. My own view, from having spoken to some people within the private sector through the research with Lauren, is that there is a lot of potential here to start coming up with the identification of new patterns and new processes within planning. I think AI has huge potential to support that process if it's done in a way that is led by people who are doing it for the public interest and doing it with all the ethics and all of the understandings of what planning should be about behind them. 

Nissa Shahid: 

So I actually want to add onto that. Funnily enough, this is a conversation I have with a lot of young to mid-grade planners who are at the beginning of their career, and they'll come up to me and go, oh, have you seen how ChatGPT or Copilot can write an entire planning statement? We won't have jobs anymore. And I have to remind them: if Copilot can write a planning statement better than you, then you should be worrying less about what it can do and more about whether we should even be writing planning statements. The limitation of AI isn't always necessarily just the technology or the data behind it; sometimes it's the person and how creative they are. And a really good planner would kind of go, okay, how can I use this not to write my planning statement quicker, but to write it better? 

Prof Lauren Andres: 

Making all the planners very much critical thinkers and critical planners – I mean, which is what I teach my year one students, if you're listening. 

Nissa Shahid: 

It's surprising how many planners do tend to forget that – and not just planners, I think everyone who leaves university, you get bogged down in the day-to-day admin. I say, actually, this is a great opportunity to get rid of that admin and get AI to do the 80% of the job you don't want to do, so you can focus on the 20% you do want to do. 

Prof Mike Raco: 

But can I just give an example of how the presence of AI technologies is changing the sector. One of the big risks that developers and investors face when they're investing in a project in a city like London is political risk. It's the risk, of course, of failure, of review, of challenge. But what we are being told in some of our research is that it is possible to use AI programmes to forensically deconstruct the politics of a place and then to use that to shape an application or applications in ways that limit the political risk. And what we're being told by some of the private actors we've spoken with is that in the longer run, many of the risks around urban development could in theory be reduced by the use of AI technologies.  

And if you then have a market-led system of planning, as we do in the English system, where we ask people to come forward with proposals, we say yes or no to those proposals, and we allow the developments to happen or not based on negotiation – if in that context political risk is actually minimised, and even the financial risk could be minimised, so that a market system could work really efficiently, then, the argument put to us goes, maybe it could work better and it'll work better for everybody. Now there's a utopianism embedded within that, of course there is, but there's something there that I think is a little bit more transformative than what we tended to hear from public actors, which has been all about that weak AI, all about the freeing up of routine tasks. There are other generative things going on in the private sector, very specialised areas with some fantastic innovation and thought going on. But those could begin to change some of the dynamics. 

Nissa Shahid: 

But funnily enough, that is what planning is meant to be: having everything in front of you and being able to get to it quicker. If anything, I think everyone would agree that if AI could help streamline that process, make it easier and be able to see that, then fantastic. But there is also the danger of it being used the wrong way around. I mean, don't forget, we are, what, 10 years on from the Cambridge Analytica scandal. We've seen what it does. We've seen how thought can be monopolised by the people who've got the technology and access to data and access to all of this information, and there needs to be some kind of transparency or governance to regulate this. And I wouldn't say so much rules, but maybe transparency on how this information is being used, how this data is being shared, giving people the option to understand the value of their own data, how they should be sharing it and what gets shared. Yes, the utopian vision is great, but there's also the need to be realistic: okay, this is all fantastic, but also let's think about where things could go horrifically wrong, where this information can be abused, where this technology can be abused. Let's not forget, unfortunately, human nature – and this is quite pessimistic, I know – but human nature is always, how can we use technology to benefit me before benefiting everyone else? That's just... you might need to get a psychologist to have that conversation rather than... 

Prof Lauren Andres: 

That's our next podcast.  

Nissa Shahid: 

Good. 

Prof Lauren Andres: 

I mean, just to come back to what you both have been saying, I think what I'm hearing as well, and what is really coming out of our research, is also the cost currently associated with those absolutely fantastic-on-paper new uses of AI, because it's not affordable for a lot of people. It's very much targeting a specific niche of customers. And for us as planners, that raises some significant issues in terms of inequalities and in terms of inclusion and exclusion. So I think there's a range of debates around this, but you both touch upon this issue of trust, of getting the political actors on board. I think this is really one of the key things, again, which is coming out of our project, because when we're talking with different actors in the public sector across London, one of the reasons why the acceptance and the use of AI in planning is slow is because there's really this huge political risk in relation to: how can we trust AI? 

Do we, and can we, actually use AI to deliver planning differently? In terms of how AI can be used, and we know is used, in London, it is in relation to land use, in relation to identifying sites – for example, being able typically to identify small sites and to use AI to build a much more accurate understanding of what's happening in the use of land. And I think that's really interesting in a context where we know in London we are missing land that can be used for housing development. And I think that's really an area, again, where that may create a use of AI which may be a bit more politically acceptable for this reason. 

Nissa Shahid: 

Yeah, it's great if you could use AI to identify a whole bunch of sites that you could build upon really quickly, but have you also accounted for how that impacts how people live there? How does that impact open space? How does that impact pollution? How does it impact everything, the area around it? There is that example where somebody looked at developing homes out in Harrow, where there is a large South Asian population, and somebody looked at that and went, oh, there are a lot of older people living out there, so let's build retirement homes and accessible homes... invested a whole load into building these homes, and then somebody turned around and went, no one's moving in. And it only transpired later that nobody had thought, oh, it's a South Asian community – stereotypically, at least, in South Asian communities, if you've got an older generation they live with you in your house – so people aren't looking for accessible single-unit homes. They're looking for homes with a bedroom on the ground floor. And these are the kinds of things that, again, a human being – or you would hope a human being – would turn around and go, actually, this is what we need to be looking at. AI isn't at that point where it challenges itself. I know we talk a lot about how it will, but it isn't quite there yet. It's great technology – technology is the solution – but what's the question? 

Prof Mike Raco: 

Yeah, just to come back to that example. It's a very powerful one, but let me put the counter-argument, which would be, firstly, that the humans got it wrong – the people making the decisions got it wrong. People like my colleague here at the Bartlett, Matthew Carmona, would argue that British developments, new housing developments over the last 20 years, have been some of the worst types of development that have happened in any European country in terms of the provision of social infrastructure, meeting local needs, shops, schools, and the wide diversity of things that a functioning place needs – and that's been done by people. So I accept the point that you need to humanise any understanding of planning in order for it to make sense, otherwise it will have problems. But as I said, the first point I'd make is that something's going wrong with the way in which humans are doing it currently. 

 

It's not like we have a wonderfully functioning system – certainly here in England, let's stick to that – and that somehow AI would disrupt something that is working really well and somehow routinise it and standardise it and dehumanise it. I would not agree with that. I think that it has the potential to bring new things, and this is the second point, which is that the power of some of the AI processing technologies to understand or develop understandings of places is something that, through the research, we've been quite astonished by. So the ability, for example, to map and understand who is using a place and how is something that is being done in the private sector. It is being used by private investors to shape where they're going to invest, and it is being done in a way that is bringing a whole set of new insights. 

It can start to categorise you. It can start to say that you are part of a group or a cluster, and then we can shape environments, costs, types of housing, types of things that you need around what we've decided you are as a type of person in this cluster. So it's not so much about individualising it, it's about using the individual data to collectivise. I'd be astonished, from some of the things we've heard, if a really, really good AI programme would not take into account the type of community living in an area, in a particular place, and suggest or even tell investors these are the sorts of things that would be good in this place. I don't think we're that far away from that. Don't get me wrong, I sound like an evangelist for AI and I'm certainly not, but I just think there's a potential within it. 

And perhaps it comes back to the other issue, Lauren, unless we just move on to it – the issue of regulations; it's come up a few times, and I think it's a very powerful one. So the current regulatory structure does not do the kinds of things Nissa is talking about; a proper framework for these wider questions around public interest, ethics and other things simply isn't there. And in fact, some of the language coming from government seems to be saying the opposite, which is that we need fewer controls and more freedom. I think the second thing is that I would argue that the way in which a lot of the regulation around planning works – to come back to that – is still assuming that the public actors are doing the job incredibly well with very few resources, and yet it does matter. So we are being told on the ground that local authorities are very, very unwilling to use AI for consultations and things like this because they are subject to judicial review. 

 

And if they were shown to be using AI and the AI missed something or hallucinated or made something up, that would be a huge problem. And so they're very unwilling to, unless the regulations then become more flexible. And this comes back to what maybe one needs to do if you wanted to boost the use of AI. You might say – and again in a more utopian way, and I'm sorry for doing this – you could say no system of consultation is perfect. You're never going to get everything. Let's shift the regulatory burden a little bit to allow the use of AI. It seems to me that as the AI improves, the regulatory structures are going to have to be more flexible and adjustable, I think, in order to get the best out of this. Otherwise, what you're going to see is the dead hand of potential review limiting its use. 

Nissa Shahid: 

So I don't disagree that there is potential in using AI. I kind of feel the conversation always comes down to: it's human or AI. The debate is either, oh, AI could have done it better, or, we can't use AI because it's definitely not trustworthy. What I'm saying is we need to move to a model that actually looks at, okay, how can we use AI? I would be very concerned if we were at a point where AI can get up in the morning and go, right, today I'm going to solve all the world's problems unprompted – because we can see where that goes. There are enough movies around to show exactly where that goes, and there will always be some kind of human being behind it. The problem isn't with the technology, it's with the human beings who have designed it – again, AI is designed by human beings. 

Data is being collected by human beings. The planning process was designed by human beings. If anything, it was designed by certain types of human beings – namely male, Caucasian, from really rich socio-economic backgrounds. And hence why, a hundred years later, the planning system, even despite our best efforts, still does serve that very same socio-economic group. So the problem isn't AI, it isn't even the planning process; it's the data that sits behind it, and the fact that we need to actively work to design out biases – that we need to actively design the planning process, and therefore the AI that supports the planning process, to create that equitable, fairer world that we want. 

Prof Lauren Andres: 

So where do we stand really? Who is driving the innovation? Who is funding AI development currently? Nissa? 

Nissa Shahid: 

So I don't think it is one sector or one side that is funding or driving it. It kind of feels like it's coming from everywhere. Central government sees the potential of it, local government would like to be able to use it. And I feel that the private sector, and even down to the average person, we look at AI and kind of go, okay, we could do something with it as well. I feel that part of the reason why it's so big and in everyone's heads right now is because it's finally become accessible, especially with OpenAI and ChatGPT becoming available, and we're just at that part of the technology cycle where we can finally do something with it. So I don't feel it's one sector pushing for it, it's just the right time.  

Prof Mike Raco: 

And maybe just to add, from the research that Lauren and I have been doing, we are finding that, for example, within the real estate sector and the development sector – those who are key actors within the planning and building of cities – they are quite diverse. So we found that bigger firms have tended to be more conservative, have been much more cautious about adopting new AI principles and technologies, actually for similar reasons to local authorities – thinking about regulation, thinking about reputation and other things. Where the real innovation is being pushed and driven is through smaller companies, the ecosystem of smaller companies who are working in quite tight fields but are able to provide clients with tailored AI responses – very, very expensive forms of AI data collection and analysis. But nonetheless, they're being done and they're there. And what we see is a strange bifurcation. 

So on the one hand, a real conservatism within the real estate sector, a focus on what some people might call the social relationships still being foundational to decision-making. On the other hand, we're being told by some innovative companies that they are quietly doing all kinds of work for bigger clients – very tailored, in-depth, high-quality AI analysis. And that's where some of the really innovative practices are currently going on. And again, this comes back partly to the geography: here in London there's a big ecosystem of those companies pushing some of these potential solutions to the problems that companies, developers and others face. 

Prof Lauren Andres: 

So what is quite clear, again – and this is coming really from our research and all the discussions we've been having – is that some of the key companies using AI in real estate and in planning are completely interdisciplinary. They're working with people who are tech experts, real estate experts, planning experts. So I think we have very much this sort of landscape which is, on the one hand, the national level for the public sector, and then a very dynamic private sector pushing this agenda. And this just leads me to the final theme for our podcast today. Again, in Keir Starmer and the UK government's AI Opportunities Action Plan – something that came out in January 2025 – it was argued that AI can transform the lives of working people. It has the potential to speed up planning consultations to get Britain building, help drive down admin for teachers so that they can get on with teaching our children, and feed AI through cameras to spot potholes and help improve roads. I'm interested in hearing your thoughts on where you think we are heading. I mean, there's a massive, massive call at national level to really link planning and AI, and we've discussed the fact that there are a lot of challenges and risks associated with it. You are both experts in this field – where do you feel we are heading? 

Nissa Shahid: 

I think everyone who I speak to will know that I get so incredibly annoyed when people start blaming planners, or planning officers, for slow planning and a slow system. I find that it's an unfair picture of what the planning system is and what planners have to deal with. Planners aren't dealing with being slow, planners are dealing with a lack of resources – and cheaper and faster isn't the goal. It's better places for people to live. If you don't do that, you are in danger of just designing a system that works really fast but serves no one, or serves a certain class of people and no one else. I think AI has great potential. I think most new innovation, most new things that come out, could be used well, but it always comes down to: what are you using it for? Why are we doing this? If I want the planning system to work faster, why do I want it to work faster? 

 

I don't just want to build a hundred homes. I want to build a community where a hundred homes will help – will house people and help them live the lives that they want. So what we care about is: how do I create better places? How do I do the job I set out to do? I didn't go to university and study planning to do planning cheaper and faster. I did it because I wanted to create better places. And I'll say that 10 years into the job, I'll always come back to this: this is why I went into planning, this is why I work in cities. It's how can I make things work better? And I feel that that's where we need to keep bringing the conversation back to. And that is what I would hold anyone to – central government, local government, my own colleagues. That's what I hold them accountable to. It's: what can I do that can help me design a better place? 

Prof Lauren Andres: 

This gives me an excellent transition to ask our Head of School – we are the biggest school of planning in the UK and Europe. Mike, what do you think in terms of how the BSP really needs to engage with those issues, and what can we say to our current students, but also all the new ones who are going to be coming to us to be trained and become those planners Nissa is talking about? 

Prof Mike Raco: 

Well, let me start off by saying that I think where AI technologies are now does have enormous potential for planning as a process and as a system, because many of the things that planning is concerned with lend themselves to the types of technologies and data reproduction that AI could do. I very much take the point that it's curious that planning is singled out by Starmer, and it's singled out for exactly the reasons Nissa says, absolutely. But as I said, to me, planning is failing. And so there is potential here for planning to be able to benefit from systems that do things like map land use, that do things like map consultations, that do engage with and help planners potentially to sift through engagement processes with different communities in new and potentially really innovative ways. There is huge potential here. And one thing we haven't talked about is that there is also potential for voluntary and community sector groups and other citizens to use AI – to use it, for example, to map the ways in which judicial reviews have taken place across public space, to analyse those reviews, to think about how you could challenge some of the decisions taken around the places you live in, in ways that others have used and found successful. 

 

So there's a potential democratisation that could go on around AI that could be transformative. So we at the Bartlett, along with many planning departments across the UK, have been talking a lot to employers in the last couple of years on this exact topic: what do you need? And what we're being told by employers – public, private, and voluntary sector employers alike – is that they want students to understand the context within which data is constructed, reproduced and used. The technical skills are a different thing; they're clearly separated off into an orbit of technical excellence, and no planning school really could do that, I would argue. But what you can do is answer the questions that we've engaged with today, the ones that Nissa has put forward, I think very powerfully, and get our students to understand that – to understand that those are the core questions, to work with this changing technology, to understand what it is, but also to sniff out when there are problems, to see when there are problems with the data, to see where it's going down the wrong avenue. You still need to understand systems, politics, governance, regulation, design process to do that. And so many of the things we currently do need to be sharpened up to take these things into account, but they don't need to be jettisoned as though somehow they're no longer relevant, the things that people have done in previous years. 

Prof Lauren Andres: 

Many thanks to both of you. Many thanks to Mike Raco. Many thanks to Nissa Shahid. Thank you. Thank you. And for more information about the Bartlett School of Planning at University College London, you can visit our website, ucl.ac.uk/Bartlett and follow us on LinkedIn at the Bartlett School of Planning UCL. Thank you again to you both. 

 

 

Photo by Breno Assis on Unsplash


Learn more about AI in urban planning

At The Bartlett School of Planning, we offer a unique hands-on learning environment guided by urban planning experts and practitioners at the forefront of our field, including AI and new technology.