Planning our cities is a fundamentally future-oriented endeavor. To work out which train lines, apartment blocks and zoning policies to put in place today, we need to do our best to anticipate what the consequences will be for the future.

When it comes to infrastructure, this has often been done in the past with the planner's favorite tool: cost-benefit analysis. The problem is that CBA calculations have been notoriously flawed. We need only look at the supposedly evidence-based yet controversial debate over high-speed rail in the UK to see how.

Our predictions of the future, based on such grand theories, have let us down. However, as computing power has grown exponentially since the 1990s, we now have an alternative. We can simply run hundreds of simulations of the future and pick our favorite. That is what the urban modelers are promising.

Cities: Chaos or complexity?

In the mid-20th century, the people planning our cities were confident that they had a pretty good idea of how cities work. They brought their set of standard tools and statistical models to bear on the zoning and shaping of cities.

Since then, there has been an increasing understanding of cities as more slippery "complex systems." What distinguishes cities is not the number of buildings, the urban form or any other physical attribute, but the density of interactions. It is these millions of individual interactions between people, firms and the environment around them that make our cities complex.

This new lens requires looking at cities as we look at ecosystems: just as the introduction of a few wolves in a national park can set off a chain of events that changes the physical form of rivers, so too can small changes in urban interactions change the physical form of our cities over time.

Despite this complexity, urban dwellers do not (usually) perceive chaos when they look around. The task urban modelers have set for themselves is to understand how patterns spontaneously "emerge" out of all of this complexity — how a city creates order out of chaos.

UrbanSim: from a bird's-eye view to an agent-based view

One of the most successful models that has gone beyond theory into practice has been UrbanSim. Developed by UC Berkeley's Paul Waddell with research grants from the National Science Foundation (NSF), it has now been applied in Honolulu, Houston and San Francisco, among other U.S. cities, to help guide investment in transport and land use policies.

UrbanSim is an agent-based model. That means that individual, autonomous "agents" (people, households, firms or others) are encoded with a set of rules and set off to interact with each other and their environment based on that information.

For generations, urban planners have approached the city from a bird's-eye view, looking at aggregate patterns and responding to them with a vision laid out in a master plan. The agent-based approach is far more in line with our new "bottom-up" understanding of cities as complex systems. A comprehensive understanding of the whole is out of reach, but we can simulate the decisions of thousands of individuals and see what aggregate behavior they cumulatively create.
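To make the agent-based idea concrete, here is a minimal sketch, in Python, of the kind of simulation this implies. It is not UrbanSim code; the grid, the household dictionaries and the `tolerance` parameter are invented for illustration. Each agent follows one simple rule (move if its cell feels too crowded), and a citywide density pattern emerges from the accumulation of those individual moves.

```python
import random

# Illustrative agent-based sketch (not UrbanSim): households on a small grid
# repeatedly relocate away from crowded cells, and an aggregate density
# pattern emerges from those individual decisions.

GRID_SIZE = 20        # the "city" is a 20 x 20 grid of locations (an assumption)
NUM_HOUSEHOLDS = 200  # number of agents
STEPS = 50            # number of simulation rounds

def crowding(households, cell):
    """Count how many households currently occupy a given cell."""
    return sum(1 for h in households if h["cell"] == cell)

def step(households):
    """Each household applies a simple rule: move if its cell is too crowded."""
    for h in households:
        if crowding(households, h["cell"]) > h["tolerance"]:
            # Shift to a random neighboring cell; rules this simple, repeated
            # many times, are what produce the aggregate pattern.
            x, y = h["cell"]
            nx = max(0, min(GRID_SIZE - 1, x + random.choice([-1, 0, 1])))
            ny = max(0, min(GRID_SIZE - 1, y + random.choice([-1, 0, 1])))
            h["cell"] = (nx, ny)

def run():
    households = [
        {
            "cell": (random.randrange(GRID_SIZE), random.randrange(GRID_SIZE)),
            "tolerance": random.randint(1, 4),  # how much crowding each agent accepts
        }
        for _ in range(NUM_HOUSEHOLDS)
    ]
    for _ in range(STEPS):
        step(households)
    # Report the busiest cells: the emergent, citywide outcome.
    counts = {}
    for h in households:
        counts[h["cell"]] = counts.get(h["cell"], 0) + 1
    print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])

if __name__ == "__main__":
    run()
```

No single agent is told what the city as a whole should look like; the distribution of households at the end is purely a product of many local decisions, which is the sense in which order "emerges" from the bottom up.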

This is a shift from thinking of "cities as machines" (designed, operated and tweaked by an omniscient engineer) to "cities as organisms."

Another great advantage these models offer is that, rather than waiting until enough years have passed to assess the impact of a highway on an urban area, we can instead create a series of potential "scenarios" for the future. One scenario might show how far a booming city like Austin will sprawl into its surroundings under certain policy conditions. Another might map flood risk, depending on what we allow to be built where.
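In code terms, scenario analysis amounts to running the same model several times under different policy assumptions and comparing the results. The sketch below is purely illustrative: the `simulate_sprawl` function and the `greenbelt_limit` parameter are hypothetical stand-ins for the far richer land use and transport rules a tool like UrbanSim encodes.

```python
def simulate_sprawl(years, annual_growth_km, greenbelt_limit_km=None):
    """Toy model: the city's edge creeps outward each year,
    capped by a hypothetical greenbelt policy if one is set."""
    radius_km = 10.0
    for _ in range(years):
        radius_km += annual_growth_km
        if greenbelt_limit_km is not None:
            radius_km = min(radius_km, greenbelt_limit_km)
    return radius_km

# Compare two policy scenarios for a booming city over 20 years.
scenarios = {
    "business as usual": simulate_sprawl(20, annual_growth_km=0.5),
    "greenbelt policy": simulate_sprawl(20, annual_growth_km=0.5, greenbelt_limit_km=15.0),
}
for name, radius in scenarios.items():
    print(f"{name}: city extends roughly {radius:.1f} km from the center")
```

The point is not the numbers themselves but the comparison: the same assumptions, run under different policies, let planners see how the futures diverge before committing to one of them.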

The pitfalls of modeling

Many may balk at the idea of a computer claiming to make sense of the chaotic and singular nature of our cities. Indeed, there is an awful lot about urban life that models cannot capture. What is important is that urban models are viewed not as predictions, but rather as tendencies: indications of what might happen in the coming years that help us better prepare for it. Questions, not answers.

Troubling, too, is the fact that the wider debate over "algorithmic justice" also applies to cities, where increasing areas of our daily lives are ruled by algorithms. Models are not as neutral as they may appear: the biases of their human creators are coded into the rules at the early stages.

We also have to be cautious about blindly putting our faith in the data thrown up by models. Tools like UrbanSim do a lot to bring these models to a wider audience.

But the nonprogrammers among us need to be wary of using the outputs of a model when the process that turns inputs into outputs is obscure. As some physicists like to put it: We see the pig go in and the sausage come out, but the intervening process is a mystery to us.

Can we build a science of cities?

We have been warned for years of the "revenge of the nerds," who, we are told, are set to inherit the earth. Our growing reliance on programmers (the ones who grew up building video games, not just playing them) means we need to at least try to find a common language between modelers and skeptics.

The question we are left with is this: Can we build, as modeling advocate Mike Batty suggests, a "new science of cities"? The modelers are coming, and skeptical nonprogrammers have the right to be wary, but we should at least be open to what they have to say.