Zoning Law Is Complex. This Makes It Perfect for AI.

Nov 5, 2025

By Kennon Stewart

How Hard Is a Question?

We often say a question is “hard,” but rarely ask what that means. “What color is the sky?” is easy. “What caused the Gulf War?” is hard. But why?

One way to think about difficulty is through information. Every time you ask a question, you begin a process of reasoning—gathering evidence, testing hypotheses, and refining what you need to know next. You stop only when your uncertainty is low enough that you feel confident in the answer.

Some questions need a single fact to reach closure; others demand a chain of inferences. The number of information pulls—each search, lookup, or reasoning step—becomes a rough measure of query complexity.

The more steps it takes to reach closure, the more complex the query.


Zoning Law as a Case Study in Complexity

Zoning codes are the perfect testbed for this idea. They are designed to manage the built environment—a delicate mix of density, safety, sunlight, and noise—but the way they’re written feels overly complex.

In Ann Arbor, Michigan, zoning laws decide what can be built on each parcel of land. They depend on the property’s location, type, proximity to schools or places of worship, and a dozen other variables. Something as ordinary as installing solar panels can trigger a web of cross-references: neighborhood designations, utility easements, historic-district boundaries.

For a planner or homeowner, reading zoning code is not an experience of discovery but one of retrieval under uncertainty. You flip back and forth between documents, trying to trace which definitions apply, which exceptions override which rules, and which clauses contradict one another.

Each “jump” between documents or subsections is a small unit of epistemic labor. If one legal code requires three such jumps to answer a question while another requires five, the first is—by our definition—simpler. Its query complexity is lower.
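One way to picture these jumps is to treat the legal code as a graph: sections are nodes, cross-references are edges, and a question's complexity is the number of hops from its entry point to the clause that resolves it. Here is a minimal sketch; the section names and links are invented for illustration, not taken from Ann Arbor's actual code.

```python
from collections import deque

# Legal code as a cross-reference graph: nodes are sections, edges are
# citations. The section names below are hypothetical placeholders.
code_graph = {
    "solar panels": ["neighborhood designation", "utility easements"],
    "neighborhood designation": ["historic district"],
    "utility easements": [],
    "historic district": ["review board approval"],
    "review board approval": [],
}

def jumps_to_answer(graph, start, target):
    """Breadth-first search: fewest cross-reference jumps from start to target."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        node, depth = frontier.popleft()
        if node == target:
            return depth
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None  # the question cannot be resolved from this entry point

print(jumps_to_answer(code_graph, "solar panels", "review board approval"))  # → 3
```

A code that resolves the same question in fewer hops is, by this measure, the more accessible one.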

This is not just a legal nuance; it’s a measure of the accessibility of a city’s legal code.


The Counterintuitive Move: Zoning Queries Are an AI Problem

Modern language models can read text, but they cannot reason efficiently about what to retrieve. When a system like GPT searches a document database before answering, it performs what’s called Retrieval-Augmented Generation (RAG).

The trick of RAG isn’t the generation—it’s the retrieval. A reasoning model must decide, at every step, whether it has enough evidence to answer or whether it needs to look up more. Each lookup costs time and compute, and potentially introduces noise.
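The decision loop at the heart of RAG can be sketched in a few lines. The `search` and `confidence` functions below are hypothetical stand-ins for a real retriever and a real calibration signal; only the shape of the loop matters.

```python
def search(query, step):
    """Stub retriever: returns one more piece of evidence (placeholder)."""
    return f"evidence-{step} for {query!r}"

def confidence(evidence):
    """Stub confidence signal: here it simply grows with evidence gathered."""
    return min(1.0, 0.25 * len(evidence))

def answer_with_rag(query, threshold=0.9, max_steps=10):
    """Retrieve until confident enough to answer, then stop.

    Returns the evidence gathered and the number of retrievals used,
    which is this post's rough measure of query complexity.
    """
    evidence = []
    for step in range(max_steps):
        if confidence(evidence) >= threshold:
            break  # enough evidence: stop retrieving and answer
        evidence.append(search(query, step))
    return evidence, len(evidence)

evidence, steps = answer_with_rag("Can I open a coffee shop on this parcel?")
print(steps)  # number of retrievals before reaching the confidence threshold
```

Every design question in the system lives inside that `if`: a better stopping rule means fewer wasted lookups.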

Our research team (XL, ZQ, and myself) treats this as a stochastic decision process: a formal way to describe reasoning under uncertainty.

At any point, the model has a state—its current graph of evidence. Its action is to retrieve more information or to stop. Its goal is to minimize uncertainty as cheaply as possible.

Formally, this is a stopping-time problem: stop when the model is confident enough; continue otherwise.

The fewer retrievals needed to reach a confident answer, the lower the query complexity of that question.
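Under the simplifying assumption that uncertainty can be summarized by a single scalar, this can be written compactly. Here S_t is the evidence state after t retrievals, U is an uncertainty measure, and ε is the confidence target; the notation is a sketch of the framing, not the team's published formulation.

```latex
\tau = \min\{\, t \ge 0 : U(S_t) \le \varepsilon \,\},
\qquad
\text{complexity}(q) = \mathbb{E}[\tau]
```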


This framing gives us something powerful. We can now treat a legal code—or any knowledge system—as a landscape of expected evidence graphs.

Each question defines a path through that landscape. The expected number of retrieval steps to reach closure quantifies its complexity.
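Because each retrieval reduces uncertainty by a random amount, the expected number of steps can be estimated by simulation. The dynamics below (uncertainty shrinking by a uniform random factor per lookup) are invented for illustration; the point is only that harder questions, modeled as tighter confidence targets, need more retrievals on average.

```python
import random

def stopping_time(initial_uncertainty=1.0, threshold=0.1, seed=None):
    """Retrievals until uncertainty first falls below the threshold."""
    rng = random.Random(seed)
    uncertainty, steps = initial_uncertainty, 0
    while uncertainty > threshold:
        uncertainty *= rng.uniform(0.3, 0.9)  # each lookup shrinks uncertainty
        steps += 1
    return steps

def expected_complexity(n_trials=10_000, **kwargs):
    """Monte Carlo estimate of the expected number of retrieval steps."""
    return sum(stopping_time(**kwargs) for _ in range(n_trials)) / n_trials

easy = expected_complexity(threshold=0.5)   # loose target: few lookups
hard = expected_complexity(threshold=0.01)  # tight target: many lookups
print(easy, hard)
```

The same machinery, pointed at a real retriever instead of a toy process, is what turns "this question feels hard" into a number.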

  • “Can I open a coffee shop on this parcel?” — one retrieval.
  • “Under what conditions can this parcel be converted to mixed-use without triggering parking minimums?” — twenty retrievals.

The first is a sentence. The second is a graph.

By instrumenting these reasoning steps, we can measure the epistemic cost of compliance. In cities, that means measuring how hard it is—literally, in information-theoretic terms—to understand the law.
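Instrumenting the reasoning steps can be as simple as wrapping the retriever so that every lookup is counted. The retriever and the per-query step counts below are stubs for illustration, not measurements.

```python
class CountingRetriever:
    """Wraps a retrieval function and records how often it is called."""

    def __init__(self, retrieve_fn):
        self.retrieve_fn = retrieve_fn
        self.calls = 0

    def retrieve(self, query):
        self.calls += 1
        return self.retrieve_fn(query)

def toy_retrieve(query):
    """Stub retriever standing in for a real document search."""
    return f"clause relevant to {query!r}"

# Pretend reasoning loops: the simple query stops after one lookup,
# the cross-referencing one after several.
retriever = CountingRetriever(toy_retrieve)
retriever.retrieve("coffee shop on this parcel?")
simple_cost = retriever.calls

retriever.calls = 0
for _ in range(5):
    retriever.retrieve("mixed-use conversion without parking minimums?")
complex_cost = retriever.calls

print(simple_cost, complex_cost)  # the second query costs more retrievals
```

Aggregated over many real queries, those counts become a map of where a legal code makes its readers work hardest.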


Informatics as City Infrastructure

Cities are not just physical systems of pipes, roads, and buildings. They are information systems—rules and relationships encoded in text.

When we measure the complexity of zoning queries, we are also measuring the governability of the urban system. How much cognitive work must a planner, developer, or resident do to comply? How hard is it to know whether a project is allowed?

By building AI systems that minimize query complexity—retrieving only what is necessary, stopping when the answer is good enough—we’re not just making better legal chatbots. We’re building a new way to see the city through the lens of information flow.


The Larger Vision

This research isn’t about automating urban planning. It’s about creating a quantitative language for the city’s complexity itself.

If we can measure how difficult it is to answer a zoning question, we can begin to measure how transparent—or opaque—a city’s laws really are. And that means we can design not only smarter AI, but also smarter cities.