Quarterly: Spring 2020

Proving the Success of Economic Development – An Impossible Task?


By Minh Dan Vuong

The take-away: To isolate economic development results from broad economic trends, we used a comparison area. We then looked at the difference between (1) the change in our target area over time and (2) the change in our comparison area over time.


Over the last two decades, the City of Portland has invested a sizable amount – $207 million – in an urban renewal area called “Lents.” This money bought land, subsidized new buildings, improved roads and parks, and supported businesses and homeowners directly.

We wanted to know the results of these investments – whether goals for improving the economy and housing had been achieved.


Naturally, we looked first to our economic development agency to check if it had reported results. Disappointingly, we mostly found plans full of forward-looking promises and documents for specific transactions, but no analysis of how the investments overall had changed the economy and housing market.

We then looked at the data ourselves and saw, for example, that property values had shot up and the number of jobs had grown, but the poverty rate had worsened in the Lents area.



How do we know if these trends were a direct result of urban renewal investments? You might look at the growing jobs and proclaim success, or look at rising poverty and say, “without our investments, it could have been a lot worse!” This question of proving causality was a vexing one, but we thought it was more interesting than counting how many buildings were redeveloped and which specific organizations got grant dollars.

Predictable criticisms of these statistics were: “The Great Recession happened! Incomes stagnated across all groups except for the rich! We can’t impact the wider economy! Other organizations also made investments! You’re forgetting this specific project that’s really meeting a community need!”

So, we needed to put these changes over time in Lents into context. Or in auditor-speak, we needed criteria.

  • Criteria can come from the agency’s own goals and targets. For example, our economic development agency set a numeric goal to raise the homeownership rate among people of color. But, more often, we found goals were outdated or vague. Goals for building affordable apartments dated back to 2000. Moreover, how would you know when you have successfully “increased the vitality and economic health of commercial areas” or “intensified industrial uses?”
  • Looking at the city overall was another way to give context. Usually, citywide data is easily accessible from the Census Bureau and from existing studies. Putting the citywide numbers side-by-side with our urban renewal area took care of some of the macroeconomic context, such as the economy’s up and down cycles. When we revisited the earlier statistics, Lents looked like it was lagging behind the citywide property value and poverty trends.


But the Lents neighborhood is not like the overall city – the city also includes wealthy areas, downtown high-rises, tourism and nightlife magnets, and industrial warehouses by the river. Commute patterns, employment patterns, and infrastructure vary across neighborhoods, and each community’s needs differ. Even before the urban renewal investments started in 2000, Lents was more vulnerable and poorer than the rest of the city.

  • To fine-tune our analysis, we put Lents side-by-side with a comparison area. We chose an area that was similar to Lents in demographics and jobs at the beginning of the urban renewal investments, with one important difference: the comparison area did not receive any urban renewal investments. Now we saw a different picture.

Although we didn’t go this far in our audit report, you might infer the following from the comparison:

  • Even though property values grew in Lents, they didn’t grow as fast as in our comparison area, so Lents fell further behind.
  • Job growth in Lents was 36 percent, but likely only 20 percentage points of that were due to urban renewal, because the comparison area had 16-percent job growth.
  • Poverty was up in Lents by 7 percentage points, but also up by 4 percentage points in the comparison area, so maybe only 3 percentage points are related to urban renewal.

Economists and statisticians call this approach “difference-in-differences estimation” because it infers the effect of a “treatment” – urban renewal investments, in our case – from the difference between two differences. Notice that we were doing more than comparing two areas at a single point in time, because we were also looking back in time. And we were doing more than a longitudinal study, because we were using a comparison area.
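
To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using only the figures quoted above. This is an illustration of the method, not code from the audit.

```python
# Difference-in-differences: (change in the treated area) minus
# (change in the comparison area). Figures are the ones quoted above.

def diff_in_diff(change_treated, change_comparison):
    """Treatment effect estimate: the gap between the two changes."""
    return change_treated - change_comparison

# Job growth since 2000 (percent change in each area).
jobs_effect = diff_in_diff(change_treated=36.0, change_comparison=16.0)
print(f"Job growth attributable to urban renewal: {jobs_effect:.0f} percentage points")

# Poverty rate (percentage-point change in each area).
poverty_effect = diff_in_diff(change_treated=7.0, change_comparison=4.0)
print(f"Poverty change attributable to urban renewal: {poverty_effect:.0f} percentage points")
```

The function is trivial on purpose: the hard part of difference-in-differences is not the subtraction, it is choosing a credible comparison area.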

This method works only if the sole difference between the two areas is the treatment – what we’re trying to measure. Any other confounding factors will mess up your model. We were relying on the assumption that Lents and our comparison area were following the same trends and were subject to the same forces, except for Lents getting urban renewal investments.

So, if someone had made huge investments in our comparison area, but not in Lents – say, a big campus expansion or an anti-poverty program – this analysis wouldn’t pick up on that and would produce flawed conclusions. You’d also get flawed conclusions if a natural disaster had hit Lents, but not our comparison area. But it’s okay for this analysis if there’s a citywide change – say, an increase in transportation funding – because that applies to both the study area and the comparison area. That’s also why we wanted the comparison area to be similar to Lents at the start, and why we ruled out any comparison area that wasn’t within city limits or an eastside suburb.
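
One common way to probe that assumption – not something our report includes – is to check whether the two areas were moving in parallel before the investments began. A minimal sketch, with entirely hypothetical pre-2000 job counts:

```python
# A rough "parallel trends" check: before the investments began in 2000,
# were the two areas moving together? All values below are hypothetical.

def growth_rates(levels):
    """Period-over-period percent growth for a series of levels."""
    return [round((b - a) / a * 100, 1) for a, b in zip(levels, levels[1:])]

# Hypothetical job counts for 1990, 1995, and 2000 (pre-treatment).
lents_jobs = [4000, 4200, 4400]
comparison_jobs = [5000, 5250, 5500]

print("Lents pre-trend (%):     ", growth_rates(lents_jobs))
print("Comparison pre-trend (%):", growth_rates(comparison_jobs))
# Similar pre-treatment growth supports, but cannot prove, the assumption
# that the two areas would have tracked each other absent the investments.
```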

Ultimately, we did not find clear-cut answers. Results were mixed: some indicators went up, some down, and the error margins on the American Community Survey made comparisons difficult at times. Another challenge with our data was that we couldn’t tell how many people had moved in or out of Lents, or how many residents from 2000 were still there now. But we answered important questions that the community raised and met its desire for more transparency about the investment of $207 million.
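
For readers facing the same American Community Survey issue: the Census Bureau’s published guidance is to combine the margins of error when comparing two estimates. A minimal sketch, with hypothetical numbers:

```python
import math

# ACS margins of error are published at the 90 percent confidence level.
# Census Bureau guidance: the MOE of a difference between two independent
# estimates is the square root of the sum of the squared MOEs.

def moe_of_difference(moe_a, moe_b):
    """Approximate margin of error for the difference of two ACS estimates."""
    return math.sqrt(moe_a ** 2 + moe_b ** 2)

# Hypothetical poverty rates (percent) with their published MOEs.
lents_rate, lents_moe = 25.0, 4.0
comparison_rate, comparison_moe = 21.0, 3.5

diff = lents_rate - comparison_rate
diff_moe = moe_of_difference(lents_moe, comparison_moe)

# If the difference is smaller than its MOE, we can't call it
# statistically significant at the 90 percent level.
print(f"Difference: {diff:.1f} pp, MOE: +/-{diff_moe:.1f} pp, "
      f"significant: {abs(diff) > diff_moe}")
```

In this hypothetical, a 4-point gap disappears inside a 5.3-point margin of error – exactly the kind of ambiguity we ran into.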

When is it a good time to audit economic development? Anytime! I’m sensing the field is hesitant to analyze its own work critically and publicly – maybe practitioners are under pressure to tout success stories and champion their programs. After all, which outside investor wants to work with an economic development agency known for problems? Which neighborhood group wants to support a future economic development program that’s bad news? But without analyzing outcomes, nobody knows whether the investments are working or whether adjustments are needed.

For data on 11 more indicators and details, please see our audit report Lents Urban Renewal: 20 Years of Investment with Minimal Evaluation. If you have input or ideas, please reach out!


About the Author

Minh Dan Vuong is a performance auditor with the Portland City Auditor’s Office. His most recent audit assessed economic outcomes in one of Portland’s urban renewal districts. He also audited economic development performance measures and the convention center’s performance targets in San José, California. He holds a bachelor’s degree in Economics from Stanford University and volunteers on ALGA’s Online Resources Committee.