Revisiting the DOM index

A few months ago, I discussed an idea that Adam and I have been contemplating for a little while now: the DOM index. Our objective was to create a metric or set of metrics that captures well-being in a way that isn’t blind to ecological limits or human needs, and one that is updated daily. That is, a replacement for the Dow Jones Industrial Average as a measure of national well-being.

After some further discussion, we identified a few key challenges to developing the index:

  • How to model the dynamics of the system.
  • How to combine multiple data points and sub-metrics into a single value.
  • How to appropriately discount the future.

I think we’ve more or less resolved the first two, though not in the way we had originally expected. What we need is a model that we can be confident includes all of the important systems and sub-systems, and their interactions. Beyond that, the model should be able to forecast future dynamics so that we can (if we choose to) factor future states into the current index value. In trying to design such a model, we’d accumulated many dozens of variables and data sources, and the task of combining them meaningfully was daunting.
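For concreteness, here is a minimal sketch of what “combining sub-metrics into a single value” might look like, assuming each sub-metric has already been normalized onto a common 0–1 scale. The sub-metric names and weights are illustrative placeholders, not our actual inputs.

```python
# Minimal sketch of aggregating normalized sub-metrics into one index value.
# Sub-metric names and weights below are illustrative placeholders.

def combine(sub_metrics, weights):
    """Weighted geometric mean of sub-metrics, each normalized to (0, 1].

    A geometric mean keeps the index low whenever any single component is
    near collapse, so strength in one area can't fully offset weakness in
    another; that is the "not blind to ecological limits" requirement.
    """
    total = sum(weights[name] for name in sub_metrics)
    value = 1.0
    for name, score in sub_metrics.items():
        value *= score ** (weights[name] / total)
    return value

today = {"food_per_capita": 0.8, "pollution_headroom": 0.6, "resource_stock": 0.5}
weights = {"food_per_capita": 1.0, "pollution_headroom": 1.0, "resource_stock": 1.0}
print(combine(today, weights))   # one daily number, like a closing price
```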

What if, instead, we were to use World 3, the model developed for Limits to Growth? It has been around for almost four decades now, and it is one we can place significant trust in. The model is of course necessarily, well, limited, in that it begins with some initial conditions and then models what transpires from that point forward. Each scenario the authors present is based upon a pair of inputs: initial conditions and policy choices. While reality thus far has likely tracked their business-as-usual Scenario 1, it has probably diverged at least a little, and it has the potential (if we make particularly good or bad policy choices, or if conditions change in the coming years) to diverge significantly. Our proposal is to use World 3 as the underlying model for the DOM index, but with both up-to-date conditions and up-to-date policy choices provided as inputs.
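To make the “scenario = initial conditions + policy choices” framing concrete, here is a deliberately crude stock-flow toy. These are not the World 3 equations, and all of the numbers are invented; the point is only that the same dynamics run under different inputs produce different scenarios.

```python
# A deliberately crude stock-flow toy (NOT the World 3 equations): one
# nonrenewable resource drawn down by growing industrial output.  A
# "scenario" is simply the same dynamics run under different inputs.

def run_scenario(initial_resource, growth_rate, years=200):
    resource, output, trajectory = initial_resource, 1.0, []
    for _ in range(years):
        # Growth is throttled as the remaining resource falls below half.
        scarcity = min(1.0, 2.0 * resource / initial_resource)
        if resource > 0:
            output *= 1.0 + growth_rate * scarcity
        else:
            output *= 0.95                      # decline once the resource is gone
        resource = max(0.0, resource - output)  # output draws the resource down
        trajectory.append((resource, output))
    return trajectory

scenario_1 = run_scenario(initial_resource=1000, growth_rate=0.03)  # "standard run"
scenario_2 = run_scenario(initial_resource=2000, growth_rate=0.03)  # doubled resources
```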

We’re still left with one issue: how do we factor in present and future well-being? Humans naturally discount the importance of future events, as do political and economic systems (even though they shouldn’t). However, the index shouldn’t ignore future well-being. Let’s take a look at LTG Scenario 2, which assumes double the initial resources but the same basic policy choices. The increased resources put off the limits to growth for a couple of decades, but afterwards the crash is steeper. Here’s what they say about it:

…resource depletion occurs considerably later in this run than it did in Scenario 1, allowing growth to continue longer. Expansion continues for an additional 20 years, long enough to achieve one more doubling in industrial output and resource use. The population also grows longer, reaching a peak of more than eight billion in the simulated year 2040. Despite these extensions, the general behavior of the model is still overshoot and collapse.

Suppose we’re in the year 2012, and we’re looking at these two options: Scenario 1, which has fewer resources and near-term limits, or Scenario 2, which has more resources but more severe intermediate-term limits. Which is preferable? In other words, how do we appropriately account for the future behavior of the model in a measure of the present?

It may not be possible to do this perfectly, so an alternative is to create two indexes: one that represents present well-being (and steeply discounts the future) and another that uses the same inputs but serves as a leading index (and doesn’t discount the future as much). In such a design, the leading index would likely be the same or worse in Scenario 2 as in Scenario 1, despite the fact that Scenario 2 reaches its limits later, whereas the present index would be better in Scenario 2 than in Scenario 1. We also have to account for the rate of change of well-being, not just the values themselves: what makes the Scenario 2 decline worse is its rate of decline. So we’ll include a factor for the rate of change, not just the absolute magnitude.
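A rough sketch of the two-index idea, assuming the model hands us a forecast well-being trajectory with the first entry being the present. The discount factors and the decline weight below are placeholder values, not settled choices.

```python
# Sketch of the two-index design: a discounted average of forecast
# well-being, penalized by the (discounted) rate of decline.
# Discount factors and decline_weight are placeholder values.

def dom_index(trajectory, discount, decline_weight=0.5):
    weights = [discount ** t for t in range(len(trajectory))]
    level = sum(w * v for w, v in zip(weights, trajectory)) / sum(weights)
    # Rate-of-change term: year-over-year drops, discounted the same way,
    # so a far-future crash barely touches the steeply discounted index.
    drops = [max(0.0, a - b) for a, b in zip(trajectory, trajectory[1:])]
    pair_weights = weights[1:]
    decline = (sum(w * d for w, d in zip(pair_weights, drops)) / sum(pair_weights)
               if pair_weights else 0.0)
    return level - decline_weight * decline

forecast = [0.80, 0.82, 0.83, 0.70, 0.50, 0.40]     # made-up trajectory
present_index = dom_index(forecast, discount=0.3)   # steeply discounts the future
leading_index = dom_index(forecast, discount=0.95)  # weighs the future heavily
```

With a trajectory like this one, the present index stays close to today’s value while the leading index is dragged down by the forecast decline, which is the separation we’re after.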

Our next step is to match up data sources for each of World 3’s sub-models, and build a replica of World 3 that we can tweak as appropriate as inputs or policies change.

As a side note: while the World 3 model didn’t explicitly capture notions of peak oil or climate change, I was reminded of how beautifully it captures the squeeze I was trying to describe last week. In Scenario 1, it’s primarily a lack of resources that causes decline. However, if resources aren’t a bottleneck, then, as they put it:

Higher levels of industrial output cause pollution to grow immensely; the pollution level in Scenario 2 peaks about 50 years later than it does in Scenario 1, at a level around five times higher. Part of this rise is due to greater pollution generation rates [a], and part is due to the fact that pollution assimilation processes are becoming impaired [b].

[a] is analogous to, for example, IPCC-style energy projections (i.e. ones that assume that energy consumption and emissions will continue to grow for a long time rather than peaking soon, and that this would cause a rapid rise in greenhouse gases). [b] is analogous to, for example, the effects we’re already seeing in the decreasing ability of oceans and forests to absorb carbon. Whether [a] happens may depend largely on whether we’re living in something closer to Scenario 1 or Scenario 2 (or something else entirely), but that’s a central question in the Climate Change vs. Peak Oil discussion.
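Here is a toy sketch of the two quoted effects, with invented numbers: [a] pollution generated in proportion to industrial output, and [b] an assimilation rate that degrades once the pollution stock passes a threshold.

```python
# Toy sketch of the two quoted effects, with invented numbers:
# [a] pollution generated in proportion to industrial output, and
# [b] assimilation that is impaired once pollution passes a threshold.

def pollution_step(pollution, output, gen_rate=0.02, base_absorption=0.05, threshold=5.0):
    generated = gen_rate * output                               # effect [a]
    impairment = 1.0 / (1.0 + max(0.0, pollution - threshold))  # effect [b]
    absorbed = base_absorption * impairment * pollution
    return pollution + generated - absorbed

pollution = 0.0
for year in range(100):
    output = 1.03 ** year              # output kept growing, as in Scenario 2
    pollution = pollution_step(pollution, output)
```

Even this crude version shows the squeeze: once assimilation is impaired, the pollution stock climbs much faster than output growth alone would explain.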
