I recently learned about a (so-called) international development project taking place in Niger. People in a region of the country have been suffering from malnutrition and outright hunger due to periodic drought-induced crop failures. To help respond to this humanitarian crisis, an NGO that was providing food aid to the region partnered with some researchers to determine whether it was best to give the aid as cash disbursements or via a cell-phone-based payment system. Since most of the people in this region did not yet have cell phones (or at least ones that supported this functionality), the group distributed inexpensive cell phones to about half of the people receiving aid and disbursed their aid electronically. They also gave cell phones to a control group, who continued to receive their aid as cash.
The NGO and the researchers were sincere in their efforts to relieve the hunger—the project did indeed feed hungry people, and that’s a good thing. The experiment also showed slight benefits to cell-phone-based disbursements, though not for the reasons you might expect. One benefit was that because the disbursement took place at a random time of day, women, who otherwise had little say in money matters, had a chance to delay purchases until the following day, giving them time to discuss purchases with their husbands. Another was that people didn’t have to travel to a disbursement location to receive the aid money. In the end, though, recipients spent an overwhelming fraction of their money on the same thing, regardless of the form of the aid: millet.
I was left thinking about what happens a few years from now. After the NGO moves on, the rains will still fail and the land will still dry up and the crop will still fail. Perhaps it will happen a bit more often as the region desertifies. In any case, because nothing fundamental was done to address the root causes of the problem, people will once again starve. But at least they’ll have cell phones.
A couple of months ago I was thinking about different possible future scenarios for the next couple of decades, and in particular focused on what I think are the most likely two: slow reversal and business as usual (slow growth). However, there was a somewhat intentional oversimplification in the post: I treated them as mutually exclusive scenarios in time and space.
What if they happen concurrently? What if industrial societies (and industrializing ones as well) simultaneously experience techno-utopia, business as usual, slow reversal, and collapse, in, say, rough proportions of 1/8, 1/4, 1/2, and 1/8, respectively? The consequences of these scenarios overlapping are very hard to envision in general. In this post, I’d like to continue looking at computing in the long emergency in this context. Last time I looked at aspects of computing today and how they might be transformed by the limits to growth. Here I’d like to consider how the interleaving of different levels of technology might play out, and take a more holistic look.
There’s a seemingly common belief in the peak oil community that the minerals required to make computers are likely a limiting factor in their long-term production, and thus that computing as we know it is limited in the same way as fossil fuel-based systems. I’m not sure that’s the case, as I suggested in my previous post. There are few if any rare minerals in modern computing devices, and those that are rare—coltan, for instance—can be recycled. While the sort of scavenging of minerals from e-waste that goes on today could hardly be called recycling, since it’s a labor-intensive process, those who do it can easily select for certain minerals should they become scarce. PCBs and chips are made mostly of common elements (silicon, copper, aluminum, tin), along with small but recoverable amounts of gold and silver.
What seems more limiting is the manufacturing facilities. They’re expensive and complex, and as the money to build new ones becomes harder to come by, the relative cost of the chips they make will likely increase. But as long as the facilities exist and there’s a market (and it’s hard to imagine that there won’t be some sort of market for computing devices), existing facilities can continue to produce these components. What seems more likely, then, is not that computers won’t exist, but that, just as with infrastructure, money won’t be spent on them. As people get materially poorer, those who have less need for a computer might do without.
Stepping back from the four possible future scenarios, what is it that they might have in common? That is, certain macro features of the world must underlie them all—energy limits, for example. Yet techno-utopia, as it is usually framed these days, is treated as mostly independent of energy use; rather than talking about flying cars and daily shuttles to moonbases, techno-utopians talk about nanocomputers that can augment your brain and the like.
Energy intensity—the availability of concentrated energy—is one of the key things we are losing as fossil fuels (oil in particular) become more expensive and less available. One consequence we might expect is the waning of technologies and societal structures that depend upon cheap, high-density energy sources. And among computing devices themselves, we might see cheaper, smaller, and less energy- and resource-hungry devices become the primary technology people use to stay connected with the world. (We’re already seeing this happen.)
These factors might manifest as some of the following trends:
Increased social / economic stratification. As some individuals, groups, and regions continue to reap the benefits of what remains of the growth economy (and potentially some revolutionary new technologies), the existing social compact, which is already frayed, might degrade even further. As a result, increased social, physical, and emotional distance might become the coping strategy required of those living a comfortable life. Those sectors of the economy that are less dependent upon high energy density—what we might call the dematerialization economy—might remain strong, though as we’ve seen over the past few years, the growth in employment at Web companies is a small fraction of the job losses in, say, construction. From their perspective (especially if the next point holds true), techno-utopia will still be on track. (How far the inequality rubber band can be stretched would require a whole post in itself.)
Increased technological escapism. In Tom Murphy’s recent post detailing his discussion with an economist, the economist comments that it’s possible to decrease the energy contribution to the economy by virtualizing life. (It’s not just that economist—the technologists I cited in my last post on this subject had a similar wish.) As Adam put it:
My favorite part of the Murphy dialogue is learning that not only would the economist plug into Nozick’s experience machine, but he can’t understand why anybody wouldn’t want to do it.
My response was that if you have a virtual world, there’s no reason to worry about the real world, and indeed people are less and less worried about the real world. And virtual worlds, in some cases, may use less energy to provide similar experiences to real life. So while the economist in Murphy’s dialogue might be onto something, virtual worlds also distract us from the real world that we in fact live in, one in which we might see over a billion climate change refugees this century (a number so vast it’s hard to even imagine).
Maslow’s hourglass. It’s quite possible this has been discussed already, but it seems that technology today is focused on two sorts of needs: those at the bottom of Maslow’s hierarchy and those at the top. As technology advances (and remains somewhat cheap), these properties could be amplified: we could have technology designed to help people achieve self-actualization and express themselves creatively, and, at the other end of the spectrum, tell them where to get some cheap food (though not actually provide that food). But at the same time, it might do little to help people navigate a fraying social, community, and economic fabric, or to stay healthy and employed. (As an exercise, think about how well technology helps people stay healthy, employed, or close to their neighbors. At best, it seems not to make these worse.) This effect may be most insidious when it becomes lopsided—when the benefits of technology are skewed toward the top of the hierarchy, and we end up with the post-NGO scenario described above.
More than anything else, we might want to step back and ask the question Herman Daly asked about the economy: what is it for? What is computing for? What is its ultimate purpose in human life?
One key shift in thinking that might be needed is to begin applying the term technology more broadly. I think of permaculture as a technology, and the people of Niger might have benefited much more had the NGO realized that a technology that can green the desert is more valuable than one that allows a slightly different means of money distribution. If computing can help meet human needs in conjunction with other technologies like permaculture, we should use both, but we shouldn’t ignore a whole class of technologies as we do today.
While technologists and anti-technologists alike seem to agree that technology has its own inherent purpose, I disagree. Computing, and technology broadly, is made by humans and human systems, and its end goals are determined by us. What do we want those goals to be?