Some time back Adam explored the notion of minimalism, and I’d like to revisit that in the context of technology and civilization. The conclusion he seemed to reach was that some sort of middle course is the right one—shunning all technology can be just as limiting as embracing all of it. In part because of his posts, and in part from reading authors at either end of the spectrum, I’ve become more convinced of the value of such a balance. The question is where that balance point actually lies, and how the limits to growth, technology, and culture interact. So I’ve done a bit more reading to explore that question further, though I am still far from an answer.
First, let’s start with Kevin Kelly’s What Technology Wants, which I picked up at the library primarily because I expected to disagree with it. Kelly makes a good point that opting for minimalism / anti-civ might maximize “freedom” (from technology, from complex systems outside our control), but that freedom is really just latitude among a sparse set of options (as he puts it: the freedom to hoe the potatoes whenever one chooses is a limited sort of freedom). On the other hand, he argues that embracing the “technium” gives one more options, at some cost to one’s latitude to choose which technologies to accept; he contends this trade is still a net positive. He puts it this way:
[we] willingly choose technology, with its great defects and obvious detriments, because we unconsciously calculate its virtues.
I think he’s wrong here, though not because a) people are wrong in that calculation or b) they’re not doing any calculation at all, but rather because of the metrics and means used in calculating (and the things we consider virtues). Those metrics come from the broader culture, and as part of the prevailing paradigm we’re not in the habit of questioning them with each calculation. The individual calculations and decisions add up, but no broader examination of the metrics or virtues ever takes place. So yes, people might be doing some sort of rudimentary or short-sighted calculation, but those calculations likely happen within a bad framework. To examine or change that framework, one must get more meta.
Kelly has a fascinating section on the Amish, whom he characterizes, contrary to popular conception, as quite technology-savvy in a quirky sort of way. According to Kelly, the Amish have built advanced energy storage, distribution, and use systems based on compressed air: a central generator compresses air and pipes it to households to run ordinary appliances without electricity (he gives examples of everything from blenders to sewing machines to power saws and drills). What the Amish are really doing, he says, is carefully and slowly adopting technology, and as a result they are a) dependent upon the “technium” and b) effectively a first-world protest movement, not something that could exist (or does exist) in places already operating at or below their level of technology. He also discusses back-to-the-landers he knew who eventually started tech companies: the end result, he suggests, of Amish-style living without the Amish cultural and community limits and without the sacrifice of individual free choice. That is, he contends that absent community-based limits, one naturally adopts more and more technology and eventually gives up a technologically austere lifestyle.
I think Kelly is wrong in his take that those who are not pro-technology (though not necessarily anti-tech) don’t contribute to things used by others: their ideas can spread via the things they build, the books they write, the Internet, etc. And he’s misguided in his diatribe against the precautionary principle, which he claims, if implemented, would prevent all technological development because every technology can be used equally for good or ill. Maybe overall system complexity is the real problem and is what should be limited, but I’m not sure how to measure that (as I discussed last week). Regardless, he comes up with a list of five ways to respond to a new technology, which are pretty good: 1) Anticipate its effects (build a hypothesis), 2) Continually re-assess, 3) Prioritize resulting risks, including ones from nature, 4) Quickly correct harm, and 5) Don’t prohibit a technology but redirect it to other uses or constraints.
Second is the question of where we’re headed as a global civilization, whether that trajectory is inevitable, and what controls it. It seems that the civ and anti-civ camps actually agree on something without realizing it: one side argues that we should let technology set its own course (as if it’s a self-aware complex system in some sense), while the other argues that technology will devour everything if allowed to, so it must be destroyed. (Kelly indirectly admits this when he agrees that “technology is a holistic, self-perpetuating machine.”) Both sides share a premise: that technology cannot be limited. Maybe that’s true, maybe it’s not—by humans, anyway. But what about by ecological limits?
Separate from that is the question of what perpetuates this growth. Adam identified advertising as one possible culprit, and re-reading Jerry Mander reminds me that perhaps not just advertising but, as Mander argues, the medium of TV itself is to blame. Beyond both, it seems that the growth-based economic model is at the root of much of this. If we were in Herman Daly’s steady-state world, would these problems still exist in the same way? I’m reminded of what Meadows wrote in The Limits to Growth:
All people and institutions play their role within the large system structure. In a system that is structured for overshoot, all players deliberately or inadvertently contribute to that overshoot. In a system that is structured for sustainability, industries, governments, environmentalists, and economists will play essential roles in contributing to sustainability.
Third is the question of what sorts of fundamentals exist (emergy/transformity, limits to growth, ecological footprint, etc.). I don’t have a clear notion of how these interact with the technology discussion, but they seem to be the missing piece in the pro-technology case, which generally denies that limits to growth exist or that they’re likely to affect the way technology is developed and the economy functions. Kurzweil, for example, flatly denies that anything can stop technological advancement of the kind he shows on his exponential charts; he likes to point out that recessions and depressions have no effect. (Will the limits to growth? This is a question I’m very interested in, but I have yet to find good data or arguments one way or the other.)
I think Kurzweil might be basing his argument on the fact that we haven’t had a decline sharp or pervasive enough in the last few hundred years to erode the natural buffer that science and R&D have enjoyed; energy growth has continued unabated for centuries. As long as we live in a growth-based world, R&D is what keeps growth going, but it’s only worth funding if growth is still possible. Without growth, the economic argument for R&D is stripped away, leaving only the academic argument of advancement and science for its own sake. And that’s a harder sell.
There’s probably a historical argument here as well: in times when societies weren’t extremely wealthy, only the rich could really pursue science, and so it advanced fairly slowly (at least relative to today), whereas in societies where wealth was broad-based, the state could spend money on all sorts of academic pursuits, some profitable to the state and some not. It seems we might be moving from the latter state back to the former.
That last sentence raises the notion of reversalism. Staniford and Greer had an interesting and friendly, if blunt, exchange a few years ago on the consequences of peak oil. While they agree on the fundamentals, Staniford thinks we’re in for a couple of decades of hard times, not a slow and permanent decline of industrial civilization. I find it very hard to reason about which outcome is more likely, which is why I try to avoid thinking more than 20 years out (not that even that is really possible). If peak oil were truly the only issue we face today, Staniford might be right: we’d substitute fuels as the Hirsch report recommends, grind through the rough transition, and recover. But given climate change and peak everything, I’m not sure how that would work. The answer to the reversalism question is crucial, since it informs the sorts of technologies we should embrace. More study and debate, both scientific and philosophical, is needed on this question.
Fourth is the question of where a good life fits into all of this, whether Hemenway is right about horticultural societies, and what technology should look like in that context. Hemenway makes the case that we want regenerative systems rather than degenerative or merely sustainable ones. Might he be right that it’s not about the end goal but rather the process by which a system (or civilization) sustains itself? If so, the case can be made that the question people have been asking (civ or anti-civ? tech or anti-tech?) is the wrong one, or at least one that doesn’t lead to an answer even if it helps us understand the problem.
That is, once we’ve decided that neither civ nor anti-civ is the right answer, the question becomes something like: what should be the operating mode of the target system? The operating mode—degenerative, sustainable, or regenerative—leads naturally to constraints on what technology or civilization will look like, because only certain forms work well in each mode. (This once again is backed up by what Meadows and Daly have written.) I take Hemenway to be saying that horticultural societies are the ones that best function in a regenerative mode. Kelly makes the case that technology will and must advance, but I think he conceives of technology too narrowly (he cites Wendell Berry as an example of someone stuck in a notion of 1940s America as the pinnacle of technological-human harmony).
Personally, I think of permaculture as a technology, and I hope that someday people figure out how to directly tap the electrical current that plants naturally produce when photosynthesizing, as an alternative to silicon solar panels. (As Odum wrote: “The natural conversion of sunlight to electric charge that occurs in all green-plant photosynthesis after 1 billion years of natural selection may already be the highest net emergy possible.”) The question is what differentiates technologies like these from the rest. Surprisingly, Kelly comes up with a decent set of guidelines: 1) Promotes collaboration between people and institutions, 2) Transparent in origins and ownership, understandable by all, 3) Distributed in ownership, production, and control, 4) Flexible and easy to adapt and modify, 5) Redundant and not monopolistic, and 6) Efficient, minimizing its impact on the ecosystem. These guidelines mesh nicely with the goals of Appropriate Technology.
Sadly, in his final chapter, Kelly discusses what he thinks is likely for the future and goes off the Kurzweil deep end, mostly ignoring his own guidelines. A random example:
Yet we can see more of God in a cell phone than in a tree frog…As the technium’s autonomy rises, we have less influence over the made. It follows its own momentum begun at the big bang. In a new axial age, it is possible the greatest technological works will be considered a portrait of God rather than of us.
Maybe he read McKibben’s The End of Nature and misunderstood its message? Instead of mourning, as McKibben did, the fact that Nature was no longer independent of humanity, Kelly seems to have taken it to mean that if Nature is just another force under humanity’s thumb, then what really differentiates a tree frog from a cell phone?
Fifth, and finally, is the question of whether we’re headed toward an environment conducive to regenerative systems anyway. For a system to expand beyond its ecological footprint (i.e., to use more than its share of the resources in the physical region it operates in), resources must come from somewhere else: either in time (borrowing from the future by drawing down stocks) or in space (geographically). The only way for the latter to happen is via transportation. (Where does information fall? Macroscopically we can treat its transport cost as zero.) Will expensive energy naturally return us to operating within our footprints? Will relocalization thus become a must, as many peak oil authors contend? And will the social environment we then find ourselves in be a perfect fit for regenerative systems, if we have the wisdom to apply them at the right time?
In this exploration, I’ve concluded that a) it’s the larger system that needs to be defined, not the ways in which we operate within it, and in this Daly hits the nail on the head: we want, at minimum, sustainable if not regenerative flows at the macro level, but a rapid and constant churn of ideas, technology, etc. at the micro level; and b) the debates over civ / anti-civ, technology / anti-technology, collapse / singularity, etc. are mostly unhelpful, however interesting, because they’re trapped within the wrong paradigm: one in which technology or civilization itself has a self-determined path.