In the not-too-distant past, I was diagnosed with the adult version of Attention Deficit Disorder. The symptoms had certainly been there for a while: an inability to focus on mundane, routine tasks; an extreme propensity for procrastination; severe bouts of absentmindedness, inattentiveness, and forgetfulness.

So when I received my diagnosis (determined through the most asinine and simplistic questionnaire imaginable), I experienced a sense of relief. Not only was I now equipped with a medical explanation for, even a justification of, my many inadequacies; I also had access to a solution.

The solution came in the form of some little blue pills. I was prescribed a generic version of Adderall, the wonder drug that overactive children are force-fed and the 1 percent willingly gobble like candy, all for the promise of enhanced concentration and mental performance. Perhaps best of all, it was a chemical fix that required no real responsibility for my habits and behaviors, instead allowing me to write them off as unmalleable, inherited flaws of my own faulty wiring. I couldn’t change anything, is what I told myself. The Adderall would have to do it for me.

*                    *                    *

We live in an age defined by technology. The proliferation of every sort of gizmo and gadget is evidence of this, as is the exponential development of new types of technology. But more telling still is the way our society thinks about technology itself.

Steve Jobs, the technological innovator par excellence, is the ultimate figure we revere and seek to emulate, not our politicians, religious leaders, or even our celebrities. To this end, the “STEM” fields are increasingly the object of our educational affections (and dollars). And humanity’s confidence in technology’s potential not only to cure the most daunting challenges but even to generate culture is akin to blind faith in a benevolent deity. (According to Pew, “fully eight in ten (81%) expect that within the next 50 years people needing new organs will have them custom grown in a lab, and half (51%) expect that computers will be able to create art that is indistinguishable from that produced by humans.”) In fact, we would not be incorrect to note that the contemporary cult of technology is in many ways a ghastly sort of religion, with its own prophets, religious artifacts, and path to salvation.

This divinization of technology and its capacities has rightly prompted dire warnings from skeptics. But too often their suspicions of technological advancement boil down to little more than gut feelings and an arbitrary discomfort with change, leaving them without a framework of analysis that would allow them to distinguish between “good” and “bad” technology, or even to define what technology is.

For this purpose, I turn to Russell Hittinger’s instructive essay, “Christopher Dawson on Technology and the Demise of Liberalism.” As part of his larger effort to rearticulate Dawson’s claim that liberalism was rather quickly supplanted by a new technological order that increased government power and decreased individual liberty, Hittinger helps to provide an exceptional understanding of what malignant technology is and what it isn’t, and why its dominance is cause for alarm.

The type of technology that Hittinger, and Dawson before him, warns against is not to be confused with “science, which is simply the effort to understand the natural environment.” Nor should we be alarmed by the mere “tools of applied science,” such as steam engines and computers. Rather, Hittinger defines malignant technology as “the systematic application of tools to culture, especially those areas of culture that had always been reproduced by humanistic activity, for example, sexual intercourse, family, religion, and economic exchange.”

This kind of technology has “nothing to do with the older … notion of ‘labor-saving’ devices.” Instead, it is “aimed at a new cultural pattern in which tools are either deliberately designed to replace the human act, or at least have the unintended effect of making the human act unnecessary or subordinate to the machine.”

Why should this concern us? Primarily, because this type of technology kills virtue.

By replacing the actus humanus central to the scholastics’ virtue ethics framework, technology has the capacity to supplant the opportunity for humans to develop habits—repeated moral actions—the very realm where the virtues are instilled. Or as Hittinger puts it:

Hence, the policy of mutual assured destruction supplants diplomacy; the contraceptive pill supplants chastity; the cinema supplants recreation, especially prayer; managerial and propaganda techniques replace older practices and virtues, and so on.

And indeed, with Hittinger’s helpful distinction, we can go “so on” in our own evaluation of contemporary technology, noting how recycling (devoid of a reduction in consumption) supplants moderation and temperance, and how drone warfare (in a similar way to birth control) supplants prudence and restraint while promising control and precision; or, in my own experience, how the chemical solution promised by Adderall supplants the need to develop discipline, even fortitude.

To be clear, I am not denying that chemical factors outside of an individual’s behavioral control affect one’s ability to concentrate, remember details, and rein in impulses. Mental health has a biological component just as prevalent as any other aspect of human health. And technology should be used to treat chemical imbalances or disorders that negatively impact an individual’s life.

But the key characteristic of malignant technology is its tendency to completely displace and box out any opportunity for human activity. Pharmaceuticals to treat Attention Deficit Disorder are too often offered not as complements to forming healthy behavioral habits, but as categorical replacements for human activity. Not only do they preclude opportunities for human activity, but they normalize the belief that changing one’s behavior is futile, inferior, and unnecessary.

And by relieving these areas of any aspect of human responsibility or activity, bad technology doesn’t merely choke out virtue; it encourages vice, fundamentally altering human behavior by its mere existence. In other words, technology isn’t neutral: it has the capacity to alter human behavior, culture, and standards of morality.

This is a somewhat controversial claim to make, and Hittinger chooses “the problem of contraception” as his case study for demonstrating the coerciveness of technology, noting its ability to “completely reorganize a cultural order, from its systems of justice to its economic markets, to its religious institutions.”

Hittinger argues that even though Anglicans had broken ranks with the traditional Christian prohibition of contraception at the Lambeth Conference in 1930, and several other Protestant sects soon followed, this was still a “conservative” acceptance of the technology, to be used only by married couples for grave reasons. The sexual ethics of the day were not fundamentally revolutionized.

That revolution happened in the 1960s. Hittinger asks us rhetorically, “What changed?” before providing his answer:

The change took place primarily because of a technological advance. The progesterone pill was developed in the late 1950s and shortly thereafter was marketed in the United States. The technological characteristic of the pill was crucial: orally administered, requiring no surgical procedure, it was seemingly a pill alongside other pills.

But Hittinger doesn’t limit technology’s normative potential solely to an increase in its usage. He highlights its capacity to change laws concerning that usage, as was seen in the invention of a right to privacy in 1965’s Griswold v. Connecticut, and its ability to break down moral taboos. Contraception didn’t become prevalent because it became morally licit. Rather, it became morally licit (and therefore prevalent) because it became highly effective. “The moral and legal orders,” Hittinger says, “are to be defined by the efficiency of modern [technology].”

By his account, the conforming of legal emancipation to the technological advance of the Pill (and safe abortions) represented this tendency, and the recasting of this development as a “social necessity” can be traced through changes to the Bill of Rights, the removal of common law referencing a husband’s responsibility to his wife and children, the relaxing of divorce laws, changes to the moral theologies of certain churches, and the development of similarly normative public school curricula.

We can similarly see today how the efficacy of drone warfare has changed the way our government fundamentally understands what is just and legal in the context of military strikes. The development of drones didn’t provoke thoughtful discussion on how they might be incorporated into the already existing understanding of viable targets and acceptable collateral damage—it simply changed those standards to accommodate their widespread usage, with the Attorney General bending over backwards to defend actions whose moral content would have been seriously questioned had drones not been the ones carrying them out.

Of course, drones can be used justly in warfare. Likewise, not every individual who recycles is committing a sin, nor is everyone who uses pharmaceuticals for mental health reasons vicious. Still, these technologies are far more trouble than they’re worth.

A common response to such sweeping criticism of technology is that technologies, abstractly considered, are not inherently evil. The moral dimension of a given technology, these critics say, has more to do with the usage of the technology than any characteristics it might inherently possess. “Guns don’t kill people, bad people do,” exemplifies this logic in perfect form.

Hittinger anticipates this charge and says that we are mistaken to evaluate technology in a vacuum and removed from cultural contingencies:

Whereas the moralist will examine human choices one by one, focusing upon the particular act, the cultural historian is interested in cultural habits and institutions; for these trace out the actual and imaginative bounds of men and women as social beings. It is in this latter respect that the problem of modern technology is something more than the moral problem of individual choices.

The “problem of modern technology” is leading to the abolition of culture, the decline of virtues, and the flourishing of vices; technology is literally supplanting our ability to be social beings. This is indeed a new form of barbarism, more alarming than any before, because, as Hittinger notes, “modern barbarism” is not only “more culturally primitive than the barbarians of old, but it is immeasurably more powerful, prosperous, and ruthless.”

In other words, it has the capacity to destroy that which makes us human with terrifying efficiency.