Andrew Lea applied the “Page 99 Test” to his new book, Digitizing Diagnosis: Medicine, Minds, and Machines in Twentieth-Century America, and reported the following:
From page 99:

Questions about how to describe and represent computers nagged at the hematology team as well. Ralph Engle’s perspective on the nature of the computer—and its relationship to the human—was rather more nuanced than that of many of his contemporaries. Pioneers in biomedical computing frequently lamented that physicians seemed to suffer from the “computer allergy” more severely than did any other professional group. The unreceptive attitude of many traditional practitioners was widely recognized as a, if not the, central barrier to getting computing technologies out of the research laboratory and into the clinic. In 1966, Stephen Yarnall and Richard Kronmal characterized the “psychological resistance” among physicians to computers in diagnosis. In their view, this resistance had three primary facets: “First, physicians have traditionally approached diagnosis in their own individual manner, and attempts at systematization are generally unwelcome; second, many of the systems introduced to-date have required more, rather than less, work by the physician with no immediate gain apparent to him or his patient; third, physicians, and many laymen, fear computers as strange and impersonal machines which may destroy meaningful doctor-patient relationships and even displace physicians to some extent.”

Engle shared these concerns about an unreceptive, even antagonistic, clinical audience. But it was actually the opposite attitude—uncritical acceptance—that came to trouble Engle more deeply. Physicians, he believed, needed a deep understanding of how a computer program worked; only then would they be able to effectively evaluate its clinical value and, more crucially, its limitations: “It concerns me that some physicians are only too anxious to let a laboratory test or a computer make their decision. These tools should only be one input into the decision-making process. The physician must add his own perspective.” The output of a computer program, Engle warned, can convey a false decisiveness that physicians may be all too eager to accept uncritically. “The aura of finality and correctness of the computer,” Engle predicted, “will be difficult for some physicians to overcome.” The single computer output did little to convey the messy, contingent, partial, and uncertain choices that the developers made in creating the program. The computer’s rigid outputs might convey a false impression of certitude.

Engle feared that the language surrounding computerized diagnosis, with phrasing like “thinking machines” and “artificial intelligence,” might undercut any measured understanding of a computer program’s limitations. Specifically, Engle seems to have feared that a “living” metaphor about the computer risked gradually shading into a “dead” one; that is, he worried that metaphors might lose their status as mere figures of speech and begin to be interpreted as real or literal. Artificial intelligence may come to be taken as intelligence, full stop.

Page 99 of Digitizing Diagnosis does not convey the book’s core arguments, but it does give a nice synopsis of one of its themes: namely, the inner disquietude stirred by the earliest efforts to computerize medical diagnosis. Much of this resistance came from fears that computers would hasten the dehumanization of medicine. As many creators of early computerized systems recognized, doctors’ psychological resistance to computers was a large barrier to the successful implementation of computerized diagnostic programs. Yet the reverse psychological stance—that of openness—also posed threats of its own, as Engle describes here.
This excerpt speaks to a larger problematic that engineers came up against, one that runs through this book. Developers of computer systems frequently found themselves toggling between, and trying to reconcile, divergent conceptions of “the physician”—their attributes, attitudes, and aptitudes. Were physicians rational, learned, and atomistic clinical actors, loath to see the computer enter the medical realm? Or were they fallible beings, only too eager to outsource their cognitive labor to lab tests and algorithms?
In trying to implement computerized systems, engineers found themselves in what STS scholars have called “the dilemma of application.” On the one hand, adoption of computerized tools depended on physicians not feeling threatened or undermined by them. But on the other hand, the justification for clinical decision support systems was predicated on the assumption that physicians needed help—that their capabilities were limited, their rationality bounded. Many of the same paranoias and paradoxes that plagued early computerized diagnostic systems continue to shape efforts to imagine, create, and implement machine learning algorithms in medicine today.
Follow Andrew Lea on Twitter and visit his website.

--Marshal Zeringue