A former Guggenheim Fellow, Fulbright Fellow, Fulbright Specialist, and Visiting Scholar at the Stanford Center for Advanced Study in the Behavioral Sciences, she has published ten books. Always On: Language in an Online and Mobile World won the English-Speaking Union’s Duke of Edinburgh English Language Book Award for 2008. Words Onscreen: The Fate of Reading in a Digital World appeared in 2015. How We Read Now: Strategic Choices for Print, Screen, and Audio was published in March 2021.
Baron applied the “Page 99 Test” to her new book, Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing, and reported the following:
Opening a new chapter (“Machines Emerge as Authors”), page 99 begins innocuously enough:

The title won’t zoom to the top of anyone’s bestseller list, though the author couldn’t care less. For Lithium-Ion Batteries, published by Springer Nature in 2019, was written by a computer (which the publisher dubbed “Beta Writer”). Enter the first machine-generated textbook. The accomplishment, while impressive, is hardly surprising. Computers are tailor-made for zipping through vast quantities of research and summarizing findings. It didn’t hurt that Springer has a massive online database to draw on.

Then comes another example, this time about Philip M. Parker’s

patented system incorporating a template and databases (plus internet searches) to automatically turn out books. He’s produced more than 200,000 of them, ranging from medical guides to collections of crossword puzzles to volumes filled with quotations.

We might debate whether these are really books or more like compilations, but regardless, the sheer output is daunting. Interviewed in 2013, Parker envisioned the day when machines could write doctoral dissertations.

The page finishes with

Now a decade on – and with large language models as today’s text production tool du jour – that time could soon be now. How did we get here?

Page 99 is hardly the crux of Who Wrote This? Yet it captures the stylistic flavor of what comes before and after by using real-world examples in tracing historical evolution. The page might be described as a warm-up to what ensues in the rest of the chapter. Page 100 recounts the earliest case of machine as author when, in the early 1950s, Christopher Strachey cranked out insipid love letters on a Ferranti Mark I computer. More milestones then unfold, including Joseph Weizenbaum’s ELIZA program, James Meehan’s Tale-Spin, hypertext fiction, and contemporary text-generation programs like Jasper, running on GPT-3. If you did away with page 99, the rest of the chapter would flow fine.
What’s missing from that opening foray on page 99 is the core theme underlying the book: What happens to human writing if “the dream comes true” and AI can write and edit language that’s indistinguishable from what a human can do? This question is given ample foregrounding in prior chapters, including discussion of the evolution of writing itself, the impact writing has on our minds and brains, the emergence of college-level writing instruction in America, and ETS’s introduction of natural language processing tools to assess student essays. Also absent from page 99 is any hint of what natural language processing is all about, especially for machine translation.
Then there’s what comes after page 99. Later chapters in the book discuss how AI’s efficiency potentially threatens the jobs of professionals such as journalists and translators, arguments over whether we can (or shouldn’t) call AI creative, ways in which everyday users rely on AI writing tools, and where individuals choose to draw the line between collaboration with AI and outright automation.
Had you only read page 99, you’d find two stories about harnessing computers to create books. But you would miss out on why those stories matter for the future of human writing.
Learn more about Who Wrote This? at the Stanford University Press website.

--Marshal Zeringue