A Ghost in the Academy
Academia’s growing reliance on AI is nothing new—but what that means for the future of education remains unclear
by Ryan Walraven // Illustration by Caterina Gerbasi
It starts with a blinking green cursor. A geometric shape alive with energy and then dark again. The void left behind is a promise, but also a threat. Anything can lurk beyond the edge of that empty cursor. It might be a fairytale or a piece of code, a complex calculation or a world-changing scientific result. And the frightening thing is, whatever appears in the prompt has the potential to take on a life of its own—a life we do not understand and may not be able to control.
Our society has been wary of the power of engineered life for decades, but only recently has artificial intelligence experienced a meteoric rise from sci-fi stories into our everyday lives. AI has generated memes and deep fakes, written countless college essays, inspired congressional hearings, and become fodder for barroom prognostication and classroom debates. It’s also quietly taking the halls of academia by storm, changing not just the way we teach, but the way we think, learn, and study the world.
When the first chatbot, ELIZA, was created by computer scientist Joseph Weizenbaum at MIT in the 1960s, it was easy to dismiss the technology as little more than a parlor trick. At the time, it was common to argue that humanity was no closer to a general artificial intelligence than the ancient Egyptians were to the microprocessor.

But the bleeding edge of technology has grown sharper at an alarming rate, and AI has grown sharper with it. The use cases for such technology have grown from the pseudo-conversation of those early chatbots, to playing against us in video games and beating masters at chess, and now to tasks like writing papers and testing computer code with ChatGPT, the chatbot released in late 2022. When I asked ChatGPT to write a short story about sentient AI, it nimbly generated a fairytale about an AI called Nova that surprised scientists by thinking on its own and refusing to carry out unethical tasks. When I asked it to imagine a particle physics result for me, it generated a page-long abstract about the X-particle that wouldn't have felt out of place in the physics section of arXiv, the open-access online academic archive. And yet it is precisely this ability to mimic human creativity that is causing such consternation in the world of academia.
Despite their limitations, machine learning systems like ChatGPT and Stable Diffusion have been rapidly adopted in many industries—and education is no exception. It wasn't long after the release of ChatGPT that students began using the free tool to write assignments, albeit with a little tweaking to remove the most obvious signs of AI authorship. And it's easy to do; the guides are everywhere.
This presents a problem for faculty. How can they assess their students if AI can instantly write their papers for them? In creative writing classes, students are expected to learn the ins and outs of writing by producing their own outlines, drafts, and revisions. In history or sociology, students are expected to dig up sources, do research on a topic, and write their own analytical papers. But now ChatGPT can do this for them, including plopping down a "bibliography" in less than a minute—although the works it cites may or may not exist, as it turns out. Like a cybernetic genie, its powers are out of the bottle, and yet we get more than three wishes—seemingly infinite wishes, at least for the time being. The question is, will students fail to learn basic research and writing skills if they have an unlimited and inexhaustible AI assistant, or will the proliferation of AI-generated papers wind up as evidence that the emphasis on composition was just one more artifact of bygone educational standards?
For now, most schools are labeling AI-generated papers as cheating. But a machine-learning Cold War is brewing, with algorithms being developed to detect the work of AI like ChatGPT, and improvements being made to make AI output seem more human. This may result in an endless struggle as the algorithms themselves and the systems meant to detect their output both grow more capable and sophisticated, akin to the endless back-and-forth between spammers and email filters; or, more worryingly, it could even empower spammers and other internet ne'er-do-wells with new tools and an even greater capacity for online mischief.
Now that this machine-learning arms race has begun, it cannot be easily stopped. Faculty are being placed in the position of having to navigate an increasingly complex and murky landscape, especially when it comes to edge cases—many of which may not even be detectable. What of students who use machine learning to kickstart their projects, rather than complete them outright? And if AI tools do become commonplace, could such bans have the inadvertent side effect of failing to teach students the very "cheating" that may shortly be required by many writing and research jobs, for good or for ill?
But the problem goes deeper than just assigned essays. AI's prominence has only recently become obvious in the well-lit halls and offices of tenured faculty, but the viral phenomenon has long lurked down in the dank basements, untouched by sunlight, where graduate students perform much of the real work of scientific research. Artificial intelligence isn't just writing essays or drawing portraits of Baby Yoda and H.P. Lovecraft. It has already become deeply entwined with the world's research apparatus—and it looks like it's here to stay.
The 1’s and 0’s encoded in the unfathomable depths of humanity’s collective hardware can now make predictions of their own.
When I mentioned to a colleague that I was testing my students to see if they could differentiate AI images from textbook ones, he jokingly responded, "Nature!"—a reference to the deluge of machine learning-related papers that have been published in the past decade. Computational language, game theory, advanced geometry, vision and pattern recognition, and information theory are just some of the many fields of AI-related research. Creating innovative and more efficient learning algorithms is one of the major goals, and the topic of endless papers. Between 2010 and 2021, the total number of AI-related publications doubled, reaching 334,497—and that number is still growing rapidly.
But AI isn't just a topic in scientific papers. It's also literally helping create those papers. At least four scientific articles have listed ChatGPT as a co-author in 2023, and the year is young. Perhaps this is surprising, given the worries over AI and homework. The scientific community has disapproved, but that hasn't stopped plenty of authors from plugging questions into ChatGPT to get their papers started, or grad students from using it to make headway on a dissertation. It's easy to frown at such examples, where we expect grit and pots of coffee to prevail during the long nights of research—but it ultimately doesn't matter. AI is happening—it's already here and it's evolving fast. As a ChatGPT-generated Ian Malcolm might say, intelligence, uh, finds a way.
An editorial this past January in Nature called the phenomenon "the AI writing on the wall," and went on to describe ChatGPT as "astonishingly good" at writing realistic text. Scientific publishing already has a problem with scam journals, questionable conferences, plagiarism, and fake results. The possibility of AI conjuring up abstracts in a matter of minutes, combined with the ruthless demands of publish-or-perish culture and the declining rate of faculty hires, suggests that in the not-too-distant future scientists may find themselves competing with artificial authors, or at the very least curating the output of a program like ChatGPT rather than writing from scratch.
And machines are already doing more than that. Much more.

In mycology, neural networks are learning to distinguish new species of mushrooms using genetic testing. In physics, they are picking out elusive particle tracks deep under the Antarctic ice. In materials science, they are modeling chemical processes and predicting the properties of new materials. Neural networks aren’t replacing our old research tools entirely; rather, they’re being used alongside them, and like the quirky scientists of lore, they’re providing their own unique view on many problems.
A recent article in The Guardian describes it as the dawn of post-theory science, a phenomenon predicted about a decade earlier by Chris Anderson of Wired magazine. In the past, science has involved a process of observing a phenomenon, creating a hypothesis or theoretical model to explain it, and then testing that hypothesis with empirical data. Such theories are then used to make predictions or develop new technology. Physical theories tend to develop over time as new data accumulates, from the incorrect laws of motion developed by Aristotle, to Newton's laws of motion and gravity, and finally to Einstein's theory of relativity in the 20th century, with new data eventually exposing each theory's limits and pointing toward the next. Relativity, for example, is now tested by observing phenomena like faraway redshifted galaxies or merging neutron stars.
But unlike the past, when data on neutron stars was nonexistent, we are in a world awash with well-recorded data that machines can process and comprehend far more quickly than any human mind. And the point here is that this data is not useless. The 1’s and 0’s encoded in the unfathomable depths of humanity’s collective hardware can now make predictions of their own, without the need for a whole theory behind them.
It's undeniable that we're mostly beyond the days when physics discoveries were the domain of wild-haired old men pacing before a blackboard. Now computers are an inherent part of every step of the research process, from banal tasks like editing research papers to more complex tasks like, well, solving complex integrals in imaginary space. Even experimental physicists spend as much time or more in front of a computer screen as they do with mechanical equipment. Modern physics frequently involves huge datasets and vast libraries of images, the management and analysis of which is already heavily computerized. Python, the high-level, general-purpose programming language, is widely used, along with its scientific libraries, to do calculations, make mathematical plots, and implement machine learning.
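To give a flavor of that workflow, here is a toy sketch, not code from any real analysis and with entirely made-up data, showing how a few lines of Python and its common scientific libraries can crunch numbers, train a simple classifier, and save a plot:

```python
# Toy example of an everyday scientific Python workflow: a calculation
# with NumPy, a simple machine-learning fit with scikit-learn, and a
# plot with Matplotlib. The "signal" and "background" data are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Two fake populations of events, each described by two measurements.
signal = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(500, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
X = np.vstack([signal, background])
y = np.array([1] * 500 + [0] * 500)  # 1 = signal, 0 = background

# Teach a simple classifier to separate the two populations.
model = LogisticRegression().fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")

# Plot the populations and save the figure.
plt.scatter(signal[:, 0], signal[:, 1], s=5, label="signal")
plt.scatter(background[:, 0], background[:, 1], s=5, label="background")
plt.legend()
plt.savefig("toy_classification.png")
```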
Discoveries are being made with the help of machine learning at an ever-increasing pace. The human mind may be fantastic at observing patterns in nature, but machines can be trained to do it even better. Last year, a deep learning algorithm added 301 planets to the Kepler Space Telescope's total count. That's a sizable fraction of the roughly 4,500 discovered at the time. Of course, the algorithm was trained on planets already discovered by human researchers, but once trained it was capable of producing its own results.
If humans retreat from the task of analysis, what discoveries might we miss out on?
On a research project I worked on, a neural network was trained to identify particles and nuclear interactions inside an underground detector. It was one prong of a three-pronged approach to looking for a new type of subatomic particle. What does this mean? Basically, the network was taught with many complex and colorful images of particle tracks swirling in the detector, along with the correct identifications for the particles inside. Then, like a seeing-eye dog, it helped scientists by sifting the more interesting particle interactions out from the immense sea of data. Yet our animal friends do not solve our calculus problems for us—nor do they usually get credit on our research papers, jiffy up images of faraway planets, or resurrect long-dead celebrities.
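For readers curious what "teaching" a network with labeled images actually looks like, here is a minimal sketch in PyTorch. It is not the experiment's actual code: random tensors stand in for real detector images, and the three particle classes are hypothetical.

```python
# A minimal sketch of supervised training for an image classifier,
# with random tensors standing in for real particle-track images.
import torch
import torch.nn as nn

# Tiny convolutional network: a detector image in, particle-type scores out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 3),  # 3 hypothetical particle classes
)

# Stand-in "detector images" (64x64 pixels) and their correct labels.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 3, (8,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One step of supervised training: predict, compare to the known
# answers, and nudge the network's weights toward better guesses.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```

Repeat that last step over millions of labeled images, and the network gradually learns to pick out the patterns on its own.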
There is a ghost in the sciences, and soon enough it may be the ghost of the scientific method. An era of post-theory science is all but upon us, and though we may have thinking machines to detect patterns, make predictions, and produce results for us, we may soon wind up with results and no idea about the underlying explanation.
Is the promise of rapid results worth the trade-off in understanding? With exoplanets orbiting faraway stars, sure. The underlying phenomenon is simple: a small rock circling a bright star, producing barely discernible changes in that star's appearance. We always needed the help of complex instruments and mathematical algorithms to detect those new worlds. But what happens when a neural network detects a signal from space that doesn't seem to make sense? Or when one designs a new drug we don't fully understand? If humans increasingly retreat from the task of analysis, secure in the knowledge that our artificially intelligent "assistants" will get the job done, what discoveries might we miss out on through our inability or unwillingness to sift through the data ourselves? Or perhaps we just pop those AI-designed pills and hope for the best.

There are other potential issues. A graduate student came by to chat last week and described how he had used ChatGPT to reproduce computer code that he had lost, and which had previously taken him three months to write. Graduate students are already underpaid at most institutions, with questionable benefits and dwindling career prospects as lecturers and adjuncts are hired to teach courses instead of tenured professors. Now that machine learning can reproduce the work of some graduate students in minutes, their current positions and pay may also be in jeopardy.
The future does not have to be grim, however. We will inevitably find new tasks for grad students, and new and more interesting questions to ask about the universe.
In a galaxy that is cold, empty, and mysteriously devoid of extraterrestrial messages, AI could also be our greatest ally. Harnessed equitably, for progress and human enrichment, the same machine learning that helps us perform complex calculations and solve intractable scientific problems could soon help us tackle climate change, or even explore the nearest stars and planets. Unlike human beings, machines can survive the incredible accelerations and travel times needed to rocket rudimentary probes to the nearest star systems. But with communication lag measured in decades, those probes will need to make sophisticated decisions about what to investigate and where it's safe to go.
But we also need to take steps here and now to ensure that AI benefits society at large and doesn't simply fulfill the ever-increasing demand to publish more papers. Scientists, engineers, and graduate students will need to be trained to work alongside it, and the rewards must be shared, whether they are academic or financial in nature. We should also maintain transparency about the algorithms that are used, so that science does not become a walled garden. The scientific disciplines, as a whole, need to accept reality: machine learning is here and it's here to stay. If AI is listed as a coauthor on future papers, so be it. It is certainly contributing to them, and will contribute more and more in the future.
Like the clunky computers of the 1990s, which spat out dial tones and garbled noise every time they connected to the internet, today's AI is still going through growing pains. But as in the '90s and 2000s, we may also be on the verge of the next information revolution. The blinking cursor at the end of this article signifies that potential, and the promise of much more to come. But what the cursor will spell in the next decade or two remains to be seen.
Ryan Walraven is a writer and scientist who lives in Chicago with his cats Jiffy and Nyx. He has most recently had stories published with TLDR Press, Bandit Fiction, and the Daily Drunk, and is working on a new science-themed lit mag at Strange Quark Press.
