The Evolution of Writing in the Age of AI: A Perspective from The Campus
Introduction to ChatGPT
In the spring of 2023, during my sophomore year, I encountered ChatGPT through my Professor of Writing and Rhetoric, Hector Vila. The encounter was more than a technological novelty; it marked the beginning of a profound shift in how I viewed writing. We experimented by pasting our essays into the AI, prompting it for improved versions and reflecting on the differences. When I saw my deeply personal essay about my complicated family relationships transformed into something far less authentic, I closed the tab with a sense of finality, convinced that no machine could ever replicate genuine human expression.
Policy Formation in Our Newsroom
That same semester, I listened attentively at a staff meeting where the executive team discussed the emerging role of AI tools like ChatGPT in our newsroom, The Campus. They decided to implement a policy allowing writers to use ChatGPT in the reporting process, with limitations: the tool could be used for brainstorming and research, but creating or editing content with it was strictly prohibited. More than two years later, I feel it is necessary to acknowledge the limitations of this policy and to be transparent about how we are navigating the rapidly evolving landscape of research and writing.
Avoiding Generative AI
For a long time after that first encounter with ChatGPT, I did my best to ignore the burgeoning field of generative AI. I brushed off my dentist’s comments about how the technology was reshaping professions, journalism included. The mere idea that writing I poured hours into could be produced in seconds filled me with dread.
However, when I stepped into a managing role at The Campus last fall, I quickly realized that I could no longer afford to ignore these advancements. My responsibilities now extended beyond my own writing to ensuring the quality and authenticity of roughly 20 other pieces each week.
Misconceptions About Our Safe Haven
At first, I thought The Campus was somewhat insulated from generative AI’s pitfalls, such as factual inaccuracy and bias, given our focus on local news and a small community. Our articles rely heavily on interviews, making it difficult for AI to produce the depth of reporting required. Furthermore, our writers volunteer their time, driven more by passion than by the convenience of AI writing tools. Why would they hand their work over to a machine?
I also began with a firm belief that I could easily distinguish human writing from text generated by ChatGPT. As the technology evolved, however, I found it increasingly difficult to stay confident in that ability, and the doubt weighed on me heavily.
The Gap Between Perception and Reality
Despite our policy, not a single writer or editor has disclosed using AI in a submission since it took effect. That silence felt dissonant given a recent study by Middlebury professors finding that over 80% of students use AI to some degree in their coursework. With fresh faces from the class of 2029 joining us, students who have been using ChatGPT since high school, I wondered whether our assumption that contributions were untouched by AI was realistic.
Editorial Safeguards in Place
Fortunately, several elements of The Campus’ editorial process help mitigate the influence of generative AI. Because we work in Google Docs, we can review a document’s editing history, which makes it difficult to paste in AI-generated text undetected. Each editing session is timestamped, so we can watch a piece progress and confirm that changes are made thoughtfully rather than copied in wholesale. Additionally, our editorial team consists of dedicated editors who meet in person to refine articles collaboratively. This dynamic allows for robust dialogue around each piece, particularly in opinion writing, where contributors must engage directly with editorial feedback.
The Dilemma of Transparency
Despite having a stringent policy in place, I am uneasy about the possibility of one day needing to add an editor’s note indicating that AI contributed to an article. After trying to ignore this technology for so long, such a move feels like surrendering to its inevitability. But if transparency demands it, we will comply.
I also grapple with the thought that our writers might unknowingly quote sources whose emailed answers were themselves generated by AI. For that reason, we prioritize direct conversation over email communication when securing interviews, to preserve the authenticity and depth of our reporting.
Acknowledging the Limits of Control
I recognize that our editorial measures can only go so far; the implications of AI’s evolution extend beyond our control. Still, I am aware that generative AI serves a purpose for many who know how to wield it effectively. It is crucial for students to understand that The Campus is also a tool, one rooted in the belief that we offer an authentic resource reflecting the Middlebury community through dedicated, thoughtful human effort.
The culture in our newsroom underscores our commitment to quality, encouraging contributors to hone their research and writing skills while ensuring that we remain a platform that values human insight over algorithmic generation.

