The Rise of AI in Local Government: A Look at Washington State
Artificial intelligence (AI) tools, particularly ChatGPT, are becoming increasingly common in local governments across Washington state. The trend presents exciting opportunities alongside pressing ethical concerns about transparency, authorship, and the accuracy of AI-generated content.
A Case Study: The Lummi Nation’s Grant Application
A striking example of AI’s role in government communications emerged last year when the Lummi Nation sought funding for a crime victims coordinator. Bellingham Mayor Kim Lund penned a letter advocating for the grant, highlighting the Nation’s leadership and dedication to community welfare. The letter wasn’t entirely Lund’s own work, however: her assistant prompted ChatGPT with relevant questions, and about half of the sentences in the final draft matched the chatbot’s output.
The incident not only underscores the growing reliance on AI in local governance but also invites scrutiny of how transparently the technology is used.
Widespread Adoption in Local Government
The use of ChatGPT among officials in Washington state isn’t a one-off phenomenon. Through public records requests, news organizations like Cascade PBS and KNKX uncovered extensive logs revealing that local governments are employing AI for a multitude of tasks. In addition to drafting letters, city officials have utilized ChatGPT to create social media posts, write policy documents, and draft responses to constituents’ inquiries.
The data suggests that AI has swiftly become a staple tool in government offices. Workers are turning to ChatGPT for help with everything from debugging code to refining the tone of emails, and staffers in cities like Everett and Bellingham are using AI-generated content to streamline tedious tasks.
Enhancing Efficiency or Eroding Trust?
While AI can drive efficiency, officials are aware of the potential pitfalls, chief among them transparency. Current state guidelines recommend clearly identifying AI-generated government documents, but many of the records retrieved carried no such markings. Mayor Lund acknowledged the possibility of labeling, yet argued that AI is now so ubiquitous that specific acknowledgments may no longer be necessary.
In contrast, observers like Anna-Maria Gueorguieva of the University of Washington argue that growing reliance on AI for public-facing communications could further erode citizens’ already fragile trust in their government. AI-generated texts often lack the emotional resonance of human writing, which can deepen a sense of detachment from civic life.
The Complexity of AI-Generated Content
The chat logs reveal tasks of widely varying complexity. Many staffers use the technology for simpler jobs, like planning social media campaigns or summarizing meetings, but they also entrust it with weightier responsibilities, such as researching legislative proposals or developing policy.
Interestingly, the chat logs also expose a human dimension: city employees often use AI to craft sympathetic responses to constituents, offering reassurance to vulnerable residents. Used thoughtfully and accurately, this kind of writing could strengthen relations between a government and its citizens.
AI-Generated vs. AI-Assisted: A Crucial Distinction
The distinction between AI-generated and AI-assisted content is crucial as local governments navigate this new terrain. While some documents appear to be solely produced by AI, many involve substantial human revision. This raises questions about authorship and accountability—who is responsible for what the AI writes?
In Everett, officials are adopting formal guidelines outlining how and when staff can deploy AI. Yet inconsistencies remain: even after new guidance mandated that all AI-generated material be clearly labeled, unlabeled documents continued to circulate.
Risks of Inaccuracy: AI Hallucinations
Even as local governments experiment with AI, they face an inherent risk: the technology is not infallible. Models like ChatGPT are notorious for "hallucinating," or fabricating information. One staffer discovered, for instance, that ChatGPT had invented passenger-traffic figures for Bellingham International Airport. Such inaccuracies undermine the reliability of AI-generated content.
The pressure to produce accurate information—from budget documents to social media posts—highlights the risks of over-reliance on AI tools that may not always deliver trustworthy results.
Expanding the Scope of AI Applications
The scope of AI use extends beyond official communications. Records show that city employees also turn to ChatGPT for personal matters, whether crafting polite refusals to social invitations or navigating complex interpersonal dynamics at work. This adds another layer to the ethical considerations surrounding AI in governmental settings.
As more employees embrace these tools, AI’s footprint will likely continue to expand, fostering efficiency but also raising critical questions about ethical governance and public accountability.
As local governments in Washington state explore the benefits and challenges of integrating AI into their operations, the intent behind these efforts remains clear: to enhance efficiency and better serve communities. However, the path forward should be navigated thoughtfully, considering the implications for trust and transparency at every turn.
The story is still unfolding as local governments adapt and refine their approaches to this rapidly evolving technology.