If you were suspected of a crime, would you trust a chatbot to accurately explain what happened?
Some police departments think the tech is ready. And officers who have started using chatbots to speed through their most dreaded task, drafting police reports, seemingly don’t want to go back to spending hours each week doing their own paperwork.
In June, a police department in Frederick, Colorado, boasted that it was the “first law enforcement agency in the world to go live with Axon Draft One,” a new kind of police tech that allows a chatbot to spit out AI-generated police reports almost immediately after a body camera stops recording a police interaction.
Powered by OpenAI’s GPT-4 model—which also fuels ChatGPT—Draft One was initially pitched in April to police departments globally. Axon, a billion-dollar company known for its tasers and body cameras, hyped it as “a revolutionary new software product that drafts high-quality police report narratives in seconds based on auto-transcribed body-worn camera audio.” And according to Axon, cops couldn’t wait to try it out, with some departments eagerly joining trials.
Ars confirmed that by May, Frederick’s police department was the first agency to purchase the product, soon followed by an untold number of departments around the US.
Relying exclusively on body camera audio—not video—Draft One essentially summarizes the key points of a recording, similar to how AI assistants summarize the audio of a Zoom meeting.
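Axon has not published Draft One’s internals, but the transcribe-then-summarize pattern it describes can be sketched in a few lines. Everything below, including the `draft_report_prompt` helper and its wording, is a hypothetical illustration of that pattern, not Axon’s actual pipeline:

```python
# Hypothetical sketch of the summarization step, NOT Axon's code.
# A body-cam recording would first be auto-transcribed by a separate
# speech-to-text model; the transcript is then framed as a summarization
# task for a chat model such as GPT-4.

def draft_report_prompt(transcript: str) -> list[dict]:
    """Build a chat-style message list asking a model to summarize the
    key points of a body-cam transcript into a draft narrative."""
    system = (
        "You are drafting a police report narrative. Summarize only what "
        "is stated in the transcript. Do not add details that are not "
        "present in the audio."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Transcript:\n{transcript}"},
    ]

# The resulting messages would be sent to a chat-completions endpoint,
# and the officer would then review and sign off on the returned draft.
prompt = draft_report_prompt("Dispatch: report of a shoplifting at ...")
```

This mirrors how AI meeting assistants work: the model never sees video, only text derived from audio, which is one reason critics question what context gets lost.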
This may seem like an obvious use for AI, but legal and civil rights experts have warned that the humble police report is the root of the entire justice system, and tampering with it could have serious consequences. Police reports influence not just plea bargains, sentencing, discovery processes, and trial outcomes, but also how society holds police accountable.
“The forcing function of writing out a justification, and then swearing to its truth, and publicizing that record to other legal professionals (prosecutors/judges) is a check on police power,” law expert Andrew Ferguson wrote in the first law review article analyzing Draft One’s potential impacts when compared to human reporting. Additionally, “police reports also serve as the factual grounding for civil lawsuits and insurance claims,” Ferguson noted.
By introducing chatbots that are known to hallucinate, confuse jokes for facts, or randomly add incorrect information, police tech like Draft One could be used to legitimize wrongful arrests, reinforce police suspicions, mislead courts, or even cover up police abuse, experts have cautioned.
Axon’s manager for AI products, Noah Spitzer-Williams, told AP News that Draft One is less prone to hallucination than ChatGPT because Axon has “access to more knobs and dials than an actual ChatGPT user would have.” Because Axon turned down the “creativity dial” on Draft One, the AI tool is supposedly better at resisting embellishments and sticking to the facts, Spitzer-Williams claimed.
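The “creativity dial” most plausibly refers to the sampling temperature that LLM APIs such as OpenAI’s expose: lower values make the model favor its highest-probability tokens, producing more literal, less embellished output. Whether Axon tunes exactly this parameter is not public; the `build_request` helper below is an assumption for illustration only:

```python
# Hypothetical illustration of turning down the "creativity dial".
# In OpenAI-style chat APIs, the `temperature` field (range 0.0-2.0)
# controls output randomness; 0.0 is the most deterministic setting.
# Whether Axon adjusts this exact knob is an assumption, not a fact.

def build_request(messages: list[dict], temperature: float = 0.0) -> dict:
    """Assemble a chat-completion request body with a low temperature."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be in [0.0, 2.0]")
    return {
        "model": "gpt-4",            # Draft One is reportedly GPT-4-based
        "messages": messages,
        "temperature": temperature,  # low = fewer "creative" word choices
    }

req = build_request([{"role": "user", "content": "Summarize the transcript."}])
```

A low temperature reduces stylistic flourishes, but it does not by itself prevent hallucination, which is part of why experts remain skeptical of the claim.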
Marketing Draft One as a way to save cops time when drafting police reports, Axon urged police departments to start slowly when learning to use the AI assistants, promising in its press release to “innovate responsibly.”
To minimize potential harms, early adopters like the Frederick PD were advised to restrict their use of chatbots to drafting reports only on minor incidents and charges. Only after officers have gained enough experience “in how to use the tool effectively” on “low severity reports first,” should they then “expand to more severe reports,” Axon’s press release recommended.
But although the official advice was to limit early uses, Axon’s CEO, Rick Smith, openly touted Draft One as having the potential to put an end to busywork bogging down increasingly under-resourced police departments everywhere.
“Every single officer in the US writes police reports, often every day and normally multiple times a day,” Smith said in the press release. “As we’ve done with Draft One, harnessing the power of AI will prove to be one of the most impactful technological advancements of our time to help scale police work and revolutionize the way public safety operates.”
Soon after police departments started implementing Draft One, Matthew Guariglia, a senior policy analyst who monitors police use of AI for the digital rights group the Electronic Frontier Foundation (EFF), wrote a blog post warning that the increasingly rampant use of Draft One required urgent scrutiny.
“We just don’t know how it works yet,” Guariglia told Ars.