AI Meets Incident Reports: Considerations and Cautions for Field Use

Many people enter the law enforcement profession because they like the idea of doing field work. What many don’t realize is that alongside that field work comes paperwork.

Police reports document officers’ daily interactions and responses, and they are a crucial part of the criminal justice system. They also take time to write, edit, and submit, which can be a source of frustration for officers responsible for writing thousands of reports each year.

“When you think about being a cop, you’re thinking about taking bad guys to jail, helping victims of crime, being there for your community, being engaged with citizens,” says Sergeant Robert Younger of the Fort Collins Police Department (FCPD) in Fort Collins, Colorado. “Anything that takes you away or prevents you from being able to do that in a more engaged, more timely manner, can be a frustration.”

Police reports are supposed to be a collection of facts applied to law that shows why the police have suspicion about an individual brought into the legal system for potential prosecution, says Andrew Ferguson, a law professor at American University Washington College of Law who studies how new technologies intersect with the law.

“Traditionally, that suspicion of criminal wrongdoing has come from the officer—the officer’s either firsthand observations or discussions with other human beings who’ve told them what happened,” Ferguson says.

“That paper trail of suspicion guides the criminal justice process. It determines whether there’ll be a prosecution, whether someone gets held in jail, whether there are constitutional issues with motions,” he continues. “Many times, it’s a basis of a plea deal since most cases don’t go to trial.”

How that paper trail is compiled and ultimately written, though, might be about to go through a major change in the United States. Several vendors have released products in the past year that claim to use artificial intelligence (AI) to assist with police report writing.

“What is interesting about the new world is that much of the document will be created by predictive text analytics through AI, which means that the computer, the algorithm, is generating the suspicion and that will be the underlying facts for what happens to this case,” Ferguson says.

New Technologies

About 62 percent of local police departments in the United States used body-worn cameras in 2023, the most recent year for which data is available, according to analysis by the U.S. Department of Justice’s Bureau of Justice Statistics (BJS). By 2020, every U.S. police department serving 1 million people or more was using body-worn cameras.

Many of those departments use a camera product from Axon, including FCPD, which began using Axon body cameras in the early 2010s. The department has 238 sworn and community service officers and approximately 100 other staff members. Younger, a 24-year veteran who oversees technology implementation for the force, says that tying together all the different technology systems that officers use in the course of their work can make writing police reports a very time-consuming endeavor.

Some officers would write their police reports while in their vehicles on patrol. Others, however, would wait until the end of the day to return to a station or substation and write their reports, a process that sometimes added significant time to their shift, Younger adds.

FCPD has experimented with several types of technology to address this problem. When Younger originally joined the force, officers would record their police reports on mini cassettes. Each recording was sent to a transcriptionist, who would transcribe it, compile it into a police report, and send it back to the officer for proofing. That process took several weeks, though, so FCPD shifted to having officers type their own reports, followed later by an automatic transcription process that allowed them to dictate their reports.

“Each time we came up with something, we’d shave a little time off, but maybe not reduce frustration,” Younger says. “We wanted to have increased efficiency and decreased frustration on the parts of the officers.”

Axon was interested in developing a product to assist with this issue, which is a widespread problem in police departments across the country, says Noah Spitzer-Williams, principal product manager at Axon. The company already had a successful body camera and transcription product, but more technological advancement was needed to help with the report writing process.

“We had this vision where we have these body cameras out there. They’re recording all these incidents,” Spitzer-Williams says. “Imagine if we could somehow convert that into a police report?”

The technology to make this vision a reality arrived when generative AI products such as ChatGPT began to make their debut in 2022. Axon began prototyping a solution named Draft One: audio recorded by an officer’s body camera is uploaded to Axon’s evidence management system, the officer selects an incident type and severity, the recording is fed into a large language model, and generative AI uses the recording and the officer’s selections to write a police report. The generated report is then reviewed by a human before being submitted.
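
Axon has not published the implementation details of that pipeline, but the workflow described maps onto a now-familiar generative AI pattern: transcribe the audio, combine the transcript with the officer’s selections into a prompt, and ask a large language model for a draft that a human must then review. The Python sketch below is purely illustrative; transcribe() and llm_complete() are hypothetical stand-ins for a speech-to-text service and an LLM API, not Axon’s actual interfaces.

```python
# Illustrative sketch of the report-drafting workflow described above.
# transcribe() and llm_complete() are hypothetical stand-ins; this is
# not Axon's implementation.

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text service."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Stand-in for a large language model completion call."""
    raise NotImplementedError

def draft_report(audio_path: str, incident_type: str, severity: str) -> str:
    transcript = transcribe(audio_path)  # body camera audio -> text
    prompt = (
        f"Incident type: {incident_type}\n"
        f"Severity: {severity}\n"
        "Write a factual police report narrative based only on the "
        "following transcript. Do not speculate beyond what was said.\n\n"
        f"Transcript:\n{transcript}"
    )
    return llm_complete(prompt)  # a draft only, pending human review
```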


Spitzer-Williams says the company went with a product that leverages audio, instead of video, because the audio being captured tends to tell a more complete story for most incidents.

“If you think about a typical incident, especially the low-severity ones, what’s often happened is the crime has likely already occurred,” he explains. “The suspect might not even be known, they might not be there anymore, but now the officer’s called to the scene, and now is basically going to talk to the victims, the witnesses, and they’re just in fact-finding mode.”

Axon conducted pilot projects with two law enforcement agencies, including about 100 hours of ride-alongs to see how Draft One would perform on many different types of police reports, as well as to observe how police officers used the system in the field.

One of those agencies was Fort Collins, which began initial testing in early 2024 and held weekly meetings with Axon to discuss changes that would make Draft One work better in the field. One change Younger recalls is that initially, Draft One would only generate police reports of medium to longer lengths. But there are times when officers simply need a short report.

For instance, individuals will regularly call FCPD to report that someone is harassing them on Facebook. Officers will often advise the person making the report on how to change their profile settings to limit the alleged harasser’s ability to interact with them online. They also might write a police report documenting that the person called in to report the harassment and the steps that were taken to address it.

Younger says he asked Axon, based on feedback from the officers, to create a short, medium, and long format police report option on Draft One.

“Sometimes I need a short report that’s going to be 200 words or less. Sometimes I need the Magna Carta—I need a long button and to create a very detailed report,” Younger says.

During this testing process, Younger says that Axon was receptive to feedback and did implement changes—including the length options on reports—that were requested.

Axon also worked with external stakeholders to test the product for accuracy and bias.  

“From day one, we realized we’re dealing with some pretty powerful technology in a pretty high-stakes environment,” Spitzer-Williams says. “These are police reports. They’re going to impact people’s freedom ultimately, and so we very quickly began to consult with lots of external voices.”

Axon has an internal Community Impact Team, along with an external Ethics and Equity Advisory Council made up of criminal justice leaders to provide diverse perspectives on technology. Spitzer-Williams says they also spoke with prosecutors, public defenders, and community groups about the development of the product, ultimately turning those conversations into safeguards built into the product itself.

One of the major safeguards is that, out of the box, Draft One cannot be used to write police reports for felonies or arrests.

“It wasn’t that Draft One didn’t perform well in those situations,” Spitzer-Williams says. “It’s just that those are obviously very high-stakes environments, high-stakes incidents.”

Axon also introduced what it calls “speed bumps” into the workflow of Draft One to ensure that police officers are proofreading the reports before signing off on them and submitting them.

When writing a report, for instance, Draft One will insert a paragraph that contains a fantastical situation that could never occur in real life. If this paragraph is not removed, the product will not allow the report to be signed off on and submitted. The product will also not allow information to be copied and pasted into the report form.

“We’re not trying to expose any officer or anything like that,” Spitzer-Williams says. “We’re just trying to put in those speed bumps to make sure that they’re really reading through everything in detail.”
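
The mechanics of such a speed bump are straightforward to illustrate. The sketch below is a hypothetical approximation, not Axon’s actual check: it plants a deliberately absurd marker paragraph in the draft and blocks sign-off until the officer has deleted it.

```python
# Hypothetical illustration of a "speed bump": an absurd sentence is
# planted in the draft, and submission stays blocked until the officer
# removes it, proving the draft was actually read and edited.

CANARY = ("[REVIEW CHECK] A herd of unicorns then directed traffic "
          "around the scene.")

def add_speed_bump(draft: str) -> str:
    paragraphs = draft.split("\n\n")
    paragraphs.insert(len(paragraphs) // 2, CANARY)  # bury it mid-report
    return "\n\n".join(paragraphs)

def can_submit(edited_report: str) -> bool:
    # Sign-off is allowed only once the canary paragraph is gone.
    return CANARY not in edited_report

draft = add_speed_bump("Narrative paragraph one.\n\nNarrative paragraph two.")
assert not can_submit(draft)                  # untouched draft is blocked
assert can_submit(draft.replace(CANARY, ""))  # edited draft can proceed
```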

Axon released Draft One to market in April 2024. Several police departments are already using Draft One, including the Lafayette Police Department in Louisiana and FCPD.

As of press time, about 70 FCPD police officers were using the Draft One solution and Younger said his goal was to roll out the solution to the rest of the force in the next 30 days. He adds that based on internal testing, the solution has cut down the amount of time it takes officers to write police reports by about 64 to 67 percent.


Spitzer-Williams says he’s not aware of any court cases that have leveraged a Draft One police report to successfully convict a defendant, but those cases may be going to trial in the first and second quarter of this year.

Currently, this product isn’t being marketed to private security teams to assist with incident report writing. But it’s on the table for consideration in the future.

“We designed and built Draft One, and all of the safeguards and the responsible AI mindset, to be able to scale to other similar use cases and industries,” Spitzer-Williams says. “I think we definitely believe there’s more opportunity out there beyond just the traditional police department.”

Ferguson began researching Draft One around the time that Axon released the product to market. He was especially interested because it was an AI solution, available today, that is seeking to address a low-tech problem that has plagued police departments since they began writing reports.

Ferguson was initially skeptical of how AI could accurately capture what was happening in the real world, especially considering some of the dangers and mistakes of early AI models. But as he began looking into the history of police reports, he was struck by how flawed the premise is that these reports are solid building blocks of the criminal justice system, because even human-written reports are not especially reliable.

“Police officers handle many cases on a shift,” Ferguson says. “Sometimes they have to come back to their office and just based on their own human memory, remember what happened and fill in a blank page, which as we all know has its own problems and flaws. It wasn’t quite as clear cut to me that this police body camera-infused police report independent of the AI was necessarily a negative. It might actually provide more information or help the officer remember what actually happened than initially thought.”

His analysis, published in the paper Generative Suspicion and the Risks of AI-Assisted Police Reports (shared with Security Management and submitted to law review journals for publication later this year), did find some areas of concern, though.

There are nearly 20,000 police departments in the United States, and each operates in a unique jurisdiction with its own risk factors, demographics, and types of crime. Testing an AI product in one jurisdiction might make it a good fit for that location but a poor fit for another, very different jurisdiction, Ferguson found.

“One of the realities about policing in America is it is a very different experience to be policing in downtown Manhattan than in rural Wyoming,” he adds. “If you are selling a technology that has been modeled and normed on one typical scenario, and then it’s applied in a different scenario, you can run into problems.”

Ferguson likens it to predictive policing technologies, which were initially normed and tested in Los Angeles, California, and then used elsewhere across the United States despite no proof that the solution created for a large city would work well in a small one. He sees it as a warning for users to be careful when relying on the patterns provided by an AI product that has been applied in a different setting.


Axon created a product that includes audit trails for how the AI works and other ways to double-check accuracy while retaining human oversight. But the key, Ferguson says, is for police departments themselves to follow those practices.

“The idea behind ensuring there’s a human-in-the-loop of writing this report is obviously a positive one, a good one, but we’re just not sure how it’s working in practice because there’s no necessary requirement to keep all the audit trails or make sure that there’s a double-check that this is actually working the way Axon designed it,” Ferguson adds.

FCPD, for instance, is using Draft One for all police reports and has turned off the speed bump feature that inserts a fantastical scenario in the report that must be removed by the officer when proofing. Younger says FCPD emphasizes to officers that they are responsible for ensuring that all the information in the police report is accurate before signing off on it and submitting it to their supervisor for review.

“As a result, we feel really strongly that this is just another tool in our toolbox and that it’s no different than anything else we’ve ever done—whether I’ve called somebody up and they type [the report] up or whether it’s Draft One creating the rough draft and I’m modifying and editing it. It is exactly the same process,” Younger adds.

Ferguson is not aware of defense counsel pushing back on the use of AI-generated police reports, which could be because it’s not always clear whether a police report was written by an officer or an AI product. In the case of Draft One, officers would have to follow instructions to include an Axon disclaimer at the bottom of their report. FCPD, for example, does not include this disclaimer on police reports compiled using Draft One.

Ferguson, who was formerly a defense lawyer, says that he anticipates there will be arguments that AI-generated police reports are not reliable on their own to detain someone and will need to be double-checked against the video footage itself. He also could see moments arising where defense counsel might identify an error in an AI-generated police report that tainted the way the court or prosecution thought about the case.

“All of that might have an impact on whether they got the right person, whether there are constitutional issues that have to be litigated,” Ferguson says. “I think defense lawyers will be pushing back on this because it’s one of the core pieces of discovery provided in every criminal case. We’ll see how they move to challenge it based on potentially the unreliability of the generative AI behind the report.”

Some jurisdictions are taking preemptive action. One prosecutorial office in the U.S. state of Washington, King County, has said it will not accept AI-generated reports at this time.

“We do not fear advances in technology—but we do have legitimate concerns about some of the products on the market now,” said Chief Deputy Prosecutor Daniel J. Clark in a memo obtained by local media. “AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”

The American Civil Liberties Union (ACLU) has also published a paper calling for police departments not to move forward with using AI to draft police reports. The ACLU identified four main reasons for its position: problems with AI itself, issues around evidence and memory related to body camera transcripts, transparency concerns, and the need for police officers to be reminded to write down their reasons for exercising discretionary power to justify their authority.

“As we describe in more detail in our white paper, AI report-writing technology removes important human elements from police procedures and is too new, too untested, too unreliable, too opaque, and too biased to be inserted into our criminal justice system,” the ACLU wrote.

Axon provided a statement to Security Management about the ACLU white paper, emphasizing the safeguards mentioned in this article that it has built into Draft One to maintain accuracy and transparency.

The company also detailed some of the development process that went into the product to test and reduce inherent bias.

“Axon conducted a double-blind study to compare the quality between officer-only report narratives and narratives generated by Draft One and then edited by an officer,” the statement said. “Results showed that Draft One performed equal to or better than officer-only report narratives across five dimensions, including completeness, neutrality, objectivity, terminology, and coherence. This study was conducted with 24 independent experts, including district attorneys, field operations command staff, and inclusion scholars.”

Axon does have a Criminal Justice Information Services (CJIS) certification of Draft One, which prohibits it from sharing or using customer police reports for AI training.  

“This is the case for all of our products—we comply with CJIS and are forbidden from using this data for anything other than providing our service,” Axon said.

Private Security Use

An AI solution could be helpful for private security officers who have limited time to write their own reports, and this use case does not have the same issues as that of police departments using AI for report writing, Ferguson says.

“Usually, they’re just reports that something happened,” he adds, giving the example of a suspicious person showing up at a factory location during off-hours. “That probably has to be written up. But nothing happened with it. No one gets arrested. There’s no future prosecution. It probably would save some time for that security officer to have an AI-assisted report written for them.”

Security officers are typically instructed to write incident reports to document events that are a violation of the rules or regulations of the property where they work—or even a criminal incident, says Eddie Sorrells, CPP, PCI, PSP, president at DSI Security.

“The basic elements that private security has been teaching for a number of years are that the security officer needs to have all the basic facts—the who, what, when, why, and where—in an incident report,” Sorrells explains. “Depending on the site, it could be something that’s pretty frequent or it could be something that’s pretty rare.”

Once an incident report is filed, it usually goes up the chain to be reviewed by a manager for completeness. This process might be expedited if the incident being documented is a severe one—such as theft of company property or an assault in the building. Sometimes, managers will send reports back to security officers asking for more details about the documented incident, such as which floor of a building the incident took place on and the time of day it occurred.

“I always tell our team in training throughout the years that the life of that incident report could take on a lot of different paths,” Sorrells adds. “It could go in a file, and nowadays that’s usually an electronic file, or it could wind up in a court case. It could wind up in a law enforcement investigation.”

The process of drafting, editing, and then submitting incident reports can be time-consuming. Security departments have looked at a variety of measures over the decades to streamline the process, including creating template-style forms for officers to fill out that list the critical information needed to document an incident.
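
A template-style form of that kind can be as simple as a fixed set of required fields covering the who, what, when, where, and why, plus a completeness check of the sort a manager would run before accepting the report. Below is a minimal sketch; the field names are illustrative, not an industry standard.

```python
# Minimal sketch of a template-style incident report form with a
# completeness check; field names are illustrative only.

from dataclasses import dataclass, fields
from datetime import datetime

@dataclass
class IncidentReport:
    reported_by: str       # who observed or reported the incident
    description: str       # what happened, facts only
    occurred_at: datetime  # when it occurred
    location: str          # where on the property (building, floor, area)
    category: str          # e.g., trespass, theft, policy violation

def missing_fields(report: IncidentReport) -> list[str]:
    """Names of required text fields left empty, for a manager's review."""
    return [f.name for f in fields(report)
            if isinstance(getattr(report, f.name), str)
            and not getattr(report, f.name).strip()]

report = IncidentReport("Officer A. Smith", "", datetime(2024, 11, 2, 23, 15),
                        "Warehouse B, loading dock", "trespass")
print(missing_fields(report))  # ['description'] -> sent back for more detail
```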

Some companies might be exploring how to use AI as part of this process. Sorrells says the idea has gained serious traction in the past year as people have become more accustomed to using AI tools.

“People are using AI in so many different ways, it’s certainly logical that we would start to use that for things like report writing,” Sorrells adds. “To me, there’s a lot of pros and a lot of cons. Personally, it’s something that I think the industry should embrace at a certain level. But at the same time, they have to address some of the factors that would have to be mitigated before we fully embrace this as a new tool.”

The top-of-mind benefit to using AI is that it would make the report writing process more efficient. Most of the large private security companies already use an electronic reporting process—where security officers write and file incident reports electronically instead of manually on paper—so adding the AI element would be beneficial from a time management perspective and make reports more standardized, Sorrells says.

“We’re always preaching that we need to have incident reports in a standard format, that they don’t look vastly different from one officer to another,” he adds. “It can standardize that.”

AI tools could also enable security officers to spend less time on reports and more time being out as a visible deterrent and observing site conditions, Sorrells says.

There are cons, though, to introducing AI into this process. Sorrells says security needs to be mindful of how it’s relying on technology and what happens if that technology fails.

“Do officers become less proficient when it comes to report-writing skills? If they have to write a manual report and AI is not there, or it’s a situation where it’s not appropriate, how do they then pivot back to being able to write those reports?” he asks. “That reliance on technology sometimes can get us in trouble if we’re thrown back into having to do it the old-fashioned way.”

At FCPD, for instance, recruits are evaluated on their writing and editing skills. During their initial field training with a training officer, Younger says, new officers will not use Draft One to write police reports. The goal is to ensure they develop the strong writing skills needed for reports, as well as for the general communications officers need to send, such as memos, emails, and updates to internal stakeholders.


Sorrells also highlights potential privacy and security concerns related to using AI for incident report writing. U.S. states have differing laws on how AI can be used and how data should be protected. Additionally, there could be concerns about officers being asked to testify in court about an AI-written incident report they were responsible for.

“If I’m being deposed or if I’m in a court of law and someone challenges that report, for example, asks me, ‘Did you write this report?’ That can go both ways,” Sorrells says. “Some people may say, ‘Yes, I did,’ and then you get into the quandary of, ‘Did you really write that report?’”

Another concern is the nuances that go into report writing. When security officers write their own reports, they have to address contextual issues that AI might miss, oversimplify, or draw inaccurate conclusions about. If there’s a break-in at a building, for example, an AI-generated report might determine that an individual who was seen lurking near the building earlier in the day was responsible for the break-in without evidence to support that conclusion.

“Report writing, we really preach that it’s just the facts. We just need the facts,” Sorrells says. “We don’t need to make presumptions or assumptions when they’re not supported by the facts.”

For managers thinking about how AI could be incorporated into the incident report writing process, Sorrells says to first consider the technology platform being used: assess whether it is trustworthy and reliable, and how its models are constructed.

It’s also important to determine when it is appropriate to use AI to assist with incident report writing, to ensure the tool is used to provide basic facts rather than reach conclusions, to train officers to use the system, and then to provide oversight of it.

“You can’t omit that final step of somebody reviewing and looking at [the report] because, again, who knows where that’s going to go in the future?” Sorrells says. “It could be very critical in a legal case, an insurance claim. It could be critical when it comes to the customer’s liability. It could be critical for a use-of-force incident. It could be critical for the security officer’s personal liability. They have to be very careful to make sure it’s reviewed and approved appropriately.”

 

Megan Gates is senior editor at Security Management. Connect with her at [email protected] or on LinkedIn.

 
