At the start of the year, industry leaders and regulators are still weighing generative AI’s potential benefits and drawbacks, including its applications in sensitive spaces such as legal and court systems, policing, and public safety. The public safety space is at “a really exciting juncture where the pace of technological change is picking up, and AI technology is starting to be woven into applications,” said Chris Merwin, CFO of public safety software provider Mark43.
The New York-based platform offers a cloud-native records management system (RMS) that uses automation to speed up paperwork and other processes for law enforcement, federal and other public safety institutions, according to its website.
“It’s very important that we make the necessary investments to deliver those AI features to our customers, and we're very excited about what's on the roadmap for this year,” Merwin said in an interview. “So first and foremost, my focus is on making sure that we adequately fund those efforts. That’s a critical part of our financial strategy.”
Preparing for a structural technology shift
Merwin joined the company as its finance chief in November, according to his LinkedIn profile. Prior to Mark43, Merwin served as CFO for enterprise AI provider DataRobot, Mark43 said in a press release announcing his appointment. Merwin has also served in various executive roles for banks including Goldman Sachs, Barclays, and Deutsche Bank.
In his first few months as the software provider’s finance chief, Merwin is zeroing in on ensuring the company provides effective customer service, which includes answering key questions such as: “How do we make sure that we are allocating capital in such a way to maximize investment and our R&D to deliver new features that serve our customers better?” he said.
That means taking a careful look at how the business is funding investments into generative AI or similar technologies, first and foremost for its customer-facing offerings, but “the second piece is making the necessary internal investments in our own systems, infrastructure and processes to make sure that we are well positioned to enter our next phase of growth,” Merwin said.
Enabling the company’s own employees and finance team to tap automated technologies is a key piece of fostering future growth, as interest in generative AI’s potential applications, both inside and outside the public safety space, continues to grow in the early days of 2025.
“When I look at the macro landscape, what I see is a significant structural shift in the demand for modern technology by our customers,” Merwin said. “So my expectation is that we'll see increased budget for modernizing core technology tools by public safety agencies, and even additional budget for new capabilities like our AI features that haven't been funded before.”
Last year, Mark43 inked partnerships with entities such as The Port Authority of New York and New Jersey and the New Orleans Police Department, with the goal of integrating AI to enhance legacy or outdated systems, according to press releases at the time.
“We're helping to solve problems through AI that haven't been able to be addressed historically with on premise systems,” Merwin said. That could include auto-populating data from the company’s RMS and dispatch systems, combined with unstructured data such as body camera footage, into a report which a police officer “can then review for appropriateness after the fact,” he said.
Data remains king
Following customer service and financial strategy, “I'm very much focused on data,” Merwin said. “I want to ensure that we as a company are focused on the right input metrics that ensure we get the outcomes we want.”
The question of data — its aggregation, access, security and privacy — also looms large when it comes to the use of generative AI technologies for policing or similar institutions.
The use of facial recognition and other automated or AI-enabled technologies has become more commonplace among police departments over the years, and the emergence of generative AI tools has sparked a renewed conversation around the data such tools draw on, their ethical use, and who is allowed to access that data. The use of generative AI in law enforcement remains nascent, with its potential use cases, as well as its potential biases and errors, still being studied, according to an October report in The Guardian.
“Data governance and data privacy are obviously critical,” Merwin said of the space, noting, “of course, every officer is always going to review and approve any report that gets submitted, and that will not change” with the inclusion of generative AI technologies.
When it comes to bringing generative AI into Mark43’s offerings or internal processes, a security review is an essential first step, he said, given how sensitive the data that generative AI models touch can be.
“The opportunity for GenAI is massive, but in terms of the deployment of it, it's important that it's measured and thoughtful, given the data privacy and security concerns” within the law enforcement and public safety spaces, Merwin said.