As public safety organizations continue to adopt artificial intelligence (AI) solutions to help with more informed decision-making, smoother operations, and improved service, they must not forget their responsibility to operate under a system of transparency. Rick Zak, director, Public Safety & Justice Solutions at Microsoft, and Jack Williams, strategic product manager for AI and Analytics at Hexagon, recently sat down to discuss responsible AI in public safety, as well as best practices for agencies beginning their AI journey. Excerpts from the conversation can be found below. For the full conversation, listen to the podcast on HxGN Spotlight.
Jack: Microsoft has been committed to AI and machine learning for years, delivering some amazing solutions and products. What are some of the guidelines that shape Microsoft's stance when it comes to responsible and ethical AI?
Rick: That’s a great question, because AI is so different from other technologies. It’s so connected to people and has the ability to drive outcomes without people involved. So, the stakes are much higher. As a result, Microsoft has really focused on the policy side of it by trying to support the technology with some frameworks that ensure organizations deliver an AI solution in a responsible way. We put together internal teams to review the development of AI capabilities and a committee that provides guidance on using AI appropriately.
Jack: At Hexagon, we think of responsible AI as being very transparent, explainable, and interpretable. We want to be cognizant of the fact that data sets can be biased to some degree. I realize that can be subjective, but how does Microsoft approach that?
Rick: The question about bias is an important one. In many AI scenarios, you’re using data to teach a system to identify things or to prompt decisions, and it can only do that based on the data it receives. If the data isn’t representative, then you’re not going to get representative outcomes.
To answer your broader question, we came up with six principles to guide our work. And, they’re not Microsoft-specific or product-based. These principles are the framework for our technology and strategy when it comes to AI, and can be applied to public safety agencies.
The first one is fairness. AI systems should treat everyone fairly, and people in similar situations should be treated in a similar way. An example would be an AI system that provides guidance on medical treatment. It should make the same recommendations for everyone with similar symptoms. Fairness should remove biases, so that the same information going in gives you the same results.
The second principle is that all AI systems should operate reliably, safely, and consistently. Consistency is so important under both normal and unexpected conditions. It’s the idea that if you build a system that can only deliver responsible AI in a very narrow set of conditions, then you’re set up for failure.
The third principle is privacy and security. AI is about peeling back the layers on data and understanding data inside other data. Protecting privacy and securing data are complex, but they are a core concern as systems gather more and more personal and business data. It is crucial that we build systems set up to protect individuals and businesses.
We refer to the fourth as inclusiveness. While we’re building the system, we want to identify barriers that could unintentionally exclude people from all those things I spoke of earlier. Rather than building a system and saying, “Well, we haven’t run into a problem, so we’re good,” we actively seek out those issues and address them.
The fifth principle is transparency. You have to know how information is getting processed and how a system is making its decisions.
The last one is accountability. If you’re putting a system out there built on AI, you have to be accountable for how it operates. You should engage internally to make sure that your tools are created, developed, and deployed in a responsible way.
Jack: These responsible AI principles are so important. What is your advice when it comes to public safety agencies wanting to build AI into their systems?
Rick: The first step isn’t technology; it’s policy. You need to understand how you’re going to use AI and what your constraints are. You can develop policy alongside the technology, but it should be shaped throughout the process, not added at the end.
One example is the growth of body cameras. Numerous law enforcement agencies deployed them, and they became a great tool for driving transparency. However, many departments neglected to create policies to support them. Who would wear a camera, and who wouldn’t? When do you turn it on, and when can you turn it off? Who reviews the video? Can an officer review the video as part of writing the report? When there was pushback on how they were being used, those who didn’t have a policy or guidelines were not prepared to answer.
The policy tightly coupled with the technology is what really drove adoption and got the public comfortable with body cameras. So, create an internal policy first. How are you going to use it yourself as a department, and how are you going to govern its use? The people answering these questions and making the decisions shouldn’t be afraid. This is about getting it right at the beginning, so that adding new capabilities becomes easier.
Jack: This is great information and quite timely. AI isn’t going away, and we need to implement it the right way from the very beginning. Thanks so much for your advice.
Rick: My pleasure, and I’d like to add that at Microsoft, we appreciate the partnership we have with Hexagon.