Using Open Source to Support Explainable AI in the Public Sector


While predictive analytics and artificial intelligence (AI) can provide valuable insights and actionable intelligence, public sector agencies need more. They need "explainable AI": the ability for machines to clearly demonstrate and explain the rationale behind their recommendations. Open source software and communities can help. By combining open source technology with its collaborative development methodology, agencies can build more transparent AI solutions faster, resulting in greater efficiency and more accurate, trusted decisions. Learn more in this whitepaper.
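To make the idea concrete, here is a minimal sketch of explainability using a common open source library (scikit-learn). The dataset is synthetic and the feature names are hypothetical, chosen only to illustrate how a model's recommendation can be traced back to explicit, human-readable rules; it is not the approach described in the whitepaper itself.

```python
# A minimal, illustrative sketch of "explainable AI" with open source tools.
# Assumptions: scikit-learn is installed; the data is synthetic and the
# feature names below are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical eligibility-style data: 4 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=3, n_redundant=1,
                           random_state=0)
feature_names = ["income", "household_size", "prior_claims", "tenure_months"]

# A shallow decision tree is interpretable by construction: every
# recommendation follows a short chain of explicit threshold rules.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Print the learned rules and the relative importance of each feature,
# i.e. the rationale behind the model's recommendations.
print(export_text(model, feature_names=feature_names))
print(dict(zip(feature_names, model.feature_importances_.round(3))))
```

Running the script prints the decision rules and a per-feature importance score, the kind of plain-language rationale an agency could attach to an automated recommendation for review.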
