Artificial intelligence · Certification

How to build trust with your AI product? What to disclose?

Nikita Lukianets CTO and Founder @PocketConfidant AI

Last updated on April 13th, 2021

As the founder of an AI product myself, I've been looking for various ways to overcome end-users' black-box fears and build trust with them.

One of the ways is certification and regulatory compliance, of course, but could we do something bottom-up? This is the question I'm working on, and I want to invite founders and product owners to join the discussion and post their comments below. To bring value to this question, I'm building an open-source community called Open Ethics and a software kit that enables crowdsourced disclosure for AI products.

A practical workshop and feedback session organized jointly by Open Ethics and the Montreal AI Ethics Institute

We will be selecting up to 6 candidates to participate in this workshop with their products. We require attendance of at least one representative per company. Typically, representatives are CTOs, CSOs, CDOs, or Product Owners who hold decision-making power over the company's transparency approach and also have sufficient knowledge of their product's privacy, data, cybersecurity, fairness, and bias-mitigation measures. To register as a company, fill in the form on this page.

Here are my questions:

  • Transparency as a product requirement. How do we develop specs for transparency?
  • Open Ethics Label and Transparency Protocol. How do we make standards enforceable?

I'd be happy to discuss here. If you already have an AI product and want to join the workshop, ping me (the workshop is free, but we'll select only 6 projects).