As the founder of an AI product myself, I've been looking for ways to overcome end-users' black-box fears and build trust with them.
One way is certification and regulatory compliance, of course, but could we do something bottom-up? That's the question I'm working on, and I invite founders and product owners to join the discussion and post their comments below. To bring value to this question, I'm building an open-source community called Open Ethics and a software kit that enables crowdsourced disclosure for AI products.
We will select up to 6 candidates to participate in this workshop with their products. We require attendance by at least one representative per company. Typically, representatives are a CTO, CSO, CDO, or Product Owner who holds decision-making power over the company's transparency approach and has sufficient knowledge of the product's privacy, data, cybersecurity, and fairness/bias measures. To register as a company, fill in the form on this page.
Here are my questions:
I'd be happy to discuss here, or if you already have an AI product and want to join the workshop, ping me (the workshop is free, but we'll select only 6 projects).