I've been working with multiple start-ups in fractional COO, CSO and CPO capacities. Each one is building a remarkably distinctive AI solution that enables the circular economy and improves our lives, built on a profitable business model. It is an incredibly rewarding experience. And then, there are conundrums...
The common core value at these firms is actively ensuring long-term responsible AI solutions. We believe in pausing to evaluate how future generations will live with what we are building using present-day datasets. These datasets may have unconscious biases built right into them, yet they are also the best options available at this time. This means that any AI's ability to self-correct and remain unbiased will be limited for now, at least until it collects and digests more multi-dimensional data. This implies we need to deeply ponder and strategically approach a key question:
"Who is actually accountable for helping the AI course-correct and evolve, if needed?"
During one of the 'transform this' TED Circles, we debated: "Who should be accountable for a physical robot's actions?" The most popular proposed answer was: "there should be an accountable and liable human operator at all times". Setting aside the bias towards trusting human operators over machines, this is definitely true of most solutions today because the technology is still at an early stage. Spot from Boston Dynamics is always monitored and operated by a human holding the wireless remote when it ventures out to paint the town red!
I wonder whether AI solutions deviate from physical robots in this respect. We develop AI solutions to automate, simplify and remove biases so that we do not have to micro-monitor process-driven tasks. Most of the time, these solutions are also invisible to end-users, working behind the scenes. The complexity rises quickly when we integrate various AI solutions, developed independently of each other, into a single end-to-end business use case. In a fully automated AI-enabled solution, the various AI components may be talking to each other without direct intervention or observation by a human. What humans receive will likely be overall use-case performance data - this is what gives a typically manual business use case its efficiency and speed. In this scenario, we need clearly defined accountabilities and governance so that issues, risks and biases can be corrected quickly before they compound into possibly severe unintended consequences.
Human and manual systems are always going to be biased. It is the nature-gifted human condition to form trends in our minds from socially accepted conclusions. We evolved this way to quickly identify potential failures or risks by generalizing them for the benefit of our immediate tribe. How much bias a group carries is a function of its exposure to different perspectives and situations. In the global context where we now wish to operate, there is a way to balance this with machine learning and human integration. Ultimately, it is more scalable to program bias out of a machine or a dataset than out of a mind, because machines and data can be deployed many times over. This can be achieved provided we have ironed out "a sufficiently unbiased MVP" as a starting point. In addition, we need active governance to identify and manage biases with accountability. Otherwise, responsible innovation will remain out of reach, because machines will simply amplify our existing biases rather than reduce them. What do you think:
Who should be ultimately accountable for an AI system's actions?
Click the image below to vote on our LinkedIn poll and share your thoughts. Also join us on October 30 for a free live workshop, TED Circles: Optimism - Unbiased AI?
Data-Driven Innovation: Early bird 10% discount expires January 14, 2022: Ten2022
Innovation-Driven C-Suite: Early bird 10% discount expires Feb 14, 2022: CSuite10
Sign up for live & on-demand workshops hosted by ‘transform this’.
Join the Super Community for a wealth of transformation and innovation on-demand workshops.