Is It Time for Your Organization to Form an AI Ethics Committee?

May 20, 2019 Douglas Barbin

Do you need to set up an artificial intelligence ethics committee if you are using this technology? Google certainly thought it did — until it changed its mind. Of course, Google is one of the leaders in this space, while most other companies are merely experimenting with AI or using some variation of it within a vendor product. Still, artificial intelligence is quite different from other technologies and software applications given its ability to think and reason like a human. It is no overstatement to say there are ethical considerations with its use — even in seemingly benign business operations. Indeed, Deloitte's second annual State of AI in the Enterprise survey found that 32% of executives ranked ethical issues as a top-three risk of AI, yet most don't have specific approaches in place to address that risk.


Google’s Brief Flirtation With an Ethics Committee

Yet it appears that nothing with AI is easy, including establishing an ethics committee, as Google found out recently. At the end of March, Google announced it had established an AI ethics panel to guide the “responsible development of AI” at the company. The panel was to have eight members and would meet four times over the course of 2019 to consider various concerns about Google’s AI endeavors. It lasted just over a week.

From the beginning it was controversial, with thousands of Google employees calling for the removal of Kay Coles James, head of the Heritage Foundation, because the institution has voiced skepticism about climate change and because of her comments about trans people. Other members’ credentials or beliefs were similarly challenged, with one member resigning and another tweeting about James, “Believe it or not, I know worse about one of the other people.” Soon enough, Google pulled the plug, declaring it was going back to the drawing board.

With this fiasco as background, it is fair for companies to wonder whether an AI ethics committee or panel is for them after all. There is no resounding consensus on the matter, and not surprisingly, opinions vary from ‘yes, you do’ to ‘no, you don’t’ and all points in between.


Yes You Need One

Manoj Saxena, advisor to the London Stock Exchange, first GM of IBM Watson and currently executive chairman of CognitiveScale, is resolute that companies need a backup in this area, especially those companies that will be adopting AI to build solutions. “Unlike traditional rules-based systems, AI systems are self-learning systems that need to be designed carefully so they reflect the company’s core values, comply with industry regulations, provide audit trails on how the AI is learned and finally, act as a means of remediation for AI damages or harm,” he said.

Even companies that are just beginning on their AI journey should be thinking about this, according to B12 co-founder and CEO Nitesh Banta. “With technology as powerful as AI, this is particularly true. There's so much unknown about the future of AI and it has the potential to both positively and negatively impact all aspects of society.” Companies should not only talk about the implications internally but should look for opportunities to learn from others, he added.

Perhaps what confuses companies is the fact that these discussions usually start at the societal level — such as debates over whether the technology should be sold to authoritarian regimes or whether robots will replace human jobs. Simply put, these are not issues at the adopter level, said Doug Barbin, principal and Cybersecurity and Emerging Technologies Practice leader at Schellman & Company.

"...users of ML technology need to understand the quality, quantity, and especially the limitations of source data. As such, the old saying of garbage in garbage out applies especially when business decisions are made based on the outputs of the ML technology.”

“Adopters of AI need to consider the sources and uses for AI technologies as they should with any other,” Barbin said. “For example, users of ML technology need to understand the quality, quantity, and especially the limitations of source data. As such, the old saying of garbage in garbage out applies especially when business decisions are made based on the outputs of the ML technology.” Some questions to consider, he said, include:

  • Does the ML technology take data from one source like a sales system or does it take from multiple sources?
  • Are there any glaring omissions like customer satisfaction or retention?
  • Are geographic, demographic, market, or other factors accounted for, or do the results tilt positively or negatively towards a specific segment?

“And when dealing in personal data, a whole additional host of issues come into play. In some cases, systematic actions are applied based on the analysis that occurs,” Barbin said.
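To make these data questions concrete, here is a minimal illustrative sketch, not drawn from the article, assuming a hypothetical pandas DataFrame of training records with made-up "region" and "converted" columns. It simply summarizes how much data each segment contributes and how outcomes tilt within each segment, which is one rough way to surface the kind of skew Barbin's questions point to.

```python
# Illustrative sketch only -- not from the article. Column names ("region",
# "converted") and the sample records are hypothetical.
import pandas as pd


def segment_balance_report(df: pd.DataFrame, segment_col: str, outcome_col: str) -> pd.DataFrame:
    """Summarize how records and outcomes are distributed across a segment."""
    report = df.groupby(segment_col).agg(
        record_count=(outcome_col, "size"),   # how much data each segment contributes
        positive_rate=(outcome_col, "mean"),  # how outcomes tilt within each segment
    )
    report["share_of_data"] = report["record_count"] / len(df)
    return report.sort_values("share_of_data", ascending=False)


if __name__ == "__main__":
    # Hypothetical sales records pulled from a single source system
    records = pd.DataFrame({
        "region": ["NA", "NA", "NA", "EU", "EU", "APAC"],
        "converted": [1, 1, 0, 0, 0, 1],
    })
    print(segment_balance_report(records, "region", "converted"))
```

A report like this will not answer the ethical questions on its own, but it can flag when one segment dominates the source data or when outcomes lean heavily toward a particular group before business decisions are made on the model's outputs.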

Read full article at CMSWire.com >>

About the Author

Douglas Barbin

Doug Barbin is a Principal at Schellman & Company, LLC. Doug leads all service delivery for the western US and also oversees firm-wide growth and execution for security assessment services, including PCI, FedRAMP, and penetration testing. He has over 19 years of experience. A strong advocate for cloud computing assurance, Doug spends much of his time working with cloud computing companies and has participated in various cloud working groups with the Cloud Security Alliance and PCI Security Standards Council, among others.
