AI Ethics: A Google conspiracy theory?

A LITTLE CONTEXT

As AI development and deployment have accelerated, many pundits and experts have been vocal about its potential and its risks, calling for a halt in development or at least some regulation. These calls to action have even penetrated the heavy armour that Big Tech hides behind. Imagine letting a single company toy with such powerful technology.

In response, Google confirmed that it had set up an AI ethics council, formally known as the Advanced Technology External Advisory Council (ATEAC), to govern its use of AI. This was announced on March 26th, 2019 by Kent Walker (Senior Vice-President, Global Affairs) on the Google blog.

Just days later, on April 4th, Google dissolved the group, citing public outrage that had created an environment in which the council could not function. Though the dissolution was never officially announced, Vox obtained a statement from a Google spokesperson confirming it, and the original blog post was later updated with the same response.

THE PROBLEMATIC ROSTER

There were many reasons why this controversy seemed premeditated, primarily the questionable choices made in assembling the council. For better understanding, I have broken down each member’s background and their technical and political leanings below.

  • Alessandro Acquisti: IT & public policy professor and behavioural economist specialising in privacy
  • Bubacarr Bah: Math & data science professor
  • De Kai: Engineering and Computer Science professor
  • Dyan Gibbens: Executive of a monitoring & surveillance drone company
  • Joanna Bryson: Researcher in AI, ethics & collaborative cognition, and a computer science professor
  • Kay Cole James: President of the Heritage Foundation (a right-wing think-tank)
  • Luciano Floridi: Privacy and information ethics expert
  • William Joseph Burns: Policy expert and diplomat; US Deputy Secretary of State during the Obama administration

To summarise, the group had five men and three women: two known to hold conservative views, one with liberal ties and the rest assumed to be apolitical. On paper, this seems like a fair enough spread, spanning genders and political leanings while also covering the required academic backgrounds; however, quite a few people found the composition of this group problematic.

The backlash centred primarily on objections to two specific appointees: Kay Cole James and Dyan Gibbens.

After the backlash Google received over Project Maven, appointing a surveillance-drone executive like Dyan Gibbens looked like the company was deliberately aggravating an already delicate situation. Additionally, to the surprise of some, loud grievances were raised against Kay Cole James. As President of a conservative think tank, she drew objections not only to the questionable policies her organisation had championed, but also to her tweets refusing to recognise transgender people.

THE FLUMMOXING RESPONSE

Unsurprisingly, a petition blew up soon after the announcement. Just days after the council was formed, a petition calling for James’s removal garnered thousands of signatures, and Google was back in the hot seat.

Oddly, Google responded very quickly, faster than it did with Project Maven. Interestingly, the petition only called for Kay Cole James’s removal, yet Google axed the entire council.

While some suspected foul play on Google’s part, others found it strange that the council seemed designed to fail. It was a logistical nightmare: the council was commissioned to meet only four times over the span of a year, which would not be remotely enough for eight people to reach consensus on the potentially thousands of projects Google would simultaneously need reviewed.

THE BOTTOM LINE

One solution could have been for Google’s executives to look inward. The company already has a deep pool of talent and experience, people like Peter Norvig and Rob Worthim, which it could channel into governing its own use of AI. The company would gain quite a few benefits from this, not least superior contextual understanding and fewer intellectual-property complications. However, a glaring problem arises: conflict of interest. I’ll admit this solution isn’t perfect, but it may work as a short-term stopgap.

Another solution could be to establish a jointly created external board of reputable individuals such as Yuval Noah Harari and Sam Harris. This system would be somewhat self-regulating: these individuals would need to stay objective in order to uphold the reputations on which their careers depend.

Neither of these solutions is perfect, and the details would certainly need to be ironed out, but it is of paramount importance to get this right. While some may question whether Google will follow through on its statements, this fiasco has at least opened a Pandora’s box of questions about review and oversight. We still need to reach consensus on how to assess AI, whether through audits by an external board or through peer review, and we still don’t know how to handle such sensitive intellectual property. The danger is that the people overseeing technology potentially worth billions may let personal interests supersede those of the company and of humanity, so we must be wary as we devise a workable solution.
