AI and Ethics: A Reflection

In the last week or so, we had two guest speakers come into class: Yonah Welker, to talk about AI ethics, and Heather MacKinnon, to talk about AI within Microsoft.

[Image: Microsoft in Vancouver. Photo by Matthew Manuel.]

Here are some of my thoughts and reflections on what I learned from those two sessions.

1. The legal side of ethics is hard

According to Heather, anywhere from one-third to two-thirds of the time spent on each project goes either to legal review, clearing the ethical requirements of a project (and ensuring it doesn't violate Microsoft's ethics framework), or to data masking. I learned that it is perfectly legal to take images and data off the internet for the purpose of training ML models, but tying PII (Personally Identifiable Information) to that data is very much illegal.
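
To make the data masking idea a little more concrete, here's a minimal sketch in Python of how obvious PII like email addresses and phone numbers might be scrubbed from text before it's used for training. The patterns and sample text here are my own illustration, not Microsoft's actual tooling, and a real masking pipeline would catch far more than this:

```python
import re

# Two simple regexes for common PII patterns (illustrative only).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace obvious PII with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

# Made-up sample data.
sample = "Contact Jane at jane.doe@example.com or 604-555-0199."
print(mask_pii(sample))  # Contact Jane at [EMAIL] or [PHONE].
```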

2. There is no field where AI can't be used

While the technology industry is often the first thing that comes to mind when people think "AI", the truth is that AI is used in pretty much every industry today. Even where it isn't used directly, Heather pointed out that every company needs physical components (even something as mundane as a desk to work on), and the supply chain behind manufacturing those certainly uses AI. As a result, AI is a part of all of our lives, directly or indirectly.

The exception, of course, is if you've never used any technology in your life... but that's becoming rarer and rarer nowadays.

On a lighter note, Warner Bros. and Microsoft have used AI to recreate a real-life version of Looney Tunes! It combines lots of emerging technologies - augmented reality, cloud vision, synthetic speech, and more - and they all use AI one way or another.

Here's an interesting fact: according to Microsoft, it now takes as few as 500 words to create a fully customized AI-powered speech synthesis voice. I never would have thought the number could be that low!
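
For the curious, here's roughly what using a synthesized voice looks like in code. This is a sketch using Microsoft's Azure Speech SDK with placeholder credentials and a stock voice name; the actual custom voice workflow (recording, training, deployment) happens separately through Microsoft's tooling:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials - substitute your own Azure Speech resource.
speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_REGION"
)

# A stock neural voice; a trained custom voice would be referenced by
# its own deployed voice name instead.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# Synthesize a sentence through the default speaker output.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a synthetic voice!").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized successfully.")
```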

3. We've come a long way, but there's still a lot further to go

Sounds pretty cliché, but it's certainly not wrong. Today, more or less all of the major tech companies have established ethical frameworks detailing exactly what they will or will not do with AI. However, legislation has yet to catch up, meaning a smaller (and less well-intentioned) company can still swoop in on a project refused by the larger players and complete it despite the ethical concerns. In his talk, Yonah pointed out how tricky it is to write legislation in this area, with autonomous cars being a go-to example: if an autonomous car hits a pedestrian, who is to blame? The car's owner? The manufacturer? The person who wrote the code? At the moment, we simply don't have one definitive answer.

With how deeply AI is intertwined with our lives nowadays, legislation will almost certainly catch up eventually - just perhaps not right now.
