"Hey Siri, set my alarm for 6 AM tomorrow."
From Apple's Siri to Amazon's Alexa, natural language processing (NLP), a branch of machine learning, now powers many applications that aim to understand and respond to human communication. During AILA's fifth annual Responsible AI Symposium, professionals in the NLP field discussed artificial intelligence (AI) and the quest to maintain ethics amid an era of aggressive technological advancement.
Leading the panel was Mike (Xuning) Tang, Ph.D., Associate Director of Responsible AI at Verizon, whose background is in computer science with a specialty in machine learning. In his current role, Tang's purpose is to ensure that little or no human bias enters the decision-making process of a given system.
"For this AILA panel, hopefully we can have a great conversation about AI in Los Angeles, and what it means to do AI responsibly," said Tang.
Los Angeles is quickly becoming an epicenter for AI research and development, and Mark Muro, senior fellow for the Metropolitan Policy Program at Brookings Metro, agrees. "L.A. looks pretty formidable in that early adopter tier... It's not the Bay Area, but it looks very competitive with especially strong representation in commercial industry work in terms of company representation and job postings."
Vince Lynch, CEO of IB.AI, credits the Los Angeles media scene with propelling AI into popularity. "I think the media created the foundation of how people feel about AI or how popular it has become, all of which was generated from Los Angeles and massively impacted the industry," concluded Lynch.
Prioritizing responsibility in AI is directly tied to recognizing exclusion. AI must be built with inclusive datasets in mind, and it requires machine learning engineers and computer scientists to develop and deploy AI technologies fairly. If the principles of ethical AI are disregarded, chatbots, search engines, facial recognition systems, and other NLP-driven algorithms will be subjective and selective.
Panel participant Violet Peng, an assistant professor in the University of California, Los Angeles' computer science department whose research focuses on NLP, shared her thoughts on the topic: "I would take it as part of a fairness issue, in which certain languages, cultures, and groups that are over-represented on the web are included, despite there being many other languages, cultures, and groups that are underrepresented and excluded."
Bias in machine learning means that opinions and stereotypes can enter the core of our models. NLP, which steers the processing and analysis of vast volumes of unstructured data, enables computers to comprehend the meaning of the text they are given. This makes properly selecting data to promote responsible AI a matter of the utmost importance.
"Any data scientist or machine learning engineer will tell you that what a model learns and what it predicts is solely dependent on the data that was used to train and evaluate the model," Sarthak Sahu, co-head of AI at Virtualitics explained. "As a result, even when we're really cautious about removing columns such as gender or ethnicity or things that might bias the model from a set, we might still be introducing significant bias into the model if the underlying data itself was not a representative sample covering the whole population."
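The proxy effect Sahu describes can be shown with a minimal, synthetic sketch. Everything here is invented for illustration: a made-up "group" attribute, a correlated "zip_code" proxy, and a skewed historical label. Even after the sensitive column is dropped before training, a trivial model that predicts the majority label per zip code still treats the two hidden groups very differently.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic data (all names and probabilities are invented):
# zip_code is a proxy that correlates strongly with group membership,
# and the historical label is skewed against group B.
rows = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 0
    label = 1 if (group == "A") == (random.random() < 0.8) else 0
    rows.append({"zip_code": zip_code, "label": label, "group": group})

# "Train" with the sensitive column dropped: predict the majority
# label observed for each zip_code value.
counts = defaultdict(lambda: [0, 0])  # zip_code -> [count of label 0, count of label 1]
for r in rows:
    counts[r["zip_code"]][r["label"]] += 1
predict = {z: (1 if c[1] > c[0] else 0) for z, c in counts.items()}

# Positive-prediction rate per (hidden) group: the gap persists
# because the proxy carries the group information.
def positive_rate(g):
    sub = [r for r in rows if r["group"] == g]
    return sum(predict[r["zip_code"]] for r in sub) / len(sub)

rate_a, rate_b = positive_rate("A"), positive_rate("B")
print(f"P(pred=1 | group A) = {rate_a:.2f}")
print(f"P(pred=1 | group B) = {rate_b:.2f}")
```

The point of the sketch is Sahu's: removing the sensitive column is not enough when the remaining features, or an unrepresentative sample, encode the same information.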
Still, this is a topic of controversy within the AI research community, as it can be challenging to accept that a machine truly understands language rather than merely simulating understanding. Swabha Swayamdipta, an incoming assistant professor of computer science at USC and a postdoc at the Allen Institute for AI, explains that it is easier to showcase inconsistencies in language than to show that the computer is actually understanding you.
This is because machines are not born with the capacity to understand the world, and certainly cannot acquire it when exposed only to language. Swayamdipta continues, "It's a very complex question, and given that the models are not trained to do anything more complex than to finish a pattern, it's kind of disingenuous to even be discussing that they have somehow acquired sentience."
As AI systems grow ever more sophisticated, it is evident that regulating a more ethical AI will play a key role in keeping it human-centric, where human rights remain unimpeachable and diversity is embraced. Ultimately, it could be seen as our moral obligation to assist in the design, development, and launch of an effective and trustworthy AI world.
How will you protect the ongoing effort of responsible AI?