
B.C. researcher trying to tap into youth attitudes towards AI and privacy

At what point does information sharing become a privacy concern, asks Vancouver Island University prof
Ajay Shrestha, a computer science professor at Vancouver Island University, has a master of science in ethical hacking and computer security and a PhD in computer science.

Whether it's sending a text or searching Google, data is being collected in the pursuit of improving artificial intelligence, and that collection is an inescapable part of participating in modern society.

That's why Ajay Shrestha, a Vancouver Island University computer science professor, is conducting a study into how these processes specifically concern youths.

"AI and data collection are very common in day-to-day activities. All of the platforms are powered by AI, collecting user data on a daily basis," Shrestha said. "This is for understanding how young users, age 16-19, perceive privacy within AI systems. We're trying to understand how they perceive the transparency, control over their data and data-sharing practices within those AI systems."

During his literature review ahead of the study, Shrestha found that many youths seem unconcerned with the data collection itself, but are concerned by poor communication about it. This includes companies burying the details of how users' data will be used in frequently skipped-over terms and conditions, and closed-source AI code that can't be reviewed by general users.

"They're fine with sharing their data, but they want to know how their data is being processed by their systems," the professor said. "Right now those algorithms are kind of like black boxes. We don't really know how those algorithms work when it comes to processing user data. Usually people, and young users as well, they want to know how they process the data and whether their data is being shared with other parties. It is, actually."

Whether the same lack of concern is mirrored locally is part of what Shrestha will be researching. Other information he'll be collecting includes how often participants use AI-driven systems, whether young users feel confident about how their data is being shared with other platforms, and their level of trust in companies' systems. Parents and educators will be asked how much time they spend in conversations with youths about AI privacy.

Shrestha said the end goal of the project is to generate insights that can contribute to the development of ethical, more privacy-conscious AI.

"That's [why] we're doing this kind of study, to understand whether those factors are really affecting the young users to consider using the AI tools more confidently and safely. One of them is privacy consciousness, another one is overall transparency not just about the algorithm that is being used to perform some kind of data analysis, but the overall transparency within the system regarding the sharing of the data."

Shrestha said parents and educators both play a role in educating the younger generation about protecting their privacy online.

"For the parents, it's very important for them to start the conversation with [youths] regarding privacy because most of the time our young users spend a lot of time using [AI powered systems]," Shrestha warned. "Everything is AI-powered, so they're using AI-driven technologies and it's very important to let them understand the risks with using such tools. They are large data collection models and they're gathering a lot of data including the sensitive information, even conversations, your browsing habits, your preferences everything."

When it comes to social media platforms, he said one thing users can do is adjust privacy settings and share only required information.

In Canada, all social media platforms are required to adhere to the Personal Information Protection and Electronic Documents Act. Under the act, organizations may collect only the personal information they need to fulfill a legitimate, identified purpose, must be honest about their reasons for collecting it, and must do so by fair and lawful means without misleading or deceiving users.

"It has provisions that users have control over their data, they should be able to change their preferences and they can even request their data be deleted from the system," Shrestha said.

Though not directly part of the project, the ethical use of AI is another important conversation to have with youths, the professor added: for example, not using AI tools for disinformation, fraud or spreading discriminatory imagery.

"The AI algorithms sometimes produce biased results, biased outcomes…" he said. "It's also very important all of the researchers can look into the algorithm and contribute to ethical AI, because as we've seen AI tools have generated bias or discriminatory outcomes. When it relates to young users, it becomes more problematic."

The project, called 'Safeguarding Tomorrow’s Data Landscape,' has received about $87,000 through the federal Office of the Privacy Commissioner of Canada. Research participants, including educators, parents, AI developers, researchers and young digital citizens between the ages of 16 and 19, are currently being sought to fill out the survey, as well as to participate in interviews and focus groups. Participants can apply online through http://csci-viu.github.io/privacy-aware-ai-for-youth.

The final report will be presented at the end of March.




About the Author: Jessica Durling

Nanaimo News Bulletin journalist covering health, wildlife and Lantzville council.