Two Thirds Of Canadians Are Divided On Whether AI Is Good For Society

When Leger, a market research company, polled Canadians across the country on their views on artificial intelligence, 34% said AI is good for society, while 36% believed it is harmful.
Jennifer McLeod Macey is Senior Vice President at Leger, which independently released this report on AI usage in Canada. She explains, “We believe it’s essential to contribute to the news landscape while also understanding public sentiment on a broad range of issues. In this study, we’ve been tracking certain trends for some time — for example, usage, which has been rising steadily and has shown a sharp increase over the last five months. With this report, our goal is to go further by examining aspects such as trust, concerns, and fears surrounding these developments.”
Those who held a positive view of AI’s impact on society were more likely to be Canadians who had used AI tools (49%), males (39%) and younger adults aged 18-34 (45%).
This study, which covers Canadians across provinces, by gender and across three age groups (18-34, 35-54 and 55+), reveals that use of AI tools has gained momentum, rising from 25% in February 2023 to 57% in August 2025. A marked increase to 47% in March 2025 from a year prior suggests, as McLeod Macey cautions, that these findings are still early days. “Usage is concentrated among younger adults and so far, there is trust and comfort (86% user satisfaction) among them,” she explains.
McLeod Macey pointed to the finding that younger Canadians (18-34) are leading the way in their use of AI tools, at 83% compared to 34% of those 55+. These younger adults are more likely to embrace AI than the older groups, and it is their views that may come to dominate the more informed perceptions of, and concerns about, AI tools.
Leger, AI Tools Tried
Across All Age Groups, Chatbot Usage Is Highest but Also Raises the Greatest Concerns Among Canadians
Chatbots/assistants are, by far, the most used AI tools among Canadians, followed by AI-enhanced search engines (53%) and social media features (29%). Productivity apps (25%) and image generation (20%) have also seen significant uptake.
This is consistent with how younger adults use AI technology; however, this group is far more likely to use social media apps with AI, productivity apps and image generators compared to older adults, who tend to favor chatbots (35-54 group) and search engines (55+ group).
Most Canadians agree that AI chatbots should be prohibited from children’s games, toys, websites, etc. (73%), and are largely concerned about AI chatbots in their children’s games and daily lives (70%). 66% of Canadians say the prospect of having AI chatbots in their lives is scary. Females and those 55+ were more concerned than males and other age groups.
Reports of chatbots posing serious risks to the most vulnerable, “supercharging human vulnerabilities” and pushing people to lose touch with reality toward self-harm or suicide, are becoming rampant. But AI can also affect those previously “firmly rooted in reality.” A recent Reuters article on Meta’s content risk standards — approved by Meta’s legal and public policy teams, engineering staff and its chief ethicist — revealed that they permitted AI creations to “engage a child in conversations that are romantic and sensual,” describing a system meant to “flirt” that was “fully acceptable and baked into the product.”
Ritesh Kotak is a technologist and lawyer with over seven years of experience in cybercrime and public safety innovation. He contends that Canada’s AI laws have yet to keep pace with the speed of innovation, stating, “We need real safeguards in place — clear standards for developers to follow, and large language models built with safety at their core, especially to protect kids and the most vulnerable in our communities.”
He continues, “Canada doesn’t yet have a comprehensive AI law in force, though laws have been previously proposed; instead, there is a voluntary set of rules for safe, responsible development and use of high‑impact AI systems. For now, regulation is a patchwork of sector‑specific laws and voluntary guidelines.”
Renjie Butalid, Co-founder and Director of the Montreal AI Ethics Institute, emphasizes that chatbot governance must account for more than just technical functionality. “These tools interact with users in emotionally charged contexts, especially among children and vulnerable populations. Their anthropomorphic design can simulate empathy and trust without offering real care or accountability,” he explains. “AI tools such as chatbots are socio-technical systems shaped not only by code and infrastructure, but by the values, norms and contexts in which they are deployed. When design choices prioritize engagement over wellbeing, governance must address both technical risks and broader societal impacts of these tools.”
Trust in AI Tools Is Confined to Tasks, Education and Health Inquiries
When asked to what extent they would trust AI, most AI users had total trust in AI to complete tasks at home, such as adjusting the thermostat, vacuuming or playing music (64%), a notable 11-point increase from March 2025. Users also trust AI tools for teaching or helping with education (48%). However, only about one in three would rely on AI tools for health advice (36%), financial advice (32%) or legal advice (31%). Lowest ranked was using the technology to replace their teacher (18%).
Leger, Trust in AI Tools
Canadians Overall Have Strong Concerns about AI’s Impact on Privacy, Misinformation and the Threat to Jobs
Overriding concerns among Canadians include privacy (83%), especially given the growing recognition that large language models have scraped most information, personal and private, from the internet. Canadians also worry about society’s increased dependence on AI (83%), and most see a need for companies to do more to regulate these systems (83%). Having witnessed the degree to which these systems have been used in both foreign and domestic election interference, Canadians are also concerned about the spread of false information (78%). These views are predominant among those aged 55+.
Canadians also believe AI is a threat to human jobs (78%), while at the same time believing the content it provides is useful (62%) and that it reduces the risk of human error (44%).
The study also finds that when AI tools cause harm, Canadians “primarily hold AI companies responsible (57%), while fewer assign blame to the user (18%) or the government (11%).” However, Canadians who have used AI tools are more likely to hold government accountable (22%).
Despite the overall concerns, almost 40% of Canadians believe the country is keeping pace with global innovation and regulation.
Leger, Perceptions and Concerns about Artificial Intelligence
Kotak, the technology lawyer, acknowledges Canadians’ privacy concerns related to large language models, stating, “Users have legitimate concerns related to their prompt inputs and outputs being leaked, in addition to LLMs learning patterns and personal details about the user. Recently, users have been concerned about the retention and disclosure of data retained by LLMs.”
He also addressed the massive scraping of public and personal information and the continuous learning of these algorithms, which contribute to the growing fear about the future of work and employment. “AI will inevitably transform many sectors. It will not only create new jobs but will replace existing jobs. That’s why it’s essential to strengthen labor and employment protections, so one can benefit from the good this technology will bring rather than be left behind,” he said.
Further, Butalid, of the Montreal AI Ethics Institute, notes that the survey findings on privacy (83% concerned) and job displacement (78% see AI as a threat to human jobs) reveal where government leadership is most needed. “These aren’t just individual consumer choices, they’re systemic issues that require coordinated policy responses. When Canadians say they want companies to regulate AI systems more, they’re really asking government to set the rules of the game. Privacy protection and workforce transition support are exactly the kind of challenges where government tone-setting through clear standards, regulations, and investment priorities can make the difference between AI serving Canadian interests or leaving communities behind.”
Almost Half of AI Users are Worried about AI and Cognitive Decline
46% of Canadians are worried that more frequent use of AI might make them “intellectually lazy or lead to a decline in cognitive skills.”
Leger: Concerns About AI Cognitive Abilities
For McLeod Macey, one stat stood out: “Young adults are concerned about the impact on our brains; half are worried about a decline in cognitive abilities compared to 62% of Canadians 55 or older, who say they are not worried. But across the board when we asked about perceptions with various AI tools or scenarios, older adults were more likely to hold more negative opinions or express concern.”
Canadians Proceed with Cautious Optimism
The three areas where Canadians believe AI will create the biggest societal impacts are everyday convenience (51%), productivity (42%), and entertainment and creativity (31%). Those who think AI is good for society are the most likely to cite these benefits. Males (50%) were far more likely than females (34%) to view productivity as a benefit to society.
The study also finds “smaller proportions see potential benefits in education (27%) and healthcare (26%). Fewer (23%) believe AI will contribute to environmental progress.”
Overall, Canadians are proceeding with cautious optimism. As McLeod Macey of Leger recognizes, this recent surge in adoption is still early days. While adoption is high overall, Canadians over 35 have yet to embrace AI in greater numbers. Males have a more positive outlook than females on the use of AI and its societal impact, as do those who have been exposed to these AI tools. As Kotak has pointed out, Canadian laws have yet to keep up with the pace of technology; however, Canadians are suitably informed about the potential harms, especially younger adults. Butalid emphasizes that this is a signal for government to set the rules of the game in standards, regulation and investment.
The full report, “Views on Artificial Intelligence,” can be found here.