55% of audiences are uncomfortable with AI. Are brands listening?

14 Jan 25

Behind the scenes, artificial intelligence (AI) is reshaping today’s media landscape, from powerful algorithms that decide which ad to serve to systems that process large amounts of data in seconds. It also powers generative AI, enabling automation in tasks like creative concept development and copywriting. This growing reliance on AI is changing the way brands connect with audiences. However, as AI’s presence expands, its rapid adoption has caused skepticism among audiences, especially within multicultural communities.

A recent Nielsen study on AI surveyed over 6,000 respondents to understand how today’s audiences feel about the increased use of AI in media. The findings reveal a critical insight: without transparency, audiences are losing trust in brands, and multicultural audiences – seeking cultural authenticity – can become completely disengaged. Here’s what brands need to know:

Growing Skepticism and the Call for Transparency

According to a recent survey by Prosper Insights & Analytics, 81.5% of respondents are familiar with generative AI and nearly a quarter (25.4%) actively use it.

Similarly, Nielsen’s recent study found a significant level of trust in AI tools, with 87% of users expressing moderate to high trust in AI.

However, the story changes when AI is applied to media. Compared with familiarity with AI in general, awareness of its role in content creation and advertising falls to 69%. Furthermore, as audiences discover AI-generated content, concern grows: 55% of respondents feel uncomfortable with websites that rely heavily on AI-generated articles and stories. Distrust deepens further, with nearly half (48%) reporting that they do not trust brands that advertise on such sites.

This skepticism is more than a general caution—it’s a demand for accountability. Four out of five respondents agree that media organizations should be transparent about their use of AI, particularly in shaping news and content. Without this transparency, brands risk alienating audiences and losing credibility.

AI skepticism is even more pronounced among multicultural audiences, who are increasingly influential in today’s market. The Nielsen study points out that more than 60% of respondents across diverse groups can detect AI-generated content, and that discovery often triggers a negative reaction.

Source: AI, Advertising and Audiences – Nielsen 2024

For Native American respondents, trust is particularly fragile: 56% do not trust brands that advertise on AI-heavy sites. This highlights a missed opportunity for brands to authentically engage with a growing and increasingly influential demographic. Similarly, 55% of Black respondents express concerns about bias and stereotypes in AI-generated content, highlighting how AI often misses cultural nuances. These findings reveal a growing divide between automated advertising and the authenticity that multicultural audiences expect.

Cultural Significance: A Missed Opportunity

AI’s inability to reflect cultural nuances is a significant missed opportunity. About 40% of respondents feel that AI-generated ads fail to represent their culture or values. Among Native American respondents, that number rises to 52%, and 60% of Black respondents express alarm at how their communities are portrayed in AI-generated content and targeted advertising.

These concerns are not just academic; they have real-world implications for brands. By relying on AI without addressing its cultural limitations, brands risk perpetuating biases and alienating audiences seeking authentic representation.

“The data highlights a clear gap between consumer excitement about AI and their concern with its use in media and advertising,” says Patricia Ratulangi, vice president of Global Communications at Nielsen. “This is a reminder that by prioritizing transparency and cultural relevance, brands can foster trust and build stronger connections with their audiences.”

Aligning AI With Audience Values

Beyond advertising, consumer concerns about AI extend to privacy, ethics and misinformation. According to a recent Prosper Insights & Analytics survey:

  • Privacy concerns: 53.8% of respondents are concerned about how personal data is collected and used.
  • Ethical oversight: 37.8% emphasize the need for human oversight to ensure responsible applications of AI.
  • Misinformation: 34.2% cite concerns about “AI hallucinations,” or the generation of false information.
  • Job displacement: 29.1% express anxiety about automation replacing human roles.

These concerns together underscore a broader sentiment: audiences are willing to embrace AI, but only if its implementation aligns with their values.

If brands are listening, the way forward is clear. Transparency must become the foundation of AI strategies. Clearly communicating how AI is used – whether in ad targeting or content creation – can help rebuild trust. Equally important, brands should invest in AI systems that prioritize cultural relevance, ensuring that these tools authentically reflect and engage the diverse communities they aim to serve.

The stakes are high, but so are the rewards. Brands that embrace transparency and inclusion in their use of AI will not only differentiate themselves, but also drive trust, loyalty and meaningful engagement in an increasingly skeptical world.
