The Problem With “AI Fluency”

Luiza’s Newsletter

As the “AI-first” narrative gains traction in the tech industry, a post from Zapier’s CEO describing how the company measures “AI fluency” went viral a few days ago.

In today’s edition, I discuss Zapier’s approach and the problem with the expectation of “AI fluency” spreading within tech companies and beyond: it can be harmful and could backfire, both legally and ethically.

In addition to the memos, leaked emails, and PR announcements showing how each company is prioritizing AI (Meta, Duolingo, Shopify, and Fiverr are recent examples – read my recent analysis), there are also worldwide “AI fluency” efforts aiming to incentivize employees to use AI no matter what.

If you search “AI fluency” on LinkedIn, you’ll see a growing number of posts about the topic, as well as professionals who have added the skill or title to their profiles. There are also job openings that mention or require it, showing that the terminology is becoming normalized and an integral part of the current AI zeitgeist.

Before I continue, a reminder that learning about AI, upskilling, and integrating it professionally is extremely important. I wrote about AI literacy multiple times in this newsletter, including what laws, such as the EU AI Act, say about it.

AI literacy, however, demands critical thinking and ethical and legal awareness, including the ability to know when not to use AI. This is the opposite of what recent corporate approaches to “AI fluency” are promoting.
