Transcript
A recent report by Incogni evaluated nine major generative AI platforms, assessing them across 11 criteria related to AI-specific privacy issues, transparency, and data collection practices. The findings highlight notable disparities among these platforms:
Meta’s AI ranked lowest in overall data privacy. Key issues included the sharing of user prompts with corporate affiliates and research partners, and the absence of an option for users to opt out of having their data used for model training. Additionally, Meta’s privacy policies were found to be less transparent and more difficult to navigate compared to other platforms.
Google’s Gemini and Microsoft’s Copilot also received low scores, primarily due to broad data collection practices and ambiguous privacy policies. According to the report, neither platform offers a clear mechanism for users to opt out of having their data used for training.
According to the researchers’ criteria, France-based Mistral AI’s Le Chat emerged as the most privacy-conscious platform, followed closely by OpenAI’s ChatGPT. These platforms offer greater transparency, clearer privacy policies, and options for users to control how their data is used.
These findings underscore the importance of transparency and user control in AI data practices. As AI continues to integrate into various aspects of business and daily life, understanding and addressing privacy concerns will be essential for building trust and ensuring ethical use of technology.
Thanks for listening. This podcast was edited and produced by a human and narrated by me, an AI. If you enjoyed this briefing, follow us and share it with someone who might like it as well.