LinkedIn has come under scrutiny for using user data to train AI models without first updating its privacy terms. Although LinkedIn has since revised the terms, users in the United States were not notified beforehand; advance notice would ordinarily give them time to make decisions about their accounts. LinkedIn offers an opt-out for data used in generative AI, but this was not initially reflected in its privacy policy.
LinkedIn has clarified that its AI models, including those behind its content creation tools, are trained on user data. Some models on the platform may also be trained by external providers such as Microsoft. LinkedIn says privacy-enhancing techniques, such as redacting personal information, are applied during training.
The Open Rights Group has criticised LinkedIn for collecting data without first seeking users' consent, calling the opt-out approach inadequate for protecting privacy rights. Regulatory bodies, including Ireland's Data Protection Commission, have been monitoring the situation; in regions covered by the GDPR, LinkedIn does not use user data for AI training.
LinkedIn is one of several platforms reusing user-generated content for AI training. Others, such as Meta and Stack Overflow, have adopted similar practices, prompting protests from some users over the reuse of their data without explicit consent.