LinkedIn, the Microsoft-owned professional network, has come under fire for training its AI models on user data without explicitly informing users. Like Meta and X (whose Grok chatbot is trained on users' posts), LinkedIn automatically opts users into AI training, including training of models belonging to unnamed "affiliates." This raises privacy and transparency concerns, particularly because Microsoft also holds a major stake in ChatGPT developer OpenAI, prompting speculation that data from the business-focused social network could end up training OpenAI's models.

After facing backlash, LinkedIn clarified that user data is not used to train OpenAI's base models. According to a LinkedIn spokesperson, the platform uses OpenAI models through Microsoft's Azure OpenAI Service, like any other customer of that API, and emphasized that data is not sent back to OpenAI for training its models. LinkedIn also said it aims to minimize personal data in the datasets used to train its AI models and that it is not training "content-generating AI models" on data from members in the EU, EEA, or Switzerland.
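For context, here is a minimal sketch of what "using OpenAI models through Azure like any other API customer" looks like in practice: requests go to a customer-controlled Azure endpoint rather than to api.openai.com, which is the basis of LinkedIn's claim that prompts and data are not routed back to OpenAI. The endpoint, deployment name, and API version below are illustrative placeholders, not LinkedIn's actual configuration.

```python
# Illustrative only: a typical customer call to an OpenAI model hosted on
# Azure OpenAI Service. All identifiers here are hypothetical placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    # Requests go to the customer's own Azure endpoint, not to OpenAI directly.
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed; use a version your Azure resource supports
)

response = client.chat.completions.create(
    # On Azure, "model" is the name of your deployment, not an OpenAI model ID.
    model="my-gpt4o-deployment",
    messages=[{"role": "user", "content": "Draft a short job posting for a data engineer."}],
)
print(response.choices[0].message.content)
```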

Users can opt out of having their data used for AI training by going to the "Data privacy" section of LinkedIn's settings and switching off the toggle for training content-creation AI models, though opting out does not undo training that has already taken place. Privacy activists argue that the opt-out model is not enough to protect users' rights. Mariano delli Santi, legal and policy officer at the UK's Open Rights Group, criticized the reliance on opt-out consent and called for urgent action from privacy watchdogs against LinkedIn and other companies that use personal data for AI training without proper consent.

The practice of training AI on user data without adequate consent remains a point of contention, raising questions about privacy, transparency, and the protection of user rights. As more companies adopt the same opt-in-by-default playbook, critics argue that regulators must step in with stricter oversight and enforce measures that put user privacy and consent first.

As the debate intensifies, companies like LinkedIn face growing pressure to be transparent and proactive in telling users how their data is used. With opt-out consent increasingly seen as inadequate, ethical and responsible data practices are becoming a baseline expectation rather than an afterthought.
