You may have heard the buzz around WeTransfer’s recent terms of service update—and if it left you worried that your files might be fed into an AI system, you’re not alone.
Earlier this month, the popular Dutch file-sharing company sent out revised terms to its users, set to take effect on August 8, 2025. One clause in particular raised alarm. It suggested that content uploaded to the platform could be used “to improve performance of machine learning models that enhance our content moderation process.”
Naturally, this wording sparked concern. Many interpreted it to mean that WeTransfer had given itself the right to use your files to train artificial intelligence systems. Online backlash was swift, with users accusing the company of setting the stage to sell or share their data with third-party developers of AI tools. Some creatives—like artists and writers—were especially vocal, highlighting ongoing concerns about their work being used without permission to train AI models.
The company clarifies and changes the wording
On July 16, WeTransfer responded publicly, seeking to reassure its user base. In a clear statement, the company said, “Your content is always your content,” emphasising that it does not use AI or machine learning on any files uploaded via its core transfer service.
The company went on to explain that the disputed phrase was meant to refer only to potential future uses of AI for content moderation—specifically, detecting and preventing the spread of harmful or illegal material on the platform. According to WeTransfer, no such AI-based moderation system currently exists within its service.
However, recognising the wave of unease the statement caused, the company has now removed any mention of “machine learning” from its terms altogether. The updated section of the document now reads: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”
In other words, WeTransfer still requires certain permissions to operate and improve the platform, but this does not include using your content to train AI.
A bigger issue with AI and user trust
This incident is just one of many that reflect the public’s growing concern over how companies handle personal data in the age of AI. As technology advances rapidly, more people are demanding transparency and consent, especially when it comes to sensitive content and creative work.
The case also demonstrates that businesses must be clear and precise when discussing AI. A vague or poorly worded sentence can quickly lead to a misunderstanding—and in turn, damage trust.
If you’re an artist, writer, or simply someone who values privacy, this news serves as a reminder to read the terms of service closely. And if you’re running a platform, the lesson is equally clear: be upfront, be precise, and don’t assume everyone will interpret technical language the same way.
WeTransfer’s quick response and update to its terms may have calmed some nerves. Still, the incident leaves a lasting message: in today’s digital world, how companies communicate matters more than ever.