2019, Simon N. Foley, editor, Data and Applications Security and Privacy XXXIII[…], Springer, →ISBN, page 4:
While data sanitization shows promise to defend against data poisoning, it is often impossible to validate every data source [14].
2022, Alfred Z. Spector, Peter Norvig, Chris Wiggins, Jeannette M. Wing, Data Science in Context: Foundations, Challenges, Opportunities, Cambridge University Press, →ISBN, page 148:
Similarly, the Tay chatbot suffered from data poisoning. To mitigate data poisoning, it is important not to let any one group contribute too much data to a model.
2023, Katharine Jarmul, Practical Data Privacy, O'Reilly, →ISBN:
Data poisoning is one type of adversarial attack—where a user or group of users submit false data to influence the model toward a particular or incorrect prediction.
2023, Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence, W. W. Norton & Company, →ISBN:
Some forms of data poisoning are undetectable. Attackers can insert adversarial noise into the training data, altering the training data in a way that is hidden to human observers.
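The attack the Jarmul quotation describes can be sketched in a few lines: an attacker who controls part of the training set submits mislabeled points so the trained model's decision shifts toward an incorrect prediction. This is a minimal illustrative toy (a label-flipping attack against a nearest-centroid classifier), not code from any of the quoted works; all function names and data are hypothetical.

```python
# Toy sketch of label-flipping data poisoning (hypothetical example).
# A nearest-centroid classifier is trained twice: once on clean data,
# once on clean data plus attacker-contributed points with flipped labels.

def centroid(points):
    """Mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """data: list of (features, label); returns one centroid per class."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

clean = [([0.0], "neg"), ([1.0], "neg"), ([9.0], "pos"), ([10.0], "pos")]
# Poison: points that look "pos" but carry the flipped label "neg",
# dragging the "neg" centroid toward the "pos" region.
poison = [([9.6], "neg")] * 4

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(predict(clean_model, [8.0]))     # "pos" on the clean model
print(predict(poisoned_model, [8.0]))  # flips to "neg" after poisoning
```

With only four poisoned points the prediction for a clearly positive-region input flips, which is why, as the Spector et al. quotation notes, no one group should be allowed to contribute too much of a model's training data.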