Data Authorized Machine Learning and Unlearning
Speaker: Xiaohua Jia – Hong Kong, Hong Kong
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing, Security and Privacy
Abstract
Data ownership protection is an important task in machine learning (ML). However, it is always difficult to protect the data owner’s rights during ML and the development of data-driven AI products. In this talk, we focus on three scenarios of data ownership protection during ML: 1) data shall only be used for ML by users authorized by the data owner, and unauthorized users cannot utilize the data for any training purpose; 2) data shall only be used for the training of models agreed upon by both the owner and the user, and the user cannot use the data for the training of any other models or for any other purposes (the data content is confidential to the users); 3) the data owner shall have the right to request “unlearning” of the data from an already trained model. We’ll discuss the state-of-the-art technologies for tackling the above issues and then propose our solutions.
About this Lecture
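As a toy illustration of scenario 3, the simplest exact-unlearning baseline is to retrain the model from scratch on the dataset with the revoked points removed. The model, data, and names below are illustrative assumptions for this sketch, not the speaker's proposed method.

```python
# Exact unlearning by retraining: a minimal baseline for scenario 3.
# The 1-D linear model and the toy dataset are illustrative assumptions.

def train(data, epochs=200, lr=0.1):
    """Fit y = w*x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 9.0)]  # (x, y) pairs
w_full = train(dataset)

# The owner of (3.0, 9.0) requests unlearning:
# retrain on the remaining data so the model carries no trace of it.
revoked = (3.0, 9.0)
w_unlearned = train([p for p in dataset if p != revoked])

print(w_full, w_unlearned)
```

Retraining guarantees the revoked data leaves no influence on the model, but is expensive at scale; approximate unlearning methods aim to reach a similar state without full retraining.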
Number of Slides: ~40
Duration: ~45 minutes
Languages Available: English
Last Updated: 16/03/2026