Deep neural networks are used in machine learning applications such as image classification, audio recognition, natural language understanding, and healthcare. Despite the strong predictive performance of modern DNN architectures, models can inherit biases and fail to generalize when the data distribution shifts between training and test time, or when the test evaluation metric differs from the objective used during training. These failures stem from spurious correlations in the dataset and from overfitting to the training metric. Importantly, they can lead to fairness violations for specific test groups.
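One simple way to quantify such a group-level fairness violation is the largest accuracy gap between any two groups on a held-out set. The helper below is a minimal illustrative sketch (the function name and metric choice are ours, not from the paper):

```python
import numpy as np

def group_fairness_gap(y_true, y_pred, groups):
    # Largest accuracy difference between any two groups: a simple
    # proxy for a group-fairness violation on a test set.
    accs = [(y_pred[groups == g] == y_true[groups == g]).mean()
            for g in np.unique(groups)]
    return max(accs) - min(accs)
```

A perfectly group-fair classifier under this metric has a gap of zero; a model that is accurate on one group but not another scores a large gap.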
Data weighting is a common data-centric paradigm for improving fairness and robustness by mitigating distribution shift and class imbalance. In data-dependent weighting algorithms, weights are computed by iteratively reweighting samples based on the training loss or on fairness violations measured on the training set. Traditional data weighting techniques include re-sampling, applying domain-specific knowledge, estimating weights from data complexity, and using class-frequency information. Other research improves on these techniques by learning a function that maps inputs to weights, or by optimizing the weights directly as learnable parameters. However, such approaches do not learn a global set of weights and cannot be used for data pruning after training.
Previous weighting methods seek to improve generalization, robustness to noisy labels, handling of class imbalance, training time, and convergence by learning a curriculum over examples. Few of these efforts weight the data to directly improve other statistics of interest. In this paper, the researchers present Fairness Optimized Reweighting via Meta-Learning (FORML), a method that directly improves both fairness and predictive performance. Using a learning-to-learn paradigm, FORML jointly learns a weight for each sample in the training set together with the model parameters, optimizing for the specified test measure and fairness criterion on a held-out set of examples that captures the significance of the data.
The technique optimizes the model parameters with a weighted loss objective while optimizing a global set of sample weights against a predefined fairness criterion measured on the held-out set. Learning the sample weights on a held-out set helps adapt the model to the fairness metric and reduces labeling effort, since validation hold-out sets, where sensitive attribute labels are required, are usually much smaller than the training dataset.
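To make the alternating update concrete, here is a toy sketch of one meta-reweighting step for logistic regression. It uses a common one-step approximation from the learning-to-reweight literature (the gradient of the validation loss with respect to a sample's weight, after one SGD step, is proportional to the dot product of that sample's training gradient with the validation gradient). This is an illustrative simplification, not the paper's exact algorithm, and all function names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_per_sample(X, y, theta):
    # Per-sample gradient of the logistic loss w.r.t. theta.
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X  # shape (n, d)

def reweight_step(theta, X_tr, y_tr, X_val, y_val, w, lr=0.1, meta_lr=1.0):
    """One meta step (sketch): raise the weight of training samples whose
    gradient aligns with the held-out gradient, then take a weighted SGD step."""
    g_tr = grad_per_sample(X_tr, y_tr, theta)             # (n, d)
    g_val = grad_per_sample(X_val, y_val, theta).mean(0)  # (d,)
    # For one SGD step, dL_val/dw_i is approximately -lr * (g_i . g_val).
    w = w + meta_lr * lr * (g_tr @ g_val)
    w = np.clip(w, 0.0, None)
    w = w / (w.sum() + 1e-12)                             # keep weights normalized
    theta = theta - lr * (w[:, None] * g_tr).sum(0)       # weighted parameter update
    return theta, w
```

In the full method the held-out objective would include the fairness criterion rather than plain validation loss, and the model would be a neural network rather than logistic regression; the alternating structure of the update is what this sketch illustrates.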
FORML learns to weight not based solely on sample counts (the authors observe fewer fairness violations even when samples are evenly distributed across groups), but on the significance of individual data points relative to the fairness criterion. They evaluate the technique on image recognition datasets and show that it reduces fairness violations by improving worst-group performance without hurting overall performance. Additionally, FORML improves performance under noisy-label conditions, and the learned weights can be used to remove harmful data samples, yielding gains in both fairness and efficiency.
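Because the method produces a global set of per-sample weights, a natural post-training use is to prune the lowest-weight examples. The snippet below is a hypothetical illustration of that idea (function name and threshold scheme are ours, not from the paper):

```python
import numpy as np

def prune_by_weight(X, y, w, keep_frac=0.9):
    # Keep the keep_frac highest-weight samples; the lowest-weight
    # samples are treated as candidates for removal.
    k = max(1, int(round(len(w) * keep_frac)))
    idx = np.sort(np.argsort(w)[-k:])  # preserve original sample order
    return X[idx], y[idx]
```

For example, with `keep_frac=0.5` the half of the dataset with the smallest learned weights is dropped before retraining or deployment.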
The researchers believe that the proper use of data can produce fairer models without sacrificing accuracy. FORML has multiple advantages: it is easy to implement, requires minimal changes to training, and needs no data pre-processing or post-processing of the model output. Furthermore, as a data-centric strategy, FORML improves fairness at the dataset level and is agnostic to the model and the fairness metric, extending beyond classification.
This article is written as a research summary by Marktechpost Staff based on the research paper 'FORML: Learning to Reweight Data for Fairness'. All credit for this research goes to the researchers on this project.
Researchers at Apple developed Fairness Optimized Reweighting via Meta-Learning (FORML), a machine learning training algorithm that balances fairness and robustness with accuracy by jointly learning training sample weights and neural network parameters.