Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, the trained system is likely to exhibit that same bias when it makes decisions in practice.
For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with this data may be less accurate for women or people with different skin tones.
A group of researchers sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that loosely mimics the human brain: it contains layers of interconnected nodes, or “neurons,” that process data.
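For readers who want a concrete picture of that definition, here is a minimal sketch of an image-classifying neural network written in PyTorch. The layer sizes are arbitrary and this is not the architecture used in the study; it only illustrates the idea of stacked layers of interconnected neurons.

```python
# A minimal sketch of a neural network: layers of interconnected "neurons"
# that transform an input image into class scores. Sizes are illustrative
# only, not the architecture used in the study.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),          # turn a 1x28x28 image into a 784-long vector
    nn.Linear(784, 128),   # first layer of 128 interconnected neurons
    nn.ReLU(),             # nonlinearity between layers
    nn.Linear(128, 10),    # output layer: one score per object category
)

scores = model(torch.randn(1, 1, 28, 28))  # forward pass on a dummy image
print(scores.shape)                         # torch.Size([1, 10])
```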
The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but greater diversity can also degrade the network’s performance on the data it has already seen. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.
“A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says the senior author of the paper. The research appears today in Nature Machine Intelligence.
The researchers approached the problem of dataset bias by thinking like neuroscientists. In neuroscience, it is common to use controlled datasets in experiments, meaning a dataset in which the researchers know as much as possible about the information it contains.
The team built datasets that contained images of different objects in varied poses, and carefully controlled the combinations so some datasets had more diversity than others. In this case, a dataset had less diversity if more of its images showed objects from only one viewpoint, and more diversity if its images showed objects from multiple viewpoints. Each dataset contained the same number of images.
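As a hypothetical sketch of how such a controlled dataset could be assembled (the object names, viewpoints, and counts below are invented for illustration and are not the stimuli used in the paper), diversity can be dialed up or down simply by changing how many viewpoints each category is shown from, while keeping the total image count fixed:

```python
# Hypothetical sketch of a "controlled" dataset: every example is a
# (category, viewpoint) pair, and diversity is set by how many viewpoints
# each category is shown from. Names and counts are made up.
import random

CATEGORIES = ["car", "chair", "mug", "plane"]
VIEWPOINTS = ["front", "side", "top", "back"]
TOTAL_IMAGES = 4000  # every dataset ends up the same size

def build_dataset(viewpoints_per_category: int, seed: int = 0):
    rng = random.Random(seed)
    dataset = []
    per_category = TOTAL_IMAGES // len(CATEGORIES)
    for category in CATEGORIES:
        allowed = rng.sample(VIEWPOINTS, viewpoints_per_category)
        for _ in range(per_category):
            dataset.append((category, rng.choice(allowed)))
    return dataset

low_diversity  = build_dataset(viewpoints_per_category=1)  # one pose per object
high_diversity = build_dataset(viewpoints_per_category=4)  # all poses per object
print(len(low_diversity), len(high_diversity))  # same number of images in each
```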
The researchers used these carefully constructed datasets to train a neural network for image classification, and then studied how well it was able to identify objects from viewpoints the network did not see during training (known as out-of-distribution combinations).
For example, if researchers are training a model to classify cars in images, they want the model to learn what different cars look like. But if every Ford Thunderbird in the training dataset is shown from the front, when the trained model is given an image of a Ford Thunderbird shot from the side, it may misclassify it, even if it was trained on millions of car photos.
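In code terms, a minimal sketch of such an out-of-distribution split might look like the following. The categories and viewpoints are invented for illustration; the point is only that one category–viewpoint combination is withheld from training and reserved for testing.

```python
# Sketch of an out-of-distribution split: the model trains only on some
# (category, viewpoint) combinations and is tested on a combination it has
# never seen. Labels here are illustrative, not the study's actual objects.
from itertools import product

categories = ["thunderbird", "sedan", "truck"]
viewpoints = ["front", "side", "rear"]

all_combinations = set(product(categories, viewpoints))
held_out = {("thunderbird", "side")}           # never shown during training
train_combinations = all_combinations - held_out

def split(example):
    category, viewpoint = example
    return "test_ood" if (category, viewpoint) in held_out else "train"

print(split(("thunderbird", "front")))  # train
print(split(("thunderbird", "side")))   # test_ood: known category, unseen viewpoint
```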
The researchers found that if the dataset is more diverse — if more images show objects from different viewpoints — the network is better able to generalize to new images or viewpoints. Data diversity is key to overcoming bias, the senior author says.
“But it is not like more data diversity is always better; there is a tension here. When the neural network gets better at recognizing new things it hasn’t seen, then it will become harder for it to recognize things it has already seen,” the author says.
The researchers also studied methods for training the neural network.
In machine learning, it is common to train a network to perform multiple tasks at the same time. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together.
But the researchers found the opposite to be true — a model trained separately for each task was able to overcome bias far better than a model trained for both tasks together.
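To make the contrast between the two regimes concrete, here is a rough sketch (not the authors’ code, and with an invented toy architecture) of what joint multi-task training versus separate single-task training can look like in PyTorch: one shared network with a category head and a viewpoint head, versus two independent networks trained on one task each.

```python
# Sketch of the two training regimes (illustrative architecture only):
# a shared trunk with two heads trained jointly, versus two separate
# single-task networks.
import torch
import torch.nn as nn

N_CATEGORIES, N_VIEWPOINTS, FEATURES = 10, 4, 128

def trunk():
    # shared feature extractor for 1x28x28 images
    return nn.Sequential(nn.Flatten(), nn.Linear(784, FEATURES), nn.ReLU())

# Regime 1: one network, two heads, trained on both tasks at once
# (during training, the losses from the two heads would be summed).
class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = trunk()
        self.category_head = nn.Linear(FEATURES, N_CATEGORIES)
        self.viewpoint_head = nn.Linear(FEATURES, N_VIEWPOINTS)

    def forward(self, x):
        h = self.features(x)
        return self.category_head(h), self.viewpoint_head(h)

# Regime 2: two independent networks, each trained on a single task.
category_net = nn.Sequential(trunk(), nn.Linear(FEATURES, N_CATEGORIES))
viewpoint_net = nn.Sequential(trunk(), nn.Linear(FEATURES, N_VIEWPOINTS))

x = torch.randn(8, 1, 28, 28)
cat_scores, view_scores = MultiTaskNet()(x)
print(cat_scores.shape, view_scores.shape, category_net(x).shape)
```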
“The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected,” the author says.
They then dug into the internals of the neural networks to understand why this occurs.
They found that neuron specialization seems to play a major role. When the neural network is trained to recognize objects in images, it appears that two types of neurons emerge — one that specializes in recognizing the object category and another that specializes in recognizing the viewpoint.
When the network is trained to perform tasks separately, those specialized neurons are more prominent, the senior author explains. But if a network is trained to do both tasks simultaneously, some neurons become diluted and don’t specialize for one task. These unspecialized neurons are more likely to get confused, the author says.
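One simple way to picture this kind of analysis (a toy sketch with random numbers, using a basic variance-ratio measure chosen for illustration rather than the paper’s actual method) is to ask, for each hidden unit, whether its activity changes more across object categories or across viewpoints:

```python
# Toy sketch of "neuron specialization": for each hidden unit, compare how
# much its mean activation varies across categories versus across viewpoints.
# The selectivity measure is a simple variance ratio, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_units = 600, 32
activations = rng.random((n_images, n_units))      # hidden-layer responses
categories  = rng.integers(0, 10, size=n_images)   # category label per image
viewpoints  = rng.integers(0, 4,  size=n_images)   # viewpoint label per image

def variance_across_groups(acts, labels):
    # variance of the per-group mean activation, one value per unit
    group_means = np.stack([acts[labels == g].mean(axis=0)
                            for g in np.unique(labels)])
    return group_means.var(axis=0)

category_var  = variance_across_groups(activations, categories)
viewpoint_var = variance_across_groups(activations, viewpoints)

# Units with a high ratio respond mainly to category; low ratio, to viewpoint.
selectivity = category_var / (category_var + viewpoint_var + 1e-9)
print("category-selective units:",  int((selectivity > 0.7).sum()))
print("viewpoint-selective units:", int((selectivity < 0.3).sum()))
```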
“But the next question now is, how did these neurons get there? You train the neural network and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing,” the author says.
That is one area the researchers hope to explore with future work. They want to see if they can force a neural network to develop neurons with this specialization. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied illumination.
https://www.nature.com/articles/s42256-021-00437-5