Modelling Dataset Bias in Machine-Learned Theories of Economic Decision-Making

Background Introduction

Normative and descriptive models have long sought to explain and predict human decision-making under risk, for example in choices between gambles. A recent study obtained a more accurate model of human decisions by training neural networks (NNs) on choices13k, a new large-scale online dataset. The present study systematically analyzed the relationship between different models and datasets and found evidence of dataset bias: choice rates for gambles in choices13k tend to be pulled towards indifference, possibly reflecting increased decision noise. By adding structured decision noise to a neural network trained on laboratory data, we constructed a Bayesian generative model that outperformed all other models on choices13k except the neural network trained on choices13k itself.

Research Source

This study was published in “Nature Human Behaviour,” titled “Modelling dataset bias in machine-learned theories of economic decision-making.” The authors include Tobias Thomas, Dominik Straub, Fabian Tatai, Megan Shene, Tümer Tosik, Kristian Kersting, and Constantin A. Rothkopf, all affiliated with the Technical University of Darmstadt and the Hessian Center for Artificial Intelligence, Germany.

Research Workflow

Method Overview

This study designed a series of experiments to investigate the interactions between datasets and models, using choice datasets from three different studies: the choice prediction competition 2015 (cpc15), the choice prediction competition 2018 (cpc18), and choices13k. We trained multiple machine learning models and tested their performance across datasets to evaluate the models' generalization ability and the differences between the datasets.

  1. Source and Description of Choice Datasets:

    • The cpc15 dataset was collected in laboratory studies at the Hebrew University of Jerusalem and the Technion-Israel Institute of Technology, containing data from 446 participants on 150 different choice problems.
    • The cpc18 dataset extends cpc15 with additional gambles and behavioral data collected under the same laboratory conditions.
    • The choices13k dataset contains the choice behavior of participants recruited on the Amazon Mechanical Turk (AMT) platform on more than 13,000 choice problems.
  2. Model Training and Testing:

    • Using five different models (three non-neural approaches: BEAST, random forests, and support vector machines (SVMs), plus two different neural network architectures), we trained each model on the cpc15 and choices13k datasets respectively and evaluated its performance on the other datasets.
  3. Dataset Bias Analysis:

    • We applied transfer testing and found that models trained on choices13k performed worse on the laboratory datasets cpc15 and cpc18 than models trained on those datasets, suggesting systematic bias between datasets.
    • Using explainable AI (XAI) methods such as feature importance weights, we explored factors that could explain these differences, focusing on features from the psychological and behavioral economics literature.
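The transfer-testing protocol above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: synthetic gamble problems stand in for the actual cpc15 and choices13k data, and the feature set and noise levels are invented for the example. The point is the mechanics: fit on one dataset, then compare within-dataset error against error on a differently distributed dataset.

```python
# Toy sketch of transfer testing: train on "laboratory" data, evaluate on
# "online" data generated with the same preference structure but more noise.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_dataset(n_problems, noise):
    """Synthetic gamble problems: features -> population choice rate in [0, 1]."""
    X = rng.normal(size=(n_problems, 5))          # e.g. EV difference, variance, ...
    latent = 1 / (1 + np.exp(-X[:, 0]))           # true preference driven by feature 0
    y = np.clip(latent + rng.normal(0, noise, n_problems), 0, 1)
    return X, y

X_lab, y_lab = make_dataset(150, noise=0.05)          # low-noise "laboratory" set
X_online, y_online = make_dataset(13000, noise=0.25)  # high-noise "online" set

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_lab, y_lab)

mse_within = np.mean((model.predict(X_lab) - y_lab) ** 2)
mse_transfer = np.mean((model.predict(X_online) - y_online) ** 2)
print(f"within-dataset MSE: {mse_within:.4f}")
print(f"transfer MSE:       {mse_transfer:.4f}")
```

With this setup the transfer error exceeds the within-dataset error, mirroring the pattern the study reports when models trained on one dataset are evaluated on another.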

Experimental Results

  1. Choice Data Analysis:

    • Through transfer testing, we found systematic differences in participant behavior between the choices13k dataset and the laboratory datasets cpc15 and cpc18: each model performed best on held-out data from its own training dataset and worse on the other datasets.
  2. Feature Weight Analysis:

    • Using linear models and feature importance weights, we found that psychologically motivated features (such as the difference in expected value between gambles, the outcome probabilities, and feedback factors) best explain the prediction differences between models.
  3. Decision Noise Model:

    • To quantify the sources of dataset bias, we constructed a hybrid model in which a fraction of participants guess at random while the choices of the remaining participants are corrupted by additional decision noise. This hybrid model effectively accounted for the elevated decision noise in the choices13k dataset.
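The noise mixture described above can be illustrated with a minimal simulation. This is a sketch under simplifying assumptions (not the authors' implementation): the guessing fraction, the logit-space Gaussian noise, and all parameter values here are invented for illustration. It shows the qualitative effect the study exploits: random guessing and decision noise both pull aggregate choice rates towards 0.5.

```python
# Minimal sketch of a guess-plus-noise mixture: a fraction of simulated
# participants respond at random, the rest follow a model-predicted
# preference corrupted by Gaussian noise in logit space.
import numpy as np

rng = np.random.default_rng(1)

def mixture_choice_rate(p_model, guess_frac, noise_sd, n_sim=10000):
    """Population choice rate for one gamble problem.

    p_model    : model-predicted probability of choosing option B
    guess_frac : fraction of participants responding at random (rate 0.5)
    noise_sd   : sd of decision noise added to the preference in logit space
    """
    logit = np.log(p_model / (1 - p_model))
    noisy = 1 / (1 + np.exp(-(logit + rng.normal(0, noise_sd, n_sim))))
    return guess_frac * 0.5 + (1 - guess_frac) * noisy.mean()

# With no guessing and no noise, the choice rate matches the model.
# With guessing and noise, an extreme preference is flattened towards 0.5,
# mimicking the more balanced choice rates observed in choices13k.
clean = mixture_choice_rate(0.9, guess_frac=0.0, noise_sd=0.0)
noisy = mixture_choice_rate(0.9, guess_frac=0.3, noise_sd=1.5)
print(clean, noisy)
```

Fitting the guessing fraction and noise level to data is what turns this sketch into the kind of generative model the study uses to bridge laboratory and online datasets.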

Research Conclusions and Significance

This study reveals the complex interactions between machine learning models and human decision data, emphasizing the importance of the data collection environment. By combining machine learning, data analysis, and theory-driven reasoning, we can better predict and understand human economic decision-making behavior.

  1. Scientific Significance and Application Value:

    • Provides a scientific method for explaining dataset bias through the construction of generative models.
    • Highlights the importance of dataset representativeness and the data collection environment for developing broader theories of human decision-making.
    • Offers insights for improving future machine learning models and methods for studying human decision behavior.
  2. Research Highlights:

    • Identified the existence and causes of dataset bias, demonstrating that even large-scale datasets may require theoretical analysis to understand complex human decision-making behavior.
    • Proposed a method that effectively explains the differences in choice behavior between online and laboratory datasets by incorporating decision noise into generative models.

Future Research Directions

This study raises key questions for future research: how to optimize the design of decision experiments, how to ensure data representativeness, and how to account for the effects of different experimental environments on human decision-making. Richer datasets and more naturalistic experimental settings may further advance both the application and the theory of machine learning models in economic decision analysis. The study also underscores that theory remains indispensable in data-driven machine learning, laying a foundation for future tools that support scientific theory generation. The authors thus demonstrate the great potential of machine learning for economic decision analysis while also revealing the caution and planning required when deriving theories from large-scale machine-learned models. In this way the boundaries of science and engineering continue to expand, providing a more accurate and richer modeling basis for practical economic decisions.