An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry#

Conference Paper · ICSE 2023 · Empirical Study

Authors#

Wenxin Jiang
Nicholas M. Synovic
Matt Hyatt
Taylor R. Schorlemmer
Rohan Sethi
Yung-Hsiang Lu
George K. Thiruvathukal
James C. Davis

Abstract#

Deep Neural Networks (DNNs) are being adopted as components in software systems. Creating and specializing DNNs from scratch has grown increasingly difficult as state-of-the-art architectures grow more complex. Following the path of traditional software engineering, machine learning engineers have begun to reuse large-scale pre-trained models (PTMs) and fine-tune these models for downstream tasks. Prior works have studied reuse practices for traditional software packages to guide software engineers towards better package maintenance and dependency management. We lack a similar foundation of knowledge to guide behaviors in pre-trained model ecosystems.
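To make the reuse workflow concrete, the sketch below shows one common way a practitioner loads a pre-trained model from Hugging Face and prepares it for fine-tuning on a downstream classification task. This is a minimal illustration, not the paper's methodology: the checkpoint name bert-base-uncased and the two-label setup are assumptions chosen for the example, and it requires the transformers library and PyTorch.

# Minimal PTM-reuse sketch: load a pre-trained checkpoint and adapt it
# for a downstream task (here, binary text classification).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"  # assumed example model; any Hub model ID works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Tokenize a toy input and run a forward pass to confirm the model loads.
inputs = tokenizer("Pre-trained models lower the cost of building DNNs.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]); fine-tuning would train this new classification head

In practice, the fine-tuning step itself (e.g., with a Trainer or a custom training loop) follows this loading pattern; the point here is only that reuse starts from a published checkpoint rather than training from scratch.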

In this work, we present the first empirical investigation of PTM reuse. We interviewed 12 practitioners from the most popular PTM ecosystem, Hugging Face, to learn the practices and challenges of PTM reuse. From this data, we model the decision-making process for PTM reuse. Based on the identified practices, we describe useful attributes for model reuse, including provenance, reproducibility, and portability. Three challenges for PTM reuse are missing attributes, discrepancies between claimed and actual performance, and model risks. We substantiate these identified challenges with systematic measurements in the Hugging Face ecosystem. Our work informs future directions on optimizing deep learning ecosystems by automatically measuring useful attributes and potential attacks, and envisions future research on infrastructure and standardization for model registries.
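As a rough illustration of the kind of ecosystem measurement described above, the sketch below queries the Hugging Face Hub for popular models and checks whether basic reuse-relevant attributes are present. This is a hedged example, not the paper's measurement pipeline: it assumes a recent version of the huggingface_hub library (attribute names such as id and card_data differ in older releases), and the specific attributes inspected (model card, license tag, download count) are chosen for illustration.

# Sketch: sample popular Hub models and check for basic documentation attributes.
from huggingface_hub import HfApi

api = HfApi()
# The 10 most-downloaded models for one example task, sorted server-side.
for m in api.list_models(filter="text-classification", sort="downloads", direction=-1, limit=10):
    info = api.model_info(m.id)  # fetch full metadata for a single model
    has_card = info.card_data is not None
    has_license = any(t.startswith("license:") for t in (info.tags or []))
    print(info.id, info.downloads, has_card, has_license)

A measurement like this, run at scale, is how one could quantify missing attributes across the registry; the paper's actual methodology and attribute definitions are described in the publication itself.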

Artifacts#

Todo

  • Add the paper preprint

  • Add the poster

  • Add link to the source code

  • Paper Preprint: Download

  • Published Paper: View

  • Poster: Download

  • Source Code: View

BibTeX
@inproceedings{jiang_empirical_2023,
   address = {Melbourne, Victoria, Australia},
   series = {{ICSE} '23},
   title = {An {Empirical} {Study} of {Pre}-{Trained} {Model} {Reuse} in the {Hugging} {Face} {Deep} {Learning} {Model} {Registry}},
   isbn = {978-1-6654-5701-9},
   url = {https://dl.acm.org/doi/10.1109/ICSE48619.2023.00206},
   doi = {10.1109/ICSE48619.2023.00206},
   abstract = {Deep Neural Networks (DNNs) are being adopted as components in software systems. Creating and specializing DNNs from scratch has grown increasingly difficult as state-of-the-art architectures grow more complex. Following the path of traditional software engineering, machine learning engineers have begun to reuse large-scale pre-trained models (PTMs) and fine-tune these models for downstream tasks. Prior works have studied reuse practices for traditional software packages to guide software engineers towards better package maintenance and dependency management. We lack a similar foundation of knowledge to guide behaviors in pre-trained model ecosystems. In this work, we present the first empirical investigation of PTM reuse. We interviewed 12 practitioners from the most popular PTM ecosystem, Hugging Face, to learn the practices and challenges of PTM reuse. From this data, we model the decision-making process for PTM reuse. Based on the identified practices, we describe useful attributes for model reuse, including provenance, reproducibility, and portability. Three challenges for PTM reuse are missing attributes, discrepancies between claimed and actual performance, and model risks. We substantiate these identified challenges with systematic measurements in the Hugging Face ecosystem. Our work informs future directions on optimizing deep learning ecosystems by automated measuring useful attributes and potential attacks, and envision future research on infrastructure and standardization for model registries.},
   urldate = {2024-12-12},
   booktitle = {Proceedings of the 45th {International} {Conference} on {Software} {Engineering}},
   publisher = {IEEE Press},
   author = {Jiang, Wenxin and Synovic, Nicholas and Hyatt, Matt and Schorlemmer, Taylor R. and Sethi, Rohan and Lu, Yung-Hsiang and Thiruvathukal, George K. and Davis, James C.},
   month = jul,
   year = {2023},
   keywords = {Supply chains, Cybersecurity, Biological system modeling, Decision making, Deep learning, Ecosystems, Empirical software engineering, Engineering decision making, Machine learning, Software reuse, Software supply chain, Standardization, Systematics, Trust},
   pages = {2463--2475},
}

Video#