
SABATO MARCO SINISCALCHI

Differentially Private Adapters for Parameter Efficient Acoustic Modeling

  • Authors: Ho C.-W.; Yang C.-H.H.; Siniscalchi S.M.
  • Publication year: 2023
  • Type: Conference paper published in a proceedings volume
  • OA Link: http://hdl.handle.net/10447/637524

Abstract

In this work, we devise a parameter-efficient solution to bring differential privacy (DP) guarantees into the adaptation of a cross-lingual speech classifier. We investigate a new adaptation framework built on a frozen pre-trained model for DP-preserving speech modeling without full model fine-tuning. First, we introduce a noisy teacher-student ensemble into a conventional adaptation scheme that leverages a frozen pre-trained acoustic model, and we attain performance superior to DP-based stochastic gradient descent (DPSGD). Next, we insert residual adapters (RAs) between the layers of the frozen pre-trained acoustic model. The RAs reduce training cost and time significantly with a negligible performance drop. Evaluated on the open-access Multilingual Spoken Words (MLSW) dataset, our solution reduces the number of trainable parameters by 97.5% using RAs, with only a 4% performance drop relative to fine-tuning the full cross-lingual speech classifier, while preserving DP guarantees.
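The residual adapter idea in the abstract can be illustrated with a minimal sketch: a small bottleneck module whose output is added back to the hidden state of a frozen backbone layer. The layer sizes, NumPy implementation, and initialization here are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class ResidualAdapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, d_model: int, d_bottleneck: int):
        # Only these two small matrices would be trained; the backbone stays frozen.
        self.W_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
        self.W_up = rng.normal(scale=0.02, size=(d_bottleneck, d_model))

    def __call__(self, h: np.ndarray) -> np.ndarray:
        z = np.maximum(h @ self.W_down, 0.0)  # ReLU bottleneck
        return h + z @ self.W_up              # residual connection preserves shape

    def num_params(self) -> int:
        return self.W_down.size + self.W_up.size

# Hypothetical sizes: a 512-dim hidden state with a 32-dim bottleneck.
d_model, d_bottleneck = 512, 32
adapter = ResidualAdapter(d_model, d_bottleneck)

h = rng.normal(size=(4, d_model))   # a batch of hidden states from a frozen layer
out = adapter(h)

# Trainable parameters per layer drop from O(d_model^2) for full fine-tuning
# to 2 * d_model * d_bottleneck for the adapter.
print(out.shape, adapter.num_params(), d_model * d_model)
```

Because the adapter's output has the same shape as its input, it can be slotted between any two layers of the frozen model without changing the rest of the network, which is what makes the large reduction in trainable parameters possible.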
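The noisy teacher-student ensemble mentioned in the abstract can be sketched as PATE-style noisy label aggregation: each teacher, trained on a disjoint data shard, votes on a label, and the student only ever sees the noisily aggregated vote. The Laplace mechanism and the epsilon value below are illustrative assumptions; the paper's exact aggregation scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_aggregate(teacher_votes: np.ndarray, num_classes: int, epsilon: float = 1.0) -> int:
    """Return the argmax class after adding Laplace noise to per-class vote counts.

    The student is trained on these noisy labels only, which is what
    yields the differential-privacy guarantee with respect to the
    teachers' training data.
    """
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# Hypothetical example: 10 teachers voting over 3 keyword classes.
votes = np.array([2, 2, 2, 2, 1, 2, 0, 2, 2, 2])
label = noisy_aggregate(votes, num_classes=3, epsilon=5.0)
print(label)  # a valid class index; usually the majority class for large epsilon
```

Smaller epsilon adds more noise (stronger privacy, lower label fidelity); larger epsilon makes the aggregate track the true majority vote more closely.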