Sparse, Efficient, and Semantic Mixture Invariant Training: Taming In-the-Wild Unsupervised Sound Separation



Scott Wisdom, Aren Jansen, Ron J. Weiss, Hakan Erdogan, John R. Hershey

Google Research

Figure: Scatter plots of multi-source separation SI-SNR improvement (MSi) versus single-source reconstruction SI-SNR (1S) for various losses. Marker size is proportional to the loss weight λ, and M is the number of output sources of the separation model. The M=16 model uses our proposed efficient version of MixIT, which enables scaling beyond M=8 sources.

Abstract

Supervised neural network training has led to significant progress on single-channel sound separation. This approach relies on ground truth isolated sources, which precludes scaling to widely available mixture data and limits progress on open-domain tasks. The recent mixture invariant training (MixIT) method enables training on in-the-wild data; however, it suffers from two outstanding problems. First, it produces models that tend to over-separate, producing more output sources than are present in the input. Second, the exponential computational complexity of the MixIT loss limits the number of feasible output sources. In this paper we address both issues. To combat over-separation we introduce new losses: sparsity losses that favor fewer output sources and a covariance loss that discourages correlated outputs. We also experiment with a semantic classification loss by predicting weak class labels for each mixture. To handle larger numbers of sources, we introduce an efficient approximation using a fast least-squares solution, projected onto the MixIT constraint set. Our experiments show that the proposed losses curtail over-separation and improve overall performance. The best performance is achieved using larger numbers of output sources, enabled by our efficient MixIT loss, combined with sparsity losses to prevent over-separation. On the FUSS test set, we achieve over 13 dB in multi-source SI-SNR improvement, while boosting single-source reconstruction SI-SNR by over 17 dB.
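To illustrate the efficient approximation described above, the sketch below solves an unconstrained least-squares problem for the mixing matrix and then projects it onto the MixIT constraint set, where each output source is assigned to exactly one of the two reference mixtures. This is a minimal numpy sketch under our own naming and conventions, not the paper's actual implementation; it uses time-domain least squares in place of the paper's exact loss.

```python
import numpy as np

def efficient_mixit_assignment(refs, est_sources):
    """Approximate the MixIT assignment without the exponential search.

    refs:        (2, T) array of reference mixtures.
    est_sources: (M, T) array of separated output sources.
    Returns a binary (2, M) mixing matrix A (one nonzero per column) and
    the remixed estimates A @ est_sources.
    """
    S = est_sources
    # Unconstrained least squares: find A_ls minimizing ||refs - A_ls @ S||^2.
    # lstsq solves S.T @ X ~= refs.T for X of shape (M, 2); transpose to (2, M).
    A_ls, *_ = np.linalg.lstsq(S.T, refs.T, rcond=None)
    A_ls = A_ls.T
    # Project onto the MixIT constraint set: assign each source to the
    # reference mixture with the larger least-squares coefficient.
    M = S.shape[0]
    A = np.zeros_like(A_ls)
    A[np.argmax(A_ls, axis=0), np.arange(M)] = 1.0
    return A, A @ S
```

The least-squares solve and per-column projection cost O(M^2 T + M^3) instead of enumerating all 2^M binary assignments, which is what makes training with M=16 output sources feasible.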

 

 

Paper

 

"Sparse, Efficient, and Semantic Mixture Invariant Training: Taming In-the-Wild Unsupervised Sound Separation",
Scott Wisdom, Aren Jansen, Ron J. Weiss, Hakan Erdogan, John R. Hershey,
Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 2021.
[arXiv preprint]


Audio Demos

[Demo index]

Last updated: October 2021