Audio-Visual Model for Generating Eating Sounds Using Food ASMR Videos

We present an audio-visual model for generating food texture sounds from silent eating videos. We designed a deep network-based model that takes visual features of the detected face as input and outputs a magnitude spectrogram aligned with the visual stream. Because generating raw waveform samples directly from an input visual stream is challenging, we instead used the Griffin-Lim algorithm to recover phase from the predicted magnitude spectrogram and then generated raw waveform samples with the inverse short-time Fourier transform. Additionally, we produced waveforms from these magnitude spectrograms using an example-based synthesis procedure. To train the model, we created a dataset of food autonomous sensory meridian response (ASMR) videos. We evaluated our model on this dataset and found that the predicted sound features exhibit appropriate temporal synchronization with the visual inputs. Our subjective evaluation experiments showed that the predicted sounds are sufficiently realistic to fool participants in a "real or fake" psychophysical experiment.
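The phase-recovery step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a model-predicted magnitude spectrogram `S_pred`, and the sample rate and STFT parameters (`SR`, `N_FFT`, `HOP_LENGTH`) are illustrative placeholders rather than the paper's settings. It uses librosa's built-in Griffin-Lim routine, which performs the iterative phase estimation and the inverse short-time Fourier transform internally.

```python
import numpy as np
import librosa
import soundfile as sf

# Assumed audio and STFT parameters (not taken from the paper).
SR = 16000          # sample rate in Hz
N_FFT = 1024        # FFT window size
HOP_LENGTH = 256    # hop size between frames

def magnitude_to_waveform(S_pred: np.ndarray, n_iter: int = 60) -> np.ndarray:
    """Recover a time-domain waveform from a predicted magnitude spectrogram.

    Griffin-Lim alternates between estimating the phase and enforcing the
    given magnitude, then reconstructs the waveform via the inverse STFT.
    S_pred is expected to have shape (1 + N_FFT // 2, n_frames).
    """
    return librosa.griffinlim(
        S_pred,
        n_iter=n_iter,
        n_fft=N_FFT,
        hop_length=HOP_LENGTH,
    )

# Example usage with a random placeholder spectrogram standing in for the
# network's output:
S_pred = np.abs(np.random.randn(1 + N_FFT // 2, 200)).astype(np.float32)
waveform = magnitude_to_waveform(S_pred)
sf.write("predicted_eating_sound.wav", waveform, SR)
```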

Project page on GitHub

Kodai Uchiyama and Kazuhiko Kawamoto, "Audio-Visual Model for Generating Eating Sounds Using Food ASMR Videos," IEEE Access, vol. 9, pp. 50106-50111, 2021.
