A Generative AI-Based Framework for COVID-19 Screening from Cough Audio Signals

Written on 30/03/2026
by Maddirla Jagadish

J Vis Exp. 2026 Mar 10;(229). doi: 10.3791/69874.

ABSTRACT

Cough audio analysis provides a practical pathway for developing automated tools to support COVID-19 screening. Despite growing interest, many existing approaches remain sensitive to noise, class imbalance, and dataset variability, limiting their reproducibility. This work presents a structured, generative artificial intelligence-driven methodology for COVID-19 detection from cough sound recordings. Experiments are conducted on two publicly accessible datasets, COUGHVID and Virufy, both of which comprise labeled cough samples. The proposed protocol consists of sequential preprocessing stages, including cough segmentation, denoising, and signal normalization. Acoustic characteristics are then captured through Mel-Frequency Cepstral Coefficients (MFCCs), chroma descriptors, and spectral contrast features. To improve representation learning and mitigate class imbalance, a hybrid generative framework integrating Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) is employed to synthesize feature-level samples. Classification is then performed using Deep Convolutional Neural Networks (DCNNs) and attention-based DCNN models. Performance evaluation indicates that generative augmentation consistently improves classification performance over non-generative baselines, achieving a peak accuracy of 97.2% and an AUROC of 0.953 across the evaluated datasets. These results demonstrate the effectiveness of generative modeling for enhancing cough-based COVID-19 detection and establish a reproducible analytical pipeline for future research in acoustic health monitoring.
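The abstract names MFCCs as one of the acoustic features extracted from each cough segment. As an illustration only (the paper's actual extraction parameters are not given here), the standard MFCC computation — framing, windowing, power spectrum, mel filterbank, log, DCT-II — can be sketched in plain NumPy; all frame lengths, hop sizes, and filter counts below are assumed defaults, not values from the study:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_mels=26, n_ceps=13):
    # Frame the signal with a Hamming window, take the power spectrum.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, then log compression.
    fb = mel_filterbank(n_mels, n_fft, sr)
    log_mel = np.log(power @ fb.T + 1e-10)
    # DCT-II decorrelates the log energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T

# 1 s of a 440 Hz tone stands in for a cough segment.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (98, 13): one 13-coefficient vector per frame
```

In practice a library such as librosa is typically used for this step; the sketch only makes the transform explicit. Chroma and spectral contrast features are computed from the same short-time spectra with different frequency groupings.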

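The hybrid VAE-GAN augmentation described above synthesizes feature-level samples for the minority class. The paper's architecture is not specified here, so the following is a minimal, hypothetical sketch of just the VAE half in PyTorch — a small encoder/decoder over fixed-length feature vectors with the reparameterization trick, from which synthetic samples are drawn by decoding prior noise; the feature and latent dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FeatureVAE(nn.Module):
    """Minimal VAE over fixed-length acoustic feature vectors (sketch)."""
    def __init__(self, feat_dim=39, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After training, minority-class samples are synthesized by decoding
# draws from the prior (here from an untrained model, for shape only).
model = FeatureVAE()
with torch.no_grad():
    synthetic = model.dec(torch.randn(16, 8))
print(synthetic.shape)  # torch.Size([16, 39])
```

In the full hybrid framework, a GAN discriminator would additionally score decoder outputs so the generator is trained to produce feature vectors indistinguishable from real ones; that adversarial component is omitted here for brevity.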
PMID:41911284 | DOI:10.3791/69874