Master's Student in Computer Science
Cognitive Neuroscience & Artificial Intelligence Lab
Department of Computer Science, Western University, London, ON, Canada
Vector Institute for Artificial Intelligence, Toronto, ON, Canada
Research interests: computer vision, computational neuroscience, brain modeling, and applications of AI in health care
Hobbies: volunteering at science events and dabbling in side research projects
Lumbar Spinal Stenosis (LSS) is a leading cause of disability in adults over 60. While Magnetic Resonance Imaging (MRI) is the diagnostic gold standard, it faces a systemic challenge: medical imaging sits at the heart of modern diagnosis, yet the professionals who interpret these images are in short supply and under mounting pressure, even with temporary mitigating measures in place. Combined with rising demand, this shortage creates imaging bottlenecks that delay diagnosis and treatment, and prolonged diagnostic uncertainty causes unnecessary stress and medical concern for patients. For conditions requiring timely intervention, such as LSS, these delays carry real clinical and economic costs, affecting not only patients' health outcomes but also the productivity and well-being of clinicians. One promising direction is a system that increases diagnostic efficiency while maintaining high-quality results and safeguarding clinicians' well-being. Many computer vision pipelines have been built to assist with medical image diagnosis; however, these models rarely expose the motivation behind their predictions, and so fail to gain clinicians' trust. To address this issue, we propose using multimodal Large Language Models (LLMs) to bridge the gap between motivation and decision. This project proposes an interdisciplinary framework that pairs established computer vision methods with LLMs to generate structured, evidence-grounded clinical explanations, not to replace radiologist judgment, but to make AI-assisted diagnosis transparent enough to be trustworthy and useful in practice. In this proposal, our focus is LSS diagnosis, but we envision the framework generalizing to other areas of medical imaging.
Paper - in progress
The goal of this project is to build and train a model capable of classifying sleep electroencephalography (EEG) into three categories: dreamless sleep, dream sleep, and lucid sleep. The project features four architectures (convNet, hybridNet, thinNet, and lkNet) for the three-class classification task. The PyTorch API was used to build the CNNs, supplemented by the Scikit-learn library for data processing and performance analysis. The dataset is combined from two sources and generatively downsampled to ensure balance between the categories. The original dreamless- and dream-sleep data were collected by Scarpelli et al. for their paper Electrophysiological Correlates of Dream Recall During REM Sleep: Evidence from Multiple Awakenings and Within-Subjects Design. The original lucid-sleep data were collected by Konkoly et al., presented in their paper Real-time dialogue between experimenters and dreamers during REM sleep.
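The four architectures themselves are not shown here, but the overall setup can be sketched in PyTorch. This is a minimal illustrative example, not one of the actual project models: the channel count, window length, and layer sizes are assumptions chosen only to show the shape of a three-class EEG CNN.

```python
import torch
import torch.nn as nn

class SleepEEGConvNet(nn.Module):
    """Toy 1D CNN for 3-class sleep-stage classification
    (dreamless / dream / lucid). Layer sizes are illustrative."""

    def __init__(self, n_channels: int = 8, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, n_classes) logits
        return self.classifier(self.features(x).squeeze(-1))

model = SleepEEGConvNet()
logits = model(torch.randn(2, 8, 1000))  # two dummy EEG windows
print(logits.shape)  # torch.Size([2, 3])
```

A real training loop would pair this with `nn.CrossEntropyLoss` on the logits and Scikit-learn metrics (e.g. a classification report) for per-class evaluation.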
Paper - in progress
Email: knguy52@uwo.ca
GitHub: timnd08
LinkedIn: Tim V. Nguyen