About UWORCS
UWORCS 2026 (University of Western Ontario Research in Computer Science) is the Computer Science
department's annual student conference. This year marks the 33rd year of the conference, continuing a
long-standing tradition of showcasing research within the department.
The conference provides students an opportunity to receive feedback, refine presentation skills, learn
about research across the department, and connect with peers and faculty in the Computer Science
community.
For PhD students in their third and fourth years, presenting at UWORCS fulfills the yearly seminar
requirement.
For any questions or to learn more, please email the Conference Chair at kwade4@uwo.ca.
Your participation helps to make UWORCS a success!
Keynote Speaker

Arash Habibi Lashkari
Dr. Arash Habibi Lashkari is a Canada Research Chair (CRC) in Cybersecurity, the founder and director of the Behaviour-Centric Cybersecurity Center (BCCC), a Senior Member of IEEE, and a Full Professor at York University. Prior to this, he was an Associate Professor at the Faculty of Computer Science, University of New Brunswick (UNB). His research focuses on cyber threat modelling and detection, malware analysis, big data security, internet traffic analysis, and cybersecurity dataset generation.
Dr. Lashkari has over 25 years of teaching experience spanning several international universities, and designed the first cybersecurity Capture the Flag (CTF) competition for post-secondary students in Canada. He has received 15 awards at international computer security competitions, including three gold awards, and was recognized as one of Canada's Top 150 Researchers for 2017. In 2020, Dr. Lashkari received the University of New Brunswick's prestigious Teaching Innovation Award for his personally created teaching methodology, the Think-Que-Cussion Method.
He is the author of ten published books and more than 110 academic articles on a variety of cybersecurity-related topics, and co-author of the national award-winning article series, “Understanding Canadian Cybersecurity Laws”, which received a Gold Medal at the 2020 Canadian Online Publishing Awards (held remotely in 2021).
Building on over two decades of concurrent industrial and development experience in network, software, and computer security, Dr. Lashkari's current work involves developing vulnerability detection technology to protect network systems against cyberattacks. He simultaneously supervises multiple research and development teams working on projects related to network traffic analysis, malware analysis, honeynets, and threat hunting.
Talk Title
Elevating Cybersecurity Vigilance: Advancing AI-Powered Security and Security of AI Through the UCS Knowledge Mobilization Program
The Understanding Cybersecurity Series (UCS) is a comprehensive knowledge-mobilization program addressing both AI-powered security and the protection of AI systems at scale. Recognizing the increasing complexity of cyber threats, UCS operates on the principle that effective cybersecurity solutions require collaboration between academia, industry, and policymakers. The program advances cutting-edge research in AI-driven cybersecurity defenses, tackles emerging risks targeting AI models and infrastructures, and promotes public education to ensure cybersecurity knowledge is accessible, actionable, and relevant for IT professionals, researchers, developers, and industry leaders.
In this keynote, Dr. Lashkari will explore the evolving landscape of cybersecurity and the transformative role of AI in this field. The talk will highlight how open-source tools and datasets are enabling AI-powered solutions to improve threat detection, automate responses, and build more resilient systems. It will also cover AI-driven security methods, strategies to protect AI systems and infrastructure, and the development of practical, scalable cybersecurity tools. Attendees will also gain insight into how AI is shaping both defensive and adversarial aspects of digital security.
Dr. Lashkari's keynote speech will take place on Friday, April 10th in Middlesex College.
Please join us in welcoming Dr. Lashkari to the 33rd Annual Conference - UWORCS 2026.
Meet and Greet
Join us for a Meet-and-Greet on April 10th from 8:00 AM to 9:00 AM in MC312 (Grad Lounge), right before the presentations begin! Come by to network with other students, explore presentations that catch your interest, and pick up your name tag.
Coffee and light refreshments will be provided!
Prizes & Games
Attend morning presentations, ask questions, and join the keynote session to earn entries into the prize draw, which will be held after lunch. Afterwards, join us for games and activities to get to know other students.
The more you participate, the greater your chances to win!
Frequently Asked Questions
-
Who can attend?
Computer science faculty, graduate students, and undergraduate students are invited to attend the presentations. Students from other faculties are also welcome.
-
Is there a registration fee?
There is no registration fee!
-
What sort of research can be presented?
You can present any research you've been involved with, whether it's completed work, ongoing projects, or proposed research. This could include thesis research and course projects with a research component.
-
How should I prepare my talk?
Each oral presentation should be 20 minutes long with an additional 7 minutes for questions.
Presentations should clearly state the research problem and motivation, include relevant facts, data, and analysis to support the research strategy, be accessible to an audience with an undergraduate-level background in computer science, and maintain a smooth flow of ideas.
-
Does this fulfill my yearly PhD seminar?
Yes! PhD students in their 3rd and 4th years can present their current research and it will count towards the yearly seminar requirement.
-
Will there be session prizes?
Yes! Each session will have a cash prize for the best presentation. The number and size of the prizes will be determined once registration is finalized.
-
How does session judging/chairing work?
Each presentation will be given a score out of 50 by each of three faculty members (for a total score out of 150), based on presentation quality and clarity. The presentation with the highest total score in each session receives the best presentation award. Feedback from the judges will be forwarded to each presenter after the event is over. Session chairs announce each speaker and ensure that the talks stay on schedule.
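As an illustrative sketch only (not an official scoring tool; the talk names and scores below are hypothetical), the tallying described above amounts to:

```python
def total_score(judge_scores):
    """Sum one talk's three judge scores (each out of 50) into a total out of 150."""
    assert len(judge_scores) == 3 and all(0 <= s <= 50 for s in judge_scores)
    return sum(judge_scores)

def best_presentation(session):
    """Return the talk with the highest total score in a session."""
    return max(session, key=lambda talk: total_score(session[talk]))

# Hypothetical example session: talk -> [judge1, judge2, judge3]
session = {
    "Talk A": [42, 38, 45],  # total 125
    "Talk B": [47, 44, 46],  # total 137
}
print(best_presentation(session))  # prints: Talk B
```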
-
Is lunch included?
Yes! Lunch will be provided for registered participants. In addition, coffee and snacks will be provided throughout the day.
Subjects
UWORCS 2026 features oral presentations covering a wide range of topics.
Team
The team for 2026
Kaitlyn Wade
Conference Chair & Web Master
Joud El-Shawa
Graphic Designer
Rawan El Moghrabi
Organizing Committee
Presenters
-
AI in Tabletop RPGs
Khushal S. Mehta
Artificial Intelligence
This presentation explores the design and implementation of an AI-generated campaign framework for Dungeons & Dragons (D&D), focusing on procedural narrative construction, adaptive worldbuilding, and dynamic encounter orchestration. The system leverages large language models to generate cohesive story arcs, non-player characters (NPCs), quest lines, lore documents, and branching dialogue trees in real time, while maintaining alignment with established D&D 5e mechanics.
-
Wifi Sensing for Smart Home Applications
Gad Mohamed Gad
Artificial Intelligence
WiFi-enabled devices are now ubiquitous in homes, enabling the rapid adoption of WiFi sensing technology for various smart home applications. This technology offers a privacy-preserving, low-cost, and resource-efficient alternative to traditional sensing methods like cameras, video, or infrared, as it avoids issues such as limited field of view and occlusion. However, WiFi sensing faces significant challenges, including hardware noise, temporal variations, and environmental sensitivity, which cause Channel State Information (CSI) data distribution shifts over time and across locations, ultimately degrading machine learning classifier performance. In this work, we evaluate WiFi sensing models across multiple smart home tasks (room occupancy detection, human activity recognition, and indoor localization), identify key sources of CSI distribution shift, and develop preprocessing and post-training pipelines to extract time-invariant features that amplify class-specific distortion signatures, resulting in substantial accuracy improvements.
-
A Memory-Driven Action Selection Framework for Scalable Ambient NPC Behavior
Eric Buitron Lopez
Games
Open-world video games populate their environments with ambient non-player characters (NPCs) that perform background activities like shopping, patrolling, or socializing. Making these characters behave in varied, believable ways across large populations is challenging: sophisticated game AI techniques like planning produce diverse behavior but are computationally expensive, while lightweight approaches like finite state machines scale well but tend to produce repetitive patterns. We present a framework where NPC behaviors are defined as directed graphs of actions and each NPC maintains a memory system with records of its recent decisions, guiding future choices away from repetition without expensive computation. The framework is implemented as an engine-agnostic C++ library with JSON-defined behaviors and was validated in both Unity and Unreal Engine, demonstrating behavioral variety and sub-linear scaling from 50 to 200 NPCs.
-
Assessment of Tumor Infiltrating Lymphocytes in Predicting Stereotactic Ablative Radiotherapy (SABR) Response in Unresectable Breast Cancer
Shely Kagan
Artificial Intelligence
Background: Patients with advanced breast cancer (BC) may be treated with stereotactic ablative radiotherapy (SABR) for tumor control. Variable treatment responses are a clinical challenge and there is a need to predict tumor radiosensitivity a priori. There is evidence showing that tumor infiltrating lymphocytes (TILs) are markers for chemotherapy response; however, this association has not yet been validated in breast radiation therapy. This pilot study investigates the computational analysis of TILs to predict SABR response in patients with inoperable BC.
Methods: Patients with inoperable breast cancer (n = 22) were included for analysis and classified into partial response (n = 12) and stable disease (n = 10) groups. Pre-treatment tumor biopsies (n = 104) were prepared, digitally imaged, and underwent computational analysis. Whole slide images (WSIs) were pre-processed, and then a pre-trained convolutional neural network model (CNN) was employed to identify the regions of interest. The TILs were annotated, and spatial graph features were extracted. The clinical and spatial features were collected and analyzed using machine learning (ML) classifiers, including K-nearest neighbor (KNN), support vector machines (SVMs), and Gaussian Naïve Bayes (GNB), to predict the SABR response. The models were evaluated using receiver operator characteristics (ROCs) and area under the curve (AUC) analysis.
Results: The KNN, SVM, and GNB models were implemented using clinical and graph features. Among the generated prediction models, the graph features showed higher predictive performances compared to the models containing clinical features alone. The highest-performing model, using computationally derived graph features, showed an AUC of 0.92, while the highest clinical model showed an AUC of 0.62 within unseen test sets.
Conclusions: Spatial TIL models demonstrate strong potential for predicting SABR response in inoperable breast cancer. TIL-based features show higher independent predictive performance than clinical features alone.
-
QUINOA: Quantum-Unified Intelligent Network Orchestration and Automation for 6G Heterogeneous Networks
Iqra Batool
Computer Systems and Networks
The exponential growth of Internet of Things (IoT) devices and the complexity of sixth-generation (6G) heterogeneous networks create unprecedented challenges in resource optimization and security. This paper introduces QUINOA (Quantum-Unified Intelligent Network Orchestration and Automation), a novel framework leveraging quantum computing for 6G-IoT network optimization. The architecture features four layers: Quantum Security Layer with post-quantum cryptography, Intelligence Layer with autonomous algorithm selection, Programmable Layer with specialized quantum modules (including the Quantum Approximate Optimization Algorithm (QAOA), Quadratic Unconstrained Binary Optimization (QUBO), and Variational Quantum Eigensolver (VQE)), and Interface Layer for 6G integration. The framework intelligently selects optimal quantum algorithms based on real-time problem characteristics. Simulation-based evaluation demonstrates 47% faster convergence in massive MIMO beamforming, 35% improvement in network slicing efficiency, and 52% reduction in interference versus classical methods. Hardware validation on IBM Eagle processors and D-Wave Advantage systems further confirms practical quantum advantages for structured problem instances within current NISQ feasibility limits. However, current quantum hardware limits practical advantages to small problem instances (≤20 qubits) with structured optimization landscapes. The framework provides comprehensive quantum security while maintaining practical deployability on Noisy Intermediate-Scale Quantum (NISQ) devices.
-
Stat-XAI: Reframing Explainability as Statistical Inference
Arsh Chowdhry
Artificial Intelligence
Explainable AI (XAI) methods often produce feature attributions through surrogate fitting, perturbation sampling, or heuristic approximations. Such approaches can be unstable and rarely provide inferential guarantees. We propose an alternative perspective in which explanations are treated as statistical summaries of a model's predictive function and are therefore grounded in formal hypothesis testing and effect-size estimation.
We introduce Stat-XAI, a model-agnostic framework that evaluates both main effects and pairwise interactions between input features and a model's held-out predictions. For each feature (and feature pair), Stat-XAI conducts an appropriate statistical test, applies multiple-testing correction, and reports standardized effect sizes (e.g., \(R^2\), \(\eta^2\), and Cramér's \(V\)), yielding compact, uncertainty-aware rankings of feature importance.
Across 27 synthetic datasets spanning six dataset families with known ground-truth structure (linear effects, nonlinear effects, logical rules, interaction-only regimes, correlated distractors, and mixed data types), Stat-XAI produces more selective and stable attributions than common baseline methods. Crucially, it reliably recovers interaction-only structure (e.g., XOR), where marginal attribution methods fail. By reframing explanation as inferential analysis of \((X, \hat{Y})\), Stat-XAI provides a statistically principled pathway toward trustworthy model explanation in high-stakes decision-making settings.
-
Improving Interoperability in Digital Health using Imputation and Graph-Based Inference
Keaton Banik
Artificial Intelligence
Digital health research increasingly depends on data from smartphones, wearables, and apps, but these data are often fragmented, incomplete, and difficult to combine across systems. This work addresses those challenges through a human-centred AI framework using data imputation, graph-based inference, and interoperable predictive modeling. Using synthetic yet realistic health survey datasets inspired by citizen-science research, it examines how incomplete data can be made more useful for machine learning.
The first study compares Multiple Imputation by Chained Equations (MICE) with a large language model (OpenAI o3) for missing-data imputation under Missing at Random and Missing Completely at Random conditions. Results show that LLM-based imputation performs similarly to MICE for categorical survey data, suggesting it can be a practical alternative when traditional models are hard to specify.
The second study explores graph neural networks for semi-supervised mental health inference in partially labeled datasets, showing that graph design matters and that relational representations may help with sparse-label health data.
The third study combines imputation and GNN-based pseudo-labeling to expand training data across partially compatible studies, improving predictive accuracy and showing that interoperability can also mean analytic reuse of fragmented datasets.
-
Automating Usability Testing
Elyssa Chung
Artificial Intelligence
Background
With digital systems in healthcare, it may seem that user experience (UX) design cycles, workflows, and project management processes are thoroughly recorded. In practice, however, details of the human experiences and experimentation within those processes are often recorded outside of formal academia. Moreover, the market boom in Generative Artificial Intelligence (GenAI), together with AI's rapid incorporation into anything that could afford a perceived efficiency boost (an effect from which UX designers are not exempt), has left a gap in the academic UX literature surrounding AI. This study therefore aims to understand how involving AI in the usability heuristic analysis process affects UX designer confidence and decision-making/management during the design process.
Considering the intersectional and interdisciplinary nature of this study, the study will follow a mixed-methods approach through a sociotechnical lens and a co-design paradigm; the analysis will incorporate theories from knowledge translation, evidence-based medicine, distributed cognition theory, usability design heuristics, and nudge theory.
Expected Contributions
Ultimately, the aim is to further understand how AI in UX design can affect designers' self-confidence in their skills, their confidence in the AI design space, and their decision-making behaviours.
-
Steerable Autoencoders Underlying Remapping, Spatiotopy, and Visual Stability
Tim V. Nguyen
Computational Neuroscience
Whenever the eyes move, the locations of attended targets must be updated to keep track of them despite their shift on the retina, a process called remapping. Here we show that each attended target's identity can be connected to the locations of its features in early visual cortex through a steerable autoencoder model. The model is trained to appropriately shift the feature activity to the target's new location with each saccade. It takes an input pattern, here from early retinotopic cortex, and passes it through several layers of encoding to generate a high-level, low-dimensional representation that corresponds to the object areas of visual cortex. The model receives and integrates the upcoming saccadic movement information in this layer. This then returns through decoding layers to produce an updated output pattern, again in early visual cortex. Autoencoders have been used to model object-based attention, to predict the subsequent images when targets are moving, and here to predict upcoming target locations when the eyes move, based on distributed information about eye position (gain fields). This autoencoder process is part of the overall architecture of the visual system, acting to link target properties together, even as the target or the eyes move.
-
Parametric Integer Linear Programming via Presburger Arithmetic
Chirantan Mukherjee
Computer Algebra
A problem instance in parametric integer linear programming (PILP) requests the maximum (or minimum) value of an affine objective function defined over the integer points of a parametric polyhedron (that is, the intersection of a parametric polyhedron and an integer lattice). Of course, this maximum depends on the parameters, and computing it generally leads to a partition of the parameter space so that, above each part of the partition, the maximum is given by a quasi-polynomial. Implemented algorithms for PILP rely either on an adaptation of the simplex method due to Paul Feautrier, or an adaptation of Barvinok's algorithm proposed by Sven Verdoolaege. With each part of the partition, these algorithms provide a sample point (that is, a tuple of values of the variables) realizing the maximum of the objective function. In this talk, we present a novel algorithm to solve PILP instances which enhances the previous approaches as follows. For each part of the partition, we compute the locus of all points realizing the maximum of the objective function. Our experimental results suggest that our algorithm brings improvements in terms of output size and running time.
-
Broken Lines and Type A Cluster Algebras
Ba Uy Nguyen
Computer Algebra
Theta functions, introduced by Gross, Hacking, Keel, and Kontsevich via the framework of scattering diagrams and broken lines, offer a canonical approach to constructing bases of cluster algebras. While powerful in generality, the combinatorial structure underlying these theta functions can be made fully explicit in the type A setting. In this talk, we focus on the broken lines model for cluster variables of type A cluster algebras. We describe how broken lines in the associated scattering diagram encode the Laurent expansion of each cluster variable, and we develop the combinatorial rules governing their bending and monomial contributions. The result is a direct combinatorial formula for cluster variables expressed entirely in terms of broken lines.
-
Fast Parallel Computation of Power Series Solutions to Linear ODEs
Greg Solis-Reyes
Computer Algebra
Linear ordinary differential equations (ODEs) arise in many disciplines including physics, chemistry, finance, and engineering. Certain ODEs can be solved using power series, which are infinite sums of terms. Computing with power series can be done in several ways, principally truncated, lazy, or relaxed. These methods come with relative trade-offs between efficiency and flexibility. There are known truncated algorithms for computing the power series solutions to linear ODEs. These approaches achieve asymptotically fast performance beyond a certain problem size, but they lack flexibility and require computing from scratch if an update is required. Lazy methods allow power series terms to be updated iteratively, providing more flexibility at a cost of reduced raw performance. Relaxed methods attempt to bridge lazy and truncated approaches. For this research, a novel parallel, relaxed algorithm has been developed for computing the power series solutions to linear ODEs. The algorithm, while not asymptotically fast, divides work using a tiling strategy which allows computations to be done in parallel, providing improved performance over alternative methods. The algorithm has been implemented in C/C++. Timed experimental results show that the new parallel algorithm achieves significant speedup relative to its serial (non-parallel) version as well as relative to the default implementation in the Maple computer algebra system.
-
Argument Mining in Students' Persuasive Essays
Samin Fatehi Raviz
Artificial Intelligence
Argument mining aims to automatically identify argumentative components, such as claims, premises, and major claims, as well as the relationships between them in natural language text. In the context of students' persuasive essays, this task can help analyze writing structure and support educational feedback. However, existing approaches often rely on token-level or span-level classification, which either ignore context or assume that argument spans are already known. This presentation introduces a graph-based approach for argument mining in students' persuasive essays. The method first segments essays into meaningful chunks that better capture argumentative units. These chunks can then be represented as nodes in a graph, where edges model potential relationships between components. The talk will provide an overview of the dataset, the proposed methodology, and the motivation behind using chunking and graph representations for this task.
-
Do Memorable Images Make Safer AI? Exploring the Link Between Human Memory and Adversarial Robustness
Ehsan Ur Rahman Mohammed
Computer Vision and Image Analysis
Artificial intelligence (AI) systems are increasingly used in everyday and high-stakes settings, from medical imaging to autonomous driving. However, these models remain vulnerable to adversarial attacks: small, often imperceptible changes to an image that can cause AI systems to make incorrect predictions. Interestingly, such perturbations typically do not affect human perception to the same extent, raising concerns about the reliability and safety of current AI systems.
Our research investigates a human-centered factor that may improve AI robustness: image memorability, or how likely a visual scene is to be remembered by people. We ask whether AI models classify highly memorable images more accurately, more confidently, and with greater resistance to adversarial attacks.
To explore this, we evaluate image classification models, including convolutional neural networks and vision transformers, on high- and low-memorability versions of the same images. Our findings suggest that models tend to make more confident and stable predictions on highly memorable images. These images also appear farther from model decision boundaries and remain better aligned with the data manifold, indicating greater prediction certainty.
-
Towards Reliable and Interpretable Predictions in Stroke Detection Using Computed Tomography Scans
Aarat Satsangi
Artificial Intelligence in Healthcare & Computer Vision
Rapid and accurate diagnosis of stroke, ideally within 6 hours of stroke onset, is crucial for treatments like thrombolytic therapy or surgical interventions to be effective. While Computed Tomography (CT) scans are the gold standard for stroke detection, they require expert interpretation, which can vary among radiologists. Machine learning algorithms have shown promise in stroke detection but are not widely adopted in clinical settings due to concerns about their reliability, interpretability, and trustworthiness. As a first step towards robust, reliable, and interpretable decisions, we use 10-fold cross-validation to validate the reliability of three models for stroke classification from CT scans: Residual Network, Shifted Window Transformer, and Convolutional Vision Transformer. We then introduce two novel attention-based feature fusion techniques for creating an ensemble of these models, accompanied by saliency maps generated by Score-weighted Class Activation Mapping (CAM) and a novel inter-model prediction agreement score, based on Jensen-Shannon Divergence, that quantifies agreement between CAMs from different model backbones. This approach not only improves diagnostic accuracy and throughput but also ensures that predictions are transparent and clinically meaningful, essential qualities of a Clinical Decision Support System.
-
A New Derivation of the Nemhauser Result
Christopher F. S. Maligec
Theoretical Computer Science
We present a new derivation, based on the proof of Das and Kempe, of the classical Nemhauser guarantee for greedy algorithms applied to submodular set functions. These functions appear in a wide variety of applications, among which are influence maximization, feature selection, sensor placement, document summarization, computer vision, information theory, economics, sparse optimization, active learning, and data summarization.
-
Structure-Aware Multimodal Learning and Scaling-Aware Graph-Language Alignment for Molecular Artificial Intelligence
Zihao Jing
Artificial Intelligence
Molecular representation learning and understanding are fundamental to artificial intelligence for drug discovery. Existing methods suffer from three limitations: unstable multimodal learning under noisy conformers and weak fusion strategies, structural information loss in molecule-large language model (LLM) systems caused by compression, and modality-specific architectures that inadequately ground language reasoning in all-atom geometric structure. This thesis addresses these challenges through three frameworks for representation learning and structure-grounded reasoning. For representation learning, MuMo integrates sequence, 2D, and 3D inputs through structured fusion and progressive injection, enabling stable multimodal learning while preserving modality-specific information. For molecular graph-language alignment, EDT-Former introduces entropy-guided patching and a dynamic query transformer to produce adaptive structure-aware tokens and align molecular graphs with frozen LLMs through lightweight connector training. For generalized molecular reasoning, Cuttlefish introduces a scaling-aware all-atom adapter that generates complexity-adaptive structural patches and injects geometry-grounded modality tokens into LLMs to reduce structural hallucination. Across MoleculeNet, MoleculeQA, TDC, Mol-Instructions, and diverse all-atom reasoning benchmarks, MuMo achieves best performance on 22 of 29 tasks, while EDT-Former and Cuttlefish attain state-of-the-art or superior results on molecular structure-grounded reasoning tasks. These results establish structure-aware multimodal fusion, adaptive graph-language alignment, and scaling-aware geometric grounding as an effective foundation for molecular artificial intelligence.
-
Graph-Based Modeling for Maize Yield Prediction
Amir Morshedian
Bioinformatics
This study introduces a deep learning framework that integrates genomic and environmental data to model genotype-environment interactions for maize yield prediction. The results suggest that graph-based modeling with attention mechanisms can capture complex relationships between genotypes and environments, supporting accurate predictions.
-
SkyLink: A Secure, Low-Latency Communication Protocol for UAV Systems
Chongju Mai
Computer Systems and Networks
SkyLink is a purpose-built communication protocol designed for uncrewed aerial vehicles (UAVs) and ground control systems, addressing the growing need for secure, efficient, and resilient data exchange in constrained environments. Unlike traditional protocols, SkyLink enforces encryption by default, ensuring confidentiality and integrity for both control and telemetry data while minimizing overhead for real-time operations. The protocol introduces a lightweight packet structure with optional authentication tags, enabling flexible trade-offs between security and performance depending on mission requirements. A key innovation is its piggyback acknowledgment mechanism, which reduces communication latency by embedding acknowledgments within outgoing data packets rather than relying on separate transmissions. SkyLink is designed with forward compatibility in mind, supporting future swarm operations while currently focusing on robust unicast communication. It also incorporates time-based anti-replay protection and post-quantum-ready cryptographic design choices, ensuring long-term resilience against emerging security threats. By combining strong security guarantees, efficient bandwidth usage, and extensible architecture, SkyLink provides a modern foundation for next-generation UAV communication systems, enabling safer and more reliable autonomous operations across a wide range of applications.
-
Analysis of Risk in Real-World Driving Scenarios: Implications for Advanced Driving Assistance Systems
Moumita Bhowmik
Computer Vision and Image Analysis
This thesis presents a dynamic risk assessment algorithm for Advanced Driver Assistance Systems (ADAS) using real urban driving data. The goal is to estimate how risky a current traffic situation is by combining information about the ego vehicle, surrounding objects, and their interactions. To achieve this, a YOLOv8m-DeepSORT pipeline detects and tracks surrounding road users and combines that with CAN bus and IMU signals to obtain ego-vehicle and object states. From these, Time-to-Collision (TTC), Time-to-Materialization (TTM), geometric intersection points, and a set of critical scenarios (crossing traffic, head-on collision, intersection turning, parked/occluded vehicles, not following traffic signs, construction speed-limit violations, and emergency braking) are derived. These inputs feed a Severity-Exposure-Controllability (SEC) model that outputs a continuous risk score R ∈ [0, 10] and LOW, MEDIUM, or HIGH risk classes. The results show realistic scenarios and risk distributions, and a qualitative validation confirms that the proposed SEC-based risk assessment algorithm can provide an interpretable, runtime measure of driving risk that is consistent with established safety concepts and suitable as an input for future ADAS decision-making.
-
Symbolic and Numeric Computation of Symmetries for a Class of Schrödinger Equations
Siyuan Deng
Computer Algebra
An important and challenging computational problem is to identify and include the missing compatibility (integrability) conditions for general systems of partial differential equations. The inclusion of such missing conditions is carried out by applying differential-elimination algorithms. Differential equations arising during modelling generally contain both exactly known coefficients and coefficients known only approximately from data. We focus on our recent work on approximate differential-elimination methods and, in particular, their application to the determination of approximate symmetries. We illustrate this with applications to a class of Schrödinger equations.
-
Joint Noise and Motion Correction in Myocardial CT Perfusion
Mahmud Hasan
Computer Vision and Image Analysis
Computed Tomography (CT) is a widely used imaging modality that employs X-rays and computational reconstruction to visualize internal anatomy. Although higher radiation doses produce higher-quality images, they also increase long-term cancer risk, motivating the use of low-dose protocols. However, low-dose CT data inherently suffer from elevated Poisson-Gaussian noise, necessitating effective denoising strategies. In myocardial CT perfusion (CTP) imaging, this challenge is compounded by residual cardiac motion, which misaligns consecutive time points and impairs accurate estimation of perfusion maps for diagnosing coronary artery disease. Traditional approaches typically treat these two problems, noise and motion, separately, either denoising the reconstructed images first or applying registration first. Such serial pipelines often degrade clinically significant features; e.g., denoising may destroy structural details essential for registration, while motion correction can distort subtle intensity cues needed for noise modelling. To overcome these limitations, we propose a unified deep learning framework that performs noise suppression and motion correction jointly for low-dose myocardial CTP. The method integrates two complementary components through a parallel ensemble strategy: (i) a modified Fast and Flexible Denoising Network (FFDNet) that incorporates noise-level maps to mitigate blended noise effectively, and (ii) a CNN-based registration model, extended with Time Enhancement Curve (TEC) correction and 4D physiological consistency constraints to estimate temporally coherent and anatomically plausible motion fields. By combining their outputs without iterative dependencies, the proposed framework produces motion-corrected and denoised CTP sequences in a single unified processing step, thereby better preserving myocardial structure and perfusion dynamics than conventional serial pipelines.
The model has been evaluated using both reference-based (MSE, PSNR, SSIM, PCC, Noise Variance, TRE) and no-reference (NIQE, FID, KID, AUC) image quality metrics, supplemented by expert human assessment. Results demonstrate that jointly learning noise characteristics and motion patterns enables restoration of low-dose CTP images while minimizing feature corruption, thereby advancing the clinical utility of low-dose myocardial CTP imaging.
-
Integer Hulls, Z-Polyhedra and Presburger Arithmetic in Action
Yuzhuo Lei
Computer Algebra
When solving systems of polynomial equations and inequalities, the task of computing their solutions with integer coordinates is a much harder problem than that of computing their real solutions or that of computing all their solutions. In fact, in the presence of non-linear constraints, this task may simply become an undecidable problem. However, studying the integer solutions of linear systems of equations and inequalities is of practical importance in various areas of scientific computing. Two such areas are combinatorial optimization (in particular, integer linear programming) and compiler optimization (in particular, the analysis, transformation, and scheduling of nested loops in computer programs), where a variety of algorithms solve questions related to the points with integer coordinates in a given polyhedron. Another area is at the crossroads of computer algebra and polyhedral geometry, with topics such as toric ideals and Hilbert bases, as well as the manipulation of Laurent series. There are different problems regarding the integer points of a polyhedral set, ranging from whether or not a given rational polyhedron has integer points to describing all such points. Answers to the latter can take various forms, depending on the targeted application. For plotting purposes, one may want to enumerate all the integer points of a 2D or 3D polytope, whereas, in the context of combinatorial optimization or compiler optimization, more concise descriptions are sufficient and more effective. For a rational convex polyhedron \(P\), defined either by the set of its facets or that of its vertices, one such description is the integer hull \(P_I\) of \(P\), that is, the convex hull of \(P \cap \mathbb{Z}^d\). The set \(P_I\) is itself polyhedral and can be described either by its facets or by its vertices.
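For plotting-scale instances, the integer points of a 2D polytope (whose convex hull is the integer hull \(P_I\)) can be enumerated by brute force over a bounding box; a minimal sketch, with an example triangle chosen for illustration:

```python
from itertools import product

def integer_points(A, b, box):
    """Enumerate the integer points of the rational polyhedron {x : A x <= b}
    lying inside a given bounding box; the integer hull P_I is the convex hull
    of exactly these points. Brute force, for small 2D examples only."""
    (xlo, xhi), (ylo, yhi) = box
    pts = []
    for x, y in product(range(xlo, xhi + 1), range(ylo, yhi + 1)):
        if all(a1 * x + a2 * y <= bb for (a1, a2), bb in zip(A, b)):
            pts.append((x, y))
    return pts

# Triangle x >= 0, y >= 0, x + y <= 7/2 (written 2x + 2y <= 7 with integer data)
A = [(-1, 0), (0, -1), (2, 2)]
b = [0, 0, 7]
pts = integer_points(A, b, box=((0, 4), (0, 4)))
assert (3, 0) in pts and (2, 1) in pts and (2, 2) not in pts
```

Note that the vertex \((7/2, 0)\) of the rational triangle is not a vertex of its integer hull, whose rightmost vertex is \((3, 0)\): this is precisely the gap between \(P\) and \(P_I\) that the concise descriptions mentioned above must capture.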
-
Navigating the Generative Filter Bubble: Understanding and Mitigating the Risks in Human-AI Interaction
Wenqing Zhang
Human-Computer Interaction
Large language models (LLMs) have rapidly become central to how people seek information, form opinions, and make decisions. This presentation introduces the generative filter bubble, a novel phenomenon in which LLMs selectively produce information through their generative mechanism, interact with human cognitive tendencies such as confirmation bias, and together create an interaction loop that narrows users' exposure to diverse perspectives. Unlike algorithmic filter bubbles driven by recommendation systems, generative filtering is embedded in the model's architecture, training, and alignment process, manifesting as sycophancy, cultural bias, and value preferences.
Building on this theoretical framework, I present two ongoing empirical studies. The first proposes an evaluation framework for assessing LLM robustness to societal misinformation in multi-turn interactions, examining whether repeated adversarial pressure across different simulated user types causes models to shift from rejection toward implicit agreement. The second investigates a two-part question: does unguided LLM interaction reinforce users' existing beliefs, and can structured conversation and evaluation strategies facilitate belief revision and critical thinking to mitigate filter bubble effects? Together, these studies form an evaluative framework for understanding and addressing epistemic risks in human-AI interaction, with implications for the design of responsible conversational AI systems.
-
Finding Inconsistent AI Preferences in People Using Explainable AI
Anemily Machina
Artificial Intelligence
As part of my thesis I asked two questions: (1) do people really prefer the results of large generative AIs like ChatGPT, or do smaller, less popular models perform as well when evaluated by humans? and (2) does the addition of concepts to AI explanations improve on token attributions alone? To answer these questions, we prepared a survey, which had over 100 respondents, in which we showed participants AI classification results with explanations. We found that (1) people do not prefer larger generative models, even if they say they do, and (2) while concept explanations had similar utility to token explanations when compared to the baseline, the combination of both had only a marginal effect when compared to either explanation type alone. This presentation will go over the results of the survey.
-
Parallel Computations of Transversal Hypergraphs
Adam Gale
Computer Algebra
Hypergraph theory generalizes graph theory by allowing edges to be arbitrary non-empty subsets of a vertex set. For a hypergraph \(H\) with vertex set \(V\), a subset \(S\) of \(V\) is a transversal if it intersects every edge of \(H\), and the transversal hypergraph \(Tr(H)\) consists of all inclusion-minimal such subsets. Computing \(Tr(H)\) has applications across fields including algebra, artificial intelligence, chemistry, computer vision, and data mining. The classical approach is Berge's divide-and-conquer algorithm, which has been improved in terms of arithmetic efficiency, cache performance, and parallelism, with evidence suggesting quasi-linear average runtime in the size of \(Tr(H)\). Since \(Tr(H)\) can be exponential in size, no algorithm can run in polynomial time in general, and whether it can be computed in output-polynomial time remains an open problem. An alternative line of research, initiated by Thomas Eiter and Georg Gottlob, studies completion-based algorithms for this task. This work surveys these approaches and presents parallel, optimized C implementations of both Berge's algorithm and the Eiter-Gottlob completion algorithm.
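A minimal sequential (non-parallel) Python sketch of the Berge-style approach, which maintains the minimal transversals of the edges processed so far, may make the idea concrete; it is illustrative only and far from the optimized C implementations discussed in the talk:

```python
def minimal_transversals(edges):
    """Sequential sketch of Berge's algorithm: maintain the minimal transversals
    of the edges seen so far, extend each by the vertices of every new edge it
    misses, then discard non-minimal sets. Exponential in the worst case, since
    Tr(H) itself can be exponentially large."""
    partial = [frozenset()]
    for edge in edges:
        # a partial transversal that already hits the edge is kept as-is;
        # otherwise it is extended by each vertex of the edge
        extended = {t if t & edge else t | {v} for t in partial for v in edge}
        # keep only inclusion-minimal sets
        partial = [t for t in extended if not any(s < t for s in extended)]
    return set(partial)

# Tr(H) for H with edges {1,2} and {2,3}: minimal transversals {2} and {1,3}
H = [frozenset({1, 2}), frozenset({2, 3})]
assert minimal_transversals(H) == {frozenset({2}), frozenset({1, 3})}
```

The minimalization step after each edge is where cache- and parallelism-oriented optimizations of Berge's algorithm concentrate their effort, since it dominates the running time.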
-
Phraze: Collaborative AI Annotation Platform
Het Niteshkumar Patel
Artificial Intelligence
As AI becomes central to how work gets done, more of the actual thinking now happens inside conversations with language models. These interactions capture ideas, reasoning, and decisions, but they are often temporary and fragmented, making it difficult to preserve context or build on what was explored before.
Phraze addresses this by turning conversations into something more structured and lasting. Instead of treating chats as isolated interactions, it allows both people and agents to collaboratively annotate, organize, and connect ideas across conversations. Over time, this creates a shared, evolving knowledge base where insights build on each other rather than disappear.
By combining collaborative annotation with contributions from multiple agents, Phraze helps conversations move beyond simple exchanges and become reusable, contextual artifacts. This not only improves productivity but also creates new opportunities to better understand reasoning, track decisions, and design more transparent and responsible workflows.
-
Symmetrical and Uncertainty-Aware Risk Signals for Sepsis Treatment Using Offline Reinforcement Learning
Mohammad Ahsan Siddiqui
Artificial Intelligence
Applying reinforcement learning to clinical decision-making is promising but fraught with risk: models trained on historical records can produce confident recommendations backed by little real evidence. This presentation addresses two challenges in making such systems more reliable for sepsis management in the ICU.
First, it extends the Dead-end Discovery framework to detect not only states from which no treatment can prevent death, but their theoretical mirror image — guaranteed-recovery states. While the framework successfully identifies these states in controlled environments, none are detected in real ICU data, suggesting recovery trajectories in sepsis are too heterogeneous for value-function thresholding to capture.
Second, it introduces two complementary measures of recommendation reliability: epistemic uncertainty, estimated via bootstrapped Deep Q-Network ensembles, and data support, measured by K-nearest neighbours in the learned latent space. Both signals are independently associated with patient mortality — despite neither being designed as a survival predictor. Crucially, combining them reveals a striking gradient: state-action pairs flagged as high-uncertainty under both measures carry mortality rates more than seven times those of low-uncertainty pairs.
-
Reaching for Emotion Regulation with Reachy Mini
Sara Lahourpour
AI in Healthcare
Cognitive neuroscience models often lack ecological validity due to constrained laboratory environments and thus fail to generalize to real-world clinical deployments. To bridge this gap, we propose an extensible bedside companion utilizing the Reachy Mini robot. Our ultimate vision is an embodied system that holistically assesses patient physiological and cognitive-affective states, autonomously alerts medical personnel to anomalies, and improves clinical outcomes while serving as an 'ecological laboratory' for naturalistic data collection. As a foundational step toward this vision, this work establishes a core affective regulation loop. We continuously monitor a user's emotional state in unconstrained settings using Electroencephalography (EEG), video, and speech. Upon detecting negative emotional shifts, a Large Language Model (LLM) triggers targeted cognitive interventions: auditory stimuli (music), semantic information processing (news), a collaborative story-building game, and a targeted cognitive assessment, the Montreal Cognitive Assessment-Blind. By continuously measuring responses during these tasks, the system closes the feedback loop to evaluate real-time emotion modulation. Ultimately, this demonstrates the viability of embodied AI for emotion regulation and real-world data capture.
-
Investigating Epsilon Selection Strategies for FGSM and PGD Adversarial Training
Muhammad Haris Rafique Zakar
Artificial Intelligence
As the development and availability of novel machine learning applications continues to grow, the frequency, scale, and sophistication of attacks on the models behind these technologies also continues to evolve. Among these threats are adversarial examples, which are small, carefully constructed perturbations of the input data that cause models to make incorrect predictions while remaining imperceptible to humans. Adversarial training, where the model is intentionally trained on perturbed data, can improve the robustness of machine learning models. However, the effectiveness of this approach depends critically on the perturbation budget epsilon, which is often chosen heuristically despite the clear robustness-utility trade-off and the high training cost per candidate. In this work, we formulate epsilon selection as a budgeted black-box search problem, where strategies are limited to a fixed compute budget measured in oracle calls. We evaluate four epsilon-selection strategies: Grid Search, Binary Search, Binary Search Plus Refinement, and Coarse-to-Fine. All strategies share a common pipeline that involves a simple CNN classifier, FGSM-based adversarial training, and PGD-based robustness evaluation. Across both MNIST and CIFAR-10, we observe that under weak threat scenarios, the proposed strategies can yield feasible values of epsilon. Furthermore, we show that FGSM-only evaluation can significantly overestimate a model's robustness relative to PGD. Strong threat scenarios render all search strategies infeasible, due to the lack of a signal that meaningfully distinguishes candidate values. Finally, on CIFAR-10, we observe the importance of an appropriate epsilon search domain and an inverse relationship between search domain size and compute budget requirement.
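A budgeted binary search over epsilon can be sketched as follows; the `oracle` interface is an assumption standing in for one adversarial-training-plus-PGD-evaluation run per call, and the linear toy oracle is for illustration only:

```python
def binary_search_epsilon(oracle, lo, hi, min_robust_acc, budget):
    """Budgeted binary search for the perturbation budget epsilon.
    `oracle(eps)` is assumed to adversarially train a model at eps and return
    its PGD-evaluated robust accuracy; each call consumes one unit of the
    compute budget. Returns the largest feasible eps found, or None if no
    candidate met the robustness floor within the budget."""
    best = None
    for _ in range(budget):
        mid = (lo + hi) / 2.0
        if oracle(mid) >= min_robust_acc:
            best = mid   # feasible: try a larger perturbation budget
            lo = mid
        else:
            hi = mid     # infeasible: shrink epsilon
    return best

# Toy oracle: robust accuracy decays linearly in epsilon (illustration only),
# so the largest feasible epsilon for a 0.7 floor is 0.3
toy_acc = lambda eps: 1.0 - eps
best = binary_search_epsilon(toy_acc, lo=0.0, hi=1.0, min_robust_acc=0.7, budget=10)
assert best is not None and toy_acc(best) >= 0.7
```

Assuming the feasible region is an interval, binary search narrows it geometrically, which is why it needs far fewer oracle calls than grid search at the same resolution; under the strong-threat scenarios described above, however, no monotone signal exists for it to exploit.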
-
Annotation of Rhetorical Moves in the Results Section of Biochemistry Articles
Heet Sheth
Artificial Intelligence and Natural Language Processing
Arguments in scientific discourse are how researchers justify novel claims and advance knowledge in their fields. Automatically identifying these arguments in scientific text — a task known as argumentation mining — is a challenging problem, in part because it requires understanding the underlying rhetorical structure of scientific writing. Rhetorical moves, defined as segments of text with a specific communicative function within a given genre, provide a principled foundation for this analysis. In this work, we focus on the rhetorical move framework developed by Kanoksilapatham (2005) for biochemistry research articles, which extends the foundational work of Swales (1990). We present a computational model that automatically identifies rhetorical moves in the results sections of biochemistry articles. Our model integrates modern linguistic feature tagging, frequency-based analysis, and similarity metrics to classify move boundaries and types. Linguistic features include part-of-speech tags, verb tense, and sentence-level syntactic patterns, which are compared against frequency profiles derived from a reference corpus of annotated biochemistry articles. Performance is evaluated against human-annotated move labels, and results demonstrate the viability of this computational approach. Ultimately, accurate rhetorical move identification lays the groundwork for downstream argumentation mining, enabling the automatic extraction and summarization of arguments in scientific text.
Schedule
| Activity | Room | Time |
|---|---|---|
| Registration & Meet-And-Greet | MC312 (Grad Lounge) | 8:00am - 9:00am |
| Presentations | TBD | 9:00am - 12:25pm |
| Break | - | 12:25pm - 12:30pm |
| Keynote | MC110 | 12:30pm - 1:30pm |
| Lunch | MC312 (Grad Lounge) | 1:30pm - 3:00pm |
| Closing Ceremony & Awards | MC316 | 2:15pm - 2:30pm |
Timetable
| Time slot | MC316: AI & Computer Algebra | MC320: Computational Neuroscience, Computer Vision & Computer Networks | MC105B: Games & AI | MC110: Natural Language Processing, Theoretical Computer Science & Human-Computer Interaction |
|---|---|---|---|---|
| 9:00AM | Muhammad Haris Rafique Zakar | Mahmud Hasan | Eric Buitron Lopez | Samin Fatehi Raviz |
| 9:25AM | Arsh Chowdhry | Aarat Satsangi | Khushal S. Mehta | Heet Sheth |
| 9:50AM | Ba Uy Nguyen | Tim V. Nguyen | Zihao Jing | Christopher F.S. Maligec |
| 10:15AM | Yuzhuo Lei | Ehsan Ur Rahman Mohammed | Amir Morshedian | - |
| 10:40AM | - | - | - | - |
| 10:45AM | Chirantan Mukherjee | Moumita Bhowmik | - | Wenqing Zhang |
| 11:10AM | Adam Gale | Chongju Mai | Keaton Banik | Anemily Machina |
| 11:35AM | Siyuan Deng | Iqra Batool | Mohammad Ahsan Siddiqui | Elyssa Chung |
| 12:00PM | Greg Solis-Reyes | Gad Mohamed Gad | Shely Kagan | Sara Lahourpour |
| Judges | Kaizhong Zhang, Sepideh Bahrami | Duff Jones, Umair Rehman | Mathias Babin, Gezheng Xu | Mostafa Milani, Roberto Solis-Oba |
