About UWORCS
UWORCS stands for University of Western Ontario Research in Computer Science. UWORCS is our annual departmental student conference and provides a great opportunity to develop your presentation skills in front of a friendly audience of your peers and faculty members.
Your participation is needed to make this event a success. Please email Kaitlyn Wade at kwade4@uwo.ca for more details.
Keynote Speaker

Lila Kari
Lila Kari is a Professor in the School of Computer Science at the University of Waterloo, having previously served in the Department of Computer Science at the University of Western Ontario from 1993 to 2017. She earned her M.Sc. from the University of Bucharest and a Ph.D. in 1991, for which she received the prestigious Rolf Nevanlinna Doctoral Thesis Award for the best doctoral thesis in mathematics in Finland.
With over 250 peer-reviewed publications, Professor Kari is a recognized expert in biomolecular computation, which involves using biological, chemical, and other natural systems to perform computations. In 2015, she received the Rozenberg Tulip Award for DNA Computer Scientist of the Year at the 21st International Conference on DNA Computing and Molecular Programming, Harvard University, USA, for her contributions in advancing formal theoretical models and exemplary leadership in the field. The award is presented annually by the International Society for Nanoscale Science, Computation, and Engineering (ISNSCE) and recognizes a prominent scientist who has shown continuous contributions, pioneering original work, and has influenced the development of the field.
She is Editor-in-Chief of Theoretical Computer Science-C and the Springer Natural Computing Book Series. Professor Kari has served as Steering Committee Chair for the DNA Computing and Molecular Programming conference series and on the Scientific Advisory Committee of the International Society for Nanoscale Science, Computation, and Engineering. She has also held numerous advisory roles, including serving on the editorial boards of Natural Computing and the Journal of Universal Computer Science.
Throughout her career, Professor Kari has received numerous awards, including the Florence Bucke Science Prize, the Faculty of Science Award for Excellence in Undergraduate Teaching, and the Canada Research Chair in Biocomputing. At the University of Waterloo, she was awarded the School of Computer Science Outstanding Performance Award in 2018 and served as University Research Chair from 2017 to 2023. Her current research focuses on comparative genomics, biodiversity informatics, and the theoretical aspects of bioinformation and biocomputation.
Talk Title
Life at the Extremes: Advancing Biodiversity Informatics and Extremophile Genomics Through Machine Learning
Biologists discover and classify thousands of new species each year, yet an estimated 95% of the more than 20 million multicellular species on Earth still lack official names or classifications. Our work is aligned with the long-term vision of the Planetary Biodiversity Mission—to document all multicellular life by 2045—and with the challenge of deciphering the "Rosetta Stone" of genomics by uncovering the mathematical principles behind genomic sequences.
In this presentation, I will explore how mathematical models of DNA sequences can be combined with both supervised machine learning and unsupervised deep learning techniques to achieve rapid, accurate, and scalable genome classification across all taxonomic levels. I will also share our recent discoveries that reveal how adaptations to extreme temperatures and pH conditions leave a distinct environmental imprint on the genomic signatures of microbial extremophiles. Notably, our application of unsupervised learning to unlabelled DNA data has uncovered several instances of extremophile microbes that, despite their deep evolutionary divergence, share similar genomic signatures associated with the harsh environments they inhabit.
Dr. Kari's keynote speech will take place on Monday, April 7th in Middlesex College. Please join us to welcome Dr. Kari to the 32nd Annual Conference - UWORCS 2025.
Meet and Greet
Join us for a Meet-and-Greet on April 7 from 8:00 AM to 9:00 AM in MC312 (Grad Lounge), just before the presentations begin! This is a great opportunity to network and explore which presentations might interest you. Swing by to grab your name tag and enjoy some light snacks to start the day off right!
Trivia
At 2:00 PM, shortly after lunch is set out, we will begin our trivia session.
Frequently Asked Questions
Here are the FAQs for UWORCS 2025.
-
Who can attend?
Computer science faculty, graduate students, and undergraduate students are invited to attend to listen to the presentations. Students from other faculties are also welcome.
-
Is there a registration fee?
No, there is no registration fee.
-
What sort of research can be presented?
The more you care about a subject the better your talk will probably be. Choose something that you've personally worked on during your grad/undergrad thesis studies, or even a course project with a research flavour. You can even present ongoing research. UWORCS is a great opportunity to practice explaining whatever work you are most proud of.
-
How should I prepare my talk?
Each presentation should be 20 minutes long with an additional 7 minutes for questions.
Presentations should clearly state the research problem and motivation, include relevant facts/data/analysis to support the research strategy, be accessible to an audience with an undergraduate-level background in computer science, and maintain a smooth flow of ideas.
-
Does this fulfill my yearly PhD seminar?
Yes. PhD students in their 3rd and 4th years can present their current research and have it count towards their yearly seminar requirement (692).
-
Will there be session prizes?
Yes, each session will have a cash prize for the best presentation. The number and size of the prizes will be determined once we have made the schedule.
-
How does session judging/chairing work?
Each presentation will be given a score out of 50 by each of three faculty members (for a total score out of 150), based mainly on presentation quality and clarity. The presentation with the highest total score in each session receives the best presentation award. Feedback from the judges will be forwarded to each presenter after the event. Session chairs announce each speaker and ensure that the talks stay on schedule.
-
Is lunch included?
Yes, lunch will be available to all participants and registered attendees. We have not yet set a menu, but we will ensure that an appropriate variety of lunch options is provided. In addition, coffee and snacks will be provided throughout the day.
Subjects
UWORCS 2025 features talks judged by faculty members and senior students, with prizes awarded to top presenters in a variety of categories. Topics of interest include, but are not limited to:
Team
The team for 2025

Kaitlyn Wade
Conference Chair
Regina Liang
Web Master & Organizing Committee
Joud El-Shawa
Organizing Committee
Presenters
-
Robust machine learning
Ruizhi Pu
Artificial Intelligence
We study data imbalance, especially deep imbalanced regression, by leveraging classification. We also investigate label noise in standard machine learning by exploiting training dynamics. Our aim is a machine learning model that is robust to both data imbalance and label noise.
-
An Efficient Network Slicing-based Resource Allocation Framework for 5G/6G Network
Iqra Batool
Computer Systems and Networks
Network slicing has emerged as a crucial technology in 5G/6G networks, enabling multiple virtual networks over shared physical infrastructure. This paper proposes a novel resource allocation framework addressing key challenges in dynamic slice management, QoS guarantees, and energy efficiency. We introduce a hybrid CNN-LSTM architecture with attention mechanisms that uniquely combines spatial pattern recognition with temporal traffic prediction to optimize resource distribution across heterogeneous service requirements (eMBB, URLLC, mMTC). Our novel dynamic weight adaptation technique effectively balances the inherent trade-offs between utilization, QoS, and energy consumption. Implementation demonstrates significant improvements: 91.2% resource utilization, 99.5% QoS satisfaction rate, and 35.2% energy reduction compared to traditional approaches. Extensive evaluations on a testbed simulating 100 base stations and 10,000 users validate our framework's scalability while maintaining 8ms inference latency and 99.95% reliability for URLLC applications. Our lightweight optimization algorithm reduces computational overhead by 40%, overcoming a major barrier to real-time deployment. These advancements address the limitations of current architectures, which achieve only 60-70% resource utilization and experience up to 25% QoS violations in operational networks.
-
Noise and Motion Correction from Low Dose Cardiac CT Image using Deep Neural Network
Mahmud Hasan
Computer Vision and Image Analysis
CT imaging requires a radiation dose and computer algorithms to capture internal body structures for clinical evaluation. The amount of radiation dose depends on the organ's sensitivity. The heart is highly radiosensitive, making it difficult to use a higher radiation dose for cardiac CT imaging. While higher radiation produces better image quality, it also increases the risk of cancer due to radiation absorption. To mitigate this risk, low-dose CT scans are used, but they introduce significant noise, making denoising essential before diagnostic use. Denoising can be performed in two domains: the reconstruction domain, which includes methods like Iterative Reconstruction, Filtered Back Projection, and Compressed Sensing, or the image domain, which offers conventional denoising techniques as well as greater flexibility with Deep Learning (DL) techniques. DL models, such as Denoising Convolutional Neural Networks (DnCNN), have shown superior performance over traditional denoising methods and have been successfully applied in CT angiography. However, they remain underutilized in cardiac CT perfusion. Cardiac CT perfusion imaging presents additional challenges due to residual motion from continuous heart activity, leading to misalignment across time points and making perfusion map generation for Coronary Artery Disease (CAD) difficult. Traditional non-rigid registration methods, which correct motion artifacts, require extensive computations, while DL-based registration offers a more efficient solution. However, motion correction can lead to information loss, which is also a concern with denoising. Since both processes are necessary to improve low-dose perfusion imaging, we propose a DL-based approach that simultaneously addresses denoising and motion compensation. By learning noise and motion patterns together, our model preserves critical image features that might otherwise be lost if denoising and registration were applied independently, ensuring better image quality for clinical applications.
-
Comprehensive LU and True Path
Aishat Olagunju
Computer Algebra
We study the PLU decomposition of rectangular matrices with polynomial coefficients, depending on parameters, possibly subject to algebraic constraints. Computing such a parametric decomposition typically requires splitting computations into cases. This yields various computational challenges, in particular the fact that more cases than necessary may be generated. We present heuristic methods which attempt to minimize the number of cases. In particular, these methods try to avoid splitting the computations when possible. We implemented these methods in Maple and evaluated their benefits on various test problems coming from the literature.
-
Partially Ordered Sets for Maple 2025
Adam Gale
Computer Algebra
Partially ordered sets (aka posets) are a particular type of directed graph well-suited to model relations of dependence. These posets are represented by a set of elements in combination with a binary relation, such that this relation is reflexive, antisymmetric, and transitive. For instance, the divisibility relation between integer numbers defines a poset. Another classical example is the set containing the faces (vertices, edges, facets, etc.) of a polyhedron. Many operations on posets (finding maximal chains and antichains, computing minimal elements, etc.) are of an algorithmic nature and have numerous applications (combinatorics of sets, optimization of computer programs, social sciences, polyhedral geometry, etc.). The Maple package that we developed provides over 30 functions for working with posets, enabling users to tackle a wide range of applications. Key functionalities include constructing and visualizing posets, finding transitive closures and reductions, computing maximal chains and antichains, and determining whether a poset is graded, ranked, or forms a lattice. By implementing these algorithms efficiently and taking advantage of condensed data structures, these new functions allow Maple users to quickly analyze and manipulate partially ordered sets in both research and practical applications. The package has been approved and will ship with the Maple 2025 release.
-
Errors and detection in homotopy continuation methods
Michelle Hatzel
Computer Algebra
Homotopy continuation is a numerical method used to find the roots of polynomials and the solutions to systems of polynomials. It is a fast method based on a predictor-corrector algorithm, but prone to errors that are not always detected by software. Using a result from our work on root-finding of univariate polynomials, we will discuss how we may assess the viability of solutions, prior to analysis, by using data generated by the predictor-corrector algorithm itself.
-
Computer Algebra and Multivalued Functions
Jacob Imre
Computer Algebra
Multivalued functions - functions which have more than one output over a given domain - are an esoteric subject. However, these objects are familiar to anyone with a background in mathematics; root functions, inverse trigonometric functions, and the logarithm are all multivalued. The Lambert W function, a major topic of this presentation, arises as the inverse of a much simpler function. Simply put, multivalued functions are already familiar territory for any mathematician. As such, how may we go about exploring them? The goal of this talk is to investigate uses of computer algebra systems to visualize and approximate these functions, specifically through the use of Maple. Topics will include images, Riemann surfaces, and series approximations of the Lambert W function. Emphasis will be placed on the behaviour of these approximations as they near unstable regions and how they may be improved.
-
Demystifying Large Language Models: From Attention to Adaptation
Zihao Jing
Artificial Intelligence
Large Language Models (LLMs) such as ChatGPT-o1, DeepSeek-R1, and their derivatives have revolutionized natural language processing through scalable learning, emergent abilities, and broad applicability. In this presentation, I will unpack the core principles behind these models, beginning with the self-attention mechanism that enables context-aware understanding of language. We will then delve into fine-tuning methods that allow LLMs to adapt to specialized domains with minimal data, including low-resource or scientific applications. Special attention will be given to domain adaptation strategies that maintain generalization while improving task-specific performance. Additionally, I will touch on reinforcement learning methods such as Reinforcement Learning from Human Feedback (RLHF), which align model behavior with human intent. Rather than focusing on a single experiment or dataset, this talk serves as a guided overview for researchers and students looking to understand the fast-evolving LLM landscape, offering practical insights into model customization and training dynamics. Whether you're aiming to build your own model or simply want to understand how ChatGPT and similar systems work under the hood, this talk will provide a foundational knowledge base to navigate and contribute to the field.
-
Towards Reliable and Interpretable Predictions in Stroke Detection Using Computed Tomography Scans
Aarat Satsangi
Computer Vision and Image Analysis
Rapid and accurate diagnosis of stroke, ideally within 6 hours of stroke onset, is crucial for treatments like thrombolytic therapy or surgical interventions to be effective. While Computed Tomography (CT) scans are the gold standard for stroke detection, they require expert interpretation, which can vary among radiologists. Machine learning algorithms have shown promise in stroke detection but are not widely adopted in clinical settings due to concerns about their reliability, interpretability, and trustworthiness. Taking a step towards reliable and interpretable decisions, we demonstrate in this paper the effectiveness of an ensemble of Residual Network, Shifted Window Transformer, and Convolutional Vision Transformer for classification of CT scans into stroke or no-stroke. Extending this to multi-class classification, we validate individual model reliability using 10-fold cross-validation and enhance model interpretability through saliency maps generated by Score-weighted Class Activation Mapping (Score-CAM). We also introduce Jensen-Shannon Divergence as a novel confidence metric that quantifies agreement between CAMs from different models. This approach not only improves diagnostic accuracy and throughput but also ensures that predictions are transparent and clinically meaningful, which are some of the essential parts of a Clinical Decision Support System.
-
Instruction Fine Tuning for Molecule Synthesis
Rishabh Agrawal
Artificial Intelligence
This work explores the application of instruction fine-tuning to enhance the performance of large language models (LLMs) in the domain of molecule synthesis. By leveraging domain-specific datasets and carefully designed synthesis instructions, we adapt general-purpose LLMs to understand and generate accurate, step-by-step synthesis pathways. Our approach improves the model's ability to follow chemical instructions, reason through multi-step reactions, and suggest viable synthetic routes. Experimental results demonstrate significant gains in accuracy and coherence over baseline models, showcasing the potential of instruction fine-tuning for advancing AI-driven molecular design and synthetic planning.
-
Transformer Embeddings for Advanced Glucose Testing Predictions in Hospitalized Patients
Joud El-Shawa
Artificial Intelligence
The over-utilization of laboratory tests presents a persistent challenge in healthcare, leading to increased costs, hospital-acquired anemia, heightened patient anxiety, and low-value care. While manual efforts to address these issues have been implemented in Canada, broader system-level changes are necessary for sustainable improvements. In this study, we introduced a novel approach to predicting laboratory test occurrences using Long Short-Term Memory networks, enhanced by Natural Language Processing embeddings, with a focus on glucose testing as a critical use case. Using the GEMINI dataset, a large-scale multi-hospital database in Canada, we developed a robust preprocessing pipeline to address missing data and class imbalance. By incorporating NLP embeddings, we integrated structured and unstructured data, overcoming common limitations, such as missing values, and enabling the use of additional data often excluded by traditional models. Our results demonstrate that systematically exploring diverse data modalities improves predictive accuracy. Including embeddings not only enhanced the model's flexibility but also provided a framework for capturing the temporal and contextual aspects of clinical data. These findings emphasise the transformative potential of embedding techniques and deep learning in predictive healthcare analytics. This work provides a strong foundation for integrating artificial intelligence into clinical workflows, helping to streamline decision-making, reduce unnecessary testing, and lower associated costs.
-
The Spatial Sync (SPASY) Protocol: Synchronizing Location-Based Metaverse Assets over Named Data Networking
Duff Jones
Computer Systems and Networks
The Metaverse has been proposed as the next generation of the Web, a generation built upon decentralized, shared, immersive experiences that merge the digital and physical worlds. Named Data Networking (NDN), an Information-Centric Networking (ICN) paradigm, has been proposed as a future approach to Internet architecture, one that will evolve the aging Internet Protocol Suite (TCP/IP). Multiple researchers have argued that ICN approaches, like NDN, are a better fit for the Metaverse than TCP/IP. Immersive user experiences are central to the Metaverse, so if NDN is to work as a Metaverse architecture, it must support synchronized user experiences. NDN does not have a transport layer, instead synchronizing datasets amongst multiple users/parties via a Sync protocol. There are currently no Sync protocols built specifically for the location-based assets of the Metaverse. We designed an NDN Sync protocol, Spatial Sync (SPASY), to address this gap. SPASY uses edge computing and geospatial indexing, attaching distributed datasets to specific locations. SPASY also uses an asset naming scheme that incorporates geohashes to allow users to easily discover new assets. Our simulations demonstrate that SPASY can meet the real-time demands of a Metaverse use case: a hypothetical augmented reality browser.
-
Quantifier Elimination Over the Integers
Chirantan Mukherjee
Computer Algebra
In this talk, we revisit, in a unified presentation, different techniques from polyhedral geometry that can support quantifier elimination over the integers. We propose a few improvements and report on a comparative implementation of these techniques.
-
Game Balance Through Procedural Content Generation
Mathias Babin
Artificial Intelligence
This preliminary work explores procedural content generation as a method for improving game balance. Traditionally, achieving balance requires designers to fine-tune parameters to ensure that interactions between game elements promote equitable gameplay. Rather than directly modifying existing elements, we propose generating new elements that influence these interactions, restoring equilibrium in otherwise unbalanced systems. As a case study, we utilize Pokémon's turn-based RPG combat system, which is defined by inherent type, move, and stat advantages. Our approach trains a reinforcement learning agent to generate new Pokémon that, when introduced to the pool, balance the win rates across all existing Pokémon.
-
Osculating Curves
Sepideh Bahrami
Computer Algebra
Given a complex analytic curve \( X \) in \( \mathbb{C}^2 \) and a point \( p \in X \), we aim to approximate \( X \) geometrically near \( p \). Similar to how a tangent line approximates a function's graph via its derivative, we consider higher-order approximations using a Taylor series expansion. Centering \( p \) at the origin, we express \( X \) locally as \( f(x) = \sum_{i=1}^{\infty} c_i x^i \). Alternatively, plane curves can be defined implicitly as varieties \( V(F(x,y)) \), where \( F(x,y) \) is a polynomial in \( \mathbb{C}[x,y] \). Given a degree \( d \), we seek the best polynomial approximation of \( X \) by finding a curve \( V(F(x,y)) \) of degree at most \( d \), called the \textbf{degree \( d \) osculating curve} of \( X \) at \( p \). This approach provides a natural extension of tangent approximations to higher-degree algebraic curves.
-
Symbolic-numeric algorithm for intersecting linear differential varieties
Siyuan Deng
Computer Algebra
An important and challenging computational problem is to identify and include the missing compatibility (integrability) conditions for general systems of partial differential equations. The inclusion of such missing conditions is executed by the application of differential-elimination algorithms. Differential equations arising during modeling generally contain both exactly known coefficients and coefficients known approximately from data. Very little research has been done on this case. This talk focuses on our recent work on approximate differential-elimination methods and how to apply both symbolic and numerical methods to DE systems with numerical stability.
-
DQL-PBFT Enhanced Hyperledger Fabric Blockchain for Efficient OTA Updates and DDoS Defense in CAVs
Sadia Yeasmin
Distributed Systems and Applications
The rise of Connected Autonomous Vehicles (CAVs) has revolutionized transportation, enabling seamless Over-the-Air (OTA) software updates. However, these updates are highly vulnerable to Distributed Denial-of-Service (DDoS) attacks, jeopardizing network stability and security. This research presents a Deep Q-Learning (DQL)-enhanced Practical Byzantine Fault Tolerance (PBFT) consensus framework within Hyperledger Fabric (HLF) blockchain to defend OTA updates against DDoS threats. The DQL agent dynamically optimizes PBFT parameters—including leader node selection, timeout adjustments, and node participation—while continuously updating firewall rules to detect and mitigate malicious traffic. By integrating adaptive learning and real-time decision-making, the proposed system enhances consensus efficiency and fault tolerance under high-traffic conditions. Simulations on Amazon Web Services (AWS) demonstrate that the DQL-PBFT framework significantly improves resilience against DDoS attacks, achieving lower latency and higher throughput, ensuring secure and scalable OTA updates in CAVs.
-
Impact of Non-Classical Receptive Fields on the Adversarial Robustness of Computer Vision Models
Ehsan Ur Rahman Mohammed
Computer Vision and Image Analysis
The adversarial robustness of artificial intelligence models remains a critical challenge for their deployment in real-world applications. Brain-inspired approaches have garnered considerable attention in recent years, given the superior adversarial robustness observed in human vision. In this study, we propose an enhanced version of the VOneNet model (Dapello et al., 2020), which is based on the classical model of the primary visual cortex V1 area. Our enhancements are guided by recent findings showing that V1 receptive fields extend beyond the classical receptive field, where input is directly received by the convolutional kernel, to include the non-classical receptive field (nCRF) (Spillmann et al., 2015). The nCRF plays a crucial role by either inhibiting or amplifying the output of the classical receptive field depending on the context around it. The proposed model is evaluated against standard convolutional architectures, including AlexNet, ResNet50, and the original VOneNet, using two benchmark datasets, CIFAR-10 and ImageNet100. Performance is assessed under the threat of adversarial attacks, specifically utilizing the projected gradient descent (Madry et al., 2017) and Carlini and Wagner (Carlini and Wagner, 2017) attack methods.
-
An Immunological Based Machine Learning Framework for the Prediction of Pathological Response in Locally Advanced Breast Cancer
Shely Kagan
Artificial Intelligence
-
Developing a Relay-based Autonomous Drone Delivery System
Muhammad Haris Rafique Zakar
Autonomous Drones
Drones are small Unmanned Aerial Vehicles (UAVs) with the potential to rapidly disrupt the commercial delivery industry. Their benefits are obvious: drones, through their autonomous capabilities and high degree of maneuverability, can significantly improve current delivery infrastructures by shortening delivery times, safely servicing remote regions, and offering a more energy-efficient mode of transport. However, current drone models have limited ranges that are not sufficient for practical delivery applications and cannot safely operate in urban areas that present unique challenges like dynamic obstacles. Simply increasing the battery size carries several fundamental disadvantages: larger batteries add weight to drones, reducing their payload capability; larger batteries increase the footprint of drones, reducing their maneuverability and limiting their operating areas; and larger batteries require more power to lift, compounding the problem. As a solution, we propose an alternative relay-based smart scheduling scheme for delivering packages over long distances using coordinated handoffs at stopover sites between a network of drones. This delivery strategy minimizes delivery time, offers a more balanced workload among drones, and enables deliveries that are outside the range of any single drone. The viability of our new approach is implemented and tested on a real-world prototype using actual drone models in various delivery situations and test beds. The results show that our prototype system can autonomously manage and complete several challenging delivery scenarios.
-
Predicting Vulnerability to Sleep Deprivation Using EEG-Based Graph-Theoretic Features
Daya Kumar
Computational Neuroscience
Sleep deprivation (SD) is known to impair cognitive performance, yet individuals exhibit considerable variability in their susceptibility to its effects. The ability to predict vulnerability to SD has significant implications for safety-critical domains such as healthcare, transportation, and defense. In this study, we present a machine learning framework that leverages EEG-derived functional connectivity features to classify individual vulnerability to SD. We utilize resting-state EEG data of 28 participants from an open-source EEG dataset. The EEG recordings were preprocessed and segmented into 4-second epochs, and a subset of eight electrodes from the parietal and occipital regions—areas implicated in SD-related neural dynamics—was retained for analysis. Functional connectivity was estimated using Phase Locking Value (PLV) between all channel pairs. Leveraging graph-theoretic measures, we computed degree centrality (DC) to quantify the topological importance of each node within the parieto-occipital network. Participants were labeled as either vulnerable or resilient to SD based on their psychomotor vigilance task (PVT) performance. A LightGBM classifier trained on DC features achieved a test accuracy of 80.20% and an area under the ROC curve (AUC) of 0.88. These findings demonstrate the feasibility of using EEG-based network metrics to predict SD vulnerability, offering a promising direction for the development of real-time, non-invasive cognitive monitoring tools.
-
An Exponential Lower Bound for Quantifier Elimination (QE) of Existentially Quantified Presburger Arithmetic Formulas
Christopher F. S. Maligec
Computer Algebra
Problems regarding compiler optimization (like array dependence analysis) can be described in terms of formulas in Presburger Arithmetic. The arithmetic is also used in proof-of-correctness systems for programs, such as the Stanford Pascal Verifier and Microsoft's 2005 Spec# system, in SAT/SMT solvers, and in proof assistants like Coq and Isabelle.
Presburger Arithmetic is the theory of the integers with equality, addition, and divisibility constraints, but without multiplication (except as repeated addition). It is a complete and decidable theory, meaning that every sentence in its language can be algorithmically determined to hold or not.
Quantifier Elimination (QE) is the elimination of bound variables from logical formulas, such as those of Presburger Arithmetic; an example is "there exists an x such that x = 5". "There exists" is the existential quantifier, and x is the bound variable in the formula x = 5. QE can thus be used to determine whether a system of equalities and inequalities describing a mathematical problem has a solution.
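As a small illustrative example (not taken from the talk), eliminating an existential quantifier over the integers can replace it with a quantifier-free divisibility constraint:

\[ \exists x \, (y = x + x) \;\Longleftrightarrow\; 2 \mid y \]

Both sides hold exactly when \( y \) is even; the right-hand side is quantifier-free, using the divisibility predicate that Presburger Arithmetic provides.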
Until the recent work of Haase et al. (2024), it was thought that the complexity lower bound of Presburger Arithmetic was doubly exponential in the length of the input formula. Haase et al. proved that it is, in fact, singly exponential. This presentation goes through the concepts involved in the proof and presents a novel shortened version of the central proposition.
Talks Schedule
Activity | Room | Time |
---|---|---|
Registration & Meet-And-Greet | MC312 (Grad Lounge) | 8:00am - 9:00am |
Presentations | See Time Table | 9:00am - 12:00pm |
Break | - | 12:00pm - 12:15pm |
Keynote | MC110 | 12:15pm - 1:15pm |
Closing Ceremony | MC110 | 1:15pm - 1:30pm |
Lunch | MC312 (Grad Lounge) | 1:30pm - 4:00pm |
Trivia | MC316 | 2:00pm - 2:30pm |
Timetable
Time slot | MC316 Networks & Distributed Systems, Computer Algebra I | MC320 Computer Algebra II | MC105B Artificial Intelligence | MC110 Artificial Intelligence (Health Applications) |
---|---|---|---|---|
9:00AM | Iqra Batool | Chirantan Mukherjee | Ruizhi Pu | Mahmud Hasan |
9:30AM | Muhammad Haris Rafique Zakar | Aishat Olagunju | Mathias Babin | Aarat Satsangi |
10:00AM | Jacob Imre | Christopher F. S. Maligec | Ehsan Ur Rahman Mohammed | - |
10:30AM | Siyuan Deng | Sepideh Bahrami | - | Joud El-Shawa |
11:00AM | Sadia Yeasmin | Adam Gale | Zihao Jing | Shely Kagan |
11:30AM | Duff Jones | Michelle Hatzel | Rishabh Agrawal | Daya Kumar |
Judges | Hong Chen, Marc Moreno Maza, Mostafa Milani | Roberto Solis-Oba, Yalda Mohsenzadeh | Hanan Lutfiyya, Zubair Fadlullah | Umair Rehman, Daniel Servos |
Register now
4th-year undergraduates and graduate students can present; anyone can attend. Register now!
Click to Register