UWORCS stands for University of Western Ontario Research in Computer Science. UWORCS is the annual internal departmental student conference intended to give students the opportunity to practice presenting to academic audiences.
This is a great opportunity to practice your presentation skills and exchange ideas with other students and faculty members. Each year, a guest speaker also gives a talk on a current topic.
Your participation is needed to make this event a success. Please email Andrew Bloch-Hansen at firstname.lastname@example.org for more details.
Robert H.C. Moir
Robert H.C. Moir, PhD², is a philosopher and mathematician with a long academic history. Dr. Moir holds degrees in physics, mathematics, and philosophy from McGill University and Western University.
At Western, he
- received a PhD in Philosophy in 2013
- received a PhD in Applied Mathematics in 2017
Based on the emerging concepts of a digital layer and digital land, this relatable "Earth-based" system uses GPS coordinates and UTC to index a data structure for managing globally unique digital assets. Earth64 will provide a secure, efficient, and interoperable (blockchain-independent) system for the management of unique items of value at scale, including NFTs, smart contracts, etc.
Blockchain Computing: Present and Future
Although the majority of the focus on blockchain has been directed toward its first major application, cryptocurrency, it has been more quietly changing the way that processes and systems operate in many areas, including supply chain management, finance, healthcare, voting systems, entertainment, IoT, and AI. The reason for this is that the basic function of blockchain technology is to provide provably authentic record-keeping, which has very broad applications.
Blockchain technology is also having an impact on computing, in large part because it makes possible the merging of payment settlement and software. We will take a look at the more limited form of computing permitted by the Bitcoin protocol, and at how this was expanded upon to create the Ethereum protocol, which allows arbitrary code to be stored on-chain as "smart contracts" that then run on a distributed virtual machine.
As impressive as these achievements are, they nonetheless have some significant inefficiencies, including expensive transactions whose cost and execution time are unpredictable, the large scale replication of computational work, and the need to replicate the entire history of the blockchain ledger, which for Bitcoin and Ethereum are now hundreds of GB in size. We conclude by taking a look at how blockchain technology is evolving through a consideration of the TODA protocol, a ledgerless blockchain, and how it can be applied toward forms of high performance blockchain computing.
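To make the contrast between script-style and general-purpose on-chain computing concrete, here is a toy stack-machine sketch loosely in the spirit of Bitcoin Script. This is purely illustrative: the opcode names and semantics are simplified stand-ins, not the real protocol's byte-level opcodes.

```python
def run_script(script, stack=None):
    """Evaluate a tiny postfix script on a stack (illustrative only)."""
    stack = list(stack or [])
    for op in script:
        if op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:
            # anything that is not a known opcode is pushed as data
            stack.append(op)
    return stack

# a script that checks whether 2 + 3 equals 5
result = run_script([2, 3, "OP_ADD", 5, "OP_EQUAL"])
```

Real Bitcoin Script is deliberately not Turing-complete (no loops), which is exactly the limitation Ethereum's virtual machine removes.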
Dr. Moir's keynote speech will take place on Tuesday, May 24th at 12:00PM.
Please join us to welcome Dr. Moir to the 30th Annual Conference - UWORCS 2022.
Frequently Asked Questions
Here are the FAQs for UWORCS 2022.
Who can attend?
Computer science faculty, graduate students, and undergraduate students are invited to attend.
Is there a registration fee?
No, there is no registration fee.
What sort of research can be presented?
The more you care about a subject the better your talk will probably be. Choose something that you've personally worked on during your grad/undergrad thesis studies, or even a course project with a research flavour. You can even present ongoing research. UWORCS is a great opportunity to practice explaining whatever work you are most proud of.
How should I prepare my talk?
Presentations will be online via Zoom. Each presentation should be 15 minutes long, with an additional 5 minutes for questions.
Does this fulfill my yearly PhD seminar?
Yes. PhD students in their 3rd and 4th years can present their current research and have it count towards their yearly seminar requirement (692).
Will there be session prizes?
Yes, each session will have a cash prize for the best presentation. The number and size of the prizes will be determined once we have made the schedule.
How does session judging/chairing work?
Each presentation will be given a score out of 50 by each of three faculty members (for a total score out of 150), based mainly on presentation quality and clarity. The highest score out of 150 for each session is awarded the best presentation award. Feedback from the judges will be forwarded to each presenter after the event is over. Session chairs announce each speaker and ensure that the talks stay on schedule.
UWORCS 2022 involves talks that are judged by faculty members and senior students, and prizes are awarded to top presenters in a variety of subject categories.
The 2022 organizing team
Andrew Bloch-Hansen, Conference Chair
Nianqi Chen, Webmaster
Here are the presenters and their topics.
We look forward to your participation!
Local Search for the Multiway Cut Problem
Theoretical Computer Science
In the multiway cut problem we are given a weighted undirected graph G = (V, E) and a set T ⊆ V of k terminals. The goal is to find a minimum weight set of edges E' ⊆ E such that removing E' from G disconnects all the terminals from each other. We present a simple local search approximation algorithm for the multiway cut problem with approximation ratio 2 - 2/k. We present an experimental evaluation of the performance of our local search algorithm and show that it greatly outperforms the isolation heuristic of Dahlhaus et al. and performs similarly to the much more complex algorithms of Calinescu et al., Sharma and Vondrak, and Buchbinder et al., which have the currently best known approximation ratios for this problem.
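To illustrate the local-search idea (though not the actual algorithm from the talk, whose moves and 2 - 2/k guarantee are more involved), here is a minimal sketch that repeatedly relabels a single non-terminal vertex to whichever terminal's side most reduces the cut weight:

```python
def cut_weight(edges, label):
    # total weight of edges whose endpoints are assigned different terminals
    return sum(w for u, v, w in edges if label[u] != label[v])

def local_search_multiway_cut(vertices, edges, terminals):
    """Toy single-vertex-relabel local search for multiway cut.
    edges: list of (u, v, weight). Terminals keep their own label."""
    label = {v: terminals[0] for v in vertices}
    for t in terminals:
        label[t] = t
    improved = True
    while improved:
        improved = False
        for v in vertices:
            if v in terminals:
                continue
            # try moving v to each terminal's side; keep the best
            best = min(terminals,
                       key=lambda t: cut_weight(edges, {**label, v: t}))
            if cut_weight(edges, {**label, v: best}) < cut_weight(edges, label):
                label[v] = best
                improved = True
    return label, cut_weight(edges, label)
```

On a path s - a - b - t with unit weights, cutting any single edge separating s from t is optimal, and the local search finds a cut of weight 1.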
Hierarchical Reinforcement Learning for Decision Support in Health Care
The concept of optimal decision-making is critical within many organizations. A large variety of these organizations are structured hierarchically but make decisions sequentially. Using data collected over time by these organizations, we have been able to successfully apply reinforcement learning (RL) to many sequential decision-making problems. In doing this, however, we are not fully taking advantage of the benefits of their natural hierarchical structures and the ways that different layers may impact each other. Thus, we are often not able to learn truly optimal decision-making policies.
Hierarchical reinforcement learning (HRL) is a powerful tool for solving extended problems with sparse rewards. HRL decomposes an RL problem into a hierarchy of subtasks such that higher-level 'parent tasks' invoke lower-level 'child tasks' as if they were primitive actions. This design is useful in helping us solve real-world decision-making problems as it accounts for the innate hierarchical layout of many organizations, whereas flat RL does not.
During recent years, the landscape of HRL research has grown significantly, resulting in an abundance of unique hierarchical algorithms. However, due to limitations in current research, existing HRL frameworks are still not suitable solutions for the vast majority of real-world decision-making problems. In our work, we aim to formalize a new offline HRL framework capable of building sequential decision-making support models using real-world datasets collected from stochastic behavioral policies.
A Hybrid Machine Learning Model for Efficient Classification of IT Support Tickets in The Presence of Class Overlap
Classifying customer support tickets is one of the building blocks of IT service management. Correct classification would lead to customer satisfaction and more time for the agents to focus on other tasks.
However, for large-scale IT corpora, the number of classes is huge. This results in a large number of shared words between different classes, a problem widely known as 'overlapping classes'. Misclassification due to overlapping regions is a challenging problem that has not received proper attention. In this paper, we propose a hybrid machine learning model based on a linear support vector machine and a set of N rules, where N is the number of overlapped classes.
The experimental results on four datasets show that our hybrid method provides major improvements in the F-score of the overlapped classes. Hence, we recommend that for text classification tasks with overlapping classes, a traditional SVM along with a set of handcrafted rules can provide interpretable and superior performance.
Hybrid Feature- and Similarity-Based Models for Prediction and Interpretation on Large-Scale Observational Data
Large-scale electronic health record (EHR) datasets often include simple informative features like patient age together with complex data like care history that are not easily represented as individual features. Such complex data have the potential to both improve the quality of risk assessment and to enable a better understanding of causal factors leading to those risks. For example, increased age may be associated with risk, but that relationship may not persist if we account for care history. We propose a hybrid feature- and similarity-based model for supervised learning that combines feature and kernel learning approaches to take advantage of rich but heterogeneous observational data sources to create interpretable models for prediction and for investigation of causal relationships.
Goal and Policy Based Code Generation and Deployment of Smart Contracts
Deep Sequence Modeling for Anomalous ISP Traffic Prediction
Internet traffic in the real world is susceptible to various external and internal factors which may abruptly change the normal traffic flow. Those unexpected changes are considered outliers in traffic. Deep sequence models have been used to predict complex IP traffic, but their comparative performance on anomalous traffic has not been studied extensively. In this paper, we investigated and evaluated the performance of different deep sequence models for anomalous traffic prediction. Several deep sequence models were implemented to predict real traffic with and without outliers, showing the significance of outlier detection in real-world traffic prediction. First, two outlier detection techniques, the Three-Sigma rule and Isolation Forest, were applied to identify the anomalies. Second, we adjusted those abnormal data points using the Backward Filling technique before training the model. Finally, the performance of different models was compared for abnormal and adjusted traffic. LSTM Encoder Decoder (LSTM En De) is the best prediction model in our experiment, reducing the deviation between actual and predicted traffic by more than 11% after adjusting the outliers. All other models, including Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), LSTM En De with Attention layer (LSTM En De Atn), and Gated Recurrent Unit (GRU), show better prediction after replacing the outliers, decreasing prediction error by more than 29%, 24%, 19%, and 10% respectively. Our experimental results indicate that the outliers in the data can significantly impact the quality of the prediction. Thus, outlier detection and mitigation assist the deep sequence model in learning the general trend and making better predictions.
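The Three-Sigma rule and Backward Filling steps described above can be sketched as follows. This is a minimal illustration of the two techniques on a 1-D series, not the authors' pipeline (their models and data differ; the threshold and filling behaviour here are the textbook versions):

```python
import numpy as np

def three_sigma_outliers(series):
    # flag points farther than 3 standard deviations from the mean
    mu, sigma = series.mean(), series.std()
    return np.abs(series - mu) > 3 * sigma

def backward_fill(series, mask):
    # replace each flagged point with the next unflagged value;
    # a flagged run at the very end would remain NaN
    out = series.astype(float).copy()
    nxt = np.nan
    for i in range(len(out) - 1, -1, -1):
        if mask[i]:
            out[i] = nxt
        else:
            nxt = out[i]
    return out
```

For example, a single large spike in otherwise flat traffic is flagged and replaced by the following observation before the series is handed to a sequence model.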
Deploying Edge Computing for Improved QoE in the AR Cloud
Computer Systems and Networks
The augmented reality (AR) Cloud, a significant part of what is often called the metaverse, has recently become a major area of interest for technology companies. The AR Cloud adds a layer of AR to the real world, essentially merging the augmented world with the real world. Over the past decade, cloud computing has significantly impacted the way that applications are delivered to users; however, the cloud cannot meet the latency and bandwidth requirements necessary for a user to have a good quality of experience (QoE) with the AR Cloud. The applications composing the AR Cloud require very low latency (<20 ms for an AR headset) and high bandwidth (possibly >5Gbps). Moving cloud services to the network edge has been proposed as a solution to this problem. Our work assesses how fog computing, multi-access edge computing (MEC), and 5G networks can be deployed to improve QoE for AR applications. We explore a variety of network topologies, with the ultimate goal being the development of a layered, or hierarchical, cloud.
Smart Cooperative Parking Environment
Distributed Systems and Applications
In mega cities, proper management of parking infrastructure is important to accommodate the increasing number of vehicles. A smart parking system that can increase the capacity of parking infrastructure is crucial to reduce traffic congestion, air pollution, and driving expenses. Such an increase in parking capacity can be attained by enabling autonomous interaction among public and private parking facilities. Essentially, parking facilities belong to competing stakeholders, which means that the management of such an environment is distributed by nature. Thus, a multi-agent architecture can be used to enable interaction in a multi-stakeholder environment in which agents take actions on behalf of their owners. In this work, the Smart Cooperative Parking Environment (SCOPE) architecture for smart cities is proposed. SCOPE is a lightweight, multi-agent, distributed system that employs the Cloud-Edge continuum to localize network traffic and hide the heterogeneity of parking infrastructure. In addition, SCOPE's architecture provides an integration framework that enables cooperative autonomous interaction among parking facilities. Using SCOPE's integration framework, both public and private parking facilities can participate in a trading competition to serve a driver's request for a parking spot. From the driver's perspective, SCOPE ensures the selection of the best available parking spot that matches the driver's preferences. Simulation results show that SCOPE significantly minimizes search time, traffic, cost, and air pollution, and provides better driver satisfaction.
Predicting and Modifying Memorability of Images
Computer Vision and Image Analysis
Every day, we are bombarded with many photographs of faces, whether on social media, television, or smartphones. From an evolutionary perspective, faces are intended to be remembered, mainly due to survival and personal relevance. However, not all of these faces have an equal opportunity to stick in our minds. It has been shown that memorability is an intrinsic feature of an image, yet it is largely unknown what attributes make an image more memorable. In this work, we first proposed new models for predicting the memorability of face and object images. Subsequently, we proposed a fast approach to modify and control the memorability of face images. In our proposed method, we first found a hyperplane in the latent space of StyleGAN to separate high- and low-memorability images. We then modified image memorability (while maintaining the identity and other facial features such as age, emotion, etc.) by moving in the positive or negative direction of this hyperplane's normal vector. We further analyzed how different layers of the StyleGAN augmented latent space contribute to face memorability. These analyses showed how each individual face attribute makes an image more or less memorable. Most importantly, we evaluated our proposed method for both real and synthesized face images. The proposed method successfully modifies and controls the memorability of real human faces as well as synthesized faces. Our proposed method can be employed in photograph editing applications for social media, learning aids, or advertisement purposes.
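The latent-space edit described above can be sketched numerically. This sketch uses a difference-of-means direction as a crude stand-in for the learned separating hyperplane (the actual work fits a classifier in StyleGAN's latent space); all function names here are illustrative:

```python
import numpy as np

def hyperplane_normal(high_latents, low_latents):
    # crude proxy for a learned separating hyperplane: the difference
    # of the class means, normalized to unit length
    n = high_latents.mean(axis=0) - low_latents.mean(axis=0)
    return n / np.linalg.norm(n)

def edit_memorability(w, normal, alpha):
    # move a latent code along the hyperplane normal;
    # alpha > 0 pushes toward the "more memorable" side, alpha < 0 away
    return w + alpha * (normal / np.linalg.norm(normal))
```

The edited latent code would then be fed back through the generator; since the step is a single vector addition, the edit is fast compared to optimization-based approaches.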
PROTECT: Appliances Operation Modes Identification Using States Clustering
Computer Systems and Networks
The increasing cost, energy demand, and environmental issues have led many researchers to find approaches for energy monitoring, and hence energy conservation. The emerging technologies of the Internet of Things (IoT) and Machine Learning (ML) deliver techniques that have the potential to efficiently conserve energy and improve the utilization of energy consumption. Smart Home Energy Management Systems (SHEMSs) have the potential to contribute to energy conservation through the application of Demand Response (DR) in the residential sector. In this paper, we propose aPpliances opeRation mOdes idenTification using statEs ClusTering (PROTECT), a SHEMS analytical component that utilizes the sensed residential disaggregated power consumption to support DR by providing consumers the opportunity to select lighter Appliance Operation Modes (AOMs). The states of the Single Usage Profile (SUP) of an appliance are extracted and reformed into features in terms of clusters of states. These features are then used to identify the AOM used in every occurrence using K-Nearest Neighbors (KNN). AOM identification is a basis for many potential smart DR applications within SHEMS and contributes up to 78% energy reduction for some appliances.
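The KNN identification step can be sketched as below. This is a generic nearest-neighbour vote over feature vectors, not PROTECT's actual feature construction (the cluster-of-states features and labels here are hypothetical placeholders):

```python
import numpy as np
from collections import Counter

def knn_mode(train_X, train_y, x, k=3):
    # classify a usage-profile feature vector by majority vote among
    # its k nearest labelled profiles (Euclidean distance)
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Given labelled profiles for, say, an "eco" and a "heavy" operation mode, a new single-usage profile is assigned the mode of the cluster its features fall closest to.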
Real-Time Kinematic Assisted Smart Parking Management
Moinul Islam Sayed
Distributed Systems and Applications
The availability of wireless technology has been a key driver in the design and development of smart cities, which have been a popular research topic in recent years. As the population and density of cities grow, urban organization and optimization become more essential than ever, particularly in terms of traffic and space optimization. Vehicle parking is an area of urban development that may greatly benefit from the implementation of smart and cost-effective systems. Currently, people do not have access to real-time parking spot availability. Traditionally, drivers attempt to locate open parking places on the streets by driving around, relying only on their local knowledge and luck to find a parking spot. This habit costs a lot of time, fuel, and congestion that could be reduced with smart solutions. This study utilizes Real-Time Kinematic (RTK) technology with Android smartphones to design a state-of-the-art parking system that includes smart mapping of parking spots in parking lots and smart parking management. RTK, with its centimeter-level accuracy and low-cost maintenance, has enormous potential for traffic and space optimization. With the developed Android app, parking spots can be easily mapped and included in mapping platforms and consumer applications regardless of the size of the parking lot. Users can search and check the availability of parking spots in different parking lots, and book or release available spots through the app. To provide users with real-time availability, this system utilizes an RTK-based monitoring system which is cost-effective and resilient to weather conditions.
Computing the integer hull of polyhedral sets
In this presentation we discuss a new algorithm for computing the integer hull P_I of a rational polyhedral set P, together with its implementation in Maple and C. Our presentation focuses on the two-dimensional and three-dimensional cases. Consider P given by a system of linear inequalities Ax <= b, where A is an integer matrix and b is an integer vector. Instead of using a conventional cutting-plane method over the whole system, we find the integer hull of each "angular sector" individually and then combine the results in order to deduce P_I. An angular sector is given by all the facets of P intersecting at one vertex of P. Our method only computes the vertices of P_I, avoiding the manipulation of all the integer points in P_I.
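For contrast, here is the naive enumerate-and-hull baseline in 2D, which is exactly the kind of brute-force manipulation of all integer points that the sector-based method avoids. It is included only to illustrate the object P_I being computed:

```python
from itertools import product

def integer_points(A, b, box):
    # enumerate lattice points of P = {x : Ax <= b} inside a bounding box
    (xlo, xhi), (ylo, yhi) = box
    return [(x, y)
            for x, y in product(range(xlo, xhi + 1), range(ylo, yhi + 1))
            if all(a0 * x + a1 * y <= bi for (a0, a1), bi in zip(A, b))]

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in traversal order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    hull = []
    for seq in (pts, list(reversed(pts))):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return hull
```

For P = {0 <= x, 0 <= y, 2x <= 5, 2y <= 5} (a square with fractional corners at 2.5), the integer hull P_I is the square with vertices (0,0), (2,0), (2,2), (0,2).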
Missing Traffic Sign Objects with Respect to Vehicle Speed
Computer Vision and Image Analysis
Detecting traffic sign objects on the road has become a key element in Advanced Driver Assistance Systems (ADAS). In this study, we first detect traffic sign objects employing a real-time and accurate algorithm, YOLOv4. Then, we cross-calibrate the driver's 3D gaze to the forward stereo system and estimate the driver's visual attention area. By intersecting this area with the detected traffic sign objects, we find the number of missed or seen traffic sign objects while driving at different ranges of speeds. Furthermore, we investigate the speed of drivers in pre-attentive and attentive fixations by considering consecutive frames. The experimental results show that the behaviour of drivers in checking the vehicle environment is the primary factor in missing or seeing traffic sign objects and in pre-attentive and attentive fixations at different speeds. Moreover, we demonstrate that drivers check more traffic sign objects when the speed of the vehicle is close to zero.
PITHIA: Protein interaction site prediction using multiple sequence alignments and attention
Proteins play a major role in cellular functions. Although some proteins operate independently, the majority act together. This suggests that it is vital to know the binding sites that facilitate the interaction. The development of effective computational methods is essential since experimental methods are time-consuming and expensive. PITHIA is a deep learning model for the prediction of protein interaction sites that combines alignment, attention, and embedding, which are considered to be among the most powerful tools in bioinformatics. By combining attention with the concept of multiple sequence alignments, the recently introduced MSA-transformer produces a language model that significantly outperforms previous unsupervised approaches. The PITHIA architecture is also based on attention, selecting candidates through careful comparison while drawing its input from the contextual embeddings provided by the MSA-transformer. By updating several widely used datasets, we provide meaningful comparisons with existing programs and create a brand new dataset that is the largest and most challenging to date. When tested on five datasets, PITHIA vastly outperforms the competition on multiple measures, beating the closest competitor by as much as 35% in terms of area under the precision-recall curve.
Using Connectome Features to Constrain Echo State Networks
Recently it has been shown that (fruit fly) connectome-derived constraints can inform performance and variance improvements for Echo State Networks (ESNs) in chaotic time-series prediction, a task in which ESNs are competitive. Herein we clarify the impact of particular connectome-derived structural features: namely, (edge) sparsity, distribution and position of weights, and clustering. From a connectome-derived null model, characterized by its sparsity (or equivalently, its density), weights, and clustering coefficient, we create four classes of models (A, B, B2, C), each characterized by the presence of one connectome-derived structural feature. After hyperparameter tuning and model selection from our model classes, we evaluate and compare performance on size variations of a discrete chaotic time-series dataset (Mackey-Glass). We find that model A (a 20%-dense model with weights sampled from a connectome distribution) achieves superior performance and variance across training input sizes of [50, 250, 500, 750] when compared to its null model; when pitted against a conventional ESN, the improvements in model variance are greater. Conversely, we report the arbitrary positioning of edge weights (as in vanilla ESNs) as a structurally imposed limitation on model variance, which can be remedied in particular by enforcing connectome-derived weight positioning.
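The basic ESN construction being constrained here can be sketched as follows. This is a generic sparse reservoir (cf. the 20%-dense model A), not the connectome-derived weight distribution or positioning used in the work; the density and spectral-radius values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n, density=0.2, spectral_radius=0.9):
    # random reservoir with ~`density` of entries nonzero, rescaled so
    # the largest eigenvalue magnitude equals `spectral_radius`
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(W, W_in, inputs):
    # drive the reservoir with a 1-D input sequence, collecting states;
    # a linear readout trained on these states gives the ESN prediction
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)
```

Connectome-derived variants replace the uniform-random sparsity mask and weight values above with masks and weights read off the connectome.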
Protein-Protein Interaction Prediction Using Deep Learning
Protein-protein interaction (PPI) prediction is an important problem in biology. Experimental approaches to detecting interactions suffer from high false-positive and false-negative rates, hence the need for computational approaches. Here we explore various state-of-the-art deep learning and NLP approaches to this problem.
A Novel and Failsafe Blockchain Framework for Secure OTA Updates in Connected Autonomous Vehicles
Distributed Systems and Applications
Connected Autonomous Vehicles (CAVs) are becoming data centers on wheels, amassing petabytes of data, as they require a combination of software and hardware systems and sub-systems in order to operate reliably in real-time. This research proposes a secure and scalable software update framework for CAVs in a distributed manner, leveraging Hyperledger Blockchain (BC) with smart contract technology. The framework is able to overcome the slow processing speed that is one of the major limitations of BC, while providing a high level of security against possible cyber-attacks. We use a salting-based hashing scheme over the traditional Elliptic Curve Cryptography (ECC) key to ensure multi-factor authenticated protection from any malicious transaction while downloading and installing any new feature update in CAVs. Moreover, our framework ensures immutability, load-management capability, and cost-free transactions while successfully upgrading and deploying Over-The-Air (OTA) software patches in any system of CAVs.
Learning human brain organization across functional imaging datasets
Modern neuroimaging techniques, especially functional magnetic resonance imaging (fMRI), provide the opportunity to explore the underlying organization of the human brain by observing brain activity in hundreds of thousands of brain locations (voxels) simultaneously while participants engage in a large variety of mental tasks. The resulting datasets have been used to produce functional atlases of the human brain, each associated with a specific function. In recent years, the number of high-quality datasets and the number of associated brain maps have rapidly increased. However, each individual dataset typically focuses on a specific functional domain, often leading to poor characterization of other parts of the brain. In this project, we present a generative framework that allows for fusion across disparate functional datasets to produce a more complete characterization of the human brain. The high-level structure of the framework is a high-dimensional probabilistic model that describes the spatial arrangement of functional brain regions. Meanwhile, the framework learns separate emission models for each dataset, linking the functional brain regions to the predicted response for the specific set of tasks. The framework integrates information across diverse datasets while considering the relative strengths and weaknesses of each, efficiently deals with missing data within individual subjects, and may help to provide better estimates of both group and individual brain organization based on limited data.
Anomaly Detection with Adversarially Learned Perturbations of Latent Space
Vahid Reza Khazaie
Anomaly detection aims to identify samples that do not conform to the distribution of the normal data. Due to the unavailability of anomalous data, training a supervised deep neural network is a cumbersome task. As such, unsupervised methods are preferred as a common approach to solving this task. Deep autoencoders have been broadly adopted as the basis of many unsupervised anomaly detection methods. However, a notable shortcoming of deep autoencoders is that they provide insufficient representations for anomaly detection because they generalize to reconstruct outliers. In this work, we have designed an adversarial framework consisting of two competing components, an Adversarial Distorter and an Autoencoder. The Adversarial Distorter is a convolutional encoder that learns to produce effective perturbations, and the Autoencoder is a deep convolutional neural network that aims to reconstruct the images from the perturbed latent feature space. The networks are trained with opposing goals: the Adversarial Distorter produces perturbations that are applied to the encoder's latent feature space to maximize the reconstruction error, and the Autoencoder tries to neutralize the effect of these perturbations to minimize it. When applied to anomaly detection, the proposed method learns semantically richer representations due to applying perturbations to the feature space. The proposed method outperforms the existing state-of-the-art methods in anomaly detection on image and video datasets.
Triaging Patients to the ICU: A Visual Analytics Tool for Admission Decision
COVID-19 has caused an increase in patients requiring admission to intensive care units (ICUs), where demand has exceeded capacity. Triaging patients with COVID-19 based on their laboratory results has proved to be a very difficult task for health authorities. To tackle this issue, we first classified patients to identify those who need to be admitted to the ICU, and then used the Analytic Hierarchy Process (AHP) to rank the priority of admissions to ICUs. To facilitate difficult triage decisions, we developed a visual analytics tool that allows users to interactively explore the correlation between a patient's rank and their laboratory results. This tool also allows users to interact with the classification model and track the decision process. Furthermore, it illustrates the underlying working mechanism of the models through visual representations, which in turn improves users' confidence in the generated predictions. We demonstrate the utility of this tool by presenting a usage scenario.
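The AHP ranking step can be sketched as below. This is the standard eigenvector method for turning a pairwise-comparison matrix into priority weights; the criteria, comparison values, and how weights combine with patient classifications here are illustrative, not the authors' specifics:

```python
import numpy as np

def ahp_weights(pairwise):
    # priority weights = principal eigenvector of the pairwise-comparison
    # matrix, normalized to sum to 1 (Saaty's eigenvector method)
    vals, vecs = np.linalg.eig(pairwise)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()
```

For a perfectly consistent matrix built from underlying weights (pairwise[i][j] = w[i]/w[j]), the method recovers those weights exactly; in practice the comparisons come from expert judgment and are only approximately consistent.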
Rule-based Integrator in Maple
RUBI is a modern rule-based symbolic integrator developed by Albert Rich. It attempts to find optimal antiderivatives of large classes of mathematical expressions. Because RUBI is designed as a rewriting system, its implementation in a computer algebra system based on the paradigm of functional programming, like MAPLE, is a challenge. This talk focuses mainly on adapting RUBI to MAPLE and optimizing the process of computing antiderivatives.
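The rule-rewriting idea behind RUBI can be illustrated with a tiny sketch: an ordered list of rules is tried in turn until one matches the integrand. Real RUBI has thousands of carefully ordered rules over full symbolic expressions; the two rules and the tuple-based expression encoding here are purely illustrative:

```python
def power_rule(expr, var):
    # integral of x^n dx = x^(n+1)/(n+1)  (n != -1); expr is ("pow", var, n)
    if isinstance(expr, tuple) and expr[0] == "pow" and expr[1] == var and expr[2] != -1:
        n = expr[2]
        return ("div", ("pow", var, n + 1), n + 1)
    return None

def constant_rule(expr, var):
    # integral of c dx = c*x for a numeric constant c
    if isinstance(expr, (int, float)):
        return ("mul", expr, var)
    return None

def integrate(expr, var="x"):
    """Try each rule in order, in the spirit of RUBI's ordered rule lists."""
    for rule in (power_rule, constant_rule):
        result = rule(expr, var)
        if result is not None:
            return result
    raise ValueError(f"no rule matches {expr!r}")
```

Implementing such ordered pattern-matching efficiently in Maple, whose evaluation model differs from Mathematica's (where RUBI was first written), is part of the challenge the talk addresses.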