Before it is anything else, a video game is a piece of software. Consequently, programming lies at the heart of game development. Programming a video game, however, raises several interesting challenges not found in developing traditional software. Programming a game requires integration of sophisticated concepts and software technologies from computer graphics, artificial intelligence, networking, and other disciplines into a highly usable, highly interactive package with serious real-time performance constraints. Increasingly, game programming is split into two separate and important tasks: the development of a game engine, providing core functionality to support one or more games, and the development of game logic that runs on top of this engine, providing the specifics of a particular game.
This course studies the fundamental aspects of building distributed systems and developing distributed applications. Emphasis is placed on client-server application design using sockets and remote procedure calls and developing reliable applications through the use of replication, group membership protocols, clock synchronization and logical timestamps. Students will have the opportunity to develop reliable distributed applications.
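One of these building blocks, logical timestamps, is small enough to sketch in code. Below is a minimal illustration (in Python, purely for exposition) of Lamport's logical clocks, one common realization of logical time; the two-process exchange at the bottom is a made-up example:

```python
# A minimal sketch of Lamport logical clocks. The process names and the
# message exchange below are illustrative, not part of any assignment.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        # A purely local event advances the clock by one.
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event too; the returned value is attached
        # to the outgoing message as its timestamp.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # On receipt, jump past the sender's timestamp.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.send()          # a: 1
t2 = b.receive(t1)     # b: max(0, 1) + 1 = 2
t3 = b.send()          # b: 3
t4 = a.receive(t3)     # a: max(1, 3) + 1 = 4
```

The key invariant this preserves is that if one event happens before another, its timestamp is strictly smaller (though the converse does not hold).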
The video game market is a multi-billion-dollar-a-year global industry, with more units of video game software distributed each year than virtually any other software product. In 2000, for the third consecutive year, an astonishing 35% of all Americans identified computer and video games as the most fun entertainment activity (according to the latest survey results released by the IDSA, the Interactive Digital Software Association). Watching television was a distant second (18%), followed by surfing the Internet (15%), reading books (13%), and going out to the movies (11%). As such, the video game industry is a significant and important industry.
Building a high quality game is a surprisingly difficult and challenging process; to quote Andre LaMothe, CEO of Xtreme Games LLC: "Game programming is without a doubt the most intellectually challenging field of Computer Science in the world." This course provides an in-depth examination of video game design and implementation to study the many concepts and issues that bring about these challenges. Topics include: the history of video games; the game development process; principles of game design, game play, and balance; game genres and genre-specific design issues; plot, story, and level design; technical foundations from computing (graphics, artificial intelligence, networking, software engineering, and so on) and elsewhere (physics, anatomy, language studies, and so on); ethical issues in video games and the gaming industry; and the future of gaming. The course will culminate with a significant group project focused on the design and development of an innovative video game.
The website from last year's offering of the course is still up and available at: http://www.csd.uwo.ca/courses/CS9641b. Students can go there and see last year's outline, notes, and so on. The grading methods may change this time, and material will be updated, but things should otherwise stay quite similar.
This course is a continuation of CS3346, Artificial Intelligence I. A broad range of areas falls within the field of Artificial Intelligence. In this course we give a brief introduction to three very active areas of Artificial Intelligence: machine learning, natural language processing, and computer vision. The programming assignments will be done in Matlab (a short intro to Matlab will be given), and the emphasis of the assignments will be on developing practical applications, such as a spam detector for email, text categorization, and object tracking in videos. Planned topics (subject to change) are:
This course focuses on advanced techniques for the design and analysis of algorithms. Among the topics covered are: approximation algorithms, randomized algorithms, on-line algorithms, zero-knowledge proofs, parallel algorithms, computational geometry, and distributed algorithms.
Software design immediately follows the requirements engineering phase in a software process. A software requirements specification tells us "what" a system does (or is to do), and becomes input to the design process. The resultant software design tells us "how" a software system works (or is to be implemented). A high-level view of the system, depicting processing elements, data elements, and connecting elements that hold the pieces together is called an "architecture".
This course focuses on software architectures though some consideration is given to software design. The learning objectives are to become familiar with: the notion of software architectures, different types of architectures, and with the role they play in software systems and in software development. Concepts presented in lectures are complemented by practical assignments and a project.
The course is interested in established and promising methodologies for improving the quality of programming projects in particular and computer science undertakings in general, with particular interest in approaches being taken in industry. This includes: behavioral and test driven development, pair programming, code reviews, refactoring legacy code to improve testability, automated testing tools, web application frameworks like Ruby on Rails, formal specifications, deriving test data from specifications, reducing testing by proving properties of programs, random testing, and mutation testing. Specifications being tested against could include usability, security, and performance constraints. See the course web site for a current schedule of material to be covered as well as grading policies for undergraduate and graduate students (as this is a cross-listed course with CS4472).
Requirements engineering (RE) covers all the activities involved in discovering, analyzing, documenting and maintaining a set of requirements for a computer-based system. The use of the term "engineering" implies that systematic and repeatable techniques should be used to ensure that system requirements are complete, consistent, relevant, etc. RE is a front-end part of a software development process which enables software engineers to define what a software system is required to do and the circumstances under which it shall operate.
In this course, students will:
This course provides an overview of a number of areas in human-computer interaction (HCI). Broadly speaking, HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them. HCI addresses any interaction with computers by humans, as developers or as users, as individuals or as groups. On completion of the course, students are expected to have theoretical knowledge of and practical experience in the fundamental aspects of designing, implementing and evaluating interactive systems that are useful and usable. It is expected that students will become familiar with some of the literature in HCI and develop sufficient background in HCI issues to take more advanced courses or begin research projects at the master's or doctoral levels in the topics covered by this course.
Research into the design of usable technology draws extensively on knowledge of informatics, cognition, communication, representation and computation. HCI professionals seek to identify the nature and parameters of human cognition at the interface so as to design forms of representation that support human interpretation and use as well as to reliably and validly test new technologies for usability and acceptability.
Introduction to techniques used for analyzing biological sequences. Topics include: sequence alignment, dynamic programming, BLAST, spaced seeds, suffix trees, suffix arrays, Markov chains and hidden Markov models, profile HMMs for sequence families, multiple sequence alignment methods, building phylogenetic trees, etc.
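As an illustration of the dynamic programming flavour of the first topic, here is a minimal sketch of global alignment scoring in the style of Needleman-Wunsch, written in Python for exposition; the scoring values (match +1, mismatch -1, gap -1) are a common textbook choice, not anything specific to this course:

```python
# A minimal sketch of global sequence alignment scoring by dynamic
# programming. Returns only the optimal score; recovering the alignment
# itself would require a traceback over the table.

def global_alignment_score(s, t, match=1, mismatch=-1, gap=-1):
    n, m = len(s), len(t)
    # dp[i][j] = best score aligning s[:i] with t[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap              # align s[:i] against all gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap              # align t[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # (mis)match
                           dp[i - 1][j] + gap,      # gap in t
                           dp[i][j - 1] + gap)      # gap in s
    return dp[n][m]

print(global_alignment_score("ACGT", "ACT"))  # 2: three matches, one gap
```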
The creation of this course was motivated by two articles about some of the problems large corporations are having with managing their software: Automated QA at EA: Driven by Events (ACM Queue, May 2014) and How Amazon Web Services Uses Formal Methods (CACM, April 2015). The issues raised by these papers indicated that it would be worthwhile to study "reactive programming" further, as well as, more generally, questions regarding the design of large asynchronous parallel systems (particularly those common on the web).
The target audience for the course is graduate students who are interested in how requirements impact the specification, design, implementation, and testing of software systems with regard to: safety, accessibility, and sustainability (see also Sustainable Software).
Current hardware improvements focus on increasing the number of computations that can be performed in parallel rather than on increasing clock speed alone. This change has brought multi-processor workstations to the desktop, expanding interest in parallel algorithms and software capable of exploiting these computing resources. At the same time, these new hardware acceleration technologies stress the need for a deeper understanding of performance issues in software design.
The aim of this course is to introduce you to the design and analysis of algorithms and software programs capable of taking advantage of these new computing resources. The following concepts will guide our quest for high performance: parallelism, scalability, granularity, locality, cache complexity, synchronization, scheduling and load balancing.
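To make the first two concepts concrete, here is a minimal Python sketch of data decomposition: a sum split into chunks handled by a pool of workers. The chunk-per-worker split illustrates coarse granularity; a real CPU-bound workload would use processes rather than the threads used here, but the decomposition pattern is the same:

```python
# A minimal sketch of data parallelism and granularity: a reduction
# (sum) decomposed into independent chunks. Thread workers are used
# here only to keep the example self-contained.

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker reduces its own chunk independently (no sharing).
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Coarse granularity: roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the partial results into the final answer.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # matches the sequential sum, 499500
```

Shrinking the chunk size increases parallel slack but also scheduling overhead, which is exactly the granularity trade-off the course examines.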
By the end of the course, you are expected to have an in-depth understanding of the following subjects:
A quarter of the course will give an overview of other hot topics in high performance computing, including:
It is widely believed that a picture is worth more than a thousand words. However, dealing with digital pictures (images) requires far more computer memory and transmission time than that needed for plain text.
To be able to handle, efficiently, the huge amount of data associated with images, compression schemes are needed. Image compression is a process intended to yield a compact representation of an image, hence, reducing the image storage/transmission requirements.
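As a toy illustration of what "compact representation" means, here is a minimal run-length encoding (RLE) sketch in Python; RLE is one of the simplest lossless schemes and pays off when image rows contain long runs of identical pixel values (the sample row is illustrative):

```python
# A minimal sketch of lossless compression via run-length encoding:
# replace each run of identical pixel values with a (value, count) pair.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)             # [[255, 3], [0, 2], [255, 1]]
assert rle_decode(encoded) == row     # lossless: decoding restores the row
```

Real schemes such as JPEG combine transforms, quantization, and entropy coding, but the storage-versus-fidelity trade-off starts from the same idea of exploiting redundancy.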
Over the last few decades, many good image compression schemes have been developed. These schemes are currently used in commercial compression products/systems, e.g., JPEG and GIF. The performance of these schemes varies from low to high compression ratios with low to high levels of degradation of the decompressed images.
This course provides students with a solid understanding of the fundamentals and the principles of various digital still-image compression schemes.
Upon completion of the course, the students will be equipped with the fundamental knowledge that will help them understand various compression techniques in such a way as to optimize their use for a particular application.
CS9630: Image Processing and Analysis
Instructor: John Barron
We cover basic image processing techniques: filtering in the spatial and frequency domains (lowpass, highpass and bandpass filters), edge detection, region growing, morphological operations, histogramming, segmentation, the Fourier transform and sampling, etc. The programming language is Matlab (which will be taught in class).
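As a small taste of spatial-domain filtering, here is a minimal 3x3 mean (box) filter, a basic lowpass filter, sketched in Python rather than Matlab so the example is self-contained; the tiny test image is illustrative, and border pixels are left unchanged for simplicity:

```python
# A minimal sketch of spatial-domain lowpass filtering: a 3x3 mean
# (box) filter over a grayscale image stored as a list of lists.

def box_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]     # borders keep their original values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9    # integer average of the 3x3 window
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(box_filter_3x3(img)[1][1])  # 1: the isolated bright pixel is smoothed
```

Smoothing an isolated bright pixel down toward its neighbourhood average is exactly the noise-suppressing (and edge-blurring) behaviour that motivates the highpass and bandpass alternatives covered later.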
The objective of this course is to introduce students to data science (DS) techniques, with a focus on application to substantive (i.e. "applied") problems. Students will gain experience in identifying which problems can be tackled by DS methods, and learn to identify which specific DS methods are applicable to a problem at hand. During the course, students will gain an in-depth understanding of a particular (substantive problem, DS solution) pair, and present their findings to their peers in the class.
In this course, we will study what information visualization is, how information/data can be presented visually, how to interact with information to perform tasks, what the applications of information visualization are, how humans process visual information, how people navigate information spaces, and what activities and environments can benefit from information visualization techniques.

Information visualization has applications in library and information science, health information science, computer science, digital humanities, journalism, history, and media studies, to name a few (examples include: social networks, text visualization, search engines, business analysis, digital libraries, digital games, learning tools, geographic visualization tools, health analytics, scientific discovery, data journalism, data analytics tools, and decision support tools). Students can also apply what they learn in usability design of web sites, as well as human-computer interaction. This is primarily a design course and is very open and flexible. You do not need to have any specific technical background to take this course. However, you need to have some general knowledge of computers (e.g., databases, information systems). We will refer to these in the course of our study of information visualization. Additionally, you should be comfortable with a course that has an interdisciplinary approach. All students will benefit from taking this course, particularly those who are interested in learning about the role of new technology in creatively solving problems and challenges in dealing with the massive volumes of existing data.
Imagine you have just finished a lengthy set of experiments using your favourite experimental modality and piece of equipment. You’d like to analyze this data using a specific technique implemented in a software package to which you have access... but the software won’t load the file from your experimental equipment. With just a little bit of knowledge of scripting and understanding of data formats, you could solve this problem yourself in a matter of minutes.
Perhaps you’re on the cutting edge of research in your field and have applied a novel technique to generate an overwhelming quantity of data. Now that you have the data, what are you going to do with it? How can you find interesting, and relevant, patterns in 2 terabytes of data? What tools and methodologies from information science can help you make sense of your data?
This course sets out to accomplish two primary goals:
Officially, this course has 3 “lecture” hours and 2 “lab” hours. In practice, I’m not going to be doing much “lecturing”; we’ll be trying to do stuff, not just talk/listen about stuff. The lecture hours will consist of small microlectures followed by immediate hands-on application of what we’ve just learned. The designated lab hours will give you a chance to practice problem solving in large groups.
An overview of core data structures and algorithms in computing, with a focus on applications to informatics and analytics in a variety of disciplines. Includes lists, stacks, queues, trees, graphs, and their associated algorithms; sorting, searching, and hashing techniques. For non-Computer Science graduate students.
This graduate course examines the foundational techniques in the field of computer vision. Vision is one of the senses allowing us to build powerful internal representations of the world. Hence, machines that correctly interpret visual information gain an extended capability to interact with the world and with humans. Doing so requires algorithms that can interpret noisy images and construct an operational understanding of visual scenes. From image processing to motion understanding, passing through 3D stereo vision and reconstruction, the goal of making computers see the way we humans do remains elusive, and constitutes an exciting sub-topic of artificial intelligence.
Management and analysis of unstructured data, with a focus on text data, for example transaction logs, news text, article abstracts, and microblogs. Overview of unstructured image, audio, and video data. Hands-on experience with modern distributed data management and analysis infrastructure.
The Internet and other modern communications services provided by telco companies (e.g., Bell Canada, Telus, Rogers) are part of today’s society. The Telco industry serves as a technological backbone for nearly any service and infrastructure one could imagine, e.g., content delivery, smart homes, e-health.
This course will introduce some of the core technical functionalities within the Telco industry, the core challenges in these units, and how these challenges can be explored. Students will gain real-life knowledge and experience through various research projects, discussions, and assignments. Undergraduate students will mainly be involved in designing and developing a tool / prototype to address a specific high-value Telco industry functional problem. Graduate students will mainly be involved in exploring a complex real-world Telco research topic and working on a full-fledged research paper that addresses a specific research problem. In addition, this course is expected to help graduate students who have yet to identify a thesis topic to find an interesting real-world research problem that could be a good fit for their thesis.
CS9660: Computational Linguistics
Instructor: Robert Mercer
Modern computational linguistics uses a variety of techniques to process human language. In this graduate level introduction to this subject we will investigate a number of these techniques. They include finite state automata and finite state transducers applied to words; n-grams and part of speech tagging to connect words to grammar; grammars to describe the syntactical structure of sentences; parsers that use these grammars to generate the structure; the semantics of words and sentences; and discourse.
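As a small illustration of the n-gram idea, here is a minimal Python sketch that counts word bigrams and estimates P(next word | previous word) by relative frequency; the toy corpus is made up:

```python
# A minimal sketch of a bigram language model: maximum-likelihood
# estimates of P(next | previous) from raw counts, with no smoothing.

from collections import Counter

def bigram_probs(tokens):
    pairs = Counter(zip(tokens, tokens[1:]))   # count adjacent word pairs
    unigrams = Counter(tokens[:-1])            # count left-context words
    return {(a, b): c / unigrams[a] for (a, b), c in pairs.items()}

tokens = "the cat sat on the mat".split()
probs = bigram_probs(tokens)
print(probs[("the", "cat")])  # 0.5: "the" is followed by "cat" once out of twice
```

Real systems add smoothing to handle unseen pairs, but the counting core is the same.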
This course focuses on the study of algorithms used for solving problems that arise from the design and use of wide-area networks, such as the Internet. Among the topics that we might cover are: Distributed algorithms for network problems, Searching for information on the Web and Web crawling, Caching and prefetching, Routing, Service placement and clustering, Peer-to-peer systems, Load balancing.
Bioinformatics studies biological problems using biological, computational, and mathematical methods. Computational biology studies computational techniques that can solve biological problems efficiently. This course covers some selected topics from Bioinformatics research.
The topics are drawn from the following lists:
• Pairwise sequence alignment with affine gap penalty.
• Multiple sequence alignment with affine gap penalty.
• Neighbour-joining algorithm for phylogenetic tree construction.
• Tree comparison algorithms.
• RNA structure alignment algorithms.
• Sequence assembly.
• Hidden Markov models.
• RNA secondary structure prediction by minimum energy folding.
• Protein peptide de novo sequencing.
• Normalized similarity and distance.
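To give a flavour of the hidden Markov model topic above, here is a minimal Python sketch of the Viterbi algorithm, which recovers the most likely hidden-state path for an observation sequence; the two-state model below (high vs. low GC content) and its probabilities are illustrative only:

```python
# A minimal sketch of the Viterbi algorithm for HMM decoding:
# dynamic programming over the most probable state path.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[s] = probability of the best path so far ending in state s
    best = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_best, new_path = {}, {}
        for s in states:
            # Pick the predecessor state that maximizes the path score.
            prev = max(states, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][o]
            new_path[s] = path[prev] + [s]
        best, path = new_best, new_path
    final = max(states, key=lambda s: best[s])
    return path[final]

states = ("H", "L")  # e.g., high vs. low GC-content regions
start = {"H": 0.5, "L": 0.5}
trans = {"H": {"H": 0.6, "L": 0.4}, "L": {"H": 0.4, "L": 0.6}}
emit = {"H": {"G": 0.4, "C": 0.4, "A": 0.1, "T": 0.1},
        "L": {"G": 0.1, "C": 0.1, "A": 0.4, "T": 0.4}}
print(viterbi("GGCAT", states, start, trans, emit))  # ['H', 'H', 'H', 'L', 'L']
```

In practice, implementations work in log space to avoid underflow on long sequences; the recurrence is otherwise identical.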
Two recent topics have become crucial in healthcare: evidence-based healthcare (EBHC) and big data. EBHC’s main purpose is to increase and improve the use of evidence (i.e., data and information) by stakeholders (e.g., health practitioners, policy-makers, public health managers, etc.). As health data continues to grow, big data can also play an increasingly important role in different aspects of EBHC. Despite the emergence of these two areas, the role that health informatics tools (HITs) can play in EBHC and the analysis and evaluation of these tools often receive little attention. HITs permeate EBHC at every turn—e.g., data and text mining tools for evidence generation, distillation, or synthesis; decision support for incorporating evidence-based protocols into clinical workflow; or web-based visualization tools for gaining insight into patterns of data. As HITs advance, explicit understanding of and investigations into the relationship between HITs and EBHC become increasingly vital.
In this course, we will examine topics related to health informatics—with particular emphasis on HITs, big data in healthcare, presentation of health data, analytics methods and their role in healthcare, and the importance of human-centeredness of HITs and factors that contribute to this.
This course is cross-listed (with students from different departments). You should feel comfortable being in a course that has students from different backgrounds and view points. Even though this course has no specific pre-requisites, you are expected to be interested in learning about issues that sit at the intersection of health, information science, computing, and technology.
The course will provide a comprehensive introduction to machine learning, one of the most active and important areas in AI (Artificial Intelligence). Various learning paradigms, methodologies and theories will be covered. The main focus will be on inductive learning from examples.
Some knowledge of knowledge representation, logic, reasoning, and probability theory would be helpful.
CS9862a: Introduction to Modern SAT Solvers: Applications and Algorithms
Instructor: Robert Webber
Although solving SAT instances is slow in general, many significant SAT instances can be solved efficiently. Thus, converting a difficult optimization problem into corresponding SAT instances and then exploiting highly efficient modern SAT solvers has become a standard computer science methodology for handling difficult problems. This methodology has been adopted because of significant progress on algorithms for solving SAT problems over the last decade or two. This course covers the basics of these new algorithms as well as how people approach converting their application problems to suitable SAT instances. Some examples where conversion to SAT has proven successful include: tracking down errors in software, software configuration management, test case generation, cryptanalysis, analyzing network security protocols, analyzing gene data (haplotype inference), error correction in data records in preparation for data mining, general planning problems, and common-sense reasoning (with application to natural language understanding and high-level robot perception). Some successfully solved SAT instances have involved millions of variables.
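To give a feel for what is under the hood, here is a toy backtracking solver in Python, a bare-bones relative of the DPLL search that modern solvers extend with unit propagation, clause learning, and branching heuristics; clauses are lists of integer literals (a negative integer means a negated variable), following the standard DIMACS convention:

```python
# A toy SAT solver: backtracking search with clause simplification.
# Not representative of modern solver performance, only of the
# underlying search problem.

def solve(clauses, assignment=()):
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        rest = [lit for lit in clause if -lit not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return set(assignment)            # all clauses satisfied
    lit = simplified[0][0]                # branch on the first open literal
    return (solve(simplified, assignment + (lit,))
            or solve(simplified, assignment + (-lit,)))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = solve([[1, 2], [-1, 3], [-2, -3]])
print(model)                              # a satisfying set of literals
```

An unsatisfiable input such as `[[1], [-1]]` makes every branch falsify a clause, so `solve` returns `None`.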
CS9863: Empirical Research in Software Engineering
Instructor: Nazim Madhavji
This is a course on “research methods” with particular focus on how to conduct empirical research in the field of Software Engineering (SE). We shall also touch base on research methods in Computer Science (CS) and Information Technology (IT). While creativity is central to advancing scientific knowledge, conducting research requires the use of rigorous qualitative and quantitative methods.
CS9864: Software Engineering for Big Data Applications and Analytics
Instructor: Nazim Madhavji
The focus in this course is on the development, maintenance and evolution of software applications and services dealing with large volumes of data. With recent advances in technologies (e.g., ubiquitous computing, internet of things, cloud computing, sensors and devices, etc.), it has become more practical to capture and process large volumes of both structured and unstructured data (e.g., patient records, traffic data, video data, images, sporting statistics, events, logistics data, etc.). Also, with the mass adoption of internet and communication technologies (e.g., online access, e-commerce, social media, mobile data, and others), there is considerable and growing interest amongst organisations and institutions in analysing such data for their purposes.
In this course, we go beyond data analytics. Our goal is to create “hybrid” application systems, composed of functions and data. This leads to exploring how systems may be designed such that they would encompass not only functions and traditional system quality attributes (such as performance, reliability, usability, etc.), but also data and data attributes (such as volume, streaming, variety of data, etc.). We also focus on such topics as:
We analyse a number of research papers related to hybrid systems and, in addition, there is a group project involving services, big data, and cloud infrastructure.
CS9877: Research Topics in Genomics and Proteomics
Instructor: Lucian Ilie
Genomics and proteomics are two rapidly growing areas of molecular biology that are already causing a revolution in medicine. While genomics is concerned with the sequencing and analysis of an organism's genome, proteomics studies the organism's proteome (the entire set of proteins), including protein abundances, variations, modifications, and their interactions with other proteins or DNA. The two fields aim to understand cellular processes and their relation with diseases. The course will provide first an introduction to basic concepts of computational molecular biology, including sequence alignment, dynamic programming, BLAST, spaced seeds, suffix trees, suffix arrays, Markov chains, hidden Markov models, profile HMMs for sequence families, multiple sequence alignment methods, etc. Then, current and emerging research topics in genomics and proteomics will be discussed including DNA sequencing, error correction, genome assembly, assembly evaluation, genome resequencing, variation discovery, metagenomics, primer design, DNA splice junction prediction, DNA-protein binding prediction, protein-protein interaction prediction, protein-protein interaction network alignment, protein structure prediction, protein contact map prediction, gene expression inference, cancer diagnosis, alignment-free homology detection, etc.