Improved algorithms, better implementations, and faster computers have enabled many previously time-consuming computer algebra computations to be performed routinely and have extended the range of what is practically possible to compute. However, there remain many computations that still require excessive computing time, and while existing computer algebra systems have had some practical success in applications, their widespread use in computational science and engineering remains limited. This is partly due to the inherent difficulty of exact computation; in many cases, however, the performance achieved by an implementation could be improved dramatically through optimized implementations, parallel computation, and the use of special-purpose hardware.

Efficient implementation of symbolic computation requires techniques that go far beyond the manipulation of algebraic or differential equations; these include efficient memory management, data compression, code optimization, and parallel and distributed computing. While the computer algebra community has begun to incorporate high-performance computing techniques, architecture-aware algorithms, and parallel computing, tuning algorithms to perform well on modern computer architectures and adapting algorithms and systems to parallel computers can be a difficult and time-consuming process, and much work remains. Moreover, the irregular structure and higher-level data types of computer algebra algorithms pose many challenges to achieving high performance in their implementations. The complexity of computer algebra systems also makes the incorporation of parallel computation challenging.

This session is devoted to exploring the application of high-performance computing to computer algebra algorithms, applications, and systems, and the research and implementation challenges this poses. Topics include:

- Practical implementation and performance analysis of computer algebra algorithms
- Implementation and optimization techniques for computer algebra algorithms
- Parallel implementations of computer algebra algorithms and parallel computer algebra systems
- The application and extension of optimizing compiler techniques to the implementation of computer algebra algorithms
- Adapting the implementation of computer algebra algorithms to improve performance by better utilizing the underlying hardware
- Techniques for automating the optimization and platform adaptation of computer algebra algorithms
- Cache complexity and cache-oblivious computer algebra algorithms
- Hardware acceleration technologies (multi-cores, GPUs, FPGAs)
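As a small illustration of the cache-oblivious style named in the topics above (this sketch is ours, not drawn from any session talk; the base-case threshold and names are illustrative), a matrix transpose can be written to recursively split its longer dimension, so that at some recursion depth the subproblems fit in every level of the cache hierarchy without the code knowing any cache sizes:

```python
def transpose(A, B, i0=0, i1=None, j0=0, j1=None):
    """Cache-oblivious out-of-place transpose: B[j][i] = A[i][j].

    Recursively halves the longer dimension of the current block;
    once a block is small enough, it is transposed directly.
    """
    if i1 is None:
        i1, j1 = len(A), len(A[0])
    di, dj = i1 - i0, j1 - j0
    if di * dj <= 16:            # illustrative base-case threshold
        for i in range(i0, i1):
            for j in range(j0, j1):
                B[j][i] = A[i][j]
    elif di >= dj:               # split the longer side
        mid = i0 + di // 2
        transpose(A, B, i0, mid, j0, j1)
        transpose(A, B, mid, i1, j0, j1)
    else:
        mid = j0 + dj // 2
        transpose(A, B, i0, i1, j0, mid)
        transpose(A, B, i0, i1, mid, j1)
```

The same divide-and-conquer pattern underlies cache-oblivious matrix multiplication and polynomial arithmetic, which is what makes it relevant to the dense kernels inside computer algebra systems.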

If you are interested in presenting your recent work in this session, please send your title and abstract to jjohnson@cs.drexel.edu, malaschonok@gmail.com, or moreno@csd.uwo.ca no later than May 29, 2017; see the Important Dates section on the ACA homepage.

Please use the LaTeX template for your submission; see the ACA 2017 Publications page for detailed instructions. Your abstract should be 1–2 pages long, including references.

**Interactions between high-performance computing and computer algebra: overview and perspectives** (Jeremy Johnson, Gennadi Malaschonok and Marc Moreno Maza) **slides (1/2)**

**Fast construction of a lexicographic Gröbner basis of the vanishing ideal of a set of points** (Xavier Dahan)

**A Parallel Compensated Horner Scheme** (Stef Graillat)

**Improved method to find optimal formulae for bilinear maps** (Svyatoslav Covanov) **slides**

**Minimizing arithmetic and communication costs for faster matrix computations** (Oded Schwartz) **slides**

**Communication-efficient parallel Bruhat decomposition** (Alexander Tiskin)

**Efficient algorithms for evaluating high-degree matrix polynomials** (Sivan Toledo) **slides**

**High-Performance Kernels for Exact Linear Algebra** (Jeremy Johnson)

**Sparse matrices in computer algebra when using distributed memory: theory and applications** (Gennadi Malaschonok) **slides**

**Comprehensive Optimization of Parametric Kernels for Graphics Processing Units** (Marc Moreno Maza) **slides**
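As background for the compensated Horner talk above, the classical sequential compensated Horner scheme evaluates a polynomial to roughly twice working precision by capturing the rounding error of each step with error-free transformations (Knuth's two-sum and Dekker's two-product). The sketch below is assembled from these standard building blocks, not from the speaker's parallel code; all names are ours:

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a
    return s, (a - (s - bp)) + (b - bp)

def two_prod(a, b):
    """Return (p, e) with p = fl(a * b) and a * b = p + e exactly
    (Dekker's algorithm; no FMA required)."""
    def split(v):
        c = 134217729.0 * v        # 2**27 + 1
        hi = c - (c - v)
        return hi, v - hi
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    return p, al * bl - (((p - ah * bh) - al * bh) - ah * bl)

def comp_horner(coeffs, x):
    """Horner evaluation of a polynomial (highest-degree coefficient
    first), accumulating the per-step rounding errors in c."""
    s, c = coeffs[0], 0.0
    for a in coeffs[1:]:
        p, pi = two_prod(s, x)     # s*x with its rounding error
        s, sigma = two_sum(p, a)   # + coefficient, with its error
        c = c * x + (pi + sigma)   # run Horner on the errors too
    return s + c
```

For the ill-conditioned example p(x) = (x - 1)^3 at x = 1 + 2^-20, plain Horner returns 0.0, while the compensated version recovers the exact value 2^-60; the parallel question addressed in the talk is how to organize such error accumulation across processors.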