**Cluster Computation in Maple**

Based on software developed at RISC in Linz, Austria, we illustrate
several uses of a *Beowulf cluster run from within a Maple session*.

**1. **The High Performance Computing Group at SFU
has recently acquired a 192-processor Beowulf cluster, bugaboo.hpc.sfu.ca.
Bugaboo, running 96 two-way nodes of 1.2 GHz AMD Athlon processors, is a
world-class cluster capable of performing 144.6 billion operations per
second---good enough to rank 465th on the Top 500 list, www.top500.org.
One of the tools we are using to harness this computational power is the
Distributed
Maple
package, written by Wolfgang Schreiner of RISC at the Johannes Kepler
University in Linz, Austria. Distributed Maple provides a simple yet powerful interface
for scheduling and executing Maple commands in parallel across a network,
making it possible to conduct high-level mathematical research in a powerful
computational environment.
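To make the scheduling model concrete, here is a minimal fork/join sketch in Python's `multiprocessing` module (not Distributed Maple itself); Distributed Maple's primitives for launching a task and collecting its result play roughly the roles of `apply_async` and `get` below. The `task` function and its polynomial are our own illustrative stand-ins for a Maple command.

```python
from multiprocessing import Pool

def task(x):
    # Stand-in for a Maple command sent to a remote node;
    # here we simply evaluate a small polynomial.
    return x**3 - 6*x**2 + 11*x - 6

if __name__ == "__main__":
    with Pool(4) as pool:
        # Launch tasks without blocking (the "start" half of fork/join).
        handles = [pool.apply_async(task, (x,)) for x in range(8)]
        # Block until each result arrives (the "wait" half).
        results = [h.get() for h in handles]
        print(results)
```

The key point is that task launch and result retrieval are separate steps, so many Maple commands can be in flight across the network at once.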

**2. **In our first demo, we survey
*Distributed Maple's interface* and try out some examples on the
Bugaboo cluster. These simple examples---*parallel summation,
factorization, and matrix multiplication*---demonstrate in real time the
advantages of (symbolic) parallel computation. The Maple worksheet for
the first demo is available here.
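As a sketch of what the parallel-summation example does, the fragment below splits a large sum into chunks, evaluates each chunk in its own worker process, and adds the partial results. This is a Python `multiprocessing` analogue of the pattern, not the Maple worksheet itself; the names `partial_sum` and `parallel_sum` are our own.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Sum one chunk of the range in a worker process.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split [0, n) into `workers` chunks; the last chunk absorbs any remainder.
    chunk = n // workers
    ranges = [(i * chunk, (i + 1) * chunk if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, ranges))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same value as sum(range(1_000_000))
```

Factorization and matrix multiplication parallelize the same way: independent subproblems are farmed out, and the master combines the partial results.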

**3. **Our second demo shows ongoing
research that illustrates *graphically* the *use of the Beowulf
cluster for numerical optimization*. The standard-bearers in numerical
optimization have been quasi-Newton methods, which use gradient and
function evaluations to direct the search for a minimum; however, the
gradient is often quite expensive to calculate. With the increased
power of parallel environments, there has been renewed interest in
*Generalized Pattern Search* methods for optimization, which use
only function evaluations to direct the optimization process. The
primary advantages of these methods are that they can be easily
parallelized by splitting the function evaluations among processes, and
that they are less sensitive to the "noise" that often leads to
inaccurate gradient evaluations.
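A minimal serial sketch of a coordinate pattern search shows the structure (this is our own illustration, not the demo code; in the parallel version, the independent poll-point evaluations in the inner loop are what get farmed out to the cluster's processors):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Poll +/- step along each coordinate axis; move to an
    improving point if one exists, otherwise shrink the step."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        # Poll set: one trial point per coordinate direction.
        polls = []
        for i in range(n):
            for s in (step, -step):
                p = list(x)
                p[i] += s
                polls.append(p)
        # These evaluations are independent of one another, so on a
        # cluster each can run on a separate processor.
        vals = [f(p) for p in polls]
        best = min(range(len(polls)), key=lambda k: vals[k])
        if vals[best] < fx:
            x, fx = polls[best], vals[best]  # successful poll: move
        else:
            step *= 0.5                      # failed poll: refine mesh
    return x, fx
```

Note that no gradient appears anywhere: only function values at the poll points direct the search, which is what makes the method robust to noisy evaluations.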

This demo illustrates the ability to run parallel function evaluations over the Beowulf cluster. Each example is an animation of the function plot, with the function evaluations plotted as points; the colors of the plotted points identify which processor performed each function evaluation. The Maple worksheet for the second demo is available here and here.

Given the strong Computer Algebra Group at CECM, it was natural to start with Maple. Over the next few months we plan to enhance these tools and to provide similar functionality for MATLAB, both here and within WestGrid; this includes integrating MapleNet.

**Parallel Maple Team:**

Herre Wiersma - hwiersma@cecm.sfu.ca - CECM and CoLab RA

Mason Macklem - msmackle@cecm.sfu.ca - CECM and CoLab RA