Advanced Scientific Visualization II: Multimodal Mathematics

Multimodal mathematical visualization expands the notion of "image" to include 3D, motion, sound, and (eventually) haptics. It is an inherently time-based form and, as such, shares many research questions with immersive, tele-immersive, and large-scale data visualization and with tangible/intangible computing. Methodologies and classification systems developed for the dynamic perception of complex yet deterministic mathematical structures can contribute much to these fields.

1. A visual study for a sound map. In the Viz-server object shown, multiple integer lattices in the plane are rescaled to a single rational lattice. This rational lattice is then rescaled by horizontal and vertical frequencies and multiplied by a flow-time parameter that starts at zero. Under the quotient of the plane by the integer lattice, one obtains a rational lattice on the torus whose "endpoint" flows along the torus knot defined by the (now) longitudinal and meridian frequencies of the scaling. The dominant visual structure is the local proximity, in time, of simple rational alignments in the flow: lattice points converge to, and diverge from, flow-time events. This structure is further accentuated by staggering the longitudinal and meridian radii of the torus as a function of flow time, giving a visual study for a sound map.
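To make the construction concrete, the following is a minimal Python sketch of one plausible reading of the flow described above; it is not the MVS code. A rational lattice (i/n, j/n) is rescaled by frequencies (p, q) and by the flow time t, pushed through the quotient of the plane by the integer lattice, and embedded on a torus whose radii are modulated in t. The function names, the sinusoidal "staggering" of the radii, and the choice p = 3, q = 2 are illustrative assumptions.

import numpy as np

def torus_point(u, v, R=2.0, r=0.5):
    """Embed a point (u, v) of the unit torus R^2/Z^2 into R^3."""
    theta = 2 * np.pi * u   # longitudinal angle
    phi = 2 * np.pi * v     # meridian angle
    return np.array([(R + r * np.cos(phi)) * np.cos(theta),
                     (R + r * np.cos(phi)) * np.sin(theta),
                     r * np.sin(phi)])

def staggered_radii(t):
    """Hypothetical 'staggering' of the torus radii as a function of flow time."""
    return 2.0 + 0.2 * np.sin(2 * np.pi * t), 0.5 + 0.1 * np.cos(2 * np.pi * t)

def knot_endpoint(t, p=3, q=2):
    """Endpoint of the flow at time t: it traces the (p, q) torus knot under the quotient."""
    R, r = staggered_radii(t)
    return torus_point((p * t) % 1.0, (q * t) % 1.0, R, r)

def lattice_flow(t, n=5, p=3, q=2):
    """Rational lattice (i/n, j/n) rescaled by (p, q) and flow time t, mapped onto the torus."""
    R, r = staggered_radii(t)
    return np.array([torus_point((p * t * i / n) % 1.0, (q * t * j / n) % 1.0, R, r)
                     for i in range(n) for j in range(n)])

if __name__ == "__main__":
    for t in np.linspace(0.0, 1.0, 5):
        print(round(t, 2), knot_endpoint(t))

Near simple rational values of t the images of the lattice points cluster and then disperse, which is one way of reading the convergence to and divergence from flow-time events described above.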

2. MVS (Mathematics Visualization System). This work is being done in the MVS environment. MVS is a visualization tool tailored to multimodal mathematical visualization: it is optimized for perceiving abstract structure mapped flexibly to multimodal parameters, it is a time-based form rather than a static viewer, and it dynamically loads the mathematical concept to be visualized. It emphasizes abstract rather than realistic displays, can render even computationally demanding visualizations, and has been designed from the outset for use with immersive or virtual-reality displays. MVS shares some similarities with Visual Python; in particular, it allows mathematicians with programming skills, but without detailed knowledge of 3D graphics, to create visualizations.
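The sketch below is illustrative only and does not use the actual MVS API, which is not documented here. It shows the style of separation described above, in the spirit of Visual Python: the mathematics is written as plain Python that yields abstract samples, and a separate mapping layer decides how those samples become multimodal parameters (here position, colour, and pitch). All class and function names are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Sample:
    """An abstract sample: coordinates and a scalar value, with no graphics code."""
    position: tuple
    value: float

def lissajous(p=3, q=2, steps=200):
    """The 'mathematical concept to be visualized': a parametric curve."""
    for k in range(steps):
        t = k / steps
        yield Sample(position=(math.cos(2 * math.pi * p * t),
                               math.sin(2 * math.pi * q * t),
                               t),
                     value=(p * t) % 1.0)

def map_to_modalities(samples):
    """The environment's job: map abstract values to visual and audio parameters."""
    for s in samples:
        colour = (s.value, 1.0 - s.value, 0.5)   # visual channel (RGB)
        pitch_hz = 220.0 * (1.0 + s.value)       # audio channel (frequency)
        yield s.position, colour, pitch_hz

if __name__ == "__main__":
    for pos, colour, pitch in list(map_to_modalities(lissajous()))[:3]:
        print(pos, colour, pitch)

The point of the separation is that the mathematician only ever writes code like lissajous(); the mapping layer and the renderer are supplied by the environment.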

The core of MVS is written in C++ and uses the SGI Performer 3D graphics toolkit for rendering. The mathematical algorithms that generate the visualizations are written in Python and C. The current version of MVS runs on SGI IRIX, with a Linux version in the works. Development is currently carried out by Robin Johnson, Julie Tolmie, and Hugh Fisher. Further information can be found at http://mvs.sourceforge.net/

3. Collaborative Environments: low end/high end, 2D vs. 3D, immersive vs. remote. MVS objects currently run in numerous remote environments using Viz-server. Julie Tolmie and Robin Johnson (below left) are working in Vancouver in the Fakespace at SFU Surrey, the Cave at NewMIC, and the SFU CoLab; the object demonstrated today is running remotely from the Viz-server at NewMIC. Meredith Walsh, Stephen Barass, and Hugh Fisher (below right) worked in the Wedge at the VELab (Virtual Environments Laboratory), CSIRO, Canberra, earlier this year. Work is also under way to provide MVS with its own collaboration system for use over low-bandwidth networks.

Other remote and local collaborators interested in the data visualization and tangible/intangible computing aspects of this work include the V2 Institute for the Unstable Media (Rotterdam, The Netherlands), Interactive Arts (SFU Surrey), and the School of Contemporary Arts (SFU).


Julie Tolmie - julie_tolmie@sfu.ca - CoLab Member and SFU Surrey Faculty