Experimental Mathematics: Recent Developments and Future Outlook

David H. Bailey, Jonathan M. Borwein

1  Introduction

While extensive usage of high-performance computing has been a staple of other scientific and engineering disciplines for some time, research mathematics is one discipline that has heretofore not benefited to the same degree. Now, however, with sophisticated mathematical computing tools and environments widely available on desktop computers, a growing number of remarkable new mathematical results are being discovered partly or entirely with the aid of these tools. With currently planned improvements in these tools, together with substantial increases expected in raw computing power, due both to Moore's Law and to the expected implementation of these environments on parallel supercomputers, we can expect even more remarkable developments in the years ahead.

This article briefly discusses the nature of mathematical experiment. It then presents a few instances primarily of our own recent computer-aided mathematical discoveries, and sketches the outlook for the future. Additional examples in diverse fields and broader citations to the literature may be found in [] and its references.

2  Preliminaries

The crucial role of high performance computing is now acknowledged throughout the physical, biological and engineering sciences. Numerical experimentation, using increasingly large-scale, three-dimensional simulation programs, is now a staple of fields such as aeronautical and electrical engineering, and research scientists heavily utilize computing technology to collect and analyze data, and to explore the implications of various physical theories.

However, ``pure'' mathematics (and closely allied areas such as theoretical physics) only recently has begun to capitalize on this new technology. This is ironic, because the basic theoretical underpinnings of modern computer technology were set out decades ago by mathematicians such as Alan Turing and John von Neumann. But only in the past decade, with the emergence of powerful mathematical computing tools and environments, together with the growing availability of very fast desktop computers and highly parallel supercomputers, as well as the pervasive presence of the Internet, has this technology reached the level where the research mathematician can enjoy the same degree of intelligent assistance that has graced other technical fields for some time.

This new approach is often termed experimental mathematics, namely the utilization of advanced computing technology to explore mathematical structures, test conjectures and suggest generalizations. And there is now a thriving journal of Experimental Mathematics. In one sense, there is nothing new in this approach - mathematicians have used it for centuries. Gauss once confessed, ``I have the result, but I do not yet know how to get it.'' []. Hadamard declared, ``The object of mathematical rigor is to sanction and legitimize the conquests of intuition, and there was never any other object for it.'' []. In recent times Milnor has stated this philosophy very clearly:

If I can give an abstract proof of something, I'm reasonably happy. But if I can get a concrete, computational proof and actually produce numbers I'm much happier. I'm rather an addict of doing things on computer, because that gives you an explicit criterion of what's going on. I have a visual way of thinking, and I'm happy if I can see a picture of what I'm working with. []

What is really meant by an experiment in the context of mathematics? In Advice to a Young Scientist, Peter Medawar [] identifies four forms of experiment:

1. The Kantian experiment is one such as generating ``the classical non-Euclidean geometries (hyperbolic, elliptic) by replacing Euclid's axiom of parallels (or something equivalent to it) with alternative forms.''

2. The Baconian experiment is a contrived as opposed to a natural happening, it ``is the consequence of `trying things out' or even of merely messing about.''

3. The Aristotelian experiment is a demonstration: ``apply electrodes to a frog's sciatic nerve, and lo, the leg kicks; always precede the presentation of the dog's dinner with the ringing of a bell, and lo, the bell alone will soon make the dog dribble.''

4. The Galilean experiment is ``a critical experiment - one that discriminates between possibilities and, in doing so, either gives us confidence in the view we are taking or makes us think it in need of correction.''

The first three are certainly common in mathematics. However, as discussed in detail in [], the Galilean experiment is the only one of the four forms which can make experimental mathematics a truly serious enterprise.

3  Tools of the Trade

The most obvious development in mathematical computing technology has been the growing availability of powerful symbolic computing tools. Back in the 1970s, when the first symbolic computing tools became available, their limitations were quite evident - in many cases, these programs were unable to handle operations that could be done by hand. In the intervening years these programs, notably the commercial products such as Maple and Mathematica, have greatly improved. While numerous deficiencies remain, they nonetheless routinely and correctly dispatch many operations that are well beyond the level that a human could perform with reasonable effort.

Another recent development that has been key to a number of new discoveries is the emergence of practical integer relation detection algorithms. Let $x = (x_1, x_2, \ldots, x_n)$ be a vector of real or complex numbers. Then $x$ is said to possess an integer relation if there exist integers $a_i$, not all zero, such that $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = 0$. By an integer relation algorithm, we mean a practical computational scheme that can recover the vector of integers $a_i$, if it exists, or can produce bounds within which no integer relation exists. The problem of finding integer relations was studied by numerous mathematicians, including Euclid and Euler. The first general integer relation algorithm was discovered in 1977 by Ferguson and Forcade []. There is a close connection between integer relation detection and finding small vectors in an integer lattice, and thus one common solution to the integer relation problem is to apply the Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm []. At the present time, the most effective scheme for integer relation detection is Ferguson's ``PSLQ'' algorithm [,].
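As a small illustration, the open-source Python library mpmath (an assumption of this sketch, not a tool used in the work described here) provides a pslq routine implementing a variant of this algorithm. The following recovers a known relation and, as it should, finds none for an unrelated vector:

    from mpmath import mp, mpf, sqrt, pi, log, pslq

    mp.dps = 50  # 50-digit working precision

    # (1, sqrt(2), sqrt(8)) satisfies 0*1 + 2*sqrt(2) - 1*sqrt(8) = 0
    print(pslq([mpf(1), sqrt(2), sqrt(8)]))             # [0, 2, -1], up to sign

    # No small-coefficient relation exists among 1, pi and log(2)
    print(pslq([mpf(1), pi, log(2)], maxcoeff=10**6))   # None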

Integer relation detection, as well as a number of other techniques used in modern experimental mathematics, relies heavily on very high precision arithmetic. The most advanced tools for performing high precision arithmetic utilize fast Fourier transforms (FFTs) for multiplication operations. Armed with one of these programs, a researcher can often effortlessly evaluate mathematical constants and functions to precision levels in the many thousands of decimal digits. The software products Maple and Mathematica include relatively complete and well-integrated multiple precision arithmetic facilities, although until very recently they did not utilize FFTs or other accelerated multiplication techniques. One may also use any of several freeware multiprecision software packages [,], and for many purposes tools such as Matlab, MuPAD, or more specialized packages like Pari-GP are excellent.
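To take one classical example of what such precision makes visible: the famous near-integer $e^{\pi\sqrt{163}}$ cannot even be distinguished from an integer in ordinary 16-digit floating-point arithmetic. A two-line sketch in mpmath:

    from mpmath import mp, exp, pi, sqrt

    mp.dps = 40  # double precision would falsely suggest an exact integer
    print(exp(pi * sqrt(163)))
    # 262537412640768743.9999999999992500725972...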

High precision arithmetic, when intelligently used with integer relation detection programs, allows researchers to discover heretofore unknown mathematical identities. It should be emphasized that these numerically discovered ``identities'' are only approximately established. Nevertheless, in the cases we are aware of, the results have been numerically verified to hundreds and in some cases thousands of decimal digits beyond levels that could reasonably be dismissed as numerical artifacts. Thus while these ``identities'' are not firmly established in a formal sense, they are supported by very compelling numerical evidence. After all, which is more compelling: a formal proof that in its full exposition requires hundreds of difficult pages of reasoning, fully understood by only two or three colleagues, or the numerical verification of a conjecture to 100,000-decimal-digit accuracy, subsequently validated by numerous subsidiary computations? In the same way, these tools are often even more useful as a way of excluding the possibility of hoped-for relationships, as in equation () below.

FIGURE 1(a-d): ±1 polynomials (to be set in color)

We would be remiss not to mention the growing power of visualization, especially when married to high performance computation. The pictures in FIGURE 1 represent the zeroes of all polynomials with ±1 coefficients of degree at most 18. One of the most striking features of the picture, its fractal nature excepted, is the appearance of different sized ``holes'' at what transpire to be roots of unity. This observation, which would be very hard to make other than pictorially, led to a detailed and rigorous analysis of the phenomenon and more [,]. The authors were led to this analysis by the interface built for Andrew Odlyzko's seminal online paper [].

One additional tool that has been utilized in a growing number of studies is Sloane and Plouffe's Encyclopedia of Integer Sequences []. As the title indicates, it identifies many integer sequences based on the first few terms. A very powerful on-line version is also available and is a fine example of the changing research paradigm. Another wonderful resource is Steven Finch's ``Favorite Mathematical Constants'', which contains a wealth of frequently updated information, links and references on 125 constants [], such as the hard hexagon constant $\kappa \approx 1.395485972$, for which Zimmermann obtained a minimal polynomial of degree 24 in 1996.¹

In the following, we illustrate this - both new and old - approach to mathematical research using a handful of examples with which we are personally familiar. We will then sketch some future directions in this emerging methodology. We have focussed on the research of our own circle of direct collaborators. We do so for reasons of familiarity, and because we believe it is representative of broad changes in the way mathematics is being done, rather than to claim primacy for our own skills or expertise.

4  A New Formula for Pi

Through the centuries mathematicians have assumed that there is no shortcut to determining just the $n$-th digit of $\pi$. Thus it came as no small surprise when such a scheme was recently discovered []. In particular, this simple algorithm allows one to calculate the $n$-th hexadecimal (or binary) digit of $\pi$ without computing any of the first $n-1$ digits, without the need for multiple-precision arithmetic software, and requiring only a very small amount of memory. The one millionth hex digit of $\pi$ can be computed in this manner on a current-generation personal computer in only about 30 seconds run time.

This scheme is based on the following remarkable formula, whose formal proof involves nothing more sophisticated than freshman calculus:

\[
\pi \;=\; \sum_{k=0}^{\infty} \frac{1}{16^k} \left[ \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right].
\]

This formula was found using months of PSLQ computations, after corresponding but simpler $n$-th digit formulas were identified for several other constants, including $\log(2)$. This is likely the first instance in history that a significant new formula for $\pi$ was discovered by a computer.
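To convey the flavor of the digit-extraction scheme, here is a compact, unoptimized Python sketch (not the production code behind the timings quoted above). It uses integer modular exponentiation to keep the numbers small, exactly as the algorithm prescribes; double precision limits reliable output to a handful of digits:

    def pi_hex_digits(n, ndigits=8):
        """Hex digits of pi starting at position n+1, via the BBP formula."""

        def frac_series(j):
            # fractional part of sum_{k>=0} 16^(n-k) / (8k + j)
            s = 0.0
            for k in range(n + 1):
                d = 8 * k + j
                s = (s + pow(16, n - k, d) / d) % 1.0  # modular exponentiation
            k, t = n + 1, 0.0
            while (term := 16.0 ** (n - k) / (8 * k + j)) > 1e-17:
                t, k = t + term, k + 1                 # a few tiny tail terms
            return (s + t) % 1.0

        x = (4 * frac_series(1) - 2 * frac_series(4)
             - frac_series(5) - frac_series(6)) % 1.0
        digits = ""
        for _ in range(ndigits):
            x *= 16
            digits += "0123456789abcdef"[int(x)]
            x -= int(x)
        return digits

    print(pi_hex_digits(0))        # 243f6a88: pi = 3.243f6a88... in hex
    print(pi_hex_digits(10**6))    # hex digits at the millionth position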

Similar base-2 formulas are given in [,] for a number of other mathematical constants. In [] some base-3 formulas were obtained, including the identity

\[
\pi^2 \;=\; \frac{2}{27} \sum_{k=0}^{\infty} \frac{1}{729^k} \left[
\frac{243}{(12k+1)^2} - \frac{405}{(12k+2)^2} - \frac{81}{(12k+4)^2}
- \frac{27}{(12k+5)^2} - \frac{72}{(12k+6)^2} - \frac{9}{(12k+7)^2}
- \frac{9}{(12k+8)^2} - \frac{5}{(12k+10)^2} + \frac{1}{(12k+11)^2} \right].
\]

In [], it is shown that the question of whether $\pi$, $\log(2)$ and certain other constants are normal can be reduced to a plausible conjecture regarding dynamical iterations of the form $x_0 = 0$,
\[
x_n \;=\; (b\, x_{n-1} + r_n) \bmod 1,
\]

where $b$ is an integer and $r_n = p(n)/q(n)$ is the ratio of two nonzero polynomials with $\deg(p) < \deg(q)$. The conjecture is that these iterates either have a finite set of attractors or else are equidistributed in the unit interval. In particular, it is shown that the question of whether $\pi$ is normal base 16 (and hence base 2) can be reduced to the assertion that the dynamical iteration $x_0 = 0$,
\[
x_n \;=\; \left( 16\, x_{n-1} + \frac{120 n^2 - 89 n + 16}{512 n^4 - 1024 n^3 + 712 n^2 - 206 n + 21} \right) \bmod 1
\]

is equidistributed in [0, 1). There are also connections between the question of normality for certain constants and the theory of linear congruential pseudorandom number generators. All of these results derive from the discovery of the individual digit-calculating formulas mentioned above. For details, see [].
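One can probe the conjecture empirically. A sketch that iterates the map at high working precision (each step multiplies accumulated error by 16, hence the generous precision) and tallies the distribution of $\lfloor 16 x_n \rfloor$:

    from mpmath import mp, mpf, floor

    mp.dps = 1500   # each step loses about log10(16) digits, so start high
    x = mpf(0)
    counts = [0] * 16
    for n in range(1, 1001):
        r = mpf(120*n**2 - 89*n + 16) / (512*n**4 - 1024*n**3 + 712*n**2 - 206*n + 21)
        x = (16 * x + r) % 1
        counts[int(floor(16 * x))] += 1
    print(counts)   # roughly uniform over the 16 bins, as equidistribution predicts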

5  Identities for the Riemann Zeta Function

Another application of computer technology in mathematics is to determine whether or not a given constant $\alpha$, whose value can be computed to high precision, is algebraic of some degree $n$ or less. This can be done by first computing the vector $x = (1, \alpha, \alpha^2, \ldots, \alpha^n)$ to high precision and then applying an integer relation algorithm. If a relation is found for $x$, then this relation vector is precisely the set of integer coefficients of a polynomial satisfied by $\alpha$. Even if no relation is found, integer relation detection programs can produce bounds within which no relation can exist. In fact, exclusions of this type are solidly established by integer relation calculations, whereas ``identities'' discovered in this fashion are only approximately established, as noted above.
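A sketch of this recipe in mpmath, applied to a constant whose minimal polynomial is known in advance:

    from mpmath import mp, sqrt, pslq

    mp.dps = 60
    alpha = sqrt(2) + sqrt(3)            # algebraic of degree 4
    x = [alpha**k for k in range(5)]     # (1, alpha, alpha^2, alpha^3, alpha^4)
    print(pslq(x))   # [1, 0, -10, 0, 1], up to sign: alpha^4 - 10*alpha^2 + 1 = 0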

Consider, for example, the following identities, with that for $\zeta(3)$ due to Apéry [,]:

\[
\zeta(2) \;=\; 3 \sum_{k=1}^{\infty} \frac{1}{k^2 \binom{2k}{k}}, \qquad
\zeta(3) \;=\; \frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^3 \binom{2k}{k}}, \qquad
\zeta(4) \;=\; \frac{36}{17} \sum_{k=1}^{\infty} \frac{1}{k^4 \binom{2k}{k}},
\]
where $\zeta(n) = \sum_k k^{-n}$ is the Riemann zeta function at $n$. These results have led many to hope that
\[
Z_5 \;=\; \zeta(5) \Big/ \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^5 \binom{2k}{k}} \tag{1}
\]
might also be a simple rational or algebraic number. However, computations using PSLQ established, for instance, that if $Z_5$ satisfies a polynomial of degree 25 or less, then the Euclidean norm of the coefficients must exceed $2 \times 10^{37}$. Given these results, there is no ``easy'' identity, and researchers are licensed to investigate the possibility of multi-term identities for $\zeta(5)$.
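Both halves of this story are easy to replay in miniature with mpmath: the classical series can be confirmed to high precision, while pslq, asked for a low-degree polynomial satisfied by $Z_5$, comes back empty-handed (a far weaker exclusion than the published one, but the same in spirit):

    from mpmath import mp, mpf, binomial, zeta, pslq

    mp.dps = 50
    s2 = sum(1 / (mpf(k)**2 * binomial(2*k, k)) for k in range(1, 90))
    s3 = sum((-1)**(k-1) / (mpf(k)**3 * binomial(2*k, k)) for k in range(1, 90))
    print(3*s2 - zeta(2))               # ~ 1e-50
    print(mpf(5)/2 * s3 - zeta(3))      # ~ 1e-50: Apery's series checks out

    s5 = sum((-1)**(k-1) / (mpf(k)**5 * binomial(2*k, k)) for k in range(1, 90))
    Z5 = zeta(5) / s5
    print(pslq([Z5**j for j in range(4)], maxcoeff=10**8))   # None expected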
One recently discovered [], using a PSLQ computation, was the polylogarithmic identity
\[
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^5 \binom{2k}{k}}
\;=\; 2\,\zeta(5)
+ 80 \sum_{k=1}^{\infty} \left[ \frac{1}{(2k)^5} - \frac{L}{(2k)^4} \right] \rho^{2k}
- \frac{4}{3}\,L^5 + \frac{8}{3}\,L^3 \zeta(2) + 4\,L^2 \zeta(3),
\]
where $L = \log(\rho)$ and $\rho = (\sqrt{5}-1)/2$. This illustrates neatly that one can only find a closed form if one knows where to look.

Other earlier evaluations involving the central binomial coefficient suggested general formulas [], which were pursued by a combination of PSLQ and heavy-duty symbolic manipulation. This led, most unexpectedly, to the identity

\[
\sum_{k=0}^{\infty} \zeta(4k+3)\, z^{4k}
\;=\; \sum_{k=1}^{\infty} \frac{1}{k^3 \left(1 - z^4/k^4\right)}
\;=\; \frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^3 \binom{2k}{k} \left(1 - z^4/k^4\right)}
\prod_{m=1}^{k-1} \frac{1 + 4z^4/m^4}{1 - z^4/m^4}.
\]

Experimental analysis of the first ten terms showed that the rightmost series above necessarily had the form

\[
\frac{5}{2} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}\, P_k(z)}{k^3 \binom{2k}{k} \left(1 - z^4/k^4\right)},
\qquad \text{where} \qquad
P_k(z) \;=\; \prod_{j=1}^{k-1} \frac{1 + 4z^4/j^4}{1 - z^4/j^4}.
\]

Also discovered in this process was the intriguing equivalent combinatorial identity

\[
\binom{2n}{n} \;=\; \sum_{k=1}^{n}
\frac{2 n^2 \prod_{i=1}^{n-1} \left(4k^4 + i^4\right)}
{k^2 \prod_{i=1,\, i \neq k}^{n} \left(k^4 - i^4\right)}.
\]

This evaluation was discovered as the result of a serendipitous error in an input to Maple² - the computational equivalent of discovering penicillin after a mistake in a Petri dish.

With the recent proof of this last conjectured identity by Almkvist and Granville [], the above identities have now been rigorously established. But other numerically discovered ``identities'' of this type appear well beyond the reach of current formal proof methods. For example, in 1999 British physicist David Broadhurst used a PSLQ program to recover an explicit expression for $\zeta(20)$ involving 118 terms. The problem required 5,000-digit arithmetic and over six hours of computer run time. The complete solution is given in [].

6  Identification of Multiple Sum Constants

Numerous identities were experimentally discovered in some recent research on multiple sum constants. After computing high-precision numerical values of these constants, a PSLQ program was used to determine if a given constant satisfied an identity of a conjectured form. These efforts produced empirical evaluations and suggested general results []. Later, elegant proofs were found for many of these specific and general results [], using a combination of human intuition and computer-aided symbolic manipulation. Three examples of experimentally discovered results that were subsequently proven are:

\begin{align*}
\sum_{k=1}^{\infty} \left(1 + \frac{1}{2} + \cdots + \frac{1}{k}\right)^{2} (k+1)^{-4}
&= \frac{37}{22680}\,\pi^6 - \zeta^2(3), \\
\sum_{k=1}^{\infty} \left(1 + \frac{1}{2} + \cdots + \frac{1}{k}\right)^{3} (k+1)^{-6}
&= \zeta^3(3) + \frac{197}{24}\,\zeta(9) + \frac{1}{2}\,\pi^2 \zeta(7)
- \frac{11}{120}\,\pi^4 \zeta(5) - \frac{37}{7560}\,\pi^6 \zeta(3), \\
\sum_{k=1}^{\infty} \left(1 - \frac{1}{2} + \cdots + (-1)^{k+1} \frac{1}{k}\right)^{2} (k+1)^{-3}
&= 4\,\mathrm{Li}_5\!\left(\tfrac{1}{2}\right) - \frac{1}{30}\,\ln^5(2) - \frac{17}{32}\,\zeta(5)
- \frac{11}{720}\,\pi^4 \ln(2) \\
&\quad + \frac{7}{4}\,\zeta(3) \ln^2(2) + \frac{1}{18}\,\pi^2 \ln^3(2) - \frac{1}{8}\,\pi^2 \zeta(3),
\end{align*}
where again $\zeta(n) = \sum_{j=1}^{\infty} j^{-n}$ is a value of the Riemann zeta function, and $\mathrm{Li}_n(x) = \sum_{j=1}^{\infty} x^j j^{-n}$ denotes the classical polylogarithm function.
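The first of these is painless to check numerically; a sketch (truncation error decays like $(\ln k)^2/k^4$, so a couple of hundred thousand terms give a dozen or so digits):

    from mpmath import mp, mpf, pi, zeta

    mp.dps = 30
    H, s = mpf(0), mpf(0)
    for k in range(1, 200001):
        H += mpf(1) / k                  # harmonic number H_k
        s += H**2 / mpf(k + 1)**4
    print(s)                                     # truncated sum
    print(mpf(37)/22680 * pi**6 - zeta(3)**2)    # closed form; agrees to ~14 digits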

More generally, one may define multi-dimensional Euler sums (or multiple zeta values) by

\[
\zeta\!\begin{pmatrix} s_1, & s_2, & \ldots, & s_r \\ \sigma_1, & \sigma_2, & \ldots, & \sigma_r \end{pmatrix}
\;:=\; \sum_{k_1 > k_2 > \cdots > k_r > 0}\;
\frac{\sigma_1^{k_1}}{k_1^{s_1}}\, \frac{\sigma_2^{k_2}}{k_2^{s_2}} \cdots \frac{\sigma_r^{k_r}}{k_r^{s_r}},
\]
where $\sigma_j = \pm 1$ are signs and $s_j > 0$ are integers. When all the signs are positive, one has a multiple zeta value. The integer $r$ is the sum's depth and $s_1 + s_2 + \cdots + s_r$ is its weight. These sums have connections with diverse fields such as knot theory, quantum field theory and combinatorics. Constants of this form with alternating signs appear in problems such as computation of the magnetic moment of the electron.

Multi-dimensional Euler sums satisfy many striking identities. The discovery of the more recondite identities was facilitated by the development of Hölder convolution algorithms that permit very high precision numerical values to be rapidly computed. See [] and a computational interface at www.cecm.sfu.ca/projects/ezface+/. One beautiful general identity discovered by Zagier [] in the course of similar research is

\[
\zeta(3,1,3,1,\ldots,3,1) \;=\; \frac{1}{2n+1}\, \zeta(2,2,\ldots,2) \;=\; \frac{2\,\pi^{4n}}{(4n+2)!},
\]
where there are $n$ instances of `3,1' and $2n$ instances of `2' in the arguments to $\zeta(\cdot)$. This has now been proven in [], and the proof, while entirely conventional, was obtained by guided experimentation. A related conjecture, for which overwhelming evidence but no hint of a proof exists, is the ``identity''

\[
8^n\, \zeta\!\begin{pmatrix} 2, & 1, & 2, & 1, & \ldots, & 2, & 1 \\ -1, & 1, & -1, & 1, & \ldots, & -1, & 1 \end{pmatrix}
\;=\; \zeta(2,1,2,1,\ldots,2,1).
\]
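The smallest case of Zagier's identity, $\zeta(3,1) = 2\pi^4/6! = \pi^4/360$, already makes a pleasant experiment. The naive double sum below converges only like $\ln N/N^2$, which is precisely why the Hölder convolution methods mentioned above matter:

    from mpmath import mp, mpf, pi

    mp.dps = 20
    N = 4000
    H, z31 = mpf(0), mpf(0)          # H accumulates H_{m-1}
    for m in range(1, N + 1):
        z31 += H / mpf(m)**3         # zeta(3,1) = sum_{m>k>0} 1/(m^3 k)
        H += mpf(1) / m
    print(z31)             # truncated double sum, good to ~6 digits
    print(pi**4 / 360)     # Zagier's identity with n = 1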

Along this line, Broadhurst conjectured, based on low-degree numerical results, that the dimension of the space of Euler sums with weight $w$ is the Fibonacci number $F_{w+1} = F_w + F_{w-1}$, with $F_1 = F_2 = 1$. In testing this conjecture, complete reductions of all Euler sums to a basis of size $F_{w+1}$ were obtained with PSLQ at weights $w \le 9$. At weights $w = 10$ and $w = 11$ the conjecture was stringently tested by application of PSLQ in more than 600 cases. At weight $w = 11$ such tests involve solving integer relations of size $n = F_{12} + 1 = 145$. In a typical case, each of the 145 constants was computed to more than 5,000-digit accuracy, and a working precision level of 5,000 digits was employed in an advanced ``multi-pair'' PSLQ program. In these problems the ratios of adjacent coefficients in the recovered integer vector usually have special values, such as $11! = 39916800$. These facts, combined with confidence ratios typically on the order of $10^{-300}$ in the detected relations, render remote the chance that these identities are spurious numerical artifacts, and lend substantial support to this conjecture [].

7  Mathematical Computing Meets Parallel Computing

The potential future power of highly parallel computing technology has been underscored in some recent results. Not surprisingly, many of these computations involve the constant $\pi$, underscoring the enduring interest in this most famous of mathematical constants. In 1997 Fabrice Bellard of INRIA used a more efficient formula, similar to the one given in Section 4, programmed on a network of workstations, to compute 150 binary digits of $\pi$ starting at the trillionth position. Not to be outdone, 17-year-old Colin Percival of Simon Fraser University in Canada organized a computation of 80 binary digits of $\pi$ beginning at the five trillionth position, using a network of 25 laboratory computers. He and many others are presently computing binary digits at the quadrillionth position on the web []. As we write, the most recent computational result was Yasumasa Kanada's calculation (September 1999) of the first 206 billion decimal digits of $\pi$. This spectacular computation was made on a Hitachi parallel supercomputer with 128 processors, in little over a day, and employed the Salamin-Brent algorithm [], with a quartically convergent algorithm from [] as an independent check.

Several large-scale parallel integer relation detection computations have also been performed in the past year or two. One arose from the discovery by Broadhurst that

\[
\alpha^{630} - 1 \;=\;
\frac{\left(\alpha^{315}-1\right) \left(\alpha^{210}-1\right) \left(\alpha^{126}-1\right)^2 \left(\alpha^{90}-1\right) \left(\alpha^3-1\right)^3 \left(\alpha^2-1\right)^5 \left(\alpha-1\right)^3}
{\left(\alpha^{35}-1\right) \left(\alpha^{15}-1\right)^2 \left(\alpha^{14}-1\right)^2 \left(\alpha^5-1\right)^6\, \alpha^{68}},
\]
where $\alpha = 1.176280818\ldots$ is the largest real root of Lehmer's polynomial []
\[
0 \;=\; 1 + \alpha - \alpha^3 - \alpha^4 - \alpha^5 - \alpha^6 - \alpha^7 + \alpha^9 + \alpha^{10}.
\]

The above cyclotomic relation was first discovered by a PSLQ computation, and only subsequently proven. Broadhurst then conjectured that there might be integers $a$, $b_j$, $c_k$ such that
\[
\zeta(17) \;=\; \sum_{j=0}^{8} b_j\, \pi^{2j} (\log a)^{17-2j}
\;+\; \sum_{k \in D(S)} c_k\, \mathrm{Li}_{17}\!\left(a^{-k}\right),
\]

where the 115 indices $k$ are drawn from the set $D(S)$ of positive integers that divide at least one element of
\[
S = \{29, 47, 50, 52, 56, 57, 64, 74, 75, 76, 78, 84, 86, 92, 96, 98, 108, 110, 118, 124, 130, 132, 138, 144, 154, 160, 165, 175, 182, 186, 195, 204, 212, 240, 246, 270, 286, 360, 630\}.
\]

Indeed, such a relation was found, using a parallel multi-pair PSLQ program running on a SGI/Cray T3E computer system at Lawrence Berkeley Laboratory. The run employed 50,000-decimal-digit arithmetic and required approximately 44 hours on 32 processors. The resulting integer coefficients are as large as $10^{292}$, but the ``identity'' nonetheless was confirmed to 13,000 digits beyond the level of numerical artifact [].

8  Connections to Quantum Field Theory

In another surprising recent development, David Broadhurst has found, using these methods, that there is an intimate connection between Euler sums and constants resulting from evaluation of Feynman diagrams in quantum field theory [,]. In particular, the renormalization procedure (which removes infinities from the perturbation expansion) involves multiple zeta values. As before, a fruitful theory has emerged, including a large number of both specific and general results [].

Some recent quantum field theory results are even more remarkable. Broadhurst has now shown [], using PSLQ computations, that in each of ten cases with unit or zero mass, the finite part of the scalar 3-loop tetrahedral vacuum Feynman diagram reduces to 4-letter ``words'' that represent iterated integrals in an alphabet of seven ``letters'' comprising the one-forms $\Omega := dx/x$ and $\omega_k := dx/(\lambda^{-k} - x)$, where $\lambda := (1 + \sqrt{-3})/2$ is the primitive sixth root of unity and $k$ runs from 0 to 5. A 4-letter word is a 4-dimensional iterated integral, such as

\[
U \;:=\; \zeta(\Omega^2 \omega_3 \omega_0)
\;=\; \int_0^1 \frac{dx_1}{x_1} \int_0^{x_1} \frac{dx_2}{x_2}
\int_0^{x_2} \frac{dx_3}{-1-x_3} \int_0^{x_3} \frac{dx_4}{1-x_4}
\;=\; \sum_{j > k > 0} \frac{(-1)^{j+k}}{j^3 k}.
\]

There are 74 such four-letter words. Only two of these are primitive terms occurring in the 3-loop Feynman diagrams: U, above, and

\[
V \;:=\; \mathrm{Re}\left[\zeta(\Omega^2 \omega_3 \omega_1)\right]
\;=\; \sum_{j > k > 0} \frac{(-1)^{j} \cos(2\pi k/3)}{j^3 k}.
\]

The remaining terms in the diagrams reduce to products of constants found in Feynman diagrams with fewer loops. These ten cases are shown in Figure 1. In these diagrams, dots indicate particles with nonzero rest mass. The formulas that have been found, using PSLQ, for the corresponding constants are given in Table 2. In the table, the constant $C = \sum_{k > 0} \sin(\pi k/3)/k^2$.



Figure 1: The ten tetrahedral cases

\begin{align*}
V_1 &= 6\,\zeta(3) + 3\,\zeta(4) &
V_{2A} &= 6\,\zeta(3) - 5\,\zeta(4) \\
V_{2N} &= 6\,\zeta(3) - \tfrac{13}{2}\,\zeta(4) - 8U &
V_{3T} &= 6\,\zeta(3) - 9\,\zeta(4) \\
V_{3S} &= 6\,\zeta(3) - \tfrac{11}{2}\,\zeta(4) - 4C^2 &
V_{3L} &= 6\,\zeta(3) - \tfrac{15}{4}\,\zeta(4) - 6C^2 \\
V_{4A} &= 6\,\zeta(3) - \tfrac{77}{12}\,\zeta(4) - 6C^2 &
V_{4N} &= 6\,\zeta(3) - 14\,\zeta(4) - 16U \\
V_5 &= 6\,\zeta(3) - \tfrac{469}{27}\,\zeta(4) + \tfrac{8}{3}\,C^2 - 16V &
V_6 &= 6\,\zeta(3) - 13\,\zeta(4) - 8U - 4C^2
\end{align*}

Table 2: Formulas found by PSLQ for the ten cases of Figure 1
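As a sketch, the primitives $U$, $V$ and $C$ can be evaluated directly from their series (slowly, in the case of $C$), after which any entry of Table 2 is a one-liner:

    from mpmath import mp, mpf, zeta, cos, sin, pi

    mp.dps = 25
    N = 5000
    A = B = U = V = C = mpf(0)
    for j in range(1, N + 1):
        U += (-1)**j * A / mpf(j)**3      # A = sum_{0<k<j} (-1)^k / k
        V += (-1)**j * B / mpf(j)**3      # B = sum_{0<k<j} cos(2*pi*k/3) / k
        A += mpf((-1)**j) / j
        B += cos(2 * pi * j / 3) / j
        C += sin(pi * j / 3) / mpf(j)**2  # converges only like 1/N^2
    print(U, V, C)
    print(6*zeta(3) - mpf(13)/2*zeta(4) - 8*U)   # V_2N from Table 2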

9  A Note of Caution

In spite of the remarkable successes of this methodology, some caution is in order. First of all, the fact that an identity is established to high precision is not a guarantee that it is indeed true. One example is

\[
\sum_{n=1}^{\infty} \frac{\lfloor n \tanh \pi \rfloor}{10^n} \;\approx\; \frac{1}{81},
\]
which holds to 267 digits, yet is not an exact identity, failing in the 268th place. Several other such bogus ``identities'' are exhibited and explained in [].
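Replaying the fraud takes a few lines; with enough working precision, the deviation surfaces exactly where claimed:

    from mpmath import mp, mpf, floor, tanh, pi, nstr

    mp.dps = 300
    t = tanh(pi)
    s = sum(floor(n * t) / mpf(10)**n for n in range(1, 1001))
    print(nstr(s - mpf(1)/81, 5))   # ~ 1e-268: agreement through 267 digits, no further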

More generally speaking, caution must be exercised when extrapolating results true for small $n$ to all $n$. For example,

\[
\int_0^{\infty} \frac{\sin(x)}{x}\, dx \;=\; \frac{\pi}{2}, \qquad
\int_0^{\infty} \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3}\, dx \;=\; \frac{\pi}{2}, \qquad \ldots,
\]
\[
\int_0^{\infty} \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \cdots \frac{\sin(x/13)}{x/13}\, dx \;=\; \frac{\pi}{2},
\]
yet
\[
\int_0^{\infty} \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \cdots \frac{\sin(x/15)}{x/15}\, dx
\;=\; \frac{467807924713440738696537864469}{935615849440640907310521750000}\; \pi.
\]

When this fact was recently observed by a researcher using a mathematical software package, he concluded that there must be a ``bug'' in the software. Not so. What is happening here is that

\[
\int_0^{\infty} \frac{\sin(x)}{x}\, \frac{\sin(x/h_1)}{x/h_1} \cdots \frac{\sin(x/h_n)}{x/h_n}\, dx \;=\; \frac{\pi}{2}
\]
only so long as $1/h_1 + 1/h_2 + \cdots + 1/h_n < 1$. In the above example, $1/3 + 1/5 + \cdots + 1/13 < 1$, but with the addition of $1/15$, the sum exceeds 1 and the identity no longer holds []. Changing the $h_n$ lets this pattern persist indefinitely but still fail in the large.
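The bookkeeping behind the failure is elementary enough to check exactly with rational arithmetic:

    from fractions import Fraction

    s = sum(Fraction(1, h) for h in range(3, 14, 2))   # 1/3 + 1/5 + ... + 1/13
    print(s, s < 1)                    # 43024/45045 True: the pi/2 evaluations persist
    print(s + Fraction(1, 15) < 1)     # False: adding 1/15 pushes the sum past 1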

10  Future Outlook

Computer mathematics software is now becoming a staple of university departments and government research laboratories. Many university departments now offer courses where the usage of one of these software packages is an integral part of the course. But further expansion of these facilities into high schools has been inhibited by a number of factors, including the fairly high cost of such software, the lack of appropriate computer equipment, difficulties in standardizing such coursework at a regional or national level, a paucity of good texts incorporating such tools into a realistic curriculum, a lack of trained teachers, and the many other demands on teachers' time.

But computer hardware continues its downward spiral in cost and its upward spiral in power. It thus appears that within a very few years, moderately powerful symbolic computation facilities can be incorporated into relatively inexpensive hand calculators, at which point it will be much easier to successfully integrate these tools into high school curricula. Thus it seems that we are poised to see a new generation of students coming into university mathematics and science programs who are completely comfortable using such tools. This development is bound to have a profound impact on the future teaching, learning and doing of mathematics.

A likely and fortunate spin-off of this development is that the commercial software vendors who produce these products will likely enjoy a broader financial base, from which they can afford to further enhance their products geared at serious researchers. Future enhancements are likely to include more efficient algorithms, more extensive capabilities mixing numerics and symbolics, more advanced visualization facilities, and software optimized for emerging symmetric multiprocessor and highly parallel, distributed memory computer systems. When combined with expected increases in raw computing power due to Moore's Law - improvements which almost certainly will continue unabated for at least ten years and probably much longer - we conclude that enormously more powerful computer mathematics systems will be available in the future.

We only now are beginning to experience and comprehend the potential impact of computer mathematics tools on mathematical research. In ten more years, a new generation of computer-literate mathematicians, armed with significantly improved software on prodigiously powerful computer systems, are bound to make discoveries in mathematics that we can only dream of at the present time. Will computer mathematics eventually replace, in near entirety, the solely human form of research, typified by Andrew Wiles' recent proof of Fermat's Last Theorem? Will computer mathematics systems eventually achieve such intelligence that they discover deep new mathematical results, largely or entirely without human assistance? Will new computer-based mathematical discovery techniques enable mathematicians to explore the realm, proved to exist by Gödel, Chaitin and others, that is fundamentally beyond the limits of formal reasoning?

11  Conclusion

We have shown a small but, we hope, convincing selection of what the present allows and what the future holds in store. We have hardly mentioned the growing ubiquity of web-based computation, or of pervasive access to massive databases, both public domain and commercial. Neither have we raised human/computer interface or intellectual property issues, nor the myriad other not-purely-technical issues these developments raise.

Whatever the outcome of these developments, we are still persuaded that mathematics is and will remain a uniquely human undertaking. One could even argue that these developments confirm the fundamentally human nature of mathematics. Indeed, Reuben Hersh's arguments [] for a humanist philosophy of mathematics, as paraphrased below, become more convincing in our setting:

  1. Mathematics is human. It is part of and fits into human culture. It does not match Frege's concept of an abstract, timeless, tenseless, objective reality.

  2. Mathematical knowledge is fallible. As in science, mathematics can advance by making mistakes and then correcting or even re-correcting them. The ``fallibilism'' of mathematics is brilliantly argued in Lakatos' Proofs and Refutations [].

  3. There are different versions of proof or rigor. Standards of rigor can vary depending on time, place, and other things. The use of computers in formal proofs, exemplified by the computer-assisted proof of the four color theorem in 1977, is just one example of an emerging nontraditional standard of rigor.

  4. Empirical evidence, numerical experimentation and probabilistic proof all can help us decide what to believe in mathematics. Aristotelian logic isn't necessarily always the best way of deciding.

  5. Mathematical objects are a special variety of a social-cultural-historical object. Contrary to the assertions of certain post-modern detractors, mathematics cannot be dismissed as merely a new form of literature or religion. Nevertheless, many mathematical objects can be seen as shared ideas, like Moby Dick in literature, or the Immaculate Conception in religion.

Certainly the recognition that ``quasi-intuitive'' analogies can be used to gain insight in mathematics can assist in the learning of mathematics. And honest mathematicians will acknowledge their role in discovery as well.

We look forward to what the future will bring.

References

[]
G. Almkvist and A. Granville, ``Borwein and Bradley's Apéry-like Formulae for $\zeta(4n+3)$'', Experimental Mathematics, 8 (1999), 197-204.

[]
Isaac Asimov and J. A. Shulman, ed., Isaac Asimov's Book of Science and Nature Quotations, Weidenfeld and Nicolson, New York, 1988, p. 115.

[]
David H. Bailey, ``A Fortran-90 Based Multiprecision System'', ACM Transactions on Mathematical Software, 21 (1995), 379-387. Available from http://www.nersc.gov/~dhbailey.

[]
David H. Bailey, Jonathan M. Borwein and Roland Girgensohn, ``Experimental Evaluation of Euler Sums'', Experimental Mathematics, 4 (1994), 17-30.

[]
David H. Bailey, Peter B. Borwein and Simon Plouffe, ``On the Rapid Computation of Various Polylogarithmic Constants'', Mathematics of Computation, 66 (1997), 903-913.

[]
David H. Bailey and David Broadhurst, ``Parallel Integer Relation Detection: Techniques and Applications''. Available from http://www.nersc.gov/~dhbailey.

[]
David H. Bailey and David Broadhurst, ``A Seventeenth-Order Polylogarithm Ladder''. Available from http://www.nersc.gov/~dhbailey.

[]
David H. Bailey and Richard E. Crandall, ``On the Random Character of Fundamental Constant Expansions'', manuscript (2000). Available from http://www.nersc.gov/~dhbailey.

[]
David Borwein and Jonathan M. Borwein, ``Some Remarkable Properties of Sinc and Related Integrals'', CECM Preprint 99:142, available from http://wayback.cecm.sfu.ca/preprints.

[]
Jonathan M. Borwein and Peter B. Borwein, Pi and the AGM: A Study in Analytic Number Theory and Computational Complexity, John Wiley and Sons, New York, 1987.

[]
J. M. Borwein and P. B. Borwein, ``Strange Series and High Precision Fraud'', American Mathematical Monthly, 99 (1992), 622-640.

[]
J.M. Borwein and D.M. Bradley, ``Empirically Determined Apéry-like Formulae for $\zeta(4n+3)$'', Experimental Mathematics, 6 (1997), 181-194.

[]
Jonathan M. Borwein, David M. Bradley, David J. Broadhurst and Peter Lisonek, ``Special Values of Multidimensional Polylogarithms'', Trans. Amer. Math. Soc., in press. CECM Preprint 98:106, available from http://wayback.cecm.sfu.ca/preprints.

[]
Jonathan M. Borwein, David J. Broadhurst and Joel Kamnitzer, ``Central Binomial Sums and Multiple Clausen Values'', preprint, November 1999. CECM Preprint 99:137, available from http://wayback.cecm.sfu.ca/preprints.

[]
J.M. Borwein, P.B. Borwein, R. Girgensohn and S. Parnes, ``Making Sense of Experimental Mathematics,'' Mathematical Intelligencer, 18, Number 4 (Fall 1996), 12-18.

[]
Jonathan M. Borwein and Robert Corless, ``Emerging Tools for Experimental Mathematics'', MAA Monthly, 106 (1999), 889-909. CECM Preprint 98:110, available from http://wayback.cecm.sfu.ca/preprints.

[]
Peter B. Borwein and Christopher Pinner, ``Polynomials with {0,+1,-1} Coefficients and Root Close to a Given Point'', Canadian J. Mathematics, 49 (1998), 887-915.

[]
David J. Broadhurst, John A. Gracey and Dirk Kreimer, ``Beyond the Triangle and Uniqueness Relations: Non-zeta Counterterms at Large N from Positive Knots'', Zeitschrift für Physik, C75 (1997), 559-574.

[]
David J. Broadhurst and Dirk Kreimer, ``Association of Multiple Zeta Values with Positive Knots via Feynman Diagrams up to 9 Loops'', Physics Letters, B383 (1997), 403-412.

[]
David J. Broadhurst, ``Massive 3-loop Feynman Diagrams Reducible to SC* Primitives of Algebras of the Sixth Root of Unity'', preprint, March 1998, to appear in European Physical Journal C. Available from http://xxx.lanl.gov/abs/hep-th/9803091 .

[]
David J. Broadhurst, ``Polylogarithmic Ladders, Hypergeometric Series and the Ten Millionth Digits of z(3) and z(5)'', preprint, March 1998. Available from http://xxx.lanl.gov/abs/math/9803067.

[]
Sid Chatterjee and Herman Harjono, ``MPFUN++: A Multiple Precision Floating Point Computation Package in C++'', University of North Carolina, Sept. 1998. Available from http://www.cs.unc.edu/Research/HARPOON/mpfun++.

[]
Helaman R. P. Ferguson, David H. Bailey and Stephen Arno, ``Analysis of PSLQ, An Integer Relation Finding Algorithm'', Mathematics of Computation, 68 (1999), 351-369.

[]
Helaman R. P. Ferguson and Rodney W. Forcade, ``Generalization of the Euclidean Algorithm for Real Numbers to All Dimensions Higher Than Two'', Bulletin of the American Mathematical Society, 1 (1979), 912-914.

[]
Steven Finch, ``Favorite Mathematical Constants'', http://www.mathsoft.com/asolve/constant/constant.html.

[]
Reuben Hersh, ``Fresh Breezes in the Philosophy of Mathematics'', American Mathematical Monthly, August-September 1995, 589-594.

[]
Loki Jörgenson, ``Zeros of Polynomials with Constrained Roots'', http://www.cecm.sfu.ca/personal/loki/Projects/Roots/Book.

[]
Imre Lakatos, Proofs and Refutations: The Logic of Mathematical Discovery, Cambridge University Press, 1977.

[]
Derrick H. Lehmer, ``Factorization of Certain Cyclotomic Functions'', Annals of Mathematics, 34 (1933), 461-479.

[]
A. K. Lenstra, H. W. Lenstra, Jr. and L. Lovász, ``Factoring Polynomials with Rational Coefficients'', Mathematische Annalen, 261 (1982), 515-534.

[]
P. B. Medawar, Advice to a Young Scientist, Harper Colophon, New York, 1981.

[]
Andrew Odlyzko, ``Zeros of polynomials with 0,1 coefficients", http://wayback.cecm.sfu.ca/organics/authors/odlyzko/ and /organics/papers/odlyzko/support/polyform.html.

[]
Colin Percival, ``Pihex: A Distributed Effort To Calculate Pi'', http://wayback.cecm.sfu.ca/projects/pihex/.

[]
George Polya, Mathematical Discovery: On Understanding, Learning, and Teaching Problem Solving, Combined Edition, John Wiley and Sons, New York, 1981, p. 129.

[]
Ed Regis, Who Got Einstein's Office?, Addison-Wesley, 1986, p. 78.

[]
N.J.A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995. The on-line version can be accessed at http://www.research.att.com/~njas/sequences/Seis.html.

[]
Don Zagier, Values of Zeta Functions and Their Applications, First European Congress of Mathematics, Volume II, Birkhäuser, Boston, 1994, 497-512.


Footnotes:

1 See http://pauillac.inria.fr/algo/bsolve/constant/square/square.html.

2 Typing `infty' in place of `infinity' caused the upper limit to be treated as a formal variable, revealing that the program had an algorithm for that case.

