Photo by Ken Levinson 
Robert H. Lewis
Associate Professor of Mathematics
For about twenty years, I have worked on the borderline between mathematics and computer science. I’ve become an applied mathematician, but in a sense that didn’t exist thirty years ago.
I think the general public does not have a very good idea of what mathematicians do, so let me start with some generalities. Before roughly 1985, almost all mathematicians in academic positions in the United States and Europe were pure mathematicians. They spent their careers discovering and proving theorems with no regard to potential applications. Most people will recall the Pythagorean Theorem from geometry, which states that a² + b² − c² = 0 for the sides of a right triangle. To a mathematician, there is no interest in working out numerical examples; the interest is in proving that the theorem is always correct. The proof is a rigorous logical argument; arithmetic is irrelevant. Mathematics encompasses far more than geometry, but nonetheless, virtually all mathematicians in academia were engaged in the discovery and proof of new theorems. Their results often did turn out to be of interest to others. For example, the recent movie A Beautiful Mind concerns John Nash, whose work influenced economists.
There were (and are) also applied mathematicians, but they tended to work in industry or the military. They used advanced mathematics to help engineers develop better products.
These traditional categories were seriously challenged around 1977 by the proof of the Four Color Theorem. It says that no matter how complicated your map (think of a map of the United States), four colors are enough to shade the regions (states) so that any two that border have different colors. The proof was not just the expected kind of logical argument. Its authors designed an algorithm that could be run by a computer to check several thousand complex cases that no person could possibly do by hand. It took the computer hundreds of hours. To some, this was heresy; to others, the opening of a new realm.
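The computer check in the actual proof is vastly more elaborate, but the basic object, a graph of bordering regions, is easy to model. Here is a toy sketch (the handful of states and the proposed shading are illustrative, not part of the proof) that verifies a coloring is proper, meaning no two bordering regions share a color:

```python
# A few U.S. states and the neighbors they border: a small graph.
borders = {
    "Nevada": ["California", "Oregon", "Arizona", "Utah"],
    "California": ["Nevada", "Oregon", "Arizona"],
    "Oregon": ["Nevada", "California"],
    "Arizona": ["Nevada", "California", "Utah"],
    "Utah": ["Nevada", "Arizona"],
}

# A proposed shading (here three colors happen to suffice).
coloring = {
    "Nevada": "red",
    "California": "green",
    "Oregon": "blue",
    "Arizona": "blue",
    "Utah": "green",
}

def is_proper(borders, coloring):
    """True if no two bordering regions share a color."""
    return all(coloring[a] != coloring[b]
               for a in borders for b in borders[a])

print(is_proper(borders, coloring))  # True
```

The theorem's content is that, however the map is drawn, some proper shading with at most four colors always exists; the computer's role was to rule out every potential counterexample configuration.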
Everyone sensed that something new was happening in the mathematical world. The computer was not just doing a lot of arithmetic. The software encoded a mathematical structure called a graph and checked certain symbolic properties of it.
About that time, I started out as a pure mathematician working in algebraic topology. But I was soon attracted to computer science and obtained a degree in it. I am not interested in proving theorems with software, but rather in solving problems that involve algebraic symbolic manipulation (as opposed to mere arithmetic). I have written a cutting-edge computer algebra system to do this, called Fermat (on the Web at: www.bway.net/~lewis). In doing so I didn’t prove any theorems, but designed some pretty complex algorithms.
Here of late, I mostly work with systems of polynomials. Most people probably remember polynomials from high school mathematics. We encountered above a² + b² − c², which is a polynomial in three variables with three terms (or pieces) of degree two (the highest exponent). For another example, x²y³ + z² − 3xyz − x² has four terms, three variables, and degree five, because that is the sum of the exponents in the first term. An amazing number and variety of “real world” problems can be reduced to systems of polynomials. I have worked on problems arising in pure mathematics, computer vision and robotics, medical research, and other topics.
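The degree bookkeeping can be made concrete. In this sketch a polynomial is stored as a map from exponent tuples to coefficients, one common representation inside computer algebra systems (Fermat's own internals are its own affair):

```python
# x^2*y^3 + z^2 - 3xyz - x^2, stored as {(exp_x, exp_y, exp_z): coefficient}.
poly = {
    (2, 3, 0): 1,   # x^2 y^3
    (0, 0, 2): 1,   # z^2
    (1, 1, 1): -3,  # -3xyz
    (2, 0, 0): -1,  # -x^2
}

def total_degree(poly):
    """Degree of the polynomial: the largest sum of exponents over all terms."""
    return max(sum(exponents) for exponents in poly)

print(len(poly), total_degree(poly))  # 4 terms, degree 5
```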
My latest project results from the convergence of computational chemistry, flexibility of three-dimensional objects, and systems of polynomials.
Basically, living things are made of proteins, and proteins work because they can fold up. Molecules can fold because they are flexible. Simple examples are easily built from a few plastic balls and rods, as in Chemistry 101. In 1812, the French mathematician Augustin-Louis Cauchy considered flexibility of three-dimensional structures (think of a geodesic dome) where each joint can pivot or hinge. He proved that if the structure is convex (no indentations) it must be rigid. People came to believe that there were no flexible structures at all. Astonishingly, in 1978 a non-convex one was found. It is very enlightening to hold a model of one of these and feel it move.
My colleagues and I have a new approach to understanding flexibility, using symbolic computation instead of numerical calculation. We describe the geometry of a certain object with six polynomial equations. We solve this system of six equations to derive what is called a resultant, which turns out to be one polynomial with about two hundred thousand terms, in twelve variables, of degree thirty-one. As an ordinary text file, it occupies three megabytes of storage space on a computer hard drive (by comparison, the plain text version of the entire King James Bible occupies just four megabytes). Of course, there is no interest whatever in actually looking at all these terms. Furthermore, it’s not enough just to have this monster polynomial. I developed an algorithm to detect flexibility in the object by examining the resultant. We hope to extend this to more situations.
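Our resultant comes from eliminating variables in six multivariate equations, a computation of the kind Fermat is built for. To give a feel for what a resultant is, here is a toy single-variable sketch using the classical Sylvester matrix: two polynomials share a root exactly when the determinant of this matrix, their resultant, is zero. (The example polynomials are illustrative, not drawn from our flexibility system.)

```python
def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists,
    highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of p's coefficients
        rows.append([0] * i + p + [0] * (size - m - 1 - i))
    for i in range(m):  # m shifted copies of q's coefficients
        rows.append([0] * i + q + [0] * (size - n - 1 - i))
    return rows

def det(mat):
    """Determinant by Laplace expansion; fine for tiny matrices."""
    if len(mat) == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j] *
               det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

# x^2 - 3x + 2 and x - 1 share the root x = 1, so the resultant is zero.
print(det(sylvester([1, -3, 2], [1, -1])))  # 0
# x^2 + 1 and x - 1 share no root, so the resultant is nonzero.
print(det(sylvester([1, 0, 1], [1, -1])))   # 2
```

Our actual resultant plays the same role at an entirely different scale: its two hundred thousand terms encode how the object's geometry constrains its motion.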
The concepts here are not beyond the grasp of undergraduate mathematics majors, and we hope to increase their participation in research at Fordham.
Readers can find my essay on mathematics education at: www.fordham.edu/mathematics/whatmath.html
The “Sapientia et Doctrina” section of Inside Fordham features first-person columns written by members of the Fordham Jesuit community and University faculty. Our Jesuit correspondents offer essays on teaching and learning from a Jesuit perspective, or focus on some aspect of scholarship as seen through the lens of Jesuit tradition. Faculty correspondents write on an academic topic: their own academic specialty or current research; or an aspect of scholarship, written for the lay person. The two types of columns alternate by issue.
For more information please contact the editor, Bob Howe, at (212) 636-6538, howe@fordham.edu.
