Prime numbers lie at the core of some of the oldest and most perplexing questions in mathematics. Evenly divisible only by themselves and 1, they are the building blocks of integers. In recent decades, prime numbers have moved beyond their starring roles in mathematical research (SN: 5/25/02, p. 324: Prime Effort: Powerful conjecture may be proved) to become prized commodities, serving as elements in a cryptographic scheme widely used to keep digital messages secret (SN: 2/6/99, p. 95).
Although there are infinitely many primes, they are also relatively scarce and rather haphazardly scattered among the integers. Indeed, of the first 25 billion whole numbers, only 1,091,987,405–or about 4 percent–are primes, and the proportion of primes decreases as the numbers get bigger.
The absence of any readily discernible pattern in their distribution makes identifying primes a tricky proposition. Is 687,532,127 a prime? There’s no way to tell simply by looking. Clearly, the number isn’t evenly divisible by 2, nor by any other even number. Is it divisible by 3? By 5? By 7? By 26,203? In fact, 687,532,127 has no divisors other than 1 and itself, so it’s a prime.
This process of elimination by trial division is the idea behind the prime-detecting sieve of Eratosthenes, named for a Greek mathematician who lived in the third century B.C. The sieve of Eratosthenes represents a systematic way of checking whether a number is a prime: divide the given number by every smaller prime, starting with 2 and going up to the square root of the target number. If none of those primes divides evenly into the given number, the target is a prime. In the case of huge numbers, however, trial division is both tedious and time-consuming.
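For readers who want to see the procedure spelled out, here is a minimal sketch of trial division in Python (the code and the name is_prime are our illustration, not anyone’s production software):

```python
def is_prime(n):
    """Test n for primality by dividing by candidates up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2          # 2 is the only even prime
    d = 3
    while d * d <= n:          # the smallest factor cannot exceed sqrt(n)
        if n % d == 0:
            return False       # found a divisor, so n is composite
        d += 2                 # skip the even candidates
    return True

print(is_prime(687_532_127))   # True, confirming the article's example
```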
Even so, the need to undertake such primal analysis has mushroomed because of the increasing importance of cryptography. One widely used cryptographic scheme is based on the notion that, whereas it’s relatively easy to multiply together large primes, it’s considerably more difficult to factor the resulting number and retrieve the original primes. Yet that harder operation is just the one an eavesdropper would need to perform to decode the encrypted messages. This scheme requires a ready-to-use supply of large primes, so it has encouraged mathematicians and computer scientists to seek increasingly efficient ways of identifying prime numbers.
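A toy Python sketch of the asymmetry the scheme exploits (the primes below are our own illustrative choices and far too small for real cryptography):

```python
# Easy direction: multiplying two primes takes a single operation.
p, q = 99_991, 104_729          # two known primes, chosen for illustration
n = p * q
print(n)                        # 10471957439

# Hard direction: recovering the primes from n by trial division
# takes roughly 50,000 divisions even for this small example.
d = 3
while n % d != 0:
    d += 2
print(d, n // d)                # 99991 104729
```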
Now, a team from the Indian Institute of Technology (IIT) in Kanpur, India, has devised a novel approach for detecting primes. The new technique solves a long-standing problem in number theory and computer science, providing a long-sought improvement in the theoretical efficiency of prime-detecting algorithms.
“It is a lovely result and gives the field of algorithmic number theory a shot in the arm,” says number theorist Carl Pomerance of Bell Laboratories in Murray Hill, N.J. “I was surprised, especially with the simplicity of the method. My hat is off to them.”
The IIT team of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena announced its findings in August. Mathematicians quickly confirmed the validity of the results, and some researchers have already made improvements, offering hope that this novel approach eventually may be turned into a practical, speedy method for finding primes.
Time trials
With computers now doing all the heavy lifting in testing for primes, much recent research has focused on the efficiency of competing sets of instructions, or algorithms. This efficiency is typically measured in terms of how the demand on computer resources, such as time or memory, goes up as the size of the number to be tested increases.
The efficiency of the venerable sieve of Eratosthenes, for example, is related to the number of trial divisions required to test a given integer for primality. The trouble is that the number of trial divisions grows exponentially with the number of digits in the target integer. So, although it’s a practical way to test integers smaller than 10 billion, which have at most 10 digits, it fails miserably for integers of 25 digits or more.
Today’s heavily used, computer-based procedures for detecting primes rely instead on a mathematical shortcut discovered in the 17th century by jurist and mathematician Pierre de Fermat. His so-called little theorem expresses a remarkable relationship involving primes: If an integer p is a prime number, then for all integers a, dividing both a^p and a by p gives the same remainder. For example, if p = 7 (a prime) and a = 9, dividing 9^7 (which equals 4,782,969) by 7 gives a remainder of 2, as does dividing 9 by 7.
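The example is easy to verify with Python’s built-in modular exponentiation (our snippet, not part of any published algorithm):

```python
p, a = 7, 9
print(pow(a, p, p))  # 2: the remainder of 9**7 = 4,782,969 divided by 7
print(a % p)         # 2: the remainder of 9 divided by 7, matching the theorem
```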
Any integer p that fails this test isn’t a prime; it’s a composite number with factors other than 1 and itself. However, a few composite numbers also pass the test, so further steps are needed to weed them out to ensure that the target truly is a prime (SN: 3/6/82, p. 158).
For practical purposes, it isn’t usually necessary to check specifically for these fake primes. The idea behind today’s most efficient Fermat-based algorithms is to test the integer in question using a randomly chosen value for a. If the integer passes the Fermat test, there’s only a small probability that it’s actually not a prime. If the target integer is tested again with another randomly chosen value of a and still passes, the probability is even smaller that it’s not a prime. The more times it passes, the more likely it is to be an authentic prime. By repeating the test enough times, it’s possible to reduce the probability of error to nearly zero. That’s good enough for the sorts of primes needed for a prime-based cryptographic system.
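Here is a minimal sketch of that strategy in Python (the function name is ours; real systems typically use a refinement such as the Miller-Rabin test, which also weeds out the rare composites that fool the plain Fermat test):

```python
import random

def is_probable_prime(n, trials=20):
    """Fermat test with random bases: certain on failure, probable on success."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)  # pick a random base a
        if pow(a, n, n) != a:           # a**n and a must leave the same remainder
            return False                # the theorem failed: n is surely composite
    return True                         # passed every round: almost surely prime
```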
Elusive efficiency
What had eluded researchers until now, however, was a prime-detecting method that not only always yields a correct answer but also meets another important criterion: efficiency of calculation. Instead of an algorithm whose running time grows exponentially with the target number’s size, computer scientists wanted one whose running time grows more slowly, say, at a rate proportional to the number’s size or, more realistically, to a fixed power of that size.
For example, suppose that the number of digits in a target integer is N and the number of digits is doubled to 2N. With exponential growth, the time required to test the number would increase from b^N to b^(2N), where b is some constant related to the prime-testing algorithm. If, instead, the running time grows as a power of the number’s size, the time would increase at a more sedate pace, from N^c to (2N)^c, where the exponent c is a constant. In the latter case, researchers describe the algorithm as taking no more than polynomial time. A polynomial is an algebraic expression containing powers of one or more variables.
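A quick numerical illustration (the base 2 and the exponent 3 are arbitrary values chosen for the demonstration):

```python
for N in (10, 20, 40, 80):
    # the exponential cost 2**N quickly dwarfs the polynomial cost N**3
    print(N, 2**N, N**3)
```

Doubling N multiplies the polynomial cost by only 2^3 = 8, while it squares the exponential cost.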
To find a prime-detecting algorithm that could do the job in polynomial time, researchers had explored a variety of approaches–some based on highly sophisticated mathematics–but with limited success. In 1999, Agrawal decided to try a relatively simple approach that he noticed had been overlooked by others. He enlisted the aid of Kayal and Saxena, who were undergraduate students at the time.
Early computer simulations were encouraging, but only this past summer did the team finally work out the complete method and the mathematical proof establishing its theoretical efficiency.
In effect, the IIT team found a new, generalized version of Fermat’s little theorem, one in which the integers a^p and a are replaced by the polynomial expressions (x − a)^p and (x^p − a). Using this foundation, they formulated an algorithm that they could prove had a polynomial running time proportional to N^12.
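For p prime, the generalized statement says that (x − a)^p and x^p − a leave the same remainder when every coefficient is reduced modulo p. A rough Python sketch of checking that congruence by brute force (this demonstrates only the identity, not the AKS algorithm itself, which also reduces by a small auxiliary polynomial to stay fast):

```python
def poly_pow_mod(a, p):
    """Coefficients of (x - a)**p, lowest degree first, reduced modulo p."""
    coeffs = [1]                                  # start with the polynomial 1
    for _ in range(p):
        # multiply by (x - a): shift for the x term, scale by -a for the rest
        coeffs = [(shifted - a * current) % p
                  for shifted, current in zip([0] + coeffs, coeffs + [0])]
    return coeffs

p, a = 7, 9
lhs = poly_pow_mod(a, p)                  # (x - 9)**7 with coefficients mod 7
rhs = [(-a) % p] + [0] * (p - 1) + [1]    # x**7 - 9 with coefficients mod 7
print(lhs == rhs)                         # True: the congruence holds for p = 7
```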
Primality-testing methods had been getting quicker, but the algorithms had been getting more complex, says Chris K. Caldwell of the University of Tennessee at Martin. “In contrast, [the Agrawal-Kayal-Saxena algorithm] has a shocking simplicity. It’s an algorithm that most any programmer can follow,” he says.
Practical concerns
Achieving a theoretical breakthrough is one thing. Putting it into practice for everyday use is another matter entirely. With a running time proportional to the 12th power of the number of digits, the new algorithm is still painfully slow for relatively small numbers. As it turns out, Caldwell says, “it is far slower than trial division in practice.”
However, “one has to be extremely careful when pronouncing something practical or not,” warns Richard E. Crandall of the Center for Advanced Computation at Reed College in Portland, Ore.
Indeed, experts who have carefully studied the Agrawal-Kayal-Saxena algorithm have already made improvements. One such variant was developed by Hendrik W. Lenstra of the University of California, Berkeley. Crandall recently demonstrated that a computer programmed with this variant could crack a 30-digit prime in about a day instead of the several years required for the original IIT algorithm.
That’s a significant improvement in performance but still far from the speed required to identify, say, 1,000-digit primes.
There’s hope, however. A crucial step in Lenstra’s variant algorithm can be readily divided up among a large number of computers. A project in which volunteers’ computers around the world shared in the calculations could easily handle a 1,000-decimal-digit potential prime in about a year, Crandall estimates. The SETI@home project, in which more than a million computers worldwide have participated in sifting through radio-telescope signals for signs of extraterrestrial life, is one well-known example of such an effort (SN: 3/4/00, p. 152: Great Computations).
Additional improvements in the Agrawal-Kayal-Saxena prime-detecting algorithm may be ahead. “Give number theorists a year with this algorithm, and it should be much clearer what its future is,” Caldwell says.
Whatever happens on the practical front, the IIT work stands as a major theoretical advance. Moreover, because the team solved the problem in such an elegant, unexpected manner, mathematicians are now wondering what else they may have overlooked in other mathematical ventures.