Division and Its Discontents

February 23, 2010

Professor Steven Strogatz of Cornell has a new post, “Division and Its Discontents”, available on his Opinionator blog at the New York Times Web site, continuing his series on some of the basic elements of mathematics.  In this installment, he talks in more detail about a theme I mentioned in my last post about his column.

There’s a narrative line that runs through arithmetic, but many of us missed it in the haze of long division and common denominators. It’s the story of the quest for ever-more versatile numbers.

We saw how introducing the idea of subtraction led naturally to the development of negative numbers, which obey the same rules of arithmetic as the positive integers, our natural starting point in their role as counting numbers.  In this column, Prof. Strogatz talks about the operation of division, and how it leads us inexorably to the use of fractions (that is, rational numbers).

Although I have already given away the main thrust of the “plot”, in some sense, I do urge you to read his column.  He has some lovely examples of some of the ways people get themselves in a muddle when thinking about fractions, one from the movie My Left Foot, and one from a conversation Mr. George Vaccaro has with representatives of Verizon Wireless.

Later, he talks a bit about the representation of fractions as decimals, and the fact that if

1/3 = 0.3333…

then, multiplying both sides by 3, we get

1 = 0.9999…

Understanding why this is true will take one a long way in understanding the properties of numbers and their representations.
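One standard way to see why (a textbook argument, not spelled out in the column itself) is to treat 0.9999… as a geometric series:

0.9999… = 9/10 + 9/100 + 9/1000 + …
        = (9/10) / (1 − 1/10)
        = 1

Each partial sum falls short of 1 by exactly 1/10^n, and that shortfall can be made as small as we like; the infinite decimal is defined to be the limit, which is 1.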

Finally, Prof. Strogatz introduces our friends the irrational numbers.  I motivated their introduction by a geometric argument: what is the length of the diagonal of a unit square?  He starts off with a number that is irrational by construction: its decimal representation neither terminates nor repeats.  (Think of something like 0.101001000100001…, where each run of zeros is one digit longer than the last, so no repeating block can ever appear.)

All of these numbers, remember, are required to behave themselves in certain specific ways, so that the rules of arithmetic still hold.  And even though we have wandered fairly far from the basic idea of counting rocks or birds or apples, we still find our numbers relevant to the real world.

How Random is That?

February 23, 2010

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
John von Neumann

The Science Daily Web site has an article describing a new approach to generating random numbers.  The technique itself is interesting, and it potentially has some important applications.  Unfortunately, the article tends to oversell the potential benefits.

A new approach to generating truly random numbers could lead to improved Internet security and better weather forecasts, according to researchers writing in the International Journal of Critical Computer-Based Systems.

Traditionally, people seeking a stream of random numbers have taken one of two broad approaches to getting them:

  1. Some physical process, which is believed to be random (such as radioactive decay, thermal noise, or radio noise), is measured, and then (possibly) adjusted to remove any measurement bias.  Of course, particularly if only a small set of numbers is wanted, lower-tech methods, like rolling dice or flipping coins, can be used.
  2. An algorithmic method is used to generate a sequence of numbers that have a specified set of desired statistical properties.  Note that, in this case, the numbers are not “truly” random, since the algorithm is by definition deterministic.
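To make the second point concrete, here is a minimal sketch (my own illustration, not from the article) of a linear congruential generator, one of the simplest algorithmic methods.  Given the same seed, it always reproduces exactly the same sequence:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    The constants are the widely used Numerical Recipes parameters."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen1 = lcg(seed=42)
gen2 = lcg(seed=42)
seq1 = [next(gen1) for _ in range(5)]
seq2 = [next(gen2) for _ in range(5)]
print(seq1 == seq2)  # prints True: same seed, same "random" sequence
```

The output passes simple statistical tests, but it is entirely deterministic: anyone who knows the constants and the seed can reproduce every future value.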

What the researchers, from the University of Hagen and BTC AG in Germany, have done is to introduce a physical random number device into an algorithmic generation process.  The device uses a circuit element called a “flip-flop”, which can be in either of two states (conventionally, 0 or 1); by initializing it to a metastable state, its final state becomes unpredictable.  This potentially inserts a truly random element into the stream of numbers generated, without fooling around with Geiger counters or dice.
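A rough software analogue of this hybrid idea (my own sketch, not the researchers' design) is to pull entropy from the operating system, which harvests it from physical events, and use it to initialize an otherwise deterministic generator:

```python
import os
import random

# Pull 8 bytes of entropy from the operating system (which gathers it
# from unpredictable hardware events) and use it to seed a
# deterministic pseudo-random generator.
hardware_seed = int.from_bytes(os.urandom(8), "big")
rng = random.Random(hardware_seed)
print(rng.random())  # unpredictable starting point, deterministic continuation
```

The flip-flop device plays the role of `os.urandom` here: it injects genuine physical unpredictability into an otherwise algorithmic stream.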

In essence, this addresses the weakness of the algorithmic approach: that if the algorithm is known, it is possible to predict the subsequent sequence of numbers.  There are some uses for random numbers for which this is very important.  Consider, for example, selecting the winning numbers in a daily lottery.  It is obviously undesirable, from the lottery promoter’s point of view, that anyone should be able to predict future winning numbers.  Similarly, in cryptographic applications, security often depends on the unpredictability of the number sequence.  (As, for example, in the one-time pad, the one provably unbreakable cipher.)
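Python's standard library makes this distinction explicitly, which serves as a handy illustration: the `random` module is deterministic and seedable (fine for simulation), while the `secrets` module draws from the operating system's cryptographic generator, designed to be unpredictable even to someone who knows the algorithm:

```python
import random
import secrets

# `random` is deterministic: knowing the seed means knowing every
# future value. Fine for simulation, fatal for security.
rng = random.Random(2024)
predictable = [rng.randint(0, 99) for _ in range(3)]

# `secrets` draws from the OS's cryptographic generator, intended to
# be unpredictable even with full knowledge of the algorithm.
token = secrets.token_hex(16)
print(predictable, token)
```

A lottery or a cipher should use something like the second mechanism; the first would let an attacker who recovers the seed predict every subsequent draw.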

There is another class of applications for random numbers, though, where the unpredictability of the sequence is less important, as long as the numbers possess desired statistical properties.  The most prominent example is simulation (sometimes called stochastic or Monte Carlo simulation).  In a simulation application, we are often concerned with modeling the behavior of a real system under a range of possible conditions.  For example, we might want to simulate the returns to an investment strategy under a variety of economic and market conditions.  In this kind of application, the main difficulties arise in making sure that the “random” numbers accurately reflect the probability distribution of outcomes.
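As a hypothetical illustration of that kind of application (the distributional assumptions here are purely illustrative), one might estimate the probability that an investment ends a decade underwater by simulating many possible return paths:

```python
import random

# Hypothetical example: simulate 10 years of annual returns drawn from
# a normal distribution (mean 7%, standard deviation 18%) and estimate
# the probability that a $1 investment ends up below where it started.
random.seed(1)
trials = 20_000
losses = 0
for _ in range(trials):
    wealth = 1.0
    for _ in range(10):
        wealth *= 1.0 + random.gauss(0.07, 0.18)
    if wealth < 1.0:
        losses += 1
print(losses / trials)  # estimated probability of a 10-year loss
```

Nobody needs these draws to be unpredictable; what matters is that their statistical behavior faithfully mimics the assumed distribution of returns.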

At one level, this can be an algorithmic problem.  Let’s suppose that we want to model a variable Q, which has a standard normal distribution.  One approach to generating values for Q, which I have actually seen used, makes use of a set of six uniformly distributed random values U1, U2, … U6 on the interval (0,1).  Then, by appeal to the Central Limit Theorem, the value Q’:

Q’ = U1 + U2 + U3 + U4 + U5 + U6 − 3

is approximately normally distributed with mean 0 (its standard deviation is actually √(1/2) ≈ 0.71, not quite 1).  However, this is, generally speaking, a very bad method for generating Q.  It should be obvious, for starters, that a value of Q’ greater than 3 can never occur.  While it is true that outcomes 3 or more standard deviations from the mean are unlikely, that is not the same as impossible.   This could be a serious problem in our example of simulating an investment strategy, since we probably are interested (or, at least, we should be) in extreme outcomes, especially negative ones.
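A quick numerical check (my own sketch of the sum-of-six-uniforms method just described) makes the truncation visible: no matter how many samples we draw, nothing ever escapes the interval [−3, 3]:

```python
import random

def clt_normal(rng):
    """Approximate a standard normal draw by summing six uniforms on
    (0,1) and subtracting 3 to center the result at zero."""
    return sum(rng.random() for _ in range(6)) - 3.0

rng = random.Random(7)
samples = [clt_normal(rng) for _ in range(100_000)]

# Every value is trapped in [-3, 3]: the tails are cut off entirely.
print(min(samples), max(samples))
```

A true normal variable exceeds 3 in absolute value roughly 0.27% of the time, so in 100,000 draws we should see a few hundred such outcomes; this method produces exactly zero.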

There are much better algorithms for generating numbers drawn from a normal distribution, or other common distributions.  Of course, determining the true distribution for a real-world random variable is not always entirely straightforward (and assuming everything is normally distributed is very dangerous).
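One such better algorithm is the Box-Muller transform, which turns two independent uniforms into a normal draw with genuinely unbounded tails (sketched here for illustration; in practice one would simply call a library routine such as Python's `random.gauss`):

```python
import math
import random

def box_muller(rng):
    """Box-Muller transform: convert two independent uniforms on (0,1)
    into a standard normal draw with unbounded tails."""
    u1 = rng.random() or 1e-12  # guard against log(0)
    u2 = rng.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

rng = random.Random(11)
samples = [box_muller(rng) for _ in range(200_000)]
extreme = sum(1 for s in samples if abs(s) > 3.0)
print(extreme)  # unlike the clipped method, 3-sigma outcomes do occur
```

Since a small fraction (about 0.27%) of standard normal draws lie beyond 3 standard deviations, a sample this size should contain several hundred of them, and it does.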

So this is a development with some promise, but it is not a panacea for everything that ails the use of random numbers.
