The Y10K Problem, Pascal's Wager, and Petersburg
The "Y2K problem" involves computer programs whose representations of
dates consist only of the last two digits of the decimal date. This
representation was fairly unambiguous as long as all the dates of
interest were within the same century, but it became a potential
problem as the year 2000 AD approached. It's been said that this
deficiency in computer software arose partly because of the historical
accident that computers were invented in the middle of a century, far
enough from both the next and the prior century for these events to
seem remote. (It's also somewhat coincidental that computers were
invented very near the turn of a millennium, so the digit in the
1000's place, as well as the 100's place, was soon to change.)
Also, there was a widespread assumption that software written
in, say, the 1960's could not possibly still be in use more than 30
years later when the year 2000 rolled around.
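To see the kind of failure this representation invites, here is a
small hypothetical sketch in Python (the functions two_digit and
is_later are illustrative, not drawn from any actual Y2K-era code) of
a comparison routine that keeps only the last two digits of the year:

    def two_digit(year):
        # Keep only the last two decimal digits of the year, as many
        # early programs did to save storage.
        return year % 100

    def is_later(year_a, year_b):
        # Compare two years using only their stored two-digit forms.
        return two_digit(year_a) > two_digit(year_b)

    print(is_later(1999, 1998))   # True, as expected
    print(is_later(2000, 1999))   # False: 00 sorts before 99

Within a single century the comparison is sound; across the century
boundary it silently gives the wrong answer.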
However, we now realize that this was short-sighted, and that software
or elements of software may persist in standard applications for an
indefinite period of time. Unlike hardware, simple algorithms and
sequences of logic statements don't "wear out" over time. On the
contrary, they tend to grow and proliferate, becoming ever more deeply
embedded in the "logical infrastructure" of our civilization. Coping
with the Y2K problem has necessitated tremendously expensive and
laborious efforts to review and revise 40 years' worth of software,
to accommodate the full four-digit date.
Unfortunately, it may be that we are once again being short-sighted
in our approach to the representation of dates in our logical
infrastructure. Admittedly the use of four digits will be adequate for
quite some time, but eventually it too will prove to be deficient.
On New Year's Eve, 31 December 9999 AD, the world will once again
be facing a potential crisis, because on New Year's Day the year will
be 10000 AD, requiring FIVE decimal digits to represent the year.
This might be called the Y10K problem.
Some people may take comfort from the fact that the year 10000 seems
so remote, and surely no software written today will still be in use
8000 years from now... but of course this is the same sort of thinking
that got us into trouble 40 years ago. Remember, software has, at
least potentially, an infinite lifespan. If we think it was a daunting
task to review and correct 40 years' worth of software to address the
Y2K problem, imagine the task of reviewing and correcting 8000 years'
worth of software routines that will be almost organically embedded
in every imaginable component of our future civilization. The labor
required for such an undertaking is almost inconceivable. We may
hope that when the time arrives there will be artificial intelligence
tools that can manage the conversion efficiently, but can we really
count on this?
Needless to say, as soon as we recognize the reality of the Y10K
problem, we can also foresee the Y100K problem, and the Y1000K problem,
and so on. Each of these potential crises is 10 times more remote
than the previous one, but the magnitude of the problem is 10 times
worse! In view of this, we could argue that each of these "problems"
should be given roughly equal weight in our present considerations -
and there are infinitely many such problems.
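To make the bookkeeping explicit (a crude formalization, with w_n
introduced here to denote the relative weight of the nth such crisis,
under the assumption that the accumulated software grows in proportion
to elapsed time while we discount each crisis in proportion to its
remoteness), we have

    w_n \approx 10^{n} \cdot 10^{-n} = 1 \quad \text{for every } n,

so the infinitely many terms all carry about the same weight.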
This is somewhat reminiscent of Pascal's famous "probability" argument
in favor of living a pious life. He reasoned that regardless of how
low a probability is assigned to the existence of God and heaven, we
should still bet our lives on it, because the payoff is infinite:
everlasting life (versus everlasting damnation). Thus the expected
value of a wager in favor of God (Pascal argued) is always infinitely
great.
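Rendered schematically (with p denoting whatever small but nonzero
probability one cares to assign to the proposition), Pascal's
reckoning amounts to

    E[\text{wager}] = p \cdot \infty + (1-p) \cdot (\text{finite loss})
                    = \infty \quad \text{for any } p > 0.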
Perhaps a more apt analogy from probability theory is the famous
Petersburg Paradox, in which the probability of reaching each
successive round is cut in half while the payoff for that round
doubles. It seems
that each of the infinitely many possible outcomes contributes an
equal amount to the expected value of the game, so the expected value
should be infinite, whereas in practice we tend to discount the more
remote outcomes, regardless of the potential payoffs (or losses)
associated with them.
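In one common formulation the kth possible outcome occurs with
probability 1/2^k and pays 2^k, so each outcome contributes exactly 1
to the expectation:

    E = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k
      = \sum_{k=1}^{\infty} 1 = \infty.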
It's interesting to consider whether, in principle, provisions could
be made that would be forever immune from "YXK" problems. In other
words, is there a representation of dates that could work forever?
Presumably not, if we assume each platform has only a finite amount
of memory available, which makes it impossible to distinguish between
infinitely many different years. Thus, even with an open-ended
protocol, there would be limits (a small sketch of this point appears
below). We might speculate about whether
these considerations have any relevance to biology and genetic
information. The same finite range of time is repeated in each
generation, as if the dates in any individual "device" are only able
to represent about 100 years, and then we simply cease to function.
But I digress...
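As a minimal sketch of that limitation (purely hypothetical; the
16-bit field and the helper pack_year below are illustrative, not
drawn from any actual calendar system), consider a year packed into a
fixed-width field:

    import struct

    def pack_year(year):
        # Pack the year into a hypothetical 16-bit unsigned field.
        # Any fixed-width field, however generous, distinguishes only
        # finitely many years; this one gives out after 65535 AD.
        return struct.pack(">H", year)

    print(struct.unpack(">H", pack_year(9999))[0])    # 9999, fine
    print(struct.unpack(">H", pack_year(65535))[0])   # last representable year
    # pack_year(65536) raises struct.error: this field's own "YXK" moment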