Y2K Hoax
by Rapjim@aol.com, January 21, 1999

Few stories have captured the public's attention like the year 2000 "millennium bug." It is so well known I can skip the background paragraph that would normally accompany a piece such as this. Not since Joe McCarthy have we had such a groundless excuse for hysteria. According to people on TV and radio talk shows, on January 1, 2000, planes will start falling from the sky as autopilots fail to adjust properly; food will rot in the fields as truck drivers who depend on computer-generated reports of where to drive instead get nothing; Social Security payouts will cease; banks will lose all track of what they owe you in deposits while charging you an extra hundred years' interest on your loans; the stock market will reset to 1900 levels; and on and on. One such prognosticator, radio host Bob Brinker, tells you that the year 2000 is not a normal leap year and that will cause problems. For the record, Bob: the year 2000 is a leap year. The rule is that years evenly divisible by 4 are leap years, except that century years must also be evenly divisible by 400. The year 2000 passes; other century years, like 2100, do not. But we don't have to worry about that until 2100, by which time everybody reading this will be dead, so who cares. There is also the notion that certain brands of computer simply cannot handle the year 2000. This is also bunk, as I will explain. Why? Well, first a few facts.
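
For the curious, the whole rule fits in a few lines. This is just an illustrative sketch of the Gregorian calendar rule in C, not code pulled from any actual system:

    #include <stdio.h>

    /* Gregorian rule: divisible by 4 is a leap year, except that
       century years must also be divisible by 400. */
    int is_leap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main(void) {
        printf("1900: %d\n", is_leap(1900));  /* 0, not a leap year */
        printf("2000: %d\n", is_leap(2000));  /* 1, a leap year */
        printf("2100: %d\n", is_leap(2100));  /* 0, not a leap year */
        return 0;
    }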

Fact 1. Even PC's do not use the commonly known DDMMYY format for date storage.

Really? What do PC's use? PC's use 2 bytes, or 16 bits, to store the number of days since January 1, 1980. Those two bytes can hold any value from 0 to 65,535, which is 65,536 distinct values, or 2 to the 16th power. If we decide that each value represents one day, those 2 bytes can store every day for a number of years equal to 65,536 divided by 365.25: about 179. So with two measly bytes, we can represent all the DDMMYY's for a period of 179 years! Compare that with what you are told, namely that 6 bytes are not enough for more than 100 years. If those 6 bytes used for DDMMYY were used according to the formula I just described, you would be able to store 2 to the 48th power days. That is roughly 281 trillion, a number that dwarfs even the massive federal debt. It would easily store all the days since the beginning of time as we know it. So isn't that something: PC's, since their inception, have used 2 bytes and can handle more years than other applications using 6 bytes. For computer guys like myself this is a really big deal; for the rest of you I realize it may be ho-hum. Sorry about that.
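
If you don't trust my arithmetic, here is a sketch in C that does the division for you. Nothing here comes from any real date system; it just counts how many distinct values each width can hold and turns days into years:

    #include <stdio.h>

    int main(void) {
        /* Distinct values representable in 2, 4, and 6 bytes,
           read as a count of days and divided into years. */
        double two_bytes  = 65536.0;            /* 2 to the 16th */
        double four_bytes = 4294967296.0;       /* 2 to the 32nd */
        double six_bytes  = 281474976710656.0;  /* 2 to the 48th */

        printf("2 bytes: %.0f days, about %.0f years\n",
               two_bytes, two_bytes / 365.25);    /* ~179 years */
        printf("4 bytes: %.0f days, about %.0f years\n",
               four_bytes, four_bytes / 365.25);  /* ~11.8 million years */
        printf("6 bytes: %.0f days, about %.0f years\n",
               six_bytes, six_bytes / 365.25);    /* ~770 billion years */
        return 0;
    }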

Myth: IBM mainframes are incapable of handling dates this way.

Fact 2. Wrong. IBM mainframes have an 8-byte, 64-bit system clock.

This clock tracks every fraction of a nanosecond since January 1, 1900, and it will be useful longer than anyone reading this will be alive. The notion that computers out there can't handle the year 2000 for technical hardware or system reasons is garbage. Not that it hasn't made lots of latter-day Pied Pipers millionaires, but it is still wrong. So now you know. It is only application-level software, written by programmers of varying degrees of talent, that has given us this problem to the extent that it exists. By the way, if you use 4 bytes instead of 2 bytes as above, you can track the current day value for nearly 12 million years. I include a longer explanation at the bottom of this page.

Well, if two bytes can handle 179 years, how did we ever get stuck with 6 bytes that can only store 100?

One word: laziness. Ok, that's a little strong, so how about I say they, the programmers, were following the KISS principle: keep it simple, stupid. Ya see, in order to follow the PC example and use 2 bytes to represent 179 years, you also need a program to convert the 2 bytes into DDMMYY. That takes work, is best written in Assembler for performance reasons, and Assembler is a foreign language to most programmers. It also adds complexity. Sure, you get a system you don't have to worry about for 79 more years, AND drastic savings on CPU and disk costs, but hey, why bother? Most programmers know they will leave for another job, get downsized out, get fired or disabled, and so on, so who cares. Anyway, they rationalize, it's a management issue.
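
To show the conversion is no great mystery, here is a sketch in C rather than Assembler. The function names and layout are my own invention for illustration only; the point is that turning a 2-byte day count into day, month, and year takes nothing more exotic than two loops:

    #include <stdio.h>

    /* Gregorian leap-year test, as above. */
    static int is_leap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    /* Convert a 16-bit count of days since January 1, 1980 (the
       epoch described above) into day, month, and year. */
    static void days_to_dmy(unsigned days, int *d, int *m, int *y) {
        static const int month_len[12] =
            { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
        int year = 1980;
        unsigned remaining = days;

        for (;;) {  /* peel off whole years */
            int len = is_leap(year) ? 366 : 365;
            if (remaining < (unsigned)len)
                break;
            remaining -= len;
            year++;
        }
        int month = 0;
        for (;;) {  /* peel off whole months */
            int len = month_len[month];
            if (month == 1 && is_leap(year))
                len = 29;                     /* February */
            if (remaining < (unsigned)len)
                break;
            remaining -= len;
            month++;
        }
        *y = year;
        *m = month + 1;
        *d = (int)remaining + 1;
    }

    int main(void) {
        int d, m, y;
        days_to_dmy(65535u, &d, &m, &y);  /* largest 2-byte value */
        printf("Day 65535 is %02d/%02d/%04d\n", d, m, y);
        return 0;
    }

Run it and the largest 2-byte value comes out as a date in 2159, 179 years past the 1980 epoch.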

So what of management? In my experience in the corporate and public sectors, many low-level managers tend to be hacks who could not program but were otherwise loyal, so they were promoted out of a position where they might do damage, as in foul the whole system up to the point it would not work regardless of what year it was. Senior-level managers in the private sector tend to be good at getting senior-level jobs, but illiterate when it comes to data processing design. In the public sector, data processing managers tend to be people who are good at delivering votes to political decision makers. They are skilled indeed, but again, not with anything DP-related.

Okay, it's a mess, but planes are still going to crash on January 1, 2000 if something isn't done, right? I have never worked on the software that operates autopilots in airplanes. My guess is that they will work properly regardless of what date is calculated. To assume that the human pilot would not recognize what the error is, given the hype, and then quickly reprogram his box to a date, say, five years earlier, is to attribute to that pilot a level of stupidity that I hope is not justified. But I could be wrong. Airlines, I suppose, could play it safe and simply check each plane during its regular test flights by keying in a year 2000 date and seeing if the machine still works as advertised. There is no reason to wait until the year 2000 to do this. The computer has no connection to the physical world except what we program it for.

But wait, if you have a better idea that is being ignored, pray tell: what is being done with the billions currently being spent on this hoax, as you call it? I accept that rewriting programs to store dates in numerical rather than display format will not be attempted by most organizations this late in the game. Unfortunately, most of the money is still being squandered. I have worked as a contractor at four different sites, and in each case the clients are forming user groups and study groups, they are having meetings, they are drawing up contingency plans, test plans, implementation schedules, etc. Everything conceivable except just fixing the problem. Most can't fix the problem because they don't know how the problem affects their installation. They are sure a problem must exist because they have read about it in the newspapers, but they are blind otherwise.

All right, smarty pants, what would you do?

Thought you'd never ask. The answer obviously is site dependent, but in general, and assuming they have two systems, one for testing and one for production (most places do, by the way), I would copy the entire production system to the test system, programs and data, set the test system's date to June 1, 1999, and start running every transaction and batch program I had. Then I would advance the date one month at a time and repeat the cycle up to June 1, 2000, and look at the output. The errors should become immediately obvious. I would make the appropriate changes, recompile everything, recopy the data only from production, and repeat the one-year cycle. A sketch of this cycle appears below.

Preferably this is being done by 2 to 3 people, max. One person working alone would be best, but that may be unrealistic. More than 3 invites disaster, because you get the "I thought he/she was doing that, not me" syndrome. If there is one employee who knows the system well and is competent, I strongly recommend using that person alone. And for lord's sake, don't use a contractor unless you have to. Even though I are one. There is no way a contractor will be able to learn your system and do a good job in the amount of time you want him to. I have never understood how it is that an employee will be given months to learn a system before attempting changes, while a contractor is expected to have the changes completed at the end of his first week. This year 2000 thing is exactly the wrong type of job to hand off to contractors, yet that is precisely where most of the money seems to be going.
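
For the flavor of it, here is that cycle as a trivial C driver. The two functions are stubs standing in for whatever your shop actually uses to reset the test system's clock and fire off every transaction and batch program; the names are invented, and this is a sketch of the procedure, not a real harness:

    #include <stdio.h>

    /* Stubs, invented for illustration.  Replace with whatever
       your installation uses to set the test system's date and
       to run every transaction and batch program. */
    static void set_system_date(int y, int m, int d) {
        printf("  test system date set to %02d/%02d/%04d\n", d, m, y);
    }
    static void run_all_jobs(void) {
        printf("  running every transaction and batch job...\n");
    }

    int main(void) {
        /* June 1, 1999 through June 1, 2000: thirteen monthly runs. */
        int year = 1999, month = 6;
        for (int run = 1; run <= 13; run++) {
            printf("Run %d:\n", run);
            set_system_date(year, month, 1);
            run_all_jobs();
            if (++month > 12) { month = 1; year++; }
        }
        return 0;
    }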

Ok, run that by me again: how is it you can get 179 years out of 2 bytes when the largest companies in this country can only get 100 years out of 6 bytes? I just don't follow you on this, and I'm not a programmer. Or I am a programmer and I still don't get it.

Fasten your seat belts, turn off the TV, here we go. Computers are a combination of 2 basic things: a processor and a memory. I'm guessing most of you already knew that. Now imagine that each byte of memory in your Commodore VIC-20 was a string of 8 coins and you had 1 K of memory. How many coins would you have? Well, 1 K of memory is 1024 bytes, and a byte is 8 bits, so we'd have 8192 independently changeable coins (1024 x 8). Why bytes? Why 8? What is so magic about that? Nothing is magic about 8; it's just a compromise. In order to make sense out of all the bits, we have to group them together in such a way that we can look at them and all agree on what they mean. Eight bits to the binary unit (byte) just works out nice. We could define a byte to have 8192 bits. That would be one heck of a byte. If the coins in our example could be set to either heads or tails, this one byte could have a range of values equal to 2 to the 8192nd power! If you covered the floor of your garage with 8192 quarters and tried to set them to all the various combinations of heads and tails, you would probably not live long enough to set them all.
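
If you want a feel for how big 2 to the 8192nd power is without flipping any quarters, a couple of lines of C will gauge it. The number itself won't fit in any native type, but counting its digits is easy:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* 1 K of memory = 1024 bytes x 8 bits = 8192 bits (coins). */
        int bits = 1024 * 8;

        /* 2 to the nth power has about n * log10(2) decimal digits. */
        printf("2^%d has about %.0f decimal digits\n",
               bits, bits * log10(2.0));  /* roughly 2466 digits */
        return 0;
    }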

Why not use a 4-bit byte, then?

If you have 4 bits, you have 16 potential meanings: 0000, 0001, 0010, all the way to 1111. Trust me, 16! Sixteen is good. You get to drive at sixteen and god knows what else, but you have to admit it is limiting. If we separate all our 8192 independently addressable bits into groups of 4, it will get dicey if we want to store values of things that have more than 16 options. Using 4-bit bytes we will get twice as many bytes as with 8-bit bytes, but they are not as useful. Especially if we, god forbid, want to make a word processor that allows character storage. There are 26 letters and we can only store 16. That won't sell, will it? How about a 5-bit byte, 00000 to 11111? That helps a little. Now we can have 32 values; notice it doubles each time you add a bit. The letters? We got 'em covered. What's that? Oh, you want to store some numbers, do you? And you want 2 separate cases for the letters? Oh my, 32 won't do it. How about 6-bit bytes? That would give us 000000 to 111111. Wow! 64 separately assignable values. That just might work. No lower case, but hey, we can live with that. As an aside, when I worked for USAir, all the airlines used this 6-bit scheme to represent data. It's faster to transmit 6-bit bytes than 8-bit bytes, and in an age of 300-baud modems that made a difference.
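
Since the whole argument above turns on one fact, that each added bit doubles the count of values, here is a throwaway C loop that prints the progression the paragraph just walked through:

    #include <stdio.h>

    int main(void) {
        /* An n-bit "byte" holds 2 to the nth power distinct values. */
        for (int bits = 4; bits <= 8; bits++)
            printf("%d bits -> %3d values\n", bits, 1 << bits);
        return 0;
    }

Four bits gets you the 16, five the 32, six the 64, and the familiar 8-bit byte the 256 we live with today.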
