On Tue, 29 Mar 2011, Carl Wiedemann wrote:
If you are storing birthdates, you probably want to use DATETIME in the schema instead of INT. INT would require the date be stored as a unix timestamp, which is only valid after the year 1970, and birthdays are likely to exist before this date.
From Wikipedia: http://en.wikipedia.org/wiki/Unix_time
"The Unix time number is zero at the Unix epoch, and increases by exactly 86 400 per day since the epoch. Thus 2004-09-16T00:00:00Z, 12 677 days after the epoch, is represented by the Unix time number 12 677 × 86 400 = 1 095 292 800. This can be extended backwards from the epoch too, using negative numbers; thus 1957-10-04T00:00:00Z, 4 472 days before the epoch, is represented by the Unix time number −4 472 × 86 400 = −386 380 800."
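The arithmetic in that quote is easy to check with a short Python sketch (the dates and the 86 400 constant are taken straight from the quote; the helper name is my own):

```python
from datetime import date

EPOCH = date(1970, 1, 1)
SECONDS_PER_DAY = 86_400

def unix_time_at_midnight(d: date) -> int:
    # Unix time number at 00:00:00 UTC on date d;
    # negative for dates before the epoch.
    return (d - EPOCH).days * SECONDS_PER_DAY

print(unix_time_at_midnight(date(2004, 9, 16)))  # 1095292800
print(unix_time_at_midnight(date(1957, 10, 4)))  # -386380800
```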
and
"The standard Unix time_t (data type representing a point in time) is a signed integer data type, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. Being 32 bits means that it covers a range of about 136 years in total. The minimum representable time is 1901-12-13, and the maximum representable time is 2038-01-19. At 03:14:07 UTC 2038-01-19 this representation overflows. This milestone is anticipated with a mixture of amusement and dread; see year 2038 problem."
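The 32-bit limits quoted above can be reproduced directly (a sketch; note that fromtimestamp with negative arguments can fail on some platforms, e.g. Windows):

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1   # 2147483647, where signed 32-bit time_t overflows
INT32_MIN = -(2**31)

# Last representable moment: 2038-01-19 03:14:07 UTC
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# First representable moment: 1901-12-13 20:45:52 UTC
print(datetime.fromtimestamp(INT32_MIN, tz=timezone.utc))
```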
I looked into this issue once myself and decided that the advantages of adding and subtracting integers, and of applying SQL int_date < threshold tests on the database server, were far easier and more efficient to manage than comparing strings.
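For illustration, here is what that integer-threshold style looks like, as a sketch against a hypothetical table (SQLite via Python's sqlite3, with made-up names and data):

```python
import sqlite3

# Hypothetical schema: birthdates stored as signed unix timestamps,
# so people born before 1970 simply get negative values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, birth_ts INTEGER)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [
        ("Sputnik-era", -386380800),  # 1957-10-04
        ("Epoch baby", 0),            # 1970-01-01
        ("Millennial", 1095292800),   # 2004-09-16
    ],
)

# A plain integer comparison does the date test on the server:
# everyone born before the epoch.
rows = conn.execute(
    "SELECT name FROM people WHERE birth_ts < 0 ORDER BY birth_ts"
).fetchall()
print(rows)  # [('Sputnik-era',)]
```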
And also, I'll be retired by 2038. Someone else's problem!