#9377 closed defect (fixed)
avoid "ValueError: timestamp out of range for platform time_t" errors
Reported by: | syjeong | Owned by: | Remy Blank
---|---|---|---
Priority: | normal | Milestone: | 0.12
Component: | general | Version: | 0.12dev
Severity: | normal | Keywords: | timestamp
Cc: | hoff.st@… | Branch: |
Release Notes: | | |
API Changes: | | |
Internal Changes: | | |
Description
How to Reproduce
While doing a GET operation on /ticketcalendar, Trac issued an internal error.
(please provide additional details here)
Request parameters:
{}
User agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2 ChromePlus/1.3.9.0
System Information
Trac | 0.12dev-r9719
Babel | 1.0dev-r546
Docutils | 0.6
Genshi | 0.7dev-r1134
mod_python | 3.3.1
Pygments | 1.2.2
pysqlite | 2.4.1
Python | 2.6.5 (r265:79063, Apr 16 2010, 13:28:26) [GCC 4.4.3]
pytz | 2010b
setuptools | 0.6
SQLite | 3.6.22
Subversion | 1.6.6 (r40053)
jQuery | 1.4.2
Enabled Plugins
TracGanttCalendarPlugin | 0.1
Python Traceback
```
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/Trac-0.12dev_r9719-py2.6.egg/trac/web/main.py", line 513, in _dispatch_request
    dispatcher.dispatch(req)
  File "/usr/local/lib/python2.6/dist-packages/Trac-0.12dev_r9719-py2.6.egg/trac/web/main.py", line 235, in dispatch
    resp = chosen_handler.process_request(req)
  File "/usr/local/lib/python2.6/dist-packages/TracGanttCalendarPlugin-0.1-py2.6.egg/ganttcalendar/ticketcalendar.py", line 122, in process_request
    due_time = to_datetime(due, utc)
  File "/usr/local/lib/python2.6/dist-packages/Trac-0.12dev_r9719-py2.6.egg/trac/util/datefmt.py", line 52, in to_datetime
    return datetime.fromtimestamp(t, tzinfo or localtz)
ValueError: timestamp out of range for platform time_t
```
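For context, the failure boils down to handing datetime.fromtimestamp() a microsecond-resolution timestamp where it expects seconds; on a platform with a 32-bit time_t the value lands far outside the representable range. A minimal sketch of that behaviour (the timestamp value here is just an example, and the exact exception type and message vary by platform and Python version):

```python
from datetime import datetime

ts_seconds = 1273000000            # an ordinary POSIX timestamp, in seconds (May 2010)
ts_micros = ts_seconds * 1000000   # the same instant as a 0.12-style microsecond timestamp

print(datetime.fromtimestamp(ts_seconds))   # works fine

try:
    datetime.fromtimestamp(ts_micros)       # far beyond what a 32-bit time_t can hold
except (ValueError, OverflowError, OSError) as exc:
    # On the reporter's platform this surfaces as
    # "ValueError: timestamp out of range for platform time_t".
    print(exc)
```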
Attachments (0)
Change History (11)
comment:1 by , 15 years ago
Resolution: | → cantfix |
---|---|
Status: | new → closed |
comment:2 by , 15 years ago
… probably related to the switch to microsecond timestamps (see #6466). Should we try to make to_datetime() more resilient against that? For example, if a timestamp value is greater than the limit for datetime, interpret it as a microsecond timestamp?
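A bare-bones sketch of that idea, assuming a 32-bit time_t cutoff (the helper and constant names here are made up; the actual patch in comment 4 below does essentially this inside trac.util.datefmt):

```python
from datetime import datetime

_MAX_TS = (1 << 31) - 1  # largest second-resolution value a 32-bit time_t can hold

def to_datetime_lenient(t, tzinfo=None):
    """Treat implausibly large values as microsecond timestamps (sketch only)."""
    if t > _MAX_TS:
        t = t / 1000000.0  # scale microseconds back down to seconds
    return datetime.fromtimestamp(t, tzinfo)
```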
comment:3 by , 15 years ago
Cc: | added |
---|---|
Keywords: | timestamp added |
Resolution: | cantfix |
Status: | closed → reopened |
Summary: | ValueError: timestamp out of range for platform time_t → avoid "ValueError: timestamp out of range for platform time_t" errors |
I suppose this can't hurt. At worst, if it's not enough to ensure full compatibility with the plugins, they will break somewhere else… no worse than what we see here.
comment:4 by , 15 years ago
Suggested patch:
- trac/util/datefmt.py

```diff
diff --git a/trac/util/datefmt.py b/trac/util/datefmt.py
@@ -49,6 +49,8 @@
     elif isinstance(t, date):
         return (tzinfo or localtz).localize(datetime(t.year, t.month, t.day))
     elif isinstance(t, (int, long, float)):
+        if t > _max_ts: # Accept microsecond timestamps for 0.11 compatibility
+            t = t / 1000000.0
         return datetime.fromtimestamp(t, tzinfo or localtz)
     raise TypeError('expecting datetime, int, long, float, or None; got %s' %
                     type(t))
@@ -407,5 +409,6 @@
 utcmax = datetime.max.replace(tzinfo=utc)
 _epoc = datetime(1970, 1, 1, tzinfo=utc)
 _zero = timedelta(0)
+_max_ts = (1 << 31) - 1
 
 localtz = LocalTimezone()
```

- trac/util/tests/datefmt.py

```diff
diff --git a/trac/util/tests/datefmt.py b/trac/util/tests/datefmt.py
@@ -62,6 +62,13 @@
         self.assertEqual(datefmt.to_datetime(23), expected)
         self.assertEqual(datefmt.to_datetime(23L), expected)
         self.assertEqual(datefmt.to_datetime(23.0), expected)
+
+    def test_to_datetime_microsecond_timestamps(self):
+        expected = datetime.datetime.fromtimestamp(2345.678912,
+                                                   datefmt.localtz)
+        self.assertEqual(datefmt.to_datetime(2345678912), expected)
+        self.assertEqual(datefmt.to_datetime(2345678912L), expected)
+        self.assertEqual(datefmt.to_datetime(2345678912.0), expected)
 
     def test_to_datetime_can_convert_dates(self):
         expected = datetime.datetime(2009, 5, 2, tzinfo=datefmt.localtz)
```
Ok to apply?
comment:5 by , 15 years ago
Ideally, we should also log a warning, but I suppose this would require passing the environment to to_datetime(), so it's probably not possible.
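For what it's worth, a module-level logger would be one way to warn without threading an Environment through the call. A rough sketch using the stdlib logging module (the helper name is made up, and this is not how Trac logs: it would bypass the per-environment log configuration, which is presumably why the idea was dropped):

```python
import logging

# A plain stdlib logger, independent of any Trac Environment (sketch only;
# real Trac code logs through env.log, configured per environment).
log = logging.getLogger("trac.util.datefmt")

_max_ts = (1 << 31) - 1

def _scale_legacy_timestamp(t):
    """Hypothetical helper: warn about and rescale a microsecond timestamp."""
    if t > _max_ts:
        log.warning("to_datetime() got a microsecond timestamp (%r); "
                    "scaling it down for 0.11 plugin compatibility", t)
        t = t * 0.000001
    return t
```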
comment:6 by , 15 years ago
Milestone: | → 0.12 |
---|---|
Resolution: | → fixed |
Status: | reopened → closed |
Slightly improved patch applied in [9789] (also handles negative microsecond timestamps).
comment:7 by , 15 years ago
Owner: | set to |
---|
comment:8 by , 15 years ago
As we're talking about microseconds… what about doing * 0.000001 instead? :-)

```
$ python -m timeit -r 10 '123 / 1000000.0'
10000000 loops, best of 10: 0.0715 usec per loop
$ python -m timeit -r 10 '123 * 0.000001'
10000000 loops, best of 10: 0.0237 usec per loop
```
comment:9 by , 15 years ago
I don't see any good reason why this should be the case, but numbers don't lie… Done in [9790].
comment:10 by , 15 years ago
Maybe I'm still too far away from topics like code performance optimization, but a quick bit of newbie research suggests that there are differences between Python versions too (http://bugs.python.org/issue4128). Division optimization was the subject of testing alternative algorithms for Python some time ago (http://bugs.python.org/issue3451).
My results for Christian's timeit commands are 0.254 vs. 0.105 usec respectively (Python 2.5.2 (r252:60911, Jan 24 2010, 14:53:14) from the Debian 5.0 package, on a personal workstation idling around with an AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ backed by 2x 3 GB DDR2/800 RAM in dual-channel config). Division is clearly more costly.
While the issue for Trac is certainly not about big numbers, it very well might be about many numbers processed in succession. What I got from reading the second resource mentioned here is that division involves recursion (while multiplication might be more straightforward). Thanks for taking performance seriously here.
Interesting; being a math freak, I had never thought about the implementation of basic arithmetic operations in computer programs like this before. Life-long learning with Trac…
comment:11 by , 15 years ago
The truth is, this optimization is probably totally irrelevant here:
- We don't convert that many timestamps into datetime objects anyway.
- Even if we did, the division is only executed if a microsecond timestamp is erroneously passed to to_datetime() instead of from_utimestamp().
Here is why I wrote above that I don't see why division should be slower than multiplication:
- The operation is performed on floats, so AFAIK it should be executed by the FPU.
- Even if the execution time (on the FPU) of the multiplication is lower than that of the division, the time of the actual operation is probably at least one order of magnitude lower than the overhead of the Python virtual machine (instruction decoding, dispatching, …). So there must be something else in the VM that causes the difference, but I can't find a good reason why this should be the case (one way to inspect what the VM actually executes is sketched after this list).
PluginIssue (TH:GanttCalendarPlugin).