bigtable benchmark

A simple brute-force generation of a 10-column x 1000-row table,
inspired by the well-known bigtable benchmark.
For each run, the time is averaged over 10 renderings, and the
best time of 4 runs is retained.
All times are in ms.
Ideally, you want both automatic quoting and low rendering times.
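
The measurement protocol above (average of 10 renderings, best of 4 runs) can be sketched as follows. The `render` function is a hypothetical stand-in for a template engine's render call, and `time.perf_counter` is from modern Python; the original runs on Python 2.6 would have used something like `time.time` or the `timeit` module.

```python
import time

def render():
    # Hypothetical stand-in for a template render() call; any
    # zero-argument callable producing the page works here.
    table = [("a", "b", "c")] * 10
    return "".join("<tr>%s</tr>" % "".join("<td>%s</td>" % c for c in r)
                   for r in table)

def best_average_ms(fn, renderings=10, runs=4):
    # Average fn over `renderings` calls, repeat for `runs` runs,
    # and keep the best (lowest) average, in milliseconds.
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        for _ in range(renderings):
            fn()
        avg_ms = (time.perf_counter() - start) * 1000.0 / renderings
        best = min(best, avg_ms)
    return best

print(best_average_ms(render))
```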

quoting          qpy 1.7   evoque 0.4   mako 0.2.4   genshi 0.5.1
automatic        30.75     52.69        30.16        376.61 (c)
automatic R(a)   -         61.80        -            -
manual           -         51.65 (b)    45.92        -
none             -         30.72        13.30        -
none T(d)        -         7.77         7.36         -

(a) Restricted — qpy 1.7, mako 0.2.4, and genshi 0.5.1 offer no support for restricted execution or sandboxing
(b) manual quoting implies no qpy
(c) xml mode
(d) Tweaked — as close to pure python as possible, entire template is reduced to the single brute expression:
<table>${"".join(["<tr>%s</tr>" % "".join(["<td>%s</td>" % (col) for col in row]) for row in table])}</table>
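
On a hypothetical two-row, two-column dataset (the benchmark itself uses 1000 rows of 10 columns), the brute expression above produces:

```python
# Hypothetical tiny dataset to show what the "tweaked" expression builds.
table = [("a", "b"), ("c", "d")]

# The same single brute expression as in footnote (d).
html = "<table>%s</table>" % "".join(
    ["<tr>%s</tr>" % "".join(["<td>%s</td>" % (col) for col in row])
     for row in table])

print(html)
# <table><tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr></table>
```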

template and data

The (automatically quoted) evoque template.

template_string = """<table>
$for{ row in table }
<tr>$for{ col in row }<td>${col}</td>$rof</tr>
$rof
</table>
$test{ table=[("a","b","c","d","<escape-me/>","f","g","h","i","j")] }
"""

BASEROW = ("a","b","c","d","<escape-me/>","f","g","h","i","j")
TABLE_DATA = [ BASEROW for x in range(1000) ]
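
For the "manual" rows in the table above, the template author escapes each value explicitly instead of relying on the engine. A minimal sketch, using Python 3's `html.escape` (the Python 2.6 used for these runs would have offered `cgi.escape` instead):

```python
from html import escape  # cgi.escape on the Python 2.6 of the original runs

row = ("a", "b", "c", "d", "<escape-me/>", "f", "g", "h", "i", "j")

# Each value is quoted by hand before interpolation into the markup.
cells = "".join("<td>%s</td>" % escape(col) for col in row)

print(cells)
# The raw "<escape-me/>" never reaches the output unescaped:
assert "<td><escape-me/></td>" not in cells
assert "<td>&lt;escape-me/&gt;</td>" in cells
```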


The times above were measured on a
MacBook Pro with a 2.4 GHz Intel Core 2 Duo, 2 GB of RAM,
and Mac OS X 10.5, running Python 2.6.1.


Please remember that performance benchmarks are only relevant
when considered within their entire context, and they may vary
enormously between different combinations of hardware and software,
even when those differences appear to be very slight.
In addition, two different systems
never do precisely the same thing,
however simple and apparently identical the timed task may be.