subs benchmark

Out of curiosity, a very simple look at the performance of doing
only string substitutions.
For this case we can even compare against string.Template
from the python standard library.
We test a small template, mostly static text with just 6 variable substitutions.

For each run, time is averaged over 2000 renderings, and the
best time of 4 runs is retained.
All times are in ms.
Ideally, you want automatic quoting combined with the lowest time.
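The methodology above (average over 2000 renderings, best of 4 runs, in ms) can be sketched with the standard library's timeit. The template and variable names here are illustrative stand-ins, not the benchmark's own harness:

```python
import timeit
from string import Template

# A toy substitution-only template; the real benchmark uses a larger one.
TEMPLATE = Template("Hello $first, you are logged in as $username.")
DATA = dict(first="Joey", username="joe123")

def render():
    return TEMPLATE.substitute(DATA)

# Average the time over 2000 renderings; retain the best of 4 runs.
RENDERINGS, RUNS = 2000, 4
times_ms = [timeit.timeit(render, number=RENDERINGS) / RENDERINGS * 1000.0
            for _ in range(RUNS)]
best_ms = min(times_ms)
```

The same loop, pointed at each engine's render call in turn, yields directly comparable per-rendering times.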

quoting          string.Template   qpy 1.7   evoque 0.4   mako 0.2.4   genshi_text 0.5.1
automatic                          0.028     0.034                     0.112
automatic R (a)                              0.038
manual           0.038             (b)                    0.107        0.169
none             0.050                       0.025        0.084

(a) R = restricted execution —
qpy 1.7, mako 0.2.4 and genshi 0.5.1
offer no support for restricted execution or sandboxing.
(b) Manual quoting implies not using qpy.
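To illustrate the quoting distinction: string.Template does no quoting of its own, so "manual" means escaping values by hand before substitution. A minimal sketch, using html.escape from Python 3 (the Python 2.6-era equivalent was cgi.escape):

```python
from string import Template
from html import escape  # manual quoting helper; cgi.escape in the 2.x era

tmpl = Template("<p>$comment</p>")
comment = "Thank you <b>very</b> much!"

# "none": the value's markup passes through into the output unescaped.
unquoted = tmpl.substitute(comment=comment)

# "manual": the caller must remember to escape every value themselves.
quoted = tmpl.substitute(comment=escape(comment))
```

Engines in the "automatic" rows apply the escaping step themselves, which is why forgetting it is not a risk there, at some cost in time.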

template and data

The (automatically quoted) evoque template:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>${title}</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="Content-Style-Type" content="text/css; charset=UTF-8" />
<meta http-equiv="imagetoolbar" content="no" />
<style type="text/css">
.signature { color: #977; font-weight: bold; }
</style>
</head>
<body>
<p>Welcome back ${first}, you are logged in as <code>${username}</code>
(last login: ${last}).</p>
<p>Your balance is: ${balance}</p>
<p class="signature">${comment}</p>
</body>
</html>
The data:

DATA = dict(title='Your balance', first="Joey", username='joe123',
    last="2008-02-29", balance=789.19,
    comment="Thank you <b>very</b> much!")
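Since string.Template also understands the ${name} form used above, the same DATA dict can drive a minimal, unquoted stand-in for one line of the template — which is essentially what the string.Template column measures:

```python
from string import Template

# One line of the page as a string.Template; substitution only, no quoting.
page = Template("<p>Your balance is: ${balance}</p>")

DATA = dict(title='Your balance', first="Joey", username='joe123',
            last="2008-02-29", balance=789.19,
            comment="Thank you <b>very</b> much!")

# substitute() accepts a mapping; extra keys are simply ignored.
html = page.substitute(DATA)
```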


The times above were measured on a
MacBook Pro with a 2.4 GHz Intel Core 2 Duo, 2 GB of RAM,
and Mac OS X 10.5, running Python 2.6.1.


Please remember that performance benchmarks are only relevant
when considered within an entire context, and they may vary
enormously between different combinations of hardware and software,
even if those differences appear to be very slight.
In addition, two different systems
never do precisely the same thing,
however simple and apparently identical the timed task may be.