
Trac Troubleshooting

This is a collection of recipes you can use to troubleshoot Trac and help us to fix bugs.

Read the Wiki

Trac comes with extensive documentation in the form of wiki pages. Some pages are present in every Trac installation, such as those that are part of the TracGuide, while most are only found on this site, such as the Frequently Asked Questions.

Some subsystems have a dedicated page with a troubleshooting section.

Then use TracSearch to dig through existing tickets describing issues similar to yours. The Trac MailingLists are usually helpful as well.

And because this is a wiki, feel free to enhance its content while you're at it.

If you're facing a new problem, the next step is to try to find out the root cause.

Check the Logs

The log output is an excellent source of diagnostic information. When a feature is unavailable, it is often because the Trac or plugin Component that implements the feature has failed to load. This can be due to a programming error, a missing dependency or an installation problem. Component loading is logged at DEBUG level when tracd or the web server is restarted. Additional debug information and tracebacks may also be logged.

Set the LogLevel to DEBUG, restart tracd or the web server and then inspect the log. See TracLogging for additional information.
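
For example, a minimal file-based DEBUG setup in the [logging] section of trac.ini could look like the following sketch; the log file name is just an illustration:

[logging]
log_type = file
log_file = trac.log
log_level = DEBUG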

Check the Browser Console

JavaScript error and debug messages will be logged to the browser console. See the documentation for your browser for instructions on accessing the browser console.

Debugging Trac

Python errors

Trac assists with debugging internal errors by providing a stack trace. Provided you have the TRAC_ADMIN privilege, the error page will display the faulty line of code in its context and the values of local variables.

Before diving into the specific debugging techniques you can use, you'll need a basic understanding of Trac's architecture.

Trac uses a model-view-controller approach. The controller is a Python class called a "module", which inherits from the Component class and implements the IRequestHandler interface. A Component is the basic building block in the TracDev/ComponentArchitecture; it implements one or more interfaces. The controller reacts to user requests and prepares the data that a template engine uses to fill the appropriate template, rendering the view that is sent back to the user.
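
To make this concrete, here is a minimal sketch of such a controller; the component name, URL path and template name are invented for this example:

from trac.core import Component, implements
from trac.web.api import IRequestHandler

class HelloWorldModule(Component):
    """A minimal "module": a Component implementing IRequestHandler."""
    implements(IRequestHandler)

    # IRequestHandler methods

    def match_request(self, req):
        # Tell Trac which requests this controller handles.
        return req.path_info == '/helloworld'

    def process_request(self, req):
        # Prepare the data dictionary that the template engine
        # will use to fill the template and render the view.
        data = {'greeting': 'Hello, world!'}
        return 'helloworld.html', data, None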

The TracDev collection of development guides can be useful here.

Inspecting the Template data

Clearsilver's HDF

Before 0.11, Trac used Clearsilver as its template engine. Clearsilver uses a so-called Hierarchical Data Format (HDF), which is prepared by the controller. To inspect this HDF, a simple trick is to append ?hdfdump=1 to the URL, or &hdfdump=1 if other parameters are already present.
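
For example, a report URL that already carries a query string would become something like this (the host and project names are hypothetical):

http://localhost:8000/yourproject/report/1?asc=1&hdfdump=1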

Genshi data dictionary

Starting with 0.11, Trac uses the Genshi template engine. As this template engine is also written in Python, no specific data format is needed and a simple Python dictionary is used to feed the engine. It's easy to inspect any part of this data by modifying the template and inserting ${pprint(...)} statements, possibly between <pre>...</pre> tags. Each modification to a template is detected on the fly, and you'll see the result of the change immediately, provided you have the following setting in your TracIni:

[trac]
auto_reload = yes
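
For example, on a ticket page you could temporarily add the following line to the template to dump the ticket object, assuming a ticket entry is present in the data dictionary there:

<pre>${pprint(ticket)}</pre>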

Modifying the Code

TracStandalone is an indispensable companion when developing or debugging Trac. In particular, check out the -r (auto-reload) feature, which makes Trac notice any change to one of its source files and restart automatically. You can therefore see Trac react immediately to your code changes, provided you don't have syntax errors outside of a method.

In this setup, you're free to try out modifications, dump additional information to the log or insert print statements, an ugly but effective way of debugging.
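
For example, any Component can write to the Trac log through self.log; the class and method names below are illustrative:

from trac.core import Component

class MyPlugin(Component):

    def _do_something(self, req):
        # Temporary debugging aid: dump request details to the log.
        self.log.debug('path_info=%r, args=%r', req.path_info, req.args)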

It may even be possible to run tracd under a debugger, though this is not explored in depth here.
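
For instance, one untested possibility would be to start the standalone module under the standard pdb debugger:

$ python -m pdb trac/web/standalone.py <options>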

It is best to start from a checkout of the pristine source code you want to debug or develop against. Then you can run the standalone server:

$ python scripts/tracd <options>

If you're using trunk (Trac 0.11dev), the SetupTools integration means you'll have to run the standalone module directly:

$ python trac/web/standalone.py <options>

Note that in a fresh working copy, you'll first have to initialize the Trac.egg-info folder:

$ python setup.py egg_info

System Errors

System errors are serious issues like segmentation faults or process hangs. They usually involve the C extensions used by Trac, either because of a bug or a misconfiguration in those libraries, or because of incorrect usage within Trac.

In this case, the first thing to do is to identify the subsystem involved by getting the stack trace of the process. This can be done using gdb on Unix, or WinDbg or MS Developer Studio on Windows.

With gdb you can also make the link between the C stack trace and the Python backtrace. For this, you need to teach gdb a few additional commands: get the gdbinit script and "source" it. You will then be able to issue commands like pystack, pyframe and pylocals.

If you are using a gdb 7.x version built --with-python, you can instead source libpython.py and use py-bt. See also Scripting gdb using Python.
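
A session could then look like this sketch, assuming you saved the script as ~/gdbinit and want to inspect a process with pid 28221:

$ gdb -p 28221
(gdb) source ~/gdbinit
(gdb) pystack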

Debugging Segmentation Faults

Getting a backtrace for tracd:

$ gdb $(which python)
(gdb) run /opt/trac-0.10/scripts/tracd -p 8080 /srv/trac/yourproject

Of course, adapt the paths and options to your setup.

Getting a backtrace for Apache's httpd can be done in a similar way:

$ apachectl -k stop
$ gdb $(which httpd)
(gdb) run -X

When it crashes, issue bt to get a full backtrace. Another very useful command is info shared, which lists the shared libraries actually loaded.

Debugging a Hanging Process

Here it might be interesting to just "attach" to an already running process:

$ ps -ef | grep httpd 
...
$ # note the PID of interest, e.g. 28221
$ gdb
(gdb) attach 28221
Attaching to process 28221
Reading symbols from /opt/apache-2.0.55/bin/httpd...done.
Using host libthread_db library "/lib64/tls/libthread_db.so.1".
Reading symbols from /opt/apache-2.0.55/lib/libaprutil-0.so.0...done.
Loaded symbols for ...
...
(gdb) bt
#0  0x0000002a962905af in __accept_nocancel () from /lib64/tls/libpthread.so.0
#1  0x0000002a95bc1a84 in apr_socket_accept (new=0x7fbfffd538, sock=0x5b27b0,
    connection_context=0xbd2d58) at sockets.c:164
#2  0x0000000000477c4d in unixd_accept (accepted=0x7fbfffd560,
...
(gdb) cont
Continuing.

<here you can press Ctrl-C>

Program received signal SIGINT, Interrupt.
[Switching to Thread 182911242432 (LWP 28221)]
0x0000002a962905af in __accept_nocancel () from /lib64/tls/libpthread.so.0
(gdb) detach
Detaching from program: /opt/apache-2.0.55/bin/httpd, process 28221
(gdb) ^D
$

The Loaded symbols ... part usually provides interesting information. You may find out that the libraries actually used are not the ones you expected, or you might notice the presence of mod_php triggering the load of an alternate sqlite library. That kind of information is also available without gdb by looking at the /proc/28221/maps file (reusing the pid from the above example).
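
For example, to check which sqlite library the process has actually mapped, again reusing the same pid:

$ grep -i sqlite /proc/28221/maps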

The bt command is what gives you the "backtrace" of the program, usually the most interesting bit of information. You can also resume execution of the program (using cont) and interrupt the process a bit later, to see if it remains hung in the same area. If there is no hang (you attached just out of curiosity), you can detach from the process and it will continue to run unaffected.
