Storage APIs

Storage interfaces

Various storage implementations provide the standard storage interfaces; they differ primarily in their constructors.

Application code rarely calls storage methods directly; the few it uses are generally reached indirectly through database objects. Interface-defined methods that are called only internally by ZODB aren’t shown below.


interface ZODB.interfaces.IStorage

A storage is responsible for storing and retrieving data of objects.

Consistency and locking

When transactions are committed, a storage assigns monotonically increasing transaction identifiers (tids) to the transactions and to the object versions written by the transactions. ZODB relies on this to decide if data in object caches are up to date and to implement multi-version concurrency control.

There are methods in IStorage and in derived interfaces that provide information about the current revisions (tids) for objects or for the database as a whole. It is critical for the proper working of ZODB that the resulting tids are increasing with respect to a given object identifier and to the database as a whole. That is, if there are two results for an object or for the database, R1 and R2, such that R1 is returned before R2, then the tid in R2 must be greater than or equal to the tid in R1. (When thinking about results for the database, think of these as results for all objects in the database.)

This implies some sort of locking strategy. The key method is tpc_finish, which causes new tids to be generated and also, through the callback passed to it, returns the new current tids for the objects stored in a transaction and for the database as a whole.

The IStorage methods affected are lastTransaction, load, store, and tpc_finish. Derived interfaces may introduce additional methods.
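The monotonicity requirement is workable because tids are 8-byte, big-endian strings, so comparing them as bytes gives the same order as comparing them as numbers. A minimal stdlib sketch of that property (the counter-based encoding here is illustrative only; real ZODB tids are derived from timestamps):

```python
import struct

def make_tid(n):
    """Encode an integer as an 8-byte, big-endian transaction id.

    Illustrative only: real tids are timestamp-based, but they are
    also 8-byte big-endian values, so bytes order == numeric order.
    """
    return struct.pack(">Q", n)

t1, t2 = make_tid(41), make_tid(42)
assert len(t1) == 8
assert t1 < t2  # lexicographic byte order tracks numeric order
```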


__len__()

The approximate number of objects in the storage

This is used solely for informational purposes.


close()

Close the storage.

Finalize the storage, releasing any external resources. The storage should not be used after this method is called.

Note that databases close their storages when they’re closed, so this method isn’t generally called from application code.


getName()

The name of the storage

The format and interpretation of this name is storage dependent. It could be a file name, a database name, etc.

This is used solely for informational purposes.


getSize()

An approximate size of the database, in bytes.

This is used solely for informational purposes.

history(oid, size=1)

Return a sequence of history information dictionaries.

Up to size objects (including no objects) may be returned.

The information provides a log of the changes made to the object. Data are reported in reverse chronological order.

Each dictionary has the following keys:

“time”: UTC seconds since the epoch (as in time.time) that the object revision was committed.
“tid”: The transaction identifier of the transaction that committed the version.
“serial”: An alias for tid, which is expected by older clients.
“user_name”: The bytes user identifier, if any (or an empty bytes string), of the user on whose behalf the revision was committed.
“description”: The bytes transaction description for the transaction that committed the revision.
“size”: The size of the revision data record.

If the transaction had extension items, then these items are also included if they don’t conflict with the keys above.
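For illustration, a single history entry might look like the following sketch (all values here are hypothetical):

```python
# A hypothetical history entry; the key names follow the list above.
entry = {
    "time": 1700000000.0,                          # UTC seconds since the epoch
    "tid": b"\x03\xd1\x00\x00\x00\x00\x00\x01",    # committing transaction's id
    "serial": b"\x03\xd1\x00\x00\x00\x00\x00\x01", # alias for tid
    "user_name": b"alice",                         # may be b""
    "description": b"update inventory",
    "size": 123,                                   # size of the revision data record
}
assert entry["serial"] == entry["tid"]
```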


isReadOnly()

Test whether a storage allows committing new transactions.

For a given storage instance, this method always returns the same value: whether a storage is read-only is a static property.


lastTransaction()

Return the id of the last committed transaction.

If no transactions have been committed, return a string of 8 null (zero) bytes.

pack(pack_time, referencesf)

Pack the storage

It is up to the storage to interpret this call; the general idea is that the storage frees space by:

  • discarding object revisions that were old and not current as of the given pack time.
  • garbage collecting objects that aren’t reachable from the root object via revisions remaining after discarding revisions that were not current as of the pack time.

The pack time is given as a UTC time in seconds since the epoch.

The second argument is a function that should be used to extract object references from database records. This is needed to determine which objects are referenced from object revisions.
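For example, to make revisions older than one week eligible for removal, the pack time can be computed as below (the pack call itself is shown only as a comment; storage is an assumed variable, and referencesf is typically ZODB.serialize.referencesf):

```python
import time

# Pack time as UTC seconds since the epoch: here, one week ago,
# so only the last week of history is kept.
pack_time = time.time() - 7 * 24 * 3600

# With a real storage this would then be called as:
#   storage.pack(pack_time, referencesf)
```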


sortKey()

Sort key used to order distributed transactions.

When a transaction involves multiple storages, 2-phase commit operations are applied in sort-key order. The key must be unique among the storages used in a transaction. Obviously, a storage can’t ensure this, but it should construct the sort key so it has a reasonable chance of being unique.

The result must be a string.
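For example, a transaction manager can order participating storages by their sort keys (the stub class and key values here are hypothetical stand-ins for real storages):

```python
class StubStorage:
    """Stand-in exposing only the sortKey() method."""
    def __init__(self, key):
        self._key = key
    def sortKey(self):
        return self._key

# Two-phase commit operations are applied in sort-key order.
storages = [StubStorage("zeo://db2"), StubStorage("file:/var/db/Data.fs")]
ordered = sorted(storages, key=lambda s: s.sortKey())
assert [s.sortKey() for s in ordered] == ["file:/var/db/Data.fs", "zeo://db2"]
```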


interface ZODB.interfaces.IStorageIteration

API for iterating over the contents of a storage.

iterator(start=None, stop=None)

Return an IStorageTransactionInformation iterator.

If the start argument is not None, then iteration will start with the first transaction whose identifier is greater than or equal to start.

If the stop argument is not None, then iteration will end with the last transaction whose identifier is less than or equal to stop.

The iterator provides access to the data as available at the time when the iterator was retrieved.


interface ZODB.interfaces.IStorageUndoable

A storage supporting transactional undo.

undoInfo(first=0, last=-20, specification=None)

Return a sequence of descriptions for undoable transactions.

This is like undoLog(), except for the specification argument. If given, specification is a dictionary, and undoInfo() synthesizes a filter function f for undoLog() such that f(desc) returns true for a transaction description mapping desc if and only if desc maps each key in specification to the same value specification maps that key to. In other words, only extensions (or supersets) of specification match.

ZEO note: undoInfo() passes the specification argument from a ZEO client to its ZEO server (while a ZEO client ignores any filter argument passed to undoLog()).
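The synthesized filter can be sketched as follows (make_filter is a hypothetical name, not part of the API):

```python
def make_filter(specification):
    """Build the filter undoInfo() synthesizes for undoLog().

    A description matches only if it maps every key in the
    specification to the same value, i.e. it is a superset of
    the specification.
    """
    def f(desc):
        return all(
            key in desc and desc[key] == value
            for key, value in specification.items()
        )
    return f

f = make_filter({"user_name": b"alice"})
assert f({"user_name": b"alice", "description": b"fix"})
assert not f({"user_name": b"bob"})
```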

undoLog(first, last, filter=None)

Return a sequence of descriptions for undoable transactions.

Application code should call undoLog() on a DB instance instead of on the storage directly.

A transaction description is a mapping with at least these keys:

“time”: The time, as float seconds since the epoch, when the transaction committed.
“user_name”: The bytes value of the .user attribute on that transaction.
“description”: The bytes value of the .description attribute on that transaction.
“id”: A bytes value uniquely identifying the transaction to the storage. If it’s desired to undo this transaction, this is the transaction_id to pass to undo().

In addition, if any name+value pairs were added to the transaction by setExtendedInfo(), those may be added to the transaction description mapping too (for example, FileStorage’s undoLog() does this).

filter is a callable, taking one argument. A transaction description mapping is passed to filter for each potentially undoable transaction. The sequence returned by undoLog() excludes descriptions for which filter returns a false value. By default, filter always returns a true value.

ZEO note: Arbitrary callables cannot be passed from a ZEO client to a ZEO server, and a ZEO client’s implementation of undoLog() ignores any filter argument that may be passed. ZEO clients should use the related undoInfo() method instead (if they want to do filtering).

Now picture a list containing descriptions of all undoable transactions that pass the filter, most recent transaction first (at index 0). The first and last arguments specify the slice of this (conceptual) list to be returned:

first: The index of the first transaction description in the slice. It must be >= 0.
last: If >= 0, first:last acts like a Python slice, selecting the descriptions at indices first, first+1, ..., up to but not including index last. At most last-first descriptions are in the slice, and last should be at least as large as first in this case. If last is less than 0, then abs(last) is taken to be the maximum number of descriptions in the slice (which still begins at index first). When last < 0, the same effect could be gotten by passing the positive first-last for last instead.
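The first/last semantics can be sketched against a plain Python list of descriptions, most recent first (slice_descriptions is a hypothetical helper, not part of the API):

```python
def slice_descriptions(descs, first, last):
    """Apply undoLog()'s first/last semantics to a plain list.

    descs is ordered most recent first. If last >= 0 this behaves
    like descs[first:last]; if last < 0, abs(last) is the maximum
    number of entries returned, starting at index first.
    """
    if last < 0:
        return descs[first:first - last]  # first + abs(last)
    return descs[first:last]

descs = ["t9", "t8", "t7", "t6", "t5"]
assert slice_descriptions(descs, 0, 2) == ["t9", "t8"]
assert slice_descriptions(descs, 1, -2) == ["t8", "t7"]
```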


interface ZODB.interfaces.IStorageCurrentRecordIteration

Iterate over the records in a storage

Use like this:

>>> next = None
>>> while 1:
...     oid, tid, data, next = storage.record_iternext(next)
...     # do things with oid, tid, and data
...     if next is None:
...         break
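The loop above can also be wrapped in a generator. The stub storage below stands in for a real one so the sketch is self-contained; only the record_iternext call pattern is taken from the interface:

```python
def iter_current_records(storage):
    """Yield (oid, tid, data) for each current record in storage."""
    token = None
    while True:
        oid, tid, data, token = storage.record_iternext(token)
        yield oid, tid, data
        if token is None:
            break

class StubStorage:
    """Stand-in exposing record_iternext over a list of records."""
    def __init__(self, records):
        self.records = records
    def record_iternext(self, token):
        i = 0 if token is None else token
        oid, tid, data = self.records[i]
        nxt = i + 1 if i + 1 < len(self.records) else None
        return oid, tid, data, nxt

records = [(b"\x00" * 8, b"\x00" * 7 + b"\x01", b"state")]
assert list(iter_current_records(StubStorage(records))) == records
```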


interface ZODB.interfaces.IBlobStorage

A storage supporting BLOBs.


temporaryDirectory()

Return a directory that should be used for uncommitted blob data.

If blobs use this directory, then commits can be performed with a simple rename.


interface ZODB.interfaces.IStorageRecordInformation

Provide information about a single storage record

data = <zope.interface.interface.Attribute object>

The data record, bytes

data_txn = <zope.interface.interface.Attribute object>

The previous transaction id, bytes

oid = <zope.interface.interface.Attribute object>

The object id, bytes

tid = <zope.interface.interface.Attribute object>

The transaction id, bytes


interface ZODB.interfaces.IStorageTransactionInformation

Provide information about a storage transaction.

Can be iterated over to retrieve the records modified in the transaction.

Note that this may contain a status field used by FileStorage to support packing. At some point, this will go away when FileStorage has a better pack algorithm.


__iter__()

Iterate over the transaction’s records, given as IStorageRecordInformation objects.

tid = <zope.interface.interface.Attribute object>

Transaction id

Included storages


class ZODB.FileStorage.FileStorage.FileStorage(file_name, create=False, read_only=False, stop=None, quota=None, pack_gc=True, pack_keep_old=True, packer=None, blob_dir=None)

Storage that saves data in a file

__init__(file_name, create=False, read_only=False, stop=None, quota=None, pack_gc=True, pack_keep_old=True, packer=None, blob_dir=None)

Create a file storage

  • file_name (str) – Path to the data file
  • create (bool) – Flag indicating whether a file should be created even if it already exists.
  • read_only (bool) – Flag indicating whether the file is read only. Only one process is able to open the file non-read-only.
  • stop (bytes) – Time-travel transaction id. When the file is opened, data will be read up to the given transaction id. Transaction ids correspond to times, and you can compute the transaction id for a given time using TimeStamp.
  • quota (int) – File-size quota
  • pack_gc (bool) – Flag indicating whether garbage collection should be performed when packing.
  • pack_keep_old (bool) – Flag indicating whether old data files should be retained after packing, as a .old file.
  • packer (callable) – An alternative packer.
  • blob_dir (str) – A blob-directory path name. Blobs will be supported if this option is provided.

A file storage stores data in a single file that behaves like a traditional transaction log. New data records are appended to the end of the file. Periodically, the file is packed to free up space. When this is done, current records as of the pack time or later are copied to a new file, which replaces the old file.

FileStorages keep in-memory indexes mapping object oids to the location of their current records in the file. Back pointers to previous records allow access to non-current records from the current records.

In addition to the data file, some ancillary files are created. These can be lost without affecting data integrity; however, losing the index file may cause extremely slow startup. Each has a name that’s a concatenation of the original file name and a suffix. The files are listed below by suffix:

.index
Snapshot of the in-memory index. These are created on shutdown, on packing, and after rebuilding an index when one was not found. For large databases, creating a file-storage object without an index file can take a very long time, because it’s necessary to scan the data file to build the index.
.lock
A lock file preventing multiple processes from opening a file storage in non-read-only mode.
.tmp
A file used to store data being committed in the first phase of 2-phase commit.
.index_tmp
A temporary file used when saving the in-memory index, to avoid overwriting an existing index until a new index has been fully saved.
.pack
A temporary file written while packing, containing current records as of and after the pack time.
.old
The previous database file after a pack.

When the database is packed, current records as of the pack time and later are written to the .pack file. At the end of packing, the .old file is removed, if it exists, the data file is renamed to the .old file, and finally the .pack file is renamed to the data file.

interface ZODB.FileStorage.interfaces.IFileStoragePacker
__call__(storage, referencesf, stop, gc)

Pack the file storage into a new file

  • storage (FileStorage) – The storage object to be packed
  • referencesf (callable) – A function that extracts object references from a pickle bytes string. This is usually ZODB.serialize.referencesf.
  • stop (bytes) – A transaction id representing the time at which to stop packing.
  • gc (bool) – A flag indicating whether garbage collection should be performed.

The new file will have the same name as the old file with .pack appended. (The packer can get the old file name via storage._file.name.) If blobs are supported, that is, if the storage’s blob_dir attribute is not None or empty, then a .removed file must be created in the blob directory. This file contains records of the form:


or, of the form:


If packing is unnecessary, or would not change the file, then no pack or removed files are created and None is returned; otherwise a tuple is returned with:

  • the size of the packed file, and
  • the packed index

If and only if packing was necessary (non-None) and there was no error, then the commit lock must be acquired. In addition, it is up to FileStorage to:

  • Rename the .pack file, and
  • process the blob_dir/.removed file by removing the blobs corresponding to the file records.

FileStorage text configuration

File storages are configured using the filestorage section:

  <filestorage>
    path Data.fs
  </filestorage>

which accepts the following options:

blob-dir (existing-dirpath)
If supplied, the file storage will provide blob support and this is the name of a directory to hold blob data. The directory will be created if it doesn’t exist. If no value (or an empty value) is provided, then no blob support will be provided. (You can still use a BlobStorage to provide blob support.)
create (boolean)
Flag that indicates whether the storage should be truncated if it already exists.
pack-gc (boolean, default: true)
If false, then no garbage collection will be performed when packing. This can make packing go much faster and can avoid problems when objects are referenced only from other databases.
pack-keep-old (boolean, default: true)
If true, a copy of the database before packing is kept in a “.old” file.
packer (string)
The dotted name (dotted module name and object name) of a packer object. This is used to provide an alternative pack implementation.
path (existing-dirpath, required)
Path name to the main storage file. The names for supplemental files, including index and lock files, will be computed from this.
quota (byte-size)
Maximum allowed size of the storage file. Operations which would cause the size of the storage to exceed the quota will result in a ZODB.FileStorage.FileStorageQuotaError being raised.
read-only (boolean)
If true, only reads may be executed against the storage. Note that the “pack” operation is not considered a write operation and is still allowed on a read-only filestorage.
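A fuller section combining several of these options might look like this (paths and values are illustrative):

```
<filestorage>
  path /var/zodb/Data.fs
  blob-dir /var/zodb/blobs
  pack-gc false
  quota 10GB
</filestorage>
```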


class ZODB.MappingStorage.MappingStorage(name='MappingStorage')

In-memory storage implementation

Note that this implementation is somewhat naive and inefficient with regard to locking. It is meant primarily as a simple illustration of a storage implementation; it’s also useful for testing and exploration, where scalability and efficiency are unimportant.


__init__(name='MappingStorage')

Create a mapping storage

The name parameter is used by the getName() and sortKey() methods.

MappingStorage text configuration

Mapping storages are configured using the mappingstorage section:

  <mappingstorage>
  </mappingstorage>

name (string, default: Mapping Storage)
The storage name, used by the getName() and sortKey() methods.


class ZODB.DemoStorage.DemoStorage(name=None, base=None, changes=None, close_base_on_close=None, close_changes_on_close=None)

A storage that stores changes against a read-only base database

This storage was originally meant to support distribution of application demonstrations with populated read-only databases (on CDROM) and writable in-memory databases.

Demo storages are extremely convenient for testing, where the setup of a base database can be shared by many tests.

Demo storages are also handy for staging applications, where a read-only snapshot of a production database (often accomplished using a beforestorage) is combined with a changes database implemented with a FileStorage.

__init__(name=None, base=None, changes=None, close_base_on_close=None, close_changes_on_close=None)

Create a demo storage

  • name (str) – The storage name used by the getName() and sortKey() methods.
  • base (object) – base storage
  • changes (object) – changes storage
  • close_base_on_close (bool) – A flag indicating whether the base database should be closed when the demo storage is closed.
  • close_changes_on_close (bool) – A flag indicating whether the changes database should be closed when the demo storage is closed.

If a base database isn’t provided, a MappingStorage will be constructed and used.

If close_base_on_close isn’t specified, it will be True if a base database was provided and False otherwise.

If a changes database isn’t provided, a MappingStorage will be constructed and used and blob support will be provided using a temporary blob directory.

If close_changes_on_close isn’t specified, it will be True if a changes database was provided and False otherwise.
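The defaulting of the two close flags can be sketched as a small helper (default_close_flags is hypothetical, not part of the API):

```python
def default_close_flags(base=None, changes=None,
                        close_base_on_close=None,
                        close_changes_on_close=None):
    """Mirror DemoStorage's defaulting rule: close a substorage on
    close only if the caller supplied that substorage."""
    if close_base_on_close is None:
        close_base_on_close = base is not None
    if close_changes_on_close is None:
        close_changes_on_close = changes is not None
    return close_base_on_close, close_changes_on_close

assert default_close_flags() == (False, False)
assert default_close_flags(base=object()) == (True, False)
```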


pop()

Close the changes database and return the base.


push(changes=None)

Create a new demo storage using this storage as a base.

The given changes are used as the changes for the returned storage, and False is passed as close_base_on_close.

DemoStorage text configuration

Demo storages are configured using the demostorage section:

  <demostorage>
    <filestorage base>
      path base.fs
    </filestorage>
    <mappingstorage changes>
      name Changes
    </mappingstorage>
  </demostorage>

demostorage sections can contain up to 2 storage subsections, named base and changes, specifying the demo storage’s base and changes storages. See ZODB.DemoStorage.DemoStorage.__init__() for more on the base and changes storages.


name (string)
The storage name, used by the getName() and sortKey() methods.

Noteworthy non-included storages

A number of important ZODB storages are distributed separately.

Base storages

Unlike the included storages, all the implementations listed in this section allow multiple processes to share the same database.


NEO can spread data among several computers for load-balancing and multi-master replication. It also supports asynchronous replication to off-site NEO databases for further disaster resistance without affecting local operation latency.

For more information, see


RelStorage stores data in relational databases. This is especially useful when you have requirements or existing infrastructure for storing data in relational databases.

For more information, see


ZEO is a client-server database implementation for ZODB. To use ZEO, you run a ZEO server, and use ZEO clients in your application.

For more information, see

Optional layers


ZRS provides replication from one database to another. It’s most commonly used with ZEO. With ZRS, you create a ZRS primary database around a FileStorage and in a separate process, you create a ZRS secondary storage around any storage. As transactions are committed on the primary, they’re copied asynchronously to secondaries.

For more information, see


zlibstorage compresses database records using the compression algorithm used by gzip.

For more information, see


beforestorage provides a point-in-time view of a database that might be changing. This can be useful to provide a non-changing view of a production database for use with a DemoStorage.

For more information, see


cipher.encryptingstorage provides compression and encryption of database records.

For more information, see