SQLAlchemy 0.3 Documentation

Version: 0.3.1 Last Updated: 11/12/06 18:46:51
Generated Documentation

module sqlalchemy.sql

defines the base components of SQL expression trees.

Module Functions

def alias(*args, **params)

def and_(*clauses)

joins a list of clauses together by the AND operator. the & operator can be used as well.
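A short sketch of this in use (the table and column names here are illustrative only, not from the original docs):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, and_

# illustrative table definition
meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40)))

# and_() joins the clauses with AND; the & operator form is equivalent
clause = and_(users.c.id > 5, users.c.name != None)
same = (users.c.id > 5) & (users.c.name != None)
print(clause)
```

Both expressions compile to the same AND of the two conditions.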

def asc(column)

return an ascending ORDER BY clause element, e.g.:

order_by = [asc(table1.mycol)]

def between(ctest, cleft, cright)

returns a BETWEEN predicate clause (ctest BETWEEN cleft AND cright).

this is better called off a ColumnElement directly, i.e.

column.between(value1, value2).
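A minimal sketch of both forms (the table name is illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, between

meta = MetaData()
items = Table('items', meta, Column('price', Integer))

# preferred: call between() directly off the column
pred = items.c.price.between(10, 20)
# equivalent standalone form
pred2 = between(items.c.price, 10, 20)
print(pred)
```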

def bindparam(key, value=None, type=None, shortname=None)

creates a bind parameter clause with the given key.

An optional default value can be specified by the value parameter, and the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this bind parameter.
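A sketch of a bind parameter used in a comparison; the 'userid' key is illustrative, and a value for it would be supplied at execution time:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, bindparam

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer),
              Column('name', String(40)))

# a named placeholder; the key 'userid' identifies the parameter
expr = users.c.id == bindparam('userid')
print(expr)  # the parameter renders in the dialect's paramstyle
```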

def case(whens, value=None, else_=None)

SQL CASE statement -- whens are a sequence of pairs to be translated into "when / then" clauses; optional [value] for simple case statements, and [else_] for case defaults

def cast(clause, totype, **kwargs)

returns CAST function CAST(clause AS totype). Use with a sqlalchemy.types.TypeEngine object, e.g. cast(table.c.unit_price * table.c.qty, Numeric(10,4)) or cast(table.c.timestamp, DATE)

def column(text, table=None, type=None)

returns a textual column clause, relative to a table. this is also the primitive version of a schema.Column which is a subclass.

def delete(table, whereclause=None, **kwargs)

returns a DELETE clause element.

This can also be called from a table directly via the table's delete() method.

'table' is the table to be deleted from. 'whereclause' is a ClauseElement describing the WHERE condition of the DELETE statement.

def desc(column)

return a descending ORDER BY clause element, e.g.:

order_by = [desc(table1.mycol)]

def exists(*args, **params)

def extract(field, expr)

return extract(field FROM expr)

def insert(table, values=None, **kwargs)

returns an INSERT clause element.

This can also be called from a table directly via the table's insert() method.

'table' is the table to be inserted into.

'values' is a dictionary which specifies the column specifications of the INSERT, and is optional. If left as None, the column specifications are determined from the bind parameters used during the compile phase of the INSERT statement. If the bind parameters also are None during the compile phase, then the column specifications will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each key may reference one of: a literal data value (i.e. string, number, etc.), a Column object, or a SELECT statement. If a SELECT statement is specified which references this INSERT statement's table, the statement will be correlated against the INSERT statement.
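A sketch of the no-'values' case described above, using the table's insert() method and illustrative names; since neither 'values' nor compile-time bind parameters are given, the column list is generated from the full list of table columns:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40)))

# no 'values' and no bind parameters: all table columns appear
stmt = users.insert()
print(stmt)
```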

def join(left, right, onclause=None, **kwargs)

return a JOIN clause element (regular inner join).

left - the left side of the join

right - the right side of the join

onclause - optional criterion for the ON clause; if not given, it is derived from foreign key relationships

To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.
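A sketch with an omitted onclause, which is then derived from the foreign key relationship (the tables are illustrative):

```python
from sqlalchemy import (MetaData, Table, Column, Integer,
                        ForeignKey, join)

meta = MetaData()
users = Table('users', meta, Column('id', Integer, primary_key=True))
addresses = Table('addresses', meta,
                  Column('id', Integer, primary_key=True),
                  Column('user_id', Integer, ForeignKey('users.id')))

# onclause omitted: derived from the ForeignKey on addresses.user_id
j = join(users, addresses)
print(j)
```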

def literal(value, type=None)

returns a literal clause, bound to a bind parameter.

literal clauses are created automatically when used as the right-hand side of a boolean or math operation against a column object. use this function when a literal is needed on the left-hand side (and optionally on the right as well).

the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this literal.
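A sketch of the left-hand-side case described above (the table name is illustrative); comparing a column to a plain value binds it automatically, but a bare value on the left must be wrapped:

```python
from sqlalchemy import MetaData, Table, Column, Integer, literal

meta = MetaData()
t = Table('t', meta, Column('x', Integer))

# t.c.x > 7 binds 7 automatically; for a literal on the
# left-hand side, wrap the value explicitly
expr = literal(7) < t.c.x
print(expr)
```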

def not_(clause)

returns a negation of the given clause, i.e. NOT(clause). the ~ operator can be used as well.

def null()

returns a Null object, which compiles to NULL in a sql statement.

def or_(*clauses)

joins a list of clauses together by the OR operator. the | operator can be used as well.

def outerjoin(left, right, onclause=None, **kwargs)

return an OUTER JOIN clause element.

left - the left side of the join right - the right side of the join onclause - optional criterion for the ON clause, is derived from foreign key relationships otherwise

To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.

def select(columns=None, whereclause=None, from_obj=[], **kwargs)

returns a SELECT clause element.

this can also be called via the table's select() method.

'columns' is a list of columns and/or selectable items to select columns from.

'whereclause' is a text or ClauseElement expression which will form the WHERE clause.

'from_obj' is a list of additional "FROM" objects, such as Join objects, which will extend or override the default "from" objects created from the column list and the whereclause.

**kwargs - additional parameters for the Select object.
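A minimal sketch using the table's select() method mentioned above (table and column names are illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40)))

# selects all columns of the table
s = users.select()
print(s)
```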

def subquery(alias, *args, **kwargs)

def table(name, *columns)

returns a table clause. this is a primitive version of the schema.Table object, which is a subclass of this object.

def text(text, engine=None, *args, **kwargs)

creates literal text to be inserted into a query.

When constructing a query from a select(), update(), insert() or delete(), using plain strings for argument values will usually result in text objects being created automatically. Use this function when creating textual clauses outside of other ClauseElement objects, or optionally wherever plain text is to be used.

Arguments include:

text - the text of the SQL statement to be created. use :<param> to specify bind parameters; they will be compiled to their engine-specific format.

engine - an optional engine to be used for this text query.

bindparams - a list of bindparam() instances which can be used to define the types and/or initial values for the bind parameters within the textual statement; the keynames of the bindparams must match those within the text of the statement. The types will be used for pre-processing on bind values.

typemap - a dictionary mapping the names of columns represented in the SELECT clause of the textual statement to type objects, which will be used to perform post-processing on columns within the result set (for textual statements that produce result sets).
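A minimal sketch of a textual clause with a bind parameter (the statement and the :uid key are illustrative); the bindparams and typemap options described above would further annotate such a clause:

```python
from sqlalchemy import text

# :uid is a bind parameter; it compiles to the target
# dialect's parameter format
t = text("SELECT id, name FROM users WHERE id = :uid")
print(t)
```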

def union(*selects, **params)

def union_all(*selects, **params)

def update(table, whereclause=None, values=None, **kwargs)

returns an UPDATE clause element.

This can also be called from a table directly via the table's update() method.

'table' is the table to be updated.

'whereclause' is a ClauseElement describing the WHERE condition of the UPDATE statement.

'values' is a dictionary which specifies the SET conditions of the UPDATE, and is optional. If left as None, the SET conditions are determined from the bind parameters used during the compile phase of the UPDATE statement. If the bind parameters also are None during the compile phase, then the SET conditions will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each key may reference one of: a literal data value (i.e. string, number, etc.), a Column object, or a SELECT statement. If a SELECT statement is specified which references this UPDATE statement's table, the statement will be correlated against the UPDATE statement.
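A sketch of the no-'values' case via the table's update() method (names are illustrative); with no 'values' and no compile-time bind parameters, the SET clause is generated from the full list of table columns:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40)))

# no 'values' and no bind parameters: SET covers all columns
stmt = users.update()
print(stmt)
```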

back to section top

class AbstractDialect(object)

represents the behavior of a particular database. Used by Compiled objects.

back to section top

class Alias(FromClause)

def __init__(self, selectable, alias=None)

def accept_visitor(self, visitor)

engine = property()

def named_with_column(self)

back to section top

class ClauseElement(object)

base class for elements of a programmatically constructed SQL expression.

def accept_visitor(self, visitor)

accept a ClauseVisitor and call the appropriate visit_xxx method.

def compare(self, other)

compare this ClauseElement to the given ClauseElement.

Subclasses should override the default behavior, which is a straight identity comparison.

def compile(self, engine=None, parameters=None, compiler=None, dialect=None)

compile this SQL expression.

Uses the given Compiler, or the given AbstractDialect or Engine to create a Compiler. If no compiler arguments are given, tries to use the underlying Engine this ClauseElement is bound to to create a Compiler, if any. Finally, if there is no bound Engine, uses an ANSIDialect to create a default Compiler.

'parameters' is a dictionary representing the default bind parameters to be used with the statement. If 'parameters' is a list, it is assumed to be a list of dictionaries, and the first dictionary in the list is used to compile against. The bind parameters can in some cases determine the output of the compilation, such as for UPDATE and INSERT statements, where the bind parameters that are present determine the SET and VALUES clauses of those statements.

def copy_container(self)

return a copy of this ClauseElement, iff this ClauseElement contains other ClauseElements.

If this ClauseElement is not a container, it should return self. This is used to create copies of expression trees that still reference the same "leaf nodes". The new structure can then be restructured without affecting the original.

engine = property()

attempts to locate an Engine within this ClauseElement structure, or returns None if none is found.

def execute(self, *multiparams, **params)

compile and execute this ClauseElement.

def scalar(self, *multiparams, **params)

compile and execute this ClauseElement, returning the result's scalar representation.

back to section top

class ClauseParameters(dict)

represents a dictionary/iterator of bind parameter key names/values.

Tracks the original BindParam objects as well as the keys/position of each parameter, and can return parameters as a dictionary or a list. Will process parameter values according to the TypeEngine objects present in the BindParams.

def __init__(self, dialect, positional=None)

def get_original(self, key)

returns the given parameter as it was originally placed in this ClauseParameters object, without any type conversion.

def get_original_dict(self)

def get_raw_dict(self)

def get_raw_list(self)

def set_parameter(self, bindparam, value)

back to section top

class ClauseVisitor(object)

Defines the visiting of ClauseElements.

def visit_alias(self, alias)

def visit_binary(self, binary)

def visit_bindparam(self, bindparam)

def visit_calculatedclause(self, calcclause)

def visit_cast(self, cast)

def visit_clauselist(self, list)

def visit_column(self, column)

def visit_compound(self, compound)

def visit_compound_select(self, compound)

def visit_fromclause(self, fromclause)

def visit_function(self, func)

def visit_join(self, join)

def visit_label(self, label)

def visit_null(self, null)

def visit_select(self, select)

def visit_table(self, column)

def visit_textclause(self, textclause)

def visit_typeclause(self, typeclause)

back to section top

class ColumnCollection(OrderedProperties)

an ordered dictionary that stores a list of ColumnElement instances.

overrides the __eq__() method to produce SQL clauses between sets of correlated columns.

def add(self, column)

add a column to this collection.

the key attribute of the column will be used as the hash key for this dictionary.

back to section top

class ColumnElement(Selectable,_CompareMixin)

represents a column element within the list of a Selectable's columns. A ColumnElement can be directly associated with a TableClause, can be a free-standing textual column with no table, or can be a "proxy" column, indicating it is placed on a Selectable such as an Alias or Select statement and ultimately corresponds to a TableClause-attached column (or, in the case of a CompoundSelect, a proxy ColumnElement may correspond to several TableClause-attached columns).

columns = property()

Columns accessor which just returns self, to provide compatibility with Selectable objects.

foreign_key = property()

foreign_keys = property()

foreign key accessor. points to a ForeignKey object which represents a Foreign Key placed on this column's ultimate ancestor.

orig_set = property()

a Set containing TableClause-bound, non-proxied ColumnElements for which this ColumnElement is a proxy. In all cases except for a column proxied from a Union (i.e. CompoundSelect), this set will be just one element.

primary_key = property()

primary key flag. indicates if this Column represents part or whole of a primary key.

def shares_lineage(self, othercolumn)

returns True if the given ColumnElement has a common ancestor to this ColumnElement.

back to section top

class Compiled(ClauseVisitor)

represents a compiled SQL expression. the __str__ method of the Compiled object should produce the actual text of the statement. Compiled objects are specific to the database library that created them, and also may or may not be specific to the columns referenced within a particular set of bind parameters. In no case should the Compiled object be dependent on the actual values of those bind parameters, even though it may reference those values as defaults.

def __init__(self, dialect, statement, parameters, engine=None)

construct a new Compiled object.

statement - ClauseElement to be compiled

parameters - optional dictionary indicating a set of bind parameters specified with this Compiled object. These parameters are the "default" values corresponding to the ClauseElement's _BindParamClauses when the Compiled is executed. In the case of an INSERT or UPDATE statement, these parameters will also result in the creation of new _BindParamClause objects for each key and will also affect the generated column list in an INSERT statement and the SET clauses of an UPDATE statement. The keys of the parameter dictionary can either be the string names of columns or _ColumnClause objects.

engine - optional Engine to compile this statement against

def compile(self)

def execute(self, *multiparams, **params)

execute this compiled object.

def get_params(self, **params)

returns the bind params for this compiled object.

Will start with the default parameters specified when this Compiled object was first constructed, and will override those values with those sent via **params, which are key/value pairs. Each key should match one of the _BindParamClause objects compiled into this object; either the "key" or "shortname" property of the _BindParamClause.

def scalar(self, *multiparams, **params)

execute this compiled object and return the result's scalar value.

back to section top

class CompoundSelect(_SelectBaseMixin,FromClause)

def __init__(self, keyword, *selects, **kwargs)

def accept_visitor(self, visitor)

name = property()

back to section top

class Executor(object)

represents a 'thing that can produce Compiled objects and execute them'.

def compiler(self, statement, parameters, **kwargs)

return a Compiled object for the given statement and parameters.

def execute_compiled(self, compiled, parameters, echo=None, **kwargs)

execute a Compiled object.

back to section top

class FromClause(Selectable)

represents an element that can be used within the FROM clause of a SELECT statement.

def __init__(self, name=None)

def accept_visitor(self, visitor)

def alias(self, name=None)

c = property()

columns = property()

def corresponding_column(self, column, raiseerr=True, keys_ok=False)

given a ColumnElement, return the ColumnElement object from this Selectable which corresponds to that original Column via a proxy relationship.

def count(self, whereclause=None, **params)

def default_order_by(self)

foreign_keys = property()

def join(self, right, *args, **kwargs)

def named_with_column(self)

True if the name of this FromClause may be prepended to a column in a generated SQL statement

oid_column = property()

original_columns = property()

a dictionary mapping an original Table-bound column to a proxied column in this FromClause.

def outerjoin(self, right, *args, **kwargs)

primary_key = property()

back to section top

class Join(FromClause)

def __init__(self, left, right, onclause=None, isouter=False)

def accept_visitor(self, visitor)

def alias(self, name=None)

creates a Select out of this Join clause and returns an Alias of it. The Select is not correlating.

engine = property()

name = property()

def select(self, whereclauses=None, **params)

back to section top

class Select(_SelectBaseMixin,FromClause)

represents a SELECT statement, with appendable clauses, as well as the ability to execute itself and return a result set.

def __init__(self, columns=None, whereclause=None, from_obj=[], order_by=None, group_by=None, having=None, use_labels=False, distinct=False, for_update=False, nowait=False, engine=None, limit=None, offset=None, scalar=False, correlate=True)

def accept_visitor(self, visitor)

def append_column(self, column)

def append_from(self, fromclause)

def append_having(self, having)

def append_whereclause(self, whereclause)

def clear_from(self, from_obj)

froms = property()

name = property()

def union(self, other, **kwargs)

def union_all(self, other, **kwargs)

back to section top

class Selectable(ClauseElement)

represents a column list-holding object.

def accept_visitor(self, visitor)

def select(self, whereclauses=None, **params)

back to section top

class TableClause(FromClause)

def __init__(self, name, *columns)

def accept_visitor(self, visitor)

def alias(self, name=None)

def append_column(self, c)

def count(self, whereclause=None, **params)

def delete(self, whereclause=None)

def insert(self, values=None)

def join(self, right, *args, **kwargs)

def named_with_column(self)

original_columns = property()

def outerjoin(self, right, *args, **kwargs)

def select(self, whereclause=None, **params)

def update(self, whereclause=None, values=None)

back to section top

module sqlalchemy.schema

the schema module provides the building blocks for database metadata. This means all the entities within a SQL database that we might want to look at, modify, or create and delete are described by these objects, in a database-agnostic way.

A structure of SchemaItems also provides a "visitor" interface which is the primary method by which other methods operate upon the schema. The SQL package extends this structure with its own clause-specific objects as well as the visitor interface, so that the schema package "plugs in" to the SQL package.

class BoundMetaData(MetaData)

builds upon MetaData to provide the capability to bind to an Engine implementation.

def __init__(self, engine_or_url, name=None, **kwargs)

def is_bound(self)

back to section top

class CheckConstraint(Constraint)

def __init__(self, sqltext, name=None)

def accept_schema_visitor(self, visitor, traverse=True)

back to section top

class Column(SchemaItem,_ColumnClause)

represents a column in a database table. this is a subclass of sql.ColumnClause and represents an actual existing table in the database, in a similar fashion as TableClause/Table.

def __init__(self, name, type, *args, **kwargs)

constructs a new Column object. Arguments are:

name : the name of this column. this should be the identical name as it appears, or will appear, in the database.

type: the TypeEngine for this column. This can be any subclass of types.AbstractType, including the database-agnostic types defined in the types module, database-specific types defined within specific database modules, or user-defined types.

*args: Constraint, ForeignKey, ColumnDefault and Sequence objects should be added as list values.

**kwargs : keyword arguments include:

key=None : an optional "alias name" for this column. The column will then be identified everywhere in an application, including the column list on its Table, by this key, and not the given name. Generated SQL, however, will still reference the column by its actual name.

primary_key=False : True if this column is a primary key column. Multiple columns can have this flag set to specify composite primary keys. As an alternative, the primary key of a Table can be specified via an explicit PrimaryKeyConstraint instance appended to the Table's list of objects.

nullable=True : True if this column should allow nulls. Defaults to True unless this column is a primary key column.

default=None : a scalar, python callable, or ClauseElement representing the "default value" for this column, which will be invoked upon insert if this column is not present in the insert list or is given a value of None. The default expression will be converted into a ColumnDefault object upon initialization.

_is_oid=False : used internally to indicate that this column is used as the quasi-hidden "oid" column

index=False : Indicates that this column is indexed. The name of the index is autogenerated. to specify indexes with explicit names or indexes that contain multiple columns, use the Index construct instead.

unique=False : Indicates that this column contains a unique constraint, or if index=True as well, indicates that the Index should be created with the unique flag. To specify multiple columns in the constraint/index or to specify an explicit name, use the UniqueConstraint or Index constructs instead.

autoincrement=True : Indicates that integer-based primary key columns should have autoincrementing behavior, if supported by the underlying database. This will affect CREATE TABLE statements such that they will use the database's "auto-incrementing" keyword (such as SERIAL for postgres, AUTO_INCREMENT for mysql) and will also affect the behavior of some dialects during INSERT statement execution such that they will assume primary key values are created in this manner. If a Column has an explicit ColumnDefault object (such as via the "default" keyword, or a Sequence or PassiveDefault), then the value of autoincrement is ignored and is assumed to be False. The autoincrement value is only significant for a column with a type or subtype of Integer.

quote=False : indicates that the Column identifier must be properly escaped and quoted before being sent to the database. This flag should normally not be required as dialects can auto-detect conditions where quoting is required.

case_sensitive=True : indicates that the identifier should be interpreted by the database in the natural case for identifiers. Mixed case is not sufficient to cause this identifier to be quoted; it must contain an illegal character.
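A sketch of a few of the constructor arguments above (the table, columns, and the 'st' key are illustrative); note how 'key' aliases the column within the application while the real name is retained for SQL:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40), nullable=False),
              # 'key' gives an application-side alias; 'default'
              # supplies an insert-time default value
              Column('status', String(10), default='active', key='st'))

# the column is addressed by its key, but keeps its real name
print(users.c.st.name)
```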

def accept_schema_visitor(self, visitor, traverse=True)

traverses the given visitor to this Column's default and foreign key object, then calls visit_column on the visitor.

def append_foreign_key(self, fk)

case_sensitive = property()

columns = property()

def copy(self)

creates a copy of this Column, uninitialized. this is used in Table.tometadata.

back to section top

class ColumnDefault(DefaultGenerator)

A plain default value on a column. this could correspond to a constant, a callable function, or a SQL clause.

def __init__(self, arg, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

calls the visit_column_default method on the given visitor.

back to section top

class Constraint(SchemaItem)

represents a table-level Constraint such as a composite primary key, foreign key, or unique constraint.

Implements a hybrid of dict/set-like behavior with regards to the list of underlying columns.

def __init__(self, name=None)

def copy(self)

def keys(self)

back to section top

class DefaultGenerator(SchemaItem)

Base class for column "default" values.

def __init__(self, for_update=False, metadata=None)

def execute(self, **kwargs)

back to section top

class DynamicMetaData(MetaData)

builds upon MetaData to provide the capability to bind to multiple Engine implementations on a dynamically alterable, thread-local basis.

def __init__(self, name=None, threadlocal=True, **kwargs)

def connect(self, engine_or_url, **kwargs)

def dispose(self)

disposes all Engines to which this DynamicMetaData has been connected.

engine = property()

def is_bound(self)

back to section top

class ForeignKey(SchemaItem)

defines a column-level ForeignKey constraint between two columns.

ForeignKey is specified as an argument to a Column object.

One or more ForeignKey objects are used within a ForeignKeyConstraint object which represents the table-level constraint definition.

def __init__(self, column, constraint=None, use_alter=False, name=None)

Construct a new ForeignKey object.

"column" can be a schema.Column object representing the relationship, or just its string name given as "tablename.columnname". A schema can be specified as "schema.tablename.columnname".

"constraint" is the owning ForeignKeyConstraint object, if any. if not given, then a ForeignKeyConstraint will be automatically created and added to the parent table.

def accept_schema_visitor(self, visitor, traverse=True)

calls the visit_foreign_key method on the given visitor.

column = property()

def copy(self)

produce a copy of this ForeignKey object.

def references(self, table)

returns True if the given table is referenced by this ForeignKey.
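A sketch of a column-level ForeignKey given by string name, and the references() check (the tables are illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

meta = MetaData()
users = Table('users', meta, Column('id', Integer, primary_key=True))
addresses = Table('addresses', meta,
                  # string form "tablename.columnname"
                  Column('user_id', Integer, ForeignKey('users.id')))

fk = list(addresses.c.user_id.foreign_keys)[0]
print(fk.references(users))
```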

back to section top

class ForeignKeyConstraint(Constraint)

table-level foreign key constraint, represents a collection of ForeignKey objects.

def __init__(self, columns, refcolumns, name=None, onupdate=None, ondelete=None, use_alter=False)

def accept_schema_visitor(self, visitor, traverse=True)

def append_element(self, col, refcol)

def copy(self)

back to section top

class Index(SchemaItem)

Represents an index of columns from a database table

def __init__(self, name, *columns, **kwargs)

Constructs an index object. Arguments are:

name : the name of the index

*columns : columns to include in the index. All columns must belong to the same table, and no column may appear more than once.

**kw : keyword arguments include:

unique=True : create a unique index
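A sketch of a named, unique, single-column index (the names are illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, Index

meta = MetaData()
users = Table('users', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(40)))

# an explicitly named unique index over one column
ix = Index('ix_users_name', users.c.name, unique=True)
```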

def accept_schema_visitor(self, visitor, traverse=True)

def append_column(self, column)

def create(self, connectable=None)

def drop(self, connectable=None)

back to section top

class MetaData(SchemaItem)

represents a collection of Tables and their associated schema constructs.

def __init__(self, name=None, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

def clear(self)

def create_all(self, connectable=None, tables=None, checkfirst=True)

create all tables stored in this metadata.

This will conditionally create tables depending on if they do not yet exist in the database.

connectable - a Connectable used to access the database; or use the engine bound to this MetaData.

tables - optional list of tables, which is a subset of the total tables in the MetaData (others are ignored)

def drop_all(self, connectable=None, tables=None, checkfirst=True)

drop all tables stored in this metadata.

This will conditionally drop tables depending on if they currently exist in the database.

connectable - a Connectable used to access the database; or use the engine bound to this MetaData.

tables - optional list of tables, which is a subset of the total tables in the MetaData (others are ignored)
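A sketch of create_all()/drop_all() against an in-memory database (the engine URL and table are illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, create_engine

engine = create_engine('sqlite://')  # in-memory sqlite database
meta = MetaData()
users = Table('users', meta, Column('id', Integer, primary_key=True))

meta.create_all(engine)   # CREATE TABLE, skipping tables that exist
meta.drop_all(engine)     # DROP TABLE for tables that currently exist
```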

def is_bound(self)

def table_iterator(self, reverse=True, tables=None)

back to section top

class PassiveDefault(DefaultGenerator)

a default that takes effect on the database side

def __init__(self, arg, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

back to section top

class PrimaryKeyConstraint(Constraint)

def __init__(self, *columns, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

def add(self, col)

def append_column(self, col)

def copy(self)

back to section top

class SchemaItem(object)

base class for items that define a database schema.

case_sensitive = property()

engine = property()

def get_engine(self)

return the engine, or raise an error if no engine is bound.

metadata = property()

back to section top

class SchemaVisitor(ClauseVisitor)

defines the visiting for SchemaItem objects

def visit_check_constraint(self, constraint)

def visit_column(self, column)

visit a Column.

def visit_column_default(self, default)

visit a ColumnDefault.

def visit_column_onupdate(self, onupdate)

visit a ColumnDefault with the "for_update" flag set.

def visit_foreign_key(self, join)

visit a ForeignKey.

def visit_foreign_key_constraint(self, constraint)

def visit_index(self, index)

visit an Index.

def visit_passive_default(self, default)

visit a passive default

def visit_primary_key_constraint(self, constraint)

def visit_schema(self, schema)

visit a generic SchemaItem

def visit_sequence(self, sequence)

visit a Sequence.

def visit_table(self, table)

visit a Table.

def visit_unique_constraint(self, constraint)

back to section top

class Sequence(DefaultGenerator)

represents a sequence, which applies to Oracle and Postgres databases.

def __init__(self, name, start=None, increment=None, optional=False, quote=False, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

calls the visit_sequence method on the given visitor.

def create(self)

def drop(self)

back to section top

class Table(SchemaItem,TableClause)

represents a relational database table. This subclasses sql.TableClause to provide a table that is "wired" to an engine. Whereas TableClause represents a table as it is used in a SQL expression, Table represents a table as it is created in the database.

Be sure to look at sqlalchemy.sql.TableImpl for additional methods defined on a Table.

def __init__(self, name, metadata, **kwargs)

Construct a Table.

Table objects can be constructed directly. The init method is actually called via the TableSingleton metaclass. Arguments are:

name : the name of this table, exactly as it appears, or will appear, in the database. This property, along with the "schema", indicates the "singleton identity" of this table. Further tables constructed with the same name/schema combination will return the same Table instance.

*args : should contain a listing of the Column objects for this table.

**kwargs : options include:

schema=None : the "schema name" for this table, which is required if the table resides in a schema other than the default selected schema for the engine's database connection.

autoload=False : the Columns for this table should be reflected from the database. Usually there will be no Column objects in the constructor if this property is set.

mustexist=False : indicates that this Table must already have been defined elsewhere in the application, else an exception is raised.

useexisting=False : indicates that if this Table was already defined elsewhere in the application, disregard the rest of the constructor arguments. If this flag and the "redefine" flag are not set, constructing the same table twice will result in an exception.

owner=None : optional owning user of this table. useful for databases such as Oracle to aid in table reflection.

quote=False : indicates that the Table identifier must be properly escaped and quoted before being sent to the database. This flag overrides all other quoting behavior.

quote_schema=False : indicates that the schema identifier must be properly escaped and quoted before being sent to the database. This flag overrides all other quoting behavior.

case_sensitive=True : indicates that the identifier should be interpreted by the database in the natural case for identifiers. Mixed case is not sufficient to cause this identifier to be quoted; it must contain an illegal character.

case_sensitive_schema=True : indicates that the schema identifier should be interpreted by the database in the natural case for identifiers. Mixed case is not sufficient to cause this identifier to be quoted; it must contain an illegal character.
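Putting the arguments above together, a minimal construction might look like the following sketch (the table and column names are illustrative; Column, Integer, and String come from the schema and types modules):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()

# name and metadata come first, followed by the Column objects;
# keyword options such as schema= or autoload= would come last.
users = Table('users', meta,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(40)),
)
```

Constructing a second Table with the same name against the same metadata would return the same instance, per the "singleton identity" behavior described above.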

def accept_schema_visitor(self, visitor, traverse=True)

def append_column(self, column)

append a Column to this Table.

def append_constraint(self, constraint)

append a Constraint to this Table.

case_sensitive_schema = property()

def create(self, connectable=None, checkfirst=False)

issue a CREATE statement for this table.

see also metadata.create_all().

def drop(self, connectable=None, checkfirst=False)

issue a DROP statement for this table.

see also metadata.drop_all().

def exists(self, connectable=None)

return True if this table exists.

primary_key = property()

def tometadata(self, metadata, schema=None)

return a copy of this Table associated with a different MetaData.

back to section top

class UniqueConstraint(Constraint)

def __init__(self, *columns, **kwargs)

def accept_schema_visitor(self, visitor, traverse=True)

def append_column(self, col)

back to section top

module sqlalchemy.engine

Module Functions

def create_engine(*args, **kwargs)

creates a new Engine instance. Using the given strategy name, locates that strategy and invokes its create() method to produce the Engine. The strategies themselves are instances of EngineStrategy; the built-in ones are present in the sqlalchemy.engine.strategies module. Current implementations include "plain" and "threadlocal". The default used by this function is "plain".

"plain" provides support for a Connection object which can be used to execute SQL queries with a specific underlying DBAPI connection.

"threadlocal" is similar to "plain" except that it adds support for a thread-local connection and transaction context, which allows a group of engine operations to participate using the same connection and transaction without the need for explicit passing of a Connection object.

The standard method of specifying the engine is via URL as the first positional argument, to indicate the appropriate database dialect and connection arguments, with additional keyword arguments sent as options to the dialect and resulting Engine.

The URL is in the form <dialect>://opt1=val1&opt2=val2, where <dialect> is a name such as "mysql", "oracle", or "postgres", and the options indicate username, password, database, etc. Supported keynames include "username", "user", "password", "pw", "db", "database", "host", "filename".

**kwargs represents options to be sent to the Engine itself as well as the components of the Engine, including the Dialect, the ConnectionProvider, and the Pool. A list of common options is as follows:

pool=None : an instance of sqlalchemy.pool.DBProxy or sqlalchemy.pool.Pool to be used as the underlying source for connections (DBProxy/Pool is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.

echo=False : if True, the Engine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. An Engine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.

logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.

encoding='utf-8' : the encoding to be used when encoding/decoding Unicode strings

convert_unicode=False : True if unicode conversion should be applied to all str types

module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, it is cx_Oracle. For MySQL, MySQLdb.

use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases, in which the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of <column1>(+)=<column2> must be used in order to achieve a LEFT OUTER JOIN. It is advised that the Oracle database be configured to have full ANSI support instead of relying on this feature.
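As a concrete example of the above, creating a SQLite in-memory engine with the default "plain" strategy (the URL string and options shown are illustrative):

```python
from sqlalchemy import create_engine

# the dialect portion of the URL selects the database implementation;
# keyword arguments such as echo= are options for the resulting Engine.
engine = create_engine('sqlite://', echo=False)
```

The engine's name reflects the chosen dialect, and "echo" can be flipped on later to begin logging statements.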

def engine_descriptors()

provides a listing of all the database implementations supported. this data is provided as a list of dictionaries, where each dictionary contains the following key/value pairs:

name : the name of the engine, suitable for use in the create_engine function

description: a plain description of the engine.

arguments : a dictionary describing the name and description of each parameter used to connect to this engine's underlying DBAPI.

This function is meant for usage in automated configuration tools that wish to query the user for database and connection information.

back to section top

class Connectable(object)

interface for an object that can provide an Engine and a Connection object which corresponds to that Engine.

def contextual_connect(self)

returns a Connection object which may be part of an ongoing context.

def create(self, entity, **kwargs)

creates a table or index given an appropriate schema object.

def drop(self, entity, **kwargs)

engine = property()

returns the Engine which this Connectable is associated with.

def execute(self, object, *multiparams, **params)

back to section top

class Connection(Connectable)

represents a single DBAPI connection returned from the underlying connection pool. Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. provides a begin method to return Transaction objects.

The Connection object is **not** threadsafe.

def __init__(self, engine, connection=None, close_with_result=False)

def begin(self)

def close(self)

def connect(self)

connect() is implemented to return self so that an incoming Engine or Connection object can be treated similarly.

connection = property()

The underlying DBAPI connection managed by this Connection.

def contextual_connect(self, **kwargs)

contextual_connect() is implemented to return self so that an incoming Engine or Connection object can be treated similarly.

def create(self, entity, **kwargs)

creates a table or index given an appropriate schema object.

def default_schema_name(self)

def drop(self, entity, **kwargs)

drops a table or index given an appropriate schema object.

engine = property()

The Engine with which this Connection is associated (read only)

def execute(self, object, *multiparams, **params)

def execute_clauseelement(self, elem, *multiparams, **params)

def execute_compiled(self, compiled, *multiparams, **params)

executes a sql.Compiled object.

def execute_default(self, default, **kwargs)

def execute_text(self, statement, parameters=None)

def in_transaction(self)

def proxy(self, statement=None, parameters=None)

executes the given statement string and parameter object. The parameter object is expected to be the result of a call to compiled.get_params(). This callable is a generic version of the connection/cursor-specific callable produced within the execute_compiled method; it is used by objects that require this style of proxy outside of an execute_compiled call, primarily the DefaultRunner.

def reflecttable(self, table, **kwargs)

reflects the columns in the given table from the database.

def run_callable(self, callable_)

def scalar(self, object, parameters=None, **kwargs)

should_close_with_result = property()

Indicates if this Connection should be closed when a corresponding ResultProxy is closed; this is essentially an auto-release mode.

back to section top

class ConnectionProvider(object)

defines an interface that returns raw Connection objects (or compatible).

def dispose(self)

releases all resources corresponding to this ConnectionProvider, such as any underlying connection pools.

def get_connection(self)

this method should return a Connection or compatible object from a DBAPI which also contains a close() method. It is not defined what context this connection belongs to. It may be newly connected, returned from a pool, part of some other kind of context such as thread-local, or can be a fixed member of this object.
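As a rough illustration of this interface, the following standalone sketch uses the stdlib sqlite3 module as the DBAPI and holds its connection as a fixed member (the class name is illustrative; it is not a real SQLAlchemy subclass):

```python
import sqlite3

class FixedConnectionProvider:
    """Toy provider whose connection is a fixed member of the object.

    Real providers might instead return newly connected or pooled
    connections; the interface only promises an object with close().
    """
    def __init__(self, database=':memory:'):
        self._conn = sqlite3.connect(database)

    def get_connection(self):
        # return a DBAPI connection; context is left undefined
        return self._conn

    def dispose(self):
        # release all resources held by this provider
        self._conn.close()

provider = FixedConnectionProvider()
conn = provider.get_connection()
value = conn.execute("select 2 + 3").fetchone()[0]
provider.dispose()
```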

back to section top

class DefaultRunner(SchemaVisitor)

a visitor which accepts ColumnDefault objects, produces the dialect-specific SQL corresponding to their execution, and executes the SQL, returning the result value.

DefaultRunners are used internally by Engines and Dialects. Specific database modules should provide their own subclasses of DefaultRunner to allow database-specific behavior.

def __init__(self, engine, proxy)

def exec_default_sql(self, default)

def get_column_default(self, column)

def get_column_onupdate(self, column)

def visit_column_default(self, default)

def visit_column_onupdate(self, onupdate)

def visit_passive_default(self, default)

passive defaults by definition return None on the app side, and are post-fetched to get the DB-side value

def visit_sequence(self, seq)

sequences are not supported by default

back to section top

class Dialect(AbstractDialect)

Defines the behavior of a specific database/DBAPI.

Any aspect of metadata definition, SQL query generation, execution, result-set handling, or anything else which varies between databases is defined under the general category of the Dialect. The Dialect acts as a factory for other database-specific object implementations including ExecutionContext, Compiled, DefaultGenerator, and TypeEngine.

All Dialects implement the following attributes:

positional - True if the paramstyle for this Dialect is positional

paramstyle - the paramstyle to be used (some DBAPIs support multiple paramstyles)

supports_autoclose_results - usually True; if False, indicates that rows returned by fetchone() might not be just plain tuples, and may be "live" proxy objects which still require the cursor to be open in order to be read (such as pyPgSQL which has active filehandles for BLOBs). in that case, an auto-closing ResultProxy cannot automatically close itself after results are consumed.

convert_unicode - True if unicode conversion should be applied to all str types

encoding - type of encoding to use for unicode, usually defaults to 'utf-8'

def compile(self, clauseelement, parameters=None)

compile the given ClauseElement using this Dialect.

a convenience method which simply flips around the compile() call on ClauseElement.

def compiler(self, statement, parameters)

returns a sql.ClauseVisitor which will produce a string representation of the given ClauseElement and parameter dictionary. This object is usually a subclass of ansisql.ANSICompiler.

compiler is called within the context of the compile() method.

def convert_compiled_params(self, parameters)

given a sql.ClauseParameters object, returns an array or dictionary suitable to pass directly to this Dialect's DBAPI's execute method.

def create_connect_args(self, opts)

given a dictionary of key-valued connect parameters, returns a (*args, **kwargs) tuple suitable to send directly to the dbapi's connect function. The connect args will have any number of the following keynames: host, hostname, database, dbname, user, username, password, pw, passwd, filename.
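A hypothetical dialect's implementation might translate those keynames like so (the target connect() signature here is invented purely for illustration):

```python
def create_connect_args(opts):
    # collapse the synonymous keynames the URL layer may supply,
    # assuming a DBAPI whose signature is roughly
    # connect(database, user=None, password=None, host=None)
    def pick(*names):
        for name in names:
            if opts.get(name) is not None:
                return opts[name]
        return None

    args = [pick('database', 'db', 'dbname', 'filename')]
    kwargs = {
        'user': pick('user', 'username'),
        'password': pick('password', 'pw', 'passwd'),
        'host': pick('host', 'hostname'),
    }
    return args, kwargs
```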

def dbapi(self)

subclasses override this method to provide the DBAPI module used to establish connections.

def defaultrunner(self, engine, proxy, **params)

returns a schema.SchemaVisitor instance that can execute defaults.

def do_begin(self, connection)

provides an implementation of connection.begin()

def do_commit(self, connection)

provides an implementation of connection.commit()

def do_execute(self, cursor, statement, parameters)

def do_executemany(self, cursor, statement, parameters)

def do_rollback(self, connection)

provides an implementation of connection.rollback()

def execution_context(self)

returns a new ExecutionContext object.

def get_default_schema_name(self, connection)

returns the currently selected schema given a connection

def has_sequence(self, connection, sequence_name)

def has_table(self, connection, table_name)

def oid_column_name(self)

returns the oid column name for this dialect, or None if the dialect can't/won't support OID/ROWID.

def reflecttable(self, connection, table)

given a Connection and a Table object, reflects its columns and properties from the database.

def schemadropper(self, engine, proxy, **params)

returns a schema.SchemaVisitor instance that can drop schemas, when it is invoked to traverse a set of schema objects.

schemadropper is called via the drop() method on Table, Index, and others.

def schemagenerator(self, engine, proxy, **params)

returns a schema.SchemaVisitor instance that can generate schemas, when it is invoked to traverse a set of schema objects.

schemagenerator is called via the create() method on Table, Index, and others.

def supports_sane_rowcount(self)

Indicates whether the "rowcount" function on a statement handle behaves in the standard way. Provided primarily because MySQL does not.

def type_descriptor(self, typeobj)

provides a database-specific TypeEngine object, given the generic object which comes from the types module. Subclasses will usually use the adapt_type() method in the types module to make this job easy.

back to section top

class Engine(Executor,Connectable)

Connects a ConnectionProvider, a Dialect and a CompilerFactory together to provide a default implementation of SchemaEngine.

def __init__(self, connection_provider, dialect, echo=None)

def compiler(self, statement, parameters, **kwargs)

def connect(self, **kwargs)

returns a newly allocated Connection object.

def contextual_connect(self, close_with_result=False, **kwargs)

returns a Connection object which may be newly allocated, or may be part of some ongoing context. This Connection is meant to be used by the various "auto-connecting" operations.

def create(self, entity, connection=None, **kwargs)

creates a table or index within this engine's database connection given a schema.Table object.

def dispose(self)

def drop(self, entity, connection=None, **kwargs)

drops a table or index within this engine's database connection given a schema.Table object.

engine = property()

def execute(self, statement, *multiparams, **params)

def execute_compiled(self, compiled, *multiparams, **params)

def execute_default(self, default, **kwargs)

func = property()

def has_table(self, table_name)

def log(self, msg)

logs a message using this SQLEngine's logger stream.

name = property()

def raw_connection(self)

returns a DBAPI connection.

def reflecttable(self, table, connection=None)

given a Table object, reflects its columns and properties from the database.

def run_callable(self, callable_, connection=None, *args, **kwargs)

def scalar(self, statement, *multiparams, **params)

def text(self, text, *args, **kwargs)

returns a sql.text() object for performing literal queries.

def transaction(self, callable_, connection=None, *args, **kwargs)

executes the given function within a transaction boundary. this is a shortcut for explicitly calling begin() and commit() and optionally rollback() when exceptions are raised. The given *args and **kwargs will be passed to the function, as well as the Connection used in the transaction.
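The begin/commit/rollback pattern that transaction() wraps can be sketched against a plain DBAPI connection (stdlib sqlite3 here; the helper name is illustrative, not the real Engine method):

```python
import sqlite3

def run_in_transaction(conn, callable_, *args, **kwargs):
    # equivalent in spirit: call the function, commit on success,
    # roll back and re-raise on failure.
    try:
        result = callable_(conn, *args, **kwargs)
        conn.commit()
        return result
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(':memory:')
conn.execute("create table t (x integer)")

def insert_row(c, value):
    c.execute("insert into t (x) values (?)", (value,))
    return value

inserted = run_in_transaction(conn, insert_row, 7)
count = conn.execute("select count(*) from t").fetchone()[0]
```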

back to section top

class ExecutionContext(object)

a messenger object for a Dialect that corresponds to a single execution. The Dialect should provide an ExecutionContext via the create_execution_context() method. The pre_exec and post_exec methods will be called for compiled statements, after which it is expected that the various methods last_inserted_ids, last_inserted_params, etc. will contain appropriate values, if applicable.

def get_rowcount(self, cursor)

returns the count of rows updated/deleted for an UPDATE/DELETE statement

def last_inserted_ids(self)

return the list of the primary key values for the last insert statement executed.

This does not apply to straight textual clauses; only to sql.Insert objects compiled against a schema.Table object, which are executed via statement.execute(). The order of items in the list is the same as that of the Table's 'primary_key' attribute.

In some cases, this method may invoke a query back to the database to retrieve the data, based on the "lastrowid" value in the cursor.

def last_inserted_params(self)

return a dictionary of the full parameter dictionary for the last compiled INSERT statement.

Includes any ColumnDefaults or Sequences that were pre-executed.

def last_updated_params(self)

return a dictionary of the full parameter dictionary for the last compiled UPDATE statement.

Includes any ColumnDefaults that were pre-executed.

def lastrow_has_defaults(self)

return True if the last row INSERTED via a compiled insert statement contained PassiveDefaults.

The presence of PassiveDefaults indicates that the database inserted data beyond that which we passed to the query programmatically.

def post_exec(self, engine, proxy, compiled, parameters)

called after the execution of a compiled statement. proxy is a callable that takes a string statement and a bind parameter list/dictionary.

def pre_exec(self, engine, proxy, compiled, parameters)

called before an execution of a compiled statement. proxy is a callable that takes a string statement and a bind parameter list/dictionary.

def supports_sane_rowcount(self)

Indicates if the "rowcount" DBAPI cursor function works properly.

Currently, MySQLdb does not properly implement this function.

back to section top

class ResultProxy(object)

wraps a DBAPI cursor object to provide access to row columns based on integer position, case-insensitive column name, or by schema.Column object. e.g.:

row = fetchone()

col1 = row[0] # access via integer position

col2 = row['col2'] # access via name

col3 = row[mytable.c.mycol] # access via Column object.

ResultProxy also contains a map of TypeEngine objects and will invoke the appropriate convert_result_value() method before returning columns, as well as the ExecutionContext corresponding to the statement execution. It provides several methods with which to obtain information from the underlying ExecutionContext.
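The first two access styles above (position and name; the Column-object style requires a constructed Table) can be mimicked at the DBAPI level with the stdlib sqlite3.Row factory:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row   # enables name-based access to rows
cur = conn.execute("select 1 as col1, 'two' as col2")
row = cur.fetchone()

by_position = row[0]      # access via integer position
by_name = row['col2']     # access via (case-insensitive) column name
```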

def __init__(self, engine, connection, cursor, executioncontext=None, typemap=None)

ResultProxy objects are constructed via the execute() method on SQLEngine.

def close(self)

close this ResultProxy, and the underlying DBAPI cursor corresponding to the execution.

If this ResultProxy was generated from an implicit execution, the underlying Connection will also be closed (returns the underlying DBAPI connection to the connection pool.)

This method is also called automatically when all result rows are exhausted.

executioncontext = property()

def fetchall(self)

fetch all rows, just like DBAPI cursor.fetchall().

def fetchone(self)

fetch one row, just like DBAPI cursor.fetchone().

def last_inserted_ids(self)

return last_inserted_ids() from the underlying ExecutionContext.

See ExecutionContext for details.

def last_inserted_params(self)

return last_inserted_params() from the underlying ExecutionContext.

See ExecutionContext for details.

def last_updated_params(self)

return last_updated_params() from the underlying ExecutionContext.

See ExecutionContext for details.

def lastrow_has_defaults(self)

return lastrow_has_defaults() from the underlying ExecutionContext.

See ExecutionContext for details.

def scalar(self)

fetch the first column of the first row, and close the result set.

def supports_sane_rowcount(self)

return supports_sane_rowcount() from the underlying ExecutionContext.

See ExecutionContext for details.

back to section top

class RowProxy(object)

proxies a single cursor row for a parent ResultProxy. Mostly follows "ordered dictionary" behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions).

def __init__(self, parent, row)

RowProxy objects are constructed by ResultProxy objects.

def close(self)

close the parent ResultProxy.

def has_key(self, key)

return True if this RowProxy contains the given key.

def items(self)

return a list of tuples, each tuple containing a key/value pair.

def keys(self)

return the list of keys as strings represented by this RowProxy.

def values(self)

return the values represented by this RowProxy as a list.

back to section top

class SchemaIterator(SchemaVisitor)

a visitor that can gather text into a buffer and execute the contents of the buffer.

def __init__(self, engine, proxy, **params)

construct a new SchemaIterator.

engine - the Engine used by this SchemaIterator

proxy - a callable which takes a statement and bind parameters and executes it, returning the cursor (the actual DBAPI cursor). The callable should use the same cursor repeatedly.

def append(self, s)

append content to the SchemaIterator's query buffer.

def execute(self)

execute the contents of the SchemaIterator's buffer.
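The append/execute buffering can be sketched independently of the Engine machinery (the proxy callable follows the signature described above; the class name is illustrative):

```python
class BufferingIterator:
    def __init__(self, proxy):
        self.proxy = proxy    # callable taking (statement, parameters)
        self.buffer = []

    def append(self, s):
        # gather text into the buffer
        self.buffer.append(s)

    def execute(self):
        # execute the buffered contents, then reset the buffer
        statement = ''.join(self.buffer)
        self.buffer = []
        return self.proxy(statement, None)

executed = []
it = BufferingIterator(lambda stmt, params: executed.append(stmt))
it.append("CREATE TABLE t (")
it.append("x INTEGER)")
it.execute()
```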

back to section top

class Transaction(object)

represents a Transaction in progress.

the Transaction object is **not** threadsafe.

def __init__(self, connection, parent)

def commit(self)

connection = property()

The Connection object referenced by this Transaction

is_active = property()

def rollback(self)

back to section top

module sqlalchemy.engine.url

Module Functions

def make_url(name_or_url)

given a string or unicode instance, produces a new URL instance.

the given string is parsed according to the RFC-1738 spec. if an existing URL object is passed, it is returned unchanged.

back to section top

class URL(object)

represents the components of a URL used to connect to a database.

This object is suitable to be passed directly to a create_engine() call. The fields of the URL are parsed from a string by the module-level make_url() function. the string format of the URL is an RFC-1738-style string.

Attributes on URL include:

drivername

username

password

host

port

database

query - a dictionary containing key/value pairs representing the URL's query string.

def __init__(self, drivername, username=None, password=None, host=None, port=None, database=None, query=None)

def get_module(self)

return the SQLAlchemy database module corresponding to this URL's driver name.

def translate_connect_args(self, names)

translate this URL's attributes into a dictionary of connection arguments.

given a list of argument names corresponding to the URL attributes ('host', 'database', 'username', 'password', 'port'), will assemble the attribute values of this URL into the dictionary using the given names.
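For example, parsing a URL string into its component attributes (the connection details shown are illustrative):

```python
from sqlalchemy.engine.url import make_url

# make_url parses the RFC-1738-style string into a URL object
url = make_url('postgres://scott:tiger@localhost:5432/test')
```

The resulting object exposes drivername, username, password, host, port, and database as described above, and can be passed directly to create_engine().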

back to section top

module sqlalchemy.orm

the mapper package provides object-relational functionality, building upon the schema and sql packages and tying operations to class properties and constructors.

Module Functions

def backref(name, **kwargs)

create a BackRef object with explicit arguments, which are the same arguments one can send to relation().

used with the "backref" keyword argument to relation() in place of a string argument.

def cascade_mappers(*classes_or_mappers)

attempt to create a series of relations() between mappers automatically, via introspecting the foreign key relationships of the underlying tables.

given a list of classes and/or mappers, identifies the foreign key relationships between the given mappers or corresponding class mappers, and creates relation() objects representing those relationships, including a backreference. Attempts to find the "secondary" table in a many-to-many relationship as well. The names of the relations will be a lowercase version of the related class. In the case of one-to-many or many-to-many, the name will be "pluralized", which currently is based on the English language (i.e. an 's' or 'es' added to it).

NOTE: this method usually works poorly, and its usage is generally not advised.

def class_mapper(class_, entity_name=None, compile=True)

given a ClassKey, returns the primary Mapper associated with the key.

def clear_mapper(m)

remove the given mapper from the storage of mappers.

when a new mapper is created for the previous mapper's class, it will be used as that class's new primary mapper.

def clear_mappers()

remove all mappers that have been created thus far.

when new mappers are created, they will be assigned to their classes as their primary mapper.

def contains_eager(key, decorator=None)

return a MapperOption that will indicate to the query that the given attribute will be eagerly loaded without any row decoration, or using a custom row decorator.

used when feeding SQL result sets directly into query.instances().

def defer(name)

return a MapperOption that will convert the column property of the given name into a deferred load.

used with query.options()

def deferred(*columns, **kwargs)

return a DeferredColumnProperty, which indicates that the object attribute should only be loaded from its corresponding table column when first accessed.

used with the 'properties' dictionary sent to mapper().

def eagerload(name)

return a MapperOption that will convert the property of the given name into an eager load.

used with query.options().

def extension(ext)

return a MapperOption that will insert the given MapperExtension to the beginning of the list of extensions that will be called in the context of the Query.

used with query.options().

def lazyload(name)

return a MapperOption that will convert the property of the given name into a lazy load.

used with query.options().

def mapper(class_, table=None, *args, **params)

return a new Mapper object.

See the Mapper class for a description of arguments.

def noload(name)

return a MapperOption that will convert the property of the given name into a non-load.

used with query.options().

def object_mapper(object, raiseerror=True)

given an object, returns the primary Mapper associated with the object instance

def polymorphic_union(table_map, typecolname, aliasname='p_union')

create a UNION statement used by a polymorphic mapper.

See the SQLAlchemy advanced mapping docs for an example of how this is used.

def relation(*args, **kwargs)

provide a relationship of a primary Mapper to a secondary Mapper.

This corresponds to a parent-child or associative table relationship.

def synonym(name, proxy=False)

set up 'name' as a synonym to another MapperProperty.

Used with the 'properties' dictionary sent to mapper().

def undefer(name)

return a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load.

used with query.options().

back to section top

class MapperExtension(object)

base implementation for an object that provides overriding behavior to various Mapper functions. For each method in MapperExtension, a result of EXT_PASS indicates the functionality is not overridden.

def after_delete(self, mapper, connection, instance)

receive an object instance after that instance is DELETEed.

def after_insert(self, mapper, connection, instance)

receive an object instance after that instance is INSERTed.

def after_update(self, mapper, connection, instance)

receive an object instance after that instance is UPDATEed.

def append_result(self, mapper, selectcontext, row, instance, identitykey, result, isnew)

receive an object instance before that instance is appended to a result list.

If this method returns EXT_PASS, result appending will proceed normally. if this method returns any other value or None, result appending will not proceed for this instance, giving this extension an opportunity to do the appending itself, if desired.

mapper - the mapper doing the operation

selectcontext - SelectionContext corresponding to the instances() call

row - the result row from the database

instance - the object instance to be appended to the result

identitykey - the identity key of the instance

result - list to which results are being appended

isnew - indicates if this is the first time we have seen this object instance in the current result set. if you are selecting from a join, such as an eager load, you might see the same object instance many times in the same result set.

def before_delete(self, mapper, connection, instance)

receive an object instance before that instance is DELETEed.

def before_insert(self, mapper, connection, instance)

receive an object instance before that instance is INSERTed into its table.

this is a good place to set up primary key values and other values that aren't handled otherwise.

def before_update(self, mapper, connection, instance)

receive an object instance before that instance is UPDATEed.

def create_instance(self, mapper, selectcontext, row, class_)

receive a row when a new object instance is about to be created from that row. the method can choose to create the instance itself, or it can return None to indicate normal object creation should take place.

mapper - the mapper doing the operation

selectcontext - SelectionContext corresponding to the instances() call

row - the result row from the database

class_ - the class we are mapping.

def get(self, query, *args, **kwargs)

override the get method of the Query object.

the return value of this method is used as the result of query.get() if the value is anything other than EXT_PASS.

def get_by(self, query, *args, **kwargs)

override the get_by method of the Query object.

the return value of this method is used as the result of query.get_by() if the value is anything other than EXT_PASS.

def get_session(self)

retrieve a contextual Session instance with which to register a new object.

Note: this is not called if a session is provided with the __init__ params (i.e. _sa_session)

def load(self, query, *args, **kwargs)

override the load method of the Query object.

the return value of this method is used as the result of query.load() if the value is anything other than EXT_PASS.

def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew)

receive a newly-created instance before that instance has its attributes populated.

The normal population of attributes is according to each attribute's corresponding MapperProperty (which includes column-based attributes as well as relationships to other classes). If this method returns EXT_PASS, instance population will proceed normally. If any other value or None is returned, instance population will not proceed, giving this extension an opportunity to populate the instance itself, if desired.

def select(self, query, *args, **kwargs)

override the select method of the Query object.

the return value of this method is used as the result of query.select() if the value is anything other than EXT_PASS.

def select_by(self, query, *args, **kwargs)

override the select_by method of the Query object.

the return value of this method is used as the result of query.select_by() if the value is anything other than EXT_PASS.

back to section top

module sqlalchemy.orm.mapper

Module Functions

def class_mapper(class_, entity_name=None, compile=True)

given a ClassKey, returns the primary Mapper associated with the key.

def object_mapper(object, raiseerror=True)

given an object, returns the primary Mapper associated with the object instance

back to section top

class Mapper(object)

Defines the correlation of class attributes to database table columns.

Instances of this class should be constructed via the sqlalchemy.orm.mapper() function.

def __init__(self, class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, extension=None, order_by=False, allow_column_override=False, entity_name=None, always_refresh=False, version_id_col=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, select_table=None, allow_null_pks=False, batch=True, column_prefix=None)

construct a new mapper.

All arguments may be sent to the sqlalchemy.orm.mapper() function where they are passed through to here.

class_ - the class to be mapped.

local_table - the table to which the class is mapped, or None if this mapper inherits from another mapper using concrete table inheritance.

properties - a dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that the columns in the mapped table are automatically converted into ColumnProperty instances based on the "key" property of each Column (although they can be overridden using this dictionary).

primary_key - a list of Column objects which define the "primary key" to be used against this mapper's selectable unit. This is normally simply the primary key of the "local_table", but can be overridden here.

non_primary - construct a Mapper that will define only the selection of instances, not their persistence.

inherits - another Mapper with which this Mapper will have an inheritance relationship.

inherit_condition - for joined table inheritance, a SQL expression (constructed ClauseElement) which will define how the two tables are joined; defaults to a natural join between the two tables.

extension - a MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper.

order_by - a single Column or list of Columns which selection operations should use as the default ordering for entities. Defaults to the OID/ROWID of the table if any, or the first primary key column of the table.

allow_column_override - if True, allows association relationships to be set up which override the usage of a column that is on the table (based on key/attribute name).

entity_name - a name to be associated with the class, to allow alternate mappings for a single class.

always_refresh - if True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database.

version_id_col - a Column which must have an integer type that will be used to keep a running "version id" of mapped entities in the database. this is used during save operations to ensure that no other thread or process has updated the instance during the lifetime of the entity, else a ConcurrentModificationError exception is thrown.

polymorphic_on - used with mappers in an inheritance relationship, a Column which will identify the class/mapper combination to be used with a particular row. requires the polymorphic_identity value to be set for all mappers in the inheritance hierarchy.

_polymorphic_map - used internally to propagate the full map of polymorphic identifiers to surrogate mappers.

polymorphic_identity - a value which will be stored in the Column denoted by polymorphic_on, corresponding to the "class identity" of this mapper.

concrete - if True, indicates this mapper should use concrete table inheritance with its parent mapper.

select_table - a Table or (more commonly) Selectable which will be used to select instances of this mapper's class. usually used to provide polymorphic loading among several classes in an inheritance hierarchy.

allow_null_pks - indicates that composite primary keys in which one or more (but not all) columns contain NULL are valid. Primary keys which contain NULL values usually indicate that a result row does not contain an entity and should be skipped.

batch - indicates that save operations of multiple entities can be batched together for efficiency. setting to False indicates that an instance will be fully saved before saving the next instance, which includes inserting/updating all table rows corresponding to the entity as well as calling all MapperExtension methods corresponding to the save operation.

column_prefix - a string which will be prepended to the "key" name of all Columns when creating column-based properties from the given Table. does not affect explicitly specified column-based properties.

def add_properties(self, dict_of_properties)

adds the given dictionary of properties to this mapper, using add_property.

def add_property(self, key, prop)

add an individual MapperProperty to this mapper.

If the mapper has not been compiled yet, this just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been compiled, then the given MapperProperty is compiled immediately.

def base_mapper(self)

return the ultimate base mapper in an inheritance chain

def cascade_callable(self, type, object, callable_, recursive=None)

execute a callable for each element in an object graph, for all relations that meet the given cascade rule.

type - the name of the cascade rule (i.e. save-update, delete, etc.)

object - the lead object instance. child items will be processed per the relations defined for this object's mapper.

callable_ - the callable function.

recursive - used by the function for internal context during recursive calls, leave as None.

def cascade_iterator(self, type, object, recursive=None)

iterate each element in an object graph, for all relations that meet the given cascade rule.

type - the name of the cascade rule (i.e. save-update, delete, etc.)

object - the lead object instance. child items will be processed per the relations defined for this object's mapper.

recursive - used by the function for internal context during recursive calls, leave as None.
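The traversal these cascade methods perform can be sketched in pure Python. This is an illustration only, not SQLAlchemy's implementation: the Obj class and its relations attribute are hypothetical stand-ins for mapped instances and their configured relations.

```python
# illustrative sketch: yield each related object reachable through
# relations whose cascade rules include the given rule name.
class Obj(object):
    def __init__(self, **relations):
        # relation name -> (set of cascade rules, list of child objects)
        self.relations = relations

def cascade_iterator(type, object, recursive=None):
    # 'recursive' tracks already-visited objects across recursive calls,
    # mirroring the real signature; callers leave it as None.
    if recursive is None:
        recursive = set()
    for cascade, children in object.relations.values():
        if type not in cascade:
            continue   # this relation does not meet the cascade rule
        for child in children:
            if id(child) in recursive:
                continue
            recursive.add(id(child))
            yield child
            # descend into the child's own relations
            for descendant in cascade_iterator(type, child, recursive):
                yield descendant
```

e.g. list(cascade_iterator("save-update", parent)) yields every object reachable from parent through relations carrying the "save-update" rule, each at most once.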

def common_parent(self, other)

return true if the given mapper shares a common inherited parent as this mapper

def compile(self)

compile this mapper into its final internal format.

this is the 'external' version of the method which is not reentrant.

def delete_obj(self, objects, uowtransaction)

issue DELETE statements for a list of objects.

this is called within the context of a UOWTransaction during a flush operation.

def get_attr_by_column(self, obj, column, raiseerror=True)

return an instance attribute using a Column as the key.

def get_select_mapper(self)

return the mapper used for issuing selects.

this mapper is the same mapper as 'self' unless the select_table argument was specified for this mapper.

def get_session(self)

return the contextual session provided by the mapper extension chain, if any.

raises InvalidRequestError if a session cannot be retrieved from the extension chain

def has_eager(self)

return True if one of the properties attached to this Mapper is eager loading

def identity(self, instance)

deprecated. a synonym for primary_key_from_instance.

def identity_key(self, primary_key)

deprecated. a synonym for identity_key_from_primary_key.

def identity_key_from_instance(self, instance)

return the identity key for the given instance, based on its primary key attributes.

this value is typically also found on the instance itself under the attribute name '_instance_key'.

def identity_key_from_primary_key(self, primary_key)

return an identity-map key for use in storing/retrieving an item from an identity map.

primary_key - a list of values indicating the identifier.
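Conceptually, such a key pairs the mapped class with the primary key values so it can serve as a hashable dictionary key. The following is a pure-Python sketch of the idea; SQLAlchemy's actual key format is an internal detail and may differ.

```python
# hypothetical sketch of building an identity-map key from a class
# and a list of primary key values
class User(object):
    pass

def identity_key_from_primary_key(class_, primary_key):
    # an identity-map key must be hashable and unique per entity:
    # combine the class with the tuple of primary key values
    return (class_, tuple(primary_key))

key = identity_key_from_primary_key(User, [5])
identity_map = {key: "the User instance with id 5"}
```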

def identity_key_from_row(self, row)

return an identity-map key for use in storing/retrieving an item from the identity map.

row - a sqlalchemy.dbengine.RowProxy instance, or other mapping of result-set column names to their values within a row.

def instance_key(self, instance)

deprecated. a synonym for identity_key_from_instance.

def instances(self, cursor, session, *mappers, **kwargs)

return a list of mapped instances corresponding to the rows in a given ResultProxy.

def is_assigned(self, instance)

return True if this mapper handles the given instance.

this is dependent not only on class assignment but the optional "entity_name" parameter as well.

def isa(self, other)

return True if the given mapper inherits from this mapper

def iterate_to_root(self)

def polymorphic_iterator(self)

iterates through the collection including this mapper and all descendant mappers.

this includes not just the immediately inheriting mappers but all their inheriting mappers as well.

To iterate through an entire hierarchy, use mapper.base_mapper().polymorphic_iterator().

def populate_instance(self, selectcontext, instance, row, identitykey, isnew)

populate an instance from a result row.

This method iterates through the list of MapperProperty objects attached to this Mapper and calls each property's execute() method.

def primary_key_from_instance(self, instance)

return the list of primary key values for the given instance.

def primary_mapper(self)

returns the primary mapper corresponding to this mapper's class key (class + entity_name)

props = property()

compiles this mapper if needed, and returns the dictionary of MapperProperty objects associated with this mapper.

def register_dependencies(self, uowcommit, *args, **kwargs)

register DependencyProcessor instances with a unitofwork.UOWTransaction.

this calls register_dependencies on all attached MapperProperty instances.

def save_obj(self, objects, uowtransaction, postupdate=False, post_update_cols=None, single=False)

issue INSERT and/or UPDATE statements for a list of objects.

this is called within the context of a UOWTransaction during a flush operation.

save_obj issues SQL statements not just for instances mapped directly by this mapper, but for instances mapped by all inheriting mappers as well. This is to maintain proper insert ordering among a polymorphic chain of instances. Therefore save_obj is typically called only on a "base mapper", or a mapper which does not inherit from any other mapper.

def set_attr_by_column(self, obj, column, value)

set the value of an instance attribute using a Column as the key.

def translate_row(self, tomapper, row)

translate the column keys of a row into a new or proxied row that can be understood by another mapper.

This can be used in conjunction with populate_instance to populate an instance using an alternate mapper.

back to section top

module sqlalchemy.orm.query

class Query(object)

encapsulates the object-fetching operations provided by Mappers.

def __init__(self, class_or_mapper, session=None, entity_name=None, lockmode=None, with_options=None, extension=None, **kwargs)

def compile(self, whereclause=None, **kwargs)

given a WHERE criterion, produce a ClauseElement-based statement suitable for usage in the execute() method.

def count(self, whereclause=None, params=None, **kwargs)

given a WHERE criterion, create a SELECT COUNT statement, execute and return the resulting count value.

def count_by(self, *args, **params)

returns the count of instances based on the given clauses and key/value criterion. The criterion is constructed in the same way as the select_by() method.

def execute(self, clauseelement, params=None, *args, **kwargs)

execute the given ClauseElement-based statement against this Query's session/mapper, return the resulting list of instances.

After execution, closes the ResultProxy and its underlying resources. This method is one step above the instances() method, which takes the executed statement's ResultProxy directly.

def get(self, ident, **kwargs)

return an instance of the object based on the given identifier, or None if not found.

The ident argument is a scalar or tuple of primary key column values in the order of the table def's primary key columns.
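The scalar-or-tuple convention can be sketched with a small normalization step (illustrative only; the helper name here is hypothetical, not part of the Query API):

```python
def normalize_ident(ident):
    # get() accepts either a scalar (single-column primary key) or a
    # tuple (composite primary key); normalize both to a tuple in
    # primary-key column order before lookup
    return ident if isinstance(ident, tuple) else (ident,)
```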

def get_by(self, *args, **params)

return a single object instance based on the given key/value criterion.

this is either the first value in the result list, or None if the list is empty.

the keys are mapped to property or column names mapped by this mapper's Table, and the values are coerced into a WHERE clause separated by AND operators. If the local property/column names don't contain the key, a search will be performed against this mapper's immediate list of relations as well, forming the appropriate join conditions if a matching property is located.

e.g. u = usermapper.get_by(user_name = 'fred')

def instances(self, cursor, *mappers, **kwargs)

return a list of mapped instances corresponding to the rows in a given "cursor" (i.e. ResultProxy).

def join_by(self, *args, **params)

return a ClauseElement representing the WHERE clause that would normally be sent to select_whereclause() by select_by().

def join_to(self, key)

given the key name of a property, will recursively descend through all child properties from this Query's mapper to locate the property, and will return a ClauseElement representing a join from this Query's mapper to the endmost mapper.

def join_via(self, keys)

given a list of keys that represents a path from this Query's mapper to a related mapper based on names of relations from one mapper to the next, returns a ClauseElement representing a join from this Query's mapper to the endmost mapper.

def load(self, ident, **kwargs)

return an instance of the object based on the given identifier.

If not found, raises an exception. The method will *remove all pending changes* to the object already existing in the Session. The ident argument is a scalar or tuple of primary key column values in the order of the table def's primary key columns.

def options(self, *args, **kwargs)

return a new Query object, applying the given list of MapperOptions.

def select(self, arg=None, **kwargs)

selects instances of the object from the database.

arg can be any ClauseElement, which will form the criterion with which to load the objects.

For more advanced usage, arg can also be a Select statement object, which will be executed and its resulting rowset used to build new object instances. in this case, the developer must ensure that an adequate set of columns exists in the rowset with which to build new object instances.

def select_by(self, *args, **params)

return an array of object instances based on the given clauses and key/value criterion.

*args is a list of zero or more ClauseElements which will be connected by AND operators.

**params is a set of zero or more key/value parameters which are converted into ClauseElements. the keys are mapped to property or column names mapped by this mapper's Table, and the values are coerced into a WHERE clause separated by AND operators. If the local property/column names don't contain the key, a search will be performed against this mapper's immediate list of relations as well, forming the appropriate join conditions if a matching property is located.

e.g. result = usermapper.select_by(user_name = 'fred')

def select_statement(self, statement, **params)

given a ClauseElement-based statement, execute and return the resulting instances.

def select_text(self, text, **params)

given a literal string-based statement, execute and return the resulting instances.

def select_whereclause(self, whereclause=None, params=None, **kwargs)

given a WHERE criterion, create a SELECT statement, execute and return the resulting instances.

def selectfirst(self, *args, **params)

works like select(), but returns only the first result by itself, or None if no objects were returned.

def selectfirst_by(self, *args, **params)

works like select_by(), but returns only the first result by itself, or None if no objects were returned. Synonymous with get_by()

def selectone(self, *args, **params)

works like selectfirst(), but throws an error if not exactly one result was returned.

def selectone_by(self, *args, **params)

works like selectfirst_by(), but throws an error if not exactly one result was returned.

session = property()

table = property()

def with_lockmode(self, mode)

return a new Query object with the specified locking mode.

back to section top

class QueryContext(OperationContext)

created within the Query.compile() method to store and share state among all the Mappers and MapperProperty objects used in a query construction.

def __init__(self, query, kwargs)

def accept_option(self, opt)

accept a MapperOption which will process (modify) the state of this QueryContext.

def select_args(self)

return a dictionary of attributes from this QueryContext that can be applied to a sql.Select statement.

back to section top

class SelectionContext(OperationContext)

created within the query.instances() method to store and share state among all the Mappers and MapperProperty objects used in a load operation.

SelectionContext contains these attributes:

mapper - the Mapper which originated the instances() call.

session - the Session that is relevant to the instances call.

identity_map - a dictionary which stores newly created instances that have not yet been added as persistent to the Session.

attributes - a dictionary to store arbitrary data; eager loaders use it to store additional result lists

populate_existing - indicates whether it is acceptable to overwrite the attributes of instances that were already in the Session

version_check - indicates whether mappers that have version_id columns should compare that attribute on instances already present within the Session against the freshly loaded value

def __init__(self, mapper, session, **kwargs)

def accept_option(self, opt)

accept a MapperOption which will process (modify) the state of this SelectionContext.

back to section top

module sqlalchemy.orm.session

Module Functions

def class_mapper(class_, entity_name=None, compile=True)

given a ClassKey, returns the primary Mapper associated with the key.

def object_mapper(object, raiseerror=True)

given an object, returns the primary Mapper associated with the object instance

def object_session(obj)

return the Session to which the given object is bound, or None if none.

back to section top

class Session(object)

encapsulates a set of objects being operated upon within an object-relational operation.

The Session object is **not** threadsafe. For thread-management of Sessions, see the sqlalchemy.ext.sessioncontext module.

def __init__(self, bind_to=None, hash_key=None, import_session=None, echo_uow=False)

def bind_mapper(self, mapper, bindto)

bind the given Mapper to the given Engine or Connection.

All subsequent operations involving this Mapper will use the given bindto.

def bind_table(self, table, bindto)

bind the given Table to the given Engine or Connection.

All subsequent operations involving this Table will use the given bindto.

def clear(self)

removes all object instances from this Session. this is equivalent to calling expunge() for all objects in this Session.

def close(self)

closes this Session.

def connect(self, mapper=None, **kwargs)

returns a unique connection corresponding to the given mapper. this connection will not be part of any pre-existing transactional context.

def connection(self, mapper, **kwargs)

returns a Connection corresponding to the given mapper. used by the execute() method which performs select operations for Mapper and Query. if this Session is transactional, the connection will be in the context of this session's transaction. otherwise, the connection is returned by the contextual_connect method, which some Engines override to return a thread-local connection, and will have close_with_result set to True.

the given **kwargs will be sent to the engine's contextual_connect() method, if no transaction is in progress.

def create_transaction(self, **kwargs)

returns a new SessionTransaction corresponding to an existing or new transaction. if the transaction is new, the returned SessionTransaction will have commit control over the underlying transaction, else will have rollback control only.
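The commit-control rule for nested transactions can be sketched in plain Python. This is illustrative only; the committed flag below is a hypothetical marker, not an attribute of the real class.

```python
# sketch: only the outermost SessionTransaction (no parent) has commit
# control; an inner transaction's commit() is effectively a no-op.
class SessionTransaction(object):
    def __init__(self, parent=None):
        self.parent = parent
        self.committed = False   # hypothetical marker for illustration

    def commit(self):
        if self.parent is None:
            self.committed = True   # really commits the underlying transaction
        # otherwise: the enclosing transaction decides the outcome

outer = SessionTransaction()
inner = SessionTransaction(parent=outer)
inner.commit()   # no effect on the underlying transaction
outer.commit()   # actually commits
```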

def delete(self, object, entity_name=None)

mark the given instance as deleted.

the delete operation occurs upon flush().

deleted = property()

a Set of all objects marked as 'deleted' within this Session

dirty = property()

a Set of all objects marked as 'dirty' within this Session

echo_uow = property()

def execute(self, mapper, clause, params, **kwargs)

using the given mapper to identify the appropriate Engine or Connection to be used for statement execution, executes the given ClauseElement using the provided parameter dictionary. Returns a ResultProxy corresponding to the execution's results. If this method allocates a new Connection for the operation, then the ResultProxy's close() method will release the resources of the underlying Connection, otherwise it is a no-op.

def expire(self, obj)

mark the given object as expired.

this will add instrumentation to all mapped attributes on the instance such that when an attribute is next accessed, the session will reload all attributes on the instance from the database.

def expunge(self, object)

remove the given object from this Session.

this will free all internal references to the object. cascading will be applied according to the 'expunge' cascade rule.

def flush(self, objects=None)

flush all the object modifications present in this session to the database.

'objects' is a list or tuple of objects specifically to be flushed; if None, all new and modified objects are flushed.

def get(self, class_, ident, **kwargs)

return an instance of the object based on the given identifier, or None if not found.

The ident argument is a scalar or tuple of primary key column values in the order of the table def's primary key columns.

the entity_name keyword argument may also be specified which further qualifies the underlying Mapper used to perform the query.

def get_bind(self, mapper)

return the Engine or Connection which is used to execute statements on behalf of the given Mapper.

Calling connect() on the return result will always result in a Connection object. This method disregards any SessionTransaction that may be in progress.

The order of searching is as follows:

if an Engine or Connection was bound to this Mapper specifically within this Session, returns that Engine or Connection.

if an Engine or Connection was bound to this Mapper's underlying Table within this Session (i.e. not to the Mapper directly), returns that Engine or Connection.

if an Engine or Connection was bound to this Session, returns that Engine or Connection.

finally, returns the Engine which was bound directly to the Table's MetaData object.

If no Engine is bound to the Table, an exception is raised.
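The search order above can be sketched as a plain lookup chain. This is an illustration under stated assumptions, not the real method: the bind dictionaries and the NoEngineError name are hypothetical stand-ins for the Session's internal state and actual exception type.

```python
class NoEngineError(Exception):
    """hypothetical stand-in for the exception raised when nothing is bound"""

def get_bind(mapper_binds, table_binds, session_bind, metadata_engine,
             mapper, table):
    # 1. Engine/Connection bound to this Mapper within the Session
    if mapper in mapper_binds:
        return mapper_binds[mapper]
    # 2. Engine/Connection bound to the Mapper's underlying Table
    if table in table_binds:
        return table_binds[table]
    # 3. Engine/Connection bound to the Session itself
    if session_bind is not None:
        return session_bind
    # 4. Engine bound directly to the Table's MetaData
    if metadata_engine is not None:
        return metadata_engine
    raise NoEngineError("no Engine bound for %r" % mapper)
```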

def has_key(self, key)

identity_map = property()

a WeakValueDictionary consisting of all objects within this Session keyed to their _instance_key value.
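Because the map holds only weak references, an entry disappears as soon as no other strong references to the instance remain, so the Session does not keep otherwise-unreferenced objects alive. A small standard-library demonstration of that behavior (the Entity class and key string are hypothetical):

```python
import weakref

class Entity(object):
    pass

identity_map = weakref.WeakValueDictionary()

obj = Entity()
identity_map["hypothetical_instance_key"] = obj
assert "hypothetical_instance_key" in identity_map

# dropping the last strong reference removes the entry (immediately
# in CPython, which uses reference counting)
del obj
assert "hypothetical_instance_key" not in identity_map
```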

def import_instance(self, *args, **kwargs)

deprecated; a synonym for merge()

def is_expired(self, obj, unexpire=False)

return True if the given object has been marked as expired.

def load(self, class_, ident, **kwargs)

return an instance of the object based on the given identifier.

If not found, raises an exception. The method will *remove all pending changes* to the object already existing in the Session. The ident argument is a scalar or tuple of primary key column values in the order of the table def's primary key columns.

the entity_name keyword argument may also be specified which further qualifies the underlying Mapper used to perform the query.

def mapper(self, class_, entity_name=None)

given a Class, return the primary Mapper responsible for persisting it

def merge(self, object, entity_name=None)

merge the object into a newly loaded or existing instance from this Session.

note: this method is currently not completely implemented.

new = property()

a Set of all objects marked as 'new' within this Session.

def query(self, mapper_or_class, entity_name=None, **kwargs)

return a new Query object corresponding to this Session and the mapper, or the classes' primary mapper.

def refresh(self, obj)

reload the attributes for the given object from the database, clearing any changes made.

def save(self, object, entity_name=None)

Add a transient (unsaved) instance to this Session.

This operation cascades the "save_or_update" method to associated instances if the relation is mapped with cascade="save-update".

The 'entity_name' keyword argument will further qualify the specific Mapper used to handle this instance.

def save_or_update(self, object, entity_name=None)

save or update the given object into this Session.

The presence of an '_instance_key' attribute on the instance determines whether to save() or update() the instance.
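That dispatch rule can be sketched directly (an illustration, not the real method; FakeSession and Thing are hypothetical stand-ins used to show which branch is taken):

```python
class FakeSession(object):
    """hypothetical stand-in that records which method was chosen"""
    def __init__(self):
        self.calls = []
    def save(self, obj, entity_name=None):
        self.calls.append("save")
    def update(self, obj, entity_name=None):
        self.calls.append("update")

class Thing(object):
    pass

def save_or_update(session, obj, entity_name=None):
    # a persisted instance carries an '_instance_key' attribute;
    # a transient (unsaved) instance does not
    if hasattr(obj, "_instance_key"):
        session.update(obj, entity_name)
    else:
        session.save(obj, entity_name)
```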

def scalar(self, mapper, clause, params, **kwargs)

works like execute() but returns a scalar result.

sql = property()

def update(self, object, entity_name=None)

Bring the given detached (saved) instance into this Session.

If there is a persistent instance with the same identifier already associated with this Session, an exception is thrown.

This operation cascades the "save_or_update" method to associated instances if the relation is mapped with cascade="save-update".

back to section top

class SessionTransaction(object)

represents a Session-level Transaction. This corresponds to one or more sqlalchemy.engine.Transaction instances behind the scenes, with one Transaction per Engine in use.

the SessionTransaction object is **not** threadsafe.

def __init__(self, session, parent=None, autoflush=True)

def add(self, connectable)

def close(self)

def commit(self)

def connection(self, mapper_or_class, entity_name=None)

def get_or_add(self, connectable)

def rollback(self)

back to section top

module sqlalchemy.pool

provides a connection pool implementation, which optionally manages connections on a thread local basis. Also provides a DBAPI2 transparency layer so that pools can be managed automatically, based on module type and connect arguments, simply by calling regular DBAPI connect() methods.

Module Functions

def clear_managers()

removes all current DBAPI2 managers. all pools and connections are disposed.

def manage(module, **params)

given a DBAPI2 module and pool management parameters, returns a proxy for the module that will automatically pool connections, creating new connection pools for each distinct set of connection arguments sent to the decorated module's connect() function.

Arguments: module : a DBAPI2 database module.

poolclass=QueuePool : the class used by the pool module to provide pooling.

Options: See Pool for options.
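The per-connect-arguments pooling idea can be sketched in a few lines of pure Python. This is a minimal illustration only, not the real manage() implementation: the _ManagedModule class and its checkin() helper are hypothetical, and real pools return connections transparently when they are closed.

```python
# sketch: a module proxy keeping one pool of idle connections per
# distinct set of arguments passed to connect()
class _ManagedModule(object):
    def __init__(self, module):
        self.module = module
        self.pools = {}   # connect-args key -> list of idle connections

    def _key(self, args, kwargs):
        return (args, tuple(sorted(kwargs.items())))

    def connect(self, *args, **kwargs):
        pool = self.pools.setdefault(self._key(args, kwargs), [])
        if pool:
            return pool.pop()   # reuse an idle connection
        return self.module.connect(*args, **kwargs)

    def checkin(self, conn, *args, **kwargs):
        # make the connection available for reuse; the real pool does
        # this transparently on connection close
        self.pools.setdefault(self._key(args, kwargs), []).append(conn)

def manage(module, **params):
    return _ManagedModule(module)
```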

back to section top

class AssertionPool(Pool)

a Pool implementation which will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.

TODO: modify this to handle an arbitrary connection count.

def __init__(self, creator, **params)

def create_connection(self)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self, conn)

def status(self)

back to section top

class NullPool(Pool)

a Pool implementation which does not pool connections; instead it literally opens and closes the underlying DBAPI connection on each connection open/close.

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self, conn)

def status(self)

back to section top

class Pool(object)

Base Pool class. This is an abstract class, which is implemented by various subclasses including:

QueuePool - pools multiple connections using Queue.Queue

SingletonThreadPool - stores a single connection per execution thread

NullPool - doesn't do any pooling; opens and closes connections

AssertionPool - stores only one connection, and asserts that only one connection is checked out at a time.

the main argument, "creator", is a callable function that returns a newly connected DBAPI connection object.

Options that are understood by Pool are:

echo=False : if set to True, connections being checked out from and returned to the pool will be logged to the standard output, as well as pool sizing information. Echoing can also be achieved by enabling logging for the "sqlalchemy.pool" namespace.

use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the same connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.

recycle=-1 : if set to a value other than -1, a number of seconds between connection recycling: upon checkout, if this timeout has been surpassed the connection will be closed and replaced with a newly opened connection.

auto_close_cursors = True : cursors, returned by connection.cursor(), are tracked and are automatically closed when the connection is returned to the pool. some DBAPIs like MySQLdb become unstable if cursors remain open.

disallow_open_cursors = False : if auto_close_cursors is False, and disallow_open_cursors is True, will raise an exception if an open cursor is detected upon connection checkin.

If auto_close_cursors and disallow_open_cursors are both False, then no cursor processing occurs upon checkin.
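The use_threadlocal behavior in particular can be sketched with the standard threading module. This is an illustrative sketch under stated assumptions, not the real Pool code: the ThreadLocalPool class and return_conn() method are hypothetical.

```python
import threading

class ThreadLocalPool(object):
    """sketch of use_threadlocal=True: repeated connect() calls in one
    thread return the same connection until it is returned."""
    def __init__(self, creator):
        self.creator = creator          # callable returning a new connection
        self._local = threading.local() # per-thread storage

    def connect(self):
        conn = getattr(self._local, "conn", None)
        if conn is None:
            conn = self.creator()
            self._local.conn = conn     # remember it for this thread
        return conn

    def return_conn(self):
        # once returned, the next connect() gets a fresh connection
        self._local.conn = None
```

Other threads never see this thread's connection, since the storage is per-thread.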

def __init__(self, creator, recycle=-1, echo=None, use_threadlocal=False, auto_close_cursors=True, disallow_open_cursors=False)

def connect(self)

def create_connection(self)

def dispose(self)

def do_get(self)

def do_return_conn(self, conn)

def get(self)

def log(self, msg)

def return_conn(self, agent)

def status(self)

def unique_connection(self)

back to section top

class QueuePool(Pool)

uses Queue.Queue to maintain a fixed-size list of connections.

Arguments include all those used by the base Pool class, as well as:

pool_size=5 : the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.

max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.

timeout=30 : the number of seconds to wait before giving up on returning a connection
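The sizing arithmetic (at most pool_size + max_overflow connections checked out at once, at most pool_size idle connections retained) can be sketched in pure Python. Illustrative only; SizedPool and its methods are hypothetical, and the real QueuePool blocks on a timeout rather than raising immediately.

```python
class SizedPool(object):
    """sketch of QueuePool's sizing rules"""
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self.creator = creator
        self.pool_size = pool_size
        self.max_overflow = max_overflow
        self.idle = []          # "sleeping" connections, at most pool_size
        self.checked_out = 0

    def get(self):
        if self.idle:
            self.checked_out += 1
            return self.idle.pop()
        if self.checked_out >= self.pool_size + self.max_overflow:
            # real QueuePool would wait up to 'timeout' seconds here
            raise RuntimeError("connection limit reached")
        self.checked_out += 1
        return self.creator()

    def put(self, conn):
        self.checked_out -= 1
        if len(self.idle) < self.pool_size:
            self.idle.append(conn)   # keep up to pool_size sleeping
        # else: the overflow connection is discarded (closed)
```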

def __init__(self, creator, pool_size=5, max_overflow=10, timeout=30, **params)

def checkedin(self)

def checkedout(self)

def dispose(self)

def do_get(self)

def do_return_conn(self, conn)

def overflow(self)

def size(self)

def status(self)

back to section top

class SingletonThreadPool(Pool)

Maintains one connection per thread, never moving a connection to a thread other than the one in which it was created.

this is used for SQLite, which does not handle multithreading by default and also requires a singleton connection if a :memory: database is being used.

options are the same as those of Pool, as well as:

pool_size=5 - the number of threads in which to maintain connections at once.

def __init__(self, creator, pool_size=5, **params)

def cleanup(self)

def dispose(self)

def dispose_local(self)

def do_get(self)

def do_return_conn(self, conn)

def status(self)

back to section top

module sqlalchemy.ext.sessioncontext

class SessionContext(object)

A simple wrapper for ScopedRegistry that provides a "current" property which can be used to get, set, or remove the session in the current scope.

By default this object provides thread-local scoping, which is the default scope provided by sqlalchemy.util.ScopedRegistry.

Usage:

  engine = create_engine(...)

  def session_factory():
      return Session(bind_to=engine)

  context = SessionContext(session_factory)

  s = context.current                               # get thread-local session
  context.current = Session(bind_to=other_engine)   # set current session
  del context.current     # discard the thread-local session (a new one will
                          # be created on the next call to context.current)
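A minimal pure-Python sketch of this thread-local scoping, mirroring the documented get_current/set_current/del_current methods (illustrative only; the real class delegates to sqlalchemy.util.ScopedRegistry, and scopefunc is ignored here):

```python
import threading

class SessionContext(object):
    """sketch: a 'current' property scoped per thread"""
    def __init__(self, session_factory, scopefunc=None):
        self.session_factory = session_factory
        self._registry = threading.local()   # thread-local storage

    def get_current(self):
        if not hasattr(self._registry, "session"):
            # lazily create a session for this thread
            self._registry.session = self.session_factory()
        return self._registry.session

    def set_current(self, session):
        self._registry.session = session

    def del_current(self):
        del self._registry.session

    current = property(get_current, set_current, del_current)
```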

def __init__(self, session_factory, scopefunc=None)

current = property()

Property used to get/set/del the session in the current scope

def del_current(self)

def get_current(self)

mapper_extension = property()

get a mapper extension that implements get_session using this context

def set_current(self, session)

back to section top

class SessionContextExt(MapperExtension)

a mapper extension that provides sessions to a mapper using a SessionContext

def __init__(self, context)

def get_session(self)

back to section top

module sqlalchemy.mods.threadlocal

this plugin installs thread-local behavior at the Engine and Session level.

The default Engine strategy will be "threadlocal", producing TLocalEngine instances for create_engine by default. With this engine, the connect() method returns the same connection for a given thread if one is already checked out from the pool. This greatly helps functions that execute multiple statements to share a single connection without requiring explicit "close" calls on result handles.

on the Session side, module-level functions will be installed within the objectstore module, such as flush(), delete(), etc., each of which calls the corresponding method on the thread-local session.

Note: this mod creates a global, thread-local session context named sqlalchemy.objectstore. All mappers created while this mod is installed will reference this global context when creating new mapped object instances.

Module Functions

def assign_mapper(class_, *args, **kwargs)

back to section top

class Objectstore(object)

def __init__(self, *args, **kwargs)

session = property()

back to section top

module sqlalchemy.ext.selectresults

class SelectResults(object)

Builds a query one component at a time via separate method calls, each call transforming the previous SelectResults instance into a new SelectResults instance with further limiting criterion added. When interpreted in an iterator context (such as via calling list(selectresults)), executes the query.
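The generative, copy-on-modify style described above can be sketched in pure Python. This is a conceptual stand-in (names invented for the example), not the real SelectResults, which builds SQL rather than filtering in-memory rows:

```python
# Conceptual sketch of SelectResults' generative style -- NOT the real class.
# Every method clones the object and adds one more criterion; nothing
# "executes" until the results are iterated or list() is called.
import copy

class MiniResults:
    def __init__(self, rows):
        self._rows = list(rows)
        self._filters = []
        self._limit = None
        self._offset = 0

    def clone(self):
        return copy.deepcopy(self)

    def filter(self, pred):
        c = self.clone(); c._filters.append(pred); return c

    def limit(self, n):
        c = self.clone(); c._limit = n; return c

    def offset(self, n):
        c = self.clone(); c._offset = n; return c

    def list(self):
        # "execution" happens here, applying all accumulated criteria at once
        rows = [r for r in self._rows if all(f(r) for f in self._filters)]
        end = None if self._limit is None else self._offset + self._limit
        return rows[self._offset:end]

    def __iter__(self):
        return iter(self.list())

r = MiniResults(range(10))
evens = r.filter(lambda x: x % 2 == 0)   # new instance; r is untouched
page = evens.offset(1).limit(2)          # chain further without mutating evens
```

Because each call returns a new instance, partially built queries like `evens` can be reused and extended in several directions safely.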

def __init__(self, query, clause=None, ops={}, joinpoint=None)

constructs a new SelectResults using the given Query object and optional WHERE clause. ops is an optional dictionary of bind parameter values.

def avg(self, col)

executes the SQL avg() function against the given column

def clone(self)

creates a copy of this SelectResults.

def compile(self)

def count(self)

executes the SQL count() function against the SelectResults criterion.

def filter(self, clause)

applies an additional WHERE clause against the query.

def join_to(self, prop)

join the table of this SelectResults to the table located against the given property name.

subsequent calls to join_to or outerjoin_to will join against the rightmost table located by the previous join_to or outerjoin_to call, with the property search starting from that rightmost mapper.

def limit(self, limit)

apply a LIMIT to the query.

def list(self)

return the results represented by this SelectResults as a list.

this results in an execution of the underlying query.

def max(self, col)

executes the SQL max() function against the given column

def min(self, col)

executes the SQL min() function against the given column

def offset(self, offset)

apply an OFFSET to the query.

def order_by(self, order_by)

apply an ORDER BY to the query.

def outerjoin_to(self, prop)

outer join the table of this SelectResults to the table located against the given property name.

subsequent calls to join_to or outerjoin_to will join against the rightmost table located by the previous join_to or outerjoin_to call, with the property search starting from that rightmost mapper.

def select(self, clause)

def select_from(self, from_obj)

set the from_obj parameter of the query to a specific table or set of tables.

from_obj is a list.

def sum(self, col)

executes the SQL sum() function against the given column

back to section top

class SelectResultsExt(MapperExtension)

a MapperExtension that provides SelectResults functionality for the results of query.select_by() and query.select()

def select(self, query, arg=None, **kwargs)

def select_by(self, query, *args, **params)

back to section top

module sqlalchemy.exceptions

class ArgumentError()

raised for all those conditions where invalid arguments are sent to constructed objects. This error generally corresponds to construction time state errors.

back to section top

class AssertionError()

raised when internal state is detected to be in an invalid condition

back to section top

class ConcurrentModificationError()

raised when a concurrent modification condition is detected

back to section top

class DBAPIError()

raised when something unexpected happens within a particular DBAPI implementation

def __init__(self, message, orig)

back to section top

class FlushError()

raised when an invalid condition is detected upon a flush()

back to section top

class InvalidRequestError()

sqlalchemy was asked to do something it can't do, such as return nonexistent data, etc. This error generally corresponds to runtime state errors.

back to section top

class NoSuchColumnError()

raised by RowProxy when a nonexistent column is requested from a row

back to section top

class NoSuchTableError()

sqlalchemy was asked to load a table's definition from the database, but the table doesn't exist.

back to section top

class SQLAlchemyError()

generic error class

back to section top

class SQLError()

raised when the execution of a SQL statement fails. Includes accessors for the underlying exception, as well as the SQL statement and bind parameters.

def __init__(self, statement, params, orig)

back to section top

class TimeoutError()

raised when a connection pool times out on getting a connection

back to section top

module sqlalchemy.ext.proxy

class AutoConnectEngine(BaseProxyEngine)

An SQLEngine proxy that automatically connects when necessary.

def __init__(self, dburi, **kwargs)

def get_engine(self)

back to section top

class BaseProxyEngine(Executor)

Basis for all proxy engines.

def compiler(self, *args, **kwargs)

this method is required to be present as it overrides the compiler method present in sql.Engine

engine = property()

def execute_compiled(self, *args, **kwargs)

this method is required to be present as it overrides the execute_compiled present in sql.Engine

def get_engine(self)

def set_engine(self, engine)

back to section top

class ProxyEngine(BaseProxyEngine)

Engine proxy for lazy and late initialization.

This engine will delegate access to a real engine set with connect().
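The late-binding delegation idea can be sketched with `__getattr__`. This is a conceptual stand-in (names invented for the example), not the SQLAlchemy ProxyEngine API:

```python
# Conceptual sketch of a lazy engine proxy -- NOT the real ProxyEngine.
# Attribute access is forwarded to a real engine that can be set (or swapped)
# well after the proxy object is created, e.g. at application startup.
class MiniProxy:
    def __init__(self):
        self._engine = None

    def connect(self, engine):
        self._engine = engine         # late-bind the real engine

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        if self._engine is None:
            raise AttributeError("no engine has been connected yet")
        return getattr(self._engine, name)

class _RealEngine:
    def execute(self, sql):
        return sql.upper()            # trivial stand-in behavior

proxy = MiniProxy()
proxy.connect(_RealEngine())
result = proxy.execute("select 1")    # forwarded to the real engine
```

Because the proxy is a stable module-level object, code can import and hold it before any real engine exists, which is the point of the lazy/late-initialization pattern.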

def __init__(self, **kwargs)

def connect(self, *args, **kwargs)

Establish connection to a real engine.

def get_engine(self)

def set_engine(self, engine)