The INSERT Statement - Firebird

The INSERT statement is used for adding new rows to a single table. SQL does not permit rows to be inserted into more than one table in a single INSERT statement.

INSERT statements can operate on views, under some conditions.

The INSERT statement has two general forms for passing the values for the input column list.

Use this form for inserting a list of constants:
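A minimal sketch of the constants form, using a hypothetical EMPLOYEE table (the table and column names are assumptions, not from the original listing):

```sql
/* Hypothetical table: EMPLOYEE(EMP_ID, FIRST_NAME, LAST_NAME, HIRE_DATE) */
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, LAST_NAME, HIRE_DATE)
VALUES (1, 'Maria', 'Santos', '2004-06-01');
```

Each value in the VALUES( ) list corresponds positionally to a column in the input column list.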

Use this form for inserting from an inline query:
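A sketch of the inline-query form, again with hypothetical table names:

```sql
/* Hypothetical tables: rows selected from EMPLOYEE feed EMPLOYEE_ARCHIVE */
INSERT INTO EMPLOYEE_ARCHIVE (EMP_ID, FIRST_NAME, LAST_NAME)
SELECT EMP_ID, FIRST_NAME, LAST_NAME
FROM EMPLOYEE
WHERE TERMINATED = 1;
```

The SELECT clause replaces the VALUES( ) list; its output columns must match the input column list in number and be assignment-compatible in type.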

In the following example, an INSERT INTO clause defines an input set for TABLEB and a SELECT clause defines a corresponding inline query from TABLEA to supply values to the input set:
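The original listing was lost in conversion; a plausible reconstruction, keeping the TABLEA and TABLEB names from the text (the column names are assumptions):

```sql
/* Columns of TABLEB receive values from corresponding columns of TABLEA */
INSERT INTO TABLEB (COL1, COL2, COL3)
SELECT COLA, COLB, COLC
FROM TABLEA
WHERE COLA > 99;
```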

Inserting into BLOB Columns

The INSERT INTO ... SELECT technique passes BLOBs to BLOBs directly. As a rule, if you need to insert BLOBs as part of a VALUES( ) list, they must be constructed in the client application using API functions and structures. In DSQL applications, they are usually passed as streams. The segment size can simply be disregarded except in ESQL applications.

However, if you are passing a VALUES( ) entry to a BLOB as text, Firebird will accept character strings as input, for example:
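For example (the MEMOS table and its columns are hypothetical; MEMO_TEXT is assumed to be a text BLOB):

```sql
/* MEMO_TEXT is assumed to be a column of type BLOB SUB_TYPE 1 (text) */
INSERT INTO MEMOS (MEMO_ID, MEMO_TEXT)
VALUES (1, 'This character string is accepted as input to a text BLOB.');
```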

This capability will suit conditions where the text to be stored will never exceed 32,767 bytes (or 32,765, if you are reading a VARCHAR field into a BLOB). For many programming interfaces, the problem may seem a non-issue, because they cannot handle such large strings anyway. However, since Firebird will accept concatenated SQL string expressions, such as MYVARCHAR1 || MYVARCHAR2, it will be necessary to protect string inputs from overflows.

Inserting into ARRAY Columns

In embedded applications (ESQL), it is possible to construct a pre-compiled SQL statement to insert entire arrays into an array column. Errors can occur if the data does not exactly fill the array.

In DSQL, it is not possible to insert data into ARRAY columns at all. It is necessary to implement a custom method in application or component code that calls the API function isc_array_put_slice.

Using INSERT with Automatic Fields

A table definition may include columns whose values are populated automatically when a new row is inserted. This can happen in several ways:

  • A column is defined by a COMPUTED BY clause.
  • A column, or a domain under which it was defined, includes a DEFAULT clause.
  • A BEFORE INSERT trigger has been added to the table to populate the column automatically.

Automatic fields may affect the way you formulate INSERT statements for the table.


Computed Columns

Including a computed column in the input column list is not valid and will cause an exception, as illustrated by the following example:
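The original listing did not survive conversion; the following is a reconstruction using a hypothetical EMPLOYEE table (the names, the computed expression, and the default are assumptions):

```sql
/* Hypothetical table: FULL_NAME is computed, BIRTH_COUNTRY has a default */
CREATE TABLE EMPLOYEE (
  EMP_ID        INTEGER NOT NULL PRIMARY KEY,
  FIRST_NAME    VARCHAR(30),
  LAST_NAME     VARCHAR(30),
  BIRTH_COUNTRY VARCHAR(30) DEFAULT 'TAIWAN',
  FULL_NAME     COMPUTED BY (LAST_NAME || ', ' || FIRST_NAME)
);

/* Fails with an exception: FULL_NAME is COMPUTED BY and cannot accept input */
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, LAST_NAME, FULL_NAME)
VALUES (1, 'Jo', 'Chang', 'Chang, Jo');
```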

Columns with Defaults

If a column has a default defined for it, the default will work only on inserts and only if the defaulted column is omitted from the input column list. If the statement in the previous example is corrected, it will cause 'TAIWAN' to be written to the BIRTH_COUNTRY column:
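A sketch of the corrected statement, assuming a hypothetical EMPLOYEE table whose BIRTH_COUNTRY column is declared with DEFAULT 'TAIWAN':

```sql
/* BIRTH_COUNTRY is omitted from the input column list, so its default applies */
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, LAST_NAME)
VALUES (1, 'Jo', 'Chang');
/* The stored row has BIRTH_COUNTRY = 'TAIWAN' */
```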

Defaults do not kick in to replace NULLs:
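For example, assuming the same hypothetical EMPLOYEE table with a default on BIRTH_COUNTRY:

```sql
/* An explicit NULL is stored as NULL; the default is NOT applied */
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, LAST_NAME, BIRTH_COUNTRY)
VALUES (2, 'Kim', 'Lee', NULL);
```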

If you are developing applications with components that generate INSERT statements from the column specifications of the datasets' SELECT statements (for example, in Delphi or JBuilder), be especially aware of this behavior. If your component does not support a method for retrieving the server defaults, it may be necessary to customize the update statement that it uses to perform an insert from a dataset.


Columns Populated by Triggers

When inserting into a table that uses triggers to populate certain columns automatically, make sure you omit those columns from your input list if you want to ensure that the trigger will do its work.

This is not to say that you must always omit triggered fields from input lists. A trigger designed to supply a value only when none arrives from the INSERT statement is highly recommended, to bombproof data integrity, especially where multiple applications and tools are used to access your databases. The trigger should test the input for certain conditions (NULL, for example) and act according to what it finds.

In the following example, the primary key, OID, is populated by a trigger:
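The listing is missing from this copy; a reconstruction under stated assumptions (the table name ATABLE, the generator name, and the data column are all hypothetical):

```sql
/* Hypothetical setup: ATABLE(OID, DATA_COLUMN) keyed by a generator */
CREATE GENERATOR ATABLE_GEN;

SET TERM ^;
CREATE TRIGGER BI_ATABLE FOR ATABLE
ACTIVE BEFORE INSERT
AS
BEGIN
  /* Supply the key only when the INSERT did not provide one */
  IF (NEW.OID IS NULL) THEN
    NEW.OID = GEN_ID(ATABLE_GEN, 1);
END ^
SET TERM ;^

/* OID is omitted from the input list, so the trigger populates it */
INSERT INTO ATABLE (DATA_COLUMN)
VALUES ('Some data');
```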

In this case, the trigger fetches the generator value for the primary key, because no value was passed in the input list for it. However, because the trigger was set up to do this only when it detects NULL, the following INSERT statement will work just fine, too—provided, of course, that the value supplied for OID does not violate the unique constraint on the PK:
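A sketch of such a statement, continuing the hypothetical ATABLE example:

```sql
/* OID is supplied explicitly; the trigger sees a non-NULL value and leaves it alone */
INSERT INTO ATABLE (OID, DATA_COLUMN)
VALUES (10099, 'More data');
```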

When would you do this? Surprisingly often, in Firebird. When implementing master-detail structures, you can secure the primary key of the master row from its generator before that row is posted to the database, with a simple DSQL call:
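The call is a one-row SELECT against the system table RDB$DATABASE (the generator name here is a hypothetical stand-in):

```sql
/* Returns the next value from the generator, outside transaction control */
SELECT GEN_ID(ATABLE_GEN, 1) FROM RDB$DATABASE;
```

The returned value can then be assigned to the master row and to the foreign key of its detail rows in the client before anything is posted.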


Generators operate outside transaction control and, once you have that number, it is yours. You can apply it to the foreign key column of the detail rows in client buffers, as you create them, without posting a master record to the database. If the user decides to cancel the work, there is nothing to “undo” on the server. If you have ever struggled to achieve this kind of capability with a DBMS that uses an autoincrement or “identity” type, you will learn to love this feature.
