Dropping Stored Procedures - Firebird

The DROP PROCEDURE statement deletes an existing stored procedure from the database. You can use this statement anywhere it is possible to use DDL statements.

The syntax is

DROP PROCEDURE name;

The procedure name must be the name of an existing procedure. Take care with case sensitivity of object names if quoted identifiers were used when creating the procedure.
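
For example, a minimal sketch, assuming a dialect 3 database in which the procedure was created with the quoted, mixed-case identifier "LogSales" (a hypothetical name), the quoted name must be reproduced exactly, including case:

DROP PROCEDURE "LogSales";

An unquoted name would be folded to uppercase and would not match the quoted identifier.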

The following statement deletes the LOG_SALES procedure:

DROP PROCEDURE LOG_SALES;

Restrictions

The following restrictions apply to dropping a procedure:

  • Only SYSDBA or the owner of a procedure can drop it.
  • A procedure that is in use by any transaction cannot be dropped. This is a particular issue in systems where procedures are called from transactions that are committed using CommitRetaining.
  • If any other objects in the database refer to or call the procedure, it will be necessary to alter the dependent objects to remove the references and commit that work, before the procedure can be dropped.
  • A recursive procedure cannot be dropped without first removing the recursive calls and committing that change. Similar complications apply to a procedure that calls other procedures that, in turn, call the procedure you want to drop. All such dependencies must be removed, and the changes committed, before the procedure is free to be dropped (see the sketch following this list).
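
As a hedged illustration of the last two points (DEP_PROC is a hypothetical procedure that calls LOG_SALES): the dependent body is rewritten to remove the call, the change is committed, and only then is the drop attempted.

SET TERM ^;
ALTER PROCEDURE DEP_PROC
AS
BEGIN
  /* body rewritten so that it no longer calls LOG_SALES */
  EXIT;
END ^
SET TERM ;^
COMMIT;
DROP PROCEDURE LOG_SALES;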

OPTIMIZATION TOPIC: Using the Internal RDB$DB_KEY

Firebird inherited an undocumented feature that can speed up query performance in some conditions. This is RDB$DB_KEY (usually referred to simply as the db_key), an internal cardinality key maintained by the database engine for internal use in query optimization and record version management. Inside the transaction context in which it is captured, it represents a row’s position in the table.

About RDB$DB_KEY

The first lesson to learn is that RDB$DB_KEY is a raw position, related to the database itself and not to a physical address on disk. The second is that the numbers do not progress in a predictable sequence, so do not attempt calculations involving their relative positions. The third lesson is that they are volatile: they change after a backup and subsequent restore and, sometimes, after the transaction is committed. It is essential to understand the transience of the db_key and to make no assumptions about its existence once an operation that refers to it is committed or rolled back.

Size of RDB$DB_KEY

For tables, RDB$DB_KEY uses 8 bytes. For a view, it uses as many multiples of 8 bytes as there are underlying tables. For example, if a view joins three tables, its RDB$DB_KEY uses 24 bytes. This is important if you are working with stored procedures and want to store RDB$DB_KEY in a variable. You must use a CHAR(n) data type of the correct length.
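
In PSQL terms, a minimal sketch of the matching declarations (the three-table view is the hypothetical case described above):

DECLARE VARIABLE DBK_TABLE CHAR(8);  /* db_key of a row in a plain table */
DECLARE VARIABLE DBK_VIEW CHAR(24);  /* db_key of a row in a view over three tables: 3 x 8 bytes */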

By default, db_keys are returned as hex values: two hex digits represent each byte, so an 8-byte db_key is displayed as 16 hex digits. Try it on one of your sample tables in isql:

SQL> SELECT RDB$DB_KEY FROM MYTABLE;
RDB$DB_KEY
================
000000B600000002
000000B600000004
000000B600000006
000000B600000008
000000B60000000A

Benefits

Because an RDB$DB_KEY marks the raw position of a row, it is faster for a search than even a primary key. If, for some special reason, a table has no primary key or active unique index, or its uniqueness relies on a unique index whose columns are allowed to contain nulls, it is possible for exact duplicate rows to exist. Under such conditions, an RDB$DB_KEY is the only way to identify each row unequivocally.
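
For example, a hedged sketch of de-duplicating such a table (MYTABLE and COL1 are hypothetical names, with COL1 taken as the column whose repeated values define the duplicates): for each group, every row except the one with the lowest db_key is deleted.

DELETE FROM MYTABLE
WHERE RDB$DB_KEY NOT IN
  (SELECT MIN(T2.RDB$DB_KEY)
   FROM MYTABLE T2
   GROUP BY T2.COL1);

Since the duplicate rows are identical in every column, the db_key is the only value that can single out the rows to delete.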

Several kinds of statements run faster when moved into a stored procedure using an RDB$DB_KEY—typically, updates and deletions with complex conditions. For inserts (even huge batches) the RDB$DB_KEY is of no avail, since there is no way to ascertain what the values will be.

However, if the database pages being searched for update or delete are already in main memory, the difference in access speed is likely to be negligible. The same is true if the searched set is quite small and all of the searched rows are close to one another.

Optimization of Queries

Performance problems are likely if you try to run a DSQL update like the following example against a huge table:

UPDATE TABLEA A
SET A.TOTAL = (SELECT SUM(B.VALUEFIELD)
               FROM TABLEB B
               WHERE B.FK = A.PK)
WHERE <conditions>;

If you run the same operation often and it affects a lot of rows, it would be worth the effort to write a stored procedure that handles the correlated total on each row without needing to perform a subquery:

CREATE PROCEDURE ...
...
AS
BEGIN
  FOR SELECT B.FK, SUM(B.VALUEFIELD)
    FROM TABLEB B
    GROUP BY B.FK
    INTO :B_FK, :TOTAL
  DO
    UPDATE TABLEA A SET A.TOTAL = :TOTAL
    WHERE A.PK = :B_FK AND ... ;
END

Although speedier, it still has the problem that each pass of the FOR ... DO loop must locate the record in A by its primary key.

Some people claim better results with this alien syntax:

...
DECLARE VARIABLE DBK CHAR(8); /* 8 chars for a table's db_key */
...
FOR SELECT B.FK, SUM(B.VALUEFIELD), A.RDB$DB_KEY
  FROM TABLEB B
  JOIN TABLEA A ON A.PK = B.FK
  WHERE <conditions>
  GROUP BY B.FK, A.RDB$DB_KEY
  INTO :B_FK, :TOTAL, :DBK
DO
  UPDATE TABLEA SET TOTAL = :TOTAL
  WHERE RDB$DB_KEY = :DBK;

The benefits of this approach are

  • Filtering of the common records for A and B is efficient where the optimizer can make a good filter from the explicit join.
  • If the join can apply its own search clause, there is a gain in getting the extra filtering done before the update checks its own condition.
  • Rows from the right-side table (A) are located by raw db_key values, extracted at the time of the join, making the search faster than looking through the primary key or its index.

Inserting

Since inserting does not involve a search, the simplest insert operations—for example, reading constant values from an import set in an external table—are not affected by the need to locate keys.
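
A minimal sketch of that simple case, assuming EXT_IMPORT is a hypothetical external table declared over an import file:

INSERT INTO TABLEA (PKEY, TOTAL)
SELECT PKEY, TOTAL
FROM EXT_IMPORT;

No search is performed against TABLEA, so there is no key, and no db_key, to look up.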

Not every INSERT statement’s VALUES input set is obtained so simply, however. It can be a very complicated set of values derived from expressions, joins, or aggregations. In a stored procedure, an INSERT operation may well be branched into the ELSE sub-clause of an IF (EXISTS(...)) predicate, for example:

IF (EXISTS(SELECT ...)) THEN
  ...
ELSE
BEGIN
  INSERT INTO TABLEA
  SELECT
    C.PKEY,
    SUM(B.AVALUE),
    AVG(B.BVALUE),
    COUNT(DISTINCT C.XYZ)
  FROM TABLEB B
  JOIN TABLEC C ON B.X = C.Y
  WHERE C.Z = 'value'
    AND C.PKEY NOT IN (SELECT PKEY FROM TABLEA)
  GROUP BY C.PKEY;
END
...

Implementing this in a stored procedure:

FOR SELECT
    C.PKEY,
    SUM(B.AVALUE),
    AVG(B.BVALUE),
    COUNT(DISTINCT C.XYZ)
  FROM TABLEB B
  JOIN TABLEC C ON B.X = C.Y
  WHERE C.Z = 'value'
  /* no NOT IN filter here: existence in TABLEA is tested row by row below */
  GROUP BY C.PKEY
  INTO :C_KEY, :TOTAL, :B_AVG, :C_COUNT
DO
BEGIN
  DBK = NULL; /* reset: SELECT ... INTO leaves the variable unchanged when no row is found */
  SELECT A.RDB$DB_KEY FROM TABLEA A
    WHERE A.PKEY = :C_KEY
    INTO :DBK;
  IF (DBK IS NULL) THEN /* the row doesn't exist */
    INSERT INTO TABLEA (PKEY, TOTAL, AVERAGE_B, COUNT_C)
    VALUES (:C_KEY, :TOTAL, :B_AVG, :C_COUNT);
  ELSE
    UPDATE TABLEA SET
      TOTAL = TOTAL + :TOTAL,
      AVERAGE_B = AVERAGE_B + :B_AVG,
      COUNT_C = COUNT_C + :C_COUNT
    WHERE RDB$DB_KEY = :DBK;
END
...

Duration of Validity

By default, the scope of a db_key is the current transaction. You can count on it to remain valid for the duration of that transaction. A commit or rollback will render the RDB$DB_KEY values you captured unpredictable. If you are using CommitRetaining, the transaction context is retained, blocking garbage collection and thus preventing the old db_key values from being “recycled.” Under these conditions, the RDB$DB_KEY values of any rows affected by your transaction remain valid until a “hard” commit or rollback occurs.
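
As a hedged illustration of this scope (MYTABLE and PKEY are hypothetical names, and the ? placeholder stands for the captured value passed back through your data access interface):

/* all inside one transaction */
SELECT RDB$DB_KEY FROM MYTABLE WHERE PKEY = 100; /* capture the key */
UPDATE MYTABLE SET TOTAL = 0
WHERE RDB$DB_KEY = ?; /* valid: same transaction */
COMMIT; /* from here on, the captured value must be treated as stale */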

After the hard commit or rollback, another transaction might delete a row that was isolated inside your transaction’s context and was thus considered “existent” by your application. Any RDB$DB_KEY value might now point to a non-existent row. If there is a long interval between the moment when your transaction began and when your work completes, you should check that the row has not been changed or locked by another transaction in the meantime.

Some application interfaces, for example, IB Objects, are super-smart about inserts and can prepare a “slot” for a newly inserted row in the client buffers to short-circuit the refresh following the commit. Such features are important for performance across the network. However, “smarts” like this are based on exact, real keys. Since the db_key is merely a proxy key for a set that has been derived from previously committed data, it has no meaning for a new row: it is not available for spot updates of the client buffers.

Changing the Scope of Duration

The default duration of RDB$DB_KEY values can be changed at connection time by passing the API parameter isc_dpb_dbkey_scope. Some development tools (for example, the IB Objects components for Borland ObjectPascal environments) surface it in a connectivity class. However, extending the scope of db_keys is not recommended in a highly interactive environment, since it disables garbage collection, with the unwanted side effect of causing your database file to grow at an alarming rate and slowing performance until the system hangs or crashes. Do not pool connections that have a non-default db_key scope.

RDB$DB_KEY with Multi-Table Sets

All tables maintain their own distinct, 8-byte RDB$DB_KEY columns. Views and joins generate runtime db_keys by concatenating those of the rows in the source tables. If you use RDB$DB_KEY in multi-table sets, be very careful to qualify each one accurately.
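
For example, a hedged sketch of a two-table join (TABLEA and TABLEB as in the earlier examples) in which each db_key is qualified by its own table alias:

SELECT A.RDB$DB_KEY AS DBK_A, B.RDB$DB_KEY AS DBK_B, A.PK, B.FK
FROM TABLEA A
JOIN TABLEB B ON B.FK = A.PK;

Each qualified db_key identifies a row in its own table only; the two values are unrelated to each other.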

RDB$DB_KEY cannot be used across tables. There is no possibility of establishing a dependency relationship between the RDB$DB_KEY of one table and that of another, except in re-entrant (self-referencing) joins.

