
PL/SQL and Oracle database instance memory

3 September 2020


By making careful use of machine resources such as memory and CPU, Oracle can support tens of thousands of concurrent database users. Oracle's memory management technologies have become quite complex over the years and are difficult to master.

Although the automation tools introduced in recent versions greatly simplify the work of database administrators, PL/SQL developers still need to understand the basic principles of memory management (especially as they relate to cursors and package variables) to avoid wasting memory.

SGA, PGA, and UGA

When a client program (say, SQL*Plus or SQL Developer) interacts with a database, three memory structures are used.

SGA (System Global Area)

SGA is a group of interrelated memory structures or SGA components that contain data and control information for one instance of an Oracle Database. SGA is shared by all server and background processes. Examples of data stored in SGA are cached data blocks and shared SQL areas.

PGA (Process Global Area)

The PGA is a memory area containing data and control information used exclusively by a single server process. Oracle Database creates a PGA when an Oracle process starts.

A separate PGA is created for each server and background process. The set of all individual PGAs is called the instance PGA. The database initialization parameters set the size of the instance PGA, not of individual PGAs.
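For example, a quick way to check these instance-wide settings (assuming you have the privileges to query V$PARAMETER; PGA_AGGREGATE_LIMIT exists only in Oracle Database 12c and later) might look like this:

SELECT name, value
  FROM v$parameter
 WHERE name IN ('pga_aggregate_target', 'pga_aggregate_limit');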

UGA (User Global Area)

Data that must be retained between database calls within a session (package variables, private SQL areas, and so on) is placed in the UGA (User Global Area).

In essence, the UGA preserves the state of the session. Where the UGA resides in memory depends on how the connection to the database is made:

  • Dedicated server. Oracle creates a dedicated server process for each session. This configuration is useful under heavy loads, such as intensive computations or long-running queries. The UGA is placed in the PGA, because no other server process needs access to it.
  • Shared server. Database calls are queued to a group of shared server processes, any of which can serve a call from any session. This configuration is well suited to hundreds of concurrent sessions that issue short requests with relatively long idle periods. The UGA is placed in the SGA so that its data is available to any of the shared server processes. (A quick way to check which mode the current session is using is shown after this list.)
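As a small illustration (assuming you have the privilege to query V$SESSION), the following query reports whether the current session is being served by a DEDICATED or SHARED server process:

SELECT server
  FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');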

The total amount of PGA memory depends significantly on the kind of operations the server performs for the application. For example, PL/SQL packages that fill large collections may require large amounts of UGA memory.

If the application uses shared servers, user processes may have to wait in a queue to be serviced. If your user processes run long PL/SQL blocks or SQL statements, the database administrator must either configure a large number of shared server processes or run those sessions on a dedicated server.

Now let’s look at what memory looks like from the perspective of the program being executed.

Cursors and Oracle memory

You have probably already written hundreds of programs that declare and open cursors, fetch rows from them, and then close them again. Neither SQL nor PL/SQL can work without cursors, and many statements implicitly perform recursive calls that open additional cursors.

And since each cursor, explicit or implicit, occupies memory on the database server, tuning an Oracle configuration often involves reducing the number of cursors an application needs.

Although this section is dedicated to memory management, remember that this is just one aspect of database optimization; you may be able to improve overall performance by increasing the number of cursors.

Oracle associates cursors with PL/SQL anonymous blocks in much the same way as with SQL commands. For example, when processing the first call in the current user session, Oracle opens an area in UGA memory (“private” SQL area) where information specific to that call is placed.

When executing an SQL statement or a PL/SQL block, the server first checks whether the library cache already contains a compiled representation of that code. If such a shared PL/SQL area is found, the execution engine links it to a private SQL area.

If no such area exists, Oracle parses the statement or block. (Oracle also prepares and caches an execution plan for an anonymous PL/SQL block, which consists of a call to the PL/SQL engine to interpret the byte code.)

Oracle interprets the simplest PL/SQL blocks (those that call no subroutines and contain no embedded SQL) using only the memory allocated to the main cursor. If the block does contain PL/SQL calls or SQL statements, Oracle requires additional private SQL areas in the UGA, which PL/SQL manages on behalf of your application.

We now come to an important fact about cursor handling: a cursor can be closed in two ways. The first is programmatic, with the CLOSE statement; once closed this way, the cursor cannot be used again in your application without reopening it:

CLOSE cursor_name;

The second is the automatic closing of an implicit cursor. In neither case, however, does PL/SQL immediately release the memory associated with the cursor; the cursor continues to exist internally so that it does not have to be processed again if, as often happens, it is reopened.

If you look at the V$OPEN_CURSOR view, you will see that executing CLOSE does not reduce the number of open session cursors; in version 11.1 and later you can also query the new CURSOR_TYPE column for more information about each cursor.
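For example, a quick way to see what is held for your own session (assuming you can query V$OPEN_CURSOR and are running 11.1 or later, where CURSOR_TYPE is available) might be:

SELECT cursor_type, COUNT(*) AS cursors
  FROM v$open_cursor
 WHERE sid = SYS_CONTEXT('USERENV', 'SID')
 GROUP BY cursor_type
 ORDER BY cursors DESC;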

PL/SQL maintains its own session cursor cache; that is, it decides when a cursor should be released. The maximum number of cursors in that cache is set by the OPEN_CURSORS initialization parameter, and the cursor to release is chosen with an LRU (Least Recently Used) algorithm.

However, this internal algorithm works well only if your programs close their cursors as soon as they have finished fetching data. So remember:

If the program explicitly opens a cursor, always explicitly close it as soon as you have finished with it (but not before)!
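A minimal sketch of this discipline, using the books table that appears later in this article (the processing step is just a placeholder):

DECLARE
   CURSOR titles_cur IS
      SELECT title FROM books;
   l_title books.title%TYPE;
BEGIN
   OPEN titles_cur;
   LOOP
      FETCH titles_cur INTO l_title;
      EXIT WHEN titles_cur%NOTFOUND;
      NULL;                   -- process the row here
   END LOOP;
   CLOSE titles_cur;          -- close as soon as fetching is finished
END;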

You can interfere with Oracle's default behavior. You could, of course, end the session itself, which closes all cursors, but there are less radical options:

  • reset the package state (see "Large collections in PL/SQL" below);
  • manage cursor behavior at a low level with the DBMS_SQL package (though the benefit of this approach is often outweighed by the loss of performance and the added programming complexity).

Tips for saving memory

Once you understand the theory, we can move on to practical tips for daily work. Also take a look at the more general tips for optimizing PL/SQL code.

It is also useful to be able to measure, from application code, how much memory the session is using at any given time. To do this, you can query various V$ views; the plsql_memory package (see plsql_memory.pkg on the book site) will help you with this.

Command sharing

The database can provide programs with shared access to compiled versions of SQL commands and anonymous blocks even if they are received from different sessions and different users.

The optimizer determines the execution plan during parsing, so the factors affecting parsing (including the optimizer settings) will affect the sharing of SQL commands. For the system to share SQL commands, several basic rules must be followed:

  • Variable values must be supplied via bind variables, not literals, so that the statement text remains unchanged; the bind variables themselves must match in name and datatype (see the short example after this list).
  • The character case and formatting of the source code must be identical. If the statements are issued from the same programs, this happens automatically; one-off, hand-typed commands may not match the programmatic ones exactly.
  • References to database objects must resolve to the same underlying objects.
  • For SQL, the database parameters that affect the optimizer must be the same. For example, the calling sessions must use the same optimization goal (ALL_ROWS or FIRST_ROWS).
  • The calling sessions must use the same National Language Support (NLS) settings.
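To make the first rule concrete, here is a small illustration using the books table from the examples later in this article (the bind name and the author value are, of course, arbitrary). The first form can be shared across executions; the second produces a new statement for every distinct literal:

-- shareable: the statement text never changes
SELECT COUNT(*) FROM books WHERE author = :author_in;

-- not shareable: each distinct literal creates a new statement
SELECT COUNT(*) FROM books WHERE author = 'FEUERSTEIN, STEVEN';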

We will not dwell on the last two rules; the specific reasons that prevent sharing of a SQL statement can be found in the V$SQL_SHARED_CURSOR view. What interests us first of all is the effect of the first three rules on PL/SQL programs.

The first rule (bind variables) is so critical that a separate subsection is devoted to it. The second rule (case and formatting) is a well-known condition for statement sharing: the text must match exactly, because the database computes a hash of it to locate and lock the object in the library cache.

Even though PL/SQL is normally case-insensitive, the following three blocks are not treated as identical:

BEGIN NULL; END;
begin null; end;
BEGIN null; END;

Different hash values are generated for them, so the statements are treated as different: they are equivalent at the logical level but distinct at the physical level.

However, if all your anonymous blocks are short and all your “real programs” are implemented as stored code (e.g. as packages), it is much less likely that you will erroneously suppress their shared use.

Implement functional code as stored programs, and keep anonymous blocks as short as possible, ideally a single call to a stored program. A related recommendation for SQL: to make statements shareable, place them in programs that are called from different parts of the application. This also saves you from writing the same statements several times.

The third rule states that external references (to tables, procedures, and so on) must resolve to the same objects. Suppose that the user Scott and I both connect to Oracle and each run this block:

BEGIN
XYZ;
END;

Whether Oracle lets both of us use the cached form of this block depends on whether the name xyz resolves to the same stored procedure. If Scott has a synonym xyz pointing to my copy of the procedure, Oracle will allow the anonymous block to be shared.

If we each have an independent copy of the procedure, each of us will work with a separate block. Even if the two copies of xyz are identical, Oracle caches them as different objects. Identical triggers on different tables are likewise cached as different objects.

The conclusion: avoid creating identical copies of tables and programs under different accounts. Conventional wisdom says that to save memory you should factor out code shared by several programs (and especially by triggers) into a separate stored program.

In other words, one database account is designated as the owner of the PL/SQL programs, and other users who need them are granted EXECUTE privileges. Although this practice is very good for maintainability, it is unlikely to yield real memory savings.

What's more, a separate instantiation of the object is created for each caller, at an additional cost of several kilobytes per session. Of course, this becomes significant only with a very large number of users.

There is another problem with this well-established practice, which shows up when many users run the same PL/SQL program simultaneously: calling shared code requires taking and releasing library cache locks, which can lead to delays.

In such cases code duplication may actually be preferable, because it avoids unnecessary locking and reduces the risk of performance degradation.

But let’s go back to the first rule concerning binding variables.

Binding variables

In the Oracle environment, a bind variable is an input placeholder in a statement whose value is supplied by the calling environment. Bind variables play a particularly important role in sharing SQL statements, whatever the source of the statement: PL/SQL, Java, SQL*Plus, or OCI.

Bind variables simplify application scaling, help prevent SQL injection, and facilitate statement sharing.

For two statements to be considered identical, the bind variables used in them must match by name, datatype, and maximum length. For example, the following two statements are not considered identical:

SELECT col FROM tab1 WHERE col = :bind1;
SELECT col FROM tab1 WHERE col = :bind_1;

However, this requirement applies only to the statement text as seen by the SQL engine. As mentioned earlier, PL/SQL reformulates static SQL statements before SQL ever sees them. Example:

FUNCTION plsql_bookcount (author IN VARCHAR2)
   RETURN NUMBER
IS
   title_pattern VARCHAR2(10) := '%PL/SQL%';
   l_count NUMBER;
BEGIN
   SELECT COUNT(*) INTO l_count
     FROM books
    WHERE title LIKE title_pattern
      AND author = plsql_bookcount.author;
   RETURN l_count;
END;

After running plsql_bookcount, the V$SQLAREA view in Oracle 11g shows that PL/SQL reformulated the query as follows:

SELECT COUNT(*) FROM BOOKS WHERE TITLE LIKE :B2 AND AUTHOR = :B1

The author parameter and the local variable title_pattern have been replaced by the bind variables :B1 and :B2. So in static SQL you do not have to worry about matching bind variable names; PL/SQL replaces each variable reference with a generated bind variable name.

PL/SQL's automatic binding applies to variables used in the WHERE and VALUES clauses of static INSERT, UPDATE, MERGE, DELETE and, of course, SELECT statements.

Further experiments showed that changing the maximum length of the PL/SQL variable does not create an additional statement in the SQL area, but changing the variable's datatype does.

But don't take my word for it; if you have the necessary privileges, run your own experiments and check whether SQL statements are being shared as you expect. Take a look at the V$SQLAREA view. The result of the query for the code above:

SQL> SELECT executions, sql_text
  2    FROM v$sqlarea
  3   WHERE sql_text LIKE 'SELECT COUNT(*) FROM BOOKS%';

EXECUTIONS SQL_TEXT
---------- --------------------------------------------------
         1 SELECT COUNT(*) FROM BOOKS WHERE TITLE LIKE :B2
           AND AUTHOR = :B1

If PL/SQL is so smart, do you still need to worry about bind variables? Don't relax just yet: although PL/SQL automatically binds variables in static SQL, it does not do so in dynamic SQL, and careless programming can easily produce statements built from literals. Example:

FUNCTION count_recent_records (tablename_in IN VARCHAR2,
                               since_in     IN VARCHAR2)
   RETURN PLS_INTEGER
AS
   l_count PLS_INTEGER;
BEGIN
   EXECUTE IMMEDIATE
         'SELECT COUNT(*) FROM '
      || DBMS_ASSERT.SIMPLE_SQL_NAME(tablename_in)
      || ' WHERE lastupdate > TO_DATE('
      || DBMS_ASSERT.ENQUOTE_LITERAL(since_in)
      || ', ''YYYYMMDD'')'
      INTO l_count;
   RETURN l_count;
END;

When it is executed, commands of the following kind are dynamically built:

SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090315', 'YYYYMMDD')

Repeated calls with different since_in arguments may generate a large number of instructions that are unlikely to be shared:

SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090105','YYYYMMDD')
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20080704', 'YYYYMMDD')
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090101', 'YYYYMMDD')

Naturally, this leads to very wasteful use of memory and other server resources.

USING DBMS_ASSERT TO FIGHT CODE INJECTION

What are those DBMS_ASSERT calls doing in the bind variable example? Dynamic SQL that incorporates data entered directly by the user should never be executed without validation. Calling DBMS_ASSERT helps ensure that the code gets exactly the kind of data it expects.

If you try to call the count_recent_records function with a "strange" table name such as "books where 1=1;--", DBMS_ASSERT raises an exception and stops the program before it can do any harm. DBMS_ASSERT.SIMPLE_SQL_NAME ensures that the input meets the criteria for a valid SQL name.

DBMS_ASSERT.ENQUOTE_LITERAL encloses the input in quotes and verifies that it does not contain embedded quotes. A full description of DBMS_ASSERT can be found in the Oracle PL/SQL Packages and Types Reference.

If you rewrite the same function using a bind variable, you get the following:

FUNCTION count_recent_records (tablename_in IN VARCHAR2,
                               since_in     IN VARCHAR2)
   RETURN PLS_INTEGER
AS
   count_l PLS_INTEGER;
BEGIN
   EXECUTE IMMEDIATE
         'SELECT COUNT(*) FROM '
      || DBMS_ASSERT.SIMPLE_SQL_NAME(tablename_in)
      || ' WHERE lastupdate > :thedate'
      INTO count_l
      USING TO_DATE(since_in, 'YYYYMMDD');
   RETURN count_l;
END;

The SQL engine will receive the following statement:

SELECT COUNT(*) FROM tabname WHERE lastupdate > :thedate

The second version is not only simpler and easier to understand, it also performs significantly better on repeated calls with the same tablename_in value and different since_in values.

Oracle also supports the CURSOR_SHARING initialization parameter, which can provide some relief for applications with large amounts of code that does not use bind variables. By setting this parameter to FORCE or SIMILAR, you ask the database to replace SQL literals (all or some of them) with bind variables, avoiding the cost of repeated parsing. Unfortunately, this is one of those features that works better in theory than in practice.

On the other hand, if you take care to use meaningful bind variables in your dynamic SQL, your efforts will be rewarded at runtime (just remember to leave CURSOR_SHARING at its default value of EXACT).

Even if you manage to achieve some performance improvement with CURSOR_SHARING, treat it as a halfway measure. In terms of efficiency it cannot compare with real bind variables, and it can produce several unexpected and unwanted side effects.

If you have to use this feature because of deficiencies in the software (often third-party software), do so only until you can modify the code to use real bind variables. Note also that the setting can be enabled at the session level with a LOGON trigger.
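A minimal sketch of such a trigger (the trigger name and the ON SCHEMA scope are illustrative only; your DBA may prefer ON DATABASE or a different condition):

CREATE OR REPLACE TRIGGER force_cursor_sharing_trg
   AFTER LOGON ON SCHEMA
BEGIN
   EXECUTE IMMEDIATE 'ALTER SESSION SET cursor_sharing = FORCE';
END;
/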


Packages and efficient memory usage

When the byte code of a stored PL/SQL program is loaded, the entire program is read into memory. This applies not only to procedures and functions but also to packages: it is impossible to load only part of a package. When any element of a package is referenced, even a single variable, all of the package's compiled code is loaded into the library cache.

Therefore, organizing your code into a smaller number of larger packages consumes less memory (and disk space) than using many small packages. The logical grouping of package elements is thus useful not only architecturally but also for system performance.

Since Oracle reads the entire package into memory, group into a package only elements that are functionally related, that is, elements that are very likely to be called within the same session.

Large collections in PL/SQL

Sharing objects is a great solution, but it is not possible for everything in a program. Even if several users run the same program belonging to the same Oracle schema, each session has its own memory area holding data specific to that session and call: the values of local and package variables, constants, and cursors.

It makes no sense to try to share this session-specific data. The most typical problems arise when working with collections (collections are described in detail in a separate article). Suppose a PL/SQL associative array is declared in a program as follows:

DECLARE
   TYPE number_tab_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   number_tab number_tab_t;
   empty_tab  number_tab_t;

A large number of elements are included in an array:

FOR i IN 1 .. 100000
LOOP
   number_tab(i) := i;
END LOOP;

All these elements must be stored somewhere. Following the rules described above, memory for the array is allocated in the user global area (UGA) if it is declared at the package level, or in the call global area (CGA) if it belongs to an anonymous block, procedure, or top-level function. Either way, a large collection requires a very large amount of memory.

The question is how to free this memory when you have finished working with the collection.

You need to execute one of two commands:

number_tab.DELETE;

or

number_tab := empty_tab;

In either case, Oracle puts the freed memory on its free lists: memory used for a package-level variable is returned to the session's dynamic state pool, and memory used for a call-level variable is returned to the CGA.

The same happens when a collection variable goes out of scope; for example, if the collection is declared in a procedure, Oracle releases its memory as soon as the procedure finishes. In any case, such memory does not become available to other sessions, nor to the current session if it needs memory from the CGA region.

If subsequent DML operations need to, say, sort a large amount of data, the application may require a huge amount of memory; only after the session ends will the memory be completely freed and returned to the parent dynamic pool (heap).

It should be noted that a virtual memory management system has no difficulty handling a large swap file, especially if the process keeps a large amount of inactive virtual memory in its address space.

Such inactive memory occupies only paging space on disk, not real memory. Still, it may be undesirable to fill up the paging space, so it is better if Oracle frees the memory. For such cases there is a special "garbage collection" procedure with a very simple syntax:

DBMS_SESSION.FREE_UNUSED_USER_MEMORY;

This built-in procedure finds most of the UGA memory no longer used by program variables and returns it to the parent heap: to the PGA for a dedicated server, or to the SGA for a shared server.
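As a rough sketch of how the two steps fit together (assuming number_tab is a package-level collection like the one declared earlier):

BEGIN
   number_tab.DELETE;                       -- mark the collection's memory as unused
   DBMS_SESSION.FREE_UNUSED_USER_MEMORY;    -- return it to the parent heap
END;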

I have thoroughly tested the release of memory occupied by collections in different situations, in particular with associative arrays and nested tables, for dedicated and shared servers, and for anonymous-block and package-level data. The conclusions were as follows:

  • Assigning NULL to a nested table or VARRAY is not enough to free its memory. Either call the collection's DELETE method, assign it another empty but initialized collection, or let it go out of scope.
  • To free memory and return it to the parent heap, call the DBMS_SESSION.FREE_UNUSED_USER_MEMORY procedure when your program has filled one or more large PL/SQL tables, marked them as unused, and is unlikely to need large blocks of memory for similar operations again.
  • Out-of-memory errors usually occur more often in shared server mode than in dedicated server mode, because the UGA is allocated from the SGA, which has a limited size. As noted in "What to do when there is a lack of memory" below, this can result in an ORA-04031 error.
  • In shared server mode, it was not possible to free the memory occupied by PL/SQL tables unless the table was declared at the package level.

In practice, the amount of memory occupied by a collection of NUMBER elements does not depend on whether the elements contain NULLs or, say, 38-digit numbers. For VARCHAR2 values declared with a length greater than 30 characters, however, Oracle appears to allocate memory dynamically.

When filling an associative array in dedicated server mode, an array holding a million NUMBER values occupies about 38 MB; even if the elements are of type BOOLEAN, Oracle9i uses almost 15 MB for it. Multiply that by, say, 100 users and the total is huge, especially if you would rather not page memory out to disk for performance reasons.

To find out how much UGA and PGA memory the current session is using, run the following query:

SELECT n.name, ROUND(m.value/1024) kbytes
  FROM V$STATNAME n, V$MYSTAT m
 WHERE n.statistic# = m.statistic#
   AND n.name LIKE 'session%memory%';
(Default privileges are not sufficient to read the views in this example.) The query shows the current and maximum memory usage of the session.

If you want to free memory used by package-level collections without ending the session, call one of two built-in procedures:

  • DBMS_SESSION.RESET_PACKAGE – frees all memory allocated to package state. As a result, all package variables revert to their default values. For packages, this built-in procedure does more than FREE_UNUSED_USER_MEMORY, because it does not care whether the memory is in use or not.
  • DBMS_SESSION.MODIFY_PACKAGE_STATE (flags IN PLS_INTEGER) – the flags parameter accepts one of two constants: DBMS_SESSION.FREE_ALL_RESOURCES or DBMS_SESSION.REINITIALIZE. The first has the same effect as RESET_PACKAGE. The second resets state variables to their default values but does not release or re-instantiate the package data from scratch; it also closes open cursors programmatically without flushing the cursor cache. If that is acceptable in your situation, use the second constant, because it is faster than a full package reset (see the call sketched after this list).
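The second call might look like this (a minimal sketch; REINITIALIZE keeps the cursor cache, as described above):

BEGIN
   DBMS_SESSION.MODIFY_PACKAGE_STATE (DBMS_SESSION.REINITIALIZE);
END;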

BULK COLLECT…LIMIT operations

BULK operations improve data-processing efficiency, but you must make sure that memory usage remains moderate and that collections do not grow too large.

By default, BULK COLLECT loads all rows into the collection. With large data volumes the collection becomes too big; in such cases the LIMIT clause comes to the rescue.

Testing showed that, as expected, it reduces memory consumption, and it also turned out to speed up the program. The following example uses a test table with a million rows.

To obtain reliable numbers for comparison, the program was first run to pre-fill the cache and then run again after reconnecting to the database. The plsql_memory package is used to display memory usage information (see plsql_memory.pkg on the book site):

DECLARE
   -- Collection types for the fetched columns
   TYPE numtab  IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   TYPE nametab IS TABLE OF VARCHAR2(4000) INDEX BY PLS_INTEGER;
   TYPE tstab   IS TABLE OF TIMESTAMP INDEX BY PLS_INTEGER;

   CURSOR test_c IS
      SELECT hi_card_nbr, hi_card_str, hi_card_ts
        FROM data_test;

   nbrs    numtab;
   txt     nametab;
   tstamps tstab;
   counter NUMBER := 0;
   strt    NUMBER;
   fnsh    NUMBER;
BEGIN
   plsql_memory.start_analysis;      -- initialize memory usage reporting
   strt := DBMS_UTILITY.GET_TIME;    -- save the start time

   OPEN test_c;
   LOOP
      FETCH test_c BULK COLLECT INTO nbrs, txt, tstamps LIMIT 10000;
      EXIT WHEN nbrs.COUNT = 0;
      FOR i IN 1 .. nbrs.COUNT
      LOOP
         counter := counter + i;     -- process the data
      END LOOP;
   END LOOP;

   plsql_memory.show_memory_usage;
   CLOSE test_c;

   fnsh := DBMS_UTILITY.GET_TIME;
   -- convert hundredths of a second to milliseconds
   DBMS_OUTPUT.PUT_LINE('Run time = ' || (fnsh - strt) * 10 || ' ms');
END;
/

The results that I received:

  • Change in UGA memory: 394272 (Current = 2459840)
  • Change in PGA memory: 1638400 (Current = 5807624)
  • Run time = 1530 ms

We can see that with a limit of 10,000 rows, PGA memory usage grew by 1,638,400 bytes. When the information is displayed again after the PL/SQL block has finished, most of this memory (though not all of it) has been freed:

  • EXEC plsql_memory.show_memory_usage;
  • Change in UGA memory: 0 (Current = 2394352)
  • Change in PGA memory: -458752 (Current = 3907080)

The test was then repeated without LIMIT, so that all rows of the table were loaded at once:

  • Change in UGA memory: 0 (Current = 1366000)
  • Change in PGA memory: 18153472 (Current = 22519304)

As you can see, far more memory is used without LIMIT. So I strongly recommend including LIMIT in the production version of your applications.

Saving the state of objects

Oracle normally keeps the values of package-level constants and variables, and the state of package-level cursors, in the UGA until the session ends. This is not the case for variables declared in the declaration section of an individual module: because their scope is limited to the module, the memory they occupy is freed when the module finishes executing, and from then on they no longer exist.

Apart from disconnecting from the server, the package state may be lost for several other reasons:

  • The program has been recompiled or has otherwise become invalid from the database's point of view;
  • The built-in DBMS_SESSION.RESET_PACKAGE procedure has been executed in the current session;
  • The SERIALLY_REUSABLE compiler directive has been added to the program code, telling Oracle to keep the state only for the duration of the call rather than for the whole session;
  • The program is used through a web gateway in the default mode, in which session state is not preserved between client calls;
  • An error such as ORA-04069 is returned to the client session (a library with a dependent table cannot be dropped or replaced).

Within these limitations, package data structures can act as globals in the PL/SQL environment; in other words, PL/SQL programs executing in the same session can use them to exchange data.

From an application architecture perspective, global data falls into two categories: public and private.

  • Public data. A data structure declared in the package specification is a public global. It can be accessed by any program or user with the EXECUTE privilege on the package, and programs can even assign arbitrary values to package variables not declared as constants. As we know, public global data is a potential source of many bugs: it is convenient to declare but, used carelessly, it produces unstructured code with dangerous side effects (see the sketch after this list).
  • Private data. Private global data structures cause fewer problems. They do not appear in the package specification and cannot be referenced from outside; they are intended for use only inside the package, by its own elements.
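A minimal sketch of the difference (the package name and variables are illustrative only):

CREATE OR REPLACE PACKAGE app_state
IS
   g_public_counter PLS_INTEGER := 0;   -- public: visible to anyone with EXECUTE on the package
   PROCEDURE increment;
END app_state;
/
CREATE OR REPLACE PACKAGE BODY app_state
IS
   g_private_counter PLS_INTEGER := 0;  -- private: visible only inside this body
   PROCEDURE increment
   IS
   BEGIN
      g_public_counter  := g_public_counter + 1;
      g_private_counter := g_private_counter + 1;
   END increment;
END app_state;
/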

Package data is global only within a single session or connection; it is not shared between sessions. If you need to share data between sessions, there are other mechanisms for that: the DBMS_PIPE package, Oracle Advanced Queuing, the UTL_TCP package... not to mention database tables!

What to do when there is a lack of memory

Suppose you are working with a database and everything is going fine, with lots of SQL and PL/SQL statements executing, when suddenly, like a bolt from the blue: ORA-04031: unable to allocate n bytes of shared memory.

This error is more common in shared server mode, with its increased consumption of shared memory for UGAs. In dedicated server mode the database can usually obtain more virtual memory from the operating system, but in that case the similar error ORA-04030 may occur.

Roughly speaking, with a dedicated server the limit is about 4 GB of memory per session, while with a shared server you can manually assign any desired size to the pool.

There are several ways to deal with this situation. If you are an application developer, try to reduce the use of shared memory. Some possible actions, roughly in order of application:

  1. Make changes to the code to ensure that as many SQL statements as possible are shared.
  2. Reduce the size or number of collections stored in memory.
  3. Reduce the size of the application code in memory.
  4. Change database-level settings and/or buy additional memory for the server.

Items 1 and 2 have already been covered, so let's consider item 3: how do you estimate how much space your source code occupies once it is loaded into memory, and how do you reduce it?

Before running a PL/SQL program, the database must load all of its byte code into memory. To find out how much space a program object occupies in the shared pool, ask your database administrator to run the built-in DBMS_SHARED_POOL.SIZES procedure, which lists all objects larger than a specified size.

The following example shows the memory consumed by objects in the shared pool immediately after the database is started:

SQL> SET SERVEROUTPUT ON SIZE 1000000
SQL> EXEC DBMS_SHARED_POOL.sizes(minsize => 125)

SIZE(K) KEPT NAME
------- ------ ---------------------------------------------------------------
433 SYS.STANDARD (PACKAGE)
364 SYS.DBMS_RCVMAN (PACKAGE BODY)
249 SYSMAN.MGMT_JOB_ENGINE (PACKAGE BODY)
224 SYS.DBMS_RCVMAN (PACKAGE)
221 SYS.DBMS_STATS_INTERNAL (PACKAGE)
220 SYS.DBMS_BACKUP_RESTORE (PACKAGE)
125 MERGE INTO cache_stats_1$ D USING (select * from table(dbms_sta)
ts_internal.format_cache_rows(CURSOR((select dataobj# o, st
atistic# stat, nvl(value, 0) val from gv$segstat where stat
istic# in (0, 3, 5) and obj# > 0 and inst_id = 1) union all
(select obj# o, 7 stat,nvl(sum(num_buf), 0) val from x$kcb
oqh x where inst_id = 1 group by obj#) order by o))) wh
(20B5C934,3478682418) (CURSOR)

The condition minsize => 125 means "show only objects of 125 KB or more". The output shows that the STANDARD package occupies the most memory (433 KB).

To get rid of ORA-04031 or ORA-04030 errors, knowing how much memory your programs use is necessary but not sufficient; you also need to know the size of the shared pool and how much of it is occupied by "recreatable" objects, that is, objects that can be aged out of memory and reloaded later if needed.

Some of this information is difficult to obtain from the database; you may need to dig into the mysterious X$ views. However, versions 9.2.0.5 and later automatically generate a heap dump in the USER_DUMP_DEST directory when an ORA-04031 error occurs. See what you can extract from it, or simply pass it to Oracle technical support.

Also try to find out whether the application contains a large amount of unshared code that could more logically be shared, because this can have a big impact on memory usage.

Natively compiled PL/SQL programs are turned into shared library files, but the database still allocates some memory to run them. A privileged user can use operating system tools (such as pmap on Solaris) to check how much memory they occupy outside the database.

Now let's turn to item 4: configuring the database or buying more memory. A competent database administrator knows how to configure the shared pool using the following parameters:

  • SHARED_POOL_SIZE – bytes reserved for the shared pool (a query to check its free space is sketched after this list).
  • DB_CACHE_SIZE – bytes reserved for the buffer cache that holds data blocks read from the database (you may have to reduce this value to increase the size of the shared pool).
  • LARGE_POOL_SIZE – bytes reserved for an optional area that holds the UGAs of shared server connections (this prevents the variable UGA portion from competing for the shared pool).
  • JAVA_POOL_SIZE – bytes used by the Java memory manager.
  • STREAMS_POOL_SIZE – bytes used by Oracle Streams technology.
  • SGA_TARGET – the size of the SGA from which the database automatically allocates the caches and pools listed above (a non-zero number of bytes).
  • PGA_AGGREGATE_TARGET – the total amount of memory used by all server processes of an instance. It is usually set to the amount of server memory available to the database minus the SGA size.
  • PGA_AGGREGATE_LIMIT (introduced in Oracle Database 12c) – a hard limit on the aggregate PGA memory consumed by an instance. If the limit is exceeded, calls of the sessions using the most memory are aborted; parallel queries are treated as a unit. If total PGA usage still exceeds the limit, the sessions using the most memory are terminated. These actions do not apply to SYS processes or critical background processes.
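To see how much free space remains in the shared pool after adjusting these parameters, a simple check (assuming you can query V$SGASTAT) is:

SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mbytes
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name = 'free memory';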

You can also ask your administrator to pin PL/SQL programs, sequences, tables, or cursors in memory using the DBMS_SHARED_POOL.KEEP procedure.

For example, the following block asks the database to pin the STANDARD package in memory:

BEGIN
DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
END;

Pinning is especially useful for large programs that run relatively rarely. Without pinning, the partially compiled code is very likely to be aged out of the pool during long periods of inactivity, and loading it again may push many smaller objects out of the pool (with corresponding performance losses).

The following advice may seem obvious, but if only a small subset of users or applications hits ORA-04031 errors, try switching the offenders to dedicated server mode.

 