Global temporary tables are a great way to store session-level data in DB2. However, it is important to understand that these are not 'in-memory' tables, and their performance is much lower than that of arrays (DB2 9.5) or collections (DB2 9.7).
Another very important point is to create indexes on global temporary tables. For example:
DECLARE GLOBAL TEMPORARY TABLE session.lookup_data ( row_id INTEGER, value VARCHAR(4000) ) NOT LOGGED ON COMMIT PRESERVE ROWS;
CREATE UNIQUE INDEX session.lookup_data_idx ON session.lookup_data ( row_id );
An index on a global temporary table can provide a huge performance improvement.
Another option is NOT LOGGED on the table, which also improves performance, though not drastically.
From a practical perspective, global temporary tables are useful when you need to store millions of rows at the session level.
If you only need to store hundreds of rows, arrays are much faster.
As a practical case, I recently switched from session-level tables to arrays, and data processing throughput jumped from 25 records/second to 550 records/second.
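For comparison, here is a minimal sketch of the array-based approach in DB2 9.7 SQL PL; the type name, procedure name and values are illustrative, not the objects from the actual project:

-- Associative array type keyed by INTEGER (DB2 9.7); names are illustrative
CREATE TYPE lookup_array AS VARCHAR(4000) ARRAY[INTEGER];

CREATE PROCEDURE load_lookup()
LANGUAGE SQL
BEGIN
  DECLARE lookup lookup_array;
  -- assignments stay in session memory, so there is no table or index I/O
  SET lookup[1] = 'first value';
  SET lookup[2] = 'second value';
  -- read entries back with lookup[n] wherever session.lookup_data would
  -- otherwise be probed or joined
END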
tidbit:Oracle:Exadata Wait Events (cell related)
Exadata Wait Events:
NAME                                WAIT_CLASS
----------------------------------  ----------
cell list of blocks physical read   User I/O
cell manager cancel work request    Other
cell manager closing cell           System I/O
cell manager discovering disks      System I/O
cell manager opening cell           System I/O
cell multiblock physical read       User I/O
cell single block physical read     User I/O
cell smart file creation            User I/O
cell smart flash unkeep             Other
cell smart incremental backup       System I/O
cell smart index scan               User I/O
cell smart restore from backup      System I/O
cell smart table scan               User I/O
cell statistics gather              User I/O
cell worker idle                    Idle
cell worker online completion       Other
cell worker retry                   Other
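The list above is simply V$EVENT_NAME filtered on the cell-related events; a minimal query sketch to reproduce it on your own system:

SELECT name, wait_class
  FROM v$event_name
 WHERE name LIKE 'cell%'
 ORDER BY name;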
Labels: Database, Exadata, Oracle, Performance
tidbit:Oracle:Exadata System Statistics (cell related)
Exadata cell statistics:
STATISTIC# NAME CLASS
---------- ---------------------------------------------------------------- ----------
51 cell physical IO interconnect bytes 64
224 cell physical IO bytes saved during optimized file creation 64
225 cell physical IO bytes saved during optimized RMAN file restore 64
226 cell physical IO bytes eligible for predicate offload 64
227 cell physical IO bytes saved by storage index 8
228 cell smart IO session cache lookups 64
229 cell smart IO session cache hits 64
230 cell smart IO session cache soft misses 64
231 cell smart IO session cache hard misses 64
232 cell smart IO session cache hwm 64
233 cell num smart IO sessions in rdbms block IO due to user 64
234 cell num smart IO sessions in rdbms block IO due to big payload 64
235 cell num smart IO sessions using passthru mode due to user 64
236 cell num smart IO sessions using passthru mode due to cellsrv 64
237 cell num smart IO sessions using passthru mode due to timezone 64
238 cell num smart file creation sessions using rdbms block IO mode 64
239 cell physical IO interconnect bytes returned by smart scan 64
240 cell session smart scan efficiency 64
359 cell scans 64
360 cell blocks processed by cache layer 128
361 cell blocks processed by txn layer 128
362 cell blocks processed by data layer 128
363 cell blocks processed by index layer 128
364 cell commit cache queries 64
365 cell transactions found in commit cache 64
366 cell blocks helped by commit cache 64
367 cell blocks helped by minscn optimization 64
368 cell blocks skipped due to chained rows 64
369 cell simulated physical IO bytes eligible for predicate offload 192
370 cell simulated physical IO bytes returned by predicate offload 192
371 cell simulated session smart scan efficiency 192
372 cell CUs sent uncompressed 64
373 cell CUs sent compressed 64
374 cell CUs sent head piece 64
375 cell CUs processed for uncompressed 64
376 cell CUs processed for compressed 64
377 cell IO uncompressed bytes 64
394 cell index scans 64
571 cell flash cache read hits 8
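The list is just V$STATNAME filtered on the cell statistics; a minimal query sketch to regenerate it (statistic numbers vary by version and patch level):

SELECT statistic#, name, class
  FROM v$statname
 WHERE name LIKE 'cell%'
 ORDER BY statistic#;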
Labels: Database, Exadata, Oracle, Performance
tidbit:DB2:drop schema
A simple way to drop a DB2 schema and everything in it (here the schema DG): the DROP TABLE clears any error table left over from a previous attempt, and ADMIN_DROP_SCHEMA then drops the schema and all of its objects, recording any failures in ERRORSCHEMA.ERRORTABLE:
db2 "DROP TABLE ERRORSCHEMA.ERRORTABLE"
db2 "CALL SYSPROC.ADMIN_DROP_SCHEMA('DG', NULL, 'ERRORSCHEMA', 'ERRORTABLE')"
Oracle:Windows:Database Upgrade Fall-back
Recently I was approached by a major customer to assist in creating a database fall-back procedure for a Windows-based Oracle 10g RAC system that uses OCFS to store database data.
The objective was to upgrade the clusterware and the database from Oracle 10.2.0.2 to 10.2.0.4 and, on top of that, apply patch 38. Both the ORACLE_HOMEs and the database itself were to be upgraded.
In a large-company setting, I would suggest creating a separate RAC cluster, setting up a standby database there, and during a brief outage copying over the control files and redo logs, starting up the database, and being done with it. This way the fall-back is very simple (start using the old system again) and most of the work (installing and patching software) can be done without any production impact.
The problem was that the customer engaged us only after their own fall-back test had failed completely, and the entire outage plan was based on upgrading the production system in place (software and database).
The major issue was that the Oracle-provided fall-back failed: the downgrade did not work as expected, and nothing was said about properly downgrading the ORACLE_HOMEs.
In the end, we arrived at the fall-back procedure below (a sketch of the key backup commands follows the list):
1) Shut down all database-related Windows services
2) Shut down clusterware-related Windows services
3) Kill ons.exe processes - they are sometimes left hanging
4) Take a system state backup (IMPORTANT)
5) Back up the ocfs and oraclefence drivers
6) Back up the database and clusterware Oracle homes
7) Back up the Oracle Inventory
8) Back up the control files and redo logs
9) Back up the OCR using ocrconfig -export <exp_ocr_file.bkp>
10) Back up the voting disk using 'ocopy <voting_file> <exp_voting_file.bkp>'
11) Back up the config files (init/tnsnames/listener/sqlnet), just in case
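A minimal sketch of steps 1, 2, 9 and 10 from an elevated command prompt; the service names, voting disk name and backup paths are illustrative and will differ per installation:

REM Stop the database and clusterware services (example names for one node)
net stop OracleServiceORCL1
net stop OracleCSService
net stop OracleCRService
net stop OracleEVMService
REM Export the OCR and copy the voting disk to a safe location
ocrconfig -export C:\fallback\ocr_before_upgrade.bkp
ocopy \\.\votedsk1 C:\fallback\votedsk1.bkp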
During the fall-back exercise, we were able to:
1) Restore the system state backup in 3 minutes
2) Restore the drivers in 3 minutes (manual copy)
3) Restore the Oracle homes and Inventory using a rename (1 minute)
4) Restore the OCR using "ocrconfig -import" (3 minutes); there was no need to restore the voting disk
5) Restore the database from RMAN in 40-50 minutes
6) Restore the redo logs (50GB) in 15 minutes, in parallel with the database restore
7) Restore the control files in 1 minute
8) Recover and start up the database in 5 minutes
The fall-back was 100% successful.
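For reference, a minimal sketch of the restore side; the file name is illustrative, and the RMAN portion assumes the control files and redo logs have already been copied back into place:

REM Restore the OCR from the export taken before the upgrade
ocrconfig -import C:\fallback\ocr_before_upgrade.bkp

REM Restore and recover the database from the RMAN backup
rman target /
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;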