Database performance
After several years of product usage, the database can grow considerably, which can affect performance and some important functions of the product. Bravura Security Fabric constantly updates its backend database with historical data, some of which may be useful for audits, but it does not offer any automatic way to remove old data from the database by default.
In addition to hardware improvements, the following maintenance tasks will also help improve performance. Such maintenance tasks must be performed carefully, following your organization's mandatory application uptime and data retention policies.
It is highly recommended that these tasks be performed by a database administrator.
There are four ways to reduce the data that Bravura Security Fabric processes and keep performance from degrading over time:

- Only list and load the objects that the Bravura Security Fabric solution needs from the target systems; use filters to reduce the data collected to the absolute minimum.
- Partition tables in the database, placing older data on slower, cheaper storage and newer data on faster, more expensive storage. Plan ahead if you intend to do this; see dbarc's partitioning options below.
- Periodically "hide" data from the main table views in the database; see dbarc's hide options below. This does not reduce the database size, but it speeds up operations that run on table views rather than on the full tables.
- Periodically trim the data in the databases; see dbarc's delete options below.

To reduce the amount of unused space, a database "shrink" or "rebuild indexes" maintenance operation can be performed. This is best done during application low-use times or planned outages, and especially not while the auto discovery (psupdate) process is scheduled to run.
All of the above direct database manipulation must be applied to each database in an instance separately, because SQL run against one database in an instance does not automatically replicate between application nodes.
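Because the SQL is not replicated, the same maintenance script has to be run once per node. The following is a minimal sketch of that loop; the node hostnames, database name (idmsuite), and script name (maintenance.sql) are all placeholders, and the sqlcmd call is only echoed so you can review it first.

```shell
# Placeholder node hostnames; substitute your application nodes.
NODES="node1.example.com node2.example.com"
for node in $NODES; do
  # Build the sqlcmd invocation for this node's database.
  # -E uses integrated Windows authentication; -i names the script file.
  echo "sqlcmd -S $node -d idmsuite -E -i maintenance.sql"
  # Remove the echo wrapper to actually execute against each node.
done
```

Running the loop with echo first gives you a reviewable list of the exact commands before anything touches the databases.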
Partitioning and archiving the database
dbarc
Use the dbarc program to configure the range of data Bravura Security Fabric uses during normal operation. It can partition and archive data in the database; for example, you can divide data into partitions for 2007 and older, 2008, 2009, and 2010 and newer. This reduces the amount of data that Bravura Security Fabric queries regularly, which improves performance. You can also move chunks of data onto different storage devices with different performance characteristics.
The views created by dbarc can be used for exit traps and CGI programs when the relevant information is requested.
Reports always use the full table, with an option to set the date range to limit the information, rather than using views.
If the owner of the application objects is different from the database user used by the Database Service (iddb), then you must use dbarc to write the SQL to a file and execute it manually on the client.
Table partitions and views are grouped in categories:
| Group | Description |
|---|---|
| checkout | Privileged access checkout information |
| pwhistory | Password reset history |
| request | Workflow request information |
| rtaudit | Auditing information |
| sesslog | Report-related information |
| sessmon | Session monitoring information |
| userhistory | User and account definition information |
Caution
When dbarc.exe is executed for the first time, the process can impact the performance of Bravura Security Fabric and take several hours to complete. Subsequent executions take only a few minutes.
Caution
Before running any code, including dbarc without the -file option, check with Support that the SQL rendered by dbarc is correct for your needs; send support@bravurasecurity.com the following:
- The planned SQL statements, in a text file.
- The row counts resulting from the queries in the section above, as tab-separated text with headers.
In different versions of the product, dbarc may handle different tables; check the utility's help string to see which tables are supported in your version:
util\dbarc -h
The following is a list of tables to consider when deciding how many days Bravura Security Fabric will keep data:
All products:
sesslog - The largest group, because it retains data from all sessions into the product. In some cases this group can account for half of the used space in the database.
pwhistory - Check how many days are configured in the password policy rule "allow reuse of old passwords after N days".
userhistory
rtaudit
Bravura Identity:
request
Bravura Privilege:
checkout
sessmon
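Before deciding how many days of data to keep, it helps to measure how much each group's tables actually hold; row counts are also what Support asks for above. The sketch below writes a count query for the tables common to all products. The table names are an assumption (taken from the group names listed here); verify them against your schema before running, and the sqlcmd invocation is left commented out.

```shell
# Build a row-count query for the main historical tables.
# Table names are assumed to match the group names; verify them first.
cat > counts.sql <<'EOF'
SELECT 'sesslog' AS table_name, COUNT(*) AS row_count FROM sesslog UNION ALL
SELECT 'pwhistory', COUNT(*) FROM pwhistory UNION ALL
SELECT 'userhistory', COUNT(*) FROM userhistory UNION ALL
SELECT 'rtaudit', COUNT(*) FROM rtaudit;
EOF
# Run on the primary node; -s sets a tab separator, -W trims padding:
# sqlcmd -S <server> -d <database> -E -i counts.sql -s "	" -W -o counts.txt
```

The resulting counts.txt is tab-separated text with headers, in the shape Support requests.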
Test the results by running dbarc with its -file argument, along with the other arguments, for each data group that has to be handled first. For example, to see how dbarc will try to remove any event log data older than a year, run the following from a command prompt:
<Program Files path>\Bravura Security\Bravura Security Fabric\<instance>\util\dbarc -delete sesslog -deletemaxage 366 -file reduceSesslog.sql
Always back up the data before making any changes.
Before running the DELETEs, replace DELETE with SELECT * in the resulting queries. Save the results of the SELECTs to a file from which you can search for, and recover, the removed data if needed at a later date.
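The DELETE-to-SELECT replacement can be scripted instead of done by hand. The sketch below uses a stand-in statement for dbarc's rendered output (real statements will differ in their WHERE clauses and column names) and rewrites it with sed so the doomed rows can be reviewed and saved first.

```shell
# Stand-in sample for dbarc's rendered output; real statements differ.
cat > reduceSesslog.sql <<'EOF'
DELETE FROM sesslog WHERE logtime < '20240101';
EOF
# Turn each DELETE into a SELECT * so the affected rows can be reviewed
# and exported before the destructive version is executed.
sed 's/^DELETE /SELECT * /' reduceSesslog.sql > verifySesslog.sql
cat verifySesslog.sql
```

Run the verify script first, save its output, and only then execute the original DELETE file.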
dbarc.exe -listgroups
dbarc.exe -list <group>
dbarc.exe -split <group> -splitinterval <days> -splitadvance <num>
dbarc.exe -drop <group> [-dropdate <YYYYMMDD>]|[-dropmaxage <days>]
dbarc.exe -delete <group> -deletemaxage <days>
dbarc.exe -hideolder <group> [-hidedate <YYYYMMDD|none>]|[-hidemaxage <days|none>]
dbarc.exe -extract <group> [-extractdir <directory>]
dbarc.exe -extractreset <group> -extractresettime <YYYYMMDDHHMMSS>
dbarc.exe -file <filename>
| Argument | Description |
|---|---|
| -delete <group> | Delete partition group content older than a specified age of data (-deletemaxage). |
| -deletemaxage <days> | Delete content from the partition group older than this number of days from the current date. |
| -drop <group> | Drop partitions in the specified group either older than a specified date (-dropdate) or older than a specified age of data (-dropmaxage). |
| -dropdate <YYYYMMDD> | Date value for dropping old data partitions. |
| -dropmaxage <days> | Drop partitions older than this number of days from the current date. |
| -extract <group> | Extract data not previously extracted from the specified group to CSV files. |
| -extractdir <directory> | Extract data to this directory. The default is \Logs\<instance>\archive_extract\. |
| -extractreset <group> | Clear the archivetime previously set by "extract". |
| -extractresettime <YYYYMMDDHHMMSS> | Earliest archivetime value previously set by "extract" to reset. |
| -file <filename> | Write commands to the specified file instead of running them. |
| -force | Allow operations that may be dangerous or time-consuming to be executed. |
| -hidedate <YYYYMMDD> | Date value for hiding old data from the full tables and placing it into views. The date can be none to disable hiding and restore full table content to the views. |
| -hidemaxage <days> | Hide partitions older than this number of days. Can be none to disable hiding. |
| -hideolder <group> | Hide records older than the specified date from the core product for the selected group. |
| -list <group> | List partitions in the specified group. |
| -listgroups | Display a list of valid partition groups. |
| -split <group> | Split partitions so that sufficient future partitions are reserved. This is intended for scheduled tasks. |
| -splitadvance <N> | Number of intervals into the future to create partitions. |
| -splitinterval <days> | Number of days between newly-split partitions. |
When splitting, dropping, hiding, extracting or resetting extract time for partitions in groups, you can specify all to have the operation affect all groups.
To create 30 partitions for the request group, with each partition 10 days apart from now:
dbarc -split request -splitadvance 30 -splitinterval 10
To create 6 partitions for all groups, with each partition 183 days apart from now:
dbarc -split all -splitadvance 6 -splitinterval 183
To cache the sesslog group into a view for information newer than June 15, 2013:
dbarc -hideolder sesslog -hidedate 20130615
To drop the pwhistory group partition named and dated '20101215':
dbarc -drop pwhistory -dropdate 20101215
To drop all group partitions older than 365 days:
dbarc -drop all -dropmaxage 365
To extract checkout group data into CSV files:
dbarc -extract checkout
To clear all groups of archivetime previously set by "extract" on June 15, 2013 at 01:00:00 or later:
dbarc -extractreset all -extractresettime 20130615010000
Important
When a large number of rows needs to be removed (for example, the first time sesslog group data is removed), the process can take hours and affect access to the UI, depending on the amount of data in the database and the index fragmentation. In a production environment, consider scheduling a downtime window for this.
Before performing these operations, ensure there is enough free space on the partitions where MSSQL maintains the data files. At least 2.5 times the size of the data files should be free before starting the process.
Ask your DBA to rebuild indexes periodically, especially for the tables in the sesslog group. These tables are the most heavily used by the product, and data is added to them every minute.
Re-index
On large databases, index fragmentation can cause stored procedures that use the affected tables to run 2-3 times slower or more, which may not be visible in most PerfSproc log entries. Individual stored procedures may be sub-millisecond either way; however, across thousands of stored procedure calls, especially when running auto discovery or rendering pages in solutions with many resources (such as PDRs, user classes, and target systems), tiny differences slowly add up.
Some tables in the backend database are rebuilt from scratch during psupdate, fragmenting their indexes, and with large tables that leads to reduced performance. You can improve performance by:
- Analyze the PerfSproc entries in idmsuite.log. This should reveal which stored procedures take the most time; address those, and the large tables they use, first.
- Remove data according to company data retention policies:
  - Trim unnecessary tables; for example, the cookie table.
  - Back up and trim historical tables that the daily use of the system does not require; for example, sesslog_full, *_audit, and *.hst.
  - Back up and trim invalid data from the tables the system does use, especially those used in the stored procedures that take the most time; for example, account, acctattr, nosgroup, profile, userattr, reqbatch, and managegrp.
If those steps don't increase performance enough, then look at index maintenance.
Rebuilding all indexes is simple in itself; however, it takes time to get the parameter values just right and find the correct strategy regarding the frequency and time of day in relation with other product and database server scheduled processes. Consider the following:
- Only tables with millions of records will benefit from being index-optimized, resulting in a meaningful speed increase for long-running tasks such as psupdate, idtrack, and queue processing in services, or where they significantly impact user experience due to WebUI slowness on complex pages.
- Indexes on historical/audit tables, which tend to add entries at the end of the index, can do with a higher fill factor (70% to 90%).
- For indexes that receive new entries throughout the index, the fill factor should be lower (50% to 70%).
Experimentation with these settings can lead to a good balance between maintenance time and improved performance results.
Make sure that you do not reindex the database while psupdate, autores, idtrack, or other tasks are running.
If reindexing is forced to run with no locks, it will invalidate the execution plans that the database creates for the stored procedures used by those tasks, and often causes deadlock-like symptoms.
The following example shows how to identify fragmented indexes, determine which slow stored procedures cause the unwanted slowdown, which tables they use, and how large a percentage of the stored procedure execution those tables account for in the execution plan, and then rebuild only those indexes.

First, identify fragmented indexes:

SELECT OBJECT_NAME(OBJECT_ID), index_id, index_type_desc, index_level,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID(N'<database_name>'), NULL, NULL, NULL, 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC

Then fill query-fix-indexing.sql with the appropriate maintenance script; for example:

USE <database_name>;
ALTER INDEX ALL ON <table_name1> REBUILD WITH (FILLFACTOR=90);
ALTER INDEX ALL ON <table_name2> REBUILD WITH (FILLFACTOR=70);
Ensure that you have sqlcmd installed on the primary node.
The actual SQL script requires experimenting with the index rebuild settings to create a good balance of maintenance time versus performance after scripted maintenance is applied:
For tables with multiple indexes, rather than rebuilding ALL, individual indexes can be rebuilt (the ones that affect the specific performance being sought).
In a production environment, you will have to find a way to call sqlcmd or a similar Microsoft utility without spelling out the credentials on the command line.
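One common way to keep credentials off the command line is sqlcmd's -E flag, which uses integrated Windows authentication instead of -U/-P. The sketch below only assembles and echoes the command; the server and database names are placeholders.

```shell
# Placeholders: substitute your database server\instance and database name.
SERVER='dbhost\INSTANCE'
DATABASE=idmsuite
# -E: integrated Windows authentication, so no password appears on the
# command line or in process listings; -i: the prepared maintenance script.
CMD="sqlcmd -S $SERVER -d $DATABASE -E -i query-fix-indexing.sql"
echo "$CMD"   # remove the echo wrapper to execute for real
```

This requires the account running the script to have the necessary database permissions; for SQL authentication, a protected wrapper script or configuration file is preferable to typing -U/-P interactively.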
If all this has been completed and performance is still an issue for specific stored procedures, contact support@bravurasecurity.com to ask whether it is possible to optimize the specific stored procedures that slow down binary execution. In your email, include the steps you followed to increase performance, and:
A log containing PerfSproc procedure entries for the binary runs that need optimization,
or
A verbose log for those binaries that did not have the PerfSproc option in the logging service.