Best practices and troubleshooting

Recommended configuration

For replicated instances, ensure that:

  • The External address of each node is configured to match the TLS certificate that the end user's browser sees.

    Those servers must be directly reachable from the end-user's browser, not only via the load balancer.

  • The CORS (cross-origin resource sharing) ALLOWED ORIGINS on the instance must cover the URLs of all nodes as the end-user's browser sees them, as sketched below.
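
For instance, if the end user's browser reaches two nodes at two hypothetical URLs, a minimal Python sketch of the coverage check could look like this (the URLs are assumptions, not defaults):

    # Minimal sketch: verify that every node URL, as the end user's browser
    # sees it, is covered by the CORS ALLOWED ORIGINS list.
    # All URLs below are hypothetical examples.
    node_urls = [
        "https://pam-node1.example.com",
        "https://pam-node2.example.com",
    ]
    allowed_origins = {
        "https://pam-node1.example.com",
        "https://pam-node2.example.com",
    }

    missing = [url for url in node_urls if url not in allowed_origins]
    if missing:
        print("Not covered by ALLOWED ORIGINS:", ", ".join(missing))
    else:
        print("All node URLs are covered by ALLOWED ORIGINS.")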

The file storage must allow the WebUI on every node to reach all session files, so that disaster recovery can happen fast, without having to move large amounts of data.

Recording write performance

The recommended storage setup becomes more complicated when nodes in different data centers need to reach the same storage.

At "write" time, the latencies and access delays between data centers need to be as small as possible. The smonc CGI that writes the files needs to serialize the writes, and a slow write can be interpreted by the operating system as a failed write even if it eventually succeeds.

The memory footprint and duration of execution of smonc can increase if it has to "buffer" the writing of the "frames" it receives from the disclosure plugins.

IIS can keep only a limited number of CGI threads active at any given time; when that limit is reached, recording "frames" can be dropped. Dropped frames tend to interrupt the recording, and with it, the recorded session.

A large number of concurrent sessions sending data to the same Bravura Privilege node can overwhelm the node's IIS with incoming calls. If this issue affects your instance, consider reducing the frame rate of the recording or the size of the captured "frames".
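
To gauge whether write latency to the session storage is a concern for a given node, a rough Python sketch such as the one below can time a series of small sequential writes (the storage path and the 64 KiB "frame" size are assumptions):

    # Rough sketch: time small sequential writes to the session storage location
    # to estimate per-"frame" write latency from this node.
    # The path and payload size are assumptions; adjust them to your environment.
    import os
    import time

    storage_path = r"\\storage\sessions\latency-test"   # hypothetical location
    os.makedirs(storage_path, exist_ok=True)

    payload = b"x" * 64 * 1024   # assumed frame-sized payload
    latencies = []
    for i in range(50):
        name = os.path.join(storage_path, f"frame_{i}.tmp")
        start = time.perf_counter()
        with open(name, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        latencies.append(time.perf_counter() - start)
        os.remove(name)

    print(f"avg {1000 * sum(latencies) / len(latencies):.1f} ms, "
          f"max {1000 * max(latencies):.1f} ms over {len(latencies)} writes")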

Session recording process

The following is a description of the recording process, as illustrated in Session monitoring architecture. This is useful when checking the application logs for issues.

  1. The end user (system administrator) triggers a disclosure plugin that is configured for session recording.

    • Native disclosures (except for view and copy) and Guacamole disclosures allow session recording.

    • Web application disclosures do not allow for session recording.

  2. The disclosure plugin calls into the application via the IIS CGI module and our application's cgi-bin\smonc.exe.

    If something goes wrong with the session recording, the session is interrupted. This includes:

    • The disclosure being cancelled by an authorized application user (such as a manager, IT admin, or site or application administrator removing the current session owner's access to a specific MSP)

    • Application errors, such as the operating system not allowing the session recording files to be written to the configured storage.

  3. While the session data is being written to the configured storage location, that server's session monitoring service (idsmpg) detects the new files and triggers smonprocessmeta to update the application's database with the metadata of the collected session details.

  4. Later, an Auditor (someone with session view, search, or export privileges) can export a "package" that can contain one or more captured data sets from one or more sessions.

    That package is processed with the smonsavemeta utility and saved on the server where the session data was originally recorded. This is done only from the WebUI; it cannot be done manually.

    The browser displays a link to download the resulting zip archive. The link can lead to another node, which is why the CORS configuration is recommended.
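
When checking the application logs for the steps above, a simple filter can pull out the entries written by the components involved; the sketch below is in Python and the log directory is a hypothetical placeholder:

    # Sketch: extract session-recording-related lines from the application logs.
    # The log directory is a hypothetical placeholder; point it at the
    # instance's actual log location.
    import glob
    import os

    log_dir = r"C:\instance\logs"   # hypothetical path
    components = ("smonc", "idsmpg", "smonprocessmeta", "smonsavemeta")

    for log_file in glob.glob(os.path.join(log_dir, "*.log")):
        with open(log_file, errors="replace") as f:
            for line in f:
                if any(name in line.lower() for name in components):
                    print(f"{os.path.basename(log_file)}: {line.rstrip()}")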

Session files storage

Do not configure the session data storage on the same drive or partition where the application is installed; this prevents the application from running out of space and going into DB COMMIT SUSPEND mode.

The captured text data (keyboard and clipboard data) is tiny in comparison with the screenshots and the packages (zip files) containing videos compiled from those screenshots.

The text data is stored only temporarily on disk; after smonprocessmeta runs, that data is removed and kept only in the application's backend database.

See Screen capture trend analysis for formulas that help decide how much space to set aside in the storage locations for screenshots.
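
As a back-of-the-envelope illustration only (not the formulas from Screen capture trend analysis; every number below is an assumption), screenshot storage scales roughly with session volume, duration, frame rate, and screenshot size:

    # Back-of-the-envelope illustration only; all numbers are assumptions.
    # Use the formulas in "Screen capture trend analysis" for real sizing.
    sessions_per_day = 200        # assumption
    avg_session_minutes = 30      # assumption
    frames_per_minute = 12        # assumption (about one screenshot every 5 s)
    avg_screenshot_kib = 150      # assumption
    retention_days = 90           # assumption

    daily_gib = (sessions_per_day * avg_session_minutes * frames_per_minute
                 * avg_screenshot_kib) / (1024 * 1024)
    print(f"~{daily_gib:.1f} GiB/day, "
          f"~{daily_gib * retention_days:.0f} GiB over {retention_days} days")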

The session monitoring service uses a mutex on the storage location to determine when new files are saved. If the storage method doesn't update the application's mutex when files are being added, smonprocessmeta won't run. If that issue cannot be solved at the operating system level, smonprocessmeta must be scheduled separately to run (every 5 minutes or on the customer-required schedule), in a Windows Scheduler Task similar to the default ones which provide healthcheck monitoring and external database replication.
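
If the separate schedule is needed, the task can be registered with schtasks; the Python sketch below wraps that call (the task name and the path to smonprocessmeta are hypothetical, and any required command-line arguments and account should mirror the default scheduled tasks):

    # Sketch: register a scheduled task that runs smonprocessmeta every 5 minutes.
    # Task name and executable path are hypothetical; mirror the product's
    # default scheduled tasks for the exact command line and account.
    import subprocess

    task_name = "smonprocessmeta-periodic"          # hypothetical name
    command = r"C:\instance\smonprocessmeta.exe"    # hypothetical path

    subprocess.run(
        ["schtasks", "/Create",
         "/TN", task_name,
         "/TR", command,
         "/SC", "MINUTE", "/MO", "5",
         "/RU", "SYSTEM",
         "/F"],          # overwrite the task if it already exists
        check=True,
    )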

Data collection for issues

If there is an issue with session recording or processing:

  1. Check the network architecture: what kind of storage, and with what configuration details, is used for the location that stores the session recording files?

  2. Increase logging to Verbose for smonc, idsmpg, smonprocessmeta, smonclean, and smonsavemeta.

    Trace-restart the logging so the session service can start logging at the increased level.

  3. Use Task Manager's Details tab to right-click the idsmpg process and collect a memory dump.

  4. Use Sysinternals' procexp64.exe or handle.exe to check whether processes other than the ones described in this topic keep handles on the location where the session files are stored (see the sketch after this list).

    If there's no relevant output from those tools, collect a procmon.exe trace while smonc writes session files and at the end of a session.

    To reduce RAM usage and trace file size, configure procmon to drop all entries other than file operations on the location where the session files are written.
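
For step 4, a quick way to run the handle check is to call Sysinternals handle.exe (or handle64.exe) against the session storage path; this Python sketch assumes hypothetical paths for both the tool and the storage:

    # Sketch for step 4: list processes holding handles under the session
    # storage location via Sysinternals handle.exe. Both paths are hypothetical.
    import subprocess

    handle_exe = r"C:\tools\sysinternals\handle64.exe"   # hypothetical tool path
    storage_path = r"D:\sessions"                        # hypothetical storage path

    result = subprocess.run(
        [handle_exe, "-accepteula", storage_path],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)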

Disaster recovery

If the recommended configuration is followed, then when a replicated node becomes unavailable, the data it collected is still available on the shared storage location.

  1. Move the data to another folder: one symlinked from a surviving node, or from the new node that is supposed to replace the old one (see the sketch after these steps).

    A move within the same storage is orders of magnitude faster than moving the data between storage locations.

  2. Use Bravura Privilege's smonmove utility to change, in the backend database, the node that "owns" the data from the old/decommissioned node to the new node or the node taking over that data.

    This is in addition to "moving" the execution of the services performed by the old/decommissioned node to the node that replaces it.
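
As an illustration of step 1 under assumed paths, a same-volume move (which only updates metadata, so it is fast) followed by pointing the replacement node's symlink at the moved folder could look like this:

    # Sketch for step 1: move the lost node's session data within the same
    # storage, then point the replacement node's symlink at the new folder.
    # All paths are hypothetical; creating the symlink needs admin privileges.
    import os

    old_node_data = r"E:\sessions\node-old"      # hypothetical: data of the lost node
    new_location = r"E:\sessions\node-new"       # hypothetical: folder for the new node
    link_on_new_node = r"D:\instance\sessions"   # hypothetical: symlink on the new node

    os.rename(old_node_data, new_location)       # same storage: no bulk data copy

    if not os.path.exists(link_on_new_node):
        os.symlink(new_location, link_on_new_node, target_is_directory=True)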

See the Replication and Recovery documentation for more information on recovery.