Writing the idtrack.psl script
The idtrack.psl script is required for propagating changes to user accounts and account associations, group memberships, or account attributes. It is optional when tracking changes to profile and request attributes.
A sample script is located in the samples\ directory of the Bravura Security Fabric distribution. The sample script simply writes all tracked changes to the idmsuite.log file.
With scripted propagation, any tracked change can be a triggering event to perform any provisioning operation allowed via the application programming interface (API).
To perform scripted provisioning, the script must make API calls based on its own decision logic.
The script must include idtrack-types.psl, which is shipped in the \<instance>\script\ directory. This file provides the structure definitions that the script needs in order to query which changes have been tracked.
The script must define a processChange function:
processChange( const $userid, const $isProfile)
This function is called once for every user for which idtrack discovers changes.
If changes were discovered for items that are attached to a user profile, then $userid is set to the profile ID and $isProfile is 1.
If changes were discovered for items that are not associated with a user profile, then $userid is the account longid and $isProfile is 0.
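Putting the pieces above together, a minimal processChange skeleton might look like the following. This is a sketch only: the include mechanism for idtrack-types.psl is indicated by a comment (see pslang.pdf for the exact syntax), and the branch bodies are placeholders for your own provisioning logic.

```
# Include idtrack-types.psl here (shipped in \<instance>\script\);
# see the PSLang Reference Manual for the include syntax.

processChange( const $userid, const $isProfile )
{
    if ($isProfile == 1) {
        # $userid is a profile ID; the tracked changes are for
        # items attached to this user profile.
    }
    else {
        # $userid is an account longid; the tracked changes are for
        # items not associated with any user profile.
    }
}
```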
The script can also include startup() and shutdown() functions, which are called when idtrack starts and finishes, respectively. The behavior of the startup() function is based on its return value:
0 (or no return value) – continue calling processChange on tracked changes
1 – halt without error
-1 – halt with an error
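The return-value contract can be sketched as follows; only the values documented above are assumed, and the function bodies are placeholders.

```
startup()
{
    # Perform any one-time setup here.
    # Return 0 to continue calling processChange on tracked changes,
    # 1 to halt without error, or -1 to halt with an error.
    return 0;
}

shutdown()
{
    # Perform any cleanup when idtrack finishes.
}
```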
Functions for capturing changes are described in the sections that follow.
See also
The PSLang Reference Manual (pslang.pdf) for details on writing PSLang scripts.
The Bravura Security Fabric Remote API (api.pdf) for details on the API.
Validate automated requests for feasibility
When designing or troubleshooting workflow automation, keep in mind that:
Auto-discovery or idtrack may collect invalid or incomplete data from target systems
Event actions (exit traps) fire at different times during workflow processing, and for all requests
Scripts are called repeatedly, at different times, and in different pre-defined requests and contexts
A request should not be triggered before all of its requirements (including a recipient) are available in the current calling context. This means that not all request data may be available to custom scripts, whether stand-alone or deployed via components.
Before triggering any new request, verify that the current calling context contains a valid recipient value, as well as the data relevant to the request, such as attributes, target/hostid, and operations.
Other possible dependencies must be checked by the script if:
The operations to be run as part of the request have such dependencies,
and
The Transaction monitor is not configured to repeat failed operations,
or
The dependency is not likely to be added by another operation triggered before the configured retries end.
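As an illustration, a feasibility guard at the top of processChange might be sketched like this. The helper functions getRecipient and hasRequiredData are hypothetical placeholders; the real checks depend on the API calls available in your solution (see api.pdf).

```
processChange( const $userid, const $isProfile )
{
    # Hypothetical helpers -- substitute real API lookups.
    if (getRecipient($userid) == "") {
        # No valid recipient in the current calling context;
        # do not trigger a request yet.
        return;
    }
    if (hasRequiredData($userid) == 0) {
        # Attributes, target/hostid, or operations needed by the
        # request are missing; defer rather than submit a request
        # that cannot complete.
        return;
    }
    # All prerequisites are present: safe to build and submit
    # the request via the API.
}
```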
Positive failures must also be addressed: operations may fail because the requested entitlement already exists on the target, account, or profile where it is being added. Such failures have a low probability of involving the wrong object, but all such possibilities for each entitlement have to be considered at solution design time. For example, if an account is being added to a group where it is already a member, should that operation
Fail, because the account may belong to another user with the same identifier,
or
Succeed, because in the current solution there's no chance that target would have two different accounts with the same identifier?
The workflow manager has a limited number of threads for processing requests, and will process its reqbatch database table "queue" in order. That means that triggering a large number of automated requests at the same time will prevent manual requests from being processed until previously-triggered requests have finished processing.