How To
Workflows: discovering and mastering workflow creation
Every Swimlane application has a workflow: a set of operations that are carried out automatically on the application's records under certain circumstances. These operations are specified in the workflow editor, a visual design tool (in the manner of a CAD tool) that allows you to define execution logic using a form of visual programming. Within the editor, you create a hierarchy (a tree structure) of conditions and actions. Each condition contains one or more evaluations (expressions that evaluate to true or false), and each action is a type of operation, or side effect (set a record field value, show or hide a record section, send an e-mail notification, etc.).

The solution-building capabilities of Swimlane's integrations (both plugins and custom script tasks) overlap in some ways with the capabilities of workflows, and workflows can launch integration tasks, so these two mechanisms are often used in tandem. Workflows are distinctive, however, in that they have exclusive abilities to make Swimlane records more interactive through actions that update the record's layout and behaviors in real time: field show/hide, section show/hide, field read/write toggle, field required/optional toggle, and the ability to update the list of valid values for fields of type selection (formerly known as ValuesList).

Success in crafting security solutions in Swimlane hinges in part on creating great workflows, and creating sound workflows requires a keen understanding of both when workflows are executed and how the execution logic in each workflow is carried out.

Events that spark workflow execution

The workflow for application X is carried out within the Chrome browser against an open (currently displayed) record belonging to X whenever:

- a human user opens a new record form
- a new record is saved for the first time
- an existing record is opened
- an open record undergoes any modification by a human user

In addition, the workflow for X is also carried out against a record belonging to X within the Swimlane platform (on the server side) whenever:

- a new record is created by an integration task
- a new record is created by a remote client tool/script (using the Swimlane platform's RESTful API, likely via the Swimlane driver)
- an existing record is modified by either of these two means

Workflow execution characteristics

It's crucial to bear in mind the following explanation of how workflow logic is executed (how the tree structure is evaluated and carried out) against a Swimlane record.

There is no guarantee of the order of execution of a workflow in Swimlane. This is because both the client-side workflow processor (running in the browser) and the server-side workflow processor (running under the web server) parse and evaluate the JSON document representing the workflow definition in the order in which the conditions and actions appear in the JSON, and that order is not guaranteed to preserve any particular sequence. For example, if there were three sibling conditions named A, B, and C in a certain section of the workflow tree, all three would be evaluated, but it would not be possible to predict the order in which they would be evaluated (and there are six possible orders in which a set of three conditions can be evaluated). It is worth noting, however, that long experience has shown that the conditions and actions of the workflow tree are evaluated in a top-down, left-to-right ordering the vast majority of the time.

Another important clarification is that both the client side and the server side process the workflow in two phases: all conditions are evaluated first, building a list of actions whose conditions are met, and then those actions are executed.
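To make this two-phase behavior concrete, here is a minimal Python sketch of the evaluation algorithm just described. The types and names are illustrative stand-ins, not Swimlane's actual implementation.

    from dataclasses import dataclass
    from typing import Callable, List

    # Illustrative model of the workflow tree described above; these are
    # not Swimlane's actual classes.
    @dataclass
    class Condition:
        evaluations: List[Callable[[dict], bool]]  # each returns true/false
        actions: List[Callable[[dict], None]]      # side effects on the record

    def run_workflow(conditions: List[Condition], record: dict) -> None:
        # Phase 1: evaluate every condition against the record as it exists
        # right now, collecting the actions whose conditions are all met.
        eligible: List[Callable[[dict], None]] = []
        for condition in conditions:  # traversal order is not guaranteed
            if all(evaluate(record) for evaluate in condition.evaluations):
                eligible.extend(condition.actions)

        # Phase 2: execute the eligible actions one at a time. A field value
        # set here was NOT visible to any condition during phase 1 -- the
        # root of the "condition evaluation subtlety" discussed below.
        for action in eligible:
            action(record)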
The lack of an enforced evaluation order for conditions and for actions, together with the manner in which actions of type trigger integration are carried out, is the cause of the gotchas enumerated below.

Gotchas

Experience has shown the following caveats and limitations in the workflow.

Action execution order (trigger integration)

While the workflow processor (on both the client and server sides) evaluates the workflow's eligible actions one at a time, in series (after identifying those eligible actions in the manner described above, where no user-chosen order is enforced), the integration jobs launched by the workflow are not carried out in serial order. In fact, they are often executed in parallel, sometimes resulting in unexpected or undesired outcomes.

Why is that? Each trigger integration action is implemented to enqueue its task as a job in the Hangfire job queue, which allows the job to be executed like all other background jobs. And because Swimlane's Tasks.exe launches multiple jobs from the queue in parallel (depending upon the number of processor cores available on Tasks.exe's host), these jobs may "race" to update the record.

Consider the following example. When a new record is generated in the Workflow Gotchas app, the app's workflow will be evaluated. Assume in this example that both of the conditions "test group" and "test hash" evaluate to true. In that case, both the "query AD" (short for Active Directory) and "query VT" (short for VirusTotal) tasks will be enqueued within milliseconds (or less) of one another, and because they are enqueued in such close proximity in time, these two tasks will very likely execute in parallel. Assume also that both of these tasks will further set and/or update other fields within the record in question.

If both of the scripts for these two tasks employ the Task API to update the record in question (by adding values to sw_outputs, and ensuring that those values are correctly mapped to specific fields in the targeted record), then their parallel execution will not result in any harmful outcome, because Tasks.exe employs a lock to serialize the updates to the record.

If, on the other hand, one or both of these scripts employ the RESTful API (optionally using the Swimlane driver) to update the targeted record, then the updates made by one script may be accidentally reverted by the updates performed by the other script. Record updates done through the RESTful API are not atomic or parallel-safe because they lack the locking enjoyed by the Task API. Each RESTful client job (whether it runs inside or outside of Swimlane) must fetch a copy of the record, modify that copy in local memory, and then submit the modified copy back via an HTTP PUT. When two such jobs are executing in parallel, their simultaneous edits often conflict: because the jobs are in a race to fetch the record and update it, the changes made to the record by the first job to complete its PUT are often undone by the PUT from the other job.
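The lost-update race is easy to reproduce in a few lines. The sketch below uses the Swimlane Python driver to simulate two enrichment jobs updating the same record in parallel; the host, credentials, application name, record ID, and field names are hypothetical placeholders.

    from concurrent.futures import ThreadPoolExecutor
    from swimlane import Swimlane

    # Hypothetical host, credentials, and names -- substitute your own.
    swimlane = Swimlane('https://swimlane.example.com', 'admin', 'password')
    app = swimlane.apps.get(name='Workflow Gotchas')

    def enrich(record_id, field_name, value):
        # Classic read-modify-write: fetch a full copy of the record,
        # change one field on the local copy, then submit the WHOLE copy
        # back to the platform (an HTTP PUT under the hood).
        record = app.records.get(id=record_id)
        record[field_name] = value
        record.save()  # PUTs the entire record, including now-stale fields

    # Two jobs racing on the same record: each PUT carries the other
    # field's stale value, so whichever save() lands last silently
    # reverts the other job's update.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(enrich, 'aRecordId', 'AD Results', 'member of 3 groups')
        pool.submit(enrich, 'aRecordId', 'VT Results', '42/70 engines flagged')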
Resources for learning more about the Task API and the RESTful API can be found here.

The following strategies are recommended for working around this semantic gap (the strict serial order of workflow tree evaluation versus the parallel execution of background jobs). Each one requires first identifying all integration tasks that are likely to operate upon a given record within close proximity in time (the "pertinent tasks").

1. Ensure that all of the pertinent tasks make exclusive use of the Task API.

2. Refactor the pertinent tasks that use the RESTful API to eliminate use of PUT /app/{id}/record/{id} and replace it with PATCH /app/{id}/record/{id}. This will only work if each such task updates an exclusive subset of record fields.

3. Refactor the pertinent tasks that use the Swimlane driver to replace record.save() with record.patch(). As with the previous strategy, this will only work if each such task updates an exclusive subset of record fields. (See the sketch below.)

4. Refactor the execution configuration of the pertinent tasks so that only one of them is catalyzed by the workflow (it may or may not matter which one goes first). Add a new output of type execute task to that first, workflow-triggered task so that the second task is thereby launched; add a new output of type execute task to that second task so that the third task is launched; and so on, until the launching of every task in the group has been provided for.

5. Refactor the execution configuration of the pertinent tasks so that each one is catalyzed only when the task's "guarding" condition evaluates to true. Specifically, ensure that the first task's outputs (via either the Task API or the RESTful API) include an assignment to the field that's monitored by the subsequent task's guarding condition. The write to this field (as well as any other field assignments from the first task) will cause the entire workflow to be re-evaluated, whereupon the second task's guarding condition will evaluate to true, causing the second task to run only after the first task has successfully completed. Be careful to ensure that the first task is the only entity that makes the assignment to the second task's guarding field. See to it that the second task's outputs include the assignment to the field that will cause the third task's guarding condition to evaluate to true, and so on. This approach offers an advantage over the one immediately above in that all of the execution control logic remains defined within the workflow.

6. Combine the pertinent tasks into a single task. (This is not ideal, as the modular definition and maintainability of each task are lost, but this approach has worked for some customers.)

The last of these strategies (combining the tasks) is the least desirable of all because it hampers future maintenance of the behaviors in the original set of tasks. The Task-API-only strategy and the guarding-condition strategy are superior because they retain a modular construction that separates concerns. The Task-API-only strategy is somewhat limited in that the Task API is less flexible than the RESTful API for working with reference fields; however, improvements to support for reference fields in the 3.3 release of the Swimlane platform mitigate this disadvantage. The execute-task strategy has been successfully employed by a Swimlane customer who has seven auto-enrichment scripts that operate on every one of their incident management records; these seven work well, launching and running to completion in strict serial order, because they are daisy-chained together via the execute task output type. The guarding-condition strategy may be the most flexible and maintainable all around.
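As a concrete illustration of the patch-based strategies (items 2 and 3 above), the sketch below reworks the enrichment function from the earlier race example so that only the changed field crosses the wire. It assumes a Swimlane driver version that exposes record.patch() as described above; the host, credentials, and names remain hypothetical placeholders.

    from swimlane import Swimlane

    # Hypothetical host, credentials, and names, as in the earlier sketch.
    swimlane = Swimlane('https://swimlane.example.com', 'admin', 'password')
    app = swimlane.apps.get(name='Workflow Gotchas')

    def enrich(record_id, field_name, value):
        record = app.records.get(id=record_id)
        record[field_name] = value
        # patch() submits only the modified field(s) rather than the whole
        # record, so parallel jobs that write DISJOINT subsets of fields
        # can no longer revert one another. Per the caveat above, this is
        # safe only while each pertinent task updates an exclusive subset
        # of the record's fields.
        record.patch()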
Action execution variability: 1 vs. N executions

Trigger integration and trigger notification actions are only carried out the first time that their ancestor conditions' evaluations are met. This is helpful in many contexts, such as those in which one wouldn't want to send an e-mail notification over and over again when the first message is sufficient and additional messages would be redundant. However, for those use cases where it is desirable to send an e-mail notification each and every time the pertinent conditions are met, the following workaround could be considered: create a Python script task that

- verifies that the appropriate conditions are met (by testing the same field values that the original, applicable workflow conditions test),
- composes and dispatches an e-mail message, and
- is triggered by a record save event, rather than by a workflow action.

Trigger integration and trigger notification actions are only carried out when changes to a record are being persisted (during a record save operation). Trigger integration actions, however, can optionally be defined to execute when the pertinent conditions are met while the record is being edited by a human user in the Swimlane user interface, before the user clicks save (see the end-user documentation's explanation of the immediately execute option). This configuration choice is highly discouraged, however: it is rarely truly needful, and it often results in unwanted side effects such as record revision bloat and the frequent display of the "keep local changes" dialog.

All other action types are carried out every time the pertinent conditions are met, unless precluded by one of the exceptional factors mentioned below.

Condition evaluation subtlety

As described above, condition evaluations test record field values to determine whether or not to follow certain branches of the application's business logic (as it were). These evaluations return true in the manner that one might expect in all circumstances except one: when the field being tested was only just set by a set field value action elsewhere in the workflow. This is a consequence of the workflow evaluation algorithm described above, where all conditions are evaluated once and then all eligible actions are carried out.

One server-side workaround for this gotcha (one that's proven acceptable in a variety of use cases) is to provoke the workflow to evaluate multiple times in rapid succession after a record is altered by a client script or a Swimlane integration task. A first pass of the workflow sets one or more field values, which are then persisted; a second pass then goes deeper into the business logic, because the conditions that should have evaluated to true on the first pass do evaluate to true on the second pass.

What provokes the second pass of the server-side workflow processor? This clever trick is typically arranged by replacing the set field value actions (the ones that do get executed during the first pass) with trigger integration actions that launch Python script tasks that employ the Task API to set the field values. These task-based field assignments not only set the fields to the desired values, they also provoke the workflow processor to run again on the server side in rapid succession. The Task API is preferred over the RESTful API for this workaround because it executes more efficiently within the platform, and because it doesn't cause record save events (which can launch the app's integration tasks configured for on-record-save triggering during phases of the record's life cycle where such launches may not have been anticipated by the SOC engineer).
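A minimal sketch of such a task-based field assignment follows. Within a Swimlane Python script task, the platform injects sw_outputs into the script's scope; the exact shape expected here (a list of key/value dicts, in this sketch) and the "Stage" key are assumptions for illustration only; confirm them against the Task API resources linked above, and map the output key to the desired record field in the task's output configuration.

    # Sketch of a Python script task that assigns a field via the Task API.
    # sw_outputs is injected by the platform when the task runs; fall back
    # to a local list so the script can also be exercised outside Swimlane.
    try:
        sw_outputs
    except NameError:
        sw_outputs = []

    # ... the task's real enrichment work would happen here ...

    # Appending this output (once the 'Stage' key is mapped to a record
    # field in the task's output configuration; both the key and the value
    # are hypothetical) persists the field AND provokes the server-side
    # workflow processor to re-evaluate the whole tree, so conditions
    # watching the field evaluate to true on the second pass. The same
    # mechanism drives the guarding-condition strategy described earlier.
    sw_outputs.append({'Stage': 'enrichment-complete'})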
Default Action(s) condition behavior

This condition will only evaluate to true the very first time that a workflow is carried out on a new record. For a new record being composed by a human user in the Swimlane user interface, the actions that are children of the Default Action(s) condition are carried out right when the new record form is first opened. For a record created by a client script, these default actions are carried out just after the newly formed record is first submitted to the Swimlane platform.