How To
Introduction to the Python Task API
## Background

The Swimlane REST API and Python driver allow solution developers to connect and interact with the Swimlane platform from external entities (other programs acting as Swimlane clients). In contrast, the Swimlane Python script task type, and the Python Task API, allow developers to customize and extend functionality within the platform.

Swimlane offers four task types:

- Built-in tasks (such as the "Email Import" task, which uses platform code to retrieve email message data from an IMAP inbox)
- Plugin tasks (the names and number of these tasks vary depending on which of the plugins you have previously installed)
- PowerShell script tasks
- Python script tasks

Follow the Swimlane platform documentation to create a new task of type Python script.

## Inputs and Outputs

The Swimlane Task API provides the primary means of conveying input data into the task's script and retrieving output values from the script. In addition, the script can interact with the Swimlane REST API and its Python driver, which gives this task type even greater power and flexibility. Many scripts use both the Task API and the REST API/driver.

## Task API Inputs

The Python Task API provides the script with input values via several objects, all of which are Python dictionaries:

- sw_context.config
- sw_context.inputs
- sw_context.asset
- sw_context.user
- sw_context.state

The first three of the above are described in the pertinent section of the platform documentation. See also Lesson 2 of the Swimlane Certified SOAR Developer course, which provides greater detail for sw_context.inputs and notable coverage of sw_context.user and sw_context.state.

These inputs are marshaled together and injected into the script by the platform whenever a Python script task is executed as a background job. These input values are not present within the script when it is executed from the task's debugger console.
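To make the flow of data concrete, the sketch below shows the general shape of a script body: read values from sw_context.inputs and sw_context.config, then hand results back via the sw_outputs list used later in this article. The input name, the config key, and the output key are placeholders chosen for illustration, not guaranteed names; inspect the dictionaries (see the debugging guidance below) to confirm what your task actually receives.

```python
# Sketch of a Python script task body. The platform injects sw_context when the
# task runs as a background job, so nothing is imported for it here.

# 'alarm_id' is a hypothetical task input; it must match an input defined on the task.
alarm_id = sw_context.inputs.get('alarm_id', '')

# The config dictionary carries platform-supplied values; the key name used here
# is an assumption -- dump the dictionary with json.dumps to see the real keys.
record_id = sw_context.config.get('RecordId', '')

# sw_outputs is the list of dictionaries the platform reads back from the script
# (the same mechanism used in the workflow-testing example later in this article).
sw_outputs = [{'data': 'processed alarm {0} for record {1}'.format(alarm_id, record_id)}]
```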
## sw_context.user

There are many properties within sw_context.user. These belong to the Swimlane user account under which the executing script was launched. They can be inspected by following the instructions in *Debugging Python Script Tasks, and Forked Plugin Tasks, with Print Statements* or *Debugging Python Script Tasks, and Forked Plugin Tasks, with Custom Log Files*, using the guidance within *Helpful Commands and Queries*, Python section (at the bottom), in order to add the following diagnostic code:

```python
print(json.dumps(sw_context.user, sort_keys=True, indent=4, separators=(",", ": ")))
```

The above works well for print-statement debugging. For log-file debugging, try:

```python
json.dump(sw_context.user, log_file_object, sort_keys=True, indent=4, separators=(",", ": "))
```

## sw_context.state

This property is populated by the script code itself. The key/value pairs added to this dictionary are:

- persisted by the platform at the end of the script's execution (they can be found within the Swimlane integrationStates collection when querying MongoDB)
- re-injected into the script during its next execution

This allows each Python script task its own data store. This is a very, very useful feature. Consider the following example:

```python
updated_since = ''
if 'last_updated_since' in sw_context.state:
    updated_since = sw_context.state['last_updated_since']
else:
    updated_since = '2020-06-04T00:00:00Z'

logfile.write('Querying third-party API for updated alarms starting ' + updated_since + '\n')

# Query for and process the updated alarms.

new_updated_since = pendulum.now().to_rfc3339_string().replace('+00:00', 'Z')
sw_context.state['last_updated_since'] = new_updated_since
logfile.write('Retrospective date and time for the next run: ' + new_updated_since + '\n')
```

The above script, when run with a scheduled trigger launching it every 20 minutes, for example, can know to fetch and act upon only those alarms updated within the prior 20 minutes. The added benefit, however, is that if the Swimlane deployment hosting the script goes down for two hours of maintenance, the script will do the right thing the next time it executes. Specifically, the first time it runs after the outage, it will "know" to query for all alarms updated within the prior two hours; the second time it runs after the maintenance window, it will resume its typical behavior of launching every 20 minutes and fetching/processing only those alarms updated within the prior 20-minute span. The support team relies heavily on this benefit for its own automations.

Another trick afforded by sw_context.state is the ability to test the Swimlane workflow feature in a granular, rigorous way. The following code is not well suited for production use, but it does make testing nuanced workflow behavior easier:

```python
count = 0
per_record_key = sw_context.config['RecordId'] + '_execution_count'
if per_record_key in sw_context.state:
    count = sw_context.state[per_record_key]
sw_context.state[per_record_key] = count + 1

if count == 0:
    sw_outputs = [{'data': 'first output'}]
else:
    sw_outputs = [{'data': ', second output'}]
print('count is {0} and record id is {1}'.format(str(count), sw_context.config['RecordId']))
```

This code creates a new entry in the script's state store for every record upon which it operates. This wouldn't scale for an app with thousands or tens of thousands of records, but it is terribly useful in pre-production scenarios for determining how many times a script is invoked by the workflow feature upon a given record, and under what circumstances.

## Conclusion

When authoring Python scripts for the Swimlane platform, follow these guidelines:

- When authoring scripts that will run outside of the platform, use the Swimlane driver whenever possible and fall back on the unwrapped REST API when necessary. See this helpful section of the driver docs to learn the preferred method for blending these two approaches (a brief sketch of that pattern follows this conclusion).
- When authoring scripts that will run inside of the platform, use the Task API first, and then fall back on the driver and/or REST API when necessary.

Enjoy creating security solutions in Swimlane!
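As a rough illustration of the first guideline above, here is a minimal, non-authoritative sketch of a driver-first external client. The hostname, credentials, application name, field names, and the endpoint passed to request() are all placeholders; consult the driver documentation referenced above for authoritative usage.

```python
# External-client sketch: prefer the Swimlane Python driver, and fall back on the
# unwrapped REST API (through the driver's authenticated session) when needed.
# Hostname, credentials, app name, field names, and the endpoint are placeholders.
from swimlane import Swimlane

swimlane = Swimlane('https://swimlane.example.com', 'admin', 'super-secret', verify_ssl=False)

# Driver first: work with an application and its records through the wrapped API.
app = swimlane.apps.get(name='Security Alerts')
record = app.records.create(**{'Alert Name': 'Suspicious login', 'Severity': 'High'})

# Fall back on raw REST calls for anything the driver does not wrap, reusing the
# driver's authentication; the endpoint path here is a stand-in, not a real route.
response = swimlane.request('get', 'some/unwrapped/endpoint')
print(response.status_code)
```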