AI SOC Ingestion
Run the Ingestion Process
Use this mode to build a complete alert-ingestion component for a specific vendor. It combines the HTTP API calls that fetch raw alerts with a Hero AI-generated JSONata expression that maps the raw data to the Turbine schema, producing a single reusable component for downstream playbooks.

You can run the ingestion process in three ways. Under Generate Ingestion Component/Playbook Using, choose one of:

- Flow 1: Ingestion using previously created components. Choose Existing Components and use components you already have in Turbine, then test the API calls and complete the mapping (see docid 0p9qwz3o 0j5dnkpjugmq below).
- Flow 2: Ingestion using an API specification. Choose API Specification and follow all steps: provide the spec, get the API sequence, configure authentication, test, and map to the Turbine schema (see docid 0p9qwz3o 0j5dnkpjugmq below).
- Flow 3: Ingestion using webhooks. Choose Webhook and use a sample alert payload to generate the schema mapping and a webhook-based playbook (see docid 0p9qwz3o 0j5dnkpjugmq at the bottom of this section).

The Route to AI SOC toggle (shown when generating a playbook or ingestion component) applies to all three flows. Turn it on if you want the Turbine schema object produced by the ingestion flow to be sent automatically to the Signal Triage application in AI SOC.

Flow 1: Ingestion Using Previously Created Components

When you use components that already exist in Turbine, under Generate Ingestion Component/Playbook Using, choose Existing Components, then follow these steps.

Step 1: Select existing components
Select existing components that retrieve alert data. From the Component dropdown, choose the component(s) you need, then select Continue.

Step 2: Sequence and execute components
Arrange the components in the right execution order, run them sequentially, and trigger data flow between components.
- Use the drag handle to reorder components if you have more than one.
- For each component, select Run to execute it and view the output in the interface. Use Open Component to open the component in a separate view.
- If the API call needs headers, query parameters, or a request body, select Edit to open Configure Parameters. Enter or edit the headers, query parameters, and request body (JSON), select Update to save, then run the component from the main view.
- After a successful run, click Link to automatically map and transfer the output parameters from the previous API call into the input fields of the next API call in your sequence.
- Continue until you have run the sequence and the final component returns the raw alert data you need for mapping, then select Continue.

Next steps: complete Map to the Turbine Schema (part of docid\ ogtyoyw oumqxrbeofw4a in Flow 2), docid\ ogtyoyw oumqxrbeofw4a, and docid\ ogtyoyw oumqxrbeofw4a to create the final ingestion component. Authentication is already configured for the selected components, so you do not configure authentication in this flow.

Flow 2: Ingestion Using an API Specification

Follow all of the steps below when you choose API Specification under Generate Ingestion Component/Playbook Using to build the ingestion component from an OpenAPI specification.

Step 1: API specification
Upload or paste an OpenAPI specification to discover the available endpoints.
- In the API Specification area, paste your API/Swagger specification (JSON or YAML) or use Choose File to upload a file.
- Optionally check Skip Deprecated Endpoints.
- Select Parse OpenAPI Spec to discover endpoints.

Step 2: Get API sequence
Select or generate the sequence of API calls required to retrieve alerts from the vendor product.
- Manually select endpoints: after parsing in Step 1, the widget lists the discovered endpoints. Select the checkboxes for the endpoints you need. You can select up to 25 endpoints; once this limit is reached, the remaining checkboxes are disabled. Then select Continue.
- Use Hero AI: Hero AI generates the API call sequence. Optionally add a custom prompt in the text area to guide Hero AI in generating the API sequence. Select Get API Sequence Using Hero AI, then select Continue.
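Conceptually, parsing the spec means walking the OpenAPI paths object and collecting the operations (optionally dropping deprecated ones). The following is a minimal Python sketch of that idea under a toy spec fragment; the function name, the spec content, and the output format are illustrative, not part of the Turbine product:

```python
import json

def list_endpoints(spec: dict, skip_deprecated: bool = True) -> list[str]:
    """Collect "METHOD /path" entries from an OpenAPI spec's paths object."""
    http_methods = {"get", "post", "put", "patch", "delete"}
    endpoints = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in http_methods:
                continue  # skip path-level "parameters", vendor extensions, etc.
            if skip_deprecated and op.get("deprecated", False):
                continue  # honor the Skip Deprecated Endpoints option
            endpoints.append(f"{method.upper()} {path}")
    return endpoints

# Tiny illustrative spec fragment (not a real vendor specification).
spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/alerts": {
      "get": {"summary": "List alerts"},
      "post": {"summary": "Create alert", "deprecated": true}
    }
  }
}
""")
print(list_endpoints(spec))  # deprecated operations are filtered out
```

With skip_deprecated=False, the deprecated POST operation would be listed as well.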
If the spec has not been parsed yet, the widget shows "Please parse the OpenAPI spec first to see available endpoints."

Step 3: Authentication configuration
Set up the authentication required to access the vendor API.
- Enter the API Endpoint Base URL (for example, https://api.crowdstrike.com/).
- Under Authentication Type, choose one: None (no authentication needed), API Key, Bearer Token, Basic Auth, or OAuth 2.0.
- Create a new HTTP asset or select an existing one, and configure credentials as needed.
- Select Continue.

Step 4: Sequence and execute components and map to the Turbine schema
Arrange the API call components in the right execution order, run them sequentially, and trigger data flow between components.
- Use the drag handle to reorder components if you have more than one.
- For each component, select Run to execute it and view the output in the interface. Use Open Component to open the component in a separate view.
- If the API call needs headers, query parameters, or a request body, select Edit to open Configure Parameters. Enter or edit the headers, query parameters, and request body (JSON), select Update to save, then run the component from the main view.
- Continue until you have run the sequence and the final component returns the raw alert data you need for mapping, then select Continue.

Step 5: Configure the ingestion component
Define component prefix name: enter a unique prefix to identify the generated component (for example, crwd). The prefix is limited to a maximum of 10 characters and is prepended to the name of the new component and all associated actions generated by this flow.
Configure Hero AI governance:
- Visible to Hero AI: toggling this on is highly recommended. It allows Hero AI to discover and use this component in other automated workflows and use cases across the platform.
- Requires Confirmation to Execute in Chat: this setting ensures that any execution initiated by the AI within a chat session requires explicit human approval before running.
- Enable Route to AI SOC: toggle this on to automatically send the Turbine schema formatted alert output to the AI SOC solution.
Finalize generation: once all settings are verified, click the Create Component button. Your new ingestion component is built and added to your content library, ready to begin processing alerts.

Step 6: Review the output
The completed ingestion component includes:
- The HTTP actions that fetch raw alerts from the vendor API.
- A predefined component that handles the mapping of the incoming raw alert to the Turbine schema, including a Transform Data action with the Hero AI-generated JSONata expression.
- If Route to AI SOC is enabled, an Emit Event action is automatically added so the standardized Turbine schema alerts are sent to AI SOC for triage and remediation.
The component is named VendorName Ingestion Component (for example, CrowdStrike Ingestion Component) and is ready to use in AI SOC alert ingestion playbooks. To discard your progress and start over, select Start From Beginning.

Flow 3: Ingestion Using Webhooks

This flow uses the webhook ingestion playbook creator to turn raw vendor alert data, delivered via webhook, into standardized Swimlane records. The creator builds a canvas playbook that receives the webhook payload, creates a raw alerts array, parses the alerts in bulk, extends each alert to the Turbine schema, and emits each Turbine schema object (for example, to the Signal Triage application when Route to AI SOC is on). You may be prompted for a sample alert payload and schema mapping earlier in the widget; complete those as needed, then configure the playbook and webhook as follows.

Step 1: Configure the ingestion playbook
- Enter a playbook prefix name (for example, "fortinet" or "crowdstrike").
- Turn the Route to AI SOC toggle on if you want the Turbine schema object from this flow to be sent automatically to the Signal Triage application in AI SOC. (This toggle applies to all three ingestion flows.)
- Click Create Playbook.
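The pipeline the creator builds (receive payload, build a raw alerts array, parse in bulk, map each alert, emit each result) can be sketched in a few lines of Python. Every field name below, both the raw vendor fields and the mapped keys, is invented for illustration and does not reflect the actual Turbine schema; in the generated playbook the mapping is performed by a Hero AI-generated JSONata expression, not hand-written code:

```python
import json
from typing import Any

emitted: list[dict[str, Any]] = []

def emit(alert: dict[str, Any]) -> None:
    # Stand-in for the playbook's Emit Event action (e.g., routing to AI SOC).
    emitted.append(alert)

def to_turbine_like_schema(raw: dict[str, Any]) -> dict[str, Any]:
    # Illustrative mapping only; real mappings come from the generated JSONata.
    return {
        "title": raw.get("alert_name", "Unknown alert"),
        "severity": raw.get("level", "informational"),
        "source": raw.get("product", "unknown"),
        "observedAt": raw.get("timestamp"),
    }

def handle_webhook(body: str) -> list[dict[str, Any]]:
    """Receive payload -> raw alerts array -> bulk parse -> map -> emit each."""
    payload = json.loads(body)                     # receive the webhook payload
    raw_alerts = payload.get("alerts", [payload])  # normalize to an array
    mapped = [to_turbine_like_schema(a) for a in raw_alerts]
    for alert in mapped:
        emit(alert)
    return mapped

body = json.dumps({"alerts": [{"alert_name": "Malware detected", "level": "high",
                               "product": "ExampleEDR",
                               "timestamp": "2024-01-01T00:00:00Z"}]})
results = handle_webhook(body)
```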
You receive a playbook link and a webhook URL to give to your vendor.

Step 2: Configure the vendor webhook
- Copy the webhook URL from the previous step.
- In your vendor platform (for example, a firewall, EDR, or SIEM), open the webhook or API integration settings.
- Paste the URL so the vendor can post alert data to it when events occur.

Testing the webhook (optional): you can post a sample alert to the webhook URL, for example with a REST client or curl. In Signal Triage, open Signal Trace to confirm that the alert was ingested and that the signal data and Turbine schema fields are populated correctly.
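As one way to post a sample alert, the standard-library sketch below builds the JSON POST request; the URL and alert fields are placeholders, so substitute the webhook URL you received in Step 1 before sending:

```python
import json
import urllib.request

def build_webhook_request(url: str, alert: dict) -> urllib.request.Request:
    """Build a POST request that delivers a sample alert as a JSON body."""
    data = json.dumps(alert).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder URL; replace with the webhook URL from Step 1.
req = build_webhook_request(
    "https://example.invalid/webhook/abc123",
    {"alert_name": "Test alert", "level": "low"},
)
# To actually send it (requires network access to your instance):
# urllib.request.urlopen(req)
```

After sending, check Signal Trace in Signal Triage as described above to confirm the alert arrived.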