# Hero AI Native Action
The Hero AI native action allows seamless integration of AI-driven workflows directly into your orchestration processes. Powered by Swimlane LLM, it eliminates the need for complex scripting and manual transformations, offering users a more efficient way to leverage AI capabilities.

## Introduction

The Hero AI native action enables orchestrators to integrate AI-driven workflows directly into playbooks and components. Define prompts in natural language, use field replacement to inject dynamic data, and configure tools that Hero AI calls autonomously to gather information and make decisions.

### Key Capabilities

- **Natural language prompts**: express what you want to accomplish, not how to accomplish it.
- **Dynamic field replacement**: integrate playbook data into prompts using field replacements.
- **Autonomous tool selection**: configure tools and let Hero AI decide when and how to use them.
- **Intelligent reasoning**: Hero AI analyzes context and makes decisions similar to a skilled analyst.
- **Adaptive operations**: workflows adapt to different scenarios based on available context and data.

## Key Interface Elements

### Prompt Tab

- **Prompt input field**: defines the text prompt sent to Hero AI for processing. Use a natural language query or command relevant to the desired Hero AI action. This is a required field; if it is left blank, an error message appears.
- **Add playbook property**: adds dynamic properties from a playbook, enabling Hero AI to reference real-time data during execution. For example, adding the playbook property for security alerts makes Hero AI aware of specific incidents from your system.
- **Contains sensitive data**: marks prompts that contain sensitive information. When enabled, the prompt data is handled according to your organization's data protection policies.

### maxOutputTokens Parameter

The `maxOutputTokens` field sets the maximum number of output tokens Hero AI uses when generating a response. This parameter controls the output token limit, not input tokens. A token ranges from one character to a word. The default value of 1024 provides a balanced level of detail; adjust it based on your needs. The current maximum value is 8192 output tokens.
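Field replacement uses the `{{...}}` placeholder syntax that appears in the examples later in this document. As an illustrative sketch only (the actual rendering engine is internal to Turbine; the helper and property names below are hypothetical stand-ins), a prompt template might be expanded like this before it is sent to Hero AI:

```python
import re

def render_prompt(template: str, properties: dict) -> str:
    """Replace {{key}} placeholders with playbook property values.

    Hypothetical stand-in for Turbine's field-replacement step;
    unknown keys are left intact so gaps are easy to spot.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        return str(properties.get(key, match.group(0)))

    return re.sub(r"\{\{([^}]+)\}\}", substitute, template)

template = (
    "Analyze this security alert and identify all observables. "
    "Alert name: {{alert.name}}, severity: {{alert.severity}}."
)
prompt = render_prompt(template, {
    "alert.name": "Suspicious PowerShell execution",
    "alert.severity": "high",
})
print(prompt)
```

The point of the sketch is that the model only ever sees the rendered text, so the quality of the injected playbook data directly shapes the quality of the response.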
### Test Button and Previous Playbook Run Drop-Down

After entering your prompt, click **Test** to see how Hero AI responds. Select a previous playbook run from the **From previous run** drop-down menu to test the prompt with historical data. The drop-down lists previous runs with timestamps, showing how Hero AI handles various datasets from different runs.

### Error Handling

If a required field is missing or there is an issue with the prompt, error information appears in the test result output section. Review the error details to troubleshoot any issues.

Prompt formatting may be lost if you navigate to the Tools or Outputs tabs without saving. To preserve formatting, save your changes before switching tabs. If formatting is lost, close the action configuration dialog and reopen it to restore the original formatting.

### Tools Tab

The Tools tab configures tools that Hero AI uses to help address your prompt. Hero AI calls these tools at its discretion to gather relevant data and insights to help it make decisions.

The Tools tab displays all components and AI agents available in your tenant. Select from this list which tools to configure for that instance of the native action. Hero AI analyzes your prompt and automatically determines which tools to call based on what your prompt asks for, what each tool's description says it does, and the context available in your playbook.

Tools must have a valid description to help Hero AI understand when and how to use them. Components without descriptions cause errors and fail Hero AI native action execution.

There are no hard limits on how many tools can be configured for each native action, but configure only the tools that are relevant to your specific task. More tools lead to decreased accuracy, so choose only what you need.

It is not recommended to use AI agents as tools for other AI agents; this can exhaust capacity very quickly, especially when multiple agents invoke each other.
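The selection behavior described above is driven by the model's reasoning over tool descriptions, not by keyword rules. Still, a toy sketch (all tool names and the matching heuristic below are hypothetical) illustrates why a clear description is what connects a prompt to a tool, and why a missing description leaves a tool unusable:

```python
def choose_tools(prompt: str, tools: dict[str, str]) -> list[str]:
    """Toy stand-in for Hero AI's tool selection: pick tools whose
    descriptions share several terms with the prompt. The real
    selection is done by the LLM, but it likewise relies on the
    description to decide when a tool is relevant.
    """
    prompt_terms = set(prompt.lower().split())
    selected = []
    for name, description in tools.items():
        if not description:
            # Mirrors the documented requirement: a tool without a
            # description cannot be matched to any task.
            continue
        if len(prompt_terms & set(description.lower().split())) >= 2:
            selected.append(name)
    return selected

tools = {
    "ip_reputation": "Look up the reputation of an ip address",
    "user_context": "Fetch directory context for a user account",
    "undocumented_tool": "",  # no description: never selected
}
print(choose_tools("Assess this ip address for malicious reputation", tools))
```

In the real action the "matching" is semantic rather than lexical, which is why descriptions that state *when* to use the tool matter as much as those that state *what* it does.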
### Adding and Configuring Tools

- **Add a tool**: click the **Add a tool** button to open the tool selection interface.
- **Edit Tools dialog**: the Edit Tools dialog displays all available tools with the following information:
  - **Tools**: the name of the tool with its icon.
  - **Description**: a description of what the tool does (required for effective tool usage).
  - **Interface**: the interface version associated with the tool (if applicable).
- **Select tools**: use the search bar to filter available tools, then select tools by checking the checkbox next to each tool you want to include. You can select multiple tools; the dialog shows "X selected / Y total" at the bottom to track your selections.
- **Apply changes**: click **Apply** to save your tool selections, or **Cancel** to discard changes.
- **Edit tools**: after tools are configured, click **Edit tools** to add additional tools or remove existing ones.

Tools are optional; add tools only when they provide value to your specific use case. Hero AI automatically determines when and how to use the configured tools during prompt execution.

All components and AI agents in your tenant are available as tools in Hero AI native actions; tools can be either regular components or Turbine AI agents. To ensure components work effectively as tools, provide clear description fields that explain what the tool does and when to use it (this is essential for Hero AI to understand when to call the tool), and ensure well-defined input/output schemas so Hero AI understands what data to provide and what to expect.

### Outputs Tab

The Outputs tab allows you to view and configure the output schema for the Hero AI action. The outputs are structured as follows:

- **result**: an object containing the AI-generated response and metadata.
  - **requestId**: the unique identifier for the request.
  - **generatedText**: the text generated by Hero AI in response to your prompt.
  - **finishReason**: the reason the response was completed. `"stop"` means the response completed naturally (the happy path); `"length"` means the response was truncated due to the token limit.
  - **promptTokens**: the
number of tokens consumed in the input prompt.
  - **responseTokens**: the number of tokens consumed in the generated response.
  - **turns**: an array containing information about each turn in the execution, including the model used, token usage per turn, and tool execution details.
  - **configuredTools**: an array showing which tools were configured when the action executed (for historical reference).
- **error**: error information if the action fails.

When referencing outputs in downstream actions, use the full namespace path. For example, to access the generated text, use `result.generatedText` rather than just `generatedText`.

#### Using Outputs in Downstream Actions

```
# Example: using Hero AI output in a conditional action
condition: result.finishReason == "stop" && result.responseTokens < 500

# Example: using generated text in a notification message
message: "AI analysis: {{heroAiAction.result.generatedText}}"
```

## How to Use the Hero AI Action

1. **Open your playbook** in the Turbine canvas and drag the Hero AI action from the Add panel onto your canvas.
2. **Configure the prompt**: write your custom prompt in the Prompt tab. Use **Add playbook property** to insert dynamic data, mark sensitive data if needed, and adjust `maxOutputTokens` (default 1024, maximum 8192).
3. **Configure tools (optional)**: navigate to the Tools tab, click **Add a tool**, select relevant tools from all available components (ensure tools have clear descriptions), and click **Apply**.
4. **Test and validate**: click **Test** to see how Hero AI responds. Optionally select a previous playbook run from the drop-down to test with historical data. Review results, check token usage, verify tool execution, and refine your prompt as needed.

## How Hero AI Native Actions Work

Hero AI native actions orchestrate intelligent security operations through an iterative reasoning process. Understanding how they work helps you design more effective prompts and leverage their full capabilities.

### Core Components

Each Hero AI native action consists of three main components:

- **Prompt and instructions**: what you want Hero AI to accomplish. Your
custom prompt defines the task or question, and field replacements inject dynamic data from your playbook.
- **The AI model**: the intelligence powering the action. Powered by Swimlane LLM, it interprets your prompt and available context, generates actions, decisions, and responses based on patterns and real-world data, and reasons about when and how to use configured tools.
- **The tool toolbox**: a set of components Hero AI uses to execute tasks. All components in your tenant are available as tools. Hero AI analyzes your prompt and determines which tools to call; tools execute automatically when Hero AI determines they are needed, and tool outputs are incorporated into the context for generating the final response.

### Execution Flow

Hero AI native actions execute through multiple *turns*: complete exchanges between the system and the LLM. Each turn includes sending a prompt and receiving a response. Hero AI engages in multiple turns to accomplish complex tasks requiring tool execution and iterative reasoning.

The high-level process is:

1. **Job creation and validation**: the playbook engine creates a Hero AI job and validates inputs (prompt, `maxOutputTokens`, tools, feature flags).
2. **Tool preparation**: if tools are configured, the system retrieves tool definitions for all selected components.
3. **Reasoning and tool execution**: Hero AI analyzes your prompt, determines which tools to call, executes them, and incorporates outputs into context.
4. **Response generation**: Hero AI generates a final response based on your prompt, tool outputs, and playbook context.
5. **Output creation**: the system creates action outputs including generated text, token usage, request ID, and finish reason.

Hero AI breaks down complex tasks into deliberate steps: analyzing available data, weighing options, determining actions, and evaluating results to ensure decisions build on context and drive toward your goal.

### Execution Considerations

**Execution time guarantee**: Hero AI native actions have a guaranteed maximum execution time of 15 minutes. All valid Hero AI jobs (passing other success criteria such as maximum prompt size) that complete within 15 minutes are guaranteed to finish successfully. Jobs taking longer than 15 minutes may succeed or fail; the system does not guarantee strict outcomes for actions exceeding this timeframe.
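For long-running work it can be useful to budget around this guarantee in downstream logic. The sketch below shows a generic deadline-bounded polling pattern; the `get_status` callable is a hypothetical stand-in, not a documented Turbine API:

```python
import time

def wait_for_job(get_status, deadline_seconds: float = 15 * 60,
                 poll_interval: float = 0.01) -> str:
    """Poll a job-status callable until it reports completion or the
    execution-time budget is exhausted. `get_status` is a hypothetical
    stand-in returning 'running', 'succeeded', or 'failed'.
    """
    start = time.monotonic()
    while time.monotonic() - start < deadline_seconds:
        status = get_status()
        if status != "running":
            return status
        time.sleep(poll_interval)
    # Past the guarantee window the outcome is undefined; surface that
    # explicitly rather than waiting forever.
    return "timeout"

# Stubbed status source: the job completes on the third poll.
responses = iter(["running", "running", "succeeded"])
print(wait_for_job(lambda: next(responses), deadline_seconds=5))
```

Treating "past the deadline" as its own state matches the documented behavior: the system does not promise success or failure beyond 15 minutes, so callers should decide their own fallback.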
**Autonomous tool selection**: Hero AI analyzes your prompt and autonomously decides which tools to call and when. You configure the available tools, but Hero AI determines the execution plan based on reasoning about what's needed to accomplish your goal.

**Contextual reasoning**: Hero AI considers available context, tool capabilities, and your prompt when making decisions, similar to how an analyst would approach a problem.

**Token limits**: responses are truncated if they exceed the `maxOutputTokens` limit. Monitor `result.finishReason` to detect truncation.

**Error handling**: if execution fails, error information is available in the `error` output field. Hero AI native actions do not fail if underlying tool calls fail; Hero AI continues processing and incorporates tool-failure information into its reasoning and response.

**Transparency**: every action, tool call, and decision is logged, providing full audit trails for review and compliance.

**Tool execution**: tools are executed via an MCP server that calls the Turbine engine. On-premises deployments run within your cluster, while cloud deployments run within the cloud infrastructure.

## Testing and Validation

Testing your Hero AI native action before deploying it to production is crucial for ensuring reliable results.

### Testing in the Action Dialog

The Hero AI action dialog provides built-in testing capabilities for validating your configuration before deploying to production. These testing features are only available when testing from within the action dialog.

1. **Configure your action**: enter your prompt, set `maxOutputTokens` if needed, configure tools, and add playbook properties for dynamic data.
2. **Select test data (optional)**: use the **From previous run** drop-down to select a previous playbook run (shows up to 20 recent runs
with timestamps). If no run is selected, the test uses default/empty data.
3. **Run the test**: click the **Test** button. The test status indicator shows "Testing" while processing, then "Loaded" when complete or "Error" if it fails.
4. **Review test results**: results appear in the result section below the prompt input, showing:
   - `result.generatedText`: the AI-generated response.
   - `result.requestId`: the unique request identifier.
   - `result.promptTokens` and `result.responseTokens`: token usage across all turns.
   - `result.finishReason`: how the response completed (`"stop"` for natural completion, the happy path; `"length"` if truncated).
   - `result.turns`: an array containing information about each turn in the execution, including the model used, token usage per turn, and tool execution details.
   - `result.configuredTools`: an array showing which tools were configured when the action executed (for historical reference).
5. **Validate results**: check response quality, token usage, tool execution (if tools are configured), and finish reason. If `finishReason` is `"length"`, increase `maxOutputTokens`. Use the **Copy result** button to copy the result JSON.

Testing tips:

- Test with multiple playbook runs to validate different data scenarios.
- Start with simple prompts and gradually add complexity.
- Test edge cases (empty fields, null values) to ensure robust behavior.

### Validating Responses in Production

After deploying your playbook, monitor execution through playbook run logs and details. Validate outputs using conditional actions to check that `result.generatedText` contains expected content and that `result.finishReason` indicates successful completion (`"stop"` for natural completion). Add error handling to check for the `error` field in outputs and implement fallback logic if Hero AI fails. Track execution time and token usage to optimize prompts and `maxOutputTokens` based on actual usage.

## Use Cases and Examples

### Use Case 1: Security Alert Analysis

**Scenario**: automatically analyze security alerts and provide structured summaries.

**Solution**: use the Hero AI native action to analyze alerts and
extract key information.

**Alert analysis prompt**: "Analyze this security alert and identify all observables (IPs, domains, file hashes, user accounts). Provide a structured summary with threat assessment."

**Tools**: threat intelligence lookup components, user context components.

**Result**: fast alert analysis with structured summaries that help analysts quickly understand and prioritize alerts.

### Use Case 2: Phishing Email Triage

**Scenario**: automatically analyze reported phishing emails and communicate with users.

**Solution**: use Hero AI native actions to automate analysis and communication.

**Email analysis prompt**: "Analyze this reported phishing email. Identify phishing indicators, assess threat level, and determine if user communication is needed."

**Tools**: URL analysis component, attachment scanning component, email reputation component.

**User communication prompt (if needed)**: "Generate a user-friendly message explaining the threat assessment for this email. Include what to do if they clicked links or opened attachments."

**Result**: faster initial assessment, automated user communication, and better identification of sophisticated phishing attempts.

### Use Case 3: Threat Intelligence Correlation

**Scenario**: synthesize multiple threat intelligence sources into actionable intelligence.

**Solution**: use the Hero AI native action to analyze and correlate TI results.

**TI correlation prompt**: "Analyze threat intelligence results from multiple providers for these observables: {{observables}}. Correlate findings, identify patterns, resolve conflicts, and provide a consolidated threat assessment with confidence scores."

**Tools**: multiple TI provider components, historical TI component.

**Result**: synthesized intelligence from multiple sources with comprehensive threat reports that save analyst time.

## Best Practices

### Prompt Design

**Express intent, not implementation**: write prompts that describe what you want to accomplish, not how to accomplish it. Let Hero AI reason about the best approach.

- ✅ Good: "Analyze this alert and determine if it requires
investigation."
- ❌ Less effective: "Call the TI tool, then check user history, then compare results."

**Provide context**: include relevant context in your prompt using field replacements; the more context Hero AI has, the better its reasoning. Include alert details, user information, system state, etc., and reference related cases or historical data when relevant.

**Be specific about output format**: if you need structured output, specify the format in your prompt. Example: "Provide analysis in this format: verdict (true positive/false positive), confidence level, key indicators, recommended actions."

### Tool Configuration

**Tool selection**: there are no hard limits on how many tools can be configured for each native action, but configure only the tools necessary for your use case. More tools lead to decreased accuracy and increased response time. Start with two to three essential tools and add more only if they provide distinct value.

**Tool component design**: when creating components to use as tools:

- **Clear descriptions (required)**: write descriptions that clearly explain what the tool does and when to use it. Tools without descriptions may not be used effectively by Hero AI.
- **Well-defined schemas**: ensure clear input/output schemas so Hero AI understands what data to provide and what to expect.
- **Error handling**: implement robust error handling so tools fail gracefully.

### Workflow Design

**Multi-agent workflows**: when building complex workflows, consider breaking tasks into multiple Hero AI actions, each with a specific role:

- **Enrichment actions**: gather and correlate data.
- **Analysis actions**: reason about findings.
- **Decisioning actions**: determine next steps.
- **Communication actions**: handle user interaction.

**Prevent infinite loops**: design prompts and tool configurations to avoid scenarios where Hero AI might call tools repeatedly without progress. Ensure tool components have clear termination conditions.

## Troubleshooting

### Tools Not Appearing in the Selection Dialog

**Issue**: components do not appear in the tool selection dialog.

**Solutions**:

- Ensure the component is saved and
not deleted (deleted components are not available as tools).
- Check that you have permissions to view the component. All components and AI agents in your tenant should appear in the Tools tab.

### Hero AI Not Using Configured Tools

**Issue**: Hero AI does not call tools even though they are configured.

**Solutions**:

- Review tool component descriptions and ensure they clearly indicate when to use the tool.
- Check that your prompt actually requires the functionality provided by the tools.
- Verify tool components are working correctly when called directly.
- Consider refining your prompt to be more explicit about needing tool data.

### Component Failure Handling

**Issue**: tool components fail during Hero AI execution.

**Solutions**:

- Review component execution logs to understand failure reasons.
- Ensure tool components have proper error handling.
- Verify tool components are working correctly when called directly.
- Check that tool components have appropriate timeouts configured.
- Review component dependencies and ensure they are available.

## Observability and Monitoring

### Execution Monitoring

Hero AI native actions provide observability through execution logs. All Hero AI actions are logged with request IDs, token usage (prompt tokens and response tokens), execution time, tool calls made during execution, and finish reasons.

**Grafana dashboards**: LLM calls are logged and available in Grafana dashboards for monitoring token usage trends, execution times, success/failure rates, and tool usage patterns.

**Playbook run details**: review playbook run details to see Hero AI action execution status, outputs generated, errors encountered, tool execution results, and turn-by-turn execution details.

### Operational Health Page

The Operational Health page provides additional visibility:

- **New trigger type**: a new "Hero AI Native Action" trigger type filter allows you to view component runs that are part of Hero AI native actions.
- **Component run filtering**: filter to see just the component runs that are part of Hero AI native action executions.
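Conceptually, this filter narrows a list of component runs to those triggered by Hero AI native actions. A minimal sketch, assuming a hypothetical run-record shape (not the actual Operational Health data model):

```python
def hero_ai_component_runs(runs: list[dict]) -> list[dict]:
    """Keep only component runs whose trigger type marks them as part
    of a Hero AI native action execution (hypothetical record shape).
    """
    return [r for r in runs if r.get("triggerType") == "Hero AI Native Action"]

runs = [
    {"id": "run-1", "triggerType": "Hero AI Native Action"},
    {"id": "run-2", "triggerType": "Playbook"},
    {"id": "run-3", "triggerType": "Hero AI Native Action"},
]
print([r["id"] for r in hero_ai_component_runs(runs)])
```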
### Sensitive Data Handling

If components have sensitive inputs marked (via **Mark sensitive** in the data inputs configuration), those values are redacted in the UI for security.

### Monitoring Best Practices

Monitor token consumption over time, track `result.finishReason` to identify truncated responses, analyze which tools Hero AI calls most frequently, track error rates, and monitor execution times to optimize performance and costs.
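These practices can be automated against exported run data. A minimal sketch, assuming a hypothetical exported-log record shape with the `promptTokens`, `responseTokens`, and `finishReason` fields described above (adapt it to whatever your log export actually contains):

```python
from collections import Counter

def summarize_runs(runs: list[dict]) -> dict:
    """Aggregate token usage and finish reasons across Hero AI runs.

    `runs` uses a hypothetical exported-log shape; a nonzero
    'truncated' count signals prompts whose maxOutputTokens may need
    to be raised.
    """
    finish_reasons = Counter(r["finishReason"] for r in runs)
    total_tokens = sum(r["promptTokens"] + r["responseTokens"] for r in runs)
    return {
        "runs": len(runs),
        "totalTokens": total_tokens,
        "truncated": finish_reasons.get("length", 0),
        "finishReasons": dict(finish_reasons),
    }

runs = [
    {"promptTokens": 300, "responseTokens": 450, "finishReason": "stop"},
    {"promptTokens": 280, "responseTokens": 1024, "finishReason": "length"},
]
print(summarize_runs(runs))
```

Tracking these aggregates over time gives early warning of truncation ("length" finishes) and of token-cost drift before either shows up as a production issue.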