Solutions and Applications
AI SOC Solution
Operational Best Practices
Follow these practices to maximize AI SOC effectiveness and maintain efficient security operations.

Signal Triage and Routing

- Start simple with routing rules: Create basic routing rules first, then refine them based on actual signal volume and patterns. Complex rules can be difficult to maintain and debug.
- Order routing rules strategically: Place high-priority, specific rules before general catch-all rules. Rules are evaluated in order, so the most important matches should come first.
- Review routing rule performance regularly: Use the routing rule management dashboard to monitor which rules match signals most frequently, and adjust as needed.
- Test routing rules before production: Validate rules with sample signals to ensure they route correctly and do not create unintended matches.
- Document rule logic and purpose: Add clear descriptions to routing rules explaining when and why they should match signals.

Threat Intelligence and Enrichment

- Validate enrichment providers regularly: Check provider credentials monthly and before they expire to avoid stale verdicts and enrichment failures.
- Configure multiple TI providers for redundancy: Use multiple providers for the same observable types to ensure enrichment completes even if one provider fails.
- Monitor enrichment completion rates: Set up alerts for signals with pending enrichments that exceed expected completion times.
- Review provider-specific limitations: Document which providers support which observable types (IPs, domains, hashes, etc.) to set proper expectations.
- Check enrichment status before generating plans: Ensure threat intelligence enrichment has completed before generating investigation plans, for more accurate AI analysis.

Knowledge Base Management

- Use knowledge base articles to standardize investigation steps: Create KB articles for common attack patterns, false-positive scenarios, and standard operating procedures.
- Keep KB articles current and actionable: Regularly review and update KB articles based on investigation outcomes and new threat intelligence.
- Link KB articles to signals proactively: Manually link relevant KB articles to signals during triage to provide immediate context to analysts.
- Use matching values for auto-linking: Configure matching values in KB articles to automatically link articles to related signals based on observable patterns.
- Document false positives in KB articles: Create KB articles for known false-positive patterns to help AI SOC learn and reduce future false positives.
- Maintain KB article quality: Ensure KB articles contain clear guidance, not just descriptions. Include specific investigation steps and decision criteria.

AI Plan Generation and Analysis

- Use AI plans as a starting point, not a replacement: Review and validate AI-generated plans before executing them; AI plans are suggestions based on available data.
- Document deviations from AI plans: When you skip or modify AI-suggested steps, add notes explaining why. This helps improve future plan generation.
- Re-run analysis after major evidence changes: When new observables are discovered or threat intelligence enrichment completes, regenerate the plan or re-run verdict analysis to keep verdicts current.
- Review plan steps before running: Use the review action to inspect steps before execution, especially steps that modify systems or take remediation actions.
- Add custom steps when needed: Use "add additional step" to include organization-specific investigation steps that aren't in the AI-generated plan.
- Wait for enrichment before generating plans: Allow threat intelligence enrichment to complete before generating plans so the AI has all available context.
- Normalize data for AI context: Store data from third-party services as observables (IPs, domains, hashes, URLs) or structured custom fields rather than unstructured text. Observables are automatically extracted and enriched, making them available for AI analysis and correlation.
- Document playbook execution results: When playbooks execute investigation steps, document important results in investigation comments or signal summaries so they can be referenced in subsequent AI analysis and plan generation.

Case Management and Escalation

- Escalate early for critical assets or users: Don't wait for a complete investigation when signals impact critical systems, executive accounts, or sensitive data.
- Review correlations before starting a new case: Check the correlation panel to see whether similar signals exist or a case already exists for related activity.
- Attach evidence when escalating: Include relevant threat intelligence, observables, and investigation notes when escalating signals to cases.
- Set appropriate case priority: Use case priority to reflect business impact, not just technical severity.
- Close cases with clear resolution notes: Document the root cause, remediation steps taken, and any follow-up actions required.

Playbook Development and Maintenance

- Test playbooks with sample data: Always test new or modified playbooks with representative sample data before enabling them in production.
- Add error handling to playbooks: Include on-failure paths for external integrations that can fail (API timeouts, credential issues, etc.).
- Keep playbook flows focused: Avoid mixing unrelated logic in a single flow; create separate playbooks for distinct use cases.
- Document playbook triggers and dependencies: Clearly document what triggers each playbook and what components or assets it requires.
- Monitor playbook execution: Regularly review playbook execution logs to identify failures, performance issues, or unexpected behavior.
- Version-control playbook changes: Document changes to production playbooks, and test in a development environment first when possible.
- Design playbooks for AI integration: When creating playbooks that will be used in AI-generated plans, design them to receive signal or case tracking IDs as input rather than individual properties. This improves reliability and makes playbooks easier for the AI to invoke correctly.

Signal Investigation Workflow

- Claim signals promptly: Claim signals you're investigating to prevent duplicate work and ensure clear ownership.
- Update signal status regularly: Keep signal status current (new → in progress → resolved) to maintain accurate dashboards and reports.
- Set manual verdicts on resolved signals: Always set a manual verdict when resolving signals to improve AI learning and historical analysis.
- Use investigation comments for context: Add investigation comments to document your reasoning, especially when verdicts differ from AI suggestions. Document changes and findings as notes throughout the investigation so the AI can use this information when assessing signal data.
- Review similar signals: Check the correlation panel for similar signals that may have already been investigated, to avoid duplicate work.
- Complete investigation summaries: Add investigation summaries to resolved signals to capture key findings and lessons learned.
- Correlate related alerts: When new alerts are related to existing signals or cases, use correlation features to link them. This helps the AI understand relationships and context when analyzing signals and generating plans.

Performance and Monitoring

- Monitor signal volume trends: Use dashboards to track signal volume, verdict distribution, and resolution times to identify capacity issues.
- Set up alerts for critical conditions: Configure alerts for signals with malicious verdicts, critical severity, or signals that exceed SLA thresholds.
- Review oldest signals regularly: Use the "signals oldest" report to identify signals that may be stuck or forgotten.
- Track false-positive rates: Monitor false-positive rates by verdict type to identify areas where tuning or additional context is needed.
- Measure enrichment completion times: Track how long threat intelligence enrichment takes, to identify slow providers or capacity issues.
- Review routing rule effectiveness: Regularly assess which routing rules are matching signals, and adjust rules that aren't performing as expected.

Security and Compliance

- Follow least privilege for RBAC: Configure role-based access control to grant only necessary permissions, and regularly audit role assignments.
- Document investigation decisions: Maintain clear audit trails by documenting investigation decisions, especially for compliance-sensitive signals.
- Protect sensitive investigation data: Be mindful of sensitive information in investigation comments and ensure proper access controls.
- Rotate credentials regularly: Establish a schedule for rotating API credentials and the credentials used by playbooks and integrations.
- Review and update triage rules: Periodically review triage rules to ensure they align with current security policies and the threat landscape.

Continuous Improvement

- Conduct regular retrospectives: Review investigation outcomes, false positives, and missed detections to identify improvement opportunities.
- Share learnings across the team: Use knowledge base articles and investigation summaries to share findings and best practices.
- Refine AI plans based on outcomes: When AI plans consistently miss important steps, create KB articles or update investigation procedures to fill the gaps.
- Measure and track metrics: Establish KPIs for signal resolution time, false-positive rates, and analyst efficiency to measure improvement over time.
- Stay current with threat intelligence: Regularly review threat intelligence feeds and update enrichment providers to ensure you're detecting current threats.
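The first-match, ordered evaluation described under signal triage and routing can be sketched as follows. This is a minimal illustration, not the product's actual rule engine; the field names (`severity`, `user_tier`) and queue names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RoutingRule:
    name: str
    description: str                    # document when and why the rule should match
    matches: Callable[[dict], bool]
    queue: str

def route_signal(signal: dict, rules: list[RoutingRule]) -> Optional[str]:
    """Return the queue of the first matching rule; rules are evaluated in order."""
    for rule in rules:
        if rule.matches(signal):
            return rule.queue
    return None

# Specific, high-priority rule first; general catch-all last.
rules = [
    RoutingRule(
        name="exec-account-critical",
        description="Critical signals on executive accounts go straight to Tier 2",
        matches=lambda s: s.get("severity") == "critical" and s.get("user_tier") == "executive",
        queue="tier2-urgent",
    ),
    RoutingRule(
        name="default",
        description="Everything else goes to the general triage queue",
        matches=lambda s: True,
        queue="triage",
    ),
]
```

Reversing the list would send every signal to `triage`, which is why specific rules must precede catch-alls.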
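Provider redundancy for enrichment can be sketched as a simple fallback chain: try each provider in turn and only report an unknown verdict if all of them fail. The provider functions below are stand-ins, not a real TI integration API.

```python
def enrich(observable: str, providers: list) -> dict:
    """Try each TI provider in order; fall back to the next on failure."""
    errors = {}
    for provider in providers:
        try:
            return provider(observable)
        except Exception as exc:  # provider outage, timeout, expired credentials, etc.
            errors[getattr(provider, "__name__", "provider")] = str(exc)
    # No provider succeeded; surface the errors so the pending enrichment is visible.
    return {"verdict": "unknown", "errors": errors}

# Hypothetical stand-in providers for illustration.
def primary(observable):
    raise TimeoutError("provider timeout")

def secondary(observable):
    return {"verdict": "malicious", "source": "secondary"}
```

Here `enrich("203.0.113.7", [primary, secondary])` still returns a verdict even though the primary provider timed out.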
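"Wait for enrichment before generating plans" amounts to polling with a deadline rather than blocking forever. A minimal sketch, assuming a caller-supplied `pending_count` callable that reports how many enrichments are still outstanding:

```python
import time

def wait_for_enrichment(pending_count, timeout_s=300.0, poll_s=5.0) -> bool:
    """Poll until no enrichments are pending; give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if pending_count() == 0:
            return True          # safe to generate the investigation plan
        time.sleep(poll_s)
    return False                 # timed out; flag the signal rather than block forever

# Simulated counter for illustration: two enrichments finish over time.
_counts = iter([2, 1, 0])
def _fake_pending():
    return next(_counts)
```

A `False` return should trigger follow-up (e.g. an alert for stuck enrichments) instead of silently generating a plan from incomplete context.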
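The "normalize data for AI context" guidance, turning unstructured third-party text into typed observables, can be sketched with a few regular expressions. These patterns are simplified for illustration (the IPv4 pattern does not validate octet ranges, and real extractors cover far more types).

```python
import re

# Simplified patterns for a few observable types.
PATTERNS = {
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "url": re.compile(r"https?://[^\s\"']+"),
}

def extract_observables(text: str) -> dict:
    """Return deduplicated, sorted observables by type, ready for enrichment."""
    return {kind: sorted(set(p.findall(text))) for kind, p in PATTERNS.items()}
```

For example, `extract_observables("callback to http://evil.example/p from 10.0.0.5")` yields one URL and one IP, each of which can then be enriched and correlated automatically.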
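"Design playbooks for AI integration" can be illustrated by contrast: instead of a playbook that demands many individual properties, give it a single tracking ID and let it fetch what it needs. The function and its injected helpers below are hypothetical, not the product's playbook API.

```python
def isolate_host_playbook(signal_id: str, fetch_signal, isolate_host) -> dict:
    """Playbook entry point keyed on a stable tracking ID.

    fetch_signal and isolate_host are injected integration callables
    (hypothetical here); the AI only needs to supply signal_id.
    """
    signal = fetch_signal(signal_id)       # look up properties server-side
    host = signal["host"]
    isolated = isolate_host(host)
    return {"signal_id": signal_id, "host": host, "isolated": isolated}

# Stand-in integrations for illustration.
def fetch_signal(signal_id):
    return {"host": "wks-042"}

def isolate_host(host):
    return True
```

A single-ID interface is harder for an AI-generated plan to call incorrectly than one requiring five loosely documented parameters.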
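The signal lifecycle (new → in progress → resolved) and the rule that resolved signals need a manual verdict can be sketched as a small state machine. The status names and dict shape are assumptions for illustration.

```python
# Allowed status transitions for a signal.
ALLOWED = {
    "new": {"in_progress"},
    "in_progress": {"resolved"},
    "resolved": set(),
}

def transition(signal: dict, new_status: str, verdict=None) -> dict:
    """Return an updated signal, enforcing lifecycle order and verdicts on resolve."""
    if new_status not in ALLOWED[signal["status"]]:
        raise ValueError(f"cannot move from {signal['status']} to {new_status}")
    if new_status == "resolved" and verdict is None:
        raise ValueError("set a manual verdict when resolving a signal")
    return {**signal, "status": new_status, "verdict": verdict or signal.get("verdict")}
```

Enforcing the verdict at resolve time is what feeds AI learning and historical analysis with labeled outcomes.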
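An "oldest signals" style report for spotting stuck or forgotten work can be sketched as a filter over open signals past an SLA threshold, sorted oldest first. The field names and the 24-hour default are assumptions, not product defaults.

```python
from datetime import datetime, timedelta, timezone

def overdue_signals(signals, sla=timedelta(hours=24), now=None):
    """Return open signals older than the SLA, oldest first."""
    now = now or datetime.now(timezone.utc)
    open_states = {"new", "in_progress"}
    return sorted(
        (s for s in signals if s["status"] in open_states and now - s["created"] > sla),
        key=lambda s: s["created"],   # oldest (most overdue) first
    )
```

Running this on a schedule and alerting on a non-empty result is one way to implement the SLA alerts mentioned above.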