# Google Gemini
The Google Gemini connector allows users to interact with Google's AI models, sending prompts and receiving AI-generated responses to enhance decision making and operational efficiency.

Google Gemini is an advanced AI-driven platform that offers powerful text-generation capabilities. The Google Gemini connector for Swimlane Turbine allows users to leverage this AI technology to automate text generation within their security workflows. By integrating with Google Gemini, users can send prompts and receive AI-generated responses, enhancing decision making and response strategies. This integration simplifies complex tasks, such as generating narratives for incident reports or crafting communications based on security event data, directly within the Swimlane Turbine platform.

## Prerequisites

To effectively utilize the Google Gemini connector within the Swimlane platform, ensure you have the following:

- **API key configuration** for Google Gemini AI API access:
  - **API key:** your unique identifier to authenticate requests to Google Gemini
  - **Model:** the specific AI model you wish to use for generating responses

### Getting an API key

1. Go to https://aistudio.google.com/
2. Navigate to the **API Keys** page from the dashboard.
3. Click **Create API Key** and select your project.
4. Copy the generated API key for use in the connector.

## Capabilities

This connector provides the following capabilities:

- **Text generation:** generate human-like text responses to prompts
- **Question answering:** ask questions and receive comprehensive answers
- **Content analysis:** analyze and summarize text content
- **Creative writing:** generate creative content, stories, and explanations
- **Code generation:** generate and explain code snippets
- **Multi-model support:** access to multiple Gemini model variants with different capabilities

## Limitations

- **Rate limits:** subject to Google's API rate limits and quotas
- **Model availability:** some models may have regional availability restrictions
- **Parameter support:** not all models support all generation parameters (e.g., top-k)
- **Token limits:**
Responses are limited by the `max_tokens` parameter (default 1024, max 8192).
- **Content safety:** all responses are subject to Google's safety filters

## Asset Setup

### Required configuration

- **Base URL:** `https://generativelanguage.googleapis.com` (default)
- **API key:** your Google Gemini API key from Google AI Studio
- **Model:** select from the supported Gemini models:
  - `gemini-2.5-pro`: most capable model for complex tasks
  - `gemini-2.5-flash`: fast and efficient for most use cases
  - `gemini-2.5-flash-lite`: lightweight version with reduced capabilities
  - `gemini-2.0-flash`: previous-generation fast model
  - `gemini-2.0-flash-lite`: previous-generation lightweight model

### Optional configuration

- **Max tokens:** maximum response length (1–8192, default 1024)
- **Temperature:** response creativity (0.0–1.0, default 0.7)
- **Top-p:** nucleus sampling (0.0–1.0, default 0.95)
- **Top-k:** top-k sampling (1–100, default 64; not supported by all models)
- **Thinking budget:** thinking budget for Gemini 2.5 models (0–1000, default 0; 0 disables thinking)

### Model parameter support

| Model                   | Max tokens | Temperature | Top-p | Top-k | Thinking |
|-------------------------|------------|-------------|-------|-------|----------|
| `gemini-2.5-pro`        | ✅         | ✅          | ✅    | ✅    | ✅       |
| `gemini-2.5-flash`      | ✅         | ✅          | ✅    | ✅    | ✅       |
| `gemini-2.5-flash-lite` | ✅         | ✅          | ✅    | ❌    | ✅       |
| `gemini-2.0-flash`      | ✅         | ✅          | ✅    | ✅    | ❌       |
| `gemini-2.0-flash-lite` | ✅         | ✅          | ✅    | ❌    | ❌       |

## Tasks Setup

### Ask Gemini action

The connector provides a single action called **Ask Gemini**.

**Input:**

- **prompt** (string, required): the question or prompt to send to Gemini

**Output:**

- **response** (string): the AI-generated response
- **model_used** (string): the model that generated the response
- **tokens_used** (integer): number of tokens consumed
- **finish_reason** (string): reason for generation completion
- **safety_ratings** (array): content safety assessments

### Example usage

```yaml
## Example playbook action
action: ask_gemini
parameters:
  prompt: "explain the concept of machine learning in simple terms"
```

## Notes

- **API key security:** keep your API key secure and never expose it in client-side code.
- **Cost management:** monitor your usage through Google AI Studio to manage costs.
- **Error handling:** the connector includes comprehensive error handling for
common API issues.
- **Model selection:** choose the appropriate model based on your needs: Pro for complex tasks, Flash for speed.
- **Parameter tuning:** experiment with temperature and other parameters to achieve the desired response characteristics.

## Reference URLs

- https://ai.google.dev/gemini-api/docs/text-generation
- https://ai.google.dev/api/generate-content#v1beta.GenerationConfig
- https://ai.google.dev/gemini-api/docs/api-key
- https://aistudio.google.com/

## Configurations

### Google Gemini AI configuration

Configuration for the Google Gemini AI API with model selection and generation parameters.

#### Configuration parameters

| Parameter     | Description                                                              | Type    | Required |
|---------------|--------------------------------------------------------------------------|---------|----------|
| `apiKey`      | API key for Google Gemini authentication                                 | string  | required |
| `model`       | Gemini model to use for text generation                                  | string  | required |
| `max_tokens`  | Maximum number of tokens in the response (default 1024)                  | integer | optional |
| `temperature` | Controls randomness in responses (0.0–2.0) (default 0.7)                 | number  | optional |
| `top_p`       | Nucleus sampling parameter (0.0–1.0) (default 0.95)                      | number  | optional |
| `top_k`       | Top-k sampling parameter (default 64); not supported by Flash-Lite models | number  | optional |

## Actions

### Ask Gemini

Send a prompt to Google Gemini and receive an AI-generated response.

**Endpoint method:** GET

#### Input

| Name     | Type   | Required | Description                              |
|----------|--------|----------|------------------------------------------|
| `prompt` | string | required | The prompt or question to send to Gemini |

Input example: `{"prompt": "string"}`

#### Output

| Parameter       | Type    | Description                                      |
|-----------------|---------|--------------------------------------------------|
| `response`      | string  | The AI-generated response from Gemini            |
| `model_used`    | string  | The model that was used to generate the response |
| `tokens_used`   | integer | Number of tokens used in the response            |
| `finish_reason` | string  | Reason why the generation finished               |

Output example: `{"response": "string", "model_used": "string", "tokens_used": 123, "finish_reason": "string"}`

#### Response headers

| Header         | Description                                          | Example                       |
|----------------|------------------------------------------------------|-------------------------------|
| `Content-Type` | The media type of the resource                       | application/json              |
| `Date`         | The date and time at which the message was originated | Thu, 01 Jan 2024 00:00:00 GMT |
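To make the asset fields concrete, the sketch below builds the URL and JSON body a `generateContent` call against the base URL would use, mapping the optional configuration (`max_tokens`, `temperature`, `top_p`, `top_k`) onto the API's `generationConfig`. This is an illustrative helper, not the connector's internal implementation; the function name `build_gemini_request` is an assumption made here for the example.

```python
import json

# Default base URL from the asset configuration.
BASE_URL = "https://generativelanguage.googleapis.com"


def build_gemini_request(model, prompt, max_tokens=1024, temperature=0.7,
                         top_p=0.95, top_k=None):
    """Build the URL and JSON body for a generateContent call.

    top_k is omitted when None, since the flash-lite model variants
    do not support it (see the model parameter support table above).
    """
    url = f"{BASE_URL}/v1beta/models/{model}:generateContent"
    generation_config = {
        "maxOutputTokens": max_tokens,
        "temperature": temperature,
        "topP": top_p,
    }
    if top_k is not None:
        generation_config["topK"] = top_k
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": generation_config,
    }
    return url, json.dumps(body)


url, body = build_gemini_request(
    "gemini-2.5-flash",
    "explain the concept of machine learning in simple terms",
)
```

In a real call, the body would be POSTed to the URL with the asset's API key supplied via the `x-goog-api-key` header (or the `key` query parameter), and the generated text from the response would surface as the action's `response` output.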