Mscc.GenerativeAI Gets or sets the name of the model to use. Returns the name of the model. Name of the model. Sets the API key to use for the request. The value can only be set or modified before the first request is made. See also: "Specify API key in HTTP header" and "Using an API key with REST". Sets the access token to use for the request. Sets the project ID to use for the request. The value can only be set or modified before the first request is made. Returns the region to use for the request. Gets or sets the timespan to wait before the request times out. Throws an exception if the functionality is not supported by the combination of settings. Optional. Logger instance used for logging. Optional. Logger instance used for logging. Parses the URL template and replaces the placeholders with current values. Given the two API endpoints for Google AI Gemini and Vertex AI Gemini, this method uses regular expressions to replace placeholders in a URL template with actual values. API endpoint to parse. Method part of the URL to inject. Returns the serialized JSON string of the request payload. Returns the deserialized object from the JSON response. Type to deserialize the response into. Response from an API call in JSON format. An instance of type T. Gets default options for JSON serialization. Default options for JSON serialization. Gets credentials from the specified file. This would usually be the secret.json file from Google Cloud Platform. File with credentials to read. Credentials read from the file. This method uses the gcloud command-line tool to retrieve an access token from the Application Default Credentials (ADC). It is specific to Google Cloud Platform and allows easy authentication with the Gemini API on Google Cloud. Reference: https://cloud.google.com/docs/authentication The access token. Runs an external application as a process in the underlying operating system, if possible. The command or application to run. Optional arguments given to the application to run. Output from the application.
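The placeholder substitution described for ParseUrl can be sketched as follows. The library itself is C#; this is a language-neutral illustration in Python, and the endpoint templates below are assumptions for demonstration, not the library's literal constants.

```python
import re

# Hypothetical URL templates resembling the Google AI and Vertex AI endpoints.
GOOGLE_AI_TEMPLATE = (
    "https://generativelanguage.googleapis.com/{version}/models/{model}:{method}"
)

def parse_url(template: str, values: dict) -> str:
    """Replace each {placeholder} in the template with its current value."""
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

url = parse_url(
    GOOGLE_AI_TEMPLATE,
    {"version": "v1beta", "model": "gemini-1.5-pro", "method": "generateContent"},
)
```

The same regular-expression pass works for the Vertex AI template, which additionally carries `{projectId}` and `{region}` placeholders.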
Formatting string for logging purposes. The command or application to run. Optional arguments given to the application to run. Formatted string containing parameter values. Content that has been preprocessed and can be used in subsequent requests to GenerativeService. Cached content can only be used with the model it was created for. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging. Creates a CachedContent resource. The cached content resource to create. A cancellation token that can be used by other objects or threads to receive notice of cancellation. The cached content resource created. Thrown when the cached content is null. Creates a CachedContent resource. The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model. Required. The name of the `Model` to use for cached content. Format: `models/{model}` Optional. The user-generated meaningful display name of the cached content. Maximum 128 Unicode characters. Optional. Input only. Developer-set system instruction. Currently, text only. Optional. Input only. The content to cache. Optional. A chat history to initialize the session with. Optional. Input only. New TTL for this resource, input only. A duration in seconds with up to nine fractional digits, ending with 's'. Optional. Timestamp in UTC of when this resource is considered expired. This is always provided on output, regardless of what was sent on input. A cancellation token that can be used by other objects or threads to receive notice of cancellation. The created cached content resource. Thrown when the model is null or empty. Lists CachedContents resources. Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000. Optional. 
A page token, received from a previous `ListCachedContents` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListCachedContents` must match the call that provided the page token. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Reads a CachedContent resource. Required. The resource name referring to the content cache entry. Format: `cachedContents/{id}` A cancellation token that can be used by other objects or threads to receive notice of cancellation. The cached content resource. Thrown when the name is null or empty. Updates a CachedContent resource (only expiration is updatable). The cached content resource to update. Optional. Input only. New TTL for this resource, input only. A duration in seconds with up to nine fractional digits, ending with 's'. Optional. The list of fields to update. A cancellation token that can be used by other objects or threads to receive notice of cancellation. The updated cached content resource. Thrown when the cached content is null. Thrown when the name is null or empty. Deletes a CachedContent resource. Required. The resource name referring to the content cache entry. Format: `cachedContents/{id}` A cancellation token that can be used by other objects or threads to receive notice of cancellation. If successful, the response body is empty. Thrown when the name is null or empty. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging. Generates a set of responses from the model given a chat history input. Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the request is null. Helper class to provide API versions. Helper class to provide model names. Ref: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning#latest-version Imagen 3 Generation is a Pre-GA. Allowlisting required. 
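The TTL format required by the cached-content endpoints ("a duration in seconds with up to nine fractional digits, ending with 's'") can be produced with a small helper. This is an illustrative sketch in Python, not library code:

```python
def to_ttl(seconds: float) -> str:
    """Format a duration as the API's TTL string: seconds with up to
    nine fractional digits, suffixed with 's' (e.g. '3600s', '300.5s')."""
    text = f"{seconds:.9f}".rstrip("0").rstrip(".")
    return f"{text}s"
```

For example, an hour-long cache lifetime becomes `to_ttl(3600)`, yielding the string `"3600s"` expected by the `ttl` field.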
Imagen 3 Generation is a Pre-GA. Allowlisting required. Imagen 3 Generation is a Pre-GA. Allowlisting required. Possible roles. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging Creates an empty `Corpus`. Gets information about a specific `Corpus`. Lists all `Corpora` owned by the user. Deletes a `Corpus`. Updates a `Corpus`. Performs semantic search over a `Corpus`. Generates embeddings from the model given an input. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging Generates embeddings from the model given an input. Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the is . Adapter size for tuning job. Unspecified adapter size. Adapter size 1. Adapter size 4. Adapter size 8. Adapter size 16. Style for grounded answers. Unspecified answer style. Succinct but abstract style. Very brief and extractive style. Verbose style including extra details. The response may be formatted as a sentence, paragraph, multiple paragraphs, or bullet points, etc. A list of reasons why content may have been blocked. BlockedReasonUnspecified means unspecified blocked reason. Safety means candidates blocked due to safety. You can inspect s to understand which safety category blocked it. Prompt was blocked due to unknown reasons. Prompt was blocked due to the terms which are included from the terminology blocklist. Prompt was blocked due to prohibited content. Candidates blocked due to unsafe image generation content. The mode of the predictor to be used in dynamic retrieval. Always trigger retrieval. Run retrieval only when system decides it is necessary. Source of the File. Used if source is not specified. Indicates the file is uploaded by the user. Indicates the file is generated by Google. 
The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens. Unspecified means the finish reason is unspecified. Stop means natural stop point of the model or provided stop sequence. MaxTokens means the maximum number of tokens as specified in the request was reached. Safety means the token generation was stopped as the response was flagged for safety reasons. NOTE: When streaming the Candidate.Content will be empty if content filters blocked the output. Recitation means the token generation was stopped as the response was flagged for unauthorized citations. Other means all other reasons that stopped the token generation The token generation was stopped as the response was flagged for the terms which are included from the terminology blocklist. The token generation was stopped as the response was flagged for the prohibited contents. The token generation was stopped as the response was flagged for Sensitive Personally Identifiable Information (SPII) contents. The function call generated by the model is invalid. The response candidate content was flagged for using an unsupported language. Token generation stopped because generated images contain safety violations. Mode of function calling to define the execution behavior for function calling. Unspecified function calling mode. This value should not be used. Default model behavior, model decides to predict either a function call or a natural language response. Model is constrained to always predicting a function call only. If "allowed_function_names" are set, the predicted function call will be limited to any one of "allowed_function_names", else the predicted function call will be any one of the provided "function_declarations". Model will not predict any function call. Model behavior is same as when not passing any function declarations. Probability vs severity. The harm block method is unspecified. The harm block method uses both probability and severity scores. 
The harm block method uses the probability score. Block at and beyond a specified harm probability. Threshold is unspecified. Content with NEGLIGIBLE will be allowed. Content with NEGLIGIBLE and LOW will be allowed. Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed. All content will be allowed. Turn off the safety filter. The category of a rating. Ref: https://ai.google.dev/api/rest/v1beta/HarmCategory HarmCategoryUnspecified means the harm category is unspecified. HarmCategoryHateSpeech means the harm category is hate speech. HarmCategoryDangerousContent means the harm category is dangerous content. HarmCategoryHarassment means the harm category is harassment. HarmCategorySexuallyExplicit means the harm category is sexually explicit content. Content that may be used to harm civic integrity. Negative or harmful comments targeting identity and/or protected attribute. Content that is rude, disrespectful, or profane. Describes scenarios depicting violence against an individual or group, or general descriptions of gore. Contains references to sexual acts or other lewd content. Promotes unchecked medical advice. Dangerous content that promotes, facilitates, or encourages harmful acts. The probability that a piece of content is harmful. Unspecified means harm probability unspecified. Negligible means negligible level of harm. Low means low level of harm. Medium means medium level of harm. High means high level of harm. Harm severity levels. Unspecified means harm probability unspecified. Negligible means negligible level of harm. Low means low level of harm. Medium means medium level of harm. High means high level of harm. Unspecified language. This value should not be used. Python >= 3.10, with numpy and simpy available. The media resolution Media resolution has not been set. Media resolution set to low (64 tokens). Media resolution set to medium (256 tokens). Media resolution set to high (zoomed reframing with 256 tokens). 
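The "block at and beyond a specified harm probability" semantics above can be made concrete with a small lookup. The enum value names below follow the public Gemini API safety settings; treat the helper itself as an illustrative sketch, not the library's implementation:

```python
# Probability levels in increasing order of harm, per the enum above.
LEVELS = ["NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"]

# Threshold -> the lowest probability level that gets blocked.
THRESHOLD_BLOCKS_FROM = {
    "BLOCK_LOW_AND_ABOVE": "LOW",        # only NEGLIGIBLE allowed
    "BLOCK_MEDIUM_AND_ABOVE": "MEDIUM",  # NEGLIGIBLE and LOW allowed
    "BLOCK_ONLY_HIGH": "HIGH",           # NEGLIGIBLE, LOW and MEDIUM allowed
    "BLOCK_NONE": None,                  # all content allowed
}

def is_allowed(probability: str, threshold: str) -> bool:
    """True if content at this harm probability passes the threshold."""
    block_from = THRESHOLD_BLOCKS_FROM[threshold]
    if block_from is None:
        return True
    return LEVELS.index(probability) < LEVELS.index(block_from)
```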
The modality associated with a token count. Unspecified modality. Plain text. Image. Video. Audio. Document, e.g. PDF. Defines the valid operators that can be applied to a key-value pair. The default value. This value is unused. Supported by numeric. Supported by numeric. Supported by numeric and string. Supported by numeric. Supported by numeric. Supported by numeric and string. Supported by string only when value type for the given key has a stringListValue. Supported by string only when value type for the given key has a stringListValue. Outcome of the code execution. Unspecified status. This value should not be used. Code execution completed successfully. Code execution finished but with a failure. `stderr` should contain the reason. Code execution ran for too long, and was cancelled. There may or may not be a partial output present. Type contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types Unspecified means not specified, should not be used. String means openAPI string type Number means openAPI number type Integer means openAPI integer type Boolean means openAPI boolean type Array means openAPI array type Object means openAPI object type Describes what the field reference contains. Reference contains a GFS path or a local path. Reference points to a blobstore object. This could be either a v1 blob_ref or a v2 blobstore2_info. Clients should check blobstore2_info first, since v1 is being deprecated. Data is included into this proto buffer. Data should be accessed from the current service using the operation GetMedia. The content for this media object is stored across multiple partial media objects under the composite_media field. Reference points to a bigstore object. Indicates the data is stored in diff_version_response. Indicates the data is stored in diff_checksums_response. Indicates the data is stored in diff_download_response. Indicates the data is stored in diff_upload_request. 
Indicates the data is stored in diff_upload_response. Indicates the data is stored in cosmo_binary_reference. Informs Scotty to generate a response payload with the size specified in the length field. The contents of the payload are generated by Scotty and are undefined. This is useful for testing download speeds between the user and Scotty without involving a real payload source. Note: range is not supported when using arbitrary_bytes. The requested modalities of the response. Default value. Indicates the model should return text. Indicates the model should return images. Indicates the model should return audio. The state of the tuned model. The default value. This value is unused. The model is being created. The model is ready to be used. The model failed to be created. Output only. Current state of the Chunk. The default value. This value is used if the state is omitted. Chunk is being processed (embedding and vector storage). Chunk is processed and available for querying. Chunk failed processing. States for the lifecycle of a File. The default value. This value is used if the state is omitted. File is being processed and cannot be used for inference yet. File is processed and available for inference. File failed processing. The state of the tuned model. The default value. This value is used if the state is omitted. Being generated. Generated and is ready for download. Failed to generate the GeneratedFile. The state of the tuning job. The default value. This value is unused. The tuning job is running. The tuning job is pending. The tuning job failed. The tuning job has been cancelled. Type of task for which the embedding will be used. Ref: https://ai.google.dev/api/rest/v1beta/TaskType Unset value, which will default to one of the other enum values. Specifies the given text is a query in a search/retrieval setting. Specifies the given text is a document from the corpus being searched. Specifies the given text will be used for STS. 
Specifies that the given text will be classified. Specifies that the embeddings will be used for clustering. Specifies that the given text will be used for question answering. Specifies that the given text will be used for fact verification. Initializes a new instance of the class. Initializes a new instance of the class with a specific message that describes the current exception. Initializes a new instance of the class with a specific message that describes the current exception and an inner exception. Initializes a new instance of the class with the block reason message that describes the current exception. Initializes a new instance of the class. Initializes a new instance of the class with a specific message that describes the current exception. Initializes a new instance of the class with a specific message that describes the current exception and an inner exception. Initializes a new instance of the class. Initializes a new instance of the class with a specific message that describes the current exception. Initializes a new instance of the class with a specific message that describes the current exception and an inner exception. Initializes a new instance of the class. Initializes a new instance of the class with a specific message that describes the current exception. Initializes a new instance of the class with a specific message that describes the current exception and an inner exception. Initializes a new instance of the class. Initializes a new instance of the class with a specific message that describes the current exception. Initializes a new instance of the class with a specific message that describes the current exception and an inner exception. Initializes a new instance of the class with the finish message that describes the current exception. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging Lists the metadata for Files owned by the requesting project. 
The maximum number of Models to return (per page). A page token, received from a previous ListFiles call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List of files in File API. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Gets the metadata for the given File. Required. The resource name of the file to get. This name should match a file name returned by the ListFiles method. Format: files/file-id. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Metadata for the given file. Thrown when the is or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Deletes a file. Required. The resource name of the file to get. This name should match a file name returned by the ListFiles method. Format: files/file-id. A cancellation token that can be used by other objects or threads to receive notice of cancellation. If successful, the response body is empty. Thrown when the is or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging Lists the generated files owned by the requesting project. The maximum number of Models to return (per page). A page token, received from a previous ListFiles call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List of files in File API. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Checks whether the API key has the right conditions. 
API key for the Gemini API. Thrown when the API key is null. Thrown when the API key is empty. Thrown when the API key has extra whitespace at the start or end, doesn't start with 'AIza', or has the wrong length. Checks if the functionality is supported by the model. Model to use. Message to use. Thrown when the functionality is not supported by the model. Checks if the IANA standard MIME type is supported by the model. See the documentation for a list of supported image data and video format MIME types, and for a list of supported audio format MIME types. The IANA standard MIME type to check. Thrown when the MIME type is not supported by the API. Checks if the IANA standard MIME type is supported by the model. See the documentation for a list of supported image data and video format MIME types, for a list of supported audio format MIME types, and for a list of supported MIME types for document processing. Ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/MIME_types/Common_types The IANA standard MIME type to check. Thrown when the MIME type is not supported by the API. Checks if the language is supported by the model. Language to use. Thrown when the language is not supported by the API. Throws an exception if the IsSuccessStatusCode property for the HTTP response is false. The HTTP response message to check. Custom error message to prepend to the message. Include the response content in the error message. The HTTP response message if the call is successful. Truncates/abbreviates a string and places a user-facing indicator at the end. The string to truncate. Maximum length of the resulting string. Optional. Indicator to use, by default the ellipsis …. The truncated string. Thrown when the string parameter is null or empty. Thrown when the length of the indicator is larger than the maximum length. 
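The API-key checks and the truncation helper described above can be sketched as follows. This is an illustrative Python rendering of the documented rules; the exact expected key length is not reproduced here, and the function names are hypothetical:

```python
ELLIPSIS = "\u2026"  # default truncation indicator

def validate_api_key(api_key) -> None:
    """Raise on the conditions listed above: null, empty, surrounding
    whitespace, or a missing 'AIza' prefix. (The library also checks the
    key length; the expected length is not restated here.)"""
    if api_key is None:
        raise ValueError("API key is null")
    if api_key == "":
        raise ValueError("API key is empty")
    if api_key != api_key.strip() or not api_key.startswith("AIza"):
        raise ValueError("malformed API key")

def truncate(text: str, max_length: int, indicator: str = ELLIPSIS) -> str:
    """Truncate a string and place a user-facing indicator at the end."""
    if not text:
        raise ValueError("text must not be null or empty")
    if len(indicator) > max_length:
        raise ValueError("indicator is longer than the maximum length")
    if len(text) <= max_length:
        return text
    return text[: max_length - len(indicator)] + indicator
```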
You can enable Server Sent Events (SSE) for gemini-1.0-pro See Server-sent Events Activate JSON Mode (default = no) Activate Grounding with Google Search (default = no) Activate Google Search (default = no) Enable realtime stream using Multimodal Live API Initializes a new instance of the class. Initializes a new instance of the class. The default constructor attempts to read .env file and environment variables. Sets default values, if available. Optional. Logger instance used for logging Initializes a new instance of the class with access to Google AI Gemini API. API key provided by Google AI Studio Model to use Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Optional. Optional. Configuration of tools. Optional. Flag to indicate use of Vertex AI in express mode. Optional. Logger instance used for logging Initializes a new instance of the class with access to Vertex AI Gemini API. Identifier of the Google Cloud project Region to use Model to use Optional. Endpoint ID of the tuned model to use. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Optional. Optional. Configuration of tools. Optional. Logger instance used for logging Initializes a new instance of the class given cached content. Content that has been preprocessed. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. Logger instance used for logging Thrown when is null. Initializes a new instance of the class given cached content. Tuning Job to use with the model. Optional. Configuration options for model generation and outputs. Optional. 
A list of unique SafetySetting instances for blocking unsafe content. Optional. Logger instance used for logging Thrown when is null. Get a list of available tuned models and description. List of available tuned models. The maximum number of Models to return (per page). A page token, received from a previous ListModels call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. Optional. A filter is a full text search over the tuned model's description and display name. By default, results will not include tuned models shared with everyone. Additional operators: - owner:me - writers:me - readers:me - readers:everyone A cancellation token that can be used by other objects or threads to receive notice of cancellation. Lists the [`Model`s](https://ai.google.dev/gemini-api/docs/models/gemini) available through the Gemini API. List of available models. Flag, whether models or tuned models shall be returned. The maximum number of `Models` to return (per page). If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger page_size. A page token, received from a previous ListModels call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. Optional. A filter is a full text search over the tuned model's description and display name. By default, results will not include tuned models shared with everyone. Additional operators: - owner:me - writers:me - readers:me - readers:everyone A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Gets information about a specific `Model` such as its version number, token limits, [parameters](https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters) and other metadata. 
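The pageToken pattern used by ListModels (and the other list endpoints) can be sketched generically: pass the token from each response into the next request until no token is returned. The fetch function below is a stand-in, not a library call:

```python
def list_all(fetch_page):
    """Drain a paginated list endpoint by chaining page tokens."""
    items, token = [], None
    while True:
        page = fetch_page(page_token=token)
        items.extend(page["items"])
        token = page.get("nextPageToken")
        if not token:
            return items

# A fake two-page endpoint for demonstration.
PAGES = {
    None: {"items": ["model-a", "model-b"], "nextPageToken": "p2"},
    "p2": {"items": ["model-c"]},
}

def fake_fetch(page_token=None):
    return PAGES[page_token]

models = list_all(fake_fetch)
```

As the documentation notes, all other parameters must match between the call that produced the token and the call that consumes it.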
Refer to the [Gemini models guide](https://ai.google.dev/gemini-api/docs/models/gemini) for detailed model information. Required. The resource name of the model. This name should match a model name returned by the ListModels method. Format: models/model-id or tunedModels/my-model-id A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Copies a model in Vertex AI Model Registry. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the functionality is not supported by the model. Creates a tuned model. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the functionality is not supported by the model. Deletes a tuned model. Required. The resource name of the model. Format: tunedModels/my-model-id A cancellation token that can be used by other objects or threads to receive notice of cancellation. If successful, the response body is empty. Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Updates a tuned model. Required. The resource name of the model. Format: tunedModels/my-model-id The tuned model to update. Optional. The list of fields to update. This is a comma-separated list of fully qualified names of fields. Example: "user.displayName,photo". A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Transfers ownership of the tuned model. This is the only way to change ownership of the tuned model. The current owner will be downgraded to writer role. Required. The resource name of the tuned model to transfer ownership. Format: tunedModels/my-model-id Required. 
The email address of the user to whom the tuned model is being transferred to. A cancellation token that can be used by other objects or threads to receive notice of cancellation. If successful, the response body is empty. Thrown when the or is null or empty. Thrown when the functionality is not supported by the model. Uploads a file to the File API backend. URI or path to the file to upload. A name displayed for the uploaded file. Flag indicating whether to use resumable upload. A cancellation token to cancel the upload. A URI of the uploaded file. Thrown when the is null or empty. Thrown when the file is not found. Thrown when the file size exceeds the maximum allowed size. Thrown when the file upload fails. Thrown when the request fails to execute. Uploads a stream to the File API backend. Stream to upload. A name displayed for the uploaded file. The MIME type of the stream content. Flag indicating whether to use resumable upload. A cancellation token to cancel the upload. A URI of the uploaded file. Thrown when the is null or empty. Thrown when the size exceeds the maximum allowed size. Thrown when the upload fails. Thrown when the request fails to execute. Lists the metadata for Files owned by the requesting project. The maximum number of Models to return (per page). A page token, received from a previous ListFiles call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List of files in File API. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Gets the metadata for the given File. Required. The resource name of the file to get. This name should match a file name returned by the ListFiles method. Format: files/file-id. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Metadata for the given file. 
Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Deletes a file. Required. The resource name of the file to get. This name should match a file name returned by the ListFiles method. Format: files/file-id. A cancellation token that can be used by other objects or threads to receive notice of cancellation. If successful, the response body is empty. Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Generates a model response given an input . Refer to the [text generation guide](https://ai.google.dev/gemini-api/docs/text-generation) for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the [model guide](https://ai.google.dev/gemini-api/docs/models/gemini) and [tuning guide](https://ai.google.dev/gemini-api/docs/model-tuning) for details. Required. The request to send to the API. Options for the request. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the is . Thrown when the request fails to execute. Thrown when the functionality is not supported by the model or combination of features. Generates a response from the model given an input prompt and other parameters. Required. String to process. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Optional. Configuration of tools. Options for the request. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the is . Thrown when the request fails to execute. 
Generates a streamed response from the model given an input GenerateContentRequest. This method uses a MemoryStream and StreamContent to send a streaming request to the API. It runs asynchronously, sending and receiving chunks to and from the API endpoint, which allows non-blocking code execution. The request to send to the API. Options for the request. Stream of GenerateContentResponse chunks, delivered asynchronously. Thrown when the request is null. Thrown when the request fails to execute. Thrown when the functionality is not supported by the model or combination of features. Generates a response from the model given an input GenerateContentRequest. Required. The request to send to the API. Options for the request. Response from the model for generated content. Thrown when the request is null. Thrown when the request fails to execute. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the request is null. Generates images from a text prompt. Required. Model to use. Required. String to process. Configuration of image generation. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the model is null. Thrown when the prompt is null. Thrown when the request fails to execute. Generates images from a text prompt. Required. String to process. Number of images to generate. Range: 1..8. A description of what you want to omit in the generated images. Aspect ratio for the image. Controls the strength of the prompt. Suggested values are: 0-9 (low strength), 10-20 (medium strength), 21+ (high strength). Language of the text prompt for the image. Adds a filter level to Safety filtering. Allow generation of people by the model. Option to enhance your provided prompt. Explicitly set the watermark. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the prompt is null. 
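The streaming pattern described above, where chunks are consumed as they arrive rather than after the full response completes, can be sketched with an async generator. This is a language-neutral illustration in Python (the library itself streams GenerateContentResponse chunks over HTTP):

```python
import asyncio

async def stream_chunks(chunks):
    """Yield response chunks as they 'arrive', so the caller can
    process partial output without blocking on the full response."""
    for chunk in chunks:
        await asyncio.sleep(0)  # stand-in for network latency
        yield chunk

async def main():
    text = ""
    async for chunk in stream_chunks(["Hello", ", ", "world"]):
        text += chunk  # e.g. render each partial response immediately
    return text

result = asyncio.run(main())
```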
Thrown when the request fails to execute. Generates a grounded answer from the model given an input GenerateAnswerRequest. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for a grounded answer. Thrown when the is . Generates a text embedding vector from the input `Content` using the specified [Gemini Embedding model](https://ai.google.dev/gemini-api/docs/models/gemini#text-embedding). Required. EmbedContentRequest to process. The content to embed. Only the parts.text fields will be counted. Optional. The model used to generate embeddings. Defaults to models/embedding-001. Optional. Optional task type for which the embeddings will be used. Can only be set for models/embedding-001. Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT. Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List containing the embedding (list of float values) for the input content. Thrown when the is . Thrown when the functionality is not supported by the model. Generates multiple embedding vectors from the input `Content`, which consists of a batch of strings represented as `EmbedContentRequest` objects. Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model. Optional. The model used to generate embeddings. Defaults to models/embedding-001. Optional. Optional task type for which the embeddings will be used. Can only be set for models/embedding-001. Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT. Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval. A cancellation token that can be used by other objects or threads to receive notice of cancellation.
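For the embedding overloads, a single string is the simplest input. The sketch below assumes the default embedding model named above (models/embedding-001); the response property names follow the underlying embedContent API and should be verified against the installed version.

```csharp
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your-api-key");  // hypothetical key
var model = googleAI.GenerativeModel(model: "embedding-001");

// Only the text parts are embedded; task type and title are optional
// and, as noted above, the title applies to RETRIEVAL_DOCUMENT only.
var response = await model.EmbedContent("What is the meaning of life?");
Console.WriteLine(response.Embedding?.Values?.Count);
```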
List containing the embedding (list of float values) for the input content. Thrown when the is . Generates an embedding from the model given an input Content. Required. String to process. The content to embed. Only the parts.text fields will be counted. Optional. The model used to generate embeddings. Defaults to models/embedding-001. Optional. Optional task type for which the embeddings will be used. Can only be set for models/embedding-001. Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT. Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List containing the embedding (list of float values) for the input content. Thrown when the is . Thrown when the functionality is not supported by the model. Generates an embedding from the model given an input Content. Required. List of strings to process. The content to embed. Only the parts.text fields will be counted. Optional. The model used to generate embeddings. Defaults to models/embedding-001. Optional. Optional task type for which the embeddings will be used. Can only be set for models/embedding-001. Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT. Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List containing the embedding (list of float values) for the input content. Thrown when the is . Thrown when the functionality is not supported by the model. Generates multiple embeddings from the model given input text in a synchronous call. Content to embed. Optional. The model used to generate embeddings. Defaults to models/embedding-001. Optional. Optional task type for which the embeddings will be used. 
Can only be set for models/embedding-001. Optional. An optional title for the text. Only applicable when TaskType is RETRIEVAL_DOCUMENT. Note: Specifying a title for RETRIEVAL_DOCUMENT provides better quality embeddings for retrieval. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List containing the embedding (list of float values) for the input content. Thrown when the is . Thrown when the functionality is not supported by the model. Runs a model's tokenizer on input `Content` and returns the token count. Refer to the [tokens guide](https://ai.google.dev/gemini-api/docs/tokens) to learn more about tokens. Options for the request. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Number of tokens. Thrown when the is . Starts a chat session. Optional. A collection of objects, or equivalents to initialize the session. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Returns a attached to this model. Performs a prediction request. Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Prediction response. Thrown when the is . Thrown when the request fails to execute. Same as Predict but returns an LRO. Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Prediction response. Thrown when the is . Thrown when the request fails to execute. Generates a response from the model given an input message. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the is . Counts the number of tokens in the content. Options for the request. 
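The chat session described above can be sketched as follows; SendMessage appends each turn to the session history, so later calls carry the full conversation. Assumptions as before: installed package, hypothetical key.

```csharp
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your-api-key");  // hypothetical key
var model = googleAI.GenerativeModel(model: "gemini-1.5-pro");

// StartChat keeps the conversation history on the session object.
var chat = model.StartChat();
var first = await chat.SendMessage("Hello! Please remember the number 42.");
Console.WriteLine(first.Text);

// The second turn is answered in the context of the first.
var second = await chat.SendMessage("Which number did I ask you to remember?");
Console.WriteLine(second.Text);
```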
A cancellation token that can be used by other objects or threads to receive notice of cancellation. Number of tokens. Thrown when the is . Generates a response from the model given an input prompt. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the is . Runs a model's tokenizer on a string and returns the token count. Options for the request. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Number of tokens. Thrown when the is . A cancellation token that can be used by other objects or threads to receive notice of cancellation. Thrown when the is . Counts the number of tokens in the content. Options for the request. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Number of tokens. Thrown when the is . Generates multiple embeddings from the model given input text in a synchronous call. Required. Embed requests for the batch. The model in each of these requests must match the model specified in BatchEmbedContentsRequest.model. A cancellation token that can be used by other objects or threads to receive notice of cancellation. List of Embeddings of the content as a list of floating-point numbers. Thrown when the is . Entry point to access the Gemini API running in Google AI. See Model reference. Initializes a new instance of the class with access to Google AI Gemini API. The default constructor attempts to read the .env file and environment variables. Sets default values, if available. The following environment variables are used: GOOGLE_API_KEY API key provided by Google AI Studio. GOOGLE_ACCESS_TOKEN Optional. Access token provided by OAuth 2.0 or Application Default Credentials (ADC). Initializes a new instance of the class with access to Google AI Gemini API. Either an API key or an access token is required. API key for Google AI Studio. Access token for the Google Cloud project. Version of the API. Optional.
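Token counting runs the model's tokenizer server-side without generating output. A sketch (the TotalTokens property name follows the underlying countTokens response; verify against the installed version):

```csharp
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your-api-key");  // hypothetical key
var model = googleAI.GenerativeModel(model: "gemini-1.5-pro");

// Counts tokens for a plain string; Content and request overloads exist as well.
var response = await model.CountTokens("How many tokens does this sentence use?");
Console.WriteLine(response.TotalTokens);
```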
Logger instance used for logging Create a generative model on Google AI to use. Model to use (default: "gemini-1.5-pro") Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Optional. Generative model instance. Thrown when both "apiKey" and "accessToken" are . Create a generative model on Google AI to use. Content that has been preprocessed. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Generative model instance. Thrown when is null. Thrown when both "apiKey" and "accessToken" are . Returns an instance of CachedContent to use with a model. Cached content instance. Thrown when both "apiKey" and "accessToken" are . Returns an instance of to use with a model. Model to use (default: "imagegeneration") Imagen model Thrown when both "apiKey" and "accessToken" are . Uploads a file to the File API backend. URI or path to the file to upload. A name displayed for the uploaded file. Flag indicating whether to use resumable upload. A cancellation token to cancel the upload. A URI of the uploaded file. Thrown when the is null or empty. Thrown when the file is not found. Thrown when the file size exceeds the maximum allowed size. Thrown when the file upload fails. Thrown when the request fails to execute. Uploads a stream to the File API backend. Stream to upload. A name displayed for the uploaded file. The MIME type of the stream content. Flag indicating whether to use resumable upload. A cancellation token to cancel the upload. A URI of the uploaded file. Thrown when the is null or empty. Thrown when the size exceeds the maximum allowed size. Thrown when the upload fails. Thrown when the request fails to execute. Gets a generated file. 
When calling this method via REST, only the metadata of the generated file is returned. To retrieve the file content via REST, add alt=media as a query parameter. Required. The name of the generated file to retrieve. Example: `generatedFiles/abc-123` Metadata for the given file. Thrown when the is null or empty. Thrown when the request fails to execute. Lists the metadata for Files owned by the requesting project. The maximum number of Files to return (per page). A page token, received from a previous files.list call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. List of files in File API. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Gets the metadata for the given File. Required. The resource name of the file to get. This name should match a file name returned by the files.list method. Format: files/file-id. Metadata for the given file. Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Deletes a file. Required. The resource name of the file to delete. This name should match a file name returned by the files.list method. Format: files/file-id. If successful, the response body is empty. Thrown when the is null or empty. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. Lists the metadata for Files owned by the requesting project. The maximum number of Files to return (per page). A page token, received from a previous files.list call. Provide the pageToken returned by one request as an argument to the next request to retrieve the next page. List of files in File API. Thrown when the functionality is not supported by the model. Thrown when the request fails to execute. The interface can be used to write generic implementations using either the Google AI Gemini API or the Vertex AI Gemini API as backend.
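The File API members above (upload, list, get) can be combined in one flow. This sketch assumes a hypothetical key, placeholder file path and display name, and the method names documented for the GoogleAI class; paging behaves as described above.

```csharp
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your-api-key");  // hypothetical key

// Upload a local file (path and display name are placeholders).
var upload = await googleAI.UploadFile("sample.mp3", displayName: "Sample audio");

// List file metadata; pass the returned page token back in to fetch further pages.
var list = await googleAI.ListFiles();
foreach (var file in list.Files)
{
    Console.WriteLine($"{file.Name} ({file.DisplayName})");
}

// Fetch metadata for one file by resource name, format "files/file-id".
var metadata = await googleAI.GetFile(list.Files[0].Name);
Console.WriteLine(metadata.MimeType);
```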
Create an instance of a generative model to use. Model to use (default: "gemini-1.5-pro") Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Optional. A list of Tools the model may use to generate the next response. Optional. Thrown when required parameters are null. Generative model instance. Create an instance of a generative model to use. Content that has been preprocessed. Optional. Configuration options for model generation and outputs. Optional. A list of unique SafetySetting instances for blocking unsafe content. Generative model instance. Gets information about a specific Model. Required. The resource name of the model. This name should match a model name returned by the models.list method. Format: models/model-id or tunedModels/my-model-id. Thrown when the model parameter is null. Thrown when the backend does not support this method or the model. Returns an instance of an image generation model. Model to use (default: "imagegeneration") Name of the model that supports image generation. The model can create high-quality visual assets in seconds and brings Google's state-of-the-art vision and multimodal generative AI capabilities to application developers. Initializes a new instance of the class. Initializes a new instance of the class. The default constructor attempts to read the .env file and environment variables. Sets default values, if available. Initializes a new instance of the class with access to Google AI Gemini API. API key provided by Google AI Studio Model to use Optional. Logger instance used for logging Initializes a new instance of the class with access to Vertex AI Gemini API. Identifier of the Google Cloud project Region to use Model to use Optional. Logger instance used for logging Generates images from the specified . Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation.
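An image-generation sketch against Vertex AI, using the constructor parameters documented above (project, region, model). The class and method names used here (ImageGenerationModel, GenerateImages) are assumptions based on this reference and should be checked against the installed package; project and region values are placeholders.

```csharp
using Mscc.GenerativeAI;

// Vertex AI constructor: Google Cloud project, region and model.
var model = new ImageGenerationModel(projectId: "my-project",
                                     region: "us-central1",
                                     model: "imagegeneration");

// Prompt-only overload; number of images (1..8), negative prompt,
// aspect ratio etc. are the optional parameters listed above.
var response = await model.GenerateImages("A watercolor lighthouse at dusk");
Console.WriteLine(response.Predictions?.Count);
```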
Response from the model for generated images. Thrown when the is . Generates images from a text prompt. Required. String to process. Number of images to generate. Range: 1..8. A description of what you want to omit in the generated images. Aspect ratio for the image. Controls the strength of the prompt. Suggested values are: * 0-9 (low strength) * 10-20 (medium strength) * 21+ (high strength) Language of the text prompt for the image. Adds a filter level to Safety filtering. Allow generation of people by the model. Option to enhance your provided prompt. Explicitly set the watermark. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the is . Thrown when the request fails to execute. Generates a response from the model given an input prompt and other parameters. Required. String to process. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the is . Thrown when the request fails to execute. Generates an image from the model given an input. Initializes a new instance of the class. Initializes a new instance of the class. Optional. Logger instance used for logging A cancellation token that can be used by other objects or threads to receive notice of cancellation. Name of the model that supports image captioning. The model generates a caption from an image you provide, based on the language that you specify. The model supports the following languages: English (en), German (de), French (fr), Spanish (es) and Italian (it). Initializes a new instance of the class. Initializes a new instance of the class. The default constructor attempts to read the .env file and environment variables. Sets default values, if available. Initializes a new instance of the class with access to Vertex AI Gemini API. Identifier of the Google Cloud project Region to use Model to use Optional.
Logger instance used for logging Generates images from the specified . Required. The request to send to the API. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated images. Generates a response from the model given an input prompt and other parameters. Required. The base64-encoded image to process. Optional. Number of results to return. Default is 1. Optional. Language to use. Default is en. Optional. Cloud Storage URI where to store the generated predictions. A cancellation token that can be used by other objects or threads to receive notice of cancellation. Response from the model for generated content. Thrown when the is . Thrown when the request fails to execute. Thrown when the is not supported by the API. Generates a response from the model given an input prompt and other parameters. Required. The base64-encoded image to process. Required. The question to ask about the image. Optional. Number of results to return. Default is 1. Optional. Language to use. Default is en. A cancellation token that can be used by other objects or threads to receive notice of cancellation.
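Finally, the image-captioning and visual-question-answering members take a base64-encoded image. The class and method names below (ImageTextModel, GetCaptions, AskQuestion) are assumptions for illustration; only the parameters (base64 image, result count, language, question) come from this reference, and the file path, project and region are placeholders.

```csharp
using Mscc.GenerativeAI;

// Assumed captioning model class; project and region are placeholders.
var model = new ImageTextModel(projectId: "my-project", region: "us-central1");

// The API expects the image as a base64-encoded string.
var base64Image = Convert.ToBase64String(File.ReadAllBytes("photo.jpg"));

// Captioning: up to 'numberOfResults' captions in the requested language.
var captions = await model.GetCaptions(base64Image, numberOfResults: 1, language: "en");

// Visual question answering: same image plus a question about it.
var answers = await model.AskQuestion(base64Image, "What is in the foreground?");
```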