Google Gemini (Independent Publisher) (Preview)

Custom connector for Google Gemini, providing advanced AI multimodal content generation capabilities. Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, and Gemini Nano, it was announced on December 6, 2023.

This connector is available in the following products and regions:

Service         Class     Regions
Logic Apps      Standard  All Logic Apps regions except the following:
                            - Azure Government regions
                            - Azure China regions
                            - US Department of Defense (DoD)
Power Automate  Premium   All Power Automate regions except the following:
                            - US Government (GCC)
                            - US Government (GCC High)
                            - China Cloud operated by 21Vianet
                            - US Department of Defense (DoD)
Power Apps      Premium   All Power Apps regions except the following:
                            - US Government (GCC)
                            - US Government (GCC High)
                            - China Cloud operated by 21Vianet
                            - US Department of Defense (DoD)
Contact
Name Priyaranjan KS, Vidya Sagar Alti [Tata Consultancy Services]
URL https://www.tcs.com
Email [email protected]
Connector Metadata
Publisher Priyaranjan KS, Vidya Sagar Alti [Tata Consultancy Services]
Website https://ai.google.dev/
Privacy policy https://policies.google.com/privacy
Categories AI

Creating a connection

The connector supports the following authentication types:

Default: Parameters for creating connection (all regions, not shareable).

Default

Applicable: All regions

Parameters for creating connection.

This is not a shareable connection. If the power app is shared with another user, that user will be prompted to create a new connection explicitly.

Name Type Description Required
API Key securestring The API key for this API True
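
The connection only stores this key. As a minimal, illustrative sketch of how such a key is presented to the public Google Generative Language API (which this connector is assumed to call), the service accepts the key either as a 'key' query parameter or as an 'x-goog-api-key' header; the Python snippet below uses the header form and is not taken from the connector definition.

import requests

API_KEY = "<value entered for the API Key connection parameter>"

# Assumption: the underlying Generative Language API accepts the key as the
# 'x-goog-api-key' header (a 'key' query parameter also works).
session = requests.Session()
session.headers["x-goog-api-key"] = API_KEY

# Simple connectivity check against the public API.
resp = session.get("https://generativelanguage.googleapis.com/v1beta/models")
resp.raise_for_status()
print("Key accepted, HTTP", resp.status_code)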

Throttling Limits

Name Calls Renewal Period
API calls per connection 100 60 seconds

Actions

Count tokens

Counts the number of tokens in a given text using the Generative Language Model.

Generate batch embeddings

Generates embedding vectors for a batch of text contents.

Generate embedding

This endpoint is designed to generate an embedding vector for the provided text content, which can be used for various natural language processing tasks such as text similarity, classification, and clustering.

Generate multi modal content

Generates a response from the model given an input message and an image or video.

Generate stream content

By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.

Generate text content

Generates a text response from the model given an input message.

Get all models

Retrieves a list of all available models with their details.

Get model details

Retrieves details of a specific model based on the provided model name.

Count tokens

Counts the number of tokens in a given text using the Generative Language Model.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version to use for the endpoint, e.g. 'v1beta'.

Model Name
modelName True string

Model name, e.g. 'gemini-pro'.

Text
text string

Required. Text content for which the token count is to be determined.

Returns

Name Path Type Description
totalTokens
totalTokens integer

The total number of tokens in the provided text.
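
For reference, a minimal Python sketch of the Generative Language API countTokens call this action presumably wraps; the mapping of the Text parameter to a single content part is an assumption, not taken from the connector definition.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"      # API Version parameter
model_name = "gemini-pro"   # Model Name parameter

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{model_name}:countTokens")
body = {"contents": [{"parts": [{"text": "How many tokens is this sentence?"}]}]}

resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()
print(resp.json()["totalTokens"])   # corresponds to the totalTokens return value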

Generate batch embeddings

Generates embedding vectors for a batch of text contents.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version, e.g. 'v1beta'.

Model Name
modelName True string

Model name, e.g. 'embedding-001'.

Model
model True string

Identifier of the model used for embedding generation. This should match the format 'models/{modelName}'.

Text
text string

Required. The text content for which the embedding is generated.

Returns

Name Path Type Description
embeddings
embeddings array of object
values
embeddings.values array of number

An array of numerical values representing the generated embedding.
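
A minimal Python sketch of the batchEmbedContents call this action presumably maps to; the per-item request shape follows the public Generative Language API, and the connector's exact mapping is an assumption.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"
model_name = "embedding-001"
model = f"models/{model_name}"   # Model parameter, format 'models/{modelName}'

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{model_name}:batchEmbedContents")
texts = ["first passage", "second passage"]
body = {"requests": [{"model": model, "content": {"parts": [{"text": t}]}} for t in texts]}

resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()
# 'embeddings' is an array of objects, each holding a 'values' array of numbers.
for embedding in resp.json()["embeddings"]:
    print(len(embedding["values"]))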

Generate embedding

This endpoint is designed to generate an embedding vector for the provided text content, which can be used for various natural language processing tasks such as text similarity, classification, and clustering.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

The version of the API to be used. This parameter defines the versioning scheme of the API endpoint, e.g. 'v1beta'.

Model Name
modelName True string

The name of the model to be used for generating the embedding. The model name should correspond to one of the models available in the API, e.g. 'embedding-001'.

Model Resource Name
model True string

Identifier of the model used for embedding generation. This should match the format 'models/{modelName}'.

Text
text string

Required. The text content for which the embedding is generated.

Task Type
taskType string

Optional. The type of task for which the embedding is intended. This parameter helps the model to understand the context in which the embedding is generated.

Title
title string

Optional. A title for the content, applicable for certain task types like RETRIEVAL_DOCUMENT.

Returns

Name Path Type Description
values
embedding.values array of number

An array of numerical values representing the generated embedding.
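
A minimal Python sketch of the embedContent call this action presumably wraps, including the optional taskType and title fields; field names follow the public Generative Language API and the connector mapping is an assumption.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"
model_name = "embedding-001"

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{model_name}:embedContent")
body = {
    "model": f"models/{model_name}",                  # Model Resource Name
    "content": {"parts": [{"text": "Gemini connector overview"}]},
    "taskType": "RETRIEVAL_DOCUMENT",                 # optional Task Type
    "title": "Connector docs",                        # optional Title
}

resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()
vector = resp.json()["embedding"]["values"]           # embedding.values in the return schema
print(len(vector))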

Generate multi modal content

Generates a response from the model given an input message and an image or video.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version to use for the vision endpoint, e.g. 'v1beta'.

Base Model Name
modelName True string

Name of the base model, e.g. enter 'gemini-pro' and the corresponding vision model ('gemini-pro-vision') will be used.

Role
role string

Optional. The producer of the content. Must be either 'user' or 'model'.

Parts
Parts object
Category
category string

Optional. The category of content to be filtered.

Threshold
threshold string

Optional. The threshold for filtering content in the specified category.

Max Output Tokens
maxOutputTokens integer

Optional. The maximum number of tokens to include in a vision candidate.

Temperature
temperature number

Optional. Controls the randomness of the vision output.

Top P
topP number

Optional. The maximum cumulative probability of tokens to consider when sampling.

Top K
topK integer

Optional. The maximum number of tokens to consider when sampling.

Stop Sequences
stopSequences array of string

Optional. The set of character sequences that will stop text output generation.

Returns

Name Path Type Description
candidates
candidates array of object
parts
candidates.content.parts array of object
items
candidates.content.parts object
finishReason
candidates.finishReason string
index
candidates.index integer
safetyRatings
candidates.safetyRatings array of object
category
candidates.safetyRatings.category string
probability
candidates.safetyRatings.probability string
safetyRatings
promptFeedback.safetyRatings array of object
category
promptFeedback.safetyRatings.category string
probability
promptFeedback.safetyRatings.probability string
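
A minimal Python sketch of the vision generateContent call this action presumably wraps, sending one text part plus one inline image part, with example safety settings and generation config; the exact connector field mapping is an assumption.

import base64
import requests

API_KEY = "<api-key>"
api_version = "v1beta"
vision_model = "gemini-pro-vision"   # derived from the base model name, per the note above

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{vision_model}:generateContent")
body = {
    "contents": [{
        "role": "user",
        "parts": [
            {"text": "Describe this image."},
            {"inlineData": {"mimeType": "image/jpeg", "data": image_b64}},
        ],
    }],
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "generationConfig": {"maxOutputTokens": 256, "temperature": 0.4, "topP": 1, "topK": 32},
}

resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])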

Generate stream content

By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version, e.g. 'v1beta'.

Model Name
modelName True string

Model name, e.g. 'gemini-pro'.

Role
role string

The producer of the content. Must be either 'user' or 'model'.

Text
text string

Required. Text content to be processed.

Category
category string

Optional. Category of content to be filtered.

Threshold
threshold string

Optional. Threshold level for content filtering.

Temperature
temperature number

Optional. Controls randomness in the response. Higher values lead to more varied responses.

Max Output Tokens
maxOutputTokens integer

Optional. Maximum number of tokens in the generated content.

Top P
topP number

Optional. Controls diversity of the response. Higher values lead to more diverse responses.

Top K
topK integer

Optional. Limits the number of high-probability tokens considered at each step.

Candidate Count
candidateCount integer

Optional. Number of candidate responses to generate.

Stop Sequences
stopSequences array of string

Optional. The set of character sequences that will stop text output generation.

Returns

Name Path Type Description
array of object
candidates
candidates array of object
parts
candidates.content.parts array of object
text
candidates.content.parts.text string
role
candidates.content.role string
finishReason
candidates.finishReason string
index
candidates.index integer
safetyRatings
candidates.safetyRatings array of object
category
candidates.safetyRatings.category string
probability
candidates.safetyRatings.probability string
safetyRatings
promptFeedback.safetyRatings array of object
category
promptFeedback.safetyRatings.category string
probability
promptFeedback.safetyRatings.probability string
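
A minimal Python sketch of the streamGenerateContent call this action presumably wraps, using the server-sent-events variant (alt=sse) so partial candidates can be processed as they arrive; the connector's own transport handling is an assumption.

import json
import requests

API_KEY = "<api-key>"
api_version = "v1beta"
model_name = "gemini-pro"

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{model_name}:streamGenerateContent")
body = {
    "contents": [{"role": "user", "parts": [{"text": "Write a short poem about clouds."}]}],
    "generationConfig": {"temperature": 0.7, "maxOutputTokens": 200, "candidateCount": 1},
}

resp = requests.post(url, params={"key": API_KEY, "alt": "sse"}, json=body, stream=True)
resp.raise_for_status()
for line in resp.iter_lines():
    # Each SSE data line carries one partial GenerateContentResponse chunk.
    if line.startswith(b"data: "):
        chunk = json.loads(line[len(b"data: "):])
        for part in chunk["candidates"][0]["content"]["parts"]:
            print(part.get("text", ""), end="", flush=True)
print()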

Generate text content

Generates a text response from the model given an input message.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version to use for the endpoint, e.g. 'v1beta'.

Model Name
modelName True string

Name of the model to be used for text generation, e.g. 'gemini-pro'.

Role
role string

Optional. The producer of the content. Must be either 'user' or 'model'.

Text
text True string

Required. Text for generating the response.

Category
category string

Optional. The category of content to be filtered.

Threshold
threshold string

Optional. The threshold for filtering content in the specified category.

Max Output Tokens
maxOutputTokens integer

Optional. The maximum number of tokens to include in a text candidate.

Temperature
temperature number

Optional. Controls the randomness of the text output.

Top P
topP number

Optional. The maximum cumulative probability of tokens to consider when sampling.

Top K
topK integer

Optional. The maximum number of tokens to consider when sampling.

Candidate Count
candidateCount integer

Optional. Number of candidate responses to generate.

Stop Sequences
stopSequences array of string

Optional. The set of character sequences that will stop text output generation.

Returns

Name Path Type Description
candidates
candidates array of object
parts
candidates.content.parts array of object
text
candidates.content.parts.text string
finishReason
candidates.finishReason string
index
candidates.index integer
safetyRatings
candidates.safetyRatings array of object
category
candidates.safetyRatings.category string
probability
candidates.safetyRatings.probability string
safetyRatings
promptFeedback.safetyRatings array of object
category
promptFeedback.safetyRatings.category string
probability
promptFeedback.safetyRatings.probability string
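
A minimal Python sketch of the generateContent call this action presumably wraps, with example safety settings and the generation-config fields listed above; the exact connector mapping is an assumption.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"
model_name = "gemini-pro"

url = (f"https://generativelanguage.googleapis.com/{api_version}"
       f"/models/{model_name}:generateContent")
body = {
    "contents": [{"role": "user", "parts": [{"text": "Explain what a custom connector is."}]}],
    "safetySettings": [
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"}
    ],
    "generationConfig": {
        "temperature": 0.9,       # randomness of the output
        "maxOutputTokens": 512,
        "topP": 0.95,
        "topK": 40,
        "candidateCount": 1,
        "stopSequences": ["END"],
    },
}

resp = requests.post(url, params={"key": API_KEY}, json=body)
resp.raise_for_status()
candidate = resp.json()["candidates"][0]
print(candidate["content"]["parts"][0]["text"])
print("finishReason:", candidate["finishReason"])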

Get all models

Retrieves a list of all available models with their details.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version, e.g. 'v1beta'.

Returns

Name Path Type Description
models
models array of object
name
models.name string

Unique identifier of the model.

version
models.version string

Version of the model.

displayName
models.displayName string

Display name of the model.

description
models.description string

Description of the model.

inputTokenLimit
models.inputTokenLimit integer

The maximum number of input tokens the model can handle.

outputTokenLimit
models.outputTokenLimit integer

The maximum number of output tokens the model can generate.

supportedGenerationMethods
models.supportedGenerationMethods array of string

List of supported generation methods by the model.

temperature
models.temperature number

Default temperature setting for the model. Not present for all models.

topP
models.topP number

Default topP setting for the model. Not present for all models.

topK
models.topK number

Default topK setting for the model. Not present for all models.
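
A minimal Python sketch of the models.list call this action presumably wraps; the printed fields correspond to the return schema above.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"

url = f"https://generativelanguage.googleapis.com/{api_version}/models"
resp = requests.get(url, params={"key": API_KEY})
resp.raise_for_status()

for model in resp.json().get("models", []):
    # temperature/topP/topK may be absent for some models, hence .get() elsewhere.
    print(model["name"], model["displayName"],
          model.get("inputTokenLimit"), model.get("supportedGenerationMethods"))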

Get model details

Retrieves details of a specific model based on the provided model name.

Parameters

Name Key Required Type Description
API Version
apiVersion True string

API version, e.g. 'v1beta'.

Model Name
modelName True string

Model name, e.g. 'gemini-pro'.

Returns

Name Path Type Description
name
name string

Unique identifier of the model.

version
version string

Version of the model.

displayName
displayName string

Display name of the model.

description
description string

Description of the model.

inputTokenLimit
inputTokenLimit integer

The maximum number of input tokens the model can handle.

outputTokenLimit
outputTokenLimit integer

The maximum number of output tokens the model can generate.

supportedGenerationMethods
supportedGenerationMethods array of string

List of supported generation methods by the model.

temperature
temperature number

Default temperature setting for the model.

topP
topP number

Default topP setting for the model.

topK
topK number

Default topK setting for the model.
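
A minimal Python sketch of the models.get call this action presumably wraps; the model name is passed as part of the URL path.

import requests

API_KEY = "<api-key>"
api_version = "v1beta"
model_name = "gemini-pro"

url = f"https://generativelanguage.googleapis.com/{api_version}/models/{model_name}"
resp = requests.get(url, params={"key": API_KEY})
resp.raise_for_status()

info = resp.json()
print(info["name"], info["version"], info["displayName"])
print("Token limits:", info["inputTokenLimit"], "in /", info["outputTokenLimit"], "out")
print("Supported methods:", info["supportedGenerationMethods"])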