Perplexity AI (Independent Publisher) (Preview)
Unlock the most powerful AI research assistant. Raise Perplexity to the next level with more Copilot, upgraded AI, unlimited file upload, and web service access. Upgrade to Claude-2 or GPT-4 for more accurate answers, with pplx, Mistral, and Llama language models also available.
This connector is available in the following products and regions:
Service | Class | Regions |
---|---|---|
Logic Apps | Standard | All Logic Apps regions except the following: Azure Government regions, Azure China regions, US Department of Defense (DoD) |
Power Automate | Premium | All Power Automate regions except the following: US Government (GCC), US Government (GCC High), China Cloud operated by 21Vianet, US Department of Defense (DoD) |
Power Apps | Premium | All Power Apps regions except the following: US Government (GCC), US Government (GCC High), China Cloud operated by 21Vianet, US Department of Defense (DoD) |
Contact | |
---|---|
Name | Troy Taylor |
URL | https://www.hitachisolutions.com |
Email | [email protected] |
Connector Metadata | |
---|---|
Publisher | Troy Taylor |
Website | https://www.perplexity.ai/ |
Privacy policy | https://blog.perplexity.ai/legal/privacy-policy |
Categories | AI |
Creating a connection
The connector supports the following authentication types:
Auth type | Parameters | Applicable regions | Shareable |
---|---|---|---|
Default | Parameters for creating connection. | All regions | Not shareable |
Default
Applicable: All regions
Parameters for creating connection.
This connection is not shareable. If the power app is shared with another user, the other user will be prompted to create a new connection explicitly.
Name | Type | Description | Required |
---|---|---|---|
API Key (in the form 'Bearer YOUR_API_KEY') | securestring | The API Key (in the form 'Bearer YOUR_API_KEY') for this API | True |
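The key value is stored with the Bearer prefix already included, so it can be used verbatim as an Authorization header. The following is a minimal sketch of what that looks like when calling Perplexity's chat completions endpoint directly (outside the connector); the endpoint URL and model name are assumptions for illustration, not taken from this page.

```python
# Minimal sketch, not the connector itself: the connection's API Key value is
# the full "Bearer YOUR_API_KEY" string, usable as-is as an Authorization header.
# The endpoint URL and model name below are assumptions for illustration.
import requests

api_key_value = "Bearer pplx-your-key-here"  # exactly what you paste into the connection dialog

response = requests.post(
    "https://api.perplexity.ai/chat/completions",  # assumed endpoint
    headers={
        "Authorization": api_key_value,
        "Content-Type": "application/json",
    },
    json={
        "model": "mistral-7b-instruct",  # assumed model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```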
Throttling Limits
Name | Calls | Renewal Period |
---|---|---|
API calls per connection | 100 | 60 seconds |
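The limit of 100 calls per 60 seconds applies per connection. If a flow or script drives the action in a loop, simple client-side pacing such as the sketch below (the helper names are illustrative, not part of the connector) keeps requests under that rate.

```python
# Sketch of client-side pacing to stay under the connector's limit of
# 100 API calls per connection per 60 seconds. make_call is a placeholder
# for whatever actually invokes the action or API.
import time

MAX_CALLS = 100
WINDOW_SECONDS = 60
MIN_INTERVAL = WINDOW_SECONDS / MAX_CALLS  # 0.6 s between calls

def call_with_pacing(make_call, prompts):
    results = []
    for prompt in prompts:
        started = time.monotonic()
        results.append(make_call(prompt))
        elapsed = time.monotonic() - started
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)
    return results
```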
Actions
Get chat completion | Generates a model's response for the given chat conversation. |
Get chat completion
Generates a model's response for the given chat conversation.
Parameters
Name | Key | Required | Type | Description |
---|---|---|---|---|
Model | model | True | string | The name of the model that will complete your prompt. |
Role | role | True | string | The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user. |
Content | content | True | string | The contents of the message in this turn of conversation. |
Max Tokens | max_tokens | | integer | The maximum number of completion tokens returned by the API. The total number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the model requested. If left unspecified, the model will generate tokens until it reaches either its stop token or the end of its context window. |
Temperature | temperature | | double | The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic. You should set either temperature or top_p, but not both. |
Top P | top_p | | double | The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. You should alter either temperature or top_p, but not both. |
Top K | top_k | | double | The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
Presence Penalty | presence_penalty | | double | A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty. |
Frequency Penalty | frequency_penalty | | double | A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. |
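The sketch below shows how these parameters might be assembled into a single request body, under the assumption that the action forwards them unchanged to the underlying chat completions API; the model name and message text are purely illustrative.

```python
# Illustrative request body built from the parameters listed above
# (assumed to map one-to-one onto the underlying chat completions request).
import json

payload = {
    "model": "mistral-7b-instruct",  # assumed model name
    "messages": [
        # Optional system message first, then user/assistant turns alternate,
        # ending with a user turn.
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "Explain nucleus sampling in one sentence."},
    ],
    "max_tokens": 256,        # prompt tokens + max_tokens must fit the model's context window
    "temperature": 0.2,       # set temperature or top_p, not both
    # "top_p": 0.9,
    "top_k": 0,               # 0 disables top-k filtering
    "presence_penalty": 0.5,  # incompatible with frequency_penalty
}

print(json.dumps(payload, indent=2))
```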
Returns
Name | Path | Type | Description |
---|---|---|---|
ID | id | string | The identifier. |
Model | model | string | The model. |
Object | object | string | The object. |
Created | created | integer | When created. |
Choices | choices | array of object | |
Index | choices.index | integer | The index. |
Finish Reason | choices.finish_reason | string | The finish reason. |
Content | choices.message.content | string | The content. |
Role | choices.message.role | string | The role. |
Content | choices.delta.content | string | The content. |
Role | choices.delta.role | string | The role. |
Prompt Tokens | usage.prompt_tokens | integer | The prompt tokens used. |
Completion Tokens | usage.completion_tokens | integer | The completion tokens used. |
Total Tokens | usage.total_tokens | integer | The total tokens used. |
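A short sketch of pulling the most commonly used fields out of the action's output, using a hand-written sample shaped like the schema above rather than real API output.

```python
# Illustrative response shaped like the returns schema above (values are made up).
sample_output = {
    "id": "d8f1a2b3",
    "model": "mistral-7b-instruct",
    "object": "chat.completion",
    "created": 1700000000,
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": "Nucleus sampling keeps only the most probable tokens whose cumulative probability reaches top_p.",
            },
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 24, "total_tokens": 49},
}

# The assistant's reply and the token accounting are the fields most flows need.
answer = sample_output["choices"][0]["message"]["content"]
tokens_used = sample_output["usage"]["total_tokens"]
print(f"{answer} ({tokens_used} tokens)")
```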