Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch
0.3.0-alpha
.NET CLI:
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch --version 0.3.0-alpha
Package Manager:
NuGet\Install-Package Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch -Version 0.3.0-alpha
PackageReference:
<PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch" Version="0.3.0-alpha" />
Paket CLI:
paket add Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch --version 0.3.0-alpha
Script & Interactive:
#r "nuget: Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch, 0.3.0-alpha"
Cake:
// Install Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch as a Cake Addin
#addin nuget:?package=Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch&version=0.3.0-alpha&prerelease
// Install Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch as a Cake Tool
#tool nuget:?package=Microsoft.Azure.Functions.Worker.Extensions.OpenAI.CosmosDBSearch&version=0.3.0-alpha&prerelease
Azure Functions bindings for OpenAI's GPT engine
This project adds support for OpenAI LLM (GPT-3.5-turbo, GPT-4) bindings in Azure Functions.
This extension depends on the Azure AI OpenAI SDK.
NuGet Packages
The following NuGet packages are available as part of this project.
Preview Bundle
Add the following section to the host.json file of the function app so that non-.NET languages can use the preview bundle and consume the extension packages:
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
"version": "[4.*, 5.0.0)"
}
Requirements
- .NET 6 SDK or greater (Visual Studio 2022 recommended)
- Azure Functions Core Tools v4.x
- Update settings in the Azure Function app, or in the local.settings.json file for local development, with the following keys (a sample local.settings.json sketch follows this list):
  - For Azure, set AZURE_OPENAI_ENDPOINT to the Azure OpenAI resource endpoint (e.g. https://***.openai.azure.com/).
  - For Azure, assign the user or the function app managed identity the Cognitive Services OpenAI User role on the Azure OpenAI resource. Using managed identity is strongly recommended to avoid the overhead of maintaining secrets; however, if key-based authentication is needed, add the AZURE_OPENAI_KEY setting and its value.
  - For non-Azure, set OPENAI_API_KEY to an API key from your OpenAI account, saved into a setting. If using environment variables, learn more in the .env readme.
  - Update the CHAT_MODEL_DEPLOYMENT_NAME and EMBEDDING_MODEL_DEPLOYMENT_NAME keys to Azure deployment names, or override the default OpenAI model names.
  - If using a user-assigned managed identity, add AZURE_CLIENT_ID to the environment variable settings, with the client ID of the managed identity as its value.
  - Visit the binding-specific samples README for additional settings that might be required for each binding.
- Azure Storage emulator such as Azurite running in the background
- The target language runtime (e.g. dotnet, nodejs, powershell, python, java) installed on your machine. Refer to the official supported versions.
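For local development against Azure OpenAI, a minimal local.settings.json might look like the following sketch. All values are placeholders, and the exact keys you need depend on the bindings and authentication mode you use (see the list above):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AZURE_OPENAI_ENDPOINT": "https://<your-resource>.openai.azure.com/",
    "CHAT_MODEL_DEPLOYMENT_NAME": "<chat-deployment-name>",
    "EMBEDDING_MODEL_DEPLOYMENT_NAME": "<embeddings-deployment-name>"
  }
}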
Features
The following features are currently available, with more being added over time. Language-stack-specific samples are also available in this repo for dotnet-isolated, java, nodejs, powershell and python; visit the feature-specific folders to use them.
Text completion input binding
The textCompletion input binding can be used to invoke the OpenAI Chat Completions API and return the results to the function.
The examples below define "who is" HTTP-triggered functions with a hardcoded "who is {name}?" prompt, where {name} is substituted with the value in the HTTP request path. The OpenAI input binding invokes the OpenAI GPT endpoint to surface the answer to the prompt to the function, which then returns the result text as the response content.
C# example
Setting a model is optional for non-Azure OpenAI; see here for the default model values for OpenAI.
[Function(nameof(WhoIs))]
public static HttpResponseData WhoIs(
[HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequestData req,
[TextCompletionInput("Who is {name}?")] TextCompletionResponse response)
{
HttpResponseData responseData = req.CreateResponse(HttpStatusCode.OK);
responseData.WriteString(response.Content);
return responseData;
}
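Once the function app is running locally (assuming the default Functions host port of 7071), a request such as the following returns the completion text:
GET http://localhost:7071/api/whois/Turing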
Python example
Setting a model is optional for non-Azure OpenAI; see here for the default model values for OpenAI.
@app.route(route="whois/{name}", methods=["GET"])
@app.text_completion_input(arg_name="response", prompt="Who is {name}?", max_tokens="100", model="%CHAT_MODEL_DEPLOYMENT_NAME%")
def whois(req: func.HttpRequest, response: str) -> func.HttpResponse:
response_json = json.loads(response)
return func.HttpResponse(response_json["content"], status_code=200)
Chat completion
Chat completions are useful for building AI-powered assistants.
There are three bindings you can use to interact with the assistant:
- The assistantCreate output binding creates a new assistant with a specified system prompt.
- The assistantPost output binding sends a message to the assistant and saves the response in its internal state.
- The assistantQuery input binding fetches the assistant history and passes it to the function.
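To illustrate how these fit together, here is a minimal C# sketch of creating an assistant with the assistantCreate output binding. The wrapper type and the AssistantCreateOutput attribute and AssistantCreateRequest constructor shapes are assumptions modeled on the multi-output pattern used by the document storage example later in this README; see the chat samples directory for the exact types:
public class CreateAssistantResponse
{
    // Assumed attribute name: marks the property whose value is sent to the
    // assistantCreate output binding.
    [AssistantCreateOutput]
    public AssistantCreateRequest? AssistantCreateRequest { get; set; }

    public HttpResponseData? HttpResponse { get; set; }
}

[Function(nameof(CreateAssistant))]
public static CreateAssistantResponse CreateAssistant(
    [HttpTrigger(AuthorizationLevel.Function, "put", Route = "assistants/{assistantId}")] HttpRequestData req,
    string assistantId)
{
    return new CreateAssistantResponse
    {
        HttpResponse = req.CreateResponse(HttpStatusCode.Created),
        // Pairs the assistant ID from the route with its system prompt.
        AssistantCreateRequest = new AssistantCreateRequest(assistantId, "You are a helpful assistant."),
    };
}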
You can find samples in multiple languages with instructions in the chat samples directory.
Assistants
Assistants build on top of the chat functionality to provide assistants with custom skills defined as functions. This internally uses the function-calling feature of OpenAI's GPT models to select which functions to invoke and when.
You can define functions that can be triggered by assistants by using the assistantSkillTrigger trigger binding.
These functions are invoked by the extension when an assistant signals that it would like to invoke a function in response to a user prompt. The name of the function, the description provided by the trigger, and the parameter name are all hints that the underlying language model uses to determine when and how to invoke an assistant function.
C# example
public class AssistantSkills
{
readonly ITodoManager todoManager;
readonly ILogger<AssistantSkills> logger;
// This constructor is called by the Azure Functions runtime's dependency injection container.
public AssistantSkills(ITodoManager todoManager, ILogger<AssistantSkills> logger)
{
this.todoManager = todoManager ?? throw new ArgumentNullException(nameof(todoManager));
this.logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
// Called by the assistant to create new todo tasks.
[Function(nameof(AddTodo))]
public Task AddTodo([AssistantSkillTrigger("Create a new todo task")] string taskDescription)
{
if (string.IsNullOrEmpty(taskDescription))
{
throw new ArgumentException("Task description cannot be empty");
}
this.logger.LogInformation("Adding todo: {task}", taskDescription);
string todoId = Guid.NewGuid().ToString()[..6];
return this.todoManager.AddTodoAsync(new TodoItem(todoId, taskDescription));
}
// Called by the assistant to fetch the list of previously created todo tasks.
[Function(nameof(GetTodos))]
public Task<IReadOnlyList<TodoItem>> GetTodos(
[AssistantSkillTrigger("Fetch the list of previously created todo tasks")] object inputIgnored)
{
this.logger.LogInformation("Fetching list of todos");
return this.todoManager.GetTodosAsync();
}
}
You can find samples in multiple languages with instructions in the assistant samples directory.
Embeddings Generator
OpenAI's text embeddings measure the relatedness of text strings. Embeddings are commonly used for:
- Search (where results are ranked by relevance to a query string)
- Clustering (where text strings are grouped by similarity)
- Recommendations (where items with related text strings are recommended)
- Anomaly detection (where outliers with little relatedness are identified)
- Diversity measurement (where similarity distributions are analyzed)
- Classification (where text strings are classified by their most similar label)
Processing of the source text files typically involves chunking the text into smaller pieces, such as sentences or paragraphs, and then making an OpenAI call to produce embeddings for each chunk independently. Finally, the embeddings need to be stored in a database or other data store for later use.
C# embeddings generator example
[Function(nameof(GenerateEmbeddings_Http_RequestAsync))]
public async Task GenerateEmbeddings_Http_RequestAsync(
[HttpTrigger(AuthorizationLevel.Function, "post", Route = "embeddings")] HttpRequestData req,
[EmbeddingsInput("{RawText}", InputType.RawText)] EmbeddingsContext embeddings)
{
using StreamReader reader = new(req.Body);
string request = await reader.ReadToEndAsync();
EmbeddingsRequest? requestBody = JsonSerializer.Deserialize<EmbeddingsRequest>(request);
if (requestBody == null || requestBody.RawText == null)
{
    throw new ArgumentException("Invalid request body. Make sure that you pass in {\"RawText\": value } as the request body.");
}
this.logger.LogInformation(
    "Received {count} embedding(s) for input text containing {length} characters.",
    embeddings.Count,
    requestBody.RawText.Length);
// TODO: Store the embeddings into a database or other storage.
}
Python example
@app.function_name("GenerateEmbeddingsHttpRequest")
@app.route(route="embeddings", methods=["POST"])
@app.embeddings_input(arg_name="embeddings", input="{rawText}", input_type="rawText", model="%EMBEDDING_MODEL_DEPLOYMENT_NAME%")
def generate_embeddings_http_request(req: func.HttpRequest, embeddings: str) -> func.HttpResponse:
user_message = req.get_json()
embeddings_json = json.loads(embeddings)
embeddings_request = {
"raw_text": user_message.get("RawText"),
"file_path": user_message.get("FilePath")
}
logging.info(f'Received {embeddings_json.get("count")} embedding(s) for input text '
f'containing {len(embeddings_request.get("raw_text"))} characters.')
# TODO: Store the embeddings into a database or other storage.
return func.HttpResponse(status_code=200)
Semantic Search
The semantic search feature allows you to import documents into a vector database using an output binding and query the documents in that database using an input binding. For example, you can have a function that imports documents into a vector database and another function that issues queries to OpenAI using content stored in the vector database as context (also known as the Retrieval Augmented Generation, or RAG technique).
The supported list of vector databases is extensible, and more can be added by authoring a specially crafted NuGet package. Visit the folder for the vector database you are using for specific usage information:
- Azure AI Search - See source code
- Azure Data Explorer - See source code
- Azure Cosmos DB using MongoDB (vCore) - See source code
More may be added over time.
C# document storage example
This HTTP trigger function takes a URL of a file as input, generates embeddings for the file, and stores the result into an Azure AI Search Index.
public class EmbeddingsRequest
{
[JsonPropertyName("Url")]
public string? Url { get; set; }
}
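The IngestFile function below returns an EmbeddingsStoreOutputResponse wrapper that carries both the HTTP response and the embeddings-store output binding value. The wrapper's definition isn't shown in the original snippet; here is a minimal sketch of what it could look like, with the EmbeddingsStoreOutput attribute arguments inferred from the Python decorator in the next example (connection AISearchEndpoint, collection openai-index), so treat the exact signature as an assumption:
public class EmbeddingsStoreOutputResponse
{
    // Assumed attribute shape: the input template, input type, connection
    // setting name, collection name, and model mirror the Python decorator
    // in the example that follows.
    [EmbeddingsStoreOutput("{url}", InputType.Url, "AISearchEndpoint", "openai-index", Model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")]
    public SearchableDocument? SearchableDocument { get; set; }

    public HttpResponseData? HttpResponse { get; set; }
}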
[Function("IngestFile")]
public static async Task<EmbeddingsStoreOutputResponse> IngestFile(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
{
using StreamReader reader = new(req.Body);
string request = await reader.ReadToEndAsync();
EmbeddingsRequest? requestBody = JsonSerializer.Deserialize<EmbeddingsRequest>(request);
if (requestBody == null || requestBody.Url == null)
{
throw new ArgumentException("Invalid request body. Make sure that you pass in {\"Url\": value } as the request body.");
}
Uri uri = new(requestBody.Url);
string filename = Path.GetFileName(uri.AbsolutePath);
HttpResponseData response = req.CreateResponse(HttpStatusCode.Created);
return new EmbeddingsStoreOutputResponse
{
HttpResponse = response,
SearchableDocument = new SearchableDocument(filename)
};
}
Python document storage example
@app.function_name("IngestFile")
@app.route(methods=["POST"])
@app.embeddings_store_output(arg_name="requests", input="{url}", input_type="url", connection_name="AISearchEndpoint", collection="openai-index", model="%EMBEDDING_MODEL_DEPLOYMENT_NAME%")
def ingest_file(req: func.HttpRequest, requests: func.Out[str]) -> func.HttpResponse:
user_message = req.get_json()
if not user_message:
return func.HttpResponse(json.dumps({"message": "No message provided"}), status_code=400, mimetype="application/json")
file_name_with_extension = os.path.basename(user_message["Url"])
title = os.path.splitext(file_name_with_extension)[0]
create_request = {
"title": title
}
requests.set(json.dumps(create_request))
response_json = {
"status": "success",
"title": title
}
return func.HttpResponse(json.dumps(response_json), status_code=200, mimetype="application/json")
Tip - To improve context preservation between chunks of large documents, specify the maximum overlap between chunks as well as the chunk size. The default values for MaxChunkSize and MaxOverlap are 8 * 1024 and 128 characters, respectively.
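For instance, these could plausibly be set as named properties on the embeddings-store output binding attribute; this is a sketch only, and the placement of these properties on the attribute is an assumption:
// Hypothetical: overriding the chunking defaults on the output binding.
[EmbeddingsStoreOutput("{url}", InputType.Url, "AISearchEndpoint", "openai-index",
    Model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%",
    MaxChunkSize = 4 * 1024,  // smaller chunks than the 8 * 1024 default
    MaxOverlap = 256)]        // larger overlap than the 128-character default
public SearchableDocument? SearchableDocument { get; set; }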
C# document query example
This HTTP trigger function takes a query prompt as input, pulls in semantically similar document chunks into a prompt, and then sends the combined prompt to OpenAI. The results are then made available to the function, which simply returns that chat response to the caller.
Tip - To give the OpenAI model more knowledge, increase the number of result sets sent to the model along with the system prompt via the MaxKnowledgeCount binding property, which defaults to 1. The SystemPrompt in SemanticSearchRequest can also be tweaked with user instructions on how to process the knowledge sets appended to it.
public class SemanticSearchRequest
{
[JsonPropertyName("Prompt")]
public string? Prompt { get; set; }
}
[Function("PromptFile")]
public static IActionResult PromptFile(
[HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
[SemanticSearchInput("AISearchEndpoint", "openai-index", Query = "{Prompt}", ChatModel = "%CHAT_MODEL_DEPLOYMENT_NAME%", EmbeddingsModel = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] SemanticSearchContext result)
{
return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
Python document query example
@app.function_name("PromptFile")
@app.route(methods=["POST"])
@app.semantic_search_input(arg_name="result", connection_name="AISearchEndpoint", collection="openai-index", query="{Prompt}", embeddings_model="%EMBEDDING_MODEL_DEPLOYMENT_NAME%", chat_model="%CHAT_MODEL_DEPLOYMENT_NAME%")
def prompt_file(req: func.HttpRequest, result: str) -> func.HttpResponse:
result_json = json.loads(result)
response_json = {
"content": result_json.get("response"),
"content_type": "text/plain"
}
return func.HttpResponse(json.dumps(response_json), status_code=200, mimetype="application/json")
The responses from the above function will be based on relevant document snippets which were previously uploaded to the vector database. For example, assuming you uploaded internal emails discussing a new feature of Azure Functions that supports OpenAI, you could issue a query similar to the following:
POST http://localhost:7127/api/PromptFile
Content-Type: application/json
{
"Prompt": "Was a decision made to officially release an OpenAI binding for Azure Functions?"
}
And you might get a response that looks like the following (actual results may vary):
HTTP/1.1 200 OK
Content-Length: 454
Content-Type: text/plain
There is no clear decision made on whether to officially release an OpenAI binding for Azure Functions as per the email "Thoughts on Functions+AI conversation" sent by Bilal. However, he suggests that the team should figure out if they are able to free developers from needing to know the details of AI/LLM APIs by sufficiently/elegantly designing bindings to let them do the "main work" they need to do. Reference: Thoughts on Functions+AI conversation.
IMPORTANT: Azure OpenAI requires you to specify a deployment when making API calls instead of a model. The deployment is a specific instance of a model that you have deployed to your Azure OpenAI resource. To make code more portable across OpenAI and Azure OpenAI, the bindings in this extension use the Model, ChatModel and EmbeddingsModel properties to refer to either the OpenAI model or the Azure OpenAI deployment ID, depending on whether you're using OpenAI or Azure OpenAI.
All samples in this project rely on default model selection, which assumes the models are named after the OpenAI models. If you want to use an Azure OpenAI deployment, you'll want to configure the Model, ChatModel and EmbeddingsModel properties explicitly in your binding configuration. Here are a couple of examples:
// "gpt-35-turbo" is the name of an Azure OpenAI deployment
[Function(nameof(WhoIs))]
public static string WhoIs(
[HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequest req,
[TextCompletionInput("Who is {name}?", Model = "gpt-35-turbo")] TextCompletionResponse response)
{
return response.Content;
}
public class SemanticSearchRequest
{
[JsonPropertyName("Prompt")]
public string? Prompt { get; set; }
}
// "my-gpt-4" and "my-ada-2" are the names of Azure OpenAI deployments corresponding to gpt-4 and text-embedding-ada-002 models, respectively
[Function("PromptEmail")]
public IActionResult PromptEmail(
[HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
[SemanticSearchInput("KustoConnectionString", "Documents", Query = "{Prompt}", ChatModel = "my-gpt-4", EmbeddingsModel = "my-ada-2")] SemanticSearchContext result)
{
return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
Default OpenAI models
- Chat Completion - gpt-3.5-turbo
- Embeddings - text-embedding-ada-002
- Text Completion - gpt-3.5-turbo
When using non-Azure OpenAI, you can omit the model specification in attributes to use the default models.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.1 is compatible. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
.NETStandard 2.1
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
0.3.0-alpha | 71 | 10/8/2024 |
0.2.0-alpha | 99 | 5/6/2024 |
0.1.0-alpha | 84 | 4/24/2024 |