
Function Calling

Function calling lets you connect models to external tools and applications. It is used to give AI assistants new capabilities and to build deep integrations between your applications and the models.

Examples of Function Calling

  • Enabling assistants to fetch data: an AI assistant needs to fetch the latest customer data from an internal system when a user asks, "What are my recent orders?", before it can generate a response to the user.
  • Enabling assistants to take actions: an AI assistant needs to schedule meetings based on user preferences and calendar availability.
  • Enabling assistants to perform computation: a math tutor assistant needs to perform a math computation.
  • Building rich workflows: a data extraction pipeline that fetches raw text, then converts it to structured data and saves it in a database.
  • Modifying your application's UI: you can use function calls to update the UI based on user input, for example, rendering a pin on a map.

Learn more about using function calling

Apply Function Calling

In Compass, function calling is supported through the Chat Completions API. In this section, you define a function such as get_current_weather and tell the model that it can use this function to retrieve weather information for a given location. When a user asks about the weather, the model cannot answer on its own because it lacks live data. Instead, it detects that a suitable function is available, builds the input parameters from the user's question, and indicates which function to call. Once you execute the function the model suggested and return the result, the model can respond to the user's weather question.
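For example, when the model decides to call get_current_weather, its reply carries a tool_calls entry whose arguments field is a JSON-encoded string that you parse before invoking your function. A minimal sketch of that shape (the id and argument values below are illustrative, not real API output):

```python
import json

# Illustrative assistant message containing a tool call. The shape follows
# the Chat Completions response format; the id and values are made up.
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                # Arguments arrive as a JSON string, not a Python dict
                "arguments": '{"location": "Paris, France", "unit": "celsius"}',
            },
        }
    ],
}

call = assistant_message["tool_calls"][0]
args = json.loads(call["function"]["arguments"])
print(call["function"]["name"], args["location"])
```

Your code is responsible for mapping the function name to a real implementation and passing the parsed arguments to it, as the full example below does.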

To start using function calling:

  1. Define the function using the tools parameter. See the API reference documentation.
  2. Call the particular functions the model requests in its response.
  3. Send the information for each function call and function response back to the model.
from openai import OpenAI
import json


client = OpenAI(
    base_url="https://api.core42.ai/v1",
    default_headers={"api-key": "<API_KEY>"},
    api_key="XXX"
)

# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

def run_conversation():
    # Step 1: send the conversation and available functions to the model
    messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
        tool_choice="auto",  # auto is default, but we'll be explicit
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    # Step 2: check if the model wanted to call a function
    if tool_calls:
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit", "fahrenheit"),
            )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with function response
        second_response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        return second_response.choices[0].message.content
    # If the model answered directly without calling a tool, return that answer
    return response_message.content

print(run_conversation())
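As the comment in Step 3 of the code notes, the arguments string is model-generated and may be malformed, and the model can occasionally name a function you did not define. One way to handle this is to return an error payload to the model instead of raising, so it can recover in its next turn. A hedged sketch (the helper name safe_tool_dispatch is our own, not part of any API):

```python
import json

def safe_tool_dispatch(available_functions, function_name, arguments_json):
    """Look up and invoke a tool function, returning an error payload
    (as a JSON string) instead of raising, so the model can recover."""
    function_to_call = available_functions.get(function_name)
    if function_to_call is None:
        return json.dumps({"error": f"unknown function: {function_name}"})
    try:
        function_args = json.loads(arguments_json)
    except json.JSONDecodeError as exc:
        return json.dumps({"error": f"invalid arguments: {exc}"})
    try:
        return function_to_call(**function_args)
    except TypeError as exc:  # e.g. unexpected or missing parameters
        return json.dumps({"error": f"bad parameters: {exc}"})
```

In the loop over tool_calls above, the returned string would be used as the "content" of the role "tool" message; the model then sees the error description and can retry with corrected arguments or explain the failure to the user.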