Blog

My Custom AI Assistant: Building a Personalized Chatbot with OpenAI’s Assistants API

Introduction

Create your own personalized AI companion with an array of customizable parameters that define its behavior, knowledge, and capabilities. Key components include:

  • Instructions: Forge the personality and response patterns of your AI assistant by defining how it interacts with users and utilizes AI models.
  • Cutting-Edge AI Models: Choose from advanced AI models like GPT-3.5 or GPT-4 to power your assistant, or wait for upcoming tailored models to take your AI companion’s intelligence to the next level.
  • Powerful Tools: Equip your AI assistant with tools like a Code Interpreter for seamless code handling, a Knowledge Retrieval system to tap into a vast pool of information, and a Function Calling feature to execute specific tasks and operations, making your AI companion a versatile helper tailored to your needs.

Customizing the parameters of your AI assistant allows you to create a unique, multifaceted companion tailored to your needs and budget. The combination of AI models, specific instructions, and versatile tools unlocks various capabilities and offers a wide range of possibilities.

Tools

Code Interpreter

  • Code Interpreter Feature: The Assistants API’s Code Interpreter enables the execution of Python code in a secure, sandboxed environment.
  • Capabilities: With Code Interpreter, your AI assistant can write and run Python code in response to your requests, process files you upload, and return generated results and files (see the sketch that follows this list).
  • Pricing: Each session of Code Interpreter usage is charged at $0.03, with a default session duration of 1 hour. This means that within a 1-hour window, users can engage with Code Interpreter in the same thread, and only a single session fee will be applied.
  • File Access: Files attached at the assistant level are accessible across all runs associated with that assistant. Additionally, files can be included at the thread level, granting access exclusively to specific threads.
  • Cost Considerations: When using Code Interpreter, files linked to assistants or messages are not indexed, meaning you won’t incur any additional charges for them.
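
As a rough illustration, here is a minimal sketch of creating an assistant with Code Interpreter enabled, written with the Node.js openai SDK in the same style as the demo code later in this article; the name, instructions, and model are placeholder choices:

import OpenAI from "openai";
import "dotenv/config";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// Create an assistant with the Code Interpreter tool enabled.
const codeAssistant = await openai.beta.assistants.create({
    name: "Math Helper",
    instructions: "You are a helpful assistant. Write and run Python code to answer questions.",
    model: "gpt-3.5-turbo",
    tools: [{ type: "code_interpreter" }],
});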

Knowledge Retrieval

  • Implementing Search: When you upload and provide a file to the assistant, OpenAI automatically segments the document, generates embeddings, and conducts vector searches to locate relevant content that addresses the user’s query.
  • Search Techniques: There are two primary search methods (a minimal setup sketch follows this list):
    • Passing the content of short documents directly into the prompt.
    • Running a vector search over embedded chunks for longer documents.
  • Enhanced Search Quality: Retrieval is optimized for quality by adding the content relevant to the user’s query to the context of the model call.
  • Pricing: The cost of the search feature is $0.20 per gigabyte (GB) per assistant per day. Removing a file from the assistant also eliminates it from the search index.
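
As a rough sketch of the setup (assuming a local knowledge.txt file, as in the demo later on, and the same Node.js openai SDK), the file is uploaded with the assistants purpose and then attached to a retrieval-enabled assistant:

import fs from "fs";
import OpenAI from "openai";
import "dotenv/config";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// Upload the knowledge file so assistants can use it.
const file = await openai.files.create({
    file: fs.createReadStream("knowledge.txt"),
    purpose: "assistants",
});

// Create an assistant with the retrieval tool and attach the file at the assistant level.
const retrievalAssistant = await openai.beta.assistants.create({
    name: "Knowledge Assistant",
    instructions: "Answer questions using the attached knowledge file.",
    model: "gpt-3.5-turbo",
    tools: [{ type: "retrieval" }],
    file_ids: [file.id],
});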

Function Calling 

  • Function Recognition: The Assistants API can detect when one of your custom functions should be called and determine the arguments to pass to it; executing the function remains your responsibility.
  • Seamless Interactions: The API pauses the run while a function call is outstanding and resumes once you submit the function’s output, providing a smooth and integrated user experience.

Make an assistant

Two methods for creating AI Assistants: You can build your own AI assistants using either the Assistants API or the Assistants Playground.

Assistants API: Developer’s paradise – Build customized AI assistants with limitless potential.

Assistants Playground: No coding needed – Experiment and design personalized AI companions with ease.

Workflow

  • Assistant: Your personalized AI companion, complete with custom instructions, AI models, and integrated tools.
  • Thread: A dialogue session between the AI assistant and the user, storing all exchanged messages. Threads automatically manage content length to ensure context compatibility with AI models.
  • Message: A communication unit within a thread, containing text, images, or other file attachments. These messages are organized as a list within each thread.
  • Run: The process through which the AI assistant works through the thread’s messages with its model and tools to accomplish a task. A run moves through statuses such as queued, in_progress, requires_action, completed, failed, and expired during its execution.
  • Run Step: A detailed breakdown of each step taken by the AI assistant during a run, providing insight into the assistant’s decision-making and task-completion process (a minimal code sketch of the whole workflow follows this list).
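
To make these objects concrete, here is a minimal sketch of the workflow using the Node.js openai SDK (the same client used in the demo later in this article); the assistant ID and the message content are placeholders, and polling is kept to a bare minimum:

import OpenAI from "openai";
import "dotenv/config";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// Thread: a dialogue session that stores the messages.
const thread = await openai.beta.threads.create();

// Message: add a user message to the thread.
await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "Summarize the attached file.",
});

// Run: ask an existing assistant to process the thread.
const run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: "asst_placeholder", // replace with a real assistant ID
});

// In practice, poll the run until it completes (see the Limitation section).
await new Promise(resolve => setTimeout(resolve, 2000));

// Run Steps: inspect what the assistant did during the run.
const steps = await openai.beta.threads.runs.steps.list(thread.id, run.id);
console.log(steps.data.map(step => step.type));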

Merit

  • Context Management: The AI assistant effectively handles context to stay within the model’s limitations.
  • Message Content: Messages can include text, images, and various file attachments for flexible communication.
  • File Attachments: Each AI assistant can accommodate up to 20 files, with a maximum file size of 512 MB and a token limit of 2,000,000 per file.
  • Parallel Tool Access: The AI assistant can use multiple tools concurrently, such as Code Interpreter, Knowledge Retrieval, and Function Calling (see the sketch after this list).
  • File Format Compatibility: The assistant supports various file formats and can generate new files using tools like Code Interpreter.
  • Easy Integration: The AI assistant platform is designed for seamless integration with existing applications.
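
For example, a single assistant can be created with all three tools at once; this is a minimal sketch, and the function definition here is a placeholder:

import OpenAI from "openai";
import "dotenv/config";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// One assistant combining Code Interpreter, Knowledge Retrieval, and Function Calling.
const multiToolAssistant = await openai.beta.assistants.create({
    name: "Multi-tool Assistant",
    instructions: "Use the available tools to help the user.",
    model: "gpt-3.5-turbo",
    tools: [
        { type: "code_interpreter" },
        { type: "retrieval" },
        {
            type: "function",
            function: {
                name: "getCurrentWeather",
                description: "Get the weather in a given location",
                parameters: {
                    type: "object",
                    properties: {
                        location: { type: "string", description: "The city and state, e.g. San Francisco, CA" },
                    },
                    required: ["location"],
                },
            },
        },
    ],
});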

Limitation

  • Python Support: Code Interpreter currently only executes Python code.
  • Status Checking: The API does not push status updates; you need to poll by retrieving Run objects regularly to check their status (see the polling sketch after this list).
  • Thread Locking: In-progress Runs lock associated threads until completion.
  • Output Streaming: Real-time output streaming (including messages and Run steps) is not supported.
  • Limited Tool Access: Tools like DALL·E and browsing are not available.
  • Image Messages: Creating user messages with images is not supported.
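
Because status updates are not streamed, a simple polling helper such as the following sketch can be used instead of fixed delays; the interval is an arbitrary choice, and production code should also add a timeout:

// Poll a run until it leaves the pending statuses; returns the final Run object.
async function waitForRun(openai, threadId, runId, intervalMs = 1000) {
    const pendingStatuses = ["queued", "in_progress", "cancelling"];
    let run = await openai.beta.threads.runs.retrieve(threadId, runId);
    while (pendingStatuses.includes(run.status)) {
        await new Promise(resolve => setTimeout(resolve, intervalMs));
        run = await openai.beta.threads.runs.retrieve(threadId, runId);
    }
    // Possible final statuses: completed, requires_action, failed, cancelled, expired.
    return run;
}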

Demo

In this demonstration, we’ll walk through the process of creating an AI assistant using the user-friendly Assistants Playground interface. We’ll focus on utilizing the Code Interpreter tool to enhance the assistant’s capabilities.

Code Interpreter

Begin by accessing the Assistants Playground and creating a new assistant. Specify the assistant’s name, provide personalized instructions, and choose a suitable AI model. Additionally, enable the Code Interpreter option to incorporate code execution capabilities into your assistant.

As an example, we’ll create an assistant and instruct it to generate a function that returns a random number between 1 and 100. The assistant will utilize the Code Interpreter to execute Python code directly within the AI environment, showcasing the seamless integration of code execution and response generation.

Within the Assistants Playground, users can track the activity of their AI assistants by reviewing the API logs displayed on the right-hand side of the interface.

Retrieval

Users can enable the search option within their AI assistant and upload one or more relevant knowledge files. These files serve as the information source for the assistant, allowing it to effectively respond to user messages by retrieving and presenting relevant data.

In this example, the assistant searches for Pionero’s information in the knowledge.txt file and returns the results to the user.

Function 

To create a custom function for your assistant, simply provide a unique function name, an informative description, define the required parameters, and indicate any mandatory fields. After completing the function’s definition, save your changes to finalize the process.

The AI assistant uses the provided function description to identify the specific user messages that will trigger the function and its associated parameters.

After the function fires, the run pauses and the thread is held while it waits for the final result. However, the assistant’s responsibility is only to determine the function name and the parameters of that function, not the contents of the function. This means you have to implement and host the getCurrentWeather function yourself.

Take a look at this code.

import OpenAI from "openai";
import "dotenv/config";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// Create an assistant with the function getCurrentWeather
const assistant = await openai.beta.assistants.create({
    name: "Weather Assistant",
    instructions: "You are a weather bot. Use the provided functions to answer questions.",
    model: "gpt-3.5-turbo",
    tools: [{
        type: "function",
        function: {
            name: "getCurrentWeather",
            description: "Get the weather in a given location",
            parameters: {
                type: "object",
                properties: {
                    location: { type: "string", description: "The city and state, e.g. San Francisco, CA" }
                },
                required: ["location"]
            }
        }
    }]
});

// Create a thread
const thread = await openai.beta.threads.create();

// Add a message to the thread
const message = await openai.beta.threads.messages.create(
    thread.id,
    {
        role: "user",
        content: "What is the weather in Hanoi."
    }
);

// Start a run so the assistant processes the thread
const run = await openai.beta.threads.runs.create(
    thread.id,
    { assistant_id: assistant.id }
);

// Wait for the run status to change to requires_action
// (a fixed delay keeps the example short; polling the status is more robust)
await new Promise(r => setTimeout(r, 2000));

// Retrieve the run status
const runRetrieve = await openai.beta.threads.runs.retrieve(
    thread.id,
    run.id
);

// The getCurrentWeather tool call is raised here
if (runRetrieve.status === 'requires_action' && runRetrieve.required_action.submit_tool_outputs.tool_calls[0].function.name === 'getCurrentWeather') {
    const toolCallId = runRetrieve.required_action.submit_tool_outputs.tool_calls[0].id;

    await openai.beta.threads.runs.submitToolOutputs(
        thread.id,
        run.id,
        {
            tool_outputs: [
                {
                    tool_call_id: toolCallId,
                    // The final result goes here; in practice it would come from a real weather API
                    output: Math.floor(Math.random() * 30) + "C",
                },
            ],
        }
    );
}

// Give the run time to finish, then list the messages in the thread
await new Promise(r => setTimeout(r, 2000));

const messages = await openai.beta.threads.messages.list(
    thread.id
);

messages.data.forEach(element => {
    console.log("element:", element.content[0].text);
});

The console output lists the messages in the thread, most recent first, showing the assistant’s final answer followed by the original user message:

element: { value: 'The current weather in Hanoi is 16°C.', annotations: [] }
element: { value: 'What is the weather in Hanoi.', annotations: [] }

Conclusion

In this article, you’ve explored the capabilities, strengths, and limitations of AI assistants, along with gaining insight into the assistant creation process. You also saw how Code Interpreter lets the assistant use Python to work directly with images, Excel files, JSON, and more. Additionally, you’ve learned about the options for retrieving specific data from uploaded files, leveraging custom functions, and seamlessly integrating AI assistants into existing applications via the Assistants API.