
Non-streaming chat

Hopfield provides a simple way to interact with chat models. You can use different API providers with strong type guarantees, powered by Zod.

Usage

Use chat models from OpenAI:

```ts
import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";

const hopfield = hop.client(openai).provider(new OpenAI());

const chat = hopfield.chat();

const messages: hop.inferMessageInput<typeof chat>[] = [
  {
    role: "user",
    content: "How do you count to ten?",
  },
];

const response = await chat.get({
  messages,
});

const responseType = response.choices[0].__type;
//    ^? const responseType: "stop" | "length" | "content_filter"
if (responseType === "stop") {
  const message = response.choices[0].message;
  //    ^? const message: { role: "assistant"; content: string; }
}
```
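
The `__type` discriminator also makes the other finish reasons easy to branch on. Below is a minimal sketch (not taken from the Hopfield docs) that reuses the `chat` and `messages` from above and only reads `message` in the branch the example above confirms is typed:

```ts
// Hedged sketch: handling each possible __type value,
// assuming the same `chat` and `messages` as in the example above.
const result = await chat.get({
  messages,
});

const choice = result.choices[0];

if (choice.__type === "stop") {
  // The model finished normally, so the typed message is safe to read.
  console.log(choice.message.content);
} else if (choice.__type === "length") {
  // The completion hit the token limit before finishing.
  console.warn("Response was truncated by the token limit.");
} else {
  // "content_filter": the provider withheld content.
  console.warn("Response was filtered by the provider.");
}
```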

Parameters

Model Name

The model name to use for the chat completion.

```ts
const hopfield = hop.client(openai).provider(new OpenAI());

const chat = hopfield.chat("gpt-4-0613");
```

OpenAI

The default model name is shown below. To override this, you must use a model which is enabled on your OpenAI account.

```ts
import type { DefaultOpenAIChatModelName } from "hopfield/openai";
//            ^? type DefaultOpenAIChatModelName = "gpt-4-0613"
```

All possible model names are shown below (reach out if we are missing one!)

```ts
import type { OpenAIChatModelName } from "hopfield/openai";
//            ^? type OpenAIChatModelName = "gpt-4-0314" | "gpt-4-0613" | "gpt-4-32k-0314" | "gpt-4-32k-0613" | "gpt-3.5-turbo-0301" | "gpt-3.5-turbo-0613" | "gpt-3.5-turbo-1106" | "gpt-3.5-turbo-16k-0613" | "gpt-4-1106-preview"
```
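
A brief sketch of what this buys you, assuming (as the typed examples above suggest) that the model argument of hopfield.chat is constrained to OpenAIChatModelName:

```ts
// Sketch, assuming hopfield.chat constrains its model argument to OpenAIChatModelName.
const turboChat = hopfield.chat("gpt-3.5-turbo-0613"); // compiles: listed above

// @ts-expect-error — a model name outside the union should not type-check.
const badChat = hopfield.chat("gpt-imaginary-model");
```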

Response Count

The number of chat responses to be returned (usually referred to as n). For all providers, this defaults to 1 and is capped at 20.

```ts
const hopfield = hop.client(openai).provider(new OpenAI());

const chat = hopfield.chat("gpt-4-0613", 10);
```

The response can then be safely used:

```ts
const messages: hop.inferMessageInput<typeof chat>[] = [
  {
    role: "user",
    content: "What's the best way to get a bunch of chat responses?",
  },
];

const response = await chat.get({
  messages,
});

const chatCount = response.choices.length;
//    ^? const chatCount: 10
```
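
Each returned choice carries the same typed `__type` discriminator as a single response, so iterating over them follows the earlier pattern. A brief sketch, assuming the 10-response chat client from above:

```ts
// Sketch: print every completed choice from the 10-response client above.
for (const choice of response.choices) {
  if (choice.__type === "stop") {
    console.log(choice.message.content);
  }
}
```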
