How to let your end users choose their model

Many LLM applications let end users specify the model provider and model that power the application. This requires writing logic to initialize different ChatModels based on user configuration. The init_chat_model() helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.

Supported models

See the init_chat_model() API reference for a full list of supported integrations.

Make sure you have the integration packages installed for any model providers you want to support. For example, you should have langchain-openai installed to initialize an OpenAI model.

%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai

Basic usage

from langchain.chat_models import init_chat_model

# Returns a langchain_openai.ChatOpenAI instance.
gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
# Returns a langchain_anthropic.ChatAnthropic instance.
claude_opus = init_chat_model(
    "claude-3-opus-20240229", model_provider="anthropic", temperature=0
)
# Returns a langchain_google_vertexai.ChatVertexAI instance.
gemini_15 = init_chat_model(
    "gemini-1.5-pro", model_provider="google_vertexai", temperature=0
)

# Since all model integrations implement the ChatModel interface, you can use them in the same way.
print("GPT-4o: " + gpt_4o.invoke("what's your name").content + "\n")
print("Claude Opus: " + claude_opus.invoke("what's your name").content + "\n")
print("Gemini 1.5: " + gemini_15.invoke("what's your name").content + "\n")
API Reference: init_chat_model
GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?

Claude Opus: My name is Claude. It's nice to meet you!

Gemini 1.5: I am a large language model, trained by Google. I do not have a name.

Simple config example

user_config = {
    "model": "...user-specified...",
    "model_provider": "...user-specified...",
    "temperature": 0,
    "max_tokens": 1000,
}

llm = init_chat_model(**user_config)
llm.invoke("what's your name")
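In practice you may not want to pass arbitrary user input straight to init_chat_model(). One option is to validate the user-specified values against an allowlist first. A minimal sketch, assuming the ALLOWED_MODELS mapping and init_user_model() helper below are your own application code, not part of LangChain:

from langchain.chat_models import init_chat_model

# Illustrative allowlist of models each provider exposes to end users.
# These names and this validation logic are an example, not a LangChain API.
ALLOWED_MODELS = {
    "openai": {"gpt-4o", "gpt-4o-mini"},
    "anthropic": {"claude-3-opus-20240229"},
}

def init_user_model(user_config: dict):
    provider = user_config.get("model_provider")
    model = user_config.get("model")
    if provider not in ALLOWED_MODELS or model not in ALLOWED_MODELS[provider]:
        raise ValueError(f"Unsupported model: {provider}/{model}")
    # All other keys (temperature, max_tokens, ...) are forwarded as-is.
    return init_chat_model(**user_config)

llm = init_user_model({"model": "gpt-4o", "model_provider": "openai", "temperature": 0})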

Inferring model provider

For common and distinct model names, init_chat_model() will attempt to infer the model provider. See the API reference for the full list of inference behavior. For example, any model name that starts with gpt-3... or gpt-4... is inferred to use model provider openai.

gpt_4o = init_chat_model("gpt-4o", temperature=0)
claude_opus = init_chat_model("claude-3-opus-20240229", temperature=0)
gemini_15 = init_chat_model("gemini-1.5-pro", temperature=0)
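Because the provider is inferred for well-known names, a user config can omit model_provider entirely for such models. A short sketch, reusing a model name from the examples above:

# Provider is inferred from the model name ("claude-..." -> anthropic),
# so the user config only needs to supply the model name itself.
user_config = {"model": "claude-3-opus-20240229", "temperature": 0}
llm = init_chat_model(**user_config)
llm.invoke("what's your name")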
