
ChatNetmind


This page will help you get started with Netmind chat models. For detailed documentation of all ChatNetmind features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| --- | --- | --- | --- | --- | --- | --- |
| ChatNetmind | langchain-netmind | ✅/❌ | beta/❌ | ✅/❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |

Setup


To access Netmind models you'll need to create a Netmind account, get an API key, and install the langchain-netmind integration package.

Credentials


Head to (TODO: link) to sign up for Netmind and generate an API key. Once you've done this, set the NETMIND_API_KEY environment variable:

import getpass
import os

if not os.getenv("NETMIND_API_KEY"):
    os.environ["NETMIND_API_KEY"] = getpass.getpass("Enter your Netmind API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Netmind integration lives in the langchain-netmind package:

%pip install -qU langchain-netmind

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_netmind import ChatNetmind

llm = ChatNetmind(
    model="model-name",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate


API reference

For detailed documentation of all ChatNetmind features and configurations, head to the API reference.

