# Changelog

## 0.28.2

- Fix bug where partially streamed Unicode characters (e.g. Chinese) would cause an error in OpenAI function calls.
- Add an `openai.finish_reason` span attribute for …
- Improve completion/prompt logging to include explicit message text.
- Fix bug where memoized components could duplicate content.
- Refactor `<Converse>` to allow rounds to progress in parallel when content allows.
- Add a `batchFrames` render option to coalesce ready frames.
- Fix a `js-tiktoken` import that fails on 1.0.8.
- Remove the MDX repair attempt.
- Reduce the standard system prompt size.
- Sidekick can now interject with filler content (e.g. "Let me check on that.") when the model requests a function call.
- Update the OpenAI client to 4.16.0.
- Add support for OpenAI parallel function calls.
- Update the model enums.
- In the OpenTelemetry logger, ensure that `SeverityNumber` is set.
- Put the user system messages before the built-in system messages.
- Make the MDX formatting logic conditional on using MDX.
- Accept `children` as the conversation to act on (defaults to …).
- Fix a bug in `LimitToValidMdx` where a whitespace character was initially yielded.
- Sidekick now accepts a `useCitationCard` prop, which controls whether it will emit `<Citation />` MDX components (see the sketch below).
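A minimal sketch of the new prop. The import path and the other props shown here are assumptions for illustration, not taken from this changelog:

```tsx
/** @jsxImportSource ai-jsx */
import { Sidekick } from 'ai-jsx/sidekick';

// Hypothetical setup: only useCitationCard comes from this changelog entry.
export const sidekick = (
  <Sidekick
    systemMessage="You answer questions about our public docs."
    // When false, the Sidekick is instructed not to emit <Citation /> MDX.
    useCitationCard={false}
  />
);
```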
- Sidekick is no longer locked to GPT-4-32k; it now runs with whatever model is set by the AI.JSX context. If you pass tools, make sure the model supports native function calling, or you'll get an error.
- Fix bug in Anthropic's `ChatCompletion` where it was too aggressive in checking that `tools` don't exist.
- Remove the `role` prop from the `Sidekick` component. The `systemMessage` is now always given to the model as the last part of the context window. Remove other cruft from the built-in Sidekick system message.
- Remove the `Card` component from the Sidekick's possible output MDX components.
- Fix issue with how the SDK handles request errors.
- Enable Sidekicks to introduce themselves at the start of a conversation.
- Fix an issue where empty strings in conversational prompts cause errors to be thrown.
- Modified `lib/openai` to preload the tokenizer, avoiding a stall on first use.
- Fixed an issue where `debug(component)` would throw an exception if a component had a prop that could not be JSON-serialized. A sketch of the scenario follows.
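A sketch of the failure mode that was fixed. The `ai-jsx/core/debug` import path and the component shown are assumptions:

```tsx
/** @jsxImportSource ai-jsx */
import { debug } from 'ai-jsx/core/debug';

// A prop holding a function is not JSON-serializable. Previously,
// debug() threw when pretty-printing an element with such a prop.
function Greeting(props: { formatter: (name: string) => string }) {
  return props.formatter('world');
}

console.log(debug(<Greeting formatter={(name) => `Hello, ${name}!`} />));
```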
- Updated `Sidekick` to add the following options: …
- Added components for Automatic Speech Recognition (ASR).
- Added components for Text-to-Speech (TTS).
- ASR providers include Deepgram, Speechmatics, AssemblyAI, Rev AI, Soniox, and Gladia. TTS providers include Google Cloud, AWS, Azure, and ElevenLabs.
- Fixed a bug where passing an empty `functionDefinitions` prop to `<OpenAIChatModel>` would cause an error (see the sketch below).
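A sketch of the case that now works. The import path is an assumption, and the empty `functionDefinitions` value is shown as an empty object, which may not match the prop's exact shape:

```tsx
/** @jsxImportSource ai-jsx */
import { OpenAIChatModel } from 'ai-jsx/lib/openai';
import { UserMessage } from 'ai-jsx/core/completion';

// Passing an empty set of function definitions used to throw an error.
export const app = (
  <OpenAIChatModel model="gpt-4" functionDefinitions={{}}>
    <UserMessage>Hello!</UserMessage>
  </OpenAIChatModel>
);
```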
- Added the ability to set Anthropic/OpenAI clients without setting the default model.
- Increased the default token limit for automatic API response trimming.
- API token limiting: long API responses in Sidekick are now automatically truncated. If this happens, the response is chunked and the LLM is given a new function, `loadBySimilarity`, to query the last function response.
- Updated `<UseTools>` to allow AI.JSX components to be tools.
- Updated `FixieCorpus` to take a …
- Added a `FixieCorpus.createTool` helper to create a tool that consults a Fixie corpus; a sketch follows.
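A sketch of the helper in use. The argument names and order for `createTool`, the corpus ID, and the import paths are assumptions:

```tsx
/** @jsxImportSource ai-jsx */
import { FixieCorpus } from 'ai-jsx/batteries/docs';
import { UseTools } from 'ai-jsx/batteries/use-tools';
import { UserMessage } from 'ai-jsx/core/completion';

// Hypothetical corpus ID and tool description.
const consultDocs = FixieCorpus.createTool(
  'my-corpus-id',
  'Searches the product documentation for relevant passages'
);

export const app = (
  <UseTools tools={{ consultDocs }}>
    <UserMessage>How do I rotate an API key?</UserMessage>
  </UseTools>
);
```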
- Updated the default URL for …
- Updated the DocsQA battery to use the new Fixie corpus REST API.
- Add `Sidekick` component. Sidekicks are a high-level abstraction for combining tool use, docs QA, and generated UI.
- Change `MdxSystemMessage` to no longer automatically infer component names from the `usageExamples` prop: `usageExamples` is now a plain string, and component names are passed separately via a dedicated prop.
- Change the `<ConversationHistory>` component to render to a node from a `ConversationHistoryContext` provider, rather than from OpenAI message types.
- Replace usage of `openai-edge` with the `openai` v4 package.
- Update the `<FixieCorpus>` component to use the new Fixie Corpus REST API. This is currently only available to users on beta.fixie.ai but will be brought to all users soon.
- Memoized streaming elements no longer replay their entire stream with every render; instead, they start with the last rendered frame.
- Elements returned by partial rendering are automatically memoized to ensure they only render once.
- Streaming components can no longer yield promises or generators; only `AI.AppendOnlyStream` values can be yielded. The `AI.AppendOnlyStream` value is now a function that can be called with a non-empty value to append (see the sketch below).
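A sketch of a streaming component under the new rules. The call semantics are inferred from this entry and should be treated as assumptions:

```tsx
/** @jsxImportSource ai-jsx */
import * as AI from 'ai-jsx';

async function* Countdown() {
  // Opt in to append-only streaming by yielding the sentinel itself.
  yield AI.AppendOnlyStream;
  for (const n of ['3', '2', '1']) {
    // The sentinel is now also callable: invoking it with a non-empty
    // value appends that value to the stream.
    yield AI.AppendOnlyStream(`${n}... `);
  }
  return 'liftoff!';
}

export const app = <Countdown />;
```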
- In the OpenTelemetry integration:
  - Add prompt/completion attributes with token counts for `<OpenAIChatModel>`. This replaces the `tokenCount` attribute added in 0.9.1.
  - By default, only emit spans for …
- Add a `tokenCount` field to OpenTelemetry-emitted spans. Now, if you're emitting via OpenTelemetry (e.g. to Datadog), the spans will tell you how many tokens each component resolved to. This is helpful for answering questions like "how big is my system message?".
- Breaking: Remove prompt-engineered `UseTools`. Previously, if you called `UseTools` with a model that doesn't support native function calling (e.g. Anthropic), `UseTools` would use a polyfilled version that relied on prompt engineering to simulate function calling. However, this wasn't reliable enough in practice, so we've dropped it. A sketch of native tool use follows.
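For reference, a sketch of `UseTools` against a natively function-calling model. The `Tool` field names follow the use-tools battery but should be treated as assumptions:

```tsx
/** @jsxImportSource ai-jsx */
import { UseTools, Tool } from 'ai-jsx/batteries/use-tools';
import { UserMessage } from 'ai-jsx/core/completion';

const tools: Record<string, Tool> = {
  getTemperature: {
    description: 'Look up the current temperature for a city',
    parameters: {
      city: { description: 'City name', type: 'string', required: true },
    },
    func: ({ city }: { city: string }) => `It is 21°C in ${city}.`,
  },
};

// With the polyfill removed, the backing model must support native
// function calling (e.g. OpenAI); otherwise this errors.
export const app = (
  <UseTools tools={tools}>
    <UserMessage>How warm is it in Oslo right now?</UserMessage>
  </UseTools>
);
```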
- Fix issue where `gpt-4-32k` didn't accept functions.
- Fix issue where Anthropic didn't permit function calls/responses in its conversation history.
- Add Anthropic's claude-2 models as valid chat model types.
- Fix issue where Anthropic prompt formatting had extra …
## 0.8.5

- Fix issue where OpenTelemetry failures were not being properly attributed.
- Add OpenTelemetry integration for AI.JSX render tracing, which can be enabled by setting the `AIJSX_ENABLE_OPENTELEMETRY` environment variable.
- Throw validation errors when invalid elements (like bare strings) are passed to …
- Reduce logspam from memoization.
- Fix issue where the `description` field wasn't passed to function definitions.
- Add support for token-based conversation shrinking via `<Shrinkable>` (a sketch follows).
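A sketch of shrinking in practice, assuming a `<Shrinkable>` component in `ai-jsx/core/conversation` with `importance`/`replacement` props (both assumptions):

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, SystemMessage, UserMessage } from 'ai-jsx/core/completion';
import { Shrinkable } from 'ai-jsx/core/conversation';

export const app = (
  <ChatCompletion>
    <SystemMessage>You are a terse assistant.</SystemMessage>
    {/* Least-important content is dropped (or swapped for its replacement)
        first when the conversation exceeds the model's token budget. */}
    <Shrinkable
      importance={0}
      replacement={<UserMessage>[earlier conversation omitted]</UserMessage>}
    >
      <UserMessage>...a long prior conversation...</UserMessage>
    </Shrinkable>
    <UserMessage>What was my last question?</UserMessage>
  </ChatCompletion>
);
```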
- Refactor `MdxChatCompletion` to be `MdxSystemMessage`. You can now put this `SystemMessage` in any `ChatCompletion` to prompt the model to give MDX output; see the sketch below.
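A sketch of the refactored usage. The import path is an assumption, and the component's required props (e.g. usage examples) are omitted here for brevity:

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import { MdxSystemMessage } from 'ai-jsx/react/jit-ui/mdx';

// MdxSystemMessage renders as a SystemMessage, so it can be dropped
// into any ChatCompletion to request MDX output.
export const app = (
  <ChatCompletion>
    {/* Real usage may require additional props; see the docs. */}
    <MdxSystemMessage />
    <UserMessage>Summarize today's agenda as MDX.</UserMessage>
  </ChatCompletion>
);
```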
- Add `<ShowConversation>` and related components, which facilitate streaming conversations.
- Change `ChatCompletion` components to render to …
- Use the `AI.RenderContext` to ensure that memoized components render once, even if placed under a different context provider.
- Add the `AIJSX_LOG` environment variable to control log level and output location.
- Update `<UseTools>` to take a complete conversation as a `children` prop, rather than as a string.
- Update `toTextStream` to accept a `logger`, so you can now see log output when you're running AI.JSX on the server and outputting to a stream; a sketch follows. See AI + UI and Observability.
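A sketch of server-side logger wiring. The `PinoLogger` adapter and import paths are assumptions:

```tsx
/** @jsxImportSource ai-jsx */
import { toTextStream } from 'ai-jsx/stream';
import { PinoLogger } from 'ai-jsx/core/log';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import pino from 'pino';

const logger = pino({ level: 'debug' });

// e.g. a route handler that streams the completion to the client while
// emitting render logs on the server.
export function GET() {
  return new Response(
    toTextStream(
      <ChatCompletion>
        <UserMessage>Tell me a short joke.</UserMessage>
      </ChatCompletion>,
      new PinoLogger(logger)
    )
  );
}
```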
## 0.5.12

- Updated `readme.md` in the `ai-jsx` package to fix bugs on the npm landing page.
- Make JIT UI stream rather than appear all at once.
- Use `openai-edge` instead of …
- Image generation now produces an object which will render to a URL in the command line, but returns an `<img />` tag when used in the browser (React/Next).
- Fix build system issue that caused problems for some consumers.
- Remove need for projects consuming AI.JSX to set `"moduleResolution": "esnext"` in their `tsconfig.json`.
- Add Weights & Biases integration.
- Fix how env vars are read: when reading env vars, also read from `REACT_APP_VAR_NAME`. This makes your env vars available to projects using create-react-app.
- Add OpenAI client proxy.