Vertex Claude API cannot stream output when using the ai-chatbot template

I use the nextjs-ai-chatbot (Next.js AI Chatbot) template and changed the LLM from OpenAI to the Vertex Claude API using GitHub - nalaso/anthropic-vertex-ai: Vercel AI community package for using anthropic through vertex ai.

After the change I can't get streaming output. I have tried many times, but the output still doesn't stream, and I don't know why. Has anyone encountered the same problem?

My core code is here:

console.log(model.apiIdentifier);

const result = await streamText({
  model: modelInstance,
  system: modelId === 'claude-3-5-coder' ? CodePrompt : regularPrompt,
  messages: coreMessages,
  maxSteps: 5,

  onFinish: async ({ responseMessages }) => {
    if (session.user && session.user.id) {
      try {
        const responseMessagesWithoutIncompleteToolCalls =
          sanitizeResponseMessages(responseMessages);
        console.log(
          'responseMessagesWithoutIncompleteToolCalls:',
          responseMessagesWithoutIncompleteToolCalls
        );
        await saveChat({
          id,
          messages: [
            ...coreMessages,
            ...responseMessagesWithoutIncompleteToolCalls,
          ],
          userId: session.user.id,
        });
      } catch (error) {
        console.error('Failed to save chat');
      }
    }
  },
});

// Log each text chunk as it arrives
for await (const textPart of result.textStream) {
  console.log(textPart);
}

// console.log('data:', result.toDataStreamResponse({ data: streamingData }));
return result.toDataStreamResponse();
}

Hi @taype, welcome to the Vercel Community! Thanks for posting your question here.

I'll try to recreate the issue and debug it with you. Meanwhile, I notice that in the code snippet above you pass data: streamingData to the console.log statement but not in the return result.toDataStreamResponse() statement. Can you confirm this from your application code?
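
If that data is meant for the client, the return statement would need the same option, something like:

return result.toDataStreamResponse({ data: streamingData });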

It'd also be helpful if you could share a larger code snippet for more context on the variables used. Also, can you confirm the exact output you see: is it nothing, or is it the text you expect but just not streaming?

"Hi @anshumanb, thanks for your help!

If I run this: return result.toDataStreamResponse({ data: streamingData }); then the issue arises right there.

Hi @taype, to help you debug this better, could you answer a few questions for me?

  1. What is the exact output you see: is it nothing, or is it the text you expect but just not streaming?
  2. If you get an error in the console, could you share that here?

Also, it'd be very helpful if you could share a larger code snippet for more context on the variables used. If this is a public repository, you can also share the link to it here.

Hi @anshumanb, thanks for your help again!

  1. I got the text, but it's just not streaming.
  2. I didn't get any error in my console; it just doesn't stream the output.
  3. I use the nextjs-ai-chatbot (Next.js AI Chatbot) template and changed the LLM from OpenAI to the Vertex Claude API using GitHub - nalaso/anthropic-vertex-ai: Vercel AI community package for using anthropic through vertex ai.
  4. The rest of the code is here:
// api/chat/route.ts
import {
  convertToCoreMessages,
  generateObject,
  JSONValue,
  Message,
  StreamData,
  streamObject,
  streamText,
} from 'ai';
import { z } from 'zod';
import { createAnthropicVertex } from '@/claude/anthropic-vertex-provider';
import { customModel } from '@/ai';
import { models } from '@/ai/models';
import { CodePrompt, regularPrompt } from '@/ai/prompts';
import { auth } from '@/app/(auth)/auth';
import {
  deleteChatById,
  getChatById,
  getDocumentById,
  saveChat,
  saveDocument,
  saveSuggestions,
} from '@/db/queries';
import { Suggestion } from '@/db/schema';
import { generateUUID, sanitizeResponseMessages, convertToLanguageModelV1Prompt } from '@/lib/utils';

export const maxDuration = 60;

type AllowedTools =
  | 'createDocument'
  | 'updateDocument'
  | 'requestSuggestions'
  | 'getWeather';

const canvasTools: AllowedTools[] = [
  'createDocument',
  'updateDocument',
  'requestSuggestions',
];

const vertexProvider = createAnthropicVertex();

export async function POST(request: Request) {
  const {
    id,
    messages,
    modelId,
  }: { id: string; messages: Array<Message>; modelId: string } =
    await request.json();
  const session = await auth();

  if (!session) {
    return new Response('Unauthorized', { status: 401 });
  }

  const model = models.find((model) => model.id === modelId);

  if (!model) {
    return new Response('Model not found', { status: 404 });
  }


  try {
    const coreMessages = convertToCoreMessages(messages);
    const streamingData = new StreamData();
    
    const result = await streamText({
      model: vertexProvider(model.apiIdentifier),
      system: modelId === 'claude-3-5-coder' ? CodePrompt : regularPrompt,
      messages: coreMessages,
      maxSteps: 5,
      onFinish: async ({ responseMessages }) => {
        if (session.user?.id) {
          try {
            const sanitizedMessages = sanitizeResponseMessages(responseMessages);
            await saveChat({
              id,
              messages: [...coreMessages, ...sanitizedMessages],
              userId: session.user.id,
            });
            streamingData.close();
          } catch (error) {
            console.error('Failed to save chat:', error);
          }
        }
      }
    });
    
    return result.toDataStreamResponse({
      data: streamingData,
    });
  } catch (error) {
    console.error('Streaming error:', error);
    return new Response('Error processing request', { status: 500 });
  }
}

export async function DELETE(request: Request) {
  const { searchParams } = new URL(request.url);
  const id = searchParams.get('id');

  if (!id) {
    return new Response('Not Found', { status: 404 });
  }

  const session = await auth();

  if (!session || !session.user) {
    return new Response('Unauthorized', { status: 401 });
  }

  try {
    const chat = await getChatById({ id });

    if (chat.userId !== session.user.id) {
      return new Response('Unauthorized', { status: 401 });
    }

    await deleteChatById({ id });

    return new Response('Chat deleted', { status: 200 });
  } catch (error) {
    return new Response('An error occurred while processing your request', {
      status: 500,
    });
  }
}
API result:
HTTP/1.1 200 OK
set-cookie: authjs.session-token=sssssssssEbWNE; Path=/; Expires=Fri, 03 Jan 2025 02:42:00 GMT; HttpOnly; SameSite=Lax
vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch, Next-Router-Segment-Prefetch
content-type: text/plain; charset=utf-8
x-vercel-ai-data-stream: v1
Date: Wed, 04 Dec 2024 02:42:02 GMT
Connection: keep-alive
keep-alive: timeout=5
Transfer-Encoding: chunked

0:"I'm"
0:" trained to understand and help"
0:" with many programming languages, but"
0:" I don't actually \"excel\" in"
0:" any of them since I don"
0:"'t write or run code myself."
0:" I can help explain concepts and provide"
0:" guidance for popular languages like Python,"
0:" JavaScript, Java, C++, an"
0:"d others."
e:{"finishReason":"stop","usage":{"promptTokens":29,"completionTokens":61},"isContinued":false}
d:{"finishReason":"stop","usage":{"promptTokens":29,"completionTokens":61}}

  6. I didn't change the front-end code; the files are /components/custom/chat.tsx and /components/custom/message.tsx.
  7. Can I get your WhatsApp or another chat app, so I can give you more info?

Thank you very much for helping me.

Hi @taype, thanks for providing more context. Let me try and recreate this to help you better.

"Can I get your WhatsApp or another chat app, so I can give you more info?"

I don't need any private information at this time, and even if we do later, we can exchange it here on the community in a private space. I'll guide you when we need such information.

Hi @taype, I was reading through the AI SDK docs, and one difference I noticed between your code and the linked example is that you are doing:

const result = await streamText({

Whereas the AI SDK docs suggest doing:

const result = streamText({

I think using await consumes the stream and returns the full text, and that is why you see the text but not streaming. Can you try this change and see if it works?
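
For reference, a minimal sketch of that change against your earlier snippet (same options as your code; the key difference is no await and no manual iteration over result.textStream before returning):

const result = streamText({
  model: vertexProvider(model.apiIdentifier),
  system: modelId === 'claude-3-5-coder' ? CodePrompt : regularPrompt,
  messages: coreMessages,
  maxSteps: 5,
  // onFinish unchanged from your version
});

// Return right away; toDataStreamResponse consumes the stream for the client
return result.toDataStreamResponse({ data: streamingData });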

Hi @anshumanb, thanks for your help again!
I have tried it many times, and the streaming problem has been solved using your method, but my front end still takes a long time to show the first block of stream content. Can you help me take a look? I will post the code below.
My back-end code:
try {
  const coreMessages = convertToCoreMessages(messages);
  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();

  streamText({
    model: vertexProvider(model.apiIdentifier),
    system: modelId === 'claude-3-5-coder' ? CodePrompt : regularPrompt,
    messages: coreMessages,
    maxSteps: 5,
    onFinish: async ({ responseMessages }) => {
      if (session.user?.id) {
        try {
          const sanitizedMessages = sanitizeResponseMessages(responseMessages);
          await saveChat({
            id,
            messages: [...coreMessages, ...sanitizedMessages],
            userId: session.user.id,
          });
        } catch (error) {
          console.error('Failed to save chat:', error);
        }
      }
      await writer.close();
    },
  }).then(async (result) => {
    for await (const chunk of result.textStream) {
      const messageContent = chunk;

      // If a chunk is a JSON error payload, forward it as a message and stop
      try {
        const parsedChunk = JSON.parse(chunk);
        if (parsedChunk.type === 'error') {
          console.error('AI model error:', parsedChunk.error.message);
          const errorMessage = {
            id: generateUUID(),
            role: 'assistant',
            content: messageContent,
          };
          await writer.write(
            encoder.encode(`data: ${JSON.stringify(errorMessage)}\n\n`)
          );
          await writer.close();
          break;
        }
      } catch (e) {
        // Not JSON: treat it as a plain text chunk
      }

      const message = {
        id: generateUUID(),
        role: 'assistant',
        content: messageContent,
      };
      console.log('stream output:', messageContent);
      if (messageContent.trim()) {
        await writer.write(
          encoder.encode(`data: ${JSON.stringify(message)}\n\n`)
        );
      }
    }
    console.log('Finished sending all chunks.');
  });

  return new Response(stream.readable, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
} catch (error) {
  console.error('Request error:', error);
  return new Response('Error processing request', { status: 500 });
}
}
My front-end code is:
// components/custom/chat.tsx
export function Chat({
  id,
  initialMessages,
  selectedModelId,
}: {
  id: string;
  initialMessages: Array<Message>;
  selectedModelId: string;
}) {
  const [isLoading, setIsLoading] = useState(false);
  const [messages, setMessages] = useState(initialMessages);
  const [input, setInput] = useState('');
  const [canvas, setCanvas] = useState<UICanvas | null>(null);
  const [attachments, setAttachments] = useState<Array<Attachment>>([]);

  const handleSubmit = async (
    e: React.FormEvent | { preventDefault?: () => void } | undefined
  ) => {
    if (e?.preventDefault) {
      e.preventDefault();
    }
    if (!input.trim() || isLoading) return;

    setIsLoading(true);
    const userMessage: Message = {
      id: Date.now().toString(),
      role: 'user' as const,
      content: input,
    };
    setMessages(prev => [...prev, userMessage]);
    const currentInput = input;
    setInput('');

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          id,
          messages: [...messages, { role: 'user', content: currentInput }],
          modelId: selectedModelId,
        }),
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const assistantMessage: Message = {
        id: (Date.now() + 1).toString(),
        role: 'assistant' as const,
        content: '',
      };
      setMessages(prev => [...prev, assistantMessage]);

      const reader = response.body?.getReader();
      if (!reader) {
        throw new Error('No reader available');
      }

      const decoder = new TextDecoder();
      let buffer = '';

      try {
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          const chunk = decoder.decode(value);
          buffer += chunk;

          // Process every complete SSE event (terminated by a blank line)
          while (buffer.includes('\n\n')) {
            const lineEnd = buffer.indexOf('\n\n');
            const line = buffer.slice(0, lineEnd);
            buffer = buffer.slice(lineEnd + 2);

            if (line.startsWith('data: ')) {
              try {
                const jsonData = JSON.parse(line.slice(6)); // slice(6) to remove 'data: '
                const chars = jsonData.content.split('');
                const batchSize = 2;
                // Append a couple of characters at a time for a typing effect
                for (let i = 0; i < chars.length; i += batchSize) {
                  const batch = chars.slice(i, i + batchSize).join('');
                  await new Promise(resolve => setTimeout(resolve, 10));
                  setMessages(prev => {
                    const newMessages = [...prev];
                    const lastMessage = newMessages[newMessages.length - 1];
                    if (lastMessage.role === 'assistant') {
                      return [
                        ...newMessages.slice(0, -1),
                        {
                          ...lastMessage,
                          content: lastMessage.content + batch,
                        },
                      ];
                    }
                    return newMessages;
                  });
                }
              } catch (e) {
                console.error('Failed to parse line:', line);
              }
            }
          }
        }
      } finally {
        reader.releaseLock();
      }
    } catch (error) {
      console.error('Error:', error);
    } finally {
      setIsLoading(false);
      window.history.replaceState({}, '', `/chat/${id}`);
    }
  };

  const stop = () => {
    setIsLoading(false);
  };

  const append = async (message: Message) => {
    setMessages(prev => [...prev, message]);
  };

  useEffect(() => {
    console.log('Messages updated:', messages);
  }, [messages]);

  const [messagesContainerRef, messagesEndRef] =
    useScrollToBottom<HTMLDivElement>();

  const handleSubmitWrapper = (
    event?: { preventDefault?: () => void } | undefined
  ) => {
    handleSubmit(event);
  };

  const appendWrapper = async (message: Message | CreateMessage) => {
    if ('id' in message) {
      await append(message as Message);
    }
    return null;
  };

  return (
    <>
      <div ref={messagesContainerRef}>
        {messages.length === 0 && <Overview />}

        {messages.map((message) => (
          <PreviewMessage key={message.id} message={message} />
        ))}

        <div
          ref={messagesEndRef}
          className="shrink-0 min-w-[24px] min-h-[24px]"
        />
      </div>

      <form className="flex mx-auto px-4 bg-background pb-4 md:pb-6 gap-2 w-full md:max-w-3xl">
        <MultimodalInput
          input={input}
          setInput={setInput}
          handleSubmit={handleSubmitWrapper}
          isLoading={isLoading}
          stop={stop}
          attachments={attachments}
          setAttachments={setAttachments}
          messages={messages}
          setMessages={setMessages}
          append={appendWrapper}
        />
      </form>
    </>
  );
}
I did not use the useChat hook because it has always had problems for me and could not achieve streaming output.
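
For reference, the wiring I replaced looked roughly like this (a rough sketch of the template's default useChat approach from 'ai/react'; the exact options may differ from the template version I started from):

// Sketch only: assumes the /api/chat route returns result.toDataStreamResponse()
const { messages, input, setInput, handleSubmit, isLoading, stop, append } =
  useChat({
    api: '/api/chat',
    id,
    initialMessages,
    body: { id, modelId: selectedModelId },
  });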

Thank you very much for helping me.