Wirepod uses Large Language Models (LLMs) in streaming mode
Have the Vector robot deliver long, nuanced answers
Wirepod, now the de facto server for the Vector robot, has seen many updates in the last two months, including support for the Turkish and Russian languages and the ability to process streaming responses for the knowledge graph from a Large Language Model (LLM) provider such as OpenAI or Together AI. This article focuses on streaming responses.
Vector’s Knowledge Graph
In a previous post, we explored how Vector’s Knowledge Graph is facilitated by Wirepod using services from OpenAI or Together AI. However, one of the challenges with Wirepod was that Vector could only deliver short answers. Vector’s built-in firmware (which cannot be changed, although project vic-yocto is working on that) imposes a timeout by which an answer must be received from the Knowledge Graph. If the response doesn’t arrive in time, Vector simply shrugs it off, saying that it cannot answer the question. Large Language Models (LLMs) provided by OpenAI and Together AI take a fair bit…
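The idea behind streaming is that the server does not have to wait for the full LLM reply before answering the robot: tokens arrive incrementally, and as soon as a complete sentence is available it can be handed to the robot for speech, beating the firmware timeout. Below is a minimal, hypothetical sketch of that chunking step in Python. The function name and the simulated token stream are illustrative assumptions, not Wirepod's actual code (Wirepod itself is written in Go), and a real stream would come from the LLM provider's streaming API.

```python
# Hypothetical sketch: turning a token stream into speakable sentence
# chunks so the robot can start talking before the full reply arrives.
from typing import Iterable, Iterator


def sentence_chunks(token_stream: Iterable[str]) -> Iterator[str]:
    """Accumulate streamed tokens and yield each complete sentence as
    soon as a sentence terminator (. ! ?) appears in the buffer."""
    buffer = ""
    for token in token_stream:
        buffer += token
        # Emit every complete sentence currently in the buffer.
        while any(p in buffer for p in ".!?"):
            idx = min(i for i in (buffer.find(p) for p in ".!?") if i != -1)
            yield buffer[: idx + 1].strip()
            buffer = buffer[idx + 1 :]
    if buffer.strip():
        yield buffer.strip()  # flush any trailing partial sentence


# Simulated stream; a real one would be the provider's streaming response.
tokens = ["Vector ", "is a ", "robot. ", "It can ", "answer ", "questions!"]
print(list(sentence_chunks(tokens)))
```

Each yielded chunk can be sent to the robot immediately, so the first sentence is spoken while the rest of the answer is still being generated.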