Updates on llm-wrapper
Building the best Knowledge Graph API for Vector. Invitation to try out a Free Knowledge Server for Vector.
In a previous post, we gave a brief introduction to llm-wrapper, a proxy server that provides an API for Large Language Model (LLM) inference. The use case for llm-wrapper is to help your Vector robot (running Wirepod) analyze and deliver live news with the help of XAI.ai or Perplexity Sonar, something Wirepod cannot do natively.
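To make the flow concrete, here is a minimal sketch of what a request through the proxy might look like. It assumes llm-wrapper exposes an OpenAI-style chat-completions endpoint on localhost port 8080; the URL, path, port, and model name are illustrative assumptions, not the project's documented API, so check the README for the real values.

```python
# Illustrative only: the endpoint path, port, and model name below are
# assumptions for this sketch, not llm-wrapper's documented API.
import requests

PROXY_URL = "http://localhost:8080/v1/chat/completions"  # hypothetical llm-wrapper endpoint


def ask_live_news(question: str) -> str:
    """Send a question to the proxy, which forwards it to the configured upstream LLM."""
    payload = {
        "model": "sonar",  # placeholder for whichever upstream model is configured
        "messages": [{"role": "user", "content": question}],
    }
    response = requests.post(PROXY_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_live_news("What is today's top technology headline?"))
```

Wirepod would play a similar role to this script: it hands the spoken question to the proxy, and the proxy takes care of talking to the upstream provider.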
Since then, we have made some progress, which I share in this update.
Community Updates
Holger has run llm-wrapper on a Raspberry Pi 4. He graciously provided me with notes on how to get the service up and running on a Pi, which are now included in the llm-wrapper README. Brian has run llm-wrapper on a CHROX Android TV Box. I thank both Holger and Brian for their help; building open-source software is not possible without community involvement.
You, too, can run llm-wrapper, either on the same box/machine that runs Wirepod or on a separate server. You can also take a peek at our public server (see below).
New Features
Since the last release, llm-wrapper off…