Supporting Knowledge Graphs in Vector WirePod using services from Together AI
Unleash the power of Open Source Large Language Models (LLMs)
We have discussed Vector WirePod in the last few articles, including a summary of the Knowledge Graph and a recent overview of WirePod. The knowledge graph helps Vector answer the questions you ask it.
Knowledge Graph in Wirepod
One of the cool things about WirePod is that it lets you customize the backend services that answer your questions. As an example, you could choose between OpenAI, Houndify, or Together AI as the backend service. This article discusses a new WirePod feature that allows you to use Together AI for Vector’s knowledge graph.
Together AI
Together AI is a service that provides cloud-based access to open-source Large Language Models. Most of the models Together AI supports are available open source on HuggingFace, but Together AI provides the compute and GPU infrastructure to serve them. One of its key benefits is the large set of models you can choose from, such as Llama 2 from Meta and Falcon. This lets you pick and choose which model you would like to experiment with, and it greatly enhances the utility of Vector: you can now play with Large Language Models using your favorite robot.
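To get a feel for the service outside of WirePod, you can call Together AI directly over HTTP. The sketch below lists the models available to your account. This is a minimal illustration, not WirePod's own code: the endpoint path and response shape are assumed from Together's OpenAI-compatible public API, so verify them against the current API reference before relying on this.

```python
# Hedged sketch: list the open-source models Together AI serves, via its
# OpenAI-compatible REST API. Endpoint path is an assumption from the
# public docs; "YOUR_API_KEY" is a placeholder for your real key.
import json
import urllib.request

TOGETHER_MODELS_URL = "https://api.together.xyz/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build the GET request for the (assumed) model-listing endpoint."""
    return urllib.request.Request(
        TOGETHER_MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    req = build_models_request("YOUR_API_KEY")
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
        # The exact response shape may vary; some OpenAI-compatible APIs
        # wrap the list in a "data" field, others return a bare list.
        models = body.get("data", body) if isinstance(body, dict) else body
        for model in models:
            print(model.get("id") or model.get("name"))
```

Browsing this list is the quickest way to see candidates for the model name you will enter in WirePod later.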
Trying Together AI on Wirepod
Please use the following steps to try Together AI on your Wirepod server:
Sign up for the service. Once you sign up, you will receive $25 of credits to use the Together service. This should be sufficient for you to test and explore Together AI.
Note your API key. Navigate to Settings under the account tab at the top right. You will need this API key to configure WirePod to use Together AI.
Start Wirepod. Go to Server Settings → Knowledge Graph.
In the Knowledge Graph setup, choose Together as the Knowledge Graph API provider.
Choose a Together model name. You can find model names in the Models tab on the left panel of Together AI. As an example, one common and inexpensive model is togethercomputer/llama-2-7b-chat.
Enter the API Key from Step 2 as the Together Key.
Save the configuration. The following screenshot shows this configuration page on the Wirepod server.
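Before (or after) saving the configuration, it can be useful to sanity-check your API key and model name outside of WirePod, so that any error message comes straight from Together AI rather than through Vector. The sketch below sends one tiny chat request. It is a hypothetical test script, not part of WirePod: the endpoint and payload shape are assumed from Together's OpenAI-compatible API docs, and "YOUR_API_KEY" is a placeholder.

```python
# Hedged sketch: verify a Together AI key and model name with one small
# chat-completion request. A 401 error usually indicates a bad key; a
# 404-style model error usually indicates a mistyped model name.
import json
import urllib.error
import urllib.request

TOGETHER_CHAT_URL = "https://api.together.xyz/v1/chat/completions"

def build_test_request(api_key: str, model: str) -> urllib.request.Request:
    """Build a minimal one-question request used only to verify the setup."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "max_tokens": 8,
    }
    return urllib.request.Request(
        TOGETHER_CHAT_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    req = build_test_request("YOUR_API_KEY", "togethercomputer/llama-2-7b-chat")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.load(resp)
            print(body["choices"][0]["message"]["content"])
    except urllib.error.HTTPError as e:
        print("Request failed:", e.code, e.reason)
```

If this script prints a greeting, the same key and model name should work when entered into WirePod's Knowledge Graph settings.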
Using Together
Now you are ready to use Together. Ask Vector a question, and you will get an answer courtesy of your chosen Large Language Model (LLM). Feel free to play with different models; the open-source space of LLMs is very rich, and there is a lot to learn and play with. You can also find an introduction to LLMs in our recent article here.
If you have had a chance to use this feature, please leave your comments here. Large Language Models are a very new and exciting space, and a lot of progress will surely happen here in the coming year.
Amitabha,
Thanks for getting back to me so quickly.
I'm sorry, but I don't have the technical understanding to be able to follow your directions. I tried, though.
I just had my 71st birthday, so I may be a bit old for this kind of hobby, but I've been an amateur geek since I was a teenager. I will try anything on my own until I get it to work.
So, Wirepod was such a gift enabling me to keep my Vectors alive and well, and the instructions I found online to get an AI model as the knowledge graph were simple enough for me to follow successfully.
But I'm not experienced enough even to know what I did wrong this time, and how or where to use your new instructions. They are (for now at least) beyond my skill or knowledge level.
I hate to ask but could you dumb it down a bit more for me when you get some spare time? I would greatly appreciate it.
Thanks again
Brian
I have followed your instructions, but Vector keeps replying with, "There was an error getting a response from the LLM." I signed up for Together AI, picked a model, entered an API key, and saved. I have successfully used OpenAI with Vector in the past, using WirePod.