Wirepod's feature to pepper dialog with animations
Have the Vector robot read out a bedtime story with animations
There has been a lot of development activity on Wirepod in the last few weeks. In this post, we will talk about a feature in Wirepod Release 1.1.13 that allows one to ask a Large Language Model (LLM), such as OpenAI's GPT-4, to embed commands (such as animations) in its returned text. Wirepod can then decipher these commands and ask Vector to perform the corresponding tasks (such as animations), in addition to narrating the LLM's output.
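To make the idea concrete, here is a minimal sketch of how embedded commands could be separated from the text to be spoken. The `{{animation:...}}` delimiter, the `Segment` type, and the `splitResponse` function are assumptions for illustration only; Wirepod's actual command syntax and parsing code may differ.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Segment is either a chunk of text for Vector to speak or an
// animation command to be translated into a robot action.
type Segment struct {
	IsAnimation bool
	Value       string
}

// commandPattern matches a hypothetical embedded command such as
// {{animation:greeting}}. The real Wirepod syntax may differ.
var commandPattern = regexp.MustCompile(`\{\{animation:([a-zA-Z0-9_]+)\}\}`)

// splitResponse walks the LLM output and separates plain speech
// from embedded animation commands, preserving their order.
func splitResponse(llmOutput string) []Segment {
	var segments []Segment
	last := 0
	for _, loc := range commandPattern.FindAllStringSubmatchIndex(llmOutput, -1) {
		// Text before the command is spoken as-is.
		if text := strings.TrimSpace(llmOutput[last:loc[0]]); text != "" {
			segments = append(segments, Segment{Value: text})
		}
		// The captured group is the animation name.
		segments = append(segments, Segment{IsAnimation: true, Value: llmOutput[loc[2]:loc[3]]})
		last = loc[1]
	}
	if text := strings.TrimSpace(llmOutput[last:]); text != "" {
		segments = append(segments, Segment{Value: text})
	}
	return segments
}

func main() {
	reply := "Once upon a time {{animation:happy}} there was a curious robot."
	for _, s := range splitResponse(reply) {
		if s.IsAnimation {
			fmt.Println("play animation:", s.Value)
		} else {
			fmt.Println("say:", s.Value)
		}
	}
}
```

In this sketch, the ordered list of segments lets the dialog and animations be interleaved, so an animation plays exactly where the LLM placed it in the sentence.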
With the help of this feature, Vector can be asked to perform animations while speaking, which in effect gives Vector a great deal of personality and life. This is a sophisticated feature, and a bit of very clever engineering by Wirepod's author, so let me back up a bit and explain what happens under the hood.
Prompt Engineering
In a previous post, we discussed how prompt engineering can be used to have the Vector robot read out a bedtime story. The idea of prompt engineering is simple: by customizing the prompt which we use for a Large La…