LLM Node
The LLM node encapsulates a large language model and provides AI-assisted text and image functions, such as personalized greetings and scene analysis. The node is powered by Ollama running a local LLM, which is described in configuration.
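The configuration for the node might look like the sketch below. The key names, model, and values are assumptions for illustration, not the project's actual schema.

```yaml
# Hypothetical configuration sketch; key names and values are assumptions.
llm:
  model: llama3.2              # local model pulled into Ollama
  host: http://localhost:11434 # Ollama's default HTTP endpoint
  timeout_seconds: 30
```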
Setup
The node relies on Ollama, which must be installed and configured to start automatically on boot on the Raspberry Pi. To install it, run:
curl -fsSL https://ollama.com/install.sh | sh
To ensure Ollama starts automatically on boot, run:
sudo systemctl enable ollama
Node Startup
Ollama is slow to start and resource-hungry. Because of this, the LLM Node attempts to start and detect Ollama when the node first starts. This runs on a background thread and reports its status to the Status Node.
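The background startup could be sketched as below. The names `detect` and `report` stand in for the node's real Ollama probe and its Status Node reporting call; both are assumptions, not the node's actual API.

```python
import threading

def start_detection(detect, report):
    """Probe for Ollama on a background thread and report the outcome.

    `detect` is a callable returning True when Ollama is reachable;
    `report` forwards a status string to the Status Node (both are
    illustrative stand-ins for the node's real interfaces).
    """
    def worker():
        report("llm: starting")
        ok = detect()
        report("llm: ready" if ok else "llm: unavailable")

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread
```

Running the detection off the main thread keeps the node responsive while Ollama spins up; the caller can join the returned thread if it needs to wait.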
During startup the node checks the installed Ollama version.
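A minimal sketch of the version check, assuming it shells out to `ollama --version` (a real Ollama flag); the function names are illustrative:

```python
import subprocess

def ollama_version() -> str:
    """Return Ollama's version string, or "" if Ollama is missing or failing."""
    try:
        result = subprocess.run(
            ["ollama", "--version"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        # Binary not installed, or the call hung: treat as "no version".
        return ""
    if result.returncode != 0:
        return ""
    return result.stdout.strip()

def should_shutdown(version: str) -> bool:
    """An empty version string triggers the node's controlled shutdown."""
    return version == ""
```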
The check returns Ollama's version string when Ollama is installed on the Raspberry Pi. On any failure it returns an empty version, which the node uses to perform a controlled shutdown.
If the version check succeeds, the node calls Ollama to start the service.
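Starting the service could look like the sketch below. `ollama serve` is Ollama's real serve command, but whether the node spawns it directly or relies on the systemd service enabled during setup is an assumption here:

```python
import subprocess

def start_service(cmd=("ollama", "serve")):
    """Launch the Ollama server as a child process.

    Returns the Popen handle, or None when the binary is not present
    (in which case the node would proceed to its controlled shutdown).
    """
    try:
        return subprocess.Popen(
            list(cmd),
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError:
        return None
```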
Personalized Greeting
On detecting a new person, the Person Node notifies this node via the llm-generate-greeting event. The goal is to generate a personalized greeting for the newly detected person based on that person's person_score. The event is handled in the LLM Node by the greet method.
This method creates a prompt for the LLM from person.name and a sentiment, sends the prompt to Ollama to generate a response, and forwards the response to the Speech Node to vocalize it. The sentiment is read from configuration based on person.person_score.
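The greet flow described above can be sketched as follows. The sentiment table, score threshold, prompt wording, model name, and helper names are all assumptions; only Ollama's `/api/generate` HTTP endpoint is part of the real Ollama API.

```python
import json
import urllib.request

# Stand-in for the configured score-to-sentiment mapping (illustrative).
SENTIMENTS = {"high": "warm and familiar", "low": "polite and reserved"}

def sentiment_for(person_score: float) -> str:
    """Map a person_score to a sentiment (threshold is an assumption)."""
    return SENTIMENTS["high"] if person_score >= 0.5 else SENTIMENTS["low"]

def build_prompt(name: str, person_score: float) -> str:
    """Build the greeting prompt from the person's name and sentiment."""
    return (f"Write a single short greeting for {name}. "
            f"The tone should be {sentiment_for(person_score)}.")

def greet(name: str, person_score: float,
          host: str = "http://localhost:11434") -> str:
    """Generate a greeting via Ollama; the result would go to the Speech Node."""
    body = json.dumps({
        "model": "llama3.2",  # assumed model name from configuration
        "prompt": build_prompt(name, person_score),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Keeping prompt construction and sentiment lookup as pure functions makes them easy to test without a running Ollama instance.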