Wire-Pod Vector Command HomeAssistant

Wire-Pod Go plugin for the Anki Vector robot to talk to and send commands to Home Assistant via the Conversation API.


YouTube Short

https://youtube.com/shorts/i7WPcnAWji8

Home Assistant Container for Docker

#HomeAssistant Container for Docker
version: "3.9"
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    volumes:
      - /var/lib/libvirt/images/usb001/docker-storage/homeassistant/config:/config
      - /etc/localtime:/etc/localtime:ro
    mac_address: de:ad:be:ef:00:08
    networks:
      - dhcp
    devices:
      - /dev/ttyACM0:/dev/ttyACM0
    restart: "no"
    privileged: true

networks:
  dhcp:
    name: dbrv100
    external: true

Home-LLM on a GPU server


Clone or download the repository https://github.com/oobabooga/text-generation-webui and install it however you please; I just run it on bare metal on a GPU server.

Note: You currently need an unreleased branch of text-generation-webui to work with the V2 model of Home-LLM, which you can check out like this until it is fully released:

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
git checkout -b llamacpp_0.2.29 origin/llamacpp_0.2.29
git branch
git pull
echo "--listen --api --model Home-3B-v2.q8_0.gguf --n-gpu-layers 33" >> CMD_FLAGS.txt
./start_linux.sh

Before installing, edit CMD_FLAGS.txt and uncomment the line # --listen --api (remove the number sign and the space). I also added --model Home-3B-v2.q8_0.gguf --n-gpu-layers 33 so my GPU starts the model on boot.

Run the start_linux.sh, start_windows.bat, start_macos.sh, or start_wsl.bat script depending on your OS.

Select your GPU vendor when asked.

Once the installation ends, browse to http://GPUServerIP:7860/?__theme=dark.

Download the GGUF file from acon96/Home-3B-v2-GGUF; grab Home-3B-v2.q8_0.gguf for maximum GPU potential in text-generation-webui.

Select the reload button when the download (2.8 GB) finishes; the file Home-3B-v2.q8_0.gguf should show up in the Model list. Then hit Load and Save settings.


Excerpt from Home-LLM

Performance of running the model on a Raspberry Pi

The RPI4 4GB that I have was sitting right at 1.5 tokens/sec for prompt eval and 1.6 tokens/sec for token generation when running the Q4_K_M quant. I was reliably getting responses in 30-60 seconds after the initial prompt processing which took almost 5 minutes. It depends significantly on the number of devices that have been exposed as well as how many states have changed since the last invocation, because llama.cpp caches KV values for identical prompt prefixes.

It is highly recommended to set up text-generation-webui on a separate machine that can take advantage of a GPU.

To start text-generation-webui on boot of the GPU server:

#/etc/systemd/system/textgen.service
[Unit]
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /opt/text-generation-webui/start_linux.sh
User=textgen
Group=textgen

[Install]
WantedBy=multi-user.target

Add a textgen user and add it to the video/render groups:

sudo useradd -m -s /bin/bash -G render,video textgen
sudo passwd textgen
id textgen
uid=1001(textgen) gid=1001(textgen) groups=1001(textgen),44(video),109(render)
sudo chown -R textgen:textgen /opt/text-generation-webui/

Test your user to make sure it can run text-generation-webui

su textgen
cd /opt/text-generation-webui
./start_linux.sh
08:11:51-750558 INFO     Starting Text generation web UI
08:11:51-753175 WARNING
                         You are potentially exposing the web UI to the entire internet without any access password.
                         You can create one with the "--gradio-auth" flag like this:

                         --gradio-auth username:password

                         Make sure to replace username:password with your own.
08:11:51-822630 INFO     Loading Home-3B-v2.q8_0.gguf
08:11:51-907601 INFO     llama.cpp weights detected: models/Home-3B-v2.q8_0.gguf
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: Radeon RX Vega, compute capability 9.0, VMM: no
llama_model_loader: loaded meta data with 19 key-value pairs and 453 tensors from models/Home-3B-v2.q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = phi2
llama_model_loader: - kv   1:                               general.name str              = Phi2
llama_model_loader: - kv   2:                        phi2.context_length u32              = 2048
llama_model_loader: - kv   3:                      phi2.embedding_length u32              = 2560
llama_model_loader: - kv   4:                   phi2.feed_forward_length u32              = 10240
llama_model_loader: - kv   5:                           phi2.block_count u32              = 32
llama_model_loader: - kv   6:                  phi2.attention.head_count u32              = 32
llama_model_loader: - kv   7:               phi2.attention.head_count_kv u32              = 32
llama_model_loader: - kv   8:          phi2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv   9:                  phi2.rope.dimension_count u32              = 32
llama_model_loader: - kv  10:                          general.file_type u32              = 7
llama_model_loader: - kv  11:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,51200]   = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,51200]   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,50000]   = ["Ġ t", "Ġ a", "h e", "i n", "r e",...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 50296
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 50297
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  259 tensors
llama_model_loader: - type q8_0:  194 tensors
llm_load_vocab: mismatch in special tokens definition ( 910/51200 vs 944/51200 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = phi2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 51200
llm_load_print_meta: n_merges         = 50000
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 2560
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 32
llm_load_print_meta: n_embd_head_k    = 80
llm_load_print_meta: n_embd_head_v    = 80
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2560
llm_load_print_meta: n_embd_v_gqa     = 2560
llm_load_print_meta: f_norm_eps       = 1.0e-05
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 10240
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 2.78 B
llm_load_print_meta: model size       = 2.75 GiB (8.51 BPW)
llm_load_print_meta: general.name     = Phi2
llm_load_print_meta: BOS token        = 50296 '<|im_start|>'
llm_load_print_meta: EOS token        = 50297 '<|im_end|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_tensors: ggml ctx size =    0.35 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  2686.46 MiB
llm_load_tensors:        CPU buffer size =   132.81 MiB
............................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   640.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model: graph splits (measure): 3
llama_new_context_with_model:      ROCm0 compute buffer size =   147.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     9.00 MiB
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
08:11:56-083810 INFO     LOADER: llama.cpp
08:11:56-084582 INFO     TRUNCATION LENGTH: 2048
08:11:56-085278 INFO     INSTRUCTION TEMPLATE: Alpaca
08:11:56-085904 INFO     Loaded the model in 4.26 seconds.
08:11:56-086636 INFO     Loading the extension "openai"
08:11:56-153090 INFO     OpenAI-compatible API URL:

                         http://0.0.0.0:5000

08:11:56-154806 INFO     Loading the extension "gallery"
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.

Looks good! If you get permission issues, check the permissions on the /opt/text-generation-webui folder.

sudo systemctl daemon-reload
sudo systemctl enable textgen.service
sudo systemctl start textgen.service
sudo systemctl status textgen.service
● textgen.service
     Loaded: loaded (/etc/systemd/system/textgen.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-01-22 07:53:18 UTC; 5s ago
   Main PID: 2499 (bash)
      Tasks: 8 (limit: 18905)
     Memory: 254.5M
        CPU: 4.054s
     CGroup: /system.slice/textgen.service
             ├─2499 /bin/bash /opt/text-generation-webui/start_linux.sh
             ├─2512 python one_click.py
             ├─2517 /bin/sh -c ". \"/opt/text-generation-webui/installer_files/conda/etc/profile.d/conda.sh\" && conda activate \"/opt/text-generation-webui/installer_files/env\" && python server.py  --listen --api --model Home-3B-v2.q8_0.gguf --n-gpu-layers 33"
             └─2520 python server.py --listen --api --model Home-3B-v2.q8_0.gguf --n-gpu-layers 33

systemd[1]: Started textgen.service.

Add the GPU server IP and port (5000) to your Home-LLM integration.

Select text-generation-webui API in the dropdown then hit SUBMIT.


The GPU server IP that is running text-generation-webui:

API Hostname*: 10.0.0.42

The backend API port for text-generation-webui; 5000 is the default (not 7860, which is the web UI):

API Port*: 5000

The model name has to be exactly the same as it appears in the dropdown model list in text-generation-webui:

Model Name*: Home-3B-v2.q8_0.gguf

"Use chat completions endpoint" controls how the URLs get formed when posting data to text-generation-webui. Without this checked it will use /v1/completions, which is deprecated in favor of /v1/chat/completions (both currently still work, but let's think about the future). A minimal request sketch follows below.

[X] Use chat completions endpoint

The API key doesn't matter; you can type anything or nothing at all.

API key: na
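
For reference, here is a rough sketch (using the example IP, port, and model name from this walkthrough) of the request shape the chat completions endpoint expects from text-generation-webui's OpenAI-compatible API; the older /v1/completions endpoint takes a plain prompt string instead. This is only an illustration of the endpoint, not part of the plugin or the Home-LLM integration.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// With "Use chat completions endpoint" checked, requests go to /v1/chat/completions;
	// unchecked, the older /v1/completions endpoint with a plain "prompt" field is used.
	url := "http://10.0.0.42:5000/v1/chat/completions"

	body, _ := json.Marshal(map[string]any{
		"model": "Home-3B-v2.q8_0.gguf", // text-generation-webui answers with whichever model is loaded
		"messages": []map[string]string{
			{"role": "user", "content": "turn off the family room lights"},
		},
	})

	resp, err := http.Post(url, "application/json", bytes.NewBuffer(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	raw, _ := io.ReadAll(resp.Body)
	fmt.Println(string(raw)) // the reply text is under choices[0].message.content
}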


Select SUBMIT and hope for Success!


Wyoming-Whisper Container for Docker


Manual Docker Config

# docker run -it -p 10300:10300 -v /path/to/local/data:/data rhasspy/wyoming-whisper --model tiny-int8 --language en
# Whisper Container for Docker
version: "3.9"
services:
  wyoming-whisper:
    image: rhasspy/wyoming-whisper:latest
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "com.centurylinklabs.watchtower.monitor-only=false"
    container_name: whisper
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/libvirt/images/usb001/docker-storage/wyoming-whisper/data:/data
      #- /etc/asound.conf:/etc/asound.conf
    mac_address: "de:ad:be:ef:5e:34"
    networks:
      - dhcp
    devices:
      - /dev/snd:/dev/snd
    ports:
      - 10300:10300 # http
    command: --model tiny-int8 --language en
    restart: "no"
networks:
  dhcp:
    name: "dbrv100"

Wyoming-Piper Docker Container


Manual Docker Config

# docker run -it -p 10200:10200 -v /path/to/local/data:/data rhasspy/wyoming-piper --voice en-us-lessac-low
# Piper Container for Docker
version: "3.9"
services:
  wyoming-piper:
    image: rhasspy/wyoming-piper:latest
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "com.centurylinklabs.watchtower.monitor-only=false"
    container_name: piper
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/libvirt/images/usb001/docker-storage/wyoming-piper/data:/data
    mac_address: "de:ad:be:ef:5e:35"
    networks:
      - dhcp
    ports:
      - 10200:10200 # http
    command: --voice en-us-lessac-low
    restart: "no"
networks:
  dhcp:
    name: "dbrv100"

Wyoming-OpenWakeWord Container for Docker


Manual Docker Config

# Wyoming-OpenWakeWord Container for Docker
version: "3.9"
services:
  wyomingopenwakeword:
    container_name: "wyoming-openwakeword"
    image: "rhasspy/wyoming-openwakeword:latest"
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "com.centurylinklabs.watchtower.monitor-only=false"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/libvirt/images/usb001/docker-storage/wyoming-openwakeword/data:/data
    mac_address: "de:ad:be:ef:5e:36"
    hostname: "wyomingopenwakeword"
    networks:
      - dhcp
    ports:
      - 10400:10400
    devices:
      - /dev/snd:/dev/snd
    command: --model 'ok_nabu' --model 'hey_jarvis' --model 'hey_rhasspy' --model 'hey_mycroft' --model 'alexa' --preload-model 'ok_nabu'    
    # --uri 'tcp://0.0.0.0:10400'
    restart: "no"
    #restart: unless-stopped
    #privileged: true
    #network_mode: host

networks:
  dhcp:
    name: "dbrv100"
    external: true

Wyoming Protocol


In the Wyoming protocol integration, click "Add Entry" and enter the IP and port of each of the three containers: Whisper (10300), Piper (10200), and OpenWakeWord (10400).

Then go to Settings -> Voice assistants -> Add assistant.

Name your assistant.

Select your conversation agent like Home Assistant or your Home LLM, etc.

Home LLM has some options as well that you can play with


Then drop down the boxes for Faster-Whisper, Piper, and OpenWakeWord. Select the settings you like for each.


Then give it a test


Home Assistant Groups

Currently, the model doesn't support turning off areas, so you need this integration to get that working with the model:

https://www.home-assistant.io/integrations/group/


Wire-Pod LXC

I recently switched to using LXC as I was modifying the Docker container too much. Yes, persistent storage would be a lot better for not losing code/settings when you update the Docker container. The Docker config is still below, but I prefer this option.

/etc/netplan/xx-someautogen.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: no
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp3s0

sudo apt install lxc
sudo lxc init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
Would you like LXC to be available over the network? (yes/no) [default=no]: yes
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
sudo lxc-create -t download -n wirepod -- --dist ubuntu --release jammy  --arch amd64
sudo lxc-ls
wirepod
sudo lxc-stop -n wirepod
sudo vim /var/lib/lxc/wirepod/config
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64

# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/wirepod/rootfs
lxc.uts.name = wirepod
lxc.start.auto = 1

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = de:ad:be:ef:ca:fe

sudo lxc-start -n wirepod
sudo lxc-attach -n wirepod
git clone https://github.com/kercre123/wire-pod --depth=1
cd wire-pod
sudo STT=vosk ./setup.sh
./setup.sh daemon-enable
systemctl start wire-pod
systemctl status wire-pod

Wire-Pod Docker

For Docker you will need to look at how to add a Docker container to a bridge so that it grabs its IP from your router and not the Docker subnet. (I was using the docker-net-dhcp plugin; on newer versions of Docker you may have to look at the issue on their tracker about freezing on boot, which I was involved in and which explains how to compile the plugin to get it working properly: devplayer0/docker-net-dhcp#23 (comment).) There are other options like macvlan, ipvlan, etc., but I was having issues where they couldn't communicate with the host properly, which is why I used the plugin.

I used to use Docker to run Vector with Wire-Pod; my old configuration looks like this:

FROM ubuntu:latest

# Edit your timezone here:
ENV TZ=Europe/London
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
 && apt-get install -y sudo

RUN adduser --disabled-password --gecos '' wirepod
RUN adduser wirepod sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

USER wirepod

RUN sudo apt-get update && sudo apt-get install -y git
RUN sudo mkdir /wire-pod
RUN sudo chown -R wirepod:wirepod /wire-pod

# Each RUN gets a fresh shell, so clone straight into the /wire-pod directory created above
RUN git clone https://github.com/kercre123/wire-pod/ /wire-pod

WORKDIR /wire-pod

RUN sudo STT=vosk /wire-pod/setup.sh

WORKDIR /wire-pod/chipper

CMD sudo /wire-pod/chipper/start.sh

Then build the image:

sudo docker build -t wire-pod .

My Docker Compose looks like this

# Wire-Pod Container for Docker
# https://github.com/kercre123/wire-pod/wiki/Installation
version: "3.9"
services:
  wirepod:
    container_name: "wirepod"
    image: "wire-pod:latest"
    #labels:
    #  - "com.centurylinklabs.watchtower.enable=true"
    #  - "com.centurylinklabs.watchtower.monitor-only=false"
    volumes:
      - /var/lib/libvirt/images/usb001/docker-storage/wirepod/data:/data
    # - /run/dbus:/run/dbus:ro # This should technically get dbus running for Avahi but didn't seem to work for me.
                               # May have to use dbus-broker instead of dbus-daemon.
    mac_address: "9e:de:ad:be:ef:42"
    hostname: escapepod
    #networks:
    #  - dhcp
    ports:
      - 80:80
      - 8080:8080
      - 443:443
      - 8084:8084
      - 5353:5353/udp
    restart: "no" # Turn this to unless-stopped if you think everything is ok 
                        # If you use docker-net-dhcp like me keep this set to no so your dockerd doesn't hang on boot. 
#networks:
#  dhcp:
#    name: "dbrv42"
#    external: true

docker compose up -d

Compile

After that, attach to the Docker container:

docker exec -it wirepod /bin/bash

You should be met with the CLI prompt:

wirepod@escapepod:/wire-pod/chipper$
cd plugins
mkdir commandha
cd commandha
wget https://raw.githubusercontent.com/NonaSuomy/WirePodVectorCommandHomeAssistant/main/commandha.go

Edit the commandha.go file to add your long-lived access token and the IP of your HA instance:

url := "http://HAIPHERE:8123/api/conversation/process" // Replace with your Home Assistant IP
token := "LONGTOKENHERE" // Replace with your token
//agentID := "AgentIDHere" // Replace with your agent_id (Can get this with the dev assist console in YAML view or try the name)
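
To make it clearer what those two values are used for, here is a rough, self-contained sketch (not the plugin's actual source) of a Conversation API call: a POST to /api/conversation/process carrying the transcribed text, with the long-lived access token as a Bearer header. Home Assistant returns the spoken reply under response.speech.plain.speech, which is what Vector reads back. The URL and token placeholders are the same ones shown above.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	url := "http://HAIPHERE:8123/api/conversation/process" // same placeholder as in commandha.go
	token := "LONGTOKENHERE"                               // your long-lived access token

	// The body carries the text Vector transcribed after the "assist" trigger word.
	body, _ := json.Marshal(map[string]string{
		"text":     "turn off the family room lights",
		"language": "en",
		// "agent_id": "AgentIDHere", // only needed for a non-default agent (see below)
	})

	req, _ := http.NewRequest("POST", url, bytes.NewBuffer(body))
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Home Assistant puts the spoken reply at response.speech.plain.speech.
	var out struct {
		Response struct {
			Speech struct {
				Plain struct {
					Speech string `json:"speech"`
				} `json:"plain"`
			} `json:"speech"`
		} `json:"response"`
	}
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out.Response.Speech.Plain.Speech)
}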

AgentID

Note: Uncommenting the AgentID stuff in the code is not required to use the default HA agent! It is an optional setting for when you have multiple agents and want one that is not the default agent in Home Assistant. Uncommenting these lines in the Go code without setting the AgentID properly may give you HTTP status codes that you can't see, and it may not work. Test the default agent before using this.

If you want Vector to use a specific agent ID that you set up for him under the assist manager, you need to uncomment these three lines and add your agentID := "################################" etc. to the middle one. Otherwise, it will use the default agent.


Grab the agent_id:


//AgentID string `json:"agent_id"`

//agentID := "AgentIDHere" // Replace with your agent_id (Can get this with the dev assist console in YAML view or try the name)

//AgentID: agentID,

Note: If you delete and remake your agent in the HA setup, it will generate a new AgentID, which you will then have to recompile into the plugin before restarting the server/Vector.

Compiling

Compile the Go plugin into the /wire-pod/chipper/plugins directory:

sudo /usr/local/go/bin/go build -buildmode=plugin -o /wire-pod/chipper/plugins/commandha.so commandha.go

Restart your wirepod Docker container:

docker container restart wirepod

Testing, testing, 123

In the console log you should see

docker logs wirepod
SDK info path: /tmp/.anki_vector/
API config successfully read
Loaded jdocs file
Loaded bot info file, known bots: [00######]
Reading session certs for robot IDs
Loaded 54 intents and 54 matches (language: en-US)
Initiating vosk voice processor with language en-US
Opening VOSK model (../vosk/models/en-US/model)
Initializing VOSK recognizers
VOSK initiated successfully

After this it should load our compiled plugin

Loading plugins
Loading plugin: commandha.so
Utterances []string in plugin commandha.so are OK
Action func in plugin commandha.so is OK
Name string in plugin Home Assistant Control is OK
commandha.so loaded successfully

Then continue on...

Starting webserver at port 8080 (http://localhost:8080)
Starting jdocs pinger ticker
Starting SDK app
Starting server at port 80 for connCheck
Initiating TLS listener, cmux, gRPC handler, and REST handler
Configuration page: http://VECTORIP:8080
Registering escapepod.local on network (loop)
Starting chipper server at port 443
Starting chipper server at port 8084 for 2.0.1 compatibility
wire-pod started successfully!
Jdocs: Incoming ReadDocs request, Robot ID: vic:00######, Item(s) to return: 
[doc_name:"vic.AppTokens"]
Successfully got jdocs from 00######
Vector discovered on network, broadcasting mDNS
Broadcasting mDNS now (outside of timer loop)

Then a successful detection and command looks like this ("Hey Vector" pause... "assist turn off the family room lights")

This is a custom intent or plugin!
Bot 00###### request served.
Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: assist turn off the family room lights
Bot 00###### matched plugin Home Assistant Control, executing function
Bot 00###### plugin Home Assistant Control, response Turned off the lights

Then turn it back on...

This is a custom intent or plugin!
Bot 00###### request served.
Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: assist turn on the family room lights
Bot 00###### matched plugin Home Assistant Control, executing function
Bot 00###### plugin Home Assistant Control, response Turned on the lights
This is a custom intent or plugin!
Bot 00###### request served.


Carry on...

Haven't received a conn check from 00###### in 15 seconds, will ping jdocs on next check
Broadcasting mDNS now (outside of timer loop)
Jdocs: Incoming ReadDocs request, Robot ID: vic:00######, Item(s) to return: 
[doc_name:"vic.AppTokens"]
Successfully got jdocs from 00######

A failed speech detection should look something like this:

(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: a third
Not a custom intent
No intent was matched.
Bot 00###### Intent Sent: intent_system_noaudio
No Parameters Sent
Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: isis to turn off the living room like
Not a custom intent
No intent was matched.
Bot 00###### Intent Sent: intent_system_noaudio
No Parameters Sent
Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: assist turn off the family room like
Bot 00###### matched plugin Home Assistant Control, executing function
Bot 00###### plugin Home Assistant Control, response Sorry, I couldn't understand that
This is a custom intent or plugin!
Bot 00###### request served.
Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: assist
Bot 00###### matched plugin Home Assistant Control, executing function
Bot 00###### plugin Home Assistant Control, response Sorry, I couldn't understand that

Using a different Agent ID (HomeLLM)

https://github.com/acon96/home-llm

Bot 00###### Stream type: OPUS
(Bot 00######, Vosk) Processing...
Using general recognizer
(Bot 00######) End of speech detected.
Bot 00###### Transcribed text: assist what does the fox say
Bot 00###### matched plugin Home Assistant Control, executing function
Bot 00###### plugin Home Assistant Control, response The fox does not say.
This is a custom intent or plugin!
Bot 00###### request served.

Extra

Automate Vector to say things with his Text-To-Speech engine via HA Automation

Add your WirePod IP and bot serial number to the 3 lines below.

configuration.yaml

rest_command:
  assume_behavior_control:
    url: 'http://10.0.0.111:8080/api-sdk/assume_behavior_control?priority=high&serial=00######'
    method: 'get'
  say_text:
    url: 'http://10.0.0.111:8080/api-sdk/say_text?text={{ textvector }}&serial=00######'
    method: 'get'
  release_behavior_control:
    url: 'http://10.0.0.111:8080/api-sdk/release_behavior_control?priority=high&serial=00######'
    method: 'get'

Automation

alias: Vector Speech
description: Send text to Vector URL
trigger:
  - type: opened
    platform: device
    device_id: somedevice
    entity_id: someentity
    domain: binary_sensor
condition: []
action:
  - service: rest_command.assume_behavior_control
    data: {}
  - delay: "00:00:01"
  - service: rest_command.say_text
    data:
      textvector: "Your family room lights are off"
  - delay: "00:00:10"
  - service: rest_command.release_behavior_control
    data: {}
mode: single
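
To sanity-check those api-sdk endpoints before building the automation, here is a small optional sketch (not part of the plugin) that mirrors the automation's flow: assume behavior control, speak, wait, then release. Swap in your own Wire-Pod IP and bot serial number.

package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

const wirepod = "http://10.0.0.111:8080/api-sdk" // your Wire-Pod IP and port
const serial = "00######"                        // replace with your bot's serial number

// call issues a GET against one api-sdk endpoint and prints the HTTP status.
func call(path string) {
	resp, err := http.Get(wirepod + path)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(path, "->", resp.Status)
}

func main() {
	call("/assume_behavior_control?priority=high&serial=" + serial)
	time.Sleep(1 * time.Second)
	call("/say_text?text=" + url.QueryEscape("Your family room lights are off") + "&serial=" + serial)
	time.Sleep(10 * time.Second)
	call("/release_behavior_control?priority=high&serial=" + serial)
}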

Useful links

Wire-Pod Go compile information: https://github.com/kercre123/wire-pod/wiki/For-Developers-and-Tinkerers

My issue trying to get wire-pod working in docker: kercre123/wire-pod#201

Looks like RGarrett93 is priming the ability to have Wire-Pod as an add-on to HA, woo hoo! Help them if you can! I hope what I did here helps as well! https://community.home-assistant.io/t/anki-vector-integration/


Note: I have no idea how I was able to accomplish this! Please feel free to do a PR and make this better if you want and let me know what you did! I just attempted to break ground for others as I'm a basic hacker at best and just really wanted to issue commands to HA through Vector locally instead of using Alexa on him...

