Search algorithms when using vector databases

September 27, 2023

This is the recency-weighted retrieval I was talking about, using LlamaIndex (an open-source Python library - https://gpt-index.readthedocs.io/en/stable/index.html).

LlamaIndex provides time-weighted postprocessors and recency postprocessors, which decide whether a query is time-related and update the ranking of the retrieved nodes accordingly (links below, followed by a short sketch of how they plug in).

Time-weighted:
- Demo: https://gpt-index.readthedocs.io/en/stable/examples/node_postprocessor/TimeWeightedPostprocessorDemo.html
- API reference: https://gpt-index.readthedocs.io/en/stable/api_reference/node_postprocessor.html#llama_index.indices.postprocessor.TimeWeightedPostprocessor

Recency:
- Demo: https://gpt-index.readthedocs.io/en/stable/examples/node_postprocessor/RecencyPostprocessorDemo.html
- API reference: https://gpt-index.readthedocs.io/en/stable/api_reference/node_postprocessor.html#llama_index.indices.postprocessor.FixedRecencyPostprocessor
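Roughly, the postprocessor is passed into the query engine and re-ranks the retrieved nodes before the answer is synthesized. This is a minimal sketch based on the demo pages above; the exact import paths and constructor arguments (e.g. time_decay, top_k) vary between LlamaIndex versions, and the ./data folder and query string are placeholders:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.indices.postprocessor import TimeWeightedPostprocessor

# Build an ordinary vector index over a local folder of documents.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Time-weighted rerank: node scores are decayed based on a last-accessed
# timestamp stored in each node's metadata, so fresher nodes float to the top.
time_weighted = TimeWeightedPostprocessor(
    time_decay=0.5,           # how aggressively older nodes are penalised
    time_access_refresh=False,
    top_k=3,
)

# The postprocessor runs after retrieval and re-ranks the candidate nodes;
# FixedRecencyPostprocessor is wired in the same way via node_postprocessors.
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[time_weighted],
)
print(query_engine.query("What changed most recently?"))
```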

This can also tie in with the hybrid search approach we were talking about, but instead of hardcoding the weightings there is a router above the different query engines (i.e. some queries need keyword search, some need semantic search, and either may need a time-weighted / recency rerank) - https://betterprogramming.pub/unifying-llm-powered-qa-techniques-with-routing-abstractions-438e2499a0d0
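As a sketch of what that router could look like, assuming the RouterQueryEngine / QueryEngineTool pattern from the linked article; the tool descriptions, index choices, and example query are mine, and import paths may differ by LlamaIndex version:

```python
from llama_index import (
    SimpleDirectoryReader,
    SimpleKeywordTableIndex,
    VectorStoreIndex,
)
from llama_index.query_engine import RouterQueryEngine
from llama_index.selectors.llm_selectors import LLMSingleSelector
from llama_index.tools import QueryEngineTool

documents = SimpleDirectoryReader("./data").load_data()

# One query engine per retrieval strategy, each described so the router
# knows when to pick it.
semantic_tool = QueryEngineTool.from_defaults(
    query_engine=VectorStoreIndex.from_documents(documents).as_query_engine(),
    description="Useful for questions about the meaning or content of the documents.",
)
keyword_tool = QueryEngineTool.from_defaults(
    query_engine=SimpleKeywordTableIndex.from_documents(documents).as_query_engine(),
    description="Useful when the question contains exact names, IDs, or keywords.",
)

# An LLM-based selector picks the right engine per query instead of relying on
# hardcoded weightings; a time-weighted / recency postprocessor can be attached
# to either underlying engine as in the earlier sketch.
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[semantic_tool, keyword_tool],
)

print(router.query("Which document mentions error code E1234?"))
```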