GPTCache is a semantic cache library for LLMs and multimodal models, which seamlessly integrates with LangChain and llama_index.
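A minimal sketch of how GPTCache drops in for OpenAI chat calls, following the project's quickstart; the model name and prompt are placeholders, and exact APIs may differ between GPTCache versions:

```python
# Minimal sketch: GPTCache's OpenAI adapter as a drop-in replacement for the openai module.
# Follows the project's quickstart; exact APIs may differ between versions.
from gptcache import cache
from gptcache.adapter import openai  # cached wrapper around the OpenAI client

cache.init()            # default exact-match cache; semantic matching is configurable
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is a semantic cache?"}],
)
print(response["choices"][0]["message"]["content"])
```

Repeated or similar queries are then served from the cache instead of triggering a new LLM call.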
Load a PDF file and ask questions via llama_index and GPT
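This load-and-ask pattern looks roughly like the sketch below; the `./data` folder and the question are placeholders, and import paths vary between LlamaIndex releases (older versions import from `llama_index` rather than `llama_index.core`):

```python
# Minimal sketch: index local PDF(s) and query them with an LLM via LlamaIndex.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every supported file (PDFs included) from a local folder -- path is a placeholder.
documents = SimpleDirectoryReader("./data").load_data()

# Embed the documents and build an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Ask a question; relevant chunks are retrieved and passed to the LLM as context.
query_engine = index.as_query_engine()
print(query_engine.query("What are the key findings in this document?"))
```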
An open-source project for building a chatbot that answers questions over your own data, powered by the OpenAI GPT-3.5 model.
A Flask Server Demo Application showing off some llama-index LLM prompt magic, including file upload and parsing :)
An intelligent search engine/QA module that uses GPT models to provide accurate, relevant, and recent answers from Google News and the web, and can also answer user queries directly from GPT's own knowledge.
A Web-UI for Llama_index. Allows ChatGPT to access your own database.
Experiments with Langchain using different approaches on Google colab
A QA bot that answers questions over the contents of given documents.
Taking advantage of LlamaIndex's in-context learning paradigm, LlamaDoc lets users upload PDF documents and ask any questions related to their content. The tool leverages LlamaIndex's reasoning capabilities to provide intelligent responses based on the LLM's contextual understanding.
An experimental architecture for composing actions from simple learning agents to produce complex behavior.
Concepts and examples on using and training LLMs
PromptPal: An AI-powered command-line assistant built using LlamaIndex & OpenAI Chat Model over custom data
FastAPI + Hugging Face Transformers + LlamaIndex
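A sketch of the FastAPI side of such a stack, wrapping a LlamaIndex query engine in an HTTP endpoint; the route, folder path, and field names are illustrative assumptions, and the local Hugging Face model wiring is omitted (the default LlamaIndex LLM is used here):

```python
# Minimal sketch: expose a LlamaIndex query engine over HTTP with FastAPI.
# Route, folder path, and field names are illustrative, not taken from the repo above.
from fastapi import FastAPI
from pydantic import BaseModel
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

app = FastAPI()

# Build the index once at startup from a local document folder.
documents = SimpleDirectoryReader("./docs").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

class Question(BaseModel):
    text: str

@app.post("/ask")
def ask(question: Question):
    # Retrieve relevant chunks and let the LLM answer from them.
    response = query_engine.query(question.text)
    return {"answer": str(response)}

# Run with: uvicorn main:app --reload
```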
Generate documentation using Hugging Face embeddings and local LLMs
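A hedged sketch of that setup, pointing LlamaIndex at local Hugging Face models instead of hosted APIs; the model names, source folder, and extra packages (`llama-index-embeddings-huggingface`, `llama-index-llms-huggingface`) are assumptions about one possible configuration:

```python
# Minimal sketch: local Hugging Face embeddings and a local LLM with LlamaIndex.
# Model names and paths are examples only.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# Embed documents locally instead of calling a hosted embedding API.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Run generation on a local model as well.
Settings.llm = HuggingFaceLLM(
    model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    tokenizer_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

documents = SimpleDirectoryReader("./source_code").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("Summarize what this module does."))
```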