---
title: PDF Chatbot
emoji: π
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.16.1
app_file: app.py
pinned: true
---
[Python 3](https://www.python.org/downloads/)
[Code style: black](https://github.com/psf/black)
[Linting: pylint](https://github.com/pylint-dev/pylint)
**Aim: PDF-based AI chatbot with retrieval-augmented generation (RAG)**
**Architecture / Tech stack:**
- Front-end:
- user interface via Gradio library
- Back-end:
- HuggingFace embeddings
- HuggingFace Inference API for open-source LLMs
- ChromaDB vector database
- LangChain conversational retrieval chain (see the back-end sketch below)
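
Concretely, the back-end corresponds to a pipeline along the following lines. This is a minimal sketch, assuming recent `langchain`/`langchain-community` module paths, an illustrative embedding model and LLM choice, and example chunking parameters; the actual `app.py` may differ.

```python
# Minimal back-end sketch (assumed module paths and parameters; not the Space's exact code).
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1. Load the PDF and split it into overlapping chunks
pages = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=40).split_documents(pages)

# 2. Embed the chunks with a HuggingFace sentence-transformer and index them in ChromaDB
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_db = Chroma.from_documents(chunks, embeddings)

# 3. Point LangChain at an open-source LLM served through the HuggingFace Inference API
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model choice
    temperature=0.5,
    max_new_tokens=512,
)

# 4. Assemble the conversational retrieval chain with chat memory and source tracking
memory = ConversationBufferMemory(
    memory_key="chat_history", output_key="answer", return_messages=True
)
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vector_db.as_retriever(),
    memory=memory,
    return_source_documents=True,
)
```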
You can try out the deployed [Hugging Face Space](https://huggingface.co./spaces/cvachet/pdf-chatbot)!
----
### Overview
**Description:**
This AI assistant, built with LangChain and open-source LLMs, performs retrieval-augmented generation (RAG) over your PDF documents. The user interface explicitly exposes the individual steps of the RAG workflow to help you understand it. The chatbot takes past questions into account when generating answers (conversational memory) and includes document references for clarity. It relies on small LLMs so that it can run directly on CPU hardware.
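
Continuing the back-end sketch above, a pair of hypothetical queries against `qa_chain` shows how conversational memory and document references surface in the result (in the Space this happens through the Gradio UI rather than a script):

```python
# Ask a question, then a follow-up that only makes sense with conversational memory.
first = qa_chain.invoke({"question": "What is the main topic of this document?"})
follow_up = qa_chain.invoke({"question": "Can you summarize it in two sentences?"})

print(follow_up["answer"])

# Document references returned alongside the answer (page number + snippet)
for doc in follow_up["source_documents"]:
    print(doc.metadata.get("page"), doc.page_content[:120])
```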
**Available open-source LLMs** (illustrative Hub repo IDs are sketched after this list):
- Meta Llama series
- Alibaba Qwen2.5 series
- Mistral AI models
- Microsoft Phi-3.5 series
- Google Gemma models
- HuggingFace zephyr and SmolLM series
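
As an illustration, these families map to Hugging Face Hub repo IDs such as the ones below (assumed IDs, not necessarily the exact list exposed by the Space); any of them can be passed as `repo_id` to the `HuggingFaceEndpoint` in the back-end sketch.

```python
# Illustrative Hub repo IDs for the model families above (assumed; the Space's list may differ).
CANDIDATE_LLMS = {
    "Llama": "meta-llama/Meta-Llama-3-8B-Instruct",
    "Qwen2.5": "Qwen/Qwen2.5-7B-Instruct",
    "Mistral": "mistralai/Mistral-7B-Instruct-v0.2",
    "Phi-3.5": "microsoft/Phi-3.5-mini-instruct",
    "Gemma": "google/gemma-2-2b-it",
    "Zephyr": "HuggingFaceH4/zephyr-7b-beta",
    "SmolLM": "HuggingFaceTB/SmolLM2-1.7B-Instruct",
}
```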
### Local execution
Command line for execution:
> python3 app.py
The Gradio web application should now be accessible at http://localhost:7860.
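
For reference, the entry point follows the standard Gradio pattern, which is why the app is served on port 7860. The skeleton below is hypothetical; the real `app.py` wires the full multi-step RAG interface around the chain sketched earlier.

```python
# Hypothetical skeleton of the app entry point; the real app.py adds the upload,
# indexing, and chat callbacks around the conversational retrieval chain.
import gradio as gr

def build_demo() -> gr.Blocks:
    with gr.Blocks() as demo:
        gr.Markdown("# PDF Chatbot")
        gr.Chatbot(label="Conversation")
        gr.Textbox(label="Ask a question about your PDF")
        # ... callbacks for PDF upload, vector DB creation, and chat go here ...
    return demo

if __name__ == "__main__":
    # Gradio serves on port 7860 by default, hence http://localhost:7860
    build_demo().launch(server_port=7860)
```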