A comprehensive collection of LangChain-based applications demonstrating various AI-powered use cases including chatbots, document Q&A, language translation, and more.
This repository contains multiple independent projects, each showcasing different LangChain capabilities:
```
LangChain/
├── IntroSection/          # LangChain basics and middleware concepts
├── Building-chatBot/      # Basic chatbot implementation
├── E2E_QA_ChatBot/        # End-to-end Q&A chatbot with multiple models
├── LanguageTranslation/   # Translation API with LangServe
├── RAG_document_QA/       # RAG-based document question answering
└── Text-summarization/    # Web page content summarizer
```
Introduction to LangChain concepts, basic chains, and middleware patterns.
Contents:
- `LangchainIntro.ipynb` - Getting started with LangChain
- `Middlewares.ipynb` - Understanding LangChain middlewares
Simple chatbot implementation demonstrating core LangChain chat functionality.
Contents:
- `ChatBot.ipynb` - Basic chatbot notebook
A fully-featured Streamlit chatbot supporting multiple AI models from Groq - all 100% FREE!
Features:
- 9 Different Models: OpenAI-compatible, LLaMA, Mixtral, Gemma
- Runtime API Key Input: No `.env` file needed
- Configurable Parameters: Temperature, max tokens
- Persistent Chat History: Session-based conversations
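The session-based chat history can be sketched in plain Python. `session_state` and `add_message` below are hypothetical stand-ins for the `st.session_state` pattern Streamlit chat apps typically use, not the app's actual code:

```python
# Sketch of session-based chat history (hypothetical names; a real
# Streamlit app would use st.session_state instead of a plain dict).
session_state = {}

def add_message(state, role, content, max_messages=50):
    history = state.setdefault("messages", [])
    history.append({"role": role, "content": content})
    # Trim to the most recent messages so the prompt stays within the
    # model's context window.
    del history[:-max_messages]
    return history

add_message(session_state, "user", "Hello!")
add_message(session_state, "assistant", "Hi! How can I help?")
print(len(session_state["messages"]))  # prints 2
```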
Quick Start:
```
cd E2E_QA_ChatBot
streamlit run app.py
```

Models Available:
- OpenAI GPT OSS 120B (OpenAI-compatible)
- Meta LLaMA 3.3 70B, 3.1 8B/70B, 3.2 1B/3B
- Mixtral 8x7B
- Google Gemma 2 9B, 7B
Full Documentation
A FastAPI-based translation service using LangChain and LangServe.
Features:
- Multi-language Translation: English to any language
- REST API: FastAPI with automatic Swagger docs
- LangServe Integration: Easy-to-use API routes
Quick Start:
```
cd LanguageTranslation
python serve.py
```

API Endpoint:

```
POST http://localhost:8000/chain/invoke
```

Example Request:

```json
{
  "input": {
    "language": "French",
    "text": "Hello, how are you?"
  },
  "config": {},
  "kwargs": {}
}
```

Interactive Docs: http://localhost:8000/docs
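A client only needs to send that JSON body to the invoke route. The helper below (a hypothetical name, not part of the project) builds the request payload in LangServe's standard invoke envelope:

```python
import json

def build_invoke_payload(language, text):
    # LangServe /invoke endpoints wrap chain inputs in this envelope.
    return {
        "input": {"language": language, "text": text},
        "config": {},
        "kwargs": {},
    }

payload = build_invoke_payload("French", "Hello, how are you?")
print(json.dumps(payload, indent=2))
```

POST the resulting JSON to `http://localhost:8000/chain/invoke` with any HTTP client (curl, `requests`, etc.).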
Retrieval-Augmented Generation application for asking questions about PDF documents.
Features:
- PDF Processing: Automatic document loading and chunking
- Semantic Search: FAISS vector store for similarity search
- AI-Powered Answers: Groq ChatGroq model with context
- Ollama Embeddings: Local embeddings generation
- Context Display: View source chunks used for answers
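Chunking is what makes the semantic search step work: documents are split into overlapping windows before being embedded, so a sentence that straddles a boundary is still retrievable. A minimal sketch of overlapping chunking (illustrative sizes, not the project's exact splitter settings):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Overlap keeps boundary-straddling sentences intact in at
    # least one chunk.
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 500)
print(len(chunks))  # prints 3: spans 0-200, 150-350, 300-500
```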
Quick Start:
```
cd RAG_document_QA
# Make sure Ollama is running: ollama serve
streamlit run app.py
```

Prerequisites:
- Ollama installed and running
- PDF documents in the `research_papers/` folder
- Groq API key
Full Documentation
AI-powered content summarization tool that extracts and summarizes content from web pages.
Features:
- Web Page Summarization: Summarizes articles, blogs, and documentation
- ChatGroq AI: Uses LLaMA 3.3 70B (128K context) for intelligent summarization
- Runtime API Key: Enter the API key in the UI - no `.env` file required
- Content Preview: View the original content before the summary
- Metadata Display: Shows source information
- Handles Long Content: Automatic chunking for large articles
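The "handles long content" behavior is typically a map-reduce pattern: summarize each chunk, then summarize the combined partial summaries. A sketch under that assumption, where `summarize` stands in for any LLM call (here it is just a placeholder callable, not the app's actual function):

```python
def summarize_long(text, summarize, chunk_size=3000):
    # Short inputs go straight to the model.
    if len(text) <= chunk_size:
        return summarize(text)
    # Long inputs: summarize each chunk, then summarize the summaries.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [summarize(chunk) for chunk in chunks]
    return summarize(" ".join(partials))
```

In the real app the `summarize` callable would wrap a ChatGroq invocation; any function from text to text works for testing.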
Quick Start:
```
cd Text-summarization
streamlit run app.py
# Enter Groq API key in sidebar
# Paste any webpage URL
```

Example URLs:
- Web page: https://example.com/article
- News articles, blogs, technical docs
Full Documentation
- Python 3.8+ (3.10+ recommended)
- Ollama (for RAG_document_QA project)
- Groq API Key (get it FREE at console.groq.com)
1. Clone or navigate to the repository:

   ```
   cd "C:\Users\LangChain"
   ```

2. Create and activate a virtual environment:

   ```
   python -m venv .myvenv
   .\.myvenv\Scripts\activate
   ```

3. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

4. Set up environment variables by creating a `.env` file in the project root:

   ```
   GROG_API_KEY=your_groq_api_key_here
   GROQ_API_KEY=your_groq_api_key_here
   ```

   Note: Some projects use `GROG_API_KEY` and others use `GROQ_API_KEY`.

5. Install Ollama (for the RAG project):
   - Download from ollama.ai
   - Pull the required model:

     ```
     ollama pull llama3.1
     ```
Core packages used across projects:
- LangChain - Framework for LLM applications
- LangServe - API deployment for LangChain
- LangChain-Groq - Groq model integration
- Streamlit - Web UI framework
- FastAPI - REST API framework
- FAISS - Vector similarity search
- Ollama - Local embeddings
- PyPDF - PDF processing
Install all dependencies:
```
pip install -r requirements.txt
```

E2E Q&A ChatBot:

```
cd E2E_QA_ChatBot
streamlit run app.py
# Enter Groq API key in UI
```

Language Translation:

```
cd LanguageTranslation
python serve.py
# Visit http://localhost:8000/docs
```

RAG Document Q&A:

```
# Start Ollama first
ollama serve

# In another terminal
cd RAG_document_QA
streamlit run app.py
```

Text Summarization:

```
cd Text-summarization
streamlit run app.py
# Enter Groq API key in sidebar
# Paste webpage URL
```

All projects use Groq models, which are:
- 100% FREE - No credit card required
- Lightning Fast - Extremely fast inference
- Generous Limits - Suitable for development and production
Get your free API key: console.groq.com/keys
Two Ways to Use API Keys:
- Runtime Input (Recommended): Enter in the UI sidebar when the app starts
- Environment Variable (Optional): Set in a `.env` file for convenience
Most projects support entering the API key at runtime through the UI!
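Because the repository uses both key spellings, a robust lookup checks both. A small sketch (`get_groq_key` is a hypothetical helper, not code from the repo):

```python
import os

def get_groq_key(env=None):
    # Accept either spelling used across the projects; prefer the
    # standard GROQ_API_KEY when both are set.
    env = os.environ if env is None else env
    return env.get("GROQ_API_KEY") or env.get("GROG_API_KEY")
```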
Each project has its own detailed README:
Feel free to explore, modify, and extend these projects for your own use cases.
- Projects are independent and can be run separately
- Shared dependencies are in the root `requirements.txt`
- Environment variables can be set globally in the root `.env`
- Each project may have specific setup requirements (see individual READMEs)
Common Issues:
- Import errors: Make sure virtual environment is activated and dependencies are installed
- API key errors: Check that the `.env` file exists with the correct key names
- Ollama errors: Ensure Ollama is running with `ollama serve`
- Port conflicts: Change the ports in the respective app files if 8000/8501 are in use