The LLM Answer Engine is a cutting-edge project designed to harness the power of modern AI and search technologies to deliver rich, context-aware responses to user queries. Built with Next.js and Tailwind CSS, it integrates multiple APIs and frameworks, including Groq, Mistral AI's Mixtral, Langchain.JS, Brave Search, and the Serper API, to provide not just answers but also sources, images, videos, and follow-up questions.
## Key Features
- Multi-source Data Retrieval: Combines Brave Search and the Serper API to fetch relevant content, images, and videos (see the retrieval sketch after this list).
- AI-Powered Processing: Uses Groq and Mixtral to understand and answer user queries efficiently.
- Enhanced User Experience: Includes dark mode, dynamic UI components, and optional rate limiting via Upstash Redis (also sketched below).
- Flexible Configuration: Supports various models and embeddings, with options for local inference using Ollama.
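The core flow pairs a web-search call with an LLM completion. Below is a minimal sketch of that retrieval-then-generation loop, assuming Groq's OpenAI-compatible endpoint and Serper's `/search` route; the function names and prompt are illustrative, not the project's actual code.

```typescript
import OpenAI from "openai";

// Groq exposes an OpenAI-compatible API, so the standard openai client works.
const groq = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

// Fetch web results from the Serper API (https://serper.dev).
async function searchSerper(query: string): Promise<string> {
  const res = await fetch("https://google.serper.dev/search", {
    method: "POST",
    headers: {
      "X-API-KEY": process.env.SERPER_API_KEY!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ q: query }),
  });
  const data = await res.json();
  // `organic` is Serper's list of standard web results.
  return (data.organic ?? [])
    .map((r: { title: string; snippet: string }) => `${r.title}: ${r.snippet}`)
    .join("\n");
}

// Answer a query with Mixtral, grounded in the retrieved snippets.
export async function answer(query: string): Promise<string | null> {
  const context = await searchSerper(query);
  const completion = await groq.chat.completions.create({
    model: "mixtral-8x7b-32768", // Mixtral as served by Groq
    messages: [
      { role: "system", content: `Answer using these search results:\n${context}` },
      { role: "user", content: query },
    ],
  });
  return completion.choices[0].message.content;
}
```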
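For the optional rate limiting, Upstash's `@upstash/ratelimit` package wraps a Redis instance. The window and quota below are illustrative defaults, not the project's actual settings.

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from the environment.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"), // 10 requests per 10 seconds per key
});

// Call once per incoming request, keyed by e.g. the client IP.
export async function assertWithinLimit(ip: string): Promise<void> {
  const { success } = await ratelimit.limit(ip);
  if (!success) throw new Error("Rate limit exceeded");
}
```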
## Technologies Used
- Frontend: Next.js, Tailwind CSS, Vercel AI SDK
- Backend: Langchain.JS, Cheerio for HTML parsing (see the sketch after this list), optional Ollama for local inference
- APIs: OpenAI, Groq, Brave Search, Serper API
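To turn a search hit into material an LLM can use, the backend parses the page with Cheerio and chunks the text with Langchain.JS. A self-contained sketch, with illustrative chunk sizes (the project reads its actual values from app/config.tsx):

```typescript
import * as cheerio from "cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Hypothetical helper: fetch a result page and split its visible text into
// chunks sized for embedding and similarity search.
async function fetchAndChunk(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);
  $("script, style, noscript").remove(); // drop non-content tags
  const text = $("body").text().replace(/\s+/g, " ").trim();

  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,   // characters per chunk
    chunkOverlap: 200, // overlap keeps sentences intact across chunk borders
  });
  return splitter.splitText(text);
}
```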
## Getting Started
To set up the project, clone the repository, add your API keys, and start the server using Docker or npm. The project is highly configurable: settings such as text chunk size and the number of similarity results can be tweaked in the app/config.tsx file, as sketched below.
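The shape of that config is roughly as follows; the key names here are inferred from the settings this README mentions and may not match the file exactly.

```typescript
// Illustrative sketch of app/config.tsx; check the file itself for the real keys.
export const config = {
  useOllamaInference: false,      // set true to run the LLM locally via Ollama
  useOllamaEmbeddings: false,     // set true to compute embeddings locally
  inferenceModel: "mixtral-8x7b-32768",
  textChunkSize: 1000,            // characters per chunk fed to the embedder
  textChunkOverlap: 400,          // overlap between adjacent chunks
  numberOfSimilarityResults: 4,   // top-k chunks retrieved per query
  numberOfPagesToScan: 10,        // search results to fetch and parse
};
```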
## Future Enhancements
Planned features include document upload for RAG, UI settings for model selection, and improved support for follow-up questions with Ollama.
This project is ideal for developers interested in NLP, search technologies, and building AI-powered applications. Contributions are welcome under the MIT License.