Show HN: We open sourced Verbis, a local AI assistant with SaaS connectors https://ift.tt/HLAUStj

Hi everyone, Verbis is an AI assistant for macOS that connects to applications such as Gmail, Google Drive, Outlook, and Slack. I built Verbis to give users an LLM assistant without sending any of their data to a third party, while keeping the ease of installation of a macOS application.

I've seen a number of similar projects in this space: GPT4All, Swirl, PrivateGPT, and more. However, each of them had one or more gaps:

- You have to choose a model yourself, or even run it locally via ollama or llama.cpp
- Pulling data from third-party applications is often unsupported, or requires a complex service-account token flow
- Running the project requires having Docker installed

I wanted to make Verbis as easy as possible to install and set up, so we ship our own choice of models for generation, and use user-scoped OAuth tokens so that integrating a third-party application is as simple as a consent screen.

On the technical side, we have settled on Mistral 7B as the generation model, nomic-embed-text as the embedding model, Weaviate as the vector store, and ms-marco-MiniLM-L-12-v2 as the reranker. In our next release we will likely default to Llama 3.1 8B on systems with more than 8 GB of unified memory.

Let me know if you have any feedback! I'd love to learn more about use cases where a fully private RAG pipeline makes sense. https://ift.tt/Z1cKxwp

August 1, 2024 at 01:44AM
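The post describes a standard local RAG pipeline: embed the query, retrieve candidate chunks from a vector store, rerank them, then stuff the winners into the generation prompt. Below is a minimal, self-contained sketch of those four stages. All of it is illustrative stand-in code, not Verbis's implementation: a bag-of-words function stands in for nomic-embed-text, a brute-force cosine search stands in for Weaviate, a term-overlap rescorer stands in for ms-marco-MiniLM-L-12-v2, and the final step only assembles the prompt that would be sent to a local model such as Mistral 7B.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for nomic-embed-text: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    # Stand-in for Weaviate: brute-force cosine similarity search.
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((text, embed(text)))

    def search(self, query, k=3):
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def rerank(query, candidates):
    # Stand-in for ms-marco-MiniLM-L-12-v2: rescore candidates by
    # query-term overlap and return them best-first.
    qterms = set(query.lower().split())
    return sorted(candidates,
                  key=lambda c: len(qterms & set(c.lower().split())),
                  reverse=True)

def build_prompt(query, context):
    # The reranked chunks become the context for the local generator
    # (Mistral 7B in Verbis; any local LLM would slot in here).
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

# Toy corpus mimicking chunks pulled from SaaS connectors.
store = VectorStore()
store.add("Quarterly revenue report shared in the #finance Slack channel")
store.add("Design doc for the new onboarding flow in Google Drive")
store.add("Email thread about the offsite schedule in Gmail")

query = "where is the revenue report"
hits = store.search(query, k=2)       # retrieve
top = rerank(query, hits)             # rerank
prompt = build_prompt(query, top[:1]) # assemble prompt for generation
print(prompt)
```

The point of the two-stage retrieve-then-rerank design is that the vector search is cheap but approximate, while the cross-encoder reranker is more accurate but too slow to run over the whole corpus, so it only scores the short candidate list.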


Show HN: UltimateExpress – 5x faster HTTP server with full Express compatibility https://ift.tt/Fqt1wn2

For the past few weeks I've been working on Ultimate Ex...