Show HN: TrustGraph – Do More with AI with Less (Open Source AI Infrastructure) https://ift.tt/QifDzke

Hi HN,

We’re Daniel and Mark, the creators of TrustGraph ( https://ift.tt/KsCXSgv ). TrustGraph is an open source, end-to-end AI infrastructure that automates knowledge graph building and querying, along with modular agent integration. A unique aspect of TrustGraph is that graph building is a one-time process that produces reusable knowledge cores, which can be stored, shared, and reloaded. You can read more about TrustGraph knowledge cores here ( https://ift.tt/3YB4vcA ).

Throughout our careers, we’ve been faced with huge datasets of unstructured knowledge: tens of thousands of pages of documents where related facts sit thousands of pages apart. Knowledge graphs and AI unlock a way to convert this unstructured knowledge into an enriched structure from which AI can extract accurate intelligence.

Unlike AI frameworks, TrustGraph is your infrastructure. All the services, stores, observability, and messaging backbone are deployed in a single package. Once deployed, TrustGraph enables you to:

- Ingest PDF, TXT, and MD files in batches
- Chunk ingested docs with multiple chunking options and parameters
- Structure the knowledge in each chunk as RDF
- Convert each chunk to mapped vector embeddings
- Store RDF triples in either Cassandra or Neo4j
- Store vector embeddings in Qdrant
- Monitor system performance, such as CPU/memory usage and token throughput, with a Grafana dashboard
- Store and load knowledge cores
- Query the graph store and vector DB for AI generation
- Integrate agents easily over the Apache Pulsar backbone

(A few illustrative sketches of these stages appear at the end of this post.)

TrustGraph is model agnostic and currently supports:

- Anthropic API
- AWS Bedrock API
- AzureAI API
- OpenAI API in Azure
- Cohere API
- Llamafile API
- Ollama API
- OpenAI API
- VertexAI API

TrustGraph is deployed with either Docker or Kubernetes. For knowledge extraction, we’ve seen Claude 3 Haiku and Gemini 1.5 Flash provide the best value. However, with a quality knowledge core extraction, you can use locally deployed SLMs to get big-LLM performance on your dataset. Our goal is to be able to use TLMs (models of 3B parameters and smaller) for AI generation on knowledge cores.

Trust is the foundation of TrustGraph’s goals. TrustGraph aims to enable “accuracy first” AI generation through a fully transparent, open source approach. Real-time observability will enable doing more with AI with less: less compute, less memory, and less power. We have a vision for “open knowledge cores” that contain common terms, definitions, and information to aid generation for community-driven niche use cases. We hope you join us on this journey of doing more with less.

Daniel and Mark

Give us a try: https://ift.tt/KsCXSgv
Full Documentation: https://ift.tt/sSpfl0n
Blog: https://ift.tt/NoICqFE
Join the Community: https://ift.tt/veUoNJS

October 7, 2024 at 02:02AM
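To make the "structure the knowledge in each chunk as RDF" step concrete, here is a minimal sketch using the rdflib Python library. The namespace, predicates, and facts are invented for illustration and are not TrustGraph's actual extraction schema; inside TrustGraph this step is handled by its own services.

```python
# Illustrative only: building a handful of RDF triples from an extracted chunk
# with rdflib. The namespace, predicates, and facts below are made up for the
# example; TrustGraph's own extraction schema may differ.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/kg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Facts an LLM extractor might pull from a document chunk.
g.add((EX.TrustGraph, RDF.type, EX.SoftwareProject))
g.add((EX.TrustGraph, RDFS.label, Literal("TrustGraph")))
g.add((EX.TrustGraph, EX.storesTriplesIn, EX.Cassandra))
g.add((EX.TrustGraph, EX.storesEmbeddingsIn, EX.Qdrant))

# Serialize to Turtle: these are the kind of structured facts that later land
# in a triple store such as Cassandra or Neo4j.
print(g.serialize(format="turtle"))
```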
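On the embedding side, here is a rough sketch of what storing chunk vectors in Qdrant can look like with the qdrant-client library. The collection name, vector size, and placeholder embedding function are assumptions for illustration; TrustGraph wires this up inside its own pipeline.

```python
# Illustrative only: storing chunk embeddings in Qdrant with qdrant-client.
# Collection name, vector size, and the embed() stub are assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

client.recreate_collection(
    collection_name="doc_chunks",  # hypothetical collection name
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

def embed(text: str) -> list[float]:
    """Placeholder embedding; swap in a real embedding model."""
    return [0.0] * 384

chunks = [
    "TrustGraph builds reusable knowledge cores.",
    "Triples are stored in Cassandra or Neo4j.",
]

# Upsert one point per chunk, keeping the raw text as payload for retrieval.
client.upsert(
    collection_name="doc_chunks",
    points=[
        PointStruct(id=i, vector=embed(c), payload={"text": c})
        for i, c in enumerate(chunks)
    ],
)
```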
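Querying the graph store for AI generation amounts to pulling relevant triples back out and grounding a prompt with them. A sketch with the official neo4j Python driver follows, assuming a hypothetical node/relationship layout rather than TrustGraph's actual schema.

```python
# Illustrative only: retrieving facts from a Neo4j graph store to ground a
# prompt. The Cypher pattern and property names are invented for the example.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def facts_about(entity: str) -> list[str]:
    """Return (subject, predicate, object) facts for one entity as plain strings."""
    with driver.session() as session:
        result = session.run(
            "MATCH (s {name: $name})-[r]->(o) "
            "RETURN s.name AS s, type(r) AS p, o.name AS o",
            name=entity,
        )
        return [f"{rec['s']} {rec['p']} {rec['o']}" for rec in result]

context = "\n".join(facts_about("TrustGraph"))
prompt = (
    "Answer using only these facts:\n"
    f"{context}\n\n"
    "Question: Which stores does TrustGraph use?"
)
driver.close()
```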
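Because the backbone is Apache Pulsar, an external agent can plug in by consuming and producing on Pulsar topics. Below is a sketch with the pulsar-client library; the topic names and JSON message format are assumptions, not TrustGraph's actual wire protocol.

```python
# Illustrative only: an external agent consuming a request from one Pulsar topic
# and publishing its result to another. Topic names and message shape are
# hypothetical.
import json
import pulsar

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe("agent-requests", subscription_name="my-agent")
producer = client.create_producer("agent-responses")

msg = consumer.receive()
try:
    request = json.loads(msg.data())
    # The agent does its actual work here; we just echo a stub result.
    answer = {"id": request.get("id"), "result": "..."}
    producer.send(json.dumps(answer).encode("utf-8"))
    consumer.acknowledge(msg)
except Exception:
    consumer.negative_acknowledge(msg)
finally:
    client.close()
```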
