Project Synapse

Advanced Multi-Pipeline AI Research & Analysis Platform

Part of the Xere AI unified intelligence system, developed by EdgeXene LLC

Mission Statement

Project Synapse, part of the Xere AI unified intelligence system, delivers AI assistance aimed at something more useful than writing bad haikus: real personal and research work. It's built to be more than just another chatbot; think personal research buddy meets brainy sidekick. With transparent multi-stage reasoning (no black-box mumbo jumbo), real-time data plugged in, and security solid enough to let a lawyer sleep at night, Synapse aims to deliver reliable, citation-backed insights.

Whether it's helping with independent legal digging, breaking down business strategy, or just stress-testing wild ideas, the mission is simple: keep improving the platform while pushing the frontier of agentic RAG so that, eventually, it won't just help with research; it'll run autonomous research workflows on its own (without asking for coffee breaks), like a tireless digital colleague.

Built for Real Work • Transparent by Design • Pushing Toward Agentic RAG

Platform Overview

Project Synapse is a multi-pipeline AI research and analysis platform with 8 specialized modes and a full document management system. Built on Node.js with Together AI serverless inference, Qdrant vector search, and 17+ integrated data sources.

8 Specialized Modes • 192GB DDR5 ECC RAM • 17+ Data Integrations • 5 Export Formats

Specialty Mode Pipelines

All specialty modes use GLM-5.1 for Stage 1 intelligent dispatching with semantic API routing. Each mode is a multi-stage pipeline powered by Together AI serverless inference with automatic fallback cascading.

Consultant

Max Tokens: 78,000

Stages: 3

Stage 1: zai-org/GLM-5.1 (6k, temp 0.5) -- Dispatch + API routing
Stage 2: openai/gpt-oss-120b (48k, temp 1.0) -- Deep analysis
Stage 3: deepcogito/cogito-v2-1-671b (24k, temp 0.8) -- Strategic synthesis

Legal

Max Tokens: 81,000

Stages: 3

Stage 1: zai-org/GLM-5.1 (6k, temp 0.4) -- Dispatch + API routing
Stage 2: deepcogito/cogito-v2-1-671b (48k, temp 0.7) -- Legal reasoning
Stage 3: Qwen/Qwen3-235B-A22B-Instruct-2507-tput (27k, temp 0.6) -- Final synthesis

Technical

Max Tokens: 93,000 base (+ 51k code gen)

Stages: 3 (+ optional Stage 4)

Stage 1: zai-org/GLM-5.1 (6k, temp 0.5) -- Intelligent dispatching
Stage 2: Qwen/Qwen3.5-397B-A17B (36k, temp 0.6) -- Deep technical analysis
Stage 3: Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 (51k, temp 0.4) -- Code generation & synthesis
Stage 4 (auto-trigger): Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 (51k) -- Extended code generation

* Stage 4 auto-triggers when code implementation is requested
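The Stage 4 auto-trigger could be implemented with a lightweight classifier on the user's query. A minimal sketch, assuming a simple keyword heuristic (the keyword list and the `needsCodeGeneration` helper are illustrative, not the platform's actual classifier):

```javascript
// Hypothetical heuristic for the Technical mode's Stage 4 auto-trigger:
// returns true when the query appears to request a code implementation.
function needsCodeGeneration(query) {
  return /\b(implement|write (a |the )?(function|class|script|code)|code (for|that)|refactor)\b/i
    .test(query);
}
```

In practice a dispatcher model like GLM-5.1 can make this call semantically; a regex gate like this would only serve as a cheap pre-filter.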

Creative Writing

Max Tokens: 83,000

Stages: 2

Styles: 2 writing modes

Stage 1: zai-org/GLM-5.1 (6k, temp 0.6) -- Creative direction
Stage 2a: Qwen/Qwen3.5-397B-A17B (77k, temp 0.9) -- Polished Narrative
or
Stage 2b: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 (77k, temp 0.9) -- Dynamic Shorts

* Polished Narrative for literary fiction, memoirs, structured long-form. Dynamic Shorts for flash fiction, dialogue-heavy scenes, experimental drafts.

Image Generator

Type: Dedicated image generation mode

Models: 2 options

Imagen 4.0 -- Fast generation for quick iterations
or
FLUX.1.1-pro -- High-quality image synthesis

* 12 art style presets per model (Anime, Photorealistic, Digital Art, Futuristic, Landscape, Oil Painting, Concept Art, Watercolor, 3D Render, Pixel Art, Minimalist, Dark Fantasy)

Ask Anything

Real-time web search with Perplexity-style streaming results. Supports speed modes (fast, balanced, deep), automatic model escalation based on context size, and location-aware local queries.

Default Model: openai/gpt-oss-120b (10k output tokens, temp 0.3)

Large Context (>25k tokens): Escalates to Qwen/Qwen3-235B-A22B-Instruct-2507-tput

Math/Logic Queries: Escalates to Qwen/Qwen3-235B-A22B-FP8-tput

Web Search: Parallel AI with perspectives

Citation Styles: OSCOLA (default), MLA

Parallel Web Search (fast/balanced/deep)
Batch Content Scraping (source extraction)
LLM Synthesis with citations (model selected by query complexity)
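The escalation rules above can be sketched as a small model-selection function. This is a minimal illustration: the model IDs come from the listing above, but the `estimateTokens` helper, the keyword check, and the exact thresholds are assumptions.

```javascript
// Illustrative model escalation for Ask Anything.
const DEFAULT_MODEL = "openai/gpt-oss-120b";
const LARGE_CONTEXT_MODEL = "Qwen/Qwen3-235B-A22B-Instruct-2507-tput";
const MATH_MODEL = "Qwen/Qwen3-235B-A22B-FP8-tput";

function estimateTokens(text) {
  // Rough heuristic: ~4 characters per token for English text.
  return Math.ceil(text.length / 4);
}

function selectModel(query, contextText) {
  // Math/logic queries escalate to the FP8 throughput model.
  if (/\b(prove|derive|solve|integral|theorem)\b/i.test(query)) {
    return MATH_MODEL;
  }
  // Contexts over 25k tokens escalate to the larger instruct model.
  if (estimateTokens(contextText) > 25000) {
    return LARGE_CONTEXT_MODEL;
  }
  return DEFAULT_MODEL;
}
```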

Agentic Research

Long-form research paper generation with 4 stages (+ optional agentic deep research). Supports configurable word counts from 3,000 to 30,000 words with real-time progress tracking via SSE.

Word Counts: 3k, 5k, 10k, 15k, 20k, 25k, 30k

Citation Styles: OSCOLA (default), APA, MLA

Export Formats: PDF, DOCX, LaTeX, RTF, Markdown

Stage 1: Planning -- Outline generation and citation estimation
Stage 2: Research -- Source gathering and deep research
Stage 2b (optional): Agentic Deep Research -- Iterative gap filling (up to 3 iterations, 85% coverage threshold)
Stage 3: Synthesis -- Comprehensive paper body with citations
Stage 4: Quality Assurance -- Citation verification, consistency checking, polish

Options: Enable/disable RAG, web search, legal APIs, agentic research, and LaTeX generation per paper.
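Real-time progress for these stages is delivered over SSE. As a sketch of how stage progress could be serialized into SSE frames (the `progress` event name and payload shape here are illustrative, not the platform's actual wire format):

```javascript
// SSE frames are text: an "event:" line, a "data:" line, and a
// blank-line terminator.
function sseEvent(name, data) {
  return `event: ${name}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Hypothetical per-stage progress event for the research pipeline.
function stageProgress(stage, label, percent) {
  return sseEvent("progress", { stage, label, percent });
}
```

A server would write these frames to a response with `Content-Type: text/event-stream`, and the browser's `EventSource` would receive them as `progress` events.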

Monitor

Continuous web monitoring powered by Parallel AI's Monitor API. Define a natural-language query once, choose a cadence, and Synapse tracks the web for new events matching your query. Events are delivered via webhook and persist in your chat history, even while you are signed out.

Provider: Parallel AI Monitor API (v1alpha)

Cadences: Daily (once or twice per day) or Weekly (once per week)

Per-user limit: 3 active monitors

Duration: Each monitor runs for up to 21 days, then stops automatically

History retention: Events remain visible for 365 days after expiry

How it works: Monitors run on Parallel's servers independently of your Synapse session. When changes are detected, Parallel POSTs to a secured webhook and Synapse persists the events to your account so you can review them next time you sign in.
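The persistence step could look like the sketch below. The webhook payload shape (`monitor_id`, `events[]` with `summary` and `url`) and the injected `store` are assumptions for illustration; consult Parallel's v1alpha documentation for the real schema.

```javascript
// Hypothetical handler body: normalize a Monitor webhook payload into
// history records and persist them to the user's store.
function persistMonitorEvents(payload, store) {
  const records = (payload.events ?? []).map((ev) => ({
    monitorId: payload.monitor_id,
    receivedAt: new Date().toISOString(),
    summary: ev.summary,
    url: ev.url,
  }));
  records.forEach((r) => store.push(r));
  return records.length; // number of events persisted
}
```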

Grand Library & RAG System

Centralized document management system with layout-aware PDF processing, vector search, and agentic retrieval.

Document Management

Collections: Personal, public, and legal document collections with custom categories and tags
Supported Formats: PDF, DOCX, TXT, Excel files (50MB upload limit)
PDF Processing: pdfminer.six for layout detection + Llama-4-Scout vision model for tables, charts, and figures
ArXiv Auto-Fetch: Research and Technical modes automatically download cited ArXiv papers into the Grand Library

Agentic RAG Pipeline (3-Stage)

Stage 1: Dispatcher

Model: zai-org/GLM-5.1
Role: Analyzes query intent and extracts search parameters. Routes to RAG Tool, Web Search, or Specialized APIs.
Routing: Semantic routing across 17 data sources via native function calling

Stage 2: Analyzer

Model: deepcogito/cogito-v2-preview-deepseek-671b
Role: Evaluates quality and sufficiency of retrieved results, calculates confidence scores, determines if supplementation or self-correction is needed

Stage 3: Synthesizer

Model: openai/gpt-oss-120b
Role: Creates comprehensive final answer with OSCOLA-style [GL-n] citations and bidirectional navigation
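The three stages above can be sketched as an async orchestration. This is an illustrative control flow only: `callModel` and `retrieve` stand in for Together AI completion calls and the retrieval layer, and the 0.7 confidence threshold for the self-correction pass is an assumption.

```javascript
// Dispatcher → Analyzer → Synthesizer, with one self-correction pass.
async function agenticRag(query, callModel, retrieve) {
  // Stage 1: dispatcher extracts search parameters and picks a route.
  const plan = await callModel("zai-org/GLM-5.1", { task: "dispatch", query });

  // Retrieval via the chosen route (RAG tool, web search, or an API).
  let results = await retrieve(plan.route, plan.searchParams);

  // Stage 2: analyzer scores sufficiency; low confidence triggers a
  // supplementary retrieval with refined parameters.
  const review = await callModel("deepcogito/cogito-v2-preview-deepseek-671b",
    { task: "analyze", query, results });
  if (review.confidence < 0.7 && review.refinedParams) {
    results = results.concat(await retrieve(plan.route, review.refinedParams));
  }

  // Stage 3: synthesizer writes the final cited answer.
  return callModel("openai/gpt-oss-120b",
    { task: "synthesize", query, results });
}
```

In the real system this routing and looping is handled by LangGraph (see Infrastructure below); the sketch just shows the data flow between stages.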

Infrastructure:

  • Embeddings: intfloat/multilingual-e5-large-instruct (1024 dimensions, multilingual)
  • Reranker: mixedbread-ai/mxbai-rerank-large-v2
  • Vector DB: Qdrant (multiple collections: legal, financial, business, manufacturing, public)
  • Knowledge Graph: Neo4j with concept-aware multi-hop retrieval (RELATES_TO, PART_OF, CITES edges). Entity-based expansion augments vector search with graph traversal for citation-aware answers. Nightly backfill cron extracts new relationships from ingested documents.
  • Caching: Redis (1-hour TTL for embeddings)
  • Orchestration: LangGraph with conditional routing, graph enrichment node, and self-correction loops
  • Continuous Web Monitoring: Parallel AI Monitor API (per-user subscriptions, 21-day max duration, webhook-delivered events)
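The 1-hour embedding cache above can be sketched as a thin wrapper where the cache client and embedding call are injected (any Redis-like client with `get`/`set` works). The `emb:` key scheme and the option shape are illustrative:

```javascript
// Cache embeddings for one hour to avoid re-embedding repeated queries.
const EMBED_TTL_SECONDS = 3600;

async function cachedEmbed(text, cache, embed) {
  const key = `emb:${text}`;
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit);
  // e.g. multilingual-e5-large-instruct, returning a 1024-dim vector.
  const vector = await embed(text);
  await cache.set(key, JSON.stringify(vector), { EX: EMBED_TTL_SECONDS });
  return vector;
}
```

The returned vector would then be sent to Qdrant for similarity search, with the reranker ordering the hits before synthesis.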

Token Allocation by Specialty Mode

GLM-5.1 Dispatching: All specialty modes use zai-org/GLM-5.1 (6k tokens) at Stage 1 for semantic API routing across 17 data sources. Ask Anything uses a separate model selection path with automatic escalation based on context size and query type.

Active Data Integrations

GLM-5.1 uses semantic understanding to automatically select the most relevant APIs for each query. The badges below show primary APIs per mode, but GLM-5.1 can route to any source based on query context.

Mode: Primary Data Sources
Ask Anything: Parallel, Brave Search (fallback), Wikipedia, RAG
Consultant: FRED, World Bank, Polygon.io, FMP, SEC EDGAR, RAG, Parallel, NewsAPI
Legal: CourtListener, UK Legislation, Congress.gov, SEC EDGAR, RAG, Parallel, NewsAPI, World Bank
Technical: RAG, Parallel, arXiv, Wikipedia, NASA, NewsAPI
Agentic Research: RAG (always), Parallel, Brave, arXiv, CourtListener, UK Legislation, Congress, FRED, World Bank, Polygon.io, FMP, SEC EDGAR, Alpha Vantage, NASA, Wikipedia, NewsAPI
Creative: RAG, Parallel, Wikipedia, NASA, NewsAPI
Monitor: Parallel AI Monitor API

Intelligent Features

  • GLM-5.1 Semantic Dispatching: Native function calling for intelligent API routing across all specialty modes
  • Complexity Detection: Queries classified as LOW/MEDIUM/HIGH based on depth, breadth, reasoning, ambiguity, and expertise dimensions
  • OSCOLA Citation System: All sources return OSCOLA-formatted citations -- RAG ([GL-N]), Web ([WEB-N]), Government ([GOV-N]), ArXiv ([ARXIV-N]) -- with bidirectional navigation
  • Model Fallback Cascading: Each model has a defined fallback chain -- GLM-5.1 to gpt-oss-120b, cogito-v2-1-671b to Qwen3-235B then gpt-oss-120b, Qwen3.5-397B to Qwen3-235B then gpt-oss-120b, Qwen3-Coder-480B to Qwen3.5-397B then gpt-oss-120b
  • Multi-Jurisdiction Legal APIs: US (CourtListener, Congress.gov, Regulations.gov, eCFR, GovInfo), EU (EUR-Lex), UK (Find Case Law, legislation.gov.uk)
  • ArXiv Auto-Download: Automatic paper retrieval with OSCOLA academic citations, ingested into Grand Library for future RAG queries
  • Together AI Key Rotation: Multiple API keys with automatic rotation for load distribution
  • Conversation Retention: 365-day history across all modes with per-mode session context
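The fallback chains listed above can be expressed as a simple retry cascade. A minimal sketch: `callModel` stands in for a Together AI completion call, and the full fallback model IDs are assumed from the pipeline listings earlier in this document.

```javascript
// Fallback chains per primary model (full IDs assumed from the
// pipeline listings above).
const FALLBACKS = {
  "zai-org/GLM-5.1": ["openai/gpt-oss-120b"],
  "deepcogito/cogito-v2-1-671b":
    ["Qwen/Qwen3-235B-A22B-Instruct-2507-tput", "openai/gpt-oss-120b"],
  "Qwen/Qwen3.5-397B-A17B":
    ["Qwen/Qwen3-235B-A22B-Instruct-2507-tput", "openai/gpt-oss-120b"],
  "Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8":
    ["Qwen/Qwen3.5-397B-A17B", "openai/gpt-oss-120b"],
};

// Try the primary model, then walk its fallback chain on failure.
async function completeWithFallback(model, request, callModel) {
  const chain = [model, ...(FALLBACKS[model] ?? [])];
  let lastError;
  for (const candidate of chain) {
    try {
      return await callModel(candidate, request);
    } catch (err) {
      lastError = err; // try the next model in the chain
    }
  }
  throw lastError;
}
```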

Platform Capabilities

  • GLM-5.1 intelligent dispatching -- Semantic API routing across 17 data sources via native function calling
  • Multi-stage reasoning pipelines -- Up to 4 stages per specialty mode with automatic fallback cascading
  • Real-time SSE streaming -- All modes stream responses in real-time with progress tracking
  • Legal & regulatory research -- Multi-jurisdiction APIs with OSCOLA citation formatting
  • Academic research -- arXiv, NASA, Wikipedia with auto-download to Grand Library
  • Citation systems -- OSCOLA (legal), APA (business/research), MLA (academic)
  • Export formats -- PDF, DOCX, LaTeX, RTF, Markdown for research papers
  • Image generation -- Imagen 4.0 (fast) and FLUX.1.1-pro (quality) with 12 art style presets each
  • Professional-grade security -- Input sanitization, rate limiting (120 req/min), IP-based deduplication, threat detection
  • Conversation memory -- 365-day retention with session-based context per mode
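The 120 req/min limit above could be enforced with a per-IP fixed-window counter. A minimal in-memory sketch (the window bookkeeping and injectable clock are illustrative; production deployments typically back this with Redis):

```javascript
// Fixed-window rate limiter: at most LIMIT requests per IP per window.
const WINDOW_MS = 60000;
const LIMIT = 120;

function makeRateLimiter(now = Date.now) {
  const windows = new Map(); // ip -> { start, count }
  return function allow(ip) {
    const t = now();
    const w = windows.get(ip);
    if (!w || t - w.start >= WINDOW_MS) {
      windows.set(ip, { start: t, count: 1 }); // new window
      return true;
    }
    w.count += 1;
    return w.count <= LIMIT;
  };
}
```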


A product of Xere AI by EdgeXene LLC


15442 Ventura Blvd., Ste 201-2525, Sherman Oaks, CA 91403

contact@edgexene.io