Advanced Multi-Pipeline AI Research & Analysis Platform
Part of the Xere AI unified intelligence system, developed by EdgeXene LLC
Project Synapse, developed under eklypse as part of the Xere AI unified intelligence system, delivers AI assistance aimed at something more useful than writing bad haikus: real personal and research work. It's built to be more than just another chatbot; think personal research buddy meets brainy sidekick. With transparent multi-stage reasoning (no black-box mumbo jumbo), real-time data plugged in, and security solid enough to let a lawyer sleep at night, Synapse aims to deliver reliable, citation-backed insights.
Whether it's helping with independent legal digging, breaking down business strategy, or just stress-testing wild ideas, the mission is simple: keep improving the platform while poking at the frontier of agentic RAG, so that eventually it won't just help with research; it'll run autonomous research workflows on its own (without asking for coffee breaks), like a tireless digital colleague.
Built for Real Work • Transparent by Design • Pushing Toward Agentic RAG
Project Synapse is a multi-pipeline AI research and analysis platform with 8 specialized modes and a full document management system. Built on Node.js with Together AI serverless inference, Qdrant vector search, and 17+ integrated data sources.
All specialty modes use GLM-5.1 for Stage 1 intelligent dispatching with semantic API routing. Each mode is a multi-stage pipeline powered by Together AI serverless inference with automatic fallback cascading.
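As a rough illustration, a fallback cascade could look like the sketch below. The model order and the helper are illustrative only, not the platform's actual code; it assumes the `together-ai` Node SDK with a `TOGETHER_API_KEY` in the environment.

```typescript
// Illustrative only: try each model in order until one returns a completion.
// Model order and budgets vary per mode; this is not the platform's actual code.
import Together from "together-ai";

const together = new Together(); // reads TOGETHER_API_KEY from the environment

const CASCADE = [
  "zai-org/GLM-5.1",
  "openai/gpt-oss-120b",
  "Qwen/Qwen3-235B-A22B-Instruct-2507-tput",
];

async function completeWithFallback(prompt: string): Promise<string> {
  let lastError: unknown;
  for (const model of CASCADE) {
    try {
      const res = await together.chat.completions.create({
        model,
        messages: [{ role: "user", content: prompt }],
      });
      return res.choices[0]?.message?.content ?? "";
    } catch (err) {
      lastError = err; // model unavailable or over capacity: fall through to the next one
    }
  }
  throw lastError;
}
```

In practice each mode carries its own model order and token budget, as in the per-mode specs that follow.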
Max Tokens: 78,000
Stages: 3
Max Tokens: 81,000
Stages: 3
Max Tokens: 93,000 base (+ 51k code gen)
Stages: 3 (+ optional Stage 4)
* Stage 4 auto-triggers when code implementation is requested
Max Tokens: 83,000
Stages: 2
Styles: 2 writing modes
* Polished Narrative for literary fiction, memoirs, structured long-form. Dynamic Shorts for flash fiction, dialogue-heavy scenes, experimental drafts.
Type: Dedicated image generation mode
Models: 2 options
* 12 art style presets per model (Anime, Photorealistic, Digital Art, Futuristic, Landscape, Oil Painting, Concept Art, Watercolor, 3D Render, Pixel Art, Minimalist, Dark Fantasy)
Real-time web search with Perplexity-style streaming results. Supports speed modes (fast, balanced, deep), automatic model escalation based on context size, and location-aware local queries. The escalation rules are sketched after the spec below.
Default Model: openai/gpt-oss-120b (10k output tokens, temp 0.3)
Large Context (>25k tokens): Escalates to Qwen/Qwen3-235B-A22B-Instruct-2507-tput
Math/Logic Queries: Escalates to Qwen/Qwen3-235B-A22B-FP8-tput
Web Search: Parallel AI with perspectives
Citation Styles: OSCOLA (default), MLA
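A minimal sketch of those escalation rules; the 4-characters-per-token estimate and the math/logic keyword check are illustrative heuristics, not the production logic.

```typescript
// Sketch of the escalation rules above. The token estimate and the
// math/logic keyword check are assumptions made for illustration.
function pickAskAnythingModel(query: string, context: string): string {
  const approxTokens = Math.ceil(context.length / 4);
  const looksMathy = /\b(prove|integral|derivative|theorem|equation|logic)\b/i.test(query);

  if (looksMathy) return "Qwen/Qwen3-235B-A22B-FP8-tput";                      // math/logic queries
  if (approxTokens > 25_000) return "Qwen/Qwen3-235B-A22B-Instruct-2507-tput"; // large context
  return "openai/gpt-oss-120b";                                                // default (10k output, temp 0.3)
}
```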
Long-form research paper generation with 4 stages (+ optional agentic deep research). Supports configurable word counts from 3,000 to 30,000 words with real-time progress tracking via SSE; a sketch of consuming the progress stream follows the spec below.
Word Counts: 3k, 5k, 10k, 15k, 20k, 25k, 30k
Citation Styles: OSCOLA (default), APA, MLA
Export Formats: PDF, DOCX, LaTeX, RTF, Markdown
Options: Enable/disable RAG, web search, legal APIs, agentic research, and LaTeX generation per paper.
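A rough sketch of listening to the SSE progress stream from a client. The endpoint path and the event payload shape are assumptions, not the platform's documented API.

```typescript
// Hypothetical client-side sketch using the browser's built-in EventSource.
// The endpoint path and the event payload fields are assumptions.
const progress = new EventSource("/api/research/1234/progress");

progress.onmessage = (event) => {
  const update = JSON.parse(event.data); // e.g. { stage: 2, percent: 40, note: "Drafting sections" }
  console.log(`Stage ${update.stage}: ${update.percent}% - ${update.note}`);
  if (update.percent >= 100) progress.close(); // paper finished; stop listening
};
```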
Continuous web monitoring powered by Parallel AI's Monitor API. Define a natural-language query once, choose a cadence, and Synapse tracks the web for new events matching your query. Events are delivered via webhook and persist in your chat history, even while you are signed out.
Provider: Parallel AI Monitor API (v1alpha)
Cadences: Daily (once or twice per day) or Weekly (once per week)
Per-user limit: 3 active monitors
Duration: Each monitor runs for up to 21 days, then stops automatically
History retention: Events remain visible for 365 days after expiry
How it works: Monitors run on Parallel's servers independently of your Synapse session. When changes are detected, Parallel POSTs to a secured webhook and Synapse persists the events to your account so you can review them next time you sign in.
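A hedged sketch of what such a webhook receiver could look like on the Synapse side. The Express route, signature header, payload fields, and persistence helper are all assumptions, not Parallel's documented contract.

```typescript
// Hypothetical Express receiver for Monitor webhook deliveries. The route path,
// signature header, and payload fields are assumptions, not Parallel's actual API.
import express from "express";
import crypto from "node:crypto";

const app = express();
app.use(express.json());

// Stub persistence helper for illustration; the real platform writes to chat history storage.
function saveMonitorEvent(monitorId: string, event: unknown): void {
  console.log(`monitor ${monitorId}:`, event);
}

app.post("/webhooks/parallel-monitor", (req, res) => {
  // Check a shared-secret HMAC so only Parallel can post here (a production
  // verifier would sign the raw request body rather than the re-serialized JSON).
  const signature = req.header("x-parallel-signature") ?? "";
  const expected = crypto
    .createHmac("sha256", process.env.PARALLEL_WEBHOOK_SECRET ?? "")
    .update(JSON.stringify(req.body))
    .digest("hex");
  if (signature !== expected) {
    res.status(401).end();
    return;
  }

  // Persist each detected event so it shows up in the user's history at next sign-in.
  for (const event of req.body.events ?? []) {
    saveMonitorEvent(req.body.monitorId, event);
  }
  res.status(204).end();
});

app.listen(3000);
```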
Centralized document management system with layout-aware PDF processing, vector search, and agentic retrieval.
Model: zai-org/GLM-5.1
Role: Analyzes query intent and extracts search parameters. Routes to RAG Tool, Web Search, or Specialized APIs.
Routing: Semantic routing across 17 data sources via native function calling
Model: deepcogito/cogito-v2-preview-deepseek-671b
Role: Evaluates quality and sufficiency of retrieved results, calculates confidence scores, determines if supplementation or self-correction is needed
Model: openai/gpt-oss-120b
Role: Creates comprehensive final answer with OSCOLA-style [GL-n] citations and bidirectional navigation
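To make the flow concrete, here is a simplified, hypothetical chaining of the three stages using the Together chat API. Prompts and parsing are toy versions, and the real Stage 1 uses native function calling rather than a plain prompt.

```typescript
// Simplified, hypothetical chaining of the three pipeline stages described above.
import Together from "together-ai";

const together = new Together(); // reads TOGETHER_API_KEY from the environment

async function chat(model: string, prompt: string): Promise<string> {
  const res = await together.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}

export async function runPipeline(query: string, retrieved: string[]): Promise<string> {
  // Stage 1 (dispatcher): GLM-5.1 analyzes intent and picks the sources to query.
  const routing = await chat("zai-org/GLM-5.1", `Which data sources best answer: ${query}?`);

  // Stage 2 (evaluator): cogito-v2 judges whether the retrieved material is sufficient.
  const verdict = await chat(
    "deepcogito/cogito-v2-preview-deepseek-671b",
    `Query: ${query}\nRouting: ${routing}\nSources:\n${retrieved.join("\n")}\nAre these sufficient? Answer YES or NO.`
  );
  if (!verdict.includes("YES")) {
    // The real pipeline supplements or self-corrects here; the sketch only notes the gap.
    console.warn("Evaluator flagged insufficient context");
  }

  // Stage 3 (synthesizer): gpt-oss-120b writes the final answer with [GL-n] citations.
  return chat(
    "openai/gpt-oss-120b",
    `Answer "${query}" using only these sources, citing them as [GL-1], [GL-2], ...\n${retrieved.join("\n")}`
  );
}
```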
Infrastructure:
GLM-5.1 Dispatching: All specialty modes use zai-org/GLM-5.1 (6k tokens) at Stage 1 for semantic API routing across 17 data sources. Ask Anything uses a separate model selection path with automatic escalation based on context size and query type.
GLM-5.1 uses semantic understanding to automatically select the most relevant APIs for each query. The badges below show primary APIs per mode, but GLM-5.1 can route to any source based on query context. A sketch of this tool-calling routing follows the table.
| Mode | Primary Data Sources |
|---|---|
| Ask Anything | Parallel, Brave Search (fallback), Wikipedia, RAG |
| Consultant | FRED, World Bank, Polygon.io, FMP, SEC EDGAR, RAG, Parallel, NewsAPI |
| Legal | CourtListener, UK Legislation, Congress.gov, SEC EDGAR, RAG, Parallel, NewsAPI, World Bank |
| Technical | RAG, Parallel, arXiv, Wikipedia, NASA, NewsAPI |
| Agentic Research | RAG (always), Parallel, Brave, arXiv, CourtListener, UK Legislation, Congress, FRED, World Bank, Polygon.io, FMP, SEC EDGAR, Alpha Vantage, NASA, Wikipedia, NewsAPI |
| Creative | RAG, Parallel, Wikipedia, NASA, NewsAPI |
| Monitor | Parallel AI Monitor API |
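As noted above, GLM-5.1 selects sources through native function calling. Below is a minimal sketch of that routing, assuming Together's OpenAI-compatible tool-calling interface; the two tool schemas are illustrative, while the real dispatcher exposes 17+ data sources.

```typescript
// Illustrative tool-calling routing. Tool names and schemas are assumptions;
// the real dispatcher exposes 17+ data sources as callable tools.
import Together from "together-ai";

const together = new Together(); // reads TOGETHER_API_KEY from the environment

const tools = [
  {
    type: "function" as const,
    function: {
      name: "search_sec_edgar",
      description: "Look up company filings in SEC EDGAR",
      parameters: {
        type: "object",
        properties: { company: { type: "string" }, formType: { type: "string" } },
        required: ["company"],
      },
    },
  },
  {
    type: "function" as const,
    function: {
      name: "search_courtlistener",
      description: "Search US case law via CourtListener",
      parameters: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"],
      },
    },
  },
];

async function routeQuery(query: string) {
  const res = await together.chat.completions.create({
    model: "zai-org/GLM-5.1",
    messages: [{ role: "user", content: query }],
    tools,                // the dispatcher semantically picks the relevant source(s)
    tool_choice: "auto",
  });
  // Each tool call names a data source plus the extracted search parameters.
  return res.choices[0]?.message?.tool_calls ?? [];
}
```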
A product of Xere AI by EdgeXene LLC
15442 Ventura Blvd., Ste 201-2525, Sherman Oaks, CA 91403