GLXY White Paper v2: Evidence-First

No prompts. Ask like a human. Drop notes, links, files, and messages. In under a minute you get three outputs—Today’s Brief, Open Loops, Next Best Action—grounded in evidence and a deterministic universe state.

Universe Map

An implementation schematic (not a user-facing UI): planet isolation, Bridge retrieval, and knowledge graph provenance.
FIG 1: Planets (Home, Work, Health, Research, Company Galaxy) → Bridge (federated retrieval) → Knowledge Graph (provenance + evidence)

Start here

No prompts. Ask like a human. You shouldn’t need to learn a new language to get value from your own information.

Drop notes, links, files, and messages… GLXY stores everything first, then extracts what matters. You can inspect what it saved, what it inferred, and what it chose to show you.

In under a minute you get three outputs: Today’s Brief, Open Loops, and Next Best Action. Each line links back to evidence (a quote, a snippet, a file, a message).

Under the hood, GLXY is a MasterContext engine built for ownership and auditability. But the user experience stays simple: one place to drop input, one place to see the next clear step.

The Core Loop

GLXY is designed for non-prompters. You don’t “drive” it with clever prompts—you just drop what’s happening, and it keeps your world up to date.

Drop → Store → Extract → 3 outputs → Next Best Action → Repeat
  • Drop: paste a message, save a link, upload a file, jot a note.
  • Store: everything is saved first (nothing “disappears” because parsing failed).
  • Extract: GLXY pulls out claims, tasks, dates, decisions, and unknowns—each tied to evidence.
  • 3 outputs: you get a clear view of today and what’s unresolved.
  • Next Best Action: one recommended move to reduce uncertainty or unblock progress.
  • Repeat: as you drop more, outputs get sharper, without you rebuilding context every session.
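The loop above can be sketched in a few lines. This is an illustrative Python sketch, not the GLXY API; the names (`Drop`, `Universe`, `ingest`) and the toy extraction rule are assumptions for exposition.

```python
# Illustrative core loop: store first, extract after, emit 3 outputs + one action.
from dataclasses import dataclass, field

@dataclass
class Drop:
    raw: str                                        # pasted message, link, note, etc.

@dataclass
class Universe:
    artifacts: list = field(default_factory=list)   # everything is stored first
    open_loops: list = field(default_factory=list)

    def extract(self, drop: Drop) -> list:
        # Placeholder extraction: one "fact" per non-empty line of the drop.
        return [line.strip() for line in drop.raw.splitlines() if line.strip()]

    def ingest(self, drop: Drop) -> dict:
        self.artifacts.append(drop)                 # Store: input never disappears
        facts = self.extract(drop)                  # Extract: claims, tasks, unknowns
        self.open_loops += [f for f in facts if f.endswith("?")]
        return {                                    # 3 outputs, one recommended move
            "brief": facts,
            "open_loops": self.open_loops,
            "next_best_action": self.open_loops[0] if self.open_loops else None,
        }

u = Universe()
out = u.ingest(Drop("Ship draft by 4pm\nWho owns sign-off?"))
```

Note the ordering: the artifact is appended to storage before extraction runs, so a failed extraction can never lose the input.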

What you get in 60 seconds

You drop something small—an email thread, a meeting note, a link—and GLXY responds with three outputs. These outputs are meant to be useful even if you never “chat” at all.

Example

Today’s Brief
1) Ship draft proposal for Zephyr by 4pm (needs 2 screenshots).
2) Confirm budget range for Q1 ads ($8–12k?) before Monday.
3) “Wait on legal” is the blocker for the partnership doc.
Open Loops
- Unknown: who owns the final sign-off on Zephyr?
- Pending: reply from Sam about the timeline.
- Contradiction: two different dates mentioned (Jan 7 vs Jan 9).
Next Best Action (one)
Send one message: “Who is final sign-off for Zephyr, and is the due date Jan 7 or Jan 9?” (links to the exact lines it’s resolving)

Each bullet is evidence-first: you can open the source snippet that produced it. If GLXY isn’t sure, it says so (see Truth Policy).

2-minute Guided Setup

GLXY works without setup. The guided setup just helps it format outputs the way you like, and keeps it from guessing.

Setup script (skippable)
  1. What should we call you?
  2. What’s your time zone (or city)?
  3. What’s your main focus right now? (work / personal / both / “I’m not sure”)
  4. What should count as an “Open Loop” for you? (unanswered question, pending reply, missing info, blocked task, contradiction)
  5. Do you want GLXY to suggest a Next Best Action automatically, or only when you ask?
  6. How do you want uncertainty shown? (e.g. “Confirmed / Likely / Unverified / Unknown”)
  7. Any topics to avoid summarizing by default? (optional)
  8. Do you want a daily brief time? (optional)

Truth Policy

GLXY is not “right by default.” It is evidence-first by design. If something can’t be supported, it should be labeled or left out.

  • Evidence required: important claims must link to a source (snippet, file, message, or URL).
  • Uncertainty labels: GLXY uses explicit labels (e.g. Confirmed / Likely / Unverified / Unknown). No hidden confidence theater.
  • Corrections: when new evidence contradicts an older claim, GLXY marks the older claim as corrected and shows both sides.
  • No invented citations: if it can’t cite, it must say “no evidence found.”
  • User control: you can edit, delete, or lock memories. Locked items don’t get overwritten by “better guesses.”
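The Truth Policy rules above are mechanically checkable. A minimal sketch of such a check, assuming a hypothetical claim model (the `Claim` fields and `assert_label` helper are not the shipped schema):

```python
# Sketch: a claim without evidence can never be labeled Confirmed,
# and locked claims are never overwritten by later inference.
from dataclasses import dataclass, field
from enum import Enum

class Label(Enum):
    CONFIRMED = "Confirmed"
    LIKELY = "Likely"
    UNVERIFIED = "Unverified"
    UNKNOWN = "Unknown"

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # snippet/file/message/URL pointers
    label: Label = Label.UNVERIFIED
    locked: bool = False

def assert_label(claim: Claim, label: Label) -> Claim:
    if claim.locked:
        return claim                               # locked items don't get overwritten
    if label is Label.CONFIRMED and not claim.evidence:
        label = Label.UNVERIFIED                   # evidence required: no invented citations
    claim.label = label
    return claim
```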

Universe State Machine

GLXY keeps a deterministic “universe state” so the product can always tell you what’s happening and what to do next. Each state has one primary call-to-action.

UNINITIALIZED
No inputs yet. GLXY can’t summarize what it hasn’t seen.
CTA: Drop your first item.
INGESTING
Saving is done; background processing is running (parsing, extraction, indexing).
CTA: Keep dropping—your inputs are safe.
BOOTSTRAPPING
GLXY is forming the initial “universe”: entities, projects, recurring threads, and open loops.
CTA: Confirm or edit the first Brief + Open Loops.
READY
Outputs are stable and evidence-backed. The universe can evolve deterministically with new drops.
CTA: Do the Next Best Action.
DEGRADED
GLXY is missing dependencies (model downtime, rate limits, failed extraction) but storage remains intact.
CTA: View what’s saved + retry processing.
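The five states and their CTAs can be expressed as a small deterministic table. The transition triggers below are assumptions for illustration; only the states and CTAs come from the text above.

```python
# Minimal deterministic sketch of the universe state machine:
# one primary CTA per state, unknown events leave the state unchanged.
STATES = {
    "UNINITIALIZED": "Drop your first item.",
    "INGESTING": "Keep dropping—your inputs are safe.",
    "BOOTSTRAPPING": "Confirm or edit the first Brief + Open Loops.",
    "READY": "Do the Next Best Action.",
    "DEGRADED": "View what’s saved + retry processing.",
}

TRANSITIONS = {  # (state, event) -> next state; triggers are illustrative
    ("UNINITIALIZED", "drop"): "INGESTING",
    ("INGESTING", "extracted"): "BOOTSTRAPPING",
    ("BOOTSTRAPPING", "confirmed"): "READY",
    ("READY", "drop"): "INGESTING",
    ("INGESTING", "dependency_failed"): "DEGRADED",
    ("DEGRADED", "retry_ok"): "INGESTING",
}

def step(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)  # deterministic: no hidden branches

def cta(state: str) -> str:
    return STATES[state]
```

Because the table is total and side-effect free, the product can always answer "what state am I in, and what is the one thing to do next."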

UI/UX Principles

GLXY is intentionally minimal. These principles are meant to be enforceable, not inspirational.

  • One primary action per screen: no competing CTAs.
  • Drop-first, not chat-first: the default entry is “drop something,” not “prompt the model.”
  • Three outputs always visible: Brief, Open Loops, Next Best Action.
  • Evidence is one click away: every meaningful line shows sources.
  • Uncertainty must be explicit: no hidden guesses.
  • Deterministic state is visible: show the current universe state and what it means.
  • Edits are first-class: users can correct a fact, not just “tell the assistant.”
  • Memory locks exist: users can freeze items to prevent drift.
  • No dark patterns: no nudges to share more than needed to get value.
  • Fast path first: “first win” must land inside 60 seconds on a cold start.

Privacy + Ownership

GLXY treats your context as a private asset. You should be able to inspect it, move it, and delete it—without negotiation.

  • Export: download your raw inputs and derived context (including citations/provenance).
  • Delete: delete individual items or your entire universe, with clear confirmation.
  • Edit: correct extracted facts and labels; edits are tracked as changes, not hidden rewrites.
  • Lock memory: mark facts as “locked” so they won’t be overwritten by later inference.
  • What’s used for training: your content is not used to train public models by default. When model APIs are used, content may be sent to the model provider strictly to generate outputs.
  • What’s not required: you don’t have to share your entire life to get a first win—one small drop is enough.

Technical Appendix (Engine Material)

This appendix preserves the original “engine” whitepaper content and implementation metaphors (Mission Control / Planets / Bridge). These are internal concepts, not the user-facing experience.

Technical Architecture (Original)

GLXY is a persistent, evolving MasterContext engine designed to reverse the prevailing asymmetry of the digital economy: data extraction. Instead of routing the value of lived digital experience into opaque monetization pipelines, GLXY enables individuals and organizations to own, govern, and compute over their own context.

GLXY is a closed-loop contextual system in which every artifact becomes an opportunity to refine a governable state: the MasterContext.

newContext = f(oldContext, newInput)
Ingest → Extract → Context Evolve → Signal Detect → Insight Generate → Save
  • Schema constraints: transitions must produce valid MasterContext JSON.
  • Change semantics: updates expressed as typed operations, not opaque rewrites.
  • Evidence requirements: newly asserted facts include pointers to supporting snippets.
  • Versioning: each commit yields a new context version with an append-only audit trail.
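The four constraints above can be sketched as one transition function. The operation shapes (`assert_fact`, `retract_fact`) are hypothetical; the sketch only demonstrates typed operations, mandatory evidence, and an append-only version trail.

```python
# Sketch of newContext = f(oldContext, newInput) with typed operations:
# each commit yields a fresh version and extends the audit trail.
import copy

def apply_op(context: dict, op: dict) -> dict:
    new = copy.deepcopy(context)                  # contexts are immutable versions
    if op["type"] == "assert_fact":
        assert op.get("evidence"), "new facts must cite evidence"
        new["facts"][op["key"]] = op["value"]
    elif op["type"] == "retract_fact":
        new["facts"].pop(op["key"], None)
    new["version"] = context["version"] + 1
    new["audit"] = context["audit"] + [op]        # append-only audit trail
    return new

c0 = {"version": 0, "facts": {}, "audit": []}
c1 = apply_op(c0, {"type": "assert_fact", "key": "due",
                   "value": "Jan 7", "evidence": ["msg#42"]})
```

Expressing updates as typed operations rather than opaque rewrites is what makes corrections and locks enforceable: every change names what it did and why.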

Store-First, Reason-Later (ACE16) ensures 100% persistence: raw artifacts are stored immediately; enrichment and context evolution happen asynchronously with retries.
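A minimal sketch of the Store-First, Reason-Later pattern, with in-memory lists standing in for durable storage and an async work queue (function names are illustrative, not ACE16's interface):

```python
# Persist the raw artifact synchronously; enrichment runs later with retries,
# so a parsing failure never loses the input.
store = []            # stands in for durable artifact storage
queue = []            # stands in for an async work queue

def enrich(raw: bytes) -> None:
    raw.decode("utf-8")               # placeholder enrichment step; may fail

def ingest(raw: bytes) -> int:
    store.append(raw)                 # store first: 100% persistence
    artifact_id = len(store) - 1
    queue.append({"id": artifact_id, "attempts": 0})
    return artifact_id

def process(max_attempts: int = 3) -> None:
    while queue:
        job = queue.pop(0)
        try:
            enrich(store[job["id"]])
        except Exception:
            job["attempts"] += 1
            if job["attempts"] < max_attempts:
                queue.append(job)     # retry; the raw artifact is already safe

bad = ingest(b"\xff")                 # an undecodable artifact
process()                             # enrichment fails, storage is intact
```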

AC13 defines how GLXY converts evolving context into a structured relational substrate: nodes/edges are synthesized automatically, but every assertion carries mandatory provenance and evidence snippets.
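AC13's mandatory-provenance rule can be reduced to a guard on edge insertion. A sketch under assumed structure (the edge fields are hypothetical):

```python
# Every graph assertion must carry a provenance pointer and an evidence
# snippet, or it is rejected outright.
edges = []

def add_edge(src: str, dst: str, relation: str,
             provenance: str, snippet: str) -> None:
    if not provenance or not snippet:
        raise ValueError("assertions require provenance and evidence")
    edges.append({"src": src, "dst": dst, "rel": relation,
                  "prov": provenance, "snippet": snippet})

add_edge("Zephyr", "Jan 7", "due_on",
         provenance="email:thread-12", snippet="due Jan 7")
```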

A Planet is a scoped contextual domain: an isolated MasterContext slice and its artifact store, graph substructure, and retrieval index. Planet Synthesis updates a planet’s state from authorized artifacts:

Cᵖ₍t+1₎ = f(Cᵖ₍t₎, Iᵖ₍t₎)

Bridge Chat is federated retrieval with policy: retrieve top‑k snippets per eligible planet, merge/rerank into a global top‑K evidence set, then synthesize with citations.
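The retrieve/merge/rerank step can be sketched as follows; the word-overlap scorer is a stand-in for a real relevance model, and the function names are illustrative:

```python
# Bridge retrieval sketch: top-k snippets per eligible planet,
# merged and reranked into one global top-K evidence set.
import heapq

def score(query: str, snippet: str) -> int:
    # Stand-in relevance: word overlap between query and snippet.
    return len(set(query.split()) & set(snippet.split()))

def bridge_retrieve(query, planets, k=2, K=3):
    candidates = []
    for name, snippets in planets.items():        # only eligible planets are passed in
        scored = [(score(query, s), name, s) for s in snippets]
        candidates += heapq.nlargest(k, scored)   # per-planet top-k
    top = heapq.nlargest(K, candidates)           # global merge/rerank
    return [(name, s) for _, name, s in top]      # snippets keep their planet for citations

planets = {"Work": ["zephyr due jan 7", "budget q1 ads"],
           "Home": ["grocery list"]}
results = bridge_retrieve("zephyr due date", planets)
```

Keeping the planet name attached to each returned snippet is what lets the synthesized answer cite its sources per planet.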

Human-Centric Intelligence (Reframed)

GLXY is engineered around a non-negotiable premise: intelligence is only valuable if it is humane—context-aware, emotionally appropriate, and operationally loyal to the person it serves. Personalization is controlled: it’s explicit, inspectable, and reversible.

Rescue Mode identifies pressure signatures across the MasterContext and converts diffuse stress into a 15-minute actionable mission:

  • Small enough now: ≤ 15 minutes, minimal dependencies.
  • High-leverage: reduces uncertainty or unblocks the next critical step.
  • Evidence-backed: derived from commitments and preferences, not generic advice.
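The three Rescue Mode criteria translate directly into a selection function. The candidate fields and the single `leverage` score below are assumptions for illustration:

```python
# Pick one mission that fits in 15 minutes, has no open dependencies,
# and is evidence-backed, preferring the highest-leverage candidate.
def pick_mission(candidates):
    eligible = [c for c in candidates
                if c["minutes"] <= 15          # small enough now
                and not c["blocked_by"]        # minimal dependencies
                and c["evidence"]]             # evidence-backed, not generic advice
    return max(eligible, key=lambda c: c["leverage"], default=None)

mission = pick_mission([
    {"minutes": 10, "blocked_by": [], "evidence": ["msg#3"], "leverage": 0.9,
     "text": "Ask who owns Zephyr sign-off"},
    {"minutes": 45, "blocked_by": [], "evidence": ["doc#1"], "leverage": 1.0,
     "text": "Rewrite the proposal"},
])
```

The 45-minute candidate is excluded despite its higher leverage: "small enough now" is a hard constraint, not a preference.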

With explicit authorization, GLXY detects cross-planet overlaps to surface coherence: recommendations that are systemically correct for a life, not locally optimal for one app.

Mission Control remains the best implementation metaphor: a command center where context persists as a living map, tasks become missions, domains remain governed yet connectable, and insights arrive with evidence—not mystique.

Future Roadmap & Extensibility (Original)

  • Infant (MVP): Universal Ingest, planet isolation, Bridge retrieval with evidence, durable MasterContext.
  • Adolescent: schema evolution, conflict detection, stronger summaries, matured knowledge graph.
  • Adult: Rescue Mode, cross-planet overlaps, governed integrations, “next best action”.
  • Powerhouse: delegated agents under policy and predictive analytics (risk, drift, anomaly forecasting).

Company Galaxy deploys GLXY inside an organization: internal documentation and communication are ingested into a shared MasterContext with role-based governance to eliminate silos and generate System Effectiveness insights (bottlenecks, decision latency hotspots, alignment drift, knowledge coverage gaps), always with provenance and evidence.

GLXY evolves into an Engine as a Service: other apps call a GLXY API for ingestion, retrieval, and evidence-backed reasoning while the user’s MasterContext remains portable and governed.

The endpoint is a Universal Context Layer where intelligence improves with time, operates under user control, and amplifies agency. The next era of intelligence is defined not by model size, but by context ownership.

GLXY White Paper v2: outcome-first UX, evidence-first outputs, deterministic universe state. Engine material preserved in the Technical Appendix.