# Local Persistence
RaiSE ships with filesystem-backed adapters for both backlog management and documentation publishing. These are the default when no external services (Jira, Confluence) are configured — no setup required.
## When to Use Filesystem Adapters
| Scenario | Backlog | Docs |
|---|---|---|
| Getting started — exploring RaiSE before connecting Jira/Confluence | filesystem | filesystem |
| Offline work — no network access | filesystem | filesystem |
| Open-source projects — no Atlassian license | filesystem | filesystem |
| Team with Atlassian — connected to Jira and Confluence | jira | confluence or composite |
| Belt and suspenders — local backup + remote publish | jira | composite (both) |
## Filesystem Backlog (FilesystemPMAdapter)

### Storage

Each issue is a YAML file at `.raise/backlog/items/{KEY}.yaml`:

```
.raise/backlog/items/
├── E1.yaml      # Epic
├── S1.1.yaml    # Story under E1
├── S1.2.yaml    # Story under E1
├── E2.yaml      # Another epic
└── S2.1.yaml    # Story under E2
```
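Because the naming convention encodes the hierarchy, the epic/story tree can be rebuilt from filenames alone. As an illustration (not the adapter's actual code), a few lines of Python recover it from the directory listing:

```python
from pathlib import Path


def group_backlog(items_dir: Path) -> dict[str, list[str]]:
    """Group story keys under their parent epic, inferred from the
    S{epic_num}.{n} naming convention (illustrative sketch only)."""
    keys = sorted(p.stem for p in items_dir.glob("*.yaml"))
    tree: dict[str, list[str]] = {k: [] for k in keys if k.startswith("E")}
    for key in keys:
        if key.startswith("S"):
            # "S1.2" belongs to epic "E1"
            epic = "E" + key[1:].split(".", 1)[0]
            tree.setdefault(epic, []).append(key)
    return tree
```

Run against the example layout above, this yields `{"E1": ["S1.1", "S1.2"], "E2": ["S2.1"]}`.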
### Issue Schema

```yaml
key: E1
summary: "Implement user authentication"
issue_type: Epic
status: in-progress
description: "OAuth2 + PKCE flow for web and CLI clients"
labels:
  - security
  - v1.0
priority: high
assignee: alice@company.com
parent: null
created: "2026-04-01T09:00:00+00:00"
updated: "2026-04-03T14:30:00+00:00"
comments:
  - id: E1-1
    body: "Decided on PKCE over implicit flow"
    author: rai
    created: "2026-04-02T10:00:00+00:00"
links:
  - target: E2
    link_type: blocks
```
### Key Generation

Keys are auto-generated based on issue type:

- Epics: `E1`, `E2`, `E3`, ...
- Stories: `S{epic_num}.1`, `S{epic_num}.2`, ... (requires `parent_key` in metadata)
- Tasks: same as epics (`E{N}`)
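A sketch of these rules in Python (illustrative only; the adapter's real generator may differ in details):

```python
def next_key(existing, issue_type, parent_key=None):
    """Generate the next key per the rules above: E{N} for epics and
    tasks, S{epic_num}.{n} for stories under parent_key."""
    if issue_type == "Story":
        if parent_key is None:
            raise ValueError("Stories require parent_key in metadata")
        prefix = "S" + parent_key.lstrip("E") + "."
        nums = [int(k[len(prefix):]) for k in existing if k.startswith(prefix)]
        return f"{prefix}{max(nums, default=0) + 1}"
    # Epics and Tasks share the E{N} sequence
    nums = [int(k[1:]) for k in existing if k.startswith("E") and k[1:].isdigit()]
    return f"E{max(nums, default=0) + 1}"
```

For example, with `["E1", "S1.1"]` already on disk, `next_key(..., "Story", "E1")` yields `S1.2`.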
### CLI Usage

All `rai backlog` commands work transparently with the filesystem adapter:

```bash
# Create an epic
rai backlog create "Implement auth" -p LOCAL -t Epic

# Create a story under it
rai backlog create "OAuth2 flow" -p LOCAL -t Story --parent E1

# Search
rai backlog search "auth"
rai backlog search "status = in-progress"

# Transition
rai backlog transition E1 done

# Comment
rai backlog comment E1 "Completed OAuth2 implementation"

# Get details
rai backlog get E1
```
### Adapter Selection

When only one adapter is registered, it is selected automatically. When multiple adapters are available (filesystem + jira), specify which one:

```bash
rai backlog search "auth" -a filesystem   # Force filesystem
rai backlog search "auth" -a jira         # Force Jira
```
## Filesystem Documentation Target (FilesystemDocsTarget)

### What It Does

Writes documentation to local Markdown files. This is the write-only counterpart to Confluence: it saves files locally but doesn't support search or retrieval (use your editor or grep for that).
### Usage

```bash
# Publish from governance convention (governance/roadmap.md)
rai docs publish roadmap --title "Q2 Roadmap"

# Publish from any file
rai docs publish adr --title "ADR-045" --file dev/decisions/adr-045.md

# Publish from stdin
echo "# My Doc" | rai docs publish notes --title "Session Notes" --path docs/notes.md --stdin
```
### Where Files Go

The `--path` flag (or `metadata["path"]`) determines the output location. When publishing from an existing file, the path defaults to that file's location.
### Frontmatter Validation

The filesystem target validates YAML frontmatter before writing:

- Required fields: `title`, `status`
- Epic-level docs also require `epic_id`
- Story-level docs also require `story_id` and `epic_id`
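The rules above amount to a required-field check per document level. A minimal sketch (field names from this page; the target's real validation and error reporting may differ):

```python
def missing_frontmatter_fields(meta: dict, level: str = "project") -> list[str]:
    """Return which required frontmatter fields are absent for the
    given doc level (illustrative sketch, not RaiSE's code)."""
    required = {"title", "status"}          # always required
    if level == "epic":
        required |= {"epic_id"}
    elif level == "story":
        required |= {"story_id", "epic_id"}
    return sorted(required - meta.keys())
```

A story-level doc with only `title` and `status` would come back missing `epic_id` and `story_id`.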
## Composite Target (Dual-Write)

The `CompositeDocTarget` publishes to both filesystem and Confluence in a single call:

1. Filesystem first (durability guarantee: your file is always saved)
2. Confluence second (returns the remote URL)
3. If Confluence fails but filesystem succeeds, it returns success with a "sync pending" warning

This is the default behavior when both targets are registered. No configuration needed.
```bash
# This writes locally AND publishes to Confluence
rai docs publish adr --title "ADR-045"

# Output:
# Published: adr → https://yoursite.atlassian.net/wiki/spaces/SPACE/pages/12345
```

If Confluence is down:

```
# Published: adr → docs/governance/adr-045.md
# Warning: Remote publish failed (sync pending) — retry with rai docs publish
```
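The dual-write ordering can be sketched as follows. Here `write_local` and `publish_remote` are stand-ins for the real filesystem and Confluence targets; this is not RaiSE's implementation, just the failure-handling shape described above:

```python
def dual_publish(write_local, publish_remote, doc: str) -> dict:
    """Local-first dual write: the file is always saved; a remote
    failure downgrades to a 'sync pending' warning, not an error."""
    local_path = write_local(doc)        # durability: always happens first
    try:
        url = publish_remote(doc)        # may fail if Confluence is down
        return {"ok": True, "location": url}
    except Exception:
        return {"ok": True, "location": local_path, "warning": "sync pending"}
```

Note the call still reports success on remote failure; only the returned location and warning change.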
## Migrating to Jira/Confluence Later

The filesystem adapter is a starting point, not a dead end. When you're ready to connect external services:

- Follow the Configuring Jira & Confluence guide
- Your filesystem backlog stays as-is; it doesn't conflict with Jira
- Use the `-a` flag to choose which adapter to use per command

There is no automatic migration from filesystem YAML to Jira issues. The recommended approach is to create new issues in Jira and reference the filesystem keys in descriptions or comments for traceability.
## SQLite Storage
RaiSE stores all internal data — sessions, signals, patterns, pipeline runs, artifacts, and journal entries — in a single SQLite database.
### Database Location

There is one global database at `~/.rai/raise.db`, shared across all your RaiSE projects. Per-project data is isolated internally via a `project_id` column; the CLI handles this automatically, so you never need to filter by project yourself.
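The single-database, per-project-column layout looks roughly like this. The sketch below is illustrative only: the `project_id` column name comes from this page, but the table shapes are invented for the example and are far simpler than RaiSE's real schema:

```python
import sqlite3

# Miniature of the one-global-database layout: every row carries a
# project_id, and queries are scoped to the current project.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE sessions (id INTEGER PRIMARY KEY, project_id TEXT, summary TEXT)"
)
db.executemany(
    "INSERT INTO sessions (project_id, summary) VALUES (?, ?)",
    [("proj-a", "auth work"), ("proj-b", "docs work"), ("proj-a", "review")],
)
# The CLI adds this WHERE clause on your behalf:
rows = db.execute(
    "SELECT summary FROM sessions WHERE project_id = ?", ("proj-a",)
).fetchall()
```

Only `proj-a` rows come back, even though all projects share the file.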
### What's Stored

| Table | Contents |
|---|---|
| `sessions` | Session records (ID, start/end, summary) |
| `signals` | Work lifecycle events (story start/complete, phase transitions) |
| `patterns` | Knowledge graph patterns accumulated across sessions |
| `pipeline_runs` | Pipeline run state for all story/epic/bugfix runs |
| `artifacts` | Structured design, plan, and review artifacts |
| `journal_entries` | Session journal and diary entries |
| `pending_sync` | Backlog operations queued for remote sync |
### WAL Mode Files

When RaiSE is running, you may see two extra files alongside the database:

```
~/.rai/raise.db
~/.rai/raise.db-wal   ← Write-Ahead Log (active transactions)
~/.rai/raise.db-shm   ← Shared memory index
```

These are normal SQLite WAL-mode files, not corruption. They disappear when no RaiSE process has the database open. If they persist after all RaiSE processes exit, they are safely incorporated into the main database on the next open.
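You can observe this lifecycle with any SQLite database; nothing here is RaiSE-specific. A small demonstration using Python's stdlib `sqlite3`:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")      # switch to Write-Ahead Logging
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (1)")
db.commit()                                 # committed data lives in demo.db-wal
wal_present = os.path.exists(path + "-wal")  # True while a connection is open
db.close()  # a clean close checkpoints the WAL back into the main file
```

After the close, the data is fully in `demo.db` and readable by a fresh connection; SQLite performs the same checkpoint when a leftover `-wal` file is found on the next open.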
### Key Commands

Check database health:

```
RaiSE DB: /home/alice/.rai/raise.db (8,988 KB)
Schema version: 24
┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ Table                  ┃ Rows  ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ sessions               │   543 │
│ signals                │ 1,204 │
│ patterns               │ 1,713 │
│ pipeline_runs          │ 1,539 │
│ artifacts              │     7 │
└────────────────────────┴───────┘
```
Export for backup:

Exports all tables as JSONL files to a timestamped directory. Each table becomes a `.jsonl` file (e.g. `sessions.jsonl`, `signals.jsonl`). Run this before upgrading RaiSE or before a machine migration.
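The export shape (one `.jsonl` file per table, one JSON object per row) is easy to reproduce with the stdlib if you ever need a manual backup. This is a rough equivalent, not the command's actual implementation, and the real output layout may differ:

```python
import json
import sqlite3
from pathlib import Path


def export_jsonl(db_path: str, out_dir: str, tables: list[str]) -> None:
    """Dump each table to {out_dir}/{table}.jsonl, one JSON object per row."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    db = sqlite3.connect(db_path)
    db.row_factory = sqlite3.Row            # rows become name-addressable
    for table in tables:
        with (out / f"{table}.jsonl").open("w") as f:
            # table names come from a trusted list, not user input
            for row in db.execute(f"SELECT * FROM {table}"):
                f.write(json.dumps(dict(row)) + "\n")
    db.close()
```

JSONL keeps each row independent, so a partially written file is still mostly recoverable.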
Migrate from RaiSE 2.x:

If you're upgrading from 2.x (where data was stored in `.raise/rai/personal/*.jsonl` files), run the one-time migration. This imports your legacy JSONL/YAML data into SQLite. Old files are renamed `*.migrated` (not deleted). See the Migration Guide for the full upgrade procedure.
Verify schema integrity:

Verifies that `.raise/schema.sum` matches the current migration hashes. Run this after upgrading RaiSE to confirm the schema is consistent.