A pubsub reader/publisher for RSS/Atom/JSON/Meshtastic/etc. that works over HTTP/HTTPS, IPFS/IPNS, and Tor, and includes composable, functional logic “bots”, including LLMs.
Read first:
PubSub is a C++17 application with a high-performance TUI that:
* Organizes subscriptions/ by subscription-type plugin (for example, rss/, meshtastic/) and stores per-source metadata in index.md front matter
* Caches each source's payloads (latest.<ext> and cache/YYYY-MM-DD-HH-mm-ss.<ext>)

A central priority of PubSub is to provide a high-speed, high-bandwidth, low-latency interface that rewards mastery. Inspired by the numeric command systems of Star Trek: Klingon Academy and the zero-latency navigation of the Sidekick phone, the TUI is designed for rapid execution via muscle memory.
Users can chain numeric inputs (for example, pressing 2-1 to open Sub -> Subscriptions, select Refresh All, and close the menu) without waiting for UI animations. The general principle is that lower numbers map to the more common tasks at each level. Not all numbers need be assigned, so that future updates and plugins can add new commands without disrupting existing muscle memory.
Pressing 2 alone would open the full-screen Subscriptions view, while 2-1 should queue a refresh-all task and open the Refresh All modal. These kinds of modals should offer the option to wait and watch them display status as they execute their task, or press Escape to dismiss them. (They can still be re-opened from the Tasks menu.)

Other core behaviours:
* Sends prompts to a LAN Ollama endpoint (/api/generate) and streams results
* Maintains each subscription's latest payload and synchronizes the SQLite database with filesystem state
* Ships LLM prompt templates under prompts/ (for example, taxonomy generation)
* Stores all feed data under subscriptions/, namespaced per subscription-type plugin (subscriptions/rss, subscriptions/meshtastic). Transport helpers (HTTPS, Tor, IPFS, etc.) never create their own directories—they extend the owning subscription-type plugin.
* A directory under subscriptions/rss/ becomes a source when its index.md front matter defines all required keys (source_name, source_uri, source_slug, file_extension). Directories missing the file or any required field remain navigable categories.
* Meshtastic data lives under subscriptions/meshtastic/; the plugin writes its own metadata schema to index.md files automatically.
* Each source has a cache/ folder housing latest.<ext> and timestamped payloads (cache/YYYY-MM-DD-HH-mm-ss.<ext>); the name cache is reserved and skipped when scanning for sources.
* The Meshtastic plugin populates subscriptions/meshtastic/ with per-channel child subscriptions and manages the necessary index.md metadata automatically
* Outgoing messages are queued in outbox/ directories and sent by the destination plugin
* Each run writes a timestamped log file (log.YYYY-MM-DD-HH-mm-ss-ffffff.txt)
* --test runs the app with temporary directories and a temp configuration database

These instructions are minimal and may need adjustment depending on your platform toolchain. See the CLI-First C++ Development Style Guide (Notcurses) section in project-specification.md for details.
Build with make. Example workflow:
1) Ensure Notcurses development files are available
2) Build with your Makefile or standard g++ command
3) Run the executable; consult logs/ for the newest per-run log file
4) Browse prompts/ if you want to drive LLM-based automation (e.g., generating RSS taxonomy metadata)
Supported Windows flow: MSYS2 MinGW-w64 (GNU toolchain, Makefile-driven)
MSYS2 provides a POSIX-like environment and MinGW-w64 compilers. Your Windows drives are mounted under /c, /d, etc. For example, C:\Users\CJ\Documents\GitHub\pubsub is /c/Users/CJ/Documents/GitHub/pubsub inside the MSYS2 shell.
Recommended shell: “MSYS2 MinGW 64-bit” (MSYSTEM=MINGW64)
Install toolchain (first time only):
pacman -S --needed mingw-w64-x86_64-toolchain mingw-w64-x86_64-notcurses make
Build and test:
cd /c/Users/CJ/Documents/GitHub/pubsub
make clean
make -j2
# Optional helpers added to the Makefile
make run # runs --help, --version, --test, --init-default-subscriptions
make check # smoke tests; fails on mismatch
Artifacts and bundling:
The build produces bin/pubsub.exe. The Makefile copies the required MinGW runtime DLLs (libstdc++-6.dll, libgcc_s_seh-1.dll, libwinpthread-1.dll) into bin/, so you can double-click bin/pubsub.exe from Explorer or run it from PowerShell without modifying PATH.
Run CLI tests from PowerShell (after building):
powershell -ExecutionPolicy Bypass -File .\run_cli_tests.ps1
The bundle-dlls step (included in the default all target) copies required MinGW runtime DLLs to bin/, which allows double-click running outside the MSYS2 shell. If you see a missing libstdc++-6.dll error, run the provided helper to copy the DLLs:
powershell -ExecutionPolicy Bypass -File .\ensure_mingw_runtime.ps1
Ensure notcurses.h is available in your include path (usually handled by pkg-config), and use --test to verify the build.
Note: This is the single, consolidated source of truth for the project. It includes the technical specification, roadmap, social context, and relevant style guides. Where other standalone files are mentioned historically, their content now lives here.
This project aims to build a modern, cross‑platform RSS reader and publisher written in C++17 with a high-performance TUI built on Notcurses. The application will allow users to subscribe to feeds from a wide variety of protocols (HTTP/HTTPS, IPFS/IPNS, Tor, ActivityPub, AT protocol, etc.) and content formats (RSS/XML, Atom, JSON), browse and filter posts, and publish content or comments back to supported protocols. It replaces typical heavy frameworks with a minimal, CLI‑first design emphasising portability and reproducibility.
All feed data lives under the subscriptions/ directory. Each immediate child directory corresponds to a subscription-type plugin namespace (for example, rss, atom, meshtastic). Transport helpers (e.g., HTTPS, Tor, IPFS) do not own directories; they extend the subscription-type plugins and leave layout unchanged. Within each namespace, the owning plugin controls the rules for the subdirectories it manages. For the RSS/Atom plugin, a subdirectory only becomes a source when it contains an index.md whose YAML front matter includes every required key: source_name, source_uri, source_slug, and file_extension. Directories that lack the file or omit any required field remain navigable categories within that plugin even if they include other content or auxiliary index.md files. Directories titled “cache” are reserved for cached payloads and are not treated as categories or subscriptions.
LLM prompt templates live under prompts/; each .md file documents a structured workflow (for example, building a taxonomy from RSS feeds) that the application can provide to an LLM when automating filesystem updates or metadata generation.
Each source is described by an index.md file using YAML front matter. The front matter must be at the top of the file and enclosed within triple‑dashed lines. Each subscription-type plugin defines its own schema: for the RSS/Atom plugin the required fields are source_name, source_uri, source_slug, and file_extension, while Meshtastic and other plugins may use different keys or auto-generated metadata. Transport helpers inherit the schema from their parent subscription-type plugin. Additional optional keys such as display_name, feed_url, website_url, logo, cadence, or plugin-specific metadata may be present.
Prompt files live under prompts/. Prompts must explain the task, required outputs, directory conventions, and metadata fields so that generated responses can be applied deterministically by the app or agents. Source metadata remains defined in each directory's index.md file.
Feed fetching and parsing
External binaries (for example, Tor and IPFS) are bundled under assets/bin/ and extracted at runtime as needed.
* Support multiple content formats: RSS/XML, Atom and JSON. A pluggable source plugin architecture will handle protocol‑specific fetching and content parsing. Subscription-type plugins (such as RSS/Atom or Meshtastic) own the directory namespaces and may compose one or more transport helpers (HTTPS, Tor, IPFS, etc.) to reach remote feeds without altering the filesystem layout. Plugins must implement methods to declare supported protocols and URIs, fetch raw content and parse posts into a common structure.
* Store the most recent payload (latest.<ext>) and a timestamped snapshot in a cache/ subdirectory for each subscription. Respect the configured fetch cadence (default 24 hours).
Publishing and logic
User interface
* Drive the menu system with chained numeric input (for example, 2-1). Ensure zero-latency transitions between menu states to support rapid, muscle-memory based navigation using 10-key or D-pad inputs.
* Mirror the directory hierarchy under subscriptions/ in the UI.
Data storage and configuration
* Persist configuration and state in an SQLite database stored at data/pubsub_config.db and accessed via a DatabaseManager class.
Logging and test mode
* Provide a Logger class that writes plain‑text logs to a root‑level log.txt and a per‑run file under logs/. Logs must capture inputs, outputs and contextual metadata in deterministic order.
* Provide a --test flag in the CLI to enable simulation mode, which runs the same code paths using temporary directories and does not modify real data. This helps reproduce behaviour and support automated testing.
Platform support
* Target Linux and Windows. Use g++ or clang++ for compilation and static linking where possible. Support cross‑compilation for Windows using mingw‑w64.
Language and standards: Use C++17. Adhere to the Vibe coding style guide (one operation per line, small focused functions, descriptive naming).
TUI libraries: Use Notcurses for all rendering and input handling. Notcurses allows for advanced terminal graphics (images, video) and high-performance rendering. Avoid legacy libraries like ncurses or slang. The input loop must support numeric chaining (capturing rapid sequences of digits) to drive the menu system.
Dependencies: Include libraries for HTTP requests (e.g., libcurl), YAML parsing (for index.md front matter), JSON parsing, SQLite, and plugin management. Bundle any external binaries (Tor, IPFS) under assets/bin/ and extract at runtime.
Build system: Provide a Makefile that compiles sources in src/, links against required libraries, and outputs executables into bin/. Support optional static builds and cross‑compile targets.
Define three plugin interfaces: SourcePlugin, DestinationPlugin and LogicPlugin. Plugins must declare supported protocols or content types and handle the fetch/parse/publish lifecycle. Within SourcePlugin, distinguish subscription-type plugins (which own filesystem namespaces under subscriptions/) from transport helpers (which extend those plugins without adding new directories). Provide a PluginFactory for registration and selection.
Provide an OllamaLogicPlugin for LAN LLM integration. It must expose a configurable API endpoint (default http://ollama.local:11434/api/generate), accept a post and prompt template, support streaming, and fail gracefully if unreachable.
Provide a MeshtasticSourcePlugin and MeshtasticDestinationPlugin to treat Meshtastic as both a source and a destination. The source connects via serial, maps channels to subscriptions, and yields posts; the destination watches per‑channel outbox folders and sends messages via the connected node.
Post structure
Logic blocks
Logic blocks are modular, composable units that accept inputs and produce outputs within the right-most (view) column. Each block transforms data, applies filters, or generates metadata (including via LLMs). Users can chain blocks so the output of one becomes the input to the next. A collapsible menu at the top of the view column contains full-width buttons that govern query, transformation, and visualization.
The view column’s dataset derives from the left-tree selection. Selecting a folder or subscription recursively collects all child posts through a database query, enabling filters and re-queries by subsequent blocks.
The first menu button opens tools to:
Filters operate at the database level against posts and subscriptions tables; transformations can also create derived attributes.
Below Filters is a dynamic list of configured views for the current path. Default is a Data Table. Additional views: Charts, Media Playlist, Photo Gallery, Rendered Markdown. Each view can include subqueries, transformations, and LLM prompts that produce synthesized outputs for display.
Integrate a networked LLM via an Ollama server reachable on the local network (default port 11434). Logic blocks can send prompts such as “summarise”, “analyze bias”, or “generate counter-argument” to /api/generate and stream back results. Store block outputs in the configuration database for persistence and later review. Provide default prompts and allow customization per block.
Treat a connected Meshtastic node as first‑class source/destination:
* Source: create a directory under subscriptions/ named after the device ID; mirror discovered channels as child subscriptions with an auto-managed index.md containing Meshtastic-specific metadata and a cache/ for received messages.
* Destination: maintain an outbox/ per channel; the destination plugin monitors and sends queued messages via sendText/sendData.
Implement a background Fetcher that evaluates cadence per subscription and updates when due:
* Compute next_due = last_fetched + cadence; fetch when now >= next_due.
* Write each fetched payload to cache/YYYY-MM-DD-HH-mm-ss.<ext> and atomically update latest.<ext>.
* The RSS/Atom plugin validates index.md to include source_name, source_uri, source_slug, and file_extension; other plugins (e.g., Meshtastic) apply their own validation rules or generate metadata automatically. Transport helpers never create directories—they attach to the owning subscription-type plugin and share its namespace.
* Parse index.md YAML, invoke the plugin-specific validation, and upsert into SQLite.
* In --test mode, operate on temporary directories.
The SQLite schema includes tables:
* subscriptions – mirror of YAML front matter and paths; indexed by protocol and path
* fetches – per‑attempt history with status, metadata, and cache paths
* latest_payloads – one row per subscription’s current latest payload (metadata and optional raw)
* posts – parsed items normalized into a query‑friendly structure, de‑duped by content hash
Indexes and upserts are required for performance and consistency. The latest cache timestamp serves as canonical fetched_at.
Extend --test to simulate Ollama responses and Meshtastic packets. Produce identical logging and DB write patterns to real runs but isolate all filesystem and database paths to temporary locations. Provide CLI commands/flags to dump Fetcher status, DB counts, and plugin registrations for LLM‑friendly inspection.
Performance: The application must remain responsive when rendering thousands of posts. Use background threads for network I/O and parsing, and avoid blocking the UI thread.
Input Latency: The TUI must process input and update the application state with zero perceptible latency. Animations must never block input; users must be able to “type ahead” of the interface.
Portability: Build and run on Linux and Windows without third‑party package managers. All dependencies must be vendored or installable via standard package managers.
Reliability: Handle network failures gracefully; log errors and retry fetches according to the configured cadence. Ensure that plugin failures do not crash the application.
Security and privacy: Respect user privacy; do not send data to third parties unless configured. Provide sensible defaults for Tor/IPFS connectivity and document any external connections.
Deliverables:
* src/ with clear module separation.
* Makefile for building and packaging the application.
* README.md describing installation, usage and contributing guidelines, and linking prominently to this document.
* project-specification.md (this consolidated document including the specification, roadmap, social context, and style guides).
The High Desert Institute (HDI) is a 501(c)(3) non-profit organisation dedicated to “building a foundation for the survival of humanity”. HDI focuses on off-grid land projects, libraries and guilds that provide free, open-source solutions to basic human needs such as housing and infrastructure. A team of community-building experts is fundraising to create outposts in the high deserts of the southwest to research, develop and share these solutions. Key HDI programmes include:
Access to trustworthy, up-to-date information is critical for communities operating off-grid or on unreliable networks. Traditional RSS readers focus only on the mainstream internet. Our C++17 RSS reader and publisher brings together multiple decentralised protocols—HTTP/HTTPS, IPFS/IPNS, Tor, ActivityPub and AT protocol—into one application. By supporting both reading and publishing, it allows communities to:
The RSS reader/publisher complements and extends HDI’s existing projects:
By distributing news and knowledge across decentralised networks, this project reduces dependence on centralised platforms and fosters resilience. It respects user autonomy—subscriptions are stored locally, publishing is optional, and logs are transparent. Support for Tor and IPFS empowers activists and communities facing censorship to share critical information. LAN-local LLMs via Ollama preserve privacy while enabling advanced analytics offline. Treating Meshtastic as a source/destination extends reach across the Cyberpony Express mesh. Integration with HDI’s education and communication initiatives ensures that research and experience from off-grid outposts can be shared widely, strengthening mutual aid networks.
This RSS reader and publisher is more than a convenience; it is infrastructure for a resilient future. By bridging decentralised protocols and feeding into the Librarian, Cyberpony Express, guilds and the Library, it helps realise HDI’s vision of a world where communities control their own information flow and build free, open-source solutions for survival. Its offline-friendly design ensures that no matter the circumstances, knowledge can be shared, stored and acted upon.
This style guide outlines best practices for command‑line interface (CLI) development tools. Its goal is to make every action that your tool performs visible from the CLI so that both humans and large language models (LLMs) can understand and debug the workflow. The guidance in this document is language‑agnostic and can be applied to any project that exposes a command‑line surface, even if the project also ships a graphical user interface.
To support LLMs and developers in understanding your tool’s behaviour, every CLI‑based project must create a root‑level log directory (for example, logs/). Each run of the application must write to a brand‑new log file inside that directory named with the timestamp of when the run started using a sortable pattern such as log.YYYY-MM-DD-HH-mm-ss-ffffff.txt. Every per-run log file must contain:
Logs should be written in a plain‑text, append‑only format. The goal is to give agents complete visibility into what happened during execution, so do not reuse log files across runs—always start a fresh, timestamped file when the process begins. If multiple processes are spawned (for example, launching Tor or IPFS as sub‑processes), their stdout/stderr should also be captured and appended to that run’s log file.
LLMs often need to verify that your tool behaves correctly without modifying real data. Provide a dedicated test mode (for example, via a --test or -test flag) that:
When running in test mode, the tool should exercise enough code paths to make automated testing effective. This allows LLMs to verify functionality without incurring the overhead of running integration tests during every normal startup.
Every repository that contains CLI‑based tools must include a comprehensive README.md file. The README must link prominently to this consolidated project-specification.md and, where helpful, to specific sections within it (e.g., CLI logging, the C++ development guide, and the roadmap). At a minimum, the README.md should:
* Explain the logging convention (the logs/ directory and the timestamped files it contains) and how to run the project in normal and test modes.
Agents and developers should read project-specification.md for detailed guidance. The README.md provides quick orientation and build/run steps and links back here.
The project specification, social context, style guide, and roadmap are stored in project-specification.md which should always be included in any prompt context. The roadmap must be kept up to date as the project evolves. It should be a living document that reflects the current status of all tasks and features.
The roadmap must include:
[ ] - Not started
[?] - In progress / Testing / Development
[x] - Completed and tested
[-] - Blocked / Deferred
The specification must include:
The social context document must include:
All three documents must be kept current whenever project details change, features are added, or development status updates.
The following guidelines apply across languages:
* Use log levels: INFO for normal operations, WARN for recoverable issues, and ERROR for serious problems. Do not hide exceptions—log stack traces at the ERROR level.
LLMs cannot see your GUI or internal state; they rely entirely on textual output. To make your tool LLM‑friendly:
Document a typical usage scenario for your tool. For example:
# run the tool normally and capture logs
./mytool --input data.txt --output results.txt
# inspect the newest log file
ls logs/
cat "$(ls -t logs/log.* | head -n 1)"
# run in test mode
./mytool --test
# run a command to dump current status
./mytool --status
This document outlines a lightweight, CLI-first approach to writing, building, and distributing C++ software using Notcurses for small, fast, cross-platform TUIs. It focuses on transparent builds, minimal dependencies, and direct control over compilation and linking—without large build systems or IDE lock-in.
Most modern C++ projects use heavyweight environments like Visual Studio, Xcode, or large meta-build systems (CMake, Meson, Premake). These tools can:
A CLI-first C++ style instead:
* Uses plain compiler invocations (g++, clang++, x86_64-w64-mingw32-g++).
This guide standardizes on Notcurses as the preferred TUI library for small cross-platform applications.
To build small, fast, portable TUI utilities — such as RSS readers, Ollama frontends, and monitoring dashboards — that compile in seconds and distribute as single executables.
A typical CLI-first C++ + Notcurses project:
project/
src/
main.cpp
tui.cpp
tui.h
assets/
icons/
fonts/
build/ # intermediate object files (ignored by VCS)
bin/ # final executables
Makefile
src/ → C++ source and headers
assets/ → static resources embedded or loaded at runtime
build/ → compiler outputs (.o, .obj)
bin/ → packaged executables
Use g++ (Linux/macOS) or x86_64-w64-mingw32-g++ (Windows cross-compile).
# Compile and link directly
g++ -std=c++17 -O2 -Wall \
src/*.cpp \
-o bin/myapp \
$(pkg-config --cflags --libs notcurses)
This produces a single executable — no installers, no runtime dependencies, no DLL sprawl.
APP = myapp
SRC = $(wildcard src/*.cpp)
OBJ = $(SRC:src/%.cpp=build/%.o)
CXX = g++
CXXFLAGS = -std=c++17 -O2 -Wall $(shell pkg-config --cflags notcurses)
LDFLAGS = $(shell pkg-config --libs notcurses)
all: bin/$(APP)
build/%.o: src/%.cpp
mkdir -p build
$(CXX) $(CXXFLAGS) -c $< -o $@
bin/$(APP): $(OBJ)
mkdir -p bin
$(CXX) $(OBJ) -o $@ $(LDFLAGS)
clean:
rm -rf build bin
Build with make or mingw32-make (on Windows).
To avoid runtime library issues and ensure portability:
Linux static build:
g++ -static -static-libgcc -static-libstdc++ -O2 \
src/*.cpp -o bin/myapp \
$(pkg-config --cflags --libs notcurses)
Windows cross-compile (from Linux):
x86_64-w64-mingw32-g++ -O2 -std=c++17 \
src/*.cpp -o bin/myapp.exe \
$(x86_64-w64-mingw32-pkg-config --cflags --libs notcurses)
Now you can ship bin/myapp or bin/myapp.exe as a single binary.
#include <notcurses/notcurses.h>
#include <iostream>
int main() {
struct notcurses_options opts = {0};
opts.flags = NCOPTION_NO_ALTERNATE_SCREEN; // Optional: keep history
struct notcurses* nc = notcurses_init(&opts, NULL);
if (!nc) return -1;
struct ncplane* stdplane = notcurses_stdplane(nc);
ncplane_putstr(stdplane, "Hello from PubSub TUI!");
notcurses_render(nc);
// Input loop
uint32_t id;
struct ncinput ni;
while ((id = notcurses_get_blocking(nc, &ni)) != (uint32_t)-1) { // replaces notcurses_getc_blocking in Notcurses 3.x
if (id == 'q') break;
// Handle other input...
}
notcurses_stop(nc);
return 0;
}
* Everything renders to planes (ncplane). Planes can be moved, resized, and Z-ordered independently.
* The standard plane (notcurses_stdplane) is the background. Create child planes for your widgets and content.
* Use ncplane_move_top() and ncplane_move_bottom() to manage visibility.
Notcurses provides a modern, high-performance alternative with multimedia capabilities.
Run make to:
* compile each source file into build/
* link the final executable into bin/ (bin/myapp or myapp.exe)
By following this CLI-first C++ style, you’ll produce self-contained, portable TUI tools that compile in seconds, run anywhere, and depend on nothing but your own source code. Notcurses provides just enough TUI capability to make tools usable without sacrificing performance or simplicity—perfect for small, independent, and reproducible software projects.
CLI-first C++ tools should adopt the same verbose logging rules described in the general CLI development guide. At minimum:
* Create a root-level logs/ directory (or similarly named folder). Each time the application starts, create a new plain-text log file inside that directory named with the start timestamp (for example, log.YYYY-MM-DD-HH-mm-ss-ffffff.txt) and capture all user inputs, outputs, and contextual metadata within that file.
* Capture the stdout and stderr streams of any spawned sub-processes and append them to the same per-run log file.
Implement a --test, --dry-run, or similar command‑line flag that runs the application in simulation mode. In test mode:
This project uses a single consolidated document: project-specification.md (this file). It contains:
Keeping these sections together improves discoverability for humans and agents and ensures there is one authoritative source of truth.
Sometimes a CLI-first C++ tool needs to interface with external binaries—such as Tor for anonymous networking or IPFS for decentralized storage. To bundle and manage these tools reliably:
* Vendor the binaries in an assets/bin/ directory within your repository.
* Launch them via std::system or a process library (e.g., Boost.Process) and communicate via sockets, pipes, or command-line interfaces.
* Store their configuration files under assets/etc/ and reference them when launching the external tools.
By vendoring external binaries you avoid relying on system installations and maintain control over versions and configurations.
Two kinds of integrations will be used.
Use llama.cpp (libllama) to run prompts against a small local model:
llama.cpp in C++
Add it as a submodule and link the library:
git submodule add https://github.com/ggerganov/llama.cpp external/llama.cpp
git -C external/llama.cpp submodule update --init --recursive
CMakeLists.txt
add_subdirectory(external/llama.cpp)
add_executable(myapp src/main.cpp)
target_link_libraries(myapp PRIVATE llama)
src/main.cpp (ultra-short example)
#include "llama.h"
#include <algorithm>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>
// Note: the llama.cpp C API evolves quickly; the calls below match one
// snapshot of the library and may need renaming for your checkout.
// llama_batch_clear/llama_batch_add are helpers from llama.cpp's common code.
int main() {
    llama_backend_init();
    llama_model_params mp = llama_model_default_params();
    llama_context_params cp = llama_context_default_params();
    // e.g., 4096 ctx tokens
    cp.n_ctx = 4096;
    // Load a small GGUF (quantized) model
    llama_model *model = llama_load_model_from_file("models/Qwen3-4B-Instruct-2507-Q8_0.gguf", mp);
    if (!model) return 1;
    llama_context *ctx = llama_new_context_with_model(model, cp);
    // Load prompt from prompts/ or hardcode for testing
    const char *prompt = "You are a helpful assistant. Q: 2+2? A:";
    // tokenize
    std::vector<llama_token> toks(512);
    int n = llama_tokenize(model, prompt, (int)std::strlen(prompt), toks.data(), (int)toks.size(), true, false);
    toks.resize(n);
    // evaluate prompt; request logits for the final token only
    llama_batch batch = llama_batch_init(512, 0, 1);
    llama_batch_clear(batch);
    for (int i = 0; i < (int)toks.size(); ++i) {
        llama_batch_add(batch, toks[i], i, {0}, i == (int)toks.size() - 1);
    }
    llama_decode(ctx, batch);
    // sample a few tokens greedily (argmax over the last token's logits)
    std::string out;
    int n_past = (int)toks.size();
    const int n_vocab = llama_n_vocab(model);
    float *logits = llama_get_logits_ith(ctx, batch.n_tokens - 1);
    for (int t = 0; t < 64; ++t) {
        llama_token cur = (llama_token)(std::max_element(logits, logits + n_vocab) - logits);
        if (cur == llama_token_eos(model)) break;
        char piece[64];
        // token-to-text; exact signature varies slightly across versions
        int len = llama_token_to_piece(model, cur, piece, (int)sizeof(piece), false);
        if (len > 0) out.append(piece, len);
        llama_batch_clear(batch);
        llama_batch_add(batch, cur, n_past++, {0}, true);
        llama_decode(ctx, batch);
        logits = llama_get_logits_ith(ctx, 0); // single-token batch: logits at index 0
    }
    std::cout << out << "\n";
    llama_batch_free(batch);
    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
This C++ guide is part of a larger style guide collection. For general logging, interpretability, and agent‑integration rules that apply to all CLI‑based projects, see the “CLI Logging and Interpretability Style Guide” section in this document.
All persistent features must converge on shared SQLite schemas so the GUI, CLI, and automation prompts operate on consistent structures. The subscriptions/ filesystem remains the source of truth. Every time the application fetches, creates, renames, or deletes content under subscriptions/, the database must be re-synchronised to mirror that state (including removals). Table names, required columns, and intended indexes are defined below; implementations must use these definitions verbatim.
Task queue (tasks)
| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Unique task identifier |
| task_type | TEXT NOT NULL | fetcher or llm |
| status | TEXT NOT NULL | queued, running, paused, succeeded, failed |
| payload | TEXT NOT NULL | JSON blob describing work to perform |
| result | TEXT | JSON blob summarising outcome or error |
| last_updated_at | DATETIME NOT NULL | Updated whenever task state changes |
| created_at | DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP | Queue time |
| started_at | DATETIME | Optional start timestamp |
| completed_at | DATETIME | Optional finish timestamp |
Indexes: (task_type, status) to drive dashboards; (last_updated_at) for pruning.
LLM prompt runs (llm_prompts)
| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Unique prompt run |
| prompt_text | TEXT NOT NULL | Final rendered prompt |
| model_name | TEXT NOT NULL | Selected model identifier |
| destination_type | TEXT NOT NULL | local or api |
| destination_uri | TEXT | API endpoint or local socket path |
| queued_at | DATETIME NOT NULL | When request entered queue |
| started_at | DATETIME | Execution start |
| completed_at | DATETIME | Execution end |
| tokens_evaluated_per_sec | REAL | Collected from inference metrics |
| tokens_generated_per_sec | REAL | LLM throughput |
| metadata | TEXT | JSON containing additional stats (temperatures, lengths, etc.) |
| task_id | INTEGER REFERENCES tasks(id) ON DELETE SET NULL | Link back to task queue |
Subscription sources (subscription_sources)
| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Surrogate key |
| subscription_path | TEXT NOT NULL UNIQUE | Filesystem path (e.g., subscriptions/rss/tech/hackernews) |
| category_path | TEXT NOT NULL | Category path including parent directories |
| source_name | TEXT NOT NULL | From YAML front matter |
| source_uri | TEXT NOT NULL | Canonical feed URI |
| source_slug | TEXT NOT NULL | Stable slug |
| file_extension | TEXT NOT NULL | Expected cache extension |
| plugin_type | TEXT NOT NULL | Owning subscription plugin |
| cadence_minutes | INTEGER | Optional cadence override |
| metadata | TEXT | JSON for plugin specific fields |
| yaml_last_hash | TEXT | Hash of last parsed front matter |
| updated_at | DATETIME NOT NULL | Last schema load |
| created_at | DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP | Initial discovery |
Posts (posts)
| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Surrogate key |
| subscription_id | INTEGER NOT NULL REFERENCES subscription_sources(id) ON DELETE CASCADE | Foreign key |
| title | TEXT NOT NULL | Normalised title |
| source_slug | TEXT NOT NULL | Cached for fast joins |
| link | TEXT | Canonical link |
| guid | TEXT | Feed GUID/hash |
| summary | TEXT | Short description |
| content | TEXT | Full content |
| post_image | TEXT | Resolved hero image path/URL |
| source_logo_image | TEXT | Optional logo |
| audio_file | TEXT | Media URL/path |
| video_file | TEXT | Media URL/path |
| pubdate | DATETIME | Publication timestamp |
| last_updated_at | DATETIME NOT NULL | When record last changed |
| fetched_at | DATETIME NOT NULL | Source fetch timestamp |
| content_hash | TEXT NOT NULL UNIQUE | Deduplication key |
Indexes: (subscription_id, pubdate DESC), (content_hash).
Meshtastic messages (meshtastic_messages)
| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Surrogate key |
| local_device_id | TEXT NOT NULL | Our node identifier |
| channel_id | TEXT NOT NULL | Channel or DM identifier |
| source_device_id | TEXT NOT NULL | Message origin |
| message_id | TEXT NOT NULL | Stable message UID |
| content | TEXT | Text payload |
| reply_to | TEXT | Optional parent message id |
| raw_payload | BLOB | Binary body if present |
| data_type | TEXT | Text, data, telemetry, etc. |
| received_at | DATETIME NOT NULL | When message arrived |
| updated_at | DATETIME NOT NULL | Last update (edits/acks) |
| metadata | TEXT | JSON for extra fields |
Unique index (local_device_id, channel_id, message_id).
Meshtastic nodes (meshtastic_nodes)

| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Surrogate key |
| local_device_id | TEXT NOT NULL | Our device observing |
| remote_device_id | TEXT NOT NULL | Device identifier seen |
| short_name | TEXT | Broadcast short name |
| long_name | TEXT | Broadcast long name |
| first_seen_at | DATETIME NOT NULL | First observation |
| last_seen_at | DATETIME NOT NULL | Most recent observation |
| metadata | TEXT | JSON for capabilities/firmware |

Unique index: (local_device_id, remote_device_id).
Meshtastic sightings (meshtastic_sightings)

| Column | Type | Notes |
|---|---|---|
| id | INTEGER PRIMARY KEY AUTOINCREMENT | Surrogate key |
| local_device_id | TEXT NOT NULL | Reporting node |
| remote_device_id | TEXT NOT NULL | Observed node |
| recorded_at | DATETIME NOT NULL | Timestamp of sighting |
| rssi | REAL | Signal strength |
| reported_latitude | REAL | Optional latitude |
| reported_longitude | REAL | Optional longitude |
| reported_altitude | REAL | Optional altitude (meters) |
| telemetry_payload | TEXT | JSON of telemetry/health data |

Index: (local_device_id, recorded_at DESC) to power history views.
Implementations must update this section whenever schema changes occur and add migration steps to the roadmap.
This section specifies a filesystem‑first task system, modeled after the subscriptions/ pattern. Markdown files with YAML front matter are the single source of truth for both Fetcher and LLM tasks. The database mirrors these files for querying, dashboards, and automation. The goal is to make tasks human‑writable and agent‑friendly while preserving transactional integrity and auditability.
Task types: fetcher and llm. Database tables: tasks (all tasks) and llm_prompts (for LLM tasks). Root directory: tasks/

Namespaces are type-owned, with suggested date bucketing for hygiene:

- tasks/fetcher/YYYY/MM/<uid>/
- tasks/llm/YYYY/MM/<uid>/

Per-task contents:

- index.md — YAML front matter holds the authoritative metadata; the Markdown body is the human description.
- artifacts/ — optional output files (e.g., summaries, cached payload snippets, JSON reports). Store large results here and link to them from YAML result fields.

This mirrors the subscriptions/ arrangement, where the owning type/namespace governs rules and the front matter is canonical.
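Task discovery over this layout can be sketched with std::filesystem. find_task_files is a hypothetical helper name; real discovery would also parse front matter and skip malformed folders:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical helper: collect every index.md under the tasks/ tree.
// A faithful version would also parse and validate the YAML front matter.
std::vector<std::string> find_task_files(const fs::path& tasks_root) {
    std::vector<std::string> files;
    if (!fs::exists(tasks_root)) return files;
    for (const auto& entry : fs::recursive_directory_iterator(tasks_root)) {
        if (entry.is_regular_file() && entry.path().filename() == "index.md")
            files.push_back(entry.path().string());
    }
    return files;
}
```

Artifacts under artifacts/ are intentionally not collected here; the sync step lists them separately into the JSON payload.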
Common required keys:

- task_type: fetcher | llm
- uid: stable string identifier (matching the folder name is recommended)
- title: short title
- status: queued | running | paused | succeeded | failed
- created_at: ISO8601 timestamp
- updated_at: ISO8601 timestamp
- completed_at: ISO8601 timestamp or null
- payload: object — type-specific inputs/parameters
- result: object or null — type-specific outputs/summary
- metadata: object — optional free-form tags/labels

Fetcher recommended fields:

- payload.subscription_path: string (e.g., subscriptions/rss/tech/hackernews)
- payload.cadence_minutes: integer (override)
- payload.reason: enum (manual, schedule, retry, fs-change, …)
- payload.transport: enum (https, tor, ipfs, …)
- result.cache_path: string (last written cache file)
- result.status_detail: string (HTTP status or parser detail)
- result.items_parsed: integer
- result.error: string (on failure)

LLM required/recommended fields:

- prompt_text: the final rendered prompt
- model_name: selected model identifier
- destination_type: local | api
- destination_uri: API endpoint or local socket path
- payload.params: object (e.g., temperature, max_tokens)
- payload.context_ref: object linking context (e.g., post_id, source_slug)
- result.output_file: path under artifacts/ for large text output
- result.tokens_evaluated_per_sec: number
- result.tokens_generated_per_sec: number
- result.metrics: object (extra stats)
- result.error: string (on failure)

The Markdown body of index.md contains a human description and can be copied into the DB payload as description for convenience.
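Putting the common and fetcher fields together, a fetcher task's index.md might look like this (all values illustrative):

```yaml
---
task_type: fetcher
uid: 2025-01-15-hackernews-refresh
title: Refresh Hacker News feed
status: queued
created_at: 2025-01-15T10:30:00Z
updated_at: 2025-01-15T10:30:00Z
completed_at: null
payload:
  subscription_path: subscriptions/rss/tech/hackernews
  reason: manual
  transport: https
result: null
metadata: {}
---

Manually requested refresh after a feed outage.
```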
Tables used (see Core Data Schemas): tasks and llm_prompts.
tasks mapping (applies to both task types):

- task_type <= YAML.task_type
- status <= YAML.status
- payload (JSON) includes at minimum:
  - file_path: absolute path to index.md
  - uid: YAML.uid
  - yaml_hash: SHA-256 hash of normalized front matter
  - description: Markdown body (optional/truncated)
  - user_payload: entire YAML.payload object
  - artifacts: optional list of discovered files under artifacts/
- result (JSON) <= YAML.result (+ optional small computed extras)
- created_at <= YAML.created_at
- started_at <= derived when status first transitions to running (if absent in YAML)
- completed_at <= YAML.completed_at
- last_updated_at <= YAML.updated_at

Identity & upsert:

- Identity is (task_type, uid); do not rely on the DB id across syncs.
- Lookup: task_type = ? AND json_extract(payload, '$.uid') = ?.

llm_prompts mapping (only for llm tasks):
- prompt_text, model_name, destination_type, destination_uri from YAML
- queued_at <= YAML.created_at
- started_at <= first running timestamp
- completed_at <= YAML.completed_at
- tokens_evaluated_per_sec, tokens_generated_per_sec <= result
- metadata (JSON): merge payload.params, payload.context_ref, artifacts, file_path, uid
- task_id: FK to the owning tasks row

Deletion mirroring: when a task file (index.md) is removed, delete the tasks row and clear or delete the linked llm_prompts row accordingly.

Contract:

- The filesystem (the tasks/ tree) is the source of truth.
- tasks and llm_prompts reflect the filesystem exactly (including removals).

Algorithm:

- Scan tasks/**/index.md on startup and on demand.
- Parse front matter and compute yaml_hash; optionally list artifacts/.
- Upsert into tasks by (task_type, uid); update when yaml_hash differs or file mtime increases.
- For llm tasks, upsert into llm_prompts linked by task_id.
- Remove any row whose payload.file_path no longer exists.

Conflict resolution:
When the app changes a task (e.g., status → running, adding result):

- Update the YAML fields first (updated_at, possibly completed_at).
- Write index.tmp next to index.md; flush; atomically rename to index.md.
- Then mirror the change into the DB inside a transaction.

UI (Tasks dashboard): filter by task_type (Fetcher/LLM); columns: Title, Status, Created, Updated, Completed, Result. Actions: open index.md in an editor; reveal the task folder; create a new task (scaffold folder + YAML); retry; pause. Open result.output_file when present.

CLI:

- --tasks-sync — rescan the filesystem and mirror into the DB (verbose summary of insert/update/skip/delete)
- --tasks-list [fetcher|llm|all] — list tasks with key fields
- --tasks-show <uid> — dump the YAML and the mirrored DB row(s)
- --tasks-new <type> --title <t> --uid <u> [--...params] — scaffold a directory and starter YAML
- --tasks-update <uid> --status <s> [--result-file <path>] — atomic YAML update, then DB mirror

Test mode (--test):
- Redirect tasks/ to a temporary root (e.g., data/test/tasks/).
- Use deterministic UIDs and store large results in artifacts/.

Edge cases:

- Large outputs: store under artifacts/ and reference via result.output_file.
- Partial writes: always write via index.tmp + atomic rename to index.md; clean up stray .tmp files on startup.
- Change detection: yaml_hash is authoritative; mtime is a fast path.
- Orphaned llm_prompts rows: ensure cleanup on task deletion.
- Logging: record every operation against tasks and llm_prompts consistently.

This application primarily uses the local LLM as a deterministic formatter/renderer, not as an autonomous agent. A “prompt program” supplies exact instructions and an expected output contract. The model receives only the required input text (e.g., the head of a podcast RSS feed or scraped metadata) appended to a reusable prompt from prompts/, and must return only the requested structured output (typically a YAML front matter block). The application validates the output against a schema, writes it to the filesystem if valid, or feeds back a concise correction prompt for another attempt.
Key properties:
- Filesystem-first: validated output is written under subscriptions/ or tasks/, and the sync mirrors it into SQLite.

Inputs:
- A reusable prompt program (stored in prompts/, e.g., prompts/taxonomy_from_feeds.md) that describes the required output contract.
- The input snippet to append (e.g., a feed.xml head, HTTP response headers, a scraped page head).

Outputs:

- A validated YAML front matter block; on success the app writes index.md with this front matter and an optional body stub.

Example expected output envelope (one of):
```yaml
---
source_name: Example Podcast
source_uri: https://example.org/podcast.xml
source_slug: example-podcast
file_extension: xml
display_name: Example Podcast
cadence: daily
---
```
The app will ignore any text before/after the first valid YAML block and fail the render if no valid block is found.
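That extraction step could be sketched as follows. extract_front_matter is a hypothetical helper; the real renderer would additionally validate the extracted YAML against the plugin schema:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Sketch: return the body of the first front-matter block delimited by
// "---" lines, ignoring any text before the opening delimiter.
// (Hypothetical helper; schema validation happens in a later step.)
std::optional<std::string> extract_front_matter(const std::string& text) {
    auto line_start = [&](std::size_t pos) {
        return pos == 0 || text[pos - 1] == '\n';
    };
    std::size_t open = text.find("---");
    while (open != std::string::npos && !line_start(open))
        open = text.find("---", open + 1);        // skip dashes mid-line
    if (open == std::string::npos) return std::nullopt;
    std::size_t body = text.find('\n', open);
    if (body == std::string::npos) return std::nullopt;
    ++body;                                        // first char after "---\n"
    std::size_t close = text.find("\n---", body - 1);
    if (close == std::string::npos) return std::nullopt;
    return text.substr(body, close - body);
}
```

If the helper returns nullopt, the render fails and the correction loop re-prompts the model.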
Implementation notes:

- Default model: Qwen3-4B-Instruct-2507-Q8_0.gguf.
- Validation enforces the required front-matter keys (source_name, source_uri, source_slug, file_extension).
- On success, the app writes the index.md front matter and creates cache/ as needed under subscriptions/<plugin>/<categories>/<slug>/.
- The renderer may also populate tasks/<type>/.../index.md when appropriate (e.g., generated from a higher-level operation), but typically the app scaffolds tasks directly and uses the LLM for metadata generation only.
- Each render is recorded as an llm task (queued → running → succeeded/failed); llm_prompts stores the final prompt, timings, and minimal metrics.
- CLI: --llm-render --prompt prompts/taxonomy_from_feeds.md --in feed_head.xml --out subscriptions/.../index.md --dry-run

A stretch goal is to eventually enable the local LLM (default: Qwen3-4B-Instruct Q8 via libllama/llama.cpp) to create and manage resources (subscriptions, tasks, taxonomy) using the same primitives available to users. Design all operations so they are callable by an LLM acting as an optional “pair-programmer” that can explain changes, propose actions, and execute tools when authorized.
Define tools in a central Tool Registry with:
- A stable name (e.g., create_subscription_from_url)
- A typed input signature (e.g., (plugin, source_uri, target_path))

Each tool is implemented once in C++ and exposed through the CLI:

--tools-invoke <name> --input-json <...>

All invocations produce structured logs and, in --test, run against a sandboxed root and temp DB.
To support a broad set of models, we provide multiple prompting/IO adapters:
- JSON Tool Call: the model returns a single JSON object (preferably in a fenced code block) naming the tool and its input.
- ReAct-style: the model emits Action: <tool>\nInput: <json>; we execute and return Observation: <result> back in a loop until Final Answer.

The model profile selects the adapter and minor formatting details. The default Qwen3-4B-Instruct profile uses JSON Tool Call, with fenced code blocks preferred for reliability.
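The ReAct-style adapter's parse step might be sketched like this. parse_action is a hypothetical helper; a real adapter would also handle fenced blocks and the Final Answer terminator:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Sketch: pull the tool name and JSON input out of a model reply of the form
//   Action: <tool>\nInput: <json>
// Returns nullopt when either field is missing. (Hypothetical helper.)
std::optional<std::pair<std::string, std::string>>
parse_action(const std::string& reply) {
    auto grab = [&](const std::string& key) -> std::optional<std::string> {
        std::size_t pos = reply.find(key);
        if (pos == std::string::npos) return std::nullopt;
        std::size_t start = pos + key.size();
        std::size_t end = reply.find('\n', start);
        if (end == std::string::npos) end = reply.size();
        while (start < end && reply[start] == ' ') ++start;  // trim spaces
        return reply.substr(start, end - start);
    };
    auto tool = grab("Action:");
    auto input = grab("Input:");
    if (!tool || !input) return std::nullopt;
    return std::make_pair(*tool, *input);
}
```

On success, the adapter would dispatch the pair through the Tool Registry and feed the result back as an Observation line.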
Tool: create_subscription_from_url

- Inputs: url, category_path (optional), plugin (default rss), file_extension (default xml), slug (optional)
- Behaviour: create subscriptions/<plugin>/<category>/<slug>/, write index.md with the required front matter, create cache/, and return subscription_path and a created flag.
- Outputs: subscription_path, partial metadata

Tool: task creation

- Inputs: task_type (fetcher | llm), title, payload, plus LLM fields for llm tasks
- Behaviour: scaffold tasks/<type>/YYYY/MM/<uid>/index.md; optionally create artifacts/.
- Outputs: uid, status, optional result or result_file

All write tools support dry_run for simulation and report the planned diffs and resolved paths without making changes.
Operational notes:

- --test: temp roots for subscriptions/ and tasks/, temp DB, simulated network if needed.
- prompts/ should include concise tool descriptions and examples to bootstrap models.
- Default model: Qwen3-4B-Instruct-2507-Q8_0.gguf via libllama.
- Acceptance: a tool invocation writes index.md, and the GUI reflects the change.
- --test mode yields identical logs and DB writes (to temp paths) without touching real data.

Before doing anything else, read this consolidated document: project-specification.md. Then review the README.md for installation and quick start instructions.
This roadmap outlines the sequential tasks and milestones for developing the C++17 RSS reader/publisher. It follows the style guide’s recommendation to keep commits atomic and tasks well-scoped. Use the status legend below to track progress.
| Symbol | Meaning |
|---|---|
| [ ] | Not started |
| [?] | In progress / Testing / Development |
| [x] | Completed |
| [-] | Blocked / deferred |
| Status | Task |
|---|---|
| [ ] | Create Git repository and initialise with .gitignore |
| [ ] | Set up directory structure (src/, assets/, data/, subscriptions/, build/, bin/) |
| [ ] | Write initial Makefile for building with Notcurses, SQLite and other dependencies |
| [ ] | Implement basic Logger class writing to root-level log.txt and per-run logs |
| [ ] | Implement CLI argument parsing in main.cpp (support --help, --version, --init-default-subscriptions, --test) |
| [ ] | Create project-specification.md (this consolidated document) with initial content |
| Status | Task |
|---|---|
| [ ] | Initialise Notcurses; create the main application planes including the Base Menu anchored at the bottom. |
| [ ] | Implement the Numeric Input System to handle chained commands (e.g., 2-1) and map them to menu actions with zero latency. |
| [ ] | Implement the Home View (Menu Item 1) as the default view. |
| [ ] | Implement the Pub View (Menu Item 2) with input fields for title and content; buttons are present but disabled/no-op |
| [ ] | Implement the Sub View (Menu Item 3) with its three-column layout (Subscriptions, Posts, View). |
| [ ] | Render a placeholder directory tree in the Subscriptions column. |
| [ ] | Render a placeholder list of posts in the Posts column. |
| [ ] | Render a placeholder post detail view in the View column. |
| [ ] | Add a status bar just above the Base Menu to display context and feedback. |
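The Numeric Input System above amounts to a chord-to-action map. A minimal sketch, with illustrative class and action names rather than the actual implementation:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Sketch: chained digits like "2-1" resolve directly to a bound action,
// with no intermediate UI latency. Names are illustrative.
class CommandDispatcher {
public:
    void bind(const std::string& chord, std::function<void()> action) {
        bindings_[chord] = std::move(action);
    }
    // Returns true if the chord resolved to a bound action.
    bool dispatch(const std::string& chord) {
        auto it = bindings_.find(chord);
        if (it == bindings_.end()) return false;  // unassigned: free for plugins
        it->second();
        return true;
    }
private:
    std::map<std::string, std::function<void()>> bindings_;
};
```

Unassigned chords simply return false, which preserves the rule that unused numbers stay free for future plugins without disturbing existing muscle memory.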
| Status | Task |
|---|---|
| [ ] | Add a Settings View section dedicated to database configuration, surfacing current SQLite path, size, and connection status |
| [ ] | Embed a DB Explorer view with resizable panes for table list, SQL editor, and asynchronous results viewer |
| [ ] | Introduce a Tasks View (Menu Item 4) with fetcher/LLM sub-views to stage workflow automation features |
| [ ] | Extend Settings with general guidance plus LLM local/API configuration (endpoint, auth, prompt editor, model picker) |
| [ ] | Tasks system (disk-first): define tasks/ filesystem with fetcher/ and llm/ namespaces; each task folder contains index.md with YAML front matter as source of truth and optional artifacts/ for large outputs |
| [ ] | Tasks YAML schema: common fields (task_type, uid, title, status, created_at, updated_at, completed_at, payload, result, metadata); LLM-specific fields (prompt_text, model_name, destination_type, destination_uri) |
| [ ] | Implement TaskManager with FS↔DB sync that upserts into tasks and, for LLM tasks, mirrors into llm_prompts; include file_path, uid, and yaml_hash embedded in JSON payload |
| [ ] | Change detection: full scan on startup and on-demand; detect updates via mtime + YAML hash; mirror deletions (remove DB rows when task folders are removed); per-file transactional upserts |
| [ ] | App writes & atomicity: update tasks by writing YAML first (index.tmp + atomic rename to index.md), then mirror into DB inside a transaction; log every operation (insert/update/skip/delete) |
| [ ] | UI wiring: extend Tasks dashboard to filter by task_type (Fetcher/LLM), display Title/Status/Created/Updated/Completed/Result, open the underlying index.md, reveal folder, and create/edit tasks |
| [ ] | CLI: add --tasks-sync, --tasks-list [type], --tasks-show <uid>, --tasks-new <type> ..., --tasks-update <uid> ... to drive the tasks system without GUI |
| [ ] | Test mode & edge cases: in --test, redirect tasks/ into a temp root, use deterministic UIDs, store large results in artifacts/, handle duplicate UIDs, and recover from partial writes |
| [ ] | Implement the core schemas described above (tasks, llm_prompts, subscription_sources, posts, meshtastic_messages, meshtastic_nodes, meshtastic_sightings) with migrations and unit tests |
| [ ] | Wire the tasks dashboard to a persisted Fetcher queue, update statuses in real time, and connect LLM tasks to logic pipelines |
| [ ] | Expand the database explorer to support transactional edits, guardrails (row limits, timeout), and structured logging for every statement |
| [ ] | Persist database settings from the UI with path validation, error reporting, and live reconnection |
| [ ] | Persist LLM profiles (local/API) with hydrated prompt/model selectors and integrate selections with upcoming Ollama/llama.cpp plugins |
| [ ] | Capture UI interactions (task actions, SQL execution, settings changes) in the Logger and expose equivalent CLI commands for query runner and task status output |
| [ ] | Document task queue, database workflow, and LLM configuration flows as prompts/docs so agents can automate them |
| [ ] | Ensure all database management actions invoke DatabaseManager APIs with transactional safety and verbose logging |
| [ ] | Document database maintenance workflows and tie them to prompts (e.g., add or update prompts/ entries for automated assistance) |
| Status | Task |
|---|---|
| [ ] | Implement deterministic LLM renderer that builds prompt from prompts/ + input snippet, runs libllama (Qwen3-4B-Q8) with low temperature, and captures output |
| [ ] | YAML extraction & validation: extract the first YAML/front-matter block; validate against per-plugin schema with clear, actionable errors |
| [ ] | Correction loop: on validation failure, re-prompt with concise error list and previous output; retry up to N times (e.g., 2–3) |
| [ ] | Filesystem write & sync: on success, write index.md front matter to the correct path (app chooses path), then trigger FS→DB sync |
| [ ] | UI: add “Render from prompt” flow that previews base prompt, input snippet, and YAML result; user can accept to save or request correction |
| [ ] | CLI: add --llm-render --prompt <path> --in <file> [--out <index.md>] [--dry-run] with exit codes for success/validation-failure/error |
| [ ] | Logging: record prompt hash, input hash, output hash, validation errors, retries, timings, and final write target in per-run log |
| [ ] | Tests/fixtures: add sample podcast feed head and golden YAML; verify pipeline yields valid YAML within retry budget under --test |
| [ ] | Prompt updates: ensure prompts/taxonomy_from_feeds.md explicitly requires YAML-only output, lists required keys, and includes a minimal valid example |
| [ ] | LLM tasks integration: record each render as an llm task with status transitions (queued→running→succeeded/failed); upsert into tasks and llm_prompts with file_path, uid, yaml_hash, and link any artifact/output files |
| [ ] | Tasks dashboard wiring: display render tasks with model, destination, input snippet hash, validation status; provide actions to open result, reveal artifacts/, and retry correction |
| [ ] | CLI task helpers: --tasks-new llm --title <t> --prompt <p> --in <file> [--uid <u>] scaffolds a render task; --tasks-update <uid> --status <s> [--result-file <path>] transitions state; --tasks-sync ingests outputs into DB |
| [ ] | Error handling policy: after N validation failures mark task failed, attach concise error summary in result, and persist the last invalid YAML into artifacts/invalid.yaml |
| [ ] | Test mode parity: ensure llm render tasks in --test run against temp roots/DB, write artifacts there, and produce identical logs and state transitions |
| Status | Task |
|---|---|
| [ ] | Implement Subscriptions manager: create subscriptions/ if missing, ensure each subscription-type plugin subfolder (e.g., rss/, meshtastic/) exists, walk directory trees, and load YAML front matter |
| [ ] | Automatically create a taxonomy.txt file within the subscriptions directory that recursively lists all category paths starting with “subscriptions/” (only non-source, non-cache directories). Regenerate it on every startup and whenever a change would affect the contents of the list. |
| [ ] | Design and implement DatabaseManager using SQLite to store preferences and UI state; migrate temporary UI state persistence from file to SQLite |
| [ ] | Create a theme table in the database to store user-selected colours and styles |
| [ ] | Implement a ThemeManager to save, load, and apply user-selected themes from the database. Include resizing windows, window position, etc. |
| [ ] | Build a filesystem synchroniser that diff-scans subscriptions/ on startup and after every change, reconciling additions, renames, and deletions into subscription_sources and cascading removals into dependent tables |
| [ ] | When fetchers write cache files or parsed posts, update the posts table to mirror the filesystem (including removing pruned cache entries) |
| [ ] | Write helper functions to parse and validate index.md front matter, applying plugin-specific schemas (e.g., enforce source_name, source_uri, source_slug, file_extension for RSS/Atom) using a YAML library |
| [ ] | Wire the left tree in the TUI to real subscription data; selection updates the centre and right planes (still showing placeholders until Fetcher lands) |
| [ ] | Implement functions to create, rename and delete categories and subscriptions through the TUI (menus and dialogs) |
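The taxonomy.txt generation above could be sketched as a recursive walk that skips cache/ subtrees and source directories. list_category_paths is a hypothetical helper; a faithful version would also check that index.md actually defines all required front-matter keys before treating a directory as a source:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Sketch: collect category directories under subscriptions/, skipping the
// reserved cache/ name (and its subtree) and any directory holding index.md
// (treated here as a source; real code validates its required keys too).
std::vector<std::string> list_category_paths(const fs::path& root) {
    std::vector<std::string> categories;
    if (!fs::exists(root)) return categories;
    for (auto it = fs::recursive_directory_iterator(root);
         it != fs::recursive_directory_iterator(); ++it) {
        if (!it->is_directory()) continue;
        if (it->path().filename() == "cache") {   // reserved: skip whole subtree
            it.disable_recursion_pending();
            continue;
        }
        if (fs::exists(it->path() / "index.md")) continue;  // source, not category
        categories.push_back(it->path().generic_string());
    }
    return categories;
}
```

Writing the collected paths, one per line and prefixed with "subscriptions/", would produce the taxonomy.txt described in the task.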
| Status | Task |
|---|---|
| [ ] | Integrate libcurl (or equivalent) for HTTP/HTTPS requests |
| [ ] | Implement asynchronous fetching on background threads; avoid blocking the TUI |
| [ ] | Parse RSS and Atom payloads into Post structures |
| [ ] | Populate the post list in the centre column with parsed items and update the detail view on selection |
| [ ] | Implement background Fetcher tick (1–5 min) to evaluate cadence per subscription |
| [ ] | On success, write cache/YYYY-MM-DD-HH-mm-ss.<ext> and atomically update latest.<ext> |
| [ ] | Maintain SQLite tables: subscriptions, fetches, latest_payloads, posts with indexes and upsert semantics |
| [ ] | Rescan filesystem changes to keep DB in sync with subscriptions/ and YAML front matter |
| [ ] | Implement per-subscription locking, retries, and backoff with robust logging |
| [ ] | Add a simple in-TUI activity indicator and last-fetch status per subscription (non-blocking notifications area or status bar) |
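The timestamped cache write and atomic latest.<ext> swap might be sketched as follows (hypothetical helpers; error handling and per-subscription locking elided):

```cpp
#include <cassert>
#include <cstdio>
#include <ctime>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Sketch: build the cache/YYYY-MM-DD-HH-mm-ss.<ext> filename in UTC.
std::string cache_filename(std::time_t when, const std::string& ext) {
    std::tm tm_utc{};
#if defined(_WIN32)
    gmtime_s(&tm_utc, &when);
#else
    gmtime_r(&when, &tm_utc);
#endif
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%04d-%02d-%02d-%02d-%02d-%02d",
                  tm_utc.tm_year + 1900, tm_utc.tm_mon + 1, tm_utc.tm_mday,
                  tm_utc.tm_hour, tm_utc.tm_min, tm_utc.tm_sec);
    return std::string(buf) + "." + ext;
}

// Sketch: refresh latest.<ext> by writing a temp copy and renaming over it,
// so readers never observe a half-written payload (rename is atomic on POSIX).
void update_latest(const fs::path& cache_dir, const std::string& ext,
                   const fs::path& new_payload) {
    fs::path tmp = cache_dir / ("latest." + ext + ".tmp");
    fs::copy_file(new_payload, tmp, fs::copy_options::overwrite_existing);
    fs::rename(tmp, cache_dir / ("latest." + ext));
}
```

The same tmp-then-rename pattern is reused by the tasks system for index.md updates.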
| Status | Task |
|---|---|
| [ ] | Define abstract interfaces for SourcePlugin, DestinationPlugin and LogicPlugin |
| [ ] | Implement PluginFactory for registering and retrieving plugins by protocol and content type |
| [ ] | Implement initial RssHttpsPlugin supporting HTTP/HTTPS, parsing RSS and Atom feeds; expose enable/disable in the TUI |
| [ ] | Implement a simple logic plugin (e.g., keyword filter) and surface it in a TUI debug panel for per-subscription tests |
| [ ] | Implement OllamaLogicPlugin for LAN LLM calls with streaming support and prompt templates (TUI toggle only for now) |
| [ ] | Implement a stub destination plugin that logs published content for testing; add a TUI “Test publish” button |
| Status | Task |
|---|---|
| [ ] | Implement right-most column as a logic pipeline area with expandable menu |
| [ ] | Add “Filters & Transformations” tool to define SQL-based filters and transformations against SQLite tables |
| [ ] | Implement dynamic Views list: Data Table (default), Charts, Media Playlist, Photo Gallery, Rendered Markdown |
| [ ] | Enable chaining logic blocks so one block’s output becomes the next block’s input |
| [ ] | Persist view configurations and logic block results in the configuration database |
| Status | Task |
|---|---|
| [ ] | Integrate IPFS/IPNS via external binary; extract and launch the daemon on demand |
| [ ] | Integrate Tor via bundled binary and configure SOCKS proxy for HTTP requests |
| [ ] | Handle JSON Feed parsing (in addition to RSS/Atom) |
| Status | Task |
|---|---|
| [ ] | Implement TUI components for the Pub View (Menu Item 2): fields for title, content, media attachments, and protocol selectors |
| [ ] | Implement DestinationPlugin for ActivityPub publishing (e.g., send posts to a configured server) |
| [ ] | Implement destination plugin for AT protocol (e.g., BlueSky) |
| [ ] | Add ability to schedule posts and display publishing status to the user |
| [ ] | Persist drafts and published posts for future editing |
| Status | Task |
|---|---|
| [ ] | Implement logic plugins: keyword filter, tagger, summariser using local or remote LLMs |
| [ ] | Provide TUI to enable/disable logic plugins per subscription and configure parameters |
| [ ] | Implement test mode path: run logic plugins and publishing in simulation without side effects |
| [ ] | Integrate Ollama-backed summarisation, bias analysis, and counter-argument logic blocks |
| Status | Task |
|---|---|
| [ ] | Detect Meshtastic node via USB (auto or user-provided path) and create node root under subscriptions/ |
| [ ] | Mirror channels as child subscriptions with plugin-managed index.md metadata and a cache/ for received messages |
| [ ] | Implement MeshtasticSourcePlugin to subscribe to text/data and convert to Post objects |
| [ ] | Implement MeshtasticDestinationPlugin to watch per-channel outbox/ and send via sendText / sendData |
| [ ] | Expose channels in sub tree; enable channel selection in pub destinations with queue status |
| Status | Task |
|---|---|
| [ ] | Implement a comprehensive Settings View (Menu Item 0) to configure fetch cadence defaults, TUI themes, account credentials, etc. |
| [ ] | Allow users to change global colours; save preferences in the database and apply across the TUI |
| [ ] | Provide a log viewer accessible from the TUI that reads log.txt and run logs |
| Status | Task |
|---|---|
| [ ] | Finalise static builds for Linux and Windows; ensure external binaries (Tor, IPFS) are packaged and extracted at runtime |
| [ ] | Write scripts or Make targets for cross-compiling with mingw-w64 |
| [ ] | Write installation instructions and sample configuration in the README |
| [ ] | Tag a beta release and solicit user feedback |
Evergreen requirement: treat these tasks as part of every change, not a one-time endgame pass. Whenever code, specs, or assets change, rerun the checklist below so agents always keep documentation and polish current.
| Status | Task |
|---|---|
| [ ] | Update README.md with usage examples, screenshots and troubleshooting tips |
| [ ] | Maintain this project-specification.md throughout development |
| [ ] | Add inline code comments and docstrings; ensure all functions are documented |
| [ ] | Review logs and refine error handling and user messages |
| [ ] | Final release: package binaries, update documentation and create release notes |
| Status | Task |
|---|---|
| [ ] | Extend --test mode to simulate Ollama API responses and Meshtastic packets without external dependencies |
| [ ] | Ensure simulation uses temporary directories and produces the same logs and DB writes (to temp DB) as real runs |
| [ ] | Add CLI flags to dump current Fetcher status, DB counts, and plugin registrations for LLM-friendly inspection |
Deliverables:

- src/ with clear module separation.
- Makefile for building and packaging the application.
- README.md describing installation, usage, and contributing guidelines, and linking prominently to this document.
- project-specification.md (this consolidated document including the specification, roadmap, social context, and style guides).