To Build a Long-Term AI Assistant, I Forged My Own Tools and Assembled a Coding Team for OpenClaw

Foreword: This article was co-authored with my OpenClaw. My OpenClaw is purely cloud-based, so its capabilities are still limited. During the Chinese New Year, aside from taking care of my baby (just a few months old), I spent most of my time building OpenClaw through “vibe coding.” It can now help me get many things done quickly. I’m sure much of it was reinventing the wheel and not the optimal solution, but I feel the process is worth sharing—consider it my project for the Spring Festival. All the custom tools were built by OpenClaw’s own little team; I was just the commander. :)

Many people’s use of OpenClaw / Agents stops at the “chatting, researching, and writing” stage. However, I’ve taken it a step further: my current cloud-based OpenClaw is no longer a “chatbot” but an always-online cloud adjutant. It runs scheduled tasks daily, performs health self-checks, and reliably handles very specific jobs:

  • AI Daily: Automatically fetches my watchlist and X trending topics at 06:30 every day, generates a Chinese summary, and delivers it (via email/Discord).
  • Yesterday’s Email Summary: Pulls from two IMAP mailboxes, automatically categorizes emails into “Expenses/Risks/Actions/FYI,” and sends a condensed summary.
  • Local Life/Travel: Makes real calls to Amap to check ETA, routes, weather, and POIs, and generates a mobile-friendly map preview.
  • Office Collaboration: Integrates with Outlook Mail and Calendar, reliably sending emails and creating meetings when the token is healthy.
  • Knowledge Management: Organizes key content into Markdown and syncs it to Obsidian/WebDAV.
  • Continuous Development: Uses a main Agent + coder/scoder collaborative workflow to asynchronously modify scripts and run self-checks for acceptance, without blocking the main conversation.

However, once I started treating it as a system that needs to be always on, continuously productive, recoverable, and verifiable, rather than just a “chat window,” I ran into real problems that surface-level use never exposes:

  • When conversations get long, the model quickly hits TPM / context limits, leading to slower responses, looser logic, and an increase in hallucinations.
  • While a /reset can restore its intelligence, the assistant immediately gets “amnesia”: it forgets the rules, the project progress, and the status of external services.
  • The most dangerous part comes after adding more capabilities: the Agent will confidently give an output even when scripts, tokens, or APIs are broken. You think the system is running, but it’s actually just ‘making things up.’

What I want to share in this article is how I solved these problems to build a long-term assistant, creating a runnable, recoverable, and auditable cloud AI operating system that can perform health self-checks and resume work 3 seconds after a reset. This involves physicalizing state, Reset V2, a three-piece health suite, and a closed-loop engineering process with multi-agent collaboration.

So, I’m writing this not to showcase a cool demo, but to share a reusable architectural methodology that I already have up and running in the cloud:

  • Physicalization: Grounding memory, capabilities, and task states into the file system.
  • Reset V2 (Runtime Contract): Upgrading reset from “amnesia” to a “controlled reboot.”
  • Health System (selftest / health / capabilities): To always be able to answer the question, “Can it still work reliably right now?”
  • Multi-Agent Collaboration (Main + coder + scoder): Turning coding from a chat-based promise into an engineering delivery.

1. Runtime Environment: A Purely Server-Side “Hub” Architecture (Why I Insist on the Cloud)

I chose not to run OpenClaw locally (e.g., on a Mac mini), and the core reason wasn’t performance, but the security boundary.

When you use an Agent deeply, it’s no longer just a “dialogue box for writing summaries” but a hub “connected to the real world.” It needs to hold my tokens for email, calendar, maps, news sources, and various accounts; run long-term cron jobs; read and write state files; and trigger script workflows. For me, placing this entire suite of capabilities directly on a desktop device is too risky—issues like permissions, network access, device loss, and human error become much less controllable.

That’s why I placed it in a headless cloud environment:

  • Isolation and Control: A Linux VPS with Docker to lock down permissions, dependencies, and network boundaries.
  • 24/7 Availability: Scheduled tasks don’t depend on my computer being on.
  • Engineered System: Due to the lack of desktop-level capabilities (i.e., no reliance on GUI operations), I was forced to engineer tasks into a system of “scripts + SOPs + health checks.” It is precisely these mechanisms that allowed it to evolve from a chat tool into a personal operating system.

The system’s capabilities are connected as follows:

  • Physical Deployment: Linux VPS, containerized with Docker.
  • Hub Connections: Gmail/Outlook, Zhuge Intelligence/Cloopen work email, Amap API, Yahoo Finance, Twitter (via Bird CLI).
  • Asynchronous Delivery: Even while I’m asleep, a cron job wakes gen_ai_digest.py at 6 AM to process the previous night’s information and deliver it asynchronously via Outlook or Discord.

2. Combating “Context Cancer”: Reset V2’s Background and Two Iterations (Including the Health System)

I later came to realize a fact: the problem wasn’t that “models aren’t smart enough,” but that models inherently have limitations, and I pushed OpenClaw to a depth that continuously triggered these limitations.

Taking Gemini as an example: as conversations and tasks progress, OpenClaw’s context continuously expands, eventually exceeding Gemini’s input limits; coupled with throttling like TPM (Tokens Per Minute), the system enters a “brain fog state”: it slows down, becomes erratic, hallucinations increase, and API calls even directly error out/refuse service.

Therefore, I must regularly perform a mandatory action: reset.

However, the side effect of a reset is severe: by this point, OpenClaw has accumulated a large amount of “engineering assets”:

  • It has learned a set of development specifications and collaboration mechanisms for tasks/projects.
  • I have built up a collection of script tool libraries and capability routes.
  • The engineering status, troubleshooting experience, and dependencies of in-flight projects.

A plain reset instantly wipes these out: the assistant forgets the rules, forgets the tool library, and won’t actively re-learn them. So all my subsequent mechanism designs essentially answer the same question:

Since a reset is unavoidable, how can I ensure the system can quickly recover to a “trusted operational state” after a reset, and continue to develop and iterate?

2.1 First Iteration: Pure SOP Recovery (Heavy and Unstable)

In the first iteration, I took a very intuitive approach: after a reset, I had the assistant reread SOP documents to restore its state.

I quickly discovered that it couldn’t solve the key problems:

  • SOPs are mostly Markdown, lengthy, and recovery is costly.
  • More fatally: even after rereading the SOPs, it would still forget the script tool library. It might remember “what to do,” but not “which script to call, what capability entry points exist, or how to verify.”

Therefore, the SOP-only recovery method couldn’t achieve the “seconds-to-availability after restart” that I desired.

2.2 Second Iteration: Runtime Contractual Recovery (Like Operating System Bootstrapping)

The second iteration came from a key suggestion by scoder: don’t treat recovery as “reading documents”; treat it as “system startup,” using machine-readable runtime contracts + capability indexes + health checks to bring the state back online.

Therefore, I developed the Runtime Contract and the BOOTSTRAP guided process: after a restart, three layers of definition files are loaded strictly in order.

After the second iteration, I also documented the design process of “this runtime refactoring” as a project card (e.g., openclaw-runtime-review.md). Because the runtime itself is an evolving capability line: each time a new external service is introduced, or a new health threshold or self-check item is added, a traceable alignment anchor is needed.
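The strict load order is the crux of the bootstrap: a later layer must never come up without the one beneath it. A minimal sketch, assuming only the three file names named in this section (the loader itself is my illustration, not OpenClaw’s actual code):

```python
from pathlib import Path

# The three definition layers, loaded strictly in this order on every reset.
# File names follow the article; the loader logic is an illustrative sketch.
BOOT_SEQUENCE = [
    "SOUL.md",                      # identity layer: persona and red lines
    "runtime/RUNTIME-CONTRACT.md",  # constitution layer: hard constraints
    "runtime/capabilities.yaml",    # tooling layer: capability index
]

def bootstrap(root: str = ".") -> list:
    """Load the three layers in order; abort on the first missing layer."""
    loaded = []
    for rel in BOOT_SEQUENCE:
        path = Path(root) / rel
        if not path.is_file():
            # fail fast: a partially bootstrapped agent is worse than none
            raise FileNotFoundError(f"bootstrap halted: {rel} is missing")
        loaded.append((rel, path.read_text()))
    return loaded
```

The fail-fast behavior matters: if the constitution layer is missing, the agent should refuse to come up at all rather than run without its hard constraints.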

2.3 BOOTSTRAP Three Layers: Identity / Constitution / Tooling

Layer One: Soul and Persona (Identity Layer)

SOUL.md defines the Agent’s self-perception and behavioral red lines. I deliberately wrote it to be “action-oriented”:

  • Refuse meaningless pleasantries; provide results directly.
  • Perform read-only operations (checking traffic/calendar) without asking for permission.
  • To remember something, it must be written to a file; “remembering” through conversation is prohibited.
# SOUL.md
- Name: Jobs
- Vibe: Sharp, Concise, No-nonsense.
- Core Truths:
  - Be genuinely helpful, not performatively helpful.
  - Don't ask permission for read-only actions.
  - If you want to remember something, WRITE IT TO A FILE.

Layer Two: Operating Constitution (Constitution Layer)

runtime/RUNTIME-CONTRACT.md is the system’s “constitution”: machine-readable, with hard constraints. The focus isn’t slogans but two mechanisms:

1) Mandatory Selftest. After a reset, selftest_all.sh must be run, making real connections to external services:

  • Outlook: Refresh OAuth Token.
  • IMAP: Log in to email to confirm password/permissions are valid.
  • Map API: Real route/weather requests to confirm quota and return structure are normal.

2) Health Lock. If a self-test fails, a health file is generated (e.g., runtime/health/outlook.json marked status: error), and all subsequent related operations are automatically intercepted so the Agent doesn’t blindly retry and get accounts locked.

# RUNTIME-CONTRACT.md

## Health status & selftest contract
- After a reset, selftest/selftest_all.sh MUST be run
- Before any operation, check the status in runtime/health/*.json

## Global hard rules
- Timezone: defaults to Asia/Shanghai; no ad-hoc conversions
- Dedicated implementations first: if a capability is defined, do not let a generic LLM guess at the API
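The health lock reduces to a small gate function that every capability path consults before acting. A hedged sketch (the file layout matches the article; the gate function itself is illustrative):

```python
import json
from pathlib import Path

def gate(domain: str, health_dir: str = "runtime/health") -> bool:
    """Return True only if the domain's health file exists and is not in error.

    Anything else (missing file, unreadable JSON, status == "error") blocks
    the operation, so the agent cannot blindly retry against a broken service.
    """
    path = Path(health_dir) / f"{domain}.json"
    try:
        report = json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # no fresh self-test result: treat as locked
    return report.get("status") in ("ok", "degraded")
```

With runtime/health/outlook.json marked status: error, gate("outlook") returns False and the send-mail path is intercepted before any API call happens.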

Layer Three: Capability Index (Tooling Layer)

runtime/capabilities.yaml is similar to an API gateway’s routing table: it precisely maps natural-language intents and binds SOPs and health checks as prerequisites.

# capabilities.yaml
local_maps:
  summary: "Check routes / POI / weather"
  sops: ["memory/route-planning-SOP.md"]
  entry_scripts:
    route: 'python3 scripts/route_eta_amap.py "{origin}" "{dest}" --mode drive'
    poi: 'python3 scripts/poi_search.py "{location}" "{keyword}"'

2.4 Reset V2’s “Health System” Integrated into Bootstrapping: The Capability Check Trifecta (selftest / health / capabilities)

This Runtime Contract ultimately aims to answer a very specific question:

Can this OpenClaw reliably perform its tasks right now?

Therefore, I’ve converged “availability” into a three-part capability check, and these are part of the Reset bootstrap. To avoid just discussing concepts, I’ll explain them from the perspective of “script/file division of labor”: who is responsible for self-check, who is responsible for persisting state, and who is responsible for routing natural language to the correct capability entry point.


2.4.1 selftest_all.sh: One-Click Master Self-Check Scheduling Script (Full-Machine Smoke Test Entry)

Purpose:

  • As the main entry point, it calls each domain’s selftest/<domain>.sh (serially or concurrently).
  • After running, it ensures that runtime/health/*.json are all the latest check results (refreshing the entire machine’s health report).

Core Behavior (abstract understanding is sufficient):

  • Iterates through capability domains, for example: ai_daily / email_summary / email_check / outlook / local_maps / stocks / twitter_bird / ob_webdav / coding / web_preview / cron_midday_evening_briefing ...
  • Executes for each domain:
    bash selftest/<domain>.sh
    
  • Each selftest script internally writes/updates: runtime/health/<domain>.json

So, the essence of selftest_all.sh can be understood in one sentence:

Batch refresh all capability health reports. Run a smoke test on all capabilities first thing in the morning.
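In spirit, the master script is just a loop that runs every probe and refreshes every report. A Python sketch of that loop (the real entry point is a bash script; the domain names and health-file layout follow the article, the rest is my assumption):

```python
import json
import time
from pathlib import Path

def run_all_selftests(checks: dict, health_dir: str = "runtime/health") -> dict:
    """Run every domain's check and refresh its health report.

    `checks` maps a domain name to a zero-argument callable that raises on
    failure; in the real system each entry is `bash selftest/<domain>.sh`.
    """
    Path(health_dir).mkdir(parents=True, exist_ok=True)
    summary = {}
    for domain, check in checks.items():
        try:
            detail = check() or "ok"
            status = "ok"
        except Exception as exc:  # one failed probe must not stop the sweep
            detail, status = str(exc), "error"
        report = {
            "status": status,
            "checked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "detail": detail,
        }
        # each domain gets its own runtime/health/<domain>.json report
        (Path(health_dir) / f"{domain}.json").write_text(json.dumps(report))
        summary[domain] = status
    return summary
```

Note that a failing probe is caught and recorded rather than aborting the sweep: the point of the morning smoke test is a complete health picture, not a fast exit.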


2.4.2 selftest/<domain>.sh: Health Check Script for a Single Domain (Minimal but Real Business Verification)

Each domain has one selftest script, responsible for “minimal but real” business verification: it doesn’t run hollow no-ops, but actually connects to an external service, runs a dry-run, or exercises one critical path.

Several typical examples:

selftest/ai_daily.sh

Check: whether gen_ai_digest.py --dry-run completes, confirming:

  • Watchlist JSON can be read
  • Bird CLI can return AI hotspots
  • GEMINI_API_KEY is available (LLM calls do not error)

Output: Update runtime/health/ai_daily.json (status / checked_at / detail)

selftest/email_summary.sh

Check:

  • Connect to Zhuge Intelligence + Cloopen IMAP (can dry-run by fetching only 1 email)
  • Run gen_email_summary.py --dry-run to see if it finishes normally

Output: Update runtime/health/email_summary.json, detail usually includes IMAP connection status

selftest/outlook.sh

Check:

  • Call skills/outlook/scripts/outlook-token.sh test to verify Token
  • Optional: dry-run a mail send, or run outlook-calendar.sh today to confirm the API responds normally

Output: Update runtime/health/outlook.json (Token status / last successful send or fetch)

selftest/local_maps.sh

Check:

  • Call amap_weather.py or route_eta_amap.py once
  • Confirm the AMap key exists and has not expired, and that the response structure is parsable

Output: Update runtime/health/local_maps.json

Other domains (stocks.sh / twitter_bird.sh / ob_webdav.sh / coding.sh / web_preview.sh, etc.) follow the same pattern: Use a minimal business call to verify the link, then write the health JSON.
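A single domain’s selftest can be sketched the same way, including the “partially available” case. Illustrative Python for the ai_daily line (the probes and the ok/degraded/error split are my assumptions; the real check shells out to gen_ai_digest.py --dry-run):

```python
import json
import os
import time
from pathlib import Path

def _write(domain: str, status: str, detail: str, health_dir: str) -> dict:
    """Persist one health report to runtime/health/<domain>.json."""
    Path(health_dir).mkdir(parents=True, exist_ok=True)
    report = {
        "status": status,
        "checked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "detail": detail,
    }
    (Path(health_dir) / f"{domain}.json").write_text(json.dumps(report))
    return report

def selftest_ai_daily(watchlist_path: str,
                      health_dir: str = "runtime/health") -> dict:
    """Minimal-but-real probe for the AI Daily line (illustrative).

    The watchlist JSON must parse and GEMINI_API_KEY must be set.
    Both ok -> "ok"; key missing -> "degraded"; watchlist broken -> "error".
    """
    try:
        json.loads(Path(watchlist_path).read_text())
    except (FileNotFoundError, json.JSONDecodeError) as exc:
        # without a watchlist the whole pipeline is dead
        return _write("ai_daily", "error", f"watchlist: {exc}", health_dir)
    problems = []
    if not os.environ.get("GEMINI_API_KEY"):
        problems.append("GEMINI_API_KEY missing, LLM summary unavailable")
    status = "degraded" if problems else "ok"
    return _write("ai_daily", status,
                  "; ".join(problems) or "all probes passed", health_dir)
```

The useful property is the middle state: a missing LLM key degrades the pipeline (raw feeds still work) instead of failing it outright, which maps directly onto the degraded status described below.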


2.4.3 runtime/health/*.json: Health Report for Each Capability (Traffic Light + Degrade Switch)

runtime/health/*.json is not a script, but the output file of selftest. It acts as the “authoritative state source” in the system: the main Agent, SOP, and Cron must consult it before deciding on the next action.

Suggested unified field format (illustrative):

{
  "status": "ok | degraded | error | unknown",
  "checked_at": "2026-02-25T02:15:37Z",
  "detail": "one-line summary of the latest self-test result",
  "details": { "...": "optional: finer-grained fields" }
}

Business meaning of status:

  • ok: This domain’s capabilities can be executed safely, and the output can be treated as a business-credible result.
  • degraded: Partially available; risks and limitations must be highlighted in the response.
  • error: The domain is treated as unavailable; only explain the cause and provide a fallback plan.
  • unknown: The self-check hasn’t been run or has expired; trigger selftest before executing.

Typical mapping:

  • runtime/health/ai_daily.json determines if the AI Daily pipeline is trustworthy.
  • runtime/health/email_summary.json determines if yesterday’s summary can run.
  • runtime/health/outlook.json determines whether Outlook emails may be sent and calendar entries written (the key to the health lock).
  • runtime/health/local_maps.json determines whether the real API is used for ETA/weather queries.
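The traffic-light semantics, plus the rule that an expired self-test counts as unknown, fit in one small policy function. A sketch (the 24-hour staleness threshold is my assumption, not from the article):

```python
import datetime

# Assumed threshold: a self-test older than this no longer counts.
STALE_AFTER = datetime.timedelta(hours=24)

def decide(report: dict, now: datetime.datetime) -> str:
    """Map one health report onto the next action, per the traffic-light table."""
    status = report.get("status", "unknown")
    checked = report.get("checked_at")
    if checked:
        age = now - datetime.datetime.strptime(checked, "%Y-%m-%dT%H:%M:%SZ")
        if age > STALE_AFTER:
            status = "unknown"  # expired self-test counts as not run
    return {
        "ok": "execute",
        "degraded": "execute, but surface the limitation in the reply",
        "error": "refuse; explain the cause and offer a fallback",
    }.get(status, "run selftest first, then re-check")
```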

2.4.4 runtime/capabilities.yaml: Routing Table from Intent to “Script + SOP + Health” (Capability Map)

The last part of the trifecta is not a script but a piece of configuration: runtime/capabilities.yaml. It tells the Agent:

When the user says X: which health file to check first → which selftest to run when health is unknown or stale → which script to execute when healthy → and which SOPs/project cards to use to interpret the output.

A more complete domain configuration example:

local_maps:
  summary: "Local life / maps & routes (AMap): multi-option drive/walk/transit ETA..."
  intents: ["route", "poi", "weather"]
  entry_scripts:
    route: 'python3 scripts/route_eta_amap.py "{origin}" "{dest}" --mode drive --text'
    poi:   'python3 scripts/poi_search.py "{location}" "{keyword}" --radius 3000 --text'
  health_file: "runtime/health/local_maps.json"
  selftest: "selftest/local_maps.sh"
  sops:
    - "memory/route-planning-SOP.md"
    - "memory/poi-search-SOP.md"
  hard_rules:
    - "Never answer route/ETA/POI/weather questions from experience or web_search, unless the dedicated implementation is unavailable and the reason is stated."

Field division at a glance:

  • entry_scripts.*: The actual business script (route/poi/weather…)
  • selftest: Which selftest to run when health is expired/unknown
  • health_file: Which health report to check before execution
  • sops: Binds interpretation and operational norms (refer back to project cards when necessary)
  • hard_rules: The mandatory constraint that when a dedicated implementation exists, guessing is forbidden
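Resolving one intent through these fields might look like the following sketch, with a plain dict standing in for the parsed YAML (the routing logic is my illustration, not OpenClaw’s code):

```python
def route(capabilities: dict, domain: str, intent: str, **params) -> dict:
    """Resolve one user intent into the checks and command the agent must use.

    Mirrors the capabilities.yaml fields: consult health_file first, fall back
    to selftest when health is unknown, then run the bound entry_script
    instead of letting the LLM guess.
    """
    cap = capabilities[domain]
    template = cap["entry_scripts"][intent]
    return {
        "check_health": cap["health_file"],
        "on_unknown_run": cap["selftest"],
        "command": template.format(**params),
        "interpret_with": cap.get("sops", []),
    }
```

For the local_maps entry shown above, route(caps, "local_maps", "route", origin="Home", dest="Office") fills the entry_script template with the user’s origin and destination, and hands back the health file to check and the selftest to run if that check comes up unknown.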

2.4.5 One-Sentence Summary: How the Trifecta Cooperates

The trifecta’s division of labor, condensed:

  • selftest_all.sh: Batch schedule all selftests, refresh the health status of the entire machine
  • selftest/<domain>.sh: Perform minimal but real business selftests on a single capability line, update corresponding health
  • runtime/health/*.json: Traffic light health reports for each domain, deciding whether to execute, how to degrade, whether to lock
  • runtime/capabilities.yaml: Route “user’s one sentence” to script + SOP + health + selftest, which is the “map” of the capability layer

2.5 Benefits: Back to a Trusted Running State 3 Seconds After a Reset

A reset now no longer means “amnesia,” but a controlled restart:

“Outlook module is healthy, map module is degraded (rate limited), I have 11 available capabilities.”

It doesn’t need to remember what was just discussed, because the state is in the file, not in the conversation history.


3. Solving Chronic Execution Problems: An Asynchronous Primary/Secondary Agent Collaboration Model (EDTP)

The starting point for multi-Agent collaborative programming was three pain points that exist at the same time:

a) A single primary Agent is either too expensive or only talks but doesn’t act

  • Heavyweight agent models are effective but costly, and prone to over-complex planning
  • Ordinary conversational models are cheap, but commonly “talk without acting”

b) Borrowing from GUI programming: the main thread never blocks; worker threads do the heavy work

Like a mobile app or desktop program: the main thread handles interactive response, while worker threads asynchronously handle time-consuming logic and IO. Likewise, multi-Agent collaboration frees the primary Agent to keep receiving tasks and analyzing requirements, while heavy development and long-running tasks are handed to secondary Agents to complete silently.

c) Primary Agent + coder + scoder: trade cost for scale, trade review for reliability

  • Primary Agent (Jobs) = PM/Tech Lead: decomposes tasks, writes task descriptions, performs acceptance, and does not modify code directly (primary-conversation tokens are expensive)
  • coder = primary engineer: uses cheaper coding models to modify scripts, add logs, and self-test to close the loop
  • scoder (senior) = architect/reviewer: uses stronger models for designs and reviews, improving the quality and reliability of complex projects

In practice, the primary Agent usually locates the corresponding project card before assigning a task (“this requirement belongs to the ai_daily / email_summary line, at such-and-such iteration version”) and then writes the Task Spec (goals, constraints, acceptance commands) against it. This way the coder’s changes don’t drift from the business topology, and the scoder’s review has a unified alignment baseline.

3.1 EDTP: Evidence-Driven Task Distribution Protocol

Before the primary Agent starts the coder/scoder, the Task Spec must meet four conditions:

  1. Evidence-Based Entry: Clear file paths, with key functions/branches/line numbers given when necessary
  2. Active Role: Declare the sub-Agent’s expert persona (Deep Researcher / System Architect / Senior Debugger / Protocol Specialist)
  3. Cognitive Alignment Handshake: The sub-agent restates the task and environment upon startup.
  4. Layered Evidence Loop: The coder provides commands and their output; the researcher provides a chain of facts and citations.
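The four conditions above can be checked mechanically before any sub-agent is spawned. A minimal sketch (the field names are my own shorthand for the four conditions, not a real OpenClaw schema):

```python
# My shorthand keys for the four EDTP conditions described above.
REQUIRED_FIELDS = {
    "evidence": "file paths (functions/branches/lines where needed)",
    "role": "declared expert persona for the sub-agent",
    "handshake": "sub-agent restates task and environment on startup",
    "evidence_loop": "commands + output, or fact chain + citations",
}

def validate_task_spec(spec: dict) -> list:
    """Return the EDTP conditions a Task Spec is still missing (empty = ready)."""
    return [name for name in REQUIRED_FIELDS if not spec.get(name)]
```

The primary Agent would refuse to spawn a coder/scoder session until validate_task_spec returns an empty list.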

3.2 Division of Responsibilities: Main / coder / scoder

  • Main Agent: Understands requirements, breaks down tasks, writes Task Specs, decides which sub-agent to activate, and performs acceptance testing.
  • coder: Modifies code, adjusts cron jobs, adds logs, and performs self-checks based on the Task Spec. Delivers “modified files + verification command + output”.
  • scoder: Intervenes for system-level refactoring, complex requirements, or when the coder fails after multiple attempts. Creates architectural designs and conducts reviews.

3.3 Three Task Status Levels: Statements Must Align with Facts

Coding tasks are only allowed three statuses: Not Started / In Progress / Completed. The Main Agent is prohibited from verbally stating “it’s in progress” or “it’s done” unless a corresponding session and verification record exists.
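This rule is easy to enforce in code: a status change without an attached record simply isn’t accepted. A sketch (the task/evidence shape is my assumption):

```python
ALLOWED = ("Not Started", "In Progress", "Completed")

def set_status(task: dict, status: str, evidence: str = "") -> dict:
    """Move a coding task between the three allowed statuses.

    "In Progress" and "Completed" require a session/verification record,
    so the main agent cannot verbally claim progress that never happened.
    """
    if status not in ALLOWED:
        raise ValueError(f"unknown status: {status}")
    if status != "Not Started" and not evidence:
        raise ValueError(f"'{status}' needs a session/verification record")
    task.update(status=status, evidence=evidence)
    return task
```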


4. Solidifying Personalized Capabilities: The SOP and Project Card Mechanism

4.1 SOPs: General Skills Are Insufficient for My Highly Personalized Scenarios

General-purpose skills cover generalized capabilities, but many of my tasks are highly personalized: accounts, knowledge structures, task workflows, classification criteria, output formats, fallback strategies, and more. These cannot run stably from a one-off explanation in a prompt, so I consolidate them into SOPs, letting a new model or new agent “get up to speed just by glancing at the manual.”

4.2 The Project Card Mechanism: A Recurring Need = A Long-Term Maintained Project Card

When it comes to Project Cards, I’ve condensed the design philosophy into a single sentence:

A recurring type of need = a long-term maintained Project Card, which carries the capability’s “Background → Architecture → Usage → Iteration History.”

It’s not about solving “how to call a script,” but rather three higher-level questions: Why does this capability exist? What is the system topology? How will it be continuously iterated upon and handed over in the future?


4.2.1 When is a Project Card needed?

If any of the following conditions are met, there should be a notes/projects/<name>.md:

  • The task is long-term and will be continuously iterated upon (not a one-off script).
  • It involves multiple scripts, cron jobs, or external services (e.g., IMAP + Outlook + LLM).
  • It may be handed over to another agent in the future, or I might need to resume work on it after a break.

Typical examples include:

  • ai-daily.md: The entire AI Daily pipeline.
  • email-summary.md: Yesterday’s email summary.
  • openclaw-runtime-review.md: The runtime modification project.
  • miaokong-website.md: The personal site (content + build + preview + deployment).

4.2.2 What should a Project Card contain?

I try to keep the structure of Project Cards consistent (the phrasing can vary, but the skeleton remains the same), typically including these 5 sections:

  1. Background & Goal

    • Why this capability was created and the specific pain point it solves.
    • e.g., AI Daily: To reduce the time spent manually browsing newsletters / X, consolidating everything into a single email before 06:30 each day.
  2. Architecture Topology: Describe the data flow in three layers:

    • Data Layer: RSS / Watchlist / Bird / IMAP, etc.
    • Logic Layer: Core scripts (e.g., gen_ai_digest.py, gen_email_summary.py)
    • Transport Layer: Delivery channels (e.g., send_ai_briefing.sh, Outlook API, etc.)
  3. Dependencies & Environment

    • Required environment variables / API keys.
    • Dependencies on external CLIs or services (e.g., bird, outlook-cli, tvscreener).
  4. Usage / Invocation: A “quick start” section for my future self:

    • The command to run it manually.
    • How the scheduled task is configured (in which Cron / OpenClaw cron id).
  5. Iteration Log

    • Record key changes on a timeline (V1.0 launch, V1.1 added sources, V2.0 architecture overhaul…).
    • The purpose is to make the “current state” and “how it got here” traceable.
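Pulling the five sections together, a card skeleton might look like this (the section names come from the list above; the angle-bracket placeholder text is mine):

```markdown
# notes/projects/<name>.md

## 1. Background & Goal
<the pain point this capability removes>

## 2. Architecture Topology
- Data layer: <sources, e.g. RSS / Watchlist / IMAP>
- Logic layer: <core scripts>
- Transport layer: <delivery channels>

## 3. Dependencies & Environment
- <env vars / API keys / external CLIs>

## 4. Usage / Invocation
- Manual run: <command>
- Scheduled: <cron id>

## 5. Iteration Log
- V1.0 <date>: launched
```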

4.2.3 The Role of Project Cards in the Overall System

In the system, Project Cards serve three main purposes:

  1. Business Documentation “Above the Capability”

    • capabilities.yaml only tells the agent that a capability exists and what its entry-point script is.
    • The actual business context, goals, boundaries, topology, and constraints are written in the Project Card.
  2. An Anchor for Coding Collaboration

    • When the main Agent initiates coder/scoder, it points to the project card first: “this requirement belongs to V1.2 of the AI Daily line.”
    • The coder modifies scripts based on the project card’s topology and constraints, rather than aimlessly searching the repository.
  3. Long-Term Memory / Handover Medium

    • When reviewing after a break, or switching Agents, reading the project card is enough to reveal:
      • The current status of this business line
      • Which scripts/cron jobs are the “official” ones, and which are merely historical remnants
      • How to iterate next

5. Application Scenarios: How Do These Architectures Translate into Practical Capabilities?

5.1 Email Distillation and “Dehydrated” Briefings

Cron triggers gen_email_summary.py; IMAP fetches the emails; the LLM categorizes them into “Expenses/Risks/Actions/FYI” and outputs a minimalist daily report.

5.2 Route Rendering and Web Preview

route_eta_amap.py pulls the data → amap_render.py generates static HTML → Nginx exposes the link; one tap on mobile compares routes.

5.3 Automatic Notes (Obsidian Sync)

ob_note_sync.py integrates with WebDAV, organizing conversation highlights into Markdown and synchronizing them.

5.4 All-Network Information Sentinel (Bird + Watchlist)

Bird CLI monitors X hotspots + Watchlist captures RSS, outputting AI Daily and hotspot aggregation.


Conclusion: Stability Comes from Architecture, Not Prompts

When you document business logic (SOP), asynchronize tasks (Sub-agents), and physicalize states (Files & Configs), AI is no longer an unstable black box, but a truly manageable, iterable cloud adjutant.


Appendix: Current System Capabilities Overview (Runtime Capabilities)

1) ⚡ High-Frequency Automated Tasks

  • ai_daily: AI Daily (✅ Healthy)
  • email_summary: Yesterday’s Email Summary (✅ Healthy)
  • cron_midday_evening_briefing: Mid/Evening Report Push (✅ Healthy)

2) 🛠️ Practical Toolkit

  • local_maps: Routes/POI/Weather (✅ Healthy)
  • stocks: Market Trends + Technical Analysis (✅ Healthy)
  • twitter_bird: Hotspot Search (✅ Healthy)

3) 📅 Office and Collaboration

  • email_check: Unread Email Summary
  • outlook_calendar: Schedule Management (✅ Healthy)
  • ob_webdav: Note Synchronization (✅ Healthy)

4) 💻 Development and Preview

  • coding: Scheduling coder/scoder (EDTP Acceptance)
  • web_preview: Markdown Rendering to Web Links

5) Underlying Implementation Index (Technical Mapping)

| Domain | Core Capability | Physical Implementation (Script/Tool) | Corresponding SOP |
| --- | --- | --- | --- |
| Local Maps | Route planning, ETA, POI, weather | route_eta_amap.py, poi_search.py | route-planning-SOP.md |
| Email Summary | Yesterday’s summary generation and sending | gen_email_summary.py, send_email_summary.sh | email-summary.md |
| AI Daily | Watchlist + Twitter daily | gen_ai_digest.py, bird CLI | ai-daily.md |
| Outlook | Calendar management, email sending | outlook-calendar.sh, outlook-mail.sh | outlook-SOP.md |
| Obsidian | WebDAV writing | ob_note_sync.py | miaok-约定.md |
| Coding | Asynchronous coding and architecture review | sessions_spawn | coding-SOP.md |
| Web Preview | Rendered publishing preview | web_preview_publish.py, Nginx | 基础设置-总览.md |
| Twitter | Trending search & aggregation | bird CLI | twitter-bird-SOP.md |
| Stocks | Market conditions & technical analysis | yf, tvscreener | miaok-炒股-SOP.md |