AI & Developer Skills
Sharenote is built for the era of AI-assisted development. We provide a suite of “Skills” that allow your LLM (Large Language Model) to understand the strict mathematical constraints and Nostr event schemas required to build on the protocol.
Installing the Sharenote Skill
If you are using a compatible AI agent or CLI (like Gemini CLI), you can add Sharenote protocol expertise with a single command:
npx skills add soprinter/skills
Once installed, your AI assistant will be able to:
- Perform Core Math: Calculate Z-bits, continuous difficulty, and perform valid linear aggregation.
- Construct Events: Build and validate Nostr Kinds 35500-35510 with perfect tag precision.
- Audit Payouts: Verify proportional pool shares from public hash records.
- Reference SDKs: Use the exact functions from `go-sharenote`, `sharenotejs`, and `sharenotelib`.
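The payout audit in particular reduces to simple arithmetic once difficulties are converted out of log space: each share contributes linear work proportional to 2^difficulty. A minimal sketch, assuming a hypothetical record shape (not the SDKs' actual types):

```typescript
// Illustrative audit of proportional pool payouts. The record shape and
// function name are assumptions for this sketch, not the SDKs' real API.
interface ShareRecord {
  pubkey: string;      // miner's public key
  difficulty: number;  // share difficulty in Z-bits (log2 of linear work)
}

// Weight each share by its linear work, 2^difficulty, then normalize so
// the payout fractions across the whole pool sum to 1.
function proportionalShares(records: ShareRecord[]): Map<string, number> {
  const work = records.map((r) => 2 ** r.difficulty);
  const total = work.reduce((a, b) => a + b, 0);
  const out = new Map<string, number>();
  records.forEach((r, i) => {
    out.set(r.pubkey, (out.get(r.pubkey) ?? 0) + work[i] / total);
  });
  return out;
}
```

Because the weighting is exponential in difficulty, summing the raw Z-bit numbers instead of their linear work would badly misprice shares.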
Why Use Skills?
Proof-of-work difficulty exists on a logarithmic scale. Standard LLMs often struggle with the nuances of log-2 addition and floating-point precision in a decentralized context.
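To see why this is error-prone, note that combining two proofs of work is not `a + b` but `log2(2^a + 2^b)`. A numerically stable sketch of that "log-2 addition" (function names are ours, not the protocol's API):

```typescript
// "Log-2 addition": log2(2^a + 2^b), computed without ever forming the
// huge linear values, by factoring out the larger exponent.
function log2Add(a: number, b: number): number {
  const hi = Math.max(a, b);
  const lo = Math.min(a, b);
  if (lo === -Infinity) return hi; // adding zero work
  return hi + Math.log2(1 + 2 ** (lo - hi));
}

// Fold many share difficulties into one aggregate difficulty.
function aggregateDifficulty(difficulties: number[]): number {
  return difficulties.reduce(log2Add, -Infinity);
}
```

Computing `2 ** a + 2 ** b` directly overflows to `Infinity` for realistic difficulties; keeping the larger exponent factored out is what preserves floating-point precision, and it is exactly the kind of detail an unassisted LLM tends to get wrong.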
The Sharenote Skill provides:
- Mathematical Grounding: Prevents “hallucinations” in difficulty calculations.
- Schema Enforcement: Ensures every Nostr event you produce is valid and interoperable.
- Idiomatic SDK Usage: Guides your agent to use our optimized libraries instead of rewriting complex logic.
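For reference, a Sharenote event is an ordinary NIP-01 Nostr event whose kind lies in the 35500–35510 range. The sketch below shows the generic NIP-01 event shape and a kind-range check; the Sharenote-specific tag semantics are exactly what the skill encodes, so they are omitted here:

```typescript
// Standard NIP-01 Nostr event shape (this part is public Nostr convention).
interface NostrEvent {
  id: string;         // sha256 of the serialized event
  pubkey: string;     // author's public key, hex
  created_at: number; // unix timestamp, seconds
  kind: number;
  tags: string[][];
  content: string;
  sig: string;        // schnorr signature over the id
}

// Sharenote reserves kinds 35500-35510, per the documentation above.
function isSharenoteKind(kind: number): boolean {
  return kind >= 35500 && kind <= 35510;
}
```

Validating tag contents (the part that requires "perfect tag precision") is where the skill's schema enforcement does the real work.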
Vibe Coding with Sharenote
“Vibe Coding” refers to building robust apps by describing your intent to an AI agent rather than manually writing every line of cryptographic code. Once the skill is installed, you can use prompts like:
- “Write an Express middleware that intercepts requests, checks for a `20Z00` Sharenote in the authorization header, and verifies the Continuous Difficulty math using the sharenote skill.”
- “Build a Go service that acts as a Stratum proxy, consuming mining hashes and packaging them into Kind 35510 AuxPoW events exactly as defined by the sharenote skill.”
The resulting code is fully transparent, exactly tailored to your service, and mathematically compliant with the protocol.
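As a rough illustration of the first prompt's output shape, a middleware of this kind is just a header check that delegates the difficulty math to the SDK. Everything below is an assumption for the sketch: the header name, the note format, and the injected `verify` function are placeholders, not `sharenotejs` API:

```typescript
// Hypothetical skeleton of the middleware the prompt describes; a real
// implementation would pass in a verifier from sharenotejs.
type Handler = (req: { headers: Record<string, string> }) => boolean;

function makeSharenoteGate(verify: (note: string) => boolean): Handler {
  return (req) => {
    const note = req.headers["authorization"];
    if (!note) return false; // no Sharenote attached: reject
    return verify(note);     // delegate the difficulty math to the SDK
  };
}
```

The point of the skill is that the `verify` step, the only mathematically delicate part, is generated correctly rather than hallucinated.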
LLM-Friendly Documentation
We provide compressed, LLM-optimized versions of our documentation, suitable for pasting into an LLM's context window or serving as reference material for AI crawlers:
- llms.txt: A high-level summary of the protocol and its core concepts.
- llms-full.txt: A full, concatenated version of all documentation and specifications.