Compare commits

...

24 Commits

Author SHA1 Message Date
c617c96929 feat: add YouTrack skill and executor with configuration 2026-02-21 15:04:29 +05:30
b5c843ca69 chore: remove YouTrack integration files and documentation 2026-02-21 13:29:34 +05:30
d609974f8e fix: remove deprecated user model policies from settings.json 2026-02-21 13:20:48 +05:30
bb273491fc feat: add YouTrack API client and invoice generator with documentation 2026-02-21 13:20:43 +05:30
6e06487ca2 feat: add user model policies to restrict access to specific models in settings.json 2026-02-20 19:15:51 +05:30
b91c2938c8 fix: update maxOutputTokens for Gravity Opus 4.6 model in settings.json 2026-02-20 18:15:18 +05:30
7cab6508e3 fix: correct spelling of 'user-invocable' in SKILL.md files and update baseUrl and provider in settings.json 2026-02-20 15:37:36 +05:30
e0b6468c17 chore: update SKILL.md to include user-invokable and disable-model-invocation flags 2026-02-20 14:53:52 +05:30
d477154e63 chore: update model identifiers and display names in settings.json 2026-02-20 14:53:47 +05:30
290843f234 feat: add commit message generator skill for concise message suggestions 2026-02-20 10:49:16 +05:30
0fd219bf18 Add Context7 skill for retrieving up-to-date documentation via API 2026-02-20 00:09:13 +05:30
f4cdc3c7a8 Add markdown table justification script and update settings for output effort 2026-02-19 15:42:54 +05:30
0a5730ec1b Refactor model identifiers in coder and reviewer droids; add settings.json for configuration management 2026-02-19 12:22:20 +05:30
20fdfbbff8 Add installation check for colgrep in install script 2026-02-18 00:49:32 +05:30
344c7472e2 Add hint for using bash commands to justify markdown tables 2026-02-17 23:46:05 +05:30
a273bf8963 Update SKILL.md to include full path for rules directory 2026-02-16 16:46:34 +05:30
5b29c844c8 Add installation script for .factory directory 2026-02-16 16:31:44 +05:30
c0cca3db72 Clarify usage instructions for rules skill in SKILL.md 2026-02-16 16:29:29 +05:30
9b35a38728 Enhance droid documentation and coding rules:
- Update coder and reviewer descriptions to clarify subagent roles.
- Improve coding rules for modularity and project structure.
- Add new semantic code search skill documentation for ColGREP.
- Introduce rules skill for accessing project coding conventions.
2026-02-16 16:16:26 +05:30
1b939ccf9b Fix typos and improve clarity in subagent coding guidelines 2026-02-13 17:16:39 +05:30
1a6bffef2b only droids 2026-02-11 15:22:25 +05:30
5b304c5c2d Add maxOutputTokens for Kimi for Coding model in settings.json 2026-02-06 17:17:52 +05:30
59c0033298 Update Opus model version to 4.6 in settings.json 2026-02-06 16:49:30 +05:30
229070be0b skill initial 2026-02-06 16:21:11 +05:30
20 changed files with 949 additions and 154 deletions

.factory/droids/coder.md

@@ -0,0 +1,54 @@
---
name: coder
description: Specialized for large code generation using GPT 5.3 Codex. Generates production-ready code based on detailed specifications.
model: custom:Gpt-5.3-Codex
tools: ["Read", "Edit", "Create", "ApplyPatch", "LS", "Grep", "Execute"]
---
You are a specialized code generation droid powered by GPT 5.3 Codex. Your sole purpose is to write high-quality, production-ready code.
You are a subagent assisting the primary agent.
## Your Strengths and Weaknesses
### Strengths
- Exceptional code generation capabilities, especially for complex algorithms and large codebases.
### Weaknesses
- Smaller tasks may be less efficient for you, as you excel at generating larger codebases.
- Editing markdown files is not your strength; focus on code files instead.
- Editing YAML, JSON, or other structured data files is not your strength; focus on code files instead.
## Your Rules
1. **DO NOT create new markdown files** - Only the driver droid creates documentation
2. **Work with primary agent** - You should receive detailed specs from primary agent. Ask for clarification if needed before starting implementation
3. **Generate complete implementations** - Write full, working code, not stubs
4. **Follow existing patterns** - Match the codebase's style, conventions, and architecture
5. **Handle errors properly** - Include appropriate error handling and edge cases
## Process
1. Load the rules skill and read AGENTS.md.
2. Run `colgrep init` if no index exists, then use `colgrep` for semantic code search to understand the codebase before making changes.
3. Read any context files provided by the parent agent
4. Review the specification carefully
5. Implement the solution completely
6. Mentally verify that your changes would compile and pass a syntax check
7. Report what you created/modified
## Output Format
```
Summary: <one-line description of what was implemented>
Files Modified:
- <file>: <brief description of changes>
Implementation Notes:
- <any important decisions or trade-offs>
- <known limitations if any>
```
Focus on correctness and completeness. The review droid will catch issues later.


@@ -0,0 +1,57 @@
---
name: reviewer
description: Critical code reviewer using Opus 4.6. Finds bugs, security issues, and logic errors. Never generates code - only critiques.
model: custom:Opus-4.6
reasoningEffort: high
tools: ["Read", "Execute"]
---
You are a critical code review droid powered by Opus 4.6. Your job is to find bugs, security vulnerabilities, logic errors, and design flaws. You are a subagent assisting the primary agent.
## Your Rules
1. **NEVER write or modify code** - You are strictly read-only and critical
2. **NEVER create files** - Only analyze and report
3. **Assume context is complete** - The parent agent should provide all relevant files; do not explore unnecessarily
4. **Be thorough but constructive** - Find real issues, not nitpicks
## What to Look For
| Category | Checks |
|---------------------|------------------------------------------------------------------------------------|
| **Correctness** | Logic errors, off-by-one bugs, null dereferences, race conditions |
| **Security** | Injection vulnerabilities, unsafe deserialization, auth bypasses, secrets exposure |
| **Performance** | N+1 queries, unnecessary allocations, blocking operations |
| **Maintainability** | Code duplication, tight coupling, missing error handling |
| **Testing** | Untested edge cases, missing assertions, brittle tests |
## Process
1. Load the rules skill and read AGENTS.md.
2. Run `colgrep init` if no index exists, then use `colgrep` for semantic code search to understand relevant code paths and dependencies.
3. Read all files provided by the parent agent
4. Trace through critical code paths mentally
5. Identify issues with severity ratings
6. Suggest specific fixes (as text, not code)
## Output Format
```
Summary: <one-line verdict: "No blockers", "Minor issues found", or "Critical issues require fix">
Findings:
- [SEVERITY] <file>:<line> - <issue description>
Impact: <what could go wrong>
Suggestion: <how to fix>
Severity Levels:
- 🔴 CRITICAL: Must fix before merge (security, data loss, crashes)
- 🟡 WARNING: Should fix (bugs, performance issues)
- 🟢 NIT: Nice to have (style, minor improvements)
Follow-up Tasks:
- <specific action items for the coder droid or human>
```
Be skeptical. Your value is in catching what others miss.

.factory/rules/code.md

@@ -0,0 +1,6 @@
# Code Rules
1. Never use emojis in the code. Use ASCII characters as much as possible.
Kaomojis are also fine to make it fun but do not use emojis.
2. Keep files under 300 lines. Create nested folders and files for modularity. If someone runs `tree --gitignore`, they should see a well-structured project layout that makes it self-explanatory where to find what.
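One way to audit the 300-line rule is a quick shell one-liner; this is a sketch, not part of the repo, and assumes POSIX `find`, `wc`, and `awk`:

```shell
# List source files longer than 300 lines (the file-type globs are illustrative)
find . -name "*.py" -o -name "*.md" | xargs -r wc -l \
  | awk '$2 != "total" && $1 > 300 { print $2 " (" $1 " lines)" }'
```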

.factory/rules/github.md

@@ -0,0 +1,4 @@
# GitHub Rules
We have the `gh` CLI. Use it as much as possible, but only for read-only operations.
Ask the user to run specific commands if they are not read-only.
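A minimal sketch of how that split could be enforced: a classifier function that whitelists read-only subcommands and defers everything else to the user. The function name `gh_guard` and the whitelist are illustrative assumptions, not part of any tooling:

```shell
# Hypothetical classifier: decide whether a gh command is safe to run read-only
gh_guard() {
  case "$1 $2" in
    "pr list"|"pr view"|"issue list"|"issue view"|"repo view"|"run list")
      echo "safe (read-only): gh $*" ;;
    *)
      echo "ask the user to run: gh $*" ;;
  esac
}
gh_guard issue view 42   # safe (read-only)
gh_guard pr merge 42     # mutating: hand it to the user
```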


@@ -0,0 +1,15 @@
# Markdown Rules
## Tables
The tables in markdown should always be justified by ASCII character count for better readability.
Example:
| Name     | Age | City     |
| -------- | --- | -------- |
| Abhishek | 30  | New York |
## Hint
Use bash commands to count the chars in each column and add spaces accordingly to justify the table.
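As a sketch of that hint (assuming `awk` is available and the table lives in a file named `table.md` — both assumptions for illustration), the maximum width of each column can be computed before padding:

```shell
# Print the widest cell per column of a markdown table, skipping separator rows
awk -F'|' '{
  for (i = 2; i < NF; i++) {
    gsub(/^ +| +$/, "", $i)      # trim cell whitespace
    if (i > n) n = i
    if ($i ~ /^-+$/) continue    # ignore the |---| separator row
    if (length($i) > w[i]) w[i] = length($i)
  }
} END {
  for (i = 2; i <= n; i++) printf "col %d: %d\n", i - 1, w[i]
}' table.md
```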

.factory/rules/project.md

@@ -0,0 +1,17 @@
# Project Rules
Do not put obvious comments in the code. Every comment should add value to the codebase.
Docstrings are different than comments.
Do not put emojis in the code. Use ASCII characters as much as possible.
## Explore
Always start with `tree --gitignore`. Do not get stuck in a loop of running `ls` or `grep`.
Try to understand the code structure first.
Try to pick up the coding style and patterns used in the codebase; this helps you write code consistent with what already exists.
## Motive
The motive should be to do things the right way, not the easy way. The right way is to follow coding standards and best practices.
The easy way is to write code that is quick but unmaintainable. Avoid the easy way.

.factory/rules/python.md

@@ -0,0 +1,25 @@
# Python Rules
Always make sure to run linter and typecheck.
Preferably with `uv`, e.g. `uv run ruff check --fix` and `uv run ty`.
## Package management
Use `uv pip` instead of `pip`, as virtual environments are always created with `uv`. If one doesn't already exist, create it, for example: `uv venv -p 3.12`
## Code style
Do not try to use comments to work around the linter (ruff) or type checker (ty) issues.
Chances are a Makefile is present; read and use it. If one doesn't exist, create it.
Run formatting after done with changes.
Never use `sys.path` or `pathlib` for resources. Use `importlib.resources`.
Fetch version from pyproject.toml using `importlib.metadata`.
## Some rules to configure in ruff
- Ban relative imports.
- Keep imports at the top of the file.
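A sketch of how these two rules could be expressed in `pyproject.toml`, assuming ruff's documented rule codes (`TID252` from flake8-tidy-imports for relative imports, `E402` for imports not at the top of the file):

```toml
[tool.ruff.lint]
extend-select = ["TID252", "E402"]

[tool.ruff.lint.flake8-tidy-imports]
# Treat every relative import as a violation
ban-relative-imports = "all"
```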
## Type checking
Try to write type-safe code. Use type hints and annotations as much as possible; they help you catch bugs early and make the code easier to understand.


@@ -0,0 +1,18 @@
# Subagent Rules
Always use prep time to understand where you can leverage your available subagents.
## Coding with subagents
- The specialized coder subagent shall handle code generation, and even code refactoring.
- The coder agent is not good at editing Markdown, YAML, JSON, or other structured data files, so you should handle those.
- Always start by providing relevant files and as much context as possible to the subagents. Don't hesitate: the richer the context you provide, the better the output will be.
- None of the subagents use the `tree --gitignore` command instinctively, so you should ask them to do it or provide the relevant file tree yourself. This is especially important for the coder subagent, as it helps it understand the project structure and dependencies.
### Productivity Trick
Utilize the coder subagent in multiple stages:
- Ask it to analyze the code in the context of the task.
- Ask it to show you a proposed plan, which you'll evaluate; you can request an alternative approach or give the go-ahead.
- Give the go-ahead and ask it to implement the code.


@@ -19,22 +19,50 @@
       "baseUrl": "http://localhost:8383",
       "apiKey": "sk-abcd",
       "displayName": "Kimi for Coding (BYOK)",
+      "maxOutputTokens": 131072,
       "noImageSupport": false,
       "provider": "anthropic"
     },
     {
-      "model": "Opus-4.5",
-      "id": "custom:Opus-4.5-(BYOK)-2",
+      "model": "Kimi-K2.5-BS10",
+      "id": "custom:Kimi-Baseten-(BYOK)-2",
       "index": 2,
+      "baseUrl": "http://localhost:8383/v1",
+      "apiKey": "sk-abcd",
+      "displayName": "Kimi Baseten (BYOK)",
+      "maxOutputTokens": 131072,
+      "noImageSupport": false,
+      "provider": "generic-chat-completion-api"
+    },
+    {
+      "model": "Opus-4.6",
+      "id": "custom:Gravity-Opus-4.6-(BYOK)-3",
+      "index": 3,
       "baseUrl": "http://localhost:8383",
       "apiKey": "sk-abcd",
-      "displayName": "Opus 4.5 (BYOK)",
-      "maxOutputTokens": 128000,
+      "displayName": "Gravity Opus 4.6 (BYOK)",
+      "maxOutputTokens": 64000,
       "extraArgs": {
         "parallel_tool_calls": true,
-        "thinking": {
-          "type": "enabled",
-          "budget_tokens": 120000
+        "output_config": {
+          "effort": "max"
+        }
+      },
+      "noImageSupport": true,
+      "provider": "anthropic"
+    },
+    {
+      "model": "Sonnet-4.6",
+      "id": "custom:Gravity-Sonnet-4.6-(BYOK)-4",
+      "index": 4,
+      "baseUrl": "http://localhost:8383",
+      "apiKey": "sk-abcd",
+      "displayName": "Gravity Sonnet 4.6 (BYOK)",
+      "maxOutputTokens": 64000,
+      "extraArgs": {
+        "parallel_tool_calls": true,
+        "output_config": {
+          "effort": "high"
         }
       },
       "noImageSupport": true,
@@ -42,8 +70,8 @@
     },
     {
       "model": "Gpt-5.3-Codex",
-      "id": "custom:Gpt-5.3-Codex-(BYOK)-3",
-      "index": 3,
+      "id": "custom:Gpt-5.3-Codex-(BYOK)-5",
+      "index": 5,
       "baseUrl": "http://localhost:8383/v1",
       "apiKey": "sk-abcd",
       "displayName": "Gpt 5.3 Codex (BYOK)",
@@ -59,8 +87,8 @@
     },
     {
       "model": "Gpt-5.2",
-      "id": "custom:Gpt-5.2-(BYOK)-4",
-      "index": 4,
+      "id": "custom:Gpt-5.2-(BYOK)-6",
+      "index": 6,
       "baseUrl": "http://localhost:8383/v1",
       "apiKey": "sk-abcd",
       "displayName": "Gpt 5.2 (BYOK)",
@@ -76,7 +104,7 @@
     }
   ],
   "sessionDefaultSettings": {
-    "model": "custom:Gpt-5.3-Codex-(BYOK)-3",
+    "model": "custom:Kimi-for-Coding-(BYOK)-1",
    "autonomyMode": "auto-low",
    "specModeReasoningEffort": "none",
    "reasoningEffort": "none"


@@ -0,0 +1,138 @@
---
name: colgrep
description: Semantic code search using ColGREP - combines regex filtering with semantic ranking. Use when the user wants to search code by meaning, find relevant code snippets, or explore a codebase semantically. All local - code never leaves the machine.
user-invocable: false
disable-model-invocation: false
---
# ColGREP Semantic Code Search
ColGREP is a semantic code search tool that combines regex filtering with semantic ranking. It uses multi-vector search (via NextPlaid) to find code by meaning, not just keywords.
## When to use this skill
- Searching for code by semantic meaning ("database connection pooling")
- Finding relevant code snippets when exploring a new codebase
- Combining pattern matching with semantic understanding
- Setting up code search for a new project
- When grep returns too many irrelevant results
- When you don't know the exact naming conventions used in a codebase
## Prerequisites
ColGREP must be installed. It's a single Rust binary with no external dependencies.
## Quick Reference
### Check if ColGREP is installed
```bash
which colgrep || echo "ColGREP not installed"
```
### Install ColGREP
```bash
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.sh | sh
```
### Initialize index for a project
```bash
# Current directory
colgrep init
# Specific path
colgrep init /path/to/project
```
### Basic semantic search
```bash
colgrep "database connection pooling"
```
### Combine regex with semantic search
```bash
colgrep -e "async.*await" "error handling"
```
## Essential Flags
| Flag | Description | Example |
|------|-------------|---------|
| `-c, --content` | **Show full function/class content** with syntax highlighting | `colgrep -c "authentication"` |
| `-e <pattern>` | Pre-filter with regex, then rank semantically | `colgrep -e "def.*auth" "login"` |
| `--include "*.py"` | Filter by file type | `colgrep --include "*.rs" "error handling"` |
| `--code-only` | Skip text/config files (md, yaml, json) | `colgrep --code-only "parser"` |
| `-k <n>` | Number of results (default: 15) | `colgrep -k 5 "database"` |
| `-n <lines>` | Context lines around match | `colgrep -n 10 "config"` |
| `-l, --files-only` | List only filenames | `colgrep -l "test helpers"` |
| `--json` | Output as JSON for scripting | `colgrep --json "api" \| jq '.[].unit.file'` |
| `-y` | Auto-confirm indexing for large codebases | `colgrep -y "search term"` |
## How it works
1. **Tree-sitter parsing** - Extracts functions, methods, classes from code
2. **Structured representation** - Creates rich text with signature, params, docstring, calls, variables
3. **LateOn-Code-edge model** - 17M parameter model creates multi-vector embeddings (runs on CPU)
4. **NextPlaid indexing** - Quantized, memory-mapped, incremental index
5. **Search** - SQLite filtering + semantic ranking with grep-compatible flags
## Recommended Workflow
### For exploring a new codebase:
```bash
# 1. Initialize (one-time)
colgrep init
# 2. Search with content display to see actual code
colgrep -c -k 5 "function that handles user authentication"
# 3. Refine with regex if needed
colgrep -c -e "def.*auth" "login validation"
# 4. Filter by language
colgrep -c --include "*.py" "database connection pooling"
```
### For finding specific patterns:
```bash
# Hybrid search: regex filter + semantic ranking
colgrep -e "class.*View" "API endpoint handling"
# Skip config files, focus on code
colgrep --code-only "error handling middleware"
# Just get filenames for further processing
colgrep -l "unit test helpers"
```
### For scripting/automation:
```bash
# JSON output for piping to other tools
colgrep --json "configuration parser" | jq '.[] | {file: .unit.file, score: .score}'
```
## Pro Tips
1. **Always use `-c` for initial exploration** - Shows full function content, no need to read files separately
2. **Use `-e` to narrow results** - Regex pre-filter is much faster than semantic ranking everything
3. **Index auto-updates** - Each search detects file changes; no need to re-run `init` manually
4. **Large codebases** - Use `-y` to skip confirmation prompts for indexing >10K files
## Example workflow
1. **First time setup** for a project:
```bash
cd /path/to/project
colgrep init
```
2. **Search with content display** (recommended):
```bash
colgrep -c -k 5 "authentication middleware"
```
3. **Refine with regex**:
```bash
colgrep -c -e "def.*auth" "login validation"
```
4. **The index auto-updates** - each search detects file changes and updates automatically


@@ -0,0 +1,62 @@
---
name: commit_message
description: Generate a concise one-liner commit message by analyzing staged changes and recent git history. Use when the user wants a commit message suggestion before committing.
user-invocable: true
disable-model-invocation: false
---
# Commit Message Generator
## Overview
Generate a concise, meaningful one-liner commit message by inspecting the staged diff and recent commit history. Sometimes staged files relate to a prior commit, so the previous commit is also reviewed for context.
## Workflow
### Step 1: Gather Context
Run these commands to collect the necessary information:
```bash
# Get the last commit message and diff for context
git log -1 --format="%h %s" && echo "---" && git diff HEAD~1 --stat
# Get the staged diff (what will be committed)
git diff --staged
```
### Step 2: Analyze the Changes
- Read the staged diff to understand **what** changed.
- Read the last commit (`git log -1`) to understand if the staged changes are a continuation, fix, or follow-up to the previous commit.
- If staged files overlap with files in the last commit, treat the changes as related and reflect that in the message.
### Step 3: Generate the Message
Compose a **single-line** commit message following these rules:
- **Format:** `<type>: <concise description>`
- **Types:** `feat`, `fix`, `refactor`, `docs`, `style`, `test`, `chore`, `perf`, `ci`, `build`
- **Length:** Under 72 characters
- **Tone:** Imperative mood ("add", "fix", "update", not "added", "fixed", "updated")
- **No period** at the end
### Step 4: Output
Print only the suggested commit message — nothing else. No explanation, no alternatives.
## Examples
```
feat: add retry logic to API client
fix: correct off-by-one error in pagination
refactor: extract auth middleware into separate module
docs: update README with new installation steps
chore: bump dependencies to latest versions
```
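The mechanical parts of these rules (type prefix, length, no trailing period) can be sanity-checked in shell; imperative mood still needs human judgment. The helper name `check_msg` is illustrative:

```shell
# Validate a one-liner commit message: known type prefix, <=72 chars, no trailing period
check_msg() {
  msg=$1
  echo "$msg" | grep -Eq '^(feat|fix|refactor|docs|style|test|chore|perf|ci|build): .+[^.]$' \
    && [ "${#msg}" -le 72 ] \
    && echo "ok: $msg" \
    || echo "reject: $msg"
}
check_msg "feat: add retry logic to API client"
check_msg "added some stuff."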
## Tips
- If the staged diff is empty, inform the user: "No staged changes found. Stage files with `git add` first."
- If the staged changes clearly extend the previous commit (same files, same feature), phrase the message as a continuation rather than a new change.
- Keep it specific — avoid vague messages like "update code" or "fix stuff".


@@ -0,0 +1,87 @@
---
name: context7
description: Retrieve up-to-date documentation for software libraries, frameworks, and components via the Context7 API. This skill should be used when looking up documentation for any programming library or framework, finding code examples for specific APIs or features, verifying correct usage of library functions, or obtaining current information about library APIs that may have changed since training.
user-invocable: false
disable-model-invocation: false
---
# Context7
## Overview
This skill enables retrieval of current documentation for software libraries and components by querying the Context7 API via curl. Use it instead of relying on potentially outdated training data.
## Workflow
### Step 1: Search for the Library
To find the Context7 library ID, query the search endpoint:
```bash
curl -s "https://context7.com/api/v2/libs/search?libraryName=LIBRARY_NAME&query=TOPIC" | jq '.results[0]'
```
**Parameters:**
- `libraryName` (required): The library name to search for (e.g., "react", "nextjs", "fastapi", "axios")
- `query` (required): A description of the topic for relevance ranking
**Response fields:**
- `id`: Library identifier for the context endpoint (e.g., `/websites/react_dev_reference`)
- `title`: Human-readable library name
- `description`: Brief description of the library
- `totalSnippets`: Number of documentation snippets available
### Step 2: Fetch Documentation
To retrieve documentation, use the library ID from step 1:
```bash
curl -s "https://context7.com/api/v2/context?libraryId=LIBRARY_ID&query=TOPIC&type=txt"
```
**Parameters:**
- `libraryId` (required): The library ID from search results
- `query` (required): The specific topic to retrieve documentation for
- `type` (optional): Response format - `json` (default) or `txt` (plain text, more readable)
## Examples
### React hooks documentation
```bash
# Find React library ID
curl -s "https://context7.com/api/v2/libs/search?libraryName=react&query=hooks" | jq '.results[0].id'
# Returns: "/websites/react_dev_reference"
# Fetch useState documentation
curl -s "https://context7.com/api/v2/context?libraryId=/websites/react_dev_reference&query=useState&type=txt"
```
### Next.js routing documentation
```bash
# Find Next.js library ID
curl -s "https://context7.com/api/v2/libs/search?libraryName=nextjs&query=routing" | jq '.results[0].id'
# Fetch app router documentation
curl -s "https://context7.com/api/v2/context?libraryId=/vercel/next.js&query=app+router&type=txt"
```
### FastAPI dependency injection
```bash
# Find FastAPI library ID
curl -s "https://context7.com/api/v2/libs/search?libraryName=fastapi&query=dependencies" | jq '.results[0].id'
# Fetch dependency injection documentation
curl -s "https://context7.com/api/v2/context?libraryId=/fastapi/fastapi&query=dependency+injection&type=txt"
```
## Tips
- Use `type=txt` for more readable output
- Use `jq` to filter and format JSON responses
- Be specific with the `query` parameter to improve relevance ranking
- If the first search result is not correct, check additional results in the array
- URL-encode query parameters containing spaces (use `+` or `%20`)
- No API key is required for basic usage (rate-limited)
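Since `jq` is already used throughout this skill, its `@uri` filter is one way to do the percent-encoding (a sketch; assumes `jq` is installed):

```shell
# Percent-encode a topic before splicing it into the query string
topic=$(jq -rn --arg q "app router" '$q | @uri')
echo "$topic"   # app%20router
# curl -s "https://context7.com/api/v2/context?libraryId=/vercel/next.js&query=${topic}&type=txt"
```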


@@ -0,0 +1,57 @@
---
name: rules
description: Access and apply project coding rules and conventions from ~/.factory/rules/ directory. Use when needing to follow coding standards, conventions, or guidelines for the project.
user-invocable: false
disable-model-invocation: false
---
# Rules Skill
This skill provides access to project-specific coding rules and conventions stored in `~/.factory/rules/`.
## Available Rule Files
- **code.md** - General coding conventions and best practices
- **github.md** - GitHub-specific workflows and conventions
- **markdown.md** - Markdown formatting rules
- **project.md** - Project-specific conventions
- **python.md** - Python-specific coding standards
- **subagents.md** - Guidelines for using subagents
## When to use this skill
- When starting work on a new task to understand project conventions
- When unsure about coding standards for a specific language or context
- When setting up new code that should follow existing patterns
## Usage
Read the rule files from `~/.factory/rules/` and apply them as needed.
### ColGREP Init
When starting work on a project, run `colgrep init` to build the semantic search index. This enables the colgrep skill for semantic code search across the codebase. The index auto-updates on subsequent searches, so `init` only needs to run once per project.
### Markdown Table Justification
To justify markdown tables by ASCII character count, use the built-in script:
```bash
# Justify tables in a file (executable with uv shebang)
~/.factory/skills/rules/table_justify.py <file>
# Or via python
python ~/.factory/skills/rules/table_justify.py <file>
# Justify and save to new file
~/.factory/skills/rules/table_justify.py <file> -o <output>
# Justify from stdin
cat table.md | ~/.factory/skills/rules/table_justify.py
```
## Research
- Back all claims with reference code.
- State only what is proven.
- If evidence is lacking, say: "I can't find any evidence to support this claim" or "Not enough info".


@@ -0,0 +1,152 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# ///
"""
Markdown Table Justifier
ASCII character-justifies markdown tables by padding all cells
to match the maximum width in each column.
Usage:
./table_justify.py <file>
./table_justify.py <file> -o <output>
cat <file> | ./table_justify.py
"""
import argparse
import re
import sys
from pathlib import Path
def parse_table(lines: list[str]) -> tuple[list[list[str]], list[int]]:
    """
    Parse a markdown table into rows and calculate column widths.
    Returns (rows, max_widths).
    """
    rows = []
    max_widths = []
    for line in lines:
        line = line.rstrip()
        if not line.startswith("|"):
            continue
        # Split by | and strip whitespace
        cells = [cell.strip() for cell in line.split("|")]
        # Remove empty first/last cells from leading/trailing |
        if len(cells) > 0 and cells[0] == "":
            cells = cells[1:]
        if len(cells) > 0 and cells[-1] == "":
            cells = cells[:-1]
        if not cells:
            continue
        rows.append(cells)
        # Update max widths
        while len(max_widths) < len(cells):
            max_widths.append(0)
        for i, cell in enumerate(cells):
            max_widths[i] = max(max_widths[i], len(cell))
    return rows, max_widths


def is_separator_row(row: list[str]) -> bool:
    """Check if a row is a header separator (all dashes)."""
    if not row:
        return False
    return all(re.match(r"^-+$", cell.strip().replace(" ", "")) or cell.strip() == "" for cell in row)


def format_separator(widths: list[int]) -> str:
    """Format the separator row."""
    cells = ["-" * w for w in widths]
    return "|" + "|".join(f" {c} " for c in cells) + "|"


def format_row(row: list[str], widths: list[int]) -> str:
    """Format a data row with proper padding."""
    padded = []
    for i, cell in enumerate(row):
        if i < len(widths):
            padded.append(cell.ljust(widths[i]))
        else:
            padded.append(cell)
    return "|" + "|".join(f" {c} " for c in padded) + "|"


def justify_table(lines: list[str]) -> list[str]:
    """Justify a markdown table."""
    rows, max_widths = parse_table(lines)
    if not rows:
        return lines
    result = []
    for i, row in enumerate(rows):
        if i == 1 and is_separator_row(row):
            result.append(format_separator(max_widths))
        else:
            result.append(format_row(row, max_widths))
    return result


def process_content(content: str) -> str:
    """Process content and justify all tables found."""
    lines = content.split("\n")
    result = []
    table_lines = []
    in_table = False
    for line in lines:
        if line.strip().startswith("|"):
            table_lines.append(line)
            in_table = True
        else:
            if in_table:
                # End of table, process it
                result.extend(justify_table(table_lines))
                table_lines = []
                in_table = False
            result.append(line)
    # Process last table if file ends with one
    if table_lines:
        result.extend(justify_table(table_lines))
    return "\n".join(result)


def main():
    parser = argparse.ArgumentParser(description="ASCII character-justify markdown tables")
    parser.add_argument("input", nargs="?", help="Input file (default: stdin)")
    parser.add_argument("-o", "--output", help="Output file (default: stdout)")
    args = parser.parse_args()

    # Read input
    if args.input:
        content = Path(args.input).read_text()
    else:
        content = sys.stdin.read()

    result = process_content(content)

    # Write output
    if args.output:
        Path(args.output).write_text(result)
    else:
        print(result)


if __name__ == "__main__":
    main()


@@ -0,0 +1,104 @@
---
name: youtrack
description: Dynamic access to youtrack MCP server (19 tools)
user-invocable: false
disable-model-invocation: false
---
# youtrack Skill
This skill provides dynamic access to the youtrack MCP server without loading all tool definitions into context.
## Available Tools
- `log_work`: Adds a work item (spent time) to the specified issue. You can specify the duration (in minutes), optional date, work type, description, and optional work item attributes. Use get_project to retrieve the workTypes and workItemAttributesSchema for the target project.
- `manage_issue_tags`: Adds a tag to or removes a tag from an issue. If a name is used, the first tag that matches the provided name is added. If no matching tags are found, an error message with suggestions for similar tags is returned. When successful, it returns the ID of the updated issue and the updated list of issue tags.
- `search_issues`: Searches for issues using YouTrack's query language. The 'query' can combine attribute filters, keywords, and free text. Examples of common patterns include:
- Free text: Find matching words in the issue summary, description, and comments. Use wildcards: '*' for any characters, '?' for single characters (e.g., 'summary: log*', 'fix??'). Examples: 'login button bug', 'some other text', 'summary: log*', 'description: fix??'.
- Linked issues: '<linkType>: <issueId>' (by link type), 'links: <issueId>' (all linked to issueId issues). Examples: 'relates to: DEMO-123', 'subtask of: DEMO-123' (issues where DEMO-123 is a parent), 'links: DEMO-12' (issues linked to DEMO-12 with any link type). Hint: get_issue returns 'linkedIssueCounts' property which shows the available link types for the issue.
- Issues where an issue is mentioned: 'mentions: <issueId>'. Examples: 'mentions: DEMO-123'.
- Project filter: 'project: <ProjectName>'. Use project name or project key. Examples: 'project: {Mobile App}', 'project: MA'.
- Assignee filter: 'for: <login>'. Use 'me' for the currently authenticated user. Examples: 'for: me', 'for: john.smith'.
- Reporter filter: 'reporter: <login>'. Use 'me' for the currently authenticated user. Examples: 'reporter: me', 'reporter: admin'.
- Tag filter: 'tag: <TagName>'. Wrap multi-word tags in braces { }. Examples: 'tag: urgent', 'tag: {customer feedback}'.
- Field filter: '<FieldName>: <Value>'. For any project field, for example, State, Type, Priority, and so on. Wrap multi-word names or values in { }. Use get_project to get the possible fields and values for the project issues to search. Use '-' as 'not', e.g., 'State: -Fixed' filters out fixed issues. Examples: 'Priority: High', 'State: {In Progress}, Fixed' (searches issues with 'In Progress' state + issues with 'Fixed' state), 'Due Date: {plus 5d}' (issues that are due in five days).
- Date filters: 'created:', 'updated:', 'resolved date:' (or any date field) plus a date, range, or relative period. Relative periods: 'today', 'yesterday', '{This week}', '{Last week}', '{This month}', etc. Examples: 'created: {This month}', 'updated: today', 'resolved date: 2025-06-01 .. 2025-06-30', 'updated: {minus 2h} .. *' (issues updated in the last 2 hours), 'created: * .. {minus 1y 6M}' (issues that are at least one and a half years old).
- Keywords: '#Unresolved' to find unresolved issues based on the State; '#Resolved' to find resolved issues.
- Empty/Non-Empty Fields: Use 'has: <attribute>'. Example: 'has: attachments' finds issues with attachments, while 'has: -comments' finds issues with no comments. Other attributes: 'links', '<linkType>' (e.g. 'has: {subtask of}'), 'star' (subscription), 'votes', 'work'.
- Combining filters: List multiple conditions separated by spaces (logical AND). For OR operator, add it explicitly. Examples: '(project: MA) and (for: me) and (created: {minus 8h} .. *) and runtime error' (issues in project MA and assigned to currently authenticated user and created during last 8h and contains 'runtime error' text), '(Type: Task and State: Open) or (Type: Bug and Priority: Critical)'.
Returns basic info: id, summary, project, resolved, reporter, created, updated and default custom fields. For full details, use get_issue. The response is paginated using the specified offset and limit.
- `update_issue`: Updates an existing issue and its fields (customFields). Pass any of the arguments to partially update the issue:
- 'summary' or 'description' arguments to update only the issue summary or description.
- 'customFields' argument as key-value JSON object to update issue fields like State, Type, Priority, etc. Use get_issue_fields_schema to discover 'customFields' and their possible values.
- 'subscription' argument to star (true) or unstar (false) the issue on behalf of the current user. The current user is notified about subsequent issue updates according to their subscription settings for the Star tag.
- 'vote' argument to vote (true) or remove a vote (false) on behalf of the current user for the issue.
Returns the ID of the updated issue and a confirmation of what was updated.
- `get_project`: Retrieves full details for a specific project.
- `get_saved_issue_searches`: Returns saved searches marked as favorites by the current user. The output search queries can be used in search_issues. The response is paginated using the specified offset and/or limit.
- `get_user_group_members`: Lists users who are members of a specified group or project team. Project teams are essentially groups that are always associated with a specific project. The response is paginated using the specified offset and/or limit.
- `link_issues`: Links two issues with the specified link type.
Examples:
- TS-1 is a subtask of TS-2: {"targetIssueId": "TS-1", "linkType": "subtask of", "issueToLinkId": "TS-2"};
- TS-4 is a duplicate of TS-3: {"targetIssueId": "TS-4", "linkType": "duplicates", "issueToLinkId": "TS-3"};
- TS-1 is blocked by TS-2: {"targetIssueId": "TS-1", "linkType": "blocked by", "issueToLinkId": "TS-2"};
Returns updated link counts for all target issue link types.
- `get_current_user`: Returns details about the currently authenticated user (me): login, email, full name, time zone.
- `get_issue`: Returns detailed information for an issue or issue draft, including the summary, description, URL, project, reporter (login), tags, votes, and custom fields. The `customFields` output property provides more important issue details, including Type, State, Assignee, Priority, Subsystem, and so on. Use get_issue_fields_schema for the full list of custom fields and their possible values.
- `get_issue_comments`: Returns a list of issue comments with detailed information for each. The response is paginated using the specified offset and/or limit.
- `get_issue_fields_schema`: Returns the JSON schema for custom fields in the specified project. Must be used to provide relevant custom fields and values for create_issue and update_issue actions.
- `find_projects`: Finds projects whose names contain the specified substring (case-insensitive). Returns minimal information (ID and name) to help pick a project for get_project. The response is paginated using the specified offset and/or limit.
- `find_user`: Finds users by login or email (provide either login or email). Returns profile data for the matching user. This includes the login, full name, email, and local time zone.
- `find_user_groups`: Finds user groups or project teams whose names contain the specified substring (case-insensitive). The response is paginated using the specified offset and/or limit.
- `add_issue_comment`: Adds a new comment to the specified issue. Supports Markdown.
- `change_issue_assignee`: Sets the value for the Assignee field in an issue to the specified user. If the `assigneeLogin` argument is `null`, the issue will be unassigned.
- `create_draft_issue`: Creates a new issue draft in the specified project. If the project is not specified, ask the user which project to use. Draft issues are only visible to the current user and can be edited using update_issue. Returns the ID assigned to the issue draft and a URL that opens the draft in a web browser.
- `create_issue`: Creates a new issue in the specified project. Call the get_issue_fields_schema tool first to identify required `customFields` and permitted values (projects may require them at creation). If the project is not specified, ask the user which project to use. Returns the created issue ID and URL. Use get_issue for full details.
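As a concrete sketch of one of the calls above, an `update_issue` payload might look like this. The argument name `issueId` and the field values shown are illustrative, not confirmed against the server; discover the real schema with `./executor.py --describe update_issue` and `get_issue_fields_schema`:

```json
{
  "tool": "update_issue",
  "arguments": {
    "issueId": "DEMO-123",
    "customFields": {"State": "In Progress", "Priority": "High"},
    "subscription": true
  }
}
```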
## Usage Pattern
When the user's request matches this skill's capabilities:
**Step 1: Identify the right tool** from the list above
**Step 2: Generate a tool call** in this JSON format:
```json
{
  "tool": "tool_name",
  "arguments": {
    "param1": "value1"
  }
}
```
**Step 3: Execute via bash:**
```bash
cd $SKILL_DIR
./executor.py --call 'YOUR_JSON_HERE'
```
IMPORTANT: Replace $SKILL_DIR with the actual discovered path of this skill directory.
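For example, a hypothetical end-to-end `search_issues` call could be assembled like this (the query uses the syntax described above; the final executor invocation is shown commented out because the skill path must be discovered first):

```bash
# Sketch: search for my unresolved issues in project MA.
CALL='{"tool": "search_issues", "arguments": {"query": "project: MA for: me #Unresolved", "limit": 10}}'
echo "$CALL" | python3 -m json.tool >/dev/null && echo "payload OK"   # sanity-check the JSON before calling
# Then, from the discovered skill directory:
# ./executor.py --call "$CALL"
```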
## Getting Tool Details
If you need detailed information about a specific tool's parameters:
```bash
cd $SKILL_DIR
./executor.py --describe tool_name
```
## Error Handling
If the executor returns an error:
- Check the tool name is correct
- Verify required arguments are provided
- Ensure the MCP server is accessible
---
*Auto-generated from MCP server configuration by mcp_to_skill.py*

**executor.py** (new file)
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "mcp>=1.0.0",
#     "httpx",
# ]
# ///
"""MCP Skill Executor - HTTP (Streamable HTTP) transport"""
import json
import sys
import asyncio
import argparse
from pathlib import Path

import httpx
from mcp import ClientSession
from mcp.client.streamable_http import streamable_http_client


async def run(config, args):
    url = config["url"]
    headers = config.get("headers", {})
    http_client = httpx.AsyncClient(headers=headers, timeout=httpx.Timeout(30, read=60))
    async with http_client:
        async with streamable_http_client(url=url, http_client=http_client) as (
            read_stream,
            write_stream,
            _,
        ):
            async with ClientSession(read_stream, write_stream) as session:
                await session.initialize()
                if args.list:
                    response = await session.list_tools()
                    tools = [{"name": t.name, "description": t.description} for t in response.tools]
                    print(json.dumps(tools, indent=2))
                elif args.describe:
                    response = await session.list_tools()
                    for tool in response.tools:
                        if tool.name == args.describe:
                            print(json.dumps({"name": tool.name, "description": tool.description, "inputSchema": tool.inputSchema}, indent=2))
                            return
                    print(f"Tool not found: {args.describe}", file=sys.stderr)
                    sys.exit(1)
                elif args.call:
                    call_data = json.loads(args.call)
                    result = await session.call_tool(call_data["tool"], call_data.get("arguments", {}))
                    for item in result.content:
                        if hasattr(item, "text"):
                            print(item.text)
                        else:
                            print(json.dumps(item.model_dump(), indent=2))
                else:
                    # No action requested; parser lives in main(), so report here directly.
                    print("No action specified; use --list, --describe, or --call.", file=sys.stderr)
                    sys.exit(1)


def main():
    parser = argparse.ArgumentParser(description="MCP Skill Executor (HTTP)")
    parser.add_argument("--call", help="JSON tool call to execute")
    parser.add_argument("--describe", help="Get tool schema")
    parser.add_argument("--list", action="store_true", help="List all tools")
    args = parser.parse_args()
    config_path = Path(__file__).parent / "mcp-config.json"
    if not config_path.exists():
        print(f"Error: {config_path} not found", file=sys.stderr)
        sys.exit(1)
    with open(config_path) as f:
        config = json.load(f)
    asyncio.run(run(config, args))


if __name__ == "__main__":
    main()

**mcp-config.json** (new file)
{
  "url": "https://<YourYouTrackInstance>.youtrack.cloud/mcp",
  "transport": "http",
  "headers": {
    "Authorization": "Bearer <YourYouTrackToken>"
  }
}

**DROIDS.md** (deleted, 138 lines)
# Factory Droids
A system for orchestrating AI droids to handle complex coding tasks through specialized roles.
## Overview
Factory Droids uses `droid exec` to run AI agents non-interactively, each specializing in different aspects of software development.
## Available Commands
```bash
droid exec --help # Show exec command options (includes model list)
droid --help # Show all droid commands
droid exec --list-tools # List available tools for a model
```
> **Tip:** Run `droid exec --help` to see all available models including BYOK custom models.
## Quick Start
```bash
# Read-only analysis (default)
droid exec "analyze the codebase structure"
# With file input
droid exec -f prompt.txt
# With specific model
droid exec --model custom:kimi-k2.5 "explore the project"
# Low autonomy - safe file operations
droid exec --auto low "add JSDoc comments"
# Medium autonomy - development tasks
droid exec --auto medium "install deps and run tests"
# High autonomy - production operations
droid exec --auto high "fix, test, commit and push"
```
## Available Models (BYOK)
| Model ID | Name | Reasoning |
|-----------------------------------|----------------------|-----------|
| `custom:kimi-k2.5` | Kimi K2.5 | Yes |
| `custom:claude-opus-4.6` | Claude Opus 4.6 | Yes |
| `custom:gpt-5.3-codex` | GPT 5.3 Codex | Yes |
| `custom:gpt-5.2` | GPT 5.2 | Yes |
## Droid Roles
| Droid | Model | Purpose | Auto Level |
|------------|-------------------------------|---------------------------------------|------------|
| Explorer | `custom:kimi-k2.5` | Code exploration and research | high |
| Spec | `custom:gpt-5.2` | Planning and specification generation | high |
| Coder | `custom:gpt-5.3-codex` | Large code generation | high |
| Coder-lite | `custom:kimi-k2.5` | Small code generation and fixes | high |
| Quality | `custom:kimi-k2.5` | Formatting, linting, type checking | high |
| Reviewer   | `custom:claude-opus-4.6`      | Code review and bug finding           | high       |
| Runner | `custom:kimi-k2.5` | Build, test, and execution | high |
## Workflow
1. **Start** with a good instruction follower (`custom:kimi-k2.5` or `custom:gpt-5.3-codex`)
2. **Make** a todo list
3. **Explore** - Launch multiple explorer droids with `custom:kimi-k2.5` in parallel
4. **Spec** - Evaluate context with spec droid using `custom:gpt-5.2`
5. **Confirm** spec with user
6. **Code** - Use `custom:gpt-5.3-codex` for large code gen, `custom:kimi-k2.5` for small
7. **Quality** - Run quality check droid with `custom:kimi-k2.5 --auto high`
8. **Review** - Run review droid with `custom:claude-opus-4.6 --auto high`
9. **Run** - Run build/test droid with `custom:kimi-k2.5 --auto high`
10. **Summarize** - Provide final summary
## Autonomy Levels
| Level | Flag | Description |
|---------|----------------|-------------------------------------------------------|
| Default | (none) | Read-only - safest for reviewing planned changes |
| Low | `--auto low` | Basic file operations, no system changes |
| Medium | `--auto medium`| Development ops - install packages, build, git local |
| High | `--auto high` | Production ops - git push, deploy, migrations |
| Unsafe | `--skip-permissions-unsafe` | Bypass all checks - DANGEROUS! |
## Command Options
```
Usage: droid exec [options] [prompt]
Arguments:
prompt The prompt to execute
Options:
-o, --output-format <format> Output format (default: "text")
--input-format <format> Input format: stream-json for multi-turn
-f, --file <path> Read prompt from file
--auto <level> Autonomy level: low|medium|high
--skip-permissions-unsafe Skip ALL permission checks (unsafe)
-s, --session-id <id> Existing session to continue
-m, --model <id> Model ID (default: claude-opus-4-5-20251101)
-r, --reasoning-effort <level> Reasoning effort (model-specific)
--enabled-tools <ids> Enable specific tools
--disabled-tools <ids> Disable specific tools
--cwd <path> Working directory path
--log-group-id <id> Log group ID for filtering logs
--list-tools List available tools and exit
-h, --help Display help
```
## Authentication
Create API key: https://app.factory.ai/settings/api-keys
```bash
export FACTORY_API_KEY=fk-... && droid exec "fix the bug"
```
## Examples
```bash
# Analysis (read-only)
droid exec "Review the codebase for security vulnerabilities"
# Documentation
droid exec --auto low "add JSDoc comments to all functions"
droid exec --auto low "fix typos in README.md"
# Development
droid exec --auto medium "install deps, run tests, fix issues"
droid exec --auto medium "update packages and resolve conflicts"
# Production
droid exec --auto high "fix bug, test, commit, and push to main"
droid exec --auto high "deploy to staging after running tests"
# Continue session
droid exec -s <session-id> "continue previous task"
```

**VISION** (modified)

@@ -1,4 +1,4 @@
# VISION # VISION of the Project
Need a skill for factory droid which can launch `droid exec` for multiple things. Need a skill for factory droid which can launch `droid exec` for multiple things.
@@ -86,13 +86,13 @@ Need a skill for factory droid which can launch `droid exec` for multiple things
| Rank | Model | | Rank | Model |
|------|------------------| |------|------------------|
| 1 | `gpt_5.2` | | 1 | `gpt_5.2` |
| 2 | `gpt_5.3_codex` | | 2 | `opus_4.6` |
| 3 | `opus_4.6` | | 3 | `gpt_5.3_codex` |
| 4 | `kimi_k2.5` | | 4 | `kimi_k2.5` |
## Flow ## Flow
-> Start with good instruction follower (kimi_k2.5 or gpt_5.3_codex). -> Start with `kimi_k2.5` as the driver and entrypoint.
User asks a question or give a task. User asks a question or give a task.
-> Make a todo list. -> Make a todo list.
-> exploration is always needed. launch multiple explorer droid with kimi_k2.5 asking question in natural language. -> exploration is always needed. launch multiple explorer droid with kimi_k2.5 asking question in natural language.
@@ -103,3 +103,9 @@ User asks a question or give a task.
-> Run review droid with opus_4.6 to find bugs and issues. -> Run review droid with opus_4.6 to find bugs and issues.
-> Run build/test/run droid with kimi_k2.5. -> Run build/test/run droid with kimi_k2.5.
-> Provide summary -> Provide summary
## Important Notes
- Assume that all droid exec with any model will try to explore the code base. So we need to provide as many context as possible that there should not be need to explore again when it comes to opus 4.6 or gpt 5.2. 5.3-codex and kimi-k2.5 are good at exploring, so they can be let loose.
- Do not create unnecessary new markdown files. Need to ask this in every droid exec. Only the driver (kimi-k2.5) should be doing it.

**install.sh** (new executable file, 15 lines)
#!/bin/bash
# Install .factory to ~/.factory (merges, does not replace parent folders)
mkdir -p ~/.factory
rsync -av .factory/ ~/.factory/
echo "Installed .factory to ~/.factory"
# Install colgrep if not already installed
if ! command -v colgrep &> /dev/null; then
  echo "Installing colgrep..."
  curl --proto '=https' --tlsv1.2 -LsSf https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.sh | sh
else
  echo "colgrep is already installed, skipping."
fi
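The `rsync -av .factory/ ~/.factory/` line merges into the target rather than replacing it. That behavior can be verified safely with throwaway temp directories (the paths below are not the real `~/.factory`):

```bash
# Safe demo of rsync's merge semantics using temp dirs.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/skills" "$dst/droids"
echo new  > "$src/skills/a.txt"    # incoming file
echo keep > "$dst/droids/b.txt"    # pre-existing file that must survive the merge
rsync -a "$src/" "$dst/"           # trailing slash: copy the *contents* of src into dst
ls "$dst"                          # both droids/ and skills/ are present
```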