Engineering Methodology
Shimmy was built spec-first, test-driven, and AI‑assisted. This document records the exact loop, the quality gates, and where to find proofs.
Development Loop
1. **Define a contract/spec.** Example: "Implement `/v1/chat/completions` with streaming (Server‑Sent Events) and match the response schema."
2. **Generate a candidate implementation.** AI tools scaffold code; every line is reviewed before commit. Nontrivial changes are tied to a spec (issue or PR description) and include tests.
3. **Validate with properties & invariants.**
   - Property‑based tests: see `docs/ppt-invariant-testing.md`.
   - Runtime invariants: assertions on protocol, state, and memory safety expectations.
   - Tests live under `/tests` and run in CI on Linux/macOS/Windows.
4. **CI gates.** Every PR runs:
   - DCO sign‑off
   - Build matrix (Linux/macOS/Windows)
   - Unit + property tests
   - Static checks / duplicate issue detection
   - Release workflow dry‑run (where applicable)
5. **Iterate until green.** Code merges only when all gates pass. Releases are signed, tagged, and changelogged.
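As a minimal sketch of the validation step, a runtime invariant on streamed SSE frames might look like the following. The `format_sse_chunk` helper is hypothetical, named here only for illustration; it is not Shimmy's actual API.

```rust
/// Hypothetical helper illustrating a runtime invariant on streamed
/// Server-Sent Events frames; the name is illustrative, not Shimmy's API.
fn format_sse_chunk(json_payload: &str) -> String {
    let frame = format!("data: {}\n\n", json_payload);
    // Runtime invariant: every emitted frame is a well-formed SSE event.
    assert!(frame.starts_with("data: ") && frame.ends_with("\n\n"));
    frame
}

fn main() {
    // A hand-rolled property-style check: the invariant holds for many
    // payloads, not just one hand-picked example.
    for payload in ["{}", r#"{"choices":[]}"#, "", "multi word payload"] {
        let frame = format_sse_chunk(payload);
        assert_eq!(frame, format!("data: {}\n\n", payload));
    }
    println!("all SSE frame invariants held");
}
```

In the real test suite, a property-testing crate would generate the payloads instead of a fixed array; see `docs/ppt-invariant-testing.md` for the project's approach.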
Quality Practices
- Property Testing: Exercise edge cases beyond example‑based tests.
- Runtime Invariants: Fail fast when correctness assumptions are violated.
- Benchmarks: Reproducible scripts and environment in `docs/BENCHMARKS.md`.
- OpenAI Compat: Supported endpoints/fields in `docs/OPENAI_COMPAT.md`.
- Security Defaults:
  - Binds to `127.0.0.1` by default.
  - External model files are trust‑on‑first‑use; optional SHA‑256 verification and allow‑list paths are available/planned.
  - Prefer running with least privilege; avoid exposing ports publicly without auth.
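The loopback default can be sketched with the standard library alone. Port `0` below is illustrative (it asks the OS for any free port); Shimmy's actual default port is not specified in this document.

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Bind to loopback only; port 0 lets the OS pick a free port
    // (illustrative -- not Shimmy's actual default port).
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    // Security invariant: the server is unreachable from other hosts.
    assert!(addr.ip().is_loopback());
    println!("listening on {addr}");
    Ok(())
}
```

Binding to `127.0.0.1` rather than `0.0.0.0` means exposure beyond the local machine is an explicit opt-in, which is the least-privilege posture the list above recommends.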
Philosophy
- Spec first, code second — logic/contracts drive implementation.
- Tests > syntax — correctness is proven with properties/invariants.
- AI is a tool; process is the product — the methodology scales teams.
- Forever‑free core — MIT license; contributions via Issues/PRs are welcome.
Quick Links
- Property/invariant guide: `docs/ppt-invariant-testing.md`
- Tests: `/tests`
- CI: GitHub Actions → CI status badge in README
- Benchmarks: `docs/BENCHMARKS.md`
- OpenAI Compatibility: `docs/OPENAI_COMPAT.md`