The thing I keep running into with multi-agent setups is that the engineering principles a team actually cares about — how to handle errors, when to wrap shell calls, what counts as a critical path — live in a wiki page or a slide deck nobody reads. That's already a problem for humans; for an LLM agent it is a guarantee of policy violation.
coding-ethos is the position I've taken. Those principles belong in a single coding_ethos.yml file, and from that one file the build emits every artifact that needs to know about them: CLAUDE.md / GEMINI.md agent instructions, Ruff / Pyright / golangci-lint configs, compiled Go pre-commit hooks, agent tool-use guards, and an MCP server the agent can query at runtime.
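To make the "one file, many emitters" idea concrete, here is a minimal sketch under assumed names: the `no_bare_except` principle, its field names, and the skill path are hypothetical, not the project's real schema. The point is only that every output is rendered from the same in-memory object.

```python
# Illustrative only: the principle, field names, and skill_id below are
# hypothetical. Every artifact is rendered from the same object, so the
# markdown rules and the lint config cannot disagree.
PRINCIPLE = {
    "id": "no_bare_except",
    "statement": "Never swallow exceptions with a bare `except:`.",
    "severity": "error",
    "ruff_rule": "E722",        # Ruff's code for a bare except clause
    "skill_id": "error-handling/explicit-exceptions",
}

def emit_agent_rule(p: dict) -> str:
    """Render the principle as a CLAUDE.md / GEMINI.md bullet."""
    return f"- **{p['id']}** ({p['severity']}): {p['statement']}"

def emit_ruff_select(principles: list[dict]) -> str:
    """Render the matching Ruff config stanza from the same objects."""
    codes = sorted({p["ruff_rule"] for p in principles if p.get("ruff_rule")})
    return "[tool.ruff.lint]\nselect = " + str(codes).replace("'", '"')

if __name__ == "__main__":
    print(emit_agent_rule(PRINCIPLE))
    print(emit_ruff_select([PRINCIPLE]))
```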
The key invariant: the engine that writes the markdown rules is the exact same engine that evaluates CEL expressions at the git-hook level. They cannot drift. If the hook denies an action, the agent gets back a structured skill_id hint instead of a generic exit code — so the feedback loop closes inside the agent's own context rather than landing on a human's screen.
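A sketch of what that closed loop can look like, with heavy caveats: the real hooks are compiled Go binaries evaluating CEL, and the payload fields below (`decision`, `skill_id`, `message`) are my assumption of the shape, not the project's actual contract. A plain Python predicate stands in for the CEL check here.

```python
import json

# Stand-in for a single guard rule. In the real system the check is a CEL
# expression evaluated by a compiled Go hook; this predicate only mimics it.
RULE = {
    "skill_id": "error-handling/explicit-exceptions",
    "message": "Bare `except:` blocks are denied on critical paths.",
    "check": lambda diff_text: "except:" not in diff_text,
}

def run_hook(diff_text: str) -> int:
    """Return 0 on allow; on deny, print a structured hint the agent can parse."""
    if RULE["check"](diff_text):
        return 0
    print(json.dumps({
        "decision": "deny",
        "skill_id": RULE["skill_id"],   # the agent can look this skill up at runtime
        "message": RULE["message"],
    }))
    return 1

if __name__ == "__main__":
    staged_diff = "try:\n    risky()\nexcept:\n    pass\n"   # toy staged change
    raise SystemExit(run_hook(staged_diff))
```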
Heavily opinionated, currently slanted toward Python and Go, in active development. Posted on r/GeminiCLI with worked examples; read the original thread if you want the implementation walk-through, and feature requests are welcome on the repo.
*Standalone:

Standard Graph Neural Networks are structurally constrained when mapping complex text attribution: linear aggregation in flat Euclidean space inevitably forces semantic drift. To map high-dimensional knowledge faithfully you have to transition to curved semantic manifolds, where the geometry itself carries the relational structure.
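One standard way to formalize that contrast, in my notation rather than necessarily the paper's: a flat message-passing layer combines neighbours by an arithmetic mean in R^d, while a geodesic aggregator replaces that mean with a Fréchet mean under the manifold's own distance, so the curvature carries the relational work.

```latex
% Flat Euclidean aggregation in a standard GNN layer (arithmetic mean of neighbours):
\[
  h_v' = \sigma\Big( W \, \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} h_u \Big)
\]
% Geodesic aggregation on a curved manifold (M, d_M): the Frechet mean of the
% same neighbourhood, so the geometry itself determines how points combine.
\[
  h_v' = \operatorname*{arg\,min}_{\mu \in \mathcal{M}} \sum_{u \in \mathcal{N}(v)} d_{\mathcal{M}}(\mu, h_u)^2
\]
```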
Across thirty years of building scientific analysis pipelines — genetics, satellite imagery, multi-continent high-resiliency financial applications — the through-line has been the same: representations must remain mathematically faithful to their underlying geometry, or they stop being interpretable the moment the data leaves your dev set.
I've recently open-sourced a framework that discovers emergent knowledge-graph relations in high-order semantic vector spaces through manifold learning and spectral analysis. Initial proofs, a small teaser, and the Python codebase live at paudley/nonlinear-semantic-graphs; the working paper that motivates the design is in Publications.
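The repo has the actual implementation; as a generic illustration of the manifold-learning-plus-spectral-analysis recipe (scikit-learn and SciPy, with toy data and parameters I chose, not the framework's API):

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

# Toy stand-in for text embeddings; in practice these come from an encoder.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # 200 items, 64-dim embeddings

# 1. Manifold step: approximate the data manifold with a k-NN graph, the usual
#    discrete proxy for local (geodesic) neighbourhoods.
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
A = 0.5 * (A + A.T)                       # symmetrize the adjacency

# 2. Spectral step: low eigenvectors of the normalized graph Laplacian give
#    coordinates in which latent relational structure becomes separable.
L = laplacian(A, normed=True)
eigvals, eigvecs = eigsh(L.tocsc(), k=4, which="SM")
spectral_coords = eigvecs[:, 1:]          # drop the trivial constant eigenvector

# 3. Candidate relations: pairs close in spectral coordinates but distant in the
#    raw embedding space are the emergent edges worth inspecting.
print(spectral_coords.shape)              # (200, 3)
```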
I'm looking to connect with researchers and applied scientists specialising in Topological Data Analysis, Geometric Deep Learning, and Knowledge Representation — especially anyone working on geodesic aggregation or spectral graph theory — to push these ideas into robust enterprise deployments. Comment on the LinkedIn original or drop me a line directly.