Honestly, the biggest difference is how they treat the web. Flutter brings its own entire rendering engine to the browser to draw pixels on a canvas, while Coi is designed to use the browser's native HTML and CSS.
If you’ve used Vue or React, Coi will feel much more familiar. It’s a component-based model where your logic, styles, and markup live together, but you get strict static typing and native-speed performance via WASM.
The goal is a true "write once, run everywhere" workflow. On the web, it stays light by using the browser's engine. For mobile and desktop, the plan is to map that same HTML/CSS styling and component logic to a native C++/Skia backend. You get a native app's performance without losing the web's layout flexibility.
Technically, it’s not just about the download pipe. A 2MB+ WASM blob or JS bundle imposes a heavy CPU parse/compile tax. On mid-tier mobile devices, this results in main-thread jank and a 'Time to Interactive' (TTI) that kills user retention before the first frame even renders.
Beyond that, the 'Canvas-only' approach is computationally expensive. You are effectively re-implementing a rendering engine (layout, hit-testing, paint) on top of a rendering engine. While the browser's native DOM/CSS engine is a highly optimized, hardware-accelerated C++ powerhouse, a Canvas-only app has to manually calculate every pixel and handle every event in the WASM/JS loop. This drains battery faster and runs hotter than letting the browser do what it was built for.
Thanks for the feedback! You're absolutely right to question this.
Just to clarify: my benchmark was using Canvas2D, not WebGL, which is why the numbers are much lower than your WebGL2 example. Based on your comment I actually removed the command batching to test the difference, and yeah, the batching optimization is smaller than I initially thought. WebCC with batched commands hits ~100 FPS, without batching it's ~86 FPS, and Emscripten is ~40 FPS. So the batching itself only contributes about 14 FPS.
The bigger performance difference compared to Emscripten seems to come from how Canvas2D operations are handled. Emscripten uses their val class for JS interop which wraps each canvas call in their abstraction layer. WebCC writes raw commands (opcode + arguments) directly into a buffer that the JS side decodes with a tight switch statement. The JS decoder already has direct references to canvas objects and can call methods immediately without property lookups or wrapper overhead. With 10k draw calls per frame, these small per-call differences (property access, type boxing/unboxing, generic dispatch) compound significantly.
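To make the buffered-command idea concrete, here is a hypothetical sketch (not WebCC's actual wire format, and with the JS decoder simulated in C++ for illustration): the encoder appends an opcode plus raw arguments into a flat byte buffer, and the decoder walks it with a single tight switch, with no per-call wrapper objects or property lookups.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical opcodes; WebCC's real set and encoding will differ.
enum Op : uint8_t { OP_FILL_RECT = 1, OP_SET_ALPHA = 2 };

struct CmdBuf {
    std::vector<uint8_t> bytes;
    void op(Op o) { bytes.push_back(static_cast<uint8_t>(o)); }
    void f32(float v) {
        uint8_t raw[4];
        std::memcpy(raw, &v, 4);
        bytes.insert(bytes.end(), raw, raw + 4);
    }
};

// Encoder side: one opcode byte, then the raw arguments, nothing else.
void fill_rect(CmdBuf& b, float x, float y, float w, float h) {
    b.op(OP_FILL_RECT); b.f32(x); b.f32(y); b.f32(w); b.f32(h);
}
void set_alpha(CmdBuf& b, float a) { b.op(OP_SET_ALPHA); b.f32(a); }

// Decoder side: a dense switch the compiler can turn into a jump table.
// Returns the number of decoded calls so the round trip is checkable.
int decode(const CmdBuf& b) {
    size_t i = 0;
    int calls = 0;
    auto rd = [&]() { float v; std::memcpy(&v, &b.bytes[i], 4); i += 4; return v; };
    while (i < b.bytes.size()) {
        switch (b.bytes[i++]) {
            case OP_FILL_RECT: rd(); rd(); rd(); rd(); ++calls; break;  // ctx.fillRect(...)
            case OP_SET_ALPHA: rd(); ++calls; break;                    // ctx.globalAlpha = ...
            default: return calls;  // unknown opcode: stop decoding
        }
    }
    return calls;
}
```

The point of the flat buffer is that with 10k draw calls per frame, each call costs one byte write plus its arguments, and the decoder touches the canvas objects it already holds directly.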
> Emscripten uses their val class for JS interop which wraps each canvas call in their abstraction layer.
This is a C++ embind thing, right? At least the WebGL2 shim doesn't use that (and IMHO embind should never be used when performance matters), but that might actually explain a lot of the difference.
Hard disagree. Canvas 2D is fully GPU-accelerated in modern browsers and can easily handle thousands of draw calls at 60 fps, more than enough for most practical applications. For data visualization, interactive tools, drawing apps, and UI rendering, it's a robust and performant choice. WebGL is often overkill unless you're dealing with extreme datasets or 3D scenes. With its simpler API and faster startup, Canvas 2D is perfectly suited for the vast majority of 2D use cases. Labeling it as 'horribly inefficient' is simply wrong ._.
I suspect Svelte’s heavy lift with compile-time reactivity came largely from trying to infer "reactive intent" from standard JavaScript code. Since JS is so dynamic, the compiler had to rely on heuristics or strict assignment rules.
Coi avoids this ambiguity because the compiler can definitively trace usage patterns. Since mut variables are explicitly declared, the compiler essentially just looks at where they are used in the view {} block to establish dependencies at compile time. This static analysis is precise and doesn't require the compiler to "guess" intent, effectively preserving the benefits of compile-time reactivity without the fragility found in dynamic languages.
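As a hypothetical illustration (not actual Coi compiler output), this is the kind of code static dependency tracing allows the compiler to emit: if `count` is declared mutable and used in exactly one text node of the view, the generated setter patches only that node, with no runtime diffing and no dependency graph to maintain.

```cpp
#include <string>

// Stand-in for a real DOM text node.
struct TextNode { std::string text; };

static TextNode count_label;  // the one node that reads `count`
static int count = 0;

// "Generated" setter: update the state, then touch only the node
// the compiler proved depends on it.
void set_count(int v) {
    count = v;
    count_label.text = std::to_string(v);
}
```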
Possible in theory, but a Vue→Coi transpiler would be complex given the different reactivity models and syntax.
Most practical approach: AI-assisted conversion. Feed an LLM the Coi docs + your Vue code and let it transform components. For migrating existing codebases, that's likely the most efficient path.
Fixed! The issue was specific to FreeBSD's clang setup. When compiling with --target=wasm32 and -nostdlib, clang on Linux/macOS still finds minimal freestanding headers for the wasm32 target, but FreeBSD's clang doesn't have these configured by default - even though stdint.h exists, it's not in the search path for cross-compilation targets.
The fix was adding freestanding stdint.h and stddef.h to webcc's compat layer using compiler built-ins (__SIZE_TYPE__, etc.). This makes webcc work consistently across all platforms without relying on platform-specific clang configurations.
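For context, a freestanding stdint.h/stddef.h can be built purely from compiler built-in macros; clang and GCC predefine these for every target, including --target=wasm32, so no platform headers are needed. A sketch of the approach (webcc's actual compat headers may differ in detail):

```cpp
// Freestanding integer typedefs from compiler built-ins. Because these
// macros expand to the compiler's own canonical types, the typedefs are
// identical to (and compatible with) the hosted <stdint.h> ones.
typedef __INT8_TYPE__    int8_t;
typedef __UINT8_TYPE__   uint8_t;
typedef __INT16_TYPE__   int16_t;
typedef __UINT16_TYPE__  uint16_t;
typedef __INT32_TYPE__   int32_t;
typedef __UINT32_TYPE__  uint32_t;
typedef __INT64_TYPE__   int64_t;
typedef __UINT64_TYPE__  uint64_t;
typedef __SIZE_TYPE__    size_t;     // belongs in stddef.h
typedef __PTRDIFF_TYPE__ ptrdiff_t;  // belongs in stddef.h
```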
I hope it works now for you - hit me up if there are still problems!
Reactive DOM updates – When you change state, the compiler tracks dependencies and generates efficient update code. In WebCC C++, you manually manage every DOM operation and call flush().
JSX-like view syntax – Embedding HTML with expressions, conditionals (<if>), and loops (<for>) requires parser support. Doing this with C++ macros would be unmaintainable.
Scoped CSS – The compiler rewrites selectors and injects scope attributes automatically. In WebCC, you write all styling imperatively in C++.
Component lifecycle – init{}, mount{}, tick{}, view{} blocks integrate with the reactive system. WebCC requires manual event loop setup and state management.
Efficient array rendering – Array loops track elements by key, so adding/removing/reordering items only updates the affected DOM nodes. The compiler generates the diffing and patching logic automatically.
Fine-grained reactivity – The compiler analyzes which DOM nodes depend on which state variables, generating minimal update code that only touches affected elements.
From a DX perspective: Coi lets you write <button onclick={increment}>{count}</button> with automatic reactivity. WebCC is a low-level toolkit – Coi is a high-level language that compiles to it, handling the reactive updates and DOM boilerplate automatically.
These features require a new language because they need compiler-level integration – reactive tracking, CSS scoping, JSX-like templates, and efficient array updates can't be retrofitted into C++ without creating an unmaintainable mess of macros and preprocessors. A component-based declarative language is fundamentally better suited for building UIs than imperative C++.
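The keyed array rendering mentioned above can be sketched roughly like this (illustrative only, not the code the Coi compiler actually generates): nodes are matched by key, so reordering or inserting items reuses existing DOM nodes instead of rebuilding the whole list.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Summary of a keyed reconciliation pass: how many nodes must be
// mounted, unmounted, or can be kept as-is.
struct Patch { int created = 0, removed = 0, reused = 0; };

Patch reconcile(const std::vector<std::string>& old_keys,
                const std::vector<std::string>& new_keys) {
    std::unordered_map<std::string, bool> seen;
    for (const auto& k : old_keys) seen[k] = false;

    Patch p;
    for (const auto& k : new_keys) {
        auto it = seen.find(k);
        if (it != seen.end()) { it->second = true; ++p.reused; }  // keep DOM node
        else ++p.created;                                         // mount new node
    }
    for (const auto& [k, used] : seen)
        if (!used) ++p.removed;                                   // unmount stale node
    return p;
}
```

With keys, changing `["a","b","c"]` to `["c","a","d"]` reuses two nodes, mounts one, and unmounts one, instead of tearing down and recreating all three.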
You're right about the market positioning - WebCC isn't trying to be Emscripten. It's for when you want to build for the web, not just on the web. I'm actually using it myself to port my game engine, currently in the process of ripping out Emscripten entirely.
On the opcode parsing - the switch/case approach is intentionally simple and surprisingly fast. Modern compilers turn dense switch statements into jump tables, so it's essentially O(1) dispatch.
Your flush timing concern is understandable, but the architecture actually handles this cleanly. Buffered commands accumulate, and anything that returns a value auto-flushes first to guarantee correct ordering. For game loops, the natural pattern is batch everything during your frame logic, single flush at the end. You don't lose control, the auto-flush on sync calls ensures execution order is always maintained.
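A minimal sketch of that auto-flush rule (assumed semantics for illustration, not WebCC's actual API): fire-and-forget calls are buffered, and any call that must return a value flushes the buffer first, so the observed execution order always matches the call order.

```cpp
#include <string>
#include <vector>

struct Host {
    std::vector<std::string> pending;   // buffered commands
    std::vector<std::string> executed;  // what "the JS side" actually ran

    void flush() {
        for (auto& c : pending) executed.push_back(c);
        pending.clear();
    }
    // Async call: buffer only, nothing crosses the boundary yet.
    void draw(const std::string& cmd) { pending.push_back(cmd); }
    // Sync call: flush buffered work first, then execute and return.
    int measure_text(const std::string& s) {
        flush();
        executed.push_back("measure:" + s);
        return static_cast<int>(s.size()) * 8;  // fake width
    }
};
```

Two buffered draws followed by a `measure_text` thus execute as draw, draw, measure, never out of order, which is the ordering guarantee described above.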
I started with WebCC to get the best possible performance and small binaries, which works well for things like games. However, writing UI code that way is very tedious. I built Coi to make the development process more enjoyable (better DX) while keeping the efficiency. To me, the gain in performance and the cleaner syntax felt like a good reason to try a new language approach :)