Glad to see the conversation about hand tracking in the browser over here.
This demo was done as part of a series of creative experiments on using real-time hand tracking in the browser for creative interactions. I'll be posting more experiments soon.
Tech background: I am using MediaPipe to control the hand rig in threejs. MediaPipe provides landmarks that are used to control a threejs Skeleton (hierarchy of bones with rotations).
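The landmark → bone mapping can be sketched roughly like this. This is my own minimal reconstruction of the idea, not the demo's actual code: the function names and the assumption that a bone's rest pose points along +Y are mine.

```javascript
// MediaPipe gives 21 hand landmarks per frame; each three.js Bone can be
// rotated so it points from its parent landmark toward its child landmark.
// Pure math only here - no three.js or MediaPipe plumbing.

// Unit direction from the parent landmark to the child landmark.
function boneDirection(parent, child) {
  const dx = child.x - parent.x, dy = child.y - parent.y, dz = child.z - parent.z;
  const len = Math.hypot(dx, dy, dz) || 1;
  return { x: dx / len, y: dy / len, z: dz / len };
}

// Shortest-arc quaternion rotating the assumed rest axis (0,1,0) onto `dir`.
// (Degenerate for exactly antiparallel vectors; a real rig would handle that.)
function restToDirQuaternion(dir) {
  const rest = { x: 0, y: 1, z: 0 };
  const dot = rest.x * dir.x + rest.y * dir.y + rest.z * dir.z;
  // Cross product rest x dir gives the rotation axis.
  const ax = rest.y * dir.z - rest.z * dir.y;
  const ay = rest.z * dir.x - rest.x * dir.z;
  const az = rest.x * dir.y - rest.y * dir.x;
  const w = 1 + dot;
  const n = Math.hypot(ax, ay, az, w) || 1;
  return { x: ax / n, y: ay / n, z: az / n, w: w / n };
}
```

In a real setup you'd run this per frame over the landmark results and assign the result to each `bone.quaternion`, letting the Skeleton hierarchy do the rest.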
Fwiw, some things I've found fun:
- Clip-on fish-eye lens, intended for a phone but fitting on a laptop, to expand the webcam field of view.
- Additional cameras: on sticks above the screen tips for high-res stereo positioning over the kbd; asymmetric high-off-to-side to trade some resolution for some field of view (meh); high-overhead for whole-workspace tracking.
- Binocular periscope with a webcam splitter and screen-tip mirrors (blech - low-res, awkward, fiddly).
- Look-down mirror on the webcam, partial or full, to get a kbd view (nice in VR).
- Look-down with a curved mirror along the top of the keyboard to get an "out along the kbd surface" view and crufty touch detection for kbd-as-touch-surface (cute but fiddly - only makes sense to save a camera or two; caveat: I had high-contrast white hands on a black thinkpad kbd).
- Tracking markers on fingers (flats, a-frames, or cubes on velcro rings) make for less jittery tracking, but are awkward (meh).
- Markers taped around the keyboard help with calibration.
Magic wand. I found I could more-or-less manage to type while holding a chopstick. So I stuck a marker cube on one end, and an arc sliced off a small Xmas ball on the tip, so it slides smoothly across (thinkpad) keys. Barber-pole rotation marker. Anvil'ed tip pressure sensor, a finger microswitch, and very thin and soft ribbon cable to an arduino. But I didn't actually get the pressure sensor working before punting on all this. The chopstick was narrow enough to avoid breaking hand tracking.
Some gotchas: 2K camera resolution was painful for tracking. (Several years ago) mediapipe finger tracking was annoyingly noisy for doing stereo. You only get one usb2 camera per usb port, even if it's usb3 (maybe usb3 cameras allow working around that limit nowadays?). If you do hand, arm, face, and marker tracking on several cameras, even with native gpu mediapipe, you're burning a lot of gpu just on the human interface device, before your likely-graphical-itself app even starts. If I had it to do over now, I'd punt mirrors, use 4K usb3 cameras, and, at least on desktop, more cameras. Nicely merging high-latency camera tracking with lower-latency keyboard, touchpad, and graphics tablets requires changes to the input event pipeline, and adapting apps to deal with "oh my! That space key was pressed several keys ago - it was pressed with a pointer finger at position 3! - so we roll back app state and then ...".
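The late-camera-data merge above can be sketched as a small buffer: fast events (keyboard, touchpad) wait until a camera sample covering their timestamp arrives, then get re-emitted with the hand pose attached. The class and field names here are made up for illustration; a real pipeline would also need the app-state rollback the comment mentions.

```javascript
// Hedged sketch: fuse a low-latency input stream with a high-latency
// camera stream by timestamp. Events are held until the camera has
// "caught up", so the app can e.g. learn which finger pressed space.
class LateFusionBuffer {
  constructor(emit) {
    this.pending = []; // input events awaiting camera context, in ts order
    this.emit = emit;  // downstream app callback
  }
  onInputEvent(ev) {       // low-latency path (keyboard, touchpad, tablet)
    this.pending.push(ev);
  }
  onCameraSample(sample) { // high-latency path (hand pose at sample.ts)
    // Flush every buffered event the camera sample now covers,
    // annotated with the pose seen at that moment.
    while (this.pending.length && this.pending[0].ts <= sample.ts) {
      const ev = this.pending.shift();
      this.emit({ ...ev, pose: sample.pose });
    }
  }
}
```

Anything still pending when the next frame arrives simply waits, which is exactly the "that key was pressed several keys ago" situation: the app only finds out which finger did it one camera frame later.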
Here we are a half-century later, still banging on glorified xerox altos. We're so broken.
Correction: barber pole for optical rotation went with color ends, not marker cube.
Greenfield, stylus-wise, I'd... (1) Punt "keyboard as graphics tablet" as a bodge. Except for Mac-ish non-tiny touchpad with stylusified 2: (2) Simple chopstick with color ends and barber pole. Caveat that color blobs with 2K cameras and ambient light are noisy low-res (and slow) for small gestures. Workday ergonomics says resting hand with small motions, rather than movie arm waving. Barber pole pushes towards full-not-pen-short chopstick. And I've not seen a nice simple story for finger pressure/buttons. Sensor fusion with hand pose might be interesting? Squishy HID? (3) Graphics tablets already give clean fast high-res 2D, sometimes several cm above surface, often pressure, sometimes tilt, even rotation. Might add high-latency optical height. Gestures with distinguishable 2D projection could skip burden of high-time-res fusion, for a fast dev path to UI software rather than HID struggles. Caveat mediapipe hand pose struggles with thick black stylus, black background, and white hands (at least years ago - maybe someone is now training with styluses? Maybe someone's stylus, if recolored, is thin enough?). For the other hand, can fuse pose with tablet multitouch. Fwiw.
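The "color ends" tracking in (2) boils down to finding the centroid of pixels near a target color in each frame. A minimal sketch, assuming RGBA pixel data such as canvas `getImageData` returns; the tolerance and plain-RGB distance are illustrative, and as noted above, real ambient light makes this noisy without something like HSV thresholds and per-session calibration:

```javascript
// Find the centroid of pixels within `tol` (Euclidean RGB distance) of
// `target` in a width-wide RGBA buffer. Returns null if nothing matches.
function colorBlobCentroid(data, width, target, tol = 40) {
  let sx = 0, sy = 0, n = 0;
  for (let i = 0; i < data.length; i += 4) {
    const dr = data[i] - target.r;
    const dg = data[i + 1] - target.g;
    const db = data[i + 2] - target.b;
    if (dr * dr + dg * dg + db * db <= tol * tol) {
      const px = (i / 4) % width;            // pixel x from flat index
      const py = Math.floor(i / 4 / width);  // pixel y from flat index
      sx += px; sy += py; n++;
    }
  }
  return n ? { x: sx / n, y: sy / n, count: n } : null;
}
```

Two such blobs (one per chopstick end) give position plus a coarse 3D orientation; the barber pole would then supply the rotation the blobs can't.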
Np, tnx for the demo. Sigh, sorry, not really, nor easily accessible.
I do that poorly, repeatedly. A mindset of "today's rev n is bad, still unusable; tomorrow's incremental rev n+1 will be slightly better; no point in recording bad, wait for better; will demo at the meetup for friends, but otherwise, who'd care?"... left a sparse trail. Sort of: you might take a picture of your nice finished cake, but of the baking? There have been HN posts about commitment-hacking as a service, eg, iirc, a Japanese workspace with sign-in like "I'm here to write one chapter, and I'd like person-standing-behind-me level pressure". So perhaps motivate documenting this week's state, as a service? Since finding/creating a community that's interested in such things often seems difficult.
Hmm, here's a snapshot[1] of my late-rev laptop hardware with flop-up kbd cam and (stowed fold-up) stereo cams (wires not connected). The gaff tape, sticks, velcro, and cardboard esthetic allows fast and incremental iteration. For wires, I like magnetic usb connectors[2]. Fwiw.
This is amazing! I had a browser plugin called Flutter years back that was able to do webcam gesture recognition for scroll and forward/back. This uses threejs, so I wonder how much is CPU vs GPU, and also how well this could, now or in the future, run under the hood in the background of a web game (or WebXR!) just as the input device, without too much overhead. Great proof of concept!!
Thanks for the nice words!
Your plugin sounds like fun.
In terms of using hand tracking for web games: my next experiments will use this setup to interact with 3D scenes.
Looking forward to where you take this. Just to clarify, I was only an end user of Flutter, but I was checking for a link (I had it installed years back) and it turns out they got acquired by Google[1], so it could be somewhere in Lens for all I know!
One use case I immediately thought of for hand-movement tracking like this is helping my disabled brother - tetraplegic - steer efficiently. Using a mouse is sometimes too hard for him, though only in some cases. If one could use this as a macro launcher, or as a more accurate joystick without attaching a real joystick, it could help a lot.
I wish I had found out about MediaPipe before the tail end of grad school, but my collaborator integrated some neat stuff into our project. Very cool; thanks for sharing!
- A designer that doesn't know, or doesn't want, to write css. She can keep her design process and use Strapfork to create the css components.
- A designer that likes the workflow Strapfork proposes (the UI is part of the brand -> design components | design the UI with mockups; mix both directly in HTML and have the final design in the browser, no need for photoshop).
- A backender that just wants to customise bootstrap easily and quickly.
Jetstrap is much more of a layout/markup creator, with a drag-and-drop interface. Strapfork is a customiser for the Bootstrap components, meaning you can add styles to them. It generates css and documentation, but you still have to write your markup.
Ouch, that's bad news... I used the default setup for tinyletter. Didn't expect that to happen. Anyway, you can send me an email at jackjackbach at gmail if you are still interested.
I'm sorry about the misunderstanding. About the spam... I'm the creator of the app and don't know the person who submitted. I'm not running an affiliate program either.
Feel free to ask, I will answer any questions!