
I agree with you. It’s all about trading memory for computation. If you have any idea, feel free to share it. Thanks for the comments


Take a look at how mosh transfers deltas of terminal viewports over the wire using what it calls “SSP”. That protocol might have some advantages here, especially since you can access the state of the pre-rasterization drawn objects, not just the pixels, on the screen.

Once you do that, you may obviate the need for any transcoding or conversion to MJPEG since you can just redraw the objects on the canvas.
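To illustrate the delta idea in pixel space (a minimal sketch, not mosh's actual SSP, which diffs richer state objects): instead of resending the whole framebuffer, you can transmit only the byte ranges that changed between consecutive frames and patch them in on the receiving side. The function names here are hypothetical.

```python
def frame_delta(prev: bytes, curr: bytes):
    """Yield (offset, changed_bytes) runs where curr differs from prev."""
    assert len(prev) == len(curr)
    i, n = 0, len(curr)
    while i < n:
        if prev[i] != curr[i]:
            start = i
            while i < n and prev[i] != curr[i]:
                i += 1
            yield (start, curr[start:i])
        else:
            i += 1

def apply_delta(prev: bytes, delta):
    """Rebuild the current frame from the previous one plus the delta."""
    buf = bytearray(prev)
    for off, chunk in delta:
        buf[off:off + len(chunk)] = chunk
    return bytes(buf)
```

For a mostly-static e-ink screen, most frames would produce a tiny delta, which is exactly why this style of protocol pays off.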

Also, the RM2 seems to have a built-in Screen Share feature. It might be worth describing the differences (besides not needing their cloud subscription service).


I will try to answer both points: in the first article, I described how I fetch the picture by reading the virtual framebuffer. I have no knowledge of what is being drawn; all I have from the beginning is a 2.5 MB byte array.
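Reading the framebuffer can be as simple as the sketch below. The device path and the display geometry are assumptions for illustration (1404x1872 at 8 bits per pixel works out to roughly the 2.5 MB mentioned above), not necessarily the author's exact setup.

```python
# Assumed display geometry and depth; 1404 * 1872 bytes ~= 2.5 MB
WIDTH, HEIGHT = 1404, 1872
FRAME_SIZE = WIDTH * HEIGHT  # one byte per pixel (grayscale)

def read_frame(path="/dev/fb0"):
    """Grab one raw frame from the framebuffer device as a byte array."""
    with open(path, "rb") as fb:
        return fb.read(FRAME_SIZE)
```

At this level there is no notion of strokes or objects, only pixels, which is why a vector-replay approach like the native client's is not directly available.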

I don’t use any JPEG compression anymore in this version.

And my understanding is that the native client transmits the vector representation, and the receiving client redraws it with the same algorithm. That is only doable if you know which algorithm they use. I did a small test to decode their format, but it may change more often than the format of the picture.

Does that answer your question? (Thanks for the conversation.)


Without knowing how often your RLE hits the max length of 16 (but assuming it does so often), a further optimization could be to use one bit as a flag to signal that the following block is either a small sequence of 1-8 pixels, or a large sequence of a multiple of 8 pixels (i.e. 1 = 8x1, 2 = 8x2, 3 = 8x3).

This lets you compress up to 64 pixels into no more than two blocks.
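A sketch of that scheme, under the assumption that each token carries a color bit, the flag bit, and a 3-bit length field (flag 0 means a run of 1-8 pixels, flag 1 means a run of 8-64 pixels in multiples of 8); the exact token layout is my guess, not the original encoder's:

```python
def encode_runs(pixels):
    """Encode same-color runs as (color, flag, 3-bit length) tokens.
    flag=0 -> run of length+1 pixels (1-8)
    flag=1 -> run of (length+1)*8 pixels (8-64)"""
    tokens = []
    i, n = 0, len(pixels)
    while i < n:
        color = pixels[i]
        run = 1
        while i + run < n and pixels[i + run] == color and run < 64:
            run += 1
        big, small = divmod(run, 8)
        if big:
            tokens.append((color, 1, big - 1))    # multiple-of-8 part
        if small:
            tokens.append((color, 0, small - 1))  # remainder, 1-7 pixels
        i += run
    return tokens

def decode_runs(tokens):
    out = []
    for color, flag, length in tokens:
        count = (length + 1) * 8 if flag else length + 1
        out.extend([color] * count)
    return out
```

A uniform 64-pixel run then fits in a single token, and any run up to 64 pixels needs at most two (one large token plus one small remainder).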


Sounds like a good idea



