Hacker News | RustyRussell's comments

Adam is not Satoshi.

In the early days of Blockstream, I remember him and Greg Maxwell spitballing ideas about Bitcoin, and he was clearly intellectually feeling out the constructions as novel concepts.

I have spent my fair time with geeks, myself included, and this "shiny new thing" geek excitement is distinctive. And Adam is a typical nerd for whom guile does not come easy, if at all.

I realize this is not a transferable proof, but I stand by it, for what that's worth.


This is RustyRussell of Linux kernel fame. This is the kind of evidence you want for a claim like this: first hand, not just what we read in the NYT article.

As with many comments here: use a build-time assertion that the system is little-endian, and ignore it. Untested code is broken code.

I was at IBM when we gave up on big endian for Power. Too much new code assumed LE, and we switched, despite the insane engineering effort (though TBH, that effort had the side effect of retaining some absolutely first-class engineers a few more years).


In the small, it's still a meritocracy. A patch like this is obviously correct, and I expect it to get in on the first try (maybe with a formatting fix by the maintainer).

For large works, the burden shifts, since you are increasing the maintenance load. Now we have the question of who will do the future work, and that requires judgement of the importance of the work and/or the author, and hence is a fundamentally political question.


I share this frustration, but a downvote is probably a better reaction than a reply here.


I've been on here for months and still haven't gotten the ability to downvote. I'm not sure why it's put on such a pedestal.


Agreed. Less karma than you, but I've been on the site for 12 years and still can't downvote...


You're writing this from a dystopian future where punctuation is banned? :)

For those like me who still require parsing assistance:

- We are Bob

- Red Rising

- Murderbot


Hehe, when I wrote it I separated them with returns (\n), but Hacker News strips single newlines. It's not very WYSIWYG.


On the other hand, I once implemented something to be told later it was novel and probably the optimal solution in the space.

An AI might be more likely to find it...


I'm thinking early April?


A good friend of mine worked on decimal floating point for IBM Power chips (I think it was Power 7 that had hardware support).

Anyway, he insisted on calling it just "Decimal Floating". Because there was "no point".


Hilarious and apt.

Either you want fixed point for your minimum unit of accounting or you want floating point because you’re doing math with big / small numbers and you can tolerate a certain amount of truncation. I have no idea what the application for floating point with a weird base is. Unacceptable for accounting, and physicists are smart enough to work in base 2.


I'm pretty confident that DFP is used for financial computation, both because it has been pushed heavily by IBM (who are certainly very involved in the financial industry) and because many papers describing DFP use financial applications as their motivating example. For example, this paper: https://speleotrove.com/decimal/IEEE-cowlishaw-arith16.pdf

> This extensive use of decimal data suggested that it would be worthwhile to study how the data are used and how decimal arithmetic should be defined. These investigations showed that the nature of commercial computation has changed so that decimal floating-point arithmetic is now an advantage for many applications.

> It also became apparent that the increasing use of decimal floating-point, both in programming languages and in application libraries, brought into question any assumption that decimal arithmetic is an insignificant part of commercial workloads.

> Simple changes to existing benchmarks (which used incorrect binary approximations for financial computations) indicated that many applications, such as a typical Internet-based ‘warehouse’ application, may be spending 50% or more of their processing time in decimal arithmetic. Further, a new benchmark, designed to model an extreme case (a telephone company’s daily billing application), shows that the decimal processing overhead could reach over 90%


Wow. OK, I believe you. Still don’t see the advantages over using the same number of bits for fixed point math, but this definitely sounds like something IBM would do.

Edit: Back of the envelope, you could measure 10^26 dollars with picodollar resolution using 128 bits


Decimal128 has exact rounding of decimal rules and preserves trailing zeros.

I don’t think Decimal64 has the same features, but it has been a while.

But unless you hit the limit of 34 decimal digits of significand, Decimal128 will work for anything you would use fixed point for, and much faster if you have hardware support, as on the IBM CPUs or some of the SPARC CPUs from Japan.

OLAP aggregate functions are one example of an application.


> I don’t think Decimal64 has the same features, but it has been a while.

Decimal32, Decimal64, and Decimal128 all follow the same rules, they just have different values for the exponent range and number of significant figures.

Actually, this is true for all of the IEEE 754 formats: the specification is parameterized on base (though only 2 or 10 is possible), maximum exponent, and number of significant figures, although there are a number of issues that only exist for IEEE 754 decimal floating-point numbers, like the exponent quantum or the BID/DPD encodings.


You are correct; the problem is that Decimal64 has 16 digits of significand, while items like apportioned per-call taxes need to be calculated to six digits past the decimal point before rounding, which requires about 20 digits.

Other calculations, like interest rates, take even more, and COBOL requires 32 digits.

As the decimal128 format supports 34 decimal digits of significand and has exact decimal rounding, it can meet that standard.

While it is more complex, requiring ~15-20% more silicon in the ALU plus a larger data size, it is more efficient for business applications than arbitrary-precision libraries like BigNum.

This looks like a digestible cite:

https://speleotrove.com/decimal/decifaq1.html


I want signal to act as a transport bus. In particular, I want to give certain contacts permission to ask my phone for its location, so I can give my wife that ability without sharing it with Google.

Signal has solved the identity part, now encourage others to build apps on it.

(2FA via Signal would be better than SMS, too, though I know this may be controversial!)


> Signal has solved the identity part, now encourage others to build apps on it.

Doesn't the fact that nobody has built apps on it indicate the license (AGPL 3) is a real issue for its ecosystem?


I'm not seeing how you could draw that conclusion. The more likely explanation is that they are telling people not to build apps around it (and I assume the APIs thus aren't designed for adoption by other apps).

> This repository is used by the Signal client apps (Android, iOS, and Desktop) as well as server-side. Use outside of Signal is unsupported.

https://github.com/signalapp/libsignal


Isn't the lack of an SDK the problem, not the LICENSE?


Did anyone else find the use of ABNF annoying?

  unicode-assignable =
   %x9 / %xA / %xD /               ; useful controls
   %x20-7E /                       ; exclude C1 controls and DEL
   %xA0-D7FF /                     ; exclude surrogates
   %xE000-FDCF /                   ; exclude FDD0 nonchars
   %xFDF0-FFFD /                   ; exclude FFFE and FFFF nonchars
   %x10000-1FFFD / %x20000-2FFFD / ; (repeat per plane)
   %x30000-3FFFD / %x40000-4FFFD /
   %x50000-5FFFD / %x60000-6FFFD /
   %x70000-7FFFD / %x80000-8FFFD /
   %x90000-9FFFD / %xA0000-AFFFD /
   %xB0000-BFFFD / %xC0000-CFFFD /
   %xD0000-DFFFD / %xE0000-EFFFD /
   %xF0000-FFFFD / %x100000-10FFFD
I mean, just define ranges.

Also, where are the test vectors? Because when I implement this, that's the first thing I'll have to write, and you could save me a lot of work here. Bonus points if they're in JSON and UTF-8 already, though invalid UTF-8 in an RFC might really gum things up: hex-encode, maybe?


The tests for the Go code at https://github.com/timbray/RFC9839 are in effect test vectors.


I want to implement this. My code is in C.

How does this help me check my implementation? I guess I could ask ChatGPT to convert your tests to my code, but that seems the long way around.


https://github.com/timbray/RFC9839/blob/main/unichars.go

I don't know Go at all, but I can pretty quickly understand:

    var unicodeAssignables = []runePair{
     {0x20, 0x7E},       // ASCII
     {0xA, 0xA},         // newline
     {0xA0, 0xD7FF},     // most of the BMP
     {0xE000, 0xFDCF},   // BMP after surrogates
     {0xFDF0, 0xFFFD},   // BMP after noncharacters block
     {0x9, 0x9},         // Tab
     {0xD, 0xD},         // CR
     {0x10000, 0x1FFFD}, // astral planes from here down
     {0x20000, 0x2FFFD},
     {0x30000, 0x3FFFD},
     {0x40000, 0x4FFFD},
     {0x50000, 0x5FFFD},
     {0x60000, 0x6FFFD},
     {0x70000, 0x7FFFD},
     {0x80000, 0x8FFFD},
     {0x90000, 0x9FFFD},
     {0xA0000, 0xAFFFD},
     {0xB0000, 0xBFFFD},
     {0xC0000, 0xCFFFD},
     {0xD0000, 0xDFFFD},
     {0xE0000, 0xEFFFD},
     {0xF0000, 0xFFFFD},
     {0x100000, 0x10FFFD},
    }

