
Regarding growth, there are different career phases. For a new entrant to the industry (new grad) or to the company, just executing on any project is growth. How do you do things around here? What technology do you use?

Years later, the gains and performance plateau. These things are now mostly understood, and problem solving in the familiar domain yields less growth.

This is when a hot/cold approach is IMO a good one. Let people explore different problems and different technology. Try things out at work, crazy skunkworks ideas.


I have wondered this and occasionally seen some related news.

Transistors can do more than switch on and off; there is also the linear region of operation, where the gate voltage allows a proportional current to flow.

So you would be constructing an analog computer. Perhaps in operation it would resemble a meat computer (a brain) a little more, as the activation potential of a neuron is an analog signal from another neuron. (I think? Because a weak activation might trigger half of a neuron’s outputs, and a strong activation might trigger all of them.)

I don’t think we know how to construct such a computer, or how it would perform a given computation. The weights in the neural net would become something like capacitance at the gates of the transistors. Computation is, I suppose, just inference, or thinking?

Maybe with the help of LLM tools we will be able to design such things. So far as I know there is nothing like an analog FPGA where you program the weights instead of what you do with an FPGA… making or breaking connections and telling the LUTs their identities.
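
To make the idea concrete, here is a toy simulation (my own sketch, nothing like a real device) of the analog multiply-accumulate: weights as conductances, so the matrix-vector product falls out of Ohm’s and Kirchhoff’s laws.

  import numpy as np

  rng = np.random.default_rng(0)

  G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances = weights (siemens)
  V = rng.uniform(-1.0, 1.0, size=4)      # input voltages on the rows

  # Each cell passes I = G * V (Ohm's law), and the currents on a shared
  # column wire simply add (Kirchhoff's current law), so the column
  # currents are the matrix-vector product, computed "by physics":
  I = V @ G
  print(I)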


You don't think we know how to construct an analog computer? We have decades of experience designing analog computers to run fire control systems for large guns.

https://maritime.org/doc/firecontrol/index.php


We also have a pretty decent amount of experience with (pulsed/spiking) artificial neural networks in analog hardware, e.g. [1]. Very energy efficient, but as yet hard to scale.

[1] https://www.kip.uni-heidelberg.de/Veroeffentlichungen/detail...


That’s a very cool abstract, thanks. I suppose it’s the plasticity that poses a pretty serious challenge.

Anyway, if this kind of computer were so great, maybe we should just encourage people to employ the human reproductive system to make more.

There’s a certain irony to critics of current AI. Yes, these systems lack certain capabilities that humans possess, it’s true! Maybe we should make sure we keep it that way?


I had an incident where an older couple were stopped at a green light, angled downhill, in a snowstorm, with the parking brake on instead of the foot brake, in a borrowed vehicle.

When they asked me for insurance I just dragged it out and made friendly conversation (eventually giving them the insurance slip). They got increasingly irate and panicked. Maybe because it was only a glancing blow and wouldn’t exceed even a slim deductible.

Anyway, I should probably get a dash cam…


This story doesn't make sense: it's not clear who hit who, whether you were scamming them by not giving insurance, or how a dash cam would help when no real damage was done.


The older couple hit him, with their motionless, parking-braked vehicle. Possibly by using dark matter/energy to cause space-time expansion that pushed his car into theirs.


Snow storm detail indicates the road may have been slippery. They might have locked their wheels on an icy hill and skated into the other car. It happens a lot where I live.


I don't really understand what you're trying to say here.

Did you try to avoid giving them your insurance details? Why?

What would the dash cam have shown?


I would find it weird too if someone stalled.

And you always need to be able to stop your car independently of what the other driver does. You know, minimum braking distance?
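
For a sense of scale, the standard stopping-distance arithmetic is d = v² / (2·μ·g); the friction numbers below are illustrative, and a downhill grade only makes it worse:

  g = 9.81      # m/s^2
  v = 50 / 3.6  # 50 km/h in m/s

  # d = v^2 / (2 * mu * g), flat road, locked-wheel friction only
  for surface, mu in [("dry asphalt", 0.7), ("packed snow", 0.2), ("ice", 0.1)]:
      d = v**2 / (2 * mu * g)
      print(f"{surface}: {d:.0f} m to stop from 50 km/h")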


If you rear-end someone, it's pretty much always your fault. People are allowed to be stopped in the road for any number of reasons, and it's your responsibility as a car operator to be aware of your surroundings and able to stop your own car without hitting anything.


The JVM I think I can understand, but do you happen to know more about LISP machines, and whether they used an ISA specifically optimized for the language, or whether compilers for x86 end up just doing the same thing?

In general I think the practical result is that x86 is like democracy. It’s not always efficient but there are other factors that make it the best choice.


They used an ISA specifically optimized for the language. At the time it was not known how to make compilers for Lisp that did an adequate job on normal hardware.

The vast majority of computers in the world are not x86.


Wait. It was pretty well known how to make compilers for Lisp, and they were not bad. There were some small parts of some Lisps (the numeric tower, overflow to bignums, rationals) which were problematic (and still are today, if you do not have custom HW). But those pieces were and are not that important for general-purpose use. The era of the Lisp ISA was not so long, after all.


The stock-hardware compilers for Lisp that were available in 01979 when Knight designed the CADR, like MACLISP, were pretty poor on anything but numerical code. When Gabriel's book https://archive.org/details/PerformanceAndEvaluationOfLispSy... came out in 01985, the year after he founded Lucid to fix that problem, InterLisp on the PDP-10 was 8× slower on Tak (2") than his handcoded assembly PDP-10 reference version (¼") (pp. 83, 86, 88, "On 2060 in INTERLISP (bc)"), while MacLisp on SAIL (another PDP-10, a KL-10) was only 2× slower (.564"), and the Symbolics 3600 he benchmarked it on was slightly faster (.43") than MacLisp but still 50% slower than the PDP-10 assembly code. No Lucid Common Lisp benchmarks were included.

Unfortunately, most of Gabriel's Lisp benchmarks don't have hand-tuned assembly versions to compare them to.

Generational garbage collection was first published (by Lieberman and Hewitt) in 01983, but wouldn't become widely used for several more years. This was a crucial breakthrough that enabled garbage collection to become performance-competitive with explicit malloc/free allocation, sometimes even faster. Arena-based or region-based allocation was always faster, and was sometimes used (it was a crucial part of GCC from the beginning in the form of "obstacks"), but Lisp doesn't really have a reasonable way to use custom allocators for part of a program. So I would claim that, until generational garbage collection, it was impossible for stock-hardware Lisp compilers to be performance-competitive on many tasks.
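
For concreteness, here is a minimal bump-pointer arena in Python (my own illustration of the region idea, not GCC's actual obstack API): allocation is a pointer bump, and freeing the whole region is resetting one index.

  class Arena:
      def __init__(self, size):
          self.buf = bytearray(size)
          self.top = 0

      def alloc(self, nbytes):
          if self.top + nbytes > len(self.buf):
              raise MemoryError("arena exhausted")
          off = self.top
          self.top += nbytes               # the entire cost of allocation
          return memoryview(self.buf)[off:off + nbytes]

      def release(self):
          self.top = 0                     # frees everything at once, O(1)

  arena = Arena(1 << 16)
  a = arena.alloc(64)
  b = arena.alloc(128)
  arena.release()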

Tak, however, doesn't cons, so that wasn't the slowness Gabriel observed in it.
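
For reference, Tak transcribed to Python (my transcription; Gabriel's benchmark is of course in Lisp). It is nothing but function calls and small-integer arithmetic, which is why allocation speed is irrelevant to it:

  def tak(x, y, z):
      if y >= x:          # (not (< y x))
          return z
      return tak(tak(x - 1, y, z),
                 tak(y - 1, z, x),
                 tak(z - 1, x, y))

  print(tak(18, 12, 6))   # the canonical arguments; prints 7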

So I will make two slightly independent assertions here:

1. Stock-hardware Lisp compilers available in the late 01970s, when LispMs were built, were, in absolute terms, pretty poorly performing. The above evidence doesn't prove this, but I think it's at least substantial evidence for it.

2. Whether my assertion #1 above is actually true or not, certainly it was widely believed at the time, even by the hardest core of the Lisp community; and this provided much of the impetus for building Lisp machines.

Current Lisp compilers like SBCL and Chez Scheme are enormous improvements on what was available at the time, and they are generally quite competitive with C, without any custom hardware. Specializing JIT compilers (whether Franz-style trace compilers like LuaJIT or not) could plausibly offer still better performance, but neither SBCL nor Chez uses that approach. SBCL does open-code fixnum arithmetic, and I think Chez does too, but they have to precede those operations with bailout checks unless declarations entitle them to be unsafe. Stalin does better still by using whole-program type inference.
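
To illustrate what "open-coded fixnum arithmetic with bailout checks" means, a toy sketch in Python (my own illustration, not SBCL's or Chez's actual code generation). Assume a one-bit tag: low bit 0 means fixnum with the value in the upper bits; low bit 1 means a pointer to a boxed object such as a bignum.

  WORD = 1 << 64                 # 64-bit machine words, wrapping arithmetic

  def fixnum(n):                 # tag a small integer: value << 1, low bit 0
      return (n << 1) % WORD

  def is_fixnum(w):
      return (w & 1) == 0

  def signed(w):                 # read a word as a signed 64-bit value
      return w - WORD if w >= WORD // 2 else w

  def generic_add(a, b):         # slow path: full numeric tower (stubbed)
      raise NotImplementedError("dispatch on bignums, ratios, floats, ...")

  def lisp_add(a, b):
      if is_fixnum(a) and is_fixnum(b):     # the tag checks
          s = (a + b) % WORD                # one machine ADD
          # Signed overflow: same-sign inputs, differently-signed result.
          if (signed(a) >= 0) == (signed(b) >= 0) and \
             (signed(s) >= 0) != (signed(a) >= 0):
              return generic_add(a, b)      # bail out, allocate a bignum
          return s
      return generic_add(a, b)              # bail out: not both fixnums

  print(signed(lisp_add(fixnum(3), fixnum(4))) >> 1)  # -> 7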

Some links:

https://dl.acm.org/doi/pdf/10.1145/152739.152747 "'Infant Mortality' and Generational Garbage Collection", Baker (from 01993 I think)

https://dspace.mit.edu/handle/1721.1/5718 "CADR", AIM-528, by Knight, 01979-05-01

https://www.researchgate.net/publication/221213025_A_LISP_ma... "A LISP machine", supposedly 01980-04, ACM SIGIR Forum 15(2):137-138, doi 10.1145/647003.711869, The Papers of the Fifth Workshop on Computer Architecture for Non-Numeric Processing, Greenblatt, Knight, Holloway, and Moon, but it looks like what Knight uploaded to ResearchGate was actually a 14-page AI memo by Greenblatt

https://news.ycombinator.com/item?id=27715043 previous discussion of a slide deck entitled "Architecture of Lisp Machines", the slides being of little interest themselves but the discussion including gumby, Mark Watson, et al.


When RISC processors became available, it was better (for the same reasons RISC started to grow in general) to just compile to assembly.


Fantastic project. Do you envision this living on FPGAs forever, or getting into silicon directly? Maybe as an extension of RISC-V?


Oh boy, I definitely considered that — turning PyXL into a RISC-V extension was one of my early ideas.

It could probably be adapted into one.

But I ultimately decided to build it as its own clean design because I wanted the flexibility to rethink the entire execution model for Python — not just adapt an existing register-based architecture.

The FPGA is for prototyping, although this could probably be used as a soft core. But looking forward, an ASIC is definitely the way to go.


100% agree that ‘chat bots’ will not be a revolutionary technology, but other uses of the underlying technology will be. General robotics, pharmaceuticals, new materials… and eventually first-line medicine and law, sure, but I sure don’t want doctors to vibe-diagnose me, or lawmakers to vibe-legislate.


I enjoyed your thoughts about non-linear success, and I think there is a sense in which the principle holds. But in both life and Balatro, experientially, most win conditions are reached linearly, and the really crazy combos just let you explore endless mode for a couple more rounds. To your point, being the owner of a modest business, or just being a good employee, makes you ‘win’. But if you want to hit a 10x or 100x win, you need some ridiculous non-linear scaling.


Yeah, agreed. I think there are plenty of ways to live a successful and fulfilling 'linear' life.


Agreed, our whole computing paradigm needs to shift at a fundamental level in order to let AI be 'magic', not just token prediction. Chatbots will provide some linear improvements, but ultimately I very much agree with you and the article that we're trapped in an old mode of thinking.

You might be interested in this series: https://www.youtube.com/@liber-indigo

In the same way that Microsoft and the 'IBM clones' brought us the current computing paradigm built on the desktop metaphor, I believe there will have to be a new OS built on a new metaphor. It's just a question of when those perfect conditions arise for lightning to strike on the founders who can make it happen. And just like Xerox and IBM, the actual core ideas might come from the tech giants (FAANG et al.) but they may not end up being the ones to successfully transition to the new modality.


I was thinking about this very thing recently, because I like to be able to tell my computer to do exactly what I want. Little annoying things, usually in Microsoft products. Maybe the next 20 years will bring more improvement in software than the past 20 did. Hardware has gotten faster, software more complex... but at the root of it, technology exists for us to exercise our will over reality. If we could accomplish the same thing without technology, that would obviously be better. I guess I'm trying to say the interface matters.


Check out this GEICO ad: https://www.berkshirehathaway.com/

Buffett is a rare gem

