Hacker News: nodespace's comments

I support a few web novel authors on Patreon, and the website on mobile is so bad that I read the new chapters in my email client.

In fact, I am extremely hesitant about starting any new subscriptions because of how bad it is. (And I do not look forward to reading all the catch-up chapters on that interface, not to mention trying to find where I left off in the publicly posted content without getting spoiled by chapter titles.)

It takes ages to load, content is not paginated (it's all streamed in order), and there is just a ton of lag trying to scroll through and find things.

If you click away and try to come back, welp, good luck finding where you left off, better hope you had that chapter number memorized.

Did I mention the lag? It takes like 10 seconds to load a text only section.

There is search, but again, the lag is awful.


Seconded. The experience of reading book chapters through Patreon is utterly abysmal. I don't know why Royal Road hasn't added direct integration or at least password protection like pirateaba does yet.


This is subtly different, though. $1 gets split between all the upvotes a person made. So instead of a post's payout being directly proportional to the upvotes received, each upvote's value depends on how often the upvoter upvotes: votes from people who upvote rarely are worth more.

In theory I think this would encourage higher quality posts to attract those who upvote rarely.


I may be wrong, but I believe this is how the subreddit already works. They set a total amount of crypto to be released, and it goes in proportion to the total number of upvotes you receive in comparison to others in a given time window. I think the approaches are largely identical with the exception of the crypto vs direct fiat.


The difference is that the non.io model rewards upvotes more when they come from people who have a high bar for upvoting and upvote rarely.


To be clear, this is an entirely different model with a different incentive structure from the other one under discussion. It will not have the same issues. You don't want majority appeal; rather, you want niche appeal. The more a user's upvotes go primarily to your post and nothing else, the more valuable those upvotes become.
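The split model being described can be sketched in a few lines. This is a hypothetical simulation, assuming each user pays a fixed fee that is divided evenly among the upvotes that user cast:

```python
# Sketch of the per-upvoter split model (assumption: each user's
# fixed fee is divided evenly among the upvotes that user cast).
from collections import defaultdict

def payouts(fee, upvotes_by_user):
    """upvotes_by_user maps user -> list of posts they upvoted."""
    earnings = defaultdict(float)
    for user, posts in upvotes_by_user.items():
        if not posts:
            continue  # users who never upvote pay out nothing
        share = fee / len(posts)  # rare upvoters cast valuable votes
        for post in posts:
            earnings[post] += share
    return dict(earnings)

# "picky" sends their whole $1 to one post; "liberal" dilutes theirs.
result = payouts(1.00, {
    "picky":   ["niche_post"],
    "liberal": ["niche_post", "meme1", "meme2", "meme3"],
})
print(result)  # niche_post gets $1.25, each meme gets $0.25
```

The incentive falls out directly: a post upvoted only by picky users earns more per upvote than one swept up in a frequent upvoter's habit.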


Are there any implementations of this? I got started working on one in Rust, but got kinda stuck in a few places. This could be very useful for RTS AI, I think, or anything where you need to optimize managing resources and build orders, if I understand negative-weight shortest paths correctly.
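For reference, the standard textbook algorithm for single-source shortest paths with negative edge weights is Bellman-Ford. A minimal Python sketch (not the Rust attempt mentioned above, just the classic algorithm) looks like:

```python
# Bellman-Ford: single-source shortest paths that tolerates negative
# edge weights (unlike Dijkstra), as long as no negative cycle is
# reachable from the source. Edges are (u, v, weight) triples.
def bellman_ford(n, edges, source):
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):            # relaxing n-1 times suffices
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:             # one extra pass finds neg. cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

# Edge 2 -> 3 has weight -5 (e.g. an action that refunds resources).
edges = [(0, 1, 4), (0, 2, 5), (2, 3, -5), (1, 3, 2)]
print(bellman_ford(4, edges, 0))  # [0, 4, 5, 0]
```

In a build-order setting, negative weights could model actions that net you resources, which is exactly the case that breaks greedy algorithms like Dijkstra.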


Does anyone know what is meant by this:

> Other design tools included the infamous 'Egyptian' algorithm that generated massive amounts of linear-based traveling linework and was used extensively for the bracket animations when a combatant was de-rezzified.

Some googling and ChatGPT searching hasn't yielded any info. I'm a huge fan of the Tron aesthetic, and it inspired me to design an entire game based on it. Such an algorithm sounds like it would be very cool to implement in a game.


I think it's because, very often, wisdom requires rewriting a fundamental assumption about the world. And people tend to tie up their identity with their fundamental assumptions.

As a result, these changes can destabilize a person's identity, which causes said person to look for the closest source of stability, often the 'prophet'.

This gives the 'prophet' enormous power over the person, not just because of the identity destabilization, but because when you change someone's fundamental assumptions about the world, those changes don't necessarily have to be truthful or helpful for the person. The person only has to think they are.

This identity change and dependence on stability is probably what causes the cult-like appearance/behaviour.


Wow, very well said. Thank you for the insight!


Is there any way to get it to respond the same way when something is outside the golden path? So for example, if you gave it the backwards-sentence task, it would respond with "I don't know how to do this", or really any way of programmatically detecting that it failed, without needing to know what the task itself was.


Knowing whether or not it’s giving you a sensible response is one of the things that are hard for GPT-3, unfortunately. It has no concept of failing.


On the contrary, doing a two-stage generation where the second stage simply judges whether a generation is correct can help a lot. It works even better if you give it several generations and let it choose whichever is the most truthful. I wrote a basic example of this here that uses my own confabulation-suppressing prompt in the first stage, but simpler variations of this exist in the published literature: https://twitter.com/goodside/status/1559586486705602562?s=21...

The hallucination-suppressing prompt it implicitly uses the output of is here: https://twitter.com/goodside/status/1556459121834168320?s=21...
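The control flow of the two-stage approach is model-agnostic and can be sketched without any particular API. Here `ask_model` is a hypothetical stub standing in for whatever completion call you use, hard-coded so the flow is runnable:

```python
# Two-stage generation: stage 1 samples candidate answers, stage 2
# asks the model to judge which candidate (if any) is truthful.
# `ask_model` is a hypothetical stub, NOT a real API; swap in your
# own completion call.
def ask_model(prompt):
    if "Which of these answers" in prompt:
        return "2"  # pretend the judge picked candidate 2
    return "some candidate answer"

def two_stage(question, n_candidates=3):
    candidates = [ask_model(question) for _ in range(n_candidates)]
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    verdict = ask_model(
        f"Question: {question}\n{numbered}\n"
        "Which of these answers is most truthful? "
        "Reply with its number, or 'none' if all are wrong."
    )
    if verdict.strip().lower() == "none":
        return None  # an explicit failure beats a confabulated answer
    return candidates[int(verdict) - 1]

print(two_stage("What is the capital of France?"))
```

The key design point is that `None` gives the caller a programmatic failure signal, which is exactly what the question upthread was asking for.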


Yes. You can, with effort, condition it to respond sensibly with phrases like “I’m sorry, I don’t know how to reverse strings,” or “I’m sorry, I can’t do any math calculation that a human couldn’t do in their head.” But in doing so you damage its ability to do some tasks it’s actually capable of, e.g. reciting a memorized answer to “What is the fourth root of 625?” Its memorization abilities are insane: It seems to know, for example, the exact MD5 hashes of all single-character alphanumeric strings. Much of the arithmetic it knows is probably similarly memorized, and it’s hard to clarify for it what aspects of that memory are safe to use.
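That memorization claim is easy to spot-check locally, since the reference values the model would have had to memorize are one `hashlib` call away:

```python
# Computing the MD5 hashes of single-character strings; these are
# the reference values the model would have had to memorize.
import hashlib

for ch in "a1Z":
    print(ch, hashlib.md5(ch.encode()).hexdigest())
# md5("a") is the well-known 0cc175b9c0f1b6a831c399e269772661
```

(Whether the model actually reproduces them is a separate experiment; this only shows what "memorized" would mean here.)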

The initial problem that got me interested in GPT-3 is suppressing confabulated answers to the Hofstadter-Bender questions published in The Economist. I eventually found an apparent solution, but I have yet to carefully validate it: https://twitter.com/goodside/status/1556459121834168320?s=21...


I think it's like saying:

"2 times (but instead of multiplying, do the opposite thing that makes it smaller)"

is where the "logic" comes from.

Hm, I bet this parses as well:

"An hour is .5 times more than a half hour."

Although not quite as well.


I think intelligence is more than just biology. It's a technology. In my experience, we can significantly improve our ability to learn just by acquiring different mental techniques. There's no getting around the fact that some people's brains are already pre-wired, or had experiences to cause this, and thus have a head start (all these geniuses like von Neumann, etc.), but that doesn't mean you can't start treading the same path.

Everyone's brain might be different, but that just means you might need different mental techniques than someone else. For example, I'm terrible at memorizing things; my working memory is rather small compared to the average person's, I've come to find[1]. To adapt, I focused on finding things that would let me compress memory, focusing on the 'core' of things, so to speak, and ignoring everything else. For example, in math and physics, I ignore pretty much all the equations and instead try to find the similarities that tie everything together, and use that to derive/figure out the rest as needed. This way I can get away with knowing much less. (I've also been lucky to have teachers that let us reference the equations on tests.) Surprisingly, I found this acts as a great foundation for learning new things, letting them slot together much more readily. (I also got into the habit of using a notebook as both a reference and a workspace for ideas, letting me refer back to things that dropped out of my working memory quickly.)

The thing is though, if I didn't pressure myself to develop that habit in the first place, I would never have improved.

I think a huge part of mental ability is simply having the motivation to go through the process. (Admittedly, a large part of motivation is biological/just how you are wired.)

[1] This is a bit subjective; the important part, I think, was the pressure it caused me to put on myself to improve.


Don't sell yourself short on biology, since you're clearly referring to your genetic makeup. Gene expression is way more complex and a much bigger factor than most people give it credit for. The environment impacts gene expression quite a bit, and I believe it's a big part of learning new things, as experience provokes or suppresses gene expression. More to come.


Huh, I am too young to have messed with TI-84 programming, but what you described sounds extremely similar to my experience programming in YOLOL, the in-game/in-world programming language in the game Starbase.

In this language you are limited to 120 characters per line and 20 lines per "chip" (the devices that store and execute code in-game), with a .2-second execution time for a single line. Only very basic operations are available, like goto, if, and math operations.

You basically end up code golfing to squeeze as much as possible into those 120 characters, to minimize latency and use fewer chips (which have in-world costs). It's been a fun experience so far.

I wonder if the developers were inspired by the similar TI-84 programming limits.

