The way it works in France is that money goes to a company that collects it on behalf of all copyright holders. Its website does not offer any documentation as to how copyright holders can claim their share.
That sounds pretty shady. There's also the problem that most media generated globally is not French. Do they pretend to distribute the spoils globally?
In reality the system in these countries is pure corruption. The beneficiaries are large corporations who see it as an extra revenue stream and that's it.
Not completely. I know some French musicians who are great artists, but not mainstream enough to sell many records - and they do get state money to continue their art (progressive/psychedelic music, nothing tame).
It operates sort of like a guild. For music, there's the SACEM, where songwriters, musicians, etc. register themselves (hey I have this thing), and get help (e.g. SACEM invests in young aspiring music professionals) and royalties based on how their music was used and by whom. All music users pay SACEM for the use, and SACEM distributes the proceeds to the copyright holders.
I think lots of commenters are being unintentionally pedantic. It’s clear that there are different types of abstractions one is concerned with when programming at the application level. Yes, it’s all abstractions on top of subatomic probability fields, but no one is thinking at even the atomic level when they step through the machine code execution with a debugger.
The one abstraction you would have to keep in mind with assembler (more when writing than reading, though) is the cache hierarchy. The days of equal cost to read/write any memory location are ancient. Even in the old 8-bit days some memory was faster to access than others (e.g. 6502 zero page).
The flags are another abstraction that might not mean what it says. The 6502 N flag and BPL/BMI instructions really just test bit 7 and aren't concerned with whether the value is really negative/positive.
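To make the point concrete, here is a small sketch (in Python, not 6502 assembly) of the flag behavior described above; the function name `n_flag` is mine, but the semantics match the 6502: N is literally a copy of bit 7 of the result, with no knowledge of whether the programmer meant the byte to be signed.

```python
# The 6502 N flag is just bit 7 of the last result.
# BMI branches when N is set, BPL when it is clear.
def n_flag(value):
    """Return True if bit 7 of an 8-bit value is set (the 6502 N flag)."""
    return (value & 0x80) != 0

# 0x80 could mean -128 (signed) or 128 (unsigned) - the CPU doesn't care:
# BMI is taken either way, because bit 7 is set.
assert n_flag(0x80) is True
assert n_flag(0x7F) is False   # 127: bit 7 clear, BPL taken
```

So "negative" in the mnemonic is purely a convention layered on top of a single-bit test.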
“… the two decades after World War II in the United States, a time of economic redistribution and reversal of upward social mobility.”
Does anyone have a summary about the “reversal of upward mobility” bit? I’m pretty sure I’ve never heard that anywhere else and I don’t think I have the mental model to understand it intuitively without an explanation.
I think it’s just poorly worded; it’s about the upward mobility of the _generation born after the Second World War_. This generation lacked opportunities and this created unrest in the 60s and 70s.
The previous generation – the one that reached adulthood before and during WW2 – had upward mobility in the 40s and 50s, and wanted its children to have it too.
This is the correct understanding. Go back to the selfie of the monkey. Is the monkey the creator of the photo? Does he own the copyright? No. The photographer who created the opportunity for the monkey to take the selfie is the holder of the copyright on that image.
Similarly, the operator of the LLM is the holder of the copyright of the LLM’s output.
> This is the correct understanding. Go back to the selfie of the monkey. Is the monkey the creator of the photo? Does he own the copyright? No. The photographer who created the opportunity for the monkey to take the selfie is the holder of the copyright on that image.
This is incorrect. The monkey is unable to have a copyright on the photograph, but there was no court case suggesting the owner of the camera (Slater) has a copyright on the photo, and the Copyright Office's rules actually say the opposite, that it isn't copyrightable at all (the Wikipedia summary of the situation is good, pointing out the Copyright Office specifically added an example of "a photograph taken by a monkey" to their guidance to make their point clear).
The professional photographer claimed he engineered the situation that led to the photo and thus he owns the copyright on the images. That specific claim appears to not have been addressed by the court nor by the copyright office. Instead Slater settled by committing to donations from future revenue of the photos.
If it were a trained monkey, and the photographer held a button in his hand that triggered the photo taking mechanism, there'd be no question of copyrightability. Similarly, vibe-coding and eliciting output from a software tool which results in software or images or text created under the specification and direction and intent and deliberate action of the user of the tool is clearly able to be copyrighted.
The user is responsible for the output of the software. An image created in photoshop isn't the IP of Adobe, nor is text in Word somehow belonging to Microsoft. The idea that because the software tool is AI its output is magically immune from copyright is silly, and any regulation or legislation or agency that comes to that conclusion is silly and shouldn't be taken seriously.
Until they get over the silliness, just lie. You carefully manually crafted each and every character, each pixel, each raw byte by hand, slaving away with a tiny electrode, flipping each bit in memory, to elicit the result you see. Any resemblance to AI creations is purely coincidental, or deliberate as an ironic statement about current affairs.
Copyright is positive law created by humans, not natural law that we happen to recognize. The idea that adopted legislation or established caselaw can be wrong about what copyright fundamentally is makes no sense.
Not what I'm saying - if you meet the technical, intentional definition of a process, substantiated by precedent, then the law should support any variation of the process which has those same technical features meeting the definition.
Using AI as a tool to produce output, no matter how complex the underlying tool, should result in the authorship of the output being assigned to the user of the tool.
If autocorrect in Word doesn't nullify copyright, neither should the use of LLMs; manifesting an idea into code and text and images using prompts might have little human input, but the input is still there. And if it's a serious project, into which many hours of revision, back and forth, testing, changing, etc. have gone, there should be absolutely no bar to copyright.
I can entertain a dismissal based on specific low effort uses of a tool - something like "generate a 13 chapter novel 240 pages long" and seeing what you get, then attempting to publish the book. But almost anything that involves any additional effort, even specifying the type of novel, or doing multiple drafts, or generating one chapter at a time, would be sufficient human involvement to justify copyright, in my eyes.
There's no good reason to gatekeep copyright like that. It doesn't benefit society, or individuals, it can only benefit those with vast IP hoards and giant corporations, and it's probably fair to say we've all had about enough of that.
CISC only survived because CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode. RISC CPUs can avoid this completely, but it turns out backwards compatibility was important to the market and the transistor cost of "instruction decode" just adds like +1 pipeline depth or something.
> CISC only survived because CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode.
For Intel CPUs, this was somewhat true starting from the Pentium Pro (1995). The Pentium M (2004) introduced a technique called "micro-op fusion" that would bind multiple micro-ops together so you'd get combined micro-ops for things like "add a value from memory to a register". From that point onward, the Intel micro-ops got less and less RISCy until by Sandy Bridge (2011) they pretty much stopped resembling a RISC instruction set altogether. Other x86 implementations like K7/K8/K10 and Zen never had micro-ops that resembled RISC instructions.
> CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode.
In absolute terms, this is true. But in relative terms, you're talking less than 1% of the die area on a modern, heavily cached, heavily speculative, heavily predictive CPU.
I hadn't heard that, but certainly, there must have been many times when Intel held the crown of "biggest working hunk of silicon area devoted to RAM."
> It will just take on the appropriate functionality to keep all the compute in the same chip.
So, an iGPU/APU? Those exist already. Regardless, the most GPU-like CPU architecture in common use today is probably SPARC, with its 8-way SMT. Add per-thread vector SIMD compute to something like that, and you end up with something that has broadly similar performance constraints to an iGPU.
Get used to it. The modern day solution for everything right now is to throw AI at it.
Hmmm... I need to measure this piece of wood for cutting. Let me take a picture of it and see what the AI says its measurement is, instead of using a measuring tape, because it is faster to use the AI.
(At least 90% of the time.. the other 10% it will be slightly off, and your items will come out crooked. But don't worry, there is a tiny gray disclaimer about AI making mistakes and that you need to double-check it, so it's not AI's fault)
Begin reimplementing a subleq/muxleq VM with GPU primitive commands:
https://github.com/howerj/muxleq (it has both muxleq - multiplexed subleq, which is the same but with mux'ed instructions, making it much faster - and plain subleq). As you can see, the implementation is trivial. Once it's compiled, you can run eforth, although
I run a tweaked one with floats and some better commands. Edit muxleq.fth and set the float option to 1 in that file with this example:
1 constant opt.float
The same goes for the classic do..loop structure from Forth, which is not
enabled by default (only the weird for..next one from eForth is):
1 constant opt.control
and recompile:
./muxleq ./muxleq.dec < muxleq.fth > new.dec
run:
./muxleq new.dec
Once you have a new.dec image, you can just use that from now on.
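For anyone who hasn't seen subleq before, the core really is trivial: one instruction, three operands. Here is a minimal sketch of a subleq interpreter in Python - this is my own illustration, not the howerj/muxleq implementation (which adds the mux instruction, I/O, and an image format on top of this core).

```python
# Minimal subleq VM sketch. Each instruction is three cells (a, b, c):
#   mem[b] -= mem[a]
#   if mem[b] <= 0: jump to c, else fall through to the next instruction.
# A negative program counter halts the machine.
def subleq(mem, pc=0, max_steps=10_000):
    steps = 0
    while 0 <= pc and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# One instruction: subtract mem[3] (=10) from mem[4] (=7), then halt
# (result -3 <= 0, so it jumps to -1).
mem = subleq([3, 4, -1, 10, 7])
assert mem[4] == -3
```

Everything else (arithmetic, control flow, the whole eForth system) is compiled down to chains of this single instruction, which is why porting it to GPU primitive commands is an appealingly small target.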