A naive approach could be to create an outline, then have an LLM randomly sample a section, supply the surrounding context, rewrite that part, then repeat, ideally alongside human writing. Some sort of continuous revision cycle.
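That loop is simple enough to sketch. Here's a minimal toy version in Python, with a stub standing in for the LLM call (the `rewrite_section` function and its behavior are placeholders, not a real API):

```python
import random

def rewrite_section(section, context):
    # Stub standing in for an LLM rewrite call; swap in a real API here.
    return section.strip() + " (revised)"

def revision_cycle(sections, rounds=3, seed=0):
    """Randomly sample a section, pass its neighbours as context,
    and replace it in place with the rewritten version."""
    rng = random.Random(seed)
    sections = list(sections)
    for _ in range(rounds):
        i = rng.randrange(len(sections))
        context = sections[max(0, i - 1):i + 2]  # surrounding sections
        sections[i] = rewrite_section(sections[i], context)
    return sections

outline = ["Intro", "Method", "Results"]
print(revision_cycle(outline))
```

A human could interleave with this loop by editing `sections` between rounds.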
Yes, that's the study that shows zero effect. The authors completely messed up the write-up of the study. That study is the definition of bad science.
No, I did not misinterpret it. That study does not show what you think it does: the actual data does not lead to the conclusion presented. The data shows zero effect; the written conclusion is a work of fiction.
What's the best way to run this on my Macbook Pro?
I've tried LM Studio, but I'm not a fan of the interface compared to OpenAI's. The lack of automatic regeneration every time I edit my input, like on ChatGPT, is quite frustrating. I also gave Ollama a shot, but using the CLI is less convenient.
Ideally, I'd like something that lets me edit my settings quite granularly, similar to what I can do in OpenLM, but with the QoL of the hosted online platforms, particularly the ease of editing the prompts I use extensively.
Not sure why your comment was downvoted. ^ is absolutely the right answer.
Open WebUI is functionally identical to the ChatGPT interface. You can even point it at the OpenAI APIs to get your own pay-per-use GPT-4. I did this.
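For anyone wanting to try the same setup, something like the following should do it, assuming you have Docker and use the image from Open WebUI's docs (the API key below is a placeholder):

```shell
# Run Open WebUI pointed at the OpenAI API instead of a local model.
# OPENAI_API_KEY is a placeholder -- substitute your own key.
docker run -d -p 3000:8080 \
  -e OPENAI_API_KEY=sk-your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Then open http://localhost:3000 and it behaves like ChatGPT, billed per token against your own key.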
Hey, can you guys elaborate on how this works? I'm looking at the Ollama section in their docs and it talks about load balancing. I don't understand what that means in this context.
If you download the html of the page, you can put it into Calibre and use Calibre's convert feature to generate an epub. I have not tried putting the generated file on my e-reader but it looks fine on desktop.
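Calibre also ships a command-line converter, so if you'd rather skip the GUI, something like this should work (the filenames here are just examples):

```shell
# ebook-convert is installed alongside Calibre.
ebook-convert saved_page.html saved_page.epub
```

The GUI convert feature and the CLI use the same conversion engine, so the output should be identical either way.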