I agree with the sentiment of this post. In my personal experience, the usefulness of an LLM correlates positively with your ability to constrain the problem it should solve.
Prompts like 'Update this regex to match this new pattern' generally give better results than 'Fix this routing error in my server'.
Although this pattern seems to hold empirically, I've never seen any hard data confirming it. This post is interesting, but it feels like a missed opportunity to back the idea up with some numbers.