Just this week I wanted Claude Code to plan changes in a subdirectory of a very large repo. I told it to ignore outside directories and focus on this dir.
It then asked for permission to run tree on the parent dir. Me: No. Ignore the parent dir. Just use this dir.
So it then launches parallel discovery tasks which need individual permission approval to run - not too unusual. As I'm approving each one, I notice it sneaking in grep and ls on the parent dir amongst others. I keep denying it with "No" and it gets more creative with the tools and paths it uses to try to read from the parent dir.
I end up having to cancel the plan task and try again with even firmer instructions about not trying to read from the parent. That mostly worked - on the subsequent plan it only tried the once.
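Instead of relying on prompt instructions, one option is to hard-block the access in project settings. A hedged sketch of a .claude/settings.json deny list - the exact patterns here are illustrative and may need tweaking against the current docs, but the permissions.deny mechanism itself is how Claude Code lets you refuse tool calls without being asked each time:

```json
{
  "permissions": {
    "deny": [
      "Read(../**)",
      "Bash(tree ..*)",
      "Bash(ls ..*)",
      "Bash(grep * ..*)"
    ]
  }
}
```

Denied tool calls fail outright rather than prompting for approval, so the model gets immediate feedback that the parent dir is off-limits instead of interpreting each "No" as an invitation to try a different tool.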
Did you ask it why it insisted on reading from the parent directory? Maybe there is some resource or relative path referenced.
I'm not saying you should approve it or the request was justified (you did tell it to concentrate on a single directory). But sometimes understanding the motivation is helpful.
One of the major advantages of WireGuard over OpenVPN (for me) is that it's quite difficult for random port scans to detect it.
With OpenVPN, the server is hanging out there responding to anyone who asks nicely that yes, it's OpenVPN.
So anyone with a new exploit for OpenVPN just has to pull up Shodan and now they've got a nice list of targets that likely have access to more private networks.
WireGuard doesn't respond at all unless you've got the right keys.
Also, fwiw - we're approaching 11 years since it was announced, and 5 years since it was accepted into the Linux/BSD kernels.
The original site is down for me, so going by the app I was thinking it was about the actual edible honey product, not Honey the discount-coupon thing.
Only if the detection mechanism is looking at that single IP and from a single location.
Find the ASN(s) advertising that network and figure out their location.
Even within the ASN there may still be multiple hops, and those IPs may be owned by others (e.g. the hosting facility) who are not playing the same latency games.
A few months ago I had someone submit a security issue to us with a PoC that was broken but mostly complete and looked like it might actually be valid.
Rather than swap out the various encoded bits for ones that would be relevant for my local dev environment - I asked Claude to do it for me.
The first response was all "Oh, no, I can't do that".
I then said I was evaluating a PoC and I'm an admin - no problems, off it went.
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled as an AWS Shill - there's also other benefits.
If your organisation needs to deploy some kind of security/compliance tools to help with getting (say) SOC2 certification - then there's a bunch of tools out there to help with that. All you have to do then is plug them into your AWS organisation. They can run a whole bunch of automated policy checks to say you're complying with whatever audit requirements.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problems, create an AWS S3 bucket and hand them an AWS IAM Role and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account - you get more control/visibility into it. Again, hand over an AWS IAM Role and off you go.
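The IAM role handoff above boils down to a trust policy on a role in your account. A minimal sketch - the account ID and external ID are placeholders the vendor would supply, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "vendor-supplied-id" }
      }
    }
  ]
}
```

The sts:ExternalId condition is the standard guard against the confused-deputy problem: the vendor has to present the agreed ID when assuming the role, so another of their customers can't trick them into assuming yours. You then attach a permissions policy scoped to just the S3 bucket (or whatever resource) they need - no credentials ever change hands.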
It's the Slack of IaaS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all these integrations that make life easier.
For years I've been collecting a slowly growing pile of old drives from old devices as I had replaced them.
I finally decided to do something about them and got myself a USB to IDE/SATA adapter, then spent a week or so duplicating them so I could check if there was anything worthwhile to keep, before destroying them and sending them to an e-waste recycler.
Some of those drives had been sitting around since the early 2000s.
All except two of the mechanical drives just fired up and worked fine. One won't power on at all. One powers up, but reading from some sectors just results in failures.
But the SSDs are another story. All blank/empty for any that were more than a few years old. Even though I know some were pulled from working machines.