Hacker News | xrmagnum's comments

I built TunnelBuddy (tunnnelbuddy.net) for exactly this. I am in the same situation: citizen of one country, resident of another, with friends and family back where I am from. I get them to open TunnelBuddy (nobody needs to sign up) and share a one-off password (like TeamViewer), and I get to access the internet as if I were at their place.

Underneath, it uses WebRTC (the same tech as Google Meet). It is free to use; I just built it to fix this problem that I have... I am quite surprised expats get by using traditional VPNs whose IPs are known to online services...


That's a cool tool.

FYI: There's a typo in the URL you posted, an extra `n` :)


Hosting your own VPN is a fair bit of work. Plus, you need a residential IP, which you don't really get if you host on any cloud provider. I had the same issue and decided to build TunnelBuddy.net so that a friend can share their internet connection with me, or vice versa. It is entirely P2P and requires no sign-up, no credit card. You download the app, share a code, and that's it. It is like TeamViewer, but instead of sharing your screen you share your internet connection.


I ended up building something in this space recently (TunnelBuddy – https://www.tunnelbuddy.net I’m the author) that lets you use a friend’s machine as an exit node over WebRTC.

One of the design decisions I made was P2P or nothing: there’s a small signalling service, but no TURN/relay servers. If the peers can’t establish a direct connection, the tunnel just doesn’t come up.

The trade-off is fewer successful connections in weird NAT setups, but in return you know your traffic never transits a third-party relay – it goes straight from your client to your friend’s endpoint.
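A minimal sketch of that policy (candidate type names follow ICE terminology from RFC 8445; the function itself is hypothetical, not TunnelBuddy's actual code):

```python
# "P2P or nothing": accept only direct ICE candidate paths.
#   "host"  = an address on a local interface
#   "srflx" = server-reflexive, i.e. the public address discovered via STUN
#   "relay" = a TURN-relayed path -- exactly what this design refuses to use
def allow_candidate(candidate_type: str) -> bool:
    return candidate_type in ("host", "srflx")
```

If every surviving candidate pair fails connectivity checks, the tunnel simply never comes up, which is the trade-off described above.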


My traffic will transit third parties all the time, since it's going over the Internet. What's the problem with relays, if the traffic is end-to-end encrypted?


Fair point!

With a TURN/relay, you’re introducing a single, purpose-built box that:

- sees all the tunnel metadata for many users (IP pairs, timing, volume),

- is easy to log at or subpoena/compel,

- and becomes a natural central chokepoint if someone wants to block the system.

Without that relay, your traffic still crosses random ISPs/routers, but:

- those hops are *generic Internet infrastructure*, not “the TunnelBuddy relay”,

- there’s no extra entity whose whole job is to see everyone’s flows.


I’m definitely open to open-sourcing it, just not immediately. I’d like to stabilise it, run a beta, and fix the most obvious flaws first.


It's striking how much of the AI conversation focuses on new use cases, while overlooking one of the most serious non-financial costs: privacy.

I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.

Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist. That left me deeply concerned—not just about this moment, but about where things are headed.

The real question isn't just "what can AI do?"—it's "who is keeping the record of what it does?" And just as importantly: "who watches the watcher?" If the answer is "no one," then maybe we shouldn't have a watcher at all.


> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

I'm fairly sure "seemed" is the key word here. LLMs are excellent at making things up - they rarely say "I don't know" and instead generate the most probable guess. People also famously overestimate their own uniqueness. Most likely, you accidentally recreated a kind of Barnum effect for yourself.


  Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.
OpenAI was court-ordered to preserve chat logs.

https://www.malwarebytes.com/blog/news/2025/06/openai-forced...


That only means that OpenAI have to keep logs of all conversations, not that ChatGPT will retain memories of all conversations.


You could explain that to ChatGPT and it would agree. But then again, if you HAVE TO keep the logs ...


> I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.

> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

Maybe I'm missing something, but why wouldn't that be expected? The chat history isn't their only source of information - these models are trained on scraped public data. Unless there's zero information about you and your family on the public internet (in which case - bravo!), I would expect even a "fresh" LLM to have some information even without you giving it any.


I think you are underestimating how notable a person needs to be for their information to be baked into a model.


LLMs can learn from a single example.

https://www.fast.ai/posts/2023-09-04-learning-jumps/


That doesn’t mean they learn from every single example.


I find it problematic that this article recommends disabling DoH, which leaves users with unencrypted DNS — still centralized (e.g. to Google’s 8.8.8.8 or an ISP) and now vulnerable to man-in-the-middle attacks. Replacing one form of centralization with another while giving up encryption doesn’t improve privacy — it worsens it.

If the goal is to reduce centralization, a better approach would be to use encrypted DNS (DoH or DoT) with resolver rotation or randomization. That way, users retain privacy from local networks and ISPs without concentrating all DNS traffic in a single provider’s hands.
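As a rough sketch of what resolver rotation could look like (the endpoint list and helper are illustrative, not a recommendation of specific providers):

```python
import random

# Illustrative DoH endpoint pool; any RFC 8484-compliant resolvers work.
DOH_RESOLVERS = [
    "https://cloudflare-dns.com/dns-query",
    "https://dns.quad9.net/dns-query",
    "https://dns.google/dns-query",
]

def pick_resolver() -> str:
    """Choose a resolver at random per query, so no single provider
    accumulates the full DNS history."""
    return random.choice(DOH_RESOLVERS)

# An actual lookup would then POST a DNS wire-format message to
# pick_resolver() over HTTPS (RFC 8484); omitted to keep the sketch
# self-contained.
```

Each provider still sees a slice of your queries, so this spreads trust rather than eliminating it.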


If you're looking to implement encrypted DNS with multiple servers or providers, consider unbound, which supports TLS upstream resolvers and can also operate in full recursive mode. Alternatively, you might opt for AdGuard's dnsproxy or dnscrypt-proxy, both of which support DNS over HTTPS (DoH), DNS over TLS (DoT), and DNSCrypt. You can run any of these on your local network or machine and point your resolv.conf at them.
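For unbound specifically, a minimal DoT forwarding setup might look like this (the upstreams are examples, and the cert bundle path varies by distro):

```
server:
    # CA bundle path on Debian/Ubuntu; adjust for your distro.
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

forward-zone:
    name: "."
    forward-tls-upstream: yes
    # Format is address@port#auth_name; the name is verified
    # against the upstream's TLS certificate.
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
```

Drop the forward-zone block entirely and unbound instead resolves recursively from the root servers, which avoids centralizing on any resolver at the cost of unencrypted queries to authoritative servers.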


It is problematic; it's a post from 2018 that did not age well at all.


It wasn't correct even when it was originally posted.


I agree, but I remember the controversy at the time about browser vendors usurping DNS and want to avoid as much of that argument as I can.

(I have weirdly strong and specific ideas about DNS security.)


Disabling DoH in your browser’s settings should make it fall back to your system’s resolver.

You’ll only be vulnerable to a MitM attack if your system’s resolver is insecure and also vulnerable to a MitM attack.


(which they all are by default)


That's a pretty serious security issue, which affects every other process on your host.


No, plenty of OSs ship encrypted DNS resolvers by default.


Zero mainstream OSs ship encrypted DNS resolvers by default, unless you count ones that will automatically fall back to insecure DNS, which defeats the purpose since a network attacker can cause that.


DoT is explicitly mentioned as a better alternative.


DoT is strictly worse than DoH. It doesn't actually fix any of the author's issues with DoH, and it has the gigantic downside that it's trivial for hostile networks to block: DoT runs on its own dedicated port (853), whereas DoH blends in with ordinary HTTPS traffic on port 443.


Thanks for taking a look! Don't hesitate to ask should you have any questions or recommendations!


We recently had to move away from Amplify unfortunately.

Here are some examples of problems we faced:

* Building Issues

We consistently had release problems that ended up taking a lot of our time, e.g. a CLI update breaking our CI [1].

* Overall instability

We regularly had problems that we needed to work around ourselves, sometimes in a rather inelegant manner, because we had very little control [2].

* Data Layer

DynamoDB is Amplify's database of choice. We do not understand this choice from a technical standpoint. It may make sense from a sales-pitch perspective to use a serverless database along with other serverless technologies (API Gateway, AWS Lambda). Technologically speaking, though:

  - DynamoDB is a great, high-performance NoSQL DB. To quote Rick Houlihan: "Not a flexible database but a really efficient DB to use at scale". You need to know your access patterns to query the DB in an efficient way.

  - Amplify aims to bootstrap an application in no time, letting front-end developers focus on building. At that stage, the product is rarely defined upfront, let alone its access patterns. Data then needs to change shape and format to adapt to new use cases; in other words, data needs to be migrated. To date, though, the Amplify team does not provide a way to migrate data [3].
This disconnect between the original business case of DynamoDB and that of Amplify can even be seen within AWS teams themselves [4]. We don't believe either of them is to blame. The solo front-end engineer bootstrapping a quick Amplify app for a PoC is a very different use case from a team of highly trained data engineers working on a single-table design for their microservice. Amplify rightfully tries to offer an easy way to store data, and so it follows a standard SQL-style design on top of DynamoDB. That, though, leads to poor performance (the original selling point of DynamoDB) or other limitations that are hard to anticipate.
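The "know your access patterns" point can be sketched with a single-table key design (entity names and key shapes here are hypothetical, not Amplify's actual schema):

```python
# Single-table design: keys encode the access patterns known up front.
def order_key(customer_id: str, order_id: str) -> dict:
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}"}

# "All orders for a customer" is one efficient Query: fixed PK,
# SK begins_with "ORDER#".
# A *new* access pattern ("all orders containing product X") needs a new
# GSI or a reshaping of existing items -- i.e. a data migration, which is
# exactly the gap described above [3].
```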

Overall it is pretty clear, and fine, that Amplify focuses on PoC projects rather than production ones (with features like geo-tagging [5] but no way to migrate data). However, once a project starts to get traction, it is a shame to have to completely eject instead of being able to extend, for lack of (boring but necessary) fundamentals.

[1] https://github.com/aws-amplify/amplify-cli/issues/10164

[2] https://github.com/aws-amplify/amplify-category-api/issues/3...

[3] https://github.com/aws-amplify/amplify-cli/issues/1407

[4] https://twitter.com/houlihan_rick/status/1513137762895110147


Energy & emissions are complex to account for. I don’t know much about the US, but in France we have had quite good results in terms of reductions.

But where do they come from? From the optimisation of our energy consumption? Or from the relocation of some of our companies abroad?


> France we had quite good results in term of reductions.

France has the benefit of large-scale nuclear power. On any given day, my home province of Ontario runs at around 85%-95% fossil-fuel-free generation because of nuclear and hydro [1]. If instead we had decided to go with solar and wind, we'd be waiting until 2060 to cut CO2 emissions and burning much more natural gas in the meantime ... just like Germany.

[1] https://live.gridwatch.ca/home-page.html


Is your hostname added to /etc/hosts?

