Any proxy could act as a MITM, so someone using a malicious fork of Stealth could cause problems.
But the net is like this already. One site may send you to another site that tricks you into handing over your data. And a relatively recent vulnerability kept any WebKit-based browser from correctly indicating whether a site’s URL pointed at the right server, so you’d have no visible way of knowing whether a site using HTTPS was legitimate.
Using a VPN could be better, but it’s sometimes worse, because you just shift the trust to the VPN provider: they know one of the addresses you’re coming from and everything you’re doing, and they can record and sell that data.
I mean, technically, mozilla's ca-certificates tracker is the biggest attack vector on the internet's infrastructure [1]
and TLS transport encryption relies heavily on identification mechanisms which are recorded, verified and stored in a manner that requires trusting a lot of third parties, too.
Even ignoring that salesforce is a private entity with financial motivations, and that the server is hosted on OSes that are 17 years out of date, I wouldn't trust any single entity with a responsibility like this. Maybe the UN, but nothing below that, and I think legislation for this would be the "most correct" approach.
I hope that in future (given tlsnotary works in the peer to peer case) this can be solved with content based signatures instead of per-domain-and-ip based certificates.
I mean, a snakeoil (self-signed) cert has to be assumed to be just as legit as a cross-signed cert these days, given how trivially letsencrypt certs can be obtained.
Certificate pinning was a nice approach from a statistical perspective, but with letsencrypt taking over, a pinned cert is only valid for 3 months (max) before it rotates and forces a re-verification.
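To make the pinning idea concrete, here is a minimal sketch of an HPKP-style check: pin the SHA-256 fingerprint of the server's public key (SPKI) rather than the certificate itself, so a Let's Encrypt renewal that keeps the same key pair doesn't break the pin. The function names are illustrative, not from any particular browser:

```typescript
import { createHash } from "crypto";

// Base64 SHA-256 fingerprint of a DER-encoded public key (SPKI),
// the same shape of pin HPKP used ("pin-sha256=...").
function spkiFingerprint(spkiDer: Uint8Array): string {
  return createHash("sha256").update(spkiDer).digest("base64");
}

// Accept the connection only if the presented key matches a recorded pin.
// If the site renews its cert but keeps the key pair, the pin stays valid.
function pinMatches(spkiDer: Uint8Array, pins: Set<string>): boolean {
  return pins.has(spkiFingerprint(spkiDer));
}
```

Pinning the key instead of the cert is what makes the approach survive 90-day renewal cycles, as long as operators don't rotate keys on every renewal.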
Electron has local file access, etc. In fact, its docs state: “Under no circumstances should you load and execute remote code with Node.js integration enabled.”
So, Stealth should consider forking Electron if better sandboxing is needed.
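The sandboxing that quote asks for maps to a few of Electron's documented `webPreferences` flags; a minimal hardened configuration might look like this (a sketch, not Stealth's actual setup):

```typescript
// Options for a hardened Electron BrowserWindow; the flag names are
// Electron's documented webPreferences settings.
const hardenedWindowOptions = {
  webPreferences: {
    nodeIntegration: false, // never expose Node.js APIs to remote content
    contextIsolation: true, // keep preload scripts isolated from page scripts
    sandbox: true,          // run the renderer inside the OS-level sandbox
  },
};
// Usage (inside an Electron app): new BrowserWindow(hardenedWindowOptions)
```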
If the network isn’t free and data is centralized, one day you think you have it all and the next you could have nothing. Tor pretends to be secure, but is dark and compromised. This project seems to understand that and wants to try again to fix things via P2P, in a way that has promise.
The simple implementation of web forms is broken in today’s web. It’s an input field or other element styled as an input field that may or may not be grouped or in a form object, possibly generated dynamically. Websockets, timers, validations, ... it’s a huge PITA.
The DOM is a freaking mess. It’s not there until it’s there, it’s detached, it’s shared. It’s been gangbanged so much, there’s no clear parent anymore.
ECMAScript: which version and which interpretation should Babel translate for you, and would you like obfuscation via webpack, and how about some source maps with that, so you can de-obfuscate to debug it? Yarn, RequireJS, npm, and you need a closing script tag; should it go in the body or the head? You know the page isn’t fully loaded yet, and it won’t ever be. There, it’s done, until that timer goes off. Each element generated was just regenerated and the old ones are hidden, but the new ones have the same script with different references. Sorry, that was the old framework; use this one, it’s newer and this blog or survey says more people use it.
For a P2P open data sharing network over HTTPS, the proxy could satisfy a request by fetching it from someone else down the path. Not everything is direct.
> Tor pretends to be secure, but is dark and compromised.
Citation needed. Please stop with the "tor is compromised" meme... and what do you even mean by "dark"? What the hell... Tor is by no means a perfect anonymity solution but it's to my knowledge the best we've got. It's certainly way better than a VPN or no anonymization at all.
More specifically, tor anonymity is limited by the fact that it's low-latency. This is a fundamental limitation of any low-latency transport layer and not the fault of the tor developers or any obscure forces. In particular, if your attacker has control of both your entry point (your tor guard node or your ISP) and your exit point (tor exit node, or the tor hidden service or website you are connecting to), it becomes possible to de-anonymize your connection (to the specific exit point in question) through traffic analysis. There's just no way around that for a network meant to transport real-time traffic (as opposed to plain data or email, for instance). And yes, it stands to reason that various intelligence agencies will have invested in running exit nodes or entry nodes, but this is just unavoidable. What you can do to counteract this is run your own nodes or donate to (presumably) trustworthy node operators.
I think it's also worth noting that although tor can by no means 100% guarantee that you will be free from government surveillance at all times, it does make mass surveillance more difficult and more error-prone, and to me that's the whole point. Furthermore, although government surveillance cannot be thwarted 100%, tor does make corporate surveillance basically impossible (assuming you can avoid browser fingerprinting; this is what the tor browser is for).
All in all, I can't claim tor is perfect (because it can't be!) but the more people use it the better it gets and it's certainly better than anything else, so please stop spreading FUD and encourage people to use it instead.
Also, it's unclear to me how Stealth helps at all with hiding the IP addresses of its participants... It claims to be "private" but the README doesn't say anything about network privacy...
The code doesn’t strike me as concerning itself with protecting privacy so much as changing who will get to log your traffic. Interesting effort though; I’ll hope for more details from them in the future!
Chill, bro. I said “seems hand-wavy” and “I’d love to be wrong”. I was hedging my bets and clearly indicating this was a surface-level read. I shouldn’t have to have a better alternative on deck to point out something in the codebase that didn’t seem to be privacy-friendly. No offense was meant.
Since you asked how I would do things: I would have had a clear and detailed security-specific document or section of the readme to detail in what ways it is peer-to-peer and in what ways it is private. I would have probably gestured towards the threat model I used when designing the protocols, but, let’s be honest, I’d probably be too lazy to document it adequately. As far as I can tell, there’s one paragraph in its developer guide on security and two paragraphs on peer-to-peer communication, and I wasn’t able to get a good read on its concrete design or characteristics.
> Note that the DNS queries are only done when 1) there's no host in the local cache and 2) no trusted peer has resolved it either.
This wasn’t clear to me from my first spelunk through the readme or the docs. Are you affiliated with the project? Is there a good security overview of the project you know of?
> I mean, DNS is how the internet works. Can't do much about it except caching and delegation to avoid traceable specificity.
What I meant to say is, I was not so sure that the google public dns could be considered private. But nevermind on that, I can’t confirm their logging policies. I’m probably just paranoid about how easy google seems to build a profile on me. So yeah, as mentioned, just my initial read.
Hey, my comment wasn't meant in a defending manner... I'm just curious whether I maybe missed a new approach to gathering DNS data :)
I've seen some new protocols that try to build a trustless blockchain inspired system, but they aren't really there yet and sometimes still have recursion problems.
When I was visiting a friend in France I first realized how much is censored there by ISPs and cloudflare/google and others, so that's why I decided it might be a good approach to have a ronin here.
I totally agree that the threat model isn't documented. Currently the peer-to-peer stuff is mostly manual, as there's no way to discover peers (yet). So you would have to add other local machines yourself in the browser settings.
Security-wise, a lot of things are currently changing, such as the upcoming DNS tunnel protocol that can use dedicated other peers that are already connected to the clearnet, by encapsulating e.g. https inside dns via fake TXT queries etc.
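The lookup order quoted earlier (local cache first, then trusted peers, a real DNS query only as a last resort) could be sketched roughly like this; the resolver shapes here are hypothetical and not Stealth's actual API:

```typescript
// Hypothetical sketch of cache -> trusted peers -> DNS fallback resolution.
type Resolver = (host: string) => Promise<string | null>;

async function resolveHost(
  host: string,
  cache: Map<string, string>,
  peers: Resolver[],
  dnsFallback: Resolver,
): Promise<string | null> {
  const cached = cache.get(host);
  if (cached !== undefined) return cached; // 1) local cache

  for (const peer of peers) {              // 2) ask trusted peers
    const addr = await peer(host);
    if (addr !== null) {
      cache.set(host, addr);
      return addr;
    }
  }

  const addr = await dnsFallback(host);    // 3) real DNS only if all else fails
  if (addr !== null) cache.set(host, addr);
  return addr;
}
```

The point of the ordering is that the traceable query (step 3) only ever happens when the local and peer layers both miss.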
> public dns could be considered private
Totally agree here, I tried to find as many DoT and DoH dns servers as possible, and the list was actually longer before.
In 2019 a lot of DNS providers either went broke or went commercial (like NextDNS, which now requires a unique id per user, which defeats the purpose completely)... But maybe someone knows a good DoH/DoT directory that's better than the curl wiki on GitHub?
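For what it's worth, the JSON flavor of DoH that some of these public resolvers expose (e.g. cloudflare-dns.com) is just an HTTPS GET with an `Accept: application/dns-json` header; here's a small hypothetical helper for building such a query URL:

```typescript
// Build a query URL for the JSON DoH format. Send the resulting URL with
// "Accept: application/dns-json" and parse the JSON response.
function dohQueryUrl(resolver: string, name: string, type: string = "A"): string {
  const url = new URL(resolver);
  url.searchParams.set("name", name);
  url.searchParams.set("type", type);
  return url.toString();
}
```

For example, `dohQueryUrl("https://cloudflare-dns.com/dns-query", "example.com")` yields a URL with `name=example.com&type=A` query parameters.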
Thanks for following up with added info! I’ll look forward to seeing the project progress; it’s an area I’m super interested in. As far as naming systems better at privacy than DNS, I’m not aware of any serious options. Personally, I’m working on implementing something that hopes to improve the verifiability of name resolutions, but that’s a long way off: https://tools.ietf.org/html/draft-watson-dinrg-delmap-02
Large scale actors (read: ISPs and government agencies) have a huge amount of entry and exit nodes. They can simply measure timestamps and stream bytesizes, which allows them to trace your IP and geolocation.
They do not have to decrypt HTTPS traffic for that, because the order of those streams is pretty unique when it comes to target IPs and timestamps.
Yes, hidden services are safe (well, no system is really safe). But if e.g. a hidden service includes a web resource from the clearnet, it can be traced.
I was talking about the "using tor to anonymize my IP" use case, where exit nodes get a huge amount of traffic per session.
In order to be really anon you would need a custom client side engine that randomizes the order of external resources, and pauses/resumes requests (given 206 or chunked encoding is supported), and/or introduces null bytes to have a different stream bytesize after TLS encryption is added.
Hidden services are safer in the sense that your connection can't be deanonymized with the help of your third relay (which would have been an exit node in the case of a clearnet connection) but if the hidden service in question were to be a honeypot and your entrypoint (ISP or tor guard node) were to be monitored by the same entity (this second requirement also holds for clearnet connection monitoring BTW), it would be possible to deanonymize your connection to the hidden service.
How easy it is to perform the traffic analysis would depend, I'd guess, on the amount of data being transferred, so downloading a video would probably be worse than browsing a plaintext forum like Hacker News. But if we're talking about a honeypot, your browser could easily be tricked into downloading large-enough files even from a plaintext website (just add several megabytes of comments in the webpage source, for instance).
> In order to be really anon you would need a custom client side engine that randomizes the order of external resources, and pauses/resumes requests (given 206 or chunked encoding is supported), and/or introduces null bytes to have a different stream bytesize after TLS encryption is added.
It's unclear to me how any of this helps avoid traffic analysis. I believe tor already pads data into 512-byte cells, which might help a little bit.
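That cell padding works roughly like this toy sketch (illustrative only, not tor's actual framing): payloads get chopped into fixed-size cells and the last cell is zero-padded, so an observer sees only multiples of the cell size rather than exact payload lengths:

```typescript
const CELL_SIZE = 512;

// Pad a payload up to a whole number of fixed-size cells. The padded length
// leaks only ceil(len / cellSize), not the exact payload size.
function padToCells(payload: Uint8Array, cellSize: number = CELL_SIZE): Uint8Array {
  const cells = Math.max(1, Math.ceil(payload.length / cellSize));
  const padded = new Uint8Array(cells * cellSize); // zero-filled by default
  padded.set(payload);
  return padded;
}
```

Note that this only blurs sizes within a single cell; large transfers still leak their approximate total size, which is why traffic analysis remains possible.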
Calling it “moving to the cloud” has such pompous overtones. They stopped giving away access to their source for free, just like the majority of the rest of their community. Atlassian might as well be any other company now with a really good product suite. Years ago, they were awesome. They gave free licenses to open source projects. I still think they’re the best, but this cloud-only thing sucks. The software has value outside of their hosting it.
Definitely. Windows got worse after XP, mostly with 8. Why did they move things around and set up multiple ways of doing things? Gnome got worse after v2, and what was Ubuntu thinking with ads? OSX/macOS/iOS got worse with flat design (which thankfully morphed) and AppStore and awful windows-style install nanny.
It’s not just the interface. iOS and macOS got embedded spyware years ago, and it’s still there; they can backdoor whenever. Dig through your logs and sometimes you’ll see output of a menu choice upon being connected to. Windows has similar from what I’ve read. I’m willing to give up some privacy, but it seems like B.S. to make people pay for things that do that in such a hidden way. It leaves things exposed. Unfortunately, the desktop OS alone is not enough for security either. Hardware doesn’t lock down, with potential openings for instructions in multiple places in modern computers.
I want old pre-AppStore OSX/macOS that doesn’t nanny my installs or screw old drivers, with a good package manager, and easily tabbed and gridded terminal windows without tmux necessarily.
I’d also like GPL’d Windows XP running flawlessly like a mac.
I’d use Linux on the desktop, but I’ve never liked any of the desktop managers and it was never as reliable as OSX/macOS.
Would you mind trying Linux? In my experience it's done everything I wanted it to, and done it well. Linux on the desktop has changed. I hated it too in the 2000s. Now mostly everything just works, and KDE can look and feel like any desktop, including OSX. Very reliable.
You'll probably find a pirated version of XP running as you wish soon, given the leak of the XP source code (not GPL, though).
I've been using Elementary OS as a daily driver for at least a year, maybe two (and before that I used Mint for several years). For the most part (there's that pesky word again!), it works quite well. And it really is beautiful; aesthetically, I quite like it.
But boy do I wish that bluetooth would work reliably. Since working at home full time, noise-cancelling headphones have gone from 'nice to have' to 'nearly essential'. It worked more or less fine for a while, then some update broke something and it stopped working. Now it's working again, kind of, but connecting a pair of headphones causes most of the entire UI to stop responding for a full minute or two. Sigh.
And maybe the next time I update it will be fine again. Who knows? And that's the problem: every update feels like Russian roulette. And this isn't even a laptop. I use this thing for work; I do not have time to dick around all day troubleshooting obscure bluetooth problems.
If I'm going to continue to use it, I guess what I need to do is stop updating (or only update specific apps, like firefox) once I happen upon a relatively 'stable' configuration. Security updates be damned.
To add to this, I prefer Ubuntu MATE, where "MATE" refers to the desktop environment: it's exactly what it needs to be, light and responsive and useful, without the need for a GPU just to render your friggin' desktop. It's neat.
Hackintoshes are based on macOS, but that doesn't mean they receive the breadth of testing and scrutiny the real deal does.
That's the whole point, right? Obviously smaller, non-mainstream distros with non-mainstream or more cutting-edge packages will have more paper cuts.
Try PulseAudio Volume Control (pavucontrol) and see if it helps. Mine also broke after some system update. With it I was able to select the bluetooth profile as well as set audio output via bluetooth. Before finding this, my bluetooth headset wouldn't work correctly on elementary OS, though I never needed it in Linux Mint.
Didn't miss the point at all. Why would you assume that? I faced a similar problem as parent and know the pain point. Was just trying to let parent know of a solution I found useful.
I'm in the same situation as you: work from home, own noise-cancelling Bluetooth headphones, use elementary OS.
Personally, the only problem I've ever had is when two devices are connected to my headphones (laptop and a phone). When a notification pops up on my phone, my headphones get "taken over" by the sound from my phone. I basically just turn off Bluetooth on my phone at that point (easier than disconnecting a device).
Minor annoyance for sure (especially because I have notifications turned on for like 3 apps on my phone), but I'm so used to elementary OS (using it since 0.2) that there's no way I can switch to anything else — Windows or another Linux distro — at this point.
I've tried to move to Linux once a year since 2005. A few weekends ago I did my annual attempt and had a go at Elementary OS (live USB wouldn't boot, gave up), MX Linux (couldn't get sound, wouldn't boot after installing nvidia drivers), Manjaro XFCE (kept locking up, requiring a power cycle) and Pop OS.
Pop fared best, but even then I had all kinds of showstopper problems with monitor power saving, resolution, crazy window repositioning, and some behaviour where the desktop workspace randomly becomes far larger than the monitor and sort of pans around. If I leave my computer for 10 mins then have to spend 20 mins fixing it when I come back, that is a deal breaker.
I persevered though... Tried playing a game, alt-tabbed out to do something else, machine rebooted. Tried to use their tiling window manager functionality, but it had all kinds of weird bugs making it virtually impossible to use for anything except simply switching focus (and even then, their theme does not visually distinguish between focused and unfocused windows, which is problematic!)
Anyway... rant over. Short version: I disagree with you. :)
My experience is very similar. And yet in every debate on this subject some people will claim that they are running Linux without experiencing any of these problems. They can't all be lying can they? So what gives?
I believe it all comes down to selecting the right hardware. The way I've been trying Linux was to install it on some machine I had lying around (mostly Acer, Asus, MacBook, no-name towers). Apparently, that's not how it works.
I remember back in 1990s and early 2000s it was hit and miss whether or not Linux would install on a particular machine. Then over time things improved and you could install it on almost any machine.
Some Linux enthusiasts celebrated this achievement by claiming loudly that Linux now "just works". They couldn't possibly have done a greater disservice to the desktop Linux movement, because that's just not true.
It doesn't just work. It just installs. And then it's crushingly disappointing on most machines.
My next Linux attempt will be on one of those known good hardware configurations. Anything else is just a waste of time.
Sadly I have about the same experience as you with my last attempt 2 weeks ago.
I started with KDE Neon, but it failed to install drivers for my Nvidia card and proceeded to sabotage my sound drivers in the process (they were working fine before).
I then switched to ElementaryOS, which did fine with my Nvidia card, but every time it played a sound it would send a loud crack through my speakers.
Back on Windows, which I feel like a prisoner of. The thing is sending data all over the internet; I can't even write a diary because I feel like I live in the USSR, where I have to pay attention to everything I say or the KGB will get me (to be clear, it's just a metaphor; I understand I can write whatever I want on my PC without consequences, but I don't like the feeling that my inner thoughts could end up on a server somewhere).
I run Linux as my daily driver, but I really do get your pain. There are way too many little problems that get in the way: the live USB didn't boot, volume keys don't work, etc., etc. It has gotten WAY better, but the polished professional feel just isn't there yet. Your trackpad won't feel 100%, and if you don't know your hardware inside and out, your Nvidia card or something else might not work. Part of the problem, too, is that there are way too many projects inside the open source world. While that is a blessing, it's also a curse.
Some people just want to boot a machine and get to work. Even though I run Linux, I have become that person as well.
You've tried every desktop manager for Linux and spent enough time with them to be sure you didn't like them? Very impressive.
Not sure what your reliability metrics are, but I've run Linux and macOS desktops side by side for years now (decades, even), and I don't really detect much of a difference. The macOS ones do tend to have the benefit of a rigid hardware platform, which is why I suppose they do a bit less well when forced to run on an arbitrary platform (e.g. inside KVM).
If you want good-faith responses, it's best not to be a jerk in what you're posting. Actually, your comment would be excellent if it had been just the second paragraph. Unfortunately the first paragraph negated it (and then some) before it even had a chance. That's one reason why the HN guidelines include "Don't be snarky."
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and sticking to the rules when posting here, we'd be grateful and you'll get much more interesting responses. Note these in particular:
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Please don't comment about the voting on comments. It never does any good, and it makes boring reading."
There are no fake accounts. There are only accounts or not.
There are no sockpuppet bloggers; those are called bloggers.
If you go down the fake road, you have to realize real news by real reporters is often wrong. I’ve had someone in law enforcement tell me the news accidentally labeled them the victim on televised news and didn’t correct it.
Textbooks written for US public school students: I understand that some content in them has been incorrect and intentionally biased.
There is truth, but it’s an ideal.
I’m glad that this is calling out those that are manipulating people, but on the other hand, what is the goal?
Will shaming bring fairness?
We could have communist dictatorial leaders enforcing their version of truth, if you’d rather have that sort of thing.
Our president should tell the truth, and it should be a scandal if not, to a point of course, because I’d bet most have lied at times.
But if it’s time to activate something like a libel superpower on the internet, how would that even work in a fair and practical way?
Freedom of speech cannot be freedom only to tell truth; truth can be aspired to, but not necessarily known by all, and what’s understood to be truth by some may change. So, really, what should be done?
Btw- I’ve done my best in past years to tell the truth as much as I can when I’m not kidding around, and it typically makes things difficult, but better. I’m not recommending anyone fake up things to boost rep. But, it’s happening, it’s not good, and I don’t see how AI or oversight or a control play would end well when it comes to enforcing truth. However, the notion of a “fake” account is what allows most of the users to post content on HN and Reddit more freely.
> By the way, there are no fake accounts. There are only accounts or not.
Yes, until we see a great advancement in AI, actual meat-based mammals are driving these accounts.
> There are no sockpuppet
Sometimes it seems like half of Twitter is fake accounts. You've never seen photos or videos of a Bangladeshi click farm? 50 people sitting in small cubicles running proxy-connected virtual machines on desktop PCs, posting stuff, upvoting things on reddit, etc?
I assure you that such things exist. Some of the places that used to do MMORPG gold farming, trading virtual currency for real money, have shifted into this market, because it's much more lucrative.
You've never seen the pictures from China of 1 person sitting in front of a board with 40 budget android phones mounted on it, upvoting and reviewing apps?
I can't find the photo right now, but absolutely the same thing exists in the android app ecosystem. If you read and write fluent Mandarin you can probably find such in 30 seconds of searching within-the-GFW search engines.
I’m with you that it’s a serious problem. If it weren’t, Amazon and others wouldn’t be working so hard on AI to combat the AI or human that’s beaten their AI.
Those aren’t “fake accounts”, though. They’re real accounts being abused. There’s a difference. If Amazon and Twitter allow it to happen, it will happen. But what does shaming accomplish here? It just means people waste time talking about it. It has little chance to change behavior. More likely the outcome could become Reddit and HN enforcing a real ID. That may hurt the community, because not all of us want our name on everything; it’s not because I don’t stand behind what I’m saying- I’m just not going to treat every post like I want to carry it around with me on a sign for the rest of my life, even though at some point, maybe I’ll have to!