That kind of makes sense. The internet sleuths would, within minutes, analyze it further than the reporters ever could. The reporters like their jobs, so they insist on keeping that data to themselves.
Edit: Ahh, hackernews, the tumblr sister site where inconvenient truth means downvotes :D
Well, it is actually pretty interesting how the information has been managed in this case. The press wants their own piece after they lost the previous major leaks completely. It's a genuinely relevant discussion about the future of an industry that has been in slow decline for the last few decades. If someone interprets that as sarcasm or sensationalism, that is probably an attitude problem, or perhaps an issue with poor media literacy.
I have found Hacker News to be one of the islands where people still act like penguins. Strong engineering culture, and just like penguins, people love guarding their rocks and throwing poo at others, in a certain style that is very typical of technical communities. Sarcasm should be the least of the issues compared to that.
> The press wants their own piece after they lost the previous major leaks completely.
The information is being released by 100 different news organizations in 100 different countries. It has nothing to do with protecting their jobs from "internet sleuths"; it's about having professional journalistic standards.
What pisses me off is that after they are done going through the data, they often still don't release the files.
There is a reason why publicly available information is useful (think about historians, for instance), and by locking it up for only these 100 orgs (which will forget about it in half a year once the scandal is over), you lose the opportunity for future research, books, etc. on this.
The newspaper leading this investigation has been very open to research on such data it received in the past – not just opening up the data, but also funding the research.
Buffer overflows are basically memory corruption. They can lead to crashes, remote code execution, and, if triggered in privileged code, privilege escalation, etc.
Most modern operating systems have buffer overflow mitigations such as ASLR. I recently tried exploiting a few guaranteed buffer overflows for fun, and it's getting irritatingly hard, at least on Linux: non-executable stack, sanity checks in the *alloc functions, canaries from -fstack-protector, ... It's possible to get past all that, but it takes a bit of work.
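For context, a minimal sketch of what such a guaranteed overflow looks like in C (the function names here are mine, purely illustrative): the unchecked strcpy writes past a fixed-size buffer, which is exactly the kind of stack smash that the -fstack-protector canary catches at runtime.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: vulnerable_copy() smashes the stack when src is
 * longer than 15 characters; compiled with -fstack-protector, the canary
 * check aborts the process before the corrupted return address is used. */
void vulnerable_copy(char dst16[16], const char *src) {
    strcpy(dst16, src);              /* no bounds check: overflows dst16 */
}

/* The fixed variant bounds the copy and always NUL-terminates. */
void safe_copy(char dst16[16], const char *src) {
    snprintf(dst16, 16, "%s", src);  /* truncates instead of overflowing */
}
```

Building the same file with and without -fstack-protector-strong and feeding vulnerable_copy() a long string is an easy way to watch the canary do its job.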
I would be freaked out if I were running some older operating system and someone had vendored a poorly compiled version of libpng. Windows applications are probably the scariest here, especially when run on older Windows servers...
AFAIK there are a few that produce results that cannot be distinguished from a real person. None of the really good ones are open source, and the best ones I have heard of are not even for sale.
The best implementations confer a market advantage, so they are well guarded. We don't hear about the best implementations because we never noticed a thing. For example, some phone operators have replaced their customer service with TTS/STT solutions. Because people tend to lock up when they realize they are talking to a computer, those systems have had to sound very natural.
I know a few that are pretty crappy, but a few are plain spooky. Customers who tend to joke and flirt with the computer, hoping for an emotional response, are probably the most likely to notice them.
Then there's the case where a US intelligence service demonstrated its capabilities to a politician (a senator or congressman) by recording him and producing a voice clip of him saying something like "death to America", so convincingly that no one could distinguish it from the real speaker. It also seemingly passed further voice analysis. Google it, it's a pretty interesting read.
> I know a few that are pretty crappy, but a few are plain spooky. Customers who tend to joke and flirt with the computer, hoping for an emotional response, are probably the most likely to notice them.
I got a call two years ago from a telemarketer who kept asking me yes/no questions in a robotic voice and didn't leave me any space in the conversation to say anything. I really felt like I was talking to a computer, so I tried to Turing-test him by forcing him to answer an open-ended question. It took two minutes, but the caller turned out to be human after all. That wasn't very reassuring, though. Humans should not speak with such a zombified voice.
The logic here is that files of a certain type are usually in predictable locations, whereas with SELinux the logic is that types are intrinsic properties of objects, stored in their metadata. The difference is that with the latter, moving the object does not change its properties (the context comes along with it), while with the former the properties may be lost if something makes it possible to move the object outside the intended envelope.
The approach taken by Tame is technically easier to implement without shooting yourself in the foot, and it is also featured in Grsecurity's RBAC implementation. Jolly good.
The thing is, the approach taken by SELinux, with its external security mechanism, can be and has been extended beyond files. Tracking information by its properties as it moves from files to databases, web servers, etc. is a powerful (but extremely hard to implement) feature. Also, administratively, the security classifications of documents are properties of the documents themselves, not of the storage containers they happen to be found in.
Looking at Alexa, the trend for Reddit has been, and even after the "Pao debacle" still is, upwards. They are constantly getting new users, probably far more than they lost. So from the viewpoint of management, all is well. These new users are probably also more susceptible to their monetization efforts.
The fact that Reddit has lost most of its high-quality content creators in the last six months does not seem to concern them.
Here's [0] the relevant section of the X.509 RFC (Name Constraints). Unfortunately, the last time this was discussed on HN, someone mentioned that Name Constraints are not supported by all client software, making them unsafe to rely on.
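For illustration, a constrained intermediate CA carries this restriction as an X.509v3 extension; a sketch of the relevant OpenSSL config section could look like this (section and domain names are made up):

```
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# Only allow this CA to issue certificates for names under example.com
nameConstraints = critical, permitted;DNS:example.com
```

Clients that honor the extension reject any leaf certificate this CA signs for a name outside example.com; the problem mentioned above is the clients that silently ignore it.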
I'd imagine (based on very superficial knowledge) that DANE would achieve something to that effect. But it's pretty much dead because apparently DNSSEC wasn't all that great.
Since the days of the Orange Book in the '80s, three rules have been golden in IT security: authorize, audit all information usage, and never let information out of the secure domain uncontrolled. Properly implementing access management, watching users, and limiting the tools still works today, when done correctly.
In the leaks of the past few years, every one of these rules failed. The users had baffling levels of access to the information, there were oversights in auditing, and it was fairly easy to move the information en masse out of the secured domain. Leaking was downright easy, and getting caught was far from certain.
I recently took a look at different products for file access auditing, to solve the "audit all information usage" part in cases where the information systems themselves cannot be adapted (COTS). There is a surprisingly large number of products with different feature sets and value propositions, some with pretty steep prices and highly evolved features.
I was inspired to develop my own very simple tool, just for learning new skills and for the heck of it. A few hours of wading through MSDN, nerve-wracking C/C++ programming, and tuning, and it's ready. The quality is a bit so-so (there may well be memory leaks, although I tried to catch them) and I had no precise specification, but here it is...
With this application, every file access (creation of a low-level handle) causes an event that is logged to a centralized log management system. I did not implement hashing or collecting the files themselves, because that would probably have a direct impact on desktop performance, but it would be trivial to add as a feature.
Once the information is in the centralized log management system, it is relatively easy to generate, for instance, a weekly report of all file accesses per user. In AD environments one could fetch information about managers, push the data through a good PDF template, and email the reports. As an outcome, managers would get weekly reports of what their subordinates have accessed.
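As a sketch of that reporting step (the log format and field order here are my assumptions, not the actual tool's format), grouping accesses per user from "timestamp,user,path" lines could start from something like:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical log lines in "timestamp,user,path" form; the real tool's
 * output format is not specified here, this is only illustrative. */
static const char *LOG[] = {
    "2015-11-02T09:14:05,alice,C:\\docs\\plan.docx",
    "2015-11-02T09:20:41,bob,C:\\docs\\budget.xlsx",
    "2015-11-03T11:02:17,alice,C:\\docs\\plan.docx",
};

/* Count how many logged file accesses belong to the given user;
 * a weekly report would aggregate this over the reporting window. */
int count_accesses(const char *user) {
    int n = 0;
    for (size_t i = 0; i < sizeof LOG / sizeof LOG[0]; i++) {
        char line[256];
        snprintf(line, sizeof line, "%s", LOG[i]);
        strtok(line, ",");                 /* skip the timestamp field */
        const char *u = strtok(NULL, ","); /* the user field */
        if (u && strcmp(u, user) == 0)
            n++;
    }
    return n;
}
```

A real implementation would stream events from the log server rather than a static array, and group by manager via an AD lookup before rendering the PDF.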
As awareness of this spread, it would raise the bar for even attempting anything in higher-security environments. The impact on overall security in the long run would be more significant than the technical feature itself. Information security tools work best when they have a psychological impact. Absurd, but true. Cranking every technical bolt all the way down is not always the best approach.
On the other hand, some privacy should be guaranteed to users by limiting the tool. At least in lower-security environments this could become an issue, because employees nowadays typically have limited rights to use the employer's computers for their own matters, for instance accessing banking services during the lunch break.
I would appreciate comments, code review, bug and feature reports, etc!
I tried preaching a similar message while I worked for a C4I unit. I found it extremely hard to get anyone to understand the actual point, and even then I mostly got "but we're all COTS now" with a shrug.
In netsec terms, that practically stands for abandoning sound principles and going for superficial compliance models. There is no real security architecture in place for most systems, there are no trusted paths for handling information, and the assurance level is at rock bottom. The result is scary when you put it in the context of adversaries that are hostile, active, and very well funded (typically state-sponsored).
Actually, I considered elaborating on this with real-life examples, but then I realized that stuff might be classified, so... Meh.
Appreciate the corroboration from the inside. I've suspected as much, given that even the "controlled interfaces" are usually EAL4 at best. Did you know Navy people built an EAL7 IPsec VPN? I'm sure you can immediately see (a) how awesome that is and (b) what value it would have for our infrastructure/military. Yet it got canceled before evaluation because the brass said "no market for it." Virtually nobody in the military or defense was interested in setting up highly secure VPNs.
Haha, I feel you on that. It's very important for people to understand the basic way C.C. works: a security target or protection profile with the security features needed (can't leave anything out!), plus an EAL that shows how hard (or not) they worked to implement them correctly. I'd explain what EAL4 means, but Shapiro did a much better job below [1]. That most of the market settles for insufficient requirements at EAL4 or lower assurance shows what situation we're in. Hope you at least enjoyed the article, as I haven't been able to do much about the market so far. ;)
EAL criteria are so operationally restrictive that useful work is effectively prevented from happening. No one needs worse security, we need better security.
A number of us have conformed to the higher ones on a budget with small teams. The highest ones are indeed a ton of work to accomplish, yet there have been dozens of projects and several products with such correctness proofs. By the '80s they had figured out that their certified TCB needed to be reusable in many situations to reduce the issue you mentioned. Firewalls, storage, communications, databases, and so on were all done with security depending on that same component. Modern work like SAFE (crash-safe.org) takes this closer to the limit by being able to enforce many policies with the same mechanism.
So your claim is understandable but incorrect. Useful work repeatedly got done at higher EALs, and it continues to get done. The real problems are (a) a bad choice of mechanism for the TCB and (b) a bad evaluation process. Most of us skipped high-EAL evaluations in favor of private evaluations by people working with us throughout the project. That saves a lot of time and money while providing even more peer review.
They really need to improve the evaluation process itself so it's not so cumbersome, and to update their guidance on the best mechanisms for policy enforcement. They should probably also sponsor some of those mechanisms with funding, like they did in the old days. Fortunately, DARPA, NSF, and the EU are doing this for many teams, so we can just leverage what they create.
It has no shuffle/random-discovery feature; the channels they offer are either hand-crafted or based on popularity. The music selection excludes many smaller and indie labels. The search functionality is very limited (the metadata quality is too low).
I wanted to love Spotify, and I really gave it a try. I just couldn't keep using it; all it did was make me angry. In the end, it's mostly good for listening to what the major labels think you should listen to.
I have personally found Spotify's Browse > Discover feature to be useful and worthwhile; I've found a lot of interesting music there.
My use case, though, is that I most often go on Spotify to listen to a specific album I had in mind, one I either already knew or heard of from a friend, a blog, a local gig, etc. But in those times (20% maybe?) when I feel like trying something new, I always find something good on Discover.
I wish their UI were better, though. The desktop app spawns 7 SpotifyHelper processes that each eat 40 MB of RAM, and the whole thing feels really slow on my ten-ish-year-old laptop. The web app uses Flash, and while it consumes less RAM, playback is choppy on my PC. At least the desktop app plays fine once it's started.
Probably nothing new from Asus. I recently purchased one, and it's pure UEFI. The newest models have begun removing the option to enable "legacy mode", so you can't run legacy operating systems anymore.
This will probably become more and more common, and within the next couple of years it will be extremely hard to find any good laptop that runs OpenBSD.