People use other managers for many reasons: storing passwords (and other secrets) that aren't used on a site, using them across different browsers (say, Safari on the desktop and Chrome on mobile), and lack of trust in the browser's password manager.
Also, for a long time, browsers didn't save passwords with forms marked with autocomplete=off.
To generate random, strong passwords. Also not to be locked into a browser. Better actual password management (e.g. last changed). Tags.
A Chrome Canary build did have the ability to generate random passwords, but password management in Chrome is still a pain IMO. Not sure about Firefox, but a quick google suggests it doesn't generate random passwords automatically.
It's more like the pipe operator in OCaml (http://blog.shaynefletcher.org/2013/12/pipelining-with-opera...). The Lisp version has the extra advantage that you don't have to repeat the operator between all the intermediate functions: ((->> 2 (* 100) str count) vs 2 |> (* 100) |> str |> count).
I understand how a lisp implementation would work here to require only the single operator (I'm assuming a fairly simple macro).
Would it not be possible to do something similar in another functional language to take a <pipe function> and apply it sequentially to a list of function calls?
There are no semantic problems with this, but typing will get in the way. You can express it fairly easily if all the functions have the same type (such as Int -> Int); in fact it's just 'foldr ($)'. But it is difficult to type a list of functions such that each member's return type matches the next one's parameter type (symbolically, [a(n-1) -> a(n), ..., a1 -> a2, a0 -> a1]). It's easier to refer to the composition of such functions, which is why you usually see it written as 'h . g . f'.
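For what it's worth, in a dynamically typed language the typing obstacle disappears entirely. A minimal Python sketch (the `pipe` helper is a made-up name, not a standard function):

```python
from functools import reduce

def pipe(value, *fns):
    """Left-to-right pipeline: pipe(x, f, g) == g(f(x))."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

# Mirrors (->> 2 (* 100) str count): 2 -> 200 -> "200" -> 3
result = pipe(2, lambda x: x * 100, str, len)
```

The heterogeneous list [a0 -> a1, a1 -> a2, ...] is unproblematic here precisely because nothing checks the intermediate types.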
This kind of proof is usually separated into two steps:
- correctness: define a loop invariant. Assuming the loop terminates, it will hold when the program exits the loop.
- termination: define a loop variant, something that changes at every iteration. If you can find a variant that is a strictly decreasing sequence of natural numbers (or an increasing, bounded sequence), you've established that the loop terminates.
Once you've done these two steps you know that the program terminates and the invariant holds at the end. (in your case, you won't be able to find a good variant - depending on the floating point semantics you use of course).
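To make the two steps concrete, here's a small Python loop (integer square root, chosen purely for illustration) with its invariant and variant written out:

```python
def isqrt(n: int) -> int:
    """Largest i with i*i <= n, by linear search (illustrative, not fast)."""
    assert n >= 0
    i = 0
    # Invariant: i * i <= n holds before every iteration.
    # Variant: n - i is a natural number (0 <= i <= n) that strictly
    # decreases on each iteration, so the loop terminates.
    while (i + 1) * (i + 1) <= n:
        i += 1
    # Loop exited and the invariant holds: i*i <= n < (i+1)*(i+1).
    return i
```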
If you look at the disassembly in the link, the backdoor was inserted smack in the middle of the authentication function, which caused jump labels further down to change.
This is all trivial for a compiler to adjust, but it's not what someone manually tampering with the binary would do.
In addition, AFAIK this affects both the ARM and x86 firmware, so a patched binary would imply two separate modifications. Though that would still leave open the possibility that the toolchain was exploited before compilation occurred.
Why would you choose that particular password if you patched the binary? That particular string would stick out in a binary; it certainly looks more like source code.
That's assuming that this particular string was already present somewhere in the binary. Since it is only present as a reference, you would not see the string in a binary patch.
It would have been something that already existed in the string table for the binary, so you would have just been referencing an address and not inserting a string inline.
I suppose that's the reason. Still, I found the result completely unexpected and amusing. I googled this string just to see if it has ever appeared anywhere else but apparently Google these days tries very hard not to return zero results.
I totally agree; it was awesome to discover random websites that way.
I think that there's still a place for webrings these days, so I'm creating the webring club. Feel free to subscribe on http://webring.club/ or shoot me an email (address in profile) if you're interested in exchanging ideas!
For some reason mailchimp was returning 403s from heroku, this must be related to this morning's outage. This is fixed and I added the emails that failed.
That's hilarious -- in fact the full quote is even better. This guy hits it right on the nose:
Jeff Atwood: "any application that can be written in JavaScript, will eventually be written in JavaScript. Writing Photoshop, Word, or Excel in JavaScript makes zero engineering sense, but it's inevitable. It will happen. In fact, it's already happening"
Of course, it doesn't make "zero" engineering sense in any logical fashion. Only those emotionally distraught by the "imperfection" of JavaScript don't see the utility in applications moving to the web.
Which isn't to say the web is ready. But it will be!
Yeah, moving software into the cloud (the new name for the Web) is the right way to go... well, if you want to end up with a terminal in your hand and all the intelligence up in the cloud, like an interactive TV from the nineties or a French Minitel from the eighties. :)
There are technical reasons to dislike literally every technology. I say "emotionally distraught" because JavaScript seems to attract detractors who think that strong feelings are a valid substitute for logic.
I was under the impression that asm.js/emscripten are the new hotness these days? Hence not requiring porting everything to JS manually to bring it "to the web" anymore.
I like to see this as a sort of duality: closures are objects that have a single method (call) and objects are made of functions that can only capture one variable (this).
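That duality is easy to see in Python (the counter is just an illustrative example): the closure's captured environment plays the role of instance state, and `self` plays the role of the one captured variable.

```python
# An "object" built from a closure: the environment holds the state,
# and calling it is the single method.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

# The same behaviour as a class: state lives in self, the one
# "captured variable" that every method shares.
class Counter:
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1
        return self.count
```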
That's not a closure. That's returning a struct. That struct is an object that has several instance members that are functions, which you've created as closures, but note you're still returning a struct. To return a closure, you need to return a function type, which then has exactly one way to call it, which is the claim here about functions.
Of course you can dispatch anything you like within that call, turning a closure back into "methods", although in most languages that's a fairly painful way to operate, even if it is possible. Personally, over the years I've come around to tel's point of view, which is that as related as objects and closures may be, they aren't quite two sides of the same coin, as further evidenced by the fact that pretty much all modern OO languages also include closures. If they really were the same thing in two guises, somebody would probably have fully unified them by now, but they aren't the same thing. As much fun as the koan is, I think it's false wisdom.
Yes, I said that. Please try less hard for the "gotchas". There's no ambiguity here, the Go type system does not permit it; something is either a struct or a closure (or exactly one of the other valid types; even "interfaces" are actually a specific type in Go), and the linked code returns a struct. (Python, for instance, can create something that acts as both object and closure at the same time with an object that implements the __call__ magic method.)
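A minimal sketch of that Python trick (the class name is made up): __call__ makes the instance usable as a closure while it remains a full object.

```python
class CallableCounter:
    """Both an object (it has attributes) and a "closure" (it's callable)."""
    def __init__(self):
        self.count = 0
    def __call__(self):
        self.count += 1
        return self.count

c = CallableCounter()
c()          # invoked like a closure
c()
c.count      # inspected like an object; now 2
```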
You have to specify the language; there's no (useful) definition of type classes and interfaces outside of the context of a specific language that is specific enough to make reliable comparisons.
If you mean "Are Haskell type classes equivalent to Go interfaces?", the answer is no: Haskell type classes are substantially more powerful, even before you start turning on extensions. For instance, Go cannot express the Monad type class at all.
I was more thinking Haskell type classes vs. C# or Java interfaces. They're a bit more powerful than Go interfaces because they can be used in combination with generics.
The Monad type class is a pretty good test. Last I knew it wasn't properly expressible in C# or Java either in the fully general sense, though you can get closer than Go, certainly. Whether it gets close enough that you can call it "done" is a bit of a judgment call; in the end, the semantics are never quite identical even in the best of cases.
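For reference, the shape of the Monad interface, sketched dynamically in Python with an illustrative Maybe-like type (names are made up); the part no Go interface can capture is the static abstraction over the type constructor itself:

```python
class Maybe:
    """Illustrative Maybe: unit :: a -> m a, bind :: m a -> (a -> m b) -> m b."""
    def __init__(self, value, present):
        self.value = value
        self.present = present

    @staticmethod
    def unit(value):
        return Maybe(value, True)

    def bind(self, fn):
        # Short-circuit on "Nothing"; otherwise feed the value to fn.
        return fn(self.value) if self.present else self

NOTHING = Maybe(None, False)
```

Maybe.unit(2).bind(lambda x: Maybe.unit(x + 1)) carries 3, and any bind after NOTHING stays NOTHING; the dynamic version is easy, it's the static typing of it that separates the languages.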
As others have illustrated: it gets really hard to talk about this stuff outside of formal models, especially if your goal is to talk about equivalence.
At this point I start trying to turn toward System F, but that's my hammer for this particular nail and I don't know an equivalent one in the object side of things. I'm certain you can express higher-level things in System F which are equivalent to interfaces. I know there's a translation of typeclasses to System F—it's called Haskell, har har—although you have to recognize that the "search" component of typeclasses will be lost.
Interfaces—at least in C#/Java—are equivalent to product types; they let you express the notion of conjunction within the type system.
Similarly, “implementation inheritance” (where every class is either sealed/final or abstract) is how we express sum types.
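A sketch of that correspondence in Python (Shape, Circle, and Rect are illustrative names): a base class with a fixed, "final" set of subclasses behaves like the sum type Shape = Circle r | Rect w h.

```python
import math
from dataclasses import dataclass

class Shape:
    """'Abstract' base: only the subclasses below are ever instantiated."""

@dataclass
class Circle(Shape):   # one 'final' case of the sum
    radius: float

@dataclass
class Rect(Shape):     # the other 'final' case
    width: float
    height: float

def area(s: Shape) -> float:
    # Case analysis over the sum type.
    if isinstance(s, Circle):
        return math.pi * s.radius ** 2
    if isinstance(s, Rect):
        return s.width * s.height
    raise TypeError(f"unknown Shape: {s!r}")
```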
We can simulate type classes in C# by using implicit casts to abstract classes (interestingly, it’s not so straightforward to simulate the awesomeness of type classes in F#).
The key to all of this is to realize that “types” and “classes” are not equivalent, even though C# and Java conflate them.
Yes, it's not very formal, and as others said it would need some definitions to yield a useful result.
But actually I don't think this duality is related to types: in a static language, the types of captured variables do not appear in a value's type. And on the object side, the types of instance variables are hidden behind the public interface too.
So it's really more about techniques for creating abstractions than how the values are composed together.