And surprisingly, in Python 2.7, I was able to define a function that takes more than 255 arguments. But perhaps this only worked because I cheated and used exec.
>>> exec("def f(" + ",".join("f" + str(x) for x in range(300)) + "): print(f299)")
>>> f(*range(300))
299
Interestingly enough, the function definition (the exec part) fails in 3.6, as one might expect. But in 2.7, even though the definition works, you can still make a call fail by generating the actual arguments:
>>> exec("def f(" + ",".join("f" + str(x) for x in range(300)) + "): print(f299)")
>>> f(*range(300))
299
>>> exec("f(" + ", ".join(str(i) for i in range(300)) + ")")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1
SyntaxError: more than 255 arguments
When the number of arguments is reduced so that the call is invalid for f, the following error is shown instead:
>>> exec("f(" + ", ".join(str(i) for i in range(30)) + ")")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
TypeError: f() takes exactly 300 arguments (30 given)
Which seems a bit paradoxical - "You need 300 arguments" - "No, wait, actually, you can't have more than 255" ... ;)
It looks like they either changed the internals of Python 3, making it fail at definition time as a side effect instead of at call time, or kept the same underlying logic but deliberately chose to raise the exception at definition time to catch the problem earlier...
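For what it's worth, CPython later dropped the 255-argument limit entirely (in 3.7), so on a modern interpreter the same exec trick works at both definition time and call time. A minimal sketch to check this, assuming a 3.7+ interpreter:

```python
# On CPython 3.7+ the 255-argument limit is gone: both the definition
# and the call below succeed, whereas 3.6 rejects the def with a
# SyntaxError and 2.7 rejects only the 300-argument call.
ns = {}
params = ",".join("f%d" % i for i in range(300))
exec("def f(%s): return f299" % params, ns)
result = ns["f"](*range(300))
print(result)  # 299
```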
Ironically, in that situation, using a list might end up being more efficient due to cache locality. Or not. That's why measuring is so important: performance can be a very counterintuitive subject. Hard data should always prevail over theory and guesswork.
Bad practitioners in any field crudely invoke and naively apply a trite toolbox of mantras. Detached from reality and driven by insecure ego, they become the problem by using magical thinking from authoritarian logic and delude themselves of the reality of what they are actually doing.
In any skill, people can fall prey to cults of myth and mysticism, where merit is judged by adherence to orthodoxy rather than suitability. Programming is no different, and sometimes it's hard to hear anybody think over the mooing of all the sacred cows.
For any realistic workload containing a mix of failed lookups and lookup hits midway through the list, we're talking about many thousands of comparisons per lookup on average. Regardless, replace list with linked list in the above example, for illustrative purposes.
I agree that measurements & hard-data are preferable to guesswork, but it takes time and energy to gather these measurements and hard-data as well. For minor decisions where the alternative proposal is very slightly more complex, but there's a very compelling reason to assume order-of-magnitude performance improvements, I would argue that gathering data is a waste of time.
If it's that bad, it should be easy to discover what is fastest.
But the way too common situation is that you look at a program under development and tell people "no way, that list will be too large to keep searching, you should design it around a hash set", and people reply something about premature optimization and keep going, then it gets released and is too slow to use so you get to rewrite all their code under pressure.
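If someone insists on hard data, it takes about thirty seconds to produce. A quick sketch (the collection size of 100,000 is an illustrative assumption, not from the thread) timing a failed membership test, the list's worst case, against a list and a set of the same elements:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
missing = -1  # not in the collection: the list must scan all n elements

# Each lookup is O(n) for the list, O(1) on average for the set.
list_time = timeit.timeit(lambda: missing in as_list, number=200)
set_time = timeit.timeit(lambda: missing in as_set, number=200)

print("list: %.4fs  set: %.4fs" % (list_time, set_time))
```

On typical hardware the gap is several orders of magnitude, which is exactly the kind of result that doesn't need much statistical care to act on.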
The interface is the same. For example, in C# you have the ICollection<T> interface, which offers a Contains lookup method and which both HashSet<T> and List<T> implement.
The point of the OP is that among different implementations for the same interface, choose the simplest one unless you have empirical evidence that compels you to do otherwise.
Arrays didn't implement generic interfaces in C# until the LINQ release. But the main problem is that arrays have an interface that is much more complex than just ICollection: they have functionality you don't want to be used and that should be properly encapsulated.
The lookup method just checks whether an element exists in the collection. List and HashSet implement non-indexable collections, which have no notion of a key. The ICollection interface is very thin, and it's mostly used when you need to keep track of a set of elements, e.g. storing the nodes already visited in a DFS.
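A rough Python analogue of that idea (Python rather than C#, and the graph is a made-up example): the DFS below depends only on membership tests and insertion, so the visited collection's implementation can be swapped, say from a list-backed wrapper to a set, without touching the traversal itself.

```python
# Hypothetical DFS that relies only on "x in visited" and "visited.add(x)".
# Any object exposing that minimal interface works; choosing set here is
# purely an implementation decision behind it.
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(sorted(dfs(g, "a")))  # ['a', 'b', 'c', 'd']
```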
OK I know more about C# now than I ever planned on knowing.
The problem with the OP's comment is that KISS and premature optimization are not diametrically opposed. They are two separate principles that mean different things.
Premature optimization is bad, but not because it necessarily violates KISS. Similarly, many people overcomplicate code for reasons nothing to do with optimization.
His argument reminds me of people who argue against free speech generally because we already ban people shouting fire in a cinema.
I tried this feature when it was available only in pilot mode. It was very nice and fit my use cases quite well. But the UX back then could use some improvements. For example, opening a new tab in a specific container took way too many clicks. And you couldn't just hit Ctrl+T, because it would always open the tab in the default container rather than that of the active tab.
It's a creative approach to content monetization that solves some of the problems associated with ads, such as visual clutter. But mining is a computationally intensive process. So, in a world where this became commonplace, I can't see how you could have more than a few websites running miners on the client, contending for CPU power, without the end-user experience being severely affected, along with the revenue that content makers end up getting out of this.
I don't know about Flow, but in TypeScript you can still fire up a REPL and play around. The difference is that the typechecker validates every command you run. This is enough for small experiments, e.g. when you want to validate an algorithm. But yes, in the case where you want to rerun your application to test your changes, the feedback cycle is a bit longer. I find that acceptable considering that static typing makes refactoring significantly easier and less risky, which is important when working on large projects.
My browsing experience on a mobile phone stepped up greatly since I started using Firefox Focus. You don't even need to mess around with plugins because ad blocking is already a built-in core feature.
Modern web development is already hard by itself, especially when it comes to security. A saner runtime language is needed to replace the subpar standard that is JavaScript. One with a robust type system and coherent semantics. It won't fix every problem, but at least it would prevent abuses such as the one in question.
WASM (WebAssembly) is about developing a very simple cross-browser bytecode that allows implementing any runtime on top of it. The first versions are already rolling out in the latest major browser versions, but at this stage you don't yet get DOM access from WASM. Once DOM access is implemented, it's the beginning of the end for JavaScript. Future browsers might well implement JS as a pre-shipped runtime targeting the internal WASM core.
I was commenting to the GP about technologies to replace JavaScript. In the long term WASM is the best candidate, though that's indeed not one of the project's stated goals. JS will be with us eternally, rest assured. But if DOM-enabled WASM were one day to gain wide adoption, developers targeting the contemporary browsers of the future would at least have a wider selection of runtimes to choose from in addition to JS.
On the other hand, if you thought modern browsers are bloated, just wait for everyone to compile their runtimes on top of WASM.
It's not very hard to imagine, especially in an enterprise environment, running a browser 15-20 years from now and that browser loading the equivalent of the JVM, .NET CLR, Ruby VM, etc., on top of WASM :)
This actually reminds me of "es-operating-system"; an experimental operating system copyrighted by Nintendo (yes, Nintendo!), where "every system API is defined in Web IDL".
AFAIK it never went anywhere, but maybe building an entirely new OS/Browser based around WebIDL seemed less insane 10 years ago.
First of all, browsers are committed to backwards compatibility.
Secondly, there are huge amounts of JavaScript being written right now, and nobody's going to throw away billions of dollars worth of investment. People complain about COBOL written in the 60s, when programmers numbered in the thousands. JavaScript today is written by millions of programmers.
What bothers me most about Clojure is the abstraction leak when it comes to errors. When something goes wrong (say you passed a value of the wrong datatype to a function), it just spits out a Java stack trace. I'd like feedback closer to the conceptual level of the language than to the intricacies of the runtime.
Spec instrumentation solves this perfectly for Clojure and ClojureScript. There's a price to pay, sure: namely, speccing your functions. But specs are already defined for the functions in clojure.core.