The CPU implementation is done by Qualcomm, Apple, and the like. The CPU design is done by ARM, predominantly in the UK. The fact that Softbank (Japan) owns ARM has no bearing on where and who is doing the actual technical work.
That's incorrect: Apple designs its own microarchitecture that implements the ARM ISA. Qualcomm, HiSilicon, and Samsung use ARM's reference microarchitectures, which are mostly designed in Austin, TX now, although some have come from ARM's Sophia Antipolis office in France. Samsung also made custom microarchitectures at its Austin office for its Exynos chips, but recently shut that operation down.
Yes, it uses the ARM instruction set, but it was completely designed by Apple. This is just like Intel using the same 64-bit instruction set as AMD... you wouldn't say AMD designs Intel's chips, though.
Apple is largely an exception in the crowd of ARM SoC manufacturers, though. A lot of companies just implement ARM cores, with their additional magic in other bits of the SoC.
The fact that Apple does it does not mean that ARM does not do any design.
It is a completely different language - it is syntax-incompatible. Human languages are a very poor comparison to computer languages.
It's just a tough sell to manually update a huge pile of code that works just fine as-is. No new features will be introduced. No bugs fixed, while some new ones may pop up. There's a lot of manual work in porting, the automated tools only cover the basics. Your downstream customers receive very little benefit for this effort, which makes it very hard to justify spending the engineering resources on it.
At this point, I've helped port multiple code-bases in multiple companies. Porting to Python 3 actually can and does fix some bugs, as it generally forces the code to start handling text semi-correctly, instead of just hoping the bytes go through and it all works out.
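A minimal sketch of the kind of latent bug this means (the byte values and strings here are made up for illustration): Python 2 silently coerces between bytes and text, so code can "work" right up until non-ASCII data shows up, while Python 3 makes the mismatch fail loudly and forces an explicit decode.

```python
# Python 2 hides this class of bug by coercing bytes and text;
# Python 3 forces the decode, so mixed-type concatenation fails loudly.
data = b"caf\xc3\xa9"            # raw UTF-8 bytes, e.g. read off a socket
text = data.decode("utf-8")      # explicit decode, mandatory in Python 3

try:
    result = data + "!"          # Python 3: TypeError; Python 2 would coerce
except TypeError:
    result = text + "!"          # handle text as text instead

print(result)  # café!
```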
> Your downstream customers receive very little benefit for this effort,
They do, actually. While I was porting those same code-bases, I was working with a team of engineers who (along with myself) were also still adding code to meet other incoming requirements. And in my experience, there was a fair bit of "Boy, it'd be nice if X were easier in Python!" where X is something that is easy in Python … 3. The question would often have gone unanswered had we not had devs experienced with Python 3 on the team.
Porting to 3 gets you all the additions that have come to Python that haven't been backported. A better standard library, syntax that better supports you, etc., translate to better productivity for devs as they are now equipped with better tools. This is only going to get worse as libraries drop support for 2.
(And while many things have been backported into third-party libraries, not all of them have, and in particular, syntax changes. And the existing backports are actually quite useful during the process of porting: I can change the code to conditionally depend on the backport library in 2, and use the real deal in 3.)
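As a concrete sketch of that conditional-dependency trick (functools32 is one real Python 2 backport of a stdlib feature; the fib function is just a stand-in workload):

```python
import sys

# Use the backport on Python 2, the real stdlib feature on Python 3.
if sys.version_info[0] >= 3:
    from functools import lru_cache       # stdlib since Python 3.2
else:
    from functools32 import lru_cache     # PyPI backport for Python 2.7

@lru_cache(maxsize=None)
def fib(n):
    # Memoized naive recursion; without the cache this is exponential.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Once the last Python 2 deployment is gone, you delete the `else` branch and the backport pin, and the code is already idiomatic Python 3.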
It sounds like you had a project that really benefited from updating, and that's great! Some of mine have as well.
It is not universally applicable, however. There's a lot of old code that, as I said before, is just fine. Adding a 100 line feature in the old version is easier than porting the whole 10,000 lines over. Just ask COBOL programmers what they're up to right now - it's not porting to a modern environment, it's maintaining the old one.
The other major issue is that Python 3 really didn't fix the major issues with Python that persist to this day. Atrocious byte code performance, a really awful deployment story, and perennial favorite: the GIL. Had just one of these been tackled in Python 3, the migration probably would've happened a lot faster. You might not care about those problems or they may not be problems for you, but they are for some of us and they persist to this day.
I will concede that on the whole, Python 3 is a better Python. But it's not 2x or 3x better. It's 10-20% better, and that's a tough sell to a busy engineer who has better things to do. I can't justify spending time on anything that is a mere 10% improvement when I have a pile of 2x improvements to work on.
I appreciate what you folks are saying, but from my perspective this is exactly what I mean by the absence of a "good technical reason".
I also appreciate that there are a small number of projects (e.g. PyPy) that probably do require some of the special sauce around CPython 2.
But for everyone else ... come on folks. We're developers and engineers, we're supposed to be the creative builders of tomorrow and all that stuff. Is this really so hard?
Absolutely, especially when there is no tangible benefit to doing so. At $day_job it took us over a month to convert our codebase to Python 3 (bit by bit, not all at once), and we still ended up running into errors in production. In our case, we were forced to make the switch because we needed to upgrade some libraries that had dropped Python 2 support, but there is plenty of software out there that Just Works™ and nobody wants to touch it (for good reason!). In those cases, it's way less expensive to pay someone to maintain the interpreter than it is to take on the effort of a full blown conversion.
How did you do it bit by bit? Running multiple interpreters (I'd imagine that to be another major effort - then again possibly well worth it if you're moving to a services architecture anyway)?
Kind of, but probably not in the way you're thinking. All I meant is that we transitioned each of our services from Python 2 to code that was compatible with both Python versions, one by one. We tackled one aspect of Python 3 support at a time, e.g. imports, strings, integer division, third-party libraries, etc. Between future imports and utility libraries like six, most things were fairly easy to make compatible across both versions.
Once we did as much as we could do on Python 2, we had one of our engineers maintain a separate Python 3 branch in a staging environment for a while (maybe two weeks?) that we did some heavy QA on. And once we were fairly confident in the results, we merged and released that over a weekend and fixed issues as they came up.
We did all of this once for each service, but the bulk of the work (>80%) was our legacy monolith. I think the whole thing would have taken way less time with better test coverage, but alas. :)
There's a way, but I only tried it as a proof of concept. It's actually possible to compile the legacy code with Cython and import it into Python 3; then I suppose you could convert one file at a time.
It's possible that this might be more work than just using pylint, mypy, and python-future. It also probably won't work if your code tries to be clever and does some import hacking (although who knows, maybe it would still work).
90% of computer engineering isn't writing software. It's logistics, organization, and incentives.
For Python 3, I think it fell down hard on incentives. It was better, but not better enough to justify rewriting code that worked for a lot of teams, for a lot of years.
When you go to make incompatible changes and you find your userbase kicking and screaming against following you, that's not a technological problem, but it is a problem.
The issue here is opportunity cost. It has nothing to do with difficulty. The time spent porting to Python 3 has to provide a better return on investment than the same time spent doing something else. For a lot of people for a very long time, continuing to now 12 years later, that opportunity cost was such that porting to Python 3 was always a lower priority.
Languages need to serve their users, not the other way around. A lot of us really didn't care about minor syntax tweaks and we didn't need a lot of the new features. What we needed was better performance and a better deployment story, and we have received neither.
This is from my experience, compared with AVR and most of the ARM parts I've worked with.
1) Generally poor documentation. Non-existent at the register level - you are entirely at the mercy of the SDK and what other people have reverse engineered.
2) Crashes on unaligned pointer access. This is something that ARM generally handles with ease (though there are exceptions). Porting code from ARM or 8 bit machines to the ESP8266 can take a fair amount of effort in tracking down all of the unaligned access exceptions that come up. It's extremely frustrating.
3) To that point, the SPI flash access is also beholden to 32 bit alignment. That really complicates things if you port code that relied on a typical SPI flash's ability to access at a 1 byte granularity.
4) Poor compiler support. I think the latest GCC they support is 5.something. It requires a custom backend, so it is vendor-maintained. Contrast with ARM support in GCC, which is generally outstanding.
5) No hardware debug support. You can run GDB over a serial port, but that's nowhere near as powerful as having an actual JTAG/SWD port.
6) The program memory architecture is complex and difficult to use (poor documentation is part of the issue here). There's no on-chip flash - code streams from a SPI flash into a RAM cache. Some code has to run out of RAM and must be placed by the linker as such.
All that being said - the chip works. The Wifi is excellent (I run dozens of these at a time, and they are more reliable than the Unifi routers they connect to). For a $2 FCC-certified module it's unbeatable. I have really come around to like this part in spite of its many flaws - it was built to do one thing very well at a price point no one else could touch, and by and large it succeeded.
I have run into a few of these, I do have to admit, but I don't find any of them to be show-stoppers, necessarily. Like you, I like the ES-series in spite of the flaws. It's comfortable, once you move in.
One last question: platform.io? Pretty good ecosystem in general, and with the wide variety of assets to select from, a lot of your points become less painful ..
Over-engineering is indeed fun - that was part of the point of the project, it was (and remains) a labor of love.
Modularity was actually a design consideration I really struggled with, but at the end of the day, I realized I didn't want to do just another Arduino clone where you put the whole thing together yourself. I designed Chromatron under the assumption that you'd eventually end up with a lot of them, in which case the Wifi and live coding are really useful features - I wanted an all-in-one solution. But for most people who only need 1 or 2 for projects, it's overkill. I think that's part of the reason it didn't sell that well.
Memory and the available CPU time to run the virtual machine. There's also a fader engine to do timed fades per pixel, which eats a fair amount of RAM as well (but it's a nice-to-have).
It was designed for smaller, but highly dynamic, projects. I figured large scale installations were already well served in the DMX/pixel mapping space.
Hi, project creator here. Sadly, the poster was a little behind on catching this... The Kickstarter boards shipped over 2 years ago, and as the hardware didn't really sell well after that, it's mostly just a personal project at this point.
That being said, I am actively working on a port for some stock ESP8266 boards (Adafruit Feather, NodeMCU, etc). I don't really want to sell hardware, but I'd love to see someone else using the software.
Hi! I'm the project creator. While it's true the Kickstarter boards have all sold out, the project is still alive and well. Check out the dev and esp8266 branches - there are commits as of today ;-)
My long term goal for this year is to get a port running on some stock ESP8266 boards (such as NodeMCU and the Adafruit Feather Huzzah). My original hardware design was too complex to build in a cost effective way, but I'd really like to get the software in the hands of those who are interested.