> They chose (perhaps felt forced to chose) to change their body to fit the stereotype.
This is kind of presuming the conclusion, and ignoring the stated experiences of the people being discussed. The concept of gender identity (whether or not it does a good job of it) is explicitly intended to highlight that the causation isn't "I fit this stereotype, therefore I am this", but just "I feel I am this", with the implication that it may (or may not) be reaffirming to fit in socially with other people who are that way. Many don't identify with the stereotypes. Sure, some cling to stereotypes - maybe because they previously felt constrained from expressing themselves, or just as a practical measure against getting misidentified - but that doesn't imply that that's the cause or the goal. If you don't believe the claim that it isn't the cause, that's fair, but then it's just one person's beliefs against another's self-assessment.
The term "transitioning" is also used in a general sense, including "being yourself", as opposed to necessarily taking hormones. Taking hormones, changing paperwork, doctor appointments, losing fertility, etc., is kind of a long, drawn out, stressful ordeal that inherently starts with "being yourself in your own body" until changes kick in; it's pretty unlikely that someone transitioning hasn't considered just not doing the hard part, especially since it's so slow that there's ample time to turn back.
I'd wager most trans people are as opposed to gender essentialism and gender stereotypes as other feminists are. Nothing about accepting that they might actually just not be comfortable in their bodies, or at least just letting them be, implies philosophically accepting gender essentialism. Moreover, making the world more accepting of people just being themselves is pretty closely aligned with just letting trans people be - if they actually don't need to transition, they'd be less likely to in a world where those who don't fit stereotypes, like trans people, are more accepted. And it's not a verboten topic - the Reddit trans communities have these discussions pretty often, when the questions are asked genuinely. It's just that, as with other minority-focused topics, it's often brought up with the clear intention of harassment rather than having an actual discussion, so there's a low tolerance for things that signal disingenuousness.
I can't speak to the complexity of this project, but ROM hacking in general isn't all that complicated - it just takes some persistence.
First you need to find the code relevant to what you want to do. You can take diffs of memory as you do things in the game to narrow down where the relevant addresses are, and then set breakpoints on reads/writes to those locations to find the relevant code. After that, you can mostly just follow the assembly - you can even work backwards up the call stack by reading back to what looks like the beginning of a procedure and searching for jumps to that address. Once you've found the code you want to modify, you just need to find some empty space in the ROM that you can branch out to for your new code.
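The memory-diffing step can be sketched roughly like this (a toy illustration, not tied to any particular emulator; the snapshot arrays and function names are made up):

```typescript
// Compare two RAM snapshots taken before and after an in-game action
// (e.g. taking damage) to find candidate addresses that changed.
function diffSnapshots(before: Uint8Array, after: Uint8Array): number[] {
  const changed: number[] = [];
  for (let addr = 0; addr < before.length; addr++) {
    if (before[addr] !== after[addr]) changed.push(addr);
  }
  return changed;
}

// Repeating the action and intersecting successive diffs narrows the
// candidates; read/write breakpoints on the survivors then locate the code.
function intersect(a: number[], b: number[]): number[] {
  const set = new Set(b);
  return a.filter(addr => set.has(addr));
}
```

In practice many emulators build this workflow in as a "RAM search" or "cheat search" feature, but the principle is the same.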
The trickier bit is when you need to worry about timing. On older consoles, this includes things like changing the HUD or adding wave effects, since you need to make sure whatever work you do gets done in time for HBlank/VBlank.
Some consoles also had built-in functions that can give you an idea of what code is for, like the LZ77 compression on the GBA, which was mostly just used for graphics. Similarly, DMA was most often used for copying sprite data. On newer consoles, where code was often compiled from C (rather than assembly with macros), I'm under the impression you can also do things like pattern-match on the signatures of standard library functions.
Impurity is a lack of constraints. You can, with impure code, write an interpreter for a language that constrains code to being pure, and, implemented correctly, you can then depend on code written in that language being pure.
His point is that there's no lost benefit in calling pure code from impure code - the code is already unconstrained. Calling impure code from pure code, though, means you've lost the purity constraint, which means you've lost the nice emergent properties you get from that constraint.
Even if you write 'pure' code in an impure environment, you can't depend on it being pure, because that constraint isn't actually enforced. No matter how pure I try to keep my JavaScript, I can never depend on it being pure - there could be a bug, or someone could add impurity at the bottom of the callstack, breaking referential transparency.
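A minimal sketch of how that breakage looks in practice (the names here are invented for illustration):

```typescript
// A function written to be "pure": its result should depend only on its input.
// But it closes over shared mutable state, and nothing enforces that the
// state stays fixed.
const tax = { rate: 0.25 };

function addTax(price: number): number {
  return price * (1 + tax.rate);
}

const first = addTax(100);  // 125 with the original rate

// Someone elsewhere in the program mutates the shared state...
tax.rate = 0.5;

// ...and referential transparency is gone: same argument, different result.
const second = addTax(100); // 150
```

Nothing in the language flags this; the "purity" of `addTax` was only ever a convention.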
Following your argument to the extreme, Haskell is also an impure language: there could be a bug in the compiler, or someone could introduce impurity in it! Even worse, someone could call unsafePerformIO!
You would argue: but that goes against the Haskell specification, that impurity is not proper Haskell!
And exactly: the "purity" comes from an abstraction, which is a contract between you and some other developers. The fact that JavaScript the language does not enforce such contracts does not mean that you can not get into agreements with other developers. Sure, the JavaScript interpreter won't complain when the contract is broken, in the same way that the CPU is not complaining when Haskell has a bug, in this case, JavaScript is not the one making the purity promise---a developer is.
As such, I believe that in JavaScript, a library author can make claims about the purity of some code. Sure, some adversarial coder could use monkeypatching to break it---and some adversarial coder could also give you instances of some type class that call unsafePerformIO under the hood. In the end, we are talking about degrees to which purity can be ensured, and levels at which the contracts are enforced.
I know some Haskell developers would like to think that they are programming in this perfect language and that it is impossible to do functional programming in other languages. But I disagree, and actually, I believe that bringing the design tools of functional programming to mainstream languages is a worthy goal that should not be discouraged.
In theory it would be possible to write a specification for a pure subset of JavaScript. But it's very important to acknowledge that this pure subset doesn't exist, and won't exist unless and until such a specification is written, to the point that it's possible to write an automated tool that can verify whether a given piece of code conforms to this specification. And if such a specification did exist, we would likely call it a "language" in its own right.
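As a toy illustration of what such a verifier might look like (a real one would work on a parsed AST and a full specification; this naive scan over source text is purely a sketch):

```typescript
// A deliberately naive "purity linter": flags syntactic constructs that a
// pure subset of JavaScript would have to forbid. The pattern list is
// illustrative and nowhere near exhaustive.
const impurePatterns: [RegExp, string][] = [
  [/\bconsole\./, "console I/O"],
  [/\bMath\.random\b/, "nondeterminism"],
  [/\bDate\.now\b/, "reading the clock"],
  [/\bfetch\s*\(/, "network I/O"],
];

function findImpurities(source: string): string[] {
  return impurePatterns
    .filter(([pattern]) => pattern.test(source))
    .map(([, reason]) => reason);
}
```

Even this trivial version shows why the spec has to come first: without a definition of the pure subset, there's nothing to check conformance against.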
Once you write that language and confirm that your library conforms to it, you can say your library is pure. Saying your library is pure before you've actually formally checked it is like saying your library doesn't have any bugs before you've made any attempt to look for them - i.e. almost certainly false, and irresponsible enough that the claim qualifies as a lie in my book.
I 100% agree that purity is still incredibly useful for reasoning about and maintaining code in any language, but I think you're underselling the benefits of statically verifying that you haven't unintentionally introduced impurity somewhere. Oh, and ideally it'd also allow for some nice optimisations, but I'm not sure to what extent that's being done.
> First, literacy isn't completely related to the writing system. Look at Spanish speaking countries, where the alphabet is more phonetic than the English alphabet.
Perhaps more to the point, Japan, which uses Chinese characters for nouns, adjectives, adverbs, and verb-roots, has a very high literacy rate. [1]*
* Not trying to cherry-pick for Japan being at the top, but Japan's not listed in the 2015 UNESCO report that's referenced all over the place. This US DoE report from 2013 seems legitimate enough.
> Perhaps more to the point, Japan, which uses Chinese characters for nouns, adjectives, adverbs, and verb-roots, has a very high literacy rate.
While literacy might be high, there's been a stream of articles over the years in Japanese media noting that literacy in kanji (the component of Japanese based on Chinese characters) in particular has been steadily declining, even among the highly educated; e.g.
http://www.japantimes.co.jp/news/2013/07/03/national/kanji-w...
That's a fair point - seems related to the fact that people aren't writing as much anymore.
Anecdotally, I feel like Japanese picks a better point in the tradeoff space than other languages - Chinese characters are used where they get some leverage, with phonetics for everything else, even with a bunch of half-kanji words. Still, I'd imagine some of the less-commonly used ones will get dropped over time.
On the other hand, English+Emoji+Txt simplification seems to be approaching the same point from the other direction.
Practically speaking, things are more nuanced than the article would imply, though - they're not 3000 completely unique characters, and at least for a lot of nouns, the radicals do serve to broadly categorize things as, e.g. "related to water", which can serve as a reading aid similar to how root words do. The author further claims "With a phonetic writing system like an alphabet or a syllabary, you need only learn a few dozen symbols and you can read most everything printed in a newspaper.", but that's only accurate insofar as reading means "pronouncing"; it's a tradeoff with being able to infer meaning.
They seem to be restocking in small batches and selling out immediately, at least online. According to nowinstock.net, in the US, Best Buy's been getting shipments most consistently, and received one as recently as last week. Some people find out what days they get shipments and just wait outside the door for them to open.
It's certainly a reasonable assumption for studies across fields with hourly pay, or ones including both part-time and full-time workers (such as the BLS survey cited by the Forbes article), but I'm not sure we can make that assumption for software development - programmer productivity in a given time period is, on its own, somewhat mythically considered to vary by orders of magnitude.
You don't think people who are willing to work longer hours end up getting paid more? Are people who work more than forty hours each week doing it for nothing?
> pay little or no attention to memorizing characters
I'm not usually one to speak bluntly, but, at least for Japanese, speaking from a lot of experience, this is flat out the wrong approach.
Older books like the romanized "Japanese for Busy People" took this approach. I started with that book; it felt easy at the time, but it was immensely impractical - you don't build up enough of a base to learn from any other material. It's completely useless. After over a year, in a formal setting, I ultimately decided to just completely start over.
Currently, Japanese courses mostly just focus on Hiragana and Katakana, since material for children usually has pronunciation guides over the kanji anyway. This is workable, but it gives you no leverage where you could have some - you end up memorizing a thousand individual words, some of which maybe sound vaguely similar in parts, when you could instead learn a dozen characters and know that the words share roots. Phonetic approaches work for children because they can memorize through immersion with native speakers, and have additional clues, like context and intonation emphasizing root words, to help pick up new vocabulary.
When I started studying Chinese, we instead went with a "radicals-first" approach, and after just a short time of that, my Japanese vocabulary exploded - I could suddenly pick out all the seafood on a menu, and I started asking people mid-conversation "oh, does that use these characters?", and I could understand what it meant without them having to struggle to translate to English: Heart+Electric+Map - it's probably an electrocardiogram, especially if we're talking about hospitals. If you haven't learned the kanji, you don't even know if you can make that association from the sounds, much less how to ask that question.
That's how you get to fluency, by building up a foundation that lets you learn without active study. You can't get there by avoiding the basis of word composition in the language. Apologies if my frustration with characters-last approaches came through a little strong - they wasted a lot of my time.
Mair's point is that "building up a foundation" means learning the language not the script.
> characters-last approaches
I think it's more accurate to call what Mair's advocating a "language-first-characters-later-if-necessary" approach.
Here's what Mair said about Chinese classes:
It’s a tragedy that so many young Americans spend years stuffing their heads with hundreds of Chinese characters, gaining no usable proficiency, and then forgetting them all by the time they’re 25. If the Chinese would wake up and permit pinyin to function as part of a genuine digraphia, then I would say it might make sense for maybe 2 percent of the population to learn up to third-year level of Mandarin—strictly romanized, mind you. But there are exceedingly few teachers who are enlightened enough to teach it that way.
They're not reasonably separable. Memorizing romaji, or even strings of hiragana, removes the associations between root-words that are related, like those radicals that let you identify menu items, and removes the visual aid for identifying root words in context. Memorizing associations between arbitrary groups of sounds without the radicals is actually harder, it just sounds easier to those intimidated by the idea of learning characters.
Given, much to Mair's displeasure, that actual native media is written with kanji, they're immediately necessary, and not studying kanji effectively bars students from any learning outside the classroom until they do - which is completely counter to giving students either the context necessary for immersion or the reinforcement needed for traditional study to work.
Maybe it's merited for Chinese, with its larger character set, but Mair's approach just sacrifices students' ability to learn on their own in favor of lowering attrition rates in intro-level classes. It's actively harmful to students aiming for fluency to build up the characters as this insurmountable mountain in the distance before they can actually use their knowledge, rather than the useful compositional blocks and visual aids that they are.
(Notice the native narrator has no trouble reading the kana-only writing.)
It's true that much of current native media still uses kanji, but they aren't "immediately necessary". There are books/manga with furigana and software tools like Rikaichan/kun for online articles.
People should be able to learn whatever they want outside of the classroom, including kanji (being able to recognise commonly used characters certainly helps in understanding native media). It's just that grammar, vocabulary, etc (real language) are much more important than kanji for beginners.
Sure, Korean is phonetic, and so learning it phonetically makes sense, since you won't be forced to re-learn it to be able to use your knowledge. Speaking from experience studying Korean, though, attaining a usable vocabulary through rote memorization is dramatically harder than if you're familiar with the 한자/漢字 the words are derived from, for exactly that reason - even though hanja isn't used directly much in Korean, we started studying it in the third semester of Korean class, and it was helpful, though I'd agree it was maybe a bit excessive there.
Games that use a strict hiragana character set are readable by natives because they're already fluent in speaking; they're unnecessarily difficult for non-native speakers, particularly since they tend to only have the bare minimum of separators between words. After living in 4 prefectures, working for a small Japanese company, and dating a native speaker for roughly a decade, I still find those games hard to read - it's just hard to identify the shapes of words, like reading English with no spaces; you have no choice but to guess where the boundaries are and sound everything out syllable by syllable, because everything's written differently from how you'd see it anywhere else.
If some people want to put off kanji or memorize phrases from a travel guide, that's their prerogative, but not teaching the compositional aspects of kanji, and leaving students to fend for themselves after selling the idea that it's all rote memorization (and seemingly to push this implausible ethnocentric dream of the language one day being exclusively romanized) is just a disservice to the students, as it was to me.
> strict hiragana character set are readable by natives because they're already fluent in speaking
This is an argument for spending more time trying to become fluent in speaking (by learning vocab/grammar/etc) than spending time learning how to draw a bunch of characters.
The use of spaces would be necessary if Japan were to switch to kana, and that's one of the three things Kanamoji-kai recommends as well.
Without spaces, kana-only writing would be hard to read for most native speakers.
I honestly don't see any issue in not teaching kanji in Japanese classes, any more than not teaching cursive writing or Latin in English classes. They are just not essential. (Please do not start a lecture on how Latin helps you understand English.)
> ethnocentric dream of the language one day being romanized
I'm not saying Japan should switch to romaji. I think in Japan's case kana would be much more realistic.
Also, I'm not advocating abolishing kanji either (people should be able to write kanji all day everyday if that's what they want to do).
I just want the government to (a) stop teaching kanji in public schools, and (b) mandate the use of kana (or some other phonetic writing system) in government documents.
I don't see how it's "ethnocentric". For most non-native speakers it's much easier to learn this way. The only non-native Japanese learners I've met who found kana-kanji writing "easier to read" were people who already knew kanji, e.g. Chinese people who were learning Japanese and were not fluent in speaking. The truth of the matter is they were just familiar with the characters, and they often didn't even know how to pronounce the words.
I have to agree with T-R. Japanese is much easier to read with Kanji. If you run into a kanji you don't know, that's too bad, but I run into words I don't know in English, too. You just look it up. English spelling is just as complicated, in my opinion.
Also, "stop teaching kanji in public schools" is pretty much the same as abolishing kanji.
You read my conversation with T-R and your takeaway is I think teaching kanji to beginners is a bad idea because "I run into a kanji I don't know"? Please.
> English spelling is just as complicated, in my opinion
That's a separate issue, but I agree with you to some extent. Here's a Feynman quote (which I agree with 100%):
If the professors of English will complain to me that the students who come to the universities, after all those years of study, still cannot spell "friend," I say to them that something's the matter with the way you spell friend.
> "stop teaching kanji in public schools" is pretty much the same as abolishing kanji
I don't think so. There are many things that aren't taught in public schools but taught in private classes (calligraphy/ikebana/yoga/etc).
It's good that people who care about these cultures enjoy them in their own time and other people are left alone and not being forced to learn something they think has a low ROI.
If kanji is such a good idea as you claim, it will survive and thrive without government coercion, don't you think?
"standard 12pt font" is biased for latin characters, though - at the larger font size, the Chinese characters are still getting much better information compression than the latin characters, which need to expand horizontally by using several characters per unit of meaning, plus separators (spacing and hyphens) to even be readable. In practice, even adjusting for font size, paragraphs of Chinese and Japanese are almost always shorter than latin-language equivalents.
Of course, the real winner of this argument is probably Korean.
Actually, I think Korean is not a winner. Its block-shape dictates that a character like 이 take the same amount of space as 핥, so there's a limit on how small a character goes. (There are fonts that try to break out of the standard square shape, but I don't think any found wide use.)
I think proportional fonts have a better space utilization by their nature. Anyways, I don't think that's an important metric. It's not like we live in an age where paper is scarce.
Kanji isn't so bad - most of the more complex characters are built up out of smaller ones, and even the smaller ones, as the article briefly hints at, have a primary radical that roughly categorizes them - learn the radical for "water", and you can identify all the seafood on a menu, even if you can't fully read a single character. They're actually really helpful for both comprehension and reading speed, like learning Latin root words would be for English, but a lot more consistently applicable, so they give you a lot of leverage.
The stroke order seems annoying at first, but it's pretty consistent, and actually really helpful for memorizing. The different pronunciations also seem intimidating at first, but for the most part:
- One's for composing with other characters
- Others are mostly variations on that one for fluidity, where people will still understand you if you get it wrong
- There may be one for when it's used on its own, which gets used so often you quickly think of it as basic
- Anything else is probably an edge case that you'll never see