Hacker News | pezezin's comments

The author mentioned the DE-10 Nano on another comment, which is the original board used by the MiSTer project, based on the Cyclone V.

I am currently living in Japan, and it seems that they follow American-style exams. I don't know if that is a result of the post-war occupation, or if it was already like that before WW2.

Back home in Spain, though, we follow the same style you mentioned: a single national-level exam.


That's quite impressive. 70% is obviously way too big for a MiSTer core, but I wonder if one day we will have an affordable FPGA board able to simulate a late '90s PC...

> Also it had forward texture mapping which significantly improves cache utilization and would be beneficial even today.

Not really. Forward texture mapping gives you linear texture reads at the cost of non-linear framebuffer writes; reverse texture mapping has the opposite tradeoff. But that assumes rectangular textures without UV mapping, like on the Sega Saturn; the moment you use UV mapping, texture access will be non-linear no matter what. Besides that, forward texture mapping runs into serious difficulties the moment the texture and screen sampling ratios don't match, which is pretty much always.

There is a reason why only the Saturn and the NV1 used forward texture mapping, and the technique was abandoned afterwards.


It depends on your use case. Storing WGS84 coordinates as 32-bit floats can incur errors of several meters. That might be fine for your fitness-tracking application, but not for serious GIS usage.

Case in point: many years ago I was working on some software to generate 3D models from drone pictures. The first step of the pipeline was to convert from WGS84 to ECEF (https://en.wikipedia.org/wiki/Earth-centered,_Earth-fixed_co...), an absolute Cartesian coordinate system. Well, it turns out that at the scales involved, about 6.371 million meters, 32-bit floats have a precision of roughly half a meter, so the resulting models were totally broken.

Moving to 64-bit floats fixed this issue.
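The precision cliff is easy to demonstrate: at ECEF magnitudes (around 6.4 million meters) adjacent representable 32-bit floats are exactly 0.5 m apart, so any sub-meter offset snaps to a half-meter grid. A quick stdlib-only sketch (illustrative values, not from the pipeline described above):

```python
import math
import struct

R = 6_371_000.0  # Earth radius in meters, the magnitude of ECEF coordinates

def to_float32(x: float) -> float:
    """Round a Python float (64-bit) to the nearest 32-bit float and back."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Gap between adjacent representable 64-bit floats at this magnitude:
print(math.ulp(R))            # ~9.3e-10 m: nanometer-level precision

# Round-tripping through a 32-bit float destroys sub-meter offsets:
x = R + 0.25                  # a point 25 cm away from R along one axis
print(to_float32(x) - x)      # off by ~0.25 m: snapped to the half-meter grid
```

The ULP of a float32 between 2^22 and 2^23 (which covers Earth-scale coordinates in meters) is 2^-1 = 0.5 m, which matches the half-meter errors mentioned above.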


Isn't that more a consequence of using a float to represent the number? It would be akin to trying to represent .1, which has no exact binary representation. If your goal is to represent decimals, you are best off not using floats.

Granted, just storing it as a 32-bit integer is probably awkward for most uses. BCD just isn't common among programmers. (Nor is fixed point, in general.)


If your goal is just to store the coordinates in a database, sure, use fixed point or whatever.

But any kind of calculation will involve a great deal of trigonometry, square roots, and the like. It is just easier to use floating point. Examples:

https://en.wikipedia.org/wiki/Geographic_coordinate_conversi...

https://en.wikipedia.org/wiki/Vincenty%27s_formulae

https://gist.github.com/govert/1b373696c9a27ff4c72a
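For reference, the WGS84-to-ECEF step described in the first link boils down to a few lines of double-precision math. A minimal sketch (the constants are the standard WGS84 ellipsoid parameters; variable names are my own):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis, meters
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float):
    """Geodetic (latitude, longitude, height) -> ECEF (x, y, z), meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude:
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))   # (6378137.0, 0.0, 0.0) on the equator
```

Every intermediate here (sines, square root, products at the 1e6–1e7 m scale) wants the ~16 significant digits of a double; doing this chain in fixed point is possible but far more work.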


You should be able to do the calculations in fixed point easily enough? Indeed, most embedded systems used to use fixed point due to the lack of floating-point hardware.

I would actually think fixed point would be beneficial, since its accuracy is more tightly controlled than floating point's. Yes, you lose the range of floating point, but I just don't see how that matters for numbers that are constrained to +/-180 by definition.

That said, I can't argue with the fact that it is faster to get going with plain floats/doubles, given how widely they are supported in standard libraries.
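For illustration, one common fixed-point convention (used by some GPS receiver protocols, e.g. u-blox UBX) stores degrees as a signed 32-bit integer scaled by 1e7, which covers +/-180 with roughly centimeter-level resolution. A minimal sketch (the Tokyo longitude is just an example value):

```python
SCALE = 10_000_000  # degrees * 1e7: +/-1.8e9 fits in a signed 32-bit integer

def encode(deg: float) -> int:
    """Store a coordinate as fixed point with 1e-7 degree resolution."""
    return round(deg * SCALE)

def decode(fixed: int) -> float:
    return fixed / SCALE

lon = 139.7006358  # somewhere in Tokyo
fixed = encode(lon)
print(fixed)          # 1397006358 -- well within int32 range (+/-2147483647)
print(decode(fixed))  # 139.7006358, recovered to 1e-7 degrees (~1.1 cm)
```

At the equator, 1e-7 degrees of longitude is about 1.1 cm, which is far better than the half-meter granularity of a 32-bit float at those magnitudes.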


> For example, one-third of the top 100 mobile games in Japan currently come from China.[20]

China is indeed taking the mobile game world by storm. Go to Akihabara and you will see these huge billboards of Chinese games like Genshin Impact or Honkai Star Rail. China is starting to outplay Japan at their own otaku game.


It is actually a Greek word; that's why it is the same in most other languages.

https://en.wikipedia.org/wiki/Holocaust_(sacrifice)


Thanks. I did not know about the etymological roots.

I live in Japan, and IMHO the problem is that it is an extremely conservative and risk-averse country: "if it ain't broke, don't fix it" taken to the extreme. They had a period of innovation after WW2 out of necessity, but after the bubble burst in 1990 they reverted to their old selves.

Consoles used off-the-shelf CPUs until the 6th generation. Even the Dreamcast and the first Xbox used off-the-shelf CPUs, it was only the PS2 and the GameCube that started the trend of using custom-made CPUs.

Not entirely accurate.

The PSX's CPU is semi-custom. The core is a reasonably stock R3000, but the MMU is slightly modified and they attached a custom GTE coprocessor... I guess you can debate whether attaching a coprocessor counts as custom or not (but then the PS4/Xbone/PS5/XbS use unmodified AMD Jaguar/Zen 2 cores).

IMO, the N64's CPU counts as off-the-shelf... however the requirements of the N64 (especially cost requirements) might have slightly leaked into the design of the R4300i. But the N64's RSP is a custom CPU, a from scratch MIPS design that doesn't share DNA with anything else.

But the Dreamcast's CPU is actually the result of a joint venture between Hitachi and Sega. There are actually two variants of the SH4, the SH4 and SH4a. The Dreamcast uses the SH4a (despite half the documentation on the internet saying it uses the SH4), which adds a 4-way SIMD unit that's absolutely essential for processing vertices.

We don't know how much influence Sega's needs had over the whole SH4 design, but the SIMD unit is absolutely there for the Dreamcast; I'm pretty sure it's the first 4-way floating-point SIMD unit on the market. The fact that the SH4/SH4a were later sold to everyone else doesn't mean they were off the shelf.

Really, the original Xbox using an off-the-shelf CPU is an outlier (technically it's a custom SKU, but really it's just a binned die with half the cache disabled).


The huge interconnect would also be useful for HPC tasks. The FP8, not so much: HPC still loves FP64.
