Hacker News | WillFlux's comments

I'm surprised by this. 1280x768 is an unusual resolution, what display timings are you using?

I've found 1024x768 and 1280x720 are both well supported. I tend to use these display timings: https://projectf.io/posts/video-timings-vga-720p-1080p/


Thanks for the link. I meant to write 1280x720@60Hz for 16:9 HDMI TVs/monitors but have used 1024x768 quite a bit in the past for 4:3 VGA monitors and ended up mixing the two.

VIDEO_ID_CODE 4 (1280x720) works on all monitors I tested while 1 (640x480) only displays on half of them:

https://github.com/nand2mario/nestang/blob/master/src/hdmi2/...
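For reference, the widely published 1280x720@60Hz timings (the ones the Project F post also lists) work out like this. A quick sketch; the porch and sync widths below are the standard CEA-861 values:

```python
# Standard CEA-861 1280x720 @ 60 Hz timing.
h_active, h_front, h_sync, h_back = 1280, 110, 40, 220
v_active, v_front, v_sync, v_back = 720, 5, 5, 20

h_total = h_active + h_front + h_sync + h_back   # 1650 pixels per line
v_total = v_active + v_front + v_sync + v_back   # 750 lines per frame

pixel_clock_hz = h_total * v_total * 60          # 74.25 MHz
print(h_total, v_total, pixel_clock_hz)          # 1650 750 74250000
```

Hitting that 74.25 MHz pixel clock exactly (rather than an approximation) is one reason some monitors are pickier than others.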


Hello, I'm the author of the Project F blog. I'd be happy to field any questions you have.


This is an exciting project. Have you thought about the software and OS yet?


Thanks. I want to keep the software simple while having proper support for the graphics and audio hardware. I already have some prototype software written in RISC-V asm. I’ll probably use Lua as the first high-level language, as it has a small code base and runs well on memory-constrained systems.

I’m not yet sure what features the OS will offer; it partly depends on interrupts and whether I support virtual memory. But I’m not trying to create another UNIX; there are plenty of those already. However, the system will be modern, e.g. using UTF-8 encoding.


This makes sense. I’ve thought about a similar project and come to the same conclusions.


RISC-V takes a different approach to branching, even compared to other RISC processors. There are no status registers, AKA condition codes.


MIPSr6 added RISC-V like compare and branch instructions.


Load-store architecture is a defining quality of RISC in general and RISC-V in particular. If you're used to the rich set of addressing modes in x86 or 68K, coding in RISC-V asm is a bit of a shock, but I'm definitely warming to it.


Burrell Smith and Andy Hertzfeld worked for Radius on the Full Page Display. How different would Apple have been if it had held onto more of the original Macintosh team in the mid-80s?


68000 is, in many ways, the pinnacle of assembler for programming, but RISC-V is pretty fun, too. I hope RISC-V tempts a few more people to try asm programming (again).


I got into 68000 programming quite late (6yr ago), but I have been enjoying it (so far Amiga, Atari ST, rosco-m68k). It is a very programmer-friendly instruction set architecture.

RISC-V I started playing with more recently (early 2023, thanks to VisionFive 2), and it feels like my old favorite (MIPS), without the baggage MIPS carried.

It is a pleasure to work with this many GPRs, and the comfortable alternative names the official ABI offers for them. I am loving it so far.

I expect to have RVA22+V hardware soon (Milk-V Oasis). Very much looking forward to playing with the vector extension on that.


Yep, same, I am keeping an eye on the Oasis, but to run powerful GPU drivers (much user space would have to be ported from C++ to hand-written RISC-V assembly, SDK included). Don't rush it though: concurrent access and memory coherence of device memory are still not finalized.

I have been coding quite a lot of x64 recently, and the limitation of 16 GPRs has been painful. I am sure that when I crank up on RV64 assembly programming, those 32 GPRs will feel like fresh air.

On the other hand, I am not fond of the ABI register names, and the pseudo-instructions involving mini-compilation. I'll stick to xNUMBER register names and won't use pseudo-instructions, just as I will avoid any abuse of the preprocessor.


> On the other hand, I am not fond of the ABI register names

Why? They're a simple substitution, and very helpful for following the ABI.

>and the pseudo-instructions involving mini-compilation.

Again, why? These aren't specific to the assembler used, but rather, defined in the specification itself. This means they are reliable, and will always be there for as long as you use a RISC-V compliant assembler.

They are thus also the register names you will see in disassembler output, debuggers and other tools.

Also, you might be interested in this new RVA22+V board[0].

0. https://forum.banana-pi.org/t/leading-the-future-of-computin...


The standard pseudo-instructions are more than a notational convenience. They express idioms that get treated differently, sometimes also by hardware.

For example, `li` gets expanded by the assembler into `lui` and `addi`, which on larger RISC-V cores get recognised and fused back into a single op. Using `xori` instead of `addi` would have had the same result but wouldn't get fused.

Next, some idioms get recognised and automatically assembled into "compressed" 16-bit instructions to save space. For example "mv rd,rs" and "addi rd, rs, 0" both get assembled into "c.mv rd,rs". And on a larger RISC-V core, "c.mv" could be only a register rename in the decoder, thus taking 0 cycles.
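To make the "mini-compilation" concrete, here's a sketch in Python of how an assembler might split a 32-bit immediate for the `li` pseudo-instruction (RV32, simplified; the `+`/`-0x1000` adjustment compensates for `addi` sign-extending its 12-bit immediate):

```python
def expand_li(imm32):
    """Split a 32-bit immediate into lui/addi parts, roughly as an
    assembler expanding the `li` pseudo-instruction would."""
    imm32 &= 0xFFFFFFFF
    lo = imm32 & 0xFFF
    if lo >= 0x800:            # addi sign-extends its 12-bit immediate,
        lo -= 0x1000           # so borrow one from the upper part
    hi = ((imm32 - lo) >> 12) & 0xFFFFF
    return hi, lo              # emit: lui rd, hi ; addi rd, rd, lo

hi, lo = expand_li(0x12345FFF)
print(hex(hi), lo)             # 0x12346 -1
```

So `li rd, 0x12345FFF` becomes `lui rd, 0x12346` followed by `addi rd, rd, -1`, and it's that `lui`+`addi` pair that a big core can spot and fuse.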


Intel might not even be making another generation of discrete consumer GPUs. See the numerous stories in the tech press over the last few months.

We have to hope AMD has the price, and just as importantly volume, to make the GPU market competitive.


> Intel might not even be making another generation of discrete consumer GPUs

That might be ok. The integrated graphics on Intel chips are getting better - probably 100% thanks to the discrete GPU effort.

Apple has shown with the M1/M2 that integrated graphics can be really quite good even without high-end Nvidia performance. If Intel matches that, they could own the low-to-mid range just by selling CPUs and leave Nvidia in a pickle with no profitable market for their binned chips.


Apple delivers an absolute fuckton of memory bandwidth to their SoCs. A base M2 is swimming in 100GB/sec of memory bandwidth, while the i7-1265U gets 4/5 of that. All of the M1 dies are well above that number, while on the Intel side the top-of-the-line H and HX monsters are still limited to the 82GB/sec that dual-channel LPDDR5-5200 puts out.


Intel is going in the right direction: the 13th-gen chips are at 89.5GB/sec with DDR5-5600 memory.

Don't think about the present, think about what the landscape could look like in 3-5 years ;)


I think Apple is achieving that bandwidth with DDR4.


They're achieving that bandwidth by putting ridiculous bus widths on the SoCs. The M1 Pro has a 256-bit bus to memory, the M1 Max has a 512-bit bus to memory, and the M1 Ultra has a 1024-bit bus to memory.
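The numbers in this thread fall straight out of bus width times transfer rate. A quick sketch (the bus widths and memory speeds are the ones quoted above; note vendors round their marketing figures, so e.g. Apple quotes the 512-bit part as "400GB/s"):

```python
def peak_gb_per_s(bus_bits, mt_per_s):
    """Peak DRAM bandwidth: bus width in bits x transfer rate in MT/s."""
    return bus_bits / 8 * mt_per_s / 1000   # bytes/transfer x MT/s -> GB/s

print(peak_gb_per_s(128, 6400))  # M2: 128-bit LPDDR5-6400   -> 102.4
print(peak_gb_per_s(128, 5600))  # dual-channel DDR5-5600    -> 89.6
print(peak_gb_per_s(512, 6400))  # M1 Max: 512-bit bus       -> 409.6
```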


No, Apple isn't achieving that bandwidth with DDR4: "M2 uses LPDDR5-6400"

Don't get me wrong, it's stunning performance, and having it on the same package as the CPU and GPU has other benefits.


Ah, they upgraded the RAM between the M1 and M2 then.


IMO that shouldn't be a concern to those looking at buying this gen. The tech will move into their CPUs' integrated graphics, so drivers will be maintained well into the future.


The examples aren’t limited to VGA; I support four different outputs with these designs.

* VGA using a Pmod board (you could also create your own resistor ladder)

* DVI using the TI TFP410 on the DVI Pmod board

* DVI generated on FPGA with a Verilog TMDS encoder (no IC required)

* SDL simulation on a PC

DVI is a subset of HDMI, so it works on modern TVs and monitors.

You can find the source on GitHub: https://github.com/projf/projf-explore/tree/main/graphics/fp...


Even if you don’t have an FPGA, you can run these hardware designs on your PC with SDL and Verilator.

It’s really simple to set up: https://projectf.io/posts/verilog-sim-verilator-sdl/


The big disadvantage of not having a real FPGA is that you won't be conscious of the very real LUT/gate limits. A simulation will happily allow you to apply all sorts of nice compartmentalizations and abstractions, without making you understand that they would cost significant money if you tried to find an FPGA to fit them into.

This was my biggest shock when first working with FPGAs, while naively using a software mentality. Most everything had to be re-written once the simulations were done.


Hello, I'm the author of the Project F blog. I've almost finished a complete overhaul of this series: animation and double-buffering are coming in October.

I'd be happy to field any questions you have.


Nice work :) I think implementing a VGA controller seems a lot nicer in Verilog/VHDL than on an MCU.

The TI chip you're using for DVI looks interesting too; I hadn't heard of that before.

It looks like you're going to use the FPGA's BRAM for double buffering? I started implementing double buffering for LED strips in VHDL, but need to get back to finishing the SPI controller for it.


Thanks :)

The TI TFP410 chip is on the 1BitSquared DVI Pmod board: https://docs.icebreaker-fpga.org/hardware/pmod/dvi/

I've also got designs that generate DVI on the FPGA with TMDS encoding (no external IC required). I've never polished or written them up, but you can see an example here:

* https://github.com/projf/projf-explore/blob/main/graphics/fp...

* https://github.com/projf/projf-explore/blob/main/lib/display...

I'm using BRAM for framebuffers as it allows me to focus on the graphics rather than memory controllers and access. BRAM gives you dual ports and true random I/O; DRAM is much more complex.
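As a back-of-envelope check on why BRAM framebuffers are feasible at low resolutions: a 320x180 buffer (a quarter of 720p in each dimension) at 4 bits per pixel fits in a handful of blocks. The 18 Kbit block size below is an assumption matching Xilinx 7-series parts like the Arty mentioned upthread:

```python
import math

# Framebuffer sizing sketch: 320x180 at 4 bits/pixel,
# stored in 18 Kbit BRAM blocks (Xilinx 7-series assumption).
width, height, bpp = 320, 180, 4
fb_bits = width * height * bpp            # 230,400 bits (28,800 bytes)
bram_bits = 18 * 1024                     # 18,432 bits per block
blocks = math.ceil(fb_bits / bram_bits)   # 13 blocks for one buffer
print(fb_bits, blocks, 2 * blocks)        # double buffering doubles it
```

Go much beyond that resolution or colour depth and you run out of BRAM fast, which is when DRAM controllers become unavoidable.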


Just here to say thanks for this; it looks awesome and is firmly on my reading list :)


Meta-gripe: the font you are using is too thin; it's in the top 10% of hard-to-read websites.

> "游ゴシック", YuGothic, "ヒラギノ角ゴ Pro", "Hiragino Kaku Gothic Pro", "メイリオ", Meiryo, sans-serif;


If you'd link to a screenshot of what you see, that would be helpful.

I have noticed the font renders thinner on Windows.

I plan to look at the design over the winter: some things could definitely be improved.


Firefox on OSX, https://i.imgur.com/G0fuP1C.png

BTW the article is wonderful, thank you for making it. I have an Arty board but I will be running through it in Verilator.


Thank you. I can't promise to get to the new design until early 2023 as I have many hardware designs I want to finish this year.

Once you've got a design working in Verilator, I strongly recommend running it on an actual board if you can: nothing beats the feeling of running on hardware :)


Nice! How about a chapter on ray-tracing?


Oh, I’ll get to that :D


Do you think that, with a customized analog board, CRT displays could have a function similar to G-Sync? What would be involved?

