Thanks for the link. I meant to write 1280x720@60Hz for 16:9 HDMI TVs/monitors but have used 1024x768 quite a bit in the past for 4:3 VGA monitors and ended up mixing the two.
VIDEO_ID_CODE 4 (1280x720) works on all monitors I tested, while 1 (640x480) only displays on half of them:
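(If I have the spec right, those numbers are CEA-861 Video Identification Codes: VIC 1 is 640x480p60 at 4:3, and VIC 4 is 1280x720p60 at 16:9.)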
Thanks. I want to keep the software simple while having proper support for the graphics and audio hardware. I already have some prototype software written in RISC-V asm. I'll probably use Lua as the first high-level language, as it has a small code base and runs well on memory-constrained systems.
I’m not yet sure what features the OS will offer; it partly depends on interrupts and whether I support virtual memory. But I’m not trying to create another UNIX; there are plenty of those already. However, the system will be modern, e.g. using UTF-8 encoding.
Load-store architecture is a defining quality of RISC in general and RISC-V in particular. If you're used to the rich set of addressing modes in x86 or 68K, coding in RISC-V asm is a bit of a shock, but I'm definitely warming to it.
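A tiny illustration (a hypothetical fragment, using the standard ABI aliases t0/a0): what the 68K does in one read-modify-write instruction takes three on RISC-V.

    # 68000 can increment memory in place: addq.l #1,(a0)
    # RISC-V is load-store, so the same update is three steps:
    lw   t0, 0(a0)     # load the word a0 points at
    addi t0, t0, 1     # modify it in a register
    sw   t0, 0(a0)     # store it back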
Burrell Smith and Andy Hertzfeld worked for Radius on the Full Page Display. How different would Apple have been if it had held onto more of the original Macintosh team in the mid-80s?
The 68000 is, in many ways, the pinnacle of assembler from a programmer's point of view, but RISC-V is pretty fun, too. I hope RISC-V tempts a few more people to try asm programming (again).
I got into 68000 programming quite late (six years ago), but I have been enjoying it (so far: Amiga, Atari ST, rosco-m68k). It is a very programmer-friendly instruction set architecture.
RISC-V I started playing with more recently (early 2023, thanks to VisionFive 2), and it feels like my old favorite (MIPS), without the baggage MIPS carried.
It is a pleasure to work with this many GPRs, and the comfortable alternative names for them that the official ABI offers. I am loving it so far.
I expect to have RVA22+V hardware soon (Milk-V Oasis). Very much looking forward to playing with the vector extension on that.
Yep, same. I am keeping an eye on the Oasis, mainly to run powerful GPU drivers (much of the user space would have to be ported from C++ to hand-written RISC-V assembly, SDK included). Don't rush it, though: concurrent access and memory coherence for device memory are still not finalized.
I have been writing a lot of x64 recently, and the limitation of 16 GPRs has been painful. I am sure that when I ramp up on rv64 assembly programming, those 32 GPRs will feel like fresh air.
On the other hand, I am not fond of the ABI register names, and the pseudo-instructions involving mini-compilation. I'll stick to xNUMBER register names and won't use pseudo-instructions, just as I avoid any abuse of the preprocessor.
>On the other hand, I am not fond of the ABI register names
Why? They're simple substitutions, and very helpful for following the ABI.
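For instance, these two lines assemble to the identical machine code; the ABI names are pure aliases:

    add a0, a1, a2       # ABI names
    add x10, x11, x12    # same registers by number, same encoding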
>and the pseudo-instructions involving mini-compilation.
Again, why? These aren't specific to the assembler used, but rather defined in the specification itself. This means they are reliable and will always be there for as long as you use a RISC-V compliant assembler.
The ABI names, likewise, are the register names you will see in disassembler output, debuggers, and other tools.
Also, you might be interested in this new RVA22+V board[0].
The standard pseudo-instructions are not merely standardized; they express idioms that get special treatment, sometimes by the hardware itself.
For example, `li` gets expanded by the assembler into `lui` and `addi`, which on larger RISC-V cores get recognised and fused back into a single op.
Using `xori` instead of `addi` would have had the same result but wouldn't get fused.
Next, some idioms get recognised and automatically assembled into "compressed" 16-bit instructions to save space. For example, `mv rd, rs` and `addi rd, rs, 0` both get assembled into `c.mv rd, rs`. And on a larger RISC-V core, `c.mv` could be handled as just a register rename in the decoder, taking 0 cycles.
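A concrete sketch of the `li` expansion (constant chosen arbitrarily):

    li   t0, 0x12345      # what you write
    # what the assembler actually emits:
    lui  t0, 0x12         # t0 = 0x12000 (upper 20 bits)
    addi t0, t0, 0x345    # t0 = 0x12345 (lower 12 bits)
    # a big core's decoder can recognise this pair and fuse it into one op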
> Intel might not even be making another generation of discrete consumer GPUs
That might be ok. The integrated graphics on Intel chips are getting better, probably 100% thanks to the discrete GPU effort.
Apple has shown with the M1/M2 that integrated graphics can be really quite good even without high-end Nvidia performance. If Intel matches that, they could own the low-to-mid range just by selling CPUs and leave Nvidia in a pickle with no profitable market for their binned chips.
Apple deliver an absolute fuckton of memory bandwidth to their SoCs. A base M2 is swimming in 100 GB/s of memory bandwidth, while the i7-1265U gets about 4/5 of that. All of the M1 dies are well above that number, while on the Intel side the top-of-the-line H and HX monsters are still limited to the 82 GB/s that dual-channel LPDDR5-5200 puts out.
They're achieving that bandwidth by putting ridiculous bus widths on the SoCs. The M1 Pro has a 256-bit bus to memory, the M1 Max has a 512-bit bus to memory, and the M1 Ultra has a 1024-bit bus to memory.
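The arithmetic is simple: bandwidth = bus width in bytes x transfer rate. Assuming LPDDR5-6400 across that line-up, a 128-bit (16-byte) bus gives 16 x 6.4 GT/s = 102.4 GB/s, which is the base M2's ~100, and each doubling of bus width doubles it: roughly 200, 400, and 800 GB/s for the Pro, Max, and Ultra.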
IMO that shouldn't be a concern for those looking at buying this gen. The tech will move into their CPUs' integrated graphics, so drivers will be maintained well into the future.
The big disadvantage of not having a real FPGA is that you won't be conscious of the very real LUT/gate limits. A simulation will happily let you apply all sorts of nice compartmentalizations and abstractions without making you understand that they would cost significant money if you tried to find an FPGA to fit them into.
This was my biggest shock when first working with FPGAs while naively applying a software mentality. Almost everything had to be rewritten once the simulations were done.
Hello, I'm the author of the Project F blog. I've almost finished a complete overhaul of this series: animation and double-buffering are coming in October.
Nice work :) I think implementing a VGA controller seems a lot nicer in Verilog/VHDL than on an MCU.
The TI chip you're using for DVI looks interesting too; I hadn't heard of that before.
It looks like you're going to use the FPGA's BRAM for double buffering? I started implementing double buffering for LED strips in VHDL, but I need to get back to finishing the SPI controller for it.
I've also got designs that generate DVI on the FPGA with TMDS encoding (no external IC required). I've never polished or written them up, but you can see an example here:
I'm using BRAM for framebuffers as it allows me to focus on the graphics rather than memory controllers and access. BRAM gives you dual ports and true random I/O; DRAM is much more complex.
Thank you. I can't promise to get to the new design until early 2023, as I have many hardware designs I want to finish this year.
Once you've got a design working in Verilator, I strongly recommend running it on an actual board if you can: nothing beats the feeling of running on hardware :)
I've found 1024x768 and 1280x720 are both well supported. I tend to use these display timings: https://projectf.io/posts/video-timings-vga-720p-1080p/
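If it saves anyone a click, the standard CEA-861 timing for 1280x720@60Hz (from memory, so double-check against that page) is:

    Pixel clock: 74.25 MHz
    Horizontal: 1280 active, 110 front porch, 40 sync, 220 back porch (1650 total)
    Vertical:    720 active,   5 front porch,  5 sync,  20 back porch (750 total)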