
tcdev


Posts posted by tcdev


  1. Xevious has been on my to-do list for many, many years. And I'm starting to get close to the point where I'm actually going to start it. So if this isn't finished by 2030, I might actually beat you to it!!!

    • Like 2
    • Haha 2

  2. I'm curious to know why it has stalled. By all reports it was all-but-complete quite a while ago and there's clearly a lot of demand for it.

     

    Guess I'll just have to make a Neo Geo version then... ;)

    • Like 1

  3. Slightly off-topic but if anyone wants to play the voices from Atari's arcade Star Wars machine...

    Data is contained in MAME ROM "136021.107"
    
    Here is a list of start and end addresses for the voice data.
    Since the ROM loads at address $4000 in 6809 address space,
    the offset within the ROM is just <address>-$4000.
    
    4062-419B          ; use the force luke
    419C-4240          ; remember
    4241-42FB          ; i'm on the leader
    42FC-4435          ; the force is strong with this one
    4436-4576          ; red 5 standing by
    4577-46C2          ; this is red 5 i'm going in
    46C3-4802          ; r2 try to increase the power
    4803-48DF          ; you're all clear kid
    48E0-49D5          ; let go luke
    49D6-4A58          ; (darth vader's rasping)
    4A59-4B37          ; yahoo
    4B38-4C49          ; i have you now
    4C4A-4DB9          ; look at the size of that thing
    4DBA-4F30          ; stay in attack formation
    4F31-502D          ; the force will be with you
    502E-50A6          ; always
    50A7-51A3          ; (bleeps)
    51A4-52C7          ; (wookiee growl)
    52C8-54C8          ; i'm hit but not too bad, r2 see what you can do with it
    54CD-557C          ; i've lost r2
    557D-5746          ; great shot kid that was one in a million
    5747-580F          ; i can't shake him
    5810-58E4          ; luke trust me
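
    Since the offset calculation is just a subtraction, ripping a clip is a one-liner. A minimal Python sketch (the helper name is my own, the ROM filename is the MAME one above):

    ```python
    # Extract a voice clip from the Star Wars speech ROM (MAME name "136021.107").
    # The ROM maps to $4000 in the 6809 address space, so the file offset of any
    # clip is simply its address minus 0x4000.
    BASE = 0x4000

    def extract_clip(rom: bytes, start: int, end: int) -> bytes:
        """Return the raw speech data for a clip, given its 6809 start/end addresses."""
        return rom[start - BASE : end - BASE + 1]

    # Example: "use the force luke" lives at $4062-$419B.
    # rom = open("136021.107", "rb").read()
    # luke = extract_clip(rom, 0x4062, 0x419B)
    ```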
    
    • Like 3

  4.  

    IMHO this is a failed assumption. The POKEY is the only chip in the Atari 8-bit chipset that has two (!) chip-select signals. Atari presumably had the deliberate goal of enabling more than one POKEY in a system or cabinet. And CPU power isn't the issue here.

     

    Having 2 chip-select signals has nothing to do with having multiple POKEYs in a system; it's to simplify glue logic for the system designer, especially when the chip-select logic has two inputs, or being able to select either positive or negative logic without adding another inverter to the circuit.

     

    Just like any other device, you can hook as many POKEY chips up to a CPU as you desire. The chip-select pin on each is driven by address decode circuitry that is part of the system design, not the POKEY itself.
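
    As a concrete sketch of that address-decode idea (the mapping below is purely illustrative, not taken from any real board):

    ```python
    # Hypothetical address decode for four POKEYs, 16 registers each, mapped
    # contiguously at $D200-$D23F. Address bits A4-A5 pick the chip; the
    # chip-select pin on each POKEY just receives the decoded result.
    def pokey_select(addr):
        """Return which POKEY (0-3) a CPU address selects, or None if none."""
        if 0xD200 <= addr <= 0xD23F:
            return (addr >> 4) & 0x03
        return None
    ```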


  5. A 65816 would probably be needed to have acceptable results with quad POKEY audio. You're probably right about many of the other POKEY functions needing a dedicated CPU... otherwise, some kind of buffering circuit would be necessary, and that would cause lag.

     

    Actually the 65816 would be quite unsuited to the POKEY. In native mode the 65816 is unable to byte-address memory, being restricted to two 8-bit accesses to sequential addresses. It does have an 8-bit access mode, but then you're crippling it down towards the 6502 again.


  6. It's not explained by the PCB, but a Vector Display is something different to a Raster Display.

    Perhaps a Vector Processor is not a full CPU, but it is a Co-Processor. If the Main Processor had to handle the 3D, there had to be a second CPU to drive POKEY. Or a much faster processor with an adaptive bus system.

    The (6502/6809) main CPU in the Atari vector arcade games builds a "display list" in shared memory that consists primarily of VECTOR commands and calls to vector ROM routines (that consist of VECTOR commands). Once a complete "frame" has been described, the CPU hands it over to the AVG for actual display on the monitor, then gets on with the business of building the display list for the next frame. So the main CPU doesn't have to actually render any vectors itself.

     

    The AVG in the arcade machines processes the "display list" of vector commands and controls the beam on the monitor to draw the vectors. It is a simple "processor" with a handful of special commands, either drawing or execution flow (JSR, JMP, RTS, HALT).
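
    A toy walker for such a display list might look like this (opcode names and encoding are illustrative only, not the real AVG instruction set):

    ```python
    # Toy sketch of an AVG-style display-list walker. The main CPU fills
    # `dlist` in shared memory; the vector "processor" walks it each frame,
    # following JMP/JSR/RTS control flow until it hits HALT.
    def run_display_list(dlist, pc=0):
        """Collect the vectors a display list would draw, in draw order."""
        stack, vectors = [], []
        while True:
            op, arg = dlist[pc]
            if op == "VCTR":              # draw a vector, then fall through
                vectors.append(arg)
                pc += 1
            elif op == "JMP":
                pc = arg
            elif op == "JSR":             # call a "subroutine" in vector ROM
                stack.append(pc + 1)
                pc = arg
            elif op == "RTS":
                pc = stack.pop()
            elif op == "HALT":
                return vectors
    ```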

     

    The so-called "math box" used in the games assists the main CPU with matrix arithmetic for 2D and 3D transforms. These calculations would be required whether the display is vector or raster (with the same graphics rendered). The math box has a PROM which defines a dozen or so subroutines that chain together a few simple operations (think micro-code in a CPU) to provide more complex operations, like a 3x3 matrix multiply. The interface is, again, a block of shared memory.
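
    The kind of composite operation the math box provides can be sketched as a plain 3x3 multiply (the real unit chains fixed-point micro-ops from its PROM; this is just the functional idea):

    ```python
    # Sketch of a 3x3 matrix by 3-vector multiply - the sort of composite
    # operation the math box builds up from simpler chained micro-ops.
    def mat3_vec3(m, v):
        """Multiply 3x3 matrix m (list of rows) by 3-vector v."""
        return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]
    ```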

     

    FWIW A single 6809 in Star Wars controls the TMS5220 and 4x POKEY chips. The TMS5220 sample data is fed byte-by-byte by the 6809 whilst it's also doing POKEY sound. 2 POKEYs are used for FX, the other 2 for music.

    • Like 2

  7. Wow. How is it that I didn't know about this? Oh, it must be because I don't (yet) have a Jaguar. Well, this game is going to change that.

     

    I spent the best part of my Computer Science degree playing Xevious in the uni arcade. I was never quite as good as my mate, who could last for several hours, but I did manage to reach the end of the map a few times (the dips were set to continue from the start) and clock the score once or twice (sadly, unlike my mate, not more than once in the same game). Fun fact, your extra Solvalou go 8,9,A,B,C... my mate had 'W' when it hit the right edge of the screen. BTW there are quite a few more hidden citadels than is documented on the net, and I used to know where every single one of them was. Used to. I own a (bootleg) Xevious arcade PCB and of course Namco Classics Vol 1. I started to reverse-engineer the original arcade ROMs many, many years ago with the intention of writing a MAME driver, but was beaten to the punch. I did, however, end up writing the Namco Classics MAME driver! Fast-forward about two decades and having made some good progress on an FPGA emulation, I set it aside to pursue other projects and - bam - beaten again.

     

    Xevious is hands-down my favourite arcade game. These days I spend a lot of my hobby time reverse-engineering retro games and porting them between platforms. Xevious is on my to-do list, most likely targeting the Neo Geo. I'm curious to know whether this port is a direct translation of the original Z80 code, some sort of hybrid emulation, or done purely from observation using ripped graphics? All my ports are direct translation from original assembler code. It's a long hard slog but the end result is 100% accurate. Right now I'm doing Asteroids for the IIGS/Coco3 and possibly others.

     

    Anyway, enough rambling. It goes without saying that I'd like to be on whatever list exists for pre-order. And now I'm going to go and check out development resources for the Jaguar. Maybe I can target one of my existing ports to it? Apple II Lode Runner or ZX Spectrum Knight Lore on the Jaguar, anyone?

    TLDR; I'd like to be on the pre-order list please!

    • Like 5

  8. Then there's the issue of multiple system busses. You've got the cartridge ROM, battery backed SRAM, CPU RAM, Graphics RAM all on different busses which may need to be accessed simultaneously. If a single random access per console system clock is pushing the boundaries of the RAM module's latency, then adding a secondary read or write to a different address on the same module during the same console system clock, might break the system. A cache miss in a game console could have devastating consequences.

     

     

    This is probably the biggest impediment to emulating the Neo Geo; this and the relatively large memory devices required on no less than 5 different buses.

     

    For simpler systems with modest requirements, modern FPGAs are likely to have sufficient RAM on-chip for some of the memory devices. In these cases the number of buses is almost inconsequential. A lot of 8-bit arcade games, for example, require no external RAM at all on later generation FPGAs; these games typically have video, attribute and CPU RAM sizes that are measured in 10s of kB at most. It's generally the (even 8-bit) microcomputer emulations that require external (S)RAM on "hobbyist" FPGAs, and of course later generation arcade games/consoles.

     

    Some systems can tolerate some latencies on some buses (eg. 68K) and still run "correctly", though arguably extending the latency beyond the original design specifications affects the accuracy of the emulation. Other buses, such as video memory, cannot tolerate any latencies at all. However in these latter cases, it is sometimes possible to pre-fetch into a cache since the access pattern is known. Again, this will arguably affect emulation accuracy. But in general, yes, multiple buses equates to multiple headaches for the emulation implementer.


    And I'm not even sure if data can be fetched from SDRAM or DDR one byte at a time. It may be necessary for the FPGA to read an entire "word" off the RAM module, then discard all but the one byte of that "word" it needs, then immediately fetch a new "word" from somewhere else, discard all but one byte, and perform all intended operations with that byte within a single system clock. Writes would be even worse. It would need to read back the entire word it needs to write to, change the one byte in that word that needs to be changed, then write the entire word back to the RAM module, if for instance single 8-bit writes to a single address are not possible. Before you can write a single byte to the RAM, you need to know the contents of the entire 32-bit word where that byte resides. Most memory modules probably have a 32-bit or 64-bit bus width, depending on whether the CPU is running 32 or 64 bits. So a CPU architecture designed to read/write one byte at a time with a maximum one clock latency would be a challenge.

     

    Reading an entire "word" is inconsequential and has no down-side to performance or otherwise; it is simply masked in the controller.

     

    Given that memory is designed for maximum throughput, it would be a monumental drawback to require that entire words be written to the RAM. SDR/DDR have DQM/DM signals which are effectively byte enables for write data. So no issue there.

     

    There shouldn't be an issue using 100% of the RAM for emulated systems. 8/16-bit data buses can easily be accommodated by a mux on the data bus and shifting low-order address bits into DQM/DM signals, for example. Plenty of 8-bit emulations doing that already with 16-bit FLASH and/or SRAM.
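
    To illustrate how byte enables avoid a read-modify-write cycle, here's a behavioural Python sketch of a 16-bit write with an active-low DQM-style mask (signal semantics simplified; real controllers do this in logic, not software):

    ```python
    # Behavioural model of a 16-bit SDRAM write with DQM byte enables.
    # dqm is a 2-bit mask, one bit per byte lane, active low: a 0 bit
    # lets write data through on that lane, a 1 bit preserves the old byte.
    def write_word(mem, word_addr, data, dqm):
        """Write `data` into mem[word_addr], masked per byte lane by dqm."""
        old = mem[word_addr]
        new = 0
        for lane in range(2):                  # lane 0 = low byte, lane 1 = high
            keep = (dqm >> lane) & 1           # 1 = masked, keep old byte
            src = old if keep else data
            new |= src & (0xFF << (8 * lane))
        mem[word_addr] = new
    ```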

     

    FYI here is a good explanation of how SDR/DDR works.


    Some emulators for machines after the N64 or PS were criticised for not emulating the internals but instead implementing the higher-level APIs that games rely upon. This allows, for example, rendering 3D games at much higher resolution than originally intended, but did have its share of compatibility problems.

     

    The main reason for this is the complexity of the designs - in particular the custom ICs used in the design. There is no easy way to reverse-engineer the workings of a custom IC, especially ones as complex as those used in modern consoles. The only feasible approach has been black-box reverse engineering and/or so-called high-level emulation at a known (well-specified) level such as an OpenGL API layer. Compatibility issues arise due to bugs/nuances of the particular implementation being unknown to the emulator authors. Those that "criticise" such efforts are certainly unable to contribute anything useful to the process and are rightly ignored. On the flip side there are some advantages to a high-level approach, such as being able to generate higher-resolution displays, at the expense of accuracy. At the very least, it's a stepping-stone towards more accurate implementations and shouldn't be discouraged. You only have to look at the progress of MAME to appreciate that.

     

    You may be interested to know that portions of the N64 were implemented in Verilog. If one were to find themselves in the possession of the Verilog source, then perhaps we could start having the discussion about FPGA implementations not being emulations. ;)


  11. The emulator DICE is an exact 1 to 1 to the original machine. Should it be called an 'emulator' or is there a better term for how exact it is? If an emulation is 100% like the original, does it cease to be an emulation?

     

    The term "emulation" does not itself imply anything about accuracy or lack thereof. Something is either an emulation, or not. Emulators may be very good (accurate) or not, or anything in between - either way they're all simply emulators.

    • Like 1

  12. I think performance would be a huge consideration.

    Which would you choose? Which do you think has superior compatibility? Still think an FPGA is "just" emulation? :ponder:

     

    I don't understand where you're going with this - the one, single statement that I had issue with was a claim that FPGA implementations aren't emulations. That - and only that - is what I've been addressing until now.

     

    Performance, accuracy, "which is best" etc are secondary and completely different discussions. I could debate this but quite frankly, we'd probably agree for the most part. I don't see the point.


    ^^ Did anyone fail to bring up the lag issues with traditional emulation? Many emulators lag badly. An FPGA implementation requires no screen buffer to render graphics to a display like a traditional CPU/GPU combo does. An FPGA can pump pixels out to the screen in real time, whether said display is a CRT beam or a 480p, 720p, 1080p, or even 4K display. Just the zero lag alone makes a huge difference to some gamers. 16ms of lag can mean the difference between defeat and victory in some games. No ARM or x86 implementation of a game console is going to be pumping pixels out to the screen in real time. No way, no how. And that is exactly what Kevtris intends to do with his Zimba 3000.

     

    Your comment is valid but not relevant to the core discussion here - whether or not an FPGA implementation is an emulation. We're not focusing on performance.

     

    Besides, the requirement for a "screen buffer" is not strictly a "software requirement" but dictated by the operating environment. Programming at the bare metal on the right platform it would definitely be possible to write a software emulator that renders directly into the frame buffer of the video hardware. Just like the good old days of 8/16-bit computers.

     

    EDIT: You'd need a high end (expensive) FPGA and a good design to pump out 4K video.


  14. An FPGA is programmed. Someone needs to write the program. For video games, usually this is not by re-engineering a chip's schematic. Usually this is done by investigating data sheets and/or making note of the operation of the original. In this case, an FPGA solution is essentially emulation, that is, it is not a direct copy of the original schematic, but instead, it's an interpretation, and prone to needing revision based on comparison to the real thing (just like software emulation).

     

    And even if you do implement the schematic, verbatim, in an FPGA (which can and has been done) - and even if we completely ignore the internals of the chips in the circuit - then it's STILL an approximation, it's STILL an emulation! Because an FPGA comprises a few basic building blocks that are configured together to behave like (aka emulate) the circuit which is described by the high level language.

    • Like 1

    Assuming the "Final ASIC" is equivalent to the original console, if not from a logic gate level then at least from a functional standpoint, then who cares if the "Final ASIC" is implemented as discrete logic gates on a custom die, or an FPGA chip? You claim both are emulation, but the FPGA/ASIC is operating on the hardware level in a way that is equivalent to original chips. The software emulator tries to compute the logic of every part on a command/ASM level, by converting ASM from one CPU architecture to another. And the results of emulation are not real-time, yet the results of logic gate emulation via the FPGA, or an entire clone chip, are.

     

    Surely you see the folly of using the blanket term "emulation" to describe both processes? I cannot imagine how the software emulation could possibly be superior to going the hardware route.

     

    Who cares indeed - that's my exact point.

     

    An FPGA implementation based on a black-box reverse-engineering process, as 99.9% are, is not "equivalent to original chips". Far from it. It's an emulation described in a high level language whose outputs approximate those of the original chip given the same inputs - and nothing more. Just like software emulation. Again, there's absolutely nothing magic about the fact that it's implemented in hardware. It's a different design implemented with different logic - period; aka an emulation. Why is that so hard to accept?

     

    Why do you claim that software emulation isn't real-time? There are plenty of aspects of the timing within an FPGA design that are only approximations of the original chip. Some chips have analogue aspects to them (eg. SID) that can only ever be approximated in an FPGA. Some chips have non-specified or variable gate delays which can impact the original design and again, only be approximated in an FPGA. FPGAs themselves have clocking constraints that sometimes mean that implementations can only be approximated and are therefore known to be inexact. It's also sometimes possible to use much faster software emulation to mitigate these issues. If I can emulate a 4MHz Z80 CPU in software running at effectively 5x the speed, then how is that not "real time"? Because it's not tied to a PLL in the FPGA that is programmed to approximate the original clock? That's not a requirement of "real time".

     

    Your argument about the process of "converting ASM" is completely and utterly irrelevant to the discussion. The mechanics of the process don't matter in the slightest. Simply because the software emulation is abstracted a few more levels away from the original design doesn't make an FPGA implementation any less of an emulation. Yes, you could argue that it's closer to the original design, but - as I seem to be repeating over and over again - it's still an emulation.

     

    I've done plenty of FPGA implementations where I didn't even look at the schematics. I've used the exact same components for tilemap and sprite systems across many implementations that are completely and utterly unrelated. And you know what - no-one could possibly know without looking at the source - because the function of that logic is to take values stored in registers and/or memory and render them as pixels on the screen as the raster scans the CRT, and the output is indistinguishable from the original. How is that not an emulation? In what way is this superior to a software emulation? It's not. Therefore there is nothing inherently superior about an FPGA implementation.

     

    I never once said or even implied that software emulation is superior to hardware. My preference is actually FPGA emulation. But again, and I'm sick of saying this, it is still an emulation.

    • Like 2

  16. 1) Your typical "I'm a hardware engineer" mentality shows through in your innate need to show the most technical breakdown while not conveying anything to a lay person.

    2) You are taking the engineering standpoint which is that since it's not a final ASIC it's still emulating or simulating the platform.

    What you're failing to recognize is that the question itself asks us technical persons to draw the distinction for others on how this is different from emulation.

    3) These microchips allow for the software to execute as if they were running on original hardware. Whether the design meets that standard is often at issue.

    4) I suspect you may allow me the statement that an FPGA design can often take a more direct approach to solving certain hardware issues. Or you may not, to continue your point.

    5) I deal with your kind all of the time as well.

    6) Now stop trying to be 'right' and remember there are non engineers on this forum, try to communicate with people in a friendlier and less condescending manner while you are at it.

    7) If you felt I was attacking you at all in the previous statement, for that I apologize. I just want to give people the understanding they seek, and with my experience in the tech industry, my IT degree (which doesn't cover it all), and my time as a trainer in a former career, that's my take on it.

     

     

    1) Actually, my first degree was in Computer Science (software) and my first 8 years of employment was as a software engineer. It wasn't until I returned to uni to study Electrical Engineering that I moved into hardware and FPGA design. So I would argue that I'm not your "typical" hardware engineer, whatever that entails.

     

    2) There is no distinction between an FPGA and a "final ASIC" with respect to the subject of emulation, so you're way off track there. And whilst the question may comprise drawing a distinction between FPGA's and "emulation" - and I agree that there is a distinction between FPGA implementation and software emulation - the fact remains that they are both emulation.

     

    3) Just like software emulation allows the software to execute as if it was running on the original hardware. That's the whole point of emulation.

     

    4) You may make that statement, and I would even agree. Doesn't change the underlying facts though.

     

    5) "Your kind"... I very much doubt it.

     

    6) I could say the same. Being told twice that you're simply wrong tends to rub the wrong way.

     

    7) I understand that you want to 'give people the understanding that they seek'. You more or less admit you have little first-hand experience in this area, yet you state simply that others (myself) are wrong. Apologies if I come across as unfriendly and condescending, see above. I'm not interested in a personal slanging match either. But I have formal qualifications in both software and hardware engineering, and direct experience in emulation in both software and FPGAs, so I feel I've got a pretty good handle on the concepts here - perhaps, trying not to sound too pretentious, better than most on this forum. I'm only trying to give people understanding as well. I just don't want (what I perceive as) misinformation being spread.

     

    In closing, no-one can deny that FPGA emulation is different to software emulation. But underneath it all, the simple fact is that they're both emulation. There's nothing inherently superior or more accurate about FPGA emulation; however, it is arguably easier to make a more (consistently) accurate emulator using current technology with an FPGA.

    • Like 3

  17.  

    Nope, it's hardware running electrical signals implementing a design which accomplishes the same functions, reverse engineered, in a general-purpose reprogrammable processor.

    No original code is being fed into software to be translated into a non native format for processing and returning translated results.

    It's a hardware processor which can be reconfigured for simulation of various hardware designs. It's running the code native.

     

    There are so many things wrong with your explanation I don't know where to start. Perhaps the most glaring (and telling) is your description of an FPGA as being a "processor". And what you completely fail to recognise is that the FPGA implementation, described in a high level design language no less, is an emulation. For the most part, the resultant logic barely even resembles the original hardware design at the gate level. I suggest you read up on HDL synthesis and FPGA architecture.

     

    And your statement about a "non native format" and "translated results" is just gobbledygook. The only difference between a software emulation and a hardware emulation are levels of abstraction. How do you class, for example, a processor that uses microcode to implement its instruction set? How is implementing a state machine in HDL any different from one implemented in software on a processor? There is absolutely nothing inherently more accurate (or more "pure") with hardware (FPGA) emulation as opposed to software.

     

    But then again I'm only an electronic design engineer with a decade of experience in FPGA design, as well as 20+ years playing around with software emulation and about 8 years playing with FPGA emulation of retro computer/videogame systems - so what would I know...

     

    A little knowledge is a dangerous thing.

    • Like 1

  18. I was under the impression the VDP was similar in design with obvious changes done for BW operation. I am aware of the CPU differences, but those chips are well documented.

    Much like how the Coleco, MSX and SG1000 play parts round robin.

    If the VDP tech wasn't somewhat similar in design then I am misinformed. Not the specs exactly, but their design.

     

    You've been misinformed.

     

    As you mention, the Colecovision, MSX & SG1000 (as well as others) all use the same VDP from Texas Instruments.

     

    The Gameboy doesn't use the same VDP, nor anything derived from the design of that VDP. The similarities end with their basic purpose.

     

    You could not leverage anything off either the Gameboy or NES implementations in order to produce the other.
