matthew180

Members

  • Content Count: 2,863
  • Joined
  • Last visited
  • Days Won: 3

matthew180 last won the day on August 13 2019

matthew180 had the most liked content!

Community Reputation: 1,946 Excellent

4 Followers

About matthew180

  • Rank: River Patroller

Profile Information

  • Gender: Male
  • Location: Central Florida
  • Interests: My family, FPGA, electronics, retro/vintage computers, programming, coin-op games, reading, outdoor activities.
  • Currently Playing: Divinity2, Borderlands2, beta-testing for Realms of Antiquity (TI-99/4A CRPG).

Recent Profile Visitors

19,840 profile views
  1. It can be difficult for sure. I think you just have to set things up to allow for collaboration (i.e. use version control like GitHub, create an information website with stated goals, etc.), and wait for people to show interest.

     I can probably help with that. I can probably help with that too:
     https://dnotq.io/sdram/sdram.html
     https://github.com/dnotq/sdram

     I strongly dislike the way I have seen most people write Verilog (it looks too much like software languages, and given the shit I have seen people write, it is a wonder it works at all). IMO, VHDL is a lot better to learn first, since it will probably be unfamiliar to most people, and that helps break any mindset you have that is grounded in software. When you are working with FPGAs, you are *not* programming, you are *describing* hardware (that is the "H" in HDL), and it is not even remotely the same. I find it best to think in terms of how you might build such a circuit using TTL logic ICs; do that, and you will find your HDL makes a lot more sense and you will be able to solve more complicated problems (there is a small sketch of what I mean at the end of this post).

     Been there, but even my "it finally works!" was still a case of "just barely", and there was so much I still did not realize or understand.

     Yeah, all the Mealy / Moore theory and such, I never use it in practice; the information is just not helpful for solving problems (that I have found). I'm sure you could look at various HDL designs and identify "ah, this is Mealy-style" or whatever, but I have never sat down to describe some circuit and thought "oh, I'll use a Moore-style FSM here...".

     Understanding clocks and timing is critical in FPGAs, just as it is in discrete digital electronics. It is probably one of the areas that people getting into FPGAs most take for granted and don't understand. However, there are some basic rules that, once you learn and follow them, make things much easier, and your circuits will work much better. Those who know me have heard me say it before: when working with FPGAs, the bugs are (almost) always timing.

     Uhm, you know that is me, right? And yes, Pong's books are exceptional IMO. He wrote the book twice, once for VHDL and once for Verilog, and of course I recommend learning VHDL first.

     I find FPGA work a very lonely space in the hobby realm. I have always been looking for people to talk to about it, so if you want to have some discussions, let's hook up on Discord (my username is dnotq#6613).
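To illustrate the "think in TTL" mindset, here is a minimal VHDL sketch of a 4-bit synchronous counter, described roughly the way you would wire up a 74x161 on a breadboard (the entity and signal names are just for illustration):

```vhdl
-- A minimal sketch: 4-bit synchronous counter with load and enable,
-- i.e. roughly a 74x161 described in VHDL. Everything happens on the
-- rising clock edge; there is no "execution order" like software.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter4 is
    port (
        clk  : in  std_logic;
        load : in  std_logic;              -- synchronous load
        en   : in  std_logic;              -- count enable
        d    : in  unsigned(3 downto 0);   -- value to load
        q    : out unsigned(3 downto 0)    -- counter output
    );
end entity;

architecture rtl of counter4 is
    signal cnt : unsigned(3 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if load = '1' then
                cnt <= d;
            elsif en = '1' then
                cnt <= cnt + 1;
            end if;
        end if;
    end process;

    q <= cnt;
end architecture;
```

The point is not the counter itself, but that the description maps one-for-one onto flip-flops and gates you could point to on a schematic.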
  2. Yup, "CRT" video formats are really very straightforward, just a pair of counters and some logic (there is a small sketch of the idea at the end of this post). I was very excited when I discovered how simple driving the raster CRT really was. Now, generating the pixel data, that's a different story. There are some very clever circuits to be found in the early arcade computers.

     DVI and its successor keep the data in a digital format and do some encoding to minimize the transitions on the video cable and keep it DC balanced, but other than that it is really just a variation on the same signaling used to drive a CRT. It can certainly be complicated at times, and the documentation is like the E/A manual (the information is in there, but you better know WTF you are doing and looking at).

     DisplayPort, that is a totally different ball-game; it truly is a totally different format, not even remotely related to anything before it. It is probably not as complicated as it seems once you understand what is going on, but needless to say, I did not get very far into the specifications before I was pretty much lost. It is also one of those deals where the *MINIMUM* data stream is a single 1.6Gbps channel, which immediately puts you into the realm of very complicated (and more expensive) ICs and PCB layout. DisplayPort was never really an option for the MK2.

     Hmm, seems to be a human condition, especially when it comes to technology. For some reason people like to make things way more complicated than necessary, and I find ideas and protocols that come from academia (i.e. most of the Internet, languages, data formats, etc.) tend to be overly complicated for the sake of completeness: too complicated to understand, not practical to fully implement, and full of capability that is not necessary the majority of the time.

     There is a certain amount of complexity you will have to deal with to produce the video expected these days, and it blows me away that even a display like 1920x1080 can be updated at 60 FPS. That is a *lot* of data to move, and without any kind of jitter in the image; just amazing. But it is commonplace, so people take it for granted and assume it must be "easy" (just like all software is simple and anyone can do it).

     Maybe, but I need to get it done first. Personally I think things like documentation and consistent availability again will help more in the short term.
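For anyone curious what "a pair of counters" looks like, here is a minimal VHDL sketch of the classic 640x480@60 timing at a roughly 25 MHz pixel clock (the porch and sync numbers are the commonly quoted ones; verify them against your own display):

```vhdl
-- 640x480@60: 800 clocks per line, 525 lines per frame.
-- One counter walks the pixels in a line, the other walks the lines
-- in a frame; everything else is just compares against those counts.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity vga_timing is
    port (
        clk    : in  std_logic;                -- ~25 MHz pixel clock
        hsync  : out std_logic;
        vsync  : out std_logic;
        active : out std_logic;                -- '1' during visible pixels
        px     : out unsigned(9 downto 0);     -- pixel column, 0..799
        py     : out unsigned(9 downto 0)      -- scan line, 0..524
    );
end entity;

architecture rtl of vga_timing is
    signal hcnt : unsigned(9 downto 0) := (others => '0');
    signal vcnt : unsigned(9 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if hcnt = 799 then
                hcnt <= (others => '0');
                if vcnt = 524 then
                    vcnt <= (others => '0');
                else
                    vcnt <= vcnt + 1;
                end if;
            else
                hcnt <= hcnt + 1;
            end if;
        end if;
    end process;

    -- Sync pulses are active low for this mode.
    hsync  <= '0' when (hcnt >= 656 and hcnt < 752) else '1';  -- 16 fp, 96 sync
    vsync  <= '0' when (vcnt >= 490 and vcnt < 492) else '1';  -- 10 fp, 2 sync
    active <= '1' when (hcnt < 640 and vcnt < 480) else '0';
    px     <= hcnt;
    py     <= vcnt;
end architecture;
```

Generating the pixel data that goes out during the active region is where all the real work (tiles, sprites, etc.) lives.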
  3. Wow, those are pretty yellow! I have done the retro-bright and it works, but some of the parts have started to yellow again. I thought it only happened due to direct sunlight, but now I think it is also heat related (the plastic I treated with retro-bright was stored mostly in the dark after the process, but the space was hot (a garage) during the summer).

     I only had access to a Tomy once, very briefly, to test the F18A in it. I don't recall exactly, but the keyboard looks like it would have been very frustrating; like the original PCjr keyboard, but not as bad as the U.S. Timex Sinclair or the Atari 400. Having the 9995, it seems like it would run pretty quick. Admittedly I know very little about the Tomy, and I don't think I have ever seen anyone actually use one.
  4. I have also done things like this, but more as a thought experiment, or just sketching the ideas on paper. I never put that much effort into any kind of actual layout. The old ICs are *big*, and I think it is easy to forget that today with all the miniature components that surround us. Computers used to be bigger because they needed to be bigger. Making the boards smaller was the catalyst for the modern "chipsets".

     It is also hard to do a good layout, something that is lost on most people, and it can make these projects frustrating when your work is compared (size, cost, quality, availability, support, etc.) to something mainstream like an RPi, or even a modern computer or smartphone. This is what I was talking about above, where I find it rare that there is a sufficient cross-section of goals to facilitate a project collaboration. It is important that you are doing the project because *you* want to and are having fun. If other people happen to also be interested, then great. If not, they are welcome to do their own projects that satisfy their own goals.

     Cool looking board, by the way. It would probably be prohibitively expensive though, I think.
  5. That requires physical soldering, desoldering, and electronics skills that I don't think most people have acquired, so it limits the number of participants in an already small community. I think those who can do these things already are. There is really a lot of hardware hacking going on in this community, probably a lot more than in other retro-system communities. AFAIK, most of the projects are single-person endeavors though, and I can only imagine what 3 or more people here focusing on a single project could achieve. However, in my experience the biggest blockers to group-based projects are a decisive list of goals that everyone can agree upon, and enough of an interest cross-section as to what the end result should be.

     Without constraints, those questions are impossible to answer. Emulation can run a system faster than has ever been possible, and as computers get faster, so do the emulators. If you stick to hardware, then you still have the same situation, à la the 100MHz 9900 core inside the F18A, etc. How fast are you wanting to go?

     As for addressable memory, there are any number of ways to make more RAM available to the CPU, and it just depends on what you consider acceptable. If you limit it to just "instruction-supported direct CPU addressable", then it will always be 64K, unless you use an FPGA and expand the address bus and modify the instruction set (I've done this a little, and emulators can more easily do this as well). But as soon as you are willing to add something like a memory mapper / memory paging, there is no actual limit, just practical limits (there is a small sketch of the idea at the end of this post).

     We live in a strange (and somewhat interesting) time, with a lot of very fast and cheap computing hardware and electronics available to us. To me the question is not how fast *can* we go, but how fast *should* we go and still have a system that resembles the 99/4A? IMO, an FPGA is a really good place to experiment with these ideas. It is much easier and faster to model and test hardware with an FPGA. Once you have something working, and you still want to do the design with discrete ICs, then at least you have a blueprint and know the design will work. Alas, it might not be as "hands on" as working with a breadboard and discrete ICs, but IMO it is just as fun and rewarding, and very accessible (and getting more so every month).

     Well, there is *overdrive* in Classic99, and the BASIC assembler, both available right now. An interpreted BASIC will never run as fast as an assembly program (simply due to the interpreter overhead), so again, there needs to be some criteria as to when the goal is achieved. In the case of the 99/4A, having a BASIC dialect where the interpreter is *written in* assembly would go a long way towards making a native BASIC much faster and nicer on the 99/4A (that, and getting the hell out of VRAM for anything other than video). I think the modern Forth interpreters demonstrate how fast such a BASIC might be.
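As a rough illustration of the memory-paging idea, here is a hedged VHDL sketch. The page size, register count, and port names are made up for illustration and do not correspond to any existing 99/4A expansion scheme:

```vhdl
-- The CPU still sees only 64K; the top 4 address bits index a small bank
-- of page registers whose contents become the upper bits of a wider
-- physical address (8 + 12 = 20 bits, i.e. 1 MB of physical memory).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity page_mapper is
    port (
        clk       : in  std_logic;
        cpu_addr  : in  std_logic_vector(15 downto 0);  -- 64K CPU view
        cpu_dout  : in  std_logic_vector(7 downto 0);   -- data written by the CPU
        map_we    : in  std_logic;                      -- write strobe for a page register
        map_sel   : in  std_logic_vector(3 downto 0);   -- which page register to write
        phys_addr : out std_logic_vector(19 downto 0)   -- 1 MB physical view
    );
end entity;

architecture rtl of page_mapper is
    type page_regs_t is array (0 to 15) of std_logic_vector(7 downto 0);
    signal pages : page_regs_t := (others => (others => '0'));
begin
    -- The CPU changes the mapping by writing the page registers
    -- (the address decode that drives map_we / map_sel is not shown here).
    process (clk)
    begin
        if rising_edge(clk) then
            if map_we = '1' then
                pages(to_integer(unsigned(map_sel))) <= cpu_dout;
            end if;
        end if;
    end process;

    -- Translation is purely combinational: page register + 12-bit offset.
    phys_addr <= pages(to_integer(unsigned(cpu_addr(15 downto 12))))
                 & cpu_addr(11 downto 0);
end architecture;
```

The practical limit is just how many page registers and how much physical memory you are willing to bolt on; the CPU's instruction set never has to change.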
  6. I think the lack of responses in this thread is because the engineering "decisions" made for the 99/4A have been discussed, argued, hashed over, and beaten to death many times. If you "fix" these things then the end result is circular, i.e. you end up with a system that is not compatible with anything else, or only partially compatible, which leads back to not making any breaking changes so you can stay compatible.

     If you are happy to just mess around with your custom system, then there is nothing wrong with that, and such projects can be very fun and rewarding. However, I suspect most people get a lot of enjoyment from sharing and being part of a community (i.e. the human need for belonging and being a social animal), so most people will probably not go down the road of the one-off isolated system, or making mods that cause incompatibility (which limits what you can do in terms of hot-rodding the system, etc.).
  7. Thanks. I'm wondering if there were enough non-99/4A-related posts to need another subforum?
  8. I tend not to talk about hardware I'm not working on (or done building), but I have thought about a fire-hose replacement from time to time. The main problem is speed. Even though the 99/4A seems really slow compared to today's computers, the fire-hose and PEB bus operate at the full CPU bus speed of about 3MHz, or a 333ns cycle. When you consider that kind of speed, things like USB and networking fall short because the system bus will not tolerate any latency in a bus transaction. Memory accesses in the 99/4A do introduce a wait-state for 16-to-8 bit memory access, but that speed is still fast, and it has been proven that most memory in the old systems was fast enough to operate with the wait-state generator bypassed.

     Long story short, to make the console-to-PEB cable smaller you really need to reduce the number of conductors, and to do that you need to serialize the bus. Luckily the computer world is crazy with high-speed serial buses these days, and we can ride that wave of cheap ICs and cables. If I was going to make such a cable, I would probably use a pair of these (18-Bit Bus LVDS Serializer/Deserializer, 15-66 MHz):
     https://www.ti.com/product/DS92LV18

     Two transmit and two receive channels could be run over a stock digital video cable (like DVI or that other one that I won't mention), or even a USB-C cable. It would need to be run somewhere around 30MHz or so to have some margin, but that is well within the capability of these ICs (rough numbers at the end of this post). Some buffers to control direction and bus contention, and you could probably do the whole thing with non-programmable ICs (no FPGA, CPU, or microcontroller needed). You would have to know WTF you are doing with high-speed serial electronics and PCB layout though; this is not a low-speed kind of circuit. But the result would be a single small cable between the console and PEB.

     However, these days it might be easier to just put the 99/4A onto a single PCB and stick it into the PEB (which I think has been done in the past). You would have to do something about the cartridge port though.
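Rough numbers on that margin, assuming the console bus is on the order of 36 signals split across the two 18-bit channels in each direction (my estimate for illustration, not a figure from the datasheet):

\[
t_{\text{word}} = \frac{1}{30\,\text{MHz}} \approx 33\,\text{ns}, \qquad
t_{\text{bus}} = \frac{1}{3\,\text{MHz}} \approx 333\,\text{ns}, \qquad
\frac{t_{\text{bus}}}{t_{\text{word}}} \approx 10
\]

In other words, each direction could snapshot the entire bus roughly ten times per CPU bus cycle, which is where the margin comes from.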
  9. Yup, been down that path about 3 different ways now. Thanks for the suggestions though; I know of a few open-hardware dev boards that missed that little detail. I am currently using the MIC2090 (5V, 50mA, reverse-current protection) on the MK2 for the required 5V output on the digital video interface.

     Tried that; unfortunately it did not correct the situation.
  10. If the source or destination is small enough, and word aligned, you can locate your workspace on top of the src or dst, and eliminate the overhead of indirect addressing for half of the copy. Speed gains are usually found by intimately understanding the problem and writing custom solutions that combine operations, which is only possible due to having that external knowledge. This is where humans will out-smart computers/compilers for a very long time to come. Squeezing performance will never be a "generic code" endeavor, IMO.
  11. Ultimately I would love to find a way to make a living doing hardware like this, similar to the Adafruit or SparkFun business model, only smaller (much smaller, just enough to support myself and maybe 1 or 2 other people). The amount of time I spend working on this is unbelievable to me, and I either have to scale back so my hobby does not interfere with life and work, or I need to make it support me 100%.

      I do plan to release the schematics and HDL once things are wrapped up. Since this is still very much a hobby project, I just don't have time to create and manage all the information, support, and people that come along with having open hardware, so it stays on the back-burner for now. I'm prioritizing getting something working over releasing the information. It is also a lot of work, and the F18A is 8+ years of almost non-stop effort and learning. It is kind of hard to just "give it all away". But I use open source software/hardware all the time, and I support the general idea, so that's what I will do when I have the time to focus on it.
  12. Well, that's the real rub. I have done initial testing on each new feature on the MK2: the SRAM, the digital output, the ADC and DAC audio, etc., but without doing all the HDL work before releasing the hardware, there is no way for me to test everything all together. Also, things like USB and digital video are complicated, despite being ubiquitous these days, so I cannot make any guarantees about compatibility with anything.

      For example, I have tested the MK2 digital video on 4 monitors, 2 TVs, and a video-signal tester that I have, and it works on 6 out of 7. One of my ASUS 4K monitors does not like the video, despite the signal passing 100% on the video tester. The TVs (both Samsung) over-scan the hell out of the signal, cutting off 16 pixels on all four sides, despite properly detecting the 720x480p signal! I'm not sure what that is all about. Maybe since the MK2 does not read EDID, the TVs are just being stubborn about it? IDK. However, the three monitors (and the video tester) that accept the signal show exactly the correct image, and all pixels are accounted for.

      Video is hard. Anyone who says or believes otherwise is lying or does not know what they are talking about. I cannot, and will not, make any guarantees or claims about compatibility with any monitor, TV, or computer system. The engineering and testing cost to build such a device, with such a limited market, is just not practical. Having said that, of course I am doing everything I can, and know how to do, to make the MK2 work as expected and produce video signals that will work with as many devices as possible.
  13. The old site had a contact form to get me your email address, but it got so spammy that I had to take it down. F-ing bots. The store is down too, for now while I get things situated on the new site. Just send me a P.M. with your email, or you can email me directly (pretty easy to figure out: my first name @ the website domain), or as Greg mentioned, you can watch the forum since I will announce here as well.
  14. The MSX1 is a great system; I personally really like my MSX1 Toshiba HX-10. MSX BASIC is really nice, giving full access to all of the 9918A's capabilities directly, i.e. all 4 graphics modes are supported with native commands, and VRAM is 100% available to the program for graphics use (since the system had its own RAM, VRAM was left for, well, video purposes, as it should be). MSX BASIC even has commands to set VDP registers directly, and luckily it does not validate the register number, so you can access the F18A's enhanced registers from MSX BASIC. I actually used this quite a bit during testing.

      I really wish the MSX had penetrated the U.S. market; had that been the case, I think I would be hanging out in the MSX forums these days instead of the TI forums.
  15. Life and job, due to the pandemic, have caused me to not have as much time as I planned these last few months, so the MK2 is not where I anticipated it would be by now. I hate to keep apologizing, but that's all I can do for now, and I suppose I'll keep doing so until I get the MK2 done.

      The TMDS core I had to write is complete and tested. This was necessary to support DVI and similar digital interfaces. The last real hurdle is to decouple the scan-line generation from the pixel clock so I can use the new video formats that will be required (there is a small sketch of the idea at the end of this post). Unfortunately, when I was first writing the HDL for the F18A I was also simultaneously learning FPGAs and HDL, so my organization and separation of modules is not as clean as it should have been. So I have some rework.

      The end-to-end audio seems to be working pretty well too. I had a new noise problem (another one) that I could not figure out, but it might have been my computer and not the MK2. I have to run another test to verify (fingers crossed). I hope I did not already give this update? I did not read back before posting, so apologies if this is duplicate information.

      I always wanted to have some examples of using the F18A with other microcontrollers and in systems that did not have the 9918A. Alas, what I *want* to do and what I have *time* to do are very different matters altogether. I hope with the MK2 that I can produce some decent documentation, as well as some hardware examples and other such things. I try to have positive intentions, even if I fall a little short on my actions. I also have a serial interface (SPI) planned for the MK2, to make it easier to hook up to something like a microcontroller. Since SPI can be very fast, it should be easy to get it to perform similarly in speed to the parallel interface (I should probably do the calculations before I say something like that, though). Some people have used the F18A in various projects, but they are few and far between. I hope the MK2 and better availability will help with that.

      Sitting here tonight thinking about how I can get the MK2 out faster, I realized something. The hold-up now is rewriting a bunch of tile-layer and sprite HDL to decouple it from the pixel clock so I can support the digital pixel clock (which is 27MHz instead of ~25MHz). However, if I limit the initial release to just supporting the analog VGA output, I could get things released sooner and provide the digital support via a firmware update. I don't know how well that will go over with people, but it could help speed up the release. The MK2's external SRAM would also be unused with this release. I don't really have a way to ask the 500 or so people who are on the waiting list, but at this point I think most would want *something* now, with more features via updates. Yes? No?
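For the decoupling mentioned above, the general idea is a small dual-clock line buffer: render a scan line in one clock domain and scan it out in the 27 MHz pixel-clock domain. This is only a minimal sketch; the names and depth are illustrative and not the actual MK2 HDL:

```vhdl
-- Simple dual-port, dual-clock line buffer. The tile/sprite logic writes
-- pixels with wclk; the output stage reads them back with the 27 MHz rclk,
-- so the two sides no longer have to share one pixel clock.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity line_buffer is
    port (
        wclk  : in  std_logic;                      -- scan-line generation clock
        we    : in  std_logic;
        waddr : in  unsigned(9 downto 0);
        wdata : in  std_logic_vector(23 downto 0);  -- 24-bit RGB pixel
        rclk  : in  std_logic;                      -- 27 MHz output pixel clock
        raddr : in  unsigned(9 downto 0);
        rdata : out std_logic_vector(23 downto 0)
    );
end entity;

architecture rtl of line_buffer is
    type ram_t is array (0 to 1023) of std_logic_vector(23 downto 0);
    signal ram : ram_t;
begin
    -- Write port in the generation clock domain.
    process (wclk)
    begin
        if rising_edge(wclk) then
            if we = '1' then
                ram(to_integer(waddr)) <= wdata;
            end if;
        end if;
    end process;

    -- Registered read port in the output clock domain.
    process (rclk)
    begin
        if rising_edge(rclk) then
            rdata <= ram(to_integer(raddr));
        end if;
    end process;
end architecture;
```

The rest of the work is the handshaking that says "line N is rendered, go scan it out", which is where the rework in the existing modules comes in.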