Everything posted by retro_doog

  1. Yikes! With only four 150uF "bulk" caps on that supply, I'm not sure that would be sufficient for the TI console. Despite being rated for 160W (and most of that capacity is probably on the 3.3V rail, which the TI can't even use), the power architecture of the ITX machines these are designed for is not the same as our TI consoles. Anyway, this is the reason I'm creating the board described in this thread! To make a drop-in, form-factor-correct, non-hacky power converter replacement that meets the specs for THIS machine and will allow for a range of simple DC voltage input adapters.
  2. The "burning off" is simply the way linear voltage regulators work. Linear regulators are lower cost and take up much less board area than switching regulators, and create a much "cleaner" rail. However, creating 5V from 12V with a linear is a huge waste of power. In modern designs, when a noise-free supply mandates the use of a linear regulator for sensitive analog measurements or audio use, LDO (low-dropout) voltage regulators are used. So, to get from 12V to a clean 5V, you first have a switching regulator reduce the 12V to 6 or 7V, then apply that to a low-dropout linear regulator to get a clean 5V. Now only 1V or 2V worth of power is being wasted. Of course, the amount of power actually wasted depends on the current draw as well. It can be appropriate to create a 5V "trickle" rail from 12V with a linear regulator if it is only consuming <100mA to drive a real-time clock or system management unit (for things like soft power).

Modern computers and modern operating systems need more than just a hardware reset switch, or you could create many ways to "brick" your system, or at least corrupt it to the point of needing to be re-formatted/reinstalled. These old 8-bit systems had few power rails, fewer asynchronous clock domains, and no caching. When you wrote to the disk, for example, the blocks were written when commanded and control returned to the program only after the write was complete. Modern hardware and operating systems use multiple levels of caching: local, higher-speed storage that is only periodically read/written to/from the main source/destination. This applies to main memory in the case of the processor caches, and to mass storage in the case of your file system. If you were to pull the plug (a hard reset is no different), anything in the caches would not get to its final destination and could easily leave things in a corrupt state.
This is even more critical with SSDs, as they constantly need to shuffle logical blocks around as part of their wear leveling, and they also auto-encrypt all data. Without proper power sequencing, it would be very easy to actually brick the SSD into not even being re-formattable at the OS level. Ah, power sequencing! Our old machines only had a few power rails. Usually +5V for the digital logic, +12V generally for analog stuff like video/audio/disk drive motors or anything else that needed power, and, in the older generation machines of which the TI sits on the border, a negative rail to bias old NMOS and early CMOS digital ICs. Not too long after the early '80s, machines only needed one digital rail, the +5V. Then, in order to facilitate faster clock speeds, lower power consumption, and smaller transistor processes, all of which go hand in hand, the "digital rail" was reduced from +5V to +3.3V, to +2.5V, +1.8V, +1.0V, and, in today's fastest processors and memory, even sub-1V logic levels. A modern computer actually runs on a mix of ALL of the above voltages at the same time! Moreover, most ICs will have a lower "core voltage" so the internal logic can run faster and use less power, and a higher "I/O rail" to talk to other ICs on the MLB. All of these mixed voltage levels created the need for special power sequencing, where the rails are brought up with proper timing relative to other rails. All of this, plus other functions like thermal handling, is done by another small processor (usually a microcontroller) generally referred to as the "system management unit" or SMU. It orchestrates sleep/wake/hibernate, power-up/power-down, and warm/cold reset, as well as recovery from hard power losses (auto-reboot for servers and the like). All this to try to ensure you can't/won't corrupt or brick or even destroy your logic ICs by typing the wrong thing or pushing the wrong button. There's just no way to accommodate a hard reset button on modern machines.
As far as programs locking up, modern operating systems are very adept at recovering from an individual program or process that goes awry. Only a hard crash of the OS itself would require a forced reset of the machine. I'm a Mac guy, so let me illustrate: on an Apple II, if a program crashed, your whole computer froze up or, if you were lucky, dumped you into the "monitor," and you had to hard or soft reboot (power switch, ctrl-reset, or PR#6). Later, on the early Macs, a program crash would give you the system bomb and a dialog box with a button to click to reset the machine. Here you can see the operating system getting smart enough to try and salvage what was in progress, but still needing a warm reset directed by the user. You could still get hard crashes that required the reset button, but as the OS progressed through System 6, 7, and 8, hard crashes became less common. On to OS X, and now a program can crash but leave you more or less "cleanly" back at the Finder. In fact, the other programs you had running didn't lose any data or crash themselves! You could still get a kernel panic, and they were not uncommon in the early days/years of OS X. Fast forward to today: I'm running macOS 10.13 "High Sierra" and I can't recall having a kernel panic for MANY years. Probably not since before 10.6. Maybe in 10.4 through 10.8, I'd need to reboot the machine every week or two to give it its "pep" back. For the last 5-6 years at least, though, I can let my machine run for months or even over a year, only needing to reboot it for certain OS updates or, again, to give it its pep back. If these machines had hard reset buttons wired to the processor, they would be in much worse shape. There are literally a thousand things going on in your modern computer besides the program you happen to be running, and yanking those in flight would wreak havoc. Sorry for the long post. I just thought I'd give a treatise on why things aren't as simple as they were back in the old console days...
  3. It's actually wattage. Depending on the voltage chosen for the power adapter, the amperage requirements will be different. If the power input requirement of the computer is, say, 18 watts, the end user could use a 12V/1.5A supply, a 15V/1.2A supply, or an 18V/1.0A supply equally well. I still plan to allow for some range of input voltages; I'm just not sure if I'm going to handle the case of <=12V yet. I may design for both so I can test on the first prototype, as there should be plenty of board space.
  4. Sure, because... GROMs! My 100mA -5V measurement was taken with a cartridge playing, so I suspect the load requirements will not be much higher. With that, I found a 200mA-capable inverting switched-cap regulator IC, so I think we're going to have good margin on these supplies. Again, the end user will simply need to choose a power brick with enough wattage for their particular setup...
  5. Hmmm, I'm rethinking the use of lower than 12V for the input supply. Even with an exactly 12V supply, I'd need what is called a buck-boost or flyback style regulator, which is larger, more complex, and more costly than a buck regulator. I don't want to fix the input supply requirement to be 12V only, and even then the wall supply would be unregulated and not really suitable for the 12V rail as-is... So, I'm considering making the input supply range 15V-18V. I need a bit more time to research the flyback options. I should also locate a 12V supply and see what the unregulated voltage actually measures. Sometimes these 12V supplies are actually 13-14V, with a minimum of 12V under full load (because they're usually unregulated internally). If that is the case, I could use an ultra-low-dropout linear regulator for the 12V rail, since it's only drawing 250mA. I've already identified a couple of low cost/complexity and very efficient (90%+) 5V switchers at 2-3A ratings, so the rail with the highest power requirements is already sorted out. On to the -5V. 100mA is borderline for switched-cap inverters. I may need to do an inverting boost reg instead. From looking at the schematic, it appears that only the CPU, DRAM, and GROM ICs are actually using -5V. I doubt any modern add-on hardware will add extra load requirements to this rail. I should probably see what the draw is with the Speech Synth attached, though...
  6. I'm sure that and many other things could be made to work, but they're all piecemeal hack-together solutions: PicoATX + Neg5VCircuit + power switch + LED + how to mount it all... I like to make drop-in boards that have the same form factor as the original, so there's nothing glued, hacked, or flopping around inside the case. As far as the expansion box, I suspect it has the AC/DC supply inside and just takes a power cord? There are lots of ways to lighten that up, I'm sure, but again, I'm trying to stay away from anything that involves line voltage. UL exists for a reason, ya know! Also, and I could be wrong, but I suspect console-only "users" outnumber PEB users by something close to 10:1.
  7. Also, I have a bunch of these which might be able to provide enough power for just a console + a small addon: Wouldn't THAT be something?
  8. Still too "bricky" for me. Also, while that potentially replaces the entire internal power board, you'd still need a PCB inside the box to work with the factory switch and to provide the LED for B&SS systems. The Mean Well is also still a floor brick, although smaller, and requires "assembly" in the form of a second IEC power cord. All make me go "Yuk!" I'm thinking more like this: https://www.digikey.com/product-detail/en/tri-mag-llc/L6R18-180/364-1273-ND/7682636 As far as soft power, it's far better to bring out a communication interface that connects to something "smart" than a raw switch. To me, the smart power would still use the console switch to connect/disconnect the input power. The soft power would then enable the regulators to actually power the console. Soft power would require a small "trickle" regulator to power a simple MCU (I'd use an ATtiny AVR part). An added benefit of having a communication interface like UART is that you can do a lot more power management type stuff, like poll any indicator signals from the regulators (overcurrent/undervoltage) or even set the power LED color remotely. In addition, the ATtiny has a small bit of EEPROM, so you can store settings in it that persist when wall power is removed. I actually made a project with one of these that gives you an RGB power LED for the Macintosh Color Classic and overrides the volume and brightness buttons to set the LED mode while still retaining the main volume and brightness functions of those buttons. Also, since I'm planning multiple platform power supply projects based on these switcher designs, I like the idea of the universal single (but flexible) input voltage over a standard barrel connector. My solution may not end up being for everyone, but I think I'm designing for the "common case" of console + a few modern low-power add-ons... Thanks for the continued input!
  9. Even a laptop supply is bulkier than I'd like, and at 50-90W is way overkill. I'm thinking commonly available 12-18V wall adapters with the standard 5.5mm barrel connector. Some of these can be very compact at sub-20W ratings. If I can efficiently handle 5V input, a lot of very compact USB-type chargers would even be sufficient, although most of those are only in the 5-10W range and may be underpowered. Still, the barrel jack will allow for a lot of options, as some users can get by with a lot less input power and some will want more...
  10. The new supply will absolutely produce less heat! As I demonstrated before, more power is being burned than delivered to the logic board, with the existing pre-QI supply at least. I'll be using regulators with 80-90% efficiency at load. Can't do much about the switch, as I'm going for a drop-in replacement. However, the stock supply uses a dual-pole switch and I'll only need a single pole, so there could be less friction? I'm sure I can design a supply that is capable of powering any combination of modern devices attached, short of a backlit TFT display! I found a candidate switcher for the 5V rail that is good for 2 amps, so I'll probably use that. The main requirement will be on the size of wall adapter the user chooses. I thought about the dream scenario of putting an AC/DC inside the box and having a common IEC power cable attach to the unit, but then I'm getting into the realm of dangerous voltages and possible fires/injuries that could expose me to liability. Something non-obtrusive that attaches at the wall was my compromise. Again, thanks for all the great input!
  11. Yeah, there will definitely be reasonable overhead. I'll probably design the regulators to handle a significantly higher load, so the end requirement will lie in whatever adapter the end user chooses for their system. Oops! You lost me at "brick." I usually refer to that as "soft power" (being a vintage Mac guy). I really don't see a need for that here, as the TI-99 has no power management capabilities. Also, anything that would involve extra tie-ins to the logic board or software/firmware hooks is an anti-goal. It takes less energy to slide that switch that's a few inches away from the keyboard than to type a command and hit carriage return. As far as automated startup/shutdown remotely from a networked R-Pi or similar, that's probably a very small niche. I could add a test header with the regulator enable signals in case someone else wanted to gain access to those. Anything "smart" or "soft" power related would have to live somewhere else anyway. Or... ugh, now I'm digging myself a rathole: I could put a small ATtiny "Power Management Unit" on the supply board and provide a UART header for whatever wanted to hack it. And I could make it a BOM stuff option or DIP-switch configurable. I'm thinking of making the power LED an RGB and having a DIP switch to allow for 7 colors to be chosen. Well, there you go, I think I just defined the final feature set for this thing. Thanks for the input!
  12. Thanks for the offer! I'll definitely reach out when I get closer. I'm putting this design on the fast track as I hope to reuse the regulation circuits on a number of similar projects.
  13. Oh! Great reminder - I should repeat my measurements with the Speech Synth attached, although I suspect it probably only adds a couple hundred mA of load to the 5V rail and a negligible amount to the other rails. I'd be surprised if things like the modern 32K sidecar use much more than 100mA, assuming it's using a fairly modern SRAM component. Thanks for the input!
  14. Hey all, since FinalGROM99 beat me to the market with a fine multi-cart solution, I've been thinking up other useful retro projects to do... I've always HATED the large, heavy bricks that come with the TI-99, even the "wall mount" one, complete with a screw hole so you can anchor it in place using the screw that holds your duplex outlet junction box cover on! Not completely unrelated, I also cannot stand the large, heavy power brick that comes with an Apple IIc, although it is "prettier" than the TI brick. Also, these bricks take up a lot of space in my collectibles box/trunk. So... I've devised a way to allow much more compact "switcher" wall adapters to work with these machines. In fact, my goal is to be able to use the same wall supply with both my TI-99 and my Apple IIc. Of course, the TI-99 uses an AC/AC wall or floor brick and has a sizable internal power supply board to both rectify the low-voltage AC and regulate the +5/-5/+12 supplies required by the main logic board. On the earlier revision units, the individual regulators are linear and are terribly inefficient. For example, to create the 5V rail from the 12V rectified/regulated source, which itself is derived from a slightly higher rectified, filtered voltage, you basically "burn off" the extra 7V in a power transistor that acts as a variable resistor. So more power is wasted/converted to heat than is actually delivered to the logic board! Now, I believe the later QI supply switched from linear to switching supplies (unavoidable pun), but I'm not 100% sure, as I haven't seen the schematic for the QI supply board. However, I suspect it too is far less efficient than modern supplies are capable of. Anyway, the goal of this project is to make a replacement power board that can fit into either Black/Silver or Beige units (sans power LED, or you can drill a hole in your case) and allow these units to run with a variety of compact switcher AC/DC supplies. I've found 18W 15V units as small as 2.5x1.5x1.1"!
A goal is to allow a fairly wide range of input voltages, depending on whether my 12V regulator is strictly a buck (requires higher input voltage than output) or what is called a "buck-boost," which can actually take an input voltage that is either lower or higher than the output voltage. The 5V supply will definitely be a buck, and the -5V may just be a switched-capacitor inverting supply, as it has much lower current draw from measurements on my console. With that, I did some current draw measurements on my non-QI supply and logic board and got this: +12V: 250mA, +5V: 1.0A, -5V: 100mA. So, despite the wall brick being rated for 22 watts, only about 8.5W is being delivered to the logic board. Surprisingly, I saw very little variation between idle and while playing Parsec. It appears that keypresses more than anything caused load fluctuations, but the load variations I saw were in the 5%-ish range. Now, I suppose the unit would need more power if you stack a bunch of sidecars onto the console. I'm not as familiar with those, as I never had more than the cassette interface myself. Does, say, the disk drive get power from the console, or does it have its own supply? Regardless, I'm going to make this specced for the power requirements of just a bare console, with a reasonable amount of overhead (maybe 20-25%) to allow for some of the more modern things people are making that attach to the consoles (F18A, that CF floppy emulator thing, that Raspberry Pi other thingy, etc...). The rear power connector will be replaced with a thin panel (metal or just made from PCB material) with a standard 5.5mm barrel jack that most generic wall supplies can be gotten with (as long as it's not one of those oddball ones that puts power on the outer barrel!). Since the TI console alone only uses 8.5W, I'd be interested to see if I can run it off of the 6V 1.8A Radio Shack supply I have that basically looks like a box-shaped power plug and never blocks other power sockets when in use.
I bought like 50-100 of these when someone was liquidating after Radio Shack went more or less out of business, so I'd like to find a use for them... Anyway, this is a long introductory post, but I was wondering if there would be an interest in a product like this in the community? I'm going to make this thing for myself anyway, but if I can make one, I can make more if there is a market. Thanks for reading!
  15. OK, after starting a two-EEPROM version of the AnyCart with an ATtiny and scrapping it... And then completing a schematic for a mixed EEPROM (for ROM) and ATmega+SQI ROM (for GROM), all the way to floor planning, I've decided to park that implementation as well. Still too many compromises and kludges/muxes/messy stuff. So. I now have a 75% complete schematic of "Third Time's a Charm" goodness! I've decided I don't really want to just make another, albeit improved, version of a ROM/GROM cart. I want to make a cartridge development platform! So I've jumped straight to a 44-pin ATxmega-class processor, which is only a buck more than the 32-pin ATmega I was using before, with the following benefits: 44 pins (of course) - Turns out the TQFP44 has the same .8mm pin pitch as the 32-pin part, so other than the extra 12 pins, the soldering won't be that much of an extra challenge. I can do 1.27mm SOICs without my "Nerd Goggles" and can somewhat painfully handle .65mm TSSOPs and .5mm VSSOPs, so I'm confident these will go down relatively easily. 32MHz operation, and all from the internal oscillator. This actually saves me a $0.50-$0.70 crystal, so the XMEGA upgrade is even cheaper! Also, I hear people can run these at 48MHz with no issues, and still from the internal oscillator to boot! USB, baby! Although my initial code base won't have the hooks in, and I'll be using my ICE to flash the AVR and my custom SPI programmer for the flash, eventually I'll have bootloader code and a way to flash the cart over USB. It will probably still require preparing a binary, but still, the cart will ultimately be able to be flashed without cracking it open. I plan to put the mini USB on the side so you can never have it powered by both the console and USB at the same time. If I had the newer white cartridge shells, I could have put it in the cart slot area to the side of the card edge connector. Still, when the cart is inserted, the USB won't show.
Unified SQI flash ROM: Yup, a tiny 8-pin SOIC with up to 8M (maybe more) of flash. The only downside is that I have to run the XMEGA off of 3.3V and therefore have to level shift the entire cartridge slot. But I already have that worked out, as well as mixing (well, tristate bussing) the 8-bit data bus with the lower byte of the address bus, since I'll never need both at the same time (the console won't be able to write to GROM). Even at 48MHz, with 24MHz SPI, I'm still too slow to do straight ROM access with single-bit SPI, but I figure the 32-48MHz boost over the ATmega's 20MHz should make it easier to emulate the SQI bus at a rate faster than single-bit automated SPI can handle. I have an idea where I preload the first 12 bits of the address whenever I go to idle, to get a head start. Then, if a ROM access comes in, I'll only have to send the last 3 nybbles of the address and fetch the data. For the second half of the read cycle, the auto-incrementing behavior of the SQI flash should make it easy to fetch the second (even?) byte. Depending on stuff, the ROM images may have to be "re-endianed" to line up with this, but that's just byte shuffling and can be done in a script if needed. Now, if we're sitting at idle with our ROM bank address preloaded into the flash and a GROM access comes, I just pull the chip select high and start over with loading the GROM base address. The delay is no problem, since GROM is Soooooo Slooooooooooow compared to CPU ROM. So, this one's a keeper, and the floor planning is looking good. I have all the level shifters down in the "squeezed" area of the cart where the shell Z-height restrictions are tighter, which is where they want to be anyway, right next to the card edge signals. The only task left is to hook up the busses to the AVR, which is an iterative process that I'm doing in conjunction with layout to make routing easier (i.e. I can reorder signals to make them mostly run parallel to the AVR ports).
I probably won't have a lot of time for layout in the next week, but I wanted to make sure the schematic was done so I didn't forget any of the fine details. Still, I have a bunch of PCB stuff backed up, so I'm pretty motivated to get this done so I can send everything out for fab at once. I'll keep everyone posted.
  16. I'm not too familiar with the GRAM Kracker myself, but that is a good question.
  17. Sure, that's my point. A GROM device is a synchronous device, albeit in its own clock domain. My design will attempt to reproduce the synchronous behavior of an actual GROM. I have now come to realize that you have chosen to implement your design in an asynchronous manner, which was not immediately obvious to me. We're both engineers here, and we know there is more than one way to implement a design. My implementation will be closer to emulation; yours seems to be closer to simulation. Both methods work, as your design has proven. Yes, I now realize that GROM operations are a series of individual atomic CPU access cycles and not a monolithic operation like I had assumed. My assumptions made sense to me based on how other sequentially accessed ROMs operate, including TI's own TMS6100 speech ROMs, which I had just completed an AVR-based ROMulator for. Since the GROM was not openly specified by TI and I have yet to find an actual TI-published timing diagram, I'm left to deal with interpretations based on the work of others like yourself. Haha! I like you. Of course it's not laid out, just schematic and floor-planned. However, I can go all the way to fab and assembly before I need to know exact details. I just run every control signal to the AVR, and mux the two nybbles of the data bus to both the AVR and the SQI flash, and the rest is just "typing" (my former manager's word for software). Your advice is much welcomed, and your knowledge of the system is extremely helpful! I'm choosing a different path by implementing synchronous near-emulation, but that doesn't mean I don't have respect for your design or the work that went into it. My background is in synchronous ASIC I/O bus and host controller designs, including multi-clock-domain situations (33/66MHz PCI to 50/100MHz FireWire), so I'm naturally gravitating towards the design style that is in my wheelhouse.
Other than quicker response time and probably not responding to address reads, I would like my GROMulator to behave both internally and externally most like an original GROM. Again, thanks for all the useful input
  18. I clearly need to read the GROM spec more closely. I thought the GROM clock was what latched in the address bytes and synchronized the control signals. Also, I thought the auto-incrementing sequential data cycles were just a series of GROM clocks with the next sequential byte's data at each GROM clock edge. If this is not the case, then GROM is much slower than I thought, so I guess I'll have no problems. It will be more clear when I look at the appropriate flow charts or simply take some logic analyzer dumps. I do realize the GROM clock is in a different clock domain, but I assumed these were still synchronous devices. The actual GROM chips surely require this clock, and I still plan to synchronize my GROMulator to that same clock. I'm going to run my software state machine for GROM emulation off the GROM clock, either by sampling the clock or, more likely, routing it to a pin change interrupt and emulating a clocked state machine from that mechanism. Doing everything that is in the GROM flow chart completely asynchronously could expose some of those latent bus contention situations I talked about earlier. Also, I would be inclined to believe that the internal address counter in an actual GROM actually counts on GROM clock edges. So, the big challenge will be straight non-GROM ROM access. My understanding is that I will only have four 333ns CPU clocks total to decode the address, fetch data from the SQI ROM, and present it on the data bus. If so, timing will be pretty tight, and I may just have to build the thing, see how good I can get the software, and then decide how much overclocking, if any, I need to guarantee timing. Because of that, I may also put a "chicken ROM" down so I can at least have 512K-1M of parallel ROM if I can't get shared serial ROM to meet timing. I have a sort of AVR dev board on hand that is not an Arduino, so maybe I should whip up some test code to see how fast I can emulate the SQI bus.
Also, is anyone clamoring for 8K of RAM in the cartridge space? Due to the bank/title select's use of lower address lines, I thought of having only the upper 4K as an SRAM option. That's still 16x what the built-in console has, and many/most are utilizing various schemes to get the proper 32K expansion, either with PEB emulation like the CF7+, putting the 32K in the console, or having actual PEBs. The SRAM I'm putting on the cart will be more like the battery-backed Mini-Memory SRAM, although it can be used for volatile data if a title wants that too. However, it will be nonvolatile and NOT need a battery… ever! It's a semi-new technology that has a flash-backed SRAM implemented at the bit level. What I may do with the 8K chip is split it into 2 banks, so that if you use the Mini-Memory built-in cart and later want to use a non-Mini-Memory title that just wants volatile RAM, it won't overwrite the Mini-Memory 4K partition. The descriptor will have a bit that chooses which bank of 4K SRAM is active, if any, for a title.
  19. Ah, very helpful, mizapf! The link to that book I had on my iPad didn't have Appendix G included! So I see now that there is a mechanism to switch the 2.2K resistors from pull-ups to pull-downs on the 8-bit data bus to the GROMs. This also seems to suggest that while the GROMs are PMOS, the chips that interface with them are NMOS, and that all MOS devices are open-drain. Looking at GREADY (GRY in the sch) is interesting. It has a pull-up! So, it looks like there is a very good chance that the PMOS GROM has an NMOS GREADY signal! So the not-ready state could easily be asserted by all chips, and you're not ready until every last ROM releases the not-ready state. So… it could be fun to make a multi-cart that also has the console ROMs onboard. You would have to pull the internal console ROMs for it to work, but you could possibly speed up the whole system that way! Does the later v2.2 refer to console ROM or GROM0? This could be a way to help out those v2.2 folks who can't run ROM-only carts. I may make this an option for the multi-cart. Custom non-Extended BASIC, anyone? Why?… Why not! Thanks for pointing me to this resource. The other books will come in handy too. More nighttime reading for my iPad!
  20. Hi Tursi, sorry up front, I don't know how to properly reference split quotes, so I'll just set off any quotes below in quotation marks: So it looks like I won't have to worry about processing time with GROM in general, so that's good. "The hold (or rather, READY) signal is an important part of being a GROM. I wouldn't recommend leaving it out on purpose even if you don't think you want it today." Sorry, I may not have been clear. I was referring to not having a way to hold off straight ROM accesses, which I am also trying to implement in a single flash resource. "GROM operations take 14-30 GROM clocks. You aren't anywhere near being slow. You are also not required to use the GROM clock; nothing else in the system relies on it. I take it into my AVR but I don't use it." Wait, GROMs are more or less synchronous. How can you know when to sample all of the control signals, particularly since access can have multiple phases (Load Addr H, Load Addr L, etc.)? Most importantly, sequential data accesses surely need to be synchronized to the GROM CLK, do they not? "Reading back gives you the correct address, although no chip responds to that range. The data bus is likewise not strongly driven, but I need to double-check the details. Reportedly there is a pull-up on the bus in one direction and a pull-down in the other, so you need to drive it correctly. (TTL can dominate that system, which is how the old GROM devices would override it.)" So during an address read, every GROM chip drives the bus at the same time? Normally that would be a nightmare if one of the GROMs had a corrupt address load, but, being PMOS, I guess they can get away with that since there is no possibility of contention.
However, I will capitalize on this behavior and not have my GROMulator respond to address reads at all, since the console ROMs will cover reporting the loaded address for me. Hmmm, this does make me realize I may want to put a stage of PMOS open-drain buffering on the data bus to prevent possible contention. Mizapf is right on the operation of READY. "GROMs are /always/ "not ready", except when they have completed an operation." That makes perfect sense to me; however, I may have been confused by this statement, then: "If you're fast, the other GROMs still hold the bus the normal duration anyway." I took this to mean that for every GROM cycle, the CPU doesn't "see" the GREADY until every GROM chip says it is ready. In other words, for a given access, I thought you were saying that even if I complete my access super fast and assert my READY, the console GROMs, for instance, would block it until they also say "ready." Or were you simply stating that my accesses will be fast, but whenever access occurs to a different GROM, those particular slower accesses will bring the average access time down? However, knowing that the devices are PMOS and that GREADY is active high, I would think that no GROM can override my GREADY high, since no PMOS device can drive low. "Reportedly there is a pull-up on the bus in one direction and a pull-down in the other." I'm probably misinterpreting this as well. I could see, if NMOS devices are mixed with PMOS devices, that the NMOS drivers would have weak pull-ups and the PMOS drivers would have weak pull-downs, and any TTL (or CMOS, even, if TI used any such devices in the console) can easily overdrive an otherwise non-driven bus. However, since the data bus is bidirectional by design, that would suggest that an undriven bus is not actually tri-state, but floats somewhere in the middle due to the resistor divider created by the NMOS pull-ups and PMOS pull-downs. Looks like I'll need to put an oscilloscope on the bus as well as the logic analyzer.
I'm working with a PMOS device emulation in my speech synth ROM project, and my goal is to completely understand the circuit. Otherwise there is a real possibility that my replacement device, and my multi-cart for that matter, might have slivers of time where there is real bus contention that, over time, can degrade the components and cause long-term damage or failure to the system.

I should probably put a scope on the actual ROMs used in the carts as well to see if they are PMOS or NMOS, or if they have a TTL output stage (unlikely). There's a very good chance that the parallel ROMs are NMOS, as that was a popular technology due to its higher bit density. Actually, it occurs to me that the reason the GROMs are characterized as weak drivers, even by TI's admission, is because they are using undersized PMOS transistors. In a properly balanced CMOS gate (balanced for speed or drive, which is related to speed), the PMOS high-side FET is physically larger to achieve the same timing and drive as the NMOS low-side FET. I forget how much larger, but close to 2X IIRC. I should pull out my old VLSI design book and look it up. I'd been writing RTL for so long, sometimes I forget about the transistors that are begotten (begat?) from the code.

Anyway, pull-ups and pull-downs (and PMOS and NMOS) are pretty easy to spot with a scope. If the rising edge is fast and the falling edge is slow, you have PMOS with a pull-down, and vice-versa for NMOS. The other question for technology this old is where the resistors are. I would have no problem believing that the PMOS and NMOS (if any) chips simply have open-drain drivers and that the "pull-me" resistors, as I generically call them, are passives on the board or a "bus parking" terminator of some kind. It may do me well to hunt down the console schematic, if it's out there, and see exactly what I'm dealing with before I start "playing on the bus".

Again, thanks for the help and clarifications!
If I learn anything new and interesting, I'll be sure to share!
21. Oh wait. You mean the console GROMs will hold GREADY inactive (LOW) until they decide the address is not in their space? So all of the GREADY lines are wired-AND together and are not totem-pole or push-pull? I thought GROMs were PMOS. Hmmm, I'd better do some more research on that aspect of the GROM electrical specs. Still, I definitely don't want to be the slowest GROM in the box.

Note that your GROMulator has parallel access to the internal flash in the AVR you chose. I'm using an external SQI flash. If I just used 1-bit SPI, the initial flash access would be 8+24+8 clocks at 4MHz (the max SPI can run when the SysClk is 8MHz) and would take 10us plus some overhead to present the parallel data on the bus. Subsequent sequential data cycles would be right at 2us plus overhead, just about the exact rate the SPI clock allows. However, if we creep just over the line, we have to wait over 2us until the next GROM CLK to get the data, and would effectively be 2x slower. But I'm confident that whatever I lose in nybble-banging SQI, I'll get back in the 4x data throughput, so I'll definitely be able to deliver sequential data at the GROM CLK rate.

Still, I really want to be able to handle ROM cycles with the SQI ROM, so I'm hopeful that I can devise enough tricks, including hand-assembled SQI protocol, falling-edge clock tricks, and, as a last resort, overclocking, to make ROM timing. Especially since I won't have the option of a hold signal.

Before I fab this board, I plan to put my logic analyzer on a real cart and characterize the typical cycle timings at the cart slot. I'm suspecting I may be losing a sliver of time budget versus the timing diagrams due to the console having to decode to generate the ROMG signal. Right now I'm designing a nybble-slicer from 8:4 muxes so I can present most of the address nybbles directly to the SQI interface instead of piping them through the MCU.
Most of the overhead will be in generating the high-order address nybbles and getting them piped to the SQI, as there will be a port read, table lookup, and pointer arithmetic before the bank portion of the address can be sent. I guess the table lookup offset will be in a local variable by then, so at least I won't have to take the array index timing hit.

All of this active discussion is great! It helps me "think out loud" while typing and alerts me to caveats I need to keep an eye out for. Thanks, all!
22. Haha, thanks acadiel! Now the board is looking plenty sparse. I may throw down one EEPROM/FLASH socket just in case I can't make ROM timing. I came up with a trick that I'm sure will allow me to meet GROM timing, though, maybe even at a crystal-less 8MHz. Stuffing options are always good!
23. OK, I had an 80% complete schematic for the design above, with dual EEPROM sockets and all of the limitations I originally conceded to, but when I started some initial PCB floor planning I noticed that the cartridge board was looking pretty congested. Then I realized that I forgot all about the SRAM chip! Not willing to give up yet another important feature, I decided to scrap the whole thing and start over at the architecture phase.

So, now I'm back with a new design. I've decided to take the plunge and go for a unified ROM resource. I'm using a Quad-SPI-capable tiny 8-pin SOIC that packs up to 8MB in a single tiny chip! My hope is to make a very efficient "byte-bang" SQI interface implementation using AVR assembly so I can hopefully achieve the 4X speedup over 1-bit SPI. I'll have to see how fast back-to-back port writes can be done in assembly. I'm not at all worried about GROM, since that is slow as molasses (less than half a MHz clock, right?), but I really want to be able to handle straight ROM on this interface as well.

I'm upsizing from a 20-pin ATtiny SOIC to a 32-pin ATmega and putting an external crystal down. Worst case, I'll overclock the AVR from 20MHz to 30MHz, which appears to be quite stable for most who have tried. Since I'm not really using the ADC or any of the internal I/O modules, I expect to be in good shape. This will prevent me from having to level shift an XMEGA processor. Also, I'll be able to use some sort of serial in-system programming method for the flash, instead of dealing with socketed EPROMs. This would open the door for a future revision with a USB-capable ATmega AVR and being able to flash over USB, or I may just release some USB/UART-based "bus-piratey" type flasher for those who want to be able to flash their carts. For the first proto, at least, it'll have to be ISP through a custom external device like my MBED development board expansion port.
Anyway, I just wanted to update everyone and let you know the project is officially underway! The PCB is holding up a couple of prototype multi-board panels I'm waiting to release, so I'm motivated to get this one done quickly so I can send everything out at once and have a ton of fun soldering/reflowing to do in a few weeks.
24. Hi acadiel, thanks for the kind words and ideas! To get things rolling quickly on a prototype, the AnyCart is going to have some limitations. Its goal is to be able to handle most if not all production carts and early third-party carts, while being capable of handling many later titles. It really won't be able to easily accommodate new titles with ROMs exceeding 1-4 banks or >40K of GROM in one title.

I'm sort of redefining the original latched bank switch scheme into a title selection, but after a title has been selected and launched, the AnyCart will allow titles that had limited bank switching via writes to >600x to operate, based on flags in the descriptor. Titles that did not actually implement bank switching, but instead did writes to ROM either due to bad code or for copy protection, will have the live bank switching disabled.

Oh, and also, SRAM! I'm putting down an 8K non-volatile SRAM that needs no battery, so it will last forever (or at least one million power cycles with a data retention of 100 years). Mini Memory should easily be able to be handled by the AnyCart, as well as new titles having a similar option of 4K ROM, 4K SRAM, or live banking of 8-24K ROM + 8K-ish SRAM. The only restriction is that you won't be able to use maybe the lower 256 bytes of the SRAM, since writing there might switch the bank. It depends on how deep into address decoding I want to get.

My plan is for the AnyCart hardware to be cheap and easy. It will have 512K of ROM flash and 512K of GROM flash, split into 8K chunks. You will be able to select up to 128 titles; however, unless you were loading the cart with exactly 64 8K ROM-only titles plus 64 8K GROM-only titles, you probably won't be able to take advantage of the full 128 titles. However, there is a 1Mx8 OTP EPROM that seems to be pin-compatible with the socket layout, so I will try to have a jumper option to support that chip in case one is willing to give up reflashability for 2x the space.
That 1Mx8 is the largest capacity that comes in 5V, and it's not a pesky wide TSSOP, which makes external programming and socketing a chore (and expensive!).

I'm actually still trying to figure out the whole CRU mechanism, let alone using it for bank switching. I just want to "get" what it actually does. I realize it's some sort of serial-based device register access, as opposed to having memory-mapped I/O in the parallel bus space (I think), but beyond that it's still a mystery. This is mostly because I haven't had time to do more than skim the available information on this bus. For me personally, it helps to have background information on why TI used this method and not just how it operates. For the AnyCart, I probably won't have CRU-style bank switching, at least for the initial prototype. Hopefully this will not exclude too many titles. I realize my AnyCart really will be an "Almost AnyCart", but that just doesn't roll off the tongue as easily, so I'm just gonna keep calling it the AnyCart.

My plans for the EveryCart, if I still have the time and resources to develop it in the future, will be to use a much more capable MCU, like a 100MHz Cortex-M3. It will utilize a very fast and large unified SPI flash for all resources. The SPI will have to run at least 33MHz, I believe, in order to meet ROM timing. The AnyCart's MCU is just a traffic director, intercepting a small number of address and control lines and staying out of the way of the data bus, while generating the appropriate base addresses and chip selects. The EveryCart's MCU is going to have access to the entire slot's signals, and all of them will have to be 5V-3.3V level shifted. Access to everything on the bus will make it much more powerful. Also, since the processor core will be 32 bits wide and very fast, complicated math can be done on the address pointers so the memory resources can be utilized much more efficiently.
A 6K GROM will only take up 6K, and, if anyone took the time to see how much actual space the carts take up (to the nearest 1K perhaps), we could optimize even further. I would be curious to know, even for the AnyCart, if there are a large number of known titles that are actually 4K and under, as it wouldn't be too difficult to partition at that granularity.

Anyway, after much agonizing over memory chips, and trying my best to shoehorn SPI or SQI onto the AnyCart to make programming easier, I finally realized I'm going to have to start with the good ole 32-PLCC form factor socketed Flash/EEPROMs for this cart. I wanted a board that I can assemble reasonably easily with my tiny reflow oven and maybe a couple of partial solder stencils, if not solder completely by hand (through-hole and SOICs). Since the surface-mount 32-PLCC sockets are much easier to route, I'll probably be solder pasting and reflowing the AnyCart boards. If this thing really starts to sell, I'll invest in a full-board solder stencil or maybe even offload assembly to the PCB fab house. These are going to come in new or repurposed cartridge shells, and I even have a plan for a special edition that would come in new-old-stock retail boxes with a printed manual.

First things first, though! With component selection complete, I'll be starting the schematic today.