About RedskullDC

  1. Hi Nippur72, et al. Been thinking about this a bit more over the last few days...

     Using the SDRAM means that you will have to allow for the refresh cycles, which are quite different from the original 4164 DRAMs on the 500. With the SDRAM chip on the MiST board, each refresh cycle is around 70ns as far as I can see from the datasheet, with each row needing to be refreshed at least once every 32ms IIRC.

     The dual port SDRAM controller would be best. You can run the video side with the 14.778730MHz clock without any problem to generate the exact PAL signal, and have a cycle-exact vertical interrupt. That also allows simple modification should you decide you want to generate a VGA signal somewhere down the track. Modify the SDRAM controller so it only allows refresh cycles to occur during the non-visible parts of the screen display (HSYNC and VSYNC active), and only during the VIDEO "slot".

     I don't think the 32ms maximum refresh interval is necessarily something to worry about too much. I've experimented in the past with delaying the refresh cycles to SDRAM chips to see how long they would retain their contents without a refresh cycle. A couple of chips were able to retain their contents reliably for anything up to 4-5 seconds without a refresh cycle being allowed to occur, far in excess of the 32ms quoted.

     The CPU side of the SDRAM controller is easy, as you no longer have to interleave with the video side. As long as you insert one wait state to the Z80 for each RAM/ROM read/write, the timing should match the original machine exactly.

     My Laser 500 has now been shipped, so as soon as it arrives I will be able to confirm your screen calculations of 952x312 precisely.

     Cheers, Leslie
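The refresh-during-blanking idea can be sanity-checked with back-of-envelope arithmetic. A minimal sketch: the ~70ns refresh cycle and 32ms window are the figures quoted above, but the SDRAM row count and the usable horizontal-blanking time per PAL line are assumptions for illustration, not datasheet values.

```python
# Back-of-envelope check that hiding SDRAM refresh in the blanking
# intervals is feasible. Row count and blanking time are assumptions;
# the 70 ns cycle and 32 ms window are the figures quoted above.

ROWS = 4096                  # assumed SDRAM row count
T_REFRESH = 70e-9            # seconds per refresh cycle
WINDOW = 32e-3               # every row must be refreshed within this

FRAME = 20e-3                # PAL frame period (50 Hz)
LINES = 312                  # lines per PAL frame
HBLANK = 12e-6               # assumed usable blanking per line

frames_per_window = WINDOW / FRAME            # 1.6 PAL frames per window
blank_per_frame = LINES * HBLANK              # blanking time per frame
refreshes_available = int(frames_per_window * blank_per_frame / T_REFRESH)

print(f"{refreshes_available} refresh slots per window, {ROWS} needed")
```

Even with these conservative guesses there is an order of magnitude more blanking time than the refresh cycles require, which supports the point that the 32ms limit is not a tight constraint.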
  2. Emulating some of the poor design decisions of early machines is not always a good idea. You hobble your design with constraints to deal with hardware that isn't even there on the FPGA board. How accurate do you want to make it?

     You can easily get around the RAM bottleneck by using a dual port SDRAM controller. Check out the one used in: https://github.com/NibblesLab/mz80b_de0 It's an emulation of the SHARP MZ80B on the Terasic DE0 board.

     Nice of Platis to fill in some of the blanks! Look forward to seeing a scan of that manual!

     Cheers, Leslie
  3. Hi Nippur72, From an FPGA point of view, there is nothing to solve. I assume you will put the LASER ROM image inside a Block RAM on the FPGA? Use the 4 BA bits to determine which block is being accessed. Read the SDRAM and ROMs in parallel: the Z80 can access the ROM while the video section reads from SDRAM.

     PAL screen output looks good.

     Cheers, Leslie
  4. I can see 250ns EPROMs are fitted in the pic above. If the ROM read cycles get the wait state, the speed doesn't really matter. The read will be initiated in one CPU cycle, and the ROMs will be read in the next CPU cycle.

     Yes, the LS273 will latch the output of the Character Generator. We just have no idea when in the cycle the VLSI actually reads that data. Will probably have to measure it with a scope to be sure. At a guess I would say it is read during the section you marked in blue during the video "slot". That is the only time you can guarantee that the DRAMs are not driving the RD7:0 lines.

     Cheers, Leslie
  5. I didn't say 8 "cycles", but 8 clock edges (both rising and falling). That gives you 8 distinct state changes within one CPU clock period.

     The delay line is probably correct for the generation of the *CAS falling edge. The data sheet for the 4164 DRAM says the max RAS-to-CAS delay is 75ns for the 150ns part. The period of the 14.7MHz clock is 67ns, which would be cutting it a bit close.

     That's good to know that ROM reads also get a wait state inserted, even though it isn't necessary.

     Regards, Leslie
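The margin mentioned above can be put into numbers with a one-liner, using the exact 14.77873MHz crystal frequency:

```python
# One period of the ~14.78 MHz master clock versus the 75 ns maximum
# RAS-to-CAS delay quoted for the 150 ns 4164 part.

F14M = 14.77873e6
period_ns = 1e9 / F14M                # ≈ 67.7 ns

margin_ns = 75.0 - period_ns
print(f"clock period {period_ns:.1f} ns, margin {margin_ns:.1f} ns")
```

About 7ns of headroom if *CAS is generated one full clock after *RAS, which is why a delay line (shorter than one clock period) is a plausible design choice here.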
  6. The timing diagram looks reasonable to me, but it's all guesswork till we get a scope/analyser on the real machine. I've added a couple of descriptions to the pin table. Not sure what FRSEL and /TOPBK are meant to do?

     RAS and CAS *are* aligned to the 14MHz clock. *RAS, *CAS, *AX will all likely be generated by a state machine circuit in the VLSI chip. Looking at the timing diagram, imagine that the video "slot" is divided into 8 distinct sections, split on each clock edge of F14M. State 0 begins at the rising edge of F14M, at the start of the video "slot". Here is what happens at each clock edge:

     0: Assume that the MA7:0 outputs are stable and hold the video ROW bits. *RAS goes low, clocking the ROW bits into the DRAMs. Access time for the DRAMs begins now.
     1: *AX goes low. MA7:0 are now outputting the COLUMN data bits.
     2: *CAS goes low, clocking the column data into the DRAMs. Difficult to tell from the diagram whether *CAS is generated from clock edge 2, or by a delay line attached to clock edge 1. Either would suffice.
     3-4: Do nothing, maintain state.
     5: *RAS goes high.
     6: *CAS and *AX both go high. The video section will most likely sample the RAM data here (approx 201ns after the falling edge of *RAS). *MREQ from the Z80 may go low here if the CPU is trying to initiate a read cycle. With AX high, the multiplexers will be selecting the ROW data of the Z80 address bus in readiness for the CPU cycle.
     7: Do nothing, maintain state.

     The CPU slot is basically the same as the video slot. All data to/from the DRAMs goes through the VLSI chip on the RD7:0 lines. Presumably, the VLSI chip latches CPU read/write data and presents it to the CPU during the CPU slot. It doesn't appear that wait states are inserted for ROM reads or keyboard reads either? It also isn't possible to tell from the diagram when the VLSI chip reads the character generator ROM/latch (2764/74LS273). I suspect it does that during the CPU "slot" when it has free access to the RD7:0 lines.

     We badly need a full scan of that technical manual.

     Cheers, Leslie
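The edge-by-edge sequence above can be sketched as a toy model. This is a guess at the VLSI's internal state machine based only on the timing diagram, not a verified implementation; active-low lines are modelled as booleans (True = high).

```python
# Toy model of the per-slot DRAM control sequence: one video/CPU "slot"
# spans 8 F14M clock edges (states 0-7). Signal names follow the post.

def dram_slot_states():
    ras, cas, ax = True, True, True       # all inactive at slot start
    trace = []
    for edge in range(8):
        if edge == 0:
            ras = False                   # *RAS low: latch ROW address
        elif edge == 1:
            ax = False                    # *AX low: mux drives COLUMN bits
        elif edge == 2:
            cas = False                   # *CAS low: latch COLUMN address
        elif edge == 5:
            ras = True                    # *RAS released
        elif edge == 6:
            cas, ax = True, True          # *CAS, *AX released; data sampled
        # edges 3, 4 and 7 maintain state
        trace.append((edge, ras, cas, ax))
    return trace

for edge, ras, cas, ax in dram_slot_states():
    print(edge, "*RAS" if not ras else "    ", "*CAS" if not cas else "    ")
```

On real hardware this would be a small synchronous state machine clocked on both edges of F14M (or on a doubled clock); the model just makes the ordering of the *RAS/*CAS/*AX transitions explicit.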
  7. The VLSI chip differences are probably just batch numbers. Not sure there would be any revision differences. The 17MHz crystal may only be required for the CHROMA and COLOURBURST signals? I've seen a couple of 500's with no RF modulator fitted. Not really sure of the purpose of the F3M pin on the VLSI chip. Perhaps they use it to derive a harmonic from the 14MHz crystal...??

     Cheers, Leslie
  8. 4 x 14MHz cycles. My Laser 500 hasn't arrived yet. The video and CPU "slots" are both 1 CPU clock wide (3.6947MHz), or 4 x 14.77873MHz clocks wide. The period of one CPU clock at 3.6947MHz is approximately 271ns.

     A pic of the 500 motherboard: https://www.flickr.com/photos/grupousuariosamstrad/7615785522 shows it to be fitted with 150ns DRAM chips. The DRAMs are easily able to respond within the 271ns "slot".

     Regards, Leslie
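The clock arithmetic above can be verified directly from the crystal frequency:

```python
# CPU clock = F14M / 4; one CPU-clock "slot" is about 271 ns, which a
# 150 ns DRAM can comfortably service.

F14M = 14.77873e6
f_cpu = F14M / 4                  # ≈ 3.6947 MHz
slot_ns = 1e9 / f_cpu             # ≈ 270.7 ns

print(f"{f_cpu / 1e6:.4f} MHz, slot = {slot_ns:.1f} ns")
```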
  9. Nothing you need to worry about from an FPGA point of view. AX = Address multipleX. Between the Z80 and GA1 on the schematic are 2 x 74LS257 multiplexers. AX is connected to the Select line on both of those chips, and is used to multiplex the A15-A0 address bus down to the MA7-MA0 signals which go to the DRAMs. The sequence is:

     1. At the start of a video/cpu "slot", *RAS, *AX, *CAS are all high.
     2. The multiplexers are currently selecting the "ROW" address bits.
     3. *RAS goes low, registering the ROW bits in the DRAMs.
     4. AX changes state; the multiplexers are now selecting the "COLUMN" address bits.
     5. *CAS goes low, registering the COLUMN bits in the DRAMs.
     6. The DRAMs now have the full address information.

     The address multiplexers only appear to be used when the Z80 is addressing the DRAMs. GA1 has its own MA7-MA0 outputs for when the video section is addressing the DRAMs. The CVAM output from GA1 (CPU/VIDEO-ADDRESS-MULTIPLEX?) can disable the multiplexer outputs. That will be the CV signal shown in the timing diagram.

     It's interesting to see the selection of ROW bits is A0, A1, A2, A3, A4, A8, A9, A10. This means that the 7 bit refresh address generated by the Z80 is ignored completely. The video section multiplexers inside GA1 must be similarly wired to the external CPU ones. I haven't checked, but I wouldn't mind betting those particular address lines cycle through all 256 combinations as a result of the screen being re-drawn, no matter what screen mode is being displayed: in effect, providing an invisible 8 bit DRAM refresh cycle.

     Regards, Leslie
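The 74LS257 multiplexing can be illustrated with a small sketch. The ROW bit selection (A0-A4, A8-A10) is from the schematic as described above; the COLUMN selection shown here is simply the remaining eight address lines, which is an assumption for illustration only.

```python
# Sketch of the A15-A0 -> MA7-MA0 address multiplexing. ROW_BITS is from
# the schematic notes; COL_BITS (the assumed complement) is illustrative.

ROW_BITS = [0, 1, 2, 3, 4, 8, 9, 10]        # per the schematic
COL_BITS = [5, 6, 7, 11, 12, 13, 14, 15]    # assumption: the rest

def mux(addr, ax):
    """Return the 8-bit MA7-MA0 value for a 16-bit Z80 address.
    ax high selects ROW bits, ax low selects COLUMN bits."""
    bits = ROW_BITS if ax else COL_BITS
    return sum(((addr >> b) & 1) << i for i, b in enumerate(bits))

print(hex(mux(0xA5C3, ax=True)), hex(mux(0xA5C3, ax=False)))
```

Because the low address bits A0-A4 land in the ROW selection, sequential screen reads walk through row addresses quickly, which is what makes the "invisible refresh" effect plausible.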
  10. That is correct, but it also gives us the answer to the reason for the delay circuit! Since the F14M clock is stopped, nothing is advancing in the VLSI. The *only* effect that the delay circuit has is to stretch the HSYNC pulse by around 470ns. Presumably the HSYNC pulse coming out of the VLSI chip is too short due to an error in the logic which was only discovered after the VLSI chips were already manufactured? It has the unfortunate side effect of slowing the CPU down by 470ns every horizontal line.

      Cheers, Leslie
  11. I don't think the one I purchased came with any manuals. The auction pics show it to be a QWERTY model keyboard, with only English keytops. Will be interesting to see if the ROMs are the same as what we already have. Definitely looking forward to a scanned copy of the BASIC manual and tech manual. They would clear up a lot of mysteries, I'm sure!

      The easiest way to implement the wait states on the 500 is to just have a circuit which inserts a one-cpu-clock wait state for every memory read/write/instruction fetch. There isn't any need to tie it to the video generation to achieve the same effect as a real machine. Better to have the video run independently; then you can have any output frequency you like. It doesn't need to be tied to the 14.7MHz clock.

      The digital delay line circuit introduces a ~473ns pause in the 14MHz clock after every *HSYNC (7 x 1/14.77873MHz ≈ 473ns). If you are planning to use the same Z80 core as the LASER_310_FPGA project, it already has a clock enable input which you could use to mimic this delay.

      Re-creating the hardware as close as possible is a great idea. Don't think I'm trying to point you in a different direction. At some point (if you are like me) you will most likely want to extend the FPGA implementation to: a) run faster, b) emulate disks/cassettes, c) add more memory, d) add extended screen modes (such as the Laser 3000 HI-RES RGB modes). With that in mind, it is best not to cripple your video output section to mimic hardware kludges from the 1980's if you don't have to.

      Look forward to seeing your progress!

      Cheers, Leslie
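The delay-line figure works out as stated when computed from the exact crystal frequency:

```python
# Pausing the 14.77873 MHz clock for 7 periods after each *HSYNC
# stretches every horizontal line by roughly 473 ns.

F14M = 14.77873e6
pause_ns = 7 * 1e9 / F14M
print(f"{pause_ns:.1f} ns")
```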
  12. They look like sane values to produce a PAL signal. I don't think there is any way to figure out the missing CPU signals without putting a logic analyser on a real machine. To assist with that, I just purchased a Laser 500 which I saw on eBay: https://www.ebay.com.au/itm/202660668891

      I wouldn't be too concerned with emulating the PAL output and memory wait states in an FPGA version. It really isn't necessary. Much easier to use dual port memory for the video memory regions, and let the video and CPU run completely independently. If the machine is running a bit fast, you can always lower the CPU clock. The disk system runs independently off its own 4MHz clock, so no need to sync that. The 17MHz signal is not required at all on the FPGA.

      If you want to use the entire screen, I recommend using 1280x960 as the video output standard (4:3). On the max resolution screen (640 x 192), each horizontal pixel is displayed twice, and each vertical line is displayed 5 times: 640 x 2 = 1280, 192 x 5 = 960. Otherwise, SVGA = 800x600: multiply the vertical lines by 3 = 576. Centre the video, and display the border register colour when outside the 640x192 area.

      Cheers, Leslie
  13. According to the timing diagram, the CPU and VIDEO sub-sections share access to the RAM on alternate CPU clock cycles, on a 1:1 basis. The third line of the diagram shows CPU CK (CPU CLOCK). The CPU doesn't know anything about the video section, so it can issue a memory read/write/instruction fetch at *any* time, during the CPU "slot" or VIDEO "slot".

      Take a look at a Z80 tech manual and you will see that each memory read/write operation takes 3 clock cycles ("T" states) and an instruction fetch takes 4 (assuming no wait states). On a mem read/write cycle, *MREQ and *RD both go low on the falling edge of the T1 state. *WR goes low on the falling edge of the T2 state. Referring back to the Laser timing diagram, this will always be when CPU CK is LOW. There is no way of knowing if that falls in the CPU allocated slot or the video slot.

      VIDEO always reads a byte in one "slot"; the CPU always requires two "slots". Consider for a minute that the CPU clock is 3.6947MHz. That means that the period of the CPU clock is somewhere close to 271ns: plenty of time for the VIDEO section to read a byte from RAM in its "slot". The point where the CPU asserts *RD or *WR in the cycle (3/4 of the way through the "slot") doesn't allow enough time for the RAMs to respond, however. A one-cpu-clock wait state is therefore always inserted for every memory byte read, memory byte write, and each byte of the instruction fetch (including one wait state for each operand byte), regardless of which "slot" the CPU read/write is initiated in. I don't think that I/O instructions have a wait state inserted, according to the diagram.

      Regarding the HSYNC: no, it's only triggered on the falling edge... according to the diagram.

      Cheers, Leslie
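To put the always-inserted wait state into numbers: each memory access (opcode fetch, operand read, data read/write) costs one extra T-state. A sketch using base T-state counts from the Z80 manual; the instruction selection is just illustrative:

```python
# Cost of the Laser 500's per-memory-access wait state for a few Z80
# instructions: effective time = (base T-states + memory accesses) / f_cpu.

F_CPU = 3.6947e6
T = 1 / F_CPU                     # one T-state, ≈ 271 ns

# (mnemonic, base T-states, memory accesses) -- from the Z80 manual
ops = [
    ("NOP",        4, 1),         # 1 opcode fetch
    ("LD A,(HL)",  7, 2),         # fetch + 1 data read
    ("LD (nn),A", 13, 4),         # fetch + 2 operand reads + 1 write
]

for name, base, mem in ops:
    t_us = (base + mem) * T * 1e6
    print(f"{name:10s} {base}+{mem} T-states = {t_us:.2f} us")
```

So every instruction is slowed by roughly 7-30% depending on how memory-heavy it is, which matters when trying to match real-machine cycle counts against an emulator.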
  14. Hi Nippur72, I thought about the problem some more... It's possible that stopping the F14M clock in that position has something to do with the colour burst signal generation. Your suggestion that it may be an add-on circuit to fix a bug in the VLSI chip that was only discovered after they were produced may not be a bad guess either.

      I don't think that the missing cycles have anything to do with scanline dependent features. According to the timing diagram, there is a one cpu clock wait state inserted for *every* memory access, whether it be a read/write/opcode fetch. Are you accounting for the instructions and overhead during the interrupt after each vertical retrace?

      Regards, Leslie

      P.S. I think it is an interesting coincidence that the Laser 350/500/700 series employs the same control register address ($6800) as the earlier LASER 310/VZ200/VZ300 machines. The graphics/text screen layouts on the 350/500/700 are basically the same as an Apple ][. It would come as no surprise to learn that VTECH used some of the same VLSI code from their LASER 3000 series machine here. I had a quick look at the Laser 3000 tech manual, but it does not have a delay line circuit like the 350-700.
  15. Hi Nippur72, et al.

      1. Not sure.
      2. The CPU clock and mem accesses are synchronised to the F14M signal (pg 214 of the tech manual, posted in this thread).
      3. The flip-flops provide a digital delay, not a divide-by-4. Pin 9 of the LS02 takes a product of the delay circuit. When pin 9 is low, F14M runs freely, same as T14M but inverted. When pin 9 is high, F14M is held low. When HSYNC goes from high to low, the circuit holds F14M low for 6 (or 7) 14MHz clocks. When HSYNC goes from low to high, there is no delay.
      4. Maybe the GA1 uses this time to reload the colour registers etc. in preparation for the next display line?

      Cheers, Leslie