ijor

SIO synchronous mode


I finally made some tests, following candle's idea, of SIO synchronous mode with an externally generated clock.

 

Results seem to indicate that, indeed, the Pokey receiver can't shift in less than 3 PHI2 cycles. I also found that Pokey actually needs more than 3 cycles at the start and/or end of the byte transmission.

 

The software on the Atari ran a small loop polling for the IRQ bit in Pokey. Whenever it found an IRQ signaled, it reset the interrupt in Pokey and re-enabled it again. NMIs were disabled, interrupts were disabled in the CPU, and ANTIC DMA was disabled altogether. The idea was just to generate a pulse on the IRQ signal for scoping.

 

The external signals were generated with an ARM micro. The serial clock is the direct output of a hardware timer for maximum accuracy. The transmitted byte was a constant $FF. This was done on purpose to make analysis of the trace capture much easier. Each pulse on the SIO line is the start bit of each byte.

 

Both the start and stop bits were "stretched" to a multiple of the regular clock period. This was done initially to give the 6502 enough time, even at higher frequencies. But then I found, as mentioned above, that this was actually required by Pokey.

 

The test only verifies if an IRQ pulse was generated for each transmitted byte, and nothing else.

 

Three traces are attached. The first was done at 570 KHz (less than PHI2/3); no errors were detected. The second trace, at 599 KHz (slightly faster than PHI2/3), shows a small number of errors, about one every twelve bytes or so. The third trace, at 630 KHz, shows one error almost every other byte. At higher frequencies, more errors were seen (no trace posted).
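For reference, the PHI2/3 boundary these rates straddle can be sketched with a little arithmetic. The NTSC PHI2 value of ~1.7897725 MHz used below is my assumption, not something taken from the traces:

```python
# Where the tested serial clock rates fall relative to PHI2/3.
# PHI2_NTSC is an assumed NTSC machine-clock value.
PHI2_NTSC = 1_789_772.5          # Hz
threshold = PHI2_NTSC / 3        # ~596.6 KHz

for rate_hz in (570_000, 599_000, 630_000):
    side = "below" if rate_hz < threshold else "above"
    print(f"{rate_hz // 1000} KHz is {side} PHI2/3 ({threshold / 1000:.1f} KHz)")
```

Under that assumption, 570 KHz sits below the boundary and is error-free, while 599 and 630 KHz sit above it, which lines up with the observed error rates.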

 

The trace capture includes the following signals taken at Pokey pins: Serial clock, Serial in, IRQ, and R/W. The latter was included to verify that the CPU wasn't missing an IRQ pulse.

 

Note that when you see a missing IRQ pulse, the error was actually in the previous byte. In the byte previous to any missing IRQ, the IRQ is pulsed late. This means that Pokey missed a bit and then interpreted the next start bit as the current stop bit. And because Pokey then saw no start bit, the byte following the one where a bit was missed is ignored completely.

 

 

570 KHz, no errors:

post-6585-1235517528_thumb.png

 

599 KHz, zoomed on one error:

post-6585-1235517534_thumb.png

 

630 KHz, multiple errors:

post-6585-1235517538_thumb.png

Edited by ijor


One interesting (and unexpected) finding about the caps on the SIO connectors of some XE computers:

 

At these higher frequencies, I had to use push-pull drivers. Open-drain drivers, as standard for the SIO bus, didn't work because the pull-ups on the computer are too weak.

 

I tested on two computers, one XE with caps, one XL without caps. The results were almost identical as long as the drivers were push-pull. The caps made a difference when using open-drain drivers, but even without caps, open-drain can be used only at lower frequencies (somewhere around 100 KHz).

I haven't looked at your graphs yet (my browser isn't showing them clearly), but if you get 630 KHz with an error every other byte, it means the shift register is shifting faster than PHI2/3. You need to put delays in between bytes, but to test the shift register you just need one byte transmitted successfully.

I haven't looked at your graphs yet (my browser isn't showing them clearly)

 

The images are intentionally taken at very wide resolution. This was done on purpose so that you could see multiple bytes, and yet zoomed enough to see the detail and measure the timing on a fine scale. If your browser has trouble with them, you can download the pictures and see/edit them with your favorite image editor.

 

but if you get 630 KHz with an error every other byte, it means the shift register is shifting faster than PHI2/3. You need to put delays in between bytes, but to test the shift register you just need one byte transmitted successfully.

 

I did put delays between bytes; I said that in the post, and it is clear in the image. And the image shows that no transmitted byte was received correctly. Error here means the absence of an IRQ pulse, and "no-error" means just the presence of the IRQ pulse. A present IRQ pulse still doesn't mean that a byte was transferred correctly. It just means that Pokey saw ten bits, but not necessarily the right ones.

 

In the case of the 630 KHz trace, where an IRQ pulse is missing every other byte, it means that all bytes failed. I explained the behaviour in the post. To get a byte OK, you would need two IRQ pulses in a row. Or at the very least, the IRQ must be activated at the "right time" (see below for details). Otherwise the IRQ you are seeing is Pokey taking bits from two different bytes.

 

Now let's see what the "right time" for the IRQ activation is, and what a "wrong time" is. Note that the IRQ was captured at the Pokey pin. It has no relation to the CPU timing.

 

This is the zoom of one byte at 570 KHz (no errors detected). You can see that Pokey activates IRQ less than 5 us after the trailing edge of the serial clock on the stop bit:

 

post-6585-1235597818_thumb.png

 

This is the zoom of the first IRQ pulse in the 630 KHz image. You can see that Pokey activated IRQ ~5 us after the trailing edge of the serial clock on the next start bit. The trailing serial edge on the stop bit is more than 17 us before the IRQ edge! This clearly means that Pokey missed one bit and interpreted the next start bit as the current stop bit. And this is about the best case in the capture; most IRQ pulses are even worse, in the sense that the IRQ edge is even further (after more serial clock pulses) from the stop bit, meaning Pokey missed multiple bits.

 

post-6585-1235597823_thumb.png

 

Lastly, note that a single byte transferred OK doesn't tell you much. You can transmit a byte slightly faster than PHI2/3, and depending on the exact relation of the clock phases, Pokey could still get 3 PHI2 cycles between bits. This is what happens in the 599 KHz test. Being just a bit faster than PHI2/3, most bytes get through OK.

Edited by ijor


 

I have a problem with the IRQ pin determining whether a byte transferred okay or not. Also, what delay did you use between bytes, what is the receiver software doing, and is the graph above from an unmodified XL machine or a modified one? I mean, when I did my test at 333 kbps from the PC, all I did was the following in BASIC:

 

10 ? PEEK(53773): GOTO 10

 

I only sent one byte, then put in some delay, then sent another one. I did not care whether the IRQ pin was correct or not, just that the byte came out correctly.

I have a problem with the IRQ pin determining whether a byte transferred okay or not.

 

Well, you might have a problem with that, but I do not. And you are, of course, free to disagree with my conclusion because of this.

 

For starters, no IRQ was missing at all as long as the bitrate was below PHI2/3, and IRQ pulses start to go missing as soon as the bitrate is faster than PHI2/3. So, at the very least, there is something significant at the PHI2/3 boundary.

 

Second, the IRQ timing tells you exactly which bit Pokey detected as the stop bit. As I said already, Pokey activates IRQ at about 5 us from the stop bit. That is seen in many other tests I did at multiple different frequencies. So if you see that IRQ is activated 5 us after, say, the second bit of the next byte, then it is more than obvious that Pokey missed some bits somewhere.

 

Furthermore, I used a significant delay before the stop bit, and another delay between the stop bit and the next start bit (because this is synchronous mode and I manage the serial clock, I can add delays whenever I want, not just between bytes). I did this on purpose. So the problem can't be related to the stop bit or the IRQ generation itself. The problem must come earlier (somewhere in the data bits), because Pokey is given plenty of time both at the start and at the end of the byte.

 

But I know that you are never going to agree with me, at least about this. So please, to avoid yet another fight, let's agree to disagree and please stop arguing about this. You made your point, you said that this test doesn't prove anything, and you already said why. Let me have this thread (which I started) where I am giving my tests and my conclusions.

 

Also, what delay amount did you use between bytes

 

You can see the exact delay on the traces; there is a timing scale on top. That's one of the reasons for making traces and posting the zoomed images: anybody can make precise measurements of whatever portion of the trace capture. But anyway, the exact delays are as follows:

 

On the first trace, at 570 KHz, both the start and the stop bit are twice as slow (the period of the serial clock on these bits is twice that of the data bits). In other words, the start and stop bits are sent at 285 KHz (570/2).

 

On the second trace, the start and stop bits are four times slower. On the third, they are eight times slower. But in both of these cases I also tested 16 times slower (more than one full byte time) just in case, getting identical results to the posted ones.
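As a quick sanity check of those stretch factors, the per-bit durations work out as follows (plain 1/f arithmetic; the rates and stretch factors are the ones stated above):

```python
# Data-bit and stretched start/stop-bit durations for each trace.
traces = [
    (570_000, 2),   # first trace: start/stop at half the data rate
    (599_000, 4),   # second trace: four times slower
    (630_000, 8),   # third trace: eight times slower
]
for data_rate_hz, stretch in traces:
    data_bit_us = 1e6 / data_rate_hz        # one data-bit period in us
    edge_bit_us = data_bit_us * stretch     # stretched start/stop period
    print(f"{data_rate_hz // 1000} KHz: data bit {data_bit_us:.2f} us, "
          f"start/stop bit {edge_bit_us:.2f} us")
```

These derived durations are just for reading the traces against the timing scale; they are not additional measurements.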

 

and what's the receiver software doing

 

I described this in the first message of this thread. Also, as I said, the traces captured the R/W pin, so you can see exactly when the CPU was resetting/re-enabling the interrupt in the IRQEN register on Pokey (these are the only write accesses in the code). The first R/W pulse is the reset; the second, right after, re-enables the interrupt.

 

and is the graph above for an unmodified XL machine or modified one?

 

I tested on three different computers, all unmodified: one 130XE, one 800XL and one 1200XL. The 130XE has "the caps" on the SIO connector; the others do not. All are NTSC computers, and I got identical results on all of them.

 

I mean, when I did my test at 333 kbps from the PC, all I did was the following in BASIC:

 

10 ? PEEK(53773): GOTO 10

I only sent one byte, then put in some delay, then sent another one. I did not care whether the IRQ pin was correct or not, just that the byte came out correctly.

 

This kind of test is not the most suitable for a trace capture. It would require huge delays between bytes, and then I could only capture single-byte transfers per trace. I also wanted to test the minimum delay required (and whether a delay was required at all). Plus, the whole idea was to see the results on a trace, without depending on results obtained by the Atari software.


Very cool. It really makes you wonder why Atari never made use of the clock line. The 810 didn't have a POKEY, so it wouldn't have been able to reach top speed (the data is only coming off the track at 125 kbps anyway), but it surely could have bit-banged the line (a la the C64/1541) at something faster than 19200...


Putting a Pokey into a peripheral back then would have been overkill and probably increased the price to even more ridiculous levels.

Also, I highly doubt Atari would have willingly sold any of its custom chips to 3rd party vendors in the early '80s.

 

I also see some flawed logic in the "19,200" choice... but I'd guess 3 reasons:

- in those days, 19,200 was probably considered the fastest bit-rate for low-cost serial devices.

- the 6507 CPU used in Atari drives only ran at 1 MHz... although I'd imagine it could do bit-banging a bit quicker than it does.

- PAL/NTSC differences in Pokey base clocks. Especially at low divisors, the frequency differences between PAL and NTSC could be a potential cause of problems with SIO.

Edited by Rybags

Very cool.

 

Please note that the merit belongs mostly to candle. A couple of us made some tests before, but he was (AFAIK) the first one who considered (and hopefully will implement) actually using an external clock.

 

Putting Pokey into a peripheral back then would have been overkill and probably increased the price to even more rediculous levels.

 

I think he meant just a UART (or USART), so that the drive wouldn't need to bit-bang; not Pokey specifically.

 

- the 6507 CPU used in Atari drives only ran at 1 Mhz... although I'd imagine it could do bit-banging a bit quicker than it does.

 

Actually 500 KHz on the 810. But yes, it is possible to go higher, as the Happy for the 810 does.

 

- in those days, 19,200 was probably considered the fastest bit-rate for low-cost serial devices.

 

I am guessing that from their point of view, that was more than fast enough at the time. Similar to the famous "who needs more than 640K of RAM".

 

Also, the bottleneck wasn't the SIO speed. The major bottleneck in the original 810 ROM was the bad sector interleave. It took some time until first third-party vendors, and then Atari with an updated 810 firmware, used a more optimized sector interleave.

 

The point I'm trying to make is that the 19,200 bps SIO rate wasn't seen as a speed problem at the design stage.


I don't care about the credits, I just want this done, and I can't do everything myself.

I'm very glad that some people here are actually doing something to bring this idea to reality.

 

The clock can be external for transfer rates faster than Pokey divisor 0, or any rate when using the internal clock line, since the data is perfectly in sync (using the clock line as a data strobe) for compatibility with existing software.

 

carry on! :D


I am guessing that from their point of view, that was more than fast enough at the time. Similar to the famous (who needs more than 640K of RAM).

 

"19,200 bps ought to be enough for everyone"


Based on Ijor's data, and if my calculations are correct, it would be 456 kbit/s when using IRQ.

A bit faster than normal, I must say...

I have a problem with the IRQ pin determining whether a byte transferred okay or not.

 

...

But I know that you are never going to agree with me, at least about this. So please, to avoid yet another fight, let's agree to disagree and please stop arguing about this. You made your point, you said that this test doesn't prove anything, and you already said why. Let me have this thread (which I started) where I am giving my tests and my conclusions.

...

I never fought with you nor insulted you; you interpreted it that way. Yes, I will agree to disagree with you.

Based on Ijor's data, and if my calculations are correct, it would be 456 kbit/s when using IRQ

 

I transferred 256-byte packets "successfully" at ~48 KBytes/s. This test was actually transferring data and comparing it on the Atari against known data. The Atari in turn signaled, after each packet, whether the data verified OK or not. The external micro monitored this signal and kept statistics...

 

However, I do get some errors. The errors are on the order of 1.5%-2% in packets (one or two bad packets every 100 packets). Interestingly, the error rate doesn't change significantly no matter how much I reduce the bitrate (I tried down to ~80 KHz).

 

I don't know, at this point, whether I'm doing something wrong or whether this is intrinsic to the Pokey synchronous receiver. It would be interesting to compare with error rates on "async" SIO transfers. Does anybody have numbers? Hias?

 

The error rate is small, but it means that some kind of checksumming/verification must be implemented, and the retry logic would affect the effective performance.


Ijor, can you implement the same checksum calculation the original SIO routines are using? I wonder whether it will catch all of your errors, or whether a more complicated algorithm must be used (CRC8).

Ijor, can you implement the same checksum calculation the original SIO routines are using? I wonder whether it will catch all of your errors, or whether a more complicated algorithm must be used (CRC8).

 

Hmm, I might try. But, you know, I'm lazy :)

 

I do assume it will catch most errors. As I was saying, I'm still not sure about the reason. But the effect seems to be that Pokey sometimes misses a bit. This is very different from getting a "wrong bit", because when a bit is missed, some of the following bytes are "shifted". Then it is not likely that you would get the right checksum for the wrong data.

 

You should probably consider what the overhead is on your target computer. Check whether your "accelerated" Atari could compute a simple checksum (even a 16-bit one when using a 65816) on the fly while receiving the packet. It would make a significant difference in performance if, as a consequence of using a more complicated checksum, it had to be computed separately after receiving the packet.
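For illustration, a running checksum of the kind discussed here is cheap enough to fold into a receive loop, updating one byte at a time as data arrives. To my understanding the standard SIO checksum is an 8-bit sum with the carry added back in; treat the exact algorithm below as an assumption rather than a reading of the OS ROM:

```python
# Sketch of an SIO-style running checksum (assumed algorithm):
# 8-bit add with end-around carry, updated per received byte.
def sio_checksum(data):
    s = 0
    for b in data:
        s += b
        s = (s & 0xFF) + (s >> 8)   # fold any carry back into bit 0
    return s & 0xFF

print(sio_checksum([0xFF, 0x01]))  # carry wraps around, yielding 1
```

The point of the sketch is only that the per-byte update is a couple of operations, i.e. cheap enough to do on the fly.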

 

Note that I'm not checking for Pokey overrun or framing errors, nor am I checking for timeouts. In my case it wasn't worth the bother (I'm comparing data bit by bit anyway), and I wanted the loop to be as fast as possible.


Ijor, if you consider yourself lazy, then what am I supposed to consider myself? The doctor said even my eyes have lazy lenses, and that was a medical term :twisted:

I would assume you need only 2, maybe 3 additional instructions to perform the SIO checksum calculation, plus one byte from page 0; AFAIK there is one already assigned for this.

 

It might have something to do with Pokey missing a clock pulse rather than a bit, and may have something to do with the rise/fall times of your test circuit.

 

I got the very same results with my setup - Pokey was missing a clock pulse, and as a result all bits in the following data stream were shifted; sending 10 bits of zeros reset the chip's receiver.

 

As for the accelerated Atari - I'm not quite there yet; I need a day or two to get it running at a 7 MHz clock - for faster speeds I need to get it off the proto-board.

Edited by candle


Hi Ijor!

 

First of all: thanks for your in-depth tests and analysis. This is really interesting!

 

Concerning errors in async mode: some time ago I had a test running at Pokey divisor 1 for approx. 2.5 hours without any error (diag.atr checks for framing and overrun errors, does the standard SIO checksum, and stops on the first error).

 

I repeated the test at Pokey divisor 0 and had no errors for 15 minutes (then I stopped it). I used a DD disk image with random data on it. Considering that reading a DD disk took approx. 18 seconds, that was some 35,000 sectors in the test.

 

Another test, again at divisor 0, with copy2000: I copied the "random DD" image 2 times and compared the images with the original (to be sure there were no errors that might have slipped through the SIO checksum); also everything OK. So, another 1440 sectors read and 1440 sectors written OK.

 

But: I'm currently investigating some strange Pokey behaviour: I had to set the baudrate to 125494 bit/sec (126674.82 would be nominal on PAL); if I set it higher (for example 125762 bit/sec), I get framing errors.

 

The test scenario is a continuous byte stream (as sent by the UART with the transmit FIFO enabled); if I configure the UART to transmit a second stop bit (basically a 1-bit delay after each byte), I can go up to 126843 bit/sec.

 

It seems that Pokey needs quite some time to transfer a received byte from the shift register to SERIN, check for overrun errors, and set IRQST. If there's no delay between the bytes, Pokey will start receiving a little bit too late and then the bit-shift occurs (at least it seems so in async mode).

 

I'm still testing this with the other Pokey divisors; for example, at divisor 3 (nominal 88672.38 bit/sec on PAL), 87902 up to 89367 bps worked fine.
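Incidentally, the nominal figures quoted above are consistent with modeling the Pokey serial rate as PHI2 / (2 * (divisor + 7)). Both this formula and the exact PAL PHI2 value below are my working assumptions, chosen because they reproduce the divisor 0 and divisor 3 numbers given in the text:

```python
# Nominal Pokey serial bitrate as a function of the divisor (assumed
# formula). PHI2_PAL and the expression match the quoted figures:
# 126674.82 bit/sec at divisor 0 and 88672.38 bit/sec at divisor 3.
PHI2_PAL = 1_773_447.5  # Hz (assumed PAL machine clock)

def pokey_rate(divisor, phi2=PHI2_PAL):
    return phi2 / (2 * (divisor + 7))

print(pokey_rate(0))  # ~126674.82 bit/sec
print(pokey_rate(3))  # ~88672.38 bit/sec
```

If the model holds, it also explains why small divisors are so sensitive: a one-count change in the divisor moves the rate by several percent.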

 

Back to synchronous mode: ATM I don't have the slightest idea what's happening here. From your plots I see that you "stretched" both the start and the stop bit. Could you try stretching just the stop bit? In async mode stretching the start bit would be a big no-no; of course, in sync mode this shouldn't matter at all. But who knows? Maybe some bug in Pokey?

 

If the problems also occur at 80 kbps, you could try checking SKSTAT for framing errors and/or have a closer look at the received bytes. In theory Pokey should wait for the start bit even in sync mode. So with some special bit patterns (for example $FE, $FD, $FB, $F7, $EF, $DF, ... or maybe with 2 or more zero bits) it should be possible to get Pokey to re-sync (but on some wrong start bit) after it has lost sync. Maybe this will shed some more light on the Pokey internals.

 

so long,

 

Hias


Maybe it's worth putting a 'scope on the SERIN or CLOCK pin...

 

Doing this stuff with Interlace, I'm using PORTA's MSB to trigger scope sweeps.

 

I was interested in how fast it rises... it takes something like half a scanline to attain peak level, so maybe SERIN or CLOCK also has some problem with intermediate components causing lag.


Ijor: I would go with what HiassofT is saying - Pokey needs additional clock cycles to transfer the byte from the receiver to the SERIN register

 

HiassofT: Pokey IS resyncing on wrong start bits - I had this confirmed in a previous thread

Edited by candle

I would assume you need only 2, maybe 3 additional instructions to perform the SIO checksum calculation, plus one byte from page 0; AFAIK there is one already assigned for this

 

I don't have any space left on page zero :) - I'm transferring the 256-byte packets to page zero. This lets me gain one cycle in the main loop.

 

Seriously, I can do it if you insist, but it involves some work. The Atari is not hooked to the PC, nor even connected to any monitor or TV. It is hooked only to the MCU test board, which first behaves like a mini SIO2PC master and later performs the synchronous test. Every single modification to the Atari code involves some work, because I have to assemble on the PC and then somehow transfer the Atari code to the MCU. Furthermore, the MCU implements a very basic SIO2PC which cannot even hold a full ATR image in its RAM.

 

Again, I will do it if it's important to you. But I think we won't gain much. We know that no checksum or CRC can catch all errors. In practice, you would probably be able to catch all the errors just by checking the framing error bit in Pokey. Because if a single bit (or clock pulse) was missed, Pokey will read the next start bit as the current stop bit, producing a framing error. Checking this is very "cheap" because it is a sticky bit; you can check it just once after the whole packet.
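The bit-slip-to-framing-error argument can be illustrated with a toy model (purely illustrative, not a Pokey-accurate simulation): build a stream of 10-bit frames (start = 0, eight data bits, stop = 1) and look at where the stop-bit sample lands when one bit is dropped.

```python
# Toy model: a dropped bit makes the receiver sample its "stop bit"
# one position late, landing on the next frame's start bit (0),
# which is exactly a framing error.
def frames(byte_values):
    bits = []
    for b in byte_values:
        bits.append(0)                            # start bit
        bits += [(b >> i) & 1 for i in range(8)]  # data bits, LSB first
        bits.append(1)                            # stop bit
    return bits

stream = frames([0xFF, 0xFF])
ok_stop = stream[9]        # correctly framed: stop-bit sample is 1
slipped_stop = stream[10]  # one bit slipped: sample hits next start bit
print(ok_stop, slipped_stop)  # 1 0 -> the slipped frame flags a framing error
```

Since the framing-error bit is sticky, one post-packet check of SKSTAT would catch any such slip anywhere in the packet.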

 

Btw, I was thinking that you should invest in a good timeout strategy, because a timeout is what you will get if a clock pulse was missed. Possibly it might be worth sending an "extra" padding byte at the end of the packet, to avoid waiting until timing out.

 

It might have something to do with Pokey missing a clock pulse rather than a bit, and may have something to do with the rise/fall times of your test circuit.

 

Yes, it is a clock issue. I'm not sure, though, whether it is missing a clock pulse or getting an extra spurious one. Two clock pulses too close together (less than 3 PHI2 cycles apart) would produce similar behavior, because (as I explained already) Pokey won't shift in less than 3 PHI2 cycles.

 

It might have something to do with fall times. But if anything, the problem should be that they are too fast, not too slow. It can't be the rise time, because I get the same behaviour using open-drain buffers at slow speeds. Note that Pokey has a Schmitt trigger on this input.

 

I should try different hardware setups, but I think we had better wait for your own tests. That would give us some hints as to whether the problem is unique to my test bed (either software or hardware) or not.

 

I would go with what HiassofT is saying - Pokey needs additional clock cycles to transfer the byte from the receiver to the SERIN register

 

Do you mean extra serial clock cycles (multiple stop bits) or extra PHI2 cycles? It is surely not an issue of extra PHI2 cycles. Again, I get about the same error rate no matter how slow I go.

 

And I don't think it requires extra stop bits. I didn't try sending multiple stop bits in this test (I did that when testing the maximum achievable bitrate), but it wouldn't make any sense. We are talking about something like one single error per 10,000 bytes (possibly a single clock pulse error every 100,000 clock pulses); that doesn't sound like a "hardware functional" issue. It sounds more like an electrical issue, or a clock synchronization problem in Pokey.

 

But again, I shouldn't worry too much at this point. Let's wait for your tests and see whether you get a similar or a different error rate.


Just calculate the checksum at the end of transmission... or are you sending/expecting some sort of ACK in short order?

