
SIO synchronous mode


ijor

Recommended Posts

Concerning errors in async mode: Some time ago I had a test running at pokey divisor 1 for approx. 2.5 hours, without any error (diag.atr checks for framing and overrun errors, does the standard SIO checksum, and stops on the first error). I repeated the test at pokey divisor 0 and had no errors for 15 minutes (then I stopped it).

 

Hi Hias,

 

Thanks, that is a very useful piece of information.

 

It seems that Pokey needs quite some time to transfer a received byte from the shift register to SERIN, check for overrun errors and set IRQST. If there's no delay between the bytes Pokey will start receiving a little bit too late and then the bit-shift occurs (at least it seems to be so in async mode).

 

No, I confirmed it doesn't need any delay (see below). And this matches what I see in the schematics.

 

Back to synchronous mode: ATM I have not the slightest idea what's happening here. From your plots I see that you "stretched" both the start and the stop bit. Could you try just stretching the stop bit?

 

I tried that, it doesn't work.

 

Initially, as you see in the traces, I used both long start and stop bits. I did that because I wanted to check the maximum Pokey shift rate, and because I wanted to rule out problems associated with the process of starting and ending a byte reception. And also for testing very high bit-rates (in the 1 MHz range), where some delay was needed to give enough time to the 6502.

 

After that test, then I checked all the combinations of long/short start/stop bits. The result is that long stop bits are not needed, but long start bits are required. I can transfer at 570 KHz successfully (with the mentioned error rate), without any delay at all between bytes, as long as I use long start bits. Adding delay between bytes doesn't help.

 

Pokey needs about 6 PHI2 cycles between the start bit and the first data bit. In other words, it is the "start byte reception" logic that needs the delay, not the "end byte reception" logic.
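For orientation, the bitrates in this thread can be checked with a quick calculation. This is just a sketch of mine: the NTSC PHI2 value is the standard machine clock, and the 3-cycles-per-shift minimum is the figure discussed later in the thread.

```python
# Quick sanity check of the bitrates discussed here, assuming the NTSC
# machine clock PHI2 = 1.7897725 MHz and the ~3 PHI2 cycles that POKEY
# needs between shifts.
PHI2 = 1_789_772.5  # Hz

def cycles_per_bit(bitrate_hz):
    """PHI2 cycles available for each serial bit at the given bitrate."""
    return PHI2 / bitrate_hz

print(round(cycles_per_bit(570_000), 2))  # 3.14 -> just above the 3-cycle minimum
print(round(cycles_per_bit(599_000), 2))  # 2.99 -> below it; reception fails
```

This matches the observation that 570 KHz works while 599 KHz (just over PHI2/3 ≈ 596.6 KHz) does not.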

 

In theory pokey should wait for the start bit even in sync mode.

 

Yes. The sync logic is exactly the same as the async one. The only difference is in the clock generation. There is one mux that selects (depending on the SKCTL bits) between the external bi-dir clock or the internal one. After this mux, everything is exactly the same, even the clock edge detection logic.

 

Maybe it's worth putting a 'scope on the SERIN or CLOCK pin...

 

Yep, that would be interesting. But for this purpose, to get an accurate rise/fall picture and catch possible tiny glitches, a high bandwidth scope is required. I'm afraid I don't have one.

Edited by ijor

edit: just re-read the thread again, so first paragraph of my little essay is void :)

 

 

as You probably have seen, i got my ft232r based sio2pc interface assembled, now i need to get some time and write a little program on the pc side

anyone here have some spare time that i could use? i'm a bit short on this right now :)

but seriously, need to do this bitbang mode implementation anyway for other projects, so it should speed things up a bit

 

and one more thing - for me it is important, that anyone gives a damn, and when it happens, or how soon we all will be looking at any software utilising this type of transfer - is not important - so take Your time, i'll wait ;)

 

sooner or later we will, maybe even HiassofT will release a patch for sio system procedures? Who knows :) hope it will get popular though

would be a shame if after 30 years of lying around and waiting for someone to explore it, the idea was forgotten again

 

i suppose it was too expensive and not very needed to have such speed on a serial interface, and would involve using another pokey in the peripheral device, or another asic (which had to be designed), and using a pure asynchronous serial interface was simply the cheaper and straightforward solution back then, but now we have hobby-grade mpu's that can generate video signals doing bit-bang and all of this for pennies

 

need to hit the sack, again early in the morning, this time shift will kill me one day

 

ps. what would You call high-bandwidth?

Edited by candle

This is great. Serial is still the way for old computers to communicate with a lot of newish devices. Like:

http://www.saelig.com/miva/merchant.mvc?Sc...tegory_Code=BRD

 

I was going to try to fumble my way through attaching an ASIC... try to fumble my way through a handler... and try to move on to fumbling with this. :) But I can skip to the last steps. :)


Hi Ijor!

 

It seems that Pokey needs quite some time to transfer a received byte from the shift register to SERIN, check for overrun errors and set IRQST. If there's no delay between the bytes Pokey will start receiving a little bit too late and then the bit-shift occurs (at least it seems to be so in async mode).

 

No, I confirmed it doesn't need any delay (see below). And this matches what I see in the schematics.

 

Back to synchronous mode: ATM I have not the slightest idea what's happening here. From your plots I see that you "stretched" both the start and the stop bit. Could you try just stretching the stop bit?

 

I tried that, it doesn't work.

 

Initially, as you see in the traces, I used both long start and stop bits. I did that because I wanted to check the maximum Pokey shift rate, and because I wanted to rule out problems associated with the process of starting and ending a byte reception. And also for testing very high bit-rates (in the 1 MHz range), where some delay was needed to give enough time to the 6502.

 

After that test, then I checked all the combinations of long/short start/stop bits. The result is that long stop bits are not needed, but long start bits are required. I can transfer at 570 KHz successfully (with the mentioned error rate), without any delay at all between bytes, as long as I use long start bits. Adding delay between bytes doesn't help.

 

Pokey needs about 6 PHI2 cycles between the start bit and the first data bit. In other words, it is the "start byte reception" logic that needs the delay, not the "end byte reception" logic.

This is interesting. Do you have an idea why Pokey needs 6 cycles after the start bit?

 

In async mode pokey resets its timers at the start bit (at least this is what pokey.pdf tells us). Do you know how this logic works (I'd assume it resets at the beginning of the start bit), and how sampling works in async mode (i.e.: does pokey sample at the middle of each bit or at some other time)?

 

I have to admit I'm currently a little bit confused. As I wrote before, in async mode adding a stopbit made 125762 to 126843 bps reliable. Very strange...

 

Maybe it's worth putting a 'scope on the SERIN or CLOCK pin...

 

Yep, that would be interesting. But for this purpose, to get an accurate rise/fall picture and catch possible tiny glitches, a high bandwidth scope is required. I'm afraid I don't have one.

Do you have any specs about the rise/fall times of the IO on your MCU? Is it possible to configure the slew rate?

 

From my experience with interfacing CPLDs to the Atari address/databus it's crucial to configure the CPLD outputs to slow slew rate, otherwise you get all the nice HF effects (crosstalk/interference, ground bounce, etc). OTOH these CPLD I/Os are targeted at some 50-200MHz, so the "fast" slew rate setting has to be really fast. For MCU I/Os I'd expect slew rates similar to standard TTL/CMOS ICs.

 

so long,

 

Hias


In normal I/O mode, doesn't Pokey reset the clock at certain times (which is probably in aid of giving us that ±5% variance in data reception rate)?

 

Just as for 16-bit sound, you have that 4-cycle overhead... that might explain part of the lag, even though the sound channels aren't being used when using External Clock.


Doing this stuff with Interlace, I'm using PORTA's MSB to trigger scope sweeps.

 

I was interested in how fast it rises... it takes something like half a scanline to attain peak level, so maybe SERIN or CLOCK also has some problem with intermediate components causing lag.

Hint: connect the trigger directly to the PIA pin (ideally: bend it up) to avoid the nasty resistors and inductors that are between PIA and joystick ports.

 

Some time ago I built a JTAG interface so that I could program a Lattice iM4A5 CPLD with the Atari using a joystick port. This never worked (I even tried adding a schmitt-trigger) due to the slow slew rate. When I connected the interface directly to the PIA pins everything worked fine (but that was not my goal, I wanted a simple external cable).

 

so long,

 

Hias


I suspected so much... guess they also wanted to have some protection so we wouldn't fry our PIAs.

 

The lag isn't a problem... it seems consistent so far as it's a constant slope each trigger.

 

I'm only triggering once at a specific time per frame (for the Interlace testing), so it's no problem (and of course, I could just set the 'scope to trigger on the downward edge, which is virtually instant, if I needed cycle-exactness).

Edited by Rybags

as You probably have seen, i got my ft232r based sio2pc interface assembled...but seriously, need to do this bitbang mode implementation anyway for other projects, so it should speed things up a bit ...

would be a shame if after 30 years of lying around and waiting for someone to explore it, the idea was forgotten again

 

Well, I think we already know that it does work. Even if we'll always get a certain small error rate, it is still not too significant. Instead of ~48 KBps the effective rate might go down, being extremely pessimistic about the checksum overhead and retry latency, to, say, ~40 KBps.

 

Anyway, I wasn't trying to throw the problem at you. What I meant is that, possibly, the problem is specific to my test setup. It would be very difficult for me to use a whole different setup. But you will, you will use different hardware, cabling, etc. So I think it would be very significant to check what results you get.

 

ps. what would You call high-bandwidth?

 

Well, I don't know for sure. But we are not interested in watching the digital portion of the waveform; that much I can already see. What you want is to check for such things as rise/fall rates, shapes, and tiny glitches. So the bandwidth required is not directly related to the digital signal frequency, but rather to the rise/fall times. Furthermore, because the error is not constant, we probably need a dual-channel DSO, or at least a DSO with an additional digital trigger.
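As a rough guide, there is a common rule of thumb relating the scope bandwidth needed to the edge speed you want to resolve. This is a sketch; the 10 ns figure is just an illustrative value, not a measurement from this thread.

```python
# Rule of thumb for Gaussian-response scopes: bandwidth ≈ 0.35 / rise time.
def required_bandwidth_hz(rise_time_s):
    return 0.35 / rise_time_s

print(required_bandwidth_hz(10e-9))  # ~3.5e7 Hz, i.e. ~35 MHz to resolve a 10 ns edge
```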

 

This is interesting. Do you have an idea why Pokey needs 6 cycles after the start bit?

 

I'm not 100% sure, but I think this has to do with the pre-set of the shift register. You can see the "pseudo-logic" of this in Perry/Piotr's Pokeydocs work (I'm using the term pseudo-logic because it doesn't give the exact timing).

 

I have to admit I'm currently a little bit confused. As I wrote before, in async mode adding a stopbit made 125762 to 126843 bps reliable. Very strange...

 

I should have said that no delay is needed in sync mode. But conceivably, it is needed in async mode. The relevant difference might be what both you and Rybags mention, the reset of the counters at the start. Conceivably, Pokey needs a delay for this reason (see below).

 

I might have further clues if you could check what happens exactly when you get a framing error. Can you check if the data byte at SERIN was still ok? If it is bad, then the problem might be, indeed, too small a delay at the start. But if the byte is ok, it might mean something else.

 

Also, are you sure about your true and exact bit-rate? Remember our talk some time ago about some UARTs not using the exact nominal crystal.

 

In async mode pokey resets its timers at the start bit (at least this is what pokey.pdf tells us). Do you know how this logic works (I'd assume it resets at the beginning of the start bit), and how sampling works in async mode (i.e.: does pokey sample at the middle of each bit or at some other time)?

 

It is very difficult for me to be precise about this, at least at this time. Yes, the counters are reset on start bit reception, but I can't say for sure what the exact timing is. Reading the receiver logic in the schematics is not so difficult. But combining this with the counter reset, divider and phase align logic is much more difficult. There are several pipelined delays involved; some of them add delays, others compensate for each other.

 

It would probably require simulations to get precise answers. But I might get, at least, a better idea by sampling some signals on async mode. This still might not tell me the answer to your current issue about an extra stop bit, but I should be able to deduce the exact sampling PHI2 cycle. As you are suspecting, possibly it is not exactly at the center of the bit.

 

From my experience with interfacing CPLDs to the Atari address/databus it's crucial to configure the CPLD outputs to slow slew rate, otherwise you get all the nice HF effects (crosstalk/interference, ground bounce, etc). OTOH these CPLD I/Os are targeted at some 50-200MHz, so the "fast" slew rate setting has to be really fast. For MCU I/Os I'd expect slew rates similar to standard TTL/CMOS ICs.

 

I somehow suspect the slew rate is not the problem. As you are saying, this is not an ultra-fast 45 nm device. Slew rates shouldn't be much faster than TTL logic. Second, you have those 100 Ohm series resistors on the Pokey signals, you have a pull-up, and you also have a schmitt trigger inside Pokey. Besides, this is just one single signal (or two in the worst case) switching at the same time.

Edited by ijor

i don't know if this is enough, but i have a 10ns/div time base on my scope, and it is 2 channel, and can be triggered by a separate trigger input

it also has IEEE 488 GPIB port, but i don't have any printer or data cable to get the data out to my pc

 

and i don't mind problems thrown at me so don't be so apologetic ;)

 

ps. maybe someone here knows a little DIY project for this interface?


I won't be able to do more tests for a few days. And anyway, it would be good to wait until candle tests. So here is a summary and some preliminary conclusions:

 

- This is about SIO synchronous mode, using an external serial clock.

- Only receiver mode (from the outside to the Atari) was tested.

- We confirmed it works reliably (with some error rate) at bitrates up to PHI2/3.

- Higher rates require push-pull drivers (SIO bus specifications are open collector).

- As long as drivers are push-pull, the capacitors on the SIO connectors on some XE computers don't seem to matter.

- No delay is required between bytes.

- At higher rates, a small delay is required between the start bit and the first data bit.

 

- The maximum working bitrate we achieved was at 570 KHz, as measured with an instrument.

- Attempting to cross the PHI2/3 barrier results in multiple frequent errors at the byte level.

- Even just slightly over PHI2/3 (599 KHz) produced constant errors at the packet level (no packet transferred ok).

- Adding longer delays between bytes and/or after the start bit didn't help at all.

 

- The test was done transferring 256-byte packets. The received packet was compared with known good data on the Atari.

- We get an error rate, in packets, in the order of 1.5%-2%.

- The error rate is about the same in the whole bitrate range we tested, from 19.2 KHz (standard SIO rate), to 570 KHz.

- The error rate was almost identical in an XE and in an XL computer.

- Adding longer delays after the start bit, and/or between bytes doesn't change the error rate.

- Using Open Drain drivers (at lower bitrates only), doesn't change the error rate.

- At this point we are unsure about the reason for this error rate.

 

 

Other things that might be interesting to test, but so far we didn't:

 

- Test SIO synchronous transmitter (from the Atari to the outside).

- Perform some tests and trace capture on Async mode.

- Test packets with random data and specific patterns.

- Test on PAL computer, test on 400/800 computer.


Hi Ijor!

 

I'm not 100% sure, but I think this has to do with the pre-set of the shift register. You can see the "pseudo-logic" of this in Perry/Piotr's Pokeydocs work (I'm using the term pseudo-logic because it doesn't give the exact timing).

Do you mean this doc?

 

I should have said that no delay is needed in sync mode. But conceivably, it is needed in async mode. The relevant difference might be what both you and Rybags mention, the reset of the counters at the start. Conceivably, Pokey needs a delay for this reason (see below).

 

I might have further clues if you could check what happens exactly when you get a framing error. Can you check if the data byte at SERIN was still ok? If it is bad, then the problem might be, indeed, too small a delay at the start. But if the byte is ok, it might mean something else.

 

Also, are you sure about your true and exact bit-rate? Remember our talk some time ago about some UARTs not using the exact nominal crystal.

Yes, I remember this talk :-) But this time I'm quite confident that the UART works right, the timing measurements (on my PC) show correct values.

 

I did several tests in async mode and I think what's happening is that Pokey misses the startbit and then re-syncs at the next falling edge.

 

First, the description of my test-bed: A PAL 600XL, with the caps removed, connected with my MAX232 SIO2PC interface to a 16C950 card. I ran these tests at 126571 bit/sec (126674.82 would be the nominal pokey frequency for divisor 0).

 

Test #1: I used a disk image with random data. The first few bytes were "13 a8 75 dd ea 52 24 96 22 d2 e6 90 f3 2e 71 af".

 

The first byte ($13) was received correctly, the second received byte was $ad, with a framing error, and thus my code stopped with an error.

 

Test #2: I used a pattern of "33 33 33 33 33 ...." in the disk image.

 

This time I didn't get a framing error but a timeout error. A total of $C9 bytes (instead of $100) was received and this is how the first 11 bytes looked:

 

"33 a6 9a 33 a6 9a 33 33 33 a6 9a"

 

The pattern "33 a6 9a" was repeating in the remaining bytes, sometimes interrupted by multiple $33, $a6 or $9a bytes.

 

BTW: If I configure the UART to transmit 2 stop bits instead of 1 all data is received just fine.

 

Now let's look at this in binary mode and try to find an explanation for what was happening (BTW: I'm not sure if this is what really happened, but it seems possible):

Test #1:
Byte sequence: 13 a8 75

Bit sequence ("S" indicates start bit, "s" stop bit):
S                 s S                 s S                 s
0 1 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 1 1 0 1 0 1 0 1 1 1 0 1
0 1 1 0 0 1 0 0 0 1           0 1 0 1 1 0 1 0 1 1
------- 13 --------           ------- ad --------

Note the wrong stop bit (0 instead of 1) after the $ad byte.

 

Test #2
Byte sequence: 33 33 33 33

S                 s S                 s S                 s S                 s
0 1 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1
0 1 1 0 0 1 1 0 0 1       0 0 1 1 0 0 1 0 1 1         0 0 1 0 1 1 0 0 1 1
------- 33 --------       ------- a6 --------         ------- 9a --------

 

The bit patterns seem to match. If this theory is correct, it also means that pokey needs a falling edge to detect the start bit, not just a "0" bit.
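The theory can be reproduced with a toy model. This is a sketch of mine, not verified silicon behaviour: the `frame_bits`/`simulate` names and the "stop index + 2" re-arm point are assumptions chosen so that the receiver misses the falling edge of the very next start bit and locks onto the next falling edge in the stream.

```python
# Toy model of the suspected behaviour: the receiver only starts a frame on
# a falling (1 -> 0) edge, and after a good stop bit it is re-armed slightly
# too late, missing the edge of the immediately following start bit.

def frame_bits(byte):
    """One async frame: start (0), 8 data bits LSB-first, stop (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def simulate(data):
    stream = [1] + [b for byte in data for b in frame_bits(byte)]  # idle-high lead-in
    received, i, ready = [], 1, 0
    while i + 9 < len(stream):
        # hunt for a falling edge at or after 'ready'
        if not (i >= ready and stream[i - 1] == 1 and stream[i] == 0):
            i += 1
            continue
        bits = stream[i:i + 10]
        byte = sum(bit << k for k, bit in enumerate(bits[1:9]))
        if bits[9] != 1:
            received.append(("framing error", byte))
            break
        received.append(byte)
        ready = i + 11   # stop bit index + 2: the edge right after it is missed
        i += 10
    return received

print([hex(b) for b in simulate([0x33] * 6)])  # ['0x33', '0xa6', '0x9a', '0x33']
print(simulate([0x13, 0xa8, 0x75]))            # [19, ('framing error', 173)]
```

With these assumptions the model reproduces both tests above: the repeating "33 a6 9a" pattern, and $13 followed by $ad with a framing error.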

 

Then I did a completely different test. I built my own testpattern generator using a small CPLD and some lines of VHDL code. The CPLD is clocked by PHI2, the logic inside the CPLD consists of 2 counters, one to divide the PHI2 clock down to bitclock (with separately adjustable values for start, stop, and data bits) and a simple bit-counter (running from 0 to 9). When I press a switch, the CPLD transmits a "$33" byte (with one start and one stop bit).

 

I wrote a small program on the Atari that would just receive 16 bytes and then print them to the screen. I also implemented framing and overrun error checks, but in the "$33" testcase these errors never showed up.

 

At pokey divisors 40 and 3 everything was fine. The Atari received a nice stream of 33 33 33 33...

 

Then I tried pokey divisor 1. This time I received "33 a6 a6 a6 a6...."

 

Next test: pokey divisor 0. Now I received - what a surprise - a repeating pattern of "33 a6 9a 33 a6 9a...". So at least my UART seems to work correctly :-)

 

Some more tests: then I tried stretching the stop bit. The standard value was 14 PHI2 cycles which resulted in the pattern above.

 

At 15 PHI2 cycles I received "33 a6 a6 a6 a6...." - like in the pokey divisor 1 test above.

 

At 16 PHI2 cycles I received a correct block of "33 33 33 33 33..."

 

And a last test, this time I wanted to know if stretching the start bit also works. So I set the start bit to 16 cycles, data and stop bit were at the 14 cycles default. And - voila - it also worked fine!

 

Enough for today, it's 3:20am localtime, I have to go to bed :-)

 

so long,

 

Hias


i apologise for not posting any results from the tests i was supposed to do, but i didn't have the time yet - doing 6 projects in parallel doesn't make this any easier

 

oh - 4:25 am local time, and i'm supposed to get up at 8:00...

 

i'll try to get them done before saturday :(


Do you mean this doc?

 

Yes. That's awesome work on re-creating the logic from the (almost) unreadable schematics. However, as I noted in the previous thread, not everything is 100% correct and some details are a bit misleading (such as the clock on the shift registers).

 

I did several tests in async mode and I think what's happening is that Pokey misses the startbit and then re-syncs at the next falling edge...

...

The bit patterns seem to match. If this theory is correct, it also means that pokey needs a falling edge to detect the start bit, not just a "0" bit.

...

At 16 PHI2 cycles I received a correct block of "33 33 33 33 33..."

 

I checked the async logic in the schematics. I still can't say the exact timing and interaction with the counters and phase align logic. I checked mostly the logic that resets and re-syncs the counter/timers on start bit reception.

 

Yes, this logic (and only this logic) requires an edge. In other words, Pokey doesn't exactly need an edge for starting the byte reception. But it needs an edge to release the reset of the counters.

 

This logic is mainly an S/R flip-flop. The output of this flip-flop is high active. A high output keeps the Pokey counters on reset. It is (normally) set on stop bit reception or when the receiver is idle, and it is reset on any falling edge on the serial input.

 

If for some reason this edge was "missed", then the counters would be held on reset. With the counters on permanent reset state, no serial clock is generated and no shifting is performed. I can't say for sure how and why this edge could be missed though, because the edge detector doesn't have any condition. It depends only on PHI2 and the serial input pin.

 

Conceivably, there is a timing issue with the "set" logic of this flip-flop. That is, the edge possibly is not being missed, but what is happening is that the set (for the stop bit reception) arrives slightly later. It's hard to be sure about this, but it looks like a possibility because the "set on stop" is synchronous to the serial clock, and the "reset on start" is not (it is only synchronous to PHI2).

 

Just in case the "set/reset" is a bit confusing: I mean here the set/reset of the flip-flop. In turn, the counters are reset when this flip-flop is set. Btw, the "set on stop" is conditioned by a correct stop bit. In other words, a framing error would not resync the counters.
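As a behavioural sketch only (my reading of the description above; the class and method names are made up, and the timing subtleties are exactly what remains unknown):

```python
# Behavioural model of the counter-reset S/R flip-flop described above.
# When q is high the serial counters are held in reset; a falling edge on
# the serial input releases them, and only a GOOD stop bit re-arms them.
class CounterResetFF:
    def __init__(self):
        self.q = True                 # idle: counters held in reset

    def serial_falling_edge(self):    # 1 -> 0 on the serial input (start bit)
        self.q = False                # release the counters, reception begins

    def stop_bit(self, level):
        if level == 1:                # only a *valid* stop bit sets the FF;
            self.q = True             # a framing error does not re-sync the counters

ff = CounterResetFF()
ff.serial_falling_edge()
assert ff.q is False                  # counters running during the frame
ff.stop_bit(0)                        # framing error...
assert ff.q is False                  # ...does not re-arm the reset
ff.stop_bit(1)
assert ff.q is True                   # good stop bit re-arms it
```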

 

And a last test, this time I wanted to know if stretching the start bit also works. So I set the start bit to 16 cycles, data and stop bit were at the 14 cycles default. And - voila - it also worked fine!

 

Interesting, but I'm not sure this is related to what I observed in sync mode. I think what is happening here is that you get a similar effect as if you were reducing the bit-rate, because like this you are actually delaying the data bits and the stop bit.


Hi Ijor!

 

Conceivably, there is a timing issue with the "set" logic of this flip-flop. That is, the edge possibly is not being missed, but what is happening is that the set (for the stop bit reception) arrives slightly later. It's hard to be sure about this, but it looks like a possibility because the "set on stop" is synchronous to the serial clock, and the "reset on start" is not (it is only synchronous to PHI2).

Interesting theory. Let's see if we can verify or explain it (see below).

 

And a last test, this time I wanted to know if stretching the start bit also works. So I set the start bit to 16 cycles, data and stop bit were at the 14 cycles default. And - voila - it also worked fine!

Interesting, but I'm not sure this is related to what I observed in sync mode. I think what is happening here is that you get a similar effect as if you were reducing the bit-rate, because like this you are actually delaying the data bits and the stop bit.

Yes, that's also what I'm currently thinking about it.

 

BTW: I re-did the test from yesterday because I observed some strange behaviour that I couldn't explain: pattern $33 needed a 16 cycle stop-bit, but $13 and $31 needed 17 cycle stop-bits to be received correctly.

 

I changed my logic so that I'm now transmitting 10 high bits before transmitting the pattern. If the CPLD is not transmitting, the output is in high-Z state; either that caused trouble or it was something else, like a bouncing switch.

 

Now I get consistent results: transmission is fine if I either use 15 cycle start or stop bits. i.e.: the whole transmission needs to be 1 clock cycle longer - 141 instead of 140.

 

Then I wanted to know where pokey samples the bit (usually it should do that at the center). So I used a pattern of $31 and then shortened the first bit and at the same time lengthened the second bit so that the whole transmission time was identical. I also set the stop bit to 28 cycles (i.e. 2 stop bits) to be sure no problems arise from this issue.

 

Now the results: using 12 cycles for bit 0 worked fine, using 11 cycles resulted in a $30 received. This means at pokey divisor 0 sampling occurs at the 12th cycle of 14. I also verified this by shortening bit 7, with identical results: at 11 cycles I received $b1 instead of $31.
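The effect of the shortened bit can be sketched as follows. The `slip` helper is hypothetical, mine only; it just models "the sampling point slips into the following bit":

```python
# If bit k of the LSB-first frame is shortened enough, the sampling point
# slips into the following bit and the receiver reads that bit's value
# instead. The stop bit (1) follows data bit 7.
def slip(byte, k):
    bits = [(byte >> i) & 1 for i in range(8)] + [1]  # data LSB-first + stop
    bits[k] = bits[k + 1]                             # sample lands in the next bit
    return sum(b << i for i, b in enumerate(bits[:8]))

print(hex(slip(0x31, 0)))  # 0x30 -> shortened bit 0, as observed
print(hex(slip(0x31, 7)))  # 0xb1 -> shortened bit 7 picks up the stop bit, as observed
```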

 

The difference between the working configuration (15 cycles stop bit) and the sampling of the stop-bit is just 3 cycles - at 2 cycles difference I get the wrong data. Could this be the same 3-cycle delay as for sampling the clock in sync mode, as you wrote before?

 

The other remaining question is why sampling takes place at the 12th cycle. So I did some more testing: at divisor 1 pokey samples at the 13th cycle (of 16), at divisor 40 it samples at cycle 52 (of 96) - verified with the bit 0 and bit 7 method.

 

So compared to the expected values (cycle 7, cycle 8, cycle 47) sampling takes place exactly 5 cycles later. This is again interesting, I had expected the 6 cycle setup delay you mentioned before. Again: any ideas? :-)

 

so long,

 

Hias


One idea here.

 

The manual says that resync allows for a +-5% variation in data rate.

My guess would be that resync could theoretically take place at any 0->1 or 1->0 transition within the data stream (inclusive of start/stop bits).

 

That might explain discrepancies in success with different bit patterns.

 

On the one hand, you have Pokey assisting you by resyncing itself partway through a byte - no problem whatsoever at low data rates, but a hindrance if you're pushing the boundaries.


Hi Rybags!

 

The manual says that resync allows for a +-5% variation in data rate.

My guess would be that resync could theoretically take place at any 0->1 or 1->0 transition within the data stream (inclusive of start/stop bits).

I'm quite sure it resyncs only at the beginning of the start bit, this is also what the docs say. The allowed variation is a result of how pokey works: it samples the bit (approx.) at its center. So if the speed doesn't match exactly, but the last sample still falls into the stop bit, everything's fine.

 

The effect we are seeing here (in async mode) is (mostly) due to the 5 clock cycle delay after the beginning of each start bit. This shifts the sampling point from the center (7) almost to the end (12), allowing for almost no deviation up in speed (only down). I guess nobody noticed before that the highest pokey speed simply doesn't work as it should (the added pause of at least 1 phi2 cycle is really needed). Several people wrote before that a pause is needed, but this was mainly because their SIO code was too slow to cope with the maximum pokey speed.
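That shifted sampling point can be turned into a rough tolerance estimate. This is a sketch under the usual UART model (all 10 bits are timed from the start edge, and the stop-bit sample must still land inside the stop bit); the `speed_tolerance` name is mine.

```python
# If each bit is sampled a fraction f into its nominal slot, the stop bit is
# sampled (9 + f) receiver bit-times after the start edge. For that sample to
# land inside the sender's stop bit, the sender/receiver bit-time ratio r must
# satisfy 9*r <= 9 + f <= 10*r, which gives the deviation limits below.
def speed_tolerance(f):
    fast = (1 - f) / 10   # sender's bit time may be up to this fraction shorter
    slow = f / 9          # ...or up to this fraction longer
    return fast, slow

print(speed_tolerance(0.5))      # ideal mid-bit sampling: ~5% both ways
print(speed_tolerance(12 / 14))  # divisor-0 point, cycle 12 of 14: ~1.4% up, ~9.5% down
```

This matches the observation above: with the sample near the end of the bit, almost no upward speed deviation is tolerated.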

 

That might explain discrepancies in success with different bit patterns.

If you mean my last post where I wrote about the new testcases: I just found the reason. I had a silly bug in my CPLD logic that resulted in the very first (start) bit being one cycle too short. I fixed the code, disabled the 10 "1" bits before the transmission and now everything's fine again. $33, $31 and $13 all work with just one additional clock cycle.

 

so long,

 

Hias


The difference between the working configuration (15 cycles stop bit) and the sampling of the stop-bit is just 3 cycles - at 2 cycles difference I get the wrong data. Could this be the same 3-cycle delay as for sampling the clock in sync mode, as you wrote before?

 

It shouldn't be related.

 

The 3 PHI2 cycles between active serial clock pulses is the minimum that the control logic requires for performing a correct shift. This holds regardless of sync or async mode, but of course it is not relevant for async because it would never happen with the internal counters. It is not directly related to the sampling of the external clock.

 

Anyway, again, I don't think they are related.

 

So compared to the expected values (cycle 7, cycle 8, cycle 47), sampling takes place exactly 5 cycles later. This is again interesting; I had expected the 6-cycle setup delay you mentioned before. Again: any ideas? :-)

 

Interesting!

 

I don't think these two issues should be (directly) related either. This 5-cycle shift you found is the result of different internal delays in Pokey. And this includes delays in the re-sync and internal-counter logic that are not relevant in sync mode.

 

However, I didn't expect such a big shift (5 cycles). There is a shift in sync mode as well, but it is smaller. It might indeed match my theory of the "set on stop" arriving after the "reset on start". But again, it is almost impossible to get the exact internal timing without performing some kind of simulation.

 

The manual says that resync allows for a +-5% variation in data rate.

 

The manual just states the obvious here; this tolerance is inherent to any 10-bit asynchronous transfer. Every UART resyncs its internal counters (at least) at the start bit. Otherwise it wouldn't work, because the start bit is (normally) the only synchronization mechanism. The 5% comes from sampling at the center of the bit time and the 10-bit frame length.
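That ~5% figure can be derived in a couple of lines. This is the standard UART tolerance argument, sketched here for illustration (function name and framing are mine):

```python
# Hedged derivation of the ~5% tolerance: the receiver resyncs only at
# the start-bit edge and samples each bit at its center, so the clock
# mismatch may accumulate at most half a bit time over the 9.5 bit
# times between the start edge and the center of the stop bit.
def max_rate_mismatch(frame_bits=10):
    last_sample = frame_bits - 0.5   # bit times from start edge to stop-bit center
    return 0.5 / last_sample         # allowed fractional clock error

print(round(max_rate_mismatch() * 100, 2))  # → 5.26 (percent), i.e. roughly +-5%
```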

 

My guess would be that resync could theoretically take place at any 0->1 or 1->0 transition within the data stream (inclusive of start/stop bits).

 

Some UARTs do that, but Pokey doesn't. And some of those UARTs let you transfer without start/stop bits, as long as there are enough transitions in the data stream.

Edited by ijor

  • 1 year later...

However, I do get some errors, on the order of 1.5%-2% of packets (one or two bad packets every 100). Interestingly, the error rate doesn't change significantly no matter how much I reduce the bit rate (I tried down to ~80 KHz).

I just read that the 6522 suffers from similar problems when it is clocked externally. Maybe Pokey also suffers from this bug? I'm really not sure, but it should be quite easy to test:

 

The problem with the 6522 is that it misses a clock transition when it occurs within a few ns around the falling edge of PHI2. The solution is quite simple: put a 74LS74 in front of the clock input and clock this FF with PHI2, so that the clock input can only change at the rising edge of PHI2.

 

Here's a link to a sample schematic by Garth Wilson: http://www.6502.org/users/garth/projects.php?project=1&schematic=11

And here's a link to some more information about the 6522 bug: http://forum.6502.org/viewtopic.php?t=40
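The effect of that 74LS74 fix can be modeled in a few lines. This is purely an illustrative sketch (the function name and sample representation are mine): the flip-flop re-registers the external clock on PHI2's rising edge, so downstream logic only ever sees transitions aligned to PHI2, at the cost of one PHI2 cycle of latency:

```python
# Illustrative model of a D flip-flop (74LS74) synchronizer: the
# external clock level is captured at each PHI2 rising edge, so the
# output can only change at those edges — mid-cycle transitions near
# the falling edge of PHI2 can no longer reach the chip behind it.
def synchronize(ext_clock_samples):
    """ext_clock_samples: external-clock level seen at each PHI2 rising
    edge. Returns the flip-flop output at those same edges; it follows
    the input with one cycle of latency."""
    q, out = 0, []
    for d in ext_clock_samples:
        out.append(q)  # output holds the previously captured level
        q = d          # the edge captures the new input level
    return out

print(synchronize([0, 1, 1, 0, 1]))  # → [0, 0, 1, 1, 0]
```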

 

I don't have a FT232R board here, so I can't test this by myself, but maybe this information is of some help to you.

 

so long,

 

Hias


  • 1 month later...
I just read that the 6522 suffers from similar problems when it is clocked externally. Maybe Pokey also suffers from this bug? I'm really not sure, but it should be quite easy to test:

The problem with the 6522 is that it misses a clock transition when it occurs within a few ns around the falling edge of PHI2.

 

Hi Hias,

 

I would need to check it; honestly, I don't remember the details. But as far as I do remember, I think the problem didn't seem to be a missed clock pulse, but the opposite. I remember making some tests that suggested Pokey was internally generating an extra pulse.

 

I suspect the issue is something I mentioned to you some time ago by PM. The external clock signal is not synchronized correctly inside Pokey, and it might produce a glitch if a transition happens at the wrong time. A glitch could then generate an extra pulse.

 

But whatever the exact reason happens to be, it is very likely about the relation between PHI2 and the external clock.


  • 8 years later...

It's been 10 years, so it seems like a good time to necro this thread. :)

 

But seriously, I was just looking this over and thinking about it, and I was wondering why you're using start and stop bits in these tests?  Aren't they superfluous when you have the clock signal to let you know when each bit is valid?


23 hours ago, evilmoo said:

It's been 10 years, so it seems like a good time to necro this thread. :)

But seriously, I was just looking this over and thinking about it, and I was wondering why you're using start and stop bits in these tests?  Aren't they superfluous when you have the clock signal to let you know when each bit is valid?

 

After 10 years I am older and, honestly, I don't remember all the details :), but ...

 

Pokey is not designed for a true, fully synchronous serial transfer. The so-called synchronous mode is just the regular asynchronous mode with start and stop bits, only using an external clock instead of an internally generated one.
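So even with an external clock, a sender still has to wrap each byte in the usual 10-bit async frame. A minimal sketch of that framing (the function is mine, purely for illustration; LSB first, start bit low, stop bit high, with the long start bits discussed earlier in the thread repeating the leading 0):

```python
# Illustrative framing for Pokey's "synchronous" mode: the external
# clock replaces the internal baud generator, but each byte is still
# sent as start bit(s) + 8 data bits (LSB first) + a stop bit.
def frame_byte(value, long_start_bits=1):
    bits = [0] * long_start_bits                  # start bit(s), held low
    bits += [(value >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                                # stop bit, high
    return bits

print(frame_byte(0x31))  # 0x31 = 0b00110001 → [0, 1, 0, 0, 0, 1, 1, 0, 0, 1]
```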

