
Movie Cart


rbairos



Hi everyone.
Thanks for checking out my MovieCart last night.
It was the result of a very long series of experimentation and trial and error in assembly,
color processing, microcontroller interfacing and packaging.
Shout out to @zackattack for being my technical sounding board from the beginning,
and @zeropagehomebrew for debuting it last night!

Thought I'd start a new thread to discuss technical issues etc.
 

The final format I chose was a solid block of 10x192 cells, drawn as two alternating checkerboards
of 5x192.  
I had tried/emulated/rendered several variations of this, including:
12x192 with two random colored columns.
40x40 playfield with full color.
12x192 with alternating r/g/b lines (chronovision)
Several variations of alternating columns, alternating diagonal columns, etc.
Greyscale floyd steinberg dithering with background color swaps
The final full alternating checkerboard is thanks to @DirtyHairy and @SpiceWare for explaining the 8 bit back shift in detail.
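A minimal sketch of that cell layout, assuming the two 5x192 checkerboards alternate with frame parity (the phase/orientation here is my assumption, not taken from the shipped kernel):

```python
# Sketch of the MovieCart cell layout described above: a 10x192 grid of
# colour cells, refreshed as two complementary 5x192 checkerboards on
# alternating frames. Illustrative only.

CELL_COLS, CELL_ROWS = 10, 192

def cells_for_frame(frame):
    """Return the set of (col, row) cells redrawn on this frame."""
    parity = frame & 1
    return {(c, r)
            for r in range(CELL_ROWS)
            for c in range(CELL_COLS)
            if (c + r) & 1 == parity}
```

Over any two consecutive frames the two sets are disjoint and together tile the full 10x192 block.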

I developed a testbed in TouchDesigner with GLSL and custom plugins to test all these variations over several months.
I still have several of those test clips if anyone's interested.


In terms of hardware, I aimed for through-hole, 5V, simple construction.
It uses a 16 MIPS PIC (64 MHz, 4-cycle) driving 1 KB of an 8 KB dual-port RAM.
Data is completely uncompressed and read over a low-level SPI interface to an SD card.

The MC doesn't have to react to each instruction, but knows which of two possible 128-byte sections of the RAM the 6507 is
currently accessing, through very careful planning of the kernel.
In this way I ended up needing only 3 address lines:
A7 -> which of the two possible chunks the 6507 is on
A10 -> controller + console messages
A11 -> used to enable the RAM output onto the bus.

Note I did not need A12, so these carts actually work on modified Flashback 2's as well, which don't drive A12 properly.
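The three lines above can be sketched as a decode table; the bit positions follow the 6507's 13-bit address bus, but the function itself is an illustration, not the PIC firmware:

```python
# Illustrative decode of the only three address lines the cart watches.
# A7, A10 and A11 are bits 7, 10 and 11 of the 6507's 13-bit address bus.

def decode(addr):
    return {
        "chunk":      (addr >> 7) & 1,   # A7: which 128-byte RAM window is active
        "message":    (addr >> 10) & 1,  # A10: controller/console message line
        "ram_enable": (addr >> 11) & 1,  # A11: gate the RAM output onto the bus
    }
```

Note how A12 never appears in the decode, which is why the cart also tolerates a Flashback 2's broken A12.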

In terms of controller + console data:
The MC does not probe any registers; instead, the kernel communicates the data back, one bit per scanline, by selectively
addressing the A10 line, which the RAM address lines don't use.
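A toy model of that one-bit-per-scanline channel; the base address and MSB-first bit ordering here are assumptions for illustration, not the shipped protocol:

```python
# Illustrative model: the kernel encodes each bit by choosing between two
# mirror addresses differing only in A10, and the MC shifts the observed
# A10 values into a byte, one access per scanline.

BASE = 0x1000  # hypothetical read address inside cart space

def kernel_address_for_bit(bit):
    # Setting or clearing A10 (bit 10) signals a 1 or 0 to the MC.
    return BASE | (1 << 10) if bit else BASE

def mc_receive(addresses):
    value = 0
    for addr in addresses:  # one observed access per scanline
        value = (value << 1) | ((addr >> 10) & 1)
    return value

bits = [1, 0, 1, 1, 0, 0, 1, 0]
addrs = [kernel_address_for_bit(b) for b in bits]
```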


I have everything up on my GitHub (PCB layouts, assembler, real-time conversion utilities, etc.),
so it's available for dissection, improvement, etc.:
https://github.com/lodefmode

Going to work with @Zackattack on an Uno part, and will likely submit cleaned-up support for Stella emulation,
which was invaluable for this project.


Anyways, feel free to post any technical comments and questions here.


Cheers!
Rob  "lodef mode"


 

Attachments: colorize.jpg, inside_cart.jpg, instructions1.jpg, instructions2.jpg, screenshot.jpg


Many thanks for the explanation. I have a few more questions:

  1. What were the results of your experiments with the other formats? E.g. Chronovision doesn't sound like a good idea.
  2. There were noticeable vertical black lines in the videos. Do you know where they come from?
  3. Did you experiment with compression? A simple vertical delta compression of pixels and/or colors seems feasible and might provide decent results.
  4. There are a few unused cycles in your kernels, and a little potential to use fewer cycles. I am not sure if the timing allows this, but these cycles could be used to display 1 or 2 more columns. Or did you try that already?

BTW: Are you interested in getting your code integrated into Stella?


1 hour ago, Thomas Jentzsch said:

Many thanks for the explanation. I have a few more questions:

  1. What were the results of your experiments with the other formats? E.g. Chronovision doesn't sound like a good idea.
  2. There were noticeable vertical black lines in the videos. Do you know where they come from?
  3. Did you experiment with compression? A simple vertical delta compression of pixels and/or colors seems feasible and might provide decent results.
  4. There are a few unused cycles in your kernels, and a little potential to use fewer cycles. I am not sure if the timing allows this, but these cycles could be used to display 1 or 2 more columns. Or did you try that already?

BTW: Are you interested in getting your code integrated into Stella?

1. I'll update a few this morning.

2. Yup, there are two leading pixels that don't receive the color update in time and show the color from 16 pixels back (or so, I don't recall exactly).
I felt that was far more tolerable than 6+6 (12 columns) with two entire columns the wrong color.

3. Thought about compression, but I wanted to go with a through-hole 5V microcontroller in the end, and storage was super cheap. I was still able to read a full frame in time by polling the SD card
in 512-byte chunks.

4. I originally had more columns but worse color control (I'll put an example up). One thing though: technically I could have kept 12 columns for b/w mode and just the 8 for color, but that would
involve swapping out the kernel and some other complexities. This way, b/w is implemented by just masking out a few bits of the color value (Thanks, TIA engineers!)
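The masking trick works because a TIA colour byte carries hue in the high nibble and luminance in bits 3-1, and hue 0 is grey. A one-line sketch (the mask value is my illustration of the idea, not necessarily the exact mask the kernel uses):

```python
# B/W mode in miniature: clearing the hue nibble of a TIA colour byte
# leaves a grey (hue 0) of the same luminance.

def to_bw(tia_color):
    return tia_color & 0x0F  # keep luminance bits, force hue to 0 (grey)
```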

Stella port: Absolutely. The project wouldn't have been possible without countless hours of debugging on Stella.
Honestly it looks better on Stella than on real hardware, which tries to be smart with locking + interlacing.
I have the basic core on GitHub, which defines a peek and poke that need to be spliced in for anyone who wants to tinker.
I have an uncommitted .mov extension that I could supply as well.
Best choice: .mov includes the whole movie, but Stella would have to avoid trying to read it all.


 


1 hour ago, Andrew Davie said:

If you have any clips of "chronovision" (did you mean ChronoColour?) I'd like to see/review!


Here are a couple.
Please set YouTube quality to '720p 60', otherwise it skips too many frames, making it look incomplete.


This one is just some early tests comparing b/w dithering with best color per 8-bit cell vs full-field R,G,B dithering:
Note it uses 12 columns of color, which perhaps isn't possible without bus stuffing (which I chose not to pursue).

 



This one is a variation of ChronoVision, but done in two passes in an attempt to increase resolution:

 


In the end, I found using only R,G,B at this resolution was way too jarring, given the Atari's rich palette.
And b/w dithering + picking colors separately didn't provide as good quality as doing it all in a single pass (dithering with the Atari palette).

Cheers,
Rob.


 


16 minutes ago, rbairos said:


Here are a couple.
Please set YouTube quality to '720p 60', otherwise it skips too many frames, making it look incomplete.


This one is just some early tests comparing b/w dithering with best color per 8-bit cell vs full-field R,G,B dithering:
Note it uses 12 columns of color, which perhaps isn't possible without bus stuffing (which I chose not to pursue).

 


This one is a variation of ChronoVision, but done in two passes in an attempt to increase resolution:

 


In the end, I found using only R,G,B at this resolution was way too jarring, given the Atari's rich palette.
And b/w dithering + picking colors separately didn't provide as good quality as doing it all in a single pass (dithering with the Atari palette).

Cheers,
Rob.


 

 

Thanks for the videos. If I'm interpreting what I'm seeing correctly, you're using alternate R/G/B scanlines, and that's it.

If you were trying for ChronoColour, you need to also cycle the R/G/B on consecutive frames. So consider scanlines 0/1/2: in frame 0 you would have R/G/B; in frame 1, G/B/R; and in frame 2, B/R/G. That is, you "roll" the RGB every frame, and you also alternate it on every scanline within each frame. That way, over 3 frames each and every scanline displays R/G/B, so you get time-based multiplexing of R/G/B on every scanline.
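The rolling scheme boils down to advancing the colour plane by one both per line and per frame; a one-liner sketch (the `plane` helper is an illustrative name, not from either codebase):

```python
# ChronoColour-style rolling: the plane shown on a scanline advances by one
# per line and per frame, so any fixed scanline cycles R/G/B over 3 frames.

PLANES = "RGB"

def plane(frame, line):
    return PLANES[(frame + line) % 3]
```

For example, line 0 shows R in frame 0, G in frame 1 and B in frame 2, matching the listing above.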

 

 


2 hours ago, chad5200 said:

Was watching last night. Very impressive!

Would it be possible to upload YouTube (or other video hosting site) video demo samples with full volume audio?

 

2 hours ago, Thomas Jentzsch said:

YouTube would remove them very fast.

 

Big Buck Bunny can be used for free, emphasis added:

 

Quote

License Information


The results of the Peach open movie project has been licensed under the Creative Commons Attribution 3.0 license. This includes all the data we’ve published online and on the DVDs, and all of the contents on this website. If any content on this site is not licensed as such, it will be clearly indicated.

 

In short, this means you can freely reuse and distribute this content, also commercially, for as long you provide a proper attribution.

 

 


9 minutes ago, hizzy said:

Where did you get the boxes and manuals done? They're super cool!

For the boxes, I started with a template from:
http://kevman3d.blogspot.com/2017/03/restoring-2600-collection-one-box-at.html
but changed some of the flap sizes a bit for easier slicing.

I had them all printed at https://www.staplescopyandprint.ca/  (Canada) which was super cheap compared to all the other options I found.
Stickers were done at stickeryou.com which turned out nice.

As for the actual content, I started with scans of an original Combat box + manuals and painstakingly reproduced it all as closely as possible.
In the end I made a file in TouchDesigner that lets you just plug in any photo and text, and it assembles the final images.

Included screenshots of the process.

I can put them on the github.
TouchDesigner is free for non-commercial use.
The labelmaker should work fine, but the boxmaker uses much higher resolutions, so it won't look right on the free version at the moment.

Cheers,
Rob.


 

label_creator.PNG

box_creator.PNG


6 minutes ago, Andrew Davie said:

The first video looks very much like you were varying RGB on alternate frames, without interleaving inside the frame itself.

 

Yes, that's right. I tried full-frame as well as per-line, as well as some per-column if I recall.


11 minutes ago, SpiceWare said:

Big Buck Bunny can be used for free, emphasis added:

That's a public domain animation?
Very cool.
In the end, though, I found cartoon animations didn't translate as well.
The solid patches of color beside each other didn't lend themselves well to 8-pixel color boundaries, unfortunately.
Best were people's faces, houses, driving cars, etc.

The encoder is all on the GitHub, and a clip showing the encoding process can be found here:
 

 

Edited by rbairos

8 minutes ago, rbairos said:

Yes, that's right. I tried full-frame as well as per-line, as well as some per-column if I recall.

None of those have "full frame AND per line at the same time" as per my earlier explanation.

 

Frame 0 line 0 starts with red. Next line green. Next line blue. Next line red.... GBRGBRGB...

Frame 1 line 0 starts with green. Next line blue. Next line red. Next line green.... BRGBRGB...

Frame 2 line 0 starts with blue. Next line red. Next line green. Next line blue.... RGBRGBRG...

Frame 3 line 0 starts with red.... etc

 

 


17 minutes ago, Andrew Davie said:

The first video looks very much like you were varying RGB on alternate frames, without interleaving inside the frame itself.

 

Oh, I just realized one of the clips above is R,G,B,Intensity, which was just an added average grey level. If I recall, this was just a test to try to get rid of some of the jarringness of plain R,G,B lines.


15 minutes ago, Andrew Davie said:

None of those have "full frame AND per line at the same time" as per my earlier explanation.

 

Frame 0 line 0 starts with red. Next line green. Next line blue. Next line red.... GBRGBRGB...

Frame 1 line 0 starts with green. Next line blue. Next line red. Next line green.... BRGBRGB...

Frame 2 line 0 starts with blue. Next line red. Next line green. Next line blue.... RGBRGBRG...

Frame 3 line 0 starts with red.... etc

 

 

The RGB-grey one does. Each frame starts with a different component.

But here's the test with just R,G,B.
Note that for bright sections, one frame starts with a blue line, the next frame starts with a red, the next frame with a green, etc.
The right-hand side uses the full palette.

Note I do two passes of 48 pixels, but it wasn't really less jarring at one pass either.

 


 

Edited by rbairos

7 minutes ago, rbairos said:

I probably have that one somewhere as well.
Many of my tests involved cycling through start possibilities for each frame to break up patterns.
I even tried this with columns, to try to further break up the lines.
I think I even tried this at the individual pixel level at one point (purely hypothetical, not for the Atari).
They all resulted in jarring 20 Hz red, green, blue pixels.
 

This is true if you choose the nearest colour. My method is to first resize a frame to the destination x * y, then separate the colours into three planes (R/G/B), then dither each plane separately (Floyd-Steinberg) into 1 bit per pixel, and finally do the RGB interleaving as described above. That is, split it into RGB or GBR or BRG alternating lines, depending on whether your frame # MOD 3 is 0, 1, or 2.

 

 

Edit: The subtlety/shadows/shading comes exclusively from the dithering of each plane individually, rather than combined as a single image.
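A compact pure-Python sketch of this pipeline (the resize step is omitted, and `fs_dither`/`interleave` are my illustrative names, not Andrew's actual tool):

```python
# Per-plane Floyd-Steinberg dithering plus rolling RGB interleave, as
# described above: each plane is dithered to 1 bit per pixel on its own,
# then each scanline keeps only the plane chosen by (frame + line) mod 3.

def fs_dither(plane, width, height):
    """Dither one greyscale plane (0..255 values) down to 0/1 pixels."""
    buf = [list(row) for row in plane]   # working copy; error diffuses in place
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            old = buf[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = 1 if new else 0
            err = old - new
            if x + 1 < width:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out

def interleave(r, g, b, frame, width, height):
    """For each scanline, keep the plane chosen by the rolling RGB schedule."""
    planes = (r, g, b)
    return [planes[(frame + y) % 3][y] for y in range(height)]
```

Dithering the planes independently is what preserves the shadows and shading: each channel gets its own error diffusion rather than a single nearest-colour decision per pixel.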

Edited by Andrew Davie

10 minutes ago, Andrew Davie said:

This is true if you choose the nearest colour. My method is to first resize a frame to the destination x * y, then separate the colours into three planes (R/G/B), then dither each plane separately (Floyd-Steinberg) into 1 bit per pixel, and finally do the RGB interleaving as described above. That is, split it into RGB or GBR or BRG alternating lines, depending on whether your frame # MOD 3 is 0, 1, or 2.

 

 

Edit: The subtlety/shadows/shading comes exclusively from the dithering of each plane individually, rather than combined as a single image.


For my early RGB tests, including those above, I very likely dithered each component separately and spliced them back together afterwards, as described.
This would have been easier for me to implement.
It was only later that my tests incorporated Floyd-Steinberg to pick from a palette of colors.
The original algorithm was straight monochrome on separate components.
I just couldn't get past the low resolution, which made the r/g/b dots pop out glaringly.
If you've got a specific setup you want me to try, I'm very happy to give it a shot over the next little while.
We might find something better.
Cheers
Cheers
 


2 minutes ago, rbairos said:


For my early RGB tests, including those above, I very likely dithered each component separately and spliced them back together afterwards, as described.
This would have been easier for me to implement.
It was only later that my tests incorporated Floyd-Steinberg to pick from a palette of colors.
The original algorithm was straight monochrome on separate components.
I just couldn't get past the low resolution, which made the r/g/b dots pop out glaringly.
If you've got a specific setup you want me to try, I'm very happy to give it a shot over the next little while.
We might find something better.
Cheers
 

Just hoping to get the best quality possible on the machine.

