
What do you guys use to create Pokey Music?



Moreover, MPT has editable frequency tables (well, at least the 8-bit ones, 4 of them in total), and all of them are saved together with the music. Here's an old example of my experiments with triangle waves (still not supported by SAP players!):

 

On real hardware it sounds even better! :)
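For context, the entries in such a frequency table are the 8-bit AUDF dividers POKEY uses to derive a pitch from its audio clock. Below is a minimal sketch of how a table like that could be computed for the 64 kHz clock and pure tones, using the usual Fout = Fclk / (2 × (AUDF + 1)) relation; the ~63.921 kHz clock constant and the note range are illustrative assumptions, not MPT's actual table layout.

```python
# Minimal sketch: build an 8-bit AUDF table for POKEY's 64 kHz clock and
# pure-tone distortion, using Fout = Fclk / (2 * (AUDF + 1)).
# The clock value and the note selection are illustrative, not MPT's format.

BASE_CLOCK_64K = 63921.0  # ~64 kHz audio clock (NTSC machine clock / 28)

def audf_for_frequency(freq_hz: float) -> int:
    """Nearest 8-bit AUDF divider for a target pitch in 64 kHz mode."""
    audf = round(BASE_CLOCK_64K / (2.0 * freq_hz)) - 1
    return max(0, min(255, audf))

def equal_tempered_table(lowest_midi=57, count=36):
    """Three-octave table starting at MIDI note 57 (A3, 220 Hz)."""
    table = []
    for n in range(lowest_midi, lowest_midi + count):
        freq = 440.0 * 2.0 ** ((n - 69) / 12.0)
        table.append(audf_for_frequency(freq))
    return table

if __name__ == "__main__":
    print(equal_tempered_table())
```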

That's indeed very cool. Perhaps you should ask Fox if he could improve ASAP emulation :)


Come on emkay. I want to listen to YOUR guitars.

 

 

 

 

Preferably in-tune, haha! :P

 

Let us in on your humour; you know, laughing together is better ;)

 

Show us ... your ... favorite "in tune" music. The rules: 50Hz VBI and one POKEY, 4 voices used, no digis.

 


 

Sounds awesome! I would love to see what you've got!

 

Ok, I created a small vid to demonstrate the output:

 

 

Track info:

1: Original

2: 50Hz updates, only 'clear' distortions

3: 50Hz updates, all distortions

4: 100Hz updates, all distortions

5: 200Hz updates, all distortions

 

 

 

I believe if you could increase the quality by using more updates per frame (4, 8, ...), it could be very interesting.

 

See above. The last track takes less than 400 bytes per second, so it could be used as a compression scheme - if only the quality were better...

That's the reason I stopped this project 3 years ago. It didn't come out as nicely as I had hoped, and other things were more interesting or promising ;)

But the converter is far from perfect: audio translation is done by estimating only three simple criteria, and I left the advanced POKEY modes aside (16-bit mode etc.), so there is (much?) potential left...
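As a rough sanity check of the data-rate figure above (assuming the stream carries one AUDF byte and one AUDC byte per update for a single channel, which is an assumption about the converter's output layout):

```python
# Register-stream data rate: one AUDF + one AUDC byte per update
# for a single channel (an assumption about the stream layout).
BYTES_PER_UPDATE = 2  # AUDF + AUDC

for rate_hz in (50, 100, 200):
    print(f"{rate_hz:3d} Hz updates -> {rate_hz * BYTES_PER_UPDATE} bytes/s")
# 200 Hz * 2 bytes = 400 bytes/s as an upper bound, in line with the
# "less than 400 bytes per second" figure quoted for the last track.
```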

 

 

The problem with that DLL is its wrong sound handling. Not only does it not really sound like POKEY, it also produces artefacts that POKEY doesn't. Which means, in the end, you get results that have nothing in common with the original hardware.

 

I think that isn't really relevant in this case, or at this stage.


Nice. IMHO the last version is the only one that I find similar to the original tune.. I can kind of "recognize" it there..

The others sound just too different (at least for my tastes).

So.. maybe increasing it to 400hz? :) (or adding other AUDCTL modes?)

 

I vote for "there is potential left" x)


So.. maybe increasing it to 400hz? :) (or adding other AUDCTL modes?)

 

I vote for "there is potential left" x)

 

Increasing to 400 Hz in this 8 kHz sampling scenario would mean a compression ratio of only ~1:5 (AUDF+AUDC per update vs. 2 samples per byte in volume-only mode), which is within reach of other compression schemes that promise better quality. (BTW: 'thealgorithm' is right: quality increases only slightly while producing more distortion.)
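For reference, the ~1:5 figure works out as follows (assuming an 8 kHz source, 4-bit volume-only samples packed two per byte, and one AUDF+AUDC pair written per update):

```python
# Worked check of the ~1:5 compression ratio mentioned above.
sample_rate = 8000
raw_bytes_per_s = sample_rate // 2            # 2 samples per byte -> 4000 bytes/s

update_rate = 400
stream_bytes_per_s = update_rate * 2           # AUDF + AUDC -> 800 bytes/s

print(raw_bytes_per_s / stream_bytes_per_s)    # 5.0, i.e. roughly 1:5
```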

 

I still have some ideas to enhance the converter (e.g. better spectrum analysis and utilizing more than one channel), but currently I'm quite busy with another project...


I find it beneficial to use only the 50 Hz mode (due to the compression ratio), and as mentioned previously, increasing the update rate does not give a dramatic difference in quality (and produces more distortion with this method), while using as much data as other audio compression methods, which would sound a lot better.

 

If using more than one channel, the genetic/hill-climbing approach seems to be the only viable solution (if allowing audio chunks to be created via combined mixing of oscillator waveforms). However, there is another approach: building the audio channel by channel,

 

e.g. recreate the audio using one channel based on the source; then, once the optimum waveform, amplitude and frequency are found, layer the second channel, then the third.
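Roughly, that layering idea could look like the sketch below. The square-wave render_channel() is only a stand-in for real POKEY waveform synthesis, the brute-force grid search stands in for the genetic/hill-climbing search, and the parameter ranges are made up for illustration; this is not thealgorithm's actual converter.

```python
import numpy as np

def render_channel(freq, volume, n, sample_rate=8000):
    """Toy stand-in for POKEY synthesis: a square wave at a given freq/volume."""
    t = np.arange(n) / sample_rate
    return volume * np.sign(np.sin(2 * np.pi * freq * t))

def best_fit_channel(target, freqs, volumes):
    """Brute-force search over a small parameter grid for one channel."""
    best, best_err = None, np.inf
    for f in freqs:
        for v in volumes:
            cand = render_channel(f, v, len(target))
            err = np.sum((target - cand) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

def layer_channels(target, n_channels=3):
    """Fit channel 1 to the source, then fit each further channel to the residual."""
    freqs = np.linspace(50, 3000, 60)
    volumes = np.linspace(0.0, 1.0, 16)   # 16 steps, like 4-bit POKEY volume
    residual = target.copy()
    mix = np.zeros_like(target)
    for _ in range(n_channels):
        mix += best_fit_channel(residual, freqs, volumes)
        residual = target - mix
    return mix
```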

 

The recent implementation splits the audio into two bands and uses the first channel as a "bass channel", with the other two channels used to recreate the rest. Example below:

 

https://www.dropbox.com/s/heq2noxyozu35wy/frodigi4-preview.wav?dl=0
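For the two-band split itself, a minimal sketch is below: a windowed-sinc low-pass whose remainder forms the upper band, so the two bands sum back to the original. The 300 Hz crossover and the filter length are illustrative guesses, not the values used in frodigi4.

```python
import numpy as np

def split_bands(samples, sample_rate=8000, crossover_hz=300, taps=101):
    """Split audio into a low band (for the 'bass channel') and the rest."""
    n = np.arange(taps) - (taps - 1) / 2
    fc = crossover_hz / sample_rate
    kernel = np.sinc(2 * fc * n) * np.hamming(taps)
    lowpass = kernel / np.sum(kernel)          # unity gain at DC
    low = np.convolve(samples, lowpass, mode="same")
    high = samples - low                        # the remainder is the upper band
    return low, high
```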


As said: when thinking of FroDigi for the POKEY, waveshaping has to be implemented. POKEY can fill the gaps from quantisation point to quantisation point with its own waveshapes, resembling the original waves as closely as possible...

 

Still 50 Hz programming. This time with the 50 Hz setting in Altirra, for people who think the bars in 60 Hz were played faster ;) Altirra isn't that stable at 50 Hz, but it's OK-ish.

 


The most puzzling thing about "POKEY music" is that "developers" are left standing alone when it comes to creating better results on the available hardware.

Still, nothing against "frodigi", but it's telling to see people "like" it in this thread.

Just a reminder: the SID has its own controllers acting on the wave production, without CPU usage. That's why "frodigi" works. It's also the reason "Impossible Mission" was the first game with fully programmed "synth voices".

 

The plain sound generators of POKEY produce plain waves or some non-repeatable waves, which makes it "impossible" to use simple wave comparisons and recreate them at 50 Hz. Using 400 Hz and higher is "digitizing".

 

In 2006 I created this example:

 

 

It's not even the first demonstration. The "speech" is cleaner than in the frodigi demos, and it simply uses 50 Hz programming of an RMT instrument.

Not sure WHY such approaches were always swept under the carpet. Taste in music? Well, POKEY isn't able to play real music that way.

But a controlled update of the produced waves can solve it.


The most puzzling thing about "POKEY music" is that "developers" are left standing alone when it comes to creating better results on the available hardware.

:thumbsup: Exactly. If some people think NOW is a good time to ask for, e.g., ASAP upgrades, then it's years too late. Sorry :( ... just to repeat my comments:

 

Are you in a hurry then? Don't go that fast, please. :|


The basic problem is that musicians are usually not programmers and vice versa. In addition, people are lazy nowadays and want to make Atari stuff on a PC. The result is obvious: everyone uses RMT with mostly predefined instruments.

 

The only way to change that is to make a new tracker/editor which would make it easy to use the uncommon features, multiple VBI updates and so on. The new open-source, multiplatform enotracker may be a good starting point, perhaps?



Personally, I wouldn't be bothered about time constraints when considering digitized sounds along with visuals, as I've heard truly excellent SPLs (digis) even with DMA time taken up by the visuals. I used to use Torsten Karwoth's Soundmonitor, and the tracker only takes a couple of scan-lines to process and play a tune, so I'm sure it is certainly possible to play some digis alongside the tune. Perhaps the easiest way is to deliberately leave one channel unused in the tracker and write a small routine that works alongside the tracker to utilise this unused channel, poking the volumes directly (making waveforms) to the speaker as one would do naturally with the POKEY.
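For anyone trying this, "poking the volumes directly" boils down to writing the spare channel's AUDC register in volume-only mode (bit 4, $10, set) with the 4-bit sample value in the low nybble. A minimal sketch of that conversion is below; the simple >>4 scaling and the choice of channel are assumptions, and the actual tracker/IRQ plumbing that writes these bytes out is left out entirely.

```python
# Minimal sketch: turn unsigned 8-bit samples into AUDC values for POKEY's
# volume-only mode (bit 4 = $10 forces the output, low nybble = volume 0-15).
# How/when these bytes reach the spare channel's AUDC register is left out;
# the plain >>4 scaling is an assumption, not Soundmonitor's method.

def samples_to_audc(samples_u8):
    return bytes(0x10 | (s >> 4) for s in samples_u8)

# Example: a few raw sample bytes become $1x register values.
print(samples_to_audc(b"\x00\x40\x80\xc0\xff").hex())  # 1014181c1f
```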

 

Any comments?

