thealgorithm

Members · Posts: 62

thealgorithm's Achievements: Star Raider (3/9) · Reputation: 14

  1. I find it beneficial to only use the 50 Hz mode (due to the compression ratio), and as mentioned previously, increasing the number of updates does not give a dramatic difference in quality (and produces more distortion using this method), on top of using as much data as other audio compression methods (which would sound a lot better). If using more than one channel, the genetic/hill climbing approach seems to be the only viable solution (if allowing the facility to create audio chunks via combined mixing of oscillator waveforms). However, there is another approach: building the audio channel by channel, e.g. recreate the audio using one channel based on the source, then, once the optimum wave, amplitude and frequency are found, layer the second channel, then the third; a rough sketch of this layering idea follows below. The recent implementation splits the audio into two bands and uses the first channel as a "bass channel", with the other two channels used to recreate the rest. Example below: https://www.dropbox.com/s/heq2noxyozu35wy/frodigi4-preview.wav?dl=0
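A minimal Python sketch of the channel-by-channel layering, assuming a hypothetical `candidates` list of (parameters, rendered waveform) pairs standing in for the real oscillator renderings, and plain squared error standing in for a proper perceptual metric:

```python
import numpy as np

def best_single_channel(target, candidates):
    """Pick the candidate rendering closest to the target signal.

    candidates: list of (params, waveform) pairs, where waveform is a
    numpy array the same length as target.  Squared error is only a
    stand-in for a real "virtual ear" metric.
    """
    best_params, best_wave, best_err = None, None, float("inf")
    for params, wave in candidates:
        err = float(np.sum((target - wave) ** 2))
        if err < best_err:
            best_params, best_wave, best_err = params, wave, err
    return best_params, best_wave

def layer_channels(source, candidates, n_channels=3):
    """Greedy layering: fit channel 1 against the source, then fit each
    following channel against whatever residual is left over."""
    residual = np.asarray(source, dtype=float).copy()
    chosen = []
    for _ in range(n_channels):
        params, wave = best_single_channel(residual, candidates)
        chosen.append(params)
        residual -= wave  # the next channel recreates what is left
    return chosen
```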
  2. In its current implementation, increasing to more than one update per frame results in more distortion. This is due to the oscillators being "free running": the encoder can find a good match based on the offset of the amplitude, but in reality the oscillator can be at any amplitude on the next update. It's more of a catch-22 situation: emulating the current oscillator position (where it would be in the decoder) would result in fewer "optimum matches" but also less distortion, while opting for the free-running method gives more potential matches (but more distortion if updating more than once per frame). This applies to methods where more than one channel is used to reproduce the sample (and in particular the sawtooth waveform, where a rising section in the wrong phase in the decode can destroy the reproduction quite a bit). In regards to RastaConverter: Frodigi's hill climbing works very much like it, with data being fed into the virtual ear but then also adjusted via genetic methods. I am using a variant of the hill climbing method known as step counting hill climbing, which can reduce local-minima issues more than late acceptance hill climbing in most cases; a sketch of it follows below.
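A sketch of step counting hill climbing as I understand it from Bykov and Petrovic's description (the `cost` and `neighbor` functions are placeholders for the encoder's virtual ear and its parameter mutations):

```python
def schc(initial, cost, neighbor, step_limit=100, max_steps=200_000):
    """Step counting hill climbing: a cost bound is refreshed from the
    current solution every step_limit steps, and candidates are accepted
    if they beat the bound or are no worse than the current solution.
    The lagging bound is what lets the search climb out of some local
    minima that plain hill climbing gets stuck in."""
    current, current_cost = initial, cost(initial)
    bound, counter = current_cost, 0
    best, best_cost = current, current_cost
    for _ in range(max_steps):
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        if candidate_cost < bound or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        counter += 1
        if counter >= step_limit:
            bound, counter = current_cost, 0  # refresh the bound
    return best, best_cost
```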
  3. Well, to cut a long story short: there are certain individuals/groups who criticise the video/audio compression methods (e.g. CSAM Super, Frodigi), and when I decided to release the encoder (CSAM Super), the same people who had acted negatively ended up using it in their demos. I have also never claimed that the quality is good (after all, what can be expected when attempting to cram drums, bassline, singing, lead and other audio data into 3 SID channels using just the 4 waveforms at 1 frame per second?). It's rather more a tech demo to showcase what can be produced with the encoder. Won't mention any names, but I guess those that complain about the quality are those that attempted something similar but failed miserably :-) Hence any encoder that I produce will be kept to myself from now on (although I will keep on releasing demos). If anyone is interested in how it works, I have given some brief information on Pouet or CSDb: http://csdb.dk/release/?id=133293&show=notes#notes
  4. It won't be released. There are many people negatively talking about the method (but probably wishing inside that they could use it to their advantage). So no, no sources or executables :-| It's not a complex process anyhow.
  5. The POKEY can certainly also benefit from the Frodigi method (I have read a brief outline of possibilities), and it certainly has potential, much of which seems unexplored. By the way, I have released the third version of Frodigi on the C64, finally reducing the "clicking" by certain methods, as well as having the encoder attempt to preserve the bassline more accurately: http://www.youtube.com/watch?v=DFteV6YE7F0
  6. The SID (Sound Interface Device) is the sound chip in the C64 and C128. The Frodigi method (Free Running Oscillator Digi) recreates audio by placing frequency and waveform values into the three channels of the SID once per frame to recreate the original source audio. The decode is nothing more than reading some bitpacked data and placing it into a few registers (see the mock below); the encoder is a different thing altogether, of course.
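To show how little the decoder has to do, here is a Python mock of one frame of a Frodigi-style decode. The bit packing (16-bit frequency plus 4-bit waveform per channel) is a hypothetical layout for illustration; on the real machine this is a handful of 6502 instructions writing the SID registers at $D400:

```python
def decode_frame(bitstream, pos, sid_write):
    """Unpack one frame of hypothetical Frodigi-style data and write it
    to the SID.  bitstream is the packed data as a big integer, pos is
    the current bit offset, sid_write(addr, value) pokes a register."""
    def take(nbits):
        nonlocal pos
        value = (bitstream >> pos) & ((1 << nbits) - 1)
        pos += nbits
        return value

    for voice in range(3):
        base = 0xD400 + voice * 7            # 7 registers per SID voice
        freq = take(16)                      # assumed 16-bit frequency
        wave = take(4)                       # assumed 4-bit waveform bits
        sid_write(base + 0, freq & 0xFF)     # frequency low byte
        sid_write(base + 1, freq >> 8)       # frequency high byte
        sid_write(base + 4, (wave << 4) | 1) # control reg: waveform + gate on
    return pos
```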
  7. There are a few disadvantages in the SID (when taking into account the Frodigi methods). First of all, holding onto any sustain value per frame is an issue when the new sustain value is higher than the previous one: it is possible to use the same sustain value as the previous one, or a lower one, but not a higher one. (A workaround is to update the sustain and then turn the gate off and on for that channel, but this results in noticeable clicking.) The method I am using changes the master volume for all three channels instead, which makes it less accurate (but more compact in data). The issue again is that on the old SID (6581) there will be noticeable clicking. To reduce this, the master volume can be updated in between with the value interpolated from the previous and next one, which halves the volume of the click (sketched below). Another idea is using a fast attack and slow release and setting gate-off before the sustain phase is reached (at a desired interval until the attack reaches its amplitude); unfortunately this may still have some quality issues due to the curve in amplitude at the beginning of the update, as well as the amplitude level before the next update. Changing duty cycles can somewhat mimic lower/higher volume, but this is linked with the frequency. Same as with filters.
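A small sketch of the interpolated master-volume trick. It only shows the scheduling; on the real machine each event would be a write of the 4-bit master volume register ($D418) from a raster interrupt:

```python
def volume_schedule(volumes):
    """Expand a per-frame master-volume list (4-bit values, 0-15) into a
    schedule with an interpolated midpoint written between frames, so a
    full-frame volume jump becomes two half-sized steps and the click on
    a 6581 is roughly halved.  Returns (time_in_frames, value) pairs."""
    events = []
    for i, vol in enumerate(volumes):
        events.append((float(i), vol))
        if i + 1 < len(volumes):
            midpoint = (vol + volumes[i + 1] + 1) // 2
            events.append((i + 0.5, midpoint))  # extra write mid-frame
    return events

# A jump from 4 to 12 becomes 4 -> 8 -> 12 in two half-sized steps:
print(volume_schedule([4, 12, 6]))
```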
  8. One thing to bear in mind is that even though the Frodigi method updates once per frame (or even once per two frames), the oscillators in the SID update every cycle (just under 1 MHz!). Hence the sample rate is not 50 Hz; it is only the update rate of the parameters that is 50 Hz, and the SID does the rest without CPU intervention. The arithmetic below makes the ratio explicit.
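For a PAL machine the ratio works out roughly as follows:

```python
PAL_CLOCK_HZ = 985_248   # PAL C64 system clock; the oscillators step every cycle
UPDATE_RATE_HZ = 50      # one parameter update per PAL frame

# Each 50 Hz parameter update spans roughly 19,705 oscillator steps:
print(PAL_CLOCK_HZ / UPDATE_RATE_HZ)
```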
  9. I have had the sd2iec and it is great for loading single-filers at high speed (as it can be used in conjunction with the software JiffyDOS fastloader). For multipart games, loading speed suffers (and of course most fastloaders will not work). There are a few fastloaders that are supported at simulation level on the sd2iec, which include the GIJoe loader as well as the Dreamload IRQ loader (which can run some demos flawlessly, e.g. Error 23). With the Epyx fastloader cart, things get even simpler and faster (but again, many people may assume that it will speed up loading for everything, which is not the case). I have since purchased the Turbo Chameleon 64, and even though the price tag of 250 euros may seem high, it does near enough everything, including the ability to run the Amiga, Atari 2600 and now even the Atari 800 cores.
  10. Yes. The point I made is the one you mention in the last paragraph, but nothing can be done about it. You are correct in regards to keeping benchmarks for drawing in a buffer (as there may be factors such as latency in graphics buffer areas), although something similar can also be said of general RAM.
  11. It all boils down to how the routines are written for each architecture. An algorithm written for one CPU may not run efficiently on another type of CPU, and vice versa; then again, creating different methods for each CPU would not be ideal either. For example, if a CPU cannot calculate in 16 bits, it requires multiple instructions to achieve what a CPU that can operate in 16 bits does natively (see the sketch below), and the speed depends on how that routine is put together. Certainly it would still be an approximation, and I guess nothing can be done about this. The point I am trying to make here is that a benchmark which runs on the same instruction set would be more accurate (such as x86 vs x86, ARM vs ARM, etc.).
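To make the 16-bit example concrete, this is the extra work an 8-bit CPU has to do for a single 16-bit addition, sketched in Python (a 16-bit CPU does the same in one instruction, which is why a benchmark built around 16-bit math penalises the 8-bit machine by however many extra steps this decomposition costs):

```python
def add16_on_8bit(a, b):
    """Emulate a 16-bit addition using only 8-bit operations and an
    explicit carry, the way an 8-bit CPU such as a 6502 has to."""
    lo = (a & 0xFF) + (b & 0xFF)                # add the low bytes
    carry = lo >> 8                             # carry out of the low byte
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF   # add high bytes plus carry
    return (hi << 8) | (lo & 0xFF)

assert add16_on_8bit(0x12FF, 0x0001) == 0x1300
```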
  12. This depends on how the benchmark routines are coded. Using the same instruction set and routine (on the same CPU architecture) would indicate any efficiency difference in the number of cycles each opcode takes, etc. If the benchmark routine for a different CPU architecture is written differently, however, it may not be optimal, and the benchmark results would be flawed.
  13. Benchmarking would not be ideal across different CPU architectures. It would be a different case if they were all based on the same instruction set (e.g. ARMv7, x86, etc.).
  14. May be a bit off-topic here, but I implemented LAHC and SCHC options in my image/video compression tool CSAM Super: http://csdb.dk/release/?id=127248 Ilmenit, perhaps you can incorporate SCHC hill climbing (which may work better) in RastaConverter; a sketch of LAHC for comparison follows below.
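For comparison with the SCHC sketch earlier, here is a minimal late acceptance hill climbing loop, following Burke and Bykov's description (again, `cost` and `neighbor` are placeholders for the converter's error metric and mutation step):

```python
def lahc(initial, cost, neighbor, history_length=50, max_steps=200_000):
    """Late acceptance hill climbing: a candidate is accepted if it beats
    the cost recorded history_length steps ago, or is no worse than the
    current solution."""
    current, current_cost = initial, cost(initial)
    history = [current_cost] * history_length
    best, best_cost = current, current_cost
    for step in range(max_steps):
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        slot = step % history_length
        if candidate_cost <= history[slot] or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        history[slot] = current_cost  # record for the late comparison
    return best, best_cost
```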
  15. Back on topic: the Atari clearly has its advantages as well. That lovely display list (similar to the Amiga Copper) allows solid raster splits horizontally/vertically without any of the timer trickery used on the C64. The extra processing speed (1.79 MHz vs roughly 1 MHz) and the freer range of colors (with barely any overhead in scrolling) are of course a must as well. Where the C64 excels is in its ability to have many colors in hires (320x200) without using any graphical trickery, and with software graphical tricks, even more tightly packed colors using FLI and sprite underlay/overlay modes in hires. The color palette is limited to 16 colors, but these color choices are far better than the saturated colors on the Amstrad and Spectrum. Larger sprites and the SID audio chip are also a good plus. Overall, each machine has its plus and minus points.