Several years ago, I started creating a music education video game called NoteBlaster for iOS. From the beginning, one of the things I wanted NoteBlaster to have was a very "Atari" vibe: a retro arcade graphics style, FM synthesis music, and TMS5220 speech synthesis. Once I got to the point in development where it was time to start thinking about how I was going to get TMS5220 speech, I started a thread here --> http://atariage.com/...me-ti99-speech/
I spent a great deal of time trying to use QBox Pro, but despite my best efforts, the speech I generated was less than desirable. Eventually I gave up on QBox Pro and believed it was hopeless: my game would simply not be able to have the high-quality, Atari coin-op reminiscent speech that I so badly wanted. At that moment, a friend of mine encouraged me not to give up. He looked up the Speak & Spell toy on Wikipedia and found the name of the person at Texas Instruments who was responsible for the algorithms it used... Richard Wiggins.

I figured, ok-- what have I got to lose? I picked up the phone, called various places trying to track him down, and eventually found him. I told him I was creating a music video game, and asked if he could supply me with information about how TI's portable speech lab system (which is what Atari Games actually used for their speech processing) worked to generate linear predictive coding data for TMS5220 chips. Richard agreed, and we had many phone conversations covering concepts of digital signal processing: autocorrelation, pre-emphasis filtering, Chebyshev filter poles, a-to-k conversions, pitch analysis, etc. He sent me handwritten notes with math equations and illustrations, and photocopies of pages from textbooks he had on the subject.

Finally, a good six months later, I had a Mac OS X desktop application that could analyze an 8 kHz 16-bit waveform and turn it into an LPC byte stream that could be fed to the TMS5220 chip, and it sounded as good as the speech in all of Atari's arcade games. It was incredibly exciting! Around that time, I reached out to one of the MAME core developers, Jonathan Gevaryahu (aka Lord Nightmare), and he helped fine-tune a handful of things on both the desktop analysis app side and the actual TMS5220 emulation code. I am very grateful for all of his help.
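For anyone curious what those phone-call topics look like in practice, here is a minimal Python sketch of one frame of the kind of analysis described above: pre-emphasis, autocorrelation, and a Levinson-Durbin recursion that yields the reflection ("k") coefficients the TMS5220 family quantizes against its coding tables. This is only an illustration of the general textbook technique -- the frame size, pre-emphasis constant, and order-10 model are my own illustrative assumptions, not values from TI's portable speech lab or from BlueWizard itself.

```python
import math
import random

def preemphasize(samples, alpha=0.9375):
    # First-order high-pass: y[n] = x[n] - alpha * x[n-1]
    return [samples[0]] + [samples[n] - alpha * samples[n - 1]
                           for n in range(1, len(samples))]

def autocorrelate(frame, max_lag):
    # r[lag] = sum of frame[i] * frame[i + lag]
    n = len(frame)
    return [sum(frame[i] * frame[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def levinson_durbin(r, order):
    # Turns autocorrelation values r[0..order] into reflection
    # coefficients k[1..order]; the recursion produces the k's
    # directly alongside the predictor ("a") coefficients.
    a = [0.0] * (order + 1)
    ks = []
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err
        ks.append(k)
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return ks

# Usage: analyze one 25 ms frame (200 samples at 8 kHz) of a
# synthetic test signal -- a 200 Hz tone with a little noise.
random.seed(0)
frame = [math.sin(2 * math.pi * 200 * n / 8000) + 0.05 * random.gauss(0.0, 1.0)
         for n in range(200)]
r = autocorrelate(preemphasize(frame), 10)
ks = levinson_durbin(r, 10)  # ten reflection coefficients, each |k| < 1
```

A real encoder would repeat this per frame, add pitch and energy analysis, and quantize each k against the chip's lookup tables, but the core of the "a to k conversion" conversation is all in that recursion.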
Once I had the capability to generate the high-quality speech that I desired, it was time to go back to the remaining development work in my video game. I then reached out to Ernie Fosselius, who, if you are not aware, was the voice-over actor who did all of the dialog for the Atari Gauntlet and Gauntlet II arcade games. I told him about the video game I was creating and asked if he would like to be a part of it, and he agreed to do all of the voice-over work for NoteBlaster.
BlueWizard, my Mac OS X application for processing speech files, is available on GitHub:
Edited by patrick99e99, Mon Jan 9, 2017 7:20 PM.