Everything posted by EricBall
-
iOS programming - major milestone reached
EricBall commented on EricBall's blog entry in EricBall's Tech Projects
I've loaded the alpha code on my boss's and two coworkers' phones, in addition to my son's and wife's iPods. Everyone is very impressed. (Although I think with the idea that I've created an actual iOS game rather than being impressed with the game itself.) The transition to Swift 2.3 was relatively painless (although I still don't know how to create a Swift 2.3 project in XCode 8 - but if I need to I can create one in XCode 7). Renaming the project from the development name to the actual name was less than painless (the XCode "rename" only does one step of many), but I found a step-by-step YouTube video. Hooray for the Internet.

My current challenge is putting together the screen which will list all of the levels you have created. While XCode & UIKit have the basic framework (UITableView), there's a lot of "write from scratch" instead of "tweak to suit". I've also had to write a bunch of SQLite interface code to handle the actual data. While there are some free toolkits out there, I wasn't able to get the nicest one to install correctly. I'm also leery of making my code dependent on someone else's (other than Apple's - which I can't avoid). And even the SQLite code isn't that bad with some help from a StackExchange post. -
Or an optional sun visor
EricBall commented on Nathan Strum's blog entry in (Insert stupid Blog name here)
I think the answer to both questions will be yes (for some measures). The real comparison between the Switch and the Lynx will be which will have more titles and which will have the longer battery life. I suspect the Switch will win the battery life comparison only because of the difference in battery technologies. The number of titles is more iffy. Not including downloads, the Wii U has only 122 retail games (as of Oct 23, 2016 as per Wikipedia), one less than the Lynx. -
My iOS game has reached a major milestone - I've linked the level creator to the play level, so it's now possible to create a level and play it. In theory a lot of the remaining coding should be relatively standard and therefore easier. Hopefully I can get my son and a few other friends / coworkers to use it to start creating levels. First I need to update it to Swift 2.3 so I can load it onto iOS 10 devices. (This is one part of the process I dislike; there's a lot of forced obsolescence in iOS development. In order to test on the current iOS you need the current version of Xcode, which needs the current version of macOS, and you need to use the current version of Swift.)
-
One step forward, two steps back
EricBall commented on Nathan Strum's blog entry in (Insert stupid Blog name here)
I have to say my personal interest in getting a new dedicated game system is very low. I simply have so many gaming options:
various Android / iOS devices
27" iMac (including Steam)
PS3 (resume my Skyrim addiction)
Minecraft (either on the iMac or the PS3)
3DS & DS w/ flash cart
WiiU
GameCube
N64 (assuming I can fix / find a working one)
older retro console options via emulation
There are simply so many low cost / no cost entertainment options that I can't justify spending the $$. Even buying a new game for the WiiU is pretty low on the list. -
Drat, drat, drat. I remembered (and verified) in order to submit to the App Store, the app must be programmed with Swift 2.3 or 3. So I guess I'll need to figure out how to get everything upgraded without breaking it too much.
-
For my iOS programming project I've been using sneakernet to try to keep the iMac and MacBook project directories synchronized, with limited success. While XCode has support for Git and Subversion (and will create a local Git repository for tracking local changes), the instructions on how to do the initial setup are lacking. The Apple documentation starts with "Check Out the project from your repository" and is mum on how to import an existing project into a repository. Well, with some help from http://stackoverflow.com/questions/29599023/how-to-setup-svn-repository-on-xcode-6 I now have Subversion running on my iMac and my current projects imported. So now if I want to work on the road, I do an Update to the MacBook project before I leave, and a Commit when I get home.
-
Stella at 20, at 20
EricBall commented on Nathan Strum's blog entry in (Insert stupid Blog name here)
Hoo-Rah! A very logical solution to the problem. Now I just need to find 14 hours to watch it. Would there be any value in an audio-only version? Then I could listen to it during my daily commute. (Did that for the LotR commentary tracks.) -
A Nielsen report (picked up by USA Today, then by Slashdot) says, on average, people only watch 20 channels of the 200 they are subscribed to. Is this news to anyone with a normal cable or satellite package? It's called bundling. In order to get those 20 channels, they have to subscribe to the 200 channel bundle.**

In my case, my bundle includes over 100 English language HD channels. (Included in that count are some semi-duplicate channels, like "regional" versions of the two major sports channels which mostly show the same programming, and major networks in different cities / time zones.) The bundle also includes a large number of SD-only channels, non-English language channels, shopping channels, and audio-only radio / music channels, not to mention the video on demand and pay-per-view channels. And I don't even subscribe to the top tier bundle!

Even only considering those 100 channels, there's plenty of content to watch and PVR. But I'll let you in on a secret. I don't watch channels - I watch programs. Does it matter to me that Forged in Fire is shown on History (Canada) versus one of the other 100+ channels? Nope. In fact, I had to look that up because I didn't know which channel it was on.

** Yes, have to. In order to watch some programming (e.g. NASCAR for my wife), we need to subscribe to channels which are only available in a bundle. They aren't available à la carte, nor online. The upshot is we now have a plethora of content available to us at zero incremental cost. OTOH, given what we have available it's tough to justify any additional incremental cost for "nice to have" programs (e.g. GoT or the streaming service series).
-
iOS programming - battling with UICollectionView
EricBall commented on EricBall's blog entry in EricBall's Tech Projects
I managed to get this working (I think). I started to look at some sample code for dynamically sized cells, but ran into more knowledge issues. (Although the code did have some interesting constructs.) But then I found some sample code which used constraints inside the cell and I had a light bulb moment. If I included layout constraints between the cell and the imageview inside the cell, then the imageview would be automatically resized when I resized the cell. Bingo! -
iOS programming - battling with UICollectionView
EricBall posted a blog entry in EricBall's Tech Projects
In my game I want a level builder. Users will be able to create their own levels and send them to me for inclusion in a future release. The UI is fairly simple - a level grid in the top of the screen and the level tiles in the bottom of the screen. Select a tile, then where you want it in the grid. The tiles are in a scrolling view like a photo gallery. The iOS UIKit SDK even provides an out-of-the-box solution - the UICollectionView (+UICollectionViewFlowLayout). It's supposed to be easy; not quite drag & drop, but fairly close.

The problem is the size of the tiles. An iPhone 4s is 640x960 pixels (320x480 "points") while an iPad is 768x1024 or 1536x2048 pixels (both 768x1024 "points"). So if I make the tiles 64x64 "points" the tiles end up much smaller relative to the level grid (which is resized to the screen size). There's no option in Interface Builder to make the size of the cell (which stores the tile) relative to the screen size. And 'cause I'm just learning, I don't 100% understand what I'm doing. So it's a lot of trial and error.

Google searches have given me some info, and I think I can set the cell size for the layout, but I haven't found anything which says how to properly resize the cell contents (and my attempts so far have failed). I'm thinking there are two probable causes - first is execution order. The collection view does some buffering and preloading, and I think that may be occurring before I'm calculating the size of the cells. Second is Interface Builder creates some implicit code in my application which might be blocking my efforts. My current focus is on the execution order to see how early I can determine the size of the cells so I can use that value elsewhere in the code. Or maybe I need to find some sample code for fully dynamically sized collection views and reuse that. -
While going with Swift 2.3 was okay for my existing projects, I couldn't easily create a new Swift 2.3 project. Fortunately, you can download old versions of XCode and rename them as part of the install. So now I can keep developing using Swift 2.2 for iOS 9 without issues. It also appears the big downloads last month were XCode updating itself on my MacBook and iMac.
-
Last night I told myself to stop playing Minecraft and get back to work on my iOS game. At least start to put together the level editor. Once I have something partially working I'm much more likely to spend time working on it. But XCode had other ideas, as it had updated. So when I opened the project, it asked me (twice) whether I wanted to convert it to the current version of Swift. I said no, but then it said it wouldn't be able to compile it. That wasn't a good option either. So I made a backup of the project and told it to convert it to Swift 3.

For the most part the change from Swift 2.2 to Swift 3 is cosmetic - more consistent method naming (like fixing stdio.h so fputs parameters are stream then string order). But I found one Catch-22. In my game I'm using an OpenGL fragment (pixel) shader. To pass a variable from the main program to the shader it's put in a "uniform" - a special kind of variable. Uniforms have special types to match OpenGL types, vectors in particular. The commands to create these vector uniforms are different in Swift 3. That wouldn't be a problem except the commands aren't supported by iOS 9 for some reason. So while I'd like to use Swift 3 in case I need to make code changes in the future, I also want to support iOS 9. So for the moment I've gone with Swift 2.3, which XCode will compile. And I still haven't started coding the level editor.
-
Over the years my home internet access (through my cableco) has steadily improved. However, it's always had a usage cap. Not a hard cap where it stops working, or even a soft cap where the bandwidth gets downgraded. Nope, instead I get a usage charge if I exceed it - which I've done occasionally. The "fix" is to increase my service. However, I've also configured my Netgear router to kick out a warning page when I've exceeded 100GB to give me some warning (the cableco will as well, but theirs only kicks in after I've exceeded it and has a 24-48 hour lag). This month that warning popped up although I hadn't done anything particularly bandwidth intensive (like pulling down the MAME torrent...) Checking the cableco's website, I could see there were several days where I had used over 10GB. Hmm... what could be downloading that much data?

In a perfect world, my router would have a nice little report breaking down usage by device and destination. There's no technical reason it couldn't as part of the NAT functionality. (Although it might require some external storage.) Heck, the router is even a DNS proxy, so the report could even map the IP address of the destination to a FQDN. But this isn't a perfect world - far from it. Other than overall usage, the router doesn't track anything except the list of currently connected devices. Nor does this function appear to be part of other routers, even the open source ones.

Bleargh - so how to do this? Hmm - could I sniff the WiFi network and get the info that way? The short answer is yes. Long answer:
1. My first attempt was to use Wireshark on Windows. But that only captured the traffic for that computer. What I needed was to put the adapter in "monitor mode".
2. My second attempt was to use Acrylic WiFi Professional. This worked; I could at least see the number of packets being sent by each device. But I couldn't get a count of the number (and size) of packets received by each device. It also seemed to crash while running unattended for several hours.
3. I then tried to use the drivers from Acrylic with Wireshark (which is supposed to be possible according to Acrylic). But I couldn't get it working or find out how to configure the driver to only listen on the correct channel.
4. So I downloaded a LiveCD/USB image of Kali Linux and tried to use aircrack-ng without success. I'm not sure whether the Linux drivers don't support monitor mode or if there was another issue.
5. Finally I loaded up Wireshark on my wife's old MacBook. Ding, ding, ding, we have a winner! Monitor mode without complaint. TShark to capture straight to disk. About the only quirk was it uses the channel of the connected SSID, but I needed to disable my 5GHz network anyway.

So with this I can easily track usage per device. In theory I should also be able to decrypt the packets (as I know the WPA2 passphrase) too, assuming I sniffed the WPA handshake. Of course, it looks like Internet usage is back to normal.
-
Ah, as per "Rebecca Heineman on 3DO DOOM", she got the Jag port source from id (along with the DOS source) and used it because it had already been stripped down to fit in the Jag's 2MB of RAM - making the port easier. That makes more sense. The reuse isn't due to the code but the assets.
-
A most entertaining podcast (along with the related Wolf3d podcast). I like how you go beyond a simple review of the game and delve into the development history and relationships & comparisons with other ports. (It was also interesting to learn how many console ports were actually ports of previous console ports. I wouldn't have thought there was enough similarity between the various consoles to make that strategy better than going back to the original source.) Personally I only played DOOM (and Wolf3d) on the PC, although I had (at least for DOOM) a Gravis Ultrasound for superior music and sound effects. (IMHO the main failing of the GUS was it didn't provide a hardware compatible Sound Blaster DAC port when so many games were using it.) DOOM was my last FPS until GoldenEye 64 as my PC couldn't keep up and I got off the upgrade treadmill.
-
Hell yeah, DVD is still popular:
1. both players and content are much cheaper than Blu-Ray (even without considering DVD-R copies)
2. laptops & portable players typically handle DVDs
3. people and libraries (and adult video stores) have lots of DVDs; there's also a big used market
4. while there is a quality difference, unlike the VHS to DVD transition it isn't enough to make you want to upgrade
-
The game part of my iOS app is 90% done. (Which all programmers know means there's still 90% left to do.) But the big challenges have been conquered - the touch & tilt controls and physics work. There are still some to-dos to load a level, do a reset, handle pause etc. but they will wait until I get the level builder working.

However, before I did that I wanted to take a look at performance. One of the cool things with iOS development is you can easily test out the app on an actual device. Plug in an iPhone, iPad or iPod Touch and click run. It even links into the debugger. The thing which concerned me was my tutorial level was running at only 40fps on my 5th gen iPod Touch. This is about the slowest iOS 9 device, so if I can keep the framerate up on it I'd probably be good on anything else. And while 40fps isn't bad, the tutorial level is fairly simple, so it's possible some complex level might be unplayable.

Unfortunately the GPU analysis wasn't giving me much of a hint. Both the Tiler and Renderer were at 75% and CPU was less than 20% - so no absolute bottlenecks. I could walk through the OpenGL call stack for a frame, which was interesting but not very useful. So I started changing the code to see what made a difference. Turning off a couple of recent additions didn't make a difference. Nor did removing some logic from the update loop or removing my shader. Moving the textures into a texture atlas reduced the number of draw calls and provided a minor increase, but not much. But the texture atlas also seemed to weird out the dynamic lighting. Hmm... comment out the dynamic light - 60fps. Ahh-ha! And it turns out texture atlases don't work with normal maps for some reason - which was causing the visual problems. So I guess I'll make that an optional setting.

It also got me to thinking. I was also using the normal map textures to generate the physics bodies via the alpha channel. But the process was less than perfect. So if I decided to discard the dynamic lighting entirely, I'd need to generate the physics bodies manually - which would remove the imperfections as well.
-
death of the 3.5mm 15mm TRRS socket?
EricBall commented on EricBall's blog entry in EricBall's Tech Projects
Unlike many other smartphones (like my HTC-601) iPhones don't receive FM radio (https://9to5mac.com/2015/04/20/fm-radio-iphone/) - so that's not a big problem for Apple. And while it's a nice feature, it's not a must-have for most people. OTOH, the bluetooth speaker I just bought has an FM tuner and it uses the USB cable for the antenna. So maybe if Apple really wanted to they could use the lightning cable as an FM antenna. -
Reading https://www.buzzfeed.com/johnpaczkowski/inside-iphone-7-why-apple-killed-the-headphone-jack gave me an idea. What if you combined the Apple W1 wireless chip with something like a Chu Moy pocket amplifier? So you plug your $tupidly expen$ive headphones into the pocket amp, which is wirelessly connected to your phone. On the actual removal of the headphone jack, I think Apple is ahead of the curve. Bluetooth based headphones are becoming more common and there are definite advantages to not having a cable between your head and where you stuff your upgraded Walkman. I suspect not having a headphone jack will be common in the next generation of smartphones.
-
Although cellphones are common, even ubiquitous, "everyone" does not have a cellphone. Cellphones can also be damaged, lost, stolen, or be out of range. Heck, some people don't even have a home phone. Unfortunately, due to cellphones, the usage of public telephones has dropped far below the point where they are profitable. So instead those without need to find other ways when they need to make a call.
-
Mucking around with fragment / pixel shaders
EricBall posted a blog entry in EricBall's Tech Projects
In my game there's a ball which rolls around the playfield. While SpriteKit is very cool with baked-in physics and lighting, it's a 2D engine. So although it will rotate the 2D texture of the ball around the Z axis, that doesn't really convey the look of the ball rolling forward. My original idea was to go with a flat shaded sphere and let the normal map lighting give the ball a more 3D look. Unfortunately the result was less than satisfying. But then I had an idea. What if the ball was a window into a larger texture? Moving the window would then suggest the rolling motion.

While doing some investigation into how to implement this idea, I stumbled into the idea of using a custom shader. In SpriteKit you can assign an OpenGL ES fragment (aka pixel) shader to a sprite, which then renders the sprite. These shaders are written in a C style language which is then compiled at runtime into instructions executed by the GPU. There's all kinds of special instructions for dealing with vectors (e.g. RGBA and XYZ) & matrices (e.g. rotation). So I bashed at it this weekend (ghods it's good to code sometimes) and managed to get it working! The shader does 3 things:
1. Return transparent for pixels outside of the ball
2. Move the center of the ball around the base texture (the center is tracked by the main program)
3. Warp the edges of the texture to "wrap" it around the ball

The one problem with using a custom shader is you lose the built-in normal map lighting. I'd like to add this in as well, but my first attempts didn't work out. So I've put that on the to-do later list as the current result is good enough for now. Also, for some reason SpriteKit does the rotation step after the custom shader. It's one less thing to handle in my shader, although it probably causes some quality loss. (But I will need to account for it in the lighting step.)
-
I have this idea for a game, which I'm trying to program for iOS (using Swift). As this is a considerable step beyond my typical C & ASM skill set, it's been slow going. Swift itself is a post-C object oriented / procedural language so isn't that difficult to understand, and I can certainly appreciate _not_ having to learn Objective C. No, the problem is the frameworks. Swift on its own can do very little. It needs libraries like UIKit and SpriteKit. So unlike C, where you can do a lot with just a small portion of the standard library functions, here I need to learn how to work within these complex frameworks (which seem to assume implicit knowledge) to accomplish anything.

But on the plus side - the frameworks have a lot of features built in. SpriteKit has a whole 2D physics engine baked in - so I don't need to worry about collision detection or bouncing. It will even automatically generate the collision models based on the alpha channel in an image. I recently learned SpriteKit also has dynamic lighting - including "normal maps". My 2D sprites are actually 3D objects, so I'm hoping the normal maps will make them look more 3D.

So I whipped up some C programs to generate the normal map texture and the alpha channel for the collision models. Dropped them all into a test app and started to play around. The physics portion worked great, but the dynamic lighting was strange, as only part of the textures were responding to the dynamic lighting. In the middle of the night I realized the problem - the dynamic lighting was picking up the alpha channel part of the image which I'd used for the collision model. I'd used 255 for solid parts and 0 for empty parts. So the empty parts weren't being lit. (The alpha channel on the light color works in the same way as a general dimming factor.) In theory this could be used to bake shadows into the normal map, but for me it wasn't what I wanted. The solution is to redo the alpha channel to use 255 & 253 and use 254 as the decision point.
* I have no illusions about making millions from this game. In fact, my plan is to make the game free (gratis) to download & play. No in-app purchases, no advertising, no information gathering. Just my gift to the world.

** For iOS development you just need a Mac running the current version of macOS. The rest of the development tools can be downloaded from Apple for free. There's even a simulator if you don't have an iOS device to test with. To actually publish an app you'll need to sign up with Apple as a developer (<$100) and you need a domain name & website (beware of teaser offers).
-
I think you overestimate what a five-year-old is capable of creating. My fifteen-year-old has difficulty with programming anything although I've been trying to teach him for years. I have to admit, not doing something you don't enjoy and don't need to do is often the sensible plan. Perhaps taking a break is what you need. Maybe you'll find a different creative passion to fill your time with. Or maybe that desire to create homebrew games will hit you with enough force to push you through the challenges. If that does happen, one recommendation I have is to focus on one programming language / console - preferably the one which you have found "easiest" to develop for. My programming knowledge went through several phases. I started with Applesoft BASIC and from there different dialects of BASIC on different computers. I later learned Borland Turbo Pascal - similar to BASIC but with structure instead of GOTO/GOSUB. C came next, although I wouldn't really understand C until after I learned assembly. After a brief attempt to learn 6502, I discovered the power of assembly with the 6809 (TRS-80 Color Computer). Then I went back to the 6502 but was stymied by the lack of 16 bit index registers. 8086 was next with the rise of PCs. I only got back to the 6502 with the 2600. Now I'm trying to learn Swift & iOS programming. It's slow going and frustrating 'cause I have an idea in my head for a game but I don't have anywhere near enough knowledge to start actually creating it.
-
Q1 & 2 - Building the display lists is one of the great challenges with the 7800. As usual, some tradeoffs have to be made between space (RAM) and speed (CPU cycles).

Q1 - Yes, 6 bytes per display list is enough for a single 4 byte display list entry and the 2 byte end of list entry.

Q2 - While not rebuilding the entire display list sounds more efficient, it is difficult to accomplish in practice. First, as sprites move vertically they need to be removed from one display list and added to another - which means the remaining entries in the display list would need to be rewritten anyway. (Unless you pad the display list with transparent sprites, which will rob your game of precious CPU cycles due to the additional MARIA DMA.) Second, the extra time and space required to determine whether the sprite needs to be updated and finding it in the list can easily negate the savings of not simply rebuilding the display list.

Q3 - Yes, you can certainly use any space not used for sprites for code or other data - just don't let it spill onto the next page. However, I'd start with the assumption you're going to use the entire 2K / 4K block for sprite graphics until you run out of space on the holey DMA pages.
-
How much time, money & effort are you willing to spend? The first step would be to somehow divide up the display into vector sequences. Both the Vectrex and Atari games after Asteroids (Deluxe) use analog vector generators. So each vector sequence would start with a "return to center" which could be used as a starting point. (The vector sequence is then drawn as a series of slope + speed over time at intensity or off.) The vector sequence would need to be slowed down for the laser projector, tweaked to account for momentum and the output buffered. Then you'd need a bunch of laser projectors. You'd want to make them as physically small as possible so they can be placed close together rather than trying to create some kind of "multi angle beam combiner". As long as the distance between the outputs is small relative to the final display the parallax shouldn't be too noticeable. However, to make any significant difference you'd need a lot of laser projectors (tens at least). And hopefully each vector sequence could be drawn quickly enough to match the game's internal refresh rate.
