Kaide


Posts posted by Kaide


  1. Ah, yeah. That’s foam padding, not a sponge. :)

     

    On one hand, foam padding like that shouldn’t be conductive. However, I’ve never seen it used on top of components and contacts like this before, and I’d even say it generally shouldn’t be used this way. The last thing you want is something getting trapped in the foam and creating a conductive path for a short.

     

    EDIT: I should have read more first. FirebrandX does point out that the foam in the repair is conductive, to ground the board to the chassis. That part would make sense (even if it’s not an ideal way to do it). So why was the original placement over an IC that likely took Vcc and ground? Or was there no grounding at all before?


  2. 7 minutes ago, adamchevy said:

    I get your sarcasm but I am surprised nonetheless given the fact that this is FPGA and not a pc running an emulator. Where it loads the binary data from shouldn’t be impacted like this I would think.  

    I wonder if the Mister has this problem on occasion. I wouldn’t be surprised to find out that it does the same thing.

     

    For early systems like the NES, it’s simpler to load the whole thing into the FPGA’s memory and access it directly. So yes, the SD card shouldn’t even be a factor once things get going, since the ROM file has already been fully loaded. Emulators would do the same, and the MiSTer uses a larger developer FPGA board, which should have even more memory onboard.

     

    That said, NES games did have slowdown issues on real hardware, including some Nintendo titles like Kirby’s Adventure. Bubble Bobble is another notorious one; I was a fan of it, but find it hard to play today compared to the arcade ROM. Since I spend most of my time in the SNES catalog, I don’t really remember whether SMB3 was one of those titles with slowdown issues or not. It would definitely be worth comparing to real carts, though.

     

     


  3. 1 hour ago, XtraSmiley said:

    Yeah, but if he can't tell because he doesn't have the ability to see the difference between OLED and LCD why listen to his opinion on it? It's a measurement that is easily done with equipment and many rating sites do it, OLED beats LCD in almost all categories. If YOU can't tell, then YES, save the money on an LCD.

     

    It’s clear you didn’t really read what I was saying, or misunderstood the points I was trying to make.

     

    The core point is that there’s nothing specific about retro gaming that is any different from other uses of a modern TV. So there’s not a whole lot of additional texture someone on this forum can add that you can’t find elsewhere. About the only thing that might matter is compatibility with non-standard refresh rates when using an OSSC or something like that. Sadly, nobody really reports on that, so it’s a bit of a lottery; it wholly depends on the controller used and has nothing to do with OLED vs LCD. If you use an AV receiver or processor, that’s another link in the chain that can break OSSC/etc. compatibility, making it even more annoying.

     

    Note that I mentioned an LG-based OLED will have better viewing angles and hands-down better contrast, and I even commented on the faster response times of the panel itself in my comments on motion clarity. And in terms of motion clarity, I think it’s a trade-off (and an annoying one to make). I even mentioned that for things running at 60fps, the OLED pulls ahead IMO. But honestly, sample and hold has been a huge step back from CRTs that we still haven’t solved; we just apply bandaids to it. As for image retention and burn-in, I intentionally used the phrase “image retention”. My set is 3-4 years old and it gets retention; I haven’t had any burn-in. But since LCD-based tech like QLED can suffer DSE (as can OLED), with a similar effect on the picture, it’s honestly a wash.

     

    The problem with pure measurements in this case is that people tend to compare based on them without context, or wind up ignoring the context of those measurements and making mountains out of molehills.

     

    Color accuracy on average is generally good enough, with a few exceptions, and calibration will tend to drag dE low enough that it makes no visible difference. The Samsung Q80 and the LG CX both measured at a dE of under 2 out of the box. That’s ideal. Assuming you aren’t running around in a red-pushed mode or some other nonsense, that’s a great place to be. Color gamut/volume is another issue, but so far the two have generally kept pace with each other closely enough that I wouldn’t go chasing it in a buying decision (and Rtings reports better color volume on the Samsung Q80 QLED set, BTW). So yes, different displays will measure differently, but keep in mind that samples of the same model will vary here, so there are margins of error to account for when comparing.

     

    There used to be some places that measured motion resolution, but that’s gone away in favor of the easier-to-measure response time as sites try to cover more TV models with fewer resources. It’s an important factor, but it’s not the whole picture. Much like you need to model the human ear to get a better idea of how headphones and speakers work, you need to model the human optic system to get a better idea of how different displays actually present motion to a person, rather than to a camera. To be blunt, motion handling is the area where TV reviews are honestly terrible.

     

    One of the reasons OLED has such a “wow” factor is contrast. Contrast is one of the things the human eye is best at picking up, and OLED is a clear leader there (but so were plasma and CRT, and look what happened to those). So long as the processor isn’t introducing black crush (a problem LG had a while back in early OLED TVs), shadow detail can’t be beat on an OLED. That said, retro games with small palettes aren’t really impacted by that, are they? :)


  4. 37 minutes ago, Mattelot said:

    Have you used it on a OLED and non-OLED?  What do you think of the OLED's quality?

     

    I have a "man cave" that I've been building and I'm going to be moving my Analogue consoles from my livingroom into the cave.  I'll be buying a TV for the room and I'm debating between OLED and another QLED.  Cost isn't an issue.  I use a QLED in my livingroom, so I know what it looks like but if retro games look more impressive on OLED, I may teeter in that direction.

     

    Quality in terms of what? Color? Contrast? Viewing angles? Motion? Image retention? Really, there’s nothing about the different panel technologies that, to me, says someone should go OLED vs LCD specifically for retro gaming.

     

    Color accuracy is close enough that both are equally good (assuming we are talking about Samsung QLED/MVA panels and LG OLED panels). Contrast and viewing angles go to OLED. The MVA panels Samsung is fond of using in TVs bloom when using zone dimming, and can still leak considerable light without zone dimming, leading to elevated blacks. That’s mostly an issue in darker rooms. MVA panels also aren’t great at wide viewing angles, while OLED is more in line with IPS LCD panels.

     

    Image retention goes to QLED or any other LCD-based tech. It doesn’t bother me, since my use keeps it minor enough that I only notice it with single-color backgrounds (similar to dirty screen effect), but it is something I’ve had to deal with using OLED.

     

    Motion depends a lot on the controller driving the panel, although the tech can affect what you can do. QLED/LCD and OLED are both sample-and-hold tech, which messes with how your brain interprets what it sees, causing visual artifacts. My old Sony 1080p LCD had a feature to strobe the LED backlight at 480Hz. This was great for film and other stuff in the 24-30fps range, since it got incredibly close to the look of a CRT when it came to motion clarity. LCD/QLED needs time to transition between frames, which tends to “smear” the frames and blur them. Strobing the backlight helps “reset” the brain’s visual processing between frames, much like projectors and CRTs do. OLED has no backlight, so you can’t do this strobing trick the same way. Sony recently started offering a sort of “rolling blackout” on their OLED panels at 120Hz, which helps, but because of the pandemic I haven’t seen one in person yet with the feature enabled. I’m watching this particular addition closely.

     

    With both OLED and LCD/QLED, the effects of sample and hold can be lessened by higher frame rates. So 60fps gaming will look fine on both. I’d give the edge slightly to OLED, since you get slightly crisper motion thanks to the faster response times, but I honestly kinda prefer the “blur” of 24-30fps LCD to the “double image” I get from 24-30fps OLED. So newer consoles with games running at 30fps don’t look as good in motion, to me at least, on OLED as they do on a good LCD-based panel with a strobing backlight. My ranking for TVs in motion clarity tends to be: LCD w/strobing backlight > OLED > LCD w/o strobing backlight.

     

    When it comes to my next TV, it’s going to be about trying to find a “good enough” balance between contrast and motion clarity for me.  It might mean going back to a full-array backlit LCD of some kind.


  5. 7 minutes ago, Mattelot said:

    Exactly.  

     

    Has anyone tried using these FPGA consoles on a OLED TV?  I've been curious about that and don't find many videos on it.

    I guess my question is: what specifically do you want to know? I’ve been using an OLED TV for retro consoles since before the Super NT came out.


  6. 1 hour ago, Pixelboy said:

    And then there's the question of NES mappers, which is a can of worms all by itself. Still, if just the most-used mappers could be supported, I suppose it would be enough for most NES homebrewers. Some could even go out of their way to release Pocket-cart versions of existing commercial NES games, but that could be a difficult technical endeavor depending on the game, and also be risky legally if done without proper licensing, although licensing deals are never impossible.

     

    Anyway, if this could be made to work as intended, then the community could be more easily split between those who make FPGA cores for the Pocket, and those who just want to make games for the handheld: If an FPGA core designer makes a core for a specific Pocket-cart PCB, then homebrewers just craft their games around that PCB's given architecture and features. Then you don't need to be a jack-of-all-trades (knowing both FPGA core programming and game programming) to make cool stuff on the Analogue Pocket, people with specific expertise can just get together and make things happen.  :) 

     

    Mappers are transparent to the cartridge bus. Even with bank switching, the game just writes to a couple of specific address locations to tell the mapping chip to update which banks are accessible. So you really just need to figure out the cartridge bus, and you should be in good shape.
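
    To illustrate what those writes look like, here’s a minimal Python sketch of an UNROM-style mapper (NES mapper 2). The structure and names are illustrative, not from any particular core:

```python
# Minimal sketch of an UNROM-style NES mapper (mapper 2).
# Writes into ROM space ($8000-$FFFF) never reach the ROM; the mapper
# latches the value on the data bus and uses it to select a PRG bank.

PRG_BANK_SIZE = 16 * 1024

class UnromCart:
    def __init__(self, prg_rom: bytes):
        self.prg = prg_rom
        self.bank = 0                                   # switchable bank at $8000-$BFFF
        self.last = len(prg_rom) // PRG_BANK_SIZE - 1   # fixed bank at $C000-$FFFF

    def cpu_write(self, addr: int, value: int) -> None:
        if 0x8000 <= addr <= 0xFFFF:
            self.bank = value % (self.last + 1)         # bank-select register

    def cpu_read(self, addr: int) -> int:
        if 0x8000 <= addr <= 0xBFFF:
            return self.prg[self.bank * PRG_BANK_SIZE + (addr - 0x8000)]
        if 0xC000 <= addr <= 0xFFFF:
            return self.prg[self.last * PRG_BANK_SIZE + (addr - 0xC000)]
        return 0
```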

     

    I see what you are getting at with the idea, but it has some trade-offs that don’t necessarily make a lot of sense to me, unless you are thinking of the Pocket itself as the target architecture, versus, say, a game meant for a retro console running on the Pocket.

     

    That said, if we are talking about retro consoles on the Pocket, then homebrew as ROM files makes more sense to me, either via a specialized flash cart with the “adapter” embedded into it, or through direct support of the SD card (like it apparently has for GB Studio). Physical carts for games are nice collectibles, but personally I’d tend to prefer those be in the format of the original system so they’re compatible with original hardware.


  7. 11 minutes ago, Pixelboy said:

    1) Could the Pocket be somehow configured to automatically switch to the proper FPGA core if I switch cartridges of different types? For example, playing a ColecoVision cart right after a regular GB cart? Could the FPGA core be integrated into the cart itself and installed automatically at boot?

     

    2) Could an NES/Famicom game (designed for carts with 72/60 pins) be shoehorned into a 32-pin setup? Given that Turbografx-16 HuCards have more than 32 pins and Analogue actually created an adaptor for such HuCards, perhaps it's possible to do the same with NES games. Or maybe not, I dunno.

     

    1) Including the core in the cart itself creates more problems than it solves, IMO. The cart adapters do seem to get detected by the Mega SG to kick it into the appropriate modes, so it seems possible to me, but I’m not certain how it’s implemented.

     

    2) Possible, but you’d need a bridge to accomplish it. One reason there are so many pins is that you’ve got two address buses and two data buses. So your adapter needs to effectively change how data is passed along the 32-pin connector, and then present the expected buses to the cartridge. There are a few ways to do it, but my naive concern would be getting the timings right.


  8. 7 hours ago, blzmarcel said:

    @Kaide That may be true about MAME, but what about this (as I mentioned in one of my previous posts) ? I have cloned the repo to my hard drive and searched through it and couldn't find any sort of list of games or checksums/hashes/etc. I even searched the binary release in a hex editor and couldn't find anything there either. Over at that friend's place where he has it set up, we threw every home brew, prototype, and demo we could find at it and it just ran them, so it seems the author of that core found a more abstract/generalized approach, which leads me to believe that there is some way of determining what a game needs (what mappers, etc) from the main ROM data itself, which might be worth more study.

    You are asking the wrong person to explain someone else’s design, TBH. That said, the behavior is there, on the line starting with: "CONSTANT MAPS : arr_jmap := ("

     

    Best I can figure from a closed issue on GitHub, and from the VHD files (it’s been nearly two decades since I last used VHDL, mind you), is that it defaults to mapper 0 unless the CRC matches one of the CRCs in this array. If it matches one of those CRCs, it uses the mapper index specified.
     

    This trick only works if the dump is a known one. If it isn’t known, it assumes mapper 0. Since the vast majority of Intellivision games use mapper 0, based on the spreadsheet the author references, this effectively works. Still not ideal from an engineering perspective, but “good enough” so long as you aren’t dumping your own ROMs.
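
    In Python terms, the lookup amounts to something like this (the real thing is in VHDL, and these CRC values are placeholders, not the actual table entries):

```python
# Sketch of the CRC-keyed mapper lookup described above. Placeholder CRCs;
# the real table lives in the core's VHDL as "CONSTANT MAPS : arr_jmap".
import zlib

KNOWN_MAPPERS = {
    0x12345678: 1,   # placeholder: CRC32 of a known dump -> mapper index
    0x9ABCDEF0: 3,
}

def pick_mapper(rom: bytes) -> int:
    crc = zlib.crc32(rom) & 0xFFFFFFFF
    # Unknown dumps fall through to mapper 0, which covers most of the library.
    return KNOWN_MAPPERS.get(crc, 0)
```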


  9. 1 hour ago, blzmarcel said:

    Some good points, though my point is that there was some kind of header early to help emulators do the right thing, but afaict nothing of the sort existed at all for Intellivision ROMs at all until recently.

    Which would be part of the reason for the intv2 format. 
     

    MAME has a similar problem. ROM sets are matched to the version of the emulator, since the configuration information lives in MAME, not the ROM. So certain versions of MAME only work with specific dumps of the ROMs, named a certain way. MAME even goes so far as to include the CRCs of working ROMs, to catch dumps from older ROM sets quickly and reject them instead of trying to load them. For Intellivision emulators that don’t use .cfg files, I would suspect a similar arrangement where the file name of the ROM gets looked up in a table for the appropriate configuration, since the configuration is likely simpler than what MAME has to deal with.
     

     


  10. 1 hour ago, blzmarcel said:

    I'm all for adding extra information similar to iNES headers, and agree that it's nicer to have everything together in one file, though that extra information still needs to be read upon loading, so in the end it really shouldn't make much of a different where that needed info is read from (from a header/footer, or accompanying file.)

    A footer is worse than either of the other options, since you have to search through the file for the footer, then make sense of the raw data sitting in front of it. For a footer to be "efficient", you need to be able to stream the whole file into memory in one go, then read the footer, and still be in good shape.

     

    The NT Mini's ROM format here allows the memory map to be read/configured as the file is being streamed into memory, which has advantages when you don't have a lot of memory to work with, since you don't have to keep a header in memory as a reference while reading the file. So this approach is a little more memory-efficient on loading than even a header.
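
    The general pattern looks something like this; a Python sketch of the idea with a made-up record layout, not the NT Mini's actual on-disk format:

```python
# Sketch of a streaming ROM loader: each record carries its own load address,
# so data can be placed as it is read, with almost no state kept in RAM.
# The 6-byte record header here (16-bit address + 32-bit length) is invented.
import struct

def load_streaming_rom(f, write_mem):
    while True:
        header = f.read(6)
        if len(header) < 6:
            break                                  # end of file
        addr, length = struct.unpack('<HI', header)
        write_mem(addr, f.read(length))            # place the chunk immediately
```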

     

    Early GPS devices used tricks like this (such as writing the R-tree that sorts POIs into regions into the file format itself) to reduce how much memory was used, and to improve the speed of searching large POI sets from slow flash memory by letting the GPS "skim" the file and keep very little state in RAM.

     

    1 hour ago, blzmarcel said:

    I did a test with the int2intv tool, using Frog Bog, and found that it is adding to both the beginning and the end, and not just the end of the file:

    I was talking about a specific format proposed for Intellivision (.ROM) that would have appended the metadata in a footer. I thought I clarified in my post that that's what I was talking about here.

     

    1 hour ago, blzmarcel said:

    Is it really known which order actually matches the original ROM? Even if it was read directly from the chips, from what I know, that should be the same data that would be read through the edge connector (by an Intellivision or a proper dumping tool.)

    It depends a lot on how the ROM chips were set up on the bus. But I'd argue that "matching the original ROM chips" isn't even a good metric, as it gets too far into the weeds. A good archival format for cartridge data is one that is easy to work with, and that describes the cartridge accurately enough to reproduce it.

     

    Interleaved ROM chips (i.e. two ROM chips with 8-bit data buses providing the high and low bytes of a 16-bit data bus) make things more complicated than they need to be. For the sake of emulation, or even hardware reproduction these days, it's easier to pre-interleave the data, for example. If I wanted to create carts from scratch, it'd likely be easier to get a 16-bit EEPROM anyways. And if I did need to recreate interleaved EEPROMs, it's a simple task to do that from a non-interleaved dump, so long as the format is well documented and consistent.
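
    Splitting a combined dump back out for two chips (or merging chip dumps into one file) is only a few lines either way. A Python sketch, assuming the combined file stores each 16-bit word MSByte-first; swap the slices for the opposite convention:

```python
# Split a 16-bit-word image into the two 8-bit chip images, and merge back.
# Assumes MSByte-first word order in the combined file.

def split_for_chips(rom: bytes):
    high = rom[0::2]   # MSBytes -> one chip
    low = rom[1::2]    # LSBytes -> the other chip
    return high, low

def merge_from_chips(high: bytes, low: bytes) -> bytes:
    out = bytearray(2 * len(high))
    out[0::2] = high
    out[1::2] = low
    return bytes(out)
```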

     

    In terms of endianness, like the above example, the format just needs to be well documented and consistent. That's it. 


  11. 1 hour ago, blzmarcel said:

    I'm honestly a little confused. What do you mean by "core's native architecture" ? It is my understanding that with FPGA, the architecture of an original system is being replicated, all the chips, processors, circuits, et al, and thus all of the original behavior of said system.

     

    I assum what would happen (and what appears to be the case with other FPGA systems) is that the data from a ROM file is handled the same way an original system would read the ROM data from a cart, as AFAICT, the .int/.bin ROMs that have been around for quite a while are the original binary 1:1 dumps of cart ROM which any other FPGA and emulator expects. But the NT Mini (is this also true of the Super NT and Mega SG?) needs a change of endianness which still strikes me strange that no other FPGA system or emulator requires this.

    In the case of intv2, there are a couple of things that jump out at me:

     

    1) Specifying the binary data as 16-bit little-endian words, even for 10-bit ROMs. I think we got a bit hung up on what was meant to be a concise example of how there is potentially no such thing as a "1:1 copy" when dumping ROMs, but it's still interesting in this case. 16-bit words do make the emulation of the cartridge bus a bit easier in some ways.

    2) Embedding what would normally be in a config file for .int/.bin ROMs. 

     

    The second one is rather important, though.

     

    It looks like, outside of .rom files (which embed the metadata that would normally go into the .cfg file), the other formats require this configuration file to specify certain things about the ROM, including how memory is mapped, so chunks of the ROM can be assigned to specific addresses. A bit like the NES mapper example from before. But in .rom files this data is appended to the end of the file and has to be searched for. That isn't a great engineering design, IMO, but it potentially keeps the format backwards compatible with emulators that only understand .bin ROMs paired with a .cfg file.

     

    .intv2, on the other hand, uses the format itself to tell you where chunks of ROM data go. So as you read the file, it's telling you where things go, which makes things much easier. That has performance implications for both dumping and reading ROM files.

     

    EDIT: Another thought that occurs to me: one way to describe this is that raw ROM dumps aren't terribly useful on their own. Much like a RAW file from a DSLR, a dump needs to be processed with additional information to make sense of the raw data, be it information about circuitry that cannot be dumped, memory locations, etc. Some ROM formats are closer to that raw data; MAME, and I guess Intellivision, fall into that category. Other ROM formats bundle in the required metadata. intv2 falls more into this second category, but so does an NES ROM that contains information about which mapper was used, plus size data for the CHR and PRG ROMs.


  12. 34 minutes ago, blzmarcel said:

    Thanks. Might I ask why the mapper chips aren't dumped along with the CHR and PRG ROMs? I'm not too keen on what mapper data looks like, I was mostly just aware of the two mentioned ROMs.

    They are ASICs. Chunks of logic. You can’t really dump them, as there’s no data there, just volatile state.

     

    When I say they sit between the ROMs and the system, I mean exactly that. The simplest mappers enable bank switching, which allows more data to be stored on the ROMs than the NES can actually address. The game then needs to send signals to switch banks when it wants to access different parts of the ROM.

     

    23 minutes ago, bikerspade said:

    Even dumping those supplemental chips may not be enough; if there’s a volatile RAM chip or non-volatile EEPROM chip, those may not have data to be “dumped” yet are vital aspects of the cartridge that need to be represented and emulated. There could also be wiring differences on the PCB.

    I totally forgot about CHR RAM.


  13. 9 minutes ago, blzmarcel said:

    I've sometimes wondered why an original NES or Famicom would know what to do without any such header, yet emulators always seemed to need it. Conventional thinking would be if an emulator is doing what an original system was doing in reading and parsing the data on the cart, though there is likely something more to it otherwise this would have been solved a lot more easily.

    Cartridges on these systems aren’t “inert”. For NES, the mapper chips are on the cart, and sit between the ROM chips and the system. Just dumping the ROM itself isn’t going to include the behavior of those chips.


  14. 2 hours ago, blzmarcel said:

    Byte order doesn't come into it, or at least shouldn't. A binary image is just the same 1s and 0s as the source, a 1:1 copy. Sometimes it can be compressed, like .smd vs .md/.bin for Genesis. Anything else is not a real image. Anything ending in .bin should be a direct 1:1 (uncompressed) copy of the original source.

    I'd agree it shouldn't come into play, but it does. For simpler systems with an 8-bit data bus, it certainly shouldn't be an issue, since you are reading data in small enough chunks that the result is effectively a 1:1 copy of the EEPROM contents. The NES, Master System, and SNES, for example.

     

    Things get more complicated when you start dealing with the 16-bit data buses present on the Megadrive or the N64. A backup tool will generally dump the cartridge a word at a time over the cartridge bus, where a word is 2 bytes in these two examples. So when I write it out to storage, should it be LSByte first, or MSByte first? If I just record these values into a buffer and flush the buffer as a series of bytes to storage, then the endianness of the system affects how the data is recorded. The 68k is big-endian, while the MIPS chip in the N64 is configurable (although I don't know which mode Nintendo used, or if it was switchable while running). So if I don't standardize on the byte order in the ROM file itself, it can't be read properly on the other end without some sort of detection looking for something that can be treated like a UTF BOM.
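
    In concrete terms, here's the choice in Python; the same words seen on the bus produce two different files depending on the byte order picked at write time:

```python
# The same 16-bit words, written out two ways. Without an agreed convention
# (or a BOM-like marker), a reader can't tell which file is "the" dump.
import struct

words = [0x4E45, 0x5354]  # example words as seen on a 16-bit cartridge bus

big = b''.join(struct.pack('>H', w) for w in words)     # b'NEST' (MSByte first)
little = b''.join(struct.pack('<H', w) for w in words)  # b'ENTS' (LSByte first)
```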

     

    And there's an argument to be made for both LSB and MSB when it comes to consoles like the Megadrive. LSB is more common for the EEPROM programmers that were being used on x86 (DOS, generally) at the time. MSB is how the console itself sees it. So which is "real"? At least in the case of the Megadrive, the light reading I've done so far seems to suggest that big-endian is the convention, because the original dumps were done on-system. The dumpers could have used LSB by byte-swapping things before writing them out, to more closely approximate the raw files written to the ROMs, but they didn't.

     

    It should be pointed out that .smd is interleaved compared to .md/.bin, due to the dumper that produced them operating in Z80 mode (and likely some addressing issues in Z80 mode). So to load those, you have to know that the high and low bytes of each word are 8K apart from each other in the dump, which is sliced up into 16K blocks. And which ones are the high bytes, and which are the low bytes. Fun.
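
    Based on the common description of the format (and worth verifying against a known-good dump), de-interleaving an .smd file looks roughly like this in Python:

```python
# De-interleave an .smd dump into a plain .bin. Per the usual description:
# skip the 512-byte SMD header, then within each 16K block the first 8K
# holds the odd-addressed bytes and the second 8K the even-addressed bytes.

BLOCK = 16384
HALF = 8192

def smd_to_bin(smd: bytes) -> bytes:
    data = smd[512:]                                   # drop the SMD header
    out = bytearray(len(data))
    for start in range(0, len(data), BLOCK):
        block = data[start:start + BLOCK]              # assumes complete 16K blocks
        for i in range(HALF):
            out[start + 2 * i + 1] = block[i]          # first half -> odd bytes
            out[start + 2 * i] = block[HALF + i]       # second half -> even bytes
    return bytes(out)
```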

    • Like 1
    • Thanks 1

  15. 8 minutes ago, CZroe said:

    I recall that he had the math wrong for scanlines at any resolution other than 720p, and since native 720p displays functionally don't exist, it's almost impossible to have a true integer scale with proper scanlines. Not sure if one of the later updates fixed it so I need to go back and look.

    Not sure what you are saying about vertical scrolling with scanlines. The issue I recall was that the lines would bisect/split pixels that have been scaled up. That might make for some funny artifacts with vertical scrolling as rows of pixels expand and contract when scrolling past lines that are supposed to land between them, but the dark lines themselves should not scroll.

     

    1280 * 3 = 3840

    720 * 3 = 2160

     

    Can't do it with a 1080p display, but a 4K display should be able to integer scale 720p just fine. At least on mine, the input lag is the same for 720p and 1080p, so I just run at 720p.


  16. On 11/26/2020 at 6:20 PM, Toth said:

    So is fedex like being a middle man for those fees? Like they pay the government or something?  Is there a fee schedule that you can look at to calculate those kinds of charges before buying something?  That seems really high.

    At least the last time I had to deal with customs, FedEx was the middleman when they handled the shipping. To import the package, FedEx deals with customs at the port of entry, and then reaches out to the receiver if customs imposes tariffs.

     

    But yes, a 50% tariff seems extreme. I wonder how much of this might be mistakes/confusion related to the changes in the UK import scheme from leaving the EU. The changeover is due at the end of the year, so I wonder if the training for the customs agents is getting messed up somewhere.


  17. 1 hour ago, Razzie.P said:

    Got mine just now!   Yeah, it's definitely more "gunmetal gray" than "black."   I'm not disappointed at all, it still looks amazing, but yep... gray.   Hope yours doesn't disappoint.

    When I placed my pre-order, my understanding was that the “gunmetal finish” they advertised was going to be “gunmetal gray”. Was that not the understanding others had? Maybe “Noir” was a bad pick for naming?


  18. 34 minutes ago, eebuckeye said:

    I believe Bleem won against Sony, however, legal expenses still killed it.

    And VGS was shut down when Sony bought the tech from Connectix after Sony lost in court. AFAICT, the PS1 emulation used for the PSP, Vita, and PS3/PS4 is based on VGS. Go figure. The evidence is somewhat circumstantial, but it rests on the fact that the PSP version of the emulator in particular turned out to have very similar compatibility bugs, and even similar toggles (quirks modes) for improving compatibility on a game-by-game basis.

     

    40 minutes ago, RobDangerous said:

    Remember Bleem and Virtual Game Station? Those were legal as well and yet Sony managed to shut them down. The more popular Analogue becomes the higher the risk to anger some big company like Nintendo and if one of those then find something that looks even slightly suspicious from a legal standpoint (for example their case designs, which are very close to the originals, especially for the Duo) they will drag them to court and keep them busy for years. The "jailbreaks" of all of Analogue's devices were done by Kevtris. Everbody uses them but they are not officially endorsed by Analogue. It is kinda weird but I'm pretty certain it's solely done that way to try and reduce the risk of something like Bleem happening - can't think of any other reason at least.

    Generally, the issues of copyright in this space aren't well covered in the courts. At least in the US, format shifting isn't really a right, and hasn't been defined as "fair use". Even your ability to make personal copies of a copyrighted work is a legal gray area, mostly carved out because companies aren't eager to do the work to get the legal precedent, and there's no legal framework for or against it in the US. So the risk is that you get to be the guinea pig on a particular legal case, with no real insight into what the law says you can do here. And yes, you are right that the resources Nintendo, Sony or Microsoft can spend are formidable.

     

    (EDIT: And in the case of Bleem and VGS, they play original copies of games. VGS in particular would check to see if the media was a CD-ROM or CD-R, and only accept the CD-ROM. VGS didn't support ISOs either.)

     

    That said, companies have been more willing to go after those doing distribution of copies. So ROM sites in the right legal jurisdiction, for example. And it makes sense since it's the best use of resources. The best bang for the buck, if your goal is to minimize piracy. This approach has worked well for the music industry, since it has let music evolve from CD to MP3 to Spotify. But that isn't to say every industry will play this way and sit back as people format shift content. Movies are still playing cat and mouse with their DRM schemes, along with eBooks and Video Games.

     

    When it comes to devices that can play ROMs, how do you get the ROMs? If you don't dump them yourself, that gray area of format shifting doesn't even apply; you are receiving a copy that wasn't permitted/licensed to be made. Nintendo has gone after flash cart makers in the past, with varying degrees of success, mostly focused on current systems. Honestly, it seems like flash carts for older systems only get left alone because companies like Nintendo don't see the point in going after flash carts for the NES when they can go after the ROM sites within reach, and leave it at that.

     

    So it boils down to how close to the line you want to play. Only you don't know exactly where the line is because there's no written copy of the rules, and you only find out when someone like Nintendo slaps you with a C&D. I don't blame Analogue for being cautious here.


  19. 2 minutes ago, Toth said:

    (Though, I still don't understand how they can have a seemingly endless supply of those and the SuperNt goes months without stock and then sells out very quickly.)

    The Super NES was always more popular than the Genesis/Mega Drive. This is likely more that they keep selling through the Super NT batches as they get them, whereas the Mega SG supply has already caught up to demand.


  20. Ugh, so I knew this would probably happen, but didn't bother to set an alarm, so I was busy messing with something else and getting started with work while they sold out.

     

    Oh well. I refuse to pay scalper pricing. I either get it at MSRP or I don't get it. 🤷🏻‍♀️

     

    1 hour ago, Steven Pendleton said:

    So... remember when there was that thing on Analogue's website saying "Sign up to be notified when it becomes available" or whatever it was? I'm guessing that email never got sent, not that it matters now.

     

    I got an e-mail a while back about pre-orders starting today at 8am. I think that was the "notification". A notification this morning would have been worthless for a lot of people.


  21. 1 hour ago, Steven Pendleton said:

    Okay, great. I really doubt that users will be able to get the thing to output video and audio through a converter, but I suppose we'll have to wait until the thing is released to find out. It would certainly be interesting if it's possible, but anyone who wants video and audio output should get the dock instead of hoping for something that might not be possible.


    If it does use HDMI alt mode, you’d just need a passive USB-C to HDMI cable.
     

    But yes, since we don’t know how it is spitting out the signal, we should assume the dock is going to be the best way to do it. 


  22. 4 hours ago, Steven Pendleton said:

    Now that I think about it, if Analogue is letting people do stuff with the Pocket, I wonder if it would be possible for someone to make the thing compatible with a generic USB-C to HDMI thingy so you can just tell the dock to go to hell and use your own cheap thing from Amazon. Not sure how you'd charge it when using it like that, though, and I don't have anything that uses USB-C except the Switch and I have basically no familiarity with it, so I'm not sure how it works.

    Depends on what USB-C mode is used between the Pocket and the dock.

     

    HDMI alt mode is one option. Pros: USB 2.0 is plenty for a dock-side hub that only supports HID inputs, and if the Pocket is 1080p through the dock, the limits of HDMI 1.4b aren’t a big deal. Cons: it uses the PD pins, so charging would be limited to 5V/2.1A (10.5W), which may be fine.

     

    DisplayPort alt mode is another.  Pros: USB 3, PD and 1080p are all supportable here at the same time. Cons: Dock needs to have a DisplayPort to HDMI adapter built into it.

     

    Anything else is probably overkill, to be honest. The fact that it’s DAC-compatible makes me suspect HDMI alt mode is more likely here. But my understanding is that the smaller FPGA that isn’t available to devs is also the one responsible for output to the screen or dock, and for receiving input, so I doubt developers would be able to override how the USB-C port works.

     

    The Switch uses something called MyDP, which is somewhat like DisplayPort alt mode but pre-dates USB-C, making it incompatible with the newer alt modes. It requires a special chip in the dock to pull apart the DisplayPort, audio, and USB data signals that have been multiplexed over the USB cable. A little surprising, but perhaps support for it was already baked into the Tegra platform and they just used it as-is. PD should still be supported in this setup as well (since it doesn’t need the PD pins).
