• 0 Posts
  • 40 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • I have a stack of Logitech F310 controllers, and I’ve never had them fail to work on any system (Windows, Linux, Android). They’re not “pro gamer” controllers or anything, fairly basic, but they’ve always responded smoothly for me even after many years of use. They’re inexpensive, wired, and have an XInput/DirectInput switch on the back (at least mine do; that feature may have been removed by now).

    The F310 (what I use) is wired and has no rumble feedback.

    The F510 is wired and has rumble feedback, but I’ve never used one.

    The F710 is wireless 2.4GHz (not Bluetooth) and has rumble feedback. I have two of these, and in my experience neither of them connects reliably, even under Windows with the official software installed.


  • I’m in a similar boat to you. I ripped almost all of my CDs to 320kbps mp3s for portability, but then I wanted to put all of them (a substantial number), plus my partner’s collection, onto a physically tiny USB stick I already had, to leave plugged into our car stereo’s spare port. I had to shrink the files somehow to make them all fit, so I used ffmpeg and a little bash file logic to keep the files as mp3s but reduce the bitrate (roughly the idea in the sketch at the end of this comment).

    128kbps mp3 is passable for most music, which is why the commercial industry focused on it in the early days. However, if your music has much “dirty” sound in it, like loud drums and cymbals or overdriven electric guitars, 128kbps tends to alias them somewhat and make them sound weird. If you stick to mp3, I’d recommend at least 160kbps, or better, 192kbps. If you can use variable bitrate, that can be even better.

    Of course, even 320kbps mp3 isn’t going to satisfy audiophiles, but it sounds like you just want to have all your music with you at all times as a better alternative to radio, and your storage space is limited, similar to me.

    As regards transcoding, you may run into some aliasing issues if you try to switch from one codec to another without also dropping a considerable amount of detail. But unless I’ve misunderstood how most lossy audio compression works, taking an mp3 from a higher to a lower bitrate isn’t transcoding, and should give you the same result as encoding the original lossless source at the lower bitrate. Psychoacoustic models split a sound source into thousands of tiny component sounds, and keep only the top X “most important” components. If you later reduce that to the top Y most important components by reducing the bitrate (while using the same codec), shouldn’t that be the same as just taking the top Y most important components from the original, full group?
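
    For what it’s worth, my real script was just bash wrapped around ffmpeg, but here’s a minimal sketch of the same idea in Python (the music/ and shrunk/ paths and the 192k target are placeholders, not my actual settings; it just needs ffmpeg on the PATH). Swapping the fixed -b:a for something like -q:a 2 gives you the variable-bitrate option I mentioned.

```python
#!/usr/bin/env python3
"""Re-encode every mp3 under SRC into DST at a lower bitrate using ffmpeg.
Rough sketch of the idea described above; paths and bitrate are placeholders."""
import subprocess
from pathlib import Path

SRC = Path("music")    # original 320kbps rips (placeholder path)
DST = Path("shrunk")   # smaller copies for the USB stick (placeholder path)
BITRATE = "192k"       # constant-bitrate target; use ["-q:a", "2"] for VBR instead

for src_file in SRC.rglob("*.mp3"):
    out_file = DST / src_file.relative_to(SRC)
    out_file.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src_file),
         "-codec:a", "libmp3lame", "-b:a", BITRATE,
         str(out_file)],
        check=True,   # stop if ffmpeg reports an error
    )
```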


  • My main concern is getting games in a form that I can store locally for 20 years and then reasonably expect to boot up and play. A secondary concern (ever since I moved permanently to another country) is going digital whenever possible because shipping stuff long distances is expensive. I had hundreds of physical books that it pained me to give away, but it simply wasn’t economical to move them to my new home. I kept my physical games, CDs, and DVDs, because they’re mostly thin discs and air-filled plastic cases (often replaceable once paper inserts have been removed) and I was able to bring them over affordably.

    Over the last few years I’d say I’ve slowed down on physical retro collecting and only bought a couple dozen retro console games. More often I sail the high seas looking for them, because decades after release there’s no sane moral argument that paying $50-100 to a private collector or dealer for a secondary or tertiary sale has any impact on the developer’s or publisher’s profits. The physical game media and packaging have ceased to be games and have become artifacts, almost independent of their content, like other vintage or antique items. Of course that doesn’t apply if the game has been rereleased in more or less its original form, in which case I either buy it (if the price is reasonable) or don’t play it at all (if the price is unreasonable). I actually have such a game in digital storage that I’ve been meaning to play for years, and I learned that it’s quite recently been put up on GOG, so now I’m morally obligated to buy it if I still want to play it, heh. Luckily for me the price seems fair.

    And speaking of GOG, about 95% of my recent game purchases have been split pretty evenly between GOG and itch.io. I basically haven’t bought anything directly from Steam for more than a decade. I understand that many games there are actually DRM-free, but I’m not interested in trying to research every game before I make a purchase. If each game’s store page indicated its true DRM status clearly (not just “third-party DRM”), I’d consider buying through Steam again. As it is, whenever I learn about an interesting game that’s on Steam, I try to find it on itch.io or GOG, and if I can’t, I generally don’t buy it; I’ll buy it on Steam only if it looks really interesting and it’s dirt cheap.

    Whenever I look at “buying” (really leasing, with no fixed term) anything with DRM, I assume that it will be taken away from me or otherwise rendered unusable unexpectedly at some point in the future through no fault of my own. It’s already happened to me a couple of times, and once bitten, twice shy. I know that everyone loves Gabe Newell, and he seems like a genuinely good guy, and he’s said that if Steam ever closed its doors, they’d unlock everything. However, the simple fact is that in the majority of situations where that might happen, the call wouldn’t be up to Gaben, even for games published by Valve.

    So yeah, I may put up with DRM in a completely offline context, but in any situation where my access terms can be changed remotely and unilaterally with a forced update, server shutdown, or removal, that’s a hard pass from me.



  • I’m not too knowledgeable about the detailed workings of the latest hardware and APIs, but I’ll outline a bit of history that may make things easier to absorb.

    Back in the early 1980s, IBM was still setting the base designs and interfaces for PCs. The last video card they released that became an accepted standard was VGA. It was a standard because no matter whether the system your software was running on had an original IBM VGA card or a clone, you knew that calling interrupt X with parameters Y and Z would have the same result. You knew that in 320x200 mode (you knew that there would be a 320x200 mode) you could write to the display buffer at memory location ABC, that what you wrote needed to be bytes indexing a colour table at another fixed address in the memory space, and that the ordering of pixels in memory was left-to-right, then top-to-bottom (there’s a little toy model of that layout at the end of this comment). It was all very direct, without any middleware or software APIs.

    But IBM dragged their feet over releasing a new video card to replace VGA. They believed that VGA still had plenty of life in it. The clone manufacturers started adding little extras to their VGA clones. More resolutions, extra hardware backbuffers, extended palettes, and the like. Eventually the clone manufacturers got sick of waiting and started releasing what became known as “Super VGA” cards. They were backwards compatible with VGA BIOS interrupts and data structures, but offered even further enhancements over VGA.

    The problem for software support was that it was a bit of a wild west in terms of interfaces. The market quickly solidified around a handful of “standard” SVGA resolutions and colour depths, but under the hood every card had quite different programming interfaces, even between different cards from the same manufacturer. For a while, programmers figured out tricky ways to detect which card a user had installed, and/or let the user select their card in an ANSI text-based setup utility.

    Eventually, VESA standards were created, and various libraries and drivers were produced that took a lot of this load off the shoulders of application and game programmers. We could make a standardised call to the VESA library, and it would have (virtually) every video card perform the same action (if possible, or return an error code if not). The VESA libraries could also tell us where and in what format the card expected to receive its writes, so we could keep most of the speed of direct access. This was mostly still in MS-DOS, although Windows also had video drivers (for its own use, not exposed to third-party software) at the time.

    Fast-forward to the introduction of hardware 3D acceleration into consumer PCs. This was after the release of Windows 95 (sorry, I’m going to be PC-centric here, but 1: it’s what I know, and 2: I doubt that Apple was driving much of this as they have always had proprietary systems), and using software drivers to support most hardware had become the norm. Naturally, the 3D accelerators used drivers as well, but we were nearly back to that SVGA wild west again; almost every hardware manufacturer was trying to introduce their own driver API as “the standard” for 3D graphics on PC, naturally favouring their own hardware’s design. On the actual cards, data still had to be written to specific addresses in specific formats, but the manufacturers had recognized the need for a software abstraction layer.

    OpenGL on PC evolved from an effort to create a unified API for professional graphics workstations. PC hardware manufacturers eventually settled on OpenGL as a standard which their drivers would support. At around the same time, Microsoft had seen the writing on the wall with regards to games in Windows (they sucked), and had started working on the “WinG” graphics API back in Windows 3.1, which after a time became DirectX. Originally, DirectX only supported 2D video operations, but Microsoft worked with hardware manufacturers to add 3D acceleration support.

    So we still had a bunch of different hardware designs, but they still had a lot of fundamental similarities. That allowed for a standard API that could easily translate for all of them. And this is how the hardware and APIs have continued to evolve hand-in-hand. From fixed pipelines in early OpenGL/DirectX, to less-dedicated hardware units in later versions, to the extremely generalized parallel hardware that caused the introduction of Vulkan, Metal, and the latest DirectX versions.

    To sum up, all of these graphics APIs represent a standard “language” for software to use when talking to graphics drivers, which then translate those API calls into the correctly-formatted writes and reads that actually make the graphics hardware jump. That’s why we sometimes have issues when a manufacturer’s drivers don’t implement the API correctly, or when the API specification has a point that isn’t defined clearly enough, so different drivers interpret the same call slightly differently.
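
    To make the VGA-era part a bit more concrete, here’s the little toy model I mentioned (plain Python, no real hardware access, made-up greyscale palette) of what “writing to the display buffer” meant: a flat block of bytes, one byte per pixel, each byte an index into a colour table, pixels ordered left-to-right and then top-to-bottom.

```python
WIDTH, HEIGHT = 320, 200

# Toy stand-ins: on real hardware the framebuffer was a fixed region of memory
# and the colour table lived on the video card, but the layout idea is the same.
framebuffer = bytearray(WIDTH * HEIGHT)       # one byte per pixel
palette = [(i, i, i) for i in range(256)]     # index -> (R, G, B), made up

def put_pixel(x, y, colour_index):
    # Left-to-right, then top-to-bottom: pixel (x, y) lives at offset y*WIDTH + x.
    framebuffer[y * WIDTH + x] = colour_index

def pixel_rgb(x, y):
    # What the monitor ends up showing: the byte is looked up in the colour table.
    return palette[framebuffer[y * WIDTH + x]]

put_pixel(10, 5, 200)
print(pixel_rgb(10, 5))   # (200, 200, 200) with this made-up palette
```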


  • In my (admittedly limited) experience, SDL/SDL2 is more of a general-purpose library for dealing with different operating systems than an abstraction over graphics APIs. While it does include a graphics abstraction layer for simple 2D graphics, many people use it to have the OS set up a window and process, handle whatever other housekeeping is needed, and instantiate and attach a graphics surface to that window. Then they communicate with that graphics surface directly, using the appropriate graphics API rather than SDL (roughly the pattern in the sketch at the end of this comment). I’ve done it with OpenGL, but my impression is that using Vulkan is very similar.

    SDL_gui appears to sit on top of SDL/SDL2’s 2D graphics abstraction to draw custom interactive UI elements. I presume it also grabs input through SDL and runs the whole show, just outputting a queue of events for your program to process.
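
    For what it’s worth, the window-plus-GL-surface pattern I described looks roughly like this. It’s only a sketch, using the PySDL2 and PyOpenGL bindings (both assumed installed, along with the SDL2 library itself) to keep it short; the shape is the same in C.

```python
"""Minimal sketch: SDL2 handles the OS housekeeping (window, events), while all
actual drawing goes straight to OpenGL. Assumes PySDL2 + PyOpenGL are installed."""
import ctypes
import sdl2
from OpenGL import GL

sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
window = sdl2.SDL_CreateWindow(
    b"GL via SDL2",
    sdl2.SDL_WINDOWPOS_CENTERED, sdl2.SDL_WINDOWPOS_CENTERED,
    640, 480,
    sdl2.SDL_WINDOW_OPENGL,                    # ask SDL for a GL-capable window
)
context = sdl2.SDL_GL_CreateContext(window)    # the attached "graphics surface"

running = True
event = sdl2.SDL_Event()
while running:
    # Input and window events still come through SDL...
    while sdl2.SDL_PollEvent(ctypes.byref(event)):
        if event.type == sdl2.SDL_QUIT:
            running = False
    # ...but rendering talks to OpenGL directly, not to SDL's 2D renderer.
    GL.glClearColor(0.1, 0.1, 0.3, 1.0)
    GL.glClear(GL.GL_COLOR_BUFFER_BIT)
    sdl2.SDL_GL_SwapWindow(window)

sdl2.SDL_GL_DeleteContext(context)
sdl2.SDL_DestroyWindow(window)
sdl2.SDL_Quit()
```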



  • I’m not sure how common they are outside Japan, but I have a little (about 12" I think) Panasonic “Let’s Note” that I use quite a lot as a lightweight coding (and retro/indie gaming :D) device that I can throw in even my smallest bag when there’s a chance I’ll have to kill more than a few minutes. They’re designed to be a little bit rugged. I had Ubuntu on it previously, now Mint, and the only problem I’ve had is that Linux somehow sees two screen brightness systems, and by default it connects the screen brightness keys to the wrong (i.e. nonexistent) one. Once I traced the problem it was a quick and painless fix.

    They seem to be sold worldwide, so you may be able to get one cheaply second-hand. One thing to be careful about is the fact that in order to keep the physical size down, the RAM is soldered to the board. Mine is an older model (5th-gen Intel Core), and has 4GB soldered on but also one SODIMM slot, so I was able to upgrade to 12GB total. But I’ve noticed that on most later models they got rid of the RAM slots entirely, so whatever RAM it comes with is what you’re stuck with.




  • I had a mini movie night with two colleagues, one around middle age like me, and the other in their twenties. We were going through some DVDs and Blu-rays, and Die Hard came up. We two older folks said we liked it, but the younger one said they’d never seen it. Well, obviously we had to watch it right then.

    Afterward, the young colleague said they found the movie boring and unoriginal. Talking it over, we came to the conclusion that while Die Hard did so much in fresh and interesting ways at the time, it has since been copied by so many other films that, looking back, it offers little to an uninitiated modern audience.

    Although I haven’t played it myself, reading someone say that Ultima 4 is derivative and lacking in originality feels a lot like that experience with Die Hard. Additionally, I think that really old games usually expect a level of imagination and willingness to put up with discomfort that even I sometimes find a little off-putting in 2025, despite the fact that I grew up with many of those games and had no issues with them at the time. If I don’t remind myself of it, it can be easy to forget that old hardware wasn’t limited only in audio-visual power, but also in storage size and processing power.

    I still search through old games, but I’m looking for ideas that maybe didn’t work well or hit the market right the first time, but still deserve further consideration, especially in light of technological advances that have happened in the intervening years.




  • I’ve never played the GBA games, and I still found Super Metroid bland.

    I didn’t have an NES or SNES growing up, so I came to those games a little later on. However, Super Metroid was still the most recent game in the franchise when I played it. There were plenty of rave reviews even then, so I looked forward to playing it once I got my hands on a copy. I even bought a new controller for it.

    Initially I actually found the game somewhat frustrating, but once I got used to Samus’ momentum and how the game had been designed to be played, I found it to be very well balanced. But I never felt like there was any real reason for me to go on other than to open new areas. Since it wasn’t referenced in any way (that I noticed) outside of the manual, “The Mission” didn’t seem important. And while the graphics were gorgeous for the time (and still are), that wasn’t enough for me. People often talk about the haunting and creepy feeling of the game’s world, but I didn’t get that. I felt that way about the Prime games, but Super Metroid just seemed empty and abandoned to me, not atmospheric.

    A few years ago I was able to play AM2R and stuck with it all the way to the end, even 100-percenting it, and I enjoyed it thoroughly. But I don’t think I ever finished Super Metroid. I just put it down one day and never got back to it. And I don’t feel like it’s something I need to tick off some gaming bucket list. If you’re not really enjoying it, stop playing and don’t feel bad about it. There are already more good games in the world than anyone can complete before they die. You can’t play them all, so stick to the ones that resonate with you personally.


  • I’ve been trying to research the various glitches and variations between versions because I’m working on something that uses some undocumented features and precise timing. Unfortunately, I don’t have one good link that explains it well.

    The issue stems from how player objects (the 2600 equivalent of sprites) are placed horizontally. For good and interesting (but technically involved) reasons, programmers can’t just send an X coordinate to the graphics chip. Instead there’s a two-step process. First, the program sends a signal to the graphics chip when the TV raster is at approximately the desired horizontal position on the screen. Then, because it’s often not possible to nail the timing of that signal to the exact pixel position, the graphics chip has a facility to “jog” the various graphical objects left or right by a very small amount at a time.

    According to the official programmers’ documentation, this final “jog” should only be done at one specific time during each video scanline. If we only do it this way, it works correctly on pretty much every version of the console. However, doing it “correctly” also introduces a short black line at the left side of that scanline. If we instead send the “jog” signal at certain other times, no black line appears. Additionally, the exact distances moved change depending on when we send the signal, which can be worked around and is sometimes even beneficial.

    Kool-Aid Man uses these undocumented “jog” timings, as several games did. But it displays a score counter at the top of the screen using the player objects placed very close together. It seems that the console versions in question (later 2600 Juniors and some 7800s) are more sensitive to the timing being used, as you can sometimes see parts of the score flicking left or right by one pixel.

    The Atari 2600 also has a hardware collision detection system, which reports when any two moving screen objects overlap with each other or the background. Once a collision occurs, the relevant flag stays set until the program clears it. Kool-Aid Man uses this system to detect when the player character touches enemies. But the program only clears the collision flags once, at the bottom of each frame, and the same player objects are used to draw the score. So when the two parts of the score flicker into each other, it registers as a collision between player objects, which the game interprets as a collision between Kool-Aid Man and a Thirsty (there’s a crude toy model of this at the end of this comment).

    As you mentioned, I’ve read that setting the console switches a certain way can prevent this issue, but I’m not sure why. My guess is that setting some switches one way rather than another causes a conditional branch instruction that checks the switches to branch rather than fall through, which takes one extra instruction cycle (or vice versa), which is then enough to stabilize the score display and stop the parts from colliding.
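
    Here’s the crude toy model I mentioned (plain Python, names made up, nothing to do with the actual game code) of why “latched until cleared, but only cleared once per frame” produces phantom hits when the score digits touch.

```python
class ToyCollisionLatch:
    """Crude stand-in for the TIA's player/player collision latch: it is set
    whenever the two player objects overlap and stays set until the program
    explicitly clears it (CXCLR on real hardware)."""

    def __init__(self):
        self.p0_hit_p1 = False

    def objects_overlap(self):
        self.p0_hit_p1 = True    # set by "hardware", never cleared automatically

    def clear(self):
        self.p0_hit_p1 = False   # the once-per-frame clear

latch = ToyCollisionLatch()

# Top of the frame: on an affected console the score digits (drawn with the
# player objects) jitter into each other for a moment.
latch.objects_overlap()

# Rest of the frame: Kool-Aid Man never actually touches an enemy.

# Bottom of the frame: the game reads the latch once, then clears it.
if latch.p0_hit_p1:
    print("Game logic: Kool-Aid Man was hit by a Thirsty (phantom hit).")
latch.clear()
```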


  • There’s a… not exactly a bug, but an unannounced change, in the graphics chip in some later versions of the Atari 2600, which has been named after this game by the fan/homebrew community. On most 2600 console versions, it’s possible for a game to perform a particular graphics operation at an unintended time and get an undocumented but consistent and useful result.

    On the affected consoles, the result is slightly different, and because of the way this game is written, it often causes a chain of actions that end up making Kool-Aid Man bounce around continuously as if being hit by enemies, even though nothing is touching him.