• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: March 22nd, 2024





  • The 6.6.x kernel series is LTS, so it should be fine as a downgrade target (6.7.x, not so much). Unless there’s something in the newer kernel versions that you specifically need to drive that system, there shouldn’t be any issues. I’m still on a 6.6-series kernel.

    That being said, you could try troubleshooting this from the bottom up rather than the top down.

    First, use lspci -v to verify that the device is being correctly identified and associated with a driver.
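
    For example, something along these lines narrows the output down to the audio hardware (the grep pattern is just illustrative; adjust it to match your machine):

        # list audio-related PCI devices with full details
        lspci -v | grep -i -A 8 audio

    Check that a “Kernel driver in use:” line is present and names the driver you expect (snd_hda_intel for most onboard HD audio).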

    Next, invoke alsamixer and make sure everything is unmuted and your HD audio controller is the first sound device. The last time I had something like this happen to me, the issue turned out to be that the main soundcard slot was being hijacked by an HDMI audio output that I didn’t want and wasn’t using, and that was somehow muting the sound at the audio jack even when I tried to switch to it. A little mucking around in ALSA-level config files fixed everything.
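
    A fix of that sort can be as small as a couple of lines; a minimal sketch, assuming the onboard codec shows up as card 1 in /proc/asound/cards (the numbering varies, so check yours):

        # /etc/asound.conf -- make the analog codec the default card
        # instead of the HDMI output that grabbed slot 0
        defaults.pcm.card 1
        defaults.ctl.card 1

    Reordering the cards at the driver level instead (the index= option to snd_hda_intel in /etc/modprobe.d/) is the other common approach.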



  • Automated command-line jobs, in my case. They’re technically not random, but they’re still annoying, because they don’t need to show a window at all. Interestingly, the one thing I can get to never pop up a window is a Perl script using Win32::Detached, which means suppressing the window is possible; Microsoft just doesn’t bother to expose the facility.
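
    For the curious, that’s the entire trick; a minimal sketch (Windows-only, obviously):

        #!/usr/bin/perl
        use strict;
        use warnings;
        # merely loading this module detaches the script from its
        # console at startup, so no window ever appears
        use Win32::Detached;

        # ...the actual work of the job goes here...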


  • nyan@sh.itjust.works to Linux@lemmy.ml · AMD vs Nvidia · 5 months ago

    I wouldn’t say the proprietary nvidia drivers are any worse than the open-source AMD drivers in terms of stability and performance (nouveau is far inferior to either). Their main issue is that nVidia drops support for cards long before the hardware breaks, leaving you to choose between nouveau and keeping an old kernel (and X version, if you’re using X; I’m not sure how things work under Wayland) for compatibility with the last proprietary driver that supported your card.


  • nyan@sh.itjust.works to Linux@lemmy.ml · AMD vs Nvidia · 5 months ago

    If those are your criteria, I would go with AMD right now, because only the proprietary driver will get decent performance out of most nVidia cards. Nouveau is reverse-engineered and can’t tap into many features, especially on newer cards, and while I seem to recall there’s a new open-source driver in the works, there’s no way it’s mature enough yet to be an option for anyone but testers.



  • The performance boost provided by compiling for your specific CPU is real but not terribly large (<10% in the last tests I saw some years ago). Probably not noticeable on common arches unless you’re running CPU-intensive software frequently.
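
    For concreteness: on Gentoo, that per-CPU optimization boils down to a couple of compiler-flag lines in make.conf; a minimal sketch, assuming GCC or Clang:

        # /etc/portage/make.conf
        # -march=native generates code tuned to the CPU doing the compiling
        COMMON_FLAGS="-O2 -pipe -march=native"
        CFLAGS="${COMMON_FLAGS}"
        CXXFLAGS="${COMMON_FLAGS}"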

    Feature selection has some knock-on effects. Tailored features mean that you don’t have to install masses of libraries for features you don’t want, each of which comes with its own bugs and security issues. The number of vulnerabilities added and the amount of disk space chewed up usually aren’t large for any one library, but once you’re talking about a hundred or more, it does add up.
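
    On Gentoo, this sort of feature selection is what USE flags control; a small example (the flag choices are arbitrary):

        # /etc/portage/make.conf
        # build with ALSA support, without systemd or GNOME integration
        USE="alsa -systemd -gnome"

    Per-package overrides go in /etc/portage/package.use instead.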

    Occasionally, feature selection prevents mutually contradictory features from fighting each other. For instance, a custom kernel that doesn’t include the nouveau drivers isn’t going to have nouveau fighting the proprietary nvidia drivers for control of the system’s video card, as happened to an acquaintance of mine who was running Ubuntu (I ended up coaching her through blacklisting nouveau; a sketch of that fix follows below). These cases are very rare, however.
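
    The blacklisting itself was along these lines (a sketch; the file name is arbitrary, and the initramfs step is Debian/Ubuntu-specific):

        # /etc/modprobe.d/blacklist-nouveau.conf
        blacklist nouveau
        options nouveau modeset=0

        # then rebuild the initramfs so the blacklist takes effect at boot:
        #   sudo update-initramfs -u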

    Disabling features may allow software to run on rare or very new architectures where some libraries aren’t available, or aren’t available yet. This is more interesting for up-and-coming arches like riscv than dying ones like mips, but it holds for both.

    One specific pro-compile case I can think of that involves neither features nor optimization is that of aseprite, a pixel graphics program. The last time I checked, it had a rather strange licensing setup that made compiling it yourself the best choice for obtaining it legally.

    (Gentoo user, so I build everything myself. Except rust. That one isn’t worth the effort.)





  • Money isn’t important. Some complex software is, in fact, maintained by unpaid volunteers who feel strongly about the project. That doesn’t mean it’s easy (keeping the lights on and the code up-to-date is quite difficult), but it is A Thing That Happens.

    What is important is the size of the codebase (for a fork, that means the code written for the fork, plus any code the fork preserves and maintains that is no longer in the original), the length of time it’s been actively worked on, and the bus factor. Some would-be browser forks are indeed trivial, ephemeral one-man shows. Others have years of active commit history, carry tens or even hundreds of thousands of lines of novel or preserved code, and have many people working on them.


  • Distro best added to the “Power-user distros to avoid” list: Gentoo (saying that as a Gentoo user).

    I disagree with your claim that doing things like installation steps manually is necessarily a bad idea, though. It depends on your goal. Obviously it isn’t the fastest way to get things up and running, and as such it isn’t appropriate for newcomers (or for mass corporate deployments). If your goal is to learn about the lower levels of the system, or to produce something highly customized, then it becomes appropriate. Occasionally, it pays dividends in the form of being able to quickly fix a system that’s been broken by automation that didn’t quite work as expected. Anyway, I’d suggest rewording that bit of your Arch screed.