• Alexstarfire@lemmy.world · 1 month ago

    Isn’t VRAM usually bigger than RAM? Those pics should be switched.

    EDIT: Oh, I took VRAM to be virtual RAM, not video RAM. It makes sense for video RAM.

    • cm0002@lemmy.world (OP) · 1 month ago

      It depends on your definition of “usually”. For high-end GPUs aimed at data centers, AI, workstations, or “enthusiasts”, yeah; for those applications you’re starting at something like 16 GB.

      GPUs for us plebs, no.

      • BombOmOm@lemmy.world · 1 month ago

        It’s also fairly cheap to buy 32+ GB of RAM; there are lots of choices for under $80. Meanwhile, I’m not even sure how you’d find a video card with 32 GB of VRAM (not that you really need that much; 12 GB or 16 GB is pretty solid for a video card nowadays).

    • FlexibleToast@lemmy.world · 1 month ago

      Creating your swap as 2x your RAM is outdated advice. The rule of thumb has essentially changed to 2x up to 4 GB of RAM, then 1x up to 8 GB, and for anything over 8 GB just use 4 GB of swap, because you probably have enough RAM. Some modern systems, like Fedora, even swap to zram instead, which is just a compressed portion of RAM.
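      A rough sketch of that rule of thumb in Python (the cutoffs are the ones from this comment, not any distro’s official policy):

      ```python
      # Swap-sizing rule of thumb described above; numbers are illustrative.
      def recommended_swap_gb(ram_gb: float) -> float:
          """Suggested swap size in GB for a given amount of RAM."""
          if ram_gb <= 4:
              return 2 * ram_gb  # 2x RAM up to 4 GB
          if ram_gb <= 8:
              return ram_gb      # 1x RAM between 4 and 8 GB
          return 4               # flat 4 GB once you have plenty of RAM

      for ram in (2, 4, 8, 16, 32):
          print(f"{ram:>2} GB RAM -> {recommended_swap_gb(ram):g} GB swap")
      ```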

      • kittenzrulz123@lemmy.blahaj.zone · 1 month ago

        The reason people didn’t like 8 GB of RAM on MacBooks is that Apple charged premium prices for laptops with 8 GB, especially since you can’t upgrade the RAM. My ThinkPad has 8 GB of RAM, but if I wanted I could upgrade to 16 GB.

        • Jakeroxs@sh.itjust.works · 1 month ago

          I know lol, I was taking a pot shot at Apple for exactly that reason. There’s no excuse for the insane pricing with such a restriction, not to mention the RAM is soldered in lol.

  • Smoolak@lemmy.world · 1 month ago

    The meme doesn’t make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles by reading directly from RAM and not having a cache at all…

    • cogman@lemmy.world · 1 month ago

      Slow? Not necessarily.

      The main issue with that much memory is the data routing and the physical locality of the memory. Assuming you could (somehow) shrink down the distance from the cache to the registers and have wide enough data/request lines, you could have data from such a cache in ~4 cycles (assuming L1 and a hit).

      What slows down L2 is the wider address space and slower residency checks. L3 gets a bit slower still because of an even wider address space, but it also has to deal with concurrency issues since it’s shared among cores. It also ends up slower because it physically has to be further away from the cores due to its size.

      If you ever look at a CPU die, you’ll see that L1 caches are generally tiny and embedded right in the center of the processor. L2 tends to be bolted onto the sides of the physical cores. And L3 tends to be the largest amount of silicon real estate on a CPU package. All of this contributes to the increasing fetch latency of each layer, along with the fact that you have to check the closest layers first (an L3 hit, for example, means that the CPU checked L1 and L2 and failed at both, which takes time, so L3 access will always cost at least the L1 + L2 times).
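      A toy average-memory-access-time (AMAT) model makes that last point concrete; a minimal sketch, assuming made-up latencies and hit rates (nothing here is measured from a real CPU):

      ```python
      # Toy AMAT model for a 3-level cache hierarchy.
      # All latencies (cycles) and hit rates are illustrative assumptions.
      levels = [
          ("L1", 4, 0.90),   # (name, lookup cycles, hit rate at this level)
          ("L2", 12, 0.80),  # hit rate among accesses that missed L1
          ("L3", 40, 0.70),  # hit rate among accesses that missed L1 and L2
      ]
      DRAM_CYCLES = 200      # assumed cost of going all the way to DRAM

      # A hit at a given level still pays the lookup cost of every level
      # before it, so an L3 hit can never beat the L1 + L2 + L3 lookup sum.
      amat = 0.0
      reach = 1.0  # probability an access misses all earlier levels
      paid = 0.0   # lookup cycles already spent at earlier levels
      for name, cycles, hit in levels:
          paid += cycles
          amat += reach * hit * paid
          reach *= 1 - hit
      amat += reach * (paid + DRAM_CYCLES)

      print(f"average access time: {amat:.1f} cycles")  # ~7.2 with these numbers
      ```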

      • Smoolak@lemmy.world · 1 month ago

        I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.
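        To put a rough number on the wire-delay point: access time for a big SRAM array grows very roughly with the square root of its capacity (longer word and bit lines to drive). A crude sketch under that assumption, with invented constants (the 32 KB/1 ns baseline and 80 ns DRAM figure are not measurements):

        ```python
        # Crude square-root wire-delay scaling for SRAM access time.
        import math

        BASE_KB, BASE_NS = 32, 1.0  # assume a 32 KB L1 with ~1 ns access
        DRAM_NS = 80.0              # assumed off-chip DRAM latency

        def label(kb: int) -> str:
            if kb >= 1024 * 1024:
                return f"{kb // (1024 * 1024)} GB"
            if kb >= 1024:
                return f"{kb // 1024} MB"
            return f"{kb} KB"

        for kb in (32, 1024, 8 * 1024, 8 * 1024 * 1024):  # 32 KB .. 8 GB
            est = BASE_NS * math.sqrt(kb / BASE_KB)
            print(f"{label(kb):>6}: ~{est:7.1f} ns (DRAM ~{DRAM_NS:.0f} ns)")
        ```

        Even with generous constants, the 8 GB array comes out several times slower than just going to DRAM, which is exactly the point above.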

  • thelosers5o@lemmy.world · 1 month ago

    Generally there’s an inverse relationship between size and speed. An 8 GB cache would also be super slow, thus defeating the purpose of the cache. If it were that easy, every CPU would have a huge cache.

    • MDCCCLV@lemmy.ca · 1 month ago

      Not really; if you put that much on the physical chip, it will be fast because it’s close by. It’s just that we can’t fit that much on a chip right now.

      • thelosers5o@lemmy.world · 1 month ago

        Unfortunately, that’s not how it works. This is coming from someone who studied computer hardware and software in university.

        Cache sizes are a trade-off. A small cache means quick access but a higher chance of a cache miss. A larger cache has slower access but a lower chance of a cache miss.

        This is actually why we have different levels of cache in a computer. It allows us to harness the benefits of different cache sizes without impacting speed as much: with multiple layers we can have small caches that are super fast, then larger caches that are slower, and so on. This way we get both speed and size.
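        A minimal sketch of why the hierarchy wins, comparing one big, slow cache against a small fast cache placed in front of it (every number here is invented for illustration):

        ```python
        # Toy comparison: one big, slow cache vs. small fast + big slow.
        DRAM = 200  # assumed DRAM latency in cycles

        def amat_single(cache_cycles: float, hit: float) -> float:
            # One cache level: pay the lookup, then DRAM on a miss.
            return cache_cycles + (1 - hit) * DRAM

        def amat_two_level(l1_cycles, l1_hit, l2_cycles, l2_hit) -> float:
            # Two levels: the big cache is only consulted on a small-cache miss.
            return l1_cycles + (1 - l1_hit) * (l2_cycles + (1 - l2_hit) * DRAM)

        # Big cache alone: slow lookup (30 cycles) but a 95% hit rate.
        print("big cache only:", amat_single(30, 0.95), "cycles")             # 40.0
        # Small 4-cycle cache in front of the same big cache; combined hit
        # coverage is the same 95% overall (0.90 + 0.10 * 0.50).
        print("small + big   :", amat_two_level(4, 0.90, 30, 0.50), "cycles")  # 17.0
        ```

        Same total hit coverage, yet the two-level setup is far faster on average, because most accesses never pay the big cache’s lookup cost.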