8GB RAM on M3 MacBook Pro ‘Analogous to 16GB’ on PCs, Claims Apple

Following the unveiling of new MacBook Pro models last week, Apple surprised some with the introduction of a base 14-inch MacBook Pro with M3 chip,…

  • Synthead@lemmy.world · +113/-4 · 1 year ago

    RAM is RAM. If you’re able to manage it better, that’s nice, but programs will still use whatever RAM they were designed to use. If you need to store 5 GiB of something in memory, what happens to the other 2.5 GiB if they claim it’s 2x as “efficient”?

    • thejml@lemm.ee · +6/-44 · 1 year ago

      Definitely true, but I will say macOS has pretty decent RAM compression. I’m assuming that’s why they feel this way. My old 2013 MBP had 8 GB, and I used it constantly until earlier this year when I finally upgraded. It was doing pretty well all things considered, mostly because of on-the-fly RAM compression.

      • olympicyes@lemmy.world · +24/-2 · 1 year ago

        Lower-end Macs tend to have slower SSDs, so this could be a double whammy on these machines.

        • thejml@lemm.ee · +5/-1 · 1 year ago

          I’m specifically talking about in-memory compression, not swap.

          • uis@lemmy.world · +3/-4 · 1 year ago

            But memory compression works the same way swap does. When memory is needed, the LRU page gets compressed and set aside (just as swap would write it out to disk), and when an application needs to read data from a compressed page, that generates a page fault and the OS decompresses the page back into memory. That’s it.
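
            If it helps, here’s a toy Python sketch of that page-fault path, just to show the mechanism. It’s nothing like the real XNU compressor; the page size, the four-page limit, and zlib are all stand-ins:

            ```python
            import zlib
            from collections import OrderedDict

            PAGE_SIZE = 16 * 1024      # pretend pages are 16 KiB (Apple silicon page size)
            MAX_RESIDENT = 4           # pretend only 4 uncompressed pages fit in "RAM"

            resident = OrderedDict()   # page_id -> raw bytes, kept in LRU order
            compressed = {}            # page_id -> zlib-compressed bytes

            def touch(page_id):
                """Access a page, decompressing it on a (simulated) page fault."""
                if page_id in resident:
                    resident.move_to_end(page_id)        # now most recently used
                    return resident[page_id]
                if page_id in compressed:                # "page fault" on a compressed page
                    data = zlib.decompress(compressed.pop(page_id))
                else:
                    data = bytes(PAGE_SIZE)              # brand-new zeroed page
                if len(resident) >= MAX_RESIDENT:        # out of room: compress the LRU page
                    victim, raw = resident.popitem(last=False)
                    compressed[victim] = zlib.compress(raw)
                resident[page_id] = data
                return data

            for i in range(8):
                touch(i)
            touch(0)   # faults page 0 back in from the compressed pool
            print("resident:", list(resident), "compressed:", sorted(compressed))
            ```

            The only real difference from swap is where the evicted page ends up: a compressed pool that stays in RAM rather than a file or partition on disk.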

      • Sylvartas@lemmy.world · +11/-2 · edited · 1 year ago

        Pretty sure Windows has been doing some pretty efficient RAM compression of its own since 98SE or something.

        • thejml@lemm.ee · +5/-1 · 1 year ago

          They actually just added it in Windows 10. There were third-party add-ons to do so before then, but they had marginal impact in my experience.

      • uis@lemmy.world · +7/-2 · 1 year ago

        Did you know that you could do RAM compression on an “old” 2013 MBP? All you had to do was install Linux and enable memory compression.
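
        For anyone curious, “enable memory compression” on Linux usually means setting up zram as swap. Roughly this (a Python sketch that needs root; the zstd algorithm and 4G size are just example values, and most distros ship a package or systemd unit that does the same thing):

        ```python
        import subprocess

        def write(path, value):
            with open(path, "w") as f:
                f.write(value)

        subprocess.run(["modprobe", "zram"], check=True)      # load the zram module
        write("/sys/block/zram0/comp_algorithm", "zstd")      # any algorithm the kernel offers
        write("/sys/block/zram0/disksize", "4G")              # size of the compressed swap device
        subprocess.run(["mkswap", "/dev/zram0"], check=True)  # format it as swap
        subprocess.run(["swapon", "-p", "100", "/dev/zram0"], check=True)  # prefer it over disk swap
        ```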

    • NIB@lemmy.world · +7/-59 · 1 year ago

      RAM is not RAM, though. If one kind of RAM is twice as fast as another, it can swap stuff back and forth really fast, making it more efficient per gigabyte. Because Apple solders the RAM right next to the chip, they can make their RAM a lot faster. The M3 Max’s RAM has 6x more bandwidth than DDR5, and a lot lower latency too.

      Also, macOS needs less RAM in general. Is 8 GB of RAM enough? No. But I would bet money on a 12 GB M3 having fewer RAM issues and faster performance than a 16 GB PC.

      Most of the things that “use” RAM in everyday PC use don’t actually need RAM. It’s just parked assets, web pages, etc., things that can be re-cached into RAM pretty fast if your RAM is really fast, especially if your storage is also really fast.

      • azuth@lemmy.world · +43 · 1 year ago

        RAM transfer rate is not important when swapping, since the bottleneck will be the storage transfer rate when reading and writing to swap.

        Which I doubt Apple can make as fast as DDR4 bandwidth.
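
        Back-of-the-envelope numbers (theoretical peaks; the SSD figure is a ballpark for a generic fast NVMe drive, not a measurement of Apple’s storage):

        ```python
        # Rough comparison of RAM bandwidth vs. swap (SSD) bandwidth. Ballpark values only.
        ddr4_3200_dual = 3200e6 * 8 * 2 / 1e9   # MT/s * 8 bytes/transfer * 2 channels = 51.2 GB/s
        fast_nvme_ssd  = 7.0                    # GB/s, roughly a PCIe 4.0 x4 drive's peak

        print(f"DDR4-3200, dual channel: {ddr4_3200_dual:.1f} GB/s")
        print(f"Fast NVMe SSD:           {fast_nvme_ssd:.1f} GB/s")
        print(f"RAM is ~{ddr4_3200_dual / fast_nvme_ssd:.0f}x faster, before even counting latency")
        ```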

        • at_an_angle@lemmy.one · +13/-1 · 1 year ago

          I have a tank that can hold 500 gallons of water. It’s connected to a pumping system that can do 1000 gallons a minute on the discharge side. So it’s just as good as a 2000 gallon tank!

          What do you mean incoming water? Look at my discharge rate!

      • uis@lemmy.world · +12/-1 · 1 year ago

        Because Apple solders the RAM right next to the chip, they can make their RAM a lot faster.

        What bullshit.

        • BURN@lemmy.world · +6/-1 · 1 year ago

          Of all the points in their blatantly wrong comment, this probably wasn’t the one to single out. The reason for the soldered RAM is speed and trace length. The longer the trace, the more chance there is for signal loss. By soldering the RAM close to the CPU, the traces are shorter, allowing for a minuscule improvement in latency.

          To be clear, I don’t like it either. It’s one of the major things holding me back from buying a MacBook right now.

          • Synthead@lemmy.world · +2 · 1 year ago

            The longer the trace, the more chance there is for signal loss.

            While this is true on paper, we don’t need to pretend that this is an unsolved problem in reality. It’s not like large-scale motherboard manufacturers simply refuse to put their RAM closer to the CPU, or that their boards are littered with data loss. Apple also didn’t do anything innovative by soldering the RAM onto their motherboards. This is simply bootlicking Apple for what’s actually planned obsolescence.

            • Alexstarfire@lemmy.world · +2 · 1 year ago

              I can’t speak to this particular instance, but the reason swappable RAM sticks aren’t “littered with data loss” is that they’re designed not to be, i.e. only rated up to a certain speed and timings. Putting RAM physically closer to the CPU does allow you to utilize the RAM better. It’s physics.

              Personally, I’d rather take a performance hit than be stuck with a set amount of RAM unless there was some ungodly performance gain.

              • Synthead@lemmy.world · +1 · edited · 1 year ago

                Putting RAM physically closer to the CPU does allow you to utilize the RAM better. It’s physics.

                If the RAM was 3x closer, would it somehow be faster? I’m looking for metrics. With the same stick of any given DDR5, how much performance loss is there on a few example motherboards of your choice?

                My point, again, is that yes, on paper, shorter wires mean less opportunity for inductance issues, noise, voltage drop, cross-talk, etc. But this is all on paper.

                It’s not like every motherboard manufacturer doesn’t know what they’re doing and Apple’s brilliant engineers somehow got a higher clock speed than what the RAM is rated for because… shorter wires?

                Case in point: DDR4 is meant to operate at a maximum clock speed per the DDR4 spec. However, plenty of overclock-capable motherboards will run memory far beyond the clocks the DDR4 spec calls for. How does that work with memory that is not soldered onto the motherboard?

                Additionally, without overclocking, the memory is designed to operate at a rated clock speed. Will shorter traces to the RAM magically increase the clock speed the RAM is capable of? Are these the “physics” you’re referring to?

                • Alexstarfire@lemmy.world · +1 · 1 year ago

                  I know I’ve seen something about this topic. I want to say it was from LTT but I can’t find the video.

                  I didn’t say anything about it being faster. I said utilize it better. Lower latency can be a big help as it allows quicker access. Think of HDD vs SSD. The biggest advantage in the beginning was the much lower latency SSDs provided. Made things a lot snappier even if the speed/throughput wasn’t all that different.

                  I don’t know what kind of difference we’re talking about here, or how much real-world performance benefit there is, but there’s a reason CPUs have caches on the die.

                  And that doesn’t include whatever other benefits shorter traces provide. Less voltage drop might be helpful.

                  But flexibility must still be worth more than those gains, or else most manufacturers would have switched. At some point you start running out of better ways to improve performance, though. That’s why things are going back to being integrated with the CPU again.

                  • Synthead@lemmy.world · +1 · 1 year ago

                    Lower latency can be a big help as it allows quicker access

                    How much latency? Consider the speed of electricity at a few centimeters.

                    there’s a reason CPUs have caches on the die

                    The static RAM on the die is a different type of memory that’s appropriate for the CPU to use. It’s not that short conductor lengths magically make it faster.
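
                    To put rough numbers on that (a back-of-the-envelope sketch; the 5 cm of extra trace, the ~0.5c propagation speed, and DDR4-3200 CL16 are just plausible example values):

                    ```python
                    # How much delay does a few centimeters of extra trace actually add?
                    signal_speed_cm_per_ns = 15.0   # ~0.5c, typical for a PCB trace
                    extra_trace_cm = 5.0            # hypothetical extra distance out to a DIMM slot
                    trace_delay_ns = extra_trace_cm / signal_speed_cm_per_ns

                    cas_cycles = 16                 # e.g. DDR4-3200 CL16
                    io_clock_hz = 1600e6            # DDR4-3200 I/O clock
                    cas_latency_ns = cas_cycles / io_clock_hz * 1e9

                    print(f"extra trace delay: ~{trace_delay_ns:.2f} ns")   # ~0.33 ns
                    print(f"CAS latency alone: ~{cas_latency_ns:.0f} ns")   # ~10 ns
                    ```

                    A fraction of a nanosecond of trace delay against ~10 ns of CAS latency alone, which is why slotted DIMMs run at their rated speeds just fine.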

          • TwanHE@lemmy.world · +2 · 1 year ago

            Same way ITX boards are preferred for RAM OC. But I doubt Apple is pushing crazy timings and clocks.

            • BURN@lemmy.world · +1 · 1 year ago

              Exactly. There is an actual, tangible benefit to doing it that way. I don’t like it, as it creates situations where you’re unable to upgrade your own hardware, but it does make sense for the 95% of the population who have never opened a laptop, let alone tried to replace RAM.