Friday, August 4th 2023

MSI Releases New AGESA PI 1.0.0.7c BIOS Update for Higher Frequency Memory Modules and Stability Bug Fixes

MSI today released a new AMD AGESA PI 1.0.0.7c BIOS update for its entire X670E, X670, B650, and A620 motherboard lineup. This release focuses primarily on support for higher-frequency DDR5 memory modules, along with stability bug fixes, and brings a significant increase in the supported memory frequencies on AMD Ryzen CPUs. Below is a list of models that will be ready at the time of release; the remaining models will receive support in the following week.

The screenshots below demonstrate a memory stress test running without any stability issues on an AMD Ryzen 7 7700X paired with a dual-channel DDR5-7200 EXPO-certified kit on MSI's PRO B650-P WIFI motherboard. They also show an AMD Ryzen 9 7900X on MSI's MEG X670E ACE motherboard reaching as high as DDR5-8000 (CL36). Beyond memory support, AGESA 1.0.0.7c adds extra reliability protections and patches several potential security vulnerabilities.
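For a sense of what these frequencies mean in practice, the quick sketch below compares first-word latency (CAS latency divided by the I/O clock, which runs at half the DDR5 transfer rate) across the kits mentioned in this article and the discussion below; the CL36 timing for the DDR5-7200 kit is an assumption, as MSI does not list it:

```python
# First-word latency: CL cycles at the I/O clock (half the transfer rate,
# since DDR memory moves two transfers per clock).
def cas_latency_ns(transfer_rate_mts: int, cl: int) -> float:
    io_clock_mhz = transfer_rate_mts / 2
    return cl / io_clock_mhz * 1000  # cycles / MHz -> ns

kits = {
    "DDR5-8000 CL36": (8000, 36),
    "DDR5-7200 CL36": (7200, 36),  # CL36 is assumed; MSI doesn't list the timing
    "DDR5-6000 CL30": (6000, 30),
}
for name, (rate, cl) in kits.items():
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
# DDR5-8000 CL36 -> 9.0 ns; DDR5-7200 CL36 and DDR5-6000 CL30 -> 10.0 ns
```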
Source: MSI

35 Comments on MSI Releases New AGESA PI 1.0.0.7c BIOS Update for Higher Frequency Memory Modules and Stability Bug Fixes

#1
Chaitanya
Buildzoid has a video from a couple of days ago trying DDR5-7000+ memory on a Ryzen board. Hopefully this means that using 4 modules at around 6000 or 5600 speeds should be possible on Ryzen 7000.
#2
Daven
With this pace of optimizations and improvements, I wonder if AMD will need another generation or more of AM5 chipsets. I guess the only major feature missing is USB4. Everything else about these motherboards is very future-proof.
#3
Slizzo
FINALLY. :P Been waiting on this one for quite a bit. Maybe I'll be able to finally enable XMP!
#4
AusWolf
Not bad, but is it worth dropping the memory controller's ratio to 1:2?
#5
Tomorrow
Chaitanya: Buildzoid has a video from a couple of days ago trying DDR5-7000+ memory on a Ryzen board. Hopefully this means that using 4 modules at around 6000 or 5600 speeds should be possible on Ryzen 7000.
If you need 4 modules of DDR5, that means you need more than 96GB. Most people don't need more than that, and I maintain that 4-DIMM DDR5 boards should be workstation-focused. Gaming and mainstream boards should incorporate extra M.2 or something similar instead of all four slots for RAM.
AusWolf: Not bad, but is it worth dropping the memory controller's ratio to 1:2?
Thankfully there's no performance loss from running 1:2, since the Infinity Fabric is decoupled from the get-go, unlike on AM4, where running 1:2 was very ill-advised.
That being said, there's mostly no performance uplift either.
#6
AusWolf
Tomorrow: That being said, there's mostly no performance uplift either.
I guess I'm not the only one who noticed. :ohwell:
#7
ir_cow
Chaitanya: Buildzoid has a video from a couple of days ago trying DDR5-7000+ memory on a Ryzen board. Hopefully this means that using 4 modules at around 6000 or 5600 speeds should be possible on Ryzen 7000.
I have doubts this will change. Only one way to find out. Time to update and load up :)
Tomorrow: Thankfully there's no performance loss from running 1:2, since the Infinity Fabric is decoupled from the get-go, unlike on AM4, where running 1:2 was very ill-advised.
That being said, there's mostly no performance uplift either.
No loss from the FCLK, but UCLK:MCLK/2 is pretty noticeable.
Tomorrow: If you need 4 modules of DDR5, that means you need more than 96GB. Most people don't need more than that, and I maintain that 4-DIMM DDR5 boards should be workstation-focused. Gaming and mainstream boards should incorporate extra M.2 or something similar instead of all four slots for RAM.
That's not how bandwidth works. You can't take two DIMMs away and give them to an M.2 PCIe slot... unless you mean using the physical space. In that case, well, you run into the same problem motherboards have right now: not enough PCIe lanes to give out.
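To put rough numbers on that, here is a minimal sketch of the commonly reported AM5 lane budget; the split below is the typical Ryzen 7000 desktop allocation, cited as an assumption rather than anything from MSI's release notes:

```python
# Commonly reported PCIe 5.0 lane split for a Ryzen 7000 CPU on AM5
# (typical desktop allocation, cited here as an assumption).
cpu_lanes = {
    "x16 PEG slot (GPU)": 16,
    "primary M.2 slot": 4,
    "second M.2 slot or USB4": 4,
    "chipset downlink": 4,
}
print(f"CPU lanes already spoken for: {sum(cpu_lanes.values())}")  # 28 -- nothing spare
```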
#8
Panther_Seraphin
Chaitanya: Buildzoid has a video from a couple of days ago trying DDR5-7000+ memory on a Ryzen board. Hopefully this means that using 4 modules at around 6000 or 5600 speeds should be possible on Ryzen 7000.
I have been running 6000 on 4 DIMMs basically since launch on a 7600X, and I haven't been on anything later than AGESA 1.0.0.3.

I am looking at doing a BIOS update and seeing where it can go.
#9
ir_cow
Panther_Seraphin: I have been running 6000 on 4 DIMMs basically since launch on a 7600X, and I haven't been on anything later than AGESA 1.0.0.3.

I am looking at doing a BIOS update and seeing where it can go.
I bet that SoC/VDDIO-Mem voltage is pretty high. Might want to check that before your CPU melts down.

4x single-rank is doable. My personal HWBot record is using 4x 6200, but I wouldn't run that daily due to the voltage required to get it stable. UCLK:MCLK/2 (Gear 2) could be used instead, but that defeats the purpose of the higher performance.
#10
Panther_Seraphin
ir_cow: I bet that SoC/VDDIO-Mem voltage is pretty high. Might want to check that before your CPU melts down.

4x single-rank is doable. My personal HWBot record is using 4x 6200, but I wouldn't run that daily due to the voltage required to get it stable. UCLK:MCLK/2 (Gear 2) could be used instead, but that defeats the purpose of the higher performance.
#12
Panther_Seraphin
ir_cow: @Panther_Seraphin you're a lucky one :) Won the silicon lottery.
Definitely feel like it a little. I do want to eventually get a 7800X3D, but I guarantee the IMC I get on that will be absolutely shambolic, knowing my run of luck.

I do need to invest a lot more time in tweaking and getting the absolute best, as I have pretty much just thrown settings at it and seen what sticks.
#13
Tomorrow
ir_cow: That's not how bandwidth works. You can't take two DIMMs away and give them to an M.2 PCIe slot... unless you mean using the physical space. In that case, well, you run into the same problem motherboards have right now: not enough PCIe lanes to give out.
That's exactly what I mean. Instead of cooking an M.2 slot between the CPU and GPU, it could be relocated next to the DIMM slots, like ASUS has done on some of their boards. Though theirs is vertical, not horizontal. The extra space could be utilized by other features too, like physical switches, a 90-degree-angled ATX 24-pin connector, or moving the EPS 8-pin connectors there.
#14
AusWolf
ir_cow: No loss from the FCLK, but UCLK:MCLK/2 is pretty noticeable.
So, as usual, it's better to have lower-speed memory with UCLK at 1:1 than higher-speed memory with UCLK at 1:2. That's what I thought.
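A rough sketch of the clocks involved makes the trade-off concrete, assuming the usual AM5 definitions (MCLK is half the DDR5 transfer rate, and UCLK runs at either MCLK or MCLK/2):

```python
# MCLK is half the DDR5 transfer rate; UCLK (the memory controller clock)
# runs at MCLK in 1:1 mode or MCLK/2 in 1:2 (Gear 2) mode.
def clocks(transfer_rate_mts: int, divider: int) -> tuple[float, float]:
    mclk = transfer_rate_mts / 2
    return mclk, mclk / divider

for label, (rate, div) in {
    "DDR5-6000 at 1:1": (6000, 1),
    "DDR5-8000 at 1:2": (8000, 2),
}.items():
    mclk, uclk = clocks(rate, div)
    print(f"{label}: MCLK {mclk:.0f} MHz, UCLK {uclk:.0f} MHz")
# DDR5-6000 at 1:1 keeps UCLK at 3000 MHz; DDR5-8000 at 1:2 drops it to 2000 MHz,
# which is why the slower kit run 1:1 often wins overall.
```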
#15
TheLostSwede
News Editor
Tomorrow: That's exactly what I mean. Instead of cooking an M.2 slot between the CPU and GPU, it could be relocated next to the DIMM slots, like ASUS has done on some of their boards. Though theirs is vertical, not horizontal. The extra space could be utilized by other features too, like physical switches, a 90-degree-angled ATX 24-pin connector, or moving the EPS 8-pin connectors there.
You can't just move things around on a PCB; there are pretty strict rules about what goes where so you don't get EMI issues, interference, crosstalk, etc.
With PCIe 5.0 and DDR5, those design rules have gotten even tighter, so you don't want to go shoving noisy 12 V power right next to those parts.
#16
Tomorrow
TheLostSwede: You can't just move things around on a PCB; there are pretty strict rules about what goes where so you don't get EMI issues, interference, crosstalk, etc.
With PCIe 5.0 and DDR5, those design rules have gotten even tighter, so you don't want to go shoving noisy 12 V power right next to those parts.
It's doable. That's my point.
#17
TheLostSwede
News Editor
Tomorrow: It's doable. That's my point.
Some of your suggestions, yes, but not all of them at the same time.
There's a reason why motherboards are designed the way they are; it's not as if these companies put components where they are at random.
Some placements have very strict tolerances, and if you screw that up by 0.5 mm, the board won't work.

For example, that front-mounted M.2 slot requires a redriver/retimer to work; those add cost, but I guess you don't mind paying $5 extra in the end for each moved M.2 slot?
#18
Tomorrow
Yes, so let's stick with what we have and not have any different options because it costs a bit more.
#19
sLowEnd
TheLostSwede: Some of your suggestions, yes, but not all of them at the same time.
There's a reason why motherboards are designed the way they are; it's not as if these companies put components where they are at random.
Some placements have very strict tolerances, and if you screw that up by 0.5 mm, the board won't work.

For example, that front-mounted M.2 slot requires a redriver/retimer to work; those add cost, but I guess you don't mind paying $5 extra in the end for each moved M.2 slot?
If he's just asking for M.2 slots to be moved beside the DIMM slots, I don't see why it wouldn't be feasible for any reason other than maybe ATX form factor constraints. Dell seems to have no issue sticking those slots beside the DIMMs.

e.g.
www.tweaktown.com/reviews/10343/alienware-aurora-r15-gaming-pc/index.html
#20
TheLostSwede
News Editor
sLowEnd: If he's just asking for M.2 slots to be moved beside the DIMM slots, I don't see why it wouldn't be feasible for any reason other than maybe ATX form factor constraints. Dell seems to have no issue sticking those slots beside the DIMMs.

e.g.
www.tweaktown.com/reviews/10343/alienware-aurora-r15-gaming-pc/index.html
It all comes down to the overall motherboard layout, though. Certain things will add cost, and certain things aren't doable simply due to routing that would mess up something else.
In the case of the M.2 slots, it's doable, but it will most likely, as I said, require redrivers/retimers, and that adds cost for the end user. That said, many motherboards have them even on standard ATX boards, as it's the only way to get around the component placement limitations of the form factor.
Routing PCIe lanes from the CPU past/under the DDR5 DIMMs isn't the best of ideas, I'd say, and I presume Dell hooks up the M.2 slots to the chipset.
The best "picture" of the motherboard in that thing appears to be this render, so it's not possible to tell what components are used, and the redrivers/retimers could be on the flip side of the motherboard anyhow.

Tomorrow: Yes, so let's stick with what we have and not have any different options because it costs a bit more.
Well, people are already complaining that motherboards are stupidly expensive.
I agree that the desktop PC needs an overhaul from the ground up, as the ATX form factor isn't really fit for purpose today.
However, just moving things around, especially when you don't appear to understand much about PCB layout and design, could cause issues elsewhere.
We're already at 8- to 10-layer boards on the high end, and with PCIe 5.0 there's no way to go back to fewer layers, largely due to noise/interference.
PCB traces are also getting shorter and shorter because of it, which means more limited options in terms of where PCIe 5.0 expansion slots and M.2 connectors can be fitted on the boards. Unless someone comes up with a solution that allows high-speed interfaces to work over longer PCB traces without increasing cost, I don't see any motherboard maker moving bits around just for the heck of it, at least not until we have a new motherboard form factor.
#22
bl4C3y3
ir_cow: Only one way to find out. Time to update and load up
In my case, it's not really about DDR5-7000+ speeds, but more about boot times related to memory training on a MAG B650 Tomahawk with DDR5-6000 CL30 (EXPO 1):
  • BIOS 1.61: stable with "memory context restore" enabled (not training memory on each boot) > boot time around 20s
  • BIOS 1.60: not stable with "memory context restore"
  • BIOS 1.72: not stable with "memory context restore" + "high memory efficiency" enabled
  • BIOS 1.74: stable with "memory context restore" + "high memory efficiency" enabled > boot time around 20s, but the memory light only shows for a second or so
#23
Panther_Seraphin
Tomorrow: If you need 4 modules of DDR5, that means you need more than 96GB. Most people don't need more than that, and I maintain that 4-DIMM DDR5 boards should be workstation-focused. Gaming and mainstream boards should incorporate extra M.2 or something similar instead of all four slots for RAM.
Please, for the love of God, no! Leave the 4 DIMMs alone. At least that way, when quad-channel memory comes to the mainstream, we will already have the space set aside for it without losing MORE features.

The fact that PCIe lanes are SO limited on the mainstream is pretty infuriating, actually, with the ability to do things like PCIe bifurcation for quad NVMe drives on a single x16 slot, and with 10/40 Gbps networking being relatively easy to get into as well.

I'm just looking back to the days of things like the X58 UD9 and imagining the mad setups people could do with that amount of PCIe lanes, even if they were all only Gen 4 at the moment.
#24
Tomorrow
Quad-channel will not come to the mainstream. Certainly not on existing boards and CPUs via a firmware update.
Also, you misunderstand me. I'm not saying that 4-DIMM boards should not exist.
I'm saying they should be properly geared towards their intended use case, meaning 192GB support, 10G LAN, etc.

Currently these slots are wasted, even for people buying standard 2x16GB kits. How many of them really need to add another 2x16GB in the future for games?
I say very few. Most will never populate these empty slots.
#25
AusWolf
Tomorrow: Quad-channel will not come to the mainstream. Certainly not on existing boards and CPUs via a firmware update.
Also, you misunderstand me. I'm not saying that 4-DIMM boards should not exist.
I'm saying they should be properly geared towards their intended use case, meaning 192GB support, 10G LAN, etc.

Currently these slots are wasted, even for people buying standard 2x16GB kits. How many of them really need to add another 2x16GB in the future for games?
I say very few. Most will never populate these empty slots.
Especially if you consider that populating all 4 slots usually results in lower achievable speeds or higher latency. Most gamers prefer speed over capacity. It's enough to have 4 DIMMs on professional boards for people who actually need the higher capacity.