
Apple A17 Pro SoC Within Reach of Intel i9-13900K in Single-Core Performance

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,077 (3.16/day)
Location
South East, UK
An Apple "iPhone16,1" was put through the Geekbench 6.2 gauntlet earlier this week—according to database info this pre-retail sample was running a build of iOS 17.0 (currently in preview) and its logic board goes under the "D83AP" moniker. It is interesting to see a unit hitting the test phase only a day after the unveiling of Apple's iPhone 15 Pro and Max models—the freshly benched candidate seems to house an A17 Pro system-on-chip. The American tech giant has set lofty goals for said flagship SoC, since it is "the industry's first 3-nanometer chip. Continuing Apple's leadership in smartphone silicon, A17 Pro brings improvements to the entire chip, including the biggest GPU redesign in Apple's history. The new CPU is up to 10 percent faster with microarchitectural and design improvements, and the Neural Engine is now up to 2x faster."

Tech news sites have pored over the leaked unit's Geekbench 6.2 scores—its A17 Pro chipset (TSMC N3) surpasses the previous-generation A16 Bionic (TSMC N4) by 10% in the single-core stakes. Apple revealed this performance uplift during this week's iPhone "Wonderlust" event, so the result is not at all surprising. The multi-core improvement is a mere ~3%, suggesting that only minor tweaks have been made to the underlying microarchitecture. The A17 Pro beats Qualcomm's Snapdragon 8 Gen 2 in both categories—2914 vs. 2050 (single-core) and 7199 vs. 5405 (multi-core), respectively. Springtime leaks indicated that the "A17 Bionic" was able to keep up with high-end Intel and AMD desktop CPUs in terms of single-core performance—the latest Geekbench 6.2 entry semi-confirms those claims. The A17 Pro's single-threaded performance is within 10% of Intel's Core i9-13900K and AMD's Ryzen 9 7950X. Naturally, Apple's plucky mobile chip cannot put up a fight in the multi-core arena; additionally, Tom's Hardware notes another catch: "A17 Pro operates at 3.75 GHz, according to the benchmark, whereas its mighty competitors work at about 5.80 GHz and 6.0 GHz, respectively."
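For anyone who wants to sanity-check the percentages above, here is a minimal back-of-the-envelope sketch (Python) using only the figures quoted in this post; the i9-13900K score and the clock speeds are typical values rather than a specific database entry, so treat the output as illustrative rather than definitive.

```python
# Rough comparison using the figures quoted above (illustrative only;
# individual Geekbench 6.2 runs vary considerably).
a17_pro = {"sc": 2914, "mc": 7199, "ghz": 3.75}
sd8_gen2 = {"sc": 2050, "mc": 5405}
i9_13900k_sc, i9_13900k_ghz = 3200, 5.8  # assumption: a typical validated single-core run and boost clock

def pct_lead(a, b):
    """Percentage by which score a leads score b."""
    return (a / b - 1) * 100

print(f"A17 Pro vs. SD 8 Gen 2, single-core: +{pct_lead(a17_pro['sc'], sd8_gen2['sc']):.0f}%")
print(f"A17 Pro vs. SD 8 Gen 2, multi-core:  +{pct_lead(a17_pro['mc'], sd8_gen2['mc']):.0f}%")
print(f"A17 Pro deficit vs. a ~{i9_13900k_sc} i9-13900K run: "
      f"{pct_lead(i9_13900k_sc, a17_pro['sc']):.0f}%, at {i9_13900k_ghz / a17_pro['ghz']:.2f}x the clock")
```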



View at TechPowerUp Main Site | Source
 
Joined
Jun 27, 2017
Messages
278 (0.10/day)
Processor Intel i5-13600k
Motherboard MSI MEG Z690i Unify
Cooling Noctua NH-C14S
Memory G.Skill Ripjaws V F5-5600J2834F32GX2-RS5W 64GB
Video Card(s) Asus RX6800XT TUF
Storage Samsung 980 Pro 500GB x2
Display(s) Samsung U32H850
Case Streacom DA6 XL chrome
Audio Device(s) Denon PMA-50
Power Supply Corsair SF750
Mouse Logitech MX Master 3
Keyboard Microsoft Surface
Software Win 11 Pro
Joined
Oct 19, 2022
Messages
50 (0.06/day)
It is absolutely incredible to me how powerful and efficient a phone can be.
I feel so small in comparison with the brilliant engineers who work on such projects.
 
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
There's a lot of variance in the scores of the 7950X and the 13900K. Just looking at the first couple of pages of results, the 7950X has validated scores ranging from 2858 to 3253, while the 13900K falls between 2993 and 3475. Nevertheless, Apple's cores are unmatched for IPC, and 3.75 GHz is a pretty high speed for their extremely wide cores, but that figure suggests this generation's performance increase comes entirely from the higher clock.
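A crude way to visualise the per-clock gap described here is to divide each quoted single-core score by the clock it was achieved at. This is only a rough IPC proxy (Geekbench subtests don't all scale with frequency), and the 7950X boost clock below is an assumed typical value rather than something read out of the database entries.

```python
# Crude score-per-GHz comparison using the ranges quoted above.
# Not true IPC, but enough to show why 3.75 GHz vs. ~5.7-5.8 GHz matters.
chips = {
    "A17 Pro":        {"score": 2914,              "ghz": 3.75},
    "Core i9-13900K": {"score": (2993 + 3475) / 2, "ghz": 5.8},  # midpoint of the quoted range
    "Ryzen 9 7950X":  {"score": (2858 + 3253) / 2, "ghz": 5.7},  # assumption: typical single-core boost
}

for name, c in chips.items():
    print(f"{name:>15}: {c['score'] / c['ghz']:.0f} points per GHz")
```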
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
Meh, let's try again with quad channel LPDDR5T @9600 MT/s & 3nm nodes!
 
Joined
Jan 3, 2021
Messages
3,606 (2.49/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Set the 13900K to 3.77 GHz and see who comes out on top.
In that case, the A17 would win by a landslide in single-threaded performance. Still, the clock speed is a part of the design, and equalizing clocks doesn't make sense for two very different designs.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Meh, let's try again with quad channel LPDDR5T @9600 MT/s & 3nm nodes!
It helps a lot when memory is soldered to the board and is physically a lot closer to the CPU cores.
 
Joined
May 9, 2012
Messages
8,545 (1.85/day)
Location
Ovronnaz, Wallis, Switzerland
System Name main/SFFHTPCARGH!(tm)/Xiaomi Mi TV Stick/Samsung Galaxy S23/Ally
Processor Ryzen 7 5800X3D/i7-3770/S905X/Snapdragon 8 Gen 2/Ryzen Z1 Extreme
Motherboard MSI MAG B550 Tomahawk/HP SFF Q77 Express/uh?/uh?/Asus
Cooling Enermax ETS-T50 Axe aRGB /basic HP HSF /errr.../oh! liqui..wait, no:sizable vapor chamber/a nice one
Memory 64gb DDR4 3600/8gb DDR3 1600/2gbLPDDR3/8gbLPDDR5x/16gb(10 sys)LPDDR5 6400
Video Card(s) Hellhound Spectral White RX 7900 XTX 24gb/GT 730/Mali 450MP5/Adreno 740/Radeon 780M 6gb LPDDR5
Storage 250gb870EVO/500gb860EVO/2tbSandisk/NVMe2tb+1tb/4tbextreme V2/1TB Arion/500gb/8gb/256gb/4tb SN850X
Display(s) X58222 32" 2880x1620/32"FHDTV/273E3LHSB 27" 1920x1080/6.67"/AMOLED 2X panel FHD+120hz/7" FHD 120hz
Case Cougar Panzer Max/Elite 8300 SFF/None/Gorilla Glass Victus 2/front-stock back-JSAUX RGB transparent
Audio Device(s) Logi Z333/SB Audigy RX/HDMI/HDMI/Dolby Atmos/KZ x HBB PR2/Moondrop Chu II + TRN BT20S
Power Supply Chieftec Proton BDF-1000C /HP 240w/12v 1.5A/USAMS GAN PD 33w/USAMS GAN 100w
Mouse Speedlink Sovos Vertical-Asus ROG Spatha-Logi Ergo M575/Xiaomi XMRM-006/touch/touch
Keyboard Endorfy Thock 75%/Lofree Edge/none/touch/virtual
VR HMD Medion Erazer
Software Win10 64/Win8.1 64/Android TV 8.1/Android 14/Win11 64
Benchmark Scores bench...mark? i do leave mark on bench sometime, to remember which one is the most comfortable. :o
looking at the last pic ... i am more impressed by the SD8 Gen 2 results ... which is also clocked 160 MHz higher on the prime Cortex-X3 core than the one in my S23 (due to the "For Galaxy" edition)

knowing that it's been out for some time now ... less than 900 pts more single-core, while clocked .57 GHz higher, is a bit "yawn", but +1794 pts in multi is not bad tho ... (not that it's needed in a phone ... the SD8G2 already feels overkill :laugh: )


edit, 3.19? the std SD8G2 is 3.2 on the X3 (3.36 in the "For Galaxy" variant) bah, not that 0.01 GHz is an issue :laugh:
 
Joined
Jan 3, 2021
Messages
3,606 (2.49/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
It helps a lot when memory is soldered to the board and is physically a lot closer to the CPU cores.
The absence of connectors helps too.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
I understand that Apple appears to have chosen greater-IPC xtors in some respects for this arch, but it's worth stating that they also generally tend to favor max clock-speed improvements for the cost savings. In that respect the improvement from N4P->N3B is less than 9%? I suppose it beats the earlier rumblings of a straight 3.7 GHz, which would have been only ~7%, but it's still nothing to write home about. I can understand why Apple may be upset by the results (as has been the rumor for some time).

Not a slight toward the chip at all, rather the process (the mobile-leaning density-over-speed improvement); that's pretty awful. I'm sure N3E will help eke out the required generational performance (twice the clock improvement of N3B at the cost of some density), but one has to factor in the cost/fab space and question how many products it will truly be worth it for (at least until the price comes down or fab allocation becomes less of an issue).

Look at TSMC's press release for N4X, which claims a "4% improvement over N4P at 1.2v"; to save you some time, that implies N4X can hit 3600 MHz at 1.2 V, with the capability of being driven even harder. How hard, I don't know, but I would imagine at least 1.3 V (think of chips [CPUs/RV790] in the past that ran ~1.3125 V) isn't out of the question, maybe even 1.35 V (which appears to be the Fahrenheit 451 for this type of use case; think about AMD's SoC voltage problems).
Personally, I always think back to ~1.325 V being the sweet spot for many chips over several different nodes and generations; the ol' 4.7 GHz that Intel and AMD were stuck at for some time, granted that's anecdotal.

It's also worth noting, while it has often been lost in the shuffle of the news cycles, that N4X has been said to achieve higher clock speeds than N3B, for which we now have some context. So probably between ~3.7-4.0 GHz. It could potentially perform very similarly to N3E, just with worse density and power characteristics.
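For anyone following along, the clock percentages in this post work out roughly as below. The ~3.46 GHz N4P baseline is an assumption (it's what the ~7% / "less than 9%" figures above imply for the A16), and the N4X number simply applies the 4%-over-N4P claim from the press release.

```python
# Working out the clock deltas discussed above.
# Assumption: A16 Bionic (N4P) peaks at ~3.46 GHz, the implied baseline.
a16_n4p = 3.46        # GHz, assumed baseline
a17_n3b = 3.75        # GHz, from the Geekbench entry
rumoured_n3b = 3.70   # GHz, the earlier "straight 3.7 GHz" rumblings

print(f"N4P -> N3B (actual):   +{(a17_n3b / a16_n4p - 1) * 100:.1f}%")      # ~8.4%, i.e. "less than 9%"
print(f"N4P -> N3B (rumoured): +{(rumoured_n3b / a16_n4p - 1) * 100:.1f}%")  # ~7%

# TSMC's N4X claim: ~4% over N4P at 1.2 V -> roughly 3.6 GHz from the same baseline.
print(f"N4P -> N4X (claimed):   {a16_n4p * 1.04:.2f} GHz")
```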

So, just think about that for a hot second. I know this is a space for talking about Apple and the A17, but I will be that guy that interjects about other uses.

Think about N31/N32, running at 1.1 V and 1.15 V respectively on N5, or NVIDIA's tuning/amalgamation of the process (which is probably denser but limited to ~1.08 V for optimal leakage scaling on N5) and their intended power/clock constraints (which, I will grant, some have circumvented for N31 and which the 7700 XT can somewhat do within its allowable max TDP).

At some point I just have to wonder if N3E is worth it at any point in the near future. I say this because if you look at something like N32, which I would argue is bandwidth-balanced at ~2900/20000, you could very realistically (if AMD can fix whatever gaming clock-speed problems exist in RDNA3) jack that up to ~3500/24000 on N4X and literally replace the 7900 XT / compete with the 4080; not a bad place to be cost/performance-wise even if they lose some power efficiency. The argument I will continue to make is that once a card exceeds 225 W (one 8-pin connector, easy to cool, cheap PCBs/caps/etc. can be used), you may as well use the headroom up to at least 300 W, if not 375 W, which N32 currently does not. It would be one thing if these cards were shorter/thinner like mid-range cards of yore, but many of them are already engineered to the level of a 7900-class card, with hot spots not exceeding 80 °C. That gives them ~20% thermal headroom assuming the same designs. Assuming N4X can scale well to those higher voltages, it appears a no-brainer IMHO.

N31 is a little more nuanced, as 24 Gbps memory would probably limit the card to below 3300 MHz, and it would still play second fiddle to the 4090 even if it halves the gap. What they need in that case (regardless of memory used) is to double up the Infinity Cache to make it similar to NVIDIA's L2 bandwidth. While I have no idea if they plan to continue using MCDs going forward into the far future, two things remain true: the cache can and needs to be doubled, and it should transition to 4 nm when economically feasible. Even on 6 nm, the (16 MB + MC) MCD is 37.5 mm². The 64 MB on X3D CPUs is ~36 mm². This implies they could add 16 MB of cache while staying within the parameters of why the design makes sense. That is to say, an MCD has to be smaller than the smallest chip with that size bus (e.g. 256-bit and <200 mm²; hence an MCD should be less than 50 mm²). I understand the rumors have always pointed to stacking (and maybe GDDR6W), but I wonder if that's truly necessary. With that amount of cache they could even use cheaper memory, as theoretically doubling the L3 alone should allow it to scale to the limits of the process. It could be a true 4090 competitor. Sure, it'd probably use a metric ton of power, since 18432/16384 shaders at lower voltage/clocks will always be proportionally far more efficient than 12288 at higher clocks, but we are living in a world with a freaking 600 W power cable (+75 W from the slot). 3 8-pin is also 675w. If that's where we're going, we might as well freaking do it. Nobody will force you to buy it, but one thing is for sure... it would be a hell of a lot cheaper up-front.
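The cache-area argument works out as follows if you take the figures above at face value (37.5 mm² for a 16 MB MCD on 6 nm, ~36 mm² for 64 MB of X3D L3 on a comparable node, and the ~50 mm² per-MCD budget implied by the <200 mm² / 256-bit rule of thumb). A rough sketch, using only those quoted numbers:

```python
# Back-of-the-envelope check of the MCD area argument above (figures as quoted in the post).
mcd_area    = 37.5              # mm^2, current MCD: 16 MB cache + memory controller (6 nm)
x3d_l3_area = 36.0              # mm^2, ~64 MB of stacked L3 on an X3D CPU (comparable node)
area_per_mb = x3d_l3_area / 64  # ~0.56 mm^2 per MB of SRAM
extra_16mb  = 16 * area_per_mb  # ~9 mm^2 for another 16 MB of cache
budget      = 200 / 4           # smallest 256-bit die (<200 mm^2) split across 4 MCDs -> ~50 mm^2 each

print(f"Extra 16 MB: ~{extra_16mb:.1f} mm^2")
print(f"Grown MCD:   ~{mcd_area + extra_16mb:.1f} mm^2 (budget ~{budget:.0f} mm^2)")
```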

That's it. That's my spiel. I know we keep hearing about N43/44 (and that they're perhaps monolithic), and that's fine. Maybe they are (similar to) the things I ponder, maybe they aren't. But in a world where a 4090 is at least $1600, a 4080 $1100, and a 4070 Ti $800, while the competing stack (7900 XTX/7800 XT/7700 XT) is, or will soon be, literally half the price, I certainly wouldn't mind a refresh of the latter to compete with the former. It may sound incredibly strange, but it's absolutely possible, as it would be AMD doing what AMD does: competing one tier up the stack at their own tier's price. E.g. a 4090 competitor for the price of a 4080, a 4080 competitor for the price of a 4070 Ti (whatever those prices are by the time those products could launch).

In a world where something like a 7800 XT will probably power the PS5 Pro, which appears to satisfy most people (~4K60) and will last us another 3-5 years, and the PS6 is maybe 2-2.25x that (perhaps similar to or a little faster than a 4090), lasting another 4-8 years AFTER THAT, I have to ask if we truly need a $2000 3 nm NVIDIA part, whatever its capabilities... or do we need what I described: the smallest 256-bit/384-bit parts that can achieve their highest performance potential at the most economical price, provided they are in the realm of achieving that level of performance. Because of the slowing of process tech, the idea of long-term future-proofing at the high end, just like the 'mid-range' with the 7800 XT, at a relatively cheaper price isn't completely out of the question if one of the companies (which won't be NVIDIA) decides to make it so.


Source: N4X press release: https://pr.tsmc.com/english/news/2895
 
Joined
Nov 23, 2020
Messages
543 (0.36/day)
Location
Not Chicago, Illinois
System Name Desktop-TJ84TBK
Processor Ryzen 5 3600
Motherboard Asus ROG Strix B350-F Gaming
Cooling ARCTIC Liquid Freezer II 120mm, Noctua NF-F12
Memory B-Die 2x8GB 3200 CL14, Vengeance LPX 2x8GB 3200 CL16, OC'd to 3333 MT/s C16-16-16-32 tRC 48
Video Card(s) PNY GTX 690
Storage Crucial MX500 1TB, MX500 500GB, WD Blue 1TB, WD Black 2TB, WD Caviar Green 3TB, Intel Optane 16GB
Display(s) Sceptre M25 1080p200, ASUS 1080p74, Apple Studio Display M7649 17"
Case Rosewill CRUISER Black Gaming
Audio Device(s) SupremeFX S1220A
Power Supply Seasonic FOCUS GM-750
Mouse Kensington K72369
Keyboard Razer BlackWidow Ultimate 2013
Software Windows 10 Home 64-bit, macOS 11.7.8
Benchmark Scores are good
An Apple "iPhone16,1" was put through the Geekbench 6.2 gauntlet earlier this week—according to database info this pre-release sample was running a build of iOS 17.0 (currently in preview) and its logic board goes under the "D83AP" moniker. It is interesting to see a 16-series unit hitting the test phase only a day after the unveiling of Apple's iPhone 15 Pro and Max models—the freshly benched candidate seems to house an A17 Pro SoC as well.
@T0@st iPhone16,1 is a 15 Pro or Pro Max. The iPhoneX,Y identifiers don't match up with the series number. The existing 14-series are iPhone15,x, the 13 series was iPhone14,x, etc. The reason for the mismatch is the S models: the iPhone 6 is iPhone7,x, the 6S is iPhone8,x, the 7 is iPhone9,x, etc. It was partially realigned with the 8 and X sharing the same numbers (10,x) and the XS not increasing the series but moving to 11,x. However, that means the 11 series is 12,x and so on.
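For anyone keeping track, the offset described above boils down to a small lookup table. A partial sketch follows; the marketing names are the commonly reported ones, and the exact sub-model numbers (the ",1"/",2" parts) should be treated as indicative.

```python
# Partial map of Apple hardware identifiers to marketing names,
# following the offset described above (sub-model numbers are indicative).
IDENTIFIER_TO_MODEL = {
    "iPhone7,2":  "iPhone 6",
    "iPhone8,1":  "iPhone 6S",
    "iPhone9,1":  "iPhone 7",
    "iPhone10,1": "iPhone 8",
    "iPhone10,3": "iPhone X",
    "iPhone11,2": "iPhone XS",
    "iPhone12,1": "iPhone 11",
    "iPhone14,5": "iPhone 13",
    "iPhone15,2": "iPhone 14 Pro",
    "iPhone16,1": "iPhone 15 Pro",      # the unit in the Geekbench entry
    "iPhone16,2": "iPhone 15 Pro Max",
}

print(IDENTIFIER_TO_MODEL.get("iPhone16,1", "unknown identifier"))
```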
 
Joined
Dec 26, 2006
Messages
3,862 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
But can it play windows??
 
Joined
Mar 5, 2007
Messages
198 (0.03/day)
Location
Ålesund / Norway
System Name Dark Matter / Mørk Materie (In Norwegian)
Processor AMD Ryzen 7 7700 (CPU Core Ratio: 'AI Enhanced' & OC: 'Curve Optimizer' @ -40 & 'PBO2' @ +200 MHz)
Motherboard ASUS ROG Strix B650E-I Gaming WiFi (AMD Socket AM5) (Mini-ITX)
Cooling CPU: EK Waterblocks EK-Nucleus AIO CR240 Lux (D-RGB) & Thermal Grizzly AM5 Contact & Sealing Frame
Memory Corsair Vengeance RGB Black DDR5 6000 MHz (PC5-48000) 2x16GB (AMD EXPO) (CL36 tuned to CL30 @ 1.4v)
Video Card(s) ASUS TUF Gaming GeForce RTX 3060 Ti 8GB V2 OC Edition (Overclocked +175 MHz Core @ +940 Mhz Memory)
Storage 1x Samsung 990 Pro 2TB & 1x Samsung 990 Pro 4TB (Both M.2 SSD)
Display(s) Dell S3220DGF (1800R Curved, VA Panel & 165 Hz Refresh Rate)
Case Phanteks Evolv Shift XT D-RGB (Black) (Modular)
Audio Device(s) ASUS ROG SupremeFX (Realtek ALC4080 Codec & Savitech SV3H712 Amplifier) (On Motherboard)
Power Supply Corsair SF600 Platinum (600w) (Modular) (SFX)
Mouse Logitech MX Anywhere 3S (Graphite)
Keyboard Logitech MX Keys Mini (Nordic) (Grey)
Software Microsoft Windows 11 Home (64-bit) (Norwegian)
Benchmark Scores Cinebench R23: 20.130 (Multi Core) (Single Cycle Run).
And still, iOS is handicapped to the maximum compared to what Android can do. Why have such power when Apple doesn't let many things actually use what the CPU can deliver?

Yeah sure, the ProRes video recording takes A LOT of power and gaming might also use a lot of power. But still.....
 

zatakan

New Member
Joined
Sep 15, 2023
Messages
1 (0.00/day)
I have somewhat limited knowledge of CPU design, but isn't this whole single-core performance (and performance-per-watt) uplift of Apple's SoCs coming from their huge caches, frequency gains from improved process nodes, and raised frequencies once they can attach a cooler or a fan? AMD's 7945HX matches the M2 Max's Cinebench R15 multi-core performance per watt with a third of the transistor count (yeah, I know the GPU and Neural Engine add to the transistor count), which should cost Apple a lot.

source: https://www.notebookcheck.net/M2-Max-vs-R9-7945HX_14975_14936.247596.0.html
 
Joined
Jul 1, 2011
Messages
364 (0.07/day)
System Name Matar Extreme PC.
Processor Intel Core i9-12900KS 5.2GHZ All P-Cores ,4.2GHZ All E-Cores & Ring 4.2GhZ bus speed 100.27
Motherboard NZXT N5 Z690 Wi-Fi 6E
Cooling CoolerMaster ML240L V2 AIO with MX6
Memory 4x16 64GB DDR4 3600MHZ CL15-19-19-36-55 G.SKILL Trident Z NEO
Video Card(s) Nvidia ZOTAC RTX 3080 Ti Trinity + overclocked 100 core 1000 mem. Re-pasted MX6
Storage WD black 1GB Nvme OS + 1TB 970 Nvme Samsung & 4TB WD Blk 256MB cache 7200RPM
Display(s) Lenovo 34" Ultra Wide 3440x1440 144hz 1ms G-Snyc
Case NZXT H510 Black with Cooler Master RGB Fans
Audio Device(s) Internal , EIFER speakers & EasySMX Wireless Gaming Headset
Power Supply Aurora R9 850Watts 80+ Gold, I Modded cables for it.
Mouse Onn RGB Gaming Mouse & Logitech G923 & shifter & E-Break Sim setup.
Keyboard GOFREETECH RGB Gaming Keyboard, & Xbox 1 X Controller & T-Flight Hotas Joystick
VR HMD Oculus Rift S
Software Windows 10 Home 22H2
Benchmark Scores https://www.youtube.com/user/matttttar/videos
WOW, no doubt the A17 Pro is extremely fast.
But if the i9-13900K were tested on iOS (I know it won't work, this is just to compare performance), it would blow past the A17 Pro. Note that the 13900K is tested on Windows, while Apple makes both the chip AND iOS, so the A17 Pro is 100% optimized for iOS. The same goes the other way for the A17 Pro: say it ran on Windows, then you would see a way, way lower score.
 
Joined
Sep 10, 2016
Messages
823 (0.27/day)
Location
Riverwood, Skyrim
System Name Storm Wrought | Blackwood (HTPC)
Processor AMD Ryzen 9 5900x @stock | i7 2600k
Motherboard Gigabyte X570 Aorus Pro WIFI m-ITX | Some POS gigabyte board
Cooling Deepcool AK620, BQ shadow wings 3 High Spd, stock 180mm |BQ Shadow rock LP + 4x120mm Noctua redux
Memory G.Skill Ripjaws V 2x32GB 4000MHz | 2x4GB 2000MHz @1866
Video Card(s) Powercolor RX 6800XT Red Dragon | PNY a2000 6GB
Storage SX8200 Pro 1TB, 1TB KC3000, 850EVO 500GB, 2+8TB Seagate, LG Blu-ray | 120GB Sandisk SSD, 4TB WD red
Display(s) Samsung UJ590UDE 32" UHD monitor | LG CS 55" OLED
Case Silverstone TJ08B-E | Custom built wooden case (Aus native timbers)
Audio Device(s) Onboard, Sennheiser HD 599 cans / Logitech z163's | Edifier S2000 MKIII via toslink
Power Supply Corsair HX 750 | Corsair SF 450
Mouse Microsoft Pro Intellimouse| Some logitech one
Keyboard GMMK w/ Zelio V2 62g (78g for spacebar) tactile switches & Glorious black keycaps| Some logitech one
VR HMD HTC Vive
Software Win 10 Edu | Ubuntu 22.04
Benchmark Scores Look in the various benchmark threads
The big thing that everyone seems to be forgetting is the additional width in the decode block the A17 likely has for throughput vs. Intel and AMD. I remember when AnandTech did the deep dive on the A14 in 2020: it had an eight-wide decode block vs. Intel and AMD having four at most. Even if the x86 chips can't change, I fully expect that Apple has made their CPU design 12- or 16-wide on decode now, to keep going with their big-and-wide-over-max-frequency strategy.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Joined
May 3, 2019
Messages
2,138 (1.04/day)
System Name BigRed
Processor I7 12700k
Motherboard Asus Rog Strix z690-A WiFi D4
Cooling Noctua D15S chromax black/MX6
Memory TEAM GROUP 32GB DDR4 4000C16 B die
Video Card(s) MSI RTX 3080 Gaming Trio X 10GB
Storage M.2 drives WD SN850X 1TB 4x4 BOOT/WD SN850X 4TB 4x4 STEAM/USB3 4TB OTHER
Display(s) Dell s3422dwg 34" 3440x1440p 144hz ultrawide
Case Corsair 7000D
Audio Device(s) Logitech Z5450/KEF uniQ speakers/Bowers and Wilkins P7 Headphones
Power Supply Corsair RM850x 80% gold
Mouse Logitech G604 lightspeed wireless
Keyboard Logitech G915 TKL lightspeed wireless
Software Windows 10 Pro X64
Benchmark Scores Who cares
Yeah because you need a 13900k in your phone.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,425 (4.69/day)
Location
Kepler-186f
Processor 7800X3D -25 all core
Motherboard B650 Steel Legend
Cooling Frost Commander 140
Video Card(s) Merc 310 7900 XT @3100 core -.75v
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p
Case NZXT H710 (Red/Black)
Audio Device(s) Asgard 2, Modi 3, HD58X
Power Supply Corsair RM850x Gold
Yeah because you need a 13900k in your phone.

yeah it's a bit silly really, it would be better if they just cut its power in half to run what needs to run with no lag or skips, then get extra battery life that way. i know they already do that to some degree, but I mean take it to the next level.
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
The big thing that everyone seems to be forgetting is the additional width in the decode block the A17 likely has for throughput vs Intel and AMD. I remember when Anandtech did the deep dive on the A14 in 2020, it had an eight wide decode block vs intel and AMD having 4 max. Even if the x86 chips can't change, I fully expect that apple has made their CPU design have a 12 or 16 wide decode block now to keep going with their big and wide over max frequency strategy.
clamchowder is doing a fine job here ~
They're probably not as renowned as AT was even two decades back, but from what I can understand, their analysis is top notch!
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,560 (1.37/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 5700X3D
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 64GB DDR4-3600(4x16)
Video Card(s) MSI RTX 3070 Gaming X Trio
Storage ADATA Legend 2TB
Display(s) Samsung Viewfinity Ultra S6 (34" UW)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 24.04 LTS
The only question is why... I don't really give a crap about computational photography and AI/ML in my pocket. Running compute-intensive apps on the phone as-is is also stupid (from a usability perspective). Having full-blown AAA games on the go, running natively, isn't that impressive these days either. All it needs is a proper dock with a proper desktop mode, at minimum something like DeX. Then we'll talk.
 