
Would you pay more for hardware with AI capabilities?


  • Yes

    Votes: 2,086 7.3%
  • No

    Votes: 24,247 84.3%
  • Don't know

    Votes: 2,413 8.4%

  • Total voters
    28,746
  • Poll closed.

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,729 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
AI capabilities are becoming increasingly integrated into hardware devices, promising enhanced performance and functionality. However, this advanced technology often comes at a premium price. Would you pay more for hardware with AI features?
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,608 (2.90/day)
Location
Jyväskylä, Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X @ PBO +200 -20CO
Motherboard Asus ROG Crosshair VII Hero
Cooling Arctic Freezer 50, EKWB Vector TUF
Memory 32GB Kingston HyperX Fury DDR4-3466
Video Card(s) Asus GeForce RTX 3080 TUF OC 10GB
Storage A pack of SSDs totaling 3.2TB + 3TB HDDs
Display(s) 27" 4K120 IPS + 32" 4K60 IPS + 24" 1080p60
Case Corsair 4000D Airflow White
Audio Device(s) Asus TUF H3 Wireless / Corsair HS35
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518 + Asus ROG Strix Edge Nordic
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis
Nah. Personally, I don't find any use for AI other than making cool images.
 
Joined
Feb 20, 2019
Messages
8,205 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring hundreds of gigs or even terabytes of RAM for LLM datasets, hundreds of terabytes of fast storage for inference, and years of training on multiple petabytes of datasets to become useful.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value other than maybe some fun image manipulation that I can't really classify as AI. Yes, you can run a few things in CUDA on your GPU, but the real power is in a subscription to a cloud service, where 30 hours on your GPU can be replaced by a 2-minute wait for the cloud supercomputer to do the job instead.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
 

64K

Joined
Mar 13, 2014
Messages
6,747 (1.73/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) Temporary MSI RTX 4070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Temporary Viewsonic 4K 60 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
I have no use for AI or any interest in it for now so no.
 
Joined
Jun 21, 2022
Messages
120 (0.14/day)
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring TB of RAM, multi-TB of working memory on fast flash, and years of training on PB of datasets.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
I agree!

But the moment I can buy something that runs 100% locally (via GPU) and lets me talk to my PC and USE it that way, I'm 100% in.
 
Joined
Feb 24, 2023
Messages
2,932 (4.71/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / R9 380 2 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / MSi G2712
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / Corsair CX650M / DQ550ST [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
I'm dumber than some AIs, so I don't even know how to use them and get some profit out of them. So no. I'd prefer more computing power per watt instead.
 
Joined
Feb 20, 2019
Messages
8,205 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I agree!

But the moment I can buy something that runs 100% locally (via GPU) and lets me talk to my PC and USE it that way, I'm 100% in.
I feel like we're 2-5 generations of hardware away from that. Let's see what Blackwell can do and what it costs, but I think it would need to be a couple of orders of magnitude faster than a 4090. We have to get hours down to minutes for the big workloads, and minutes down to seconds for the real-time interaction/discussion sort of behaviour you're talking about. It can be done today, but only with those datacenter pools of multiple dedicated AI systems, either running tons of Quadro RTX/4090 cards or multiples of something AI-specific like the DGX-2.
 
Joined
Jun 12, 2020
Messages
66 (0.04/day)
Processor AMD Ryzen 9 9950X
Motherboard Asus ROG Strix B650E-E Gaming Wifi bios 3040 w/Agesa 1.2.0.2
Cooling Thermalright Phantom Spirit 120
Memory 64 GB Kingston FURY Beast DDR5-6000 CL30 - 2x32 GB
Video Card(s) ASRock Radeon RX 7900 XTX Phantom Gaming OC
Storage 1 x WD Black SN850 1TB, 1 x Samsung 990 PRO 2TB, 2 x Samsung 860 1TB, 1 x Seagate 16TB HDD
Display(s) Dell G3223Q 4K UHD
Case NZXT H7 Flow (2024) - All Black
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4080
Power Supply Thermalright TP 1000 Watt
Mouse Razer DeathAdder v3.0 PRO
Keyboard Razer BlackWidow V4
Software Windows 11 PRO 24H2 build 26100.1882
AI is starting to be a thing on phones, e.g. the Samsung Galaxy S24. I was pleasantly surprised when my Samsung Galaxy S22 was recently upgraded to One UI 6.1 and now has the same AI features as the S24 at no added cost, so it's basically running on a phone with no native AI hardware. The note/transcript assist is very impressive with the embedded voice recognition. The generative AI in photo editing is also very cool. So I voted NO in the poll. I do, however, fear AI features becoming subscription-based in the future, where you pay monthly for the features you want.
 
Joined
Jun 2, 2017
Messages
8,994 (3.31/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64, Steam, GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
The most interesting thing that I have seen in AI has to be that monitor from MSI that was at CES. It has an AI chip that will tell you from what area of the map the next enemy will come. For action RPGs and DOTA clones that could be very compelling.
 
Joined
Jan 31, 2010
Messages
5,535 (1.03/day)
Location
Gougeland (NZ)
System Name Cumquat 2021
Processor AMD RyZen R7 7800X3D
Motherboard Asus Strix X670E - E Gaming WIFI
Cooling Deep Cool LT720 + CM MasterGel Pro TP + Lian Li Uni Fan V2
Memory 32GB GSkill Trident Z5 Neo 6000
Video Card(s) Sapphire Nitro+ OC RX6800 16GB DDR6 2270Cclk / 2010Mclk
Storage 1x Adata SX8200PRO NVMe 1TB gen3 x4 1X Samsung 980 Pro NVMe Gen 4 x4 1TB, 12TB of HDD Storage
Display(s) AOC 24G2 IPS 144Hz FreeSync Premium 1920x1080p
Case Lian Li O11D XL ROG edition
Audio Device(s) RX6800 via HDMI + Pioneer VSX-531 amp Technics 100W 5.1 Speaker set
Power Supply EVGA 1000W G5 Gold
Mouse Logitech G502 Proteus Core Wired
Keyboard Logitech G915 Wireless
Software Windows 11 X64 PRO (build 23H2)
Benchmark Scores it sucks even more less now ;)
The most interesting thing that I have seen in AI has to be that monitor from MSI that was at CES. It has an AI chip that will tell you from what area of the map the next enemy will come. For action RPGs and DOTA clones that could be very compelling.
Until they call it cheating and block anyone using one of those monitors.

Personally, I have no use for it unless it can be used to make NPCs in games less boring, like the zombies in CP2077; otherwise it's a nope from me.
 
Joined
Nov 4, 2005
Messages
11,964 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I would pay slightly more to have it removed and to get more cache or some other useful hardware in its place.
 
Joined
Oct 26, 2018
Messages
222 (0.10/day)
Processor Intel i5-13600KF
Motherboard ASRock Z790 PG Lightning
Cooling NZXT Kraken 240
Memory Corsair Vengeance DDR5 6400
Video Card(s) XFX RX 7800 XT
Storage Samsung 990 Pro 2 TB + Samsung 860 EVO 1TB
Display(s) Dell S2721DGF 165Hz
Case Fractal Meshify C
Power Supply Seasonic Focus 750
Mouse Logitech G502 HERO
Keyboard Logitech G512
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring hundreds of gigs or even terabytes of RAM for LLM datasets, hundreds of terabytes of fast storage for inference, and years of training on multiple petabytes of datasets to become useful.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value other than maybe some fun image manipulation that I can't really classify as AI. Yes, you can run a few things in CUDA on your GPU, but the real power is in a subscription to a cloud service, where 30 hours on your GPU can be replaced by a 2-minute wait for the cloud supercomputer to do the job instead.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
From what I've seen, it's the training, which is only done once, that is intensive. Inference is relatively light work that a current GPU, or even a phone, could do.
Most users won't ever do any training, so there are really two different requirements.
From what I've seen, larger models only make bigger crappy artwork, or make up more detailed and authoritative-sounding, yet probably incorrect, text about an even wider range of subjects.

So I believe you are right about training models, but I disagree about the power needed to run inference locally, and to a lesser extent about the value of cloud-based AIs.
Out of curiosity, what LLM requires "hundreds of terabytes of fast storage for inference"?
What exactly is this "useful" AI you've encountered, and what do you use it for?
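To make the inference point concrete, below is a minimal local-inference sketch. It assumes the Hugging Face transformers library and the small distilgpt2 checkpoint; both are illustrative choices on my part, not anything from this thread, and any similarly sized model would do.

```python
# Minimal local text-generation sketch (assumes: pip install transformers torch).
# distilgpt2 is a ~350 MB checkpoint, small enough to run on a laptop CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "The most useful thing about running AI locally is",
    max_new_tokens=40,       # generate at most 40 new tokens
    num_return_sequences=1,  # one completion is enough here
)
print(result[0]["generated_text"])
```

No datacenter is involved: the weights download once and generation runs on whatever CPU or GPU is available, which is the scale of "inference is relatively light work" being argued here.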
 
Joined
Jun 30, 2008
Messages
243 (0.04/day)
Location
Sweden
System Name Shadow Warrior
Processor 7800x3d
Motherboard Gigabyte X670 Gaming X AX
Cooling Thermalright Peerless Assassin 120 SE ARGB White
Memory 64GB 6000Mhz cl30
Video Card(s) XFX 7900XT
Storage 8TB NVME + 4TB SSD + 3x12TB 5400rpm
Display(s) HP X34 Ultrawide 165hz
Case Fractal Design Define 7 (modded)
Audio Device(s) SMSL DL200 DAC / AKG 271 Studio / Moondrop Joker..
Power Supply Corsair hx1000i
Mouse Roccat Burst Pro
Keyboard Cherry Stream 3.0 SX-switches
VR HMD Quest 1 (OLED), Pico 4 128GB
Software Win11 x64
Like auto-correct, an AI will to a large degree get in the way.
 
Joined
Jan 2, 2024
Messages
527 (1.69/day)
Location
Seattle
System Name DevKit
Processor AMD Ryzen 5 3600 ↗4.0GHz
Motherboard Asus TUF Gaming X570-Plus WiFi
Cooling Koolance CPU-300-H06, Koolance GPU-180-L06, SC800 Pump
Memory 4x16GB Ballistix 3200MT/s ↗3800
Video Card(s) PowerColor RX 580 Red Devil 8GB ↗1380MHz ↘1105mV, PowerColor RX 7900 XT Hellhound 20GB
Storage 240GB Corsair MP510, 120GB KingDian S280
Display(s) Nixeus VUE-24 (1080p144)
Case Koolance PC2-601BLW + Koolance EHX1020CUV Radiator Kit
Audio Device(s) Oculus CV-1
Power Supply Antec Earthwatts EA-750 Semi-Modular
Mouse Easterntimes Tech X-08, Zelotes C-12
Keyboard Logitech 106-key, Romoral 15-Key Macro, Royal Kludge RK84
VR HMD Oculus CV-1
Software Windows 10 Pro Workstation, VMware Workstation 16 Pro, MS SQL Server 2016, Fan Control v120, Blender
Benchmark Scores Cinebench R15: 1590cb Cinebench R20: 3530cb (7.83x451cb) CPU-Z 17.01.64: 481.2/3896.8 VRMark: 8009
Not interested in AI-specific hardware (yet). Just like my jump from GCN 4.0 to RDNA 3, there's a MASSIVE gap in that technology before it catches my attention with anything good.
AI stuff has been a lot of fun since early 2020, but it requires prompt-input skill and a way of thinking that doesn't come naturally to me, which tells me either I've spent a great deal of time in the wrong parts of the Internets OR there's just that great a barrier to entry. My hardware isn't tailored to AI either. A modern GPU or AI/ML-specific peripheral is the key to gluing all of this together, and it's just not interesting enough right now. Maybe when I go to a newer GPU, and I'm not talking about a jump of 2 TFLOPS in FP64 but maybe 64, as is becoming the norm in datacenters. Even then, who is buying? I would rather stick to free services until there's a proper reason to invest, beyond what I see going on with Blockade Labs or AI-powered MAM like Axel AI. I would be wholeheartedly interested in running that last one locally if possible, but we're just not going to get there for a while.

More importantly, AI-application-capable ≠ AI-application-advantageous. It's been the same with gaming: hardware supporting some DirectX feature level or RT or some other performance metric. I don't care about this, and I don't like that features I don't care about are becoming the main focus, getting ham-fisted into these devices at an exotic premium. In my timeline, a Radeon HD 6570 was ~$55 USD, and even that was mildly annoying in a year where I needed to switch to a motherboard that didn't have AMD's 790GX-powered IGP garbage. Keeping the special-purpose silicon separate is clutter, but it's part of my philosophy, so I gotta eat the cost.

My RX 580 was bought at the peak of Ethereum mining, something very new at the time which I also didn't care about, because it was all cryptominer scammers crapping up the market like normies, sneakerheads, and every other invader and tourist that doesn't care about our hobbies. It was double MSRP. Mind you, I already considered $200 to be insane. Now we have sub-$100 Chinese hens coming home to roost on their AliExpress 2048SP miner cards and a bunch of screeching to FiNd My vBiOs. They can fry for all I care. :wtf:

My 7900 XT doesn't have any particular special feature to it, and at $710 USD it was a steal. It appears to be holding its value too. Being marketed as an AI device sounds like a plus if I care to play around with that, but how many of us actually do? How much of this AI junk is just markup on the device, and how much is it? Probably half of its MSRP.

In the future, it looks like whatever becomes interesting at whatever DX feature level or AI level is going to easily be $$$$, or what we know today as 4090 price territory. I don't care for it.
 
Joined
Jan 11, 2022
Messages
846 (0.82/day)
No. There is nothing I find interesting that my graphics card can't accelerate, or that can't be done in the cloud at the moment, and I highly doubt anything interesting will be introduced within 5 years that will change that.
 
Joined
Feb 3, 2023
Messages
212 (0.33/day)
I would pay (slightly) more for hardware without "ai" capabilities. My reasoning is that this capability only exists for two reasons:
- spamming with marketing buzzwords,
- running rudimentary local models for marketing purposes, like Microsoft undoubtedly does or will do with Windows 11.
It will not be capable of running anything useful to me for many generations, if ever. Therefore it's a waste of sand.
 
Joined
Jun 21, 2021
Messages
3,094 (2.50/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
This might have been an interesting question five years ago.

Today it's a joke. Apple put ML cores in their phones with the iPhone X/iPhone 8 series, back in 2017. So no, I didn't bother submitting a vote in this poll, but it might have been fun before the pandemic.

Many TPU discussions are several years behind the times, and a lot of AI functions are already here without any fanfare. Ever gotten a fraud alert from your credit card issuer or bank recently? That's AI in action, IRL.

So you have a mid-level Android smartphone and you don't dabble with new Internet functions. That's fine. It doesn't mean you're running AI-free. At some point you'll be swimming in AI situations even if you never paid a dime extra for AI.
 
Joined
Feb 20, 2019
Messages
8,205 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
From what I've seen, it's the training, which is only done once, that is intensive. Inference is relatively light work that a current GPU, or even a phone, could do.
Most users won't ever do any training, so there are really two different requirements.
From what I've seen, larger models only make bigger crappy artwork, or make up more detailed and authoritative-sounding, yet probably incorrect, text about an even wider range of subjects.

So I believe you are right about training models, but I disagree about the power needed to run inference locally, and to a lesser extent about the value of cloud-based AIs.
Out of curiosity, what LLM requires "hundreds of terabytes of fast storage for inference"?
What exactly is this "useful" AI you've encountered, and what do you use it for?
ChatGPT Plus is fantastic at finding pertinent details in a 1,500-page technical manual, or looking for common trends across multiple research papers.

The most useful functionality for AI in my industry (AEC) is in cleaning up point clouds. Even a small point-cloud survey can be 500 GB of raw data, and it's generally noisy data taken by lidar or photogrammetry that needs vetting to clean up. This used to be an algorithm-driven process that required a lot of human post-process cleanup and sanity-checking. AI tools can now turn a raw dataset into a useful 3D model much faster, removing a lot of human workload and rapidly converting half-terabyte files into useful meshes that take up a few hundred MB. Point clouds are typically cloud-hosted, so being able to convert hundreds of gigs into hundreds of megs before they leave the survey repository makes the files way easier to handle and move around, too.
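For a sense of what that older algorithm-driven step looks like, here is a minimal cleanup sketch using the open-source Open3D library; the file name scan.ply and every parameter value are hypothetical, picked only for illustration.

```python
import open3d as o3d

# Load a raw, noisy scan. "scan.ply" is a hypothetical file name.
pcd = o3d.io.read_point_cloud("scan.ply")

# Thin the cloud: keep one representative point per 2 cm voxel.
down = pcd.voxel_down_sample(voxel_size=0.02)

# Drop statistical outliers: points whose mean distance to their 20
# nearest neighbours is more than 2 standard deviations above average.
clean, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Turn the cleaned cloud into a mesh (Poisson surface reconstruction),
# which is typically far smaller than the raw point data.
clean.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(clean, depth=9)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```

The AI tools described above presumably replace hand-tuned thresholds like these with learned models, which is where the reduction in human cleanup and sanity-checking comes from.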

I'm sure there are other good uses for AI, but identifying shapes, outlines, and objects, and general pattern recognition, is a huge one for the sector I work in. The larger the training dictionary, the more accurate and useful the object recognition seems to be.
 
Joined
Sep 27, 2008
Messages
1,192 (0.20/day)
At present, I don’t use any software that demands it. I wouldn’t rule it out in the future though.
 

bug

Joined
May 22, 2015
Messages
13,722 (3.97/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Today, I would have to say "no". But I would like to remind people that even floating-point operations were once offered as an extra add-on at additional cost. Also, I imagine what AI will do 5 years from now will be quite different from the cheap tricks we see today.
 
Joined
Feb 20, 2019
Messages
8,205 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Today it's a joke. Apple put ML cores in their phones with the iPhone X/iPhone 8 series, back in 2017.
The only things those cores are really used for are assisting with offline voice recognition and speeding up face-unlock. I might be ignorant of other uses, but those are the only headline features that get any coverage/attention.
Many TPU discussions are several years behind the times, and a lot of AI functions are already here without any fanfare. Ever gotten a fraud alert from your credit card issuer or bank recently? That's AI in action, IRL.
Most, maybe all, of those things are datacenter AI running at Apple, Google, Microsoft, Amazon, your bank, whatever; they're not running locally on your personal device, and they don't work when your device can't reach the internet. This poll is about whether you would pay extra for hardware that has its own local AI, not a discussion of whether you would pay for AI in general, the overwhelming majority of which is externally hosted, online AI.

When a standout application/use-case arrives that makes it worth having local NPU/AI processing available, then there will be a reason to pay for it. At the moment we're playing the chicken-or-the-egg game, and the only way to break that cycle is to embed NPUs into all hardware at no additional cost, so that there's a large enough hardware base to make locally-run AI software economically viable. This is, of course, just my opinion on the subject, and I'm not professing to be any kind of AI expert. If there's some kind of fantastic thing that requires an NPU that I've missed, please do inform me. I do not and cannot read all the news :)
 
Joined
Nov 8, 2022
Messages
41 (0.06/day)
System Name GIGABYTE BLUE PRO
Processor INTEL CORE i7-4770k
Motherboard GIGABYTE GA-Z87X-UD3H
Cooling noctua NH-D12L
Memory HyperX Fury DDR3 32GB (4x8)
Video Card(s) ZOTAC GAMING GeForce RTX 3060 AMP White Edition
Storage WD Blue SATA 1TB SSD, WD blue 1TB HDD, Seagate BarraCuda 4TB HDD
Display(s) Lenovo legion Y25-30
Case GIGABYTE GZ-G1 PLUS
Power Supply GIGABYTE SUPERB 720
Mouse Microsoft Basic Optical Mouse v2.0
Keyboard Microsoft Wired Keyboard 600
Software Windows 11 Pro
I don't care about AI things; most programs and games don't use AI accelerators, and when they do use AI, it will most likely be a cloud-based service, not something local.
But maybe in the future, once more programs start demanding a hardware accelerator and these parts become a standard feature in CPUs and GPUs at mainstream prices.
 