Quite right, the CPUs were not dedicated accelerators, and that is exactly where the term "accelerator" came from. Graphics USED to be done by the CPU; dedicated accelerators were designed because so much CPU time was being eaten by creating and shifting things like windows and dialogs around the screen, which meant moving vast amounts of memory.
However, CPUs then started gaining special instructions for shifting large blocks of memory around with just one instruction, so there is a blurry line between some CPU instructions and GPU instructions. Only recently, with 3D and shaders, has that difference become more distinct. Point being: sticking a GPU onto a CPU die is NOTHING MORE CLEVER OR ORIGINAL than sticking the x87 maths coprocessor (and later SSE) onto the same die as the CPU.
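To make the "one instruction shifts a whole block" point concrete, here is a minimal sketch of x86's REP MOVSB string instruction, which copies an entire buffer with a single instruction. This assumes GCC or Clang inline assembly on x86-64; the wrapper function name is mine, purely for illustration.

```c
/* Minimal sketch: REP MOVSB copies RCX bytes from [RSI] to [RDI]
 * in one instruction. Assumes GCC/Clang inline asm on x86-64. */
#include <stddef.h>
#include <stdio.h>

static void block_move_rep_movsb(void *dst, const void *src, size_t n)
{
    /* "+D", "+S", "+c" pin dst/src/n to RDI/RSI/RCX, the registers
     * REP MOVSB consumes and updates as it copies. */
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}

int main(void)
{
    char src[64] = "blitting a window's pixels with one instruction";
    char dst[64] = {0};

    block_move_rep_movsb(dst, src, sizeof src);
    puts(dst); /* prints the copied string */
    return 0;
}
```

SSE later pushed the same idea further with 16-byte vector loads and stores, which is part of why the CPU/GPU line stayed blurry for 2D work.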
Next up: the GPU as a CPU (CUDA)
Next up: Larrabee
Next up: CPU-less computers (x86 Larrabee-based PCs)