Raevenlord
News Editor
Intel has been steadily expanding its portfolio of products in the AI space through the acquisition of multiple AI-focused companies such as Nervana and Mobileye. With its growing stable of AI-related IP, the company is looking to carve itself a slice of the AI computing market, and this sometimes means thinking inside the box more than outside of it: no matter how many cores and threads you can cram into your HEDT system, the human brain's wetware remains one of the most impressive computing machines known to man.
That idea is what's behind neuromorphic computing, where chips are designed to mimic the overall architecture of the human brain, neurons, synapses and all. It marries the fields of biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, mimicking the morphology of individual neurons, circuits, applications, and overall architectures. This, in turn, affects how information is represented, improves robustness to damage by distributing the workload across a "many cores" design, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.
Intel's Loihi is hardly the first neuromorphic computing chip to enter the market. The concept, coined by Carver Mead in the 1980s, was first taken up by universities (at Georgia Tech as early as 2006), and has since been picked up by companies such as IBM. IBM's own TrueNorth neuromorphic CMOS integrated circuit (described as a many-core network processor on a chip) features a grand total of 4,096 cores, and powers all of them with just 70 milliwatts, or roughly 1/10,000th the power density of conventional microprocessors. Each of those cores simulates 256 artificial, programmable silicon "neurons", for a total of just over a million neurons. Each neuron, in turn, has 256 programmable "synapses" that convey signals between neurons, which brings the total number of programmable synapses to just over 268 million. That's still a far cry from the human brain's average of roughly 84 billion neurons, though.
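For the curious, TrueNorth's headline numbers follow directly from its core counts; a quick back-of-the-envelope check (using only the figures quoted above):

```python
# Sanity-check TrueNorth's published figures:
# 4,096 cores, 256 neurons per core, 256 synapses per neuron.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # "just over a million"
synapses = neurons * synapses_per_neuron  # "just over 268 million"

print(f"{neurons:,} neurons, {synapses:,} synapses")
# → 1,048,576 neurons, 268,435,456 synapses
```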
Loihi, however, will feature "only" 130,000 neurons and 130 million synapses. Fully asynchronous processing capability is built into the chip through a many-core mesh that supports a wide range of sparse, hierarchical, and recurrent neural network topologies, with each neuron capable of communicating with thousands of others. Each neuromorphic core includes a "learning engine" that allows the core's learning parameters to change on the fly according to the needs of a given workload, supporting supervised, unsupervised, reinforcement, and other learning paradigms.
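To make the "neuron" idea less abstract: chips like Loihi implement spiking neuron models in silicon, where each neuron accumulates incoming signals and fires only when a threshold is crossed. A minimal software sketch of one such model, the leaky integrate-and-fire neuron, might look like this (the parameters are illustrative, not Intel's actual values):

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    Each time step, the membrane potential decays by `leak`,
    accumulates the input, and emits a spike (then resets to zero)
    when it crosses `threshold`. Returns the spike train as 0s and 1s.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current    # leaky integration of input
        if v >= threshold:        # threshold crossed -> fire
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A weak constant input slowly charges the neuron until it fires:
print(simulate_lif([0.3] * 10))
# → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The key contrast with a conventional artificial neural network is that information is carried in the timing of these sparse spikes rather than in dense matrix multiplications, which is where the power-efficiency claims for neuromorphic hardware come from.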
Intel says Loihi is especially capable in workloads such as the development and testing of highly efficient algorithms for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation. This may all sound like science fiction, but remember: these chips will still be fabricated on Intel's 14 nm process technology and don't incorporate any exotic materials. It's still your old silicon doing its wonders.
View at TechPowerUp Main Site