Tuesday, June 2nd 2015
Intel Leads a New Era of Computing Experiences
Intel Corporation today announced products and solutions that will deliver new user experiences and support the computing ecosystem's expansion into new areas. During the opening keynote address at Computex 2015, Kirk Skaugen, senior vice president and general manager of Intel's Client Computing Group, encouraged the Taiwan ecosystem to work together and capture the opportunity to shape the future of computing.
"The power of Moore's Law has enabled incredible computing innovation over the past 50 years and because of it, nearly everything in the future will have the ability to compute and connect," said Skaugen. "Our 30-year history of collaboration with Taiwan has delivered historic innovation to the world, from personal computing to the cloud and data centers. The next 30 years will see even greater innovation as we deliver new computing experiences and bring intelligence and connectivity to the Internet of Things together."Skaugen offered predictions into the future of computing and showcased the products and platforms that will take us there. He also showed how these innovations will soon enable intelligence to be everywhere.
Internet of Things News:
"The power of Moore's Law has enabled incredible computing innovation over the past 50 years and because of it, nearly everything in the future will have the ability to compute and connect," said Skaugen. "Our 30-year history of collaboration with Taiwan has delivered historic innovation to the world, from personal computing to the cloud and data centers. The next 30 years will see even greater innovation as we deliver new computing experiences and bring intelligence and connectivity to the Internet of Things together."Skaugen offered predictions into the future of computing and showcased the products and platforms that will take us there. He also showed how these innovations will soon enable intelligence to be everywhere.
Internet of Things News:
- Intel introduced the expansion of the Intel IoT Gateway product family. The latest gateway reference design offers expanded choice in silicon and software with the addition of Intel Core processor-based gateways and Wind River Intelligent Device Platform XT 3 with flexible packaging options for applications that require a low cost of entry.
- Intel also expanded the choice of operating systems for Intel IoT Gateway reference designs with the availability of Ubuntu Snappy Core from Canonical. This builds upon the current OS availability from Microsoft and Wind River.
- Specifically for IoT solutions, especially in retail and medical environments, Intel announced new Intel Pentium, Intel Celeron and Intel Atom processors. With stunning graphics performance in a low thermal envelope, the processors are customized for IoT and offer seven-year availability.
- Intel Unite was introduced as a new cost-effective business solution designed for easy and intuitive collaboration and improved meeting productivity. With a select Intel Core vPro processor-based mini PC in the conference room and the Intel Unite application running on devices, existing conference rooms are modernized and transformed into smart and connected meeting spaces with enhanced security.
- In the biggest advancement since its inception, Thunderbolt 3 delivers one computer port that connects to Thunderbolt devices, every display and billions of USB devices. For the first time, a single cable now provides four times the data and twice the video bandwidth of any other cable, while also supplying power. It's unrivaled for new uses, such as 4K video, single-cable docks with charging, external graphics and built-in 10 GbE networking. Initial products are expected to start shipping before the end of this year, with more expected in 2016.
- The 5th generation Intel Core family also now includes the first LGA socketed desktop processor with integrated Iris Pro graphics, Intel's most powerful client processor graphics and media engine. The lower 65-watt thermal design power (TDP) allows full PC performance in a broad range of form factors, including smaller and thinner mini PCs and all-in-one desktops, providing up to two times better 3-D graphics performance, 35 percent faster video conversion and 20 percent better compute performance than the previous generation of processors.
- Intel also introduced 5th generation Intel Core mobile processors for mobile and IoT with integrated Intel Iris Pro Graphics. Optimized for gamers and content creators on the go, Intel's fastest and most responsive mobile processors have Intel Iris Pro graphics 6200 and provide up to two times higher compute performance and two times better 3-D graphics performance compared to the current generation. These processors are also ideal for medical, public works and industrial IoT applications, with critical features for powerful IoT designs such as ECC memory support and Intel vPro technology, and they offer seven-year availability.
- Highlighting progress toward a future of a completely wireless computing experience, Intel announced it is working with Targus to deliver Rezence standard-based wireless charging solutions. Intel also recently announced an agreement with China-based Haier to bring wireless charging solutions to restaurants, hotels, cafés, and airports in China later this year. Additionally, Intel will work with A4WP members, Foxconn Interconnect, Basecom and original design manufacturers BYD and Primax to bring wireless charging solutions to market later this year.
27 Comments on Intel Leads a New Era of Computing Experiences
So if you want to solve the multi-threaded problem for games, then you need to answer the question of how to break a game down into independent pieces that work together in a non-blocking way. As someone who is a developer and holds a degree in computer science, I'm going to say, "That's easier said than done." There are a lot of considerations for applications that work this way, and people don't realize how much complexity is involved.
Also, there is a ceiling on clock speeds thanks to the types of transistors we currently use, so it's a question that needs to be answered.
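To make that concrete, here's a bare-bones sketch of the problem (my own made-up example, Java purely for illustration; the task names are hypothetical and nothing here comes from the article): even if the independent frame work gets farmed out to a thread pool, anything with a dependency still has to wait, so the frame is still gated by the slowest piece.

```java
import java.util.concurrent.*;

// Hypothetical sketch: one frame's work split into tasks on a thread pool.
// Physics and AI are assumed independent; rendering depends on both, so the
// frame still has a synchronization point -- that's the hard part.
public class FrameTasks {
    static final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    static void simulatePhysics() { /* placeholder work */ }
    static void updateAI()        { /* placeholder work */ }
    static void render()          { /* placeholder work */ }

    public static void main(String[] args) throws Exception {
        Future<?> physics = pool.submit(FrameTasks::simulatePhysics);
        Future<?> ai      = pool.submit(FrameTasks::updateAI);

        // Rendering needs the results of both, so the frame blocks here.
        physics.get();
        ai.get();
        render();

        pool.shutdown();
    }
}
```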
www.anandtech.com/show/9320/intel-broadwell-review-i7-5775c-i5-5765c/10
Just think about it: they could be running 10 GHz CPUs with insane parallelism, like 20 times higher than current CPUs, but because Moore's Law is convenient for profit, they just follow it...
Intel tried to hit that with the Pentium 4, and we saw how well that went: hot, slow, and pathetic. To hit those clocks, you need an extremely long pipeline, which adds latency and burns power. AMD's Bulldozer can hit only 5-5.3 GHz without insane cooling or turning off all but one core, and even then it struggles to keep up with a normal 3.4 GHz Core i7 CPU in most benchmarks, while pulling in excess of 220 watts.
There is a REASON we stopped chasing GHz: it leads nowhere. A core like the one you are proposing would be gigantic, expensive, power hungry, and not all that useful. The insane parallelism would be useless, as a program that takes advantage of parallelism could more easily use an 18-core Xeon or a GPU, both of which are much cheaper and probably more power efficient.
Not if you have an i7 2600K. Gotta go back to last decade.
If you look at the first-generation Core i series from 2008 and the new Haswell chips today, there is a marked increase in IPC, roughly 5% per generation. Moving from Nehalem to Haswell brings a marked improvement in performance per clock, not to mention the power savings, which is where most of the work is going. The performance per watt of modern chips is much better than that of first-gen Core i series chips.
Intel accidentally struck gold with Nehalem, and again with Sandy Bridge. Improving on that is going to be very difficult with currently available technology. Skylake is supposed to bring big IPC improvements, but I'll believe it when I see it. Silicon is starting to reach the end of its usefulness on that front.
As of today, we have more computing power than ever before, not only as companies but also as normal people.
I just ask: do we need more of that? I don't think so, since development is going toward smaller, portable and handheld devices.
As consumers, we have enough, more than enough, and there's still this need to get more and more; it's insane. Even the average Joe with a three-year-old computer still has more than enough CPU power for his everyday needs, and he is happy. His machine can do what he wants, when he wants it, so where is his need to upgrade, unless it breaks down?
For those of us who like to have new toys to play with, well, that's a different matter, but the truth is that CPU power is enough; it's all about graphics when you play games.
Some people put power consumption high on their list and some don't; freedom of choice. If AMD's new Zen CPU, which comes in 2016, is just as good or better and uses less power than the i7-5820K I have now, I will change in a heartbeat.
Then again, this is a fringe case, but it is always nice to have more CPU power.
Intel Stalls New Era of Computing Experiences
8 cores and only 1 working... progress a la Intel. It should be the CPU's job to utilize the whole potential of the CPU, not just software's.
My choice for X99 was enough lanes for at least two cards and four lanes for an M.2 SSD.
You just cannot have so many different ports and standards; I think SATA is not expandable, and you cannot put an HDD in an M.2 slot.
This is easier said than done. You can't simply make all workloads parallel, because most code written is serial; there are dependencies between instructions, and as a result, tasks would need to share state and memory. The issue here is that the overhead of making something parallel might actually reduce performance if too much coordination has to occur, because most applications build up data incrementally rather than in parallel; each instruction depends on the changes made by the last (see serializability). It is the developer's job to write code in a way that can leverage hardware when it's available, and to know when to apply it, because not everything can be done on multiple cores by virtue of the task being done. The state problem is huge, and locking will destroy multi-threaded performance.

As a result, many successful multi-threaded systems decouple parts of an application with queues placed in between tasks that must occur serially. This all sounds fine and dandy, but now you're just passing state through what is essentially a series of queues, and any work will have to wait if any part of the pipeline slows down (so you're still limited by the least parallel part of your application). So while you inherently get parallelism by doing this, you add the side effect of increasing latency and of not knowing the state of something at any level other than the one it's operating at. In the case of games, which need to react quickly, that added latency means either lower frame rate or input lag.
So while your comment makes sense when looking at it from a high level, it makes absolutely no sense at a low level, because that isn't how computers work under the hood and you can't simply break apart an application and make it multi-threaded. It simply doesn't work that way.
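If it helps, here's a rough sketch of the queue-decoupled approach I described above (hypothetical stage names, Java only as an illustration, nothing official): each stage runs on its own thread and hands work downstream through a blocking queue, which buys you throughput but adds latency, exactly the trade-off I was talking about.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of "decoupled stages with queues in between".
// Each stage runs on its own thread, but every item still flows through
// every stage, so the slowest stage limits the whole pipeline.
public class StagePipeline {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> stage1to2 = new ArrayBlockingQueue<>(64);
        BlockingQueue<Integer> stage2out = new ArrayBlockingQueue<>(64);

        Thread produce = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) stage1to2.put(i); // stage 1: generate work
                stage1to2.put(-1);                              // sentinel: no more work
            } catch (InterruptedException ignored) {}
        });

        Thread transform = new Thread(() -> {
            try {
                for (int v; (v = stage1to2.take()) != -1; ) stage2out.put(v * v); // stage 2
                stage2out.put(-1);
            } catch (InterruptedException ignored) {}
        });

        produce.start();
        transform.start();

        // Stage 3: consume on the main thread; it blocks whenever upstream is slow.
        for (int v; (v = stage2out.take()) != -1; ) System.out.println(v);

        produce.join();
        transform.join();
    }
}
```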
I would argue that developers need better tools for easily implementing games that can utilize multiple cores, but languages (like Java, Clojure, C#, etc.) already have great constructs for multi-threaded programming. The question isn't whether you can do it; the question is what the best way to do it is, and no one really knows.
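As a quick example of what I mean by "great constructs" (again, just an illustrative sketch of my own, not anything from the article): when a workload really is data-parallel, Java can fan it out across cores with a single call, which is precisely what most game code isn't shaped to do.

```java
import java.util.stream.LongStream;

// Illustrative sketch: a genuinely data-parallel workload (sum of squares)
// spread across available cores via the fork-join common pool.
public class ParallelSum {
    public static void main(String[] args) {
        long sum = LongStream.rangeClosed(1, 1_000_000)
                             .parallel()      // fan out across cores
                             .map(n -> n * n)
                             .sum();
        System.out.println(sum);
    }
}
```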
I tell people this all the time: "If it were that easy, it would have been done already!"
Lastly, this is already done in processors to some extent at the opcode/microcode level. It's called pipelining, and most CPUs (which are superscalar) work this way. Has for a long time, but this is done at the instruction level within an individual core, because we're still talking about serial workloads (which is what your generic application usually is).

All 8 of the SATA ports in my tower are used. One M.2 and be done with it, but it's really not designed for mass storage. SATA will always have a place in my book until something better rolls around that doesn't require the device to be attached to the motherboard. (Imagine mounting a spinning drive to a motherboard; that makes for some pretty funny images. :p)
Side note: I had more issues with IDE cables than SATA, so I'm not complaining.
The parent thread launches on core 0 and copies in a large dataset for comparison; it then launches a child thread that starts at the top of the dataset while the parent starts at the bottom, and the parent has to keep checking a flag set by the child thread to see if it found a match, which reduces its performance. Try to find customer 123456789 in a dataset that is alphanumeric and where customer numbers are randomized: either you sort, search the whole table, or have it check using a smarter algorithm. Depending on the size of the dataset and the processor speed, it might be as fast to sort on a single core as it is to search and compare on multiple cores.
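Something like the following is what I'm picturing (a hypothetical sketch with made-up customer IDs, Java just for illustration): the parent scans from the bottom, the child scans from the top, and both keep checking a shared flag; those flag checks are exactly the coordination overhead that eats into the parallel speedup.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical two-way search: parent scans the lower half, child scans the
// upper half, and both poll a shared "found" flag so they can stop early.
public class TwoWaySearch {
    static final AtomicBoolean found = new AtomicBoolean(false);
    static final AtomicInteger foundIndex = new AtomicInteger(-1);

    static void scan(String[] data, String target, int from, int to, int step) {
        for (int i = from; i != to && !found.get(); i += step) { // flag check every iteration
            if (target.equals(data[i])) {
                foundIndex.set(i);
                found.set(true);
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        String[] customers = {"A123", "Z999", "123456789", "B777", "C001"}; // made-up, unsorted IDs
        String target = "123456789";
        int mid = customers.length / 2;

        Thread child = new Thread(
                () -> scan(customers, target, customers.length - 1, mid - 1, -1)); // top down
        child.start();
        scan(customers, target, 0, mid, +1);                                        // bottom up
        child.join();

        System.out.println("Found at index " + foundIndex.get());
    }
}
```

Whether this beats a single-core sort-then-binary-search depends entirely on the dataset size and how expensive the flag checks and thread startup are, which is the point of the comment above.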