News Posts matching #EPYC


AMD Takes a Bigger Revenue Hit than Microsoft from Huawei Ban: Goldman Sachs

The trade ban imposed on Chinese tech giant Huawei by the U.S. Department of Commerce, and ratified through an Executive Order by President Donald Trump, cuts both ways: U.S. entities are banned not only from importing products and services from Huawei, but also from selling to it. By early estimates, U.S. tech firms face an $11 billion revenue loss. Wall Street firm Goldman Sachs compiled a list of companies impacted by the ban, along with the extent of their revenue losses. It turns out that AMD isn't a small player, and in fact stands to lose more revenue in absolute terms than even Microsoft. AMD earns RMB 268 million (USD $38.79 million) from Huawei, compared to Microsoft's RMB 198 million ($28.66 million). Intel's revenue loss is a little over double that of AMD at RMB 589 million ($84 million), despite its market-share dominance.

That's not all: AMD's exposure is higher than Intel's, since sales to Huawei make up a greater percentage of AMD's revenue than they do of Intel's. AMD exports not just client-segment products such as Ryzen processors and Radeon graphics, but possibly also EPYC enterprise processors for Huawei's server and SMB product businesses. NVIDIA is affected to a far lesser extent than Intel, AMD, and Microsoft. Qualcomm and Broadcom take the biggest hit in absolute revenue terms at RMB 3.5 billion ($508 million), even if their exposure isn't the highest. The two export SoCs and cellular modems to Huawei, both as hardware and as licenses. Storage hardware makers aren't far behind, with the likes of Micron, Seagate, and Western Digital taking big hits. Micron exports DRAM and SSDs, while Seagate and WDC export hard drives.

AMD Confirms Launch of Next-gen Ryzen, EPYC and Navi for Q3

During AMD's annual shareholder meeting today, AMD president and CEO Dr. Lisa Su confirmed the launch of next-generation AMD Ryzen, EPYC CPUs and Navi GPUs for the third quarter of this year. The expected products are going to be manufactured on TSMC's 7 nm process and will be using new and improved architectures.

Ryzen 3000 series CPUs are rumored to offer up to 16 cores in Ryzen 9 SKUs, 12 cores in Ryzen 7 SKUs, and 8 cores in Ryzen 5 SKUs, while EPYC server CPUs will be available in models with up to 64 cores. All of the new CPUs will use AMD's "Zen 2" architecture, which promises better IPC and, if rumors about the consumer models hold, plenty of overclocking headroom. Navi GPUs are new 7 nm GPUs expected to be very competitive with NVIDIA's Turing series in both price and performance, hopefully integrating new technologies such as dedicated ray-tracing cores for higher frame rates in ray-tracing-enabled games. No next-generation Threadripper launch date was mentioned, so we don't yet know when, or if, that will land.

AMD Collaborates with US DOE to Deliver the Frontier Supercomputer

The U.S. Department of Energy today announced a contract with Cray Inc. to build the Frontier supercomputer at Oak Ridge National Laboratory, which is anticipated to debut in 2021 as the world's most powerful computer with a performance of greater than 1.5 exaflops.

Scheduled for delivery in 2021, Frontier will accelerate innovation in science and technology and maintain U.S. leadership in high-performance computing and artificial intelligence. The total contract award is valued at more than $600 million for the system and technology development. The system will be based on Cray's new Shasta architecture and Slingshot interconnect and will feature high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology.

AMD Reports First Quarter 2019 Financial Results - Gross margin expands to 41%, up 5 percentage points year-over-year

AMD today announced revenue for the first quarter of 2019 of $1.27 billion, operating income of $38 million, net income of $16 million and diluted earnings per share of $0.01. On a non-GAAP(*) basis, operating income was $84 million, net income was $62 million and diluted earnings per share was $0.06.

"We delivered solid first quarter results with significant gross margin expansion as Ryzen and EPYC processor and datacenter GPU revenue more than doubled year-over-year," said Dr. Lisa Su, AMD president and CEO. "We look forward to the upcoming launches of our next-generation 7nm PC, gaming
and datacenter products which we expect to drive further market share gains and financial growth."

Micron Introduces 9300 Series NVMe Enterprise SSDs

Micron Technology, Inc., today unveiled its new series of flagship solid-state drives (SSDs) featuring the NVM Express (NVMe) protocol, bringing industry-leading storage performance at higher capacities to cloud and enterprise computing markets. The Micron 9300 series of NVMe SSDs enables companies with data-intensive applications to access and process data faster, helping reduce response time.

"The introduction of our third generation of NVMe SSDs endorses our tradition of continued innovation for cloud and enterprise markets," said Derek Dicker, corporate vice president and general manager for Micron's Storage Business Unit. "The Micron 9300 is our flagship series of NVMe SSDs, which feature industry-leading sequential write performance and latency, increased capacities, and delivery of a 28% reduction in power over the previous generation."

AMD "Castle Peak," "Rome," and "Matisse" Referenced in Latest AIDA64 Changelog

FinalWire over the past week posted the latest public beta of AIDA64, which adds support for the three key processor product lines based on AMD's "Zen 2" microarchitecture. The "Matisse" multi-chip module, which received extensive coverage over the past few weeks, will be AMD's main derivative of "Zen 2," designed for the client-segment socket AM4 platform, with up to 16 CPU cores, and the initial flagship product featuring 12 cores. "Rome" is AMD's all-important enterprise-segment MCM for the SP3 platform, with up to 64 CPU cores spread across eight 8-core chiplets that interface with a centralized I/O controller die featuring a monolithic 8-channel memory controller. It so happens that AMD also wants to update its Ryzen Threadripper line of high-end desktop processors, with "Castle Peak."

"Castle Peak" is codename for 3rd generation Ryzen Threadripper and a client-segment derivative of the "Rome" MCM with a reconfigured I/O controller die that has a monolithic 4-channel DDR4 memory interface, and an unspecified number of CPU cores north of 24. This is for backwards compatibility with the existing AMD X399 motherboards. AMD configures core-count by physically changing the number of 8-core chiplets on the MCM, in addition to disabling cores in groups of 2 within the chiplet. The company could scale core counts looking at its competitive environment. The monolithic quad-channel memory interface could significantly improve the chip's memory performance compared to current-generation Threadrippers, particularly the Threadripper WX series chips in which half the CPU cores are memory bandwidth-starved. The AIDA64 update also improves detection of existing Ryzen/EPYC processors with the K17.3 and K17.5 integrated northbridges.

DOWNLOAD: FinalWire AIDA64 Extreme 5.99.4983 beta

AMD President and CEO Dr. Lisa Su to Deliver COMPUTEX 2019 CEO Keynote

Taiwan External Trade Development Council (TAITRA) announced today that the 2019 COMPUTEX International Press Conference will be held with a Keynote by AMD President and CEO Dr. Lisa Su. The 2019 COMPUTEX International Press Conference & CEO Keynote is scheduled for Monday, May 27 at 10:00 AM in Room 201 of the Taipei International Convention Center (TICC) in Taipei, Taiwan with the keynote topic "The Next Generation of High-Performance Computing".

"COMPUTEX, as one of the global leading technology tradeshows, has continued to advance with the times for more than 30 years. This year, for the first time, a keynote speech will be held at the pre-show international press conference," said Mr. Walter Yeh, President & CEO, TAITRA, "Dr. Lisa Su received a special invitation to share insights about the next generation of high-performance computing. We look forward to her participation attracting more companies to participate in COMPUTEX, bringing the latest industry insights, and jointly sharing the infinite possibilities of the technology ecosystem on this global stage."

AMD's CES 2019 Keynote - Stream & Live Blog

CPUs or GPUs? Ryzen 3000 series with up to 16 cores, or keeping the current eight? Support for raytracing? Navi or die-shrunk Vega for consumer graphics? The questions around AMD's plans for 2019 are still very much open, but Lisa Su's impending livestream should answer many of them, so be sure to watch the full livestream, happening in just a moment.

You can find the live stream here, at YouTube.

18:33 UTC: Looking forward, Lisa mentioned a few technology names without giving additional details: "... when you're talking about future cores, Zen 2, Zen 3, Zen 4, Zen 5, Navi, we're putting all of these architectures together, in new ways".

18:20 UTC: New Ryzen 3rd generation processors have been teased. The upcoming processors are based on Zen 2, using 7 nanometer technology. AMD showed a live demo of Forza Horizon 4 using a Ryzen 3rd generation chip paired with the Radeon VII, running "consistently over 100 FPS at highest details at 1080p resolution". A second demo, using Cinebench, pitted an 8-core/16-thread Ryzen 3rd generation processor against the Intel Core i9-9900K. The Ryzen CPU was "not final frequency, an early sample". Ryzen achieved a score of 2057 using 135 W, while Intel achieved a score of 2040 using 180 W. Things are looking good for Ryzen 3rd generation indeed. Lisa also confirmed that next-gen Ryzen will support PCI-Express 4.0, which doubles the bandwidth per lane over PCI-Express 3.0. Ryzen 3rd generation will run on the same AM4 infrastructure as current Ryzen; all existing Ryzen users can simply upgrade to the new processors when they launch in the middle of 2019 (we think Computex).
Ryzen 3rd generation uses a chiplet design. The smaller die on the right contains 8 cores/16 threads built on 7 nanometer technology. The larger die on the left is the I/O die, which contains components such as the memory controller and PCI-Express connectivity, shuffling data between the CPU core die and the rest of the system.

AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster, which makes use of Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. Meanwhile, system memory per node will range from 96 GB up to 1.5 TB, with the entire system receiving a 4.9 PB Lustre parallel file system from DDN. Furthermore, a separate partition of phase one will be used for AI research and will feature 320 NVIDIA V100 NVLinked GPUs configured in 4-GPU nodes. It is expected that peak performance will reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.

Where things get interesting is in phase two, which is set for completion during the spring of 2020. Atos will be building CSC a liquid-cooled, HDR-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory; as such, each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, also provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel has had this market cornered for nearly a decade.
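For readers who want to sanity-check those figures, here is a minimal back-of-the-envelope sketch; the 16 double-precision FLOPs per cycle per Zen 2 core (two 256-bit FMA pipes) is our assumption, not part of CSC's announcement.

```python
# Back-of-the-envelope check of the phase-two figures quoted above.
# Assumption (not from the announcement): 16 double-precision FLOPs per
# cycle per Zen 2 core, i.e. two 256-bit FMA pipes.
total_cores   = 200_000
cores_per_cpu = 64
peak_pflops   = 6.4

cpus = total_cores // cores_per_cpu
print(f"EPYC 'Rome' packages: {cpus}")                        # 3125

gflops_per_core   = peak_pflops * 1e6 / total_cores           # PFLOPS -> GFLOPS per core
implied_clock_ghz = gflops_per_core / 16                      # under the 16 FLOPs/cycle assumption
print(f"~{gflops_per_core:.0f} GFLOPS/core -> ~{implied_clock_ghz:.1f} GHz all-core")
```

Under that assumption, the 6.4-petaflop peak implies roughly a 2.0 GHz all-core clock, which is plausible for a 64-core part.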

AMD Doubles L3 Cache Per CCX with Zen 2 "Rome"

A SiSoft SANDRA results database entry for a 2P AMD "Rome" EPYC machine sheds light on the lower cache hierarchy. Each 64-core EPYC "Rome" processor is made up of eight 7 nm 8-core "Zen 2" CPU chiplets, which converge at a 14 nm I/O controller die that handles the processor's memory and PCIe connectivity. The result mentions the cache hierarchy, with 512 KB of dedicated L2 cache per core, and "16 x 16 MB L3." Like CPU-Z, SANDRA has the ability to report L3 cache by arrangement. For the Ryzen 7 2700X, it reads the L3 cache as "2 x 8 MB L3," corresponding to the per-CCX L3 cache amount of 8 MB.

For each 64-core "Rome" processor, there are a total of 8 chiplets. With SANDRA detecting "16 x 16 MB L3" for 64-core "Rome," it becomes highly likely that each of the 8-core chiplets features two 16 MB L3 cache slices, and that its 8 cores are split into two quad-core CCX units with 16 MB L3 cache, each. This doubling in L3 cache per CCX could help the processors cushion data transfers between the chiplet and the I/O die better. This becomes particularly important since the I/O die controls memory with its monolithic 8-channel DDR4 memory controller.
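A quick tally of the cache amounts implied by the SANDRA reading, as a sketch based only on the numbers quoted above:

```python
# Cache totals implied by the SANDRA entry for a 64-core "Rome" package:
# 512 KB of L2 per core and a "16 x 16 MB" L3 arrangement across 8 chiplets.
cores           = 64
l2_per_core_kb  = 512
l3_slices       = 16
l3_per_slice_mb = 16
chiplets        = 8

print(f"L2 total: {cores * l2_per_core_kb // 1024} MB")       # 32 MB
print(f"L3 total: {l3_slices * l3_per_slice_mb} MB")          # 256 MB
print(f"L3 per chiplet: {l3_slices * l3_per_slice_mb // chiplets} MB "
      f"({l3_slices // chiplets} x {l3_per_slice_mb} MB, one slice per CCX)")
```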

Intel Could Upstage EPYC "Rome" Launch with "Cascade Lake" Before Year-end

Intel is reportedly working tirelessly to launch its "Cascade Lake" Xeon Scalable 48-core enterprise processor before year-end, according to a launch window timeline slide leaked by datacenter hardware provider QCT. The slide suggests a late-Q4 thru Q1-2019 launch timeline for the XCC (extreme core count) version of "Cascade Lake," which packs 48 CPU cores across two dies on an MCM. This launch is part of QCT's "early shipment program," which means select enterprise customers can obtain the hardware in pre-approved quantities. In other words, this is a limited launch, but one that's probably enough to upstage AMD's 7 nm EPYC "Rome" 64-core processor launch.

It's only by late-Q1 thru Q2-2019 that the Xeon "Cascade Lake" family would be substantially launched, including lower core-count variants that are still 2-die MCMs. This timeline aligns to preempt or match AMD's 7 nm EPYC family rollout through 2019. "Cascade Lake" is probably Intel's final enterprise microarchitecture to be built on the 14 nm++ node, and consists of 2-die multi-chip modules that feature 48 cores, a 12-channel memory interface (6 channels per die), and 88 PCIe lanes from the CPU socket. The processor is capable of multi-socket configurations. It will also be the platform on which Intel substantially launches its Optane Persistent Memory product series.

Stuttgart-based HLRS to Build a Supercomputer with 10,000 64-core Zen 2 Processors

Höchstleistungsrechenzentrum (HLRS, or High-Performance Computing Center), based in Stuttgart, Germany, is building a new cluster supercomputer powered by 10,000 AMD Zen 2 "Rome" 64-core processors, making up 640,000 cores. Called "Hawk," the supercomputer will be HLRS' flagship system, and will open its doors to business in 2019. The slide deck for Hawk makes a fascinating disclosure about the processors it's based on.

Apparently, each of the 64-core "Rome" EPYC processors has a guaranteed clock speed of 2.35 GHz. This means that at maximum load (with all cores loaded 100%), the processor can sustain 2.35 GHz. This is important, because the supercomputer's advertised throughput is calculated on this basis, and clients draw up SLAs on throughput. The advertised peak throughput for the whole system is 24.06 petaFLOP/s, although the company is yet to put out nominal/guaranteed performance numbers (which it will do only after first-hand testing). The system features 665 TB of RAM and 26,000 TB of storage.
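The advertised peak lines up with a simple cores × clock × FLOPs-per-cycle calculation. The sketch below assumes 16 double-precision FLOPs per cycle per Zen 2 core (two 256-bit FMA pipes), a figure that is not stated in the HLRS slide deck:

```python
# Peak-throughput check for Hawk, using the guaranteed all-core clock quoted above.
# Assumption (not from the slide deck): 16 double-precision FLOPs/cycle per core.
processors      = 10_000
cores_per_cpu   = 64
base_clock_ghz  = 2.35
flops_per_cycle = 16

cores = processors * cores_per_cpu                                 # 640,000 cores
peak_pflops = cores * base_clock_ghz * flops_per_cycle / 1e6       # GFLOPS -> PFLOPS
print(f"{cores:,} cores -> {peak_pflops:.2f} PFLOPS peak")         # 24.06 PFLOPS
```

That reproduces the advertised 24.06 petaFLOP/s almost exactly.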

AMD "Zen 2" IPC 29 Percent Higher than "Zen"

AMD reportedly put out its IPC (instructions per clock) performance guidance for its upcoming "Zen 2" microarchitecture as part of its Next Horizon investor meeting, and the numbers are staggering. The next-generation CPU architecture provides a massive 29 percent IPC uplift over the original "Zen" architecture. While not developed for the enterprise segment, the stopgap "Zen+" architecture brought about 3-5 percent IPC uplifts over "Zen" on the back of faster on-die caches and improved Precision Boost algorithms. "Zen 2" is being developed for the 7 nm silicon fabrication process, and on the "Rome" MCM it powers 8-core chiplets that are reportedly not subdivided into CCXs (8 cores per CCX).

According to Expreview, AMD conducted a DKERN + RSA test of integer and floating-point units to arrive at a performance index of 4.53, compared to 3.5 for first-generation Zen, which is a 29.4 percent IPC uplift (loosely interchangeable with single-core performance). "Zen 2" goes a step beyond "Zen+," with its designers turning their attention to critical components that contribute significantly toward IPC: the core's front-end, and the number-crunching machinery, the FPU. The front-ends of "Zen" and "Zen+" cores are believed to be refinements of previous-generation architectures such as "Excavator." Zen 2 gets a brand-new front-end that's better optimized to distribute and collect workloads between the various on-die components of the core. The number-crunching machinery gets bolstered by 256-bit FPUs, and generally wider execution pipelines and windows. These changes come together to yield the IPC uplift. "Zen 2" will get its first commercial outing with AMD's 2nd generation EPYC "Rome" 64-core enterprise processors.
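The 29 percent figure follows directly from the two reported indices; a one-line check:

```python
# IPC uplift implied by the reported DKERN + RSA performance indices.
zen1_index = 3.5
zen2_index = 4.53
print(f"Uplift: {(zen2_index / zen1_index - 1) * 100:.1f} %")   # 29.4 %
```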

Update Nov 14: AMD has issued the following statement regarding these claims.
As we demonstrated at our Next Horizon event last week, our next-generation AMD EPYC server processor based on the new 'Zen 2' core delivers significant performance improvements as a result of both architectural advances and 7nm process technology. Some news media interpreted a 'Zen 2' comment in the press release footnotes to be a specific IPC uplift claim. The data in the footnote represented the performance improvement in a microbenchmark for a specific financial services workload which benefits from both integer and floating point performance improvements and is not intended to quantify the IPC increase a user should expect to see across a wide range of applications. We will provide additional details on 'Zen 2' IPC improvements, and more importantly how the combination of our next-generation architecture and advanced 7nm process technology deliver more performance per socket, when the products launch.

Intel Puts Out Additional "Cascade Lake" Performance Numbers

Intel late last week put out additional real-world HPC and AI compute performance numbers for its upcoming "Cascade Lake" 2x 48-core (96 cores in total) machine, compared to AMD's EPYC 7601 2x 32-core (64 cores in total) machine. You'll recall that on November 5th, the company put out Linpack, Stream Triad, and Deep Learning Inference numbers, which are all synthetic benchmarks. In a new set of slides, the company revealed a few real-world HPC/AI application performance numbers, including MIMD Lattice Computation (MILC), Weather Research and Forecasting (WRF), OpenFOAM, NAMD scalable molecular dynamics, and YASK.

The Intel 96-core setup with a 12-channel memory interface belts out up to 1.5X performance in MILC, up to 1.6X in WRF and OpenFOAM, up to 2.1X in NAMD, and up to 3.1X in YASK, compared to an AMD EPYC 7601 2P machine. The company also put out system configuration and disclaimer slides with the usual forward-looking CYA. "Cascade Lake" will be Intel's main competitor to AMD's EPYC "Rome" 64-core 4P-capable processor that comes out by the end of 2018. Intel's product is a multi-chip module of two 24~28 core dies, with a 2x 6-channel DDR4 memory interface.

Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

Intel today announced two new members of its Intel Xeon processor portfolio: Cascade Lake advanced performance (expected to be released the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (general availability today). These two new product families build upon Intel's foundation of 20 years of Intel Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.

"We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers' system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers," said Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing.

AMD Could Solve Memory Bottlenecks of its MCM CPUs by Disintegrating the Northbridge

AMD sprang back to competitiveness in the datacenter market with its EPYC enterprise processors, which are multi-chip modules of up to four 8-core dies. Each die has its own integrated northbridge, which controls 2-channel DDR4 memory and a 32-lane PCI-Express gen 3.0 root complex. In applications that not only utilize more cores but are also memory-bandwidth intensive, this approach to non-localized memory presents design bottlenecks. The Ryzen Threadripper WX family highlights many of these bottlenecks, where memory-intensive video encoding benchmarks see performance drops as dies without direct access to I/O are starved of memory bandwidth. AMD's solution to this problem is to design CPU dies with a disabled northbridge (the part of the die with the memory controllers and PCIe root complex). This solution could be implemented in its upcoming 2nd generation EPYC processors, codenamed "Rome."

With its "Zen 2" generation, AMD could develop CPU dies in which the integrated northbridge can be completely disabled (just like the "compute dies" on Threadripper WX processors, which don't have direct memory/PCIe access and rely entirely on InfinityFabric). These dies talk to an external die called the "System Controller" over a broader InfinityFabric interface. AMD's next-generation MCMs could see a centralized System Controller die that's surrounded by CPU dies, which could all be sitting on a silicon interposer, the same kind found on "Vega 10" and "Fiji" GPUs. An interposer is a silicon die that facilitates high-density microscopic wiring between dies in an MCM. These explosive speculative details and more were put out by Singapore-based @chiakokhua, aka The Retired Engineer, a retired VLSI engineer, who drew the block diagrams himself.

AMD Zen 2 GNU Compiler Patch Published, Exposes New Instruction Sets

With a November deadline for feature freeze fast approaching, GNU toolchain developers are now adding the last features to GCC 9.0 (GNU Compiler Collection). Ahead of that deadline, AMD has released their first basic patch adding the "znver2" target, and therefore Zen 2 support, to GCC. While the patch uses the same cost tables and scheduler data as znver1, it does feature three new instructions that will be available on AMD's next-gen CPUs: Cache Line Write Back (CLWB), Read Processor ID (RDPID), and Write Back and Do Not Invalidate Cache (WBNOINVD).

These three instructions are the only ones that have been found thus far by digging through the current code. Given that this is the first patch, it can be considered a jumping-off point, ensuring that the GCC 9.1 stable release, which comes out in 2019, has support for Zen 2. Further optimizations and instructions may be implemented in the future. This is likely, since AMD has yet to update the scheduler cost tables, which by extension suggests they may not want to reveal everything about Zen 2 just yet. You could say AMD is playing it safe for now, at least until their 7nm EPYC 2 processors launch in 2019.
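For the curious, a quick way to see whether a Linux box already exposes these three instructions is to check its CPU feature flags. The sketch below assumes the usual /proc/cpuinfo flag spellings ("clwb", "rdpid", "wbnoinvd"); it is an illustration, not part of the GCC patch.

```python
# Check /proc/cpuinfo for the three feature flags named in the znver2 patch.
# Flag spellings are assumed to follow the common Linux convention.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("clwb", "rdpid", "wbnoinvd"):
    print(f"{feature:<9} {'present' if feature in flags else 'absent'}")
```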

AMD "Vega 20" GPU Not Before Late Q1-2019

AMD "Vega 20" is a new GPU based on the existing "Vega" graphics architecture, which will be fabbed on the 7 nanometer silicon fabrication process and bolstered with up to 32 GB of HBM2 memory across a 4096-bit memory interface, double the bus width of "Vega 10". AMD CEO Lisa Su already exhibited a mock-up of this chip at Computex 2018, with word that alongside its "Zen 2" based EPYC enterprise processors, "Vega 20" will be the first 7 nm GPU. AMD could still make good on that word, only don't expect to find one under your tree this Holiday.

According to GamersNexus, the first "Vega 20" products won't launch before the turn of the year, and even in 2019, product launches can only be expected toward the end of Q1 (before April). GamersNexus cites reliable sources hinting at the later-than-expected arrival of "Vega 20" as part of refuting alleged "Final Fantasy XV" benchmarks of purported "Vega 20" engineering samples doing the rounds on the web. Lisa Su stressed the importance of data-center GPUs in AMD's Q3-2018 earnings call, which could hint at the possibility of AMD allocating its first "Vega 20" yields to high-margin enterprise brands such as Radeon Pro and Radeon Instinct.

AMD and Oracle Collaborate to Provide AMD EPYC Processor-Based Offering in the Cloud

Today at Oracle OpenWorld 2018, AMD (NASDAQ: AMD) announced the availability of the first AMD EPYC processor-based instance on Oracle Cloud Infrastructure. With this announcement, Oracle becomes the largest public cloud provider to offer a bare metal instance on AMD EPYC processors. The AMD EPYC processor-based "E" series will lead with the bare metal Standard "E2", available immediately as the first instance type within the series. At $0.03/core hour, the AMD EPYC instance is up to 66 percent less on average per core than general purpose instances offered by the competition, and is the most cost-effective instance available on any public cloud.

"With the launch of the AMD instance, Oracle has once again demonstrated that we are focused on getting the best value and performance to our customers," said Clay Magouyrk, senior vice president, software development, Oracle Cloud Infrastructure. "At greater than 269 GB/Sec, the AMD EPYC platform3, offers the highest memory bandwidth of any public cloud instance. Combined with increased performance, these cost advantages help customers maximize their IT dollars as they make the move to the cloud."

AMD Zen 2 Offers a 13% IPC Gain over Zen+, 16% over Zen 1

AMD's "Zen" CPU architecture brought the company back to competitive relevance in the processor market. It got an incremental update in the form of "Zen+," which saw the implementation of an improved 12 nm process, an improved multi-core boosting algorithm, and improvements to the cache subsystem. AMD is banking on Zen 2 to not only add IPC (instructions per clock) improvements, but also bring a new round of core-count increases. Bits n Chips has information that Zen 2 is making significant IPC gains.

According to the Italian tech publication, we could expect Zen 2 IPC gains of 13 percent over Zen+, which in turn posted 2-5% IPC gains over the original Zen. Bits n Chips notes that these IPC gains were tested in scientific tasks, and not in gaming. There is no gaming performance data at the moment. AMD is expected to debut Zen 2 with its 2nd generation EPYC enterprise processors by the end of the year, built on the 7 nm silicon fabrication process. This roughly 16 percent IPC gain versus the original Zen, coupled with higher clocks, and possibly more cores, could complete the value proposition of 2nd gen EPYC. Zen 2-based client-segment products can be expected only in 2019.
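For reference, a roughly 13 percent gain over Zen+ compounds with the earlier 2-5 percent Zen-to-Zen+ step into approximately the 16 percent figure quoted versus the original Zen; the sketch below takes a point near the middle of that 2-5 percent range as an assumption:

```python
# Compounding the two generational IPC steps reported above.
zen_to_zenplus  = 0.03   # assumed point within the reported 2-5 % range
zenplus_to_zen2 = 0.13   # reported Zen+ -> Zen 2 gain
total = (1 + zen_to_zenplus) * (1 + zenplus_to_zen2) - 1
print(f"Zen -> Zen 2: ~{total * 100:.0f} %")   # ~16 %
```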

AMD and Xilinx Announce a New World Record for AI Inference

At today's Xilinx Developer Forum in San Jose, Calif., our CEO, Victor Peng, was joined by AMD CTO Mark Papermaster for a Guinness. But not the kind that comes in a pint - the kind that comes in a record book. The companies revealed that AMD and Xilinx have been jointly working to connect AMD EPYC CPUs and the new Xilinx Alveo line of acceleration cards for high-performance, real-time AI inference processing. To back it up, they revealed a world-record 30,000 images per second of inference throughput!

The impressive system, which will be featured in the Alveo ecosystem zone at XDF today, leverages two AMD EPYC 7551 server CPUs with their industry-leading PCIe connectivity, along with eight of the freshly announced Xilinx Alveo U250 acceleration cards. The inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports numerous machine learning frameworks such as TensorFlow. The benchmark was performed on GoogLeNet, a widely used convolutional neural network.

AMD CEO Speaks with Jim Cramer About the "Secret Sauce" Behind its Giant-Killing Spree

Jim Cramer of CNBC's Mad Money interviewed AMD CEO Dr. Lisa Su on the floor of the NYSE, remarking that her company is one of the year's biggest tech turnaround stories. The two spoke about a variety of topics, including how the company went from a single-digit stock and a loss-making entity to one of the hottest tech stocks, one that threatens both Intel and NVIDIA. Dr. Su placed emphasis on taking long-term strategic decisions that bear fruit years down the line.

"We decided to make the right investments. Technology is all about making the right choices, where we're going to invest, and where we're not going to invest...three or four years ago, it was mobile phones, tablets, and IoT that were the sexy things, and we were like 'hey we know that those are good markets, but those are not AMD.' We focused on what we thought the future would hold for us," said Dr. Su. "We are making decisions now that you won't see the outcome of for the next 3-5 years. We're making some good decisions," she added.

AMD Fast-tracks 7nm "Navi" GPU to Late-2018 Alongside "Zen 2" CPU

AMD is unique in the world of computing as the only company with both high-performance CPU and GPU products. For the past several years we have been executing our multi-generational leadership product and architectural roadmap. Just in the last 18 months, we successfully introduced and ramped our strongest set of products in more than a decade and our business has grown dramatically as we gained market share across the PC, gaming and datacenter markets.

The industry is at a significant inflection point as the pace of Moore's Law slows while the demand for computing and graphics performance continues to grow. This trend is fueling significant shifts throughout the industry and creating new opportunities for companies that can successfully bring together architectural, packaging, system and software innovations with leading-edge process technologies. That is why at AMD we have invested heavily in our architecture and product roadmaps, while also making the strategic decision to bet big on the 7nm process node. While it is still too early to provide more details on the architectural and product advances we have in store with our next wave of products, it is the right time to provide more detail on the flexible foundry sourcing strategy we put in place several years ago.

No 16-core AMD Ryzen AM4 Until After 7nm EPYC Launch (2019)

AMD in its Q2-2018 investors conference call dropped more hints at when it plans to launch its 3rd generation Ryzen processors, based on its "Zen2" architecture. CEO Lisa Su stated in the Q&A session that rollout of 7 nm Ryzen processors will only follow that of 7 nm EPYC (unlike 1st generation Ryzen preceding 1st generation EPYC). What this effectively means is that the fabled 16-core die with 8 cores per CCX won't make it to the desktop platform any time soon (at least not in the next three quarters, certainly not within 2018).

The AMD CEO touched upon the development of the company's 7 nm "Rome" silicon, which will be at the heart of the company's 2nd generation EPYC processor family. 2nd generation EPYC, as you'd recall from our older article, is based on the 7 nm "Zen2" architecture, and not 12 nm "Zen+." 3rd generation Ryzen is expected to be based on "Zen2." As of now, the company is said to have completed the tape-out of "Rome," and is sending samples out to its industry partners for further testing and validation. The first EPYC products based on this silicon will begin rolling out in 2019. The 7 nm process is also being used for a new "Vega" based GPU, which has taped out and will see its first enterprise-segment product launch within 2018.

AMD EPYC Airport Ads Punch Close to the Belt

Airports are the latest battleground for AMD and Intel as the two vie to catch the attention of IT managers in the midst of an AI and big-data inflection point that promises to trigger a gold rush for enterprise processors. AMD took to San Jose International Airport with its latest AMD EPYC static ads, targeted at IT managers who have stuck with Intel Xeon because of its historic market leadership. AMD EPYC processors offer "more performance, more security, and more value" than Intel Xeon processors, the ads claim, but not before landing a mean punch in the general area of Intel's belt.