News Posts matching #GPU


Distributed GPU Rendering on the Blockchain is The New Normal, and It's Much Cheaper Than AWS

Otoy, based in Los Angeles, announced the launch of RNDR a few months ago, a cloud rendering platform built on the Ethereum blockchain. The idea is simple: it leverages a distributed network of idle GPUs to render graphics more quickly and efficiently.

The solution takes advantage of the unused power of our GPUs, allowing those who need to render images at full speed to do so through the platform. RNDR distributes revenue through its own blockchain in a decentralized fashion, and, in a recent survey of 1,200 of its contributors, Otoy said it has the world's largest cloud rendering platform - one that has even been praised by Hollywood director and producer J.J. Abrams; Brendan Eich, founder of Brave and Basic Attention Token; and famed talent agent Ari Emanuel.

MSI Talks about NVIDIA Supply Issues, US Trade War and RTX 2080 Ti Lightning

Back on September 27th, MSI talked candidly with PConline at the MSI Gaming New Appreciation Conference in Shanghai. Multiple MSI executives were available to answer questions regarding products, launches, and potential issues. The first question was about the brewing US-China trade war and whether it will affect prices of graphics cards and CPUs. Liao Wei, Deputy General Manager of MSI's Global Multimedia Business Unit and of Graphics Card Products at MSI Headquarters, gave a direct answer: since NVIDIA's GPU cores are fabricated by TSMC in Taiwan and the memory is supplied by Samsung and SK Hynix in South Korea, there is little chance of further graphics card price hikes. CPU prices may increase on the Intel side, however, while AMD is expected to be unaffected.

VUDA is a CUDA-Like Programming Interface for GPU Compute on Vulkan (Open-Source)

GitHub developer jgbit has started an open-source project called VUDA, which takes inspiration from NVIDIA's CUDA API to bring an easily accessible GPU compute interface to the open-source world. VUDA is implemented as a wrapper on top of the highly popular next-gen graphics API Vulkan, which provides low-level access to hardware. VUDA comes as a header-only C++ library, which means it's compatible with all platforms that have a C++ compiler and support Vulkan.

While the project is still young, its potential is considerable, especially given its open-source nature (MIT license). The GitHub page comes with a (very basic) sample that could be a good starting point for using the library.

Intel is Adding Vulkan Support to Their OpenCV Library, First Signs of Discrete GPU?

Intel has submitted the first patches with Vulkan support to their open-source OpenCV library, which is designed to accelerate computer vision. The library is widely used for real-time applications, as it comes with first-class optimizations for Intel processors and multi-core x86 in general. With Vulkan support, existing users can immediately move their neural network workloads to the GPU compute space without having to rewrite their code base.

At this point in time, the Vulkan backend supports Convolution, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling, AvePooling, and Permute. According to the source code changes, this is just "a beginning work for Vulkan in OpenCV DNN, more layer types will be supported and performance tuning is on the way."
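For reference, the layer types listed are standard neural-network building blocks rather than anything Vulkan-specific. A minimal Python sketch of what two of them (ReLU and Softmax) compute - independent of OpenCV's actual GPU implementation:

```python
import math

def relu(values):
    # ReLU: clamp negative activations to zero, element-wise
    return [max(0.0, v) for v in values]

def softmax(values):
    # Softmax: exponentiate (shifted by the max for numerical
    # stability) and normalize so the outputs sum to 1
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

print(relu([-1.0, 0.5, 2.0]))  # negatives become 0.0
print(softmax([1.0, 1.0]))     # equal inputs -> equal probabilities
```

The Vulkan backend's job is to run exactly these computations as compute shaders on the GPU instead of on the CPU.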

It seems that now, with their own GPU development underway, Intel has found new love for the GPU-accelerated compute space. The choice of Vulkan is also interesting as the API is available on a wide range of platforms, which could mean that Intel is trying to turn Vulkan into a CUDA killer. Of course there's still a lot of work needed to achieve that goal, since NVIDIA has had almost a decade of head start.

New NVFlash Released With Turing Support

With the latest release of NVIDIA's NVFlash, version 5.513.0, users can now read and write the BIOS on Turing-based graphics cards. This includes the RTX 2080 Ti, 2080, and 2070. While this may seem mundane at first, graphics cards differ in their power limits, so there is some hope that cross-flashing a BIOS could result in tangible performance gains.

DOWNLOAD: NVIDIA NVFlash v5.513.0

MSI and ESL Partnering For The MSI Gaming Arena 2018 World Championships, Sponsors ESL One 2018

MSI, a world leader in gaming hardware, has partnered with ESL for its MSI Gaming Arena (MGA) 2018 World Championship and as the exclusive gaming sponsor of the ESL One 2018 Grand Finals at the Barclays Center in New York on September 29 and 30.
MGA 2018 World Championship
Before the Grand Final, MSI Gaming Arena (MGA) 2018 will host the world's top Counter-Strike: Global Offensive teams fighting for the coveted MGA Trophy and $60,000 prize pool on September 30 at the Barclays Center in New York. After the regional qualifiers, the four remaining teams will secure their spot in the MGA 2018 CS:GO Grand Finals.

GALAX Starts Selling OC Lab Edition GPU Pot for Extreme LN2 Overclocking

GALAX has announced availability of their OC Lab Edition GPU Pot, a decidedly non-plant-based solution for users to cool their graphics cards with. The OC Lab Edition GPU Pot is made entirely of 99.9%-purity copper, which allows it to withstand temperatures down to -196 °C, the boiling point of liquid nitrogen. Not many more details are available for now, except pricing, and that's definitely nothing to smile about: the OC Lab Edition GPU Pot will cost users $229.99.

GALAX, however, being a customer-friendly brand, is suggesting users put down an order for three of these OC Lab Edition GPU Pots alongside three of their own GALAX HOF OC Lab WC cards, which go for $1,799... netting you a $600 discount on the pots. So, yeah. There's that. If you want it.

NVIDIA Stock Falls 2.1% After Turing GPU Reviews Fail to Impress Morgan Stanley

NVIDIA's review embargo on their Turing-based RTX 2080 and RTX 2080 Ti ended Wednesday, September 19, and it appears that enthusiasts were not the only ones left wanting more from these graphics cards. In particular, Morgan Stanley analyst Joseph Moore shared a note today (Thursday, September 20) with company clients saying "As review embargos broke for the new gaming products, performance improvements in older games is not the leap we had initially hoped for. Performance boost on older games that do not incorporate advanced features is somewhat below our initial expectations, and review recommendations are mixed given higher price points." NVIDIA Corporation shares on the NASDAQ exchange had closed at $271.98 (USD) Wednesday, tumbled to a low of $264.10 at today's open, then recovered to close at $266.28, down 2.1% from the previous close.
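The 2.1% figure follows directly from the two closing prices quoted above; a quick sanity check:

```python
# Closing prices quoted in the article (USD)
prev_close = 271.98  # Wednesday's close
new_close = 266.28   # Thursday's close

# Percentage change relative to the previous close
change_pct = (new_close - prev_close) / prev_close * 100
print(round(change_pct, 1))  # -2.1
```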

The Morgan Stanley report further mentioned that "We are surprised that the 2080 is only slightly better than the 1080ti, which has been available for over a year and is slightly less expensive. With higher clock speeds, higher core count, and 40% higher memory bandwidth, we had expected a bigger boost." Accordingly, the market analyst expects slower adoption of these new GPUs and does not see "much upside" for NVIDIA's gaming business unit over the next two quarters. Despite all this, Morgan Stanley remains bullish on NVIDIA and maintains a $273 long-term price target.

AMD Fast-tracks 7nm "Navi" GPU to Late-2018 Alongside "Zen 2" CPU

AMD is unique in the world of computing as the only company with both high-performance CPU and GPU products. For the past several years we have been executing our multi-generational leadership product and architectural roadmap. Just in the last 18 months, we successfully introduced and ramped our strongest set of products in more than a decade and our business has grown dramatically as we gained market share across the PC, gaming and datacenter markets.

The industry is at a significant inflection point as the pace of Moore's Law slows while the demand for computing and graphics performance continues to grow. This trend is fueling significant shifts throughout the industry and creating new opportunities for companies that can successfully bring together architectural, packaging, system and software innovations with leading-edge process technologies. That is why at AMD we have invested heavily in our architecture and product roadmaps, while also making the strategic decision to bet big on the 7nm process node. While it is still too early to provide more details on the architectural and product advances we have in store with our next wave of products, it is the right time to provide more detail on the flexible foundry sourcing strategy we put in place several years ago.

AMD Chip Manufacturing to Lie Solely With TSMC On and After 7 nm - And Why It's Not a Decision, but a Necessity

It's been a tumultuous few days for AMD: the company has seen Jim Anderson, who led the Computing and Graphics Group after the departure of Raja Koduri, leave at a time of soaring share value (hitting $25.26 and leaving short positions well, short, by $2.67 billion). However, one particular piece of news is most relevant for the company: GlobalFoundries' announcement that it is stopping all ongoing development on the 7 nm node.

This is particularly important for a variety of reasons. The most important one is this: GlobalFoundries' inability to execute on the 7 nm node leaves AMD fully free to procure chips and technology from competing foundries. If you remember, AMD's spin-off of GlobalFoundries left the former with the short end of the stick, having to cater to GlobalFoundries' special pricing and pay for the privilege of sourcing from other foundries. Of course, the Wafer Supply Agreement (WSA) that is in place will have to be amended - again - but the fact is this: AMD wants 7 nm products, and GlobalFoundries can't provide them.
To the forumites: this piece is marked as an editorial

Intel and Philips Accelerate Deep Learning Inference on CPUs in Medical Imaging

Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

"Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds," said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

Google Cloud Introduces NVIDIA Tesla P4 GPUs, for $430 per Month

Today, we are excited to announce a new addition to the Google Cloud Platform (GCP) GPU family that's optimized for graphics-intensive applications and machine learning inference: the NVIDIA Tesla P4 GPU.

We've come a long way since we introduced our first-generation compute accelerator, the K80 GPU, adding along the way P100 and V100 GPUs that are optimized for machine learning and HPC workloads. The new P4 accelerators, now in beta, provide a good balance of price/performance for remote display applications and real-time machine learning inference.

Due to Reduced Demand, Graphics Cards Prices to Decline 20% in July - NVIDIA Postponing Next Gen Launch?

DigiTimes, citing "sources from the upstream supply chain", is reporting an expected decrease in graphics card pricing for July. This move is a way for suppliers to reduce the inventory previously piled up in expectation of continued demand from cryptocurrency miners and gamers in general. It's the economic system at work, with its strengths and weaknesses: now that demand has waned, the somewhat speculative price increases of yore are being rolled back by suppliers to spur demand. This also acts as a countermeasure to an eventual flow of graphics cards from former miners into the second-hand market, which would place further pressure on retailers' sales.

Alongside this expected 20% retail price drop for graphics cards, revenue estimates for major semiconductor manufacturer TSMC and its partners are being revised below previously-projected values, as demand for graphics and ASIC chips is further reduced. DigiTimes' sources say that the worldwide graphics card market now has an inventory of several million units that is proving hard to move (perhaps because the products are already ancient by the usual hardware-tech timeframes), and that NVIDIA has around a million GPUs still pending distribution. Almost as an afterthought, DigiTimes also adds that NVIDIA has decided to postpone the launch of its next-gen products (both 12 nm and, later, 7 nm) until supply returns to safe levels.

Upcoming Windows 10 Task Manager Update to Show Power Usage, Power Usage Trend per Process

One nifty new feature currently being deployed to Windows 10 Fast Ring users is the ability to see exactly how much power a given process is consuming in your system's hardware (CPU, GPU & disk). The new feature, which appears as two additional Task Manager columns, shows the instantaneous power usage of a given process alongside a trend calculated over a two-minute interval. This should be pretty handy, provided the measurement is close enough to real power consumption. It could even serve as another flag for cryptomining malware or scripts running in a given webpage. You can check the source for the other updates brought to build 17704 of the Windows Insider Program.
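Microsoft hasn't documented exactly how the two-minute trend is derived; one plausible reading - purely an assumption for illustration, not the actual Windows implementation - is a moving average over per-second power samples:

```python
from collections import deque

def power_trend(samples, window=120):
    # Average the most recent `window` one-second power samples;
    # a simple stand-in for a two-minute usage trend
    recent = deque(samples, maxlen=window)
    return sum(recent) / len(recent)

# 120 seconds of hypothetical per-second power readings (watts)
readings = [5.0] * 60 + [15.0] * 60
print(power_trend(readings))  # 10.0
```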

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card design that is clearly made by NVIDIA and "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course got tech enthusiasts interested.

The card is clearly made by NVIDIA as indicated by the markings near the PCI-Express x16 slot connector. What's also visible is three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, since GDDR6 support is needed. The twelve GDDR6 memory chips located around the GPU's solder balls are marked as D9WCW, which decodes to MT61K256M32JE-14:A. These chips are Micron-made 8 Gbit GDDR6, specified for 14 Gb/s data rate, operating at 1.35 V. With twelve chips, this board has a 384-bit memory bus and 12 GB VRAM. The memory bandwidth at 14 Gbps data rate is a staggering 672 GB/s, which conclusively beats the 484 GB/s that Vega 64 and GTX 1080 Ti offer.
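The bandwidth numbers follow directly from bus width and per-pin data rate (the GTX 1080 Ti figures used for comparison are its published specs, not something taken from the board photo):

```python
def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Bandwidth in GB/s = bus width (bits) * per-pin data rate (Gb/s) / 8
    return bus_width_bits * data_rate_gbps / 8

# Twelve 32-bit GDDR6 chips -> 384-bit bus at 14 Gb/s per pin
print(memory_bandwidth_gbs(12 * 32, 14))  # 672.0

# GTX 1080 Ti for comparison: 352-bit GDDR5X at 11 Gb/s
print(memory_bandwidth_gbs(352, 11))      # 484.0
```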

Raja Hires Larrabee Architect Tom Forsyth to Help With Intel GPU

A few months ago we reported that Raja Koduri has left AMD to work at Intel on their new discrete GPU project. It looks like he's building a strong team, with the most recent addition being Tom Forsyth, the father of Larrabee, Intel's first attempt at making an x86-based graphics processor. While Larrabee did not achieve its goal and is considered a failure by many, it brought some interesting advances, such as AVX-512, and its lineage is now sold under the Xeon Phi brand.

Tom, who has previously worked at Oculus, Valve, and 3DLabs, posted on Twitter that he's joining Intel in Raja's group, but is "not entirely sure what he'll be working on just yet." At Oculus and Valve he worked on Virtual Reality projects; for example, he wrote big chunks of the Team Fortress 2 VR support for the Oculus Rift. A look at Tom's papers suggests that he might join the Intel team as lead for VR-related projects, as that's without a doubt one of Raja's favorite topics to talk about.

AMD Radeon Vega 12 and Vega 20 Listed in Ashes Of The Singularity Database

Back at Computex, AMD showed a demo of their Vega 20 graphics processor, which is produced using a refined 7 nanometer process. We also reported that the chip has a twice-as-wide memory interface, effectively doubling memory bandwidth and also maximum memory capacity. The smaller process promises improvements to power efficiency, which could let AMD run the chip at higher frequencies for more performance compared to the 14 nanometer process of existing Vega.

As indicated by AMD during Computex, 7 nanometer Vega is a product targeted at High Performance Compute (HPC) applications, with no plans to release it for gaming. As they clarified later, the promise of "7 nanometer for gamers" applies to Navi, which follows the Vega architecture. That makes it all the more surprising to see AOTS results for a non-gaming card - my guess is that someone was curious how well it would do in gaming.

Samsung Wants to Design Their Own Graphics Processor

Job postings on LinkedIn reveal that Samsung is looking to hire a ton of graphics chip engineers to bring forward their own GPU design. In the past the company has licensed GPU IP from companies like ARM and Imagination Technologies, but these designs come with licensing costs, low performance, and limited flexibility. With Samsung needing graphics IP for a wide range of products like phones and tablets, and exploring options in markets like automotive, machine learning, and AI, it's not surprising that the company is now looking into rolling its own GPU - from scratch, as indicated by a recruiter's posting:
"This is Samsung's proprietary IP. We will define the ISA, the architecture, the SW, the entire solution."

NVIDIA Joins S&P 100 Stock Market Index

With tomorrow's opening bell, NVIDIA will join the Standard & Poor's S&P 100 index, replacing Time Warner, whose spot was freed up by its merger with AT&T. This marks a monumental moment for the company, as membership in the S&P 100 is reserved for only the largest and most important corporations in the US. From the tech sector, the list comprises illustrious names such as Apple, Amazon, Facebook, Alphabet (Google), IBM, Intel, Microsoft, Netflix, Oracle, PayPal, Qualcomm, and Texas Instruments.

NVIDIA's stock has seen massive gains over the last few years, thanks to the company delivering record quarter after record quarter. Recent developments have transformed NVIDIA from a mostly gaming-GPU manufacturer into a leader in GPU compute, AI, and machine learning. This of course inspires investors, so NVIDIA stock has been highly sought after, now sitting above 265 USD and putting the company's worth at over 160 billion USD. Congratulations!

AMD "Vega" Outsells "Previous Generation" by Over 10 Times

At its Computex presser, leading up to its 7 nm Radeon Vega series unveil, AMD touched upon the massive proliferation of the Vega graphics architecture, which is found not only in discrete GPUs, but also in APUs and the semi-custom SoCs of the latest generation of 4K-capable game consoles. One slide that created quite a flutter states that "Vega" shipments are over 10 times greater than those of the "previous generation."

Normally you'd assume the previous generation of "Vega" to be "Polaris," since we're talking about the architecture, and not an implementation of it (e.g. "Vega 10" or "Raven Ridge"). AMD later, at its post-event round-table, clarified that it was referring to "Fiji," the chip that went into the Radeon R9 Fury X, R9 Nano, etc., and comparing its sales with those of products based on the "Vega 10" silicon. Growth in shipments of "Vega"-based graphics cards is driven by the crypto-mining industry, and for all intents and purposes, AMD considers the "Vega 10" silicon a commercial success.

GPU Market: Miner Interest Waning, Gamer Interest Increasing - Jon Peddie Research

Jon Peddie Research, the market research firm for the graphics industry, has updated its quarterly Market Watch report. Overall, the report finds the crypto-currency market is continuing to influence the PC graphics market, though its influence is waning. Market Watch found that year-to-year total GPU shipments increased 3.4%, desktop graphics increased 14%, and notebooks decreased 3%. GPU shipments decreased 10% from last quarter: AMD decreased 6%, NVIDIA decreased 10%, and Intel decreased 11%.

AMD increased its market share again this quarter, benefiting from new products for workstations and crypto-currency mining; NVIDIA held steady, and Intel decreased. Over three million add-in boards (AIBs), worth $776 million, were sold to cryptocurrency miners in 2017, and an additional 1.7 million were sold this quarter.

MSI Presents Radeon RX MECH 2 Series Graphics Cards

MSI is proud to present a brand new series based on AMD's "Polaris" chipsets: the Radeon graphics-based MECH series. Equipped with a new thermal design, the Radeon RX MECH series doesn't just allow for higher core and memory speeds, but also provides increased performance in games. The outstanding shape of the eye-catching MECH series cooler is intensified by a fiery red glow piercing through the cover, while the MSI dragon RGB LED on top can be set to any of 16.7 million colors to match your mood or build. A completely custom PCB design featuring an enhanced power design with Military Class 4 components enables higher stable performance to push your graphics card to the max. A classy matte-black metal backplate gives the MECH 2 cards more structural strength and provides a nice finishing touch.

"AMD Radeon has always been committed to the best interest of gamers: a dedication to open innovation such as our contributions to the DirectX and Vulkan APIs, a commitment to true transparency through industry standards like Radeon FreeSync technology, and a desire to expand the PC gaming ecosystem by enabling developers everywhere. It is these values that result in a thriving PC gaming community, and explain why so many gamers continue to rally behind the AMD Radeon brand," said Scott Herkelman, vice president and general manager, AMD Radeon Technologies Group.

NVIDIA Teases "Ultimate Gaming Experience" At GTC Taiwan

NVIDIA has posted a short (literally short) teaser, treating users to the promise of the "Ultimate Gaming Experience." This might mean something big, such as the new, expected NVIDIA 11** series of graphics cards... or it may mean something much less exciting, like the 4K, HDR gaming experience that is supposed to reach gamers in a couple of weeks, at an expected cost of more kidneys than the average human has.

Officially, though, GTC 2018 Taiwan will revolve around artificial intelligence tech (what doesn't these days, really?) Translated, the teaser image reads something along the lines of "Utilizing GPU computing to explore the world's infinite possibilities - witness the power of artificial intelligence and the ultimate gaming experience in GTC Taiwan and Computex 2018." Remember, however, that marketing almost always has a way of blowing things out of proportion - don't hold your breath for a new graphics card series announcement.