News Posts matching #MIT

Arm Unveils "Accuracy Super Resolution" Based on AMD FSR 2

In a community blog post, Arm has announced its new Accuracy Super Resolution (ASR) upscaling technology. This open-source solution aims to transform mobile gaming by offering best-in-class upscaling capabilities for smartphones and tablets. Arm ASR addresses a critical challenge in mobile gaming: delivering high-quality graphics while managing power consumption and heat generation. By rendering games at lower resolutions and then intelligently upscaling them, Arm ASR promises to significantly boost performance without sacrificing visual quality. The technology builds upon AMD's FidelityFX Super Resolution 2 (FSR 2) and adapts it specifically for mobile devices. Arm ASR utilizes temporal upscaling, which combines information from multiple frames to produce higher-quality images from lower-resolution inputs. Even though temporal upscaling is more complicated to implement than spatial frame-by-frame upscaling, it delivers better results and gives developers more freedom.
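
As a conceptual illustration only (not Arm's implementation), the sketch below shows the core temporal-accumulation idea in NumPy: each low-resolution frame is expanded to the output resolution and blended into a persistent high-resolution history. Real upscalers such as Arm ASR and FSR 2 additionally use sub-pixel jitter, motion vectors, depth, and disocclusion handling, all of which this toy omits.

# A minimal sketch of the temporal-accumulation idea behind upscalers such
# as Arm ASR (illustrative only; not Arm's code).
import numpy as np

def temporal_upscale(low_res: np.ndarray, history: np.ndarray,
                     alpha: float = 0.1) -> np.ndarray:
    """Blend this frame's naively upscaled image into the accumulated history.

    low_res: (h, w) current frame rendered at reduced resolution
    history: (2h, 2w) high-resolution accumulation from previous frames
    alpha:   how much the current frame contributes each step
    """
    # Nearest-neighbor expand the current frame to output resolution
    # (real upscalers jitter the sample grid each frame so genuinely
    # new detail arrives over time; this toy does not).
    current = np.kron(low_res, np.ones((2, 2)))
    # Exponential moving average: old samples persist, new samples refine.
    return (1.0 - alpha) * history + alpha * current

h = np.zeros((8, 8))
frame = np.random.rand(4, 4)
for _ in range(10):            # over several frames the history converges
    h = temporal_upscale(frame, h)
print(h.round(2))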

This approach allows for more ambitious graphics while maintaining smooth gameplay. In benchmark tests using a complex scene, Arm demonstrated impressive results: devices featuring the Arm Immortalis-G720 GPU showed substantial framerate improvements when using Arm ASR compared to native-resolution rendering and Qualcomm's Game Super Resolution (GSR). Moreover, the technology helped maintain stable temperatures, preventing the thermal throttling that can compromise user experience. A collaboration with MediaTek revealed significant power savings when running Arm ASR on a Dimensity 9300 handset, which translates to extended battery life, a key concern for mobile gamers. Arm is releasing ASR under the MIT open-source license, encouraging widespread adoption and experimentation among developers. Below you can see the comparison of various upscalers.

Intel Should be Leading the AI Hardware Market: Pat Gelsinger on NVIDIA Getting "Extraordinarily Lucky"

Intel CEO Pat Gelsinger considers NVIDIA "extraordinarily lucky" to be leading the AI hardware industry. In a recent public discussion with students of MIT's engineering school about the state of the semiconductor industry, Gelsinger said that Intel should be the one leading AI, but instead NVIDIA got lucky. We respectfully disagree. What Gelsinger glosses over with this train of thought is how NVIDIA got here. What NVIDIA has in 2023 is the distinction of being one of the hottest tech stocks behind Apple, the highest market share in a crucial hardware resource driving the AI revolution, and of course the little things, like leadership of the gaming GPU market. What it doesn't have is access to the x86 processor IP.

NVIDIA has long aspired to be a CPU company, from its rumored attempt to merge with AMD in the early-to-mid 2000s, to its stint in smartphone application processors with Tegra, an assortment of Arm-based products along the way, and most recently, its spectacularly unsuccessful attempt to acquire Arm from SoftBank. Despite limited luck in the CPU industry, never quite leveling up to Intel, AMD, or even Qualcomm and MediaTek, NVIDIA never lost sight of its goal to be a compute hardware superpower, which is why, in our opinion, it owns the AI hardware market. NVIDIA isn't lucky; it spent 16 years getting here.

NVIDIA DGX H100 Systems are Now Shipping

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence. They're creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom—and working to transform their industries in the process. Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow's plants can be more efficient.

Called Green Physics AI, it adds information like an object's CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

MIT Researchers Grow Transistors on Top of Silicon Wafers

MIT researchers have developed a groundbreaking technology that allows 2D transition metal dichalcogenide (TMD) materials to be grown directly on fully fabricated silicon chips, enabling denser integration. Conventional growth methods require temperatures of about 600°C, which would damage silicon transistors and circuits, as they break down above 400°C. The MIT team overcame this challenge by creating a low-temperature growth process that preserves the chip's integrity, allowing 2D semiconductor transistors to be integrated directly on top of standard silicon circuits. The new approach grows a smooth, highly uniform layer across an entire 8-inch wafer, unlike previous methods that involved growing 2D materials elsewhere and then transferring them to a chip or wafer, a process that often introduced imperfections harming device and chip performance.

Additionally, the novel technology can grow a uniform layer of TMD material in less than an hour over 8-inch wafers, a significant improvement over previous methods that required more than a day for a single layer. The enhanced speed and uniformity of this technology make it suitable for commercial applications, where 8-inch or larger wafers are essential. The researchers focused on molybdenum disulfide, a flexible, transparent 2D material with powerful electronic and photonic properties ideal for semiconductor transistors. They designed a new furnace for the metal-organic chemical vapor deposition process, which has separate low- and high-temperature regions. The silicon wafer is placed in the low-temperature region while vaporized molybdenum and sulfur precursors flow into the furnace. Molybdenum remains in the low-temperature region, while the sulfur precursor decomposes in the high-temperature region before flowing back into the low-temperature region to grow molybdenum disulfide on the wafer surface.

Brelyon to Showcase World's Largest Field-of-View OLED Display at CES

Brelyon, the MIT spin-off pioneering a new category of ultra-immersive display technologies to enable access to the metaverse, will be showcasing Brelyon Fusion—an OLED display with the world's largest field of view—at CES 2023 in Las Vegas on January 5-8, 2023. Be among the first to experience a preview of a new era of holodeck-like technology that pushes the boundaries of immersion. To schedule a demo, contact info@brelyon.com or visit Brelyon at CES (Exhibit 61514, Venetian Expo, Level 1, Hall G, Eureka Park).

A concept display technology, Brelyon Fusion is the world's first 8K desktop virtual display and features the latest computational and lightfield expansion innovations with surround video conferencing using Synthetic Aperture technology and spatial audio—all without the need to wear a headset.

AMD Releases FidelityFX Super Resolution 2.0 Source Code Through GPUOpen

Today marks a year since gamers could try out AMD FidelityFX Super Resolution technology for themselves with our spatial upscaler - FSR 1. With the introduction earlier this year of FSR 2, our temporal upscaling solution, there are now over 110 games that support FSR. The rate of uptake has been very impressive - FSR is AMD's fastest-adopted software gaming technology to date.

So it seems fitting that we should pick this anniversary day to share the source code for FSR 2, opening up the opportunity for every game developer to integrate FSR 2 if they wish, and add their title to the 24 games which have already announced support. As always, the source code is being made available via GPUOpen under the MIT license, and you can now find links to it on our dedicated FSR 2 page.

AMD FidelityFX FSR Source Code Released & Updates Posted, Uses Lanczos under the Hood

AMD today in a blog post announced several updates to the FidelityFX Super Resolution (FSR) technology, its performance enhancement rivaling NVIDIA DLSS, which lets gamers dial up performance with minimal loss to image quality. To begin with, the company released the source code of the technology to the public under its GPUOpen initiative, under the MIT license. This makes it tremendously easy (and affordable) for game developers to implement the tech. Inspecting the source, we find that FSR relies heavily on a multi-pass Lanczos algorithm for image upscaling. Next up, we learn that close to two dozen games are already in the process of receiving FSR support. Lastly, AMD announced that Unity and Unreal Engine now support FSR.

AMD broadly detailed how FSR works in its June 2021 announcement of the technology. FSR sits within the render pipeline of a game: an almost-ready lower-resolution frame that has been rendered, tone-mapped, and anti-aliased is processed by FSR in a two-pass process implemented as a shader, before the high-resolution output is passed on to post-processing effects that introduce noise (such as film grain). The HUD and other in-game text (such as subtitles) are rendered natively at the target (higher) resolution and applied post-render. The FSR component makes two passes: upscaling and sharpening. We learn from the source code that the upscaler is based on the Lanczos algorithm, which dates back to 1979. Media-PC enthusiasts will know Lanczos from MadVR, which has offered various movie upscaling algorithms over the years. AMD's implementation of Lanczos-2 differs from the original: it skips the expensive sin(), rcp(), and sqrt() instructions in favor of faster approximations. AMD also added logic to avoid the ringing artifacts often observed in images processed with Lanczos.
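
As a rough illustration of the underlying math, here is a minimal NumPy sketch of Lanczos-2 resampling in one dimension. It is not AMD's shader code: FSR operates on 2D color frames and replaces the transcendental functions below with fast polynomial approximations, while this sketch uses the textbook kernel directly.

# A minimal sketch of Lanczos-2 resampling (illustrative only, not FSR's
# optimized shader implementation).
import numpy as np

def lanczos2_kernel(x: np.ndarray) -> np.ndarray:
    """Lanczos-2 weight: sinc(x) * sinc(x/2) for |x| < 2, else 0."""
    x = np.abs(x)
    out = np.sinc(x) * np.sinc(x / 2.0)  # np.sinc(x) = sin(pi x)/(pi x)
    return np.where(x < 2.0, out, 0.0)

def upscale_1d(signal: np.ndarray, factor: float) -> np.ndarray:
    """Resample a 1D signal to a higher resolution with Lanczos-2."""
    n_out = int(len(signal) * factor)
    out = np.empty(n_out)
    for i in range(n_out):
        src = i / factor                    # position in source coordinates
        lo = int(np.floor(src)) - 1         # 4-tap footprint for a = 2
        taps = np.arange(lo, lo + 4)
        w = lanczos2_kernel(src - taps)     # weights before clamping indices
        taps = np.clip(taps, 0, len(signal) - 1)   # clamp at the borders
        out[i] = np.dot(w, signal[taps]) / w.sum() # normalize the weights
    return out

ramp = np.linspace(0.0, 1.0, 16)
print(upscale_1d(ramp, 2.0)[:8])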

AMD Leads High Performance Computing Towards Exascale and Beyond

At this year's International Supercomputing 2021 digital event, AMD (NASDAQ: AMD) is showcasing momentum for its AMD EPYC processors and AMD Instinct accelerators across the High Performance Computing (HPC) industry. The company also outlined updates to the ROCm open software platform and introduced the AMD Instinct Education and Research (AIER) initiative. The latest Top500 list showcased the continued growth of AMD EPYC processors for HPC systems. AMD EPYC processors power nearly 5x more systems compared to the June 2020 list, and more than double the number of systems compared to November 2020. AMD EPYC processors also power half of the 58 new entries on the June 2021 list.

"High performance computing is critical to addressing the world's biggest and most important challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "With our AMD EPYC processor family and Instinct accelerators, AMD continues to be the partner of choice for HPC. We are committed to enabling the performance and capabilities needed to advance scientific discoveries, break the exascale barrier, and continue driving innovation."

TSMC Claims Breakthrough on 1nm Chip Production

TSMC, in collaboration with National Taiwan University (NTU) and the Massachusetts Institute of Technology (MIT), has made a significant breakthrough in the development of 1-nanometer chips. The joint announcement comes after IBM earlier this month published news of its 2-nanometer chip development. The researchers found that using the semi-metal bismuth (Bi) as the contact electrode for two-dimensional materials can greatly reduce resistance and increase current. The discovery was first made by the MIT team and then further refined by TSMC and NTU, and it promises to increase energy efficiency and performance in future processors. The 1-nanometer node won't be deployed for several years; TSMC plans to start 3-nanometer production in H2 2022.

AMD COVID-19 HPC Fund Donates 7 Petaflops of Compute Power to Researchers

AMD and technology partner Penguin Computing Inc., a division of SMART Global Holdings, Inc, today announced that New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University are the first universities named to receive complete AMD-powered, high-performance computing systems from the AMD HPC Fund for COVID-19 research. AMD also announced it will contribute a cloud-based system powered by AMD EPYC and AMD Radeon Instinct processors located on-site at Penguin Computing, providing remote supercomputing capabilities for selected researchers around the world. Combined, the donated systems will collectively provide researchers with more than seven petaflops of compute power that can be applied to fight COVID-19.

"High performance computing technology plays a critical role in modern viral research, deepening our understanding of how specific viruses work and ultimately accelerating the development of potential therapeutics and vaccines," said Lisa Su, president and CEO, AMD. "AMD and our technology partners are proud to provide researchers around the world with these new systems that will increase the computing capability available to fight COVID-19 and support future medical research."

Researchers Build a CPU Without Silicon Using Carbon Nanotubes

It is no secret that silicon manufacturing is an expensive and difficult process that requires big investments and a lot of effort to get right. Take Intel's 10 nm node, for example: originally planned to launch in 2015, it was delayed to 2019 because of technical difficulties. That shows how silicon scaling is getting more difficult than ever, while costs rise exponentially. Developing newer nodes is expected to cost billions of dollars for research alone, not counting the cost of setting up a manufacturing facility. To prepare for the moment when ever-shrinking nodes become financially and physically unfeasible, researchers are exploring new technologies that could replace silicon and possibly offer even better electrical properties. One such structure is the carbon nanotube, or CNT for short.

Researchers from MIT, in collaboration with scientists from Analog Devices, have successfully built a CPU based on the RISC-V architecture entirely out of CNTs. Called RV16X-NANO, this CPU is currently only capable of executing a classic "Hello World" program. Depending on how they form, CNTs can be either semiconducting or metallic, and manufacturing inevitably yields a fraction of metallic tubes that disrupt logic circuits. Production poses further challenges because CNTs tend to deposit in random positions and orientations. The researchers from MIT and Analog Devices addressed this by making the contact surfaces large enough that, statistically, enough tubes end up well positioned.
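
The statistical argument can be sketched with a toy Monte Carlo; every number below is an invented placeholder for illustration, not data from the paper. The larger the contact area, the more randomly placed tubes it captures, and the more likely a device is to have the minimum number of usable tubes it needs:

# A toy Monte Carlo of the "large enough surface" argument (all parameters
# are invented for illustration only).
import random

def device_yield(area_units: float, density: float = 5.0,
                 p_usable: float = 0.7, need: int = 3,
                 trials: int = 10_000) -> float:
    """Fraction of simulated devices with at least `need` usable tubes."""
    ok = 0
    for _ in range(trials):
        # Each tube landing on the contact is usable (semiconducting and
        # well aligned) with probability p_usable.
        n_usable = sum(random.random() < p_usable
                       for _ in range(int(area_units * density)))
        ok += n_usable >= need
    return ok / trials

for area in (0.5, 1.0, 2.0, 4.0):    # yield climbs quickly with area
    print(f"area {area}: yield {device_yield(area):.3f}")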

It Does Matter How You Spin it - Spintronics Could be Answer to Future Semiconductor Technologies

It's only a matter of time before microchip production as we know it disappears entirely, at least for leading-edge tech designs. Either via new materials applied to trusted techniques (such as carbon coating/nanotubes) or entirely new and exotic fabrication technologies, we're rapidly approaching the limits of traditional silicon-based microchips. One solution to the problem, as it stands, might be found in spintronics - an interesting concept which bases processing and data retention not simply on whether current is being applied to a given transistor (as is the case for current silicon chips), but on a property of electrons called spin. Crucially, changing the magnetic orientation of electrons requires but a single charge instead of a continued supply of power, which allows for much lower power consumption and heat output, two of the factors increasingly limiting conventional chips.

MIT Researchers Find a New Way to Fix Spectre and Meltdown, Isolation Is Key

The Meltdown and Spectre vulnerabilities have been a real nightmare throughout this year. Those affected were quick (maybe too quick) to mitigate the problems with different solutions, but months later even the most recent Intel chips aren't completely safe. Hardware fixes only work for certain Meltdown variants, while the rest are still mitigated with firmware and OS updates that carry a measurable performance impact.

Intel will have to redesign certain features in its future processors to finally put Meltdown and Spectre behind it, but meanwhile others have stepped in with options. MIT researchers have developed a way to partition and isolate memory caches with 'protection domains'. Unlike Intel's Cache Allocation Technology (CAT), MIT's technology, called DAWG (Dynamically Allocated Way Guard), disallows hits across those protection domains. This is important because attackers exploiting these vulnerabilities rely on cache timing attacks to gain access to sensitive, private data.
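
To make the distinction concrete, here is a toy Python model of way partitioning in the spirit of DAWG; the class names and eviction policy are our own inventions, not MIT's code. The point it illustrates is that a lookup can only hit in the ways assigned to the requesting protection domain, so an attacker's domain never observes whether a victim's data is cached.

# A toy model of DAWG-style cache way partitioning (conceptual sketch only).
from dataclasses import dataclass, field
from typing import Dict, List, Optional

WAYS = 8    # ways per cache set
SETS = 64   # number of sets

@dataclass
class Line:
    tag: Optional[int] = None
    domain: Optional[int] = None

@dataclass
class Cache:
    # way_mask[domain] lists which ways a protection domain may use.
    way_mask: Dict[int, List[int]]
    sets: List[List[Line]] = field(
        default_factory=lambda: [[Line() for _ in range(WAYS)]
                                 for _ in range(SETS)])

    def access(self, domain: int, addr: int) -> bool:
        s, tag = addr % SETS, addr // SETS
        allowed = self.way_mask[domain]
        # A hit only counts within the domain's own ways, so one domain
        # can never observe another domain's cached data via timing.
        for w in allowed:
            if self.sets[s][w].tag == tag and self.sets[s][w].domain == domain:
                return True
        # Miss: fill into one of the domain's own ways (trivial policy).
        victim = allowed[tag % len(allowed)]
        self.sets[s][victim] = Line(tag, domain)
        return False

# Domain 0 (victim) gets ways 0-3; domain 1 (attacker) gets ways 4-7.
cache = Cache(way_mask={0: [0, 1, 2, 3], 1: [4, 5, 6, 7]})
cache.access(0, 0x1234)          # victim warms the cache
print(cache.access(1, 0x1234))   # attacker still misses: False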

MIT, Stanford Partner Towards Making CPU-Memory Buses Obsolete

Graphene has been hailed for some time now as the natural successor to silicon, today's most widely used semiconductor material. However, even before such exotic alternatives are employed (and we are still a long way from that future, at least for mass production), engineers and researchers seem to be focusing on one specific part of computing: internal communication between components.

Typically, communication between a computer's Central Processing Unit (CPU) and system memory (usually DRAM) has occurred over a bus, essentially a communication highway between the data stored in DRAM and the CPU that needs to process it. The fastest CPU and RAM are still only as fast as the bus between them, and recent workloads have increased the amount of data to be processed (and thus transferred) by orders of magnitude. As such, engineers have been trying to find ways to increase communication speed between the CPU and the memory subsystem, as it looks increasingly likely that the next bottlenecks in HPC will come not from a lack of CPU speed or memory throughput, but from the communication between the two.
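
A back-of-the-envelope calculation makes the point; the figures below are our own illustrative assumptions, not numbers from the research. For a streaming kernel that moves many bytes per arithmetic operation, the bus, not the CPU, sets the ceiling:

# Illustrative bus-bottleneck arithmetic (hypothetical numbers).
peak_flops = 2.0e12     # hypothetical CPU: 2 TFLOP/s of compute
bus_bw     = 50.0e9     # hypothetical memory bus: 50 GB/s

# A streaming kernel like y[i] = a*x[i] + y[i] moves 24 bytes (read x,
# read y, write y, 8 bytes each) for every 2 floating-point operations.
bytes_per_flop = 24 / 2
achievable = bus_bw / bytes_per_flop   # FLOP/s the bus can actually feed
print(f"bus-limited: {achievable / 1e9:.1f} GFLOP/s "
      f"({100 * achievable / peak_flops:.1f}% of peak)")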

Dell Rolls Out the OptiPlex 3020 Desktop

Dell has this week introduced a new OptiPlex business-ready desktop, a model dubbed OptiPlex 3020 that promises to offer 'industry-leading performance and best-in-class security in a budget-friendly package'.

Coming in two versions, Minitower (MIT) and Small Form Factor (SFF), this compact PC features a tool-free design and packs a 4th gen Intel Core processor, up to 16 GB of RAM, Intel HD 4600 graphics, either a hard drive or a solid-state hybrid drive (for up to 2 TB of storage on the minitower SKU), one PCIe x16 slot for graphics expansion, two USB 3.0 ports, and VGA and DisplayPort 1.2 outputs. The OptiPlex 3020 starts at $499.

AMD Appoints Ahmed Yahia Al Idrissi to Board of Directors

AMD (NYSE: AMD) announced today that Ahmed Yahia Al Idrissi has been appointed to the company's board of directors as a second representative of Mubadala Development Company. Yahia currently serves as executive director of Mubadala Industry, where he is responsible for Mubadala's growing industrial portfolio, including metals, mining, utilities, and advanced materials and products. Prior to joining Mubadala, Yahia was a partner at McKinsey & Company where he co-led the firm's Principal Investor practice. He was also the managing partner of McKinsey's Abu Dhabi practice.

"Ahmed's years of success at McKinsey, his responsibilities as part of the senior executive management team at Mubadala and his extensive experience with a number of different boards make him an excellent addition to AMD's board of directors," said Bruce Claflin, AMD's chairman of the board.

Re-engineered Battery Material Could Lead to Rapid Recharging of Many Devices

MIT engineers have created a kind of beltway that allows for the rapid transit of electrical energy through a well-known battery material, an advance that could usher in smaller, lighter batteries -- for cell phones and other devices -- that could recharge in seconds rather than hours. The work could also allow for the quick recharging of batteries in electric cars, although that particular application would be limited by the amount of power available to a homeowner through the electric grid.

The work, led by Gerbrand Ceder, the Richard P. Simmons Professor of Materials Science and Engineering, is reported in the March 12 issue of Nature. Because the material involved is not new -- the researchers have simply changed the way they make it -- Ceder believes the work could make it into the marketplace within two to three years.

NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University's computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed "NVIDIA Fellow."

"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company."

Buy a Laptop for a Child, Get Another Laptop Free

One Laptop Per Child, an ambitious project to bring computing to the developing world's children, has considerable momentum. The early reviews have been glowing, and mass production is set to start next month.

Orders, however, are slow. "I have to some degree underestimated the difference between shaking the hand of a head of state and having a check written," said Nicholas Negroponte, chairman of the nonprofit project. "And yes, it has been a disappointment."

But Mr. Negroponte, the founding director of the M.I.T. Media Laboratory, views the problem as a temporary one in the long-term pursuit of using technology as a new channel of learning and self-expression for children worldwide. He is reaching out to the public to try to give the laptop campaign a boost. The marketing program, to be announced today, is called "Give 1 Get 1," in which Americans and Canadians can buy two laptops for $399. One of the machines will be given to a child in a developing nation, and the other one will be shipped to the purchaser by Christmas. The donated computer is a tax-deductible charitable contribution. The program will run for two weeks, with orders accepted from Nov. 12 to Nov. 26.

MIT Team Simplifies Programming

The group previously responsible for creating the popular LEGO Mindstorms series of programmable robotics kits has created Scratch, a program that makes it easier for kids aged eight and older to learn programming. Scratch is available as a free 35 MB download and so far runs on both Windows and Mac OS X. Programming commands are very simple, separated into categories such as Motion and Sensing, and can be dragged and dropped into the scripts panel.