News Posts matching #Tech

NVIDIA Fine-Tunes Llama3.1 Model to Beat GPT-4o and Claude 3.5 Sonnet with Only 70 Billion Parameters

NVIDIA has officially released its Llama-3.1-Nemotron-70B-Instruct model. Based on Meta's Llama 3.1 70B, the Nemotron model is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. NVIDIA used structured fine-tuning data to steer the model and allow it to generate more helpful responses. With only 70 billion parameters, the model is punching far above its weight class. The company claims that the model beats the current top models from leading labs, OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, across AI benchmarks. In evaluations such as Arena Hard, NVIDIA's Llama 3.1 Nemotron 70B scores 85 points, while GPT-4o and Claude 3.5 Sonnet score 79.3 and 79.2, respectively. NVIDIA also holds the top spot in other benchmarks, scoring 57.6 on AlpacaEval and 8.98 on MT-Bench, while Claude 3.5 Sonnet and GPT-4o trail at 52.4 / 8.81 and 57.5 / 8.74, just below Nemotron.

This language model underwent training using reinforcement learning from human feedback (RLHF), specifically employing the REINFORCE algorithm. The process involved a reward model based on a large language model architecture and custom preference prompts designed to guide the model's behavior. Training started from an existing instruction-tuned model, Llama-3.1-70B-Instruct, as the initial policy, using the Llama-3.1-Nemotron-70B-Reward model and HelpSteer2-Preference prompts. Running the model locally requires either four 40 GB or two 80 GB VRAM GPUs and 150 GB of free disk space. We managed to take it for a spin on NVIDIA's website to say hello to TechPowerUp readers. The model also passes the infamous "strawberry" test, where it has to count the number of specific letters in a word. However, it appears that this test was part of the fine-tuning data, as the model fails the next test, shown in the image below.
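For readers who want to try the model on their own hardware, a minimal sketch using the Hugging Face transformers library might look like the following. The checkpoint name matches NVIDIA's published HF model, but the generation settings and prompt are illustrative assumptions on our part:

```python
# Minimal sketch: loading NVIDIA's Nemotron model with Hugging Face transformers.
# Assumes ~150 GB of free disk space and four 40 GB (or two 80 GB) GPUs, per NVIDIA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory versus fp32
    device_map="auto",           # shards the 70B weights across available GPUs
)

# The "strawberry" test mentioned above: counting letters in a word.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)  # illustrative token budget
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```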

Neuranics Announces £2.4 Million Project to Revolutionise Human-Machine Interfaces

Neuranics has secured an £800,000 grant from Scottish Enterprise (SE) to support a £2.4 million project aimed at transforming how humans interact with machines through innovative wristband technology. Using Neuranics' patented magnetic sensors, the wristbands will detect muscle activity and precise gestures. This 18-month project will create ten high-tech jobs at the company's Glasgow headquarters and solidify Scotland's position as a leader in wearable technology.

The wristbands, leveraging magnetomyography (MMG) technology and machine learning, will interpret muscle movements through soft bands worn on the arms. Initially targeting extended reality (XR) applications, the device will enable seamless gesture recognition for immersive digital experiences.

Report: AI Software Sales to Experience Massive Growth with 40.6% CAGR Over the Next Five Years

The market for artificial intelligence (AI) platforms software grew at a rapid pace in 2023 and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across many industries. A new International Data Corporation (IDC) forecast shows that worldwide revenue for AI platforms software will grow to $153.0 billion in 2028 with a compound annual growth rate (CAGR) of 40.6% over the 2023-2028 forecast period.
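As a quick sanity check on IDC's figures, the CAGR relationship implies a 2023 baseline of roughly $28 billion. A few lines of Python make the arithmetic explicit; note that the 2023 figure below is derived, not quoted from IDC:

```python
# Back-solving the implied 2023 baseline from IDC's forecast:
# future = base * (1 + CAGR)^years  =>  base = future / (1 + CAGR)^years
future_revenue_busd = 153.0   # IDC's 2028 forecast, $ billions
cagr = 0.406                  # 40.6% compound annual growth rate
years = 5                     # 2023 -> 2028

base = future_revenue_busd / (1 + cagr) ** years
print(f"Implied 2023 revenue: ${base:.1f}B")  # ~$27.8B (derived, not from IDC)
```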

"The AI platforms market shows no signs of slowing down. Rapid innovations in generative AI is changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning," said Ritu Jyoti, group vice president and general manager of IDC's Artificial Intelligence, Automation, Data and Analytics research. "IDC expects this upward trajectory will continue to accelerate with the emergence of unified platforms for predictive and generative AI that supports interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale."

Microsoft Makes Strategic Investment in Cyclic Materials to Accelerate Climate Tech Innovation

Cyclic Materials, an advanced metals recycling company building a circular supply chain for rare earth elements and other critical metals, today announced it has received an equity investment from Microsoft's Climate Innovation Fund, an initiative dedicated to accelerating technology development and deployment of new climate innovations. This investment is representative of Microsoft's commitment to a circular economy and interest in hard drive rare earth element recycling.

Over the past two years, Cyclic has developed a patent-pending technology, CC360, to specifically address the challenge of recovering rare earths contained in end-of-life hard drives. While hard drives are typically sent to an IT asset disposal (ITAD) company at the end of life, this disposal process is designed for data destruction, followed by shredding of drives for the recovery of other metals such as gold and silver. The rare earths contained are currently not recovered. With the CC360, ITAD companies can separate a portion of hard drives for rare earth recovery, while retaining the rest of the hard drives for their traditional process. These separated magnets can then be processed by Cyclic Materials' processing technologies, unlocking an additional value stream from hard drive disposal.

China Launches Massive $47.5 Billion "Big Fund" to Boost Domestic Chip Industry

Beijing has doubled down on its push for semiconductor self-sufficiency with the establishment of a new $47.5 billion investment fund to accelerate growth in the domestic chip sector. The fund, officially registered on May 24th under the name "China Integrated Circuit Industry Investment Fund Phase III", represents the largest of three state-backed vehicles aimed at cultivating China's semiconductor capabilities. The announcement comes as tensions over advanced chip technology continue to escalate between the U.S. and China. Over the past couple of years, Washington has steadily ratcheted up export controls on semiconductors to Beijing over national security concerns about potential military applications. These measures have lent new urgency to China's quest for self-sufficiency in chip design and manufacturing.

With a war chest of 344 billion yuan ($47.5 billion), the "Big Fund" dwarfs the combined capital of the first two semiconductor investment vehicles launched in 2014 and 2019. Officials have outlined a multipronged strategy targeting key bottlenecks, focusing on equipment for chip fabrication plants. The fund has bankrolled major projects such as flash memory maker Yangtze Memory Technologies and leading foundries like SMIC and Huahong. China's homegrown chip industry still needs to catch up to global leaders like Intel, Samsung, and TSMC. However, the immense scale of state-directed capital illustrates Beijing's unwavering commitment to developing a self-reliant supply chain for semiconductors—a technology viewed as indispensable for economic and military competitiveness. News of the "Big Fund" sent Chinese chip stocks surging over 3% on hopes of fresh financing tailwinds.

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. The new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, bringing Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

Micron to Receive US$6.1 Billion in CHIPS and Science Act Funding

Micron Technology, Inc., one of the world's largest semiconductor companies and the only U.S.-based manufacturer of memory, and the Biden-Harris Administration today announced that they have signed a non-binding Preliminary Memorandum of Terms (PMT) for $6.1 billion in funding under the CHIPS and Science Act to support planned leading-edge memory manufacturing in Idaho and New York.

The CHIPS and Science Act grants of $6.1 billion will support Micron's plans to invest approximately $50 billion in gross capex for U.S. domestic leading-edge memory manufacturing through 2030. These grants and additional state and local incentives will support the construction of one leading-edge memory manufacturing fab to be co-located with the company's existing leading-edge R&D facility in Boise, Idaho, and the construction of two leading-edge memory fabs in Clay, New York.

US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing applications. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Montage Technology Pioneers the Trial Production of DDR5 CKDs

Montage Technology, a leading data processing and interconnect IC company, today announced that it has taken the lead in the trial production of 1st-generation DDR5 Clock Driver (CKD) chips for next-generation client memory. This new product aims to enhance the speed and stability of memory data access to match the ever-increasing CPU operating speed and performance.

Previously, clock driver functionality was integrated into the Registering Clock Driver (RCD) chips used on server RDIMM and LRDIMM modules and was not deployed in PCs. In the DDR5 era, as data rates climb to 6400 MT/s and above, the clock driver has emerged as an indispensable component for client memory.
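For context on why signal integrity becomes critical at these speeds, a DDR5 module's peak bandwidth scales directly with the transfer rate. The short calculation below uses standard DDR5 bus math (64 data bits per DIMM), not figures from Montage's announcement:

```python
# Peak bandwidth of one DDR5 DIMM: transfer rate x bus width.
# Standard DDR5 bus math, not from Montage's announcement.
def ddr5_bandwidth_gbs(mt_per_s: int, bus_bits: int = 64) -> float:
    return mt_per_s * bus_bits / 8 / 1000  # MT/s * bytes per transfer -> GB/s

for rate in (4800, 6400, 8000):
    print(f"DDR5-{rate}: {ddr5_bandwidth_gbs(rate):.1f} GB/s per DIMM")
# DDR5-6400 -> 51.2 GB/s, the speed class where the article says CKDs become necessary
```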

Samsung Semiconductor Discusses "Water Stress" & Impact of Production Expansion

"The Earth is Blue," said Yuri Gagarin, the first human to journey into space. With two-thirds of its surface covered in water, Earth is a planet that exuberates its blue radiance in the dark space. However, today, the scarcity of water is a challenge that planet Earth is confronted with. For some, this may be hard to understand. What happened to our blue planet Earth? To put in numbers, more than 97% of the water on Earth consists of seawater, with another 2% locked in ice caps. That only leaves a mere 1% of water available for our daily use. The problem lies in the fact that this 1% of water is gradually becoming scarcer due to reasons such as climate change, environmental pollution, and population growth, leading to increased water stress. 'Water stress' is quantified by the proportion of water demand to the available water resources on an annual basis, indicating the severity of water scarcity as the stress index rises. Higher stress indexes signify experiencing severe water scarcity.

The semiconductor ecosystem, unsustainable without water
Because water stress issues transcend national boundaries, various stakeholders, including international organizations and governments, work to negotiate water resource management strategies and promote collaboration. The UN designates March 22nd as the annual "World Water Day," running various campaigns to raise awareness about the severity of water scarcity. It is now imperative for companies to also take responsibility for the water resources they use and pursue sustainable management.

Tenstorrent and MosChip Partner on High Performance RISC-V Design

Tenstorrent and MosChip Technologies announced today that they are partnering on design for Tenstorrent's cutting-edge RISC-V solutions. In selecting MosChip Technologies, Tenstorrent stands to strongly advance both its own and its customers' development of RISC-V solutions as they work together on Physical Design, DFT, Verification, and RTL Design services.

"MosChip Technologies is special in that they have unparalleled tape out expertise in design services, with more than 200 multi-million gate ASICs under their belt", said David Bennett, CCO of Tenstorrent. "Partnering with MosChip enables us to design the strongest RISC-V solution we can to serve ourselves, our partners, and our customers alike."

Marvell Announces Industry's First 2 nm Platform for Accelerated Infrastructure Silicon

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, is extending its collaboration with TSMC to develop the industry's first technology platform to produce 2 nm semiconductors optimized for accelerated infrastructure.

Behind the Marvell 2 nm platform is the company's industry-leading IP portfolio that covers the full spectrum of infrastructure requirements, including high-speed long-reach SerDes at speeds beyond 200 Gbps, processor subsystems, encryption engines, system-on-chip fabrics, chip-to-chip interconnects, and a variety of high-bandwidth physical layer interfaces for compute, memory, networking and storage architectures. These technologies will serve as the foundation for producing cloud-optimized custom compute accelerators, Ethernet switches, optical and copper interconnect digital signal processors, and other devices for powering AI clusters, cloud data centers and other accelerated infrastructure.

Microsoft Investment in Mistral Attracts Possible Investigation by EU Regulators

Tech giant Microsoft and Paris-based startup Mistral AI, an innovator in open-source AI model development, have announced a new multi-year partnership to accelerate AI innovation and expand access to Mistral's state-of-the-art models. The collaboration will leverage Azure's cutting-edge AI infrastructure to propel Mistral's research and bring its innovations to more customers globally. The partnership focuses on three core areas. First, Microsoft will provide Mistral with Azure AI supercomputing infrastructure to power advanced AI training and inference for Mistral's flagship models like Mistral-Large. Second, the companies will collaborate on AI research and development to push the boundaries of AI models. And third, Azure's enterprise capabilities will give Mistral additional opportunities to promote, sell, and distribute their models to Microsoft customers worldwide.

However, an investment in a European startup rarely proceeds without the constant oversight of European Union authorities and regulators. According to Bloomberg, an EU spokesperson said on Tuesday that EU regulators will analyze Microsoft's investment in Mistral after receiving a copy of the agreement between the two parties. While there is no formal investigation yet, continued probing of Microsoft's deal and intentions could escalate into a full formal investigation and, ultimately, the termination of Microsoft's plans. For now, a formal investigation remains hypothetical, but investing in EU startups might become unfeasible for American tech giants if EU regulators continue to scrutinize every investment made in companies based on EU soil.

GlobalFoundries and Biden-Harris Administration Announce CHIPS and Science Act Funding for Essential Chip Manufacturing

The U.S. Department of Commerce today announced $1.5 billion in planned direct funding for GlobalFoundries (Nasdaq: GFS) (GF) as part of the U.S. CHIPS and Science Act. This investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets.

New York-headquartered GF, celebrating its 15th year of operations, is the only U.S.-based pure play foundry with a global manufacturing footprint including facilities in the U.S., Europe, and Singapore. GF is the first semiconductor pure play foundry to receive a major award (over $1.5 billion) from the CHIPS and Science Act, designed to strengthen American semiconductor manufacturing, supply chains and national security. The proposed funding will support three GF projects:

Intel Foundry Services Get 18A Order: Arm-based 64-Core Neoverse SoC

Faraday Technology Corporation, a Taiwanese silicon IP designer, has announced plans to develop a new 64-core system-on-chip (SoC) utilizing Intel's most advanced 18A process technology. The Arm-based SoC will integrate Arm Neoverse compute subsystems (CSS) to deliver high performance and efficiency for data centers, infrastructure edge, and 5G networks. This collaboration brings together Faraday, Arm, and Intel Foundry Services. Faraday will leverage its ASIC design and IP solutions expertise to build the SoC. Arm will provide the Neoverse compute subsystem IP to enable scalable computing. Intel Foundry Services will manufacture the chip using its cutting-edge 18A process, which delivers best-in-class transistor performance.

The new 64-core SoC will be a key component of Faraday's upcoming SoC evaluation platform. This platform aims to accelerate customer development of data center servers, high-performance computing ASICs, and custom SoCs. The platform will also incorporate interface IPs from the Arm Total Design ecosystem for complete implementation and verification. Both Arm and Intel Foundry Services expressed excitement about working with Faraday on this advanced Arm-based custom silicon project. "We're thrilled to see industry leaders like Faraday and Intel on the cutting edge of Arm-based custom silicon development," said an Arm spokesperson. Intel SVP Stuart Pann said, "We are pleased to work with Faraday in the development of the SoC based on Arm Neoverse CSS utilizing our most competitive Intel 18A process technology." The collaboration represents Faraday's strategic focus on leading-edge technologies to meet evolving application requirements. With its extensive silicon IP portfolio and design capabilities, Faraday wants to deliver innovative solutions and break into next-generation computing design.

Avegant announces breakthrough Spotlight display technology for Augmented Reality

Avegant announces Spotlight display technology, the world's first adaptive LED illumination architecture for LCoS. By dynamically illuminating regions of an LCoS microdisplay, Avegant harnesses the benefits of APL (average pixel level) to achieve illumination power savings of up to 90% while simultaneously improving contrast by over 10x. Spotlight display technology integrates alongside Avegant's existing products, retaining the advantages of very small, mature light engines that deliver efficient, full-color, high-resolution, polarized output. Avegant is collaborating with Applied Materials, Inc. and Lumileds on advanced technologies for Augmented Reality.
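To give a sense of how zone-based adaptive illumination saves power, the sketch below drives each LED zone only as bright as its brightest pixel and rescales the panel values to compensate. This is an illustrative stand-in for the general technique, not Avegant's actual algorithm:

```python
# Illustrative sketch of zone-based adaptive illumination (not Avegant's actual
# algorithm): each LED zone is driven only as bright as its brightest pixel,
# and the LCoS pixel values are rescaled to compensate.
import numpy as np

def adaptive_illumination(frame: np.ndarray, zone: int = 32):
    h, w = frame.shape
    zones = frame.reshape(h // zone, zone, w // zone, zone)
    led = zones.max(axis=(1, 3))                      # per-zone LED drive level
    backlight = np.kron(led, np.ones((zone, zone)))   # expand to the pixel grid
    lcos = np.divide(frame, backlight, out=np.zeros_like(frame),
                     where=backlight > 0)             # compensate panel values
    return led, lcos

frame = np.zeros((256, 256)); frame[:32, :32] = 0.8   # mostly dark synthetic scene
led, lcos = adaptive_illumination(frame)
print(f"Illumination power vs. full-on: {led.mean():.1%}")  # large savings on dark content
```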

"Building on the success of the AG-30L2, we are delighted to announce our new Spotlight display technology. This will usher in a new level of performance for LCoS light engines, enabling efficiency advantages of APL and higher contrast, while maintaining the manufacturability, small-size, and full color high resolution displays that we have come to know from LCoS." says Ed Tang, CEO and Founder of Avegant. "We're excited to be working alongside amazing partners to enable a new level of system performance for the ecosystem."

Google Faces Potential Billion-Dollar Damages in TPU Patent Dispute

Tech giant Google is embroiled in a high-stakes legal battle over the alleged infringement of patents related to its Tensor Processing Units (TPUs), custom AI accelerator chips used to power machine learning applications. Massachusetts-based startup Singular Computing has accused Google of incorporating architectures described in several of its patents into the design of the TPU without permission. The disputed patents, first filed in 2009, outline computer architectures optimized for executing a high volume of low-precision calculations per cycle, an approach well-suited for neural network-based AI. In a 2019 lawsuit, Singular argues that Google knowingly infringed on these patents in developing its TPU v2 and TPU v3 chips introduced in 2017 and 2018. Singular Computing is seeking between $1.6 billion and $5.19 billion in damages from Google.
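To illustrate the kind of low-precision, high-throughput arithmetic at issue, the generic int8 example below trades a little accuracy for much cheaper math. It is illustrative only, not the specific architecture claimed in Singular's patents:

```python
# Generic example of trading precision for throughput in neural-network math;
# illustrative only, not the architecture described in Singular's patents.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

def quantize(x):
    scale = np.abs(x).max() / 127.0          # map the float range onto int8
    return (x / scale).round().astype(np.int8), scale

qa, sa = quantize(a)
qb, sb = quantize(b)

exact = a @ b
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * sa * sb  # int8 matmul, rescaled

rel_err = np.abs(approx - exact).mean() / np.abs(exact).mean()
print(f"Mean relative error from int8 math: {rel_err:.3%}")  # small, and far cheaper in silicon
```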

Google denies these claims, stating that its TPUs were independently developed over many years. The company is currently appealing to have Singular's patents invalidated, which would undermine the infringement allegations. The high-profile case highlights mounting legal tensions as tech giants race to dominate the burgeoning field of AI hardware. With billions in potential damages at stake, the outcome could have major implications for the competitive landscape in cloud-based machine learning services. As both sides prepare for court, the dispute underscores the massive investments tech leaders like Google make to integrate specialized AI accelerators into their cloud infrastructures. Dominance in this sphere is a crucial strategic advantage as more industries embrace data-hungry neural network applications.

Update 17:25 UTC: According to Reuters, Google and Singular Computing have settled the case with details remaining private for the time being.

Intel 15th-Generation Arrow Lake-S Could Abandon Hyper-Threading Technology

Leaked Intel documentation we reported on a few days ago covered the Arrow Lake-S platform and some implementation details. However, there was an interesting catch in the file: the document indicates that the upcoming 15th-Generation Arrow Lake desktop CPUs could lack Hyper-Threading (HT) support. The technical memo lists Arrow Lake's expected eight performance cores without any threads enabled via SMT, which aligns with previous rumors of Hyper-Threading's removal. Losing Hyper-Threading could significantly impact Arrow Lake's multi-threaded application performance versus its Raptor Lake predecessors; estimates suggest HT provides a 10-15% speedup across heavily threaded workloads by enabling logical cores. For gaming, however, disabling HT has negligible impact and can even boost FPS in some titles, so Arrow Lake may still hit Intel's rumored 30% gaming performance targets through architectural improvements alone.

However, a replacement for traditional HT may come in the form of Rentable Units. This new approach responds to the adoption of a hybrid core architecture, which has seen more applications leverage low-power E-cores for enhanced performance and efficiency. Rentable Units are reportedly a more efficient pseudo-multi-threaded solution that splits the first thread of incoming instructions into two partitions and assigns them to different cores based on complexity, using timers and counters to measure P/E-core utilization and send parts of the thread to each core for processing. This inherently requires larger caches; Arrow Lake is rumored to have 3 MB of L2 cache per core and is also noted to support faster DDR5-6400 memory. But between higher clocks, more E-cores, and various core architecture updates, raw throughput metrics may not change much without Hyper-Threading.
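Intel has not published any details of the mechanism, so any code is necessarily speculative. As a rough illustration of the reported idea, the sketch below splits a stream of work items between P-core and E-core queues based on estimated complexity, the way the rumored timers and counters might inform placement; every name and threshold here is hypothetical:

```python
# Purely speculative sketch of the rumored "Rentable Units" idea: partition one
# instruction stream across P- and E-cores by estimated complexity. Intel has
# not published the mechanism; every threshold and name here is hypothetical.
from collections import deque

P_QUEUE, E_QUEUE = deque(), deque()
COMPLEXITY_THRESHOLD = 0.6   # hypothetical cutoff informed by utilization counters

def dispatch(work_items):
    for item in work_items:
        # 'complexity' stands in for whatever the hardware counters would measure.
        if item["complexity"] >= COMPLEXITY_THRESHOLD:
            P_QUEUE.append(item)   # heavyweight partition -> performance core
        else:
            E_QUEUE.append(item)   # lightweight partition -> efficiency core

dispatch([{"id": i, "complexity": i / 10} for i in range(10)])
print(f"P-core items: {len(P_QUEUE)}, E-core items: {len(E_QUEUE)}")  # 4 vs 6
```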

Synopsys Expands Its ARC Processor IP Portfolio with New RISC-V Family

Synopsys, Inc. (Nasdaq: SNPS) today announced it has extended its ARC Processor IP portfolio to include new RISC-V ARC-V Processor IP, enabling customers to choose from a broad range of flexible, extensible processor options that deliver optimal power-performance efficiency for their target applications. Synopsys leveraged decades of processor IP and software development toolkit experience to develop the new ARC-V Processor IP that is built on the proven microarchitecture of Synopsys' existing ARC Processors, with the added benefit of the expanding RISC-V software ecosystem.

Synopsys ARC-V Processor IP includes high-performance, mid-range, and ultra-low power options, as well as functional safety versions, to address a broad range of application workloads. To accelerate software development, the Synopsys ARC-V Processor IP is supported by the robust and proven Synopsys MetaWare Development Toolkit that generates highly efficient code. In addition, the Synopsys.ai full-stack AI-driven EDA suite is co-optimized with ARC-V Processor IP to provide an out-of-the-box development and verification environment that helps boost productivity and quality-of-results for ARC-V-based SoCs.

AMD FidelityFX Super Resolution Could Come to Samsung and Qualcomm SoCs

AMD FidelityFX Super Resolution (FSR) is an open-source upscaling technology that combines temporal super-resolution upscaling, frame generation via AMD Fluid Motion Frames (AFMF), and built-in latency reduction to produce higher-resolution output images from lower-resolution input. While the technology is open source, it competes for market share with NVIDIA's Deep Learning Super Sampling (DLSS). In the mobile space, however, there has been little talk of upscaling technology until now. According to the popular leaker @Tech_Reve on X/Twitter, AMD is collaborating with Samsung and Qualcomm to standardize upscaling implementations in mobile SoCs.

Not only does the leak imply that AMD FSR technology will be used in Samsung's upcoming Exynos SoC, but some AMD ray tracing will be present as well. The leaker also mentioned Qualcomm, suggesting that future iterations of Snapdragon may adopt FSR's algorithmic approach to resolution upscaling. We will see how and when, but with mobile games growing in size and demand, FSR could come in handy to provide mobile gamers with a better experience. This primarily targets Android devices, which Qualcomm supplies; Apple, for its part, recently announced MetalFX Upscaling for the iPhone with the A17 Pro chip.
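As a rough illustration of what spatial upscaling plus sharpening involves, the sketch below does a naive 2x upscale followed by an unsharp mask. It is a drastically simplified stand-in; AMD's actual FSR shaders (EASU/RCAS) are far more sophisticated:

```python
# Drastically simplified stand-in for spatial upscaling plus sharpening;
# AMD's actual FSR shaders (EASU/RCAS) are far more sophisticated.
import numpy as np

def upscale_and_sharpen(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    # 2x nearest-neighbor upscale (real FSR uses an edge-adaptive filter).
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    # Unsharp mask: boost the difference from a local box blur.
    blurred = (
        np.roll(up, 1, 0) + np.roll(up, -1, 0) +
        np.roll(up, 1, 1) + np.roll(up, -1, 1) + up
    ) / 5.0
    return np.clip(up + amount * (up - blurred), 0.0, 1.0)

low_res = np.random.default_rng(1).random((270, 480))   # e.g., a 480x270 frame
high_res = upscale_and_sharpen(low_res)
print(low_res.shape, "->", high_res.shape)               # (270, 480) -> (540, 960)
```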

Samsung Electronics Holds Memory Tech Day 2023 Unveiling New Innovations To Lead the Hyperscale AI Era

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today held its annual Memory Tech Day, showcasing industry-first innovations and new memory products to accelerate technological advancements across future applications, including the cloud, edge devices, and automotive.

Attended by about 600 customers, partners and industry experts, the event served as a platform for Samsung executives to expand on the company's vision for "Memory Reimagined," covering long-term plans to continue its memory technology leadership, outlook on market trends and sustainability goals. The company also presented new product innovations such as the HBM3E Shinebolt, LPDDR5X CAMM2 and Detachable AutoSSD.

Ryan Shrout Announces Departure from Intel

The corporate world of semiconductor companies has seen some restructuring lately, and today we learn that Ryan Shrout is leaving Intel. Most recently serving as Senior Director of Client Segment Strategy in the Client Computing Group (CCG), Ryan Shrout joined Intel over four years ago. Mr. Shrout started at the company as Chief Performance Strategist in 2018, moved to Senior Director of Technical and Competitive Client Marketing in 2020, and later became Senior Director of Gaming, Graphics, and HPC Marketing (AXG Marketing). His most recent role was relatively short-lived, lasting only three months.

Many people in the community may know Mr. Shrout from his time at PC Perspective, which he founded in 1999 and ran as Editor-in-Chief. However, today, on his Twitter/X profile, we're learning that he is departing Intel to take on another role. "Fall is a season for change! Yesterday was my last day at Intel. I'm going to take a couple weeks with the family then I'm excited to talk about what's next," the post stated. With such extensive experience in the PC industry, we are eager to see where Mr. Shrout lands next.

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

NVIDIA CEO Meets with India Prime Minister Narendra Modi

Underscoring NVIDIA's growing relationship with the global technology superpower, Indian Prime Minister Narendra Modi met with NVIDIA founder and CEO Jensen Huang Monday evening. The meeting at 7 Lok Kalyan Marg—as the Prime Minister's official residence in New Delhi is known—comes as Modi prepares to host a gathering of leaders from the G20 group of the world's largest economies, including U.S. President Joe Biden, later this week.

"Had an excellent meeting with Mr. Jensen Huang, the CEO of NVIDIA," Modi said in a social media post. "We talked at length about the rich potential India offers in the world of AI." The event marks the second meeting between Modi and Huang, highlighting NVIDIA's role in the country's fast-growing technology industry.

New AI Accelerator Chips Boost HBM3 and HBM3e to Dominate 2024 Market

TrendForce reports that the HBM (High Bandwidth Memory) market's dominant product for 2023 is HBM2e, employed by the NVIDIA A100/A800, AMD MI200, and most cloud service providers' (CSPs') self-developed accelerator chips. As the demand for AI accelerator chips evolves, manufacturers plan to introduce new HBM3e products in 2024, with HBM3 and HBM3e expected to become mainstream in the market next year.

The distinctions between HBM generations primarily lie in their speed. The industry experienced a proliferation of confusing names when transitioning to the HBM3 generation. TrendForce clarifies that the so-called HBM3 in the current market should be subdivided into two categories based on speed. One category includes HBM3 running at speeds between 5.6 and 6.4 Gbps, while the other features the 8 Gbps HBM3e, which also goes by several names including HBM3P, HBM3A, HBM3+, and HBM3 Gen2.
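Since HBM generations are distinguished mainly by per-pin speed, per-stack bandwidth follows directly from the pin rate and the stack's 1024-bit interface. The interface width and the HBM2e pin rate below are standard HBM figures we are assuming, not numbers from the TrendForce report:

```python
# Per-stack HBM bandwidth from per-pin speed; the 1024-bit stack interface is
# the standard HBM width (an assumption here, not from the TrendForce report).
def hbm_stack_bandwidth_gbs(pin_gbps: float, bus_bits: int = 1024) -> float:
    return pin_gbps * bus_bits / 8  # Gb/s per pin * pins / 8 bits per byte

for name, speed in [("HBM2e", 3.6), ("HBM3", 6.4), ("HBM3e", 8.0)]:
    print(f"{name} @ {speed} Gbps/pin: {hbm_stack_bandwidth_gbs(speed):.0f} GB/s per stack")
# HBM3 @ 6.4 -> 819 GB/s; HBM3e @ 8.0 -> 1024 GB/s
```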