News Posts matching #Enterprise


Intel NEX "Bartlett Lake-S" CPUs Reportedly in Pipeline

Supply chain insiders have claimed that Intel is working on extending the lifespan of its LGA 1700 platform—a BenchLife report proposes that the "Bartlett Lake-S" processor family is due soon, courtesy of Team Blue's Network and Edge (NEX) business group. Only a few days ago, the rumor mill had placed "Bartlett Lake-S" CPUs in the mainstream desktop category, due to alleged connections with the Raptor Lake-S Refresh series—the new family is also (supposedly) fabricated on the Intel 7 process. BenchLife believes that DDR4 and DDR5 memory will be supported, but makes no mention of possible ECC functionality. Confusingly, chip industry tipsters believe that the unannounced processors could be launched as 15th Gen Core parts.

BenchLife has a history of discovering and reporting on Intel product roadmaps—apparently Bartlett Lake-S can leverage the same core configurations as seen on Raptor Lake-S; namely 8 Raptor Cove P-Cores and 16 Gracemont E-Cores. An insider source claims that a new pure P-Core-only design could exist, sporting up to twelve Raptor Cove units. According to a leaked Bartlett Lake-S series specification sheet: "the iGPU part will use (existing) Intel Xe architecture, up to Intel UHD Graphics 770." The publication alludes to some type of AI performance enhancement as a distinguishing feature for Bartlett Lake-S, when lined up against 14th Gen Core desktop SKUs. Folks salivating at the prospect of a mainstream DIY launch will have to wait and see (according to BenchLife's supply chain insider): "judging from various specifications, this product belonging to the Intel NEX business group may also be decentralized to the consumer market, but the source did not make this part too clear and reserved some room for maneuver."

PNY Expands Enterprise Portfolio with Innovative VAST Data Platform

PNY Technologies, a global leader in memory and storage solutions, has expanded its enterprise portfolio through a strategic partnership with VAST Data, the AI data platform company. This collaboration underscores PNY's commitment to delivering cutting-edge solutions to meet the evolving needs of enterprises integrating AI and HPC into their core processes. This partnership leverages the VAST Data Platform's DataStore capabilities, enhancing PNY's enterprise offerings with unparalleled performance, scalability, and cost efficiency. This move reinforces PNY's position as a key player in the enterprise market.

Key highlights of the partnership include:
  • Revolutionary Solutions: PNY now offers VAST Data's innovative data platform, known for its simplicity and transformative performance, serving data to the world's most demanding supercomputers.
  • Unmatched Scalability: VAST Data's industry-disrupting DASE architecture enables businesses to enjoy nearly limitless scale as their data sets and AI pipelines grow, allowing them to adapt to the changing demands of today's increasingly data-driven world.
  • Cost-Effective Data Management: VAST Data and PNY will empower enterprises to achieve significant cost savings through improved data reduction (VAST Similarity), infrastructure efficiency and simplified management.
  • Enhanced Data Analytics: The VAST DataBase facilitates deeper insights from both structured and unstructured data, accelerating decision-making and enabling data-driven innovation across various business functions.
  • Exceptional Customer Support: PNY extends its commitment to exceptional customer support to VAST Data solutions, providing reliable technical assistance and guidance.

IBM Storage Ceph Positioned as the Ideal Foundation for Modern Data Lakehouses

It's been one year since IBM integrated Red Hat storage product roadmaps and teams into IBM Storage. In that time, organizations have faced unprecedented challenges in scaling AI, as data grows rapidly across more locations and formats, often with poorer quality. Helping clients combat this problem has meant modernizing their infrastructure with cutting-edge solutions as part of their digital transformations. Largely, this involves delivering consistent application and data storage across on-premises and cloud environments. Crucially, it also includes helping clients adopt cloud-native architectures to realize the benefits of public cloud like cost, speed, and elasticity. IBM Storage Ceph—formerly Red Hat Ceph—a state-of-the-art open-source software-defined storage platform, is a keystone in this effort.

Software-defined storage (SDS) has emerged as a transformative force when it comes to data management, offering a host of advantages over traditional legacy storage arrays including extreme flexibility and scalability that are well-suited to handle modern use cases like generative AI. With IBM Storage Ceph, storage resources are abstracted from the underlying hardware, allowing for dynamic allocation and efficient utilization of data storage. This flexibility not only simplifies management but also enhances agility in adapting to evolving business needs and scaling compute and capacity as new workloads are introduced. This self-healing and self-managing platform is designed to deliver unified file, block, and object storage services at scale on industry standard hardware. Unified storage helps provide clients a bridge from legacy applications running on independent file or block storage to a common platform that includes those and object storage in a single appliance.

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of new Instinct series accelerators has generated a lot of tech news buzz and excitement within the financial world, but not many folks are privy to Team Red's MSRP for the CDNA 3.0 powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures—an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for ~$10,000 apiece. North American enterprise customers appear to have taken delivery of the latest MI300 products around mid-January—inevitably, top secret information has leaked out to news investigators. SeekingAlpha's article (based on Citi's findings) alleges that the Microsoft data center division is AMD's top buyer of MI300X hardware—GPT-4 is reportedly up and running on these brand new accelerators.

The leakers claim that businesses further down the (AI and HPC) food chain are having to shell out $15,000 per MI300X unit, but this is a bargain when compared to NVIDIA's closest competing package—the venerable H100 SXM5 80 GB professional card. Team Green, similarly, does not reveal its enterprise pricing to the wider public—Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product, when compared to Team Red MI300X pricing. NVIDIA's dominant AI GPU market position could be challenged by cheaper yet still very performant alternatives; additionally, chip shortages have caused Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims—a company representative declined this invitation.

GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE has broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. Being at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to the edge, spanning traditional and emerging workloads in HPC and AI as well as data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch alongside new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Samsung Showcases B2B Displays with Advanced Connectivity at ISE 2024

Samsung Electronics today at Integrated Systems Europe (ISE) 2024 in Barcelona is showcasing how SmartThings will bolster its B2B displays to shape the future of business connectivity. Samsung's "SmartThings for Business" exhibition emphasizes the new advancements that the cutting-edge internet-of-things (IoT) platform will offer, as well as Samsung's commitment to providing more connected, easy-to-control digital signage across industries. "In a commercial display sector where operational efficiency is key, Samsung digital signage is leveraging SmartThings to deliver next-gen connectivity and features to organizations of all sizes," said SW Yong, President and Head of Visual Display Business at Samsung Electronics. "This further expansion of the SmartThings ecosystem will serve to elevate experiences for customers and partners from a wide variety of industries."

How Businesses Can Leverage Connected Tech Through SmartThings—From the Smart Store to Smart Office
At the event, Samsung is showcasing how SmartThings enables business owners to leverage their digital signage to connect and gain more control of their smart devices across various landscapes. By offering the SmartThings connectivity feature to commercial display products such as Smart Signage and Hotel TVs, users can experience the convenience of hyper-connectivity in their business environments. These changes will include Samsung smart devices, as well as other devices that support the industry's latest IoT specifications, Matter and the Home Connectivity Alliance (HCA). Through the application of SmartThings to various business environments, Samsung contributes to the more efficient management of space and energy by transforming places of business into interconnected smart spaces. These connectivity improvements have been designed to benefit all types of business customers, from small and mid-sized business owners to enterprises. Examples of the smart spaces—including a smart store, smart office and smart hotel—are on display at Samsung's booth at ISE 2024.

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of datacenters running NVIDIA H100 and A100 GPUs. He foresees a future scenario where his company becomes less reliant on Team Green's off-the-shelf AI-crunchers, with a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well-connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are in increasingly high demand, due to the widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big name producers are expected to dive in head first with the development of next generation models. The aforementioned financial news article cites research conducted by the Gartner group—they predict that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double that of projected revenues (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has shacked up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer; outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
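As a quick back-of-the-envelope check, the growth rate implied by these figures can be computed from the roughly $2 billion 2023 revenue and Gartner's $4.976 billion 2025 projection (both values come from the report above; the simple two-year compounding model is our own assumption):

```python
# Implied compound annual growth rate (CAGR) for the HBM market,
# using the figures cited above.
revenue_2023 = 2.0    # billions USD, approximate 2023 revenue per the report
revenue_2025 = 4.976  # billions USD, Gartner's 2025 projection
years = 2

cagr = (revenue_2025 / revenue_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints "Implied CAGR: 57.7%"
```

In other words, "almost doubling" over two years corresponds to sustained annual growth of well over 50 percent, which is consistent with the report's framing of explosive generative AI demand.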

Intel's Next-gen Xeon "Clearwater Forest" E-Core CPU Series Spotted in Patch

Intel presented its next generation Xeon "Clearwater Forest" processor family during September's Innovation Event—their roadmap slide (see below) included other Birch Stream platform architecture options. Earlier this week, Team Blue's software engineers issued a Linux kernel patch that contains details pertaining to codenamed projects: Sierra Forest, Grand Ridge and the aforementioned Clearwater Forest. All E-Core Xeon "Sierra Forest" processors are expected to launch around the middle of 2024—this deployment of purely efficiency-oriented "Sierra Glen" (Atom Crestmont) cores in enterprise/server chip form will be a first for Intel. The Sierra Forest Xeon range has been delayed a couple of times, but the extra maturation time has granted a jump from an initial maximum of 144 E-Cores up to 288. The latest patch notes provide an early look into Clearwater Forest's basic foundations—it seems to be Sierra Forest's direct successor.

The Intel Xeon "Granite Rapids" processor family is expected to hit retail just after a Sierra Forest product launch, but the former sports a very different internal configuration—an all "Redwood Cove" P-Core setup. Phoronix posits that Sierra Forest's groundwork is clearing the way for its natural successor: "Clearwater Forest is Intel's second generation E-core Xeon...Clearwater Forest should ship in 2025 while the open-source Intel Linux engineers begin their driver support preparations and other hardware enablement well in advance of launch. With engineers already pushing Sierra Forest code into the Linux kernel and related key open-source projects like Clang and GCC since last year, their work on enabling Sierra Forest appears to be largely wrapping up and in turn the enablement is to begin for Clearwater Forest. Sent out...was the first Linux kernel patch for Clearwater Forest. As usual, for the first patch it's quite basic and is just adding in the new model number for Clearwater Forest CPUs. Clearwater Forest has a model number of 0xDD (221). The patch also reaffirms that the 0xDD Clearwater Forest CPUs are using Atom Darkmont cores."

QNAP Launches TS-hx77AXU-RP Series Enterprise ZFS NAS with Revolutionary AMD Ryzen 7000 Series Processors

QNAP Systems, Inc., a leading computing and storage solutions innovator, today unveiled the new high-capacity TS-hx77AXU-RP ZFS NAS series, including 12-bay TS-h1277AXU-RP and 16-bay TS-h1677AXU-RP rackmount models powered by AMD Ryzen 7000 Series processors based on the cutting-edge AMD Socket AM5 platform. By integrating robust hardware and diverse I/O including DDR5 RAM, M.2 PCIe Gen 5, PCIe Gen 4 slots, and redundant power supplies, the TS-hx77AXU-RP series unleashes enterprise-level performance and delivers ultra-high bandwidth, futureproof expandability, and trusted reliability for performance-demanding Tier 2 storage, virtualization, 4K video editing, and PB-level storage applications.

"The whole new TS-hx77AXU-RP series ZFS NAS provides enterprises with superb performance and large storage capacity to tackle business-critical workloads and storage-demanding applications. The AMD Ryzen 7000 Series multi-core processors further unlock the key performance of DDR5 and M.2 PCIe Gen 5," said Alex Shih, Product Manager of QNAP, adding "Paired with the ZFS file system that's most suited for business applications, the TS-hx77AXU-RP series stands out as the top choice for storage solutions that require uncompromising data integrity."

Dell Generative AI Open Ecosystem with AMD Instinct Accelerators

Generative AI (GenAI) is the decade's most promising accelerator for innovation, with 78% of IT decision makers reporting they're largely excited about the potential impact GenAI can have on their organizations.¹ Most see GenAI as a means to provide productivity gains, streamline processes and achieve cost savings. Harnessing this technology is critical to ensure organizations can compete in this new digital era.

Dell Technologies and AMD are coming together to unveil an expansion to the Dell Generative AI Solutions portfolio, continuing the work of accelerating advanced workloads and offering businesses more choice to continue their unique GenAI journeys. This new technology highlights a pivotal role played by open ecosystems and silicon diversity in empowering customers with simple, trusted and tailored solutions to bring AI to their data.

MSI Introduces New AI Server Platforms with Liquid Cooling Feature at SC23

MSI, a leading global server provider, is showcasing its latest GPU and CXL memory expansion servers powered by AMD EPYC processors and 4th Gen Intel Xeon Scalable processors, which are optimized for enterprises, organizations and data centers, at SC23, booth #1592 in the Colorado Convention Center in Denver from November 13 to 16.

"The exponential growth of human- and machine-generated data demands increased data center compute performance. To address this demand, liquid cooling has emerged as a key trend," said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's server platforms offer a well-balanced hardware foundation for modern data centers. These platforms can be tailored to specific workloads, optimizing performance and aligning with the liquid cooling trend."

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

Cisco and Bang & Olufsen Unveil New Wireless Earbuds for Secure Hybrid Work

Cisco and Bang & Olufsen today unveiled new true wireless earbuds built with enterprise-grade features customized for professionals on-the-go. Furthering their partnership to provide customers with top-quality audio experiences whether they're at home, at work or in transit, the Bang & Olufsen Cisco 950 delivers a high-end aesthetic and premium sound combined with advanced security and manageability features.

This product release is an extension of the Cisco and Bang & Olufsen partnership, which addresses increased customer expectation for multifunctional devices that fit their lifestyles in and out of work. As hybrid work continues to enable flexibility in how and where people work, the Bang & Olufsen Cisco 950 enables crystal-clear audio for seamless collaboration from any setting. The earbuds are fully manageable in Cisco's Control Hub platform, giving IT greater visibility and control over their entire fleet of collaboration devices and peripherals.

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Amazon to Invest $4 Billion into Anthropic AI

Today, we're announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the most reliable and high-performing foundation models in the industry. Our frontier safety research and products, together with Amazon Web Services' (AWS) expertise in running secure, reliable infrastructure, will make Anthropic's safe and steerable AI widely accessible to AWS customers.

AWS will become Anthropic's primary cloud provider for mission critical workloads, providing our team with access to leading compute infrastructure in the form of AWS Trainium and Inferentia chips, which will be used in addition to existing solutions for model training and deployment. Together, we'll combine our respective expertise to collaborate on the development of future Trainium and Inferentia technology.

Unity to Start Charging Per-Installation Fee with New Business Model Update

Unity is introducing some notable changes to its pricing and service offerings, slated to take effect on January 1, 2024. At the heart of these changes is the new Unity Runtime Fee, which will be based on the number of game installs. This fee will apply every time an end user downloads a qualifying game. Unity believes this install-based fee allows creators to retain the financial benefits of ongoing player engagement, unlike a model based on revenue sharing. The company clarifies that the fee refers explicitly to the Unity Runtime, the part of the Unity Engine that enables games to run on different devices. Additionally, these changes are not retroactive or perpetual; instead, all fees will start counting on January 1, 2024. The fee will apply once for each new install and is not an ongoing perpetual license royalty, like revenue share.

However, the new Unity Runtime Fee comes with specific thresholds for revenue and installs, designed to ensure that smaller creators are not adversely affected. For Unity Personal and Unity Plus, the fee applies only to games that have generated $200,000 or more in the last 12 months and have a minimum of 200,000 lifetime installs. For Unity Pro and Unity Enterprise, the fee kicks in for games that have made $1,000,000 or more in the last 12 months and have at least 1,000,000 lifetime installs. The table below shows which Unity accounts pay what fees, with costs starting at $0.20 per install after the first 200,000 installs. After one million installs, each new install starts at $0.15 and $0.125 for Unity Pro and Unity Enterprise, respectively. As the game gains traction, install fees decay, as shown in the table below.
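The threshold logic above can be sketched in code. This is a minimal model using only the figures stated in the article; the full decaying rate table is not reproduced here, so each plan is modeled at its quoted starting rate only, and the function and plan names are our own:

```python
# Illustrative model of the Unity Runtime Fee thresholds described above.
# plan: (12-month revenue threshold USD, lifetime install threshold, starting fee per install USD)
PLANS = {
    "personal":   (200_000,   200_000,   0.20),
    "plus":       (200_000,   200_000,   0.20),
    "pro":        (1_000_000, 1_000_000, 0.15),
    "enterprise": (1_000_000, 1_000_000, 0.125),
}

def runtime_fee_applies(plan: str, revenue_12mo: float, lifetime_installs: int) -> bool:
    """The fee kicks in only once BOTH the revenue and install thresholds are crossed."""
    rev_min, installs_min, _ = PLANS[plan]
    return revenue_12mo >= rev_min and lifetime_installs >= installs_min

def period_fee(plan: str, revenue_12mo: float, lifetime_installs: int,
               new_installs: int) -> float:
    """Fee owed for new installs in a period, at the plan's starting rate
    (the real schedule decays as installs accumulate)."""
    if not runtime_fee_applies(plan, revenue_12mo, lifetime_installs):
        return 0.0
    _, _, rate = PLANS[plan]
    return new_installs * rate
```

For example, under this sketch a Unity Pro game with $2 million in trailing revenue, 1.5 million lifetime installs, and 10,000 new installs in a period would owe 10,000 × $0.15 = $1,500, while a game below either threshold owes nothing.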

Update 15:36 UTC: Unity issued a statement on the company's Twitter/X account that promises changes in the coming days.

Cervoz Introduces T425 - a New M.2 2230 (A+E key) NVMe Gen3x2 SSD

The Power of Small: Evolution of Data Storage—ever since computers were first built, the drive to make them smaller has never stopped. This is especially evident in the industrial sector, where compactness is highly valued. As a crucial component of computers, data storage has undergone a process of miniaturization as well, leading to the era of M.2 interface SSDs. While some ultra-compact fanless PCs and rugged mobile PCs may adopt smaller embedded storage, such as eMMC (Embedded MultiMediaCard), eMMC significantly lags behind M.2 PCIe SSDs in terms of performance. Therefore, M.2 PCIe SSDs have emerged as the superior choice for those seeking top-notch performance and efficiency in their computing experience.

Introducing the Cervoz T425 SSD: Unleash Power in a Tiny Package
To embrace this remarkable trend, Cervoz presents the latest T425, a new M.2 2230 NVMe PCIe Gen 3 x2 SSD. Designed with a compact M.2 2230 form factor (22 mm x 30 mm), it supports both A and E key configurations and utilizes PCIe Gen 3 x2 lanes for high-speed data transfer. With impressive sequential speeds of up to 815 MB/s read and 760 MB/s write, along with storage capacities of 64 GB, 128 GB, and 512 GB, the T425 is the ultimate solution for enhancing the performance of embedded computing systems with space constraints.
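Those quoted speeds sit comfortably within what a Gen 3 x2 link can physically deliver. A quick sanity check (the 8 GT/s per-lane rate and 128b/130b encoding are standard PCIe Gen 3 parameters, not figures from the announcement):

```python
# Theoretical PCIe Gen 3 x2 bandwidth vs. the T425's quoted speeds.
# PCIe Gen 3 signals at 8 GT/s per lane with 128b/130b line encoding.
GT_PER_S = 8.0          # giga-transfers per second, per lane
ENCODING = 128 / 130    # 128b/130b encoding efficiency
LANES = 2

# One transfer carries one bit; divide by 8 bits/byte, scale to MB/s.
max_mb_s = GT_PER_S * ENCODING * LANES / 8 * 1000
print(f"Theoretical link ceiling: {max_mb_s:.0f} MB/s")  # ~1969 MB/s

# The quoted sequential speeds are well under the link ceiling.
for label, speed in (("read", 815), ("write", 760)):
    assert speed < max_mb_s, f"{label} speed exceeds link ceiling"
```

So the T425's 815 MB/s read figure uses less than half the raw x2 link budget, leaving the NAND and controller, not the interface, as the limiting factors.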

New MIPS CEO Sameer Wasson to Drive Company's RISC-V Market Penetration and Innovation

MIPS, a leading developer of high-performance RISC-V compute IP, has announced embedded systems industry veteran Sameer Wasson as the company's new CEO. Before joining MIPS, Wasson spent 18 years at Texas Instruments (TI), most recently as Vice President, Business Unit (BU) Manager, Processors, where he was responsible for the company's processor businesses. In that role, Wasson re-established TI as a mainstream microprocessor (MPU) and microcontroller (MCU) supplier for high-growth automotive and industrial markets, and established the company's footprint in embedded AI, software-defined vehicles, and electrification.

As the new CEO of MIPS, Wasson will further accelerate the company's leadership in the High-Performance RISC-V market as it continues to expand its footprint in Automotive and Enterprise markets.

After a Low Base Year in 2023, DRAM and NAND Flash Bit Demand Expected to Increase by 13% and 16% Respectively in 2024

TrendForce expects that memory suppliers will continue their strategy of scaling back production of both DRAM and NAND Flash in 2024, with the cutback being particularly pronounced in the financially struggling NAND Flash sector. Market demand visibility for consumer electronics is projected to remain uncertain in 1H24. Additionally, capital expenditure for general-purpose servers is expected to be weakened by competition from AI servers. Considering the low baseline set in 2023 and the current low pricing for some memory products, TrendForce anticipates YoY bit demand growth rates for DRAM and NAND Flash to be 13% and 16%, respectively. Nonetheless, achieving effective inventory reduction and restoring supply-demand balance next year will largely hinge on suppliers' ability to exercise restraint in their production capacities. If managed effectively, this could open up an opportunity for a rebound in average memory prices.

PC: The annual growth rate for average DRAM capacity is projected at approximately 12.4%, driven mainly by Intel's new Meteor Lake CPUs coming into mass production in 2024. This platform's DDR5 and LPDDR5 exclusivity will likely make DDR5 the new mainstream, surpassing DDR4 in the latter half of 2024. The growth rate in PC client SSDs will not be as robust as that of PC DRAM, with just an estimated growth of 8-10%. As consumer behavior increasingly shifts toward cloud-based solutions, the demand for laptops with large storage capacities is decreasing. Even though 1 TB models are becoming more available, 512 GB remains the predominant storage option. Furthermore, memory suppliers are maintaining price stability by significantly reducing production. Should prices hit rock bottom and subsequently rebound, PC OEMs are expected to face elevated SSD costs. This, when combined with Windows increasing its licensing fees for storage capacities at and above 1 TB, is likely to put a damper on further growth in average storage capacities.

Google Cloud and NVIDIA Expand Partnership to Advance AI Computing, Software and Services

Google Cloud Next—Google Cloud and NVIDIA today announced new AI infrastructure and software for customers to build and deploy massive models for generative AI and speed data science workloads.

In a fireside chat at Google Cloud Next, Google Cloud CEO Thomas Kurian and NVIDIA founder and CEO Jensen Huang discussed how the partnership is bringing end-to-end machine learning services to some of the largest AI customers in the world—including by making it easy to run AI supercomputers with Google Cloud offerings built on NVIDIA technologies. The new hardware and software integrations utilize the same NVIDIA technologies employed over the past two years by Google DeepMind and Google research teams.

IBASE Announces INA8505 Enterprise 1U Edge Server for 5G Open vRAN & MEC

IBASE Technology Inc. (TPEx: 8050), a global leader in network appliances and embedded solutions, proudly announces the release of the INA8505 enterprise 1U edge server. Powered by the Intel Xeon D-2700 processor and offering versatile connectivity options, this state-of-the-art appliance is specifically designed to excel in demanding 5G Open vRAN & MEC applications such as real-time data analytics, autonomous vehicles, and smart city deployments. It enables full control over resource allocation in the RAN and MEC, and has the potential to seamlessly integrate AI capabilities to dynamically optimize network performance in real time at the edge of the 5G network infrastructure.

The INA8505 delivers unmatched performance, scalability, and efficiency with flexible storage, offering two SATA/NVMe 2.5" HDD/SSD slots, 2x M.2 (M-key) SATA/PCI-E storage slots, and one 16 GB/32 GB/64 GB eMMC. With an FHFL PCI-E (x16) Gen 4 slot (supports 75 W) and an FHFL PCI-E (x8) Gen 4 slot (supports 75 W) configurable as a double-width FHFL PCI-E (x16) Gen 4 slot (supports 120 W), the INA8505 adapts effortlessly to different network environments and meets future demands for increased scalability. It boasts a rich array of I/O connectivity options, including a VGA port from the BMC (Aspeed 2600, IPMI 2.0 support), two USB 2.0 Type-A ports, an RJ45 console port, and four 25 GbE SFP28 ports, ensuring enhanced adaptability to various connectivity needs.

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Dell Technologies Expands AI Offerings, in Collaboration with NVIDIA

Dell Technologies introduces new offerings to help customers quickly and securely build generative AI (GenAI) models on-premises to accelerate improved outcomes and drive new levels of intelligence. New Dell Generative AI Solutions, expanding upon May's Project Helix announcement, span IT infrastructure, PCs and professional services to simplify the adoption of full-stack GenAI with large language models (LLMs), meeting organizations wherever they are in their GenAI journey. These solutions help organizations of all sizes and across industries securely transform and deliver better outcomes.

"Generative AI represents an inflection point that is driving fundamental change in the pace of innovation while improving the customer experience and enabling new ways to work," Jeff Clarke, vice chairman and co-chief operating officer, Dell Technologies, said on a recent investor call. "Customers, big and small, are using their own data and business context to train, fine-tune and inference on Dell infrastructure solutions to incorporate advanced AI into their core business processes effectively and efficiently."

Samsung & Microsoft Reveal First On-Device Attestation Solution for Enterprise

Samsung Electronics today announced the first step in a plan to reimagine mobile device security for business customers in partnership with Microsoft. This collaboration has led to the industry's first on-device, mobile hardware-backed device attestation solution that works equally well on both company and personally owned devices.

Device attestation can help verify a device's identity and health, confirming that it has not been compromised. On-device, mobile hardware-backed device attestation—available on Samsung Galaxy devices and combined with protection from Microsoft Intune—now adds enhanced security and flexibility. For enterprises, this is an extra layer of protection against compromised devices that falsely claim to be known and healthy in order to gain access to sensitive corporate data. Additionally, organizations can now allow employees to bring their own device (BYOD) to work with the confidence that it is protected with the same level of security as company-owned devices. For employees, this means added flexibility for their personal Galaxy devices to safely access their work environment.