News Posts matching #Enterprise


Huawei Launches OptiXtrans DC908 Pro, a Next-gen DCI Platform for the AI Era

At MWC Barcelona 2024, Huawei launched the Huawei OptiXtrans DC908 Pro, a new platform for Data Center Interconnect (DCI) designed for the intelligent era. This innovative platform ensures the efficient, secure, and stable transmission of data between data centers (DCs), setting a new standard for DCI networks. As AI continues to proliferate across various service scenarios, the demand for foundation models has intensified, leading to an explosion in data volume. DCs are now operating at the petabyte level, and DCI networks have evolved from single-wavelength 100 Gbit/s to single-wavelength Tbit/s.

In response to the challenges posed by massive data transmission in the intelligent era, Huawei introduces the next-generation DCI platform, the Huawei OptiXtrans DC908 Pro. Compared to its predecessor, the DC908 Pro offers higher bandwidth, reliability, and intelligence.

NVIDIA AI GPU Customers Reportedly Selling Off Excess Hardware

The NVIDIA H100 Tensor Core GPU was last year's hot item for HPC and AI industry segments—the largest purchasers were reported to have acquired up to 150,000 units each. Demand grew so much that lead times of 36 to 52 weeks became the norm for H100-based server equipment. The latest rumblings indicate that things have stabilized—so much so that some organizations are "offloading chips" as the supply crunch cools off. Apparently it is more cost-effective to rent AI processing sessions through cloud service providers (CSPs)—the big three being Amazon Web Services, Google Cloud, and Microsoft Azure.

According to a mid-February Seeking Alpha report, wait times for the NVIDIA H100 80 GB GPU model have been reduced to around three to four months. The Information believes that some companies have already reduced their order counts, while others have hardware sitting around, completely unused. Maintenance complexity and costs are reportedly cited as main factors in "offloading" unneeded equipment and turning to renting server time from CSPs. Despite improved supply conditions, AI GPU demand is still growing—driven mainly by organizations working with large language models (LLMs). OpenAI is a prime example—as pointed out by The Information—insider murmurings have Sam Altman & Co. seeking out alternative solutions and production avenues.

NVIDIA Prepared to Offer Custom Chip Designs to AI Clients

NVIDIA is reportedly setting up an AI-focused semi-custom chip design business unit, according to inside sources known to Reuters—it is believed that Team Green leadership is adapting to demands from key data-center customers. Many companies are seeking cheaper alternatives, or have devised their own designs (budget/war chest permitting)—NVIDIA's current range of AI GPUs are, in effect, off-the-shelf solutions. OpenAI has generated the most industry noise—its alleged early 2024 fund-raising pursuits have attracted plenty of speculative (and semi-serious) interest from notable semiconductor personalities.

Team Green is seemingly reacting to emerging market trends—Jensen Huang (CEO, president, and co-founder) has hinted that NVIDIA custom chip design services are on the cusp. Stephen Nellis—a Reuters reporter specializing in tech industry developments—has highlighted select quotes from the NVIDIA boss in an upcoming interview piece: "We're always open to do that. Usually, the customization, after some discussion, could fall into system reconfigurations or recompositions of systems." The Team Green chief teased that his engineering team is prepared to take on the challenge of meeting exact requests: "But if it's not possible to do that, we're more than happy to do a custom chip. And the benefit to the customer, as you can imagine, is really quite terrific. It allows them to extend our architecture with their know-how and their proprietary information." The rumored NVIDIA semi-custom chip design business unit could be introduced in an official capacity at next month's GTC 2024 conference.

Cervoz Embraces Edge Computing with its M.2 Compact Solutions

Seizing the Edge: Cervoz Adapts to Shifting Data Landscape—The rapid emergence of technologies like AIoT and 5G and their demand for high-speed data processing has accelerated the data transition from the cloud to the edge. This shift exposes data to unpredictable environments with extreme temperature variations, vibrations, and space constraints, making it critical for edge devices to thrive in these settings. Cervoz strategically targets the booming edge computing sector by introducing an extensive array of compact product lines, enhancing its existing SSDs, DRAM, and Modular Expansion Cards to meet the unique needs of edge computing.

Cervoz Reveals NVMe M.2 SSDs and Connectivity Solutions to Power the Edge
Cervoz introduces its latest compact PCIe Gen 3 x2 SSD offerings, the T421 M.2 2242 (B+M key) and T425 M.2 2230 (A+E key). Their space-efficient designs and low power consumption deliver exceptional performance, catering to the storage needs of fanless embedded PCs and motherboards for purpose-built edge applications. Cervoz is also leading the way in developing connectivity solutions, including Ethernet, Wi-Fi, Serial, USB, and CAN Bus, all available in M.2 2230 (A+E key) and M.2 2242/2260/2280 (B+M key) form factors. The M.2 (B+M key) 2242/2260/2280 card is a versatile three-in-one solution designed for maximum adaptability: while it initially comes in a 2280 form factor, it can be easily adjusted to fit 2260 or 2242 sizes. It offers an effortless upgrade path for existing systems without sacrificing connection capability, especially in edge devices.

Edged Energy Launches Four Ultra-Efficient AI-Ready Data Centers in USA

Edged Energy, a subsidiary of Endeavour devoted to carbon neutral data center infrastructure, announced today the launch of its first four U.S. data centers, all designed for today's high-density AI workloads and equipped with advanced waterless cooling and ultra-efficient energy systems. The facilities will bring more than 300 MW of critical capacity with an industry-leading average Power Usage Effectiveness (PUE) of 1.15 portfolio-wide. Edged has nearly a dozen new data centers operating or under construction across Europe and North America and a gigawatt-scale project pipeline.

The first phase of this U.S. expansion includes a 168 MW campus in Atlanta, a 96 MW campus in the Chicago area, 36 MW in Phoenix and 24 MW in Kansas City. At a time of growing water scarcity where rivers, aquifers and watersheds are at dangerously low levels, it is more critical than ever that IT infrastructure conserve precious water resources. The new Edged facilities are expected to save more than 1.2 billion gallons of water each year compared to conventional data centers. "The rise of AI and machine learning is requiring more power, and often more water, to cool outdated servers. While traditional data centers struggle to adapt, Edged facilities are ready for the advanced computing of today and tomorrow without consuming any water for cooling," said Bryant Farland, Chief Executive Officer for Edged. "Sustainability is at the core of our platform. It is why our data centers are uniquely optimized for energy efficiency and water conservation. We are excited to be partnering with local communities to bring future-proof solutions to a growing digital economy."

NVIDIA Unveils "Eos" to Public - a Top Ten Supercomputer

Providing a peek at the architecture powering advanced AI factories, NVIDIA released a video that offers the first public look at Eos, its latest data-center-scale supercomputer. An extremely large-scale NVIDIA DGX SuperPOD, Eos is where NVIDIA developers create their AI breakthroughs using accelerated computing infrastructure and fully optimized software. Eos is built with 576 NVIDIA DGX H100 systems, NVIDIA Quantum-2 InfiniBand networking and software, providing a total of 18.4 exaflops of FP8 AI performance. Revealed in November at the Supercomputing 2023 trade show, Eos—named for the Greek goddess said to open the gates of dawn each day—reflects NVIDIA's commitment to advancing AI technology.

Eos Supercomputer Fuels Innovation
Each DGX H100 system is equipped with eight NVIDIA H100 Tensor Core GPUs, giving Eos a total of 4,608 H100 GPUs. As a result, Eos can handle the largest AI workloads, training large language models, recommender systems, quantum simulations and more. It's a showcase of what NVIDIA's technologies can do when working at scale. Eos is arriving at the perfect time. People are changing the world with generative AI, from drug discovery to chatbots to autonomous machines and beyond. To achieve these breakthroughs, they need more than AI expertise and development skills. They need an AI factory—a purpose-built AI engine that's always available and can help them ramp their capacity to build AI models at scale. Eos delivers. Ranked No. 9 on the TOP500 list of the world's fastest supercomputers, Eos pushes the boundaries of AI technology and infrastructure.
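
The headline figures can be sanity-checked with quick arithmetic. A minimal sketch: the system and GPU counts come from the article, while the per-GPU FP8 peak (~3.96 petaflops with sparsity) is an assumption taken from NVIDIA's published H100 SXM specification.

```python
# Back-of-the-envelope check of the Eos figures quoted above.
DGX_SYSTEMS = 576   # DGX H100 systems (from the article)
GPUS_PER_DGX = 8    # H100 GPUs per DGX H100 system (from the article)

total_gpus = DGX_SYSTEMS * GPUS_PER_DGX
print(total_gpus)  # 4608, matching the article

# Assumption: ~3.958 PFLOPS peak FP8 (with sparsity) per H100 SXM GPU,
# per NVIDIA's published spec sheet.
H100_FP8_PFLOPS = 3.958
total_exaflops = total_gpus * H100_FP8_PFLOPS / 1000
print(round(total_exaflops, 1))  # ~18.2, in line with the quoted 18.4 exaflops
```

The small gap between ~18.2 and the quoted 18.4 exaflops likely comes down to rounding in the per-GPU figure.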

Samsung Announces the Galaxy Tab Active5

Samsung Electronics America has announced the Galaxy Tab Active5, a business-ready ruggedized tablet built to handle the rigors of frontline work. Building on the power of the Galaxy Tab Active3, the Galaxy Tab Active5 delivers significant improvements in performance, durability and security to help businesses conquer their challenges and boost productivity in the field, even in harsh working environments. Additionally, the Galaxy Tab Active5 is available as an Enterprise Edition, making it easy for businesses to enroll, configure, manage and analyze hundreds of devices.

Designed with the needs of retailers in mind, the Galaxy Tab Active5 features a high-resolution camera, near-field communication (NFC) and push-to-talk functionality to enable more efficient barcode-scanning, mobile point-of-sale (mPOS), in-store communication and more. In addition to retail, the Galaxy Tab Active5 also delivers new capabilities in other industries that require a high degree of durability, including foodservice, manufacturing, transportation, construction and the public sector.

Intel NEX "Bartlett Lake-S" CPUs Reportedly in Pipeline

Supply chain insiders have claimed that Intel is working on extending the lifespan of its LGA 1700 platform—a BenchLife report proposes that the "Bartlett Lake-S" processor family is due soon, courtesy of Team Blue's Network and Edge (NEX) business group. Only a few days ago, the rumor mill had placed "Bartlett Lake-S" CPUs in a mainstream desktop category, due to alleged connections with the Raptor Lake-S Refresh series—the former is also (supposedly) fabricated on the Intel 7 process node. BenchLife believes that DDR4 and DDR5 memory will be supported, but makes no mention of possible ECC functionality. Confusingly, chip industry tipsters believe that the unannounced processors could be launched as 15th Gen Core parts.

BenchLife has a history of discovering and reporting on Intel product roadmaps—apparently Bartlett Lake-S can leverage the same core configurations as seen on Raptor Lake-S; namely 8 Raptor Cove P-Cores and 16 Gracemont E-Cores. An insider source claims that a new pure P-Core-only design could exist, sporting up to twelve Raptor Cove units. According to a leaked Bartlett Lake-S series specification sheet: "the iGPU part will use (existing) Intel Xe architecture, up to Intel UHD Graphics 770." The publication alludes to some type of AI performance enhancement as a distinguishing feature for Bartlett Lake-S, when lined up against 14th Gen Core desktop SKUs. Folks salivating at the prospect of a mainstream DIY launch will have to wait and see (according to BenchLife's supply chain insider): "judging from various specifications, this product belonging to the Intel NEX business group may also be decentralized to the consumer market, but the source did not make this part too clear and reserved some room for maneuver."

PNY Expands Enterprise Portfolio with Innovative VAST Data Platform

PNY Technologies, a global leader in memory and storage solutions, has expanded its enterprise portfolio through a strategic partnership with VAST Data, the AI data platform company. This collaboration underscores PNY's commitment to delivering cutting-edge solutions to meet the evolving needs of enterprises integrating AI and HPC into their core processes. This partnership leverages the VAST Data Platform's DataStore capabilities, enhancing PNY's enterprise offerings with unparalleled performance, scalability, and cost efficiency. This move reinforces PNY's position as a key player in the enterprise market.

Key highlights of the partnership include:
  • Revolutionary Solutions: PNY now offers VAST Data's innovative data platform, known for its simplicity and transformative performance, serving data to the world's most demanding supercomputers.
  • Unmatched Scalability: VAST Data's industry-disrupting DASE architecture enables businesses to enjoy nearly limitless scale as their data sets and AI pipelines grow, allowing them to adapt to the changing demands of today's increasingly data-driven world.
  • Cost-Effective Data Management: VAST Data and PNY will empower enterprises to achieve significant cost savings through improved data reduction (VAST Similarity), infrastructure efficiency and simplified management.
  • Enhanced Data Analytics: The VAST DataBase facilitates deeper insights from both structured and unstructured data, accelerating decision-making and enabling data-driven innovation across various business functions.
  • Exceptional Customer Support: PNY extends its commitment to exceptional customer support to VAST Data solutions, providing reliable technical assistance and guidance.

IBM Storage Ceph Positioned as the Ideal Foundation for Modern Data Lakehouses

It's been one year since IBM integrated Red Hat storage product roadmaps and teams into IBM Storage. In that time, organizations have faced unprecedented data challenges in scaling AI, due to the rapid growth of data in more locations and formats, but with poorer quality. Helping clients combat this problem has meant modernizing their infrastructure with cutting-edge solutions as part of their digital transformations. Largely, this involves delivering consistent application and data storage across on-premises and cloud environments. Also, crucially, this includes helping clients adopt cloud-native architectures to realize the benefits of public cloud like cost, speed, and elasticity. IBM Storage Ceph—formerly Red Hat Ceph—a state-of-the-art open-source software-defined storage platform, is a keystone in this effort.

Software-defined storage (SDS) has emerged as a transformative force when it comes to data management, offering a host of advantages over traditional legacy storage arrays, including extreme flexibility and scalability that are well-suited to handle modern use cases like generative AI. With IBM Storage Ceph, storage resources are abstracted from the underlying hardware, allowing for dynamic allocation and efficient utilization of data storage. This flexibility not only simplifies management but also enhances agility in adapting to evolving business needs and scaling compute and capacity as new workloads are introduced. This self-healing and self-managing platform is designed to deliver unified file, block, and object storage services at scale on industry-standard hardware. Unified storage helps provide clients a bridge from legacy applications running on independent file or block storage to a common platform that includes those and object storage in a single appliance.

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of new Instinct series accelerators has generated a lot of tech news buzz and excitement within the financial world, but few folks are privy to Team Red's MSRP for the CDNA 3.0-powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures—an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for ~$10,000 apiece. North American enterprise customers appear to have taken delivery of the latest MI300 products around mid-January—inevitably, closely guarded information has leaked out to news investigators. Seeking Alpha's article (based on Citi's findings) alleges that Microsoft's data center division is AMD's top buyer of MI300X hardware—GPT-4 is reportedly up and running on these brand-new accelerators.

The leakers claim that businesses further down the (AI and HPC) food chain are having to shell out $15,000 per MI300X unit, but this is a bargain when compared to NVIDIA's closest competing package—the venerable H100 SXM5 80 GB professional card. Team Green, similarly, does not reveal its enterprise pricing to the wider public—Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product when compared to Team Red's MI300X pricing. NVIDIA's dominant AI GPU market position could be challenged by cheaper yet still very performant alternatives—additionally, chip shortages have caused Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims—a company representative declined this invitation.
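
The "up to four times" claim follows directly from the figures reported here. A quick sketch of the arithmetic; note that all of these numbers are projected or leaked, not official MSRPs:

```python
# Price-ratio arithmetic behind Citi's "up to four times" figure,
# using the prices reported in the article (projected/leaked, not official).
mi300x_microsoft = 10_000  # ~$10,000 apiece, alleged Microsoft pricing
mi300x_street = 15_000     # ~$15,000 per unit for smaller buyers
h100_low, h100_high = 30_000, 40_000  # observed H100 80 GB AIC market prices

print(h100_high / mi300x_microsoft)  # 4.0 -- the "up to four times" figure
print(h100_low / mi300x_street)      # 2.0 -- the low end of the pricing gap
```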

GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE has broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. Being at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to edge including traditional and emerging workloads in HPC and AI to data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch with new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Samsung Showcases B2B Displays with Advanced Connectivity at ISE 2024

Samsung Electronics today at Integrated Systems Europe (ISE) 2024 in Barcelona is showcasing how SmartThings will bolster its B2B displays to shape the future of business connectivity. Samsung's "SmartThings for Business" exhibition emphasizes the new advancements that the cutting-edge internet-of-things (IoT) platform will offer, as well as Samsung's commitment to providing more connected, easy-to-control digital signage across industries. "In a commercial display sector where operational efficiency is key, Samsung digital signage is leveraging SmartThings to deliver next-gen connectivity and features to organizations of all sizes," said SW Yong, President and Head of Visual Display Business at Samsung Electronics. "This further expansion of the SmartThings ecosystem will serve to elevate experiences for customers and partners from a wide variety of industries."

How Businesses Can Leverage Connected Tech Through SmartThings—From the Smart Store to Smart Office
At the event, Samsung is showcasing how SmartThings enables business owners to leverage their digital signage to connect and gain more control of their smart devices across various landscapes. By offering the SmartThings connectivity feature to commercial display products such as Smart Signage and Hotel TVs, users can experience the convenience of hyper-connectivity in their business environments. These changes will include Samsung smart devices, as well as other devices that support the industry's latest IoT specifications, Matter and the Home Connectivity Alliance (HCA). Through the application of SmartThings to various business environments, Samsung contributes to the more efficient management of space and energy by transforming places of business into interconnected smart spaces. These connectivity improvements have been designed to benefit all types of business customers, from small and mid-sized business owners to enterprises. Examples of the smart spaces—including a smart store, smart office and smart hotel—are on display at Samsung's booth at ISE 2024.

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of data centers running NVIDIA H100 and A100 GPUs. He foresees a future scenario where his company becomes less reliant on Team Green's off-the-shelf AI-crunchers, thanks to a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK hynix, and Micron are considered the top manufacturers of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by Gartner—the group predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that the stragglers will need to "expand HBM production capacity" in order to stay competitive. SK hynix has shacked up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3E parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
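
The "almost double" framing is easy to verify from the cited figures. A minimal sketch, assuming the 2023 baseline is roughly $2 billion as the article states:

```python
# Growth math behind the Gartner projection cited above.
hbm_2023 = 2.0    # ~$2 billion revenue, 2023 (per the article)
hbm_2025 = 4.976  # projected 2025 revenue, $ billions (Gartner, via Commercial Times)

growth_factor = hbm_2025 / hbm_2023
print(round(growth_factor, 2))  # 2.49 -- roughly 2.5x over two years

# Implied compound annual growth rate over the two-year span:
cagr = growth_factor ** 0.5 - 1
print(f"{cagr:.0%}")  # 58% per year
```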

Intel's Next-gen Xeon "Clearwater Forest" E-Core CPU Series Spotted in Patch

Intel presented its next-generation Xeon "Clearwater Forest" processor family during September's Innovation event—its roadmap slide included other Birch Stream platform architecture options. Earlier this week, Team Blue's software engineers issued a Linux kernel patch that contains details pertaining to the codenamed projects Sierra Forest, Grand Ridge, and the aforementioned Clearwater Forest. All-E-Core Xeon "Sierra Forest" processors are expected to launch around the middle of 2024—this deployment of purely efficiency-oriented "Sierra Glen" (Atom Crestmont) cores in enterprise/server chip form will be a first for Intel. The Sierra Forest Xeon range has been delayed a couple of times, but the extra maturation time has granted a jump from an initial maximum of 144 E-Cores up to 288. The latest patch notes provide an early look at Clearwater Forest's basic foundations—it appears to be Sierra Forest's direct successor.

The Intel Xeon "Granite Rapids" processor family is expected to hit retail just after the Sierra Forest product launch, but the former sports a very different internal configuration—an all "Redwood Cove" P-Core setup. Phoronix posits that Sierra Forest's groundwork is clearing the way for its natural successor: "Clearwater Forest is Intel's second generation E-core Xeon...Clearwater Forest should ship in 2025 while the open-source Intel Linux engineers begin their driver support preparations and other hardware enablement well in advance of launch. With engineers already pushing Sierra Forest code into the Linux kernel and related key open-source projects like Clang and GCC since last year, their work on enabling Sierra Forest appears to be largely wrapping up and in turn the enablement is to begin for Clearwater Forest. Sent out...was the first Linux kernel patch for Clearwater Forest. As usual, for the first patch it's quite basic and is just adding in the new model number for Clearwater Forest CPUs. Clearwater Forest has a model number of 0xDD (221). The patch also reaffirms that the 0xDD Clearwater Forest CPUs are using Atom Darkmont cores."

QNAP Launches TS-hx77AXU-RP Series Enterprise ZFS NAS with Revolutionary AMD Ryzen 7000 Series Processors

QNAP Systems, Inc., a leading computing and storage solutions innovator, today unveiled the new high-capacity TS-hx77AXU-RP ZFS NAS series, including 12-bay TS-h1277AXU-RP and 16-bay TS-h1677AXU-RP rackmount models powered by AMD Ryzen 7000 Series processors based on the cutting-edge AMD Socket AM5 platform. By integrating robust hardware and diverse I/O including DDR5 RAM, M.2 PCIe Gen 5, PCIe Gen 4 slots, and redundant power supplies, the TS-hx77AXU-RP series unleashes enterprise-level performance and delivers ultra-high bandwidth, futureproof expandability, and trusted reliability for performance-demanding Tier 2 storage, virtualization, 4K video editing, and PB-level storage applications.

"The whole new TS-hx77AXU-RP series ZFS NAS provides enterprises with superb performance and large storage capacity to tackle business-critical workloads and storage-demanding applications. The AMD Ryzen 7000 Series multi-core processors further unlocks the key performance of DDR5 and M.2 PCIe Gen 5," said Alex Shih, Product Manager of QNAP, adding "Paired with the ZFS file system that's most suited for business applications, the TS-hx77AXU-RP series stands out as the top choice for storage solutions that require uncompromising data integrity."

Dell Generative AI Open Ecosystem with AMD Instinct Accelerators

Generative AI (GenAI) is the decade's most promising accelerator for innovation with 78% of IT decision makers reporting they're largely excited for the potential GenAI can have on their organizations.¹ Most see GenAI as a means to provide productivity gains, streamline processes and achieve cost savings. Harnessing this technology is critical to ensure organizations can compete in this new digital era.

Dell Technologies and AMD are coming together to unveil an expansion to the Dell Generative AI Solutions portfolio, continuing the work of accelerating advanced workloads and offering businesses more choice to continue their unique GenAI journeys. This new technology highlights a pivotal role played by open ecosystems and silicon diversity in empowering customers with simple, trusted and tailored solutions to bring AI to their data.

MSI Introduces New AI Server Platforms with Liquid Cooling Feature at SC23

MSI, a leading global server provider, is showcasing its latest GPU and CXL memory expansion servers powered by AMD EPYC processors and 4th Gen Intel Xeon Scalable processors, which are optimized for enterprises, organizations and data centers, at SC23, booth #1592 in the Colorado Convention Center in Denver from November 13 to 16.

"The exponential growth of human- and machine-generated data demands increased data center compute performance. To address this demand, liquid cooling has emerged as a key trend, said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's server platforms offer a well-balanced hardware foundation for modern data centers. These platforms can be tailored to specific workloads, optimizing performance and aligning with the liquid cooling trend."

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

Cisco and Bang & Olufsen Unveil New Wireless Earbuds for Secure Hybrid Work

Cisco and Bang & Olufsen today unveiled new true wireless earbuds built with enterprise-grade features customized for professionals on the go. Furthering their partnership to provide customers with top-quality audio experiences whether they're at home, at work or in transit, the Bang & Olufsen Cisco 950 delivers a high-end aesthetic and premium sound combined with advanced security and manageability features.

This product release is an extension of the Cisco and Bang & Olufsen partnership, which addresses increased customer expectation for multifunctional devices that fit their lifestyles in and out of work. As hybrid work continues to enable flexibility in how and where people work, the Bang & Olufsen Cisco 950 enables crystal-clear audio for seamless collaboration from any setting. The earbuds are fully manageable in Cisco's Control Hub platform, giving IT greater visibility and control over their entire fleet of collaboration devices and peripherals.

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Amazon to Invest $4 Billion into Anthropic AI

Today, we're announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the most reliable and high-performing foundation models in the industry. Our frontier safety research and products, together with Amazon Web Services' (AWS) expertise in running secure, reliable infrastructure, will make Anthropic's safe and steerable AI widely accessible to AWS customers.

AWS will become Anthropic's primary cloud provider for mission-critical workloads, providing our team with access to leading compute infrastructure in the form of AWS Trainium and Inferentia chips, which will be used in addition to existing solutions for model training and deployment. Together, we'll combine our respective expertise to collaborate on the development of future Trainium and Inferentia technology.

Unity to Start Charging Per-Installation Fee with New Business Model Update

Unity is introducing notable changes to its pricing and service offerings, slated to take effect on January 1, 2024. At the heart of these changes is the new Unity Runtime Fee, which is based on the number of game installs: the fee applies every time an end user downloads a qualifying game. Unity believes this install-based fee allows creators to retain the financial benefits of ongoing player engagement, unlike a model based on revenue sharing. The company clarifies that the fee refers explicitly to the Unity Runtime, the part of the Unity Engine that enables games to run on different devices. These changes are not retroactive: all fees will start counting on January 1, 2024. The fee applies once for each new install; it is not an ongoing perpetual royalty like a revenue share.

However, the new Unity Runtime Fee comes with revenue and install thresholds designed to ensure that smaller creators are not adversely affected. For Unity Personal and Unity Plus, the fee applies only to games that have generated $200,000 or more in the last 12 months and have at least 200,000 lifetime installs. For Unity Pro and Unity Enterprise, the fee kicks in for games that have made $1,000,000 or more in the last 12 months and have at least 1,000,000 lifetime installs. Fees start at $0.20 per install after the first 200,000 installs; after one million installs, each new install costs $0.15 and $0.125 for Unity Pro and Unity Enterprise, respectively. As a game gains traction, the per-install fee decays, as shown in the table below.
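The threshold logic above can be sketched in a few lines. This is a hedged illustration, not Unity's actual billing code: the function name and plan labels are mine, the assumption that only installs beyond the lifetime-install threshold are billable is an interpretation, and only the flat per-install rates quoted above are used (the full decaying tier table is not reproduced here).

```python
def runtime_fee(plan, revenue_12mo, lifetime_installs, new_installs):
    """Estimate the Unity Runtime Fee for a batch of new installs.

    Simplified sketch using only the figures quoted in the article;
    the real fee schedule decays through tiers not modeled here.
    """
    # (min 12-month revenue, min lifetime installs, per-install rate)
    plans = {
        "personal":   (200_000,   200_000,   0.20),
        "plus":       (200_000,   200_000,   0.20),
        "pro":        (1_000_000, 1_000_000, 0.15),
        "enterprise": (1_000_000, 1_000_000, 0.125),
    }
    min_revenue, min_installs, rate = plans[plan]

    # Both thresholds must be met before any fee is charged
    if revenue_12mo < min_revenue:
        return 0.0

    # Only installs past the lifetime-install threshold are billable,
    # which also handles a batch that crosses the threshold mid-way
    total = lifetime_installs + new_installs
    billable = max(0, total - max(min_installs, lifetime_installs))
    return billable * rate


# A Pro-tier game well past both thresholds pays $0.15 per new install
print(runtime_fee("pro", 2_000_000, 1_500_000, 10_000))        # 1500.0
# Below the revenue threshold, no fee regardless of install count
print(runtime_fee("personal", 100_000, 300_000, 5_000))        # 0.0
```

Note that the batch-crossing case falls out naturally: an Enterprise game at 990,000 lifetime installs adding 20,000 more is billed only for the 10,000 installs beyond the one-million mark.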

Update 15:36 UTC: Unity issued a statement on the company's Twitter/X account promising changes in the coming days.

Cervoz Introduces T425 - a New M.2 2230 (A+E key) NVMe Gen3x2 SSD

The Power of Small: Evolution of Data Storage. Ever since computers were first built, the drive to make them smaller has never stopped. This is especially evident in the industrial sector, where compactness is highly valued. Data storage, as a crucial component of computers, has undergone the same miniaturization, leading to the era of M.2 SSDs. While some ultra-compact fanless PCs and rugged mobile PCs may adopt smaller embedded storage, such as eMMC (Embedded MultiMediaCard), it lags significantly behind M.2 PCIe SSDs in terms of performance. M.2 PCIe SSDs have therefore emerged as the superior choice for those seeking top-notch performance and efficiency in their computing experience.

Introducing the Cervoz T425 SSD: Unleash Power in a Tiny Package
To embrace this remarkable trend, Cervoz presents the latest T425, a new M.2 2230 NVMe PCIe Gen 3 x2 SSD. Designed with a compact M.2 2230 form factor (22 mm x 30 mm), it supports both A and E key configurations and utilizes PCIe Gen 3 x2 lanes for high-speed data transfer. With impressive sequential speeds of up to 815 MB/s read and 760 MB/s write, along with storage capacities of 64 GB, 128 GB, and 512 GB, the T425 is the ultimate solution for enhancing the performance of embedded computing systems with space constraints.
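As a rough sanity check on those ratings (standard PCIe arithmetic, not figures from Cervoz): a PCIe Gen 3 lane signals at 8 GT/s with 128b/130b line coding, so a x2 link tops out just under 2 GB/s, leaving the rated 815 MB/s sequential read well within the interface ceiling.

```python
# Back-of-the-envelope bandwidth ceiling for a PCIe Gen 3 x2 link
GT_PER_S = 8e9          # 8 gigatransfers/s per Gen 3 lane
ENCODING = 128 / 130    # 128b/130b line-code efficiency
LANES = 2

link_bytes_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
link_mb_per_s = link_bytes_per_s / 1e6

print(f"x2 link ceiling: ~{link_mb_per_s:.0f} MB/s")   # ~1969 MB/s
print(f"815 MB/s read uses ~{815 / link_mb_per_s:.0%} of the link")
```

The drive's speeds are therefore bounded by the NAND and controller rather than the two-lane link, a common trade-off in the tiny M.2 2230 form factor.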