News Posts matching #partners


Report Suggests Naver Siding with Samsung in $752 Million "Mach-1" AI Chip Deal

Samsung debuted its Mach-1 generation of AI processors during a recent shareholder meeting—the South Korean megacorp anticipates an early 2025 launch window. The application-specific integrated circuit (ASIC) design is expected to "excel in edge computing applications," with a focus on low-power and efficiency-oriented operating environments. Naver Corporation was a key NVIDIA high-end AI customer in South Korea (and Japan), but the leading search platform firm and creator of the HyperCLOVA X LLM (reportedly) deliberated on adopting alternative hardware last October. The Korea Economic Daily believes that Naver's relationship with Samsung is set to grow, courtesy of a proposed $752 million investment: "the world's top memory chipmaker, will supply its next-generation Mach-1 artificial intelligence chips to Naver Corp. by the end of this year."

Reports from last December indicated that the two companies were deep into the process of co-designing power-efficient AI accelerators—Naver's main goal is to finalize a product that offers eight times the energy efficiency of NVIDIA's H100 AI accelerator. Naver's alleged bulk order—of roughly 150,000 to 200,000 Samsung Mach-1 AI chips—appears to be a stopgap. Industry insiders reckon that Samsung's first-gen AI accelerator is far cheaper than NVIDIA's H100 GPU—a per-unit figure of $3,756 is mentioned in the KED Global article. Samsung is also speculated to be shopping its fledgling AI tech to Microsoft and Meta.
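The reported numbers hang together: multiplying the alleged order volume by the quoted per-unit price lands almost exactly on the reported deal size. A back-of-the-envelope check (assuming the $3,756 figure applies uniformly across the whole order):

```python
# Does the reported order volume at the quoted per-unit price
# line up with the ~$752 million deal size?
unit_price = 3756              # USD per Mach-1 chip, per the KED Global article
low, high = 150_000, 200_000   # reported order range, in units

low_total = unit_price * low     # 563,400,000
high_total = unit_price * high   # 751,200,000

print(f"${low_total/1e6:.0f}M to ${high_total/1e6:.0f}M")  # -> $563M to $751M
```

The upper end of the range (~$751 million) matches the reported $752 million investment almost exactly, which suggests the deal figure is essentially the chip order valued at that per-unit price.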

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.
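The "more than 10x larger" claim checks out against the published parameter counts (GPT-J has 6 billion parameters):

```python
# Scale of the new MLPerf benchmark model versus the earlier one:
# Llama 2 70B against GPT-J (6 billion parameters).
llama2_params = 70e9
gptj_params = 6e9

ratio = llama2_params / gptj_params
print(f"Llama 2 70B is {ratio:.1f}x the size of GPT-J")  # -> 11.7x
```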

Silicon Box Announces $3.6 Billion Foundry Deal - New Facility Marked for Northern Italy

Silicon Box, a cutting-edge advanced panel-level packaging foundry, announced its intention to collaborate with the Italian government to invest up to $3.6 billion (€3.2 billion) in Northern Italy as the site of a new, state-of-the-art semiconductor assembly and test facility. This facility will help meet the critical demand for advanced packaging capacity that Silicon Box anticipates by 2028, enabling next-generation technologies. The multi-year investment will replicate Silicon Box's flagship foundry in Singapore, which has proven capability and capacity for the world's most advanced semiconductor packaging solutions, then expand further into 3D integration and testing. When completed, the new facility will support approximately 1,600 Silicon Box employees in Italy. The construction of the facility is also expected to create several thousand more jobs, including eventual hiring by suppliers. Design and planning for the facility will begin immediately, with construction to commence pending European Commission approval of planned financial support by the Italian State.

As well as bringing the most advanced chiplet integration, packaging, and testing to Italy, Silicon Box's manufacturing process is based on panel-level production—a world-leading, first-of-its-kind combination that is already shipping product to customers from its Singapore foundry. Through the investment, Silicon Box plans greater innovation and expansion in Europe and globally. The new integrated production facility is expected to serve as a catalyst for broader ecosystem investments and innovation in Italy, as well as the rest of the European Union.

Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster in their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems—all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3)—Condor Galaxy 3 will deliver 8 exaFLOPs of AI compute with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership has already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each among the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16 exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 systems. Each CS-3 is powered by the new WSE-3, packing 4 trillion transistors and 900,000 AI cores. Manufactured by TSMC on its 5-nanometer node, the WSE-3 delivers twice the performance of the previous-generation part at the same power and price. Purpose-built for training the industry's largest AI models, the WSE-3 delivers an astounding 125 petaFLOPs of peak AI performance per chip.
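The headline figures follow directly from the per-system specifications quoted above, as a quick arithmetic check confirms:

```python
# Sanity-checking the CG-3 headline specs: 64 CS-3 systems, each with one WSE-3.
systems = 64
petaflops_per_wse3 = 125      # peak AI petaFLOPs per WSE-3
cores_per_wse3 = 900_000      # AI-optimized cores per WSE-3

total_exaflops = systems * petaflops_per_wse3 / 1000  # 8.0 exaFLOPs
total_cores = systems * cores_per_wse3                # 57,600,000 (~58 million)

print(total_exaflops, total_cores)  # -> 8.0 57600000
```

The 57.6 million core total is what the announcement rounds up to "58 million AI-optimized cores," and two such 8-exaFLOP clusters alongside CG-1 and CG-2 give the stated 16-exaFLOP network total.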

Ghost of Tsushima Director's Cut is Coming to PC on May 16

Ghost of Tsushima Director's Cut is coming to PC! We at Nixxes are thrilled to collaborate with the talented team at Sucker Punch Productions who created this critically acclaimed open-world action adventure. We are excited to bring Jin's story to a new audience and to bring a Sucker Punch title to PC for the first time. The PC version of Ghost of Tsushima Director's Cut includes the full game, the Iki Island expansion, and the cooperative online multiplayer Legends mode. For the past year, the team at Nixxes has been working hard to bring the Sucker Punch in-house engine technology to PC and implement PC features such as unlocked frame rates, a variety of graphics settings and presets, and customizable mouse and keyboard controls.

With ultrawide monitor support, you can take in the expansive terrain and ancient landmarks with a cinematic field of view. Ghost of Tsushima Director's Cut on PC is fully optimized for 21:9 and 32:9 resolutions and even supports 48:9 resolutions for triple monitor setups. Ghost of Tsushima Director's Cut on PC features the latest performance-enhancing technologies. NVIDIA DLSS 3 and AMD FSR 3 are available with both upscaling and frame generation options. Intel XeSS upscaling is also supported and if your hardware has headroom to spare, you can use NVIDIA DLAA or FSR 3 Native AA to further boost image quality.
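The 48:9 figure is simply three 16:9 panels placed side by side—widths add while the height stays fixed:

```python
# Three 16:9 panels side by side: aspect-ratio widths add, height is unchanged.
panels, w, h = 3, 16, 9
print(f"{panels * w}:{h}")  # -> 48:9
```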

Microsoft DirectSR Super Resolution API Brings Together DLSS, FSR and XeSS

Microsoft has just announced that its new DirectSR Super Resolution API for DirectX will provide a unified interface for developers to implement super resolution in their games. This means that game studios no longer have to choose between DLSS, FSR, and XeSS, or spend additional resources to implement, bug-test, and support multiple upscalers. For gamers this is huge news too, because they will be able to run upscaling in all DirectSR games—no matter the hardware they own. While AMD FSR and Intel XeSS run on GPUs from all vendors, NVIDIA DLSS is exclusive to Team Green's hardware. In its post, Microsoft also confirms that DirectSR will not replace FSR/DLSS/XeSS with a new Microsoft-built upscaler; rather, it builds on existing technologies that are already available, unifying access to them.

While we have to wait until March 21 for more details to be revealed at GDC 2024, Microsoft's Joshua Tucker stated in a blog post: "We're thrilled to announce DirectSR, our new API designed in partnership with GPU hardware vendors to enable seamless integration of Super Resolution (SR) into the next generation of games. Super Resolution is a cutting-edge technique that increases the resolution and visual quality in games. DirectSR is the missing link developers have been waiting for when approaching SR integration, providing a smoother, more efficient experience that scales across hardware. This API enables multi-vendor SR through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including NVIDIA DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS. DirectSR will be available soon in the Agility SDK as a public preview, which will enable developers to test it out and provide feedback. Don't miss our DirectX State of the Union at GDC to catch a sneak peek at how DirectSR can be used with your games!"
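The "single code path, many upscalers" idea Microsoft describes can be sketched in miniature. This is a hypothetical illustration only: every name below (`SRInputs`, `UpscalerBackend`, `select_backend`) is invented for the sketch, and the real DirectSR API is a C++ DirectX interface whose details Microsoft had not yet published at the time of the announcement.

```python
# Hypothetical sketch of a multi-vendor super-resolution dispatch layer.
# Names and structure are illustrative, not the actual DirectSR API.
from dataclasses import dataclass

@dataclass
class SRInputs:
    """The common inputs every variant consumes: color, depth, motion vectors."""
    color: object
    depth: object
    motion_vectors: object
    output_width: int
    output_height: int

class UpscalerBackend:
    name = "base"
    def upscale(self, inputs: SRInputs) -> str:
        raise NotImplementedError

class DLSSBackend(UpscalerBackend):
    name = "NVIDIA DLSS Super Resolution"
    def upscale(self, inputs):  # a real backend would invoke the vendor driver
        return f"{self.name} -> {inputs.output_width}x{inputs.output_height}"

class FSRBackend(UpscalerBackend):
    name = "AMD FidelityFX Super Resolution"
    def upscale(self, inputs):
        return f"{self.name} -> {inputs.output_width}x{inputs.output_height}"

def select_backend(available):
    """The game ships one code path; the runtime picks what the hardware supports."""
    return available[0]

frame = SRInputs(None, None, None, 3840, 2160)
backend = select_backend([DLSSBackend(), FSRBackend()])
print(backend.upscale(frame))  # -> NVIDIA DLSS Super Resolution -> 3840x2160
```

The point of the sketch is the dispatch structure: the game fills one common set of inputs, and whichever vendor backend the runtime selects consumes those inputs unchanged—which is the "common set of inputs and outputs, single code path" promise in Microsoft's blog post.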

ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

ServiceNow, Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness. StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications. Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants provide powerful performance while saving on compute costs, as fewer parameters require less computing during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model. "StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain," emphasized Harm de Vries, lead of ServiceNow's StarCoder2 development team and co-lead of BigCode. "The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential."

Xbox & Microsoft Schedule GDC 2024 Presentations

As GDC, the world's largest game developer conference, returns to San Francisco, Microsoft and Xbox will be there to engage and empower developers, publishers, and technology partners across the industry. We are committed to supporting game developers on any platform, anywhere in the world, at every stage of development. Our message is simple: Microsoft and Xbox are here to help power your games and empower your teams. From March 18-22, the Xbox Lobby Lounge in Moscone Center South can't be missed—an easy meeting point, and a first step toward learning more about the ID@Xbox publishing program, the Developer Acceleration Program (DAP) for underrepresented creators, Azure cloud gaming services, and anything else developers might need.

GDC features dozens of speakers from across Xbox, Activision, Blizzard, King and ZeniMax who will demonstrate groundbreaking in-game innovations and share community-building strategies. Microsoft technology teams, with support from partners, will also host talks that spotlight new tools, software and services that help increase developer velocity, grow player engagement and help creators grow. See below for the Conference programming details.

Samsung & Vodafone "Open RAN Ecosystem" Bolstered by AMD EPYC 8004 Series

Samsung Electronics and Vodafone, in collaboration with AMD, today announced that the three companies have successfully demonstrated an end-to-end call with the latest AMD processors enabling Open RAN technology, a first for the industry. This joint achievement represents the companies' technical leadership in enriching the Open RAN ecosystem throughout the industry. Conducted in Samsung's R&D lab in Korea, the first call was completed using Samsung's versatile, O-RAN-compliant, virtualized RAN (vRAN) software, powered by AMD EPYC 8004 Series processors on Supermicro's Telco/Edge servers and supported by the Wind River Studio Container-as-a-Service (CaaS) platform. The demonstration aimed to verify optimized performance, energy efficiency, and interoperability among the partners' solutions.

The joint demonstration represents Samsung and Vodafone's ongoing commitment to reinforce their position in the Open RAN market and expand their ecosystem with industry-leading partners. This broader and growing Open RAN ecosystem helps operators to build and modernize mobile networks with greater flexibility, faster time-to-market (TTM), and unmatched performance. "Open RAN represents the forthcoming major transformation in advancing mobile networks for the future. Reaching this milestone with top industry partners like Samsung and AMD shows Vodafone's dedication to delivering on the promise of Open RAN innovation," said Nadia Benabdallah, Network Strategy and Engineering Director at Vodafone Group. "Vodafone is continually looking to innovate its network by exploring the potential and diversity of the ecosystem."

IBM Introduces LinuxONE 4 Express, a Value-oriented Hybrid Cloud & AI Platform

IBM has announced IBM LinuxONE 4 Express, extending the latest performance, security and AI capabilities of LinuxONE to small and medium-sized businesses and to new data center environments. The pre-configured rack-mount system is designed to offer cost savings and to remove client guesswork when spinning up workloads quickly and getting started with the platform to address new and traditional use cases such as digital assets, medical imaging with AI, and workload consolidation.

Building an integrated hybrid cloud strategy for today and years to come
As businesses move their products and services online quickly, they are often left with a hybrid cloud environment created by default, with siloed stacks that are not conducive to alignment across businesses or the introduction of AI. In a recent IBM IBV survey, 84% of surveyed executives acknowledged that their enterprise struggles to eliminate silo-to-silo handoffs. And 78% of responding executives said that an inadequate operating model impedes successful adoption of their multicloud platform. With the pressure to accelerate and scale the impact of data and AI across the enterprise - and improve business outcomes - another approach organizations can take is to more carefully identify which workloads should be on-premises versus in the cloud.

GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. As a company at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to the edge, spanning traditional and emerging workloads in HPC and AI, data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch alongside new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Bungie Announces Destiny 2 x BioWare Crossover, Normandy Crew Landing February 13

Bungie has announced its new collaboration with EA and BioWare to allow Guardians to join the crew of the Normandy with new cosmetics and in-game items, launching February 13, 2024. The Normandy Crew Bundle will be available at the Eververse store in-game and will include a Commander Shepard-inspired N7 armor set for Titans, a Garrus-inspired Vakarian set for Hunters, and a Liara-inspired Shadow Broker set for Warlocks. In celebration of the partnership, all players will be able to claim the Alliance Requisitions Bundle, including the Enhanced Defense Ghost Shell, Alliance Scout Frigate ship, and Alliance Drop Ship Sparrow, which will be available at no cost. Players can also get the Omni Strike finisher and Flux Dance emote for Silver.

Now released in Destiny 2, Riven's Wishes are new weekly quests open to all players from January 30 until March 12. During this period, Guardians will be tasked with completing a pursuit each week to earn a token redeemable for a treasure trove of rewards. Choose from the Wish for Strength to earn Last Wish raid Deepsight weapons, Wish for Protection to armor up with Exotic gear from the Lightfall year, or Wish for Beauty to collect event mementos and essential Ascendant crafting materials.

MSI MPOWER Motherboard Series Resurrected After Long Absence

An exclusive report provides an initial tease of MSI's relaunch of MPOWER—a beloved product line of high-performance yet wallet-friendly motherboards. Wccftech published its coverage of the Z790MPOWER model only a few hours ago. MSI's final batch of MPOWER-branded boards landed back in 2017, with Z170 and Z270 chipsets (on Intel Socket 1151). Here is Wccftech's statement on the matter: "MSI is marking the return of the MPOWER series with a new and cost-effective Z790 product, the Z790MPOWER. This motherboard may look like a very mainstream design but it has something that only a few high-end motherboards can do and that is support for the best DDR5 memory out there."

The outlet moved on to showcase the board's feature set: "Starting with the details, the MSI Z790MPOWER motherboard features the LGA 1700/1800 socket & supports 12th, 13th, and 14th Gen CPUs from Intel. It is powered by a 15-Phase VRM design which is provided power through dual 8-pin connectors. There are large heatsinks over the VRMs and the Mini-ATX with a silver and black finish looks great." The usual bits of overclocking terminology adorn the Z790MPOWER's various heatsinks—including "Overclock, Frequency, MHz, Voltages and Clock."

OpenAI CEO Reportedly Seeking Funds for Purpose-built Chip Foundries

OpenAI CEO Sam Altman had a turbulent winter 2023 career moment, but appears to be going all in on his company's future interests. A Bloomberg report suggests that the tech visionary has initiated a major fundraising initiative for the construction of OpenAI-specific semiconductor production plants. The AI evangelist reckons that his industry will become prevalent enough to demand a dedicated network of manufacturing facilities—the U.S.-based artificial intelligence (AI) research organization is (reportedly) exploring custom AI chip designs. Proprietary AI-focused GPUs and accelerators are no longer novelties—many top tech companies rely on NVIDIA solutions, but are keen to deploy custom-built hardware in the near future.

OpenAI's popular ChatGPT system relies on NVIDIA H100 and A100 GPUs, but tailor-made alternatives seem to be the desired route for Altman & Co. The "on their own terms" pathway seemingly skips the expected/traditional chip manufacturing process—the big foundries could struggle to keep up with demand for AI-oriented silicon. G42 (an Abu Dhabi-based AI development holding company) and SoftBank Group are mentioned as prime investment partners in OpenAI's fledgling scheme—Bloomberg proposes that Altman's team is negotiating an $8 to $10 billion deal with top brass at G42. OpenAI's planned creation of its own foundry network is certainly a lofty and costly goal—the report does not specify whether existing facilities will be purchased and overhauled, or new plants constructed entirely from scratch.

MSI & Opera GX Partner Up on Special GX MSI Edition Browser

The world's leading PC gaming hardware brand, MSI, has partnered with Opera GX, the browser for gamers, to create a unique browser and interactive content experience for fans of MSI's and Opera GX's products and services. "Our partnership with MSI has enabled us to once again create a truly engaging branded browser experience for one of the most renowned hardware brands in gaming," said Mattijs de Valk, VP of Business Development Gaming at Opera, adding: "The GX MSI browser ensures that users stay connected with MSI brand and its products and services in a fun and engaging way - while browsing the internet with the best browser for gamers."

The partnership with Opera GX is just the beginning of a series of upcoming activities. First is the GX MSI Edition—a special co-branded version of the Opera GX gaming browser, including exclusive MSI-themed interactive backgrounds, music, sounds and content. GX MSI Edition comes packed with game-changing features, including CPU, RAM, and Network limiters, hundreds of customization options via GX Mods, and seamless integration with popular platforms like Discord and Twitch.

Microsoft Highlights Top Partner Windows PC Gaming Devices at CES

CES always gives us an exciting look at what's next for hardware, and with gaming at the forefront of technological innovation, it's no surprise that CES 2024 has revealed some truly next-generation Windows 11 gaming PCs. Microsoft and our partners are leading the way in combining traditional power with advancements in AI, and this selection of new announcements is a perfect way to future-proof your gaming experience on PC.

Every PC on this list is designed to run the latest games - including those on PC Game Pass, which will help you to jump into some of the greatest games on the platform the moment you switch on your new machine. Even better, every single computer listed below comes with at least one free month of Xbox Game Pass Ultimate, giving players access to hundreds of high-quality PC, console and cloud games.

Wacom Takes Care of Artists with Digital Rights Management and the new Cintiq Pro 27 and Wacom One Tablets

During the CES 2024 international show, Wacom, one of the leaders in the digital design space, unveiled the new Wacom Cintiq Pro and Wacom One tablets. The company also showcased its digital rights management software, Yuify, and introduced Wacom Bridge, a tool designed to enhance remote collaborative workflows for studios. The new Wacom Cintiq Pro line, including the Pro 27, 22, and 17, was developed in collaboration with professionals in virtual production, VFX, CG, and animation. The latest Wacom Cintiq Pro 27, with its precision and best-in-class color fidelity, is poised to take virtual production workflows to the next level. Color accuracy is crucial in virtual production workflows, and the Wacom Cintiq Pro 27 delivers 100% Rec. 709 and 98% DCI-P3 color accuracy. Its 4K display, with 10-bit color, offers high color performance and calibration options, reducing the traditional setup footprint without compromising performance.

The new Wacom Pro Pen 3, redesigned for ergonomic comfort and customization, complements the Cintiq Pro 27's eight Express Keys and multi-touch screen, offering a harmonious workflow. Wacom Bridge, developed in partnership with AWS NICE DCV and Splashtop, is a technology solution that enhances the use of Wacom products on supported remote desktop connections, catering to the needs of remote and hybrid work environments. The Wacom One line, first launched in 2019, has been redesigned and upgraded, offering more options and customization opportunities. The line includes the Wacom One 13 and 12 displays and the Wacom One Medium and Small pen tablets. Finally, Wacom's commitment to protecting artists' work is embodied in "Yuify", a service that allows artists to protect their artwork, manage usage rights, and establish legally binding license permissions. This digital rights management platform enables creators to conveniently manage their authorship records and sign licenses and contracts.

Intel Collaborates with Taiwanese OEMs to Develop Open IP Immersion Cooling Solution and Reference Design

Intel is expanding immersion cooling collaborations with Taiwanese partners to strengthen its data center offerings for AI workloads. This includes developing an industry-first open-IP complete immersion cooling solution and reference design. Partners like Kenmec and Auras Technology will be key in implementing Intel's advanced cooling roadmap. Intel is also cooperating with Taiwan's Industrial Technology Research Institute on a new lab for certifying high-performance computing cooling technologies to international standards. With local ecosystem partners, Intel aims to accelerate next-generation cooling solutions for Taiwanese and global data centers. Advanced cooling allows packing more performance into constrained data center footprints, which is critical for AI's rapid growth. Intel touts a superfluid-based modular cooling system achieving 1,500+ watts of heat dissipation for high-density deployments.

Meanwhile, Kenmec offers a range of liquid cooling products, from Coolant Distribution Units (CDU) to customized Open Rack version 3 (ORv3) water cooling cabinets, with solutions already Intel-certified. Intel wants to solidify its infrastructure leadership as AI workloads surge by fostering an open, collaborative ecosystem around optimized cooling technologies. While progressing cutting-edge immersion and liquid cooling hardware, cultivating shared validation frameworks and best practices ensures broad adoption. With AI-focused data centers demanding ever-greater density, power efficiency, and reliability, cooling can no longer be an afterthought. Intel's substantial investments in a robust cooling ecosystem highlight it as a priority right alongside silicon advances. By lifting up Taiwanese partners as strategic cooling co-innovators, Intel aims to cement future competitiveness.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

Raspberry Pi Receives Strategic Investment from Arm, Further Extending Long-Term Partnership

Arm Holdings plc (Nasdaq: ARM, "Arm") and Raspberry Pi Ltd today announced an agreement by Arm to make a strategic investment in Raspberry Pi. Arm has acquired a minority stake in Raspberry Pi, further extending a successful long-term partnership between the two companies as they collaborate to deliver critical solutions for the Internet of Things (IoT) developer community.

As the demand for edge compute accelerates, with the proliferation of more demanding IoT and AI applications, Raspberry Pi's solutions are putting the power of low-cost, high-performance computing into the hands of people and businesses all over the world. This investment further cements a partnership that began in 2008, and which has seen the release of many popular Arm-based Raspberry Pi products for students, enthusiasts and commercial developers. Raspberry Pi's most recent flagship product, Raspberry Pi 5, became available at the end of October.

Intel Launches Industry's First AI PC Acceleration Program

Building on the AI PC use cases shared at Innovation 2023, Intel today launched the AI PC Acceleration Program, a global innovation initiative designed to accelerate the pace of AI development across the PC industry.

The program aims to connect independent hardware vendors (IHVs) and independent software vendors (ISVs) with Intel resources that include AI toolchains, co-engineering, hardware, design resources, technical expertise and co-marketing opportunities. These resources will help the ecosystem take full advantage of Intel Core Ultra processor technologies and corresponding hardware to maximize AI and machine learning (ML) application performance, accelerate new use cases and connect the wider PC industry to the solutions emerging in the AI PC ecosystem. More information is available on the AI PC Acceleration Program website.

Harebrained Schemes and Paradox Interactive Part Ways

Paradox Interactive (Paradox) and Harebrained Schemes (HBS)—developers of the Shadowrun trilogy, BATTLETECH and The Lamplighters League—have decided to part ways on 1 January 2024. The separation is the result of a mutual agreement, stemming from each party's strategic and creative priorities. Paradox will retain ownership of The Lamplighters League and other games developed by the studio. HBS will seek new publishing, partnership, and investment opportunities.

"Paradox has refocused its strategy towards its core niches within strategy and management games with endless qualities," said Charlotta Nilsson, COO of Paradox. "We and HBS' leadership have been discussing what would happen after the release of The Lamplighters League, but a new project or sequel in the same genre was not in line with our portfolio plans. Hence, we believe that a separation would be the best way forward. We're very happy that this talented, gifted studio has the chance to continue and can't wait to see what they will make next."

E3 2024 Plans Involve New Venue, ESA and ReedPop Part Ways

Video game events specialist ReedPop is no longer partnering with the Entertainment Software Association (ESA) to co-organize future Electronic Entertainment Expo (aka E3) trade shows. According to GamesIndustry.biz, a venue change is also in the cards—E3's familiar setting of the Los Angeles Convention Center is not on the consideration list for next year. The ESA appears to be sizing up (or down) a suitable new location for E3 2024, despite earlier industry talk indicating that the event had been called off entirely. E3 2023 was cancelled in the spring, after major publishers confirmed non-attendance amid a downturn in public interest—the latest reports suggest that the ESA is working on a complete format revamp for its 2025 show.

A mutual decision has been reached between the (now former) collaborators—ReedPop's multi-year deal as co-organizer has been annulled. ESA president and CEO Stanley Pierre-Louis commented on the split: "We appreciate ReedPop's partnership over the past 14 months and support their ongoing efforts to bring industry and fans together through their various events. While the reach of E3 remains unmatched in our industry, we are continuing to explore how we can evolve it to best serve the video game industry and are evaluating every aspect of the event, from format to location. We are committed to our role as a convenor for the industry and look forward to sharing news about E3 in the coming months."

Tata Partners With NVIDIA to Build Large-Scale AI Infrastructure

NVIDIA today announced an extensive collaboration with Tata Group to deliver AI computing infrastructure and platforms for developing AI solutions. The collaboration will bring state-of-the-art AI capabilities within reach of thousands of organizations, businesses and AI researchers, and hundreds of startups in India. The companies will work together to build an AI supercomputer powered by the next-generation NVIDIA GH200 Grace Hopper Superchip to achieve performance that is best in class.

"The global generative AI race is in full steam," said Jensen Huang, founder and CEO of NVIDIA. "Data centers worldwide are shifting to GPU computing to build energy-efficient infrastructure to support the exponential demand for generative AI."