News Posts matching #alternative


Broadcom's Testing of Intel 18A Node Signals Disappointment, Still Not Ready for High-Volume Production

According to a recent Reuters report, Intel's 18A node doesn't appear to be production-ready. Broadcom has reportedly been testing the 18A node on its internal designs, which span an extensive range of products from AI accelerators to networking switches. However, after receiving the initial production run from Intel, Broadcom found the node to be in a worse state than expected. After powering on and testing the wafers, the company reportedly concluded that the 18A process is not yet ready for high-volume production. Since Broadcom's assessment concerns high-volume production specifically, it suggests the 18A node is not yet delivering yields that would satisfy external customers.

While this is not a good sign for Intel Foundry's contract business, it suggests the node is presumably in a reasonable state in terms of power and performance. Intel's CEO Pat Gelsinger confirmed that 18A is now at a defect density (D0) of 0.4 and called it a "healthy process." However, alternatives exist at TSMC, which has proven a very challenging competitor: its N7 and N5 nodes had a defect density of 0.33 during development and 0.1 during high-volume production. Lower defect density means better yields and lower costs for the contracting party, resulting in higher profits. Ultimately, it is up to Intel to improve its process further to satisfy customers. Gelsinger wants to see Intel Foundry "manufacturing ready" by the end of the year, with the first designs reaching volume production in 2025. There are still a few months left to improve the node, and we expect to see changes implemented by the end of the year.
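To show why the defect-density gap matters so much for yields and costs, here is a minimal sketch using a simple textbook Poisson yield model (yield ≈ exp(-die area × D0)); the 600 mm² die size is a hypothetical assumption for a large data-center class chip, not a figure from the report:

    import math

    def poisson_yield(die_area_cm2, d0_per_cm2):
        # Simple Poisson yield model: fraction of defect-free dies = exp(-area * D0)
        return math.exp(-die_area_cm2 * d0_per_cm2)

    die_area = 6.0  # hypothetical 600 mm^2 (6 cm^2) die; assumption, not from the report
    for label, d0 in [("Intel 18A (reported)", 0.40),
                      ("TSMC N7/N5, development", 0.33),
                      ("TSMC N7/N5, high volume", 0.10)]:
        print(f"{label}: D0 = {d0:.2f}/cm^2 -> yield ~ {poisson_yield(die_area, d0):.0%}")

Under those assumptions, the same die yields roughly 9% at a D0 of 0.4 but about 55% at 0.1, which is the kind of difference a foundry customer ultimately pays for.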

NVIDIA to Sell Over One Million H20 GPUs to China, Taking Home $12 Billion

When NVIDIA started preparing the H20 GPU for China, the company anticipated strong demand for sanctions-compliant GPUs. We now know roughly what that Chinese venture is worth: an astonishing $12 billion in revenue. Given the massive demand for NVIDIA GPUs, Chinese AI research labs are acquiring as many as they can get their hands on. According to a Financial Times report citing SemiAnalysis, NVIDIA will sell over one million H20 GPUs in China. That far outweighs the number of home-grown Huawei Ascend 910B accelerators Chinese companies plan to source, which stands at "only" 550,000 chips. While it is unclear whether Chinese semiconductor makers like SMIC are unable to produce more chips or whether demand simply isn't as high, we do know why NVIDIA's H20 is the primary target.

The Huawei Ascend 910B scores over 5,000 in Total Processing Performance (TPP), a metric devised by the US government to track GPU performance that multiplies TeraFLOPS by operand bit length, while the NVIDIA H20 comes in at 2,368 TPP, less than half of the Huawei accelerator. That is the on-paper performance; SemiAnalysis notes that real-world performance actually favors the H20 thanks to its better memory configuration, including higher HBM3 memory bandwidth. All of this makes it a better alternative to the Ascend 910B accelerator, accounting for the estimate of over one million GPUs shipped to China this year. At an average price of $12,000 per H20 GPU, that $12 billion in Chinese revenue will undoubtedly help raise NVIDIA's 2024 profits even further.
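As a quick illustration of the TPP arithmetic described above, the sketch below works backwards from the reported H20 figure; the implied FP8 throughput is a derived value, not one quoted in the article:

    def tpp(teraflops, bit_length):
        # Total Processing Performance as used in the US export rules: TFLOPS x operand bit length
        return teraflops * bit_length

    # Working backwards from the article's H20 figure of 2,368 TPP at 8-bit precision:
    implied_h20_fp8_tflops = 2368 / 8   # ~296 dense FP8 TFLOPS (inferred, not stated in the report)
    print(tpp(implied_h20_fp8_tflops, 8))   # 2368.0, versus "over 5,000" TPP for the Ascend 910B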

US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During its Vision 2024 event, Intel announced the Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new accelerator comes as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP or as an OAM module with a 900 W TDP. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite a 300 W lower TDP. The PCIe version works in groups of four per system, while the OAM HL-325L modules can be run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just tuned to a lower frequency. Built on TSMC's N5 5 nm node, the accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 chip pairs 128 GB of HBM2E offering 3.7 TB/s of bandwidth with 24 Ethernet NICs running at 200 Gbps each, plus dual 400 Gbps NICs for scale-out. All of that is laid out across the 10 tiles that make up the Gaudi 3 accelerator. There is 96 MB of SRAM split between the two compute tiles, acting as a cache that bridges data movement between the Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting, standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, positioned as a cost-effective alternative to NVIDIA accelerators with the added promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 whitepaper.
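The cluster figures above follow directly from multiplying the quoted per-node and per-card numbers; the sketch below simply tallies peak, not sustained, throughput:

    # Cluster sizing and aggregate peak compute from the figures quoted above
    accelerators_per_node = 8
    nodes = 1024
    cards = accelerators_per_node * nodes          # 8,192 cards, Intel's stated maximum cluster size

    peak_fp8_tflops_per_card = 1835
    cluster_peak_pflops = cards * peak_fp8_tflops_per_card / 1000
    print(f"{cards} cards, ~{cluster_peak_pflops:,.0f} PFLOPS of peak FP8 compute")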

China Pushes Adoption of Huawei's HarmonyOS to Replace Windows, iOS, and Android

According to ChinaScope, an effort is underway to strengthen the presence of Huawei's HarmonyOS platform. The local government of Shenzhen has unveiled an ambitious program aimed at supercharging the development of native applications for the operating system. The "Shenzhen Action Plan for Supporting the Development of Native HarmonyOS Open Source Applications in 2024" outlines several key goals to foster a more robust and competitive ecosystem around HarmonyOS. One primary objective is for Shenzhen-based HarmonyOS apps to account for over 10% of China's total by the end of 2024. To facilitate this, the city plans to establish at least two specialized industrial parks dedicated to HarmonyOS software development across various application domains.

Furthermore, the initiative calls for over 1,000 software companies in Shenzhen to obtain HarmonyOS development talent qualifications, underscoring the city's commitment to cultivating a skilled workforce for the platform. Perhaps most ambitiously, the action plan encourages eligible companies to ramp up their outsourcing services for HarmonyOS app development, with a lofty target of reaching 500,000 HarmonyOS developers. If achieved, this would represent a significant influx of developer talent focused on the platform. The Shenzhen government's push aligns with China's broader strategy to reduce reliance on foreign technologies and promote the adoption of domestic alternatives like HarmonyOS. While initially launched by Huawei as a workaround for U.S. sanctions, HarmonyOS has since expanded to power many devices, including smartphones, tablets, smartwatches, and TVs.

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach is expected to create an AI fast lane on future hardware. McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues—alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was (previously) announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory—McCurdy responded with: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

Tiny Corp. Prepping Separate AMD & NVIDIA GPU-based AI Compute Systems

George Hotz and his startup operation (Tiny Corporation) appeared ready to completely abandon AMD Radeon GPUs last week, after experiencing a period of firmware-related headaches. The original plan involved the development of a pre-orderable $15,000 TinyBox AI compute cluster that housed six XFX Speedster MERC310 RX 7900 XTX graphics cards, but software/driver issues prompted experimentation via alternative hardware routes. A lot of media coverage has focused on the unusual adoption of consumer-grade GPUs—Tiny Corp.'s struggles with RDNA 3 (rather than CDNA 3) were maneuvered further into public view, after top AMD brass pitched in.

The startup's social media feed is very transparent about showcasing everyday tasks, problem-solving and important decision-making. Several Acer Predator BiFrost Arc A770 OC cards were purchased and promptly integrated into a colorfully-lit TinyBox prototype, but Hotz & Co. swiftly moved on to Team Green pastures. Tiny Corp. has begrudgingly adopted NVIDIA GeForce RTX 4090 GPUs. Earlier today, it was announced that work on the AMD-based system has resumed—although customers were forewarned about anticipated teething problems. The surprising message arrived in the early hours: "a hard to find 'umr' repo has turned around the feasibility of the AMD TinyBox. It will be a journey, but it gives us an ability to debug. We're going to sell both, red for $15,000 and green for $25,000. When you realize your pre-order you'll choose your color. Website has been updated. If you like to tinker and feel pain, buy red. The driver still crashes the GPU and hangs sometimes, but we can work together to improve it."

Seasonic Releases Native 12V-2x6 (H++) Cables

Seasonic introduced a new 12V-2x6 modular PSU cable model late last year—at the time, interested parties were also invited to beta test early examples. Finalized versions have now been introduced via a freshly uploaded YouTube video and a dedicated product page. The "H++" connector standard—part of the new ATX 3.1 specification—is expected to replace the troubled "H+" 12VHPWR design. The PC hardware community has engaged in long-running debates about the development and rollout of a safer alternative. PCI-SIG drafted the 12V-2x6 design last summer.

Seasonic's introductory section stated: "with the arrival of the new ATX 3 / PCIe 5.0 specifications, some graphic cards will now be powered by the new 12V-2x6 connector. Offering up to 600 W of power, the Seasonic native 12V-2x6 cable has been crafted with high quality materials, such as high current terminal connectors and 16 AWG wires to ensure the highest performance and safety in usage." The new cables are compatible with Seasonic's current ATX 3.0 power supply unit range—including "PRIME TX, PRIME PX, VERTEX GX, PX, GX White and Sakura, FOCUS GX and GX White" models. Owners of older Seasonic ATX 2.0 PSUs are best served with an optional 2x8-pin to 12V-2x6 adapter cable—although 650 W rated and single PCIe connector-equipped units are not supported at all. Two native cable models and a non-native variant are advertised in the manufacturer's video.
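For a sense of what 600 W over this connector implies electrically, here is a back-of-the-envelope sketch; it assumes the connector's six 12 V contacts (the 2x6 power array splits into six 12 V pins and six grounds) share the load evenly, which real designs derate for contact resistance and imbalance:

    # Back-of-the-envelope current math for a 600 W load over a 12V-2x6 connector,
    # assuming its six 12 V contacts share the load evenly (ideal case, no derating)
    power_w = 600
    rail_v = 12
    pins_12v = 6

    total_current_a = power_w / rail_v
    per_pin_a = total_current_a / pins_12v
    print(f"{total_current_a:.0f} A total, ~{per_pin_a:.2f} A per 12 V contact")  # 50 A total, ~8.33 A per pin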

NVIDIA Prepared to Offer Custom Chip Designs to AI Clients

NVIDIA is reported to be setting up an AI-focused semi-custom chip design business unit, according to inside sources known to Reuters—it is believed that Team Green leadership is adapting to demands from key data-center customers. Many companies are seeking cheaper alternatives, or have devised their own designs (budget/war chest permitting), since NVIDIA's current range of AI GPUs consists of off-the-shelf solutions. OpenAI has generated the most industry noise—its alleged early 2024 fund-raising pursuits have attracted plenty of speculative and semi-serious interest from notable semiconductor personalities.

Team Green is seemingly reacting to emerging market trends—Jensen Huang (CEO, president and co-founder) has hinted that NVIDIA custom chip design services are on the horizon. Stephen Nellis—a Reuters reporter specializing in tech industry developments—has highlighted select quotes from the NVIDIA boss in an upcoming interview piece: "We're always open to do that. Usually, the customization, after some discussion, could fall into system reconfigurations or recompositions of systems." The Team Green chief teased that his engineering team is prepared to take on the challenge of meeting exact requests: "But if it's not possible to do that, we're more than happy to do a custom chip. And the benefit to the customer, as you can imagine, is really quite terrific. It allows them to extend our architecture with their know-how and their proprietary information." The rumored NVIDIA semi-custom chip design business unit could be introduced in an official capacity at next month's GTC 2024 conference.

Meta Anticipating Apple Vision Pro Launch - AR/VR Could Become Mainstream

Apple's Vision Pro mixed reality headset is due to launch on February 2—many rival companies in the AR/VR market will be taking notes once the slickly designed device (with a $3,499 starting price) reaches customers. The Wall Street Journal claims that the executive team at Meta is hopeful that Apple's headset carves out a larger space within a niche segment. Apple's "more experimental" products sometimes have surprising reach, although it may take a second (i.e., cheaper) iteration of the Vision Pro to reach a mainstream audience. Meta is reported to have invested around $50 billion into its Quest hardware and software development push—industry experts reckon that this product line generates only ~1% of the social media giant's total revenue.

Insider sources suggest that CEO Mark Zuckerberg and his leadership team are keen to see their big money "gamble" finally pay off—Apple's next release could boost global interest in mixed reality headsets. The Wall Street Journal states that Meta staffers "see the Quest and its software ecosystem emerging as a primary alternative to Apple in the space, filling the role played by Google's Android in smartphones." They hope that the Quest's relatively reasonable cost-of-entry will look a lot more attractive when compared to the premium Vision Pro. The report also shines a light on Meta's alleged push to focus more on mixed reality applications, since taking "inspiration" from Apple's WWDC23 presentation: "In addition, some developers are simplifying their apps and favor Apple's design that allows wearers to use their eyes and fingers to control or manipulate what they see. Meta's Quest primarily relies on the use of controllers for games or applications, although it can work with finger gestures."

GIGABYTE Preps EAGLE OC ICE GeForce RTX 40 Series Cards

GIGABYTE has augmented its EAGLE range of custom graphics cards with a new white "ICE" design—the Taiwanese hardware manufacturer's website has been updated with four GeForce RTX 40 Series models that sport a striking white aesthetic. The modern-day EAGLE custom cooling solution usually pops up in a sober yet classy gray shade (with some lighter accents)—it seems that alternatives, ideal for all-white hardware builds, will be arriving soon. GIGABYTE has not revealed any information regarding regional pricing or release dates, nor issued a press release (at the time of writing). Four factory-overclocked EAGLE ICE models are listed—starting at the bottom with a pale version of the existing RTX 4060 EAGLE OC 8 GB card, and reaching the ceiling with a white spin on the RTX 4070 Ti SUPER EAGLE OC 16 GB.

GIGABYTE has chosen not to produce an EAGLE graphics card that utilizes NVIDIA's GeForce RTX 4080 SUPER GPU, so an ICE version is unlikely to pop up in the immediate future. GIGABYTE's RTX 4080 SUPER AERO OC model seems to be the only option available in white within its current (Ada Lovelace) AD103-400-A1 GPU product range. VideoCardz reckons that there is a high probability of EAGLE ICE (non-overclocked) graphics cards arriving after the initial launch of OC models.

Apple announces changes to iOS, Safari, and the App Store in the European Union

Apple today announced changes to iOS, Safari, and the App Store impacting developers' apps in the European Union (EU) to comply with the Digital Markets Act (DMA). The changes include more than 600 new APIs, expanded app analytics, functionality for alternative browser engines, and options for processing app payments and distributing iOS apps. Across every change, Apple is introducing new safeguards that reduce—but don't eliminate—new risks the DMA poses to EU users. With these steps, Apple will continue to deliver the best, most secure experience possible for EU users.

The new options for processing payments and downloading apps on iOS open new avenues for malware, fraud and scams, illicit and harmful content, and other privacy and security threats. That's why Apple is introducing protections—including Notarization for iOS apps, an authorization for marketplace developers, and disclosures on alternative payments—to reduce risks and deliver the best, most secure experience possible for users in the EU. Even with these safeguards in place, many risks remain.

Chinese Vendors are Offering NVIDIA GeForce RTX 4080M and RTX 4090M as Desktop GPUs

According to a recent listing on Goofish discovered by VideoCardz, Chinese companies have begun selling mobile versions of NVIDIA's latest RTX 40-series GPUs as desktop graphics cards. Initially designed for gaming laptops, the GeForce RTX 4080M and RTX 4090M are now being marketed in China as more affordable alternatives to their official desktop counterparts. This development is no surprise to industry observers who recall similar adaptations with the RTX 20 and 30 series. These companies are leveraging the lower cost of mobile GPUs, combined with budget cooling solutions and simpler PCB designs, to offer more affordable desktop GPU options. The mobile GPUs, which are capped at a power consumption of 175 W, are being repurposed without official sanction, a practice NVIDIA seems content to ignore. Despite the lack of official endorsement, these modified GPUs are finding their way into the market, providing gamers with a cost-effective alternative to the more expensive desktop versions.

While not officially supported by NVIDIA, these cards pair the mobile GPU dies with custom cooling solutions and PCBs to work in desktop PCs. According to reports, the RTX 4080M desktop variant offers 7,424 CUDA cores and 12 GB of GDDR6 memory, representing a 24% reduction in cores and 4 GB less memory versus the desktop RTX 4080. The RTX 4090M is even more cut down, with 9,728 cores and 16 GB of memory—a 40% drop in cores and 8 GB less memory than the flagship RTX 4090 desktop card. Pricing falls between $420 and $560 for the RTX 4080M, while the 4090M variant exceeds the price of even the desktop RTX 4090. Performance and longevity for these unofficial cards still need to be determined. While they present a cheaper RTX 40-series option for Chinese gamers, the reduced specifications come with tradeoffs. Still, their availability indicates the ongoing demand for next-gen GPUs and the lengths to which some vendors will go to meet that demand.
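The quoted reductions can be sanity-checked against the published desktop core counts (9,728 CUDA cores for the RTX 4080 and 16,384 for the RTX 4090); a quick sketch:

    # Checking the quoted core-count reductions against the desktop parts
    desktop_cores = {"RTX 4080": 9728, "RTX 4090": 16384}

    def reduction(mobile, desktop):
        return 1 - mobile / desktop

    print(f"RTX 4080M vs RTX 4080: {reduction(7424, desktop_cores['RTX 4080']):.1%} fewer CUDA cores")  # ~23.7%
    print(f"RTX 4090M vs RTX 4090: {reduction(9728, desktop_cores['RTX 4090']):.1%} fewer CUDA cores")  # ~40.6%

The results land at roughly 24% and 41%, in line with the rounded figures above.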

Chinese Researchers Want to Make Wafer-Scale RISC-V Processors with up to 1,600 Cores

According to a report in the journal Fundamental Research, researchers from the Institute of Computing Technology at the Chinese Academy of Sciences have developed a 256-core multi-chiplet processor called Zhejiang Big Chip, with plans to scale up to 1,600 cores by utilizing an entire wafer. As transistor density gains slow, alternatives like multi-chiplet architectures become crucial for continued performance growth. The Zhejiang chip combines 16 chiplets, each holding 16 RISC-V cores, interconnected via a network-on-chip. The design can theoretically expand to 100 chiplets and 1,600 cores on an advanced 2.5D packaging interposer. While multi-chiplet designs are common today, using the whole wafer for one system would match Cerebras' breakthrough approach. The current chip is built on a 22 nm process, and the researchers cite exascale supercomputing as an ideal application for massively parallel multi-chiplet architectures.
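The scaling claim is simple multiplication of chiplet count by cores per chiplet; a minimal sketch of the arithmetic:

    # Core-count scaling of the Zhejiang Big Chip as described in the paper
    cores_per_chiplet = 16

    for chiplets in (16, 100):
        print(f"{chiplets} chiplets x {cores_per_chiplet} cores = {chiplets * cores_per_chiplet} cores")
    # 16 chiplets  -> 256 cores (the current design)
    # 100 chiplets -> 1,600 cores (the wafer-scale target)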

Careful software optimization is required to balance workloads across the system hierarchy. Integrating near-memory processing and 3D stacking could further optimize efficiency. The paper explores lithography and packaging limits, proposing hierarchical chiplet systems as a flexible path to future computing scale. While yield and cooling challenges need further work, the 256-core foundation demonstrates the potential of modular designs as an alternative to monolithic integration. China's focus mirrors multiple initiatives from American giants like AMD and Intel for data center CPUs, but national semiconductor ambitions add urgency to prove that domestically designed solutions can rival foreign innovation. Although performance details are unclear, the rapid progress shows promise in mastering modular chip integration. Combined with improving domestic nodes like SMIC's 7 nm process, China could create a viable exascale system in-house.

Intel Preparing Habana "Gaudi2C" SKU for the Chinese AI Market

Intel's software team has added support in its open-source Linux drivers for an unannounced Habana "Gaudi2C" AI accelerator variant. Little is documented about the mysterious Gaudi2C, which shares its core identity with Intel's flagship Gaudi2 data center training and inference chip that is otherwise broadly available. The new revision is distinguished only by a PCI ID of "3" in the latest patch set for Linux 6.8. Speculation circulates that Gaudi2C may be a version tailored to meet China-specific demands, similar to Intel's Gaudi2 HL-225B SKU launched in July with reduced interconnect links. With US export bans restricting sales of advanced hardware to China, including Intel's leading Gaudi2 products, creating reduced-capability spinoffs that meet export regulations lets Intel maintain crucial Chinese revenue.

Meanwhile, Intel's upstream Linux contributions remain focused on hardening Gaudi/Gaudi2 support, now considered "very stable" by lead driver developer Oded Gabbay. Minor new additions reflect maturity, not instability. The open-sourced foundations contrast with NVIDIA's proprietary driver model, a key competitive argument Intel makes to developers building services on Habana Labs hardware. With the SynapseAI software suite reaching stability, some enterprises could consider Gaudi accelerators as an alternative to NVIDIA. And with Gaudi3 arriving next year, the ecosystem should gain a stronger competitive position thanks to higher performance targets.

Zhaoxin Launches KX-7000 Desktop 8-Core x86 Processor to Power China's Ambitions

After years of delays, Chinese chipmaker Zhaoxin has finally launched its long-awaited KX-7000 series consumer CPUs, the only chips of their kind in China based on the licensed x86-64 ISA. Zhaoxin claims the new 8-core processors, built on the "Century Avenue" microarchitecture, deliver double the performance of previous generations. Leveraging architectural improvements and four times more cache, the KX-7000 represents essential progress for China's domestic semiconductor industry. While still likely lagging behind rival AMD and Intel chips in raw speed, the KX-7000 matches competitive specs in areas like DDR5 memory, PCIe 4.0, and USB4 support. For Chinese efforts to attain technological independence, closing feature gaps with foreign processors is just as crucial as boosting performance. Manufactured on a 16 nm process, the KX-7000 does not use the best silicon node available.

Other chip details include out-of-order execution (OoOE), 24 PCIe 4.0 lanes, a 32 MB pool of L3 cache and 4 MB of L2 cache, a base frequency of 3.2 GHz, and a boost clock of 3.7 GHz. Interestingly, the CPU also supports VT-x, BT-d 2.5, and SSE4.2/AVX/AVX2, most likely also licensed from the x86 makers Intel and/or AMD. Ultimately, surpassing Western processors is secondary for China next to attaining self-reliance. Instructions like SM encryption, which cater to domestic data-protection priorities, underscore how the KX-7000 advances strategic autonomy goals. With an x86 architecture license providing software compatibility and now a vastly upgraded platform, the KX-7000 will raise China's chip capabilities even if it still trails rivals' speeds. Ongoing progress in closing that performance gap could position Zhaoxin as a mainstream alternative for local PC builders and buyers.

YMTC Spent 7 Billion US Dollars to Overcome US Sanctions, Now Plans Another Investment

Yangtze Memory Technologies Corp (YMTC), China's biggest NAND flash memory manufacturer, has successfully raised billions of US dollars in new capital to adapt to challenging US restrictions. According to a report from the Financial Times, YMTC, which was added to a trade blacklist in December and barred from procuring US equipment to manufacture chips, exceeded its funding target; however, the exact amount remains undisclosed. The capital increase became necessary due to YMTC's substantial spending on finding alternative equipment and developing new components and core chipmaking tools. The financing round was oversubscribed by domestic investors, reflecting support for YMTC amid tightening US restrictions.

Last year, YMTC managed to raise 50 billion Chinese yuan, or about 7 billion US dollars, for equipment. Having spent it all on the supply chain, the company is now looking to bolster its memory facilities with additional equipment. One of the investors in YMTC's funding rally gave a statement to the Financial Times: "If Chinese companies have equipment that can be used, [YMTC] will use it. If not, it will see if countries other than the US can sell to it. If that doesn't work, YMTC will develop it together with the supplier." This statement indicates that the company is considering several options, one of which is simply developing custom machinery together with its suppliers.

Intel Itanium Reaches End of the Road with Linux Kernel Stopping Updates

Today marks the end of support for Itanium's IA-64 architecture in the Linux kernel's 6.7 update—a significant milestone in the winding-down saga of Intel Itanium. Itanium, initially Intel's ambitious venture into 64-bit computing, faced challenges and struggled throughout its existence. It was jointly developed by Intel and HP but encountered delays and lacked compatibility with x86 software, a significant obstacle to its adoption. When AMD introduced x86-64 (AMD64) with its Opteron CPUs, which could run x86 software natively, Intel was compelled to move Xeon to x86-64 technology as well, leaving Itanium to fade into the background.

Despite ongoing efforts to sustain Itanium, it no longer received annual CPU product updates, with the last one arriving in 2017. The removal of IA-64 support from the Linux kernel will have a substantial impact, since Linux is an essential operating system for Itanium CPUs. Without ongoing updates, the usability of Itanium servers will inevitably decline, pushing the (few) remaining Itanium users, most of whom are likely looking to modernize their stacks anyway, to migrate to alternative solutions.

GEEKOM MiniPCs Presented as Alternatives to NUC Systems

Intel initiated its Next Unit of Computing (NUC) line of mini PCs in 2012, with a vision of making PC systems small enough to fit into the palm of your hand, yet powerful enough to handle day-to-day desktop computing. Although several NUC models are still best-sellers in the market, Intel chose to step away from the business.

As disappointing as Intel's exit from the market is, it's comforting to know that GEEKOM, a multinational consumer electronics company, promises to keep Intel's vision of compact computing alive. In fact, GEEKOM's Mini IT series mini PCs have long been considered among the best alternatives to the Intel NUC Pros.

IEEE 802.11bb Global Light Communications Standard Released

Global LiFi technology firms pureLiFi and Fraunhofer HHI welcome the release of IEEE 802.11bb as the latest global light communications standard alongside IEEE 802.11 WiFi standards. The bb standard marks a significant milestone for the LiFi market, as it provides a globally recognised framework for deployment of LiFi technology.

LiFi is a wireless technology that uses light rather than radio frequencies to transmit data. By harnessing the light spectrum, LiFi can unleash faster, more reliable wireless communications with unparalleled security compared to conventional technologies such as WiFi and 5G. The Light Communications 802.11bb Task Group was formed in 2018, chaired by pureLiFi and supported by Fraunhofer HHI, two firms which have been at the forefront of LiFi development efforts. Both organisations aim to see accelerated adoption and interoperability not only between LiFi vendors but also with WiFi technologies as a result of these standardisation efforts.

SilverStone Commemorates 20 Years With Fresh Case Designs at Computex 2023

SilverStone showed up to Computex with a variety of new cases on offer, covering everything from a hybrid rack-mountable/pedestal 5U, to a retro-inspired April Fool's joke, to its largest and flashiest ATX full-tower to date. SilverStone has a reputation for quality and over-engineering, and it clearly shows in this year's selection. Starting off with the most engineered, and likely most expensive: the recently released Alta F2 was on display with a fully water-cooled build showing off the expansive options available in the giant tower. The Alta F2 utilizes a 90-degree rotated configuration, taking design elements from the older Raven and Fortress series of cases. What the Alta F2 does differently is where it allows the GPU to be installed; along the left side (traditionally the back) there are three expansion slots tilted at precisely 11.3 degrees.

These tilted slots are intended to offer as much forced-air cooling as possible to the GPU from the bottom-mounted 180 mm Air Penetrator 184i PRO fans, and SilverStone dialed in this angle to decimal precision to ensure it. There is still a standard 9-slot layout along the top of the case (remember, everything is rotated 90 degrees, so that's where the "back" is) for those who fill every slot on their motherboard. Since airflow moves bottom-to-top, there are intake filters lining the lower chamber of the chassis, and the top panel is almost entirely open for exhaust. The lower chamber is also set up to allow four 3.5-inch drives to be mounted, while four more drives can be mounted in the optional forward-mounted, and also angled, drive cages. The Alta F2 has already reached retail and demands a hefty $800 USD price point.

Enablement Continues for Chinese Loongson 3A6000 CPUs Poised to Compete with Intel Willow Cove and AMD Zen 3

Chinese company Loongson, which specializes in creating processors for use in mainland China, has been steadily working on software enablement for its next-generation Loongson 3A6000 CPUs. Aiming to provide the performance level of Intel Willow Cove and AMD Zen 3, these new CPUs will use Loongson's custom LoongArch Instruction Set Architecture (ISA) with a new set of 64-bit superscalar LA664 cores. Today, thanks to a report from Phoronix, we have learned that Loongson has submitted Linux patches that enable the upcoming 3A6000 CPUs to work with Linux-based operating systems at launch. Interestingly, as the new CPU generation gets closer to launch, more Linux kernel patches are beginning to surface.

Today's kernel patches focus on supporting the hardware page table walker (PTW). Since the PTW handles all fast paths of TLBI/TLBL/TLBS/TLBM exceptions in hardware, software only needs to handle slow paths such as page faults. Additionally, LoongArch previously used "dbar 0" as a complete barrier for all operations; however, this full completion barrier severely impacted performance. As a result, the Loongson 3A6000 and subsequent processors introduce various alternative barrier hints. Loongson plans to ship samples to select customers in the first half of 2023, so we could see more information surface soon.
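To make the fast-path/slow-path split concrete, here is a purely conceptual Python sketch of address translation; it is an illustration of the idea, not LoongArch or kernel code, and all names in it are hypothetical:

    # Conceptual illustration only: with a hardware page table walker, TLB refills on a
    # miss are handled in hardware; the OS is only involved for genuine slow paths
    # such as page faults.
    PAGE_SIZE = 4096

    def translate(vaddr, tlb, page_table):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in tlb:                        # TLB hit: no exception at all
            return tlb[vpn] * PAGE_SIZE + offset
        if vpn in page_table:                 # TLB miss with a valid mapping:
            tlb[vpn] = page_table[vpn]        #   the hardware PTW refills the TLB (fast path)
            return tlb[vpn] * PAGE_SIZE + offset
        raise RuntimeError("page fault")      # only this slow path reaches the software handler

    page_table = {0x10: 0x80}                 # virtual page 0x10 mapped to physical frame 0x80
    tlb = {}
    print(hex(translate(0x10000 + 0x42, tlb, page_table)))   # 0x80042, refilled then translated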

Alibaba Developing an Equivalent to ChatGPT

Last Tuesday, Alibaba announced its intentions to put out its own artificial intelligence (AI) chatbot product called Tongyi Qianwen - another rival to take on OpenAI's pioneering ChatGPT natural language processing tool. The Chinese technology giant is hoping to retrofit the new chatbot system into several arms of its business operations. Alibaba had revealed initial plans for chatbot integration earlier this year, and mentioned that it was providing an alternative to the already well established ChatGPT tool. Alibaba's workplace messaging application - DingTalk - is slated to receive the first AI-powered update in the near future, although the company did not provide a firm timeline for Tongyi Qianwen's release window.

The product name "Tongyi Qianwen" loosely translates to "seeking an answer by asking a thousand questions" - Alibaba did not provide an official English language translation at last week's press conference. Their chatbot is reported to function in both Mandarin and English language modes. Advanced AI voice recognition is set for usage in the Tmall Genie range of smart speakers (similar in function to the Amazon Echo). Alibaba expects to expand Tongyi Qianwen's reach into applications relating to e-commerce and mapping services.

Google Bard Chatbot Trial Launches in USA and UK

Today we're starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the U.S. and the U.K., and will expand to more countries and languages over time. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We've learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.