News Posts matching #Tech


GlobalFoundries and Biden-Harris Administration Announce CHIPS and Science Act Funding for Essential Chip Manufacturing

The U.S. Department of Commerce today announced $1.5 billion in planned direct funding for GlobalFoundries (Nasdaq: GFS) (GF) as part of the U.S. CHIPS and Science Act. This investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets.

New York-headquartered GF, celebrating its 15th year of operations, is the only U.S.-based pure-play foundry with a global manufacturing footprint, including facilities in the U.S., Europe, and Singapore. GF is the first pure-play semiconductor foundry to receive a major award (over $1.5 billion) from the CHIPS and Science Act, which is designed to strengthen American semiconductor manufacturing, supply chains, and national security. The proposed funding will support three GF projects.

Intel Foundry Services Gets 18A Order: Arm-based 64-Core Neoverse SoC

Faraday Technology Corporation, a Taiwanese silicon IP designer, has announced plans to develop a new 64-core system-on-chip (SoC) utilizing Intel's most advanced 18A process technology. The Arm-based SoC will integrate Arm Neoverse compute subsystems (CSS) to deliver high performance and efficiency for data centers, infrastructure edge, and 5G networks. This collaboration brings together Faraday, Arm, and Intel Foundry Services. Faraday will leverage its ASIC design and IP solutions expertise to build the SoC. Arm will provide the Neoverse compute subsystem IP to enable scalable computing. Intel Foundry Services will manufacture the chip using its cutting-edge 18A process, which delivers best-in-class transistor performance.

The new 64-core SoC will be a key component of Faraday's upcoming SoC evaluation platform. This platform aims to accelerate customer development of data center servers, high-performance computing ASICs, and custom SoCs. The platform will also incorporate interface IPs from the Arm Total Design ecosystem for complete implementation and verification. Both Arm and Intel Foundry Services expressed excitement about working with Faraday on this advanced Arm-based custom silicon project. "We're thrilled to see industry leaders like Faraday and Intel on the cutting edge of Arm-based custom silicon development," said an Arm spokesperson. Intel SVP Stuart Pann said, "We are pleased to work with Faraday in the development of the SoC based on Arm Neoverse CSS utilizing our most competitive Intel 18A process technology." The collaboration represents Faraday's strategic focus on leading-edge technologies to meet evolving application requirements. With its extensive silicon IP portfolio and design capabilities, Faraday wants to deliver innovative solutions and break into next-generation computing design.

Avegant announces breakthrough Spotlight display technology for Augmented Reality

Avegant announces Spotlight display technology, the world's first adaptive LED illumination architecture for LCoS. By dynamically illuminating regions of an LCoS microdisplay, Avegant exploits the benefits of APL (average pixel level) to achieve illumination power savings of up to 90% while simultaneously improving contrast by over 10x. Spotlight display technology is integrated alongside Avegant's existing products, retaining the advantages of very small, mature light engines that deliver efficient, full-color, high-resolution, polarized output. Avegant is collaborating with Applied Materials, Inc. and Lumileds on advanced technologies for Augmented Reality.

"Building on the success of the AG-30L2, we are delighted to announce our new Spotlight display technology. This will usher in a new level of performance for LCoS light engines, enabling efficiency advantages of APL and higher contrast, while maintaining the manufacturability, small-size, and full color high resolution displays that we have come to know from LCoS." says Ed Tang, CEO and Founder of Avegant. "We're excited to be working alongside amazing partners to enable a new level of system performance for the ecosystem."

Google Faces Potential Billion-Dollar Damages in TPU Patent Dispute

Tech giant Google is embroiled in a high-stakes legal battle over the alleged infringement of patents related to its Tensor Processing Units (TPUs), custom AI accelerator chips used to power machine learning applications. Massachusetts-based startup Singular Computing has accused Google of incorporating architectures described in several of its patents into the design of the TPU without permission. The disputed patents, first filed in 2009, outline computer architectures optimized for executing a high volume of low-precision calculations per cycle - an approach well-suited for neural network-based AI. In a 2019 lawsuit, Singular argues that Google knowingly infringed on these patents in developing its TPU v2 and TPU v3 chips introduced in 2017 and 2018. Singular Computing is seeking between $1.6 billion and $5.19 billion in damages from Google.

Google denies these claims, stating that its TPUs were independently developed over many years. The company is currently appealing to have Singular's patents invalidated, which would undermine the infringement allegations. The high-profile case highlights mounting legal tensions as tech giants race to dominate the burgeoning field of AI hardware. With billions in potential damages at stake, the outcome could have major implications for the competitive landscape in cloud-based machine learning services. As both sides prepare for court, the dispute underscores the massive investments tech leaders like Google make to integrate specialized AI accelerators into their cloud infrastructures. Dominance in this sphere is a crucial strategic advantage as more industries embrace data-hungry neural network applications.

Update 17:25 UTC: According to Reuters, Google and Singular Computing have settled the case with details remaining private for the time being.

Intel 15th-Generation Arrow Lake-S Could Abandon Hyper-Threading Technology

Leaked Intel documentation we reported on a few days ago covered the Arrow Lake-S platform and some implementation details. However, there was an interesting catch in the file: the document indicates that the upcoming 15th-Generation Arrow Lake desktop CPUs could lack Hyper-Threading (HT) support. The technical memo lists Arrow Lake's expected eight performance cores without any threads enabled via SMT, which aligns with previous rumors of Hyper-Threading's removal. Losing Hyper-Threading could significantly impact Arrow Lake's multi-threaded application performance versus its Raptor Lake predecessors. Estimates suggest HT provides a 10-15% speedup across heavily threaded workloads by enabling logical cores. For gaming, however, disabling HT has negligible impact and can even boost FPS in some titles, so Arrow Lake may still hit Intel's rumored 30% gaming performance targets through architectural improvements alone.
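As a rough back-of-envelope illustration of the estimates above, the multi-threaded throughput a chip gives up by dropping HT can be sketched as follows. The 10-15% uplift figures come from the estimates cited; the model itself is a simplification and not an Intel figure:

```python
# Illustrative model: if SMT adds `ht_uplift` extra throughput on top of
# a physical core, then a core without SMT delivers 1 / (1 + ht_uplift)
# of the SMT-enabled core's multi-threaded throughput.

def relative_mt_throughput(ht_uplift: float) -> float:
    """Throughput of a no-HT core relative to the same core with HT,
    assuming HT contributes `ht_uplift` extra throughput (e.g. 0.10)."""
    return 1.0 / (1.0 + ht_uplift)

for uplift in (0.10, 0.15):
    loss = 1.0 - relative_mt_throughput(uplift)
    print(f"HT uplift {uplift:.0%}: removing HT costs ~{loss:.1%} MT throughput")
```

Under this simple model, the quoted 10-15% HT uplift translates to roughly a 9-13% multi-threaded deficit, which is the gap the rumored Rentable Units approach and extra E-cores would need to close.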

However, a replacement for the traditional HT is likely to come in the form of Rentable Units. This new approach is a response to the adoption of a hybrid core architecture, which has seen an increase in applications leveraging low-power E-cores for enhanced performance and efficiency. Rentable Units are a more efficient pseudo-multi-threaded solution that splits the first thread of incoming instructions into two partitions, assigning them to different cores based on complexity. Rentable Units will use timers and counters to measure P/E core utilization and send parts of the thread to each core for processing. This inherently requires larger cache sizes, where Arrow Lake is rumored to have 3 MB of L2 cache per core. Arrow Lake is also noted to support faster DDR5-6400 memory. But between higher clocks, more E-cores, and various core architecture updates, raw throughput metrics may not change much without Hyper-Threading.

Synopsys Expands Its ARC Processor IP Portfolio with New RISC-V Family

Synopsys, Inc. (Nasdaq: SNPS) today announced it has extended its ARC Processor IP portfolio to include new RISC-V ARC-V Processor IP, enabling customers to choose from a broad range of flexible, extensible processor options that deliver optimal power-performance efficiency for their target applications. Synopsys leveraged decades of processor IP and software development toolkit experience to develop the new ARC-V Processor IP that is built on the proven microarchitecture of Synopsys' existing ARC Processors, with the added benefit of the expanding RISC-V software ecosystem.

Synopsys ARC-V Processor IP includes high-performance, mid-range, and ultra-low power options, as well as functional safety versions, to address a broad range of application workloads. To accelerate software development, the Synopsys ARC-V Processor IP is supported by the robust and proven Synopsys MetaWare Development Toolkit that generates highly efficient code. In addition, the Synopsys.ai full-stack AI-driven EDA suite is co-optimized with ARC-V Processor IP to provide an out-of-the-box development and verification environment that helps boost productivity and quality-of-results for ARC-V-based SoCs.

AMD FidelityFX Super Resolution Could Come to Samsung and Qualcomm SoCs

AMD FidelityFX Super Resolution (FSR) is an open-source upscaling technology that takes lower-resolution input and combines temporal super-resolution upscaling, frame generation via AMD Fluid Motion Frames (AFMF), and built-in latency reduction to produce higher-resolution output images from lower-resolution render settings. While the technology is open source, it competes for market share with NVIDIA's Deep Learning Super Sampling (DLSS). In the mobile space, however, there hasn't been much talk about upscaling technology until now. According to popular leaker @Tech_Reve on X/Twitter, AMD is collaborating with Samsung and Qualcomm to standardize upscaling technology implementations in mobile SoCs.

Not only does the leak imply that AMD FSR technology will be used in Samsung's upcoming Exynos SoC, but some AMD ray tracing will be present as well. The leaker also mentioned Qualcomm, which suggests that future iterations of Snapdragon are set to adopt FSR's algorithmic approach to resolution upscaling. We will see how and when, but with mobile games growing in size and demands, FSR could come in handy to provide mobile gamers with a better experience. This primarily targets Android devices, which Qualcomm supplies; on the Apple side, the company recently announced MetalFX upscaling for the iPhone with its A17 Pro chip.

Samsung Electronics Holds Memory Tech Day 2023 Unveiling New Innovations To Lead the Hyperscale AI Era

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today held its annual Memory Tech Day, showcasing industry-first innovations and new memory products to accelerate technological advancements across future applications—including the cloud, edge devices and automotive.

Attended by about 600 customers, partners and industry experts, the event served as a platform for Samsung executives to expand on the company's vision for "Memory Reimagined," covering long-term plans to continue its memory technology leadership, outlook on market trends and sustainability goals. The company also presented new product innovations such as the HBM3E Shinebolt, LPDDR5X CAMM2 and Detachable AutoSSD.

Ryan Shrout Announces Departure from Intel

The corporate world of semiconductor companies has seen some restructuring lately, and today, we learn that Ryan Shrout is leaving Intel. Most recently serving as Senior Director of Client Segment Strategy, CCG at the Graphics and AI Group, Ryan Shrout joined Intel over four years ago. Mr. Shrout started at the company as Chief Performance Strategist in 2018, and later, in 2020, he changed his role to Senior Director of Technical and Competitive Client Marketing. After that, he moved to the role of Senior Director of Gaming, Graphics, and HPC Marketing (AXG Marketing). His most recent role was relatively short-lived, lasting only three months.

Many people in the community may know Mr. Shrout from his time at PC Perspective, which he founded in 1999 and ran as Editor-in-Chief. Today, however, on his Twitter/X profile, we learned that he is departing Intel to take on another role. "Fall is a season for change! Yesterday was my last day at Intel. I'm going to take a couple weeks with the family then I'm excited to talk about what's next," the post stated. With such extensive experience in the PC industry, we are eager to see where Mr. Shrout lands next.

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

NVIDIA CEO Meets with India Prime Minister Narendra Modi

Underscoring NVIDIA's growing relationship with the global technology superpower, Indian Prime Minister Narendra Modi met with NVIDIA founder and CEO Jensen Huang Monday evening. The meeting at 7 Lok Kalyan Marg—as the Prime Minister's official residence in New Delhi is known—comes as Modi prepares to host a gathering of leaders from the G20 group of the world's largest economies, including U.S. President Joe Biden, later this week.

"Had an excellent meeting with Mr. Jensen Huang, the CEO of NVIDIA," Modi said in a social media post. "We talked at length about the rich potential India offers in the world of AI." The event marks the second meeting between Modi and Huang, highlighting NVIDIA's role in the country's fast-growing technology industry.

New AI Accelerator Chips Boost HBM3 and HBM3e to Dominate 2024 Market

TrendForce reports that the HBM (High Bandwidth Memory) market's dominant product for 2023 is HBM2e, employed by the NVIDIA A100/A800, AMD MI200, and most CSPs' (Cloud Service Providers) self-developed accelerator chips. As the demand for AI accelerator chips evolves, manufacturers plan to introduce new HBM3e products in 2024, with HBM3 and HBM3e expected to become mainstream in the market next year.

The distinctions between HBM generations primarily lie in their speed. The industry experienced a proliferation of confusing names when transitioning to the HBM3 generation. TrendForce clarifies that the so-called HBM3 in the current market should be subdivided into two categories based on speed. One category includes HBM3 running at speeds between 5.6 to 6.4 Gbps, while the other features the 8 Gbps HBM3e, which also goes by several names including HBM3P, HBM3A, HBM3+, and HBM3 Gen2.
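The speed-based naming split TrendForce describes can be captured in a small helper. This is purely illustrative; the function name is our own, and the thresholds are taken directly from the figures quoted above:

```python
# Illustrative classifier for TrendForce's speed-based HBM3 naming:
# parts running 5.6-6.4 Gbps are "true" HBM3, while 8 Gbps-class parts
# are HBM3e (also marketed as HBM3P, HBM3A, HBM3+, or HBM3 Gen2).

def classify_hbm3_generation(pin_speed_gbps: float) -> str:
    if 5.6 <= pin_speed_gbps <= 6.4:
        return "HBM3"
    if pin_speed_gbps >= 8.0:
        return "HBM3e (aka HBM3P / HBM3A / HBM3+ / HBM3 Gen2)"
    return "unclassified"

print(classify_hbm3_generation(6.4))
print(classify_hbm3_generation(8.0))
```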

TSMC Inaugurates Global R&D Center, Celebrating Its Newest Hub for Technology Innovation

TSMC today held an inauguration ceremony for its global Research and Development Center in Hsinchu, Taiwan, celebrating the Company's newest hub for bringing the next generations of semiconductor technology into reality with customers, R&D partners in industry and academia, design ecosystem partners, and senior government leaders.

The R&D Center will serve as the new home for TSMC's R&D Organization, including the researchers who will develop TSMC's leading-edge process technology at the 2-nanometer generation and beyond, as well as scientists and scholars blazing the trail with exploratory research into fields such as novel materials and transistor structures. With R&D employees already relocating to their workplaces in the new building, it will be ready for its full complement of more than 7,000 staff by September 2023.

Silicon Motion Announces Results for the Period Ended June 30, 2023 and an Acquisition Update

Silicon Motion Technology Corporation ("Silicon Motion", the "Company" or "we") today announced its financial results for the quarter ended June 30, 2023. For the second quarter of 2023, net sales (GAAP) increased sequentially to $140.4 million from $124.1 million in the first quarter of 2023. Net income (GAAP) increased to $11.0 million, or $0.33 per diluted American Depositary Share of the Company ("ADS") (GAAP), from net income (GAAP) of $10.2 million, or $0.30 per diluted ADS (GAAP), in the first quarter of 2023.

For the second quarter of 2023, net income (non-GAAP) increased to $12.6 million, or $0.38 per diluted ADS (non-GAAP), from net income (non-GAAP) of $11.2 million, or $0.33 per diluted ADS (non-GAAP), in the first quarter of 2023.

Intel Tech Helping Design Prototype Fusion Power Plant

What's New: As part of a collaboration with Intel and Dell Technologies, the United Kingdom Atomic Energy Authority (UKAEA) and the Cambridge Open Zettascale Lab plan to build a "digital twin" of the Spherical Tokamak for Energy Production (STEP) prototype fusion power plant. The UKAEA will utilize the lab's supercomputer based on Intel technologies, including 4th Gen Intel Xeon Scalable processors, distributed asynchronous object storage (DAOS) and oneAPI tools to streamline the development and delivery of fusion energy to the grid in the 2040s.

"Planning for the commercialization of fusion power requires organizations like UKAEA to utilize extreme amounts of computational resources and artificial intelligence for simulations. These HPC workloads may be performed using a variety of different architectures, which is why open software solutions that optimize performance needs can lend portability to code that isn't available in closed, proprietary systems. Overall, advanced hardware and software can make the journey to commercial fusion power lower risk and accelerated - a key benefit on the path to sustainable energy."—Adam Roe, Intel EMEA HPC technical director

IBM Study Finds That CEOs are Embracing Generative AI

A new global study by the IBM Institute for Business Value found that nearly half of CEOs surveyed identify productivity as their highest business priority—up from sixth place in 2022. They recognize technology modernization is key to achieving their productivity goals, ranking it as second highest priority. Yet, CEOs can face key barriers as they race to modernize and adopt new technologies like generative AI.

The annual CEO study, CEO decision-making in the age of AI, Act with intention, found three-quarters of CEO respondents believe that competitive advantage will depend on who has the most advanced generative AI. However, executives are also weighing potential risks or barriers of the technology such as bias, ethics and security. More than half (57%) of CEOs surveyed are concerned about data security and 48% worry about bias or data accuracy.

U.S. Administration Outlines Plan to Strengthen Semiconductor Supply Chains

Today, the U.S. Department of Commerce shared the Biden-Harris Administration's strategic vision to strengthen the semiconductor supply chain through CHIPS for America investments. To advance this vision, the Department announced a funding opportunity and application process for large semiconductor supply chain projects and will release later in the fall a separate process for smaller projects. Large semiconductor supply chain projects include materials and manufacturing equipment facility projects with capital investments equal to or exceeding $300 million, and smaller projects are below that threshold.

The announcement leads into the Biden-Harris Administration's Investing in America tour, where Secretary Raimondo and leaders in the Administration will fan across more than 20 states to highlight investments, jobs, and economic opportunity driven by President Biden's Investing in America agenda and the historic legislation he's passed in his first two years in office, including the bipartisan CHIPS and Science Act.

AVerMedia Introduces Live Streamer MIC 350: Elevate Your Audio Experience with DIRAC Custom Tuning

AVerMedia Technologies, a leading provider of audio and video solutions, is thrilled to unveil the AM350, the world's first USB condenser microphone custom-tuned by DIRAC. With its exclusive tuning technology and remarkable features, the AM350 sets a new standard in audio recording, empowering podcasters, vocal performers, singers, and other creators to achieve exceptional sound quality and captivating performances.

The AM350 features high-sensitivity capsules and durable metal housing, allowing detailed audio capture and enhanced robustness. The AM350 also offers ultra-low noise performance and includes a built-in pop filter, ensuring clear and clean audio capture. Its plug-and-play USB interface makes it accessible for users of all levels, providing a professional recording experience without the need for additional equipment.

Intel's New Chip to Advance Silicon Spin Qubit Research for Quantum Computing

Today, Intel announced the release of its newest quantum research chip, Tunnel Falls, a 12-qubit silicon chip, and it is making the chip available to the quantum research community. In addition, Intel is collaborating with the Laboratory for Physical Sciences (LPS) at the University of Maryland, College Park's Qubit Collaboratory (LQC), a national-level Quantum Information Sciences (QIS) Research Center, to advance quantum computing research.

"Tunnel Falls is Intel's most advanced silicon spin qubit chip to date and draws upon the company's decades of transistor design and manufacturing expertise. The release of the new chip is the next step in Intel's long-term strategy to build a full-stack commercial quantum computing system. While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development."—Jim Clarke, director of Quantum Hardware, Intel

Samsung to Detail SF4X Process for High-Performance Chips

Samsung has invested heavily in semiconductor manufacturing technology to provide clients with a viable alternative to TSMC and its portfolio of nodes spanning everything from mobile to high-performance computing (HPC) applications. Today, we have information that Samsung will present its SF4X node to the public at this year's VLSI Symposium. Previously known as 4HPC, it is a 4 nm-class node specialized for HPC processors, in contrast to the standard SF4 (4LPP) node, which uses 4 nm transistors designed for the low-power requirements of the mobile/laptop space. According to the VLSI Symposium schedule, Samsung is set to present more info in the paper titled "Highly Reliable/Manufacturable 4nm FinFET Platform Technology (SF4X) for HPC Application with Dual-CPP/HP-HD Standard Cells."

As the brief introduction notes, "In this paper, the most upgraded 4nm (SF4X) ensuring HPC application was successfully demonstrated. Key features are (1) Significant performance +10% boosting with Power -23% reduction via advanced SD stress engineering, Transistor level DTCO (T-DTCO) and [middle-of-line] MOL scheme, (2) New HPC options: Ultra-Low-Vt device (ULVT), high speed SRAM and high Vdd operation guarantee with a newly developed MOL scheme. SF4X enhancement has been proved by a product to bring CPU Vmin reduction -60mV / IDDQ -10% variation reduction together with improved SRAM process margin. Moreover, to secure high Vdd operation, Contact-Gate breakdown voltage is improved by >1V without Performance degradation. This SF4X technology provides a tremendous performance benefits for various applications in a wide operation range." While the abstract does not state the baseline for these claims, it is likely the regular SF4 node. More performance figures and an in-depth look will be available on Thursday, June 15, at Technology Session 16 of the symposium.

Intel to Demonstrate PowerVia on E-Core Processor Built with Intel 4 Node

At VLSI Symposium 2023, scheduled to take place between June 11-16, Intel is set to demonstrate its PowerVia technology working efficiently on an E-Core chip built using the Intel 4 node. Conventional chips have power and signal interconnects distributed across multiple metal layers. PowerVia, on the other hand, dedicates specific layers to power delivery, effectively separating them from the signal routing layers. This approach allows for vertical power delivery through a set of power-specific Through-Silicon Vias (TSVs), or PowerVias, which are essentially vertical connections between the top and bottom surfaces of the chip. By delivering power directly from the backside of the chip, PowerVia reduces power supply noise and resistive losses, optimizing power distribution and improving overall energy efficiency. PowerVia is set to make its debut in 2024 with the Intel 20A node.

For its VLSI Symposium 2023 talk, the company has prepared a paper that highlights a test chip made using Intel 4 technology and implementing E-Cores only. The document states: "PowerVia Technology is a novel innovation to extend Process Scaling by having Power Delivery on the backside. This paper presents the pre and post silicon findings from implementing an Intel E-Core in PowerVia Technology. PowerVia enabled standard cell utilization of greater than 90 percent in large areas of the core while showing greater than 5 percent frequency benefit in silicon due reduced IR drop. Successful Post silicon debug is demonstrated with slightly higher but acceptable throughput times. The thermal characteristics of the PowerVia testchip is inline with higher power densities expected from logic scaling."

GlobalFoundries Files Lawsuit Against IBM to Protect its Intellectual Property and Trade Secrets

GlobalFoundries (GF) today sued IBM for trade secret misappropriation. The complaint asserts that IBM, which sold its microelectronics business to GF in 2015, has unlawfully disclosed GF's confidential IP and trade secrets. The technology at issue was collaboratively developed by the two companies over decades in Albany, New York, and the sole and exclusive right to license and disclose that technology was transferred to GF upon the sale.

In the legal action filed in federal court in the Southern District of New York, GF asserts that IBM unlawfully disclosed GF IP and trade secrets to IBM partners including Intel and Japan's Rapidus, a newly formed advanced logic foundry, and by doing so, IBM is unjustly receiving potentially hundreds of millions of dollars in licensing income and other benefits.

Report Suggests Samsung and LG Pushing Wider Adoption of LED Wall Displays at Cinemas

Samsung and LG are among a number of tech companies reportedly pushing for radical changes in the cinema viewing experience. In a piece published by the Hollywood Reporter last week, new behind-the-scenes information has come to light about an effort to replace the (some will say tried-and-true) traditional cinema projection system with LED walls. The vast majority of international theater chains rely on front projection from a booth at the back of the auditorium, and very few locations have a more state-of-the-art LED display-based system in place. The Culver Theater (naturally located in Culver City, CA) is one of a hundred cinemas worldwide to possess a Samsung Onyx LED display - although the tech on show is said to be of an older standard. Industry insiders have been invited to attend demonstrations of a newer generation of LED wall technology destined for cinemas, and early impressions are purported to be mixed.

A cinema LED wall functions in a similar way to a modern LED-based flat-screen TV - although at a much larger scale - with the technology's particular benefits being outstanding high dynamic range and peak brightness. The main downside of a tightly packed array of large LED panels is the resultant heat output; critics of the technology state that it will be difficult to implement adequate cooling (through air conditioning) to tame the wall's temperature. The power required to operate the LED panel array (plus the required cooling solution) is said to be much higher than an old-fashioned projector's relatively modest draw from the electricity supply. An LED wall also precludes the traditional placement of loudspeakers behind a cinema's front screen, so sound engineers will need to explore a different method of front audio channel output for a next-generation LED theater room.

Report: Worldwide IT Spending in 2023 Continues to Slowly Trend Downward

For the fifth consecutive month, International Data Corporation (IDC) has lowered its 2023 forecast for worldwide IT spending as technology investments continue to show the impact of a weakening economy. In its new monthly forecast for worldwide IT spending growth, IDC projects overall growth this year in constant currency of 4.4% to $3.25 trillion. This is slightly down from 4.5% in the previous month's forecast and represents a swing from a 6.0% growth forecast in October 2022.

"Since the fourth quarter of last year, we have seen clear and measurable signs of a moderate pullback in some areas of IT spending," said Stephen Minton, vice president in IDC's Data & Analytics research group. "Tech spending remains resilient compared to historical economic downturns and other types of business spending, but rising interest rates are now impacting capital spending."

NVIDIA Executive Says Cryptocurrencies Add Nothing Useful to Society

In an interview with The Guardian, NVIDIA's Chief Technical Officer (CTO) Michael Kagan shared his views on the company's position on cryptocurrency. As the maker of the world's most powerful graphics cards and compute accelerators, NVIDIA is the most prominent player in the industry for computing applications ranging from cryptocurrencies to AI and HPC. In the interview, Mr. Kagan argued that newly found applications such as ChatGPT bring much higher value to society than cryptocurrencies. "All this crypto stuff, it needed parallel processing, and [Nvidia] is the best, so people just programmed it to use for this purpose. They bought a lot of stuff, and then eventually it collapsed, because it doesn't bring anything useful for society. AI does," said Kagan, adding that "I never believed that [crypto] is something that will do something good for humanity. You know, people do crazy things, but they buy your stuff, you sell them stuff. But you don't redirect the company to support whatever it is."

When it comes to AI and other applications, the company has a very different position. "With ChatGPT, everybody can now create his own machine, his own programme: you just tell it what to do, and it will. And if it doesn't work the way you want it to, you tell it 'I want something different'," he added, arguing that the new AI applications have a level of usability beyond that of crypto. Interestingly, trading applications are also familiar to NVIDIA, as it has had clients (banks) using its hardware for faster trade execution. Mr. Kagan noted: "We were heavily involved in also trading: people on Wall Street were buying our stuff to save a few nanoseconds on the wire, the banks were doing crazy things like pulling the fibers under the Hudson taut to make them a little bit shorter, to save a few nanoseconds between their datacentre and the stock exchange."