News Posts matching #OpenAI


NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite

NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 V6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 V6 will be a new AI-optimized virtual machine (VM) series, combining the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals. At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

Microsoft Brings Copilot AI Assistant to Windows Terminal

Microsoft has taken another significant step in its AI integration strategy by introducing "Terminal Chat," an AI assistant now available in Windows Terminal. This latest feature brings conversational AI capabilities directly to the command-line interface, marking a notable advancement in making terminal operations more accessible to users of all skill levels. The new feature, currently available in Windows Terminal (Canary), leverages various AI services, including ChatGPT, GitHub Copilot, and Azure OpenAI, to provide interactive assistance for command-line operations. What sets Terminal Chat apart is its context-aware functionality, which automatically recognizes the specific shell environment being used—whether it's PowerShell, Command Prompt, WSL Ubuntu, or Azure Cloud Shell—and tailors its responses accordingly.

Users can interact with Terminal Chat through a dedicated interface within Windows Terminal, where they can ask questions, troubleshoot errors, and request guidance on specific commands. The system provides shell-specific suggestions, automatically adjusting its recommendations based on whether a user is working in Windows PowerShell, Linux, or another environment. For example, when asked about creating a directory, Terminal Chat will suggest "New-Item -ItemType Directory" for PowerShell users while providing "mkdir" as the appropriate command for Linux environments. This intelligent adaptation helps bridge the knowledge gap between different command-line interfaces. Windows Latest tested the feature and shared several examples.
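The shell-aware behavior can be pictured as a lookup keyed on the detected environment. The sketch below is purely illustrative; the shell names and mapping are assumptions, not Microsoft's actual Terminal Chat implementation:

```python
# Hypothetical sketch of shell-aware command suggestions; an illustration,
# not Microsoft's actual Terminal Chat implementation.
SHELL_COMMANDS = {
    "powershell": {"make_directory": "New-Item -ItemType Directory -Name {name}"},
    "cmd":        {"make_directory": "mkdir {name}"},
    "wsl-ubuntu": {"make_directory": "mkdir {name}"},
}

def suggest(shell: str, task: str, name: str) -> str:
    """Return a command appropriate for the detected shell environment."""
    return SHELL_COMMANDS[shell][task].format(name=name)

print(suggest("powershell", "make_directory", "logs"))  # New-Item -ItemType Directory -Name logs
print(suggest("wsl-ubuntu", "make_directory", "logs"))  # mkdir logs
```

The real feature presumably infers the shell from the active Terminal profile rather than taking it as a parameter; the table simply shows why the same request yields different commands per shell.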

Etched Introduces AI-Powered Games Without GPUs, Displays Minecraft Replica

The gaming industry is about to get massively disrupted. Instead of using game engines to power games, we are now witnessing an entirely new and crazy concept. A startup specializing in designing ASICs for the Transformer architecture, the foundation behind generative AI models like GPT, Claude, and Stable Diffusion, has showcased a demo, in partnership with Decart, of a Minecraft clone being generated and operated entirely by AI instead of a traditional game engine. While we already use AI to create fairly realistic images and videos from text descriptions, having an AI model produce an entire playable game is something different. Oasis is the first playable, real-time, open-world AI model that takes user input and generates real-time gameplay, including physics, game rules, and graphics.

An interesting thing to point out is the hardware that powers this setup. On a single NVIDIA H100 GPU, the 500-million-parameter Oasis model runs at 720p and 20 generated frames per second. Due to the limitations of accelerators like NVIDIA's H100 and B200, gameplay at 4K is almost impossible. However, Etched has its own accelerator, called Sohu, which specializes in accelerating transformer architectures. Eight NVIDIA H100 GPUs can serve five Oasis sessions to five users, while eight Sohu cards can serve 65 sessions to 65 users, more than a 10x increase in inference capability over NVIDIA's hardware in this single use case. The accelerator is designed to run much larger models, like future 100-billion-parameter generative AI video game models that could output 4K at 30 FPS, thanks to 144 GB of HBM3E memory per chip, yielding 1,152 GB in an eight-accelerator server configuration.
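The serving and memory figures can be sanity-checked with quick arithmetic on the numbers quoted above (a back-of-the-envelope sketch, not Etched's methodology):

```python
# Back-of-the-envelope check of the serving figures cited above.
h100_users = 5    # eight H100 GPUs serve five Oasis users
sohu_users = 65   # eight Sohu cards serve 65 Oasis users

print(sohu_users / h100_users)  # 13.0, i.e. "more than a 10x increase"

# Memory: 144 GB of HBM3E per accelerator, eight per server
print(8 * 144)  # 1152 GB total
```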

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is continuing its moves in the custom silicon space, expanding beyond its reported talks with Broadcom to a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, focused specifically on inference operations. Getting a custom chip to handle training runs is a more complex task, and OpenAI is leaving that to its current partners until it works out the details. Even with an inference-only chip, the scale at which OpenAI serves its models makes it financially sensible for the company to develop custom solutions tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just holding exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and manage data movement between chips, which is crucial for AI systems running thousands of processors in parallel. At the same time, OpenAI is diversifying its compute strategy, adding AMD's Instinct MI300X chips to its infrastructure alongside its existing NVIDIA deployments. Meta has taken a similar approach: it now trains its models on NVIDIA GPUs and serves them to the public (inference) using AMD Instinct MI300X.

Anthropic Develops AI Model That Can Use Computers, Updates Claude 3.5 Sonnet

The age of automation is upon us. Anthropic, the company behind the top-performing Claude large language models that compete directly with OpenAI's GPT family, has today announced updates to its models and a new feature: computer use. Computer use allows the Claude 3.5 Sonnet model to operate the user's system by looking at the screen, moving the cursor, typing text, and clicking buttons. Still experimental for now, the system is prone to errors and makes "dumb" mistakes. However, it enables one very important capability: driving an operating system designed for humans using artificial intelligence.

OSWorld, a benchmark that evaluates an AI model's ability to use a computer the way a human does on a human-centered operating system, gives a sense of current capabilities. Claude 3.5 Sonnet scored 14.9% in the screenshot-only category and 22.0% on tasks that permit more steps. A typical human scores around 72.36%, showing the test is difficult even for natural intelligence. However, this is only the beginning, as these models are advancing rapidly. Until now, such models have mostly worked with data like text and static images, processing it and computing based on it. Operating computers designed for human interaction is a great leap in the capabilities of AI models.

NVIDIA Fine-Tunes Llama3.1 Model to Beat GPT-4o and Claude 3.5 Sonnet with Only 70 Billion Parameters

NVIDIA has officially released its Llama-3.1-Nemotron-70B-Instruct model. Based on Meta's Llama 3.1 70B, the Nemotron model is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. NVIDIA uses structured fine-tuning data to steer the model toward more helpful answers. With only 70 billion parameters, the model is punching far above its weight class: the company claims it beats top models from leading labs, such as OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, the current leaders across AI benchmarks. In evaluations such as Arena Hard, Llama-3.1-Nemotron-70B scores 85 points, while GPT-4o and Claude 3.5 Sonnet score 79.3 and 79.2, respectively. In other benchmarks, such as AlpacaEval and MT-Bench, NVIDIA also holds the top spot with scores of 57.6 and 8.98, respectively; Claude reaches 52.4 and 8.81, and GPT-4o 57.5 and 8.74, just below Nemotron.

This language model underwent training using reinforcement learning from human feedback (RLHF), specifically the REINFORCE algorithm. The process involved a reward model based on a large language model architecture and custom preference prompts designed to guide the model's behavior. Training started from a pre-existing instruction-tuned model: Llama-3.1-70B-Instruct served as the initial policy, with the Llama-3.1-Nemotron-70B-Reward model and HelpSteer2-Preference prompts guiding the RLHF run. Running the model locally requires either four 40 GB or two 80 GB VRAM GPUs and 150 GB of free disk space. We managed to take it for a spin on NVIDIA's website to say hello to TechPowerUp readers. The model also passes the infamous "strawberry" test, where it has to count the occurrences of a specific letter in a word. However, that test appears to have been part of the fine-tuning data, as the model fails the next test, shown in the image below.
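The multi-GPU requirement lines up with simple arithmetic on the weights alone (a rough estimate assuming 16-bit weights, ignoring activation and KV-cache overhead):

```python
# Rough VRAM estimate for serving a 70B-parameter model in 16-bit precision.
params = 70e9          # 70 billion parameters
bytes_per_param = 2    # FP16/BF16

weights_gb = params * bytes_per_param / 1e9
print(weights_gb)  # 140.0 GB for weights alone, hence 4x 40 GB or 2x 80 GB GPUs
```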

ASUS ROG Updates Virtual Assistant With New AI Module

ASUS Republic of Gamers (ROG) today released a significant update to its bundled Virtual Assistant software (formerly known as Virtual Pet). This new software package comes preinstalled on the ROG Zephyrus G16 gaming laptop and leverages the incredible power of AI to significantly level up the capabilities of the Virtual Assistant, including an intelligent chat and Q&A interface, written document summarization, and voice transcription tools. This update is available on laptop models with AMD Ryzen AI 300 Series processors as a free download via ASUS Live Update.

Intelligent chat support
The Virtual Assistant gives users a leg up when they're using an unfamiliar program or system tool. With a local chat and Q&A feature, even when disconnected from the internet, the Virtual Assistant can help users navigate complicated menus and activate the features and settings they need. For example, if a new user is looking to adjust fan settings, they can request that from the Virtual Assistant, and it will direct them to the appropriate settings menu within the Armoury Crate app. Applications like MyASUS, GlideX, and ProArt Creator Hub are supported, and the chat functionality adds a new layer of support for end users.

Apple Debuts the iPhone 16 Pro and iPhone 16 Pro Max - Now with a Camera Button

Apple today introduced iPhone 16 Pro and iPhone 16 Pro Max, featuring Apple Intelligence, larger display sizes, new creative capabilities with innovative pro camera features, stunning graphics for immersive gaming, and more—all powered by the A18 Pro chip. With Apple Intelligence, powerful Apple-built generative models come to iPhone in the easy-to-use personal intelligence system that understands personal context to deliver intelligence that is helpful and relevant while protecting user privacy. Camera Control unlocks a fast, intuitive way to tap into visual intelligence and easily interact with the advanced camera system. Featuring a new 48MP Fusion camera with a faster quad-pixel sensor that enables 4K120 FPS video recording in Dolby Vision, these new Pro models achieve the highest resolution and frame-rate combination ever available on iPhone. Additional advancements include a new 48MP Ultra Wide camera for higher-resolution photography, including macro; a 5x Telephoto camera on both Pro models; and studio-quality mics to record more true-to-life audio. The durable titanium design is strong yet lightweight, with larger display sizes, the thinnest borders on any Apple product, and a huge leap in battery life—with iPhone 16 Pro Max offering the best battery life on iPhone ever.

iPhone 16 Pro and iPhone 16 Pro Max will be available in four stunning finishes: black titanium, natural titanium, white titanium, and desert titanium. Pre-orders begin Friday, September 13, with availability beginning Friday, September 20.

Cerebras Launches the World's Fastest AI Inference

Today, Cerebras Systems, the pioneer in high performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10c per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.

Unlike alternative approaches that compromise accuracy for performance, Cerebras offers the fastest performance while maintaining state of the art accuracy by staying in the 16-bit domain for the entire inference run. Cerebras Inference is priced at a fraction of GPU-based competitors, with pay-as-you-go pricing of 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B.
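At the listed pay-as-you-go rates, token costs are straightforward to estimate. A simple illustration using the two prices quoted above:

```python
# Cost of inference at Cerebras' listed pay-as-you-go prices (USD per million tokens).
PRICE_PER_MILLION = {"llama-3.1-8b": 0.10, "llama-3.1-70b": 0.60}

def cost_usd(tokens: int, model: str) -> float:
    """Return the USD cost of generating `tokens` tokens on `model`."""
    return tokens / 1e6 * PRICE_PER_MILLION[model]

# Generating one billion tokens on each model:
print(f"${cost_usd(1_000_000_000, 'llama-3.1-8b'):.2f}")   # $100.00
print(f"${cost_usd(1_000_000_000, 'llama-3.1-70b'):.2f}")  # $600.00
```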

Report: AI Software Sales to Experience Massive Growth with 40.6% CAGR Over the Next Five Years

The market for artificial intelligence (AI) platforms software grew at a rapid pace in 2023 and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across many industries. A new International Data Corporation (IDC) forecast shows that worldwide revenue for AI platforms software will grow to $153.0 billion in 2028 with a compound annual growth rate (CAGR) of 40.6% over the 2023-2028 forecast period.
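A 40.6% CAGR over the five-year 2023-2028 window implies roughly a 5.5x expansion from the 2023 base. A quick check of the arithmetic (not part of IDC's report):

```python
# Quick check of the IDC forecast arithmetic.
revenue_2028 = 153.0  # USD billions
cagr = 0.406
years = 5             # 2023 through 2028

growth_factor = (1 + cagr) ** years
implied_2023 = revenue_2028 / growth_factor
print(round(growth_factor, 2))  # ~5.49x expansion over the period
print(round(implied_2023, 1))   # ~27.8 billion USD implied 2023 base
```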

"The AI platforms market shows no signs of slowing down. Rapid innovations in generative AI is changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning," said Ritu Jyoti, group vice president and general manager of IDC's Artificial Intelligence, Automation, Data and Analytics research. "IDC expects this upward trajectory will continue to accelerate with the emergence of unified platforms for predictive and generative AI that supports interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale."

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in making various IPs, Broadcom also builds ASIC solutions for other companies and assisted Google in making its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI success and its broad data center componentry, helping build a custom AI accelerator to power the infrastructure OpenAI needs for the next generation of AI models. With each new AI model released by OpenAI, compute demand spikes dramatically, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged from stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process transformers. The transformer is an architecture for designing deep learning models, developed by Google, and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google's Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second and the latest eight-GPU B200 "Blackwell" cluster pushes 43,000 tokens per second, an eight-Sohu cluster manages to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be used, while traditional GPUs achieve only 30-40% FLOPS utilization. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions, an accelerator dedicated to a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
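The headline multipliers follow directly from the throughput figures quoted above (a quick check of Etched's claims, not an independent measurement):

```python
# Throughput figures for eight-chip servers running Llama-3 70B, per Etched.
h100_tps = 25_000    # 8x NVIDIA H100 "Hopper"
b200_tps = 43_000    # 8x NVIDIA B200 "Blackwell"
sohu_tps = 500_000   # 8x Etched Sohu (claimed)

print(sohu_tps / h100_tps)            # 20.0x over Hopper
print(round(sohu_tps / b200_tps, 1))  # 11.6x over Blackwell, roughly the cited 10x
```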

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the front. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern that only a few frontier AI labs prioritize. In recent history, OpenAI's safety team drew the spotlight for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What will come out of SSI? We still don't know. However, given the team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume they have attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.

The Race is Heating Up, Elon Musk's AI Startup xAI Raises $6 Billion

Elon Musk's AI company xAI just scored big (according to Reuters), raising a massive $6 billion in new funding. This sky-high investment values xAI at a whopping $24 billion as investors go all-in on challengers to top AI players like OpenAI. Big-name funders like Andreessen Horowitz and Sequoia backed the funding round, according to xAI's blog post on Sunday. Before this, xAI was valued at $18 billion, Musk said on social app X.

The huge cash influx will help xAI launch its first products, build advanced tech, and turbocharge their research, the company stated. "More news coming soon," Musk teased cryptically after the funding announcement. It's an AI investment frenzy as tech giants like Microsoft and Google parent Alphabet pour fortunes into leading the red-hot generative AI race. With its new war chest, xAI is gearing up to make some serious waves.

AMD Instinct MI300X Accelerators Power Microsoft Azure OpenAI Service Workloads and New Azure ND MI300X V5 VMs

Today at Microsoft Build, AMD (NASDAQ: AMD) showcased its latest end-to-end compute and software capabilities for Microsoft customers and developers. By using AMD solutions such as AMD Instinct MI300X accelerators, ROCm open software, Ryzen AI processors and software, and Alveo MA35D media accelerators, Microsoft is able to provide a powerful suite of tools for AI-based deployments across numerous markets. The new Microsoft Azure ND MI300X virtual machines (VMs) are now generally available, giving customers like Hugging Face access to impressive performance and efficiency for their most demanding AI workloads.

"The AMD Instinct MI300X and ROCm software stack is powering the Azure OpenAI Chat GPT 3.5 and 4 services, which are some of the world's most demanding AI workloads," said Victor Peng, president, AMD. "With the general availability of the new VMs from Azure, AI customers have broader access to MI300X to deliver high-performance and efficient solutions for AI applications."

ChatGPT Comes to Desktop with OpenAI's Latest GPT-4o Model That Talks With Users

At OpenAI's spring update event, a lot of eyes were fixed on the company that spurred the AI boom with ChatGPT. Now almost a must-have app for consumers and prosumers alike, ChatGPT is the de facto application for the latest AI innovation, backed by OpenAI's researchers and scientists. Today, OpenAI announced a new model called GPT-4o ("Omni"), which brings advanced intelligence, improved overall capabilities, and real-time voice interaction with users. With it, the ChatGPT application aims to become a personal assistant that actively communicates with users and provides much broader capabilities. OpenAI claims the model can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response times in conversation.

OpenAI states that the new GPT-4o model will be available to free users as well as Plus and Team subscribers, with paid subscribers getting 5x higher usage limits and early access. Interestingly, GPT-4o is much improved across a variety of standard benchmarks, such as MMLU, MATH, HumanEval, and GPQA, where it now surpasses almost all models except Claude 3 Opus in MGSM. It understands more than 50 languages and can perform real-time translation. In addition to the new model, OpenAI announced a desktop ChatGPT app, which can act as a personal assistant and see what is happening on the screen, but only when the user commands it to. This should bring a much more refined user experience and let users employ AI as a third party that helps make sense of on-screen content. The app is initially available only on macOS; we are waiting for OpenAI to launch a Windows version so everyone can experience the new technology.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on different levels of computational performance, corresponding use cases and the efficiency of computational performance. We believe AI basic laptops, which are already in the market, can perform basic AI tasks but not full GenAI tasks. Starting this year, they will be supplanted by more AI-advanced and AI-capable models with enough TOPS (tera operations per second), powered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks really well.

Microsoft Prepares MAI-1 In-House AI Model with 500B Parameters

According to The Information, Microsoft is developing a new AI model, internally named MAI-1, designed to compete with the leading models from Google, Anthropic, and OpenAI. This significant step forward in the tech giant's AI capabilities is led by Mustafa Suleyman, the former Google AI leader who served as CEO of Inflection AI before Microsoft acquired the majority of its staff and intellectual property for $650 million in March. MAI-1 is a custom Microsoft creation that utilizes training data and technology from Inflection but is not a transferred model. It is also distinct from Inflection's previously released Pi models, as confirmed by two Microsoft insiders familiar with the project. With approximately 500 billion parameters, MAI-1 will be significantly larger than its predecessors, surpassing the capabilities of Microsoft's smaller, open-source models.

For comparison, OpenAI's GPT-4 reportedly boasts 1.8 trillion parameters in a sparse Mixture of Experts design, while open-source models from Meta and Mistral use dense designs with around 70 billion parameters. Microsoft's investment in MAI-1 highlights its commitment to staying competitive in the rapidly evolving AI landscape. The development of this large-scale model represents a significant step forward for the tech giant as it seeks to challenge industry leaders in the field. The increased computing power, training data, and financial resources required for MAI-1 demonstrate Microsoft's dedication to pushing the boundaries of AI capabilities and its intention to compete on its own. With the involvement of Mustafa Suleyman, a renowned expert in AI, the company is well-positioned to make significant strides in this field.

Jensen Huang Will Discuss AI's Future at NVIDIA GTC 2024

NVIDIA's GTC 2024 AI conference will set the stage for another leap forward in AI. At the heart of this highly anticipated event: the opening keynote by Jensen Huang, NVIDIA's visionary founder and CEO, who speaks on Monday, March 18, at 1 p.m. Pacific, at the SAP Center in San Jose, California.

Planning Your GTC Experience
There are two ways to watch. Register to attend GTC in person to secure a spot for an immersive experience at the SAP Center. The center is a short walk from the San Jose Convention Center, where the rest of the conference takes place. Doors open at 11 a.m., and badge pickup starts at 10:30 a.m. The keynote will also be livestreamed at www.nvidia.com/gtc/keynote/.

CNET Demoted to Untrusted Sources by Wikipedia Editors Due to AI-Generated Content

Once trusted as a staple of technology journalism, CNET has been publicly demoted to the Untrusted Sources list on Wikipedia. CNET has faced public criticism since late 2022 for publishing AI-generated articles without disclosing that humans did not write them. This practice culminated in CNET being demoted from Trusted to Untrusted Sources on Wikipedia, following extensive debate among Wikipedia editors. CNET's reputation first declined in 2020, when it was acquired by publisher Red Ventures, which appeared to prioritize advertising and SEO traffic over editorial standards. However, the AI content scandal accelerated CNET's fall from grace. After discovering the AI-written articles, Wikipedia editors argued that CNET should be removed entirely as a reliable source, citing Red Ventures' pattern of misinformation.

One editor called for targeting Red Ventures as "a spam network." AI-generated content poses challenges familiar from spam bots: machine-created text that is frequently low quality or inaccurate. However, CNET claims it has stopped publishing AI content. This controversy highlights rising concerns about AI-generated text online. Using AI-generated stories might seem appealing because it shortens publishing time; however, such stories usually rank low in Google's search index, as the engine detects and penalizes AI-generated content. Lawsuits like The New York Times v. OpenAI also allege that AI companies scraped vast amounts of text without permission. As AI capabilities advance, maintaining information quality on the web will require increased diligence, but demoting once-reputable sites like CNET when they disregard ethics and quality control helps set a necessary precedent. Below, you can see the Wikipedia table about CNET.

Elon Musk Sues Open AI and Sam Altman for Breach of Founding Contract

Elon Musk, in his individual capacity, has sued Sam Altman, Gregory Brockman, OpenAI, and its affiliate companies for breach of founding contract and for deviating from the founding goal of being a non-profit tasked with developing AI for the benefit of humanity. The lawsuit comes in the wake of OpenAI's relationship with Microsoft, which Musk says compromises that founding contract. Musk alleges breach of contract, breach of fiduciary duty, and unfair business practices against OpenAI, and demands that the company revert to being open-source with all its technology and function as a non-profit.

Musk also requests an injunction to prevent OpenAI and the other defendants from profiting off OpenAI technology. In particular, Musk alleges that GPT-4 isn't open-source, claiming that only OpenAI and Microsoft know its inner workings, and that Microsoft stands to monetize GPT-4 "for a fortune." Microsoft, interestingly, was not named as a defendant in the lawsuit. Elon Musk sat on the original board of OpenAI until his departure in 2018 and is said to have been a key sponsor of the AI acceleration hardware used in OpenAI's pioneering work.

Intel Announces Intel 14A (1.4 nm) and Intel 3T Foundry Nodes, Launches World's First Systems Foundry Designed for the AI Era

Intel Corp. today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners - including Synopsys, Cadence, Siemens and Ansys - who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

Jensen Huang to Unveil Latest AI Breakthroughs at GTC 2024 Conference

NVIDIA today announced it will host its flagship GTC 2024 conference at the San Jose Convention Center from March 18-21. More than 300,000 people are expected to register to attend in person or virtually. NVIDIA founder and CEO Jensen Huang will deliver the keynote from the SAP Center on Monday, March 18, at 1 p.m. Pacific time. It will be livestreamed and available on demand. Registration is not required to view the keynote online. Since Huang first highlighted machine learning in his 2014 GTC keynote, NVIDIA has been at the forefront of the AI revolution. The company's platforms have played a crucial role in enabling AI across numerous domains including large language models, biology, cybersecurity, data center and cloud computing, conversational AI, networking, physics, robotics, and quantum, scientific and edge computing.

The event's 900 sessions and over 300 exhibitors will showcase how organizations are deploying NVIDIA platforms to achieve remarkable breakthroughs across industries, including aerospace, agriculture, automotive and transportation, cloud services, financial services, healthcare and life sciences, manufacturing, retail and telecommunications. "Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," Huang said. "GTC has become the world's most important AI conference because the entire ecosystem is there to share knowledge and advance the state of the art. Come join us."

Jim Keller Offers to Design AI Chips for Sam Altman for Less Than $1 Trillion

In case you missed it, Sam Altman of OpenAI took the Internet by storm late last week with the unveiling of Sora, a generative AI that can conjure up photoreal video clips from text prompts with uncanny accuracy. While Altman and his colleagues in the generative AI industry had a ton of fun generating videos based on prompts from the public on X, it became all too clear that the only thing holding back the democratization of generative AI is the supply of AI accelerator chips. Altman wants to solve this by designing his own AI acceleration hardware from the ground up, for which he initially pitched an otherworldly $7 trillion in investment: a sum impossible to raise on the financial markets, and plausible only through "printing money" or sovereign wealth fund investments.

Jim Keller needs no introduction: the celebrity VLSI architect has spent decades designing number-crunching devices of all shapes and sizes for some of the biggest tech companies out there, including Intel, Apple, and AMD, to name a few. When, as part of his "are you not entertained?" victory lap, Altman suggested that his vision for the future needs an even larger $8 trillion investment, Keller responded that he could design an AI chip for less than $1 trillion. Does Altman really need several trillion dollars to build a ground-up AI chip at the costs and volumes needed to mainstream AI?

Sora by OpenAI is the Text-to-Video AI Model Beyond Our Wildest Imagination

Sam Altman of OpenAI just unveiled Sora, the all-new text-to-video AI model that works exactly the way science fiction would want such a thing to work: fluid, photorealistic, true-color video clips generated entirely from text prompts. Sora is generative AI on an exponentially larger scale than DALL-E, and presumably requires enormously more compute power. To those who can afford to rent a large hardware instance, this means the power to create a video of just about anything. Everything democratizes with time, and in a few years Sora could become the greatest tool for independent content creators, who could draw up entire worlds using just prompts and green screens. Sora strapped to a mixed-reality headset such as the Apple Vision Pro is basically a Holodeck.