News Posts matching #Chat

Microsoft Reportedly Developing AI-Powered Chatbot for Xbox Support

According to a new report from The Verge, Microsoft is testing an AI-driven chatbot designed to automate support tasks for its Xbox gaming platform. As the report notes, Microsoft is experimenting with an animated AI character that assists in answering Xbox support inquiries. The chatbot is connected to the Xbox network and to Microsoft's ecosystem support documentation, so it can answer questions in natural language and even process game refunds through the Microsoft support website, drawing its answers from existing Xbox support pages to give users quick, straightforward help. Training the model on Microsoft's own enterprise data should reduce hallucinations and keep its responses within the intended support scope.
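Microsoft hasn't shared implementation details, but grounding a chatbot in existing help articles is typically done with retrieval-augmented generation: relevant documentation is fetched and placed into the prompt so the model answers from it rather than from memory. The sketch below is purely illustrative, and every name in it is hypothetical, not Microsoft's.

# Hypothetical sketch of retrieval-grounded support answers.
# All names here are illustrative; none come from Microsoft.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

SUPPORT_ARTICLES = [
    Article("Refund a game", "Open your order history and select Request a refund..."),
    Article("Fix sign-in errors", "Check the Xbox status page, then reset your password..."),
]

def retrieve(question: str, articles: list[Article], k: int = 1) -> list[Article]:
    """Naive keyword-overlap retrieval; a production system would use embeddings."""
    words = set(question.lower().split())
    def score(a: Article) -> int:
        return len(words & set((a.title + " " + a.body).lower().split()))
    return sorted(articles, key=score, reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Stuff the retrieved article into the prompt so the model answers
    from documentation instead of inventing support steps."""
    context = "\n\n".join(a.body for a in retrieve(question, SUPPORT_ARTICLES))
    return (
        "Answer the Xbox support question using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("How do I get a refund for a game?"))

Constraining the model to retrieved text is also how a vendor can "instruct it to do only as intended", since anything outside the documentation can simply be refused.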

As a result, the chatbot's responses closely mirror the information Microsoft already publishes for its customers. Microsoft has recently expanded the test pool for the new Xbox chatbot, suggesting that the "Xbox Support Virtual Agent" may soon handle support inquiries for all Xbox customers. The chatbot prototype is part of a broader initiative within Microsoft Gaming to bring AI-powered features and tools to the Xbox platform and its developer tooling. The company is also reportedly working on AI capabilities for game content creation, gameplay, and the Xbox platform and devices. However, Xbox employees have yet to publicly confirm these more extensive AI efforts, likely owing to the company's cautious approach to presenting AI in gaming. If these efforts pan out, AI could soon become an integral part of gaming consoles.

AMD Publishes User Guide for LM Studio - a Local AI Chatbot

AMD has caught up with NVIDIA and Intel in the race to get a locally run AI chatbot working on its own hardware. Team Red's community hub welcomed a new blog entry on Wednesday: AI staffers published a handy step-by-step guide, "How to run a Large Language Model (LLM) on your AMD Ryzen AI PC or Radeon Graphics Card." Interested parties are advised to download the version of LM Studio that matches their hardware. The standard Windows build is CPU-bound and targets Ryzen AI PCs built around higher-end Phoenix and Hawk Point chips, which can deploy a GPT-style, LLM-powered AI chatbot. The LM Studio ROCm technical preview functions similarly, but requires a Radeon RX 7000-series graphics card; supported GPU targets are gfx1100, gfx1101, and gfx1102.

AMD believes that "AI assistants are quickly becoming essential resources to help increase productivity, efficiency or even brainstorm for ideas." The blog also puts a spotlight on LM Studio's offline functionality: "Not only does the local AI chatbot on your machine not require an internet connection—but your conversations stay on your local machine." The six-step guide invites curious members to experiment with a handful of large language models, most notably Mistral 7b and LLAMA v2 7b, and strongly recommends selecting model options labeled "Q4 K M" (AKA 4-bit quantization). You can learn about spooling up "your very own AI chatbot" in the full guide on AMD's community hub.
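Beyond its built-in chat window, LM Studio can also serve a loaded model through a local OpenAI-compatible HTTP server (by default on port 1234). As a rough sketch, assuming that server is running with a model such as Mistral 7b (Q4 K M) loaded, you could query it from Python like this:

# Sketch: chat with a model served by LM Studio's local server.
# Assumes LM Studio's "Local Server" is running on its default port (1234)
# with a quantized model already loaded; adjust the port if you changed it.
import json
import urllib.request

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain 4-bit quantization in one sentence."},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])

Note that the request never leaves your machine, which is exactly the offline, conversations-stay-local behavior AMD's blog highlights.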

ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

ServiceNow, Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness. StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications. Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants deliver strong performance while saving on compute costs, since fewer parameters require less compute during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model. "StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain," emphasized Harm de Vries, lead of ServiceNow's StarCoder2 development team and co-lead of BigCode. "The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential."
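As a quick illustration of the code-completion use case, here is a minimal sketch using the Hugging Face transformers library. It assumes the smallest checkpoint is published on the Hub under the id bigcode/starcoder2-3b and that your installed transformers release supports the architecture; it is not an official example.

# Sketch: code completion with the 3B StarCoder2 model via transformers.
# Assumes the checkpoint id "bigcode/starcoder2-3b" and a recent
# transformers release with StarCoder2 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"  # assumed model id on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

prompt = "def fibonacci(n: int) -> int:\n    "
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding keeps the completion deterministic for this demo.
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern scales to the 7B and 15B variants; only the checkpoint id and the hardware requirements change.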

Sony Announces Launch of PlayStation Pulse Elite Wireless Headset

Launching today, our latest wireless headset, Pulse Elite, brings crisp, immersive audio to the gaming experience on the PS5 console; to PlayStation Link supported devices including PS5, PC, Mac, and the PlayStation Portal remote player; and to Bluetooth compatible devices such as smartphones and tablets. Pulse Elite follows the launch of our first wireless earbuds, Pulse Explore, with both audio devices featuring planar magnetic drivers to further enhance the PS5 console's Tempest 3D AudioTech. When combined with PlayStation Link, the planar drivers deliver the output of the 3D audio algorithms directly to the player's ear losslessly, with almost no distortion or delay. Here's our quick-start guide on setting up and using the Pulse Elite wireless headset, along with the Pulse Explore wireless earbuds.

Set up and use sidetone and 3D audio features on PS5
A tour of the headset appears when you first connect the Pulse Elite wireless headset or Pulse Explore wireless earbuds to your PS5 console via the included PlayStation Link USB adapter. Before diving into a game, I recommend personalizing 3D audio settings and adjusting sidetone volume, which controls how loudly you hear your own voice in your ear when you talk. It's also possible to create a custom name for the headset using standard letters, symbols, and even emoji. After the tour, you can change settings at any time while the headset is connected by navigating to the Settings menu and selecting Accessories, followed by Pulse Elite wireless headset.

Groq LPU AI Inference Chip is Rivaling Major Players like NVIDIA, AMD, and Intel

AI workloads split into two categories: training and inference. Training requires massive compute and memory capacity, but access speeds are not a major factor; inference is another story. With inference, the AI model must run extremely fast to serve the end user as many tokens (words) per second as possible, so users get answers to their prompts sooner. Groq, an AI chip startup that spent a long time in stealth mode, has been making major moves by delivering ultra-fast inference speeds with its Language Processing Unit (LPU), designed for large language models (LLMs) such as GPT, Llama, and Mistral. The Groq LPU is a single-core unit based on the Tensor-Streaming Processor (TSP) architecture; it achieves 750 TOPS at INT8 and 188 TeraFLOPS at FP16, with 320x320 fused dot-product matrix multiplication plus 5,120 vector ALUs.

The Groq LPU pairs massive concurrency and 80 TB/s of bandwidth with 230 MB of local SRAM. Working together, all of this gives Groq fantastic performance that has been making waves on the internet over the past few days. Serving the Mixtral 8x7B model at 480 tokens per second, the Groq LPU provides some of the leading inference numbers in the industry. In models like Llama 2 70B with a 4096-token context length, Groq can serve 300 tokens/s, while in the smaller Llama 2 7B with 2048 tokens of context, the LPU can output 750 tokens/s. According to the LLMPerf Leaderboard, the Groq LPU beats GPU-based cloud providers at serving Llama models in configurations anywhere from 7 to 70 billion parameters. In token throughput (output) and time to first token (latency), Groq leads the pack, achieving the highest throughput and the second-lowest latency.
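To put those figures in perspective, a back-of-the-envelope model of end-to-end response time is time to first token plus generated tokens divided by throughput. The sketch below uses the article's throughput numbers; the 0.2-second time to first token is an assumed placeholder, not a measured Groq figure.

# Rough response-time model: latency = TTFT + tokens / throughput.
def response_time(n_tokens: int, tokens_per_sec: float, ttft_s: float = 0.2) -> float:
    # ttft_s (time to first token) is an assumed placeholder value.
    return ttft_s + n_tokens / tokens_per_sec

for name, tps in [("Mixtral 8x7B", 480), ("Llama 2 70B", 300), ("Llama 2 7B", 750)]:
    secs = response_time(500, tps)
    print(f"{name}: ~{secs:.2f} s for a 500-token answer at {tps} tokens/s")

At 300 tokens/s, a 500-token Llama 2 70B answer arrives in under two seconds, which is why raw token throughput matters so much for the perceived speed of a chatbot.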

Microsoft Copilot Becomes a Dedicated Key on Windows-Powered PC Keyboards

Microsoft today announced a new Copilot key devoted to its AI assistant on Windows PC keyboards. The key will provide instant access to Microsoft's conversational Copilot feature, offering a ChatGPT-style AI bot from a single button press. The Copilot key is the first significant change to the Windows keyboard in nearly 30 years, since the addition of the Windows key itself in the 1990s. Microsoft sees it as similarly transformative, making AI an integrated part of the device. The company expects broad adoption from PC manufacturers starting this spring. The Copilot key will likely replace keys such as the menu or Office key on standard layouts. While the key currently just launches Copilot, Microsoft could also enable key combinations in the future.

The physical keyboard button helps make AI feel native rather than an add-on, as Microsoft aggressively pushes Copilot into Windows 11 and Edge. The company has declared its aim to make 2024 the "year of the AI PC", with Copilot as the entry point. Microsoft envisions AI eventually becoming seamlessly woven into computing through system, silicon, and hardware advances. The Copilot key may appear minor, but it signals that profound change is on the horizon. However, users will only embrace the vision if Copilot proves consistently useful rather than gimmicky. Microsoft is betting that injecting AI deeper into PCs will deliver enough value to justify the disruption. With major OS and hardware partners already committed to adopting the Copilot key, Microsoft's AI-first computing vision is materializing rapidly. The button press that invokes Copilot may soon feel as natural as hitting the Windows key or the spacebar. As we await the reported launch of Windows 12, we can expect even deeper Copilot integration to appear.

Jensen Huang & Leading EU Generative AI Execs Participated in Fireside Chat

Three leading European generative AI startups joined NVIDIA founder and CEO Jensen Huang this week to talk about the new era of computing. More than 500 developers, researchers, entrepreneurs and executives from across Europe and further afield packed into the Spindler and Klatt, a sleek, riverside gathering spot in Berlin. Huang started the reception by touching on the message he delivered Monday at the Berlin Summit for Earth Virtualization Engines (EVE), an international collaboration focused on climate science. He shared details of NVIDIA's Earth-2 initiative and how accelerated computing, AI-augmented simulation and interactive digital twins drive climate science research.

Before sitting down for a fireside chat with the founders of the three startups, Huang introduced some "special guests" to the audience—four of the world's leading climate modeling scientists, who he called the "unsung heroes" of saving the planet. "These scientists have dedicated their careers to advancing climate science," said Huang. "With the vision of EVE, they are the architects of the new era of climate science."

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, the British-Canadian cognitive psychologist, computer scientist, and 2018 Turing Award winner for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks, dating back to the 1980s, helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms through direct and indirect contributions over the years. AlexNet, designed and developed in 2012 in collaboration with his students Alex Krizhevsky and Ilya Sutskever, forms the modern backbone of the computer vision and AI image recognition used today in generative AI. Hinton joined Google when the company won the bid for the tiny startup he and those two students had formed in the months following the reveal of AlexNet. Ilya Sutskever left their cohort at Google in 2015 to become co-founder and Chief Scientist of OpenAI, creator of ChatGPT and one of Google's most prominent competitors.

In an interview with the New York Times, Hinton says that he quit his position at Google so that he may speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his tenure, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took shots at Google's core search business, prompting Google's response with Bard to be more reactionary than deliberate. His concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technology to flood the internet with false photos, text, and even videos, until the average person can no longer tell what is real and what was manufactured from an AI prompt.

Samsung Could Replace Google Search on its Mobile Devices

Google's position as provider of the world's largest search engine is reportedly in jeopardy: the latest reports indicate that Samsung could replace Google Search with another search engine as the default on its mobile devices. Samsung, which sells millions of devices per year, is said to be considering more modern, AI-powered alternatives to the current default. Currently, Google and Samsung have a contract under which Google pays the South Korean giant three billion US dollars per year to keep its search engine as the default option on Samsung devices. However, that arrangement is not locked in, as the contract is up for renewal and new terms are being negotiated.

With the release of ChatGPT and the AI-powered search it enables in Microsoft Bing, Google is reportedly working hard to keep up and integrate Large Language Models (LLMs) into Search. Google's answer to Microsoft Bing is codenamed Project Magi, an initiative to bring AI-powered search to market, reportedly as soon as next month. The stakes are clear: Google has been willing to pay Samsung three billion US dollars a year to keep Google Search as the default search engine, a relationship that has lasted 12 years. However, with the emergence of stronger alternatives like Microsoft Bing, Samsung is considering a replacement. The deal is still open, terms are still being negotiated, and for now there is no official mention of Bing. As a reminder, Google has a similar agreement with Apple, reportedly worth 20 billion US dollars, and Google Search was valued at 162 billion US dollars last year.

OpenAI Unveils GPT-4, Claims to Outperform Humans in Certain Academic Benchmarks

We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5's score was around the bottom 10%. We've spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first "test run" of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.
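For developers who want to experiment, GPT-4 is exposed through OpenAI's API. Here is a minimal sketch using the official Python SDK, assuming an OPENAI_API_KEY environment variable is set and the account has GPT-4 access; model naming and availability can differ by account.

# Sketch: querying GPT-4 through OpenAI's chat completions API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment
# variable for an account with GPT-4 access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a multimodal model is."},
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)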

Discord VP Anjney Midha Shares Details of Expanded AI Chat and Moderation Features

Whether it's generating a shiny new avatar or putting into words something you couldn't quite figure out on your own, new experiences using generative artificial intelligence are popping up every day. However, "tons of people use AI on Discord" might not be news to you: more than 30 million people already use AI apps on Discord every month. Midjourney's server is the biggest on Discord, with more than 13 million members bringing their imaginations to pixels. Overall, our users have created more than 1 billion unique images through AI apps on Discord. And this is just the start.
Almost 3 million Discord servers include an AI experience, ranging from generating gaming assets to groups writing novels with AI, to AI companions, AI companies, and AI-based learning communities. More than 10 percent of new Discord users join specifically to access AI interest-based communities on our platform.