News Posts matching #ChatGPT

NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT

ChatGPT has surged in popularity in just a few months, and has been regarded as one of the fastest-growing applications ever. Based on a Large Language Model (LLM), GPT-3.5/4, ChatGPT forms answers to user input based on the extensive dataset used in its training process. With billions of parameters, the GPT models behind ChatGPT can give precise answers; however, these models sometimes hallucinate: given a question about a non-existent topic or subject, ChatGPT can fabricate information. To prevent these hallucinations, NVIDIA, the maker of GPUs used for training and inferencing LLMs, has released a software library to keep AI in check, called NeMo Guardrails.

As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more." These guardrails are easily programmable and can stop LLMs from outputting unwanted content. For a company that invests heavily in both AI hardware and software, this launch is a logical move to keep its lead in setting the infrastructure for future LLM-based applications.
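The core idea behind a "rail" can be illustrated without the library itself: a programmable check intercepts a request before (or after) it reaches the model and blocks or rewrites disallowed content. The sketch below is a toy illustration of that concept in plain Python, not the NeMo Guardrails API; the topic list and refusal message are assumptions for the example.

```python
# Toy illustration of the "rails" concept: a programmable input check
# that blocks disallowed topics before they ever reach the model.
# (Not the NeMo Guardrails API; topics and messages are illustrative.)

BLOCKED_TOPICS = {"politics", "medical advice"}

def apply_input_rail(user_message: str) -> tuple[bool, str]:
    """Return (allowed, refusal). Blocked topics get a canned refusal."""
    lowered = user_message.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"Sorry, I can't discuss {topic}."
    return True, ""

def chat(user_message: str, llm=lambda m: "LLM answer to: " + m) -> str:
    """Run the rail first; only call the (stubbed) LLM if it passes."""
    allowed, refusal = apply_input_rail(user_message)
    return llm(user_message) if allowed else refusal
```

In NeMo Guardrails the same pattern is expressed declaratively, with rails defined in configuration files rather than hard-coded conditionals, so policies can be changed without touching application code.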

Opera Unveils Opera One, an Entirely Redesigned Browser

Opera is unveiling Opera One today. Opera One is the early access version of a completely redesigned browser that is planned to replace the flagship Opera browser for Windows, macOS, and Linux later this year. Based on Modular Design, Opera One transforms the way you interact with your browser, delivering a liquid navigation experience that is more intuitive to the user. With today's release, Opera One also becomes the first major Chromium-based browser with a multithreaded compositor that brings the UI to life like never before. Opera One also introduces Tab Islands, a new, more intuitive way of interacting with and managing multiple tabs. This news from the company comes just weeks after it announced its first generative AI features, including AI Prompts, as well as access to ChatGPT and ChatSonic in the sidebar.

Introducing the first implementation of Modular Design
Opera has a history of reinventing itself to address the changing needs of its users as well as the evolving nature of the web. With Opera One, the browser has been redesigned according to Modular Design. The new design philosophy, presented today for the first time, will allow Opera, over time, to build a more powerful and feature-rich browser that is ready for a generative AI-based future. The Opera browser is thus beginning its metamorphosis into a browser that dynamically adapts to the user's needs by bringing only the key features to the foreground: the relevant modules within Opera One will adjust automatically based on context, providing the user with a more liquid and effortless browsing experience.

PMIC Issue with Server DDR5 RDIMMs Reported, Convergence of DDR5 Server DRAM Price Decline

TrendForce reports that mass production of new server platforms—such as Intel Sapphire Rapids and AMD Genoa—is imminent. However, recent market reports have indicated a PMIC compatibility issue for server DDR5 RDIMMs; DRAM suppliers and PMIC vendors are working to address the problem. TrendForce believes this will have two effects: First, DRAM suppliers will temporarily procure more PMICs from Monolithic Power Systems (MPS), which supplies PMICs without any issues. Second, supply will inevitably be affected in the short term as current DDR5 server DRAM production still uses older processes, which will lead to a convergence in the price decline of DDR5 server DRAM in 2Q23—from the previously estimated 15~20% to 13~18%.
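The revised forecast narrows the projected quarter-over-quarter decline. For a given starting price, the effect can be computed directly; the $100 baseline below is an illustrative assumption, not TrendForce data.

```python
# Illustrative only: effect of the revised 2Q23 decline forecast on a
# hypothetical DDR5 server DRAM price (the $100 baseline is an
# assumption for the example, not TrendForce data).
baseline = 100.0

old_range = (0.15, 0.20)   # previously estimated 15~20% decline
new_range = (0.13, 0.18)   # revised 13~18% decline

def price_after_decline(price: float, decline: float) -> float:
    """Price after a fractional decline, rounded to cents."""
    return round(price * (1.0 - decline), 2)

old_prices = [price_after_decline(baseline, d) for d in old_range]
new_prices = [price_after_decline(baseline, d) for d in new_range]
print(old_prices)  # [85.0, 80.0]
print(new_prices)  # [87.0, 82.0]
```

In other words, the PMIC issue leaves prices roughly two percentage points higher at each end of the forecast range than previously expected.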

As previously mentioned, the PMIC issue and the reliance on older production processes are both having a short-term impact on the supply of DDR5 server DRAM. SK hynix has gradually ramped up production and sales of its 1α-nm process, which, unlike 1y-nm, has yet to be fully verified by customers. Current production is still dominated by Samsung's and SK hynix's 1y-nm and Micron's 1z-nm processes; 1α-nm and 1β-nm production is projected to increase in 2H23.

Google Bard AI Chatbot Smart Enough to Assist in Software Coding

Alphabet Incorporated's Google AI division has today revealed a planned update for its Bard conversational artificial intelligence chatbot. The experimental generative artificial intelligence software application will become capable of assisting people in the writing of computer code - the American multinational technology company hopes that Bard will be of great help in the area of software development. Paige Bailey, a group product manager at Google Research, has introduced the upcoming changes: "Since we launched Bard, our experiment that lets you collaborate with generative AI, coding has been one of the top requests we've received from our users. As a product lead in Google Research - and a passionate engineer who still programs every day - I'm excited that today we're updating Bard to include that capability."

The Bard chatbot was made available, on a trial basis, to users in the USA and UK last month. Google's AI team is reported to be under great pressure to advance the Bard chatbot into a suitably powerful state in order to compete with its closest rival - Microsoft Corporation. The Redmond-based giant has invested heavily in OpenAI's industry-leading ChatGPT application. Google's latest volley against its rivals shows that Bard has become very sophisticated - so much so that the app is able to chew through a variety of programming languages. Bailey outlines these features in the company's latest blog: "Starting now, Bard can help with programming and software development tasks, including code generation, debugging and code explanation. We're launching these capabilities in more than 20 programming languages including C++, Go, Java, JavaScript, Python and TypeScript. And you can easily export Python code to Google Colab - no copy and paste required." Critics of AI-driven large language models have posited that the technology could potentially eliminate humans from the job market - it will be interesting to observe the coder community's reaction to Google's marketing of Bard as a helpful tool for software development.

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be processed by GIGABYTE's new design-optimized systems that offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 & G593 series support NVIDIA H100 Tensor Core GPUs. All these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.

Samsung Could Replace Google Search on its Mobile Devices

Google's business of providing the world's largest search engine is reportedly in jeopardy, as the latest reports indicate that Samsung could replace Google Search as the default solution on its mobile devices. Samsung, which sells millions of devices per year, is reportedly looking to replace Google Search with a more modern, AI-powered alternative. Currently, Google and Samsung have a contract under which Google pays the South Korean giant three billion US dollars per year to keep its search engine as the default option on Samsung's devices. However, this arrangement is flexible, as the contract is up for renewal and new terms are being negotiated.

With the release of ChatGPT and the AI-powered search that Microsoft Bing enables, Google is reportedly working hard to keep up and integrate Large Language Models (LLMs) into Search. Google's answer to Microsoft Bing is codenamed Project Magi, an initiative to bring AI-powered search reportedly as soon as next month. Underscoring the importance of getting this to production: Google has been willing to pay Samsung three billion US dollars in revenue to keep Google Search as the default search engine, as it has been for the past 12 years. However, with the emergence of better solutions like Microsoft Bing, Samsung is considering replacing it with something else. The deal is still open, terms are still being negotiated, and for now there is no official mention of Bing. As a reminder, Google has a similar agreement with Apple, worth 20 billion US dollars, and Google Search was valued at 162 billion US dollars last year.

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment in hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs, destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project, in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value in day-to-day tasks at Twitter - the source reckons that the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several researchers from Alphabet's DeepMind division. It was theorized at the time that he was incubating a resident AI research lab, following his personal criticisms of former colleagues at OpenAI and their very popular, widely adopted chatbot.

Alibaba Developing an Equivalent to ChatGPT

Last Tuesday, Alibaba announced its intention to release its own artificial intelligence (AI) chatbot product, called Tongyi Qianwen - another rival to OpenAI's pioneering ChatGPT natural language processing tool. The Chinese technology giant is hoping to retrofit the new chatbot system into several arms of its business operations. Alibaba had revealed initial plans for chatbot integration earlier this year, positioning it as an alternative to the already well-established ChatGPT tool. Alibaba's workplace messaging application - DingTalk - is slated to receive the first AI-powered update in the near future, although the company did not provide a firm timeline for Tongyi Qianwen's release window.

The product name "Tongyi Qianwen" loosely translates to "seeking an answer by asking a thousand questions" - Alibaba did not provide an official English-language translation at last week's press conference. The chatbot is reported to function in both Mandarin and English modes. Advanced AI voice recognition is set for use in the Tmall Genie range of smart speakers (similar in function to the Amazon Echo). Alibaba expects to expand Tongyi Qianwen's reach into applications relating to e-commerce and mapping services.

Newegg Starts Using ChatGPT to Improve Online Shopping Experience

Newegg Commerce, Inc., a leading global technology e-commerce retailer, announced today that the company is using ChatGPT to improve its customers' online shopping experience. Introduced in November 2022, ChatGPT from OpenAI is a conversational artificial intelligence (AI) program capable of providing information that improves efficiency in myriad situations.

"We're always evaluating our e-commerce technology to ensure we're providing the best customer experience. Through testing, we've proven that ChatGPT has a practical use for Newegg based on the added quality and efficiency it creates," said Lucy Huo, Vice President of Application Development for Newegg. "We deployed ChatGPT to improve content both on-site and off-site to help customers find what they want and elevate their experience. AI doesn't replace employees, but it adds resources so employees are available to handle more complex projects. We're still in the early phases of AI but the benefits for e-commerce may be substantial."