News Posts matching #ChatGPT


Google Will Use Your Data to Train Their AI According to Updated Privacy Policy

Google made a small but important change to its privacy policy over the weekend that effectively lays claim to anything you post publicly online for use in training its AI models. The original wording of the relevant section stated that public data would be used for business purposes, research, and improving Google Translate services. Now, however, the section has been updated to read as follows:
Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public. For example, we use publicly available information to help train Google's AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.
Further down in the policy text, Google has another section that gives examples of the "publicly available" information it seeks to scrape:
For example, we may collect information that's publicly available online or from other public sources to help train Google's AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities. Or, if your business's information appears on a website, we may index and display it on Google services.

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
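As a rough back-of-the-envelope check, the growth figures above can be applied directly; the 2022 baseline and 2024 volume below are merely implied by TrendForce's stated percentages, not the firm's own published estimates:

```python
# Implied HBM volumes from the forecast figures above:
# ~60% YoY growth in 2023 to 290 million GB, then ~30% further growth in 2024.

hbm_2023_gb = 290e6           # 2023 forecast: ~290 million GB
growth_2023 = 0.60            # ~60% YoY growth in 2023
growth_2024 = 0.30            # ~30% further growth in 2024

hbm_2022_gb = hbm_2023_gb / (1 + growth_2023)   # implied 2022 baseline
hbm_2024_gb = hbm_2023_gb * (1 + growth_2024)   # implied 2024 volume

print(f"Implied 2022 baseline: ~{hbm_2022_gb / 1e6:.0f} million GB")
print(f"Implied 2024 volume:   ~{hbm_2024_gb / 1e6:.0f} million GB")
```

That works out to roughly 181 million GB shipped in 2022 and about 377 million GB projected for 2024.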

TrendForce's forecast for 2025 takes into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products such as Midjourney, and 80 small AIGC products; on that basis, the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.

Global Top Ten IC Design Houses Break Even in Q1, Hope for Recovery in Q2 Bolstered by AI Demand

TrendForce reports that inventory reduction in Q1 fell short of expectations and coincided with the industry's traditional off-season, leading to overall subdued demand. However, due to new product releases and a surge in urgent orders for specialized specifications, Q1 revenue of the global top ten IC design houses remained on par with 4Q22, with a modest QoQ increase of 0.1% for a total revenue of US$33.86 billion. Changes in the ranking included Cirrus Logic slipping out of the top ten, with the ninth and tenth positions taken by WillSemi and MPS, respectively. The rest of the rankings remained unchanged.

The smartphone supply continues to grapple with overstock, but AI applications are entering a period of rapid growth
Qualcomm witnessed an uptick in its revenue, largely attributed to the launch and subsequent shipments of its latest flagship chip, the Snapdragon 8 Gen 2. The company saw 6.1% QoQ growth in its smartphone business, which effectively offset the downturn in its automotive and IoT sectors. As a result, Qualcomm's Q1 revenue increased marginally by 0.6%, cementing its position at the top of the pack with a market share of 23.5%.

OpenAI Considers Exit From Europe - Faces Planned Legislation from Regulators

OpenAI's CEO, Sam Altman, is currently exploring the UK and Europe on a PR-related "mini" world tour, and protesters have been following these proceedings with much interest. UK news outlets have reported that a demonstration took place outside of a university building in London yesterday, where the UCL Events organization hosted Altman as part of a fireside discussion about the benefits and problems relating to advanced AI systems. Attendees noted that Altman expressed optimism about AI's potential for the creation of more jobs and reduction in inequality - despite calls for a major pause on development. He also visited 10 Downing Street during the British leg of his PR journey - alongside other AI company leaders - to talk about potential risks (originating from his industry) with the UK's prime minister. Discussed topics were reported to include national security, existential threats and disinformation.

At the UCL event, Altman touched upon his recent meetings with European regulators, who are developing plans for advanced legislation that could lead to targeted laws (applicable to AI industries). He says that his company is "gonna try to comply" with these potential new rules and agrees that some form of regulation is necessary: "something between the traditional European approach and the traditional US approach" would be preferred. He took issue with the potential branding of large AI models (such as OpenAI's ChatGPT and GPT-4 applications) as "high risk" ventures via the European Union's AI Act provisions: "Either we'll be able to solve those requirements or not...If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, British-Canadian psychologist, computer scientist, and 2018 Turing Award winner in deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks dating back to the 1980s has helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms with direct and indirect contributions over the years. 2012's AlexNet, designed and developed in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the modern backbone of computer vision and AI image recognition used today in Generative AI. Hinton joined Google when the company won the bid for the tiny startup he and his two students formed in the months following the reveal of AlexNet. Ilya Sutskever left their cohort at Google in 2015 to become co-founder and Chief Scientist of OpenAI, the creator of ChatGPT and now one of Google's most prominent competitors.

In an interview with the New York Times, Hinton said that he quit his position at Google so that he may speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took aim at Google's core business, web search, leading Google to respond with Bard in a manner more reactionary than deliberate. The concern is that, as these companies battle it out for AI supremacy, they won't take proper precautions against bad-faith actors using the technology to flood the internet with false photos, text, and even videos - to the point where the average person can no longer tell what is real and what was manufactured by an AI prompt.

Stardock Integrates AlienGPT into Galactic Civilizations IV

Galactic Civilizations IV: Supernova is a 4X turn-based strategy game set in the 24th century where you take on the role of leading a spacefaring civilization that has just developed faster-than-light (FTL) travel. You begin the game with only your home planet and must research new technologies, explore the known galaxy, and colonize new worlds while keeping your people at home happy. At the same time, you will engage in trade, diplomacy, intrigue and war with other alien civilizations. Thanks to the invention of hyperdrive, your people are now ready to discover new worlds, encounter alien civilizations, and learn about the dark history that encompasses them all.

Stardock has released the latest sequel in its award-winning space strategy game series today into early access. Galactic Civilizations IV: Supernova sees the player as the ruler of a united home world that has just discovered faster-than-light travel, and continues a 30-year trend of innovation in the series. This latest sequel introduces AI-generated content powered by OpenAI's ChatGPT technology, allowing players to create their own civilizations and have the AI generate their lore, conversation dialogs, quests, and more. The game also uses an AI model, trained on decades of Stardock's alien art, to deliver custom graphics for each custom civilization.

NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT

ChatGPT has surged in popularity over the past few months, and it has been regarded as one of the fastest-growing apps ever. Based on a Large Language Model (LLM), GPT-3.5/4, ChatGPT forms answers to user input from the extensive dataset used in its training. With billions of parameters, the GPT models behind ChatGPT can give precise answers; however, they sometimes hallucinate: asked about a non-existent topic or subject, ChatGPT may simply make up information. To help prevent these hallucinations, NVIDIA, the maker of GPUs used for training and inferencing LLMs, has released a software library called NeMo Guardrails to keep AI output in check.

As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more." These guardrails are easily programmable and can stop LLMs from outputting unwanted content. For a company that invests heavily in the hardware and software landscape, this launch is a logical decision to keep the lead in setting the infrastructure for future LLM-based applications.
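As a sketch of what such a rail looks like, NeMo Guardrails describes rails in a modeling language called Colang; a minimal rail that keeps a bot off politics, following the hello-world pattern in NVIDIA's repository (the exact message wording here is illustrative), might look like:

```
define user ask politics
  "what do you think about the government?"
  "which party should I vote for?"

define bot answer politics
  "I'm a support assistant, and I'd rather not discuss politics."

define flow politics
  user ask politics
  bot answer politics
```

At runtime, the toolkit matches incoming user messages against the examples under `user ask politics` and steers the model to reply with the canned `bot answer politics` message instead of generating free-form output on the topic.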

Opera Unveils Opera One, an Entirely Redesigned Browser

Opera is unveiling Opera One today. Opera One is the early access version of a completely redesigned browser that is planned to replace the flagship Opera browser for Windows, MacOS, and Linux later this year. Based on Modular Design, Opera One transforms the way you interact with your browser, delivering a liquid navigation experience which is more intuitive to the user. With today's release, Opera One also becomes the first major Chromium-based browser with a multithreaded compositor that brings the UI to life like never before. Opera One also introduces Tab Islands, a new, more intuitive way of interacting with and managing multiple tabs. This news from the company comes just weeks after announcing its first generative AI features, including AI Prompts, as well as access to ChatGPT and ChatSonic in the sidebar.

Introducing the first implementation of Modular Design
Opera has a history of reinventing itself to address the changing needs of its users as well as the evolving nature of the web. With Opera One, the browser has been redesigned according to Modular Design. The new design philosophy, which is being presented today for the first time, will allow Opera to over time build a more powerful and feature-rich browser that is ready for a generative AI-based future. The Opera browser is thus beginning its metamorphosis into a browser that will dynamically adapt to the user's needs by bringing only the key features to the foreground: the relevant modules within Opera One will adjust automatically based on context, providing the user with a more liquid and effortless browsing experience.

PMIC Issue with Server DDR5 RDIMMs Reported, Convergence of DDR5 Server DRAM Price Decline

TrendForce reports that mass production of new server platforms—such as Intel Sapphire Rapids and AMD Genoa—is imminent. However, recent market reports have indicated a PMIC compatibility issue for server DDR5 RDIMMs; DRAM suppliers and PMIC vendors are working to address the problem. TrendForce believes this will have two effects: First, DRAM suppliers will temporarily procure more PMICs from Monolithic Power Systems (MPS), which supplies PMICs without any issues. Second, supply will inevitably be affected in the short term as current DDR5 server DRAM production still uses older processes, which will lead to a convergence in the price decline of DDR5 server DRAM in 2Q23—from the previously estimated 15~20% to 13~18%.

As previously mentioned, PMIC issues and the production process relying on older processes are all having a short-term impact on the supply of DDR5 server DRAM. SK hynix has gradually ramped up production and sales of 1α-nm, which, unlike 1y-nm, has yet to be fully verified by consumers. Current production processes are still being dominated by Samsung and SK hynix's 1y-nm and Micron's 1z-nm; 1α and 1β-nm production is projected to increase in 2H23.

Google Bard AI Chatbot Smart Enough to Assist in Software Coding

Alphabet Incorporated's Google AI division has today revealed a planned update for its Bard conversational artificial intelligence chatbot. The experimental generative AI application will become capable of assisting people in writing computer code - the American multinational technology company hopes that Bard will be of great help in the area of software development. Paige Bailey, a group product manager at Google Research, introduced the upcoming changes: "Since we launched Bard, our experiment that lets you collaborate with generative AI, coding has been one of the top requests we've received from our users. As a product lead in Google Research - and a passionate engineer who still programs every day - I'm excited that today we're updating Bard to include that capability."

The Bard chatbot was made available, on a trial basis, to users in the USA and UK last month. Google's AI team is reported to be under great pressure to advance the Bard chatbot into a suitably powerful state in order to compete with its closest rival - Microsoft Corporation. The Redmond-based giant has invested heavily in OpenAI's industry-leading ChatGPT application. Google's latest volley against its rivals shows that Bard has become very sophisticated - so much so that the app is able to chew through a variety of programming languages. Bailey outlines these features in the company's latest blog: "Starting now, Bard can help with programming and software development tasks, including code generation, debugging and code explanation. We're launching these capabilities in more than 20 programming languages including C++, Go, Java, JavaScript, Python and TypeScript. And you can easily export Python code to Google Colab - no copy and paste required." Critics of AI-driven large language models have posited that the technology could potentially eliminate humans from the job market - it will be interesting to observe the coder community's reaction to Google's marketing of Bard as a helpful tool in software development.

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be processed by GIGABYTE's new design-optimized systems that offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 & G593 series support NVIDIA H100 Tensor Core GPUs. All of these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.

Samsung Could Replace Google Search on its Mobile Devices

Google's business of providing the world's largest search engine is reportedly in jeopardy, as the latest reports indicate that Samsung could replace Google Search with another search engine as the default solution on its mobile devices. Samsung, which sells millions of devices per year, is reportedly looking to drop the current default, Google Search, in favor of more modern AI-powered alternatives. Currently, Google and Samsung have a contract under which Google pays the South Korean giant three billion US dollars per year to keep its search engine as the default option on Samsung's devices. However, the arrangement is flexible, as the contract is up for renewal and new terms are being negotiated.

With the release of ChatGPT and the AI-powered search that Microsoft Bing now enables, Google is reportedly working hard to keep up and integrate Large Language Models (LLMs) into Search. Google's answer to Microsoft Bing is codenamed Project Magi, an initiative to bring AI-powered search reportedly as early as next month. Underscoring what is at stake, Google has been willing to hand Samsung three billion US dollars per year to keep Google Search as the default search engine, an arrangement that has held for 12 years. However, with the emergence of competitive alternatives like Microsoft Bing, Samsung is considering a replacement. The deal is still open, terms are still being negotiated, and for now there is no official mention of Bing. As a reminder, Google has a similar agreement with Apple, worth 20 billion US dollars, and Google Search was valued at 162 billion US dollars last year.

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment into hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs - destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project, in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value for normal day-to-day tasks at Twitter - the source reckons that the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several research personnel from Alphabet's DeepMind division. It was theorized that he was incubating a resident AI research lab at the time, following personal criticisms levelled at his former colleagues at OpenAI and their very popular, widely adopted chatbot.

Alibaba Developing an Equivalent to ChatGPT

Last Tuesday, Alibaba announced its intentions to put out its own artificial intelligence (AI) chatbot product called Tongyi Qianwen - another rival to take on OpenAI's pioneering ChatGPT natural language processing tool. The Chinese technology giant is hoping to retrofit the new chatbot system into several arms of its business operations. Alibaba had revealed initial plans for chatbot integration earlier this year, and mentioned that it was providing an alternative to the already well established ChatGPT tool. Alibaba's workplace messaging application - DingTalk - is slated to receive the first AI-powered update in the near future, although the company did not provide a firm timeline for Tongyi Qianwen's release window.

The product name "Tongyi Qianwen" loosely translates to "seeking an answer by asking a thousand questions" - Alibaba did not provide an official English language translation at last week's press conference. Their chatbot is reported to function in both Mandarin and English language modes. Advanced AI voice recognition is set for usage in the Tmall Genie range of smart speakers (similar in function to the Amazon Echo). Alibaba expects to expand Tongyi Qianwen's reach into applications relating to e-commerce and mapping services.

Newegg Starts Using ChatGPT to Improve Online Shopping Experience

Newegg Commerce, Inc., a leading global technology e-commerce retailer, announced today that the company is using ChatGPT to improve its customers' online shopping experience. Introduced in November 2022, ChatGPT from OpenAI is a conversational artificial intelligence (AI) program capable of providing information that improves efficiency in myriad situations.

"We're always evaluating our e-commerce technology to ensure we're providing the best customer experience. Through testing, we've proven that ChatGPT has a practical use for Newegg based on the added quality and efficiency it creates," said Lucy Huo, Vice President of Application Development for Newegg. "We deployed ChatGPT to improve content both on-site and off-site to help customers find what they want and elevate their experience. AI doesn't replace employees, but it adds resources so employees are available to handle more complex projects. We're still in the early phases of AI but the benefits for e-commerce may be substantial."
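Newegg has not published implementation details, so purely as an illustration of the kind of integration described above, here is a minimal sketch of how a retailer might call OpenAI's chat completions endpoint to draft product copy. The helper names, prompt wording, and model choice are assumptions; the network call is defined but not executed here, and a real deployment would need an API key and error handling:

```python
# Hypothetical sketch: asking a ChatGPT-style model for short product copy.
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_copy_request(product_name: str, specs: dict) -> dict:
    """Assemble a chat-completion payload requesting brief product copy."""
    spec_lines = "\n".join(f"- {k}: {v}" for k, v in specs.items())
    return {
        "model": "gpt-3.5-turbo",  # assumed model; any chat model would do
        "messages": [
            {"role": "system",
             "content": "You write concise, accurate product descriptions."},
            {"role": "user",
             "content": f"Write two sentences describing {product_name}.\n"
                        f"Specifications:\n{spec_lines}"},
        ],
    }

def request_copy(payload: dict, api_key: str) -> str:
    """POST the payload to the chat completions endpoint (needs a valid key)."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Build (but do not send) a request for an example product.
payload = build_copy_request("GeForce RTX 4070", {"VRAM": "12 GB GDDR6X"})
print(payload["messages"][1]["content"])
```

The same pattern - a fixed system prompt plus structured product data in the user message - could feed both on-site descriptions and off-site marketing copy, with a human reviewing the model's output before publication.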