News Posts matching #DeepMind


Google Genie 2 Promises AI-Generated Interactive Worlds With Realistic Physics and AI-Powered NPCs

For better or worse, generative AI has been a disruptive force in many industries, although its reception in video games has been lukewarm at best, with attempts at integrating AI-powered NPCs into games failing to impress most gamers. Now, Google DeepMind has a new model called Genie 2, which can supposedly be used to generate "action-controllable, playable, 3D environments for training and evaluating embodied agents." All the environments generated by Genie 2 can supposedly be interacted with, whether by a human piloting a character with a mouse and keyboard or by an AI-controlled NPC, although it's unclear what the behind-the-scenes code and optimizations look like, both of which will be key to any real-world application of the tech. Google says worlds created by Genie 2 can simulate the consequences of actions in addition to the world itself, all in real time. This means that when a player interacts with a world generated by Genie 2, the AI will respond with what its model suggests is the result of that action (like stepping on a leaf resulting in the destruction of said leaf). This extends to things like lighting, reflections, and physics, with Google showing off impressively accurate water, volumetric effects, and gravity.
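Google has not published Genie 2's architecture or any code, but the behavior described, where each player input conditions the next generated frame and consequences persist in the world, matches the general shape of an autoregressive, action-conditioned world model. The toy Python sketch below is purely illustrative; every name and the stand-in "latent state" are assumptions, not Google's implementation.

```python
# Hypothetical sketch of an action-conditioned world-model loop.
# Genie 2's real architecture and API are unpublished; every name
# and data structure here is an illustrative stand-in.

class ToyWorldModel:
    """Autoregressive toy: (latent state, player action) -> next frame."""

    def __init__(self, image_prompt: str, text_prompt: str) -> None:
        # Seed a latent world state from the prompts. A real model would
        # encode an image and text into a learned latent representation.
        self.state = {"scene": text_prompt, "seed": image_prompt, "events": []}

    def step(self, action: str) -> dict:
        # Predict the consequence of the action and fold it back into the
        # state, so effects persist (a crushed leaf stays crushed) instead
        # of the world being re-generated from scratch each frame.
        consequence = f"result_of({action})"
        self.state["events"].append(consequence)
        return {"scene": self.state["scene"], "latest": consequence}

# Real-time loop: each mouse/keyboard input conditions the next frame,
# whether it comes from a human player or an AI-controlled agent.
model = ToyWorldModel(image_prompt="spaceship.png", text_prompt="spaceship interior")
for frame in (model.step(a) for a in ["walk_forward", "step_on_leaf"]):
    print(frame)
```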

In a demo video, Google showed a number of different AI-generated worlds, each with their own interactive characters, from a spaceship interior being explored by an astronaut to a robot taking a stroll in a futuristic cyberpunk urban environment, and even a sailboat sailing over water and a cowboy riding through some grassy plains on horseback. What's perhaps most interesting about Genie 2's generated environments is that Genie has apparently given each world a different perspective and camera control scheme. Some of the examples shown are first-person, while others are third-person with the camera either locked to the character or free-floating around the character. Of course, being generative AI, there is some weirdness, and Google clearly chose its demo clips carefully to avoid graphical anomalies from taking center stage. What's more, at least a few clips seem to very strongly resemble worlds from popular video games, Assassin's Creed, Red Dead Redemption, Sony's Horizon franchise, and what appears to be a mix of various sci-fi games, including Warframe, Destiny, Mass Effect, and Subnautica. This isn't surprising, since the worlds Google used to showcase the AI are all generated with an image and text prompt as inputs, and, given what Google says it used as training data used, it seems likely that gaming clips from those games made it into the AI model's training data.

AI Contributes to 25% of Google's New Code, CEO Sundar Pichai Confirms

During Alphabet's Q3 earnings call, CEO Sundar Pichai announced that AI now generates more than a quarter of the company's new code, marking a significant milestone for AI advancement and for the tech giant. The news comes alongside impressive financial results, with the company reporting $88.2 billion in revenue, a 15% year-over-year increase. Using AI for code generation has raised concerns, though Google maintains rigorous safety protocols: every AI-generated code segment undergoes thorough review by human (natural intelligence) engineers before deployment, ensuring quality and security standards are met. This hybrid approach helps Google balance productivity with reliability.
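Google has not described the tooling behind that review step, so the following is a purely hypothetical sketch of what a human-review gate for AI-authored changes could look like; the ChangeList structure, may_deploy function, and approval rule are all invented for illustration.

```python
# Hypothetical review gate for AI-generated code; Google's internal
# tooling is not public, and every name here is invented.

from dataclasses import dataclass, field

@dataclass
class ChangeList:
    diff: str
    ai_generated: bool
    human_approvals: list[str] = field(default_factory=list)

def may_deploy(cl: ChangeList, required_approvals: int = 1) -> bool:
    """Block AI-authored changes until a human engineer signs off.

    The 'hybrid approach': generation may be automated, review is not.
    (Human-authored code would follow the normal review path, elided here.)
    """
    if cl.ai_generated:
        return len(cl.human_approvals) >= required_approvals
    return True

cl = ChangeList(diff="+ fix off-by-one in pager", ai_generated=True)
assert not may_deploy(cl)           # held back until reviewed
cl.human_approvals.append("eng@")   # human sign-off recorded
assert may_deploy(cl)               # now eligible for deployment
```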

The tech giant's commitment to AI development extends beyond code generation. Recent achievements include the revolutionary AI Overviews feature, which has undergone significant optimization: through hardware improvements and technical refinements, Google has managed to reduce query costs by over 90% while simultaneously doubling the capacity of its custom Gemini models. Google's AI push has also garnered prestigious recognition, with DeepMind researchers Demis Hassabis and John Jumper receiving the Nobel Prize in Chemistry for their groundbreaking AlphaFold project, and former Google researcher Geoffrey Hinton achieving Nobel recognition in Physics. The impact of Google's AI integration is evident across its product ecosystem, with Gemini models now powering seven platforms that each serve over two billion monthly users; Google Maps recently joined this elite group. The company has also expanded its AI capabilities to external developers through partnerships with platforms like GitHub Copilot, helping developers write code with AI assistance.

Google Cloud and NVIDIA Expand Partnership to Advance AI Computing, Software and Services

Google Cloud Next—Google Cloud and NVIDIA today announced new AI infrastructure and software for customers to build and deploy massive models for generative AI and speed data science workloads.

In a fireside chat at Google Cloud Next, Google Cloud CEO Thomas Kurian and NVIDIA founder and CEO Jensen Huang discussed how the partnership is bringing end-to-end machine learning services to some of the largest AI customers in the world—including by making it easy to run AI supercomputers with Google Cloud offerings built on NVIDIA technologies. The new hardware and software integrations utilize the same NVIDIA technologies employed over the past two years by Google DeepMind and Google research teams.

NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark

Leading users and industry-standard benchmarks agree: NVIDIA H100 Tensor Core GPUs deliver the best AI performance, especially on the large language models (LLMs) powering generative AI. In a new industry-standard benchmark, a cluster of 3,584 H100 GPUs at cloud service provider CoreWeave trained a massive GPT-3-based model in just 11 minutes.

H100 GPUs set new records on all eight tests in the latest MLPerf training benchmarks released today, excelling on a new MLPerf test for generative AI. That excellence is delivered both per accelerator and at scale in massive servers. For example, on a commercially available cluster of 3,584 H100 GPUs co-developed by startup Inflection AI and operated by CoreWeave, a cloud service provider specializing in GPU-accelerated workloads, the system completed the massive GPT-3-based training benchmark in less than 11 minutes.
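An 11-minute run on a GPT-3-class task implies enormous sustained throughput. As a rough back-of-envelope, assuming the MLPerf task trains a 175-billion-parameter model on roughly one billion tokens (the benchmark covers only a slice of a full training run, and that token count is an assumption here, not a published figure), the standard ~6 x parameters x tokens FLOP estimate gives the following:

```python
# Back-of-envelope throughput implied by the reported result.
# ASSUMPTION: token count for the MLPerf GPT-3 slice is not published;
# 1e9 tokens is used purely for illustration.

params = 175e9         # GPT-3-class parameter count
tokens = 1e9           # assumed tokens processed by the benchmark
gpus = 3584            # Inflection AI / CoreWeave H100 cluster
seconds = 11 * 60      # reported wall-clock training time

train_flops = 6 * params * tokens        # ~6*N*T rule of thumb for training
cluster_flops = train_flops / seconds    # sustained aggregate FLOP/s
per_gpu_tflops = cluster_flops / gpus / 1e12

print(f"aggregate: {cluster_flops:.2e} FLOP/s")      # ~1.6e18 FLOP/s
print(f"per GPU:   {per_gpu_tflops:.0f} TFLOP/s")    # ~440 TFLOP/s sustained
```

Under those assumed numbers, the implied ~440 TFLOP/s per GPU would be a believable fraction of the H100's dense BF16 peak of roughly 1 PFLOP/s, which is what makes the at-scale figure plausible.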

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that it is officially merging its artificial intelligence subsidiaries into a single group. More specifically, the Google Brain and DeepMind teams are joining forces to become a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

Demis Hassabis, previously CEO of DeepMind, will lead the new group as its CEO, working alongside Jeff Dean, who has been promoted to Google's Chief Scientist and will report directly to Pichai. In his new role, Dean will serve as Chief Scientist for both Google Research and Google DeepMind, where he will set the direction of AI research across the two units. This corporate restructuring should help the previously separate teams work to a single plan and advance AI capabilities faster. We are eager to see what these teams accomplish next.

Elon Musk AI-Powered Empire Expands Again, X.AI Startup Incorporated in Nevada

Elon Musk has formed a new AI-focused company, as reported by the Wall Street Journal yesterday. The entity, registered under the name X.AI, was incorporated via a filing in Nevada last month, and Musk appears to be listed as the company's only director, with Jared Birchall joining him in the role of secretary. Birchall heads the Musk family office, Excession LLC, and serves as CEO of Neuralink, a neurotechnology company co-founded by Musk back in 2016. It is widely speculated that Birchall acts as a type of fixer in Musk's corporate affairs - go watch the TV series "Ray Donovan" if you would like a crude (and obviously fictional) example.

Reports emerged earlier this week that Musk is behind a massive purchase of GPUs destined to arrive shortly at his data centers - this impressive chunk of hardware is speculated to power AI-related number crunching at Twitter in the near future. The founding of X.AI could provide another home for a portion of the 10,000-GPU order, but industry insiders firmly believe that Twitter will need to tool up quickly for its new AI-driven endeavor - the GPUs will likely be set to work on a chatbot system to underpin the social media platform. Musk has already recruited researchers from DeepMind and set up a lab for them at one of his operations. It remains to be seen how the X.AI startup will run alongside efforts at other Musk-owned companies - it is theorized that he wants to beat OpenAI at its own game and compete with similar undertakings at Google, Microsoft, and Amazon.

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment in hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs, destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project in an effort to rival OpenAI's ChatGPT system. The GPUs would provide little computational value for Twitter's normal day-to-day workloads - the source reckons that the extra processing power will be put to work on deep learning instead.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was clearly afoot when, earlier this year, Musk recruited several researchers from Alphabet's DeepMind division. It was theorized at the time that he was incubating a resident AI research lab, following his public criticism of former colleagues at OpenAI and their very popular, widely adopted chatbot.

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing its TPUv2 and TPUv3, a talk from startup Cerebras on its 2nd-gen wafer-scale AI solution, and ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions: Lightmatter talking about its optical computing solution, and Tenstorrent giving a first look at its new architecture for AI.

Onward to the Singularity: Google AI Develops Better Artificial Intelligences

The singularity concept isn't a simple one. It carries not only the idea of an Artificial Intelligence capable of constant self-improvement, but also the notion that the invention and deployment of this kind of AI will trigger ever-accelerating technological growth - so much so that humanity will see itself changed forever. Granted, some pieces of technology have already effectively changed the fabric of society. We've seen this happen with the Internet, bridging gaps in time and space and ushering humanity towards frankly inspiring times of growth and development. Even smartphones, through their adoption rates and capabilities, have transformed human interaction and the ways we connect with each other, even sparking some smartphone-related psychological conditions. But all of those will definitely pale in comparison to the changes that might ensue following the singularity.

The thing is, up to now, we've been shackled in our (still tremendous) growth by our own capabilities as a species: our world is built on layers upon layers of brilliant minds that have developed the framework of technologies our society is now interspersed with. But this means that as fast as development has been, it has still been bounded by humanity's ability to evolve and to understand. Each development has come with an almost perfect understanding of what came before it: a cohesive whole, with each step provable and verifiable through the scientific method, a veritable "standing atop the shoulders of giants". What happens, then, when we lose sight of the thought process behind developments: when the train of thought behind them is so exquisite that we can't really understand it? When we deploy technologies and programs that we don't really understand? Enter the singularity, an event we're no longer merely walking towards: it's more of a hurdle race now, and, perhaps more worryingly, one no longer fully controlled by humans.