Google DeepMind Develops AI Framework That Requires Less Data

  • 08.12.2024
  • Source: Media Post
  • by: Laurie Sullivan
Image: Code on computer monitor by Markus Spiske (Unsplash)

Large language models (LLMs) and vision language models (VLMs) require massive amounts of text and images for training, with data scraped from the internet or licensed from publishers. AI models used in robotics also learn by interacting with the physical world.

A new project developed by researchers at Google DeepMind and Imperial College London changes the learning process for AI models used in robotics, and could eventually do the same for other AI models. The research does not mention online advertising or search; the project is in the early stages of what the researchers describe as "lifelong learning."

Diffusion Augmented Agents (DAAG), the team's framework, combines LLMs, VLMs, and diffusion models to improve the learning efficiency and transfer capabilities of AI agents.
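One way to picture how such a combination might fit together: a language model proposes subgoals for a task, a vision language model checks camera frames to see whether a subgoal has been reached, and a diffusion model rewrites stored experience so it can be reused for new tasks, reducing the amount of fresh data the agent needs to collect. The Python sketch below is a conceptual illustration under those assumptions only; every class and method name (LanguageModel, plan_subgoals, relabel_frames, and so on) is a hypothetical placeholder, not DeepMind's actual interface.

"""Conceptual sketch of a diffusion-augmented agent loop.

All interfaces here are hypothetical stand-ins: the DAAG framework is not a
public API, so LanguageModel, VisionLanguageModel, and DiffusionModel are
placeholders that only illustrate how the three components could interact.
"""
from dataclasses import dataclass, field
from typing import Callable, List, Protocol


class LanguageModel(Protocol):
    def plan_subgoals(self, task: str) -> List[str]: ...


class VisionLanguageModel(Protocol):
    def subgoal_reached(self, frame: bytes, subgoal: str) -> bool: ...


class DiffusionModel(Protocol):
    def relabel_frames(self, frames: List[bytes], new_goal: str) -> List[bytes]: ...


@dataclass
class ExperienceBuffer:
    """Stores past observations so they can be reused for new tasks."""
    frames: List[bytes] = field(default_factory=list)

    def add(self, frame: bytes) -> None:
        self.frames.append(frame)


def run_episode(task: str,
                llm: LanguageModel,
                vlm: VisionLanguageModel,
                diffusion: DiffusionModel,
                buffer: ExperienceBuffer,
                observe: Callable[[], bytes]) -> None:
    """One hypothetical agent episode: plan, act, check progress, and
    augment old experience so less fresh data is needed for the new task."""
    # 1. The language model breaks the task into smaller subgoals.
    subgoals = llm.plan_subgoals(task)

    # 2. Old experience is rewritten by the diffusion model so it looks as if
    #    it had been collected for the current task (synthetic augmentation).
    augmented = diffusion.relabel_frames(buffer.frames, task)
    buffer.frames.extend(augmented)

    # 3. The vision language model acts as a progress detector on new frames.
    for subgoal in subgoals:
        frame = observe()  # e.g. a camera image from the robot
        buffer.add(frame)
        if vlm.subgoal_reached(frame, subgoal):
            print(f"subgoal done: {subgoal}")

The point the sketch tries to convey is step 2: if a diffusion model can synthesize task-relevant variations of experience the agent already has, the agent needs less new real-world data to learn a new task, which is the efficiency gain behind the "requires less data" claim.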

While the development of AI that requires less data might be touted as a breakthrough by Google DeepMind, it’s yet another reminder of the growing dominance of Big Tech over our lives. Conservatives must be wary of any technology that centralizes power, especially in the hands of companies with questionable transparency. The real issue here is the potential for these advancements to further erode our privacy and freedoms, with little accountability. We should be demanding stricter oversight and pushing back against the unchecked influence of tech giants, who are more interested in expanding their control than in protecting individual rights.
~Political Media

Read more at Media Post
