Meta’s AI Chief: Overregulation could delay the full impact of the AI revolution
Seoul (The Uttam Hindu): Yann LeCun, Chief AI Scientist at Meta Platforms, told the 2024 K-Science and Technology Global Forum in Seoul that the "real AI revolution" has yet to arrive. In his keynote, LeCun urged governments to avoid enacting laws that might impede the development of artificial intelligence.

"The real AI revolution has not yet arrived," LeCun said, predicting that AI assistants will soon play a central role in all of our digital interactions, and that reaching that point will ultimately require AI systems with human-level intelligence.

A pioneer of the field, LeCun acknowledged the current limitations of generative AI models such as OpenAI's ChatGPT and Meta's Llama, particularly their weak grasp of the physical world and their inability to reason like humans. Large language models (LLMs) excel at language tasks because language is discrete and structured, he explained, but they struggle with the broader challenges of understanding physical contexts, reasoning, and planning. To overcome these limitations, Meta is developing a new, objective-driven AI architecture designed to understand the physical world much as babies do: by observing their environment and making predictions based on that understanding.
LeCun also emphasized the need for an open-source AI ecosystem that supports the creation of AI models capable of understanding diverse languages, cultures, and values. He cautioned against any single entity controlling AI development, arguing that the effort must be collaborative and global. "We can't have a single entity somewhere on the west coast of the United States train those models," he said, underscoring the importance of international collaboration.

Finally, LeCun expressed concern that regulatory action could hinder AI innovation. Premature regulation, he warned, could stifle the open-source development he views as crucial to the technology's continued progress. "There is zero demonstration that any AI system is intrinsically dangerous," he added, advocating for cautious regulation that does not hold back technological advancement.