In recent years, artificial intelligence has rapidly transformed various sectors, including scientific research, education, and the workplace. As AI becomes more integrated into daily life, experts at UC Berkeley are closely monitoring its development and potential risks in 2026.
Stuart Russell, professor of electrical engineering and computer sciences, highlighted concerns about the sustainability of current investments in AI infrastructure. “Current and planned spending on data centers represents the largest technology project in history. Yet many observers describe a bubble that is about to burst: revenues are underwhelming, the performance of large language models seems to have plateaued, and there are clear theoretical limits on their ability to learn straightforward concepts efficiently,” Russell said. He warned that if no breakthroughs occur to move closer to artificial general intelligence, economic consequences could be severe.
Hany Farid, professor of information, pointed out issues related to trust as AI-generated media becomes more convincing. “I will be watching the accelerating erosion of trust driven by increasingly convincing AI-generated media. In 2026, deepfakes will no longer be novel; they will be routine, scalable, and cheap, blurring the line between the real and the fake. This has profound implications for journalism, democracies, economies, courts and personal reputation,” Farid stated.
Jennifer Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society, emphasized both opportunities and responsibilities: “Major technology paradigm shifts like AI come with significant benefits and risks…Conversations about the responsible and ethical use of AI should be prioritized across sectors and civil society.”
Privacy concerns were raised by Deirdre Mulligan, professor of information. She noted that people often share sensitive information with chatbots for various types of support. Mulligan explained that legal cases have already involved demands for access to user logs held by companies like OpenAI, including demands from government agencies such as the Department of Homeland Security.
Jodi Halpern, professor of public health, discussed how companion chatbots are expanding into younger age groups: “This year will see the expansion of companion chatbots to young children…Yet ‘buddies’ for toddlers are an emerging market without guardrails.” She called for regulation until safety can be ensured.
Ken Goldberg, professor of engineering at UC Berkeley, addressed claims that robots will replace human workers: “Significant advances are being made but robots have nowhere near the dexterity” required for many jobs. He noted that closing this gap remains a major research focus.
Annette Bernhardt from the UC Berkeley Labor Center’s Technology and Work Program is watching legislative progress around workers’ rights concerning digital technologies: “In 2025 unions…began to develop a portfolio of policies to regulate employers’ growing use of AI…” She stressed the importance of laws ensuring humans remain involved in critical decisions affecting people’s lives.
Nicole Holliday, associate professor of linguistics, highlighted bias in AI-powered workplace evaluation tools: “Programs like Zoom Revenue Accelerator…are being used by companies…to rate employees.” She expressed concern that these systems’ opaque algorithms could systematically disadvantage certain groups.
Jonathan Stray of UC Berkeley’s Center for Human-Compatible AI commented on political neutrality in AI systems, noting that recent federal executive orders require unbiased systems but lack clear definitions of what bias means.
Camille Crittenden noted an increase in sophisticated deepfakes as powerful new tools make manipulation accessible at scale. She pointed to new California regulations aimed at restoring trust through content-authenticity requirements but suggested these measures alone would not suffice.
Alison Gopnik predicted a shift away from pursuing general intelligence toward developing models that interact with their environment the way children do. She suggested that intrinsically motivated reinforcement learning may lead to meaningful progress in seeking truth rather than simply optimizing scores set by humans.