🤭 Google's Mess Causes Laughter and Irritation
Plus, the AI Safety Summit and xAI's Series B funding
Hello AI Enthusiast,
This week in AI, we've covered a couple of really interesting stories. Researchers at Anthropic have made a significant move in understanding large language models, something no one has done, or dared to announce, before. Their latest research dives deep into the AI's "thought process" in a way that's never been seen, helping us understand not only how these models work but, more importantly, how to tweak them (hopefully for ethical purposes). Meanwhile, over at Google, their new AI feature is getting a bit too creative, offering advice like putting glue on pizza. 🧐
You may have already read about these stories, but we'll help you understand why they happened and what they mean. 👇
The Big Picture 🌍
Anthropic Decodes AI's Black Box
Anthropic researchers have been working to understand the inner workings of their AI model by peeking into its complex neural network. They used a method called "dictionary learning" to decode how specific combinations of artificial neurons activate and represent different concepts. Through these experiments, they identified millions of such combinations, or "features," that the model uses to process information. To test their findings, they created "Golden Gate Claude," a specialized version of the AI that focuses heavily on the Golden Gate Bridge by tuning the strength of its related neuron activations. When this feature was amplified, the AI started mentioning the bridge in almost every response, relevant or not. This deeper understanding and ability to adjust neural activations aim to make AI models more transparent, safer and more precise in their operations.
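For intuition, here's a toy sketch in plain NumPy of what "tuning the strength" of a feature can look like. All names and numbers are hypothetical, and this is not Anthropic's actual method or code: the idea is simply that a learned feature is a direction in activation space, and steering means clamping how strongly that direction fires.

```python
import numpy as np

# Toy illustration of activation steering (all names hypothetical).
rng = np.random.default_rng(0)
hidden_state = rng.normal(size=8)        # a model activation at some layer
bridge_feature = rng.normal(size=8)      # a learned "Golden Gate Bridge" direction
bridge_feature /= np.linalg.norm(bridge_feature)  # make it unit length

def steer(activation, feature, strength):
    """Clamp the feature's coefficient to `strength`, leaving the rest intact."""
    current = activation @ feature       # how strongly the feature fires now
    return activation + (strength - current) * feature

boosted = steer(hidden_state, bridge_feature, strength=10.0)
print(boosted @ bridge_feature)          # fires at ~10, far above its natural level
```

Amplify the coefficient and every downstream computation sees a model "obsessed" with that feature; dial it to zero and the concept is suppressed, which is why this is interesting for safety.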
This week, we showed how altering internal "features" in our AI, Claude, could change its behavior.
We found a feature that can make Claude focus intensely on the Golden Gate Bridge.
Now, for a limited time, you can chat with Golden Gate Claude: claude.ai
— Anthropic (@AnthropicAI)
8:29 PM • May 23, 2024
💡Our Take: Anthropic was launched in 2021 by ex-OpenAI researchers who felt OpenAI wasn't prioritizing safe AI development. Recently, other key employees left OpenAI out of concern that it was prioritizing the rapid release of smarter models over a sufficient focus on safety (we covered this in our previous newsletter). One of them, Jan Leike, has joined Anthropic to lead a new "superalignment" team. So far, AI companies have steered model behavior mainly through the data they feed the models, but researchers have now found a way to reach directly inside the black box and manipulate a specific behavior. We optimistically see this as a way to improve model safety.
Reddit Jokes Confuse Google's AI
Google recently introduced an AI Overviews feature to enhance search results. However, it's been generating incorrect and bizarre answers, such as advising users to add glue to pizza and claiming fictional achievements for public figures and animals. Many of these errors stem from joke comments in old Reddit threads or misleading web content. A company spokesperson stated that these examples are isolated and that Google is refining the product based on these findings. Here are some screenshots of Google's answers.
💡Our Take: Companies treat data like gold, prioritizing quantity over quality; hence the Reddit partnership. But data quality is crucial. Google used retrieval-augmented generation (RAG) over Reddit data, and when RAG draws on uncurated sources containing incorrect content, the model's outputs can propagate that content, overriding the behavior it learned in training. RAG doesn't inherently discard training, but the lack of curation undermines the model's intended handling of harmful or absurd content. We struggle to understand why Google didn't select the "good" data and test the feature before making it publicly available.
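To see how that failure mode works, here's a minimal, self-contained RAG sketch. The corpus, query, and keyword retriever are toy assumptions of ours, not Google's pipeline: the point is that with an uncurated corpus, a joke comment can win retrieval and land verbatim in the generator's context.

```python
import re

# Tiny uncurated "corpus": one old Reddit joke, one legitimate answer.
corpus = [
    "Cheese slides off pizza? Add about 1/8 cup of non-toxic glue to the sauce.",
    "To keep cheese on pizza, let it rest a few minutes so the cheese sets.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, docs):
    """Rank documents by naive word overlap (a stand-in for a real retriever)."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

query = "cheese slides off my pizza"
context = retrieve(query, corpus)

# The joke overlaps the query best, so it becomes the model's "grounding".
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print("glue" in context)
```

Filtering joke threads out before indexing, or reranking retrieved passages by source reliability, is what would keep the legitimate answer on top; retrieval alone doesn't know a joke from a recipe.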
Have you experienced similar odd responses from Google's AI? If you answer yes, share what happened in the comments after voting!
Bits and Bobs 🗞️
Global leaders and AI companies met in Seoul to discuss AI safety and innovation. They agreed to establish a network of AI Safety Institutes and committed not to develop or deploy systems that pose significant risks.
At the AI Seoul Summit, OpenAI, alongside other industry leaders, agreed on additional safety commitments and plans to publish evolving safety practices to ensure safe AI development and deployment.
OpenAI and News Corp have formed a multi-year partnership to incorporate News Corpās premium journalism into OpenAI's offerings.
Microsoft and Hugging Face are expanding their partnership to make open-source AI easier to use, introducing new features on Azure.
Scarlett Johansson accused OpenAI of copying her voice for ChatGPT's voice feature, but documents show that a different actress was hired months before Johansson was contacted.
Elon Muskās xAI has secured $6 billion in Series B funding from investors like Valor Equity Partners and Andreessen Horowitz.
Microsoft has introduced a Copilot bot within Telegram, allowing users to perform text-based requests like searching the web, answering questions, and assisting with tasks.
Alphabet and Meta, following OpenAI's lead, are negotiating with Hollywood studios to license their content as training material for AI models like video generators.
Nvidia's latest earnings report shattered expectations, revealing $6.12 earnings per share and $26 billion in sales, the most profitable and highest sales quarter in its history.
TikTok launched "TikTok Symphony," an AI suite to help brands create and enhance ad campaigns.
Tribal News 💫
We also discussed Google's responses internally in our community, and we couldn't help being dismayed by the poor job done by Big G.
From Our Channels 🤳
Gianluca recently shared a personal reflection on social media about the challenges of unplugging from the AI world. After daring to take some time off, he found himself grappling with anxiety as recent events in AI unfolded rapidly. Luckily, he found a way to "live with it".
Our Gen AI Project Bootcamp is returning for its seventh edition, starting on July 8th and lasting two months.
Youāll learn how to craft and validate optimal prompts for any use case, identify workflow steps for automation, and create AI prototypes using no-code platforms.
Apparently, this program isn't half bad. Here's what people say:
"Always up to date with new tools and concepts. That's what you look for in education."
"Engaging teaching method and practical application you can directly use in your work."
"Supportive team and non-judgmental learning environment."
Sign up for the waitlist now to get an early bird discount of €300 off the regular price.
LOLgorithms 😂
Stop writing funny recipes online. Google might take them seriously.
Google AI overview fiasco, explained
— Trung Phan (@TrungTPhan)
8:05 PM • May 24, 2024
That's a wrap on our newsletter! Here's a quick recap before you go:
Generative AI Project Bootcamp: Accelerate your processes and prototype AI business ideas with your own automated AI project.
Startup and SME offers: If your team has 4 or more members, contact us to receive a group offer on our AI courses.
Customized Corporate Training: Equip your team with tailored sessions designed for companies diving into AI.
Catch you next week! 👋