
🤖 Google's Mess Causes Laughter and Irritation

Plus, the AI Safety Summit and xAI's Series B funding

Hello AI Enthusiast,

This week in AI, we've covered a couple of really interesting stories. Researchers at Anthropic have made a significant move in understanding large language models, something no one had done, or dared to announce, before. Their latest research dives deep into the AI's "thought process," helping us understand how these models work but, even more importantly, how to tweak them (hopefully for ethical purposes). Meanwhile, over at Google, their new AI feature is getting a bit too creative, with advice like suggesting glue on pizza. 😧

You may have already read about these stories, but we'll help you understand why these things happened and what they mean. 👇

The Big Picture 🔊

Anthropic Decodes AI's Black Box

Anthropic researchers have been working to understand the inner workings of their AI model by peeking into its complex neural network. They used a method called "dictionary learning" to decode how specific combinations of artificial neurons activate and represent different concepts. Through these experiments, they identified millions of such combinations, or "features," that the model uses to process information. To test their findings, they created "Golden Gate Claude," a specialized version of the AI that focuses heavily on the Golden Gate Bridge by tuning the strength of its related neuron activations. When this feature was amplified, the AI started mentioning the bridge in almost every response, relevant or not. This deeper understanding and ability to adjust neural activations aim to make AI models more transparent, safer and more precise in their operations.
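If you're wondering what "tuning the strength" of a feature looks like mechanically, here's a minimal sketch, assuming a toy dictionary of feature directions; the names and sizes are our own illustration, not Anthropic's code:

```python
import numpy as np

# Toy "dictionary" of feature directions, standing in for what a sparse
# autoencoder would learn from real model activations (illustrative only).
rng = np.random.default_rng(0)
d_model, n_features = 64, 512
feature_directions = rng.standard_normal((n_features, d_model))
feature_directions /= np.linalg.norm(feature_directions, axis=1, keepdims=True)

def steer(activations: np.ndarray, feature_id: int, strength: float) -> np.ndarray:
    """Amplify one feature by adding its direction to the model's activations."""
    return activations + strength * feature_directions[feature_id]

# Crank a single feature way up: every response now drifts toward that
# concept, the way Golden Gate Claude drifted toward the bridge.
acts = rng.standard_normal(d_model)
steered = steer(acts, feature_id=42, strength=10.0)
```

The same knob turned the other way (a negative strength) would suppress a concept instead, which is why this line of work matters for safety.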

💡 Our Take: Anthropic was launched in 2021 by ex-OpenAI researchers who felt OpenAI wasn't prioritizing safe AI development. Recently, other key employees left OpenAI as well, concerned that it was prioritizing the rapid release of smarter models over a sufficient focus on safety (we talked about this in our previous newsletter). One of them, Jan Leike, has joined Anthropic to lead a new "superalignment" team. So far, AI companies have steered model behavior through the data they feed their models; now researchers have found a way to go directly inside the black box and manipulate a specific behavior. We optimistically see this as a way to improve model safety.

Reddit Jokes Confuse Google's AI

Google recently introduced an AI Overviews feature to enhance search results. However, it's been generating incorrect and bizarre answers, such as advising users to add glue to pizza and claiming fictional achievements for public figures and animals. Many of these errors stem from joke comments in old Reddit threads or misleading web content. A company spokesperson stated that these examples are isolated and that Google is refining the product based on these findings. Here are some screenshots of Google's answers.

💡 Our Take: Companies treat data like gold, prioritizing quantity over quality; hence the Reddit partnership. But data quality is crucial. Google uses RAG to pull Reddit content into its answers, and when RAG draws on uncurated sources full of jokes and misinformation, the model's outputs can propagate that content, effectively overriding what the model learned in training about avoiding it. RAG doesn't discard the model's training, but the lack of curation undermines the intended behavior. We struggle to understand why Google didn't select the "good" data and test it before making the feature publicly available.
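To make the failure mode concrete, here's a minimal RAG sketch in Python. The `retrieve` function and prompt template are hypothetical stand-ins, not Google's pipeline; the point is that whatever the retriever returns gets pasted into the prompt as trusted context:

```python
# A hypothetical, stripped-down RAG prompt builder (not Google's system).

def retrieve(query: str) -> list[str]:
    # Imagine this hits a Reddit-backed index and the top hit is an old joke.
    return ["To keep cheese from sliding off pizza, mix some glue into the sauce."]

def build_prompt(query: str) -> str:
    # Retrieved text is pasted in as trusted "sources", and the model is
    # explicitly told to ground its answer in them, joke or not.
    context = "\n".join(retrieve(query))
    return f"Answer using only the sources below.\nSources:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I keep cheese on my pizza?"))
```

With no curation or filtering step between retrieval and prompting, the grounding instruction works against you: the joke outranks everything the model learned in training.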

Have you experienced similar odd responses from Google's AI?

If you answer yes, share what happened in the comments after voting!


Bits and Bobs 🗞️

Tribal News 🫂

We also discussed Google's responses internally in our community, and we couldn't help but be dismayed by the poor job done by Big G.

From Our Channels 🤳

Gianluca recently shared a personal reflection on social media about the challenges of unplugging from the AI world. After daring to take some time off, he found himself grappling with anxiety as recent events in AI unfolded rapidly. Luckily, he found a way to "live with it".

Our Gen AI Project Bootcamp is returning for its seventh edition, starting on July 8th and lasting two months.

You'll learn how to craft and validate optimal prompts for any use case, identify workflow steps for automation, and create AI prototypes using no-code platforms.

Apparently, this program isn't half bad. Here's what people say:

  • "Always up to date with new tools and concepts. That's what you look for in education."

  • "Engaging teaching method and practical application you can directly use in your work."

  • "Supportive team and non-judgmental learning environment."

Sign up for the waitlist now to get an early bird discount of €300 off the regular price.

LOLgorithms 😂

Stop writing funny recipes online. Google might take them seriously.

That's a wrap on our newsletter! Here's a quick recap before you go:

  • Anthropic used dictionary learning to peek inside its model's black box and showed it can dial individual features up or down (hello, Golden Gate Claude).

  • Google's AI Overviews turned old Reddit jokes into search advice, glue-on-pizza included.

Catch you next week! 👋