
🤗 Hugging Face's Open-Source Challenge to GPT Store

Plus the NYT’s AI team and AI image watermarks

Hello AI Enthusiast,

In our latest roundup of AI news, we've noticed a pattern emerging among tech giants and the US government that points to a more cautious approach to AI oversight. From the Biden administration's mandate on AI safety disclosures to the NYT’s new AI team, Microsoft's partnership with a news website, and Big Tech's watermarking of AI images, it's clear there's an effort to handle AI responsibly. We're keen to hear your thoughts on this in the news section.

📣 Before we unpack today's news selection, we're excited to announce the fifth edition of our Master in Prompt Engineering! The course teaches participants the art of effective prompting and how to develop AI prototypes to boost efficiency at work, all without writing a single line of code.

One-third of the spots have already been filled by those with early access from our waiting list, and an early-bird discount of €300 is still available. If you're keen to join this innovative journey, make sure to secure your spot ASAP.

And now, on with this week’s juicy news list. Happy reading! 📰

News Bytes 🗞️

  • The Biden administration is implementing a new requirement for developers of major AI systems to disclose their safety test results to the government. This has interesting implications. First, the US government seems to be changing strategy, moving from the usual “markets can self-regulate” stance to a more involved position on AI. The move also suggests a form of shared liability: tech companies run the safety tests, and the government sits in an approving position.

  • The New York Times, under the leadership of Zach Seward, is putting together a team to bring AI into its journalism. Comprising machine learning experts, engineers, designers, and journalists, the team will follow guidelines for using generative AI that emphasize that human reporting remains paramount. The formation of this team underscores the newspaper's internal exploration of AI: rather than opposing AI per se, the focus appears to be on harnessing its benefits while safeguarding how its data is used.

  • The NYT recently sued Microsoft for copyright infringement, so Microsoft has decided to take a different route. The company is teaming up with Semafor, a news website, on an initiative called "Signals" to create news crafted by humans but powered by AI research tools. Creating a third entity provides room for trial and error, letting Microsoft test AI within journalism and collect additional data to improve its technology.

  • Tech giants Google, Meta, and OpenAI are all rolling out features to make AI-generated content easier to identify. Google has expanded Bard, its language model, to create images marked with SynthID watermarks; Meta is labeling AI-generated content with "Imagined with AI" across its platforms; and OpenAI is embedding C2PA metadata into images generated through its API and ChatGPT. These digital IDs will help users tell AI creations apart in an age of ever more AI content and deepfakes.

Looking at these recent moves around safety and the accuracy of information, the timing stands out: they could well be precautions for the upcoming elections, aimed at curbing the spread of disinformation and misinformation.

  • The Browser Company is introducing innovative AI-powered features in its Arc browser to revolutionize the user experience. The "Instant Links" function uses AI to find and surface specific content directly, bypassing search engines, while the upcoming "Live Folders" feature will curate real-time updates from user-selected sources. The company leverages its flexibility to experiment with the user experience, a move that could threaten the business of a giant like Google, and it signals an attempt to redefine traditional browsing.

  • Hugging Face, the startup that offers a huge repository of open-source¹ AI code and frameworks, has launched customizable Hugging Chat Assistants that let users build their own AI assistants on top of a variety of open-source language models. It also provides a central hub where users can explore and use chat assistants created by others. Unlike OpenAI's GPT Store, which is a paid service and relies on proprietary² models, Hugging Face's alternative gives users a range of open-source language models, offering both affordability and wider choice (see the short code sketch after the poll below for how simple querying such a model can be).

Are open-source models the best possible solution in the field of AI chat assistants?

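Curious how approachable open-source models are in practice? Below is a minimal Python sketch, assuming you have a (free) Hugging Face access token and the huggingface_hub package installed, that sends a custom instruction plus a question to a hosted open-source model. The model ID, token placeholder, and cooking-assistant prompt are illustrative choices on our part, not a description of how Hugging Chat Assistants work internally.

```python
# Minimal sketch: querying an open-source chat model via Hugging Face's
# hosted Inference API. Requires `pip install huggingface_hub` and a
# Hugging Face access token (free to create on huggingface.co).
from huggingface_hub import InferenceClient

# Example open-source model; swap in any hosted instruct/chat model you like.
client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    token="hf_...",  # replace with your own token
)

# A lightweight "assistant" is really just a reusable instruction
# plus the user's question.
assistant_instructions = "You are a friendly cooking assistant. Answer briefly."
user_question = "What can I cook with tomatoes, garlic, and spaghetti?"

# Mistral's instruct models expect the [INST] ... [/INST] prompt format.
prompt = f"[INST] {assistant_instructions}\n\n{user_question} [/INST]"

# Single-shot text generation against the hosted model.
reply = client.text_generation(prompt, max_new_tokens=200, temperature=0.7)
print(reply)
```

Hugging Chat Assistants give you this kind of setup through a web interface with no code at all; the sketch simply shows what "an open-source model plus your own instructions" boils down to.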

  • Amazon is beta testing Rufus, a generative AI-powered shopping assistant that answers customer questions, provides comparisons, and makes recommendations based on product information. It's built on a unique dataset spanning Amazon's retail catalog, customer reviews, Q&As, and the web, so it can surface exactly what customers need. Amazon aims to deliver the most personalized shopping experience on the web.

  • After announcing the integration of ChatGPT into its voice assistant a few weeks ago, German automaker Volkswagen has launched a new AI lab, which will serve as a research and development hub for exploring AI breakthroughs in automotive innovation. The lab aims to optimize electric vehicle charging, enable predictive maintenance, and enhance in-car voice recognition. With this move, VW aims to become less dependent on external AI software and rely more on its own technology.

  • Following criticism for lifting its ban on military and warfare applications, OpenAI conducted a study to evaluate the risk of large language models (LLMs) being used to assist in the creation of biological threats. The evaluation involved both biology experts and students, and it found that access to GPT-4 provided “only” a mild improvement in accuracy on biological threat creation tasks. Is this enough to clear OpenAI’s name?

  • Google Maps is launching an early access experiment in the U.S. that utilizes generative AI to analyze its extensive database and provide personalized suggestions for places to go. Users can simply state their preferences, such as a specific vibe or activity, and Maps will generate trustworthy recommendations based on the information available. Existing tools are getting smart upgrades.

  • Microsoft has extended its AI capabilities with Copilot for Sales and Copilot for Service, a sophisticated application of its ChatGPT-based Copilot. The tools prep meeting briefs, summarize lengthy email threads, and integrate with sales and customer-service platforms for a more efficient workplace dynamic. Business applications could be a key driver of wider adoption.

  • After 275 years, a global team of competitors and collaborators has successfully read a 2,000-year-old scroll from the Herculaneum Papyri using computer vision and machine learning techniques. The scroll contains never-before-seen text about pleasure and the enjoyment of life in the ancient Roman world, and there is hope that even more scrolls will be unlocked.

This week’s glossary 📖

  1. Open-Source Models: Open-source models are freely available for anyone to use, change, and share. In AI, this means people can work together to improve and innovate on technology, making it more accessible and transparent.

  2. Proprietary Models: These models are privately owned and not shared with the public. In AI, companies use them for exclusive benefits, keeping their advancements and technologies secret to maintain a competitive edge.

Educational Pill 💊

SynthID: The Secret Marker for AI Creations

SynthID, by Google DeepMind, is a digital watermarking solution for AI-generated content. Google built it to help us tell whether something we see or hear was made by a computer. The marker is special because, even though we can't see or hear it, machines can spot it easily.

Using this marker, SynthID can put a hidden sign on pictures or music without changing how they look or sound to us. Even if someone tries to change the picture or song, like adding a filter or making it louder, the hidden sign is designed to stay there. That means we can check whether something was made by a computer, helping us trust what we find online a bit more.
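SynthID's actual technique is proprietary, so we can't show the real thing, but the core idea of "invisible to people, readable by machines" can be made concrete with a toy sketch. The Python snippet below hides a short text tag in the least significant bits of an image's pixels; unlike SynthID, this naive approach would not survive filters, resizing, or compression, so treat it purely as an illustration of the concept.

```python
# Toy illustration of an invisible watermark (NOT SynthID, which is proprietary
# and far more robust). We hide a short text tag in the least significant bit
# of each pixel channel, which is imperceptible to the human eye.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(image: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide `tag` (as bits) in the least significant bits of the image."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()  # flatten() returns a copy we can safely modify
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite lowest bit
    return Image.fromarray(flat.reshape(pixels.shape))

def read_tag(image: Image.Image, length: int = len(TAG)) -> str:
    """Recover a hidden tag of known length from the image."""
    flat = np.array(image.convert("RGB"), dtype=np.uint8).flatten()
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# Example: mark a blank image, then check it the way a detector would.
marked = embed_tag(Image.new("RGB", (64, 64), "white"))
print(read_tag(marked))  # -> "AI-GENERATED"
```

A production watermark like SynthID spreads its signal across the whole image so that it is designed to survive common edits, and it is read back by a trained detector rather than a simple bit reader.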

LOLgorithms 😂

We already have our favorite Hugging Face Assistant! 🍝

That is the end of our newsletter.

Remember, if your company is looking to implement AI technologies, we also offer customized corporate training.

See you next week. 👋