🔄 700 Jobs Out, One AI In: Klarna's Controversial Move

Plus Microsoft's European partnership, Apple's disinvestment, and Google's fiasco

Hello AI Enthusiast,

In last week's poll, we asked for your thoughts on OpenAI's latest innovation, the text-to-video model Sora, and whether it will boost creativity or mainly fuel misinformation and ethical dilemmas. A 59% majority of you believe Sora will indeed enhance creativity, showing real optimism about the future of AI in artistic expression. In today's edition, we look at a significant consequence of Sora's release: a notable shift in the cinema sector, with investments being paused as the industry contemplates the model's implications. But that's not all; we also bring you juicy news about Microsoft's latest partnership, Apple's strategic disinvestment, and another fiasco at Google.

Before we jump into today's news, just a quick reminder: there are only a few days left to grab a spot in our Master in Prompt Engineering course. It's a mix of on-demand lessons and live sessions where we'll show you how to get ChatGPT and other LLMs to give you the answers you actually want. Plus, we'll teach you how to use those skills to build an AI MVP from the ground up, make the boring parts of your job way easier, and impress your boss or clients.

And if you're running a startup or a small business, we've got something special for you. Get in touch with us for discounts on group sign-ups, and let's take your team's skills to the next level together. Reach Out for Group Discounts

Now, on with the news. 👇

News Bytes 🗞️

  • Apple is shutting down its nearly decade-old self-driving vehicle project, known as "Project Titan", which involved nearly 2,000 employees, even though it had recently ramped up testing in California. The project's termination marks a significant shift for the company, with resources being redirected to its generative AI efforts.

    • 💡 Our take: Apple's pivot from self-driving cars to generative AI suggests a strategic choice to target a domain with more immediate potential for growth and influence. The reason for abruptly halting the car project is unclear: Apple has been testing autonomous vehicles in Silicon Valley for years, but it typically unveils finished products ready for the public rather than prototypes. The company may simply have concluded it couldn't get the car to work anytime soon.

With Apple moving away from its self-driving car project to focus on generative AI, a field where it's been less active compared to other tech giants, do you think their entry could raise the quality bar for generative AI technologies?

  • Microsoft has invested roughly $16M in Paris-based AI startup Mistral AI, known for Mistral Large, its potential GPT-4 rival. The funding, which converts into equity in Mistral's next investment round, also underpins a distribution partnership that brings Mistral's models to Azure, expanding the options available to customers in the Microsoft ecosystem. The deal between two major AI players has already caught the European Commission's eye, and it will be part of the Commission's ongoing scrutiny of big-tech collaborations with generative AI companies.

    • 💡 Our take: Microsoft's partnership with Mistral AI is a key step in advancing its AI-powered tools and in positioning Microsoft as a leader in the AI marketplace. It also highlights a deliberate strategy: a European partner in Mistral alongside the US partnership with OpenAI, balancing geopolitical interests across major regions. However, Microsoft's growing AI investments, this new alliance included, are attracting regulatory scrutiny, mirroring the attention drawn by its collaboration with OpenAI.

  • Swedish fintech Klarna says its OpenAI-powered AI assistant is doing the work of 700 full-time agents, handling two-thirds of all customer service chats, an estimated 2.3 million conversations. Despite the controversy over its earlier layoff of roughly 700 employees, the company insists the AI's productivity gains are not connected to that workforce reduction, framing them instead as a sign of how AI is transforming workforces more broadly.

    • 💡 Our take: On the surface, this looks like AI taking over jobs that people used to do. In 2022, Klarna ran into financial trouble and its investors pushed for cost cuts, so the company let go of about 10% of its employees. The OpenAI partnership then paid off: the assistant now does work equivalent to roughly 700 employees, mostly in customer support. That said, when we looked at Trustpilot reviews, many customers were especially unhappy with Klarna's customer service.

Would you like us to dig deeper into how Klarna is using AI and its impact on the quality of its customer service?

  • Google's AI model Gemini recently caused controversy by injecting unwarranted diversity into generated images and distorting historical context, for example depicting America's Founding Fathers as a multicultural group. Google explains the behavior as an attempt to counteract biases inherent in the training data and to promote diversity for generic queries, while critics pointed to the need for context-appropriate instructions, blaming the tech giant's tuning choices rather than the AI itself. If you want to take a look at these anachronistic images, here's our Instagram post sharing some of them along with the prompts that generated them.

  • The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) has chosen Scale AI to develop a method for testing and evaluating large language models that could influence military operations. By creating a framework that measures model performance and provides rapid feedback, the one-year project aims to enable safer deployment of AI and could reshape how large language models are used in military applications.

    • 💡 Our take: On January 10th, OpenAI dropped the blanket ban on "military and warfare" uses from its usage policies; the rules now only prohibit using its technology to develop or use weapons. Besides the CDAO, Scale AI also works with big names like Meta, Microsoft, the U.S. Army, the Defense Innovation Unit, General Motors, Toyota Research Institute, Nvidia, and OpenAI itself. The timing is telling: interest in military applications of AI has clearly picked up since those rules were relaxed.

  • Nvidia is set to bring AI capabilities on the go with their latest RTX 500 and 1000 Ada Generation laptop GPUs. These portable powerhouses, engineered on the Ada Lovelace architecture, tackle everyday to heavy-duty AI tasks—broadening AI's reach in laptops from Dell, HP, Lenovo, and MSI. With Nvidia expanding from big-league AI labs to personal devices, AI applications and generative models will be faster and more accessible. Nvidia's move, as mentioned in our last newsletter, underlines its vision of pushing AI technology use locally, rather than solely relying on the cloud.

  • Tyler Perry has halted an $800 million expansion of his Atlanta film studio due to concerns sparked by the advance of AI technologies such as OpenAI’s Sora. Perry, impressed by Sora's text-to-video capabilities, voices fears for the jobs at risk due to automation in the movie industry, insisting on the need for regulations to protect livelihoods.

  • Google is prioritizing AI safety with the formation of its AI Safety and Alignment organization and a new team focused on AGI safeguards. The team aims to address concerns around deepfakes and misinformation and to improve the safety of Google's AI models. The move reflects growing recognition of the importance of responsible AI development in light of the risks and challenges ahead.

  • Adobe has unveiled an AI assistant for Acrobat, still in beta, offering users the ability to interact with their PDF documents directly. The assistant can provide section summaries, answer questions based on the document's content, and help create further content like emails and meeting notes. Adobe ensures data privacy, stating no data is used for training purposes, but it's unclear whose language models power the tool.

  • Stability AI has revealed Stable Diffusion 3, its new image generation model, with improved text-to-image capabilities. The model is reportedly better at spelling out text within images and at handling prompts with multiple subjects, and Stability AI says it honored creators' requests to exclude their data from training, something AI companies have had a hard time doing so far.

Educational Pill 💊

Understanding AI Models - Mistral's Large vs. GPT-4

In today's tech spotlight, Microsoft's $16M investment in Mistral AI brings the Paris-based startup's flagship language model, Mistral Large, into the limelight. Mistral positions Large as a formidable GPT-4 rival, claiming it is the second-best AI model generally available via API. While it may not outperform GPT-4 in every respect, Large's competitive edge is its speed, roughly matching that of GPT-3.5.

GPT-4's sophistication is undeniable, but trading a little accuracy for Large's extra speed can be the deciding factor when you are building applications where response time significantly impacts user satisfaction.

Understanding the nuances between these models can empower you to make informed decisions when integrating AI technologies into your projects or workflows. Knowing the strengths of each model allows you to tailor your AI strategy to meet specific needs effectively.
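
To make that speed-versus-quality tradeoff concrete, here is a minimal Python sketch of a routing helper that picks the strongest model that still fits a latency budget. The quality scores and latency figures below are illustrative assumptions for the sake of the example, not benchmarks, and the model names are just labels.

```python
from dataclasses import dataclass

# Illustrative model profiles -- the numbers are assumptions, not measured benchmarks.
@dataclass
class ModelProfile:
    name: str
    relative_quality: float   # rough, self-assigned capability score (1.0 = best)
    typical_latency_s: float  # rough end-to-end response time in seconds

GPT_4 = ModelProfile("gpt-4", relative_quality=1.00, typical_latency_s=8.0)
MISTRAL_LARGE = ModelProfile("mistral-large", relative_quality=0.95, typical_latency_s=3.0)

MODELS = (GPT_4, MISTRAL_LARGE)

def pick_model(latency_budget_s: float) -> ModelProfile:
    """Return the highest-quality model that still fits the latency budget."""
    candidates = [m for m in MODELS if m.typical_latency_s <= latency_budget_s]
    if not candidates:
        # Nothing fits the budget: fall back to the fastest model available.
        return min(MODELS, key=lambda m: m.typical_latency_s)
    return max(candidates, key=lambda m: m.relative_quality)

print(pick_model(latency_budget_s=10.0).name)  # gpt-4: quality wins when time allows
print(pick_model(latency_budget_s=5.0).name)   # mistral-large: the latency budget decides
```

In practice you would replace these illustrative profiles with latencies measured against your own prompts, since both models' response times vary with load and output length.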

LOLgorithms 😂

History according to Google.

That is the end of our newsletter.

Remember, if your company is looking to implement AI technologies, we also offer customized corporate training.

See you next week. 👋