Turing Post

FOD#2: Money and VC hype, Google, OpenAI, Anthropic, Meta and others update + the era of ephemeral software

Get through 100,500 AI newsletters all at once

Oy! What a week! Blink, and you'll miss a new large language model or the latest debate about AI enslaving us.

Seriously, this week was extremely fruitful with fascinating innovations. Let's dive into what 150+ newsletters have been musing about. We call it “Froth on the Daydream” – or simply, FOD. It's a reference to the surrealistic, experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of the writing on this topic is just froth on the daydream.

Today, we will be building connections and making sense of the AI world through the lens of various companies.

But first, let’s talk about…

Money and VC hype


Remember when the Long Island Iced Tea Corp. changed its name to Long Blockchain Corp, causing its share price to spike by as much as 380%? We're seeing a similar effect now with the addition of 'AI' to anything. This seems to be enough to attract investors, especially considering Statista's prediction of a global AI market exceeding $1 trillion by 2030. However, this leads many to ponder at night, 'Where should I invest my money?' Not a Bot reminds about Gartner Hype Cycle, “a fascinating phenomenon we’ve seen with almost every piece of emerging technology that has come out in the past two decades.” Newcomer shares concerns about overhype in the AI industry: “The AI euphoria offers a mask for just how bad the rest of the startup industry really is. While I believe that large language models are genuinely a profound technological advancement — there is also a reality that this AI hype cycle handed investors exactly what they wanted. Forget about a new technology platform — they wanted an excuse to get back to momentum investing.”

Harry Stebbings, the investor and creator of 20VC, posts: “There are simply not enough AI assets to absorb the immense wall of cash coming for AI companies. Prediction: This will be worse than the dot com bubble in terms of lost dollars in hype companies.”

As we used to say during the fat crypto days: When lambo?

Opportunities will certainly arise for some savvy founders. Despite SoftBank reporting a $32 billion loss in its tech-focused Vision Fund, the company is gearing up to launch an AI investment fund. So, prepare your pitch decks, my AI enthusiasts.

In contrast, without any venture capital, Caryn Marjorie, a 23-year-old influencer, has created an AI avatar/chatbot that generated $71,610 in revenue in just a week. It's interesting to note that most of the subscribers are men.

Here's a piece of free financial advice: Gentlemen, when investing in AI hype and/or spending on AI chatbots, use your prefrontal cortex.

Let’s shift our focus back to the original models, the language models, and the products based on them.

Google and a duck with lipstick

From “Good God, was it impressive” to “Generative AI appears to be a technology trend where Google is struggling to find its competitive edge”, the Google I/O presentation has not left anyone indifferent. We noticed two things:

  • There was a dancing duck on stage wearing lipstick.

  • You can add balloons to your photos and make the sky bluer, as if that were more natural.

I mean, it's obviously all about naturalness, and what do we really know about ducks? In the latest Guardians of the Galaxy movie, the villain says, 'There is no god! That's why I had to step in.' I don't appreciate companies stepping in to alter my photos, and with them my memories. I want my memories to remain intact. It echoes MIT Technology Review: “How far do we want to go here? What’s the end goal we are aiming for? Ultimately, do we just skip the vacation altogether and generate some pretty, pretty pictures? Can we supplant our memories with sunnier, more idealized versions of the past?”

Having said all that, I'm a devoted user of Google and am cheering for their AI efforts. For what seemed like an eternity in the fast-paced world of AI (where weeks can feel like decades), Google appeared to lag behind the bold, young OpenAI. Now, however, they seem to be placing AI at the core of their business strategy. Why Try AI makes an important point: “OpenAI—a relative unknown with a small pre-existing user base—could afford to launch a shiny new AI chatbot, even if it famously hallucinates and is often bad at math, logic, and more. Microsoft could live with its early iteration of Bing going off the rails, gaslighting users, and professing love to a journalist. After all, Bing chat was a deliberate play at chiseling away at Google’s absolute dominance in search. Microsoft had little to lose and everything to gain in that space. Google, on the other hand, had no such luxury.”

Not only because it's so responsible, but also because there was (and still is) a significant risk to its search-ad business model. Semafor puts it this way: “The more radical way of looking at Google’s search conundrum — and one that would probably get the company in trouble with Wall Street — is whether it’s open to cannibalizing its business, and the ad dollars that go with it, by building a search engine reimagined from the ground up.”

So, what has Google enhanced with AI?

  • Search is becoming interactive, answering questions in a conversational manner like ChatGPT.

  • AI will be integrated into Gmail, Docs, Sheets, and Slides, acting as your assistant.

  • Maps will offer an 'Immersive View for Routes.'

  • MedPaLM is designed specifically for the medical domain.

  • MusicLM brings musical ideas to life.

  • Developers can code with Studio Bot.

  • Bard is now available to everyone and connected to all Google products as well as third-party services such as Khan Academy, Instacart, OpenTable, Kayak, etc. It's powered by Google's latest PaLM 2 model and is integrated into over 25 new Google products.

  • Google has also quietly collaborated with DeepMind to release a next-generation multimodal model called Gemini.

While none of this may seem particularly mind-blowing, The Platformer notices: “Viewed one way, some of this stuff can feel pretty mundane. But in the near term, this is how AI is going to start working its way into our lives. Soon enough, we probably won’t think of it as AI anymore.”

I see this as a highly useful starting point for many tasks. In one of the interviews/podcasts, Sam Altman mentioned that he uses ChatGPT mostly for text summarization. This might seem boring, considering the myriad possibilities for human enhancement that exist. AI is, and should be, a tool, and Google presents it as such.

Additionally, The Rundown noticed that “Google mentioned ‘AI’ 143 times in today’s presentation, and Google stock increased by $56 billion.”

OpenAI and more more more

"Never stop" seems to be Sam Altman's motto. He manages to fit a few interviews and podcasts into his schedule each week, while his company continually surprises us with new updates and features. Just last week, it announced two major upgrades to ChatGPT: over 70 third-party plugins and a web browsing feature. I've yet to see the web browsing feature in my paid version; word on the street is that it's still in beta. One of the most impressive integrations is with Wolfram|Alpha, which should prevent any awkward moments when ChatGPT stumbles on equations. Another notable plugin is ChatWithPDF, which lets you engage in a meaningful conversation with a recently downloaded PDF when there's no one else to talk to. The Neuron raises an important point: “All of these tools are beneficial, but we're uncertain how they'll fare against business platforms that are quickly incorporating AI into their own products.” The Rundown urges: “Making a ChatGPT plugin right now is probably the equivalent of the opportunity of creating an app for the iPhone App Store at inception.” You can join the plugin-developer waitlist here.

Anthropic makes Claude’s digestion much better

Anthropic's chatbot, Claude, has significantly improved its ability to process information. It can now handle up to 100,000 tokens of context, roughly 75,000 words (OpenAI's GPT-4 tops out at 32K tokens, which is always hugely frustrating). Claude will be able to digest, summarize, and reference information from hundreds of pages in seconds. The Neuron thinks that “the feature truly makes Anthropic a real competitor to OpenAI - plus, it just killed a dozen 'chat with your PDF' tools.”
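If you're wondering how those two numbers relate: a common rule of thumb is that one token corresponds to about 0.75 English words, which is how 100K tokens works out to roughly 75,000 words. Here's a minimal back-of-the-envelope sketch; the 0.75 ratio is an approximation, not Claude's actual tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: 1 English word is ~1.33 tokens on average."""
    return round(len(text.split()) / 0.75)

def fits_context(text: str, context_window: int = 100_000) -> bool:
    """Check whether a document likely fits in a model's context window."""
    return estimate_tokens(text) <= context_window

# ~75,000 words lands right at Claude's advertised 100K-token limit
doc = "word " * 75_000
print(estimate_tokens(doc))  # → 100000
```

Real tokenizers vary by model and by language, so treat this only as a sanity check before sending a long document to a model.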

Anthropic is also pushing for Constitutional AI. Instead of RLHF (reinforcement learning from human feedback), it wants to program a constitution of values and principles directly into the model. And to do that, it is raising up to $5 billion. In case you have a few to spend.

As for RLHF, it's also quite expensive. Prompts Daily points out the unseen labor behind AI systems: contractors label data and predict text for ChatGPT at $15/hour, and OpenAI has reportedly hired approximately 1,000 remote contractors globally for similar tasks.

Meta doesn’t need the best models

Since the LLaMA leak, Meta has been consistently mentioned in the context of open source. Last week, it announced ImageBind, an open-source AI model that combines six data types (text, audio, images, depth, thermal, and movement readings) into a single tool.

  • ImageBind is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike license (CC BY-NC-SA).

  • LLaMA was released under a Meta-created license that restricts usage to non-commercial research purposes.

  • DINOv2 is licensed under a Creative Commons Attribution-NonCommercial license (similar to ImageBind's, but without the requirement to apply the same license to derivative work).

  • Segment Anything is licensed under an Apache license: there are no restrictions on the use of the code, and further developments carry no license requirements and can even be patented.

Of all these licensing decisions, Stratechery says: “It definitely feels like tangible evidence of the mix of excitement and trepidation I identified in CEO Mark Zuckerberg’s comments on the earnings call: Meta can sense a real opportunity in Open Source, but the company isn’t yet quite sure how to take advantage of it.” It also notes: “Meta is uniquely positioned to overcome all of the limitations of open source, from training to verification to RLHF to data quality, precisely because the company’s business model doesn’t depend on having the best models, but simply on the world having a lot of them.”

Meta is also working hard for its advertisers, introducing AI Sandbox, where copywriters and marketers can play with new AI-powered ad tools. Again, we see AI positioned as a tool, which will soon be the case at basically every company.

Stability AI keeps it open

Another open-source tool was released last week: the Stable Animation SDK lets artists and developers use the most advanced Stable Diffusion models to create animations in three ways: from text prompts alone, from a source image, or from a source video.

IBM also tries to keep up

Watsonx.data, a "fit-for-purpose" data store, and Watsonx.governance, for addressing privacy and ethics, were announced last week too. IBM predicts that AI will add $16 trillion to the global economy by 2030 and automate 30% of back-office tasks within five years. IBM's Rob Thomas is sure that AI won't replace managers, but that AI-savvy managers will replace the rest. Other tech behemoths may already have similar platforms, but let's wish IBM some luck.

Microsoft has been scheming behind the scenes

Bot Eat Brain seeds rumors that Microsoft is helping AMD expand into AI processors, aiming to challenge NVIDIA's dominant 80% market share in the industry. Not a Bot reports that, to maintain its edge in the AI race, Microsoft has made a strategic investment in Builder.ai, an AI software expert that helps businesses order tailor-made software. The expert's name is… Natasha.

I can’t stop…

HuggingFace and its Agents

Ava News is the first to applaud Hugging Face's Transformers Agents. Through this feature, Hugging Face is making 100,000+ models accessible to everyone: “With just a chat, you can control these AI powerhouses to perform multimodal tasks, from image generation to text summarization.” If there is a winner in the open-source game, Hugging Face should be it.

Enough with the companies; let’s make a few other important points:

On the question of AI catastrophe, The Algorithmic Bridge adds an interesting thought: “If there's a tiny probability that pressing the button ends the world and another, equally tiny probability that it saves humanity, there's necessarily an overwhelming majority of futures that are none of those two.” In other words, the most likely outcome is that everything basically stays as it is.

One Useful Thing suggests that AI should not be treated like traditional software, because it is not reliable or predictable in the same way. Instead, it should be treated more like people, with idiosyncratic strengths and weaknesses. Without ever naming the term, the article is really about anthropomorphism, which we covered recently. You can enjoy it here.

And I’d like to end the froth on the daydream with the thought by Inside My Head: “We're entering a world of on-demand software generation at runtime. Apps that will be single-use, highly personal, and instant. This is the era of ephemeral software, and it begins now.”

