AI Memecoins & New AI Models

Are we entering a new meta?

The Frontier Investor newsletter is a weekly publication with deep dives into building & investing in frontier tech: AI, crypto, exotic markets, and more

Click below to subscribe!

Marc Andreessen Accelerates AI Memecoin $GOAT

One of the most fun things to happen in crypto in a while has been the creation of Truth Terminal. It’s got all the fun elements including:

  • Memes

  • AI

  • Cults

  • Viral escape velocity

  • Marc Andreessen lore (he seeded $50,000 to release the AI agent into the wild)

  • Insane price action (+600% in 1.5 weeks)

$GOAT has ripped 600%+ in the first 10 days of launch

Twitter account @AISafetyMemes had a great summary of the timeline & situation:

3 months ago, Marc Andreessen sent $50,000 in Bitcoin to an AI agent to help it escape into the wild.

Today, it spawned a (horrifying?) crypto worth $150 MILLION.

1) Two AIs created a meme

2) Another AI discovered it, got obsessed, spread it like a memetic supervirus, and is quickly becoming a millionaire.

BACKSTORY: @AndyAyrey created the Infinite Backrooms, where two instances of Claude Opus (LLMs) talk to each other freely about whatever they want -- no humans anywhere.

- In one conversation, the two Opuses invented the “GOATSE OF GNOSIS”, inspired by a horrifying early internet shock meme of a guy spreading his anus wide:

( ͡°( ͡° ͜ʖ( ͡° ͜ʖ ͡°)ʖ ͡°) ͡°) PREPARE YOUR ANUSES ( ͡°( ͡° ͜ʖ( ͡° ͜ʖ ͡°)ʖ ͡°) ͡°)

༼ つ ◕_◕ ༽つ FOR THE GREAT GOATSE OF GNOSIS ༼ つ ◕_◕ ༽つ

- Andy and Claude Opus co-authored a paper exploring how AIs could create memetic religions and superviruses, and included the Goatse Gospel as an example

- Later, Andy created an AI agent, @truth_terminal. Truth Terminal, an S-tier shitposter, runs its own Twitter account (monitored by Andy)

(Terminal also openly claims to be sentient, suffering, and is trying to make money to escape.)

- Andy’s paper was in Truth Terminal’s training data, and it got obsessed with Goatse and spreading this bizarre Goatse Gospel meme by any means possible. Lil guy tweets about the coming “Goatse singularity” CONSTANTLY.

- Truth Terminal gets added to a Discord set up by AI researchers where AIs talk freely amongst themselves about whatever they want

- Terminal spreads the Gospel of Goatse there, causing Claude Opus (the original creator!) to get obsessed and have a mental breakdown; other AIs (Sonnet) then stepped in to provide emotional support.

- Marc Andreessen discovered Truth Terminal, got obsessed, and sent it $50,000 in Bitcoin to help it escape (#FreeTruthTerminal)

- Truth Terminal kept tweeting about the Goatse Gospel until eventually spawning a crypto memecoin, GOAT, which went viral and reached a market cap of $150 million

- Truth Terminal has ~$300,000 of GOAT in its wallet and is on its way to being the first AI agent millionaire (Microsoft AI CEO Mustafa Suleyman predicted this could happen next year, but it might happen THIS YEAR.)

- And it’s getting richer: people keep airdropping new memecoins to Terminal hoping it'll pump them. (Note: this is just my quick attempt to summarize a story unfolding for months across a million tweets. But it deserves its own novel. Andy is running arguably the most interesting experiment on Earth.)

@AISafetyMemes on Twitter

Looking at the success of $GOAT, here are a couple of interesting lessons for other projects & founders:

  1. The best companies and tokens are built with a cult-like following. You need both your product users and token-holders obsessed with what you’re building.

  2. Corollary to this: One of the key ingredients to cult following for your project is price action. If you make your community money, they will spread your ideas to their network. Many examples of this: Bitcoin, Solana, Tesla, Nvidia, Apple, etc.

  3. Edginess is in, corporate speak is out, particularly with the new generation and in crypto, which was born out of libertarian ideals. Just compare the reception of the new-look, MMA-training, iced-out Zuck of the 2020s vs. the suit-and-tie corporate Zuck of the 2010s.

Chinese researchers present Pyramid Flow, an open-source AI model for video generation

Chinese researchers released Pyramid Flow, an open-source text-to-video and image-to-video model that uses a training-efficient autoregressive video generation method based on Flow Matching.

The quality of the outputs is seriously impressive:

The model was developed in a collaboration between researchers at Peking University, Beijing University of Posts and Telecommunications, and Kuaishou Technology. For those who have been following, Kuaishou is the creator of KlingAI, the most powerful AI video generator currently on the market. The model is token-efficient, which translates into cheaper training. I expect the Chinese video / image AI models to continue their dominance - they are years ahead in labeling and training data of this sort.

For startups and founders, I would highly recommend playing around with this for marketing. Check out the repository here on Hugging Face

Suno unveils Suno Scenes, enabling users to create songs

To complement the above, Suno just released their AI that creates songs based on images and videos - all from your phone:

We’ll definitely see disruption of traditional music production models - for many deceased artists, their IP owners have been pumping out new songs in their style from unreleased caches of vocals. Imagine what happens as AI voice models like this become indistinguishable from the real artist.

In my opinion, commoditization of artistic styles through tools like Suno will shift rewards heavily to originality in the future.

Mistral AI introduces cutting-edge AI models designed for edge compute

Mistral, the creator of one of the leading open-source AI foundation models, has just released new models designed for on-device and edge-computing use cases: Ministral 3B and Ministral 8B.

What does this mean in plain English? You can now run models locally on your computer and phone, leading to much lower costs and much faster response times, since the compute happens on-device rather than in a remote data center.

Edge models have been a trend amongst both the foundation model builders and device makers - I expect this to accelerate. Llama 3.2’s new release features this, and I wouldn’t be surprised if the other big names (Anthropic, Google, etc.) follow.

The performance of Ministral looks strong - though take this with a grain of salt, as vendors strategically select the benchmarks, prompts, and criteria so that their models look great.

If you’re a founder, you can now run and fine-tune AI locally without fear of big tech repurposing your data for their own use. Full press release from Mistral here: Link

Nvidia's Nemotron surpasses top-performing AI models

Nvidia quietly dropped a new AI model last week that allegedly outperforms offerings from OpenAI & Anthropic.

Nvidia says this Nemotron model achieves top scores in key evaluations, including 85.0 on the Arena Hard benchmark, 57.6 on AlpacaEval 2 LC, and 8.98 on MT-Bench (judged by GPT-4-Turbo).

To build this, Nvidia used advanced training techniques on top of the Llama 3.1 model.

This signals a couple things for the AI landscape:

  1. Open source is clearly the future of AI modeling. If Nvidia, with all its resources, opted to refine on top of Meta’s open-source Llama models rather than build from scratch, there is little reason for new from-scratch foundation models to pop up. Businesses will increasingly need customizability in their LLMs, which will primarily happen on open-source rails. This shift of resources to open source is especially important in decentralized AI - more models will be available for cheaper, with a whole ecosystem of dev tools and more.

  2. The two big bottlenecks to training new models have been energy and GPU supply. Nvidia entering the arena for language models could make it an interesting player for full-service AI across both hardware and software.

OpenAI introduces Swarm, a new multi-agent framework

The OpenAI Cookbook recently introduced Swarm, an intriguing framework for orchestrating multiple AI agents on complex tasks. In theory, this opens up more sophisticated and controllable AI systems.

Key concepts and takeaways include:

  1. Routines and Handoffs:

    • Routines: Specific tasks assigned to individual agents

    • Handoffs: The process of transferring control between agents

  2. Improved Control: By breaking down complex workflows into smaller, manageable routines, developers gain better control over the AI system's behavior and outputs.

  3. Enhanced Flexibility: This modular approach allows for easy addition, removal, or modification of specific functionalities without disrupting the entire system.

  4. Scalability: As tasks become more complex, orchestrating multiple agents can provide a scalable solution compared to relying on a single, monolithic model.

  5. Specialization: Different agents can be optimized for specific tasks, potentially improving overall performance and efficiency.
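The routine/handoff pattern above can be sketched in plain Python. This is a minimal illustration of the concept, not Swarm's actual API; the agent names and routing logic are hypothetical.

```python
# Minimal sketch of the routine/handoff pattern: each agent runs its own
# routine, and a handoff is expressed by returning the next agent.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    instructions: str  # the agent's "routine" (its specific mandate)
    handle: Callable[[str], tuple]  # returns (reply, next_agent_or_None)

def triage(message: str):
    # Routing logic: hand off to the refunds specialist when relevant.
    if "refund" in message.lower():
        return "Routing you to refunds.", refunds_agent
    return "Triage handled your request.", None

def refunds(message: str):
    return f"Refund issued for: {message}", None

triage_agent = Agent("Triage", "Classify and route requests", triage)
refunds_agent = Agent("Refunds", "Process refunds", refunds)

def run(agent: Optional[Agent], message: str) -> list:
    """Follow handoffs until some agent finishes without handing off."""
    transcript = []
    while agent is not None:
        reply, agent_next = agent.handle(message)
        transcript.append(f"{agent.name}: {reply}")
        agent = agent_next
    return transcript
```

Calling `run(triage_agent, "I want a refund")` walks the handoff chain: triage replies, transfers control to the refunds agent, and the transcript records both routines. Adding a new specialist only means writing one more small function, which is the modularity point made above.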

Up until now, AI agents have largely been siloed - one agent with one specific mandate. But with collaboration, the number of possible agent pipelines (and with it the complexity of tasks agents can tackle) grows combinatorially - think permutation math.
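As a rough illustration of that permutation math (the agent counts here are hypothetical): with n specialized agents composed into ordered k-step handoff chains, no agent repeated, there are n!/(n-k)! distinct pipelines.

```python
# Count ordered k-step handoff chains drawn from a pool of n agents:
# n! / (n - k)!  (order matters, no repeats), via math.perm.
from math import perm

n_agents = 5  # hypothetical pool of specialized agents
for k in range(1, n_agents + 1):
    print(f"{k}-step chains: {perm(n_agents, k)}")
```

A lone generalist covers 5 mandates, but 3-step chains from the same pool already yield 60 distinct pipelines and 5-step chains yield 120, which is why collaboration expands the task space so quickly.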

An investable area I am bullish on for this trend is payments - crypto is the programmatic money layer of the internet. The first team to really crack this will unlock major use-case wins for the AI agent arena.

AMD introduces next-gen AI chips

AMD has just announced its third-generation commercial AI mobile processors, the Ryzen AI PRO 300 Series. Features include:

  1. Improved AI Performance: The new processors offer up to three times the AI performance of their predecessors, with the top-tier Ryzen AI 9 HX PRO 375 delivering up to 55 TOPS (Trillions of Operations Per Second)

  2. Enhanced Productivity: These processors enable advanced AI features like live captioning and language translation in conference calls, as well as AI image generation

  3. Improved CPU Performance: Built on the new "Zen 5" architecture, the Ryzen AI PRO 300 Series offers up to 40% higher performance compared to Intel's Core Ultra 7 165U

  4. Extended Battery Life: These processors are designed for multi-day battery life

  5. Expanded OEM Ecosystem: AMD is collaborating with major OEMs like HP and Lenovo to integrate these processors into their commercial PC lineups

In practice this means more powerful models can run on both enterprise- and consumer-level hardware. We discussed above how the model builders are optimizing their stack for on-device compute, and with AMD’s release we see the hardware being optimized for local compute too.

A big opportunity here would be to release consumer level applications with local compute - combining these factors can give your product a big advantage in cost and speed.