This past weekend at the LMU Playa Vista Campus, the Heroes & Villains Generative AI Hackathon brought together more than fourteen teams, each showcasing a unique perspective on the classic theme of heroes versus villains in a virtual world.
Since there wasn't enough time to build an entire experience, teams were asked to focus on a vertical slice (a "beautiful corner") that showcases a story element and a game mechanic explaining what their world could be.
This is an opportunity to tell a story, breathe life into characters, and construct an epic narrative of good versus evil. How will your heroes rise? What wicked plans will your villains concoct? In the interplay of these forces, the most thrilling tales unfold.
Teams had access to the following platforms:
Each category winner received $2,000, and the grand prize winner also received four Leia Inc. Lume Pad 2 tablets.
Thank you to our generous sponsors: LMU, AE.Studio, EZ AI, Nvidia, Otoy, and Leia.
Here, we celebrate the top-performing teams who utilized AI tools to their fullest extent, pushing the boundaries of generative AI storytelling. Great timing right after the Apple Vision Pro announcement!
Matthew Kim built Storybook AI, an Asian-folklore-inspired interactive storybook. Using Python, Dream Studio, and the OpenAI Playground, they seamlessly blended text and image generation to bring their hero Hae's journey to life.
Storybook AI is an interactive, generative storybook set in the realm of Haghar, an Asian-folklore-inspired world. It follows our hero Hae, a water shaman, on her journey to save her child from the nefarious Mountain King.
StorybookAI is a procedural system that chains more than eight AI text-generation prompts for procedural story generation. It also uses prompts to generate image prompts on the fly, which are fed into Dream Studio for image generation to visualize the story.
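Chaining prompts this way, with one model call writing the story and a second step turning that text into an image prompt, can be sketched roughly as follows. The templates, summarization shortcut, and function names are illustrative assumptions, not the team's actual code:

```python
# Illustrative sketch of a chained text->image prompt pipeline in the
# style of StorybookAI. All prompt wording here is hypothetical.

STORY_TEMPLATE = (
    "You are narrating an Asian-folklore-inspired tale set in the realm "
    "of Haghar. Continue the story of Hae, a water shaman. "
    "Previous scene: {previous}"
)

IMAGE_TEMPLATE = (
    "Watercolor storybook illustration, Asian folklore style: {summary}"
)

def build_story_prompt(previous_scene: str) -> str:
    """Prompt sent to the text model (e.g. via the OpenAI API) to
    continue the story from the previous scene."""
    return STORY_TEMPLATE.format(previous=previous_scene)

def build_image_prompt(scene_text: str, max_words: int = 20) -> str:
    """Derive an image prompt from generated scene text. In a real
    pipeline a second LLM call would likely summarize the scene; here
    we just truncate as a stand-in."""
    summary = " ".join(scene_text.split()[:max_words])
    return IMAGE_TEMPLATE.format(summary=summary)

# The resulting image prompt would then be sent to an image-generation
# API such as Dream Studio (Stability AI) to render the page's art.
```

The key idea is that the image model never sees the raw story text; a dedicated prompt step reshapes it into visual language first.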
CardAI revolutionized the traditional card game by using OpenAI, Blockade Labs, and Stable Diffusion API to create a completely AI-generated virtual environment, allowing players to play against AI opponents in a custom world.
A card game that generates an entire virtual world using AI. The virtual environment you play in, the cards you play with, and the opponent you play against are completely AI generated. The user can describe whatever setting they like and the AI will create the entire game for them. They can use their favorite setting from media or even describe their own from imagination.
The purpose of this technical design document is to provide an overview of the architecture and components of the fully customizable card game developed for the Virtual World Hackathon. The game allows players to create their own settings and play against AI-generated opponents in a virtual reality (VR) card game environment. This document will outline the key features, APIs used, and the overall technical design of the project.
2. System Overview
The customizable card game system consists of the following major components:
a. Front-end User Interface (UI): The UI allows players to interact with the game, select a setting, play cards, and engage in battles.
b. Card Generation Engine: This component generates fully customized cards, including names, descriptions, images, stats, and lore text, based on AI models and APIs.
c. VR Environment: The VR environment is generated using the Blockade Labs API, providing an immersive setting for gameplay.
d. Opponent AI Generation: The system generates AI opponents based on the selected setting and player preferences.
3. Architecture and Integration
The system follows a client-server architecture, where the front-end UI interacts with the server-side components to retrieve data, generate cards, and manage gameplay. The integration of the components is as follows:
a. Front-end UI:
- Interacts with the server through RESTful APIs to retrieve card data, VR environment details, and opponent information.
- Provides a responsive and user-friendly interface for gameplay, displaying cards, mana pool, opponent actions, and game progress.
b. Card Generation Engine:
- Utilizes OpenAI's API for generating card names and descriptions based on the selected setting.
- Uses the Stable Diffusion API for generating card images, incorporating relevant elements from the chosen setting.
- Integrates with the server-side API to receive card data requests and respond with generated cards.
c. VR Environment:
- Utilizes the Blockade Labs API to generate a virtual world based on the selected setting.
- Incorporates visual and interactive elements relevant to the chosen setting, enhancing the immersive experience.
- Communicates with the server-side API to retrieve environment details and provide necessary data for the front-end UI.
d. Opponent AI Generation:
- Based on the chosen setting and player preferences, AI-generated opponents are created.
- The server-side API manages opponent generation and communicates relevant opponent data to the front-end UI.
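The card-generation flow above (a text model for names and lore, an image model for art, assembled server-side) might look roughly like this. All class names, fields, and prompt wording are illustrative assumptions, not CardAI's actual implementation:

```python
# Hedged sketch of a CardAI-style card generation engine: the server
# prompts a text model for a card's name and lore, requests art from an
# image model, and assembles the result for the front-end UI.
from dataclasses import dataclass

@dataclass
class Card:
    name: str
    description: str
    image_url: str   # returned by the image-generation API
    attack: int
    health: int

def card_text_prompt(setting: str) -> str:
    """Prompt for the text model (OpenAI's API in the team's build)."""
    return (
        f"Invent a card for a card game set in: {setting}. "
        "Reply with a name and one sentence of lore."
    )

def make_card(name: str, description: str, image_url: str,
              attack: int, health: int) -> Card:
    """Assemble a Card from model outputs. Stats could themselves be
    model-generated or balanced server-side before this step."""
    return Card(name, description, image_url, attack, health)
```

A REST endpoint on the server would call these helpers per card request and return the assembled `Card` as JSON to the UI.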
Renee Su and Darragh Burke
Team Vapor was recognized for their audio-based game, 'Hot Line'. Utilizing a diverse range of tools, including GPT, MusicGen, and ElevenLabs, they created an immersive, real-world simulation of customer service systems.
They built an audio-based game called "Hot Line", in which the player is initially dropped into a typical first-person shooter. The player is soon prompted with an error screen asking them to renew their subscription to the "game". This is where the real game actually starts: the player must successfully navigate a telephone tree in order to renew their subscription, facing familiar challenges like presenting identification codes and personal details. We chose this scenario for its real-world relatability: the user of a telephone customer-service system often feels like the system is the villain, while they are the hero.
Part of the music was generated using Meta's new tool MusicGen, with human designers finishing it in an actual DAW. Visual assets were designed using a combination of Midjourney, the Photoshop generative beta, and more.
Voice-to-text is handled by the Deepgram API; text-to-speech is handled by the ElevenLabs API.
The dialog tree was prototyped using Charisma.ai, then run on a custom implementation that uses GPT-4 for fuzzy matching of dialog-tree options and for generating custom responses, with dynamic responses rendered to voice in real time by ElevenLabs. We also used GPT heavily in the writing process to generate the characters' scripted lines and dialog.
GPT, Midjourney, MusicGen, Charisma, ElevenLabs, Deepgram, Runway, Adobe
Kiel Howe, Mica Smith, Aurelien Rubod
Play the live demo here: https://hotlinev1.vercel.app/
Immerse is an educational virtual world about cancer. Using Unity, Inworld, and OpenAI, the team created AI-powered characters for students to interact with, transforming complex cancer biology into a dynamic educational experience.
Our project is a virtual world designed to educate students about cancer through immersive experiences. Using artificial intelligence, we have created two main characters: the cancer cell (villain) and the natural killer cell (hero), both trained with GPT technology. In the virtual world, students have the opportunity to engage in conversations with these cells, gaining insights into their behaviors and roles in cancer. The virtual world includes challenges, realistic visuals, and explanatory notes to enhance the educational experience. Our goal is to provide an engaging platform that deepens students' understanding of cancer and empowers them to make informed decisions about prevention, treatment, and research.
Unity, Inworld, OpenAI, Leonardo AI, Blockade Labs, MonsterMash
Nicholas Tan, Aarin Salot, Negar Ahani, Theo Luu.
Tales of Mythos: Their novel client-server model, crafted using a blend of AI technologies, Unity, and Beamable, allowed for a fully playable game with real-time generative AI shaping the spatial environment and the story progression.
GOAL: Use real-time generative AI APIs to create a fully-playable game, set in a dynamic virtual world, that could have never been created without AI technology
We implemented a novel client-server model in which a large language model (LLM) generated the story and gameplay from within the context of a game engine. To accomplish this, we created a server using Beamable for the purpose of maintaining state, operating the chatbot, and integrating data from multiple generative AI technologies.
We taught the LLM to encode metadata about story progression and virtual-world locales in XML, which the Unity 3D client parsed and used to drive real-time generative AI for the spatial environment, invoking Scenario for 2D character portraits and Blockade Labs for the 3D skybox background. Art was generated in real time while playing. See the PDF for an expanded description of the architecture.
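As a rough illustration of that XML round trip, here is how a client might parse LLM-emitted scene metadata. The tag names are invented for the example (the team's actual schema isn't shown in the write-up), and the real client is Unity C# rather than Python:

```python
# Illustrative parse of LLM-emitted scene metadata. The <scene> schema
# below is hypothetical; it stands in for whatever XML the LLM was
# taught to emit alongside its narration.
import xml.etree.ElementTree as ET

SAMPLE = """<scene>
  <locale>Temple of the Storm God</locale>
  <portrait character="Hero Knight"/>
  <narration>You arrive at the temple gates.</narration>
</scene>"""

def parse_scene(xml_text: str) -> dict:
    """Extract the fields a client would use to drive generation:
    the locale (e.g. sent to Blockade Labs for a skybox) and the
    character (e.g. sent to Scenario for a 2D portrait)."""
    root = ET.fromstring(xml_text)
    return {
        "locale": root.findtext("locale"),
        "character": root.find("portrait").get("character"),
        "narration": root.findtext("narration"),
    }
```

Embedding machine-readable tags in the model's output like this is what lets the client fire off asset-generation calls the moment a new scene is written, rather than pre-authoring every locale.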
Jon Radoff: Game Design, Prompt Engineer
Dulce Baerga: Graphic design, AI artist
Fabrizio Milo: Generative music/audio experiments
Ali El Rhermoul: Coder, Client/Server Architecture
Anthropic (Claude), Scenario, Midjourney, Blockade Labs, Photoshop, Unity, Beamable
Learn more on Jon Radoff's Twitter.
The Heroes & Villains Generative AI Hackathon highlighted the extraordinary fusion of technology and creativity, underscoring the untapped potential of AI in storytelling. We congratulate all the teams for their inspiring contributions.