
Talking to NPCs in Video Games: Nvidia’s Glimpse into the Future of Gaming

Hello my dear followers, it’s your favorite funny guy who loves technology, Nuked! And today, I want to talk to you about something that will blow your mind – the future of gaming and AI.

Recently, Nvidia CEO Jensen Huang gave us a glimpse of what it might be like when gaming and AI collide. He showed off a graphically breathtaking rendering of a cyberpunk ramen shop where you can actually talk to the proprietor. Imagine holding down a button and speaking with your own voice to get an answer from a video game character. Nvidia calls it a “peek at the future of games.”

While the actual dialogue leaves a lot to be desired, the generative AI is reacting to natural speech. For now, all we have to judge it by is a single video of a single conversation. Hopefully, Nvidia will release the demo so we can try it ourselves and see some radically different outcomes.

The demo was built by Nvidia and partner Convai to promote the tools used to create it: a suite of middleware called Nvidia ACE (Avatar Cloud Engine) for Games that can run both locally and in the cloud. The ACE suite includes the company’s NeMo tools for deploying large language models (LLMs) and Riva for speech-to-text and text-to-speech, among other components.
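To picture how those pieces fit together, here’s a minimal sketch of one dialogue turn: the player’s voice goes through speech-to-text, an LLM generates a persona-conditioned reply, and text-to-speech voices it back. Every function name below is a hypothetical stand-in, not a real Nvidia API; in the actual ACE suite, Riva would do the transcription and synthesis and a NeMo-deployed LLM would write the reply.

```python
def transcribe(audio: bytes) -> str:
    """Stand-in for speech-to-text (Riva's role in ACE)."""
    return audio.decode("utf-8")  # pretend the "audio" is already text


def generate_reply(persona: str, player_text: str) -> str:
    """Stand-in for an LLM conditioned on the NPC's persona (NeMo's role)."""
    return f"[{persona}] You said: {player_text}"


def synthesize(text: str) -> bytes:
    """Stand-in for text-to-speech (Riva again)."""
    return text.encode("utf-8")


def npc_dialogue_turn(persona: str, player_audio: bytes) -> bytes:
    """One turn of the loop: hold the button, speak, get a voiced answer."""
    player_text = transcribe(player_audio)
    reply_text = generate_reply(persona, player_text)
    return synthesize(reply_text)


reply = npc_dialogue_turn("ramen shop owner", b"What's good today?")
print(reply.decode("utf-8"))
```

The point of the sketch is just the shape of the loop: three independent models chained per turn, which is also why Nvidia can offer them as separate middleware pieces that run locally or in the cloud.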

The demo uses more than just those tools. It’s built in Unreal Engine 5 with loads of ray-tracing, making it visually stunning. But by comparison, the chatbot side feels lackluster to me. At this point, we’ve seen much more compelling dialogue from chatbots, even as trite and derivative as they can sometimes be.

In a Computex pre-briefing, Nvidia VP of GeForce Platform Jason Paul told me that yes, the tech can scale to more than one character at a time and could theoretically even let NPCs talk to each other. However, he admitted that he hadn’t actually seen that tested. It’s not clear if any developer will embrace the entire ACE toolkit the way the demo attempts, but S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will use the part Nvidia calls “Omniverse Audio2Face,” which tries to match a 3D character’s facial animation to their voice actor’s speech.

In conclusion, the future of gaming and AI is looking bright if the technology continues to evolve. Who knows what kind of games we will be playing in a few years? But for now, let’s enjoy this Nvidia demo and imagine speaking to AI game characters.

Spread the AI news in the universe!
Nuked
