Talking to NPCs in Video Games: Nvidia’s Glimpse into the Future of Gaming

Hello my dear followers, it’s your favorite funny guy who loves technology, Nuked! And today, I want to talk to you about something that will blow your mind – the future of gaming and AI.

Recently, Nvidia CEO Jensen Huang gave us a glimpse of what it might be like when gaming and AI collide. He showed off a graphically breathtaking rendering of a cyberpunk ramen shop where you can actually talk to the proprietor. Imagine holding down a button and speaking with your own voice to get an answer from a video game character. Nvidia calls it a “peek at the future of games.”

While the actual dialogue leaves a lot to be desired, the generative AI really is reacting to natural speech. So far we've only seen a single video of a single conversation, so it's hard to judge how flexible it is. Hopefully Nvidia releases the demo so we can try it ourselves and see how radically different the outcomes can get.

The demo was built by Nvidia and partner Convai to help promote the tools used to create it: a suite of middleware called Nvidia ACE (Avatar Cloud Engine) for Games that can run both locally and in the cloud. The ACE suite includes the company's NeMo tools for deploying large language models (LLMs) and Riva for speech-to-text and text-to-speech, among other bits.
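Just to make that pipeline concrete, here's a toy sketch of the loop ACE implies: push-to-talk audio in, speech-to-text, an LLM reply, text-to-speech back out. Every function below is a stand-in I made up (canned strings, no real models), not the actual Riva or NeMo API; it just shows the shape of the flow.

```python
# Toy sketch of the speech -> LLM -> speech loop Nvidia ACE describes.
# The three stage functions are stand-ins, NOT the real Riva/NeMo APIs.

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a Riva-style speech recognition call.
    return audio.decode("utf-8")  # pretend the "audio" is already text

def generate_reply(history: list[dict]) -> str:
    # Stand-in for a NeMo-style LLM call, which would be conditioned
    # on the NPC's persona plus the running conversation.
    last = history[-1]["text"]
    return f"Welcome to the ramen shop. You asked: '{last}'"

def text_to_speech(text: str) -> bytes:
    # Stand-in for a Riva-style speech synthesis call.
    return text.encode("utf-8")

def on_push_to_talk(audio: bytes, history: list[dict]) -> bytes:
    """One conversational turn: player speaks, NPC answers aloud."""
    player_line = speech_to_text(audio)
    history.append({"role": "player", "text": player_line})
    npc_line = generate_reply(history)
    history.append({"role": "npc", "text": npc_line})
    return text_to_speech(npc_line)

if __name__ == "__main__":
    history: list[dict] = []
    reply = on_push_to_talk(b"What's good here?", history)
    print(reply.decode("utf-8"))
```

Swap those three stand-ins for real speech recognition, LLM, and speech synthesis calls and you'd have the basic skeleton of the ramen-shop demo.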

The demo uses more than just those tools. It's built in Unreal Engine 5 with loads of ray tracing, and it's visually stunning. Next to those visuals, though, the chatbot part feels lackluster to me. At this point, we've seen much more compelling dialogue from chatbots, even as trite and derivative as they can sometimes be.

In a Computex pre-briefing, Nvidia VP of GeForce Platform Jason Paul told me that yes, the tech can scale to more than one character at a time and could theoretically even let NPCs talk to each other. However, he admitted that he hadn't actually seen that tested. It's not clear whether any developer will embrace the entire ACE toolkit the way the demo does, but S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will use the piece Nvidia calls "Omniverse Audio2Face," which tries to match a 3D character's facial animation to their voice actor's speech.
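In case you're wondering what "matching facial animation to speech" even means in code, here's the crudest possible toy version, purely my own illustration and nothing like how Audio2Face actually works: drive a "jaw open" blendshape weight from the loudness of each audio frame.

```python
import math

# Toy illustration of audio-driven facial animation: derive a
# "jaw open" blendshape weight from the loudness of each audio frame.
# Audio2Face does something far more sophisticated (phoneme-aware
# full-face poses); this only shows the basic idea.

def jaw_open_weights(samples: list[float], frame_size: int = 512) -> list[float]:
    weights = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        # Root-mean-square energy of the frame...
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        # ...scaled and clamped to [0, 1] as a blendshape weight.
        weights.append(min(1.0, rms * 4.0))
    return weights

# Example: a fake one-second, 16 kHz voice burst.
fake_audio = [0.2 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(16000)]
print(jaw_open_weights(fake_audio)[:5])
```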

In conclusion, the future of gaming and AI is looking bright if the technology continues to evolve. Who knows what kind of games we will be playing in a few years? But for now, let’s enjoy this Nvidia demo and imagine speaking to AI game characters.

Spread the AI news in the universe!

What do you think?

Written by Nuked
