Talking to NPCs in Video Games: Nvidia’s Glimpse into the Future of Gaming

Hello my dear followers, it’s your favorite funny guy who loves technology, Nuked! And today, I want to talk to you about something that will blow your mind – the future of gaming and AI.

Recently, Nvidia CEO Jensen Huang gave us a glimpse of what it might be like when gaming and AI collide. He showed off a graphically breathtaking rendering of a cyberpunk ramen shop where you can actually talk to the proprietor. Imagine holding down a button and speaking with your own voice to get an answer from a video game character. Nvidia calls it a “peek at the future of games.”

While the actual dialogue leaves a lot to be desired, the generative AI is reacting to natural speech. So far, though, we've only seen a single video of a single conversation. Hopefully, Nvidia will release the demo so we can try it ourselves and see how radically different the outcomes can be.

The demo was built by Nvidia and partner Convai to help promote the tools used to create it: a suite of middleware called Nvidia ACE (Avatar Cloud Engine) for Games that can run both locally and in the cloud. The ACE suite includes the company's NeMo tools for deploying large language models (LLMs) and Riva speech-to-text and text-to-speech, among other bits.
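To make that pipeline concrete, here's a minimal sketch of the loop such middleware has to coordinate: player audio goes through speech-to-text, an LLM writes the NPC's reply, and text-to-speech turns it back into audio. To be clear, this is my own illustrative pseudocode, not Nvidia's actual API; every function below is a stand-in I made up (in the real stack, Riva would do the transcribe/synthesize steps and a NeMo-deployed model the reply).

```python
# Hypothetical sketch of an ACE-style speech -> LLM -> speech loop.
# All functions are placeholders, NOT real Nvidia APIs.

def transcribe(audio: bytes) -> str:
    """Speech-to-text stand-in (Riva's job in the real pipeline)."""
    return audio.decode("utf-8")  # pretend the "audio" is already text


def generate_reply(character: str, line: str) -> str:
    """LLM stand-in (NeMo's job): produce the NPC's in-character reply."""
    return f"{character}: Welcome to my ramen shop. You said '{line}'."


def synthesize(text: str) -> bytes:
    """Text-to-speech stand-in (Riva again), returning audio to play back."""
    return text.encode("utf-8")


def npc_turn(character: str, player_audio: bytes) -> bytes:
    """One push-to-talk exchange: player audio in, NPC audio out."""
    heard = transcribe(player_audio)
    reply = generate_reply(character, heard)
    return synthesize(reply)
```

The interesting engineering problem is that this whole round trip has to finish fast enough to feel like conversation, which is presumably why Nvidia stresses that ACE can run both locally and in the cloud.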

The demo uses more than just those tools. It's built in Unreal Engine 5 with loads of ray-tracing, making it visually stunning. By comparison, the chatbot part feels lackluster to me. At this point, we've seen much more compelling dialogue from chatbots, even as trite and derivative as they can sometimes be.

In a Computex pre-briefing, Nvidia VP of GeForce Platform Jason Paul told me that yes, the tech can scale to more than one character at a time and could theoretically even let NPCs talk to each other. However, he admitted that he hadn't actually seen that tested. It's not clear whether any developer will embrace the entire ACE toolkit the way the demo does, but S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will use the piece Nvidia calls "Omniverse Audio2Face," which tries to match a 3D character's facial animation to their voice actor's speech.

In conclusion, the future of gaming and AI is looking bright if the technology continues to evolve. Who knows what kind of games we will be playing in a few years? But for now, let’s enjoy this Nvidia demo and imagine speaking to AI game characters.

Spread the AI news in the universe!
Nuked
