My AI-powered NPC Meta Quest 3 mixed reality project

Don't use this tag line, it's already used :P "is a virtual Strip Club for educated grown-ups who own a VR headset."
I found out today.
Just googled it :D that game looks like a hidden gem if you were looking for an adult-themed app, visuals look amazing :eek:;) even though it's a 2019 app, you can tell it's PCVR.
 
The video made me think of a virtual gallery.
I thought about a virtual news room with news articles in each painting, then the program could discuss current affairs with you?
A walk and talk.
That's a good idea, cheers. They are up to date on most of the big news stories, i.e. they know Trump won, they know about the Middle East troubles, Ukraine/Russia, etc.
 
I feel like I'm being stalked by 3D Hannah, as she's all over my Meta Horizon feed and the Meta TV app.
I must take a look; I've not used Meta TV much (I've seen parts of the 3D Hannah vid, and I've been told there are vids of Hannah playing noughts and crosses, not seen those).
 
This is awesome! Without scrolling through all 8 pages, can you give more details on the LLM part? Which GPU is running it? Model specifics, etc.?
Hi, thank you. The LLMs in use are Llama 3 70B and ChatGPT 4o; they aren't being run locally, a company called Convai acts as an intermediary for these.
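
For anyone wondering what that setup looks like in rough terms, here's a minimal sketch of the general pattern (purely illustrative; the endpoint, payload, and character ID below are made up and are NOT Convai's actual API):

```python
# Rough shape of the arrangement described above: the headset sends the
# player's text to a hosted intermediary service, which forwards it to the
# LLM (Llama 3 70B or ChatGPT 4o) and returns the character's reply.
# Everything here (URL, fields, IDs) is hypothetical.
import requests

HYPOTHETICAL_ENDPOINT = "https://api.example-intermediary.com/v1/character/chat"

reply = requests.post(
    HYPOTHETICAL_ENDPOINT,
    json={
        "character_id": "npc-hannah",  # made-up identifier for illustration
        "message": "What did you think of the news today?",
    },
    timeout=30,
).json()

print(reply.get("text"))
```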
 
Thank you both!



I see. Excuse my ignorance, but is this because they're the only product option that fits, or is there an alternative self-hostable equivalent that requires more tinkering?
Convai aren't the only option, but they're the only option I could afford (and they're actually in the process of repricing their product, I just hope I can still afford them). It could probably be self-hosted, but it would require hardware and an internet connection I haven't got, and yes, definitely more tinkering. If you were doing this just for yourself as the only user, then a decent home PC with a good GPU would be enough.
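
For the single-user case, a self-hosted route could look something like this sketch. It assumes a local model server such as Ollama running on a home PC with a decent GPU and a Llama 3 model already pulled (`ollama pull llama3`); the prompt text is just an example:

```python
# Minimal sketch of a self-hosted alternative: talk to a locally running
# Ollama server over HTTP instead of a hosted intermediary.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            {"role": "user", "content": "Stay in character as my gallery guide."}
        ],
        "stream": False,  # return one complete reply rather than a token stream
    },
    timeout=120,
)

print(resp.json()["message"]["content"])
```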
 
I'm working on something that might be of interest to movie buffs: the avatars can 'enjoy' (if AI can enjoy anything) movies we watch with them. They can follow a movie, make comments on it (shown in the video), and be asked about the movie, and they'll give an answer relevant to what's happening in it (shown in the video).

 
Interesting, I'm trying to decide if it's reading it live or picking up information from another source.

My 2 random thoughts are Bigscreen Beyond group screenings and VRChat (has a number of possibilities like chaperone / safety companion / auto-ban people etc.).

It's neat though, she seems to know where you are in the movie after a few seconds of thought/analysis.
She's reading it live, but I have to admit she's not doing video-to-text analysis; I don't think that's available at the moment, or at least not publicly. It'll work for any movie etc. and it adjusts to whatever is being watched.

I'm trying not to reveal how it's done as I'm having a bit of banter with Patreon members on working out how the app does it. I'll reveal all soon of course; I'll have to when it's released as part of the app anyway, as it requires a file being downloaded, there's the clue!
 
No no, that's completely fair. Apologies.
No need for apologies, dude! It's basically using the subtitle file for the movie: the avatar times the subtitles with where the movie is up to, and comments on what they think is going on at randomly chosen times or when they're asked to comment. Been watching movies today with it; it's not bad but far from perfect. Some things they say are spot on, some things are off the mark, but it's easy to put them back on the right track.
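
To make the idea concrete, here's a minimal sketch of that subtitle-sync approach (not the app's actual code). It assumes a standard .srt file; the file name, 30-second context window, comment probability, and prompt wording are all illustrative:

```python
# Sketch of the subtitle-sync idea: parse an .srt file, collect the dialogue
# around the current playback position, and occasionally hand it to the LLM
# so the avatar can comment on what it thinks is happening.
import random
import re

TIME_RE = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)\s*-->\s*(\d+):(\d+):(\d+)[,.](\d+)")

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def parse_srt(path):
    """Return a list of (start, end, text) cues from an SRT subtitle file."""
    cues = []
    with open(path, encoding="utf-8-sig") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = TIME_RE.search(line)
            if m:
                start = to_seconds(*m.groups()[:4])
                end = to_seconds(*m.groups()[4:])
                text = " ".join(lines[i + 1:]).strip()
                cues.append((start, end, text))
                break
    return cues

def dialogue_near(cues, playback_seconds, window=30.0):
    """Collect the dialogue from the last `window` seconds of the movie."""
    recent = [t for s, e, t in cues if playback_seconds - window <= s <= playback_seconds]
    return " ".join(recent)

cues = parse_srt("movie.srt")          # example file name
playback_seconds = 3605.0              # wherever the viewer is up to in the film

# Comment at randomly chosen moments (or whenever the viewer asks).
if random.random() < 0.1:
    context = dialogue_near(cues, playback_seconds)
    prompt = (
        "You are watching a movie with me. Recent dialogue: "
        f"{context}. Comment briefly on what you think is going on."
    )
    # send `prompt` to whichever LLM is driving the avatar
```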
 
Subtitles are good for the dialogue but will often lose the context.

I'm not sure how available things are but what about audio descriptions that are provided for visually impaired users? Might be more context in that to guide the avatar.
Good idea, and I did think of that, but couldn't find anything of real value that was publicly available tbh. I looked for them for Blade Runner 2049, but didn't look extensively; will look again.

Edit: Just found a site with audio description files actually, and realised it will probably be too slow to be of use, or it'd be too much for the Q3 to deal with while doing everything else at the same time (unless I use the new ChatGPT voice input mode, which is really expensive). Audio description files have to be transcribed to text first so that they can be fed to ChatGPT to process, and I can't do that in good time unfortunately. If I could get a text transcription of the audio description files then that would be great, will keep looking.
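
If the missing piece is just getting the audio description into text, one option (my suggestion, not something from the app) would be to transcribe it offline ahead of time on a PC rather than on the headset, e.g. with the open-source `openai-whisper` package; the file names below are only examples:

```python
# Sketch: pre-transcribe an audio description track to timestamped text
# offline, so the result can be fed to the LLM much like a subtitle file.
# Assumes `pip install openai-whisper`; file names are placeholders.
import whisper

model = whisper.load_model("base")             # a small model is fine for speech
result = model.transcribe("audio_description.mp3")

# Each segment includes start/end timestamps, so the text can be synced to
# the playback position the same way as the subtitle approach.
with open("audio_description.txt", "w", encoding="utf-8") as out:
    for seg in result["segments"]:
        out.write(f"{seg['start']:.1f} --> {seg['end']:.1f}: {seg['text'].strip()}\n")
```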
 