Puppeteering - spectrals & datastreams
(bringing life to the future)
---------
The thing about deploying all content to MentalRay, unlike a game engine, is how to get beyond smooth floating cameras drifting through ambient scenes. It's difficult to get past the "screenshot in motion" cinematic approach without a lot of work that isn't portable in any respect. Unlike real-time, CGI isn't budgeted in frames per second but in memory per frame.
Some new developments have been with animation. Not just animating characters, but entire scenes. For that I've turned to our old game engine friend: it can stream a lot of debug data out, including XYZ coords of entities, the viewport orientation and keyframes from anim files, all of which are text-based. The idea is to play out scenarios and adapt that into animation data for MentalRay. The end result should resemble whatever was "played" and replicate it in its entirety, including animations from the format. Ultimately, custom animations would be used for characters.
audio/graphsketch/sensors/tracking
imaging/lerping/warping/arithmetic
The end result should be the game engine, played with our own maps, and rendered out like CGI via MentalRay, beyond Playstation 6 graphics. The whole idea of getting everything into an engine seems ridiculous to me at this point; the true aim should be getting videos out into the world, and going from there. I can't think of a better solution than to actually PLAY the animation as a game and derive all that data, simply feeding it into text-based scripts using data-management. Don't misunderstand: it won't be the game engine at all, it's simply using the game and engine to derive datastreams from. The outcome will still be just a video and won't use any game engine assets at all. It's a little difficult to explain fully, but I believe it's the best approach to getting VE content "played" out, without the need to fully set up an engine.
Load engine, devmap any map, and type the following into the console:-
r_skipbump 1
r_skipdiffuse 1
r_skipspecular 1
r_skipambient 1
r_skipBlendLights 1
r_skipoverlays 1
r_skipguishaders 1
r_skipdecals 1
r_showtris 1
This will get everything running at a guaranteed 60fps, which is really important. Get your player set up where you want to begin, and type the following:-
con_noprint 0
g_showviewpos 1
g_debuganim 0
logfile 1
Everything you do once "logfile" is entered is being recorded. ALT+F4 is fine for making a clean ending too. Ultimately one would set all this up as binds/toggles in a CFG file, to get clean recordings out and save time later.
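Just as a rough sketch of that CFG idea - assuming the usual id-style exec/bind syntax, with the key choice and filename being arbitrary - something like this, exec'd from the console, would arm everything in one go:

// record.cfg - run with "exec record.cfg", then the bound key starts the log
r_showtris 1
// ...plus the rest of the r_skip* set and the con_/g_ cvars from above...
con_noprint 0
g_showviewpos 1
bind F11 "logfile 1"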
Now go into the base folder, look for qconsole.log, open it with your favourite text editor (Notepad++) and have a good look. We can safely assume each 4 lines provides us something unique and "on its tic", or "in a jiffy", meaning 1/60th of a second. If this isn't the case, we'll know by timing the recording (externally) and comparing the number of lines in the log against our stopwatch. Using data-management we can "lerp" everything to fit neatly, although in my tests it seems legit because I'm using r_showtris, all those r_skip* and running at 1280x800 on an optimised OS, so either way we'll get our sync.
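I haven't nailed down the exact line format g_showviewpos dumps into qconsole.log, so take this as a minimal parsing sketch in Python: it assumes 4 log lines per tick as above, and that each tick's block carries at least six numbers (origin XYZ plus pitch/yaw/roll); the regex and the lines-per-tick constant are placeholders to adjust against the real log.

import re

NUMBER = re.compile(r"-?\d+\.?\d*")
LINES_PER_TICK = 4          # assumption from above: 4 log lines per 1/60s tick

def parse_log(path="qconsole.log"):
    """Collapse the raw log into one record per tick: (x, y, z, pitch, yaw, roll)."""
    frames = []
    block = []
    with open(path) as log:
        for line in log:
            block.append(line)
            if len(block) == LINES_PER_TICK:
                nums = [float(n) for n in NUMBER.findall(" ".join(block))]
                if len(nums) >= 6:              # skip console chatter that isn't viewpos data
                    frames.append(tuple(nums[:6]))
                block = []
    return frames

frames = parse_log()
print(len(frames), "ticks =", len(frames) / 60.0, "seconds at 60fps")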
So what do we get? Well, global coords, which we bind a character model to via a "helper" object. This takes care of all collision, jumping, even crouching (at least in first-person view), and (although tricky) the orientation of the mouselook. If you're wondering, yes, I plan to do Chaos Circuits the same way, and although the FPS mouselook here is not ideal, I have ideas for adapting it to roll/pitch/yaw using some more neat data-management tricks. We also get each animation of each blending bone-set in the model. The beauty of the game engine is that, with human characters, you get an absolute shitload of different animations that are all compatible with each character. This is really much simpler than it seems, since anim import is really easy in Blender; in fact we'd do most of the scripting in Blender using Python to string along the puppeteering of entire characters and bake those to each keyframe. Basically, each animation throughout the entire recorded sequence will be ready in the model before it ever gets into 3DSMAX, already sync'd and timed to 60FPS.
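To give a feel for the Blender side, here's a minimal sketch using the modern bpy API: it takes the per-tick records from the parsing sketch above and bakes them onto an empty acting as the "helper" object, which the character rig would then be parented or constrained to. The axis order and degree conventions are assumptions to be matched against the engine.

import math
import bpy

def bake_helper(frames, fps=60, name="helper"):
    """Create an empty and key its location/rotation from per-tick (x, y, z, pitch, yaw, roll) data."""
    scene = bpy.context.scene
    scene.render.fps = fps
    helper = bpy.data.objects.new(name, None)       # object_data=None makes an empty
    scene.collection.objects.link(helper)

    for i, (x, y, z, pitch, yaw, roll) in enumerate(frames):
        helper.location = (x, y, z)
        helper.rotation_euler = (math.radians(pitch),
                                 math.radians(roll),
                                 math.radians(yaw))  # axis order is a guess, adjust to the engine
        helper.keyframe_insert(data_path="location", frame=i + 1)
        helper.keyframe_insert(data_path="rotation_euler", frame=i + 1)
    return helper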
A few things are missing, obviously, such as ragdoll physics (the biggest hurdle here) and projectiles for weapons. I actually plan to use 3DSMAX's "Reactor" physics engine and outright simulate physics-based projectiles, ragdolls and even decals/effects right in MAX. Big job, but it means when a character fires a weapon, it will literally shoot a projectile. When a character gets hit by that projectile, they will literally gape bloody wounds, gashing and gushing appropriate blood effects. Like I said, big stuff to set up, but the main point is that I CAN, all by myself. This can all be timed to coincide with the weapon-fire animations linked to the viewport coords, to somewhat automatically figure out where projectiles need to be fired from, in theory. Actually excited about spending a week or two figuring all of that out, but it's certainly all possible.
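Purely as a sketch of that "in theory" part - assuming we can tag the ticks where a fire animation kicks in (the fire_ticks list below is hypothetical) - deriving projectile spawn origins and directions for MAX from the view data could look something like this:

import math

def spawn_events(frames, fire_ticks, muzzle_offset=12.0):
    """For each tick where a weapon fires, derive a spawn position and direction from the view data."""
    events = []
    for tick in fire_ticks:
        x, y, z, pitch, yaw, roll = frames[tick]
        # forward vector from yaw/pitch in degrees; axis conventions are an assumption
        cp, sp = math.cos(math.radians(pitch)), math.sin(math.radians(pitch))
        cy, sy = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
        forward = (cp * cy, cp * sy, -sp)
        origin = (x + forward[0] * muzzle_offset,
                  y + forward[1] * muzzle_offset,
                  z + forward[2] * muzzle_offset)
        events.append((tick, origin, forward))
    return events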
Lastly, one thing not to forget - this is CGI prerender, not real-time game-mechanics. The time between frame 336 and 337 can involve all kinds of tricks, including completely swapping out one model for another, and it's all imperceptible to the audience. So, if you imagine something being tricky, and there surely are those things, just remember that we also have all the tricks behind-the-curtain, not just in MAX but in Q4 too. The power of data management to curate, massage and mold entire datastreams means we can smooth things out to our preference, or not, depending on what's happening. The maps we bring into Q4 can have all kinds of extras that let us get certain characters in certain positions doing certain things at certain times, and if something does come out iffy, well, we can massage the data any way we see fit to mitigate that, or outright customise it.
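And for the "massage the data" side, even a dumb moving average over the per-tick records goes a long way; a tiny sketch (window size arbitrary):

def smooth(frames, window=5):
    """Moving-average smoothing over the per-tick records; keeps the stream length unchanged."""
    # note: naive averaging misbehaves across a 360-degree yaw wrap; real massaging would handle that
    half = window // 2
    out = []
    for i in range(len(frames)):
        chunk = frames[max(0, i - half): i + half + 1]
        out.append(tuple(sum(col) / len(chunk) for col in zip(*chunk)))
    return out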
vertical-slice
easiest/fastest(options?)
ultimate mockup(prospect)
if AAA can and AA can't
infinite lives
.
.
.