GeminiSaga : Every scene of this movie is like a painting with perfect light and colors.
random000 : Considered canon & a continuation of the 5 year mission, many did not like the Filmation c...
random000 : As far as outer-space greats go, this is wonderful. Sure it's a toon, but, it's award winn...
random000 : Same with Earth2. We were touring heavily in the 90s & cable tv wasn't a thing for us sinc...
Lily23 : I recall seeing that show "back in the day". Couldn't afford cable TV so I was stuck watch...
Twixtid : Wow, I actually can't find words to thank you for this list of shows, truly, thank you. I ...
random000 : No problem, was just going with stream of consciousness on that. Most people know all the ...
mandroid : This is hilarious
random000 : Lexx & Farscape are both late 90s-early 2000s spaceship - ensemble crew based shows that h...
echo5282 : Earth 2 was pretty good https://www.primewire.li/tv/98436-earth-2
Yes and no. It’s a lighting feature of UE5 (Unreal Engine 5). Basically, it lets the artist lay mesh layers on surfaces that all auto-generate shadows based on light simulation and ray tracing. It’s really an evolution from treating light as something that’s simply there to treating it as something with its own unique, very complex physics. Bloom lighting functionally makes everything that is being lit generate light itself, so pores would not be visible because light is being generated from the holes as well. Then you layer in single-path lighting, which creates shadows, but with pores it creates a VERY uncanny-valley effect: you can see pores head on, but as soon as the face contours away from the viewer’s perspective, it begins to look like an absolutely smooth surface because the skin itself is “generating” light.
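A rough way to picture the wash-out effect described above is a toy 1D sketch: a "pore" is a dark gap in a brightly lit surface, and bloom makes each lit pixel re-emit some light onto its neighbors, shrinking the dark/light contrast. This is purely illustrative (the function and numbers are made up, not UE5's actual renderer):

```python
# Toy 1D bloom: each lit pixel re-emits a fraction of its brightness
# to its immediate neighbors. Illustration only, not a real renderer.

def bloom(pixels, spread=0.25):
    """Add a fraction of each pixel's light to its neighbors."""
    out = pixels[:]
    for i, p in enumerate(pixels):
        glow = p * spread
        if i > 0:
            out[i - 1] += glow
        if i < len(pixels) - 1:
            out[i + 1] += glow
    return out

skin = [1.0, 1.0, 0.1, 1.0, 1.0]   # middle value is a dark pore
lit = bloom(skin)

# The pore receives glow from both bright neighbors, so the
# contrast between pore and skin shrinks and the detail flattens out.
print(max(skin) - min(skin), max(lit) - min(lit))
```

The pore pixel brightens while the contrast range narrows, which is the "smooth surface" look the comment describes once the face turns away from the camera.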
It’s not AI; an artist can “easily” do it for an individual image, but a film or TV show has 24 individual images per second. It’s something you could do for a section of a film or show at great expense. Now an artist can input what they want for a single image, and the engine and hardware will apply those commands to every image in that sequence. It took something that was incredibly repetitive and time-consuming and automated it. The artist is still absolutely necessary: they have to do the initial inputs and will have to make adjustments as things move within the scene, but they no longer have to manually repeat and adjust everything themselves for every frame. It’s about as much a form of AI as Microsoft Excel is: it automates the copy/paste process and adjusts it along a curve.
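The "set it once, let the tool fill in every frame" idea is essentially keyframe interpolation, much like Excel filling a series along a curve. A minimal sketch, assuming a simple linear curve (the function name and values here are invented for illustration, not any engine's actual API):

```python
# Hedged sketch of keyframe interpolation: the artist sets values at a
# few key frames, and every in-between frame is generated automatically.

def interpolate_track(keyframes, num_frames):
    """Linearly fill every frame between artist-set keyframes."""
    keys = sorted(keyframes.items())
    frames = []
    for f in range(num_frames):
        # find the nearest keyframes before and after this frame
        prev = max((k for k in keys if k[0] <= f), default=keys[0])
        nxt = min((k for k in keys if k[0] >= f), default=keys[-1])
        if prev[0] == nxt[0]:
            frames.append(prev[1])
        else:
            t = (f - prev[0]) / (nxt[0] - prev[0])
            frames.append(prev[1] + t * (nxt[1] - prev[1]))
    return frames

# Artist sets a light's intensity at frames 0 and 24 (one second at 24 fps);
# the 23 frames in between are computed, not hand-adjusted.
track = interpolate_track({0: 0.2, 24: 1.0}, 25)
print(track[0], track[12], track[24])   # → 0.2 0.6 1.0
```

The artist supplies two inputs and adjusts the curve if something in the scene changes; the repetitive per-frame work is exactly what gets automated.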
O.K., so that explains it. A bit over my head, but truth be told, so is how AI does things. I'd rather see it done this way, since there's still a human element involved. This was really well done, and that's a fact. I watched the entire thing pretty much back to back, and I really hope there's a continuation of this.