by AndiS » Thu Sep 15, 2016 4:41 pm
I know that these videos are just demos, so the following is not meant to disparage the work shown.
My first thought was: you can do that in RW. But the second video (where the handles are framed in blue or violet) shows that the idea is that the player would be the signaller.
If you watch the signaller figure in 3rd person mode, you find that (1) he does not touch the gates when opening them, (2) he does not touch the stairs when taking them, (3) he does not touch the levers when operating them, nor the block device.
So this nicely illustrates what I fear as a worst case: you can have figures walking about, and you can script their animations, but there is no data structure to get the hand of the figure to the gate, lever, or handle that needs to be operated. And I mean the right one, not just some handle so it looks good enough from the outside.
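To make the point concrete, here is a minimal sketch of the kind of data structure I mean: each interactive object registers a named grab point that the animation system can query, so the figure reaches for the specific lever, not just a plausible-looking one. All class and method names here are invented for illustration; nothing like this is confirmed to exist in RW or TSW.

```python
# Hypothetical sketch: grab points that tie a figure's hand to the
# specific object it operates. All names are invented for illustration.
from dataclasses import dataclass

@dataclass
class GrabPoint:
    object_id: str   # e.g. "lever_07" in the signal box
    position: tuple  # world coordinates of the handle
    approach: tuple  # where the figure should stand to reach it

class SignalBox:
    def __init__(self):
        self.grab_points = {}

    def register(self, object_id, position, approach):
        self.grab_points[object_id] = GrabPoint(object_id, position, approach)

    def grab_point_for(self, object_id):
        # The animation system would query this to reach the *right*
        # lever, not just some handle that looks good from outside.
        return self.grab_points.get(object_id)

box = SignalBox()
box.register("lever_07", (12.0, 0.9, 3.5), (12.0, 0.0, 2.8))
gp = box.grab_point_for("lever_07")
```

The point of the lookup by id is exactly the "right one, not just some handle" problem: without such a registry, the animation has nowhere to ask where a given lever actually is.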
You can make signals send a message to the signal box saying "I open now", "I close now" (if the signal box is a "signal" that has a listening track link in each track). Then you could have a figure in the box grab some lever and do some animation in sync with the signal animation. You could even send the lever number so the lever grabbed depends on the signal that changes. However, this is just a gimmick that would stand alone.
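The message passing described above could look something like the following sketch: the signal reports "I open now" / "I close now" to the box and includes its lever number, so the figure's animation targets the matching lever. This is purely illustrative pseudo-scripting in Python, not actual RW script API.

```python
# Sketch (assumed message format): a signal notifies the box when its
# state changes and includes its lever number, so the correct lever
# animation can play in sync with the signal animation.
class SignalBoxListener:
    def __init__(self):
        self.log = []

    def on_signal_message(self, lever_number, state):
        # state is "open" or "close"; in-game this would trigger the
        # figure's grab-and-pull animation on that specific lever.
        self.log.append((lever_number, state))
        return f"animate lever {lever_number}: {state}"

class Signal:
    def __init__(self, lever_number, listener):
        self.lever_number = lever_number
        self.listener = listener

    def set_state(self, state):
        # "I open now" / "I close now", plus which lever this is
        return self.listener.on_signal_message(self.lever_number, state)

box = SignalBoxListener()
sig = Signal(7, box)
result = sig.set_state("open")
```

As said, without a grab-point structure on the receiving end this stays a stand-alone gimmick: the box knows *which* lever to animate, but not how to get the figure's hand onto it.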
I considered modelling the block working messages in more detail in RW signals, but then thought "what the heck".
Likewise, I am not sure how many people would be attracted by the option to be the signaller. Sure, it can be a nice change. But it needs to be done very well to hold people's interest for long.
DTG could be testing the waters with their train spotter feature, or it was just an imitation of a new feature -- under the covers it was an old trick that never got exploited much.
At any rate, the video again shows that there are protocols to be followed. Since you need AI to be able to jump in, you need some formalisation: either the AI deciphers the bell signals the same way a human would, or you send a number signifying the message in parallel to playing the sound.
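The second option (number in parallel to the sound) could be as simple as this sketch: the sending box plays the bell for the human ear and hands a structured code to the AI, so no sound recognition is needed. The code table is illustrative, loosely based on UK block bell codes; treat it as an assumption, not a rulebook.

```python
# Sketch of the "number in parallel to the sound" idea: the human hears
# the bell, the AI receives the structured code. Codes are illustrative
# (loosely based on UK block bell usage).
BELL_CODES = {
    (1,): "call attention",
    (2,): "train entering section",
    (2, 1): "train out of section",
    (3, 1): "is line clear for stopping passenger train?",
}

def send_bell(beats):
    sound = "ding " * sum(beats)            # what the human player hears
    meaning = BELL_CODES.get(tuple(beats))  # what the AI signaller receives
    return meaning
```

With this, an AI signaller never has to "listen"; it just acts on the code, while a human in the same role works from the sound alone.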
Of course, you could deny the need for AI to jump in and imagine a situation where all the signal boxes are staffed by players, but that would create huge synchronisation issues with the real world.
I had been daydreaming about AI drivers that look at signals and indeed interpret rendered images. There would even be some shortcuts. I vaguely remember some system call where you could query whether a certain material would be within eyesight from a certain position, but I can't even remember whether that was UE4 or not. Making signal lights of that material would simplify the recognition process. It should also be fun to simulate brakesman or shunters listening for whistles, and missing some because of all the noise. But such are just daydreams, not proposals for a game that will recover its development cost.
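In the spirit of that daydream, the material-visibility shortcut might reduce to a simple occlusion query: tag signal lamps with a special material and let the AI ask the engine whether an unobstructed line of sight exists, instead of doing real image recognition. The function below is a deliberately crude one-dimensional stand-in for such an engine call; everything about it is invented.

```python
# Daydream sketch: visibility query instead of image recognition.
# Simplified to one dimension: the lamp is visible unless an occluder
# lies between the eye and the lamp along the line of sight.
def material_visible(eye, lamp_pos, occluders):
    lo, hi = sorted((eye, lamp_pos))
    return not any(lo < o < hi for o in occluders)

# Driver at 0, signal lamp at 100, a bridge at 40 blocks the view:
blocked = material_visible(0, 100, [40])
clear = material_visible(0, 100, [150])
```

A real engine would do this with a ray cast in 3D, but the interface an AI driver needs is the same: a yes/no answer per tagged lamp.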
Also, this is not even one of the TSW speculation threads. Given that, the video mostly shows that you need data structures to make figures look good. These could be supplied by the system, like cargo transfer points and doors at coaches and platforms. Then it is easy to combine creations from different people. Or the data structures are based on some private initiative. E.g., the signal box can share information with the signaller figure that is supplied with it, but not with one that is created by someone else. Unless they agree to cooperate. This is what we have in RW now, and I hope it will be better in TSW.
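The "supplied by the system" option above amounts to a standard contract: if the platform defined one interface per operable control (the way it already does for doors and cargo points), any signaller figure from any author could work with any signal box. A minimal sketch, with all names invented:

```python
# Sketch: a system-level contract a signal box exposes per lever,
# so third-party figures and boxes can combine. Names are invented.
from abc import ABC, abstractmethod

class OperableControl(ABC):
    """What any figure may assume about any operable control."""
    @abstractmethod
    def grab_position(self):
        ...
    @abstractmethod
    def operate(self):
        ...

class Lever(OperableControl):
    def __init__(self, number):
        self.number = number
        self.pulled = False

    def grab_position(self):
        # dummy frame layout: levers spaced along one axis
        return (float(self.number), 0.9, 0.0)

    def operate(self):
        self.pulled = not self.pulled
        return self.pulled

# Any third-party figure can now work the box through the contract:
lever = Lever(7)
```

That is the difference between the private-initiative situation we have in RW (each pair of creators negotiates its own conventions) and a platform-defined one.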