Hi! I’m a programmer. I do this and that. I implement, integrate and innovate. Let me tell you some tidbits about sound and cutscenes.
It would be great if all audio-related code and components could operate through a single central sound manager. That would let us do our customization in one place in Unity, so our sound pipeline wouldn’t turn into a convoluted mess full of tiny scattered changes.
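To illustrate the idea, here is a minimal sketch of a central manager that every component routes through (the names `SoundManager` and `play` are our own illustration, not Master Audio’s or Unity’s API):

```python
class SoundManager:
    """Hypothetical single entry point for all audio requests."""
    _instance = None

    @classmethod
    def instance(cls):
        # everyone shares one manager instead of scattering audio state
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.playing = []   # names of currently playing sounds
        self.volume = 1.0   # one global knob instead of per-component settings

    def play(self, name):
        # every component calls through here, so pipeline-wide tweaks
        # (ducking, mixing, logging) live in one place
        self.playing.append(name)
        return (name, self.volume)

mgr = SoundManager.instance()
mgr.play("footstep")
assert SoundManager.instance() is mgr  # all callers see the same manager
```

The payoff is that a change like global ducking touches one class instead of every sound-emitting component.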
New conundrums arise while attaching sounds to things. Take, for example, an action that can be performed several times in quick succession:
- Do you just play the sound each time?
- Do you cut the currently playing sound off and play it again from the beginning?
- Do you crossfade the second bleep-bloop into the first bleep-bloop effectively making a single bleep-bloop that still sounds nice but indicates a repeated action?
- Do you do something completely different that really fits the visuals but is kind of tricky to construct?
All of these solutions have their uses in certain situations. The first two can be implemented fairly quickly. The third one – which seems like a nice general solution – needs to keep tabs on every sound that is currently playing and crossfade between them. The fourth approach can be a combination of anything and everything. That’s why we are using Master Audio for audio management: it already has these kinds of functions, and it’s a component we would have had to craft ourselves if it hadn’t already existed.
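To make the trade-offs concrete, here is a toy sketch of the first three retrigger policies (our own pseudo-implementation for illustration, not Master Audio code; a voice here is just a start time and a fade target):

```python
def retrigger(policy, voices, now):
    """Apply one retrigger policy when a sound fires again at time `now`.
    `voices` is a list of dicts: {"start": t, "target": v}, where `target`
    is the volume a mixer would ramp the voice toward."""
    if policy == "overlap":        # option 1: just stack a fresh voice on top
        voices.append({"start": now, "target": 1.0})
    elif policy == "restart":      # option 2: drop everything, start over
        voices.clear()
        voices.append({"start": now, "target": 1.0})
    elif policy == "crossfade":    # option 3: ramp old voices to silence
        for v in voices:
            v["target"] = 0.0      # a mixer would remove them once faded out
        voices.append({"start": now, "target": 1.0})
    return voices

voices = [{"start": 0.0, "target": 1.0}]
retrigger("crossfade", voices, 0.1)
# the old bleep-bloop now fades out under the new one
```

The crossfade case shows why it needs bookkeeping: every live voice has to be tracked so its fade can be driven each frame.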
So once we had our basic necessities covered, we could focus more on our specific needs. The camera and characters moving to certain spots should change the ambiance and music in a meaningful and mindful way. So we created something that could be called a sound map! A collection of triggers embedded into the earth that send directions to Master Audio through an interpreter. The interpreter knows the current soundscape and keeps the changes logical and deliberate. This sound map can hopefully solve increasingly complex audio situations with relative ease.
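In rough terms, the interpreter sits between the map triggers and the audio backend. A minimal sketch of that idea (class and method names are hypothetical, and the actual Master Audio call is left as a comment):

```python
class SoundMapInterpreter:
    """Hypothetical middle layer between map triggers and the audio backend.
    Keeps the soundscape deliberate: it ignores redundant requests and only
    changes ambiance when the requested state actually differs."""

    def __init__(self):
        self.current = None   # the ambiance we are in right now
        self.log = []         # history of (old, new) transitions

    def on_trigger(self, ambiance):
        if ambiance == self.current:
            return False      # re-entering the same zone: keep things calm
        self.log.append((self.current, ambiance))
        self.current = ambiance
        # here we would tell Master Audio to crossfade to the new ambiance
        return True

interp = SoundMapInterpreter()
interp.on_trigger("forest")   # entering the forest zone: change
interp.on_trigger("forest")   # walking over the same trigger again: ignored
interp.on_trigger("cave")     # new zone: change again
```

Because the interpreter holds the current soundscape, overlapping or repeated triggers can’t cause jarring double transitions.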
Actions that the characters perform are more often than not intricately woven into the dialogue. The dialogue system and articy have “stage directions” (third-party support between assets and environments really makes me feel warm inside), which are tools for making sequenceable commands. So with a little preparation, any member of the team could use these high-level commands to program the cutscenes. This means it’s important that different animations and actions automagically fit together, so the whole system stays as dynamic and modular as possible. Otherwise our animators would have to redo every animation after each layer of polish.
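The shape of such a stage-direction runner can be sketched very simply (the command names and the `run_directions` helper below are our own illustration, not the dialogue system’s or articy’s actual API):

```python
def run_directions(script, handlers):
    """Execute a list of high-level stage directions like 'walk hero door'.
    `handlers` maps command names to functions; unknown commands are skipped
    so a typo in a dialogue file can't crash a whole cutscene."""
    results = []
    for line in script:
        cmd, *args = line.split()
        if cmd in handlers:
            results.append(handlers[cmd](*args))
    return results

handlers = {
    "walk": lambda who, where: f"{who} walks to {where}",
    "play": lambda clip: f"sound: {clip}",
}
out = run_directions(["walk hero door", "play creak", "dance hero"], handlers)
# 'dance' has no handler, so only the first two commands run
```

The point of the indirection is modularity: writers compose scripts from command names, while the handlers behind those names can keep evolving without the scripts having to change.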
That’s it for now. Until next time.