CMU Multimodal Data SDK is a Python SDK built to simplify loading and aligning data from multiple modalities. It allows multimodal data to be loaded from different sources, which arrive at different frequencies: for example, features for the vision modality are typically extracted at a constant rate of 30 frames per second, while words appear at a variable frequency. The CMU Multimodal Data SDK aligns this information by time-stamping each modality and mapping all modalities to a common reference clock (with either a constant or a variable frequency).
The SDK is available on GitHub: https://github.com/CMU-MultiComp-Lab/CMU-MultimodalSDK
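As a rough sketch of how this alignment might look in practice, the snippet below loads several modalities and aligns them to the word-level clock. The package name (`mmsdk`), the `mmdatasdk.mmdataset` constructor, the `align` call with its `collapse_functions` argument, and the feature names and file paths are taken from the project's documented usage but should be treated as assumptions; check the repository above for the current API.

```python
# Minimal sketch: load computational sequences and align them to word timestamps.
# File paths and feature names below are illustrative placeholders.
import numpy
from mmsdk import mmdatasdk

# A "recipe" maps a name for each computational sequence (modality)
# to the .csd file (local path or URL) that stores it.
recipe = {
    "glove_vectors": "data/CMU_MOSI_TimestampedWordVectors.csd",  # words (variable frequency)
    "FACET_4.2": "data/CMU_MOSI_Visual_Facet_42.csd",             # vision (~30 fps)
    "COVAREP": "data/CMU_MOSI_COVAREP.csd",                       # acoustic features
}

# Load the computational sequences into one multimodal dataset object.
dataset = mmdatasdk.mmdataset(recipe)

def average(intervals, features):
    # Collapse all feature rows that fall inside one word interval to their mean.
    return numpy.average(features, axis=0)

# Align every modality to the word-level reference clock: for each word interval,
# the higher-frequency modalities are collapsed (here, averaged) so that all
# modalities end up with the same time steps.
dataset.align("glove_vectors", collapse_functions=[average])
```

After alignment, each modality shares the same intervals as the reference sequence, which makes it straightforward to feed the data into downstream multimodal models.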