A two-tiered experiment was conducted to assess how phrasing structure is communicated from performance nuance to audience perception. Nine solo piano performances of two Chopin preludes, selected for their comparable musical structure and complexity, were recorded multimodally through audio, MIDI, and a Vicon motion-capture system. Performance parameters such as tempo, dynamics, and movement were then analysed with reference to the notated score. Videos of each performance were presented to musically trained observers, who used a slider to indicate the shape of each musical phrase. In the first tier, participants saw the performances in visual-only mode; in the second, the performances were presented in three modalities: visual, audio, and audiovisual. Building on the finding that the occurrence of performance gestures correlated with notated and perceived phrase boundaries, multimodal analysis of performance parameters confirmed that performers conveyed the intended musical structure through auditory as well as visual elements of performance.