There’s an interesting debate about whether performers can accurately communicate emotion to listeners.
Here’s the standard paradigm for testing this:

Basically, a researcher has a performer play the same melody several times, intending to express a different emotion each time. If listeners can accurately recognize the intended emotion, we can attribute their recognition to the performer’s differing intentions. Because they can’t see the performer, they must rely on acoustic cues alone. We get high internal validity with this paradigm!
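To make the scoring concrete, here’s a minimal sketch (with made-up toy data, not results from any actual study) of how decoding accuracy is typically computed in this paradigm: each trial pairs the performer’s intended emotion with a listener’s judgment, and accuracy is the proportion of matches, compared against chance level.

```python
from collections import Counter

# Toy data: intended emotion per trial vs. the listener's judgment.
# These values are purely illustrative.
intended = ["happy", "sad", "angry", "tender", "happy", "sad", "angry", "tender"]
judged   = ["happy", "sad", "angry", "sad",    "happy", "tender", "angry", "tender"]

# Decoding accuracy: proportion of trials where the judgment
# matches the performer's intention.
matches = sum(i == j for i, j in zip(intended, judged))
accuracy = matches / len(intended)

# Chance level with four equiprobable emotion categories is 1/4.
chance = 1 / len(set(intended))

# A simple confusion tally shows which intended emotions get
# mistaken for which others (here, "tender" misheard as "sad").
confusion = Counter(zip(intended, judged))

print(f"accuracy={accuracy:.2f} vs chance={chance:.2f}")
```

The interesting comparison is always accuracy against chance: with four emotion categories, anything reliably above 0.25 suggests the performer’s intention is getting through.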
People are actually quite good at this for basic emotions. However, these kinds of experiments are usually conducted in a lab with simple melodies and skilled performers. The findings may have little external validity… but how can we even begin to test this in a real-world scenario?
We really have no idea whether people can accurately recognize intended emotion in the music they hear on their iPods or on the radio at work. While we intuitively think it must be possible, these sources are much more complex stimuli and require “on the go” testing. If we want to see whether there is any ecological validity to the claim that people can accurately recognize emotion in music, we need new ideas for testing. Mobile devices may be a good candidate here, but how can we use them? If we can get devices to record excerpts of what people listen to and allow participants to enter the emotion they perceive as intended, how can we tell what the performer originally intended? How can we find a balance between internal and external validity?
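One way to start thinking about the mobile approach is to sketch what a single sampled observation might look like. This is a hypothetical data structure, not an existing app or protocol: it just pairs a captured excerpt with the participant’s judgment, leaving open the hard problem of recovering the performer’s actual intention.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record for one mobile "experience sampling" moment:
# the device logs a reference to the audio excerpt it captured, and
# the participant reports the emotion they believe was intended.
@dataclass
class ListeningReport:
    participant_id: str
    timestamp: datetime
    excerpt_ref: str          # pointer to the recorded audio excerpt
    perceived_emotion: str    # participant's judgment of intended emotion
    context: str = "unknown"  # e.g. "commuting", "at work"

# Example of what one logged report might look like.
report = ListeningReport(
    participant_id="p01",
    timestamp=datetime(2012, 5, 1, 9, 30),
    excerpt_ref="excerpt_0001.wav",
    perceived_emotion="happy",
    context="at work",
)
```

Notice what’s missing: there is no ground-truth “intended emotion” field, because in the wild nobody hands us the performer’s intention. That gap is exactly the internal-versus-external validity trade-off the lab paradigm sidesteps.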