I took an accent-study survey (referenced from Language Log) in which many different speakers of English (mostly non-native, with a few native speakers thrown in) read the same passage – something about Stella shopping for peas and blue cheese and a toy frog. It gets kind of repetitive, and in the end I concluded that the survey was badly designed. Here's the comment I posted at Language Log about my experience with it.
I went through the survey fairly confidently, even naively, before reading these comments. And then I read Sarah Taub's comment: "I think my problem is that I wanted to interpret the study as asking how much the speakers sounded like native English speakers. I wish I had had more guidance from the experimenter as to how important the 'native vs. non-native' and 'US English vs. foreign' dimensions were. Perhaps those two dimensions should not have been combined in one experimental set."
Now I'm second-guessing myself. I really had no idea how to balance these two aspects, and I agree with her that the survey should have given more guidance on separating the two dimensions: native vs. non-native and American vs. non-American. What I ended up doing was essentially rating the speakers strictly on the native vs. non-native dimension (because I've spent too much of my life outside the US, perhaps, to actually CARE about the US vs. non-US dimension?). So I gave a lot of scores in the 4 to 6 range (as an ESL teacher I am perhaps too generous – if I can understand someone, they don't sound THAT foreign; "foreign" is when you genuinely can't make out what they're saying!). And then, for anyone I had rated a 6, I awarded a 7 if they sounded unambiguously American to me – which was exactly two speakers.
Despite my frustration with the survey, I have nearly inexhaustible patience with, and fascination for, this sort of thing. I really should go back to graduate school, shouldn't I? In linguistics, I mean…