Some observations on social network survey research

About 10 days ago, I launched a survey on Last.fm designed to assess the nature of “friendship” on the site. In the broadest sense, I just want to know what kinds of relationships hide beneath that label.

Getting the survey posted in a way that would actually draw responses was a challenge in itself. I spent a lot of time over the last 2 years building relationships with the site, its staff, and its users. Originally this was not because I wanted to do research, but because I fell in love with the site and, though I have periods of intense frustration with it, I still seem to check in multiple times a day out of my own fannish interest. Anyway, with staff consent, I posted the link to my survey in a couple of their forums, and one of their most visible staff people stepped in and endorsed it on behalf of the site.

I’ve been pleased with the response rate, but the whole process has raised a lot of interesting issues.

(1) Non-randomness. I take it as a given that you just can’t get a random sample on a site like this. So in addition to posting the survey in the forums, I asked people to ask others to fill it out. I’ve been really happy that people did. Some got so excited they ran around posting a link to it in all their friends’ shoutboxes; others wrote journal entries about it. I have no doubt that’s helped a lot. But what do I lose by having friends of friends fill it out? On the other hand, thus far it’s an amazingly diverse sample in terms of age and nationality, and that is thrilling (especially for those of us used to studying college students at one university).

(2) User involvement. Since posting the survey, I’ve made a point of being on the site a lot in order to respond to feedback. A few things have happened. First, to my surprise, about 20% of the people who’ve filled it out have left me notes to tell me that they did. Why? I assume it’s because I made a personal appeal (“I need your help”) and they want recognition for their altruism (which I’m more than happy to provide). Another reason is that many of them have found the survey interesting and appreciated how it made them think. I find that fascinating. Several have made a point of saying that they want me to do more research and want to be involved when I do (“interview me! I want to discuss this!”). Most intriguing is the number of people who want to talk about the methodology with me: what are my research questions? how will I present the findings? how am I handling the issues of non-randomness? why do I ask them to report on a random friend, and how does this affect the outcomes? It’s not often, I think, that our research is publicly critiqued while in progress, and I find that both rewarding and challenging, but (given the positive feedback) mostly rewarding. Still, I was not expecting to give mini-lectures on methodology. But it’s good, it’s really good. It makes me think we should do much more of this.

(3) User misunderstanding. I’ve asked people to report on the first person on their friends list, an ordering that is reshuffled every time they open their profile. Several people have been uncomfortable with this because that person is not “typical” or not a good example of a “good” friend, or because it was a family member. On one hand, I may not have done well enough at conveying the point that I want ALL the varieties of relationship covered by the label, not just the ones that seem to fit it. On the other hand, I think even when that’s clear, people feel a moral tug toward what the label ought to mean, so that they’d rather describe a “good” example than a bad one, even when they all appear identical on the site. That’s interesting in its own right.
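
For readers who have asked how that selection works: since the friends list is reshuffled on each page load, taking whoever appears first amounts to a uniform random draw from the respondent’s friends. Here is a minimal sketch of that logic in Python; the usernames are made up, and this is an illustration of the idea, not Last.fm’s actual code.

```python
import random

def first_friend_on_profile(friends):
    """Simulate one profile load: shuffle the friends list and take
    whoever lands on top. Over many loads, each friend is equally
    likely to appear first."""
    shuffled = random.sample(friends, len(friends))  # fresh order per load
    return shuffled[0]

friends = ["vinyl_addict", "indiekid_88", "mumsaccount"]  # invented names
print(first_friend_on_profile(friends))
```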

(4) Young people! One of my biggest frustrations is this: I state explicitly that you must be 18 or older to fill out the survey, and that by clicking through you assert that you are 18 or older, yet just over 10% of responses come from people aged 12-17 who either skipped that paragraph or just didn’t care. It’s not that I don’t want to hear what these people have to say; I do. It’s that studying youth requires human subjects steps that didn’t seem tenable for me in this context (e.g. parental consent). So now I have data I’m not sure I’ll be able to use (I’m hoping our human subjects committee will okay my analysis of it, but if not I will do the required thing and chuck it, heart breaking all the while). KU has a really wonderful Human Subjects office, and I am loath to cross them. I am hoping they will concur that I have no idea who these people are, that their responses are harmless, and that I can use them despite my best efforts to keep them out in the first place.
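
If it helps anyone facing the same problem, the data-handling side is straightforward: partition the under-18 responses into their own pile to hold back pending the committee’s decision, rather than deleting them outright. A rough sketch, assuming each response records a self-reported age (the field names and records are invented):

```python
# Hypothetical cleaning pass: separate responses by self-reported age so
# the under-18 cases can be held back pending human subjects review.
responses = [
    {"id": 101, "age": 24},  # invented example records
    {"id": 102, "age": 15},
    {"id": 103, "age": 31},
    {"id": 104, "age": 17},
]

adults = [r for r in responses if r["age"] >= 18]
minors = [r for r in responses if r["age"] < 18]

share = len(minors) / len(responses)
print(f"{len(minors)} of {len(responses)} responses ({share:.0%}) are from minors")
```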

(5) The highs and lows. Watching the data roll in, or stagger to near nothingness, is bizarrely emotional for me. I should probably just not look until I get back to Kansas and am ready to do analysis, but I can’t help checking in all the time. When there are lots of new responses, I feel so happy. When there aren’t, I get really bummed. How much is enough, and what does ‘enough’ mean when the sample isn’t random anyhow and there’s no way to measure response rate?

(6) One of these days I hope I’ll learn to design quantitative studies with a more quantitative sensibility. Being qualitative, I am always asking about the things I want to know about, trying to get as much information as I can and build understanding from the bottom up. I am rarely thinking in advance in terms of modeling variables and their relationships to one another in ways that can be easily analyzed statistically. I dislike the simplicity of much statistical modeling, but managing the complexity of loads of variables is not easy either. This survey combines quantitative and qualitative questions.

If you’re a researcher, how have you dealt with these issues?

If you’re a Last.fm user or know people who are, please spread the word!

Comments (4) to “Some observations on social network survey research”

  1. Is the survey link in the article above correct? All I get is a page with some “thank you” text; no form, no link or button to start the survey. Colour me baffled. :-)

  2. Thanks Bruce — the link should be fixed now! Nancy

  3. I can attest to some of the impressions and feelings you’re experiencing. I recently ran an online survey for my research using a viral email campaign, and there was a lot of tension when responses trickled in, then a rush when hundreds poured in. With a large nonrandom survey you have a precision vs. accuracy problem, but it’s a nice problem to have if you’re just looking for the existence of certain kinds of behavior or relationships.

  4. I can offer corroborating anecdotes on point #2 – both times I’ve done major online surveys (on talk shows in my book and on Lost spoiler fans for a Particip@tions article), a number of respondents have felt the need/desire to offer methodological commentary on the surveys. Since both were primarily qualitative, most comments lambasted my lack of statistical significance, randomness, etc., with some people stopping midway through to say that they could not possibly complete a survey with so little objective merit! They were anonymous, so I couldn’t offer gentle reminders that there is such a thing as qualitative research…