The net, and computer-mediated communication in general, is a relatively new field of inquiry; my study is therefore both deductive and exploratory in nature. I am exploring the nature of pseudonymous communication on the net (specifically, my respondent population consists of participants on a particular MU*), but I also wish to test whether my hypothesis for the behavior my research question explores is valid. I hope to use my survey questionnaire to elicit data that will both test that hypothesis and serve as a starting point for further, more thorough investigations.
Due to the rather fluid nature of the net, and the lack of extensive scholarly research on it, I find myself with an extensive list of terms and concepts that need definition -- almost a glossary, in fact. Unfortunately, some of these concepts lack any scholarly definition in any paper I have been able to find, so I appear to be embarking into completely unknown territory. It is nevertheless vital that each of these concepts be clearly defined: to discuss my question cogently I need an appropriate language. It is a given that this list of terms will receive revisions and additions as I learn more over the course of this study.
Term definitions and operationalizations
The individual units of analysis will be the individual Players who answer the self-administered, e-mailed questionnaire. There will be no experimental research; while I have a hypothesis, I believe the best method to test it is a survey. Once the survey results are returned I will conduct latent content analysis to either support or refute my hypothesis.
The choice of a survey as research instrument was made for several reasons, most notably that I feel the disadvantages are outweighed by the advantages. True, questions may be missed, and the survey may impose an undesired inflexibility on responses. However, a survey is also cheap and quick to administer, and there is no worry about participants being unwilling to report 'deviant' attitudes to an interviewer. Furthermore, a survey can operationalize and standardize a wide selection of terms, and can be administered to a very large group of respondents. Since it is being sent out via e-mail, returns should be extremely easy: just press the 'reply' button in the e-mail software, type in the desired responses, and hit 'send.' E-mail offers one other benefit, in that I will not have to pay international postage should I be interested in the responses of players from other nations.
My initial selection method will, unfortunately, be a judgmental sample: I have chosen friends on a MU* I frequent whom I believe will conscientiously complete and return my questionnaire.
With net access comes access to myriad other MU*s. It is impossible to keep an accurate and up-to-date list of all the MU*s available, let alone all the participants in all of these games. More MU*s are being created and expiring every day, and some are exclusively private and are never publicized. Furthermore, the system administrators of such MU*s must remain sensitive to privacy issues for their participants. Nevertheless, even a somewhat dated or inaccurate listing of these types of computer games can yield a huge number of MU* titles. Armed with such a list, I believe a satisfactory sample could be selected.
The first step would be to stratify all the MU* titles according to theme, so that the selection process covers as many types or genres of games as possible. This would provide a wide variety of potential interests and participant motivations, and avoid over-representing any single genre. A sampling interval would then be chosen and, starting from a random initial number, a new list of MU* titles compiled -- a systematic sample. From this list we could then engage in multistage cluster sampling. Each MU*'s administrators would be contacted, the research purpose and questionnaire explained to them, and a list of all adult participants' e-mail addresses requested. It should be noted that sample modification may be necessary at this point, should the MU* administrators lack permission, or not wish, to release such lists -- in which case the next MU* on the list could be contacted.
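The stratify-then-skip selection described above can be sketched as follows. This is only an illustrative sketch: the genre names and MU* titles are hypothetical placeholders, and a real list would of course be far longer.

```python
import random

def systematic_sample(titles, interval):
    """Systematic sample: pick a random start within the first
    interval, then take every interval-th title thereafter."""
    start = random.randrange(interval)
    return titles[start::interval]

# Hypothetical list of MU* titles, already stratified by theme.
strata = {
    "fantasy": ["MU_A", "MU_B", "MU_C", "MU_D"],
    "sci-fi":  ["MU_E", "MU_F", "MU_G", "MU_H"],
    "social":  ["MU_I", "MU_J", "MU_K", "MU_L"],
}

# Sample within each stratum so every genre is represented.
selected = []
for genre, titles in strata.items():
    selected.extend(systematic_sample(titles, interval=2))
```

With an interval of 2, each four-title stratum contributes two titles, so no genre is left out of the new list.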
The MU* participant lists we received would then be organized by number of participants (size). This would allow probability-proportionate-to-size (PPS) sampling of the gamers, so that each element has the same overall chance of selection: a fixed number of elements would be selected from each MU*, but the larger games would receive a higher chance of selection, proportionate to their size. The result would be a list of e-mail addresses, each of which would be contacted to ask whether the person was willing to participate in the study. Those who replied in the affirmative would then be e-mailed a questionnaire. My current pretest rate of return will probably be higher than the actual rate of return, so a large number of questionnaires should be sent out initially. While I cannot say for certain what this number should be, I would initially recommend at least 3000-5000, so that enough are returned to be analyzed. If, of course, the returned answers begin to duplicate previous survey responses, the project will have reached enough individuals and no more surveys need be sent out.
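The two-stage PPS procedure might be sketched as below. All the rosters and counts are invented for illustration, and the sketch samples clusters with replacement for simplicity; a production design would likely use systematic PPS without replacement.

```python
import random

def pps_select_clusters(cluster_sizes, n_clusters):
    """Select MU*s with probability proportional to size (PPS).
    cluster_sizes maps MU* name -> number of participants."""
    names = list(cluster_sizes)
    weights = [cluster_sizes[name] for name in names]
    # Weighted draw with replacement, proportional to cluster size.
    return random.choices(names, weights=weights, k=n_clusters)

def sample_participants(emails_by_mu, chosen, per_cluster):
    """Draw a fixed-size random sample of addresses from each chosen MU*."""
    sampled = []
    for mu in chosen:
        pool = emails_by_mu[mu]
        sampled.extend(random.sample(pool, min(per_cluster, len(pool))))
    return sampled

# Hypothetical participant rosters.
emails_by_mu = {
    "BigMU":   ["big%d@example.org" % i for i in range(200)],
    "SmallMU": ["small%d@example.org" % i for i in range(10)],
}
sizes = {mu: len(pool) for mu, pool in emails_by_mu.items()}
chosen = pps_select_clusters(sizes, n_clusters=3)
contacts = sample_participants(emails_by_mu, chosen, per_cluster=5)
```

Because a fixed number is drawn from each selected MU* while larger MU*s are more likely to be selected, every participant ends up with roughly the same overall chance of inclusion.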
This procedure should provide generalizability for the research project, as the sample will be drawn from the international net-using and net-gaming public at large, and should thus be representative of that particular population.
Data collection will commence as soon as the protocol is signed and I can get to my computer, to e-mail out the questionnaire to the MU* participants. A copy of the questionnaire is attached to this paper.
To achieve reliability in a large-scale research project on this question, I would suggest a large sample size and replication of the process. The questionnaire should ensure validity through the nature of the questions asked. For the replication I would suggest a new and different questionnaire, both to avoid inter-respondent contamination and individual maturation affecting replies, and to ensure internal validity. Obviously, care needs to be taken to ensure the new questionnaire is comparable to the first. Finally, diffusion is another risk in this research, should the respondents discuss their varying answers with each other, or should such discussion cause a respondent to feel embarrassed and wish to withdraw from the survey. Unfortunately, my only recourse here is to ask the respondents not to discuss the questions with each other during the research study period. I would also suggest a follow-up e-mail with the questionnaire attached, sent approximately one to two weeks after the initial e-mail to participants who have not yet responded. While this is somewhat sooner than Babbie suggests (pg. 239), e-mail is a very immediate medium of communication, and I believe this would be the appropriate time to send a reminder and thus, hopefully, improve the response rate.
Currently required resources are myself as the researcher, my questionnaire, my trusty net-connected computer, Eudora (e-mailing software), and the time to put all the gathered research information together into a coherent whole. Cheap and 'already-available' is good.
Required resources would be, at minimum, a researcher and a relatively up-to-date computer with net access. The computer systems would run anywhere from about $800 to $7000-$8000 per computer, depending on how powerful you want your computer(s) to be, and whether you want to link several computers together for speed and accuracy of data analysis. A computer programmer would also be extremely helpful, to write the necessary programs for sample selection and to assist in any manifest content analysis. Fellow researchers (or interns/grad students) to assist in latent content analysis are probably also a must.
Each of my respondents is both a volunteer and an adult who understands the nature of my research. While I personally know all my respondents, I have assured them of confidentiality, and explained the difference between confidentiality and anonymity. They have each given me a pseudonym they find acceptable should I quote them in my paper. I cannot immediately conceive of how the questions in the questionnaire might harm someone, but I will not assume my reaction will be everyone's. I have therefore included an opening paragraph preceding the questionnaire, which clearly states that participation in this research project is strictly voluntary, that confidentiality will be maintained, and that any question causing discomfort should be skipped.
This research project will not be anonymous, as the e-mail addresses could conceivably be used to track down the individual respondents. However, confidentiality could be maintained with a simple program to correlate each e-mail address with a number or random word, so the researchers themselves would not know from the e-mail headers who their respondents were. Additionally, surveys would be sent once only to each e-mail address, to prevent one person from responding more than once. It's true that potential participants could reply under several e-mail IDs; to prevent this I would suggest not accepting e-mail from free servers, such as hotmail.com or geocities.com. A determined individual could still manage to answer the questionnaire more than once, by buying more than one e-mail address, but I don't see that as being a very likely problem.
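The "simple program" for maintaining confidentiality, plus the rejection of free-mail addresses, could look something like the following sketch. The class name, the blocked-domain list, and the ID format are all my own illustrative choices, not an established design.

```python
import secrets

class ConfidentialRoster:
    """Map each respondent e-mail address to an opaque ID so that
    analysts see only the ID; the address-to-ID table itself is the
    single confidential link, stored apart from the survey data."""

    # Hypothetical list of free-mail hosts to refuse.
    BLOCKED_DOMAINS = ("hotmail.com", "geocities.com")

    def __init__(self):
        self._id_by_email = {}

    def accepts(self, email):
        """Refuse addresses from the blocked free-mail domains."""
        return email.rsplit("@", 1)[-1].lower() not in self.BLOCKED_DOMAINS

    def register(self, email):
        """Return the opaque ID for an address; a repeated address gets
        the same ID back, so one person cannot be surveyed twice."""
        if email not in self._id_by_email:
            self._id_by_email[email] = secrets.token_hex(4)
        return self._id_by_email[email]
```

Because `register` is idempotent, re-sending a survey to an address already in the roster can be detected and skipped, enforcing the one-survey-per-address rule described above.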
Of course, it's entirely possible that names (both Character and/or Player) could be mentioned in individual e-mail postings. In the case of latent content analysis, researcher discretion and reliability would be necessary attributes. In any other case, a search-and-replace program could be used to 'X' out any proper names.
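The search-and-replace redaction could be as simple as the sketch below; the function name and the example name are hypothetical, and a real redaction pass would work from the list of pseudonyms and Character names collected during the study.

```python
import re

def redact_names(text, names):
    """Replace each listed Character or Player name with a run of 'X's
    of the same length, matching whole words case-insensitively."""
    for name in names:
        pattern = re.compile(r"\b" + re.escape(name) + r"\b", re.IGNORECASE)
        text = pattern.sub(lambda m: "X" * len(m.group()), text)
    return text
```

Matching whole words case-insensitively catches casual lowercase mentions, while preserving the length of the redacted name keeps the surrounding sentence readable for latent content analysis.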
The Pretest Itself
The pretest should give me some introductory information on possible reasons why MU* participants engage in pseudonymous, masked socialization while OOC, even when a space for non-pseudonymous chatting and socialization is provided. Once this information is collated, my current hypothesis should be either supported or refuted.
My pretest population consists of known friends who have gamed on MU*s and are currently on RealityFault. I selected them because I believed they would actually give some thought to the questionnaire and fill it out, and because I have easy access to them via e-mail. I believe this is a good pretest of my general design for a variety of reasons. Firstly, there should be a high rate of return on the questionnaire, with considered responses. Secondly, my respondents are on the whole slightly older, and have gamed somewhat longer, than the average MU*ing population. While this means they cannot be considered an exact match for the test population at large, it does mean they are more likely to approach the survey with a mature and thoughtful outlook, and to helpfully point out any fallacies or omissions in my research process so far. This should help ensure questionnaire validity, at the very least, and will hopefully also increase the questionnaire's future reliability. Their greater gaming experience should also mean they are not encountering some of these issues for the first time, and may even have given them previous thought; I would much prefer considered answers to hasty decisions arrived at without weighing all the factors. Finally, all my participants appear to both understand and appreciate the necessity of ethical behavior, which inclines me to believe that when I ask them not to discuss the questionnaire amongst themselves, they will do as I ask.
Last Updated: 2000-Mar-28