The methodology section usually begins with a few short introductory paragraphs that restate the study's purpose and research questions.
Population and sampling
The basic research paradigm is:
1) Define the population
2) Draw a representative sample from the population
3) Do the research on the sample
4) Infer your results from the sample back to the population
As you can see, it all begins with a precise definition of the population. The whole idea of inferential research (using a sample to represent the entire population) depends upon an accurate description of the population. When you've finished your research and you make statements based on the results, who will they apply to? Usually, just one sentence is necessary to define the population. Examples are: "The population for this study is defined as all adult customers who make a purchase in our stores during the sampling time frame", or "...all home owners in the city of Minneapolis", or "...all potential consumers of our product".
While the population can usually be defined by a single statement, the sampling procedure needs to be described in extensive detail. There are numerous sampling methods from which to choose. Describe in minute detail how you will select the sample. Use specific names, places, times, etc. Don't omit any details. This is extremely important because the reader of the paper must decide whether your sample sufficiently represents the population.
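To make the procedure concrete, here is a minimal sketch of one common method, simple random sampling, in Python. The file name customer_ids.txt, the seed, and the sample size of 400 are all hypothetical placeholders for whatever your own frame and design specify.

```python
import random

# Hypothetical sampling frame: one customer ID per line of a text file.
with open("customer_ids.txt") as f:
    frame = [line.strip() for line in f if line.strip()]

# Fix the seed so the selection can be documented and reproduced.
random.seed(20240101)

# Draw a simple random sample of 400 customers without replacement
# (the sample size here is purely illustrative).
sample = random.sample(frame, k=400)
print(f"Selected {len(sample)} of {len(frame)} customers")
```

Whatever method you choose, reporting details such as the frame and the seed makes the selection auditable by the reader.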
Instrumentation
If you are using a survey that was designed by someone else, state the source of the survey. Describe the theoretical constructs that the survey is attempting to measure. Include a copy of the actual survey in an appendix and note in the text that it appears there.
Procedure and time frame
State exactly when the research will begin and when it will end. Describe any special procedures that will be followed (e.g., instructions that will be read to participants, presentation of an informed consent form, etc.).
Analysis plan
The analysis plan should be described in detail. Each research question will usually require its own analysis, so address the research questions one at a time, each followed by a description of the statistical tests that will be performed to answer it. Be specific. State which variables will be included in the analyses and identify the dependent and independent variables if such a relationship exists. Decision-making criteria (e.g., the critical alpha level) should also be stated, as well as the computer software that will be used.
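As an illustration, suppose one research question asks whether satisfaction scores (the dependent variable) differ between two store locations (the independent variable). A minimal sketch of such a test in Python with SciPy might look like the following; the scores, group labels, and the .05 alpha are illustrative, not prescribed.

```python
from scipy import stats

# Illustrative data: satisfaction scores (dependent variable),
# grouped by store location (independent variable).
store_a = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4]
store_b = [3.5, 3.9, 3.2, 3.8, 3.4, 3.6]

ALPHA = 0.05  # critical alpha level, stated in advance

# Independent-samples t-test comparing the two group means.
t_stat, p_value = stats.ttest_ind(store_a, store_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0" if p_value < ALPHA else "Fail to reject H0")
```

Naming the test, the variables, and the decision rule this explicitly in the analysis plan lets the reader verify the analysis before any data are collected.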
Validity and reliability
If the survey you're using was designed by someone else, then describe the previous validity and reliability assessments. When using an existing instrument, you'll want to perform the same reliability measurement as the author of the instrument. If you've developed your own survey, then you must describe the steps you took to assess its validity and how you will measure its reliability.
Validity refers to the accuracy or truthfulness of a measurement. Are we measuring what we think we are? There are no statistical tests to measure validity; all assessments of validity are subjective opinions based on the judgment of the researcher. Nevertheless, there are at least three types of validity that should be addressed, and you should state what steps you took to assess each.
Face validity refers to the likelihood that a question will be misunderstood or misinterpreted. Pretesting a survey is a good way to establish face validity.
Content validity refers to whether an instrument provides adequate coverage of a topic. Expert opinions, literature searches, and pretest open-ended questions help to establish content validity.
Construct validity refers to the theoretical foundations underlying a particular scale or measurement. It looks at the underlying theories or constructs that explain a phenomenon. In other words, if you are using several survey items to measure a more global construct (e.g., a subscale of a survey), then you should describe why you believe the items constitute that construct. If a construct has been identified by previous researchers, describe the criteria they used to validate it. A technique known as confirmatory factor analysis is often used to test whether individual survey items fit together as an overall construct measurement.
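Confirmatory factor analysis itself is usually run in dedicated statistical software. As a rough, hypothetical illustration of the underlying idea only (this is a principal-component check, not a confirmatory factor analysis), the sketch below shows how strongly each item of a subscale loads on the first principal component of the item correlations; the response data are invented.

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = 4 survey items
# believed to measure a single construct (values are invented).
X = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
], dtype=float)

# Correlations among items, then the leading eigenvector as a
# rough one-factor summary of the construct.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
if loadings.sum() < 0:                    # eigenvector sign is arbitrary
    loadings = -loadings
print(np.round(loadings, 2))  # high loadings suggest the items cohere
```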
Reliability is synonymous with repeatability or stability. A measurement that yields consistent results over time is said to be reliable. When a measurement is prone to random error, it lacks reliability.
There are three basic methods to test reliability: test-retest, equivalent form, and internal consistency. Most research uses some form of internal consistency. When a scale of items all attempt to measure the same construct, we would expect a large degree of coherence in the way people answer those items, and various statistical tests can measure that coherence. Another way to test reliability is to ask the same question with slightly different wording in different parts of the survey; the correlation between the items is a measure of their reliability.
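The most widely reported internal-consistency statistic is Cronbach's alpha. Here is a minimal sketch of the standard calculation in Python with NumPy; the four-item responses are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an array of shape (respondents, items)."""
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    k = items.shape[1]                          # number of items
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item scale (rows = respondents).
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
], dtype=float)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

A common rule of thumb treats an alpha of .70 or above as acceptable internal consistency, though the appropriate threshold depends on the purpose of the instrument.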
Assumptions
All research studies make assumptions. The most obvious is that the sample represents the population. Another common assumption is that the instrument is valid and measures the desired constructs. Still another is that respondents will answer the survey truthfully. The important point is for the researcher to state specifically which assumptions are being made.
Scope and limitations
All research studies also have limitations and a finite scope. Limitations are often imposed by time and budget constraints. Precisely list the limitations of the study. Describe the extent to which you believe the limitations degrade the quality of the research.