
Having seen too much daytime television over the Easter holidays, I began to wonder whether people actually try harder when the winning stakes are higher. Would you put more effort into winning £100,000 on a television game show or beating your brother at Monopoly?

Lewis, Bardis, Flint, Mason, Smith, Tickle & Zinser (2012) suggest that people are often more likely to be dishonest when trying to win money for charity. However, their study has methodological issues which could be addressed.

All participants were students at the University of Bath, which limits the generalisability of the results, as they were all obtained from the same geographical location. The sample was also weighted towards female participants (n = 60) over male participants (n = 34), and towards Psychology students (n = 56) over Economics students (n = 38). The average age of 19.82 years further reduces generalisability, as there is no evidence to suggest the same findings would be obtained with older or younger participants.

The participants received delayed debriefing, in some cases up to a week later. Despite the reasoning behind it, this practice appears unethical and could arguably cause distress to the participants. Finally, the results were compared by degree choice, in order to discover which participants were more inclined to lie. Economics students were reported to have stated much higher dice scores than Psychology students.

The idea that participants were completing the task in order to raise money for Cancer Research UK could be problematic. If the charity is not their charity of choice, participants may not have the same incentive to cheat as they would if they could raise money for a charity of their own choosing. Secondly, the maximum amount they could raise for the charity was 60 pence, so participants may have seen this as a small amount that could easily go to charity without any need to lie in order to raise more.

Whether the participants would behave in the same manner when a large amount of money is at stake is not investigated. However, it is suggested participants are more inclined to lie when they believe the lying is justified, perhaps because it raises money for the charity. Alternatively, they may simply be lying because they want to impress the researcher, not to raise more money for the charity. It could be a question of morals: when winning money for charity, people may be more inclined to put the effort in.

Hoelzl & Rustichini (2005) suggest that individuals vary from overconfident to underconfident when the task involved changes from “easy and familiar to unfamiliar” (p. 305). This effect also appears more dominant when money is at stake than when it isn’t. Therefore, it could be the task itself: participants may be more confident to lie when completing a relatively easy task, but the same results may not be obtained with a more difficult one.

Lesson to be learnt… trust Psychology students more than Economics students!

References:

Hoelzl, E. & Rustichini, A. (2005). Overconfident: Do you put your money on it? The Economic Journal, 115, 305-318. doi: 10.1111/j.1468-0297.2005.00990.x

Lewis, A., Bardis, A., Flint, C., Mason, C., Smith, N., Tickle, C. & Zinser, J. (2012). Drawing the Line Somewhere: An Experiment of Moral Compromise. Journal of Economic Psychology, 33, 718-725. doi: 10.1016/j.joep.2012.01.005

It would seem any news involving alcohol in the media emphasises the negative aspects. However, there have been instances where alcohol has been suggested as an aid to creativity, with Beveridge & Yorston (1999) proposing alcohol was a fundamental aspect of the author Lowry’s imaginative endeavours. Recent research by Jarosz, Colflesh & Wiley (2012) provides scientific evidence for the long-accepted idea that alcohol can help with creativity. Despite the evidence, there are some obvious threats to the reliability and validity of such results.

Initially, 40 male participants aged between 21 and 30 were recruited from two sources and screened to ensure they met the required criteria. This could prove problematic: when asked about their health and drinking patterns during screening, participants may be inclined to twist the truth in order to hide any problems or embarrassment. As such, heavy drinkers or non-drinkers may have become participants, even though the 40 chosen individuals were described as social drinkers. These were then split into two groups, and as each group only contained 20 participants, there may not be enough data to obtain valid results, suggesting generalisation may become a problem.

The screening criteria asked individuals about the previous three months only. This may cause discrepancies, in that people could have drunk a lot more or a lot less prior to the questioned period, so participants’ earlier history may alter the current results.

Prior to the study, one group of participants was required to abstain from alcohol for 24 hours. Although this is necessary for the research, asking participants to do this lends itself to demand characteristics: participants asked not to drink are likely to infer that the research concerns the effects of alcohol, and may behave differently as a result in order to ‘help the researcher’.

Following this, the experimental group was required to drink vodka to raise their blood alcohol to the required level. This is problematic in that some individuals may be more tolerant of alcohol than others. With only twenty participants, individual differences may affect the results and their generalisation.

Participants were then asked to complete two tasks measuring perceived problem-solving ability, actual problem-solving ability and working memory capacity. Although the research suggests alcohol does improve creativity, it could be argued the methods employed and the standardisation of creativity do not provide answers as clear-cut as portrayed. An individual’s creativity is subjective, and everyone is different, so it is very difficult to operationalise the concept accurately enough to produce valid and reliable results.

 

References:

Beveridge, A. & Yorston, G. (1999). I drink, therefore I am: alcohol and creativity. Journal of the Royal Society of Medicine, 92, 646-648. Retrieved from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1297475/

Jarosz, A. F., Colflesh, G. J. H. & Wiley, J. (2012). Uncorking the Muse: Alcohol Intoxication Facilitates Creative Problem Solving. Consciousness and Cognition, 21, 487-493. doi: 10.1016/j.concog.2012.01.002

It has been suggested that early experiences and childhood upbringing can affect an individual’s choice of career later in life. Elliott & Guy (1993) found that therapists often report a childhood upbringing involving dysfunctional family lifestyles, and that these individuals also encounter substantial psychological stresses. Furthermore, those in the mental health professions, as opposed to other professions, were more inclined to declare at least one trauma, including physical abuse, parental alcoholism and death of a parent or sibling (Elliott & Guy, 1993, p. 85).

These results were obtained through self-report questionnaires sent to the homes of 6,000 professional US women, who were found through a directory. Of these, only approximately 3,000 were used in the final results, as questionnaires sent to men, those which could not be delivered, and those completed by women who were no longer in the profession were removed.

Many questions can be asked about the conclusions of this study in relation to the validity of the results, due to the research methods used. There was no way of verifying that the excluded individuals really were no longer in the profession, as participants could have lied in order to avoid filling out the questionnaire. Similarly, how could the researchers be sure that the intended individuals actually filled out the questionnaire, when it could have been completed by a friend or partner on their behalf?

Due to the sensitive nature of some of the questions, individuals may have withheld information which they did not want others to know, or they may have misjudged what a question was asking, as can happen with self-report techniques (Knight, Stewart-Brown & Fletcher, 2001).

With regard to questionnaires being removed because they were thought to have been completed by men, how can it be ensured that all men were excluded and that no women were removed by mistake? And as this research was carried out in the US, the findings cannot be generalised to the rest of the world without follow-up studies investigating whether the same patterns occur across cultures.

Another study, by Paris & Frank (1983), further supports the link between early experience and career choice, finding that legal problems during childhood were more common among law students of either gender than among medical students. Another finding was that a childhood history of family illness was more prevalent in male medical students than in male law students. It could be argued these results are subjective, as people can interpret questions differently. For example, what constitutes an illness: a cold or a chronic condition?

Overall, more research into influences on future career choice is required to eradicate some of these validity issues. However, the fact that both studies conclude that experiences can alter career choices shows consistency, although consistency can be coincidence, and people may simply like a certain profession. Correlation does not show causation.

References:

Elliott, D.M. & Guy, J.D. (1993). Mental health professionals versus non mental health professionals: Childhood trauma and adult functioning. Professional Psychology: Research and Practice, 24, 83-90. doi: 10.1037/0735-7028.24.1.83

Knight, M., Stewart-Brown, S. & Fletcher, L. (2001). Estimating Health Needs: The Impact of a Checklist of Conditions and Quality of Life Measurement on Health Information Derived from Community Surveys. Journal of Public Health Medicine, 23, 179-186. doi: 10.1093/pubmed/23.3.179

Paris, J. & Frank, H. (1983). Psychological Determinants of a Medical Career. The Canadian Journal of Psychiatry, 28, 354-357. Retrieved from: http://psycnet.apa.org/psycinfo/1984-16028-001

In a recent article in the Guardian, Boseley (2012) explains the concerns over the newest edition of the DSM. The concerns focus on the fact that the diagnostic criteria in the DSM-5, as opposed to the current DSM-IV, will widen, so that more people could be diagnosed as having a mental illness when some experts would argue they shouldn’t be. For example, the introduction of the DSM-5 could lead to children who appear shy being given a diagnosis of shyness, and some rapists could be diagnosed as suffering from paraphilic coercive disorder.

If traits such as shyness or loneliness, as described by Boseley (2012), become clearly defined disorders, the way in which symptoms are measured and operationalised could impact any initial diagnosis. According to Howitt & Cramer (2011), operationalisation refers to the way in which we measure or control a specific concept. There are many ways of measuring variables, although in clinical settings the initial port of call in a diagnosis is often an individual’s view of themselves, and therefore self-report techniques may be used.

During research, when measuring variables using self-report techniques, we often operationalise ideas by placing participants on a scale according to their answers to questions. It must be remembered, however, that any responses within a questionnaire are only indicators and are therefore not definitive (Howitt & Cramer, 2011).
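
To make this idea concrete, here is a minimal Python sketch of placing a respondent on a scale from self-report items. The item wordings, the 1-5 scale and the reverse-scoring are invented purely for illustration; they are not taken from any of the studies cited here.

```python
# Minimal sketch: operationalising a construct (e.g. "fear of animals")
# as a single score derived from self-report Likert items.
# All item texts and the 1-5 scale are hypothetical examples.

responses = {
    "I avoid touching slimy animals": 5,     # 1 = strongly disagree ... 5 = strongly agree
    "Sudden animal movements startle me": 4,
    "I am comfortable handling spiders": 2,  # reverse-scored item
}

REVERSED = {"I am comfortable handling spiders"}
SCALE_MAX = 5

def score(answers: dict[str, int]) -> float:
    """Average the items, flipping reverse-scored ones, to place the
    respondent on a single 1-5 'fearfulness' scale."""
    total = sum((SCALE_MAX + 1 - v) if item in REVERSED else v
                for item, v in answers.items())
    return total / len(answers)

print(f"Fearfulness score: {score(responses):.2f}")  # 4.33 for these answers
```

The point of the sketch is simply that the final number is an indicator built from answers, not a direct measurement of the construct itself.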

Using a self-report questionnaire, Bennett-Levy & Marteau (1984) concluded that animals perceived as ugly, slimy, speedy or sudden-moving are seen as more fear-provoking and less approachable than other animals. Although the conclusions appear valid, questions could be asked as to whether individuals really were more fearful of certain characteristics and therefore certain animals. Demand characteristics, extraneous variables and peer influences could lead participants to believe they are more fearful of some things than they really are.

This notion is supported by Arnold & Feldman (1981), who found the way in which a question is asked can induce a socially desirable answer. Further substantiation of the possibility of self-report techniques generating invalid results comes from Spielholz, Silverstein, Morgan, Checkoway & Kaufman (2001), who found that self-report methods produced the least accurate results when compared to video observations and direct measurements.

However, Hindelang, Hirschi & Weis (1979) found that official records and self-report measures often agree, suggesting that whether individuals were objectively observed or self-reported their behaviour does not greatly affect the results.

Overall, it would appear that self-report techniques can, in some cases, lead to invalid results by provoking participants to provide untrue answers. However, self-report techniques are valuable tools, and should therefore be used in combination with other research techniques, to produce the most valid results and to prevent individuals being incorrectly diagnosed with a disorder.

References:

Arnold, H. J. & Feldman, D. C. (1981) Social Desirability Response Bias in Self Report Choice Situations. Academy of Management Journal, 24, 377-385. Retrieved from: http://www.jstor.org/stable/255848

Bennett-Levy, J. & Marteau, T. (1984) Fear of Animals: What is Prepared? British Journal of Psychology, 75, 37-42. doi: 10.1111/j.2044-8295.1984.tb02787.x

Boseley, S. (2012) Psychologists fear US Manual will widen mental illness diagnosis. The Guardian. Retrieved from http://www.guardian.co.uk/society/2012/feb/09/us-mental-health-manual?newsfeed=true

Hindelang, M. J., Hirschi, T. & Weis, J. G. (1979) Correlates of Delinquency: The Illusion of Discrepancy Between Self-Report Techniques and Official Measures. American Sociological Review, 44, 995-1014. Retrieved from: http://www.jstor.org/stable/2094722

Howitt D. & Cramer, D. (2011) Glossary. Introduction to Research Methods in Psychology. 3rd Edition. Harlow, England: Pearson.

Spielholz, P., Silverstein, B., Morgan, M., Checkoway, H. & Kaufman, J. (2001) Comparison of Self Report, Video Observation and Direct Measurement Methods for Upper Extremity Musculoskeletal Disorder Physical Risk Factors. Ergonomics, 44, 588-613. doi: 10.1080/00140130118050

 

Experimenter bias refers to the extent to which the results of a study are affected by the actions of the experimenter; it can be described as the way in which the experimenter’s actions alter the behaviour of the participants (Brandt, 1971). This is very similar to the experimenter expectancy effect, where the expectations of the investigator may alter the results (Howitt & Cramer, 2011).

Rosenthal (1961) suggested that experimenters classed as highly anxious were less likely to bias their participants’ reactions than experimenters classed as medium-anxious. Experimenter bias is thought to be demonstrated when considerably higher results are achieved on a test for which high scores, rather than low scores, were predicted in advance (Barber, Forgione, Chaves, Calverley, McPeake & Bowen, 1969).

However, it has been shown that most studies do not provide evidence of such an effect (Barber & Silver, 1968). Rosenthal, Persinger & Fode (1962) further support this idea, finding there is not always a significant indication that the more anxious the experimenter appears, the less they bias the actions of participants, and therefore the results.

Where experimenter bias is displayed, it has been shown to be primarily due to “paralinguistic or kinesic cues” (Barber & Silver, 1968, p. 1). Other explanations include experimenters misquoting, incorrectly documenting or misrepresenting the results (Barber & Silver, 1968). It is also thought that not only experimenters but research assistants can produce this effect (Rosenthal, Persinger, Kline & Mulry, 1963). It is suggested these biases can be mediated by welcoming, attentive, animated communication, through body language and verbal cues (Adair & Epstein, 1968).

To counteract these effects, Orne (1962) would suggest employing a second experimenter. The experimenter who runs the sessions should also do so “blind”, ensuring they do not know which condition any individual participant belongs to. Brandt (1971) would go further, suggesting that any experiment involving human subjects is unfair, needless and insignificant when the participants’ behaviour can be foreseen from the experimenter’s own behaviour.
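
As an illustration of the kind of blind procedure described above, here is a minimal Python sketch of blind condition allocation. The participant codes, condition names and group sizes are hypothetical, and this is only one of many ways such a procedure could be organised.

```python
# Minimal sketch of blind allocation: participants are randomly and evenly
# assigned to conditions, but the experimenter running the sessions sees
# only participant codes, never the conditions themselves.
# All names here are invented for illustration.
import random

conditions = ["control", "treatment"]
participants = [f"P{i:02d}" for i in range(1, 9)]  # 8 hypothetical participants

# Build a balanced list of conditions and shuffle it.
balanced = conditions * (len(participants) // len(conditions))
random.shuffle(balanced)

# The allocation key is held by a coordinator, not the experimenter.
allocation_key = dict(zip(participants, balanced))

# The experimenter's run sheet contains codes only -- no condition labels.
run_sheet = list(allocation_key)
print("Experimenter sees:", run_sheet)

# Only after data collection is the key used to unblind the analysis.
```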

It would appear that although there are methods which can reduce these effects, they will not always work. Methods such as blind testing should therefore be applied to reduce the effects as much as possible; however, any precautions taken will not completely eradicate the effect itself. Rosenthal & Fode (1963) would suggest every study and experimenter encounters experimenter bias, consciously or not, implying every study is on a level playing field in terms of this influence. Even when blind trials are in place, there is still a chance experimenters could influence the behaviour of participants.

References:

Adair, J. G. & Epstein, J. S. (1968) Verbal Cues in the Mediation of Experimenter Bias, Psychological Reports, 22, 1045-1053. doi: 10.2466/pr0.1968.22.3c.1045

Barber, T. X., Forgione, A., Chaves, J. F., Calverley, D. S., McPeake, J. D. & Bowen, B. (1969) Five Attempts to replicate the Experimenter Bias Effect. Journal of Consulting and Clinical Psychology, 33, 1-6. doi: 10.1037/h0027229

Barber, T.X. & Silver, M. J. (1968) Fact, Fiction, and the Experimenter Bias Effect, Psychological Bulletin, 70, 1-29. doi: 10.1037/h0026724

Brandt, L. W. (1971) Science, Fallacies and Ethics, The Canadian Psychologist, 12, 231-242. doi: 10.1037/h0082096

Howitt D. & Cramer, D. (2011) Glossary. Introduction to Research Methods in Psychology. 3rd Edition. Harlow, England: Pearson.

Orne, M. T. (1962) On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783. doi: 10.1037/h0043424

Rosenthal, R. (1961) On the Social Psychology of the Psychological Experiment: With Particular Reference to Experimenter Bias, American Psychologist, 16, 458.

Rosenthal, R., Persinger, G. W. & Fode, K. L. (1962) Experimenter Bias, Anxiety, and Social Desirability, Perceptual and Motor Skills, 15, 73-74. doi: 10.2466/pms.1962.15.1.73

Rosenthal, R., Persinger, G. W., Kline, L. V. & Mulry, R. C. (1963) The Role of the Research Assistant in the Mediation of Experimenter Bias, Journal of Personality, 31, 313-335. doi: 10.1111/j.1467-6494.1963.tb01302.x

The results of studies can easily be misinterpreted by the media. The Times article “At last, science discovers why blue is for boys but girls really do prefer pink” (Henderson, 2007) offers a largely misunderstood account of research conducted by Hurlbert & Ling (2007). Although some details are consistent with the original source, most of the assumptions in the report are not justified or correct, and the true findings have been altered to create a more ‘interesting’ article for readers. The original research report finds biological explanations for sex differences in colour preference; the news article, however, reports that historical reasons have been discovered, even though these were only offered as possible explanations for the biological differences in the original journal article, and not proven. The news article does state the hunter-gatherer explanation for colour preference, which is clearly described in the original report, and is therefore partly consistent with it.

Entitled “Biological Components of Sex Differences in Color Preferences”, the original journal article addresses its title directly, stating that differences in neural mechanisms were found between the sexes; it is an appropriate title, clearly describing what the article explains. The explanations for the findings are justified, but they cannot be further tested, as we cannot know individuals’ colour preferences from the past. The evidence supports the idea of behavioural trichromacy, suggesting colour preferences could be “innate or modulated by cultural context or individual experience” (p. 625). All reported p-values are at most .001, meaning the observed differences would be very unlikely to arise by chance alone, so we can be relatively confident in the results.

However, Hurlbert & Ling (2007) can be criticised for their generalisation of the findings. The study recorded data from 208 participants aged 20-26. This is a very narrow age range, and it cannot be concluded that people of all ages would produce the same results on the same tasks. The age range could also support different explanations, as 20- to 26-year-olds could have learnt to prefer pinks or blues, unlike younger children. The study used both English and Chinese students, which makes the results more generalisable, although it could be argued other nationalities are required to provide stronger evidence, which, along with an increased age range, would make the explanations for the findings more appropriate and generalisable. The claim that there is a “cross-cultural sex difference in color preference” (Hurlbert & Ling, 2007, p. 623) cannot be generalised across all cultures when only Chinese and British nationalities have been studied.

The original source needs further evidence to support its claims and explanations and to enable the findings to be generalised across the population. However, the results do support the claims made, so it cannot be said the claims are incorrect.

References:

Hurlbert, A. C. & Ling, Y. (2007). Biological Components of Sex Differences in Color Preference. Current Biology, 17, 623-625. doi: 10.1016/j.cub.2007.06.022

Henderson, M. (2007, August 21). At last, science discovers why blue is for boys but girls really do prefer pink. The Times.

Studies involving collecting data over time can be subdivided into panel studies (Lazarsfeld, 1948), longitudinal studies and retrospective studies (Engstrom, Geijerstam, Holmberg & Uhrus, 1963). Longitudinal studies involve collecting results at two or more different points in time (Howitt & Cramer, 2011). Such studies are primarily used to avoid the problems of reversed causation and third variables; however, contrary to intentions, many longitudinal studies fail to rule out third-variable explanations (Zapf, Dormann & Frese, 1996). The findings from longitudinal studies are often used to substantiate clinical conclusions drawn from research (Tooth, Ware, Bain, Purdie & Dobson, 2005).

It could be argued that there are confounding variables which are very difficult to control within this research design, perhaps reducing the validity of such research. Labouvie, Bartsch, Nesselroade & Baltes (1974) found that increases in intelligence measures shown through longitudinal research can be attributed to retesting and learning, as opposed to age itself. Hence, both internal and external validity should be considered carefully, as they can be threatened by several factors, which Cook & Campbell (1979) describe as including history, instrumentation, maturation, mortality, statistical regression and testing.

There is no established method for the reporting of longitudinal research (Tooth, Ware, Bain, Purdie & Dobson, 2005). Studies can run over a long period of time and track any change in the participants with regard to the variables under investigation. However, this means longitudinal research is susceptible to survivor bias, as some participants may not live to the follow-up data collection (Allen, Frier & Strachan, 2004). This may exaggerate and distort results.

Only through the use of a longitudinal design may cause and effect be determined whilst acknowledging any confounding factors which may be of influence (Allen, Frier & Strachan, 2004). This is a major attraction of such a design: cause and effect can only be shown if the cause precedes the effect (Howitt & Cramer, 2011), making longitudinal research a good foundation for such conclusions. For example, results from studies reveal considerable support for longitudinal associations between cognition and community outcome in schizophrenia, demonstrating that cognitive assessment can predict later functional outcome and inform interventions targeting cognitive weaknesses in schizophrenia (Green, Kern & Heaton, 2004).

As a result, longitudinal research appears an effective design which can help to establish relationships between variables that may be measured less effectively through other designs. Although the design has weaknesses, the benefits of enabling cause and effect to be shown, and thereby informing interventions and preventative measures for illnesses, outweigh the costs involved.

References:

Allen, K. V., Frier, B. M., & Strachan, M. W. J. (2004) The relationship between type 2 diabetes and cognitive dysfunction: longitudinal studies and their methodological limitations. European Journal of Pharmacology, 490, 169-175. doi:10.1016/j.ejphar.2004.02.054

Cook, T. D., & Campbell, D.T. (1979) Quasi Experimentation: Design and Analysis Issues for Field Settings. Chicago IL: Rand McNally

Engstrom, L., Geijerstam, G., Holmberg, N. G., & Uhrus, K. (1963) A prospective study of the relationship between psycho-social factors and course of pregnancy and delivery. Journal of Psychosomatic Research, 8, 151-155. 

Green, M. F., Kern, R. S., & Heaton, R. K. (2004) Longitudinal studies of cognition and functional outcome in schizophrenia: implications for MATRICS. Schizophrenia Research, 72, 41-51. doi:10.1016/j.schres.2004.09.009

Howitt, D., & Cramer, D. (2011) Longitudinal Studies. Introduction to Research Methods in Psychology. 3rd Edition. Harlow, England: Pearson.

Labouvie, E. W., Bartsch, T. W., Nesselroade, J. R. & Baltes, P. B. (1974) On the Internal and External Validity of Simple Longitudinal Designs. Child Development, 45, 282-290.

Lazarsfeld, P. F. (1948) The Use of Panels in Social Research. Proceedings of the American Philosophical Society, 92, 405-410.

Tooth, L., Ware, R., Bain, C., Purdie, D. M., & Dobson, A. (2005) Quality of Reporting of Observational Longitudinal Research. American Journal of Epidemiology, 161, 280-288. doi: 10.1093/aje/kwi042

Zapf, D., Dormann, C., & Frese, M. (1996) Longitudinal Studies in Organizational Stress Research: A Review of the Literature With Reference to Methodological Issues. Journal of Occupational Health Psychology, 1, 145-169. doi: 10.1037/1076-8998.1.2.145

The process of coding both qualitative and quantitative data is used to make data receptive to quantitative handling (Guetzkow, 1950). Coding is used to find the most meaningful parts of the data and to generate concepts about the data by manipulating it (Gough & Scott, 2000). Qualitative research focuses on defining and classifying the characteristics of data, as opposed to the quantification of elements focused on in quantitative data collection (Howitt & Cramer, 2010).

The coding of qualitative data can lead to further analysis and understanding of the information; it is used to comprehend the data which has been collected (Lockyer, 2004). The process can also give researchers a more thorough statistical representation of the results and theories involved (Guetzkow, 1950).

Coding can be done either manually or electronically, depending on variables such as the size of the research, the knowledge of the researcher, and funding availability (Basit, 2003). However, it seems more appropriate to code qualitative data manually, to prevent errors or misinterpretations, and therefore loss of validity, resulting from electronic methods. During coding, categories are designed to fit qualitative data, whereas quantitative data fits categories which are usually pre-coded (Howitt & Cramer, 2010). Manual coding is prone to human error, whereas electronic coding can produce errors when data does not ‘fit’ the categories.

Qualitative data is often collected to obtain further knowledge within a topic area (Howitt & Cramer, 2010). By coding data, you can lose the integrity of the data by changing what is implied. Data coding is conducted differently depending upon the type of data acquired. As a result, coding will have a bigger impact, and a greater chance of misinterpretation, upon purely qualitative data than upon mixed or purely quantitative data, as the coded data is further from its original state. Yet in coding a mixed data set, the extra detail you strove to obtain could also be lost. The methods employed vary, with eighteen defined techniques (Leech & Onwuegbuzie, 2008), and there are instances where certain methods appear better than others. The classification of coding styles can be most easily considered in terms of the four major ways in which the data was collected, e.g. observations (Leech & Onwuegbuzie, 2008). A minimal sketch of the basic idea is given below.
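
To illustrate what coding does to qualitative material, here is a minimal Python sketch of manually assigned codes being tallied into a quantitative summary. The excerpts and category labels are invented for illustration and are not drawn from any study cited here.

```python
# Minimal sketch of coding qualitative data: each excerpt is assigned a
# category code by the researcher, then the codes are counted so the
# material can be handled quantitatively.
# All excerpts and labels below are hypothetical.
from collections import Counter

coded_excerpts = [
    ("I drink to relax after work", "coping"),
    ("My friends all drink, so I join in", "social_pressure"),
    ("A glass of wine helps my ideas flow", "creativity"),
    ("I only drink at parties", "social_pressure"),
]

# Tally how often each category occurs -- the quantitative summary.
frequencies = Counter(code for _, code in coded_excerpts)
for code, count in frequencies.most_common():
    print(f"{code}: {count}")
```

Notice how much detail the counts discard relative to the original excerpts; this is exactly the loss of richness discussed above.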

Overall, the idea of coding is well reasoned and has many benefits; however, it reduces the complexity of the information obtained, which defies the original objective of collecting qualitative data. Turning detailed information into numbers, or into categories less specific than the actual results, can cost some validity. However, it does make it easier to find trends or patterns and to analyse the data to falsify predictions, which is an aim of science. Therefore, although some validity is lost, the coding of qualitative data is important, and required to understand results further.

References:

Basit, T. N. (2003) Manual or Electronic? The role of coding in qualitative data analysis. Educational Research, 45, 143-154. doi: 10.1080/0013188032000133548

Gough, S., & Scott, W. (2000) Exploring the Purposes of Qualitative Data Coding in Educational Enquiry: Insights from recent research. Educational Studies, 26, 339-354.

Guetzkow, H. (1950) Unitizing and categorizing problems in coding qualitative data. Journal of Clinical Psychology, 6, 47-58. doi: 10.1002/1097-4679(195001)6:1<47::AID-JCLP2270060111>3.0.CO;2-I

Howitt, D., & Cramer, D. (2010) Coding Data. Introduction to Research Methods in Psychology. 3rd Edition. Essex, England: Pearson

Leech, N. L., & Onwuegbuzie, A. J. (2008) Qualitative Data Analysis: A Compendium of Techniques and a Framework for Selection for School Psychology Research and Beyond. School Psychology Quarterly, 23, 587-604.

Lockyer, S. (2004) Coding Qualitative Data. In M. S. Lewis-Beck, A. Bryman, & T. Futing Liao (Eds.) The Sage Encyclopedia of Social Science Research Methods. London: Sage Publications Inc.

 

Field experiments are conducted in a real-world setting, such as an underground station. This is advantageous in reducing the experimenter effects which can occur when research is completed in an artificial laboratory situation. In most cases the participants do not know they are part of an experiment, so demand characteristics are at a minimum.

The experimenter has control over the IV, but it is harder to control all variables during a field experiment, and there is more opportunity for results to be influenced by extraneous variables. It could therefore be argued that laboratory experiments have the upper hand. However, Orne (1962) would argue that laboratory experiments unjustifiably assume participants are like inanimate objects reacting to stimuli. Laboratory experiments are an empirically sound method, but perhaps they are not as appropriate when studying behaviour.

Piliavin, Rodin & Piliavin (1969) researched the effect of several variables on helping behaviour. They ‘staged standard collapses’, recording the effect of variables such as race and cause of collapse on how helpful people were. This research could not have been completed in a laboratory setting with the same results, or without demand characteristics arising. Hence, it appears some research requires different methods.

In spite of this, laboratory experiments are often favoured because of the level of control over variables the experimenter has. This helps to eliminate extraneous or confounding variables, but can result in experiments lacking mundane realism, due to the artificiality of the surroundings and tasks involved. Despite this, Harrison & List (2004) propose that this ‘sterile environment’ should not be seen as undesirable, if we appreciate the value of the knowledge gained from such research.

Weick (1966) suggested that the methodology of laboratory experiments can lead to crucial issues in research being overlooked. He referred to equity theory, but the idea can apply universally. The notion of laboratory experiments being limiting is supported by Harrison & List (2004), who suggest that individual lab experiments are restrictive in enabling predictions of future behaviour, perhaps as a result of the artificial situations involved. We cannot expect people to behave exactly as they would in an everyday situation whilst under observation in a laboratory.

The control within laboratory experiments enables findings which should show solely the effect of the IV upon the DV. When studying behaviour, however, this does not seem as appropriate: if people do not behave how they would in real life, you are not studying natural behaviour as such, and field experiments appear more effective in these instances.

References:

Harrison, G. W., & List, J. A. (2004) Field Experiments. Journal of Economic Literature, 42, 1009-1055. doi: 10.1257/0022051043004577

Orne, M. T. (1962) On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783. doi: 10.1037/h0043424

Piliavin, I. M., Rodin, J., & Piliavin, J. A. (1969) Good Samaritanism: An underground phenomenon? Journal of Personality and Social Psychology, 13, 289-299. doi: 10.1037/h0028433

Weick, K. E. (1966) The Concept of Equity in the Perception of Pay. Administrative Science Quarterly, 11, 414-439.

Deciding upon a sample size is a major consideration in research. An aim of research is to gain knowledge which can be generalised to the population, ultimately improving quality of life. To be generalisable, a sample is designed ‘to be representative of that population’ (Flanagan, Hartnoll & Murray, 2009). A population can contain a wide range of individuals, and you need enough people to capture the variation within it (Howitt & Cramer, 2011, p. 59).

When deciding on a sample size, we must consider the population parameters, previous research, the variation in the population, how the data will be collected, and how precise the final estimates should be (NIST/SEMATECH e-Handbook of Statistical Methods). The required sample size also depends on the intended outcome of the study: if you wish to generalise your results to a large population, you need a large enough sample to prevent doubts being raised about the validity of your results.

Sample size can affect the statistical analysis of a study’s findings considerably. For example, the more results obtained, the less the mean is affected by extreme results, and thus a larger sample is often used to reduce the error rate (Raudys & Jain, 1991). However, this does not mean all research should have large samples. Each sample size should be tailored to the individual research, as ‘Inappropriate, inadequate, or excessive sample sizes continue to influence the quality and accuracy of research’ (Bartlett, Kotrlik & Higgins, 2001). A formula-based sketch of how this decision can be made is given below.
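
To make the decision less abstract, here is a minimal Python sketch of Cochran’s sample-size formula for estimating a proportion, one of the approaches discussed by Bartlett, Kotrlik & Higgins (2001). The confidence level, margin of error and population size below are arbitrary example values, not recommendations.

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's formula: sample size needed to estimate a proportion p
    with margin of error e, where z = 1.96 corresponds to 95% confidence.
    p = 0.5 is the most conservative (largest) assumption."""
    return math.ceil((z ** 2) * p * (1 - p) / e ** 2)

def fpc_adjust(n0: int, population: int) -> int:
    """Finite population correction: shrinks the required sample when
    the population itself is small."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size()
print(n0)                    # 385 for a large population
print(fpc_adjust(n0, 1000))  # 279 when the population is only 1,000
```

The point is that ‘big enough’ is not a fixed number: it falls out of the precision you want and the population you are generalising to.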

By using a very small sample, you run the risk of losing validity and reliability in your findings. Raudys & Jain (1991) suggest that ‘Small sample effects can easily contaminate the design and evaluation of a proposed system’, and therefore larger samples should be taken to ensure your findings truly represent what you set out to investigate.

It would appear there is no universal optimum sample size; it depends upon the individual research, although to minimise error rates a larger sample is generally better. On the other hand, the size of the sample should not take priority over the suitability of participants, as this could lead to extraneous variables affecting the findings. To generalise findings, you need a large enough, appropriate sample to represent the variation of everyone in the population you are studying.

References:

Bartlett, J. E., Kotrlik, J. W., & Higgins, C. C. (2001) Organizational Research: Determining Appropriate Sample Size in Survey Research. Retrieved from: http://www.osra.org/itlpj/bartlettkotrlikhiggins.pdf

Flanagan, C., Hartnoll, L., & Murray, R. (2009) Index. Psychology AS, The Complete Companion. Buckinghamshire, England: Folens.

Howitt, D., & Cramer, D. (2011) The Basics of Research. Introduction to Research Methods in Psychology (p. 59). Harlow, England: Pearson.

NIST/SEMATECH e-Handbook of Statistical Methods. Retrieved from: http://www.itl.nist.gov/div898/handbook/ppc/section3/ppc333.htm

Raudys, S. J., & Jain, A. K. (1991) Sample Size Effects in Statistical Pattern Recognition: Recommendations for Practitioners. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 252-264. doi: 10.1109/34.75512