Three Main Types of Research

  • Causal-Type Research

When people think of scientific experiments, what usually comes to mind is research into what causes something and what its effects are. Experiments of the causal variety therefore investigate the effects that one or more variables can have on one or several outcome variables. Research of this type also looks at whether one thing causes another thing to change or to happen. A good example of causal research is changing a treatment type and evaluating the effect of that change on those taking part in the study.

  • Descriptive-Type Research

This type of research attempts to describe some scenario or thing that currently exists in a population or group of people. A suitable example of descriptive research is an opinion poll that seeks to find out which candidate people intend to vote for in an upcoming presidential election. This type of research does not attempt to measure the effects of variables; its only purpose is to describe something.

  • Relational-Type Research

This is a type of research that examines the link between two or more variables. The items being compared usually already exist in a population or group of people. A typical example of relational research is a study that considers the relationship between gender and music preference to determine the number of men and women who would be likely to buy a particular type of music.

Hypothesis and Theory

A hypothesis is a quite specific prediction that is testable and based on what a researcher expects to occur in or result from a study. For instance, a study that is designed to examine the link between study patterns and exam anxiety might be based on this hypothesis: “The purpose of this research is to evaluate the hypothesis that those who maintain a good study pattern will not suffer as much exam anxiety.” Unless the nature of a piece of research is exploratory, the hypothesis always states what is expected to occur during an experiment or piece of research. By contrast, theories are well-established and accepted principles that are developed to define or explain a certain aspect of nature or the natural world. Theories are created through continuous testing and observation, and they incorporate predictions, facts, laws, and carefully tested and widely held hypotheses.

Although the two terms are often used in an interchangeable way in general conversation, there is an important difference between a hypothesis and a theory when it comes to the study of experimental design. The following are two important differences: 

  • Theories predict events in a general way, whereas hypotheses are based on a specific expectation or prediction in respect of a particular event or circumstance.
  • Theories have been tested extensively and are usually widely accepted facts, whereas hypotheses are guesswork or a form of speculation about something that is still untested.

Time Effects in Psychology-Type Research

When designing a study or research project, you may use one of two time dimension types.

Research of the cross-sectional variety: This occurs at a single point in time.

  • Any variable, measure, or test is administered to the subject(s) on one particular occasion.
  • Research of this type aims to collect data on current conditions rather than examining the effects of something over a period of time.

Research of the longitudinal variety: These studies are conducted over given time periods.

  • Data is gathered at the beginning of the project and may be collected on a recurring basis while the study is ongoing.
  • Some studies of this type can be conducted over short periods, e.g. a matter of days, or, in some cases, over decades.
  • Longitudinal studies are often used, for example, to investigate the effects of the aging process. 

Relationships of the Causal Variety between Different Variables

When people refer to the “relationship” between things or variables, what do they mean? In most psychological research, this is a reference to a link between two or more factors that vary systematically or can be measured. One key distinction that should be made when referring to these relationships is what causation means.

  • Causal relationships occur when one given variable brings about a change in or to a different variable. Experimental research investigates these relationship types to see if a change in one given variable really can cause a change in another. 

Relationships of the Correlational Variety between Different Variables

Correlation refers to how the relationships between variables are measured. These are variables that already exist in the population or in groups of people; they are not something the person conducting the experiment controls.

  • Correlations of the positive type: These are direct type relationships whereby one variable increases in a corresponding manner to the other.
  • Correlations of the negative type: In these relationships, where one variable increases, the other decreases.
  • In each correlation type, no proof or evidence exists to show that changes in one thing or variable cause changes in another. All that correlation does is indicate that two things are linked by a relationship. The really important message here is that correlation is not the same as causation. Assuming that a causal relationship must exist because there is a relationship between two things is a mistake that many media sources are guilty of.

Here is a Question: What is meant by Validity?

The answer is that validity is a reference to how well a test succeeds in measuring what it is supposed to measure. It is crucial that tests are valid if the results are to be interpreted and applied in an accurate manner. It is not possible to determine validity from just one statistic. Rather, an entire piece of research that shows the relationship between the actual test and what it is supposed to be a measure of is required. Validity can be categorized into three different types:

The content type: For a test to demonstrate content validity, its items must be representative of the full range of items the test should cover. It is possible to draw individual questions for a test from a wide-ranging group of items covering a broad array of subject matter and topics.

In certain cases, a test may be attempting to measure a characteristic that simply is not easy to define. Where this happens, a judge may be called on to rate the relevance of each item. Because individual judges use opinion for rating purposes, it is necessary to have two impartial judges separately rate a test. Where both judges rate items as extremely relevant, those items go forward into the final test.

Criterion Type:

Tests are referred to as criterion-related when they are shown to be good at predicting indicators or criteria of some construct. Criterion validity can be further categorized into two types:

  • Concurrent: This type of validity is applicable where the criterion measures are taken at the same time as the test results or scores, i.e. concurrently. This is then used to indicate how well and/or how accurately the test results measure a person’s or item’s state in respect of the applicable criterion. Take, for instance, a test designed to measure depression levels: such a test would be judged to have this type of validity if it accurately measured the participant’s current depression levels.
  • Predictive: This type of validity is applicable where the test measures or criterion are taken at some time later than the test results. Aptitude and career tests are examples of predictive validity since they are useful in helping decide which candidates are likely to fail or succeed in particular jobs or subject areas.  

Construct Type:

Tests are considered to have construct validity where they demonstrate a connection between test results and something that has been predicted about a theoretical characteristic. An example of construct validity would be intelligence-based tests.

Question: What Does Reliability Mean?

The answer here is that reliability is a reference to how consistent a test measure is. Tests are deemed reliable if the same result(s) are repeatedly obtained. If, for instance, a test’s purpose is to measure a particular trait, then the results should be roughly the same whenever the test is run.  It is not possible to gauge reliability very precisely, but there are a number of ways to estimate it. 

Test-Retest Reliability

To determine test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of the test across time. It presumes no change in the quality being measured.
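As a rough illustration, test-retest reliability is often estimated by correlating the scores from the two administrations. The following is a minimal sketch only; the score lists are invented, and the use of scipy for the correlation is an assumption rather than anything prescribed above.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same ten participants,
# tested at two different points in time.
scores_time1 = [12, 15, 11, 18, 14, 16, 13, 17, 15, 14]
scores_time2 = [13, 14, 12, 17, 15, 16, 12, 18, 14, 15]

# The correlation between the two administrations serves as
# an estimate of test-retest reliability.
reliability, _ = pearsonr(scores_time1, scores_time2)
print(f"Test-retest reliability estimate: {reliability:.2f}")
```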

Inter-Rater Type Reliability

Reliability of this type is determined when two (or more) impartial judges rate a test. Then, results are compared to see how consistent the judges’ estimates are. One method for testing inter-rater reliability is to get the judges to assign a score to each one of the test items, e.g. using a scale of one to ten. After that, it would be necessary to calculate or estimate the correlation that exists between the two sets of scores to decide the inter-rater reliability level. Another way to test this type of reliability is to ask the judges to decide what category the different observations fall into and then to work out what percentage agreement exists between the judges, as shown in the sketch below. Hence, for example, if the judges are in agreement in eight out of ten cases, a test can be deemed to have an inter-rater reliability rating of 80%.
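To make the percentage-agreement method concrete, here is a minimal sketch that reproduces the eight-out-of-ten = 80% calculation described above. The two judges’ category assignments are invented purely for illustration.

```python
# Hypothetical category assignments from two impartial judges
# for the same ten observations.
judge_a = ["relevant", "relevant", "irrelevant", "relevant", "relevant",
           "irrelevant", "relevant", "relevant", "irrelevant", "relevant"]
judge_b = ["relevant", "relevant", "irrelevant", "relevant", "irrelevant",
           "irrelevant", "relevant", "irrelevant", "irrelevant", "relevant"]

# Count the cases where the two judges placed an observation
# in the same category.
agreements = sum(a == b for a, b in zip(judge_a, judge_b))
percent_agreement = agreements / len(judge_a) * 100
print(f"Inter-rater agreement: {percent_agreement:.0f}%")  # prints 80%
```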

Parallel-Forms Type of Reliability

The parallel-forms type of reliability is determined by comparing different tests with the same or similar content. This can be achieved by assembling a large group of items of equal quality, randomly dividing the items into two different tests, and then running both tests simultaneously on the same group of participants.
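As a sketch of the procedure just described, one could randomly divide a pool of equivalent items into two forms and then correlate participants’ totals on each form. The item pool, the scores, and the use of scipy are all illustrative assumptions here, not part of the method as stated above.

```python
import random
from scipy.stats import pearsonr

# Hypothetical pool of 20 equivalent item identifiers.
item_pool = [f"item_{i}" for i in range(20)]

# Randomly split the pool into two parallel forms of equal length.
random.shuffle(item_pool)
form_a, form_b = item_pool[:10], item_pool[10:]
print("Form A items:", form_a)
print("Form B items:", form_b)

# Hypothetical total scores for the same participants on each form,
# both forms administered in the same session.
scores_form_a = [7, 9, 6, 8, 10, 5, 9, 7, 8, 6]
scores_form_b = [6, 9, 7, 8, 9, 5, 8, 7, 9, 6]

# The correlation between the two forms estimates parallel-forms reliability.
reliability, _ = pearsonr(scores_form_a, scores_form_b)
print(f"Parallel-forms reliability estimate: {reliability:.2f}")
```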

Internal Consistency Type of Reliability

Reliability of the internal consistency variety is generally used to gauge how consistent results are across the items within the same test. What you are essentially comparing are test elements designed to measure one and the same construct, with a view to determining the internal consistency of the test. If one question looks very much like another, the aim of both questions may be to measure the same thing. Because both questions are alike and serve the purpose of measuring similar criteria, the person taking the test would be expected to give the same answer(s) to the two questions, which is an indication of internal consistency.
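One common way to quantify internal consistency is a split-half approach: the items are divided into two halves, each participant’s half-scores are correlated, and a high correlation suggests the items measure the same construct. This particular approach is not spelled out above, so treat the sketch below, with its invented item responses, purely as an illustration.

```python
from scipy.stats import pearsonr

# Hypothetical responses: each row is one participant, each column one item
# intended to measure the same construct (scored 1-5).
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3],
    [1, 2, 1, 2, 1, 2],
]

# Split the items into two halves (odd vs. even positions)
# and total each half per participant.
half1 = [sum(row[0::2]) for row in responses]
half2 = [sum(row[1::2]) for row in responses]

# A high correlation between the halves suggests the items are
# internally consistent, i.e. measuring the same thing.
split_half_r, _ = pearsonr(half1, half2)
print(f"Split-half correlation: {split_half_r:.2f}")
```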

Simple Experiment 

Looking for Cause/Effect Relationships - What is Meant by the Simple Experiment?

This experimental method is designed to find cause(s) and effect(s), so studies of this type are often undertaken to find a treatment’s effect(s). In a simple experiment, the participants of a study are randomly assigned to one of two groups. Usually, one of these groups is the control group and does not receive the treatment. The second group, by contrast, is the experimental group and does receive the treatment.
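As a small sketch of the assignment step, random assignment to a control and an experimental group might be carried out as below. The participant names and group sizes are invented for illustration.

```python
import random

# Hypothetical pool of study participants.
participants = [f"participant_{i}" for i in range(1, 21)]

# Shuffle the pool, then split it in half: the first half becomes the
# control group (no treatment) and the second half the experimental
# group (receives the treatment).
random.shuffle(participants)
midpoint = len(participants) // 2
control_group = participants[:midpoint]
experimental_group = participants[midpoint:]

print("Control group:", control_group)
print("Experimental group:", experimental_group)
```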

What a Simple Experiment is comprised of – The Parts

Experimental hypotheses: These are statements that predict a treatment will result in one or more effects. This type of hypothesis is always phrased as a cause-and-effect statement.

Null hypotheses: This type of hypothesis states that the treatment from an experiment will not have any effect(s) on the test participants or on the dependent variables. An important thing to note here is that not finding a treatment effect is not to say the treatment does not have any effect(s). It might, for example, have some effect on different variables that are not being measured in a particular experiment. 

Independent variables: Treatment variables that the person undertaking the experiment manipulates.

Dependent variables: These represent the test response being measured by the person(s) undertaking the experiment.

Control groups: These groups are comprised of test participants who are assigned in a random manner to a test group but group members do not receive test treatment. Results from these groups are compared to the results from corresponding experimental groups to see what – if any – effects the treatment has had.

Experimental groups: These groups are comprised of test participants who are assigned in a random manner to a test group and do receive test treatment. The results from these groups are then compared to the results from corresponding control groups to see what – if any – effects the test treatment has had. 

Deciding or Interpreting Simple Experiment Results

When data from a simple experiment is collected, the person(s) undertaking the test compare the results from the experimental group to those from the corresponding control group. The aim is to decide what effect the treatment has had. But how are effects determined? Because the risk of error is always present, it is not possible to be certain about the relationships that exist between variables. Still, there are methods for determining whether a meaningful relationship exists. Those running such experiments can use inferential statistics to decide whether a test has yielded any meaningful results. These statistics belong to a scientific area that draws inferences about a group or population using certain measures or techniques applied to representative samples taken from the participating groups. Determining statistical significance is pivotal to deciding whether a treatment has an effect and what that effect is.

Researchers rely heavily on statistical significance since it indicates that signs of a relationship between variables are unlikely to be attributable to chance and that any such relationships are much more likely to be meaningful ones. Often, statistical significance is represented as follows: p < .05. Where the p-value is under .05, the likelihood of the results being mere chance is just 5% or less. Sometimes, lower thresholds are used, e.g. p < .01 or .02. It is possible to measure statistical significance in several ways; the nature of most statistical tests depends to a large extent on the research type and design being used.
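As a concrete sketch of how such an inferential test might be run, the snippet below compares hypothetical control and experimental scores and reports the p-value that is checked against the .05 threshold. The data are invented, and the choice of an independent-samples t-test (via scipy) is an assumption about the design rather than something prescribed above.

```python
from scipy.stats import ttest_ind

# Hypothetical outcome scores measured on the dependent variable.
control_scores = [22, 25, 21, 24, 23, 26, 22, 24, 25, 23]
experimental_scores = [27, 30, 26, 29, 28, 31, 27, 30, 29, 28]

# An independent-samples t-test compares the two group means.
t_statistic, p_value = ttest_ind(experimental_scores, control_scores)

print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
if p_value < .05:
    print("Result is statistically significant at the .05 level.")
else:
    print("Result is not statistically significant at the .05 level.")
```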

Studies of Correlation 

Psychology-Based Research Using Correlational Studies

Why Use a Correlational Study?

It is commonplace for researchers to use correlational studies when looking for relationships between different variables. These studies can produce three possible results: a positive correlation, a negative correlation, or no correlation at all. The strength of a correlation is measured by a correlation coefficient ranging from -1.00 up to +1.00.

  • Correlations of the Positive Type

In this type of correlation, the two variables simultaneously go up or down i.e. they increase or they decrease together. A strong correlation of the positive type would be indicated by a coefficient approaching +1.00.  

  • Correlations of the Negative Type

In this type of correlation, one variable goes up or increases while the second one goes down or decreases or it may happen the other way around. A strong correlation of the negative type would be indicated by a coefficient approaching -1.00.

  • No Existence of Any Correlation

This means there is no type of relationship between variables. No correlation is indicated by a 0 coefficient. 
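To make the coefficient concrete, here is a minimal sketch computing a Pearson correlation coefficient in Python. The two variables and their values are invented for illustration, and the use of scipy is an assumption.

```python
from scipy.stats import pearsonr

# Hypothetical measurements of two variables for the same ten people,
# e.g. hours of study and exam score.
hours_studied = [2, 4, 5, 1, 7, 3, 6, 8, 2, 5]
exam_scores = [55, 62, 70, 48, 85, 60, 78, 90, 52, 68]

# The coefficient ranges from -1.00 (strong negative correlation)
# through 0 (no correlation) to +1.00 (strong positive correlation).
coefficient, _ = pearsonr(hours_studied, exam_scores)
print(f"Correlation coefficient: {coefficient:+.2f}")
```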

Are There Any Limitations to a Correlational Study?

Although a correlational study may indicate a link between two test variables, it cannot prove that one of the variables brings about a change in the other. Put another way, correlation is not the same as causation. A correlational study may, for instance, indicate a link between success and an individual’s self-esteem, yet it cannot prove whether self-esteem is higher or lower because of success. Other variables may play some part, including an individual’s personality, cognitive state, socio-economic circumstances, social skills/relationships, and a host of other factors.

Different Types of Correlational Study

1. Naturalistic Observation

This type of study requires the person conducting the test to observe and record the subjects or variables they are interested in as these interact in their natural habitat or environment, i.e. without the tester interfering with or manipulating them in any way.

The benefits of naturalistic observation are: 

  • Allows the person conducting the test to observe variables in a natural environment or setting.
  • Can provide the only method of study e.g. if it isn’t possible to study the variables in a lab.
  • Can provide ideas or materials for future study. 

The downsides to naturalistic observation are:

  • Variables cannot be controlled scientifically.
  • Awareness of the experimenter’s presence may cause subjects to behave differently.
  • Observer has no control over extraneous variables.
  • Can take up a lot of time and be costly. 

2. Survey as a Method of Study

Questionnaires and surveys are often used in research projects involving psychological study. The survey method requires a randomly chosen group of participants to complete a questionnaire, test, or survey related to a given object or variable. Generally speaking, random samples are vital for ensuring that results can be generalized.

The benefits of survey as a study method are:  

  • Easy to conduct, quick, and inexpensive. Vast amounts of data can be collected in a short timeframe.
  • Offers greater flexibility than other test methods. 

The downsides to surveys are:

  • The results can be affected by participants. Some of those taking part may want to try to look better, want to please the surveyor, or their memories may play tricks on them.
  • Badly-written questions or samples that are unrepresentative can impact the results. 

3. Research of the Archival Type

This type of research involves analyzing various studies undertaken by other experimenters or researchers or by examining historical records. 

The benefits of archival-type research are: 

  • Vast amounts of test data provide a clearer and better picture of relationships, general trends, and results.
  • Experimenters cannot make changes to the way participants behave.
  • Not as expensive as other methods of study. It is often possible to get access to data using free-of-charge databases and record archives. 

The downsides to archival-type research are:

  • Records may be missing vital data.
  • Research done previously may not be reliable.
  • Researchers do not have any control over data collection methods.