When people think of scientific experiments, what often comes to mind is research into causes and effects. Causal experiments investigate the effects that one or more variables have on one or more outcome variables, and they also look at whether one thing causes another to change or to happen. A good example of causal research is changing a treatment type and evaluating the effect of that change on those taking part in the study.
This type of research attempts to describe a scenario or condition that currently exists in a population or group of people. A suitable example of descriptive research is an opinion poll that seeks to find out which candidate people intend to vote for in an upcoming presidential election. This type of research does not attempt to measure the effects of variables; its only purpose is to describe something.
This is a type of research that examines the link between two or more variables. The items being compared usually already exist in a population or group of people. A typical example of relational research is a study that considers the relationship between gender and music preference to determine how many men and women would be likely to buy a particular type of music.
A hypothesis is a quite specific, testable prediction based on what a researcher expects to occur in or result from a study. For instance, a study designed to examine the link between study patterns and exam anxiety might be based on this hypothesis: “The purpose of this research is to evaluate the hypothesis that those who maintain a good study pattern will not suffer as much exam anxiety.” Unless the nature of a piece of research is exploratory, the hypothesis always explains what is expected to occur during an experiment or piece of research. By contrast, theories are well-established and accepted principles developed to define or explain a certain aspect of nature or the natural world. Theories are created through continuous testing and observation, and they incorporate predictions, facts, laws, and carefully tested and widely held hypotheses.
Although the two terms are often used interchangeably in general conversation, there is an important difference between a hypothesis and a theory when it comes to the study of experimental design. The following are two important differences:
When designing a study or research project, you may use one of two time dimension types.
Research of the cross-sectional variety: This occurs at a single point in time.
Research of the longitudinal variety: These studies are conducted over given time periods.
When people refer to the “relationship” between things or variables, what do they mean? In most psychological research, this is a reference to a link between two or more factors that can be measured and that vary systematically. One key distinction to make when referring to these relationships is the difference between correlation and causation.
Correlation refers to how the relationships between variables are measured. These variables already exist in the population or group of people; they are not something the person conducting the experiment controls.
The answer is that validity refers to how well a test succeeds in measuring what it is supposed to measure. It is crucial that tests are valid if the results are to be interpreted and applied accurately. Validity cannot be determined by just one statistic; rather, it requires an entire piece of research showing the relationship between the actual test and what it is supposed to be a measure of. Validity can be categorized into three different types:
The content type: Where a test is to demonstrate content validity, the test items are representative of every potential item that the test could cover. Individual questions for a test can be drawn from a wide-ranging group of items covering a broad array of subject matter and topics.
In certain cases, a test may attempt to measure a characteristic that is simply not easy to define. Where this happens, a judge may be called on to rate the relevance of each item. Because individual judges rely on opinion when rating, it is necessary to have two impartial judges rate the test separately. Items that both judges rate as extremely relevant go forward to the final test.
Tests are referred to as criterion-related when they are shown to be good at predicting an indicator or criterion of some construct. Criterion validity can be further categorized into two types:
Tests are considered to demonstrate construct validity where they show a connection between test results and something that has been predicted about a theoretical characteristic. An example of construct validity would be intelligence-based tests.
The answer here is that reliability is a reference to how consistent a test measure is. Tests are deemed reliable if the same result(s) are repeatedly obtained. If, for instance, a test’s purpose is to measure a particular trait, then the results should be roughly the same whenever the test is run. It is not possible to gauge reliability very precisely, but there are a number of ways to estimate it.
To determine test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of the test across time. It presumes no change in the quality being measured.
Reliability of this type is determined when two (or more) impartial judges rate a test. Then, results are compared to see how consistent the judges’ estimates are. One method for testing inter-rater reliability is to have the judges assign a score to each of the test items, e.g. using a scale of one to ten. After that, the correlation between the two sets of scores is calculated or estimated to decide the level of inter-rater reliability. Another way to test this type of reliability is to ask the judges to decide which category each observation falls into and then work out the percentage agreement between the judges. Hence, for example, if the judges agree in eight out of ten cases, a test can be deemed to have an inter-rater reliability rating of 80%.
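The percentage-agreement approach described above is straightforward to compute; here is a minimal sketch in Python (the two judges’ category ratings are hypothetical):

```python
def percent_agreement(ratings_a, ratings_b):
    # Share of observations that both judges placed in the same category.
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical categorical judgments from two judges on ten observations.
judge_1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "A"]
judge_2 = ["A", "A", "B", "B", "A", "B", "A", "B", "A", "A"]

print(percent_agreement(judge_1, judge_2))  # agreement in 8 of 10 cases -> 0.8
```

For numeric ratings on a one-to-ten scale, you would instead correlate the two judges’ score lists, as the text describes.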
Parallel-forms reliability is determined by comparing different tests with the same or similar content. This can be achieved by assembling a large group of same-quality test items, randomly dividing the items into two different tests, and then running both tests simultaneously on the same group of participants.
Internal-consistency reliability is generally used to gauge how consistent results are across items within the same test. What you are essentially comparing are test elements designed to measure the same construct, with a view to determining the internal consistency of the test. If one question looks very much like another, it may be that both questions aim to assess the same thing. Because both questions are alike and serve the purpose of measuring similar criteria, the person taking the test should give the same answer(s) to the two questions, an indication of internal consistency.
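The text does not name a specific statistic, but internal consistency across items is commonly estimated with Cronbach’s alpha; a minimal sketch, assuming scores are laid out as one row per respondent and one column per item (the scores below are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    # scores: one row per respondent, one column per test item.
    k = len(scores[0])                              # number of items
    item_vars = [pvariance(col) for col in zip(*scores)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical item scores for four respondents on three similar items:
# respondents who score high on one item score high on the others too.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
]
print(round(cronbach_alpha(scores), 2))  # close to 1.0: high consistency
```

Values near 1.0 mean the items rise and fall together across respondents, which is exactly the “similar answers to similar questions” pattern described above.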
Looking for Cause/Effect Relationships - What is Meant by the Simple Experiment?
This experimental method is designed to find cause(s) and effect(s), so studies of this type are often undertaken to find a treatment’s effect(s). In a simple experiment, the participants are randomly assigned to one of two groups. Usually, one of these groups is the control group and does not receive the treatment. The second group, by contrast, is the experimental group and does receive the treatment.
Experimental hypotheses: These are statements that predict a treatment will result in one or more effects. This type of hypothesis is always phrased in the shape of a cause/effect statement.
Null hypotheses: This type of hypothesis states that the treatment from an experiment will not have any effect(s) on the test participants or on the dependent variables. An important thing to note here is that not finding a treatment effect is not to say the treatment does not have any effect(s). It might, for example, have some effect on different variables that are not being measured in a particular experiment.
Independent variables: Treatment variables that the person undertaking the experiments manipulates.
Dependent variables: These represent the test response being measured by the person(s) undertaking the experiment.
Control groups: These groups are made up of test participants who are randomly assigned to a group but do not receive the test treatment. Results from these groups are compared to the results from corresponding experimental groups to see what – if any – effects the treatment has had.
Experimental groups: These groups are made up of test participants who are randomly assigned to a group and do receive the test treatment. The results from these groups are then compared to the results from corresponding control groups to see what – if any – effects the test treatment has had.
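The random assignment step that produces the control and experimental groups can be sketched as follows (the participant identifiers and seed are illustrative):

```python
import random

def assign_groups(participants, seed=None):
    # Shuffle a copy of the participant list, then split it in half:
    # the first half becomes the control group, the second the
    # experimental group. A seed makes the assignment reproducible.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

control, experimental = assign_groups(range(20), seed=1)
print(len(control), len(experimental))  # 10 10
```

Random assignment matters because it spreads unmeasured differences between participants evenly across both groups, so any difference in outcomes is more plausibly due to the treatment.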
When data from a simple experiment is collected, the person(s) undertaking the test look at the results from a particular experimental group and compare them to those from a corresponding control group. The aim is to decide what effect the treatment has had. But how are effects determined? Because the risk of error is always present, it is not possible to be sure about the relationships that exist between variables. Still, there are methods for determining whether some meaningful type of relationship exists. Those running such experiments can use inferential statistics to decide whether a test has yielded any meaningful results. These statistics belong to a scientific area that draws inferences about a group or population using certain measures or techniques to interrogate representative samples taken from the participating groups. Determining statistical significance is pivotal to deciding whether a treatment has had an effect.
Researchers rely heavily on statistical significance since this indicates that signs of a relationship between variables are unlikely to be attributable to chance and that any such relationships are much more likely to be meaningful ones. Often, statistical significance is represented as follows: p < .05. Where the p-value is under .05, the likelihood of the results being mere chance is just 5% or less. Sometimes, stricter thresholds are used, e.g. p < .01 or p < .02. It is possible to measure statistical significance in several ways. The nature of most statistical tests depends to a large extent on the research type and design being used.
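One simple way to obtain such a p-value, though by no means the only one, is a permutation test: pool the two groups’ scores, reshuffle them many times, and count how often a difference at least as large as the observed one arises by chance. A minimal sketch, with hypothetical scores:

```python
import random
from statistics import mean

def permutation_p_value(treated, control, n_permutations=10_000, seed=0):
    # Two-sided p-value for the observed difference in group means.
    rng = random.Random(seed)
    observed = abs(mean(treated) - mean(control))
    pooled = list(treated) + list(control)
    n = len(treated)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        # Difference in means under a random relabeling of the groups.
        diff = abs(mean(pooled[:n]) - mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical scores where the experimental group clearly outperforms
# the control group; the resulting p-value falls well below .05.
p = permutation_p_value([10, 11, 12, 13, 14], [0, 1, 2, 3, 4])
print(p < .05)  # True
```

If the two groups’ scores were identical, the same function would return a p-value of 1.0, i.e. no evidence of a treatment effect.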
Psychology-Based Research Using Correlational Studies
Why Use a Correlational Study?
It is commonplace for researchers to use correlational studies when looking for relationships between different variables. These studies can produce three possible results: a positive correlation, a negative correlation, or no correlation at all. The strength of a correlation is measured by a correlation coefficient ranging from -1.00 to +1.00.
In this type of correlation, the two variables simultaneously go up or down i.e. they increase or they decrease together. A strong correlation of the positive type would be indicated by a coefficient approaching +1.00.
In this type of correlation, one variable goes up or increases while the second one goes down or decreases or it may happen the other way around. A strong correlation of the negative type would be indicated by a coefficient approaching -1.00.
This means there is no relationship between the variables. No correlation is indicated by a coefficient of 0.
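The coefficient described above is typically Pearson’s r; a minimal implementation showing how the -1.00 to +1.00 range arises (the data points are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation coefficient: covariance of the two variables
    # divided by the product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # both rise together: r = 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # one falls as the other rises: r = -1.0
```

Real data rarely reaches the extremes; coefficients merely approaching +1.00 or -1.00 indicate strong positive or negative correlations.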
Although a correlational study may indicate a link between two test variables, it cannot prove that either variable brings about a change in the other. Put another way, correlation is not the same as causation. For instance, a correlational study may indicate a link between success and an individual’s self-esteem, yet it cannot prove whether self-esteem is higher or lower because of success. Other variables may play some part, including an individual’s personality, cognitive state, socio-economic circumstances, social skills/relationships, and a host of other factors.
1. Naturalistic Observation
This type of study requires the person conducting it to observe and record the subjects they are interested in as those subjects interact in their natural habitat or environment, i.e. without the observer interfering with or manipulating them in any way.
The benefits of naturalistic observation are:
The downsides to naturalistic observation are:
2. Survey as a Method of Study
Questionnaires and surveys are often used in research projects involving psychological study. The survey method requires a randomly chosen group of participants to complete a questionnaire, test, or survey related to a given object or variable. Generally speaking, random samples are vital to ensuring that results generalize to the wider population.
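Drawing the random sample that the survey method relies on can be sketched with the standard library (the sampling frame and sample size here are hypothetical):

```python
import random

# Hypothetical sampling frame of 1,000 potential respondents.
population = [f"person_{i}" for i in range(1000)]

# Simple random sample of 100 participants without replacement;
# a fixed seed makes the draw reproducible.
rng = random.Random(42)
sample = rng.sample(population, 100)
print(len(sample))  # 100
```

Every member of the frame has an equal chance of selection, which is what lets the surveyed opinions stand in for those of the whole population.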
The benefits of survey as a study method are:
The downsides to surveys are:
3. Research of the Archival Type
This type of research involves analyzing various studies undertaken by other experimenters or researchers or by examining historical records.
The benefits of archival-type research are:
The downsides to archival-type research are: