Reporting d-prime results

I am currently working on research analysing yes/no responses in a recognition memory task. The false alarm rate is quite high, so I have computed d-prime values, one for each individual participant. Now that I have these d-prime values I do not know how to analyse or report them. Some papers show a comparison using an ANOVA, but I do not know how to conduct this comparison with the d-prime data, or whether that is what I should be doing at all. Should I be using the mean of my d-prime values? How should I go about reporting my d-prime results?


d-prime values are usually fairly well normally distributed, so you can use any parametric test that relies on the normality assumption (in fact, you should always z-transform proportions before using a parametric test on them). If you have only two conditions you can use a classical t-test. From your question it seems you have only one value per participant, so that would probably be an independent-samples t-test. If you have more than one comparison, an ANOVA would be the typical test to use.
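
For concreteness, here is a minimal sketch in Python of that workflow; it is not taken from the original answer, and the extreme-rate correction, group sizes and response counts are all assumptions made for illustration.

```python
# Minimal sketch (not from the original answer): compute one d' per
# participant from yes/no response counts, then compare two groups of
# participants with an independent-samples t-test. The +0.5 / +1 correction
# for extreme rates and all counts below are illustrative assumptions.
import numpy as np
from scipy import stats

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction so
    that rates of exactly 0 or 1 do not give infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return stats.norm.ppf(hit_rate) - stats.norm.ppf(fa_rate)

# One d' value per participant in each of two (hypothetical) groups
group_a = np.array([d_prime(40, 10, 12, 38),
                    d_prime(35, 15, 8, 42),
                    d_prime(44, 6, 20, 30)])
group_b = np.array([d_prime(30, 20, 25, 25),
                    d_prime(28, 22, 22, 28),
                    d_prime(33, 17, 18, 32)])

t, p = stats.ttest_ind(group_a, group_b)      # independent-samples t-test
df = len(group_a) + len(group_b) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```

With more than two conditions, the same per-condition arrays of d' values can be passed to scipy.stats.f_oneway for a one-way ANOVA.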

Reporting the results is a bit tricky. d-prime values are not that intuitive for most readers, so you might want to report the fraction correct in the text but run your tests on d-primes. For the same reason you might want to plot your results as fractions. Below is an example of wording (but refer to a statistics textbook for conducting and reporting statistical tests). Note that it is fine to report means and standard deviations of fractions; however, parametric tests should always be conducted on z-transformed values. People use raw fractions in tests all the time, but it is wrong.

The mean recognition rate increased from NN% to NN% between the conditions … and …, for a mean improvement of NN ± NN% (SE). This difference was significant in an independent t-test on z-transformed recognition rates: t(N) = N.NN, p = .NNN.


Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a chapter that is published in an edited book. Preferably the editor of the book puts the chapter through peer review, but this is not always the case; some scientists are simply invited by editors to write book chapters.

A fun way to disseminate findings is to give a presentation at a conference. This can either be done as an oral presentation or a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Alternatively, poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by his or her poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting to undergo the more rigorous peer-review process involved in publishing a journal article.


Reporting statistics in APA style

Please pay attention to issues of italics and spacing. APA style is very precise about these. Also, with the exception of some p values, most statistics should be rounded to two decimal places.
Mean and Standard Deviation are most clearly presented in parentheses:

The sample as a whole was relatively young (M = 19.22, SD = 3.45).

The average age of students was 19.22 years (SD = 3.45).

Percentages are also most clearly displayed in parentheses, with no decimal places.

Chi-square statistics are reported with degrees of freedom and sample size in parentheses, the Pearson chi-square value (rounded to two decimal places), and the significance level.
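
As a hedged illustration (not part of the style guide above), the values needed for such a report can be obtained from scipy.stats.chi2_contingency; the contingency table is invented for the example.

```python
# Hedged sketch: pulling the numbers for a chi-square report out of
# scipy.stats.chi2_contingency. The 2 x 2 table is invented for illustration.
from scipy.stats import chi2_contingency

observed = [[30, 10],   # e.g. group 1: yes / no
            [20, 25]]   # e.g. group 2: yes / no

chi2, p, dof, expected = chi2_contingency(observed)
n = sum(sum(row) for row in observed)           # total sample size
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.3f}")
```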

t tests are reported like chi-squares, but only the degrees of freedom are in parentheses. Following that, report the t statistic (rounded to two decimal places) and the significance level.

ANOVAs (both one-way and two-way) are reported like the t test, but there are two degrees-of-freedom numbers to report. First report the between-groups degrees of freedom, then report the within-groups degrees of freedom (separated by a comma). After that, report the F statistic (rounded to two decimal places) and the significance level.
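
A short sketch, again with invented d-prime values, shows where those degrees of freedom come from when the tests are run in Python:

```python
# Hedged sketch: the degrees of freedom used when reporting a t test
# (n1 + n2 - 2) and a one-way ANOVA (k - 1 between, N - k within).
# The d' values below are made up for illustration.
import numpy as np
from scipy import stats

d_group1 = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.0])
d_group2 = np.array([1.2, 1.6, 1.1, 1.7, 1.4, 1.3])
d_group3 = np.array([0.9, 1.3, 1.0, 1.5, 1.1, 1.2])

# Independent-samples t test
t, p = stats.ttest_ind(d_group1, d_group2)
df = len(d_group1) + len(d_group2) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")

# One-way ANOVA across the three groups
f, p = stats.f_oneway(d_group1, d_group2, d_group3)
k = 3
n_total = len(d_group1) + len(d_group2) + len(d_group3)
print(f"F({k - 1}, {n_total - k}) = {f:.2f}, p = {p:.3f}")
```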

Correlations are reported with the degrees of freedom (which is N − 2) in parentheses and the significance level.
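
A correlation report can be assembled the same way; in this hedged sketch the variable names and data are invented.

```python
# Hedged sketch: a Pearson correlation reported with df = N - 2.
# Variable names and values are invented for illustration.
import numpy as np
from scipy import stats

age = np.array([19, 21, 23, 20, 25, 22, 24, 18])
score = np.array([2.1, 2.4, 2.9, 2.2, 3.1, 2.6, 3.0, 1.9])

r, p = stats.pearsonr(age, score)
df = len(age) - 2
print(f"r({df}) = {r:.2f}, p = {p:.3f}")
```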


Regression results are often best presented in a table, but if you would like to report the regression in the text of your Results section, you should at least present the unstandardized or standardized slope (beta), whichever is more interpretable given the data, along with the t-test and the corresponding significance level. (Degrees of freedom for the t-test is N − k − 1, where k equals the number of predictor variables.) It is also customary to report the percentage of variance explained along with the corresponding F test.
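
For a simple one-predictor regression, those quantities can be obtained from scipy.stats.linregress, as in this hedged sketch with invented data:

```python
# Hedged sketch: a simple one-predictor regression, reporting the
# unstandardized slope, its t test (df = N - k - 1), and the variance
# explained with the corresponding F test. Data are invented.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 2.9, 3.1, 4.2, 4.8, 5.1, 6.0, 6.2])

res = stats.linregress(x, y)
n, k = len(x), 1                       # one predictor
df_error = n - k - 1
t = res.slope / res.stderr             # t statistic for the slope
r_squared = res.rvalue ** 2
f = (r_squared / k) / ((1 - r_squared) / df_error)   # F for the model
print(f"b = {res.slope:.2f}, t({df_error}) = {t:.2f}, p = {res.pvalue:.3f}")
print(f"R^2 = {r_squared:.2f}, F({k}, {df_error}) = {f:.2f}")
```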

Tables are useful if you find that a paragraph has almost as many numbers as words. If you do use a table, do not also report the same information in the text. It's either one or the other.

Based on:
American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). Washington, DC: Author.


Reporting bias

Research can only contribute to knowledge if it is communicated from investigators to the community. The generally accepted primary means of communication is “full” publication of the study methods and results in an article published in a scientific journal. Sometimes, investigators choose to present their findings at a scientific meeting as well, either through an oral or poster presentation. These presentations are included as part of the scientific record as brief “abstracts” which may or may not be recorded in publicly accessible documents typically found in libraries or the World Wide Web.

Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.

Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results, for instance in systematic reviews. [4] “Positive results” is a commonly used term to describe a study finding that one intervention is better than another.

Various attempts have been made to overcome the effects of the reporting biases, including statistical adjustments to the results of published studies. [5] None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems have been addressed, estimates of the effects of treatments based on published evidence may be biased.

Litigation brought in 2004 by consumers and health insurers against Pfizer over fraudulent sales practices in the marketing of the drug gabapentin revealed a comprehensive publication strategy that employed elements of reporting bias. [6] Spin was used to emphasize findings favorable to gabapentin and to explain away unfavorable findings. In this case, favorable secondary outcomes became the focus in place of the original primary outcome, which was unfavorable. Other changes found in outcome reporting include the introduction of a new primary outcome, failure to distinguish between primary and secondary outcomes, and failure to report one or more protocol-defined primary outcomes. [7]

The decision to publish certain findings in certain journals is another strategy. [6] Trials with statistically significant findings were generally published in academic journals with higher circulation more often than trials with nonsignificant findings. Timing of publication results of trials was influenced, in that the company tried to optimize the timing between the release of two studies. Trials with nonsignificant findings were found to be published in a staggered fashion, as to not have two consecutive trials published without salient findings. Ghost authorship was also an issue, where professional medical writers who drafted the published reports were not properly acknowledged.

Fallout from this case is still being settled by Pfizer in 2014, 10 years after the initial litigation. [8]

Publication bias

The publication or nonpublication of research findings, depending on the nature and direction of the results. Although medical writers have acknowledged the problem of reporting biases for over a century, [9] it was not until the second half of the 20th century that researchers began to investigate the sources and size of the problem of reporting biases. [10]

Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive. [10] Almost all failure to publish is due to failure of the investigator to submit; [11] only a small proportion of studies are not published because of rejection by journals. [12]

The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval. [13] These studies have shown that “positive findings” is the principal factor associated with subsequent publication: researchers say that the reason they don't write up and submit reports of their research for publication is usually because they are “not interested” in the results (editorial rejection by journals is a rare cause of failure to publish).

Even those investigators who have initially published their results as conference abstracts are less likely to publish their findings in full unless the results are “significant”. [14] This is a problem because data presented in abstracts are frequently preliminary or interim results and thus may not be reliable representations of what was found once all data were collected and analyzed. [15] In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.

The main factor associated with failure to publish is negative or null findings. [16] Controlled trials that are eventually reported in full are published more rapidly if their results are positive. [15] Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.

It is now well-established that publication bias with more favorable efficacy results is associated with the source of funding for studies that would not otherwise be explained through usual risk of bias assessments. [17]

Time lag bias

The rapid or delayed publication of research findings, depending on the nature and direction of the results. In a systematic review of the literature, Hopewell and her colleagues found that overall, trials with “positive results” (statistically significant in favor of the experimental arm) were published about a year sooner than trials with “null or negative results” (not statistically significant or statistically significant in favor of the control arm). [15]

Multiple (duplicate) publication bias

The multiple or singular publication of research findings, depending on the nature and direction of the results. Investigators may also publish the same findings multiple times using a variety of patterns of “duplicate” publication. [18] Many duplicates are published in journal supplements, a literature that is potentially difficult to access. Positive results appear to be published more often in duplicate, which can lead to overestimates of a treatment effect.

Location bias

The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of results. There is also evidence that, compared to negative or null results, statistically significant results are on average published in journals with greater impact factors, [19] and that publication in the mainstream (non grey) literature is associated with an overall greater treatment effect compared to the grey literature. [20]

Citation bias

The citation or non-citation of research findings, depending on the nature and direction of the results. Authors tend to cite positive results over negative or null results, and this has been established over a broad cross section of topics. [21] [22] [23] [24] [25] [26] Differential citation may lead to a perception in the community that an intervention is effective when it is not, and it may lead to over-representation of positive findings in systematic reviews if those left uncited are difficult to locate.

Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.

Language bias

The publication of research findings in a particular language, depending on the nature and direction of the results. There is a longstanding question about whether there is a language bias such that investigators choose to publish their negative findings in non-English-language journals and reserve their positive findings for English-language journals. Some research has shown that language restrictions in systematic reviews can change the results of the review, [27] while in other cases authors have not found that such a bias exists. [28]

Knowledge reporting bias

The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. People write about only some parts of the world around them; much of the information is left unsaid. [2] [29]

Outcome reporting bias

The selective reporting of some outcomes but not others, depending on the nature and direction of the results. [30] A study may be published in full, but pre-specified outcomes omitted or misrepresented. [7] [31] Efficacy outcomes that are statistically significant have a higher chance of being fully published compared to those that are not statistically significant.


Reporting Biases

The dissemination of research findings is not a division into published or unpublished, but a continuum ranging from the sharing of draft papers among colleagues, through presentations at meetings and published abstracts, to papers in journals that are indexed in the major bibliographic databases (Smith 1999). It has long been recognized that only a proportion of research projects ultimately reach publication in an indexed journal and thus become easily identifiable for systematic reviews.

Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of results. Statistically significant, ‘positive’ results that indicate that an intervention works are more likely to be published, more likely to be published rapidly, more likely to be published in English, more likely to be published more than once, more likely to be published in high impact journals and, related to the last point, more likely to be cited by others. The contribution made to the totality of the evidence in systematic reviews by studies with non-significant results is as important as that from studies with statistically significant results.

The table below summarizes some different types of reporting biases.

Type of reporting bias and definition:

  • Publication bias: the publication or non-publication of research findings, depending on the nature and direction of the results.
  • Time lag bias: the rapid or delayed publication of research findings, depending on the nature and direction of the results.
  • Multiple (duplicate) publication bias: the multiple or singular publication of research findings, depending on the nature and direction of the results.
  • Location bias: the publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of the results.
  • Citation bias: the citation or non-citation of research findings, depending on the nature and direction of the results.
  • Language bias: the publication of research findings in a particular language, depending on the nature and direction of the results.
  • Outcome reporting bias: the selective reporting of some outcomes but not others, depending on the nature and direction of the results.

While publication bias has long been recognized and much discussed, other factors can contribute to biased inclusion of studies in meta-analyses. Indeed, among published studies, the probability of identifying relevant studies for meta-analysis is also influenced by their results. These biases have received much less consideration than publication bias, but their consequences could be of equal importance.

Duplicate (multiple) publication bias

In 1989, Gøtzsche found that, among 244 reports of trials comparing non-steroidal anti-inflammatory drugs in rheumatoid arthritis, 44 (18%) were redundant, multiple publications, which overlapped substantially with a previously published article. Twenty trials were published twice, ten trials three times and one trial four times (Gøtzsche 1989). The production of multiple publications from single studies can lead to bias in a number of ways (Huston 1996). Most importantly, studies with significant results are more likely to lead to multiple publications and presentations (Easterbrook 1991), which makes it more likely that they will be located and included in a meta-analysis. It is not always obvious that multiple publications come from a single study, and one set of study participants may be included in an analysis twice. The inclusion of duplicated data may therefore lead to overestimation of intervention effects, as was demonstrated for trials of the efficacy of ondansetron to prevent postoperative nausea and vomiting (Tramèr 1997).

Other authors have described the difficulties and frustration caused by redundancy and the ‘disaggregation’ of medical research when results from a multi-centre trial are presented in several publications (Huston 1996, Johansen 1999). Redundant publications often fail to cross-reference each other (Bailey 2002, Barden 2003) and there are examples where two articles reporting the same trial do not share a single common author (Gøtzsche 1989, Tramèr 1997). Thus, it may be difficult or impossible for review authors to determine whether two papers represent duplicate publications of one study or two separate studies without contacting the authors, which may result in biasing a meta-analysis of this data.

Location bias

Research suggests that various factors related to the accessibility of study results are associated with effect sizes in trials. For example, in a series of trials in the field of complementary and alternative medicine, Pittler and colleagues examined the relationship between trial outcome, methodological quality and sample size, and characteristics of the journals of publication of these trials (Pittler 2000). They found that trials published in low- or non-impact-factor journals were more likely to report significant results than those published in high-impact mainstream medical journals, and that the quality of the trials was also associated with the journal of publication. Similarly, some studies suggest that trials published in English-language journals are more likely to show strong significant effects than those published in non-English-language journals (Egger 1997b); however, this has not been shown consistently (Moher 2000, Jüni 2002, Pham 2005).

The term ‘location bias’ is also used to refer to the accessibility of studies based on variable indexing in electronic databases. Depending on the clinical question, choices regarding which databases to search may bias the effect estimate in a meta-analysis. For example, one study found that trials published in journals that were not indexed in MEDLINE might show a more beneficial effect than trials published in MEDLINE-indexed journals (Egger 2003). Another study of 61 meta-analyses found that, in general, trials published in journals indexed in EMBASE but not in MEDLINE reported smaller estimates of effect than those indexed in MEDLINE, but that the risk of bias may be minor, given the lower prevalence of the EMBASE unique trials (Sampson 2003). As above, these findings may vary substantially with the clinical topic being examined.

A final form of location bias is regional or developed country bias. Research supporting the evidence of this bias suggests that studies published in certain countries may be more likely than others to produce research showing significant effects of interventions. Vickers and colleagues demonstrated the potential existence of this bias (Vickers 1998).

Citation bias

The perusal of the reference lists of articles is widely used to identify additional articles that may be relevant although there is little evidence to support this methodology. The problem with this approach is that the act of citing previous work is far from objective and retrieving literature by scanning reference lists may thus produce a biased sample of studies. There are many possible motivations for citing an article. Brooks interviewed academic authors from various faculties at the University of Iowa and asked for the reasons for citing each reference in one of the authors’ recent articles (Brooks 1985). Persuasiveness, i.e. the desire to convince peers and substantiate their own point of view, emerged as the most important reason for citing articles. Brooks concluded that authors advocate their own opinions and use the literature to justify their point of view: “Authors can be pictured as intellectual partisans of their own opinions, scouring the literature for justification” (Brooks 1985).

In Gøtzsche’s analysis of trials of non-steroidal anti-inflammatory drugs in rheumatoid arthritis, trials demonstrating a superior effect of the new drug were more likely to be cited than trials with negative results (Gøtzsche 1987). Similar results were shown in an analysis of randomized trials of hepato-biliary diseases (Kjaergard 2002). Similarly, trials of cholesterol lowering to prevent coronary heart disease were cited almost six times more often if they were supportive of cholesterol lowering (Ravnskov 1992). Over-citation of unsupportive studies can also occur. Hutchison et al. examined reviews of the effectiveness of pneumococcal vaccines and found that unsupportive trials were more likely to be cited than trials showing that vaccines worked (Hutchison 1995).

Citation bias may affect the ‘secondary’ literature. For example, the ACP Journal Club aims to summarize original and review articles so that physicians can keep abreast of the latest evidence. However, Carter et al. found that trials with a positive outcome were more likely to be summarized, after controlling for other reasons for selection (Carter 2006). If positive studies are more likely to be cited, they may be more likely to be located and, thus, more likely to be included in a systematic review, thus biasing the findings of the review.

Language bias

Reviews have often been exclusively based on studies published in English. For example, among 36 meta-analyses reported in leading English-language general medicine journals from 1991 to 1993, 26 (72%) had restricted their search to studies reported in English (Grégoire 1995). This trend may be changing: a recent review of 300 systematic reviews found that approximately 16% of reviews were limited to trials published in English, and that systematic reviews published in paper-based journals were more likely than Cochrane reviews to report limiting their search to trials published in English (Moher 2007). In addition, of reviews with a therapeutic focus, Cochrane reviews were more likely than non-Cochrane reviews to report having no language restrictions (62% vs. 26%) (Moher 2007).

Investigators working in a non-English speaking country will publish some of their work in local journals (Dickersin 1994). It is conceivable that authors are more likely to report in an international, English-language journal if results are positive whereas negative findings are published in a local journal. This was demonstrated for the German-language literature (Egger 1997b).

Bias could thus be introduced in reviews exclusively based on English-language reports (Grégoire 1995, Moher 1996). However, the research examining this issue is conflicting. In a study of 50 reviews that employed comprehensive literature searches and included both English and non-English-language trials, Jüni et al reported that non-English trials were more likely to produce significant results at P<0.05, while estimates of intervention effects were, on average, 16% (95% CI 3% to 26%) more beneficial in non-English-language trials than in English-language trials (Jüni 2002). Conversely, Moher and colleagues examined the effect of inclusion or exclusion of English-language trials in two studies of meta-analyses and found, overall, that the exclusion of trials reported in a language other than English did not significantly affect the results of the meta-analyses (Moher 2003). These results were similar when the analysis was limited to meta-analyses of trials of conventional medicines. When the analyses were conducted separately for meta-analyses of trials of complementary and alternative medicines, however, the effect size of meta-analyses was significantly decreased by excluding reports in languages other than English (Moher 2003).

The extent and effects of language bias may have diminished recently because of the shift towards publication of studies in English. In 2006, Galandi et al. reported a dramatic decline in the number of randomized trials published in German-language healthcare journals: with fewer than two randomized trials published per journal and year after 1999 (Galandi 2006). While the potential impact of studies published in languages other than English in a meta-analysis may be minimal, it is difficult to predict in which cases this exclusion may bias a systematic review. Review authors may want to search without language restrictions and decisions about including reports from languages other than English may need to be taken on a case-by-case basis.

Outcome reporting bias

In many studies, a range of outcome measures is recorded but not all are reported (Pocock 1987, Tannock 1996). The choice of outcomes that are reported can be influenced by the results, potentially making published results misleading. For example, two separate analyses (Mandel 1987, Cantekin 1991) of a double-blind placebo-controlled trial assessing the efficacy of amoxicillin in children with non-suppurative otitis media reached opposite conclusions mainly because different ‘weight’ was given to the various outcome measures that were assessed in the study. This disagreement was conducted in the public arena, since it was accompanied by accusations of impropriety against the team producing the findings favourable to amoxicillin. The leader of this team had received substantial fiscal support, both in research grants and as personal honoraria, from the manufacturers of amoxicillin (Rennie 1991). It is a good example of how reliance upon the data chosen to be presented by the investigators can lead to distortion (Anonymous 1991). Such ‘outcome reporting bias’ may be particularly important for adverse effects. Hemminki examined reports of clinical trials submitted by drug companies to licensing authorities in Finland and Sweden and found that unpublished trials gave information on adverse effects more often than published trials (Hemminki 1980). Since then several other studies have shown that the reporting of adverse events and safety outcomes in clinical trials is often inadequate and selective (Ioannidis 2001, Melander 2003, Heres 2006). A group from Canada, Denmark and the UK recently pioneered empirical research into the selective reporting of study outcomes (Chan 2004a, Chan 2004b, Chan 2005). These studies are described in Chapter 8 of the Handbook, along with a more detailed discussion of outcome reporting bias.


The next section of your lab report will be the method section. In this portion of your report, you will describe the procedures you used in your research. You'll include specific information such as the number of participants in your study, the background of each individual, your independent and dependent variables, and the type of experimental design you used.

In the results section of your lab report, you'll describe the statistical data you gathered from your research. This section will likely be quite short; you don't need to include any interpretation of your results. Use tables and figures to display statistical data and results.


11.6: Reporting the Results of a Hypothesis Test

  • Contributed by Danielle Navarro
  • Associate Professor (Psychology) at University of New South Wales

When writing up the results of a hypothesis test, there are usually several pieces of information that you need to report, but it varies a fair bit from test to test. Throughout the rest of the book I'll spend a little time talking about how to report the results of different tests (see Section 12.1.9 for a particularly detailed example), so that you can get a feel for how it's usually done. However, regardless of what test you're doing, the one thing that you always have to do is say something about the p value, and whether or not the outcome was significant.

The fact that you have to do this is unsurprising; it's the whole point of doing the test. What might be surprising is the fact that there is some contention over exactly how you're supposed to do it. Leaving aside those people who completely disagree with the entire framework underpinning null hypothesis testing, there's a certain amount of tension that exists regarding whether or not to report the exact p value that you obtained, or if you should state only that p < α for a significance level that you chose in advance (e.g., p < .05).
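
If you do report exact p values, a small helper like the following (my own sketch, not from the book) applies the common convention of giving three decimals while writing very small values as p < .001:

```python
# My own sketch (not from the book): format p values the way many journals
# expect, reporting exact values but writing very small ones as "p < .001".
def format_p(p):
    if p < 0.001:
        return "p < .001"
    return "p = " + f"{p:.3f}".lstrip("0")

print(format_p(0.0004))   # p < .001
print(format_p(0.03217))  # p = .032
```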


Present your findings

This page and the next, on reporting and discussing your findings, deal with the core of the thesis. In a traditional doctoral thesis, this will consist of a number of chapters where you present the data that forms the basis of your investigation, shaped by the way you have thought about it. In a thesis including publication, it will be the central section of an article.

For some fields of study, the presentation and discussion of findings follows established conventions; for others, the researcher's argument determines the structure. Therefore it is important for you to investigate the conventions of your own discipline by looking at journal articles and theses.

Every thesis writer has to present and discuss the results of their inquiry. In these pages we consider these two activities separately, while recognising that in many kinds of thesis they will be integrated. This section is concerned with presenting the results of data analysis.

There is a great deal of disciplinary variation in the presentation of findings. For example, a thesis in oral history and one in marketing may both use interview data that has been collected and analysed in similar ways, but the way the results of this analysis are presented will be very different because the questions they are trying to answer are different. The presentation of results from experimental studies will be different again. In all cases, though, the presentation should have a logical organisation that reflects:

  • the aims or research question(s) of the project, including any hypotheses that have been tested
  • the research methods and theoretical framework that have been outlined earlier in the thesis.

You are not simply describing the data. You need to make connections, and make apparent your reasons for saying that data should be interpreted in one way rather than another.

Structure

Each chapter needs an introduction outlining its organisation.

Examples

Chemical Engineering PhD thesis:

In this Chapter, all the experimental results from the phenomenological experiments outlined in Section 5.2 are presented and examined in detail. The effects of the major operating variables on the performance of the pilot filters are explained, and various implications for design are discussed. The new data may be found in Appendix C.

The principal goal of the vernacular adaptor of a Latin saint's life was to edify and instruct his audience. In this chapter I shall try to show to what extent our texts conform to vernacular conventions of a well-told story of a saint, and in what ways they had to modify their originals to do so, attempting also to identify some of the individual characteristics of the three poems.

After that, the organisation will vary according to the kind of research being reported. Below are some important principles for reporting experimental, quantitative (survey) and qualitative studies.

Experimental studies

The results of experiments are almost always presented separately from discussion.

  • Present results in tables and figures
  • Use text to introduce tables and figures and guide the reader through key results
  • Point out differences and relationships, and provide information about them
  • Include negative results (then try to explain them in the Discussion section/chapter)

Quantitative studies

There are generally accepted guidelines for presenting the results of statistical analyses of data about populations or groups of people, plants or animals. It is important that the results be presented in an informative way.

  • Demographic data that describe the sample are usually presented first.
  • Remind the reader of the research question being addressed, or the hypothesis being tested.
  • State which differences are significant.
  • Highlight the important trends and differences/comparisons.
  • Indicate whether the hypothesis is supported or not.

You can read more about reporting quantitative results in the next section, Reporting conventions.

Qualitative studies

The presentation and discussion of qualitative data are often combined.

Qualitative data is difficult to present neatly in tables and figures. It is usually expressed in words, and this results in a large quantity of written material, through which you must guide your reader.

Structure is therefore very important.

Try to make your sections and subsections reflect the themes that have emerged from your analysis of the data, and to make sure your reader knows how these themes evolved. Headings and subheadings, as well as directions to the reader, are forms of signposting you can use to make these chapters easy to navigate.

You can read more about reporting qualitative results in the next section, Reporting conventions.

What to include

For all types of research, decisions about what data to include are important.

  • Include what you need to support the points you need to make. Be guided by your research question(s) and the nature of your data.
  • Make your selection criteria explicit.
  • More detail can be provided in an appendix. Evans and Gruba (2002) offer some good advice: 'Include enough data in an appendix to show how you collected it, what form it took, and how you treated it in the process of condensing it for presentation in the results chapter.' (p. 105)

Reporting conventions

Reporting conventions differ according to whether the data involved is quantitative or qualitative.

Quantitative data

The purpose of the results section of the thesis is to report the findings of your research. You usually present the data you obtained in appropriate figures (diagrams, graphs, tables and photographs) and you then comment on this data.

Comments on figures and tables (data commentary) usually have the following elements:

  • a location element
  • a summary of the information presented in the figure
  • a highlighting statement to point out what is significant in all the data presented (e.g. trends, patterns, results that stand out).

Data commentary element example


Table 5 shows the most common modes of computer infection in Australian businesses. As can be seen in the table, home disks are the most frequent source of infection.


The influents to filter A and B were analysed fully on a number of occasions, and the averaged results are presented in Table 6.1. It can be seen from the table that the wastewaters from plants A and B are of similar composition.

Sometimes a reduced location element is used which gives only the table or figure number in brackets after the highlighting statement.

  1. The ranges of metal atom concentrations for the two precipitate types were found to overlap (Table 6)
  2. Quantitative analysis revealed some variation in the composition of the rods in the various exservice samples (Figure 7 and Table 5).

Commentary on results may include:

  • explanations
  • comparisons between results
  • comments on whether the results are expected or unexpected
  • comments about unsatisfactory data.

Dealing with "Problems"

The difference between expected and obtained results may be due to the incorrect calibration of the instruments.
This discrepancy can be attributed to the small sample size.
The anomaly in the observations can probably be accounted for by a defect in the camera.
The lack of statistical significance is probably a consequence of weaknesses in the experimental design.
The difficulty in dating this archeological site would seem to stem from the limited amount of organic material available.

(Adapted from Swales & Feak, 2004, p. 138).

If you are discussing your findings in a separate chapter or section, limit your comments here to the specific results you have presented.

Past or present tense?

  • Location element (present tense): "…the averaged results are presented in Table 6.1."; "Table 5 shows…"
  • Summary of procedure (past tense): "The influents to filter A and B were analysed fully on a number of occasions, …"
  • Results of analysis (past tense): "The ranges of metal atom concentrations … were found to overlap."
  • Comments (present tense): "This discrepancy can be attributed to the small sample size."

Qualitative data

The reporting of qualitative data is much less bound by convention than that of quantitative data. The data itself usually consists of words, from written documents or interview transcripts (but may include images), which have been analysed in some way, often into themes. In reporting the data, it is generally important to convey both the themes and some of the flavour of the actual words.

The data needs to be connected back through the layers of detail to the overarching research question it relates to. This can be done through the introductions to carefully-structured sections and subsections. Individual data extracts can be connected back into this structure through a process of 'tell-show-tell'.


Example from a Doctor of Education thesis:

6.4.3 Themes from the Interview Data

In analysing the interview data, two themes emerged which will be discussed in this section. These themes were: the complexity and challenges of working with families and the professional satisfaction and challenges of program planning for children in preschool or childcare.

For each of these graduates, their work with children was clearly the area of their professional lives that was bringing the most satisfaction, although there were some challenges identified. In the interviews, the data reveal that they were all seeking ways to improve their pedagogy and achieving success in different ways …

Angela suggested that in her second year of teaching she had changed in that she was programming in a "more child oriented" way. She discussed this change:

One of the things I've changed is this idea of herding children through the Kinder day: they go from indoor play to snack time to the mat and so on. How I do it now is that I have a lot of different things happening at once. I'll have a small group on the mat and there might be some children sitting down and having a snack and there's still some children in home corner playing.

These comments seem to provide evidence that Angela is growing professionally for two reasons. First, the ability to identify changes in her program suggests to me that she has deeper pedagogical knowledge gained through critical reflection on her practice, and second, there is congruence between her expressed beliefs and the practice she describes.


Introduce your data

Before diving into your research findings, first describe the flow of participants at every stage of your study and whether any data were excluded from the final analysis.

Participant flow and recruitment period

It’s necessary to report any attrition, which is the decline in participants at every sequential stage of a study. That’s because an uneven number of participants across groups sometimes threatens internal validity and makes it difficult to compare groups. Be sure to also state all reasons for attrition.


Discussion

In this study, we demonstrate the existence of a relationship between lower- and higher-order learning phenomena and aesthetic appreciation, as indicated by (1) better memorisation performances (accuracy rate and d-prime values) for subjectively preferred as compared with non-preferred triad chords (see Fig. 1b), (2) the trial-by-trial correlation between amplitude fluctuations of the attention-related N1 component and subjective AJs, and (3) enhanced electrophysiological mismatch detection responses, evidencing ameliorated implicit learning of sensory regularities for preferred intervals (see Fig. 2). Moreover, it is important to notice that, in Experiment 1, chord type per se (consonant vs. dissonant) did not influence memorisation performances. This result is coherent with those of previous studies investigating short-term memory for just-tuned consonant and dissonant dyad intervals, which demonstrated that small-integer ratio dyads (consonant intervals) showed no innate memory advantage: musicians’ and non-musicians’ recognition of consonant intervals was no better or worse than that of dissonant intervals (Rogers & Levitin, 2007). As we will discuss below, these results, together with our findings, seem to support the hypothesis that memory advantages are independent of consonance per se, while memory performances might be directly linked to subjective preferences.

Overall, the present findings, indicating enhanced memorisation performances for subjectively preferred intervals and chords, may be considered supporting evidence for our hypothesis of a correlation between perceptual learning and subjective aesthetic appreciation. In previous research we showed that more appreciated intervals boost perceptual processing, inducing an automatic re-orienting of attentional resources towards the sensory inputs (Sarasso, Neppi-Modona, et al., 2020a). This effect, also evident in Experiment 2, is reflected in the significant enhancement of attention-related electrophysiological responses (Sarasso, Ronga, et al., 2020b; Sarasso, Ronga, et al., 2019b) and in the consequent improvement of perceptual performances for more appreciated stimuli (Sarasso, Ronga, et al., 2020b; Spehar, Wong, van de Klundert, Lui, Clifford & Taylor, 2015). We propose that a similar mechanism might underlie the behavioural results of Experiment 1. Our interpretative hypothesis is that preferred intervals elicited increased sensory activations and improved perceptual implicit learning in the memorisation phase via an automatic attentional modulation, which in turn triggered enhanced memorisation performances in the recognition phase. In other words, the results of Experiment 1 seem to indicate that the previously demonstrated beauty-related boost in low-level perceptual processing might also induce a learning gain at higher levels. However, to the best of our knowledge, evidence directly exploring the beauty-driven modulation of low-level perceptual learning phenomena is still missing. With the final aim of verifying the presence of such a mechanism at an implicit level, we performed Experiment 2.

Results of Experiment 2 are twofold. First, our findings confirm previous studies evidencing a correlation between AJs and early attentional electrophysiological responses to more and less consonant musical intervals (Sarasso, Ronga, et al., 2019b) and to images with more or less natural frequency content (Sarasso, Ronga, et al., 2020b). The N1 component amplitude has been frequently described as an index of attentional engagement (Alho, 1992; Fritz et al., 2007; Giuliano et al., 2014; Wilkinson & Lee, 1972). Indeed, it has been shown that valid spatial and temporal cues can enhance the auditory N1 component (Hillyard & Anllo-Vento, 1998; Hötting et al., 2003). Fluctuations in the auditory N1 component are also modulated by task-relevance, stimulus saliency, and predictability (Lange, 2013; Zani & Proverbio, 2012). In accordance with previous findings (Regnault et al., 2001; Virtala et al., 2014), trial-by-trial fluctuations in N1 voltages registered during Experiment 2 significantly correlated with single-trial AJs (see Fig. 2). Moreover, as we expected, mismatch detection responses (i.e. responses to deviant intervals minus responses to standard intervals) were significantly more pronounced for more appreciated interval types. The increase in mismatch detection responses is usually interpreted as a correlate of optimal implicit statistical learning of sensory regularities (Garrido et al., 2016; Näätänen et al., 2007) and is impaired in a number of pathological conditions (Garrido et al., 2009) and learning impairments (Cantiani et al., 2019). Interestingly, the enhancement of mismatch detection has been demonstrated to correlate also with higher-order learning phenomena, such as the acquisition of new linguistic skills, thus indicating that improved low-level perceptual learning mechanisms might predict higher-order learning outcomes (Winkler et al., 2003; Ylinen et al., 2010).

Overall, our behavioural and electrophysiological results, in accordance with previous evidence, show that subjective aesthetic appreciation is related to an automatic re-orienting of attention toward the sensory stimulation, leading in turn to the enhancement of lower-level (i.e. mismatch detection) and higher-level (i.e. memorisation) learning. What might explain such attentional capture and increased implicit perceptual learning for more appreciated intervals?

Previous neurocomputational theories suggested that, in order to maximize epistemic value, intelligent systems (biological and artificial) have developed an intrinsic feedback on information gains (Gottlieb et al., 2013). According to this view, the brain automatically generates intrinsic rewards in response to stimuli with high informational content, signaling to the nervous system to focus on present sensory stimulation to learn something new. As we previously discussed, higher AJs seem to be assigned to stimuli valued as more profitable in terms of informational content (Biederman & Vessel, 2006; Chetverikov & Kristjánsson, 2016; Consoli, 2015; Perlovsky, 2014; Perlovsky & Schoeller, 2019; Schmidhuber, 2009). In other words, aesthetic appreciation may emerge anytime the cognitive system senses a refinement of the mental representations of the environment (Muth & Carbon, 2013; Schoeller & Perlovsky, 2016; Van de Cruys & Wagemans, 2011). Accordingly, the perception of beauty may be considered as a feedback allowing the individual to discriminate between informationally profitable (i.e. leading to learning progress) and noisy (i.e. “unlearnable”) signals. This might explain the overall preference for more consonant intervals, given the evidence that consonant intervals are processed more fluently than dissonant intervals (Crespo-Bojorque et al., 2018; Crespo-Bojorque & Toro, 2016; Masataka & Perlovsky, 2013). Crespo-Bojorque et al. (2018) found that dissonant infrequent intervals played within a stream of frequent consonant intervals elicited larger mismatch negativities (MMN) as compared with the opposite condition (i.e. infrequent consonant intervals embedded within a dissonant context). The authors interpret their results as evidence for an early processing advantage for consonant over dissonant intervals. Although it is impossible to exclude that these results were also driven by the easier detection of dissonant sounds within a consonant context, which more closely resembles everyday musical experience, the interpretation suggested by the authors confirms the present findings. Indeed, since electrophysiological mismatch detection responses reflect the extent to which sensory information is weighted according to its estimated reliability (also referred to as precision-weighted prediction errors; Quiroga-Martinez et al., 2019), it might be argued that in both our and Crespo-Bojorque’s study, mismatch detection responses elicited in a consonant context were enhanced by the automatic up-weighting of consonant sensory inputs. Apparently, a more consonant sensory context, similarly to a low-entropy sensory context, induces the brain to estimate the inputs as more reliable (Quiroga-Martinez et al., 2019). It has also been suggested that our auditory cortices are generally more tuned to process consonant sounds (Bowling & Purves, 2015; Bowling et al., 2017) due to their similarity with human vocalizations (Crespo-Bojorque & Toro, 2016; Toro & Crespo-Bojorque, 2017). However, personal experiences, such as musical training and listening, seem to be able to modulate these general trends (Crespo-Bojorque et al., 2018).
Accordingly, AJs, processing advantages and implicit perceptual learning do not always correlate with consonance, but can vary according to some contextual (Brattico et al., 2013; Mencke et al., 2019; Pelowski et al., 2017), experiential (Koelsch et al., 2019), cultural (Lahdelma & Eerola, 2020; McDermott et al., 2016), and personal factors (Brattico et al., 2009; McDermott et al., 2010; Plantinga & Trehub, 2014; Proverbio et al., 2016). Professional musicians, as an example, show larger MMNs (Crespo-Bojorque et al., 2018), superior automatic discrimination (Brattico et al., 2009), and higher aesthetic appreciation (Istók et al., 2009; Müller et al., 2010; Schön et al., 2005; Smith & Melara, 1990) of non-prototypical dissonant intervals. However, the evidence for the hypothesis that musical expertise facilitates neural processing of dissonant musical stimuli is still conflicting. As an example, Linnavalli et al. (2020) found that dissonant deviant chords (embedded within a dissonant context) elicited similar MMN responses in musicians and non-musicians, and hypothesized that the facilitating effects of musical expertise might emerge at higher stages of auditory processing, influencing only behavioural discrimination.

Altogether, cultural familiarity, individual experiences and even personality traits may induce the nervous system to reinterpret some specific sensory signals usually valued as “noisy” as more informationally profitable (Hsu et al., 2015; Mencke et al., 2019). As an example, besides purely acoustic factors, tritones might usually be disliked because of Western music aesthetic conventions (Partch, 1974). This effect is crucial in showing that the weighting of the sensory input, rather than being aprioristically defined, is sensitive to contextual variability (such as frequency of exposure and contextual relevance) and may differ across individuals and even within the same individual from time to time (Ronga et al., 2017; Van Beers et al., 2002). This might explain the differences in subjective AJs and memorisation performances across triad types in Experiment 1. Consistent with this idea, in Experiment 2, results from the trial-by-trial correlation strongly suggest a direct relation between subjective aesthetic appreciation and the hypothesized attentional up-weighting of auditory inputs. Indeed, trial-by-trial fluctuations in the amplitude of the attention-related N1 component correlate with single-trial AJs independently of interval type.

As a limitation of Experiment 2, we must point out that the result on mismatch responses to preferred versus non-preferred intervals, although in line with our hypothesis of a correlation between perceptual learning and subjective aesthetic appreciation, does not exclude that enhanced implicit learning of sensory regularities is exclusively related to interval consonance rather than specifically to subjective AJs. Contrary to Experiment 1, where we employed chords that were more similar in terms of consonance/dissonance (major and diminished), in the sample of participants included in Experiment 2 individual preferences did not vary across more and less consonant (fifth and tritone) intervals. In Experiment 2, preferences were all oriented toward the more consonant fifth intervals, which makes it impossible to disentangle the effect of mere acoustic differences between stimuli from subjective preference. These results are coherent with previous studies showing an inverted-U shape for preferences: when dissonance is relatively low, preference does not decrease with increasing dissonance, while for relatively higher degrees of dissonance, preference decreases with increasing dissonance (Lahdelma & Eerola, 2016a). This might explain why some participants in Experiment 1 preferred mildly dissonant diminished chords. Still, results from Experiment 2 do not allow a clear-cut dissociation between consonance and liking, thereby limiting the evidence in favor of a selective correlation between perceptual learning and AJs. However, fifth and tritone intervals, despite being very far apart in terms of consonance, are composed of single tones that are very similar in terms of frequency (Hz). This was essential to exclude the possibility that EEG fluctuations were exclusively related to changes in frequency (Sarasso, Ronga, et al., 2019b). Further research, aiming to extend the understanding of the relation between perceptual learning and AJs beyond our preliminary and methodologically constrained study, might employ intervals that reside at less extreme points of the consonance/dissonance continuum, which would likely induce greater variability in individual preferences. Furthermore, less culturally loaded stimuli would ideally lead to less polarized preferences (Lahdelma & Eerola, 2020).

In a follow-up study which is currently under review (Sarasso, Neppi-Modona, et al., 2021a), we employed a roving paradigm to compare deviant and standard responses to fifth and tritone intervals. In our experimental sample some participants preferred fifth intervals over tritones, and we found that, similarly to Experiment 1, MMN responses were significantly different only when comparing subjectively preferred and non-preferred intervals, but not when comparing consonant (fifth) versus dissonant (tritone) intervals. This result further points to a significant correlation between implicit learning and subjective aesthetic preferences, independently of the stimuli's acoustic features.


Present your findings

This page and the next, on reporting and discussing your findings, deal with the core of the thesis. In a traditional doctoral thesis, this will consist of a number of chapters where you present the data that forms the basis of your investigation, shaped by the way you have thought about it. In a thesis including publication, it will be the central section of an article.

For some fields of study, the presentation and discussion of findings follows established conventions; for others, the researcher's argument determines the structure. Therefore it is important for you to investigate the conventions of your own discipline, by looking at journal articles and theses.

Every thesis writer has to present and discuss the results of their inquiry. In these pages we consider these two activities separately, while recognising that in many kinds of thesis they will be integrated. This section is concerned with presenting the results of data analysis.

There is a great deal of disciplinary variation in the presentation of findings. For example, a thesis in oral history and one in marketing may both use interview data that has been collected and analysed in similar ways, but the way the results of this analysis are presented will be very different because the questions they are trying to answer are different. The presentation of results from experimental studies will be different again. In all cases, though, the presentation should have a logical organisation that reflects:

  • the aims or research question(s) of the project, including any hypotheses that have been tested
  • the research methods and theoretical framework that have been outlined earlier in the thesis.

You are not simply describing the data. You need to make connections, and make apparent your reasons for saying that data should be interpreted in one way rather than another.

Structure

Each chapter needs an introduction outlining its organisation.

Examples

Chemical Engineering PhD thesis:

In this Chapter, all the experimental results from the phenomenological experiments outlined in Section 5.2 are presented and examined in detail. The effects of the major operating variables on the performance of the pilot filters are explained, and various implications for design are discussed. The new data may be found in Appendix C.

Humanities PhD thesis:

The principal goal of the vernacular adaptor of a Latin saint's life was to edify and instruct his audience. In this chapter I shall try to show to what extent our texts conform to vernacular conventions of a well-told story of a saint, and in what ways they had to modify their originals to do so, attempting also to identify some of the individual characteristics of the three poems.

After that, the organisation will vary according to the kind of research being reported. Below are some important principles for reporting experimental, quantitative (survey) and qualitative studies.

Experimental studies

The results of experiments are almost always presented separately from discussion.

  • Present results in tables and figures
  • Use text to introduce tables and figures and guide the reader through key results
  • Point out differences and relationships, and provide information about them
  • Include negative results (then try to explain them in the Discussion section/chapter)

Quantitative studies

There are generally accepted guidelines for presenting the results of statistical analyses of data about populations or groups of people, plants or animals. It is important that the results be presented in an informative way.

  • Demographic data that describe the sample are usually presented first.
  • Remind the reader of the research question being addressed, or the hypothesis being tested.
  • State which differences are significant.
  • Highlight the important trends and differences/comparisons.
  • Indicate whether the hypothesis is supported or not.

You can read more about reporting quantitative results in the next section, Reporting conventions.

Qualitative studies

The presentation and discussion of qualitative data are often combined.

Qualitative data is difficult to present neatly in tables and figures. It is usually expressed in words, and this results in a large quantity of written material, through which you must guide your reader.

Structure is therefore very important.

Try to make your sections and subsections reflect the themes that have emerged from your analysis of the data, and to make sure your reader knows how these themes evolved. Headings and subheadings, as well as directions to the reader, are forms of signposting you can use to make these chapters easy to navigate.

You can read more about reporting qualitative results in the next section, Reporting conventions.

What to include

For all types of research, decisions about what data to include are important.

  • Include what you need to support the points you need to make. Be guided by your research question(s) and the nature of your data.
  • Make your selection criteria explicit.
  • More detail can be provided in an appendix. Evans and Gruba (2002) offer some good advice: 'Include enough data in an appendix to show how you collected it, what form it took, and how you treated it in the process of condensing it for presentation in the results chapter.' (p. 105)

Reporting conventions

Reporting conventions differ according to whether the data involved is quantitative or qualitative.

Quantitative data

The purpose of the results section of the thesis is to report the findings of your research. You usually present the data you obtained in appropriate figures (diagrams, graphs, tables and photographs) and you then comment on this data.

Comments on figures and tables (data commentary) usually have the following elements:

  • a location element
  • a summary of the information presented in the figure
  • a highlighting statement to point out what is significant in all the data presented (e.g. trends, patterns, results that stand out).

Data commentary element example

Table 5 shows the most common modes of computer infection in Australian businesses. As can be seen in the table, home disks are the most frequent source of infection.

The influents to filters A and B were analysed fully on a number of occasions, and the averaged results are presented in Table 6.1. It can be seen from the table that the wastewaters from plants A and B are of similar composition.

Sometimes a reduced location element is used which gives only the table or figure number in brackets after the highlighting statement.

  1. The ranges of metal atom concentrations for the two precipitate types were found to overlap (Table 6).
  2. Quantitative analysis revealed some variation in the composition of the rods in the various ex-service samples (Figure 7 and Table 5).

Commentary on results may include:

  • explanations
  • comparisons between results
  • comments on whether the results are expected or unexpected
  • comments about unsatisfactory data.

Dealing with "Problems"

The difference between expected and obtained results may be due to the incorrect calibration of the instruments.
This discrepancy can be attributed to the small sample size.
The anomaly in the observations can probably be accounted for by a defect in the camera.
The lack of statistical significance is probably a consequence of weaknesses in the experimental design.
The difficulty in dating this archeological site would seem to stem from the limited amount of organic material available.

(Adapted from Swales & Feak, 2004, p. 138).

If you are discussing your findings in a separate chapter or section, limit your comments here to the specific results you have presented.

Past or present tense?

  • Location element (present tense): "…the averaged results are presented in Table 6.1."; "Table 5 shows…"
  • Summary of procedure (past tense): "The influents to filters A and B were analysed fully on a number of occasions, …"
  • Results of analysis (past tense): "The ranges of metal atom concentrations … were found to overlap."
  • Comments (present tense): "This discrepancy can be attributed to the small sample size."

Qualitative data

The reporting of qualitative data is much less bound by convention than that of quantitative data. The data itself usually consists of words, from written documents or interview transcripts (but may include images), which have been analysed in some way, often into themes. In reporting the data, it is generally important to convey both the themes and some of the flavour of the actual words.

The data needs to be connected back through the layers of detail to the overarching research question it relates to. This can be done through the introductions to carefully-structured sections and subsections. Individual data extracts can be connected back into this structure through a process of 'tell-show-tell'.


Example from a Doctor of Education thesis:

6.4.3 Themes from the Interview Data

In analysing the interview data, two themes emerged which will be discussed in this section. These themes were: the complexity and challenges of working with families and the professional satisfaction and challenges of program planning for children in preschool or childcare.

For each of these graduates, their work with children was clearly the area of their professional lives that was bringing the most satisfaction, although there were some challenges identified. In the interviews, the data reveal that they were all seeking ways to improve their pedagogy and achieving success in different ways…

Angela suggested that in her second year of teaching she had changed in that she was programming in a "more child oriented" way. She discussed this change:

One of the things I've changed is this idea of herding children through the Kinder day: they go from indoor play to snack time to the mat and so on. How I do it now is that I have a lot of different things happening at once. I'll have a small group on the mat and there might be some children sitting down and having a snack and there's still some children in home corner playing.

These comments seem to provide evidence that Angela is growing professionally for two reasons. First, the ability to identify changes in her program suggests to me that she has deeper pedagogical knowledge gained through critical reflection on her practice, and second, there is congruence between her expressed beliefs and the practice she describes.


11.6: Reporting the Results of a Hypothesis Test

  • Contributed by Danielle Navarro
  • Associate Professor (Psychology) at University of New South Wales

When writing up the results of a hypothesis test, there's usually several pieces of information that you need to report, but it varies a fair bit from test to test. Throughout the rest of the book I'll spend a little time talking about how to report the results of different tests (see Section 12.1.9 for a particularly detailed example), so that you can get a feel for how it's usually done. However, regardless of what test you're doing, the one thing that you always have to do is say something about the p value, and whether or not the outcome was significant.

The fact that you have to do this is unsurprising; it's the whole point of doing the test. What might be surprising is the fact that there is some contention over exactly how you're supposed to do it. Leaving aside those people who completely disagree with the entire framework underpinning null hypothesis testing, there's a certain amount of tension that exists regarding whether you should report the exact p value that you obtained, or state only that p < α for a significance level that you chose in advance (e.g., p < .05).
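As a purely illustrative sketch (not part of the text above), the helper below formats a p value under either convention; the function name, thresholds, and rounding choices are assumptions made for the example.

```python
# Illustrative helper (hypothetical, not from the original text): format a p value
# either exactly (APA-style, no leading zero) or relative to a pre-chosen alpha.

def format_p(p: float, alpha: float = 0.05, exact: bool = True) -> str:
    if exact:
        # Very small p values are conventionally reported as an inequality.
        if p < 0.001:
            return "p < .001"
        return "p = " + f"{p:.3f}".replace("0.", ".", 1)
    # Otherwise report only the comparison with the significance level chosen in advance.
    symbol = "<" if p < alpha else ">"
    return f"p {symbol} " + f"{alpha:g}".replace("0.", ".", 1)

print(format_p(0.0234))               # p = .023
print(format_p(0.0234, exact=False))  # p < .05
print(format_p(0.23, exact=False))    # p > .05
```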


Contents

Research can only contribute to knowledge if it is communicated from investigators to the community. The generally accepted primary means of communication is “full” publication of the study methods and results in an article published in a scientific journal. Sometimes, investigators choose to present their findings at a scientific meeting as well, either through an oral or poster presentation. These presentations are included as part of the scientific record as brief “abstracts” which may or may not be recorded in publicly accessible documents typically found in libraries or the World Wide Web.

Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.

Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results, for instance in systematic reviews. [4] “Positive results” is a commonly used term to describe a study finding that one intervention is better than another.

Various attempts have been made to overcome the effects of the reporting biases, including statistical adjustments to the results of published studies. [5] None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems have been addressed, estimates of the effects of treatments based on published evidence may be biased.

Litigation brought upon by consumers and health insurers against Pfizer for the fraudulent sales practices in marketing of the drug gabapentin in 2004 revealed a comprehensive publication strategy that employed elements of reporting bias. [6] Spin was used to put emphasis on favorable findings that favored gabapentin, and also to explain away unfavorable findings towards the drug. In this case, favorable secondary outcomes became the focus over the original primary outcome, which was unfavorable. Other changes found in outcome reporting include the introduction of a new primary outcome, failure to distinguish between primary and secondary outcomes, and failure to report one or more protocol-defined primary outcomes. [7]

The decision to publish certain findings in certain journals is another strategy. [6] Trials with statistically significant findings were generally published in academic journals with higher circulation more often than trials with nonsignificant findings. Timing of publication results of trials was influenced, in that the company tried to optimize the timing between the release of two studies. Trials with nonsignificant findings were found to be published in a staggered fashion, as to not have two consecutive trials published without salient findings. Ghost authorship was also an issue, where professional medical writers who drafted the published reports were not properly acknowledged.

Fallout from this case is still being settled by Pfizer in 2014, 10 years after the initial litigation. [8]

Publication bias

The publication or nonpublication of research findings, depending on the nature and direction of the results. Although medical writers have acknowledged the problem of reporting biases for over a century, [9] it was not until the second half of the 20th century that researchers began to investigate the sources and size of the problem of reporting biases. [10]

Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive. [10] Almost all failure to publish is due to failure of the investigator to submit; [11] only a small proportion of studies are not published because of rejection by journals. [12]

The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval. [13] These studies have shown that “positive findings” is the principal factor associated with subsequent publication: researchers say that the reason they don't write up and submit reports of their research for publication is usually because they are “not interested” in the results (editorial rejection by journals is a rare cause of failure to publish).

Even those investigators who have initially published their results as conference abstracts are less likely to publish their findings in full unless the results are “significant”. [14] This is a problem because data presented in abstracts are frequently preliminary or interim results and thus may not be reliable representations of what was found once all data were collected and analyzed. [15] In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.

The main factor associated with failure to publish is negative or null findings. [16] Controlled trials that are eventually reported in full are published more rapidly if their results are positive. [15] Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.

It is now well-established that publication bias with more favorable efficacy results is associated with the source of funding for studies that would not otherwise be explained through usual risk of bias assessments. [17]

Time lag bias

The rapid or delayed publication of research findings, depending on the nature and direction of the results. In a systematic review of the literature, Hopewell and her colleagues found that overall, trials with “positive results” (statistically significant in favor of the experimental arm) were published about a year sooner than trials with “null or negative results” (not statistically significant or statistically significant in favor of the control arm). [15]

Multiple (duplicate) publication bias

The multiple or singular publication of research findings, depending on the nature and direction of the results. Investigators may also publish the same findings multiple times using a variety of patterns of “duplicate” publication. [18] Many duplicates are published in journal supplements, a literature that is potentially difficult to access. Positive results appear to be published more often in duplicate, which can lead to overestimates of a treatment effect.

Location bias

The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of results. There is also evidence that, compared to negative or null results, statistically significant results are on average published in journals with greater impact factors, [19] and that publication in the mainstream (non grey) literature is associated with an overall greater treatment effect compared to the grey literature. [20]

Citation bias

The citation or non-citation of research findings, depending on the nature and direction of the results. Authors tend to cite positive results over negative or null results, and this has been established over a broad cross section of topics. [21] [22] [23] [24] [25] [26] Differential citation may lead to a perception in the community that an intervention is effective when it is not, and it may lead to over-representation of positive findings in systematic reviews if those left uncited are difficult to locate.

Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.

Language bias

The publication of research findings in a particular language, depending on the nature and direction of the results. There is a longstanding question about whether there is a language bias such that investigators choose to publish their negative findings in non-English language journals and reserve their positive findings for English language journals. Some research has shown that language restrictions in systematic reviews can change the results of the review, [27] and in other cases, authors have not found that such a bias exists. [28]

Knowledge reporting bias

The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. People write about only some parts of the world around them; much of the information is left unsaid. [2] [29]

Outcome reporting bias

The selective reporting of some outcomes but not others, depending on the nature and direction of the results. [30] A study may be published in full, but pre-specified outcomes omitted or misrepresented. [7] [31] Efficacy outcomes that are statistically significant have a higher chance of being fully published compared to those that are not statistically significant.


Discussion

In this study, we demonstrate the existence of a relationship between lower- and higher-order learning phenomena and aesthetic appreciation, as indicated by (1) better memorisation performances (accuracy rate and d-prime values) for subjectively preferred as compared with non-preferred triad chords (see Fig. 1b), (2) the trial-by-trial correlation between amplitude fluctuations of the N1 attention-related component and subjective AJs, and (3) enhanced electrophysiological mismatch detection responses, evidencing ameliorated implicit learning of sensory regularities for preferred intervals (see Fig. 2). Moreover, it is important to notice that, in Experiment 1, chord type per se (consonant vs. dissonant) did not influence memorisation performances. This result is coherent with those of previous studies investigating short-term memory for just-tuned consonant and dissonant dyad intervals, which demonstrated that small-integer ratio dyads (consonant intervals) showed no innate memory advantage: musicians’ and non-musicians’ recognition of consonant intervals was no better or worse than that of dissonant intervals (Rogers & Levitin, 2007). As we will discuss below, these results, together with our findings, seem to support the hypothesis that memory advantages are independent of consonance per se, while memory performances might be directly linked to subjective preferences.

Overall, the present findings, indicating enhanced memorisation performances for subjectively preferred intervals and chords, may be considered as supporting evidence for our hypothesis of a correlation between perceptual learning and subjective aesthetic appreciation. In previous research we showed that more appreciated intervals boost perceptual processing, inducing an automatic re-orienting of attentional resources towards the sensory inputs (Sarasso, Neppi-Modona, et al., 2020a). This effect, also evident in Experiment 2, is reflected in the significant enhancement of attention-related electrophysiological responses (Sarasso, Ronga, et al., 2020b; Sarasso, Ronga, et al., 2019b) and in the consequent improvement of perceptual performances for more appreciated stimuli (Sarasso, Ronga, et al., 2020b; Spehar, Wong, van de Klundert, Lui, Clifford & Taylor, 2015). We propose that a similar mechanism might underlie the behavioural results of Experiment 1. Our interpretative hypothesis is that preferred intervals elicited increased sensory activations and improved perceptual implicit learning in the memorisation phase via an automatic attentional modulation, which in turn triggered enhanced memorisation performances in the recognition phase. In other words, the results of Experiment 1 seem to indicate that the previously demonstrated beauty-related boost in low-level perceptual processing might also induce a learning gain at higher levels. However, to the best of our knowledge, evidence directly exploring the beauty-driven modulation of low-level perceptual learning phenomena is still missing. With the final aim of verifying the presence of such a mechanism at an implicit level, we performed Experiment 2.

Results of Experiment 2 are twofold. First, our findings confirm previous studies evidencing a correlation between AJs and early attentional electrophysiological responses to more and less consonant musical intervals (Sarasso, Ronga, et al., 2019b) and to images with more or less natural frequency content (Sarasso, Ronga, et al., 2020b). The N1 component amplitude has been frequently described as an index of attentional engagement (Alho, 1992; Fritz et al., 2007; Giuliano et al., 2014; Wilkinson & Lee, 1972). Indeed, it has been shown that valid spatial and temporal cues can enhance the auditory N1 component (Hillyard & Anllo-Vento, 1998; Hötting et al., 2003). Fluctuations in the auditory N1 component are also modulated by task-relevance, stimulus saliency, and predictability (Lange, 2013; Zani & Proverbio, 2012). In accordance with previous findings (Regnault et al., 2001; Virtala et al., 2014), trial-by-trial fluctuations in N1 voltages registered during Experiment 2 significantly correlated with single-trial AJs (see Fig. 2). Moreover, as we expected, mismatch detection responses (i.e. responses to deviant intervals minus responses to standard intervals) were significantly more pronounced for more appreciated interval types. The increase in mismatch detection responses is usually interpreted as a correlate of optimal implicit statistical learning of sensory regularities (Garrido et al., 2016; Näätänen et al., 2007) and is impaired in a number of pathological conditions (Garrido et al., 2009) and learning impairments (Cantiani et al., 2019). Interestingly, the enhancement of mismatch detection has been demonstrated to correlate also with higher-order learning phenomena, such as the acquisition of new linguistic skills, thus indicating that improved low-level perceptual learning mechanisms might predict higher-order learning outcomes (Winkler et al., 2003; Ylinen et al., 2010).

Overall, our behavioural and electrophysiological results, in accordance with previous evidence, show that subjective aesthetic appreciation is related to an automatic re-orienting of attention toward the sensory stimulation, leading in turn to the enhancement of lower-level (i.e. mismatch detection) and higher-level (i.e. memorisation) learning. What might explain such attentional capture and increased implicit perceptual learning for more appreciated intervals?

Previous neurocomputational theories suggested that, in order to maximize epistemic value, intelligent systems (biological and artificial) have developed an intrinsic feedback on information gains (Gottlieb et al., 2013). According to this view, the brain automatically generates intrinsic rewards in response to stimuli with high informational content, signaling to the nervous system to focus on present sensory stimulation to learn something new. As we previously discussed, higher AJs seem to be assigned to stimuli valued as more profitable in terms of informational content (Biederman & Vessel, 2006; Chetverikov & Kristjánsson, 2016; Consoli, 2015; Perlovsky, 2014; Perlovsky & Schoeller, 2019; Schmidhuber, 2009). In other words, aesthetic appreciation may emerge anytime the cognitive system senses a refinement of the mental representations of the environment (Muth & Carbon, 2013; Schoeller & Perlovsky, 2016; Van de Cruys & Wagemans, 2011). Accordingly, the perception of beauty may be considered as a feedback allowing the individual to discriminate between informationally profitable (i.e. leading to learning progress) and noisy (i.e. “unlearnable”) signals. This might explain the overall preference for more consonant intervals, given the evidence that consonant intervals are processed more fluently than dissonant intervals (Crespo-Bojorque et al., 2018; Crespo-Bojorque & Toro, 2016; Masataka & Perlovsky, 2013). Crespo-Bojorque et al. (2018) found that dissonant infrequent intervals played within a stream of frequent consonant intervals elicited larger mismatch negativities (MMN) as compared with the opposite condition (i.e. infrequent consonant intervals embedded within a dissonant context). The authors interpret their results as evidence for an early processing advantage for consonant over dissonant intervals. Although it is impossible to exclude that these results were also driven by the easier detection of dissonant sounds within a consonant context, which more closely resembles everyday musical experience, the interpretation suggested by the authors is consistent with the present findings. Indeed, since electrophysiological mismatch detection responses reflect the extent to which sensory information is weighted according to its estimated reliability (also referred to as precision-weighted prediction errors; Quiroga-Martinez et al., 2019), it might be argued that in both our and Crespo-Bojorque’s study, mismatch detection responses elicited in a consonant context were enhanced by the automatic up-weighting of consonant sensory inputs. Apparently, a more consonant sensory context, similarly to a low-entropy sensory context, induces the brain to estimate the inputs as more reliable (Quiroga-Martinez et al., 2019). It has also been suggested that our auditory cortices are generally more tuned to process consonant sounds (Bowling & Purves, 2015; Bowling et al., 2017) due to their similarity with human vocalizations (Crespo-Bojorque & Toro, 2016; Toro & Crespo-Bojorque, 2017). However, personal experiences, such as musical training and listening, seem to be able to modulate these general trends (Crespo-Bojorque et al., 2018).
Accordingly, AJs, processing advantages and implicit perceptual learning do not always correlate with consonance, but can vary according to some contextual (Brattico et al., 2013; Mencke et al., 2019; Pelowski et al., 2017), experiential (Koelsch et al., 2019), cultural (Lahdelma & Eerola, 2020; McDermott et al., 2016), and personal factors (Brattico et al., 2009; McDermott et al., 2010; Plantinga & Trehub, 2014; Proverbio et al., 2016). Professional musicians, for example, show larger MMNs (Crespo-Bojorque et al., 2018), a superior automatic discrimination (Brattico et al., 2009), and higher aesthetic appreciation (Istók et al., 2009; Müller et al., 2010; Schön et al., 2005; Smith & Melara, 1990) of non-prototypical dissonant intervals. However, the evidence for the hypothesis that musical expertise facilitates neural processing of dissonant musical stimuli is still conflicting. For example, Linnavalli et al. (2020) found that dissonant deviant chords (embedded within a dissonant context) elicited similar MMN responses for musicians and non-musicians, and hypothesize that the facilitating effects of musical expertise might emerge in higher stages of auditory processing, influencing only behavioural discrimination.

Altogether, cultural familiarity, individual experiences and even personality traits may induce the nervous system to reinterpret some specific sensory signals usually valued as “noisy” as more informationally profitable (Hsu et al., 2015; Mencke et al., 2019). For example, besides purely acoustic factors, tritones might usually be disliked because of Western music aesthetic conventions (Partch, 1974). This effect is crucial in showing that the weighting of the sensory input, rather than being defined a priori, is sensitive to contextual variability (such as frequency of exposure and contextual relevance) and may differ across individuals and even within the same individual from time to time (Ronga et al., 2017; Van Beers et al., 2002). This might explain the differences in subjective AJs and memorisation performances across triad types in Experiment 1. Consistent with this idea, in Experiment 2, results from the trial-by-trial correlation strongly suggest a direct relation between subjective aesthetic appreciation and the hypothesized attentional up-weighting of auditory inputs. Indeed, trial-by-trial fluctuations in the amplitude of the attentional N1 component correlate with single-trial AJs independently of interval type.

As a limitation of Experiment 2, we must point out that the result on mismatch responses to preferred versus non-preferred intervals, although in line with our hypothesis of a correlation between perceptual learning and subjective aesthetic appreciation, does not exclude that enhanced implicit learning of sensory regularities is exclusively related to interval consonance, rather than specifically to subjective AJs. Contrary to Experiment 1, where we employed chords (major and diminished) that are more similar in terms of consonance/dissonance, in the sample of participants included in Experiment 2 individual preferences did not vary across the more and less consonant (fifth and tritone) intervals. In Experiment 2 preferences were all oriented toward the more consonant fifth intervals, which makes it impossible to disentangle the effect of the mere acoustic difference between stimuli from subjective preference. These results are coherent with previous studies showing an inverted-U shape for preferences: when dissonance is relatively low, preference does not decrease with increasing dissonance, while for relatively higher degrees of dissonance, preference decreases with increasing dissonance (Lahdelma & Eerola, 2016a). This might explain why some participants in Experiment 1 preferred mildly dissonant diminished chords. Still, results from Experiment 2 do not allow a clear-cut dissociation between consonance and liking, thereby limiting the evidence in favor of a selective correlation between perceptual learning and AJs. However, fifth and tritone intervals, despite being very far apart in terms of consonance, are composed of single tones that are very similar in terms of frequency (Hz). This was essential to exclude that EEG fluctuations were exclusively related to changes in frequency (Sarasso, Ronga, et al., 2019b). Further research, aiming to extend the comprehension of the relation between perceptual learning and AJs beyond our preliminary and methodologically constrained study, might employ intervals that reside at less extreme points of the consonance/dissonance continuum, which would likely induce greater variability in individual preferences. Furthermore, less culturally loaded stimuli would ideally lead to less polarized preferences (Lahdelma & Eerola, 2020).

In a follow-up study which is currently under review (Sarasso, Neppi-Modona, et al., 2021a), we employed a roving paradigm to compare deviant and standard responses to fifth and tritone intervals. In our experimental sample some participants preferred fifth intervals over tritones, and we found that, similarly to Experiment 1, MMNs were significantly different only when comparing subjectively preferred and non-preferred intervals, but not when comparing consonant (fifth) versus dissonant (tritone) intervals. This result further points to a significant correlation between implicit learning and subjective aesthetic preferences, independently of the stimuli's acoustic features.


Reporting d-prime results - Psychology

Please pay attention to issues of italics and spacing. APA style is very precise about these. Also, with the exception of some p values, most statistics should be rounded to two decimal places.
Mean and Standard Deviation are most clearly presented in parentheses:

The sample as a whole was relatively young (M = 19.22, SD = 3.45).

The average age of students was 19.22 years (SD = 3.45).

Percentages are also most clearly displayed in parentheses with no decimal places.

Chi-square statistics are reported with degrees of freedom and sample size in parentheses, the Pearson chi-square value (rounded to two decimal places), and the significance level.
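As a hedged illustration (the counts, variable names, and use of scipy are assumptions, not part of the original notes), this sketch computes a chi-square test and prints it in roughly that format:

```python
# Illustrative sketch: chi-square test of independence on a hypothetical 2x2 table,
# reported with degrees of freedom and sample size in parentheses.
import numpy as np
from scipy import stats

table = np.array([[30, 20],    # hypothetical counts
                  [15, 25]])

# Note: chi2_contingency applies Yates' continuity correction by default for 2x2 tables.
chi2, p_val, dof, _ = stats.chi2_contingency(table)
n = int(table.sum())

p_str = "p < .001" if p_val < 0.001 else "p = " + f"{p_val:.3f}".replace("0.", ".", 1)
# Produces a string of the form "chi2(1, N = 90) = X.XX, p = .XXX"
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, {p_str}")
```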

t tests are reported like chi-squares, but only the degrees of freedom are in parentheses. Following that, report the t statistic (rounded to two decimal places) and the significance level.
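For example, a minimal sketch of running an independent-samples t test on per-participant d-prime values and printing it in this format (the data and group names are hypothetical, and scipy is one convenient choice among several):

```python
# Illustrative sketch: independent-samples t test on hypothetical d-prime values,
# reported with degrees of freedom in parentheses as described above.
import numpy as np
from scipy import stats

dprime_group_a = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3])  # hypothetical d' values
dprime_group_b = np.array([0.6, 0.7, 1.0, 0.5, 0.8, 0.9])

t_stat, p_val = stats.ttest_ind(dprime_group_a, dprime_group_b)
df = len(dprime_group_a) + len(dprime_group_b) - 2

p_str = "p < .001" if p_val < 0.001 else "p = " + f"{p_val:.3f}".replace("0.", ".", 1)
# Produces a string of the form "t(10) = X.XX, p = .XXX"
print(f"t({df}) = {t_stat:.2f}, {p_str}")
```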

ANOVAs (both one-way and two-way) are reported like the t test, but there are two degrees-of-freedom numbers to report. First report the between-groups degrees of freedom, then report the within-groups degrees of freedom (separated by a comma). After that report the F statistic (rounded off to two decimal places) and the significance level.
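A similar sketch for a one-way ANOVA, again with made-up data; scipy's f_oneway is used here purely for illustration:

```python
# Illustrative sketch: one-way ANOVA across three hypothetical conditions,
# reported with between-groups and within-groups degrees of freedom.
import numpy as np
from scipy import stats

cond1 = np.array([1.1, 1.4, 0.9, 1.2, 1.0])
cond2 = np.array([0.7, 0.9, 0.8, 1.0, 0.6])
cond3 = np.array([1.5, 1.3, 1.6, 1.2, 1.4])

f_stat, p_val = stats.f_oneway(cond1, cond2, cond3)
k = 3                                        # number of groups
n = len(cond1) + len(cond2) + len(cond3)     # total sample size
df_between, df_within = k - 1, n - k

p_str = "p < .001" if p_val < 0.001 else "p = " + f"{p_val:.3f}".replace("0.", ".", 1)
# Produces a string of the form "F(2, 12) = XX.XX, p = .XXX"
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, {p_str}")
```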

Correlations are reported with the degrees of freedom (which is N - 2) in parentheses and the significance level.
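A minimal sketch of the same idea for a Pearson correlation (hypothetical data; note that the degrees of freedom are N - 2, not N):

```python
# Illustrative sketch: Pearson correlation reported with df = N - 2.
import numpy as np
from scipy import stats

x = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 0.7, 1.0])
y = np.array([0.9, 0.6, 1.4, 1.0, 0.8, 1.1, 0.5, 0.9])

r, p_val = stats.pearsonr(x, y)
df = len(x) - 2

r_str = f"{r:.2f}".replace("0.", ".", 1)     # drop the leading zero, APA style
p_str = "p < .001" if p_val < 0.001 else "p = " + f"{p_val:.3f}".replace("0.", ".", 1)
# Produces a string of the form "r(6) = .XX, p = .XXX"
print(f"r({df}) = {r_str}, {p_str}")
```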


Regression results are often best presented in a table, but if you would like to report the regression in the text of your Results section, you should at least present the unstandardized or standardized slope (beta), whichever is more interpretable given the data, along with the t-test and the corresponding significance level. (Degrees of freedom for the t-test is N - k - 1, where k equals the number of predictor variables.) It is also customary to report the percentage of variance explained along with the corresponding F test.
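One hedged way to pull these numbers out of a fitted model is sketched below with statsmodels on simulated data; the variable names and the simulated dataset are assumptions made only for illustration:

```python
# Illustrative sketch: fit an OLS regression on simulated data and report the slope,
# its t test (df = N - k - 1), the variance explained, and the overall F test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 0.5 * x + rng.normal(scale=1.0, size=40)   # one predictor, so k = 1

X = sm.add_constant(x)                         # adds the intercept column
model = sm.OLS(y, X).fit()

b = model.params[1]                            # unstandardized slope
t_stat = model.tvalues[1]
p_val = model.pvalues[1]
df_resid = int(model.df_resid)                 # N - k - 1

p_str = "p < .001" if p_val < 0.001 else "p = " + f"{p_val:.3f}".replace("0.", ".", 1)
print(f"b = {b:.2f}, t({df_resid}) = {t_stat:.2f}, {p_str}")
print(f"R^2 = {model.rsquared:.2f}, F({int(model.df_model)}, {df_resid}) = {model.fvalue:.2f}")
```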

Tables are useful if you find that a paragraph has almost as many numbers as words. If you do use a table, do not also report the same information in the text. It's either one or the other.

Based on:
American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). Washington, DC: Author.




Introduce your data

Before diving into your research findings, first describe the flow of participants at every stage of your study and whether any data were excluded from the final analysis.

Participant flow and recruitment period

It’s necessary to report any attrition, which is the decline in participants at every sequential stage of a study. That’s because an uneven number of participants across groups sometimes threatens internal validity and makes it difficult to compare groups. Be sure to also state all reasons for attrition.


Reporting Biases

The dissemination of research findings is not a division into published or unpublished, but a continuum ranging from the sharing of draft papers among colleagues, through presentations at meetings and published abstracts, to papers in journals that are indexed in the major bibliographic databases (Smith 1999). It has long been recognized that only a proportion of research projects ultimately reach publication in an indexed journal and thus become easily identifiable for systematic reviews.

Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of results. Statistically significant, ‘positive’ results that indicate that an intervention works are more likely to be published, more likely to be published rapidly, more likely to be published in English, more likely to be published more than once, more likely to be published in high impact journals and, related to the last point, more likely to be cited by others. The contribution made to the totality of the evidence in systematic reviews by studies with non-significant results is as important as that from studies with statistically significant results.

The list below summarizes some different types of reporting biases.

  • Publication bias: the publication or non-publication of research findings, depending on the nature and direction of the results.
  • Time lag bias: the rapid or delayed publication of research findings, depending on the nature and direction of the results.
  • Multiple (duplicate) publication bias: the multiple or singular publication of research findings, depending on the nature and direction of the results.
  • Location bias: the publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of the results.
  • Citation bias: the citation or non-citation of research findings, depending on the nature and direction of the results.
  • Language bias: the publication of research findings in a particular language, depending on the nature and direction of the results.
  • Outcome reporting bias: the selective reporting of some outcomes but not others, depending on the nature and direction of the results.

While publication bias has long been recognized and much discussed, other factors can contribute to biased inclusion of studies in meta-analyses. Indeed, among published studies, the probability of identifying relevant studies for meta-analysis is also influenced by their results. These biases have received much less consideration than publication bias, but their consequences could be of equal importance.

Duplicate (multiple) publication bias

In 1989, Gøtzsche found that, among 244 reports of trials comparing non-steroidal anti-inflammatory drugs in rheumatoid arthritis, 44 (18%) were redundant, multiple publications, which overlapped substantially with a previously published article. Twenty trials were published twice, ten trials three times and one trial four times (Gøtzsche 1989). The production of multiple publications from single studies can lead to bias in a number of ways (Huston 1996). Most importantly, studies with significant results are more likely to lead to multiple publications and presentations (Easterbrook 1991), which makes it more likely that they will be located and included in a meta-analysis. It is not always obvious that multiple publications come from a single study, and one set of study participants may be included in an analysis twice. The inclusion of duplicated data may therefore lead to overestimation of intervention effects, as was demonstrated for trials of the efficacy of ondansetron to prevent postoperative nausea and vomiting (Tramèr 1997).

Other authors have described the difficulties and frustration caused by redundancy and the ‘disaggregation’ of medical research when results from a multi-centre trial are presented in several publications (Huston 1996, Johansen 1999). Redundant publications often fail to cross-reference each other (Bailey 2002, Barden 2003) and there are examples where two articles reporting the same trial do not share a single common author (Gøtzsche 1989, Tramèr 1997). Thus, it may be difficult or impossible for review authors to determine whether two papers represent duplicate publications of one study or two separate studies without contacting the authors, which may result in biasing a meta-analysis of this data.

Location bias

Research suggests that various factors related to the accessibility of study results are associated with effect sizes in trials. For example, in a series of trials in the field of complementary and alternative medicine, Pittler and colleagues examined the relationship of trial outcome, methodological quality and sample size with characteristics of the journals of publication of these trials (Pittler 2000). They found that trials published in low or non-impact factor journals were more likely to report significant results than those published in high-impact mainstream medical journals and that the quality of the trials was also associated with the journal of publication. Similarly, some studies suggest that trials published in English language journals are more likely to show strong significant effects than those published in non-English language journals (Egger 1997b); however, this has not been shown consistently (Moher 2000, Jüni 2002, Pham 2005).

The term ‘location bias’ is also used to refer to the accessibility of studies based on variable indexing in electronic databases. Depending on the clinical question, choices regarding which databases to search may bias the effect estimate in a meta-analysis. For example, one study found that trials published in journals that were not indexed in MEDLINE might show a more beneficial effect than trials published in MEDLINE-indexed journals (Egger 2003). Another study of 61 meta-analyses found that, in general, trials published in journals indexed in EMBASE but not in MEDLINE reported smaller estimates of effect than those indexed in MEDLINE, but that the risk of bias may be minor, given the lower prevalence of the EMBASE unique trials (Sampson 2003). As above, these findings may vary substantially with the clinical topic being examined.

A final form of location bias is regional or developed country bias. Research supporting the evidence of this bias suggests that studies published in certain countries may be more likely than others to produce research showing significant effects of interventions. Vickers and colleagues demonstrated the potential existence of this bias (Vickers 1998).

Citation bias

The perusal of the reference lists of articles is widely used to identify additional articles that may be relevant although there is little evidence to support this methodology. The problem with this approach is that the act of citing previous work is far from objective and retrieving literature by scanning reference lists may thus produce a biased sample of studies. There are many possible motivations for citing an article. Brooks interviewed academic authors from various faculties at the University of Iowa and asked for the reasons for citing each reference in one of the authors’ recent articles (Brooks 1985). Persuasiveness, i.e. the desire to convince peers and substantiate their own point of view, emerged as the most important reason for citing articles. Brooks concluded that authors advocate their own opinions and use the literature to justify their point of view: “Authors can be pictured as intellectual partisans of their own opinions, scouring the literature for justification” (Brooks 1985).

In Gøtzsche’s analysis of trials of non-steroidal anti-inflammatory drugs in rheumatoid arthritis, trials demonstrating a superior effect of the new drug were more likely to be cited than trials with negative results (Gøtzsche 1987). Similar results were shown in an analysis of randomized trials of hepato-biliary diseases (Kjaergard 2002). Similarly, trials of cholesterol lowering to prevent coronary heart disease were cited almost six times more often if they were supportive of cholesterol lowering (Ravnskov 1992). Over-citation of unsupportive studies can also occur. Hutchison et al. examined reviews of the effectiveness of pneumococcal vaccines and found that unsupportive trials were more likely to be cited than trials showing that vaccines worked (Hutchison 1995).

Citation bias may affect the ‘secondary’ literature. For example, the ACP Journal Club aims to summarize original and review articles so that physicians can keep abreast of the latest evidence. However, Carter et al. found that trials with a positive outcome were more likely to be summarized, after controlling for other reasons for selection (Carter 2006). If positive studies are more likely to be cited, they may be more likely to be located and, thus, more likely to be included in a systematic review, thus biasing the findings of the review.

Language bias

Reviews have often been exclusively based on studies published in English. For example, among 36 meta-analyses reported in leading English-language general medicine journals from 1991 to 1993, 26 (72%) had restricted their search to studies reported in English (Grégoire 1995). This trend may be changing: a recent review of 300 systematic reviews found that approximately 16% of reviews were limited to trials published in English, and systematic reviews published in paper-based journals were more likely than Cochrane reviews to report limiting their search to trials published in English (Moher 2007). In addition, of reviews with a therapeutic focus, Cochrane reviews were more likely than non-Cochrane reviews to report having no language restrictions (62% vs. 26%) (Moher 2007).

Investigators working in a non-English speaking country will publish some of their work in local journals (Dickersin 1994). It is conceivable that authors are more likely to report in an international, English-language journal if results are positive whereas negative findings are published in a local journal. This was demonstrated for the German-language literature (Egger 1997b).

Bias could thus be introduced in reviews exclusively based on English-language reports (Grégoire 1995, Moher 1996). However, the research examining this issue is conflicting. In a study of 50 reviews that employed comprehensive literature searches and included both English and non-English-language trials, Jüni et al reported that non-English trials were more likely to produce significant results at P<0.05, while estimates of intervention effects were, on average, 16% (95% CI 3% to 26%) more beneficial in non-English-language trials than in English-language trials (Jüni 2002). Conversely, Moher and colleagues examined the effect of inclusion or exclusion of English-language trials in two studies of meta-analyses and found, overall, that the exclusion of trials reported in a language other than English did not significantly affect the results of the meta-analyses (Moher 2003). These results were similar when the analysis was limited to meta-analyses of trials of conventional medicines. When the analyses were conducted separately for meta-analyses of trials of complementary and alternative medicines, however, the effect size of meta-analyses was significantly decreased by excluding reports in languages other than English (Moher 2003).

The extent and effects of language bias may have diminished recently because of the shift towards publication of studies in English. In 2006, Galandi et al. reported a dramatic decline in the number of randomized trials published in German-language healthcare journals, with fewer than two randomized trials published per journal per year after 1999 (Galandi 2006). While the potential impact of studies published in languages other than English in a meta-analysis may be minimal, it is difficult to predict in which cases this exclusion may bias a systematic review. Review authors may want to search without language restrictions, and decisions about including reports from languages other than English may need to be taken on a case-by-case basis.

Outcome reporting bias

In many studies, a range of outcome measures is recorded but not all are reported (Pocock 1987, Tannock 1996). The choice of outcomes that are reported can be influenced by the results, potentially making published results misleading. For example, two separate analyses (Mandel 1987, Cantekin 1991) of a double-blind placebo-controlled trial assessing the efficacy of amoxicillin in children with non-suppurative otitis media reached opposite conclusions mainly because different ‘weight’ was given to the various outcome measures that were assessed in the study. This disagreement was conducted in the public arena, since it was accompanied by accusations of impropriety against the team producing the findings favourable to amoxicillin. The leader of this team had received substantial fiscal support, both in research grants and as personal honoraria, from the manufacturers of amoxicillin (Rennie 1991). It is a good example of how reliance upon the data chosen to be presented by the investigators can lead to distortion (Anonymous 1991). Such ‘outcome reporting bias’ may be particularly important for adverse effects. Hemminki examined reports of clinical trials submitted by drug companies to licensing authorities in Finland and Sweden and found that unpublished trials gave information on adverse effects more often than published trials (Hemminki 1980). Since then several other studies have shown that the reporting of adverse events and safety outcomes in clinical trials is often inadequate and selective (Ioannidis 2001, Melander 2003, Heres 2006). A group from Canada, Denmark and the UK recently pioneered empirical research into the selective reporting of study outcomes (Chan 2004a, Chan 2004b, Chan 2005). These studies are described in Chapter 8 of the Handbook, along with a more detailed discussion of outcome reporting bias.


The next section of your lab report will be the method section. In this portion of your report, you will describe the procedures you used in your research. You'll include specific information such as the number of participants in your study, the background of each individual, your independent and dependent variables, and the type of experimental design you used.

In the results section of your lab report, you'll describe the statistical data you gathered from your research. This section will likely be quite short; you don't need to include any interpretation of your results. Use tables and figures to display statistical data and results.

