
A Summary Evaluation of the Top-Five Brazilian Psychology Journals by Native English-Language Scholars

Uma Avaliação do Sumário das Top-Cinco Revistas Brasileiras de Psicologia por Professores de Língua Nativa Inglesa

Abstract

In the current century, English is the language for the research and dissemination of scientific findings. But for many scholars, English is a foreign language. This is especially true in the emerging and developing nations (EDNs), such as the BRICS nations, encompassing Brazil, Russia, India, China, and South Africa. The present study surveyed the translational integrity and overall impression of translated summary materials (abstracts and titles) from the five highest-ranking (SCImago Journal Rank) Brazilian journals in the field of psychology. Analysis proceeded with two models. In the first model, translated summary materials from 12 randomly selected articles from four of the five journals were evaluated by a panel of three native English-language scholars. Findings indicated an inverse relationship between the overall impression of the materials and their abstract errors, r(34) = -0.61, p < .001, and total errors, r(34) = -0.62, p < .001, suggesting a direct relationship between the translational integrity of these EDN materials and the overall impression they leave with native English-language scholars. A second model added three articles from the fifth journal (English-language only) to the materials described. The findings from this second model suggested that for EDN journals, an investment in language resources may substantially improve the impression they leave with native English-language scholars and thus promote wider dissemination of their findings.

Keywords: English; translation; Brazil; lost science; lingua franca

Resumo

No século atual, o inglês tem sido a língua usada preferencialmente para pesquisa e a divulgação científica. Mas para muitos pesquisadores, o inglês é uma língua estrangeira. Essa constatação é muito verdadeira, especialmente, para nações emergentes e em desenvolvimento, (EDNs - Emerging and Developing Nations) tais como as nações do BRICS, abrangendo Brasil, Rússia, Índia, China, e África do Sul. O presente estudo é um levantamento da integridade translacional e a compreensão geral de sumários (resumos e títulos) das revistas brasileiras que ocupam os cinco primeiros lugares da classificação do SCImago Journal Rank, no campo da psicologia. A análise foi organizada em dois modelos. No primeiro, três professores de língua nativa inglesa avaliaram a tradução dos sumários de 12 artigos escolhidos aleatoriamente de quatro das cinco revistas. Os achados indicaram uma relação inversa entre a impressão geral e seus respectivos: erros no resumo r(34) = -0.61, p < .001; e erros totais r(34) = -0.62, p < .001; sugerindo uma relação direta entre a integridade translacional e a impressão geral que os artigos deixaram em professores de língua nativa inglesa. Um segundo modelo acrescentou 3 artigos de uma quinta revista, toda ela escrita em língua inglesa, aos materiais descritos. Os achados deste segundo modelo sugeriram que para as revistas EDNs, um investimento em recursos de linguagem poderão aumentar, substancialmente, a impressão que elas estão deixando em professores de língua nativa inglesa, e incrementar a divulgação dos seus achados.

Palavras-chave: inglês; tradução; Brasil; ciência perdida; língua franca


The enterprise of scholarship consists largely of writing manuscripts. It is a process of construction in which, floor by floor, level by level, an argument is presented (the hypothesis), the argument is tested, and the findings are interpreted and posited for future value. This level-by-level process involves a literature search, in which foundational material is gathered. In the 21st century, this search is conducted primarily on computers, with the scholar typing key words into worldwide databases (e.g., PubMed, PsycINFO, JSTOR, Google Scholar), which return published articles related to the search. From these published articles, the scholar gleans the background and theory that will serve as the foundation for the manuscript that he or she will write.

With worldwide databases, the scholar now has access to published science from all corners of the globe. While this diversity of information is a boon for hungry scholars, the abundance also presents challenges. Foremost among them is language. For the English-language scholar, this challenge may entail deciphering texts from non-English-language foreign research journals. However, with English as the lingua franca of the world, as the universal language of science (de Swaan, 2001; Meneghini & Packer, 2007), summary materials in English are increasingly included within the covers of these journals.

For example, when native English-language scholars browse through non-English-language journals, they typically encounter full texts in the native language, with abstracts and titles (the summary materials) in both the native language and English. While foreign-language journals occasionally publish special English-language issues, for the most part this hybrid format is the norm. What it offers the English-language scholar is the opportunity to grab the "gist" of an article: its purpose, methodology, results, and implications. But does this hybrid format serve the reader or the author? Is information or confusion conveyed?

There has been considerable writing about the so-called "lost science" (e.g., Hanes, 2014; Meneghini & Packer, 2007; Montgomery, 2013; Packer & Meneghini, 2007). Coined by Gibbs (1995), this term refers to the unaccessed scientific output of the "emerging" or "developing" nations (EDNs). A frequently grouped constellation of EDNs is the BRICS nations, consisting of Brazil, Russia, India, China, and South Africa (Wilson & Purushothaman, 2003) and constituting 43% of the world's population (Thussu & Nordenstreng, 2015). Unfortunately, even in the digital age, with open-access formats, science from the BRICS nations remains inaccessible or "lost" to many scholars in the English-speaking world. Why? Because it is not published in the lingua franca, English (Gibbs, 1995). In response to this conundrum, the dual-language abstract and title, at best a compromise, attempts to bridge the current gap. But does it work? Do these hybrid summary materials engage the English-language scholar? Or do they crumble in the process of translation? In our search for answers to these questions, we will sample several journals from Brazil. For our survey, we will focus on psychology.

Psychology is a field in which Brazil has grown considerably in recent years (Gamba, Packer, & Meneghini, 2015), with some authorities describing Brazil's recent boom as a golden age, similar to what took place in the US in the 1940s and 1950s (Hutz, McCarthy, & Gomes, 2004). This increase is substantiated by the 2014 edition of the SCImago database (SCImago, 2015), which reveals that between 2004 and 2013, "the number of Brazilian psychology publications leaped from 136 to 1,032 articles per year, which corresponds respectively to a 0.41% and 1.59% share of worldwide publications" (Gamba, Packer, & Meneghini, 2015, p. 67). Additionally, among the nations of Latin America and the Caribbean, SCImago notes that Brazil contributes more than 54% of published output in the field, and on the worldwide stage, ranks 15th (SCImago, 2015).

In spite of Brazil's impressive EDN statistics, there are problems on the international stage. In fact, the top Brazilian psychology journals perform below the mean when ranked from a global perspective (SCImago, 2015). Most authorities attribute this contrast to language (Collazo-Reyes, Luna-Morales, Russell, & Pérez-Angón, 2008; Meneghini & Packer, 2007; Packer, 2014). The editorial of this special issue cites language as the major challenge for Brazilian scientists (Gomes & Fradkin, 2015). Gamba, Packer, and Meneghini (2015) attribute the struggling performance of Brazilian journals on the international stage to the scarcity of English-language articles. A recent examination of several top-ranked Brazilian journals found "proficiency problems" in the English-language texts in terms of awkward collocation, nominal group errors, punctuation and capitalization, and preposition use errors (Hanes, 2014, p. 127). In that study, grammatical error rates among the journals were quite variable, ranging from 2.41 to 113.59 errors per 1,000 words (Hanes, 2014). Given this wide variability, one might assume that higher error rates would correlate with a poorer first impression of the journal, especially for the native English-language scholar. But is that necessarily so?

While the Hanes (2014) study highlights the grammatical shortcomings of translation in Brazilian journals, no work to date has evaluated the translational integrity of summary materials (i.e., the abstract and title) from Brazilian or other EDN journals. Yet authorities believe these summary materials are critical. For one, the American Psychological Association (APA) stresses the importance of conciseness and coherence in a title (American Psychological Association [APA], 2010, p. 23) and also stresses that a "well-prepared abstract can be the most important single paragraph in an article" (APA, 2010, p. 26). After all, the abstract is the "first contact" the reader has with the article when browsing in a literature search (APA, 2010, p. 26).

We should note that while the translational integrity of summary materials depends on grammatical integrity, it also depends on structured formats. For example, researchers browsing through an empirical study would, at minimum, expect the abstract to inform them of: (1) the research problem; (2) the participants; (3) features of the study methodology; (4) the findings (including effect sizes, confidence intervals, and indications of statistical significance); and (5) the conclusions and implications of the study (APA, 2010, p. 26). Thus, these English-language summary materials provide an opportunity for both non-native English-language scholars and the journals that disseminate their work. At their best, they are a bridge that spans the mighty distance between the lingua-franca and non-lingua-franca worlds.

The aim of this study, therefore, was to empirically examine the translational integrity and overall impression that translated EDN summary materials leave with native English-language scholars. We focused on materials from Brazilian psychology journals. Based on the weight of findings from past research, we hypothesized that: (a) there would be mixed levels of translational integrity in the summary materials; (b) the summary materials would leave a mixed impression with native English-language scholars; and (c) there would be a positive correlation between translational integrity and the overall impression the summary materials left with native English-language scholars.

Method

A random sampling of abstracts and titles from the five highest-ranked Brazilian psychology journals was presented to a discriminating group of English-language scholars. The scholars, in turn, evaluated the materials for translational integrity and overall impression.

Participants

Participants included English-language scholars recruited from the University of California. Inclusion criteria were: (1) native English-language speaker; (2) psychologist; (3) tenured faculty member; and (4) strong publication history. Candidates were restricted to associate or full professors with a minimum of 2,000 citations in their publication history. First-wave invitations were disseminated to a set of three candidates who satisfied the criteria. The candidates were informed that the study was examining the accessibility of foreign-language scientific articles that have been translated into English, and that half an hour of their time was requested to evaluate materials. All three first-wave candidates accepted, resulting in a sample that was 67% male, 100% Caucasian, and 67% full (vs. associate) professor, with age M = 57 years and M citation count = 2,582. The sample represented the sub-areas of developmental (33%), health (33%), and quantitative (33%) psychology. Citation counts were ascertained through Google Scholar. Native English-language speaker status was ascertained through educational history (CV) and in person. Compensation was not offered to participants.

Materials

Of the 1,042 worldwide psychology journals listed in the 2013 SCImago Journal Ranking (SJR), 16 (1.5%) are from Brazil. Three original research articles from the most recent issues of the five highest-ranked Brazilian journals in this field (SCImago, 2015) were randomly selected for analysis. All journals were peer-reviewed. Review articles, editorials, and commentaries were excluded. Abstracts and titles were gathered from the final articles (N = 15) and assembled on evaluation sheets, one article per sheet, with instructions for the evaluating scholars. With three evaluators for each article, this resulted in 45 sheets total (3 evaluators x 15 articles). Journal titles and names of authors were excluded from the sheets. Because one of the five journals publishes in English only and thus does not include dual-language (Portuguese and English) summary materials, we considered data from that journal in a secondary model, hereafter referred to as Model 2. Descriptive details on the journals appear in Table 1. For a sample evaluation sheet, see Appendix A.

Table 1
Descriptive Statistics of the Top Brazilian Psychology Journals

Measures

Error counts. Error counts served as a proxy for translational integrity (i.e., low error count: high translational integrity). Title errors were phrases, words, or sections in the title that "bumped" or inhibited the flow of information intake (e.g., grammatical errors, sentence structure errors, structural anomalies). Title errors were noted by the evaluating scholars, who underlined or circled the offending phrases, words, or sections on each evaluation sheet. Abstract errors were phrases, words, sentences, or sections in the abstract that "bumped" or inhibited the flow of information intake (e.g., grammatical errors, sentence structure errors, structural anomalies). As with title errors, abstract errors were noted by the evaluating scholars, who underlined or circled the offending phrases, words, sentences, or sections on each evaluation sheet. Note that, as used in this study, the term abstract errors refers specifically to errors in the abstract of the paper, not to vague or non-specific errors. Total errors were calculated by summing title errors and abstract errors.

Overall impression. Overall impression was indexed on a 5-point Likert scale, with evaluators rating their impression of the publication based on their review of the materials: "Based on your review of the abstract and title of this article, rate the likelihood of your revisiting this publication in the future: 0, non-existent; 1, unlikely; 2, possible; 3, likely; 4, very likely." After evaluation, these responses were recoded into a tri-category scale (0-1, Negative; 2, Neutral; 3-4, Positive), as presented in Table 2.

Table 2
Overall Rating Scale
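As an illustration only, the scoring and recoding steps described above can be sketched as follows. The actual data were transcribed and analyzed in SPSS (see Analysis); the tabular layout, column names, and values below are assumptions for demonstration, not the study's records.

```python
# Minimal sketch of the measures described above; the columns and sample
# values are illustrative, not the study's actual data.
import pandas as pd

scores = pd.DataFrame({
    "evaluator":       [1, 1, 2],
    "article":         ["A1", "A2", "A1"],
    "title_errors":    [0, 1, 0],
    "abstract_errors": [3, 7, 4],
    "overall_rating":  [3, 1, 2],   # 0-4 Likert scale from the evaluation sheet
})

# Total errors = title errors + abstract errors
scores["total_errors"] = scores["title_errors"] + scores["abstract_errors"]

# Recode the 5-point overall rating into the tri-category impression scale
def impression(rating: int) -> str:
    if rating <= 1:
        return "Negative"
    if rating == 2:
        return "Neutral"
    return "Positive"

scores["overall_impression"] = scores["overall_rating"].map(impression)
print(scores)
```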

Procedure

Evaluators and the facilitator signed consent forms. Evaluators were then handed the 15 sheets/articles for evaluation and instructed to approach the task as if they were browsing for articles relevant to their research. The instructions were reviewed with the evaluators, covering: (1) notating errors or "bumps" in the material; and (2) registering overall impression (see Appendix A). Evaluators were also reminded to focus on errors or structural anomalies that distracted them from the information-intake experience. They were reminded not to nitpick. They were also reminded that one phrase could qualify for more than one error. Evaluations were completed between May 14 and June 12, 2015, with each of the evaluators returning a full set (15) of completed sheets to the facilitator. The three sets of completed sheets (N = 45) comprised the data set.

Models

Model 1 was the primary model for our study and consisted of the four journals with Portuguese-and-English summary materials: Psicologia: Teoria e Pesquisa (Brasilia), Psicologia e Sociedade, Paidéia (Ribeirão Preto), and Psicologia: Reflexão e Crítica. Model 2 was the secondary model for our study and consisted of the four previously mentioned journals plus Psychology & Neuroscience, the journal published exclusively in English, in partnership with the APA. Because Model 1 addresses the main questions of our study, our findings will focus on that model. Having said that, when Model 2 offers contrasts or pertinent perspectives, the findings from that model will come forward. Whenever possible, however, data from and findings for each model appear jointly in tables and figures. Finally, for the sake of economy, the journals Psicologia: Teoria e Pesquisa (Brasilia) and Paidéia (Ribeirão Preto) will hereafter be referred to without the identifying cities in parentheses.

Analysis

Analyses were performed using SPSS version 16.0. First, title errors, abstract errors, and overall rating were manually transcribed from the 45 evaluation sheets. Although not included in the analyses, evaluator comments were transcribed as well. The variable total errors was then calculated by summing title and abstract errors. These raw scores are displayed in Table 3. Inter-rater reliability (IRR) was then addressed separately for both models using two-way mixed, absolute-agreement, average-measures intraclass correlations (ICCs; McGraw & Wong, 1996) to assess the degree to which the three evaluators were consistent in their ratings of the articles. ICCs were calculated separately for each of the four variables (title errors, abstract errors, total errors, and overall rating) to estimate consistency of agreement across evaluators. A robust ICC would suggest that a minimal amount of measurement error was introduced by the independent evaluators and that their ratings would be suitable for use in the hypothesis tests of the present study.

Table 3
Summary Material Ratings by Native English-Language Scholars: Raw Scores
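The two-way, absolute-agreement, average-measures ICC described above corresponds to ICC(A,k) in McGraw and Wong (1996). As an illustration only, and noting that the study's ICCs were computed in SPSS, a minimal numpy sketch of that formula applied to a synthetic articles-by-evaluators matrix might look like this:

```python
# Illustrative sketch of ICC(A,k): two-way, absolute agreement, average
# measures (McGraw & Wong, 1996). The ratings matrix here is synthetic.
import numpy as np

def icc_a_k(x: np.ndarray) -> float:
    """Average-measures, absolute-agreement ICC for an n x k ratings matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-article means
    col_means = x.mean(axis=0)   # per-evaluator means

    # Two-way ANOVA decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_error = np.sum((x - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # ICC(A,k) formula
    return (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)

# Synthetic abstract-error counts: 12 articles (rows) x 3 evaluators (columns)
rng = np.random.default_rng(0)
article_signal = rng.poisson(lam=4, size=(12, 1)).astype(float)
ratings = article_signal + rng.normal(0, 1, size=(12, 3))
print(f"ICC(A,k) = {icc_a_k(ratings):.3f}")
```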

Variability in error count (i.e., translational integrity) was addressed separately for both models through GLM univariate analysis. To test Hypothesis 1, main effects for the three error variables were separately examined with Wald F tests set at p < .05. In the event of a significant main effect, post hoc pairwise comparison tests, set at p < .05 with Bonferroni correction for multiple comparisons, were used to evaluate differences between journals. To test Hypothesis 2, the main effect for overall rating was examined with a Wald F test set at p < .05. In the event of a significant main effect, post hoc pairwise comparison tests, set at p < .05 with Bonferroni correction for multiple comparisons, were used to evaluate differences between journals. Although not a primary aim of the study, main effects for the three error variables and overall rating were also separately examined with Wald F tests set at p < .05 across evaluators. As with the previous analyses, in the event of a significant main effect, post hoc pairwise comparison tests, set at p < .05 with Bonferroni correction, were used to evaluate differences across evaluators.
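The omnibus-then-post-hoc logic described above can be sketched as follows. This is an illustration only: it substitutes an ordinary one-way ANOVA and Bonferroni-corrected pairwise t tests for the SPSS GLM Univariate/Wald F procedure actually used, and the journal labels and error counts are invented for demonstration.

```python
# Illustrative omnibus test plus Bonferroni-corrected pairwise comparisons.
# Journal labels and error counts are invented, not the study's data.
from itertools import combinations
from scipy import stats

errors_by_journal = {
    "Journal A": [3, 4, 5, 3, 4, 2, 3, 5, 4],
    "Journal B": [7, 8, 6, 9, 7, 8, 6, 7, 10],
    "Journal C": [3, 2, 4, 3, 3, 4, 2, 3, 4],
    "Journal D": [4, 5, 3, 4, 6, 5, 4, 3, 5],
}

# Omnibus test across journals (one-way ANOVA stand-in for the GLM main effect)
f_stat, p_val = stats.f_oneway(*errors_by_journal.values())
print(f"Main effect: F = {f_stat:.2f}, p = {p_val:.3f}")

# Post hoc pairwise comparisons with Bonferroni correction, only if significant
if p_val < .05:
    pairs = list(combinations(errors_by_journal, 2))
    for a, b in pairs:
        t, p = stats.ttest_ind(errors_by_journal[a], errors_by_journal[b])
        p_adj = min(p * len(pairs), 1.0)   # Bonferroni adjustment
        flag = "*" if p_adj < .05 else ""
        print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.3f} {flag}")
```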

To test Hypothesis 3, Pearson correlation coefficients were calculated separately for both models between overall rating and (1) title errors, (2) abstract errors, and (3) total errors. Four sets of correlations were calculated: one for each of the three evaluators, plus a fourth for the aggregated mean. Values on the 5-point Likert scale for overall rating were then recoded into a tri-category scale (negative, neutral, positive) for overall impression. Using this scale, articles and journals were ranked by the overall impression that they left with the evaluating scholars.
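As an illustration of the correlational test for Hypothesis 3, the following minimal sketch computes Pearson r between overall rating and each error count; the data vectors are synthetic stand-ins, not the study's ratings.

```python
# Illustrative Pearson correlations between overall rating and error counts.
# Vectors are synthetic; the study's values were computed in SPSS.
from scipy import stats

overall_rating  = [3, 1, 2, 4, 0, 2, 3, 1, 2, 3, 4, 1]
abstract_errors = [3, 8, 5, 2, 9, 6, 4, 7, 5, 3, 2, 8]
title_errors    = [0, 1, 0, 0, 2, 1, 0, 1, 0, 0, 0, 1]
total_errors    = [a + t for a, t in zip(abstract_errors, title_errors)]

for name, errs in [("title", title_errors),
                   ("abstract", abstract_errors),
                   ("total", total_errors)]:
    r, p = stats.pearsonr(overall_rating, errs)
    # degrees of freedom are reported as n - 2, e.g., r(10) for 12 articles
    print(f"overall rating vs {name} errors: r({len(errs) - 2}) = {r:.2f}, p = {p:.3f}")
```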

Results

Descriptive Statistics

As shown in Table 1, all five Brazilian journals in our sample rank in the fourth quintile internationally, placing them below the international average. On the domestic front, all but one of the journals (Psychology & Neuroscience) are currently indexed by the Scientific Electronic Library Online (SciELO) database, attesting to their presence in the Latin American and Caribbean markets. Additionally, the CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) Qualis indicator, a rating administered by Brazil's Ministry of Education, assigns three of the five journals (Psicologia: Teoria e Pesquisa; Paidéia; Psicologia: Reflexão e Crítica) its highest rating of A1, and two (Psicologia e Sociedade; Psychology & Neuroscience) its second-highest rating of A2. Of the five journals, the one included in the second model only (Psychology & Neuroscience) is published in partnership with the American Psychological Association (APA), while the remaining four are produced entirely within Brazil. As measured by the SCImago Journal Ranking (SJR) indicator (SCImago, 2015), Figure 1 reveals an impact differential of 29 to 1 between the top-five international psychology journals (M = 8.05) and the top-five Brazilian psychology journals in our sample (M = 0.28).

Figure 1
SCImago Journal Ranking (SJR) of highest-ranking international (left) and Brazilian (right) psychology journals.

Note. SJR, SCImago Journal Rank Indicator (2015); Ann Rev of Psych, Annual Review of Psychology; Psych Bulletin, Psychological Bulletin; Pers & Soc Psych Rev, Personality and Social Psychology Review; Ann Rev of Clin Psych, Annual Review of Clinical Psychology; Psi: Teoria e Pesquisa, Psicologia: Teoria e Pesquisa (Brasilia); Psi e Sociedade, Psicologia e Sociedade; Paidéia, Paidéia (Ribeirão Preto); Psych & Neurosci, Psychology and Neuroscience; Psi: Reflexão e Crítica, Psicologia: Reflexão e Crítica.

Interrater Reliability

IRR was addressed for both models. For Model 1, ICCs were calculated separately for the four variables of interest and yielded values, per Cicchetti (1994), in the good (title errors) to excellent (abstract errors, total errors, overall impression) range (title errors ICC = 0.630 [95% CI = 0.103, 0.880], abstract errors ICC = 0.863 [95% CI = 0.647, 0.957], total errors ICC = 0.849 [95% CI = 0.613, 0.952], overall impression ICC = 0.768 [95% CI = 0.407, 0.926]). These values indicated that the evaluators had a reasonably strong degree of agreement and suggested that ratings were consistent across evaluators. Incorporating the fifth journal into the analyses, the ICCs for Model 2 were substantially the same, with values in the good (title errors, overall impression) to excellent (abstract errors, total errors) range (title errors ICC = 0.655 [95% CI = 0.225, 0.871], abstract errors ICC = 0.849 [95% CI = 0.647, 0.944], total errors ICC = 0.844 [95% CI = 0.636, 0.943], overall impression ICC = 0.737 [95% CI = 0.393, 0.903]). The moderately high ICCs suggest that a minimal amount of measurement error was introduced by the independent evaluators, and evaluator ratings were therefore deemed suitable for use in the hypothesis tests of the study.

Error Counts (Hypothesis 1)

Model 1. As seen in Table 4, while mean title error counts per article ranged from 0.00 (Paidéia) to 0.56 (Psicologia e Sociedade), the main effect, F(3, 32) = 1.48, p = .239, was not significant. Mean abstract error counts ranged from 3.22 (Psicologia: Reflexão e Crítica) to 7.00 (Psicologia e Sociedade), with a significant main effect, F(3, 32) = 4.76, p = .007. Post hoc analysis found the mean abstract error count of Psicologia e Sociedade statistically higher than the mean abstract error counts of the other three journals. Calculated as the sum of title and abstract error counts, mean total error counts ranged from 3.33 (Paidéia) to 7.56 (Psicologia e Sociedade), and likewise, univariate analysis revealed a significant main effect, F(3, 32) = 4.87, p = .007. Post hoc analysis found the mean total error count of Psicologia e Sociedade statistically higher than the mean total error counts of Paidéia and Psicologia: Reflexão e Crítica; however, there was no statistical difference between the mean total error count of Psicologia: Teoria e Pesquisa and those of the other three journals. As with the other post hoc pairwise comparison tests, these analyses were conducted at p < .05 with Bonferroni correction for multiple pairwise comparisons. Error counts were also examined across evaluators, but consistent with the IRR analysis, no main effect was found: title errors, F(2, 33) = 1.65, p = .208; abstract errors, F(2, 33) = 0.41, p = .668; total errors, F(2, 33) = 0.47, p = .629. In the absence of a main effect, post hoc analyses were not pursued. Means and standard deviations are displayed in Table 5.

Table 4
Mean (and Standard Deviation) for Error and Overall Rating Variables: Across Journals
Table 5
Mean (and Standard Deviation) for Error and Overall Rating Variables: Across Evaluators

Model 2. As seen in Table 4, when the fifth journal (Psychology & Neuroscience) was incorporated into the analyses, the mean error counts were reduced for all three variables while the mean overall rating increased. However, there were no substantial changes in the effects: title error counts, F(4, 40) = 2.04, p = .107; abstract error counts, F(4, 40) = 5.30, p = .002; and total error counts, F(4, 40) = 5.66, p = .001. Post hoc analyses were also consistent with Model 1. As in Model 1, error counts were also examined across evaluators, but consistent with the IRR analysis, no main effect was found: title errors, F(2, 42) = 1.55, p = .225; abstract errors, F(2, 42) = 0.86, p = .432; total errors, F(2, 42) = 0.96, p = .393. In the absence of a main effect, post hoc analyses were not pursued. Means and standard deviations are displayed in Table 5.

Evaluator Comments

Evaluator comments were unsolicited and were offered in response to materials from three journals. Psicologia: Teoria e Pesquisa and Paidéia received one and two comments, respectively, on inconsistent or inappropriate verb tense. These comments were from Evaluator 2. Psicologia e Sociedade received the harshest comments (e.g., "very convoluted," "generally poor," "bad"). These comments came from all three evaluators. However, the presence or absence of evaluator comments did not factor into the analyses.

Overall Rating (Hypothesis 2)

Model 1. As seen in Table 4, mean overall ratings ranged from 1.11 (Psicologia e Sociedade) to 2.44 (Paidéia and Psicologia: Reflexão e Crítica), with a significant main effect, F(3, 32) = 9.38, p < .001. Post hoc analysis found the mean overall rating of Psicologia e Sociedade statistically lower than the mean overall ratings of Paidéia and Psicologia: Reflexão e Crítica; however, there was no statistical difference between the mean overall rating of Psicologia: Teoria e Pesquisa and those of the other three journals. Post hoc pairwise comparison tests were conducted at p < .05 with Bonferroni correction for multiple pairwise comparisons. Overall rating was also examined across evaluators, but consistent with the IRR analysis, no main effect was found, F(2, 33) = 1.92, p = .163. In the absence of a main effect, post hoc analyses were not pursued. Means and standard deviations are displayed in Table 5.

Model 2. As seen in Table 4, when the fifth journal (Psychology & Neuroscience) was incorporated into the analyses, the mean overall ratings ranged from 1.11 (Psicologia e Sociedade) to a higher 2.89 (Psychology & Neuroscience). However, there was no substantial change in the main effect, F(4, 40) = 9.82, p < .001. Post hoc analysis found the mean overall rating of Psychology & Neuroscience statistically higher than the mean overall ratings of Psicologia e Sociedade and Psicologia: Teoria e Pesquisa, and the mean overall rating of Psicologia e Sociedade statistically lower than the mean overall ratings of Paidéia, Psychology & Neuroscience, and Psicologia: Reflexão e Crítica. Post hoc pairwise comparison tests were conducted at p < .05 with Bonferroni correction for multiple pairwise comparisons. Overall rating was also examined across evaluators, but consistent with the IRR analysis, no main effect was found, F(2, 42) = 2.32, p = .110. In the absence of a main effect, post hoc analyses were not pursued. Means and standard deviations are displayed in Table 5.

Associations Between Overall Rating and Error Counts (Hypothesis 3)

Model 1. Table 6 reports the associations between mean overall rating and the three mean error counts, individually for each evaluator and aggregated for the panel as a whole. Considered as a whole, findings indicated a strong negative correlation between mean overall rating and mean abstract errors, r(34) = -0.61, p < .001, and between mean overall rating and mean total errors, r(34) = -0.62, p < .001. There was no statistically significant relationship between mean overall rating and mean title errors, r(34) = -0.29, p = .090. For Evaluator 1, there were no statistically significant relationships between mean overall rating and any of the mean error counts: title errors, r(10) = 0.17, p = .588; abstract errors, r(10) = -0.49, p = .109; and total errors, r(10) = -0.46, p = .135. For Evaluator 2, findings indicated a strong negative correlation between mean overall rating and mean abstract errors, r(10) = -0.79, p = .002, as well as between mean overall rating and mean total errors, r(10) = -0.83, p = .001, while the relationship between mean overall rating and mean title errors, r(10) = -0.29, p = .357, was not statistically significant. For Evaluator 3, findings indicated strong negative correlations between mean overall rating and all three mean error counts: title errors, r(10) = -0.83, p = .001; abstract errors, r(10) = -0.71, p = .009; and total errors, r(10) = -0.76, p = .004.

Table 6
Pearson Correlation Coefficients Between Overall Rating and Error Variables

Model 2. As seen in Table 6, incorporating the fifth journal (Psychology & Neuroscience) into the analyses of the sample as a whole yielded a moderate negative correlation between mean overall rating and mean title errors, r(43) = -0.33, p = .028; a strong negative correlation between mean overall rating and mean abstract errors, r(43) = -0.55, p < .001; and a strong negative correlation between mean overall rating and mean total errors, r(43) = -0.56, p < .001. For Evaluator 1, findings now indicated a strong negative correlation between mean overall rating and mean abstract errors, r(13) = -0.53, p = .042, while the relationships between mean overall rating and mean title errors, r(13) = 0.14, p = .635, and between mean overall rating and mean total errors, r(13) = -0.51, p = .054, remained statistically nonsignificant. For Evaluator 2, the associations were not substantially changed, nor were they for Evaluator 3.

Overall Impression (Hypotheses 1 & 2)

Table 7 presents an overall impression ranking of the Brazilian journals in the sample, based on a tri-category (negative, neutral, positive) scale (see Table 2 for the conversion). For Model 1, which includes the journals with dual-language summary materials, Paidéia and Psicologia: Reflexão e Crítica ranked highest, both with positive overall impression ratings based on two positive (++) and one neutral (±) article rating. Psicologia: Teoria e Pesquisa ranked second lowest, with a negative overall impression rating based on one positive (+) and two negative (--) article ratings. And Psicologia e Sociedade ranked lowest of the journals, with a negative overall impression rating based on three negative (---) article ratings. For Model 2, the fifth journal (Psychology & Neuroscience) ranked highest, with a positive overall impression rating based on three positive (+++) article ratings. This, consequently, moved the other journals down one position.

Table 7
Overall Impression of the Top-Five Brazilian Psychology Journals by Native English-Language Scholars

Discussion

This study is the first that we are aware of to establish a relationship between the translational integrity of translated EDN summary materials and the impression they leave with native English-language scholars. Consistent with Hypothesis 1, as well as recent research (e.g., Hanes, 2014), is the finding of mixed levels of translational integrity among the journals in our sample. Consistent with Hypothesis 2 is the finding that the summary materials left a mixed overall impression with our native English-language scholars. And consistent with Hypothesis 3 is the finding of a positive correlation between translational integrity and the overall impression the summary materials left with our native English-language scholars. This final finding answers the questions we posed as to the efficacy of dual-language summary materials. As to the potential of this hybrid format to narrow the gap between the lingua franca and non-lingua franca worlds? Our prognosis is conditionally positive. It is conditionally positive because, as our findings indicate, the overall impression of the translated summary materials is related to their translational integrity. This finding forecasts both a narrowing and a widening of the gap that separates the lingua franca and non-lingua franca worlds: a narrowing for those scientists and authors who rate high in translational integrity (through access to resources, an affinity for language, or a hunger to share their science with the world), and a widening for scientists who rate substantially lower, regardless of their circumstance or reason. Therefore, connection can be built or distance made wider, depending on the resource of translation (see Curry & Lillis, 2014).

Regarding the specifics of our findings, there were significant differences in translational integrity and overall impression ratings between two of the journals that we surveyed: Psychology & Neuroscience and Psicologia e Sociedade. A partial explanation may be resources. As previously mentioned, of the five journals in the sample, Psychology & Neuroscience is the only one partnered with an English-language entity; it is published by the APA. It is important to remember, however, that while other Brazilian journals have attempted to publish exclusively in English, Psychology & Neuroscience was the first Brazilian psychology journal to do so and succeed. This continued success, as well as the partnership with the APA, is at least partially attributable to the journal having native English-language resources that predated the move to the APA. Moreover, due to its partnership with the APA, the world's largest association of psychologists, the quality of submissions to Psychology & Neuroscience, in terms of content and coherence, may be at a slightly higher level than submissions offered to the other journals. As to the consistently low ratings of Psicologia e Sociedade, several observations bear discussion. The first concerns its presentation style. Of the five journals that we surveyed, Psicologia e Sociedade was the only one that styled its titles in an all-caps format. While title styling is at the discretion of the journal, most native English-language scholars, psychologists especially, are used to titles in APA format: "uppercase and lowercase" (APA, 2010, p. 23). Conceivably, therefore, an all-caps presentation may have negatively influenced the ratings. The second observation concerns evaluator comments. Of the materials evaluated, those from Psicologia e Sociedade were the only ones that garnered comments such as "generally poor" or "really didn't understand this at all." Although these comments were not included in the analyses, they are consistent with the ratings the journal received. An explanation for this phenomenon may be that Psicologia e Sociedade represents a segment of psychology very "Brazilian" in its thought; so Brazilian, in fact, that it has difficulty moving its native scientific concepts into sister terminology in English.

We encountered particular challenges in this study. The first was our evaluation of translational integrity. In contrast to the Hanes (2014) study, in which translated text was evaluated by a linguistic specialist for specific grammar errors, the evaluation in our study was more nuanced. Our native English-language scholars were not only evaluating grammar; they were dealing with a larger whole. They were dealing with the all-elusive abstract: the single paragraph that sums up the essence of the study, its point and purpose, its participants, its method, what it showed us, where it leads us, and what it tells us of the future. By its very nature, the scientific abstract is a complex beast to quantify. For grading, we adhered to no specific rubric. The evaluators focused on disruptions in the flow and, when encountering them, underlined or circled the offending text. And because disruptions in the flow may vary from person to person, the grading of an abstract is elusive. Nonetheless, within our study, as reported earlier, there was consistency across evaluators.

A second challenge was in our overall rating assessment. The instructional wording for this rating was: "Based on your review of the abstract and title of this article, rate the likelihood of your revisiting this journal in the future." We were reminded, however, that scholars browsing online pay less attention to the journal than to the article itself. When this issue arose, the facilitator clarified our intent and answered the evaluator's questions. And again, as mentioned earlier, the IRR analysis found consistency across evaluators.

This study aimed to empirically examine the translational integrity and overall impression that translated EDN summary materials leave with native English-language scholars. Previous studies by Theresa Lillis and Mary Jane Curry have examined writing-for-publication practices among non-native English-speaking scholars (Curry & Lillis, 2014; Lillis & Curry, 2010; Lillis, Hewings, Vladimirou, & Curry, 2010), and the previously mentioned Hanes (2014) study examined EDN grammatical integrity from a linguistic perspective. However, ours is the first study we are aware of to empirically examine the association between translational integrity and overall impression from the perspective of a panel of well-published native English-language scholars.

An obvious limitation of this research is that it focuses on summary materials from one country and one discipline: Brazil and psychology, respectively. Accordingly, it would be helpful to have data on summary materials from other EDNs, across a spectrum of varied disciplines. Should this opportunity arise, it would be especially helpful if the same instruments were used, which would facilitate the merging and comparison of data. Comparisons across disciplines would be interesting to have, as would comparisons across cultures and several other variables. Expanded research of this nature would also address another limitation: in this study, we looked at Portuguese-to-English translation only. Would findings be consistent for Chinese-to-English or Russian-to-English translation? Future research should address these and other questions as we struggle with the issues of lost science.

The present study established a direct relationship between the translational integrity of EDN summary materials and the overall impression they leave with native English-language scholars. In the case of our highest-rated journal (Psychology & Neuroscience), we see the likely influence that language resources have on translational integrity and, consequently, on the overall impression that the journal leaves with the native English-speaking scholar. In the case of our lowest-rated journal (Psicologia e Sociedade), we see the converse influence, through summary materials that, to this author's eyes, were not proofread by a native English-language speaker.

The mention of proofreading brings up our final issue: Are EDN journal editors aware of the importance of proofreading, especially of translated summary materials? From my experience at several EDN journals, in the roles of English-language editor, reviewer, and guest editor, the answer is emphatically no. In my experience, albeit anecdotal, the English-language translation of titles and abstracts is mostly viewed as a nuisance or a necessary task. It is rarely viewed as an opportunity to spread the gospel of one's science to the global universe. Hopefully, upon reading this, a few converts will be made. There is also the distinction between a "rote translation" and a translation transparent to the native English-language eye. Using the example of Portuguese-to-English translation, a rote translation would translate all the words, with minimal regard for sentence-structure stylings or the myriad subtleties between the languages. See Marlow (2014) for a top-ten list of the worst offenders. A transparent translation of the same material would read as if a native English speaker were the writer. And, in accordance with our findings (Model 1 and Model 2), a transparent translation of coherent subject matter would engender a positive impression on the native English-language reader, and a positive impression of the journal. And, in doing so, it would recover a scrap of the lost science, in whatever modicum or tiny bit.

References

  • American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
  • American Psychological Association. (2015). Retrieved from http://www.apa.org/
  • Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284-290.
  • Collazo-Reyes, F., Luna-Morales, M., Russell, J., & Pérez-Angón, M. A. (2008). Publication and citation patterns of Latin American & Caribbean journals in the SCI and SSCI from 1995 to 2004. Scientometrics, 75(1), 145-161.
  • Curry, M. J., & Lillis, T. M. (2014). Strategies and tactics in academic knowledge production by multilingual scholars. Education Policy Analysis Archives, 22(32).
  • de Swaan, A. (2001). Words of the world: The global language system. Cambridge: Polity Press.
  • Gamba, E. C., Packer, A. L., & Meneghini, R. (2015). Pathways to internationalize Brazilian journals of psychology. Psicologia: Reflexão e Crítica, 28(suppl. 1), 66-71.
  • Gibbs, W. W. (1995). Lost science in the third world. Scientific American, 273, 92-99.
  • Gomes, W. B., & Fradkin, C. (2015). Editorial. Psicologia: Reflexão e Crítica, 28(suppl. 1), 1.
  • Hanes, W. F. (2014). Nominal groups as an indicator of non-native English communication problems in top-ranked Brazilian science journals. Belas Infiéis, 2, 127-139.
  • Hutz, C., McCarthy, S., & Gomes, W. (2004). Psychology in Brazil: The road behind and the road ahead. In M. J. Stevens & D.Wedding (Eds.), Handbook of International Psychology (pp. 151-168). New York: Brunner-Routledge.
  • Lillis, T. M., & Curry, M. J. (2010). Academic writing in global context. London: Routledge.
  • Lillis, T., Hewings, A., Vladimirou, D., & Curry, M. J. (2010). The geolinguistics of English as an academic lingua franca: Citation practices across English-medium national and English-medium international journals. International Journal of Applied Linguistics, 20(1), 111-135.
  • Marlow, M. A. (2014). Writing scientific articles like a native English speaker: Top ten tips for Portuguese speakers. Clinics, 69, 153-157.
  • McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1(1), 30-46.
  • Meneghini, R., & Packer, A. L. (2007). Is there science beyond English? EMBO reports, 8, 112-116.
  • Montgomery, S. L. (2013). Does science need a global language?: English and the future of research. Chicago: University of Chicago Press.
  • Packer, A. L. (2014). The emergence of journals of Brazil and scenarios for their future. Educação e Pesquisa, 40(2), 301-323.
  • Packer, A. L., & Meneghini, R. (2007). Learning to communicate science in developing countries. Interciencia, 32, 643.
  • SCImago. (2015). SJR - SCImago Journal & Country Rank. Retrieved from http://www.scimagojr.com
  • Thussu, D. K., & Nordenstreng, K. (2015). Contextualizing the BRICS media. In K. Nordenstreng, & D. K. Thussu (Eds.), Mapping BRICS Media (p. 2). New York: Routledge.
  • Wilson, D., & Purushothaman, R. (2003). Dreaming with BRICs: The path to 2050 (Vol 99). Goldman, Sachs & Company.
1 Consulting editors for this article: Claudio Hutz & Gustavo Gauer (UFRGS), J. Landeira-Fernandez & Daniel Mograbi (PUC-Rio)

Appendix A

Publication Dates

  • Publication in this collection
    2015

History

  • Received
    08 July 2015
  • Reviewed
    12 July 2015
  • Accepted
    15 July 2015