Similar Literature
 20 similar records found
1.
Reports an error in the original article by J. Rotton et al. (American Psychologist, 1993[Aug], Vol 48[8], 911–912). Table 1 listed the journal Psychological Research twice, and the journals Cognition and Child Study Journal were omitted. The mean SSCI for applied journals in Table 1 should have been 1.17. Multiple rather than squared multiple correlations were reported for rejection rates. Area and type of journal explained 48% of variance in rejection rates, and the F ratio for predicting citations should have been F(9,28) = 14.82. On page 912, the mean SSCI for experimental journals should have been 1.51. (The following abstract of this article originally appeared in record 1994-03368-001.) Comments on L. C. Buffardi and J. A. Nichols's (1981) list of rejection rates for psychological journals and further examines the relation between rejection rates, citation impact, and journal value. It was found that 69% of the variance in rejection rates was explained by area and type of journal. As Buffardi and Nichols reported, rejection rates were higher for APA than for non-APA journals (80.27% vs 65.37%), and citation indices were higher for APA than for non-APA journals (2.63 vs 0.91)… (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Investigated the role that factors such as journal circulation and acceptance rate play in relation to citation impact (CTI). CTI was negatively related to acceptance rate and positively related to circulation, consistent with the construct of CTI as a measure of journal quality. CTI was highest for moderate publication lag and had substantial stability over time. Compared to non-American Psychological Association (APA) journals, APA journals had significantly higher CTI in both 1977 and 1978, higher circulation, and lower acceptance rates. CTI is probably the best single measure of journal quality currently available. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Investigated possible favoritism in the publication policies of 4 American Psychological Association (APA) journals and 6 non-APA journals. While none of the APA journals had editors who were in private practice, retired or otherwise unaffiliated, all the non-APA journals had at least 2 editors in this category. Individual journals varied in the degree of professional favoritism shown, but this was not related to APA membership. Most journals devoted less than 10% of articles to editors' contributions, up to 33% to contributions from professional colleagues, and the remainder to sources outside immediate affiliations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Some characteristics of authors in 10 American Psychological Association journals and 9 other selected psychological journals are presented in a table. The "number of authors who are not affiliated with the APA is rather surprising although many of these may be students. The total proportion of non-APA members for the APA journals is .23 and for the others .28." Approximately "one-quarter to one-third of the authors throughout have no divisional affiliation… . All of the journals studied contained authors who were members of the Psychonomic Society. The APA journal with the highest proportion in this group was Psychological Review," and the next highest was the Journal of Experimental Psychology. Slightly more than ? of the articles are by individual authors and about ? are by 2 authors. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Evaluated professional cooperation in exchanging information about replicating studies on request using 6 American Psychological Association (APA) journals and 6 non-APA journals. 54 satisfactory responses were received from 72 requests. Data indicate that APA or non-APA status was not significant. Findings highlight the existence of a substantial level of responsibility and professional cooperation among research psychologists regarding replication information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The goal of the present comment was to empirically examine and describe the temporal trends in article length for American Psychological Association (APA) primary journals over the last 20 years (1986-2005) and the extent to which these trends were moderated by differences in journal impact factor (i.e., frequency of article citation). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Objective: We conducted a citation analysis to explore the impact of articles published in Health Psychology and determine whether the journal is fulfilling its stated mission. Design: Six years of articles (N = 408) representing three editorial tenures from 1993–2003 were selected for analysis. Main Outcome Measures: Articles were coded for several dimensions enabling examination of the relationship of article features to subsequent citation rates. Journals citing articles published in Health Psychology were classified into four categories: (1) psychology, (2) medicine, (3) public health and health policy, and (4) other journals. Results: The majority of citations of Health Psychology articles were in psychology journals, followed closely by medical journals. Studies reporting data collected from college students, and discussing the theoretical implications of findings, were more likely to be cited in psychology journals, whereas studies reporting data from clinical populations, and discussing the practice implications of findings, were more likely to be cited in medical journals. Time since publication and page length were both associated with increased citation counts, and review articles were cited more frequently than observational studies. Conclusion: Articles published in Health Psychology have a wide reach, informing psychology, medicine, public health and health policy. Certain characteristics of articles affect their subsequent pattern of citation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
9.
Conducted a citation analysis of 57 psychology journals. Total citations to articles published in each journal in 1972 and 1973 were counted from a sample of pages (10%) in the Social Science Citation Index. Journals were rank ordered according to citation frequency per article published in each journal during the 2-yr period. Mean citation rate per published article was .9. Spearman rank correlations between the rank order based on citations per article and the rank orders of the same journals determined by subjective evaluation in 2 previous studies, by D. Koulack and H. J. Keselman (1975) and K. C. Mace and H. D. Warner (1973), were .39 and .56, respectively. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Compared the K. C. Mace and H. D. Warner (1973) list of chairpersons' ratings of psychological journal reputation and an objective measure of journal eminence. No close correspondence was found between ratings and citation counts for journals. For chairpersons, professional reputation of a journal is evaluated by criteria other than its visibility in the literature. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Three recent comments in the September 1976 issue (Buss & McDermott; Levin & Kratochwill; Porter; see records 1990-57250-001, 1990-57248-001, and 1990-57249-001), attempting to deal with the difficult area of assessing journal "reputations," raised some interesting questions concerning our study reporting journal rankings (Koulack & Keselman, November 1975; see record 1976-24649-001). We are in agreement with Buss and McDermott (1976) that citations and rankings might not be measuring the same things, but we are in disagreement with Porter, who suggests that "fine ordering among journals is whimsical" (p. 675). In fact, as we suggest in our introduction and have demonstrated in the body of our article (Koulack & Keselman, 1975), journal rankings change as a function of type of work and area of interest. Perhaps Porter's (1976) findings might be a bit whimsical because of the procedure he used to obtain his correlations. Moreover, it is impossible to probe further because Porter does not present the rankings of the two journals chosen from the APA members' top 50, which appeared in either of the citation measures' top 50. Such data might provide some insight into the low correlations obtained between journal citations and rankings. For example, extremely low citation rankings on either citation index for these two journals, given their relatively high position in the APA membership rankings, would diminish the size of the correlation coefficients. The Levin and Kratochwill (1976) comment is somewhat annoying because it distorts a line from Shakespeare as well as misrepresents our presentation. They imply that (a) we thought our rankings represented a definitive approach to the journal rating problem, (b) we neglected to place emphasis on a table presented in the paper, and (c) respondents chose to ignore our instructions and, in fact, rated journals on the basis of familiarity.
In conclusion, we appreciate the fact that there are numerous ways of examining journal reputations (e.g., rankings by departmental chairpersons, rankings by APA membership, citations obtained from 77 psychology journals published in 1969, citations obtained from 3 psychology journals published from July 1973 to June 1975) and that each of them has potential value. However, comments that are not based on empirical investigation, such as those of Levin and Kratochwill (1976), are mere suppositions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Asserts that D. L. Schaeffer (1970) should not have compared journals published by the American Psychological Association (APA) with non-APA journals in ascertaining whether favoritism exists in APA journals. The comparison should have been with journals that do not practice favoritism, such as Psychometrika, in which articles are rated anonymously. A reply from Schaeffer follows. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
OBJECTIVES: To assess medical research publication output in New South Wales (NSW). DESIGN: Analysis of publication information from the Medline indexing database, 1993-1996 inclusive. SETTING: Teaching hospitals and affiliated universities and medical research institutes within NSW, the major sites for NSW medical science publications. MAJOR OUTCOME MEASURES: Cumulative number and location of Medline-identified publications; journal citation indices (impact factor and immediacy index). RESULTS: 8860 published articles were captured for the analysis period. Universities and hospitals accounted for most of the publications (n = 7755). A mean of 73.1% (range, 36%-100%) of all articles were published in overseas journals, and the rest in Australian journals. This average trend applied to most universities and teaching hospitals, whereas research institutes published almost exclusively in overseas journals. Average publication impact factor values for most universities and teaching hospitals were around the average value for all NSW publications (2.203). The range for teaching hospital publications was 1.000-2.823, but for the overseas-publishing medical research institutes it tended to be higher (2.480-5.423). Immediacy index data yielded similar findings. CONCLUSIONS: The universities and teaching hospitals account for most of the medical publications arising from NSW, and also those appearing in Australian journals. Thus, these sites provide the bulk of Australian medical practice end-user information. In contrast, the medical institutes concentrate on publishing in overseas journals with higher and quicker citation rates (higher impact factor and immediacy index).

14.
"The Classified Telephone Directory and the APA Directory… continue to provide crude barometer readings of some public activities by individuals claiming to be 'psychologists.'… At the present time, 55.8% of all individual advertisers are APA members, as compared to 46.7% in 1953 and 18.5% in 1947. In actual numbers, the extent of the APA takeover of the 'Psychology Section' is even more impressive: from 38 in 1947 to 323 in 1957, an increase of 750%. In comparison, the growth of non-APA members has been only 53% (from 167 to 256). Whereas in 1947 non-APA advertisers outnumbered listed APA members 5:1 (167:38), ten years later the ratio is better than even in favor of APA psychologists (256:323)." Tables of data surveying 1957 advertisers in the "Psychology Section" of Classified Telephone Directories are provided, along with 3 tables covering the years 1947, 1949, and 1953 as well. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
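The growth figures quoted in this entry are plain relative-change arithmetic; a minimal sketch using the advertiser counts from the abstract reproduces them:

```python
def pct_growth(old, new):
    """Relative growth from an old count to a new count, in percent."""
    return (new - old) / old * 100

# APA advertisers, 1947 -> 1957: 38 to 323
print(round(pct_growth(38, 323)))   # 750
# Non-APA advertisers over the same decade: 167 to 256
print(round(pct_growth(167, 256)))  # 53
```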

15.
Comments on the article by J. R. Haynes (August, 1983) regarding core psychology journals. Haynes originally argued that two APA journals, Journal of Experimental Psychology: Human Perception and Performance and Journal of Experimental Psychology: Human Learning and Memory, failed to be included in the citation impact list because of their extremely low citation impact in the Journal Citation Report. However, the Journal Citation Report for 1979-1981 is an unreliable source on the Journal of Experimental Psychology journals for a number of reasons, including the citation of nonarticles and the conflation of citations for different journals. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Citation rates for 21 commentaries, the original articles, and the replies that were published in the Journal of Experimental Psychology: General between 1975 and 1980 were obtained and compared with citation rates for control articles, which were those that immediately preceded the original articles in the journal. It is concluded that although the commentaries were lengthy, they had little discernible effect on the subsequent literature. There was a significantly higher citation frequency for the original article compared to controls. (8 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Journal citations and scientific eminence in contemporary psychology.
Investigated the validity and reliability of publication citations as a measure of scientific eminence in psychology. Citations from 14 representative English-language journals published from 1962-1967 were examined. Results were compared with other indices of eminence, i.e., being listed in American Men of Science, receiving scientific contribution awards, election to the presidency of the American Psychological Association, etc. Results suggest that journal citation provides an index that is correlated with other measures of eminence. Difficulties arising from high journal citation coupled with low eminence, and the reverse, are discussed. (30 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Comments on an article by T. R. Elliott and E. K. Byrd (see record 1987-23075-001), in which the authors conducted a citation analysis of articles in Rehabilitation Psychology, Volumes 27-29. They claimed to identify influential authors and publications useful for training and research in rehabilitation psychology. Their method of citation analysis, however, appears to have distorted their results. Crisp discusses the problems with their methodology, and makes suggestions for using a different citation method, using more than three volumes of the journal, and including other journals in the analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Maintains that there is little or no evidence that citation frequencies are useful for evaluating psychologists other than to identify those whose names are household words in the discipline. It is also stressed that the citation impact factor penalizes journals that publish articles that are not cited: The journal's impact factor is likely to be reduced by its publication of articles that do not conform to current customs, fads, and fashions. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
NM Meenen. Canadian Metallurgical Quarterly, 1997, 23(4):128-34; discussion 135-6
With research funds in short supply and competition for medical posts increasing, performance indicators and control instruments are being sought so that research funds can be allotted, and professorial appointments made, in relation to scientific performance. Incomprehensibly to many, the impact factor has become the decisive scientometric indicator at German universities despite substantial systematic limitations. The impact factor is derived from the journal citation reports. Its basis of calculation entails the following problems: the editorial board of the private Institute for Scientific Information (ISI) decides whether a journal is to be classified as a source journal, and the citation index of all journals is calculated from the citations in these source journals alone. Crucial means of influencing the impact factor therefore result from self-citations and citation groups within the source journals. Journals in languages other than English, and in alphabets other than Latin, are appreciably disadvantaged by the citation index, which is why, for example, the rapid development of the osteosynthesis technique in German-speaking countries went unnoticed by British and American orthopedic surgeons and scientists despite its international significance. The articles on postgraduate training that clinicians necessarily publish in the language of their own country are not cited, because the addressees of such publications do not engage in research. Clinical disciplines (especially highly specialized disciplines such as trauma and hand surgery) thus attain appreciably lower impact factors for their journals than basic disciplines and interdisciplinary clinical sectors, which lead the ranking of journals. The period covered in calculating the impact factor is only 2 years, which favors very modern, widely disseminated organs of publication with a short information half-life.
Of the 10 journals that are objectively most often cited and most important for the scientific community, only 2 are to be found among those with the highest impact factor. The impact front-runner from 1995 has a very low absolute number of citations. The impact factor provides limited statistical information on a journal within its special field, and even that use presupposes knowledge of its rules, limitations, and constraints. Its uncritical use as a general currency of science is fundamentally unscientific. In addition, it leads to the specialist knowledge within the universities being disregarded in favor of a pseudo-objective parameter determined elsewhere. At the very least, correction factors for the impact factor have to be applied in respect of the different disciplines. The faculties should reach agreement on the relevant organs of publication (including German-language ones). The impact factor is not suitable as an indicator of the research activity or the quality of a researcher or an institution. Alongside careful human judgment and other classical methods of decision making, the Science Citation Index can contribute to individual evaluation.
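The two-year calculation window criticized in this abstract is the standard journal impact factor definition: citations received in a given year to items published in the two preceding years, divided by the number of citable articles from those two years. A minimal sketch with hypothetical counts:

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Two-year impact factor: citations received this year to articles
    published in the two preceding years, divided by the number of
    citable articles published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 120 articles over the two prior years,
# which drew 300 citations this year.
print(impact_factor(300, 120))  # 2.5
```

This division is also why the abstract notes that fast-moving fields with short information half-lives are favored: citations arriving more than two years after publication simply never enter the numerator.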


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号