
1 Introduction

Periodic evaluation of academic job performance has been characterized as a substantial and central element of academic life [14] and an important criterion in hiring, tenure, and promotion decisions [16]. Both the criteria and the procedures for academic tenure and promotion may differ between types of academic institutions (for example, research universities, doctorate-granting universities, comprehensive universities, and liberal arts colleges) [16]. Differences in evaluation criteria may also exist between disciplines as well as between academic systems (for example, the US versus the French or German systems). However, three main areas appear to be evaluated, although with varying weight and emphasis: research, teaching, and service. At research universities the highest weight is regularly put on research [14, 16], and lower weights are attributed to a scholar’s performance in teaching and service [12].

When academic tenure and promotion committees evaluate a scholar’s relative performance in research, three main factors are considered: productivity, impact, and individual signature.

The first factor, productivity, typically refers to a scholar’s quantitative annual publication output at ranked and institutionally accepted outlets, which provide high-quality, double-blind peer reviews of submitted work. When inspecting a scholar’s publication output across time periods, evaluators expect to find a so-called publication rhythm, that is, a pattern of uninterrupted publications, which is seen as documenting steady and ongoing research involvement [11].

The second factor, scholarly impact, has traditionally been measured in terms of the number of citations [2, 11, 12]. However, significant differences exist between disciplines with regard to the mean number of citations for the most senior researchers [11]. While senior social scientists may have lifetime citation counts in the three to four thousands, senior researchers in the natural sciences may have counts more than five times as high. The use of citation numbers as a proxy for measuring scholarly impact has repeatedly been criticized for its tendency towards inflation as a result of self-citations as well as the effect of multiple co-authorships, which function as citation accelerators [2]. Furthermore, a “lucky punch,” that is, a single massively cited publication, might represent the lion’s share of a scholar’s overall citation count, effectively hiding a weak publication rhythm. Last, the traditional citation indices, for example, Thomson Reuters’ Web of Science, accounted only for journal citations, neglecting other important publication outlets such as conferences, which penalizes disciplines in which journals play a lesser role, for example, Computer Science. The increasingly accepted Google Scholar citation index therefore includes journal as well as conference citations, among others, along with the h-index [13] and the i10-index, which indicates the number of publications cited at least ten times [21].
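For illustration, both indices can be computed directly from a scholar’s per-publication citation counts. The following is a minimal sketch in R (the environment also used for the analysis later in this paper); the citation vector is hypothetical and not data from this study.

    # Minimal sketch: h-index and i10-index from per-publication citation counts
    # (hypothetical numbers, not data from this study).
    h_index <- function(citations) {
      cites <- sort(citations, decreasing = TRUE)
      # largest h such that at least h publications have h or more citations each
      sum(cites >= seq_along(cites))
    }
    i10_index <- function(citations) {
      # number of publications cited at least ten times
      sum(citations >= 10)
    }

    example <- c(120, 45, 33, 18, 12, 9, 7, 3, 1, 0)
    h_index(example)    # 7: seven publications have at least 7 citations each
    i10_index(example)  # 5: five publications are cited at least ten times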

The third factor, scholarly signature, has become a more important measure and analytical lens in recent years, whereby published work is also analyzed along the lines of the identifiable individual contribution to the academic body of knowledge. Much scholarly work is co-authored by multiple authors rather than single-authored [1]. Hiring, tenure, and promotion committees look at the mix of single-authored versus co-authored papers and of lead versus non-lead co-authored papers. Also, the average number of co-authors is taken into account. The absence of single-authored or lead co-authored publications suggests an unidentifiable scholarly signature, whereas a significant number of single-authored and lead co-authored publications reveals an identifiable scholarly signature.

In this study, the productivity, impact, and individual scholarly signature of leading scholars in Electronic Government Research (EGR) are analyzed. EGR is a multi-disciplinary study domain, which is neither owned nor dominated by a single discipline. As a consequence, the accepted standards of inquiry vary. The objective of the study is to inform tenure- and promotion-seeking EGR scholars about the landscape of scholarship in the study domain and to provide orientation with regard to productivity, impact, and individual signature. It is also intended to help hiring, tenure, and promotion committees in their evaluation of candidates.

The paper is organized as follows: First, the current literature on the subject is briefly reviewed; then, the research questions are presented followed by the methodology section. Next, the findings are presented, which are then discussed in the succeeding section. Finally, the paper concludes that the EGR study domain has reached a new plateau of productivity, impact, and identifiable individual signatures of leading EGR scholars, which suggests that the study domain can maintain its solid academic standing as a multidisciplinary endeavor.

2 Literature Review

This review is concise, since the number of publications on EGR scholarship and publication trends is relatively low.

A number of bibliometric analyses based on the Electronic Government Reference Library (EGRL) have focused on the topical trends in EGR and on the profile of the scholarly community [18–22, 24]. Topical trends and researcher profiles in EGR were also studied by different means and data sources such as select journals and other outlets [7–9, 17]. According to these studies, EGR has so far mainly centered on topics such as organizational transformation, citizen participation, improvement of government services, technical design of e-government systems, institutional architectures and interoperability, policy and governance, and more recently also on topics such as cloud services, social media, transparency, and big and open data.

When attempting to size the active EGR community, two indicators were used. The EGOV-List listserv subscriber count tallied 1,200 members, while the co-author count of the EGRL showed over 3,800 entries [20]. The EGOV-List, however, also contains a couple hundred non-academic subscribers, whereas a large number of co-authors have only one or two entries in the EGRL. In contrast, the innermost circle of EGR scholars, that is, scholars with 18 or more publications, was reported to be significantly smaller at 51 scholars [21]. This led to an estimate of the active EGR community in the bracket of five to eight hundred. Scholl’s 2014 study also reported on the academic impact of EGR scholars in the so-called core or “inner circle” of the study domain by detailing and comparing, for the first time, the respective Google Scholar citation numbers and h- and i10-indices.

The Google Scholar citation counts along with the h- and i10-indices are seen as more representative of a scholar’s overall impact than the sum of journal-based citation counts multiplied by the respective journal’s impact factor. As mentioned above, the latter approach unduly ignores the impact of conference publications altogether, which is highly problematic for a number of disciplines that value conference publications significantly over journal publications.

Finally, the report also provided a breakdown of the top-51 EGR contributors by geography, revealing that the vast majority of leading researchers in this domain of study were still located in either Europe or North America. Interestingly, in the period between 2009 and 2013 the European share among the top-51 EGR scholars had increased to almost 61 %, while the North American share had fallen to under 30 % compared with the previous five-year interval [21].

In summary, over the past decade the study domain has grown significantly in numbers of publications and scholars, and slightly also in the number of disciplines involved. In the process, the domain has gained an excellent reputational standing across academia. Meanwhile, publications like “Forums for Electronic Government Scholars” [24] have reportedly influenced hiring, tenure, and promotion decisions of EGR scholars in positive ways. Such cases, however, also identified a gap in understanding and a need for clarifying the meaning and comparability of the various factors and indices of individual scholarly signature and individual impact.

3 Research Questions and Methodology

3.1 Research Questions

Based on bibliographic data derived from the EGRL (version 11.5, December 2015), it was possible to update the 2014 list of major contributors and most prolific EGR scholars along with these scholars’ academic impact (based on Google Scholar indices). Furthermore, the individual scholar’s “signature,” that is, her/his unique and individual contribution and impact, could be determined, which leads to the following three research questions:

  • Research Question #1 (RQ #1): What cumulative publication output have the leading EGR scholars produced, and how has it changed?

  • Research Question #2 (RQ #2): What are leading EGR scholars’ Google Scholar indices such as citation numbers, h-index, and i10 index, and how have they changed?

  • Research Question #3 (RQ #3): In light of the cumulative publication output and the Google Scholar indices, what are leading EGR scholars’ individual contributions (“signatures”), and how can they be determined?

3.2 Data Selection and Analysis

Data Selection. The data source for this study was the Electronic Government Reference Library (EGRL, version 11.5, December 2015) [22]. This reference library is a well-established and acclaimed source of peer-reviewed academic EGR articles in the English language, which on average is updated every six months (see http://faculty.washington.edu/jscholl/egrl/history.php). The publishers of the EGRL aspire (see http://faculty.washington.edu/jscholl/egrl/criteria.php) to consistently capture at least 95 % of the eligible peer-reviewed and published EGR literature. EGRL version 11.5 contained a total of 7,899 references, an increase of 1,616 references (or 25.7 %) over EGRL version 9.5 (6,283 references), which was the basis of the previous analysis two years before.

Data Extraction and Preparation. The EGRL version 11.5 was prepared with the EndNote reference manager, version X7.5.1.1 (Build 11194 – see http://endnote.com), which was used to export the references into the standard tagged RefMan (RIS) file format, a format widely used to format and exchange references between digital libraries. As in the previous study, references were extracted and prepared for further processing and analysis by means of the tags, for example, “TY - JOUR” for the publication type journal, or “AU - Bertot” for an author’s name. The data needed cleaning and harmonizing. For example, author names were found in different forms with regard to first names (abbreviated or full, with or without middle names or initials). Furthermore, diacritical characters needed to be replaced with plain characters. Author names containing multiple terms (first name, middle name, last name) were concatenated with double equal symbols (==) between the terms so as to avoid separation in subsequent analyses of term frequencies. Pre-analysis data preparation and harmonization was performed in part with TextEdit version 1.11 (Build 325) as well as with Mac Excel 2008 version 12.2.3 (Build 091001). All terms were converted to lowercase, and special characters were removed except for dashes and double equal symbols.
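To illustrate this preparation step, the following minimal sketch in R (an assumed reconstruction, not the original scripts; the file name is hypothetical) extracts the author tags from an RIS export and harmonizes the names as described:

    # Sketch of the extraction and harmonization step (assumed reconstruction;
    # hypothetical file name, not the original scripts).
    ris_lines <- readLines("egrl_11_5.ris", warn = FALSE)

    # Keep only author tags such as "AU  - Bertot, John C."
    au_lines <- ris_lines[grepl("^AU ", ris_lines)]
    authors  <- sub("^AU\\s*-\\s*", "", au_lines)

    # Harmonize: lowercase, replace diacriticals, drop punctuation, and join
    # multi-term names with "==" so they survive later term-frequency analysis
    authors <- tolower(authors)
    authors <- iconv(authors, from = "UTF-8", to = "ASCII//TRANSLIT")
    authors <- gsub("[.,]", "", authors)
    authors <- gsub("\\s+", "==", trimws(authors))

    head(authors)   # e.g., "bertot==john==c"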

Data Analysis. The analysis was mainly carried out using the R statistical package (version 3.0.3, GUI 1.63 Snow Leopard build (6660)). For text mining under R, the tm package version 0.5–10 by Feinerer and Hornik [10, 15] was downloaded from the Comprehensive R Archive Network (CRAN) (see http://cran.us.r-project.org – accessed 3/12/2014) and used. Frequencies of author names were counted, and authors with frequency counts greater than or equal to 20 (18 before, or +11.1 % over the previous study) were retained; these represented the 60 most prolific scholars in EGR (up from 51).
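The frequency step can be sketched in base R as follows (a simplified stand-in for the tm-based processing; the short name vector is hypothetical):

    # Count harmonized author-name frequencies and keep scholars with at least
    # 20 EGRL entries (simplified base-R stand-in for the tm-based processing).
    authors <- c("scholl==hans==j", "janssen==marijn", "scholl==hans==j",
                 "bertot==john==c", "janssen==marijn", "scholl==hans==j")  # hypothetical

    freq      <- sort(table(authors), decreasing = TRUE)
    top_group <- freq[freq >= 20]   # publication threshold used in this study

    length(top_group)   # on the full EGRL data this yields the 60 most prolific scholars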

For each author in the top 60, the number of co-authors was counted for each publication in the EGRL, providing the scholar’s average number of co-authors per publication. Furthermore, for each author in the top 60, the number of single authorships and lead co-authorships was counted, providing a single/lead author index, that is, the ratio of single/lead (co-)authored publications over all publications of the respective author.
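Both per-scholar measures can be sketched as follows; pubs_authors is a hypothetical list with one vector of harmonized author names per publication (whether the focal scholar is counted among her/his own co-authors is an interpretation left open here):

    # Per-scholar co-authoring metrics (sketch; pubs_authors is hypothetical data).
    scholar_metrics <- function(name, pubs_authors) {
      mine <- Filter(function(a) name %in% a, pubs_authors)
      data.frame(
        scholar         = name,
        publications    = length(mine),
        # mean number of authors listed on the scholar's publications
        avg_authors     = mean(sapply(mine, length)),
        # share of publications where the scholar is single or first-listed (lead) author
        single_lead_idx = mean(sapply(mine, function(a) a[1] == name))
      )
    }

    pubs_authors <- list(c("scholl==hans==j"),
                         c("janssen==marijn", "scholl==hans==j"))
    scholar_metrics("scholl==hans==j", pubs_authors)
    # publications = 2, avg_authors = 1.5, single_lead_idx = 0.5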

An additional (manual) data collection was performed with regard to the individual authors’ Google Scholar entries. For each scholar on the list, the citation count and the h- and i10-indices were recorded if publicly available (http://scholar.google.com/ – accessed March 7, 2016). For EGR scholars without a published profile, the Google Scholar citation counts and respective indices could have been counted and calculated manually; however, it is preferred that scholars publish their profiles themselves, which is strongly recommended since the underlying data is publicly available anyway.

It is also noteworthy that in several cases the Google Scholar counts were erroneous; for example, one EGOV scholar’s citation count was overstated by a staggering 811 citations (or 35.5 %). Other citation counts were also found to be identifiably inflated, yet not to the same order of magnitude as in the aforementioned case. It is suggested that EGR scholars carefully review their Google Scholar data, once published, and manually eliminate counting errors and citation inflation.

Finally, for each EGR scholar in the top 60, the number of single authorships or lead co-authorships was counted for the top-10 most cited publications in Google Scholar as another indicator of individual “signature.”
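This last count can be sketched as follows; the data frame is hypothetical, since the citation data were collected manually from Google Scholar:

    # Single/lead authorships among a scholar's ten most-cited publications
    # (sketch; hypothetical data frame with manually collected citation counts).
    top10_single_lead <- function(df) {
      top10 <- head(df[order(df$citations, decreasing = TRUE), ], 10)
      sum(top10$is_single_or_lead)
    }

    df <- data.frame(
      citations         = c(850, 430, 220, 180, 90, 75, 60, 41, 30, 22, 15),
      is_single_or_lead = c(TRUE, FALSE, TRUE, TRUE, FALSE, TRUE,
                            TRUE, FALSE, TRUE, TRUE, FALSE)
    )
    top10_single_lead(df)   # 7 of the 10 most-cited publications are single/lead authored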

4 Findings

Findings are presented in the order of the research questions.

4.1 Cumulative Scholarly Publication Output in EGR (RQ #1)

As recently presented elsewhere [23], within only two years the core or “inner circle” of EGR, defined by tallying a cumulative minimum of 20 peer-reviewed publications, expanded from 51 to 60 scholars (+18 %); at the same time, the minimum publication count for making it into the EGR core group increased by 11.1 % (from 18 to 20 publications).

It is also noteworthy that since the last published bibliometric evaluation in EGR, the body of EGR-related knowledge increased from 6,283 publications in 2013 to 7,899 in late 2015, that is, an increase of 25.7 % within just two years [23].

As Table 1 indicates, the ranking of the top-6 cumulatively most prolific EGR scholars remained the same compared with 2014, while a group of four scholars (Reddick, Charalabidis, Dwivedi, and Grönlund) moved up into the top-10. In 2016 it required at least 45 peer-reviewed EGRL-recorded publications to rank among the top-10 most prolific EGR scholars, whereas two years earlier 36 publications would have provided that same ranking.

Table 1. Cumulative publication output by top-20 most prolific EGR scholars (early 2016)

Interestingly, the minimum publication number for reaching a top-10 ranking increased by 25 %, matching the overall increase in EGR publications for the period studied. For scholars whose output did not keep pace, a focus on other areas of research or a slowdown of publication output due to retirement or leave of absence appear to be the most likely explanations among other reasons. EGR scholars Dwivedi (9), Tarabanis (15), and Becker (20) have traditionally published in areas other than EGR; in the case of Dwivedi, however, it appears that a major shift in favor of EGR has occurred. The cumulatively top-20 most prolific EGR scholars showed fairly wide ranges of productivity over the two-year period studied, ranging from no increase to a 75.9 % increase.

As discussed before, the percentage increases describe the emphasis (or de-emphasis, respectively) EGR scholars place on their EGR-related publication output. While the mean percentage increase in publications for the top-20 most prolific EGR scholars was 26.7 % (that is, slightly higher than the average increase in EGR publications), the median percentage increase was 21.2 %, and the mode was 12.5 %.

In summary, the majority of the top-20 most prolific scholars is still actively, and as the percentage numbers reveal, even heavily engaged in EGR, and this group strongly contributes to the growth of the body of academic knowledge in the study domain. It is also worth mentioning that among the top-20 most prolific EGR scholars one finds a number of current or former editors-in-chief of leading EGR journals (Janssen and Bertot/GIQ, Weerakkody/IJEGR, and Reddick/IJPADA) as well as organizers of leading conferences (Scholl/HICSS EGOV and IFIP EGOV, Janssen and Wimmer/IFIP EGOV). While no change was observed among the top-6 EGR contributors, some changes were noticed in the remainder of the top-20 rankings.

4.2 Leading EGR Scholars’ March 2016 Google Scholar Indices (RQ #2)

In this section the various Google Scholar indices are presented for the top-20 most prolific scholars in the domain. However, when interpreting citation numbers and indices, two particular circumstances have to be considered.

  1. As Scholl pointed out in an earlier study [21], several of the most prolific EGR scholars have large numbers of publications (and, therefore, citations and credentials) outside EGR. It would be greatly misleading if these numbers were used in direct comparison with those of mostly or solely EGR-focused scholars. Although the EGR-related citations for these scholars could be manually counted and the respective indices calculated, for the purpose of this study it was decided to exclude these cases, namely Dwivedi, Tarabanis, Irani, and Becker. Instead, the next most prolific authors were included as long as their citation numbers and indices were available from Google Scholar. This appears justifiable since, despite relatively large EGR publication numbers, the fraction of citations and indices relating to EGR publications was still found to be minor relative to the remainder of the respective scholar’s work. Admittedly, however, in domain analyses the use of indices clearly shows its weaknesses for those scholars who work across multiple domains and disciplines. In future studies, cases such as Dwivedi’s might therefore become more problematic in comparative analyses like this one, since a strong shift of focus towards EGR, as in Dwivedi’s case, might make it necessary to individually calculate the EGR-related impacts (and signatures).

  2. Another adjustment had to be made, since Grönlund, Macintosh, and Jaeger had not made their Google Scholar citations and indices public. In the absence of official numbers in these cases, the next most prolific scholars were included in the analysis instead, as long as their Google Scholar citations and indices were published (see also [23]).

As mentioned above, while citation indices have also been criticized from various other perspectives [1, 2, 11], they have nevertheless become a part of scholarly life and, in particular, of the evaluation of impact. In Tables 2, 3, and 4, the Google Scholar citation numbers, the h-indices, and the i10-indices are presented.

Table 2. Google Scholar citation numbers for leading EGR scholars (as of March 7, 2016); note: Grönlund, Macintosh, and Jaeger – unpublished/not included
Table 3. Google Scholar h-index for leading EGR scholars (as of March 7, 2016); note: Grönlund, Macintosh, and Jaeger – unpublished/not included
Table 4. Google Scholar i10-index for leading EGR scholars (as of March 7, 2016); note: Grönlund, Macintosh, and Jaeger – unpublished/not included

Table 2 shows the citation counts for leading EGR scholars as found on Google Scholar on March 7, 2016. Across the board, EGR scholars’ citation counts grew rapidly within the relatively short reporting period of two years. Citation counts increased between 19.9 % and 92.4 %. The rank order of the six most highly cited scholars did not change; however, Janssen and Gil-Garcia had the highest percentage increases in the top echelon.

Table 3 shows the h-index for leading EGR scholars from the same data collection. Also in this case, the top-6 EGR scholars’ rankings have remained unchanged. Percentage increases range between 12.5 % and 57.1 %.

In comparison, Table 4 presents the i10-index, again from the same data collection. Rankings are by and large similar to the other two indices. Also, in the case of the i10-indices, the average percentage increase equals almost 42 %.

In summary, as the Google Scholar indices reveal, the study domain’s leading scholars have significantly increased their overall impact across all three measures: citation counts, the h-index, and the i10-index. Quite a number of EGR scholars are listed in all tables presented so far.

4.3 Identifying Leading EGR Scholars’ Individual “Signatures” (RQ #3)

A scholar’s so-called academic publication rhythm, impact, and reputation (and with those her/his unique “signature”) are not only evidenced (a) by the sheer number of publications [5] along with citation numbers and indices, but also (b) by participating in and co-organizing academic conferences, workshops, and colloquia domestically and around the world at various levels, (c) by serving on editorial boards, (d) by receiving external and internal funding for research, (e) by invited talks at renowned venues, (f) by requests for reviewing journal/conference articles, book manuscripts, and grant proposals, (g) by holding offices with professional academic organizations, (h) by participating in public events and publishing websites, and also (i) by receiving national or international awards such as fellowships, residencies, prizes, and other honors (see [3]).

While a scholar’s unique “signature” needs to be considered along these various indicators, the authorship pattern of publications itself already provides a good sense of “signature”: Consider, for example, a scholar who mostly publishes as a single author as opposed to a scholar who never publishes as a single author. Or consider an author who, while publishing collaborative work with others, mostly holds the lead authorship, as opposed to a co-author who never appears in a lead author role, just to name some extremes. Conventions for ordering co-author names vary across academic disciplines.

The “sequence-determines-credit” (SDC) approach appears to be the most prevalent norm in many disciplines; under it, a name’s position in the sequence of co-author names indicates the relative weight of the individual contribution to the collaborative effort, from highest to lowest [4, 6, 25]. This norm also appears to be the most prevalent in the study domain of EGR despite the variety of contributing disciplines. A special case under this norm is a publication with two co-authors, which suggests equal contribution unless the alphabetical order of names is reversed or the lead co-authorship of an alphabetically first-listed author is indicated otherwise. Other norms include the “equal contribution” (EC) norm, which attributes citation numbers and impacts proportionally to the number of contributors, and the “first-last-author-emphasis” (FLAE) norm, which is used in some areas of biological and medical research, as well as the “percent-contribution-indicated” (PCI) approach, where authors acknowledge their contributions to the publication in percentage figures [25]. The latter two apparently play no role in EGR. Consequently, for this analysis a combined SDC/EC approach has been used.

Number of Co-authors. Among the top-20 most prolific and predominantly EGR-dedicated researchers, the preferences with regard to co-authoring vary widely. Based on the EGRL version 11.5, in this top group the average number of co-authors per peer-reviewed contribution amounts to 2.90 (adjusted mode: 2.65; median: 2.85). Average co-author counts range from 1.50 to 4.80. For example, whereas at the one end of the spectrum Reddick (1.50) and Wimmer (2.04) publish with comparatively few co-authors, at the other end Askounis (3.80) and Charalabidis (4.30) appear to regularly publish with quite a number of co-authors (average co-author counts in parentheses). While in the former two cases a significant individual contribution can be inferred, in the latter cases the individual co-author’s contribution remains unclear.

Number of Single and Lead Authorships. As mentioned above, single and lead authorships are indicators of high individual contributions to publication output and impact. Also in this category, the top-20 most prolific and predominantly EGR-dedicated researchers demonstrate widely different preferences. The spectrum ranges from an index of 0.88 (that is, in 88 % of the publications the author is either the single or the lead author) to zero (that is, not a single sole or lead authorship could be identified). On average, the top-20 most prolific authors hold a lead or single authorship in about every other publication (mean = 0.51, median = 0.49, and mode = 0.35).

Number of Single or Lead-authored Publications in Top-Ten Cited. While the former two categories already provide a good grasp of an individual scholar’s signature, looking at a scholar’s top-ten most highly cited publications in Google Scholar, the number of single and lead co-authored publications among the top ten reveals the individual impact even more clearly. Both the maximum and the range were found to be 8, that is, in the case of the maximum value, 8 of the 10 most highly cited publications were single or lead authored. The median and mode were 6, and the mean was 5.45. However, while these descriptive statistics suggest that leading EGR scholars truly lead also in terms of documented impact in this category, a few scholars predominantly gain their top-ten citation counts from publications in which they had no lead whatsoever. The average number of single and lead-authored publications is 4.8 and the median is 5.

As a result, when taking into consideration the three impact (or signature) categories of (1) number of co-authors, (2) number of single and lead authorships, and (3) number of single or lead-authored publications in top-ten cited, citation and impact indices can be adjusted accordingly, which is shown for citation counts in Table 5.

Table 5. Most-cited EGR scholars’ adjusted citation indices, lead authorship indices, co-authorship indices, and top-ten cited index; note: Grönlund, Macintosh, and Jaeger – unpublished/not included

When multiplying the gross citation number by the single/lead authorship index, an adjusted index results, which more adequately represents the scholar’s impact in terms of citations. As Table 5 reveals, adjustments made on this basis can significantly change a scholar’s impact figures. Similar adjustments could easily be made in the same fashion for h-indices and i10-indices (see gross numbers in Tables 3 and 4), which due to space constraints cannot be shown here. Further adjustments can also be made for the average number of co-authors by dividing the respective citation count, h-index, or i10-index by the average number of co-authors as discussed above. Again, due to space constraints these adjustments are not shown here.
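A small worked sketch of these adjustments (the numbers are hypothetical, not values from Table 5):

    # Adjustments to a gross Google Scholar citation count (hypothetical values).
    gross_citations   <- 5000   # Google Scholar citation count
    single_lead_index <- 0.60   # share of single/lead-authored publications
    avg_co_authors    <- 2.5    # average number of co-authors per publication

    gross_citations * single_lead_index   # 3000: adjusted for single/lead authorship
    gross_citations / avg_co_authors      # 2000: adjusted for the number of co-authors

    # The same multiplication/division applies to h-indices and i10-indices.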

Finally, for the most highly cited EGR scholars in Google Scholar, the number of single/lead authorships within their respective top-ten most highly cited publications is also shown in Table 5 (rightmost column); along with the other adjusted indices, it is a profound indicator of scholarly impact.

In summary, the three impact and signature categories discussed above allow for adjustments and informed interpretations of gross citation counts and indices. Adjusted counts and indices reveal more accurately the true impact of scholars, not just EGR scholars.

5 Discussion, Future Research, and Concluding Remarks

It has been the object of this investigation to update and further analyze the individual scholarly productivity of leading EGR scholars, determine their scholarly impact in terms of citations and citation indices, and introduce the concept of scholarly signature into EGR.

5.1 Remarks on Productivity and Unadjusted Impact

Overall Productivity. Since the end of 2005, the volume of publications in the English language in peer-reviewed outlets (see http://faculty.washington.edu/jscholl/egrl/history.php) has grown more than eight-fold, which represents a compound annual growth rate of 21.6 %. In the reporting period since the last investigation in 2014, the number of entries in the EGRL has grown by more than a quarter, indicating that the academic output in EGR has maintained its relatively strong growth pattern and suggesting that the study domain is well established and topically sound. Major contributors to the continued overall growth are the leading scholars in EGR, whose average growth in publication output equals the overall average. This steady growth helps explain the continued sustainability of five journals and four major international conferences in EGR without any detectable compromise in the quality of publications; on the contrary, the acceptance rates at leading conferences such as the HICSS EGOV track, for example, have decreased over the years.
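For reference, the compound annual growth rate follows the standard formula; applied, for illustration, to the documented two-year window (6,283 to 7,899 references), it yields roughly 12 % per year:

\[
\mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1,
\qquad
\left(\frac{7899}{6283}\right)^{1/2} - 1 \approx 0.121 .
\]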

Individual Productivity. Instruments such as the EGRL and Google Scholar make it possible to closely track scholars’ publication output individually and also to identify individual scholars’ publication behavior (in terms of preferred co-authors, number of co-authors, topics, outlets, and overall publication rhythm, among other measures). This provides an unprecedented and timely transparency to EGR scholars as well as to hiring, tenure, and promotion committees. While such transparency and measurability might be unwelcome to some, the vast majority of individual contributors shows remarkable levels of consistent performance. However, high productivity alone can only be an initial indicator, which in and by itself is not considered a sufficient measure of academic performance and contribution.

Unadjusted Impact. Ever since Google Scholar made individual scholarly profiles publishable in 2012, the impact of scholarly work has become more readily identifiable to a wide audience. As reported, erroneous citation counts can still be identified and eliminated. The margin of error in terms of the h-index and i10-index appears to be far smaller for obvious reasons. Despite these known deficiencies, by and large, the Google Scholar service appears to have gained in reputation over the years and now informs hiring, promotion, and tenure committees around the world. However, for the reasons discussed above, the citation counts in particular can be fairly misleading if taken at face value.

5.2 Remarks on Adjusted Impact and Signature

Adjusted Impact. The adjustments presented above account for the number of co-authors and the number of single and lead authorships in publications. Obviously, the former presents a straightforward way to adjust indices by dividing the various counts and indices by the average number of co-authors on a publication and distributing the results evenly. This approach effectively curtails the phenomenon of inflating citation counts by inflating the number of co-authors. However, it might also unduly misrepresent the contributions of lead co-authors. Therefore, a more accurate measure appears to be the recognition of single and lead authorships in multi-authored work. When multiplying the various citation counts and indices by the individual single/lead-authorship ratios, a far more accurate picture emerges. Taken together, the two adjustments are informative. For example, in a case with an average of five co-authors per publication and very low or even no single/lead authorships, it is hard to determine any individual contribution that stands out. In contrast, in the case of a low average number of co-authors and a high number of single/lead authorships, the high individual contribution would be undeniable. This would still hold in cases with high average numbers of co-authors and high numbers of lead authorships. A case in point is Bertot with an average of three co-authors but a record of 88 % lead authorships. In summary, the number of single and lead authorships along with the average number of co-authors per publication provides meaningful adjustments to otherwise potentially inflated citation counts and indices.

Signature. While these two adjustments already provide the contours of a scholar’s “signature,” another measure helps sharpen its silhouette: As discussed above, when counting the number of single and lead-authored contributions, for example, among the top-ten highest-cited Google Scholar publications per scholar, more evidence of individual impact and contribution emerges. It is remarkable that mean, median, and mode were all at or around 6 for the number of single/lead-authored publications among the top-ten most highly cited publications in the group of most prolific EGR scholars, which indicates a strong signature and individual impact of scholars in this group. On the other hand, low numbers (equal to or lower than three) also point at a relatively weak signature in terms of genuine individual contributions to the earned citation count.

5.3 Making Sense of the Citation Counts and Indices

Multiple Perspectives. In the introduction, performance evaluations and comparisons were portrayed as an inevitable and integral part of academic life. Performance evaluations not only inform hiring, tenure, and promotion decisions, but are also an important control element for assuring the quality of academic outcomes and products. No single yardstick produces reliable and all-encompassing indicators that span multiple disciplines and domains. Even inside a discipline or domain, a single measure would be highly problematic. In EGR, even if multiple criteria such as productivity, Google Scholar citation counts, h-indices, and i10-indices were taken just at face value, the results would still be inaccurate to unacceptable degrees. Adjustments like those discussed above appear to be far more accurate measures. Rather than suggesting that the unadjusted figures simply be replaced by adjusted ones, it is held that all measures considered together provide a better overall grasp of the evaluation at hand than any of them in isolation. Finally, when reviewing the collective work and impact of leading EGR scholars, de-facto standards of inquiry and “good” research also begin to emerge. This will be the subject of a future study.

Other Future Research. Previous studies on the subject were reportedly used in hiring, tenure, and promotion decisions. It is expected that this will also be the case for this report. Future research is intended to establish how the various studies on academic job performance and evaluations have influenced and been used in hiring, tenure, and promotion cases throughout EGR and its contributing disciplines.