(c) by Harilaos N. Psaraftis
NOTE: Below is the unabridged version of an article published in OR/MS Today on June 7, 2021. The abridged article has a link to the unabridged version, which is identical to the one below (only it is on the OR/MS Today server). Sit down, relax and enjoy.
ABSTRACT
The purpose of this paper is to express some thoughts on what I
term “the citations chase”. By this I mean the importance attached to citations
as an indicator of academic excellence, prominence, impact, influence, and the
like. Bibliometrics is a term used to describe the science of citations, and
activity in this area has grown considerably over the years. The paper has been
triggered by some recent bibliometric studies in which I was lucky to find my
name being included. Even though the exposition in this paper is, to some
extent, subjective and qualitative, an
attempt is also made to address some quantitative aspects of the subject,
which may provide some additional insights on what may lie below the surface of
citations statistics. To that effect, and among some higher order citations
statistics, the “claim to fame ratio” is defined. Some rudimentary analysis of
a 100,000+ scientists database focuses on Logistics & Transportation and on
Operations Research.
1. Introduction
The purpose of this paper is to express some thoughts on what I term
“the citations chase”. By this I mean the importance attached to citations as
an indicator of academic excellence, prominence, impact, influence, and the
like. Bibliometrics is a term used to describe the science of citations, and
activity in this area has grown considerably over the years. There are
citations statistics, more statistics and even more statistics produced,
analyzed and discussed, sometimes ad nauseam. The dictum “publish or perish”
has transcended into “be cited or perish”. Just publishing is not enough.
Triggers for this paper have been some recent bibliometric studies in which I was lucky (and surprised I must say) to find my name being included. These include (in reverse chronological order):
A bibliometric study for the last 20 years of the Networks journal, as described in Golden and Shier (2020).
A bibliometric overview of the last 40 years of the Transportation Research Part B: Methodological journal, as described in Jiang et al. (2020).
A bibliometric overview of the WMU Journal of Maritime Affairs journal since its inception in 2002, as described in Sahoo and Schönborn (2020).
A 100,000+ “highly cited researchers” database, as described in Ioannidis et al. (2019).
An analysis of the top 50 authors, affiliations and countries in maritime research, as reported in Chang et al. (2018).
First of all, a clarification is in order. It feels nice if one’s
paper is cited, and it may feel awful if no one bothers about it. Also, and by
and large, no one can claim to be a serious academic if he or she cannot
display a reasonable number of citations for his or her work.
At the same time, having been in academia for close to 42 years and having studied the subject as best I could (although, I admit, non-exhaustively), I honestly believe that, even though I can understand the importance of citations, this matter is blown out of proportion and distracts from more important things in life, academic and other. It does not do justice to many people, including those who consider citations extremely important. Further, there is sometimes more than meets the eye if one digs deeper into the citations statistics.
Even though the exposition in this paper is, to some extent, subjective and qualitative, an attempt is
also made to address some quantitative aspects of the subject, which may
provide some additional insights on what may lie below the surface of citations
statistics.
The rest of this paper is organized as follows. Section 2 goes
over some citations basics. Section 3 comments on the 100,000+ scientists
citations database, with a focus on Logistics & Transportation and on
Operations Research, and conducts a rudimentary analysis that differentiates
citations statistics based on the population of citations versus the population
of papers. To that effect, and among some higher-order citations statistics, the
“claim to fame ratio” is defined. Section 4 makes some concluding remarks.
2. Citations basics
Citations count how many times a particular paper is cited (or
referenced) by other papers. Whenever a paper is cited, all authors of the
paper get equal credit for the citation, even though some citations engines
also keep track of who is the first author, who is the last author and
obviously if one is the sole author. In some disciplines (but not in all), the last author position is considered honorary. The order of appearance of the co-authors is determined by the co-authors themselves, depending on how the contribution is shared, but there are no unambiguous rules. In an equal-contribution paper, the order is typically alphabetical, even though this favors authors closer to A and penalizes authors closer to Z. They might as well toss a coin. I am lucky that in the Latin alphabet my last name begins with a P, versus a Ψ in the Greek alphabet (the letter just before the last letter, Ω). By the way, Z is the 6th letter in the Greek alphabet. If contributions are not equal, authors may be
displayed by decreasing share of contribution. But there is really no
easy way for an external person to assess or verify contributions, even though
recently publishers ask authors to write a little blurb specifying this kind of
information. Whatever it is, if a paper has N authors, each of the N authors
gets one citation each time the paper is cited by another paper.
According to this system, a sole author who gets one citation for a paper and an author of a paper with N>1 authors that also gets one citation are equal in terms of citations: each of the N co-authors of the N-author paper gets one citation, the same as the sole author. This, in my opinion, defies any reasonable sense of fairness, especially if N is high. Cases in which N is between 2 and 5 are very common, but there are also cases in which N is much higher; N can exceed 5,000, as in an example discussed later.
It is clear that the system can be abused. Assume that we
hypothetically have 10 distinct papers, each written by a distinct sole author,
and all these papers are published and each is cited once, so each author will
get one citation. But if all 10 authors collude to co-author the same 10 papers
and again each paper is cited once, each author will get 10 citations. What has
changed? Only the number of authors. This is of course an extreme example, but in general, more authors tend to bring more citations.
Of course, if citations engines counted citations differently, for
instance by assigning a fractional
citation of 1/N to each of the N authors of a cited paper, as it would seem
the fair thing to do (at least as a first order approximation), things would be
very different. I suspect we would have a completely different
picture as regards what papers are written by whom, let alone who is highly
cited and who is not. But I consider the likelihood of this happening very low.
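The hypothetical 10-author scenario above, and the fractional 1/N alternative, can be checked in a few lines of Python; the numbers are purely those of the thought experiment:

```python
# Thought experiment from the text: 10 papers, each cited exactly once.
n_papers, n_authors = 10, 10

# Case 1: each paper has a distinct sole author.
sole_credit = 1 * 1                              # 1 citation per author

# Case 2: all 10 authors co-author all 10 papers; full credit to everyone.
full_credit = n_papers * 1                       # 10 citations per author

# Case 3: same collusion, but fractional 1/N crediting per cited paper.
fractional_credit = n_papers * (1 / n_authors)   # back to 1 citation

print(sole_credit, full_credit, fractional_credit)
```

Under full crediting, collusion multiplies each author's count tenfold; under fractional crediting, the per-author total is unchanged, which is why fractional counting removes the incentive illustrated above.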
The citing papers may be authored by other authors or by some of
the authors of the cited paper. If one cites his or her own paper, it is a
self-citation, and it is typical to exclude self-citations in any serious
citations count. Citations engines such as Web of Science or Scopus detect
self-citations. There exist pathological cases of some authors getting most of
their citations from themselves.
What are citations important for? They seem to be important
in hiring, promotion, and tenure decisions, not to mention salary negotiations
and other things, such as funding proposals and the like. As mentioned earlier,
they are considered a proxy for academic excellence, prominence, impact,
influence, and the like. Certainly they are not the only proxy, but they are
becoming increasingly important.
According to Google Scholar, which is one of the citations
engines, the most highly cited scientist is Michel Foucault, with more than 780,000
citations. Also, a high number of citations in a paper usually correlates
with journal prestige. As an example, a 1999 paper in Science on
the emergence of scaling in random networks (Barabasi and Albert, 1999) has
more than 37,000 Google Scholar citations (it is interesting that the above
paper was submitted on June 24, 1999 and accepted on Sep. 1, 1999).
When I was a faculty member at MIT (1979 to 1989), I do not recall
citations playing a major role in my promotion to associate professor (1983) or
to the tenure I received (1985). At least I was not explicitly aware of it. I
was aware of the Science Citations Index, but that was about it. My
guess is that citations may have played an implicit role in these decisions,
but I recall that I was more preoccupied with the number of papers I had
written and publishing them in good journals than counting their citations,
which is something I did not do myself. Maybe someone else did this for me, but I was not involved. In retrospect, academic life was much simpler then, even
though these evaluation processes were very elaborate.
The first time I seriously noticed citations was in the context of
some faculty promotions at the National Technical University of Athens (NTUA)
in Greece. And then I started looking at my own citations, which could be found
online. One thing that surprised me was that even though I continued to
publish, most of the citations I was getting were attributed to papers I had
written while at MIT. I do not think this was due to the switch of affiliation,
but I suspect it was due to the switch of the topics I was mainly writing
papers on. At MIT most of my papers were in Operations Research, while at NTUA
I basically switched gears and worked on things like Maritime Safety, Maritime
Economics, Ship Emissions, and all kinds of diverse other topics. Operations Research
ceased to be a main priority for me, even though I attempted to partially
reengage in it during the last 10 years or so. I observed that the papers I published in the new areas received far fewer citations, without knowing why. But
this did not really bother me.
Then at some point came the 5 ½ year period (1996 to 2002) when I
was CEO of the port of Piraeus, a major port in the Mediterranean. I wrote zero papers during that period, even
though I sent an article to OR/MS Today
(Psaraftis, 1998). This whole period, which I consider as very important in my
whole professional life, counted very little or nothing, citations-wise. I
wrote a couple of papers about the port after I left, which attracted very few
citations. I resumed my research activities afterwards, but at no point was I
preoccupied with citations. Getting funding for my projects seemed like a much
more important priority.
3. The 100,000+ scientists database
It was as late as fall 2019 that I became aware of what some
people refer to as the “100,000 highly cited scientists” database, which is described
in Ioannidis et al. (2019). The actual number of people listed in it is
105,000. A colleague told me that I was in that database, which to me was a pleasant
surprise. Still, I did not pay too much attention to who constructed the database, how the database was constructed, and who else was in it.
My interest in that database resumed in 2020 with all the
publicity Professor John Ioannidis of Stanford University (an epidemiologist) received
in the context of his views on COVID-19 (see for instance, Ioannidis (2020)). I
did not know about him until then and I discovered that he had an exceptionally
high number of citations, something like 30 times mine. I then discovered that
he was the main architect behind this database. As many as 40 indices were
collected for each of the scientists in the database and people were ranked
according to a composite index that combines 6 citations metrics. To give a bit
of perspective, Ioannidis is No. 52 on this database, and I am No. 53,647.
Reading the (very interesting) paper that describes the database,
I discovered that some 6,880,389 scientists who had published 5 or more papers
formed the basis of that database. Of those 6,880,389, the top 100,000 (in fact
105,000) were selected and ranked. The database breaks down scientific
disciplines into 22
scientific fields and 176 subfields, according
to the Science-Metrix journal classification system (Archambault et al., 2011). Fields include broad areas
such as Engineering, Economics and Business, Biology, Chemistry and others (22
total), and the subfields go into further detail, for instance Optoelectronics
& Photonics, Urban & Regional
Planning, Distributed Computing, Operations Research, Logistics &
Transportation, and many others (176 total). In that sense, a scientist can be
ranked not only within the entire database, but also within the field and
subfields that are associated with the scientist. To that effect, and for each
scientist, the database provides the most common scientific field and the two
most common scientific subfields of his/her publications, one main and one
secondary. By doing so, one is supposed to be able to compare scientists that
work in the same areas, as it would make little sense to compare an astronomer
with a psychologist, or an economist with a nuclear physicist. Thus, the
database may allow comparisons and rankings of people working on the same
subjects.
Looking at the database and accompanying paper,
I was first impressed by the fact that people in some areas get citation numbers that differ widely from those of people in other areas. For instance, the
99-percentile of citations in Nuclear and Particle Physics is 32,708 (meaning that 99% of scientists in that subfield have at most 32,708 citations). But in Astronomy and Astrophysics the 99-percentile is 16,244,
in Ecology it is 8,302 and in Cultural Studies it is only 588! I found these
numbers, along with the populations of each field and subfield, to be literally
“all over the place”. For some subfields like Drama & Theater, or Music,
there are zero people in the top 100,000+ list. But other areas, like for
instance Applied Physics, are very well represented in that list. This means
that for some disciplines it may be easier to get into the 100,000 list than
for some other disciplines.
Then the next question that came to mind was,
why do people in Nuclear and Particle Physics get so many more citations than
people in Ecology? Is it because these folks have higher IQ, work harder, or
are just more prolific? Is it because it is easier to publish in that area? Is
it because work can be dissected into narrower subjects, each being the subject
of a paper? (for instance, prove a theorem in mathematics, or prove a property
of a sub-atomic particle). Is it because there is more interest in some areas than
in some other areas? Or is it because of another reason? To be honest I do not
know. I suspect that the number of
authors must also play some role, and papers in Nuclear and Particle Physics
typically have many authors. The record is a paper with 5,154 authors,
published in Physical Review Letters. Only the first 9 pages of
this 33-page paper (Aad et al., 2015) describe the research itself, including
the references. The rest of the paper lists the authors.
Judging from my own experience, more citations do not necessarily mean that the author likes a paper better. My favorite among the papers I wrote, Psaraftis (1984), published in Networks, has very few
citations. I like it as much as I did when I wrote it, in spite of the few
citations.
Also, getting more citations does not
necessarily mean that people agree with a paper’s results, or that they even
consider the paper important. One may get many citations if a lot of people
disagree with one’s paper, or if people say that the paper is nonsense.
Perhaps Ioannidis’s most famous paper is the one in which he argued that most published research findings are false (Ioannidis, 2005). That this paper is
being highly cited (more than 9,300 Google Scholar citations) does not necessarily
mean that those who cite it also agree with it. This would amount to their
consent that their own work is probably false. There is no such thing as a
negative citation.
The other thing I noticed, with some surprise,
is that the 100,000+ database omits some people I know and who I thought would
be included. Whether this is due to errors in the database, or these folks not
making the 100,000 person list, I do not know. After all, if the database is
wrong, it is a proof of Ioannidis’s own theory.
Digging deeper into the database, I thought
that some interesting “higher order” citations statistics can be defined. Suppose a scientist has X citations. Then we may want to know the following:
(a) What is the value of Cs, defined as the percentage of X (i.e., within the population of that person’s citations) that is ascribed to papers in which he or she is sole author?
(b) What is the value of Ps, defined as the percentage, within the population of that person’s papers, of papers in which he or she is sole author?
(c) How do the values of Cs and Ps
compare?
Percentages defined by (a) and (b) can be thought
of as second-order citations
statistics, being ratios of first-order citations statistics. They can also be
defined in a broader sense as Csf
(or Csfl), that is, percentages of X within the population of that
person’s citations, that are ascribed to papers in which he or she is sole or
first author (or sole, first or last author). Similar definitions pertain to Psf
(or Psfl).
Why would we want to know these percentages?
Because it may be interesting to know the extent of a particular author’s own contribution
to that person’s collection of papers, and also to his or her citations. Irrespective
of whether or not an author is prolific, how frequently does he or she publish
sole author papers? Or sole or first author papers? And, in cases in which the
last author position is honorary (and this is not universal by any means), how
frequently does this author appear as sole, first or last author?
In that sense, high values of P or C (for each
of their 3 variants) for an author may mean that being prolific or having many
paper citations may mainly be due to the initiative of that author. By
contrast, and even if an author’s first
order statistics are high, low values of P or C (again for each of their 3
variants) may mean that being prolific or having many paper citations may mainly
be due to the initiative of that
scientist’s co-authors.
As regards question (c), we note that the values
of C and P, as defined above,
are not necessarily the same, because they are drawn from different populations.
As an example from the database, a highly cited author was found to be sole,
first or last author in 54% of all his papers. But across his citations, only 42% represent papers in which he was the
sole, first or last author. So for these papers this author was cited less frequently
than he could be found as sole, first or last author across the population of
his papers. In another example, another highly cited author was found sole
author in 25% of his papers. But across his citations, papers in which he was
sole author represented 53% of his citations. So for these papers this author
was cited more frequently than he could be found as sole author in the
population of his papers.
The values of C and P being of interest in and of themselves, for
a specific scientist we can also define what we call the “claim to fame ratio” (for lack of a better name) as the ratio R = C/P
with C and P as defined above (for each of their 3 variants). R can be
considered as a third-order citations
statistic, being the ratio of two second-order statistics. A value of R higher
than 1 may mean that this author’s prominence (or fame) is more visible within
the spectrum of his or her citations than within the range of his or her
papers. The opposite is the case with a low value of R, and especially if R is much
lower than 1. Of course, there may very well be valid reasons for R to be much
lower than 1 on a systematic basis, and calculating this statistic does not necessarily shed light on these reasons.
Note that Rs is undefined if an author has zero sole-author
papers, as in this case both the Cs and Ps percentages
are zero. To my surprise, there are some 895 authors in the 100,000+ scientists
database who have never authored a paper by
themselves.
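To make the definitions concrete, here is a small Python sketch that computes the C, P and R statistics from a list of papers; the paper data and author positions below are invented purely for illustration:

```python
# Sketch of the C, P and R ("claim to fame") statistics defined above.
# Each paper is (position_of_our_author, citations), where the position
# is "sole", "first", "last", or "middle". The sample data are invented.

papers = [
    ("sole",   40),
    ("first",  10),
    ("middle", 90),
    ("last",   20),
    ("sole",    5),
]

def stats(papers, positions):
    """Return (C, P, R) restricted to the given author positions."""
    total_cites = sum(c for _, c in papers)
    sel = [(p, c) for p, c in papers if p in positions]
    C = sum(c for _, c in sel) / total_cites   # share of citations
    P = len(sel) / len(papers)                 # share of papers
    R = C / P if P > 0 else float("nan")       # claim-to-fame ratio
    return C, P, R

Cs, Ps, Rs = stats(papers, {"sole"})
Csfl, Psfl, Rsfl = stats(papers, {"sole", "first", "last"})
print(f"Cs={Cs:.0%} Ps={Ps:.0%} Rs={Rs:.2f}")
print(f"Csfl={Csfl:.0%} Psfl={Psfl:.0%} Rsfl={Rsfl:.2f}")
```

Note that, consistent with the remark above, R is undefined (0/0) when no papers match the chosen positions; the sketch returns NaN in that case.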
Drawing from the 100,000+ scientists database, in the Appendix (Tables 1 and 2) I have calculated the 3 variants of the C, P, and R statistics for those scientists whose primary subfields are either (a) Logistics & Transportation, or (b) Operations Research. Citations statistics exclude self-citations. I confess that the main reason I chose these statistics to appear in this paper is that Logistics & Transportation happens to be my own primary subfield (among the 176 listed), even though they can be calculated for the entire database. A secondary reason is that Operations Research happens to be my secondary subfield (even though, as a result, I do not figure in the list of people whose primary subfield is Operations Research).
Note that the primary and secondary subfield designations (as well as the higher-level field designations) are determined by the database and are only obliquely the result of a scientist’s own choice. This is so because these designations are defined by the predominance of the journals that a given scientist has published in, which are of course his or her own choice.
My own higher-level field turns out to be Economics & Business, which has a 99-percentile citation of 3,719 and is ranked No. 11 among the 22 higher-level fields of the 100,000+ scientists database if one is to rank them by 99-percentile citations. It encompasses some 2,073 scientists in the database (among 108,277 in the 6,880,389 population). As such, and according to this criterion, in the database it ranks lower than fields such as Physics & Astronomy, Biology, and Chemistry, among others, and higher than Engineering, Mathematics & Statistics, and Philosophy & Theology, among others.
As regards the Logistics
& Transportation subfield, and according to the database, this subfield has
a 99-percentile citation of 1,997, listed as No. 139 among the 176 subfields in
the database if one is to rank them by 99-percentile citations. As such, and
according to this criterion, in the database it ranks lower than subfields such
as Entomology, Ornithology, Nursing, Veterinary Sciences, and Applied Ethics,
among others. It ranks higher however than subfields such as Civil Engineering,
Aeronautics and Astronautics, and
General Mathematics. The 99-percentile citation of 1,997 means it is futile to
compare myself with a nuclear or particle physicist, whose 99-percentile
citation is about 16 times higher, and it is probably also futile to compare
myself to an ornithologist, a veterinarian, or an applied ethics scientist.
It turns out that 15,386 people (out of the 6,880,389 population) have Logistics & Transportation as their primary subfield, and that 84 of these people are in the top 100,000+ list. Table 1 (see Appendix) presents the top 50 scientists on this list, together with their three variants of C, P and R values (as per above), and their secondary subfield. Interestingly enough, there are only 16 other people in the entire 100,000+ scientists database (of which 6 in Table 1) who have both primary and secondary subfields the same as mine. This attests to the wide diversity of the database and to the difficulty of comparing scientists with one another. The list of secondary subfields for scientists who have Logistics & Transportation as their primary subfield is also very diverse, and includes Economics, Geography, Urban & Regional Planning, Human Factors, Public Health, and even Fluids and Plasmas (see Table 1).
A similar analysis can be made for the set of scientists who have Operations
Research as their primary subfield. The 100,000+ database lists 319 such
scientists, drawn from a group of 20,758
scientists among the 6,880,389 scientists wider group. As a primary subfield, Operations
Research has a 99-percentile citation of 3,435, being ranked No. 99 among the
176 scientific subfields, that is, higher than Logistics & Transportation. It
ranks however lower, according to this criterion, than subfields such as Economics,
Gerontology, Sports Sciences, Optics, and Dentistry, among several others.
Table 2 (see again Appendix) lists the top 50 scientists with Operations
Research as their primary subfield, together with their three variants of C, P
and R values (as per above) and their secondary subfield. Interestingly enough,
some of these people have Logistics & Transportation as their secondary
subfield, among many others.
Both Tables 1 and 2 confirm the wide diversity in citations statistics, even across classes of scientists who have what may look like very similar expertise (incidentally, I can name at least one person in either table who has passed away). I can find no discernible pattern in either table, particularly as regards a possible correlation between the C, P and R values and the rank of the scientists in the database. Moreover, I find the diversity in these numbers, especially as regards the sole author paper percentages Cs and Ps, to be remarkable.
I have spent some time digging up and calculating statistics of
this sort. I am sure that more can be produced, even though the value of such
analyses (if any) is not immediately clear. Irrespective of this, I can only be
amazed by some of the numbers. For instance, I found most intriguing that a researcher
at the very top tier of the 100,000+ people database published something like
142 papers in 2020 (source: Google Scholar). This amounts to publishing around
12 new papers per month, or a new paper every 2.6 calendar days. I will refer to this researcher (who is very real) by the fictitious nickname of Galore. The question that immediately came to mind is, how much time (days, hours, minutes) did Galore spend per paper? Whenever I write a paper, even if I am not the main author, I do spend some time, so I will never reach these numbers, and I cannot even remotely approach them.
So I tried to make what turned out to be an
impossible calculation: estimate how many hours Galore may
have spent per paper. Below is an
attempt, which obviously involves several subjective assumptions that can be
contested. Changing these
assumptions will certainly change the results, however I suspect that the gist
of the conclusions will not change much.
The total number of hours in a year is 8760 (8784 in 2020 as it
was a leap year). Of course, nobody can
work 24/7. One has to sleep, eat, go on vacation, go fishing on weekends, and
engage in other non-professional activities.
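One way to reconstruct the figure of 4 hours and 23 minutes per paper that follows is sketched below. The specific allowances (working days, working hours per day, share of working time devoted to papers) are my own illustrative assumptions, not figures from the database; changing them changes the result, but not its order of magnitude:

```python
# Rough estimate of hours available per paper for a researcher who
# published 142 papers in one year. All allowances below are
# illustrative assumptions, not figures from the article's sources.

papers_per_year = 142
working_days = 365 - 104 - 12      # minus weekends and some holidays -> 249
hours_per_day = 10                 # a long academic working day
research_share = 0.25              # fraction of working time spent on papers

research_hours = working_days * hours_per_day * research_share
hours_per_paper = research_hours / papers_per_year

h = int(hours_per_paper)
m = round((hours_per_paper - h) * 60)
print(f"{hours_per_paper:.2f} hours per paper (about {h} h {m} min)")
```

Under these assumptions, roughly 620 research hours spread over 142 papers leave about 4 hours and 23 minutes per paper.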
Needless to say, these 4 hours and 23 minutes per paper include, in addition to paper writing, editing, revision and other paper-production tasks, all the actual prior research that led to each of these papers, including data collection, model formulation, computational runs, lab experiments, field surveys, validation, etc. This assumes of course that Galore contributed at least to some of the above activities, even though some of them may have taken place in prior years.
As 4 hours and 23 minutes per paper looks incredibly low, I next looked at the database to compute Galore’s C, P and R statistics, and obtained the following approximate picture:
Cs=9%, Csf =20%, Csfl=38%
Ps=13%, Psf=28%, Psfl=64%
Rs=0.69, Rsf=0.72, Rsfl=0.59
Most C and P percentages being low and all R “claim-to-fame” ratios being less than 1, this gives the clear impression that much (but surely not all) of Galore’s bibliometric performance can be attributed to the work of his or her co-authors. This may also explain the 4 hours and change. Of course, being able to assemble a broad network of scientific collaborators is very important, in and of itself. And for sure these results cannot be generalized to other scientists. However, I wonder how any minimal level of quality control can be maintained, either at the research level, or at the paper production level, or finally at the paper review level, if these numbers are valid.
4. Conclusions
To conclude, even though bibliometrics is an interesting sport, and some interesting information may be revealed if one looks below the surface, in my view the importance of bibliometrics is far overblown. I
may want to write a paper that I want to keep for myself, or share with only a
few people, or never bother publishing it the traditional way. What I think is
more important for any given paper is whether its content is sound, whether it
has improved upon the state of the art, whether it has developed a new method,
whether its results are useful to science, industry or society, whether it has
provided insights, and in general whether it has stimulated more research in an
area. In addition, some parts of an academic’s professional life, such as for
instance spending time with industry, may not be reflected at all in any
citation metric, even though they may be just as important. This paper has also
shown the difficulties in comparing scientists with one another.
Therefore, and as far as I am concerned, we can live without
bibliometrics.
REFERENCES
Aad, G., et al. (ATLAS Collaboration, CMS Collaboration), 2015. Physical Review Letters 114, 191803.
Archambault, E., Caruso, J., Beauchesne, O., 2011. Towards a multilingual, comprehensive and open scientific journal ontology. Proceedings of the 13th International Conference of the International Society for Scientometrics and Informetrics, 66–77.
Barabasi, A.-L., Albert, R., 1999. The emergence of scaling in random networks. Science, Vol. 286, Issue 5439, 509–512. DOI: 10.1126/science.286.5439.509
Chang, Y.-T., Choi, K.-S., Jo, A., Park, H., 2018. Top 50 authors, affiliations and countries in maritime research. Int. J. Shipping and Transport Logistics, Vol. 10, No. 1, 87–111.
Golden, B.L., Shier, D.R., 2020. Twenty-one years in the life of Networks (2000 to 2020). Networks, 1–9. DOI: 10.1002/net.21986
Ioannidis, J.P.A., 2005. Why Most Published Research Findings Are False. PLoS Medicine. https://doi.org/10.1371/journal.pmed.0020124
Ioannidis, J.P.A., Baas, J., Klavans, R., Boyack, K.W., 2019. A standardized citation metrics author database annotated for scientific field. PLoS Biology 17(8): e3000384. https://doi.org/10.1371/journal.pbio.3000384
Ioannidis, J.P.A., 2020. Global perspective of COVID‐19 epidemiology for a full‐cycle pandemic. European Journal of Clinical Investigation. https://doi.org/10.1111/eci.13423
Jiang, C., Bhat, C., Lam, W., 2020. A bibliometric overview of Transportation Research Part B: Methodological in the past forty years (1979–2019). Transportation Research Part B: Methodological 138, 268–291.
Psaraftis, H.N., 1984. The practical importance of asymptotic optimality in certain heuristic algorithms. Networks 14, No. 4, 587–596.
Sahoo, S., Schönborn, A., 2020. A bibliometric overview of WMU Journal of Maritime Affairs since its inception in 2002. WMU Journal of Maritime Affairs. https://doi.org/10.1007/s13437-020-00197-w
APPENDIX
Table 1. Top 50 scientists whose primary subfield is Logistics & Transportation, with their C, P and R statistics and secondary subfield.
Table 2. Top 50 scientists whose primary subfield is Operations Research, with their C, P and R statistics and secondary subfield.