Psychology is a very young science and there is much still to be learnt; in fact, most of psychology is not yet well understood. Progress towards better understanding is made through research. This is not simply an academic affair: better understanding of psychology will ultimately improve lives, as trauma, depression and other conditions become better understood, and better treated.
By research I mean the collection of new information. This is not the kind of research one might do for an essay, searching Google, but the gathering of new information from the real world through surveys, experiments and the like. This is why research skills are so important to psychologists. To be a good psychologist one must, at the very least, be able to understand research. Ideally, good psychologists would also be skilled at doing it. Without such research skills it is impossible to be truly evidence-based, and psychologists who are not evidence-based are at serious risk of falling into quackery and pseudoscience. This, unfortunately, characterizes much of current Ecuadorian psychology.
It is therefore imperative for Ecuadorian psychologists to become research-active. This is not easy, as many undergraduate psychology degrees in Ecuador lack the capacity to offer strong training in research skills, and there are few postgraduate options. This is unfortunate, as in other countries, such as the UK, psychology graduates are usually the most research-skilled graduates of all. But the situation here in Ecuador will not improve unless we push for it. And the fact is that there are some psychologists here with strong research skills. At the Quito Brain and Behavior Lab we have lots of collaborations with professors both inside and outside Universidad San Francisco de Quito. We also have lots of research assistants and interns, mainly doing unpaid work. But by collaborating and assisting they are all learning about research and developing skills. The message is that you don't necessarily need formal training to get into research.
Research, once performed, must then be published. Research that is not published to a global audience is not of much use. And publication should be in journal articles. I'll discuss the question of where to publish, i.e. which journals, in the next blog post. But for the moment it is important to understand that journal articles are the basic means of communicating high-level research. Research published by the London School of Economics, based on a survey of social scientists, suggested that about 63% of all publications are journal articles and only about 17% are books. The remaining 20% is made up of various other outputs, such as 'Working and Discussion' papers. For scientists, including psychologists, I suspect the percentage for journal articles would be even higher. Psychology
students should be reading journal articles, and professional psychologists
should be writing journal articles.
Academic journals are the most important source of information for academics, including psychologists.
Journals are the basic source because they are peer-reviewed. That means they go through a tough process, being reviewed by anonymous experts from around the world. The majority of articles submitted to journals are rejected, and those that are accepted are usually accepted only after being revised in response to the anonymous experts' criticisms, sometimes over several rounds of revision. It is this strict checking process that makes journal articles more reliable than any other source of academic information. Newspapers and magazines are written for entertainment, not information, and books in general are less trustworthy than many people realize. It is now recognized, for example, that many, many psychology textbooks contain grossly incorrect descriptions of basic psychological studies and phenomena.
Journals
are therefore the basic vehicle of academic research. And researchers are judged
mainly on their journal articles. I did my PhD at the Institute of Neurology in London. At the time I was there, the head of the Institute was Professor David Marsden. He was a remarkable scientist-practitioner, and it is said that from the date he graduated as a doctor to his death at age 60, he published 740 journal articles, 208 book chapters, 76 reviews and 100 research letters: an average of one publication every 12 days. This is a truly exceptional research output; some academics never publish anything in their entire careers. It is notable too that David Marsden was also a clinician. He provides a fine example that the clinical and the academic are not polar opposites. It may be that clinicians working solely in clinical practice don't need to do research, but clinicians who also work at a university are consequently also academics. And academics should be involved in research. The point is that academics, whether clinical or not, are judged on their research output, chiefly their journal articles.
So how do
we judge research output? People used to talk about impact factors of journals,
as a proxy measure for the quality of publications. But the academic world has largely moved on from impact factors. Nowadays individual journal articles are all digitally linked together on the internet via their reference lists, so it is easy to see how many times an article has been cited by other journal articles or books. Several different databases compile these citation counts and display them publicly. The
most obvious example is Google Scholar. So rather than look at the impact factor
of a journal where a piece of research was published, we can actually see how
useful the individual article in question has been. And there is a lot of
variation. Some journal articles never, ever, get cited by anybody. Some fly and
are cited hundreds of times each year.
So, the
real measure of an article’s success, and hence the author’s success, is how
many times it has been cited. This is much better than simply looking at the
number of articles published, or the impact factors of the journals, or the related question of whether a journal is ranked Q1, Q2 and so on. Now we can get a decent estimate of the
quality of any individual academic’s research output.
We can look
at how many times an academic has been cited. That is a good indication of
their success in research. An interesting and commonly used metric is the h-index.
This is a single number that captures both the number of articles somebody has
published, and how well cited those articles have been. It is the largest number h such that the author has published h articles that have each been cited at least h times. For example, at the time of writing I have about 45 published journal articles. Some are highly cited and some are not; overall I have been cited 1,661 times, and I have 18 articles that have each been cited at least 18 times. My h-index is therefore 18.
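To make the calculation concrete, here is a minimal sketch in Python of how an h-index can be computed from a list of per-article citation counts (the counts below are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h articles have each been cited at least h times."""
    counts = sorted(citations, reverse=True)  # most-cited articles first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this article still supports a larger h
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's articles.
print(h_index([25, 8, 5, 3, 3, 2, 1, 0]))  # -> 3: three articles cited at least 3 times each
```

In practice you don't compute this yourself: services such as Google Scholar report it automatically, but the sketch shows why the number rewards both productivity and impact.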
This
h-index is now a common way to evaluate academics. I've been to international conferences where, as the next guest speaker is introduced, their h-index is publicly announced. By this metric, the most successful psychologist in the world has been Herb Simon, who, at the time of writing this, has been cited 323,706 times. He has an h-index of 172. That puts my h-index of 18 into perspective. The research described
above from the London School of Economics suggested that in social sciences in
general the h-indices of university professors are quite low, ranging from an
average of about 2.2 for Law professors to 7.6 for Economics professors. No
data was given for Psychology professors, but I'd guess that it may be higher,
given the significant research culture in Psychology.
So why research and publish? Because it makes psychology
better and makes us better psychologists. And it's what academics do. It's that simple.
In the next blog post I'll deal with the issue of where to publish.