On Research Publishing


No polemics, ponderousness or pontification (three down on my list of to-be-used-at-least-once-before-you-die words). Just a few observations from my publishing experience over the past two years: two years after I read and used the definitive guide on how to publish.

If you are into research publishing like me, here is a definitive, algorithmic approach to research publishing in top journals of your field of interest — How to Publish in Top Journals. Indulge in the collection after reading this post. Actually, don't bother. You may go there now and come back to read this before you retire.

I was introduced to that collection two years back, through another collection of "how to publish" links (here is a related one). If you take my word at internet value, I assure you that collection of tips is an important guide, a lesson, I could say, for beginners in research publishing. Yes, I was already aware of the rudiments of publishing from my limited experience and peer discussions, but that contextual collection was an eye-opener. The logic outlined in those How to Publish tips boils down to this: anything that passes peer review as correct, incremental and non-controversial research is publishable.

That statement may state the obvious for some of us and seem innocuously glib to a few others. The How to Publish algorithm is not against quality. It just earnestly suggests ways to increase your number of publications within the present peer-reviewed publishing system. You can increase your publication count by practicing the algorithm, provided you have a few things assured. You should be ready to put in the hard work, meaning long hours at your desk. You should be smart (or at least lucky) enough to pick a field of research served by enough journals of varying quality [*]. And of course, somebody, graduate students for instance, should continuously keep feeding research results onto your writing platter.

(* — check the Quantifying Research Quality using Article Level Metrics essay to get the hang of journal impact factors and the hierarchy of journals in a field.)

In short, any field of research is served by journals that can be grouped into top, middle and bottom tiers (more intermediate tiers are possible). Usually the top two or three journals stand out above the rest in any field of study. Publishing only in them ensures a certain credibility and quality, but also fewer papers over a given period. Ideally, over a long academic career, such a publication record should bode well.

In his Writing a Paper (2004) [pdf], Whitesides famously asserted: "If your research does not generate papers, it might just as well not have been done. 'Interesting and unpublished' is equivalent to 'non-existent'."

As the living person with the highest h-index in chemistry, perhaps he can make such an assertion. What I understand from his statement is: don't do useless research. Start a research project with a hunch that the outcome will be interesting enough to publish. If along the way you realize it is going nowhere, change or abandon the track. Try to do interesting, hence publishable, research. Note that this interpretation imposes nothing on quality or quantity. Do publishable (hence cite-able) research in your time frame and publish it as any number of papers. That is what I understand from that Whitesides statement.
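Since the h-index keeps coming up in these discussions, here is a minimal sketch of how it is computed. The citation counts below are made up purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    h = 0
    # Sort citation counts from highest to lowest, then walk down
    # the list while the i-th paper still has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Note how the metric rewards a sustained body of cited work over a single highly cited paper, which is part of why a large publication count matters under this kind of accounting.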

But the same Whitesides statement, in its purport, is often misinterpreted among research peers with the sole aim of maximizing publications. The logic goes this way: your research should generate papers, as Whitesides, the been-there-done-that authority, asserts. Since the current research (due to the possible mediocrity of its results) seems not to generate papers by getting reported in these journals, let me try to publish it in those other journals. If not, in yet another journal of decreasing quality. Until my research generates papers, as Whitesides, purportedly, has pronounced.

The entire logic of research seems flipped: From "do research that should generate papers", the perspective is corkscrewed to "generate papers by doing research".

Importantly, as I keep rereading the How to Publish algorithm, I find that this logic can indeed work. In our times of doing research for publishing, how to do research has effectively become a separate activity from how to get it published. As the algorithm asserts in its introduction, "[t]here is no such thing as good luck in publication. Painstaking work, coupled with careful risk taking, is required for success." Success, mind it. Not necessarily significance.

Having used the How to Publish algorithm with my research publications during the last year, I should agree, perhaps with remorse, that those suggested tips and their logic were pretty damn right. So much so that at least three of my colleagues have asked me, on different occasions in the past few months, how come I have suddenly pumped up my publication rate in the last two years. My students seem to have noticed it too, but they safely blame it all on my hard work. Only, I don't.

I was already aware of the general warning from the algorithm that "[t]here might be biases against you based on race, sex, nationality, or schooling" and that "[i]f you suspect discrimination, check the past issues of the journal in question. This will reveal surprising insights". But the more important lesson I learnt is altogether a different one: that I should not judge my own research. Yes, I may realize that the particular paper I have prepared from the research dish recently served up by my graduate student is relatively incremental (if not 'excrement'al) junk, but if I believe I do research to publish, I should just submit it anyway. Only, be smart in choosing the journal. The easiest option is to start somewhere in the hierarchy and keep trying the next best journal until you succeed. No sir, I am not suggesting this. It was suggested to me. And it works.
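Since the whole thing is billed as an algorithm, the cascade strategy above can be sketched as one, tongue firmly in cheek. The journal names, acceptance rates and the `submit` stand-in are all hypothetical; real peer review takes months per iteration, not function calls:

```python
import random

def submit(paper, journal):
    # Stand-in for peer review: acceptance simply gets likelier
    # as journal "standards" relax down the hierarchy.
    return random.random() < journal["acceptance_rate"]

def cascade(paper, journals):
    """Try journals from the top tier down until one accepts."""
    for journal in journals:  # assumed sorted best-first
        if submit(paper, journal):
            return journal["name"]
    return None  # even the bottom tier said no
```

The sketch makes the point of the essay rather literally: the loop terminates with a publication for almost any paper, as long as the hierarchy is deep enough.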

There is a flip side. If you have a great revolutionary idea early in your career, shelve it. Yes, you heard that right. Read the algorithm where it says do not write papers with breakthrough ideas at first (point 22 here). Of the papers I submitted in the last two years, whose count ran into two digits, the only rejection I have had so far is the one in which I thought I advanced something fundamental. I submitted it to a top journal, then the next top one, and promptly got shot down in both places with a fifty-fifty review (one accept, one reject, the editor deciding not to go ahead). I don't want to try it with an obscure journal.

If I were to believe the logic outlined in the algorithm, the paper should stay rejected: what I tried to advance is technically sound (as the reviewers agreed) but attempts to replace some dearly held ad hoc notions in porous media. In other words, reasonably early in my career, I am trying to publish something controversial. It is bound to be toast.

Disturbingly, the algorithm warns in point 22 here: "If you do advance breakthrough ideas your papers will be rejected, and they might reappear in a modified, clearly written paper by someone else later."

I reckon that if I am lucky enough to endure the above outcome, the algorithm gives me hope. I should be able to publish the same idea, perhaps the same manuscript, in enough years, once I am recognized as an expert with a triple-digit publication count in my kitty. The day I test this and am proven right should be the day I retire from research publishing. That must be one reason why I am storing the manuscript, for now, on a cold dry hard disk.

I am a young researcher, not a cynical one.