I have had enough of the peer-review process, both as a reviewer and as an author. If you Google “alternatives to peer review,” you will see that I am not alone. Many ideas have been proposed, and even implemented, to mitigate the pitfalls of this outdated approach. My goal here is not to analyze these methods, but simply to rant about my experience with peer-review ineffectiveness in the world of GNSS.
I must have reviewed over 60 manuscripts in the last 5 years. With each review requiring between 2 and 10 hours, I think I have invested a significant amount of time and energy in improving the quality of scientific publications. While my standards as a reviewer were originally quite high, they gradually lowered as I reviewed countless low-quality articles. I initially convinced myself that my constructive criticism was in fact an educational experience for students. Still, I couldn’t really understand why supervisors could not offer better guidance, and why they even agreed to put their names on such manuscripts. I eventually understood that quantity had gained precedence over quality, and perhaps for a reason: a long list of publications helps researchers gain credentials and increases their chances of being cited, all of which leads to better funding opportunities. “Publish or perish” might well describe the academic world.
My most unpleasant experience as a reviewer occurred when I received a revised manuscript in which the authors had simply ignored almost all of the comments and suggestions I had devoted hours to formulating. Authors might not necessarily agree with every comment made by reviewers, but they should at least show some respect and acknowledge the work that went into the review. It is a clear insult to do otherwise. Editors also have more and more trouble finding reviewers for papers, and understandably so: nobody wants to waste precious time on non-innovative, poorly written manuscripts from arrogant authors.
My experience as an author of peer-reviewed papers is more limited: I typically refuse to publish the same material more than once, even though my work does not always get the exposure it deserves at conferences. Still, I have had my share of frustrations. One reviewer recommended outright rejection of what I consider to be my most innovative paper… thankfully, the other two reviewers weren’t as cynical and counterbalanced the decision by suggesting only minor revisions. I realize that I have at times been a ruthless reviewer myself, but I believe that I always gave proper credit to work that truly deserved it.
Another disappointing experience occurred recently when the review process for my manuscript was subject to unexpected delays. Coincidentally, another paper with similar content was submitted to the same journal shortly before my review came back. Even though I can’t rule out the possibility that these authors came up with the same innovative idea as mine independently, with strangely coincidental timing, I can’t rule out either the possibility that reviewers (and most likely researchers from their network of connections) shared, used and even re-published material that was under review. This is truly unethical, and almost impossible to prove, let alone condemn, under an anonymous review process.
The peer-review process used to be an opportunity to learn about innovative algorithms, provide input to improve scientific methods and receive useful feedback to fine-tune our own work. Unfortunately, it seems to get harder and harder to derive real benefits from this process, and I will certainly stay tuned to the latest developments in this area. Meanwhile, I still have to decide what to do with my latest draft...