Paper Wizard - between mediocrity and gold
3 September 2025
As artificial intelligence continues its relentless march toward world domination, my laboratory decided to join the party. With two fresh manuscripts hot off the proverbial press and a dose of scientific curiosity mixed with a fair amount of skepticism, we ventured into the new world of AI-powered peer review.
Enter Paper Wizard, the digital oracle promising expert manuscript reviews in a mere 15 minutes. Boasting an impressive client roster including NASA, Harvard, and the WHO, this AI reviewer looks impressive on paper. After all, if it's good enough for rocket scientists and epidemiologists, surely it could handle our humble laboratory offerings.
Round one: the accidental genius

Front page
For our first manuscript, Paper Wizard delivered, true to its word, a comprehensive 15-page review in under 15 minutes, complete with summary, synopsis, major critiques, minor comments, and a framework for addressing each concern. The formatting was good, the language eloquent, and the overall presentation... well, let's just say it had the unmistakable aura of an overly zealous postdoc armed with a thesaurus and an unfortunate abundance of enthusiasm.

Content summary
The AI had clearly read our paper with the meticulous attention of someone who had consumed far too much espresso and decided that everything needed commentary. However, here is where the plot thickens: buried among the verbose critiques were several genuinely insightful suggestions that made us pause, scratch our heads, and ultimately return to the laboratory bench. The AI had identified gaps in our experimental logic that indeed warranted investigation. This alone made the entire process worthwhile. Even better, the experiments suggested by our silicon-based critic yielded remarkably interesting and unexpected results.
Of course, the AI couldn't have known these experiments would be as fruitful as they turned out to be. It simply looked for holes in our chain of reasoning. The fact that plugging these holes led to exciting discoveries owed as much to pure scientific fortune as to the AI.
As a side point, I was surprised to find that the review contained no list of typos, grammar problems, or ambiguous phrasings to resolve. Picking out such things meticulously is the one task AI is genuinely good at. This was, in part, what I was looking for: as promising as the review was in terms of the scientific results, such a list would be a very welcome addition for giving a publication its final polish before submitting it to a journal.
Round two: the overzealous graduate student strikes
Emboldened by our initial success, we fed our second manuscript to the digital beast, expecting similar magic. The processing time remained impressively swift, and we received another lengthy review.

Example of one of the minor comments
Unfortunately, this time our AI reviewer seemed to have confused thoroughness with usefulness. The majority of comments focused on supporting analyses while completely missing the primary experimental evidence, rather like critiquing the frame while ignoring the Mona Lisa. For example, the AI exhibited a peculiar fixation on statistical approaches supporting conclusions that were already clearly demonstrated through direct experimental observation.
This highlighted a fundamental limitation that is so obvious, yet gets forgotten rather often: AI cannot think, at least not in the way seasoned researchers do. It processes text eloquently and identifies potential inconsistencies, but it lacks the scientific intuition to distinguish between crucial experimental flaws and methodological nitpicking. A human reviewer with genuine expertise would recognize when supporting evidence is just that, supporting, rather than fundamental to the argument.
The verdict: silicon dreams and carbon reality
For the first round we were able to use a free credit, and the second ran on a referral credit (see below). After that you need further referral credits, or you have to fork out a moderate amount of money. So, would I use Paper Wizard again? Perhaps. The first review's unexpected dividends suggest there's genuine value in having an AI systematically probe our experimental logic, even if it occasionally sounds like an overcaffeinated graduate student with boundary issues. The question remains whether the cost justifies the potential benefits, particularly given the hit-or-miss nature of the insights provided.
The future of scientific publishing may indeed include artificial intelligence, but based on what we have seen, and for our approach to publishing, human wisdom remains irreplaceable for now. Though I suspect the AI overlords are working on that too.
However, one aspect of Paper Wizard's business model deserves special mention for its breathtaking audacity: their promotional credit system. They offer referral credits that expire after just one month. This means you can only benefit from referring colleagues if their and your papers are already near submission, a policy so counterproductive it borders on comedic.
So, if you want to help us, please contact us and we will send you a referral link. If you use it, and we happen to have a paper ready in time, then maybe, just maybe, we might strike gold again.