Introduction
Research evaluation systems are designed to measure performance, guide decisions, and support the development of knowledge systems. At their core, they rely on indicators intended to capture meaningful aspects of research activity.
However, evaluation systems do not operate in isolation.
They exist within environments where researchers, institutions, and policymakers respond to the signals these systems produce. Over time, this interaction creates feedback loops—cycles in which evaluation influences behavior, and behavior in turn reshapes evaluation outcomes.
These feedback loops are not inherently problematic. In well-designed systems, they can reinforce desirable practices and improve alignment between measurement and performance.
Yet when evaluation systems rely on simplified or rigid metrics, feedback loops can produce unintended consequences.
Instead of improving research, systems may begin to optimize for their own indicators.
This is where metric gaming emerges—not as an anomaly, but as a structural outcome.
1. Understanding Feedback Loops in Evaluation
A feedback loop occurs when outputs of a system influence the inputs that generate future outputs.
In research evaluation, this can be represented as:
Evaluation criteria → Research behavior → Measured outcomes → Validation of criteria

When this loop is aligned with meaningful research practices, it can support quality improvement.
However, when the evaluation criteria are incomplete or overly narrow, the loop begins to amplify distortions.
What is measured becomes what is pursued.
What is rewarded becomes what is repeated.
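The loop sketched above can be made concrete with a toy simulation. Everything here is an illustrative assumption rather than an established model: the update rules, the `adaptation` and `reinforcement` rates, and the premise that strong measured outcomes reinforce the criteria that produced them.

```python
# Toy simulation of the loop: criteria -> behavior -> outcomes -> validation.
# All update rules and rates are illustrative assumptions, not an
# established model from the evaluation literature.

def run_loop(rounds=10, adaptation=0.5, reinforcement=0.3):
    weight = 0.5   # how strongly the criteria emphasize one measurable indicator
    effort = 0.3   # share of research effort directed at that indicator
    history = []
    for _ in range(rounds):
        # Research behavior: effort drifts toward what the criteria reward.
        effort += adaptation * (weight - effort)
        # Measured outcome: tracks effort spent on the measured indicator.
        outcome = effort
        # Validation: strong measured outcomes reinforce the same criteria.
        weight += reinforcement * outcome * (1.0 - weight)
        history.append((round(weight, 3), round(effort, 3)))
    return history

for step, (w, e) in enumerate(run_loop(), start=1):
    print(f"round {step}: criteria weight={w}, effort share={e}")
```

Run long enough, the loop locks in: criteria weight and effort share converge toward the single measured indicator, which is exactly the amplification of distortions described above.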
2. From Incentives to Optimization
Evaluation systems do more than assess performance—they create incentives.
Researchers and institutions adapt to these incentives by aligning their strategies with measurable outcomes.
This adaptation is not necessarily unethical. It is often rational.
However, over time, rational adaptation can shift into optimization:
- prioritizing quantity over quality
- targeting journals or outputs that maximize scores
- structuring collaborations for metric advantage
- focusing on short-term measurable impact
In such contexts, research behavior is no longer guided by epistemic goals, but by metric efficiency.
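The shift from adaptation to optimization can be shown with a small allocation model. The functions and weights below are hypothetical choices for illustration only: quality is assumed to depend mainly on depth of work, while the metric mostly counts volume.

```python
# Illustrative effort-allocation model (all functional forms and weights
# are assumptions): a researcher splits a fixed effort budget between
# depth (epistemic quality) and volume (countable output).

def quality(depth, volume):
    # Epistemic value: driven by depth, with diminishing returns.
    return depth ** 0.5

def metric_score(depth, volume):
    # Indicator: counts outputs, only weakly sensitive to depth.
    return 1.0 * volume + 0.2 * depth

def best_split(objective, budget=1.0, steps=100):
    # Grid search over the share of the budget devoted to depth.
    splits = [i / steps for i in range(steps + 1)]
    return max(splits, key=lambda d: objective(d * budget, (1 - d) * budget))

print("depth share maximizing quality:", best_split(quality))
print("depth share maximizing metric:", best_split(metric_score))
```

Under these assumptions the two objectives pull in opposite directions: the quality-maximizing allocation puts effort into depth, while the metric-maximizing allocation puts it all into volume. Nothing unethical occurs; the rational response to the indicator simply diverges from the epistemic goal.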
3. Metric Gaming as a System Outcome
Metric gaming is often framed as individual misconduct. In reality, it is frequently a predictable response to system design.
When evaluation systems:

- rely on limited indicators
- apply fixed thresholds
- reward specific measurable outputs

they create environments where strategic behavior is not only possible but incentivized.
Gaming can take subtle forms:
- salami slicing of publications
- excessive self-citation or network citation
- selective collaboration patterns
- emphasis on visibility over substance
These behaviors do not necessarily violate rules.
They exploit them.
4. The Illusion of Improvement
One of the most critical risks of feedback-driven systems is the illusion of progress.
As researchers optimize for metrics:
- scores improve
- rankings shift
- output volumes increase
From the perspective of the system, this appears as success.
However, these improvements may reflect alignment with the metric—not genuine advancement in research quality.
This creates a self-validating system:
Better metrics → perceived improvement → reinforcement of the same system

The system learns, but it learns the wrong lessons.
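This self-validating pattern can be illustrated in a few lines. The numbers are purely hypothetical: underlying quality is held constant while gaming effort grows, yet the reported score climbs every year.

```python
# Stylized illustration of the self-validating loop: the reported score
# rises while underlying quality does not. The linear relationships and
# constants below are assumptions chosen for clarity.

def reported_score(quality, gaming_effort):
    # The indicator conflates genuine quality with gameable volume.
    return quality + 2.0 * gaming_effort

quality = 1.0                      # held constant: no real improvement
for year in range(1, 6):
    gaming = 0.1 * year            # gaming intensifies as actors adapt
    print(f"year {year}: score={reported_score(quality, gaming):.1f}, "
          f"quality={quality:.1f}")
```

From inside the system, the rising score reads as progress; only a measure independent of the gamed indicator would reveal that quality never moved.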
5. Feedback Loops and System Rigidity
In static or poorly governed systems, feedback loops can become locked in.
Once certain behaviors dominate:
- indicators begin to reflect those behaviors
- policies reinforce existing patterns
- deviations become penalized
This reduces diversity in research approaches and limits innovation.
Over time, the system becomes less responsive to new forms of knowledge and more focused on maintaining its internal consistency.
6. Designing Feedback-Aware Systems
To mitigate the risks of metric gaming, evaluation systems must be designed with feedback awareness.
This involves:
- Multi-dimensional measurement: reducing reliance on single indicators and capturing diverse aspects of research.
- Contextual interpretation: evaluating outputs within disciplinary and institutional contexts.
- Dynamic adjustment: updating indicators and criteria to reflect evolving research practices.
- Transparency of incentives: making clear what behaviors are being rewarded, and why.
- Monitoring behavioral effects: assessing how evaluation criteria influence research strategies over time.
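As one hedged illustration of multi-dimensional measurement, a composite score can apply a saturating transform to each indicator so that inflating any single dimension yields diminishing returns. The indicator names, weights, and the choice of `log1p` below are hypothetical design decisions, not a prescription.

```python
# Sketch of a feedback-aware composite score. The design is hypothetical:
# several indicators, each passed through a saturating transform so that
# optimizing any single dimension pays off less than a balanced profile.

import math

def composite_score(indicators, weights):
    # log1p dampens extreme values on any one axis, reducing the payoff
    # of concentrating effort on a single gameable indicator.
    return sum(w * math.log1p(v) for v, w in zip(indicators, weights))

weights = [0.4, 0.3, 0.3]   # e.g. outputs, peer assessment, societal use
balanced = composite_score([5, 5, 5], weights)
gamed = composite_score([15, 0, 0], weights)   # same total, one dimension
print(f"balanced profile: {balanced:.2f}, gamed profile: {gamed:.2f}")
```

In this sketch a balanced profile outscores a single-dimension one of equal total volume, which is the design intent: the scoring rule anticipates the behavior it will induce rather than merely tallying outputs.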
Feedback should not be eliminated.
It should be governed.
7. From Measurement to System Responsibility
The presence of feedback loops shifts responsibility from individual actors to system designers.
If systems consistently produce gaming behaviors, the issue is not only with users—it is with the structure of incentives.
This requires a broader perspective:
- evaluation systems are not neutral observers
- they are active participants in shaping research behavior
Recognizing this role is essential for responsible system design.
Conclusion
Feedback loops are inherent to research evaluation systems. They link measurement to behavior, and behavior back to measurement.
When properly aligned, they can support meaningful improvement. When poorly designed, they can lead to metric gaming, distorted incentives, and the illusion of progress.
The challenge is not to eliminate feedback, but to understand and govern it.
Evaluation systems must be designed not only to measure research, but to anticipate how they will shape it.
Only then can they avoid learning the wrong lessons.

