This page introduces the review process for measurement.network submissions. If you are not an academic, you might be unfamiliar with the general process behind peer review; if you are an academic, the specific approach to peer review taken here might also be unfamiliar to you.
Review System & Anonymity
The reviews are handled via a ‘HotCRP’ instance hosted here: https://submit.measurement.network/
Reviewers and authors need to create a measurement.network account to be able to log in.
As the review process is ultimately about improvement, and not about accept/reject decisions, the reviews are not blind, i.e., authors are known to reviewers, and reviewers are known to authors.
Objective
The core objective of the review process on measurement.network is not to determine whether a proposed measurement will be accepted at all. Instead, the purpose is to iterate on the plan with the authors until the measurements are in a shape where they can be executed.
This means that the process does not have a fixed deadline per se; however, we ask reviewers to form a first opinion within two weeks of having volunteered for a review. Thereafter, the process should be interactive, with regular communication with the authors.
Hence, as a reviewer, we also ask you to keep an open mind: the objective is, ultimately, to create a measurement that can be run (more) safely. Essentially, ‘No’ is not a rating that can remain in place forever. Furthermore, if a reviewer does not think that the measurements can be run as-is, they need to provide feedback with concrete steps forward to get the measurements into a runnable state.
Process
After the authors have submitted a measurement proposal, reviewers will be approached and asked whether they want to review that measurement. We expect this to take, on average, at most around an hour per week.
Reviewers then read the documents about the measurements provided by the authors, as well as the toolchain the authors want to use. They can also use provided virtual machines to test the measurement toolchain on a smaller scale.
Afterwards, the reviewers fill out a basic review form. This form asks for four things:
1) Should the proposed measurements be run against the Internet in their current state?
- No
- Meaning: I checked, and this should currently not be run; instead, the following changes need to be applied.
- I have concerns, and it should not be run
- Meaning: I have a bad feeling which I can’t pin to anything specific yet, but I can suggest improvements.
- LGTM
- Meaning: I am not completely sure, or might only have glanced over some parts, but this should not be an issue.
- Yes
- Meaning: I checked, and these measurements should work the way they are designed, and I see no reason to suspect that they might cause unexpected harm.
2) Which changes should be made to the measurements? (free text)
3) Which risks/harm do you see potentially coming from the measurements? (free text)
4) Do you think that the measurements will collect meaningful data? (free text)
After the form has been submitted, the reviewers and authors can communicate via the platform (or, if preferred, via any other means). During the discussion, the authors will refine their measurements until all reviewers ultimately converge on at least an ‘LGTM’ rating.