OmniCalQualityMetric
Aggregate quality metrics to compare similar OmniCal calibrations.
Note:
- The calculated values should only be used to assess the same or very
similar calibration setups. For example, adding or removing a camera or
projector will make the quality metrics technically incompatible with
earlier collected metrics.
However, there is no mechanism to stop users from making these
comparisons. Such a mechanism could be based on known information like
the list of cameras, projectors and their resolutions. But it might make
the OmniCal Score less useful.
Overall, users mainly want to know: Has it worked? The OmniCal Score is
slightly more informative than raw RMS errors, but as a single value it
is never enough to assess the result. Therefore Disguise always requires
users to check the actual projection coverage and alignment in the real world.
- All aggregate values are calculated using a normalised data range.
This reduces systematic bias when, for example, some projectors have a
different pixel resolution than others.
The returned value will be multiplied by the highest pixel width.
- OmniCal provides a single RMS error value for each camera and projector.
This RMS error is already an aggregate value over all inlier points.
Gathering another aggregate over several such aggregate values is
mathematically a bit questionable, see:
https://stats.stackexchange.com/questions/490683/aggregating-error-metrics-like-rmse-for-multiple-time-series
In practice we do need some kind of aggregate, though, and we don’t want
to pass in the whole set of all individual errors. This is not only
impractical, but it would also lead to bias towards e.g. cameras that
have more inlier data points.
Taking the mean over a list of RMS errors would dampen the impact of
outliers / high errors, because better RMS values compensate for them:
mean(rms1 … rms8) < rms(err11 … err8N)
The RMS over a list of RMS errors is typically higher than the RMS over
all individual errors:
rms(rms1 … rms8) > rms(err11 … err8N)
However, the “RMS of RMS errors” stays much closer to the “RMS of
individual errors”, and it reflects high individual errors back to the
user more faithfully.
So we will use RMS(RMS) as the more reliable aggregate metric.
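The comparison between the three aggregation choices can be illustrated with a small numeric sketch. This is plain Python, not Disguise code; the error values are synthetic and chosen so that the poorly calibrated device contributes fewer inlier points than the others:

```python
import math

def rms(values):
    """Root mean square of a list of error values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Synthetic inlier reprojection errors (in pixels), one list per device.
# Three well-calibrated devices with many inliers, plus one poorly
# calibrated device with fewer inliers.
errors_per_device = [
    [0.5, 0.5, 0.5],
    [0.5, 0.5, 0.5],
    [0.5, 0.5, 0.5],
    [2.0, 2.0],
]

per_device_rms = [rms(errs) for errs in errors_per_device]
all_errors = [e for errs in errors_per_device for e in errs]

mean_of_rms = sum(per_device_rms) / len(per_device_rms)
rms_of_all = rms(all_errors)
rms_of_rms = rms(per_device_rms)

print(f"mean of RMS: {mean_of_rms:.3f}")  # 0.875 -- masks the bad device most
print(f"RMS of all:  {rms_of_all:.3f}")   # 0.965 -- biased by inlier counts
print(f"RMS of RMS:  {rms_of_rms:.3f}")   # 1.090 -- surfaces the bad device
```

Here the mean of the per-device RMS values hides the poorly calibrated device the most, while RMS(RMS) weighs every device equally and therefore reports the highest (most pessimistic) value.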
Category: Omnical
Base class: _BlipValue
Properties
cameraAggregateRMS : float (Read-Only)
Aggregate reprojection RMS error (in pixels) across all cameras. For information only: good values do not guarantee success. A value smaller than 1 is ideal.
UserName: Aggregate camera RMS error
omniCalScore : float (Read-Only)
OmniCal score. Based on aggregate projector error and quality/practicality assessment of results. Only this score should be used for comparisons between calibrations.
UserName: OmniCal score
projectorAggregateRMS : float (Read-Only)
Aggregate reprojection RMS error (in pixels) across all projectors. For information only: good values do not guarantee success. A value smaller than 1 is ideal, depending on projector resolution and audience viewing distance.
UserName: Aggregate projector RMS error