ROC curves and cost curves are two popular ways of visualising classifier performance, finding appropriate thresholds according to the operating condition, and deriving useful aggregated measures such as the area under the ROC curve (AUC) or the area under the optimal cost curve. In this paper we present new findings and connections between ROC space and cost space. In particular, we show that ROC curves can be transferred to cost space by means of a very natural threshold choice method, which sets the decision threshold such that the proportion of positive predictions equals the operating condition. We call these new curves rate-driven curves, and we demonstrate that the expected loss as measured by the area under these curves is linearly related to AUC. We show that the rate-driven curves are the genuine equivalent of ROC curves in cost space, establishing a point-point rather than a point-line correspondence. Furthermore, a decomposition of the rate-driven curves is introduced which separates the loss due to the threshold choice method from the ranking loss (Kendall τ distance). We also derive the curve in cost space that corresponds to the ROC convex hull; this curve is different from the lower envelope of the cost lines, as the latter assumes that only optimal thresholds are chosen.

ROC curves (… 2000; Fawcett 2006) constitute a popular and highly useful graphical representation of classifier performance. A point on a ROC curve visualises the true and false positive rates achieved by a particular decision threshold. A monotonic curve is obtained by sweeping through all possible decision thresholds, and the area under the curve (AUC) corresponds to the proportion of correctly ranked pairs of positive and negative examples. ROC curves can be used to identify optimal thresholds that yield points on a ROC curve's convex hull, as well as regions where one classifier dominates another.

Classification loss at a particular decision threshold is not visualised directly in ROC curves, but has to be inferred from the true and false positive rates and the operating condition. Operating conditions (class and misclassification cost distributions) manifest themselves as straight isometrics in ROC space. Cost curves were proposed by Drummond and Holte (2000, 2006) as an alternative to ROC curves that explicitly visualise loss on the y-axis against the operating condition on the x-axis. For example, if we fix the decision threshold and the class distribution and vary the relative misclassification cost c of one of the classes, then loss varies linearly with c and we obtain a cost line. Since a fixed threshold corresponds to a point in ROC space, this suggests a point-line duality between the two representations, as noted by Drummond and Holte (2006) (see Fig. …). Further correspondences include that between the ROC convex hull and the lower envelope of a classifier's cost lines, both of which arise from optimal decision thresholds. Thus, cost curves allow us not only to identify regions of dominance, but also to quantify exactly the advantage in classification loss of the dominating classifier over the dominated one at a particular operating condition.

Point-to-line correspondence between ROC curves and (optimal) cost curves (some of the terms used in this figure are defined in Sects. …). Left: a ROC curve corresponding to the ranking 0 0 1 0 0 0 1 0 1 0. The diagonal lines are rate isometrics (lines connecting points with the same predicted positive rate r; these have slope −π1/π0 and intercept r/π0, where π0 and π1 are the class priors), one for each possible split point in the ranking. The isometric (bold line) going through the top left-hand corner has rate π0. Other rates can be achieved in expectation by means of a random choice between the two bordering split points. Right: the corresponding optimal cost curve (shown for cost proportions instead of skews). Each of the 11 points on the ROC curve corresponds to one of the 11 cost lines (dashed). Furthermore, points on the ROC convex hull correspond to segments in the optimal cost curve.
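The ranking in the figure caption makes the ROC construction concrete. The following is a minimal sketch (assuming, as in the caption where π0 denotes the positive prior, that class 0 is the positive class and the leftmost instances are scored most positive): it builds the 11 ROC points, checks that the trapezoidal AUC equals the proportion of correctly ranked positive-negative pairs, and verifies the rate-isometric equation tpr = r/π0 − (π1/π0)·fpr for every split point.

```python
from fractions import Fraction

# Ranking from the figure caption, sorted from most- to least-positive
# score. Assumption for this sketch: class 0 is the positive class.
ranking = [0, 0, 1, 0, 0, 0, 1, 0, 1, 0]
n = len(ranking)
P = ranking.count(0)  # number of positives
N = ranking.count(1)  # number of negatives

# One ROC point per split of the ranking into "predict positive"
# (left part) and "predict negative" (right part): 11 points in total.
points = [(Fraction(0), Fraction(0))]
tp = fp = 0
for label in ranking:
    if label == 0:
        tp += 1
    else:
        fp += 1
    points.append((Fraction(fp, N), Fraction(tp, P)))

# AUC by trapezoidal integration over the ROC points.
auc = sum((x1 - x0) * (y0 + y1) / 2
          for (x0, y0), (x1, y1) in zip(points, points[1:]))

# AUC as the proportion of correctly ranked positive-negative pairs.
correct = sum(1 for i, a in enumerate(ranking) for j, b in enumerate(ranking)
              if a == 0 and b == 1 and i < j)
assert auc == Fraction(correct, P * N)

# Rate isometrics: the split point that predicts k of n instances
# positive lies on the line tpr = r/pi0 - (pi1/pi0)*fpr with r = k/n.
pi0, pi1 = Fraction(P, n), Fraction(N, n)
for k, (fpr, tpr) in enumerate(points):
    r = Fraction(k, n)
    assert tpr == r / pi0 - (pi1 / pi0) * fpr
```

Exact rational arithmetic via `Fraction` makes the pairwise-AUC and isometric identities hold with equality rather than up to floating-point error.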
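The point-line duality and the lower envelope can also be sketched in code. The example below is not the paper's exact formulation: it uses the Drummond and Holte normalised expected cost, NEC(pc) = pc·FNR + (1 − pc)·FPR, where the single operating condition pc folds class priors and misclassification costs together. Each ROC point of the caption's ranking becomes one cost line, and the pointwise minimum over all lines is the optimal cost curve.

```python
# ROC points for the ranking 0 0 1 0 0 0 1 0 1 0 (class 0 positive:
# 7 positives, 3 negatives), hard-coded for this sketch.
roc_points = [
    (0/3, 0/7), (0/3, 1/7), (0/3, 2/7), (1/3, 2/7), (1/3, 3/7), (1/3, 4/7),
    (1/3, 5/7), (2/3, 5/7), (2/3, 6/7), (3/3, 6/7), (3/3, 7/7),
]

def cost_line(fpr, tpr):
    # Point-line duality: a fixed threshold (one ROC point) yields a
    # loss that is linear in the operating condition pc.
    return lambda pc: pc * (1 - tpr) + (1 - pc) * fpr

lines = [cost_line(fpr, tpr) for fpr, tpr in roc_points]

def lower_envelope(pc):
    # Optimal cost curve: the lowest loss over all thresholds at pc,
    # i.e. the lower envelope of the 11 cost lines.
    return min(line(pc) for line in lines)

# At the extremes the trivial always-negative / always-positive
# thresholds incur zero loss...
assert lower_envelope(0.0) == 0.0 and lower_envelope(1.0) == 0.0
# ...and in between the envelope beats the majority baseline min(pc, 1 - pc).
assert lower_envelope(0.5) < 0.5
```

Note that the envelope takes the minimum over all 11 lines, which is exactly why it assumes optimal thresholds; the rate-driven curve discussed in the abstract instead commits to one particular line per operating condition.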
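The rate-driven threshold choice method from the abstract is also easy to state in code. In the sketch below, `rate_driven_split` is a hypothetical helper name, not from the paper: it picks the split point of the ranking whose predicted positive rate matches the operating condition c, randomising between the two bordering split points when c·n is not a multiple of 1/n, exactly as the caption describes for intermediate rates.

```python
from fractions import Fraction

def rate_driven_split(c, n):
    """Rate-driven threshold choice for a ranking of n instances.

    Returns the two bordering split points k and k+1 (predict the top
    k or k+1 instances positive) and the probability w of using the
    larger one, so that the expected positive rate equals c exactly.
    """
    k = int(c * n)            # largest achievable rate k/n not above c
    w = c * n - k             # mixing weight for split k + 1
    return k, min(k + 1, n), w

# The expected predicted positive rate equals the operating condition,
# even when c is not an achievable rate of the form k/n.
n = 10
for c in [Fraction(0), Fraction(1, 4), Fraction(1, 3), Fraction(7, 10), Fraction(1)]:
    k, k1, w = rate_driven_split(c, n)
    expected_rate = (1 - w) * Fraction(k, n) + w * Fraction(k1, n)
    assert expected_rate == c
```

For c = 1/4 and n = 10, for instance, the method mixes the splits with rates 2/10 and 3/10 with equal probability, giving an expected rate of exactly 1/4.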