Nathan Fradet committed
Commit f72d273 · unverified · 1 Parent(s): f776b64

readme fix

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -16,25 +16,26 @@ pinned: false
 
 ## Metric Description
 
-This metrics computes the expected calibration error (ECE). ECE evaluates how well a model is calibrated, i.e. how well its output probabilities match the actual ground truth distribution. It measures the $$L^p$$ norm difference between a model’s posterior and the true likelihood of being correct.
+This metric computes the expected calibration error (ECE). ECE evaluates how well a model is calibrated, i.e. how well its output probabilities match the actual ground-truth distribution. It measures the $L^p$ norm difference between a model’s posterior and the true likelihood of being correct.
 This module directly calls the [torchmetrics package implementation](https://torchmetrics.readthedocs.io/en/stable/classification/calibration_error.html), allowing the use of its flexible arguments.
 
 ## How to Use
 
 ### Inputs
+
 *List all input arguments in the format below*
 - **predictions** *(float32): predictions (after softmax). They must have a shape (N,C) if multiclass, or (N,...) if binary;*
 - **references** *(int64): reference for each prediction, with a shape (N,...);*
-- **kwargs** *arguments to pass to the [ece](https://torchmetrics.readthedocs.io/en/stable/classification/calibration_error.html) methods.*
+- **kwargs** *arguments to pass to the [calibration error](https://torchmetrics.readthedocs.io/en/stable/classification/calibration_error.html) method.*
 
 ### Output Values
 
-ECE as float.
+ECE as a float.
 
 ### Examples
 
 ```Python
-ce = evaluate.load("Natooz/ece")
+ece = evaluate.load("Natooz/ece")
 results = ece.compute(
     references=np.array([[0.25, 0.20, 0.55],
                          [0.55, 0.05, 0.40],
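
The example in the diff is truncated, and it passes the probability array to `references`, whereas the Inputs section specifies that softmax probabilities go in **predictions** and int64 labels in **references**. The following is a minimal, self-contained sketch of the intended call pattern, not part of the commit: it assumes the `evaluate` library, the `Natooz/ece` module id shown above, and that extra keyword arguments (here `num_classes`, `n_bins`, `norm`, all hypothetical pass-throughs) are forwarded to torchmetrics' calibration error routine. With the $L^1$ norm this is the standard $\mathrm{ECE} = \sum_b \frac{|B_b|}{N}\,\lvert \mathrm{acc}(B_b) - \mathrm{conf}(B_b) \rvert$ over confidence bins $B_b$.

```Python
# Sketch of end-to-end usage of the metric described in the README above.
# Assumptions: "Natooz/ece" resolves via evaluate.load, and num_classes /
# n_bins / norm are forwarded to torchmetrics' calibration error.
import evaluate
import numpy as np

ece = evaluate.load("Natooz/ece")

# Softmax outputs, shape (N, C) for a 3-class problem.
predictions = np.array(
    [[0.25, 0.20, 0.55],
     [0.55, 0.05, 0.40],
     [0.10, 0.30, 0.60],
     [0.90, 0.05, 0.05]],
    dtype=np.float32,
)
# Integer ground-truth labels, shape (N,).
references = np.array([2, 0, 2, 0], dtype=np.int64)

results = ece.compute(
    predictions=predictions,
    references=references,
    num_classes=3,  # assumed to be forwarded to torchmetrics
    n_bins=10,      # number of confidence bins
    norm="l1",      # L1 norm yields the standard ECE
)
print(results)
```

A perfectly calibrated model scores 0: among samples predicted with confidence around 0.9, roughly 90% should actually be correct, so per-bin gaps between accuracy and confidence vanish.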