
Loss functions

```
loss_binary_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "binary_crossentropy"
)

loss_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0L,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "categorical_crossentropy"
)

loss_categorical_hinge(y_true, y_pred, ..., reduction = "auto",
                       name = "categorical_hinge")

loss_cosine_similarity(y_true, y_pred, axis = -1L, ...,
                       reduction = "auto", name = "cosine_similarity")

loss_hinge(y_true, y_pred, ..., reduction = "auto", name = "hinge")

loss_huber(y_true, y_pred, delta = 1, ..., reduction = "auto",
           name = "huber_loss")

loss_kullback_leibler_divergence(y_true, y_pred, ...,
                                 reduction = "auto", name = "kl_divergence")

loss_kl_divergence(y_true, y_pred, ..., reduction = "auto",
                   name = "kl_divergence")

loss_logcosh(y_true, y_pred, ..., reduction = "auto", name = "log_cosh")

loss_mean_absolute_error(y_true, y_pred, ..., reduction = "auto",
                         name = "mean_absolute_error")

loss_mean_absolute_percentage_error(y_true, y_pred, ...,
                                    reduction = "auto",
                                    name = "mean_absolute_percentage_error")

loss_mean_squared_error(y_true, y_pred, ..., reduction = "auto",
                        name = "mean_squared_error")

loss_mean_squared_logarithmic_error(y_true, y_pred, ...,
                                    reduction = "auto",
                                    name = "mean_squared_logarithmic_error")

loss_poisson(y_true, y_pred, ..., reduction = "auto", name = "poisson")

loss_sparse_categorical_crossentropy(y_true, y_pred,
                                     from_logits = FALSE, axis = -1L, ...,
                                     reduction = "auto",
                                     name = "sparse_categorical_crossentropy")

loss_squared_hinge(y_true, y_pred, ..., reduction = "auto",
                   name = "squared_hinge")
```

`y_true`: Ground truth values. shape = `[batch_size, d0, .. dN]`.

`y_pred`: The predicted values. shape = `[batch_size, d0, .. dN]`.

`from_logits`: Whether `y_pred` is expected to be a logits tensor. By default, we assume that `y_pred` encodes a probability distribution.

`label_smoothing`: Float in `[0, 1]`. If `> 0`, smooth the labels.

`axis`: The axis along which to compute crossentropy (the features axis). Axis is 1-based (e.g., the first axis is `axis = 1`). Defaults to `-1L` (the last axis).

`...`: Additional arguments passed on to the Python callable (for forward and backwards compatibility).

`reduction`: Only applicable if `y_true` and `y_pred` are missing. Controls how the returned callable reduces the per-sample losses (by default, to a scalar tensor).

`name`: Only applicable if `y_true` and `y_pred` are missing. An optional name for the returned `Loss` instance.

`delta`: A float, the point where the Huber loss function changes from quadratic to linear.

Loss functions for model training. These are typically supplied in the `loss` parameter of the `compile.keras.engine.training.Model()` function.

If called with `y_true` and `y_pred`, then the corresponding loss is evaluated and the result returned (as a tensor). Alternatively, if `y_true` and `y_pred` are missing, a callable is returned that will compute the loss function and, by default, reduce the loss to a scalar tensor; see the `reduction` parameter for details. (The callable is typically a class instance that inherits from `keras$losses$Loss`.)
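A minimal sketch of the two calling conventions, using `loss_binary_crossentropy()` as the example (the data and the one-layer model here are made-up placeholders):

```r
library(keras)

# 1. Called with data: evaluates the loss and returns a tensor
y_true <- rbind(c(0, 1), c(0, 0))
y_pred <- rbind(c(0.6, 0.4), c(0.4, 0.6))
loss_binary_crossentropy(y_true, y_pred)

# 2. Called without y_true/y_pred: returns a Loss instance,
#    suitable for the `loss` parameter of compile()
model <- keras_model_sequential() %>%
  layer_dense(units = 1, activation = "sigmoid", input_shape = 2)
model %>% compile(
  optimizer = "rmsprop",
  loss = loss_binary_crossentropy(from_logits = FALSE)
)
```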

Computes the binary crossentropy loss.

`label_smoothing` details: Float in `[0, 1]`. If `> 0`, then smooth the labels by squeezing them towards 0.5. That is, use `1. - 0.5 * label_smoothing` for the target class and `0.5 * label_smoothing` for the non-target class.
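For instance, a plain R illustration of that smoothing arithmetic (the label vector is made up, and this is not a keras call):

```r
# Targets a binary loss effectively sees after smoothing
label_smoothing <- 0.1
y_true <- c(0, 1, 1, 0)
y_true * (1 - label_smoothing) + 0.5 * label_smoothing
#> [1] 0.05 0.95 0.95 0.05   (target class -> 0.95, non-target -> 0.05)
```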

Computes the categorical crossentropy loss.

When using the categorical_crossentropy loss, your targets should be in categorical format (e.g., if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index corresponding to the class of the sample). To convert integer targets into categorical targets, you can use the Keras utility function `to_categorical()`:

`categorical_labels <- to_categorical(int_labels, num_classes = NULL)`
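For example, with made-up integer labels (output shown as comments):

```r
library(keras)

int_labels <- c(0, 2, 1, 2)  # integer class ids in 0..(num_classes - 1)
to_categorical(int_labels, num_classes = 3)
#>      [,1] [,2] [,3]
#> [1,]    1    0    0
#> [2,]    0    0    1
#> [3,]    0    1    0
#> [4,]    0    0    1
```

Alternatively, `loss_sparse_categorical_crossentropy()` accepts integer targets directly, without the conversion step.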

Computes the Huber loss value.

For each value x in `error = y_true - y_pred`:

```
loss = 0.5 * x^2                  if |x| <= d
loss = 0.5 * d^2 + d * (|x| - d)  if |x| > d
```

where d is `delta`. See: https://en.wikipedia.org/wiki/Huber_loss
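The piecewise rule translates directly into a few lines of plain R. This is an illustrative elementwise sketch (the helper name is hypothetical and no reduction over elements is applied), not the library implementation:

```r
huber_elementwise <- function(y_true, y_pred, delta = 1) {
  x <- y_true - y_pred
  ifelse(abs(x) <= delta,
         0.5 * x^2,                                  # quadratic near zero
         0.5 * delta^2 + delta * (abs(x) - delta))   # linear in the tails
}

huber_elementwise(c(0, 1, 3), c(0.5, 0.5, 0))
#> [1] 0.125 0.125 2.500
```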

Logarithm of the hyperbolic cosine of the prediction error.

`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. However, it may return NaNs if the intermediate value `cosh(y_pred - y_true)` is too large to be represented in the chosen precision.
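A quick numeric check of those two approximations in plain R (the error values are chosen arbitrarily):

```r
x <- c(0.01, 10)    # a small and a large prediction error
log(cosh(x))        #> 5.00e-05  9.306853
x^2 / 2             #> 5.00e-05  50.0      (matches only for small x)
abs(x) - log(2)     #> -0.683    9.306853  (matches only for large x)
```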

See also: `compile.keras.engine.training.Model()`, `loss_binary_crossentropy()`