
Commit a4113f6

Fix layers_normalizations.ipynb typos (#2606)
* Update layers_normalizations.ipynb - typo "neual" -> "neural"
* Update layers_normalizations.ipynb - typo "independt" -> "independently"
1 parent f65419f · commit a4113f6

File tree

1 file changed: +2 -2 lines changed


docs/tutorials/layers_normalizations.ipynb

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@
 "* **Instance Normalization** (TensorFlow Addons)\n",
 "* **Layer Normalization** (TensorFlow Core)\n",
 "\n",
-"The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neual networks as well. \n",
+"The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neural networks as well. \n",
 "\n",
 "Typically the normalization is performed by calculating the mean and the standard deviation of a subgroup in your input tensor. It is also possible to apply a scale and an offset factor to this as well.\n",
 "\n",
@@ -260,7 +260,7 @@
 "### Introduction\n",
 "Layer Normalization is special case of group normalization where the group size is 1. The mean and standard deviation is calculated from all activations of a single sample.\n",
 "\n",
-"Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independt.\n",
+"Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independently.\n",
 "\n",
 "### Example\n",
 "\n",

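The context line in the first hunk describes normalizing by the mean and standard deviation of a subgroup of the input tensor, optionally followed by a scale and an offset. The sketch below illustrates that mechanism in TensorFlow; it is not part of the commit or the notebook, and the `group_normalize` helper, its `gamma`/`beta` arguments, and the chosen shapes are purely illustrative.

```python
import tensorflow as tf

def group_normalize(x, groups, gamma, beta, eps=1e-5):
    """Normalize one sample per channel subgroup, then apply scale and offset."""
    h, w, c = x.shape
    # Split the channel axis into `groups` subgroups.
    xg = tf.reshape(x, [h, w, groups, c // groups])
    # Mean and variance are computed per subgroup, over all spatial positions.
    mean, var = tf.nn.moments(xg, axes=[0, 1, 3], keepdims=True)
    xg = (xg - mean) / tf.sqrt(var + eps)
    x = tf.reshape(xg, [h, w, c])
    # Optional per-channel scale (gamma) and offset (beta).
    return gamma * x + beta

# Example: a single 8x8 sample with 4 channels split into 2 groups.
sample = tf.random.normal([8, 8, 4])
out = group_normalize(sample, groups=2, gamma=tf.ones([4]), beta=tf.zeros([4]))
```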
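The second hunk concerns the statement that Layer Normalization computes its statistics from all activations of a single sample and is therefore independent of the batch size. A minimal sketch of that property, assuming the stock `tf.keras.layers.LayerNormalization` layer from TensorFlow Core rather than anything specific to this notebook:

```python
import tensorflow as tf

# Statistics are taken per sample, so a sample's output does not depend on
# how many other samples happen to be in the batch.
layer_norm = tf.keras.layers.LayerNormalization(axis=-1)

x = tf.random.normal([4, 10])     # batch of 4 samples, 10 features each
alone = layer_norm(x[:1])         # first sample normalized on its own
in_batch = layer_norm(x)[:1]      # same sample normalized inside the batch

# The two results agree up to numerical precision.
print(tf.reduce_max(tf.abs(alone - in_batch)).numpy())
```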