From 4b8df7b2cb3d43e97ee4f49a529a31714fc8f64c Mon Sep 17 00:00:00 2001
From: Inyong Hwang
Date: Mon, 15 Nov 2021 11:29:09 +0900
Subject: [PATCH 1/3] Update layers_normalizations.ipynb

- typo "neual" -> "neural"
---
 docs/tutorials/layers_normalizations.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/layers_normalizations.ipynb b/docs/tutorials/layers_normalizations.ipynb
index 01fffc7aa7..bdf6848d9c 100644
--- a/docs/tutorials/layers_normalizations.ipynb
+++ b/docs/tutorials/layers_normalizations.ipynb
@@ -67,7 +67,7 @@
     "* **Instance Normalization** (TensorFlow Addons)\n",
     "* **Layer Normalization** (TensorFlow Core)\n",
     "\n",
-    "The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neual networks as well. \n",
+    "The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neural networks as well. \n",
     "\n",
     "Typically the normalization is performed by calculating the mean and the standard deviation of a subgroup in your input tensor. It is also possible to apply a scale and an offset factor to this as well.\n",
     "\n",

From 82abea5b89359fc2fc924d0fd0d80befa787d438 Mon Sep 17 00:00:00 2001
From: Inyong Hwang
Date: Mon, 15 Nov 2021 14:27:50 +0900
Subject: [PATCH 2/3] Update layers_normalizations.ipynb

- typo "independt" -> "independently"
---
 docs/tutorials/layers_normalizations.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/layers_normalizations.ipynb b/docs/tutorials/layers_normalizations.ipynb
index bdf6848d9c..0788784667 100644
--- a/docs/tutorials/layers_normalizations.ipynb
+++ b/docs/tutorials/layers_normalizations.ipynb
@@ -260,7 +260,7 @@
     "### Introduction\n",
     "Layer Normalization is special case of group normalization where the group size is 1. The mean and standard deviation is calculated from all activations of a single sample.\n",
     "\n",
-    "Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independt.\n",
+    "Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independently.\n",
     "\n",
     "### Example\n",
     "\n",

From 4a8c070ec7a0db7fba76cea6899f8b83a3ae5a86 Mon Sep 17 00:00:00 2001
From: Inyong Hwang
Date: Fri, 10 Dec 2021 00:22:46 +0900
Subject: [PATCH 3/3] fix typo in average_optimizers_callback.ipynb

- genral -> general
---
 docs/tutorials/average_optimizers_callback.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/average_optimizers_callback.ipynb b/docs/tutorials/average_optimizers_callback.ipynb
index ecf0ca4646..adca267659 100644
--- a/docs/tutorials/average_optimizers_callback.ipynb
+++ b/docs/tutorials/average_optimizers_callback.ipynb
@@ -74,7 +74,7 @@
    "source": [
     "## Moving Averaging \n",
     "\n",
-    "> The advantage of Moving Averaging is that they are less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smooothened and a more genral idea of the model training until some point.\n",
+    "> The advantage of Moving Averaging is that they are less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smooothened and a more general idea of the model training until some point.\n",
     "\n",
     "## Stochastic Averaging\n",
     "\n",