{"id":3171,"date":"2020-08-24T18:48:25","date_gmt":"2020-08-24T18:48:25","guid":{"rendered":"https:\/\/cloudxlab.com\/blog\/?p=3171"},"modified":"2020-09-03T11:06:47","modified_gmt":"2020-09-03T11:06:47","slug":"writing-custom-optimizer-in-tensorflow-and-keras","status":"publish","type":"post","link":"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/","title":{"rendered":"Writing Custom Optimizer in TensorFlow Keras API"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img width=\"960\" height=\"540\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/08\/customizing_ann.png\" alt=\"\" class=\"wp-image-3172\"\/><\/figure>\n\n\n\n<p>Recently, I came up with an idea for a new Optimizer (an algorithm for training neural network). In theory, it looked great but when I implemented it and tested it, it didn&#8217;t turn out to be good.<\/p>\n\n\n\n<p>Some of my learning are:<\/p>\n\n\n\n<ol><li>Neural Networks are hard to predict.<\/li><li>Figuring out how to customize TensorFlow is hard because the main documentation is messy.<\/li><li>Theory and Practical are two different things. The more hands-on you are, the higher are your chances of trying out an idea and thus iterating faster.<\/li><\/ol>\n\n\n\n<p>I am sharing my algorithm here. Even though this algorithm may not be of much use to you but it would give you ideas on how to implement your own optimizer using Tensorflow Keras.<\/p>\n\n\n\n<p>A neural network is basically a set of neurons connected to input and output. We need to adjust the connection strengths such that it gives the least error for a given set of input. To adjust the weight we use the algorithms. One brute force algorithm could be to try all possible combinations of weights (connections strength) but that will be too time-consuming. 
So, we usually use a greedy algorithm; most such algorithms are variants of Gradient Descent.&nbsp;In this article, we will write our own custom algorithm to train a neural network. In other words, we will learn how to write our own custom optimizer using TensorFlow Keras.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Gradient descent is simply this:<\/p>\n\n\n\n<pre title=\"Gradient Descent\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">New_weight = weight - eta * rate of change of error wrt weight\nw -= \u03b7*\u2202E\/\u2202w<\/code><\/pre>\n\n\n\n<p>Here eta (the learning rate) is basically some constant that we will need to figure out. Usually, we keep eta at 0.001.<\/p>\n\n\n\n<p>Here is an easy way to visualize it. If there is only one weight, we can visualize it like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"624\" height=\"302\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/09\/Customizing-Optimizer-in-Tensorflow-and-Keras-2.png\" alt=\"Gradient Descent Algorithm\" class=\"wp-image-3203\"\/><figcaption>Gradient Descent algorithm<\/figcaption><\/figure>\n\n\n\n<p>In this blog, we will learn how to create our own algorithm.
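The update rule above can be watched in action on a one-variable toy problem. This is a sketch I am adding for illustration; the error function E(w) = (w - 3)^2 is an assumed example, not from the original post:

```python
# Gradient descent on E(w) = (w - 3)**2, whose gradient is dE/dw = 2*(w - 3).
eta = 0.1   # learning rate; the post later uses 0.001 for the real model
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)
    w -= eta * grad      # w -= eta * dE/dw
print(round(w, 6))  # approaches the minimum at w = 3
```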
Though it is extremely rare to need to customize the optimizer (there are already around 5-6 variants of the Gradient Descent algorithm), if you get an idea for a new, clever optimizer, it could be a breakthrough.<\/p>\n\n\n\n<p>In Gradient Descent, if the eta or learning rate is too high, the error might increase instead of decreasing, because the next value of the weight could land on the other side of the minima.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"624\" height=\"470\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/09\/Customizing-Optimizer-in-Tensorflow-and-Keras-1.png\" alt=\"Gradient Descent divergence\" class=\"wp-image-3202\"\/><figcaption>Sometimes Gradient Descent does not converge.<\/figcaption><\/figure>\n\n\n\n<p>In my optimizer, the idea is that the moment the slope changes sign, we average the current and previous weights.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"624\" height=\"450\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/09\/Customizing-Optimizer-in-Tensorflow-and-Keras.png\" alt=\"Gradient Descent Improvement\" class=\"wp-image-3201\"\/><figcaption>This is how our optimizer is going to work.<\/figcaption><\/figure>\n\n\n\n<p>If the slope hasn&#8217;t changed sign, the usual gradient descent applies.<\/p>\n\n\n\n<p>Now, let us get to the coding.<\/p>\n\n\n\n<p>First, import the libraries that we will be using.<\/p>\n\n\n\n<pre title=\"Usual Imports\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">import tensorflow as tf\nfrom tensorflow import keras\n\n# Common imports\nimport numpy as np\nimport os\n<\/code><\/pre>\n\n\n\n<p>We are going to test our optimizer on the California housing data.
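Before the TensorFlow version, the idea can be sketched on a single weight in plain Python. This is my own toy illustration: the quadratic error E(w) = (w - 3)^2 and the deliberately large learning rate are assumptions chosen to force overshooting.

```python
# Proposed rule on E(w) = (w - 3)**2, grad = 2*(w - 3), with a learning rate
# so large that plain gradient descent keeps overshooting the minimum.
eta = 0.9
w, prev_w, prev_grad = 0.0, None, None
for _ in range(60):
    grad = 2 * (w - 3)
    if prev_grad is not None and grad * prev_grad < 0:
        new_w = (prev_w + w) / 2.0   # slope changed sign: average the weights
    else:
        new_w = w - eta * grad       # usual gradient-descent step
    prev_w, prev_grad = w, grad
    w = new_w
print(round(w, 3))  # settles near the minimum at w = 3
```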
Now, let us load the data and create the training, validation, and test sets.<\/p>\n\n\n\n<pre title=\"Load the data\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">from sklearn.datasets import fetch_california_housing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\nhousing = fetch_california_housing()\nX_train_full, X_test, y_train_full, y_test = train_test_split(\n    housing.data, housing.target.reshape(-1, 1), random_state=42)\nX_train, X_valid, y_train, y_valid = train_test_split(\n    X_train_full, y_train_full, random_state=42)\n\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_valid_scaled = scaler.transform(X_valid)\nX_test_scaled = scaler.transform(X_test)\n<\/code><\/pre>\n\n\n\n<p>In order to create a custom optimizer, we will have to extend the base Optimizer class, which lives in keras.optimizers.&nbsp;<\/p>\n\n\n\n<pre title=\"Skeleton of the Class\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">class SGOptimizer(keras.optimizers.Optimizer):\n\t\u2026\n\t&lt;&lt; this is where our implementation would be >>\n\t\u2026 \n<\/code><\/pre>\n\n\n\n<p>We will be overriding or implementing these methods:<\/p>\n\n\n\n<ul><li>__init__ &#8211; Constructor<\/li><li>_create_slots<\/li><li>_resource_apply_dense<\/li><li>_resource_apply_sparse (just marking it not-implemented)<\/li><li>get_config<\/li><\/ul>\n\n\n\n<h2><strong>Constructor &#8211; __init__ method<\/strong><\/h2>\n\n\n\n<p>Whenever we define a class in Python, we define a constructor with the name __init__ (it starts and ends with double underscores). This method should have &#8216;self&#8217; as its first argument, which will point to the object. The remaining arguments are your own choice. You can supply these arguments at the time of creating an object of this class. Here we usually define the hyperparameters.
In our case, we don&#8217;t have any hyperparameters other than the learning_rate, whose default value we are setting to 0.01. In case we don&#8217;t supply the learning_rate argument, it is assumed to be 0.01. The name argument is used by the system for displaying progress etc. The remaining arguments are absorbed by kwargs and passed to the parent as is.<\/p>\n\n\n\n<p>Here, we are delegating work to the parent class, Optimizer, by way of calling super(). First, we are initializing the base class by calling __init__() on super(). Notice that we are sending &#8220;name&#8221; and &#8220;kwargs&#8221; to the parent.<\/p>\n\n\n\n<p>Afterwards, we are setting the hyperparameter learning rate by calling _set_hyper. Notice that if someone also provides &#8216;lr&#8217; in the arguments, it takes preference over learning_rate.<\/p>\n\n\n\n<p>Also, notice that we are setting _is_first to true. We don&#8217;t want to use our algorithm on the first step because we need a previous gradient to compare the current gradient&#8217;s sign against.<\/p>\n\n\n\n<pre title=\"Constructor\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def __init__(self, learning_rate=0.01, name=\"SGOptimizer\", **kwargs):\n        \"\"\"Call super().__init__() and use _set_hyper() to store hyperparameters\"\"\"\n        super().__init__(name, **kwargs)\n        self._set_hyper(\"learning_rate\", kwargs.get(\"lr\", learning_rate)) # handle lr=learning_rate\n        self._is_first = True\n<\/code><\/pre>\n\n\n\n<p>The next important method to implement is _create_slots. A slot is basically a placeholder where we keep an extra value. A slot is per variable, where a variable could be a weight or a bias. We need two extra slots &#8211; one for keeping track of the previous gradient so that we can check whether its sign differs from the current gradient&#8217;s.
We need another slot for keeping the previous weight (or variable value) so that we can compute the average of the current and previous weights when the signs of the gradients differ. Every slot has a name. The names of our slots are &#8220;pv&#8221; and &#8220;pg&#8221;.<\/p>\n\n\n\n<pre title=\"Create Slots (Variables)\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def _create_slots(self, var_list):\n        \"\"\"For each model variable, create the optimizer variable associated with it.\n        TensorFlow calls these optimizer variables \"slots\".\n        For our optimizer, we need two slots per model variable.\n        \"\"\"\n        for var in var_list:\n            self.add_slot(var, \"pv\") # previous variable, i.e. weight or bias\n        for var in var_list:\n            self.add_slot(var, \"pg\") # previous gradient\n<\/code><\/pre>\n\n\n\n<p>Now, let us implement the main algorithm by way of _resource_apply_dense. This method is called on every step. It provides two arguments, grad and var. Both grad and var are basically tensors and contain the values of the gradients (rate of change of loss wrt the variable) and the variables. This method is called per variable, but you don&#8217;t have to worry about that part since you are dealing with tensors.<\/p>\n\n\n\n<p>The remaining implementation is straightforward.
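The selection step at the heart of the method relies on tf.where; NumPy's np.where behaves the same way, so the mechanics can be checked in isolation (a standalone sketch with made-up values standing in for the real tensors):

```python
import numpy as np

# Elementwise selection, as the optimizer does with tf.where:
# keep the gradient-descent result where the gradient's sign is unchanged,
# otherwise fall back to the averaged weights.
grad        = np.array([ 0.5, -0.2,  0.1])
prev_grad   = np.array([ 0.4,  0.3, -0.2])
new_var_m   = np.array([1.0, 2.0, 3.0])   # stands in for the usual GD update
avg_weights = np.array([9.0, 8.0, 7.0])   # stands in for (pv_var + var)/2

cond = grad * prev_grad >= 0              # True where the sign is unchanged
new_var = np.where(cond, new_var_m, avg_weights)
print(new_var)  # [1. 8. 7.]
```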
Note the @tf.function decorator at the top; it is a signal to TensorFlow to convert the function into a TensorFlow graph.<\/p>\n\n\n\n<pre title=\"Core Algorithm\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">@tf.function\n    def _resource_apply_dense(self, grad, var):\n        \"\"\"Update the slots and perform one optimization step for one model variable\n        \"\"\"\n        var_dtype = var.dtype.base_dtype\n        lr_t = self._decayed_lr(var_dtype) # handle learning rate decay\n\n        # Compute the new weight using the traditional gradient descent method\n        new_var_m = var - grad * lr_t\n\n        # Extract the previous values of the variables and gradients\n        pv_var = self.get_slot(var, \"pv\")\n        pg_var = self.get_slot(var, \"pg\")\n\n        # If it is the first time, use just the traditional method\n        if self._is_first:\n            self._is_first = False\n            new_var = new_var_m\n        else:\n            # Create a boolean tensor: True where the gradient hasn't changed\n            # sign, False where it has\n            cond = grad*pg_var >= 0\n\n            # Compute the average of the previous and current weights. We will\n            # use only a few of these values. It is prone to overflow; we could\n            # also compute the average as a + (b - a)\/2.0\n            avg_weights = (pv_var + var)\/2.0\n\n            # tf.where picks the value from new_var_m where cond is True,\n            # otherwise it takes the value from avg_weights. We must avoid for loops\n            new_var = tf.where(cond, new_var_m, avg_weights)\n        # Finally, we save the current values in the slots\n        pv_var.assign(var)\n        pg_var.assign(grad)\n\n        # We update the weight here. We don't need to return anything\n        var.assign(new_var)\n<\/code><\/pre>\n\n\n\n<p>The complete class would look like this:<\/p>\n\n\n\n<pre title=\"Complete Optimizer\" class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">class SGOptimizer(keras.optimizers.Optimizer):\n    def __init__(self, learning_rate=0.01, name=\"SGOptimizer\", **kwargs):\n        \"\"\"Call super().__init__() and use _set_hyper() to store hyperparameters\"\"\"\n        super().__init__(name, **kwargs)\n        self._set_hyper(\"learning_rate\", kwargs.get(\"lr\", learning_rate)) # handle lr=learning_rate\n        self._is_first = True\n    \n    def _create_slots(self, var_list):\n        \"\"\"For each model variable, create the optimizer variable associated with it.\n        TensorFlow calls these optimizer variables \"slots\".\n        For our optimizer, we need two slots per model variable.\n        \"\"\"\n        for var in var_list:\n            self.add_slot(var, \"pv\") # previous variable, i.e. weight or bias\n        for var in var_list:\n            self.add_slot(var, \"pg\") # previous gradient\n\n    @tf.function\n    def _resource_apply_dense(self, grad, var):\n        \"\"\"Update the slots and perform one optimization step for one model variable\n        \"\"\"\n        var_dtype = var.dtype.base_dtype\n        lr_t = self._decayed_lr(var_dtype) # handle learning rate decay\n        new_var_m = var - grad * lr_t\n        pv_var = self.get_slot(var, \"pv\")\n        pg_var = self.get_slot(var, \"pg\")\n        \n        if self._is_first:\n            self._is_first = False\n            new_var = new_var_m\n        else:\n            cond = grad*pg_var >= 0\n            print(cond) # debug print; runs once, while the function is being traced\n            avg_weights = (pv_var + var)\/2.0\n            new_var = tf.where(cond, new_var_m, avg_weights)\n        pv_var.assign(var)\n        pg_var.assign(grad)\n        var.assign(new_var)\n\n    def _resource_apply_sparse(self, grad, var):\n        raise NotImplementedError\n\n    def get_config(self):\n        base_config = super().get_config()\n        return {\n            **base_config,\n            \"learning_rate\": self._serialize_hyperparameter(\"learning_rate\"),\n        }\n<\/code><\/pre>\n\n\n\n<p>Now, let us test it. Let us first clear the TensorFlow session and reset the random seed:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">keras.backend.clear_session()\nnp.random.seed(42)\ntf.random.set_seed(42)<\/code><\/pre>\n\n\n\n<p>Let us fire up the training now.
First, we create a simple neural network with one layer and call compile, setting the loss and the optimizer. Notice that we are passing an object of our optimizer. Finally, call model.fit.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">model = keras.models.Sequential([keras.layers.Dense(1, input_shape=[8])])\nmodel.compile(loss=\"mse\", optimizer=SGOptimizer(learning_rate=0.001))\nmodel.fit(X_train_scaled, y_train, epochs=50)<\/code><\/pre>\n\n\n\n<p>This is the output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Train on 11610 samples<br>Epoch 1\/50<br>Tensor(\"GreaterEqual:0\", shape=(1,), dtype=bool)<br>11610\/11610 [==============================] - 1s 95us\/sample - loss: 3.7333<br>Epoch 2\/50<br>11610\/11610 [==============================] - 1s 47us\/sample - loss: 1.4848<br>Epoch 3\/50<br>11610\/11610 [==============================] - 1s 48us\/sample - loss: 0.9218<br>\u2026<br>\u2026<br>Epoch 47\/50<br>11610\/11610 [==============================] - 1s 45us\/sample - loss: 0.5306<br>Epoch 48\/50<br>11610\/11610 [==============================] - 1s 45us\/sample - loss: 0.5317<br>Epoch 49\/50<br>11610\/11610 [==============================] - 1s 47us\/sample - loss: 0.5311<br>Epoch 50\/50<br>11610\/11610 [==============================] - 1s 46us\/sample - loss: 0.5312<\/pre>\n\n\n\n<p>If you compare this loss trend against the usual gradient descent or any of its variants, you will realize that it is not an improvement.&nbsp;<\/p>\n\n\n\n<p>The complete code is available in this repository: <a href=\"https:\/\/github.com\/cloudxlab\/ml\/blob\/master\/exp\/Optimizer_2.ipynb\">https:\/\/github.com\/cloudxlab\/ml\/blob\/master\/exp\/Optimizer_2.ipynb<\/a><\/p>\n\n\n\n<p>To learn more, visit <a href=\"http:\/\/CloudxLab.com\">CloudxLab.com<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently, I came up with an idea for a new Optimizer (an algorithm for training neural network). 
In theory, it looked great but when I implemented it and tested it, it didn&#8217;t turn out to be good. Some of my learning are: Neural Networks are hard to predict. Figuring out how to customize TensorFlow is &hellip; <a href=\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Writing Custom Optimizer in TensorFlow Keras API&#8221;<\/span><\/a><\/p>\n","protected":false},"author":14,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[67,29,28,30,14],"tags":[92,17,90,16,91,59],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Writing Custom Optimizer in TensorFlow Keras API | CloudxLab Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Writing Custom Optimizer in TensorFlow Keras API | CloudxLab Blog\" \/>\n<meta property=\"og:description\" content=\"Recently, I came up with an idea for a new Optimizer (an algorithm for training neural network). In theory, it looked great but when I implemented it and tested it, it didn&#8217;t turn out to be good. Some of my learning are: Neural Networks are hard to predict. 
Figuring out how to customize TensorFlow is &hellip; Continue reading &quot;Writing Custom Optimizer in TensorFlow Keras API&quot;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/\" \/>\n<meta property=\"og:site_name\" content=\"CloudxLab Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cloudxlab\" \/>\n<meta property=\"article:published_time\" content=\"2020-08-24T18:48:25+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-09-03T11:06:47+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/08\/customizing_ann.png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:site\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"10 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"CloudxLab Blog\",\"description\":\"Learn AI, Machine Learning, Deep Learning, Devops &amp; Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/cloudxlab.com\/blog\/?s={search_term_string}\",\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/08\/customizing_ann.png\",\"contentUrl\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/08\/customizing_ann.png\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#webpage\",\"url\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/\",\"name\":\"Writing Custom Optimizer in TensorFlow Keras API | CloudxLab Blog\",\"isPartOf\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#primaryimage\"},\"datePublished\":\"2020-08-24T18:48:25+00:00\",\"dateModified\":\"2020-09-03T11:06:47+00:00\",\"author\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4835f1b3d5000626cb15e9311d748e09\"},\"breadcrumb\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/writing-custom-optimizer-in-tensorflow-and-keras\/#webpage\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4835f1b3d5000626cb15e9311
d748e09\",\"name\":\"Sandeep Giri\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1393214840cf7455bb4cba055cb30468?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1393214840cf7455bb4cba055cb30468?s=96&d=mm&r=g\",\"caption\":\"Sandeep Giri\"},\"sameAs\":[\"https:\/\/cloudxlab.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3171"}],"collection":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/comments?post=3171"}],"version-history":[{"count":17,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3171\/revisions"}],"predecessor-version":[{"id":3210,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3171\/revisions\/3210"}],"wp:attachment":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media?parent=3171"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/categories?post=3171"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/tags?post=3171"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}