{"id":2941,"date":"2020-04-20T14:39:05","date_gmt":"2020-04-20T14:39:05","guid":{"rendered":"https:\/\/cloudxlab.com\/blog\/?p=2941"},"modified":"2020-05-24T16:13:36","modified_gmt":"2020-05-24T16:13:36","slug":"number-plate-reader","status":"publish","type":"post","link":"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/","title":{"rendered":"How to make a custom number plate reader &#8211; Part 1"},"content":{"rendered":"\n<p>In this two-part series of blogs, we will explore how to create a custom number plate reader. We will use a few machine learning tools to build the detector. An automatic number plate detector has multiple applications in traffic control, traffic violation detection, parking management, etc. We will use the number plate detector as an exercise to try out features of OpenCV, the Tensorflow object detection API, and OCR with Pytesseract.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2>Machine Learning Tools Overview<\/h2>\n\n\n\n<p>We will use the following tools to build our application:<\/p>\n\n\n\n<ol><li><a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow <\/a>&#8211; One of the most popular open-source libraries for machine learning, supported by Google<\/li><li><a href=\"https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/object_detection\">Tensorflow Object Detection API<\/a> &#8211; We will use this API to create a model that identifies and localises the number plate. This API is an open-source framework built on top of TensorFlow. It lets us construct, train and deploy a variety of object detection models.<\/li><li><a href=\"https:\/\/supervise.ly\/explore\/models\/ssd-inception-v-2-coco-1861\/overview\">SSD Inception v2 model<\/a> &#8211;  The SSD, or single shot detector, lets us detect and localise objects in an image with a single pass, or single shot. The Tensorflow object detection API can use several models for object detection. We will use the SSD Inception v2 model as it gives us a good balance of both accuracy and speed. 
You can find a list of all available models <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/detection_model_zoo.md\">here<\/a>.<\/li><li><a href=\"https:\/\/opencv.org\/about\/\">OpenCV<\/a> &#8211; OpenCV, or Open Source Computer Vision, is the most popular tool for computer vision. It is written in C++; we will be using its Python bindings. OpenCV has a bunch of tools to manage pictures and videos, and algorithms that manipulate images.<\/li><li><a href=\"https:\/\/pypi.org\/project\/pytesseract\/\">Pytesseract <\/a>&#8211; Pytesseract is a Python wrapper for Tesseract, an optical character recognition (OCR) tool. It enables us to read text embedded in images.<\/li><li>Python &#8211; We will write all code in Python 3.<\/li><li><a href=\"https:\/\/github.com\/tzutalin\/labelImg\">LabelImg<\/a> &#8211; LabelImg is a graphical image annotation tool, which we will use to label our datasets.<\/li><\/ol>\n\n\n\n<h2>Number Plate Reader Methodology<\/h2>\n\n\n\n<p>We will break down the task of building a custom number plate reader into the following steps:<\/p>\n\n\n\n<ol><li>Create a dataset of images with number plates<\/li><li>Annotate the dataset with LabelImg<\/li><li>Train an existing object detection model to detect number plates in a picture<\/li><li>Extract the number plate using the trained model<\/li><li>Run filters and clean up the picture<\/li><li>Read the number plate using an OCR tool<\/li><li>Identify shortcomings and explore methods to improve the model<\/li><\/ol>\n\n\n\n<p>We will cover the first four steps in this blog. 
The remaining three will be covered in a later blog.<\/p>\n\n\n\n<h2>Creating the Dataset<\/h2>\n\n\n\n<figure class=\"wp-block-gallery columns-3 is-cropped\"><ul class=\"blocks-gallery-grid\"><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img11.jpeg\" alt=\"\" data-id=\"2949\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2949\" class=\"wp-image-2949\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img10.jpeg\" alt=\"\" data-id=\"2950\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2950\" class=\"wp-image-2950\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img8.jpeg\" alt=\"\" data-id=\"2952\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2952\" class=\"wp-image-2952\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img7.jpeg\" alt=\"\" data-id=\"2953\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2953\" class=\"wp-image-2953\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img6.jpeg\" alt=\"\" data-id=\"2954\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2954\" class=\"wp-image-2954\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/img5.jpeg\" alt=\"\" data-id=\"2955\" data-link=\"https:\/\/cloudxlab.com\/blog\/?attachment_id=2955\" class=\"wp-image-2955\"\/><\/figure><\/li><\/ul><figcaption class=\"blocks-gallery-caption\">Images for number plate 
reader<\/figcaption><\/figure>\n\n\n\n<p>We can create a dataset of images with number plates like the ones above. You can take such photos with your mobile phone or scrape them from the internet. Break the dataset into two directories &#8211; train and test. The train directory should have about 80% of the images and the test directory the remaining 20%.<\/p>\n\n\n\n<p>I advise you to be very careful in this step. Make sure that you shuffle the entire set so that your train and test sets are random and not over-representative of a certain type of image. For example, you may gather images from multiple sources such as mobile photos, web scraping, CCTV feeds, etc. If your test set or train set is skewed towards a single source, such as web-scraped images, you will see poor results. Make sure your test and train sets are randomised and representative of the images you want to use in the end.<\/p>\n\n\n\n<h2>Annotating the Images<\/h2>\n\n\n\n<p>First, create a new environment with virtualenv to manage the workflow. I strongly recommend using virtualenvwrapper as it makes managing multiple Python environments very easy.<\/p>\n\n\n\n<pre class=\"wp-block-code code-overflow\"><code class=\"language-bash\">pip3 install virtualenvwrapper\nmkvirtualenv tf_obj<\/code><\/pre>\n\n\n\n<p>Install LabelImg following the instructions <a href=\"https:\/\/github.com\/tzutalin\/labelImg\">here<\/a>. Choose the appropriate workflow based on your environment. Open LabelImg, open the directory containing the images, and annotate them.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"940\" height=\"728\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/Screenshot-2020-04-18-at-6.19.35-AM.png\" alt=\"\" class=\"wp-image-2960\"\/><figcaption>Annotating images with LabelImg to build a number plate detector<\/figcaption><\/figure>\n\n\n\n<p>We will annotate both the train and test image sets. 
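To make the shuffle-and-split advice concrete, here is a small stdlib-only sketch. The directory names and the 80/20 fraction are assumptions for illustration; adjust `train_frac` and the glob patterns to match your own data.

```python
import random
import shutil
from pathlib import Path

def split_dataset(src_dir, dst_dir, train_frac=0.8, seed=42):
    """Shuffle all images in src_dir and copy them into
    dst_dir/train and dst_dir/test in a train_frac split."""
    images = sorted(Path(src_dir).glob("*.jpeg")) + sorted(Path(src_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # deterministic shuffle for reproducibility
    n_train = int(len(images) * train_frac)
    for subset, files in (("train", images[:n_train]), ("test", images[n_train:])):
        out = Path(dst_dir) / subset
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, out / f.name)
    return n_train, len(images) - n_train
```

Because all sources are pooled and shuffled before the split, each subset stays representative of the whole collection.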
Enter the label as numberplate and choose Pascal\/VOC as the save format. Once you're done, you will notice that there are XML files associated with each picture. You can read an XML file to understand the parameters it contains. A sample XML file is below.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"844\" height=\"371\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/Screenshot-2020-04-18-at-9.52.03-PM.png\" alt=\"\" class=\"wp-image-2961\"\/><\/figure>\n\n\n\n<h2>Training<\/h2>\n\n\n\n<h3>Recommended Directory Structure<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">TensorFlow\n\u251c\u2500 models\n\u2502   \u251c\u2500 official\n\u2502   \u251c\u2500 research\n\u2502   \u251c\u2500 samples\n\u2502   \u2514\u2500 tutorials\n\u2514\u2500 workspace\n    \u2514\u2500 training_home\n       \u251c\u2500 annotations\n       \u251c\u2500 images\n       \u2502   \u251c\u2500 test\n       \u2502   \u2514\u2500 train\n       \u251c\u2500 pre-trained-model\n       \u251c\u2500 training\n       \u251c\u2500 eval\n       \u2514\u2500 trained-inference-graph<\/code><\/pre>\n\n\n\n<p>The images and the XML files generated will be saved in the images directory. The test set and train set are kept in the images\/test and images\/train folders.<\/p>\n\n\n\n<h3>Install Tensorflow and Tensorflow Object Detection API<\/h3>\n\n\n\n<p>The Tensorflow object detection API is a set of libraries built on TensorFlow. At the time of writing this blog, TensorFlow 2.0 had been released but did not yet support the Tensorflow Object Detection API; a key module, tf.contrib, was not part of TensorFlow 2.0. For this exercise, install TensorFlow 1.15. Continue using the environment we created earlier. 
<\/p>\n\n\n\n<ul><li><a href=\"https:\/\/www.tensorflow.org\/install\/pip?lang=python3#virtualenv-install\">Tensorflow installation instructions<\/a>: Note &#8211; install tensorflow 1.15.<\/li><li><a href=\"https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/object_detection\">Tensorflow Object Detection API instructions<\/a>: The API's models have to be downloaded and kept locally. We will store the models under the directory models, as seen in the Recommended Directory Structure.<\/li><\/ul>\n\n\n\n<p>For your reference, we have provided the entire requirements.txt file here.<\/p>\n\n\n\n<pre class=\"wp-block-code code-overflow\"><code class=\"language-properties line-numbers\">absl-py==0.9.0\nansiwrap==0.8.4\narrow==0.15.5\nastor==0.8.1\nastroid==2.3.3\nastropy==3.2.3\nattrs==19.3.0\nbackcall==0.1.0\nbcolz==1.2.1\nbinaryornot==0.4.4\nbleach==3.1.0\ncachetools==4.0.0\ncertifi==2019.11.28\ncffi==1.13.2\nchardet==3.0.4\nClick==7.0\ncloud-tpu-profiler==1.15.0rc1\ncloudpickle==1.2.2\ncolorama==0.4.3\nconfigparser==4.0.2\nconfuse==1.0.0\ncookiecutter==1.7.0\ncryptography==1.7.1\ncycler==0.10.0\nCython==0.29.15\ndaal==2019.0\ndatalab==1.1.5\ndecorator==4.4.1\ndefusedxml==0.6.0\ndill==0.3.1.1\ndistro==1.0.1\ndocker==4.1.0\nentrypoints==0.3\nenum34==1.1.6\nfairing==0.5.3\nfsspec==0.6.2\nfuture==0.18.2\ngast==0.2.2\ngcsfs==0.6.0\ngitdb2==2.0.6\nGitPython==3.0.5\ngoogle-api-core==1.16.0\ngoogle-api-python-client==1.7.11\ngoogle-auth==1.11.0\ngoogle-auth-httplib2==0.0.3\ngoogle-auth-oauthlib==0.4.1\ngoogle-cloud-bigquery==1.23.1\ngoogle-cloud-core==1.2.0\ngoogle-cloud-dataproc==0.6.1\ngoogle-cloud-datastore==1.10.0\ngoogle-cloud-language==1.3.0\ngoogle-cloud-logging==1.14.0\ngoogle-cloud-monitoring==0.31.1\ngoogle-cloud-spanner==1.13.0\ngoogle-cloud-storage==1.25.0\ngoogle-cloud-translate==2.0.0\ngoogle-compute-engine==20191210.0\ngoogle-pasta==0.1.8\ngoogle-resumable-media==0.5.0\ngoogleapis-common-protos==1.51.0\ngrpc-google-iam-v1==0.12.3\ngrpcio==1.26.0\nh5py==2.10
.0\nhorovod==0.19.0\nhtml5lib==1.0.1\nhtmlmin==0.1.12\nhttplib2==0.17.0\nicc-rt==2020.0.133\nidna==2.8\nimageio==2.6.1\nimportlib-metadata==1.4.0\nintel-openmp==2020.0.133\nipykernel==5.1.4\nipython==7.9.0\nipython-genutils==0.2.0\nipython-sql==0.3.9\nipywidgets==7.5.1\nisort==4.3.21\njedi==0.16.0\nJinja2==2.11.0\njinja2-time==0.2.0\njoblib==0.14.1\njson5==0.8.5\njsonschema==3.2.0\njupyter==1.0.0\njupyter-aihub-deploy-extension==0.1\njupyter-client==5.3.4\njupyter-console==6.1.0\njupyter-contrib-core==0.3.3\njupyter-contrib-nbextensions==0.5.1\njupyter-core==4.6.1\njupyter-highlight-selected-word==0.2.0\njupyter-http-over-ws==0.0.7\njupyter-latex-envs==1.4.6\njupyter-nbextensions-configurator==0.4.1\njupyterlab==1.2.6\njupyterlab-git==0.9.0\njupyterlab-server==1.0.6\nKeras==2.3.1\nKeras-Applications==1.0.8\nKeras-Preprocessing==1.1.0\nkeyring==10.1\nkeyrings.alt==1.3\nkiwisolver==1.1.0\nkubernetes==10.0.1\nlazy-object-proxy==1.4.3\nllvmlite==0.31.0\nlxml==4.4.2\nMarkdown==3.1.1\nMarkupSafe==1.1.1\nmatplotlib==3.0.3\nmccabe==0.6.1\nmissingno==0.4.2\nmistune==0.8.4\nmkl==2019.0\nmkl-fft==1.0.6\nmkl-random==1.0.1.1\nmock==3.0.5\nmore-itertools==8.1.0\nnbconvert==5.6.1\nnbdime==1.1.0\nnbformat==5.0.4\nnetworkx==2.4\nnltk==3.4.5\nnotebook==6.0.3\nnumba==0.47.0\nnumpy==1.18.1\noauth2client==4.1.3\noauthlib==3.1.0\nopencv-python==4.1.2.30\nopt-einsum==3.1.0\npackaging==20.1\npandas==0.25.3\npandas-profiling==1.4.0\npandocfilters==1.4.2\npapermill==1.2.1\nparso==0.6.0\npathlib2==2.3.5\npexpect==4.8.0\nphik==0.9.8\npickleshare==0.7.5\nPillow==7.0.0\nplotly==4.5.0\npluggy==0.13.1\npoyo==0.5.0\nprettytable==0.7.2\nprometheus-client==0.7.1\npromise==2.3\nprompt-toolkit==2.0.10\nprotobuf==3.11.2\npsutil==5.6.7\nptyprocess==0.6.0\npy==1.8.1\npyarrow==0.15.1\npyasn1==0.4.8\npyasn1-modules==0.2.8\npycparser==2.19\npycrypto==2.6.1\npycurl==7.43.0\npydaal==2019.0.0.20180713\npydot==1.4.1\nPygments==2.5.2\npygobject==3.22.0\npylint==2.4.4\npyparsing==2.4.6\npyrsistent==0.15.7\npytest=
=5.3.4\npytest-pylint==0.14.1\npython-apt==1.4.1\npython-dateutil==2.8.1\npytz==2019.3\nPyWavelets==1.1.1\npyxdg==0.25\nPyYAML==5.3\npyzmq==18.1.1\nqtconsole==4.6.0\nrequests==2.22.0\nrequests-oauthlib==1.3.0\nretrying==1.3.3\nrsa==4.0\nscikit-image==0.15.0\nscikit-learn==0.22.1\nscipy==1.4.1\nseaborn==0.9.1\nSecretStorage==2.3.1\nSend2Trash==1.5.0\nsimplegeneric==0.8.1\nsix==1.14.0\nsmmap2==2.0.5\nSQLAlchemy==1.3.13\nsqlparse==0.3.0\ntbb==2019.0\ntbb4py==2019.0\ntenacity==6.0.0\ntensorboard==1.15.0\ntensorflow-datasets==1.2.0\ntensorflow-estimator==1.15.1\ntensorflow-gpu==1.15.2\ntensorflow-hub==0.6.0\ntensorflow-io==0.8.1\ntensorflow-metadata==0.21.1\ntensorflow-probability==0.9.0\ntensorflow-serving-api-gpu==1.14.0\ntermcolor==1.1.0\nterminado==0.8.3\ntestpath==0.4.4\ntextwrap3==0.9.2\ntfds-nightly==1.0.1.dev201903050105\ntornado==5.1.1\ntqdm==4.42.0\ntraitlets==4.3.3\ntyped-ast==1.4.1\nunattended-upgrades==0.1\nuritemplate==3.0.1\nurllib3==1.24.2\nvirtualenv==16.7.9\nwcwidth==0.1.8\nwebencodings==0.5.1\nwebsocket-client==0.57.0\nWerkzeug==0.16.1\nwhichcraft==0.6.1\nwidgetsnbextension==3.5.1\nwitwidget-gpu==1.5.1\nwrapt==1.11.2\nzipp==1.1.0\n<\/code><\/pre>\n\n\n\n<h3>The Label-Map file<\/h3>\n\n\n\n<p>The label-map file uniquely maps the labels in your model to integers. In our case we have only one label, so our label-map file will look like<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-json\">item {\n    id: 1\n    name: 'numberplate'\n}<\/code><\/pre>\n\n\n\n<p>We will call this file label-map.pbtxt. If our model had more classes of objects, we would add corresponding entries, each with a unique integer mapping to the additional label. We will keep this file in the annotations folder.<\/p>\n\n\n\n<h3>Creating the TF.Record<\/h3>\n\n\n\n<p>TensorFlow uses a binary format called TF.Record. Deep learning uses large datasets, and storing the data in binary makes it efficient. 
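To make the "binary is efficient" point concrete, here is a toy, stdlib-only sketch of the underlying idea: length-prefixed binary records streamed back in small batches, so the full dataset never has to sit in memory at once. This is an illustration of the concept only, not the actual TF.Record wire format.

```python
import struct

def write_records(stream, records):
    """Write length-prefixed binary records to a stream (a toy
    stand-in for the TF.Record idea: sequential and compact,
    with no text parsing needed on read)."""
    for rec in records:
        stream.write(struct.pack("<I", len(rec)))  # 4-byte little-endian length header
        stream.write(rec)

def read_batches(stream, batch_size):
    """Yield records in small batches, the way a training loop
    consumes a record file without loading it all at once."""
    batch = []
    while True:
        header = stream.read(4)
        if not header:
            break
        (length,) = struct.unpack("<I", header)
        batch.append(stream.read(length))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

The real TF.Record format adds CRC checksums and integrates with TensorFlow's input pipeline, but the access pattern is the same: sequential reads, fixed overhead per record, batches fed to the trainer on demand.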
It is more compact, so it needs less disk space for storage, and it creates an efficient data pipeline to feed the training process. When working with datasets too large to store directly in memory, the training process loads only smaller segments, or batches, at a time. TF.Records is highly optimised for TensorFlow and enables efficient saving, loading and merging of data. Additionally, TF.Records simplifies the processing of sequence data like time series and word encodings. For more details refer to this <a href=\"https:\/\/medium.com\/mostly-ai\/tensorflow-records-what-they-are-and-how-to-use-them-c46bc4bbb564\">link<\/a>.<\/p>\n\n\n\n<p>There are two steps to creating TF records. First we create csv files from the xml files, then we create the TF.record files. The code to generate the csv file is in the code block below; the usage is at the top. The input is the directory with the xml files, and the output goes in the annotations directory.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-python line-numbers\">\"\"\"\nUsage:\n# Create train data:\npython xml_to_csv.py -i [PATH_TO_IMAGES_FOLDER]\/train -o [PATH_TO_ANNOTATIONS_FOLDER]\/train_labels.csv\n\n# Create test data:\npython xml_to_csv.py -i [PATH_TO_IMAGES_FOLDER]\/test -o [PATH_TO_ANNOTATIONS_FOLDER]\/test_labels.csv\n\"\"\"\n\nimport os\nimport glob\nimport pandas as pd\nimport argparse\nimport xml.etree.ElementTree as ET\n\n\ndef xml_to_csv(path):\n    \"\"\"Iterates through all .xml files (generated by labelImg) in a given directory and combines them in a single Pandas dataframe.\n\n    Parameters:\n    ----------\n    path : {str}\n        The path containing the .xml files\n    Returns\n    -------\n    Pandas DataFrame\n        The produced dataframe\n    \"\"\"\n\n    xml_list = []\n    for xml_file in glob.glob(path + '\/*.xml'):\n        tree = ET.parse(xml_file)\n        root = tree.getroot()\n        for member in root.findall('object'):\n            value = (root.find('filename').text,\n       
             int(root.find('size')[0].text),\n                    int(root.find('size')[1].text),\n                    member[0].text,\n                    int(member[4][0].text),\n                    int(member[4][1].text),\n                    int(member[4][2].text),\n                    int(member[4][3].text)\n                    )\n            xml_list.append(value)\n    column_name = ['filename', 'width', 'height',\n                'class', 'xmin', 'ymin', 'xmax', 'ymax']\n    xml_df = pd.DataFrame(xml_list, columns=column_name)\n    return xml_df\n\n\ndef main():\n    # Initiate argument parser\n    parser = argparse.ArgumentParser(\n        description=\"Sample TensorFlow XML-to-CSV converter\")\n    parser.add_argument(\"-i\",\n                        \"--inputDir\",\n                        help=\"Path to the folder where the input .xml files are stored\",\n                        type=str)\n    parser.add_argument(\"-o\",\n                        \"--outputFile\",\n                        help=\"Name of output .csv file (including path)\", type=str)\n    args = parser.parse_args()\n\n    if(args.inputDir is None):\n        args.inputDir = os.getcwd()\n    if(args.outputFile is None):\n        args.outputFile = args.inputDir + \"\/labels.csv\"\n\n    assert(os.path.isdir(args.inputDir))\n\n    xml_df = xml_to_csv(args.inputDir)\n    xml_df.to_csv(\n        args.outputFile, index=None)\n    print('Successfully converted xml to csv.')\n\n\nif __name__ == '__main__':\n    main()<\/code><\/pre>\n\n\n\n<p>After creating the csv files, we will use another script to create the TF.records. As you can see from the usage specified in the script below, it takes the csv files and creates the record files. Note that we use only one label here, called numberplate. 
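It helps to see the shape of the data the record script consumes: one CSV row per bounding box, which the script then groups by filename so that each image becomes one training example. The sketch below mimics that grouping with the stdlib only; the sample rows are made-up values that mirror the columns produced by the converter above.

```python
import csv
import io
from itertools import groupby

# Hypothetical sample rows matching the columns written by xml_to_csv.py
SAMPLE = """filename,width,height,class,xmin,ymin,xmax,ymax
img1.jpeg,1280,960,numberplate,420,610,760,720
img1.jpeg,1280,960,numberplate,100,200,220,260
img2.jpeg,1280,960,numberplate,300,500,640,610
"""

def group_by_filename(csv_text):
    """Group annotation rows by image file, mimicking the split()
    helper in the record-generation script (one example per image,
    possibly holding several bounding boxes)."""
    rows = sorted(csv.DictReader(io.StringIO(csv_text)),
                  key=lambda r: r["filename"])
    return {name: list(grp)
            for name, grp in groupby(rows, key=lambda r: r["filename"])}
```

Here img1.jpeg carries two number plates, so its example gets two boxes, while img2.jpeg gets one.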
In case there are more, edit the FLAGS.label accordingly.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em>\"\"\"<\/em>\n<em>Usage:<\/em>\n\n<em># Create train data:<\/em>\n<em>python generate_tfrecord.py --label=&lt;LABEL&gt; --csv_input=&lt;PATH_TO_ANNOTATIONS_FOLDER&gt;\/train_labels.csv  --output_path=&lt;PATH_TO_ANNOTATIONS_FOLDER&gt;\/train.record<\/em>\n\n<em># Create test data:<\/em>\n<em>python generate_tfrecord.py --label=&lt;LABEL&gt; --csv_input=&lt;PATH_TO_ANNOTATIONS_FOLDER&gt;\/test_labels.csv  --output_path=&lt;PATH_TO_ANNOTATIONS_FOLDER&gt;\/test.record<\/em>\n<em>\"\"\"<\/em>\n\n<strong>from<\/strong> <strong>__future__<\/strong> <strong>import<\/strong> division\n<strong>from<\/strong> <strong>__future__<\/strong> <strong>import<\/strong> print_function\n<strong>from<\/strong> <strong>__future__<\/strong> <strong>import<\/strong> absolute_import\n\n<strong>import<\/strong> <strong>os<\/strong>\n<strong>import<\/strong> <strong>io<\/strong>\n<strong>import<\/strong> <strong>pandas<\/strong> <strong>as<\/strong> <strong>pd<\/strong>\n<strong>import<\/strong> <strong>tensorflow<\/strong> <strong>as<\/strong> <strong>tf<\/strong>\n<strong>import<\/strong> <strong>sys<\/strong>\nsys.path.append(\"..\/..\/models\/research\")\n\n<strong>from<\/strong> <strong>PIL<\/strong> <strong>import<\/strong> Image\n<strong>from<\/strong> <strong>object_detection.utils<\/strong> <strong>import<\/strong> dataset_util\n<strong>from<\/strong> <strong>collections<\/strong> <strong>import<\/strong> namedtuple, OrderedDict\n\nflags = tf.app.flags\nflags.DEFINE_string('csv_input', '', 'Path to the CSV input')\nflags.DEFINE_string('output_path', '', 'Path to output TFRecord')\nflags.DEFINE_string('label', '', 'Name of class label')\n<em># if your image has more labels input them as<\/em>\n<em># flags.DEFINE_string('label0', '', 'Name of class[0] label')<\/em>\n<em># flags.DEFINE_string('label1', '', 'Name of class[1] label')<\/em>\n<em># and so 
on.<\/em>\nflags.DEFINE_string('img_path', '', 'Path to images')\nFLAGS = flags.FLAGS\n\n\n<em># TO-DO replace this with label map<\/em>\n<em># for multiple labels add more else if statements<\/em>\n<strong>def<\/strong> class_text_to_int(row_label):\n    <strong>if<\/strong> row_label == FLAGS.label:  <em># 'numberplate':<\/em>\n        <strong>return<\/strong> 1\n    <em># comment out the if statement above and uncomment these statements for multiple labels<\/em>\n    <em># if row_label == FLAGS.label0:<\/em>\n    <em>#   return 1<\/em>\n    <em># elif row_label == FLAGS.label1:<\/em>\n    <em>#   return 2<\/em>\n    <strong>else<\/strong>:\n        <strong>return<\/strong> None\n\n\n<strong>def<\/strong> split(df, group):\n    data = namedtuple('data', ['filename', 'object'])\n    gb = df.groupby(group)\n    <strong>return<\/strong> [data(filename, gb.get_group(x)) <strong>for<\/strong> filename, x <strong>in<\/strong> zip(gb.groups.keys(), gb.groups)]\n\n\n<strong>def<\/strong> create_tf_example(group, path):\n    <strong>with<\/strong> tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') <strong>as<\/strong> fid:\n        encoded_jpg = fid.read()\n    encoded_jpg_io = io.BytesIO(encoded_jpg)\n    image = Image.open(encoded_jpg_io)\n    width, height = image.size\n\n    filename = group.filename.encode('utf8')\n    image_format = b'jpg'\n    <em># check that the image format matches your images.<\/em>\n    xmins = []\n    xmaxs = []\n    ymins = []\n    ymaxs = []\n    classes_text = []\n    classes = []\n\n    <strong>for<\/strong> index, row <strong>in<\/strong> group.object.iterrows():\n        xmins.append(row['xmin'] \/ width)\n        xmaxs.append(row['xmax'] \/ width)\n        ymins.append(row['ymin'] \/ height)\n        ymaxs.append(row['ymax'] \/ height)\n        classes_text.append(row['class'].encode('utf8'))\n        classes.append(class_text_to_int(row['class']))\n\n    tf_example = tf.train.Example(features=tf.train.Features(feature={\n        
'image\/height': dataset_util.int64_feature(height),\n        'image\/width': dataset_util.int64_feature(width),\n        'image\/filename': dataset_util.bytes_feature(filename),\n        'image\/source_id': dataset_util.bytes_feature(filename),\n        'image\/encoded': dataset_util.bytes_feature(encoded_jpg),\n        'image\/format': dataset_util.bytes_feature(image_format),\n        'image\/object\/bbox\/xmin': dataset_util.float_list_feature(xmins),\n        'image\/object\/bbox\/xmax': dataset_util.float_list_feature(xmaxs),\n        'image\/object\/bbox\/ymin': dataset_util.float_list_feature(ymins),\n        'image\/object\/bbox\/ymax': dataset_util.float_list_feature(ymaxs),\n        'image\/object\/class\/text': dataset_util.bytes_list_feature(classes_text),\n        'image\/object\/class\/label': dataset_util.int64_list_feature(classes),\n    }))\n    <strong>return<\/strong> tf_example\n\n\n<strong>def<\/strong> main(_):\n    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)\n    path = os.path.join(os.getcwd(), FLAGS.img_path)\n    examples = pd.read_csv(FLAGS.csv_input)\n    grouped = split(examples, 'filename')\n    <strong>for<\/strong> group <strong>in<\/strong> grouped:\n        tf_example = create_tf_example(group, path)\n        writer.write(tf_example.SerializeToString())\n\n    writer.close()\n    output_path = os.path.join(os.getcwd(), FLAGS.output_path)\n    <strong>print<\/strong>('Successfully created the TFRecords: {}'.format(output_path))\n\n\n<strong>if<\/strong> __name__ == '__main__':\n    tf.app.run()<\/pre>\n\n\n\n<p>Refer to the <a href=\"https:\/\/tensorflow-object-detection-api-tutorial.readthedocs.io\/en\/latest\/training.html#creating-tensorflow-records\">link<\/a> for more details.<\/p>\n\n\n\n<h3>What is Transfer Learning<\/h3>\n\n\n\n<p>We have converted the annotated images to a format compatible with TensorFlow. In this section, we will train an existing model to detect and localise a number plate in a picture. 
We will take an existing pre-trained model and feed it the annotated pictures from above. This process is called transfer learning. The premise of transfer learning is that the pre-existing model has already been trained extensively on a large set of images, so it can already classify and detect a number of object classes. The Tensorflow object detection API provides us with several models which have been pre-trained exhaustively on large datasets. By leveraging this learning, we can get a working model with fewer images and fewer training iterations. A list of the available models can be found in the <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/detection_model_zoo.md\">model zoo<\/a>.<\/p>\n\n\n\n<h4>Model Zoo<\/h4>\n\n\n\n<p>We have multiple models in the model zoo; some are faster, others more accurate. Some of the models can even do segmentation, which identifies the pixels that make up the object. For our application, detecting bounding boxes is sufficient. The list of available models, with speed and accuracy measured as mAP (mean Average Precision) on the COCO dataset, is below. 
Higher the mAP, better the accuracy.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th>Model name<\/th><th>Speed (ms)<\/th><th>COCO mAP[^1]<\/th><th>Outputs<\/th><\/tr><\/thead><tbody><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_coco_2018_01_28.tar.gz\">ssd_mobilenet_v1_coco<\/a><\/td><td>30<\/td><td>21<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz\">ssd_mobilenet_v1_0.75_depth_coco \u2606<\/a><\/td><td>26<\/td><td>18<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz\">ssd_mobilenet_v1_quantized_coco \u2606<\/a><\/td><td>29<\/td><td>18<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync_2018_07_18.tar.gz\">ssd_mobilenet_v1_0.75_depth_quantized_coco \u2606<\/a><\/td><td>29<\/td><td>16<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz\">ssd_mobilenet_v1_ppn_coco \u2606<\/a><\/td><td>26<\/td><td>20<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz\">ssd_mobilenet_v1_fpn_coco \u2606<\/a><\/td><td>56<\/td><td>32<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz\">ssd_resnet_50_fpn_coco \u2606<\/a><\/td><td>76<\/td><td>35<\/td><td>Boxes<\/td><\/tr><tr><td><a 
href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v2_coco_2018_03_29.tar.gz\">ssd_mobilenet_v2_coco<\/a><\/td><td>31<\/td><td>22<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz\">ssd_mobilenet_v2_quantized_coco<\/a><\/td><td>29<\/td><td>22<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz\">ssdlite_mobilenet_v2_coco<\/a><\/td><td>27<\/td><td>22<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_inception_v2_coco_2018_01_28.tar.gz\">ssd_inception_v2_coco<\/a><\/td><td>42<\/td><td>24<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz\">faster_rcnn_inception_v2_coco<\/a><\/td><td>58<\/td><td>28<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_resnet50_coco_2018_01_28.tar.gz\">faster_rcnn_resnet50_coco<\/a><\/td><td>89<\/td><td>30<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz\">faster_rcnn_resnet50_lowproposals_coco<\/a><\/td><td>64<\/td><td><\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/rfcn_resnet101_coco_2018_01_28.tar.gz\">rfcn_resnet101_coco<\/a><\/td><td>92<\/td><td>30<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_resnet101_coco_2018_01_28.tar.gz\">faster_rcnn_resnet101_coco<\/a><\/td><td>106<\/td><td>32<\/td><td>Boxes<\/td><\/tr><tr><td><a 
href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz\">faster_rcnn_resnet101_lowproposals_coco<\/a><\/td><td>82<\/td><td><\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz\">faster_rcnn_inception_resnet_v2_atrous_coco<\/a><\/td><td>620<\/td><td>37<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz\">faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco<\/a><\/td><td>241<\/td><td><\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_nas_coco_2018_01_28.tar.gz\">faster_rcnn_nas<\/a><\/td><td>1833<\/td><td>43<\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz\">faster_rcnn_nas_lowproposals_coco<\/a><\/td><td>540<\/td><td><\/td><td>Boxes<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz\">mask_rcnn_inception_resnet_v2_atrous_coco<\/a><\/td><td>771<\/td><td>36<\/td><td>Masks<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz\">mask_rcnn_inception_v2_coco<\/a><\/td><td>79<\/td><td>25<\/td><td>Masks<\/td><\/tr><tr><td><a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz\">mask_rcnn_resnet101_atrous_coco<\/a><\/td><td>470<\/td><td>33<\/td><td>Masks<\/td><\/tr><tr><td><a 
href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz\">mask_rcnn_resnet50_atrous_coco<\/a><\/td><td>343<\/td><td>29<\/td><td>Masks<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4>Configuring the Pipeline<\/h4>\n\n\n\n<p>We will use ssd_inception as it offers a good balance between speed and accuracy. Since we use the pre-trained model as a starting point, download the <a href=\"http:\/\/download.tensorflow.org\/models\/object_detection\/ssd_inception_v2_coco_2018_01_28.tar.gz\">pre-trained model<\/a> from the model zoo. Decompress the .tar.gz file and store the contents in the directory pre-trained-model. We will now configure a pipeline for running the training. The pipeline is configured with a config file. Samples of different config files can be found <a href=\"https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/object_detection\/samples\/configs\">here<\/a>. The config file for inception is <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/samples\/configs\/ssd_inception_v2_coco.config\">here<\/a>. We will now edit the file to make it compatible with our dataset. We have changed the following items in the file:<\/p>\n\n\n\n<ol><li>Change num_classes to 1, as we have only one class of objects.<\/li><li>Change fine_tune_checkpoint to &#8220;pre-trained-model\/model.ckpt&#8221; to point to the pre-trained model.<\/li><li>Change input_path for both the train and test sets.<\/li><li>Change label_map_path to point to our label map in the annotations directory.<\/li><li>Set num_steps to 20000, which is sufficient for our use case.<\/li><\/ol>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\"># SSD with Inception v2 configuration for MSCOCO Dataset.\n# Users should configure the fine_tune_checkpoint field in the train config as\n# well as the label_map_path and input_path fields in the train_input_reader and\n# eval_input_reader. 
Search for \"PATH_TO_BE_CONFIGURED\" to find the fields that\n# should be configured.\n\nmodel {\n  ssd {\n    num_classes: 1    #Since we have only one class numberplates\n    box_coder {\n      faster_rcnn_box_coder {\n        y_scale: 10.0\n        x_scale: 10.0\n        height_scale: 5.0\n        width_scale: 5.0\n      }\n    }\n    matcher {\n      argmax_matcher {\n        matched_threshold: 0.5\n        unmatched_threshold: 0.5\n        ignore_thresholds: false\n        negatives_lower_than_unmatched: true\n        force_match_for_each_row: true\n      }\n    }\n    similarity_calculator {\n      iou_similarity {\n      }\n    }\n    anchor_generator {\n      ssd_anchor_generator {\n        num_layers: 6\n        min_scale: 0.2\n        max_scale: 0.95\n        aspect_ratios: 1.0\n        aspect_ratios: 2.0\n        aspect_ratios: 0.5\n        aspect_ratios: 3.0\n        aspect_ratios: 0.3333\n        reduce_boxes_in_lowest_layer: true\n      }\n    }\n    image_resizer {\n      fixed_shape_resizer {\n        height: 300\n        width: 300\n      }\n    }\n    box_predictor {\n      convolutional_box_predictor {\n        min_depth: 0\n        max_depth: 0\n        num_layers_before_predictor: 0\n        use_dropout: false\n        dropout_keep_probability: 0.8\n        kernel_size: 3\n        box_code_size: 4\n        apply_sigmoid_to_scores: false\n        conv_hyperparams {\n          activation: RELU_6,\n          regularizer {\n            l2_regularizer {\n              weight: 0.00004\n            }\n          }\n          initializer {\n            truncated_normal_initializer {\n              stddev: 0.03\n              mean: 0.0\n            }\n          }\n        }\n      }\n    }\n    feature_extractor {\n      type: 'ssd_inception_v2'\n      min_depth: 16\n      depth_multiplier: 1.0\n      conv_hyperparams {\n        activation: RELU_6,\n        regularizer {\n          l2_regularizer {\n            weight: 0.00004\n          }\n        }\n   
     initializer {\n          truncated_normal_initializer {\n            stddev: 0.03\n            mean: 0.0\n          }\n        }\n        batch_norm {\n          train: true,\n          scale: true,\n          center: true,\n          decay: 0.9997,\n          epsilon: 0.001,\n        }\n      }\n      override_base_feature_extractor_hyperparams: true\n    }\n    loss {\n      classification_loss {\n        weighted_sigmoid {\n          anchorwise_output: true\n        }\n      }\n      localization_loss {\n        weighted_smooth_l1 {\n          anchorwise_output: true\n        }\n      }\n      hard_example_miner {\n        num_hard_examples: 3000\n        iou_threshold: 0.99\n        loss_type: CLASSIFICATION\n        max_negatives_per_positive: 3\n        min_negatives_per_image: 0\n      }\n      classification_weight: 1.0\n      localization_weight: 1.0\n    }\n    normalize_loss_by_num_matches: true\n    post_processing {\n      batch_non_max_suppression {\n        score_threshold: 1e-8\n        iou_threshold: 0.6\n        max_detections_per_class: 100\n        max_total_detections: 100\n      }\n      score_converter: SIGMOID\n    }\n  }\n}\n\ntrain_config: {\n  batch_size: 24\n  optimizer {\n    rms_prop_optimizer: {\n      learning_rate: {\n        exponential_decay_learning_rate {\n          initial_learning_rate: 0.004\n          decay_steps: 800720\n          decay_factor: 0.95\n        }\n      }\n      momentum_optimizer_value: 0.9\n      decay: 0.9\n      epsilon: 1.0\n    }\n  }\n  fine_tune_checkpoint: \"pre-trained-model\/model.ckpt\"    # points to local starting point\n  from_detection_checkpoint: true\n  # Note: The below line limits the training process to 200K steps, which we\n  # empirically found to be sufficient enough to train the pets dataset. This\n  # effectively bypasses the learning rate schedule (the learning rate will\n  # never decay). 
Remove the below line to train indefinitely.\n  num_steps: 20000    # 20K steps is sufficient for us\n  data_augmentation_options {\n    random_horizontal_flip {\n    }\n  }\n  data_augmentation_options {\n    ssd_random_crop {\n    }\n  }\n}\n\ntrain_input_reader: {\n  tf_record_input_reader {\n    input_path: \"annotations\/train.record\"   # Pointing to our training set\n  }\n  label_map_path: \"annotations\/label_map.pbtxt\"   # Pointing to our labels\n}\n\neval_config: {\n  num_examples: 8000\n  # Note: The below line limits the evaluation process to 100 evaluations.\n  # Remove the below line to evaluate indefinitely.\n  max_evals: 100\n}\n\neval_input_reader: {\n  tf_record_input_reader {\n    input_path: \"annotations\/test.record\"   # Pointing to our eval set\n  }\n  label_map_path: \"annotations\/label_map.pbtxt\"  # Pointing to our label map\n  shuffle: false\n  num_readers: 1\n  num_epochs: 1\n}\n<\/code><\/pre>\n\n\n\n<p>We will keep the pipeline config file in the training directory. To begin training and evaluation, we will copy two files from the TensorFlow Object Detection API to the training_home directory:<\/p>\n\n\n\n<ul><li><em>TensorFlow\/models\/research\/object_detection\/legacy\/train.py<\/em><\/li><li><em>TensorFlow\/models\/research\/object_detection\/legacy\/eval.py<\/em><\/li><\/ul>\n\n\n\n<p>The following command launches training in the background, so the process continues to run even if your session disconnects.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">nohup python3 train.py --train_dir=training\/ --pipeline_config_path=train\/ssd_inception_v2_coco.config &gt; nohup_train.out 2&gt;&amp;1&amp;\n<\/code><\/pre>\n\n\n\n<p>I found training easier to manage when run in the background this way. 
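Since training runs detached, a quick way to gauge progress (besides tailing the log) is to look at the newest checkpoint written to the training directory. A small stdlib-only sketch, assuming checkpoints follow TensorFlow's usual model.ckpt-&lt;step&gt; naming; the helper name is our own:

```python
import glob
import os
import re


def latest_checkpoint_step(train_dir):
    """Return the highest global step among model.ckpt-<step> checkpoint files."""
    steps = []
    for path in glob.glob(os.path.join(train_dir, 'model.ckpt-*.index')):
        match = re.search(r'model\.ckpt-(\d+)', os.path.basename(path))
        if match:
            steps.append(int(match.group(1)))
    return max(steps) if steps else None
```

Calling latest_checkpoint_step('training/') periodically tells you how far along the 20000-step run is.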
The logs can be monitored by running<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">tail -f nohup_train.out<\/code><\/pre>\n\n\n\n<h4>Evaluation and Monitoring<\/h4>\n\n\n\n<p>As training runs, it saves checkpoints regularly in the training directory. If we run evaluation in parallel, we can see how well each checkpoint performs. To run evaluation, the command is<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">nohup python eval.py --checkpoint_dir=training\/ --eval_dir=eval\/ --pipeline_config_path=train\/ssd_inception_v2_coco.config &gt; nohup_eval.out 2&gt;&amp;1&amp;<\/code><\/pre>\n\n\n\n<p>We can also run TensorBoard to monitor both training and evaluation. We will run it on two different ports, so we can watch both in parallel.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">nohup tensorboard --logdir=training\/ --port=8008 &gt; nohup_tb_tr.out 2&gt;&amp;1&amp;\nnohup tensorboard --logdir=eval\/ --port=6006 &gt; nohup_tb_ev.out 2&gt;&amp;1&amp;<\/code><\/pre>\n\n\n\n<p>TensorBoard runs an HTTP server on the given port, so you can monitor progress from any browser.<\/p>\n\n\n\n<h3>Exporting the Model<\/h3>\n\n\n\n<p>After training is complete, copy <em>TensorFlow\/models\/research\/object_detection\/export_inference_graph.py<\/em> to the training_home directory and use it to export the trained model.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">python export_inference_graph.py --input_type image_tensor --pipeline_config_path training\/ssd_inception_v2_coco.config --trained_checkpoint_prefix training\/model.ckpt-20000 --output_directory trained-inference-graphs\/ssd_inception_output_inference_graph_number_plate_detector.pb<\/code><\/pre>\n\n\n\n<h2>Number Plate Reader (Detector only)<\/h2>\n\n\n\n<p>We will now test the detector component of our number plate reader. 
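Before walking through the full script, note that the detector returns each box as normalized (ymin, xmin, ymax, xmax) coordinates in [0, 1], so drawing a rectangle requires scaling them to pixel coordinates. A minimal sketch of that conversion (the helper name and example box values are our own):

```python
def to_pixel_box(bbox, rows, cols):
    """Convert a normalized (ymin, xmin, ymax, xmax) box to pixel (x, y, right, bottom)."""
    ymin, xmin, ymax, xmax = bbox
    return (int(xmin * cols), int(ymin * rows), int(xmax * cols), int(ymax * rows))


# A box roughly in the middle of a 960x1280 image:
print(to_pixel_box((0.45, 0.30, 0.60, 0.70), rows=960, cols=1280))  # → (384, 432, 896, 576)
```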
Use the code below, changing MODEL_NAME and INPUT_FILE to match your paths (the code uses the TensorFlow 1.x API).<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-python line-numbers\">import tensorflow as tf\nimport cv2 as cv\n\n\n\n# Change the paths to match your model and input file\nMODEL_NAME='ssd_inception_output_inference_graph_v1.pb'\nPATH_TO_FROZEN_GRAPH = MODEL_NAME + '\/frozen_inference_graph.pb'\nINPUT_FILE='test_images\/img8.jpeg'\n\n\n# Read the frozen model from the file\nwith tf.gfile.FastGFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:\n    graph_def = tf.GraphDef()\n    graph_def.ParseFromString(f.read())\n\n\n# Create the TensorFlow session\nwith tf.Session() as sess:\n\n    sess.graph.as_default()\n    tf.import_graph_def(graph_def, name='')\n    # Read the input file\n    img = cv.imread(INPUT_FILE)\n\n\n    rows = img.shape[0]\n    cols = img.shape[1]\n    inp = cv.resize(img, (300, 300))\n    inp = inp[:, :, [2, 1, 0]]  # BGR2RGB\n\n    # Run the model\n    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),\n                    sess.graph.get_tensor_by_name('detection_scores:0'),\n                    sess.graph.get_tensor_by_name('detection_boxes:0'),\n                    sess.graph.get_tensor_by_name('detection_classes:0')],\n                   feed_dict={'image_tensor:0': inp.reshape(1, inp.shape[0], inp.shape[1], 3)})\n\n    # Visualize detected bounding boxes\n    num_detections = int(out[0][0])\n    # Iterate through all detections\n    for i in range(num_detections):\n        classId = int(out[3][0][i])\n\n        score = float(out[1][0][i])\n        bbox = [float(v) for v in out[2][0][i]]\n\n        if score &gt; 0.9:\n            # Draw a box around the detected number plate\n            x = int(bbox[1] * cols)\n            y = int(bbox[0] * rows)\n            right = int(bbox[3] * cols)\n            bottom = 
int(bbox[2] * rows)\n            cv.rectangle(img, (x, y), (right, bottom), (125, 255, 51), thickness=2)\n            cv.imwrite('licence_plate_detected.png', img)\n<\/code><\/pre>\n\n\n\n<p>The output of the number plate reader&#8217;s detector will look like this.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"1280\" height=\"960\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/licence_plate_detected.png\" alt=\"\" class=\"wp-image-2966\"\/><\/figure>\n\n\n\n<p>We have now learnt to use the TensorFlow Object Detection API to build the number plate detector. In the <a href=\"https:\/\/cloudxlab.com\/blog\/how-to-build-a-number-plate-reader-part-2\/\">second part<\/a>, we will read the characters from the detected number plate.<\/p>\n\n\n\n<h4>References<\/h4>\n\n\n\n<ol><li><a href=\"https:\/\/tensorflow-object-detection-api-tutorial.readthedocs.io\/en\/latest\/training.html\">TensorFlow Object Detection API Read the Docs<\/a><\/li><\/ol>\n","protected":false},"excerpt":{"rendered":"<p>In this duology of blogs, we will explore how to create a custom number plate reader.<\/p>\n","protected":false},"author":26,"featured_media":2960,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[67,30],"tags":[61],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to make a custom number plate reader - Part 1 | CloudxLab Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to make a custom number plate reader - Part 1 | CloudxLab Blog\" \/>\n<meta 
property=\"og:description\" content=\"In this duology of blogs, we will explore how to create a custom number plate reader.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/\" \/>\n<meta property=\"og:site_name\" content=\"CloudxLab Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cloudxlab\" \/>\n<meta property=\"article:published_time\" content=\"2020-04-20T14:39:05+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-05-24T16:13:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/Screenshot-2020-04-18-at-6.19.35-AM.png\" \/>\n\t<meta property=\"og:image:width\" content=\"940\" \/>\n\t<meta property=\"og:image:height\" content=\"728\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:site\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\">\n\t<meta name=\"twitter:data1\" content=\"19 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"CloudxLab Blog\",\"description\":\"Learn AI, Machine Learning, Deep Learning, Devops &amp; Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/cloudxlab.com\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/cloudxlab.com\/blog\/wp-content\/uploads\/2020\/04\/Screenshot-2020-04-18-at-6.19.35-AM.png\",\"contentUrl\":\"https:\/\/cloudxlab.com\/blog\/wp-content\/uploads\/2020\/04\/Screenshot-2020-04-18-at-6.19.35-AM.png\",\"width\":940,\"height\":728},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#webpage\",\"url\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/\",\"name\":\"How to make a custom number plate reader - Part 1 | CloudxLab 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#primaryimage\"},\"datePublished\":\"2020-04-20T14:39:05+00:00\",\"dateModified\":\"2020-05-24T16:13:36+00:00\",\"author\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/e2c5cc7b933ebd4b15f9b463dc7cf1b4\"},\"breadcrumb\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/number-plate-reader\/#webpage\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/e2c5cc7b933ebd4b15f9b463dc7cf1b4\",\"name\":\"Praveen Pavithran\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/03c8d253347dcf9e04ec550cd6144973?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/03c8d253347dcf9e04ec550cd6144973?s=96&d=mm&r=g\",\"caption\":\"Praveen Pavithran\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/2941"}],"collection":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/comments?post=2941"}],"version-history":[{"count":26,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/2941\/revisions"}],"predecessor-version":[{"id":3078,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/2941\/revisions\/3078"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media\/2960"}],"wp:attachment":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media?parent=2941"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/categories?post=2941"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/tags?post=2941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}