{"id":3024,"date":"2020-04-29T08:16:22","date_gmt":"2020-04-29T08:16:22","guid":{"rendered":"https:\/\/cloudxlab.com\/blog\/?p=3024"},"modified":"2020-04-30T03:01:36","modified_gmt":"2020-04-30T03:01:36","slug":"object-detection-yolo-and-python-pydarknet","status":"publish","type":"post","link":"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/","title":{"rendered":"Object Detection with Yolo Python and OpenCV- Yolo 2"},"content":{"rendered":"\n<p>This blog is part of a series, where we examine practical applications of YOLO. In this blog, we will see how to set up object detection with YOLO and Python on images and video. We will also use Pydarknet, a Python wrapper for Darknet. The impact of different model configurations and GPU hardware on speed and accuracy will also be analysed.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2>Initial setup for YOLO with Python<\/h2>\n\n\n\n<p>I presume you have already seen the <a href=\"https:\/\/cloudxlab.com\/blog\/setup-yolo-with-darknet\/\">first blog<\/a> on YOLO, where we ran YOLO with Darknet. We will need the config, weights and names files from that setup. The files needed are:<\/p>\n\n\n\n<ol><li>yolov3.cfg &#8211; The standard config file. This will be in the cfg\/ directory.<\/li><li>yolo-tiny.cfg &#8211; The speed-optimised config file. This will be in the cfg\/ directory.<\/li><li>yolov3.weights &#8211; Pre-trained weights file for yolov3. This file is in the darknet\/ directory.<\/li><li>yolo-tiny.weights &#8211; Pre-trained speed-optimised weights file. 
This file is in the darknet\/ directory.<\/li><li>coco.names &#8211; The list of class names that the model can recognise. This is in the data\/ directory.<\/li><li>coco.data &#8211; A config data file kept in the cfg\/ directory.<\/li><\/ol>\n\n\n\n<p>Before we start writing the Python code, we will create a virtual environment using virtualenv.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">mkvirtualenv yolo-py<\/code><\/pre>\n\n\n\n<p>Install the required libraries using pip.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">pip install numpy opencv-python<\/code><\/pre>\n\n\n\n<h2>Object detection with YOLO, Python and OpenCV<\/h2>\n\n\n\n<p>The Python code to run is below.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-python line-numbers\">import numpy as np\nimport time\nimport cv2\n\n\nINPUT_FILE='data\/dog.jpg'\nOUTPUT_FILE='predicted.jpg'\nLABELS_FILE='data\/coco.names'\nCONFIG_FILE='cfg\/yolov3.cfg'\nWEIGHTS_FILE='yolov3.weights'\nCONFIDENCE_THRESHOLD=0.3\n\nLABELS = open(LABELS_FILE).read().strip().split(\"\\n\")\n\nnp.random.seed(4)\nCOLORS = np.random.randint(0, 255, size=(len(LABELS), 3),\n\tdtype=\"uint8\")\n\n\nnet = cv2.dnn.readNetFromDarknet(CONFIG_FILE, WEIGHTS_FILE)\n\nimage = cv2.imread(INPUT_FILE)\n(H, W) = image.shape[:2]\n\n# determine only the *output* layer names that we need from YOLO;\n# flatten() keeps this working whether getUnconnectedOutLayers()\n# returns Nx1 or 1-D indices (this varies across OpenCV versions)\nln = net.getLayerNames()\nln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]\n\n\nblob = cv2.dnn.blobFromImage(image, 1 \/ 255.0, (416, 416),\n\tswapRB=True, crop=False)\nnet.setInput(blob)\nstart = time.time()\nlayerOutputs = net.forward(ln)\nend = time.time()\n\n\nprint(\"[INFO] YOLO took {:.6f} seconds\".format(end - start))\n\n\n# initialize our lists of detected bounding boxes, confidences, and\n# class IDs, respectively\nboxes = []\nconfidences = []\nclassIDs = []\n\n# loop over each of the layer outputs\nfor output in layerOutputs:\n\t# loop over each of the detections\n\tfor detection in output:\n\t\t# 
extract the class ID and confidence (i.e., probability) of\n\t\t# the current object detection\n\t\tscores = detection[5:]\n\t\tclassID = np.argmax(scores)\n\t\tconfidence = scores[classID]\n\n\t\t# filter out weak predictions by ensuring the detected\n\t\t# probability is greater than the minimum probability\n\t\tif confidence &gt; CONFIDENCE_THRESHOLD:\n\t\t\t# scale the bounding box coordinates back relative to the\n\t\t\t# size of the image, keeping in mind that YOLO actually\n\t\t\t# returns the center (x, y)-coordinates of the bounding\n\t\t\t# box followed by the boxes' width and height\n\t\t\tbox = detection[0:4] * np.array([W, H, W, H])\n\t\t\t(centerX, centerY, width, height) = box.astype(\"int\")\n\n\t\t\t# use the center (x, y)-coordinates to derive the top\n\t\t\t# and left corner of the bounding box\n\t\t\tx = int(centerX - (width \/ 2))\n\t\t\ty = int(centerY - (height \/ 2))\n\n\t\t\t# update our list of bounding box coordinates, confidences,\n\t\t\t# and class IDs\n\t\t\tboxes.append([x, y, int(width), int(height)])\n\t\t\tconfidences.append(float(confidence))\n\t\t\tclassIDs.append(classID)\n\n# apply non-maxima suppression to suppress weak, overlapping bounding\n# boxes\nidxs = cv2.dnn.NMSBoxes(boxes, confidences, CONFIDENCE_THRESHOLD,\n\tCONFIDENCE_THRESHOLD)\n\n# ensure at least one detection exists\nif len(idxs) &gt; 0:\n\t# loop over the indexes we are keeping\n\tfor i in idxs.flatten():\n\t\t# extract the bounding box coordinates\n\t\t(x, y) = (boxes[i][0], boxes[i][1])\n\t\t(w, h) = (boxes[i][2], boxes[i][3])\n\n\t\tcolor = [int(c) for c in COLORS[classIDs[i]]]\n\n\t\tcv2.rectangle(image, (x, y), (x + w, y + h), color, 2)\n\t\ttext = \"{}: {:.4f}\".format(LABELS[classIDs[i]], confidences[i])\n\t\tcv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,\n\t\t\t0.5, color, 2)\n\n# save the output image\ncv2.imwrite(\"example.png\", image)\n<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"768\" 
height=\"576\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/example.png\" alt=\"Object Detection with Yolo and Python\" class=\"wp-image-3025\" \/><figcaption>Object Detection with Yolo and Python<\/figcaption><\/figure>\n\n\n\n<h2>YOLO with Video<\/h2>\n\n\n\n<p>Now that we know how to work with images, we can easily extend this to work with video. The code is mostly the same: we will read the video in a loop and treat each frame as an image. We will also measure the frames per second (FPS) to check the speed of the model. First, install the imutils package, which will be used in this segment.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">pip install imutils<\/code><\/pre>\n\n\n\n<p>We need input videos to analyse. I used a <a href=\"https:\/\/download.cnet.com\/MacX-YouTube-Downloader\/3000-2071_4-76641301.html\">YouTube downloader<\/a> to download <a href=\"https:\/\/www.youtube.com\/watch?v=MNn9qKG2UFI\">this video<\/a>. Feel free to use anything you like; I am a bit biased toward traffic videos. Video processing can be very time consuming. In case you want to stop the processing midway, press the &#8216;q&#8217; key while the output window is in focus. The code will stop and you can review the partial results. 
<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-python line-numbers\">import numpy as np\nimport time\nimport cv2\nimport imutils\nfrom imutils.video import FPS\nfrom imutils.video import VideoStream\n\n\n\nINPUT_FILE='traffic_1.mp4'\nOUTPUT_FILE='output.avi'\nLABELS_FILE='data\/coco.names'\nCONFIG_FILE='cfg\/yolov3.cfg'\nWEIGHTS_FILE='yolov3.weights'\nCONFIDENCE_THRESHOLD=0.3\n\nH=None\nW=None\n\nfps = FPS().start()\n\nfourcc = cv2.VideoWriter_fourcc(*\"MJPG\")\nwriter = cv2.VideoWriter(OUTPUT_FILE, fourcc, 30,\n\t(800, 600), True)\n\nLABELS = open(LABELS_FILE).read().strip().split(\"\\n\")\n\nnp.random.seed(4)\nCOLORS = np.random.randint(0, 255, size=(len(LABELS), 3),\n\tdtype=\"uint8\")\n\n\nnet = cv2.dnn.readNetFromDarknet(CONFIG_FILE, WEIGHTS_FILE)\n\nvs = cv2.VideoCapture(INPUT_FILE)\n\n\n# determine only the *output* layer names that we need from YOLO;\n# flatten() keeps this working across OpenCV versions\nln = net.getLayerNames()\nln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]\ncnt = 0\nwhile True:\n\tcnt += 1\n\tprint(\"Frame number\", cnt)\n\t# vs.read() does not raise at the end of the video; it returns\n\t# grabbed=False, so check the flag instead of using try\/except\n\t(grabbed, image) = vs.read()\n\tif not grabbed:\n\t\tbreak\n\tblob = cv2.dnn.blobFromImage(image, 1 \/ 255.0, (416, 416),\n\t\tswapRB=True, crop=False)\n\tnet.setInput(blob)\n\tif W is None or H is None:\n\t\t(H, W) = image.shape[:2]\n\tlayerOutputs = net.forward(ln)\n\n\t# initialize our lists of detected bounding boxes, confidences, and\n\t# class IDs, respectively\n\tboxes = []\n\tconfidences = []\n\tclassIDs = []\n\n\t# loop over each of the layer outputs\n\tfor output in layerOutputs:\n\t\t# loop over each of the detections\n\t\tfor detection in output:\n\t\t\t# extract the class ID and confidence (i.e., probability) of\n\t\t\t# the current object detection\n\t\t\tscores = detection[5:]\n\t\t\tclassID = np.argmax(scores)\n\t\t\tconfidence = scores[classID]\n\n\t\t\t# filter out weak predictions by ensuring the detected\n\t\t\t# probability is greater than the minimum probability\n\t\t\tif confidence &gt; 
CONFIDENCE_THRESHOLD:\n\t\t\t\t# scale the bounding box coordinates back relative to the\n\t\t\t\t# size of the image, keeping in mind that YOLO actually\n\t\t\t\t# returns the center (x, y)-coordinates of the bounding\n\t\t\t\t# box followed by the boxes' width and height\n\t\t\t\tbox = detection[0:4] * np.array([W, H, W, H])\n\t\t\t\t(centerX, centerY, width, height) = box.astype(\"int\")\n\n\t\t\t\t# use the center (x, y)-coordinates to derive the top\n\t\t\t\t# and left corner of the bounding box\n\t\t\t\tx = int(centerX - (width \/ 2))\n\t\t\t\ty = int(centerY - (height \/ 2))\n\n\t\t\t\t# update our list of bounding box coordinates, confidences,\n\t\t\t\t# and class IDs\n\t\t\t\tboxes.append([x, y, int(width), int(height)])\n\t\t\t\tconfidences.append(float(confidence))\n\t\t\t\tclassIDs.append(classID)\n\n\t# apply non-maxima suppression to suppress weak, overlapping bounding\n\t# boxes\n\tidxs = cv2.dnn.NMSBoxes(boxes, confidences, CONFIDENCE_THRESHOLD,\n\t\tCONFIDENCE_THRESHOLD)\n\n\t# ensure at least one detection exists\n\tif len(idxs) &gt; 0:\n\t\t# loop over the indexes we are keeping\n\t\tfor i in idxs.flatten():\n\t\t\t# extract the bounding box coordinates\n\t\t\t(x, y) = (boxes[i][0], boxes[i][1])\n\t\t\t(w, h) = (boxes[i][2], boxes[i][3])\n\n\t\t\tcolor = [int(c) for c in COLORS[classIDs[i]]]\n\n\t\t\tcv2.rectangle(image, (x, y), (x + w, y + h), color, 2)\n\t\t\ttext = \"{}: {:.4f}\".format(LABELS[classIDs[i]], confidences[i])\n\t\t\tcv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,\n\t\t\t\t0.5, color, 2)\n\n\t# show the output image and write the frame to the output file\n\tcv2.imshow(\"output\", cv2.resize(image, (800, 600)))\n\twriter.write(cv2.resize(image, (800, 600)))\n\tfps.update()\n\tkey = cv2.waitKey(1) &amp; 0xFF\n\tif key == ord(\"q\"):\n\t\tbreak\n\nfps.stop()\n\nprint(\"[INFO] elapsed time: {:.2f}\".format(fps.elapsed()))\nprint(\"[INFO] approx. 
FPS: {:.2f}\".format(fps.fps()))\n\n# do a bit of cleanup\ncv2.destroyAllWindows()\n\n# release the file pointers\nprint(\"[INFO] cleaning up...\")\nwriter.release()\nvs.release()\n<\/code><\/pre>\n\n\n\n<p>The analysed output video can be seen below.<\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube\"><div class=\"wp-block-embed__wrapper\">\n<div style=\"max-width: 1333px;\"><div style=\"left: 0; width: 100%; height: 0; position: relative; padding-bottom: 75%;\"><iframe title=\"Traffic Video Analysed with YOLO\" src=\"https:\/\/www.youtube.com\/embed\/bjtsoZcSYPw?rel=0\" style=\"border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;\" allowfullscreen scrolling=\"no\" allow=\"encrypted-media; accelerometer; gyroscope; picture-in-picture\"><\/iframe><\/div><\/div><script type=\"text\/javascript\">window.addEventListener(\"message\",function(e){\n                window.parent.postMessage(e.data,\"*\");\n            },false);<\/script>\n<\/div><figcaption>Object detection in video with YOLO and Python<\/figcaption><\/figure>\n\n\n\n<h2>Video Analytics with Pydarknet<\/h2>\n\n\n\n<p><a href=\"https:\/\/pypi.org\/project\/yolo34py\/\">Pydarknet<\/a> is a Python wrapper on top of the Darknet model. I would strongly recommend it, as it is easier to use and can also be used with a GPU for hardware acceleration.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-bash\">pip3 install numpy\npip3 install yolo34py\n#GPU version\npip3 install yolo34py-gpu<\/code><\/pre>\n\n\n\n<p>The code that uses the package is below. I have also included comments in each section explaining what each component does.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"language-python line-numbers\">from pydarknet import Detector, Image\nimport cv2\nimport numpy as np\nimport imutils\nfrom imutils.video import FPS\nfrom imutils.video import VideoStream\n\n\n#Files used in the program. 
Make changes to the input, config, weights etc. as needed\nINPUT_FILE='traffic_1.mp4'\nOUTPUT_FILE='output.avi'\nLABELS_FILE='data\/coco.names'\nCONFIG_FILE='cfg\/yolov3.cfg'\nWEIGHTS_FILE='yolov3.weights'\nDATA_FILE=\"cfg\/coco.data\"\nCONFIDENCE_THRESHOLD=0.3\n\n\n#Start the FPS counter\nfps = FPS().start()\n\n#Declare the output file to save the analysed video\nfourcc = cv2.VideoWriter_fourcc(*\"MJPG\")\nwriter = cv2.VideoWriter(OUTPUT_FILE, fourcc, 30,\n\t(800, 600), True)\n\n#Read all labels\nLABELS = open(LABELS_FILE).read().strip().split(\"\\n\")\nnp.random.seed(4)\nCOLORS = np.random.randint(0, 255, size=(len(LABELS), 3),\n\tdtype=\"uint8\")\n#Create a dictionary with a different color for each class of labels\nCOLOR_LABEL={}\nfor i in range(0, len(LABELS)):\n    COLOR_LABEL[LABELS[i]]=COLORS[i]\n\n#Read the YOLO files\nnet = Detector(bytes(CONFIG_FILE, encoding=\"utf-8\"), bytes(WEIGHTS_FILE, encoding=\"utf-8\"), 0, bytes(DATA_FILE, encoding=\"utf-8\"))\n\n#Set up the video reader\nvs = cv2.VideoCapture(INPUT_FILE)\ncnt = 0\n\n\n#We have set a limit of 500 frames. 
This can be changed\nwhile cnt &lt; 500:\n    cnt += 1\n    print(\"Frame number\", cnt)\n    # vs.read() does not raise at the end of the video; it returns\n    # grabbed=False, so check the flag instead of using try\/except\n    (grabbed, image) = vs.read()\n    if not grabbed:\n        break\n\n    img_darknet = Image(image)\n\n    #Run detection on each frame\n    results = net.detect(img_darknet)\n\n    #Draw the bounding box and label text for each detection\n    for cat, score, bounds in results:\n        x, y, w, h = bounds\n        color = [int(c) for c in COLOR_LABEL[str(cat.decode(\"utf-8\"))]]\n        text = \"{}: {:.4f}\".format(str(cat.decode(\"utf-8\")), score)\n        cv2.rectangle(image, (int(x - w \/ 2), int(y - h \/ 2)), (int(x + w \/ 2), int(y + h \/ 2)), color, thickness=2)\n        cv2.putText(image, text, (int(x - w \/ 2), int(y - h \/ 2 - 5)), cv2.FONT_HERSHEY_COMPLEX, 1, color)\n\n    #write frame to output file\n    writer.write(cv2.resize(image, (800, 600)))\n    fps.update()\n\nfps.stop()\n\nprint(\"[INFO] elapsed time: {:.2f}\".format(fps.elapsed()))\nprint(\"[INFO] approx. FPS: {:.2f}\".format(fps.fps()))\n\n# do a bit of cleanup\ncv2.destroyAllWindows()\n\n# release the file pointers\nprint(\"[INFO] cleaning up...\")\nwriter.release()\nvs.release()\n<\/code><\/pre>\n\n\n\n<h2>Speed of video processing<\/h2>\n\n\n\n<p>I then analysed the same video with different model configurations and hardware. To use yolov3-tiny, change the config and weights file paths. To change the input size of the YOLOv3 model, open the config file and change the height and width parameters. I have tested it with 608 (default), 416 and 320. For the GPU runs, I used a GCP compute instance with 1 NVIDIA K10 GPU. The FPS from the different runs can be found in the table below. 
<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>No<\/td><td>Type<\/td><td>FPS (GPU)<\/td><td>FPS (CPU)<\/td><\/tr><tr><td>1<\/td><td>YOLOv3-608<\/td><td>4.15<\/td><td>0.03<\/td><\/tr><tr><td>2<\/td><td>YOLOv3-416<\/td><td>4.83<\/td><td>0.05<\/td><\/tr><tr><td>3<\/td><td>YOLOv3-320<\/td><td>5.75<\/td><td>0.14<\/td><\/tr><tr><td>4<\/td><td>YOLOv3-tiny<\/td><td>8.67<\/td><td>0.59<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>A smaller model is faster at the expense of accuracy. The gains from a smaller model matter much more if you are running on non-GPU hardware. If you have access to a large GPU, the bigger model is better.<\/p>\n\n\n\n<p>In the other segments of this series we will explore the following:<\/p>\n\n\n\n<ol><li><a href=\"https:\/\/cloudxlab.com\/blog\/setup-yolo-with-darknet\/\">Running YOLO with Darknet<\/a><\/li><li><a href=\"https:\/\/cloudxlab.com\/blog\/how-to-run-yolo-on-cctv-feed\/\">Running YOLO on the CCTV feed<\/a><\/li><li>Label custom images for training a YOLO model<\/li><li>Custom training with YOLO<\/li><\/ol>\n\n\n\n<h2>References<\/h2>\n\n\n\n<ol><li><a href=\"https:\/\/pjreddie.com\/darknet\/yolo\/\">Darknet<\/a><\/li><li><a href=\"https:\/\/www.pyimagesearch.com\/2018\/11\/12\/yolo-object-detection-with-opencv\/\">Pyimagesearch<\/a><\/li><li><a href=\"https:\/\/www.learnopencv.com\/deep-learning-based-object-detection-using-yolov3-with-opencv-python-c\/\">LearnOpenCV<\/a><\/li><\/ol>\n","protected":false},"excerpt":{"rendered":"<p>We will see how to set up object detection with YOLO and Python on images and video. We will also use Pydarknet, a wrapper for Darknet, in this blog. 
The impact of different configurations GPU on speed and accuracy will also be analysed.<\/p>\n","protected":false},"author":26,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[67],"tags":[61],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Object Detection with Yolo Python and OpenCV- Yolo 2 | CloudxLab Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Object Detection with Yolo Python and OpenCV- Yolo 2 | CloudxLab Blog\" \/>\n<meta property=\"og:description\" content=\"we will see how to setup object detection with Yolo and Python on images and video. We will also use Pydarknet a wrapper for Darknet in this blog. 
The impact of different configurations GPU on speed and accuracy will also be analysed.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/\" \/>\n<meta property=\"og:site_name\" content=\"CloudxLab Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cloudxlab\" \/>\n<meta property=\"article:published_time\" content=\"2020-04-29T08:16:22+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-04-30T03:01:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/example.png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:site\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"9 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"CloudxLab Blog\",\"description\":\"Learn AI, Machine Learning, Deep Learning, Devops &amp; Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/cloudxlab.com\/blog\/?s={search_term_string}\",\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/example.png\",\"contentUrl\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2020\/04\/example.png\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#webpage\",\"url\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/\",\"name\":\"Object Detection with Yolo Python and OpenCV- Yolo 2 | CloudxLab Blog\",\"isPartOf\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#primaryimage\"},\"datePublished\":\"2020-04-29T08:16:22+00:00\",\"dateModified\":\"2020-04-30T03:01:36+00:00\",\"author\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/e2c5cc7b933ebd4b15f9b463dc7cf1b4\"},\"breadcrumb\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/object-detection-yolo-and-python-pydarknet\/#webpage\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/e2c5cc7b933ebd4b15f9b463dc7cf1b4\",\"name\":\"Praveen 
Pavithran\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/03c8d253347dcf9e04ec550cd6144973?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/03c8d253347dcf9e04ec550cd6144973?s=96&d=mm&r=g\",\"caption\":\"Praveen Pavithran\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3024"}],"collection":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/comments?post=3024"}],"version-history":[{"count":8,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3024\/revisions"}],"predecessor-version":[{"id":3062,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3024\/revisions\/3062"}],"wp:attachment":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media?parent=3024"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/categories?post=3024"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/tags?post=3024"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}