<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Mohamed Amine Ben Abdeljelil on Medium]]></title>
        <description><![CDATA[Stories by Mohamed Amine Ben Abdeljelil on Medium]]></description>
        <link>https://medium.com/@mohamedamine.benabdeljelil?source=rss-a5817cb53365------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*ITF2saIZjCeN3s_rbzAexQ.jpeg</url>
            <title>Stories by Mohamed Amine Ben Abdeljelil on Medium</title>
            <link>https://medium.com/@mohamedamine.benabdeljelil?source=rss-a5817cb53365------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 09:41:07 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@mohamedamine.benabdeljelil/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[From Model to API: A Practical Guide to Implementing YOLOv3 with FastAPI]]></title>
            <link>https://medium.com/@mohamedamine.benabdeljelil/from-model-to-api-a-practical-guide-to-implementing-yolov3-with-fastapi-5734d84b022d?source=rss-a5817cb53365------2</link>
            <guid isPermaLink="false">https://medium.com/p/5734d84b022d</guid>
            <category><![CDATA[fastapi]]></category>
            <category><![CDATA[object-detection]]></category>
            <category><![CDATA[deployment]]></category>
            <category><![CDATA[oop-concepts]]></category>
            <category><![CDATA[yolov3]]></category>
            <dc:creator><![CDATA[Mohamed Amine Ben Abdeljelil]]></dc:creator>
            <pubDate>Tue, 29 Aug 2023 12:48:33 GMT</pubDate>
            <atom:updated>2023-08-30T09:06:15.968Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/874/1*pedAkaM62tgikNauH28MeA.png" /></figure><h3>Introduction</h3><p>YOLOv3, short for “You Only Look Once version 3”, stands out as a powerful and efficient object detection algorithm, capable of identifying objects in images and videos swiftly and accurately. Leveraging the capabilities of YOLOv3 within a FastAPI framework opens up new possibilities for creating seamless and responsive object detection applications.</p><p>If you’re eager to harness the potential of YOLOv3 and FastAPI to deploy your own object detection solution, you’re in the right place!</p><p>In this comprehensive guide, we’ll take you through a step-by-step journey, breaking down the process of setting up and deploying YOLOv3 object detection using the FastAPI framework, and offering insights into best practices that will enable you to create a robust and efficient object detection API.</p><p>So, let’s dive in and unlock the potential of YOLOv3 and FastAPI!</p><blockquote><strong>P.S.: This tutorial offers an in-depth exploration of the subject matter, which may result in a comprehensive and potentially lengthy read. However, for those seeking a quicker glance at the code, you can directly access the </strong><a href="https://github.com/mohamedamine99/Object-Detection-with-YOLOv3-and-FastAPI/tree/master"><strong>GitHub repository</strong></a><strong> for a concise overview.</strong></blockquote><h3>Prerequisites and Installation Setup</h3><p>This tutorial assumes you have a foundational understanding of Python programming, object-oriented concepts, a basic grasp of OpenCV principles, and a basic understanding of neural networks.</p><p>For this tutorial, I opted for the <strong>PyCharm IDE</strong> due to its advantageous features. 
<strong>However</strong>, feel free to choose the IDE that aligns best with your preferences and familiarity.</p><p>I employed <strong>Python version 3.11.4</strong> for this tutorial; nonetheless, other recent 3.x versions should work as well.</p><p>Libraries you need to install:</p><pre>pip install numpy<br>pip install opencv-python<br>pip install fastapi<br>pip install uvicorn<br>pip install pydantic</pre><h4>Download YOLOv3 Weights and COCO labels</h4><p>Download and place these files in your working directory:</p><ul><li><a href="https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg">yolov3.cfg</a>: the YOLOv3 config file, which contains the various hyperparameters and settings that define the architecture and behavior of the YOLOv3 model.</li><li><a href="https://pjreddie.com/media/files/yolov3.weights">yolov3.weights</a>: the file that contains the pre-trained weights of the YOLOv3 model.</li><li><a href="https://github.com/pjreddie/darknet/blob/master/data/coco.names">coco.names</a>: a text file that contains the list of class names corresponding to the objects that the YOLOv3 model is trained to detect, using the COCO (Common Objects in Context) dataset.</li></ul><h3>Implementing YOLOv3 Object Detection with OpenCV</h3><p>In this section, we will discuss how to implement YOLOv3 object detection using the versatile capabilities of the OpenCV library and OOP concepts.</p><p>Why opt for an object-oriented approach, you might wonder, when a procedural approach could suffice? The answer lies in the elegance and efficiency of managing complexity. By structuring the code within a class, we unlock the ability to organize functionalities, maintain clear boundaries, and enhance reusability.</p><p>Let’s now delve into the inner workings of this module and discover how it brings YOLOv3 to life in a structured and organized manner.</p><p>Let’s start by creating a new Python file named Yolov3_Detector.py. 
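</p><p>Before filling in the details, here is a sketch of the overall shape of the class we are about to build (the method bodies are stubs at this point; each one is implemented step by step below):</p>

```python
class Detector:
    """Skeleton of the YOLOv3 detector class built in this section."""

    def __init__(self, config_path='./yolov3.cfg', weights_path='./yolov3.weights',
                 coco_names_path='./coco.names', score_threshold=0.5, NMS_threshold=0.5):
        ...  # will load the model, the COCO labels, and the detection parameters

    def load_model(self, config_path, weights_path): ...
    def load_coco_labels(self, coco_names_path): ...
    def set_detection_params(self, score_threshold, NMS_threshold): ...
    def preprocess_img(self, img, size=(320, 320)): ...
    def run_detection_on_img(self, img): ...
```

<p>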
This file will be the module containing the Detector class, designed to streamline and configure the object detection process.</p><p>Let’s begin with some imports:</p><pre>import cv2<br>import numpy as np</pre><ul><li><strong>Detector class definition and constructor method:</strong></li></ul><pre>class Detector:<br>    &quot;&quot;&quot;<br>    Class for performing object detection using YOLOv3 model.<br>    &quot;&quot;&quot;<br><br>    def __init__(self, config_path=&#39;./yolov3.cfg&#39;, weights_path=&#39;./yolov3.weights&#39;,<br>                 coco_names_path=&#39;./coco.names&#39;, score_threshold=0.5, NMS_threshold=0.5):<br>        &quot;&quot;&quot;<br>        Initializes the Detector instance.<br><br>        Parameters:<br>            config_path (str): Path to YOLO model configuration file.<br>            weights_path (str): Path to YOLO model weights file.<br>            coco_names_path (str): Path to COCO class names file.<br>            score_threshold (float): Detection score threshold.<br>            NMS_threshold (float): Non-Maximum Suppression threshold.<br>        &quot;&quot;&quot;<br>        self.load_model(config_path, weights_path)<br>        self.load_coco_labels(coco_names_path)<br>        self.set_detection_params(score_threshold, NMS_threshold)</pre><p>This constructor method creates an instance of the `Detector` class, which is designed for object detection using the YOLOv3 model, while allowing customization of various parameters that affect the behavior of the object detection process.</p><p><strong>How it works:</strong></p><p>During initialization, this constructor method loads the YOLO model configuration and weights using the specified paths, and it loads the COCO class names for label mapping. It also sets the detection parameters for later use in the object detection process. 
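</p><p>Once the class is complete, typical usage is just a couple of lines. The sketch below guards the instantiation so it only runs when the model files listed earlier are actually present in the working directory:</p>

```python
from pathlib import Path

detector = None
# Only instantiate once yolov3.cfg, yolov3.weights and coco.names are downloaded
if all(Path(name).exists() for name in ("yolov3.cfg", "yolov3.weights", "coco.names")):
    from Yolov3_Detector import Detector
    detector = Detector(score_threshold=0.6, NMS_threshold=0.4)
```

<p>With the files in place, detector.run_detection_on_img(img) then returns a list of detection dictionaries.</p><p>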
After successful initialization, the instance can be used to perform object detection.</p><p>This method internally uses several other methods, which are explained in the sections below.</p><p><strong>Parameters:</strong></p><ul><li>config_path (str, optional): The path to the YOLO model configuration file. Default is ./yolov3.cfg.</li><li>weights_path (str, optional): The path to the YOLO model weights file. Default is ./yolov3.weights.</li><li>coco_names_path (str, optional): The path to the COCO class names file. Default is ./coco.names.</li><li>score_threshold (float, optional): The threshold for detection scores. Objects with scores lower than this value will be filtered out. Default is <strong>0.5</strong>.</li><li>NMS_threshold (float, optional): The Non-Maximum Suppression (NMS) threshold. It controls the suppression of overlapping bounding boxes during post-processing. Default is 0.5.</li></ul><p><strong>load_model method:</strong></p><pre>def load_model(self, config_path, weights_path):<br>        &quot;&quot;&quot;<br>        Loads the YOLO model using provided configuration and weights paths.<br><br>        Parameters:<br>            config_path (str): Path to YOLO model configuration file.<br>            weights_path (str): Path to YOLO model weights file.<br>        &quot;&quot;&quot;<br>        net = cv2.dnn.readNet(config_path, weights_path)<br>        self.net = net<br>        print(f&#39;* Model config loaded from {config_path}\n* Model weights loaded from {weights_path}&#39;)</pre><p>The load_model method, used internally by the constructor, is responsible for loading the YOLO model using the provided configuration and weights paths.</p><p>It uses OpenCV&#39;s cv2.dnn.readNet function, assigns the loaded model to the instance variable self.net, and prints a message confirming the successful loading of the model&#39;s configuration and weights.</p><ul><li><strong>load_coco_labels method:</strong></li></ul><pre>    def load_coco_labels(self, 
coco_names_path):<br>        &quot;&quot;&quot;<br>        Loads COCO class labels from the provided file.<br><br>        Parameters:<br>            coco_names_path (str): Path to COCO class names file.<br>        &quot;&quot;&quot;<br>        labels = []<br>        with open(coco_names_path, &#39;rt&#39;) as coco_file:<br>            labels = coco_file.read().rstrip(&#39;\n&#39;).rsplit(&#39;\n&#39;)<br>        self.labels = labels<br>        print(f&#39;* COCO labels loaded from {coco_names_path}&#39;)</pre><p>The load_coco_labels method, used internally by the constructor, loads COCO class labels from the provided file, stores them in the instance variable self.labels, and prints a message to confirm the successful loading of the labels.</p><ul><li><strong>preprocess_img method:</strong></li></ul><pre>    def preprocess_img(self, img, size=(320, 320)):<br>        &quot;&quot;&quot;<br>        Preprocesses the input image using OpenCV&#39;s blobFromImage.<br><br>        Parameters:<br>            img (np.ndarray): Input image.<br>            size (tuple): Desired image size after preprocessing.<br><br>        Returns:<br>            np.ndarray: Preprocessed image blob.<br>        &quot;&quot;&quot;<br>        self.resize = size<br>        blob = cv2.dnn.blobFromImage(img, 1 / 255, size, [0, 0, 0], 1, crop=False)<br>        return blob</pre><p>The preprocess_img method preprocesses an input image using OpenCV&#39;s blobFromImage function.</p><p>It resizes the image to the desired size specified in the size parameter, and it returns the preprocessed image blob suitable for input to the YOLOv3 model. 
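</p><p>To make the preprocessing concrete, here is a rough plain-NumPy equivalent of what blobFromImage does with these arguments; nearest-neighbour resizing stands in for OpenCV&#39;s interpolation, so individual pixel values may differ slightly:</p>

```python
import numpy as np

def blob_from_image(img: np.ndarray, size=(320, 320)) -> np.ndarray:
    """Rough NumPy equivalent of cv2.dnn.blobFromImage(img, 1/255, size, [0,0,0], 1, crop=False)."""
    h, w = img.shape[:2]
    # Nearest-neighbour resize to the target width/height
    rows = np.arange(size[1]) * h // size[1]
    cols = np.arange(size[0]) * w // size[0]
    resized = img[rows][:, cols]
    # Swap BGR -> RGB (the swapRB=1 argument), scale to [0, 1],
    # and reorder HWC -> NCHW with a leading batch dimension
    return (resized[..., ::-1] / 255.0).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

blob = blob_from_image(np.zeros((240, 320, 3), dtype=np.uint8))
print(blob.shape)  # (1, 3, 320, 320)
```

<p>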
The desired image size is also stored in the instance variable self.resize.</p><ul><li><strong>set_detection_params method:</strong></li></ul><pre>    def set_detection_params(self, score_threshold, NMS_threshold):<br>        &quot;&quot;&quot;<br>        Sets detection parameters.<br><br>        Parameters:<br>            score_threshold (float): Detection score threshold.<br>            NMS_threshold (float): Non-Maximum Suppression threshold.<br>        &quot;&quot;&quot;<br>        self.score_threshold = score_threshold<br>        self.NMS_threshold = NMS_threshold</pre><p>The set_detection_params method, used internally by the constructor, sets the detection parameters of the Detector instance.</p><p>It assigns the provided values for the detection score threshold and the Non-Maximum Suppression threshold to the corresponding instance variables (self.score_threshold and self.NMS_threshold).</p><ul><li><strong>run_detection_on_img method:</strong></li></ul><p>The run_detection_on_img method is responsible for running object detection on an input image using the YOLOv3 model and returning the results in the form of a list of dictionaries. 
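</p><p>Concretely, each entry in the returned list is a plain dict shaped like the following (the values here are made up for illustration):</p>

```python
detection = {
    "label": "person",                 # COCO class name
    "confidence": "97%",               # detection score as a percentage string
    "bbox_xywh": [34, 50, 120, 240],   # top-left x, y, then width and height, in pixels
}
```

<p>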
The dictionaries contain information about the detected objects, including their labels, confidence scores, and bounding box coordinates.</p><p>Let’s break it down:</p><p><strong>First, we pre-process</strong> the input image and set the blob (the pre-processed image) as the input of the neural network model.</p><pre>def run_detection_on_img(self, img):<br>    &quot;&quot;&quot;<br>    Runs object detection on the provided image.<br><br>    Parameters:<br>        img (np.ndarray): Input image.<br><br>    Returns:<br>        list: List of dictionaries containing detection results.<br>    &quot;&quot;&quot;<br>    # Preprocess the input image using the previously defined &#39;preprocess_img&#39; method<br>    blob = self.preprocess_img(img)<br>    self.net.setInput(blob)</pre><p>Then we <strong>perform a forward pass</strong> of the blob (pre-processed image) through the neural network model.</p><pre>    # Perform forward pass through the DNN model<br>    layers_names = self.net.getLayerNames()<br>    output_names = [(layers_names[idx - 1]) for idx in self.net.getUnconnectedOutLayers()]<br>    outputs = self.net.forward(output_names)</pre><ol><li>The method getLayerNames() retrieves the names of all the layers in the neural network model (self.net).</li><li>getUnconnectedOutLayers() returns the indices of the layers that are not connected to any subsequent layers. Since these indices are typically one-based, subtracting 1 gives us the zero-based index that can be used to access the layers. These unconnected layers are typically the output layers of the model. Why multiple output layers? 
The backbone architecture of the YOLOv3 model (often based on Darknet-53) has multiple output layers so that it can detect objects at different scales.</li><li>Then we map the obtained indices of unconnected output layers to their corresponding names using the layers_names list.</li><li>Finally, we perform a forward pass through the neural network model (self.net) using the specified output_names list. It returns the outputs of the layers specified in output_names.</li></ol><p>Now that we have our outputs, let’s decode them into useful information.</p><pre>    # Get the shape of the input image<br>    hT, wT, cT = img.shape<br><br>    # Create empty lists to store the bounding boxes, class IDs, and confidence scores for detected objects<br>    bbox = []<br>    class_ids = []<br>    confs = []<br><br>    # Loop over each output of the DNN model after the forward pass<br>    for output in outputs:<br>        # Loop over each detection in the output<br>        for det in output:<br>            # Extract the class ID, confidence score, and bounding box coordinates from the detection<br>            scores = det[5:]<br>            class_id = np.argmax(scores)<br>            confidence = scores[class_id]<br>            if confidence &gt; self.score_threshold:<br>                w, h = int(det[2] * wT), int(det[3] * hT)<br>                x, y = int((det[0] * wT) - w / 2), int((det[1] * hT) - h / 2)<br>                bbox.append([x, y, w, h])<br>                class_ids.append(class_id)<br>                confs.append(float(confidence))</pre><ul><li>We iterate through the outputs of the YOLOv3 model and process each detected object.</li><li>For each output and detection, we extract the confidence scores (scores), determine the class with the highest confidence (class_id), and check if the confidence is above the specified threshold.</li><li>If the confidence is 
sufficient, it calculates the bounding box coordinates based on the detected object&#39;s information and image dimensions. These coordinates and associated information are added to the respective lists (bbox, class_ids, and confs).</li></ul><pre>    # Perform non-maximum suppression to eliminate overlapping bounding boxes<br>    indices = cv2.dnn.NMSBoxes(bbox, confs, self.score_threshold, self.NMS_threshold)</pre><ul><li>Here, non-maximum suppression (NMS) is applied to the list of bounding boxes (bbox) and associated confidence scores (confs) to eliminate overlapping and redundant detections. The cv2.dnn.NMSBoxes function returns the indices of the boxes that survive the NMS process. These indices correspond to the selected, non-overlapping detections.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/305/1*vs_0T9IQGx8zkKuNeUt1ww.png" /><figcaption>Example of how NMS removes redundant detections.</figcaption></figure><pre>    results_list = []<br>    # Loop over each index in the indices list<br>    for i in indices:<br>        result_dict = {}<br>        # Get the bounding box coordinates, class label, and confidence score for the current index<br>        box = bbox[i]<br>        class_id = class_ids[i]<br>        conf = int(confs[i] * 100)<br>        # Create a dictionary to store the current detection&#39;s results<br>        result_dict[&#39;label&#39;] = self.labels[class_id]<br>        result_dict[&#39;confidence&#39;] = str(conf) + &#39;%&#39;<br>        result_dict[&#39;bbox_xywh&#39;] = box<br><br>        results_list.append(result_dict)<br><br>    # Return the list of detections<br>    return results_list</pre><p>Finally, run_detection_on_img organizes the filtered detection information into a list of dictionaries, results_list. Each dictionary provides information about the detected object’s label, confidence level, and bounding box coordinates, making it easy to access and analyze 
the results of the object detection process.</p><h3>Model Deployment with FastAPI</h3><p>In this section, we’ll explore how to seamlessly transform your trained YOLOv3 object detection model into a dynamic and accessible API using FastAPI.</p><p>First, create a new Python script named main.py.</p><p>Import libraries and modules:</p><pre>from fastapi import FastAPI, File, UploadFile<br>import uvicorn<br>from pydantic import BaseModel<br><br>from collections import Counter<br>import numpy as np<br>from io import BytesIO<br>import cv2<br><br>from Yolov3_Detector import Detector</pre><p>Let’s define a function that reads a binary image file and transforms it into a numpy.ndarray.</p><pre>def read_image(file) -&gt; np.ndarray:<br>    &quot;&quot;&quot;Reads and decodes an image from an uploaded file.&quot;&quot;&quot;<br>    # Create a BytesIO stream from the uploaded file<br>    image_stream = BytesIO(file)<br><br>    # Move the stream&#39;s position to the beginning<br>    image_stream.seek(0)<br><br>    # Read the bytes from the stream and decode them using OpenCV<br>    # This will decode the image data into a NumPy array<br>    image = cv2.imdecode(np.frombuffer(image_stream.read(), np.uint8), cv2.IMREAD_COLOR)<br><br>    # Return the decoded image as a NumPy array<br>    return image</pre><p>The read_image function is useful for processing image uploads in a FastAPI application. 
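</p><p>The stream handling can be exercised without a real image: the hypothetical helper below mirrors read_image&#39;s BytesIO and np.frombuffer steps, leaving out only the final cv2.imdecode call:</p>

```python
import numpy as np
from io import BytesIO

def bytes_to_array(file_bytes: bytes) -> np.ndarray:
    """Mirrors read_image's stream handling, minus cv2.imdecode."""
    image_stream = BytesIO(file_bytes)
    image_stream.seek(0)  # rewind to the beginning, as read_image does
    return np.frombuffer(image_stream.read(), np.uint8)

arr = bytes_to_array(b"\x00\x01\x02\x03")
print(arr.tolist())  # [0, 1, 2, 3]
```

<p>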
It takes an uploaded image file, creates a stream from the file&#39;s binary data, decodes the image data using OpenCV, and returns the decoded image as a NumPy array.</p><pre>class DetectionResults(BaseModel):<br>    &quot;&quot;&quot;Model class to represent the results of object detection.&quot;&quot;&quot;<br>    filename: str = None<br>    results_str: str = &#39;No detections&#39;<br>    results_list: list = None<br><br>class DetectionParams(BaseModel):<br>    &quot;&quot;&quot;Model class to represent detection parameters.&quot;&quot;&quot;<br>    score_threshold: float = 0.5<br>    NMS_threshold: float = 0.5</pre><p>These two model classes (DetectionResults and DetectionParams) are used to define the structure and default values for the results of object detection and the parameters used for the detection process. They both inherit from the BaseModel class provided by pydantic, which makes it easier to validate, serialize, and deserialize data related to object detection within the FastAPI application.</p><p>Create FastAPI and Detector instances:</p><pre># Create FastAPI and Detector instances<br>app = FastAPI()<br>detector = Detector()</pre><p>Now let’s actually use FastAPI and create an endpoint that performs object detection on an uploaded image:</p><pre>@app.post(&quot;/detection/&quot;)<br>async def detect_on_img(file: UploadFile = File(...)):<br>    &quot;&quot;&quot;Endpoint to perform object detection on an uploaded image.&quot;&quot;&quot;<br><br>    # Create an instance of DetectionResults to store the detection results<br>    results = DetectionResults()<br><br>    # Read the uploaded image and decode it using OpenCV<br>    img = read_image(await file.read())<br><br>    # Perform object detection using the detector instance<br>    results.results_list = detector.run_detection_on_img(img)<br><br>    # Store the filename of the uploaded image in the results<br>    results.filename = file.filename<br><br>    # If objects are detected in the image, 
update the results string<br>    if len(results.results_list):<br>        detected_labels_counter = Counter([detected_object[&#39;label&#39;] for detected_object in results.results_list])<br>        str2 = &quot;, &quot;.join(f&quot;{value} x {key}&quot; for key, value in detected_labels_counter.items())<br>        results.results_str = f&#39;Found {len(results.results_list)} objects: &#39; + str2<br><br>    # Return the DetectionResults instance containing the detection results<br>    return results</pre><p>Explanation of each step:</p><ol><li>@app.post(&quot;/detection/&quot;) <br>This is a FastAPI decorator that defines a POST endpoint for the URL path `/detection/`. The endpoint will be used to handle HTTP POST requests.</li></ol><p>2. async def detect_on_img(file: UploadFile = File(...))</p><ul><li>This is the function that will be executed when a POST request is made to the `/detection/` endpoint.</li><li>The function takes an argument named `file` of type `UploadFile`. This argument represents the uploaded image file.</li></ul><p>3. Creating a DetectionResults instance:</p><ul><li>An instance of the `DetectionResults` class is created to store the detection results.</li></ul><p>4. Reading and decoding the uploaded image:</p><ul><li>The `read_image` function is used to read and decode the uploaded image file using OpenCV. The result is stored in the `img` variable.</li></ul><p>5. Performing object detection:</p><ul><li>The `run_detection_on_img` method of the `detector` instance is called to perform object detection on the `img`. The detection results are stored in the `results_list` attribute of the `results` instance.</li></ul><p>6. Storing the filename:</p><ul><li>The filename of the uploaded image is stored in the `filename` attribute of the `results` instance.</li></ul><p>7. Updating the results string if objects are detected:</p><ul><li>If objects are detected in the image (i.e., `results_list` is not empty), a counter is created to count the occurrences of each detected label. The `results_str` attribute of the `results` instance is updated to provide information about the detected objects and their counts.</li></ul><p>8. Returning the results:</p><ul><li>The `results` instance, which contains the detection results and related information, is returned as the response to the HTTP POST request.</li></ul><p>In summary, this FastAPI endpoint processes an uploaded image, performs object detection using a `detector` instance, and returns the detection results along with relevant information, such as the filename and a summary of detected objects.</p><pre>if __name__ == &#39;__main__&#39;:<br>    uvicorn.run(app, host=&#39;127.0.0.1&#39;, port=8000)</pre><p>Now we can start the FastAPI application using the uvicorn server when the script is run directly.</p><h3>Now let’s put everything to the test</h3><h4>Run the FastAPI Server:</h4><ul><li><strong>Method 1</strong>: Run with the uvicorn CLI. Activate your environment, then run the following command:</li></ul><pre>uvicorn main:app --host 127.0.0.1 --port 8000</pre><ul><li><strong>Method 2</strong>: Execute the main.py script using either the command-line interface (CLI) or your preferred code editor:</li></ul><pre>python main.py</pre><h4>Make API Requests:</h4><ul><li><strong>Method 1</strong>: Use tools like curl or API clients to make POST requests to the API endpoint for object detection. 
Here’s an example of how to make a simple API request using curl (note the trailing slash, which matches the route defined above):</li></ul><pre>curl -X POST -F &quot;file=@image.jpg&quot; http://localhost:8000/detection/</pre><ul><li><strong>Method 2</strong>: Access Swagger UI. Open your web browser and navigate to the following URL to interact with your API using Swagger UI:</li></ul><pre>http://localhost:8000/docs</pre><p>Here, you’ll find an interactive interface that presents a list of all available API endpoints. You can explore each endpoint’s input parameters, send requests directly from the browser, and view the API’s responses. This greatly simplifies the process of testing and interacting with your FastAPI application.</p><h3>Let’s try it out</h3><p>Uploaded image:</p><figure><img alt="Two people walking their dog" src="https://cdn-images-1.medium.com/max/405/1*EfU-5TeOJ7qqV-Z_a4DZ9g.png" /><figcaption>people walking.PNG</figcaption></figure><p>The response we get:</p><pre>{<br>  &quot;filename&quot;: &quot;people walking.PNG&quot;,<br>  &quot;results_str&quot;: &quot;Found 3 objects: 2 x person, 1 x dog&quot;,<br>  &quot;results_list&quot;: [<br>    {<br>      &quot;label&quot;: &quot;person&quot;,<br>      &quot;confidence&quot;: &quot;99%&quot;,<br>      &quot;bbox_xywh&quot;: [<br>        97,<br>        30,<br>        81,<br>        226<br>      ]<br>    },<br>    {<br>      &quot;label&quot;: &quot;person&quot;,<br>      &quot;confidence&quot;: &quot;99%&quot;,<br>      &quot;bbox_xywh&quot;: [<br>        174,<br>        50,<br>        78,<br>        201<br>      ]<br>    },<br>    {<br>      &quot;label&quot;: &quot;dog&quot;,<br>      &quot;confidence&quot;: &quot;99%&quot;,<br>      &quot;bbox_xywh&quot;: [<br>        276,<br>        168,<br>        73,<br>        96<br>      ]<br>    }<br>  ]<br>}</pre><p>A second uploaded image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/665/1*rDQfz3yGtDlIcu_-H1UZ1g.png" /><figcaption>blank.PNG</figcaption></figure><p>The response:</p><pre>{<br>  
&quot;filename&quot;: &quot;blank.PNG&quot;,<br>  &quot;results_str&quot;: &quot;No detections&quot;,<br>  &quot;results_list&quot;: []<br>}</pre><p>And voilà! We have successfully deployed and tested an object detection model based on YOLOv3, able to accurately detect objects in an image, using FastAPI.</p><h3>Conclusion</h3><p>In summary, this article has provided a practical roadmap for seamlessly integrating YOLOv3’s powerful object detection capabilities into FastAPI, demonstrating the fusion of advanced AI and web development for real-world applications.</p><h3>Key Takeaways</h3><ul><li>YOLOv3 is a powerful yet straightforward state-of-the-art algorithm for fast and accurate object detection.</li><li>FastAPI is key to transforming a sophisticated machine learning model like YOLOv3 into a functional and accessible API.</li><li>By utilizing Python’s modularization and OOP concepts, you can make your codebase more readable, maintainable, and efficient. Applying these principles is a software engineering best practice.</li></ul><h3>Exploring Beyond: More Tutorials on Computer Vision, Deep Learning, and Object Detection</h3><p>Intrigued by the realms of computer vision, deep learning, and object detection? 
Then check out my compilation of tutorials and GitHub repositories for a comprehensive exploration.</p><ul><li><a href="https://github.com/mohamedamine99/Object-tracking-and-counting-using-YOLOV8">Object Detection, Tracking and Counting with YOLOv8</a>.</li><li><a href="https://github.com/mohamedamine99/YOLOv5-object-detection">Object Detection With YOLOv5</a>.</li><li><a href="https://github.com/mohamedamine99/Real-time-facial-recognition-with-python-dlib">Real-time Facial Recognition with Dlib and Python</a>.</li><li><a href="https://github.com/mohamedamine99/YOLOv8-custom-object-detection">Train your YOLOv8 model on a custom Dataset</a>.</li><li><a href="https://github.com/mohamedamine99/Visualizing-what-convnets-learn">Visualize what ConvNets learn</a>.</li></ul>]]></content:encoded>
        </item>
    </channel>
</rss>