Welcome to IAFlash’s documentation!

├── docker                                 <- Docker configuration files
│   ├── conf.list
│   ├── conf.list.sample
│   ├── cpu
│   ├── env.list
│   ├── env.list.sample
│   └── gpu
├── docker-compose-gpu.yml
├── docker-compose.yml
├── docker-restart.yml
├── docs                                   <- Sphinx documentation folder
│   ├── build
│   ├── make.bat
│   ├── Makefile
│   └── source
├── Makefile                               <- Orchestration commands
├── matchvec                               <- Python application folder
│   ├── app.py
│   ├── classification.py
│   ├── __init__.py
│   ├── process.py
│   ├── retina_detection.py
│   ├── ssd_detection.py
│   ├── utils.py
│   └── yolo_detection.py
├── model                                  <- Folder for models
│   ├── resnet18-100
│   ├── ssd_mobilenet_v2_coco_2018_03_29
│   └── yolo
├── README.md                              <- Top-level README for developers using this project
└── tests                                  <- Unit test scripts
    ├── clio-peugeot.jpg
    └── test_process.py

API

POST /object_detection

Object detection

Image can be loaded either by using an internet URL in the url field or by using a locally stored image in the image field

Status Codes
  • 200 OK – Result is a nested list (list of lists) of the following element

Response JSON Object
  • [] (any) –

POST /predict

Brand and model classification

Image can be loaded either by using an internet URL in the url field or by using a locally stored image in the image field

Status Codes
  • 200 OK – Result is a nested list (list of lists) of the following element

Response JSON Object
  • [] (any) –
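Both endpoints accept either form field. A minimal client sketch using only the standard library is shown below; the host and port are assumptions (a default local Flask deployment), only the routes and the url field name come from the documentation above.

```python
# Hypothetical client for the IAFlash API. API_ROOT is an assumption;
# adjust it to wherever the service is actually deployed.
import json
from urllib import parse, request

API_ROOT = "http://localhost:5000"  # assumed default local address

def detect_objects(image_url):
    """POST /object_detection with an internet URL in the `url` field."""
    data = parse.urlencode({"url": image_url}).encode()
    with request.urlopen(f"{API_ROOT}/object_detection", data=data) as resp:
        return json.load(resp)

def predict(image_url):
    """POST /predict for brand and model classification."""
    data = parse.urlencode({"url": image_url}).encode()
    with request.urlopen(f"{API_ROOT}/predict", data=data) as resp:
        return json.load(resp)
```

For a locally stored image, the image field would instead be sent as a multipart file upload (e.g. with the requests library's files= argument).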

Contents

Application modules

Flask application

class app.AnonymPrediction(api=None, *args, **kwargs)

Image anonymisation

post()

Anonymisation

Image can be loaded either by using an internet URL in the url field or by using a locally stored image in the image field

class app.ClassPrediction(api=None, *args, **kwargs)

Predict vehicle class

post()

Brand and model classification

Image can be loaded either by using an internet URL in the url field or by using a locally stored image in the image field

class app.Custom_API(app=None, version='1.0', title=None, description=None, terms_url=None, license=None, license_url=None, contact=None, contact_url=None, contact_email=None, authorizations=None, security=None, doc='/', default_id=<function default_id>, default='default', default_label='Default namespace', validate=None, tags=None, prefix='', ordered=False, default_mediatype='application/json', decorators=None, catch_all_404s=False, serve_challenge_on_401=False, format_checker=None, **kwargs)
specs_url

The Swagger specification's absolute URL (i.e. swagger.json)

Return type

str

class app.ObjectDetection(api=None, *args, **kwargs)

Object detection endpoint.

post()

Object detection

Image can be loaded either by using an internet URL in the url field or by using a locally stored image in the image field

class app.VideoDetection(api=None, *args, **kwargs)

Video detection endpoint.

post()

Video detection

Process function

process.IoU(boxA, boxB)

Calculate the intersection over union (IoU) of two bounding boxes

Parameters
  • boxA (dict) – Bounding box A

  • boxB (dict) – Bounding box B

Return type

float
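The IoU computation can be sketched as follows; the x1/y1/x2/y2 key names for the bounding-box dicts are an assumption, not confirmed by the docs above.

```python
# Minimal IoU sketch over two bounding-box dicts.
# The x1/y1/x2/y2 key names are assumed, not taken from process.IoU itself.
def iou(boxA, boxB):
    # Corners of the intersection rectangle
    x1 = max(boxA["x1"], boxB["x1"])
    y1 = max(boxA["y1"], boxB["y1"])
    x2 = min(boxA["x2"], boxB["x2"])
    y2 = min(boxA["y2"], boxB["y2"])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero when boxes are disjoint
    areaA = (boxA["x2"] - boxA["x1"]) * (boxA["y2"] - boxA["y1"])
    areaB = (boxB["x2"] - boxB["x1"]) * (boxB["y2"] - boxB["y1"])
    union = areaA + areaB - inter
    return inter / union if union else 0.0
```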

process.filter_by_iou(df)

Filter car and truck boxes when IoU > DETECTION_IOU_THRESHOLD. If a car and a truck overlap, the car box takes priority.

Parameters

df (dict) – Detected boxes

Returns

Filtered boxes

Return type

df
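The car-over-truck rule described above can be sketched as below; the class_name key, the threshold value, and the injected IoU callable are all assumptions for illustration.

```python
# Sketch of the filter_by_iou rule: when a truck box overlaps a car box
# beyond the threshold, keep the car box. Box layout and the 0.5 default
# threshold are assumptions.
DETECTION_IOU_THRESHOLD = 0.5  # assumed value

def filter_car_truck(boxes, iou_fn, threshold=DETECTION_IOU_THRESHOLD):
    cars = [b for b in boxes if b["class_name"] == "car"]
    kept = []
    for b in boxes:
        # Drop a truck that overlaps a car beyond the threshold
        if b["class_name"] == "truck" and any(
                iou_fn(b, c) > threshold for c in cars):
            continue
        kept.append(b)
    return kept
```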

process.filter_by_size(df, image)

Filter out boxes that are too small

Parameters
  • df (List[dict]) – Detected boxes

  • image (ndarray) – Image used for detection

Returns

Filtered boxes

Return type

df

Classification model

Brand and model classification

class classification_onnx.Classifier(**kw)

Classifier for brand and model

Classifies images using a pretrained model.

prediction(selected_boxes)

Inference on an image

  1. Crops, normalizes and transforms the image to a tensor

  2. The image is forwarded through the ResNet model

  3. The results are concatenated

Parameters

selected_boxes (Tuple[ndarray, List[float]]) – Contains a list of tuples with the image and the coordinates of the crop.

Returns

Two lists with the top-5 class predictions and their probabilities

Return type

(final_pred, final_prob)
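Step 1 of prediction() above can be sketched with NumPy alone; the ImageNet normalisation constants are an assumption (typical for a ResNet backbone), and resizing to the model's input size is omitted for brevity.

```python
# Sketch of the crop/normalize/to-tensor step, assuming ImageNet
# mean/std normalisation. Resizing to the network input size is omitted.
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed constants
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image, box):
    """Crop to the box, normalize, and lay out as an NCHW batch of one."""
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]               # crop to the detected box
    crop = crop.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    crop = (crop - MEAN) / STD               # channel-wise normalisation
    return crop.transpose(2, 0, 1)[None]     # HWC -> NCHW
```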

classification_onnx.softmax(x)

Compute softmax values for each sets of scores in x.
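A numerically stable version of this softmax can be sketched as follows (subtracting the row maximum before exponentiating is a standard trick; whether the project's implementation does this is not confirmed by the docs).

```python
# Numerically stable softmax over the last axis.
import numpy as np

def softmax(x):
    # Subtract the max so exp() never overflows for large scores
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```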

class classification_torch.Classifier(**kw)

Classifier for brand and model

Classifies images using a pretrained model.

prediction(selected_boxes)

Inference on an image

  1. Crops, normalizes and transforms the image to a tensor

  2. The image is forwarded through the ResNet model

  3. The results are concatenated

Parameters

selected_boxes (Tuple[ndarray, List[float]]) – Contains a list of tuples with the image and the coordinates of the crop.

Returns

Two lists with the top-5 class predictions and their probabilities

Return type

(final_pred, final_prob)

class classification_torch.Crop

Crop the image in a sample using the given coordinates.

Parameters

params – Tuple containing the sample and coordinates. The image is cropped using the coordinates.

class classification_torch.DatasetList(samples, transform=None, target_transform=None)

Dataset list generator

Parameters
  • samples (Tuple[ndarray, List[float]]) – Samples to use for inference

  • transform – Transformation to be done to samples

  • target_transform – Transformation done to the targets

Detection with SSD

SSD detection

class ssd_detection.Detector(**kw)

SSD Mobilenet object detection

Parameters
  • DETECTION_MODEL – Detection model to use

  • DETECTION_THRESHOLD – Detection threshold

  • SWAPRB – Swap R and B channels (useful when the image was opened with OpenCV) (Default: False)

create_df(result, image)

Filter predictions and create an output dictionary

Parameters
  • result (ndarray) – Result from prediction model

  • image (ndarray) – Image where the inference has been made

Returns

Object detection filtered predictions

Return type

df
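The create_df filtering step can be sketched as below, assuming the conventional SSD output layout [batch, class_id, confidence, x1, y1, x2, y2] with coordinates normalised to [0, 1]; this layout and the default threshold are assumptions, not confirmed by the docs above.

```python
# Sketch of SSD post-processing: drop low-confidence rows and scale
# normalised coordinates to pixel values. Output layout is assumed.
import numpy as np

def create_boxes(result, image, threshold=0.5):
    h, w = image.shape[:2]
    boxes = []
    for _, class_id, conf, x1, y1, x2, y2 in result:
        if conf < threshold:
            continue  # filter out weak predictions
        boxes.append({
            "class_id": int(class_id),
            "confidence": float(conf),
            "x1": int(x1 * w), "y1": int(y1 * h),
            "x2": int(x2 * w), "y2": int(y2 * h),
        })
    return boxes
```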

prediction(image)

Inference

Parameters

image (ndarray) – Image to run inference on

Returns

Predictions from SSD MobileNet

Return type

result

Detection with Yolo

Yolo detection

class yolo_detection.Detector

Yolo object detection

Parameters
  • DETECTION_MODEL – Detection model to use

  • DETECTION_THRESHOLD – Detection threshold

  • NMS_THRESHOLD – Non Maximum Suppression threshold (to remove overlapping boxes)

  • SWAPRB – Swap R and B channels (useful when the image was opened with OpenCV) (Default: False)

  • SCALE – Yolo uses a per-pixel normalisation factor different from 1
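The effect of NMS_THRESHOLD can be illustrated with a minimal non-maximum-suppression sketch; the (x1, y1, x2, y2) box layout is an assumption, and this is generic NMS rather than the project's exact implementation.

```python
# Minimal NMS sketch: keep the highest-scoring boxes and drop any box
# that overlaps an already-kept box beyond the threshold.
def nms(boxes, scores, threshold):
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    # Visit boxes in decreasing score order
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep
```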

create_df(result, image)

Filter predictions and create an output list of dictionaries

Parameters
  • result (List[ndarray]) – Result from prediction model

  • image (ndarray) – Image where the inference has been made

Returns

Filtered object detection predictions

Return type

df

prediction(image)

Inference

Run inference on an input image

Parameters

image (ndarray) – input image

Returns

Yolo boxes from object detections

Return type

result

yolo_detection.filter_yolo(chunk)

Filter Yolo chunks

Create a list of dictionaries from each chunk, then filter it with DETECTION_THRESHOLD

Parameters

chunk (ndarray) – A Yolo chunk

Returns

The object detection predictions for the chunk

Return type

df

Other functions

Utilities for logging.

utils.timeit(method)

Decorator to log elapsed time
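Such a timing decorator can be sketched as below; the exact log format of utils.timeit is an assumption.

```python
# Sketch of a timing decorator like utils.timeit: wraps a function and
# logs how long each call took. The log message format is assumed.
import functools
import logging
import time

def timeit(method):
    @functools.wraps(method)  # preserve the wrapped function's metadata
    def wrapper(*args, **kwargs):
        start = time.time()
        result = method(*args, **kwargs)
        logging.info("%s took %.3f s", method.__name__, time.time() - start)
        return result
    return wrapper
```

Usage: decorate any function with @timeit and each call is timed transparently, with the return value passed through unchanged.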
