Inspiration

Our startup focuses on computer vision products that automate insight extraction from existing CCTV or smartphone cameras. Throughout our journey, we have explored many deep learning-based computer vision algorithms across different frameworks and implementations, and along the way we started to face some common problems:

  • Slow knowledge distribution across research members, due to the steep learning curve of state-of-the-art models and of the utilities around model development (data augmentation techniques, data loading techniques, model optimization, etc.)
  • Difficulty implementing incremental improvements (for example, graph optimization) when the work is spread across many frameworks

Thus, we realized we need a unified framework for model development. This framework must be modular, so it can be integrated easily with other model development tools, e.g. experiment logging and hyperparameter optimization; it must be flexible enough that users can reuse its components if they choose to write their own training scripts; and finally, it must support production-grade model optimization for deployment scenarios.

Our hope is that users can easily explore the many available utilities while cutting out the unnecessary time spent learning implementation specifics of deep learning model development, so they can iterate faster on experiments and let the Vortex developers take on the hard implementation parts.

We named this framework Visual Cortex (Vortex).

What it does

Vortex provides a complete, high-level development pipeline for computer vision deep learning models:

  • Training pipeline
  • Validation pipeline
  • Prediction pipeline
  • IR Graph Export pipeline
  • Hyperparameter Optimization pipeline

All of these can be accessed by providing a single experiment file and a CLI command. However, users can also choose to utilize the Vortex public API if they wish to integrate Vortex into their own scripts.
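For illustration, the sketch below shows the single-config idea in miniature: one experiment file is parsed once and then dispatched to whichever pipeline the user asks for. Every name and config key here is a hypothetical placeholder, not the actual Vortex API or schema.

```python
# A minimal, self-contained sketch of "one experiment file drives every
# pipeline". Config keys and function names are hypothetical illustrations.
import yaml

EXPERIMENT = """
experiment_name: shufflenetv2_cifar10
model:
  name: shufflenetv2
  network_args: {num_classes: 10}
trainer:
  optimizer: {method: SGD, args: {lr: 0.01}}
  epoch: 20
"""

def train(cfg):    print("training", cfg["model"]["name"])
def validate(cfg): print("validating", cfg["experiment_name"])

PIPELINES = {"train": train, "validate": validate}

def main(stage: str):
    cfg = yaml.safe_load(EXPERIMENT)  # one file configures everything
    PIPELINES[stage](cfg)             # same config, different pipeline

main("train")
```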

How we built it

We chose PyTorch as the base deep learning framework due to its popularity and the convenience of developing models in it. We carefully identified the atomic components of deep learning model development, such as the dataset, dataloader, model, logger, optimizer, and training iterator, and designed modular interactions between them which in the end form a fully operational pipeline. We also explored utilities that support deep learning model development, such as albumentations for data augmentation, NVIDIA DALI for data loading and augmentation, and Optuna for hyperparameter optimization, and integrated them seamlessly into Vortex so that users can utilize them just by modifying the experiment file.
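As an example of what that integration looks like, a list of augmentations in the experiment file can be mapped almost one-to-one onto an albumentations pipeline. The config layout below is our own illustration, not Vortex's actual schema; the albumentations calls themselves are the real library API.

```python
# Sketch: instantiating an albumentations pipeline from a config-style list,
# so users edit the experiment file rather than the training code.
import albumentations as A

aug_config = [
    {"transform": "HorizontalFlip", "args": {"p": 0.5}},
    {"transform": "RandomBrightnessContrast", "args": {"p": 0.2}},
]

# Look each transform up by name and instantiate it with its arguments.
transforms = [getattr(A, t["transform"])(**t["args"]) for t in aug_config]
pipeline = A.Compose(
    transforms,
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)
```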

Challenges we ran into

We still think our code design may not be perfect, and we will need many iterations to improve it over time. On the deep learning side, we struggled to replicate the state-of-the-art results of several architectures when integrating them into Vortex (for example, our object detection implementation currently still cannot reproduce the intended results). On another front, implementing graph export to ONNX still poses a challenge, because not all PyTorch operators are natively supported by the ONNX format, and furthermore not all ONNX operators are supported by the runtime (we chose onnxruntime as the runtime for the ONNX graph IR prediction pipeline). Still, we are quite satisfied with the result.
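To make the operator-support issue concrete, the round trip we have to keep working looks roughly like the snippet below; this is standard torch.onnx and onnxruntime usage with a stand-in torchvision model, not our actual export code. Unsupported operators surface as errors at either the export or the session-creation step.

```python
# Minimal round trip: export a PyTorch model to ONNX, then run it with
# onnxruntime. Operator coverage differs per opset version and per runtime.
import numpy as np
import torch
import torchvision

model = torchvision.models.resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
)

import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(np.argmax(outputs[0], axis=1))
```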

Accomplishments that we're proud of

  • We successfully proved that the Vortex pipelines are fully integrated and work well, at least for image classification models
  • We successfully integrated many useful utilities, such as albumentations, Optuna, and NVIDIA DALI, greatly expanding the options available to model developers with ease of use (the sketch after this list shows the kind of loop the Optuna integration wraps)
  • We are proud that we successfully provide seamless ONNX and TorchScript export for Vortex models
  • And finally, we are quite proud of our model validation report, which evaluates not only the model's performance on a dataset but also its resource usage; please check it out
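As promised above, here is the kind of hyperparameter-optimization loop that the integration wires up from the experiment file. This is the standard Optuna API; the objective function is a toy stand-in for an actual training-and-validation run.

```python
# Sketch of a hyperparameter search with Optuna. In the real pipeline the
# objective would train a model and return its validation metric.
import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    momentum = trial.suggest_float("momentum", 0.5, 0.99)
    # Toy stand-in: a smooth function with a known optimum near
    # lr=0.01, momentum=0.9, playing the role of validation accuracy.
    return -((lr - 0.01) ** 2) - (momentum - 0.9) ** 2

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```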

What we learned

We learned many things; several of them are:

  • How useful hyperparameter optimization can be, and how easily it can be integrated
  • How important data standards are in CV development (e.g. bounding boxes come in many formats: xywh, xyxy, cxcywh, etc., and mixing them up is an easy source of bugs; see the sketch after this list)
  • How to tinker and find workarounds for unsupported ONNX operators
  • How PyTorch's design greatly supported our implementation when refactoring model structure (in Vortex we separate several of a model's dependencies into components, such as the normalizer, post-processing, and loss function)
  • How to iterate toward a better modular design for our framework
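To illustrate the bounding-box point: the same four numbers mean different boxes under different layouts, so conversions have to be explicit about formats. The helpers below are our own small sketch of such conversions, not Vortex code.

```python
# Sketch: converting between common bounding-box layouts. Feeding xywh
# boxes into code expecting xyxy fails silently, hence one data standard.
import numpy as np

def xywh_to_xyxy(boxes: np.ndarray) -> np.ndarray:
    """(x_min, y_min, width, height) -> (x_min, y_min, x_max, y_max)."""
    x, y, w, h = boxes.T
    return np.stack([x, y, x + w, y + h], axis=1)

def xyxy_to_cxcywh(boxes: np.ndarray) -> np.ndarray:
    """(x_min, y_min, x_max, y_max) -> (center_x, center_y, width, height)."""
    x1, y1, x2, y2 = boxes.T
    return np.stack([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1], axis=1)

box = np.array([[10.0, 20.0, 30.0, 40.0]])  # xywh
print(xyxy_to_cxcywh(xywh_to_xyxy(box)))    # [[25. 40. 30. 40.]]
```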

What's next for Vortex

  • Support for other tasks, such as segmentation
  • Mixed-precision training
  • Distributed training
  • More widely adopted model architectures
  • More runtime support based on the ONNX and TorchScript IRs

Built With

  • PyTorch
  • ONNX / onnxruntime
  • TorchScript
  • albumentations
  • NVIDIA DALI
  • Optuna
