Updates (2020.08.01)

Experiments

The Experiments class is now much more powerful and much easier to use:

import cflearn
import numpy as np

from cfdata.tabular import *

def main():
    x, y = TabularDataset.iris().xy
    experiments = cflearn.Experiments()
    experiments.add_task(x, y, model="fcnn")
    experiments.add_task(x, y, model="fcnn")
    experiments.add_task(x, y, model="tree_dnn")
    experiments.add_task(x, y, model="tree_dnn")
    results = experiments.run_tasks(num_jobs=2)
    # {'fcnn': [Task(fcnn_0), Task(fcnn_1)], 'tree_dnn': [Task(tree_dnn_0), Task(tree_dnn_1)]}
    print(results)
    ms = {k: list(map(cflearn.load_task, v)) for k, v in results.items()}
    # {'fcnn': [FCNN(), FCNN()], 'tree_dnn': [TreeDNN(), TreeDNN()]}
    print(ms)
    # experiments can be saved & loaded easily
    saving_folder = "__temp__"
    experiments.save(saving_folder)
    loaded = cflearn.Experiments.load(saving_folder)
    ms_loaded = {k: list(map(cflearn.load_task, v)) for k, v in loaded.tasks.items()}
    # {'fcnn': [FCNN(), FCNN()], 'tree_dnn': [TreeDNN(), TreeDNN()]}
    print(ms_loaded)
    assert np.allclose(ms["fcnn"][1].predict(x), ms_loaded["fcnn"][1].predict(x))

if __name__ == '__main__':
    main()

We can see that experiments.run_tasks returns a bunch of Tasks, each of which can easily be turned into a trained model through cflearn.load_task.

It is important to wrap the code in a main() function on some platforms (e.g. Windows), because those platforms start parallel worker processes by re-importing the main module, and running tasks in parallel without the if __name__ == '__main__' guard will therefore cause issues.
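
Here's a minimal sketch of the guard pattern, using only Python's standard multiprocessing module (so it is independent of carefree-learn):

import multiprocessing as mp

def square(i):
    return i * i

# On Windows, multiprocessing launches child processes by re-importing
# this module (the "spawn" start method). Without the guard below, each
# child would re-execute the pool creation at import time and crash.
if __name__ == "__main__":
    with mp.Pool(2) as pool:
        print(pool.map(square, range(4)))  # [0, 1, 4, 9]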

Benchmark

The Benchmark class is implemented to make benchmarking easier:

import cflearn
import numpy as np

from cfdata.tabular import *

def main():
    x, y = TabularDataset.iris().xy
    benchmark = cflearn.Benchmark(
        "foo",
        TaskTypes.CLASSIFICATION,
        models=["fcnn", "tree_dnn"]
    )
    benchmarks = {
        "fcnn": {"default": {}, "sgd": {"optimizer": "sgd"}},
        "tree_dnn": {"default": {}, "adamw": {"optimizer": "adamw"}}
    }
    msg1 = benchmark.k_fold(3, x, y, num_jobs=2, benchmarks=benchmarks).comparer.log_statistics()
    """
    ~~~  [ info ] Results
    ================================================================================================================================
    |        metrics         |                       acc                        |                       auc                        |
    --------------------------------------------------------------------------------------------------------------------------------
    |                        |      mean      |      std       |     score      |      mean      |      std       |     score      |
    --------------------------------------------------------------------------------------------------------------------------------
    |    fcnn_foo_default    |    0.780000    | -- 0.032660 -- |    0.747340    |    0.914408    |    0.040008    |    0.874400    |
    --------------------------------------------------------------------------------------------------------------------------------
    |      fcnn_foo_sgd      |    0.113333    |    0.080554    |    0.032780    |    0.460903    |    0.061548    |    0.399355    |
    --------------------------------------------------------------------------------------------------------------------------------
    |   tree_dnn_foo_adamw   | -- 0.833333 -- |    0.077172    | -- 0.756161 -- | -- 0.944698 -- | -- 0.034248 -- | -- 0.910451 -- |
    --------------------------------------------------------------------------------------------------------------------------------
    |  tree_dnn_foo_default  |    0.706667    |    0.253684    |    0.452983    |    0.924830    |    0.060007    |    0.864824    |
    ================================================================================================================================
    """
    # save & load
    saving_folder = "__temp__"
    benchmark.save(saving_folder)
    loaded_benchmark, loaded_results = cflearn.Benchmark.load(saving_folder)
    msg2 = loaded_results.comparer.log_statistics()
    assert msg1 == msg2

if __name__ == '__main__':
    main()
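
Note that in the table above, score is simply mean - std (e.g. 0.833333 - 0.077172 = 0.756161), so it rewards configurations that are both accurate and stable across folds, and the -- markers highlight the best entry in each column.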

Misc

  • Integrated trains (Allegro AI's experiment manager).
  • Integrated Tracker from carefree-toolkit.
  • Integrated native amp (automatic mixed precision) from PyTorch.
  • Implemented FocalLoss (a short sketch follows below).
  • Implemented cflearn.zoo.

  • Introduced CI.
  • Fixed some bugs.
  • Simplified some APIs.
  • Optimized some default settings.
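
For those unfamiliar with it, here is a minimal PyTorch sketch of the binary focal loss from Lin et al. (2017). This is only an illustration of the idea, not necessarily the exact implementation inside carefree-learn:

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # FL(p_t) = -alpha_t * (1 - p_t) ** gamma * log(p_t)
    targets = targets.float()
    # binary cross entropy gives the -log(p_t) term
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1.0 - p) * (1.0 - targets)
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    # (1 - p_t) ** gamma down-weights well-classified examples
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

The (1 - p_t) ** gamma factor suppresses the loss of easy examples, so training focuses on the hard ones.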

What's next

I've already run experiments on a few benchmark datasets with Benchmark and achieved satisfying performance. However, large-scale benchmarking is not done yet, limited by my lack of GPU cards XD

So the next step is to run large-scale benchmarks and optimize carefree-learn's performance in a much more general way.

In the meantime, I'll do some research and implement some SOTA methods for tabular datasets (e.g. Deep Sparse Network, β-LASSO MLP, ...).

And, as always, bug fixing XD
