Release Notes

0.3.5.2

  • Added an Unsupervised Pre-training guide for semi-supervised learning.

  • Larger datasets run faster due to an optimized dataset cardinality calculation.

  • Autofit runs with the same early stopping callback, but epochs is now set to 1,000,000 rather than 2**31, making console output more interpretable.

  • Distillation report includes number of weights in source and target (teacher and student) models.

  • Logging includes both val and test sets (if a test set is available). Note that the metalearning algorithm never sees test; evaluations on test are purely for diagnostics.

  • Log directories are now named by run number (e.g. ~/.masterful/run-00001) instead of datetime (e.g. ~/.masterful/UTC_2021-08-18__17-34-29.037488).

  • Detailed logs (previously written only to ~/.masterful) can now also be sent to the console via the environment variable MASTERFUL_LOG_TO_CONSOLE=1.

  • In some cases, fit was broken by a bug in Keras: model.trainable on a cloned model has undocumented behavior. A workaround was implemented that ensures autofit and masterful.core.fit run successfully.

  • Warmup implemented.

  • Batchnorm warmup implemented to ensure val metrics are based on stable batch norm moving statistics. This is particularly helpful on image data that is not prenormalized to zero-mean, unit-variance (ZMUV). A sketch of the idea follows this list.
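
The batch norm warmup idea, as a minimal generic sketch (an illustration of
the technique, not Masterful's implementation; it assumes a tf.keras model
and a tf.data dataset of (image, label) batches):

    import tensorflow as tf

    def batchnorm_warmup(model, dataset, steps=100):
        """Run forward passes in training mode, with no gradient updates,
        so BatchNormalization layers refresh their moving statistics."""
        for step, (images, _) in enumerate(dataset):
            if step >= steps:
                break
            # training=True makes BN consume batch statistics and update
            # its moving mean/variance; the model weights are untouched.
            model(images, training=True)

After a warmup like this, evaluation (which runs with training=False) uses
the refreshed moving statistics rather than stale ones.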

Known Issues:

  • A “Gradient Not Found” warning is printed to the console during autofit and masterful.core.fit. This warning is innocuous.

0.3.5.1

  • Noisy Student Training reintroduced.

  • More robust but slower settings for the optimizer policy.

  • Unsupervised pretraining supports larger model sizes.

  • Distillation API (a generic sketch of distillation follows this list).

  • Removed warmup due to an overfitting bug; this will slow down training but will not affect final accuracy.

  • Documentation for graphical front end, ensembling, and distillation.
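
As a generic illustration of what distillation does (not the Masterful API
itself, which is covered in the documentation above): the student is trained
against the teacher's temperature-softened outputs plus the hard labels.

    import tensorflow as tf

    def distillation_loss(teacher_logits, student_logits, labels,
                          temperature=4.0, alpha=0.9):
        # Soft targets: cross-entropy against the teacher's softened
        # distribution (equivalent to KL divergence up to a constant).
        soft_teacher = tf.nn.softmax(teacher_logits / temperature)
        log_student = tf.nn.log_softmax(student_logits / temperature)
        kd = tf.reduce_mean(-tf.reduce_sum(soft_teacher * log_student, axis=-1))
        kd *= temperature ** 2  # standard T^2 gradient rescaling
        # Hard targets: ordinary cross-entropy on the true labels.
        ce = tf.keras.losses.sparse_categorical_crossentropy(
            labels, student_logits, from_logits=True)
        return alpha * kd + (1.0 - alpha) * tf.reduce_mean(ce)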

0.3.5

  • Revised API for autofit and the advanced “core” API.

  • find_optimizer_policy searches for an optimal policy for optimizer settings.

  • find_batch_size searches for the largest batch size that fits in memory to speed up training (a sketch of the idea follows this list).

  • General API for data and model specifications.

  • Unsupervised pretraining in the advanced API.

  • Added reflection to the spatial transformations.
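
The idea behind find_batch_size, as a hypothetical sketch (the function name
and signature below are illustrative, not the Masterful API, and catching
OOM errors this way is best-effort): double the batch size until memory runs
out, then keep the last size that worked.

    import tensorflow as tf

    def largest_fitting_batch_size(model, sample_input, start=2, limit=4096):
        best, size = None, start
        while size <= limit:
            try:
                # Tile one example to the candidate batch size and run a
                # forward + backward pass to account for gradient memory.
                batch = tf.repeat(sample_input[tf.newaxis, ...], size, axis=0)
                with tf.GradientTape() as tape:
                    loss = tf.reduce_mean(model(batch, training=True))
                tape.gradient(loss, model.trainable_variables)
                best, size = size, size * 2  # fits: try twice as large
            except tf.errors.ResourceExhaustedError:
                break  # out of memory: keep the last size that succeeded
        return best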

0.3.4

  • Added quickstart tutorial documentation.

  • Separated console output from logging to disk.

  • Support for several data formats.

  • Anchor box conversion for the Fizyr Keras RetinaNet model.

  • Localization/detection support for spatial transforms.

  • Protobuf logging of intermediate search phases.

  • Layerization of losses with serialization.

  • Native support for loss_weights.

  • Native support for multiple losses (a Keras-style example covering this and loss_weights follows this list).
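
For reference, "loss_weights" and "multiple losses" refer to the stock Keras
convention shown below on a hypothetical two-output model (the layer names
"cls" and "box" are illustrative):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(64, 64, 3))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    class_out = tf.keras.layers.Dense(10, name="cls")(x)
    box_out = tf.keras.layers.Dense(4, name="box")(x)
    model = tf.keras.Model(inputs, [class_out, box_out])

    # One loss per output, combined with per-output weights.
    model.compile(
        optimizer="adam",
        loss={"cls": tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              "box": tf.keras.losses.Huber()},
        loss_weights={"cls": 1.0, "box": 0.5})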

(codename victor)

0.3.3

  • Mixup transformation (a minimal sketch follows this list).

  • Eliminated all showstopper bugs from the previous release.

  • Careful control of the learning rate during the metalearning algorithm.
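
A minimal mixup sketch (Zhang et al., 2018), independent of the layerized
implementation; it assumes float images and one-hot labels:

    import tensorflow as tf

    def mixup(images, labels, alpha=0.2):
        batch = tf.shape(images)[0]
        # lambda ~ Beta(alpha, alpha), sampled as a ratio of Gamma draws.
        g1 = tf.random.gamma([batch], alpha)
        g2 = tf.random.gamma([batch], alpha)
        lam = g1 / (g1 + g2)
        index = tf.random.shuffle(tf.range(batch))
        lam_x = tf.reshape(lam, [-1, 1, 1, 1])
        mixed_x = lam_x * images + (1.0 - lam_x) * tf.gather(images, index)
        lam_y = tf.reshape(lam, [-1, 1])
        mixed_y = lam_y * labels + (1.0 - lam_y) * tf.gather(labels, index)
        return mixed_x, mixed_y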

(codename uniform)

0.3.2

  • Cutmix transformation (a minimal sketch follows this list).

  • Removed the epochs and LR callbacks; these are now the user's responsibility.

  • Eliminated some showstopper bugs from the previous version.
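
A minimal Cutmix sketch (Yun et al., 2019), again independent of the
layerized implementation; one patch per batch, float images, one-hot labels:

    import tensorflow as tf

    def cutmix(images, labels, alpha=1.0):
        batch, h, w = tf.shape(images)[0], tf.shape(images)[1], tf.shape(images)[2]
        index = tf.random.shuffle(tf.range(batch))
        g = tf.random.gamma([2], alpha)
        lam = g[0] / (g[0] + g[1])  # lambda ~ Beta(alpha, alpha)
        # Patch dimensions so its area is roughly (1 - lambda) of the image.
        cut_h = tf.cast(tf.cast(h, tf.float32) * tf.sqrt(1.0 - lam), tf.int32)
        cut_w = tf.cast(tf.cast(w, tf.float32) * tf.sqrt(1.0 - lam), tf.int32)
        cy = tf.random.uniform([], 0, h - cut_h + 1, dtype=tf.int32)
        cx = tf.random.uniform([], 0, w - cut_w + 1, dtype=tf.int32)
        rows = (tf.range(h) >= cy) & (tf.range(h) < cy + cut_h)
        cols = (tf.range(w) >= cx) & (tf.range(w) < cx + cut_w)
        hole = tf.cast(rows[:, None] & cols[None, :], images.dtype)
        mask = 1.0 - hole[None, :, :, None]  # 0 inside the patch
        mixed_x = images * mask + tf.gather(images, index) * (1.0 - mask)
        # Mix labels by the fraction of pixels kept from the original image.
        keep = 1.0 - tf.cast(cut_h * cut_w, tf.float32) / tf.cast(h * w, tf.float32)
        mixed_y = keep * labels + (1.0 - keep) * tf.gather(labels, index)
        return mixed_x, mixed_y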

Known Issues:

  • Unstable release; do not use.

(codename tango)

0.3.1

  • Added saving and loading of policies.

  • Noisy Student Training functionality.

Known Issues:

  • Unstable release; do not use.

(codename sierra)

0.3

  • Layerization of transforms for high-speed augmentation.

  • Ground-up implementation of transforms.

  • Distance analysis to cluster transforms.

  • The metalearner now uses beam search (a generic sketch follows).
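
A generic beam search sketch, to illustrate the shape of the change (the
expand and score callables are placeholders, not the actual metalearner):

    def beam_search(initial, expand, score, width=3, depth=4):
        # Keep the `width` best candidates at each depth instead of a
        # single greedy choice; expand(c) yields successor candidates,
        # score(c) is higher-is-better.
        beam = [initial]
        for _ in range(depth):
            candidates = [nxt for c in beam for nxt in expand(c)]
            if not candidates:
                break
            beam = sorted(candidates, key=score, reverse=True)[:width]
        return max(beam, key=score)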

(codename bravo)

0.2.1

  • Multiple bug fixes and performance improvements.

  • Adds support for TPU training in GCP using TensorFlow 1.15.

  • Package has been renamed from masterful_lite to masterful.

  • Corresponding APIs now reside under masterful.api rather than masterful.api.lite.

(fka 0.1.5)

0.2

  • Support for multiple instance segmentation masks and bounding boxes has been added.

Breaking Changes:

  • This adds an API-breaking change to the way that labels and masks are packed. See the updated documentation for details.

(fka 0.1.2)

0.1

  • Epsilon-greedy based metalearning algorithm (a generic sketch follows).
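
Epsilon-greedy selection in its generic form (an illustration of the
strategy, not Masterful's implementation):

    import random

    def epsilon_greedy(estimates, epsilon=0.1):
        # With probability epsilon, explore a random arm; otherwise
        # exploit the arm with the best current estimate.
        if random.random() < epsilon:
            return random.randrange(len(estimates))
        return max(range(len(estimates)), key=lambda i: estimates[i])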

Known Issues:

  • Slow, as it requires frequent data movement between CPU and GPU via py_function.