Release Notes

0.4.1

  • Support for object detection using familiar models from the TensorFlow Object Detection API.

  • Support for NumPy arrays, Keras Sequences, and Python generators as data input types, in addition to TF’s tf.data.Dataset (see the sketch after this list).

  • Updated learning rate finder and batch size finder.

  • Guides for SSL, data input types, classification, detection, and semantic segmentation.

  • API docstrings include code samples.
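
For reference, here is a minimal sketch of the four input families in plain TensorFlow/Keras types (illustrative only; Masterful’s own training entry points are covered in the data input types guide):

import numpy as np
import tensorflow as tf

# 1. NumPy arrays.
x = np.random.rand(100, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(100,))

# 2. tf.data.Dataset.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

# 3. Python generator.
def generator():
    for i in range(len(x)):
        yield x[i], y[i]

# 4. Keras Sequence: an indexable, length-aware batch provider.
class ArraySequence(tf.keras.utils.Sequence):
    def __len__(self):
        return len(x) // 32
    def __getitem__(self, idx):
        return x[idx * 32:(idx + 1) * 32], y[idx * 32:(idx + 1) * 32]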

0.4

See https://www.masterfulai.com/blog/announcing-masterful-v0.4 for more details.

  • Software no longer requires sign-up to start working.

  • API refined for clarity and simplicity.

  • Standalone SSL Recipe and utilities.

  • Guides and tutorials are runnable on Google Colab and downloadable.

  • GUI charts displayed as a compact dashboard.

  • Loss waterfall chart now shows benefit of SSL.

  • Ensures the dataset is deterministic so that the automatic train/val split does not leak information.

  • GUI includes a sample policy.

0.3.6

  • Support for installation via pip install masterful.

  • Support for authentication via the masterful.register function. Personal and evaluation authentication keys can be acquired at www.masterfulai.com (see the sketch after this list).

  • Visual front end installed automatically as a dependency of the masterful package, runnable as masterful-gui. Like TensorBoard, the masterful-gui application is a web server and talks to the masterful library via artifacts in the filesystem.

  • Front end includes a data health check, a parallel coordinates plot for regularization policies, and analysis of trained-model performance on single-label classification tasks: loss waterfall, accuracy before and after, and precision and recall.

  • Improved console output, with support for non-interactive sessions, interactive sessions, and Jupyter notebooks.

  • Documentation updated with an installation guide and a revamped table of contents.
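
A minimal registration sketch (the argument shown is hypothetical; consult the installation guide for the actual signature of masterful.register):

import masterful

# Authenticate with a personal or evaluation key acquired from
# www.masterfulai.com. The argument name and format here are an
# assumption for illustration only.
masterful.register("YOUR_AUTH_KEY")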

Known Issues:

  • Instance segmentation is not yet supported (but semantic segmentation is).

  • Object detection is not yet supported. Full support for models from the TensorFlow Object Detection API is coming soon.

0.3.5.7

  • Improved performance when automatically splitting a single labeled dataset into train and val. The new implementation samples the train split periodically from the very start of the dataset to the very end, ensuring that any distribution shift within the dataset does not become a distribution shift between train and val. A naive implementation using tf.data.Dataset.take() and tf.data.Dataset.skip() risks exactly such a shift if the underlying generator of the tf.data.Dataset drifts from the start to the end of the records (see the sketch after this list).

  • A call to autofit now creates only one logging directory under ~/.masterful, not two.
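
An illustrative contrast between the naive and periodic splitting strategies (not Masterful’s actual implementation):

import tensorflow as tf

# A toy dataset whose values drift from start to end, standing in for
# a generator with distribution shift.
ds = tf.data.Dataset.range(100)

# Naive split: the head goes entirely to train and the tail entirely
# to val, so start-to-end drift becomes a train/val distribution shift.
naive_train = ds.take(80)
naive_val = ds.skip(80)

# Periodic split: sample every 5th record for val so both splits span
# the full range of the dataset.
val = ds.enumerate().filter(lambda i, x: i % 5 == 0).map(lambda i, x: x)
train = ds.enumerate().filter(lambda i, x: i % 5 != 0).map(lambda i, x: x)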

Known Issues:

  • TF 2.6 issue; see 0.3.5.5. The workaround is to upgrade to TF 2.7 or to install Keras 2.6 before TF 2.6.

  • Models with a final softmax or sigmoid activation, rather than raw logits, will not benefit from Keras’s code that automatically bypasses the activation in favor of the more numerically stable logits (see the sketch after this list).

  • Console output in interactive sessions and Jupyter notebooks includes ANSI color escape sequences.
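
A robust pattern that sidesteps the issue, in plain Keras (a minimal sketch):

import tensorflow as tf

# A head that emits raw logits (no activation) lets the loss use the
# numerically stable path via from_logits=True.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # logits: no final softmax
])
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
# A model whose last layer applies softmax/sigmoid must instead use
# from_logits=False and forgoes that numerical-stability benefit.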

0.3.5.6

  • No customer-facing changes in this release.

Known Issues:

  • TF 2.6 issue; see 0.3.5.5. The workaround is to upgrade to TF 2.7 or to install Keras 2.6 before TF 2.6.

  • Models with a final softmax or sigmoid activation, rather than raw logits, will not benefit from Keras’s code that automatically bypasses the activation in favor of the more numerically stable logits.

  • Console output in interactive sessions and Jupyter notebooks includes ANSI color escape sequences.

0.3.5.5

  • Support for multi-label classification added. The product now supports single-label classification, multi-label classification, and binary classification.

  • Python and TensorFlow version coverage tests introduced. All tests pass for TF 2.4 / Python 3.7 (the AWS default), TF 2.5 / Python 3.7, TF 2.6 / Python 3.7, and TF 2.7 / Python 3.7, 3.8, and 3.9.

  • Coverage test for blending synthetic data.

  • find_batch_size no longer prints NaNs. The NaNs did not affect the calculation of the ideal batch size; however, since NaNs usually indicate a problem, they were eliminated.

  • Autofit API restored to its 0.3.5.3 form: the Training Policy concept was removed.

Known Issues:

  • Installing TF 2.6 with an updated pip will pull in Keras 2.7, which triggers an error during import of tensorflow. This is due to a bug in TensorFlow and is not related to Masterful. To resolve it, either install TF 2.7, or first install Keras 2.6 and then TF 2.6:

pip install --upgrade tensorflow==2.7

or

pip install --upgrade keras==2.6
pip install --upgrade tensorflow==2.6

0.3.5.4

  • Root cause of the “Gradient Not Found” warning fixed.

  • Models with kernel/bias/activity regularizers are now correctly frozen during phase 2 of 2 of fit (self-distillation / Noisy Student).

  • Student model weights are reinitialized during phase 2 of 2 of fit (self-distillation / Noisy Student) to improve accuracy.

  • Updated autofit API to accept a user’s predefined settings for epochs, learning rate schedule, and optimizer.

  • Default optimizer in autofit/find_fit_policy set to LAMB if no kernel regularizer is detected, otherwise SGD (see the sketch after this list).

  • Simplified API for creating an EnsemblePolicy object.

  • Recall and precision visualizations in the graphical front end.

  • Visualizations autoscale to zoom in on relevant ranges.

  • Improved console output during warmup phase.

  • Updated Ensembling Guide.
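
The kernel regularizer check behind that default can be expressed in plain Keras (an illustrative sketch, not Masterful’s actual code):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(10),
])

# If any layer carries a kernel regularizer, default to SGD;
# otherwise default to LAMB (per the release note above).
has_kernel_regularizer = any(
    getattr(layer, "kernel_regularizer", None) is not None
    for layer in model.layers)
default_optimizer = "SGD" if has_kernel_regularizer else "LAMB"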

Known Issues:

  • The autofit API will change in the next release.

0.3.5.3

  • Ensembling guide.

  • Support for single channel data like MNIST.

  • Major speed-up in find_fit_policy / find_augmentation_policy by using a fixed epochs budget of 320.

  • Warmup is faster and now prints progress to the console.

  • Fixed a logging bug: “evals on val” was not evaluating with the best weights.

  • Front-end visualization improved with a waterfall diagram for loss improvement, a new error chart, and a recall-improvement chart.

Known Issues:

  • “Gradient Not Found” warning is sent to console during autofit and masterful.core.fit. This warning is innocuous.

  • Default optimizer in find_opt_policy and autofit is experimental and will change in the next release.

0.3.5.2

  • Unsupervised pretraining guide for semi-supervised learning included.

  • Larger datasets run faster due to optimized dataset cardinality calculation.

  • Autofit runs with the same early-stopping callback, but with epochs set to 1000000 rather than 2**31, to make console output more interpretable (see the sketch after this list).

  • Distillation report includes the number of weights in the source and target (teacher and student) models.

  • Logging includes both val and test sets (if a test set is available). Note that the metalearning algorithm never sees test; evaluations on test are purely for diagnostics.

  • Log directories are named by run number (e.g. ~/.masterful/run-00001) instead of datetime (e.g. ~/.masterful/UTC_2021-08-18__17-34-29.037488).

  • Detailed logs (originally sent to ~/.masterful) can now also be sent to the console via the environment variable MASTERFUL_LOG_TO_CONSOLE=1.

  • In some cases, fit was broken due to a bug in Keras: model.trainable on a cloned model has undocumented behavior. A workaround was implemented to ensure autofit and masterful.core.fit run successfully.

  • Warmup implemented.

  • Batchnorm warmup implemented to ensure val metrics are based on stable batch-norm moving statistics. This is particularly helpful on image data that is not prenormalized to zero mean, unit variance (ZMUV).
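
The early-stopping pattern above, in plain Keras terms (an illustrative sketch, not Masterful’s internals):

import numpy as np
import tensorflow as tf

x = np.random.rand(256, 8).astype("float32")
y = np.random.randint(0, 2, size=(256,)).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Let early stopping end the run; the nominally huge epoch budget keeps
# the epoch counter readable in console output (1000000 vs. 2**31).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=1000000,
          callbacks=[early_stop], verbose=0)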

Known Issues:

  • “Gradient Not Found” warning is sent to console during autofit and masterful.core.fit. This warning is innocuous.

0.3.5.1

  • Noisy Student Training reintroduced.

  • More robust but slower settings for the optimizer policy.

  • Unsupervised pretraining supports larger model sizes.

  • Distillation API.

  • Removed warmup due to an overfitting bug; this slows training but does not affect final accuracy.

  • Documentation for the graphical front end, ensembling, and distillation.

0.3.5

  • Revised API for autofit and the advanced “core” API.

  • find_optimizer_policy searches for an optimal policy for optimizer settings.

  • find_batch_size searches for the largest batch size that fits in memory, to speed up training (see the sketch after this list).

  • General API for data and model specifications.

  • Unsupervised pretraining in the advanced API.

  • Added reflection on spatial transformations.
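
The idea behind find_batch_size, sketched as a simple doubling search (illustrative only, not Masterful’s implementation):

import numpy as np
import tensorflow as tf

def largest_batch_size(model, input_shape, start=8, limit=4096):
    # Double the batch size until a training step runs out of memory,
    # then report the last size that succeeded.
    batch_size = start
    while batch_size <= limit:
        try:
            x = np.random.rand(batch_size, *input_shape).astype("float32")
            y = np.random.randint(0, 10, size=(batch_size,))
            model.train_on_batch(x, y)
        except tf.errors.ResourceExhaustedError:
            return batch_size // 2
        batch_size *= 2
    return batch_size // 2

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="sgd",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
print(largest_batch_size(model, (32, 32, 3)))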

0.3.4

  • Added quickstart tutorial documentation.

  • Separated console output from logging to disk.

  • Support for several data formats.

  • Anchor box conversion for the Fizyr Keras RetinaNet model.

  • Localization/detection support for spatial transforms.

  • Protobuf logging during intermediate search phases.

  • Layerization of losses with serialization.

  • Native support for loss_weights.

  • Native support for multiple losses (see the sketch below).
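
What loss_weights and multiple losses mean, shown in plain Keras terms (an illustrative sketch):

import tensorflow as tf

# A two-output model: a classification head and a regression head,
# each with its own loss, combined via per-output loss weights.
inputs = tf.keras.Input(shape=(32,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
cls_out = tf.keras.layers.Dense(10, name="cls")(hidden)
reg_out = tf.keras.layers.Dense(1, name="reg")(hidden)
model = tf.keras.Model(inputs, [cls_out, reg_out])

model.compile(
    optimizer="adam",
    loss={"cls": tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
          "reg": "mse"},
    loss_weights={"cls": 1.0, "reg": 0.5})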

(codename victor)

0.3.3

  • Mixup transformation (see the sketch after this list).

  • Eliminated all showstopper bugs from the previous release.

  • Careful control of the learning rate during the metalearning algorithm.
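
An illustrative Mixup batch transform, following the standard formulation (not necessarily Masterful’s exact implementation):

import numpy as np
import tensorflow as tf

def mixup_batch(images, one_hot_labels, alpha=0.2):
    # Sample a mixing coefficient from Beta(alpha, alpha).
    lam = np.random.beta(alpha, alpha)
    # Pair every example with a shuffled partner and blend both the
    # images and the one-hot labels by the same coefficient.
    idx = tf.random.shuffle(tf.range(tf.shape(images)[0]))
    mixed_images = lam * images + (1.0 - lam) * tf.gather(images, idx)
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * tf.gather(one_hot_labels, idx)
    return mixed_images, mixed_labels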

(codename uniform)

0.3.2

  • Cutmix transformation (see the sketch after this list).

  • Removed epochs and learning rate callbacks; these are now the user’s responsibility.

  • Eliminated some showstopper bugs from the previous version.
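
An illustrative Cutmix batch transform, following the standard formulation (not necessarily Masterful’s exact implementation):

import numpy as np

def cutmix_batch(images, one_hot_labels, alpha=1.0):
    # images: [batch, height, width, channels] float array.
    batch, height, width = images.shape[0], images.shape[1], images.shape[2]
    lam = np.random.beta(alpha, alpha)
    # Cut a rectangle whose area fraction is roughly (1 - lam).
    cut_h = int(height * np.sqrt(1.0 - lam))
    cut_w = int(width * np.sqrt(1.0 - lam))
    cy, cx = np.random.randint(height), np.random.randint(width)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
    # Paste the patch from a shuffled partner and reweight the labels
    # by the actual pasted area.
    idx = np.random.permutation(batch)
    mixed_images = images.copy()
    mixed_images[:, y1:y2, x1:x2, :] = images[idx, y1:y2, x1:x2, :]
    area = (y2 - y1) * (x2 - x1) / float(height * width)
    mixed_labels = (1.0 - area) * one_hot_labels + area * one_hot_labels[idx]
    return mixed_images, mixed_labels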

Known Issues:

  • Unstable release, do not use.

(codename tango)

0.3.1

  • Added saving and loading of policies.

  • Noisy Student Training functionality.

Known Issues:

  • Unstable release, do not use.

(codename sierra)

0.3

  • Layerization of transforms for high speed augmentation.

  • Ground-up implementation of transforms.

  • Distance analysis to cluster transforms.

  • Metalearner now uses beam search.

(codename bravo)

0.2.1

  • Multiple bug fixes and performance improvements.

  • Adds support for TPU training on GCP using TensorFlow 1.15.

  • The package has been renamed from masterful_lite to masterful.

  • Corresponding APIs now reside under masterful.api rather than masterful.api.lite.

(fka 0.1.5)

0.2

  • Support for multiple instance segmentation masks and bounding boxes has been added.

Breaking Changes:

  • This adds an API-breaking change to the way labels and masks are packed. See the updated documentation for details.

(fka 0.1.2)

0.1

  • Epsilon-greedy-based metalearning algorithm.

Known Issues:

  • Slow, as it requires frequent shuttling of data between CPU and GPU via py_function (see the sketch below).
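
The bottleneck, sketched in TF terms (illustrative):

import tensorflow as tf

def python_side_transform(x):
    # Arbitrary Python/NumPy work; runs outside the TF graph.
    return x.numpy() * 2

ds = tf.data.Dataset.range(10).map(
    lambda x: tf.py_function(python_side_transform, [x], tf.int64))
# Each element takes a round trip through the Python interpreter (and,
# on GPU, between device and host), which serializes the pipeline.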