nif.optimizers

nif.optimizers.function_factory(model, loss, train_x, train_y, display_epoch)

A factory to create a function required by tfp.optimizer.lbfgs_minimize.

Parameters:
  • model [in] – an instance of tf.keras.Model or its subclasses.

  • loss [in] – a function with signature loss_value = loss(pred_y, true_y).

  • train_x [in] – the input part of the training data.

  • train_y [in] – the output part of the training data.

  • display_epoch [in] – how often, in iterations, the training loss is displayed (assumed from the parameter name; not described in the original docstring).

Returns:

A function with the signature loss_value, gradients = f(model_parameters).
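
As a hedged illustration, the returned closure can be handed directly to lbfgs_minimize. The model, data, and the flattened-parameter convention below are assumptions made for this sketch, not part of the documented API:

```python
import numpy as np
import tensorflow as tf

import nif

# Hypothetical model, loss, and data, purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
loss = lambda pred_y, true_y: tf.reduce_mean(tf.square(pred_y - true_y))
train_x = np.random.rand(128, 2).astype("float32")
train_y = np.random.rand(128, 1).astype("float32")

func = nif.optimizers.function_factory(model, loss, train_x, train_y,
                                       display_epoch=10)

# Assumed convention: the closure takes the model parameters flattened
# into a single 1-D tensor.
init_params = tf.concat(
    [tf.reshape(v, [-1]) for v in model.trainable_variables], axis=0)

results = nif.optimizers.lbfgs_minimize(
    value_and_gradients_function=func,
    initial_position=init_params,
    max_iterations=50)
```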

nif.optimizers.lbfgs_minimize(value_and_gradients_function, initial_position, previous_optimizer_results=None, num_correction_pairs=10, tolerance=1e-08, x_tolerance=0, f_relative_tolerance=0, initial_inverse_hessian_estimate=None, max_iterations=50, parallel_iterations=1, stopping_condition=None, max_line_search_iterations=50, f_absolute_tolerance=0, name=None)

Applies the L-BFGS algorithm to minimize a differentiable function.

Performs unconstrained minimization of a differentiable function using the L-BFGS scheme. See [Nocedal and Wright(2006)][1] for details of the algorithm.

### Usage:

The following example demonstrates the L-BFGS optimizer attempting to find the minimum for a simple high-dimensional quadratic objective function.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# A high-dimensional quadratic bowl.
ndims = 60
minimum = np.ones([ndims], dtype='float64')
scales = np.arange(ndims, dtype='float64') + 1.0

# The objective function and the gradient.
def quadratic_loss_and_gradient(x):
    return tfp.math.value_and_gradient(
        lambda x: tf.reduce_sum(
            scales * tf.math.squared_difference(x, minimum), axis=-1),
        x)

start = np.arange(ndims, 0, -1, dtype='float64')
optim_results = tfp.optimizer.lbfgs_minimize(
    quadratic_loss_and_gradient,
    initial_position=start,
    num_correction_pairs=10,
    tolerance=1e-8)

# Check that the search converged.
assert optim_results.converged
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
```

### References:

[1] Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in Operations Research, pp. 176-180, 2006. http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf

Parameters:
  • value_and_gradients_function – A Python callable that accepts a point as a real Tensor and returns a tuple of Tensors of real dtype containing the value of the function and its gradient at that point. This is the function to be minimized. The input is of shape […, n], where n is the size of the domain of input points, and all other dimensions are batching dimensions. The first component of the return value is a real Tensor of matching shape […]. The second component (the gradient) is also of shape […, n], like the input value to the function.

  • initial_position – Real Tensor of shape […, n]. The starting point, or points when using batching dimensions, of the search procedure. At these points the function value and the gradient norm should be finite. Exactly one of initial_position and previous_optimizer_results can be non-None.

  • previous_optimizer_results – An LBfgsOptimizerResults namedtuple to initialize the optimizer state from, instead of an initial_position. This can be passed in from a previous return value to resume optimization with a different stopping_condition. Exactly one of initial_position and previous_optimizer_results can be non-None.

  • num_correction_pairs – Positive integer. Specifies the maximum number of (position_delta, gradient_delta) correction pairs to keep as implicit approximation of the Hessian matrix.

  • tolerance – Scalar Tensor of real dtype. Specifies the gradient tolerance for the procedure. If the supremum norm of the gradient vector is below this number, the algorithm is stopped.

  • x_tolerance – Scalar Tensor of real dtype. If the absolute change in the position between one iteration and the next is smaller than this number, the algorithm is stopped.

  • f_relative_tolerance – Scalar Tensor of real dtype. If the relative change in the objective value between one iteration and the next is smaller than this value, the algorithm is stopped.

  • initial_inverse_hessian_estimate – None. Option currently not supported.

  • max_iterations – Scalar positive int32 Tensor. The maximum number of iterations for L-BFGS updates.

  • parallel_iterations – Positive integer. The number of iterations allowed to run in parallel.

  • stopping_condition – (Optional) A Python function that takes as input two Boolean tensors of shape […], and returns a Boolean scalar tensor. The input tensors are converged and failed, indicating the current status of each respective batch member; the return value states whether the algorithm should stop. The default is tfp.optimizer.converged_all, which only stops when all batch members have either converged or failed. An alternative is tfp.optimizer.converged_any, which stops as soon as one batch member has converged, or when all have failed (see the sketch after this list).

  • max_line_search_iterations – Python int. The maximum number of iterations for the hager_zhang line search algorithm.

  • f_absolute_tolerance – Scalar Tensor of real dtype. If the absolute change in the objective value between one iteration and the next is smaller than this value, the algorithm is stopped.

  • name – (Optional) Python str. The name prefixed to the ops created by this function. If not supplied, the default name ‘minimize’ is used.
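
For instance, here is a minimal sketch, using a toy quadratic objective that is not part of this API, of a batched search that stops as soon as any batch member converges:

```python
import tensorflow as tf
import tensorflow_probability as tfp

def loss_and_gradient(x):
    # Convex toy objective: sum of squares, batched over the leading dimension.
    return tfp.math.value_and_gradient(
        lambda x: tf.reduce_sum(tf.square(x), axis=-1), x)

# Two starting points form a batch of two independent searches.
starts = tf.constant([[1.0, 2.0], [-3.0, 4.0]], dtype=tf.float64)

results = tfp.optimizer.lbfgs_minimize(
    loss_and_gradient,
    initial_position=starts,
    stopping_condition=tfp.optimizer.converged_any,  # stop on first convergence
    tolerance=1e-8)
```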

Returns:

optimizer_results – A namedtuple containing the following items:

  • converged – Scalar boolean tensor indicating whether the minimum was found within tolerance.

  • failed – Scalar boolean tensor indicating whether a line search step failed to find a suitable step size satisfying Wolfe conditions. In the absence of any constraints on the number of objective evaluations permitted, this value will be the complement of converged. However, if there is a constraint and the search stopped due to available evaluations being exhausted, both failed and converged will be simultaneously False.

  • num_objective_evaluations – The total number of objective evaluations performed.

  • position – A tensor containing the last argument value found during the search. If the search converged, then this value is the argmin of the objective function.

  • objective_value – A tensor containing the value of the objective function at the position. If the search converged, then this is the (local) minimum of the objective function.

  • objective_gradient – A tensor containing the gradient of the objective function at the position. If the search converged, the max-norm of this tensor should be below the tolerance.

  • position_deltas – A tensor encoding information about the latest changes in position during the algorithm execution.

  • gradient_deltas – A tensor encoding information about the latest changes in objective_gradient during the algorithm execution.
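
Continuing the sketch above, a returned namedtuple can be fed back through previous_optimizer_results to resume the search under a different stopping_condition:

```python
# First pass: stop as soon as any batch member converges.
partial = tfp.optimizer.lbfgs_minimize(
    loss_and_gradient,
    initial_position=starts,
    stopping_condition=tfp.optimizer.converged_any)

# Resume from the returned state until every batch member converges or fails.
final = tfp.optimizer.lbfgs_minimize(
    loss_and_gradient,
    previous_optimizer_results=partial,
    stopping_condition=tfp.optimizer.converged_all)
```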

class nif.optimizers.LBFGSOptimizer(loss_closure, trainable_variables, steps=1)

Bases: object

property epoch
property loss
minimize()
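
A minimal usage sketch, assuming loss_closure is a zero-argument callable returning the scalar loss and steps is the number of L-BFGS iterations per minimize() call (both inferred from the signature, not confirmed by the docstring):

```python
import tensorflow as tf

import nif

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
x = tf.random.normal([32, 3])
y = tf.random.normal([32, 1])

# Zero-argument closure returning the scalar training loss (assumed interface).
loss_closure = lambda: tf.reduce_mean(tf.square(model(x) - y))

opt = nif.optimizers.LBFGSOptimizer(loss_closure, model.trainable_variables,
                                    steps=100)
opt.minimize()
print(opt.epoch, opt.loss)  # inspect progress via the exposed properties
```
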
class nif.optimizers.TFPLBFGS(model, loss_fun, inps, outs, display_epoch=1)

Bases: object

minimize(rounds=50, max_iter=50)
property history
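
A minimal usage sketch, assuming minimize(rounds, max_iter) runs rounds successive L-BFGS calls of at most max_iter iterations each and history records the loss trajectory (both inferred from the signatures, not confirmed by the docstring):

```python
import numpy as np
import tensorflow as tf

import nif

# Hypothetical model, loss, and data, purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
loss_fun = lambda pred_y, true_y: tf.reduce_mean(tf.square(pred_y - true_y))
inps = np.random.rand(128, 2).astype("float32")
outs = np.random.rand(128, 1).astype("float32")

solver = nif.optimizers.TFPLBFGS(model, loss_fun, inps, outs, display_epoch=1)
solver.minimize(rounds=10, max_iter=50)
print(solver.history)  # assumed to hold the recorded loss values
```
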
class nif.optimizers.L4Adam(learning_rate=0.15, tau_m=10.0, tau_s=1000.0, tau=1000.0, gamma_0=0.75, gamma=0.9, epsilon=1e-07, name='L4Adam', **kwargs)

Bases: OptimizerV2

Implements the L4Adam optimizer.

This optimizer is an implementation of the L4 optimization algorithm with an adaptive learning rate that is based on the Adam optimizer.

Variables:
  • learning_rate – A float, the initial learning rate.

  • tau_m – A float, decay rate for first moment estimates.

  • tau_s – A float, decay rate for second moment estimates.

  • tau – A float, decay rate for the l_min estimate.

  • gamma_0 – A float, initial proportion of the loss to be considered as l_min.

  • gamma – A float, parameter to control the proportion of l_min in the update.

  • epsilon – A float, small constant for numerical stability.

  • name – Optional string, the name for the optimizer.
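
Since L4Adam subclasses OptimizerV2, it should plug into the standard Keras workflow; a minimal sketch (the model and data below are placeholders, not part of the API):

```python
import tensorflow as tf

import nif

opt = nif.optimizers.L4Adam(learning_rate=0.15, gamma=0.9)

# Placeholder model and data for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=opt, loss="mse")

x = tf.random.normal([64, 4])
y = tf.random.normal([64, 1])
model.fit(x, y, epochs=2)
```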

_create_slots()

Creates slots for the optimizer’s state.

_prepare_local()

Prepares the local hyperparameters and derived quantities.

_momentum_add()

Computes the momentum addition for a given variable.

_resource_apply_dense()

Applies the dense gradients to the model variables.

minimize()

Minimizes the loss function for the given model variables.

apply_gradients()

Applies the gradients to the model variables.

_distributed_apply()

Applies the gradients in a distributed setting.

_resource_apply_sparse()

Not implemented; raises NotImplementedError.

get_config()

Returns the config dictionary for the optimizer instance.

minimize(loss, var_list, grad_loss=None, name=None, tape=None)

Minimize loss by updating var_list.

This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, call tf.GradientTape and apply_gradients() explicitly instead of using this function.

Parameters:
  • loss – Tensor or callable. If a callable, loss should take no arguments and return the value to minimize. If a Tensor, the tape argument must be passed.

  • var_list – list or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize, since the variables are created the first time loss is called.

  • grad_loss – (Optional). A Tensor holding the gradient computed for loss.

  • name – (Optional) str. Name for the returned operation.

  • tape – (Optional) tf.GradientTape. If loss is provided as a Tensor, the tape that computed the loss must be provided.

Returns:

An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.

Raises:

ValueError – If some of the variables are not Variable objects.
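
For example, with a zero-argument callable loss (so no tape is required), a hedged sketch:

```python
import tensorflow as tf

import nif

var = tf.Variable(2.0)
loss_fn = lambda: (var - 1.0) ** 2  # callable loss: no tape argument needed

opt = nif.optimizers.L4Adam()
opt.minimize(loss_fn, var_list=[var])  # one update step toward the minimum at 1.0
```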

apply_gradients(grads_and_vars, name=None, experimental_aggregate_gradients=True, loss=None)

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

The method sums gradients from all replicas in the presence of tf.distribute.Strategy by default. You can aggregate gradients yourself by passing experimental_aggregate_gradients=False.

Example:

```python
grads = tape.gradient(loss, vars)
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
# Processing aggregated gradients.
optimizer.apply_gradients(zip(grads, vars),
                          experimental_aggregate_gradients=False)
```

Parameters:
  • grads_and_vars – List of (gradient, variable) pairs.

  • name – Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

  • experimental_aggregate_gradients – Whether to sum gradients from different replicas in the presence of tf.distribute.Strategy. If False, it is the user's responsibility to aggregate the gradients. Defaults to True.

Returns:

An Operation that applies the specified gradients. The iterations will be automatically increased by 1.

Raises:
  • TypeError – If grads_and_vars is malformed.

  • ValueError – If none of the variables have gradients.

  • RuntimeError – If called in a cross-replica context.

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.

class nif.optimizers.AdaBeliefOptimizer(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-14, weight_decay=0.0, rectify=True, amsgrad=False, sma_threshold=5.0, total_steps=0, warmup_proportion=0.1, min_lr=0.0, name='AdaBeliefOptimizer', print_change_log=True, **kwargs)

Bases: OptimizerV2

It implements the AdaBeliefOptimizer proposed by Juntang Zhuang et al. in [AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients](https://arxiv.org/abs/2010.07468).

Contributor(s): Jerry Yu [cryu854] <cryu854@gmail.com>

Example of usage:

```python
from adabelief_tf import AdaBeliefOptimizer

opt = AdaBeliefOptimizer(lr=1e-3)
```

Note: amsgrad is not described in the original paper. Use it with caution.

AdaBeliefOptimizer is not a replacement for heuristic warmup; if warmup has already been employed and tuned in the baseline method, those settings should be kept. You can enable warmup by setting total_steps and warmup_proportion:

```python
opt = AdaBeliefOptimizer(
    lr=1e-3,
    total_steps=10000,
    warmup_proportion=0.1,
    min_lr=1e-5,
)
```

In the above example, the learning rate will increase linearly from 0 to lr in 1000 steps, then decrease linearly from lr to min_lr in 9000 steps.

Lookahead, proposed by Michael R. Zhang et al. in the paper [Lookahead Optimizer: k steps forward, 1 step back](https://arxiv.org/abs/1907.08610v1), can be integrated with AdaBeliefOptimizer. This combination was announced by Less Wright, and the combined optimizer is also called "Ranger". The mechanism can be enabled by using the Lookahead wrapper. For example:

```python
adabelief = AdaBeliefOptimizer()
ranger = tfa.optimizers.Lookahead(adabelief, sync_period=6, slow_step_size=0.5)
```

Example of serialization:

```python
optimizer = AdaBeliefOptimizer(learning_rate=lr_scheduler, weight_decay=wd_scheduler)
config = tf.keras.optimizers.serialize(optimizer)
new_optimizer = tf.keras.optimizers.deserialize(
    config, custom_objects={"AdaBeliefOptimizer": AdaBeliefOptimizer})
```

Parameters:
  • learning_rate – A Tensor, a floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate.

  • beta_1 – A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.

  • beta_2 – A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.

  • epsilon – A small constant for numerical stability.

  • weight_decay – A Tensor, a floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. Weight decay for each parameter.

  • rectify – Boolean. Whether to enable rectification as in RectifiedAdam.

  • amsgrad – Boolean. Whether to apply the AMSGrad variant of this algorithm from the paper “On the Convergence of Adam and Beyond”.

  • sma_threshold – A float value. The threshold for simple mean average.

  • total_steps – An integer. Total number of training steps. Enable warmup by setting a positive value.

  • warmup_proportion – A floating point value. The proportion of increasing steps.

  • min_lr – A floating point value. Minimum learning rate after warmup.

  • name – Optional name for the operations created when applying gradients. Defaults to “AdaBeliefOptimizer”.

  • **kwargs – Keyword arguments. Allowed to be {clipnorm, clipvalue, lr, decay}. clipnorm clips gradients by norm; clipvalue clips gradients by value; decay is included for backward compatibility to allow time inverse decay of the learning rate; lr is included for backward compatibility, and it is recommended to use learning_rate instead.

set_weights(weights)

Set the weights of the optimizer.

The weights of an optimizer are its state (ie, variables). This function takes the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer’s state variables in the order they are created. The passed values are used to set the new state of the optimizer.

For example, the RMSprop optimizer for this simple model takes a list of three values: the iteration count, followed by the root-mean-square value of the kernel and bias of the single Dense layer:

>>> opt = tf.keras.optimizers.RMSprop()
>>> m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
>>> m.compile(opt, loss='mse')
>>> data = np.arange(100).reshape(5, 20)
>>> labels = np.zeros(5)
>>> results = m.fit(data, labels)  # Training.
>>> new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])]
>>> opt.set_weights(new_weights)
>>> opt.iterations
<tf.Variable 'RMSprop/iter:0' shape=() dtype=int64, numpy=10>
Parameters:
  • weights – weight values as a list of numpy arrays.

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.

nif.optimizers.centralized_gradients_for_optimizer(optimizer)

Create a centralized gradients function for a specified optimizer.

Parameters:
  • optimizer – a tf.keras.optimizers.Optimizer object. The optimizer you are using.

Usage:

```python
>>> opt = tf.keras.optimizers.Adam(learning_rate=0.1)
>>> opt.get_gradients = gctf.centralized_gradients_for_optimizer(opt)
>>> model.compile(optimizer=opt, ...)
```

class nif.optimizers.Lion(learning_rate=0.0001, beta_1=0.9, beta_2=0.99, wd=0, name='lion', **kwargs)

Bases: Optimizer

Implements the Lion optimization algorithm.

The Lion optimizer is a custom optimization algorithm based on first-order stochastic gradient descent methods. It incorporates a weighted decay term and momentum-based updates.

Variables:
  • learning_rate (float) – The learning rate. Defaults to 1e-4.

  • beta_1 (float) – The exponential decay rate for the first moment estimates. Defaults to 0.9.

  • beta_2 (float) – The exponential decay rate for the second moment estimates. Defaults to 0.99.

  • wd (float) – The weight decay factor. Defaults to 0.

  • name (str) – The name of the optimizer. Defaults to “lion”.
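
As with the other optimizers here, Lion should drop into the usual Keras training loop; a minimal sketch (the model and data below are placeholders, not part of the API):

```python
import tensorflow as tf

import nif

opt = nif.optimizers.Lion(learning_rate=1e-4, beta_1=0.9, beta_2=0.99, wd=0.01)

# Placeholder model for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=opt, loss="mse")
```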

get_config()

Returns the config of the optimizer.

An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.

Returns:

Python dictionary.