Function to find the Maximum Likelihood Estimates of distributional parameters using TensorFlow.
mle_tf(
  x,
  xdist = "Normal",
  fixparam = NULL,
  initparam,
  bounds = NULL,
  optimizer = "AdamOptimizer",
  hyperparameters = NULL,
  maxiter = 10000,
  tolerance = .Machine$double.eps
)
x | a vector containing the data to be fitted.
---|---
xdist | a character indicating the name of the distribution of interest. The default value is "Normal".
fixparam | a list containing the fixed parameters of the distribution of interest, only if they exist. The parameter names and values must be specified in the list.
initparam | a list with initial values of the parameters to be estimated. The list must contain the parameter names and values.
bounds | a list with lower and upper bounds for each parameter to be estimated. The list must contain the parameter names and vectors with the bounds. The default value is NULL.
optimizer | a character indicating the name of the TensorFlow optimizer to be used in the estimation process. The default value is "AdamOptimizer".
hyperparameters | a list with the hyperparameter values of the selected TensorFlow optimizer. If the hyperparameters are not specified, their default values are used in the optimization process. For more details on the hyperparameters, see https://www.tensorflow.org/api_docs/python/tf/compat/v1/train
maxiter | a positive integer indicating the maximum number of iterations for the optimization algorithm. The default value is 10000.
tolerance | a small positive number. When the difference between the loss value or the parameter values from one iteration to the next is lower than this value, the optimization process stops. The default value is .Machine$double.eps.
This function returns the estimates, standard errors, Z-scores, and p-values of significance tests for the parameters of the distribution of interest, as well as some information about the optimization process, such as the number of iterations needed for convergence.
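As a rough sketch of how Wald-type standard errors and tests like these are typically obtained (an illustration of the general approach, not the package's exact implementation), the inverse of the Hessian of the negative log-likelihood at the optimum estimates the covariance matrix of the parameters; here in Python for a normal sample, where the MLEs have a closed form:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10, scale=3, size=1000)
n = len(x)

# Closed-form MLEs for the normal distribution: sample mean and sd (ddof = 0)
mu_hat, sigma_hat = x.mean(), x.std()

def nll(p):
    """Negative log-likelihood of Normal(mu, sigma), constants dropped."""
    mu, sigma = p
    return n * math.log(sigma) + np.sum((x - mu) ** 2) / (2 * sigma ** 2)

def hessian(f, p, eps=1e-4):
    """Central finite-difference Hessian of f at p."""
    p = np.asarray(p, dtype=float)
    k = p.size
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = eps
            ej = np.zeros(k); ej[j] = eps
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * eps ** 2)
    return H

# Covariance of the estimates = inverse observed information at the MLE
cov = np.linalg.inv(hessian(nll, [mu_hat, sigma_hat]))
se = np.sqrt(np.diag(cov))

# Wald Z-scores and two-sided p-values, as in the summary table
z = np.array([mu_hat, sigma_hat]) / se
p_values = np.array([math.erfc(abs(zi) / math.sqrt(2)) for zi in z])
print(se, z, p_values)
```

For the normal model the diagonal of this matrix recovers the familiar formulas se(mu) = sigma/sqrt(n) and se(sigma) = sigma/sqrt(2n), which match the magnitudes in the summary output below.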
mle_tf computes the log-likelihood function of the distribution specified in xdist and finds the values of the parameters that maximize this function using the TensorFlow optimizer specified in optimizer.
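The underlying idea — minimizing the negative log-likelihood with a numerical optimizer — can be sketched outside TensorFlow. This hypothetical Python/SciPy version fits the same normal model from the first example; it illustrates the approach only and is not the package's code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = rng.normal(loc=10, scale=3, size=1000)

# Negative log-likelihood of a Normal(mu, sigma) sample (constants dropped);
# minimizing this is equivalent to maximizing the log-likelihood
def nll(params):
    mu, sigma = params
    return np.sum(np.log(sigma) + (x - mu) ** 2 / (2 * sigma ** 2))

# Start from initial values like those initparam would supply; the lower
# bound keeps sigma positive, playing the role of mle_tf's bounds argument
res = minimize(nll, x0=[1.0, 1.0], method="L-BFGS-B",
               bounds=[(None, None), (1e-6, None)])
mu_hat, sigma_hat = res.x
print(mu_hat, sigma_hat)  # close to the closed-form MLEs x.mean(), x.std()
```

mle_tf does the analogous minimization with TensorFlow's gradient-based optimizers, iterating until maxiter is reached or successive changes fall below tolerance.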
The R function that contains the probability mass/density function must not contain curly brackets. The only curly brackets the function can contain are those that enclose the function, that is, those that define the beginning and end of the R function.
The summary, print, and plot_loss functions can be used with a mle_tf object.
Sara Garcés Céspedes sgarcesc@unal.edu.co
#------------------------------------------------------------------------------
# Estimation of parameters mean and sd of the normal distribution

# Vector with the data to be fitted
x <- rnorm(n = 1000, mean = 10, sd = 3)

# Use the mle_tf function
estimation_1 <- mle_tf(x,
                       xdist = "Normal",
                       optimizer = "AdamOptimizer",
                       initparam = list(mean = 1.0, sd = 1.0),
                       hyperparameters = list(learning_rate = 0.1))

# Get the summary of the estimates
summary(estimation_1)
#> Distribution: Normal
#> Number of observations: 1000
#> TensorFlow optimizer: AdamOptimizer
#> Negative log-likelihood: 653.9305
#> Loss function convergence, 1193 iterations needed.
#> ---------------------------------------------------
#>      Estimate Std. Error Z value Pr(>|z|)
#> mean 10.05937    0.09270  108.52   <2e-16 ***
#> sd    2.93144    0.06581   44.55   <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

#------------------------------------------------------------------------------
# Estimation of parameter lambda of the Instantaneous Failures distribution

# Create an R function that contains the probability density function
pdf <- function(X, lambda) {
  (1 / ((lambda^2) * (lambda - 1))) * (lambda^2 + X - 2 * lambda) * exp(-X / lambda)
}

# Vector with the data to be fitted
x <- c(3.4, 0.0, 0.0, 15.8, 232.8, 8.8, 123.2, 47, 154, 103.2, 89.8, 12.2)

# Use the mle_tf function
estimation_2 <- mle_tf(x = x,
                       xdist = pdf,
                       initparam = list(lambda = rnorm(1, 5, 1)),
                       bounds = list(lambda = c(2, Inf)),
                       optimizer = "AdamOptimizer",
                       hyperparameters = list(learning_rate = 0.1),
                       maxiter = 10000)

# Get the summary of the estimates
summary(estimation_2)
#> Number of observations: 12
#> TensorFlow optimizer: AdamOptimizer
#> Negative log-likelihood: 62.2489
#> Loss function convergence, 97 iterations needed.
#> ---------------------------------------------------
#>        Estimate Std. Error Z value Pr(>|z|)
#> lambda    64.89         NA      NA       NA