BayesFactor function - RDocumentation

Description

This function calculates Bayes factors for two or more fitted objects of class demonoid, iterquad, laplace, pmc, or vb that were estimated respectively with the LaplacesDemon, IterativeQuadrature, LaplaceApproximation, PMC, or VariationalBayes functions, and indicates the strength of evidence in favor of the hypothesis (that each model, \(\mathcal{M}_i\), is better than another model, \(\mathcal{M}_j\)).

Usage

BayesFactor(x)

Arguments

x

This is a list of two or more fitted objects of class demonoid, iterquad, laplace, pmc, or vb. The components are named in order beginning with model 1, M1, and \(k\) models are usually represented as \(\mathcal{M}_1,\dots,\mathcal{M}_k\).

Value

BayesFactor returns an object of class bayesfactor that is a list with the following components:

B

This is a matrix of Bayes factors.

Hypothesis

This is the hypothesis, and is stated as 'row > column', indicating that the model associated with the row of an element in matrix B is greater than the model associated with the column of that element.

Strength.of.Evidence

This is the strength of evidence in favor of the hypothesis.

Posterior.Probability

This is a vector of the posterior probability of each model, given flat priors.

Details

Introduced by Harold Jeffreys, the 'Bayes factor' is a Bayesian alternative to frequentist hypothesis testing, most often used to compare multiple models and determine which better fits the data (Jeffreys, 1961). Bayes factors are notoriously difficult to compute, and the Bayes factor is only defined when the marginal density of \(\textbf{y}\) under each model is proper (see is.proper). However, the Bayes factor is easy to approximate with the Laplace-Metropolis estimator (Lewis and Raftery, 1997) and other methods of approximating the logarithm of the marginal likelihood (for more information, see LML).

Hypothesis testing with Bayes factors is more robust than frequentist hypothesis testing, since the Bayesian form avoids model selection bias, evaluates evidence in favor of the null hypothesis, includes model uncertainty, and allows non-nested models to be compared (though the models must have the same dependent variable). Also, frequentist significance tests become biased in favor of rejecting the null hypothesis as the sample size grows.

The Bayes factor for comparing two models is the ratio of the marginal likelihoods of the data under model 1 and model 2. Formally, the Bayes factor in this case is

$$B = \frac{p(\textbf{y}|\mathcal{M}_1)}{p(\textbf{y}|\mathcal{M}_2)} = \frac{\int p(\textbf{y}|\Theta_1,\mathcal{M}_1)p(\Theta_1|\mathcal{M}_1)d\Theta_1}{\int p(\textbf{y}|\Theta_2,\mathcal{M}_2)p(\Theta_2|\mathcal{M}_2)d\Theta_2}$$

where \(p(\textbf{y}|\mathcal{M}_1)\) is the marginal likelihood of the data in model 1.
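As a hypothetical numeric illustration, if \(p(\textbf{y}|\mathcal{M}_1) = 0.10\) and \(p(\textbf{y}|\mathcal{M}_2) = 0.02\), then \(B = 0.10/0.02 = 5\), and the data favor \(\mathcal{M}_1\) over \(\mathcal{M}_2\) with 5:1 odds.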

The IterativeQuadrature, LaplaceApproximation, LaplacesDemon, PMC, and VariationalBayes functions each return the LML, the approximate logarithm of the marginal likelihood of the data, in each fitted object of class iterquad, laplace, demonoid, pmc, or vb. The BayesFactor function calculates matrix B, a matrix of Bayes factors, where each element of matrix B is a comparison of two models. Each Bayes factor is calculated as the exponentiated difference of LML of model 1 (\(\mathcal{M}_1\)) and LML of model 2 (\(\mathcal{M}_2\)), and the hypothesis for each element of matrix B is that the model associated with the row is greater than the model associated with the column. For example, element B[3,2] is the Bayes factor that model 3 is greater than model 2. The 'Strength of Evidence' aids in the interpretation (Jeffreys, 1961).
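As a minimal sketch of this arithmetic, assuming three hypothetical LML values in place of actual fitted objects, matrix B and the flat-prior posterior probabilities reported in Posterior.Probability can be reproduced directly in base R:

## Three hypothetical log marginal likelihoods (LML), one per model
LML <- c(M1=-180.2, M2=-178.9, M3=-179.5)
## B[i,j] = exp(LML[i] - LML[j]): evidence that the row model is better
## than the column model
B <- exp(outer(LML, LML, "-"))
dimnames(B) <- list(names(LML), names(LML))
B[3,2]  # Bayes factor for the hypothesis that M3 is better than M2
## Posterior probability of each model under flat priors, computed with
## the log-sum-exp trick for numerical stability
w <- exp(LML - max(LML))
PostProb <- w / sum(w)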

A table for the interpretation of the strength of evidence for Bayes factors is available at https://web.archive.org/web/20150214194051/http://www.bayesian-inference.com/bayesfactors.
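For convenience, the scale proposed by Jeffreys (1961), which that table reproduces, is approximately:

B > 100: Decisive
30 < B <= 100: Very strong
10 < B <= 30: Strong
3 < B <= 10: Substantial
1 < B <= 3: Barely worth mentioning

Reciprocal values (1/3, 1/10, and so on) indicate the corresponding strength of evidence against the hypothesis.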

Each Bayes factor, B, is the posterior odds in favor of the hypothesis divided by the prior odds in favor of the hypothesis, where the hypothesis is usually \(\mathcal{M}_1 > \mathcal{M}_2\). For example, when B[3,2]=2, the data favor \(\mathcal{M}_3\) over \(\mathcal{M}_2\) with 2:1 odds.

It is also popular to consider the natural logarithm of the Bayes factor, which is symmetric about zero: evidence of equal strength for or against a hypothesis is equidistant from zero, making visual comparisons easier. For example, Bayes factors of 0.5 and 2 represent equally strong evidence in opposite directions, and their logarithms are -0.69 and 0.69.
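This symmetry is easy to verify in R:

log(c(0.5, 2))  # -0.6931472  0.6931472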

Gelman finds Bayes factors generally to be irrelevant, because they compute the relative probabilities of the models conditional on one of them being true. Gelman prefers approaches that measure the distance of the data to each of the approximate models (Gelman et al., 2004, p. 180), such as posterior predictive checks (see the predict.iterquad, predict.laplace, predict.demonoid, predict.pmc, or predict.vb function, according to whether the model was fitted with iterative quadrature, Laplace Approximation, MCMC, PMC, or Variational Bayes). Kass and Raftery (1995) assert that this can be done without assuming one model is the true model.

References

Gelman, A., Carlin, J., Stern, H., and Rubin, D. (2004). "Bayesian Data Analysis, 2nd ed.". Texts in Statistical Science. Chapman and Hall: London.

Jeffreys, H. (1961). "Theory of Probability, 3rd ed.". Oxford University Press: Oxford, England.

Kass, R.E. and Raftery, A.E. (1995). "Bayes Factors". Journal of the American Statistical Association, 90(430), p. 773--795.

Lewis, S.M. and Raftery, A.E. (1997). "Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator". Journal of the American Statistical Association, 92, p. 648--655.

See Also

is.bayesfactor, is.proper, IterativeQuadrature, LaplaceApproximation, LaplacesDemon, LML, PMC, predict.demonoid, predict.iterquad, predict.laplace, predict.pmc, predict.vb, and VariationalBayes.

Examples


# The following example fits a model as Fit1, then adds a predictor, and
# fits another model, Fit2. The two models are compared with Bayes
# factors.

library(LaplacesDemon)

##############################  Demon Data  ###############################
data(demonsnacks)
J <- 2
y <- log(demonsnacks$Calories)
X <- cbind(1, as.matrix(log(demonsnacks[,10]+1)))
X[,2] <- CenterScale(X[,2])

#########################  Data List Preparation  #########################
mon.names <- "LP"
parm.names <- as.parm.names(list(beta=rep(0,J), sigma=0))
pos.beta <- grep("beta", parm.names)
pos.sigma <- grep("sigma", parm.names)
PGF <- function(Data) {
     beta <- rnorm(Data$J)
     sigma <- runif(1)
     return(c(beta, sigma))
     }
MyData <- list(J=J, PGF=PGF, X=X, mon.names=mon.names,
     parm.names=parm.names, pos.beta=pos.beta, pos.sigma=pos.sigma, y=y)

##########################  Model Specification  ##########################
Model <- function(parm, Data) {
     ### Parameters
     beta <- parm[Data$pos.beta]
     sigma <- interval(parm[Data$pos.sigma], 1e-100, Inf)
     parm[Data$pos.sigma] <- sigma
     ### Log-Prior
     beta.prior <- sum(dnormv(beta, 0, 1000, log=TRUE))
     sigma.prior <- dhalfcauchy(sigma, 25, log=TRUE)
     ### Log-Likelihood
     mu <- tcrossprod(Data$X, t(beta))
     LL <- sum(dnorm(Data$y, mu, sigma, log=TRUE))
     ### Log-Posterior
     LP <- LL + beta.prior + sigma.prior
     Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
          yhat=rnorm(length(mu), mu, sigma), parm=parm)
     return(Modelout)
     }

############################  Initial Values  #############################
Initial.Values <- GIV(Model, MyData, PGF=TRUE)

########################  Laplace Approximation  ##########################
Fit1 <- LaplaceApproximation(Model, Initial.Values, Data=MyData,
     Iterations=10000)
Fit1

##############################  Demon Data  ###############################
data(demonsnacks)
J <- 3
y <- log(demonsnacks$Calories)
X <- cbind(1, as.matrix(demonsnacks[,c(7,8)]))
X[,2] <- CenterScale(X[,2])
X[,3] <- CenterScale(X[,3])

#########################  Data List Preparation  #########################
mon.names <- c("sigma","mu[1]")
parm.names <- as.parm.names(list(beta=rep(0,J), sigma=0))
pos.beta <- grep("beta", parm.names)
pos.sigma <- grep("sigma", parm.names)
PGF <- function(Data) return(c(rnormv(Data$J,0,10), rhalfcauchy(1,5)))
MyData <- list(J=J, PGF=PGF, X=X, mon.names=mon.names,
     parm.names=parm.names, pos.beta=pos.beta, pos.sigma=pos.sigma, y=y)

############################  Initial Values  #############################
Initial.Values <- GIV(Model, MyData, PGF=TRUE)

########################  Laplace Approximation  ##########################
Fit2 <- LaplaceApproximation(Model, Initial.Values, Data=MyData,
     Iterations=10000)
Fit2

#############################  Bayes Factor  ##############################
Model.list <- list(M1=Fit1, M2=Fit2)
BayesFactor(Model.list)
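Since each fitted object carries an approximate LML, the returned bayesfactor object can then be inspected component by component (a usage sketch, assuming both fits succeed):

BF <- BayesFactor(Model.list)
BF$B                        # matrix of Bayes factors
BF$Strength.of.Evidence     # interpretation per Jeffreys (1961)
BF$Posterior.Probability    # flat-prior posterior probability of each model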


