Akaike Information Criterion – Use & Examples

19.04.23 Statistical models Time to read: 6min


The Akaike Information Criterion (AIC) is a widely used model-selection tool in statistics. By fitting several models and comparing their AIC scores, you can select the best one for your data. Using AIC allows you to make more informed choices about which model best suits your research analysis, and it improves the reliability and quality of your results and findings. This article discusses when to use the Akaike information criterion and how to calculate AIC scores.

Akaike Information Criterion – In a Nutshell

You can calculate the Akaike information criterion from the following:

  • The number of independent variables in the model
  • The maximum likelihood estimate produced by the model
  • How well your model reproduces the data

Definition: Akaike information criterion

The Akaike information criterion is a mathematical criterion for model selection that estimates how well different models fit a given data set. Whenever researchers use a statistical model to represent the process that generated the data, some information is lost. The Akaike information criterion estimates and compares the amount of information lost by each model.

According to Hirotugu Akaike, the formulator of the Akaike information criterion, the best-fitting model for your data set is the one that explains the greatest amount of variation with the fewest independent variables. In practice, you fit several regression models and compare their AIC scores.

Note: It’s important to understand that Akaike information criterion scores are not meaningful on their own. They only tell you something when compared with the scores of other models fitted to the same data.


When to use the Akaike information criterion

Keep in mind the following two aspects when using the Akaike information criterion:

Model selection

When choosing the best model, you compare several candidate models by their Akaike information criterion scores and select the one with the lowest score. If two models explain the same amount of variation, the one with fewer parameters will have the lower AIC score.

Create different model sets containing combinations of the measured independent variables. For these combinations to work in the Akaike information criterion, you should do the following:

  • Understand the study system and use logically connected parameters
  • Use an experimental design
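The step of building candidate model sets from combinations of the measured independent variables can be sketched in a few lines of Python (the variable names are illustrative, not from the study):

```python
from itertools import combinations

# Measured independent variables (illustrative names)
predictors = ["hours_worked", "work_environment"]

# Every non-empty combination of predictors is one candidate model
candidate_models = [
    list(combo)
    for r in range(1, len(predictors) + 1)
    for combo in combinations(predictors, r)
]

for spec in candidate_models:
    print(" + ".join(spec))
```

Each printed line is one model specification you would then fit and score with AIC.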

In a study of how “hours spent working” and “the work environment” affect work productivity levels, you select two models:

Example

  • Model 1: Productivity as a function of hours spent working
  • Model 2: Productivity as a function of hours spent working and the work environment

Your Akaike information criterion results are as follows:

Example

  • Model 1: r² of 0.53 and p-value of less than 0.05
  • Model 2: r² of 0.43 and p-value of less than 0.05

Running the Akaike information criterion test on both models shows that model 1 has the lower AIC score: it explains more of the variation (a higher r²) while using fewer parameters. From these results, you select model 1 as the better model for your research.

Now, calculate the Akaike information criterion scores of the two models and compare them. As a rule of thumb, the preferred model should score more than two AIC units lower than the competing model; smaller differences mean the models are roughly equally well supported.

Most statistical software, such as R, includes functions that quickly calculate Akaike information criterion scores.

Example

In the formula AIC = 2K – 2 ln(L), K has a baseline value of 2.

  • If your model has one independent variable, K adds up to 3.
  • If your model uses two independent variables, K becomes 4, and so on.
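This counting rule can be written as a one-line Python helper (the function name is illustrative):

```python
def k_for_model(n_independent_vars: int) -> int:
    """K starts at the baseline of 2 and grows by one per
    independent variable, following the convention described above."""
    return 2 + n_independent_vars

print(k_for_model(1))  # one independent variable -> K = 3
print(k_for_model(2))  # two independent variables -> K = 4
```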

Remember that the Akaike information criterion is reliable when the sample size is much larger than the number of parameters. If your sample size is close to, or smaller than, the number of parameters, consider other model selection methods, like the Bayesian Information Criterion (BIC).

AIC scores

These scores measure the relative quality of models that are compared against each other using the Akaike information criterion test.

The main purpose of the AIC score is to help you determine the best model, whether statistical or machine learning (ML), for a particular data set, which is essential when you can’t easily test the models on new data.

Suppose you want to predict a person’s weight using age and height. You create two models and use the Akaike information criterion as follows:

Example

  • Model 1: Has age and height as predictors
  • Model 2: Has only height as a predictor

Fitting the two models to a data set of 1,000 observations, you obtain these AIC scores:

Example

  • Model 1: AIC = 700
  • Model 2: AIC = 400

Based on these scores, select model 2, which has the lower AIC, as the better-fitting model for the data.
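A comparison of this kind can be reproduced on synthetic data. The sketch below (Python, assuming NumPy is available) fits both models by ordinary least squares and computes each AIC from the Gaussian log-likelihood; because the data are simulated, the scores will not match the illustrative 700 and 400 above:

```python
import numpy as np

# Synthetic stand-in for the example: weight is driven by height,
# while age carries no real signal (an assumption for illustration)
rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 60, n)
height = rng.uniform(150, 200, n)
weight = 0.5 * height - 20 + rng.normal(0, 5, n)

def aic_linear(predictors, y):
    """AIC = 2K - 2 ln(L) for an ordinary least-squares fit with Gaussian
    errors, using the maximum-likelihood variance estimate RSS / n."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n_obs = len(y)
    log_l = -0.5 * n_obs * (np.log(2 * np.pi) + np.log(rss / n_obs) + 1)
    k = X.shape[1] + 1  # regression coefficients plus the error variance
    return 2 * k - 2 * log_l

aic_model_1 = aic_linear([age, height], weight)  # age and height
aic_model_2 = aic_linear([height], weight)       # height only
print(aic_model_1, aic_model_2)
```

Since age adds a parameter without real explanatory power here, the height-only model will typically score lower.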

How to calculate Akaike information criterion

Calculating the Akaike information criterion value involves understanding several aspects, as outlined below:

Use the following formulae to calculate the Akaike information criterion:

AIC = 2K – 2 ln(L)

Where:

  • K: The number of model parameters (the independent variables plus the baseline of 2)
  • ln(L): The log-likelihood of your model (this value tells you how well the model fits the data)

When the sample size is small, the Akaike information criterion tends to overfit the collected data: it selects a complex model that would not generalize to new data. To address this issue, you can use the Akaike information criterion with a correction for small sample sizes (AICc).
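Both formulas can be sketched in a few lines of Python. The AICc correction term 2K(K + 1)/(n − K − 1) is the standard small-sample correction; the example numbers are hypothetical:

```python
def aic(k: int, log_likelihood: float) -> float:
    """AIC = 2K - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def aicc(k: int, log_likelihood: float, n: int) -> float:
    """AIC corrected for small samples: AIC + 2K(K + 1) / (n - K - 1)."""
    return aic(k, log_likelihood) + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical model: K = 3 parameters, log-likelihood of -120, 30 observations
print(aic(3, -120.0))       # 246.0
print(aicc(3, -120.0, 30))  # 246.0 plus the correction 24/26
```

As n grows, the correction term shrinks toward zero and AICc converges to AIC.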

Suppose you’re conducting a study to explain variations in Body Mass Index (BMI) using sugar-sweetened beverage data. You create several models to determine how age, beverage consumption, and sex lead to these variations, fitting the three predictor variables (age, beverage consumption, and sex) and comparing the models.

To begin, you create models to test how the variables work on their own, as follows:

Model 1: Has age as a predictor: age.mod <- lm(bmi ~ age, data = bmi.data)

Model 2: Has sex as a predictor: sex.mod <- lm(bmi ~ sex, data = bmi.data)

Model 3: Has beverage consumption as a predictor: consumption.mod <- lm(bmi ~ consumption, data = bmi.data)

Then, you create models to test how the variables work when combined:

Model 4: Has age and sex as predictors: age.sex.mod <- lm(bmi ~ age + sex, data = bmi.data)

Model 5: Has age, beverage consumption, and sex as predictors: combination.mod <- lm(bmi ~ age + sex + consumption, data = bmi.data)

Finally, you create a model that explains the interaction of sex, beverage consumption, and age with BMI:

Model 6: Has the interactions of age, beverage consumption, and sex as predictors: interaction.mod <- lm(bmi ~ age*sex*consumption, data = bmi.data)

Interpreting Akaike information criterion results

After running these models in R, you’ll get the following output, which you use to interpret the results:

Example

Model               K   AICc      Delta_AICc   AICcWt   Cum.Wt   LL
combination.mod     5   1743.02      0.00       0.96     0.96    -866.45
interaction.mod     9   1749.35      6.33       0.04     1.00    -865.49
age.sex.mod         4   1760.59     17.57       0.00     1.00    -876.26
age.mod             3   1764.91     21.89       0.00     1.00    -879.43
sex.mod             3   2815.68   1072.66       0.00     1.00   -1404.82
consumption.mod     3   2820.86   1077.84       0.00     1.00   -1407.41

According to the Akaike information criterion, the best model for the data is always listed first, and it contains the following information:

  • K: The number of parameters
  • AICc: The AIC score corrected for small sample sizes
  • Delta_AICc: The difference in AICc between this model and the best model
  • AICcWt: The Akaike weight, i.e. the proportion of the total predictive power carried by this model
  • Cum.Wt: The cumulative sum of the AICc weights
  • LL: The log-likelihood of your model

From the above results, the best model is the combination model, which has the lowest Akaike information criterion score.
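The Delta_AICc and AICcWt columns can be recomputed from the AICc values alone. The Python sketch below uses the standard Akaike-weight formula, exp(−Δ/2) normalised over all candidate models, on the scores from the table:

```python
import math

# AICc values taken from the model-comparison table above
aicc_scores = {
    "combination.mod": 1743.02,
    "interaction.mod": 1749.35,
    "age.sex.mod": 1760.59,
    "age.mod": 1764.91,
    "sex.mod": 2815.68,
    "consumption.mod": 2820.86,
}

best = min(aicc_scores.values())
deltas = {name: score - best for name, score in aicc_scores.items()}

# Akaike weight: relative likelihood exp(-delta/2), normalised to sum to 1
raw = {name: math.exp(-d / 2) for name, d in deltas.items()}
total = sum(raw.values())
weights = {name: r / total for name, r in raw.items()}

for name in aicc_scores:
    print(f"{name}: delta = {deltas[name]:.2f}, weight = {weights[name]:.2f}")
```

Running this reproduces the table: the combination model carries about 96% of the weight and the interaction model about 4%, with the rest effectively at zero.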


FAQs

Can the AIC indicate absolute model quality?

Unfortunately, AIC can only provide a relative test of model quality. If all of your candidate models are statistically unsatisfactory, the Akaike information criterion will not indicate it.

When can you use the AIC?

You can use this tool in various circumstances, for example when:

  • You would need more data to test the results’ accuracy directly
  • You have a problem statement and have collected the necessary variables
  • The variables are important indicators for your problem

What makes up a model?

A model consists of one or more independent variables. In an Akaike information criterion test, researchers use the predicted interactions of these variables to explain the variation in the corresponding dependent variable.


From Lisa Neumann
