Lesson 39:  Binomial Confidence Intervals


Video: Simulating binomial confidence intervals

Consider a binomial distribution with parameters n and p.  We want to study confidence intervals and coverage.  Here is a way to simulate binomial data:

%let n=10;
%let p=.5;
%let sims=10;
data one;
  do i=1 to &sims;
    x=ranbin(0,&n,&p);   *parameters are seed, n, p; a seed of 0 uses the system clock;
    output;
  end;
run;

proc print;
run;

The usual method of calculating a confidence interval uses the normal approximation.  This can perform poorly, especially when p is not close to .5 and n is small.  The formula is phat ± z*sqrt(phat*qhat/n), where phat=x/n, qhat=1-phat, and z is the alpha/2 critical value from the standard normal distribution (1.96 for a 95% CI).

Now, coverage means the proportion (or percent) of confidence intervals that actually contain, or capture, the true parameter p.  So, ideally, if we have a 95% confidence interval, 95% of the intervals should contain the true parameter.  However, this may not hold if the assumptions are not met or the approximation is poor.

In our simulation we will calculate a 95% CI for each simulated sample.

%let n=10;
%let p=.5;
%let sims=10;
%let z=1.96;   *z-value for a 95% CI;
data one;
  do i=1 to &sims;
    x=ranbin(0,&n,&p);
    phat=x/&n;
    lb=phat-&z*sqrt(phat*(1-phat)/&n);  *lower bound of the CI;
    ub=phat+&z*sqrt(phat*(1-phat)/&n);  *upper bound of the CI;
    output;
  end;
run;

proc print;
run;

To find the coverage we need to keep track of how many times the CI includes the true parameter p.  Now, we could store the results of the simulation in a data set and then analyze it, but this is a simple question: we can count the correct results as the data step runs, so we don't need to save all the individual results.

data one(keep=cover coverage);
  do i=1 to &sims;
    x=ranbin(0,&n,&p);
    phat=x/&n;
    lb=phat-&z*sqrt(phat*(1-phat)/&n); 
    ub=phat+&z*sqrt(phat*(1-phat)/&n);
    if &p>lb and &p<ub then cover+1;  *count up the intervals that cover; 
    coverage=cover/&sims;    *the output statement is removed; we only need the final result;
  end;
run;

proc print;
run;

Run this with p=.5 and n=10 with 1,000 or 10,000 simulations.  I got a coverage of 89%.  This means we do not have a 95% CI, but only an 89% CI in this situation.  Now try increasing the sample size (n) to 20.  I got 95.7%.  So with larger sample sizes we get better coverage, although increasing the sample size beyond that doesn't seem to improve it further.  Now try setting p to .1.  It seems in this case you need a sample size of at least 100 to get close to 95% coverage.
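One convenient way to run these experiments, not part of the original program, is to wrap the simulation in a SAS macro so that n, p, and the number of simulations become parameters.  This is just a sketch; the macro name coverage and its default values are my own choices.

%macro coverage(n=, p=, sims=10000, z=1.96);
data one(keep=coverage);
  do i=1 to &sims;
    x=ranbin(0,&n,&p);
    phat=x/&n;
    lb=phat-&z*sqrt(phat*(1-phat)/&n);
    ub=phat+&z*sqrt(phat*(1-phat)/&n);
    if &p>lb and &p<ub then cover+1;   *count up the intervals that cover;
    coverage=cover/&sims;
  end;
run;
proc print data=one;
  title "coverage for n=&n, p=&p, sims=&sims";
run;
%mend coverage;

%coverage(n=10, p=.5)
%coverage(n=20, p=.5)
%coverage(n=100, p=.1)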

However, that is not the whole story.  There are other problems with these confidence intervals.  Look at this sample of 10 confidence intervals produced with a p of .1 and an n of 20.  Notice that one interval is 0 to 0, so it automatically misses.  But most of the intervals have negative lower bounds.  This does not make sense, because 0<p<1.  If p=.9, you might get the same phenomenon at the other end, with values greater than 1.  Some people simply solve this problem by setting a negative lower bound to 0 or an upper bound above 1 to 1 (see the snippet after the listing below).  It is still somewhat unsatisfactory.

        lb      ub
 -0.031481 0.23148
  0.000000 0.00000
 -0.006493 0.30649
 -0.006493 0.30649
  0.024692 0.37531
 -0.031481 0.23148
 -0.045519 0.14552
 -0.031481 0.23148
 -0.006493 0.30649
 -0.031481 0.23148
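
Here is what that truncation workaround might look like inside the data step, placed just after lb and ub are computed.  This is a sketch of the common fix mentioned above, not code from the original program:

    lb=max(lb,0);   *pull a negative lower bound up to 0;
    ub=min(ub,1);   *pull an upper bound above 1 down to 1;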

There is an alternate formula for the confidence interval, due to Agresti and Coull (sometimes called the "plus four" interval).  The technique consists of adding two successes and two failures to the results.  In other words, the estimator is ptilde=(x+2)/(n+4), and the interval is ptilde ± z*sqrt(ptilde*(1-ptilde)/(n+4)).  We will add this formula to the program and compare the results.

data one(keep=cover coverage covert coveraget);
  do i=1 to &sims;
    x=ranbin(0,&n,&p);
    phat=x/&n;
    lb=phat-&z*sqrt(phat*(1-phat)/&n); 
    ub=phat+&z*sqrt(phat*(1-phat)/&n);
    if &p>lb and &p<ub then cover+1;  
    coverage=cover/&sims;  
    ptilde=(x+2)/(&n+4);   *add two successes and two failures;
    lbt=ptilde-&z*sqrt(ptilde*(1-ptilde)/(&n+4));
    ubt=ptilde+&z*sqrt(ptilde*(1-ptilde)/(&n+4));
    if &p>lbt and &p<ubt then covert+1;
    coveraget=covert/&sims; 
  end;
run;

proc print;
run;

Here are the alternate confidence intervals for the same run as above.  This time we didn't get any 0 to 0 intervals, and we only got one negative lower bound.

      lbt     ubt
  0.03337 0.34758
 -0.02220 0.21268
  0.06769 0.40850
  0.06769 0.40850
  0.10498 0.46645
  0.03337 0.34758
  0.00286 0.28286
  0.03337 0.34758
  0.06769 0.40850
  0.03337 0.34758
  
For n=10 and p=.1, the old method has a coverage of about 65%, while the new method has a coverage of about 92%.  The new method gives about 95% with only n=20, compared to n=100 for the old method.


Exercise:

Write a simulation to evaluate the "normal approximation" method of hypothesis testing for the p parameter of the binomial distribution.  The object is to check whether the significance level is accurate.  That is, if you have alpha=.05 and the null hypothesis is true, do you reject 5% of the time?  Experiment with values of n and p and make some comments about how n and p affect the true significance level.  Important note:  Unlike the confidence interval, the standard deviation used in the hypothesis test is not based on the p calculated from the sample, but on the value p0 specified by the null hypothesis: std=sqrt(p0*(1-p0)/n).  The test statistic is (phat-p0)/std, and the null hypothesis is rejected if the test statistic falls outside the interval (-1.96,1.96) for alpha=.05.
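As a starting point, here is a minimal sketch of how such a simulation might be structured, following the same pattern as the coverage programs above.  The variable names (p0, ts, reject, level) are my own; experimenting with different n and p and interpreting the results is left to you.

%let n=20;
%let p0=.5;      *null hypothesis value of p;
%let sims=10000;
%let z=1.96;
data one(keep=level);
  std=sqrt(&p0*(1-&p0)/&n);   *std uses the null value p0, not phat;
  do i=1 to &sims;
    x=ranbin(0,&n,&p0);       *simulate under the null hypothesis;
    phat=x/&n;
    ts=(phat-&p0)/std;        *test statistic;
    if ts < -&z or ts > &z then reject+1;   *count rejections;
    level=reject/&sims;       *estimated true significance level;
  end;
run;

proc print;
run;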

Copyright reserved by Dr.  Dwight Galster, 2006.  Please request permission for reprints (other than for personal use) from dwight.galster@sdstate.edu  .  "SAS" is a registered trade name of SAS Institute, Cary North Carolina.  All other trade names mentioned are the property of their respective owners.  This document is a work in progress and comments are welcome.  Please send an email if you find it useful or if your site links to it.