Question

By definition, the power of a test is the probability of detecting an effect when one truly exists. For a test of the difference between two independent groups (whether the test concerns proportions, means, or anything else), suppose a real difference exists. Discuss the following situations:

1) As sample size increases, does the power of the test increase or decrease? Why?
2) If the variation in the data were larger, does the power of the test increase or decrease? Why?
3) If alpha were chosen to be larger than the standard 0.05, does the power of the test increase or decrease? Why?
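These three effects can be checked empirically. Below is a minimal simulation sketch (the function name `simulated_power` and all parameter values are illustrative, not part of the question): it estimates the power of a two-sample z-test by repeatedly drawing two groups with a fixed true difference and counting how often the test rejects the null.

```python
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n, effect, sd, alpha, trials=2000, seed=0):
    """Estimate power of a two-sided two-sample z-test by simulation.

    n      : sample size per group
    effect : true difference in group means
    sd     : standard deviation of the data in each group
    alpha  : significance level
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0, sd) for _ in range(n)]
        b = [rng.gauss(effect, sd) for _ in range(n)]
        # Standard error of the difference in sample means
        se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
        z = (mean(b) - mean(a)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

# 1) Larger n      -> larger power (smaller standard error)
# 2) Larger sd     -> smaller power (larger standard error)
# 3) Larger alpha  -> larger power (lower rejection threshold)
print(simulated_power(20, 0.5, 1, 0.05), simulated_power(100, 0.5, 1, 0.05))
print(simulated_power(50, 0.5, 1, 0.05), simulated_power(50, 0.5, 3, 0.05))
print(simulated_power(50, 0.5, 1, 0.01), simulated_power(50, 0.5, 1, 0.10))
```

Each comparison works through the same mechanism: anything that shrinks the standard error relative to the true effect (more data, less noise), or lowers the bar for rejection (larger alpha), makes the test statistic clear the critical value more often.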
