HP Forums
No σ for you! - Printable Version

+- HP Forums (https://www.hpmuseum.org/forum)
+-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html)
+--- Forum: General Forum (/forum-4.html)
+--- Thread: No σ for you! (/thread-22744.html)

Pages: 1 2


No σ for you! - naddy - 11-21-2024 03:51 PM

Who had to die before \(\sigma_x\) could be added to the statistics functions of HP calculators?

I know approximately nothing about statistics, although I have encountered the standard deviation σ. The HP manuals helpfully explain the difference between the sample standard deviation \(s_x\) and the population standard deviation \(\sigma_x\). Whereas Sharp or Casio calculators from the 1980s would provide both functions, HP calculators invariably only provided s. This continued well into the models where you could no longer plead a lack of keys or other resources: The 32S, 42S, 28C/S, and 48SX all only provided s. It took until the 32SII (1991) and 48GX (1993) for σ to become available.

This feels like somebody didn't want σ to be readily available and it fell on the manual writers to explain how you could still calculate it.

Is there a background story to this?


RE: No σ for you! - KeithB - 11-21-2024 04:08 PM

I don't know, but I missed a tricky test question in college because of this.

The problem specifically pointed out that the data was the whole population, but I used my fancy new HP-41 to calculate it without a correction. oops.


RE: No σ for you! - AnnoyedOne - 11-21-2024 04:48 PM

I don't know either.

As I recall we calculated sample standard deviation (by hand) in HS.

Since the difference is only whether you divide by n or n-1, it really only matters for small n. If n is large enough the difference is minor.

My guess is that engineers (HP calculator designers) typically have small n (samples), hence \(s_x\).
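For concreteness, a quick sketch using Python's standard statistics module, showing how fast the n vs. n-1 difference shrinks (the ratio of sample SD to population SD is \(\sqrt{n/(n-1)}\) regardless of the data):

```python
import statistics

def sd_ratio(data):
    # sample SD (n-1 divisor) over population SD (n divisor)
    return statistics.stdev(data) / statistics.pstdev(data)

print(sd_ratio([2.0, 4.0, 6.0]))                       # n=3: ~1.22, a 22% gap
print(sd_ratio([float(i % 10) for i in range(1000)]))  # n=1000: ~1.0005
```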

A1

PS: From the footnote on p53 (p65 of the PDF) of "HP-15C Collector's Edition Owner's Handbook"

Quote:The difference between the values is small for large n, and for most applications can be ignored. But if you want to calculate the exact value of the population standard deviation for an entire population, you can easily do so: simply add, using Σ+, the mean (\(\bar{x}\)) of the data to the data before pressing g s. The result will be the population standard deviation.



RE: No σ for you! - KeithB - 11-21-2024 06:02 PM

The HP 71 reference manual seems to assume a lot:

"The sample standard deviation calculation uses n-1 as the denominator where n is the sample size. For information concerning statistical arrays, refer to the "Mathematical Discussion of HP-71 Statistical Arrays", page 334."

page 334 has no discussion of standard deviation, either population or sample.

The owner's manual is not particularly helpful, either.


RE: No σ for you! - Paul Dale - 11-21-2024 09:41 PM

I've a vague recollection that adding the mean to the data converts from the sample to the population variance. So there is no strict need for both functions.
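That recollection checks out: appending the mean leaves the mean and the sum of squared deviations unchanged, while raising the sample-SD divisor from n-1 to n. A quick Python check:

```python
import statistics

data = [5.0, 7.0, 8.0, 12.0]
mean = statistics.fmean(data)

# Appending the mean changes neither the mean nor the sum of squared
# deviations, but bumps the sample-SD divisor from n-1 up to n.
augmented = data + [mean]

sample_sd_aug = statistics.stdev(augmented)  # sample SD of augmented data
pop_sd = statistics.pstdev(data)             # population SD of original data
print(sample_sd_aug, pop_sd)                 # the two agree
```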


RE: No σ for you! - Albert Chan - 11-21-2024 10:03 PM

Thanks for the question!

My statistics book says the n-1 denominator comes from having n-1 degrees of freedom, but that just felt made up.
(if this explanation were true, then the mean should divide by n-1 too!)

Here is the proof




P.S. Parts 1 and 2 are worth watching too.


RE: No σ for you! - rprosperi - 11-22-2024 01:07 PM

(11-21-2024 06:02 PM)KeithB Wrote:  The HP 71 reference manual seems to assume a lot:

"The sample standard deviation calculation uses n-1 as the denominator where n is the sample size. For information concerning statistical arrays, refer to the "Mathematical Discussion of HP-71 Statistical Arrays", page 334."

page 334 has no discussion of standard deviation, either population or sample.

The owner's manual is not particularly helpful, either.

True, but page 335 does cover it, so it's more of a dumb editing issue. But your conclusion that these manuals don't cover it very well remains true IMHO; for such a subtle yet important difference, it should be explained better.


RE: No σ for you! - KeithB - 11-22-2024 02:27 PM

"True, but page 335 does cover it"

Not in mine it doesn't, no mention of standard deviation at all:

[attachment=14316]
And 336 is a discussion of the Add/Drop algorithm and 337 is a discussion of linear regression.

338 starts the discussion of IEEE floating point.

What does it mean by "3. It is easier to use sample means, variances and correlations as inputs in place of the original data"?

(Sorry for the poor image, I was not about to break my reference manual to put it flat on a scanner)


RE: No σ for you! - KeithB - 11-22-2024 02:54 PM

Also, one of my favorite formulas is one I learned in an excellent DOE class 35 or so years ago, taught by David Doehlert.

The 95% confidence limits for the mean of a sample of a population are:

Mean +/- t * smplstddev / sqrt(r)

r is the number of replicates used in the mean calculation.
t is a factor based on the degrees of freedom of the sample stddev calculation; it is the t statistic.
It ranges from 12.7 with 1 degree of freedom down to 1.96 with infinite degrees of freedom.
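A minimal Python sketch of this formula. The t value must be supplied from a t-table for r-1 degrees of freedom; the example below assumes 2.776, the 95% value for 4 degrees of freedom:

```python
import math
import statistics

def mean_ci(data, t):
    """Confidence limits for the mean: mean +/- t * s / sqrt(r),
    where r is the number of replicates and t comes from a t-table
    for r-1 degrees of freedom (12.7 for 1 d.o.f., 2.776 for 4,
    1.96 in the infinite-d.o.f. limit)."""
    r = len(data)
    m = statistics.fmean(data)
    se = statistics.stdev(data) / math.sqrt(r)  # standard error of the mean
    return m - t * se, m + t * se

# five replicates -> 4 degrees of freedom -> t = 2.776 at 95%
lo, hi = mean_ci([9.8, 10.1, 10.0, 9.9, 10.2], t=2.776)
print(lo, hi)
```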


RE: No σ for you! - carey - 11-23-2024 08:09 AM

(11-22-2024 02:54 PM)KeithB Wrote:  Also, one of my favorite formulas is one I learned in an excellent DOE class 35 or so years ago, taught by David Doehlert.

The 95% confidence limits for the mean of a sample of a population are:

Mean +/- t * smplstddev / sqrt(r)

r is the number of replicates used in the mean calculation
t is a factor based on the degrees of freedom of the sample stddev calculation; it is the t statistic
It ranges from 12.7 with 1 degree of freedom down to 1.96 with infinite degrees of freedom.

In case it's of interest to anyone, here's some context to the formula in Keith's post and some comments relating topics in this thread to their use in physics.

The term "smplstddev / sqrt(r)" in the formula is the standard error (SE), i.e., the standard deviation (SD) of the mean, and equals the SD divided by the square root of the number of trials. It's of great importance to experimenters because, unlike the SD (which experimenters can't control, as it's a measure of the intrinsic variation in whatever is being studied), the SD of the mean (i.e., the SE) can be made arbitrarily small just by increasing the number of trials. This makes sense because, as the number of trials increases, our uncertainty in the value of the mean should decrease.

Now consider the "t factor" in the formula. Without it, we have: mean ± SE. This encompasses around 68% of the data because the area under the normal curve between limits of the mean ± 1 SD is around 68% of the total area. To encompass 95% of the data (corresponding to the "95% confidence limit" mentioned in the post) it's necessary to integrate the normal curve between limits of the mean ± 2 standard deviations (or more precisely 1.96 standard deviations as mentioned in the post). So the "t factor" in the formula is just the number of standard errors (SD of the mean) needed to encompass a particular % of the data. Since the value 1.96 is mentioned with "infinite degrees of freedom" the population standard deviation is appropriate.
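The 68% and 95% figures can be checked by direct integration of the normal curve; in Python the error function does the integral, since the area within mean ± k SDs equals erf(k/√2):

```python
import math

def normal_coverage(k):
    """Fraction of a normal distribution lying within mean +/- k SDs,
    by direct integration of the normal curve (via the error function)."""
    return math.erf(k / math.sqrt(2))

print(normal_coverage(1.0))   # ~0.683: mean +/- 1 SD covers ~68%
print(normal_coverage(1.96))  # ~0.950: mean +/- 1.96 SD covers ~95%
```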

Notes re: topics in this thread as applied to physics.

1) While σ often denotes SD in stat books and calculator manuals, in physics σ represents SE (not SD) since measurements are usually mean values and SE is the SD of the mean.

2) If the final output of a series of measurements is a mean ± SD, sure, use the sample SD for small data sets. However, if the goal is model testing, as in physics experiments using chi-squared minimization, SDs are used only to find SEs, and population SD is often used. While sample SD has less bias than population SD, dividing by N-1 vs N markedly increases variability at very low sample sizes (e.g., N = 2) where measurement uncertainties aren’t reliable. Note that this preference for population SD has nothing to do with it being easier to calculate as that hasn't been an issue for many decades.

3) Using the population SE and obtaining the "t-factor" just by direct integration of the normal curve makes the t-statistic unnecessary.


RE: No σ for you! - Albert Chan - 11-23-2024 12:38 PM

From Numerical Recipes, Chapter 14: Statistical Description of Data
Quote:Var(x[0] ... x[N-1]) = sum((x[j] - x_bar)^2, j=0..N-1) / (N-1)

There is a long story about why the denominator of [above] is N-1 instead of N.
If you have never heard that story, you should consult any good statistics text.

Here we will be content to note that the N-1 should be changed to N if you are ever
in the situation of measuring the variance of a distribution whose mean x_bar is known
a priori rather than being estimated from the data. (We might also comment that if the
difference between N and N-1 ever matters to you, then you are probably up to no good
anyway — e.g., trying to substantiate a questionable hypothesis with marginal data.)
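A sketch of the distinction Numerical Recipes draws: divide by N when the true mean is known a priori, by N-1 when the mean is estimated from the data.

```python
def variance(xs, mu=None):
    """Variance of xs. If the true mean mu is known a priori, divide by N;
    if the mean must be estimated from the data, divide by N-1."""
    n = len(xs)
    if mu is None:
        mu, divisor = sum(xs) / n, n - 1  # mean estimated from the data
    else:
        divisor = n                       # mean known a priori
    return sum((x - mu) ** 2 for x in xs) / divisor

print(variance([1.0, 2.0, 3.0]))           # 1.0   (N-1 divisor)
print(variance([1.0, 2.0, 3.0], mu=2.0))   # ~0.667 (N divisor)
```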



RE: No σ for you! - Nihotte(lma) - 11-24-2024 06:00 PM

(11-21-2024 03:51 PM)naddy Wrote:  Is there a background story to this?

Hi naddy,

My idea as an enlightened layman is that HP calculators were first aimed at knowledgeable professionals.
Quite simply, we did not choose an HP calculator by chance.
In my opinion, in practice, it is rather rare to encounter an exhaustive population during a statistical survey of data.
We are talking more about a survey.
For my part, I have indeed always found explicit information on this subject concerning the result provided by the calculator.
(for example, the standard deviation chapters of the HP 11C and HP 15C manuals).

Unless I am mistaken, at the same time, the TI 57 and probably the TI58 and TI59 had the same approach.

I confirm that very early on I had a rho(n) and rho(n-1) statistical function on my Casio calculators (see FX-602P and even earlier).
It was more disturbing than anything else, when used in class at the time!
(Indeed, the explanation and distinction between the 2 functions in the manual was contained in a simple sentence
as to the exhaustiveness or not of the statistical population!!)

Keep yourself healthy

Laurent


EDIT : 11/25/2024 - Sorry for my mistake in transcribing the name of the letter because it is not, of course, the lowercase letter rhô but the lowercase Greek letter sigma, which is much more consistent (σ / Σ).
And above, by the word "survey" I meant a "sample" as opposed to an "entire population".


RE: No σ for you! - HPing - 11-24-2024 06:48 PM

(11-23-2024 12:38 PM)Albert Chan Wrote:  "We might also comment that if the
difference between N and N-1 ever matters to you, then you are probably up to no good
anyway — e.g., trying to substantiate a questionable hypothesis with marginal data."
Hahaha.


RE: No σ for you! - naddy - 11-24-2024 07:13 PM

(11-24-2024 06:00 PM)Nihotte(lma) Wrote:  In my opinion, in practice, it is rather rare to encounter an exhaustive population during a statistical survey of data.
We are talking more about a survey.
For my part, I have indeed always found explicit information on this subject concerning the result provided by the calculator.
(for example, the standard deviation chapters of the HP 11C and HP 15C manuals).

Unless I am mistaken, at the same time, the TI 57 and probably the TI58 and TI59 had the same approach.

I checked the manuals:
  • TI-57: Provides only population variance σ² (press √x for population standard deviation).
  • TI-58/59: Provides population variance (press √x for population standard deviation) or sample standard deviation (press x² for sample variance). How bizarre!
Yes, this is explicitly documented.


RE: No σ for you! - Albert Chan - 11-24-2024 08:59 PM

Even if we use the sample variance, its square root still underestimates σ.
This is because square root is a concave function.

Unbiased estimation of standard deviation
Quote:It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions,
as the bias depends on the particular distribution.

Unbiased S.D. Rule of Thumb for Normal Distribution:

\[ \hat{\sigma} \approx \sqrt{\frac{\sum{(x_i - \bar{x})^2}}{N - 1.5}} \]

This suggests \(\sigma_{n-1}\) may be a better estimate than \(\sigma_n\), even if we have the true mean.
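A quick Monte Carlo sketch (Python, seeded for reproducibility) of both effects: the concavity bias that makes the usual sample SD come out low, and the common N-1.5 rule-of-thumb divisor for normally distributed data, which roughly removes the bias:

```python
import math
import random
import statistics

random.seed(1)
TRUE_SIGMA, n, trials = 1.0, 4, 20000

s_sum = rot_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, TRUE_SIGMA) for _ in range(n)]
    m = statistics.fmean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    s_sum += math.sqrt(ss / (n - 1))      # usual sample SD: biased low
    rot_sum += math.sqrt(ss / (n - 1.5))  # rule-of-thumb divisor: near 1.0

print(s_sum / trials, rot_sum / trials)
```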


RE: No σ for you! - dm319 - 11-25-2024 12:54 PM

I never really got the whole squaring then square root thing. In my mind, the mean absolute variation seemed a more intuitive measure.


RE: No σ for you! - carey - 11-25-2024 03:58 PM

(11-25-2024 12:54 PM)dm319 Wrote:  I never really got the whole squaring then square root thing. In my mind, the mean absolute variation seemed a more intuitive measure.

Since the mean of the deviations is 0, there are only two ways to avoid cancellation of the deviations when taking their mean. One way is to use absolute values of deviations (i.e., the mean absolute variation you mention). It works, but working with absolute values in subsequent equations becomes unwieldy.

The other way is to square the deviations, then take the mean and the square root as done in the standard deviation. In fact, the standard deviation is the root-mean-square (RMS) deviation or RMSD. If we read its name backwards (from right to left), applying one word at a time (like an RPL or FORTH program :), the SD algorithm is generated.

Step 1: Deviation \[x_{i} - \bar{x}\]
Step 2: Square \[(x_{i} - \bar{x})^{2}\]
Step 3: Mean \[ \frac{\sum{(x_{i} - \bar{x})^{2}}}{N} \]
Step 4: Root \[ SD = \sqrt{\frac{\sum{(x_{i} - \bar{x})^{2}}}{N}} \]


RE: No σ for you! - EdS2 - 11-25-2024 04:01 PM

Think of Pythagoras and the hypotenuse: root mean square is a distance, in a usefully general sense. Sum of absolute differences is a different kind of distance (Manhattan distance) and isn't quite so well-behaved.


RE: No σ for you! - carey - 11-25-2024 04:49 PM

(11-25-2024 04:01 PM)EdS2 Wrote:  Think of Pythagoras and the hypotenuse: root mean square is a distance, in a usefully general sense. Sum of absolute differences is a different kind of distance (Manhattan distance) and isn't quite so well-behaved.

Yes, while absolute differences might be justified as a reasonable alternative if working only with discrete data, standard deviation applications go way beyond counting and measuring; e.g., the standard deviation is needed to define some continuous functions, such as the Gaussian (normal) distribution. Since the Central Limit Theorem ensures Gaussian distributions occur over a wide range of typical experimental conditions, this suggests that the standard deviation is Nature's way to characterize variation.

\[
f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2 \sigma^2}\right)
\]


RE: No σ for you! - KeithB - 11-25-2024 05:54 PM

Also, of course, used to get the RMS value of AC voltages. The average voltage is zero for a symmetrical sine wave, but the RMS value gives you the heating power.
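A quick numerical check, sampling one cycle of a unit-amplitude sine wave: the plain average comes out to zero, while the RMS comes out to 1/√2 ≈ 0.707 of the peak:

```python
import math

# One full cycle of a unit-amplitude sine wave, sampled uniformly
N = 100000
samples = [math.sin(2 * math.pi * i / N) for i in range(N)]

average = sum(samples) / N                        # ~0: no DC component
rms = math.sqrt(sum(s * s for s in samples) / N)  # ~0.707 = 1/sqrt(2)
print(average, rms)
```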