No σ for you!
11-21-2024, 03:51 PM
Post: #1
No σ for you!
Who had to die before \(\sigma_x\) could be added to the statistics functions of HP calculators?
I know approximately nothing about statistics, although I have encountered the standard deviation σ. The HP manuals helpfully explain the difference between the sample standard deviation \(s_x\) and the population standard deviation \(\sigma_x\). Whereas Sharp or Casio calculators from the 1980s would provide both functions, HP calculators invariably only provided s. This continued well into the models where you could no longer plead a lack of keys or other resources: the 32S, 42S, 28C/S, and 48SX all only provided s. It took until the 32SII (1991) and 48GX (1993) for σ to become available.

This feels like somebody didn't want σ to be readily available, and it fell on the manual writers to explain how you could still calculate it. Is there a background story to this?

The best calculator is the one you actually use.
11-21-2024, 04:08 PM
Post: #2
RE: No σ for you!
I don't know, but I missed a tricky test question in college because of this.
The problem specifically pointed out that the data was the whole population, but I used my fancy new HP-41 to calculate it without a correction. Oops.
11-21-2024, 04:48 PM
(This post was last modified: 11-21-2024 05:46 PM by AnnoyedOne.)
Post: #3
RE: No σ for you!
I don't know either.
As I recall, we calculated the sample standard deviation (by hand) in HS. Since the difference is dividing by n or n-1, it only matters for small n. If n is large enough, the difference is minor. My guess is that engineers (HP calculator designers) typically have small n (samples), hence Sx.

A1

PS: From the footnote on p. 53 (p. 65 of the PDF) of the "HP-15C Collector's Edition Owner's Handbook":

Quote:The difference between the values is small for large n, and for most applications can be ignored. But if you want to calculate the exact value of the population standard deviation for an entire population, you can easily do so: simply add, using Σ+, the mean (x̄) of the data to the data before pressing g s. The result will be the population standard deviation.

HP-15C (2234A02xxx), HP-16C (2403A02xxx), HP-15C CE (9CJ323-03xxx), HP-20S (2844A16xxx), HP-12C+ (9CJ251)
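That footnote's trick is easy to verify numerically: appending the mean as one extra data point leaves the mean unchanged and adds nothing to the sum of squared deviations, so the original n-denominator and the new (n+1)-1 denominator coincide. A minimal Python sketch (made-up data, nothing HP-specific):

Code:
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # s_x: sum of squared deviations divided by n - 1
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def population_sd(xs):
    # sigma_x: sum of squared deviations divided by n
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

data = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]

# The footnote's trick: append the mean, then use the *sample* SD function.
augmented = data + [mean(data)]

print(population_sd(data))   # sigma_x of the original data
print(sample_sd(augmented))  # same value, obtained via s_x

Both lines print the same value (about 12.32 for this data set).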
11-21-2024, 06:02 PM
(This post was last modified: 11-21-2024 06:03 PM by KeithB.)
Post: #4
RE: No σ for you!
The HP 71 reference manual seems to assume a lot:
"The sample standard deviation calculation uses n-1 as the denominator where n is the sample size. For information concerning statistical arrays, refer to the "Mathematical Discussion of HP-71 Statistical Arrays", page 334." page 334 has no discussion of standard deviation, either population or sample. The owner's manual is not particularly helpful, either. |
11-21-2024, 09:41 PM
Post: #5
RE: No σ for you!
I've a vague recollection that adding the mean to the data converts from the sample to the population variance. So there is no strict need for both functions.
11-21-2024, 10:03 PM
Post: #6
RE: No σ for you!
Thanks for the question!
My statistics book says the n-1 denominator comes from having n-1 degrees of freedom, but it just felt made up. (If this explanation were true, then the mean should divide by n-1 too!)

Here is the proof.

P.S. Parts 1 and 2 are worth watching too.
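In case it saves anyone a video: the core of the usual textbook proof is one identity plus taking expectations (this is the standard argument, not necessarily the one in the linked clips). With \(\mu\) and \(\sigma^2\) the true mean and variance,

\[ \sum_{i=1}^{n} (x_i - \bar{x})^2 \;=\; \sum_{i=1}^{n} (x_i - \mu)^2 \;-\; n(\bar{x} - \mu)^2. \]

Taking expectations, using \(E[(x_i - \mu)^2] = \sigma^2\) and \(E[(\bar{x} - \mu)^2] = \sigma^2 / n\),

\[ E\left[\sum_{i=1}^{n} (x_i - \bar{x})^2\right] = n\sigma^2 - \sigma^2 = (n-1)\sigma^2, \]

so dividing by \(n-1\) rather than \(n\) is exactly what makes \(E[s^2] = \sigma^2\). (The mean needs no such correction because \(\bar{x}\) is already unbiased for \(\mu\).)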
11-22-2024, 01:07 PM
Post: #7
RE: No σ for you!
(11-21-2024 06:02 PM)KeithB Wrote: The HP 71 reference manual seems to assume a lot:

True, but page 335 does cover it, so it's more of a dumb editing issue. But your conclusion that these manuals don't cover it very well remains true IMHO; for such a subtle yet important difference, it should be explained better.

--Bob Prosperi
11-22-2024, 02:27 PM
Post: #8
RE: No σ for you!
"True, but page 335 does cover it"
Not in mine it doesn't: no mention of standard deviation at all. Page 336 is a discussion of the Add/Drop algorithm, page 337 is a discussion of linear regression, and page 338 starts the discussion of IEEE floating point.

What does it mean by "3. It is easier to use sample means, variances and correlations as inputs in place of the original data"?

(Sorry for the poor image; I was not about to break my reference manual to put it flat on a scanner.)
11-22-2024, 02:54 PM
Post: #9
RE: No σ for you!
Also, one of my favorite formulas is one I learned in an excellent DOE class 35 or so years ago, taught by David Doehlert.
The 95% confidence limits for the mean of a sample of a population are:

\[ \bar{x} \pm t \cdot \frac{s_x}{\sqrt{r}} \]

where r is the number of replicates used for the mean calculation, and t is a factor based on the degrees of freedom of the sample standard deviation calculation: the t statistic. It ranges from 12.7 with 1 degree of freedom to 1.96 with infinite degrees of freedom.
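A minimal sketch of that formula in Python, using scipy.stats.t to look up the t factor (the data values here are made up, and scipy is only one of several places to get the quantile):

Code:
import math
from statistics import mean, stdev
from scipy.stats import t

data = [9.8, 10.1, 10.3, 9.9, 10.0]   # hypothetical replicates
r = len(data)
x_bar = mean(data)
s = stdev(data)                       # sample SD, n-1 denominator

t_factor = t.ppf(0.975, df=r - 1)     # two-sided 95% -> 97.5th percentile
half_width = t_factor * s / math.sqrt(r)

print(f"{x_bar:.3f} +/- {half_width:.3f}")

As a sanity check against the quoted range, t.ppf(0.975, df=1) returns about 12.71, and the value tends to about 1.96 as the degrees of freedom grow.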
11-23-2024, 08:09 AM
(This post was last modified: 11-24-2024 12:00 AM by carey.)
Post: #10
RE: No σ for you!
(11-22-2024 02:54 PM)KeithB Wrote: Also, one of my favorite formulas is one I learned in an excellent DOE class 35 or so years ago, taught by David Doehlert.

In case it's of interest to anyone, here's some context to the formula in Keith's post and some comments relating topics in this thread to their use in physics.

The term "smplstddev / sqrt(r)" in the formula is the standard error (SE), i.e., the standard deviation (SD) of the mean, and is the SD divided by the square root of the number of trials. It's of great importance to experimenters because, unlike the SD (which experimenters can't control, as it's a measure of the intrinsic variation in whatever is being studied), the SD of the mean (i.e., the SE) can be made arbitrarily small just by increasing the number of trials. This makes sense because, as the number of trials increases, our uncertainty in the value of the mean should decrease.

Now consider the "t factor" in the formula. Without it, we have: mean ± SE. This encompasses around 68% of the data because the area under the normal curve between limits of the mean ± 1 SD is around 68% of the total area. To encompass 95% of the data (corresponding to the "95% confidence limit" mentioned in the post) it's necessary to integrate the normal curve between limits of the mean ± 2 standard deviations (or more precisely 1.96 standard deviations, as mentioned in the post). So the "t factor" in the formula is just the number of standard errors (SD of the mean) needed to encompass a particular % of the data. Since the value 1.96 is mentioned with "infinite degrees of freedom", the population standard deviation is appropriate.

Notes re: topics in this thread as applied to physics.

1) While σ often denotes SD in stat books and calculator manuals, in physics σ represents SE (not SD), since measurements are usually mean values and SE is the SD of the mean.

2) If the final output of a series of measurements is a mean ± SD, sure, use the sample SD for small data sets. However, if the goal is model testing, as in physics experiments using chi-squared minimization, SDs are used only to find SEs, and the population SD is often used. While the sample SD has less bias than the population SD, dividing by N-1 vs N markedly increases variability at very low sample sizes (e.g., N = 2), where measurement uncertainties aren't reliable. Note that this preference for the population SD has nothing to do with it being easier to calculate, as that hasn't been an issue for many decades.

3) Using the population SE and obtaining the "t-factor" just by direct integration of the normal curve makes the t-statistic unnecessary.
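A small simulation of the central point here (plain Python, made-up numbers): the SD of the mean shrinks like 1/sqrt(number of trials), while the intrinsic SD of the data does not change:

Code:
import random
import statistics

random.seed(1)
sigma = 2.0          # intrinsic SD of the simulated measurement noise

for n in (4, 16, 64, 256):
    # Simulate many experiments, each reporting the mean of n trials
    means = [statistics.fmean(random.gauss(10.0, sigma) for _ in range(n))
             for _ in range(20000)]
    observed_se = statistics.stdev(means)
    print(n, round(observed_se, 3), round(sigma / n ** 0.5, 3))  # observed vs sigma/sqrt(n)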
11-23-2024, 12:38 PM
Post: #11
RE: No σ for you!
From Numerical Recipes, Chapter 14, "Statistical Description of Data":

Quote:Var(x[0] ... x[N-1]) = sum((x[j] - x_bar)^2, j=0..N-1) / (N-1)
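For contrast: like the Sharp and Casio calculators mentioned earlier, Python's standard library exposes both denominators side by side, so the N-1 formula quoted above and its N counterpart are a one-liner each (sample data made up):

Code:
from statistics import pstdev, pvariance, stdev, variance

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

print(variance(data), pvariance(data))  # N-1 vs N denominator: ~4.571 vs 4.0
print(stdev(data), pstdev(data))        # s_x vs sigma_x:       ~2.138 vs 2.0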
11-24-2024, 06:00 PM
(This post was last modified: 11-25-2024 03:55 AM by Nihotte(lma).)
Post: #12
RE: No σ for you!
(11-21-2024 03:51 PM)naddy Wrote: Is there a background story to this?

Hi naddy,

My idea, as an enlightened layman, is that HP calculators were first aimed at knowledgeable professionals. Quite simply, you did not choose an HP calculator by chance.

In my opinion, in practice, it is rather rare to encounter an exhaustive population during a statistical survey of data. We are talking more about a survey.

For my part, I have always found explicit information on this subject about the result provided by the calculator (for example, in the standard deviation chapters of the HP 11C and HP 15C manuals). Unless I am mistaken, at the same time, the TI 57 and probably the TI 58 and TI 59 had the same approach.

I confirm that very early on I had rho(n) and rho(n-1) statistical functions on my Casio calculators (see the FX-602P and even earlier). It was more disturbing than anything else when used in class at the time! (Indeed, the explanation and distinction between the 2 functions in the manual was contained in a single sentence about whether or not the statistical population was exhaustive!)

Keep yourself healthy

Laurent

EDIT: 11/25/2024 - Sorry for my mistake in transcribing the name of the letter: it is not, of course, the lowercase letter rho but the lowercase Greek letter sigma, which is much more consistent (σ / Σ). And above, I mean a "sample" as opposed to an "entire population" when I use the word survey.
11-24-2024, 06:48 PM
Post: #13
RE: No σ for you!
11-24-2024, 07:13 PM
Post: #14
RE: No σ for you!
(11-24-2024 06:00 PM)Nihotte(lma) Wrote: In my opinion, in practice, it is rather rare to encounter an exhaustive population during a statistical survey of data.

I checked the manuals:

The best calculator is the one you actually use.
11-24-2024, 08:59 PM
Post: #15
RE: No σ for you!
Even if we use the sample variance, its square root still underestimates σ. This is because the square root is a concave function.

Unbiased estimation of standard deviation

Quote:It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions,

Unbiased S.D. Rule of Thumb for Normal Distribution:

This suggested that \(\sigma_{n-1}\) may be a better estimate than \(\sigma_n\), even if we have the true mean.
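A quick check of that underestimation with a simulation (plain Python, normal data, true σ = 1): even with the n-1 denominator, the average sample SD comes out below σ, worst at the smallest n:

Code:
import random
import statistics

random.seed(7)
true_sigma = 1.0

for n in (2, 5, 10, 50):
    # Average the (n-1)-denominator sample SD over many simulated samples
    avg_s = statistics.fmean(
        statistics.stdev(random.gauss(0.0, true_sigma) for _ in range(n))
        for _ in range(20000)
    )
    print(n, round(avg_s, 3))   # roughly 0.80, 0.94, 0.97, 0.99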
11-25-2024, 12:54 PM
Post: #16
RE: No σ for you!
I never really got the whole squaring then square root thing. In my mind, the mean absolute variation seemed a more intuitive measure.
11-25-2024, 03:58 PM
(This post was last modified: 11-25-2024 03:59 PM by carey.)
Post: #17
RE: No σ for you!
(11-25-2024 12:54 PM)dm319 Wrote: I never really got the whole squaring then square root thing. In my mind, the mean absolute variation seemed a more intuitive measure.

Since the mean of the deviations is 0, there are only two ways to avoid cancellation of the deviations when taking their mean.

One way is to use absolute values of the deviations (i.e., the mean absolute variation you mention). It works, but working with absolute values in subsequent equations becomes unwieldy.

The other way is to square the deviations, then take the mean and the square root, as done in the standard deviation. In fact, the standard deviation is the root-mean-square (RMS) deviation, or RMSD. If we read its name backwards (from right to left), applying one word at a time (like an RPL or FORTH program :), the SD algorithm is generated.

Step 1: Deviation \[ x_{i} - \bar{x} \]
Step 2: Square \[ (x_{i} - \bar{x})^{2} \]
Step 3: Mean \[ \frac{\sum (x_{i} - \bar{x})^{2}}{N} \]
Step 4: Root \[ SD = \sqrt{\frac{\sum (x_{i} - \bar{x})^{2}}{N}} \]
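Those four steps translate almost line for line into code; here is a Python sketch of the same right-to-left reading (not RPL, and the function name rmsd is just mine):

Code:
import math

def rmsd(xs):
    # Read "root-mean-square deviation" from right to left, one step per line
    x_bar = sum(xs) / len(xs)
    deviations = [x - x_bar for x in xs]     # Step 1: Deviation
    squares = [d ** 2 for d in deviations]   # Step 2: Square
    mean_sq = sum(squares) / len(squares)    # Step 3: Mean
    return math.sqrt(mean_sq)                # Step 4: Root

print(rmsd([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 2.0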
11-25-2024, 04:01 PM
Post: #18
RE: No σ for you!
Think of Pythagoras and the hypotenuse: root mean square is a distance, in a usefully general sense. Sum of absolute differences is a different kind of distance (Manhattan distance) and isn't quite so well-behaved.
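Spelling out the distance analogy (my notation, not from the post): write the deviation vector as \(d = (x_1 - \bar{x}, \ldots, x_N - \bar{x})\). Then

\[ \sigma = \frac{\lVert d \rVert_2}{\sqrt{N}}, \qquad \text{mean absolute deviation} = \frac{\lVert d \rVert_1}{N}, \]

so the standard deviation is (up to the \(1/\sqrt{N}\) scaling) the Euclidean length of the deviation vector, and the mean absolute deviation is its Manhattan length.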
11-25-2024, 04:49 PM
Post: #19
RE: No σ for you!
(11-25-2024 04:01 PM)EdS2 Wrote: Think of Pythagoras and the hypotenuse: root mean square is a distance, in a usefully general sense. Sum of absolute differences is a different kind of distance (Manhattan distance) and isn't quite so well-behaved.

Yes, while absolute differences might be justified as a reasonable alternative if working only with discrete data, standard deviation applications go way beyond counting and measuring: the standard deviation is needed to define some continuous functions, e.g., the Gaussian (normal) distribution,

\[ f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2 \sigma^2}\right). \]

Since the Central Limit Theorem ensures Gaussian distributions occur over a wide range of typical experimental conditions, this suggests that the standard deviation is Nature's way to characterize variation.
11-25-2024, 05:54 PM
Post: #20
RE: No σ for you!
Also of course used to get the RMS value for AC voltages. The average of a symmetrical sine wave is zero, but the RMS value gives you the heating power.
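For the record (standard result; the peak value \(V_p\) is my notation): applying the same root-mean-square recipe to a sine wave over one period gives

\[ V_{\mathrm{rms}} = \sqrt{\frac{1}{T}\int_0^T \left(V_p \sin\frac{2\pi t}{T}\right)^2 dt} = \frac{V_p}{\sqrt{2}} \approx 0.707\,V_p. \]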