Dataset columns:
Id: stringlengths 1-6
PostTypeId: stringclasses, 7 values
AcceptedAnswerId: stringlengths 1-6
ParentId: stringlengths 1-6
Score: stringlengths 1-4
ViewCount: stringlengths 1-7
Body: stringlengths 0-38.7k
Title: stringlengths 15-150
ContentLicense: stringclasses, 3 values
FavoriteCount: stringclasses, 3 values
CreationDate: stringlengths 23
LastActivityDate: stringlengths 23
LastEditDate: stringlengths 23
LastEditorUserId: stringlengths 1-6
OwnerUserId: stringlengths 1-6
Tags: sequence
218
2
null
1
2
null
This interesting question is the subject of some research in [ACERA](http://www.acera.unimelb.edu.au/). The lead researcher is Andrew Speirs-Bridge, and his work is eminently google-able :)
null
CC BY-SA 2.5
null
2010-07-20T05:13:21.963
2010-07-20T05:13:21.963
null
null
187
null
219
2
null
40
3
null
The Xorshift PRNG designed by George Marsaglia. Its period (2^128-1) is much shorter than the Mersenne Twister's, but the algorithm is very simple to implement and lends itself to parallelization. It performs well on many-core architectures such as DSP chips and Nvidia's Tesla.
null
CC BY-SA 2.5
null
2010-07-20T05:21:14.590
2010-07-20T05:33:00.140
2010-07-20T05:33:00.140
154
154
null
220
1
221
null
79
93750
If $X_1, ..., X_n$ are independent identically-distributed random variables, what can be said about the distribution of $\min(X_1, ..., X_n)$ in general?
How is the minimum of a set of IID random variables distributed?
CC BY-SA 4.0
null
2010-07-20T05:23:04.163
2020-06-30T20:49:14.163
2020-06-30T20:49:14.163
227013
85
[ "distributions", "random-variable", "extreme-value" ]
221
2
null
220
55
null
If the cdf of $X_i$ is denoted by $F(x)$, then the cdf of the minimum is given by $1-[1-F(x)]^n$.
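A minimal R sketch (using exponential variables purely as an example) to check this formula against simulation:
```
# Check P(min <= x) = 1 - (1 - F(x))^n by simulation, with X_i ~ Exponential(1) as an example
set.seed(1)
n <- 5
mins <- replicate(1e4, min(rexp(n)))   # simulated minima of n iid draws
x <- 1
c(empirical = mean(mins <= x),
  theoretical = 1 - (1 - pexp(x))^n)   # the two numbers should be close
```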
null
CC BY-SA 3.0
null
2010-07-20T05:35:48.807
2012-04-12T23:26:05.790
2012-04-12T23:26:05.790
159
159
null
222
1
282
null
91
278937
What are principal component scores (PC scores, PCA scores)?
What are principal component scores?
CC BY-SA 3.0
null
2010-07-20T05:37:46.557
2022-07-25T13:56:39.230
2016-08-20T15:27:48.417
28666
191
[ "pca", "definition" ]
223
1
null
null
6
388
I have a friend who is an MD and wants to refresh his statistics. Is there any recommended resource, online or offline? He did stats ~20 years ago.
Intro to statistics for an MD?
CC BY-SA 2.5
null
2010-07-20T05:54:15.017
2022-11-23T13:18:24.017
2012-02-01T18:51:11.727
4872
79
[ "references" ]
224
1
null
null
9
2210
Which visualization libraries (plots, graphs, ...) would you suggest for use in a standalone application (Linux, .NET, Windows, whatever)? Reasonable performance would be nice as well.
Recommended visualization libraries for standalone applications
CC BY-SA 3.0
null
2010-07-20T06:03:59.930
2011-05-27T03:36:21.040
2011-05-27T03:36:21.040
183
128
[ "data-visualization", "software" ]
225
1
229
null
8
818
Why is the average of the highest value from 100 draws from a normal distribution different from the 98th percentile of the normal distribution? It seems that by definition they should be the same. But... Code in R:
```
NSIM <- 10000
x <- rep(NA,NSIM)
for (i in 1:NSIM) {
  x[i] <- max(rnorm(100))
}
qnorm(.98)
qnorm(.99)
mean(x)
median(x)
hist(x)
```
I imagine that I'm misunderstanding something about what the maximum of 100 draws from the normal distribution should be, as is demonstrated by an unexpectedly asymmetrical distribution of maximum values.
Why is the average of the highest value from 100 draws from a normal distribution different from the 98th percentile of the normal distribution?
CC BY-SA 2.5
null
2010-07-20T06:07:37.007
2012-04-01T04:18:55.447
2010-07-20T08:47:33.180
196
196
[ "r", "distributions", "extreme-value" ]
226
2
null
222
24
null
Principal component analysis (PCA) is one popular approach to analyzing variance when you are dealing with multivariate data. You have random variables X1, X2,...Xn which are all correlated (positively or negatively) to varying degrees, and you want to get a better understanding of what's going on. PCA can help. What PCA gives you is a change of variables into Y1, Y2,..., Yn (i.e. the same number of variables) which are linear combinations of the Xs. For example, you might have Y1 = 2.1 X1 - 1.76 X2 + 0.2 X3... The Ys have the nice property that each of them has zero correlation with the others. Better still, you get them in decreasing order of variance. So, Y1 "explains" a big chunk of the variance of the original variables, Y2 a bit less and so on. Usually after the first few Ys, the remaining ones become somewhat meaningless. The PCA score for any of the Xi is just its coefficient in each of the Ys. In my earlier example, the score for X2 in the first principal component (Y1) is -1.76. The way PCA does this magic is by computing eigenvectors of the covariance matrix. To give a concrete example, imagine X1,...X10 are changes in 1 year, 2 year, ..., 10 year Treasury bond yields over some time period. When you compute PCA you generally find that the first component has scores for each bond of the same sign and about the same size. This tells you that most of the variance in bond yields comes from everything moving the same way: "parallel shifts" up or down. The second component typically shows "steepening" and "flattening" of the curve and has opposite signs for X1 and X10.
null
CC BY-SA 2.5
null
2010-07-20T06:23:21.623
2010-07-20T06:23:21.623
null
null
173
null
227
2
null
222
4
null
Let $i=1,\dots,N$ index the rows and $j=1,\dots,M$ index the columns. Suppose you linearize the combination of variables (columns): $$Z_{i,1} = c_{i,1}\cdot Y_{i,1} + c_{i,2}\cdot Y_{i,2} + ... + c_{i,M}\cdot Y_{i,M}$$ The above formula basically says to multiply the row elements by certain values $c$ (the loadings) and sum them over the columns. The resulting values (the $Y$ values times the loadings) are the scores. A principal component (PC) is the linear combination $Z_1 = (Z_{1,1}, ..., Z_{N,1})$ (the values by columns, which are called scores). In essence, the PC should present the most important features of the variables (columns). Ergo, you can extract as many PCs as there are variables (or fewer). An output from [R](http://cran.r-project.org/) on PCA (a fake example) looks like this. PC1, PC2... are principal components 1, 2... The example below shows only the first 8 principal components (out of 17). You can also extract other elements from PCA, like loadings and scores.
```
Importance of components:
                          PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8
Standard deviation     1.0889 1.0642 1.0550 1.0475 1.0387 1.0277 1.0169 1.0105
Proportion of Variance 0.0697 0.0666 0.0655 0.0645 0.0635 0.0621 0.0608 0.0601
Cumulative Proportion  0.0697 0.1364 0.2018 0.2664 0.3298 0.3920 0.4528 0.5129
```
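As an illustrative sketch of how scores arise from the loadings described above (using R's built-in USArrests data purely as a stand-in), the scores are the centred (here also scaled) data multiplied by the loading matrix, which is exactly what `prcomp()` returns in `$x`:
```
pca <- prcomp(USArrests, scale. = TRUE)
X <- scale(USArrests)                      # centred and scaled data
scores <- X %*% pca$rotation               # linear combinations: data times loadings
all.equal(unname(scores), unname(pca$x))   # TRUE: identical to the scores prcomp reports
```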
null
CC BY-SA 3.0
null
2010-07-20T06:24:32.793
2012-07-02T02:33:13.853
2012-07-02T02:33:13.853
159
144
null
228
2
null
216
3
null
- http://insideria.com/2009/12/28-rich-data-visualization-too.html 28 Rich Data Visualization Tools - http://www.rgraph.net/ R graph - http://vis.stanford.edu/protovis/
null
CC BY-SA 2.5
null
2010-07-20T06:27:16.233
2010-07-20T06:27:16.233
null
null
10
null
229
2
null
225
10
null
The maximum does not have a normal distribution. Its cdf is $\Phi(x)^{100}$ where $\Phi(x)$ is the standard normal cdf. In general the moments of this distribution are tricky to obtain analytically. There is an ancient paper on this by [Tippett (Biometrika, 1925)](http://www.jstor.org/stable/2332087).
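A small numerical sketch of the point: the density of the maximum of $n$ iid standard normals is $n\,\phi(x)\Phi(x)^{n-1}$, so its mean can be obtained by numerical integration and compared with the percentiles from the question.
```
# Mean of the maximum of n = 100 standard normals, by numerical integration
n <- 100
Emax <- integrate(function(x) x * n * dnorm(x) * pnorm(x)^(n - 1), -Inf, Inf)$value
c(E_max = Emax, q98 = qnorm(0.98), q99 = qnorm(0.99))
# E_max is roughly 2.51, noticeably larger than the 98th percentile (about 2.05)
```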
null
CC BY-SA 3.0
null
2010-07-20T06:28:47.557
2012-03-31T07:26:34.197
2012-03-31T07:26:34.197
4856
159
null
230
2
null
224
6
null
You could have a look at Processing: [http://processing.org/](http://processing.org/)
null
CC BY-SA 2.5
null
2010-07-20T06:35:21.287
2010-07-20T06:35:21.287
null
null
173
null
231
2
null
223
5
null
This is the one I've used successfully: [Statistics Without Maths for Psychology: Using Spss for Windows](http://www.amazon.co.uk/Statistics-without-Psychology-Christine-Dancey/dp/013124941X). I just stumbled on this too, this might be useful: [Statistics Notes in the British Medical Journal](http://www-users.york.ac.uk/%7Emb55/pubs/pbstnote.htm). I'm sure I knew of a free pdf that some doctors I know use, but I can't seem to find it at the moment. I will try to dig it out.
null
CC BY-SA 4.0
null
2010-07-20T06:41:13.570
2022-11-23T13:18:24.017
2022-11-23T13:18:24.017
362671
199
null
232
2
null
196
15
null
- Mondrian: Exploratory data analysis with focus on large data and databases. - iPlots: a package for the R statistical environment which provides high interaction statistical graphics, written in Java.
null
CC BY-SA 2.5
null
2010-07-20T06:41:25.627
2010-07-20T07:26:11.410
2010-07-20T07:26:11.410
103
103
null
234
2
null
222
8
null
Say you have a cloud of N points in, say, 3D (which can be listed in a 100x3 array). Then principal components analysis (PCA) fits an arbitrarily oriented ellipsoid to the data. The principal component score is the length of the diameters of the ellipsoid. In the direction in which the diameter is large, the data varies a lot, while in the direction in which the diameter is small, the data varies little. If you want to project N-d data into a 2-d scatter plot, you plot them along the two largest principal components, because with that approach you display most of the variance in the data.
null
CC BY-SA 2.5
null
2010-07-20T06:47:14.180
2010-07-20T06:47:14.180
null
null
198
null
235
2
null
216
5
null
[RaphaelJS](http://raphaeljs.com/) can do some pretty amazing stuff and it just got some major backing from [Sencha](http://www.sencha.com/) (formerly ExtJS). Raphael is pretty smart about browsers by using a VML backend for Internet Explorer and SVG for everything else. However, the library is pretty low-level. Fortunately, the author has started another project, [gRaphael](http://g.raphaeljs.com/), that focuses on drawing charts and graphs. The [MIT SIMILE Project](http://simile.mit.edu/) also has some interesting JavaScript libraries: - Timeplot - Timeline There is also a project to port Processing to JavaScript: [ProcessingJS](http://processingjs.org/) [Jmol](http://jmol.sourceforge.net/) is a Java applet for viewing chemical structures, but it is used as the display engine for 3D graphics in the [SAGE](http://www.sagemath.org/) system, which has a completely browser-based GUI. And for an open source alternative to Google Maps, there is the excellent [OpenLayers](http://www.openlayers.org) JavaScript library which powers the frontend of the equally excellent [OpenStreetMap](http://www.openstreetmap.org/).
null
CC BY-SA 2.5
null
2010-07-20T07:17:58.993
2010-07-20T08:45:07.490
2010-07-20T08:45:07.490
13
13
null
236
2
null
224
0
null
Unfortunately, it only runs on macs, but otherwise a great application (basically Processing in python): - http://nodebox.net/code/index.php/Home > NodeBox is a Mac OS X application that lets you create 2D visuals (static, animated or interactive) using Python programming code and export them as a PDF or a QuickTime movie. NodeBox is free and well-documented.
null
CC BY-SA 2.5
null
2010-07-20T07:30:34.160
2010-07-21T18:33:33.063
2010-07-21T18:33:33.063
138
138
null
237
2
null
173
2
null
I'm going to leave the main question alone, because I think I will get it wrong (although I too analyse data for a healthcare provider, and to be honest, if I had these data, I would just analyse them using standard techniques and hope for the best; they look pretty okay to me). As for R packages, I have found the TSA library and its accompanying [book](http://rads.stackoverflow.com/amzn/click/0387759581) very useful indeed. The `armasubsets` command, in particular, I think is a great time saver.
null
CC BY-SA 2.5
null
2010-07-20T07:31:08.963
2010-08-09T12:18:10.497
2010-08-09T12:18:10.497
8
199
null
238
2
null
224
9
null
There is always lovely gnuplot: - http://www.gnuplot.info/ > Gnuplot is a portable command-line driven graphing utility for linux, OS/2, MS Windows, OSX, VMS, and many other platforms. The source code is copyrighted but freely distributed (i.e., you don't have to pay for it). It was originally created to allow scientists and students to visualize mathematical functions and data interactively, but has grown to support many non-interactive uses such as web scripting. It is also used as a plotting engine by third-party applications like Octave. Gnuplot has been supported and under active development since 1986. Gnuplot supports many types of plots in either 2D and 3D. It can draw using lines, points, boxes, contours, vector fields, surfaces, and various associated text. It also supports various specialized plot types.
null
CC BY-SA 2.5
null
2010-07-20T07:33:09.067
2010-07-20T07:33:09.067
2020-06-11T14:32:37.003
-1
138
null
240
2
null
213
4
null
My first response would be that if you can do multivariate regression on the data, then you should use the residuals from that regression to spot outliers. (I know you said it's not a regression problem, so this might not help you, sorry!) I'm copying some of this from a [Stackoverflow question I've previously answered](https://stackoverflow.com/questions/1444306/how-to-use-outlier-tests-in-r-code/1444548#1444548) which has some example [R](http://en.wikipedia.org/wiki/R_(programming_language)) code. First, we'll create some data, and then taint it with an outlier:
```
> testout<-data.frame(X1=rnorm(50,mean=50,sd=10),X2=rnorm(50,mean=5,sd=1.5),Y=rnorm(50,mean=200,sd=25))
> #Taint the Data
> testout$X1[10]<-5
> testout$X2[10]<-5
> testout$Y[10]<-530
> testout
         X1        X2        Y
1  44.20043 1.5259458 169.3296
2  40.46721 5.8437076 200.9038
3  48.20571 3.8243373 189.4652
4  60.09808 4.6609190 177.5159
5  50.23627 2.6193455 210.4360
6  43.50972 5.8212863 203.8361
7  44.95626 7.8368405 236.5821
8  66.14391 3.6828843 171.9624
9  45.53040 4.8311616 187.0553
10  5.00000 5.0000000 530.0000
11 64.71719 6.4007245 164.8052
12 54.43665 7.8695891 192.8824
13 45.78278 4.9921489 182.2957
14 49.59998 4.7716099 146.3090
<snip>
48 26.55487 5.8082497 189.7901
49 45.28317 5.0219647 208.1318
50 44.84145 3.6252663 251.5620
```
It's often most useful to examine the data graphically (your brain is much better at spotting outliers than maths is).
```
> #Use Boxplot to Review the Data
> boxplot(testout$X1, ylab="X1")
> boxplot(testout$X2, ylab="X2")
> boxplot(testout$Y, ylab="Y")
```
You can then use stats to calculate critical cut-off values, here using the Lund Test (see Lund, R. E. 1975, "Tables for An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 4, pp. 473-476. and Prescott, P. 1975, "An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 1, pp. 129-132.)
```
> #Alternative approach using Lund Test
> lundcrit<-function(a, n, q) {
+ # Calculates a Critical value for Outlier Test according to Lund
+ # See Lund, R. E. 1975, "Tables for An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 4, pp. 473-476.
+ # and Prescott, P. 1975, "An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 1, pp. 129-132.
+ # a = alpha
+ # n = Number of data elements
+ # q = Number of independent Variables (including intercept)
+ F<-qf(c(1-(a/n)),df1=1,df2=n-q-1,lower.tail=TRUE)
+ crit<-((n-q)*F/(n-q-1+F))^0.5
+ crit
+ }
> testoutlm<-lm(Y~X1+X2,data=testout)
> testout$fitted<-fitted(testoutlm)
> testout$residual<-residuals(testoutlm)
> testout$standardresid<-rstandard(testoutlm)
> n<-nrow(testout)
> q<-length(testoutlm$coefficients)
> crit<-lundcrit(0.1,n,q)
> testout$Ynew<-ifelse(testout$standardresid>crit,NA,testout$Y)
> testout
         X1        X2        Y    newX1   fitted    residual standardresid
1  44.20043 1.5259458 169.3296 44.20043 209.8467 -40.5171222  -1.009507695
2  40.46721 5.8437076 200.9038 40.46721 231.9221 -31.0183107  -0.747624895
3  48.20571 3.8243373 189.4652 48.20571 203.4786 -14.0134646  -0.335955648
4  60.09808 4.6609190 177.5159 60.09808 169.6108   7.9050960   0.190908291
5  50.23627 2.6193455 210.4360 50.23627 194.3285  16.1075799   0.391537883
6  43.50972 5.8212863 203.8361 43.50972 222.6667 -18.8306252  -0.452070155
7  44.95626 7.8368405 236.5821 44.95626 223.3287  13.2534226   0.326339981
8  66.14391 3.6828843 171.9624 66.14391 148.8870  23.0754677   0.568829360
9  45.53040 4.8311616 187.0553 45.53040 214.0832 -27.0279262  -0.646090667
10  5.00000 5.0000000 530.0000       NA 337.0535 192.9465135   5.714275585
11 64.71719 6.4007245 164.8052 64.71719 159.9911   4.8141018   0.118618011
12 54.43665 7.8695891 192.8824 54.43665 194.7454  -1.8630426  -0.046004311
13 45.78278 4.9921489 182.2957 45.78278 213.7223 -31.4266180  -0.751115595
14 49.59998 4.7716099 146.3090 49.59998 201.6296 -55.3205552  -1.321042392
15 45.07720 4.2355525 192.9041 45.07720 213.9655 -21.0613819  -0.504406009
16 62.27717 7.1518606 186.6482 62.27717 169.2455  17.4027250   0.430262983
17 48.50446 3.0712422 228.3253 48.50446 200.6938  27.6314695   0.667366651
18 65.49983 5.4609713 184.8983 65.49983 155.2768  29.6214506   0.726319931
19 44.38387 4.9305222 213.9378 44.38387 217.7981  -3.8603382  -0.092354925
20 43.52883 8.3777627 203.5657 43.52883 228.9961 -25.4303732  -0.634725264
<snip>
49 45.28317 5.0219647 208.1318 45.28317 215.3075  -7.1756966  -0.171560291
50 44.84145 3.6252663 251.5620 44.84145 213.1535  38.4084869   0.923804784
       Ynew
1  169.3296
2  200.9038
3  189.4652
4  177.5159
5  210.4360
6  203.8361
7  236.5821
8  171.9624
9  187.0553
10       NA
11 164.8052
12 192.8824
13 182.2957
14 146.3090
15 192.9041
16 186.6482
17 228.3253
18 184.8983
19 213.9378
20 203.5657
<snip>
49 208.1318
50 251.5620
```
Obviously there are other outlier tests than the Lund test (Grubbs springs to mind), but I'm not sure which are better suited to multivariate data.
null
CC BY-SA 2.5
null
2010-07-20T07:55:30.107
2010-07-20T07:55:30.107
2017-05-23T12:39:26.203
-1
114
null
241
2
null
213
6
null
I'm not sure what you mean when you say you aren't thinking of a regression problem but of "true multivariate data". My initial response would be to calculate the Mahalanobis distance since it doesn't require that you specify a particular IV or DV, but at its core (as far as I understand it) it is related to a leverage statistic.
null
CC BY-SA 2.5
null
2010-07-20T07:56:06.767
2010-07-20T07:56:06.767
null
null
196
null
242
1
306
null
13
892
This is a bit of a flippant question, but I have a serious interest in the answer. I work in a psychiatric hospital and I have three years of data, collected every day across each ward, regarding the level of violence on that ward. Clearly the model which fits these data is a time series model. I had to difference the scores in order to make them more normal. I fit an ARMA model to the differenced data, and the best fit I think was a model with one degree of differencing and first-order auto-correlation at lag 2. My question is, what on earth can I use this model for? Time series always seems so useful in the textbooks when it's about hare populations and oil prices, but now I've done my own and the result seems so abstract as to be completely opaque. The differenced scores correlate with each other at lag two, but I can't really advise everyone to be on high alert two days after a serious incident in all seriousness. Or can I?
Using time series analysis to analyze/predict violent behavior
CC BY-SA 2.5
null
2010-07-20T07:56:16.297
2017-10-24T18:31:05.713
2010-07-20T15:53:21.083
5
199
[ "time-series", "forecasting" ]
243
2
null
203
8
null
Depending on the size of the dataset in question, a permutation test might be preferable to a bootstrap in that it may be able to provide an exact test of the hypothesis (and an exact CI).
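A minimal sketch of a two-sample permutation test in R (the data and group sizes are made up for illustration; with very small samples you could enumerate all permutations for an exact test instead of sampling them):
```
set.seed(42)
x <- rnorm(8, mean = 0); y <- rnorm(8, mean = 1)   # two hypothetical samples
obs <- mean(x) - mean(y)                           # observed difference in means
pooled <- c(x, y)
perm <- replicate(10000, {
  idx <- sample(length(pooled), length(x))         # random relabelling of the pooled data
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm) >= abs(obs))                        # two-sided permutation p-value
```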
null
CC BY-SA 2.5
null
2010-07-20T08:05:04.060
2010-07-20T09:09:01.443
2010-07-20T09:09:01.443
196
196
null
244
2
null
224
18
null
The Visualization Tool Kit [VTK](http://www.vtk.org) is pretty impressive for 3D visualizations of numerical data. Unfortunately, it is also pretty low level. [Graphviz](http://graphviz.org/) is used pretty extensively for visualizing graphs and other tree-like data structures. [igraph](http://igraph.sourceforge.net/) can also be used for visualization of tree-like data structures. Contains nice interfaces to scripting languages such as R and Python along with a stand-alone C library. The [NCL](http://www.ncl.ucar.edu/) (NCAR Command Language) library contains some pretty neat graphing routines, especially if you are looking at spatially distributed, multidimensional data such as wind fields. Which makes sense as NCAR is the National Center for Atmospheric Research. If you are willing to relax the executable requirement, or try a tool like [py2exe](http://www.py2exe.org/), there is the possibility of leveraging some neat Python libraries and applications such as: - MayaVi: A higher level front-end to VTK developed by Enthought. - Chaco: Another Enthought library focused on 2D graphs. - Matplotlib: Another 2D plotting library. Has nice support for TeX-based mathematical annotation. - Basemap: An add-on to Matplotlib for drawing maps and displaying geographic data (sexy examples here). If we were to bend the concept of "standalone application" even further to include PDF files, there are some neat graphics libraries available to LaTeX users: - Asymptote can generate a variety of graphs, but its crown jewel is definitely the ability to embed 3D graphs into PDF documents that can be manipulated (zoomed, rotated, animated, etc) by anyone using the Adobe Acrobat reader (example). - PGF/TikZ provides a wonderful vector drawing language to TeX documents. The manual is hands-down the most well-written, comprehensive and beautiful piece of documentation I have ever seen in an open source project. PGFPlots provides an abstraction layer for drawing plots. A wonderful showcase can be found at TeXample. - PSTricks served as an inspiration for TikZ and allows users to leverage the power of the PostScript language to create some neat graphics. And for kicks, there's [DISLIN](http://www.mps.mpg.de/dislin/), which has a native interface for `Fortran`! Not open source or free for commercial use though.
null
CC BY-SA 2.5
null
2010-07-20T08:13:31.763
2010-07-20T08:47:44.613
2010-07-20T08:47:44.613
13
13
null
245
2
null
222
3
null
Principal component scores are a group of scores that are obtained following a principal components analysis (PCA). In PCA the relationships between a group of scores are analyzed such that an equal number of new "imaginary" variables (aka principal components) are created. The first of these new imaginary variables is maximally correlated with all of the original group of variables. The next is somewhat less correlated, and so forth until the point that if you used all of the principal component scores to predict any given variable from the initial group you would be able to explain all of its variance. The way in which PCA proceeds is complex and has certain restrictions. Among these is the restriction that the correlation between any two principal components (i.e. imaginary variables) is zero; thus it doesn't make sense to try to predict one principal component with another.
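The zero-correlation restriction mentioned above is easy to verify numerically (R's built-in USArrests data is used here just as a convenient example):
```
round(cor(prcomp(USArrests, scale. = TRUE)$x), 10)   # off-diagonal correlations are all 0
```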
null
CC BY-SA 2.5
null
2010-07-20T08:20:36.037
2010-07-20T08:20:36.037
null
null
196
null
246
2
null
242
6
null
You fitted the model to the differences, which means that you're describing the change in levels of violence. You get a lag of 2 days. A lag is indicative of the memory of the process. In other words, the change in levels of violence today has some dependency on the change in levels of violence in the last two days. For longer time-scales, the contribution of random influences becomes strong enough that there is no clear link anymore. Is the auto-correlation positive? Then a change in levels of violence today suggests a similar change in levels of violence in two days. Is it negative? Then violence might stay higher for two days. Of course, you may want to control for confounding effects. For example, after a serious incident, people may be more likely to report minor incidents, but this "sensitization" would go away after two days.
null
CC BY-SA 2.5
null
2010-07-20T08:28:52.710
2010-07-20T08:28:52.710
null
null
198
null
247
2
null
224
3
null
Might be a bit narrow in scope, but if you're doing any work in Clojure on the JVM there's the excellent [Incanter](http://incanter.org): > Incanter is a Clojure-based, R-like platform for statistical computing and graphics.
null
CC BY-SA 2.5
null
2010-07-20T08:36:36.217
2010-07-20T08:36:36.217
null
null
171
null
248
2
null
196
4
null
ggobi and the R links to Ggobi are really rather good for this. There are simpler visualisations (iPlots is very nice, also interactive, as mentioned). But it depends whether you are doing something more specialised. For example TreeView lets you visualise the kind of cluster dendrograms you get out of microarrays.
null
CC BY-SA 2.5
null
2010-07-20T08:38:38.223
2010-07-20T08:38:38.223
null
null
211
null
249
1
null
null
4
774
I have a set of $N$ bodies, which is a random sample from a population whose mean and variance I want to estimate. A property of each body is measured $m_i$ times ($m_i>1$ and different for each body; index $i$ identifies which body it is; the property is expected to be distributed around zero). I would like to describe the resulting measurements. In particular, I'm interested in the average property value and in the variance. The average value is simple. First calculate the mean value for each body and then calculate the mean of the means. The variance is more tricky. There are two variances: the variance of the measurement and the variance of the property values. In order to have an idea of the confidence we have in any single measurement, we need to account for both sources. Unfortunately, I can't think of a good method. It is obvious that putting all the numbers in a single pool and calculating the stdev of this pool isn't a good idea. Any suggestion? EDIT: Colin Gillespie suggests applying a random effects model. This model seems to be the right solution for my case, except for the fact that it is described (in Wikipedia) for the case where each group (body in my case) is sampled equally ($m_i$ is constant for all bodies), which is not correct in my case.
Variance components
CC BY-SA 3.0
null
2010-07-20T08:49:13.050
2012-12-05T18:20:34.357
2012-12-05T18:20:34.357
17230
213
[ "standard-deviation", "variance", "anova", "random-effects-model" ]
250
2
null
130
5
null
Update (August 2014): as @gappy comments below, as of R version 3.0.0 the limits are higher, which means R is capable of handling larger datasets. Here's a data point: R has a ["big data ceiling"](http://www.bytemining.com/2010/05/hitting-the-big-data-ceiling-in-r/), useful to know if you plan on working with huge data sets. I'm unsure whether the same limitations apply to Clojure/Incanter, or whether it outperforms R or is actually worse. I imagine the JVM can probably handle large datasets, especially if you manage to harness the power of Clojure's lazy features.
null
CC BY-SA 3.0
null
2010-07-20T08:53:15.833
2014-08-01T17:38:23.147
2014-08-01T17:38:23.147
171
171
null
252
2
null
73
6
null
For me personally, I use the following three packages the most, all available from the awesome [Omega Project for Statistical Computing](http://www.omegahat.org) (I do not claim to be an expert, but for my purposes they are very easy to use): - RCurl: It has lots of options which allow access to websites that the default functions in base R would have difficulty with, I think it's fair to say. It is an R-interface to the libcurl library, which has the added benefit of a whole community outside of R developing it. Also available on CRAN. - XML: It is very forgiving of parsing malformed XML/HTML. It is an R-interface to the libxml2 library and again has the added benefit of a whole community outside of R developing it. Also available on CRAN. - RJSONIO: It allows one to parse the text returned from a JSON call and organise it into a list structure for further analysis. The competitor to this package is rjson, but this one has the advantage of being vectorised, readily extensible through S3/S4, fast and scalable to large data.
null
CC BY-SA 2.5
null
2010-07-20T09:13:29.427
2010-07-28T10:02:05.600
2010-07-28T10:02:05.600
81
81
null
253
2
null
138
8
null
If you're an economist/econometrician, then Grant Farnsworth's paper on using R is indispensable and is available on CRAN at: [http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf](http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf)
null
CC BY-SA 2.5
null
2010-07-20T09:26:05.113
2010-07-20T11:37:25.383
2010-07-20T11:37:25.383
215
215
null
254
2
null
249
5
null
I think, if I understand your description correctly, you need to use a [linear mixed model](http://en.wikipedia.org/wiki/Random_effects_model). However, this may be overkill, since these models are used to find differences between groups. For example, if you have two types of bodies and you wish to determine if they are different. Basically, you have between-subject variation and within-subject variation. To fit these models in R, you can use the `lmer` function from the `lme4` library. So if I understand you correctly, your function will look something like this:
```
#Load the R library
library(lme4)
#data is a R data frame that contains your data
#measurement and Subject are variables
fm1 = lmer(measurement ~ (1|Subject), data)
```
If you are looking for differences between bodies, then it will look something like:
```
fm2 = lmer(measurement ~ body + (body|Subject), data)
```
The command `summary(fm1)` should give the values you are after. Here are some resources that will help you get started: - Documentation for the lme4 package - Statistics with R. Most statistical software will be able to fit models of this type. BTW, the subject part is usually called the random effect. However, there are many different views on what a random effect is. See Ch. 11.4 of [Data analysis using regression](http://rads.stackoverflow.com/amzn/click/0521867061) by Gelman and Hill for more details.
null
CC BY-SA 2.5
null
2010-07-20T09:30:18.797
2010-07-20T10:50:24.037
2010-07-20T10:50:24.037
8
8
null
255
2
null
213
8
null
It may be overkill, but you may train an unsupervised Random Forest on the data and use the object proximity measure to detect outliers. More details [here](http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#outliers).
null
CC BY-SA 2.5
null
2010-07-20T09:30:51.697
2010-07-20T09:30:51.697
null
null
null
null
256
1
265
null
24
9466
What is the easiest way to understand boosting? Why doesn't it boost very weak classifiers "to infinity" (perfection)?
How does boosting work?
CC BY-SA 3.0
null
2010-07-20T09:34:22.213
2015-08-25T12:17:32.040
2013-08-16T15:55:48.363
7290
217
[ "machine-learning", "boosting" ]
257
1
262
null
17
11376
We may assume that we have a CSV file and we want a very basic line plot with several lines on one plot and a simple legend.
What is the easiest way to create publication-quality plots under Linux?
CC BY-SA 2.5
null
2010-07-20T09:38:44.770
2022-05-22T13:23:32.747
2017-05-19T12:32:22.213
101426
217
[ "data-visualization" ]
258
1
null
null
14
3544
Rules: - one classifier per answer - vote up if you agree - downvote/remove duplicates. - put your application in the comment
What is the best out-of-the-box 2-class classifier for your application?
CC BY-SA 3.0
null
2010-07-20T09:43:23.910
2016-04-11T08:36:06.453
2011-11-17T00:45:57.047
null
217
[ "machine-learning", "classification", "application" ]
259
2
null
258
12
null
[Support vector machine](http://en.wikipedia.org/wiki/Support_vector_machine)
null
CC BY-SA 2.5
null
2010-07-20T09:44:11.807
2010-07-20T09:44:11.807
null
null
217
null
260
2
null
258
14
null
[Random forest](http://en.wikipedia.org/wiki/Random_forest) - easily captures complicated structure/nonlinear relationship - invariant to variables' scale - no need to create dummy variables for categorical predictors - variable selection is not much needed - relatively hard to overfit
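A minimal sketch with the randomForest package on a synthetic two-class problem (the data, formula and ntree value are illustrative only):
```
library(randomForest)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(200) > 0, "A", "B"))   # made-up binary outcome
fit <- randomForest(y ~ x1 + x2, data = d, ntree = 500)
print(fit)   # out-of-bag error estimate and confusion matrix
```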
null
CC BY-SA 3.0
null
2010-07-20T09:45:06.377
2016-04-11T08:36:06.453
2016-04-11T08:36:06.453
85709
217
null
261
2
null
257
13
null
It's hard to go past R for graphics. You could do what you want in 3 lines. For example, assuming the csv file has four columns: ``` x <- read.csv("file.csv") matplot(x[,1],x[,2:4],type="l",col=1:3) legend("topleft",legend=c("A","B","C"),lty=1,col=1:3) ```
null
CC BY-SA 2.5
null
2010-07-20T09:45:18.933
2010-07-20T09:45:18.933
null
null
159
null
262
2
null
257
14
null
The easiest way is to use R. Use `read.csv` to enter the data into R, then use a combination of the `plot` and `lines` commands. If you want something really special, then look at the libraries [ggplot2](http://had.co.nz/ggplot2/) or [lattice](http://lmdvr.r-forge.r-project.org/figures/figures.html). In `ggplot2` the following commands should get you started.
```
require(ggplot2)
#You would use read.csv here
N = 10
d = data.frame(x=1:N, y1=runif(N), y2=rnorm(N), y3=rnorm(N, 0.5))
p = ggplot(d)
p = p+geom_line(aes(x, y1, colour="Type 1"))
p = p+geom_line(aes(x, y2, colour="Type 2"))
p = p+geom_line(aes(x, y3, colour="Type 3"))
#Add points
p = p+geom_point(aes(x, y3, colour="Type 3"))
print(p)
```
This would give you the following plot: [](https://i.stack.imgur.com/Hlf1P.png) Saving plots in R Saving plots in R is straightforward:
```
#Look at ?jpeg for other saving options
jpeg("figure.jpg")
print(p)  #for ggplot2 graphics
dev.off()
```
Instead of `jpeg`s you can also save as a `pdf` or postscript file:
```
#This example uses R base graphics
#Just change to print(p) for ggplot2
pdf("figure.pdf")
plot(d$x, d$y1, type="l")
lines(d$x, d$y2)
dev.off()
```
null
CC BY-SA 4.0
null
2010-07-20T09:50:45.953
2022-05-22T13:23:32.747
2022-05-22T13:23:32.747
11887
8
null
263
2
null
257
11
null
My favorite tool is Python with [matplotlib](https://matplotlib.org/). The advantages: - Immediate export from the environment where I do my experiments - Support for the scipy/numpy data structures - Familiar syntax/options (Matlab background) - Full LaTeX support for labels/legends etc., so the same typesetting as in the rest of your document! Specifically, for different file formats like svg and eps, use the `format` parameter of [savefig](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html). An example: input.csv "Line 1",0.5,0.8,1.0,0.9,0.9 "Line 2",0.2,0.7,1.2,1.1,1.1 Code:
```
import csv
import matplotlib.pyplot as plt

legends = []
for row in csv.reader(open('input.csv')):
    legends.append(row[0])
    plt.plot([float(v) for v in row[1:]])   # csv values are read in as strings
plt.legend(legends)
plt.savefig("out.svg", format='svg')
```
null
CC BY-SA 4.0
null
2010-07-20T09:53:11.583
2022-05-22T12:59:53.333
2022-05-22T12:59:53.333
313631
190
null
264
2
null
134
7
null
[Here](https://dcc-backup.ligo.org/public/0027/T030168/000/T030168-00.pdf) is an article describing one possible algorithm. Source code included and a quite serious application (gravitational wave detection based on laser interferometry), so you can expect it to be well tested.
null
CC BY-SA 3.0
null
2010-07-20T09:59:00.690
2018-02-27T15:34:22.913
2018-02-27T15:34:22.913
22047
217
null
265
2
null
256
28
null
In plain English: if your classifier misclassifies some data, train another copy of it mainly on this misclassified part, in the hope that it will discover something subtle. And then, as usual, iterate. Along the way there are some voting schemes that allow you to combine all those classifiers' predictions in a sensible way. Why can't it boost very weak classifiers to perfection? Because sometimes that is impossible (the noise is just hiding some of the information, or it is not even present in the data); on the other hand, boosting too much may lead to overfitting.
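To make the "train another copy on the misclassified part" idea concrete, here is a self-contained AdaBoost-style sketch in base R using decision stumps on simulated one-dimensional data (the data and all parameters are made up for illustration):
```
set.seed(1)
n <- 200
x <- runif(n)
y <- ifelse(x > 0.3 & x < 0.7, 1, -1)        # labels in {-1, +1}; no single stump fits this
w <- rep(1/n, n)                             # observation weights
stumps <- list(); alphas <- numeric(0)
for (m in 1:20) {
  best <- NULL                               # find the best weighted decision stump
  for (thr in seq(0.05, 0.95, by = 0.05)) for (s in c(-1, 1)) {
    pred <- s * ifelse(x > thr, 1, -1)
    err <- sum(w[pred != y])
    if (is.null(best) || err < best$err) best <- list(thr = thr, s = s, err = err)
  }
  if (best$err >= 0.5) break                 # weak learner no better than chance: stop
  alpha <- 0.5 * log((1 - best$err) / best$err)      # vote weight for this classifier
  pred <- best$s * ifelse(x > best$thr, 1, -1)
  w <- w * exp(-alpha * y * pred); w <- w / sum(w)   # up-weight the misclassified points
  stumps[[length(stumps) + 1]] <- best; alphas <- c(alphas, alpha)
}
# weighted vote of all stumps
score <- rowSums(mapply(function(st, a) a * st$s * ifelse(x > st$thr, 1, -1), stumps, alphas))
mean(sign(score) == y)                       # training accuracy of the boosted ensemble
```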
null
CC BY-SA 2.5
null
2010-07-20T10:05:48.050
2010-07-20T10:17:16.153
2010-07-20T10:17:16.153
null
null
null
266
2
null
134
3
null
As you mentioned, sorting would be `O(n·log n)` for a window of length `n`. Doing this for a moving window adds another factor `l = vectorlength`, making the total cost `O(l·n·log n)`. The simplest way to push this down is by keeping an ordered list of the last `n` elements in memory when moving from one window to the next. As removing/inserting one element from/into an ordered list are both `O(n)`, this results in a cost of `O(l·n)`. In R (with `input` the data vector and `n` the window length):
```
l <- length(input)
output <- rep(NA, l)
aidvector <- sort(input[1:n])                  # ordered copy of the current window
output[n] <- aidvector[ceiling(n/2)]           # (lower) median of the first window
for (i in (n+1):l) {
  aidvector <- aidvector[-match(input[i-n], aidvector)]    # drop the oldest value, O(n)
  pos <- findInterval(input[i], aidvector)
  aidvector <- append(aidvector, input[i], after = pos)    # insert the new value, O(n)
  output[i] <- aidvector[ceiling(n/2)]
}
```
null
CC BY-SA 2.5
null
2010-07-20T10:23:56.707
2010-07-20T10:23:56.707
null
null
128
null
267
1
271
null
9
10166
If I have two lists A and B, both of which are subsets of a much larger list C, how can I determine if the degree of overlap of A and B is greater than I would expect by chance? Should I just randomly select elements from C of the same lengths as lists A and B and determine that random overlap, and do this many times to determine some kind or empirical p-value? Is there a better way to test this?
How do I calculate if the degree of overlap between two lists is significant?
CC BY-SA 2.5
null
2010-07-20T10:35:55.960
2013-07-17T02:30:07.350
null
null
194
[ "statistical-significance" ]
268
2
null
47
3
null
You could opt for a supervised self-organizing map (e.g. with the [kohonen](http://cran.r-project.org/web/packages/kohonen/index.html) package for R), and use the login frequency as the dependent variable. That way, the clustering will focus on separating the frequent visitors from the rare visitors. By plotting the number of users on each map unit, you may get an idea of the clusters present in your data. Because SOMs are non-linear mapping methods, this approach is particularly interesting for tailed data.
null
CC BY-SA 2.5
null
2010-07-20T11:01:06.313
2010-07-20T11:01:06.313
null
null
107
null
269
1
416
null
45
423555
What is the difference between a population and a sample? What common variables and statistics are used for each one, and how do those relate to each other?
What is the difference between a population and a sample?
CC BY-SA 2.5
null
2010-07-20T11:07:42.403
2017-06-16T02:01:31.063
2010-08-07T17:55:39.090
null
62
[ "standard-deviation", "variance", "sample", "population" ]
270
1
279
null
18
7841
Due to the factorial in the Poisson distribution, it becomes impractical to estimate Poisson models (for example, using maximum likelihood) when the observed counts are large. So, for example, if I am trying to estimate a model to explain the number of suicides in a given year (only annual data are available), and, say, there are thousands of suicides every year, is it wrong to express suicides in hundreds, so that 2998 would become 29.98 ~= 30? In other words, is it wrong to change the unit of measurement to make the data manageable?
Poisson regression with large data: is it wrong to change the unit of measurement?
CC BY-SA 2.5
null
2010-07-20T11:08:47.770
2022-04-22T11:18:35.717
2010-10-08T16:05:24.073
8
90
[ "modeling", "poisson-distribution", "large-data" ]
271
2
null
267
9
null
If I understand your question correctly, you need to use the [Hypergeometric distribution](http://en.wikipedia.org/wiki/Hypergeometric_distribution). This distribution is usually associated with urn models, i.e. there are $n$ balls in an urn, $y$ are painted red, and you draw $m$ balls from the urn. Then if $X$ is the number of balls in your sample of $m$ that are red, $X$ has a hypergeometric distribution. For your specific example, let $n_A$, $n_B$ and $n_C$ denote the lengths of your three lists and let $n_{AB}$ denote the overlap between $A$ and $B$. Then $$n_{AB} \sim \text{HG}(n_A, n_C, n_B)$$ To calculate a p-value, you could use this R command:
```
#Some example values
n_A = 100; n_B = 200; n_C = 500; n_A_B = 50
1-phyper(n_A_B, n_B, n_C-n_B, n_A)
[1] 0.008626697
```
Word of caution: remember multiple testing, i.e. if you have lots of A and B lists, then you will need to adjust your p-values with a correction, for example the FDR or Bonferroni corrections.
null
CC BY-SA 3.0
null
2010-07-20T11:10:42.150
2013-07-17T02:30:07.350
2013-07-17T02:30:07.350
805
8
null
272
2
null
161
4
null
Also, for some elaborate discussion (including bashing of ADF / PP / KPSS :) you might want to have a look at the book by Maddala and Kim: [http://www.amazon.com/Cointegration-Structural-Change-Themes-Econometrics/dp/0521587824](http://rads.stackoverflow.com/amzn/click/0521587824) Quite extensive and not very easy to read sometimes, but a useful reference.
null
CC BY-SA 2.5
null
2010-07-20T11:14:41.487
2010-07-20T11:14:41.487
null
null
216
null
273
2
null
25
1
null
- Clearly R - RapidMiner is nice, but switching to thinking in terms of operators takes a moment - Matlab / Octave If you describe a specific problem, I may be able to get more specific.
null
CC BY-SA 2.5
null
2010-07-20T11:18:01.390
2010-07-20T11:18:01.390
null
null
216
null
274
2
null
269
14
null
The population is the whole set of values, or individuals, you are interested in. The sample is a subset of the population, and is the set of values you actually use in your estimation. So, for example, if you want to know the average height of the residents of China, that is your population, i.e., the population of China. The thing is, this is quite a large number, and you wouldn't be able to get data for everyone there. So you draw a sample, that is, you get some observations, or the height of some of the people in China (a subset of the population, the sample), and do your inference based on that.
null
CC BY-SA 2.5
null
2010-07-20T11:21:59.493
2010-07-20T11:21:59.493
null
null
90
null
275
2
null
270
7
null
In the case of the Poisson it is a bad idea, since counts are counts -- their unit is unity. On the other hand, if you use some advanced software like R, its Poisson-handling functions will be aware of such large numbers and will use some numerical tricks to handle them. Obviously I agree that the normal approximation is another good approach.
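As a small, purely illustrative aside in R: the Poisson log-likelihood with counts in the thousands poses no numerical problem, because `dpois()` can return the log density directly and never forms the factorial explicitly:
```
dpois(2998, lambda = 3000, log = TRUE)   # log-likelihood term for a count of 2998, no overflow
# a Poisson glm with counts this large is routine, e.g. (hypothetical variable names):
# glm(suicides ~ year, family = poisson)
```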
null
CC BY-SA 2.5
null
2010-07-20T11:29:53.070
2010-07-20T12:26:55.800
2010-07-20T12:26:55.800
null
null
null
276
1
553
null
13
10659
Is there a rule-of thumb or even any way at all to tell how large a sample should be in order to estimate a model with a given number of parameters? So, for example, if I want to estimate a least-squares regression with 5 parameters, how large should the sample be? Does it matter what estimation technique you are using (e.g. maximum likelihood, least squares, GMM), or how many or what tests you are going to perform? Should the sample variability be taken into account when making the decision?
How large should a sample be for a given estimation technique and parameters?
CC BY-SA 2.5
null
2010-07-20T11:43:25.013
2010-09-16T22:25:36.037
null
null
90
[ "sample-size", "estimation", "least-squares", "maximum-likelihood" ]
277
1
null
null
28
14538
When would one prefer to use a Conditional Autoregressive model over a Simultaneous Autoregressive model when modelling autocorrelated geo-referenced aerial data?
Spatial statistics models: CAR vs SAR
CC BY-SA 3.0
null
2010-07-20T11:49:02.490
2016-06-23T07:40:23.197
2016-06-23T07:40:23.197
7486
215
[ "modeling", "spatial" ]
278
1
null
null
8
339
When a non-hierarchical cluster analysis is carried out, the order of observations in the data file determines the clustering results, especially if the data set is small (e.g., 5000 observations). To deal with this problem I usually perform a random reordering of the data observations. My problem is that if I replicate the analysis n times, the results obtained are different, and sometimes these differences are large. How can I deal with this problem? Maybe I could run the analysis several times and afterwards consider that an observation belongs to the group to which it was assigned most often. Does someone have a better approach to this problem? Manuel Ramon
How to deal with the effect of the order of observations in a non hierarchical cluster analysis?
CC BY-SA 2.5
null
2010-07-20T11:49:27.543
2010-09-17T20:36:37.643
2010-09-17T20:36:37.643
null
221
[ "clustering" ]
279
2
null
270
19
null
When you're dealing with a Poisson distribution with large values of $\lambda$ (its parameter), it is common to use a normal approximation to the Poisson distribution. As [this site](http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html) mentions, it's all right to use the normal approximation when $\lambda$ gets over 20, and the approximation improves as $\lambda$ gets even higher. The Poisson distribution is defined only over the state space consisting of the non-negative integers, so rescaling and rounding is going to introduce odd things into your data. Using the normal approx. for large Poisson statistics is VERY common.
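A quick R sketch comparing the exact Poisson tail with the normal approximation for a large $\lambda$ (the numbers are illustrative):
```
lambda <- 3000
x <- 3100
c(exact  = ppois(x, lambda),
  normal = pnorm(x + 0.5, mean = lambda, sd = sqrt(lambda)))   # with continuity correction
```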
null
CC BY-SA 2.5
null
2010-07-20T11:54:15.197
2010-07-20T11:54:15.197
null
null
62
null
280
2
null
138
5
null
The [R project](http://www.r-project.org/) website has lots of manuals to start, and I suggest you the [Nabble R forum](http://r.789695.n4.nabble.com/) and the [R-bloggers](http://www.r-bloggers.com/) site as well.
null
CC BY-SA 2.5
null
2010-07-20T11:54:50.530
2010-07-20T11:54:50.530
null
null
221
null
281
2
null
73
4
null
Day-to-day the most useful package must be "foreign" which has functions for reading and writing data for other statistical packages e.g. Stata, SPSS, Minitab, SAS, etc. Working in a field where R is not that commonplace means that this is a very important package.
null
CC BY-SA 2.5
null
2010-07-20T11:59:56.727
2012-08-27T17:52:32.967
2012-08-27T17:52:32.967
919
215
null
282
2
null
222
78
null
First, let's define a score. John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows: ``` Maths Science English Music John 80 85 60 55 Mike 90 85 70 45 Kate 95 80 40 50 ``` In this case there are 12 scores in total. Each score represents the exam results for each person in a particular subject. So a score in this case is simply a representation of where a row and column intersect. Now let's informally define a Principal Component. In the table above, can you easily plot the data in a 2D graph? No, because there are four subjects (which means four variables: Maths, Science, English, and Music), i.e.: - You could plot two subjects in the exact same way you would with $x$ and $y$ co-ordinates in a 2D graph. - You could even plot three subjects in the same way you would plot $x$, $y$ and $z$ in a 3D graph (though this is generally bad practice, because some distortion is inevitable in the 2D representation of 3D data). But how would you plot 4 subjects? At the moment we have four variables which each represent just one subject. So a method around this might be to somehow combine the subjects into maybe just two new variables which we can then plot. This is known as Multidimensional scaling. Principal Component analysis is a form of multidimensional scaling. It is a linear transformation of the variables into a lower dimensional space which retain maximal amount of information about the variables. For example, this would mean we could look at the types of subjects each student is maybe more suited to. A principal component is therefore a combination of the original variables after a linear transformation. In R, this is: ``` DF <- data.frame(Maths=c(80, 90, 95), Science=c(85, 85, 80), English=c(60, 70, 40), Music=c(55, 45, 50)) prcomp(DF, scale = FALSE) ``` Which will give you something like this (first two Principal Components only for sake of simplicity): ``` PC1 PC2 Maths 0.27795606 0.76772853 Science -0.17428077 -0.08162874 English -0.94200929 0.19632732 Music 0.07060547 -0.60447104 ``` The first column here shows coefficients of linear combination that defines principal component #1, and the second column shows coefficients for principal component #2. So what is a Principal Component Score? It's a score from the table at the end of this post (see below). The above output from R means we can now plot each person's score across all subjects in a 2D graph as follows. First, we need to center the original variables by subtracting column means: ``` Maths Science English Music John -8.33 1.66 3.33 5 Mike 1.66 1.66 13.33 -5 Kate 6.66 -3.33 -16.66 0 ``` And then to form linear combinations to get PC1 and PC2 scores: ``` x y John -0.28*8.33 + -0.17*1.66 + -0.94*3.33 + 0.07*5 -0.77*8.33 + -0.08*1.66 + 0.19*3.33 + -0.60*5 Mike 0.28*1.66 + -0.17*1.66 + -0.94*13.33 + -0.07*5 0.77*1.66 + -0.08*1.66 + 0.19*13.33 + -0.60*5 Kate 0.28*6.66 + 0.17*3.33 + 0.94*16.66 + 0.07*0 0.77*6.66 + 0.08*3.33 + -0.19*16.66 + -0.60*0 ``` Which simplifies to: ``` x y John -5.39 -8.90 Mike -12.74 6.78 Kate 18.13 2.12 ``` There are six principal component scores in the table above. You can now plot the scores in a 2D graph to get a sense of the type of subjects each student is perhaps more suited to. The same output can be obtained in R by typing `prcomp(DF, scale = FALSE)$x`. EDIT 1: Hmm, I probably could have thought up a better example, and there is more to it than what I've put here, but I hope you get the idea. 
EDIT 2: full credit to @drpaulbrewer for his comment in improving this answer.
null
CC BY-SA 4.0
null
2010-07-20T12:02:26.817
2022-07-25T13:56:39.230
2022-07-25T13:56:39.230
919
81
null
283
1
307
null
71
98662
What is meant when we say we have a saturated model?
What is a "saturated" model?
CC BY-SA 2.5
null
2010-07-20T12:09:08.457
2022-03-24T21:14:54.740
2010-07-20T14:26:17.513
null
215
[ "modeling", "regression" ]
284
2
null
50
6
null
A random variable, usually denoted X, is a variable whose outcome is uncertain. The observation of a particular outcome of this variable is called a realisation. More concretely, it is a function which maps a probability space into a measurable space, usually called a state space. Random variables are discrete (can take a number of distinct values) or continuous (can take an infinite number of values). Consider the random variable X which is the total obtained when rolling two dice. It can take any of the values 2-12 (though not with equal probability, even with fair dice: 7 is the most likely total) and the outcome is uncertain until the dice are rolled.
null
CC BY-SA 2.5
null
2010-07-20T12:25:30.920
2010-07-20T12:25:30.920
null
null
215
null
285
2
null
47
1
null
You might consider transforming (perhaps a log) the positively skewed variables. If after exploring various clustering algorithms you find that the four variables simply reflect varying intensity levels of usage, you might think about a theoretically based classification. Presumably this classification is going to be used for a purpose and that purpose could drive meaningful cut points on one or more of the variables.
null
CC BY-SA 2.5
null
2010-07-20T12:26:39.483
2010-07-20T12:26:39.483
null
null
183
null
286
2
null
170
23
null
There's a superb probability book here: [https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html](https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/%7Echance/teaching_aids/books_articles/probability_book/book.html) which you can also buy in hard copy.
null
CC BY-SA 4.0
null
2010-07-20T12:28:32.410
2023-06-02T11:50:50.423
2023-06-02T11:50:50.423
362671
211
null
287
1
506
null
16
5784
Can someone explain to me the difference between the method of moments and GMM (generalized method of moments), their relationship, and when one or the other should be used?
What is the difference/relationship between method of moments and GMM?
CC BY-SA 2.5
null
2010-07-20T12:29:17.607
2021-02-26T17:25:32.740
2013-10-22T14:58:47.763
5739
90
[ "estimation", "method-of-moments", "generalized-moments" ]
288
1
null
null
7
3063
Suppose that I culture cancer cells in $n$ different dishes $g_1, g_2, \dots, g_n$ and observe the number of cells $n_i$ in each dish that look different than normal. The total number of cells in dish $g_i$ is $t_i$. There are individual differences between individual cells, but also differences between the populations in different dishes, because each dish has a slightly different temperature, amount of liquid, and so on. I model this as a beta-binomial distribution: $n_i \sim \text{Binomial}(p_i, t_i)$ where $p_i \sim \text{Beta}(\alpha, \beta)$. Given a number of observations of $n_i$ and $t_i$, how can I estimate $\alpha$ and $\beta$?
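One possible approach (not necessarily the best one) is to maximise the beta-binomial likelihood numerically; the sketch below uses base R's `optim()` with made-up example values for $n_i$ and $t_i$, optimising on the log scale to keep $\alpha, \beta > 0$:
```
ni <- c(12, 5, 20, 8, 15)        # hypothetical counts of abnormal cells
ti <- c(100, 80, 120, 90, 110)   # hypothetical totals per dish
negll <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])
  # beta-binomial likelihood: choose(t, n) * B(n + a, t - n + b) / B(a, b)
  -sum(lchoose(ti, ni) + lbeta(ni + a, ti - ni + b) - lbeta(a, b))
}
fit <- optim(c(0, 0), negll)
exp(fit$par)                     # estimates of alpha and beta
```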
Estimating beta-binomial distribution
CC BY-SA 2.5
null
2010-07-20T12:29:34.187
2012-11-15T07:31:15.177
2010-07-21T03:10:58.123
null
220
[ "estimation", "beta-binomial-distribution" ]
289
2
null
224
5
null
For visualizing graphs in a Java/SWT environment, check out Zest: [http://eclipse.org/gef/zest](http://eclipse.org/gef/zest)
null
CC BY-SA 2.5
null
2010-07-20T12:32:41.060
2010-07-20T12:32:41.060
null
null
80
null
290
1
445
null
9
2053
I know of Cameron and Trivedi's Microeconometrics Using Stata. What are other good texts for learning Stata?
Resources for learning Stata
CC BY-SA 3.0
null
2010-07-20T12:33:30.577
2017-10-12T15:20:16.267
2015-10-31T10:17:52.700
22468
189
[ "references", "stata" ]
291
2
null
7
11
null
A good place to look is Carnegie Mellon University's [Data and Story Library or DASL](http://lib.stat.cmu.edu/DASL/), which contains data files that "illustrate the use of basic statistics methods... A good example can make a lesson on a particular statistics method vivid and relevant. DASL is designed to help teachers locate and identify datafiles for teaching. We hope that DASL will also serve as an archive for datasets from the statistics literature."
null
CC BY-SA 3.0
null
2010-07-20T12:35:43.100
2016-03-29T09:06:44.293
2016-03-29T09:06:44.293
22228
211
null
292
2
null
276
2
null
It should always be large enough! ;) All parameter estimates come with an estimation uncertainty, which is determined by the sample size. If you carry out a regression analysis, it helps to remind yourself that the $\chi^2$ statistic is constructed from the input data set. If your model had 5 parameters and you had 5 data points, you would only be able to calculate a single point of the $\chi^2$ surface. Since you will need to minimize it, you could only pick that one point as a guess for the minimum, but would have to assign infinite errors to your estimated parameters. Having more data points would allow you to map the parameter space better, leading to a better estimate of the minimum of the $\chi^2$ surface and thus smaller estimator errors. Were you using a maximum likelihood estimator instead, the situation would be similar: more data points lead to a better estimate of the minimum. As for point variance, you would need to model this as well. Having more data points would make the clustering of points around the "true" value more obvious (due to the Central Limit Theorem) and the danger of interpreting a large chance fluctuation as the true value for that point would go down. And as for any other parameter, your estimate for the point variance would become more stable the more data points you have.
null
CC BY-SA 2.5
null
2010-07-20T12:38:38.663
2010-07-20T12:38:38.663
null
null
56
null
293
2
null
192
4
null
I think you need to rework this question. It all depends on the problem/data which has generated the cross-tab.
null
CC BY-SA 2.5
null
2010-07-20T12:43:02.087
2010-07-20T12:43:02.087
null
null
211
null
294
2
null
53
61
null
Bernard Flury, in his excellent book introducing multivariate analysis, described this as an anti-property of principal components. It's actually worse than choosing between correlation and covariance. If you changed the units (e.g. US style gallons, inches etc. and EU style litres, centimetres) you would get substantively different projections of the data. The argument against automatically using correlation matrices is that it is quite a brutal way of standardising your data. The problem with automatically using the covariance matrix, which is very apparent with that heptathlon data, is that the variables with the highest variance will dominate the first principal component (the variance maximising property). So the "best" method to use is based on a subjective choice, careful thought and some experience.
null
CC BY-SA 2.5
null
2010-07-20T12:47:39.550
2010-07-20T12:47:39.550
null
null
211
null
295
2
null
31
18
null
A nice definition of p-value is "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true". The problem with that is that it requires an understanding of "test statistic" and "null hypothesis". But, that's easy to get across. If the null hypothesis is true, usually something like "parameter from population A is equal to parameter from population B", and you calculate statistics to estimate those parameters, what is the probability of seeing a test statistic that says, "they're this different"? E.g., If the coin is fair, what is the probability I'd see 60 heads out of 100 tosses? That's testing the null hypothesis, "the coin is fair", or "p = .5" where p is the probability of heads. The test statistic in that case would be the number of heads. Now, I assume that what you're calling "t-value" is a generic "test statistic", not a value from a "t distribution". They're not the same thing, and the term "t-value" isn't (necessarily) widely used and could be confusing. What you're calling "t-value" is probably what I'm calling "test statistic". In order to calculate a p-value (remember, it's just a probability) you need a distribution, and a value to plug into that distribution which will return a probability. Once you do that, the probability you return is your p-value. You can see that they are related because under the same distribution, different test-statistics are going to return different p-values. More extreme test-statistics will return lower p-values giving greater indication that the null hypothesis is false. I've ignored the issue of one-sided and two-sided p-values here.
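The coin example can be computed directly in R (numbers as in the text; doubling the one-sided tail is one common way to get a two-sided value):
```
binom.test(60, 100, p = 0.5)$p.value   # exact two-sided p-value for 60 heads in 100 tosses
2 * (1 - pbinom(59, 100, 0.5))         # P(at least 60 heads) doubled; roughly the same
```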
null
CC BY-SA 2.5
null
2010-07-20T12:52:55.920
2010-07-20T12:52:55.920
null
null
62
null
296
2
null
7
13
null
[MLComp](https://web.archive.org/web/20100730170034/http://mlcomp.org/) has quite a few interesting datasets, and as a bonus your algorithm will get ranked if you upload it.
null
CC BY-SA 4.0
null
2010-07-20T12:54:28.273
2022-11-22T02:51:53.290
2022-11-22T02:51:53.290
362671
127
null
297
2
null
3
7
null
I really enjoy working with [RooFit](http://roofit.sourceforge.net/) for easy proper fitting of signal and background distributions and [TMVA](http://tmva.sourceforge.net/) for quick principal component analyses and modelling of multivariate problems with some standard tools (like genetic algorithms and neural networks, also does BDTs). They are both part of the [ROOT](http://root.cern.ch/drupal/) C++ libraries which have a pretty heavy bias towards particle physics problems though.
null
CC BY-SA 2.5
null
2010-07-20T13:08:42.907
2010-07-20T13:08:42.907
null
null
56
null
298
1
null
null
214
501393
Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else?
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
CC BY-SA 2.5
null
2010-07-20T13:11:50.297
2021-12-09T20:01:45.203
2021-08-24T07:10:36.977
35989
125
[ "regression", "distributions", "data-transformation", "logarithm", "faq" ]
299
2
null
298
16
null
One typically takes the log of an input variable to scale it and change the distribution (e.g. to make it normally distributed). It cannot be done blindly however; you need to be careful when making any scaling to ensure that the results are still interpretable. This is discussed in most introductory statistics texts. You can also read Andrew Gelman's paper on ["Scaling regression inputs by dividing by two standard deviations"](http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf) for a discussion on this. He also has a very nice discussion on this at the beginning of ["Data Analysis Using Regression and Multilevel/Hierarchical Models"](http://www.stat.columbia.edu/~gelman/arm/). Taking the log is not an appropriate method for dealing with bad data/outliers.
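As a brief, hedged sketch of the two ideas mentioned here (the data are simulated, not from any real study): a strongly right-skewed input becomes much more symmetric on the log scale, and the log-scale variable can then be centred and divided by two standard deviations in the Gelman style if comparability of coefficients is wanted:

```
set.seed(2)
income <- rlnorm(1000, meanlog = 10, sdlog = 1)    # heavily right-skewed input
log_income <- log(income)                          # far more symmetric by construction
scaled_log <- (log_income - mean(log_income)) / (2 * sd(log_income))  # Gelman-style 2-SD scaling
par(mfrow = c(1, 2))
hist(income,     main = "raw scale")
hist(log_income, main = "log scale")
```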
null
CC BY-SA 2.5
null
2010-07-20T13:16:29.303
2010-07-20T13:22:18.760
2010-07-20T13:22:18.760
5
5
null
300
2
null
278
4
null
A "right" answer cannot depend on an arbitrary ordering imposed by the method you are using. You need to consider all possible orderings (or at least a representative sample of them) and estimate your parameters in every case. This will give you distributions for the parameters you are trying to estimate. Estimate the "true" parameter values from these distributions (this will also give you an estimate of your estimator error). Alternatively, use a method that doesn't introduce an ordering.
null
CC BY-SA 2.5
null
2010-07-20T13:19:31.293
2010-07-20T13:19:31.293
null
null
56
null
301
2
null
298
11
null
You tend to take logs of the data when there is a problem with the residuals. For example, if you plot the residuals against a particular covariate and observe an increasing/decreasing pattern (a funnel shape), then a transformation may be appropriate. Non-random residuals usually indicate that your model assumptions are wrong, i.e. non-normal data. Some data types automatically lend themselves to logarithmic transformations. For example, I usually take logs when dealing with concentrations or age. Although transformations aren't primarily used to deal with outliers, they do help, since taking logs squashes your data.
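Here is a small illustrative sketch of the funnel-shaped residual pattern (simulated data, and note it logs the response rather than a covariate, purely to make the pattern easy to see): multiplicative noise produces a funnel on the raw scale that disappears after a log transformation.

```
set.seed(3)
x <- runif(200, 1, 10)
y <- exp(1 + 0.5 * x + rnorm(200, sd = 0.4))   # error is multiplicative on the raw scale
par(mfrow = c(1, 2))
plot(x, resid(lm(y ~ x)),      main = "raw y: funnel-shaped residuals")
plot(x, resid(lm(log(y) ~ x)), main = "log(y): roughly constant spread")
```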
null
CC BY-SA 2.5
null
2010-07-20T13:22:40.320
2010-07-20T13:22:40.320
null
null
8
null
302
2
null
276
2
null
I've heard two rules of thumb in this regard. One holds that so long as there are enough observations in the error term to invoke the central limit theorem, e.g. 20 or 30, you are fine. The other holds that for each estimated slope one should have at least 20 or 30 observations. The difference between using 20 or 30 as the target number is based on different views regarding when there are enough observations to reasonably invoke the Central Limit Theorem.
null
CC BY-SA 2.5
null
2010-07-20T14:04:12.107
2010-07-20T14:04:12.107
null
null
196
null
303
2
null
288
3
null
You have a hierarchical Bayesian model. Brief details below:

Likelihood function:
$$f(n_i \mid p_i, t_i) = \binom{t_i}{n_i} p_i^{n_i} (1-p_i)^{t_i - n_i}$$

Priors on $p_i$, $\alpha$, $\beta$:
$$p_i \sim \mathrm{Beta}(\alpha, \beta)$$
$$\alpha \sim N(\alpha_{mean}, \alpha_{var})\, I(\alpha > 0)$$
$$\beta \sim N(\beta_{mean}, \beta_{var})\, I(\beta > 0)$$

The full conditional posteriors are (writing $B(\alpha,\beta)$ for the Beta function and $N$ for the number of groups):
$$p_i \mid \alpha, \beta, n_i, t_i \sim \mathrm{Beta}(\alpha + n_i,\ \beta + t_i - n_i)$$
$$p(\alpha \mid \cdot) \propto I(\alpha > 0)\, B(\alpha,\beta)^{-N} \prod_i p_i^{\alpha-1}\, \exp\!\left(-\frac{(\alpha-\alpha_{mean})^2}{2\,\alpha_{var}}\right)$$
$$p(\beta \mid \cdot) \propto I(\beta > 0)\, B(\alpha,\beta)^{-N} \prod_i (1-p_i)^{\beta-1}\, \exp\!\left(-\frac{(\beta-\beta_{mean})^2}{2\,\beta_{var}}\right)$$

You can then use a combination of Gibbs and Metropolis-Hastings to draw from the posterior distributions.
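A minimal Metropolis-within-Gibbs sketch of this sampler in R is given below. It is only illustrative: the data (`t`, `n`), the hyperprior means/variances and the random-walk proposal standard deviations are all made up, and real applications would want tuning and convergence checks.

```
set.seed(4)
t <- c(50, 60, 40, 80, 70)               # trials per unit (made up)
n <- rbinom(length(t), t, prob = 0.15)   # observed successes (made up)
N <- length(n)
a_mean <- 2;  a_var <- 10                # hyperprior for alpha (truncated normal)
b_mean <- 10; b_var <- 100               # hyperprior for beta  (truncated normal)

log_cond_alpha <- function(a, b, p) {
  if (a <= 0) return(-Inf)
  sum((a - 1) * log(p)) - N * lbeta(a, b) - (a - a_mean)^2 / (2 * a_var)
}
log_cond_beta <- function(a, b, p) {
  if (b <= 0) return(-Inf)
  sum((b - 1) * log(1 - p)) - N * lbeta(a, b) - (b - b_mean)^2 / (2 * b_var)
}

n_iter <- 5000
alpha <- 1; beta <- 1
out <- matrix(NA, n_iter, 2 + N)
for (s in 1:n_iter) {
  p <- rbeta(N, alpha + n, beta + t - n)     # Gibbs step for each p_i
  a_prop <- rnorm(1, alpha, 0.5)             # random-walk Metropolis step for alpha
  if (log(runif(1)) < log_cond_alpha(a_prop, beta, p) - log_cond_alpha(alpha, beta, p))
    alpha <- a_prop
  b_prop <- rnorm(1, beta, 2)                # random-walk Metropolis step for beta
  if (log(runif(1)) < log_cond_beta(alpha, b_prop, p) - log_cond_beta(alpha, beta, p))
    beta <- b_prop
  out[s, ] <- c(alpha, beta, p)
}
colMeans(out[-(1:1000), 1:2])                # posterior means of alpha and beta after burn-in
```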
null
CC BY-SA 3.0
null
2010-07-20T14:08:42.377
2012-11-15T07:31:15.177
2012-11-15T07:31:15.177
9007
null
null
304
2
null
298
3
null
Shane's point about taking the log to deal with bad data is well taken, as is Colin's regarding the importance of normal residuals. In practice I find that you can usually get normal residuals if the input and output variables are also relatively normal. In practice this means eyeballing the distribution of the transformed and untransformed datasets and assuring oneself that they have become more normal, and/or conducting tests of normality (e.g. the Shapiro-Wilk or Kolmogorov-Smirnov tests) and determining whether the outcome is more normal. Interpretability and tradition are also important. For example, in cognitive psychology log transforms of reaction time are often used; however, to me at least, the interpretation of a log RT is unclear. Furthermore, one should be cautious using log-transformed values, as the shift in scale can change a main effect into an interaction and vice versa.
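A quick illustrative sketch of the normality-testing idea mentioned above (the "reaction time" data are simulated from a log-normal, so the log is exactly normal by construction; real data will be messier):

```
set.seed(5)
rt_ms <- rlnorm(200, meanlog = 6, sdlog = 0.4)   # skewed simulated reaction times (ms)
shapiro.test(rt_ms)$p.value        # typically very small: clearly non-normal
shapiro.test(log(rt_ms))$p.value   # typically large: no evidence against normality after the log
```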
null
CC BY-SA 2.5
null
2010-07-20T14:13:50.103
2010-07-20T14:13:50.103
null
null
196
null
305
1
796
null
53
36585
It seems like when the assumption of homogeneity of variance is met that the results from a Welch adjusted t-test and a standard t-test are approximately the same. Why not simply always use the Welch adjusted t?
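A tiny simulation (not from the question itself; the sample sizes and effect are made up) illustrates the claim that the two tests nearly coincide when the variances really are equal:

```
set.seed(6)
x <- rnorm(30, mean = 0,   sd = 1)
y <- rnorm(30, mean = 0.5, sd = 1)
t.test(x, y, var.equal = TRUE)$p.value    # classic pooled-variance t-test
t.test(x, y, var.equal = FALSE)$p.value   # Welch-adjusted t-test (R's default); nearly identical here
```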
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
CC BY-SA 2.5
null
2010-07-20T14:19:41.383
2022-11-07T16:03:40.413
2010-07-26T13:05:38.663
159
196
[ "variance", "t-test", "heteroscedasticity" ]
306
2
null
242
9
null
The model that fits the data doesn't have to be a time series model; I would advise thinking outside the box a little. If you have multiple variables (e.g. age, gender, diet, ethnicity, illness, medication) you can use these for a different model. Maybe having certain patients in the same room is an important predictor? Or perhaps it has to do with the attending staff? Or consider using a multivariate time series model (e.g. a VECM) if you have other variables that you can use. Look at the relationships between violence across patients: do certain patients act out together? The time series model is useful if time has some important role in the behavior. For instance, there might be a clustering of violence; look at the volatility-clustering literature. As @Jonas suggests, with a lag order of 2, you may need to be on higher alert on the day following a burst in violence. But that doesn't help you prevent the first day: there may be other information that you can link into the analysis to actually understand the cause of the violence, rather than simply forecasting it in a time series fashion. Lastly, as a technical suggestion: if you're using R for the analysis, you might have a look at [the forecast package](http://cran.r-project.org/web/packages/forecast/index.html) from Rob Hyndman (the creator of this site). This has many very nice features; see the paper ["Automatic Time Series Forecasting: The forecast Package for R"](http://www.jstatsoft.org/v27/i03) in the Journal of Statistical Software.
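For the technical suggestion at the end, a minimal sketch of the forecast package in use (the incident counts below are made up; it assumes the package is installed):

```
# install.packages("forecast")   # if not already installed
library(forecast)
incidents <- ts(rpois(104, lambda = 3), frequency = 7)  # made-up daily counts with a weekly frequency
fit <- auto.arima(incidents)                            # automatic ARIMA model selection
plot(forecast(fit, h = 14))                             # two weeks ahead, with prediction intervals
```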
null
CC BY-SA 2.5
null
2010-07-20T14:21:37.820
2010-07-20T14:21:37.820
null
null
5
null
307
2
null
283
46
null
A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data left to estimate variance. For example, if you have 6 data points and fit a 5th-order polynomial to the data, you would have a saturated model (one parameter for each of the 5 powers of your independent variable plus one for the constant term).
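The 6-point example can be seen directly in R (simulated data, purely illustrative): the 5th-order polynomial reproduces the points exactly and leaves zero residual degrees of freedom for estimating the error variance.

```
set.seed(7)
x <- 1:6
y <- 2 * x + rnorm(6)
fit <- lm(y ~ poly(x, 5))
max(abs(resid(fit)))   # essentially zero: a perfect (saturated) fit
df.residual(fit)       # 0 residual degrees of freedom, so no variance estimate
```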
null
CC BY-SA 2.5
null
2010-07-20T14:23:30.843
2010-07-20T14:23:30.843
null
null
229
null
308
2
null
290
2
null
There are a couple of good links with introductory material at the Princeton University Library [website](http://libguides.princeton.edu/content.php?pid=27916&sid=459449).
null
CC BY-SA 3.0
null
2010-07-20T14:38:07.360
2015-08-07T12:32:14.717
2015-08-07T12:32:14.717
22047
22
null
309
2
null
305
3
null
The fact that something more complex reduces to something less complex when some assumption holds is not enough of a reason to throw the simpler method away.
null
CC BY-SA 2.5
null
2010-07-20T14:40:32.910
2010-07-20T14:40:32.910
null
null
null
null
310
2
null
270
1
null
You can simply ignore the factorial term when using maximum likelihood. Here is the reasoning for your suicides example.

Let $\lambda$ be the expected number of suicides per year and $k_i$ the number of suicides in year $i$. Then you would maximize the log-likelihood
$$LL = \sum_i \left( k_i \log\lambda - \lambda - \log(k_i!) \right)$$
Maximizing the above is equivalent to maximizing the following, since $\log(k_i!)$ does not depend on $\lambda$:
$$LL' = \sum_i \left( k_i \log\lambda - \lambda \right)$$
Could you explain why the factorial is an issue? Am I missing something?
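A quick numerical check of this equivalence (the yearly counts are simulated, not real data): the factorial term only shifts the log-likelihood by a constant, so the maximizing $\lambda$ is unchanged and equals the sample mean.

```
set.seed(8)
k <- rpois(20, lambda = 7)                         # made-up yearly counts
ll_full <- function(lam) sum(dpois(k, lam, log = TRUE))   # includes the -log(k!) term
ll_kern <- function(lam) sum(k * log(lam) - lam)          # drops the constant term
optimize(ll_full, c(0.01, 50), maximum = TRUE)$maximum    # ~ mean(k)
optimize(ll_kern, c(0.01, 50), maximum = TRUE)$maximum    # same maximizer
mean(k)                                                   # the closed-form MLE
```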
null
CC BY-SA 2.5
null
2010-07-20T14:41:33.560
2010-07-20T14:41:33.560
null
null
null
null
311
2
null
220
23
null
Rob Hyndman gave the easy exact answer for a fixed n. If you're interested in asymptotic behavior for large n, this is handled in the field of [extreme value theory](http://en.wikipedia.org/wiki/Extreme_value_theory). There is a small family of possible limiting distributions; see for example the first chapters of [this book](http://books.google.com/books?id=3ZKmAAAAIAAJ&q=leadbetter+and+lindgren&dq=leadbetter+and+lindgren&hl=en&ei=z7ZFTPPFH9T-nAfa58HaAw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC0Q6AEwAA).
null
CC BY-SA 2.5
null
2010-07-20T14:47:31.380
2010-07-20T14:47:31.380
null
null
89
null
312
1
318
null
5
171
I'm a physics graduate who ended up doing infosec so most of the statistics I ever learned is useful for thermodynamics. I'm currently trying to think of a model for working out how many of a population of computers are infected with viruses, though I assume the maths works out the same way for real-world diseases so references in or answers relevant to that field would be welcome too. Here's what I've come up with so far: - assume I know the total population of computers, N. - I know the fraction D of computers that have virus-detection software (i.e. the amount of the population that is being screened) - I know the fraction I of computers that have detection software that has reported an infection - I don't know, but can find out or estimate, the probability of Type I and II errors in the detection software. - I don't (yet) care about the time evolution of the population. So where do I go from here? Would you model infection as a binomial distribution with probability like (I given D), or as a Poisson? Or is the distribution different?
What approach could be used for modelling virus infections?
CC BY-SA 2.5
null
2010-07-20T15:03:40.450
2022-11-30T05:56:14.203
null
null
null
[ "distributions", "modeling", "poisson-distribution", "binomial-distribution", "disease" ]
313
2
null
31
13
null
Imagine you have a bag containing 900 black marbles and 100 white, i.e. 10% of the marbles are white. Now imagine you take 1 marble out, look at it and record its colour, take out another, record its colour, etc., and do this 100 times. At the end of this process you will have a count of white marbles which, ideally, we would expect to be 10, i.e. 10% of 100, but in actual fact may be 8, or 13, or whatever, simply due to randomness. If you repeat this 100-marble withdrawal experiment many, many times and then plot a histogram of the number of white marbles drawn per experiment, you'll find you will have a bell curve centred about 10.

This represents your 10% hypothesis: with any bag containing 1000 marbles of which 10% are white, if you randomly take out 100 marbles you will find 10 white marbles in the selection, give or take 4 or so. The p-value is all about this "give or take 4 or so."

Let's say by referring to the bell curve created earlier you can determine that less than 5% of the time would you get 5 or fewer white marbles, and another < 5% of the time accounts for 15 or more white marbles, i.e. > 90% of the time your 100-marble selection will contain between 6 and 14 white marbles inclusive.

Now, assuming someone plonks down a bag of 1000 marbles with an unknown number of white marbles in it, we have the tools to answer these questions:

i) Are there fewer than 100 white marbles?
ii) Are there more than 100 white marbles?
iii) Does the bag contain 100 white marbles?

Simply take out 100 marbles from the bag and count how many of this sample are white.

a) If there are 6 to 14 whites in the sample, you cannot reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 6 through 14 will be > 0.05.
b) If there are 5 or fewer whites in the sample, you can reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 5 or fewer will be < 0.05. You would expect the bag to contain < 10% white marbles.
c) If there are 15 or more whites in the sample, you can reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 15 or more will be < 0.05. You would expect the bag to contain > 10% white marbles.

In response to Baltimark's comment: given the example above, there is approximately a

- 4.8% chance of getting 5 white balls or fewer
- 1.85% chance of 4 or fewer
- 0.55% chance of 3 or fewer
- 0.1% chance of 2 or fewer
- 6.25% chance of 15 or more
- 3.25% chance of 16 or more
- 1.5% chance of 17 or more
- 0.65% chance of 18 or more
- 0.25% chance of 19 or more
- 0.1% chance of 20 or more
- 0.05% chance of 21 or more

These numbers were estimated from an empirical distribution generated by a simple Monte Carlo routine run in R and the resultant quantiles of the sampling distribution.

For the purposes of answering the original question, suppose you draw 5 white balls. There is only an approximate 4.8% chance that, if the 1000-marble bag really does contain 10% white balls, you would pull out only 5 whites in a sample of 100. This equates to a p-value < 0.05. You now have to choose between:

i) There really are 10% white balls in the bag and I have just been "unlucky" to draw so few, or
ii) I have drawn so few white balls that there can't really be 10% white balls (reject the hypothesis of 10% white balls).
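The original Monte Carlo routine isn't shown, so the following is only a guess at what it might have looked like: draw 100 marbles without replacement from a bag of 900 black and 100 white, many times over, and inspect the tails of the resulting sampling distribution. The tail proportions it produces should be comparable to the percentages listed above.

```
set.seed(9)
bag <- c(rep("white", 100), rep("black", 900))
whites <- replicate(100000, sum(sample(bag, 100) == "white"))  # without replacement
hist(whites, breaks = 30)     # bell-shaped, centred near 10
mean(whites <= 5)             # lower-tail probability of 5 or fewer whites
mean(whites >= 15)            # upper-tail probability of 15 or more whites
```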
null
CC BY-SA 2.5
null
2010-07-20T15:10:15.333
2010-07-21T00:33:56.993
2010-07-21T00:33:56.993
226
226
null
314
2
null
257
12
null
R is definitely the answer. I would just add to what Rob and Colin already said: To improve the quality of your plots, you should consider using [the Cairo package](http://cran.r-project.org/web/packages/Cairo/index.html) for the output device. That will greatly improve the quality of the final graphics. You simply call the function before plotting and it redirects to Cairo as the output device.

```
library(Cairo)  # provides the Cairo() graphics device
Cairo(600, 600, file="plot.png", type="png", bg="white")
plot(rnorm(4000),rnorm(4000),col="#ff000018",pch=19,cex=2) # semi-transparent red
dev.off() # creates a file "plot.png" with the above plot
```

Lastly, in terms of putting it in a publication, that's the role that `Sweave` plays. It makes combining plots with your paper a trivial operation (and has the added benefit of leaving you with something that is reproducible and understandable). Use `cacheSweave` if you have long-running computations.
null
CC BY-SA 2.5
null
2010-07-20T15:10:34.927
2010-07-20T15:10:34.927
null
null
5
null
315
2
null
224
6
null
If you can use R try [ggplot2](http://had.co.nz/ggplot2/).
null
CC BY-SA 2.5
null
2010-07-20T15:15:57.377
2010-07-20T15:15:57.377
null
null
36
null
316
2
null
73
8
null
- ggplot2 - hands down the best visualization package for R.
- RMySQL/RSQLite/RODBC - for connecting to databases.
- sqldf - manipulate data.frames with SQL queries.
- Hmisc/rms - packages from Frank Harrell containing convenient miscellaneous functions and nice functions for regression analyses.
- GenABEL - nice package for genome-wide association studies.
- Rcmdr - a decent GUI for R if you need one.

Also check out [this CRANtastic link](http://crantastic.org/popcon), which has a list of the most popular R packages. Many at the top of the list have already been mentioned here.
null
CC BY-SA 2.5
null
2010-07-20T15:20:38.567
2010-07-20T15:28:42.347
2010-07-20T15:28:42.347
36
36
null
317
2
null
114
18
null
[The Endeavour](http://www.johndcook.com/blog/) sometimes features statistics posts. Otherwise it is mostly around the interplay of computer science and math.
null
CC BY-SA 2.5
null
2010-07-20T15:21:01.857
2010-07-20T15:21:01.857
null
null
56
null
318
2
null
312
5
null
Computer virus propagation is structurally similar to infectious diseases propagation (vaccinations = anti-virus software, virus via email = getting a virus from someone etc). Use the following links: [Mathematical modelling of infectious disease](https://en.wikipedia.org/wiki/Mathematical_modelling_of_infectious_disease) and [Compartmental models in epidemiology](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology) as jumping points for your model.
null
CC BY-SA 4.0
null
2010-07-20T15:23:53.653
2022-11-30T05:56:14.203
2022-11-30T05:56:14.203
362671
null
null
319
2
null
31
34
null
No amount of verbal explanation or calculations really helped me to understand at a gut level what p-values were, but it really snapped into focus for me once I took a course that involved simulation. That gave me the ability to actually see data generated by the null hypothesis and to plot the means/etc. of simulated samples, then look at where my sample's statistic fell on that distribution. I think the key advantage to this is that it lets students forget about the math and the test statistic distributions for a minute and focus on the concepts at hand. Granted, it required that I learn how to simulate that stuff, which will cause problems for an entirely different set of students. But it worked for me, and I've used simulation countless times to help explain statistics to others with great success (e.g., "This is what your data looks like; this is what a Poisson distribution looks like overlaid. Are you SURE you want to do a Poisson regression?"). This doesn't exactly answer the questions you posed, but for me, at least, it made them trivial.
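To make the simulation idea concrete, here is a minimal R sketch (all numbers are made up for illustration): generate the sampling distribution of the mean under the null hypothesis, see where the observed statistic falls, and read off a simulation-based p-value.

```
set.seed(10)
observed_mean <- 0.6                       # pretend this came from a sample of n = 30
null_means <- replicate(10000, mean(rnorm(30, mean = 0, sd = 1)))  # null: mu = 0, sd = 1
hist(null_means)
abline(v = observed_mean, col = "red")     # where the observed statistic sits under the null
mean(abs(null_means) >= abs(observed_mean))   # two-sided simulation-based p-value
```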
null
CC BY-SA 2.5
null
2010-07-20T15:33:42.533
2010-07-20T15:33:42.533
null
null
71
null
320
2
null
283
28
null
A saturated model is a model that is overparameterized to the point that it is basically just interpolating the data. In some settings, such as image compression and reconstruction, this isn't necessarily a bad thing, but if you're trying to build a predictive model it's very problematic. In short, saturated models lead to extremely high-variance predictors that are being pushed around by the noise more than the actual data. As a thought experiment, imagine you've got a saturated model, and there is noise in the data, then imagine fitting the model a few hundred times, each time with a different realization of the noise, and then predicting a new point. You're likely to get radically different results each time, both for your fit and your prediction (and polynomial models are especially egregious in this regard); in other words the variance of the fit and the predictor are extremely high. By contrast a model that is not saturated will (if constructed reasonably) give fits that are more consistent with each other even under different noise realization, and the variance of the predictor will also be reduced.
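The thought experiment can be run directly in R. This sketch assumes a true straight-line relationship with unit-variance noise (all numbers made up): refitting a saturated 5th-order polynomial under fresh noise each time makes the prediction at a new x value swing far more than a simple linear fit does.

```
set.seed(11)
x <- 1:6
x_new <- data.frame(x = 1.5)
pred_once <- function(degree) {
  y <- 2 * x + rnorm(6)                           # a fresh noise realization each call
  predict(lm(y ~ poly(x, degree)), newdata = x_new)
}
sd(replicate(500, pred_once(5)))   # saturated 5th-order fit: predictions swing a lot
sd(replicate(500, pred_once(1)))   # straight-line fit: noticeably more stable
```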
null
CC BY-SA 2.5
null
2010-07-20T15:47:04.940
2010-07-20T15:47:04.940
null
null
61
null