**Statistics Technical Reports**

**Term(s):** 1994 **Results:** 15

**Title:** Locally Adaptive Lag-Window Spectral Estimation **Author(s):** Bühlmann, Peter **Date issued:** Oct 1994

http://nma.berkeley.edu/ark:/28722/bk000472r3v (PDF)

**Abstract:** We propose a procedure for choosing the locally optimal window width in nonparametric spectral estimation, minimizing the asymptotic mean square error of a lag-window estimator at a fixed frequency. Our approach is based on an iterative plug-in scheme. Besides estimating a spectral density at a fixed frequency, e.g. at frequency zero, our procedure makes it possible to perform nonparametric spectral estimation with a variable window width that adapts to the smoothness of the true underlying spectral density.

**Pub info:** Journal of Time Series Analysis, Vol. 17 **Keyword note:** Buhlmann__Peter **Report ID:** 422
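For orientation, the basic (fixed-width) lag-window estimator that the abstract builds on can be sketched in a few lines. This is a generic illustration with a Bartlett lag window, not the paper's locally adaptive plug-in procedure; the function name and window choice are ours:

```python
import numpy as np

def lag_window_spectrum(x, lam, M):
    """Lag-window spectral estimate at frequency lam (radians), using a
    Bartlett lag window of fixed width M (the quantity the paper chooses
    adaptively)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # biased sample autocovariances gamma(0), ..., gamma(M)
    gamma = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(M + 1)])
    w = 1.0 - np.arange(M + 1) / M          # Bartlett window w(k/M)
    k = np.arange(1, M + 1)
    return (gamma[0] + 2.0 * np.sum(w[1:] * gamma[1:] * np.cos(k * lam))) / (2.0 * np.pi)

# for unit-variance white noise the true spectral density is 1/(2*pi) everywhere
rng = np.random.default_rng(0)
f0 = lag_window_spectrum(rng.normal(size=4000), 0.0, M=20)
```

The bias-variance trade-off the paper optimizes lives in `M`: a larger window width lowers bias near sharp spectral features but raises variance.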

**Title:** Bagging Predictors **Author(s):** Breiman, Leo **Date issued:** Sep 1994

http://nma.berkeley.edu/ark:/28722/bk0000n1v1j (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n1v23 (PostScript)

**Abstract:** Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.

**Keyword note:** Breiman__Leo **Report ID:** 421
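The aggregation scheme the abstract describes is short enough to sketch directly. This is a minimal generic illustration (function names and the linear base learner are our choices, not Breiman's setup with trees and subset selection):

```python
import numpy as np

def bag_predict(x_train, y_train, x_test, fit, predict, n_boot=50, rng=None):
    """Bagging for a numerical outcome: fit the base learner on bootstrap
    replicates of the learning set and average the resulting predictions."""
    rng = np.random.default_rng(rng)
    n = len(x_train)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap replicate
        model = fit(x_train[idx], y_train[idx])
        preds.append(predict(model, x_test))
    return np.mean(preds, axis=0)                 # averaging; a class label would use a plurality vote

# toy use: bagged straight-line fits on noisy data
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(scale=0.1, size=40)
fit = lambda xs, ys: np.polyfit(xs, ys, deg=1)
predict = lambda coef, xs: np.polyval(coef, xs)
y_hat = bag_predict(x, y, x, fit, predict, n_boot=25, rng=1)
```

With a stable base learner like this one, bagging changes little; the abstract's point is that the gains come from unstable procedures such as trees.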

**Title:** Predicting multivariate responses in multiple linear regression **Author(s):** Breiman, L.; Friedman, J. H. **Date issued:** August 1994 **Keyword note:** Breiman__Leo Friedman__J_H **Report ID:** 420

**Title:** Resampling Fewer Than n Observations: Gains, Losses, and Remedies for Losses **Author(s):** Bickel, P. J.; Götze, F.; van Zwet, W. R. **Date issued:** Aug 1994

http://nma.berkeley.edu/ark:/28722/bk0000n3515 (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n352q (PostScript)

**Abstract:** We discuss a number of resampling schemes in which $m = o(n)$ observations are resampled. We review nonparametric bootstrap failure and give results, old and new, on how the $m$ out of $n$ bootstrap, with and without replacement, works. We extend work of Bickel and Yahav (1988) to show that the $m$ out of $n$ bootstrap can be made second-order correct when the usual nonparametric bootstrap is correct, and we study how these extrapolation techniques work when the nonparametric bootstrap doesn't.

**Keyword note:** Bickel__Peter_John Gotze__Friedrich van_Zwet__W_R **Report ID:** 419
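The resampling scheme itself is easy to state concretely (the second-order corrections and extrapolation results are beyond this sketch). A minimal illustration, using the sample maximum, a classic statistic for which the full $n$ out of $n$ bootstrap is known to fail; names here are ours:

```python
import numpy as np

def m_out_of_n_bootstrap(data, stat, m, n_rep=1000, replace=True, rng=None):
    """Resample m (< n) observations per replicate, with or without
    replacement, and evaluate a statistic on each replicate."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    return np.array([stat(rng.choice(data, size=m, replace=replace))
                     for _ in range(n_rep)])

# n = 200 uniforms; resample only m = 20 per replicate
rng = np.random.default_rng(0)
x = rng.uniform(size=200)
reps = m_out_of_n_bootstrap(x, np.max, m=20, n_rep=500, rng=1)
```

The `replace` flag switches between the two variants the abstract contrasts: the $m$ out of $n$ bootstrap with replacement and subsampling without replacement.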

**Title:** Some issues in the foundation of statistics **Author(s):** Freedman, D. A. **Date issued:** August 1994 **Keyword note:** Freedman__David **Report ID:** 418

**Title:** Simultaneous Confidence Intervals for Linear Estimates of Linear Functionals **Author(s):** Stark, P. B. **Date issued:** Aug 1994 **Date modified:** revised March 1995

http://nma.berkeley.edu/ark:/28722/bk0000n1t8w (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n1t9f (PostScript)

**Abstract:** This note presents three ways of constructing simultaneous confidence intervals for linear estimates of linear functionals in inverse problems, including "Backus-Gilbert" estimates. Simultaneous confidence intervals are needed to compare estimates, for example, to find spatial variations in a distributed parameter. The notion of simultaneous confidence intervals is introduced using coin tossing as an example before moving to linear inverse problems. The first method for constructing simultaneous confidence intervals is based on the Bonferroni inequality, and applies generally to confidence intervals for any set of parameters, from dependent or independent observations. The second method for constructing simultaneous confidence intervals in inverse problems is based on a "global" measure of fit to the data, which allows one to compute simultaneous confidence intervals for any number of linear functionals of the model that are linear combinations of the data mappings. This leads to confidence intervals whose widths depend on percentage points of the chi-square distribution with $n$ degrees of freedom, where $n$ is the number of data. The third method uses the joint normality of the estimates to find shorter confidence intervals than the other methods give, at the cost of evaluating some integrals numerically.

**Keyword note:** Stark__Philip_B **Report ID:** 417
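The first (Bonferroni) construction is simple to make concrete: to guarantee joint coverage at least $1-\alpha$ for $k$ intervals, build each one at level $1-\alpha/k$. A normal-theory sketch under assumed standard errors (function and variable names are ours, not the note's):

```python
from statistics import NormalDist

def bonferroni_intervals(estimates, std_errs, alpha=0.05):
    """Simultaneous normal-theory confidence intervals via the Bonferroni
    inequality: split alpha evenly across the k estimates."""
    k = len(estimates)
    z = NormalDist().inv_cdf(1.0 - alpha / (2.0 * k))   # per-interval critical value
    return [(e - z * s, e + z * s) for e, s in zip(estimates, std_errs)]

# three linear estimates with their standard errors; joint coverage >= 95%
ivals = bonferroni_intervals([1.0, -0.5, 2.3], [0.2, 0.1, 0.4])
```

The price of simultaneity is visible in the critical value: for $k=3$ and $\alpha=0.05$ it is about 2.39 rather than the pointwise 1.96, which is why the note's sharper second and third constructions can pay off.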

**Title:** Heuristics of instability and stabilization in model selection **Author(s):** Breiman, L. **Date issued:** June 1994

http://nma.berkeley.edu/ark:/28722/bk000472h80 (PDF)

**Keyword note:** Breiman__Leo **Report ID:** 416

**Title:** Some asymptotics of wavelet fits in the stationary error case **Author(s):** Brillinger, D. R. **Date issued:** June 1994

http://nma.berkeley.edu/ark:/28722/bk000472h7f (PDF)

**Keyword note:** Brillinger__David_R **Report ID:** 415

**Title:** Consistency of Bayes estimates for nonparametric regression: normal theory **Author(s):** Diaconis, P.; Freedman, D. A. **Date issued:** May 1994

http://nma.berkeley.edu/ark:/28722/bk000472h6w (PDF)

**Keyword note:** Diaconis__Persi Freedman__David **Report ID:** 414

**Title:** Looking at Markov Samplers through Cusum Path Plots: a simple diagnostic idea **Author(s):** Yu, Bin **Date issued:** Jun 1994

http://nma.berkeley.edu/ark:/28722/bk0000n1z9h (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n200j (PostScript)

**Abstract:** In this paper, we propose to monitor a Markov chain sampler using the cusum path plot of a chosen one-dimensional summary statistic. We argue that the cusum path plot can bring out, more effectively than the sequential plot, those aspects of a Markov sampler which tell the user how quickly or slowly the sampler is moving around in its sample space, in the direction of the summary statistic. The proposal is then illustrated in four examples which represent situations where the cusum path plot works well and where it does not. Moreover, a rigorous analysis is given for one of the examples. We conclude that the cusum path plot is an effective tool for convergence diagnostics of a Markov sampler and for comparing different Markov samplers.

**Keyword note:** Yu__Bin **Report ID:** 413
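The diagnostic itself is trivially cheap to compute: the cusum path of a summary statistic is the running sum of its deviations from the overall mean. A minimal sketch (the plotting and the paper's theoretical analysis are omitted; the slowly mixing AR(1) chain below is our stand-in for sampler output):

```python
import numpy as np

def cusum_path(summary):
    """Cusum path of a one-dimensional summary statistic: partial sums of
    deviations from the overall mean. A slowly mixing sampler produces a
    smooth path with large excursions; a fast one, a jagged path near zero."""
    s = np.asarray(summary, dtype=float)
    return np.cumsum(s - s.mean())

# a highly autocorrelated AR(1) chain as a stand-in for slow sampler output
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.95 * x[t - 1] + rng.normal()
path = cusum_path(x)
```

By construction the path starts and ends at (numerically) zero; it is the size and smoothness of its excursions in between that carry the diagnostic information.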

**Title:** Neighbourhood 'correlation ratio' curves **Author(s):** Doksum, K.; Froda, S. **Date issued:** April 1994

http://nma.berkeley.edu/ark:/28722/bk000472h5b (PDF)

**Keyword note:** Doksum__Kjell_Andreas Froda__S **Report ID:** 412

**Title:** Some properties of splitting criteria **Author(s):** Breiman, L. **Date issued:** March 1994 **Keyword note:** Breiman__Leo **Report ID:** 410

**Title:** Estimating $L^1$ Error of Kernel Estimator: Monitoring Convergence of Markov Samplers **Author(s):** Yu, Bin **Date issued:** Nov 1994

http://nma.berkeley.edu/ark:/28722/bk0000n1z6v (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n1z7d (PostScript)

**Abstract:** In many Markov chain Monte Carlo problems, the target density function is known up to a normalization constant. In this paper, we take advantage of this knowledge to facilitate the convergence diagnostic of a Markov sampler by estimating the $L^1$ error of a kernel estimator. Firstly, we propose an estimator of the normalization constant which is shown to be asymptotically normal under mixing and moment conditions. Secondly, the $L^1$ error of the kernel estimator is estimated using the normalization constant estimator, and the ratio of the estimated $L^1$ error to the true $L^1$ error is shown to converge to 1 in probability under similar conditions. Thirdly, we propose a sequential plot of the estimated $L^1$ error as a tool to monitor the convergence of the Markov sampler. Finally, a 2-dimensional bimodal example is given to illustrate the proposal, and two Markov samplers are compared in the example using the proposed diagnostic plot.

**Keyword note:** Yu__Bin **Report ID:** 409
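To fix ideas, here is the quantity being monitored, not the paper's estimator (which is the hard part, since there the target is known only up to a normalization constant): the $L^1$ distance between a kernel density estimate of sampler output and a fully known target density, approximated on a grid. All names and choices below are ours:

```python
import numpy as np

def l1_error(sample, true_pdf, grid, bandwidth):
    """L1 distance between a Gaussian kernel density estimate of `sample`
    and a known density `true_pdf`, approximated by a Riemann sum on `grid`."""
    z = (grid[:, None] - np.asarray(sample)[None, :]) / bandwidth
    kde = np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2.0 * np.pi))
    dx = grid[1] - grid[0]
    return np.sum(np.abs(kde - true_pdf(grid))) * dx

# i.i.d. standard normals as a stand-in for output of a converged Markov sampler
rng = np.random.default_rng(0)
draws = rng.normal(size=500)
grid = np.linspace(-5.0, 5.0, 401)
phi = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)
err = l1_error(draws, phi, grid, bandwidth=0.3)
```

Plotting this quantity sequentially as the sample grows gives the kind of monitoring curve the abstract's third point describes, in the idealized case where the target is fully known.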

**Title:** From association to causation via regression **Author(s):** Freedman, David A. **Date issued:** Apr 1994

http://nma.berkeley.edu/ark:/28722/bk0000n2415 (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n242q (PostScript)

**Abstract:** For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work -- a principle honored more often in the breach than the observance. I will discuss modeling techniques that seem to convert association into causation. The object is to clarify the differences among the various uses of regression, and the difficulties in making causal inferences by modeling.

**Keyword note:** Freedman__David **Report ID:** 408

**Title:** Inference in Hidden Markov Models I: Local Asymptotic Normality in the Stationary Case **Author(s):** Bickel, P. J.; Ritov, Y. **Date issued:** Feb 1994 **Date modified:** revised April 1995

http://nma.berkeley.edu/ark:/28722/bk0000n348h (PDF)

http://nma.berkeley.edu/ark:/28722/bk0000n3492 (PostScript)

**Abstract:** Following up on Baum and Petrie (1966), we study likelihood based methods in hidden Markov models, where the hiding mechanism can lead to continuous observations and is itself governed by a parametric model. We show that procedures essentially equivalent to maximum likelihood estimates are asymptotically normal, as expected, and that consistent estimates of their variance can be constructed, so that the usual inferential procedures are asymptotically valid.

**Keyword note:** Bickel__Peter_John Ritov__Yaacov **Report ID:** 383
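The likelihoods underlying these methods are computed by the forward recursion, which dates back to the Baum-Petrie line of work the abstract cites. A minimal scaled-forward sketch for a finite-state chain (array shapes and names are our choices; the paper's setting also covers continuous observations through a parametric emission model):

```python
import numpy as np

def hmm_log_likelihood(emission_probs, trans, init):
    """Log-likelihood of an observation sequence under a hidden Markov model,
    via the scaled forward recursion.
    emission_probs: (T, S) array of p(obs_t | state s);
    trans: (S, S) row-stochastic transition matrix; init: (S,) initial law."""
    alpha = init * emission_probs[0]
    c = alpha.sum()
    log_lik = np.log(c)
    alpha = alpha / c
    for b in emission_probs[1:]:
        alpha = (alpha @ trans) * b      # propagate then weight by emissions
        c = alpha.sum()
        log_lik += np.log(c)             # accumulate log of scaling constants
        alpha = alpha / c
    return log_lik

# sanity check: uninformative emissions of probability 0.5 in a 2-state chain
T = 4
probs = np.full((T, 2), 0.5)
trans = np.full((2, 2), 0.5)
ll = hmm_log_likelihood(probs, trans, np.array([0.5, 0.5]))
```

Maximizing this log-likelihood over the parameters of the transition and emission models gives the estimators whose asymptotic normality the report establishes.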