Reviews of John Aitchison's books
1. The lognormal distribution, with special reference to its uses in economics (1957), by John Aitchison and J A C Brown.
Mathematical Reviews MR0084966.
A variate is said to be lognormally distributed if its logarithm (or in some cases the logarithm of a translation) is Gaussian. The study of the lognormal distribution dates at least to McAlister in 1879, and it and its nice properties have been rediscovered often since then. One of the many attractive aspects of this book is a bibliography of 217 items, most of which bear directly on the lognormal distribution. ... This is a well-written and well-made book, bound to be a stimulus and an aid to research workers in economics and other fields.
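The defining property the reviewer states, that the logarithm of a lognormal variate is Gaussian, can be checked with a short simulation. This sketch is illustrative only: the parameter values, seed, and sample size are my assumptions, not taken from the book.

```python
import math
import random

# If X is lognormal, then log(X) is Gaussian. Draw lognormal variates by
# exponentiating normal draws, then verify that the sample mean and standard
# deviation of the logs recover the underlying Gaussian parameters.
random.seed(42)
mu, sigma = 1.0, 0.5  # parameters of the underlying normal (assumed values)
xs = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]

logs = [math.log(x) for x in xs]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
# mean should be close to mu, and var ** 0.5 close to sigma
```

The same exponentiation trick is how lognormal samplers are typically implemented in practice.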
Mathematical Reviews MR0365778 (51 #2030).
The problems are arranged in 7 chapters, each containing introductory remarks, some worked examples, and problems with brief solutions. Chapter headings: The nature of statistical problems, Probabilistic models for random experiments, Further descriptions of random experiments, Some important probability models, Sampling, Estimation, Testing of hypotheses.
Journal of the American Statistical Association 70 (349) (1975), 258.
'Choice Against Chance' is a well-written and interesting account of decision making under uncertainty. Readers need only a knowledge of high school algebra, as the book uses no more than finite mathematics. There are many practical examples illustrating applications of the techniques discussed, and approximately two hundred carefully formulated problems form an important part of the book. Professor Aitchison's text is a welcome contribution to the literature. This book is not an ordinary introductory account of statistical decision theory. The presentation, while utilizing only elementary mathematics, nevertheless deals with abstraction and is sophisticated, and it will challenge the reader. The format the author has chosen is to state eleven practical problems in the first chapter for motivation and illustration, and then to develop their solutions while progressing through the text. As more theory and techniques are mastered, the problems are rediscussed and more complete solutions to them formulated. The problems deal with business decisions, classification, sampling inspection of quality, estimation, treatment selection, negotiating contracts and tariffs, and control. ... I recommend this book for use in introductory decision theory courses, for self-study and for enjoyable reading by statisticians.
3.2. Review by: V D Barnett.
Journal of the Royal Statistical Society. Series A (General) 134 (2) (1971), 242-243.
This is an interesting and readable book. It presents an elementary introduction to the ideas of statistical decision making, with little mathematical prerequisite or formal demand on the reader, and with the material firmly tied to an extensive framework of simple but realistic examples. Designed for the non-mathematician, it achieves a simplicity of treatment by restricting attention to discrete, indeed finite, systems. At this level there is detailed coverage of probability theory, games theory, statistical inference (predominantly Bayesian) and decision theory. The narrative is easy and informal and holds the reader's interest and attention. ... In the preface the author summarizes the aims of the book. These include the desire to present a self-contained treatment stressing the unity of theory and practice, with "ideas and principles motivated by concrete situations and by the need to resolve real problems", with particular emphasis on the construction and analysis of realistic probability models. These aims seem admirably achieved in the main, and result in a readable and interesting review of the subject matter for the initiated. ... This enjoyable book ranks seventeen whoopees by anyone's standards!
3.3. Review by: Paul Switzer.
American Scientist 59 (3) (1971), 379.
As a textbook for a no-prerequisite course in statistical decision theory, the conversational style and multitude of often belaboured examples should make this book popular with students. Teachers, however, may feel somewhat uneasy because of a peculiar distribution of emphasis, an often idiosyncratic terminology, and very occasional but gross errors and misunderstandings.
Mathematical Reviews MR0365779 (51 #2031).
Continuation of 'Solving problems in statistics. I' reviewed above. Chapter headings: Probability descriptions of dependence, Statistical inference and decision-making, The statistical analysis of dependence, Tests of association, Sequential analysis.
Mathematical Reviews MR0408097 (53 #11864).
The authors present a unified framework as well as a unified notation for certain aspects of statistical prediction theory. Their point of view is that any inferential problem whose solution depends on envisaging some future occurrence should be regarded as a problem in statistical prediction analysis. Both the frequentist and Bayesian points of view are presented, with examples illustrating situations where each is appropriate. The book appears to be quite suitable for use as a text for a general course in statistical prediction analysis at a pre-measure-theory level. It will also serve as a good reference book. The list of references, although not complete, contains a variety of representative papers in the field. Using the references in these papers one could probably obtain a rather complete list of papers on prediction. The authors have done an excellent job of using examples to motivate the concepts and methods. Each chapter contains good problems which illustrate the specific techniques of the chapter and further motivate the concepts. ... This book is an important contribution to the field of statistics.
As long ago as 1897 Karl Pearson, in a now classic paper on spurious correlation, first pointed out the dangers that may befall the analyst who attempts to interpret correlations between ratios whose numerators and denominators contain common parts. He thus implied that the analysis of compositional data, with its concentration on relationships between proportions of some whole, is likely to be fraught with difficulty. History has proved him correct: over the succeeding years and indeed right up to the present day, there has been no other form of data analysis where more confusion has reigned and where more improper and inadequate statistical methods have been applied.
6.2. Review by: M Stone.
Journal of the Royal Statistical Society. Series C (Applied Statistics) 36 (3) (1987), 375.
This splendid monograph develops sensible models for distributions on simplexes. Such distributions are of primary interest to geologists, but they also arise naturally in many other areas in the form of 'compositional data', i.e. independent 'samples' whose components are described by their proportions adding to unity. ... [The book's] main strengths are: (i) a scientific basis (definitions and methods are adapted to the unity constraint and motivated by scientific objectives, with the occasional concession to statistical feasibility), (ii) clear exposition, (iii) comprehensiveness (the author has a catholic viewpoint and deploys a wide range of statistical methods). ... The book is pleasantly free from any dogmatic claim that the model represents the last word. The author shows that the newer approach can produce results that 'surprise many geologists' (which is a plus, since many statistical analyses have no surprise value). But I sense that he might be responsive to subject-based criticisms of the applicability of the log transformation to very small proportions.
6.3. Review by: Graham F G Upton.
Mathematical Reviews MR0865647 (88c:62099).
This book is the definitive work in its area and is likely to be the standard reference for decades to come. Compositional data are data recording the breakdown of some item into its component parts. Examples include the composition of rocks into component chemical compounds, the breakdown of a fruit into flesh, skin and stone, and the day of a statistician subdivided by activity type. The final 50 pages of this book are taken up with a complete record of 40 such data sets from very varied contexts. The data sets mostly refer to decomposition of items into between 3 and 5 constituents and give information on up to 100 items.

At the end of each chapter there are several problems, mostly of an open-ended type referring to the supplied data sets and without solutions. The author has developed a considerable amount of associated software and a PC-compatible Basic package called CODA is available ...

Compositional data can only easily be represented diagrammatically in the case of two or three components, using equilateral triangles and barycentric coordinates in the latter case. After discussing these and the general problems caused by the separate items being constrained to sum to 100%, the author argues that what is of interest is the relative magnitudes of the components and hence that one should deal with ratios of proportions. This naturally suggests using the logarithms of ratios, and, with the assumption that these logratios have a normal distribution, we are led to the so-called "logistic normal" distribution. To test for "logistic normality" we therefore test for normality of the distribution of the logratios, and the author devotes a chapter to this part of the analysis. Most of the book assumes that multivariate logistic normality is an acceptable description of the data, but a later chapter considers potential alternative distributions.
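The log-ratio idea the reviewer describes can be sketched in a few lines. This is my own minimal illustration of an additive log-ratio transform under the reviewer's description, not Aitchison's CODA package; the function names and the three-part 'rock' composition are made-up examples.

```python
import math

def alr(composition):
    """Additive log-ratio transform: log(x_i / x_D) for each part but the last."""
    *rest, last = composition
    return [math.log(x / last) for x in rest]

def alr_inverse(logratios):
    """Map log-ratios back to a composition summing to 1."""
    expanded = [math.exp(v) for v in logratios] + [1.0]
    total = sum(expanded)
    return [v / total for v in expanded]

# A hypothetical 3-part composition (proportions summing to 1).
rock = [0.70, 0.20, 0.10]
z = alr(rock)          # unconstrained real values, suitable for normal models
back = alr_inverse(z)  # recovers the original proportions
```

Working in the transformed space removes the unit-sum constraint, which is what lets ordinary multivariate normal methods be applied to the logratios.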
Statistical Concepts and Applications in Clinical Medicine presents a unique, problem-oriented approach to using statistical methods in clinical medical practice through each stage of the clinical process, including observation, diagnosis, and treatment. The authors present each consultative problem in its original form, then describe the process of problem formulation, develop the appropriate statistical models, and interpret the statistical analysis in the context of the real problem. Their treatment provides clear, accessible explanations of statistical methods. The text includes end-of-chapter exercises that help develop formulatory, analytic, and interpretative skills.
7.2. Review by: William F Rosenberger.
Journal of the American Statistical Association 101 (473) (2006), 404.
The rear cover of this text states: "the style of the authors makes for dynamic reading: the reader feels a part of the scientific endeavour which is almost like solving a mystery. This book will be fun to read and useful in practice." I agree with this statement and find this book superbly interesting and unique. When I first saw the title, I envisioned a standard book on the design and analysis of clinical trials. In truth, it is actually a fascinating book on an area of clinical medicine where biostatisticians rarely tread: the medical care of the individual. What does it mean for a patient to be clinically "normal"? How does one predict a congenital disease based on genetic data? How does one select the best available treatment for an individual patient? These are some of the gems in store for the reader. ... The style of the book is a conversation with the reader, which I really enjoyed. ... The book is unique, important and thorough. This is a wonderful book, and I give it my highest recommendation.
JOC/EFR August 2017