Instruments to Analyze Continuous Signals of Psi Activity

by Michael Rossman

 

[Abstract]

         Sharpening the argument of its predecessor, this paper proposes instruments analogous in function and scale to instruments of fundamental research in the physical sciences, designed to generate and measure continuous signals of psi activity in large populations. It discusses their utility and resolving powers, their technological and economic feasibility, several candidate signals, and some social considerations in their development.

******

         Meta-analytic studies of the historical body of psi research have established the existence of anomalous effects more securely, by confirming the relative robustness and stability of their production in experiments conducted with relatively large subject-pools. For example, analysis of 2549 ganzfeld sessions involving well over 1,000 subjects, reported in 41 publications over 22 years, confirms an average success-rate of 33% (+/-2%, with 95% confidence) in "remote sending and viewing" of images, as against 25% expected by chance. [1] Such analyses, serving the project of verification, may be read also as retrospective studies of the feasibility of instruments of fundamental research, analogous in function and scale to instruments in the physical sciences.

[A Prototype Ganzfeld Analyzer]

         To grasp their nature, consider the example of the ganzfeld trials, in which each subject tries many times to guess or discern which of four randomized images is being presented to a well-distanced observer. In effect, the 8% cumulative surplus of success may be understood as the average strength of a particular signal of psi activity educible through this means, as determined by greatly-intermittent spot sampling on a large population. This suggests the prospect of obtaining approximately-continuous measurement of such a signal. Suppose that 2500 ganzfeld sessions could be conducted daily for two years, with participants drawn randomly from a similar pool. As a "null" hypothesis, given the historical basis and consistent flow, we would expect an average surplus-success signal of about 8% to manifest continuously, with 95% confidence intervals ranging from +/-0.1% yearly to +/-3% for six-hour periods.
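         As a rough illustration of these intervals, consider the following sketch (Python), which treats each trial as an independent Bernoulli event with the historical hit-rate and computes 95% confidence half-widths for several periods of the assumed 2,500-trial daily flow. The hit-rate and flow are taken from the discussion above; the simple binomial treatment is an assumption for illustration, not a reanalysis of the published data.

    import math

    P_HIT = 0.33           # assumed mean hit-rate, from the historical record
    TRIALS_PER_DAY = 2500  # assumed daily flow of the prototype analyzer

    def ci95(n_trials, p=P_HIT):
        """Half-width of the 95% confidence interval on the hit-rate,
        in percentage points, for n_trials independent trials."""
        se = math.sqrt(p * (1 - p) / n_trials)
        return 1.96 * se * 100

    for label, hours in [("six-hour period", 6), ("one day", 24),
                         ("one year", 24 * 365)]:
        n = TRIALS_PER_DAY * hours / 24
        print(f"{label:>16}: n = {int(n):>7}, 95% CI ~ +/-{ci95(n):.2f}%")

         Under these assumptions the six-hour half-width comes out between +/-3% and +/-4%, and the yearly one near +/-0.1%, in keeping with the figures above.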

         Such an instrument is a coarse-grained tool, with resolving powers strongly dependent on scale. This example should be able to resolve differences in mean signal strength of 2.6% between women/men or grossly-different magnetic declinations; 3.7% between seasons or lunar phases; 7.6% during sun-spot flareups and for meditators; 25% for Catholic vegetarians; and 62% for brief periods of national concentration, of the "Superbowl" sort investigated by Radin. [2] In general, it should clearly resolve a difference of 6% between signals deriving from different deciles (e.g. the extremes) of any distribution describing the test-flow, and a 10% difference between the overall mean and the signal deriving from 1% of the test-flow as specified by any combination of variables. In principle, these accuracies may be improved to any desired degree (N-fold) by appropriate scaling (N^2-fold).
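         The final claim follows from the square-root behavior of statistical error: the standard error of a subgroup mean shrinks as the square root of the number of trials, so halving the resolvable difference requires four times the flow. A minimal sketch under the same binomial assumptions, using a simplified two-group criterion (twice the 95% half-width of the difference of subgroup means, not necessarily the criterion behind the specific figures above), makes the scaling explicit:

    import math

    P = 0.33  # assumed hit-rate, as above

    def min_resolvable_diff(n_per_group, p=P):
        """Smallest hit-rate difference (in percentage points) distinguishable
        between two subgroups of n_per_group trials each, at ~95% confidence."""
        se_diff = math.sqrt(2 * p * (1 - p) / n_per_group)
        return 1.96 * se_diff * 100

    # Quadrupling the trials halves the resolvable difference:
    for n in [1_000, 4_000, 16_000, 64_000]:
        print(f"n = {n:>6} per group -> resolvable difference ~ "
              f"{min_resolvable_diff(n):.1f}%")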

         In such a concrete fashion, we already can approach the creation of instruments to generate and measure continuous signals of psi activity deriving from large populations. Published estimates imply that the cost of a "classical" ganzfeld analyzer on the scale above would be c. $100 million/year. [3] I suggest below that instruments as useful can be developed for a small fraction of this cost. Yet the project is worth consideration even at the higher price, by comparison with the costs of instruments of fundamental inquiry in astronomy and particle physics, designed to capture signals no less fuzzy nor more primordial than these.


[The Utility of Such Instruments]

         What uses might measurement of such a signal serve? An overall "null" result, with measurements varying as expected around an invariant mean, would suggest that the observable psi activity was not significantly influenced by any global natural variable that itself varied significantly during this period. [4] "Null" results extending to comparisons of signals derived from and within different regions would suggest independence from a wide variety of natural and cultural variables. [5] Correlation of participant data could proceed nearly independently of such factors, with "null" results here suggesting the signal's independence of a wide variety of categorical personal factors. [6] Conversely, any non-null finding of correlation with any natural, cultural, or personal factor would be equally of fundamental interest, inviting and constraining theory and guiding further research.

         Given its correlative capacities, such an instrument amounts to an intellectual spectroscope or filter, useful both to the theoretical project of inquiry into psi phenomena and to the practical project of understanding their human expression. In each perspective, its measurements would serve both to test the predictive fitness of hypotheses, and to suggest them. In both regards, its utility would depend on its resolving power, and on the variety of factors correlatable with its measurement-stream. In principle, given a particular signal, these parameters depend only on its scale, and the design of its participant base and testing circumstances.

[The Feasibility of On-Line Instruments]

         The prospect of such instruments has recently been radically advanced by the development of cybernetic techniques of randomization, test-presentation, data-recording, and correlative analysis. These developments enable the generation of clearer signals, somewhat less influenced by instrumental noise, and radical economies in their large-scale production and analysis. In particular, the interactive medium of the Web can now efficiently support psi-signal generators applying to large, dispersed populations in approximately-real time. A foretaste of this potential is evident already in the public RetroPsychoKinesis Project [7], though its design and index activity seem less than ideal for this purpose.

         The resolving power of an on-line instrument will depend also on the qualities of its index signal. The historical ganzfeld results suggest initial standards for a useful tool. Can signals as strong and no more fuzzy be educed by on-line means? Though other sorts of signals invite evaluation, this likelihood is immediate -- for every relevant feature of the latest-generation "autoganzfeld" methodology can be adapted directly to on-line execution. [8] [9] [10] It seems reasonable to predict that an on-line autoganzfeld analyzer will generate a signal substantially similar to published results, with the strength and scale-dependent resolving power estimated above. [11]
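         To make the adaptation concrete, the following sketch outlines the server-side core of a hypothetical on-line autoganzfeld trial: random selection of one target from a four-image judging set, presentation of the order-randomized set to the receiver, and a running tally of the surplus-success signal. The names and structure are purely illustrative and do not reproduce the published autoganzfeld software; randomization would in practice need a vetted source, and the sender/receiver interaction sketched in note [8] is abstracted here into a single callback.

    import random
    from dataclasses import dataclass

    @dataclass
    class SignalTally:
        hits: int = 0
        trials: int = 0

        def record(self, hit: bool) -> None:
            self.hits += hit
            self.trials += 1

        def surplus(self) -> float:
            """Observed hit-rate minus the 25% chance expectation, in percent."""
            return (self.hits / self.trials - 0.25) * 100 if self.trials else 0.0

    def run_trial(image_pool, receiver_choice, tally):
        """One trial: pick a target from a four-image judging set, present the
        shuffled set to the receiver, and score a hit if the receiver's first
        choice is the target."""
        judging_set = random.sample(image_pool, 4)
        target = random.choice(judging_set)       # image "shown" to the sender
        shown = random.sample(judging_set, 4)     # order-randomized for judging
        choice = receiver_choice(shown)           # callback: receiver's pick
        tally.record(choice == target)

    # Example with a chance-level receiver: the surplus hovers near zero.
    tally = SignalTally()
    pool = [f"img{i:03}.jpg" for i in range(100)]
    for _ in range(1000):
        run_trial(pool, lambda shown: random.choice(shown), tally)
    print(f"surplus after {tally.trials} trials: {tally.surplus():+.1f}%")

         The instrument's business, of course, is to measure any stable departure of that surplus from its chance baseline across a large, continuous flow of such trials.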

[The Variety of Measurable Psi Signals]

         This argument amounts in effect to an existence proof for a class of useful instruments. Other types of psi signal may prove to be more robust and less fuzzy than the autoganzfeld, and/or more economical to educe continuously from large populations by on-line means. One promising candidate is the anticipatory dermal response (ADR) recently documented by Radin. [12] This precognitive reaction could be measured by simple skin galvanometers connected to participants' computers, with software recording and relaying their responses to neutral and disturbing images randomly presented by the home site. As this implementation duplicates the published protocol in all key regards [13], it seems that similar results may be expected. The published statistics suggest that a signal as strong and clear as the autoganzfeld's may be educible; and that the ADR method may prove much more economical for any given degree of resolution. [14]
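         A minimal sketch of the client side of such a trial, under the assumptions just stated, might look as follows. The functions read_galvanometer(), show_image(), and send_record() are placeholders for hardware and network code that would have to be supplied; the sampling window, rates, and field names are illustrative assumptions, not the published protocol.

    import random
    import time

    PRE_STIMULUS_SECONDS = 5   # assumed anticipatory recording window
    SAMPLES_PER_SECOND = 4     # assumed galvanometer sampling rate

    def read_galvanometer() -> float:
        """Placeholder for a real skin-conductance reading from the computer jack."""
        return random.gauss(10.0, 0.5)  # simulated value

    def show_image(category: str) -> None:
        """Placeholder for displaying a neutral or disturbing image on screen."""
        pass

    def send_record(record: dict) -> None:
        """Placeholder for relaying the trial record to the project's home site."""
        print(record)

    def run_trial(trial_id: int) -> None:
        # 1. Record the anticipatory (pre-stimulus) window.
        pre_samples = []
        for _ in range(PRE_STIMULUS_SECONDS * SAMPLES_PER_SECOND):
            pre_samples.append(read_galvanometer())
            time.sleep(1.0 / SAMPLES_PER_SECOND)
        # 2. Only now choose and present the stimulus, so the pre-stimulus
        #    record cannot be influenced through any ordinary channel.
        category = random.choice(["neutral", "disturbing"])
        show_image(category)
        # 3. Relay the pre-stimulus record together with the stimulus category.
        send_record({"trial": trial_id, "category": category,
                     "pre_stimulus": pre_samples})

    for i in range(3):
        run_trial(i)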

         The ADR and autoganzfeld signals are apparently of different species in three regards. One is apparently intra-personal, the other inter-personal and consciously transactional. One derives from "precognition", the other from "clairvoyance". The ADR signal's variation and interpretation are complicated by its somatic expression, which may be influenced directly by many factors of the sorts cited above; for the autoganzfeld signal, such complication follows instead from the conscious processes of interpretation involved in the protocol, subject likewise to multifactorial influence. Whether each difference is profound or superficial is unclear. But as it seems we must assume the former at first, these considerations may bear as strongly as simple economy of resolving-power in the choice of index-signal for a prototype instrument.

         Another sort of psi signal could be measured by registering dermal responses of "receivers" while "senders" focused on randomly-presented images of neutral and disturbing character; another, by registering responses of "receivers" actively trying to connect to "senders" who stuck themselves with pins at randomly-signaled times. [15] As related methodologies have been explored for over thirty years, their gathered statistics invite appraisal. My impression -- that neither strategy offers signals of comparable quality to the autoganzfeld and ADR varieties -- extends to other candidates readily adaptable to on-line use, from forced-choice clairvoyance trials to protocols measuring anomalous relations with radiodecay events. Even so, economy in large-scale use may balance lesser signal quality for some, and others may offer signals particularly important to measure. In principle, it seems that instruments measuring a diverse variety of signals should be developed. This seems a priority in view of modern trends of theory collapsing the traditional categories of psi expression. Any finding that signals of apparently-distinct species vary coordinately would be of fundamental importance. [16]

[The Economics of Large-Scale, On-Line Analyzers]

         The budget for any such instrument will include core costs of development and central administration, nearly independent of scale; and costs proportional to its scale, covering non-automated peripheral administration, on-line service, participant supplies, and participant pay. This table estimates the costs of autoganzfeld and ADR analyzers with the resolving-power indicated above, given certain footnoted assumptions:

 

 

                                autoganzfeld              ADR [23]
participants [17]               25,000                    15,000
per-capita sessions/yr [17]     12                        12
data-points                     3/session; 2,500/day      20/session; 10,000/day
(hours x pay)/session           2.5 x ($0-$20)            1.5 x ($0-$20)
per-cap supplies/yr [18]        $10-$20                   $20-$50
development [19]                $10,000-$50,000           $10,000-$50,000
central administration [20]     $100,000-$200,000         $100,000-$200,000
CORE COSTS                      $110,000-$250,000         $110,000-$250,000
peripheral administration [21]  $12,000-$120,000          $7,000-$72,000
on-line capability [22]         $15,000                   $15,000
supplies [18]                   $250,000-$500,000         $300,000-$750,000
PERIPHERAL COSTS/YR             $277,000-$635,000         $322,000-$837,000
PARTICIPANT PAY/YR              $0-$15,000,000            $0-$4,800,000

         Apart from participant pay, it appears that useful, large-scale instruments can be developed to operate for two-year periods at costs of $700,000-$2,000,000 for the resolving power above; and that their resolution can be improved for costs roughly proportional to the square of the improvement. It seems that their core costs of development and administration will be exceeded or dwarfed by peripheral costs proportional to the participant base. Apart from pay, the latter will likely be dominated by supplies-costs relatively independent of session frequency. [24] A spartan prototype -- with unpaid participants and largely-volunteer administration, a low-maintenance protocol, and no participant supplies-costs beyond cheap software -- might be fieldable for $100,000.
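         As a check on these figures, the two-year ranges can be recomputed directly from the table, treating core costs as one-time and peripheral costs and pay as yearly (a simplifying reading of the table's line items, not an exact accounting):

    YEARS = 2

    budgets = {
        "autoganzfeld": {"core": (110_000, 250_000),
                         "peripheral_per_yr": (277_000, 635_000),
                         "pay_per_yr": (0, 15_000_000)},
        "ADR":          {"core": (110_000, 250_000),
                         "peripheral_per_yr": (322_000, 837_000),
                         "pay_per_yr": (0, 4_800_000)},
    }

    for name, b in budgets.items():
        unpaid_lo = b["core"][0] + YEARS * b["peripheral_per_yr"][0]
        unpaid_hi = b["core"][1] + YEARS * b["peripheral_per_yr"][1]
        paid_hi = unpaid_hi + YEARS * b["pay_per_yr"][1]
        print(f"{name}: unpaid ${unpaid_lo:,}-${unpaid_hi:,}; "
              f"fully paid up to ${paid_hi:,}")

         The unpaid ranges fall roughly between $660,000 and $1,930,000, consistent with the $700,000-$2,000,000 quoted above; the fully-paid upper bounds anticipate the $12-$32 million figures discussed below.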

         The factor of participant pay is central to the economics of such instruments, potentially dwarfing other costs. Justice in this is evident, as participants' time -- 270,000-750,000 hours/year in these examples -- is a substantial social investment, c. 100X as large as researchers'. Such instruments must also be understood as involving participants' investment in equipment (c. $30-$50 million here) and in training (comparable), for they are made feasible only by wide-spread, participatory investment in such a productive base. In this sense, the existential intimacy of participation in the instrument's measuring process extends concretely to its mechanism, in ways that qualify participants unusually to stand in league with researchers as producers and digesters of knowledge.

         Such considerations stand apart from economics in the narrow sense. It appears that two years' operation of a fully-funded, paid-participant analyzer would cost from $12 to $32 million for the resolution above. If the ADR signal proves as clear as the autoganzfeld, the cost might be held to $3 million. [23]

[Social Considerations of Large-Scale Instruments]

         Development of an instrument on this scale (> 10^4 participants) is inherently a social and cultural project, in senses that may be minimized or deliberately cultivated. Narrow considerations of utility suggest that the participant base should be atomized by its recruitment and management, ideally to isolated individuals, to inhibit the development of interacting groups whose dynamics might influence personal "signal sensitivities" in ways complicating signal interpretation. Prudence also suggests that the first, continuous "baseline" measurements be conducted in avoidance of the media attention that might properly attend a large-scale project of serious scientific inquiry into psi, to enable subsequent test of hypotheses that collective states of belief affect psi activity. A centrally-funded implementation consistent with these objectives could readily recruit students individually from a thousand colleges through campus job-placement offices offering modest stipends, with negligible participant interaction and perhaps negligible publicity. [25]

         At the other extreme, a fully-public implementation might proceed without substantial funding by constellating a sufficiently-reputable group of researchers and supporters, developing the core instrument, and recruiting participants by a call for volunteers -- propagated through on-line networks, mediated by the project Web-site, and publicized through other media. [26] Such an enterprise might even support itself completely by a modest membership fee ($20-$60/year in the examples above.) In this model, recruitment could still be targeted to particular populations; and enlistment could be filtered by design criteria (re personal characteristics, geographical distribution, etc.) Stability of the participant pool for the project's duration should be as assurable as in the model above. The well-established "sheep/goat" effect suggests that an instrument staffed by participants motivated to demonstrate psi activity would measure a stronger signal than one staffed simply through economic motivation.
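         The self-supporting fee can be estimated from the same table by dividing yearly peripheral costs, plus half the one-time core costs, by the participant base (no participant pay assumed):

    CORE = (110_000, 250_000)   # one-time, spread over two years

    cases = {
        "autoganzfeld": {"peripheral_yr": (277_000, 635_000), "participants": 25_000},
        "ADR":          {"peripheral_yr": (322_000, 837_000), "participants": 15_000},
    }

    for name, c in cases.items():
        lo = (c["peripheral_yr"][0] + CORE[0] / 2) / c["participants"]
        hi = (c["peripheral_yr"][1] + CORE[1] / 2) / c["participants"]
        print(f"{name}: ${lo:.0f}-${hi:.0f} per participant per year")

         This yields roughly $13-$30 per participant per year in the autoganzfeld case and $25-$64 in the ADR case, bracketing the $20-$60 range cited above.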

         Various implementations permuting elements of these extremes may be practical. In choosing among them, consideration should be given not only to signal utility, economic feasibility, and protection of participant privacy, but to cultural ramifications. In effect, the public development of a large-scale instrument to measure psi activity would constitute a popular announcement that Organized Science had at last turned its attention seriously to inquiry in this domain. The project offers a novel and precious opportunity for constructive convergence between the poles of scientific skepticism and popular credulity, long maintained in such strained opposition. The development of an instrument of this magnitude and official stature would be not simply a logical extension of recent inquiry to a new phase, but a cultural watershed of fundamental consequence. [27]

 

[Footnotes]

[1] Radin, D. 1997. The Conscious Universe. N.Y., N.Y.: HarperCollins; p. 87.

[2] Radin, D., et al., 1996. Anomalous organization of random events by group consciousness. Journal of Scientific Exploration 10:143-168.

[3] Radin, D., op. cit., p. 75.

[4] E.g., lunar phase, ionospheric states, neutrino flux, planetary configurations.

[5] E.g., time of day, season, climate, weather, population and biomass densities, aquatic proximity, geomagnetic declination and anomaly, industrial and pollutant concentrations, electromagnetic radiation fluxes; and such collective, cultural variables as language, macroeconomic conditions, the credibility of "psychic" experiences, mass concentrations of attention, and shifts of public mood.

[6] E.g., age, gender, gender-preference, race, ethnicity, birth-order and family size, education, occupation, dietary habit, dream propensity, spiritual orientation, meditative practice, sheep/goat attitude, psychic and psychedelic experience.

[7] <http://www.fourmilab.ch/rpkp/>

[8] Its protocol of receiver preparation can be duplicated by distributing a simple kit extending common home hardware. Randomization and presentation of images can proceed through the same mechanisms; immediacy of presentation of downloaded images can be economically managed; and the real-time interaction between receiver and sender can be approximated by keyboard or duplicated by nearly-simultaneous voice transmission using readily-available Web programs. (Direct participation of researchers in receiver induction could be approximated by on-line means; but this optional factor seems minor, and may be more than compensated by the comfort of testing in home quarters.) Randomized pairing of dispersed senders/receivers can proceed automatically and with negligible delay, given sufficient levels of participation. Tabulation and correlative analysis of test results can readily be automated to proceed in nearly-real time, offering possibilities of feedback -- and of more sophisticated species of participatory instruments -- beyond the scope of this treatment.

[9]  Some features of security against falsification resist such adaptation, but seem more useful to the project of verification than to measurement. New necessities of protection are apparent, to guard the process of measurement from contamination by outside hackers.

[10] Honorton, C., et al., 1990. Psi communication in the ganzfeld: Experiments with an automated testing system. Journal of Parapsychology 53:281-308.

[11] As an on-line version offers less control of the quality of receiver preparation, it may risk some degradation of signal strength and clarity. But simple refinements, using multiple receivers and/or judges, seem likely to produce even more improvement.

[12] Radin, D., 1996. Unconscious perception of future emotions. J. Consciousness Studies Abstracts, Tucson II conference, U. Arizona/Tucson.
Bierman, D. and D. Radin, 1997. Anomalous anticipatory response on randomized future conditions. Perceptual & Motor Skills 84:689-690.

[13] Except for researchers' proximity to the testing, which seems of minor significance in this case. See also note [9].

[14] The cost per data-point in historical ganzfeld research has been reckoned at 3-4.5 participant hours. [3] On-line adaptation of autoganzfeld protocol could probably cut it to 0.8-1.0 hours/data-point. An on-line ADR analyzer seems likely to cut it to 0.08 hours/data-point. At this price, an ADR signal could be three times as fuzzy as the ganzfeld and still yield an instrument of higher resolution for the same cost. It seems likely instead to be as clear, and ten times cheaper to read.

[15] In terms of the models above, such signals are hybrid species: In their assays of "clairvoyance", somatic clarities in registering reception and in transmission bypass the muddy complexity of certain conscious stages of protocol, at the cost of their own complications.

[16] Conceivably, some such instruments may generate blurred or constant mean signals in consequence of the atemporal character of the measured phenomena.

[17] Multiple sessions are required to reduce the participant pool to manageable size and consistent composition. This dependence on repetitive experience exposes the signal to instrumental corruption. To some degree, wise protocol design can work to counteract both "fatigue" and "learning" effects in the pool. The irreducible effects in the signal will be statistically tractable, and transparent to almost all other species of perturbation.

[18] For the autoganzfeld, participants would require a program enabling voice transmission through the on-line site, and simple supplies for sensory input; for the ADR, a skin galvanometer with computer-compatible jack, cheaply producible on this scale. These estimates are amortized over two years. They include (minimal) provision for initial and follow-up print documents sent to participants, though strict on-line economy might sacrifice this amenity.

[19] Design of testing protocol, participant data-form, correlative factor grid, etc. would proceed through scholarly consortium, facilitated by conferences and research grants ($0-$50K). Adapting or developing software for on-line registration, protocol implementation, and analysis of results should cost <$10K. Development would include stages of field-testing and refinement. For simplicity, these are understood here as the initial stages of the instrument's operation. As with large-scale instruments in astronomy and particle physics, a "shakedown and tuning" phase of some months would precede steady operation.

[20] "Setup" and "shakedown/tuning" phases of six months might require two person-years of central administration. Once operation is routinized, this would involve mainly mass on-line communication; an estimate of one person-year for two years of operation is probably generous. A terminal phase of organizing the data for use by others might require up to one person-year from the core team. (This strict budget excludes all work of interpreting the data made available by the instrument.) The upper estimate assumes funding of $50K/person-year; the lower, that half of the initial and terminal work would be covered by other, academic funding.

[21] The upper estimate assumes one hour of ad-hoc participant tending per fifty sessions (or equivalently, per four participants per year) at $20/hr. Protocols requiring more tending should be refined. The lower assumes one hour per 500 sessions, realizable with a protocol simple enough to trouble only 1/40 of user/machine pairs. As the degree and cost of tending individual participants will vary nearly inversely with per-capita sessions/yr, a protocol involving weekly participation would reduce peripheral administration costs four-fold.

[22] Server hardware, programs, setup, and maintenance (<$8K/yr, amortized over two years) and access service charges (c. $6K/yr).

[23] In these estimates, the ADR signal is assumed to be twice as fuzzy as the autoganzfeld. Its more likely equivalence would cut peripheral costs four-fold, yielding a cheaper instrument than the autoganzfeld even for unpaid participation.

[24] This suggests the relative economy of protocols involving more frequent sessions with smaller pools of more motivated participants, a strategy arguable also on other grounds.

[25] If the yearly stipend at the rate above of $30-$50/month were insufficiently attractive, the monthly pay could be increased up to 4X by increasing session frequency, perhaps with shorter periods of service. The costs and inefficiencies of administering higher rates of participant turnover would vary inversely with the protocol's "user-friendliness." Such an institutionally-constant participant base seems to offer the best chance of maintaining a statistically-homogeneous flow of participants engaged for short and variable periods, as contrasted with a base enrolled for the full period of measurement.

[26] As publicity would be inescapable, its management should be deliberately budgeted and planned with regard not simply to recruitment but to the considerations below.

[27] Such social considerations, and others pertinent to design, are discussed further in my earlier essay “To Measure the Lifted Skirts of Mystery,” in which the nature and utility of such instruments are derived from metaphysical considerations and set in broader context.

