My rejected manuscript
(rejected by a psi research journal back in 2016)

Correcting misconceptions: Introduction.
My response to the critique from two anonymous reviewers.

It has come to my attention that there are some misunderstandings regarding the mathematical data analysis presented in my paper, "The Balancing Effect in Brain-Machine Interaction," as revealed during the manuscript review process. These misconceptions pertain to the role of the funnel plot of MicroPK data, the use of the Markovian model to replicate it successfully, the information gained from the Rescaled Range Analysis of MicroPK data sequences, and the interpretation of results based on these analyses - essentially, the entire mathematical framework of my investigation.

I intend to address the misunderstood concepts by directly responding to the relevant comments from the two anonymous reviewers, a privilege that the journal had not granted me. In any case, in a field as small as this one, where members are well acquainted with one another's views and working methods, anonymity is almost impossible to maintain.

1. "Actually what she says is that p(1) = p(0) = 0.5 and p(1,0) = p(0,1) > 0.5 -- I find that logically very ugly, because it is obvious that (0,1) and (1,0) can be mapped (in hard-or software) on '1' and the (1,1) and (0,0) on 0 and then one has actually what Fotini keeps repeating is non existing MicroPK." 

My response: Assuming that by p(1,0) the comment means the probability (or frequency) of dyad sequences of the kind '10', and likewise '01', the statement p(1,0) = p(0,1) > 0.5 is wrong. This frequency, as estimated by the Markovian model for the MicroPK database, is approximately 17%. It lies below 0.5, not above it, as this reviewer concludes. So, if we mapped (0,1) and (1,0) onto '1', as suggested, there would be fewer '1' than '0' digits. That is a clear imbalance of bits, not the non-existent MicroPK the reviewer assumed. Yet, my paper shows quite the opposite.
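To make the arithmetic concrete, here is a minimal sketch in Python (my illustration, not code from the paper; the 0.83 self-transition probability is the value discussed under comment #9 below) showing how rarely the alternating dyads occur and what the reviewer's proposed mapping would actually yield:

```python
import numpy as np

rng = np.random.default_rng(0)
P_STAY = 0.83          # assumed self-transition probability, p00 = p11
N = 100_000

# Generate a symmetric two-state Markov chain of hits (1) and misses (0).
bits = np.empty(N, dtype=int)
bits[0] = rng.integers(0, 2)
for i in range(1, N):
    bits[i] = bits[i - 1] if rng.random() < P_STAY else 1 - bits[i - 1]

# Frequency of the alternating dyads '01' and '10' among consecutive pairs.
alternating = np.mean(bits[:-1] != bits[1:])
print(f"frequency of '01'/'10' dyads: {alternating:.2f}")     # ~0.17, not > 0.5

# The reviewer's mapping: (0,1), (1,0) -> '1' and (0,0), (1,1) -> '0'.
mapped = (bits[:-1] != bits[1:]).astype(int)
print(f"proportion of 1s after mapping: {mapped.mean():.2f}")  # ~0.17, imbalanced
```

The mapped sequence carries roughly 17% ones, so the proposed hardware or software remapping produces a gross imbalance of digits, exactly as stated above.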

The error seems to stem from confusion about what the digits '1' and '0' represent in the Markovian model of MicroPK data. Each MicroPK '1' entering the funnel plot represents a 'hit', a successful trial in a study; each MicroPK '0' is a 'failure' (a 'miss'). These are not bits generated by RNGs. [Bösch et al., (2006)] carefully converted all records of MicroPK tests with true RNGs (from 'z-scores', or 'es') into the proportion of hits, 'pi'. The funnel plot presents the size of the study, N, against this proportion of hits, 'pi', i.e., the proportion of 1's.
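For reference, the standard conversion for binary trials at chance level 0.5 - stated here as my assumption of what underlies the database, since the exact formula is not quoted above - reads:

\[
z \;=\; 2\sqrt{N}\,(\pi - 0.5)
\qquad\Longleftrightarrow\qquad
\pi \;=\; 0.5 + \frac{z}{2\sqrt{N}},
\]

so a study's z-score and its proportion of hits carry the same information once N is known.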

Therefore, in the Markovian model, a dyad sequence {(0,1) or (1,0)}, i.e., 'hit after miss' or 'miss after hit', cannot be mapped to a 'hit', to '1'. Similarly, 'hit after hit' and 'miss after miss', i.e., (1,1) and (0,0), cannot be mapped to a 'miss', to '0'. As these digits are not RNG-generated bits, they cannot be manipulated in software or hardware. If such a replacement were introduced into MicroPK data sequences, as suggested, it would be condemned as data manipulation. Neither can such a mapping be implemented in the Markovian model, as it is equally forbidden there.

2. "In any case, I have discovered that the substance of the paper is already published …and so fails to meet the original contribution criterion".

My response: This paper was written specifically to show that there is no discrepancy between my early test results, which indicated the "balancing effect", and my recent results, which indicate no evidence for the MicroPK hypothesis. Naturally, all related previously published results had to be invoked in this paper. As such, my paper maintains its originality.

3. "The paper seeks to explain funnel plots of effect sizes (ES) versus study size, N, in the BSB database. The plots show 1) a convergence close to the null (ES = 0.5) for large N; 2) a significant increase in the dispersion of ES's; 3) an asymmetry that skews to ES > 0.5 for experimental data and ES < 0.5 for control data. The author proposes that a Markovian process (MP) (her 'gluing' effect) accounts for all of these features."

My response: The convergence of the funnel plot of MicroPK data is not close to effect size 0.5 - it is 0.5. Also, the funnel plot refers to the whole meta-analysis database; it's not a funnel plot for large N or a funnel plot for small N. Therefore, the funnel plot converges to 0.5 for the whole MicroPK database. Said differently, the most representative effect size for the MicroPK database is 0.5, or 50%: The chance result of a random process that "true RNGs" exhibit!

The Markovian model successfully simulates two features of the funnel plot: (A) its broadening and (B) the convergence of the effect size (ES) to 0.5.

It is wrong to count the asymmetry of the data scatter among the successes of the Markovian simulation of the meta-analysis funnel plot. Publication bias caused that asymmetry: experimenters neglected to report data, or reported erroneous data triggered by their biases.

Once the Markovian model has generated data points on the funnel plot, one should randomly remove some of them from selected areas, in line with the experimenters' attitudes, thereby introducing publication bias to simulate the asymmetry; a sketch follows below.
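A minimal sketch of this two-step recipe (my own illustration in Python, with assumed study sizes, the assumed 0.83 self-transition probability, and an arbitrary removal rule standing in for publication bias):

```python
import numpy as np

rng = np.random.default_rng(1)
P_STAY = 0.83                    # assumed self-transition probability

def study_proportion(n):
    """Proportion of hits in one simulated study of n Markov-correlated trials."""
    x, hits = int(rng.integers(0, 2)), 0
    for _ in range(n):
        x = x if rng.random() < P_STAY else 1 - x
        hits += x
    return hits / n

sizes = rng.integers(50, 5000, size=300)
points = [(int(n), study_proportion(int(n))) for n in sizes]
props = np.array([p for _, p in points])

# (A) broadening: the scatter exceeds the binomial width sqrt(0.25/n);
# (B) convergence: the proportions still center on 0.5.
print("mean effect size:", round(float(props.mean()), 3))     # ~0.5

# Publication bias, applied a posteriori: drop some small, low-scoring studies.
published = [(n, p) for n, p in points
             if not (n < 500 and p < 0.5 and rng.random() < 0.7)]
print("mean after selective reporting:",
      round(float(np.mean([p for _, p in published])), 3))    # skewed above 0.5
```

The removal rule here is deliberately crude; its only purpose is to show that the asymmetry appears after, not within, the Markovian simulation itself.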

4. "The funnel plot features were thoroughly examined …and debated in detail in published responses. The paper ignores the alternative explanations presented there and one would want some discussion of why they are not viable".

My response: The reviewer implies, as was argued in the debate [1], that the body of MicroPK data with true RNGs should be split into smaller parts, conveniently tagged by a property of the database (the size of the study), and that these parts should be analyzed separately. This introduces a selection of data, and a separate interpretation of the MicroPK hypothesis is then offered for each subdivision of the database, as if it were not the single hypothesis under test.

As discussed in my paper, studies of smaller size (often generating bits at a slower rate) carry a higher risk of biases being introduced during data collection (usually to satisfy the experimenters' expectations). I later published an additional explanation of why errors introduced in small-size studies yield larger deviations from chance than they do in large-size studies [2].

Those who adhere to such database fragmentation instinctively understand or have first-hand experience with the stronger effects expected in small-size MicroPK studies. So, they emphasize the need to treat small-size studies separately. Yes, small studies tend to show higher MicroPK deviations from chance (50%), but this is not because some direct Mind-Matter Interaction manifests better in such small studies.

Besides, both small and large MicroPK studies combine short-duration tests, designed to avoid boredom and tiredness. Integrating many such short-duration tests with Random Number Generators (RNGs) increases the size of the study, which provides a more accurate test of the hypothesis.

The researchers who conducted the BSB meta-analysis of MicroPK introduced practices from medical research while working in the field: the funnel plot and small-study effects, procedures typical in medicine but not suitable for testing the MicroPK hypothesis. In medicine, the effectiveness of a drug often depends on the patient population that the researchers have treated.

My paper presents the analysis of MicroPK data with pure RNGs as a whole, as the question under the microscope is only to investigate "the MicroPK effect with true Random Number Generators". Fragmentation of the database is equivalent to data manipulation, and this is my answer to why such 'alternative approaches' are not viable.

5. "In particular, the author states that she accepts clairvoyance as a psi effect but doesn't address why, say, clairvoyance à la DAT plus publication bias shouldn't offer a compelling alternative to her rather complicated mechanism".

My response: I have suggested that purported psi effects, like telepathy and clairvoyance, are worth scientific investigation, which is far from stating that I have accepted them as real psi effects.

Furthermore, the reviewer prompts the following non-scientific approach: to explain away a purported effect, MicroPK, by invoking another unsubstantiated effect, clairvoyance.

Finally, many scientists have introduced R/S analysis and Markovian processes in their research and do not consider them complicated.

6. "The reasoning (as best I can follow it) goes something like this. A rescaled range analysis (RSA) on a subset of RNG data from the PEAR consortium replication finds a Hurst scaling exponent (H) greater than 0.5, and this can be taken as evidence for PK-MP correlations in the data".

My response: The label "PK-MP correlations" (MP for Markovian process) used by the reviewer is misleading for several reasons.

First and foremost, my paper shows no evidence for a MicroPK effect.

Furthermore, the Rescaled Range analysis, R/S (or RSA, as the reviewer labels it), does not provide evidence of a Markovian process (the 'PK-MP' in the reviewer's tagging). It detects possible long-range correlations - trends of persistent deviations from chance - present in the data sequences, caused not by Mind-Matter Interaction (MicroPK) but by errors in reporting data due to human biases (conformity bias).
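For readers unfamiliar with the method, here is a minimal R/S sketch (my own Python illustration, not the paper's code; the series length, window sizes, and 0.83 persistence are assumptions) showing a persistent sequence yielding an apparent Hurst exponent above 0.5, and shuffling pulling it back toward the random-data level:

```python
import numpy as np

def rs_statistic(series, window):
    """Average rescaled range R/S over non-overlapping windows of one size."""
    values = []
    for start in range(0, len(series) - window + 1, window):
        chunk = series[start:start + window]
        cum = np.cumsum(chunk - chunk.mean())   # cumulative deviations
        r = cum.max() - cum.min()               # their range R
        s = chunk.std()                         # window standard deviation S
        if s > 0:
            values.append(r / s)
    return np.mean(values)

def hurst(series, windows=(16, 32, 64, 128, 256)):
    """Slope of log(R/S) versus log(window size): ~0.5 for random data."""
    y = np.log([rs_statistic(series, w) for w in windows])
    return np.polyfit(np.log(windows), y, 1)[0]

rng = np.random.default_rng(2)

# Persistent sequence: symmetric Markov chain with self-transition 0.83.
x, bits = int(rng.integers(0, 2)), []
for _ in range(8192):
    x = x if rng.random() < 0.83 else 1 - x
    bits.append(x)
bits = np.array(bits, dtype=float)

print("H, persistent chain:", round(hurst(bits), 2))  # clearly above 0.5
rng.shuffle(bits)            # shuffling destroys the trial-to-trial correlations
print("H, shuffled chain:  ", round(hurst(bits), 2))  # back near the random level
```

At these finite window sizes, even exponentially decaying (Markovian) persistence raises the apparent H well above the shuffled baseline, which is the contrast the shuffling test in my paper exploits.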

The Markovian model was introduced as a magnifying lens into the inner machinery of the 'MicroPK process' at the level of single trials. The model has successfully simulated the main features of the database: (A) the broadening of data (indicating the presence of Markovian correlations between trials - trends of persistent deviations from chance), and (B) its convergence to 50% (trends of persistent deviations from chance having equal strength in both directions). Both characteristics refute the MicroPK hypothesis.

The collective evidence, including the R/S analysis, suggests that the conscious or unconscious errors during data collection and reporting introduced those long-range correlations in the data sequences and not some mind-matter interaction.

7. "Modelling the PK-MP shows that the effect can produce funnel plots with a large dispersion". 

My response: My analysis does not suggest the presence of a PK effect. On the contrary, it shows that there is no MicroPK effect.

Regarding the MP part of the same label the reviewer used: the Markovian model can simulate the funnel plot of a database consisting of the proportions of hits generated by a binary process. Such funnel plots can exhibit dispersion either larger or narrower than expected for random data, as well as the scatter of random data itself [see Fig. 4 in ref. 2].

The scatter of data on the MicroPK funnel plot, selected to test one question (whether the mind affects the records of a "true RNG"), is wider than expected for random data. It indicates correlations of various strengths in the trial records (hits or misses) of each study, due to errors introduced during the experiments. The Markovian model represents such correlations with a single overall strength. The most representative score in the MicroPK database is 50%, refuting the MicroPK hypothesis.

8. "The RSA finds a highly significant H for experimental and control data. This is used to argue for PK-MP correlations between trials".

My response: My paper discusses two R/S (RSA) analyses performed separately on different data sequences, yielding different results.

1. The first analysis refers to the FAMMI MicroPK, control, and calibration data [Pallikari, (1998); Pallikari (2001)]. It indicated the presence of weak persistent long-range correlations in sequences of MicroPK data; even weaker correlations were present in control data and none in calibration data generated by RNGs that have passed the test for proper performance.

2. The second analysis was applied to the time series of MicroPK BSB meta-analysis data, arranged into time series as accurately as possible [Pallikari, (2015)].

It showed that the effect sizes in MicroPK tests conducted by 62 principal experimenters over 35 years were not as independent as separately reported experimental scores would be expected to be. The MicroPK and control data time series exhibited persistent long-range correlations (persistent trends), characterizing the mimicking attitude of the experimenters who reported them (tagged as 'conformity bias'). Performing the R/S analysis on shuffled MicroPK and control data sequences destroyed these correlations.

9. "Markovian transition probabilities of p00 = p11 = 0.83 are needed to reproduce the funnel plot dispersion. This is a fantastically large PK effect that would be evident in the data with simpler analyses than the RSA".

My response:

1. The Rescaled Range Analysis (RSA) did not produce these Markovian self-transition probabilities. They were estimated by fitting the confidence-interval curves for correlated data to the funnel plot of the BSB meta-analysis MicroPK scores.

2. The Markovian self-transition probabilities, P00 and P11, represent the average frequency of runs of two identical bits (a bit representing either a 'hit' or a 'miss' trial). They are the probabilities that a 'hit' follows a 'hit' and that a 'miss' follows a 'miss' in the sequence of all MicroPK records in the meta-analysis. Admittedly, such information is not available in practice. Yet, such high frequencies map onto the longer runs of MicroPK 'successes' or 'failures' generated, on average, by errors introduced during tests and triggered by experimenter biases.

3. What exactly are these "simpler analyses" the reviewer refers to? And can these analyses estimate the frequency of runs of size 2, hit-hit & miss-miss, as in #2 above, across all MicroPK test results?

4. Contrary to the reviewer's claims, these high transition probabilities do not indicate psychokinesis, or PK. They reveal biases across all MicroPK tests, such as the conformity type.

5. An average frequency of two 'hit' or 'miss' trials in a row across all MicroPK experiments as high as 83% corresponds to a correlation coefficient of 66% between adjacent time-series MicroPK records [Table 2, Fotini Pallikari, Investigating the Nature of Intangible Brain-Machine Interaction, Journal of Social Sciences and Humanities, 1(5), 499-508, (2015)]; see the short derivation below. Contrary to what this reviewer believes, this is a moderate degree of data correlation. Most importantly, it does not indicate a 'PK effect' but the presence of biases in the MicroPK test records that force the data to scatter more widely than unbiased records would.
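The derivation behind that correspondence is the standard lag-one autocorrelation of a stationary symmetric two-state Markov chain, which I take to be the identity underlying Table 2:

\[
\rho(1) \;=\; p_{00} + p_{11} - 1 \;=\; 0.83 + 0.83 - 1 \;=\; 0.66 ,
\]

so an 83% persistence of like outcomes corresponds exactly to a 66% correlation between adjacent records.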

Setting publication bias aside: if all experimenters consistently biased their MicroPK test records so that the correlation coefficient between adjacent trial scores was 66%, owing to an 83% (rather than the chance-level 50%) persistence of a 'hit' after a 'hit' and a 'miss' after a 'miss', then the scatter of scores on the funnel plot would resemble the current one.

Removing scores below 50% from small-size tests simulates publication bias and reproduces the current asymmetry of the data scatter.

Yet, there is no confirmed universal mechanism of direct mind-matter interaction. The MicroPK test participants do not affect the random process through direct mental interference. The funnel plot converges to 50%, the most representative MicroPK effect size in the meta-analysis.

In conclusion, these "high" transition probabilities proclaimed by the reviewer reveal a mechanism operating deep at the level of trials: a conformity bias that makes hits and misses persist with equal potency rather than occur randomly. The dispersion of scores on the funnel plot expands accordingly, indicating statistical heterogeneity.

10. "However, the trial variance in all the Consortium data, including FAMMI, is at the null expectation. This entirely refutes the PK-MP hypothesis, at least on a scale that would reproduce sufficient funnel plot dispersion".

My response: 

The trial variance in the three consortium studies may well be as expected for random data sequences. These studies also reported effect sizes within the chance range for random data, refuting the MicroPK hypothesis. They stand as points at the top of the funnel plot, at 50% effect size.

There are studies in this meta-analysis of 380 MicroPK tests that cluster unnaturally in regions of its funnel plot beyond the confidence-interval curves for random data. These are falsely reported scores, induced by human biases.

Consequently, the collective variance of all 380 meta-analysis effect sizes is not that of random data; it is, however, still mathematically described by the Markovian process with persistent self-transition probabilities. These faulty records introduced persistent trends into the MicroPK meta-analysis time series.

In conclusion, single studies on the funnel plot may exhibit the trial variance of random data, while the collective variance of the 380 MicroPK studies deviates from randomness, as the sketch below illustrates.
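The distinction can be demonstrated in a few lines (a minimal Python sketch under my Markovian assumptions - p00 = p11 = 0.83, an arbitrary study size and count - not the consortium's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
P_STAY = 0.83                  # assumed persistent self-transition probability
N_TRIALS, N_STUDIES = 1000, 380

def simulate_study():
    """One study: N_TRIALS Markov-correlated binary trials."""
    x, bits = int(rng.integers(0, 2)), []
    for _ in range(N_TRIALS):
        x = x if rng.random() < P_STAY else 1 - x
        bits.append(x)
    return np.array(bits, dtype=float)

studies = [simulate_study() for _ in range(N_STUDIES)]

# Per-study trial variance sits at the null expectation p(1 - p) = 0.25 ...
print("mean trial variance:",
      round(float(np.mean([s.var() for s in studies])), 3))

# ... while the collective variance of the 380 effect sizes far exceeds
# the binomial expectation 0.25 / N_TRIALS for independent trials.
effect_sizes = np.array([s.mean() for s in studies])
print("variance of effect sizes:", round(float(effect_sizes.var()), 5),
      " binomial expectation:", 0.25 / N_TRIALS)
```

Each bit is still marginally a fair coin, so the per-trial variance looks null; only the study-level proportions betray the persistence.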

In addition, contrary to what the reviewer suggests, my analysis of the FAMMI data sequences detected moderate persistent trends in experimental data, similar but weaker trends in control data sequences, and no such trends in RNG calibration data [(2001) A Study of the Fractal Character in Electronic Noise Processes, F. Pallikari, Chaos, Solitons & Fractals, 12, 1499-1507, doi:10.1016/S0960-0779(00)00167-3].

The reviewer argues that there is no evidence to support the PK-MP hypothesis. As already explained, I agree that the MicroPK hypothesis (PK) has no scientific basis. However, the Markovian Process (MP) has successfully reproduced the main features of the broadened MicroPK funnel plot describing the potential mechanisms of data biasing.

11. "An explanation of the RSA consistent with the variance is that short periods of psi-hitting and psi missing among trials causes some internal correlations that are detected by the RSA".

My response: The reviewer uses the term 'psi-hitting' for the situation where the mental effort favoring a specific outcome of a binary experiment was statistically successful. 'Psi-missing' implies a mind-over-matter interaction forcing the binary outcome of the 'true RNG' in the direction opposite to wish and intention.

However, it's important to note that no psi-hitting or psi-missing occurred. The overall results of the three legs of the MicroPK replication consortium, supported by my analysis of the entire MicroPK database, indicate no MicroPK effect. We cannot interpret evidence by introducing an unsubstantiated effect like MicroPK.

The R/S analysis identifies long-range correlations, which are overall persistent trends, rather than short-range ones that the reviewer suggested. Human biases in reporting data, like the conformity type, introduced the correlations, not a nonexistent MicroPK effect. The R/S analysis reveals the overall trend in MicroPK test results that connects all data in all studies, regardless of their characteristics.

The reviewer may refer to the shorter MicroPK sequences generated by one group (FAMMI). Instead of relying on unsubstantiated claims like psi hitting or missing, it's more likely that the different ways of assembling experimental, control, and calibration data have introduced those long-range correlations.

12. "BSB discussed that the effect sizes decrease (more or less linearly) with publication date and there is a simple explanation for it. Most of the trend comes from early, significant studies from Schmidt's laboratory (authors Schmidt and Kelly in the database). These account for about 10% of the BSB studies, but most of the trend. With the studies removed, the RSA exponent loses most of its significance".

My response:  The analysis of MicroPK data presented in my paper considers the database generated in the associated meta-analysis [Bösch et al., (2006)] [4] (tagged as BSB by the reviewer) resulting from careful and honest data selection [3], one that provides valuable information about the MicroPK hypothesis with true RNGs.

The suggestion to remove data from this database, by accusing experimenters of publishing incorrect and unreliable effect sizes, amounts to a form of manipulation. The purpose of this database was to test one specific question: the hypothesis of direct mind-matter interaction with true RNGs.

It is not fair to use unjustified beliefs to dismantle the database in an attempt to explain away results that defy one's expectations. The reviewer's comment highlights the very argument of my paper: how easily experimenters can introduce data manipulation when contributing to, or analyzing, a database.

13. "The author uses PK-MP to provide the distribution. She then claims that the positive/negative asymmetries in the experimental/control funnel plots can be explained from publication bias. But without a viable PK-MP effect, the experimental database asymmetry cannot be reproduced".


My response: The reviewer has invented a non-existent PK-MP effect. I do not refer to a PK-MP because the evidence indicates no MicroPK effect. As for the MP (Markovian Process) tag the reviewer used: the Markovian model successfully simulates the principal characteristics of the MicroPK database, its broadened scatter and its convergence to 50%. There is a Markovian character in the BSB meta-analysis data, introduced by human biases.

The asymmetry of the data scatter on the funnel plots of experimental and control MicroPK data is introduced a posteriori, by removing appropriate data from the already simulated database (not from the original one) in line with the experimenters' publication-bias tendency. This simulates the data that some experimenters decided not to report. The Markovian model itself does not simulate publication bias, as the reviewer suggested it did.

14. "Unfortunately, the control funnel plot asymmetry is apparently an artifact of recording errors in the BSB database. In Figure 4, the asymmetry is evident as a group of studies all at the same N that stretch out to the left of the plot. The 20 "control" studies all derive from a single 1979 paper (Kugel; ref ID 806 in the database). There is just one experimental study from the paper and it has a slightly positive effect size. It would be surprising if Kugel reported 20 controls for one experimental study. Typically there are fewer control studies reported in papers which are why the control database N is only a quarter of the experimental one. I strongly suspect that BSB confused control and experimental labels when creating their database. There are other instances of mislabelling in the control database: in 12 cases the observed and theoretical hit rates are inverted. If the "control" studies from this one paper are removed, there is no significant asymmetry remaining in the funnel plot."

My response: The reviewer accuses the authors of the peer-reviewed MicroPK meta-analysis (Bösch et al., 2006) [4] and questions the quality of their work. However, the reviewer fails to provide precise and concrete evidence to support these accusations, instead relying on a 'strong suspicion' of hypothetical misconduct.

As I mentioned in comment #12, the BSB MicroPK meta-analysis [4] involved careful and honest data selection. Its supporters and detractors examined its results extensively, and the meta-analysis was modified accordingly before publication [3]:

"Bösch et al. did an admirable job searching for and retrieving all available psychokinesis studies, independent of publication status, and used well-justified eligibility criteria for establishing which studies to include in the synthesis".

The asymmetry of the control data scatter is not limited to the region on which the reviewer focuses, around study sizes just above N = 100. It is also present in the asymmetrical spread of data in other areas of the funnel plot, such as at study sizes of N = 1000 and above.

This comment raises a concern. The reviewer suggests deleting data from the database due to suspicions and personal hypotheses without evidence to support their claim. However, adding or removing data from a carefully assembled database is a form of data manipulation.

15. "The author claims that a statistical "balancing effect" is evident when combining the unweighted averages of control and experimental effect sizes, since these average to the null. This comparison no longer holds if the Kugel studies are mislabelled. The Kugel paper is in German and not easily accessible, but it would be advisable to verify the BSB database, and make corrections for other mislabeling (easily identifiable by examining the database) before doing analyses."

My response: The statistical balancing in the MicroPK database between experimental and control data is likely incidental. The two databases have different sizes, and their averages are not directly comparable.

A balance between the deviations of the cumulated z-scores of experimental and control data from their mean appeared years ago and was given the name 'the balancing effect' [5]. The observation was probably due to the random number generator (RNG) drawing numbers from a pool of pre-recorded data, of which I was unaware, together with my adopted testing protocol. This balancing of scores was probably a consequence of the law of large numbers in sufficiently large databases, where the statistical average converges to a null mean shift. Nevertheless, the statistical balancing observed in unweighted experimental and control MicroPK scores is not a hypothesis under verification.

Before the publication of the MicroPK meta-analysis [Bösch et al., (2006)], there was a period of debate between its authors and a circle of researchers opposed to the publication's results [1]. That was the time when the reviewer, who seems well informed about the details of this meta-analysis, should have raised such concerns about data validity.

Had such an objection been raised promptly, the authors of the BSB meta-analysis [4], who are fluent in German, would have spotted any possible errors and corrected their database if required. Since [Bösch et al. (2006)] improved their meta-analysis after the pre-publication debates between supporters and detractors, and considered it correct for publication, the belated critique addressed here is inappropriate.

16. "I am puzzled by the expectancy MP proposal which I find odd for a number of reasons. First, if they were valid, PK-MP + experimental/control publication bias appear sufficient to reproduce the funnel plots in a qualitative sense. It seems ad hoc to add expectancy MP to this mechanism, the only motivation I can see being to lend an appearance of consistency to the RSA on the time ordered BSB data, which I suggest the author has misinterpreted. Second, the research and publication process for studies is long and not sequential. Generally, the research for two separate and successive publications will have overlapped in time. How then does the expectancy apply? Third, since we know that publication bias is a prevalent and serious problem in many disciplines, should the "publication expectancy effect" only apply to PK studies? Wouldn't it also "statistically balance" studies of other psi effects, or any phenomenon with a small effect size? I find the ad hoc way in which it is used in the paper to be unconvincing. It is a central part of the paper, yet its basis in psychology is not reviewed, and the justification for applying the effect to the publication process is not developed at all."

My response:  The term 'expectancy MP' possibly refers to the Markovian process replicating the 'experimenter expectancy effect' mentioned in my paper. It is the well-documented influence of the experimenters' hypotheses or expectations on the results of their research [Rosenthal (2004); Bakker et al. (2011)].

My R/S analysis of the MicroPK time series showed that previous MicroPK publications had influenced the records reported by MicroPK experimenters, yielding a Hurst exponent above 0.5. The arrangement of MicroPK scores by publication date could not include the small number of studies presented at the same conference.

The Markovian Process replicated the main features of the MicroPK funnel plot: its regression to the 50% chance level and its variance broadening. It indicated a similar mimicking attitude among the experimenters at the level of trials - a persistence of hits as well as a persistence of misses.

Let me correct my reviewer further.

a. My paper does not propose, as the reviewer labeled, an 'expectancy Markovian process' (an 'expectancy MP').

b. The 'experimenter expectancy effect' does not introduce a 'statistical balancing' of data.

c. The 'experimenter expectancy effect' is not present solely in the MicroPK studies but is common in almost all areas of scientific inquiry [Rosenthal (2004); Bakker et al. (2011)]. It was, therefore, not used ad hoc in my paper.

d. The 'experimenter expectancy effect' is well reviewed within psychology, too [R. Nuzzo, Nature, vol. 526, pp. 182-185, (2015), see page 184].

e. The suggestion that experimenters are influenced by previously published studies in the same field when reporting their results describes a trend - a tendency. It does not imply that every experimenter has done so. That would be impossible for MicroPK studies presented at the same conference.

17. "In addition, the author draws support for her arguments from a paper by Yu et al. that presents empirical reasons for rejecting the notion that consciousness is responsible for wave function collapse in quantum mechanics. One may accept that position without rejecting PK since we don't know if psi phenomena can be formulated within quantum theory or require an extension of it".

My response:

(A). Regarding the phrase 'accepting Yu et al.'s position without rejecting PK': the reviewer asserts that although consciousness is not necessary to collapse the wavefunction, it can nevertheless perform the task. This assertion is contradictory and fallacious, as explained in (B) below. The paper of Yu et al. does not reject the MicroPK (and PK) hypothesis; the refutation of MicroPK is achieved by the strong experimental evidence against it, as presented in my paper.

(B). Suggesting that consciousness can collapse the wave function (that the mind can directly affect the physical process) is equivalent to suggesting that the mind-matter MicroPK hypothesis is valid. Yet, there is no evidence to support the MicroPK hypothesis. So, neither is consciousness needed to collapse the wavefunction nor can it perform such a feat.

(C). Formulating a (quantum) theory for a phenomenon requires supporting evidence. However, there is no scientific proof for MicroPK, and no theory or extension of a theory makes sense for a nonexistent effect.

Furthermore, the discussion is limited to MicroPK data and does not encompass any 'psi phenomena' as the reviewer erroneously generalized.

References


[1]. Radin, D., Nelson, R., Dobyns, Y., & Houtkooper, J. (2006). Reexamining Psychokinesis: Comment on Bösch, Steinkamp and Boller (2006). Psychological Bulletin, Vol. 132, No. 4, 529-532.

[2]. Pallikari, F. (2023). Understanding the Nature of Psychokinesis. JAnom, Volume 23(1), pp. 103-131.

[3]. As D. B. Wilson and W. R. Shadish admit in their commentary titled "On Blowing Trumpets to the Tulips: To Prove or Not to Prove the Null Hypothesis - Comment on Bösch, Steinkamp, and Boller (2006)", published in Psychological Bulletin, 2006, Vol. 132, No. 4, 524-528:
 "Bösch et al. did an admirable job searching for and retrieving all available psychokinesis studies, independent of publication status, and used well-justified eligibility criteria for establishing which studies to include in the synthesis".

[4]. Bösch, H., Steinkamp, F., & Boller, E. (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators - A Meta-Analysis. Psychological Bulletin, Vol. 132, No. 4, 497-523.

[5]. Pallikari, F. (1998). On the Balancing Effect Hypothesis. In N. Zingrone, M. J. Schlitz, C. S. Alvarado, and J. Milton (eds.), Research in Parapsychology 1993. Lanham, Md. & London: The Scarecrow Press, Inc., pp. 102-103.

 
