Need reviewers for Common Design theory to be submitted to Science journal

Not from my experience, though, especially for a highly prestigious journal like RSOS, which takes only scientific merit into account rather than perceived novelty, impact, importance, and significance. If it were the latter, the decision would have come much faster, like what we experienced with PNAS Nexus, which took only a few days:

Date: 2024-10-02 17:12
Last Sent: 2024-10-02 17:12
Triggered By: Redacted
BCC: Redacted
Subject: PNAS Nexus MS# PNASNEXUS-2024-01365 Decision Notification
Message: October 2, 2024

Title: “A reboot of Richard Owen’s common archetype theory”
Tracking #: PNASNEXUS-2024-01365
Author(s):
Fazale Rana (Reasons to Believe)
Hugh Ross (Reasons to Believe)

Dear Dr. Rana,

Thank you for submitting your manuscript, titled “A reboot of Richard Owen’s common archetype theory,” for consideration at PNAS Nexus. After careful evaluation of your submission, we have determined that PNAS Nexus is not a suitable venue for publication of your manuscript.

All manuscripts undergo an initial evaluation to determine whether the potential novelty, impact, and relevance to the broad scientific community merits further detailed technical review. In the case of your submission, our assessment is that your manuscript does not meet one or more of the principal aims of the journal. On this basis, we expect that the likelihood that detailed review will lead to publication is low.

Go to subsection “Design” in our article for more on this.

Because designers often reuse parts over and over in different types of products, or, in this case, organisms based on a common blueprint. Convergent co-option is an example of this in biology, as shown in the study. As a result, this gives the appearance of common descent, but in reality it is just common design.

Read the sections “Common Design from a Common Designer” and “Reasons for choosing nested patterns” for all the evidence for our mechanisms and how our theory predicts over 80% functional ERVs.

The functional elements of junk DNA are what I am referring to, and how the fine-structure constant is so much more finely tuned than expected.

  1. Synaptic plasticity correlates with cytoskeletal architecture/activities

  2. Dynamic microtubule vibrations correlating with viral activity

Efficiency and speed of survival, reproduction, and adaptation.

We explain in more detail at the start of the section “Reasons for choosing nested patterns”.

Excuse me, what I meant to say is that analogous principles and nested hierarchies appear to be observed in biological systems, based on a number of studies we listed in the section “Reasons for choosing nested patterns”.

Ultimately, the reason we can infer that these reasons, described by human engineers, can be applied to biology is based on observations showing that the designer functions very similarly to humans, according to quantum cognition theory.

I’m trying to understand your grand plan here. What do you mean by “once the manuscript gains approval”? What do you think news agencies will report about your article? “Ah whoopsie, the past 150 years of common ancestry were all wrong according to a paper with a lot of hypothetical future tests”? What would be your ideal scenario here? Anywho, hopping along to some random blurbs from the paper:

I’m not sure what the following even means - what data do you think you could collect in the following case:

Experimental Setup: Select organisms with alleged design flaws, such as human heart issues or the GULO gene deficiency.
Data Collection Methods: Measure survival rates, reproductive success, and adaptation to environmental changes in organisms with and without the alleged design flaws.
Statistical Tests: Use statistical analysis, such as t-tests or ANOVA, to compare the performance metrics between organisms with and without the alleged design flaws.

You are going to measure the survival rates of people without a functioning GULO gene or heart issues? The point of the GULO gene or other pseudogenes isn’t even whether they have function or not, it has to do with nested hierarchies and parsimony.
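
For what it’s worth, the statistics in that protocol are the trivial part; it’s the data that can’t exist. Here is a minimal sketch of what the proposed comparison would boil down to, with every number invented purely for illustration:

```python
# Hypothetical sketch of the paper's proposed comparison (all data invented).
# Compares a "performance metric" between organisms with and without an
# alleged design flaw using an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated survival scores for the two groups -- purely illustrative.
with_flaw = rng.normal(loc=0.70, scale=0.10, size=50)
without_flaw = rng.normal(loc=0.72, scale=0.10, size=50)

t_stat, p_value = stats.ttest_ind(with_flaw, without_flaw)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Running the test is a one-liner; specifying what “organisms without the alleged design flaws” even means, and collecting those measurements, is the part the paper never explains.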

I could pick on random statements throughout that would honestly take any reviewer an obscene amount of time to even bother trying to fact-check. For example, you (or whoever) writes:

Moreover, using the Planck scale Wilkinson Microwave Anisotropy Probe, researchers have shown that the fine-structure constant in physics has remained constant throughout the universe’s history

Where this paper is cited: Constraints on spatial variations in the fine-structure constant from Planck. Your statement is not even what the paper says, nor is it how someone who does Physics on those topics ought to really talk. The paper says boring things like:

At tens of degree angular scales and above, we constrain the rms fluctuations of the fine structure constant to be δα/α0 = (1.34 ± 5.82) × 10^−2 at the 95% confidence level with respect to the standard value α0. We find no evidence for a spatially varying α at a redshift of 10^3.

It doesn’t say the “fine-structure constant has remained constant throughout the universe’s history.” It only constrains the possible values of the constant within a certain range. Wikipedia even has a nice summary article on the time variance of the constants of nature:

There are a lot more papers and results that try to examine the potential time or spatial variance of the fine-structure constant, with one result from the VLT preferring spatial variation over no variation at a pretty high level.

But the point is not the fine-structure constant or a debate about it, but just that here, and I’m guessing other places, the language in your paper is wrong and imprecise and it would take a bajillion hours to try and check your paper carefully. There are so many claims in the paper from dozens of different fields, I wouldn’t touch the paper with a ten-foot pole if I was a science journal.

To jump into this conversation, I’m sorry, where does this prediction of 80% functional ERVs come from? And even then, it’s a pretty post hoc explanation of all of the hundreds of thousands of shared ERVs between species, all fitting in a nice nested hierarchy. This reminds me of how I agree with @T_aquaticus and his criticism of your “aha programmers write code in nested hierarchies, therefore the ‘Designer’ would have done it this way too.” Or this section of the paper:

Incredibly post hoc and I think not very impressive. At least the paper doesn’t deny the very strong evidence for evolution in nested hierarchies, but then it just pretends ah it makes just as much sense for us because of programmers and a divine being would have done it just like programmers.

Anyways, good luck I suppose in your endeavors.

2 Likes

Because the acknowledgement section does not mention that the ‘paper’ was not written by Fazale Rana and Hugh Ross of RTB, but by an unscrupulous charlatan who is using their names without their permission.

As an actual software engineer, I know that it is not true.

While nested hierarchies are often used within a single system to assist in the architecture / design, they are not applicable across multiple discrete systems, even when those systems use shared modules or code.

This is very different from the nested hierarchies of biological systems, which do cross multiple organisms, species and higher taxa, but are not within an individual organism.

2 Likes

From the ENCODE consortium.

Do you think Meerkat has read them?

You are going to have to quote the paper where it describes how microtubules affect mutations in sperm and egg cells. I’m not seeing it.

You also describe the randomness of mutations being an assumption. It isn’t. Experiments have demonstrated that mutations are random with respect to fitness. You need to address the experiments demonstrating random mutations.

And when they do so they mix and match parts in a way that violates a nested hierarchy.

Quite frankly, your argument was already addressed 140 years ago.

I’m not seeing it. Perhaps you could quote that section?

You also mischaracterize junk DNA. At no time in history has any knowledgeable molecular biologist ever said that all non-coding DNA was junk DNA. Even one of the most vocal spokespersons for junk DNA, Larry Moran, states that there is 4 times more functional non-coding DNA than functional coding DNA.

The designs of human engineers don’t fall into a nested hierarchy.

Added in edit:

Just to drive this home, take a look at these three vehicles.

A. [vehicle image]

B. [vehicle image]

C. [vehicle image]

Which two of these would you predict have the most similar engines? According to your claim that engineers use a nested hierarchy, we would predict that A and B have the most similar engines. They don’t. A and C have almost identical engines, a Ford Ecoboost 3.5 liter. B has a V8.

5 Likes

What’s the story here? Do Ross and Rana not know that this paper is being submitted in their name?

Er, that’s not really a “prediction of your theory” though. It’s not like you came up with that number from any specifics of your “theory”, which seems to come from I don’t know where. It’s not from the Bible, but from an idiosyncratic mix of the Bible and science unique to RTB.

And then, why 80%? Out of all the numbers for your “functional” ERV prediction, why did you pick this one? Why not go with this calculation and go for 15%? Is 80% more Biblical than 15%? Does 80% imply a divine creator more than 60% does? Why couldn’t 15% be a remarkable thing for a divine creator? Maybe the percentage could be figured out by how much an average computer programmer adds non-coding commentary to their code, since that’s cited as evidence a divine creator would use nested hierarchies.

2 Likes

Also, @RTBsupporter

Indeed, why 80%.

For starters, most of what I am reading about ERVs and immunity is focused on ERV genes.

From memory, around 90% of ERVs are solo LTRs. This means 90% of ERVs lack the genes necessary to interact with the immune system. According to the human genome paper, there are 127.2 million base pairs across 203,000 ERVs. That’s an average of 627 bases per ERV. Obviously, a lot of them have to be solo LTRs left over from homologous recombination of the matching LTRs that bookend the viral insertion.

Some of these solo LTRs can act as gene promoters, but at 180,000 solo LTRs and roughly 20,000 human genes, that would be about 9 solo LTRs per gene. I highly, highly doubt that any human gene is controlled by 9 solo LTRs, much less nearly all human genes. A quick sanity check of that arithmetic is below.
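
A back-of-the-envelope check of those figures (the ~20,000 gene count is the usual rough estimate, not a number from the genome paper):

```python
# Sanity-check the ERV arithmetic above. The gene count is an assumed
# round figure; the other numbers come from the post.
total_bases = 127.2e6     # ERV-derived base pairs in the human genome
total_ervs = 203_000      # number of ERV elements
solo_ltr_fraction = 0.90  # rough fraction of ERVs that are solo LTRs
human_genes = 20_000      # assumed approximate human gene count

print(total_bases / total_ervs)   # ~627 bases per ERV
solo_ltrs = total_ervs * solo_ltr_fraction
print(solo_ltrs / human_genes)    # ~9 solo LTRs per gene
```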

Example of how solo LTRs form.

2 Likes

Yeah – there’s no way I would send this paper out for review. Along with all of the other issues that have been raised, the core idea connecting QM and biology here is the “self-collapsing wave function”, which can do anything at all one wants at any time to explain any event. It seems to me to be a science-sounding label for “then a miracle happens”.

And then there’s the appeal to “baraminology”, which isn’t remotely science, and the vague and unjustified predictions about convergence (phenotypic? molecular? what’s the claim?).

5 Likes

Two weeks is very fast if they were actually searching for reviewers. (And there’s no way I would consider RSOS to be highly prestigious.)

1 Like

You’re now asking for evidence supporting a specific mechanism. Currently, empirical evidence demonstrating this mechanism in living systems remains inconclusive. However, we offer a prediction: “Dynamic microtubule vibrations correlating with viral activity,” and outline a way to test it in the section, “More falsifiable predictions and methods of testing.”

We’re not claiming randomness in mutations in the way you’re interpreting it. Instead, we demonstrate that scientists assume no conscious agent or designer is guiding mutations (i.e., no external teleology). Our research challenges or questions that assumption, but this won’t be conclusive until scientists test and confirm the prediction I just mentioned.

I agree with you if you’re referring to internal teleology or functional elements related to fitness, or if you’re using the selection-effect definition of function. However, when we claim that the majority of junk DNA is functional, it’s in the context of external teleology and the causal role definition of function, as predicted by Owen’s theory.

That’s not our argument. We’re not claiming that because these human engineers produce nested patterns, nature or this universal designer must do the same. We already understand why and how this designer produces nested patterns from Owen’s framework, without depending on observations of human designers to gain insights. The question is why this designer would choose nested patterns to design organisms over other potential patterns. Since observations show that the natural origin and design of viruses seem to mirror the artificial synthesis and design of viruses, suggesting this designer operates similarly to humans, we explored why human engineers use nested patterns and built a model from those insights, which is explained in the article.

Of course, they know. The paper wouldn’t have been submitted to those journals without their involvement.

To be fair, the specific figure of “80% functional ERVs” comes from the ENCODE results. However, creationists and ID proponents have long claimed that a vast majority of what was labeled “junk DNA” is functional—years before ENCODE—closely aligning with that 80% number, which Owen’s theory predicts, as discussed in the section “Common design from a Common designer” in our article. Again, this prediction is based on external teleology and the causal role definition of function.

Our model doesn’t claim that the non-local process explains every aspect of evolutionary history. We clarify this in the article.

Could you elaborate on why you believe this? I’d like to better understand your position.

I was referring to the initial editorial stages, where they assess whether the paper fits the scope of the journal. For a prestigious publisher like the Royal Society, even if a specific journal isn’t considered prestigious, this early decision-making process shouldn’t take more than two weeks.

Nope, that train does not run in either direction.

So they are involved in your deception?

Let’s be absolutely clear. Based on what you have written here and elsewhere, the facts appear to be:

  • You are the main author of this ‘paper’.
  • You are not Faz Rana or Hugh Ross.
  • Your name is not on this paper.
  • Their names are on this paper.
  • This paper was submitted to RSOS and PNAS Nexus under their names, from one of their e-mail addresses.
  • They are aware of, approve of, and have assisted with your work being submitted under their names.

Is all that correct?

2 Likes

Again, let’s be absolutely clear.

  • The ENCODE consortium announced that 80% of DNA was functional.
  • You believed them.
  • You then claimed that your model predicted something you already thought was true.
  • Your model doesn’t predict that.


Is that all correct?

2 Likes

Two problems here. One is that many evolutionary biologists also long expected that the bulk of the genome was functional. They only changed their minds because the evidence pointed (and still points) strongly to the conclusion that many genomes have large amounts of nonfunctional DNA.

The other is that the ENCODE conclusion that 80% of the genome is functional doesn’t actually support ID arguments that most of the genome should have a function: ‘biochemically active’ does not imply ‘does something useful’.

I didn’t say it made that claim. I said that the self-collapsing wave function can do anything you want it to, including inducing mutations (which can in some instances include quantum effects), horizontal gene transfer, and convergent evolution (both of which are macroscopic processes that don’t depend on quantum effects). There is no mechanism suggested, no limitation on what this process can do, no theoretical machinery describing the probability of different outcomes – this isn’t physics, or indeed science of any kind.

As far as I know, there is no expectation in evolutionary biology about the amount of phenotypic convergent evolution (and parallel evolution, which isn’t the same thing) that we should see. You say it’s ubiquitous, in that there are hundreds of known cases. But that’s out of millions of species. What’s the basis for concluding that this is more than predicted by common descent and natural selection? Convergent molecular evolution, on the other hand, is expected to be very rare for anything more complex than the simplest molecular patterns, simply because there are typically vast numbers of different nucleotide and AA sequences that can perform the same function. And that’s what we see: lots of phenotypic convergence and an almost total absence of complex molecular convergence. In contrast, when a programmer reuses code, they reuse the specific implementation of the code, not just the concept.
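
To put a rough number on “vast”: even ignoring amino-acid substitutions, synonymous codons alone give a modest protein an astronomical number of equivalent encodings. A sketch, where the 100-residue length and the ~3-codons-per-residue average are illustrative assumptions:

```python
# Illustrative only: count DNA sequences that encode the *same* protein,
# from synonymous codons alone. Both numbers below are assumptions chosen
# for the sketch, not measurements.
protein_length = 100    # residues in a modest protein
codons_per_residue = 3  # the genetic code averages ~3 synonyms per amino acid

equivalent = codons_per_residue ** protein_length
print(f"~10^{len(str(equivalent)) - 1} equivalent encodings")  # ~10^47
```

Finding the same complex sequence twice by chance is essentially impossible, which is why shared implementation details point to common descent rather than reuse of a concept.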

The brief section of the paper on convergence doesn’t delve into the relevant issues at all, nor offer any basis for making predictions about the amount or kind of convergence expected under different models of origins. It comes across as a talking point, not a scientific inquiry.

I was referring to the same stages. A paper has to get through the queue of submissions, have someone look at it, decide which editor to assign it to, pass it on to them; then the editor has to get to the paper in their queue, read the whole thing, maybe talk to someone else about it, and reach a decision. The only time I would expect that to take less than a few weeks(*) is when the paper’s topic is very time-sensitive, we’ve spoken with an editor in advance about the paper, and we know they’re interested (e.g. papers on the 2014 Ebola outbreak and some early covid papers). How many papers have you submitted or reviewed previously?

(*) ETA: regardless of the prestige of the journal or publisher. Indeed, if anything I might expect glam journals to take longer since they get so many submissions.

4 Likes

You just reminded me that quantum cognition theory is not the foundation for why we can explore how human engineers use nested patterns to build models, as we discussed in our article. Rather, it is based on observations showing how the natural origin and design of viruses seem to mirror the artificial synthesis and design of viruses, which we illustrated in the section “Common Design from a Common Designer.”

Now, if you still believe your objection holds, could you please explain the examples from our article where viruses are producing and mimicking nested patterns, as well as being used to design organisms? For instance, you mentioned, “Offspring look a lot like their parents, and that is due to the constraints of reproduction,” but I don’t see how this applies to viruses, which don’t reproduce sexually. Nor does it seem likely that they can be fully reconciled with the natural origin and evolution of viruses through common descent.

The first premise is incorrect. Fuz Rana is the main author because he is the CEO and managed the project administration.

Furthermore, I do not meet all the requirements to be listed as an author, as the majority of the information and ideas came from Fuz and Hugh Ross. I didn’t include my name in the acknowledgments section because other scholars had contributed similar ideas to their creation model through earlier publications on their website. Thus, it made more sense to use a group name that encompasses everyone. I also prefer not to have my name associated for privacy reasons. In hindsight, it’s probably more accurate to say that we drafted the paper instead of wrote it, given how collaborative this project was. I will make that adjustment soon.

I only accept the first premise. We accept, rather than believe, the results from ENCODE because they are consistent with and confirm Owen’s theory. While we didn’t predict the specific number (80%), their results still support our theory, as it exceeds the 51% threshold, which Owen’s theory posits for the majority of “junk” DNA being functional. ENCODE’s findings simply make our theory’s predictions appear more impressive.

Under what definition of function and what model of species are you referring to? Our model has different conditions and implications, which we explained in the subsection “Universal Common Designer.” I don’t see how what you’re saying is relevant to our approach.

It seems you may not have fully read the sections “New Support for Owen’s Theory” and “Common Archetype Theory (Extended)” where we discuss mechanisms for both micro and macro processes, such as dark energy and microtubules, that are supported by evidence. We even highlighted experiments confirming predictions that indicate the existence of new physics, and outlined future experiments that could potentially confirm Owen’s theory in the last sections.

Yes, that’s the fundamental issue with common descent theory—it doesn’t appear to be as testable, or as rigorously testable, as common design.

Although the section on convergence is brief, we address the specific points you raise in the “Steps and Methods for Testing Model” section. We provide the necessary details there.

That’s true, but this journal operates differently than most others. It doesn’t have the usual scope restrictions, as it considers those too subjective. Instead, it evaluates articles based on objective peer review, emphasizing methodological rigor, statistical analysis, and the validity of conclusions. The importance and impact of an article are left to individual readers, the scientific community, and, ultimately, posterity.

Moreover, although the peer review process involves external experts, the editorial board takes a much more hands-on approach to managing reviews and ensuring the quality and relevance of published content.

If the editors lack familiarity with the subject matter, they might feel they cannot properly oversee the review process or assess feedback themselves. This is what the journal’s website suggests. Another unconventional journal has agreed and reached similar conclusions:

Dear Fazale,

thank you for submitting your manuscript to Qeios! We have reviewed both the document and the notes you shared with us.

Unfortunately, we must echo some of the concerns raised by the editors of Royal Society Open Science, whose feedback you passed on. While Qeios is a multidisciplinary platform, the specific nature of your manuscript presents challenges in finding reviewers with the right expertise to provide a thorough and detailed evaluation.

If we were to post your manuscript on our platform, there is a high risk that it could attract highly negative public reviews—not because of the intrinsic quality of the work, but due to the nature of the content, which might not resonate with our broader audience.

Rather than advising you to submit to field-specific preprint servers or journals, as the previous editors did, we recommend that you consider uploading your work on non-peer-reviewed platforms instead, like Academia, ResearchGate, or Medium. This would allow you to share your research without the immediate pressure of peer review. We also suggest manually reaching out to specific individuals in your network who are knowledgeable enough to comment on your work and provide valuable feedback. They might even help identify journals where your manuscript could be a good fit.

Finally, if your primary goal is to expand the global reach of your research, achieving this does not necessarily require publication in a journal or a peer-reviewed platform. You can achieve great visibility by choosing the right venue with a large, interested readership :slight_smile:

Quite a few, actually. With some exceptions, they did not take over two weeks to reach a decision on the scope and fit of the journal.

My emphasis:

You earlier wrote that you did predict that specific 80% number. So you cannot be trusted.

It seems the only difference between the scenario I outlined and what actually happened is the substitution of ‘accepted’ for ‘believed’ w.r.t. the ENCODE announcement - which doesn’t make one whit of difference to your malfeasance.

I may respond to the rest later, but since the above illuminates that your claims about authorship have less substance than a month-old cobweb, it’s probably unnecessary.

3 Likes

I’m only going to address one specific point and then leave it be.

Sorry, but you simply did not provide a mechanism by which dark energy or microtubules would generate the necessary wave function collapse: one that causes specific macro events like horizontal gene transfer, rather than, say, causing all living horses to simultaneously turn into polka-dotted rutabagas – also macro events that are highly unlikely under quantum mechanics. There is no theory underlying which events occur via this mechanism.

What you do provide is a vague analogy between the universal wave function (which you assert, incorrectly, is an empirical reality) and Owen’s notion of platonic archetypes, even though they’re conceptually very different. You then treat the two as equivalent, and as the same as a universal designer, all without any real support.

Look, you said you were looking for peer review. That’s what I’m giving you. My background is in both particle physics and biology, including evolutionary biology, and I’ve published extensively in both fields. I’ve also annoyed some researchers by being willing to take creationists seriously. Peer review only works if you are willing to listen to what the reviewers are telling you. And what I’m telling you is that this is not a scientific paper.

6 Likes

Super interesting conversation. This reminded me of a fantastic video Anton Petrov did on quantum consciousness and how superradiance was confirmed in tryptophan networks in microtubules.