There is No “Alexandrian” Text Family


One of the greatest challenges to overcome when discussing textual criticism with the average Christian is breaking through the wall of misconceptions regarding the topic. My personal theory is that if those in the Modern Critical Text camp had more information, they likely would not support the ongoing efforts of textual scholars. One of the most powerful claims that a Critical Text advocate will make is that their text is based on the “earliest and best” manuscripts. It is the kind of jargon, located in the footnotes of study bibles, that compels people to believe that they have a high quality product in their hands.

One of the arguments used to support the claim that modern bibles are based on “better” manuscripts is that they come from a textual family or group that is earlier than the text family from which Reformation era Bibles were made. The argument for the Modern Critical Text, then, is that it is made from manuscripts that represent an earlier form of the text, closer to the original than later manuscripts. There is a major problem with this assertion – these early manuscripts differ so greatly from one another that even the scholars admit they are not a part of a manuscript family. What this practically means is that there is no way to substantiate that the Modern Critical Text represents a uniform version of the text that can be traced back to some authorial manuscript group. This seems to be evidence strong enough to pump the brakes on the whole Critical Text machine, but as we will see, it still charges forward.

The Scholarly Perspective on Text Types Has Changed

Formerly, the idea of “text types” was a major engine for the inner workings of textual criticism and scholarship. The way that variants would be assigned value was in large part based on this text-type formulation.

“The Alexandrian is typically considered the most reliable text-type, with the Western, Caesarean, and Byzantine generally following in that order.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 8.

“If one reads Bruce Metzger’s well-known commentary that accompanies the UBS, the notion of text-types is absolutely essential to his explanation of the history of the New Testament text and, with it, to the practice of textual criticism itself.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 8.

This foundational way of thinking has shifted in the last ten years to acknowledge that the concept of textual families, at least as it applies to the Alexandrian, Caesarean, and Western text-types, is not supported by the data. While the concept of text-types has been retired, the Byzantine texts have been retained as a group.

“One exception here is that the editors still recognize the Byzantine text as a distinct text in its own right. This is due to the remarkable agreement that one finds in our late Byzantine manuscripts.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 9.

Now this is usually dismissed because, as the quoted material notes, these manuscripts are “late.” Yet it is acknowledged by all that the age of the material a text is written on does not necessarily speak to the age of the words on that material. Due to the same data analysis that demonstrated that the Alexandrian text-type was not in fact a text-type, scholars have acknowledged that the Byzantine family is quite old. This is also evidenced by the fact that Byzantine readings were found in the papyri.

“As just noted, the editors still accept a Byzantine group even if they do not view it as a traditional text-type. In fact, they do much more than merely accept it; they have reevaluated it and concluded that it should be given more weight than in the past.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 10.

“But when the CBGM was first used on the Catholic Letters, the editors found that a number of Byzantine witnesses were surprisingly similar to their own reconstructed text.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 10.

“There were fifty-two changes to the critical text. In thirty-six cases the changes were made in conformity with the Byzantine text and in only two cases against the Byzantine text. Further, in 105 of 155 passages where the editors leave the decision open about the initial text, the Byzantine witnesses attest to the reading deemed to be of equal value to variant a (=NA28).”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 11.

What this is saying is that not only were Westcott and Hort wrong, but so was Metzger. When computer-based analysis is applied, the Byzantine text gains a great deal of value, even when only considering the first 1,000 years of textual data.

What Does This Mean for the “Earliest and Best” Manuscripts?

In abandoning the former framework of text-types, the value of the Byzantine group has been elevated in many places to equal or even exceed that of the formerly titled Alexandrian text-type. This is interesting, but not as interesting as what the same analysis revealed about the quality of the so-called Alexandrian family. When the witnesses in the Alexandrian family are compared using computer tools, they share a very low level of agreement in the places of variation. For example, Codex Sinaiticus’ closest relative in the synoptic gospels is the NA28, not any known manuscript. Vaticanus comes in second.

“Sinaiticus’s closest relative is A, the editors’ reconstructed text (i.e., the NA28/UBS5 text). These two agree at 87.9 percent. Next in line is 03, with 84.9 percent, and so on.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 46.

Interestingly enough, if you go to the manuscript clusters tool today, 01 and 03 agree at only 65 percent in the same tool referenced in the quoted material above. What that seems to demonstrate is that the NA28 is the closest relative to both Sinaiticus and Vaticanus by a very large margin, more closely related to each of them than Sinaiticus and Vaticanus are to each other.

If we use this tool on both the synoptic gospels and on John, we find that there aren’t any witnesses used in this analysis that are coherent enough to warrant any sort of direct genealogical connection. In short, the NA28 is the closest related text to Sinaiticus and Vaticanus. Out of the thousands and thousands of manuscripts often cited in debates to support the Critical Text, the closest living relative to the “earliest and best” manuscripts is the scholars’ own reconstructed text.
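The kind of pairwise comparison behind these claims can be sketched in a few lines of Python. To be clear, this is not the actual CBGM software, and the readings below are invented for illustration. It only shows the basic idea: agreement between two witnesses is the percentage of variation units at which they attest the same reading, and a witness’s “closest relative” is simply whichever text scores highest against it.

```python
# Toy illustration of pairwise agreement between witnesses.
# The sigla are real (01 = Sinaiticus, 03 = Vaticanus, A = the editors'
# reconstructed text), but the readings are invented for illustration.

witnesses = {
    "01": ["a", "b", "a", "c", "a", "a", "b", "a"],
    "03": ["a", "a", "a", "c", "b", "a", "a", "a"],
    "A":  ["a", "b", "a", "c", "b", "a", "a", "a"],  # reconstructed text
}

def agreement(x, y):
    """Percentage of variation units at which witnesses x and y agree."""
    rx, ry = witnesses[x], witnesses[y]
    return 100 * sum(a == b for a, b in zip(rx, ry)) / len(rx)

def closest_relative(x):
    """The witness with the highest agreement with x (excluding x itself)."""
    others = [w for w in witnesses if w != x]
    return max(others, key=lambda y: agreement(x, y))
```

With these invented readings, the reconstructed text "A" comes out as the closest relative of both 01 and 03, which mirrors the situation described above: a hypothetical text can out-score every real manuscript.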

How could it be then, that this is still the prevailing theory among Critical Text advocates? Does this not warrant a significant departure from favoring the “earliest and best” manuscripts? Well, no, because according to the scholars, there are no absolute rules to textual criticism.

“As with so much in textual criticism, there are no absolute rules here, and experience serves as the best guide.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 57.

“The significance of this selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened.”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 113.

“However, there are still cases where contamination can go undetected in the CBGM, with the result that the proper ancestor-descendant relationships are inverted”

Peter Gurry & Tommy Wasserman. A New Approach to Textual Criticism. 115.


The scholarly assessment of text-types and the new methods being employed to create modern bibles should tell us a few things. First, it should tell us that the previous era of scholars such as Westcott, Hort, and Metzger were incorrect in their conclusions. Second, it should tell us that the scholars who came after do not share the same confidence in the CBGM to find the original as, perhaps, James White does. Third, it should tell us that anybody still using text-types and Alexandrian priority to argue for the validity of textual variants is severely behind the times. Not only have the scholars abandoned such notions, but the data simply does not support the conclusion that the Alexandrian texts are better than later manuscripts. Since the closest relative to such manuscripts is the text that scholars themselves have created, the data does not appear to be the driving factor for this camp. The driving factor seems to be the notion that Vaticanus must be the best because it is the earliest surviving manuscript we have.

The Received Text position is not one that attempts to reconstruct the original from extant data, because it recognizes a) that the Bible never fell away and therefore does not need to be reconstructed and b) that the data is insufficient to do so. Even so, we can look at the scholars’ own analysis and see quite plainly that even their conclusions are established on a thin layer of presuppositions that are not supported by the data. This data, by the way, is the very same data that serves as the foundation for the footnotes, brackets, and removed words and passages of modern bibles. The Received Text crowd has already rejected such a practice, but the Modern Critical Text camp embraces it with open arms.

The texts that serve as a foundation for modern bibles are not a text-family. They have no widespread attestation in the manuscript tradition. Since this is the case, what are we doing using them to make modern bibles? Is this all it takes for the church to toss out passages such as Mark 16:9-20? The burden of proof is set remarkably low for a passage to be thrown out of the bible as inauthentic. It seems rash that this is all it would take for a Christian to believe that a passage should be stripped from the text of Scripture, and yet here we are. As I often say, if the average Modern Critical Text advocate simply listened to their own scholars, they might express the same concerns as the Received Text crowd. It takes a strong tradition and a priori commitment to discover that a foundational principle is incorrect and still believe the outcome of that principle to be true.

Is the CBGM God’s Gift to the Church?


It is stated by some that the Coherence Based Genealogical Method is a blessing to the church, even gifted to the church by way of God’s providence. I thought it would be helpful to examine this claim. Unfortunately, those who have made such statements regarding the Editio Critica Maior (ECM) and the CBGM have not seemed to provide an answer as to why this is the case. This is often a challenge in the textual discussion. Assertions and claims can be helpful for understanding what somebody believes, but oftentimes fall short in explaining why they believe something to be true. The closest explanation I have heard as to why the CBGM is a blessing to the church is that it has been said that it can detail the exact form of the Bible as it existed around 125AD. Again, this is simply an assertion, and needs to be demonstrated. I have detailed in this article why I believe that claim is not true.

In this article, I thought it would be helpful to provide a simple explanation of what the CBGM is, how it is being used, and the impact that the CBGM will have on Bibles going forward. The discerning reader can then decide for themselves if it is a blessing to the church. If there is enough interest in this article, perhaps I can write more at length later. I will be using Tommy Wasserman and Peter Gurry’s book, A New Approach to Textual Criticism: An Introduction to the Coherence Based Genealogical Method as a guide for this article. 

Some Insights Into the CBGM from the Source Material 

New Testament textual criticism has a direct impact on preaching, theology, commentaries, and how people read their Bible. The stated goal of the CBGM is to help pastors, scholars, and laypeople alike determine, “Which text should be read? Which should be applied?...For the New Testament, this means trying to determine, at each place where our copies disagree, what the author most likely wrote, or failing this, at least what the earliest text might have been” (1, emphasis mine). Note that one of the stated objectives of the CBGM is to find what the author most likely wrote, and when that cannot be determined, what the earliest text might have been.

Here is a brief definition of the CBGM as provided by Dr. Gurry and Dr. Wasserman:

“The CBGM is a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament text” (3). 

The way that this method relates manuscript texts is an adaptation of Karl Lachmann’s common error method, as opposed to manuscript families and text types. This is in part due to the fact that “A text of a manuscript may, of course, be much older than the parchment and ink that preserve it” (3). The CBGM is primarily concerned with developing genealogies of readings and how variants relate to each other, rather than manuscripts as a whole. This is done by using pregenealogical coherence (algorithmic analysis) and genealogical coherence (editorial analysis). The method examines places where manuscripts agree and disagree to gain insight into which readings are earliest. When two manuscripts disagree at the same place, the new method can help in determining one of two things:

  1. One variant gave birth to another, therefore one is earlier
  2. The relationship between two variants is uncertain

It is important to keep in mind that the CBGM is not simply a pure computer system. It requires user input and editorial judgement. “This means that the CBGM uses a unique combination of both objective and subjective data to relate texts to each other…the CBGM requires the user to make his or her own decisions about how variant readings relate to each other” (4,5). That means that determining which variant came first “is determined by the user of the method, not by the computer” (5). The CBGM is not a purely objective method. People still determine which data to examine using the computer tools, and ultimately what ends up in the printed text will be the decisions of the editorial team.
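This mix of objective and subjective data can be pictured with a small sketch (again, not the actual CBGM software; the variation unit and readings are invented). The computer can report where witnesses agree, but the local stemma, the judgement about which reading gave rise to which, is something the editor records by hand:

```python
# Toy sketch of the CBGM's mix of objective and subjective data.
# The variation unit and readings here are invented for illustration.

# Objective side: the readings attested at one place of variation.
readings = {"a": "he said", "b": "he says"}

# Subjective side: the editor's local stemma for this unit.
# Each reading maps to its judged source; None means the editor judges
# it to have no known source, i.e., to be the earliest reading.
local_stemma = {"a": None, "b": "a"}  # the editor decides b arose from a

def earliest_readings(stemma):
    """Readings the editor judged to have no source, i.e., the earliest."""
    return sorted(r for r, source in stemma.items() if source is None)
```

The point of the sketch is simply that the direction of the arrow in `local_stemma` is user input, not computer output, which is exactly the division of labor the quoted material describes.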

The average Bible reader should know that the CBGM “has ushered in a number of changes to the most popular editions of the Greek New Testament and to the practice of New Testament textual criticism itself…Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (5). Currently, the CBGM has been partially applied to the data in the Catholic Epistles and Acts, and DC Parker and his team are working on the Gospel of John right now. The initial inquiry of this article was to examine the CBGM to determine if it is indeed a “blessing to the church”. In order for this to be the case, one would expect that the new method would introduce more certainty for Bible readers with regard to variants. Unfortunately, the opposite seems to be true.

“Along with the changes to the text just mentioned, there has also been a slight increase in the ECM editors’ uncertainty about the text, an uncertainty that has been de facto adopted by the editors of the NA/UBS…their uncertainty is such that they refuse to offer any indication as to which reading they prefer” (6,7). 

“In all, there were in the Catholic Letters thirty-two uses of brackets compared to forty-three uses of the diamond and in Acts seventy-eight cases of brackets compared to 155 diamonds. This means that there has been an increase in both the number of places marked as uncertain and an increase in the level of uncertainty being marked. Overall, then, this reflects a slightly greater uncertainty about the earliest text on the part of the editors” (7).   

This uncertainty has led “the editors to abandon the concept of text-types traditionally used to group and evaluate manuscripts” (7). What this practically means is that the Alexandrian texts, which were formerly called a text-type, are no longer considered as such. The editors of the ECM “still recognize the Byzantine text as a distinct text form in its own right. This is due to the remarkable agreement that one finds in our late Byzantine manuscripts. Their agreement is such that it is hard to deny that they should be grouped…when the CBGM was first used on the Catholic Letters, the editors found that a number of Byzantine witnesses were surprisingly similar to their own reconstructed text” (9,10).

Along with abandoning the notion that the Alexandrian manuscripts represent a text-type, another significant shift has occurred. Rather than pursuing what has historically been called the Divine Original or the Original Text, the editors of the ECM are now after what is called the Initial Text (Ausgangstext). There are various ways this term is defined, and opinions are split among the editors of the ECM. For example, DC Parker, who is leading the team using the CBGM in the Gospel of John, has stated along with others that there is no good reason to believe that the Initial Text and the Original Text are the same. Others are more optimistic, but the 198 diamonds in Acts and the Catholic Letters may serve as an indication of whether this optimism is warranted by the data. The diamonds indicate a place where the reading is uncertain in the ECM.

The computer-based component of the CBGM is often sold as a conclusive means of determining the earliest, or even original, reading. This is not true. “At best, pregenealogical coherence [computer] only tells us how likely it is that a variant had multiple sources of origin rather than just one…pregenealogical coherence is only one piece of the text-critical puzzle. The other pieces – knowledge of scribal tendencies, the date and quality of manuscripts, versions, and patristic citations, and the author’s theology and style – are still required…As with so much in textual criticism, there are no absolute rules here, and experience serves as the best guide” (56, 57. Emphasis added).

In the past it has been said that textual criticism was trying to build a 10,000 piece puzzle with 10,100 pieces. This perspective has changed greatly since the introduction of the CBGM: “we are trying to piece together a puzzle with only some of the pieces” (112). Not only does the CBGM not have all the data that has ever existed, it is only using “about one-third of our extant Greek manuscripts…The significance of this selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). Further, the CBGM is not omniscient. It will never know how many of the more complex corruptions entered into the manuscripts, or the backgrounds and theology of the scribes, or even the purpose for which a manuscript was created. “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That means it is likely that there will be readings produced by the CBGM that were not original or earliest, but that will be mistakenly treated as such. “We do not want to give the impression that the CBGM has solved the problem of contamination once and for all. The CBGM still faces certain problematic scenarios, and the loss of witnesses plagues all methods at some point” (115).

One of the impending realities that the CBGM has created is that there may be a push for individual users, Bible readers, to learn how to use and implement the CBGM in their own daily devotions. “Providing a customizable option would mean creating a version that allows each user to have his or her own editable database” (119,120). There will likely be a time in the near future when the average Bible reading Christian will be encouraged to understand and use this methodology, or at least pastors and seminarians will. For somebody who does not have the time or ability to do this, this could be extremely burdensome. Further, the concept of a “build your own Bible” tool seems like a slippery slope, though it is a slope we are already sliding down, as some make their own judgements on texts in isolation from the general consent of the believing people of God.


Since the CBGM has not been fully implemented, I suppose there is no way to say with absolute confidence whether or not it is a “blessing to the church”. I will say, however, that I believe the church should be the one to decide on this matter, not scholars. It seems that the places where the CBGM has already been implemented have spoken rather loudly on the matter in at least 198 places. Hopefully this article has been insightful, and perhaps has shed light on the claims that many are parroting which say that the CBGM is a “blessing to the church” or an “act of God’s providence”. If anything, the increasing amount of uncertainty that the CBGM has introduced to the previous efforts of modern textual criticism should give us pause, because the Bibles that most people use are based on methodologies that modern scholarship has abandoned.

Helpful Terms

Coherence: The foundation for the CBGM, coherence is synonymous with agreement or similarity between texts. Within the CBGM the two most important types are pregenealogical coherence and genealogical coherence. The former is defined merely by agreements and disagreements; the latter also includes the editors’ textual decisions in the disagreements (133).   

ECM: The Editio Critica Maior, or Major Critical Edition, was conceived by Kurt Aland as a replacement to Constantin von Tischendorf’s well-known Editio octava critica maior. The aim of the ECM is to present extensive data from the first one thousand years of transmission, including Greek manuscripts, versions, and patristics. Currently, editions for Acts and the Catholic Letters have been published, with more volumes in various stages for completion (135).  

Stemma: A stemma is simply a set of relationships either of manuscripts, texts, or their variants. The CBGM operates with three types that show the relationship of readings (local stemmata), the relationship of a single witness to its stemmatic ancestors (substemma), and the relationships of all the witnesses to each other (global stemmata) (138). 
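For readers who think in code, the three stemma types defined above can be pictured as simple directed relationships. This is only an illustrative sketch under invented witness names, and the real CBGM tools are far more involved: a local stemma relates readings at one variation unit, a substemma relates one witness to its ancestors, and a global stemma relates every witness at once.

```python
# Illustrative sketch of the three stemma types; witness names are invented.

# Local stemma: how readings at one variation unit relate (b and c from a).
local_stemma = {"a": None, "b": "a", "c": "a"}

# Substemma: a single witness and its stemmatic ancestors.
substemma = {"W3": ["W1", "W2"]}

# Global stemma: every witness mapped to its immediate ancestors.
global_stemma = {"W1": [], "W2": ["W1"], "W3": ["W1", "W2"]}

def all_ancestors(stemma, witness):
    """Every witness reachable upward from `witness` in a global stemma."""
    seen = set()
    stack = list(stemma[witness])
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(stemma[w])
    return seen
```

In this toy global stemma, walking upward from W3 reaches both W1 and W2, which is the kind of relationship the global stemma is meant to display for the whole tradition.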

Has the CBGM Gotten Us to 125AD?


So it has been said that the CBGM has been able to “get us to 125AD” as it pertains to the New Testament manuscripts with its analysis – or at least in Luke 23:34. Anybody who makes such a claim clearly has no working understanding of the Münster Method, or at least is choosing to use an invisible rod to bash people over the head. In any case, I thought it would be helpful to examine some potential weaknesses in the methodology in a series of articles. To begin, I thought I would discuss the reality that the CBGM is still in need of critical analysis. Dr. Peter Gurry, in his work A Critical Examination of the Coherence Based Genealogical Method, part of the Brill academic series New Testament Tools and Studies, writes, “Despite the excitement about the CBGM and its adoption by such prominent editions, there has been no sustained attempt to critically test its principles and procedures” (2).

My advice to any who believe such a bold claim that the CBGM can “get us to 125AD” is to put on their discernment ears and wait until 2032, when the effort can be accurately examined in full. If its use in analyzing the Catholic Epistles is any indication of the kind of certainty it will provide, I direct the reader to open their Nestle-Aland 28th Edition, if they own one, and examine the readings marked with a black diamond. It should be loudly noted that the methodology of the CBGM has not been fully examined, and I agree with Dr. Gurry when he writes, “If the method is fundamentally flawed, it matters little how well they used it” (4).

The CBGM and the Initial Text

Before the Christian church preemptively buys into this method wholesale, it is important to first recognize that there is not uniform agreement, even this early in the implementation process of the CBGM, that this methodology will result in establishing what is being called the Initial Text. Bengt Alexanderson, in his work Problems in the New Testament: Old Manuscripts and Papyri, the New Coherence-Based-Genealogical Method (CBGM) and the Editio Critica Maior (ECM), writes, “I do not think the method is of any value for establishing the text of the New Testament” (117). What should be noted loudly, for those that are falling asleep, is that a significant shift has occurred under the noses of laypeople in the effort of textual scholarship as it pertains to the New Testament text.

That shift is the abandonment of the search for the Original or Authorial Text in favor of the pursuit of what is being called the Initial Text. Dr. Gurry writes, “These two terms [authorial or original text] have often been used interchangeably and their definition more often assumed than explained. Moreover, that this text was the goal of the discipline remained generally undisputed until the end of the twentieth century. It was then that some scholars began to question whether the original text could or should be the only goal or even any goal at all” (90, bracketed material added). Regardless of whether this is the method one decides to advocate for, let it be said that this is indeed a shift, for better or for worse. Dr. Gurry continues, “Rather than clarify or resolve this debate, the advent of the CBGM has only complicated the matter by introducing an apparently new goal and a new term to go with it: Ausgangstext, or its English equivalent ‘initial text’” (90-91). The problem of defining terms will always gray the bridge between academia and the people, so hopefully this article helps color in the gap.

While the debate rages on among the scholars as to how the Initial Text should be defined, I will start by presenting what might be considered the conservative understanding of it and work from there. Gerd Mink, who is credited with the first use of the term Ausgangstext, employs the term to mean “progenitor” or the “hypothetical, so-called original text” (92). That is to say that the goal of the CBGM, in theory, is to produce the text from which the rest of the manuscripts flowed. This sounds great in theory, but there remains a great distance to cover between saying that the CBGM should produce this Initial Text and saying that the CBGM has produced this Initial Text. In any case, the terminology “Original Text” is not employed in the same way as it was historically, and there is much deliberation as to whether Mink’s proposed definition will win out over those that wish to define it more loosely.

Based on my experience with systems, an appropriate definition of the term as “the text from which the extant tradition originates” (93) is much more precise and descriptive of what the method is capable of achieving. Any talk of whether or not the Initial Text represents the Original Text is merely speculation at this point, and I argue it will remain speculation when the effort is complete. This of course requires a more humble assessment of the capabilities of the CBGM, in that an empirical method is only good for analysis of that which it has in its possession. Which is to say that, methodologically speaking, there is still a gray area of about 300 years or more between the time the earliest extant manuscripts are dated and the time the original manuscripts were penned, depending on how early one dates the earliest complete manuscripts. This is what I have been calling the “Gray Area between the authorial and initial text”, or “The Gray Area” for short. Dr. Gurry has defined it as the historical gap (100). I suspect that this Gray Area will be the focus of all discussion pertaining to the actual value of the ECM by the time 2032 arrives.

The Gray Area Between the Authorial and Initial Text

Any critique of the CBGM is incomplete without a sincere handling of the Gray Area between the Original and Initial Text. Until that conversation has happened, it is rather preemptive to make any conclusions such as, “The CBGM can get us to about 125AD”. Dr. Gurry writes, “The reason is that there is a methodological gap between the start of the textual tradition as we have it and the text of the autograph itself. Any developments between these two points are outside the remit of textual criticism proper. Where there is “no trace [of the original text] in the manuscript tradition” the text critic must, on Mink’s terms, remain silent” (93).

This is understandably a weakness of the methodology itself, if one expects the methodology to produce a meaningful text. Dr. Gurry continues, “Mink’s statement that the initial text “should not necessarily be equated with any actual historical reality” is best read as a way to underscore this point” (93). I propose that it is of greatest importance that Christians begin discussing the Gray Area between the Original Text and the Initial Text now, as it is outside the interest of the text-critic proper. Yes, this discussion is most certainly a theological one, as much as it might pain some, who have buried their heads in the sand to the weaknesses of the CBGM, to admit.

It is important to note that in this conversation over the methodology of the CBGM, there is certainly not uniform agreement on the capabilities of this relatively new method. It is my hope that by bringing this discussion into a more public space, the terminology of Original and Initial Text, and the space between these two points in the transmission of the New Testament, will foster an important conversation as it pertains to the orthodox doctrinal standards of inspiration and preservation. Dr. Gerd Mink indirectly proposes one possible method of analyzing the Gray Area, which would be to demonstrate that there is a significant break between the Original and Initial Text. Perhaps some ambitious doctoral student might take it upon himself to conduct this work, though I wonder if it is even possible to analyze data that does not exist. That is to say that determining the quality and authenticity of the Initial Text might as well be impossible, and any conclusions regarding this text will be assumptive, unless some new component is added to the CBGM which allows such analysis to be done.

The ontological limitations of the CBGM give the discerning onlooker cause to side with the assessments of DC Parker and Eldon J. Epp. Dr. Epp writes, “Many of us would feel that Initial Text – if inadequately defined and therefore open to be understood as the First Text or Starting Text in an absolute sense – suggests greater certainty than our knowledge of transmission warrants” (Epp, Which Text?, 70). Until those that have a more optimistic understanding of the Initial Text produce a methodology that is adequate for testing the veracity of the Initial Text, I see no reason why anybody should blindly trust that the Initial Text can be said to be the same as the Original Text. And that is assuming that the ECM will reveal one Ausgangstext. It is likely, if not inevitable, that multiple initial texts will burst forth from the machine. A general understanding of the quality of the earliest extant texts certainly warrants such a thought, at least.


The purpose of this article is to 1) make a wider audience aware of the difference between the Initial Text and the Original Text and 2) begin the conversation about the Gray Area between the Initial Text and the Original Text. It is best that the church begins discussing this now, rather than in 13 years when the ECM is completed. There are many Christians out there who may be caught completely off guard when they discover that the somewhat spurious claim that the CBGM has “gotten us to 125AD” is, in fact, not the truth. The fact stands that nobody has the capability of making such a precise claim at this point, nor will anybody be able to make such a claim in 2032. It is best, then, that people allow the scholars to finish the work prior to making claims that the scholars themselves are still in dialogue about.

The Most Dangerous View of the Holy Scriptures


Quite often in the textual discussion, it is boldly proclaimed that “our earliest and best manuscripts” are Alexandrian. Yet this statement introduces confusion at the start, because there are sound objections as to whether it is even appropriate to use a term such as “Alexandrian” to describe the “earliest and best manuscripts”, as though they were a text family or text type. There does not seem to be an “Alexandrian” text type, only a handful of manuscripts that have historically been called Alexandrian. This shift is due to the more precise methods now being employed, which allow quantitative analysis to be done on the variant units of these manuscripts. That analysis has demonstrated that the manuscripts called Alexandrian do not meet the threshold of agreement required to be considered a textual family. Tommy Wasserman explains this shift in thought.

“Although the theory of text types still prevails in current text-critical practice, some scholars have recently called to abandon the concept altogether in light of new computer-assisted methods for determining manuscript relationships in a more exact way. To be sure, there is already a consensus that the various geographic locations traditionally assigned to the text types are incorrect and misleading” (Wasserman).

Thus, the only place the name “Alexandrian” might occupy in this discussion is one of historical significance, or possibly to serve in identifying the handful of manuscripts that bear the markers of Sinaiticus and Vaticanus, which disagree heavily among themselves, as the Munster Method has demonstrated (65% agreement between 01 and 03 in the places examined in the Gospels; 26.4% agreement with the Majority Text). So in using the terminology of “Alexandrian”, one is already introducing confusion into the conversation, confusion that represents an era of textual scholarship that is on its way out. Regardless of whether or not it is appropriate to use the term “Alexandrian”, it may be granted that it is a helpful descriptor for the sake of discussion, since the modern critical text in the most current UBS/NA platform generally agrees with at least two of these manuscripts (03 at 87.9% and 01 at 84.9%) in the places examined (See Gurry & Wasserman, 46).
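For readers unfamiliar with where figures like “65% agreement between 01 and 03” come from, the basic calculation is a simple pairwise comparison across variant units. The following is a minimal sketch of that idea, with entirely invented readings and variant-unit numbers; the real collation data behind the published percentages is far larger and more complex.

```python
# Toy sketch (not the actual INTF software) of percentage agreement between
# two witnesses: at each variant unit attested by both, check whether they
# share the same reading. All readings below are invented for illustration.

def agreement(ms_a: dict, ms_b: dict) -> float:
    """Percent agreement over the variant units attested by both witnesses."""
    shared = [unit for unit in ms_a if unit in ms_b]
    if not shared:
        return 0.0
    matches = sum(1 for unit in shared if ms_a[unit] == ms_b[unit])
    return 100.0 * matches / len(shared)

# Hypothetical readings ("a", "b", "c") at numbered variant units.
# Unit 5 is missing from the second witness (a lacuna, say), so it is
# excluded from the comparison.
witness_01 = {1: "a", 2: "b", 3: "a", 4: "c", 5: "a"}
witness_03 = {1: "a", 2: "a", 3: "a", 4: "c", 6: "b"}

print(agreement(witness_01, witness_03))  # 75.0 — 3 of the 4 shared units agree
```

Note that the resulting percentage depends heavily on which variant units are included and how lacunose witnesses are handled, which is one reason published agreement figures are always qualified with “in the places examined.”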

The bottom line is this – the new methods currently being employed (CBGM/Munster Method) are still ongoing, and will be ongoing until at least 2032. So any arguments made on behalf of the critical text are liable to shift as the effort continues and new data comes to light. As a result of this developing effort, any attempt to defend such texts is operating from an incomplete dataset, based on the very methods being defended. Granting the general instability of the modern critical text, at least until the Editio critica maior (ECM) is completed, the conversation itself is likely to change over the next 12 years. In the meantime, it seems that the most productive conversation to have is one that discusses the validity of the method itself, since the dataset is admittedly incomplete.

Is the Munster Method Able to Demonstrate the Claim that the “Alexandrian” Manuscripts Are Earliest and Best?

The answer is no. The reason I say this is the method being employed. I have worked as an IT professional for 8 years, specifically in data analysis and database development, which gives me a unique perspective on the CBGM. An examination of the Munster Method (CBGM) will show that it is insufficient to arrive at any conclusion on which text is earliest. While the method itself is actually quite brilliant, its limitations prevent it from providing any sort of absolute conclusion on which text is earliest, or original, or best. There are several flaws that should be examined, if those who support the current effort want to properly understand the method they are defending.

  1. In its current form, it does not factor in versional or patristic data (or texts as they have been preserved in artwork for that matter)
  2. It can only perform analysis on the manuscripts that are extant, or surviving (so the thousands of manuscripts destroyed in the Diocletian persecution, or WWI and WWII can never be examined, for example)    
  3. The method is still vulnerable to the opinions and theories of men, which may or may not be faithful to the Word of God

So the weaknesses of the method are threefold – it does not account for all the data currently available; it will never have the whole dataset; and even when the work is finished, the analysis will still need to be interpreted by fallible scholars. Its biggest flaw, however, is that the analysis is being performed on a fraction of the dataset. Not only are defenders of the modern critical text defending an incomplete dataset while the work is still ongoing, the end product of the work itself will operate from an incomplete dataset. So to defend this method is to defend the conclusions of men drawn from the analysis of an incomplete dataset of an incomplete dataset. The scope of the conclusions this method will produce is limited to the manuscripts we have today. And since there is an overwhelming bias in the scholarly world toward one subset of those manuscripts, it is more than likely that the conclusions drawn from the analysis will look very similar, if not identical, to the conclusions drawn by the previous era of textual scholarship (represented by Metzger and Hort). And even if these biases are crushed by the data analysis, the conclusions will be admittedly incomplete because the data is incomplete. Further, quantitative analysis will never be free of the biases of those who handle the data. Dr. Peter Gurry comments on one weakness in the method in his book A New Approach to Textual Criticism:

“The significance of the selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). 

Further, the method itself is not immune to error. Dr. Gurry comments that, “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That is to say, after all the computer analysis is done, the scholars making textual decisions can still draw incorrect conclusions on which text is earliest, selecting a later reading as earliest. In the current iteration of the Munster Method, there are already many places where, rather than selecting a potentially incorrect reading, the text is marked to indicate that the evidence is equally strong for two readings. These places are indicated by a diamond in the apparatus of the current edition of the Nestle-Aland text, produced in 2012. There are 19 of these in 1 and 2 Peter alone (see NA28). That is 19 places in just two books of the Bible where the Munster Method has not produced a definitive conclusion from the data. That means that even when the work is complete, there will be thousands of different conclusions drawn on which readings should be adopted in a multitude of places. This is already the case in the modern camp without the application of the CBGM; a great example is Luke 23:34, where certain defenders of the modern critical text have arrived at opposite conclusions on the originality of this verse.
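To see why human judgment remains decisive, it helps to see how little the computer actually decides. The first step of the CBGM, which Gurry and Wasserman call pregenealogical coherence, is essentially a ranking of witnesses by percentage agreement; the direction of the genealogical relationship is supplied by the editors’ decisions at local stemmata. A minimal sketch of that ranking step, with invented witness names and readings (the real method runs over the full ECM collation):

```python
# Toy sketch of "pregenealogical coherence": rank the other witnesses by
# their percentage agreement with a chosen witness. Witness names (W1-W3)
# and readings are hypothetical. Note what this does NOT do: it says nothing
# about which witness is the ancestor of which — that judgment is layered
# on top by fallible editors.

def agreement(ms_a, ms_b):
    shared = [unit for unit in ms_a if unit in ms_b]
    if not shared:
        return 0.0
    matches = sum(1 for unit in shared if ms_a[unit] == ms_b[unit])
    return 100.0 * matches / len(shared)

witnesses = {
    "W1": {1: "a", 2: "a", 3: "b", 4: "a"},
    "W2": {1: "a", 2: "b", 3: "b", 4: "a"},
    "W3": {1: "b", 2: "b", 3: "a", 4: "a"},
}

def closest_relatives(name):
    """Other witnesses sorted by agreement with `name`, highest first."""
    others = [(other, agreement(witnesses[name], witnesses[other]))
              for other in witnesses if other != name]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(closest_relatives("W1"))  # [('W2', 75.0), ('W3', 25.0)]
```

Because contamination (mixture between textual streams) also produces high agreement, a high-ranking “relative” is not necessarily a genealogical ancestor, which is exactly the inversion problem Gurry describes.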

There is one vitally important observation that must be noted when it comes to the current effort of textual scholarship. The current text-critical effort, while the most sophisticated to date, is incapable of determining the earliest reading due to limitations both in the data and in the methodology. A definitive analysis simply cannot be performed on an incomplete dataset. And even if the dataset were complete, no dataset is immune to the opinions of flawed men and women.

An Additional Problem Facing the Munster Method

There is one more glaring issue that the Munster Method cannot resolve. There is no way to demonstrate that the oldest surviving manuscripts represent the general form of the text during the time period in which they are alleged to have been created (3rd – 4th century). An important component of quantitative analysis is securing a sample that is generally representative of the whole population of data. A loosely representative sample may be fine for statistical analysis of a general population, but the effort at hand cannot settle for that kind of generic precision, because it is the Word of God being discussed, which is said to be perfectly preserved by God. That means the sample of data being analyzed must be representative of the whole. The reality is that the modern method is really performing an analysis of the earliest manuscripts, which do not represent the whole, against the whole of the surviving dataset.
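The sampling problem described above can be illustrated with a toy, deterministic example. All the numbers here are invented purely to show the statistical principle: when survival is biased (by climate, persecution, or use), the surviving sample misrepresents the population it came from, no matter how carefully the survivors are analyzed.

```python
# Toy illustration (all numbers invented) of survivorship bias.
# Hypothetical lost population: 1,000 manuscripts, 80% carrying reading "a".
population = ["a"] * 800 + ["b"] * 200

# Suppose survival was biased by circumstance: only 1% of the "a" copies
# survive, but 20% of the "b" copies do (e.g. a dry climate in one region).
survivors = ["a"] * int(800 * 0.01) + ["b"] * int(200 * 0.20)

pop_share = population.count("a") / len(population)   # 0.80
surv_share = survivors.count("a") / len(survivors)    # 8 / 48, about 0.17

print(f"population: {pop_share:.0%} read 'a'; survivors: {surv_share:.0%} read 'a'")
```

A researcher looking only at the 48 survivors would conclude reading “b” was dominant, when in the full population it was the minority reading. Without independent evidence about the survival rates, which for ancient manuscripts we do not have, the bias is invisible from inside the sample.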

It is generally accepted among modern scholarship that the Alexandrian manuscripts represent the text form that the whole church used in the third and fourth centuries. This is made evident when people say things like, “The church wasn’t even aware of this text until the 1500’s!” or “This is the text they had at Nicea!” Yet such claims woefully lack any sort of proof, and in fact, the opposite can be demonstrated to be true. If it can be demonstrated that the dataset is inadequate as it pertains to the whole of the manuscript tradition, or that the dataset is incomplete, then the conclusions drawn from the analysis can never be said to be absolutely conclusive. There are two points I will examine to demonstrate the inadequacy of the dataset and methodology of the CBGM, which disqualifies it from serving as a final authority in its application to the original form of the New Testament.

First, I will examine the claim that the manuscripts generally known as Alexandrian were the only texts available to the church during the third and fourth centuries. This is a premise that must be proved in order to demonstrate that the conclusions of the CBGM represent the original text of the New Testament. In order to make such a claim, one has to adopt the narrative that the later manuscripts represented in the Byzantine tradition were a development, an evolution, of the New Testament text. On this narrative, the later manuscripts which became the majority were the product of scribal mischief and the revisionist meddling of the orthodox church, not a separate tradition that goes back to the time of the Apostles. This requires the admission that the Alexandrian text evolved so heavily that by the Middle period it had transformed into an entirely different Bible, with a number of smoothed-out readings and even additions of entire passages and verses which were received by the church as canonical! Since this cannot be supported by any real understanding of preservation, the claim has to be made that the true text evolved, and that the original remains somewhere in the texts that existed prior to the scandalous revision effort of Christians throughout the ages. This is why there is such a fascination surrounding the Alexandrian texts, and a determination by some to “prove” them to be original (which is impossible, as I have discussed).

That being said, can it be demonstrated that these Alexandrian manuscripts were the only texts available to the church during the time of Nicea? The simple answer is no, and the evidence clearly shows that this is not the case at all. First, the number of patristic quotations of Byzantine readings demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. One can point to Origen as the champion of the Alexandrian text, but Origen wasn’t exactly a bastion of orthodoxy, and I would hesitate to draw any conclusions other than the fact that after him, the church essentially woke up and found itself entirely Arian, or in some other form of heterodoxy as it pertained to Christ and the Trinity. Second, the existence of Byzantine readings in the papyri demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. Finally, Codex Vaticanus, one of the chief exemplars of the Alexandrian texts, is itself proof that other forms of the text existed at the time of its creation. This is chiefly demonstrated by the fact that there is a space the size of 11 verses at the end of Mark where text should be. This space completely interrupts the otherwise uniform format of the codex, which indicates that the scribes were aware that the Gospel of Mark did not end at, “And they went out quickly, and fled from the sepulchre; for they trembled and were amazed: neither said they any thing to any man; for they were afraid.” They were either instructed to exclude the text, or did not have a better copy as an exemplar which included it. In any case, they were certainly aware of other manuscripts that had the verses in question, which points to the existence of other manuscripts contemporary to Sinaiticus and Vaticanus.
Some reject this analysis of the blank space at the end of Mark as it applies to Sinaiticus (which also has a blank space), but additional reasons why this is the case can be offered nonetheless; see this article for more. James Snapp notes that “the existence of P45 and the Old Latin version(s), and the non-Alexandrian character of early patristic quotations, supports the idea that the Alexandrian Text had competition, even in Egypt.” Therefore it is absurd to claim that every manuscript circulating at the time looked the same as these two exemplars, especially considering the evidence that other text forms certainly existed.

Second, I will examine the claim that the Alexandrian manuscripts represent the earliest form of the text of the New Testament. It can easily be demonstrated that these manuscripts do not represent all of their contemporary manuscripts, but that is irrelevant if they truly are the earliest. Yet the current methodology has absolutely no grounds to claim that it is capable of proving such an assertion. Since the dataset does not include the other manuscripts that clearly existed alongside the Alexandrian manuscripts, one simply cannot draw any conclusions regarding the supremacy of those texts. One must jump from the espoused method to conjecture and storytelling to do so. Those defending the modern text often boldly claim that fires, persecution, and war destroyed a great many manuscripts. That is exactly true, and it needs to be considered when making claims regarding the manuscripts that survived and clearly were not copied any further. One has to seriously ponder why, in the midst of the mass destruction of Bibles, the Alexandrian manuscripts were considered so unimportant that they were not used in the propagation of the New Testament, despite the clear need for such an effort. Further, these manuscripts are so heavily corrected by various scribes that it is clear they were not considered authentic in any meaningful way.

Even if the Alexandrian manuscripts represent the “earliest and best”, there is absolutely no way of determining this to be true, due to the simple fact that the dataset from that time period is so sparse. In fact, the dataset from this period only represents a text form that is aberrant, quantitatively speaking. It is evident that other forms of the text existed, and though those manuscripts no longer survive, their form survives in the manuscript tradition as a whole. The fact remains that there are no contemporary data points against which to even compare the Alexandrian manuscripts to demonstrate this claim. Further, there are not enough second century data points to compare the third and fourth century manuscripts against to demonstrate that the Alexandrian manuscripts represent any manuscript earlier than when they were created. It is just as likely, if not more likely, that these manuscripts were an anomaly in the manuscript tradition. The current methods simply are not sufficient to operate on data that is not available. This relegates any form of analysis to the realm of storytelling, which exists in the theories of modern scholars (expansion of piety, scribal smoothing, etc.).


Regardless of which side one takes in the textual discussion, the fact remains that the critiques of the modern methodology as it exists in the CBGM are extremely valid. The method is primarily empirical in form, and empirical analysis is ultimately limited by the data available. Since the available data will never be complete short of a massive 1st and 2nd century manuscript find, the method itself will forever be insufficient to provide a complete analysis. The product of the CBGM can never honestly be applied to the whole of the manuscript tradition. Even if we found 2,000 2nd century manuscripts, there would still be no way of validating that those manuscripts represent all of the text forms that existed during that time. As a result, the end product will simply provide an analysis of an incomplete dataset. It should not surprise anybody if the conclusions drawn from this dataset in 2032 simply look like the conclusions drawn by the textual scholarship of the past 200 years. This being the case, the conversation will be forced into the theological realm. If the modern methods cannot prove any one text to be authorial or original, those who wish to adhere to that text will ultimately be forced to make an argument from faith. This is already being done by those who downplay the significance of the 200 year gap in the manuscript tradition from the first to third centuries and say that the Initial Text is synonymous with the Original Text.

The fact remains that ultimately those who believe the Holy Scriptures to be the divinely inspired word of God will still have to make an argument from faith at the end of the process. Based on the limitations of the Munster Method (CBGM), I don’t see any reason for resting my faith on an analysis of an incomplete dataset which is more than likely going to lean on the side of secular scholarship when it is all said and done. This is potentially the most dangerous position on the text of Scripture ever presented in the history of the world. This position is so dangerous because it says that God has preserved His Word in the manuscripts, but the method being used cannot ever determine which words He preserved.

The analysis performed on an incomplete dataset will be hailed as the authentic word(s) of God, and the conclusions of scholars will rule over the people of God. It is possible that there will be no room for other opinions in the debate, because the debate will be “settled”. And the settled debate will arrive at the conclusion of, “Well, we did our best with what we have, but we are still unsure what the original text said, based on our methods.” This effectively means that one can believe that God has preserved His Word and at the same time have no idea which Word He preserved. The adoption of such conclusions will inevitably result in the most prolific apostasy the church has ever seen. This is why it is so important for Christians to return to the old paths of the Reformation and post-Reformation eras, which affirmed the Scriptural truth that the Word of God is αυτοπιστος, self-authenticating. It is dishonest to say that the Reformed doctrine of preservation is “dangerous” without any evidence of this, especially considering that the modern method is demonstrably harmful.