So You’re a Presuppositionalist? Prove It.


Presuppositional Apologetics has been hailed as the “only Biblical defense of the faith” by many who advocate for the method. Yet there is a critical inconsistency in the vast majority of those who champion Greg Bahnsen and Cornelius Van Til, especially when it comes to the text of the Holy Scriptures. Bahnsen provides a starting point in his acclaimed book, Presuppositional Apologetics: Stated and Defended.

“Faith is humble submission to the self-attesting Word of God. Faith accounts God truthful, faithful, and powerful on the basis of His own Word, not requiring to see demonstrable proof or evidence outside of God’s Word that could confirm it as trustworthy” (64). 

Bahnsen proposes in his book that every system must give an account for its claim to intelligibility. The Christian, being regenerated by the Holy Spirit in salvation, has had his mind renewed and operates from the epistemological starting point that God has spoken in His Word. The Christian system provides all of the meaningful conditions for logic, induction, and absolute morality. If a system cannot provide a foundation for such intelligibility, then all claims that follow must be operating from another system that does provide those conditions for intelligibility. They must borrow from the Christian worldview. The goal of the apologist is to first present Biblical truth, and then step into the opposing system and perform an internal critique, demonstrating the foolishness of the opposing system. If the presuppositionalist first begins by assuming neutrality, which is to admit that the other system provides the preconditions for intelligibility outside of the Christian worldview, then they are violating the principles laid out in 1 Peter 3:15 and have lost the argument.

Step 1: Answer Not a Fool According to His Folly

In order for this system to work, one must first presuppose that God exists (natural truth) and that He has spoken (revealed truth). In these last days, He has spoken through Jesus Christ in His Holy Scriptures (Heb. 1:1-2). Therefore, all meaningful presuppositional defenses of the Christian faith must begin with this premise. This presupposition is that the Holy Scriptures are the ultimate standard by which all other standards must be evaluated, because this standard alone provides the aforementioned preconditions for intelligibility. That means that the only standard capable of examining the standard set forth in the Holy Scriptures is the Scriptures themselves. If at any point an external standard is applied to this ultimate standard, then the Scriptures are no longer the ultimate standard. That is why the standard is presupposed, hence the name, presuppositional apologetics.

Based on this starting point, to attempt to defend the Holy Scriptures outside of the Scriptures themselves is to immediately surrender the argument and adopt the folly of the fool.

Step 2: Answer a Fool According to His Folly

A modern trend in the practice of presuppositional apologetics is to defend the Scriptures evidentially. Evidence certainly has its place, as Van Til put forth, but not when it comes to evaluating an ultimate standard. The ultimate standard is presuppositional. Therefore, any attempt to “prove” the ultimate standard sets another standard above it, and the “ultimate standard” is no longer ultimate. In other words, the person has given up his claim to the preconditions of intelligibility, and he himself has become the fool. A great example of this is a situation wherein an atheist attacks the credibility of the ultimate standard by calling into question the ending of the Gospel of Mark. The presuppositionalist has two options here.

The first option is to say, “Well, our earliest and best manuscripts do not contain that passage, so it is not a part of the Scriptures. It is not part of the ultimate standard I am appealing to.” At that point the opponent should ask, “By what standard are you defining the parameters of your ultimate standard?” The presuppositionalist responds, “There are thousands of manuscripts that testify to the New Testament; it is the best-attested document from antiquity. Our earliest and best manuscripts date back to the third and fourth centuries AD, and they do not have the ending of Mark. There is no other book in the history of the world that gets that close to the authorship event.” The opponent continues, “So it is the ultimate standard because it is the best-attested document in antiquity?” The presuppositionalist, realizing his error, responds, “No, it is the ultimate standard because it is God’s Word.” The opponent, noticing that he has won the exchange, presses harder: “So what standard do you use to determine the parameters of the Bible?” The presuppositionalist has lost his right to claim that he can account for the preconditions of intelligibility, because in order to respond, he must apply an external standard to the standard he has set forth as ultimate. He has stepped off his proposed system and borrowed the canons of some other worldview.

The second option is to say, “By what standard are you calling into question the validity of the ending of Mark?” This answer is consistent with presuppositionalism, the first is not. By answering in this way, the presuppositionalist continues to point out that in order to call into question the authority of the Scriptures, one must assume the truths of Scripture in the first place. The opponent may not see this as a valid response, but the presuppositionalist has remained consistent. 


There is an interesting phenomenon among those who adopt a presuppositional apologetic. On one hand, they claim that the Scriptures are the ultimate authority; on the other, they apply external standards to that ultimate authority. If the Scripture truly is the ultimate authority, it must be, well, ultimate. It is one thing to do this in an apologetic scenario – occasionally somebody outmaneuvers a Christian in debate. That has happened to anybody who has engaged in a difficult conversation with a learned atheist. It is an entirely different thing to claim that the Bible is the ultimate standard, and then adopt an entire system which says the Bible must be validated by the standards set forth by that other system. That is to say that the Bible is not the ultimate standard because it is the Word of God; it is the ultimate standard because an individual thinks it is, based on the work of that other system. The standard shifts from objective to subjective, and at that point it is simply a matter of personal preference whether one wants to consider the Bible to be the Word of God.

This forces one to admit that the Bible is not ontologically the ultimate standard; it becomes the ultimate standard only when shaped by the canons of some other system. So it does not follow that presuppositionalists have any sort of meaningful, consistent claim to the preconditions of intelligibility if they adopt the ultimate standard of some other system, like modern textual scholarship. They must borrow from the worldview that says that the Bible is self-authenticating. In order to claim that Mark 16:9-20 is not Scripture, one must apply some external principle to determine that. I wonder, does that standard meet the preconditions for intelligibility?

Count the Cost, Christian

A Sea of Doubt

A component of critical thinking that has unfortunately been lost in the modern period is the ability to analyze the cost of making an argument. Few stop to consider what else must be true if the claim they are making is true. An argument does not exist in a vacuum; it is the product of a system. Claims regarding the Holy Scriptures are often made in this fashion, as though one can adopt a postmodern view of the Scriptures without any impact on the historical doctrines of inspiration and preservation. When one wades into the shallows of an ocean at low tide, he might find that all is right – the water is cool, the current easy, and he feels safe with his feet planted in the soft sand. But every tide has an ebb and flow, and no ocean at low tide ever stays shallow for long. Lying beyond the safety of the shore are an undertow and murky depths, and while one can see his feet in the shallows, with each ebb and flow the water darkens until he feels his feet leave that soft sand.

Making an argument without counting the cost and considering the ends is the same as venturing into the ocean at low tide and believing that it will stay safe and traversable. Underneath every shallow argument is a tide of consequences that will eventually rip the feet out from under those who make them. Such is the case when it comes to the textual discussion. Many arguments seem to work until the tide shifts and carries with it the children playing in the shallows. Nobody truly knows how deep the ocean is until they are separated from the shore. Under every argument is an ocean, and ignoring the tide for the sake of winning an argument only puts those carelessly playing in the sand in danger. And when the tide rises, it should surprise no one when yellow boats inscribed with the names “SS Barth” and “SS Bultmann” come to rescue the floundering children.

Counting the Cost of Playing in the Low Tide

There are some important, practical realities to consider before saying “I want to know what Paul wrote!” The first question that one must ask is, “What method am I using to determine what Paul wrote?” One must take careful inventory of the state of the ground underfoot. Countless Christians have firmly planted their feet on the ground of modern textual scholarship without performing this analysis. They have not counted the cost. So when somebody standing on such ground rejects, let’s say, Mark 16:9-20, they do so without understanding why they are doing it, or where that rejection leads. 

So let’s examine the ground upon which this argument stands. The argument begins with manuscript evidence. In particular, three manuscripts. Two of these manuscripts are said to have been created in the fourth century, and the other in the middle period. It then goes on to explain why these manuscripts are more valuable than the more than 1,000 available manuscripts that contain the ending. It argues that these manuscripts are the best because they are the oldest surviving manuscripts. In the shallows of the low tide, this argument seems good enough, but what lies beneath the surface?

First, like every argument for the modern critical text, it starts from an evidentiary standpoint. Even if the person making the argument has faith, the substance of the argument itself is agnostic to the belief system of the one making it. That, of course, is the appeal. Yet underneath this argument lies a deeper, more foundational starting point. In order to make this kind of argument, one first has to start with the assumption that an element outside of the Bible has the authority to authenticate this reading or that. The authority of the Bible rests on external validation. That is to say that the Bible has no authority in and of itself. It only becomes authoritative when an external element determines it to be so. Further down, this argument makes another assumption: that an empirical standard has the ability to make such a determination. This is not the case, however. Even the earliest manuscripts are still hundreds of years after the authorial event of the New Testament. And since the originals are lost, there is no way to actually prove that Mark 16:9-20 was or wasn’t there in the original manuscript according to the modern critical standard. There is nothing to test the hypothesis against.

So the standard that is being used is not capable of determining originality one way or another. At the deepest level of this argument lies the most fundamental starting point. If the longer ending of Mark is not original, then the people of God had the wrong Bible for over a thousand years, as almost every single manuscript containing Mark 16 has the passage, and the commentaries and quotations of the passage span from the ancient fathers through the post-Reformation period. That is to say that the Bible was not preserved, and the people of God picked the wrong Bible, copied the wrong Bible, and used the wrong Bible. Thus, from this perspective, God may have inspired the original manuscripts, but the people of God never knew what exactly He inspired. One can assert originality from this perspective, but the argument from evidence is completely agnostic to religious views of inspiration and preservation.

At its very core, modern textual criticism is completely agnostic, even hostile, to opinions of faith. So when one makes a completely evidential claim to the authority of a given passage of Scripture, he is doing so from an agnostic starting point. The modern critical method does not care about religious feelings. When somebody adopts this starting point, they hand over the ability to make any sort of claim of divine authorship, because the Author has no authority within this system. This is what lies beyond the shallow tidewaters in the murky depths. All modern text-critical arguments begin by assuming that the Bible requires external validation, and then adopt a method that cannot provide that validation in any meaningful way. Don’t believe me? Find a modern critical scholar who has “found the original.” As to whether or not the shifting modern text is speaking divinely to somebody, that remains in the mind of the subject, the person reading that text. The Bible is not divine because it is the Word of God; it becomes the Word of God by way of external examination or internal subjective experience. In and of itself, the Bible is simply a man-made product that might be close to the original.

The Shallows at High Tide

Isn’t there another option? Is it possible that God chose to preserve His Word imperfectly? Isn’t it possible that God never desired to give His Word to His people completely, or with absolute certainty? This is the argument made by people who are standing on the shore, watching the scholars play in the water. They are only comfortable making the arguments from evidence because they haven’t felt the crushing weight of the ocean bend them in half. They haven’t seen the tops of their feet disappear as the tide rolled in, or felt the darkness of the water reach up to them from the ocean floor. They haven’t considered the breadth of the deep. Or maybe they have, and haven’t realized they are drowning yet. They only know the shape of the ocean from afar, and that is why they are comfortable trusting the opinions of those in the water who say, “The ocean is deep, but not that deep. I wouldn’t go in if I were you. Just take my word for it.” The Christians on the warm sand see the crowd of heads nodding in agreement, and carry on as usual. Yet everybody bobbing in that water knows that there is a 300-foot gap between their science and the ocean floor, and the honest ones will say that they haven’t seen the bottom and never will. They look over to the yellow lifeboats called “SS Barth” and “SS Bultmann” and “SS Vatican” and are grateful that those boats will save any Christian who decides to wander in as deep as they have.

Count the Cost, Christian

The modern critical methodology cannot offer certainty, and it does not claim to offer certainty. It ends where it starts and starts where it ends. It can only do as much as its principles allow, and its principles cannot be applied to manuscripts it does not have. So does the modern critical text proponent have any right to claim whether or not the longer ending of Mark is original or an orthodox corruption? No, they don’t. That would require stretching the data farther than it is able to go on its own, which many do, betraying the ground they stand on. 

Count the cost, Christian. Does the Bible need to be authenticated externally, or is the Bible self-authenticating? If the Bible is not authentic in and of itself, are you willing to pay the price that comes with that? There is a reason the Reformers rallied around Sola Scriptura. They had paid the price for far too long. They had seen the logical end of a Bible imbued with papal authority. If you’re so committed to a Bible that requires external authentication, tell me, whom would you have authenticate it? Are you willing to go down the road to Rome in the name of “Reformation”?

There is another path that avoids the water altogether. Ignore those who say that believing in God’s perfectly preserved Word is “pious and sanctimonious.” That is not the voice of your Shepherd. There is a better path, one that is well traveled, far away from the ocean of uncertainty. It does not start with evidence, but with the fact that God has spoken. It does not rely on popular opinion and the machinations of scholars. The Word of God is an authority in itself. Hurry to the shore, out of the water, and onto the beaten path. The Word of God has not been lost. It does not need to be reconstructed. We know what Paul said because God preserved it. Receive the text that the fathers of your faith received and declare, “Thy word is truth.”

Further Reading

Common Sense Arguments Against the Modern Critical Text


It is easy to get bogged down in conversations about textual variants, manuscripts, and elusive terminology when it comes to any talk about Textual Criticism. These types of conversations prevent the average Christian from entering into the discussion, and so it is common to just side with a favorite pastor or scholar. Fortunately, the conversation is not as complicated as many make it seem. It is true that in order to analyze a variant or read a manuscript, an understanding of the Greek language and a general knowledge of textual scholarship is required. This should cause the average Christian to pause and consider that reality. Should every Christian need to learn Greek and study textual criticism in order to read their Bible? Does that sound like something that God would require for His people to read His Word? Does God require papal or scholarly authority for His people to know which verses are authentic? 

Those who advocate for this have made a serious error in their understanding of the availability of the Scriptures. They have imposed a burdensome standard upon the Holy Scriptures which puts a barrier between the average Christian and New Testament scholarship. This cumbersome gatekeeping tool has informed Christians everywhere that unless they have a PhD in text-critical studies and know Greek, they are simply unequipped to determine which Bible they should read, or which variants within those Bibles can be trusted. This common idea has introduced a neo-papacy within the Protestant church, which tells Christians that they must wait for scholars or pastors or apologists to speak ex cathedra before trusting any verse in their Bible.

Is it Really That Complicated? 

Not really, no. The direction of modern textual criticism has refuted itself, in that it readily admits it cannot find the original text of the New Testament. In other words, its methods have failed. In order to obfuscate this reality, scholars have shifted the effort to finding the Initial Text, which is really just a presuppositional effort to produce a hypothetical (non-existent) archetype from the smattering of Alexandrian manuscripts. This is the first common sense argument against the Modern Critical Text – it doesn’t claim to be the original text, and the methodologies being employed cannot and do not make any certain claims to producing the original text. So for any Christian who wants to “know what Paul wrote,” the modern methods aren’t claiming to provide that kind of certainty. That kind of certainty is available only if a scholar or somebody else speaks authoritatively over a text for the people of God. This being the case, Christians need to pick a pope to decide for them whether Luke 23:34 really is original, because the popes disagree. If the Protestant religion is truly a religion of Sola Scriptura, this simply does not work. It is the same argument the Papists make, only the pope is exchanged for a scholar. If a Christian is okay with maybe knowing what Paul wrote, I present a second common sense argument against the Modern Critical Text.

If you are fond of the argument that claims that the New Testament is the best-attested piece of literature in antiquity, boasting thousands of manuscripts compared to other works such as the Iliad, then the Modern Critical Text fails that criterion. The only text platforms that can use this argument are texts that represent the vast majority of manuscripts, such as a Byzantine-priority or Traditional Text based Bible. The Modern Critical Text is based primarily on two manuscripts, which means that the apologetic which says that we have thousands of manuscripts isn’t true for the Modern Critical Text. One would have to say that the New Testament is supported by fewer than fifty manuscripts, which makes it one of the least-attested books in antiquity. The narrative of transmission presented by the modern critical scholars says that the rest of the thousands of manuscripts were byproducts of scribal smoothing and orthodox revision. In supporting these modern texts, one has to accept the fact that the vast majority of the 6,000 manuscripts we have were the product of scribal revision and orthodox tampering, and do not testify to a preserved Bible. In fact, this is the common opinion of the men and women engaged in actual textual scholarship. This reality transitions quite nicely to the third common sense argument against the Modern Critical Text.

Christians should be confident that the thousands of manuscripts testify to the authentic New Testament when compared and edited together. The fact that these manuscripts were copied so much and were used so heavily throughout time should tell a story that is often brushed over by modern scholarship. The story is that these manuscripts, or a comparison of these manuscripts, were always treated as authentic throughout time. In fact, the manuscripts used by Erasmus represent the majority of manuscripts far more closely than the Modern Critical Text does. While I don’t believe that simply counting manuscript readings produces an original text without any further consideration, it is a good place to start in rejecting the few spurious texts that the Modern Critical Text is based on.

A common sense methodology would also admit that not every manuscript has survived to the present day, and that the testimony of the people of God throughout time should also be considered so that not one word is lost from the Holy Scriptures. In terms of data analysis, the small number of data points that the Modern Critical Text represents should be considered an outlier. So is it the case that a few manuscripts which did not continue in the manuscript tradition are original? Or is it more likely that the vast majority of manuscripts, when compared, represent the original? In order to responsibly represent the case for the Modern Critical Text, one has to tell a tale that the New Testament evolved over time and became so corrupt that nobody alive today really knows what the original said. Thus the modern effort is focused on producing a hypothetical archetype for these outlier texts. The modern method assumes that the thousands of manuscripts are corrupt evolutions of the original text. That leads us to a fourth common sense argument against the Modern Critical Text.

It technically could be true that the handful of early surviving manuscripts represent the original text of the New Testament. Simply counting readings does not necessarily prove originality. There are a handful of readings that the people of God have considered original throughout time that are no longer available in the majority of manuscripts. That is not proof, however, that these now-minority readings were not the majority at one point in time, or that they were not considered authentic despite not being the majority. God never promised to preserve the majority text in every case; He simply promised that He would preserve His Word until the Last Day. The majority text simply testifies to a different text than the “earliest and best,” and the opinions of the people of God throughout time should serve as a way to understand which readings were considered authentic throughout time. The first time this was ever done on a large scale was during the 16th century, when the printing press became available to theologians and scholars.

So the work during the 16th century was taking place while manuscripts were still being used and copied in churches. The common sense argument is that those people had better access to the manuscripts that were circulating and considered authentic than we do today. After the Bible shifted from existing in hand-copied codices to printed editions, the hand-copied manuscripts were used less, and began to be deposited in museums and libraries rather than being used in churches. The texts that the people of God used were no longer in manuscript form, but printed editions of those collated manuscripts. The simple reality is that in the modern period, the manuscripts are artifacts of a time before the printing press. Almost nobody has used a manuscript in a church for centuries, so the evaluation of those manuscripts is difficult without the testimony of the people who actually used them. Thus, the final common sense argument recognizes that the earliest surviving manuscripts are not a standard that anybody would use from the perspective of God preserving His Word.

The final common sense argument is that the manuscripts used in the first effort of textual criticism represent the best form of the New Testament as it was preserved in the manuscript tradition, because until the printing press, these handwritten codices were actually used in churches by the people of God. Compare this to the opinion that a smattering of heavily corrected manuscripts, barely copied past the fourth century, are “earliest and best.” At the time of the first printed editions, the textual scholars of the day had the best insight into the manuscripts that were actually being used, regardless of whether they were majority or minority texts. In order to reject the text-critical efforts of the 16th century, one has to believe that texts were chosen which nobody was using or had ever used. This stands in opposition to history, however, as Erasmus was heavily influenced by readings that would be received by all. Popular opinion often influenced Erasmus in his text-critical decisions. That is the real story behind his inclusion of 1 John 5:7 in his third edition of the Novum Testamentum. He did not lose a bet; he feared that people would not use his Greek New Testament if he did not include it.


Based on common sense arguments, what makes more sense? Did the textual scholars who were doing text-critical work when manuscripts were actually being used have better insight into which manuscripts are best? Or do modern textual scholars, who only have access to manuscripts in museums and libraries, know which texts are the best? Is it more likely that God hid away His Word for a thousand years in a handful of manuscripts? Or did He preserve His Word in the manuscripts that were actually being used by the people of God? These are all questions that any layperson should be able to answer. It does not take a PhD in textual studies to determine that the Modern Critical Text starts in the wrong place, with the wrong manuscripts.

The common sense conclusion is that the texts used to produce the first printed editions represent the best form of the manuscript tradition that has ever existed. After this point in time, manuscripts were sent to libraries and museums, and the printed form of the Greek New Testament was the form that the people of God used. These printed forms were translated into various common languages and used with little to no contest for the next 300 years, until modern theories of scribal tampering caused people to throw out the work of the 16th century. The claim that “we have more data” really does not mean a whole lot, considering we have less perspective on the value of said data. At the end of the conversation, one has to ask, “How valuable is the data that was hidden in caves and barrels?” Is the data that was not being used more important, or is the data that was being used more important? Modern scholars consent to the former; the scholars of the 16th century consented to the latter.

In order to conclude that modern scholars have a better perspective on the data, one must write off the perspective of Augustine, who said, “Certain persons of little faith, or rather enemies of the true faith, fearing, I suppose, lest their wives should be given impunity in sinning, removed from their manuscripts the Lord’s act of forgiveness to the adulteress, as if He who had said, ‘Sin no more,’ had granted permission to sin.” One must claim that Calvin and Beza were either liars, or confused and mistaken. One must declare that Turretin would have upheld the readings he rejected if “he simply had access to the data we have today.” It takes an effort of revisionist history to believe that the believing people of God would adopt the Modern Critical Text. The simple common sense conclusion is to read these theologians and scholars as though they weren’t fools, and to determine that they simply disagreed with modern conclusions. Erasmus, Beza, Stephanus, Calvin, Turretin, Gill, and Dabney did not think anything of the Vatican Codex and manuscripts like it. In fact, they considered them a grotesque corruption of God’s Word. Based on the testimony of the people of God in time, which side is spinning tales and mythology? Is it the people who say that the Word of God evolved and became corrupted beyond repair? I heartily disagree, and affirm with the theological giants of the past that God has preserved His Word in the Received Text.

Two Different Texts


In my articles, I frequently comment that the Modern Critical Text and the Traditional Text represent two different forms of the text of the New Testament. Some disagree, and use this website to demonstrate that they are not that different. The site is helpful as a comparative tool between the ESV and KJV, though it is not technically a comparison of the Critical Text and Traditional Text. First, it is a comparison of translations, which means it is not comparing Greek texts, but translations of those texts. So while it gives the reader a general idea of the differences, translational choices may obscure the actual differences between the two underlying texts. Second, it does not fully compare the Critical Text and the Traditional Text, as it includes comparisons of passages in a way that downplays the differences. For example, the comparative tool includes the Pericope Adulterae in the Critical Text and excludes the Longer Ending of Mark from both texts. This gives the average reader the impression that there are really no differences. A full comparison would include the verses in the TR up to verse 20 in Mark, and exclude John 7:53-8:11 from the Critical Text. I would expect the tool to include these differences, as well as to clarify that it is a comparison between two translations, not between the TR and CT.

Are We Discussing Two Different Text Forms?

The exclusion of certain verses for comparison highlights an important fact: in order to say that the Modern Critical Text and Traditional Text are essentially the same, one must ignore or downplay the fact that they are not the same in certain important places. It is because of these important places that there is disagreement at all. If the differences were that minor, we would be having a conversation over translation methodology and that’s about as deep as it would go. That is not to say that somebody cannot be saved by reading a Bible translated from the Modern Critical Text, but a careful examination of the two underlying texts reveals that they are different. One can argue how significant these differences are, but the fact remains, there are differences which distinguish the two texts. 

That being said, from a certain perspective, modern Bibles and traditional Bibles are both Bibles. They both contain the 66 books of the Old and New Testaments, and they mostly contain the same content. Thus the important conversation should be centered around two topics – the difference between the underlying texts and translation methodology. In creating a comparison tool that is supposed to compare the TR and the CT, and then using translations of these texts as the point of comparison, the two categories of text and translation are blended. It is a curious thing to say that the two texts are essentially the same, because if that were the case I’m not sure anybody would be seriously having this discussion at all. It is because these two texts are so different that there is even a conversation. The existence of these two opposing positions on the text of the New Testament refutes the idea that the texts are the same.

I am not saying that sound doctrine cannot be taught from a modern Bible such as the ESV or NASB, just that the underlying texts of modern Bibles are different from those of traditional Bibles such as the KJV. Many sound Biblical teachers employ modern Bibles in their ministry and are not heretics. The problem is that the standard for judging a Bible has been set at “can sound doctrine be taught from it?” If that were the standard, we would have to keep every Bible, even though false doctrine is also readily taught from all translations. This standard is somewhat arbitrary and obfuscates the point of the discussion entirely. An orthodox understanding of the Trinity can be brought out of the New World Translation (in fact this is a great apologetic tool), but that doesn’t mean that Protestants should read the New World Translation. The standard of, “Can all the doctrines be proved from this translation?” is therefore not a meaningful standard for determining the quality of a text or translation. Thus the conversation is rightfully seated in discussing the authenticity of the underlying texts used for translation.

Two Different Text Forms

If the Modern Critical Text and Traditional Text were really as similar as is claimed, then there would be no discussion at all. It would be as simple as answering the question, “which Bible is the best translation of the Greek?” It would simply be a conversation over vocabulary choices and whether or not formal (KJV, NASB, ESV) or dynamic (NIV) equivalence is better. In admitting that there is indeed a difference, the conversation of determining how significant those differences are can take place in a productive manner. That being said, what about these two texts makes them “two different text forms?”

The primary difference has to do with the actual Greek manuscripts, not a difference between the translational choices of the KJV and ESV. The Modern Critical Text in its popular printed form (NA/UBS) is based largely on Codex Vaticanus, a fourth century uncial manuscript which is stored at the Vatican. All of the major differences can generally be found within this manuscript or Codex Sinaiticus. These are the two manuscripts referred to in modern Bibles as “earliest and best”. The Vatican Codex was first made use of in text-critical efforts when Desiderius Erasmus consulted it in the production of his Greek and Latin New Testaments. Erasmus rejected its readings, however, claiming that they seemed to be back-translations of corrupted Latin versional readings rather than being copied from a Greek manuscript. Frederick Nolan, a 19th century theologian and linguist, writes this regarding Erasmus and the Vatican Codex.

“With respect to Manuscripts, it is indisputable that he [Erasmus] was acquainted with every variety which is known to us; having distributed them into two principal classes, one of which corresponds with the Complutensian edition, the other with the Vatican manuscript. And he has specified the positive grounds on which he received the one and rejected the other” (Nolan, Frederick.  An Inquiry into the Integrity of the Greek Vulgate, or Received Text of the New Testament. 413, 414). 

Nolan also says regarding the Vatican Codex, “The affinity existing between the Vatican manuscript and the Vulgate is so striking, as to have induced Dr. Bentley and M. Westein to class them together” (Ibid. 61).

The first major use of this manuscript in the modern period was by Westcott and Hort, who primarily employed Vaticanus and Sinaiticus as a base text to produce their Greek New Testament in 1881. This is the text that the American Standard Version was translated from, which eventually gave birth to the Revised Standard Version and finally the English Standard Version. These manuscripts would eventually be classified as Alexandrian, based on the region in Egypt where they are thought to have originated (though recent scholarship has revisited this idea). Out of the close to 6,000 manuscripts available today, these Alexandrian manuscripts represent less than fifty. The vast majority of manuscripts represent a different text form, traditionally called the Byzantine Text Platform. The Textus Receptus follows the Byzantine text more closely than the Alexandrian text. So while one might make a case that the Alexandrian and Byzantine Texts are similar enough to both be considered a form of the Bible, these texts are distinct enough to be identified as separate classes of manuscripts, and thus different forms of Bibles. 

Even if one were to make a case that the Alexandrian texts and Byzantine texts were “close enough”, two major points of comparison stand between them that set them apart entirely – the Longer Ending of Mark (Mark 16:9-20) and the Pericope Adulterae (John 7:53-8:11). That is a total of 23 verses, present in the Byzantine texts, that are simply missing from the Alexandrian texts in two places. Even if one believes the modern claim that the Alexandrian texts are “earliest and best”, it does not follow that these are the same text form. These texts also exclude John 5:4, Romans 16:24, and others. In total, the differing text exceeds the number of verses in the entire book of Jude. If these texts were really so similar, there would be no reason for the Alexandrian manuscripts to have been classed in a different category than the majority of manuscripts.


The goal of this article is to support the claim that the Modern Critical Text and the Traditional Text are indeed two forms of the New Testament. They may both be considered a New Testament, but they certainly are not the same New Testament. The Modern Critical Text does not include a resurrection appearance account in all four Gospels, and it is missing a number of verses when compared to the majority of manuscripts. Additionally, the Modern Critical Text represents a handful of manuscripts which were produced around the third and fourth centuries, and which do not appear to have been copied after that point in time.

There are two major schools of thought as to what these Alexandrian texts are to the greater manuscript tradition. In the Modern Critical school of thought, they are the earliest texts, from which the rest of the manuscripts evolved. In the Confessional Text school of thought, they are an aberrant text stream that was not copied past the fourth century. These two forms may have sprung from the beginning of the same river, but by the third and fourth century they split and headed in different directions. The Alexandrian stream seems to have met its end shortly after that split, if the thousands of manuscripts available today are any indication. That is why focusing on translational differences between the KJV and ESV is not the primary concern for those who reject modern Bibles. If the Alexandrian form of the text is truly an aberrant stream, then the Modern Critical Text is not the “earliest and best”; it is a strange blip which disappeared as quickly as it appeared. Hopefully this sheds light on why those in the Confessional Text camp do not read modern Bibles. Translation methodology certainly has a role in the discussion, but a primary reason for siding with traditional Bibles has to do with the rejection of the texts modern Bibles use in translation.

A Response to Brother Mark Ward


First I want to acknowledge and commend the irenic spirit of Dr. Mark Ward as he presented a refutation of the position which he calls “Confessional Bibliology” in his lecture posted on September 27, 2019. For those who are readers of my blog, I have referred to this position as “The Confessional Text Position”, and I believe that Confessional Bibliology is an appropriate and charitable label, over and above “Textual Traditionalism” or “KJV Onlyism”. [EDIT: Ward has decided to call this position “KJV Only” anyway. We can’t all be winners.] It is important to remember that this is an intrafaith dialogue. I hope that my handling of his lecture rises to the same level of integrity as that of brother Ward. Dr. Ward’s presentation is thorough, scholarly, and befitting of a Christian, unlike many similar presentations. This is evident in that he freely discusses Pastor Jeff Riddle and Pastor Truelove without character defamation, misrepresentation, or name calling. I do acknowledge that some have treated Dr. Ward uncharitably in various groups, and I want to point out that I have had nothing but positive (though brief) interactions with him. It is clear that he is a dear brother in the Lord, despite our disagreement in this one area.

That being said, I do see some potential problems with his presentation that I would like to address. My goal is to emphasize, as Dr. Ward seems to do, that this conversation primarily finds its application pastorally, not text-critically. This is not about being right and defeating each other; it is about giving confidence to Christians that they have God’s Word. As a pastor, my sole intention is to set forth a position that can accomplish that goal. All of the text-critical work in the world is useless if our hearts are not first focused on instilling in men and women confidence in their Bible, reassuring them that every word they read is “Thus saith the Lord”. The main focus of my critique is that the presentation proceeds backwards. It begins at a surface level and then stays there, brushing over the fundamental issue which divides the two camps so definitively.

Do the Minor Differences between the CT and TR Give Cause for Abandoning the TR?

In Dr. Ward’s presentation, there was a major effort to highlight the differences within the printed editions of the Received Text, rather than discussing the major differences between the Received Text and the Critical Text. These major differences result in the form of the two texts being entirely different. I will argue that downplaying the differences between the Received Text and the Critical Text does not frame the discussion in its proper place, and that this makes it difficult to interact with the nuances of the presentation in a meaningful way. That is because the problem is not initially about the minor differences within printed texts; it is about the fact that these two texts represent entirely different Bibles and two different methodologies.

By focusing on the “jot and tittle” component of the Confessional Text position, which certainly deserves to be fleshed out further down the line, Dr. Ward’s approach neglects to highlight the implications for the doctrine of preservation. He rightfully comments that the missing sections at the end of Mark and in John 8 are a “serious threat” to the critical text. This seems like an appropriate problem to tackle prior to getting into the minutiae, which Dr. Ward does carefully in his presentation. Given that we both believe God has preserved His Word, it seems imperative to answer how one can uphold a meaningful doctrine of preservation while affirming two text platforms which disagree in major ways. If both sides can cross the bridge and agree that this poses difficulties to even the loosest definitions of preservation, there may be a great opportunity for a fruitful discussion about minor variations at some point from a believing perspective.

Which is to say, it is problematic for Dr. Ward’s critique to insist that God preserved two forms of the Bible. I argue frequently that the only reason there is so much tension in this discussion is that modern critical text advocates continue to present the smattering of Alexandrian manuscripts as “earliest and best”, despite no evidence for such a claim other than that they are the oldest surviving manuscripts. Even modern textual scholarship has demonstrated that original readings can indeed present themselves in later manuscripts.

If this handful of idiosyncratic texts is viewed as tertiary within the manuscript tradition (or not properly seated within the tradition at all), this conversation becomes much simpler. The rise of modern textual scholarship introduced this problem to the church by allowing manuscript types which have historically been rejected to be valued so highly. It is important to acknowledge that the Received Text did not introduce this problem; modern scholarship did when it declared that the Reformation era text needed to be thrown out. A consistent application of Dr. Ward’s presentation should conclude in the Received Text and the KJV being dismissed wholesale, as they represent an entirely different text form.

Since Dr. Ward did not suggest that, it is important to understand that textual decision making is done from a completely different perspective by the Confessional Bibliology group than by modern textual scholarship. It is easily demonstrated that the base manuscripts on which the modern eclectic text and the Received Text are built represent different forms altogether. So the difference is not necessarily in the amount of data, but in the methodology itself which accepts this data into the manuscript tradition. Much time is spent discussing whether or not the Post-Reformation Divines would have accepted this new data, and here is where Dr. Ward and I disagree fundamentally. I do not believe that the Post-Reformation Divines would have adopted the modern critical perspective, even if presented with the new data.

Francis Turretin comments on what Dr. Ward presents as a chief problem for the Confessional Text position – the problem of variants as it pertains to “every jot and tittle”. 

“A corruption differs from a variant reading. We acknowledge that many variant readings occur both in the Old and New Testaments arising from a comparison of different manuscripts, but we deny corruption (at least corruption that is universal)” (Institutes of Elenctic Theology, Vol.I, 111). 

So it is not chiefly a problem with variants, but the actual text form and the modern perspective that certain passages have been totally corrupted. Turretin continues. 

“There is no truth in the assertion that the Hebrew edition of the Old Testament and the Greek edition of the New Testament are said to be mutilated; nor can the arguments used by our opponents prove it. Not the history of adulteress (Jn. 8:1-11), for although it is lacking in the Syriac version, it is found in all the Greek manuscripts. Not 1 Jn. 5:7, for although some formerly called it into question and heretics now do, yet all the Greek copies have it, as Sixtus Senensis acknowledges: “they have been the words of never-doubted truth, and contained in all the Greek copies from the very times of the apostles” (Bibliotheca sancta [1575], 2:298). Not Mk. 16 which may have been wanting in several copies in the time of Jerome (as he asserts); but now it occurs in all, even in the Syriac version, and is clearly necessary to complete the history of the resurrection of Christ” (Ibid. 115). 

Turretin explicitly mentions “several copies in the time of Jerome”, which happens to be the time that Codex Vaticanus and Sinaiticus are said to have been produced. Whether he is explicitly referring to these two manuscripts or not, the unavoidable reality is that these two copies represent the form of text he is talking about – namely, one missing those three passages. The minor variants discussed in Dr. Ward’s presentation are not of a mutilating nature, but the two variants he lists as problematic certainly are. So to accept manuscripts, and readings from manuscripts, bearing this form is to depart methodologically in a major way. The conversation over which jots and tittles are original may be profitable if this can be admitted, as the number of jots and tittles to be discussed would shrink massively.

Does Confessional Bibliology Reject Decision Making? 

In short, no. Those who advocate for this position do not balk at the “Which TR?” question, because it fundamentally misses the point of the argument itself. I will acknowledge, however, the validity of the question from his perspective. While Dr. Ward provides a thorough presentation of the 11 types of variations between the printed editions of the Received Text, the conclusions of his argument do not demonstrate that the effort of modern textual scholarship is in the same category as Reformation era textual scholarship.

He is absolutely correct in saying that variations exist between printed editions of the TR, and points out that there are just as many editions of the Nestle-Aland text (with many more to come!). The most important point to interact with, however, is his critique that the KJV is not its own form of the TR. Dr. Ward wrongly assumes that ultimately, when the conversation is stripped down to its bare components, the Confessional Bibliology argument is the same as the KJV Only argument (excluding Ruckman). I will note that I do not consider this to be any sort of serious error, just a matter of nuance that I believe was overlooked. Confessional Bibliology advocates read translations other than the KJV, so it is a bit of a misrepresentation to call them KJVO. It would be the same as calling somebody who prefers and reads the ESV an ESV Onlyist, despite their viewing the NASB as a fine translation of the critical text.

While there are some within the Confessional Bibliology group who believe that some form of textual criticism is still necessary, most, as Dr. Ward points out, agree that the Scrivener edition of the Received Text, which represents the textual decisions of the KJV translators, is “the” Received Text. This is due to the nature of the argument from God’s providence, as well as the exposure of the text to the people of God as it happened in history. This argument does not seem so far-fetched when it is not hedged within the context of modern critical scholarship, though I am fully aware of the critiques of this position. It is not as though the KJV translators were moved along by the Holy Spirit, or reinspired, but that their textual decisions represented a century’s worth of scholarship, dialogue, and corporate reception of certain texts within the Received Text corpus. This is made plain and evident in the vast number of commentaries and theological works which use the Received Text of the Reformation.

In short, the Scrivener text is not the best representation of the Received Text by virtue of the King James translation team, but rather by virtue of the reception of those readings by the people of God. Were it the case that those readings had been rejected, like the readings Erasmus examined from the Vatican Codex, we might be right in following the argumentation of Dr. Ward. The fact stands that not only did Erasmus reject those readings, but all of the Reformed textual scholars and theologians who came after him did so as well, even commenting on manuscripts missing the ending of Mark. Jan Krans notes the fundamental difference between modern textual scholarship and the method of Beza in his work, Beyond What is Written.

“In Beza’s view of the text, the Holy Spirit speaks through the biblical authors. He even regards the same Spirit’s speaking through the mouth of the prophets and the evangelist as a guarantee of the agreement between both…If the Spirit speaks in and through the Bible, the translator and critic works within the Church. Beza clearly places all his text critical and translational work in an ecclesiastical setting. When he proposes the conjecture ἀχράδες (‘wild pears’) for ἀκρίδες (‘locusts’) in Matt 3:4, he invokes “the kind permission of the Church” (328, 329).

The point is this – it is not that the Confessional Bibliology group rejects textual decision making; they reject textual decision making in the context of modern textual scholarship. Within the Confessional Bibliology camp, there are vibrant and healthy discussions on this matter which have resulted in the mass adoption of the Scrivener text. The problem occurs when this is conflated with Reconstructionist Textual Scholarship, which, when applied to a text, results in its complete deconstruction and devaluation. The conversation simply cannot happen in a healthy way in a context that takes fifteen miles when given an inch.

This is chiefly exemplified in the fact that a decision made on a variant that does not affect meaning is compared to removing 11 verses from Scripture. Categorically, those are not the same thing. I appreciate Dr. Ward’s care in presenting the minor variations, but those are not the problem at a fundamental level (unless one chooses to make them a problem unnecessarily). That is also assuming that a decision cannot be made, or has not been made, on the handful of significant variations that exist within the editions of the Received Text. Had the KJV translators produced a printed edition of the textual decisions they made, this conversation likely would not be happening, and the claim that the text as represented by the 1881 Scrivener edition is an “English Greek New Testament” would not be taken seriously. This was the conclusion of Dr. Hills as well, that the textual decisions of the KJV can rightfully be considered their own “TR”, which Dr. Ward acknowledges but seems to disagree with.


I appreciate that Dr. Ward has seated the conversation within the context of the believing church. This is a huge upgrade from the vast majority of the discussion, which exists in the world of secular scholarship. The goal of this article is not to slam Dr. Ward, or to say that I have necessarily refuted him, but rather to point out that there is a major stumbling block standing in the way of bridge-crossing. A simple critique of Dr. Ward’s argument is that it fails to recognize the two distinct text forms held by each respective position. If we were dealing with one text form, with minor variations, we might be able to understand Turretin and Owen’s commentary on the text more readily, and Dr. Ward’s presentation might be more applicable to those who subscribe to Confessional Bibliology. But since the church of that era rejected manuscripts like Vaticanus, and in the modern era Bibles are all built on top of Vaticanus, the effort of bridge-crossing may be more tedious. Until the people of God seriously consider the direction of modern textual scholarship and its wholesale abandonment of the Original Text for the Initial Text, it may be difficult to find the kind of agreement Dr. Ward desires in his presentation.

At the end of this analysis, I hope that all can see that while there is a fundamental disagreement that may stand in the way of bridge-crossing, it is not so great that we cannot treat each other with the brotherly kindness and respect which is fitting for those who claim Christ. The fact stands that not all Bibles are created equal, and despite modern Bibles generically looking like Bibles made from the Received Text, they depart in major places which do indeed affect doctrine, like John 1:18 and Mark 16:9-20. It would also be a different conversation if both forms of the text were stable, but the modern text is not. The modern text-critical effort is only speeding up in the direction of uncertainty as the ECM is implemented (see 2 Peter 3:10 and the number of diamonds in the Catholic Epistles of the NA28). I’ll end with this quote by textual scholar D.C. Parker, which I find to accurately assess the nature of the modern critical text.

 “The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.”

Does the Confessional Text Position Start with the TR?


A common misconception about the Confessional Text position is that its starting point is that the Received Text is the preserved Word of God. It is said that adhering to this view on the text of Holy Scripture is simply an exercise of picking a text based on tradition and defending it tooth and nail. While this characterization may seem convincing, and makes the position easier to write off, it is an unfortunate misrepresentation. It may be that those who make the argument do not fully understand the position, or perhaps have no other method of responding. Those in the Confessional Text position defend the various readings of the TR, but not because of blind tradition. When it comes to the text of Scripture, it is important that the conversation starts with the foundations and works up to the more surface level discussion of variants. Variants are certainly important to understand, but not even those who advocate for the Modern Eclectic or Modern Critical Text start with variants. If they do, they likely do not understand their own camp.

All views on the text of the Holy Scriptures ultimately begin with the theology of Scripture, specifically with inspiration and preservation. Any person who is unwilling to admit this plain fact is either unfortunately blind to their tradition, or acknowledges their tradition but conflates it with the tradition of the Reformation and Post-Reformation. The difference between those who adhere to the Received Text and those who adhere to the Modern Critical Text is first and foremost a difference in the theology of Scripture. Before I get into the article, it is also important to recognize that the vast majority of Christians who read an English Bible do so based on translation methodology, as Rev. Christian McShaffrey presents in this article. While the textual issue is ultimately the foundational reason in determining which Bible one reads, it is not the only contributing factor. That being said, it is important that people recognize that no, those in the Confessional Text camp do not simply begin with the Received Text and then defend it. Christians need to realize that this is a cheap parlour trick of an argument that nobody who actually adheres to the position takes seriously.


The starting point for the Confessional Text position is primarily that God has spoken (Deus dixit). In the time of the Old Testament, “holy men of God spake as they were moved by the Holy Ghost” (2 Pet. 1:21), and those holy men were “the prophets” (Heb. 1:1). In these last days, God has spoken through His Son Jesus Christ (Heb. 1:1). God, in His providence, chose to do so by way of human authors in the Apostolic age of the church. He used their unique vocabulary and experiences, though not in such a way that the words were not truly the words of God. That is how Paul can say that “all Scripture is given by inspiration of God” (2 Tim. 3:16), despite Paul himself being an author of many of the letters which would eventually become the New Testament. The Scriptures do not speak of themselves as an invention of the Apostolic era writers, but as a deposit that God delivered by His inspiration of men through the power of the Holy Spirit.

The connection between the Old Testament and the New Testament in Hebrews 1:1 demonstrates the continuity between the two testaments and thus the continuity of God’s purpose. That purpose is the same one promised in Genesis 3:15 when He said, “And I will put enmity between thee and the woman, and between thy seed and her seed; it shall bruise thy head, and thou shalt bruise his heel”. This promise of Grace in the form of a covenant is progressively revealed in each of the “sundry times and divers manners”, catalogued in Hebrews 11, leading up to the time when God would make a “New Covenant with the house of Israel, and with the house of Judah” (Jer. 31:31), which would inaugurate the “last days” (Isa. 2:2-4; 1 Pet. 1:20; Acts 2; 2 Tim. 3:1). The purpose of Scripture, from the time of the “people of God of old” to the people of God in the last days, is covenantal in nature and sufficient in making men “wise unto salvation through faith which is in Christ Jesus” and is “profitable for doctrine, for reproof, for correction, for instruction in righteousness: That the man of God may be perfect, throughly furnished unto all good works” (2 Tim. 3:15-16). Turretin rightly says, “They were intended to be the contract of the covenant between God and us” (Institutes of Elenctic Theology, Vol. 1, 139).

The New Testament is part of the fulfillment of Genesis 17:7 and Ezekiel 34:24 when God says, “And I will establish my covenant between me and thee and thy seed after thee in their generations for an everlasting covenant, to be a God unto thee, and to thy seed after thee” and “I the LORD will be their God, and my servant David a prince among them; I the LORD have spoken it.” Since the Scriptures are the means that God has chosen to accomplish this task through faith in Christ, the expectation of the New Testament also carried with it the expectation of new covenant documents. We know that God did indeed fulfill this promise in Jesus Christ, and since God cannot fail (Isa. 46:10), we know that not only will He succeed in saving a people unto Himself, He will succeed in speaking to those people. Hence the principal foundation is Deus dixit, not the TR.

God Continued to Speak 

The promise of God to His people was not limited to the first century AD. Jesus promised that “I am with you alway, even unto the end of the world” (Mat. 28:20). How is it that God accomplishes this? Through the Holy Scriptures (Heb. 1:1) by the power of the Holy Spirit (John 14:16; 10:26). This is a perpetual promise to the people of God until Christ returns. This is how the doctrine of inspiration is joined to the doctrine of preservation. Since the covenant promise of God is true and sure until the Last Day, it is rightly said that Jesus’ words in Matthew 5:18, “For verily I say unto you, Till heaven and earth pass, one jot or one tittle shall in no wise pass from the law, till all be fulfilled” apply to the means which God prescribes for the fulfillment of all things – the Holy Scriptures. Thus the Westminster Divines rightly employ this as a proof text in the Westminster Confession of Faith when they said that “by His singular care and providence, kept [the Scriptures] pure in all ages” (1.8, bracketed material added). The Reformed doctrine of the Scriptures explicitly joins the inspiration of the initial New Covenant documents (autographs) with the continued preservation of those inspired texts in the copies (apographs). This doctrine has unfortunately been abandoned in the modern period with the severing of inspiration from preservation, as demonstrated in the critically acclaimed textbook How to Understand and Apply the New Testament by Dr. Andrew Naselli (43) and the Chicago Statement on Biblical Inerrancy (Article X).

It is from this theological starting point that the Reformed proceed. It is likely that the redefinition of Reformed Theology to only include TULIP has resulted in this departure, in part at least. The historical Calvinists were fundamentally covenantal. Thus Reformed Theology must include this rich, covenant structure which supplies a robust understanding of the Holy Scriptures. 

But Can You Produce a Text? 

The very request to “produce a methodology to create a text” stands in opposition not only to the Reformed doctrine of Scripture, but to the Biblical doctrine of Scripture. This is made plain in the fact that the Westminster Divines employed the language “kept pure in all ages,” clearly demonstrating their belief that it was God’s providence which prevented the Holy Scriptures from falling into such disarray that total corruption was possible and a reconstruction effort necessary. It is only when one disconnects the theology of the Reformation from the textual scholarship of the Reformation that one can say, “Beza and Erasmus were doing the same thing as modern textual scholars!”

This claim is drawn from the conclusions made by Jan Krans in his work Beyond What is Written, which is part of the Brill series New Testament Tools and Studies edited by Bart Ehrman and Eldon J. Epp. Yet it does not seem that Krans would necessarily agree with such statements made about his work. Krans does make such a case regarding Erasmus in a certain sense, but even then his conclusions are not so broad and absolute, and this is a major flaw in the argument of anybody who cites his work this way. I will be releasing a full review in the near future, cataloging where his conclusions may be a bit ambitious regarding Erasmus. In any case, he provides one valuable insight in a single quotation which directly refutes the claim that the textual scholars of the Reformation were doing the “same thing” as modern textual scholars.

“In Beza’s view of the text, the Holy Spirit speaks through the biblical authors. He even regards the same Spirit’s speaking through the mouth of the prophets and the evangelist as a guarantee of the agreement between both…If the Spirit speaks in and through the Bible, the translator and critic works within the Church. Beza clearly places all his text critical and translational work in an ecclesiastical setting. When he proposes the conjecture ‘wild pears’ for ‘locusts’ in Matt 3:4, he invokes ‘the kind permission of the Church’” (328-329).

The last time I checked, the CBGM does not include any mention of the Holy Spirit, a doctrine of inspiration, or the church in its methodology. So while Krans certainly does draw parallels between Reformation era scholarship and modern scholarship, it does not appear he would agree with such broad conclusions. Since that has been dealt with, I will now turn to explain why those in the Confessional Text camp are not fazed by the accusation of “not doing textual criticism.”

The Received Text

The Reformed doctrine of inspiration and preservation, as laid out above, is the starting point for determining the text in which God has spoken. Due to God’s covenantal promise, there is no need to “reconstruct” a text from the Reformed perspective. To admit as much is to admit that God has failed in His covenantal purpose. A total corruption of certain texts does not comport with the reality that God has preserved His Word. So the fact that the modern critical text contains a multitude of uncertain readings should cause the Reformed believer to pause. Those in the Confessional Text camp do not see a need to “construct” a text, but rather to receive a text. God has not failed, and thus His Word is readily available. It is not the task of the Christian to “produce” or “reconstruct” a text, but to determine which text reflects a story of God succeeding in His task.

On one hand, there is a text that generally represents a handful of third and fourth century manuscripts which only gained popularity in the modern period. On the other, there is a text that generally represents the vast majority of extant manuscripts, and the text which the vast majority of commentaries, translations, and theological works employed after the printing press was invented. The Confessional Text position accounts for differences with the Majority Text by taking into consideration the use of such texts by the people of God throughout time.


It should be clear to all that the Confessional Text position does not start with the TR as its foundation. It begins with the reality that God has spoken. It then builds on the covenantal reality that God has spoken in His Scriptures in these last days. It then applies the unfailing purpose of God to have a people unto himself and His promise to be with His people until the Last Day. These building blocks form the doctrines of inspiration and preservation, which were affirmed by the Post-Reformation Divines and codified in the confessional standards of the 16th and 17th centuries. Finally, a text is received which most aligns with the doctrines laid out in Scripture. The plain reality is that the ever-changing and recently adopted modern critical text does not comport with historical and Scriptural reality. 

So yes, it is true that those in the Confessional Text camp defend the Masoretic Hebrew Text and the Greek Received Text. It is also true that many disagree with the textual decisions of these texts. The goal of this article is to demonstrate that this is not a blind tradition, but one built on a sturdy doctrine of Scripture. The adoption of the specific Greek and Hebrew texts of the Reformation is simply the result of looking into history and seeing which text is more consistent with the Biblical doctrines of inspiration and preservation.

Has the CBGM Gotten Us to 125AD?


So it has been said that the CBGM has been able to “get us to 125AD” as it pertains to the New Testament manuscripts with its analysis – or at least in Luke 23:34. Anybody who makes such a claim clearly has no working understanding of the Münster method, or at least is choosing to use an invisible rod to bash people over the head. In any case, I thought it would be helpful to examine some potential weaknesses in the methodology in a series of articles. To begin, I will discuss the reality that the CBGM is still in need of critical analysis. Dr. Peter Gurry, in his work A Critical Examination of the Coherence Based Genealogical Method, part of the Brill academic series New Testament Tools and Studies, writes, “Despite the excitement about the CBGM and its adoption by such prominent editions, there has been no sustained attempt to critically test its principles and procedures” (2).

So my advice to those who believe the bold claim that the CBGM can “get us to 125AD” is to put on their discernment ears and wait until 2032, when the effort can be accurately examined in full. If its use in analyzing the Catholic Epistles is any indication of the kind of certainty it will provide, I direct the reader to open their Nestle-Aland 28th Edition, if they own one, and examine the readings marked with a black diamond. It should be loudly noted that the methodology of the CBGM has not been fully examined, and I agree with Dr. Gurry when he writes, “If the method is fundamentally flawed, it matters little how well they used it” (4).

The CBGM and the Initial Text

Before the Christian church preemptively buys into this method wholesale, it is important to first recognize that there is not uniform agreement, even this early in the implementation of the CBGM, that this methodology will result in establishing what is being called the Initial Text. Bengt Alexanderson, in his work Problems in the New Testament: Old Manuscripts and Papyri, the New Coherence-Based-Genealogical Method (CBGM) and the Editio Critica Maior (ECM), writes, “I do not think the method is of any value for establishing the text of the New Testament” (117). What should be noted loudly, for those who are falling asleep, is that a significant shift has occurred under the noses of laypeople in the effort of textual scholarship as it pertains to the New Testament text.

That shift is the abandonment of the search for the Original or Authorial Text in pursuit of what is being called the Initial Text. Dr. Gurry writes, “These two terms [authorial or original text] have often been used interchangeably and their definition more often assumed than explained. Moreover, that this text was the goal of the discipline remained generally undisputed until the end of the twentieth century. It was then that some scholars began to question whether the original text could or should be the only goal or even any goal at all” (90, bracketed material added). Regardless of whether this is the method one decides to advocate for, let it be said that this is indeed a shift, for better or for worse. Dr. Gurry continues, “Rather than clarify or resolve this debate, the advent of the CBGM has only complicated the matter by introducing an apparently new goal and a new term to go with it: Ausgangstext, or its English equivalent ‘initial text’” (90-91). The problem of defining terms will always gray the bridge between academia and the people, so hopefully this article helps color in the gap.

While the debate rages on between the scholars as to how the Initial Text should be defined, I will start by presenting what might be considered the conservative understanding of it and work from there. Gerd Mink, who is credited with the first use of the term Ausgangstext, employs the term to mean “progenitor” or the “hypothetical, so-called original text” (92). That is to say that the goal of the CBGM, in theory, is to produce the text from which the rest of the manuscripts flowed. This sounds great in theory, but there remains a great distance between saying that the CBGM should produce this Initial Text and saying that the CBGM has produced it. In any case, the terminology “Original Text” is not employed in the same way as it was historically, and there is much deliberation as to whether Mink’s proposed definition will win out over those that wish to define it more loosely.

Based on my experience with systems, a definition of the term as “the text from which the extant tradition originates” (93) is much more precise and descriptive of what the method is capable of achieving. Any talk of whether or not the Initial Text represents the Original Text is merely speculation at this point, and I argue will remain speculation when the effort is complete. This of course requires a more humble assessment of the capabilities of the CBGM, in that an empirical method is only good for analysis of that which it has in its possession. Which is to say that, methodologically speaking, there is still a gray area of about 300 years or more between the time the original manuscripts were penned and the date of the earliest extant manuscripts, depending on how early one dates the earliest complete manuscripts. This is what I have been calling the “gray area between the Authorial and Initial Text,” or “the Gray Area” for short. Dr. Gurry has defined it as the historical gap (100). I suspect that this Gray Area will be the focus of all discussion pertaining to the actual value of the ECM by the time 2032 arrives.

The Gray Area Between the Authorial and Initial Text

Any critique of the CBGM is incomplete without a sincere handling of the Gray Area between the Original and Initial Text. Until that conversation has happened, it is rather preemptive to make any conclusions such as, “The CBGM can get us to about 125AD”. Dr. Gurry writes, “The reason is that there is a methodological gap between the start of the textual tradition as we have it and the text of the autograph itself. Any developments between these two points are outside the remit of textual criticism proper. Where there is “no trace [of the original text] in the manuscript tradition” the text critic must, on Mink’s terms, remain silent” (93).

This is understandably a weakness of the methodology itself, if one expects the methodology to produce a meaningful text. Dr. Gurry continues, “Mink’s statement that the initial text ‘should not necessarily be equated with any actual historical reality’ is best read as a way to underscore this point” (93). I propose that it is of greatest importance that Christians begin discussing the Gray Area between the Original Text and the Initial Text now, as it is outside the interest of the text-critic proper. Yes, this discussion is most certainly a theological one, however much it might pain those who have buried their heads in the sand regarding the weaknesses of the CBGM to admit it.

It is important to note that in this conversation over the methodology of the CBGM, there is certainly not uniform agreement on the capabilities of this relatively new method. It is my hope that by bringing this discussion into a more public space, the terminology of Original and Initial Text, and the space between these two points in the transmission of the New Testament, will foster an important conversation as it pertains to the orthodox doctrinal standards of inspiration and preservation. Dr. Gerd Mink indirectly proposes one possible method of analyzing the Gray Area, which would be to demonstrate that there is a significant break between the Original and Initial Text. Perhaps some ambitious doctoral student might take it upon himself to conduct this work, though I wonder if it is even possible to analyze data that does not exist. That is to say that determining the quality and authenticity of the Initial Text might as well be impossible, and any conclusions regarding this text will be assumptive, unless some new component is added to the CBGM which allows such analysis to be done.

The ontological limitations of the CBGM give cause for the discerning onlooker to side with the assessments of DC Parker and Eldon J. Epp. Dr. Epp writes, “Many of us would feel that Initial Text – if inadequately defined and therefore open to be understood as the First Text or Starting Text in an absolute sense – suggests greater certainty than our knowledge of transmission warrants” (Epp, Which Text?, 70). Until those who have a more optimistic understanding of the Initial Text produce a methodology that is adequate for testing the veracity of the Initial Text, I see no reason why anybody should blindly trust that the Initial Text can be said to be the same as the Original Text. And that is assuming that the ECM will reveal one Ausgangstext. It is likely, if not inevitable, that multiple initial texts will burst forth from the machine. A general understanding of the quality of the earliest extant texts certainly warrants such a thought, at least.


The purpose of this article is to 1) make a wider audience aware of the difference between the Initial Text and the Original Text and 2) begin the conversation about the Gray Area between the two. It is best that the church begins discussing this now, rather than in 13 years when the ECM is completed. There are many Christians who may be caught completely off guard when they discover that the somewhat spurious claim that the CBGM has “gotten us to 125AD” is, in fact, not the truth. The fact stands that nobody has the capability of making such a precise claim at this point, nor will anybody be able to make such a claim in 2032. It is best, then, that people allow the scholars to finish the work prior to making claims that the scholars themselves are still in dialogue about.

A Meaningful, Reformed Defense of the Scriptures


Charles Spurgeon once said in a Sermon dealing with the defense of the Scriptures:

“I am a Christian minister, and you are Christians, or profess to be so; and there is never any necessity for Christian ministers to make a point of bringing forth infidel arguments in order to answer them. It is the greatest folly in the world. Infidels, poor creatures, do not know their own arguments till we tell them, and then they glean their blunted shafts to shoot them at the shield of truth again. It is folly to bring forth these firebrands of hell, even if we are well prepared to quench them. Let men of the world learn error of themselves; do not let us be propagators of their falsehoods. True, there are some preachers who are short of stock, and I want them to fill up! But God’s own chosen men need not do that; they are taught of God, and God supplies them with matter, with language, and with power” (New Park Street Pulpit, Volume 1, 110). 

This has always been the Reformed defense of the Holy Scriptures. Greg Bahnsen, in his work, Presuppositional Apologetics: Stated and Defended, writes:

“Faith is humble submission to the self-attesting Word of God. Faith accounts God truthful, faithful, and powerful on the basis of His own Word, not requiring to see demonstrable proof or evidence outside of God’s Word that could confirm it as trustworthy” (64). 

While many in the Reformed camp do not subscribe to a presuppositional method, this thought is pervasive throughout the Reformed Scholastics. See Francis Turretin on the topic: 

“But the orthodox church has always believed far otherwise, maintaining the revelation of the word of God to man to be absolutely and simply necessary for salvation” (Institutes of Elenctic Theology, vol. 1, 55). 

“The authority of the Scriptures depends on the origin. Just because they are from God, they must be authentic and divine” (62). 

“The Bible proves itself divine, not only authoritatively and in the manner of an artless argument or testimony when it proclaims itself God-inspired (theopneuston)” (63). 

Turretin then details the external and internal arguments for the authority of the Scriptures, which follows the same form as that of the Westminster Divines, and post-Reformation Divines:

“With regard to the duration; the wonderful preservation (even to this day) of the divine word by his providential care against powerful and hostile enemies who have endeavored by fire and sword to destroy it…the consent of all people who, although differing in customs (also in opinions about sacred things, worship, language and interest), have nevertheless received this word as a valuable treasury of divine truth and have regarded it as the foundation of religion and the worship of God. It is impossible to believe that God would have suffered so great a multitude of men, earnestly seeking him, to be so long deceived by lying books” (63). 

Turretin goes on to detail the internal testimony to the authority of the Scriptures:

“The internal and most powerful marks are also numerous. (1) With regard to the matter: the wonderful sublimity of the mysteries such as the Trinity, incarnation, the satisfaction of Christ, the resurrection of the dead and the like; the holiness and purity of the precepts regulating even the thoughts of the internal affections of the heart adapted to render man perfect in every kind of virtue and worthy of his maker; the certainty of the prophecies concerning things even the most remote and hidden. (2) With regard to the style: the divine majesty, shining forth no less from the simplicity than the weight of expression and that consummate boldness in the commanding all without distinction, both highest and lowest. (3) With regard to the form: the divine agreement and entire harmony of doctrine, not only between both testaments in the fulfillments of predictions and types, but also between particular books of each testament; so much the more to be wondered at, as their writers both were many in number and wrote at different times and places so that they could not have an understanding among themselves as to what things should be written. (4) With regard to the end: the direction of all things to the glory of God alone and the holiness and salvation of men. (5) With regard to the effects: the light and efficacy of the divine doctrine which is so great that, sharper than any two-edged sword, it pierces to the soul itself, generates faith and piety in the minds of its hearers, as well as invincible firmness in its professors, and always victorious triumphs over the kingdom of Satan and false religion” (64).

A Return to the Power of God in the Gospel 

So the opinion of the Reformed, both presuppositional and classical, is that the Scriptures are by nature self-authenticating (αυτοπιστος). The Scriptures testify to themselves that they are the Word of God. Where will one turn to find a sufficient apologetic outside of this testimony? Certainly not to Bahnsen. The Lord has prescribed a sword for battle (Hebrews 4:12), and many, supposing their own prowess, charge into battle with a shield as though God will honor that vain effort.

The Scriptures are not to be put on trial, and the faithful of God are to go forth with the power which is the Gospel (Rom. 1:16). The heathen and infidel are not to be entertained with debate, as though a careful examination of corruption found within the manuscripts will convince them that God’s Word is inspired and preserved. Evidence of such attempts is well documented in the opinions of Muslims, Atheists, and Mormons, who readily use the failed efforts of apologists who have put the Word of God on trial. The effort of the Christian is not to convince; it is to proclaim. This is in fact the testimony of the Apostle when he went forth to the pagan city of Corinth.

“And I, brethren, when I came to you, came not with excellency of speech or of wisdom, declaring unto you the testimony of God. For I determined not to know any thing among you, save Jesus Christ, and him crucified. And I was with you in weakness, and in fear, and in much trembling. And my speech and my preaching was not with enticing words of man’s wisdom, but in demonstration of the Spirit and of power: That your faith should not stand in the wisdom of men, but in the power of God” (1 Cor. 2:1-5).

Christians, do not lose sight of the covenantal purpose of the Scriptures (2 Tim. 3:15), or the covenantal purpose of God, which is to make all things new (Gen. 3:15; Rev. 21:5). Our approach to the Muslim, and to the Atheist, and to the Mormon is not to demonstrate that Christians have a defense; it is to bring forth the power of God in the preaching of the Gospel. Remember that when the Apostle arrived in Athens, his “spirit was stirred in him, when he saw the city wholly given to idolatry” (Acts 17:16). He then calls them to note their great idolatry and “preached unto them Jesus, and the resurrection” (Acts 17:18). He did not tarry on about the opinions of various gnostics who were already wont to corrupt the orthodox profession of the faith. He proclaimed Christ.

The faithful in Christ should remember the words of Bahnsen:

“Finally, it remains for us to see that according to the Bible a man cannot come to an understanding of God’s Word or a knowledge of God without regeneration and faith. Hence the apologist cannot give the unbeliever convincing understanding, rational demonstration, probable verification, or knowledgeable proof and expect these to bring him to faith in God’s Word. All the argumentation in the world, all the scholarly explanation that we can set forth cannot effect saving knowledge in the unbeliever, for as dead in his vanity of mind he needs regeneration by the Holy Spirit. Thus the apologist presupposes the Scripture and focuses on the unbeliever’s intellectual rebellion or sinfulness, witnesses to the Word of Christ, and argues upon its self-attesting authority, looking to God rather than secular “wisdom” to give success to his words of proclamation and defense. The apologist does not expect the unbeliever to be able properly to understand and thereby be convinced of the truth of the gospel as long as he remains unrepentant for his guilty rebellion against God and does not begin by faith in his approach to God’s Word” (64-65). 

The Reformed method of prolegomena is to first detail Natural Theology, then to describe the weakness of Natural Theology as it pertains to salvation, and finally to detail the particular beauty, power, and infallibility of the Special Revelation of God in His Holy Writ. The post-Reformation Divines emphatically declared the foundational, Trinitarian nature of God and salvation, the self-authenticating nature of the Scriptures, the inability of man to be reconciled of his own accord, the great distance between God and man and the necessity of God’s voluntary condescension, and the importance of practical, experiential religion.


If the purpose of our apologetic is not to win souls to Christ, then our apologetic has failed. And if the Gospel is only a footnote, or even lacking, in our presentation to the unbeliever, then we have not used the tools prescribed by Christ. And if we put the authority of the Scriptures in the hands of men, we are no better than the Papists whom the Reformers fought so vigorously against. There is a war raging which dates back to the time of Adam and Eve, and the weapons of choice chosen by the enemy are “Yea hath God said” (Gen. 3:1) and then “God hath said” (Gen. 3:3). The methodology of the enemy is simple and twofold:

  1. Question the authority of God’s Word
  2. Assert himself as the authority over God’s Word

The Christian should not be so foolish as to fall into the same error as Adam and Eve. The seed of all errors is to believe that we owe a response to the method proposed by the enemy. Even when Christians proclaim, “I want to know what Paul wrote!” they adopt the thinking of the enemies of the faith, which says that we are still attempting to find out what Paul wrote, as though God has not delivered His Word and kept it pure. Thus, when one declares that they “want to know what Paul wrote!” as though they do not know, they are really saying, “Yea hath God said?” When Satan tempted Jesus in the wilderness, he attempted the same method that he tried with Adam and Eve. Jesus responded with Scripture. When Satan and his minions attempt to attack the authenticity of Scripture, our only response is to turn heavenward and declare, “Thy word is truth” (John 17:17).

“For the preaching of the cross is to them that perish foolishness; but unto us which are saved it is the power of God. For it is written, I will destroy the wisdom of the wise, and will bring to nothing the understanding of the prudent. Where is the wise? where is the scribe? where is the disputer of this world? hath not God made foolish the wisdom of this world? For after that in the wisdom of God the world by wisdom knew not God, it pleased God by the foolishness of preaching to save them that believe” (1 Cor. 1:18-21). 

Does the Modern Apologetic Offer a Meaningful Response to Bart Ehrman?


It is often stated that the Confessional Text position, which was the position that the post-Reformation divines defended against the Papists and Anabaptists, offers no meaningful answer to critics like Bart Ehrman. It is said that in order to defend the text of the New Testament, one has to adopt the epistemology and methods of modern textual scholarship. There are two problems with this claim. The first is that Bart Ehrman is a huge influence on the very scholarship that is said to refute him. In other words, he is one of the top scholars in the field and has contributed a vast amount of work to the method that is said to refute him. He is an editor of the Brill series New Testament Tools and Studies, which represents the latest research in New Testament textual scholarship. In the recent work on the Pericope Adulterae produced by Tommy Wasserman and Jennifer Knust, the authors thank Bart Ehrman for pointing them in the right direction. Additionally, he is an editor of the textbook that is standard curriculum in most seminaries (The Text of the New Testament).

Either Ehrman doesn’t know his own discipline well, or the claim is woefully lacking in any sort of support. In fact, one has to severely downplay the tremendous influence Ehrman has in the current effort of textual scholarship, obfuscating the fact that Ehrman’s position is not all that different from the believing version of his view. Despite the fact that many New Testament scholars disagree with his conclusions, the fact stands that at the foundational level, the methods Ehrman uses to come to his conclusions are nearly the same as those of anybody else who adheres to the modern critical text as it is represented in the NA/UBS platform. In a debate held five years ago between Ehrman and a popular apologist, Ehrman rightfully commented that the apologist agreed with 8.5 out of 9 points presented in his book, Misquoting Jesus.

The apologist had no response to this pointed observation. At this point, the debate was definitively lost, and Ehrman walked out of that room with a victory against the text of the Holy Scriptures (not to mention that the whole of the debate was akin to a cat playing with a mouse). That is what happens when Christians put the Word of God on trial. So despite the claim that this is a valid defense against Ehrman, Ehrman himself finds the position not significantly different from his own. It seems reasonable that, in order to refute Ehrman, one must adopt a position different from that espoused by the books Ehrman himself penned or edited. It stands to reason that in order for a position to be potent apologetically, it must be different from the position it is trying to refute.

The second problem with the claim that defending the Bible requires an adoption of modern critical methods is that the method itself is not capable of proving anything one way or another as it pertains to what is considered the “original” or “authorial” text of the New Testament. Scientific methods do not care about Christians who believe the Bible to be preserved. Scholars and apologists can draw conclusions regarding the data, but those conclusions are simply not definitive or demonstrable using the data itself. This is the entire claim of those who hold to a presuppositional method of apologetics. Yet, by adopting this method, one must adopt the folly of the fool to try to prove the fool wrong. Ehrman actually offers the same critique of these methods that those in the Confessional Text camp do, as certain apologists have noted. When this is pointed out, there is never a defense offered to silence the critique. Rather than refuting the claim, such apologists resort to various ad hominem attacks, assaults on the Bible that the Christian church used for centuries (and on those that produced it), and other uncharitable schemes that do not provide a substantial argument. Is this really the best possible defense of the text of the Holy Scriptures? I argue no, on several accounts.

Does the Modern Critical Text Apologetic Refute Bart Ehrman?

The answer is a simple “no.” For those who are familiar with a presuppositional method of apologetics, the reason should be clear. It leaves the Christian unequivocally incapable of answering the claims of Bart Ehrman, and of the Muslim apologist at that. Typically, if one wishes to refute somebody, one needs to take an opposing position, not the same one. I cannot think of a more apt example of a Christian handing their Bible over to the unbeliever for the sake of neutrality. In this example, it is not just a metaphor; it is quite literally the case that the believer has handed their Bible over to Bart Ehrman to stand as judge over it. In the premise of the argument, the believer has already lost the debate by allowing the unbeliever to decide what the Bible does and does not say.

Charles Spurgeon offers a great response to those that believe they need to prove every line of Scripture to the unbeliever using evidence.

“I am a Christian minister, and you are Christians, or profess to be so; and there is never any necessity for Christian ministers to make a point of bringing forth infidel arguments in order to answer them. It is the greatest folly in the world. Infidels, poor creatures, do not know their own arguments till we tell them, and then they glean their blunted shafts to shoot them at the shield of truth again. It is folly to bring forth these firebrands of hell, even if we are well prepared to quench them. Let men of the world learn error of themselves; do not let us be propagators of their falsehoods. True, there are some preachers who are short of stock, and I want them to fill up! But God’s own chosen men need not do that; they are taught of God, and God supplies them with matter, with language, and with power” (New Park Street Pulpit, Volume 1, 110). 

There is a difference between defending the text of the Holy Scriptures, and adopting the methods of modernity which say that the Bible has been lost and needs to be reconstructed, and then trying to defend that it has not been lost. So what is the difference between the modern critical text apologist and Bart Ehrman? Both look at the same dataset; one says that the dataset is the preserved Word of God, and the other does not. On this point, I agree with the modern critical text apologist that God has preserved His Word. I disagree, however, with the conclusion that the modern critical text has demonstrated that, or can demonstrate that.

This position is defended simply by saying that God has preserved His Word. There is no evidence to support this claim, however, because there is not a single person who defends this method who will point at a text and say, “This is God’s preserved Word!” They must argue that God has generally preserved all the words, and that it is the task of human scholars to dig through the decaying manuscripts to find out which words He preserved. The modern critical text apologist says that this can be accomplished, and Bart Ehrman, along with a multitude of his peers, says that it cannot be done. Which is to say that the scholars who have all of the credentials, all of the accolades – the masters of this method – say that it cannot be done. That is why Christians should take the opinions of D.C. Parker and Bart Ehrman seriously when they critique the modern methods and the inability of such methods to produce a final form of the text. Of course there are more optimistic scholars than D.C. Parker and Bart Ehrman, but even they will not say that God’s Word has been preserved down to the word. Yet the problem does not lie in the fact that God’s Word has not been preserved; it rests in the reality that the methodology itself is incapable of proving such a claim.

If it were able to prove this claim, the work of modern textual scholarship on the New Testament would have been completed decades ago. The problem is not that God has not preserved His Word; the problem is that the modern methodology has decided this to be the case. So in adopting this modern method, one must adopt the various methods that have led scholars, both atheist and believer alike, to abandon the search for the Divine Original. In its premise, the argument admits that the Word of God still needs to be found, and that the original (as I have defined it here) cannot be found. In admitting that the Word of God still needs to be found, the Christian has lost all claims on a Bible that is preserved. In a very real sense, this position says that while God has indeed preserved His Word, we simply will never know which one He preserved. This “defense” of the Holy Scriptures is no defense at all; it is surrender. It is like standing in a pile of keys, one of which opens a door, and never being able to find the key that opens it. What a capricious God, who would dangle His Word in front of His people, declaring that He preserved His Word for them but never allowing them to know what that Word is that He preserved!


The only meaningful apologetic for the Holy Scriptures is one which does not adopt the speculations and theories of modern scholarship. A Christian does not need to believe that, in order to defend the Scriptures, they must capitulate to the opinions of Bart Ehrman and Muslims. We do not need to place the Holy Writ on an altar in a mosque or the academy and stand by as opponents of the faith critique and dismantle each line of God’s Word. We do not need to wait until 2032, when the scholars will hand the Bible back to the church with a big red stamp reading, “Undecided”. The defense for the Scriptures remains the same as it has for centuries – that God’s Word is self-authenticating. It is in itself the rule of faith. It does not stand judged by men, but is itself the judge of men.

It is high time that the mockery of those who adhere to this divine truth be cast out of our favor as Christians. Those who truly wish to defend the Holy Scriptures must begin by rejecting the model that denies that the Bible has been preserved perfectly and kept pure in all ages. Christians should abhor those who mock the self-authenticating nature of the Sacred Deposit, and reject the opinions of those who do not see the Scriptures as any different than the Iliad. We must stop blindly believing the unfounded claims that the modern method has produced a meaningful apologetic for the Holy Scriptures when it clearly has not, and return to the theological foundations of the Protestant faith. God alone has spoken, and He does not need men to decide what He did, or did not, say. I will follow up this article with a positive defense of the Holy Scriptures using a theological method, which is the method espoused by the giants of the faith on whose shoulders we, as modern Christians, stand.

The Most Dangerous View of the Holy Scriptures


Quite often in the textual discussion, it is boldly proclaimed that “our earliest and best manuscripts” are Alexandrian. Yet this statement introduces confusion at the start, because there are sound objections as to whether it is even appropriate to use a term like “Alexandrian” when describing the “earliest and best manuscripts”, as though they were a text family or text type. There does not seem to be an “Alexandrian” text type, only a handful of manuscripts that have historically been called Alexandrian. This conclusion comes from the more precise methods now being employed, which allow quantitative analysis to be performed on the variant units of these manuscripts. The result of this analysis has demonstrated that the manuscripts called Alexandrian do not meet the threshold of agreement to be considered a textual family. Tommy Wasserman explains this shift in thought.

“Although the theory of text types still prevails in current text-critical practice, some scholars have recently called to abandon the concept altogether in light of new computer-assisted methods for determining manuscript relationships in a more exact way. To be sure, there is already a consensus that the various geographic locations traditionally assigned to the text types are incorrect and misleading” (Wasserman).

Thus, the only place the name “Alexandrian” might occupy in this discussion is one of historical significance, or possibly to serve in identifying the handful of manuscripts that bear the markers of Sinaiticus and Vaticanus, which disagree heavily among themselves, as the Munster Method has demonstrated (65% agreement between 01 and 03 in the places examined in the Gospels; 26.4% with the Majority Text). So in using the terminology of “Alexandrian”, one is already introducing confusion into the conversation, confusion that represents an era of textual scholarship that is on its way out. Regardless of whether or not it is appropriate to use the term “Alexandrian”, it may be granted that it is a helpful descriptor for the sake of discussion, since the modern critical text in the most current UBS/NA platform generally agrees with at least two of these manuscripts (03 at 87.9% and 01 at 84.9%) in the places examined (see Gurry & Wasserman, 46).
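As a rough illustration of how such agreement percentages are produced, a pairwise comparison can be sketched in a few lines of Python. The variant units and readings below are invented placeholders, not real collation data; the point is only to show the kind of arithmetic behind a figure like “01 agrees with 03 in 65% of the places examined.”

```python
# Sketch of "pre-genealogical" pairwise agreement: the percentage of
# variant units at which two witnesses attest the same reading.
# All unit labels and readings here are hypothetical, for illustration only.

def agreement(witness_a: dict, witness_b: dict) -> float:
    """Percent agreement over the variant units both witnesses attest."""
    shared = [unit for unit in witness_a if unit in witness_b]
    if not shared:
        return 0.0
    same = sum(1 for unit in shared if witness_a[unit] == witness_b[unit])
    return 100.0 * same / len(shared)

# Toy data: variant unit -> reading attested by the witness.
ms_01 = {"Jn1.18/20": "a", "Jn3.13/30": "b", "Jn6.69/22": "a", "Jn9.35/16": "a"}
ms_03 = {"Jn1.18/20": "a", "Jn3.13/30": "b", "Jn6.69/22": "b", "Jn9.35/16": "a"}

print(f"{agreement(ms_01, ms_03):.1f}%")  # 3 of 4 shared units agree -> 75.0%
```

Even this toy version makes the limitation obvious: the percentage is computed only over the witnesses and variant units that happen to be in the dataset, which is the point pressed throughout this article.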

The bottom line is this – the new methods currently being employed (the CBGM, or Munster Method) are still being applied, and the work will be ongoing until at least 2032. So any arguments made on behalf of the critical text are liable to shift as the effort continues and new data comes to light. As a result of this developing effort, any attempt to defend such texts is operating from an incomplete dataset, based on the very methods being defended. Since the general instability of the modern critical text is granted, at least until the Editio Critica Maior (ECM) is completed, the conversation itself is likely to change over the next 12 years. In the meantime, it seems that the most productive conversation to have is one which discusses the validity of the method itself, since the dataset is admittedly incomplete.

Is the Munster Method Able to Demonstrate the Claim that the “Alexandrian” Manuscripts Are Earliest and Best?

The answer is no, and the reason is the method being employed. I have worked as an IT professional for eight years, specifically in data analysis and database development, which gives me a unique perspective on the CBGM. An examination of the Munster Method (CBGM) will show that it is insufficient to arrive at any conclusion on which text is earliest. While the method itself is actually quite brilliant, its limitations prevent it from providing any sort of absolute conclusion on which text is earliest, or original, or best. There are several flaws that should be examined if those who support the current effort want to properly understand the method they are defending.

  1. In its current form, it does not factor in versional or patristic data (or texts as they have been preserved in artwork, for that matter)
  2. It can only perform analysis on the manuscripts that are extant, or surviving (so the thousands of manuscripts destroyed in the Diocletian persecution, or in WWI and WWII, can never be examined, for example)
  3. The method is still vulnerable to the opinions and theories of men, which may or may not be faithful to the Word of God

So the weaknesses of the method are threefold: it does not account for all the data currently available; it will never have the whole dataset; and even when the work is finished, the analysis will still need to be interpreted by fallible scholars. Its biggest flaw, however, is that the analysis is being performed on a fraction of the dataset. Not only are defenders of the modern critical text defending an incomplete dataset, as the work is still ongoing, but the end product of the work itself is operating from an incomplete dataset. So to defend this method is to defend the conclusions of men on the analysis of an incomplete dataset of an incomplete dataset. The scope of the conclusions this method will produce will be limited to the manuscripts that we have today. And since there is an overwhelming bias in the scholarly world toward one subset of those manuscripts, it is more than likely that the conclusions drawn from the analysis will look very similar, if not identical, to the conclusions drawn by the previous era of textual scholarship (represented by Metzger and Hort). And even if these biases are crushed by the data analysis, the conclusions will be admittedly incomplete because the data is incomplete. Further, quantitative analysis will never be free of the biases of those who handle the data. Dr. Peter Gurry comments on one weakness in the method in his book A New Approach to Textual Criticism:

“The significance of the selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). 

Further, the method itself is not immune to error. Dr. Gurry comments that, “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That is to say that after all the computer analysis is done, the scholars making textual decisions can still draw incorrect conclusions on which text is earliest, selecting a later reading as earliest. In the current iteration of the Munster Method, there are already many places where, rather than selecting a potentially incorrect reading, the text is marked to indicate that the evidence is equally strong for two readings. These places are indicated by a diamond in the apparatus of the current edition of the Nestle-Aland text, produced in 2012. There are 19 of these in 1 and 2 Peter alone (see NA28). That is 19 places in just two books of the Bible where the Munster Method has not produced a definitive conclusion from the data. That means that even when the work is complete, there will be thousands of different conclusions drawn on which readings should be taken in a multitude of places. This is already the case in the modern camp without the application of the CBGM; a great example is Luke 23:34, where certain defenders of the modern critical text have arrived at alternative conclusions on the originality of this verse.

There is one vitally important observation that must be noted when it comes to the current effort of textual scholarship. The current text-critical effort, while the most sophisticated to date, is incapable of determining the earliest reading due to limitations both in the data and in the methodology. A definitive analysis simply cannot be performed on an incomplete dataset. And even if the dataset were complete, no dataset is immune to the opinions of flawed men and women.

An Additional Problem Facing the Munster Method

There is one more glaring issue that the Munster Method cannot resolve. There is no way to demonstrate that the oldest surviving manuscripts represent the general form of the text during the time period in which they are alleged to have been created (3rd – 4th century). An important component of quantitative analysis is securing a sample that is generally representative of the whole population of data. A loosely representative sample may be fine in statistical analysis of a general population, but the effort at hand cannot settle for that generic sort of precision, because what is being discussed is the Word of God, which is said to be perfectly preserved by God. That means that the sample of data being analyzed must be representative of the whole. The reality is that the modern method is performing an analysis of the earliest manuscripts, which do not represent the whole, against the whole of the surviving dataset.
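The sampling concern can be illustrated with a toy example. The numbers below are invented purely for illustration; the point is the general statistical principle that a statistic computed from a skewed, non-representative sample of survivors need not match the population it is meant to describe.

```python
# Toy illustration with invented numbers: the majority reading computed
# from a skewed surviving sample need not match the majority reading
# of the full population of copies that once existed.
from collections import Counter

# Suppose 100 copies once attested some variant unit...
population = ["reading_a"] * 90 + ["reading_b"] * 10

# ...but only 8 early copies survive, and those skew the other way.
surviving_sample = ["reading_b"] * 6 + ["reading_a"] * 2

def majority(readings):
    """Most frequently attested reading among a list of witnesses."""
    return Counter(readings).most_common(1)[0][0]

print(majority(population))        # reading_a
print(majority(surviving_sample))  # reading_b
```

No analysis of the eight survivors, however sophisticated, can recover the fact that the lost ninety-two leaned the other way; that information is simply not in the dataset.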

It is generally accepted among modern scholarship that the Alexandrian manuscripts represent the text form that the whole church used in the third and fourth centuries. This is made evident when people say things like, “The church wasn’t even aware of this text until the 1500s!” or “This is the text they had at Nicea!” Yet such claims are woefully lacking in any sort of proof, and in fact, the opposite can be demonstrated to be true. If it can be demonstrated that the dataset is inadequate as it pertains to the whole of the manuscript tradition, or that the dataset is incomplete, then the conclusions drawn from the analysis can never be said to be absolutely conclusive. There are two points I will examine to demonstrate the inadequacy of the dataset and methodology of the CBGM, which disallows it from being a final authority in its application to the original form of the New Testament.

First, I will examine the claim that the manuscripts generally known as Alexandrian were the only texts available to the church during the third and fourth centuries. This is a premise that must be proved in order to demonstrate that the conclusions of the CBGM represent the original text of the New Testament. In order to make such a claim, one has to adopt the narrative that the later manuscripts represented in the Byzantine tradition were a development, an evolution, of the New Testament text. In this narrative, the later manuscripts which became the majority were the product of scribal mischief and the revisionist meddling of the orthodox church, and not a separate tradition that goes back to the time of the Apostles. This narrative requires the admission that the Alexandrian text evolved so heavily that by the medieval period it had transformed into an entirely different Bible, with a number of smoothed-out readings and even additions of entire passages and verses, which were received by the church as canonical! Since this cannot be supported by any real understanding of preservation, the claim has to be made that the true text evolved and that the original remains somewhere in the texts that existed prior to the scandalous revision effort of Christians throughout the ages. This is why there is such a fascination surrounding the Alexandrian texts, and a determination by some to “prove” them to be original (which is impossible, as I have discussed).

That being said, can it be demonstrated that these Alexandrian manuscripts were the only texts available to the church during the time of Nicea? The simple answer is no, and the evidence clearly shows that this is not the case at all. First, the number of patristic quotations of Byzantine readings demonstrates the existence of other forms of the text of the New Testament which were contemporary with the Alexandrian manuscripts. One can point to Origen as the champion of the Alexandrian text, but Origen wasn’t exactly a bastion of orthodoxy, and I would hesitate to draw any conclusions other than the fact that after him, the church essentially woke up and found itself entirely Arian, or in some other form of heterodoxy as it pertained to Christ and the Trinity. Second, the existence of Byzantine readings in the papyri demonstrates the existence of other forms of the text contemporary with the Alexandrian manuscripts. Finally, Codex Vaticanus, one of the chief exemplars of the Alexandrian texts, is itself proof that other forms of the text existed at the time of its creation. This is chiefly demonstrated by the fact that there is a space the size of eleven verses at the end of Mark where text should be. This space completely interrupts the otherwise uniform format of the codex, which indicates that the scribes were aware that the Gospel of Mark did not end at, “And they went out quickly, and fled from the sepulchre; for they trembled and were amazed: neither said they any thing to any man; for they were afraid.” They were either instructed to exclude the text, or did not have a better copy as an exemplar which included it. In any case, they were certainly aware of other manuscripts that had the verses in question, which points to the existence of other manuscripts contemporary with Sinaiticus and Vaticanus.
Some reject this analysis of the blank space at the end of Mark as it applies to Sinaiticus (which also has a blank space), but there are additional reasons to draw the same conclusion nonetheless; see this article for more. James Snapp notes that “the existence of P45 and the Old Latin version(s), and the non-Alexandrian character of early patristic quotations, supports the idea that the Alexandrian Text had competition, even in Egypt.” Therefore it is absurd to claim that every manuscript circulating at the time looked the same as these two exemplars, especially considering the evidence that other text forms certainly existed.

Second, I will examine the claim that the Alexandrian manuscripts represent the earliest form of the text of the New Testament. It can easily be demonstrated that these manuscripts do not represent all of their contemporary manuscripts, but that is irrelevant if they truly are the earliest. Yet the current methodology has absolutely no right to claim that it is capable of proving such an assertion. Since the dataset does not include the other manuscripts that clearly existed alongside the Alexandrian manuscripts, one simply cannot draw any conclusions regarding the supremacy of those texts. One must jump from the espoused method to conjecture and storytelling to do so. Those defending the modern text often boldly claim that fires, persecution, and war destroyed a great number of manuscripts. That is exactly true, and it needs to be considered when making claims regarding the manuscripts that survived, and that clearly were not copied any further. One has to seriously ponder why, in the midst of the mass destruction of Bibles, the Alexandrian manuscripts were considered so unimportant that they weren’t used in the propagation of the New Testament, despite the clear need for such an effort. Further, these manuscripts are so heavily corrected by various scribes that it is clear they weren’t considered authentic in any meaningful way.

Even if the Alexandrian manuscripts do represent the “earliest and best”, there is absolutely no way of determining this to be true, due to the simple fact that the dataset from that time period is so sparse. In fact, the dataset from this period only represents a text form that is aberrant, quantitatively speaking. It is evident that other forms of the text existed, and though those copies no longer survive, their form survives in the manuscript tradition as a whole. The fact remains that there are no contemporary data points against which to even compare the Alexandrian manuscripts to demonstrate this to be true. Further, there are not enough second-century data points to compare the third- and fourth-century manuscripts against, in order to demonstrate that the Alexandrian manuscripts represent any manuscript earlier than the time they were created. It is just as likely, if not more likely, that these manuscripts were created as an anomaly in the manuscript tradition. The current methods simply are not sufficient to operate on data that isn’t available. This relegates any form of analysis to the realm of storytelling, which exists in the theories of modern scholars (the expansion of piety, scribal smoothing, etc.).


Regardless of which side one takes in the textual discussion, the fact remains that the critiques of the modern methodology as it exists in the CBGM are extremely valid. The method is primarily empirical in its form, and empirical analysis is ultimately limited by the data available. Since the available data will never be complete short of a massive first- and second-century manuscript find, the method itself will forever be insufficient to provide a complete analysis. The product of the CBGM can never be applied honestly to the whole of the manuscript tradition. Even if we found 2,000 second-century manuscripts, there would still be no way of validating that those manuscripts represent all of the text forms that existed during that time. As a result, the end product will simply provide an analysis of an incomplete dataset. It should not surprise anybody if the conclusions drawn from this dataset in 2032 simply look like the conclusions drawn by the textual scholarship of the past 200 years. This being the case, the conversation will be forced into the theological realm. If the modern methods cannot prove any one text to be authorial or original, those who wish to adhere to that text will ultimately be forced to make an argument from faith. This is already being done by those who downplay the significance of the 200-year gap in the manuscript tradition from the first to third centuries and say that the initial text is synonymous with the original text.

The fact remains that ultimately those who believe the Holy Scriptures to be the divinely inspired Word of God will still have to make an argument from faith at the end of the process. Based on the limitations of the Munster Method (CBGM), I don’t see any reason for resting my faith on an analysis of an incomplete dataset which is more than likely going to lean toward secular scholarship when all is said and done. This is potentially the most dangerous position on the text of Scripture ever presented in the history of the world. It is so dangerous because it says that God has preserved His Word in the manuscripts, while the method being used cannot ever determine which words He preserved.

The analysis performed on an incomplete dataset will be hailed as the authentic word(s) of God, and the conclusions of scholars will rule over the people of God. It is possible that there will be no room for other opinions in the debate, because the debate will be “settled”. And the settled debate will arrive at the conclusion of, “Well, we did our best with what we have, but we are still unsure what the original text said, based on our methods.” This effectively means that one can believe that God has preserved His Word and at the same time not have any idea what Word He preserved. The adoption of such conclusions will inevitably result in the most prolific apostasy the church has ever seen. This is why it is so important for Christians to return to the old paths of the Reformation and post-Reformation, which affirmed the Scriptural truth that the Word of God is αὐτόπιστος, self-authenticating. It is dishonest to say that the Reformed doctrine of preservation is “dangerous” without any evidence of this, especially considering that the modern method is demonstrably harmful.