There are a number of ways the textual criticism discussion goes awry. Sometimes the conversation is hyper-focused on textual variants and “textual data”; other times the topic of discussion is Erasmus or the Reformed. What is almost always ignored is what the Scriptures say. There is a reason Critical Text advocates never wish to talk about theology: the theology of the modern critical text system is completely bankrupt. It has strayed so far from Protestant orthodoxy that it shares similarities with Rome.
In this article, I will discuss why the Critical Text advocate cannot justifiably debate variants in relation to the Divine Original.
The Methodological Gap
The Modern Critical Text methodology, which is allegedly the only “meaningful” and “consistent” apologetic, has what the scholars call a “methodological gap.”
“The reason is that there is a methodological gap between the start of the textual tradition as we have it and the text of the autograph itself. Any developments between these two points are outside the remit of textual criticism proper. Where there is “no trace [of the original text] in the manuscript tradition” the text critic must, on Mink’s terms, remain silent.”
Peter Gurry. A Critical Examination of the Coherence-Based Genealogical Method. 93.
“There still remains a gap between the form of the text from which we conclude by critical examination that the extant witnesses must be descended and the yet older forms from which that oldest recoverable text must be descended…Recognizing that there is a gap between the oldest recoverable forms of the text and the creation of the work requires us to address one final topic…The New Testament philologist’s task is not to recover the original authorial text, not only because we cannot at present know on philological grounds what the original text might have been, nor even because there may have been several forms of the tradition, but because philology is not able to make a pronouncement as to whether or not there was such an authorial text”
(DC Parker. Textual Scholarship and the Making of the New Testament. Kindle Edition. 26-27).
This means that the text critical methodologies employed by modern scholars and apologists cannot speak to the authenticity of any variant in relation to the original because modern textual criticism isn’t designed to deal with the concept of the original.
The scholars and apologists are quick to brag about the “scientific” nature of textual criticism, but in doing this, they give up the ground necessary to actually defend any single textual variant as “authentic,” or the whole of Scripture for that matter. Once such a methodological gap is recognized, it must also be recognized that this gap prevents advocates of the Modern Critical Text from speaking to the authorial text of Scripture based on the “textual data.” The textual data is limited by the reality that nothing connects the “earliest and most reliable manuscripts” with the autographic text. There is no way to verify that the reconstructed text is the original text, hence the methodological gap.
This is where the discussion of textual variants is extremely misleading and even deceitful. When Critical Text advocates make claims about the “authenticity” of a variant reading, they have stepped away from their “consistent methodology” to argue from a totally different epistemic starting point which assumes the concept of the Divine Original. As noted above by DC Parker, even the concept of one authorial tradition is not certain because of this methodological gap.
The concept of an “authorial” or “original” text is something that is theological in nature. It is something that is assumed a priori from Scripture. If the Scriptures were inspired by God (2 Tim. 3:16), there is one text that was inspired, and therefore Christians argue for one inspired text. This concept is not something that can be demonstrated from the textual data, and it is something that has been increasingly called into question in today’s world of textual scholarship.
“Books and the texts they preserve are human products, bound in innumerable ways to the circumstances and communities that produce them. This is also true of the New Testament, despite its status as a uniquely transcendent, sacred text, held by some to be inspired by God…Even if the text of the Gospels could be fixed – and, when viewed at the level of object and material artifact, this goal has never been achieved – the purported meaning of texts also change…Paradoxically, attempts to edit and preserve these important books multiplies rather than settles the many forms in which they appear, as each generation revises both the New Testament and the Gospels in concert with its own aspirations, assumptions, theological perspectives, and available technologies.”
Jennifer Knust & Tommy Wasserman. To Cast the First Stone. 15-16.
The Bible, according to these so-called “Evangelical” textual scholars, is nothing more than a human product which reflects the communities of faith that produced it.
The methodological gap is the death of defending the Scriptures for Christians. It is an admission that no conclusion scholars and apologists arrive at can speak to the authenticity or originality of any given verse or word in Scripture.
Think about this methodological gap the next time you engage with a Modern Critical Text advocate. They will vigorously debate passages such as Mark 16:9-20 and 1 John 5:7, despite the fact that they have no “consistent” reason to do so. What they actually can debate is whether or not those verses or passages should be printed in our Modern Critical Texts, but that’s it. The nature of this modern text has no credentials, no pedigree. All that we know about these manuscripts is that they were created, and that a small number of them survived. We have no clue who created them or used them, or whether they were even among the manuscripts used by actual Christians. The methodological gap proves that modern critical text advocates have surrendered the ground necessary to defend any place of Scripture as authentic. It is simply inconsistent to do so, because the methodological and axiomatic foundation of the Modern Critical Text has nothing to say about the original, let alone whether there ever was one original.
When a Critical Text advocate tries to argue for authenticity, they are borrowing a concept that does not exist in their system from another system, one that is theological. They borrow from a system that asserts the concept of an original from Scripture, which some would call an “a priori” assumption. This a priori assumption is one which is not consistent with the modern critical methodology. As some popular apologists point out, this is the sign of a failed argument. It proves that if the point of the discussion is the Divine Original, the Modern Critical Text advocate has no consistent reason to contribute. While the goal of some Evangelical textual scholars may be the original, there is certainly nothing in the methodology that can actually make that happen.
That is why those in the Received Text camp say that there are no modern critical textual scholars trying to find the original, because a desire to find the original does not and cannot translate into anything tangible due to the methodological gap. Instead of rejecting the Modern Critical Text, scholars instead say, “No doctrines are affected,” and hope that Christians don’t think too hard about it. It is a failed system if the goal is the Divine Original, and scholars know it. So when a Modern Critical Text advocate tries to say that a passage or verse is not original, the simple response is, “What does your system have to do with the original?” They cannot argue such claims from their system, and that is the brutal reality that Modern Critical Text advocates continue to ignore.
Today’s church has been flooded with new ideas that depart from the old paths of the Protestant Reformation. This is especially true when it comes to the doctrine of Scripture. It is commonplace to adhere to the doctrine of inerrancy in today’s conservative circles and beyond. While it is good that many Christians take some sort of stand on Scripture, it is important to investigate whether or not the doctrine of inerrancy is a Protestant doctrine. The Reformers were adamant, when talking about the inspiration, authority, and preservation of Scripture, that every last word had been kept pure and should be used for doctrine, preaching, and practice. James Ussher clearly states the common sentiment of the Reformed.
“The marvelous preservation of the Scriptures; though none in time be so ancient, nor none so much oppugned, yet God hath still by his providence preserved them, and every part of them.”
(James Ussher, A Body of Divinity)
Most Christians would happily affirm this doctrinal statement. Those who are more familiar with the discussion of textual criticism may not, however. It is common to dismiss men like James Ussher along with other Westminster Divines on the grounds that they were not aware of all of the textual data and therefore were speaking from ignorance. Much to the discomfort of these Christians, textual variants did exist during this time, many of them the same ones we battle over today. The conclusion that should be drawn from this reality is not that the Reformed in the 16th and 17th centuries would have agreed with modern expressions of inspiration and preservation simply because we have “more data.” There is a careful nuance to be observed, and that nuance is in their actual doctrinal articulations of Scripture. This is necessarily the case, considering they were far more aware of textual variants than many would like to admit. Rather than attempting to understand the tension between the Reformed doctrine of Scripture and the existence of textual variants, it is commonplace to reinterpret the past through the lens of A. A. Hodge and B. B. Warfield, who reinterpreted Westminster Confession of Faith 1.8 to make room for new trends in textual scholarship. William G. T. Shedd, a professor at Union Theological Seminary in the 19th century and a premier systematic theologian, articulated the view of Hodge and Warfield well regarding the confessional statement “kept pure in all ages.” He writes,
“This latter process is not supernatural and preclusive of all error, but providential and natural and allowing of some error. But this substantial reproduction, this relative ‘purity’ of the original text as copied, is sufficient for the Divine purposes in carrying forward the work of redemption in the world” .
William G. T. Shedd, Calvinism: Pure and Mixed. A Defense of the Westminster Standards, 142.
While this is close to the Reformed position of the 16th and 17th centuries at face value, it is still a departure that ends up being quite significant, especially in light of the direction modern textual criticism has taken in the last ten years. For comparison, Francis Turretin articulates a similar thought in a different way.
“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”.
Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106.
It is plainly evident that the two articulations of the same concept are not exactly the same. That is to say, Turretin’s expression of the doctrine was slightly more conservative than Shedd’s. The difference is that the apographs, as Turretin understood them, were materially as perfect as the Divine Original. Turretin dealt at length with textual corruptions, as did his peers and those who followed after him, such as the Puritan Divine John Owen, and still affirmed that the “very words” were available to the church. In order to fit a modern view into the Reformation and Post-Reformation theologians, one must anachronistically impose a Warfieldian interpretation of the Westminster Confession onto those who framed it. There is no doubt that the Westminster Divines lived in the same reality of textual variants as Warfield and Hodge, and that they still affirmed a doctrine which said every jot and tittle had been preserved. Turretin and Warfield faced the same dilemma, yet Warfield restricted inspiration to the autographs alone, whereas the Reformed included the apographs as well. Rather than attempting to reinterpret the theologians of the past, the goal should be to understand their doctrine as it existed during the 16th and 17th centuries, when the conversation about textual variants was just as alive as it is today.
A Careful Nuance
In order to examine the difference between the doctrine of Scripture from the Reformation to today, it’s important to zoom out and see how Warfield’s doctrine developed into the 21st century. The Doctrine of Inspiration, as it is articulated today, only extends to the autographic writings of the New Testament. I will appeal to Andrew David Naselli’s explanation from his textbook, How to Understand and Apply the New Testament, which has received high praise from nearly every major seminary.
“The Bible’s inerrancy does not mean that copies of the original writings or translations of those copies are inerrant. Copies and translations are inerrant only to the extent that they accurately represent the original writings.”
Andrew David Naselli. How to Understand and Apply the New Testament. 43.
This statement is generally agreeable, if we assume that there is a stable Bible in hand, and a stable set of manuscripts or a printed edition which is viewed as “original.” Unfortunately, neither of these exists in the world of the modern critical text. Not only do we not have the original manuscripts, there is no finished product that could be compared to the original. Since the effort of reconstructing the Initial Text is still ongoing, and since we do not have the original manuscripts, this doctrinal statement made by Naselli does not articulate a meaningful doctrine of inspiration or preservation. In stating what appears to be a solid doctrinal statement, he has said nothing at all. In order for this doctrine to have significant meaning, a text that “represents the original writings” would need to be produced. That is why the Reformed in the 16th and 17th centuries were so adamant about their confidence in having the original in hand. In order for any doctrine of Scripture to make sense, the Scriptures could not have fallen away after the originals were destroyed or lost. Doctrinally speaking, the articulation of the doctrine of Scripture demonstrated by Turretin and his contemporaries is necessary because it affirms that God providentially preserved the Scriptures in time and that they had access to those very Scriptures. If the modern critical text claimed to be a definitive text, like the Reformed claimed to have, the modern articulation of the doctrine of Scripture might be sound, but there is no modern critical text that exists as a solid and stable object. It is clear that the doctrine of Scripture, and the form of the Scriptures, cannot be separated, or the meaning of that doctrine is lost. In order for doctrine to be built on a text, the text must be static. If we are to say that the Bible is inerrant insofar as it represents the original, there must be 1) a stable text and 2) an original to compare that text against.
Since neither 1 nor 2 is true, Naselli, along with everybody who agrees with him, has effectively set forth a meaningless doctrinal standard as it pertains to Scripture.
This means that the Reformed doctrine of Scripture is intimately tied to the text they considered to be authentic, inspired, and representative of the Divine Original. The text they had in hand was what is now called the Received Text. Whether it was simply a “default” text does not change the reality that it was the text these men of God had in their hands. It is abundantly clear that the doctrine of Scripture during the time of the Reformation and Post-Reformation was built on the TR, just like the modern doctrine of Scripture is built on the modern critical text and the assumptions used to create it. Further problems arise with the modern doctrine of Scripture when the effort of textual scholarship shifted from trying to find the original text to finding the initial text. Due to this shift, any articulation of the doctrine of Scripture which looks to the modern critical text is based on a concept that does not necessarily exist in modern textual scholarship. The concept of the “original” has fallen out of the sights of the editorial teams of Greek New Testaments; therefore, it is necessary to conclude that doctrinal statements which rely on the outdated goal of finding the “original” must also be redefined. What this means practically is that there are not any doctrinal statements in the modern church which align with the doctrines used to produce modern Bibles.
Due to the doctrine of Scripture being intimately tied to the nature of the text it describes, the various passages of the New Testament which have been considered inspired have changed throughout time, and they are going to continue changing as the conclusions of scholars vary from year to year. If we take Naselli’s articulation of the doctrine of Scripture as true, this means that there is not one inerrant text of Holy Scripture; there are as many as there are Christians who read their Bible. So in a very real sense, according to the modern articulation of inspiration, the inspired text of the New Testament is not a stable rule of faith. It is only stable relative to crowd consensus, or perhaps at the individual level. A countless multitude of people who adhere to this doctrine of inspiration make individual rulings on Scripture, which effectively means that the Bible is given its authority by virtue of the person making those decisions. Thus, the number of Bibles which may be considered “original” is as numerous as the number of people reading Bibles. It is due to this reality that the modern doctrine of Scripture has departed from the Reformation era doctrine in at least two ways. The first is that by “original,” the post-Warfield doctrine means the autographs, which no longer exist, and excludes the apographs. The second is that the Bible is only authoritative insofar as it has been judged authoritative by some standard or another. This combination contradicts any doctrine that would have the Scriptures be a stable rule for faith and practice. It is because of these differences that it can be safely said that while the doctrinal articulations may sound similar, they are not remotely the same.
The Reformed doctrine of Scripture in the 16th and 17th centuries is founded upon two principles that are different from those of the post-Warfield era. The first principle of the Reformed is that the Scriptures are self-authenticating, and the second is that they considered the original to also be represented and preserved in the text they had in hand. Therefore it seems necessary to understand the Reformation and Post-Reformation Divines through a different lens than the modern perspective, because the two camps are saying entirely different things. A greater effort should be made to understand what exactly the Reformed meant by “every word and letter” in relationship to the text they had in hand, rather than imposing the modern doctrine upon the Reformation and Post-Reformation divines.
The goal of this conversation should be to instill confidence in people that the Bible they are reading is indeed God’s inspired Word. Oftentimes it is more about winning debates and being right than actually giving confidence to Christians that what they have in their hands can be trusted. It is counterproductive for Christians to continue to fight over textual variants in the way that they do, especially considering the paper-thin modern articulations of the doctrine of Scripture. It is stated by some that receiving the Reformation era Bible is “dangerous,” yet I think what is more dangerous is to convince somebody that they should not trust this Bible, which is exactly what happens when somebody takes the time to actually explain the nuances of modern textual criticism. These attacks are especially harmful when the Bible that is attacked is the one that the Protestant religion was founded upon, and the only text that carries with it a meaningful doctrine of Scripture. Christians need to consider very carefully the claims made about the Reformation era text which say it is not God’s Word, or that it is even dangerous to use. I cannot emphasize enough the harm this argument has done to the Christian religion as a whole. The constant effort to “disprove” the Reformation era text is a strange effort indeed, especially if “no doctrines are affected.” The alternative, which has been a work in progress since before 1881, and is still a work in progress today, offers no assurance that Christians are actually reading the Bible. In making the case that the Received Text and translations made from it should not be used, critics have taken one Bible away and replaced it with nothing but uncertainty.
The claim made by advocates of the Received Text is simple, and certainly not dangerous. The manuscripts that the Reformed had in the 16th century were as they claimed – of great antiquity and highest quality. The work done in that time resulted in a finished product, which continued to be used for hundreds of years after. That Bible, in its various translations, quite literally changed the world. If the Bible of the 16th–18th centuries is so bad, I cannot understand why people who believe it to be a gross corruption of God’s Word still continue to read the theological works of those who used it. Further, it is difficult to comprehend how a Bible that is said to accomplish the same purpose as modern Bibles would be so viciously attacked by those who oppose it. If all solid translations accomplish the same redemptive purpose, according to the modern critical doctrine, why would it make any sense to attack it? After spending ten years reading modern Bibles, I simply do not see the validity of the claim that the Reformation era text is “dangerous” in any way. Christians do not need to “beware” of the text used by the beloved theologians of the past. At the end of the day, I think it is profitable for Christians to know that traditional Bibles are not scary, and have been used for centuries to produce the fullest expression of Christian doctrine in the history of the world. When the two doctrinal positions are compared, the axioms of Westcott and Hort, or Metzger, or even the CBGM, hold no strong appeal. They are all founded on a vacuous doctrine of Scripture which requires that the current text be validated against the original, which cannot be done. There is no theological or practical value in constantly changing the words printed in our Bibles, and this practice is in fact detrimental to any meaningful articulation of what Scripture is. I have not once talked to anybody who has been given more confidence in the Word of God by this practice.
In fact, the opposite is true in every real life encounter I’ve had.
It is said that the Received Text position is “pious” and “sanctimonious”, but I just don’t see how a changing Bible, with changing doctrines, is even something that a conservative Christian would seriously consider. If Christians desire a meaningful doctrine of Scripture, the modern critical text and its axioms are incapable of producing it.
It is easy to get bogged down in conversations about textual variants, manuscripts, and esoteric terminology when it comes to any talk about textual criticism. These types of conversations prevent the average Christian from entering the discussion, and so it is common to simply side with a favorite pastor or scholar. Fortunately, the conversation is not as complicated as many make it seem. It is true that in order to analyze a variant or read a manuscript, an understanding of the Greek language and a general knowledge of textual scholarship are required. This should cause the average Christian to pause and consider that reality. Should every Christian need to learn Greek and study textual criticism in order to read their Bible? Does that sound like something that God would require of His people in order to read His Word? Does God require papal or scholarly authority for His people to know which verses are authentic?
Those who advocate for this have made a serious error in their understanding of the availability of the Scriptures. They have imposed a burdensome standard upon the Holy Scriptures which places the barrier of New Testament scholarship between the average Christian and their Bible. This cumbersome gatekeeping tool has informed Christians everywhere that unless they have a PhD in text-critical studies and know Greek, they are simply unequipped to determine which Bible they should read, or which variants within those Bibles can be trusted. This common idea has introduced a neo-papacy within the Protestant church, which tells Christians that they must wait for scholars or pastors or apologists to speak ex cathedra before trusting any verse in their Bible.
Is it Really That Complicated?
Not really, no. The direction of modern textual criticism has refuted itself in that it readily admits it cannot find the original text of the New Testament. In other words, its methods have failed. In order to obfuscate this reality, scholars have shifted the effort to finding the Initial Text, which is really just a presuppositional effort to produce a hypothetical (non-existent) archetype from the smattering of Alexandrian manuscripts. This is the first common sense argument against the Modern Critical Text – it doesn’t claim to be the original text, and the methodologies being employed cannot and do not make any certain claims about producing the original text. So for any Christian who wants to “know what Paul wrote,” the modern methods aren’t claiming to provide that kind of certainty. That kind of certainty is only provided if a scholar or somebody else speaks authoritatively over a text for the people of God. This being the case, Christians need to pick a pope to decide for them whether Luke 23:34 really is original, because the popes disagree. If the Protestant religion is truly a religion of Sola Scriptura, this simply does not work. It is the same argument the Papists make, only the pope is exchanged for a scholar. If a Christian is okay with maybe knowing what Paul wrote, I present a second common sense argument against the Modern Critical Text.
If you are fond of the argument that the New Testament is the best attested piece of literature in antiquity, boasting thousands of manuscripts compared to other works such as the Iliad, then the Modern Critical Text fails that criterion. The only text platforms that can use this argument are texts that represent the vast majority of manuscripts, such as a Byzantine priority or Traditional Text based Bible. The Modern Critical Text is based primarily on two manuscripts, which means that the apologetic which says that we have thousands of manuscripts isn’t true for the Modern Critical Text. One would have to say that the New Testament is only supported by fewer than fifty manuscripts, which makes it one of the least attested books in antiquity. The narrative of transmission presented by the modern critical scholars says that the rest of the thousands of manuscripts were byproducts of scribal smoothing and orthodox revision. In supporting these modern texts, one has to accept the claim that the vast majority of the 6,000 manuscripts we have were the product of scribal revision and orthodox tampering, and do not testify to a preserved Bible. In fact, this is the common opinion of the men and women engaged in actual textual scholarship. This reality transitions quite nicely to the third common sense argument against the Modern Critical Text.
Christians should be confident that the thousands of manuscripts, when compared and edited together, testify to the authentic New Testament. The fact that these manuscripts were copied so much and were used so heavily throughout time should tell a story that is often brushed over by modern scholarship. The story is that these manuscripts, or a comparison of these manuscripts, were always treated as authentic throughout time. In fact, the manuscripts used by Erasmus represent the majority of manuscripts far more closely than the Modern Critical Text does. While I don’t believe that simply counting manuscript readings produces an original text without any further consideration, it is a good place to start in rejecting the few spurious texts that the Modern Critical Text is based on.
A common sense methodology would also admit that we do not have every manuscript surviving today, and that the testimony of the people of God throughout time should also be considered so that not one word is lost from the Holy Scriptures. In terms of data analysis, the small number of data points that the Modern Critical Text represents should be considered an outlier. So is it the case that a few manuscripts which did not survive in the manuscript tradition are original? Or is it more likely that the vast majority of manuscripts, when compared, represent the original? In order to responsibly represent the case for the Modern Critical Text, one has to tell a tale in which the New Testament evolved over time and became so corrupt that nobody alive today really knows what the original said. Thus the modern effort is focused on producing a hypothetical archetype for these outlier texts. The modern method assumes that the thousands of manuscripts are corrupt evolutions of the original text. That leads us to a fourth common sense argument against the Modern Critical Text.
It technically could be true that the handful of early surviving manuscripts represent the original text of the New Testament. Simply counting readings does not necessarily prove originality. There are a handful of readings that the people of God have considered original throughout time which are no longer available in the majority of manuscripts. That is not proof, however, that these now-minority readings were not the majority at one point in time, or considered authentic despite not being the majority. God never promised to preserve the majority text in every case; He simply promised that He would preserve His Word until the Last Day. The majority text simply testifies to a different text than the “earliest and best,” and the opinions of the people of God throughout time should serve as a way to understand which readings were considered authentic throughout time. The first time this was ever done on a large scale was during the 16th century, when the printing press was made available to theologians and scholars.
So the work during the 16th century was taking place while manuscripts were still being used and copied in churches. The common sense argument is that those people had better access to the manuscripts that were circulating and considered authentic than we do today. After the Bible shifted from existing in hand-copied codices to printed editions, the hand-copied manuscripts were used less, and began being submitted to museums and libraries rather than being used in churches. The texts that the people of God used were no longer in manuscript form, but printed editions of those collated manuscripts. The simple reality is that in the modern period, the manuscripts are artifacts of a time before the printing press. Almost nobody has used a manuscript in a church for centuries, so the evaluation of those manuscripts is difficult without the testimony of the people who actually used them. Thus, the final common sense argument recognizes that the earliest surviving manuscripts are not a standard that anybody would use from the perspective of God preserving His Word.
The final common sense argument is that the manuscripts used in the first effort of textual criticism do represent the best form of the New Testament as it was preserved in the manuscript tradition. That is because until the printing press, these handwritten codices were actually used in churches by the people of God. Compare this to the opinion that a smattering of heavily corrected manuscripts, scarcely copied after the fourth century, are “earliest and best”. At the time of the first printed editions, the textual scholars of the day had the best insight into the manuscripts that were actually being used, regardless of whether they were majority or minority texts. In order to reject the text-critical efforts of the 16th century, one has to believe that texts were chosen which nobody was using or had ever used. This stands in opposition to history, however, as Erasmus was heavily influenced by readings that would be received by all. Popular opinion often influenced Erasmus in his text-critical decisions. That is the real story behind his inclusion of 1 John 5:7 in his third edition of the Novum Testamentum. He did not lose a bet; he feared that people wouldn’t use his Greek New Testament if he didn’t include it.
Based on common sense arguments, which makes more sense? Did the textual scholars who were doing text-critical work while manuscripts were still in use have better insight into which manuscripts are best? Or do modern textual scholars, who only have access to manuscripts in museums and libraries, know which texts are best? Is it more likely that God hid away His Word for a thousand years in a handful of manuscripts, or that He preserved His Word in the manuscripts that were actually being used by the people of God? These are all questions that any layperson should be able to answer. It does not take a PhD in textual studies to determine that the Modern Critical Text starts in the wrong place, with the wrong manuscripts.
The common sense conclusion is that the texts used in the production of the first printed editions represent the best form of the manuscript tradition that has ever existed. After this point in time, manuscripts were sent to libraries and museums, and the printed Greek New Testament became the form that the people of God used. These printed forms were translated into various common languages and used with little to no contest for the next 300 years, until modern theories of scribal tampering caused people to throw out the work of the 16th century. The claim that “we have more data” really does not mean a whole lot, considering we have less perspective on the value of said data. At the end of the conversation, one has to ask, “How valuable is the data that was hidden in caves and barrels?” Is the data that was not being used more important, or is the data that was being used more important? Modern scholars consent to the former; the scholars of the 16th century consented to the latter.
In order to conclude that modern scholars have a better perspective on the data, one must write off the perspective of Augustine, who said, “Certain persons of little faith, or rather enemies of the true faith, fearing, I suppose, lest their wives should be given impunity in sinning, removed from their manuscripts the Lord’s act of forgiveness to the adulteress, as if He who had said, ‘sin no more’ had granted permission to sin.” One must claim that Calvin and Beza were either liars, or confused and mistaken. One must declare that Turretin would have upheld the readings he rejected if “he simply had access to the data we have today”. It takes an effort of revisionist history to believe that the believing people of God would have adopted the Modern Critical Text. The simple common sense conclusion is to read these theologians and scholars as though they weren’t fools, and determine that they simply disagreed with modern conclusions. Erasmus, Beza, Stephanus, Calvin, Turretin, Gill, and Dabney did not think anything of the Vatican Codex and manuscripts like it. In fact, they considered such manuscripts a grotesque corruption of God’s Word. Based on the testimony of the people of God in time, which side is spinning tales and mythology? Is it not the people who say that the Word of God evolved and became corrupted beyond repair? I heartily disagree with them, and affirm with the theological giants of the past that God has preserved His Word in the Received Text.
First, I want to acknowledge and commend the irenic spirit of Dr. Mark Ward as he presented a refutation of the position he calls “Confessional Bibliology” in his lecture posted on September 27, 2019. For those who are readers of my blog, I have referred to this position as “The Confessional Text Position”, and I believe that Confessional Bibliology is an appropriate and charitable label, over and above “Textual Traditionalism” or “KJV Onlyism”. [EDIT: Ward has decided to call this position “KJV Only” anyway. We can’t all be winners.] It is important to remember that this is an intrafaith dialogue, and I hope that my handling of his lecture will rise to the same level of integrity as brother Ward’s. Dr. Ward’s presentation is thorough, scholarly, and befitting of a Christian, unlike many similar presentations. This is evident in that he freely discusses Pastor Jeff Riddle and Pastor Truelove without character defamation, misrepresentation, or name calling. I do acknowledge that some have treated Dr. Ward uncharitably in various groups, and I want to point out that I have had nothing but positive interactions with him (though brief). It is clear that he is a dear brother in the Lord, despite our disagreement in this one area.
That being said, I do see some potential problems with his presentation that I would like to address. My goal is to emphasize, as Dr. Ward seems to do, that this conversation primarily finds its application pastorally, not text-critically. This is not about being right and defeating each other; it is about giving confidence to Christians that they have God’s Word. As a pastor, my sincere intention is to offer a position that can accomplish that goal. All of the text-critical work in the world is useless if our hearts are not first focused on instilling men and women with confidence in their Bible, reassuring them that every word they read is “Thus saith the Lord”. The main focus of my critique is that the presentation proceeds backwards. It begins at a surface level and then stays there, brushing over the fundamental issue which divides the two camps so definitively.
Do the Minor Differences Between the CT and TR Give Cause for Abandoning the TR?
In Dr. Ward’s presentation, there was a major effort to highlight the differences within the printed editions of the Received Text, rather than discussing the major differences between the Received Text and the Critical Text, differences which result in two entirely different forms of text. I will argue that downplaying the differences between the Received Text and the Critical Text does not frame the discussion in its proper place, and that makes it difficult to interact with the nuances of the presentation in a meaningful way. The problem is not initially about the minor differences within printed texts; it is that these two texts represent entirely different Bibles and two different methodologies.
Dr. Ward’s approach neglects the implications for the doctrine of preservation by focusing on the “jot and tittle” component of the Confessional Text position, which certainly deserves to be fleshed out further down the line. He rightfully comments that the missing sections at the end of Mark and in John 8 are a “serious threat” to the critical text. That seems like the appropriate problem to tackle prior to getting into the minutiae, which Dr. Ward carefully catalogs in his presentation. Given that we both believe God has preserved His Word, it seems imperative to answer how one can uphold a meaningful doctrine of preservation while affirming two text platforms which disagree in major ways. If both sides can cross the bridge and agree that this poses difficulties to even the loosest definitions of preservation, there may be a great opportunity for a fruitful discussion about minor variations at some point from a believing perspective.
Which is to say that it is problematic for Dr. Ward’s critique to insist that God preserved two forms of the Bible. I argue frequently that the only reason there is so much tension in this discussion is that modern critical text advocates continue to present the smattering of Alexandrian manuscripts as “earliest and best”, despite no evidence for such a claim other than that they are the oldest surviving manuscripts. Even modern textual scholarship has demonstrated that original readings can indeed present themselves in later manuscripts.
If this handful of idiosyncratic texts is viewed as tertiary within the manuscript tradition (or not properly seated within the tradition at all), this conversation becomes much simpler. The rise of modern textual scholarship introduced this problem to the church by allowing manuscript types which had been rejected historically to be valued so highly. It is important to acknowledge that the Received Text did not introduce this problem; modern scholarship did when it declared that the Reformation era text needed to be thrown out. A consistent application of Dr. Ward’s presentation should conclude in the Received Text and the KJV being dismissed wholesale, as they represent an entirely different text form.
Since Dr. Ward did not suggest that, it is important to understand that textual decision making is done from a completely different perspective by the Confessional Bibliology group than by modern textual scholarship. It is easily demonstrated that the base manuscripts on which the modern eclectic text and the Received Text are built represent different forms altogether. So the difference is not necessarily in the amount of data, but in the methodology itself which accepts this data into the manuscript tradition. Much time is spent discussing whether or not the Post-Reformation Divines would have accepted this new data, and here is where Dr. Ward and I disagree fundamentally. I do not believe that the Post-Reformation Divines would have adopted the modern critical perspective, even if presented with the new data.
Francis Turretin comments on what Dr. Ward presents as a chief problem for the Confessional Text position – the problem of variants as it pertains to “every jot and tittle”.
“A corruption differs from a variant reading. We acknowledge that many variant readings occur both in the Old and New Testaments arising from a comparison of different manuscripts, but we deny corruption (at least corruption that is universal)” (Institutes of Elenctic Theology, Vol.I, 111).
So it is not chiefly a problem with variants, but the actual text form and the modern perspective that certain passages have been totally corrupted. Turretin continues.
“There is no truth in the assertion that the Hebrew edition of the Old Testament and the Greek edition of the New Testament are said to be mutilated; nor can the arguments used by our opponents prove it. Not the history of the adulteress (Jn. 8:1-11), for although it is lacking in the Syriac version, it is found in all the Greek manuscripts. Not 1 Jn. 5:7, for although some formerly called it into question and heretics now do, yet all the Greek copies have it, as Sixtus Senensis acknowledges: “they have been the words of never-doubted truth, and contained in all the Greek copies from the very times of the apostles” (Bibliotheca sancta, 2:298). Not Mk. 16 which may have been wanting in several copies in the time of Jerome (as he asserts); but now it occurs in all, even in the Syriac version, and is clearly necessary to complete the history of the resurrection of Christ” (Ibid. 115).
Turretin explicitly mentions “several copies in the time of Jerome”, which happens to be the time that Codex Vaticanus and Sinaiticus are said to have been produced. Whether he is explicitly referring to these two manuscripts or not, the unavoidable reality is that these two copies represent the form of text he is talking about – namely, a text missing those three passages. The minor variants discussed in Dr. Ward’s presentation are not of a mutilating nature, but the two variants he lists as problematic certainly are. So to accept manuscripts, and readings from manuscripts, bearing this form is to depart methodologically in a major way. The conversation over which jots and tittles are original may be profitable if this can be admitted, as the number of jots and tittles to be discussed would shrink massively.
Does Confessional Bibliology Reject Decision Making?
In short, no. Those who advocate for this position do not balk at the “Which TR?” question, because it fundamentally misses the point of the argument itself. I will acknowledge, however, the validity of the question from his perspective. While Dr. Ward provides a thorough presentation of the 11 types of variations between the printed editions of the Received Text, the conclusions of his argument do not demonstrate that the effort of modern textual scholarship is in the same category as Reformation era textual scholarship.
He is absolutely correct in saying that variations exist between printed editions of the TR, and he points out that there are just as many editions of the Nestle-Aland text (with many more to come!). The most important point to interact with, however, is his critique that the KJV is not its own form of the TR. Dr. Ward wrongly assumes that ultimately, when the conversation is stripped down to its bare components, the Confessional Bibliology argument is the same as the KJV Only argument (excluding Ruckman). I will note that I do not consider this to be any sort of serious error, just a matter of nuance that I believe was overlooked. Confessional Bibliology advocates read translations other than the KJV, so it is a bit of a misrepresentation to call them KJVO. It would be the same as calling somebody who prefers and reads the ESV an ESV Onlyist, despite that person viewing the NASB as a fine translation of the critical text.
While there are some within the Confessional Bibliology group who believe that some form of textual criticism is still necessary, most, as Dr. Ward points out, agree that the Scrivener edition of the Received Text, which represents the textual decisions of the KJV translators, is “the” Received Text. This is due to the nature of the argument from God’s providence, as well as the exposure of the text to the people of God as it happened in history. This argument does not seem so far-fetched when it is not hedged within the context of modern critical scholarship, though I am fully aware of the critiques of this position. It is not as though the KJV translators were moved along by the Holy Spirit, or reinspired, but that their textual decisions represented a century’s worth of scholarship, dialogue, and corporate reception of certain texts within the Received Text corpus. This is made evident in the vast number of commentaries and theological works which use the Received Text of the Reformation.
In short, the Scrivener text is not the best representation of the Received Text by virtue of the King James translation team, but rather by virtue of the reception of those readings by the people of God. Were it the case that those readings were rejected, like the readings Erasmus examined from the Vatican codex, we might be right in following the argumentation of Dr. Ward. The fact stands that not only did Erasmus reject those readings, but all of the Reformed textual scholars and theologians who came after him did so as well, even commenting on manuscripts missing the ending of Mark. Jan Krans notes the fundamental difference between modern textual scholarship and the method of Beza in his work, Beyond What is Written.
“In Beza’s view of the text, the Holy Spirit speaks through the biblical authors. He even regards the same Spirit’s speaking through the mouth of the prophets and the evangelist as a guarantee of the agreement between both…If the Spirit speaks in and through the Bible, the translator and critic works within the Church. Beza clearly places all his text critical and translational work in an ecclesiastical setting. When he proposes the conjecture (‘wild pears’) for (‘locusts’) in Matt 3:4, he invokes ‘the kind permission of the Church’” (328-329).
The point is this – it is not that the Confessional Bibliology group rejects textual decision making; they reject textual decision making in the context of modern textual scholarship. Within the Confessional Bibliology camp, there are vibrant and healthy discussions on this matter, which have resulted in the mass adoption of the Scrivener text. The problem occurs when this is conflated with Reconstructionist Textual Scholarship, which, when applied to a text, results in its complete deconstruction and devaluation. The conversation simply cannot happen in a healthy way in a context that takes 15 miles when given an inch.
This is chiefly exemplified in the fact that a decision made on a variant that does not affect meaning is compared to removing 11 verses from Scripture. Categorically, those are not the same thing. I appreciate Dr. Ward’s care in presenting the minor variations, but those are not the problem at a fundamental level (unless one chooses to make them a problem unnecessarily). That is also assuming that a decision cannot be made, or has not been made, on the handful of significant variations that exist within the editions of the Received Text. Had the KJV translators produced a printed edition of the textual decisions they made, this conversation likely would not be happening, and the claim that the text as represented by the 1881 Scrivener text is an “English Greek New Testament” would not be taken seriously. This was the conclusion of Dr. Hills as well, that the textual decisions of the KJV can rightfully be considered their own “TR”, which Dr. Ward acknowledges, but seems to disagree with.
I appreciate that Dr. Ward has seated the conversation within the context of the believing church. This is a huge upgrade from the vast majority of the discussion, which exists in the world of secular scholarship. The goal of this article is not to slam Dr. Ward, or necessarily to say that I have refuted him, but rather to point out that there is a major stumbling block standing in the way of bridge-crossing. A simple critique of Dr. Ward’s argument is that it fails to recognize the two distinct text forms held by the respective positions. If we were dealing with one text form with minor variations, we might be able to understand Turretin and Owen’s commentary on the text more readily, and Dr. Ward’s presentation might be more applicable to those who subscribe to Confessional Bibliology. But since the church of that era rejected manuscripts like Vaticanus, while in the modern era the Bibles are all built on top of Vaticanus, the effort of bridge-crossing may be more tedious. Until the people of God seriously consider the direction of modern textual scholarship and its wholesale abandonment of the Original Text for the Initial Text, it may be difficult to find the kind of agreement Dr. Ward desires in his presentation.
At the end of this analysis, I hope that all can see that while there is a fundamental disagreement that may stand in the way of bridge-crossing, it is not so great that we cannot treat each other with the brotherly kindness and respect fitting for those who claim Christ. The fact stands that not all Bibles are created equal, and despite modern Bibles generically looking like Bibles made from the Received Text, they depart in major places which do indeed affect doctrine, such as John 1:18 and Mark 16:9-20. It would also be a different conversation if both forms of the text were stable, but the modern text is not. The modern text-critical effort is only accelerating in the direction of uncertainty as the ECM is implemented (see 2 Peter 3:10 and the number of diamonds in the Catholic Epistles of the NA28). I’ll end with this quote by textual scholar DC Parker, which I find accurately assesses the nature of the modern critical text.
“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.”
So it has been said that the CBGM has been able to “get us to 125AD” as it pertains to the New Testament manuscripts with its analysis – or at least in Luke 23:34. Anybody who makes such a claim clearly has no working understanding of the Münster method, or is at least choosing to use an invisible rod to bash people over the head. In any case, I thought it would be helpful to examine some potential weaknesses of the methodology in a series of articles. To begin, I thought I would discuss the reality that the CBGM is still in need of critical analysis. Dr. Peter Gurry, in his work, A Critical Examination of the Coherence Based Genealogical Method, part of the Brill academic series New Testament Tools and Studies, writes, “Despite the excitement about the CBGM and its adoption by such prominent editions, there has been no sustained attempt to critically test its principles and procedures” (2).
So my advice to any who believe the bold claim that the CBGM can “get us to 125AD” is to put on their discernment ears and wait until 2032, when the effort can be accurately examined in full. If its use in analyzing the Catholic Epistles is any indication of the kind of certainty it will provide, I direct the reader to open their Nestle-Aland 28th Edition, if they own one, and examine the readings marked with a black diamond. It should be loudly noted that the methodology of the CBGM has not been fully examined, and I agree with Dr. Gurry when he writes, “If the method is fundamentally flawed, it matters little how well they used it” (4).
The CBGM and the Initial Text
Before the Christian church preemptively buys into this method wholesale, it is important to first recognize that there is not uniform agreement, even in this early stage of the CBGM’s implementation, that this methodology will result in establishing what is being called the Initial Text. Bengt Alexanderson, in his work, Problems in the New Testament: Old Manuscripts and Papyri, the New Coherence-Based-Genealogical Method (CBGM) and the Editio Critica Maior (ECM), writes, “I do not think the method is of any value for establishing the text of the New Testament” (117). What should be noted loudly for those who are falling asleep is that a significant shift has occurred under the noses of laypeople in the effort of textual scholarship as it pertains to the New Testament text.
That shift is the abandonment of the search for the Original or Authorial text in favor of the pursuit of what is being called the Initial Text. Dr. Gurry writes, “These two terms [authorial or original text] have often been used interchangeably and their definition more often assumed than explained. Moreover, that this text was the goal of the discipline remained generally undisputed until the end of the twentieth-century. It was then that some scholars began to question whether the original text could or should be the only goal or even any goal at all” (90, bracketed material added). Regardless of whether this is the method one decides to advocate for, let it be said that this is indeed a shift, for better or for worse. Dr. Gurry continues, “Rather than clarify or resolve this debate, the advent of the CBGM has only complicated the matter by introducing an apparently new goal and a new term to go with it: Ausgangstext, or its English equivalent ‘initial text’” (90-91). The problem of defining terms will always gray the bridge between academia and the people, so hopefully this article helps color in the gap.
While the debate rages on between the scholars as to how the Initial Text should be defined, I will start by presenting what might be considered the conservative understanding of it and work from there. Gerd Mink, who is credited with the first use of the term Ausgangstext, employs the term to mean “progenitor” or the “hypothetical, so-called original text” (92). That is to say that the goal of the CBGM, in theory, is to produce the text from which the rest of the manuscripts flowed. This sounds great in theory, but there remains a great distance between saying that the CBGM should produce this Initial Text and saying that the CBGM has produced this Initial Text. In any case, the terminology “Original Text” is not employed in the same way as it was historically, and there is much deliberation as to whether Mink’s proposed definition will win out over those who wish to define it more loosely.
Based on my experience with systems, a definition of the term as “the text from which the extant tradition originates” (93) is much more precise and descriptive of what the method is capable of achieving. Any talk of whether or not the Initial Text represents the Original Text is merely speculation at this point, and I argue it will remain speculation when the effort is complete. This of course requires a more humble assessment of the capabilities of the CBGM, in that an empirical method is only good for analysis of that which it has in its possession. Which is to say that, methodologically speaking, there is still a gray area of about 300 years or more between the time the earliest extant manuscripts are dated and the time the original manuscripts were penned, depending on how early one dates the earliest complete manuscripts. This is what I have been calling the “Gray Area between the Authorial and Initial Text”, or “The Gray Area” for short. Dr. Gurry has defined it as the historical gap (100). I suspect that this gray area will be the focus of all discussion pertaining to the actual value of the ECM by the time 2032 arrives.
The Gray Area Between the Authorial and Initial Text
Any critique of the CBGM is incomplete without a sincere handling of the Gray Area between the Original and Initial Text. Until that conversation has happened, it is rather preemptive to draw any conclusions such as, “The CBGM can get us to about 125AD”. Dr. Gurry writes, “The reason is that there is a methodological gap between the start of the textual tradition as we have it and the text of the autograph itself. Any developments between these two points are outside the remit of textual criticism proper. Where there is ‘no trace [of the original text] in the manuscript tradition’ the text critic must, on Mink’s terms, remain silent” (93).
This is understandably a weakness of the methodology itself, if one expects the methodology to produce a meaningful text. Dr. Gurry continues, “Mink’s statement that the initial text ‘should not necessarily be equated with any actual historical reality’ is best read as a way to underscore this point” (93). I propose that it is of greatest importance that Christians begin discussing the Gray Area between the Original Text and the Initial Text now, as it is outside the interest of the text-critic proper. Yes, this discussion is most certainly a theological one, as much as that might pain those who have buried their heads in the sand to the weaknesses of the CBGM to admit.
It is important to note that, in this conversation over the methodology of the CBGM, there is certainly not uniform agreement on the capabilities of this relatively new method. It is my hope that by bringing this discussion into a more public space, the terminology of Original and Initial Text, and the space between these two points in the transmission of the New Testament, will foster an important conversation as it pertains to the orthodox doctrinal standards of inspiration and preservation. Dr. Gerd Mink indirectly proposes one possible method of analyzing the Gray Area, which would be to demonstrate whether there is a significant break between the Original and Initial Text. Perhaps some ambitious doctoral student might take it upon himself to conduct this work, though I wonder if it is even possible to analyze data that does not exist. That is to say that determining the quality and authenticity of the Initial Text might as well be impossible, and any conclusions regarding this text will be assumptive, unless some new component is added to the CBGM which allows such analysis to be done.
The ontological limitations of the CBGM give cause for the discerning onlooker to side with the assessments of DC Parker and Eldon J. Epp. Dr. Epp writes, “Many of us would feel that Initial Text – if inadequately defined and therefore open to be understood as the First Text or Starting Text in an absolute sense – suggests greater certainty than our knowledge of transmission warrants” (Epp, Which Text?, 70). Until those who have a more optimistic understanding of the Initial Text produce a methodology that is adequate for testing the veracity of the Initial Text, I see no reason why anybody should blindly trust that the Initial Text can be said to be the same as the Original Text. And that is assuming that the ECM will reveal one Ausgangstext. It is likely, if not inevitable, that multiple initial texts will burst forth from the machine. A general understanding of the quality of the earliest extant texts certainly warrants such a thought, at least.
The purpose of this article is to 1) make a wider audience aware of the difference between the Initial Text and the Original Text and 2) begin the conversation about the Gray Area between the Initial Text and the Original Text. It is best that the church begins discussing this now, rather than in 13 years when the ECM is completed. There are many Christians out there who may be caught completely off guard when they discover that the somewhat spurious claim that the CBGM has “gotten us to 125AD” is, in fact, not the truth. The fact stands that nobody has the capability of making such a precise claim at this point, and nobody will be able to make such a claim in 2032 either. It is best, then, that people allow the scholars to finish the work prior to making claims that the scholars themselves are still in dialogue about.