Review: The King James Version Debate – Chapter 2

Introduction

This next chapter, titled “Kinds of Errors in New Testament Manuscripts,” is a summary of the various types of scribal errors that can be observed in extant manuscripts, so this article will likely be short. Carson uses this chapter to give his reader context for the actual decision-making process practiced by textual critics.

The Kinds of Errors in Manuscripts

Carson sorts scribal errors into two categories: intentional and unintentional. The study of scribal habits has advanced considerably in the last decade and is most thoroughly examined by Dr. James Royse in his book Scribal Habits in Early Greek New Testament Papyri, published by Brill in the New Testament Tools, Studies and Documents series edited by Bart Ehrman. This book can be had for $436.00. Despite these further developments in the study of scribal habits, much of what Carson says in this chapter is still relevant today.

I will make one note on the study of scribal habits overall. While many things can be discerned from the habits of scribes, some practices will never be fully understood. Take, for example, Carson’s commentary on marginal notes.

“Occasionally, honest errors of judgment have led to the introduction of an error. For example, if a scribe accidentally left out a line or a few words, the corrector might put them in the margin. The next scribe who came along and copied this manuscript might reinsert the words into the text at the wrong place. Alternatively, the marginal note may have been a scribe’s comment rather than an integral part of the text; but the scribe who copied that manuscript might well have inserted the note into the new copy he was writing, thus adding something to the text of Scripture that should not be there.”

Carson, D. A.. The King James Version Debate (p. 22). Baker Publishing Group. Kindle Edition.

Notice how Carson employs the words “may” and “might.” This is very important for laypeople to understand if they are not familiar with the way scholars interpret data. What such words should signal to everybody is that any statement prefaced with “may,” “might,” “probably,” etc., is not an established record of fact. That does not mean that using such language is wrong or bad, just that it should be interpreted as what is likely, not what is certain. Scribes did not leave definitive guides to interpreting their annotations, and much of what scholars come up with amounts to educated guesswork. Again, this practice is not wrong; it is just important to note, as it is relevant when discussing with certainty what is in the text of Scripture.

The reason I take note of this is that the average person will read this statement, and others like it, and take it as a statement of certainty about what is occurring. Because the scribes who inserted words into margins offer no explanation of what a marginal note is or why it is there, scholars must use “may haves” and “might bes” to speculate about what exactly those marginal notes are. This is an accepted practice in every discipline of scholarship where the data does not offer certainty. It is a responsible practice, though Christians should be aware that any statement of certainty issued on top of the premise of a “might be” is severely irresponsible from a scholarly perspective. This practice is quite common among Critical Text apologists, so I wanted to make note of it here. If the goal is determining the text of Scripture with certainty, then statements prefaced by “may bes” and “might bes” are not adequate to furnish that goal.

What Carson does practically in this chapter is assure his reader that despite the errors that he is describing, there is no cause for worry.

“Before taking the discussion further, I should pause and set at rest the troubled concern of anyone who, on the basis of what I have written so far, concludes that the manuscript tradition is entirely unreliable, or that we cannot really be certain of any of it. There is no need for such rigorous pessimism.”

Carson, D. A.. The King James Version Debate (p. 24). Baker Publishing Group. Kindle Edition.

Based on what Carson has said, he is correct in making this statement. The problem lies with the scholarship that informs it. If the situation were as simple as correcting errors against a complete set of manuscripts, there would be no issue at all with what Carson says here. The problem is that there is not a complete extant record to draw from, and ultimately many, if not all, of the conclusions made by textual scholars end up being prefaced with “may,” “might,” “likely,” and “probably.” Dan Wallace makes a nearly identical argument when he presents the scenario as a spectrum between radical skepticism and absolute certainty. I address this topic in further depth frequently on my blog, but this brief survey of quotations should help my reader understand the thin foundation Carson’s statement is built upon.

The last note I will make on this chapter is regarding Carson’s sources. In this chapter, Carson quotes Metzger on the topic, which should inform us that what we are going to find is essentially the condensed thoughts of Metzger. This is an important observation as it gives the reader an idea of the school of thinking Carson is drawing from.

Conclusion

This chapter is essentially a brief summary of the types of scribal errors described by Metzger. There isn’t a lot to comment on here, other than that the references to Metzger inform the reader of the school of thought behind Carson’s thinking. If the reader wanted to study the material in this chapter further, they could pick up a copy of The Text of the New Testament: Its Transmission, Corruption, and Restoration, which is the text referenced by Carson and the standard textbook on the topic in most seminaries. Bart Ehrman co-authored the fourth edition of this textbook, which might be an important detail for some readers. This book is still recommended as introductory material on textual criticism by current scholars such as Dr. Peter Gurry and Dr. Elijah Hixson.

The Consequences of Rejecting Material Preservation

Introduction

Since the late 20th century, the doctrine of Scripture has been reformulated to say several things, most explicitly in the Chicago Statement on Biblical Inerrancy. The Chicago Statement articulates the following about the doctrine and nature of Scripture:

  1. The original manuscripts (autographs) of the New Testament were without error
  2. The Scriptures as we have them now can be discerned to be original with great accuracy, but not without error
  3. The Scriptures as we have them now are without error in what they teach (sense), but not without error in the words (matter)

Within this modern formulation, there are also rules which anticipate certain critiques: 

  1. The Bible is not a witness to divine revelation
  2. The Bible does not derive its authority from church, tradition, or human source
  3. Inerrancy is not affected by lack of autographic texts

While this statement affirms many important things, it has a major flaw, which has resulted in many modern errors. This is because the Chicago Statement denies that the material of the New Testament has been preserved in the copies (apographs). This is a departure from the historical Protestant doctrine of Scripture, which affirms that God has providentially kept the material “pure in all ages.” The texts in the possession of our great fathers in the faith were considered the “original texts,” as Turretin explains:

“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”

(Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106). 

This modern update is an attempt to resolve the issues of higher criticism and neo-orthodoxy which were introduced after the time of the Reformers and High Orthodox. While the statement itself affirms against these errors, it does not explain how a text can retain its infallibility and inerrancy while the material has not been preserved. Perhaps at the time, the assumption that the material was preserved to the degree of “great accuracy” was enough to give the people of God great confidence in the Word of God. The error of this formulation is that the mechanism which determines such accuracy is completely authoritative in determining which of the extant material is “accurate” to the original. This seemingly contradicts the doctrinal formulation within the Chicago Statement itself, though I imagine that the reach of textual scholars into the church was not then what it is now.

Infallibility, Inerrancy, and Greatly Accurate Texts

The contradiction of the Chicago Statement is that it affirms against human mechanisms of bestowing authority on Scripture while itself being founded entirely upon such mechanisms. The modern formulation of the doctrine of Scripture assumes that the extant material is “greatly accurate” as it relates to the lost original. This level of accuracy, according to this formulation, is enough to know that the material is without error in what it teaches. The problem lies in how we determine that level of accuracy. Since “great accuracy” is a vague metric, it allows the amount of material considered “accurate” to fluctuate with the methods that determine that level of accuracy. It assumes that any change made by this mechanism cannot possibly change what the material teaches. That means that no matter the material shape of the text, it should be considered inerrant, because changes to the material shape “do not affect doctrine.”

Yet the very mechanism entrusted with this task has no ability to determine that the material it has produced represents what the original said, so the evaluation of “great accuracy” is not only vague, it is arbitrary. There is no meaningful standard to measure the material against to determine how accurate it is, so any description of that level of accuracy is based purely on the assumption that the texts produced by the chosen mechanism are as accurate as advertised. That is to say, the mechanism itself has the sole power of determining accuracy and, yes, the authority of such a text. When this mechanism deems a passage or verse not to be accurate to the original, pastors simply stop preaching that text, and laypeople no longer read it. There are countless examples of the people of God appealing to this mechanism as the one that gives the Scriptures authority.

This is evident whenever a textual dispute arises in the text. All it takes is one manuscript to introduce such a dispute. What happens when such a dispute occurs? Christians appeal to the authority of the mechanism. In its very axioms, the Chicago Statement forces the people of God to submit to an external authority to validate the canonicity of a passage. Since it rejects magisterial, historical-critical, and neo-orthodox models (rightly so), the only model acceptable for “authorizing” a text under the modern doctrine of Scripture is lower criticism (not rightly so). Now, if lower criticism is defined simply as a process of comparing manuscripts to determine the original, this is not necessarily a problem. Many manuscripts were created by comparing multiple sources. So in that sense, lower criticism has been practiced by the Christian church since the first time a copy of the Scriptures was made using multiple exemplar manuscripts.

The problem occurs when that lower-critical function extends beyond this simple definition, and the lower-critical mechanism elected by the modern doctrine of Scripture has reached far beyond it. Rather than being a function which receives the original by comparison, it is a function which assumes it is responsible for reconstructing a lost text. Further, that same mechanism assumes it is responsible not only for reconstructing the text but also for determining how the material was corrupted by reconstructing the history of the text. In other words, it asserts its authority over the transmission of the text itself.

According to this mechanism, the Scripture did not come down to us safely; it actually developed away from its original form. The narrative of preservation from the Reformed and High Orthodox needed to be deconstructed so that another narrative could be developed. The sources of this material needed to be re-examined, because there is no way that Mark, or John, or Paul wrote what the church thought they did. The text did not come down pure; it came down both textually and from tradition, and some of those pericopes made it into the Biblical text. Textual variants most commonly arose from scribal errors, but some are said to speak to the story of the religious communities who were trying to defend the orthodox structure of the Christian faith as it had developed in the traditions of the church. All of these are functions of the text-critical system that determines the “great accuracy” set forth by modern doctrinal statements. It is hard to responsibly call this a “lower” critical function.

Practical Implications to Doctrine

The obvious issue here is that the foundational mechanism of the modern doctrinal statements is not restrained by the doctrinal statements themselves. The clearest example of this is that the methods used to determine the “great accuracy” of the extant material as it relates to the original do not even need to assume that an original ever existed in any meaningful way. This is plainly evidenced by the textual scholar D. C. Parker, who is the team lead for the Gospel of John in the ECM.

“The New Testament is a collection of books which has come into being as a result of technological developments and the new ideas which both prompted and were inspired by them”

(Parker, Textual Scholarship and the Making of the New Testament, 3) 

 “We can all applaud when Bowers says that ‘we have no means of knowing what ideal form of a play took in Shakespeare’s mind before he wrote it down’, simply substituting gospel or epistle for play and St John or St Paul for Shakespeare”

(Ibid. 8)

 “The New Testament is – and always has been – the result of a fusion of technology of whatever kind is in vogue and its accompanying theory. The theological concept of a canon of authoritative texts comes after”

(Ibid. 12)

Even if evangelical scholars, pastors, and professors do not agree with D. C. Parker’s words here, they submit to his theology in practice. The texts upon which modern Bibles are built are created according to various critical principles, and then the church theologizes them and calls them authoritative after the fact. Christians work with what they have, and what they have is susceptible to change based on models that do not recognize inspiration, preservation, or the Holy Spirit. Many scholars, pastors, professors, apologists, and even lay people then take that product and make individual determinations about its accuracy to the original. That means that the “greatly accurate” text has gone through a three-stage authentication before even being read. First, it is authenticated by the textual scholars and printed according to their preferred methods. Then it is authenticated by a translation committee, which makes determinations upon those determinations based on its preferred methods, which may differ from those of the textual scholars. Finally, it is authenticated by the user of that text, who makes determinations based on their preferred methods, which may differ from those of both previous groups!

This, of course, is necessary in a model which rejects material preservation, and the exposure of that material, in its axioms. Some other mechanism must be introduced to give the text authority. This being the case, it is rather interesting that the modern articulation of the doctrine of Scripture rejects the other mechanisms that bestow authority on the text. What is wrong with a magisterium, given that it is a function of the church? What is wrong with neo-orthodoxy? A similar process is taking place in the “lower criticism” of the textual scholars; the only difference is that it is approved by the people of God who use the product of that mechanism!

Conclusion

The necessary practical conclusion of the modern articulation of the doctrine of Scripture is that Christians must place their trust in some other mechanism to give the Scriptures authority. These doctrinal statements rely upon the “great accuracy” of the text, so they necessarily rely upon the mechanisms that deem various texts “greatly accurate.” Since this modern doctrine says that God has not materially preserved His Word, a void is created that needs to be filled. There needs to be some mechanism that can determine the level of accuracy of the text that we do have left. The modern church has largely chosen “lower-criticism” as it is employed by those that create Greek texts and translations. Some have chosen neo-orthodoxy. Others have flocked to the Roman or Eastern magisterium.

The fruit of this doctrinal articulation is evident. Verses that were once considered “greatly accurate” to the original are now being called into question daily by Christians everywhere. Passages that have always enjoyed a comfy place in the English canon are ejected by whatever textual theory is in vogue. What is considered “greatly accurate” today may just as easily be considered a scribal interpolation tomorrow. Passages in John that have never been called into question may be discovered to contain “non-Johannine” vocabulary tomorrow based on the opinion of an up-and-coming scholar. A manuscript may be discovered that alters the form of the text in a number of places. All it takes is one early manuscript to unsettle the whole of Scripture, as we have seen with Codex B.

Think of it this way. If you read a passage as authoritative five years ago, and no longer consider that passage as “greatly accurate” to the original, what changed? Can you point to a newly discovered manuscript that changed your mind? Was it your doctrine? Was it the opinion of a scholar or pastor or apologist you listen to? These are important questions to answer. When I went through my own textual crisis, I realized that I was the final judge over the text of Scripture. If an early manuscript emerged without John 3:16 in it, I would have thrown it out, especially if that was the opinion of my favorite scholar. I was pricked in my conscience that I had adopted such a frivolous approach to the Holy Scriptures, and it did not take long for me to seek out other doctrinal positions on the text. 

The mechanism that is most consistent, and approved by the Scriptures themselves, is God Himself. I asked myself, “What did God actually do in time to preserve His Word?” If the text did not fall away, certainly I could look around and see that to be the case. I found that there was indeed a textual position which affirmed this reality, and a text that had been vindicated in time. A text that the people of God used during the greatest Christian revival in history. The same text that was used to develop all of the doctrines I hold to. The same text which survived successfully against Papist arguments not much different from the ones used today. So why would I abandon that text, and the theology of the men who used it? Adopting the critical text is not a matter of adherence to a “greatly accurate” text; it is a matter of departure from the historical text. The question of “Which text should I use?” is quite simple, actually. The answer is: the text that God used in history and vindicated by His providence in time. The text that united the church, not divided it. The text that the majority of Bible readers still use today. I praise God for the Received Text, and for all of the faithful translations made from it.

Before you ask, “What makes those readings vindicated?”, think to yourself about which methods you are going to use to evaluate those readings. Do they involve deconstructing the narrative that God kept His Word pure in all ages? Do they include the belief that faith communities corrupted the text over time and introduced beloved pericopes from tradition? Do they rest upon the theory that Codex B represents the “earliest and best” text? If so, I would appeal to the Chicago Statement, which says, “We deny the legitimacy of any treatment of the text or quest for sources lying behind it that leads to relativizing, dehistoricizing, or discounting its teaching, or rejecting its claims to authorship.”