Book Review: The King James Version Debate – Preface

Introduction

Recently, in a comment on my blog, I was asked what I thought of DA Carson’s The King James Version Debate. I have not read it, so I thought I would purchase a copy and do a chapter-by-chapter review like I did for Mark Ward’s Authorized. The reason I initially did not read this small book is largely that it was written in 1978 and in many ways cannot represent the current thought of New Testament Textual Criticism. Despite this reality, the book is still used as a resource, and many people’s understanding of the conversation is shaped by its material. For that reason, this series should serve not only as an analysis, but also as a demonstration to my readers of the ways that trends in New Testament Textual Criticism have changed over the last 40 years. All quotations in this series will be taken from the Kindle edition. In this introductory article I will give an overview of the occasion, audience, goal, and organization of The King James Version Debate.

Occasion for Writing This Volume

Carson begins his work by explaining the occasion for his writing it.

“This little book is not the sort of thing I like to write. Yet for a variety of reasons I have been called upon again and again to say something about English versions of the Bible; and it has therefore been impressed on me repeatedly that a short volume on the subject, written at an easy level, was sorely needed.”

Carson, D. A., The King James Version Debate (p. 7). Baker Publishing Group. Kindle Edition.

The purpose of this volume is to address growing concerns over modern Bible versions, specifically in the context of the “sizable and vocal body of opinion that defends the King James Version (KJV) as the best English version now extant” (9).

Scope and Audience of This Volume

According to Carson, this book is not meant to be an exhaustive treatise on textual criticism, but rather an accessible look at the discussion at large.

“The present slender volume is not an exhaustive treatise. It is not even a rapid survey of modern English translations of the Bible. That sort of book has already been written. Rather, these pages are given over to an easy introduction to two things: biblical textual criticism, that branch of biblical study which examines and correlates the manuscripts from which our English Bibles are translated; and some of the principles upon which translations are made. Moreover, with the possible exception of the appendix, this book aims at being minimally technical.”

Carson, D. A., The King James Version Debate (p. 10). Baker Publishing Group. Kindle Edition.

This volume should be treated as an entry point into the discussion, aimed at Christians who perhaps haven’t had any exposure to textual criticism or the Bible version debate.

Stated Goal of This Work

After identifying his intended audience, Carson goes on to state his goal for writing The King James Version Debate.

“It is designed for students, pastors, and laymen who have no personal knowledge of the primary literature, but who find themselves influenced by the writings of the Trinitarian Bible Society and parallel groups, and do not know where to turn to find a popular rebuttal.”

Carson, D. A., The King James Version Debate (p. 10). Baker Publishing Group. Kindle Edition.

So then, we can take this book to be (a) an entry-level look at textual criticism and translation methodology, (b) aimed at students, pastors, and laymen who haven’t studied textual criticism, and (c) intended to provide a popular rebuttal to groups such as the Trinitarian Bible Society.

Organization of This Work

Carson organizes this work into two parts, which I will list below.

Part 1

  1. The Early Circulation of the New Testament
  2. Kinds of Errors in the New Testament Manuscripts
  3. Text-Types
  4. Some Criteria for Making Textual Choices
  5. Origins of the Textus Receptus
  6. Modern Defense of the Byzantine Text-Type
  7. Fourteen Theses

Part 2

  8. Preliminary Considerations
  9. Some Thoughts on Translating Scripture
  10. Conclusion

Conclusion

I have not provided any analysis of Carson’s book in this introductory article because I will be doing a chapter-by-chapter review in the upcoming days. Initially, I can say that much of what Carson says in this work is dated and likely should not be relied upon today. That being said, his approach is far more respectable than that of James White and Mark Ward, and I look forward to handling Carson’s work in the same manner that he handles the subject. DA Carson is a well-respected scholar within conservative Evangelicalism, which means there are many out there who still understand the topic in the same way he did at the time of writing this book. Overall, it may not be worth the time to do such an in-depth review of a book that is over 40 years old, but it is still sold in church bookstores, so I’m sure somebody will benefit from it.

20 Articles That Refute Modern Textual Criticism

Introduction

Every time I write an article, my blog becomes increasingly difficult to navigate. I probably need to revamp how the site is organized, but until then I thought I’d put together an article that serves as an index of helpful articles responding to common claims made by Critical Text apologists.

I have heard it said that in the refutation of the Critical Text, TR advocates are being unnecessarily negative and critical without offering any solutions. This is not true, because the TR position has a rich doctrinal structure, furnished with historical and Scriptural support. If you want to read a summary of the argument in support of the TR, see this article. If you want to read a number of articles I have written on the topic, see this category here.

Common Claims Made by Critical Text Apologists Answered

  1. TR Advocates are more skeptical than Bart Ehrman
  2. Treating Text and Canon the same is a category error
  3. P75 proves that Vaticanus is early and reliable
  4. Beza was doing the same thing as modern textual critics
  5. The CBGM can get us to 125AD
  6. There is a “fatal flaw” in TR argumentation
  7. The CBGM is going to give us a Bible more accurate than before
  8. The CBGM is “God’s gift to the church”
  9. The TR position offers no meaningful apologetic to Bart Ehrman
  10. The TR position is “anachronistic”
  11. The TR position starts with the TR and is circular
  12. Adopting the critical text is consistent with presuppositional apologetics
  13. There is no doctrine affected between the TR and CT
  14. The TR position is “textual mythology”
  15. Learning textual criticism is necessary for apologetics
  16. The burden of proof is on the TR advocates
  17. The Bible does not teach providential preservation
  18. There is no difference between Critical Bibliology and Reformed Bibliology
  19. It is possible to reconstruct the original autographs with extant evidence
  20. The TR position is just fundamentalism, emotionalism, and traditionalism

Ever Learning, Never Able

This is the eighth and final article in the series, “Faith Seeking Understanding”.

Introduction

In the last installment of this series, I’d like to highlight possibly the number one reason people seek answers outside of the critical text, a search which inevitably leads them to either the Majority Text or the Traditional Text. What is likely the number one reason people abandon the critical text is the fact that it is incomplete and has no function built into it that sets parameters on the scope of the work. In other words, it is not finished, and it never will be. This is a challenging reality if you take into consideration even the standard view of Scripture held by the majority of Bible-believing Christians, let alone the Reformed view found in 1.8 of the Westminster and London Baptist confessions of faith.

When a pastor encourages his congregation that they have in their hands the very Word of God, that is objectively a false statement according to the critical text methodology. In the first place, textual scholars wouldn’t have a job if it were true. Secondly, the same scholars wouldn’t be working on new editions of the Greek New Testament if the church really had in the critical text some sort of final product. In fact, the 2016 ESV was initially marketed as the “Permanent Text Edition,” which Crossway rolled back shortly after its release. While this reality is actually exciting for those who work in the field, it is the last thing that the majority of Christians want to hear. Most Christians don’t even know this about the modern critical texts. The changing nature of the modern critical texts can be broken into the categories of text and translation, which I will discuss below.

Text & Translation

Few realities should raise more red flags for Christians than this one when it comes to the modern critical texts. The general assumption made by most Christians is that we have over 5,000 manuscript copies of the Bible and that those manuscripts give us enough information to know exactly what the Bible contains. This is probably due to the fact that most defenses of the Bible begin with, “We have 5,400 manuscripts!” Anybody who knows anything about textual criticism knows that this argument simply proves that a bible was written, not what that bible actually said. To many secular scholars, the manuscript tradition simply proves that there were multiple bibles representing multiple Christianities that developed over time. The argument is completely bankrupt and should really not be used – especially with a textual scholar.

That point aside, the most problematic thing about the modern critical texts is that they are unfinished and ever-changing. Not a single scholar that I am aware of, Evangelical or not, will say that any edition currently available represents the original as it was penned, or that the versions we do have will not be revised in upcoming editions. In fact, the Evangelical scholars say the opposite! Here are several quotes just to give you a general idea of what I am talking about:

“We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it. There are many, many places in which the text of the New Testament is uncertain.”

Gurry & Hixson, Myths & Mistakes in New Testament Textual Criticism, xii. Quoting Dan Wallace.

“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text”

DC Parker, BBC Radio Program “The Oldest Bible”. Editor of the Gospel of John in the ECM.

“Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching”

Gurry, Peter, A New Approach to Textual Criticism, 6. Discussing the changes that will be made by the CBGM.

It is easily established that the scholarly guild believes that the modern text is not finished, and is expected to change. As I stated in previous articles, TR advocates take these words very seriously. That is the first component of the discussion. The second is that modern translations are also changing.

Not only are the underlying texts from which bibles are translated changing, but the translation methodology itself is adapting with the culture of the American church. There is a reason MacArthur has endeavored to adapt the NASB into the Legacy Standard Bible: to avoid politically correct translation methodology being applied to his favorite translation. This has been a long-standing critique of modern bibles that even the most staunch advocate can recognize as a problem. Most Bible-believing Christians do not want their translation to go “woke.” Further, the bible industrial complex is a real thing. There is a lot of money in bible sales. Changing up a few words every few years is good for business. Groups that want to create a study bible do this all the time to avoid paying royalties to an existing publishing house. The changing nature of the critical text is actually quite good for the companies that make money selling bibles.

Conclusion

The fact that modern bibles are constantly in flux is a major draw to the TR for most people. You don’t need to be a fundamentalist to want to read one translation your whole life. As somebody who has gone from the NIV to the NKJV to the HCSB to the NASB to the ESV to the KJV, I have Scripture memorized in all of these translations, and it’s obnoxious. I wish I had just had one translation from the start. It is especially concerning when three different editions of the same translation differ from each other, as with the ESV. You don’t need to know anything about textual criticism to be turned off by this reality.

If you add to this problem the issue of the actual underlying Greek changing every few years, you begin to see how the average Christian might take issue. So this is the final reason I will give in this series why somebody might be drawn to the TR for reasons other than Fundamentalism, Textual Traditionalism, or Emotionalism. A changing and incomplete bible is no bible at all, and most Christians recognize that. The problem is, the vast majority of Christians don’t even know that this is the reality of modern textual criticism, in large part due to irresponsible apologists who give Christians false comfort with poor argumentation.

Niche Textual Positions & Providence

Introduction

Many people are swept up by modern critical evaluations of the text of Holy Scripture. As a result, a handful of textual positions have sprung up within mainstream evangelicalism. The fact is, there are other positions on the text of Holy Scripture than just the Received Text and Critical Text positions. Many people have asked why I only offer critique of the modern critical text as it exists in the ECM or NA/UBS. The reason I do not address Tyndale House or other minority texts and viewpoints is that these text platforms are not used by the people of God in churches. Think of it this way: if God is providentially working in time, is it the case that He is raising up a lone wolf to reconstruct the Word of God for the church? If He were doing this, wouldn’t the people of God know it?

Here is the practical reality of providence and God working through ordinary means: if a Greek text is so niche that it hasn’t been translated for the people of God to use, or hasn’t been printed at all, then it doesn’t affect the church, which does not speak Greek. The only Christian people who speak Greek, interestingly enough, use a form of the TR. So while hobbyist textual positions seem to be fun for people to spend time on, they benefit the people of God in no meaningful way. They are simply academic exercises that do not translate into serving Christians, because the ordinary Christians who read their Bible in their mother tongue cannot read Greek. If we look at this issue simply, it seems reasonable that positions that arise on the fringes of the church, which are adopted by essentially nobody, can be discarded.

An Appeal to the Common Reader 

While the effort of producing Greek New Testaments may seem like a noble cause, it is a symptom of a strange phenomenon that has arisen in the modern church: the idea that the church does not have God’s Word, that it needs to be found, and that rogue individuals have taken up the mantle to find it. In order to justify this effort, modern scholars and other interested parties must attack the Received Text of God’s Word or even the modern critical text. They must “prove” that the Bibles people actually use are corrupt. This effort is praised and honored, mostly among Calvinistic Christians. Owning and using these niche printed Greek texts amounts, in practice, to an exercise in cataloguing, understanding, and advocating for the variant readings that arose in the copying process over the ages. If these readings aren’t found in a text that people actually use, they have very little impact on Christians at all. I argue that these efforts are a waste of time, because evidence-based reconstruction models cannot actually prove a reading to be original.

What Christians should be asking is, “How does this affect me hearing God’s voice in the Scriptures?” Confessional Christians need to bring their theology back into the realm of textual criticism, and consider the practical implications of adhering to Chapters 1 and 5 of the WCF and LBCF. What good does it serve to entertain textual positions and Greek texts which have no stamp of providence on them? If these texts produced by fringe committees and lone wolves are truly God’s Word, why aren’t they being translated into the vulgar tongues of the earth? If these men are like Nehemiah, restoring the Word of God to the people, why don’t the people of God know it?

This brings me to another practical reality of the textual discussion. What good does it serve to spend hours upon hours cataloguing manuscripts which have 1 John 5:7 in them, for example, if one does not believe that reading to be divinely inspired and authentic? What does the church gain by credentialed and non-credentialed scholars convincing the people of God that passages in their Bible shouldn’t be there? How does it impact you, the person who actually reads the Bible? Practically speaking, the only thing it does is sow doubt, or perhaps cause you to skip over a line of Scripture if it’s in your Bible. If you, like most Christians, have read John 7:53-8:11 or Mark 16:9-20 as original, and are then told that they are not original based on text-critical principles that can’t actually prove it, you are being asked to question God’s Word on the authority of some scholarly or even non-scholarly opinion. The reality is that most of these popular evangelical authorities and scholars have no say in producing Greek texts that are actually used by the people of God. That is why I advocate so heartily for the Received Text: it is a text that stands on its own weight and use. There is nothing new to say about it because there is nothing new about it. It is tried and true and received by the people of God, even to this day. My goal is simply to advocate that people who have adopted critical models of the text of Holy Scripture return to it.

Conclusion

The greatest disconnect between people who spend their time playing with variants and the people of God who read their Bible without a text-critical lens is that text criticism doesn’t actually matter unless those variants make their way into a text that people actually use. This is why I do not address the textual position of somebody like James Snapp, or aim my arguments at the Tyndale House Greek Text. The simple reality is that James Snapp hasn’t produced a Greek text, and the Tyndale House Greek New Testament hasn’t been translated. I am actually quite alright with appealing to the materials of James Snapp where we agree. I am unabashedly a Calvinist and believe that “God the good Creator of all things, in His infinite power and wisdom doth uphold, direct, dispose, and govern all creatures and things, from the greatest even to the least, by His most wise and holy providence, to the end for the which they were created, according unto His infallible foreknowledge, and the free and immutable counsel of His own will” (LBCF 5:1). That means that the ordinary means of God speaking, His Word, is also guided by this providence.

So if Christians wish to engage in textual criticism as a hobby, that’s totally fine, though I don’t exactly see the benefit of it. People would be much better served simply reading God’s Word. The tinkering of textual hobbyists doesn’t actually have a bearing on the people of God until that tinkering makes it into the main text and footnotes of people’s Bibles. At this point, the only real texts that have an impact on the people of God are the printed editions of the modern critical text and the ECM, and the Received Text of the Reformation. I argue that the modern critical text should not be used, because it has no stamp of providence on it, and that the Received Text should be used because it does. These two texts disagree with one another, so it is logical to take a stand on one or the other. All other minority texts and textual positions are simply hobbies, and the only real impact these positions have on the people of God is to convince them that “there are no perfect Bibles,” which means that God did not preserve His Word. All of these fringe textual positions have one thing in common: those who advocate for these texts as original, or perhaps “best,” are waiting on them to be finished so they can actually use them. I don’t see that as compatible with God’s providence, and so I focus my efforts on texts and methodologies that are used by the people who read Bibles.

The Weakness of Evidence-Based Textual Criticism & The Received Text

Introduction

If I could identify the most significant disconnect between those who advocate for modern critical methods and those who advocate for the Received Text, it would be the difference in how evidence is handled. From a modern critical perspective, it is baffling that the manuscript evidence produced for or against a reading is rejected. From a Received Text perspective, evidence should be used to support the God-given text, not to reconstruct a new one. Despite the frustration this might cause on both sides, there are good reasons that those who advocate for the preservation of the Received Text are not swayed by the evidence-based presentations of the academy and those who follow in that tradition. Instead of simply shrugging off these reasons, I believe it is wise to consider the perspective on evidence presented by those in the Received Text camp. The concerns raised about evidence-based critical methods cannot be answered by simply talking about textual variants ad nauseam. In this article, I will present four reasons why evidence should not be considered such an authoritative source for textual criticism, and then give a positive presentation of how we can know what the Scriptures say.

Evidence Requires Interpretation

The single greatest cause of changes in modern bibles is the reinterpretation of data. That is because modern textual criticism views evidence as the source material for reconstructing a text that has been lost. In one generation, the church may deem a manuscript or reading of little value, and in the next, the most valuable text available. We have seen this practically implemented by the introduction of various brackets, footnotes, and omissions in modern Greek texts and in translations made from them. This is inevitable with evidence-based approaches, because the shape of the text is driven by whichever theory for evaluating the evidence is in vogue. While the transmission of the New Testament was guarded from such significant changes by virtue of churches using handwritten manuscripts and by the lack of technology for mass distribution, the modern church is not guarded by such a mechanism. The lack of church oversight in the creation of modern texts also contributes to the Bible’s ability to shift year by year at such a quick rate. If a change is made in one place of Scripture today, it can be distributed in thousands of copies, essentially overnight, without consulting a single pastor. This should concern the people of God, but it has unfortunately become standard practice, and is even advertised as a necessary reality.

Within the various modern printed Greek editions, the perspective of the editors defines which verses are included, omitted, bracketed, and footnoted. Since evidence-based methods are always driven by the perspective of the person interpreting the data, the shape of the New Testament is intimately connected with the methodologies of the scholars themselves. Yet even the scholar’s analysis is subject to interpretation. This is clearly demonstrated when an editor prints a reading in the main text of a Greek New Testament, and the user of that text selects a reading from the footnote over and above the reading chosen by the editor. Even within the modern critical text community, there are disagreements over how the agreed-upon source data should be interpreted, which is evidenced by the various printed editions produced by modern text critics.

It is often said that the cause for change in modern Bibles is “new” data, yet this doesn’t seem to be the case. The texts on which modern Bibles are based were not created in the 19th century; they were published in the 19th century. And modern Bibles are mostly the same as Westcott-Hort’s text at a macro level. At one point, the people of God had those manuscripts, and their interpretation of them shaped the way that Greek manuscripts were copied going forward. The difference is that modern interpreters value that data more heavily than it was weighted historically, and those in the Received Text camp view that historical weighting as an act of God’s providential guidance of the text as it was passed along. In short, the interpretation of the new (to us) data is really a greater factor than the data itself.

Evidence-Based Methods Are Weak Because Manuscripts Cannot Talk

In modern critical methods, extant manuscripts dated prior to AD 1000 are considered the most valuable or relevant New Testament witnesses. Though data after this time is consulted and considered, it is not given the same weight as earlier manuscripts. This becomes problematic because most of our New Testament manuscripts come from after the data window selected by scholars. Additionally, many of these early manuscripts are without a pedigree, meaning that we do not know who made them, why, and for whom. For example, Frederic Kenyon proposed that the papyri fragments were not used in churches for reading, but were actually personal texts that a Christian could carry around and read privately. That means that they were more likely to be paraphrastic, contain errors, or be used outside of the mainstream of orthodoxy. Yet this is just a theory. We simply do not know why they were made, for whom, and how they were used. Such is the case for pretty much all of our early manuscripts. Why is this problematic?

During the time period from which our earliest extant manuscripts come, some of the most heated theological debates were going on regarding the humanity and divinity of Christ. We also have testimony from that time which testifies to the tampering of manuscripts. Since we do not know anything about the source of these manuscripts, it is nearly impossible to know if a manuscript was used in an orthodox church, or an Arian church, or wasn’t used in a church at all. What is more important from a data analysis perspective, then, is what we do know of the context in which those manuscripts were created. Since theological context is not considered in modern critical approaches, we could very well have a reading in our Greek text that was introduced by Marcion himself and be none the wiser. And even if we do print a reading in our modern Bibles that has been historically questioned as Gnostic or Arian, this is not taken into consideration by modern methodology.

That means that while early New Testament manuscript evidence serves a powerful apologetic purpose against claims that Christianity was “invented” at some point around the fourth century, it simply does not have the same kind of weight when it comes to constructing accurate Greek texts. We may reproduce the exact hypothetical archetype of Codex B, for example, and still not know who used it, when that archetype was created, or where it came from. One can say that the archetype of Codex B reaches back to the first century, when in reality it may have been created a month before the codex itself. We simply cannot know. I will continue to argue on my blog that this is a good thing for the church, as it takes the authority of the Scriptures out of the hands of men. By God’s singular care and providence, He has kept His Word pure, and it does not need to be reconstructed.

Evidence-Based Methods Are the Weakest They Have Ever Been

In the 21st century, we are farther away from the creation of the manuscripts used to make modern Bibles than any other generation has been. That means that we have the least perspective on the data we do have, regardless of how much of it we have. The only people who had clear insight into Codex Vaticanus, for example, were the people who created it, used it, or had access to since-lost commentary on it, if such commentary ever existed. If we ignore the insights of the scholars and theologians of the Reformation on this text, which modern scholars typically do, we essentially know nothing about it except what can be ascertained from its readings. And if we do not assume any text as a standard base text, it is impossible to discern anything from the readings of that text at all, other than that somebody used it at some point. How can we know if a reading is orthodox or not, if there is no standard to compare it against? It is difficult, even impossible, to know much about a manuscript if the theological context of the time it was created is not considered.

That puts us at a unique time in history, different even from the Reformation. During the 16th century, manuscripts were still being used in churches and in liturgies. This was the case in every generation of the church until the printing press. Rather than assuming “we know more,” it is wiser to assume that we actually know less. Here is a metaphor that may be helpful in understanding my point.

Let’s say that 1,000 years from now, somebody finds a gas-powered lawn mower disassembled into hundreds of pieces in a junkyard, in a pile of other disassembled equipment. The person knows it’s a lawn mower, but doesn’t know what kind of lawn mower or what it originally looked like. If this person wanted to reconstruct that lawn mower, he would have to know which parts go to it. He may have another lawn mower which looks somewhat like the one he wants to reconstruct, but it’s not the same make and model, and it is from a different time period. Here’s the problem: the person doesn’t know which parts belong to the lawn mower he wants to repair, and even if he did, he wouldn’t know exactly how to repair it without an instruction manual, because nobody has used lawn mowers in 400 years. In order to repair the mower, he needs to find somebody who knows how, or a preserved model to use as a guide. He could spend his whole life trying to reconstruct the mower, and even if he got it to work, he wouldn’t know if the parts he used were from another piece of equipment that shared parts with the mower, from another mower altogether, or from the original mower. The reconstructed mower might even have parts that work but aren’t the right parts. Only a person who knew what the lawn mower originally looked like could tell him whether he reconstructed it correctly with the right parts. A person trying to reconstruct the mower while other mowers of the same kind were still in production would have no problem with the same task, and would achieve more accurate results. The fact is, the person will simply never know that all the parts he used even belonged to the mower to begin with, because he doesn’t have the original mower. You wouldn’t call that lawn mower preserved, in any case, even if all the parts were in the junkyard somewhere.

The metaphor works quite nicely with manuscript evidence. During the time a manuscript was created, the people knew what that manuscript was for, who made it, and who used it. They would even know where it departed from the rest of the manuscripts circulating at the time. These are simply insights we cannot have, unless some extant commentary on the manuscript informs us of these things. And oftentimes when we do have this kind of commentary, it is ignored and labeled fraudulent or “out of context.” The point is, the further away from the creation of a manuscript we get, the less we can know about it from the manuscript itself. This, I argue, is yet another function of providence. We do not need to reconstruct a text, because the Bible has been kept pure in all ages. We need to receive the text as it has been passed down, not recreate a version of the New Testament that looks like a text (Codex B) that was produced by, according to Scrivener, “more or less a Western unitarian” (Royse, Scribal Habits in Early Greek New Testament Papyri, 3).

Evidence-Based Methods Are Weak Because They Give False Confidence That We Have the Right Reading

While the scholars working in the field are vocal about not having absolute confidence in the evidence, by the time a text gets to the pew, this doubt has been dissipated through popular-level presentations on textual criticism. A scholar can print a reading in a Greek text and have doubts about its place in the transmission history, and a Christian will use that reading as if it were the Divine Original itself. A scholar might even have great confidence that the reading printed, or not printed, in the text is original, and be wrong. Due to the nature of genealogical methods of text criticism, a reading can be erroneously placed earlier or later in the textual transmission history, and the scholar would never know it.

The problem of basing the form of our Bibles on extant evidence is not a problem with all evidence. It is a problem of which evidence. If scholars are wrong in their theories, the church has a Bible that is based on the wrong manuscripts, and nobody is the wiser for it. In other words, scholars, like the person who set out to reconstruct the lawn mower, do not know enough about the manuscripts they have selected to say that their reconstruction looks anything like the original. Sure, it’s a form of the New Testament, but is it the New Testament? Just as the reconstructed lawn mower might look like the original while the person never knows which parts he got wrong, so it is with the reconstructed text. Since we do not have this metadata on our earliest extant manuscripts, the reconstructed product does not say much other than that it looks like a version of the New Testament that existed at some point in history. That text may have existed, but can we know who used it, and why we needed to reconstruct it? What text is this that has fallen away, if God’s Word has been preserved? Reconstructionist text criticism introduces far more problems than it solves, pointing again to the reality that the Bible never needed to be reconstructed based on the evidence that was published in the 19th and 20th centuries. That yet again points to the reality that God did not desire for His people to engage in this effort, but to receive the text He had already given.

Why Those In the Received Text Camp Do Not Base Their Bible Primarily on Evidence

Simply put, because it is not wise to do so, for the reasons listed above. That is why the primary foundation of the Received Text position is providence. According to the Scriptures, God has preserved His Word. The question for most people is, how did He do it? Those in the Received Text camp say that in every generation of copying of New Testament manuscripts, the Christians who copied and used those manuscripts had the best perspective on them. Those in the modern critical text camp typically say that we, in modernity, have the best perspective on the original languages and the extant manuscripts. Rather than assume that “we know better,” it is wise to avoid that kind of thinking and instead look to providence. Recognizing God’s providence is a matter of receiving the product that existed in continued use throughout the ages. Simply because a manuscript survived does not mean that it was in the category of “in continued use.” In fact, a manuscript surviving from 1,700 years ago seemingly points in the opposite direction. I’ve worn out high-quality printed Bibles in a year. If somebody finds my Bible intact in 1,700 years and can still read it, that would say a lot about how little I used it! Since we don’t know much about the earliest manuscripts, they are not helpful in determining such a text. If we can’t say where a manuscript came from, or how it was used, it is not wise to assume we know that information when we simply don’t. Further, the early data sample is not broad enough to tell us what else was being used at the time. There is a reason the majority of our manuscripts do not look like those called “earliest and best,” and instead of assuming that the people of God got it wrong for hundreds of years, it is more reasonable to assume that the people of God had more information, and more insight into these manuscripts, than we do now.

That is why the Reformation is such a pivotal reference point for the transmission history of the New Testament text. It is a time for which we, in the 21st century, have the best insight into what actually happened, and the most commentary on the manuscripts and readings that were agreed upon as authentic by the people of God. During that time, and even in the early church, the concept of an “authentic” manuscript was a driving force in identifying texts that should be used. No such function exists today in modern textual criticism. Manuscripts are weighted according to text-critical principles, not evaluated by their authenticity or the way the church viewed them historically. Even with all we know of the Reformation, there is still so much we will never know about that process. If we do not even know every manuscript used in the creation of the Received Text, how much more absurd is it to try to figure out the origins of hand-copied manuscripts from the fourth century and earlier?

One of the things that needs to be recognized is that we will never know exactly how the New Testament was transmitted; we simply know that it was. That is why textual scholars have been developing theories for the last 200 years: there is no definitive trail through time leading back to the start that we can derive from extant data. It is important to note that just because we do not know does not mean that Christians throughout the ages did not know. Actually, this generation seems to be the only generation that doesn’t know. That speaks volumes about the methods of modernity. We cannot say that every New Testament in the fourth century looked like Codex B, and even from an evidence-based approach, that conclusion stands at odds with the data and with reason. We can only reasonably approach the issue with what we do know: that God preserved His Word, and that by the time it was mass-produced, it looked a certain way. If we want to approach the text like any other book, and say that the New Testament evolved and was not preserved, we will spend our whole lifetime watching the text of the New Testament change form with the ebbs and flows of different theories of the academy. Oftentimes evangelical textual scholars say, “I do not think God was obligated to give us the original. We should be grateful for what we do have.” Yet that conclusion is based on theories about manuscripts that we simply do not know enough about to support it. The conclusion first assumes that the Bible was destroyed, like the mower, and needs to be reconstructed. Yet it is clear, based on the extant evidence, that if the goal is to reproduce the original, that is an unwise errand. It seems especially off base if we are trying to maintain the doctrine of preservation in any meaningful way.


A simple conclusion that often causes people to return to the historical Protestant text is the reality that we do not know enough about the transmission history of the New Testament up to the time of the Reformation to responsibly say that we can reconstruct it to the specifications of the original. Like the person who reconstructed the lawn mower, we will never know if we included all the parts, or whether we included parts that came from sources other than the original. That is why the modern effort of textual criticism is more confident in saying what “isn’t” Scripture than in saying what is Scripture, even though those conclusions are based on evidence we really don’t know a whole lot about. What those in the Received Text camp set forth is that it is not primarily extant evidence as such which gives us our New Testament; it is a matter of which evidence we know the people of God actually used in time. There is no reason to assume that orthodox Christians even used our “earliest and best” manuscripts. And that is assuming modern scholars even factor in that metric, which does not exist in the axioms of the critical methods. If our view of the transmission of the New Testament is based on the belief that God preserved His Word, it is difficult to propose that He did so by first destroying it so that it had to be reconstructed. The belief that God requires His Word to be reconstructed only came about due to the reevaluation of manuscripts which we know virtually nothing about. We do not know who created them, who used them, or even whether they were used outside of a single church or area. That is not a wise foundation, from either a data perspective or a common-sense perspective.

Conclusion

Simply because the early evidence is not uniform and has no pedigree does not mean that God did not preserve His Word. In fact, it seemingly demonstrates that God, in His providence, would make it quite difficult for Christians to responsibly place their faith in any process that places the authority of the Scriptures in the axioms of modern scholarship. When the early evidence is viewed in light of what we know about it, its value as a source for reconstruction fades to a dim glimmer. What it does demonstrate is that the New Testament existed as early as it says it did, and that it was transmitted all the way until today. The original form of the text was never meant to be something that men are responsible for reconstructing, but something they receive.

If we are not to reconstruct the text, but to receive it as a preserved whole, then it seems providence is a much better guide than reconstruction by way of extant evidence about which we have little information. By recognizing God’s providence, we recognize that the people of God in every generation had the best insight into the manuscripts that were extant to them, used by them, and copied by them. This allows us to at least recognize the general form of the New Testament, what Theodore Letis called the “Macro Text.” In other words, the general copying process of the Text of Holy Scripture naturally corrected significant variants, either by producing another copy or by correcting that copy in the margin, according to the best manuscripts available in every age. By the time technology advanced with the printing press, many manuscripts which had variants in them existed, yes. Almost every single one of the significant variants recognized today existed then. Yet, unlike this generation, the scholars and theologians of the time had a better perspective on that data because it was still being used. They had more insight into the text that had been handed down as “authentic” than we ever will.

We will never know what was contained in every manuscript that was destroyed after that time. In fact, there are hundreds of manuscripts that we know exist today that we simply do not have access to examine. Readings that we consider “minority” today could easily have been majority readings in AD 800. There are readings considered “minority” today that could have been the majority in the time after the Reformation, and we would never know. The assumption that extant data is the best data is simply not in line with what we know about manuscripts being destroyed throughout time. How could it be that, for the first time in church history, God finally allowed His church to “get it right” concerning the text of Holy Scripture? How could it be that now, even knowing how many thousands of manuscripts were destroyed, is the time when we have the most of them? It may be true that we have the most access to all of the manuscripts due to technological advances, but it is important to remind ourselves that we have the least insight into the ones we do have. Additionally, what value is all of this data if the modern scholars are only looking at a subset of that data? The very subset that we know the least about, no less!

The point of this blog is to give people confidence that the people of God in previous eras of the church had that insight, and that by God’s providence, He preserved His Word. Rather than believing that we need to reconstruct the text, we should receive the text handed down to us. What we do know of the text of the Reformation is that the people of God used it, translated it, and commented on it. It was so agreed upon that people have called it the “default” text. Does that not sound providential? That the text was so agreed upon it was “default”? The reality is, we do not have the justification, based on evidence at least, to unseat a text so agreed upon. We have no reason to doubt that God has providentially preserved His Word by handing it down through the people of God who used it. We should cherish the fact that God does desire for His sheep to hear His voice, and has given us His Word to make that possible.

The alternative, as I have seen it and demonstrated on my blog, is not such a view. It is a view which says that we don’t know exactly what God spoke by the prophets and apostles – that we need to reconstruct a lost text which has evolved over time. It is a view which says that God didn’t desire to give us all of His Word, just enough of it to get by. It is a view which says that even if God did preserve His Word, we would have no way to know that we have it. It is an honest evaluation of what the Bible is, if we assume that the early, choice evidence preferred by the academy is the only way we have to determine what God’s Word is. Yet it makes perfect sense that such scholars would come to these conclusions if we consider the limitations of evidence-based critical methods. This article hopefully demonstrates that. If anything, God’s providential work in time has shown us that it is folly to try to reconstruct a text that never fell away. It seems that the real text that has fallen away is the one the scholars are trying to reconstruct.

For more on the Providentially Preserved Text: https://youngtextlessreformed.com/2019/11/06/a-summary-of-the-confessional-text-position/

The Consequences of Rejecting Material Preservation

Introduction

Since the late 20th century, the doctrine of Scripture has been reformulated, most explicitly in the Chicago Statement on Biblical Inerrancy. The Chicago Statement articulates several things about the doctrine and nature of Scripture:

  1. The original manuscripts (autographs) of the New Testament were without error
  2. The Scriptures as we have them now can be discerned to be original with great accuracy, but not without error
  3. The Scriptures as we have them now are without error in what they teach (sense), but not without error in the words (matter)

Within this modern formulation, there are also rules which anticipate certain critiques: 

  1. The Bible is not a witness to divine revelation
  2. The Bible does not derive its authority from church, tradition, or human source
  3. Inerrancy is not affected by lack of autographic texts

While this statement affirms many important things, it has a major flaw, which has resulted in many modern errors today. This is due to the fact that the Chicago Statement denies that the material of the New Testament has been preserved in the copies (apographs). This is a departure from the historical Protestant doctrine of Scripture, which affirms that God has providentially kept the material “pure in all ages.” The original texts in the possession of our great fathers in the faith were regarded as the autographs.

“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”

(Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106). 

This modern update is an attempt to resolve the issues of higher criticism and neo-orthodoxy which were introduced after the time of the Reformers and the High Orthodox. While the statement itself guards against these errors, it does not explain how a text can retain its infallibility and inerrancy while the material has not been preserved. Perhaps at the time, the assumption that the material was preserved to the degree of “great accuracy” was enough to give the people of God great confidence in the Word of God. The error of this formulation is that the mechanism which determines such accuracy is completely authoritative in determining which of the extant material is “accurate” to the original. This seemingly contradicts the doctrinal formulation of the Chicago Statement itself, though I imagine that the reach of textual scholars into the church was not then what it is now.

Infallibility, Inerrancy, and Greatly Accurate Texts

The contradiction of the Chicago Statement is that it denies that human mechanisms bestow authority on Scripture, while itself being founded entirely upon such human mechanisms. The modern formulation of the doctrine of Scripture assumes that the extant material is “greatly accurate” as it relates to the lost original. This level of accuracy, according to this formulation, is enough to know that the material is without error in what it teaches. The problem with this is in how we determine that level of accuracy. Since “great accuracy” is a vague metric, it allows the amount of material that is considered “accurate” to fluctuate with the methods that determine that level of accuracy. It assumes that any change made by this mechanism cannot possibly change what that material teaches. That means that no matter the material shape of the text, it should be considered inerrant, because changes to the material shape “do not affect doctrine.”

Yet the very mechanism entrusted with this task has no ability to determine that the material it has produced represents what the original said, so the evaluation of “great accuracy” is not only vague, it is arbitrary. There is no meaningful standard to measure the material against to even determine how accurate it is, so any description of that level of accuracy is based purely on the assumption that the texts produced by the chosen mechanism are as accurate as advertised. That is to say, the mechanism itself has the sole power of determining the accuracy and, yes, the authority of such a text. When this mechanism deems a passage or verse not accurate to the original, pastors simply do not preach that text any longer, and laypeople no longer read it. There are countless examples of the people of God appealing to this mechanism as the thing which gives the Scriptures their authority.

This is evident whenever a textual dispute arises. All it takes is one manuscript to introduce such a dispute. What happens when such a dispute occurs? Christians appeal to the authority of the mechanism. In its very axioms, the Chicago Statement forces the people of God to submit to an external authority to validate the canonicity of a passage. Since it rejects magisterial, historical-critical, and neo-orthodox models (rightly so), the only model the modern doctrine of Scripture accepts to “authorize” a text is lower criticism (not rightly so). Now, if lower criticism is defined simply as a process of comparing manuscripts to determine the original, this is not necessarily a problem. Many manuscripts were created by comparing multiple sources. So in that sense, lower criticism has been practiced by the Christian church since the first time a copy of the Scriptures was made using multiple exemplar manuscripts.

The problem occurs when that lower-critical function extends beyond this simple definition. The lower-critical mechanism elected by the modern doctrine of Scripture has reached far beyond it. Rather than being a function which receives the original by comparison, it is a function which assumes that it is responsible for reconstructing a lost text. Further, that same mechanism assumes it is responsible not only for reconstructing the text, but also for determining how the material was corrupted by reconstructing the history of the text. In other words, it asserts its authority over the transmission of the text itself.

According to this mechanism, the Scripture did not come down to us safely; it actually developed away from its original form. The narrative of preservation from the Reformed and High Orthodox needed to be deconstructed so that another narrative could be developed. The sources of this material needed to be re-examined, because there is no way that Mark, or John, or Paul wrote what the church thought they did. The text did not come down pure; it came down both textually and through tradition, and some of those traditional pericopes made it into the Biblical text. Textual variants most commonly arose from scribal errors, but sometimes they speak to the story of the religious communities who were trying to defend the orthodox structure of the Christian faith as it had developed in the traditions of the church. All of these are functions of the text-critical system that determines the “great accuracy” set forth by modern doctrinal statements. It is hard to responsibly say that this is a “lower” critical function.

Practical Implications for Doctrine

The obvious issue here is that the foundational mechanism of the modern doctrinal statements is not restrained by the doctrinal statements themselves. The clearest example of this is that the methods used to determine the “great accuracy” of the extant material as it relates to the original do not even need to assume that an original ever existed in any meaningful way. This is plainly evidenced by the textual scholar DC Parker, who is the team lead for the Gospel of John in the ECM.

“The New Testament is a collection of books which has come into being as a result of technological developments and the new ideas which both prompted and were inspired by them”

(Parker, Textual Scholarship and the Making of the New Testament, 3) 

 “We can all applaud when Bowers says that ‘we have no means of knowing what ideal form of a play took in Shakespeare’s mind before he wrote it down’, simply substituting gospel or epistle for play and St John or St Paul for Shakespeare”

(Ibid. 8)

 “The New Testament is – and always has been – the result of a fusion of technology of whatever kind is in vogue and its accompanying theory. The theological concept of a canon of authoritative texts comes after”

(Ibid. 12)

Even if evangelical scholars, pastors, and professors do not agree with DC Parker’s words here, they submit to his theology in practice. The texts on which modern Bibles are built are created according to various critical principles, and then the church theologizes them and calls them authoritative after the fact. Christians work with what they have, and what they have is susceptible to change based on models that do not recognize inspiration, preservation, or the Holy Spirit. Many scholars, pastors, professors, apologists, and even lay people then take that product and make individual determinations as to its accuracy to the original. That means that the process of determining the text that is “greatly accurate” has gone through a three-fold authentication before it is even read. First, it is authenticated by the textual scholars and then printed using their preferred methods. Then it is authenticated by a translation committee, which makes determinations upon those determinations based on its preferred methods, which may differ from those of the previous committee. Then it is authenticated by the user of that text, who makes determinations based on his own preferred methods, which may differ from both of the previous two committees!

This of course is necessary in a model which rejects material preservation, and the exposure of that material, in its axioms. Some other mechanism must be introduced to give the text authority. That being the case, it is rather interesting that the modern articulation of the doctrine of Scripture rejects the other mechanisms that bestow authority on the text. What is wrong with a magisterium – that it is a function of the church? What is wrong with neo-orthodoxy? A similar process is taking place in the “lower criticism” of the textual scholars; the simple difference is that it is approved by the people of God who use the product of that mechanism!

Conclusion

The necessary practical conclusion of the modern articulation of the doctrine of Scripture is that Christians must place their trust in some other mechanism to give the Scriptures authority. These doctrinal statements rely upon the “great accuracy” of the text, so they necessarily rely upon the mechanisms that deem various texts “greatly accurate.” Since this modern doctrine says that God has not materially preserved His Word, a void is created that needs to be filled. There needs to be some mechanism that can determine the level of accuracy of the text that we do have left. The modern church has largely chosen “lower criticism” as it is employed by those who create Greek texts and translations. Some have chosen neo-orthodoxy. Others have flocked to the Roman or Eastern magisterium.

The fruit of this doctrinal articulation is evident. Verses that were once considered “greatly accurate” to the original are now being called into question daily by Christians everywhere. Passages that have always enjoyed a comfortable place in the English canon are ejected by whatever textual theory is in vogue. What is considered “greatly accurate” today may just as easily be considered a scribal interpolation tomorrow. Passages in John that have never been called into question may tomorrow be found to contain “non-Johannine” vocabulary on the opinion of an up-and-coming scholar. A manuscript may be discovered that alters the form of the text in a number of places. All it takes is one early manuscript to unsettle the whole of Scripture, as we have seen with Codex B.

Think of it this way. If you read a passage as authoritative five years ago, and no longer consider that passage “greatly accurate” to the original, what changed? Can you point to a newly discovered manuscript that changed your mind? Was it your doctrine? Was it the opinion of a scholar or pastor or apologist you listen to? These are important questions to answer. When I went through my own textual crisis, I realized that I was the final judge over the text of Scripture. If an early manuscript had emerged without John 3:16 in it, I would have thrown the verse out, especially if that was the opinion of my favorite scholar. I was pricked in my conscience that I had adopted such a frivolous approach to the Holy Scriptures, and it did not take long for me to seek out other doctrinal positions on the text.

The mechanism that is most consistent, and approved by the Scriptures themselves, is God Himself. I asked myself, “What did God actually do in time to preserve His Word?” If the text did not fall away, certainly I could look around and see that to be the case. I found that there was indeed a textual position which affirmed this reality, and a text that had been vindicated in time. A text that the people of God used during the greatest Christian revival in history. The same text that was used to develop all of the doctrines I hold to. The same text which survived successfully against the Papist arguments that are not much different from the ones used today. So why would I abandon that text, and the theology of the men who used it? Adopting the critical text is not a matter of adherence to a “greatly accurate” text; it is a matter of departure from the historical text. The question of “Which text should I use?” is quite simple, actually. The answer is: the text that God used in history and vindicated by His providence in time. The text that united the church, not divided it. The text that the majority of Bible readers still use today. I praise God for the Received Text, and for all of the faithful translations made from it.

Before you ask, “What makes those readings vindicated?”, think about which methods you are going to use to evaluate those readings. Do they involve deconstructing the narrative that God kept His Word pure in all ages? Do they include the belief that faith communities corrupted the text over time and introduced beloved pericopes from tradition? Do they rest upon the theory that Codex B represents the “earliest and best” text? If so, I would appeal to the Chicago Statement, which says, “We deny the legitimacy of any treatment of the text or quest for sources lying behind it that leads to relativizing, dehistoricizing, or discounting its teaching, or rejecting its claims to authorship” (Article XVIII).

Absolute Certainty, The Received Text, and Matthew 23:13-14

Introduction

Recently, Reverend Christopher Myers of Phoenix Reformed Presbyterian Church (RPCNA) tagged me in a Facebook post to address the topic of absolute certainty and the Received Text. Dr. Peter Gurry playfully chimed in with a test passage (Matthew 23:13-14). In this article I will be interacting with Dr. Gurry’s article on that passage. Any disagreements I have with his article do not represent what I think about him as a person. He is a brother in Christ and I have no reason to think otherwise.

The question that must be answered is, “How can one have absolute certainty that the Scriptures they read are the Divine Original?” What first must be defined is the operational definition of “absolute” as it pertains to certainty. Of course I would never argue for a definition of “absolute certainty” that means “omniscience.” Humans are creatures, and therefore do not know things absolutely in that sense. Yet, in a different, practical, experiential sense, Christians can be absolutely certain that God exists, that He has saved them, and that He has spoken, by virtue of His own operation. So the certainty we do have as Christians is not by virtue of our self-perceived omniscience, but by virtue of God’s power in us. This is the clear testimony of Scripture.

“The holy scriptures, which are able to make thee wise unto salvation.”

(2 Timothy 3:15)


“All scripture is given by inspiration of God, and is profitable”

(2 Timothy 3:16)

“When he, the Spirit of truth, is come, he will guide you into all truth…He shall glorify me: for he shall receive of mine, and shall shew it unto you.”

(John 16:13,14)

“My sheep hear my voice”

(John 10:27)

That is to say, certainty in the Scriptures comes not from man, but from God; it is not something we supply ourselves. Of ourselves, we can never have certainty in the Scriptures, or in any spiritual thing for that matter.


“But ye believe not, because ye are not of my sheep”

(John 10:26)

People do not believe that the Scriptures are the Word of God because of manuscript evidence; they believe the Scriptures are the Word of God because:

“our full persuasion and assurance of the infallible truth, and divine authority thereof, is from the inward work of the Holy Spirit bearing witness by and with the Word in our hearts”

(LBCF, WCF 1.5)

It is firmly the Protestant position that men can have “full persuasion and assurance” in the Scriptures not by virtue of their own knowledge, but because of the “inward work of the Holy Spirit” which bears witness to that “infallible truth, and divine authority,” the Scriptures, in the regenerated heart of the believer. That being said, the matter of certainty is not properly a text-critical category; it is a faith category. “Ye believe not, because ye are not of my sheep.” No matter which text one reads, text-critical evidence is not the reason for certainty, because God says that it is He who gives certainty. Even if every single manuscript were to read the same exact way in every single verse, this would still be true. That is why I continue to advocate that the text we receive should be derived from a method of faith, not science.

“For they that are after the flesh do mind the things of the flesh”

(Romans 8:5)

For a moment, let’s set aside the idea that there is any warrant to believe that text-critical evidence is the reason we believe a verse to be Holy Scripture, because the Scriptures teach that this is not the case. The Scriptures give abundant cause for experiential certainty by virtue of the inner working of the Holy Spirit. 

Examining the Test Case 

Since we are talking about certainty here, let us first examine the two models proposed: methodologies which evaluate textual evidence, and the inner working of the Holy Spirit in the individual and in the church catholic throughout the ages. Models which evaluate textual evidence are quite fragile. For example, in the article posted for examination by Dr. Gurry, he appeals to the NA27 and the Byzantine tradition to question the passage as it is found in the KJV. He also notes that the passage occurs differently within the TR corpus. What is interesting is that his major point is that the passage is not a majority reading, and that this is allegedly why it should be rejected, though he doesn’t make a case either way. If it is the case that a reading should be accepted or rejected based on the criteria provided in the article, I’d love to see an NA29 without any doubt cast upon Mark 16:9-20. The article does not really make a significant point at all regarding the text itself, just that Erasmus made a textual decision using his “limited resources.” Note that Gurry doesn’t make any statement at all regarding the authenticity of the reading, or inform the reader of what he thinks of the passage. Such is the modus operandi of textual scholars. Between the lines of the article is an obvious attempt to cast doubt on the authenticity of the Traditional reading, but on what grounds does he do so? There are three grounds that I could identify:

  1. It’s not the majority reading
  2. Erasmus had limited resources
  3. We don’t know where Erasmus got the reading

I suspect that is why he didn’t draw an actual conclusion in his article: the reasons he gives aren’t exactly arguments for or against the text itself. If they are, I fail to see how. There is only one text-critical camp that takes reason one as a valid text-critical criterion, and neither I nor Peter Gurry holds to that position. Erasmus may have had “limited” resources, but how many more “resources” were used to establish the general shape of the modern critical text in 1881? Aleph, B, and a smattering of readings from several other choice manuscripts? The shape of the NA27 is not leaps and bounds different from Hort’s text, despite its editors having access to the Papyri, more Uncials, minuscules, and lectionaries.

“None of the popular hand-editions of the Greek NT takes us beyond Westcott-Hort in any substantive way as far as textual character is concerned”

(Eldon J. Epp, “The Twentieth Century Interlude in New Testament Textual Criticism,” 1974). Aland cites 558 variants between the 1881 Westcott-Hort text and the 25th edition of the Nestle-Aland text (NA25, 1963); the text of the NA27 is not significantly different from that of the NA25.

The sheer volume of additional data is not anything to be astounded by, because what actually matters is how that data has influenced the text. It doesn’t matter if we enter 10,000 new manuscripts into evidence today if that evidence introduces no new readings and only supports the readings we already have, in the same proportions. Further, it especially doesn’t matter how much data we have if we only look at a small subset of that data.
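
To make the arithmetic of that point concrete, here is a minimal sketch in Python. The reading labels and counts below are entirely hypothetical, chosen only for illustration and not drawn from any real apparatus. It simply shows that if newly catalogued witnesses support the existing readings in the same proportions as the witnesses we already have, the relative support for each reading does not move at all, no matter how many new manuscripts are added.

# Toy illustration only: hypothetical readings and counts, not real manuscript data.
# Shows that adding witnesses in the same proportions as existing support
# leaves the relative standing of each reading unchanged.

def support_shares(counts):
    """Return each reading's share of the total attestation."""
    total = sum(counts.values())
    return {reading: count / total for reading, count in counts.items()}

# Hypothetical attestation for a single variation unit.
existing = {"reading_a": 800, "reading_b": 150, "reading_c": 50}

# Add 10,000 new witnesses distributed in the same proportions as the old ones.
new_witnesses = 10_000
shares = support_shares(existing)
combined = {
    reading: count + round(shares[reading] * new_witnesses)
    for reading, count in existing.items()
}

print(support_shares(existing))   # {'reading_a': 0.8, 'reading_b': 0.15, 'reading_c': 0.05}
print(support_shares(combined))   # same proportions, despite 10,000 new witnesses

The point is not that counting witnesses settles anything; it is that sheer volume proves nothing unless the new data either introduces new readings or shifts the balance of the old ones.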

Point three doesn’t actually matter, because the reading ended up in his edition, and there are manuscripts containing that reading which were available in the time of Erasmus. Dr. Gurry even lists them in his article. So unless we want to say that Erasmus made up the reading and it just happened to match a Greek manuscript, I fail to see what the point is here.

The interesting thing this article has shown is that the standard Dr. Gurry sets forth to evaluate the TR is a standard he probably wouldn’t apply to his NA27. There are many minority readings within that text. Further, do we know where the readings of Aleph and B came from? If we take Erasmus’ opinion of Codex B, he alleges the same thing about it that Gurry alleges about Erasmus’ text – that parts of it were following the Latin. It is quite strange that Erasmus, having such a strong opinion against the Vulgate, would follow Latin readings so often! The difference between Gurry’s claim and Erasmus’ is that Erasmus’ text is supported by Greek witnesses, while many, many readings from Codex B are supported by virtually no other Greek manuscript.

This brings me to my final question: on what grounds does one stand to evaluate a text from a modern critical perspective? The modern critical methodology cannot say much about the original text of Scripture with any kind of authority. It is a text based on a localized smattering of idiosyncratic manuscripts that have no pedigree and that disappear from the history of textual transmission. I understand why a majority text appeal is made, but a majority text appeal from a modern critical text perspective is more confusing than anything, because there are many majority readings that those in the modern critical text camp reject. It is an interesting article, but it mostly just demonstrates that modern critical text advocates like going after Erasmus, as if that defeats the validity of the Greek Received Text.

Now to the Question of Certainty at Matthew 23:13-14

Now that we have seen that Dr. Gurry didn’t actually make an argument against the reading at Matthew 23:13-14 within the TR tradition, I think it will be helpful to explain why Christians should have certainty that the reading in the underlying Greek text of the KJV is the original one.

  1. It is the reading that was used, commented on, translated, and received by the people of God in the age of the printing press
  2. It fits in the passage and is theologically correct
  3. It exists in Greek manuscripts (even Byzantine ones)
  4. It was translated into ancient versions
  5. John Chrysostom preached it (Homily LXXIII)
  6. Calvin commented on it (Commentary on a Harmony of the Evangelists, Matthew 23:13-15; Mark 12:40; Luke 11:42, 20:47)
  7. It does not contradict other Biblical accounts

I have absolutely no reason to doubt that this verse should be there. The only reason I would have for questioning its authenticity is if I were trying to find errors in God’s Word. A reading being omitted by, as Metzger puts it in his textual commentary, the “earliest and best authorities,” is not exactly a strange occurrence. If I recall correctly, these “earliest and best authorities” are known for exactly this kind of omission. What is more likely: that a scribe made a mistake in a verse that starts exactly the same as the verse above and below it, or that somebody intentionally harmonized the text with another gospel before the time of Chrysostom (4th century)?

“Scribes typically copy their sources with fidelity so that ancestors and descendants are closely related”

(Wasserman & Gurry, A New Approach to Textual Criticism, 98)

If we’re after the simplest solution, what is stopping us from believing that a scribe made a common slip-of-the-eye error and that many faithful scribes followed in his steps? Are we going to believe the meddling-scribes theory or the faithful-scribes theory? At what point are we going to admit that we are more interested in scrutinizing the text than in believing it?

Yet, despite all of the good evidential reasons to believe that the TR reading at Matthew 23:13-14 is the original reading, that is not why I believe it to be God’s Word. I believe it to be God’s Word because the Holy Spirit bears witness to it in my heart. I know, not very text critical of me. 

Conclusion

Matthew 23:13-14 is a great test case for examining the various doctrines of Scripture available in today’s conservative church. On one hand, there is the critical camp, which rejects the idea that we can be certain of the text of Holy Scripture and relies upon critical analysis of evidence to derive varying levels of confidence. On the other hand, there is the Received Text camp, which recognizes God’s providence as a meaningful metric for recognizing the text of Scripture. Instead of assuming that we have lost the text of Holy Scripture, Christians should believe that He has preserved it, and receive the text He preserved. We shouldn’t be looking for reasons to prove the text of the Protestant Reformation wrong. If the final textual product of the Protestant Reformation is woefully corrupt, then it doesn’t seem that providence had anything to do with the transmission of the text of the New Testament. Further, if the text of the Reformation is corrupt, then we do not have now, and have never had, a stable text of Holy Scripture.

Christians can have certainty in the text of the Holy Scriptures because God says He provides that certainty. Certainty isn’t derived from our acquisition of knowledge, but rather from the internal witness of the Holy Spirit with the Word of God. No amount of text-critical analysis can offer certainty in God’s Word, because there is nothing in text-critical methods themselves that can produce such certainty. Take, for example, DC Parker, an authority in the discipline and the team lead for the Gospel of John in the ECM:

“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.”

Certainty is a category of faith, not knowledge. If we examine the fruit of the modern critical text machine on the doctrine of Scripture, this is plainly the case. Text-critical methods have only produced doubt. So we can talk about Erasmus all we want, but that’s not going to make the New Testament autographs appear. Christians must hold fast to the Scriptures, and derive their certainty from the only infallible hope, our God and Savior Jesus Christ, by the power of the Holy Spirit. There is an objective standard Christians can look to as well: God’s providential preservation in time.


“We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it.” – Dan Wallace

(Gurry & Hixson, Myths and Mistakes in New Testament Textual Criticism, xii)

Reconstructionists, The Burden of Proof is On You

Introduction

A common refrain in the text-criticism discussion is the appeal to “the burden of proof”: the claim that the burden of proof is on those who advocate for the traditional text to demonstrate that the readings within that text are original. This appeal is a simple misdirect that should not fool any sound-thinking Christian. In making this argument, the apologists for reconstructionist textual criticism draw attention away from their own failure to meet any sort of burden of proof. Typically, those in the modern critical text camp do not venture past manuscript evidence to examine the theological, epistemological, and logical implications of approaching the text the way they do. By framing the discussion within the narrow frame of manuscript evidence and textual variants, it is possible to avoid the marrow of the discussion entirely. If it is possible to demonstrate that a variant is supported by one manuscript, or a church father, or an ancient version, then it doesn’t matter what the theological implications are of adopting that particular reading. This methodology is appealing because it seems scientific, logical, and conclusive. In the case of evidential reconstructionist textual criticism, the reality is that it merely has the form of science, without any sort of real power. In other words, it is completely, and utterly, arbitrary. Let me explain.

Let’s just say, for the sake of argument, that modern reconstructionist textual criticism is consistent in its methodology – which it plainly is not. At one reading its practitioners appeal to one standard, and at another they appeal to entirely different standards. Any claim that the axioms of modern textual criticism are consistent is either misleading or relies upon the audience’s ignorance of the system. Even if these axioms were consistent, they could never support any sort of practical certainty about a given reading. Since the goal is reconstruction of the text, the starting principle of the methodology itself is that the text of Holy Scripture has been lost. Since this is the theological and epistemological starting point, all methods that proceed from it begin the effort of textual criticism standing three feet in midair, because the earliest manuscripts do not reach back to the time of the Apostles. No matter how you spin this reality, you will never escape the fundamental truth that all reconstructive methodologies operate entirely from conjecture. The genealogical methods employed to reconstruct the text of the New Testament simply cannot demonstrate that a reading is original. It may be the case that somebody believes a reading to be original, but that belief does not originate from reconstructionist principles; it has to be borrowed from a system which offers epistemological certainty. The method of reconstruction is arbitrary, and any claim to certainty of any kind is alien to the reconstructionist system.

The Arbitrary Standard of Reconstructionist Textual Criticism

These methods are arbitrary because of the standard itself. Oftentimes, proponents of reconstructionist textual criticism will appeal to the axioms of other systems to bolster the weaknesses of the system they have chosen. In other words, they borrow capital from the theology of the Protestants to put newspaper over the milk that they spilled. See, if the Bible has been lost to the point that it requires reconstruction, then it has not been preserved. There is no escaping this reality, and it is colorfully highlighted in the fact that the term Initial Text is being employed in place of Original or Autographic Text. Even when the term Original is used, it is employed in an entirely different way than it has been historically in Protestant theology. No matter how hard one tries to put a theological spin on this concept, a duck dressed up as a swan is still a duck. Simply calling the theological concept of the Initial Text equivalent to the Original text does not make it so, and the methodology used to construct such an Initial Text cannot make any such claim responsibly. The fact remains that our earliest extant manuscripts are not the earliest manuscripts ever produced, patristic quotations are not inspired and are often paraphrastic, and ancient versional evidence faces the same problem as ancient manuscript evidence. The plain truth is that our earliest manuscripts have no pedigree. We don’t know who made them or who used them. The only thing these sources demonstrate is whether or not a reading existed; they have nothing to say about whether that reading came from the pen of the Apostolic writers or was the machination of an early heretic. The simple problem with genealogical reconstructions is that they can just as easily place a late reading in the spot of an early reading without being detected at all. In fact, scholars are quite vocal in admitting this. In addition to the logical flaws of relying on these early manuscripts, the material flaws are overwhelming. There are more places where the darling early Uncials disagree than agree, and if our manuscripts of Shakespeare were of such quality, we would have something like, “The question is, to beat, or not to beat Toby?” We wouldn’t have Shakespeare at all, just an echo of Shakespeare.

This is what happens when human reconstructionist principles of textual criticism are inscripturated. The educated Christian church has been catechized to believe that these axioms are the only way to determine the text of Holy Scripture, and an arbitrary text is therefore forced onto the church. A text that can introduce later, unoriginal readings and pass them off as original without anyone knowing such an event occurred. The major problem with this is that if reconstructionist text-critical principles are the only way to determine what is Scripture, then Christians must place their faith in a method that is entirely arbitrary and in no way conclusive. Since the material is not perfectly preserved, the doctrine of inspiration must be refashioned around a text that is not materially pure.

The Reconstructionists Must Defend Their Thesis 

At the outset, the method admits that the text of Scripture, at least part of it, has been lost and must be reconstructed. The principal axiom of the method looks at the text of Scripture and says, “We don’t know what it says, and we don’t have the whole thing.” The next step should have been figuring out whether a reconstruction effort could even be accomplished with the available materials. “For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it?” (Luke 14:28). In fact, this was done by Dr. John Burgon in the 19th century (The Revision Revised), wherein he conclusively demonstrated that the source material for this reconstructed text was utterly devoid of the quality required for such an effort. This was demonstrated again by H.C. Hoskier in the 20th century (Codex B and its Allies). In the 21st century, the question of whether the extant data is sufficient is succinctly answered by Dan Wallace: “We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it. There are many, many places in which the text of the New Testament is uncertain” (Myths and Mistakes, xii). If the answer wasn’t apparent in the time of Westcott and Hort, it certainly is now. If the theological and epistemological case that I have laid out on this blog over the last few months is not convincing, the fruit of the reconstructionist effort should be. If the data is available, if the text of Scripture is preserved, why can’t the well-meaning scholars get back to it? How long is the church going to entertain this project?

Theologically, the church should reject any method that starts with the premise that any portion of the text of Scripture has fallen away (Mat. 5:18; Mat. 24:35). Epistemologically, the church should reject any method that says the Word of God must be authenticated by scientific principles (2 Tim. 3:16). Logically, the church should reject any method whose practitioners plainly admit they have not, and cannot, reconstruct the text (Luke 14:28). Yet reconstructionist textual criticism continues to be the muse of the Christian academy. With each passing year, the incomplete text continues to be propped up and celebrated by Christians all over the world. The conversation of “Which text?” is irrelevant within a reconstructionist textual model, because the method itself does not believe that any text is “the text.” Why would anyone entertain the arguments of somebody whose starting point rejects the concept of “the” text of Holy Scripture? That is why it is important to investigate the effort that led conservative Christian scholars to adopt such a theological position. If the effort cannot be justified, and has not borne good fruit, why should the church continue to prop it up? Why should Christians act like the modern critical text is the “better” text when the scholars producing it and advocating for it are unwilling to call it “the” text? If the so-called “new” data has given us so much more insight than our fathers of the faith had, why has it produced so much uncertainty? It is one thing to make appeals to “new and better data,” and another to actually prove the point. It is foolish to continue to defend such “new” data when it has overwhelmingly failed to produce anything but uncertainty.

Conclusion

Christians are called to “Prove all things” (1 Thess. 5:21), and the axioms and text of the reconstructionists are objectively the new things on the scene that must be proved. They bear the burden of proof, not the traditional text. The reconstructionists need to demonstrate that their method can produce a text. The traditional text is not the problem; it is not the newcomer that needs to be proved. Why unseat the text of the Protestant church for a model that has not produced a text, cannot produce a text, and will not produce a text? What reason shall we give for such an illogical departure? It is time that Christians reject the misdirection of the reconstructionists who insist that the burden of proof is on the traditional text advocates, when the method they demand for establishing that proof is insufficient to do so. Since the reconstructionist model has not proved a text, those who advocate for the ongoing effort are defending an immaterial text that does not exist. On one hand they say, “we do not have the text,” and on the other they say, “But our text is better.” These two principles cannot stand together, and until the reconstructionists demonstrate that their effort is justified, the burden of proof is on them.