Going Back to the Start

Introduction

There are approximately 450 Bible translations in English, each one unique. The most popular of these include the NIV, KJV, NLT, ESV, NKJV, NASB, and CSB. All of them employ different translation methodologies, and all are either revisions of earlier translations or follow the translational choices of previous translations. Among conservatives, the KJV and ESV reign supreme, though the ESV has largely won the hearts of the modern Calvinist camp. Prior to the late 19th century, there was really only one Bible used by all English-speaking churches, the King James Version (KJV), also known as the Authorized Version (AV). We’ve come a long way in just over 100 years. It is easy to get bogged down in discussions over which version is the best, and that certainly happens, often. It is often said that the English-speaking world has an “embarrassment of riches” as it pertains to Bible versions, and I suppose that is true if we’re counting noses. Unfortunately, the quality of this multitude of versions should cause the sound-minded Christian to see the hundreds of versions for what they are: simply an embarrassment. So how did we get here? How did we get from a church with one text to a church with more texts than there are genders recognized by the state of New York? 

A Short History of English Translations 

Prior to taking a trip through time, it is important to evaluate the state of affairs of Bible versions and the fruit they have produced. In the first place, it should be apparent to all that the number of Bible translations is not a blessing, but a blight to the English-speaking church. It is not only common, but inevitable, that you will encounter a multitude of translations wherever you go to fellowship. I could write several blog posts chronicling the various occasions on which an NASB devotee and an ESV reader went at it, ultimately resulting in the Bible study devolving into a shoddy attempt to do word studies using some online lexicon. Rather than going back to the Greek, let’s go back in history and see how this all started. 

In 1881, a revision of the Authorized Version was completed, and the product was called the Revised Version. The committee responsible for this effort was authorized with the simple task of removing the “plain and clear errors” in the AV. Some of the rules for such a revision included: 

  1. To introduce as few alterations as possible into the text of the AV
  2. To indicate such alterations in the margin
  3. Only necessary changes were to be made

Not only did the “revision” team not follow these rules, they broke them in excess. They didn’t just “revise” the AV, they created an entirely new underlying Greek text, an entirely new translation, and the notes which they left in the margin to detail such additions and subtractions were so inadequate that even the most learned reader could be misled by them. The vague statements such as “some ancient authorities” in the margins have been carried over in spirit into the beloved modern versions, most notably the ESV, NIV, and NASB. These kinds of footnotes are not only bewildering, they introduce doubt where doubt need not be introduced. That is not to say that the intentions of the revision team were malicious, but if they were graded on how well they could follow directions, they would have failed. This is relevant because almost all of the modern versions stand in this textual tradition. 

In the preface of the ESV, it reads that it “stands in the classic mainstream of English Bible translations over the past half-millennium. The fountainhead of that stream was William Tyndale’s New Testament of 1526; marking its course were the King James Version of 1611 (KJV), the English Revised Version of 1885 (RV), the American Standard Version of 1901 (ASV), and the Revised Standard Version of 1952 and 1971 (RSV).” 

If you’ve read both the ESV and KJV, you’re probably thinking what I thought when I read that: What loose definition of “classic mainstream” is being used? 

If by “classic mainstream” it is meant that it has 66 books in it, then it certainly does stand in the same stream. Yet anybody who has read these two versions knows that this is a plain abuse of the term which only serves to obfuscate the reality to the reader. The reality is that the ESV, and almost every modern version, stands in the stream of either the RSV or the ASV, and only the KJV can properly claim to stand in the tradition of Tyndale. One cannot responsibly say that a text is in the same tradition as another when they differ in hundreds of places, and when the Bibles which stand in the RV stream omit over forty verses found in the KJV stream. See, the English Bibles leading up to the KJV in 1611 all had the Longer Ending of Mark, the Pericope Adulterae, the Comma Johanneum, John 5:4, Acts 8:37, Romans 16:24, and so on. They all translated the same reading at John 1:18 and 1 Timothy 3:16. They all stood in the same textual tradition. So it was rather disingenuous when the revision team first introduced their text, advertising it as a “revision,” and disingenuous again when Crossway published their preface saying that the ESV was in the “classic mainstream” going back to Tyndale. This same strategy is still employed today by many top scholars who consistently prop up the idea that the two streams are essentially the same. 

It is important to note that not only is the text vastly different between the RV tradition and the KJV tradition, but the textual methodologies are completely different as well. If the two streams are different, every Christian should be asking three questions: 

  1. Why is there such a concentrated effort to mitigate the differences between the two streams? 
  2. Why is it so important that the two streams stand in the same tradition? 
  3. If the “revision” effort resulted in an entirely new Greek text, should we adopt that text? 

Answering the Difficult Questions

The reason there is an effort to mitigate the differences between the two streams is that if the streams are truly significantly different, the classic Protestant doctrine of inspiration and preservation is incorrect. If the Scriptures have been kept pure in all ages, there cannot be two textual traditions that are both valid. And if the modern textual tradition is valid, then the sum of classic Protestant doctrine was built on an incorrect text. That is why it is so important that the two texts stand in the same tradition. If we were talking about a handful of insignificant readings which were simply ignored for a hundred years or so, that could easily be written off as the fallibility of the textual criticism done in the 16th century. That is not the case, however. The reality is that we are talking about hundreds and hundreds of differences. So many differences, in fact, that the two text platforms are entirely different Bibles. That should cause every single Christian who cares about inspiration and preservation to give serious thought to the reality of two different textual platforms. If the ESV, for example, does not stand in the “classic mainstream” of Scripture, what should we think of it? 

Rather than viewing the discussion over translation as a matter of preference, we need to revisit the history of translations and see whether that first “revision” was even warranted, and what exactly was done as a part of the effort. The reality is that if the revision team had followed instructions, it is likely that I wouldn’t be writing this blog post. We’d all be reading a faithful update to the Authorized Version. That, of course, did not happen. Instead, we have hundreds of Bible versions, endless debates over which translation is best, the enemies of the faith constantly attacking our embarrassing situation, and utter chaos in our churches as a result of our multitude of Bible versions. Yet these are not the only products of that fateful “revision” effort. As a result of the modern textual methodology, pastors and laymen alike are taught to read their Bibles critically and subjectively, picking and choosing verses to believe and not to believe. The common opinion is that “no translation can adequately bring forth the original,” resulting in people utilizing Greek lexicons to warp the text into what they want it to say. The plain fruit of this is that people simply do not trust their Bible translation. Even worse, the latest textual methodology that has evolved from the 19th century has brought the levels of skepticism to dire extremes. Not only is it the conservative position to approach the text skeptically and subjectively, it is perfectly normal to reject the preservation of Scripture altogether. In fact, it is considered naive, and even fundamentalist, to affirm that God has preserved the matter of Scripture perfectly into the 21st century. 

The reality is that the “revision” done in 1881 has led to an embarrassment of problems for the modern church. It has given license for people not only to doubt their translation, but to doubt the text it was translated from. It has introduced division by forcing the church to take a stand on the traditional text against the modern text. It has created controversies, debates, strife, confusion, and chaos, and has not only opened the door for the enemies of the faith to discredit the Scriptures, but given full license for Christians to do the same. Look around, Christian. There is not a Bible anymore, just bibles. Rather than squabbling for hours on Facebook over textual variants, perhaps it is a good idea to back up for a second and look around. Which text is the real problem? Which text really needs to be justified? Who really bears the burden of proof? It is abundantly clear which text has caused more problems. The question to ask, and an important one at that, is this: Was it justified for the church to adopt a text that was conceived in scandal, and should we adopt the children of that text today?

In the following blog posts, I will be exploring these questions. Happy New Year!


A Crash Course in the Textual Discussion

Introduction

When I first started learning about the textual variants in my Bible, I had a great number of misconceptions about textual criticism. I thought myself rather educated on the matter because I had read the KJV Only Controversy twice and had spent hours upon hours watching the Dividing Line. Yet when it came down to actually understanding anything at all about the matter, I realized I didn’t know anything. Even though I knew a lot of text-critical jargon, and could employ that jargon, many of the arguments I had learned were factually incorrect or misinformed. A comment on my YouTube channel earlier today demonstrated to me that many others are in the same boat I was in. 

The fact is, I couldn’t tell you why the Papyri were significant, or even how many Papyri were extant and what sections of the Bible they included. I couldn’t even name a proper textual scholar, except for maybe Bart Ehrman, but I thought he was just an angry atheist. I had heard that the CBGM was going to get us to a very early text form, but I couldn’t explain how, or whether that text was reliable. I knew that textual criticism was changing, but again, I didn’t know what those changes were or how they affected my Bible. There are a lot of downsides to getting your information from one or two sources, especially if those sources are simply interpreters of textual scholarship and not textual scholars themselves. The only thing that I had really adopted from the sources I had interacted with was confidence that I was on the right side of things, without really knowing why. I developed a list of questions that I wish somebody had asked me before I adopted the axioms of the Modern Critical Text, and perhaps they will be helpful for my reader.

  1. How did the Papyri finds impact the effort of textual scholarship?
  2. Is the concept of “text-type” a driving factor in the current effort of textual scholarship? 
  3. Which manuscripts are primarily used as a “base text” of the modern critical text as it is represented in the NA27 and 28? 
  4. What is the Editio Critica Maior (ECM)?
  5. Which textual scholars are involved in creating the Editio Critica Maior (ECM)? 
  6. What is the Initial Text, and how is it different from the Original?
  7. What is the difference materially between the Received Text and the Modern Critical Text?
  8. What is the CBGM, and how is it impacting modern Greek texts and Bible translations? 
  9. Which scholars are contributing to the current effort of textual scholarship, and what are their thoughts on the CBGM and ECM? 
  10. What do the scholars who are editing the modern Greek New Testament as it is represented in the Nestle-Aland/UBS platform think of the text they are creating? 
  11. What is the TR?

This “quiz” of sorts is a good litmus test as to whether or not you are up to date on the current trends in textual criticism. 

Answer Key

  1. The Papyri, while initially exciting, did not yield the kind of fruit that many would have hoped for. In the first place, they disproved Hort’s theory that Codex Vaticanus was the earliest text, because the Papyri included readings that are not found in the Alexandrian manuscripts, which were called “Earliest and Best” all throughout the 20th century and even still today by some. This means that the Papyri do not vindicate the Alexandrian text form as “earliest,” and in fact they prove that there were other “text forms” circulating at the same time. While the Papyri may be helpful in establishing that the Bible existed prior to the fourth century, every single Christian, in theory at least, believes this to be true regardless of the Papyri. Christian apologetics was done successfully well before the discovery of the Papyri. The Christian faith is one which believes that the eternal Logos became flesh in the first century, lived a perfect life, died on a Roman cross, was dead for three days, rose again on the third day, appeared to a group of disciples and a multitude of others, then ascended to the right hand of the Father. This is established without the Papyri, as the Bible is not established on the basis of the Papyri. Further, there are fewer than 150 Papyri, and many of them are scraps. We could not construct a whole New Testament with the Papyri. So while the Papyri may serve some sort of apologetic purpose for some, their value as it pertains to actually creating a Greek New Testament is much less significant than that of other, later New Testament data. 
  2. Due to the pre-genealogical coherence component of the CBGM, the concept of text-types has largely been abandoned by textual scholars, except for perhaps the Byzantine text-type, which is largely uniform. Through computer-assisted statistical analysis, modern critical methods have demonstrated that the manuscripts formerly classed in the Alexandrian, Western, and Caesarean text families do not share enough statistical similarity to be properly called text families. Further, current text-critical scholars have adopted a different method, which focuses primarily on evaluating individual verses, or readings, rather than manuscripts as a whole. So not only are the manuscripts formerly classed into the Alexandrian, Western, and Caesarean text families not really families, the concept of text families is not necessarily being used in the current methodology. 
  3. The two manuscripts which serve as a “base text” for the NA/UBS platform are Codex Vaticanus (B) and Codex Sinaiticus (Aleph). Significant variations between the Received Text and the Modern Critical Text are typically the result of the prioritization of these two manuscripts over and above the readings found in the majority of manuscripts or other manuscripts. This is shifting as the concept of text-types is being retired, but the text as it exists in modern Bibles generally reflects the text form of just two manuscripts. As the CBGM is implemented, this may cause certain Alexandrian readings to be rejected, but as it stands, modern Greek texts and Bibles heavily favor the two manuscripts mentioned above. These two manuscripts do not belong to the same family, which is to say that they likely do not share one common ancestor or ancestors. It is possible that they share a cousin manuscript, but even that is speculative. 
  4. The Editio Critica Maior (ECM) is a documented history of the Greek New Testament up to about AD 1000 which considers Greek manuscripts, translations, and ancient citations of the New Testament. The ECM also provides information on the development of variants according to the analysis of the editors. The first installment was published in 1997, and the project is slated to be finished by 2030. The ECM is not necessarily a Greek New Testament per se, but rather a history of how the text is said to have evolved in the first 1,000 years of the church. This means that it excludes copies made after AD 1000, even when they were copied from manuscripts that predate AD 1000. For example, if a manuscript was copied in AD 1300 from an exemplar created in AD 500, the readings of the AD 1300 copy will not be considered, despite preserving very old readings. The main text printed in the ECM contains the readings which are said to be the earliest, though there are many places where the editors of the ECM are split on which reading came first. Due to these split readings, the ECM functionally serves as a dataset which the user can evaluate individually to select which readings they believe to be the earliest. A current weakness of the ECM is that it does not consider all of the extant data, and it remains to be seen whether the final product in 2030 will incorporate all extant New Testament witnesses. As it stands, it is an incomplete history of the New Testament, despite being the largest critical edition produced to date. 
  5. It is difficult to find a list of all of the men and women working on the ECM, but some of the scholars who have worked on it, or are working on it, are Holger Strutwolf, D.C. Parker, and Klaus Wachtel. The Institute for New Testament Textual Research in Munster bears overall responsibility for the project. The ECM is supported by the Union of German Academies of Sciences and Humanities. 
  6. The conversation over the Original text versus the Initial Text is still hotly debated amongst textual scholars, but Dr. Peter Gurry defines the Initial Text as, “The ECM editor’s own reconstructed text that, taken as a whole, represents the hypothetical witness from which all the extant witnesses derive. This hypothetical witness is designated A in the CBGM, from the German Ausgangstext, which could be translated as “source text” or “starting text.” The relationship of the initial text to the author’s original text needs to be decided for each corpus and by each editor; it cannot be assumed” (Peter Gurry, A New Approach to Textual Criticism, 136). Simply put, the Initial Text is the “as far back as we can go” text. It is up to the editor, or perhaps the Bible reader, whether or not that Initial Text represents what the writers of the Bible actually wrote. It is important to keep in mind that the Initial Text is likely to favor texts from a particular region. That is to say, the Initial Text produced by scholars is only one of many potential Initial Texts. Despite the fact that many are optimistic regarding the Initial Text, the fact stands that there are many places in the ECM where the editors are split on which reading is initial. That means there is no consensus on what the Initial Text is, or what it will be. How this will be determined has yet to be seen. I comment on the discussion here and here.
  7. The difference between the Received Text (TR) and the Modern Critical Text (MCT) is significant. The MCT is at least 26 verses shorter, as it excludes the ending of Mark (Mk. 16:9-20), the Pericope Adulterae (Jn. 7:53-8:11), the Comma Johanneum (1 Jn. 5:7), John 5:4, Acts 8:37, and Romans 16:24. There are also a number of places where the readings are different, such as John 1:18 and 1 Tim. 3:16. There are also places in the MCT, like 2 Peter 3:10, where the reading has the opposite meaning from the TR. Many advocates of the MCT are quick to point out that the TR does not have Greek manuscript support for Revelation 16:5, but the MCT also has readings that do not have Greek manuscript support, like 2 Peter 3:10, mentioned above. This does not mean that these verses cannot be supported, just that it is rather hypocritical for many MCT advocates to demand extant manuscript support when manuscripts available at one time may have contained the reading in question. In many of the doctrinally significant places where the MCT and TR differ, the TR contains readings found in the majority of manuscripts, whereas the MCT represents a small minority, and in some places, just two manuscripts (Mk. 16:9-20). In other places, the TR contains minority readings, though I argue that these minority readings can be substantiated by the consensus of commentaries, theological works, and Bible translations throughout the history of the church. In any case, the number of variants within the TR tradition is minute compared to the number of variants that must be reconciled within the MCT tradition. 
  8. The Coherence-Based Genealogical Method (CBGM) is “a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament Text” (ibid. 3). The CBGM uses statistical comparison to determine how closely related two witnesses are to each other, and text critics then evaluate that comparison to determine which reading potentially came first in the transmission history of the text (a minimal sketch of this pairwise comparison appears after this list). This is the method that is primarily being used to construct the ECM. To see a basic overview of the method, please refer to this video, which is a thoughtful and helpful examination of the CBGM. I comment on the CBGM more here.
  9. The scholars who are using the CBGM and creating the ECM have varied opinions on what is being constructed. Men like Eldon Epp and D.C. Parker do not believe that the ECM has anything to say about the original, or authorial, text of the New Testament. Others are more optimistic, such as Dirk Jongkind and Peter Gurry. As it stands, it has yet to be demonstrated how the ECM can definitively say anything about the original or authorial text, as the methods of the CBGM do not offer this sort of conclusion. Further, it has yet to be shown how a text with split readings can be said, in any meaningful way, to represent one unified Initial Text, let alone an original. That is to say, the ECM contains the potential for multiple Initial Texts. The problem of split readings in the ECM has yet to be addressed adequately, as far as I know. 
  10. The scholars creating printed Greek texts such as the NA/UBS platform do not believe they are creating original texts. They are simply creating printed texts that serve as a tool in translation and exegesis. The editors are typically uninterested in speaking to whether this text represents the authorial text; that is up to the user of the printed edition. This is evident in the fact that the 28th edition of the Nestle-Aland text and the 5th edition of the United Bible Societies text are not final texts. Due to the ongoing creation of the ECM, these printed Greek texts are going to change. Even optimistic scholars, such as Dr. Peter Gurry, comment that these changes “will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (ibid. 6).
  11. The Received Text (TR) is the form of the Greek New Testament as it existed during the first era of printed Greek Bibles in the 16th century, after the introduction of the printing press in Europe. Up to that point, all books were hand copied. There is not one “TR,” per se, but rather a corpus of Greek texts which are generally uniform. The places of variation within the TR corpus are minor when the significance of these differences is considered. The opinion of textual scholar Dr. Edward F. Hills was that these variations amount to fewer than ten. High orthodox theologians such as Turretin considered such variations to be easily resolved upon brief examination. This was the Greek text that the Westminster Divines considered “kept pure in all ages,” and it is the text platform that the Reformed and Post-Reformation Divines used in their commentaries and theological works from the middle of the 16th century up to the higher critical period, when Hort’s text (based on Vaticanus, generally the same text that is used for the ESV) was introduced as an alternative. There are varying views on what “the” TR is, but across all of the printed editions of the Received Text corpus, the differences are so minute that it can be considered the same Bible nonetheless. Modern debate tactics have introduced much confusion into the definition of “the” TR, but the fact stands that this sort of question was not a problem to the men who used it to develop Protestant theology up to the higher critical period. Adherence to the TR is based on the vindication of readings by their use among the people of God in time, over and above extant manuscript data, which cannot represent all of the manuscripts that have ever existed, since a great number have been lost or destroyed.
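
For readers who want to see what the “statistical comparison” in answers 2 and 8 actually amounts to, below is a minimal sketch in Python of the pairwise-agreement calculation behind pre-genealogical coherence. Every witness name, variant unit, and reading in it is a made-up placeholder for illustration; this is not real collation data, and it is not the CBGM software itself.

```python
# A minimal sketch of pre-genealogical coherence: the percentage agreement
# between two witnesses, counted only over the variant units where both are
# extant. All witnesses, units, and readings below are hypothetical.

from itertools import combinations

# Hypothetical collation: variant unit -> {witness: reading}, "-" = lacuna.
collation = {
    "unit 1": {"W1": "a", "W2": "a", "W3": "b"},
    "unit 2": {"W1": "a", "W2": "b", "W3": "b"},
    "unit 3": {"W1": "a", "W2": "a", "W3": "a"},
    "unit 4": {"W1": "a", "W2": "-", "W3": "a"},
}

def agreement(w1: str, w2: str) -> float:
    """Percentage agreement between two witnesses over shared variant units."""
    shared = [
        unit for unit, readings in collation.items()
        if readings.get(w1, "-") != "-" and readings.get(w2, "-") != "-"
    ]
    if not shared:
        return 0.0  # the witnesses never overlap, so no comparison is possible
    agree = sum(1 for u in shared if collation[u][w1] == collation[u][w2])
    return 100.0 * agree / len(shared)

# Rank each pair of witnesses by how closely they agree.
witnesses = ["W1", "W2", "W3"]
for w1, w2 in sorted(combinations(witnesses, 2), key=lambda p: -agreement(*p)):
    print(f"{w1} ~ {w2}: {agreement(w1, w2):.1f}% agreement")
```

The real method goes on to combine these percentages with editorial judgments about which readings are prior (what the literature calls genealogical coherence), but pairwise agreement figures of this kind are the starting point, and they are also the sort of numbers discussed in the article on Mark 16:9-20 below.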

Conclusion

Prior to entering into the Textual Discussion, I think it wise that Christians be up to date not only on the updated jargon, but also on the information that underlies the jargon. If one wants to argue that the Papyri are definitive proof of one text being superior to another, he should be ready to substantiate that claim by demonstrating how the readings of the Papyri have impacted modern text-critical efforts. In the same way, if somebody wishes to stake a claim on the CBGM, it should also follow that he be ready to demonstrate how this method has proved one conclusion or another. Saying that the Papyri and the CBGM have “proven” a particular text right or wrong is simply an assertion that needs to be substantiated. It may be the case that the claim is correct, but it is important that we hold ourselves to the same standard an 8th grade math teacher might hold us to, and “show our work.” The fact stands that a Bible cannot be constructed from all of the Papyri, and the CBGM has introduced a “slight increase in the ECM editors’ uncertainty about the text, an uncertainty which has been de facto adopted by the editors of the NA/UBS” (ibid. 6). 

It is easy to get caught up in conversations on textual variants and the scholarly blunders of Erasmus, but these discussions do not come close to addressing the important components of the Textual Discussion. An important reality to consider when discussing variants from an MCT perspective is that the modern critical text is not finished, and the finished product is not claiming to be a stable or definitive text. The opinions on a variant may change in the next ten years, and new variants may be considered that have been ignored throughout the history of the church. One might make a case for why Luke 23:34 is not original, but the fact is that it is impossible to prove such a claim by modern critical methods without the original to substantiate the claim against. Even in the case of 1 John 5:7, which is admittedly a difficult verse to defend evidentially, it cannot be proven that other manuscripts contemporary to Vaticanus and Sinaiticus excluded the passage, because those manuscripts are no longer extant. Since it is well known that other Bibles with different readings existed at the time of our so-called earliest manuscripts (because of the Papyri!), we can at least say with confidence that these two manuscripts do not represent what all of the Bibles looked like at that time. That is to say, those who argue vehemently for Bibles which closely follow these two manuscripts are simply putting their faith in the unprovable claim that the other contemporary manuscripts did not have the readings that explode into the manuscript tradition shortly after, or even the minority readings that made it into the TR. Some people, like James Snapp, have developed entire textual positions which recognize this problem, which I consider a sort of mediating position between the Received Text and the Modern Critical Text. Unlike many of the MCT advocates, James Snapp is more than willing to show his work.  

In any case, it is high time that the bubble of Codex B is pricked. Times have changed, and even the most recent iteration of modern text-criticism has supposedly done away with Hort’s archaic theories. It may be time for Christians to stop appealing to the Papyri and the CBGM without actually understanding what those two things are, and instead pick up some of the literature and become acquainted with what has changed since Metzger penned his Text of the New Testament. In my opinion, Snapp has answered many of the questions that modern textual scholars are unwilling to answer with his Equitable Eclecticism. While I believe his position still faces the same epistemological problems as the ECM and the CBGM, it certainly is an upgrade from the MCT. I hope that this article has helped people understand the effort of modern textual criticism better, and perhaps even sparked interest in investigating the information for themselves. 

Sources for Further Reading on Modern Textual Criticism

D.C. Parker, editor of the ECM for the Gospel of John

Peter Gurry’s Introduction to the CBGM

Peter Gurry and Elijah Hixson’s Latest Book

The Latest, Authoritative Work on the Pericope Adulterae (Jn. 7:53-8:11)

Sources for Further Study on the Received Text Position

Audio from the Text and Canon Conference 

Audio from Dr. Jeff Riddle’s Word Magazine

Mark 16:9-20 is Scripture

Introduction

The rejection of the ending of Mark, formally known as the “longer ending of Mark,” is a canonical crisis. In this article, I want to make a case for why people who read and use modern Bible translations should be outraged at the brackets and footnotes in their Bible at Mark 16:9-20. This is the textual variant that ultimately led me to put down my ESV and pick up an NKJV, and then a KJV. When I understood the reason that my Bible instructed me to doubt this passage, I realized the methods which put the brackets and footnotes in my Bible were not to be trusted. The primary reason that I did not believe this passage to be Scripture was my blind adherence to things I had heard, not the reality of the data. The quickness with which I cast God’s Word into the trash caused me to be deeply remorseful, and I’m not alone in that. Not only had I been catechized to reject the ending of the Gospel of Mark, but I was instructed to berate others who were “foolish” enough to believe it is original. Meanwhile, enemies of the faith delight in the fact that Christians boldly reject this passage, because it proves their point that the Bible is not inspired. I will now walk through the data that caused me to regret casting this passage aside.

The External Evidence

The first step in my journey was to examine the actual manuscript evidence for and against the passage. There are over 1,600 extant manuscripts of Mark, and only three of them end at verse 8. The decision to remove the passage, or relegate it to brackets, was made on the basis of only two of these. When I discovered this, I was dismayed. I had been using the argument that “we have thousands and thousands of manuscripts,” and I realized, based on my own position on the text, that I could not responsibly use this apologetic argument. My argument for the text, at least in the Gospel of Mark, was not based on thousands of manuscripts, just two. Yet even in one of these manuscripts (03), there is a space left for the ending of Mark, as though the scribe knew about the ending and excluded it. I later discovered that text critics such as H.C. Hoskier believed that very manuscript to be created by a Unitarian, and that Erasmus thought the manuscript to be a choppy mash of Latin versional readings. I realized that only some textual scholars thought these manuscripts to be “best,” and my research seemed to be demonstrating that this claim of high quality was rather vacuous indeed. I was operating on the theory that these two manuscripts represented the only text form in the early church, a theory which I discovered has been mostly abandoned. This is due to the Byzantine readings found in the Papyri and the statistical analysis done by the CBGM. Further, and most shocking to me at the time, was that the two manuscripts in question do not look like the rest of the extant manuscripts of Mark. The percentages of agreement that these two manuscripts share with the rest of the manuscripts of Mark show that most of them are not even close enough to be cousins, let alone direct ancestors. 

Codex Vaticanus (03) and Codex Sinaiticus (01), the two early manuscripts in question, do not agree in any significant way with any other extant manuscript in the places examined in Mark, other than minuscule 2427, which has been known to be a 19th century forgery since 2006. What these agreement figures mean is that these manuscripts look very different from the rest of the manuscripts of Mark. I realized I could not responsibly claim that these two manuscripts were “earliest and best.” There was no way I could defend that in any sort of apologetic scenario, at least. I abandoned this belief on the grounds of two realities: 1) the data shows that different text forms were contemporaries of Vaticanus and Sinaiticus, so they weren’t necessarily “earliest,” just surviving, and 2) these manuscripts did not look like the rest of the thousands of manuscripts I was constantly appealing to in apologetic scenarios. Further, I found it quite easy to demonstrate that there were other manuscripts circulating at the time which had the longer ending of Mark in them! Even Bart Ehrman admits as much (Bart Ehrman, Lost Christianities, 78-79). This is a simple fact, considering the number of quotations from the ending of Mark found in patristic writings, including Papias (110 AD), Justin Martyr (160 AD), Tatian (172 AD), and Irenaeus (184 AD). The most compelling of these witnesses is Irenaeus, who directly quotes Mark 16:19 in the third book of Against Heresies: “Also, towards the conclusion of his Gospel, Mark says: ‘So then, after the Lord Jesus had spoken to them, He was received up into heaven, and sits on the right hand of God.’” So the passage most certainly existed prior to its exclusion in the two manuscripts in question. Hierocles (or Porphyry), a pagan apologist, even provokes his Christian readers to drink poison, quoting the ending of Mark. It seems that atheists never tire of that retort. 

To reject this passage on evidentiary grounds is to completely ignore not only the manuscript data, but also the patristic citations which predate our earliest surviving manuscripts. If manuscript data does not matter, and patristic sources do not matter, then what does matter? Well, tradition matters, apparently. See, up until recently, the theory about the ending of Mark was that it was simply lost to time: the book did not initially end at verse 8, but the true ending has been lost. That doesn’t quite work for most Christians, so other theories had to be contrived to hold onto the supremacy of these two manuscripts. Rather than adopting the ending that is found in over 1,600 manuscripts, the default position of the 20th century has lingered in modern Bibles in the form of brackets and footnotes. The reason for this? Some of the earliest manuscripts don’t have it. “Some,” as though the number of manuscripts cannot be counted or determined. It seems that the editors at Crossway might want to consider being more precise, but I imagine it would be harder to justify those brackets if the reader knew the actual number. Even the RV, which is the ESV’s predecessor, contained this information. I still, to this day, feel betrayed by the way that my ESV presented that information in my Bible. I felt further betrayed by all of the people who knew this information and still told me that the ending of Mark was not Scripture.   

The Internal and Theological Evidence

If you are a Christian, you believe that the Bible was inspired by God. That means that the New Testament should be coherent, both grammatically and theologically. That is the reality that kept me assured during my examination of the ending of Mark. I figured that if God had truly preserved His Word, there would be a simple answer to whether or not this passage was indeed Scripture. I found that there was, and overwhelmingly so. I didn’t even need to go sifting through all of the evidence to know what the true reading of the ending of Mark was; the answer was laid out in the doctrine of Scripture in my London Baptist Confession of Faith. 

“We may be moved and induced by the testimony of the church of God to a high and reverent esteem of the Holy Scriptures; and the heavenliness of the matter, the efficacy of the doctrine, and the majesty of the style, the consent of all the parts, the scope of the whole (which is to give all glory to God), the full discovery it makes of the only way of man’s salvation, and many other incomparable excellencies, and entire perfections thereof … is from the inward work of the Holy Spirit bearing witness by and with the Word in our hearts” (LBCF 1.5). 

If Mark ends at verse 8, there is a significant problem, at least from a confessional standpoint. The problem is that verse 8 requires a verse 9 due to its grammar. There is no place in the whole of ancient Greek literature that ends a narrative with the word “for” (γαρ). This means that Mark did not stop writing at verse 8, if the assumption is that the Scriptures were at least perfect in the autograph. So if Mark did not stop writing at verse 8, and the Bible is indeed inspired and would not have included such a basic grammatical error, I figured that perhaps the reading that occurs in over 1,600 manuscripts should be considered over and above the two manuscripts which contain this idiosyncratic grammatical mistake. In order to adopt the abrupt ending of Mark, I could not say that the Bible had any sort of “majesty of style,” because it would in fact contain this atrocious grammatical error at the “ending” of Mark. 

Further, if Mark ends at verse 8, there is a basic theological problem that puts the Bible at odds with itself. The confession says that the Bible should be esteemed on the account of “the consent of all the parts.” If the Gospel of Mark ends at verse 8, it does not consent with all the parts of Scripture. It excludes an appearance account, which is included in Matthew, Luke, John, and even in Paul’s testimony of the Gospel in 1 Corinthians 15. That means that Mark is apparently the only Gospel writer who didn’t have his story straight. 

Even Paul, who wasn’t there to experience the life of Jesus, has his facts in line. It is vital that the Gospel that Christians use contains the life, death, burial, resurrection, and appearance of Jesus. I figured that Mark would not have been ignorant of this. It seemed illogical, in fact, to affirm the opposite, that Mark would have excluded such a fundamental detail. The burial and appearance are crucial to affirming two fundamental doctrines of the Christian faith: 1) that Jesus was very man and actually died, and 2) that after dying, Christ was raised up and thus is very God. Without the appearance, there is no actual vindication of the latter. Turretin even affirmed this truth in saying that the ending of Mark was necessary for establishing the truth of the Gospel account, which I imagine he included as a means to respond to people like me, who were calling the passage into question. At first I said that it didn’t matter, because this account is available in other places, but I was making the assumption that early readers of Mark had access to those other witnesses. See, I sat through a semester at Arizona State University where I heard all of the theories of Bauer and Ehrman, so I should have known better than to make that argument. If one takes the higher critical perspective of Markan priority, that Mark was the first Gospel, then the earliest Christians did not have a Gospel account which vindicated the truth of the resurrection. Which is to say that the only apologetic defense of the Gospel I had against the actual critics of the faith was essentially to say, “Well, that’s just wrong!” Kant and Kierkegaard would have been proud of me. 

Conclusion

At the end of my research on the ending of Mark, I found that there was no good reason to continue propagating the idea that the Gospel of Mark ends with poor grammar, two scared women, and no vindication of the resurrection. If one of the uses of the Bible is to defeat the enemies of the faith in debate, then this clearly was not the way to go about it. In this journey, I also learned something vitally important: the purpose of the Bible was not to defend the faith, it was to have faith and increase in faith. It was the means that God had given me to commune with Him. The majority of the Christian church, who reads their Bible to hear the voice of their Shepherd, should not be subject to the threadbare theories of higher and lower critics in the footnotes of their Bible. There are certain places that warrant a serious discussion of textual variants by Christians; this is not one of them. 

Not only is the evidence overwhelmingly in support of this passage being original, it is impossible to responsibly say that rejecting this passage is in line with a Reformed, confessional view. Not only does it violate the basic principles of the doctrine of Scripture in 1.5, it ignores the fact that doctrines are actually built upon the ending of Mark as a proof text (WCF 28:4; LBCF 7.2). In both the LBCF and the Westminster Larger Catechism, this passage is used to establish the ascension of Christ, which is doctrinally significant. Even more important to me, was how I had to view the Bible as a whole if I accepted the theory that the ending of Mark was not original. I had to believe that a passage of Scripture has fallen away, lost to time, and cannot be recovered. Since this must be true for the ending of Mark, I might as well apply that theory to every other area of textual variation in the New and Old Testament texts. The theories of higher critical thought must be adopted to explain how the text evolved, and justify the ongoing effort to reconstruct this lost bible. I later discovered that is exactly what is being done by nearly every textual scholar, so it seems I was not alone in my conclusions. 

In my examination of just one textual variant, I came to a significant conclusion. Using Dr. Jeff Riddle’s words, we are living in the age of a canonical crisis. The fact that the Gospel of John as it exists in the NA28 is different from the Gospel of John as it exists in the unpublished Editio Critica Maior demonstrates this reality. Christians are reading the Gospel of John as it existed in 2012, while the “true” Gospel of John is currently being constructed in Munster, Germany. Who knows if the John that is produced out of the black box sometime in the next 10 years will be the same as the Gospel of John as it is being read now? I wonder what Schrödinger would think of this paradox. 

It is important that Christians realize that the artificial divide between higher and lower criticism is just that: artificial. The footnote which has instructed Christians to call into doubt the text of Holy Scripture at the end of Mark is not purely informed by manuscript data. Science is done by the intellect, and the intellect of man is terribly limited and subjective. Theories must be applied, and there is not a single textual scholar who approaches the text without assumptions. The deconstruction of the New Testament text is higher criticism restrained by the religious feelings of Protestants who actually buy Bibles. Honest scholars admit as much. “With the rise of an Enlightenment turn to ‘science,’ and informed by a Protestant preference for ‘the original,’ however, critics like Johann Jakob Griesbach, Karl Lachmann, Constantin von Tischendorf, Samuel Tregelles, and finally, B.F. Westcott and F.J.A. Hort reevaluated the evidence…” (Knust & Wasserman, To Cast the First Stone, 16). The reevaluation of the manuscript data in the 19th century is what unseated this passage in Mark from the canon, and the church complied. The people of God do not have to comply with this opinion, and that is the reality. Read the ending of Mark, and know that it is authentic. 

Post Script: A Personal Note from the Author

I do not have scholarly credentials, but I do have one unique qualification that I believe is important. I am a part of the first generation of Christians who came to faith after the battle for the Bible. My generation is feeling the impact of a changing Bible harder than any other generation to date. I was taught how to read my Bible after the longer ending of Mark had been overwhelmingly dismissed. When I approached bracketed texts, I ignored them, because that is what I was told to do. I did not consider the theological impact of removed texts because modern exegesis and hermeneutics are designed around a shifting text. That is why, when I began to study historical Protestant theology, these modern hermeneutical methods seemed so crazy to me. If doctrine cannot be established upon contested verses, what place is left to build doctrine upon? The answer is very few places, and the diamonds in the apparatus of the NA28 are proof of that. The Reformed believed that every word, all Scripture, should be used. That is why it was such a shock to me when I discovered the reasons that these texts were put into brackets. I was raised in a generation of skeptics, and I did not become converted under the assumption that I would need to take a Kantian leap of faith to believe in my Bible. Christians in my generation should not have to believe that they must wait until 2030 to read God’s Word. That is unprecedented in the history of the church. If the Bible isn’t going to be ready for another ten years, what is the point of even reading it until then? The answer is simple: there isn’t a good reason to read it until then, or after then for that matter.   

If the longer ending of Mark is not Scripture, what then is Scripture? What piece of the text cannot be put under the same scrutiny if all it takes is one shoddy manuscript that is stored in the Vatican to change the whole Bible? How many manuscripts would it take to unseat John 3:16 or Romans 8:28? The reality is, the modern Bible is being held together by the people that read it, not the evaluation of manuscripts. The Bible becomes smaller with each implementation of text-critical methods. I imagine that the rapid progression of the modern text-critical effort is directly related to the fact that people simply don’t read their Bibles anymore. It’s easy to ignore footnotes and brackets and a constantly changing text if people don’t know that anything has changed in the first place.  

It is clear that something needs to change, or the Christian church will be in deep trouble by 2030 when scholars begin teaching the people of God how to construct their own Bible using online software. Yes, that is the reality, not some speculation. The split readings in the ECM will eventually make their way into the text of translations, and by that time, the Christian will not have a Bible or a defense for the Bible. If the CBGM has proven one thing, it is that none of the scholars using it can determine what the original said. My hope is that things will change before that happens, but time will tell. 

The Reformation Day Post: VERY Spooky

In the Beginning

God’s Word has been contested since the very beginning, in the Garden, when Satan said, “Yea, hath God said, Ye shall not eat of every tree of the garden?” Eve then changes what God said, and Satan reinterprets it: “God hath said, Ye shall not eat of it, neither shall ye touch it, lest ye die. And the serpent said unto the woman, Ye shall not surely die” (Genesis 3:1-4). Yes, from the very beginning of time, the battle for the authority of God’s Word has been fought. God has delivered His Word in every generation, and has even delivered it anew to His people when it was thought to have been lost (2 Kings 22-23). The struggle for the authority of the Scriptures continued on through the Old Covenant, as the unfaithful kings of Israel continued to build and rebuild the high places. During Jesus’ time, the Pharisees had so distorted the meaning of God’s Word that Jesus issued a lengthy rebuke to them in the form of His exegesis of the law in Matthew 5.

And Again

Even past the time of Christ’s earthly ministry, with Marcion and others, the authority of the Scriptures continued to be questioned, and the actual words and passages themselves were contested and removed in some unfaithful manuscripts. Augustine of Hippo comments on the phenomenon, “Some men of slight faith, or, rather, some hostile to the true faith, fearing, as I believe, that liberty to sin with impunity is granted to their wives, remove from their Scriptural texts the account of our Lord’s pardon of the adulteress” (De adulterinis coniugiis 2.7.6). In the New Testament age, the method of attacking the authority of God’s Word has not changed.

Little is known about the transmission of the New Testament until the Middle Ages, other than the fact that a lot of Bibles were destroyed by persecution, war, fires, and other causes. The history of the New Testament, as it were, is largely clouded to a modern audience until the explosion of manuscripts in the 9th century. Despite this fact, there are quotations from theologians throughout the ages which testify to the existence of ancient and accurate copies that survived through the age of tampering. Deuteronomy 4:2 became an integral text to Augustine and other theologians during this time. “Augustine and his contemporaries were well aware that editing of this sort could potentially take place, and they invented various strategies to deal with the problem: curses were added to the end of certain treatises, sternly warning those who would dare to alter texts that they would be punished for their misdeeds” (Knust & Wasserman, To Cast the First Stone, 100). 

The manuscripts from just before and during Augustine’s time demonstrate that this era could be considered a period of tampering with the text of the New Testament. Despite this tampering, and the fact that Christianity nearly lost to Arianism at the same time, the orthodox faith, along with the original Scriptures, continued on in time. This is the most reasonable explanation for the explosion of uniform manuscripts suddenly appearing in history during the Middle Ages. It was not long after this time that the next major attack on the authority of Scripture occurred. As the middle scholastic period came to an end, theologians began to discover corruptions in the Latin Vulgate.

And Again…

The text of the Western church had in some places conformed to the teachings of Rome, which had been heading in a dark direction for quite some time. The Western church had, for a number of reasons, developed into more of a political player than a religious one. Popes began to sell the papacy to the highest bidder, and at one point, three popes claimed the office. Indulgences were introduced to encourage knights to fight for the Holy Roman Empire, and this led to the grossly abusive practices of the church which drained the pockets of the laity. Some churches had not given communion to the people in years, and in many cases, the only people taking communion were the priests themselves, with the laity observing. Despite this corruption, the seed of the Reformation lived in the marrow of the church with men like Wycliffe and Hus. In the same way that Athanasius was raised up during the Arian controversy in the early church, faithful men of God were called out of the wilderness and began crying out in protest against the abuses that had developed in the Western church. God began orchestrating the Reformation well before that fateful October day in Wittenberg in 1517. 

In fact, there were several providential events leading up to the Reformation that are often forgotten. In the mid 15th century, two things occurred that contributed to the Protestant movement. The first was the invention of the printing press by Gutenberg in 1436, and the second was the fall of Constantinople shortly after that. Up to that point in the West, the Bible that was used was Latin, and the means of reproducing that Bible was hand copying. When Constantinople fell, the Greek-speaking people of God came flooding into the West, bringing with them their language and their Bibles. Bibles continued to be hand copied for some time after this event, but it wasn’t long until the printing press was put to work printing the Bible in all sorts of languages. During this pre-Reformation period, men like Wycliffe had already started producing Bibles in English, and in response, the Roman church said that the Bible was only authoritative insofar as it was approved by the church, and the only Bible approved by the church was the Latin Vulgate as it had come to exist during that time. The Roman church was not mighty enough to stop the events that had been set in motion by the fall of Constantinople and the invention of the printing press, however. In 1514, the Complutensian Polyglot New Testament had been printed, and two years later, in 1516, Erasmus’ first edition of the Novum Testamentum was hot off the press. There was nothing that Rome could do to stop what would happen next. 

On October 31, 1517, a German Roman Catholic monk named Martin Luther posted 95 theses which detailed the places where the Western church needed to change. This moment marks the date that most people consider the Protestant Reformation to have officially started. During this time, the battle for the Bible centered around one question: In what way are the Scriptures authoritative? On one hand, the Roman church said that the Scriptures were authoritative by virtue of the church. On the other hand, the Protestants said that the Bible was authoritative in itself; it was self-authenticating (αυτοπιστος). The doctrine of the self-authenticating nature of the Scriptures was in fact the fundamental principle that drove the doctrine of Sola Scriptura and thus drove the entire Reformation. The only refutation of the doctrine of Rome was to return to the Scriptural reality that God Himself gave authority to the Bible. This doctrine of Scripture ultimately became a staple of Protestant doctrine and was codified in all of the major confessions of the 17th century. 

And Again…

If history has taught us anything, it is that the battle for the authority of Scripture did not end with the high orthodox theologians following the Reformation. The next major battle that the church would face came from Germany, the birthplace of the Reformation. Starting with a German theologian named Friedrich Schleiermacher, the way that theology was done changed forever. The Bible was no longer the Word of God; it was the documentation of the experience of communities of faith. In the German schools, the idea that the Bible was infallible came under fire, and the way the Bible was described and understood changed rapidly. Due to the rise of the sciences and the development of the philosophy of religion, much of the historical information found in the Scriptures was determined to be factually incorrect. As a result, German theologians made sense of this by splitting the interpretation of history into at least two categories.

The first was history as it actually happened, and the second was history as it was experienced by various communities in time. The miracles in the Bible were not true history; they were the interpretation of history by human communities who were trying to make sense of their religious experience. The birth of historical criticism, or higher criticism, would be the next giant the church had to slay. The Swiss theologian Karl Barth, who came onto the scene like a stampeding elephant trumpeting through a Sunday school class, made an attempt at responding to higher criticism with what is now known as Neo-Orthodoxy. According to Barth, the Bible did not have to be factually or materially correct to be the Word of God. The Word of God was Jesus Christ, and the Bible was the authoritative witness to His incarnation; it became the Word of God when the Holy Spirit worked in the believer. In other words, the Bible was not the Word of God; it was a witness to the Word of God, Jesus Christ, and it became the Word of God on occasion.

The theology of Barth sent the church reeling, scrambling to give a response. Cornelius Van Til, for example, spent nearly 30 years offering a response to idealism and neo-orthodoxy by developing his transcendental apologetic. Prior to the rise of Neo-Orthodoxy, B.B. Warfield and A.A. Hodge attempted to address higher criticism by reinterpreting the Westminster Confession. The Bible did not have to be materially preserved to be inerrant, they said; it just had to preserve the sense of the thing. The Bible was really only inspired and perfect in the autographs, and that, they claimed, is what the high orthodox meant. Unfortunately, that is not what the high orthodox meant, and the church concluded that the high-orthodox doctrine of Scripture could not stand its ground against higher criticism like it had against Rome in the 16th century. What Warfield’s doctrine meant was that the Bible could be proved to be original by way of evidence, that by an effort of lower criticism the original could most certainly be reconstructed. This articulation of Scripture was entirely dependent on the abilities of textual scholars to demonstrate that an original could be produced from the surviving manuscripts. In other words, the Scriptures were the Word of God insofar as they could be demonstrated to be the Word of God. At the time of Warfield, theologians were nearly unanimous in believing that this could be done with lower criticism. In fact, Warfield believed that the efforts of text-critics in his day were the providential workings of God to restore the original text of the Scriptures to the church. 

Some time later, another battle for the Bible began, one that led to the production of the Chicago Statement on Biblical Inerrancy. It was a direct response to the neo-orthodox doctrine of Scripture which had turned the church upside down. This latest battle for the authority of Scripture did two things: 1) it codified the theology of Warfield, and 2) it determined that higher and lower criticism were two separate and unrelated disciplines. Yet the theologies of Schleiermacher and Barth had been planted, like twin mustard seeds, and today they stand as mighty trees in the center of orthodoxy. 

The next battle for the Bible is arguably happening now, and it will most certainly rage on until Barth and Schleiermacher are answered totally and finally. The Chicago Statement on Biblical Inerrancy has not aged well, and the ghost of Schleiermacher haunts the canons of modern textual scholarship. Since Warfield’s doctrine was so reliant on the success of lower criticism and its separation from higher criticism, it is completely contingent on these two things being reality. Yet something has happened since Warfield’s time which has given cause for a new battle. The development of lower criticism has resulted in its fusion with higher criticism, and the reality upon which Warfield’s argument rests is no longer reality. Warfield’s doctrine was contingent on the success of the lower critics in proving the original from the extant manuscripts. Since the stated goal of textual criticism is now the Initial Text, Warfield’s formulation has lost its power. Further, the line between higher and lower criticism has become blurred, and the actual textual decisions being made by the lower critics are informed by a combination of both textual data and higher critical principles. 

This is evident in that the stated goal of the Editio Critica Maior is not to produce an original Bible, but rather to reconstruct the history of the transmission of the New Testament Text. In other words, the goal of this critical text is to produce the history of how Christians have experienced their religion in time by examining the documents they left behind. The readings which are determined earliest only speak to the written expression of Christianity in the time and place that they represent. The variants which rank later simply represent how faith communities evolved and developed throughout time. Since the goal is not a definitive text, the goal is inherently in line with documenting how Christians recorded their experience in time. The ECM is not the Bible; it may or may not contain the Bible. That means that while printed editions created from the ECM may have the objective of producing an early witness to the New Testament text, the ECM in itself says nothing regarding the authorial text. Some may say that this Initial Text represents the authorial text, but this is simply how Kant would have responded to Schleiermacher. The very concept of the ECM is the direct implementation of higher criticism in text-critical practice.   

There are two ways that Christians can respond to the reality of the ECM. The first is found in Barth, or perhaps Bultmann: it is fine that the Bible contains errors and factual problems, because the Word of God is contained in the Bible, or perhaps the Bible is a witness to the Word of God. In fact, it would be putting limitations on God to say that He must speak in a narrowly defined set of Scriptures. God is far beyond anything we can comprehend, and therefore the words in Scripture become the Word of God when God speaks through them. Since the Bible cannot be proven to be original by lower criticism, and higher criticism results in demythologizing the Bible, the only answer must be Barth, or some variation. The second option, which was not tried during the Warfieldian era, is the high-orthodox view of Scripture. The Bible does not need to be reconstructed, or demonstrated to be original by way of lower criticism, because it was never lost and does not need to be proved. God Himself authenticates the Scriptures and by His special care and providence has kept them pure in all ages. The Holy Scriptures were faithfully handed down in time by the believing people of God until a providential innovation of technology allowed for them to be printed. This text was edited according to the common faith and was universally received by the Protestants by the end of the 16th century. This is the text that won against the Papists and reigned supreme until the theories of higher critics unseated it from the favor of the academy. The reception of this text vindicates God’s providence in the matter, and it is the most widely read text, even today. It has been cast down by the schoolmen, but among the people of God it has held its place. 

There is a reason that the Reformed stood on the doctrine of Scripture which said that the Bible was self-authenticating. It was the only response to the Papists that would have resulted in the success of Protestantism. The doctrine of Warfield was bound to fail as it was intimately tied to the success of men in reproducing an original text. When the concept of the “original” became obsolete, so did Warfield’s doctrine. At the same time, this allowed higher critical principles an official seat back at the lower-critical table. It will be interesting to see whether Christians uphold the high-orthodox view of the Scriptures, or retreat back to Barth for empty comfort. 

Revisiting the Fatal Flaw Argument Against the Traditional Text

Introduction

One of the primary purposes of this blog is to give people confidence that the Bible they read is God’s inspired Word. Attacks on the Bible of the Protestant Reformation often send people into a spiral of doubt and can damage one’s faith in approaching, reading, praying over, and meditating upon the Holy Scriptures. An argument frequently leveled at the Bible of the Protestant Reformation is what may be called The Fatal Flaw Argument. I initially addressed this argument on the Agros Blog a while back, but since that time I have seen it pop up all over my Facebook feed, so I thought it would be helpful to write a more pointed response than the one I initially crafted. The argument is constructed like this:

  1. In order to be used, read, preached from, etc., the Bible must be able to be reconstructed from extant manuscripts in the event that all printed editions of the Scriptures are wiped off the face of the planet. 
  2. If a Bible cannot be reproduced exactly by reconstructive methodologies, then it should not be used, read, preached from, etc. 
  3. The Traditional Text, as it exists in the Textus Receptus, cannot be reproduced exactly if a reconstruction effort using a “consistent” methodology were employed in the event of a printed edition extinction event; therefore it should not be used, read, preached from, etc. 

This argument may seem appealing, but it actually undermines the validity of essentially every Bible on the market today, including the ESV, NASB, and NIV. The fatal flaw in this so-called Fatal Flaw Argument is that there is not a single Bible available today that could be reconstructed exactly if this hypothetical extinction event occurred. The primary assumption of this argument is that there is a set of canons that could be consistently applied to manuscripts which would, in theory, produce the current form of the Greek New Testament. The obvious issue with this is that the Modern Critical Text, as it exists in the Editio Critica Maior, has yet to even produce a text in the first place. It will not be finished for another ten years or so, and even when finished, it is more of a dataset of texts than a text itself. The onus is on the person making this argument to first demonstrate that they have a text at all.

Prior to beginning my analysis of this argument, it is interesting to point out that it assumes the Received Text and the Modern Critical Text are inherently different, which some do not readily admit. This is true in two ways. The first is that it grants in its premise that the methodologies employed by the textual scholars during the Reformation era were fundamentally different than the methodologies employed today. This is apparent in the reality that modern text-critical methods, with their current canons, could not produce the text of the Protestant Reformation. The second is that it grants that the actual text form is inherently different, as the claim is that the Received Text could not be reproduced, while the Modern Critical Text allegedly could. In any case, in order to make this argument, one has to be willing to apply the argument to all texts, not just the Textus Receptus. If this hypothetical extinction event occurred, a new form of the Bible would emerge, even if the same methods were consistently applied. D.C. Parker, the textual scholar currently leading the ECM team for the Gospel of John, says this: 

“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.” 

I do not employ this quote to disparage Dr. Parker, but rather to demonstrate the reality that even in today’s text-critical climate, without an absurd hypothetical extinction event of printed editions, the editors of Greek New Testaments would seem to refute the premise of the argument by their own words. This further demonstrates that this argument does not only attack the Textus Receptus, but all Bibles. That being said, I do not think this argument is wise to use, no matter which Bible you read. It is an open invitation to attack the validity and authority of every single Bible on the market for the sake of winning a debate against Christians who read a traditional Bible. This is a good reminder that we should be careful not to attack the authority of the Scriptures in our attempts to defend the current Bible we think is best. With that in mind, there are three reasons I believe this argument should be abandoned. 

The Fatal Flaw Argument Against the Traditional Text Rejects God’s Providence 

The first reason this argument should be abandoned is that it rejects God’s providence in the transmission, preservation, and inspiration of the Holy Scriptures. The assumption on all sides of this discussion is that when somebody reads a Bible in their native tongue, they are reading God’s inspired Word. This is true for Christians who read the ESV as well as the KJV. If a Christian does not believe that their Bible is inspired, I’m not sure why they are even reading it, as it is simply like any other document produced by humans in history. It may be a valuable book of moral tales, but if the Bible is not inspired, it is no more special than the Iliad or Cicero. 

That being said, this argument assumes that what God has done in time does not matter as it pertains to the transmission of the text and the reception of the Bible by the people of God. The only effort that matters is the one currently ongoing. In any view of inspiration, whether it be Warfield’s or Westminster’s, God’s providence is recognized as the instrument working in the production of Bibles. Warfield believed that the efforts of textual scholars in his day were an act of God’s special providence in giving the Bible back to the people of God. The Westminster Divines affirmed overwhelmingly that by God’s special care and providence, the Scriptures had been kept pure in all ages. 

That means that the Bibles that have been produced matter, because the printed texts are the texts that Christians use for reading, preaching, and evangelism. Even if one believes that a particular Bible is of lesser quality, Christians should find unity in the fact that God uses translations to speak insofar as they represent the original texts. If printed editions and translations do not matter, then all Christians need to quickly learn Hebrew and Aramaic and Greek, as well as gain access to the compendium of extant manuscripts, so they can read a Bible. That means that regardless of the Bible one reads, all Christians believe together that God Himself has delivered it. The Textual Discussion comes down to determining which text God preserved. In proposing this hypothetical, one is simply saying, “It doesn’t matter what God did in time, the only thing that matters is what is going on now.” I don’t know many Christians, let alone any Calvinists, who would ever say that what God did providentially in time does not matter. 

The Fatal Flaw Argument Against the Traditional Text Assumes That All Current Bibles Are Not God’s Word

The fundamental problem with this argument, and the second reason it should be abandoned, is that it takes away every single Bible from every single believer. If a consistent methodology must be employed to create a single text from the manuscripts, then it seems that nobody has a Bible, or ever will have a Bible. The fact is that different methodologies have been employed since the first effort of creating printed texts in the 16th century. Erasmus employed different methods than Beza, Beza employed different methods than Hort, and Hort employed different methods than D.C. Parker and the editors of the ECM. Not only that, there is a wealth of different opinions among the textual scholars in between, such as Karl Lachmann, Maurice Robinson, H.C. Hoskier, and Edward F. Hills, and even among the editors of the ECM there are differences in opinion on the manuscript data. This argument assumes that all of the editors of Greek New Testaments today are unified in their opinions on the text. The reality is that they are not. 

Further, if a consistent methodology is required, which methodology should be considered the “most consistent”? Which methodology is going to be used in this reconstruction effort after this hypothetical extinction? The CBGM hasn’t been fully implemented and thus hasn’t been fully analyzed. The existence of the CBGM itself demonstrates that Hort and Metzger didn’t have it all right. That is not even taking into consideration the evolution of opinions on scribal habits, “Text Families”, and the weighing of manuscripts. Did scribes generally copy faithfully, or did they tend to smooth out readings and add orthodox doctrines into the text? If all the printed editions were wiped out, I imagine that would include the ECM. Since the ECM is already going to take ten more years to complete, that means that the people of God would simply be without a Bible for at least ten more years. The argument is so incredibly asinine it is hard to believe that people are using it at all. 

The fact is that all Christians have to look back at history to have confidence in the Bible they read. The current methodology, the CBGM, isn’t fully implemented yet, and won’t be for another ten years. That means that every single Christian is trusting, to some degree or another, that the text-critical work already done is the method God used in delivering His Word to His people. The difference is in how Christians believe that God accomplished this task. Some believe the Bible was preserved up to the Protestant Reformation, and thus look to the printed texts of that era which have that text form. Some believe that the Bible was preserved in caves, monasteries, and barrels until the 19th century, and look to the printed texts produced in that era. Some even believe differently than either of these two positions. No matter which view of the text one holds, every single Christian looks into history to see God’s providence in their view of the text. Either that, or they believe that all the Bibles up to this point aren’t complete or correct Bibles, and are patiently awaiting 2030 when the ECM is finished. In every case, the argument fundamentally assumes that the work done in history does not matter and should not be considered a valid “methodology”.  

The Fatal Flaw Argument Against the Traditional Text Misleads the People of God 

The final flaw in the Fatal Flaw Argument against the Traditional Text, and the third reason it should be abandoned, is that it is horribly misleading. It makes Christians think that the canons of modern textual criticism are settled and unified. The fact is that scholars are still discussing the proper application of what the CBGM is creating, and how it should be understood. This argument leads people to believe that if all of the ESV Bibles and the printed texts it was translated from were suddenly raptured, the methods of textual criticism could give them the same exact Bible. Unless somebody has all of the underlying readings of the ESV memorized, this simply could not be done. Even if somebody were to have all the readings memorized, they wouldn’t be applying any methodology; they would be copying down what they memorized. The reality is that even without a hypothetical extinction of all printed texts, the methods being implemented are not producing the same text time and time again. With each new iteration of the modern methods, new Bibles are being produced. In some cases, these new Bibles have significant changes. That is not my opinion; that is simply what is happening. There is a reason that Crossway removed the title “Permanent Edition” from the prefatory material of the 2016 ESV. 

That is why, in my blog, I focus so heavily on the doctrine of Scripture. The current efforts of textual criticism are not capable of producing a stable text. In fact, a stable or final text is not even the goal. The goal of modern textual criticism as it exists in the effort of the ECM is to construct the history of the surviving texts of the New Testament, not a final authorial text for all time. The only way the modern critical methods could produce a stable text would be to strip out all of the verses that are contested by variation. Even then, new manuscript finds and reevaluation of the data could just as easily cause that text to change. The fact is that every single Christian looks back to history when determining which Bible is best. The one method that every Christian uses to decide which Bible they read is the one method that modern critical methods do not use – the reception of readings by the people of God. Christians will never be able to escape their history, as hard as they may try. In their effort to defend the ongoing work of modern textual criticism of the New Testament, many Christians have blatantly undermined the authority of the Scriptures as a whole. If the goal is to give Christians a defense for their Bible, this argument is absolutely not it. In fact, this so-called Fatal Flaw Argument hands the Bible directly to the critics of the faith.  

Conclusion

At the end of the day, the goal of this conversation is to give Christians confidence that when they read their Bible, they are reading the Word of God. This kind of argument undermines everybody reading a Bible, no matter which version they read. In fact, it is almost identical to the argument that Bart Ehrman makes against Christians who adhere to the modern critical text. When we begin taking our cues from Bart Ehrman, perhaps it’s time to take a step back and reevaluate. In any case, there is a consistent methodology that Christians can employ to receive the Bible they read, and it does not involve trusting the ongoing reconstruction effort of the history of the New Testament text. 

The fact is that God has spoken (Deus dixit). God speaking is the means that God has always used to condescend to man, from the time of Adam in the garden. His speaking is the covenant means of communication to His covenant people. God will not fail in His covenant purpose, which means that God will not fail to communicate to His people (Mat. 5:18). Since God has ordained the Scriptures as the means of covenant communication in these last days (Heb. 1:1), the preservation of His Word is intimately tied to His covenant purpose. Since God has not failed, and cannot fail, He has not failed in speaking, or in preserving the Word He spoke. In every generation, from the time of Adam, God has spoken to His people clearly and without error. The introduction of textual variants in manuscripts did not thwart this effort. In every generation, in faithful copies of manuscripts, God preserved His Word. This preservation did not somehow stop in the fourth century, or even in the 16th century. That means that if the Bible is indeed preserved, it was still preserved at the time of the Protestant Reformation. If this is the case, then the manuscripts which were used during the time of the Protestant Reformation were indeed preserved, which means the text-critical work done during that time was done using preserved copies of the New Testament. The manuscripts did not suddenly become preserved during the 16th century; they were the ones handed down in faithful churches from the time of the Apostles. The alternative seems to be that God stored His Word away in barrels, caves, and monasteries lined with skulls.

This Fatal Flaw Argument, fundamentally, is simply saying, “We don’t have a Bible, so you can’t either”. This is not the way you defend the text of the New Testament; it is how you destroy the validity of the text of the New Testament. It does not matter which Bible you read; attacking the validity of all Bibles in order to win an argument is neither appropriate nor necessary. At the end of the Textual Discussion, Christians still need to have a Bible they feel they can read and use. All Christians employ the same methodology when selecting a Bible at the end of the day. They look back in time, and receive a text based on their understanding of inspiration and preservation. Some receive a text they believe was preserved until the fourth century which has been reconstructed to some degree or another, and others receive a text they believe was preserved up to the Reformation and beyond. Others do not receive any one text, but all of the differing texts. The vast majority of Christians are not textual scholars, do not know the original languages, and thus are at the mercy of various scholarly opinions. The average Christian wants to know, “Can I trust my Bible?” If our efforts are not concentrated in that direction, we have already failed.  

Memoirs of an ESV-Onlyist: Reflecting on the Text and Canon Conference

Introduction

On Reformation weekend, a small conference called The Text and Canon Conference was held in Atlanta, Georgia, which focused on offering a clear definition of what it means when people advocate for the Masoretic Hebrew and Received Greek text. For those that are not up to date with all of the jargon, the Masoretic Hebrew text is the only full Hebrew Old Testament text available, and the Greek Received Text is the Greek New Testament which was used during the Protestant Reformation and Post-Reformation period. At the time of the Reformation, translators used the Masoretic Text and the Received Text for all translational efforts. Bibles produced in the modern era use the Masoretic Text as a foundation for the Old Testament, but frequently prefer Greek, Latin, and other translations of the Hebrew over the Masoretic Text. Modern Bibles also utilize a different Greek text for the New Testament, which is commonly called the Modern Critical Text. As a result of these differences, the Bibles produced from the text of the Reformation are different in many ways from the Bibles produced in recent years.  

One of the major focuses of the conference was to demonstrate that it is still a good idea, and even necessary, to use a Reformation era Bible, or Bibles that utilize the same Hebrew and Greek texts as the Reformation era Bibles. The key speakers, Dr. Jeff Riddle and Pastor Robert Truelove, delivered a series of lectures which laid out the historical perspective on the transmission history of the Old and New Testaments and presented a wealth of reasons why the Reformation era Hebrew and Greek texts are still reliable, even today. I will be writing a series of articles which cover some of the key highlights of the conference. In this article, I want to explain why I think this conference was necessary, and also to detail the series of events which led me to attend this conference. 

Why Was the Text and Canon Conference Necessary?  

There are two major reasons that I believe the Text and Canon Conference was necessary. The first is that many Christians do not believe that there is any justifiable reason to retain the historical text of the Protestant church. The second is that many Christians are not fully informed on the state of current text-critical efforts. Due to this reality, the lectures delivered at the Text and Canon Conference provided theological and historical reasons which supported the continued use of the Reformation era Hebrew and Greek texts, as well as offered information on the current effort of textual scholarship. An important reality in the textual discussion is that the majority of Christians do not have the time, and in many cases the ability, to keep up to date with all of the textual variants and text-critical methodologies that go into making modern Bibles. There is a great need in the church today for clear articulations of the history of the Bible, as well as accessible presentations on how modern Bibles are produced. The Text and Canon Conference, in part, met this need, as well as offered many opportunities for fellowship and like-minded conversation. Prior to launching into a series of commentary on the conference, I thought it would be helpful to share my journey from being a modern critical text advocate to a Traditional Text advocate. 

From the 2016 ESV to the Text and Canon Conference

Prior to switching to a Reformation era Bible, I began to discover certain realities about the modern efforts of textual criticism which caused me to have serious doubts as to whether or not the Bible was preserved. I had a hard time reconciling my doctrine of inspiration and preservation with the fact that there is an ongoing effort to reconstruct the Bible that has been in progress for over 200 years. These doubts increased when I discovered that not only had the methods of text-criticism changed since I was converted to Christianity over ten years ago, but that the modern critical text would be changing even more in the next ten years. I began to read anything I could get my hands on to find more information about the methods that were responsible for creating the Bible I was reading at the time. When I began this process of investigation, I had just finished my cover-to-cover reading plan of the new 2016 ESV. At first, I was simply attempting to understand the methodology of the modern critical text, with the assumption that a better understanding of it would help me defend the Scriptures against the opponents of the faith. The process quickly became a search for another position on the text of Scripture. This was due to some of the more alarming things I learned in my investigation of modern critical methods. There are six significant discoveries I made when investigating the current effort of textual criticism that I would like to share here. These six discoveries led me from being a committed ESV reader to a committed KJV reader.  

The first discovery that sent me down a different path than the modern critical text came when I investigated the manuscript data supporting the removal of Mark 16:9-20 in my 2016 ESV. The other pastor of Agros Church, Dane Johannsson, had called me to tell me about some information he learned about the Longer Ending of Mark after listening to an episode of Word Magazine, produced by Dr. Jeff Riddle. Up to this point, I had heard many pastors that I trusted say that the manuscript data was heavily in favor of this passage not being original. My Bible even said that “Some of the earliest manuscripts do not include this passage”. I was seriously confused when I found out that only three of the thousands of manuscripts exclude the passage, and only two of them are dated before the fifth century. This made me wonder: if all it took was two early manuscripts to discredit the validity of a passage in Scripture, what would happen if more manuscripts were found that did not have other passages that I had prayed over, studied, and heard preached? If a passage that had thousands of manuscripts supporting it could be relegated to brackets or footnotes, or removed, based on the testimony of two manuscripts, I realized that this same logic could easily be applied to quite literally any place in my Bible. All it would take for other passages to be removed would be another manuscript discovery, or even a reevaluation of the evidence already in hand.  

The second discovery was the one that fully convinced me to put away my 2016 ESV and, initially, pick up an NKJV. At the time of this exploration process I was utilizing my Nestle-Aland 28th edition and the United Bible Societies 5th edition in my Greek studies. I was still learning to use my apparatus when I learned what the diamond meant. In the prefatory material of the NA28, it states that the diamond indicates a place where the editors of the Editio Critica Maior (ECM) were split in determining which textual variant was earliest. That meant that it was up to me, or possibly somebody else, to determine which reading belonged in the main text. This is a reality that I would have never known by simply reading my ESV. I discovered that there were places where the ESV translators had actually gone with a different decision than the ECM editors, like 2 Peter 3:10, where the critical text reads the exact opposite of the ESV. This of course was concerning, but I wasn’t exactly sure why at the time. I figured there had to be a good reason for this; there were thousands of manuscripts, after all. I began investigating the methodology that was used to produce these diamond readings, and learned that it was called the Coherence Based Genealogical Method (CBGM). I quickly found out that there was not a whole lot of literature on the topic. The two books that I initially found were priced at $34 and $127, which was a bit staggering for me at the time. It was important for me to understand these methods, so I ended up purchasing the $34 book first. It was what I discovered in this book that heavily concerned me. Due to the literature on the CBGM being relatively new, and possibly too expensive for the average person to purchase, I had a hard time finding anybody to discuss the book with me. It was actually the literature on the CBGM that motivated me to start podcasting and writing on the issue. If I couldn’t find anybody to discuss this with, it meant that nobody really knew about it.   

The third discovery was the one that convinced me that I should start writing more about, and even advocating against, this new methodology. This was the methodology that was being employed in creating the Bible translations that all of my friends were reading, and that I was reading up until switching to the NKJV. It’s not that I “had it out” for modern Bibles; I figured that if these discoveries had caused so much turmoil in my faith, they would cause others to have similar struggles. Most of my friends knew nothing about the CBGM, other than having heard that it was a computer program that was going to produce a very accurate, even original, Bible. After reading the introductory work on the method, I knew that what I had heard about the CBGM was perhaps premature. Based on my conversations with my friends on textual criticism, I knew that my friends were just as uninformed as I was on the current effort of textual scholarship. It wasn’t the thought that I was the first person to discover these things that motivated me to start writing, but the fact that neither I nor any of my friends were aware of any of the information I was reading. Up to that point in my research, I was under the assumption that the goal of textual criticism was to reconstruct the original text that the prophets and apostles had penned. I even thought that scholars believed they had produced that original text, which I was reading in English in my ESV. I found out that this was not the case for the current effort of textual scholarship. I learned that the goal of textual criticism had, at some point in the last ten years, shifted from the pursuit of the original to what is called the Initial Text. In my studies, I realized that there were differing opinions on how the Initial Text should be defined, and even on whether there was one Initial Text. In all cases, however, the goal was different than what I thought. It did not take me long to realize the theological implications of this shift in effort. At the time, I fully adhered to both the London Baptist Confession of Faith 1.8 and the Chicago Statement on Biblical Inerrancy. It was in examining the Chicago Statement on Biblical Inerrancy against the stated goals of the newest effort of textual criticism that I realized there were severe theological implications to what I was reading and studying.  

The fourth discovery was the one that made me realize that the conversation about textual criticism was not only about Greek texts and translations; it was about the doctrine of Scripture itself. At the time I believed that the Bible was inspired insofar as it represented the original, and the original, as I found out, was no longer being pursued. The original was no longer being pursued, I learned, because the majority, if not all, of the scholars believed it could not be found, and that it was lost as soon as the first copy of the New Testament had been made. There are various ways of articulating this reality, but I could not find a single New Testament scholar actually doing work in the field of textual scholarship who still held onto the idea that the original, in the sense that I was defining it, could be attained. Even Holger Strutwolf, a conservative editor of the Modern Critical Text, seems to define the original as being as “far back to the roots as possible” (Original Text and Textual History, 41). This being the case, if the current effort of textual criticism was not claiming to have determined the original readings of the Bible, then my doctrine of Scripture was seemingly vacuous. If the Bible was inspired insofar as it represented the original, and nobody was able to determine which texts were original, then my view of the Bible entailed that it wasn’t inspired at all. At the bare minimum, it was only inspired where there weren’t serious variants. In either case, this reality was impossible for me to reconcile. I then set out to discover how the Christians who were informed on all the happenings of textual criticism explained the doctrine of Scripture in light of this reality. I figured I wasn’t the first person to discover this about the modern text-critical effort, so somebody had to have a good doctrinal explanation. 

The fifth discovery was the one that made me realize that I did not have a claim to an inspired text if I trusted in the efforts of modern textual criticism. In my search for faithful explanations of inspiration in light of the current effort of textual criticism, I did not find anything meaningful. In nearly every case, the answer was simply one of Kantian faith. Despite the split readings in the ECM and the abandoned pursuit of the original, I was told I had to believe the text was preserved. Even if nearly every textual scholar was saying that the idea of the “original” was a novel idea from the past, or simply the earliest surviving text, I had to reconcile that reality with my theology. One of the answers I received was that the original text was preserved somewhere in all of the surviving manuscripts, and that there really was not any doctrine lost, no matter which textual variants were translated. This is based, in part, on an outdated theory which says that variants are “tenacious” – that once a variant enters the manuscript tradition it doesn’t fall out. This of course cannot be proven, and can even be shown to be false. Another answer I found was that all of the surviving manuscripts essentially taught the same exact thing. This would have been comforting, had I not spent time using my NA28 apparatus and reading different translations. I knew for a fact that there were many places where variants changed doctrine, sometimes in significant ways. Would the earth be burnt up on the last day, or would it not be burnt up? Was Jesus the unique god, or the only begotten Son? The answers I received simply did not line up with reality. I had no way of proving which of the countless variants were original. When I discovered this, I finally understood the position of Bart Ehrman. He, like me, had come to the conclusion that the theories, methods, and conclusions which went into the construction of the modern critical text told a story of a Bible that really wasn’t all that preserved. 

The sixth and final discovery I made, which did not necessarily happen in chronological order with the rest of my discoveries, was that there were several other views of textual criticism within the Reformed and larger Evangelical tradition. Prior to beginning my research project, I had read The King James Only Controversy, which led me to believe that there were really only two views on the text – KJV Onlyists and everybody else. I discovered that this was the farthest thing from reality and a terrible misrepresentation of the people of God who held to these other positions. The modern critical text was not a monolith, and I did not need to adopt it to defend my faith, or to have a Bible. In fact, I knew that there was no way I could defend my faith with the modern critical text. In my research, I even discovered countless enemies of the faith who used the modern critical text as a way to disprove the preservation of Scripture. Various debates against Bart Ehrman that I watched demonstrated this fact clearly. I learned that even within the camp of modern textual criticism, there were people who did not read Bibles translated from the modern critical text. There were even people who disagreed on which readings were earliest within the modern critical text. There were people who adopted the longer ending of Mark and the woman caught in adultery who also did not read the KJV. There were also people who believed that the Bible was preserved in the majority of manuscripts, in opposition to other positions which say that original readings can be preserved in just one or two manuscripts. I also discovered the position I hold to now, which says that the original text of the Bible was preserved up to the Reformation, and thus the translations made during that time represent that transmitted original. This ultimately was the position that made the most sense to me theologically, as well as historically. I realized that the attacks on the TR, which often said that it was only created from “half a dozen” manuscripts, were not exactly meaningful, as the modern critical text often makes textual decisions based on just two manuscripts. In any case, the conversation of textual criticism was much more nuanced and complex than I had believed it to be. 

Conclusion

I can only speak for myself as to how my discoveries affected my faith. It is clear that many Christians do not have a problem with a Greek text that is changing, and in many places, undecided. In my case, I was told to take a Kantian leap of faith to trust in this text. In my experience, most of the time people simply are unaware of the happenings of modern textual scholarship. It is not that I have any special knowledge or secret wisdom; I simply had the time, energy, and opportunity to read a lot of the current literature on the latest methods being employed in creating Bibles. One thing that has motivated me to be so vocal about this issue is the reality that most people simply are uninformed on it, as I was at the time I started my research project. For one reason or another, the information on the current methods is difficult for many to access, and even more simply do not know that anything has changed in the last 20 years. My gut tells me that if people were better informed on the issue, they might at least consider embarking on a research project like I did. The fact is that many scholars and apologists for the critical text are insistent on framing this discussion as “KJV Onlyism against the world”, and it is apparent that this framing has been effective. Despite this, it was not my love for tradition or an affinity for the KJV that led me to reading it. In fact, I was hesitant to read it as a result of all the negative things I had heard about it. Primarily, it was my discoveries regarding the state of modern textual criticism that led me to put down my ESV and pick up an NKJV, and then finally a KJV. 

I thought it would be helpful to detail the discoveries which led me to the position I hold now on the text of Scripture. I will be writing more articles commenting on what I consider to be the more important points of the conference. Hopefully my commentary can serve to give you, the reader, more confidence in the Scriptures, and to pass along some of the important information presented at the Text and Canon Conference.  

Inspiration: Now and Then

Introduction

Today’s church has been flooded with new ideas that depart from the old paths of the Protestant Reformation. This is especially true when it comes to the doctrine of Scripture. It is commonplace to adhere to the doctrine of inerrancy in today’s conservative circles and beyond. While it is good that many Christians take some sort of stand on Scripture, it is important to investigate whether or not the doctrine of inerrancy is a Protestant doctrine. The Reformers were adamant, when talking about the inspiration, authority, and preservation of Scripture, that every last word had been kept pure and should be used for doctrine, preaching, and practice. James Ussher clearly states the common sentiment of the Reformed.

“The marvelous preservation of the Scriptures; though none in time be so ancient, nor none so much oppugned, yet God hath still by his providence preserved them, and every part of them.”

(James Ussher, A Body of Divinity)

Most Christians would happily affirm this doctrinal statement. Those that are more familiar with the discussion of textual criticism may not, however. It is common to dismiss men like James Ussher, along with other Westminster Divines, on the grounds that they were not aware of all of the textual data and therefore were speaking from ignorance. Much to the discomfort of these Christians, textual variants did exist during this time, many of which were the same ones we battle over today. The conclusion that should be drawn from this reality is not that the Reformed in the 16th and 17th centuries would have agreed with modern expressions of inspiration and preservation simply because we have “more data”. There is a careful nuance to be observed, and that nuance is in their actual doctrinal articulations of Scripture. This is necessarily the case, considering they were far more aware of textual variants than many would like to admit. Rather than attempting to understand the tension between the Reformed doctrine of Scripture and the existence of textual variants, it is commonplace to reinterpret the past through the lens of A.A. Hodge and B.B. Warfield, who reinterpreted the Westminster Confession of Faith 1.6 to make room for new trends in textual scholarship. William G. T. Shedd, a professor at Union Theological Seminary in the 19th century and a premier systematic theologian, articulated the view of Hodge and Warfield well regarding the confessional statement, “Kept pure in all ages”. He writes,

“This latter process is not supernatural and preclusive of all error, but providential and natural and allowing of some error. But this substantial reproduction, this relative ‘purity’ of the original text as copied, is sufficient for the Divine purposes in carrying forward the work of redemption in the world”. 

William G. T. Shedd, Calvinism: Pure and Mixed. A Defense of the Westminster Standards, 142.

While this is close at face value to the Reformed of the 16th and 17th centuries, it is still a departure that ends up being quite significant, especially in light of the direction modern textual criticism has taken in the last ten years. For comparison, Francis Turretin articulates a similar thought in a different way.

“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”. 

Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106.

It is plainly evident that the two articulations of the same concept are not exactly the same. That is to say, Turretin’s expression of the doctrine was slightly more conservative than Shedd’s. The difference is that the apographs, as Turretin understood them, were materially as perfect as the Divine Original. Turretin dealt at length with textual corruptions, as did his peers and those that followed after him, such as the Puritan divine John Owen, and still affirmed that the “very words” were available to the church. In order to fit a modern view into the Reformation and Post-Reformation theologians, one must anachronistically impose a Warfieldian interpretation of the Westminster Confession onto those that framed it. There is no doubt that the Westminster Divines lived in the same reality of textual variants as Warfield and Hodge, and that they still affirmed a doctrine which said every jot and tittle had been preserved. Turretin and Warfield faced the same dilemma, yet Warfield confined inspiration to the autographs alone, whereas the Reformed included the apographs as well. Rather than attempting to reinterpret the theologians of the past, the goal should be to understand their doctrine as it existed during the 16th and 17th centuries, when the conversation about textual variants was just as alive as it is today.

A Careful Nuance

In order to examine how the doctrine of Scripture has changed from the Reformation to today, it’s important to zoom out and see how Warfield’s doctrine developed into the 21st century. The doctrine of inspiration, as it is articulated today, extends only to the autographic writings of the New Testament. I will appeal to David Naselli’s explanation from his textbook, How to Understand and Apply the New Testament, which has received high praise from nearly every major seminary. 

“The Bible’s inerrancy does not mean that copies of the original writings or translations of those copies are inerrant. Copies and translations are inerrant only to the extent that they accurately represent the original writings.” 

David Naselli, How to Understand and Apply the New Testament, 43.

This statement is generally agreeable, if we assume that there is a stable Bible in hand, and a stable set of manuscripts or a printed edition which is viewed as “original.” Unfortunately, neither of these exists in the world of the modern critical text. Not only do we not have the original manuscripts, there is no finished product that could be compared to the original. Since the effort of reconstructing the Initial Text is still ongoing, and since we do not have the original manuscripts, this doctrinal statement made by Naselli does not articulate a meaningful doctrine of inspiration or preservation. In stating what appears to be a solid doctrinal statement, he has said nothing at all. In order for this doctrine to have significant meaning, a text that “represents the original writings” would need to be produced. That is why the Reformed in the 16th and 17th centuries were so adamant about their confidence in having the original in hand. In order for any doctrine of Scripture to make sense, the Scriptures could not have fallen away after the originals were destroyed or lost. Doctrinally speaking, the articulation of the doctrine of Scripture demonstrated by Turretin and his contemporaries is necessary because it affirms that God providentially preserved the Scriptures in time and that they had access to those very Scriptures. If the modern critical text claimed to be a definitive text, like the one the Reformed claimed to have, the modern articulation of the doctrine of Scripture might be sound, but there is no modern critical text that exists as a solid and stable object. It is clear that the doctrine of Scripture and the form of the Scriptures cannot be separated, or the meaning of that doctrine is lost. In order for doctrine to be built on a text, the text must be static. If we are to say that the Bible is inerrant insofar as it represents the original, there must be 1) a stable text and 2) an original to compare that text against. Since neither 1 nor 2 is true, Naselli, along with everybody who agrees with him, has effectively set forth a meaningless doctrinal standard as it pertains to Scripture.  

This means that the Reformed doctrine of Scripture is intimately tied to the text they considered to be authentic, inspired, and representative of the Divine Original. The text they had in hand was what is now called the Received Text. Whether it was simply a “default” text does not change the reality that it was the text these men of God had in their hands. It is abundantly clear that the doctrine of Scripture during the time of the Reformation and Post-Reformation was built on the TR, just as the modern doctrine of Scripture is built on the modern critical text and the assumptions used to create it. Further problems arise for the modern doctrine of Scripture now that the effort of textual scholarship has shifted from trying to find the original text to reconstructing the Initial Text. Due to this shift, any articulation of Scripture which looks to the modern critical text is based on a concept that does not necessarily exist in modern textual scholarship. The concept of the “original” has moved out of the sight of the editorial teams of Greek New Testaments; therefore it is necessary to conclude that doctrinal statements which rely on the outdated goal of finding the “original” must also be redefined. What this means practically is that there are no doctrinal statements in the modern church which align with the doctrines used to produce modern Bibles.

Due to the doctrine of Scripture being intimately tied to the nature of the text it describes, the various passages of the New Testament which have been considered inspired have changed throughout time, and they are going to continue changing as the conclusions of scholars vary from year to year. If we take Naselli’s articulation of the doctrine of Scripture as true, this means that there is not one inerrant text of Holy Scripture; there are as many as there are Christians who read their Bible. So in a very real sense, according to the modern articulation of inspiration, the inspired text of the New Testament is not a stable rule of faith. It is only stable relative to crowd consensus, or perhaps at the individual level. A countless multitude of people who adhere to this doctrine of inspiration make individual rulings on Scripture, which effectively means that the Bible is given its authority by virtue of the person making those decisions. Thus, the number of Bibles which may be considered “original” is as large as the number of people reading Bibles. It is due to this reality that the modern doctrine of Scripture has departed from the Reformation era doctrine in at least two ways. The first is that by “original”, the post-Warfield doctrine means the autographs, which no longer exist, and excludes the apographs. The second is that the Bible is only authoritative insofar as it has been judged authoritative by some standard or another. This combination contradicts any doctrine that would have the Scriptures be a stable rule for faith and practice. It is because of these differences that it can safely be said that while the doctrinal articulations may sound similar, they are not remotely the same.  

The Reformed doctrine of Scripture in the 16th and 17th centuries is founded upon two principles that differ from those of the post-Warfield era. The first principle of the Reformed is that the Scriptures are self-authenticating, and the second is that they considered the original to be represented and preserved in the text they had in hand. Therefore it seems necessary to understand the Reformation and Post-Reformation divines through a different lens than the modern perspective, because the two camps are saying entirely different things. A greater effort should be made to understand what exactly the Reformed meant by “every word and letter” in relationship to the text they had in hand, rather than imposing the modern doctrine upon the Reformation and Post-Reformation divines.   

Conclusion

The goal of this conversation should be to instill confidence in people that the Bible they are reading is indeed God’s inspired Word. Oftentimes it is more about winning debates and being right than actually giving confidence to Christians that what they have in their hands can be trusted. It is counterproductive for Christians to continue to fight over textual variants in the way that they do, especially considering the paper-thin modern articulations of the doctrine of Scripture. It is stated by some that receiving the Reformation era Bible is “dangerous”, yet I think what is more dangerous is to convince somebody that they should not trust this Bible, which is exactly what happens when somebody takes the time to actually explain the nuances of modern textual criticism. These attacks are especially harmful when the Bible that is attacked is the one that the Protestant religion was founded upon, and the only text that carries with it a meaningful doctrine of Scripture. Christians need to consider very carefully the claims that are made about the Reformation era text which say it is not God’s Word, or that it is even dangerous to use. I cannot emphasize enough the harm this argument has done to the Christian religion as a whole. The constant effort to “disprove” the Reformation era text is a strange effort indeed, especially if “no doctrines are affected”. The alternative, which has been a work in progress since before 1881, and is still a work in progress today, offers no assurance that Christians are actually reading the Bible. In making the case that the Received Text and translations made from it should not be used, critics have taken one Bible away and replaced it with nothing but uncertainty.  

The claim made by advocates of the Received Text is simple, and certainly not dangerous. The manuscripts that the Reformed had in the 16th century were as they claimed – of great antiquity and highest quality. The work done in that time resulted in a finished product, which continued to be used for hundreds of years after. That Bible in its various translations quite literally changed the world. If the Bible of the 16th-18th centuries is so bad, I cannot understand why people who believe it to be a gross corruption of God’s Word still continue to read the theological works of those who used it. Further, it is difficult to comprehend how a Bible that is said to accomplish the same purpose as modern bibles could be so viciously attacked by those who oppose it. If all solid translations accomplish the same redemptive purpose, according to the modern critical doctrine, why would it make any sense to attack it? After spending ten years reading modern Bibles, I simply do not see the validity of the claim that the Reformation era text is “dangerous” in any way. Christians do not need to “beware” of the text used by the beloved theologians of the past. At the end of the day, I think it is profitable for Christians to know that traditional Bibles are not scary, and have been used for centuries to produce the fullest expression of Christian doctrine in the history of the world. When the two doctrinal positions are compared, the axioms of Westcott and Hort, or Metzger, or even the CBGM, hold little appeal. They are all founded on the vacuous doctrine of Scripture which requires that the current text be validated against the original, which cannot be done. There is no theological or practical value in constantly changing the words printed in our Bibles, and this practice is in fact detrimental to any meaningful articulation of what Scripture is. I have not once talked to anybody who has been given more confidence in the Word of God by this practice. In fact, the opposite is true in every real-life encounter I have had.

It is said that the Received Text position is “pious” and “sanctimonious”, but I just don’t see how a changing Bible, with changing doctrines, is even something that a conservative Christian would seriously consider. If Christians desire a meaningful doctrine of Scripture, the modern critical text and its axioms are incapable of producing it.

Is the CBGM God’s Gift to the Church?

Introduction

It is stated by some that the Coherence Based Genealogical Method is a blessing to the church, even gifted to the church by God by way of His providence. I thought it would be helpful to examine this claim. Unfortunately, those who have made such statements regarding the Editio Critica Maior (ECM) and the CBGM have not provided an answer as to why this is the case. This is often a challenge in the textual discussion. Assertions and claims can be helpful for understanding what somebody believes, but they often fall short of explaining why they believe it to be true. The closest explanation I have heard as to why the CBGM is a blessing to the church is that it can supposedly detail the exact form of the Bible as it existed around AD 125. Again, this is simply an assertion, and it needs to be demonstrated. I have detailed in this article why I believe that claim is not true. 

In this article, I thought it would be helpful to provide a simple explanation of what the CBGM is, how it is being used, and the impact that the CBGM will have on Bibles going forward. The discerning reader can then decide for themselves if it is a blessing to the church. If there is enough interest in this article, perhaps I can write more at length later. I will be using Tommy Wasserman and Peter Gurry’s book, A New Approach to Textual Criticism: An Introduction to the Coherence Based Genealogical Method as a guide for this article. 

Some Insights Into the CBGM from the Source Material 

New Testament textual criticism has a direct impact on preaching, theology, commentaries, and how people read their Bible. The stated goal of the CBGM is to help pastors, scholars, and laypeople alike determine, “Which text should be read? Which should be applied?…For the New Testament, this means trying to determine, at each place where our copies disagree, what the author most likely wrote, or failing this, at least what the earliest text might have been” (1, emphasis mine). Note that one of the stated objectives of the CBGM is to find what the author most likely wrote, and when that cannot be determined, what the earliest text might have been. 

Here is a brief definition of the CBGM as provided by Dr. Gurry and Dr. Wasserman:

“The CBGM is a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament text” (3). 

The way this method relates manuscript texts is an adaptation of Karl Lachmann’s common error method, as opposed to manuscript families and text types. This is in part due to the fact that “A text of a manuscript may, of course, be much older than the parchment and ink that preserve it” (3). The CBGM is primarily concerned with developing genealogies of readings and how variants relate to each other, rather than with manuscripts as a whole. This is done by using pregenealogical coherence (algorithmic analysis) and genealogical coherence (editorial analysis). The method examines the places where manuscripts agree and disagree to gain insight into which readings are earliest. Where two manuscripts disagree at the same place, the new method can help determine one of two things (a simple sketch of the algorithmic step follows the list below):

  1. One variant gave birth to another, therefore one is earlier
  2. The relationship between two variants is uncertain
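
To make the algorithmic, “pregenealogical” step a bit more concrete, here is a minimal sketch of the core calculation as Wasserman and Gurry describe it: pregenealogical coherence is simply the percentage of agreement between two witnesses at the places where both preserve a reading, with no judgment yet about which reading came first. The witness names, variant units, and readings below are invented for illustration only; this is not the ECM’s actual data or tooling.

# Minimal sketch of pregenealogical coherence: the percentage of agreement
# between two witnesses at the variant units where both preserve a reading.
# All witnesses, units, and readings here are hypothetical.

from itertools import combinations

# Reading of each witness at each variant unit (None = witness is lacunose there).
witnesses = {
    "W1": {"unit1": "a", "unit2": "b", "unit3": "a", "unit4": None},
    "W2": {"unit1": "a", "unit2": "a", "unit3": "a", "unit4": "b"},
    "W3": {"unit1": "b", "unit2": "a", "unit3": None, "unit4": "b"},
}

def pregenealogical_coherence(w1: str, w2: str) -> float:
    """Percent agreement between two witnesses where both are extant."""
    shared = [
        unit for unit in witnesses[w1]
        if witnesses[w1][unit] is not None and witnesses[w2].get(unit) is not None
    ]
    if not shared:
        return 0.0
    agreements = sum(1 for unit in shared if witnesses[w1][unit] == witnesses[w2][unit])
    return 100.0 * agreements / len(shared)

# Rank every pair of witnesses by agreement, as the CBGM's "potential ancestor"
# lists do; deciding the direction of descent is left to the editors.
for a, b in combinations(witnesses, 2):
    print(f"{a} ~ {b}: {pregenealogical_coherence(a, b):.1f}% agreement")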

It is important to keep in mind that the CBGM is not simply a pure computer system. It requires user input and editorial judgement. “This means that the CBGM uses a unique combination of both objective and subjective data to relate texts to each other…the CBGM requires the user to make his or her own decisions about how variant readings relate to each other” (4, 5). That means that determining which variant came first “is determined by the user of the method, not by the computer” (5). The CBGM is not a purely objective method. People still determine which data to examine using the computer tools, and ultimately what ends up in the printed text will be the decisions of the editorial team. 

The average Bible reader should know that the CBGM “has ushered in a number of changes to the most popular editions of the Greek New Testament and to the practice of New Testament textual criticism itself…Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (5). Currently, the CBGM has been partially applied to the data in the Catholic Epistles and Acts, and DC Parker and his team are working on the Gospel of John right now. The initial inquiry of this article was to examine the CBGM to determine whether it is indeed a “blessing to the church”. For that to be the case, one would expect the new method to introduce more certainty for Bible readers with regard to variants. Unfortunately, the opposite seems to be true. 

“Along with the changes to the text just mentioned, there has also been a slight increase in the ECM editors’ uncertainty about the text, an uncertainty that has been de facto adopted by the editors of the NA/UBS…their uncertainty is such that they refuse to offer any indication as to which reading they prefer” (6,7). 

“In all, there were in the Catholic Letters thirty-two uses of brackets compared to forty-three uses of the diamond and in Acts seventy-eight cases of brackets compared to 155 diamonds. This means that there has been an increase in both the number of places marked as uncertain and an increase in the level of uncertainty being marked. Overall, then, this reflects a slightly greater uncertainty about the earliest text on the part of the editors” (7).   

This uncertainty has led “the editors to abandon the concept of text-types traditionally used to group and evaluate manuscripts” (7). What this practically means is that the Alexandrian texts, which were formerly called a text-type, are no longer considered as such. The editors of the ECM “still recognize the Byzantine text as a distinct text form in its own right. This is due to the remarkable agreement that one finds in our late Byzantine manuscripts. Their agreement is such that it is hard to deny that they should be grouped…when the CBGM was first used on the Catholic Letters, the editors found that a number of Byzantine witnesses were surprisingly similar to their own reconstructed text” (9, 10). 

Along with abandoning the notion that the Alexandrian manuscripts represent a text-type, another significant shift has occurred. Rather than pursuing what has historically been called the Divine Original or the Original Text, the editors of the ECM are now after what is called the Initial Text (Ausgangstext). There are various ways this term is defined, and opinions are split among the editors of the ECM. For example, DC Parker, who is leading the team that is applying the CBGM to the Gospel of John, has stated along with others that there is no good reason to believe that the Initial Text and the Original Text are the same. Others are more optimistic, but the 198 diamonds in Acts and the Catholic Letters (43 + 155 in the passages quoted above) may serve as an indication of whether this optimism is warranted by the data. The diamonds indicate the places where the reading is uncertain in the ECM. 

The computer based component of the CBGM is often sold as a conclusive means to determine the earliest, or even original reading. This is not true. “At best, pregenealogical coherence [computer] only tells us how likely it is that a variant had multiple sources of origin rather than just one…pregenealogical coherence is only one piece of the text-critical puzzle. The other pieces – knowledge of scribal tendencies, the date and quality of manuscripts, versions, and patristic citations, and the author’s theology and style are still required…As with so much textual criticism, there are no absolute rules here, and experience serves as the best guide” (56, 57. Emphasis added).

In the past it has been said that textual criticism was trying to build a 10,000-piece puzzle with 10,100 pieces. This perspective has changed greatly since the introduction of the CBGM: “we are trying to piece together a puzzle with only some of the pieces” (112). Not only does the CBGM not have all the data that has ever existed, it is only using “about one-third of our extant Greek manuscripts…The significance of this selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). Further, the CBGM is not omniscient. It will never know how many of the more complex corruptions entered the manuscripts, or the backgrounds and theology of the scribes, or even the purpose for which a manuscript was created. “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That means it is likely that readings produced by the CBGM which were not original or earliest will be mistakenly treated as such. “We do not want to give the impression that the CBGM has solved the problem of contamination once and for all. The CBGM still faces certain problematic scenarios, and the loss of witnesses plagues all methods at some point” (115). 

One of the impending realities the CBGM has created is that there may be a push for individual users, Bible readers, to learn how to use and implement the CBGM in their own daily devotions. “Providing a customizable option would mean creating a version that allows each user to have his or her own editable database” (119, 120). There will likely come a time in the near future when the average Bible-reading Christian, or at least pastors and seminarians, will be encouraged to understand and use this methodology. For those who do not have the time or ability to do this, that could be extremely burdensome. Further, the concept of a “build your own Bible” tool seems like a slippery slope, though it is a slope we are already sliding down for those who make their own judgements on texts in isolation from the general consent of the believing people of God. 

Conclusion

Since the CBGM has not been fully implemented, I suppose there is no way to say with absolute confidence whether or not it is a “blessing to the church”. I will say, however, that I believe the church should be the one to decide this matter, not scholars. It seems that the places where the CBGM has already been implemented have spoken rather loudly on the matter in at least 198 places. Hopefully this article has been insightful, and has perhaps shed light on the claims, repeated by many, that the CBGM is a “blessing to the church” or an “act of God’s providence”. If anything, the increasing amount of uncertainty that the CBGM has introduced into the previous efforts of modern textual criticism should give us pause, because the Bibles that most people use are based on the methodologies that modern scholarship has abandoned.

Helpful Terms

Coherence: The foundation for the CBGM, coherence is synonymous with agreement or similarity between texts. Within the CBGM the two most important types are pregenealogical coherence and genealogical coherence. The former is defined merely by agreements and disagreements; the latter also includes the editors’ textual decisions in the disagreements (133).   

ECM: The Editio Critica Maior, or Major Critical Edition, was conceived by Kurt Aland as a replacement for Constantin von Tischendorf’s well-known Editio octava critica maior. The aim of the ECM is to present extensive data from the first one thousand years of transmission, including Greek manuscripts, versions, and patristics. Currently, editions for Acts and the Catholic Letters have been published, with more volumes in various stages of completion (135).  

Stemma: A stemma is simply a set of relationships either of manuscripts, texts, or their variants. The CBGM operates with three types that show the relationship of readings (local stemmata), the relationship of a single witness to its stemmatic ancestors (substemma), and the relationships of all the witnesses to each other (global stemma) (138). 
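
To picture what these stemmata are, here is a minimal sketch of a local stemma represented as a directed graph: the nodes are the readings at a single variant unit, and each edge points from a reading to a reading judged to derive from it. The unit, readings, and helper function are hypothetical illustrations, not the editors’ actual tools.

# Minimal sketch of a local stemma: a directed graph whose nodes are readings
# at one variant unit and whose edges run from a source reading to the
# reading(s) judged to derive from it. The data here is invented for illustration.

from collections import defaultdict

# Edges: (prior_reading, derived_reading), as decided by an editor.
local_stemma_edges = [
    ("a", "b"),  # reading b judged to derive from reading a
    ("a", "c"),  # reading c also judged to derive from a
    ("c", "d"),  # reading d judged to derive from c
]

def descendants(reading: str, edges) -> set:
    """All readings that ultimately derive from the given reading."""
    children = defaultdict(list)
    for src, dst in edges:
        children[src].append(dst)
    found, stack = set(), [reading]
    while stack:
        for child in children[stack.pop()]:
            if child not in found:
                found.add(child)
                stack.append(child)
    return found

print(descendants("a", local_stemma_edges))  # {'b', 'c', 'd'} in some order

A substemma and the global stemma extend this same idea from readings to witnesses, first for a single witness and its potential ancestors, and then for all witnesses at once.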

Putting the Conversation in Perspective

Introduction

It may be difficult for many people to see the relevance of the textual discussion. This is often because it is rare that a positive case is made for the modern critical text. The majority of the exposure people get to this conversation from a modern critical text position is simply polemics and a healthy dose of pejoratives. The problem with this is that these methods fail to offer a reason to believe that the modern critical text is the best. Simply saying the TR is awful and should not be used actually introduces far more problems than it solves. From a practical standpoint, if the Masoretic Hebrew text and the Received Greek text are not viable for use in the church, then not only was the Protestant religion sparked and built on a bad Bible, but today’s church is left with an unfinished Bible. It is important to clarify that I am not saying that people who adopt the modern critical perspective cannot be saved or cannot benefit from modern translations. I myself read through the Bible for the first time using an NIV. What I am saying is that a “mere Christianity” approach should not be adopted for the Bible we use. As Christians, we should be concerned with every jot and tittle, not the bare minimum it takes for somebody to be saved. That being said, I want to explain why somebody who found great comfort in the NIV in the early years of his Christian walk now reads a traditional Bible. If the last book you read on text-criticism was The Text of the New Testament in seminary, things have changed…a lot. Let’s step into the mindset of a modern critical text advocate for a moment. The justification for adopting the modern critical text requires three main assumptions.

  1. The Received Greek Text does not represent the earliest manuscripts, and therefore represents a New Testament that was corrupted by well-meaning Christians over time
  2. The Masoretic Hebrew Text does not represent the original manuscripts as it has been corrupted by Jews seeking to diminish the deity of Christ
  3. The modern critical methods, and thus the modern critical text, are better than the previous text and should be used over and above the traditional text of the protestant church due to this orthodox and Jewish corruption of the Scriptures

An unfortunate side effect of advocating against the historical text of the Protestants is that the validity of the Bible as a whole is undermined. If the Masoretic Text has not been kept pure, which Hebrew text should be translated from? Typically the Septuagint is offered. There are two main problems with this: 1) there is not one “Septuagint”, and 2) the confessions affirm that translations are not to be used as the ultimate rule of faith. Further, if the Received Text is not the New Testament, then the people of God have been woefully deceived. There are two ways to look at this deception. In the first place, if the Received Text was a strange historical phenomenon in which the people of God chose manuscripts that nobody had ever used in history, then the church was deceived for hundreds of years. This is in essence what is being claimed when somebody says, “This reads in a fashion unknown to the Christian tradition for a full 1,500 years.” If, on the other hand, the manuscripts used in the Reformation era printed texts represented the “most ancient copies”, as the Reformers claimed, then the church has been deceived since the time of the early church. In advocating for the modern critical text, a significant theological problem is introduced that cannot be resolved without arguing for a total corruption of the text. 

More Questions Than Answers

If the theories of textual scholars are correct, the actual Bible is preserved partially in a small minority of manuscripts from the third and fourth centuries. The vast majority of manuscripts, according to modern scholarship, are the product of a well-meaning corruption by Christians to solidify doctrine, add beloved pericopes, and correct grammar mistakes. No matter how somebody spins it, God not only let His church and the Jews corrupt the Scriptures, but then allowed them to believe that those corruptions were inspired. In simple terms, there is no continuity in the preservation of God’s Word from a modern critical text perspective. The Bible was lost for a time, and now needs to be recovered. The text existed in the early church, became corrupted by the believing people of God and the Jews for a large chunk of church history, and resurfaced in the modern period for use by all in a small number of neglected manuscripts and some versions of the Septuagint, where doubt is cast on the Hebrew. 

The basic argument presented by the Confessional Text position is that the Bible was preserved going into the medieval and Reformation periods, and that the text-critical work done in that period used those preserved manuscripts. If the assumption is that God preserved His Word, it would make sense that the general form of the manuscripts used by the church would be the most abundant, as they were used the most. Manuscripts that were later found in libraries, caves, and barrels sat collecting dust for a reason. The text-critical effort of the Reformation period was therefore one of printing editions from the manuscripts which were considered best at that time. The problem that many have with this perspective is that the Reformation era text is often compared against the modern critical text with the assumption that the modern critical text is representative of the authorial, or original, text. 

Yet a significant problem with this perspective is that it cannot be proven, or demonstrated with any level of confidence, from an evidentiary standpoint. This is made evident by the fact that the theory of using text families to get back to the original text has been mostly abandoned. Instead, the effort of modern textual scholarship has shifted from finding the true authorial text to finding the hypothetical initial text. This is the major shift that has occurred since the Hort-Metzger era. Since the text that the people of God used during the Reformation period has been written off as a corruption, the only thing left to do is try to reconstruct the text that existed before that corruption happened. This is more or less the current effort of the Editio Critica Maior. Instead of using text families, the current method examines individual variant units and tries to determine which variant gave birth to the rest of the readings found in later manuscripts. No matter how thorough this analysis is, there will never be a way to determine whether the earliest reading represents the original reading, or whether that reading is even the earliest. This is the biggest limitation of the CBGM. There will never be a method that can span the historical gap between the authorial text and the initial text. In reality, this initial text will simply represent something similar to one version of the Bible from the third or fourth century that the people of God did not use universally. This is clearly shown by the fact that the extant third and fourth century manuscripts do not represent the majority text or the Reformation era text. 

To put this in perspective, there are eight significant manuscripts from before the fifth century (P45, P46, P47, P66, P72, P75, Aleph, and B) that represent the text form which is called “earliest and best” in textbooks and modern bibles. Only two of these are complete bibles. The most complete of these manuscripts do not agree with each other closely enough to be related directly, which means that they did not descend from one uniform manuscript tradition. That means that the origin of these manuscripts will forever be a gray area to some extent. 

Let me paint a picture that may help illustrate what this means. Imagine you find a stack of nearly six thousand bibles. A handful of those bibles are extremely old but were not used very much, so they can still be handled and examined. These older bibles have abrupt readings, omitted verses, more variants between the synoptic passages in the gospels, and a great number of difficult grammatical constructions which take some effort to understand. They look different from the rest of the bibles, which have better grammar, fewer omitted passages, and more harmony in their readings. This handful of bibles is older, however, so you determine that they are the best. Since the majority of the bibles have a number of readings in the New and Old Testaments that disagree with these older bibles, you determine that the majority of the bibles are wrong. You devise a theory that the original bible looked like the minority of older bibles. You make it your life’s mission to ensure that the majority of bibles are not used anymore, and 120 years later, the majority of churches are using the bible you have determined to be earliest and best. A small minority of churches still use the rejected bible, but they are mocked and ridiculed for reading it. Those who read the newly declared oldest bibles make sure these people are called “traditionalists” so that everybody knows they are wrong for not adopting the new bible. You devise pejorative terms like “New Bible Onlyists” to further scorn people for not adapting to the times. The majority of bibles are said to have been proven corrupt, so the division between the two camps grows wider. There is only one problem – in the 120 years since the church adopted this new bible, nobody has been able to prove that the original claim was correct. In fact, there is an increasing amount of evidence which demonstrates that the claim was not correct at all. Instead of rejecting these old bibles, a new method is devised to prove the original theory. The church, mostly unaware of this, continues to read these newly adopted bibles and viciously attack those who have not adopted the new standard.

Conclusion

The period from the authorial event of the New Testament to the Reformation is the most significant when it comes to the textual discussion. There are two narratives of the transmission history during this time. The first is that the Bible was kept pure in the manuscript tradition until the Reformation period, when the text-critical efforts of that time took those preserved manuscripts, edited them into printed editions, and made Bibles from them. The second is that by the third and fourth centuries, the manuscript tradition had begun to evolve as believing Christians smoothed out the grammar, added beloved pericopes, and expanded verses to make the Christology of the Bible more clear. In the second narrative, the Jews were also hard at work corrupting the Hebrew Scriptures, so that by the time the modern period came around, there was not a single Hebrew text which represented the authorial text. 

This conversation is not about the TR or the modern critical text; it is about the narrative of preservation. If God preserved the Bible into the Reformation period, then the work done during that time was the final effort needed. The only reason to believe that an ongoing text-critical effort is required is if the first effort used a corrupted version of God’s Word in the Hebrew and Greek. Since the source material of the Reformation period must be considered corrupted to justify the modern effort, additional methods must be employed which extend beyond the capabilities of the extant data. These methods include constructing hypothetical archetypes of the earliest texts and correcting the Hebrew with Greek versional readings. Despite the best efforts of modern textual scholarship, the results of these methods cannot “prove” anything regarding the original text. The strongest testimony to the authorial text will always be the witness of the people who used those texts in time. Christians can indeed have confidence in their Bible, but I argue that the modern critical methodology cannot provide that confidence. If the Bible was preserved, it was preserved up to the time of the first text-critical effort. That effort produced the Bibles that sparked the Protestant Reformation and the largest Christian revival in the history of the world. The theological works which the modern church stands on were developed from this text, and Christians still stand on that theology, especially the confessionally Reformed. At the very foundation of this conversation are two different narratives, and two different methodologies. Neither of these narratives can be proved purely by extant manuscript data if the manuscript data is viewed agnostically. The real question that must be answered by Christians is, “Did God preserve His Word into the middle period and the Reformation period, or not?” If the manuscripts that represent the minority of the extant data are rejected, then the perspectives of the Reformed are clear as day. They believed the Bible had been preserved in both the Hebrew and the Greek, and I argue that the modern church should join them in that belief. If a case can be made for a preserved Bible from a modern critical perspective, I have yet to see it demonstrated. Unless that happens, I will continue to stand on, and advocate for, the Bible of the Protestant Reformation.  

More Resources:

Jeff Riddle Word Magazine

Introduction to the CBGM “Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching”

Dr. Joel Beeke on Retaining the KJV

Refutation of Dan Wallace on the Byzantine Text

No, Beza Was Not Doing Modern Text-Criticism

Introduction

There is a lot of confusion over what exactly text-criticism is, and what it means to engage in it. Many people, due mostly to meaningless assertions made online, genuinely believe that the modern effort of textual scholarship is equal to the scholarship of the Reformation period. Many say that because Beza and Stephanus did text-criticism, and modern textual scholars also do text-criticism, the two efforts are equivalent. This is true in a certain sense, but the most important component of this appeal is completely neglected. If one were to compare Beza to the CBGM or to Hort, for example, there are critical differences in their methodologies that expose the shallowness of this claim. A brain surgeon and an ophthalmologist may both be doctors, but they are certainly not doing the same thing in their practice. 

There are four major distinctions that set apart Beza from modern textual scholarship.

  1. Beza approached his text-critical work believing that the text had been inspired and preserved by God
  2. Beza valued and utilized a different text platform than modern scholars value and utilize 
  3. Beza took into consideration the reception of a reading by the church as a part of his text-critical methodology, according to the “common faith”
  4. Beza utilized theology in his text-critical methodology

Beza, Set Apart from Modernity

In Beza’s time, there was no such notion as the Initial Text, or the earliest extant text that textual scholars must attempt to reconstruct. Within the Reformed camp there was no notion of an evolving text, nor any mockery of the idea that the people of God possessed the Scriptures in total. These theological concepts had not yet been introduced to the church, except perhaps by the Papacy and other heretical groups. The “default” text of the Reformation was default for a reason – it was the text that the church had overwhelmingly used up to that point in history. The theological premise that the Bible needed to be reconstructed was adopted by the church only after modern textual scholarship realized it could not find the original text with its methodology. Rather than fighting this clear abandonment of orthodoxy, the church capitulated and adopted the modern view that the Bible had only been preserved in the autographs, the original writings of the New Testament. Since the autographs are lost to time, that effectively amounts to a Bible that is not preserved. Thus, any attempt to equate this modern perspective with Beza is confused at best. 

Beza was an astute scholar with a true faith in Christ. Despite the common misconception introduced by unreliable internet sources, Beza carried on a wide correspondence about his text with his contemporaries, including John Calvin, Joachim Camerarius, Pierre Pithou, Patricius Junius, Johannes Grynaeus, Girolamo Zanchi, Meletius Pigas, Johannes Piscator, Johannes Drusius, Tussanus Berchetus, Cornelius Bertram, Matthaeus Beroaldus, and Isaac Casaubon. 

It is often stated that the text platform called the Received Text is based on half a dozen manuscripts and that it was essentially developed by Erasmus in a vacuum. This is an unfortunate error, as Beza himself recorded that he used a copy from Stephanus’ library which was created from a collation of at least fifteen codices, as well as almost all of the printed editions.

“In addition to all this came a copy from the library of our Stephanus, collated by Henri Stephanus, his son and heir of his father’s assiduity, as accurately as possible with some twenty-five manuscript codices and almost all the printed ones” (1565, p. *.iiiii  and Correspondance 5, p. 170). 

This copy, along with readings from as many as nineteen ancient Greek manuscripts, was used in Beza’s 1598 edition.

“…with as many as nineteen very old manuscripts and many printed books from everywhere…” (1598, preface).

Some scholars assert that the number twenty-five was a typesetting error, and that fifteen manuscripts were used. In any case, this number is certainly much higher than the low estimate of six manuscripts which Erasmus is said to have used. It is also important to note that this is a greater number of full manuscripts than are valued highly and used for the modern critical text, and that those manuscripts represented the great majority of the manuscripts we have today. The modern critical text cannot say the same. This brings up an extremely important point – the work of Erasmus does not represent the whole of what became the Textus Receptus. It is far more accurate to say that the Received Text is a representation of Beza and Stephanus than of Erasmus, though Erasmus’ work played a part in the effort. Jan Krans recognizes as much in his work, Beyond What Is Written. 

“Beza acquired a very high status in Protestant and especially Calvinist circles during his lifetime and in the first generations after him. His Greek text was not contested but faithfully reprinted; through the Elzevir editions it was elevated to the status of ‘received text’, textus receptus” (197). 

While the differences between Erasmus’ and Beza’s work were slight, many of Beza’s corrections were actually revisions of Erasmus’ work, especially in Revelation, where he made 17 changes. The claim is often made that Beza utilized Vulgate readings, but this is intentionally misleading, because though he referred to the Vulgate, he never considered a Vulgate reading sufficient on its own to edit the Greek text. It was actually the Papists who circulated such rumors to undermine the validity of Beza’s work. An example of Beza’s methodology which sets it apart from the modern effort is his use of theological principles to decide on a variant, as in Luke 2:22.

“Of Mary, αυτης. In the Vulgate: ‘eius (‘of him/her’), apparently ‘of Mary’. For it is proper to fulfill the Law, although Mary after Christ’s birth would be all the more sanctified, in any case, we have expressed the antecedent itself in full, in order to avoid any ambiguity. Most manuscripts [codices] have αυτων, and thus Origen reads also, followed by Erasmus. But I fail to see how this could fit, while the law of purification only concerns the mother. And so I prefer to follow the old edition with which the Complutensian edition agrees” (Krans, 294. Cited from 1556 edition). 

This sort of methodology is exemplary of Beza’s work. Modern critical text advocates may not approve of this sort of methodology, which should cause them to distance themselves from Beza, not claim that he was doing the same thing that they are doing. As far as I can tell, no actual textual scholars are claiming to do what Beza did. The only people who make this claim are the ones who wish to convince Christians that the modern effort is acceptable for use in the church. It is abundantly clear that Beza approached the text from a much different perspective. In order to support the claim that Beza and Stephanus, whose work represents what would eventually be called the Received Text, were doing the “same thing as we are today”, one would have to demonstrate six things:

  1. Modern scholars working on the ECM are orthodox, protestant believers
  2. Modern scholars working on the ECM believe they have the original
  3. Modern scholars that are working on the ECM believe the Bible to be inspired by God 
  4. Modern scholars utilize, in part, orthodox protestant theology to decide on variant readings 
  5. Modern scholars consult the “common faith” of the Christian religion in their methodology 
  6. Modern scholars value the readings historically received by the church when deciding on a variant

Saying that Beza was “doing the same thing as modern text-critics” simply because both Beza and modern scholars have made editions of the Greek New Testament is ignorant. This is apparent in the perspective of DC Parker, who is leading the team that will give the book of John to the people of God in the ECM, which modern bibles will use in translation. 

“The New Testament continued to evolve, so that the New Testament of today is different from the New Testament of the sixteenth century, which is in turn different from the ninth” (Textual Scholarship and the Making of the New Testament, 12). 

“In its text and in its format, the work will continue to change, just as it has done throughout history hitherto. The textual scholarship of each generation and each individual contribution has its value as a step in the road, but is never complete in itself” (Ibid., 21). 

“I argued, the modern concept of a single authoritative ‘original’ text was a hopeful anachronism, foisting on early Christianity something that can only exist as a result of modern concepts of textual production” (Ibid., 24). 

“The New Testament philologist’s task is not to recover an original authorial text, not only because we cannot at present know on philological grounds what the original text might have been, nor even because there may have been several forms to the tradition, but because philology is not able to make a pronouncement as to whether or not there was such an authorial text” (Ibid., 27). 

“As I have said, the task of editing is to reconstruct the oldest available form of a work by analysis of the texts that appear in the extant witnesses. This is a logical process which unveils the history of the text and its oldest form. It cannot itself have anything to say about the relationship of that oldest form to an authorial text” (Ibid., 28). 

“But we need not then believe that the Initial Text is an authorial text, or a definitive text, or the only form in which the works once circulated” (Ibid., 29). 

“I should add a word of warning, that in the case of biblical research and bibliography will inevitably find theology dragged into it at some point. Where a text is revered by some people as divinely inspired, in some cases as verbally precise pronouncement by an all-powerful God, or even at its least dramatic when it is viewed as a helpful guide for daily life, the findings of the bibliographer may be of particular importance. And in case we get too carried away with the importance of penmanship and of the texts by which it is preserved, let us remember that our codices are not all in all, and may be no more than a byproduct of our lives” (Ibid., 30,31). 

Conclusion

It is clear, then, that the work of Beza stands in stark contrast to the work of modern textual scholars. It is high time that the assertion that Beza did the same thing modern text critics are doing now is viewed with incredulity and rejected outright. The fact remains that Beza employed a number of principles that are found nowhere in the CBGM, or in any other significant text-critical methodology for that matter. DC Parker is actually a huge blessing to the church, because his commentary on the effort of modern textual criticism is accurate and not plagued by religious feelings or optimism. What better spokesperson for the modern critical text than one of the editors of the ECM? Christians should examine the quotations above and test them against the Scriptures and against their conscience. I doubt I could find a single Reformed believer who would agree with DC Parker on what the Bible is, and yet the vast majority of the modern-day Reformed are getting their Bible from him and his colleagues. I do not say that to disparage Dr. Parker; he is one of the best textual scholars alive today. I say it to highlight the reality that the methodology being employed is not the same as has been employed throughout the ages, and it is hard to believe that many Reformed believers would approve of the methodology if they understood it better.

The time is coming when every Christian will be faced with the reality of this evolving text. Many have already seen enough of it to know that they cannot, in good faith, support such a text. An evolving bible simply does not comport with orthodox Christian belief. There is a wealth of reasons to reject the modern critical text, and this is another one. The work of Beza was the work of a faithful Christian and a brilliant scholar. He used the manuscripts which the people of God had consented to, and he consulted many scholars and theologians in the process. When Christians attack the Received Text, they really need to consider which text it is that they are attacking. Further, when Christians advocate for the modern critical text, they need to consider the text that they are supporting. Almost always, the only arguments offered for the modern critical text are simply attacks on the Received Text. Yet the same text that is attacked so viciously in order to prop up the evolving modern critical text is the text that the church faithfully used and built the doctrines that we, as modern Protestants, stand on. The only reality in which the Bible is preserved is the reality that the textual efforts of the Reformation period were the faithful efforts of men whom God used to distribute His preserved Word to the world. 

This is possibly the most severe disconnect in the logic of those who support the modern critical text. The modern critical text does not offer what orthodox Christianity expects from a book claiming to be the Holy Scriptures. I have not seen a single meaningful argument which addresses how a text that disagrees with the text of the previous era can possibly be the same preserved text. On one hand, a Christian offers lip service to the perfect preservation and inerrancy of God’s Word; on the other, he adopts a text that nobody actually involved in the creation of that text thinks is inerrant or preserved in any meaningful way. All it takes is a brief conversation with a textual scholar at Tyndale House to realize as much. The reality is, if I believed that the modern critical text was the only option, I would not believe it preserved either. It disagrees in important places with the vast majority of manuscripts and, even more importantly, with the testimony of the church throughout the ages. It disagrees with the same text upon which theologians built doctrine, a text that modern scholars have casually tossed out in modern bibles. The only logical conclusion that can be drawn from the modern critical text is that the Bible has not been preserved. Any attempt to claim otherwise is simply a Kantian leap of faith. It should be abundantly clear that the theological problems posed by the modern critical text have not been answered in any way that comports with the reality that God has preserved His Word.