Textual Traditionalism, TR Onlyism, and KJV Onlyism

Introduction

The use of pejoratives in debate is a time-tested tactic that works. I imagine that is why people use them. In the case of the Textual Discussion, many employ pejoratives to associate adherence to a particular Greek and Hebrew text with positions that have negative connotations. This has been effective in steering people away from, in particular, the Confessional Text position. Two examples are “Textual Traditionalism” and “TR Onlyism”. Another, similar tactic is simply conflating adherence to the Reformation era text with King James Onlyism as it is defined by Peter Ruckman and Sam Gipp. In any case, for those actually interested in understanding this position and representing it fairly, these terms are unhelpful because they are clear and intentional misrepresentations. The term “misrepresentation” is often used but rarely explained. It is important that Christians turn on their brains when they hear the word “misrepresentation” and investigate whether somebody is actually being truthful when they say they are being misrepresented. It is often the case that opponents of the Reformation era texts readily employ this language without explaining how they are being misrepresented. Typically, somebody who cries “misrepresentation!” every time somebody disagrees with them is fond of playing victim.

When those in the Confessional Text camp claim that pejoratives such as “Textual Traditionalist”, “TR Onlyist”, and “KJV Onlyist” are blatant and uncharitable misrepresentations, those who rabidly attack the Received Text are prone to mock and scorn. This might be warranted if there were no justification for the claim of misrepresentation, but the continued use of such pejoratives after ample explanation is a chief example of biting and devouring (Gal. 5:15) and prideful contention (Prov. 13:10). Despite the assertion that we should treat Christian brothers with as little charity as possible if they disagree on a point of doctrine, the Biblical testimony is abundantly clear here – we should endeavor to keep the unity of the Spirit in the bond of peace (Eph. 4:3). The Bible does not call us to be doctrinal vigilantes, but to exhort with all patience and humility (Col. 3:12-17).

That is not to say that Christians are not called to battle (1 Pet. 1:13), but the way Christians do battle should be, well, distinctly Christian (John 13:35). The chief battle Christians fight is against their sin, not each other. So when Christians continue to unabashedly and proudly employ pejoratives in their critique of other Christians, it is clear that something is off. I am not opposed to strong language and rhetoric, as long as that language and rhetoric is justified. In any case, I thought I would provide a helpful review of the uncharitable pejoratives which are used as debate tactics against those who adhere to the historical text of the Protestant religion. No matter how long these pejoratives have been in use, every Christian has the responsibility to be better than those who came before them and to determine whether such terms accurately describe the person they are talking about. It is especially condemning if Christians, after seeing how these terms misrepresent brothers and sisters in Christ, continue to use them.

Textual Traditionalism

In the first place, Christians should seek to be accurate when describing a theological or traditional perspective. When the term “Anabaptist” is employed, for example, it is not appropriately applied to Particular Baptists, as that is simply historically imprecise. The only reason you would call a Reformed Baptist an “Anabaptist” is if you were trying to bite and sting. Misuse of terminology introduces more confusion into a conversation, which Christians should generally be opposed to in principle (1 Cor. 14:33). If a term introduces more confusion and chaos than order and structure, it should generally be avoided. So does the term “Textual Traditionalist” introduce more clarity? Does it provide insight into what is being discussed? The answer is clearly no.

The term is unfortunately vague and imprecise. Anybody claiming to be a scholar, or to make a scholarly argument, would avoid such ambiguity. To use database language, there is nothing that uniquely identifies this term with any particular position. It could just as easily be applied to “red letter Christians” or the “unhitchers”, whose textual tradition is offensive to Reformed believers. This term serves only a polemical purpose, aimed at the inclinations of the modern church, which recoils at the term “tradition.” Traditionalism implies that people adhere to a tradition for the sake of the tradition itself. This is not the case for the Confessional Text camp.

Yet, if you’re Reformed, the term “tradition” should not scare you. It is famously said, “He who says he has no tradition is blind to his tradition.” This typically holds true for those who employ this kind of language. Everybody has a tradition, and those traditions have specific names. This highlights an important reality as it pertains to this pejorative – it plays to an audience who associates negativity with tradition while also appealing to an audience who supposedly has a great deal of pride in their Protestant heritage. In making use of such a term, one simultaneously appeals to the soft, “tradition is bad” version of Christianity, while also seemingly arguing for an alternative form of “textual traditionalism.” If our definition of traditionalism is that one only accepts their own tradition as valid, then those who aggressively advocate for the modern critical text are also traditionalists, so it seems. The term is so vague that it might as well apply to anybody who has any thought-out tradition on the text of Holy Scripture. It is wise to avoid terminology so imprecise that it means practically nothing at all, if the goal is to be “scholarly.” If the intention is to prevent people from actually understanding the position itself and to paint a brother in Christ as a rabid fundamentalist, then it is quite apt. In any case, it is better to use a precise term than an imprecise one, if a precise term exists. That seems like a simple principle to follow.

TR Onlyism

This is probably the most commonly used pejorative for the Confessional Text position. It dates back at least to 1990 and is typically used to describe those who only accept Bibles translated from the Received Greek Text of the Protestant Reformation era. Typically, opponents of this text will misrepresent the position by saying that advocates of the TR “believe it to be inspired” specially, in some sort of re-inspiration event. I don’t know a single person in the Confessional Text camp who believes the TR to be re-inspired.

Similar to the first term, it is unfortunately vague, and obviously meant for use in debate, not to provide clarity. In every case it is used to conflate the Confessional Text position with King James Onlyism, which is typically defined by way of Peter Ruckman. This is problematic for several reasons. The first is that the Confessional Text position is demonstrably not Ruckmanite KJV Onlyism. The Ruckmanite view of the Bible is dangerously false, and it is embarrassing and shameful to apply such a view to a supposed brother in the Lord. The second is that it is far too vague a title to be used in any way that can be considered scholarly. Scholars constantly pride themselves on being precise, not intentionally dull. Since those who read Bibles made from the Received Text also read the Old Testament, a more precise title would be “Masoretic Text and Received Text Onlyists”, or “MTRT Onlyists” for short. It is true that those in the Confessional Text camp read translations made from these texts, so the title is adequately descriptive. Though if we’re in the business of calling anybody who has a distinct view on a topic an “onlyist”, I encourage those who rail against the Received Text to adopt the title “Modern Critical Text Onlyist,” or perhaps “Historical Critical Text Onlyist.” Whichever suits your fancy.

The major problem with calling every disagreement a controversy and every person who holds a distinct position an “onlyist” is that it is lacking in Christian charity and scholarly candor. Those in the Confessional Text camp do not adhere to these texts by virtue of the texts themselves, but primarily because they are the texts that the framers of the Confessions received. Thus, those in the Confessional Text camp adopt the reasons and logic which caused the Reformed to adopt those texts as well. The reasons and logic for receiving such a text are laid out in chapter 1 of the WCF and LBCF. All of the proof texts for the doctrines within the Reformed confessions are based on the Traditional Text of Scripture. The framers rejected the readings which have since made their way wholesale into the modern Bible versions. This may come as a shock to people, but the framers of the Reformed confessions built their body of divinity on many texts that have been thrown out of modern Bibles. This is not a matter of opinion, but fact. The Reformed Confessions, in their original form, were reliant upon the text form of the Traditional Text. People can think this was due to their ignorance of the text, or that they were just wrong in establishing doctrine on 1 John 5:7, Mark 16:9-20, etc., but the fact is that they did. You can’t change history simply because you don’t like it. Ironically, this is the charge leveled by those who advocate for the use of the Modern Critical Text against those who adhere to the Received Text. In any case, the name “Confessional Text” is used simply because it describes a position which adheres to the same text as the framers of the Reformed Protestant confessions, and for the same reasons.

King James Version Onlyism

Maybe it is time that somebody writes a book called “The Onlyist Controversy,” cataloging every Christian position which makes somebody an “Onlyist.” Some examples might be Psalmody Onlyists, Presbyterian Onlyists, Credobaptist Onlyists, and so on. When I first heard the term KJV Onlyist, I thought it meant that somebody believes the KJV, in English, was literally immediately inspired by the pens of the translators. Due to popular works such as The King James Only Controversy and critically acclaimed textbooks such as How to Interpret and Apply the New Testament, the definition of KJV Onlyist has been extended to everybody who doesn’t read a modern Bible, even Majority Text advocates and people who read the NKJV. If the term KJV Onlyist applies only to people who think that somebody has to learn English to read the Bible, then it has a whole lot of meaning: it is a distinct category, set apart from all other categories, applied appropriately to one specific subset of people. If it means everybody who doesn’t read a modern Bible, then the standard becomes extremely arbitrary and vague. It loses its meaning and its specificity, transforming it from a scalpel into a bludgeoning rod.

One of the things that Christians, especially within the Calvinistic apologetic realm, value is consistency. If the goal is consistency, I’d like to apply the “onlyist” standard equally across the board. If you are a Christian who only reads an ESV, you are an ESV Onlyist. If you are a Christian who only reads a Bible based on the modern critical text, you are a Modern Critical Text Onlyist. Note that when this standard is applied equally across the board, it doesn’t make a whole lot of sense. Thousands of Christians only read one translation. Adding the term “Onlyist” to the end of something somebody believes is simply useless in terms of conveying meaning. It says nothing about why the person only reads that version. What it does convey is the idea of “badness” or “wrongness,” by ironically appealing to the modern idea that exclusivity is bad. The term KJV Onlyist has actually lost all meaning because it has been applied so broadly, and it doesn’t make sense at all when the same standard is applied to everybody else. If we were to apply the term only to Ruckmanites, then perhaps it would have meaning. Due to the broad application of the term, it’s difficult to determine if being an “onlyist” is even a bad thing. It’s just a thing. Is being an ESV Onlyist bad? I suppose that depends on why you only read an ESV. Is being a KJV Onlyist bad? I suppose that depends on why you only read the KJV. Ironically, the grossly wide application of the term “KJV Onlyist” to quite literally everybody who doesn’t read a modern Bible has rendered the term ambiguous. This is what happens when we aren’t consistent: things stop making sense. So if the goal is specificity, the term KJV Onlyist simply means that somebody only reads the KJV. In the same way, an ESV Onlyist is somebody who only reads the ESV.

So I propose a solution. If the only qualifier for being a translational onlyist is that you only read one Bible, then I say we apply the onlyist standard across the board. In any case, the terminology in itself does not explain the why, so it is simply a synonym for KJV reader or ESV reader. That is not to say that the term “KJV Onlyist” doesn’t have certain negative connotations, but according to the books on the matter, there are four or five different kinds of KJV Onlyists, and they are all very different. Since these groups are so radically different, it seems appropriate to use more specific terms. In fact, in every case, there are terms that can be used for these different types of “KJV Onlyists”. Here they are:

1. “I like the KJV the best” – KJV Preferred

2. “The Textual Argument” – Majority Text Advocate or Confessional Text Advocate

3. “Received Text Only” – Nobody holds this position as it is defined in the literature, as nobody believes the TR was “re-inspired”

4. “The KJV as New Revelation” – Ruckmanite KJV Onlyism

It is not hard to define these distinct groups, and it takes very little effort to do so. Some people proudly tout the KJVO title but are not Ruckmanites. In any case, believe it or not, people have legitimate reasons for reading the KJV other than the reasoning of Sam Gipp or Peter Ruckman.

Conclusion

Relying on pejoratives to apply the “boogeyman effect” to a group of people is an effective tactic, I’ll grant that. It becomes a problem when more specific terms exist that adequately describe a position and actually convey meaning. This of course assumes that we are all Christians here. If the goal is rational, Christlike discussion, then perhaps let’s be rational and Christlike. Mark Ward was able to do it when he employed the term Confessional Bibliology to describe the Confessional Text position. The term is concise, accurate, and not a pejorative. Simply making up nicknames for people or groups you don’t like may be popular on the playground, but as Ward has shown, it’s not the way things are done in the scholarly world. Dirk Jongkind shows the same scholarly care when he employs the term “Textus Receptus proponents” in his book. It’s amazing how readily scholars use terminology that actually conveys meaning. Both Ward and Jongkind use terminology that is recognizable, specific, and descriptive. Perhaps they are not fans of wasting words, or perhaps they are actually concerned with representing their brothers in Christ fairly. In any case, it seems that it is possible to discuss the issue without being pejorative.

So what will you say, Christian? Will you employ the terminology used by scholars, or continue using pejoratives which convey very little meaning and add confusion to the conversation? At least, for the sake of consistency, pick something meaningful and specific.   

A Bit of Friendly Dialogue with Triablogue

Introduction

Today I was pointed to an article posted by the brother(s) who run the Triablogue site. I thought that the points represented a lot of the mainstream ideas regarding the Confessional Text position, and that responding would be helpful for all. I am grateful for this chance to interact, and I hope my response is productive. I will be going through each of the 16 points offered by the author here.

16 Points

1. At one level, the significance of this issue is easily overblown. The text of the NT has enormous multiple-attestation. Even if you opt for the Byzantine text, there’s not much that can go wrong. 

Starting off my interaction, I want to offer an alternative to this perspective. There is, in fact, a lot that can go wrong when it comes to textual criticism of the New Testament. The post includes the “Bart Ehrman” tag at the bottom, so I imagine he falls into the category of “a lot going wrong”. Regardless of what the author thinks, this issue is very important to Christians everywhere who read their Bible, and confusion over just one variant can send believers into spiritual tumult. I have fielded many phone calls with people dealing with this issue on a personal level, so I’m not sure it’s fair to say that this issue is “easily overblown”. Most Christians do not have the time or money to invest in understanding this issue, and as a result, it can be extremely important. I am hesitant to set the bar too low when dealing with people’s confidence in their Bible.

2. At another level, it is a big issue. What’s at stake is convincing Christians to believe their faith hinges on a particular text tradition like the Byzantine or the TR. That’s the “canonical” text. This leaves them poised for a gratuitous crisis of faith if they develop doubts about the TR. In this case, their faith in the Bible becomes inseparable from faith in the TR and the KJV. That’s apostasy waiting to happen. DeSoto is going down exactly the same road as Bart Ehrman. The same all-or-nothing mentality. The same false dichotomies. 

In this second point, it seems that the author does recognize the importance of the issue, but the issue is “convincing Christians to believe their faith hangs on a particular text tradition”. I have never made this claim, nor will I ever say that people who read other Bibles cannot be saved, or that their faith “hinges” on which Bible they read. The Shepherd is not in the business of losing sheep. I will say, however, that many people do, in the real world, encounter a faith crisis when they discover certain realities about the modern critical text. That is not to say that this is the experience of all Christians, but it does happen, and it happens perhaps more often than the author might think. While the author has stated that I am “going down exactly the same road as Bart Ehrman”, I can’t help but think this is simply a rehashing of James White’s claim on the matter without any evidence to actually support it. I imagine the author agrees a lot more with Bart Ehrman than he realizes. I have interacted with Bart Ehrman’s material, and have found that the Confessional Text position, from an epistemological and material standpoint, actually does offer a response to his claims. Again, I am not making a false “all or nothing” claim. I know many dear brothers and sisters in Christ who do just fine with the ESV and NASB. In my experience, many people are simply unaware of the process that goes into making their Bible. Christians all operate on the assumption that their Bible is the inspired Word of God, and when they investigate the various text-critical theories, it often causes them to doubt that their Bible is what they think it is. This is not about being “right” or creating false dichotomies.

3. Because we have so many copies of the Greek NT, copies with many, mostly trivial variants, it’s important although not strictly necessary to produce a critical edition of the NT. That’s not unique to proponents of the eclectic text. Astute proponents of the Byzantine text also appreciate the need to produce a critical edition of the NT, using internal and external textual criteria. For instance: http://rosetta.reltech.org/TC/v06/Robinson2001.html

I have no commentary on this point, but am including it for the sake of including the whole article in my response.

4. I myself subscribe to mainstream textual criticism and the eclectic text approach. I don’t have a firm opinion about CBGM. Certainly we should take advantage of computers to digitize our MSS, then compare them. Stanley Porter is a critic of CBGM.  I’d add that it isn’t necessary to choose between CBGM and traditional textual criticism. You can compare the results of both, and the reasoning behind their choices. Metzger’s textual commentary explains how traditional text critics made their choices. And there will be a textual commentary for the CBGM edition when that project is completed. 

I’m not sure what “mainstream textual criticism” the author refers to here. The mainstream approach to creating Greek texts is the CBGM, as that is the method being used to produce the mainstream printed editions of the modern critical text as represented in the NA/UBS platform. The Tyndale House Greek New Testament and various Majority Text editions are the exception, of course, but there aren’t really translations of those texts. It is important to note here that the CBGM and reasoned eclecticism are not at odds with each other. The CBGM is the method being used to create the Editio Critica Maior (not the “CBGM Edition”), and the ECM is being used to make printed Greek texts such as the NA/UBS. If by “mainstream textual criticism” the author means the pre-CBGM era of textual scholarship, it is again important to note that the axioms developed during the Hort–Metzger era of textual scholarship have been almost universally abandoned. Everything from the way scribal habits are viewed to the notion of “text-types” has moved in a different direction. Even though this is a popular level article, it may be important for the author to brush up on some of the current literature before providing a response.

5. Opting for the Byzantine tradition would be more defensible than opting for the T.R. That’s not my own position,  but there’s a respectable argument for that alternative. 

I agree; from a critical, evidence-based approach, the Majority Text or Byzantine platform is far more defensible than the modern critical text, as it represents the text that was most copied and has a wide harmony of agreement in its witnesses. If you’re looking for a well-developed, though not widely adopted, critical position on the text, James Snapp has done extensive work with his Equitable Eclecticism.

6. From what I can tell, all the Reformed proponents of the TR and the KJV are dabblers and dilettantes. They have no formal expertise in textual criticism. In fairness, they might say the same thing about me. But that proves my point. I admit that I’m an amateur when it comes to OT/NT criticism. And I don’t object to amateurs having opinions about range of specialized issues. I don’t think we should abode unconditional confidence in the judgment of experts.  But it’s because I’m an amateur that I don’t need to get my information from another amateur. If I want an amateur opinion about textual criticism, I can just consult my own opinion! By the same token, I don’t get my information about biology and physics from amateurs. Rather, I study what the professionals have to say. I might still dissent on philosophical or theological grounds. Or I might dissent if I think their discipline has become politicized, which skews their assessment.   This also goes beyond formal training. Some scholars like Bruce Metzger, Peter Williams, and Emanuel Tov have an exceptional skill set and natural aptitude that many scholars lack. Reformed proponents of the TR might also say that since mainstream NT criticism is so compromised, it’s a good thing that they lack formal training in that discipline. But that begs the question. 

I admit, I am no textual scholar. I have only done a bit of reading and writing on the matter. There are textual scholars one might look to for a more scholarly handling of this position, including Dr. Jeff Riddle, Edward F. Hills, Theodore Letis, and Dean Burgon. It is interesting that the author simultaneously discredits the popular level proponents of the Confessional Text position, such as myself, while also being admittedly less than well-read on the topic and writing a popular level article. Dr. Gurry’s book is very helpful and accessible and may offer some helpful new vocabulary and concepts to the reader. In terms of where I am getting my information, I frequently cite my sources on my blog. These sources include H.C. Hoskier, Bart Ehrman, D.C. Parker, Eldon J. Epp, Jim Royse, Jan Krans, Peter Gurry, Tommy Wasserman, Jennifer Knust, Edward F. Hills, Dirk Jongkind, and so on. I even have, sitting on my desk next to me, Dr. Gurry and Dr. Hixson’s latest book, Myths and Mistakes in New Testament Textual Criticism. That may be a helpful place to start for the author of this article.

7. A basic problem with canonizing the KJV is that most Christians aren’t English-speakers, most Christians were never English-speakers, and within the foreseeable future, most Christians won’t be English-speakers. So it’s absurdly ethnocentric. 

This point seems rather disconnected from the actual discussion. I personally have not advocated for the “canonization of the KJV”, and I am not aware of any in my camp who do. There is a wealth of translations made from the Traditional Greek and Hebrew texts that have been used for hundreds of years, and hundreds more being made by the Trinitarian Bible Society into every vulgar tongue. There are also other English translations from the Traditional Text, besides the KJV, that many in the Confessional Text camp read. Preference for the KJV is usually based on a preference for the actual translation methodology and the fact that it is most widely used. Again, this point seems to indicate that the author is not familiar with the position, other than perhaps Dr. Peter Ruckman’s strange opinion that the King James translation was somehow re-inspired. Even the Independent Fundamentalist Baptists, who often proudly tout the title “KJVO”, do not believe in the same vein as Ruckman or Gipp.

8. Another problem is that we have a better understanding of Greek and Hebrew than the KJV translators. We have a wider sampling of ancient Hebrew than they had. And we have a wider sampling of ancient Greek than they had. For instance, Greek papyri give us access to non-literary Greek. That gives us access to Greek slang or Greek words with slang meanings. In addition, computers enable us to make exhaustive comparisons in vocabulary and grammatical constructions. 

I recommend that the author look up just one of the KJV translators, Lancelot Andrewes, and see if he still believes this claim. Andrewes was one among many who knew the languages so well that he could converse fluently in them. I wonder if the author, or perhaps most seminary professors, could hold a casual conversation in Latin, Hebrew, Greek, Aramaic, etc. without skipping a beat. The King James translators certainly could. But again, I’m still struggling to see how translation of texts has anything to do with the text itself. If somebody wanted to come along and try to make a better version from the Traditional Text, I wouldn’t have a problem with that. Interestingly enough, the King James translators didn’t need a computer to know the languages. I’m sure the author would agree with me when I say that if somebody needed a computer to construct a sentence in English, they probably don’t know English all that well. I wonder how well Google Translate would do at making a Bible.

9. It’s true that earlier MSS aren’t necessarily better than later MSS. Obviously an 8C MS isn’t automatically better than a 9C MS. But when we’re talking about the NT papyri, I do think there’s a presumption that earlier is better because they are so chronologically close to the Urtext.  

I’m glad the author recognizes that newer manuscripts can preserve older readings. This much is fact. In terms of the Papyri, I’m not sure what that has to do with the conversation. There are fewer than 150 published Papyri (most of them scraps), and there aren’t enough of them to make a whole New Testament. Not to mention that there are Byzantine readings in the Papyri. Eldon J. Epp calls the 20th century the “period of the papyri” and an “interlude” in the scheme of textual scholarship, because what came before and after is more significant. Perhaps the author could provide some examples of how the Papyri have changed the grand scheme of textual criticism.

10. Reformed TR proponents operate with an arbitrary notion of divine providence regarding the preservation of the text. They act like special providence singles out the TR rather than the Byzantine text or the NT papyri or the DSS or Codex Vaticanus. But why would providence only extend to the preservation of the text in the TR?  Likewise, the reason OT textual critics sometimes prefer the LXX to the MT is because the LXX translators had an earlier text than the Massoretes. So they had a text that might well preserve the original reading in some cases where the MT lost it.  

I’m not sure I would say it’s “arbitrary”. The Masoretic Hebrew and Greek Received Texts are the texts that were used almost exclusively among Protestants for translation, commentary, and theological works from the Reformation into the modern period. Chances are, if you have any works of the Puritans and the Post-Reformation Divines, they are using this text. If you adhere to a confession, its framers used the “arbitrary” text. Most theological works and commentaries into the 20th century use the AV and its underlying texts. Some might argue that this is simply because they did not have the modern critical text, but isn’t that the point? God works in time, and in time, the church had the Traditional Text. Further, the argument against the Masoretic Text is curious, because there aren’t really any other Hebrew texts to point to. The Reformed confessions set the standard at the Greek and Hebrew texts, which were immediately inspired. If we don’t have those texts, what do we have? If the argument is that the LXX preserves original readings, is that not the same argument you have a problem with as it pertains to King James Onlyists? If a translation can preserve authentic readings, what exactly is the problem with KJV Onlyism? Translations can give valuable insight in supporting readings with slim manuscript support, but supplying a reading from a translation is exactly the kind of argument Ruckman and Gipp make when they suppose the KJV should correct the Greek and Hebrew. There is an important distinction between supplying a reading from a translation that does not agree with the original texts and supporting a slimly attested reading in the original languages by way of versional evidence.

11. I’m no expert (something I share in common with Reformed TR proponents), but it seems to be that appeal to the Majority text is a statistical fallacy. If more MSS were produced by a particular locality, and more of those survive, that just means our extant MSS oversample a local textual tradition. Their numerical preponderance in itself creates no presumption that it’s more representative. Rather, that may simply be a geographical and historical accident. So the larger sample is an arbitrary sample. The fact that we have a larger sample of that textual tradition is random in the sense that it’s a coincidence of geography and the ravages of time. The Majority text may well be unrepresentative because a local textual tradition is overrepresented. 

I agree that I am no expert; with that I take no issue. I would challenge the author of this article to turn that argument on the formerly titled “Alexandrian Text Family”, which has very slim manuscript support. The author appeals to “other textual traditions”, but there aren’t really any others. There is no Alexandrian text family, or Western, or Caesarean. The pregenealogical coherence component of the CBGM demonstrates this overwhelmingly. In the same way, and more likely, the smallest smattering of manuscripts, which are geographically local to one area, are in fact the anomaly. This is especially easy to understand when those manuscripts weren’t copied, and the ancestor(s) of those manuscripts is lost. The text that the author is advocating for is a blip on the outskirts of the map of the manuscript data. The author is right when it comes to the manuscript data; however, it is almost impossible to prove the significance of any one manuscript, because we simply do not know enough about them other than how the people of God used and copied them.

12. It’s often said that despite all the textual variants, the true reading is contained somewhere in our “5000” extant Greek MSS. But that bare statement can be misleading. This isn’t like finding a needle in a haystack. It’s not like our MSS are riddled with indetectable mistakes. 

i) Words are parts of sentences. If a scribe uses the wrong word, that usually makes the sentence nonsense. And it’s easy to spot which word is messing up the sentence. Moreover, it’s usually easy to figure out what the right word was, even if you only have that MS to go by.
We do this all the time. Email and text messages frequently contained recognizable typos, but we can usually figure out the intended word. 
ii) But suppose we can’t figure out what the original word was. So we consult other MS. The right word isn’t indetectable. If another MSS has the same sentence, but with a different word, and the sentence makes sense, then that’s probably the authentic reading. 
iii) Suppose I have two independent editions of The Adventures of Tom Sawyer. Both editions contain typos. But they contain different typos. Suppose one edition contains a sentence with a typo, and I can’t figure out the original word. So I consult the other edition, where the parallel sentence makes sense. So that probably preserves the original word. 

I find little issue with this point as stated. The first problem is that, in the pre-CBGM era of textual scholarship, axioms such as “prefer the less harmonious reading” and “prefer the shorter reading” gunk up point i) in the above list. Regardless, the current edition of the NA28 contains readings which are “split”, meaning the editors of the text cannot determine which came first in the manuscript tradition. So when the author claims, “It’s not like our MSS are riddled with indetectable mistakes,” that is in fact entirely possible. Dr. Gurry comments on this in his book when he notes that a later reading can be mistakenly supplied into the text over an earlier reading, undetected. The texts deemed earliest stand basically alone in the manuscript tradition, unless you compare their readings to the thousands of manuscripts the author seems to reject. The author’s Tom Sawyer scenario is not exactly relevant, because we know what the source material of Tom Sawyer said; there is a master copy to compare against. This idea of correcting the text by comparison of a few manuscripts is, however, the view of the Reformed and Post-Reformation Divines as it concerned the text they had in hand, the Traditional Text. They believed this sort of decision making could be done. That is what I believe can be done, and has been done. The editors of the modern critical text do not necessarily share that opinion, as evidenced by the minefield of diamonds in the apparatus of the NA28. The fact is that the current work being done is far more complex than the example given in the original article.

13. Opponents of the eclectic text allege that editors are “creating” the text. But that’s deceptive. It doesn’t mean they are inventing sentences. It just means they use more than one witness to the text. Since we know for a fact that scribes introduces changes into the text (usually inadvertently), we can’t rely on a single MS as it stands. It’s necessary to make corrections. And we do that by reference to other MSS. 

To this point I’ll simply respond with the words of somebody who has actually created Greek texts, and who is listed as the team lead for the ECM in the Gospel of John, D.C. Parker:

“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.”

I agree that we cannot rely on one manuscript; I don’t think anybody would disagree. Nobody believes that the Bible came down through one single copy of the original. The Reformed and Post-Reformation Divines did not believe that, and neither do I. A more important conversation to have is whether the manuscripts we have now represent the text as it was historically transmitted in time. God never promised to preserve His Word in handwritten copies of the original texts. It seems we can draw from good and necessary consequence that those preserved texts must be in the original languages, but not necessarily in handwritten form. The fact is that original readings in the original languages can be, and have been, preserved with printed ink, not just handwritten ink. The author does not seem to have a problem with a reading being preserved in a translation (the LXX), so I’m not sure what point is being made here.

14. In general, biblical teaching is redundant. It doesn’t hinge on one particular passage. Major doctrines are multiply-attested. The life of Christ is multiple-attested (four Gospels). 

I don’t disagree that many Biblical doctrines are not dependent on one proof text. I do not agree that all doctrines can be demonstrated equally from multiple proofs. I would be curious to see how the author handles the hapax legomenon from which our doctrine of inspiration comes – θεόπνευστος. There are quite a few places where doctrine is not established on multiple verses. The Reformed constantly built doctrine on one passage of Scripture, as evidenced by the Scripture proof texts of the Westminster Confession. Further, we do not demand this kind of thinking from preachers when they work expositionally through the text. We trust that each word is valuable for preaching and doctrine from the pulpit, even where those verses can be built out by passages in other places. It is not that we do not let Scripture interpret Scripture, but that we let Scripture interpret Scripture while also trusting and believing every word that “proceedeth out of the mouth of God.”

15. The way Reformed TR advocates cling to the Long Ending of Mark is hypocritical. If they truly believe that’s the original ending, then they ought to belong to charismatic, snake-handling churches. 

I wonder if Calvin, Matthew Henry, Gill, etc. would agree? They all seemed to get by just fine with the historical interpretation of it. Perhaps the author could investigate a commentary to see how the church interpreted this passage before it was hucked out of the Bible on the basis of just two manuscripts.

16. What does God require of us? To be faithful to the best text we have at our disposal. Surely he doesn’t require us to be faithful to an unattainable word-perfect text. Even in the 1C, Christians copied originals. The originals were inerrant but the copies were not. 

This is a modern view by way of A.A. Hodge and B.B. Warfield. Turretin and Van Mastricht certainly would not agree with this statement. If the Bible is not perfect, what exactly does the author think he is reading when he opens a Bible? Is it not the inspired Word of God? Or is it just partially inspired, where it can be proven to be so by text-critical practice? How do we determine which passages are a part of the attainable Word of God, and which parts are unattainable? I would be curious to see if the author would be willing to provide a methodology for determining this. John Brown of Haddington thought that kind of distinction was unwise, and I think we should too.

Conclusion

I hope my interaction with the article is not perceived as hostile. Hopefully my response causes those who read it to pick up some of the literature being published on the modern critical methods. We can all learn more, myself included. If the author wants to provide another response, I may be able to interact with it if I have the time. In the meantime, I hope the reader picks up a Bible and reads it, regardless of where they stand on the textual issue.

A Summary of the Confessional Text Position

Introduction

In this article, I will provide a shotgun-blast summary of the Confessional Text position, as well as some further commentary to help those trying to understand the position better. In this short article, I do not expect that I have articulated every nuance of the position perfectly, but I hope that I have communicated it clearly enough for people to understand it as a whole. My goal is that the reader can at least see why I adhere to the Traditional Hebrew and Greek text and translations thereof.

In 15 Points

1. God has voluntarily condescended to man by way of speaking to man (Deus Dixit) and making covenants with him (Gen. 2:17; 3:15)

2. In the time of the people of God of old, He spoke by way of the prophets (Heb. 1:1)

3. In these last days, He has spoken to His people by His Son, Jesus Christ (Heb. 1:2)

4. The way that God has spoken by Jesus Christ is in Scripture through the inspiration of Biblical writers by the Holy Spirit (2 Peter 1:21; 2 Tim. 3:16). The Bible is the Word of God, and in these last days, is the way that Christians hear the voice of their Shepherd by the power of the Holy Spirit (John 10:27). The Bible does not contain the Word of God, or become the Word of God, it is the Word of God.

5. The purpose of this speaking is to make man “wise unto salvation” and “furnished unto all good works” (2 Tim. 3:15, 17; Rom. 1:16; 10:17)

6. Jesus promised that His Word would never fall away, as it is the means of accomplishing His covenant purpose (Mat. 5:18; 24:35)

7. Since God has promised that His words would not fall away, the words of Scripture have been kept pure in all ages, or in every generation (WCF 1.8; Mat. 5:18; 24:35) until the last day

8. Up until the 15th century with the invention of the printing press in Europe, books were hand copied. This hand copying resulted in thousands of manuscripts being circulated and used in churches for all matters of faith and practice. These manuscripts are generally uniform, except for a handful of manuscripts formerly known as the “Alexandrian Text Family”, which were not really copied or circulated. When Constantinople fell in 1453, just 14 years after the invention of the printing press in Europe, Greek Christians fled to Italy, bringing with them their Bibles and language.

9. The printing press was put to use in the creation of printed Bibles, in many different languages, specifically Greek and Latin

10. If it is true that the Bible has been kept pure, it was kept pure up to the 16th century. Thus, the manuscripts used in the first effort of creating a printed text reflected the same text used by the people of God up to that point. Text-critics such as Theodore Beza would appeal to the “consent of the church” as part of their textual methodology, which demonstrates that the reception of readings by the church was an integral part of the compilation of this text

11. The text produced over the course of a century during the Reformation period was universally accepted by Protestants, even to the point of other texts being rejected. It is historically documented that this is the “text received by all” (Received Text), as is abundantly clear in the commentaries, confessions (see their proof texts), translations, and theological works up until the 19th century.

12. This Greek text, along with the Masoretic Hebrew text, remained the main text for translation, commentary, theological works, etc. until the 19th century, when Hort’s Greek text, based on Codex Vaticanus, was adopted by many. At the time, many believed that Hort’s text was the true original, which caused many people to adopt readings from this text over and above the Received Text. This text form was rejected by Erasmus and the Reformers, and has no surviving contemporary copies descended from it, meaning it was simply not copied or used by the church at large.

13. This Greek text was adopted based on Hort’s theory that Vaticanus was “earliest and best” and the text of modern Bibles all generally reflect this text form, even today. Due to the Papyri and the CBGM, Hort’s theory has been rejected by all in the scholarly community. Not to mention Hoskier’s devastating analysis of Codex B (Vaticanus).

14. Thus, the Confessional Text position adopts the Greek and Hebrew text, and translations thereof, that were “received by all” in the age of printed Bibles, and used universally by the orthodox for 300 years practically uncontested, except by Roman Catholics and other heretical groups (Anabaptists, Socinians, etc.).

15. The most popular of these translations, the Authorized Version (KJV), is still used by at least 55% of people who read their Bible daily as of 2014, and by at least 6,200 churches. Additionally, Bibles made from these Greek and Hebrew texts into other languages remain widely popular across the world. Other English Bibles are based on this text, such as the MEV, NKJV, GNV, and KJ3, but they are relatively unused compared to the AV.

Further Commentary

The adoption of the Greek Received Text and the Hebrew Masoretic Text is based on what God has done providentially in time. Many assert that the history of the New Testament can only be traced by extant manuscript copies, but those copies do not tell the whole story. The readings in the Bible are vindicated not by the smattering of early surviving manuscripts, but by the people who have used those readings in history (John 10:27), readings which are preserved in the texts those people actually used. Since we will never have all of the manuscripts, due to war, fire, etc., it is impossible to verify genuine readings from the data available today, as there is no “Master Copy” to compare them against. That is why the current effort of text-criticism pursues a hypothetical Initial Text, constructed from the first 1,000 years of manuscript transmission.

The product of this is called the Editio Critica Maior (ECM), and it will not be finished until 2030. The methodology used to construct this text (the CBGM) has already introduced uncertainty among the editors making Greek texts as to whether they can even find the Initial Text, or whether they will find only one Initial Text. That is to say, from the time of Hort’s text in the 19th century, the modern effort of textual criticism has yet to produce a single stable text. The printed editions of the modern critical text contain a great wealth of textual data, but none of them is a stable text that will not change in the next ten years. Translations built on these printed editions are therefore merely a representation of what the editors think the best readings are, not necessarily what the best readings are in reality.

Rather than placing hope in the ability of scholars to prove this Initial Text original, Christians in the Confessional Text camp look back to the time when hand-copied manuscripts were still being used in churches and circulated in the world. The first effort of “textual criticism”, if you will, is unique because it is the only one that took place while hand-copied codices were still part of the church’s practice. That means the quality and value of such codices could be validated by the “consent of the church”, because the church would only have adopted a text familiar to the one it had been using up to that point. This kind of perspective is not available to a modern audience. During the time of the first printed editions, the corruption of the Latin Vulgate was exposed, and the printed editions created during that time were in themselves a protest against the Vulgate and the Roman Catholic church, which had in its possession a corrupted translation of the Scriptures. It was during this time, and because of these printed texts, that Protestantism was born.

Any denomination claiming to be Protestant has direct ties back to this text, and the theology built upon it. The case for the Confessional Text is really quite simple, when you think about it. God preserved His Word in every generation in hand-copied manuscripts until the form of Bibles transitioned to printed texts. Then He preserved His Word in printed Greek texts based on the circulating and approved manuscripts. This method of transmission was far more efficient, cheaper, and more easily distributed than hand copying. This text was received, commented on, preached from, and translated for centuries, and is still used by the majority of Bible-reading Christians today. The argument for this text is not one based in tradition; it is one based on simply looking back into history and seeing which text the people of God have used in time, not simply the story told by the choice manuscripts of the modern scholars.

Any theories about other text forms are typically based on a handful of ancient manuscripts that were not copied or used widely, and the idea that this smattering of early manuscripts represents the original text form is simply speculation. What history tells us is that the text vindicated in time is the text the people of God used, copied, printed, and translated. This does not mean that every Christian at all times has used this text, just that it is the overwhelming testimony of the people of God as a whole. The fact is that we know very little about the transmission and form of the text in the ancient church compared to what we know about the text after the ancient period. The critical text, while generally resembling the Received Text, is different from the historical text of the Protestants, which is why those in the Confessional Text camp do not use it. The few Papyri we have even demonstrate that readings of the later Byzantine text family were circulating in the ancient church.

Conclusion

So why is there a discussion regarding which text is better? Up until this point in history, the alternative, the critical text, has been thought to be much more stable and certain than it actually is. Currently, the modern critical text is unfinished, and will remain that way until at least 2030 when the ECM is finished. Those in the Confessional Text position might ask two very important questions regarding this text: Does a text that represents the text form of a handful of the thousands of manuscripts, a text which is incomplete, sound like a text that is vindicated in time? Does a changing, uncertain, unfinished text speak to a text that has been preserved, or one that has yet to be found? I suppose these questions are not answerable until 2030 when it is complete. This alone is a powerful consideration for those investigating the issue earnestly. Most people in the Confessional Text camp do not anathematize those who read Bibles from the critical text, or break fellowship over it, but we do encourage and advocate the use of Traditional Text Bibles, as it is the historical text of the Protestant church.

For More Information on Why I Prefer the Received Text, Click Here

For Interactions with Arguments Against the Received Text, Click Here

A Crash Course in the Textual Discussion

Introduction

When I first started learning about the textual variants in my Bible, I had a great number of misconceptions about textual criticism. I thought myself rather educated on the matter because I had read The King James Only Controversy twice and had spent hours upon hours watching the Dividing Line. Yet, when it came down to actually understanding anything at all about the matter, I realized I didn’t know anything. Even though I knew a lot of text-critical jargon, and could employ that jargon, many of the arguments I had learned were factually incorrect or misinformed. A comment on my YouTube channel earlier today demonstrated to me that many others are in the same boat I was in.

The fact is, I couldn’t tell you why the Papyri were significant, or even how many Papyri were extant and which sections of the Bible they included. I couldn’t even name a proper textual scholar, except maybe Bart Ehrman, but I thought he was just an angry atheist. I had heard that the CBGM was going to get us to a very early text form, but I couldn’t explain how, or whether that text was reliable. I knew that textual criticism was changing, but again, I didn’t know what those changes were or how they affected my Bible. There are a lot of downsides to getting your information from one or two sources, especially if those sources are simply interpreters of textual scholarship and not textual scholars themselves. The only thing I had really adopted from the sources I interacted with was confidence that I was on the right side of things, without really knowing why. I developed a list of questions that I wish somebody had asked me before I adopted the axioms of the Modern Critical Text, and perhaps they will be helpful to my reader.

  1. How did the Papyri finds impact the effort of textual scholarship?
  2. Is the concept of “text-type” a driving factor in the current effort of textual scholarship? 
  3. Which manuscripts are primarily used as a “base text” of the modern critical text as it is represented in the NA27 and 28? 
  4. What is the Editio Critica Maior (ECM)?
  5. Which textual scholars are involved in creating the Editio Critica Maior (ECM)? 
  6. What is the Initial Text, and how is it different than the Original?
  7. What is the difference materially between the Received Text and the Modern Critical Text?
  8. What is the CBGM, and how is it impacting modern Greek texts and Bible translations? 
  9. Which scholars are contributing to the current effort of textual scholarship, and what are their thoughts on the CBGM and ECM? 
  10. What do the scholars who are editing the modern Greek New Testament as it is represented in the Nestle-Aland/UBS platform think of the text they are creating? 
  11. What is the TR?

This “quiz” of sorts is a good litmus test as to whether or not you are up to date on the current trends in textual criticism. 

Answer Key

  1. The Papyri, while initially exciting, did not yield the kind of fruit many had hoped for. In the first place, they disproved Hort’s theory that Codex Vaticanus was the earliest text, because the Papyri included readings that were not extant in the Alexandrian manuscripts, which were called “Earliest and Best” all throughout the 20th century, and still are by some today. This means that the Papyri do not vindicate the Alexandrian text form as “earliest”; in fact, they prove that other “text forms” were circulating at the same time. While the Papyri may be helpful in establishing that the Bible existed prior to the fourth century, every Christian, in theory at least, believes this to be true regardless of the Papyri. Christian apologetics was done successfully well before the discovery of the Papyri. The Christian faith is one which believes that the eternal Logos became flesh in the first century, lived a perfect life, died on a Roman cross, was dead for three days, rose again on the third day, appeared to a group of disciples and a multitude of others, then ascended to the right hand of the Father. This is established without the Papyri, as the Bible is not established on the Papyri. Further, there are fewer than 150 Papyri manuscripts, and many of them are scraps. We could not construct a whole New Testament from the Papyri. So while the Papyri may serve some sort of apologetic purpose for some, their value for actually creating a Greek New Testament is much less significant than that of the later New Testament data. 
  2. Due to the pre-genealogical coherence component of the CBGM, the concept of text-types has largely been abandoned by textual scholars, except perhaps for the Byzantine text-type, which is largely uniform. Through algorithmic analysis driven by modern computing power, modern critical methods have demonstrated that the manuscripts formerly classed in the Alexandrian, Western, and Caesarean text families do not share enough statistical similarity to be properly called text families. Further, current text-critical scholars have adopted a different method, which focuses primarily on evaluating individual readings rather than manuscripts as a whole. So not only are the manuscripts formerly classed into the Alexandrian, Western, and Caesarean text families not families, the concept of text families is not necessarily being used in the current methodology. 
  3. The two manuscripts which serve as a “base text” for the NA/UBS platform are Codex Vaticanus (B) and Codex Sinaiticus (Aleph). Significant variations between the Received Text and the Modern Critical Text are typically the result of prioritizing these two manuscripts over and above the readings found in the majority of manuscripts, or in other manuscripts. This is shifting as the concept of text-types is retired, but the text as it exists in modern Bibles generally reflects the text form of just two manuscripts. As the CBGM is implemented, certain Alexandrian readings may come to be rejected, but as it stands, modern Greek texts and Bibles heavily favor the two manuscripts mentioned above. These two manuscripts do not belong to the same family, which is to say that they likely do not share a common ancestor or ancestors. It is possible that they share a cousin manuscript, but even that is speculative. 
  4. The Editio Critica Maior (ECM) is a documented history of the Greek New Testament up to about 1000 AD which considers Greek manuscripts, translations, and ancient citations of the New Testament. The ECM also provides information on the development of variants according to the analysis of the editors. The first edition was published in 1997, and the project is slated to be finished by 2030. The ECM is not necessarily a Greek New Testament per se, but rather a history of how the text is said to have evolved in the first 1,000 years of the church. This means that it excludes manuscripts copied after 1000 AD, even when their exemplars predate 1000 AD. For example, if a manuscript was copied in 1300 AD from a manuscript created in 500 AD, the readings of the 1300 AD copy will not be considered, despite preserving very old readings. The main text printed in the ECM contains the readings said to be the earliest, though there are many places where the editors of the ECM are split on which reading came first. Due to these split readings, the ECM functionally serves as a dataset, from which the user can individually evaluate and select the readings they believe to be earliest. A current weakness of the ECM is that it does not consider all of the extant data, and it remains to be seen whether the final product in 2030 will incorporate all extant New Testament witnesses. As it stands, it is an incomplete history of the New Testament, despite being the largest critical edition produced to date. 
  5. It is difficult to find all of the men and women working on the ECM, but some of the scholars who have worked, or are working, on it are Holger Strutwolf, DC Parker, and Klaus Wachtel. The Institute for New Testament Textual Research in Münster is responsible for the project overall. The ECM is supported by the Union of German Academies of Sciences and Humanities. 
  6. The question of the Original Text vs. the Initial Text is still hotly debated amongst textual scholars, but Dr. Peter Gurry defines the latter as, “The ECM editor’s own reconstructed text that, taken as a whole, represents the hypothetical witness from which all the extant witnesses derive. This hypothetical witness is designated A in the CBGM, from the German Ausgangstext, which could be translated as “source text” or “starting text.” The relationship of the initial text to the author’s original text needs to be decided for each corpus and by each editor; it cannot be assumed” (Peter Gurry, A New Approach to Textual Criticism, 136). Simply put, the Initial Text is the “as far back as we can go” text. It is up to the editor, or perhaps the Bible reader, whether that Initial Text represents what the writers of the Bible actually wrote. It is important to keep in mind that the Initial Text is likely to favor texts from a particular region. That is to say, the Initial Text produced by scholars is only one of many potential Initial Texts. Despite the optimism many have regarding the Initial Text, the fact stands that there are many readings in the ECM on which the editors are split as to which is initial. That means there is no consensus on what the Initial Text is, or what it will be. How this will be determined has yet to be seen. I comment on the discussion here and here.
  7. The difference between the Received Text (TR) and the Modern Critical Text (MCT) is significant. The MCT is at least 26 verses shorter, as it excludes the ending of Mark (Mk. 16:9-20), the Pericope Adulterae (Jn. 7:53-8:11), the Comma Johanneum (1 Jn. 5:7), John 5:4, Acts 8:37, and Romans 16:24. There are also a number of places where the readings differ, such as John 1:18 and 1 Tim. 3:16, and places in the MCT, like 2 Peter 3:10, where the reading has the opposite meaning of the TR reading. Many advocates of the MCT are quick to point out that the TR does not have Greek manuscript support for Revelation 16:5, but the MCT also has readings without Greek manuscript support, like 2 Peter 3:10, mentioned above. This does not mean that these verses cannot be supported, just that it is rather hypocritical for many MCT advocates to demand extant manuscript support when manuscripts available at one time may have carried a reading. In many of the doctrinally significant places where the MCT and TR differ, the TR contains readings found in the majority of manuscripts, whereas the MCT represents a small minority, and in some places just two manuscripts (Mk. 16:9-20). In other places, the TR contains minority readings, though I argue that these minority readings can be substantiated by the consensus of commentaries, theological works, and Bible translations throughout the history of the church. In any case, the number of variants within the TR tradition is minute compared to the number of variants that must be reconciled within the MCT tradition. 
  8. The Coherence Based Genealogical Method (CBGM) is “a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament Text” (ibid., 3). The CBGM uses statistical comparison to determine how closely related two witnesses are to each other, and text-critics then evaluate that comparison to determine which reading potentially came first in the transmission history of the text. This is the method primarily being used to construct the ECM. For a basic overview of the method, please refer to this video, which is a thoughtful and helpful examination of the CBGM. I comment on the CBGM more here.
  9. The scholars using the CBGM and creating the ECM have varied opinions on what is being constructed. Men like Eldon Epp and DC Parker do not believe that the ECM has anything to say about the original, or authorial, text of the New Testament. Others, such as Dirk Jongkind and Peter Gurry, are more optimistic. As it stands, it has yet to be demonstrated how the ECM can say anything definitive about the original or authorial text, as the methods of the CBGM do not offer this sort of conclusion. Further, it has yet to be shown how a text with split readings can be said, in any meaningful way, to represent one unified Initial Text, let alone an original. That is to say, the ECM contains the potential for multiple Initial Texts. The problem of split readings in the ECM has yet to be addressed adequately, as far as I know. 
  1. The scholars creating printed Greek texts such as the NA/UBS platform do not believe they are creating original texts. They are simply creating printed texts that serve as a tool in translation and exegesis. The editors are typically uninterested in speaking to whether this text represents the authorial text; that is up to the user of the printed edition. This is evident in the fact that the 28th edition of the Nestle-Aland text and the 5th edition of the United Bible Society text are not final texts. Due to the ongoing creation of the ECM, these printed Greek texts are going to change. Even optimistic scholars, such as Dr. Peter Gurry, comment that these changes “will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (ibid. 6).
  1. The Received Text (TR) is the form of the Greek New Testament as it existed during the first era of printed Greek Bibles in the 16th century, after the introduction of the printing press in Europe. Up to that point, all books were hand copied. There is not one “TR,” per se, but rather a corpus of Greek texts which are generally uniform. The places of variation between these editions are minor when the significance of the differences is considered. The opinion of textual scholar Dr. Edward F. Hills was that these variations amount to fewer than 10. High orthodox theologians such as Turretin considered such variations to be easily resolved upon brief examination. This was the Greek text that the Westminster Divines considered “pure in all ages” and is the text platform that the Reformed and Post-Reformation Divines used in their commentaries and theological works from the middle of the 16th century up to the higher critical period, when Hort’s text (based on Vaticanus, generally the same text that is used for the ESV) was introduced as an alternative. There are varying views on what “the” TR is, but across all of the printed editions of the Received Text corpus, the differences are so minute that it can be considered the same Bible nonetheless. Modern debate tactics have introduced much confusion into the definition of “the” TR, but the fact stands that this sort of question was not a problem to the men who used it to develop Protestant theology up to the higher critical period. Adherence to the TR is based on the vindication of readings by the use of such readings by the people of God in time, over and above extant manuscript data, which cannot represent all of the manuscripts that have ever existed, since a great number have been lost or destroyed.
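The “statistical comparison” described in the CBGM note above begins with something quite simple: the percentage of agreement between two witnesses across the variation units where both are extant. Below is a minimal sketch of that first step, using made-up witness names and readings rather than real manuscript data; it is an illustration of the general idea only, not the actual INTF implementation:

```python
# Hypothetical readings for three witnesses across five variation units.
# None marks a lacuna (the witness is not extant at that unit).
witnesses = {
    "W1": ["a", "b", "a", None, "c"],
    "W2": ["a", "b", "b", "a", "c"],
    "W3": ["b", "b", "a", "a", None],
}

def agreement(w1, w2):
    """Percent agreement over the variation units where both witnesses are extant."""
    pairs = [(x, y) for x, y in zip(w1, w2) if x is not None and y is not None]
    if not pairs:
        return 0.0
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

# W1 and W2 share four comparable units and agree in three of them.
print(agreement(witnesses["W1"], witnesses["W2"]))  # 75.0
```

As the description above notes, the genealogical judgments built on top of such comparisons are made by the editors, not the computer; the percentages only indicate how closely related two witnesses are, not which reading came first.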

Conclusion

Prior to entering into the Textual Discussion, I think it wise that Christians are up to date on not only the updated jargon, but also the information that underlies the jargon. If one wants to argue that the Papyri are definitive proof of one text being superior to another, he should be ready to substantiate that claim by demonstrating how the readings of the Papyri have impacted modern text-critical efforts. In the same way, if somebody wishes to stake a claim on the CBGM, it should also follow that one should be ready to demonstrate how this method has proved one conclusion or another. Simply saying that the Papyri and the CBGM have “proven” a particular text right or wrong is merely an assertion that needs to be substantiated. It may be the case that the claim is correct, but it is important that we hold ourselves to the same standard an 8th grade math teacher might hold us to, and “show your work.” The fact stands that a Bible cannot be constructed from all of the Papyri, and the CBGM has introduced a “slight increase in the ECM editors’ uncertainty about the text, an uncertainty which has been de facto adopted by the editors of the NA/UBS” (ibid. 6).

It is easy to get caught up in conversations on textual variants and the scholarly blunders of Erasmus, but these discussions do not come close to addressing the important components of the Textual Discussion. An important reality to consider when discussing variants from an MCT perspective is that the modern critical text is not finished, and the finished product is not claiming to be a stable or definitive text. The opinions on a variant may change in the next ten years, and new variants may be considered that have been ignored throughout the history of the church. One might make a case for why Luke 23:34 is not original, but the fact is that it is impossible to prove such a claim by modern critical methods without the original to substantiate the claim against. Even in the case of 1 John 5:7, which is admittedly a difficult verse to defend evidentially, it cannot be proven that other manuscripts contemporary to Vaticanus and Sinaiticus excluded the passage, because those manuscripts are no longer extant. Since it is well known that other Bibles with different readings existed at the time of our so-called earliest manuscripts (because of the Papyri!), we can at least say with confidence that these two manuscripts do not represent what all of the Bibles looked like at that time. That is to say, those who argue vehemently for Bibles which closely follow these two manuscripts are simply putting their faith in the unprovable claim that the other contemporary manuscripts did not have the readings that explode into the manuscript tradition shortly after, or even the minority readings that made it into the TR. Some people, like James Snapp, have developed entire textual positions which recognize this problem, which I consider a sort of mediating position between the Received Text and the Modern Critical Text. Unlike many of the MCT advocates, James Snapp is more than willing to show his work.

In any case, it is high time that the bubble of Codex B is pricked. Times have changed, and even the most recent iteration of modern text-criticism has supposedly done away with Hort’s archaic theories. It may be time that Christians stop appealing to the Papyri and the CBGM without actually understanding what those two things are, and instead pick up some of the literature and become acquainted with what has changed since Metzger penned his Text of the New Testament. In my opinion, Snapp has answered many of the questions that modern textual scholars are unwilling to answer with his Equitable Eclecticism. While I believe his position still faces the same epistemological problems as the ECM and the CBGM, it certainly is an upgrade from the MCT. I hope that this article has helped people understand the effort of modern textual criticism better, and perhaps even sparked interest in investigating the information themselves.

Sources for Further Reading on Modern Textual Criticism

D.C. Parker, editor of the ECM for the Gospel of John

Peter Gurry’s Introduction to the CBGM

Peter Gurry and Elijah Hixson’s Latest Book

The Latest, Authoritative Work on the Pericope Adulterae (Jn. 7:53-8:11)

Sources for Further Study on the Received Text Position

Audio from the Text and Canon Conference 

Audio from Dr. Jeff Riddle’s Word Magazine

Mark 16:9-20 is Scripture

Introduction

The rejection of the ending of Mark, formally known as the “longer ending of Mark,” is a Canonical crisis. In this article, I want to make a case for why people who read and use modern Bible translations should be outraged at the brackets and footnotes in their Bible at Mark 16:9-20. This is the textual variant that ultimately led me to put down my ESV and pick up an NKJV, and then a KJV. When I understood the reason that my Bible instructed me to doubt this passage, I realized the methods which put the brackets and footnotes in my Bible were not to be trusted. The primary reason I did not believe this passage to be Scripture was my blind adherence to things I had heard, not the reality of the data. The quickness with which I cast God’s Word into the trash caused me to be deeply remorseful, and I’m not alone in that. Not only had I been catechized to reject the ending of the Gospel of Mark, but I was instructed to berate others who were “foolish” enough to believe it is original. Meanwhile, enemies of the faith delight in the fact that Christians boldly reject this passage, because it proves their point that the Bible is not inspired. I will now walk through the data that caused me to regret casting this passage aside.

The External Evidence

The first step in my journey was to examine the actual manuscript evidence for and against the passage. There are over 1,600 extant manuscripts of Mark, and only three of them end at verse 8. The decision to remove it, or relegate it to brackets, was made on the basis of only two of these. When I discovered this, I was dismayed. I had been using the argument that “we have thousands and thousands of manuscripts,” and I realized, based on my own position on the text, that I could not responsibly use this apologetic argument. My argument for the text, at least in the Gospel of Mark, was not based on thousands of manuscripts, just two. Yet even in one of these manuscripts (03), there is a space left for the ending of Mark, as though the scribe knew about the ending and excluded it. I later discovered that text-critics such as H.C. Hoskier believed that very manuscript to be created by a Unitarian, and that Erasmus thought the manuscript to be a choppy mash of Latin versional readings. I realized that only some textual scholars thought these manuscripts to be “best,” and my research seemed to be demonstrating that this claim of high quality was rather vacuous indeed. I was operating on the theory that these two manuscripts represented the only text-form in the early church, which I discovered has been mostly abandoned. This is due to the Byzantine readings found in the Papyri, and the statistical analysis done by the CBGM. Further, and most shocking to me at the time, is that the two manuscripts in question do not look like the rest of the thousands of extant manuscripts of Mark. Below is the percentage of agreement that these two manuscripts share with the rest of the manuscripts of Mark – most of them are not even close enough to be cousins, let alone direct ancestors.

Codex Vaticanus (03) and Codex Sinaiticus (01), the two early manuscripts in question, do not agree with any other extant manuscript in the places examined in Mark in a significant way, other than minuscule 2427, which has been known to be a 19th century forgery since 2006. What these numbers mean is that these manuscripts look very different from the rest of the manuscripts of Mark. I realized I could not responsibly claim that these two manuscripts were “earliest and best.” There was no way I could defend that in any sort of apologetic scenario, at least. I abandoned this belief on the grounds of two realities: 1) the data shows that different text forms were contemporaries of Vaticanus and Sinaiticus, so they weren’t necessarily “earliest,” just surviving, and 2) these manuscripts did not look like the rest of the thousands of manuscripts I was constantly appealing to in apologetic scenarios. Further, I found it quite easy to demonstrate that there were other manuscripts circulating at the time which had the longer ending of Mark in them! Even Bart Ehrman admits as much (Bart Ehrman, Lost Christianities, 78-79). This is a simple fact, considering the number of quotations from the ending of Mark found in patristic writings, including Papias (110 AD), Justin Martyr (160 AD), Tatian (172 AD), and Irenaeus (184 AD). The most compelling of these witnesses is Irenaeus, who directly quotes Mark 16:19 in the third book of Against Heresies: “Also, towards the conclusion of his Gospel, Mark says: ‘So then, after the Lord Jesus had spoken to them, He was received up into heaven, and sits on the right hand of God.’” So the passage most certainly existed prior to its exclusion in the two manuscripts in question. Hierocles (or Porphyry), a pagan apologist, even provokes his Christian reader to drink poison, quoting the ending of Mark. It seems that atheists never tire of that retort.

To reject this passage on evidentiary grounds is to completely ignore not only the manuscript data, but also the patristic citations which predate our earliest surviving manuscripts. If manuscript data does not matter, and patristic sources do not matter, then what does matter? Well, tradition matters, apparently. See, up until recently, the theory about the ending of Mark was that it was simply lost to time. The book did not initially end at verse 8, but the true ending has been lost. Well, that doesn’t quite work for most Christians, so other theories had to be contrived to hold onto the supremacy of these two manuscripts. Rather than adopting the ending that is found in over 1,600 manuscripts, the default position of the 20th century has lingered in modern Bibles in the form of brackets and footnotes. The reason for this? Some of the earliest manuscripts don’t have it. “Some,” as though the number of manuscripts cannot be counted or determined. It seems that the editors of Crossway might want to consider being more precise, but I imagine it would be harder to justify those brackets if the reader knew the actual number. Even the RV, which is the ESV’s predecessor, contained this information. I still, to this day, feel betrayed by the way my ESV presented that information in my Bible. I felt further betrayed by all of the people who knew this information and still told me that the ending of Mark was not Scripture.

The Internal and Theological Evidence

If you are a Christian, you believe that the Bible was inspired by God. That means that the New Testament should be coherent, both grammatically and theologically. That is the reality that kept me assured during my examination of the ending of Mark. I figured that if God had truly preserved His Word, there would be a simple answer to whether or not this passage was indeed Scripture. I found that there was, and overwhelmingly so. I didn’t even need to go sifting through all of the evidence to know what the true reading of the ending of Mark was; the answer was laid out in the doctrine of Scripture in my London Baptist Confession of Faith.

“We may be moved and induced by the testimony of the church of God to a high and reverent esteem of the Holy Scriptures; and the heavenliness of the matter, the efficacy of the doctrine, and the majesty of the style, the consent of all the parts, the scope of the whole (which is to give all glory to God), the full discovery it makes of the only way of man’s salvation, and many other incomparable excellencies, and entire perfections thereof, is from the inward work of the Holy Spirit bearing witness by and with the Word in our hearts” (LBCF 1.5).

If Mark ends at verse 8, there is a significant problem, at least from a confessional standpoint. The problem is that verse 8 requires a verse 9 due to its grammar. There is no place in the whole of ancient Greek literature that ends a narrative with the word “for” (γαρ). This means that Mark did not stop writing at verse 8, if the assumption is that the Scriptures were at least perfect in the autograph. So if Mark did not stop writing at verse 8, and the Bible is indeed inspired and would not have included such a basic grammatical error, I figured perhaps it is the case that the reading that occurs in over 1,600 manuscripts should be considered over and above the two manuscripts which contain this idiosyncratic grammar mistake. In order to adopt the abrupt ending of Mark, I could not say that the Bible had any sort of “majesty of style,” because it would, in fact, contain this atrocious grammar error at the “ending” of Mark.

Further, if Mark ends at verse 8, there is a basic theological problem that puts the Bible at odds with itself. The confession says that the Bible should be esteemed on the account of “the consent of all the parts.” If the Gospel of Mark ends at verse 8, it does not consent with all the parts of Scripture. It excludes an appearance account, which is included in Matthew, Luke, John, and even in Paul’s testimony of the Gospel in 1 Corinthians 15. That means that Mark is apparently the only Gospel writer who didn’t have his story straight. 

Even Paul, who wasn’t there to experience the life of Jesus, has his facts in line. It is vital that the Gospel that Christians use contains the life, death, burial, resurrection, and appearance of Jesus. I figured that Mark would not have been ignorant of this. It seemed illogical, in fact, to affirm the opposite, that Mark would have excluded such a fundamental detail. The burial and appearance are crucial to affirming two fundamental doctrines of the Christian faith: 1) that Jesus was very man and actually died, and 2) that after dying, Christ was raised up and is thus very God. Without the appearance, there is no actual vindication of the latter. Turretin even affirmed this truth in saying that the ending of Mark was necessary for establishing the truth of the Gospel account, which I imagine he included as a means to respond to people like me, who were calling the passage into question. At first I said that it didn’t matter because this account is available in other places, but I was making the assumption that early readers of Mark had access to those other witnesses. See, I sat through a semester at Arizona State University where I heard all of the theories of Bauer and Ehrman, so I should have known better than to make that argument. If one takes the higher critical perspective of Markan priority, that Mark was the first Gospel, then the earliest Christians did not have a Gospel account which vindicated the truth of the resurrection. Which is to say, the only apologetic defense of the Gospel I had for the actual critics of the faith was essentially to say, “Well, that’s just wrong!” Kant and Kierkegaard would have been proud of me.

Conclusion

At the end of my research on the ending of Mark, I found that there was no good reason to continue propagating the idea that the Gospel of Mark ends with poor grammar, two scared women, and no vindication of the resurrection. If one of the uses of the Bible is to defeat the enemies of the faith in debate, then this clearly was not the way to go about it. In this journey, I also learned something vitally important – that the purpose of the Bible was not to defend the faith, but to have faith and increase in faith. It was the means that God had given me to commune with Him. The majority of the Christian church, who read their Bibles to hear the voice of their Shepherd, should not be subject to the threadbare theories of higher and lower critics in the footnotes of their Bible. There are certain places that warrant a serious discussion of textual variants by Christians; this is not one of them.

Not only is the evidence overwhelmingly in support of this passage being original, it is impossible to responsibly say that rejecting this passage is in line with a Reformed, confessional view. Not only does it violate the basic principles of the doctrine of Scripture in 1.5, it ignores the fact that doctrines are actually built upon the ending of Mark as a proof text (WCF 28.4; LBCF 7.2). In both the LBCF and the Westminster Larger Catechism, this passage is used to establish the ascension of Christ, which is doctrinally significant. Even more important to me was how I had to view the Bible as a whole if I accepted the theory that the ending of Mark was not original. I had to believe that a passage of Scripture has fallen away, lost to time, and cannot be recovered. Since this must be true for the ending of Mark, I might as well apply that theory to every other area of textual variation in the New and Old Testament texts. The theories of higher critical thought must be adopted to explain how the text evolved, and to justify the ongoing effort to reconstruct this lost Bible. I later discovered that is exactly what is being done by nearly every textual scholar, so it seems I was not alone in my conclusions.

In my examination of just one textual variant, I came to a significant conclusion. Using Dr. Jeff Riddle’s words, we are living in the age of a Canonical crisis. The fact that the Gospel of John as it exists in the NA28 is different from the Gospel of John as it exists in the unpublished Editio Critica Maior demonstrates this reality. Christians are reading the Gospel of John as it existed in 2012, while the “true” Gospel of John is currently being constructed in Münster, Germany. Who knows if the John that is produced out of the black box sometime in the next 10 years will be the same as the Gospel of John as it is being read now? I wonder what Schrödinger would think of this paradox?

It is important that Christians realize that the artificial divide between higher and lower criticism is just that – artificial. The footnote which has informed Christians to call into doubt the text of Holy Scripture at the end of Mark is not purely informed by manuscript data. Science is done by the intellect, and the intellect of man is terribly limited and subjective. Theories must be applied, and there is not a single textual scholar who approaches the text without assumptions. The deconstruction of the New Testament text is higher criticism restrained by the religious feelings of Protestants who actually buy Bibles. Honest scholars admit as much. “With the rise of an Enlightenment turn to ‘science,’ and informed by a Protestant preference for ‘the original,’ however, critics like Johann Jakob Griesbach, Karl Lachmann, Constantin von Tischendorf, Samuel Tregelles, and finally, B.F. Westcott and F.J.A. Hort reevaluated the evidence…” (Knust & Wasserman, To Cast the First Stone, 16). The reevaluation of the manuscript data in the 19th century is what unseated this passage in Mark from the canon, and the church complied. The people of God do not have to comply with this opinion, and that is the reality. Read the ending of Mark, and know that it is authentic.

Post Script: A Personal Note from the Author

I do not have the scholarly credentials, but I do have one unique qualification that I believe is important. I am a part of the first generation of Christians who came to faith after the battle for the Bible. My generation is feeling the impact of a changing Bible harder than any other generation to date. I was taught how to read my Bible after the longer ending of Mark had been overwhelmingly dismissed. When I approached bracketed texts, I ignored them, because that is what I was told to do. I did not consider the theological impact of removed texts because modern exegesis and hermeneutics are designed around a shifting text. That is why, when I began to study historical protestant theology, these modern hermeneutical methods were so crazy to me. If doctrine cannot be established upon contested verses, what place is left to build doctrine upon? The answer is very few places, and the diamonds in the apparatus of the NA28 are proof of that. The Reformed believed that every word, all Scripture, should be used. That is why it was such a shock to me when I discovered the reasons that these texts were put into brackets. I was raised in a generation of skeptics, and I did not become converted under the assumption that I would need to take a Kantian leap of faith to believe in my Bible. Christians in my generation should not have to believe that they must wait until 2030 to read God’s Word. That is unprecedented in the history of the church. If the Bible isn’t going to be ready for another ten years, what is the point of even reading it until then? The answer is simple: there isn’t a good reason to read it until then, or after then for that matter.   

If the longer ending of Mark is not Scripture, what then is Scripture? What piece of the text cannot be put under the same scrutiny if all it takes is one shoddy manuscript that is stored in the Vatican to change the whole Bible? How many manuscripts would it take to unseat John 3:16 or Romans 8:28? The reality is, the modern Bible is being held together by the people that read it, not the evaluation of manuscripts. The Bible becomes smaller with each implementation of text-critical methods. I imagine that the rapid progression of the modern text-critical effort is directly related to the fact that people simply don’t read their Bibles anymore. It’s easy to ignore footnotes and brackets and a constantly changing text if people don’t know that anything has changed in the first place.  

It is clear that something needs to change, or the Christian church will be in deep trouble by 2030 when scholars begin teaching the people of God how to construct their own Bible using online software. Yes, that is the reality, not some speculation. The split readings in the ECM will eventually make their way into the text of translations, and by that time, the Christian will not have a Bible or a defense for the Bible. If the CBGM has proven one thing, it is that none of the scholars using it can determine what the original said. My hope is that things will change before that happens, but time will tell. 

The Reformation Day Post: VERY Spooky

In the Beginning

God’s Word has been contested since the very beginning in the Garden when Satan said, “Yea, hath God said, Ye shall not eat of every tree of the garden?” Eve then changes what God said, and Satan reinterprets it. “God hath said, Ye shall not eat of it, neither shall ye touch it, lest ye die. And the serpent said unto the woman, Ye shall not surely die” (Genesis 3:1-4). Yes, from the very beginning of time, the battle for the authority of God’s Word has been fought. God has delivered His Word in every generation, and even delivers it anew to His people when it was thought to have been lost (2 Kings 22-23). The struggle for the authority of the Scriptures continued on through the Old Covenant, as the unfaithful kings of Israel continued to build and rebuild the high places. During Jesus’ time, the Pharisees had so distorted the meaning of God’s Word that Jesus issued a lengthy rebuke to them in the form of His exegesis of the law in Matthew 5.

And Again

Even past the time of Christ’s earthly ministry, with Marcion and others, the authority of the Scriptures continued to be questioned, and the actual words and passages themselves were contested and removed in some unfaithful manuscripts. Augustine of Hippo comments on the phenomenon, “Some men of slight faith, or, rather, some hostile to the true faith, fearing, as I believe, that liberty to sin with impunity is granted to their wives, remove from their Scriptural texts the account of our Lord’s pardon of the adulteress” (De adulterinis coniugiis 2.7.6). In the New Testament age, the method of attacking the authority of God’s Word has not changed.

Little is known about the transmission of the New Testament until the middle ages, other than the fact that a great number of Bibles were destroyed by persecution, war, fires, and other natural causes. The history of the New Testament, as it were, is largely clouded to a modern audience until the explosion of manuscripts in the 9th century. Despite this fact, there are quotations from theologians throughout the ages which testify to the existence of ancient and accurate copies that survived through the age of tampering. Deuteronomy 4:2 became an integral text to Augustine and other theologians during this time. “Augustine and his contemporaries were well aware that editing of this sort could potentially take place, and they invented various strategies to deal with the problem: curses were added to the end of certain treatises, sternly warning those who would dare to alter texts that they would be punished for their misdeeds” (Knust & Wasserman, To Cast the First Stone, 100).

The manuscripts from the period just before and during Augustine’s time demonstrate that this could be considered a tampering period in the transmission of the New Testament. Despite this tampering period, and the fact that Christianity almost lost to Arianism at the same time, the orthodox faith, along with the original Scriptures, continued on in time. This is the most reasonable explanation for the explosion of uniform manuscripts suddenly appearing in history during the middle ages. It was not long after this time that the next major attack on the authority of Scripture occurred. As the middle scholastic period came to an end, theologians began to discover corruptions in the Latin Vulgate.

And Again…

The text of the Western church had in some places conformed to the teachings of Rome, which had been heading in a dark direction for quite some time. The Western church had, for a number of reasons, developed into more of a political player than a religious one. Popes began to sell their papacy to the highest bidder, and at one point, three popes occupied the office. Indulgences were introduced to encourage knights to fight for the Holy Roman Empire, and this led to the grossly abusive practice of the church which drained the pockets of the laity. Some churches had not given communion to the people in years, and in many cases, the only people taking communion were the priests themselves, with the laity observing. Despite this corruption, the seed of the Reformation lived in the marrow of the church with men like Wycliffe and Hus. In the same way that Athanasius was raised up during the Arian controversy in the early church, faithful men of God were called out of the wilderness and began crying out in protest against the abuses that had developed in the Western church. God began orchestrating the Reformation well before that fateful October day in Wittenberg in 1517.

In fact, there were several providential events, often forgotten, leading up to the Reformation. In the mid 15th century, two things occurred that contributed to the Protestant movement. The first was the invention of the printing press by Gutenberg around 1436, and the second was the fall of Constantinople shortly after that. Up to that point in the West, the Bible that was used was Latin, and the means of reproducing that Bible was hand-copying. When Constantinople fell, the Greek speaking people of God came flooding into the West, bringing with them their language and their Bibles. Bibles continued to be hand-copied for some time after this event, but it wasn’t long until the printing press was purposed for printing the Bible in all sorts of languages. During this pre-Reformation period, men like Wycliffe had already started producing Bibles in English, and in response, the Roman church said that the Bible was only authoritative insofar as it was approved by the church, and the only Bible approved by the church was the Latin Vulgate as it had come to exist during that time. The Roman church was not mighty enough to stop the events that had been set in motion at the fall of Constantinople and the invention of the printing press, however. In 1514, the Complutensian Polyglot New Testament was printed, and two years later, in 1516, Erasmus’ first edition of the Novum Testamentum was hot off the press. There was nothing that Rome could do to stop what would happen next.

On October 31, 1517, a German Roman Catholic monk named Martin Luther posted 95 theses which detailed the places the Western church needed to change. This moment marks the date that most people consider the Protestant Reformation to have officially started. During this time, the battle for the Bible centered around one question: in what way are the Scriptures authoritative? On one hand, the Roman church said that the Scriptures were authoritative by virtue of the church. On the other hand, the Protestants said that the Bible was authoritative in itself; it was self-authenticating (αυτοπιστος). The doctrine of the self-authenticating nature of the Scriptures was in fact the fundamental principle that drove the doctrine of Sola Scriptura and thus the entire Reformation. The only refutation of the doctrine of Rome was to return to the Scriptural reality that God Himself gave authority to the Bible. This doctrine of Scripture ultimately became a staple of Protestant doctrine and was codified in all of the major confessions of the 17th century.

And Again…

If history has taught us anything, it is that the battle for the authority of Scripture did not end with the high orthodox theologians following the Reformation. The next major battle the church would face came from Germany, the birthplace of the Reformation. Starting with a German theologian named Friedrich Schleiermacher, the way that theology was done changed forever. The Bible was no longer the Word of God; it was the documentation of the experience of communities of faith. In the German schools, the idea that the Bible was infallible came under fire, and the way the Bible was described and understood changed rapidly. Due to the rise of the sciences and the development of the philosophy of religion, much of the historical information found in the Scriptures was determined to be factually incorrect. As a result, German theologians made sense of this by splitting the interpretation of history into at least two categories.

The first was history as it actually happened, and the second was history as it was experienced by various communities in time. The miracles in the Bible were not true history; they were the interpretation of history by human communities who were trying to make sense of their religious experience. The birth of historical criticism, or higher criticism, would be the next giant the church had to slay. Swiss theologian Karl Barth, who came onto the scene like a stampeding elephant trumpeting through a Sunday school class, made an attempt at responding to higher criticism with what is now known as Neo-Orthodoxy. According to Barth, the Bible did not have to be factually or materially correct to be the Word of God. The Bible was the authoritative witness to the incarnation of Jesus Christ, the Word of God. The Word of God was Jesus Christ, and the Bible became the Word of God when the Holy Spirit worked in the believer. The Bible was not the Word of God; it was a witness to the Word of God, Jesus Christ, and it became the Word of God on occasion.

The theology of Barth sent the church reeling, scrambling to give a response. Cornelius Van Til, for example, spent nearly 30 years offering a response to idealism and Neo-Orthodoxy by developing his transcendental method. Prior to the rise of Neo-Orthodoxy, B.B. Warfield and A.A. Hodge had attempted to address higher criticism by reinterpreting the Westminster Confession. The Bible did not have to be materially preserved to be inerrant, they said; it just had to preserve the sense of the thing. The Bible was really only inspired and perfect in the autographs, and that, they claimed, is what the high orthodox meant. Unfortunately, that is not what the high orthodox meant, and the church concluded that the high-orthodox doctrine of Scripture could not stand its ground against higher criticism like it had against Rome in the 16th century. What Warfield’s doctrine meant was that the Bible could be proved to be original by way of evidence, that by an effort of lower criticism, the original could most certainly be reconstructed. This articulation of Scripture was entirely dependent on the ability of textual scholars to demonstrate that an original could be produced from the surviving manuscripts. In other words, the Scriptures are the Word of God insofar as they can be demonstrated to be the Word of God. At the time of Warfield, theologians were nearly unanimous in believing that this could be done with lower criticism. In fact, Warfield believed that the efforts of text-critics in his day were the providential workings of God to restore the original text of the Scriptures to the church. 

Some time later, the battle for the Bible began and led to the production of the Chicago Statement on Biblical Inerrancy. It was a direct response to the Neo-Orthodox doctrine of Scripture which had turned the church upside down. This latest battle for the authority of Scripture did two things: 1) it codified the theology of Warfield, and 2) it determined that higher and lower criticism were two separate and unrelated disciplines. Yet the theologies of Schleiermacher and Barth were planted, like twin mustard seeds, and today stand as mighty trees in the center of orthodoxy. 

The next battle for the Bible is arguably happening now, and will most certainly rage on until Barth and Schleiermacher are answered totally and finally. The Chicago Statement on Biblical Inerrancy has not aged well, and the ghost of Schleiermacher haunts the canons of modern textual scholarship. Since Warfield’s doctrine was so reliant on the success of lower criticism and its separation from higher criticism, it is completely contingent on these two things being reality. Yet something has happened since Warfield’s time which has given cause for a new battle. The development of lower criticism has resulted in its fusion with higher criticism, and the reality upon which Warfield’s argument rests is no longer reality. Warfield’s doctrine, remember, was contingent on the success of the lower critics in proving the original from the extant manuscripts. Since the stated goal of textual criticism is now the Initial Text, Warfield’s formulation has lost its power. Further, the line between higher and lower criticism has become blurred, and the actual textual decisions being made by the lower critics are informed by a combination of both textual data and higher-critical principles. 

This is evident in that the stated goal of the Editio Critica Maior is not to produce an original Bible, but rather to reconstruct the history of the transmission of the New Testament text. In other words, the goal of this critical text is to produce the history of how Christians have experienced their religion in time by examining the documents they left behind. The readings which are determined earliest only speak to the written expression of Christianity in the time and place they represent. The variants which rank later simply represent how faith communities evolved and developed throughout time. Since the goal is not a definitive text, the goal is inherently in line with documenting how Christians recorded their experience in time. The ECM is not the Bible; it may or may not contain the Bible. That means that while printed editions created from the ECM may have the objective of producing an early witness to the New Testament text, the ECM in itself says nothing regarding the authorial text. Some may say that this Initial Text represents the authorial text, but this is simply how Kant would have responded to Schleiermacher. The very concept of the ECM is the direct implementation of higher criticism in text-critical practice.

There are two ways that Christians can respond to the reality of the ECM. The first is found in Barth, or perhaps Bultmann: it is fine that the Bible contains errors and factual problems; the Word of God is contained in the Bible, or perhaps the Bible is a witness to the Word of God. In fact, it would be putting limitations on God to say that He must speak in a narrowly defined set of Scriptures. God is far beyond anything we can comprehend, and therefore the words in Scripture become the Word of God when God speaks through them. Since the Bible cannot be proven to be original by lower criticism, and higher criticism results in demythologizing the Bible, the only answer must be Barth, or some variation. The second option, which was not tried during the Warfieldian era, is the high-orthodox view of Scripture. The Bible does not need to be reconstructed, or demonstrated to be original by way of lower criticism, because it was never lost and does not need to be proved. God Himself authenticates the Scriptures and by His special care and providence has kept them pure in all ages. The Holy Scriptures were faithfully handed down in time by the believing people of God until a providential innovation of technology allowed for them to be printed. This text was edited according to the common faith and was universally received by the Protestants by the end of the 16th century. This is the text that won against the Papists and reigned supreme until the theories of higher critics unseated it from the favor of the academy. The reception of this text vindicates God’s providence in the matter, and it is the most widely read text, even today. It has been cast down by the schoolmen, but among the people of God it has held its place. 

There is a reason that the Reformed stood on the doctrine of Scripture which said that the Bible was self-authenticating. It was the only response to the Papists that would have resulted in the success of Protestantism. The doctrine of Warfield was bound to fail as it was intimately tied to the success of men in reproducing an original text. When the concept of the “original” became obsolete, so did Warfield’s doctrine. At the same time, this allowed higher critical principles an official seat back at the lower-critical table. It will be interesting to see whether Christians uphold the high-orthodox view of the Scriptures, or retreat back to Barth for empty comfort. 

Revisiting the Fatal Flaw Argument Against the Traditional Text

Introduction

One of the primary purposes of this blog is to give people confidence that the Bible they read is God’s inspired Word. Attacks on the Bible of the Protestant Reformation often send people into a spiral of doubt and can damage one’s faith in approaching, reading, praying over, and meditating upon the Holy Scriptures. An argument frequently leveled at the Bible of the Protestant Reformation is what may be called The Fatal Flaw Argument. I initially addressed this argument on the Agros Blog a while back, but since that time I have seen it pop up all over my Facebook feed, so I thought it would be helpful to write a more pointed response than the one I initially crafted. The argument is constructed like this:

  1. In order to be used, read, preached from, etc., the Bible must be able to be reconstructed from extant manuscripts in the event that all printed editions of the Scriptures are wiped off the face of the planet. 
  2. If a Bible cannot be reproduced exactly by reconstructive methodologies, then it should not be used, read, preached from, etc. 
  3. The Traditional Text, as it exists in the Textus Receptus, cannot be reproduced exactly if a reconstruction effort using a “consistent” methodology were employed in the event of a printed-edition extinction event; therefore it should not be used, read, preached from, etc. 

This argument may seem appealing, but it actually undermines the validity of essentially every Bible on the market today, including the ESV, NASB, and NIV. The fatal flaw in this so-called Fatal Flaw Argument is that there is not a single Bible available today that could be reconstructed exactly if this hypothetical extinction event occurred. The primary assumption of this argument is that there is a set of canons that could be consistently applied to manuscripts which would, in theory, produce the current form of the Greek New Testament. The obvious issue with this is that the Modern Critical Text, as it exists in the Editio Critica Maior, has yet to even produce a text in the first place. It will not be finished for another ten years or so, and even when finished, it is more of a dataset of texts than a text itself. The onus is on the person making this argument to first demonstrate that they have a text in the first place.

Prior to beginning my analysis of this argument, it is interesting to point out that it assumes the Received Text and the Modern Critical Text are inherently different, which some do not readily admit. This is true in two ways. The first is that it grants in its premise that the methodologies employed by textual scholars during the Reformation era were fundamentally different from the methodologies employed today. This is apparent in the reality that modern text-critical methods, with their current canons, could not produce the text of the Protestant Reformation. The second is that it grants that the actual text form is inherently different, as the claim is that the Received Text could not be reproduced, while the Modern Critical Text allegedly could. In any case, in order to make this argument, one has to be willing to apply it to all texts, not just the Textus Receptus. In the event that this hypothetical extinction event occurred, a new form of the Bible would emerge, even if the same methods were consistently applied. D.C. Parker, the textual scholar currently leading the ECM team for the Gospel of John, says this: 

“The text is changing. Every time that I make an edition of the Greek New Testament, or anybody does, we change the wording. We are maybe trying to get back to the oldest possible form but, paradoxically, we are creating a new one. Every translation is different, every reading is different, and although there’s been a tradition in parts of Protestant Christianity to say there is a definitive single form of the text, the fact is you can never find it. There is never ever a final form of the text.” 

I do not employ this quote to disparage Dr. Parker, but rather to demonstrate that even in today’s text-critical climate, without any absurd hypothetical extinction of printed editions, the editors of Greek New Testaments would seem to refute the premise of the argument by their own words. This further demonstrates that the argument does not only attack the Textus Receptus, but all Bibles. I do not think this argument is wise to use, no matter which Bible you read. It is an open invitation to attack the validity and authority of every single Bible on the market for the sake of winning a debate against Christians who read a traditional Bible. This is a good reminder that we should be careful not to attack the authority of the Scriptures in our attempts to defend the Bible we think is best. With that in mind, there are three reasons I believe this argument should be abandoned. 

The Fatal Flaw Argument Against the Traditional Text Rejects God’s Providence 

The first reason this argument should be abandoned is that it rejects God’s providence in the transmission, preservation, and inspiration of the Holy Scriptures. The assumption on all sides of this discussion is that when somebody reads a Bible in their native tongue, they are reading God’s inspired Word. This is true for Christians who read the ESV as well as the KJV. If a Christian does not believe that their Bible is inspired, I’m not sure why they are even reading it, as it is then simply like any other document produced by humans in history. It may be a valuable book of moral tales, but if the Bible is not inspired, it is no more special than the Iliad or Cicero. 

That being said, this argument assumes that what God has done in time does not matter as it pertains to the transmission of the text and the reception of the Bible by the people of God. The only effort that matters is the one currently ongoing. In any view of inspiration, whether it be Warfield’s or Westminster’s, God’s providence is recognized as the instrument working in the production of Bibles. Warfield believed that the efforts of textual scholars in his day were an act of God’s special providence in giving the Bible back to the people of God. The Westminster Divines affirmed overwhelmingly that by God’s special care and providence, the Scriptures had been kept pure in all ages. 

That means that the Bibles that have been produced matter, because the printed texts are the texts that Christians use for reading, preaching, and evangelism. Even if one believes that a particular Bible is of lesser quality, Christians should find unity in the fact that God uses translations to speak insofar as they represent the original texts. If printed editions and translations do not matter, then all Christians need to quickly learn Hebrew, Aramaic, and Greek, as well as gain access to the compendium of extant manuscripts, in order to read a Bible. Regardless of the Bible one reads, all Christians believe together that God Himself has delivered it. The Textual Discussion comes down to determining which text God preserved. In proposing this hypothetical, one is simply saying, “It doesn’t matter what God did in time; the only thing that matters is what is going on now.” I don’t know many Christians, let alone any Calvinists, who would ever say that what God did providentially in time does not matter. 

The Fatal Flaw Argument Against the Traditional Text Assumes That All Current Bibles Are Not God’s Word

The fundamental problem with this argument, and the second reason it should be abandoned, is that it takes away every single Bible from every single believer. If a consistent methodology must be employed to create a single text from the manuscripts, then it seems that nobody has a Bible, or ever will have a Bible. The fact is that different methodologies have been employed since the first efforts at creating printed texts in the 16th century. Erasmus employed different methods than Beza, Beza employed different methods than Hort, and Hort employed different methods than D.C. Parker and the editors of the ECM. Not only that, there is a wealth of different opinions among textual scholars in between, such as Karl Lachmann, Maurice Robinson, H.C. Hoskier, and Edward F. Hills, and even among the editors of the ECM there are differences of opinion on the manuscript data. This argument assumes that all of the editors of Greek New Testaments today are unified in their opinions on the text. The reality is that they are not. 

Further, if a consistent methodology is required, which methodology should be considered the “most consistent”? Which methodology is going to be used in this reconstruction effort after this hypothetical extinction? The CBGM hasn’t been fully implemented and thus hasn’t been fully analyzed. The existence of the CBGM itself demonstrates that Hort and Metzger didn’t have it all right. That is not even taking into consideration the evolution of opinions on scribal habits, “Text Families”, and weighing manuscripts. Did scribes generally copy faithfully or did they tend to smooth out readings and add orthodox doctrines into the text? If all the printed editions were wiped out, I imagine that includes the ECM. Since the ECM is already going to take ten more years to complete, that means that the people of God would simply be without a Bible for at least ten more years. The argument is so incredibly asinine it is hard to believe that people are using it at all. 

The fact is that all Christians have to look back at history to have confidence in the Bible they read. The current methodology, the CBGM, isn’t fully implemented yet, and won’t be for another ten years. That means that every single Christian is trusting, to some degree or another, that the text-critical work done already is the method God used in delivering His Word to His people. The difference is in how Christians believe that God accomplished this task. Some believe the Bible was preserved up to the Protestant Reformation, and thus look to the printed texts of that era which have that text form. Some believe that the Bible was preserved in caves, monasteries, and barrels until the 19th century, and look to the printed texts produced in that era. Some believe differently than either of these two positions. No matter which view of the text one holds, every single Christian looks into history to see God’s providence in their view of the text. Either that, or they believe that all the Bibles up to this point aren’t complete or correct Bibles, and are patiently awaiting 2030 when the ECM is finished. In every case, the argument fundamentally assumes that the work done in history does not matter and should not be considered a valid “methodology”.

The Fatal Flaw Argument Against the Traditional Text Misleads the People of God 

The final flaw in the Fatal Flaw Argument against the Traditional Text, and the third reason it should be abandoned, is that it is horribly misleading. It makes Christians think that the canons of modern textual criticism are settled and unified. The fact is that scholars are still discussing the proper application of what the CBGM is creating, and how it should be understood. This argument leads people to believe that if all of the ESV Bibles and the printed texts it was translated from were suddenly raptured, the methods of textual criticism could give them the same exact Bible. Unless somebody has all of the underlying readings of the ESV memorized, this simply could not be done. And even if somebody did have all the readings memorized, they wouldn’t be applying any methodology; they would be copying down what they memorized. The reality is that even without a hypothetical extinction of all printed texts, the methods being implemented are not producing the same text time and time again. With each new iteration of the modern methods, new Bibles are being produced. In some cases, these new Bibles have significant changes. That is not my opinion; that is simply what is happening. There is a reason that Crossway removed the title “Permanent Edition” from the prefatory material of the 2016 ESV. 

That is why, in my blog, I focus so heavily on the doctrine of Scripture. The current efforts of textual criticism are not capable of producing a stable text. In fact, a stable or final text is not even the goal. The goal of modern textual criticism as it exists in the effort of the ECM is to construct the history of the surviving texts of the New Testament, not a final authorial text for all time. The only way the modern critical methods could produce a stable text would be to strip out all of the verses that are contested by variation. Even then, new manuscript finds and reevaluation of the data could just as easily cause that text to change. The fact is that every single Christian looks back to history when determining which Bible is best. The one method that every Christian uses to decide which Bible they read is the one method that modern critical methods do not use – the reception of readings by the people of God. Christians will never be able to escape their history, as hard as they may try. In an effort to defend the ongoing effort of modern textual criticism of the New Testament, many Christians have blatantly undermined the authority of the Scriptures as a whole. If the goal is to give Christians a defense for their Bible, this argument is absolutely not it. In fact, this so called Fatal Flaw Argument hands the Bible directly to the critics of the faith.  

Conclusion

At the end of the day, the goal of this conversation is to give Christians confidence that when they read their Bible, they are reading the Word of God. This kind of argument undermines everybody reading a Bible, no matter which version they read. In fact, it is almost identical to the argument that Bart Ehrman makes against Christians who adhere to the modern critical text. When we begin taking our cues from Bart Ehrman, perhaps it’s time to take a step back and reevaluate. In any case, there is a consistent methodology that Christians can employ to receive the Bible they read, and it does not involve trusting the ongoing reconstruction of the history of the New Testament text. 

The fact is that God has spoken (Deus dixit). Speaking is the means that God has always used to condescend to man, from the time of Adam in the garden. His speaking is the covenant means of communication to His covenant people. God will not fail in His covenant purpose, which means that God will not fail to communicate to His people (Mat. 5:18). Since God has ordained the Scriptures as the means of covenant communication in these last days (Heb. 1:1), the preservation of His Word is intimately tied to His covenant purpose. Since God has not failed, and cannot fail, He has not failed in speaking, or in preserving the Word He spoke. In every generation, from the time of Adam, God has spoken to His people clearly and without error. The introduction of textual variants in manuscripts did not thwart this effort. In every generation, in faithful copies of manuscripts, God preserved His Word. This preservation did not somehow stop in the fourth century, or even in the 16th century. That means that if the Bible is indeed preserved, it was still preserved at the time of the Protestant Reformation. If this is the case, then the manuscripts which were used during the time of the Protestant Reformation were indeed preserved, which means the text-critical work done during this time was done using preserved copies of the New Testament. The manuscripts did not suddenly become preserved during the 16th century; they were the ones handed down in faithful churches from the time of the Apostles. The alternative seems to be that God stored His Word away in barrels, caves, and monasteries lined with skulls.

This Fatal Flaw Argument, fundamentally, is simply saying, “We don’t have a Bible, so you can’t either.” This is not the way you defend the text of the New Testament; it is how you destroy its validity. It does not matter which Bible you read; attacking the validity of all Bibles in order to win an argument is not appropriate, or necessary. At the end of the Textual Discussion, Christians still need to have a Bible they feel they can read and use. All Christians employ the same methodology when selecting a Bible at the end of the day. They look back in time and receive a text based on their understanding of inspiration and preservation. Some receive a text they believe was preserved until the fourth century and has since been reconstructed to some degree or another, and others receive a text they believe was preserved up to the Reformation and beyond. Others do not receive any one text, but all of the differing texts. The vast majority of Christians are not textual scholars, do not know the original languages, and thus are at the mercy of various scholarly opinions. The average Christian wants to know, “Can I trust my Bible?” If our efforts are not concentrated in that direction, we have already failed.

Memoirs of an ESV-Onlyist: Reflecting on the Text and Canon Conference

Introduction

On Reformation weekend, a small conference called the Text and Canon Conference was held in Atlanta, Georgia, which focused on offering a clear definition of what it means when people advocate for the Masoretic Hebrew and Received Greek texts. For those not up to date with all of the jargon, the Masoretic Hebrew text is the only full Hebrew Old Testament text available, and the Greek Received Text is the Greek New Testament which was used during the Protestant Reformation and Post-Reformation period. Reformation era Bibles used the Masoretic Text and the Received Text for all translational efforts. Bibles produced in the modern era use the Masoretic Text as a foundation for the Old Testament, but frequently prefer Greek, Latin, and other translations of the Hebrew over the Masoretic Text. Modern Bibles also utilize a different Greek text for the New Testament, commonly called the Modern Critical Text. As a result of these differences, the Bibles produced from the text of the Reformation differ in many ways from the Bibles produced in recent years.

One of the major focuses of the conference was to demonstrate that it is still a good idea, and even necessary, to use a Reformation era Bible, or Bibles that utilize the same Hebrew and Greek texts as the Reformation era Bibles. The key speakers, Dr. Jeff Riddle and Pastor Robert Truelove, delivered a series of lectures which demonstrated the historical perspective on the transmission history of the Old and New Testaments and presented a wealth of reasons why the Reformation era Hebrew and Greek texts are still reliable, even today. I will be writing a series of articles which cover some of the key highlights of the conference. In this article, I want to explain why I think this conference was necessary, and also to detail the series of events which led me to attending this conference. 

Why Was the Text and Canon Conference Necessary?  

There are two major reasons that I believe the Text and Canon Conference was necessary. The first is that many Christians do not believe there is any justifiable reason to retain the historical text of the Protestant church. The second is that many Christians are not fully informed on the state of current text-critical efforts. Due to this reality, the lectures delivered at the Text and Canon Conference provided theological and historical reasons supporting the continued use of the Reformation era Hebrew and Greek texts, and offered information on the current effort of textual scholarship. An important reality in the textual discussion is that the majority of Christians do not have the time, and in many cases the ability, to keep up to date with all of the textual variants and text-critical methodologies that go into making modern Bibles. There is a great need in the church today for clear articulations of the history of the Bible, as well as accessible presentations on how modern Bibles are produced. The Text and Canon Conference, in part, met this need, and offered many opportunities for fellowship and like-minded conversation. Prior to launching into a series of commentary on the conference, I thought it would be helpful to share my journey from being a modern critical text advocate to a Traditional Text advocate. 

From the 2016 ESV to the Text and Canon Conference

Prior to switching to a Reformation era Bible, I began to discover certain realities about the modern efforts of textual criticism which caused me to have serious doubts as to whether or not the Bible was preserved. I had a hard time reconciling my doctrine of inspiration and preservation with the fact that there is an ongoing effort to reconstruct the Bible that has been in progress for over 200 years. These doubts increased when I discovered that not only had the methods of text-criticism changed since I was converted to Christianity over ten years ago, but that the modern critical text would be changing more in the next ten years. I began to read anything I could get my hands on to see if I could figure out more information on the methods that were responsible for creating the Bible I was reading at the time. When I began this process of investigation, I had just finished my cover-to-cover reading plan of the new 2016 ESV. At first, I was attempting to simply understand the methodology of the modern critical text with the assumption that a better understanding of it would help me defend the Scriptures against the opponents of the faith. The process quickly became a search for another position on the text of Scripture. This is due to some of the more alarming things I learned in my investigation of modern critical methods. There are six significant discoveries I made when investigating the current effort of textual criticism that I would like to share here. These six discoveries led me from being a committed ESV reader to a committed KJV reader.  

The first discovery that sent me down a different path than the modern critical text came when I investigated the manuscript data supporting the removal of Mark 16:9-20 in my 2016 ESV. The other pastor of Agros Church, Dane Johannsson, had called me to tell me about some information he learned about the Longer Ending of Mark after listening to an episode of Word Magazine, produced by Dr. Jeff Riddle. Up to this point, I had heard many pastors that I trusted say that the manuscript data was heavily in favor of this passage not being original. My Bible even said that “Some of the earliest manuscripts do not include this passage”. I was seriously confused when I found out that only three of the thousands of manuscripts exclude the passage, and only two of them are dated before the fifth century. This made me wonder: if all it took was two early manuscripts to discredit the validity of a passage in Scripture, what would happen if more manuscripts were found that did not have other passages that I had prayed over, studied, and heard preached? If a passage with thousands of manuscripts supporting it could be relegated to brackets or footnotes, or removed, based on the testimony of two manuscripts, I realized that this same logic could easily be applied to quite literally any place in my Bible. All it would take for other passages to be removed would be another manuscript discovery, or even a reevaluation of the evidence already in hand.  

The second discovery was the one that fully convinced me to put away my 2016 ESV and, initially, pick up an NKJV. At the time of this exploration process I was utilizing my Nestle-Aland 28th edition and the United Bible Society 5th edition in my Greek studies. I was still learning to use my apparatus when I learned what the diamond meant. In the prefatory material of the NA28, it states that the diamond indicates a place where the editors of the Editio Critica Maior (ECM) were split in determining which textual variant was earliest. That meant it was up to me, or possibly somebody else, to determine which reading belonged in the main text. This is a reality I would never have known by simply reading my ESV. I discovered that there were places where the ESV translators had actually gone with a different decision than the ECM editors, like 2 Peter 3:10, where the critical text reads the exact opposite of the ESV. This of course was concerning, but I wasn’t exactly sure why at the time. I figured there had to be a good reason for it; there were thousands of manuscripts, after all. I began investigating the methodology that was used to produce these diamond readings, and learned that it was called the Coherence Based Genealogical Method (CBGM). I quickly found out that there was not a whole lot of literature on the topic. The two books that I initially found were priced at $34 and $127, which was a bit staggering for me at the time. It was important for me to understand these methods, so I ended up purchasing the $34 book first. It was what I discovered in this book that deeply concerned me. Because the literature on the CBGM was relatively new, and possibly too expensive for the average person to purchase, I had a hard time finding anybody to discuss the book with me. It was actually the literature on the CBGM that motivated me to start podcasting and writing on the issue.
If I couldn’t find anybody to discuss this with, it meant that nobody really knew about it.   

The third discovery was the one that convinced me that I should start writing more about, and even advocating against, this new methodology. This was the methodology being employed in creating the Bible translations that all of my friends were reading, and that I was reading up until switching to the NKJV. It’s not that I “had it out” for modern Bibles; rather, I figured that if these discoveries had caused so much turmoil in my faith, they would cause others to have similar struggles. Most of my friends knew nothing about the CBGM, other than having heard it was a computer program that was going to produce a very accurate, even original, Bible. After reading the introductory work on the method, I knew that what I had heard about the CBGM was perhaps premature. Based on my conversations with my friends on textual criticism, I knew that they were just as uninformed as I was on the current effort of textual scholarship. What motivated me to start writing was not the thought that I was the first person to discover these things, but the fact that neither I nor any of my friends were aware of the information I was reading. Up to that point in my research, I was under the assumption that the goal of textual criticism was to reconstruct the original text that the prophets and apostles had penned. I even thought that scholars believed they had produced that original text, which I was reading in English in my ESV. I found out that this was not the case for the current effort of textual scholarship. I learned that the goal of textual criticism had, at some point in the last ten years, shifted from the pursuit of the original to what is called the Initial Text. In my studies, I realized that there were differing opinions on how the Initial Text should be defined, and even on whether there was one Initial Text. In all cases, however, the goal was different than what I thought. It did not take me long to realize the theological implications of this shift in effort.
At the time, I fully adhered to both the London Baptist Confession of Faith 1.8 and the Chicago Statement on Biblical Inerrancy. It was in examining the Chicago Statement on Biblical Inerrancy against the stated goals of the newest effort of textual criticism that I realized there were severe theological implications to what I was reading and studying.

The fourth discovery was the one that made me realize that the conversation of textual criticism was not only about Greek texts and translations; it was about the doctrine of Scripture itself. At the time I believed that the Bible was inspired insofar as it represented the original, and the original, as I found out, was no longer being pursued. The original was no longer being pursued, I learned, because the majority, if not all, of the scholars believed it could not be found, and that it was lost as soon as the first copy of the New Testament had been made. There are various ways of articulating this reality, but I could not find a single New Testament scholar actually doing work in the field of textual scholarship who still held onto the idea that the original, in the sense that I was defining it, could be attained. Even Holger Strutwolf, a conservative editor of the modern critical text, seems to define the original as being as “far back to the roots as possible” (Original Text and Textual History, 41). This being the case, if the current effort of textual criticism was not claiming to have determined the original readings of the Bible, then my doctrine of Scripture was seemingly vacuous. If the Bible was inspired insofar as it represented the original, and nobody was able to determine which texts were original, then on my own view the Bible wasn’t inspired at all. At the bare minimum, it was only inspired where there weren’t serious variants. In either case, this reality was impossible for me to reconcile. I then set out to discover how the Christians who were informed on all the happenings of textual criticism explained the doctrine of Scripture in light of this reality. I figured I wasn’t the first person to discover this about the modern text-critical effort, so somebody had to have a good doctrinal explanation.

The fifth discovery was the one that made me realize that I did not have a claim to an inspired text if I trusted in the efforts of modern textual criticism. In my search for faithful explanations of inspiration in light of the current effort of textual criticism, I did not find anything meaningful. In nearly every case, the answer was simply one of Kantian faith. Despite the split readings in the ECM and the abandoned pursuit of the original, I was told I had to believe it was preserved. Even if nearly every textual scholar was saying that the idea of the “original” was a novel idea from the past, or simply the earliest surviving text, I had to reconcile that reality with my theology. One of the answers I received was that the original text was preserved somewhere in all of the surviving manuscripts, and that there really was not any doctrine lost, no matter which textual variants were translated. This is based, in part, on an outdated theory which says that variants are “tenacious” – that once a variant enters the manuscript tradition it doesn’t fall out. This of course cannot be proven, and can even be shown to be false. Another answer I found was that all of the surviving manuscripts essentially taught the same exact thing. This would have been comforting, had I not spent time using my NA28 apparatus and reading different translations. I knew for a fact that there were many places where variants changed doctrine, sometimes in significant ways. Would the earth be burnt up on the last day, or would it not be burnt up? Was Jesus the unique God, or the only begotten Son? The answers I received simply did not line up with reality. I had no way of proving which of the countless variants were original. When I discovered this, I finally understood the position of Bart Ehrman.
He, like myself, had come to the conclusion that the theories, methods, and conclusions which went into the construction of the modern critical text told a story of a Bible that really wasn’t all that preserved. 

The sixth and final discovery I made, which did not necessarily happen in chronological order with the rest of my discoveries, was that there were several other views of textual criticism within the Reformed and larger Evangelical tradition. Prior to beginning my research project, I had read The King James Only Controversy, which led me to believe that there were really only two views on the text – KJV Onlyists and everybody else. I discovered that this was the farthest thing from reality and a terrible misrepresentation of the people of God who held to these other positions. The modern critical text was not a monolith, and I did not need to adopt it to defend my faith, or have a Bible. In fact, I knew that there was no way I could defend my faith with the modern critical text. In my research, I even discovered countless enemies of the faith who used the modern critical text as a way to disprove the preservation of Scripture. Various debates against Bart Ehrman that I watched demonstrated this fact clearly. I learned that even within the camp of modern textual criticism, there were people who did not read Bibles translated from the modern critical text. There were even people who disagreed on which readings were earliest within the modern critical text. There were people who adopted the longer ending of Mark and the woman caught in adultery who also did not read the KJV. There were also people who believed that the Bible was preserved in the majority of manuscripts, in opposition to other positions which say that original readings can be preserved in just one or two manuscripts. I also discovered the position I hold to now, which says that the original text of the Bible was preserved up to the Reformation, and thus the translations made during that time represent that transmitted original. This ultimately was the position that made the most sense to me theologically, as well as historically. 
I realized that the attacks on the TR, which often said that it was created from only “half a dozen” manuscripts, were not exactly meaningful, as the modern critical text often makes textual decisions based on just two manuscripts. In any case, the conversation of textual criticism was much more nuanced and complex than I had believed it to be.

Conclusion

I can only speak for myself as to how my discoveries affected my faith. It is clear that many Christians do not have a problem with a Greek text that is changing, and in many places, undecided. In my case, I was told to take a Kantian leap of faith to trust in this text. In my experience, most of the time people are simply unaware of the happenings of modern textual scholarship. It is not that I have any special knowledge, or secret wisdom; I simply had the time and energy and opportunity to read a lot of the current literature on the latest methods being employed in creating Bibles. One thing that has motivated me to be so vocal about this issue is the reality that most people are simply uninformed on it, as I was at the time of starting my research project. For one reason or another, the information on the current methods is difficult for many to access, and even more simply do not know that anything has changed in the last 20 years. My gut tells me that if people were better informed on the issue, they might at least consider embarking on a research project like I did. The fact is that many scholars and apologists for the critical text are insistent on framing this discussion as “KJV Onlyism against the world”, and it is apparent that this framing has been effective. Despite this, it was not my love for tradition or an affinity for the KJV that led me to reading it. In fact, I was hesitant to read it as a result of all the negative things I had heard about it. Primarily, it was my discoveries regarding the state of modern textual criticism that led me to put down my ESV and pick up an NKJV, and then finally a KJV.

I thought it would be helpful to detail the discoveries which led me to the position I now hold on the text of Scripture. I will be writing more articles commenting on what I consider to be the more important points of the conference. Hopefully my commentary can give you, the reader, more confidence in the Scriptures, and share some of the important information presented at the Text and Canon conference.

Inspiration: Now and Then

Introduction

Today’s church has been flooded with new ideas that depart from the old paths of the Protestant Reformation. This is especially true when it comes to the doctrine of Scripture. It is commonplace to adhere to the doctrine of inerrancy in today’s conservative circles and beyond. While it is good that many Christians take some sort of stand on Scripture, it is important to investigate whether or not the doctrine of inerrancy is a Protestant doctrine. The Reformers were adamant, when talking about the inspiration, authority, and preservation of Scripture, that every last word had been kept pure and should be used for doctrine, preaching, and practice. James Ussher clearly states the common sentiment of the Reformed.

“The marvelous preservation of the Scriptures; though none in time be so ancient, nor none so much oppugned, yet God hath still by his providence preserved them, and every part of them.”

(James Ussher, A Body of Divinity)

Most Christians would happily affirm this doctrinal statement. Those who are more familiar with the discussion of textual criticism, however, may not. It is common to dismiss men like James Ussher, along with other Westminster Divines, on the grounds that they were not aware of all of the textual data and were therefore speaking from ignorance. Much to the discomfort of these Christians, textual variants did exist during this time, many of which were the same ones we battle over today. The conclusion that should be drawn from this reality is not that the Reformed in the 16th and 17th centuries would have agreed with modern expressions of inspiration and preservation simply because we have “more data”. There is a careful nuance to be observed, and that nuance is in their actual doctrinal articulations of Scripture. This is necessarily the case, considering they were far more aware of textual variants than many would like to admit. Rather than attempting to understand the tension between the Reformed doctrine of Scripture and the existence of textual variants, it is commonplace to reinterpret the past through the lens of A.A. Hodge and B.B. Warfield, who reinterpreted the Westminster Confession of Faith 1.8 to make room for new trends in textual scholarship. William G. T. Shedd, a professor at Union Theological Seminary in the 19th century and a premier systematic theologian, articulated the view of Hodge and Warfield well regarding the confessional statement, “Kept pure in all ages”. He writes,

“This latter process is not supernatural and preclusive of all error, but providential and natural and allowing of some error. But this substantial reproduction, this relative ‘purity’ of the original text as copied, is sufficient for the Divine purposes in carrying forward the work of redemption in the world”.

William G. T. Shedd, Calvinism: Pure and Mixed. A Defense of the Westminster Standards, 142.

While this is close to the Reformed in the 16th and 17th centuries at face value, it still is a departure that ends up being quite significant, especially in light of the direction modern textual criticism has taken in the last ten years. For comparison, Francis Turretin articulates a similar thought in a different way.

“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”. 

Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106.

It is plainly evident that the two articulations of the same concept are not exactly the same. That is to say, Turretin’s expression of the doctrine was slightly more conservative than Shedd’s. The difference is that the apographs, as Turretin understood them, were materially as perfect as the Divine Original. Turretin dealt at length with textual corruptions, as did his peers and those who followed after him, such as the Puritan Divine John Owen, and still affirmed that the “very words” were available to the church. In order to fit a modern view into the Reformation and Post-Reformation theologians, one must anachronistically impose a Warfieldian interpretation of the Westminster Confession onto those who framed it. There is no doubt that the Westminster Divines lived in the same reality of textual variants as Warfield and Hodge, and that they still affirmed a doctrine which said every jot and tittle had been preserved. Turretin and Warfield faced the same dilemma, yet Warfield restricted inspiration to the autographs alone, whereas the Reformed included the apographs as well. Rather than attempting to reinterpret the theologians of the past, the goal should be to understand their doctrine as it existed during the 16th and 17th centuries, when the conversation of textual variants was just as alive as it is today.

A Careful Nuance

In order to examine the difference between the doctrine of Scripture from the Reformation to today, it’s important to zoom out and see how Warfield’s doctrine developed into the 21st century. The Doctrine of Inspiration, as it is articulated today, only extends to the autographic writings of the New Testament. I will appeal to David Naselli’s explanation from his textbook, How to Understand and Apply the New Testament, which has received high praise from nearly every major seminary. 

“The Bible’s inerrancy does not mean that copies of the original writings or translations of those copies are inerrant. Copies and translations are inerrant only to the extent that they accurately represent the original writings.” 

David Naselli, How to Understand and Apply the New Testament, 43.

This statement is generally agreeable, if we assume that there is a stable Bible in hand, and a stable set of manuscripts or a printed edition which is viewed as “original.” Unfortunately, neither of these exists in the world of the modern critical text. Not only do we not have the original manuscripts, there is no finished product that could be compared to the original. Since the effort of reconstructing the Initial Text is still ongoing, and since we do not have the original manuscripts, this doctrinal statement made by Naselli does not articulate a meaningful doctrine of inspiration or preservation. In stating what appears to be a solid doctrinal statement, he has said nothing at all. In order for this doctrine to have significant meaning, a text that “represents the original writings” would need to be produced. That is why the Reformed in the 16th and 17th centuries were so adamant about their confidence in having the original in hand. In order for any doctrine of Scripture to make sense, the Scriptures could not have fallen away after the originals were destroyed or lost. Doctrinally speaking, the articulation of the doctrine of Scripture demonstrated by Turretin and his contemporaries is necessary because it affirms that God providentially preserved the Scriptures in time and that they had access to those very Scriptures. If the modern critical text claimed to be a definitive text, like the Reformed claimed to have, the modern articulation of the doctrine of Scripture might be sound, but there is no modern critical text that exists as a solid and stable object. It is clear that the doctrine of Scripture and the form of the Scriptures cannot be separated, or the meaning of that doctrine is lost. In order for doctrine to be built on a text, the text must be static. If we are to say that the Bible is inerrant insofar as it represents the original, there must be 1) a stable text and 2) an original to compare that text against.
Since neither 1 nor 2 is true, Naselli, along with everybody who agrees with him, has effectively set forth a meaningless doctrinal standard as it pertains to Scripture.

This means that the Reformed doctrine of Scripture is intimately tied to the text they considered to be authentic, inspired, and representative of the Divine Original. The text they had in hand was what is now called the Received Text. Whether it was simply a “default” text does not change the reality that it was the text these men of God had in their hands. It is abundantly clear that the doctrine of Scripture during the time of the Reformation and Post-Reformation was built on the TR, just as the modern doctrine of Scripture is built on the modern critical text and the assumptions used to create it. Further problems arose for the modern doctrine of Scripture when the effort of textual scholarship shifted from trying to find the original text to finding the Initial Text. Due to this shift, any articulation of Scripture which looks to the modern critical text is based on a concept that does not necessarily exist in modern textual scholarship. The concept of the “original” has moved from the sights of the editorial teams of Greek New Testaments; therefore it is necessary to conclude that doctrinal statements which rely on the outdated goal of finding the “original” must also be redefined. What this means practically is that there are no doctrinal statements in the modern church which align with the doctrines used to produce modern Bibles.

Due to the doctrine of Scripture being intimately tied to the nature of the text it describes, the various passages of the New Testament which have been considered inspired have changed throughout time, and are going to continue changing as the conclusions of scholars vary from year to year. If we take Naselli’s articulation of the doctrine of Scripture as true, this means that there is not one inerrant text of Holy Scripture; there are as many as there are Christians who read their Bible. So in a very real sense, according to the modern articulation of inspiration, the inspired text of the New Testament is not a stable rule of faith. It is only stable relative to crowd consensus, or perhaps at the individual level. A countless multitude of people who adhere to this doctrine of inspiration make individual rulings on Scripture, which effectively means that the Bible is given its authority by virtue of the person making those decisions. Thus, the number of Bibles which may be considered “original” is as large as the number of people reading Bibles. It is due to this reality that the modern doctrine of Scripture has departed from the Reformation era doctrine in at least two ways. The first is that by “original”, the post-Warfield doctrine means the autographs, which no longer exist, and excludes the apographs. The second is that the Bible is only authoritative insofar as it has been judged authoritative by some standard or another. This combination contradicts any doctrine that would have the Scriptures be a stable rule for faith and practice. It is because of these differences that it can be safely said that while the doctrinal articulations may sound similar, they are not remotely the same.

The Reformed doctrine of Scripture in the 16th and 17th centuries is founded upon two principles that differ from those of the post-Warfield era. The first principle of the Reformed is that the Scriptures are self-authenticating, and the second is that they considered the original to be represented and preserved in the text they had in hand. Therefore it seems necessary to understand the Reformation and Post-Reformation Divines through a different lens than the modern perspective, because the two camps are saying entirely different things. A greater effort should be made to understand what exactly the Reformed meant by “every word and letter” in relationship to the text they had in hand, rather than imposing the modern doctrine upon the Reformation and Post-Reformation divines.

Conclusion

The goal of this conversation should be to instill confidence in people that the Bible they are reading is indeed God’s inspired Word. Oftentimes it is more about winning debates and being right than actually giving Christians confidence that what they have in their hands can be trusted. It is counterproductive for Christians to continue to fight over textual variants in the way that they do, especially considering the paper-thin modern articulations of the doctrine of Scripture. It is stated by some that receiving the Reformation era Bible is “dangerous”, yet I think what is more dangerous is to convince somebody that they should not trust this Bible, which is exactly what happens when somebody takes the time to actually explain the nuances of modern textual criticism. These attacks are especially harmful because the Bible being attacked is the one that the Protestant religion was founded upon, and the only text that carries with it a meaningful doctrine of Scripture. Christians need to consider very carefully the claims made about the Reformation era text which say it is not God’s Word, or that it is even dangerous to use. I cannot emphasize enough the harm this argument has done to the Christian religion as a whole. The constant effort to “disprove” the Reformation era text is a strange effort indeed, especially if “no doctrines are affected”. The alternative, which has been a work in progress since before 1881, and is still a work in progress today, offers no assurance that Christians are actually reading the Bible. In making the case that the Received Text and translations made from it should not be used, critics have taken one Bible away and replaced it with nothing but uncertainty.

The claim made by advocates of the Received Text is simple, and certainly not dangerous. The manuscripts that the Reformed had in the 16th century were as they claimed – of great antiquity and highest quality. The work done in that time resulted in a finished product, which continued to be used for hundreds of years after. That Bible, in its various translations, quite literally changed the world. If the Bible of the 16th-18th centuries is so bad, I cannot understand why people who believe it to be a gross corruption of God’s Word still continue to read the theological works of those who used it. Further, it is difficult to comprehend how a Bible that is said to accomplish the same purpose as modern Bibles could be so viciously attacked by those who oppose it. If all solid translations accomplish the same redemptive purpose, according to the modern critical doctrine, why would it make any sense to attack this one? After spending ten years reading modern Bibles, I simply do not see the validity of the claim that the Reformation era text is “dangerous” in any way. Christians do not need to “beware” of the text used by the beloved theologians of the past. At the end of the day, I think it is profitable for Christians to know that traditional Bibles are not scary, and have been used for centuries to produce the fullest expression of Christian doctrine in the history of the world. When the two doctrinal positions are compared, there is no strong appeal in the axioms of Westcott and Hort, or Metzger, or even the CBGM. They are all founded on the vacuous doctrine of Scripture which requires that the current text be validated against the original, which cannot be done. There is no theological or practical value in constantly changing the words printed in our Bibles, and this practice is in fact detrimental to any meaningful articulation of what Scripture is. I have not once talked to anybody who has been given more confidence in the Word of God by this practice.
In fact, the opposite is true in every real life encounter I’ve had.

It is said that the Received Text position is “pious” and “sanctimonious”, but I just don’t see how a changing Bible, with changing doctrines, is even something that a conservative Christian would seriously consider. If Christians desire a meaningful doctrine of Scripture, the modern critical text and its axioms are incapable of producing it.

Is the CBGM God’s Gift to the Church?

Introduction

It is stated by some that the Coherence Based Genealogical Method is a blessing to the church, even given to the church by way of God’s providence. I thought it would be helpful to examine this claim. Unfortunately, those who have made such statements regarding the Editio Critica Maior (ECM) and the CBGM have not seemed to provide an answer as to why this is the case. This is often a challenge in the textual discussion. Assertions and claims can be helpful for understanding what somebody believes, but oftentimes fall short in explaining why they believe something to be true. The closest explanation I have heard as to why the CBGM is a blessing to the church is that it has been said it can detail the exact form of the Bible as it existed around 125 AD. Again, this is simply an assertion, and needs to be demonstrated. I have detailed in this article why I believe that claim is not true.

In this article, I thought it would be helpful to provide a simple explanation of what the CBGM is, how it is being used, and the impact that the CBGM will have on Bibles going forward. The discerning reader can then decide for themselves if it is a blessing to the church. If there is enough interest in this article, perhaps I can write more at length later. I will be using Tommy Wasserman and Peter Gurry’s book, A New Approach to Textual Criticism: An Introduction to the Coherence Based Genealogical Method as a guide for this article. 

Some Insights Into the CBGM from the Source Material 

New Testament textual criticism has a direct impact on preaching, theology, commentaries, and how people read their Bible. The stated goal of the CBGM is to help pastors, scholars, and laypeople alike determine, “Which text should be read? Which should be applied?... For the New Testament, this means trying to determine, at each place where our copies disagree, what the author most likely wrote, or failing this, at least what the earliest text might have been” (1, emphasis mine). Note that one of the stated objectives of the CBGM is to find what the author most likely wrote, and when that cannot be determined, what the earliest text might have been.

Here is a brief definition of the CBGM as provided by Dr. Gurry and Dr. Wasserman:

“The CBGM is a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament text” (3). 

The way that this method relates manuscript texts is an adaptation of Karl Lachmann’s common error method, as opposed to manuscript families and text types. This is in part due to the fact that “A text of a manuscript may, of course, be much older than the parchment and ink that preserve it” (3). The CBGM is primarily concerned with developing genealogies of readings and how variants relate to each other, rather than of manuscripts as a whole. This is done by using pregenealogical coherence (algorithmic analysis) and genealogical coherence (editorial analysis). The method examines places where manuscripts agree and disagree to gain insight into which readings are earliest. Where two manuscripts disagree at the same place, the new method can help determine one of two things:

  1. One variant gave birth to another, and therefore one is earlier
  2. The relationship between the two variants is uncertain

It is important to keep in mind that the CBGM is not simply a pure computer system. It requires user input and editorial judgement. “This means that the CBGM uses a unique combination of both objective and subjective data to relate texts to each other…the CBGM requires the user to make his or her own decisions about how variant readings relate to each other” (4,5). That means that determining which variant came first “is determined by the user of the method, not by the computer” (5). The CBGM is not a purely objective method. People still determine which data to examine using the computer tools, and ultimately what ends up in the printed text will be the decisions of the editorial team.

The average Bible reader should know that the CBGM “has ushered in a number of changes to the most popular editions of the Greek New Testament and to the practice of New Testament textual criticism itself…Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (5). Currently, the CBGM has been partially applied to the data in the Catholic Letters and Acts, and DC Parker and his team are now working on the Gospel of John. The initial inquiry of this article was to examine the CBGM to determine if it is indeed a “blessing to the church”. In order for this to be the case, one would expect the new method to introduce more certainty for Bible readers with regard to variants. Unfortunately, the opposite seems to be true.

“Along with the changes to the text just mentioned, there has also been a slight increase in the ECM editors’ uncertainty about the text, an uncertainty that has been de facto adopted by the editors of the NA/UBS…their uncertainty is such that they refuse to offer any indication as to which reading they prefer” (6,7). 

“In all, there were in the Catholic Letters thirty-two uses of brackets compared to forty-three uses of the diamond and in Acts seventy-eight cases of brackets compared to 155 diamonds. This means that there has been an increase in both the number of places marked as uncertain and an increase in the level of uncertainty being marked. Overall, then, this reflects a slightly greater uncertainty about the earliest text on the part of the editors” (7).   

This uncertainty has led “the editors to abandon the concept of text-types traditionally used to group and evaluate manuscripts” (7). What this practically means is that the Alexandrian texts, which were formerly called a text-type, are no longer considered as such. The editors of the ECM “still recognize the Byzantine text as a distinct text form in its own right. This is due to the remarkable agreement that one finds in our late Byzantine manuscripts. Their agreement is such that it is hard to deny that they should be grouped…when the CBGM was first used on the Catholic Letters, the editors found that a number of Byzantine witnesses were surprisingly similar to their own reconstructed text” (9,10).

Along with abandoning the notion that the Alexandrian manuscripts represent a text-type, another significant shift has occurred. Rather than pursuing what has historically been called the Divine Original or the Original Text, the editors of the ECM are now after what is called the Initial Text (Ausgangstext). There are various ways this term is defined, and opinions are split among the editors of the ECM. For example, DC Parker, who leads the team applying the CBGM to the Gospel of John, has stated, along with others, that there is no good reason to believe that the Initial Text and the Original Text are the same. Others are more optimistic, but the 198 diamonds in Acts and the Catholic Letters, each marking a place where the ECM considers the reading uncertain, may serve as an indication of whether this optimism is warranted by the data.

The computer-based component of the CBGM is often sold as a conclusive means to determine the earliest, or even original, reading. This is not true. “At best, pregenealogical coherence [computer] only tells us how likely it is that a variant had multiple sources of origin rather than just one…pregenealogical coherence is only one piece of the text-critical puzzle. The other pieces – knowledge of scribal tendencies, the date and quality of manuscripts, versions, and patristic citations, and the author’s theology and style are still required…As with so much textual criticism, there are no absolute rules here, and experience serves as the best guide” (56-57, emphasis added).

In the past it has been said that textual criticism was trying to build a 10,000-piece puzzle with 10,100 pieces. This perspective has changed greatly since the introduction of the CBGM: now “we are trying to piece together a puzzle with only some of the pieces” (112). Not only does the CBGM lack all the data that has ever existed, it is only using “about one-third of our extant Greek manuscripts…The significance of this selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). Further, the CBGM is not omniscient. It will never know how many of the more complex corruptions entered into the manuscripts, or the backgrounds and theology of the scribes, or even the purpose for which a manuscript was created. “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That means it is likely that the CBGM will produce readings that were not original or earliest, but will be mistakenly treated as such. “We do not want to give the impression that the CBGM has solved the problem of contamination once and for all. The CBGM still faces certain problematic scenarios, and the loss of witnesses plagues all methods at some point” (115).

One of the impending realities the CBGM has created is that there may be a push for individual users, Bible readers, to learn how to use and implement the CBGM in their own daily devotions. “Providing a customizable option would mean creating a version that allows each user to have his or her own editable database” (119,120). There will likely be a time in the near future when the average Bible-reading Christian, or at least pastors and seminarians, will be encouraged to understand and use this methodology. If you are not somebody who has the time or ability to do this, this could be extremely burdensome. Further, the concept of a “build your own Bible” tool seems like a slippery slope, though it is a slope we are already sliding down for those who make their own judgements on texts in isolation from the general consent of the believing people of God.

Conclusion

Since the CBGM has not been fully implemented, I suppose there is no way to say with absolute confidence whether or not it is a “blessing to the church”. I will say, however, that I believe the church should be the one to decide on this matter, not scholars. It seems that the portions of text where the CBGM has already been applied have spoken rather loudly on the matter, in at least 198 places. Hopefully this article has been insightful, and perhaps has shed light on the claims that many are parroting which say that the CBGM is a “blessing to the church” or an “act of God’s providence”. If anything, the increasing amount of uncertainty that the CBGM has introduced to the previous efforts of modern textual criticism should give us pause, because the Bibles that most people use are based on the methodologies that modern scholarship has abandoned.

Helpful Terms

Coherence: The foundation for the CBGM, coherence is synonymous with agreement or similarity between texts. Within the CBGM the two most important types are pregenealogical coherence and genealogical coherence. The former is defined merely by agreements and disagreements; the latter also includes the editors’ textual decisions in the disagreements (133).   

ECM: The Editio Critica Maior, or Major Critical Edition, was conceived by Kurt Aland as a replacement for Constantin von Tischendorf’s well-known Editio octava critica maior. The aim of the ECM is to present extensive data from the first one thousand years of transmission, including Greek manuscripts, versions, and patristic citations. Currently, editions for Acts and the Catholic Letters have been published, with more volumes in various stages of completion (135).

Stemma: A stemma is simply a set of relationships either of manuscripts, texts, or their variants. The CBGM operates with three types that show the relationship of readings (local stemmata), the relationship of a single witness to its stemmatic ancestors (substemma), and the relationships of all the witnesses to each other (global stemmata) (138).
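As an illustration of the simplest of these, a local stemma can be pictured as a tiny directed graph in which an edge runs from a prior reading to the reading the editors judge to have derived from it. The readings below ("a", "b", "c") and the editorial judgements encoded in the graph are hypothetical, chosen only to show the shape of the data.

```python
# Toy local stemma for a single place of variation: a directed graph
# whose edges run from a prior reading to readings derived from it.
# The readings and the judgement that "a" is earliest are hypothetical.
local_stemma = {
    "a": ["b", "c"],  # editors judge "a" earliest; "b" and "c" derive from it
    "b": [],
    "c": [],
}

def prior_readings(reading, stemma):
    """All readings judged to be direct ancestors of the given reading."""
    return [src for src, derived in stemma.items() if reading in derived]

print(prior_readings("b", local_stemma))  # ['a']
print(prior_readings("a", local_stemma))  # [] (no ancestor: "a" is earliest)
```

The key point the article makes holds even in this toy form: the direction of every edge is an editorial decision fed into the system, not something the computer discovers on its own.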