Memoirs of an ESV-Onlyist: Reflecting on the Text and Canon Conference

Introduction

On Reformation weekend, a small conference called the Text and Canon Conference was held in Atlanta, Georgia, which focused on offering a clear definition of what it means when people advocate for the Masoretic Hebrew and Received Greek texts. For those who are not up to date with all of the jargon, the Masoretic Hebrew text is the only complete Hebrew Old Testament text available, and the Greek Received Text is the Greek New Testament which was used during the Protestant Reformation and Post-Reformation period. At the time of the Reformation, translators used the Masoretic Text and the Received Text for all of their translational efforts. Bibles produced in the modern era still use the Masoretic Text as a foundation for the Old Testament, but frequently prefer Greek, Latin, and other translations of the Hebrew over the Masoretic Text. Modern Bibles also utilize a different Greek text for the New Testament, commonly called the Modern Critical Text. As a result of these differences, the Bibles produced from the text of the Reformation differ in many ways from the Bibles produced in recent years.

One of the major focuses of the conference was to demonstrate that it is still a good idea, and even necessary, to use a Reformation era Bible, or a Bible that utilizes the same Hebrew and Greek texts as the Reformation era Bibles. The key speakers, Dr. Jeff Riddle and Pastor Robert Truelove, delivered a series of lectures which presented a historical perspective on the transmission of the Old and New Testaments and offered a wealth of reasons why the Reformation era Hebrew and Greek texts are still reliable, even today. I will be writing a series of articles covering some of the key highlights of the conference. In this article, I want to explain why I think this conference was necessary, and also to detail the series of events which led me to attend it.

Why Was the Text and Canon Conference Necessary?  

There are two major reasons that I believe the Text and Canon Conference was necessary. The first is that many Christians do not believe that there is any justifiable reason to retain the historical text of the Protestant church. The second is that many Christians are not fully informed on the state of current text-critical efforts. Due to this reality, the lectures delivered at the Text and Canon Conference provided theological and historical reasons which support the continued use of the Reformation era Hebrew and Greek texts, as well as offered information on the current effort of textual scholarship. An important reality in the textual discussion is that the majority of Christians do not have the time, and in many cases the ability, to keep up to date with all of the textual variants and text-critical methodologies that go into making modern Bibles. There is a great need in the church today for clear articulations of the history of the Bible, as well as accessible presentations on how modern Bibles are produced. The Text and Canon Conference, in part, met this need, as well as offered many opportunities for fellowship and like-minded conversation. Prior to launching into a series of commentaries on the conference, I thought it would be helpful to share my journey from being a modern critical text advocate to a Traditional Text advocate.

From the 2016 ESV to the Text and Canon Conference

Prior to switching to a Reformation era Bible, I began to discover certain realities about the modern effort of textual criticism which caused me to have serious doubts as to whether or not the Bible was preserved. I had a hard time reconciling my doctrine of inspiration and preservation with the fact that there is an ongoing effort to reconstruct the Bible that has been in progress for over 200 years. These doubts increased when I discovered that not only had the methods of text criticism changed since I was converted to Christianity over ten years ago, but that the modern critical text would be changing even more in the next ten years. I began to read anything I could get my hands on to learn more about the methods that were responsible for creating the Bible I was reading at the time. When I began this process of investigation, I had just finished my cover-to-cover reading plan of the new 2016 ESV. At first, I was attempting simply to understand the methodology of the modern critical text, assuming that a better understanding of it would help me defend the Scriptures against the opponents of the faith. The process quickly became a search for another position on the text of Scripture, due to some of the more alarming things I learned in my investigation of modern critical methods. There are six significant discoveries I made when investigating the current effort of textual criticism that I would like to share here. These six discoveries led me from being a committed ESV reader to a committed KJV reader.

The first discovery that sent me down a different path than the modern critical text came when I investigated the manuscript data supporting the removal of Mark 16:9-20 in my 2016 ESV. The other pastor of Agros Church, Dane Johannsson, had called me to tell me about some information he had learned about the Longer Ending of Mark after listening to an episode of Word Magazine, produced by Dr. Jeff Riddle. Up to this point, I had heard many pastors whom I trusted say that the manuscript data was heavily in favor of this passage not being original. My Bible even said that “Some of the earliest manuscripts do not include this passage”. I was seriously confused when I found out that only three of the thousands of manuscripts excluded the passage, and only two of them are dated before the fifth century. This made me wonder: if all it took was two early manuscripts to discredit the validity of a passage in Scripture, what would happen if more manuscripts were found that did not have other passages that I had prayed over, studied, and heard preached? If a passage that had thousands of manuscripts supporting it could be relegated to brackets or footnotes, or removed, based on the testimony of two manuscripts, I realized that this same logic could easily be applied to quite literally any place in my Bible. All that it would take for other passages to be removed would be another manuscript discovery, or even a reevaluation of the evidence already in hand.

The second discovery was the one that fully convinced me to put away my 2016 ESV and, initially, pick up an NKJV. At the time of this exploration process I was using my Nestle-Aland 28th edition and the United Bible Societies 5th edition in my Greek studies. I was still learning to use my apparatus when I learned what the diamond meant. In the prefatory material of the NA28, it states that the diamond indicates a place where the editors of the Editio Critica Maior (ECM) were split in determining which textual variant was earliest. That meant that it was up to me, or possibly somebody else, to determine which reading belonged in the main text. This is a reality that I would never have known by simply reading my ESV. I discovered that there were places where the ESV translators had actually gone with a different decision than the ECM editors, like 2 Peter 3:10, where the critical text reads the exact opposite of the ESV. This of course was concerning, but I wasn’t exactly sure why at the time. I figured there had to be a good reason for this; there were thousands of manuscripts, after all. I began investigating the methodology that was used to produce these diamond readings, and learned that it was called the Coherence Based Genealogical Method (CBGM). I quickly found out that there was not a whole lot of literature on the topic. The two books that I initially found were priced at $34 and $127, which was a bit staggering for me at the time. It was important for me to understand these methods, so I purchased the $34 book first. It was what I discovered in this book that heavily concerned me. Due to the literature on the CBGM being relatively new, and possibly too expensive for the average person to purchase, I had a hard time finding anybody to discuss the book with me. It was actually the literature on the CBGM that motivated me to start podcasting and writing on the issue. If I couldn’t find anybody to discuss this with, it meant that nobody really knew about it.

The third discovery was the one that convinced me that I should start writing more about, and even advocating against, this new methodology. This was the methodology being employed to create the Bible translations that all of my friends were reading, and that I was reading up until switching to the NKJV. It’s not that I “had it out” for modern Bibles; I figured that if these discoveries had caused so much turmoil in my faith, they would cause others to have similar struggles. Most of my friends knew nothing about the CBGM, only that they had heard it was a computer program that was going to produce a very accurate, even original, Bible. After reading the introductory work on the method, I knew that what I had heard about the CBGM was perhaps premature. Based on my conversations with my friends on textual criticism, I knew that they were just as uninformed as I was on the current effort of textual scholarship. It wasn’t that I thought I was the first person to discover these things that motivated me to start writing, but the fact that neither I nor any of my friends were aware of the information I was reading. Up to that point in my research, I was under the assumption that the goal of textual criticism was to reconstruct the original text that the prophets and apostles had penned. I even thought that scholars believed they had produced that original text, which I was reading in English in my ESV. I found out that this was not the case for the current effort of textual scholarship. I learned that the goal of textual criticism had, at some point in the last ten years, shifted from the pursuit of the original to what is called the Initial Text. In my studies, I realized that there were differing opinions on how the Initial Text should be defined, and even whether there was one Initial Text. In all cases, however, the goal was different than what I thought. It did not take me long to realize the theological implications of this shift in effort. At the time, I fully adhered to both the London Baptist Confession of Faith 1.8 and the Chicago Statement on Biblical Inerrancy. It was in examining the Chicago Statement on Biblical Inerrancy against the stated goals of the newest effort of textual criticism that I realized there were severe theological implications to what I was reading and studying.

The fourth discovery was the one that made me realize that the conversation of textual criticism was not only about Greek texts and translations; it was about the doctrine of Scripture itself. At the time I believed that the Bible was inspired insofar as it represented the original, and the original, as I found out, was no longer being pursued. The original was no longer being pursued, I learned, because the majority, if not all, of the scholars believed it could not be found, and that it was lost as soon as the first copy of the New Testament had been made. There are various ways of articulating this reality, but I could not find a single New Testament scholar actually doing work in the field of textual scholarship who still held onto the idea that the original, in the sense that I was defining it, could be attained. Even Holger Strutwolf, a conservative editor of the Modern Critical Text, seems to define the original as being as “far back to the roots as possible” (Original Text and Textual History, 41). This being the case, if the current effort of textual criticism was not claiming to have determined the original readings of the Bible, then my doctrine of Scripture was seemingly vacuous. If the Bible was inspired insofar as it represented the original, and nobody was able to determine which texts were original, then on my view the Bible wasn’t inspired at all. At the bare minimum, it was only inspired where there weren’t serious variants. In either case, this reality was impossible for me to reconcile. I then set out to discover how the Christians who were informed on all the happenings of textual criticism explained the doctrine of Scripture in light of this reality. I figured I wasn’t the first person to discover this about the modern text-critical effort, so somebody had to have a good doctrinal explanation.

The fifth discovery was the one that made me realize that I did not have a claim to an inspired text if I trusted in the efforts of modern textual criticism. In my search for faithful explanations of inspiration in light of the current effort of textual criticism, I did not find anything meaningful. In nearly every case, the answer was simply one of Kantian faith. Despite the split readings in the ECM and the abandoned pursuit of the original, I was told I had to believe the text was preserved. Even if nearly every textual scholar was saying that the “original” was a novel idea from the past, or simply the earliest surviving text, I had to reconcile that reality with my theology. One of the answers I received was that the original text was preserved somewhere in all of the surviving manuscripts, and that no doctrine was really lost, no matter which textual variants were translated. This is based, in part, on an outdated theory which says that variants are “tenacious” – that once a variant enters the manuscript tradition it doesn’t fall out. This of course cannot be proven, and can even be shown to be false. Another answer I found was that all of the surviving manuscripts essentially taught the same exact thing. This would have been comforting, had I not spent time using my NA28 apparatus and reading different translations. I knew for a fact that there were many places where variants changed doctrine, sometimes in significant ways. Would the earth be burnt up on the last day, or would it not be burnt up? Was Jesus the unique god, or the only begotten Son? The answers I received simply did not line up with reality. I had no way of proving which of the countless variants were original. When I discovered this, I finally understood the position of Bart Ehrman. He, like myself, had come to the conclusion that the theories, methods, and conclusions which went into the construction of the modern critical text told the story of a Bible that really wasn’t all that preserved.

The sixth and final discovery I made, which did not necessarily happen in chronological order with the rest of my discoveries, was that there were several other views of textual criticism within the Reformed and larger Evangelical tradition. Prior to beginning my research project, I had read The King James Only Controversy, which led me to believe that there were really only two views on the text – KJV Onlyists and everybody else. I discovered that this was the farthest thing from reality and a terrible misrepresentation of the people of God who held to these other positions. The modern critical text was not a monolith, and I did not need to adopt it to defend my faith, or to have a Bible. In fact, I knew that there was no way I could defend my faith with the modern critical text. In my research, I even discovered countless enemies of the faith who used the modern critical text as a way to disprove the preservation of Scripture. Various debates against Bart Ehrman that I watched demonstrated this fact clearly. I learned that even within the camp of modern textual criticism, there were people who did not read Bibles translated from the modern critical text. There were even people who disagreed on which readings were earliest within the modern critical text. There were people who accepted the Longer Ending of Mark and the woman caught in adultery who also did not read the KJV. There were also people who believed that the Bible was preserved in the majority of manuscripts, in opposition to other positions which say that original readings can be preserved in just one or two manuscripts. I also discovered the position I hold to now, which says that the original text of the Bible was preserved up to the Reformation, and thus the translations made during that time represent that transmitted original. This ultimately was the position that made the most sense to me theologically, as well as historically. I realized that the attacks on the TR, which often said that it was created from only “half a dozen” manuscripts, were not exactly meaningful, as the modern critical text often makes textual decisions based on just two manuscripts. In any case, the conversation of textual criticism was much more nuanced and complex than I had believed it to be.

Conclusion

I can only speak for myself as to how my discoveries affected my faith. It is clear that many Christians do not have a problem with a Greek text that is changing, and in many places, undecided. In my case, I was told to take a Kantian leap of faith to trust in this text. In my experience, most of the time people simply are unaware of the happenings of modern textual scholarship. It is not that I have any special knowledge or secret wisdom; I simply had the time, energy, and opportunity to read a lot of the current literature on the latest methods being employed in creating Bibles. One thing that has motivated me to be so vocal about this issue is the reality that most people are simply uninformed on it, as I was at the time of starting my research project. For one reason or another, the information on the current methods is difficult for many to access, and even more people simply do not know that anything has changed in the last 20 years. My gut tells me that if people were better informed on the issue, they might at least consider embarking on a research project like I did. The fact is that many scholars and apologists for the critical text are insistent on framing this discussion as “KJV Onlyism against the world”, and it is apparent that this framing has been effective. Despite this, it was not my love for tradition or an affinity for the KJV that led me to reading it. In fact, I was hesitant to read it as a result of all the negative things I had heard about it. Primarily, it was my discoveries regarding the state of modern textual criticism that led me to put down my ESV and pick up an NKJV, and then finally a KJV.

I thought it would be helpful to detail my discoveries which led me to the position I hold now on the text of Scripture. I will be writing more articles commenting on what I consider to be the more important points of the conference. Hopefully my commentary can serve to give you, the reader, more confidence in the Scriptures, and to share some of the important information presented at the Text and Canon conference.  

Inspiration: Now and Then

Introduction

Today’s church has been flooded with new ideas that depart from the old paths of the Protestant Reformation. This is especially true when it comes to the doctrine of Scripture. It is commonplace to adhere to the doctrine of inerrancy in today’s conservative circles and beyond. While it is good that many Christians take some sort of stand on Scripture, it is important to investigate whether or not the doctrine of inerrancy is a Protestant doctrine. The Reformers were adamant, when talking about the inspiration, authority, and preservation of Scripture, that every last word had been kept pure and should be used for doctrine, preaching, and practice. James Ussher clearly states the common sentiment of the Reformed:

“The marvelous preservation of the Scriptures; though none in time be so ancient, nor none so much oppugned, yet God hath still by his providence preserved them, and every part of them.”

(James Ussher, A Body of Divinity)

Most Christians would happily affirm this doctrinal statement. Those who are more familiar with the discussion of textual criticism, however, may not. It is common to dismiss men like James Ussher, along with other Westminster Divines, on the grounds that they were not aware of all of the textual data and therefore were speaking from ignorance. Much to the discomfort of these Christians, textual variants did exist during this time, many of them the same ones we battle over today. The conclusion that should be drawn from this reality is not that the Reformed in the 16th and 17th centuries would have agreed with modern expressions of inspiration and preservation simply because we have “more data”. There is a careful nuance to be observed, and that nuance is in their actual doctrinal articulations of Scripture. This is necessarily the case, considering they were far more aware of textual variants than many would like to admit. Rather than attempting to understand the tension between the Reformed doctrine of Scripture and the existence of textual variants, it is commonplace to reinterpret the past through the lens of A.A. Hodge and B.B. Warfield, who reinterpreted the Westminster Confession of Faith 1.8 to make room for new trends in textual scholarship. William G. T. Shedd, a professor at Union Theological Seminary in the 19th century and a premier systematic theologian, articulated well the view of Hodge and Warfield regarding the confessional statement, “Kept pure in all ages”. He writes,

“This latter process is not supernatural and preclusive of all error, but providential and natural and allowing of some error. But this substantial reproduction, this relative ‘purity’ of the original text as copied, is sufficient for the Divine purposes in carrying forward the work of redemption in the world”.

William G. T. Shedd, Calvinism: Pure and Mixed. A Defense of the Westminster Standards, 142.

While this is, at face value, close to the Reformed position of the 16th and 17th centuries, it is still a departure that ends up being quite significant, especially in light of the direction modern textual criticism has taken in the last ten years. For comparison, Francis Turretin articulates a similar thought in a different way.



“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”. 

Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106.

It is plainly evident that the two articulations of the same concept are not exactly the same. That is to say, Turretin’s expression of the doctrine was slightly more conservative than Shedd’s. The difference is that the apographs, as Turretin understood them, were materially as perfect as the Divine Original. Turretin dealt at length with textual corruptions, as did his peers and those that followed after him, such as the Puritan divine John Owen, and still affirmed that the “very words” were available to the church. In order to fit a modern view onto the Reformation and Post-Reformation theologians, one must anachronistically impose a Warfieldian interpretation of the Westminster Confession onto those that framed it. There is no doubt that the Westminster Divines lived in the same reality of textual variants as Warfield and Hodge, and that they still affirmed a doctrine which said every jot and tittle had been preserved. Turretin and Warfield faced the same dilemma, yet Warfield restricted inspiration to the autographs alone, whereas the Reformed included the apographs as well. Rather than attempting to reinterpret the theologians of the past, the goal should be to understand their doctrine as it existed during the 16th and 17th centuries, when the conversation about textual variants was just as alive as it is today.

A Careful Nuance

In order to examine the difference between the doctrine of Scripture from the Reformation to today, it’s important to zoom out and see how Warfield’s doctrine developed into the 21st century. The doctrine of inspiration, as it is articulated today, extends only to the autographic writings of the New Testament. I will appeal to David Naselli’s explanation from his textbook, How to Understand and Apply the New Testament, which has received high praise from nearly every major seminary.

“The Bible’s inerrancy does not mean that copies of the original writings or translations of those copies are inerrant. Copies and translations are inerrant only to the extent that they accurately represent the original writings.” 

David Naselli, How to Understand and Apply the New Testament, 43.

This statement is generally agreeable, if we assume that there is a stable Bible in hand, and a stable set of manuscripts or a printed edition which is viewed as “original.” Unfortunately, neither of these exists in the world of the modern critical text. Not only do we not have the original manuscripts, but there is also no finished product that could be compared to the original. Since the effort of reconstructing the Initial Text is still ongoing, and since we do not have the original manuscripts, this doctrinal statement made by Naselli does not articulate a meaningful doctrine of inspiration or preservation. In stating what appears to be a solid doctrinal statement, he has said nothing at all. In order for this doctrine to have significant meaning, a text that “represents the original writings” would need to be produced. That is why the Reformed in the 16th and 17th centuries were so adamant about their confidence in having the original in hand. In order for any doctrine of Scripture to make sense, the Scriptures could not have fallen away after the originals were destroyed or lost. Doctrinally speaking, the articulation of the doctrine of Scripture demonstrated by Turretin and his contemporaries is necessary because it affirms that God providentially preserved the Scriptures in time and that they had access to those very Scriptures. If the modern critical text claimed to be a definitive text, like the Reformed claimed to have, the modern articulation of the doctrine of Scripture might be sound, but there is no modern critical text that exists as a solid and stable object. It is clear that the doctrine of Scripture and the form of the Scriptures cannot be separated, or the meaning of that doctrine is lost. In order for doctrine to be built on a text, the text must be static. If we are to say that the Bible is inerrant insofar as it represents the original, there must be 1) a stable text and 2) an original to compare that text against. Since neither 1 nor 2 is true, Naselli, along with everybody who agrees with him, has effectively set forth a meaningless doctrinal standard as it pertains to Scripture.

This means that the Reformed doctrine of Scripture is intimately tied to the text they considered to be authentic, inspired, and representative of the Divine Original. The text they had in hand was what is now called the Received Text. Whether it was simply a “default” text does not change the reality that it was the text these men of God had in their hands. It is abundantly clear that the doctrine of Scripture during the time of the Reformation and Post-Reformation was built on the TR, just as the modern doctrine of Scripture is built on the modern critical text and the assumptions used to create it. Further problems arise for the modern doctrine of Scripture because the effort of textual scholarship has shifted from trying to find the original text to finding the Initial Text. Due to this shift, any articulation of Scripture which looks to the modern critical text is based on a concept that does not necessarily exist in modern textual scholarship. The concept of the “original” has moved out of the sights of the editorial teams of Greek New Testaments; therefore it is necessary to conclude that doctrinal statements which rely on the outdated goal of finding the “original” must also be redefined. What this means practically is that there are no doctrinal statements in the modern church which align with the doctrines used to produce modern Bibles.

Due to the doctrine of Scripture being intimately tied to the nature of the text it is describing, the various passages of the New Testament which have been considered inspired have changed over time, and are going to continue changing as the conclusions of scholars vary from year to year. If we take Naselli’s articulation of the doctrine of Scripture as true, this means that there is not one inerrant text of Holy Scripture; there are as many as there are Christians who read their Bible. So in a very real sense, according to the modern articulation of inspiration, the inspired text of the New Testament is not a stable rule of faith. It is only stable relative to crowd consensus, or perhaps at the individual level. A countless multitude of people who adhere to this doctrine of inspiration make individual rulings on Scripture, which effectively means that the Bible is given its authority by virtue of the person making those decisions. Thus, the number of Bibles which may be considered “original” is as great as the number of people reading Bibles. It is due to this reality that the modern doctrine of Scripture has departed from the Reformation era doctrine in at least two ways. The first is that by “original”, the post-Warfield doctrine means the autographs, which no longer exist, and excludes the apographs. The second is that the Bible is only authoritative insofar as it has been judged authoritative by some standard or another. This combination contradicts any doctrine that would have the Scriptures be a stable rule for faith and practice. It is because of these differences that it can be safely said that while the doctrinal articulations may sound similar, they are not remotely the same.

The Reformed doctrine of Scripture in the 16th and 17th centuries is founded upon two principles that differ from those of the post-Warfield era. The first principle of the Reformed is that the Scriptures are self-authenticating, and the second is that they considered the original to be represented and preserved in the text they had in hand. Therefore it seems necessary to understand the Reformation and Post-Reformation Divines through a different lens than the modern perspective, because the two camps are saying entirely different things. A greater effort should be made to understand what exactly the Reformed meant by “every word and letter” in relationship to the text they had in hand, rather than imposing the modern doctrine upon the Reformation and Post-Reformation divines.

Conclusion

The goal of this conversation should be to instill confidence in people that the Bible they are reading is indeed God’s inspired Word. Oftentimes it is more about winning debates and being right than actually giving confidence to Christians that what they have in their hands can be trusted. It is counterproductive for Christians to continue to fight over textual variants in the way that they do, especially considering the paper-thin modern articulations of the doctrine of Scripture. It is stated by some that receiving the Reformation era Bible is “dangerous”, yet I think what is more dangerous is to convince somebody that they should not trust this Bible, which is exactly what happens when somebody takes the time to actually explain the nuances of modern textual criticism. These attacks are especially harmful when the Bible being attacked is the one that the Protestant religion was founded upon, and the only text that carries with it a meaningful doctrine of Scripture. Christians need to consider very carefully the claims made about the Reformation era text which say it is not God’s Word, or that it is even dangerous to use. I cannot emphasize enough the harm this argument has done to the Christian religion as a whole. The constant effort to “disprove” the Reformation era text is a strange effort indeed, especially if “no doctrines are affected”. The alternative, which has been a work in progress since before 1881, and is still a work in progress today, offers no assurance that Christians are actually reading the Bible. In making the case that the Received Text and translations made from it should not be used, critics have taken one Bible away and replaced it with nothing but uncertainty.

The claim made by advocates of the Received Text is simple, and certainly not dangerous. The manuscripts that the Reformed had in the 16th century were as they claimed – of great antiquity and of the highest quality. The work done in that time resulted in a finished product, which continued to be used for hundreds of years after. That Bible, in its various translations, quite literally changed the world. If the Bible of the 16th–18th centuries is so bad, I cannot understand why people who believe it to be a gross corruption of God’s Word still continue to read the theological works of those who used it. Further, it is difficult to comprehend how a Bible that is said to accomplish the same purpose as modern Bibles could be so viciously attacked by those that oppose it. If all solid translations accomplish the same redemptive purpose, according to the modern critical doctrine, why would it make any sense to attack it? After spending 10 years reading modern Bibles, I simply do not see the validity of the claim that the Reformation era text is “dangerous” in any way. Christians do not need to “beware” of the text used by the beloved theologians of the past. At the end of the day, I think it is profitable for Christians to know that traditional Bibles are not scary, and have been used for centuries to produce the fullest expression of Christian doctrine in the history of the world. When the two doctrinal positions are compared, there is no strong reason to appeal to the axioms of Westcott and Hort, or Metzger, or even the CBGM; they are all founded on the vacuous doctrine of Scripture which requires that the current text be validated against the original, which cannot be done. There is no theological or practical value in constantly changing the words printed in our Bibles, and this practice is in fact detrimental to any meaningful articulation of what Scripture is. I have not once talked to anybody who has been given more confidence in the Word of God by this practice. In fact, the opposite is true in every real life encounter I’ve had.

It is said that the Received Text position is “pious” and “sanctimonious”, but I just don’t see how a changing Bible, with changing doctrines, is even something that a conservative Christian would seriously consider. If Christians desire a meaningful doctrine of Scripture, the modern critical text and its axioms are incapable of producing it.

Is the CBGM God’s Gift to the Church?

Introduction

It is stated by some that the Coherence Based Genealogical Method is a blessing to the church, even gifted to the church by way of God’s providence. I thought it would be helpful to examine this claim. Unfortunately, those who have made such statements regarding the Editio Critica Maior (ECM) and the CBGM do not seem to have provided an answer as to why this is the case. This is often a challenge in the textual discussion. Assertions and claims can be helpful for understanding what somebody believes, but oftentimes fall short of explaining why they believe it to be true. The closest explanation that I have heard as to why the CBGM is a blessing to the church is that it has been said that it can detail the exact form of the Bible as it existed around 125 AD. Again, this is simply an assertion, and needs to be demonstrated. I have detailed in this article why I believe that claim is not true.

In this article, I thought it would be helpful to provide a simple explanation of what the CBGM is, how it is being used, and the impact that the CBGM will have on Bibles going forward. The discerning reader can then decide for themselves if it is a blessing to the church. If there is enough interest in this article, perhaps I can write more at length later. I will be using Tommy Wasserman and Peter Gurry’s book, A New Approach to Textual Criticism: An Introduction to the Coherence-Based Genealogical Method, as a guide for this article.

Some Insights Into the CBGM from the Source Material 

New Testament textual criticism has a direct impact on preaching, theology, commentaries, and how people read their Bible. The stated goal of the CBGM is to help pastors, scholars, and laypeople alike determine, “Which text should be read? Which should be applied?…For the New Testament, this means trying to determine, at each place where our copies disagree, what the author most likely wrote, or failing this, at least what the earliest text might have been” (1, emphasis mine). Note that one of the stated objectives of the CBGM is to find what the author most likely wrote, and when that cannot be determined, what the earliest text might have been.

Here is a brief definition of the CBGM as provided by Dr. Gurry and Dr. Wasserman:

“The CBGM is a method that (1) uses a set of computer tools (2) based in a new way of relating manuscript texts that is (3) designed to help us understand the origin and history of the New Testament text” (3). 

The way that this method relates manuscript texts is an adaptation of Karl Lachmann’s common error method, as opposed to manuscript families and text types. This is in part due to the fact that “A text of a manuscript may, of course, be much older than the parchment and ink that preserve it” (3). The CBGM is primarily concerned with developing genealogies of readings and how variants relate to each other, rather than manuscripts as a whole. This is done by using pregenealogical coherence (algorithmic analysis) and genealogical coherence (editorial analysis). The method examines places where manuscripts agree and disagree to gain insight into which readings are earliest. In cases where two manuscripts disagree at the same place, the new method can help determine one of two things:

  1. One variant gave birth to another, therefore one is earlier
  2. The relationship between two variants is uncertain

It is important to keep in mind that the CBGM is not simply a pure computer system. It requires user input and editorial judgement. “This means that the CBGM uses a unique combination of both objective and subjective data to relate texts to each other…the CBGM requires the user to make his or her own decisions about how variant readings relate to each other.” (4,5). That means that determining which variant came first “is determined by the user of the method, not by the computer” (5). The CBGM is not purely an objective method. People still determine which data to examine using the computer tools, and ultimately what ends up in the printed text will be the decisions of the editorial team.
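To make the distinction concrete, here is a minimal sketch in Python of what pregenealogical coherence amounts to at its simplest: the percentage of agreement between two witnesses, counted only at the variation units where both are extant. The collation data and witness names below are invented for illustration; this is not the ECM’s data or the actual software used by the editors, only the basic idea behind the computer-assisted step.

    # A toy collation: for each variation unit, the reading attested by each witness.
    # Witness names ("W1", "W2", "W3") and readings ("a", "b") are hypothetical.
    collation = {
        "unit_1": {"W1": "a", "W2": "a", "W3": "b"},
        "unit_2": {"W1": "a", "W2": "b", "W3": "b"},
        "unit_3": {"W1": "a", "W2": "a", "W3": "a"},
        "unit_4": {"W1": "b", "W2": "b"},  # W3 is not extant at this unit
    }

    def pregenealogical_coherence(w1, w2, collation):
        """Percentage agreement between two witnesses at the units where both are extant."""
        shared = [u for u, readings in collation.items() if w1 in readings and w2 in readings]
        if not shared:
            return 0.0
        agreements = sum(1 for u in shared if collation[u][w1] == collation[u][w2])
        return 100.0 * agreements / len(shared)

    print(pregenealogical_coherence("W1", "W2", collation))  # 75.0
    print(pregenealogical_coherence("W1", "W3", collation))  # about 33.3

Genealogical coherence builds on these percentages by adding a direction to each disagreement – which reading is judged to have given rise to which – and, as the authors note, that direction comes from human decisions, not from the computer.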

The average Bible reader should know that the CBGM “has ushered in a number of changes to the most popular editions of the Greek New Testament and to the practice of New Testament textual criticism itself…Clearly, these changes will affect not only modern Bible translations and commentaries but possibly even theology and preaching” (5). Currently, the CBGM has been partially applied to the data in the Catholic Epistles and Acts, and DC Parker and his team are presently working on the Gospel of John. The initial inquiry of this article was to examine the CBGM to determine if it is indeed a “blessing to the church”. In order for this to be the case, one would expect the new method to introduce more certainty for Bible readers in regard to variants. Unfortunately, the opposite seems to be true.

“Along with the changes to the text just mentioned, there has also been a slight increase in the ECM editors’ uncertainty about the text, an uncertainty that has been de facto adopted by the editors of the NA/UBS…their uncertainty is such that they refuse to offer any indication as to which reading they prefer” (6,7). 

“In all, there were in the Catholic Letters thirty-two uses of brackets compared to forty-three uses of the diamond and in Acts seventy-eight cases of brackets compared to 155 diamonds. This means that there has been an increase in both the number of places marked as uncertain and an increase in the level of uncertainty being marked. Overall, then, this reflects a slightly greater uncertainty about the earliest text on the part of the editors” (7).   

This uncertainty has led “the editors to abandon the concept of text-types traditionally used to group and evaluate manuscripts” (7). What this practically means is that the Alexandrian texts, which were formerly called a text-type, are no longer considered as such. The editors of the ECM “still recognize the Byzantine text as a distinct text form in its own right. This is due to the remarkable agreement that one finds in our late Byzantine manuscripts. Their agreement is such that it is hard to deny that they should be grouped…when the CBGM was first used on the Catholic Letters, the editors found that a number of Byzantine witnesses were surprisingly similar to their own reconstructed text” (9,10).

Along with abandoning the notion that the Alexandrian manuscripts represent a text type, another significant shift has occurred. Rather than pursuing what has historically been called the Divine Original or the Original Text, the editors of the ECM are now after what is called the Initial Text (Ausgangstext). There are various ways this term is defined, and opinions are split among the editors of the ECM. For example, DC Parker, who is leading the team using the CBGM in the Gospel of John, has stated along with others that there is no good reason to believe that the Initial Text and the Original Text are the same. Others are more optimistic, but the 198 diamonds in Acts and the Catholic Letters may serve as an indication of whether this optimism is warranted by the data. The diamonds indicate places where the reading is uncertain in the ECM.

The computer-based component of the CBGM is often sold as a conclusive means of determining the earliest, or even original, reading. This is not true. “At best, pregenealogical coherence [computer] only tells us how likely it is that a variant had multiple sources of origin rather than just one…pregenealogical coherence is only one piece of the text-critical puzzle. The other pieces – knowledge of scribal tendencies, the date and quality of manuscripts, versions, and patristic citations, and the author’s theology and style – are still required…As with so much textual criticism, there are no absolute rules here, and experience serves as the best guide” (56, 57, emphasis added).

In the past it has been said that textual criticism was trying to build a 10,000-piece puzzle with 10,100 pieces. This perspective has changed greatly since the introduction of the CBGM: “we are trying to piece together a puzzle with only some of the pieces” (112). Not only does the CBGM not have all the data that has ever existed, it is only using “about one-third of our extant Greek manuscripts…The significance of this selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). Further, the CBGM is not omniscient. It will never know how many of the more complex corruptions entered the manuscripts, or the backgrounds and theology of the scribes, or even the purpose for which a manuscript was created. “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That means it is likely that there will be readings produced by the CBGM that were not original or earliest, but will be mistakenly treated as such. “We do not want to give the impression that the CBGM has solved the problem of contamination once and for all. The CBGM still faces certain problematic scenarios, and the loss of witnesses plagues all methods at some point” (115).

One of the impending realities that the CBGM has created is that there may be a push for individual users, Bible readers, to learn how to use and implement the CBGM in their own daily devotions. “Providing a customizable option would mean creating a version that allows each user to have his or her own editable database” (119,120). There will likely be a time in the near future when the average Bible-reading Christian will be encouraged to understand and use this methodology, or at least pastors and seminarians will. If you are not somebody who has the time or ability to do this, this could be extremely burdensome. Further, the concept of a “build your own Bible” tool seems like a slippery slope, though it is a slope we are already sliding down, at least for those who make their own judgements on texts in isolation from the general consent of the believing people of God.

Conclusion

Since the CBGM has not been fully implemented, I suppose there is no way to say with absolute confidence whether or not it is a “blessing to the church”. I will say, however, that I believe the church should be the one to decide on this matter, not scholars. It seems that the places where the CBGM has already been implemented have spoken rather loudly on the matter, in at least 198 places. Hopefully this article has been insightful, and perhaps has shed light on the claims that many are parroting which say that the CBGM is a “blessing to the church” or an “act of God’s providence”. If anything, the increasing amount of uncertainty that the CBGM has introduced to the previous efforts of modern textual criticism should give cause for pause, because the Bibles that most people use are based on methodologies that modern scholarship has abandoned.

Helpful Terms

Coherence: The foundation for the CBGM, coherence is synonymous with agreement or similarity between texts. Within the CBGM the two most important types are pregenealogical coherence and genealogical coherence. The former is defined merely by agreements and disagreements; the latter also includes the editors’ textual decisions in the disagreements (133).   

ECM: The Editio Critica Maior, or Major Critical Edition, was conceived by Kurt Aland as a replacement for Constantin von Tischendorf’s well-known Editio octava critica maior. The aim of the ECM is to present extensive data from the first one thousand years of transmission, including Greek manuscripts, versions, and patristics. Currently, editions for Acts and the Catholic Letters have been published, with more volumes in various stages of completion (135).

Stemma: A stemma is simply a set of relationships either of manuscripts, texts, or their variants. The CBGM operates with three types that show the relationship of readings (local stemmata), the relationship of a single witness to its stemmatic ancestors (substemma), and the relationships of all the witnesses to each other (global stemmata) (138). 
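For readers who find a concrete picture helpful, a local stemma can be thought of as a tiny directed graph over the readings at a single variation unit. The sketch below, again in Python with hypothetical readings rather than ECM data, is only an illustration of the concept.

    # A hypothetical local stemma for one variation unit.
    # Each reading maps to the reading(s) the editors judge it to have arisen from;
    # None marks the reading judged earliest, and "?" marks a relationship the
    # editors leave undecided.
    local_stemma = {
        "a": None,     # reading judged earliest at this unit
        "b": ["a"],    # reading "b" judged to have arisen from "a"
        "c": ["?"],    # the source of reading "c" is left open
    }

The substemma and the global stemma then describe the same kind of relationships at the level of whole witnesses rather than individual readings.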

Textual Methodology, Text Platforms, and Translation

Introduction

The conversation of textual criticism, which is properly called textual scholarship, has made its way to popular forums, Facebook threads, and even churches. Perhaps this has been the case for some time, but it seems that there has been a major uptick in people who have expressed interest in the topic. Oftentimes terminology muddles the conversation, so the goal of this article is to provide proper category distinctions that will hopefully bring more clarity at a popular level. Due to popular level podcasts, articles, and books, the average onlooker of the conversation has been taught to conflate the various categories within the conversation. A great example of this is the constant confusion between translation methodology and text-critical methodology. Despite common thought, this conversation is not primarily concerned with which Bible translation one uses. That is simply the practical implementation of one’s viewpoint on the topic. At a basic level, this conversation can be simplified into three categories: 1) textual methodology, 2) text platform, and 3) translation.

Textual Methodology, Text Platforms, and Translations

The methodology one chooses is directly related to the doctrine of Scripture, namely inspiration and preservation. At its foundational level, a person’s understanding of the nature of Scripture drives all other opinions regarding the matter. The two competing views right now are whether Scripture has been generally or partially preserved, or particularly preserved. This methodology flows into which underlying texts one believes to be the “best” or “original”. It can be helpful to discuss the differences between text platforms, but ultimately the conversation comes down to how one answers the question, “Has the Bible been preserved or not?” The final category is simply the practical implementation of the first two categories, and results in which Bible one reads. The major methodologies are modern reasoned eclecticism, equitable eclecticism, majority text or Byzantine priority, and the Confessional Text position (Traditional Text, Ecclesiastical Text, etc.). Each of these methodologies has its own canons and systems which are distinct from the others. The final category, translation, is not technically a text-critical category, but at a popular level it inevitably comes up.

Translation methodology is itself partially related to the first two categories, because every translation must make use of a base text, sometimes called a “text platform”. That base text is chosen for theological and methodological reasons. At its foundation, however, translation is simply taking a text from one language to another. That means that a translation can use an extremely accurate original-language text and still be of poor quality, depending on the translation committee’s methodology and knowledge of both the original text and the target language. That is why many who believe that the Modern Critical Text is the best can still reject the NLT or NIV as a sound translation in favor of the ESV or NASB.

Many popular level discussions simplify the conversation to “KJV Onlyists” vs. the rest of the world, but that simply does not work if one wants to engage charitably in the conversation. There is a depth of nuance that contributes to the discussion, and many people read the KJV for reasons completely independent of their understanding of textual scholarship. The same can be said for people who read the ESV, NASB, NIV, or any other translation for that matter. If I were to ask somebody which translation they read, and they responded, “I only read the ESV. It’s the translation that scholars trust, and it’s easy for me to understand”, would it be fair for me to call them an “ESV Onlyist”? Even if somebody had an informed opinion on textual methodology and decided to only read the ESV as a result, would it be fair for me to call that person an “ESV Onlyist”? No, it wouldn’t. Is it fair for somebody in one of the other methodological camps to call somebody who defends the modern critical text a “Modern Critical Text Onlyist”? Again, no, it isn’t. Titles like these only serve to add unnecessary hostility, division, and confusion to the conversation.

It is especially important to understand these category distinctions, considering a great effort has been made to intentionally conflate them for one reason or another. Unfortunately, it seems to be the case that due to popular level presentations on the topic, the vast majority of Christians have actually been instructed to make these conflations. This is evidenced in the fact that most people, including some scholars, do not know the difference between a majority text position and a confessional text position, or that the KJV and ESV are translated from different text platforms. Popular level literature has actually instructed Christians to define anybody who doesn’t read a modern Bible as a “KJV Onlyist”, even those who don’t read the KJV. At a popular level, Christians do not understand the difference between textual methodology and translation methodology, or even understand the methodologies being employed to produce the Nestle-Aland/UBS printed texts that modern Bibles are translated from. For most Christians, the conversation has been framed as “KJV Onlyism” vs. the “correct view”. 

Conclusion

The kind of argumentation employed to defend the texts produced by modern reasoned eclecticism often introduces a terrible amount of confusion into the conversation, which disallows any sort of meaningful discussion. My goal in writing this article is to provide clarity by offering some important category distinctions. The first category is textual methodology, which is based upon an individual’s doctrines of inspiration and preservation. The second category is text platform, which is selected based on an individual’s textual methodology. The final category is translation methodology, which is the practical implementation of the first two categories. By allowing for these category distinctions, a productive conversation should be possible.

Two Different Texts

Introduction

In my articles, I frequently comment that the Modern Critical Text and the Traditional Text represent two different forms of the text of the New Testament. Some disagree, and use this website to demonstrate that they are not that different. The site is helpful as a comparative tool between the ESV and KJV, though it is not technically a comparison of the Critical Text and Traditional Text. First, it is a comparison of translations, which means it is not comparing Greek texts, but translations of those texts. So while it gives the reader a general idea of the differences, translational choices may obscure the actual differences between the two underlying texts. Second, it does not fully compare the Critical Text and the Traditional Text, as it includes comparisons of passages in a way that downplays the differences. An example would be that the comparative tool includes the Pericope Adulterae in the Critical Text and excludes the Longer Ending of Mark from both texts. This gives the average reader the impression that there are really no differences. A full comparison would include the verses in the TR up to verse 20 in Mark, and exclude John 7:53-8:11 from the Critical Text. I would expect the tool to include these differences, as well as to clarify that it is a comparison between two translations and not between the TR and CT.

Are We Discussing Two Different Text Forms?

The exclusion of certain verses from comparison highlights an important fact: in order to say that the Modern Critical Text and Traditional Text are essentially the same, one must ignore or downplay the fact that they are not the same in certain important places. It is because of these important places that there is disagreement at all. If the differences were that minor, we would be having a conversation over translation methodology, and that’s about as deep as it would go. That is not to say that somebody cannot be saved by reading a Bible translated from the Modern Critical Text, but a careful examination of the two underlying texts reveals that they are different. One can argue how significant these differences are, but the fact remains that there are differences which distinguish the two texts.

That being said, from a certain perspective, modern Bibles and traditional Bibles are both Bibles. They both contain the 66 books of the Old and New Testaments, and they mostly contain the same content. Thus the important conversation should be centered around two topics – the difference between underlying texts and translation methodology. In creating a comparison tool that is supposed to compare the TR and the CT, and then using translations of these texts as the point of comparison, the two categories of text and translation are blended. It is interesting to say that the two texts are essentially the same, because if that were the case I’m not sure anybody would be seriously having this discussion at all. It is because these two texts are so different that there is even a conversation. The existence of these two opposing positions on the text of the New Testament refutes the idea that the texts are the same.

I am not saying that sound doctrine cannot be taught from a modern Bible such as the ESV or NASB, just that the underlying texts of modern Bibles are different from those of traditional Bibles such as the KJV. Many sound Biblical teachers employ modern Bibles in their ministry and are not heretics. The problem is that the standard for judging a Bible has been set at “can sound doctrine be taught from it?” If this were the standard, we would have to throw out every Bible, because false doctrine is readily taught from all translations. This standard is somewhat arbitrary and obfuscates the point of the discussion entirely. An orthodox understanding of the Trinity can be brought out of the New World Translation (in fact this is a great apologetic tool), but that doesn’t mean that Protestants should read the New World Translation. Thus, the standard of “Can all the doctrines be proved from this translation?” is not a meaningful standard for determining the quality of a text or translation. The conversation is therefore rightly centered on the authenticity of the underlying texts used for translation.

Two Different Text Forms

If the Modern Critical Text and the Traditional Text were really as similar as is claimed, then there would be no discussion at all. It would be as simple as answering the question, “which Bible is the best translation of the Greek?” It would simply be a conversation over vocabulary choices and whether formal (KJV, NASB, ESV) or dynamic (NIV) equivalence is better. Once it is admitted that there is indeed a difference, a productive conversation about how significant those differences are can take place. That being said, what is it about these two texts that makes them “two different text forms”?

The primary difference has to do with the actual Greek manuscripts, not a difference between the translational choices of the KJV and ESV. The Modern Critical Text in its popular printed form (NA/UBS) is based largely on Codex Vaticanus, a fourth century uncial manuscript which is stored at the Vatican. All of the major differences can generally be found within this manuscript or Codex Sinaiticus. These are the two manuscripts referred to in modern Bibles as “earliest and best”. The Vatican Codex was first made use of in text-critical efforts when Desiderius Erasmus consulted it in the production of his Greek and Latin New Testaments. Erasmus rejected its readings, however, claiming that they seemed to be back-translations of corrupted Latin versional readings rather than being copied from a Greek manuscript. Frederick Nolan, a 19th century theologian and linguist, writes this regarding Erasmus and the Vatican Codex:

“With respect to Manuscripts, it is indisputable that he [Erasmus] was acquainted with every variety which is known to us; having distributed them into two principal classes, one of which corresponds with the Complutensian edition, the other with the Vatican manuscript. And he has specified the positive grounds on which he received the one and rejected the other” (Nolan, Frederick.  An Inquiry into the Integrity of the Greek Vulgate, or Received Text of the New Testament. 413, 414). 

Nolan also says regarding the Vatican Codex, “The affinity existing between the Vatican manuscript and the Vulgate is so striking, as to have induced Dr. Bentley and M. Westein to class them together” (Ibid. 61).

The first major use of this manuscript in the modern period was by Westcott and Hort, who primarily employed Vaticanus and Sinaiticus as a base text to produce their Greek New Testament in 1881. This is the text that the American Standard Version was translated from, which eventually gave birth to the Revised Standard Version and finally the English Standard Version. These manuscripts would eventually be classified as Alexandrian, based on the region in Egypt where they are thought to have originated (though recent scholarship has revisited this idea). Out of the close to 6,000 manuscripts available today, these Alexandrian manuscripts represent fewer than fifty. The vast majority of manuscripts represent a different text form, traditionally called the Byzantine text. The Textus Receptus follows the Byzantine text more closely than the Alexandrian text. So while one might make a case that the Alexandrian and Byzantine texts are similar enough to both be considered a form of the Bible, they are distinct enough to be identified as separate classes of manuscripts, and thus different forms of Bibles.

Even if one were to make a case that the Alexandrian texts and Byzantine texts were “close enough”, two major points of comparison stand between them that set them apart entirely – the Longer Ending of Mark (Mark 16:9-20) and the Pericope Adulterae (John 7:53-8:11). That is a total of 23 verses, present in the Byzantine texts, that are simply missing from the Alexandrian texts in two places. Even if one believes the modern claim that the Alexandrian texts are “earliest and best”, it does not follow that these are the same text form. These texts also exclude John 5:4, Romans 16:24, and others. In total, there are enough differing verses to exceed the number of verses in the entire book of Jude. If these texts are so similar, I do not see why the Alexandrian texts have been classed in a different category than the majority of manuscripts.

Conclusion

The goal of this article is to support the claim that the Modern Critical Text and the Traditional Text are indeed two forms of the New Testament. They may both be considered a New Testament, but they certainly are not the same New Testament. The Modern Critical Text does not include a resurrection appearance account in all four Gospels, and is missing a number of verses when compared to the majority of manuscripts. Additionally, the Modern Critical Text represents a handful of manuscripts which were produced around the third and fourth centuries and do not appear to have been copied after that point in time.

There are two major schools of thought as to what these Alexandrian texts are in relation to the greater manuscript tradition. In the Modern Critical school of thought, they are the earliest texts from which the rest of the manuscripts evolved. In the Confessional Text school of thought, they are an aberrant text stream that was not copied past the fourth century. These two forms may have spawned at the beginning of the same river, but by the third and fourth century they split and headed in different directions. The Alexandrian stream seems to have met its end shortly after that split, if the thousands of manuscripts available today are any indication. That is why focusing on translational differences between the KJV and ESV is not the primary concern for those who reject modern Bibles. If the Alexandrian form of the text is truly an aberrant stream, then the Modern Critical Text is not truly the “earliest and best”; it is a strange blip which disappeared as quickly as it appeared. Hopefully this sheds light on why those in the Confessional Text camp do not read modern Bibles. Translation methodology certainly has a role in the discussion, but a primary reason for siding with traditional Bibles has to do with the rejection of the texts modern Bibles use in translation.

The Most Dangerous View of the Holy Scriptures

Introduction

Quite often in the textual discussion, it is boldly proclaimed that “our earliest and best manuscripts” are Alexandrian. Yet this statement introduces confusion at the start, because there are sound objections as to whether it is even appropriate to use a term such as “Alexandrian” when describing the “earliest and best manuscripts”, as though they were a text family or text type. There does not seem to be an “Alexandrian” text type, only a handful of manuscripts that have historically been called Alexandrian. This conclusion comes from the more precise methods now being employed, which allow quantitative analysis to be performed on the variant units of these manuscripts. This analysis has demonstrated that the manuscripts called Alexandrian do not meet the threshold of agreement required to be considered a textual family. Tommy Wasserman explains this shift in thought:

“Although the theory of text types still prevails in current text-critical practice, some scholars have recently called to abandon the concept altogether in light of new computer-assisted methods for determining manuscript relationships in a more exact way. To be sure, there is already a consensus that the various geographic locations traditionally assigned to the text types are incorrect and misleading” (Wasserman, http://bibleodyssey.org/en/places/related-articles/alexandrian-text). 

Thus, the only place the name “Alexandrian” might occupy in this discussion is one of historical significance, or possibly to serve in identifying the handful of manuscripts that bear the markers of Sinaiticus and Vaticanus, which disagree heavily among themselves, as the Munster Method has demonstrated (65% agreement between 01 and 03 in the places examined in the Gospels, and 26.4% agreement with the Majority Text; see http://intf.uni-muenster.de/TT_PP/Cluster4.php). So in using the terminology of “Alexandrian”, one is already importing into the conversation confusion from an era of textual scholarship that is on its way out. Regardless of whether or not it is appropriate to use the term “Alexandrian”, it may be granted that it is a helpful descriptor for the sake of discussion, since the modern critical text in the most current UBS/NA platform generally agrees with at least two of these manuscripts (03 at 87.9% and 01 at 84.9%) in the places examined (see Gurry & Wasserman, 46).
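For readers curious how agreement percentages like those cited above are calculated, the following is a minimal Python sketch of the general idea: computing pairwise agreement between witnesses across variant units. The sigla and readings below are invented placeholders, and this is not the INTF/CBGM software; it only illustrates the kind of calculation that a figure like “65% agreement” summarizes.

    # Minimal sketch of pairwise agreement between manuscripts at variant units.
    # The data below are invented placeholders, not real collation data.
    from itertools import combinations

    # Each variant unit maps a manuscript siglum to the reading it attests;
    # None means the manuscript is lacunose (not extant) at that unit.
    variant_units = [
        {"01": "a", "03": "a", "Byz": "b"},
        {"01": "a", "03": "b", "Byz": "b"},
        {"01": "c", "03": "c", "Byz": "c"},
        {"01": "a", "03": None, "Byz": "a"},
    ]

    def agreement(ms1, ms2, units):
        """Percent of units, where both witnesses are extant, in which they agree."""
        shared = [u for u in units if u.get(ms1) is not None and u.get(ms2) is not None]
        if not shared:
            return 0.0
        agree = sum(1 for u in shared if u[ms1] == u[ms2])
        return 100.0 * agree / len(shared)

    for ms1, ms2 in combinations(["01", "03", "Byz"], 2):
        print(f"{ms1} vs {ms2}: {agreement(ms1, ms2, variant_units):.1f}% agreement")

On this toy data the script simply prints an agreement figure for each pair of witnesses; the real analyses run over thousands of variant units and far more witnesses, but the underlying arithmetic is the same kind of percentage.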

The bottom line is this – the work being done with the new methods (CBGM/Munster Method) is still ongoing, and will be ongoing until at least 2032. So any arguments made on behalf of the critical text are liable to shift as the effort continues and new data comes to light. As a result of this developing effort, any attempt to defend such texts is operating from an incomplete dataset, on the terms of the very methods being defended. Granting the general instability of the modern critical text, at least until the Editio Critica Maior (ECM) is completed, the conversation itself is likely to change over the next 12 years. In the meantime, it seems that the most productive conversation to have is the one that discusses the validity of the method itself, since the dataset is admittedly incomplete.

Is the Munster Method Able to Demonstrate the Claim that the “Alexandrian” Manuscripts Are Earliest and Best?

The answer is no, and the reason is the method being employed. I have worked as an IT professional for 8 years, specifically in data analysis and database development, which gives me a unique perspective on the CBGM. An examination of the Munster Method (CBGM) will show that it is insufficient to arrive at any conclusion on which text is earliest. While the method itself is actually quite brilliant, its limitations prevent it from providing any sort of absolute conclusion on which text is earliest, or original, or best. There are several flaws that should be examined if those who support the current effort want to properly understand the method they are defending.

  1. In its current form, it does not factor in versional or patristic data (or texts as they have been preserved in artwork for that matter)
  2. It can only perform analysis on the manuscripts that are extant, or surviving (so the thousands of manuscripts destroyed in the Diocletian persecution, or WWI and WWII can never be examined, for example)    
  3. The method is still vulnerable to the opinions and theories of men, which may or may not be faithful to the Word of God

So the weaknesses of the method are threefold – it does not account for all the data currently available, it will never have the whole dataset, and even when the work is finished, the analysis will still need to be interpreted by fallible scholars. Its biggest flaw, however, is that the analysis is being performed on a fraction of the dataset. Not only are defenders of the modern critical text defending an incomplete dataset while the work is still ongoing, but the end product of the work itself will also be operating from an incomplete dataset. So to defend this method is to defend the conclusions of men drawn from the analysis of an incomplete dataset of an incomplete dataset. The scope of the conclusions this method can produce is limited to the manuscripts we have today. And since there is an overwhelming bias in the scholarly world toward one subset of those manuscripts, it is more than likely that the conclusions drawn from the analysis will look very similar, if not identical, to the conclusions drawn by the previous era of textual scholarship (represented by Metzger and Hort). And even if these biases are overturned by the data analysis, the conclusions will be admittedly incomplete because the data is incomplete. Further, quantitative analysis will never be free of the biases of those who handle the data. Dr. Peter Gurry comments on one weakness in the method in his book A New Approach to Textual Criticism:

“The significance of the selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). 

Further, the method itself is not immune to error. Dr. Gurry comments that “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That is to say, after all the computer analysis is done, the scholars making textual decisions can still draw incorrect conclusions on which reading is earliest, selecting a later reading as the earliest. In the current iteration of the Munster Method, there are already many places where, rather than selecting a potentially incorrect reading, the text is marked to indicate that the evidence is equally strong for two readings. These places are indicated by a diamond in the apparatus of the current edition of the Nestle-Aland text, produced in 2012. There are 19 of these in 1 and 2 Peter alone (see NA28). That is 19 places in just two books of the Bible where the Munster Method has not produced a definitive conclusion on the data. That means that even when the work is complete, there will be thousands of differing conclusions drawn on which readings should be adopted in a multitude of places. This is already the case in the modern camp without application of the CBGM; a great example is Luke 23:34, where certain defenders of the modern critical text have arrived at alternative conclusions on the originality of the verse.

There is one vitally important observation that must be noted when it comes to the current effort of textual scholarship. The current text-critical effort, while the most sophisticated to date, is incapable of determining the earliest reading due to limitations in both the data and the methodology. A definitive analysis simply cannot be performed on an incomplete dataset. And even if the dataset were complete, no dataset is immune to the opinions of flawed men and women.

An Additional Problem Facing the Munster Method

There is one more glaring issue that the Munster Method cannot resolve. There is no way to demonstrate that the oldest surviving manuscripts represent the general form of the text during the time period in which they are alleged to have been created (3rd – 4th century). An important component of quantitative analysis is securing a data set that is generally representative of the whole population of data. An approximate sample may be fine in statistical analysis of a general population, but the effort at hand cannot settle for that kind of approximation, because the Word of God is being discussed, which is said to be perfectly preserved by God. That means that the sample of data being analyzed must be representative of the whole. The reality is that the modern method is really performing an analysis of the earliest surviving manuscripts, which do not represent the whole, against the whole of the available dataset.
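To illustrate the representativeness problem in the abstract, here is a small hypothetical Python simulation. Every number in it (the size of the tradition, the survival rates) is invented purely for illustration and makes no claim about the actual manuscript tradition; the point is only that when survival is not representative, the surviving sample can give a very different picture than the whole population would.

    # Hypothetical illustration of sampling bias: if survival is not representative,
    # the surviving witnesses can misrepresent the whole tradition.
    # All numbers are invented for illustration only.
    import random

    random.seed(0)

    population = ["form_A"] * 900 + ["form_B"] * 100    # the full (unknowable) tradition
    survival_rate = {"form_A": 0.01, "form_B": 0.20}    # biased survival, e.g. by climate or disuse

    survivors = [ms for ms in population if random.random() < survival_rate[ms]]

    def share(form, manuscripts):
        """Percentage of the given manuscripts attesting the given text form."""
        return 100.0 * manuscripts.count(form) / len(manuscripts)

    print(f"Whole population: {share('form_B', population):.0f}% form_B")
    print(f"Survivors only  : {share('form_B', survivors):.0f}% form_B (n={len(survivors)})")

In this contrived example the minority form dominates the survivors simply because it survived at a higher rate, which is exactly why an analysis restricted to surviving witnesses cannot, by itself, establish what the whole tradition looked like.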

It is generally accepted among modern scholarship that the Alexandrian manuscripts represent the text form that the whole church used in the third and fourth century. This is made evident when people say things like, “The church wasn’t even aware of this text until the 1500s!” or “This is the text they had at Nicea!” Yet such claims are woefully lacking any sort of proof, and in fact, the opposite can be demonstrated to be true. If it can be demonstrated that the dataset is inadequate as it pertains to the whole of the manuscript tradition, or that the dataset is incomplete, then the conclusions drawn from the analysis can never be said to be absolutely conclusive. There are two points I will examine to demonstrate the inadequacy of the dataset and methodology of the CBGM, which disallow it from serving as a final authority on the original form of the New Testament.

First, I will examine the claim that the manuscripts generally known as Alexandrian were the only texts available to the church during the third and fourth centuries. This is a premise that must be proved in order to demonstrate that the conclusions of the CBGM represent the original text of the New Testament. In order to make such a claim, one has to adopt the narrative that the later manuscripts represented in the Byzantine tradition were a development, an evolution, of the New Testament text. On this narrative, the later manuscripts which became the majority were the product of scribal mischief and the revisionist meddling of the orthodox church, and not a separate tradition that goes back to the time of the Apostles. This narrative requires the admission that the Alexandrian texts evolved so heavily that by the medieval period the text had transformed into an entirely different Bible, with a number of smoothed out readings and even additions of entire passages and verses which were received by the church as canonical! Since this cannot be supported by any real understanding of preservation, the claim has to be made that the true text evolved, and that the original remains somewhere in the texts that existed prior to the scandalous revision effort of Christians throughout the ages. This is why there is such a fascination surrounding the Alexandrian texts, and a determination by some to “prove” them to be original (which is impossible, as I have discussed).

That being said, can it be demonstrated that these Alexandrian manuscripts were the only texts available to the church during the time of Nicea? The simple answer is no, and the evidence clearly shows that this is not the case at all. First, the number of patristic quotations of Byzantine readings demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. One can point to Origen as the champion of the Alexandrian text, but Origen wasn’t exactly a bastion of orthodoxy, and I would hesitate to draw any conclusions other than the fact that after him, the church essentially woke up and found itself entirely Arian, or holding to some other form of heterodoxy as it pertained to Christ and the Trinity. Second, the existence of Byzantine readings in the papyri demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. Finally, Codex Vaticanus, one of the chief exemplars of the Alexandrian texts, is itself proof that other forms of the text existed at the time of its creation. This is chiefly demonstrated by the fact that there is a space the size of 11 verses at the end of Mark where the text should be. This space completely interrupts the otherwise uniform format of the codex, which indicates that the scribes were aware that the Gospel of Mark did not end at, “And they went out quickly, and fled from the sepulchre; for they trembled and were amazed: neither said they any thing to any man; for they were afraid.” They were either instructed to exclude the text, or did not have a better copy as an exemplar which included it. In any case, they were certainly aware of other manuscripts that had the verses in question, which points to the existence of other manuscripts contemporary to Sinaiticus and Vaticanus. Some reject this analysis of the blank space at the end of Mark as it applies to Sinaiticus (which also has a blank space), but additional reasons have been offered for why this is the case nonetheless; see this article for more. James Snapp notes that “the existence of P45 and the Old Latin version(s), and the non-Alexandrian character of early patristic quotations, supports the idea that the Alexandrian Text had competition, even in Egypt.” Therefore it is absurd to claim that every manuscript circulating at the time looked the same as these two exemplars, especially considering the evidence that other text forms certainly existed.

Second, I will examine the claim that the Alexandrian manuscripts represent the earliest form of the text of the New Testament. It can easily be demonstrated that these manuscripts do not represent all of their contemporary manuscripts, but that is irrelevant if they truly are the earliest. Yet the current methodology has absolutely no right to claim that it is capable of proving such an assertion. Since the dataset does not include the other manuscripts that clearly existed alongside the Alexandrian manuscripts, one simply cannot draw any conclusions regarding the supremacy of those texts. One must jump from the espoused method to conjecture and storytelling to do so. Those defending the modern text often boldly claim that fires, persecution, and war destroyed a great number of manuscripts. That is quite true, and it needs to be considered when making claims regarding the manuscripts that survived and clearly were not copied any further. One has to seriously ponder why, in the midst of the mass destruction of Bibles, the Alexandrian manuscripts were considered so unimportant that they weren’t used in the propagation of the New Testament, despite the clear need for such an effort. Further, these manuscripts are so heavily corrected by various scribes that it is clear they weren’t considered authentic in any meaningful way.

Even if the Alexandrian manuscripts represent the “earliest and best”, there is absolutely no way of determining this to be true due to the simple fact that the dataset from that time period is so sparse. In fact, the dataset from this period only represents a text form that is, quantitatively speaking, aberrant. It is evident that other forms of the text existed, and though those manuscripts no longer survive, the form of those texts survives in the manuscript tradition as a whole. The fact remains, there are no contemporary data points against which to compare the Alexandrian manuscripts to demonstrate this to be true. Further, there are not enough second century data points to compare the third and fourth century manuscripts against to demonstrate that the Alexandrian manuscripts represent any manuscript earlier than when they were created. It is just as likely, if not more likely, that these manuscripts were created as an anomaly in the manuscript tradition. The fact remains that the current methods simply are not sufficient to operate on data that isn’t available. This relegates any form of analysis to the realm of storytelling, which exists in the theories of modern scholars (expansion of piety, scribal smoothing, etc.).

Conclusion

Regardless of which side one takes in the textual discussion, the fact remains that the critiques of the modern methodology as it exists in the CBGM are extremely valid. The method is primarily empirical in its form, and empirical analysis is ultimately limited by the data available. Since the available data will not be complete apart from a massive 1st and 2nd century manuscript find, the method itself will forever be insufficient to provide a complete analysis. The product of the CBGM can never honestly be applied to the whole of the manuscript tradition. Even if we find 2,000 2nd century manuscripts, there will still be no way of validating that those manuscripts represent all of the text forms that existed during that time. As a result, the end product will simply provide an analysis of an incomplete dataset. It should not surprise anybody when the conclusions drawn from this dataset in 2032 simply look like the conclusions drawn by the textual scholarship of the past 200 years. This being the case, the conversation will be forced into the theological realm. If the modern methods cannot prove any one text to be authorial or original, those who wish to adhere to that text will ultimately be forced to make an argument from faith. This is already being done by those who downplay the significance of the 200-year gap in the manuscript tradition from the first to third centuries and say that the initial text is synonymous with the original text.

The fact remains that ultimately those who believe the Holy Scriptures to be the divinely inspired word of God will still have to make an argument from faith at the end of the process. Based on the limitations of the Munster Method (CBGM), I don’t see any reason for resting my faith on an analysis of an incomplete dataset which is more than likely going to lean on the side of secular scholarship when it is all said and done. This is potentially the most dangerous position on the text of Scripture ever presented in the history of the world. This position is so dangerous because it says that God has preserved His Word in the manuscripts, but the method being used cannot ever determine which words He preserved.

The analysis performed on an incomplete dataset will be hailed as the authentic word(s) of God, and the conclusions of scholars will rule over the people of God. It is possible that there will be no room for other opinions in the debate, because the debate will be “settled”. And the settled debate will arrive at the conclusion of, “well, we did our best with what we have but we are still unsure what the original text said, based on our methods”. This effectively means that one can believe that God has preserved His Word, and at the same time not have any idea what Word He preserved. The adoption of such conclusions will inevitably result in the most prolific apostasy the church has ever seen. This is why it is so important for Christians to return to the old paths of the Reformation and post-Reformation, which affirmed the Scriptural truth that the Word of God is αυτοπιστος, self-authenticating. It is dishonest to say that the Reformed doctrine of preservation is “dangerous” without any evidence of this, especially considering the modern method is demonstrably harmful.  

The Septuagint and the Received Text

Introduction

Recently, I encountered the view that the Hebrew Masoretic text of the Old Testament was not inspired. Some say that it was a wicked, corrupted invention of Christ-hating Jews. Others simply deny the authenticity or preservation of the Hebrew text in favor of the Septuagint. This is not some niche corner of the internet either. This is a popular opinion, even among the Reformed. First, it must be stated that the argument needs clarification at its outset, as there is not one “Septuagint”; there are Septuagints. There is not one Greek Old Testament; there are many versions and editions. Further, the Dead Sea Scrolls do not contain an entire Old Testament, so it is not adequate to appeal to them as a complete authority.

While that may not cause those who adhere to this position to reconsider, it is an important observation nonetheless. In any case, it should be understood why the people of God should start with the Hebrew Old Testament texts over the Septuagint or any other version. It is important, then, to examine the foundation and logical end of these claims according to the standard of Scripture and to see the implications of such a belief. First, I will examine the Scriptural testimony to itself in regard to its sufficiency and purpose, source and method, and scope and promise. Second, I will present several affirmations for and against considering translations as immediately inspired. Third, I will comment on the nature of citations of external sources in the New Testament text.

Sufficiency and Purpose

“All scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness: That the man of God may be perfect, throughly furnished unto all good works.”

2 Timothy 3:16-17 KJV

The first standard set forth in the Holy Scriptures is that all Scripture is given by way of inspiration by God and is sufficient for all matters of faith and practice, “That the man of God may be perfect”. From this text, there are several important claims regarding Scripture:

1. That all Scripture is inspired, not just some 

2. That all Scripture is sufficient, not just the important parts 

3. That Scripture alone is the means that God has given to the people of God for all matters of faith and practice 

The exact method by which God inspired the text is debated, yet this much is clear:

1. In the Old Testament, God used means of prophets, dreams, visions, Christophanies and Theophanies, and angelic messengers to deliver His Word to His people

2. In the New Testament, God used means of apostolic writers to deliver His Word to His people

The method of inspiration of the Scriptures is often called “verbal plenary”, and it is typically nuanced in such a way that God used the unique authors and their vocabulary and experiences to inspire the words of the New Testament Scriptures. There are various ways of describing the nature of this inspiration, some much too liberal for conservative belief, but I will save that for another article. In the meantime, please refer to this article: https://purelypresbyterian.com/2016/10/13/the-apostles-and-prophets-secretaries-of-the-holy-ghost/

Source and Method 

“For the prophecy came not in old time by the will of man: but holy men of God spake as they were moved by the Holy Ghost.”

2 Peter 1:21 KJV

The second standard set forth in the Holy Scriptures is that Scripture was delivered through “holy men of God”. This was done specifically, as Hebrews 1:1 says, “by the prophets” in the Old Testament, and in these last days, “by his Son”. The language of the people of God in the Old Testament was Hebrew, and in certain places, Aramaic. These comprise the “Hebrew Scriptures”. The language that the New Testament was written in, as attested to by every generation of orthodox believers until the modern period, was Greek. Thus, it should be universally accepted that the documents that were immediately inspired were those written in these languages. This is affirmed by both the 17th century confessions as well as the Chicago Statement on Biblical Inerrancy. Most conservative Christians accept at least one of these as a valid creedal statement on Scripture.

Scope and Promise 

“For verily I say unto you, Till heaven and earth pass, one jot or one tittle shall in no wise pass from the law, till all be fulfilled.”

Matthew 5:18 KJV

The third standard set forth in the Holy Scriptures is that not “one jot or one tittle shall in no wise pass from the law, till all be fulfilled”. In this text, Jesus is declaring that “the truth of the law, and every part of it, is secure, and that nothing so durable is to be found in the whole frame of the world” (Calvin, Commentary on Matthew 5:18). This directly applies to the Old Testament as the covenantal document given to the people of God of old, and necessarily applies to the New Testament as the covenantal document given to the people of God in the last days. The Westminster Divines affirmed the usage of this passage as speaking authoritatively to the perfect preservation of God’s Word (WCF 1.8).

“The authority of Scripture has always been recognized in the Christian church. Jesus and the apostles believed in the OT as the Word of God and attributed divine authority to it. The Christian church was born and raised under [the influence of] the authority of the Scripture. What the apostles wrote must be accepted as though Christ himself had written it, said Augustine. And in Calvin’s commentary on 2 Timothy 3:16, he states that we owe Scripture the same reverence we owe to God. Up until the 18th century, that authority of Scripture was firmly established in all the churches and among all Christians.”

Herman Bavinck. Reformed Dogmatics, Vol. 1, 455.

The Nature of Translations

Are translations of the original languages authoritative insofar as they represent the immediately inspired text? We affirm. Are translations themselves immediately inspired? We affirm against. There is a severe error among the people of God today which says that not only can a translation be immediately inspired, but that certain translations are indeed immediately inspired – even when they disagree with the immediately inspired text.

Yet, the Scriptures are clear that “God spake” through the prophets and the apostolic witnesses, not scribes, translators, or text-critics. The argument that a translation is immediately inspired is in fact the argument that Ruckmanites employ to affirm the inerrancy of the Authorized Version. They claim that God supernaturally worked in the translators of the King James Version and inspired anew the text of Holy Scripture in an English translation. The main application of this heinous error is found equally among the Ruckmanites and those who affirm that the various Greek translations of the Old Testament, commonly called “the Septuagint” (LXX), are the immediately inspired text of the Old Testament.

First, let us examine the claim that the Septuagint is the immediately inspired Word of God in the Old Testament. The first premise that must be agreed upon is that the text of the Old Testament was originally written in Hebrew (and in certain places Aramaic). This must be affirmed due to the fact that at the time of the inspiration of the Old Testament, the Greek language either did not exist or, in later times, existed in a form entirely foreign to that of the Septuagint. Thus, by affirming the reality that the Old Testament could not have been originally penned in Greek, we affirm that the Greek text of the Old Testament cannot be the immediately inspired text. Additionally, the language of the people of God of old was not Greek, but Hebrew. So on both accounts, the immediately inspired text of the Old Testament was written in Hebrew (and in places Aramaic).

Second, let us examine the implications for the doctrine of inspiration, should the Septuagint be accepted as the immediately inspired Word of God in the Old Testament. The first assertion that I will examine is that a translation can be accepted as the immediately inspired Word of God. If this is the case, then one must deny the method of inspiration employed by God as attested to in the Scriptures (2 Peter 1:21; Hebrews 1:1). The authority of inspiration is then shifted to those who have translated the original text into the vulgar tongues of the nations. Granting this premise, there is no reason to affirm against any vulgar translation being accepted as the immediately inspired Word of God, and one has no grounds to affirm against the Ruckmanites, or the Papists for that matter.

Third, let us examine the implications for the shape of Scripture, should the Septuagint (or any other translation) be accepted as the immediately inspired Word of God in the Old Testament. If a translation can be accepted as immediately inspired, one must first attempt to find a Scriptural standard which informs the people of God which translation should be accepted. The common proof given for the Greek Old Testament is the various quotations of the Septuagint by the Apostolic authors. Should it be the case that any text cited by the Apostolic authors causes the source text to be accepted as Scripture, a serious error arises. By adopting this understanding, one must also accept the writings of the pagan authors Menander and Epimenides as quoted by the Apostle Paul in Acts 17:28, 1 Cor. 15:33, and Titus 1:12. Further, one must also accept the book of Enoch as Scripture (Jude 14). That is not to say that a translation of the original texts is equivalent to pagan authors or apocryphal texts, but that the form of the argument as it pertains to inspiration requires such an admission. If a text is qualified as inspired based on its quotation by the Apostolic authors and not by the source of the revelation, which is God, then all cited texts should be considered inspired. In inspiring the text of Holy Scripture, God does not inspire the source texts cited, only the text itself as it exists within the Holy Scriptures.

Do quotations by the Apostolic writers retroactively inspire a cited text? We affirm against this error. In order to suppose that any text quoted by the Apostles actually inspires the whole of the cited text, or even the portion of text cited, one must accept that the method of inspiration is interrupted. We affirm that the words delivered by the Apostles are inspired, but not the source cited. In this sense, the Septuagint quotations and quotations of other authors are equally uninspired as they exist outside of the New Testament text. That is not to say that the Septuagint is altogether without authority insofar as it represents the original Hebrew, just that its words themselves were not immediately inspired. In simple terms, a translation is only considered authentic insofar as it represents the inspired text it is translated from. Should it be the case that the standard for inspiration of a text is its use by the Apostolic writers in the New Testament, the canon should be edited to include the aforementioned cited works, as they would then be inspired. To this we affirm against.

Further, if the Septuagint is accepted as an inspired text apart from the original Hebrew, one would have to accept the various apocrypha contained within that text, including the multiple versions of “Bel and the Dragon”. To accept one book of the Septuagint and not another is to accept the form of the Hebrew Scriptures but not the content. If the argument is made that the Septuagint is only inspired as far as it is cited in the New Testament, then the whole corpus of the Septuagint is to be rejected where it is not cited by the New Testament authors, in which case the argument that the Septuagint is inspired is refuted. In the case that the Septuagint is affirmed as inspired but not immediately inspired, it would need to be demonstrated that the Septuagint is a faithful representation of the immediately inspired text, in which case appealing to the Septuagint is no longer necessary. We affirm that both the shape and content of the Hebrew Scriptures are immediately inspired, and not any part of the form or content of the Septuagint as it exists apart from the text it was translated from.

Understanding the Quotations of Non-Inspired Texts by the Apostolic Authors 

Now that the implications of holding to such a doctrine, namely that translations and other non-canonical texts can be inspired apart from their representation of the original, are abundantly clear, let us examine the proper understanding of quotations of non-inspired texts by the Apostolic authors. Though the Apostolic authors employ non-inspired texts, this does not mean that those quotations are uninspired as they exist within the Holy Scriptures. We affirm that the quotations used by the New Testament authors are inspired insofar as they exist within the New Testament. This is due to the New Testament being inspired by God. We affirm against the practice of using New Testament quotations to correct the immediately inspired text, specifically the Hebrew Scriptures.

We affirm against this for several reasons. The first is that the Greek Old Testament(s) is a translation, and not the immediately inspired text. The second is that the Greek Old Testament and the Hebrew Old Testament are not the same text. It may stand to reason that the New Testament quotations of the Septuagint may be used to correct other versions of the Greek Old Testament, but it does not follow to then say that the Hebrew text should be corrected by the Greek. This is the same argument employed by the Ruckmanites when they affirm that the Greek and Hebrew should be corrected by the English Bible. The use of the Septuagint by the Ancient Fathers does not authorize the use of the Septuagint in the correction of the text in the authentic copies, just as the use of the ESV in contemporary writings does not authorize the correction of the original text; the Ancient Fathers did not have Apostolic authority. The third is that though translations are necessary and are the common means by which the people of God access the Bible in their mother tongue, translations in themselves are subject to translational obscurities, and the equivalency of one word to another may be misunderstood due to the semantic evolution of a word or poor translation. This being the case, we affirm against using versional readings to correct any immediately inspired text of the Holy Scriptures. This is not to say that versional readings cannot be consulted to better understand the nature of the evolution of a variant, or to gain confidence in an original reading, but that versional readings should not be held over and above the authentic reading in the original tongues.

Conclusion

The doctrinal foundations for accepting a text as immediately inspired, as well as the doctrinal foundations for rejecting a text as inspired, should now be understood by all. It is abundantly clear that the only texts that should be considered immediately inspired are the authentic Hebrew Old Testament Scriptures and the authentic Greek New Testament Scriptures. The translations made from these texts are warranted and necessary, though they do not stand above the original texts as a judge or a corrector. To affirm that they do is to affirm against the Biblical doctrine of inspiration and to reject the authority of God’s revelation to His people. In affirming that versional readings are inspired or of higher authority than the immediately inspired text in the original languages, one must accept that the method of inspiration as detailed in Scripture has failed or is incorrect, and that the Word of God is not authoritative or preserved. To affirm this is to affirm the same doctrine of inspiration as the Ruckmanites, and to affirm against the orthodox doctrine of Scripture as it has been articulated from the beginning.

Text and Translation

Introduction

It is easy to read an article or Facebook thread on the issue of textual criticism or translation and have trouble understanding what is going on. The conversation is shrouded in specialized terminology and polemics. This is often due to people getting their information from their favorite podcast or YouTube program. Oftentimes, the conversation becomes muddled when it comes to differentiating between the underlying texts and the translations made from those texts. There are two important conversations that happen regarding the Bible – the conversation of which New and Old Testament texts should be used for translation, and the conversation of translation methodology and quality. Yet these two distinct topics are constantly conflated and mixed together.

The most common occurrence of this conflation happens when people use the term “KJV Onlyist” in discussions of the Greek and Hebrew. This line of argument was first popularized by internet podcast host James White and reiterated in Dr. Andrew Naselli’s critically acclaimed textbook How to Understand and Apply the New Testament. It is almost impossible to have a conversation about which underlying Greek and Hebrew texts should be used in translation now without being called a “KJV Onlyist” if you are brave enough to affirm against the modern text.

Yet there is an important difference between the text a Bible is translated from and the translation itself. This is easily demonstrated in the fact that people disagree on which modern translation is the best. Some people swear by the NASB because they believe it to be “the most literal translation available”, and others only read the ESV because it is the “most scholarly translation”. Most times, Christians select their Bible based on the translation methodology and the quality of the translation itself. The underlying Greek has nothing to do with it. So it is absolutely possible that somebody prefers the modern Greek texts, but does not prefer any of the modern Bible translations, and reads a traditional Bible based on their preference of translation alone. Yes, it is possible that somebody would prefer a KJV without knowing anything about the underlying textual discussion. 

The Textual Discussion

The conversation of “which Bible is the best” can be separated into two categories, text and translation. The first category has to do with the Biblical languages, which are the Hebrew Old Testament (which includes small portions of Aramaic, or Chaldean as the Puritans called it) and the Greek New Testament. Some people have taken the modern position that the Bible can also be translated from other translations, such as the Greek Old Testament or the Syriac Old Testament. The ESV, NIV, and NASB all do this. This would be akin to creating a fresh Bible out of the ESV. The confessional position states that translations should not be made from versional readings like the Greek Old Testament, but that falls into the category of translation methodology.

In terms of the first category, which is the text, the conversation has to do with answering the question, “which original text should be translated from?” There are a handful of positions when it comes to text. The first can be generically called the modern critical text position. Within this camp there is an array of different thoughts, so this brief description will obviously not cover every nuance of the conversation. The main thought is that the Greek New Testament is best represented in Codex Vaticanus and other texts similar to it. Codex Vaticanus is said to have originated in Alexandria around the fourth century and was published in the 19th century. Codex Vaticanus is stored at the Vatican Library, and the first time it was explicitly mentioned in text-critical discussion was in the Reformation period, due to Erasmus consulting some of its readings as a part of his work on his Latin and Greek New Testaments. Erasmus believed that the Vatican codex followed Latin versional readings, and rejected it based on his detestation for the contemporary iteration of the Vulgate he was seeking to correct in his Latin edition.

The most significant markers of these types of manuscripts are their short, abrupt readings and the absence of the three most discussed variants (John 7:53-8:11; 1 John 5:7; Mark 16:9-20). They also exclude many majority readings such as John 5:4 and Romans 16:24. If you look in a modern Bible, these verses are simply skipped over without renumbering the whole chapter. The Vatican Codex was employed heavily by Westcott and Hort in their Greek New Testament published in 1881, and all modern translations closely follow the readings of this manuscript and those like it. Out of the close to 6,000 manuscripts extant today, Vaticanus and the manuscripts like it represent anywhere from 17 to 30 of them. These manuscripts were formerly called the “Alexandrian Family”, but recent scholarship has moved away from that conclusion due to their lack of coherence with one another. It is more accurate to say that they are cousin manuscripts than a text family.

In any case, those that hold to the modern critical text position believe that the Bible is best preserved (read: partially or generically preserved) in the readings contained within these manuscripts, and make textual decisions based on prioritizing the Alexandrian texts over the majority of the manuscripts available today. There are many nuances within this camp, and some modern critical text advocates adopt some majority readings over Alexandrian readings (like the Tyndale House Greek New Testament at John 1:18). The Greek New Testament most employed by those in this camp is the Nestle-Aland Greek New Testament, which is now in its 28th edition. The Nestle-Aland text is the base text used for almost all of the modern Bible translations made today. The modern critical text position also tends to favor Old Testament versional readings like the Greek, Syriac, and the Latin Vulgate over the Masoretic Hebrew text (see the ESV 2016 prefatory material for more information).

The second position is called the majority text position, which also has an array of different thoughts within it. Some scholars, like Wilbur Pickering, take a theological approach within the majority text position, and others take more of an evidential approach. In both cases, the majority text advocates reject the theory that the Alexandrian manuscripts are “earliest and best” and instead start with the readings represented most abundantly in the manuscript tradition (or even pick one manuscript as the authentic representative). The basic premise of this position is that the readings that are most abundant are the readings that God preserved. Some within this camp do not dogmatically pick the majority reading every time, however. They still make decisions on each variant, as one might do within the modern critical text camp, based on the extant data available. There are Bible translations made from various collations of the majority text, like the Family 35 majority text. Oftentimes the majority text advocates do not read a Bible that represents their favorite text, though the NKJV and even the KJV are popular within this camp.

The third position is called the confessional text position (also called the Ecclesiastical text or canonical text position, or, less preferably, the TR position). This position favors the texts that were employed during the Reformation and the confessional period that followed, which are represented by the Masoretic Hebrew Old Testament and the Received Text of the New Testament. While this position typically favors the Authorized Version (KJV), many within this camp read the NKJV, MEV, or GNV, and are open to fresh translations of the texts of the Reformation. Most read the AV simply because they believe the translation methodology employed by the translators is more faithful than that of the other Bibles available. This position is not so much about translation, but rather about the underlying Biblical texts used for translation. Since the modern Bibles employ a different underlying text, this camp rejects those Bibles because they do not believe they represent the original.

The Greek text preferred by the confessional text position aligns most closely with the majority of manuscripts available today, though it does depart from the majority text in certain places, which makes it a distinct position. This is why this position is often conflated with the majority text position, though they are different from one another. This conflation is made in Dr. Andrew Naselli’s textbook mentioned above. The major difference between this and the modern critical text position is that this camp believes the work of collating manuscripts was accomplished during the Reformation period. During this time, the process of copying manuscripts evolved from hand copying to printing with the invention of the printing press, and thus the method of copying was formalized and a more concentrated effort of textual criticism was warranted. Since this text was to be massively distributed for the first time in church history, this effort represents a significant phase in the providential preservation of the Word of God.

A major point of confusion for those who do not adhere to this position is the fact that the confessional text camp is not trying to find the original Bible; they believe they have it. They are not primarily concerned with supporting every reading with extant manuscript evidence (though they can), because they do not believe this aligns with the Biblical doctrines of inspiration and preservation. The manuscripts do not offer definitive conclusions on the text 400 years removed from the time when they were still being copied in the Reformation period. Modern critical text advocates have trouble understanding the idea that the Bible was never in need of reconstruction; it was received in every generation and massively distributed for the first time in the 1600s. The effort of Reformation era textual criticism was not an effort of reconstruction, like today’s effort, but rather one of collation and editing. Simply put, the Reformation era text-critics (not just Erasmus) were collecting faithful copies of the New Testament and editing them into printed editions. Reformation era scholarship on inspiration and preservation demonstrates that this was the common thought of the day. They believed that the text of the New Testament was available, and that with editing into one edition, it could be found easily. Commentary by the Westminster Divines and other Puritan scholars affirms this overwhelmingly. Those in the confessional text camp affirm the determinations of these scholars and theologians, and believe that the text used for translation and theology for the next 300 years was the text that the people of God had used since the beginning (while acknowledging aberrant text streams and variants).

Notice that while translation is connected with the textual discussion, it is not the same discussion at all. Those that are nuanced in the conversation select their Bible translation based on their understanding of the underlying text, but the translations themselves are entirely distinct from the text they are translated from. That is why it is unhelpful and actually detrimental to reduce the conversation of text to a matter of translational preference, as many do today. In fact, labeling somebody a “KJV Onlyist” for preferring the Received Text or Majority text only demonstrates an extreme amount of ignorance on the topic. Conflating the Received Text with the Majority text is even more condemning. The conversation of text can take place without discussing translation at all, though translation often comes up. It has to do with the underlying Hebrew Old Testament and Greek New Testament. 

The Translation Discussion

A translation is simply the product of translating one language into another. In the context of Bible translation, the translations are typically made from the Hebrew Old Testament and the Greek New Testament (though many modern versions translate from other translations in places in the Old Testament). It is true that people often select the translation they read based on their view of the underlying text, though this is not always the case. That is because translation methodology is an entirely different discussion. The quality of a translation is a separate question from the quality of the text it is translated from. In fact, it is possible to make a horrible translation from a great underlying text, and an accurate translation from a horrible underlying text. A translation simply takes a text from one language into another.

The conversation of translation can be separated into two categories – translation methodology and the accuracy of the translation itself. Translation methodology is more closely related to the textual discussion due to methodology often being impacted by the translator’s view of the text. For example, the Reformation era translators did not translate from versional or translational readings. They translated from the Hebrew Old Testament and the Greek New Testament. Modern translation methodology does not strictly translate from the original languages, but often translates from other ancient translations. Somebody could agree that the modern critical text is better, but disagree with the modern translation methodology, and choose to read a traditional Bible because of it. 

In my experience, translation methodology is actually way more significant to people when choosing a Bible than the textual discussion. Most people choose the NASB because it is “the most literal” or the ESV because it is “the most scholarly” or the NIV because it “captures the original intention of the authors”. Most people reject the KJV because they “cannot understand it”. While the textual discussion is extremely important to some people, most Christians choose a Bible based on the translation itself. In fact, most people pick the ESV simply because they enjoy how it reads. 

Translation methodology is actually an extremely important topic that often goes neglected. It is important because most people only have access to a translation, so they have to trust that the translators have faithfully given them God’s Word in their mother tongue. Translation methodology has to do with which texts are being translated from, whether the translators are attempting to translate more formally (ESV, KJV, NASB) or dynamically (NIV, MEV), and even the complexity of the vocabulary. Bibles that are translated using formal equivalence (more literal) are often preferred over Bibles that are translated using dynamic equivalence (thought for thought). A simple internet search reveals that this is the primary motivating factor for most people when selecting a translation.

The second category of the translational discussion is the actual accuracy of the translation itself. This has to do with how accurately a word is carried over from one language into another. It is less common for people to choose a translation based on accuracy, but it is a factor that people take into account. People want to know that what they are reading represents the original language. This part of the discussion is another important component that is frequently neglected, but it is certainly becoming more central to the translation discussion as the NASB and ESV begin to do more interpretation instead of translation in each new edition. A great example is whether αδελφοι should be translated as “brothers” or “brothers and sisters”. A literal translation would simply translate the plural form of the word “brother” (αδελφος) into “brothers”, but modern translation methodology has evolved into doing more interpretation than translation. While the usage of the word can include both men and women depending on the context, it literally just means “brothers”. In the case that a translation team decides to translate the word as “brothers and sisters”, the translators are making a decision to include an interpretation of the word in the translation itself.

While this might seem like an unimportant nuance, translation accuracy is the reason many are decrying the next edition of the NASB. People are not comfortable with the translators interpreting a passage; they simply want the passage translated and the interpretation left to the person reading the text. Due to the trend of modern Bibles doing an increasing amount of interpretation in the translation itself, many people have decided not to purchase the newest editions of the ESV, NASB, and NIV. This is another great example of how translation methodology can determine which Bible somebody reads, regardless of their understanding of the textual discussion. When I was a modern critical text advocate, I had already considered abandoning modern translations based on the direction their translation methodologies were going. There are many people who read the KJV and NKJV simply because the modern translations take many liberties in translation. 

Conclusion

The conversation of text and translation is complicated and nuanced. There is a vast array of reasons one might decide to read a particular translation over another, and the underlying text is only one of them. In many cases, the underlying text is not even the main reason somebody picks one Bible over another. The important thing to recognize is that there are many important differences between text and translation, and some people care more about one than the other. In fact, most people are fine with the differences between the underlying texts used for translation because they believe they have “all the important stuff” no matter which Bible they read. The reality is that many Christians read the NKJV or KJV based on translation methodology, preference, or familiarity, over and above the textual discussion. That is because it does not matter how pure the underlying Greek and Hebrew is; if the translation is not faithful, then people want nothing to do with it. 

Simply calling somebody a “KJV Onlyist” reduces the conversation to polemics and is entirely unhelpful, even detrimental, to the discussion. There is a plethora of reasons to reject modern Bibles, and tradition is just one of them. It is time that Christians realize that being a “KJV Onlyist” is not the only reason to read a KJV, or the only reason people reject modern Bibles. The fact is that many Christians are becoming disenchanted with the increasing number of revisions to the underlying modern Greek text and the evolving translation methodologies of modern Bibles. People do not want a changing Bible. They want consistency and stability. The direction that modern translations have been heading for decades does not, and cannot, offer this. 

There is No Modern Doctrine of Preservation

Introduction

There is no modern doctrine of preservation, and I’m not sure people have realized it quite yet. What does preserved mean? It means that something has been kept safe from harm, uncorrupted, and in the same form it had when it was created. In this case, the New Testament corpus is the object that is said to be preserved. This means that in order for the New Testament to be preserved, it had to have remained the same from the time it was penned, carried forward in the collection of faithful copies and collated editions. That does not mean that every copy or collation is faithful to the text that God inspired and preserved, only that the original was transmitted faithfully throughout the ages and into the modern period. The words of the New Testament were not lost. The existence of different text forms and variants does not disqualify the Bible from being preserved. It simply indicates that certain lines of textual transmission were corrupted, and that even within faithful manuscripts variants were introduced into the text. There is no mistaking that the manuscript tradition tells a complex story full of scribal errors and corruptions. 

In order for a text to be preserved in light of textual variants introduced by scribal errors and corruptions, there is one process that could have resulted in the original text being transmitted faithfully into the modern period: correcting scribal errors and corruptions as the manuscripts were copied throughout the ages. This can be observed in surviving manuscripts in the corrections made by various scribes, as well as in the increased (though not perfect) uniformity of texts going into the middle period. In order to believe that the text of the New Testament has been preserved, one has to say that this effort of the scribes was successful in every generation of copying. If the text has been preserved, one would expect it to become increasingly uniform over time, as the number of copyists increased along with the number of Christians.

Due to the heavy persecution of Christians in the early church, alongside the fragility of writing materials, the early manuscript evidence of the New Testament is sparse. The extant early manuscripts generally represent a different text form than what survived later in the textual tradition, and are generally agreed to have originated in one locality. Based on empirical methods, there simply is not enough data to draw any definitive conclusions about the authenticity of surviving manuscripts from the third and fourth centuries. It would be more conclusive if the earliest manuscripts agreed in more places, but even the early surviving witnesses to the New Testament are massively divided among themselves. The only thing that the handful of texts surviving from that period can tell us is that there was a unique stream of manuscripts with many idiosyncrasies, generally existing in one locality, that seems to have died off. That means that, if the New Testament is actually preserved, the later manuscripts provide the best insight into what the original text looked like, because they are more abundant and more uniform.

While this seems straightforward, there are many who disagree with this assessment and believe that the text must be reconstructed. Scholars have doubled down on the theory that the smattering of early surviving manuscripts can be collated to find the original, even as secular scholarship has overwhelmingly admitted that the effort of finding the original was a farce. When this effort failed, the more faithful set out to find the hypothetical archetype from which the earliest surviving manuscripts were copied, by developing genealogies of each variant. While this is a clever idea, the result will only ever be a hypothetical possibility. Others have adopted a Byzantine priority or majority text position, which weighs the vast majority of manuscripts more heavily than the thinly distributed minority that seems to have existed in a bubble for a couple hundred years. In any case, these positions on the text should be viewed in light of a doctrinal position on preservation. This leads to the main focus of this article: that the modern period has no doctrine of preservation. 

Generic and Partial Preservation

Is it a fair assessment to say that there is “no modern view of preservation”? Not in a practical sense, because there are in fact many presentations of preservation offered by various people. But in the technical and formal sense, the statement holds true. While many say that the Bible has been preserved, their actual articulation of the nature of that preservation violates what it means for something to be preserved. Remember the basic definition of “preserved”. In its application to the text of the New Testament, it means that there is one stream of text that was preserved in faithful and authentic copies, and collations of copies, in every generation. This means that if the text of the New Testament is truly preserved, the authentic text would have been the text that continued to be copied, while copies were still being made, up into the 16th and 17th centuries. 

That means that during the first effort to distribute the Bible to the people on a massive scale in the 16th and 17th centuries, the authentic text of the New Testament was still being copied. If the early surviving manuscripts were authentic, why weren’t they also being copied? Why do the thousands of surviving manuscripts tell a different story than the early surviving ones? The reason the first effort of unifying the text did not use texts that looked like the earliest surviving manuscripts is that those manuscripts were not considered authentic by the people of God leading up to and during that period. This is further demonstrated by the fact that there are fewer than a handful of manuscripts copied in the middle period that represent the text form of the earliest surviving manuscripts. The manuscript tradition, along with the textual decisions made during the Reformation period, tells a tale of the people of God rejecting the texts that are considered “earliest and best” today. 

So in one sense, yes, people do offer various understandings of the word “preservation” and how it applies to the New Testament text. But in a much more real sense, those presentations do not adequately explain the existence of two text streams, or the ongoing effort of modern scholars to find the original text. Something that is preserved does not need to be reconstructed or found. The Bible is not a mosquito preserved in amber waiting to be dug up by an archeologist. It is not a 1,000 piece puzzle of which we only have 900 pieces, or a 10,000 piece puzzle of which we have 10,100 pieces. It is a 5,624 piece puzzle of which we have all 5,624 pieces. The method of preservation that God used was not encasing the Bible in a cave, or a bucket, or the sand. He used human copyists, whose work eventually gave way to the printing press, and later to digital storage. The Bible has always been available to the people of God, whether in manuscript form, printed edition, or even digital copy. 

The modern understanding of preservation is vague and indecisive. It doesn’t actually put forth a meaningful definition of preservation. In a very practical sense, it accepts that the general form of the New Testament has been preserved, with wiggle room for disagreement on certain texts that may or may not be original. The Bible has been preserved in its basic form, to the degree of “great accuracy”. The Bible is partially preserved, and that is the way God designed it to be. The effort of modern textual criticism is to increase the level of “great” in “great accuracy”. The efforts of the Reformation were good, but flawed. So to some degree or another, most people with a modern understanding of preservation accept the Reformation era text as “good enough”, just not the “best”. This reveals a greater issue, one that should be nagging at the back of your mind. 

The greater issue is that if the efforts of the Reformation era were flawed, then the idea of a preserved text, in the sense that I have defined it and the Reformation era theologians defined it, has never existed, nor can it ever be attained. The word “preserved” becomes a gooey, moldable, ever-shifting concept that never takes a solid form. One might say that the Bible was preserved until the fourth century, but we do not know exactly what it looked like, or that the Bible is preserved today, just not precisely. In either case, the word “preservation” requires a qualifier. The Bible is either generically preserved, or it is partially preserved. In either case, the word “preserved” is simply inappropriate for what is being described. Here is a quote from Thomas Watson, a Puritan divine, that adequately describes the historic definition of preservation:

“The Letter of Scripture hath been preserved without any Corruption in the Original Tongue, The Scriptures were not corrupted before Christ’s Time, for then Christ would never have sent the Jews to the Scriptures; but he sends them to the Scriptures, John 5.39. Search the Scriptures. Christ knew these Sacred Springs were not muddied with Human Fancies”

Thomas Watson, A Body of Practical Divinity (London: Thomas Parkhurst, 1692), 13.

Here is another description of preservation, offered by Westminster Divine Richard Capel:

“Well then, as God committed the Hebrew Text of the Old Testament to the Jewes, and did and doth move their hearts to keep it untainted to this day: So I dare lay it on the same God, that he in his providence is so with the Church of the Gentiles, that they have and do preserve the Greek Text uncorrupt, and clear: As for some scapes by Transcribers, that comes to no more, then to censure a book to be corrupt, because of some scapes in the printing, and ’tis certaine, that what mistake is in one print, is correct in another.”

Richard Capel, Capel’s Remains, 79-80.

The foundation of the doctrine of preservation during the Reformation and post-Reformation period is that God preserved the Greek Scriptures in the same way that He preserved the Hebrew Scriptures. And by preserved, they meant “every jot and tittle” (see WCF ch. 1). 

The ironic truth of the modern view of preservation is that it does not even allow for proper textual criticism. If God did not preserve every word, then what is the purpose of contemporary text-critical efforts? We have what we need, and that is all that matters. If the standard is “great accuracy”, then the work is done. There is no need to pursue greater accuracy because there is no standard for what “great accuracy” even means. There is no way to determine which words matter and which words do not. Is the text greatly accurate compared to other ancient texts? Is it greatly accurate based on the surviving manuscripts? Because the definition of “preserved” is so vague and arbitrary, there isn’t actually a meaningful standard to aim for. Text critics will never be able to determine when the work is done, because there is no definition of what it means to be done. Will the work be done when the true ending of Mark is found? Or when we discover a new cache of early manuscripts? The efforts of modern textual criticism are planted firmly three feet in mid air, because the modern method does not allow for a precise definition of preservation. The fact that the work is still ongoing reveals that scholars are operating from a place of either generic preservation or partial preservation. In both cases, the Bible has not been preserved in any meaningful way. 

Conclusion

There is not a modern doctrine of preservation in a very real sense. When the word is used, it means either generic preservation or partial preservation. If by “preservation” generic preservation is actually meant, then the work of textual criticism is done, because we have the Bible generically. At that point it is a matter of preference whether or not the woman caught in adultery is Scripture, because the Bible contains all the correct doctrines either way. If by “preservation” partial preservation is actually meant, then the work of textual criticism does not matter, because the preserved Word will never be found. It is a matter of preference whether or not one accepts the ending of Mark as original, because we will never know with 100% certainty. The former espouses the position that God did not intend to preserve every word, so that is not the goal. The latter says that God did not preserve His Word at all, so the goal is simply to get as close to the original as possible. Both positions betray the word preservation. 

When the word “preservation” is taken at face value, it simply means that the thing being preserved has not been corrupted, harmed, or destroyed in any way. It does not mean that every single manuscript, or even one manuscript, has been kept without error. It means that in every generation, the original text has survived in the approved manuscripts that the people of God have relied on for all matters of faith and practice. It means that scribal errors were corrected and that manuscripts of poor quality were retired or destroyed. This process was done by hand up to the 16th century, when the printing press revolutionized how copying was done. That is why Reformation era textual criticism is unique and set apart from modern textual criticism. It occurred during a time when copying was still being done, and a technological innovation was introduced into that process. The manuscripts that were being used by the people of God were still in circulation, and those manuscripts looked nothing like the modern text. 

A proper definition of preservation stands at odds with the opinion that the Bible is generically preserved, or partially preserved. If this seems like an impossibly strict standard, then it is best to say that you don’t believe that the Bible has been preserved. And if you do believe that the Bible has been preserved, the task is now to determine which text tells the story of a preserved Bible. The duty of the Christian is then to receive that preserved text as God has delivered it.