The Theology of the Text: What is Scripture?

This article is the first in the series called “The Theology of the Text,” designed to cover the topic of the text in short, accessible articles. 

Christian theology is built from the ground up on Scripture. Without Scripture, there is no stability to the Christian religion. If the Scriptures are rejected as the ultimate foundation for the Christian religion, subjectivism and human experience become the god of the church. What we believe about Scripture is therefore of utmost importance, as it is the foundation for all Christian faith and practice. The reason the Scriptures are the foundation for faith and practice is that God declares them to be such, and gives them to His people for that purpose. 

“All Scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness: that the man of God may be perfect, throughly furnished unto all good works.”

2 Timothy 3:16-17

There are two points from this passage that are important to know: 

  1. All Scripture is given by inspiration of God
  2. All Scripture is given as a sufficient rule for faith, practice, and interpretation

The first point means that every line of Scripture is delivered from God to man by way of His inspiration. That is to say, God is the author, and the men who physically wrote the Bible were His instruments. Two things should be observed here: 1) The Scriptures are God’s words written down, and therefore pure and perfect. 2) The use of human means to write the Scriptures does not suppose human error, because God inspired those human authors. The process of giving the Scriptures to the church was not an accident that God simply used in a reactionary way; Scripture is something He Himself caused and delivered. The nature of inspiration is such that the very words are inspired, and also that the vocabulary and experiences of the writers were employed by God. This has been called “organic” or “verbal plenary” inspiration by some, but it is important to remember that inspiration was not so organic that the words of Scripture are simply human words and ideas that God used. All Scripture is given by God, and therefore all Scripture is God’s words, regardless of the means He used to accomplish that inspiration. 

The second point means that the Scriptures were given to the people of God as a sufficient rule for all matters of faith and practice, “instruction in righteousness.” This serves both as a rule for what the Scriptures are to be used for, and as the necessary “self-interpreting” hermeneutical principle. First, the Scriptures are given “for instruction in righteousness.” That means there are other ways to learn things outside of the Scriptures. There is benefit in reading history books, math textbooks, theological commentaries and works, and other literature covering disciplines not pertaining to faith and practice. That does not mean the Bible says nothing about math, or science, or history, only that the Bible itself does not claim to be the only means of gaining knowledge in all things. The Scriptures provide the foundation for how a Christian approaches all other disciplines, but they do not contain exhaustive knowledge of those disciplines in themselves. They are sufficient as they pertain to Christian faith and practice, and also sufficient in their declarations about how the Christian should approach other disciplines. Second, since the Bible is sufficient for all matters of faith and practice, the hermeneutical principle of “let Scripture interpret Scripture” is warranted from this text. There is no need to interpret the Scriptures through biological science, ecclesiastical tradition, critical approaches, or various numerological systems. Further, this also means that no further revelation is necessary “for doctrine, for reproof, for correction, for instruction in righteousness.” The Scriptures are fully sufficient for understanding all of Scripture. Historical studies may help one understand context better, but Scripture should always be read with the self-interpreting principle. 

Conclusion

The Scriptures set forth, in 2 Timothy 3:16 and many other passages, that the Word of God is pure (Ps. 12:6), perfect (Ps. 19:7), and sufficient for use in all matters of faith and practice. A multitude of modern errors stem from rejecting this doctrine. When a Christian reads Scripture, or hears Scripture correctly and faithfully preached, he is hearing God’s words (John 10:27). The Scriptures lack nothing, both in their words and in what they teach. There is no prophetic word, vision, or dream which is necessary, because the Scriptures are sufficient. God gave the Scriptures to His people so that He could speak surely in every age until the last day (Mat. 5:18). In an age where the Bible is viewed as a corrupt, man-made document, this doctrine is essential to affirm for the sake of assurance of faith and unity among the people of God. 

If the Text-Critics Went to Lunch and Didn’t Come Back

Introduction

An important practice in the business world is determining the viability and impact of a project before investing resources into that project. It seems this is a wise analysis to apply to evangelical text-criticism.

 For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him, Saying, This man began to build, and was not able to finish.

The Holy Bible: King James Version, Electronic Edition of the 1900 Authorized Version. (Bellingham, WA: Logos Research Systems, Inc., 2009), Lk 14:28–30.

Christians should now act like wise investors. The church has been patient, but it is time to analyze the project afresh. The evangelical text-critics have determined that while we will never have the original text God inspired, what we have is close enough. A valuable analytical exercise is to determine the impact of ending an ongoing project. According to the careful analysis and hard work of the evangelical textual scholars, the church has all it needs from the manuscripts to get by. No doctrine has been affected in nearly 200 years of textual criticism; the church has what it needs. So what is the impact on the church if all of the text-critics went out to lunch and never came back to work?

Seven Benefits to Ending the Effort of Modern Evangelical Text-Criticism

First, Greek Bibles would stop changing. No new additions or subtractions would be made to God’s Word. The only changes to God’s Word would have to be made by translation committees. 

Second, the text of the modern Bibles would be stable. Christians could buy a translation and keep it their whole lives without it expiring. 

Third, the work of men like Bart Ehrman would be irrelevant to the church, because Evangelical scholars wouldn’t be working with him and for him and under him any longer.  

Fourth, Christian textual scholars could spend more time doing exegesis for the church and pastoring, rather than scraping through manuscripts and counting words. Many of these men hold a Master of Divinity from well-reputed seminaries; they could apply their education to shepherding the flock. 

Fifth, seminaries could remove Bruce Metzger and Bart Ehrman’s textbook from the standard curriculum. We have what we need in our Greek texts; there is no need to continue giving Ehrman a platform. 

Sixth, the heroic apologists of the Christian faith could spend more time defending the teachings of the Word of God, rather than trying to discover what it says. 

Seventh, resources spent on text-criticism could go to planting churches, supporting struggling churches, and training pastors. 

Conclusion

If the best and the brightest text critics say that they haven’t found the original text at a time when we have “the best data,” and have determined that “we have what we need,” there is no point in carrying on. “No doctrine has been affected,” so it seems the church is equipped to press on. The church does not need to support a project that has already reached its necessary conclusions. Instead, it should support those evangelical textual scholars in putting their MDivs to use pastoring churches and feeding God’s people. Let the secular academy continue its quest for the “historical Jesus” and free up the men of God to do work for the Kingdom! 

A good question to answer to determine the impact of ending such a project is, “What would happen if evangelicals stopped making Greek New Testaments?” The answer is nothing. Nothing would happen. The church would carry on without a hiccup. Pastors would preach, seminaries would train, and the Gospel would still go forth to all the nations. The average Christian would be none the wiser. The Bible has been preserved, after all; there is no need to keep working on a finished product. 

Why the Doctrine of Inerrancy Demands the Defense of the Received Text

Introduction

On this blog, I have highlighted many of the doctrinal errors underpinning the modern critical text, as well as set forth positively the historical orthodox position on the Holy Scriptures. I have been critical of the doctrine of inerrancy as articulated by modern scholars and have compared it to the historical doctrine of providential preservation, demonstrating how they differ. That is not to say that the doctrine of inerrancy is completely bad, though it has a critical flaw, which I highlight in the linked article above. For those who do not have the time to read that article, the essential flaw is this: it founds the “great accuracy” of the text of Holy Scripture on modern text-critical methods, and thus allows for a changing text. In this article, I will demonstrate why the current articulation of inerrancy undercuts any meaningful argument against the Received Text.

Inerrancy vs. Providential Preservation

If a proponent of the modern critical text adheres to the doctrine of inerrancy, as opposed to the historical definition of providential preservation as stated in WCF 1.8, they have no grounds for attacking the Received Text. I am defining inerrancy as the doctrine which teaches that the original manuscripts of the New Testament were without error, and that those originals have been preserved, in all that they teach, in the extant copies. This is in opposition to providential preservation, which teaches that in every age the Holy Scriptures have been kept essentially pure in what they teach and also preserved in the words from which those teachings are derived. If one limits the doctrine of inerrancy to the autographs alone, then the defense of the Scriptures is pointless, because we do not have the originals. So, if it is the case, as the doctrine of inerrancy teaches, that the Scriptures are without error in all that they teach while the words of the material text are changing, then it must also be said that the material text of the Scriptures can change and remain inerrant, so long as it can be said to teach the same doctrines. If no doctrine is affected between the Reformation era printed Greek texts and the modern critical printed Greek texts, then the necessary conclusion is that both are inerrant. That, or neither is inerrant. 

Since, according to the modern critical perspective, the Reformation era text teaches the same doctrines as the Critical Text, then according to the modern doctrinal formulation of inerrancy, the Reformation era text must be inerrant too.

If, then, the Reformation era text teaches the same doctrines and is therefore inerrant, advocates of the modern critical text have no argument against it from a theological perspective. This is the logical end of the claim that “no doctrine is affected.” This is an important observation, because it means that opponents of the Received Text have no theological warrant to attack the text of the Reformation, seeing as it is an inerrant text. Until they say, “There is a final text, this is it, and it teaches different doctrine,” it is not only inconsistent to attack the Received Text, it is hostile to the text of Holy Scripture, by their own doctrinal standard. It defies reason that a modern critical text proponent would attack a text which is, by their own admission, inerrant. 

 In order to responsibly attack the Received Text from a modern critical vantage point, one must admit and adopt several things:

  1. They must admit that doctrine is affected between texts.
  2. They must adopt a final text to have a stable point of comparison between texts. 
  3. They must assert that the Received Text is not inerrant, and thus not Scripture.

This, of course, is impossible for a modern critical text advocate, since the modern critical text is changing, and will continue to change. Since, according to the modern doctrinal standard of inerrancy, the Bible is without error in all that it teaches, any Bible that is without error in all that it teaches should be considered inerrant and actually defended as such. If, at the same time, a proponent of the modern critical text and of inerrancy wishes to add a component of providence to the equation, then they necessarily have to defend the Received Text. If providence is considered, there is no change to Holy Scripture, based on text-critical principles, that can affect the teaching of the Scriptures. Consequently, if one were to argue that changes to the printed texts of Holy Scripture can affect doctrine, preaching, and theology, then the doctrine of inerrancy must be rejected outright, as previous iterations of that text would have contained doctrines that were later improved upon, and thus erred prior to those changes. If a change introduced by text-critical methods changes doctrine, then the Critical Text cannot be inerrant. This presents a theological challenge to those who continue to advocate against the Received Text and also wish to uphold the inerrancy of a changing modern critical text. There are two necessary conclusions that must be drawn from this reality:

  1. Either the Scriptures are inerrant, and text-critical changes cannot affect doctrine, and thus the Received Text is inerrant along with the modern critical text,
  2. Or the Scriptures are not inerrant, as the changes introduced by new modern text critical methods will change doctrine. 

The necessary conclusion of maintaining that the words of Scriptures have changed and will change and that they are also inerrant is that those material changes must not affect doctrine. If it is the case that these changes will affect doctrine, then the Bible is necessarily not inerrant and the conversation is now far outside the realm of even modern orthodoxy. 
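
To make the shape of this argument explicit, here is a minimal propositional sketch of the reasoning above; the symbols are my own shorthand for illustration, not language drawn from any doctrinal statement:

```latex
% Symbols (illustrative only):
%   I(T)        -- text T is inerrant (without error in all it teaches)
%   S(T_1,T_2)  -- T_1 and T_2 teach the same doctrines ("no doctrine is affected")
\begin{align*}
&\text{(1) Modern definition of inerrancy:} & S(T_1,T_2) \land I(T_1) &\implies I(T_2)\\
&\text{(2) Granted by the modern position:} & &I(\mathrm{CT})\\
&\text{(3) Granted (``no doctrine is affected''):} & &S(\mathrm{RT},\mathrm{CT})\\
&\text{(4) Therefore, from (1)--(3):} & &I(\mathrm{RT})
\end{align*}
```

Conversely, if one denies (3) and holds that changes between editions do affect doctrine, then at least one edition has taught error, and the claim of inerrancy for a changing text collapses, which is the second conclusion above.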

Conclusion

The question we should all be asking is this: If no doctrine is affected between the Received Text and the modern critical text, and the Bible is inerrant, why do modern critical text advocates attack an inerrant Bible? Is it consistent to affirm the modern doctrine of inerrancy and also attack the historical Protestant Scriptures? It seems that the answer is no, it is not consistent. One might argue that the modern critical text is “better,” but better in what way? If no doctrine is affected, how is it better? In order to make the argument for a “better” text, one has to first argue that doctrine is indeed changed in the new critical Bibles, and thus admit that the Scriptures are not inerrant. And even if one were to admit that the modern critical text is better, and admit that the Bible is not inerrant, they would need to produce a standard, stable text to defend that claim. So, until the advocates of the modern critical text are willing to admit that doctrine is changed, and thus that the Scriptures are not inerrant, they are simply attacking the Received Text, which, by their own doctrinal standard, is inerrant. 

This article should demonstrate one of the chief inconsistencies of those who uphold the inerrancy of Scripture and also attack the Received Text of the Reformation. It seems, based on the axiom that “no doctrine is affected,” there actually is no warrant to attack a version of the Scriptures that is inerrant. In order to do so, one would have to adopt the view that the Scriptures have been kept pure in both what they teach and the words that teach those doctrines, and then defend a finished text. And if it is the case that the Bible has been kept pure in all ages, and is providentially preserved, then it follows that adopting a critical text which differs from the text of the previous era of the church is not justified in the first place, and is incompatible with the argument.

I’m looking forward to seeing all of the modern critical text advocates joining the fight to defend the inerrant Received Text!

The Consequences of Rejecting Material Preservation

Introduction

Since the late 20th century, the doctrine of Scripture has been reformulated, most explicitly in the Chicago Statement on Biblical Inerrancy. The Chicago Statement articulates several things about the doctrine and nature of Scripture: 

  1. The original manuscripts (autographs) of the New Testament were without error
  2. The Scriptures as we have them now can be discerned to be original with great accuracy, but not without error
  3. The Scriptures as we have them now are without error in what they teach (sense), but not without error in the words (matter)

Within this modern formulation, there are also rules which anticipate certain critiques: 

  1. The Bible is not a witness to divine revelation
  2. The Bible does not derive its authority from church, tradition, or human source
  3. Inerrancy is not affected by lack of autographic texts

While this statement affirms many important things, it has a major flaw, which has resulted in many modern errors today. This is due to the fact that the Chicago Statement denies that the material of the New Testament has been preserved in the copies (apographs). This is a new development from the historical Protestant doctrine of Scripture, which affirms that God has providentially kept the material “pure in all ages.” The apographs in the possession of our great fathers in the faith were regarded as the authentic original texts.

“By the original texts, we do not mean the autographs written by the hand of Moses, of the prophets and of the apostles, which certainly do not now exist. We mean their apographs which are so called because they set forth to us the word of God in the very words of those who wrote under the immediate inspiration of the Holy Spirit”

(Francis Turretin, Institutes of Elenctic Theology, Vol. I, 106). 

This modern update is an attempt to resolve the issues of higher criticism and neo-orthodoxy which were introduced after the time of the Reformers and the High Orthodox. While the statement itself guards against these errors, it does not explain how a text can retain its infallibility and inerrancy while the material has not been preserved. Perhaps at the time, the assumption that the material was preserved to the degree of “great accuracy” was enough to give the people of God great confidence in the Word of God. The error of this formulation is that the mechanism which determines such accuracy becomes completely authoritative in determining which of the extant material is “accurate” to the original. This seemingly contradicts the doctrinal formulation of the Chicago Statement itself, though I imagine that the reach of textual scholars into the church was not then what it is now.

Infallibility, Inerrancy, and Greatly Accurate Texts

The contradiction of the Chicago Statement is that it affirms against human mechanisms bestowing authority on Scripture, while itself being founded entirely upon such human mechanisms. The modern formulation of the doctrine of Scripture assumes that the extant material is “greatly accurate” as it relates to the lost original. This level of accuracy, according to the formulation, is enough to know that the material is without error in what it teaches. The problem lies in how we determine that level of accuracy. Since “great accuracy” is a vague metric, it allows the amount of material considered “accurate” to fluctuate with the methods that determine that level of accuracy. It assumes that any change made by this mechanism cannot possibly change what the material teaches. That means that no matter the material shape of the text, it should be considered inerrant, because changes to the material shape “do not affect doctrine.”

Yet the very mechanism entrusted with this task has no ability to determine that the material it has produced represents what the original said, so the evaluation of “great accuracy” is not only vague, it is arbitrary. There is no meaningful standard to measure the material against to determine how accurate it even is, so any description of that level of accuracy is based purely on the assumption that the texts produced by the chosen mechanism are as accurate as advertised. That is to say, the mechanism itself has the sole power of determining accuracy and, yes, the authority of such a text. When this mechanism deems a passage or verse inaccurate to the original, pastors simply stop preaching that text, and laypeople no longer read it. There are countless examples of the people of God appealing to this mechanism as the one which gives the Scriptures authority. 

This is evident whenever a textual dispute arises in the text. All it takes is one manuscript to introduce such a dispute. What happens when such a dispute occurs? Christians appeal to the authority of the mechanism. In its very axioms, the Chicago Statement forces the people of God to submit to an external authority to validate the canonicity of a passage. Since it rejects magisterial, historical-critical, and neo-orthodox models (rightly so), the only model acceptable for “authorizing” a text under the modern doctrine of Scripture is lower criticism (not rightly so). Now, if lower criticism is defined simply as a process of comparing manuscripts to determine the original, this is not necessarily a problem. Many manuscripts were created by comparing multiple sources. So in that sense, lower criticism has been practiced by the Christian church since the first time a copy of the Scriptures was made using multiple exemplar manuscripts. 
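
In that simple, comparative sense, the task is almost mechanical. Here is a toy sketch of what comparing witnesses at a single point of variation looks like; the manuscript names and readings are invented for illustration and do not represent any actual critical apparatus or method:

```python
from collections import Counter

# Toy illustration: readings of one variant unit across hypothetical
# witnesses. The sigla and readings are invented for the example.
witnesses = {
    "MS-A": "θεου",
    "MS-B": "θεου",
    "MS-C": "κυριου",
    "MS-D": "θεου",
    "MS-E": "κυριου",
}

# Lower criticism in the simple, comparative sense: tally what each
# witness reads at this point and report the attestation.
tally = Counter(witnesses.values())
for reading, count in tally.most_common():
    print(f"{reading}: attested by {count} of {len(witnesses)} witnesses")
```

Nothing in a comparison like this reconstructs a lost text; it only reports what the surviving copies actually read, which is the limited sense of lower criticism described above.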

The problem occurs when that lower-critical function extends beyond this simple definition. The lower-critical mechanism elected by the modern doctrine of Scripture has reached far beyond it. Rather than being a function which receives the original by comparison, it is a function which assumes responsibility for reconstructing a lost text. Further, that same mechanism assumes responsibility not only for reconstructing the text, but also for determining how the material was corrupted, by reconstructing the history of the text. In other words, it asserts its authority over the transmission of the text itself.

According to this mechanism, the Scripture did not come down to us safely; it actually developed away from its original form. The narrative of preservation from the Reformed and High Orthodox needed to be deconstructed so that another narrative could be developed. The sources of this material needed to be re-examined, because there is no way that Mark, or John, or Paul wrote what the church thought they did. The text did not come down pure; it came down both textually and through tradition, and some of those pericopes made it into the Biblical text. Textual variants most commonly arose from scribal errors, but sometimes they speak to the story of the religious communities who were trying to defend the orthodox structure of the Christian faith as it had developed in the traditions of the church. All of these are functions of the text-critical system that determines the “great accuracy” set forth by modern doctrinal statements. It is hard to responsibly call this a “lower” critical function. 

Practical Implications for Doctrine

The obvious issue here is that the foundational mechanism of the modern doctrinal statements is not restrained by the doctrinal statements themselves. The clearest example of this is that the methods used to determine the “great accuracy” of the extant material as it relates to the original do not even need to assume that an original ever existed in any meaningful way. This is plainly evidenced by the textual scholar DC Parker, who is the team lead for the Gospel of John in the ECM.

“The New Testament is a collection of books which has come into being as a result of technological developments and the new ideas which both prompted and were inspired by them”

(Parker, Textual Scholarship and the Making of the New Testament, 3) 

 “We can all applaud when Bowers says that ‘we have no means of knowing what ideal form of a play took in Shakespeare’s mind before he wrote it down’, simply substituting gospel or epistle for play and St John or St Paul for Shakespeare”

(Ibid. 8)

 “The New Testament is – and always has been – the result of a fusion of technology of whatever kind is in vogue and its accompanying theory. The theological concept of a canon of authoritative texts comes after”

(Ibid. 12)

Even if evangelical scholars, pastors, and professors do not agree with DC Parker’s words here, they submit to his theology in practice. The texts on which modern Bibles are built are created according to various critical principles, and then the church theologizes them and calls them authoritative after the fact. Christians work with what they have, and what they have is susceptible to change based on models that do not recognize inspiration, preservation, or the Holy Spirit. Many scholars, pastors, professors, apologists, and even lay people then take that product and make individual determinations about its accuracy to the original. That means that the text deemed “greatly accurate” has gone through a three-fold authentication before even being read. First, it is authenticated by the textual scholars and printed using their preferred methods. Then it is authenticated by a translation committee, who make determinations upon those determinations based on their own preferred methods, which may differ from those of the scholars. Then it is authenticated by the user of that text, who makes determinations based on his own preferred methods, which may differ from both of the previous two! 

This, of course, is necessary in a model which rejects, in its axioms, material preservation and the exposure of that material. Some other mechanism must be introduced to give the text authority. This being the case, it is rather interesting that the modern articulation of the doctrine of Scripture rejects the other mechanisms that bestow authority on the text. What is wrong with a magisterium – that it is a function of the church? What is wrong with neo-orthodoxy? A similar process is taking place in the “lower criticism” of the textual scholars; the simple difference is that it is approved by the people of God who use the product of that mechanism!  

Conclusion

The necessary practical conclusion of the modern articulation of the doctrine of Scripture is that Christians must place their trust in some other mechanism to give the Scriptures authority. These doctrinal statements rely upon the “great accuracy” of the text, so they necessarily rely upon the mechanisms that deem various texts “greatly accurate.” Since this modern doctrine says that God has not materially preserved His Word, a void is created that needs to be filled. There needs to be some mechanism that can determine the level of accuracy of the text that we do have left. The modern church has largely chosen “lower-criticism” as it is employed by those that create Greek texts and translations. Some have chosen neo-orthodoxy. Others have flocked to the Roman or Eastern magisterium.

The fruit of this doctrinal articulation is evident. Verses that were once considered “greatly accurate” to the original are now called into question daily by Christians everywhere. Passages that have always enjoyed a comfy place in the English canon are ejected by whatever textual theory is in vogue. What is considered “greatly accurate” today may just as easily be considered a scribal interpolation tomorrow. Passages in John that have never been called into question may tomorrow be found to contain “non-Johannine” vocabulary, based on the opinion of an up-and-coming scholar. A manuscript may be discovered that alters the form of the text in a number of places. All it takes is one early manuscript to unsettle the whole of Scripture, as we have seen with Codex B. 

Think of it this way. If you read a passage as authoritative five years ago, and no longer consider that passage as “greatly accurate” to the original, what changed? Can you point to a newly discovered manuscript that changed your mind? Was it your doctrine? Was it the opinion of a scholar or pastor or apologist you listen to? These are important questions to answer. When I went through my own textual crisis, I realized that I was the final judge over the text of Scripture. If an early manuscript emerged without John 3:16 in it, I would have thrown it out, especially if that was the opinion of my favorite scholar. I was pricked in my conscience that I had adopted such a frivolous approach to the Holy Scriptures, and it did not take long for me to seek out other doctrinal positions on the text. 

The mechanism that is most consistent, and approved by the Scriptures themselves, is God Himself. I asked myself, “What did God actually do in time to preserve His Word?” If the text did not fall away, certainly I could look around and see that to be the case. I found that there was indeed a textual position which affirmed this reality, and a text that had been vindicated in time. A text that the people of God used during the greatest Christian revival in history. The same text that was used to develop all of the doctrines I hold to. The same text which survived the Papist arguments that are not much different from the ones used today. So why would I abandon that text, and the theology of the men who used it? Adopting the critical text is not a matter of adherence to a “greatly accurate” text; it is a matter of departure from the historical text. The question of “Which text should I use?” is quite simple, actually. The answer is: the text that God used in history and vindicated by His providence in time. The text that united the church, not divided it. The text that the majority of Bible readers still use today. I praise God for the Received Text, and for all of the faithful translations made from it. 

Before you ask, “What makes those readings vindicated?”, think about which methods you are going to use to evaluate those readings. Do they involve deconstructing the narrative that God kept His Word pure in all ages? Do they include the belief that faith communities corrupted the text over time and introduced beloved pericopes from tradition? Do they rest upon the theory that Codex B represents the “earliest and best” text? If so, I would appeal to the Chicago Statement, which says, “We deny the legitimacy of any treatment of the text or quest for sources lying behind it that leads to relativizing, dehistoricizing, or discounting its teaching, or rejecting its claims to authorship.”

Substantial Preservation and the Sin of Certainty

Introduction

In today’s world of Biblical scholarship, a common idea is that Christians should have a good amount of confidence that the sum total of the Bible has been preserved. This means that while Christians should not hold any dogmatic ideas of the perfect preservation of words, they should have confidence that God has given His people enough. That is to say, despite the fact that there are challenging passages in the Scriptures, none of these challenges are so great that Christians should lose confidence in their Bible as a whole. This concept of general reliability is agreeable even to some unbelieving textual scholars, which is possibly why it has become a sort of default position within Evangelical textual scholarship. The idea of absolute certainty in every word of the Bible is not considered a viable theological position, due to the perspectives of modern textual scholars. According to this view, there is simply no justifiable reason to believe that every word has been kept pure, and to hold such a view is unnecessarily dogmatic. This article is not meant to challenge the integrity of the scholars, some of whom are genuine brothers in Christ, but rather to put forth a serious problem with the general reliability theory of the New Testament. While I understand the sentiment behind this mediating position between radical skepticism and absolute certainty in the text of the New Testament, I believe that this perspective, which may be called Substantial Preservation, is not defensible, practical, or Scriptural. 

Substantial Preservation is Not Demonstrable by Evidence

John Brown of Haddington wrote on this very topic in his 18th-century systematic theology, defending the Scriptures against the view that certain truths had fallen away. He argued that all Scriptures, while some are less essential than others, are “essentially necessary in their own place.” That is to say, while many passages discuss matters not pertaining directly to salvation, that does not make those passages any less important as it pertains to the whole of what God is saying to His people. The Scriptures affirm as much in 2 Timothy 3:15-17. Despite some passages being considered more or less important by some, the Bible is clear: “all Scripture” is profitable and is to be used for every matter of faith and practice. Brown then comments on the fundamental problem of dividing the Scriptures into essential and non-essential.

“All attempts to determine which are fundamental, and which not, are calculated to render us deficient and slothful in the study of religious knowledge; – To fix precisely what truths are fundamental and what not, is neither necessary, nor profitable, nor safe, nor possible” (Brown, Systematic Theology, 97). 

When the theological position is taken that the “sum total” or the “necessary” or the “important” parts of Scripture have been reliably transmitted, this is exactly what is taking place. An attempt is being made to say that while some or many words have fallen away from the Scriptures, the whole sense of the thing is not lost by those words falling away or being indeterminate. Brown makes an extremely pointed observation here – how would one even come to such a determination? There is absolutely no way to know if a doctrine is lost, unless of course one is omniscient, or all-knowing, and can determine that those words were not meant to be preserved by God. The weight of the substantial preservation argument rests on a faith claim that God never desired to preserve every word, and that Christians should have a reasonable amount of certainty that the words we can have confidence in are the ones God intended to preserve. 

Since the starting point of this claim is evidential, the conclusions and further claims made from it must be evidential as well. That means that if one is to claim that the evidence demonstrates a substantially preserved Bible, one has to demonstrate that the words we do have represent substantially what was originally written. This, of course, is impossible to demonstrate from evidence. Dan Wallace admits as much: “We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it” (Gurry & Hixson, Myths and Mistakes in New Testament Textual Criticism, xii). So in this view, we don’t know exactly what the “authors” of the New Testament wrote, and we have no way of demonstrating what they wrote, even if we did have it. This being the case, there does not seem to be any responsible way to make such a determination regarding the general reliability of the New Testament. The claim that the New Testament is generally reliable is not proven by lower criticism; it is simply believed despite the conclusions of textual scholars. Since our earliest extant New Testament manuscripts are dated well after the authorial event, there is no way of determining, evidentially, how different those manuscripts are from the authorial text. This perspective might have been rejected by Evangelicals even fifty years ago, but according to Wallace, believing textual scholars are “far more comfortable with ambiguity and uncertainty than the previous generations” (Ibid., xii). While many scholars may be comfortable with this view, the millions of Christians around the world who believe in verbal plenary inspiration may not be. 

Substantial Preservation is Not Practical 

If the Bible is preserved in the “sum total” of its material, then Christians must add an additional layer to their Bible reading. Rather than simply reading the words on the page, Christians must first establish that those words are reliable. Since some words cannot be trusted outright, there is no reason to believe that all of the words can be trusted outright. That is due to the fact that the methodology which deems some verses reliable and others not is completely and utterly subjective. In some cases the majority reading is the deciding principle; in other places, the least harmonious reading; in still others, what is considered the earliest reading. In order to validate these deciding principles, Christians must become text-critics themselves. They must examine the evidence for each verse in the Bible and determine whether it meets some threshold of certainty based on the current canons of text-criticism, or develop their own. Most Christians are not equipped for this kind of work. 
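
To see how subjective these deciding principles are, consider a toy sketch in which two common canons are applied to the same variant unit and return opposite answers. The readings, witness counts, and dates below are invented assumptions for illustration, not real apparatus data:

```python
# Toy illustration: one variant unit judged by two different canons.
# The witness counts and dates are invented for the example.
variants = [
    {"reading": "reading A", "witnesses": 800, "earliest_century": 9},
    {"reading": "reading B", "witnesses": 3,   "earliest_century": 4},
]

# Canon 1: prefer the majority reading (greatest number of witnesses).
by_majority = max(variants, key=lambda v: v["witnesses"])

# Canon 2: prefer the earliest attested reading (lowest century).
by_earliest = min(variants, key=lambda v: v["earliest_century"])

print("Majority canon chooses:", by_majority["reading"])  # reading A
print("Earliest canon chooses:", by_earliest["reading"])  # reading B
```

Same evidence, two canons, two different Bibles. Which canon a reader trusts decides what he reads as Scripture.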

Since the reality is that the vast majority of Christians are not equipped for this kind of work, it is done for them by the editors of various translations and by rogue Christians with some knowledge of the original languages. Christians are told by a handful of people which verses they are to have confidence in. The footnotes tell a Christian what to read, popular opinion tells a Christian what to read, or Christians decide for themselves what they ought to read. Yet underneath every verse is a mountain of textual variations and a sign that says, “We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it.” That is to say, every Christian is held captive by the judgements of textual scholars, translation committees, and the opinions of one or two people with a platform when they read their Bible. What a Christian considers Scripture today could easily be out of vogue tomorrow. This is plainly evident in the transformation of modern Bibles over the last fifty years.  

Even if there is no critical footnote in the margin of a translation, the Christian has to know that every single verse can be questioned with the same amount of uncertainty applied to the verses which do have footnotes. This is due to the fact that the axiom itself produces “radical skepticism.” That is to say, if a Christian cannot know for sure that the words they are reading are authentic, they must adopt some theological principle which gives them certainty. The common view seems to be, “We don’t know what the original said, yet we are going to read it as though we do anyway.” Yet this view is entirely contingent upon external methods, and produces different results for each Christian. It is perfectly reasonable for two Christians adhering to this same view of the text to have radically different opinions on each line of Scripture. Since, according to the top scholars, we can’t know who is right, both Christians are equally justified in their decisions. There is nothing wrong, in this view of the text, with one person accepting Luke 23:34 and another rejecting it. On what grounds would one even begin to make a dogmatic statement one way or the other using text-critical methods? In an attempt to combat “absolute certainty,” the people of God are held hostage by the opinions of men. The practical task of reading a Bible has been turned into a task that only the most qualified men and women can execute. The act of reading the Bible is unmistakably transformed into an activity of scrutinizing the text and then reading it with only a reasonable amount of certainty. The Bible has yet again been taken from the hands of the plowboy. 

Substantial Preservation is Not Scriptural

The Bible is clear that Christians are to have absolute certainty in the Scriptures (Mat. 24:35; Ps. 19:7; 2 Tim. 3:15-17; 2 Pet. 1:20-21; Heb. 1:1-2; Mat. 4:4; Mat. 5:18; Jn. 10:27). Absolute certainty in God’s Word is not a bad thing, despite the strange opinions of men and women who say that it is something to be fought off and beaten out of the church. No pastor in his right mind would mount the pulpit and say that we do not know, and have no way of knowing, what the original text of the Old and New Testaments said. There is no gentle way to put it: this view is dangerous, and the proponents of such a view would have been put outside the camp for saying such a thing all throughout church history. Yet, in this day and age, the view of absolute certainty in the Holy Scriptures is called “dangerous” and is demonized. The visceral reality is, if Christians are not to have absolute certainty in their Bibles, they have no reason to believe it is God’s Word at all. If some of God’s Word is compromised, why should Christians believe that the rest of it isn’t? That is to say, if God had the ability to preserve some of the words, He had the ability to preserve them all. There is no reason to believe that God would conveniently preserve just the words we, in the 21st century, think are preserved. In order to claim that God only preserved some words, you must adopt the contradictory view that God is both a) powerful enough to preserve some of the words of Scripture and b) not powerful enough to preserve them all. This position is adopted under the guise of humility. Since we “know” that there are places where the Scriptures weren’t kept pure, then God must not have preserved all of them! It is actually anachronistic and prideful to think that God preserved every word! Yet these places where God didn’t keep His Word pure have conveniently aligned with the theories and conclusions of textual scholars for over 200 years. It is rather peculiar that God would think so much like a text-critic.

If Christians are to take a mediating position between radical skepticism and absolute certainty, the process of reading a Bible becomes a burdensome act that few Christians are even capable of performing. 99% of Christians do not know the original languages, and even those who do are not up to date on all of the changes in textual scholarship. That means nearly every Christian is held captive by their preferred scholar or pastor as to what their Bible really says. They either have to put their head in the sand and go with the flow of every changing edition of their Bible, or get lost in the radical skepticism that is espoused by textual scholars. Do not be mistaken: the idea that we cannot know what the prophets and apostles wrote is absolutely a form of radical skepticism. It may not be so in the intention or heart of these scholars, but in practice I see no way around it. If the Bible is only generally reliable, then each Christian has the responsibility of figuring out the places of general reliability. This view leads to opinions like the one I received on my YouTube channel, where a man said, “The textual apparatus is the lifeblood of the pastor.” This view is so disconnected from any sort of pastoral reality that I wanted to scream. No sir, every word that proceedeth out of the mouth of God is the lifeblood of the pastor, not the places where God’s Word has been called into question. The act of reading the Bible is not to be an activity of constantly saying, “Yea, hath God said?”

Conclusion

The doctrine of Scripture which says that the words are only generally reliable is not defensible, practical, or Scriptural. It is so far disconnected from the people of God that I hope it never succeeds in being forced upon people who actually read their Bible daily. Not only is there no way of determining which passages of Scripture are “important” enough, there is no way to even prove that a passage is reliable if we have no way of validating those passages. This view, as it is articulated now, leaves every single Christian hanging three feet in midair, in the clutches of people who are “qualified” to make judgements on the text of Holy Scripture. The bottom line of this view is that each Christian needs to either a) trust a scholar to tell them what God’s Word says, b) develop their own canons of validating God’s Word by learning Greek and Hebrew and the history of text-criticism, or c) put their head in the sand and ignore the signpost under each verse that says, “this may or may not be God’s Word.” It seems that in an attempt to appease the mockers of God’s Word, scholars have simply given up God’s Word altogether. Yes, this is an all-or-nothing sort of situation. You cannot say in one breath that the Word of God is reliable, and then in the next that we don’t know what God’s Word originally said. There is no middle ground here. Either we know or we don’t. I think if Evangelical scholars took Bart Ehrman more seriously, they might recognize that his fundamental problem is the fundamental problem with modern textual scholarship. It is the problem that the historical Protestants answered by standing on the self-authenticating principle of Scripture. 

The false dichotomy of “radical skepticism” and “absolute certainty” misses the point of this discussion entirely. All Christians are commanded to have absolute certainty in the Holy Scriptures. If one rejects absolute certainty, then there is no middle ground between that and radical skepticism. Such a middle ground would require some scholar producing a work which catalogues all of the verses that are generally reliable and those that are not. Even if that work were produced, it would have an asterisk next to every determination that would read, “I have no way of proving this.” The fruit of such an opinion is evident in the real world. Since the axioms and implementation of modern text-criticism have only produced a data set, and not a text, every single Christian with a bit of knowledge on the topic is encouraged to produce their own text. This is made clear by the fact that this is exactly what Christians are doing. 

The modern printed texts are simply a guide as to what one should read as Scripture. Protestantism was founded on the self-authenticating principle of Scripture, Sola Scriptura. This was the foundational doctrine which caused the Reformation to succeed. Christians did not need a magisterium for Scripture to be authenticated; the Scriptures themselves provided the authority. Yet in the modern period, this has been abandoned. Every Protestant has their own Bible, their own authority, which may or may not be God’s Word. Christians leap into battle with this Bible and try to combat the Muslim or the Roman Catholic, and they do so thinking that they are winning. The fact is, when Christians adopt an uncertain view of the text, they rush into battle with a Nerf sword thinking they have a hardened claymore, and the opponents of the faith know it. Why do you think these apologists are so eager to broadcast these debates worldwide? 

I can already see people trying to make this conversation about textual variants, because that is all they can do. Yet I want people to remember: when an Evangelical shouts about variants from a modern critical text position, they are standing on ground that cannot support any of their claims with any amount of certainty. Absolute certainty is bad, if you recall. Remember this quote the next time somebody claims to know what the author originally wrote at Ephesians 3:9 or 1 John 5:7: “We do not have now – in our critical Greek texts or any translations – exactly what the authors of the New Testament wrote. Even if we did, we would not know it.” Any argument for absolute certainty about a text from a modern critical perspective is built on a foundation that does not claim to know absolutely what the original reading said. There is no consistent methodology that can produce a printed text representing exactly what the prophets and the apostles wrote, and the honest scholars admit as much. So when somebody says, “I want what Paul wrote!” and then argues for a modern critical methodology, just remember that they have not adopted a methodology that can produce what Paul wrote. 

Even if it could, they would not know it. At the end of the day, they are cold and naked, trusting with blind faith that their autonomous reasoning and critical methodologies have given them at least a middle ground between skepticism and certainty. That is why the war which is waged against those with absolute certainty doesn’t make a whole lot of sense to me. The real argument being made, when somebody asks, “Which TR?” or makes a demonstration from evidence for or against a reading, is, “Why are you so certain that this reading is original?” The only thing that is inherently problematic, from a modern critical perspective, is absolute certainty, not the readings themselves. Anybody who says that the readings themselves are wrong is simply being inconsistent, because they have adopted a system that does not pretend to have produced original readings (or at least to know that it has produced original readings). It is impossible to have any legitimate problem with a particular printed text when these critics aren’t claiming to know what the original says themselves! The only appropriate answer, from a modern critical perspective, to somebody who believes 1 John 5:7 is Scripture is simply, “I don’t have confidence that that is original, but I can’t prove it either way.”

I imagine that many will take issue with this article. They will say that I have misrepresented the perspectives and opinions of those who adopt a form of substantial preservation. To these critics I say: can you produce a list of verses that the people of God should be certain about? Would you be willing to take those verses to Bart Ehrman and DC Parker and Eldon J. Epp and say that those readings are original? Can you detail the methodology you used to determine which doctrines are important and which are not? Can you prove to me that the verses you deemed unoriginal weren’t in the original text of the prophets and apostles? Did you use a methodology that is consistent and repeatable? The fact is, there is not a single responsible scholar alive who would be willing to produce answers to these questions. Instead, they will instruct Christians to believe a) that we don’t know absolutely what the prophets and apostles said and b) that Christians should believe that the words placed in the printed Greek texts and translations are the words of the prophets and apostles anyway, with a medium amount of certainty. Either that, or they will continue to shout about a particular variant they have researched and ignore the underlying reality that I have presented in this article. 

This is not the ticket, church. The only outcome of this view, ironically, is radical skepticism. Fortunately, God is not tossed to and fro by the opinions of 21st-century scholars. He has indeed given His Scriptures to the church, and the church has received them in time. I don’t believe that God is “generally reliable”; I believe He is absolutely reliable. That means I believe His Word is absolutely reliable, and should be absolutely trusted. If the only grounds we have for believing in Scripture are the conclusions of modern textual scholars, I don’t see any good reason for any Christian to believe in the Scriptures at all. Yet the Scriptures are clear. The ground for believing in Scripture is the fact that God has spoken, and has spoken in His Word. If God has truly spoken in His Word, the Holy Scriptures, then Christians have every reason to believe that they can be absolutely certain about God’s Word.    

A Summary of the Confessional Text Position

Introduction

In this article, I will provide a shotgun-blast summary of the Confessional Text Position, as well as some further commentary to help those trying to understand the position better. In this short article, I do not expect to have articulated every nuance of the position perfectly, but I hope that I have communicated it clearly enough for people to understand it as a whole. My goal is that the reader can at least see why I adhere to the Traditional Hebrew and Greek texts and the translations thereof.

In 15 Points

1. God has voluntarily condescended to man by way of speaking to man (Deus Dixit) and making covenants with him (Gen. 2:17; 3:15)

2. To the people of God of old, He spoke by way of the prophets (Heb. 1:1)

3. In these last days, He has spoken to His people by His Son, Jesus Christ (Heb. 1:2)

4. The way that God has spoken by Jesus Christ is in Scripture through the inspiration of the Biblical writers by the Holy Spirit (2 Peter 1:21; 2 Tim. 3:16). The Bible is the Word of God, and in these last days, is the way that Christians hear the voice of their Shepherd by the power of the Holy Spirit (John 10:27). The Bible does not contain the Word of God, nor does it become the Word of God; it is the Word of God.

5. The purpose of this speaking is to make man “wise unto salvation” and “furnished unto all good works” (2 Tim. 3:15, 17; Rom. 1:16; 10:17)

6. Jesus promised that His Word would never fall away, as it is the means of accomplishing His covenant purpose (Mat. 5:18; 24:35)

7. Since God has promised that His words would not fall away, the words of Scripture have been kept pure in all ages, or in every generation (WCF 1.8; Mat. 5:18; 24:35) until the last day

8. Up until the invention of the printing press in 15th-century Europe, books were hand copied. This hand copying resulted in thousands of manuscripts being circulated and used in churches for all matters of faith and practice. These manuscripts are generally uniform, except for a handful of manuscripts formerly known as the “Alexandrian Text Family,” which were not widely copied or circulated. When Constantinople fell in 1453, just 14 years after the invention of the printing press in Europe, Greek Christians fled to Italy, bringing with them their Bibles and language.

9. The printing press was put to use in the creation of printed Bibles in many different languages, notably Greek and Latin

10. If it is true that the Bible has been kept pure, then it was kept pure up to the 16th century. Thus, the manuscripts used in the first effort of creating a printed text carried the same text used by the people of God up to that point. Text-critics such as Theodore Beza would appeal to the “consent of the church” as part of their textual methodology, which demonstrates that the reception of readings by the church was an integral part of the compilation of this text

11. The text produced over the course of a century during the Reformation period was universally accepted by the Protestants, even to the point of other texts being rejected. It is historically documented that this is the “text received by all” (the Received Text), which is made abundantly clear in the commentaries, confessions (see proof texts), translations, and theological works up until the 19th century.

12. This Greek text, along with the Masoretic Hebrew text, remained the main text for translation, commentary, theological works, etc., until the 19th century, when Hort’s Greek text, based on Codex Vaticanus, was adopted by many. At the time, many believed that Hort’s text was the true original, which caused many people to adopt readings from this text over and above the Received Text. The Vaticanus text form had been rejected by Erasmus and the Reformers, and has no surviving contemporary copies descended from it, meaning it was simply not copied or used by the church at large.

13. This Greek text was adopted based on Hort’s theory that Vaticanus was “earliest and best”, and the text of modern Bibles generally reflects this text form, even today. Due to the Papyri and the CBGM, Hort’s theory has been rejected by all in the scholarly community, to say nothing of Hoskier’s devastating analysis of Codex B (Vaticanus).

14. Thus, the Confessional Text position adopts the Greek and Hebrew text, and translations thereof, that were “received by all” in the age of printed Bibles, and used universally by the orthodox for 300 years practically uncontested, except by Roman Catholics and other heretical groups (Anabaptists, Socinians, etc.).

15. The most popular of these translations, the Authorized Version (KJV), is still used by at least 55% of people who read their Bible daily as of 2014, and by at least 6,200 churches. Additionally, translations made from these Greek and Hebrew texts into other languages remain widely popular across the world. Other English Bibles are based on this text, such as the MEV, NKJV, GNV, and KJ3, but they are relatively unused compared to the AV.

Further Commentary

The adoption of the Greek Received Text and the Hebrew Masoretic text is one based on what God has done providentially in time. Many assert that the history of the New Testament can only be traced by extant manuscript copies, but those copies do not tell the whole story. The readings in the Bible are vindicated, not by the smattering of early surviving manuscripts, but by the people that have used those readings in history (John 10:27), readings which are preserved in the texts actually used by those people. Since we will never have all of the manuscripts, due to war, fire, etc., it is impossible to verify genuine readings by the data available today, as there is no “Master Copy” to compare them against. That is why the current effort of text-criticism is pursuing a hypothetical Initial Text, which relies on constructing a text based on the first 1,000 years of manuscript transmission.

The product of this is called the Editio Critica Maior (ECM), and it will not be finished until 2030. The methodology used to construct this text (the CBGM) has already introduced uncertainty among its editors as to whether they can even find the Initial Text, or whether they will find only one Initial Text. That is to say, from the time of Hort’s text in the 19th century, the modern effort of textual criticism has yet to produce a single stable text. The printed editions of the modern critical text contain a great wealth of textual data, but none of them is a stable text that will not change in the next ten years. That is to say, translations built on these printed editions are merely a representation of what the editors think the best readings are, not necessarily what the best readings are in reality.

Rather than placing hope in the ability of scholars to prove this Initial Text to be original, Christians in the Confessional Text camp look back to the time when hand copied manuscripts were still being used in churches and circulated in the world. The first effort of “textual criticism,” if you will, is unique because it is the only effort of textual criticism that took place while hand copied codices were still being used as a part of the church’s practice. That means that the quality and value of such codices could be validated by the “consent of the church,” because the church would only have adopted a text that was consistent with the one they had been using up to that point. This kind of perspective is not available to a modern audience. During the time of the first printed editions, the corruption of the Latin Vulgate was exposed, and the printed editions created during that time were in themselves a protest against the Vulgate and the Roman Catholic church, which had in its possession a corrupted translation of the Scriptures. It was during this time, and because of these printed texts, that Protestantism was born.

Any denomination claiming to be Protestant has direct ties back to this text, and the theology built upon it. The case for the Confessional Text is really quite simple, when you think about it. God preserved His Word in every generation in hand copied manuscripts until the form of Bibles transitioned to printed texts. Then He preserved His Word in printed Greek texts based on the circulating and approved manuscripts. This method of transmission was far more efficient and cheap, and far more easily distributed, than the former method of hand copying. This text was received, commented on, preached from, and translated for centuries, and is still used by the majority of Bible reading Christians today. The argument for this text is not one based in tradition; it is one based on simply looking back into history and seeing which text the people of God have used in time, not simply the story told by the choice manuscripts of modern scholars.

Any theories on other text forms are typically based on a handful of ancient manuscripts that were not copied or used widely, and the idea that this smattering of early manuscripts represents the original text form is simply speculation. What history tells us is that the text vindicated in time is the text the people of God used, copied, printed, and translated. This does not mean that every Christian at all times has used this text, just that it carries the overwhelming testimony of the people of God as a whole. The fact is that we know very little about the transmission and form of the text in the ancient church in comparison to what we know about the text after the ancient period. The critical text, while generally looking like the Received Text, is different from the historical text of the Protestants, which is why those in the Confessional Text camp do not use those editions. The few Papyri we have even demonstrate that readings characteristic of the later manuscripts known as the Byzantine text family were circulating in the ancient church.

Conclusion

So why is there a discussion regarding which text is better? Up until this point in history, the alternative text, the critical text, has been thought to be much more stable and certain than it is now. Currently, the modern critical text is unfinished, and will remain that way until at least 2030, when the ECM is finished. Those in the Confessional Text position might ask two very important questions regarding this text: Does a text that represents the text form of a handful of the thousands of manuscripts, a text which is incomplete, sound like a text that is vindicated in time? Does a changing, uncertain, unfinished text speak to a text that has been preserved, or to one that has yet to be found? I suppose these questions aren’t answerable until 2030, when it is complete. This alone is a powerful consideration for those investigating the issue earnestly. Most people in the Confessional Text camp do not anathematize those who read Bibles from the critical text, or break fellowship over it, but we do encourage and advocate for the use of Traditional Text Bibles, as it is the historical text of the Protestant church.

For More Information on Why I Prefer the Received Text, Click Here

For Interactions with Arguments Against the Received Text, Click Here

The Most Dangerous View of the Holy Scriptures

Introduction

Quite often in the textual discussion, it is boldly proclaimed that “our earliest and best manuscripts” are Alexandrian. Yet this statement introduces confusion at the start, because there are sound objections as to whether it is even appropriate to use a term such as “Alexandrian” when describing the “earliest and best manuscripts”, as though they were a text family or text type. There does not seem to be an “Alexandrian” text type, only a handful of manuscripts that have historically been called Alexandrian. This conclusion comes from the more precise methods now being employed, which allow quantitative analysis to be done on the variant units of these manuscripts. The result of this analysis has demonstrated that the manuscripts called Alexandrian do not meet the threshold of agreement to be considered a textual family. Tommy Wasserman explains this shift in thought. 

“Although the theory of text types still prevails in current text-critical practice, some scholars have recently called to abandon the concept altogether in light of new computer-assisted methods for determining manuscript relationships in a more exact way. To be sure, there is already a consensus that the various geographic locations traditionally assigned to the text types are incorrect and misleading” (Wasserman, http://bibleodyssey.org/en/places/related-articles/alexandrian-text). 

Thus, the only place the name “Alexandrian” might occupy in this discussion is one of historical significance, or possibly to serve in identifying the handful of manuscripts that bear the markers of Sinaiticus and Vaticanus, which disagree heavily among themselves, as the Munster Method has demonstrated (65% agreement between 01 and 03 in the places examined in the Gospels, and 26.4% agreement with the Majority text; http://intf.uni-muenster.de/TT_PP/Cluster4.php). So in using the terminology of “Alexandrian”, one is already introducing confusion into the conversation, confusion that represents an era of textual scholarship that is on its way out. Regardless of whether or not it is appropriate to use the term “Alexandrian”, it may be granted that it is a helpful descriptor for the sake of discussion, since the modern critical text in the most current UBS/NA platform generally agrees with at least two of these manuscripts (03 at 87.9% and 01 at 84.9%) in the places examined (See Gurry & Wasserman, 46). 
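For readers who want to see what a figure like “65% agreement” actually measures, here is a minimal sketch in Python, using entirely hypothetical readings rather than the INTF’s actual data or code: each witness is reduced to its reading at a set of variant passages, and agreement is simply the percentage of shared passages where the two readings match.

```python
# Minimal sketch of pairwise agreement between two witnesses.
# The passages and readings below are hypothetical placeholders,
# not actual collation data from the INTF.

def agreement(readings_a, readings_b):
    """Percent agreement at the variant passages both witnesses attest."""
    shared = [p for p in readings_a if p in readings_b]
    if not shared:
        return 0.0
    matches = sum(1 for p in shared if readings_a[p] == readings_b[p])
    return 100.0 * matches / len(shared)

# Readings are abstract labels ("a", "b") at five sample variant passages.
sinaiticus_01 = {"Mk 1:1": "a", "Mk 1:2": "b", "Mk 1:41": "a", "Lk 23:34": "b", "Jn 1:18": "a"}
vaticanus_03  = {"Mk 1:1": "a", "Mk 1:2": "b", "Mk 1:41": "b", "Lk 23:34": "a", "Jn 1:18": "a"}

print(f"01/03 agreement: {agreement(sinaiticus_01, vaticanus_03):.1f}%")  # 60.0% on this toy data
```

Note that the resulting percentage depends entirely on which passages are sampled, which is why such figures are always qualified by “in the places examined.”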

The bottom line is this: the new methods currently being employed (the CBGM, or Munster Method) are still ongoing, and will be ongoing until at least 2032. So any arguments made on behalf of the critical text are liable to shift as the effort continues and new data comes to light. As a result of this developing effort, any attempt to defend such texts is operating from an incomplete dataset, based on the very methods being defended. Granting the general instability of the modern critical text, at least until the Editio Critica Maior (ECM) is completed, the conversation itself is likely to change over the next 12 years. In the meantime, it seems that the most productive conversation to have is one which discusses the validity of the method itself, since the dataset is admittedly incomplete.

Is the Munster Method Able to Demonstrate the Claim that the “Alexandrian” Manuscripts Are Earliest and Best?

The answer is no. The reason I say this is the method being employed. I have worked as an IT professional for 8 years, specifically in data analysis and database development, which gives me a unique perspective on the CBGM. An examination of the Munster Method (CBGM) will show that the method is insufficient to arrive at any conclusion on which text is earliest. While the method itself is actually quite brilliant, its limitations prevent it from providing any sort of absolute conclusion on which text is earliest, or original, or best. There are several flaws that should be examined, if those who support the current effort want to properly understand the method they are defending. 

  1. In its current form, it does not factor in versional or patristic data (or texts as they have been preserved in artwork for that matter)
  2. It can only perform analysis on the manuscripts that are extant, or surviving (so the thousands of manuscripts destroyed in the Diocletian persecution, or in WWI and WWII, can never be examined, for example)
  3. The method is still vulnerable to the opinions and theories of men, which may or may not be faithful to the Word of God

So the weaknesses of the method are threefold: it does not account for all the data currently available, it will never have the whole dataset, and even when the work is finished, the analysis will still need to be interpreted by fallible scholars. Its biggest flaw, however, is that the analysis is being performed on a fraction of the dataset. Not only are defenders of the modern critical text defending an incomplete dataset while the work is still ongoing; the end product of the work itself will be operating from an incomplete dataset. So to defend this method is to defend the conclusions of men on the analysis of an incomplete dataset of an incomplete dataset. The scope of the conclusions this method will produce will be limited to the manuscripts that we have today. And since there is an overwhelming bias in the scholarly world toward one subset of those manuscripts, it is more than likely that the conclusions drawn from the analysis will look very similar, if not identical, to the conclusions drawn by the previous era of textual scholarship (represented by Metzger and Hort). And even if these biases are crushed by the data analysis, the conclusions will be admittedly incomplete because the data is incomplete. Further, quantitative analysis will never be free of the biases of those who handle the data. Dr. Peter Gurry comments on one weakness in the method in his book A New Approach to Textual Criticism:

“The significance of the selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). 

Further, the method itself is not immune to error. Dr. Gurry comments that, “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That is to say, after all the computer analysis is done, the scholars making textual decisions can still draw incorrect conclusions on which text is earliest, selecting a later reading as earliest. In the current iteration of the Munster Method, there are already many places where, rather than selecting a potentially incorrect reading, the text is marked to indicate that the evidence is equally strong for two readings. These places are indicated by a diamond in the apparatus of the current edition of the Nestle-Aland text, produced in 2012. There are 19 of these in 1 and 2 Peter alone (see NA28). That is 19 places in just two books of the Bible where the Munster Method has not produced a definitive conclusion on the data. That means that even when the work is complete, there will be thousands of different conclusions drawn on which readings should be taken in a multitude of places. This is already the case in the modern camp even without the application of the CBGM; a great example is Luke 23:34, where certain defenders of the modern critical text have arrived at alternative conclusions on the originality of this verse.
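To see how an inversion like the one Gurry describes could arise, consider a toy sketch of the CBGM’s “potential ancestor” logic, with hypothetical witnesses and editorial judgments rather than the actual INTF implementation. Comparing two witnesses only at the passages where they disagree, the one holding the reading judged prior more often is treated as the potential ancestor; a copy contaminated with readings from an older source can therefore look ancestral to its own exemplar.

```python
# Toy illustration of how contamination can invert an ancestor-descendant
# relationship in potential-ancestor logic. All data here is hypothetical.

# The reading judged earliest at each of four variant passages.
prior = {1: "a", 2: "a", 3: "a", 4: "a"}

witnesses = {
    "W1": {1: "b", 2: "a", 3: "b", 4: "b"},  # the actual exemplar
    "W2": {1: "a", 2: "b", 3: "a", 4: "a"},  # copied from W1, but mixed with an older source
}

def potential_ancestor(x, y):
    """At passages where x and y disagree, count who holds the prior reading."""
    x_prior = y_prior = 0
    for passage, earliest in prior.items():
        rx, ry = witnesses[x][passage], witnesses[y][passage]
        if rx == ry:
            continue
        if rx == earliest:
            x_prior += 1
        elif ry == earliest:
            y_prior += 1
    if x_prior > y_prior:
        return x
    if y_prior > x_prior:
        return y
    return "undecided"

# W2 restored prior readings at passages 1, 3, and 4 from its older source,
# so it holds the "earlier" reading more often and looks like the ancestor
# of the very manuscript it was copied from.
print(potential_ancestor("W1", "W2"))  # prints: W2
```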

There is one vitally important observation that must be noted when it comes to the current effort of textual scholarship. The current text-critical effort, while the most sophisticated to date, is incapable of determining the earliest reading due to limitations both in the data and in the methodology. A definitive analysis simply cannot be performed on an incomplete dataset. And even if the dataset were complete, no dataset is immune to the opinions of flawed men and women.

An Additional Problem Facing the Munster Method

There is one more glaring issue that the Munster Method cannot resolve. There is no way to demonstrate that the oldest surviving manuscripts represent the general form of the text during the time period in which they are alleged to have been created (3rd to 4th century). An important component of quantitative analysis is securing a sample that is generally representative of the whole population of data. A loose sample may be fine in statistical analysis of a general population, but the effort at hand is not aiming at that generic kind of precision, because the Word of God is being discussed, which is said to be perfectly preserved by God. That means that the sample of data being analyzed must be representative of the whole. The reality is that the modern method is really performing an analysis of the earliest manuscripts, which do not represent the whole, against the whole of the surviving dataset.
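A toy simulation, with purely hypothetical numbers, makes the representativeness problem concrete: if copies from one locality survive at a far higher rate than copies from everywhere else (as an arid climate would allow), the surviving sample will be dominated by that locality’s text form even if it was a small minority of what once circulated.

```python
# Toy survival-bias simulation with hypothetical proportions.
import random

random.seed(0)  # fixed seed so the run is repeatable

# Hypothetical population: 10,000 early manuscripts, of which 10% come
# from one locality whose text form differs from the wider tradition.
population = ["local"] * 1_000 + ["general"] * 9_000

# Suppose climate lets 5% of local copies survive, but only 0.05% of the rest.
survivors = [m for m in population
             if random.random() < (0.05 if m == "local" else 0.0005)]

local_share = survivors.count("local") / len(survivors)
print(f"{len(survivors)} survivors, {local_share:.0%} from the one locality")
# The handful of survivors overwhelmingly represents the minority locality,
# telling us little about the text form of the population as a whole.
```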

It is generally accepted among modern scholarship that the Alexandrian manuscripts represent the text form that the whole church used in the third and fourth century. This is made evident when people say things like, “The church wasn’t even aware of this text until the 1500s!” or “This is the text they had at Nicea!” Yet such claims are woefully lacking in any sort of proof, and in fact, the opposite can be demonstrated to be true. If it can be demonstrated that the dataset is inadequate as it pertains to the whole of the manuscript tradition, or that the dataset is incomplete, then the conclusions drawn from the analysis can never be said to be absolutely conclusive. There are two points I will examine to demonstrate the inadequacy of the dataset and methodology of the CBGM, which disqualifies it from being a final authority on the original form of the New Testament.

First, I will examine the claim that the manuscripts generally known as Alexandrian were the only texts available to the church during the third and fourth centuries. This is a premise that must be proved in order to demonstrate that the conclusions of the CBGM represent the original text of the New Testament. In order to make such a claim, one has to adopt the narrative that the later manuscripts represented in the Byzantine tradition were a development, an evolution, of the New Testament text. On this narrative, the later manuscripts which became the majority were the product of scribal mischief and the revisionist meddling of the orthodox church, not a separate tradition that goes back to the time of the Apostles. This narrative requires the admission that the Alexandrian texts evolved so heavily that by the medieval period the Alexandrian text had transformed into an entirely different Bible, with a number of smoothed out readings and even additions of entire passages and verses into the text, which were received by the church as canonical! Since this cannot be supported by any real understanding of preservation, the claim has to be made that the true text evolved, and that the original remains somewhere in the texts that existed prior to this scandalous revision effort of Christians throughout the ages. This is why there is such a fascination surrounding the Alexandrian texts, and a determination by some to “prove” them to be original (which is impossible, as I have discussed).

That being said, can it be demonstrated that these Alexandrian manuscripts were the only texts available to the church during the time of Nicea? The simple answer is no, and the evidence clearly shows that this is not the case at all. First, the many patristic quotations of Byzantine readings demonstrate the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. One can point to Origen as the champion of the Alexandrian text, but Origen wasn’t exactly a bastion of orthodoxy, and I would hesitate to draw any conclusion from him other than the fact that after him, the church essentially woke up and found itself entirely Arian, or in some other form of heterodoxy as it pertained to Christ and the Trinity. Second, the existence of Byzantine readings in the papyri demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. Finally, Codex Vaticanus, one of the chief exemplars of the Alexandrian texts, is itself proof that other forms of the text existed at the time of its creation. This is chiefly demonstrated by the fact that there is a blank space the size of 11 verses at the end of Mark where text should be. This space completely interrupts the otherwise uniform format of the codex, which indicates that the scribes were aware that the Gospel of Mark did not end at, “And they went out quickly, and fled from the sepulchre; for they trembled and were amazed: neither said they any thing to any man; for they were afraid.” They were either instructed to exclude the text, or did not have a better copy as an exemplar which included it. In any case, they were certainly aware of other manuscripts that had the verses in question, which points to the existence of other manuscripts contemporary to Sinaiticus and Vaticanus. Some reject this analysis of the blank space as it applies to Sinaiticus (which also has a blank space), but offer additional reasons why this is the case nonetheless; see this article for more. James Snapp notes that “the existence of P45 and the Old Latin version(s), and the non-Alexandrian character of early patristic quotations, supports the idea that the Alexandrian Text had competition, even in Egypt.” Therefore it is absurd to claim that every manuscript circulating at the time looked the same as these two exemplars, especially considering the evidence that other text forms certainly existed.

Second, I will examine the claim that the Alexandrian manuscripts represent the earliest form of the text of the New Testament. It can easily be demonstrated that these manuscripts do not represent all of their contemporary manuscripts, but that is irrelevant if they truly are the earliest. Yet the current methodology has absolutely no right to claim that it is capable of proving such an assertion. Since the dataset does not include the other manuscripts that clearly existed alongside the Alexandrian manuscripts, one simply cannot draw any conclusions regarding the supremacy of those texts; one must jump from the espoused method to conjecture and storytelling to do so. Those defending the modern text often boldly claim that fires, persecution, and war destroyed a great deal of manuscripts. That is exactly right, and it needs to be considered when making claims regarding the manuscripts that survived and clearly were not copied any further. One has to seriously ponder why, in the midst of the mass destruction of Bibles, the Alexandrian manuscripts were considered so unimportant that they weren’t used in the propagation of the New Testament, despite the clear need for such an effort. Further, these manuscripts are so heavily corrected by various scribes that it is clear they weren’t considered authentic in any meaningful way. 

Even if the Alexandrian manuscripts do represent the “earliest and best”, there is no way of determining this to be true, due to the simple fact that the dataset from that time period is so sparse. In fact, the dataset from this period only represents a text form that is aberrant, quantitatively speaking. It is evident that other forms of the text existed, and though those manuscripts no longer survive, the form of those texts survives in the manuscript tradition as a whole. The fact remains that there are no contemporary data points to even compare the Alexandrian manuscripts against to demonstrate this claim to be true. Further, there are not enough second century data points to compare the third and fourth century manuscripts against to demonstrate that the Alexandrian manuscripts represent any manuscript earlier than the time they were created. It is just as likely, if not more likely, that these manuscripts were an anomaly in the manuscript tradition. The current methods simply are not sufficient to operate on data that isn’t available. This relegates any form of analysis to the realm of storytelling, which exists in the theories of modern scholars (expansion of piety, scribal smoothing, etc.).

Conclusion

Regardless of which side one takes in the textual discussion, the fact remains that the critiques of the modern methodology as it exists in the CBGM are extremely valid. The method is primarily empirical in its form, and empirical analysis is ultimately limited by the data available. Since the available data will never be complete short of a massive 1st and 2nd century manuscript find, the method itself will forever be insufficient to provide a complete analysis. The product of the CBGM can never be applied honestly to the whole of the manuscript tradition. Even if we were to find 2,000 2nd century manuscripts, there would still be no way of validating that those manuscripts represent all of the text forms that existed during that time. As a result, the end product will simply provide an analysis of an incomplete dataset. It should not surprise anybody if the conclusions drawn from this dataset in 2032 simply look like the conclusions drawn by the textual scholarship of the past 200 years. This being the case, the conversation will be forced into the theological realm. If the modern methods cannot prove any one text to be authorial or original, those who wish to adhere to that text will ultimately be forced to make an argument from faith. This is already being done by those who downplay the significance of the 200 year gap in the manuscript tradition from the first to third centuries and say that the Initial Text is synonymous with the original text. 

The fact remains that ultimately those who believe the Holy Scriptures to be the divinely inspired Word of God will still have to make an argument from faith at the end of the process. Based on the limitations of the Munster Method (CBGM), I don’t see any reason for resting my faith on an analysis of an incomplete dataset which is more than likely going to lean on the side of secular scholarship when all is said and done. This is potentially the most dangerous position on the text of Scripture ever presented in the history of the world. This position is so dangerous because it says that God has preserved His Word in the manuscripts, while the method being used can never determine which words He preserved.

The analysis performed on an incomplete dataset will be hailed as the authentic word(s) of God, and the conclusions of scholars will rule over the people of God. It is possible that there will be no room for other opinions in the debate, because the debate will be “settled”. And the settled debate will arrive at the conclusion of, “well, we did our best with what we have but we are still unsure what the original text said, based on our methods”. This effectively means that one can believe that God has preserved His Word, and at the same time not have any idea what Word He preserved. The adoption of such conclusions will inevitably result in the most prolific apostasy the church has ever seen. This is why it is so important for Christians to return to the old paths of the Reformation and post-Reformation, which affirmed the Scriptural truth that the Word of God is αυτοπιστος, self-authenticating. It is dishonest to say that the Reformed doctrine of preservation is “dangerous” without any evidence of this, especially considering the modern method is demonstrably harmful.  

Providential Exposure as it Relates to Preservation

The Theological Method and Preservation

The Theological Method for determining the text of Scripture heavily relies upon understanding the text that has been received by Christians, which is commonly called “exposure”. The text of Scripture is that which has been exposed to the people of God throughout the ages. John 10:27 says, “My sheep hear my voice, and I know them, and they follow me” (KJV). Michael Kruger, in his book Canon Revisited, says this,

“When people’s eyes are opened, they are struck by the divine qualities of Scripture – its beauty and efficacy – and recognize and embrace Scripture for what it is, the word of God. They realize that the voice of Scripture is the voice of the Shepherd” (101). 

This might seem like subjectivism, but this is the historic doctrine that has been recognized throughout the ages by the theologians of the faith, most notably John Calvin and Herman Bavinck. This doctrine is not to be confused with the Mormon doctrine of “burning in the bosom”, a confusion which has been made by men like James White. Many false doctrines are based on truth, and here the Christian must recognize that God’s Word is the means by which He speaks to His people in these last days, regardless of how that doctrine has been twisted by other systems. 

What distinguishes this doctrine from its Mormon counterpart is that this reception of the Scriptures by the people of God is not purely individualistic. The text of Scripture, as it has been handed down, exists ontologically, not just subjectively. There is a concrete shape of God’s Word that exists, and the people of God have had that Word in every generation. The text of Scripture must primarily be viewed as a covenantal document given to the covenant people of God for their use in all matters of faith and practice (LBCF 1.1). That does not mean that all those professing Christianity throughout the ages have agreed upon what belongs in Scripture, or that every Christian has had access to the whole of Scripture in every generation. In fact, there are a multitude of Christians that do not have access to God’s Word, either by circumstance, or by choice. 

It is important to take note of how the Apostolic church received the text of Holy Scripture to understand the doctrine of exposure as it relates to preservation. In the New Testament, there is never a case where Scripture is said to be a gift delivered to individual people. The Scriptures were always a corporate blessing to the covenant people of God (Acts 15:14; Titus 2:14; 1 Peter 2:9). In the testimony of Scripture itself, we can see that God delivers His Word to the people of God, not individual people of God. So the doctrine of exposure does not crumble due to certain individuals not having a copy of the Bible at all times. If this were the case, the fact that there are Christians who simply do not own a Bible would discredit this doctrine altogether. 

Despite the fact that the Canon is recognized in part by its corporate reception, this doctrine of providential exposure does not rest on ecclesiastical authority, as the papists might claim. There is no one single church which is responsible for giving authority to the text of Holy Scripture. In fact, no church could give authority to the Scriptures; they are authoritative in themselves (αυτοπιστος). Kruger explains this well: “The books received by the church inform our understanding of which books are canonical not because the church is infallible or because it created or constituted the canon, but because the church’s reception of these books is a natural and inevitable outworking of the self-authenticating nature of Scripture” (106).

It must be stated that Kruger makes a distinction between the canon and the text of the canon, which is the common thought amongst conservative scholarship. Upon examination of the theological method, however, there does not seem to be good reason to separate the two. If the doctrinal foundation of providential exposure demonstrates the “efficacy of the Shepherd’s voice to call” (106), then it follows that there must be a definitive voice that does the calling. The name of a canonical book is simply not efficacious to call sinners to repentance and faith; simply listing off the canonical books is not the Gospel call. So the material that is providentially exposed to the people of God must also contain the substance which is effective unto life by the power of the Holy Spirit. God has not just preserved the book sleeves of the Bible, He has also preserved the words within those book sleeves. 

Since the Bible is self-authenticating, Christians cannot look to the totality or purity of its reception to determine which books or texts of the Bible should be received today. That is to say, the fact that the majority of Christian people do not accept a passage as authentic today does not mean that it was not properly received in the past, or that it is not ontologically a part of the canon. A passage of Scripture may not have been accepted as canonical by various groups throughout history, and this has indeed been the case, usually due to theological controversy. 

It is antithetical to the Theological method to say that the Scriptures are self-authenticating, but then also say that people must authenticate those Scriptures by a standard outside of the Scriptures themselves. Either the Scriptures are self-authenticating, or they are not. That is why evidences are great tools for defending the Scriptures, but those evidences can never authenticate the Scriptures in themselves. It is problematic to say that God’s Word has been preserved and kept pure in all ages, and then to immediately say that He has done so imperfectly, or that He has not fully exposed that Word to His people yet. 

The Theological method provides a framework that actually gives more weight to historical thought as opposed to modern thought. It disallows a perspective which believes that the people of God lost or added passages of Scripture, and that these texts need to be recovered or removed. It prevents certain theories that the text evolved, or that Christ’s divinity was developed over time. It especially rejects the idea that the original text of the New Testament was choppy, crude, and in places incoherent, and that scribes smoothed out the readings to make the text readable. In fact, it exposes those manuscripts that are choppy and missing parts to be of poor quality, by assuming that the Holy Scriptures were inspired by the Holy Spirit rather than invented by ostensibly literate first century Jews. 

There may have been localities that corrupted the text (usually intentionally), but this does not represent the providential preservation that was taking place universally. The vast majority of textual variants are due to Scribal errors, but the significant variants were certainly an effort of revision; a Scribe simply wouldn’t have removed or added 12 verses by accident. A great example is the idiosyncratic Egyptian manuscripts uncovered in the 19th and 20th centuries, which tend to disagree with the general manuscript tradition in important variant units. These manuscripts have been given tremendous weight in the modern period due to shifting views of inspiration and preservation. 

If the Scriptures truly were inspired and preserved, then one should expect that the text did not evolve, and that the closest representative of those originals would not be riddled with short, abrupt readings. One would expect that in every stage of copying, Scribal errors would be purged out, and that the true readings would persevere. In fact, this phenomenon can be observed in the vast majority of the extant New Testament manuscripts, though it has unfortunately been described as Scribal interference, or smoothing out the text. When the transmission of the New Testament manuscripts is viewed Theologically, an entirely different story is told by the manuscripts, one which largely disagrees with the modern narrative favoring those choppy manuscripts which existed in one locality of the Christian world. 

The preservation of God’s Word can be demonstrated evidentially, but not without the proper Theological lens. Evidential arguments can be a powerful tool in all disciplines, but they are often not effective in themselves to change anybody’s mind. That is why the Theology of scholars will ultimately determine the manuscripts they deem to be earliest and best. Simply counting manuscripts, or weighing manuscripts, is not consistent with the conservative doctrine of preservation. In both cases, these methods attempt to take an external authority, such as manuscript count or the age of a manuscript, and use it to authenticate the Word of God. Yet both of these are at odds with the doctrinal standard laid forth in the Scriptures themselves: that the Word of God is self-authenticating. That is why the language of “the text that has been received” is warranted in this conversation, because it recognizes God’s providential preservation and exposure of the ontological canon to the people of God in every age. 

Conclusion 

The doctrine of exposure is often misunderstood as being too similar to the Mormon doctrine of “burning in the bosom” or the papal doctrine which states that Rome has the authority to authenticate the Bible. Despite these abuses of Scripture, the fact remains that the Scriptures are self-authenticating. It is easy to fall back onto empirical approaches, because they seem to be the most logical. Yet these empirical approaches do not do what they claim they can do, and this is becoming increasingly evident with each passing year. The number of Bibles has only increased, and exponentially at that. Modern methodology has not narrowed the text of the New Testament to fewer legitimate readings, but has greatly expanded the number of readings that “could be” original or early. 

The efforts of modern textual scholarship have only increased the uncertainty of the text of the New Testament. This has culminated in the abandonment of the search for the original text of the Bible in favor of the Ausgangstext, or the earliest text scholars can get back to (which is 3rd or 4th century). Practically speaking, this pursuit will simply result in arriving at some hypothetical form of the text that may have existed in Egypt in the third century. Since this seems to be the direction of most current New Testament text-critical scholarship, it seems that it is time to return to the old paths. The Theological method has been expressed by countless Theologians of the Christian faith, and it should not be abandoned for the sake of adopting the modern critical scientific method. The Scriptures should always be handled as self-authenticating, and a shift to this way of thinking would result in a massive change in the direction of modern New Testament scholarship.