A Summary of the Confessional Text Position

Introduction

In this article, I will provide a shotgun blast summary of the Confessional Text Position, along with some further commentary to help those trying to understand the position better. In this short article, I do not expect that I have articulated every nuance of the position perfectly, but I hope that I have communicated it clearly enough for people to understand it as a whole. My goal is that the reader can at least see why I adhere to the Traditional Hebrew and Greek text and translations thereof.

In 15 Points

1. God has voluntarily condescended to man by way of speaking to man (Deus Dixit) and making covenants with him (Gen. 2:17; 3:15)

2. In the time of the people of God of old, He spoke by way of the prophets (Heb. 1:1)

3. In these last days, He has spoken to His people by His Son, Jesus Christ (Heb. 1:1)

4. The way that God has spoken by Jesus Christ is in Scripture through the inspiration of Biblical writers by the Holy Spirit (2 Peter 1:21; 2 Tim. 3:16). The Bible is the Word of God, and in these last days, is the way that Christians hear the voice of their Shepherd by the power of the Holy Spirit (John 10:27). The Bible does not contain the Word of God, nor does it become the Word of God; it is the Word of God.

5. The purpose of this speaking is to make man “wise unto salvation” and “furnished unto all good works” (2 Tim. 3:15, 17; Rom. 1:16; 10:17)

6. Jesus promised that His Word would never fall away, as it is the means of accomplishing His covenant purpose (Mat. 5:18; 24:35)

7. Since God has promised that His words would not fall away, the words of Scripture have been kept pure in all ages, or in every generation (WCF 1.8; Mat. 5:18; 24:35) until the last day

8. Until the invention of the printing press in Europe in the mid-15th century, books were hand copied. This hand copying resulted in thousands of manuscripts being circulated and used in churches for all matters of faith and practice. These manuscripts are generally uniform, except for a handful of manuscripts formerly known as the “Alexandrian Text Family”, which were not widely copied or circulated. When Constantinople fell in 1453, just 14 years after the invention of the printing press in Europe, Greek Christians fled to Italy, bringing with them their Bibles and language.

9. The printing press was put to use in the creation of printed Bibles, in many different languages, specifically Greek and Latin

10. If it is true that the Bible has been kept pure, it was kept pure up to the 16th century. Thus, the manuscripts used in the first effort of creating a printed text represented the same text used by the people of God up to that point. Text-critics such as Theodore Beza would appeal to the “consent of the church” as part of their textual methodology, which demonstrates that the reception of readings by the church was an integral part of the compilation of this text

11. The text produced over the course of a century during the Reformation period was universally accepted by Protestants, even to the point of other texts being rejected. It is historically documented that this is the “text received by all” (Received Text), which is made abundantly clear in the commentaries, confessions (see proof texts), translations, and theological works up until the 19th century.

12. This Greek text, along with the Masoretic Hebrew text, remained the main text for translation, commentary, theological works, etc. until the 19th century, when Hort’s Greek text, based on Codex Vaticanus, was adopted by many. At the time, many believed that Hort’s text was the true original, which caused many people to adopt readings from this text over and above the Received Text. This text had been rejected by Erasmus and the Reformers, and has no surviving contemporary descendant copies, meaning it was simply not copied or used by the church at large.

13. This Greek text was adopted based on Hort’s theory that Vaticanus was “earliest and best”, and the text of modern Bibles generally reflects this text form, even today. Due to the Papyri and the CBGM, Hort’s theory has been rejected across the scholarly community, not to mention Hoskier’s devastating analysis of Codex B (Vaticanus).

14. Thus, the Confessional Text position adopts the Greek and Hebrew text, and translations thereof, that were “received by all” in the age of printed Bibles, and used universally by the orthodox for 300 years practically uncontested, except by Roman Catholics and other heretical groups (Anabaptists, Socinians, etc.).

15. The most popular of these translations, the Authorized Version (KJV), is still used by at least 55% of people who read their Bible daily as of 2014, and in at least 6,200 churches. Additionally, Bibles made from these Greek and Hebrew texts into other languages remain widely popular across the world. Other English Bibles are based on this text, such as the MEV, NKJV, GNV, and KJ3, but they are relatively unused compared to the AV.

Further Commentary

The adoption of the Greek Received Text and the Hebrew Masoretic text is one based on what God has done providentially in time. Many assert that the history of the New Testament can only be traced through extant manuscript copies, but those copies do not tell the whole story. The readings of the Bible are vindicated not by the smattering of early surviving manuscripts, but by the people who have used those readings in history (John 10:27), readings which are preserved in the texts actually used by those people. Since we will never have all of the manuscripts, due to war, fire, etc., it is impossible to verify genuine readings from the data available today, as there is no “Master Copy” to compare them against. That is why the current effort of text-criticism pursues a hypothetical Initial Text, which relies on constructing a text based on the first 1,000 years of manuscript transmission.

The product of this effort is called the Editio Critica Maior (ECM), and it will not be finished until 2030. The methodology used to construct this text (the CBGM) has already introduced uncertainty among the editors of Greek texts as to whether they can even find the Initial Text, or whether they will find only one Initial Text. That is to say, from the time of Hort’s text in the 19th century, the modern effort of textual criticism has yet to produce a single stable text. The printed editions of the modern critical text contain a great wealth of textual data, but none of them is a stable text that will not change in the next ten years. That is to say, translations built on these printed editions are merely a representation of what the editors think the best readings are, not necessarily what the best readings are in reality.

Rather than placing hope in the ability of scholars to prove this Initial Text to be original, Christians in the Confessional Text camp look back to the time when hand copied manuscripts were still being used in churches and circulated in the world. The first effort of “textual criticism”, if you will, is unique because it is the only effort of textual criticism that took place while hand copied codices were still being used as a part of the church’s practice. That means that the quality and value of such codices could be validated by the “consent of the church”, because the church would have only adopted a text that was familiar to the one it had been using up to that point. This kind of perspective is not available to a modern audience. During the time of the first printed editions, the corruption of the Latin Vulgate was exposed, and the printed editions created during that time were in themselves a protest against the Vulgate and the Roman Catholic church, which had in its possession a corrupted translation of the Scriptures. It was during this time, and because of these printed texts, that Protestantism was born.

Any denomination claiming to be Protestant has direct ties back to this text, and the theology built upon it. The case for the Confessional Text is really quite simple, when you think about it. God preserved His Word in every generation in hand copied manuscripts until the form of Bibles transitioned to printed texts. Then He preserved His Word in printed Greek texts based on the circulating and approved manuscripts. This method of transmission was far more efficient, cheaper, and more easily distributed than the former method of hand copying. This text was received, commented on, preached from, and translated for centuries, and is still used by the majority of Bible reading Christians today. The argument for this text is not one based in tradition; it is one based on simply looking back into history and seeing which text the people of God have used in time, not simply the story that the choice manuscripts of the modern scholars tell.

Any theories on other text forms are typically based on a handful of ancient manuscripts that were not copied or used widely, and the idea that this smattering of early manuscripts represents the original text form is simply speculation. What history tells us is that the text vindicated in time is the text the people of God used, copied, printed, and translated. This does not mean that every Christian at all times has used this text, just the overwhelming testimony of the people of God as a whole. The fact is that we know very little about the transmission and form of the text in the ancient church in comparison to what we know about the text after the ancient period. The critical text, while generally resembling the Received Text, differs from the historical text of the Protestants, which is why those in the Confessional Text camp do not use critical text Bibles. The few Papyri we have even demonstrate that readings of the later manuscript family known as the Byzantine text were circulating in the ancient church.

Conclusion

So why is there a discussion regarding which text is better? Up until this point in history, the alternative text, the critical text, has been thought to be much more stable and certain than it is now. Currently, the modern critical text is unfinished, and will remain that way until at least 2030 when the ECM is finished. Those in the Confessional Text position might ask two very important questions regarding this text: Does a text that represents the text form of a handful of the thousands of manuscripts, a text which is incomplete, sound like a text that has been vindicated in time? Does a changing, uncertain, unfinished text speak to a text that has been preserved, or to one that has yet to be found? I suppose these questions aren’t answerable until 2030 when it is complete. This alone is a powerful consideration for those investigating the issue earnestly. Most people in the Confessional Text camp do not anathematize those who read Bibles from the critical text, or break fellowship over it, but we do encourage and advocate for the use of Traditional Text Bibles, as it is the historical text of the Protestant church.

The Most Dangerous View of the Holy Scriptures

Introduction

Quite often in the textual discussion, it is boldly proclaimed that “our earliest and best manuscripts” are Alexandrian. Yet this statement introduces confusion at the start, because there are sound objections as to whether it is even appropriate to use a term like “Alexandrian” when describing the “earliest and best manuscripts”, as though they were a text family or text type. There doesn’t seem to be an “Alexandrian” text type, only a handful of manuscripts that have historically been called Alexandrian. This has become clear through the more precise methods now being employed, which allow quantitative analysis of the variant units of these manuscripts. The result of this analysis has demonstrated that the manuscripts called Alexandrian do not meet the threshold of agreement required to be considered a textual family. Tommy Wasserman explains this shift in thought.

“Although the theory of text types still prevails in current text-critical practice, some scholars have recently called to abandon the concept altogether in light of new computer-assisted methods for determining manuscript relationships in a more exact way. To be sure, there is already a consensus that the various geographic locations traditionally assigned to the text types are incorrect and misleading” (Wasserman, http://bibleodyssey.org/en/places/related-articles/alexandrian-text). 

Thus, the only place the name “Alexandrian” might occupy in this discussion is one of historical significance, or possibly to identify the handful of manuscripts that bear the markers of Sinaiticus and Vaticanus, which disagree heavily among themselves, as the Munster Method has demonstrated (65% agreement between 01 and 03 in the places examined in the Gospels, versus 26.4% agreement with the Majority Text; http://intf.uni-muenster.de/TT_PP/Cluster4.php). So in using the terminology of “Alexandrian”, one is already introducing confusion into the conversation, terminology that represents an era of textual scholarship on its way out. Regardless of whether or not it is appropriate to use the term “Alexandrian”, it may be granted that it is a helpful descriptor for the sake of discussion, since the modern critical text in the most current UBS/NA platform generally agrees with at least two of these manuscripts (03 at 87.9% and 01 at 84.9%) in the places examined (see Gurry & Wasserman, 46).
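Percentages like these come from pairwise comparison of witnesses across variation units. As a rough illustration only, here is a minimal Python sketch of how such an agreement figure might be computed; the sigla and readings below are invented toy data, not the actual INTF collation, and the real method involves far more than this.

```python
# Toy sketch of pairwise witness agreement over variation units.
# The data below is invented for illustration; it is NOT real collation data.
from itertools import combinations

# Each variation unit maps a witness siglum to the reading it attests.
variant_units = [
    {"01": "a", "03": "a", "Byz": "b"},
    {"01": "a", "03": "b", "Byz": "b"},
    {"01": "c", "03": "c", "Byz": "c"},
    {"01": "a", "03": "a", "Byz": "a"},
]

def agreement(ms_a: str, ms_b: str, units) -> float:
    """Percentage of shared variation units where two witnesses agree."""
    shared = [u for u in units if ms_a in u and ms_b in u]
    if not shared:
        return 0.0
    matches = sum(1 for u in shared if u[ms_a] == u[ms_b])
    return 100.0 * matches / len(shared)

# Compare every pair of witnesses in the toy dataset.
for a, b in combinations(["01", "03", "Byz"], 2):
    print(f"{a} vs {b}: {agreement(a, b, variant_units):.1f}%")
```

Even this toy version shows why such numbers depend entirely on which places are examined: change the set of variation units, and the percentages change with them.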

The bottom line is this – the new methods currently being employed (CBGM/Munster Method) are still ongoing, and will be ongoing until at least 2032. So any arguments made on behalf of the critical text are liable to shift as the effort continues and new data comes to light. As a result of this developing effort, any attempt to defend such texts is operating from an incomplete dataset, on the terms of the very methods being defended. Given the general instability of the modern critical text, at least until the Editio Critica Maior (ECM) is completed, the conversation itself is likely to change over the next 12 years. In the meantime, it seems that the most productive conversation to have is one that discusses the validity of the method itself, since the dataset is admittedly incomplete.

Is the Munster Method Able to Demonstrate the Claim that the “Alexandrian” Manuscripts Are Earliest and Best?

The answer is no. The reason I say this is the method being employed. I have worked as an IT professional for 8 years, specifically in data analysis and database development, which gives me a unique perspective on the CBGM. An examination of the Munster Method (CBGM) will show that it is insufficient to arrive at any conclusion on which text is earliest. While the method itself is actually quite brilliant, its limitations prevent it from providing any sort of absolute conclusion on which text is earliest, or original, or best. There are several flaws that should be examined, if those who support the current effort want to properly understand the method they are defending.

  1. In its current form, it does not factor in versional or patristic data (or texts as they have been preserved in artwork for that matter)
  2. It can only perform analysis on the manuscripts that are extant, or surviving (so the thousands of manuscripts destroyed in the Diocletian persecution, or WWI and WWII can never be examined, for example)    
  3. The method is still vulnerable to the opinions and theories of men, which may or may not be faithful to the Word of God

So the weaknesses of the method are threefold: it does not account for all the data currently available; it will never have the whole dataset; and even when the work is finished, the analysis will still need to be interpreted by fallible scholars. Its biggest flaw, however, is that the analysis is being performed on a fraction of the dataset. Not only are defenders of the modern critical text defending an incomplete dataset, as the work is still ongoing, but the end product of the work itself is operating from an incomplete dataset. So to defend this method is to defend the conclusions of men on the analysis of an incomplete dataset of an incomplete dataset. The scope of the conclusions this method will produce will be limited to the manuscripts that we have today. And since there is an overwhelming bias in the scholarly world toward one subset of those manuscripts, it is more than likely that the conclusions drawn from the analysis will look very similar, if not identical, to the conclusions drawn by the previous era of textual scholarship (represented by Metzger and Hort). And even if these biases are crushed by the data analysis, the conclusions will be admittedly incomplete because the data is incomplete. Further, quantitative analysis will never be free of the biases of those who handle the data. Dr. Peter Gurry comments on one weakness of the method in his book A New Approach to Textual Criticism:

“The significance of the selectivity of our evidence means that our textual flow diagrams and the global stemma do not give us a picture of exactly what happened” (113). 

Further, the method itself is not immune to error. Dr. Gurry comments that, “There are still cases where contamination can go undetected in the CBGM, with the result that proper ancestor-descendant relationships are inverted” (115). That is to say, after all the computer analysis is done, the scholars making textual decisions can still draw incorrect conclusions on which text is earliest, selecting a later reading as earliest. In the current iteration of the Munster Method, there are already many places where, rather than selecting a potentially incorrect reading, the text is marked to indicate that the evidence is equally strong for two readings. These places are indicated by a diamond in the apparatus of the current edition of the Nestle-Aland text, produced in 2012. There are 19 of these in 1 and 2 Peter alone (see NA28). That is 19 places in just two books of the Bible where the Munster Method has not produced a definitive conclusion from the data. That means that even when the work is complete, there will be thousands of different conclusions drawn on which readings should be adopted in a multitude of places. This is already the case in the modern camp even without the application of the CBGM; a great example is Luke 23:34, where certain defenders of the modern critical text have arrived at opposite conclusions on the originality of this verse.

There is one vitally important observation that must be noted when it comes to the current effort of textual scholarship. The current text-critical effort, while the most sophisticated to date, is incapable of determining the earliest reading due to limitations in both the data and the methodology. A definitive analysis simply cannot be performed on an incomplete dataset. And even if the dataset were complete, no dataset is immune to the opinions of flawed men and women.

An Additional Problem Facing the Munster Method

There is one more glaring issue that the Munster Method cannot resolve. There is no way to demonstrate that the oldest surviving manuscripts represent the general form of the text during the time period in which they are alleged to have been created (3rd – 4th century). An important component of quantitative analysis is securing a dataset that is generally representative of the whole population of data. A loose approximation may be acceptable in statistical analysis of a general population, but the effort at hand cannot settle for that kind of precision, because what is being discussed is the Word of God, which is said to be perfectly preserved by God. That means that the sample of data being analyzed must be representative of the whole. The reality is that the modern method is really analyzing the earliest manuscripts, which do not represent the whole, against the whole of the dataset.
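The sampling point above can be illustrated with a purely hypothetical simulation: if survival is not random, the surviving subset can badly misrepresent the population it came from. Every number below is invented for the sketch; it proves nothing about any actual manuscripts, it only shows the statistical principle at stake.

```python
# Illustrative simulation: a biased survival process can make a minority
# reading look like the majority among survivors. All figures are invented.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Imagine 1,000 copies: 90% carry reading "A", 10% carry reading "B".
population = ["A"] * 900 + ["B"] * 100

def survives(reading: str) -> bool:
    # Survival is biased: copies with reading "B" survive far more often
    # (say, because of where they happened to be stored).
    return random.random() < (0.02 if reading == "A" else 0.50)

survivors = [r for r in population if survives(r)]

share_b_population = population.count("B") / len(population)
share_b_survivors = survivors.count("B") / len(survivors)

print(f"Reading B in population:   {share_b_population:.0%}")
print(f"Reading B among survivors: {share_b_survivors:.0%}")
```

Under these invented conditions, reading "B" dominates the survivors even though it was a small minority in the population, which is exactly why conclusions drawn only from what happens to survive cannot, on their own, be projected back onto the whole.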

It is generally accepted among modern scholarship that the Alexandrian manuscripts represent the text form that the whole church used in the third and fourth centuries. This is made evident when people say things like, “The church wasn’t even aware of this text until the 1500’s!” or “This is the text they had at Nicea!” Yet such claims woefully lack any sort of proof, and in fact, the opposite can be demonstrated to be true. If it can be demonstrated that the dataset is inadequate as it pertains to the whole of the manuscript tradition, or that the dataset is incomplete, then the conclusions drawn from the analysis can never be said to be absolutely conclusive. There are two points I will examine to demonstrate the inadequacy of the dataset and methodology of the CBGM, which disqualifies it from being a final authority on the original form of the New Testament.

First, I will examine the claim that the manuscripts generally known as Alexandrian were the only texts available to the church during the third and fourth centuries. This is a premise that must be proved in order to demonstrate that the conclusions of the CBGM represent the original text of the New Testament. In order to make such a claim, one has to adopt the narrative that the later manuscripts represented in the Byzantine tradition were a development, an evolution, of the New Testament text: that the later manuscripts which became the majority were the product of scribal mischief and the revisionist meddling of the orthodox church, and not a separate tradition that goes back to the time of the Apostles. This narrative requires the admission that the Alexandrian texts evolved so heavily that by the medieval period, the Alexandrian text had transformed into an entirely different Bible, with a number of smoothed out readings and even additions of entire passages and verses into the text, which were received by the church as canonical! Since this cannot be supported by any real understanding of preservation, the claim has to be made that the true text evolved, and that the original remains somewhere in the texts that existed prior to the scandalous revision effort of Christians throughout the ages. This is why there is such a fascination surrounding the Alexandrian texts, and a determination by some to “prove” them to be original (which is impossible, as I have discussed).

That being said, can it be demonstrated that these Alexandrian manuscripts were the only texts available to the church during the time of Nicea? The simple answer is no, and the evidence clearly shows that this is not the case at all. First, the number of patristic quotations of Byzantine readings demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. One can point to Origen as the champion of the Alexandrian text, but Origen wasn’t exactly a bastion of orthodoxy, and I would hesitate to draw any conclusions other than the fact that after him, the church essentially woke up and found itself entirely Arian, or holding some other form of heterodoxy as it pertained to Christ and the Trinity. Second, the existence of Byzantine readings in the papyri demonstrates the existence of other forms of the text of the New Testament which were contemporary to the Alexandrian manuscripts. Finally, Codex Vaticanus, one of the chief exemplars of the Alexandrian texts, is proof that other forms of the text existed at the time of its creation. This is chiefly demonstrated by the fact that there is a space the size of 11 verses at the end of Mark where the text should be. This space completely interrupts the otherwise uniform format of the codex, which indicates that the scribes were aware that the Gospel of Mark did not end at, “And they went out quickly, and fled from the sepulchre; for they trembled and were amazed: neither said they any thing to any man; for they were afraid.” They were either instructed to exclude the text, or did not have a better exemplar which included it. In any case, they were certainly aware of other manuscripts that had the verses in question, which points to the existence of other manuscripts contemporary to Sinaiticus and Vaticanus.
Some reject this analysis of the blank space at the end of Mark as it applies to Sinaiticus (which also has a blank space), but there are additional reasons why this analysis holds nonetheless; see this article for more. James Snapp notes that “the existence of P45 and the Old Latin version(s), and the non-Alexandrian character of early patristic quotations, supports the idea that the Alexandrian Text had competition, even in Egypt.” Therefore it is absurd to claim that every manuscript circulating at the time looked the same as these two exemplars, especially considering the evidence that other text forms certainly existed.

Second, I will examine the claim that the Alexandrian manuscripts represent the earliest form of the text of the New Testament. It can easily be demonstrated that these manuscripts do not represent all of their contemporary manuscripts, but that is irrelevant if they truly are the earliest. Yet the current methodology has absolutely no right to claim that it is capable of proving such an assertion. Since the dataset does not include the other manuscripts that clearly existed alongside the Alexandrian manuscripts, one simply cannot draw any conclusions regarding the supremacy of those texts. One must jump from the espoused method to conjecture and storytelling to do so. Those defending the modern text often boldly claim that fires, persecution, and war destroyed a great many manuscripts. That is exactly true, and it needs to be considered when making claims regarding the manuscripts that survived and clearly were not copied any further. One has to seriously ponder why, in the midst of the mass destruction of Bibles, the Alexandrian manuscripts were considered so unimportant that they weren’t used in the propagation of the New Testament, despite the clear need for such an effort. Further, these manuscripts are so heavily corrected by various scribes that it is clear they weren’t considered authentic in any meaningful way.

Even if the Alexandrian manuscripts did represent the “earliest and best”, there is absolutely no way of determining this to be true, due to the simple fact that the dataset from that time period is so sparse. In fact, the dataset from this period only represents a text form that is aberrant, quantitatively speaking. It is evident that other forms of the text existed, and though they no longer survive, the form of those texts survives in the manuscript tradition as a whole. The fact remains that there are no contemporary data points against which to compare the Alexandrian manuscripts to demonstrate this to be true. Further, there are not enough second century data points to compare the third and fourth century manuscripts against to demonstrate that the Alexandrian manuscripts represent any manuscript earlier than the time of their own creation. It is just as likely, if not more likely, that these manuscripts were an anomaly in the manuscript tradition. The fact remains that the current methods simply are not sufficient to operate on data that isn’t available. This relegates any form of analysis to the realm of storytelling, which exists in the theories of modern scholars (expansion of piety, scribal smoothing, etc.).

Conclusion

Regardless of which side one takes in the textual discussion, the fact remains that the critiques of the modern methodology as it exists in the CBGM are extremely valid. The method is primarily empirical in its form, and empirical analysis is ultimately limited by the data available. Since the data that is available will remain incomplete outside of a massive 1st and 2nd century manuscript find, the method itself will forever be insufficient to provide a complete analysis. The product of the CBGM can never honestly be applied to the whole of the manuscript tradition. Even if we found 2,000 2nd century manuscripts, there would still be no way of validating that those manuscripts represent all of the text forms that existed during that time. As a result, the end product will simply provide an analysis of an incomplete dataset. It should not surprise anybody if the conclusions drawn from this dataset in 2032 simply look like the conclusions drawn by the textual scholarship of the past 200 years. This being the case, the conversation will be forced into the theological realm. If the modern methods cannot prove any one text to be authorial or original, those who wish to adhere to that text will ultimately be forced to make an argument from faith. This is already being done by those who downplay the significance of the 200 year gap in the manuscript tradition from the first to third centuries and say that the Initial Text is synonymous with the original text.

The fact remains that ultimately those who believe the Holy Scriptures to be the divinely inspired word of God will still have to make an argument from faith at the end of the process. Based on the limitations of the Munster Method (CBGM), I don’t see any reason to rest my faith on an analysis of an incomplete dataset which is more than likely going to lean toward the side of secular scholarship when all is said and done. This is potentially the most dangerous position on the text of Scripture ever presented in the history of the world. This position is so dangerous because it says that God has preserved His Word in the manuscripts, but the method being used cannot ever determine which words He preserved.

The analysis performed on an incomplete dataset will be hailed as the authentic word(s) of God, and the conclusions of scholars will rule over the people of God. It is possible that there will be no room for other opinions in the debate, because the debate will be “settled”. And the settled debate will arrive at the conclusion of, “Well, we did our best with what we have, but we are still unsure what the original text said, based on our methods.” This effectively means that one can believe that God has preserved His Word, and at the same time not have any idea which Word He preserved. The adoption of such conclusions will inevitably result in the most prolific apostasy the church has ever seen. This is why it is so important for Christians to return to the old paths of the Reformation and post-Reformation, which affirmed the Scriptural truth that the Word of God is αυτοπιστος, self-authenticating. It is dishonest to say that the Reformed doctrine of preservation is “dangerous” without any evidence of this, especially considering the modern method is demonstrably harmful.

Providential Exposure as it Relates to Preservation

The Theological Method and Preservation

The Theological Method for determining the text of Scripture heavily relies upon understanding the text that has been received by Christians, which is commonly called “exposure”. The text of Scripture is that which has been exposed to the people of God throughout the ages. John 10:27 says, “My sheep hear my voice, and I know them, and they follow me” (KJV). Michael Kruger, in his book Canon Revisited, says this,

“When people’s eyes are opened, they are struck by the divine qualities of Scripture – its beauty and efficacy – and recognize and embrace Scripture for what it is, the word of God. They realize that the voice of Scripture is the voice of the Shepherd” (101). 

This might seem like subjectivism, but this is the historic doctrine that has been recognized throughout the ages by the theologians of the faith, most notably John Calvin and Herman Bavinck. This doctrine is not to be confused with the Mormon doctrine of “burning in the bosom”, a comparison that has nevertheless been made by men like James White. Many false doctrines are based on truth, and here the Christian must recognize that God’s Word is the means by which He is speaking to His people in these last days, regardless of how that doctrine has been twisted by other systems. 

What distinguishes this doctrine from its Mormon counterpart is that this reception of the Scriptures by the people of God is not purely individualistic. The text of Scripture, as it has been handed down, exists ontologically, not just subjectively. There is a concrete shape of God’s Word that exists, and the people of God have had that Word in every generation. The text of Scripture must primarily be viewed as a covenantal document given to the covenant people of God for their use in all matters of faith and practice (LBCF 1.1). That does not mean that all those professing Christianity throughout the ages have agreed upon what belongs in Scripture, or that every Christian has had access to the whole of Scripture in every generation. In fact, there are a multitude of Christians who do not have access to God’s Word, either by circumstance or by choice. 

It is important to take note of how the Apostolic church received the text of Holy Scripture to understand the doctrine of exposure as it relates to preservation. In the New Testament, there is never a case where Scripture is said to be a gift delivered to individual people. The Scriptures were always a corporate blessing to the covenant people of God (Acts 15:14; Titus 2:14; 1 Peter 2:9). In the testimony of Scripture itself, we can see that God delivers His Word to the people of God, not individual people of God. So the doctrine of exposure does not crumble due to certain individuals not having a copy of the Bible at all times. If this were the case, the fact that there are Christians who simply do not own a Bible would discredit this doctrine altogether. 

Despite the fact that the Canon is recognized in part by its corporate reception, this doctrine of providential exposure does not rest on ecclesiastical authority, as the papists might claim. There is no single church which is responsible for giving authority to the text of Holy Scripture. In fact, no church could give authority to the Scriptures; they are authoritative in themselves (αυτοπιστος). Kruger explains this well: “The books received by the church inform our understanding of which books are canonical not because the church is infallible or because it created or constituted the canon, but because the church’s reception of these books is a natural and inevitable outworking of the self-authenticating nature of Scripture” (106).

It must be stated that Kruger makes a distinction between the canon and the text of the canon, which is the common position amongst conservative scholarship. Upon examination of the theological method, however, there does not seem to be good reason to separate the two. If the doctrinal foundation of providential exposure demonstrates the “efficacy of the Shepherd’s voice to call” (106), then it follows that there must be a definitive voice that does the calling. The name of a canonical book is simply not efficacious to call sinners to repentance and faith; reciting the canonical list is not the Gospel call. So the material that is providentially exposed to the people of God must also contain the substance which is effective unto life by the power of the Holy Spirit. God has not just preserved the book sleeves of the Bible, He has also preserved the words within those book sleeves. 

Since the Bible is self-authenticating, Christians cannot look to the totality or purity of its reception to determine which books or texts of the Bible should be received today. That is to say, the fact that the majority of the Christian people do not accept a passage as authentic today does not mean that it was not properly received in the past, or that it is not ontologically a part of the canon. A passage of Scripture may not have been accepted as canonical by various groups throughout history, and this has indeed been the case, usually due to theological controversy. 

It is antithetical to the Theological method to say that the Scriptures are self-authenticating, but then also say that people must authenticate those Scriptures by a standard outside of the Scriptures themselves. Either the Scriptures are self-authenticating, or they are not. This is why evidences are useful tools for defending the Scriptures, but those evidences can never authenticate the Scriptures in themselves. It is problematic to say that God’s Word has been preserved and kept pure in all ages, and then to immediately say that He has done so imperfectly, or has not yet fully exposed that Word to His people. 

The Theological method provides a framework that actually gives more weight to historical thought as opposed to modern thought. It disallows the perspective that the people of God lost or added passages of Scripture, and that these texts need to be recovered or removed. It prevents theories that the text evolved, or that Christ’s divinity was developed over time. It especially rejects the idea that the original text of the New Testament was choppy, crude, and in places incoherent, and that scribes smoothed out the readings to make the text readable. In fact, it exposes those manuscripts that are choppy and missing parts as being of poor quality, by assuming that the Holy Scriptures were inspired by the Holy Spirit rather than invented by ostensibly literate first century Jews. 

There may have been localities that corrupted the text (usually intentionally), but this does not represent the providential preservation that was taking place universally. The vast majority of textual variants are due to scribal errors, but the significant variants were certainly an effort of revision; a scribe simply would not have removed or added twelve verses by accident. A great example is the idiosyncratic Egyptian manuscripts uncovered in the 19th and 20th centuries, which tend to disagree with the general manuscript tradition in important variant units. These manuscripts have been given tremendous weight in the modern period due to shifting views of inspiration and preservation. 

If the Scriptures truly were inspired and preserved, then one should expect that the text did not evolve, nor that the closest representative of those originals would be riddled with short, abrupt readings. One would expect that in every stage of copying, scribal errors would be purged out, and that the true readings would persevere. In fact, this phenomenon can be observed in the vast majority of the extant New Testament manuscripts, though it has unfortunately been described as scribal interference, or smoothing out the text. When the transmission of the New Testament manuscripts is viewed Theologically, the manuscripts tell an entirely different story, one which largely disagrees with the modern narrative favoring those choppy manuscripts which existed in one locality of the Christian world. 

The preservation of God’s Word can be demonstrated evidentially, but not without the proper Theological lens. Evidential arguments can be a powerful tool in all disciplines, but they often are not effective in themselves to change anybody’s mind. That is why the Theology of scholars will ultimately determine the manuscripts they deem to be earliest and best. Counting manuscripts, or weighing manuscripts, is simply not consistent with the conservative doctrine of preservation. In both cases, these methods attempt to take an external authority, such as manuscript count or the age of a manuscript, and use it to authenticate the Word of God. Yet both of these are at odds with the doctrinal standard that is laid forth in the Scriptures themselves, that the Word of God is self-authenticating. That is why the language of “the text that has been received” is warranted in this conversation, because it recognizes God’s providential preservation and exposure of the ontological canon to the people of God in every age. 

Conclusion 

The doctrine of exposure is often misunderstood as being too similar to the Mormon doctrine of “burning in the bosom” or the papal doctrine which states that Rome has the authority to authenticate the Bible. Despite these abuses of Scripture, the fact remains that the Scriptures are self-authenticating. It is easy to fall back onto empirical approaches, because they seem to be the most logical. Yet these empirical approaches do not do what they claim they can do, and this is becoming increasingly evident with each passing year. The number of competing Bible versions has only increased, exponentially at that. Modern methodology has not narrowed the text of the New Testament to fewer legitimate readings, but has greatly expanded the number of readings that “could be” original or early. 

The efforts of modern textual scholarship have only increased the uncertainty of the text of the New Testament. This has culminated in the abandonment of the search for the original text of the Bible in favor of the Ausgangstext, the earliest text scholars can get back to, which dates to the third or fourth century. Practically speaking, this pursuit will simply result in arriving at some hypothetical form of the text that may have existed in Egypt in the third century. Since this seems to be the direction of most current New Testament text-critical scholarship, it is time to return to the old paths. The Theological method has been expressed by countless Theologians of the Christian faith, and it should not be abandoned for the sake of adopting the modern critical scientific method. The Scriptures should always be handled as self-authenticating, and a shift to this way of thinking would result in a massive change in the direction of modern New Testament scholarship.