Karl Schieneman & Thomas Gricks III, Law Technology News
Vendor Voice: Statistics, Rule 26(g) and Getting Stuck in TAR
The TAR process must be implemented with a consistent eye toward certification requirements.
Anyone who has tried to use a technology-assisted review or predictive coding tool usually starts by talking to a vendor, or a handful of vendors, who immediately suggest that these tools are exceedingly simple to use, speed up litigation, and lower its cost. While that may well be true, attorneys in federal court are held to a standard of “reasonable inquiry” under Rule 26(g) of the Federal Rules of Civil Procedure. If attorneys do not keep a mindful eye on the process, the easy button of TAR can invite unintended Rule 26(g) challenges from the opposing party or, unilaterally, from the judge in the case.
“The Implications of Rule 26(g) on the Use of Technology-Assisted Review” was recently published in the Federal Courts Law Review. The article analyzes five phases of the TAR process that, if not fully considered and properly executed, can engender Rule 26(g) arguments. These stages are collection, disclosure, training, stabilization, and validation. For example, during the collection phase, attorneys seldom consider the impact of the richness of the collection on the reasonableness of the inquiry under Rule 26(g). Since the advent of the recent federal rules and the warnings of Zubulake V (Zubulake v. UBS Warburg, 229 F.R.D. 422 (S.D.N.Y. 2004)), attorneys have been fearful of sanctions for failing to preserve and collect all relevant electronically stored information. The knee-jerk reaction has been to preserve and collect broadly, and then to load far more data into a review tool than is even remotely tied to the case. This is compounded by the fact that requesting parties, recognizing the relative ease of searching ESI (as compared with hard-copy documents), tend to make overly broad document production requests.
As a practical matter, this can make it more difficult to implement a technology-assisted review, which depends on the development of a language model to distinguish between relevant and non-relevant documents. This is generally accomplished by the algorithmic analysis of language patterns in documents coded responsive versus documents coded not responsive. If relatively few of the documents in a collection are responsive, it becomes a challenge to find enough responsive documents to develop the model. If you use a seeding approach of finding and picking exemplar documents, much as keywords are used, you run the risk of not finding the documents that, while perhaps relevant and even important to the case, are not sufficiently like the seed documents to be uncovered by the tool.
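To make the training step concrete, the sketch below shows the general kind of language modeling involved, using a generic text classifier (TF-IDF features and logistic regression from scikit-learn). The documents, labels, and library choice are illustrative assumptions only; no particular TAR product works exactly this way.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical coded training documents (1 = responsive, 0 = not responsive).
    coded_docs = [
        "widget supply contract pricing dispute",
        "email discussing defective widget shipment",
        "holiday party planning thread",
        "cafeteria menu for next week",
    ]
    labels = [1, 1, 0, 0]

    # Learn language patterns that separate responsive from non-responsive documents.
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(coded_docs)
    model = LogisticRegression().fit(features, labels)

    # Rank uncoded documents by the model's estimate of responsiveness.
    uncoded_docs = ["memo on widget pricing terms", "parking garage notice"]
    scores = model.predict_proba(vectorizer.transform(uncoded_docs))[:, 1]
    for doc, score in sorted(zip(uncoded_docs, scores), key=lambda x: -x[1]):
        print(f"{score:.2f}  {doc}")

The point of the sketch is simply that the model can only learn from the coded examples it is given, which is why a sparse or unrepresentative set of responsive training documents undermines the process.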
If you use a machine-assisted or random approach, it may be necessary to code a significant number of documents to develop the model. For example, if only 1 percent of the collection is relevant, each responsive training example requires reviewing roughly 100 documents on average, so a completely random selection may require a review of 20,000 to 50,000 documents. While this will typically be a very small fraction of the entire collection, it can be difficult for senior attorneys to devote the necessary time to the review.
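A back-of-the-envelope calculation shows where a range like 20,000 to 50,000 comes from. The assumption that training needs roughly 200 to 500 responsive examples is ours, chosen only to reproduce that range, and is not a figure from the article.

    # Expected random-review burden at 1 percent richness, assuming
    # (hypothetically) that training needs 200 to 500 responsive examples.
    richness = 0.01

    for needed_responsive in (200, 500):
        expected_review = needed_responsive / richness
        print(f"{needed_responsive} responsive examples -> "
              f"review about {expected_review:,.0f} random documents")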
This situation also implicates the Rule 26(g) certification. TAR productions are often validated to a confidence level of 95 percent and a confidence interval of ±2 percent by reviewing roughly 2,400 documents. For a reasonable collection in which 10 percent of the documents may be relevant, the actual confidence interval would be closer to ±1.2 percent, or just 12 percent of the anticipated value.
However, in a poor collection in which only 1 percent of the documents are expected to be relevant, the confidence interval, while only ±0.4 percent, would actually equate to 40 percent of the estimated value. The article discusses the implications of this situation for the Rule 26(g) certification, because the producing party is uniquely situated to manage the results of the TAR process.
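The arithmetic behind these figures can be checked with the standard normal-approximation formula for a proportion. This is a generic statistical sketch, not the validation protocol of any particular tool; the 2,400-document sample size is the figure cited above.

    import math

    Z = 1.96       # z-score for a two-sided 95 percent confidence level
    SAMPLE = 2400  # roughly the sample size cited for 95 percent / +/-2 percent

    # Sample size needed for a +/-2 percent margin at worst-case richness (p = 0.5);
    # depending on rounding conventions this is commonly quoted as roughly 2,400.
    worst_case_n = math.ceil(Z**2 * 0.5 * 0.5 / 0.02**2)
    print(f"sample size for 95 percent / +/-2 percent: {worst_case_n}")

    # Margin of error a ~2,400-document sample actually achieves at a given
    # richness, and that margin expressed as a share of the estimated value.
    for richness in (0.10, 0.01):
        margin = Z * math.sqrt(richness * (1 - richness) / SAMPLE)
        print(f"richness {richness:.0%}: +/-{margin:.1%} "
              f"({margin / richness:.0%} of the estimated value)")

Run as written, the calculation reproduces the ±1.2 percent (12 percent relative) and ±0.4 percent (40 percent relative) figures: the absolute margin shrinks as richness falls, but the relative uncertainty grows sharply.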
The article also addresses other situations, such as the challenges implicit in effecting cooperation and transparency when lawyers are typically accustomed to sharing as little information as necessary with opposing counsel. In cases such as In re Actos and Global Aerospace v. Landow Aviation, L.P. dba Dulles Jet Center, et al., the parties agreed to share the documents coded as non-responsive that were fed into the TAR tool, in order to secure opposing counsel's agreement and to reduce the risk of a challenge to the training process.
The concept behind sharing this information is similar to the idea behind sharing keywords with opposing counsel. Any agreement reduces the risk of being challenged for not having undertaken a “reasonable inquiry.” Case law spawned by 75 years of manual review does not require this level of transparency and cooperation, and courts have been slow to move in that direction. See In re Biomet. Nevertheless, without transparency and cooperation, deficiencies in training the TAR tool can be discovered by an adversary only after the producing party has incurred the time and expense of training. Any deficiencies in the production may be viewed negatively under Rule 26(g) if the court sees transparency as a way to reduce the cost of litigation and improve the value of discovery. This is especially true if the producing party opposed transparency during the course of the litigation. The article explains how transparency can serve as an insurance policy against a Rule 26(g) challenge.
The key takeaway: Rule 26(g) implicates every aspect of the TAR process, and the process must be implemented with a consistent eye toward certification requirements. Equally important is the notion that linear review (which is typically conducted on electronic data) may well become subject to the same types of considerations as attorneys attempt to impose validation requirements on modern document productions. As Rule 26(g) moves to the forefront, lawyers who do not fully appreciate the underlying sampling and statistics risk finding themselves stuck in the “tar” of the reasonable inquiry standard of Rule 26(g).
Attorney Karl Schieneman is president of Review Less (kas@reviewless.com); Thomas Gricks III is head of predictive coding and a shareholder at Schnader Harrison Segal and Lewis (tgricks@schnader.com). Both are based in Pittsburgh, Pa., and participated in the Global Aerospace case.
Read more: http://www.lawtechnologynews.com/id=1202642979755/Vendor-Voice%3A-Statistics%2C-Rule-26%28g%29-and-Getting-Stuck-in-TAR#ixzz2tb8nlGSB