

Kleen Products Update: Is Technology Usage Becoming the New “Proportionality” Factor for Judges?

Wednesday, October 30th, 2013

Readers may recall last year’s expensive battle over the use of predictive coding technology in the 7th Circuit’s Kleen Products case. Although the battle was temporarily resolved in Defendants’ favor (they were not required to redo their production using predictive coding or other “Content Based Advanced Analytics” software), a new eDiscovery battle has surfaced this year between Plaintiffs and a non-party, The Levin Group (“TLG”).

In Kleen, Plaintiffs allege anticompetitive and collusive conduct by a number of companies in the containerboard industry. The Plaintiffs served TLG with a subpoena requesting “every document relating to the containerboard industry.” TLG, a non-party retained as a financial and strategic consultant by two of the Defendants, complied by reviewing 21,000 documents comprising 82,000 pages of material.

Extraordinary Billing Rates for Manual Review?

The wheels began to fall off the bus when Plaintiffs received a $55,000 bill from TLG for the review and production of documents in response to the subpoena. TLG billed $500/hour for 110 hours of document review performed by TLG’s founder (a lawyer) and a non-lawyer employee. Although FRCP 45(c)(3)(C) authorizes “reasonable compensation” for a subpoenaed non-party and the court previously ordered the Plaintiffs to “bear the costs of their discovery request,” TLG and the Plaintiffs disagreed over the meaning of “reasonable compensation” once the production was complete. Plaintiffs argue that the bill is excessive in light of market rates of $35-$45/hour charged by contract attorneys for document review, and they also claim that they never agreed to a billing rate.

Following a great deal of back and forth over the costs, the court decided to defer its decision until December 16, 2013 because discovery in the underlying antitrust action is still ongoing. Regardless of the outcome in Kleen, the current dispute feels a bit like déjà vu all over again. Both disputes highlight the importance of cooperation and the role of technology in reducing eDiscovery costs. For example, better cooperation among the parties during the earlier stages of discovery might have prevented, or at least minimized, some of the downstream post-production arguments that occurred last year and this year. Although the “cooperation” drum has been beaten loudly for several years by judges and think tanks like the Sedona Conference, cooperation is an issue that will never fully disappear in an adversarial system.

Judges May Increasingly Consider Technology as Part of Proportionality Analysis

A more novel and interesting eDiscovery issue in Kleen is that judges are increasingly being asked to consider the use (or non-use) of technology when resolving discovery disputes. Last year in Kleen the issue was whether a producing party should be required to use advanced technology to ensure a more thorough production. This year the Kleen court may be asked to consider the role of technology in the context of the disputed document review fees. For example, the court may consider whether TLG could have reduced the number of documents requiring costly manual review by leveraging de-duplication, domain filtering, document threading, or other tools in the Litigator’s Toolbelt™.
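To make the culling idea concrete, here is a minimal, hypothetical sketch of exact de-duplication plus domain filtering. The document fields and the excluded-domain list are illustrative assumptions, not drawn from the Kleen record, and real review platforms do considerably more (near-duplicate detection, email threading, metadata handling):

```python
import hashlib

def dedupe_and_filter(documents, excluded_domains):
    """Culling sketch: drop exact duplicates (by content hash) and
    messages sent from domains agreed to be non-responsive."""
    seen_hashes = set()
    survivors = []
    for doc in documents:
        # Exact de-duplication: identical content hashes to the same digest.
        digest = hashlib.sha256(doc["body"].encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        # Domain filtering: skip mail from excluded sender domains.
        sender = doc.get("from", "")
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain in excluded_domains:
            continue
        survivors.append(doc)
    return survivors
```

Even this crude pass illustrates the point: every document removed before review is an hour of billable attorney time that is never incurred.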

Recent trends indicate that the federal bench is under increasing pressure to consider whether, and how, parties utilize technology when resolving eDiscovery disputes. For example, a 2011 Forbes article titled “Will New Electronic Discovery Rules Save Organizations Millions or Deny Justice?” framed early discussions about amending the Federal Rules of Civil Procedure (Rules) as follows:

A key question that many feel has been overlooked is whether or not organizations claiming significant eDiscovery costs could have reduced those costs had they invested in better technology solutions. Most agree that technology alone cannot solve the problem or completely eliminate costs. However, many also believe that understanding the extent to which the inefficient or non-use of modern eDiscovery technology solutions impacts overall costs is critical to evaluating whether better practices might be needed instead of new Rules.

Significant interest in the topic was further sparked in Da Silva Moore v. Publicis Groupe in 2012 when Judge Andrew Peck put parties on notice that technology is increasingly important in evaluating eDiscovery disputes. In Da Silva Moore, Judge Peck famously declared that “computer-assisted review is acceptable in appropriate cases.” Judge Peck’s decision was the first to squarely address the use of predictive coding technology, and a number of cases, articles, and blogs on the topic quickly ensued in what seemed to be the opening of Pandora’s Box with respect to the technology discussion.

More recently, The Duke Law Center for Judicial Studies proposed that the Advisory Committee on Civil Rules add language to the newly proposed amendments to the Federal Rules of Civil Procedure addressing the use of technology-assisted review (TAR). The group advocates adding the following sentence at the end of the first paragraph of the Committee Note to proposed Rule 26(b)(1) dealing with “proportionality” in eDiscovery:

“As part of the proportionality considerations, parties are encouraged, in appropriate cases, to consider the use of advanced analytical software applications and other technologies that can screen for relevant and privileged documents in ways that are at least as accurate as manual review, at far less cost.”

Conclusion

The significant role technology plays in managing eDiscovery risks and costs continues to draw more and more attention from lawyers and judges alike. Although early disputes in Kleen highlight the fact that litigators do not always agree on what technology should be used in eDiscovery, most in the legal community recognize that many technology tools in the Litigator’s Toolbelt™ are available to help reduce the costs of eDiscovery. Regardless of how the court in Kleen resolves the current issue, the use or non-use of technology tools is likely to become a central issue in the Rules debate and a prominent factor in most judges’ proportionality analysis in the future.

*Blog post co-authored by Matt Nelson and Adam Kuhn

The Top 3 Forensic Data Collection Myths in eDiscovery

Wednesday, August 7th, 2013

Confusion about establishing a legally defensible approach for collecting data from computer hard drives during eDiscovery has existed for years. The confusion stems largely from the fact that traditional methodologies die hard and legal requirements are often misunderstood. The most traditional approach to data collection entails making forensic copies or mirror images of every custodian hard drive that may be relevant to a particular matter. This practice is still commonly followed because many believe collecting every shred of potentially relevant data from a custodian’s computer is the most efficient approach to data collection and the best way to avoid spoliation sanctions.
In reality, courts typically do not require parties to collect every shred of electronically stored information (ESI) as part of a defensible eDiscovery process and organizations wedded to this process are likely wasting significant amounts of time and money. If collecting everything is not required, then why would organizations waste time and money following an outdated and unnecessary approach? The answer is simple – many organizations fall victim to 3 forensic data collection myths that perpetuate inefficient data collection practices. This article debunks these 3 myths and provides insight into more efficient data collection methodologies that can save organizations time and money without increasing risk.
Myth #1: “Forensic Copy” and “Forensically Sound” are Synonymous
For many, the confusion begins with a misunderstanding of the terms “forensic copy” and “forensically sound.” The Sedona Conference, a leading nonprofit research and educational institute dedicated to the advanced study of law, defines a forensic copy as follows:
An exact copy of an entire physical storage media (hard drive, CD-ROM, DVD-ROM, tape, etc.), including all active and residual data and unallocated or slack space on the media. Forensic copies are often called “images” or “imaged copies.” (See: The Sedona Conference Glossary: E-Discovery & Digital Information Management, 3rd Edition, Sept. 2010).
Forensically sound, on the other hand, refers to the integrity of the data collection process and relates to the defensibility of how ESI is collected. Among other things, electronic files should not be modified or deleted during collection and a proper chain of custody should be established in order for the data collection to be deemed forensically sound. If data is not collected in a forensically sound manner, then the integrity of the ESI that is collected may be suspect and could be excluded as evidence.
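What “forensically sound” demands can be illustrated with a short sketch. The workflow below is a simplified, hypothetical illustration (real forensic tools do far more, including write-blocking and detailed audit logs): hash the source file, copy it with metadata preserved, re-hash the copy, and record a chain-of-custody entry, so any alteration during collection surfaces as a digest mismatch:

```python
import datetime
import hashlib
import os
import shutil

def collect_file(source_path, dest_dir, custody_log):
    """Collection-integrity sketch: verify by hashing before and after
    the copy, and append a chain-of-custody record."""
    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    source_hash = sha256(source_path)
    dest_path = os.path.join(dest_dir, os.path.basename(source_path))
    shutil.copy2(source_path, dest_path)  # copy2 preserves file timestamps
    copy_hash = sha256(dest_path)
    custody_log.append({
        "file": source_path,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": source_hash,
        "verified": source_hash == copy_hash,
    })
    return dest_path
```

Note that nothing in this process requires imaging an entire drive; the same integrity guarantees apply whether one file or one million files are collected.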
Somehow over time, many have interpreted the need for a forensically sound collection to require forensic copies of hard drives to be made. In other words, they believe an entire computer hard drive must be collected for a collection to be legally defensible (forensically sound). In reality, entire hard drives (forensic copies) or even all active user files need not be copied as part of a defensible data collection process. What is required, however, is the collection of ESI in a forensically sound manner regardless of whether an entire drive is copied or only a few files.
Myth # 2: Courts Require Forensic Copies for Most Cases
Making forensic copies of custodian hard drives is often important as part of criminal investigations, trade secret theft cases, and other matters where the recovery and analysis of deleted files, internet browsing history, and other non-user generated information is important to a case. However, most large civil matters only require the production of user-generated files like emails, Microsoft Word documents, and other active files (as opposed to deleted files).
Unnecessarily making forensic copies results in more downstream costs in the form of increased document processing, attorney review, and vendor hosting fees because more ESI is collected than necessary. The simple rule of thumb is that the more ESI collected at the beginning of a matter, the higher the downstream eDiscovery costs. That means casting a narrow collection net at the beginning of a case rather than “over-collecting” more ESI than legally required can save significant time and money.
Federal Rule of Civil Procedure 34 and case law help dispel the myth that forensic copies are required for most civil cases. The notes to Rule 34(a)(1) state that,
Rule 34(a)…is not meant to create a routine right of direct access to a party’s electronic information system, although such access might be justified in some circumstances. Courts should guard against undue intrusiveness resulting from inspecting or testing such systems.

More than a decade ago, the Tenth Circuit validated the notion that opposing parties should not be routinely entitled to forensic copies of hard drives. In McCurdy Group v. Am. Biomedical Group, Inc., 9 Fed. Appx. 822 (10th Cir. 2001) the court held that skepticism concerning whether a party has produced all responsive, non-privileged documents from certain hard drives is an insufficient reason standing alone to warrant production of the hard drives: “a mere desire to check that the opposition has been forthright in its discovery responses is not a good enough reason.” Id. at 831.

On the other hand, Ameriwood Indus. v. Liberman, 2006 U.S. Dist. LEXIS 93380 (E.D. Mo. Dec. 27, 2006), is a good example of a limited situation where making a forensic copy of a hard drive might be appropriate. In Ameriwood, the court referenced Rule 34(a)(1) to support its decision to order a forensic copy of the defendant’s hard drive in a trade secret misappropriation case because the defendant “allegedly used the computer itself to commit the wrong….” In short, courts expect parties to take a reasonable approach to data collection. A reasonable approach to collection only requires making forensic copies of computer hard drives in limited situations.
Myth #3: Courts Have “Validated” Some Proprietary Collection Tools
Confusion about computer forensics, data collection, and legal defensibility has also been stoked by overzealous claims from technology vendors that courts have “validated” some data collection tools and not others. This has led many attorneys to believe they should play it safe by using only tools that have ostensibly been “validated” by courts. Unfortunately, this myth exacerbates the over-collection problem that frequently costs organizations time and money.
The notion that courts are in the business of validating particular vendors or proprietary technology solutions is a hot topic that has been summarily dismissed by one of the leading eDiscovery attorneys and computer forensic examiners on the planet. In his article titled We’re Both Part of the Same Hypocrisy, Senator, Craig Ball explains that courts generally are not in the business of “validating” specific companies and products. To make his point, Mr. Ball pointedly states that:
just because a product is named in passing in a court opinion and the court doesn’t expressly label the product a steaming pile of crap does not render the product ‘court validated.’ 
In a nod to the fact that the defensibility of the data collection process is dependent on the methodology as much as the tools used, Mr. Ball goes on to explain that, “the integrity of the process hinges on the carpenter, not the hammer.”
Conclusion
In the past decade, ESI collection tools have evolved dramatically to enable the targeted collection of ESI from multiple data sources in an automated fashion through an organization’s computer network. Rather than manually connecting a collection device to every custodian hard drive or server to identify and collect ESI for every new matter, new tools enable data to be collected from multiple custodians and data sources within an organization using a single collection tool. This streamlined approach saves organizations time and money without sacrificing legal defensibility or forensic soundness.
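The logic of a targeted collection can be sketched in a few lines. The example below is hypothetical and greatly simplified (real collection tools also preserve metadata and hash everything they copy, and run across the network rather than on one machine): it walks a directory tree and selects only active user files of agreed types modified after a cutoff date, rather than imaging the whole drive:

```python
import datetime
import os

def targeted_collection(root, extensions, cutoff):
    """Targeted-collection sketch: select only active user files of the
    requested types that were modified on or after the cutoff date."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Skip file types outside the agreed scope (e.g. system files).
            if os.path.splitext(name)[1].lower() not in extensions:
                continue
            modified = datetime.datetime.fromtimestamp(os.path.getmtime(path))
            if modified >= cutoff:
                matches.append(path)
    return matches
```

The narrower the extension list and date range negotiated by the parties, the smaller the collected set, and the smaller every downstream processing, hosting, and review bill.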
Choosing the correct collection approach is important for any organization facing regulatory scrutiny or routine litigation because data collection represents an early and important step in the eDiscovery process. If data is overlooked, destroyed, altered, or collected too slowly, the organization could face embarrassment and sanctions. On the other hand, needlessly over-collecting data could result in unnecessary downstream processing and review expenses. Properly assessing the data collection requirements of each new matter and understanding modern collection technologies will help you avoid the top 3 forensic data collection myths and save your organization time and money.

The Need for a More Active Judiciary in eDiscovery

Wednesday, July 24th, 2013

Various theories have been advanced over the years to determine why the digital age has caused the discovery process to spiral out of control. Many believe that the sheer volume of ESI has led to the increased costs and delays that now characterize eDiscovery. Others place the blame on the quixotic advocacy of certain lawyers who seek “any and all documents” in their quest for the proverbial smoking gun. While these factors have undoubtedly contributed to the current eDiscovery frenzy, there is still another key reason that many cognoscenti believe has impacted discovery: a lack of judicial involvement. Indeed, in a recent article published by the University of Kansas Law Review, Professor Steven Gensler and Judge Lee Rosenthal argue that many of the eDiscovery challenges facing lawyers and litigants could be addressed in a more efficient and cost-effective manner through “active case management” by judges. According to Professor Gensler and Judge Rosenthal, a meaningful Rule 16 conference with counsel can enable “the court to ensure that the lawyers and parties have paid appropriate attention to planning for electronic discovery.”

To facilitate this vision of a more active judiciary in the discovery process, the Advisory Committee has proposed a series of changes to the Federal Rules of Civil Procedure. Most of these changes are designed to improve the effectiveness of the Rule 26(f) discovery conference and to encourage courts to provide input on key discovery issues at the outset of a case.

Rules 26 and 34 – Improving the Effectiveness of the Rule 26(f) Discovery Conference

One way the Committee felt that it could enable greater judicial involvement in case management was to have the parties conduct a more meaningful Rule 26(f) discovery conference. Such a step is significant since courts generally believe that a successful conference is the linchpin for conducting discovery in a proportional manner.

To enhance the usefulness of the conference, the Committee recommended that Rule 26(f) be amended to specifically require the parties to discuss any pertinent issues surrounding the preservation of ESI. This provision is calculated to get the parties thinking proactively about preservation problems that could arise later in discovery. It is also designed to work in conjunction with the proposed amendments to Rule 16(b)(3) and Rule 37(e). Changes to the former would expressly empower the court to issue a scheduling order addressing ESI preservation issues. Under the latter, the extent to which preservation issues were addressed at a discovery conference or in a scheduling order could very well affect any subsequent motion for sanctions relating to a failure to preserve relevant ESI.

Another amendment to Rule 26(f) would require the parties to discuss the need for a “clawback” order under Federal Rule of Evidence 502. Though underused, Rule 502(d) orders generally reduce the expense and hassle of litigating issues surrounding the inadvertent disclosure of ESI protected by the attorney-client privilege. To ensure this overlooked provision receives attention from litigants, the Committee has drafted a corresponding amendment to Rule 16(b)(3) that would enable the court to weigh in on Rule 502 related issues in a scheduling order.

The final step the Committee has proposed for increasing the effectiveness of the Rule 26(f) conference is to amend Rule 26(d) and Rule 34(b)(2) to enable parties to serve Rule 34 document requests prior to that conference. These “early” requests, which are not deemed served until the conference, are “designed to facilitate focused discussion during the Rule 26(f) conference.” This, the Committee hopes, will enable the parties to subsequently prepare document requests that are more targeted and proportional to the issues in play.

Rule 16 – Greater Judicial Input on Key Discovery Issues

As mentioned above, the Committee has suggested adding provisions to Rule 16(b)(3) that track those in Rule 26(f) so as to provide the opportunity for greater judicial input on certain eDiscovery issues at the outset of a case. In addition to these changes, Rule 16(b)(3) would also allow a court to require that the parties caucus with the court before filing a discovery-related motion. The purpose of this provision is to encourage judges to informally resolve discovery disputes before the parties incur the expense of fully engaging in motion practice. According to the Committee, various courts have used similar arrangements under their local rules that have “proven highly effective in reducing cost and delay.”

Conclusion

Whether these changes are successful depends on how committed the courts are to using the proposed case management tools. Without more active involvement from the courts, the newly proposed initiatives regarding cooperation and proportionality may very well fall by the wayside and remain noble, but unmet, expectations. If enacted, these amendments are likely to succeed only if judges actually use the case management powers they provide.

A Comprehensive Look at the Newly Proposed eDiscovery Amendments to the Federal Rules of Civil Procedure

Tuesday, July 9th, 2013

You have probably heard the news. Changes are in the works for the Federal Rules of Civil Procedure that govern the discovery process. Approved for public comment last month by the Standing Committee on Rules of Practice and Procedure, the proposed amendments are generally designed to streamline discovery, encourage cooperative advocacy among litigants and eliminate gamesmanship. The amendments also try to tackle the continuing problems associated with the preservation of electronically stored information (ESI). As a result, a package of amendments has been developed that affects most aspects of federal discovery practice.

We will provide a comprehensive overview of the newly proposed amendments in a series of posts over the next few weeks. The posts will cover the changes that are designed to usher in a new era of cooperation, proportionality, and active judicial case management in discovery. We will also review the draft amendment to Federal Rule 37(e), which would create a uniform national standard for discovery sanctions stemming from failures to preserve evidence. This amendment has by far attracted the most interest, which is understandable given the far-reaching impact that such a change could have on organizations’ defensible deletion efforts. A final post will describe the timeline for moving the amendment package forward.

Cooperation

Drafted by the Civil Rules Advisory Committee, the proposed amendments are generally designed to facilitate the tripartite aims of Federal Rule 1 in the discovery process. To carry out Rule 1’s lofty yet important mandate of securing “the just, speedy, and inexpensive determination” of litigation, the Committee has proposed several modifications to advance the notions of cooperation and proportionality. Other changes focus on improving “early and effective judicial case management.” Judicial Conference of the United States, Report of the Advisory Committee on Civil Rules 4 (May 8, 2013) (Report). Today’s post will provide an overview of the draft amendment to Rule 1, which is designed to spotlight the importance of cooperation. Posts covering the proportionality and judicial case management amendments will follow shortly.

The Proposed Amendment to Rule 1

To better emphasize the need for cooperative advocacy in discovery, the Committee has recommended that Rule 1 be amended to specify that clients share the responsibility with the court for achieving the rule’s objectives. The proposed revisions to the rule (in italics with deletions in strikethrough) read in pertinent part as follows:

[These rules] should be construed, and administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.

Report, at 17.

Even though this concept was already set forth in the Advisory Committee Notes to Rule 1, the Committee felt that an express reference in the rule itself would prompt litigants and their lawyers to engage in more cooperative conduct. Indeed, while acknowledging that such a rule change would not “cure all adversary excesses,” the Committee still felt the amended wording “will do some good.” Report, at 16-17.

Perhaps more importantly, this mandate is also designed to enable judges “to elicit better cooperation when the lawyers and parties fall short.” Report, at 16. Indeed, such a reference, when coupled with the “stop and think” certification requirement from Federal Rule 26(g), should give jurists more than enough procedural basis to remind counsel and clients of their duty to conduct discovery in a cooperative and cost effective manner.

Though it is difficult to gauge the actual impact such an amendment might have, the decision to spotlight cooperation could be beneficial if litigants and lawyers ultimately decide to conduct discovery with laser-like precision instead of the sledgehammer approach of the current regime. If implemented and followed as the Committee contemplates, the proposed amendment to Rule 1 could help decrease the costs and delays associated with discovery.
Breaking News: Court Orders Google to Produce eDiscovery Search Terms in Apple v. Samsung

Friday, May 10th, 2013

Apple obtained a narrow discovery victory yesterday in its long running legal battle against fellow technology titan Samsung. In Apple Inc. v. Samsung Electronics Co. Ltd, the court ordered non-party Google to turn over the search terms and custodians that it used to produce documents in response to an Apple subpoena.

According to the court’s order, Apple argued for the production of Google’s search terms and custodians in order “to know how Google created the universe from which it produced documents.” The court noted that Apple sought such information “to evaluate the adequacy of Google’s search, and if it finds that search wanting, it then will pursue other courses of action to obtain responsive discovery.”

Google countered that argument by defending the extent of its production and the burdens that Apple’s request would place on Google as a non-party to Apple’s dispute with Samsung. Google complained that Apple’s demands were essentially a gateway to additional discovery from Google, which would arguably be excessive given Google’s non-party status.

Sensitive to the concerns of both parties, the court struck a middle ground in its order. On the one hand, the court ordered Google to produce the search terms and custodians since that “will aid in uncovering the sufficiency of Google’s production and serves greater purposes of transparency in discovery.” But on the other hand, the court preserved Google’s right to object to any further discovery efforts by Apple: “The court notes that its order does not speak to the sufficiency of Google’s production nor to any arguments Google may make regarding undue burden in producing any further discovery.”

This latest opinion from the Apple v. Samsung series of lawsuits is noteworthy for two reasons. First, the decision is instructive regarding the eDiscovery burdens that non-parties must shoulder in litigation. While the disclosure of a non-party’s underlying search methodology (in this instance, search terms and custodians) may not be unduly burdensome, further efforts to obtain non-party documents could exceed the boundaries of reasonableness that courts have designed to protect non-parties from the vicissitudes of discovery. For as the court in this case observed, a non-party “should not be required to ‘subsidize’ litigation to which it is not a party.”

Second, the decision illustrates that the use of search terms remains a viable method for searching and producing responsive ESI. Despite the increasing popularity of predictive coding technology, it is noteworthy that neither the court nor Apple took issue with Google’s use of search terms in connection with its production process. Indeed, the intelligent use of keyword searches is still an acceptable eDiscovery approach for most courts, particularly where the parties agree on the terms. That other forms of technology assisted review, such as predictive coding, could arguably be more efficient and cost effective in identifying responsive documents does not impugn the use of keyword searches in eDiscovery. Only time will tell whether the use of keyword searches as the primary means for responding to document requests will give way to more flexible approaches that include the use of multiple technology tools.
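The mechanics behind a keyword-based cull like Google’s are simple enough to sketch. In this hypothetical example (the terms and documents are invented, and production-grade search engines add stemming, proximity operators, and indexing), a document is flagged responsive if any negotiated term appears as a whole word, case-insensitively:

```python
import re

def keyword_responsive(documents, terms):
    """Keyword-culling sketch: a document is responsive if any negotiated
    search term appears in it as a whole word (case-insensitive)."""
    patterns = [re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
                for term in terms]
    return [doc for doc in documents
            if any(p.search(doc["text"]) for p in patterns)]
```

The simplicity is both the appeal and the limitation: the method stands or falls on how well the parties choose the terms, which is precisely why disclosure of those terms mattered to Apple.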

ADR Offers Unique Solutions to Address Common eDiscovery Challenges

Friday, May 3rd, 2013

Much of the writing in the eDiscovery community focuses on the consequences of a party failing to adequately accomplish one of the nine boxes of the Electronic Discovery Reference Model (EDRM). Breaking news posts frequently report on sanctions issued for spoliation, typically stemming from a failure to suspend auto-deletion or to properly circulate a written litigation hold notice. This begs the question: aside from becoming perfectly adept in all nine boxes of the EDRM, how else can an organization protect itself from discovery wars and sanctions?

One way is to explore the possibilities Alternative Dispute Resolution (ADR) has to offer. While there is no substitute for the proper implementation of information governance processes, technology, and the people who manage them, there are alternative and creative ways to minimize exposure. This is not to say that ESI is less discoverable in ADR, but rather that with the proper agreements in place, the way ESI is handled in the event of a dispute can be addressed proactively. That is because although parties are free to use the Federal Rules of Civil Procedure in ADR proceedings, they are not constrained by them. In other words, ADR proceedings can provide parties with the flexibility to negotiate and tailor their own discovery rules to address the specific matter and issues at hand.

Arbitration is a practical and often preferred way to resolve disputes because it is quick, relatively inexpensive, and commonly binding. With enough foresight, parties can preemptively limit the scope of discovery in their agreements to ensure the just and speedy resolution of a matter. Practitioners who are well versed in electronic discovery will be best positioned to counsel clients in forming these agreements up front, obviating protracted discovery. While a similar type of agreement can be reached, and similar protection achieved, through the meet and confer conference in civil litigation, ADR offers a more private forum that gives the parties more contractual power and fewer unwanted surprises.

For example, JAMS includes this language in one of its model recommendations:

JAMS recognizes that there is significant potential for dealing with time and other limitations on discovery in the arbitration clauses of commercial contracts. An advantage of such drafting is that it is much easier for parties to agree on such limitations before a dispute has arisen. A drawback, however, is the difficulty of rationally providing for how best to arbitrate a dispute that has not yet surfaced. Thus, the use of such clauses may be most productive in circumstances in which parties have a good idea from the outset as to the nature and scope of disputes that might thereafter arise.

Thus, arbitration is an attractive option for symmetrical litigation where the stakes are high and neither party wants to delve into a discovery war. A fair amount of early case assessment is also necessary so that the parties fully appreciate what they are agreeing to include, or not include, in the way of ESI. Absent a provision to use specific rules (such as those of the American Arbitration Association or the Federal Arbitration Act), the agreement between the parties is the determining factor in how extensive the scope of discovery will be.

In Mitsubishi Motors v. Soler Chrysler-Plymouth, Inc., 473 U.S. 614, 625 (1985), the U.S. Supreme Court explained that the “liberal federal policy favoring arbitration agreements…is at bottom a policy guaranteeing the enforcement of private contractual agreements.” As such, assuming an equal bargaining position or at least an informed judgment, courts will enforce stipulations regarding discovery, given the policy of enforcing arbitration agreements by their terms. See also Joseph L. Forstadt’s excellent explanation of Discovery in Arbitration for more information.

Cooperation among litigants in discovery has long been a principle of the revered Sedona Conference. ADR practitioners facing complex discovery questions are looking to Sedona’s Cooperation Proclamation for guidance, with an eye toward negotiation, and are educating themselves on ways to further minimize distractions and costs in discovery. One such event is at The Center for Negotiation and Dispute Resolution at UC Hastings, which is conducting a mock Meet and Confer on May 16, 2013. The event highlights the need for all practitioners, whether at the Rule 26(f) conference in litigation or the preliminary hearing in arbitration, to assess electronic discovery issues with the same weight they give claims and damages early in a dispute.

It is also very important that arbitrators, especially given the power they have over a matter, understand the consequences of their rulings. Discovery is typically under the sole control of the arbitrator in a dispute, and only in very select circumstances can relief be granted by a court. An arbitrator who knows nothing about eDiscovery could miss something material and adversely affect the entire outcome. For parties that have identified and addressed these issues proactively, there is more protection and certainty in arbitration. Typically, the primary focus of an arbitrator is enforcing the contract between the parties, not serving as an eDiscovery expert.

It is also important to caution against forfeiting discovery rights by entering into mutual agreements that unreasonably limit discovery. This approach is somewhat reminiscent of the days when lawyers would agree not to conduct discovery because neither knew how. Now, while efficiency and cost savings are priorities, we must guard against a similar paradigm emerging, as we may know too much about how to shield relevant ESI.

As we look to the future, especially for serial litigants, one can imagine arbitration as a perfect venue for predictive coding. In the federal courts, we have seen over the past two years or so an emergence of the use of predictive coding technologies. However, even when the parties agree to use them, which they don’t always, they still struggle to achieve a meeting of the minds on the protocol. These disputes have at times overshadowed the advantages of predictive coding because discovery disputes and attorneys’ fees have overtaken any savings. In ADR there is a real opportunity for similarly situated parties to agree by contract, upfront, on tools, methodologies, and scope. Once these contracts are in place, both parties are bound by the same rules and a just and speedy resolution of the matter can take place.

The “Sedona Bubble” and the Top 3 TAR Trends of 2013

Tuesday, April 23rd, 2013


References to the “Sedona Bubble” are overheard more and more commonly at conferences dealing with cutting-edge topics like the use of predictive coding technology in eDiscovery. The “Sedona Bubble” refers to a small number of lawyers and judges (most of whom are members of The Sedona Conference) who are fully engaged in discussions about issues that influence the evolution of modern discovery practice. Let’s face it: the fact that only a small percentage of judges and lawyers drive important eDiscovery policy decisions is more than just a belief; it is reality.

This reality stems largely from the fact that litigators are a busy lot. So busy, in fact, that they are often forced to operate reactively instead of proactively, because putting out unexpected fires comes with the territory in litigation practice. As a result, the Sedona Bubble has a tremendous impact on cutting-edge eDiscovery issues spanning everything from cross-border litigation and cloud computing to social media and bring-your-own-device (BYOD) questions. Recognizing the heavy time demands facing most litigators is what compelled me to provide more insight into the Sedona Bubble. That is why I am sharing my top three observations about the current state of predictive coding – one of the hottest eDiscovery topics on the planet.

#1 – Plenty of confusion about TAR still exists

Technology-assisted review (TAR) is a term that often means different things to different people. Adding further confusion to the discussion is the fact that the acronym TAR is commonly used interchangeably with other terms like computer-assisted review (CAR) and predictive coding. Many believe confusion about TAR is largely the result of misinformation spread by eDiscovery providers eager to capitalize on current marketplace momentum. Regardless of the reason, many in the industry remain confused about the key differences between predictive coding and other kinds of TAR tools.

What is important to remember is that most people are referring to predictive coding technology when they use any of the aforementioned terms. Predictive coding is a type of supervised machine learning technology that relies on human input to “train” a computer to classify documents. That does not mean attorneys are abdicating their responsibility to review and classify documents during discovery. It means that attorneys can review a fraction of the documents at a fraction of the cost by training the computer system.

In recent months, more litigators and judges have begun to understand that there are many kinds of TAR tools to choose from in the litigator’s toolbelt™. Predictive coding is one of the tools that falls under the broader TAR umbrella and is arguably the most important tool in the toolbelt™ if used properly. All the tools can be helpful; however, TAR tools such as keyword searching, concept searching, clustering, email threading, and de-duplication are not supervised machine learning tools and therefore are not predictive coding tools. The rule of thumb for those being courted by predictive coding providers is caveat emptor. Make sure the providers clarify what they mean when they use terms like TAR, CAR, or predictive coding.

#2 – Momentum is building

In 2013, more and more attorneys and judges are dipping their toes into predictive coding waters – waters that were largely perceived as too frigid to enter only last year. One explanation for the increased usage of predictive coding technologies is the corresponding increase in judicial guidance. In the beginning of 2012, there were no known cases addressing the use of predictive coding technology. Since then, at least six different judges have addressed the use of predictive coding technology (Da Silva Moore v. Publicis Groupe; Kleen Products v. Packaging Corporation of America; Global Aerospace v. Landow Aviation; In re: Actos Product Liability Litigation; EORHB v. HOA Holdings; Gabriel Technologies v. Qualcomm). Taken as a whole, the court decisions are either supportive of the technology or remain neutral on the issue. In fact, an order in a new case, In re Biomet, was reported only a few days ago and continues the general trend toward judicial awareness and support of the technology.

In addition to the growing number of judicial opinions, conference attendees are sharing experiences related to the use of these technologies far more than was the case at conferences in 2012. This data point suggests that usage far exceeds the number of reported predictive coding cases. Further evidence of this momentum, and possibly even greater momentum to come, are discussions about adding comments to the proposed FRCP amendments that would encourage the use of predictive coding technology. Newly proposed amendments to the Federal Rules of Civil Procedure are expected to be published for comment in August, and predictive coding will almost certainly be part of the discussion.

#3 – Skeptics remain

Despite a significant uptick in predictive coding usage since early 2012, the technology is not without skeptics. Those less bullish cite concerns about the multitude of new predictive coding offerings that have recently come onto the market. Most realize that all predictive coding technologies are not created equally and that the vast majority of tools on the market lack transparency. A key concern on this front is the lack of visibility into the underlying statistical methodology that many tools and their providers apply. Since statistics are the backbone of a viable predictive coding process, a lack of transparency into statistical methodologies by most providers has led some to perceive all predictive coding tools as “black boxes.” In reality, different tools provide different levels of transparency, but a general lack of transparency in the industry has perpetuated a “throw the baby out with the bathwater” mentality in some circles. Rumblings about the applicability of Daubert and/or Rule 702 in vetting these tools and the methodologies they rely upon are likely to gain steam.

The issue of transparency is also a common area of debate in the context of an issue known as the “discard pile.” The discard pile generally refers to documents classified as non-responsive that are used to train the predictive coding solution. The protocol established in Da Silva Moore and other cases requires the producing party to reveal the discard pile to the propounding party as part of the predictive coding training process. Proponents argue that this additional level of cooperation invites scrutiny by both parties that will help ensure that training documents are properly classified. The rationale in support of this approach is that predictive coding tools are garbage-in, garbage-out devices, so improperly classifying training documents will lead to erroneous downstream results. The pushback by producing parties varies, but one common theme predominates and can be summarized as follows: “I will share my non-responsive documents with the other side when they are pried from my cold, dead fingers.”

Conclusion

Although some barriers to widespread predictive coding adoption remain, it is clear that the future of predictive coding is now. Eventually, best practices for using these technologies will rise to the surface and the tools themselves will improve. For example, most tools today require complex statistical calculations to be made manually. That means hiring consultants and/or statisticians to crunch the numbers in order to ensure a defensible process, which increases costs. The tools themselves can also be costly because most providers charge a premium for predictive coding solutions. However, price pressure is already afoot, and some providers offer their predictive coding technology at no additional cost. In short, despite some early challenges, most of those within the Sedona Bubble believe predictive coding is here to stay.
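To give a flavor of the kind of number-crunching referred to above, here is a back-of-the-envelope sketch of one common calculation in a predictive coding workflow: how many randomly sampled documents must be reviewed to estimate a proportion (such as the prevalence of responsive documents) within a desired margin of error. It uses the standard normal approximation with a finite-population correction; the function name and defaults are illustrative, and real validation protocols are more involved.

```python
# Sample-size estimate for validating a predictive coding process:
# how many documents to sample to estimate a proportion within +/- margin
# at roughly 95% confidence (z = 1.96). Illustrative only.
import math

def sample_size(margin, confidence_z=1.96, p=0.5, population=None):
    """Documents to sample to estimate a proportion within +/- margin.

    p=0.5 is the conservative worst case; passing a population size
    applies the finite-population correction for smaller collections.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# ~95% confidence, +/-2% margin of error
print(sample_size(0.02))                      # infinite-population estimate
print(sample_size(0.02, population=100_000))  # corrected for a 100k-doc collection
```

Even this simple formula shows why statisticians get involved: the required sample barely shrinks as the collection grows, so "we'll just sample one percent" intuitions are usually wrong in both directions.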


How Good Is Your Predictive Coding Poker Face? (Video Series – Part Two)

Friday, April 12th, 2013

In Part One of “How Good is Your Predictive Coding Poker Face?” we shared video footage of Maura R. Grossman, Craig Ball, Ralph C. Losey and me (Matthew Nelson) discussing similarities between predictive coding technology and the popular poker game Texas Hold ‘em during a panel discussion at Legal Tech New York in January. In particular, we discussed how to “read your opponent” when devising predictive coding protocols, and we tackled sensitive issues like whether or not parties should be required to show their “discard pile” (aka non-responsive files used to train the predictive coding system) to the other side.

In Part Two of our two-part video series, the panel digs deeper into the parallels between poker and predictive coding by discussing a “middle ground” approach to predictive coding referred to as “splitting the pot.” The panel also explores interesting issues like the dwindling role of keyword search technology in eDiscovery, the importance of statistics, and the need for transparency. Listen in as the panel discusses these and other important eDiscovery issues, and feel free to share your feedback.

Does “splitting the pot” make sense?

In poker, two or more players might end up splitting the money (the pot) when they have the same hand. In this situation, neither party wins the hand, but neither party loses. Instead, they live to play another day. Listen to the panelists explore how transparency could be the key to a “middle ground” approach where neither party completely wins the discard pile issue, but neither party loses. The panel also discusses the role judges or special masters can play in ensuring a fair protocol without sacrificing traditional notions of privilege.

http://bcove.me/jinlk2rx

Does using keyword search in conjunction with predictive coding tools result in a stacked deck?

Some predictive coding advocates believe keyword search is dying a slow death in eDiscovery while others believe the proper application of keyword searching has simply changed. When should keyword searches be used in conjunction with predictive coding technology, if at all? Is the deck stacked against parties that insist on keyword culling prior to using predictive coding technologies? Should other technology tools in the litigator’s technology toolbelt be incorporated into predictive coding protocols? Hear from Ralph Losey about a case where keyword searching tools didn’t quite stack up to predictive coding technology and listen to Maura Grossman explain how the high cost of many predictive coding solutions can slow adoption of better technology approaches.

http://bcove.me/dlnnebe5

Will ignoring statistics and transparency ruin your game?

Every good poker player understands the important role statistics play when making decisions like how much you should bet or whether or not you should call your opponent’s bet. Basic statistical calculations can help players estimate the likelihood of beating their opponent in certain situations, but miscalculations or ignoring statistics completely can result in costly errors. Listen to Maura Grossman discuss basic statistical approaches that can make or break your predictive coding protocol and hear Craig Ball’s final word on the importance of transparency for both parties.

http://bcove.me/y1jh1o8g

You may not understand everything there is to know about predictive coding technology after watching these short video clips. However, you will receive valuable tips from industry experts to help you avoid playing a rigged game with a stacked deck. Or as Kenny Rogers might say, you will know when to walk away from a bad predictive coding game and you will know when to run.

How Good Is Your Predictive Coding Poker Face? (Video Series – Part One)

Thursday, April 4th, 2013

Predictive coding technology is a lot like the popular poker game Texas Hold ‘em. Both can be risky and expensive for players who don’t understand the fundamentals of the game. Good players understand what kind of information they need from their opponents in order to make informed decisions. Bad players, on the other hand, ignore important elements of the game like statistics that must be understood in order to avoid making big mistakes.

In January, Maura R. Grossman, Craig Ball, Ralph C. Losey and I (Matthew Nelson) discussed these and other parallels between poker and predictive coding in front of a full-house at Legal Tech New York (LTNY). Please enjoy some of the live video clips from part one of our session titled, “How Good is Your Predictive Coding Poker Face?” as you contemplate whether or not you’re ready to go all-in with predictive coding technology.

Why “reading” your opponent is important

Recognizing your opponent’s strengths and weaknesses, aka “reading your opponent,” is a key strategic consideration whether you’re playing poker or establishing a predictive coding protocol. In litigation, the Federal Rule 26(f) discovery conference often serves as the best opportunity to evaluate your opponent’s eDiscovery acumen. What if your opponent isn’t tech savvy? Do you still have a legal or ethical obligation to explain what kind of technology you plan to use during discovery? If not, should you disclose what technology is being used anyway? Watch this video clip as the panel examines whether or not opposing counsel’s level of technological sophistication is a factor that should be considered when deciding whether or not to reveal your technology approach.

http://bcove.me/nzo0c3my

Is the game changing?

Although the rules of poker are constant, the way the game is played continues to change and evolve in sophistication as more and more players try different strategies and approaches. Similarly, many believe a new eDiscovery paradigm is developing whereby methodologies for responding to discovery are likely to be more closely scrutinized by the court and opposing parties than in the past. Craig Ball thinks that responding parties have been “getting away with murder” for a long time and that the eDiscovery game is changing. Ralph Losey believes in Sedona Principle 6 and the notion that responding parties are in the best position to understand their data, regardless of whether or not the game is changing. Take a look at how the panel plays this tricky hand.

http://bcove.me/8pxcfl65

Should a request to see the predictive coding “discard pile” be treated as a stone cold bluff?

Do parties have an obligation to disclose non-responsive files (aka the discard pile) used to train the predictive coding system? What if the opposing party insists on requiring the disclosure of the discard pile as part of the predictive coding protocol? Is your opponent bluffing or should you think seriously about cooperating with the request? If you cooperate, will too much transparency lead to a gradual erosion of traditional work product protection?

http://bcove.me/8vheqta0

How Good is Your Predictive Coding Poker Face? (Part Two)

Stay tuned for part two of “How Good is Your Predictive Coding Poker Face?” where our panel digs deeper into the parallels between poker and predictive coding and considers the possibility of a “middle ground” approach that may satisfy both parties. The panel also explores other interesting issues like the importance of statistics, the need for transparency, and the dwindling role of keyword search technology in eDiscovery.

In the meantime, let us know what you think. Is predictive coding changing the eDiscovery game? Are producing parties getting away with murder? Should attorneys be required to show their predictive coding discard pile to the other side if they use predictive coding?

Falling Off The Cliff: Parties Are Still Failing The Proportionality Test

Thursday, March 28th, 2013

One of the great questions that the legal profession and the eDiscovery cognoscenti are grappling with is how to best address the unreasonable costs and burdens associated with the discovery process. This is not a new phenomenon. While accentuated by the information explosion, the courts and rules makers have been struggling for years with a solution to this perpetual dilemma.

Proportionality As The Solution

Over the past three decades, the answer to this persistent problem has generally focused on emphasizing proportionality standards. Proportionality – requiring that the benefits of discovery be commensurate with the corresponding burdens – has the potential to be a game-changing concept. If proportionality standards are followed by counsel, clients and the courts, there is a strong possibility that discovery costs and burdens could be made more reasonable. That is perhaps why various courts (at the circuit, district and state levels) throughout the U.S. have implemented rules to highlight proportionality as the touchstone of discovery practice.

These issues were recently spotlighted by United States Magistrate Judge Frank Maas, Lockheed Martin Associate General Counsel Shawn Cheadle and Milberg partner Ariana Tadler at the LegalTech conference in New York City. What is most evident and important from the various video excerpts of their discussion is the panelists’ general agreement that proportionality standards – if followed – can keep a lawsuit from veering off the eDiscovery cliff. These experts, who represent vastly different and conflicting constituencies, emphasized how proportionality and the related concepts of reasonableness and cooperation can lead to quicker and ostensibly cheaper results in litigation. As Judge Maas makes clear, however, that will only happen with a “change in paradigm and a change in thinking on both sides” of a lawsuit.

Failing The Proportionality Test

Unfortunately, far too many litigants often still neglect to follow basic proportionality standards. This troubling trend is confirmed by various court opinions that are seemingly issued every month in which discovery costs and burdens are increased due to litigants’ failures to engage in proportional discovery. The failure to engage in proportional discovery follows a familiar pattern. Overly broad discovery requests are typically met with general objections and evasive responses that unreasonably limit the scope of responsive information. Such requests and responses generally run contrary to the spirit of proportionality.

The “bible” on proportionality law, Mancia v. Mayflower Textile Services Co., provides that discovery requests and their corresponding responses must be reasonable and proportional. To achieve such an objective, the Mancia court urged counsel and clients to “stop and think” about their discovery conduct as mandated by Federal Rule 26(g):

Rule 26(g) imposes an affirmative duty to engage in pretrial discovery in a responsible manner that is consistent with the spirit and purposes of Rules 26 through 37. In addition, Rule 26(g) is designed to curb discovery abuse by explicitly encouraging the imposition of sanctions. The subdivision provides a deterrent to both excessive discovery and evasion by imposing a certification requirement that obliges each attorney to stop and think about the legitimacy of a discovery request, a response thereto, or an objection.

The clear lesson from this example is the negative impact that discovery conduct can have on a case. Instead of engaging in a proportional approach in which the parties cooperatively hammer out (with court assistance, if necessary) the parameters and limitations of discovery, parties frequently adopt a unilateral, “take no prisoners” strategy. Such an approach generally affects the cost and pace of litigation. Instead of addressing the merits of a dispute through dispositive motion practice, the parties and the court are often thrown into distracting and costly collateral eDiscovery litigation. And as the LegalTech panelists made clear, the resulting situation benefits nobody.

Falling Off The Cliff?

The current discovery paradigm is particularly troubling given that many sophisticated litigants who are incentivized to engage in proportional discovery may not be doing so. If that is the case, how can courts realistically expect other less educated parties to do otherwise?

To deter such discovery conduct, courts may need to embark on a proportionality education campaign. However, any such efforts will likely need to include a promise to address noncompliance with sanctions under Federal Rule 26(g). As the courts have made clear, many counsel and clients will likely engage in proportional discovery only under the threat of some real consequence.

To better address these issues, the federal Civil Advisory Committee is now considering multiple amendments to the Federal Rules of Civil Procedure that would better emphasize proportionality standards. In a recent post, we discussed one such proposal, which would change Rule 37(e) to ensure that courts consider the role of proportionality in connection with parties’ preservation efforts. Another would modify Federal Rule 26(b)(1) to spotlight the limitations of proportionality on the permissible scope of discovery. Though still far from final, these proposed rule amendments could ultimately advance the objective of reducing the costs and burdens of discovery. Such efforts may very well be necessary if we are to keep the discovery process from falling off the cliff.