
Archive for the ‘e-discovery news’ Category

Breaking News: Court Orders Google to Produce eDiscovery Search Terms in Apple v. Samsung

Friday, May 10th, 2013

Apple obtained a narrow discovery victory yesterday in its long-running legal battle against fellow technology titan Samsung. In Apple Inc. v. Samsung Electronics Co. Ltd., the court ordered non-party Google to turn over the search terms and custodians that it used to produce documents in response to an Apple subpoena.

According to the court’s order, Apple argued for the production of Google’s search terms and custodians in order “to know how Google created the universe from which it produced documents.” The court noted that Apple sought such information “to evaluate the adequacy of Google’s search, and if it finds that search wanting, it then will pursue other courses of action to obtain responsive discovery.”

Google countered by defending the extent of its production and emphasizing the burdens that Apple’s request would place on Google as a non-party to Apple’s dispute with Samsung. Google complained that Apple’s demands were essentially a gateway to additional discovery from Google, which would arguably be excessive given Google’s non-party status.

Sensitive to the concerns of both parties, the court struck a middle ground in its order. On the one hand, the court ordered Google to produce the search terms and custodians since that “will aid in uncovering the sufficiency of Google’s production and serves greater purposes of transparency in discovery.” But on the other hand, the court preserved Google’s right to object to any further discovery efforts by Apple: “The court notes that its order does not speak to the sufficiency of Google’s production nor to any arguments Google may make regarding undue burden in producing any further discovery.”

This latest opinion from the Apple v. Samsung series of lawsuits is noteworthy for two reasons. First, the decision is instructive regarding the eDiscovery burdens that non-parties must shoulder in litigation. While the disclosure of a non-party’s underlying search methodology (in this instance, search terms and custodians) may not be unduly burdensome, further efforts to obtain non-party documents could exceed the boundaries of reasonableness that courts have designed to protect non-parties from the vicissitudes of discovery. For as the court in this case observed, a non-party “should not be required to ‘subsidize’ litigation to which it is not a party.”

Second, the decision illustrates that the use of search terms remains a viable method for searching and producing responsive ESI. Despite the increasing popularity of predictive coding technology, it is noteworthy that neither the court nor Apple took issue with Google’s use of search terms in connection with its production process. Indeed, the intelligent use of keyword searches is still an acceptable eDiscovery approach for most courts, particularly where the parties agree on the terms. That other forms of technology-assisted review, such as predictive coding, could arguably be more efficient and cost effective in identifying responsive documents does not impugn the use of keyword searches in eDiscovery. Only time will tell whether the use of keyword searches as the primary means for responding to document requests will give way to more flexible approaches that include the use of multiple technology tools.

New Gartner Report Spotlights Significance of Email Archiving for Defensible Deletion

Thursday, November 1st, 2012

Gartner recently released a report that spotlights the importance of using email archiving as part of an organization’s defensible deletion strategy. The report – Best Practices for Using Email Archiving to Eliminate PST and Mailbox Quota Headaches (Alan Dayley, September 21, 2012) – specifically focuses on the information retention and eDiscovery challenges associated with email storage on Microsoft Exchange and how email archiving software can help address these issues. As Gartner makes clear in its report, an archiving solution can provide genuine opportunities to reduce the costs and risks of email hoarding.

The Problem: PST Files

The primary challenge that many organizations are experiencing with Microsoft Exchange email is the unchecked growth of messages stored in Personal Storage Table (PST) files. Used to bypass storage quotas on Exchange, PST files are problematic because they increase the costs and risks of eDiscovery while circumventing information retention policies.

That the unrestrained growth of PST files could create problems downstream for organizations should come as no surprise. Various court decisions have addressed this issue, with the DuPont v. Kolon Industries litigation foremost among them. In the DuPont case, a $919 million verdict and 20-year product injunction largely stemmed from the defendant’s inability to prevent the destruction of thousands of pages of email formerly stored in PST files. That spoliation resulted in an adverse inference instruction to the jury and the ensuing verdict against the defendant.

The Solution: Eradicate PSTs with the Help of Archiving Software and Retention Policies

To address the PST problem, Gartner suggests following a three-step process to help manage and then eradicate PSTs from the organization. This includes educating end users regarding both the perils of PSTs and the ease of access to email through archiving software. It also involves disabling the creation of new PSTs, a process that should ultimately culminate with the elimination of existing PSTs.

In connection with this process, Gartner suggests deployment of archiving software with a “PST management tool” to facilitate the eradication process. With the assistance of the archiving tool, existing PSTs can be discovered and migrated into the archive’s central data repository. Once there, email retention policies can begin to expire stale, useless and even harmful messages that were formerly outside the company’s information retention framework.
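For readers who want to see what the discovery step looks like in practice, below is a minimal sketch (in Python, using a hypothetical file-share path) of how an administrator might inventory existing PST files before migrating them into an archive; commercial PST management tools perform this discovery, migration and deletion automatically and at far larger scale.

import os

def find_pst_files(root):
    """Walk a file share and return (path, size in MB) for every PST file found."""
    results = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pst"):
                path = os.path.join(dirpath, name)
                results.append((path, os.path.getsize(path) / (1024 * 1024)))
    return results

if __name__ == "__main__":
    # Hypothetical share where end users have stashed mailbox exports
    for path, size_mb in find_pst_files(r"\\fileserver\users"):
        print(f"{size_mb:8.1f} MB  {path}")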

With respect to the development of retention policies, organizations should consider engaging in a cooperative internal process involving IT, compliance, legal and business units. These key stakeholders must be engaged and collaborate if workable policies are to be created. The actual retention periods should take into account the types of email generated and received by an organization, along with the enterprise’s business, industry and litigation profile.

To ensure successful implementation of such retention policies and also address the problem of PSTs, an organization should explore whether an on-premises or cloud archiving solution is a better fit for its environment. While each method has its advantages, Gartner advises organizations to consider whether certain key features are included with a particular offering:

Email classification. The archiving tool should allow your organization to classify and tag emails in accordance with your retention policy definitions, including user-selected, user/group, or keyword tagging (a simple classification sketch appears after this feature list).

User access to archived email. The tool must also give end users appropriate and user-friendly access to their archived email, thus eliminating concerns over their inability to manage their email storage with PSTs.

Legal and information discovery capabilities. The search, indexing, and e-discovery capabilities of the archiving tool should also match your needs or enable integration into corporate e-discovery systems.
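To make the email classification feature more concrete, here is a minimal sketch, using hypothetical retention rules, of how an archiving tool might map messages to retention tags; in practice these rules are configured within the archiving product rather than written as code.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Hypothetical retention rules: (tag, retention in years, match test).
# The first matching rule wins; None means "retain indefinitely" (legal hold).
RULES = [
    ("legal-hold",      None, lambda m: "litigation" in m.body.lower()),
    ("finance-records", 7,    lambda m: "invoice" in m.subject.lower()),
    ("general",         2,    lambda m: True),
]

def classify(message):
    """Return the retention tag and period for the first rule that matches."""
    for tag, years, test in RULES:
        if test(message):
            return tag, years

print(classify(Email("ap@example.com", "Invoice 1042", "Payment due in 30 days.")))
# -> ('finance-records', 7)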

While perhaps not a panacea for the storage and eDiscovery problems associated with email, on-premises or cloud archiving software should provide various benefits to organizations. Indeed, such technologies have the potential to help organizations store, manage and discover their email efficiently, cost effectively and in a defensible manner. Where properly deployed and fully implemented, organizations should be able to reduce the nettlesome costs and risks connected with email.

Will Predictive Coding Live Up to the eDiscovery Hype?

Monday, May 14th, 2012

The myriad of published material regarding predictive coding technology has almost universally promised reduced costs and lighter burdens for the eDiscovery world. Indeed, until the now famous order was issued in the Da Silva Moore v. Publicis Groupe case “approving” the use of predictive coding, many in the industry had parroted this “lower costs/lighter burdens” mantra like the retired athletes who chanted “tastes great/less filling” during the 1970s Miller Lite commercials. But a funny thing happened on the way to predictive coding satisfying the cost cutting mandate of Federal Rule of Civil Procedure 1: the same old eDiscovery story of high costs and lengthy delays is plaguing the initial rollout of this technology. The three publicized cases involving predictive coding are particularly instructive on this early, but troubling, development.

Predictive Coding Cases

In Da Silva Moore v. Publicis Groupe, the plaintiffs’ attempt to recuse Judge Peck has diverted the spotlight from the costs and delays associated with the use of predictive coding. Indeed, the parties have been wrangling for months over the parameters of using this technology for defendant MSL’s document review. During that time, each side has incurred substantial attorney fees and other costs to address fairly routine review issues. These delays figure to continue, as the parties now project that MSL’s production will not be complete until September 7, 2012. Even that date seems optimistic, particularly given Judge Peck’s recent observation about the slow pace of production: “You’re now woefully behind schedule already at the first wave.” Moreover, Judge Peck has suggested on multiple occasions that a special master be appointed to address disagreements over relevance designations. Special masters, production delays, additional briefing and related court hearings all lead to the inescapable conclusion that the parties will be saddled with a huge eDiscovery bill (despite presumptively lower review costs) due to the use of predictive coding technology.

The Kleen Products v. Packaging Corporation case is also plagued by cost and delay issues. As explained in our post on this case last month, the plaintiffs are demanding a “do-over” of the defendants’ document production, insisting that predictive coding technology be used instead of keyword search and other analytical tools. Setting aside the merits of plaintiffs’ arguments, the costs the parties have incurred in connection with this motion are quickly mounting. After the parties submitted briefs on the issues, the court held two hearings on the matter, including a full day of testimony from the parties’ experts. With another “Discovery Hearing” now on the docket for May 22nd, predictive coding has essentially turned an otherwise routine document production query into an expensive, time consuming sideshow with no end in sight.

Cost and delay issues may very well trouble the parties in the Global Aerospace v. Landow Aviation matter, too. In Global Aerospace, the court acceded to the defendants’ request to use predictive coding technology over the plaintiffs’ objections. Despite allowing the use of such technology, the court provided plaintiffs with the opportunity to challenge the “completeness or the contents of the production or the ongoing use of predictive coding technology.” Such a condition essentially invites plaintiffs to re-litigate their objections through motion practice. Moreover, like the proverbial “exception that swallows the rule,” the order allows for the possibility that the court could withdraw its approval of predictive coding technology. All of which could lead to seemingly endless discovery motions, production “re-dos” and inevitable cost and delay issues.

Better Times Ahead?

At present, the Da Silva Moore, Kleen Products and Global Aerospace cases do not suggest that predictive coding technology will “secure the just, speedy, and inexpensive determination of every action and proceeding.” Nevertheless, there is room for considerable optimism that predictive coding will ultimately succeed. Technological advances in the industry will provide greater transparency into the black box of predictive coding technology that to date has not existed. Additional advances should also lead to easy-to-use workflow management consoles, which will in turn increase defensibility of the process and satisfy legitimate concerns regarding production results, such as those raised by the plaintiffs in Moore and Global Aerospace.

Technological advances that also increase the accuracy of first generation predictive coding tools should yield greater understanding and acceptance about the role predictive coding can play in eDiscovery. As lawyers learn to trust the reliability of transparent predictive coding, they will appreciate how this tool can be deployed in various scenarios (e.g., prioritization, quality assurance for linear review, full scale production) and in connection with existing eDiscovery technologies. In addition, such understanding will likely facilitate greater cooperation among counsel, a lynchpin for expediting the eDiscovery process. This is evident from the Moore, Kleen Products and Global Aerospace cases, where a lack of cooperation has caused increased costs and delays.

With the promise of transparency and simpler workflows, predictive coding technology should eventually live up to its billing of helping organizations discover their information in an efficient, cost effective and defensible manner. As for now, the “promise” of first generation predictive coding tools appears to be nothing more than that, leaving organizations looking like the cash-strapped “Monopoly man,” wondering where their litigation dollars have gone.

District Court Upholds Judge Peck’s Predictive Coding Order Over Plaintiff’s Objection

Monday, April 30th, 2012

In a decision that advances the predictive coding ball one step further, United States District Judge Andrew L. Carter, Jr. upheld Magistrate Judge Andrew Peck’s order in Da Silva Moore, et al. v. Publicis Groupe, et al. despite Plaintiffs’ multiple objections. Although Judge Carter rejected all of Plaintiffs’ arguments in favor of overturning Judge Peck’s predictive coding order, he did not rule on Plaintiffs’ motion to recuse Judge Peck from the current proceedings – a matter that is expected to be addressed separately at a later time. Whether or not a successful recusal motion will alter this or any other rulings in the case remains to be seen.

Finding that it was within Judge Peck’s discretion to conclude that the use of predictive coding technology was appropriate “under the circumstances of this particular case,” Judge Carter summarized Plaintiffs’ key arguments, listed below, and rejected each of them in his five-page Opinion and Order issued on April 26, 2012.

  • the predictive coding method contemplated in the ESI protocol lacks generally accepted reliability standards,
  • Judge Peck improperly relied on outside documentary evidence,
  • Defendant MSLGroup’s (“MSL’s”) expert is biased because the use of predictive coding will reap financial benefits for his company,
  • Judge Peck failed to hold an evidentiary hearing and adopted MSL’s version of the ESI protocol on an insufficient record and without proper Rule 702 consideration.

Since Judge Peck’s earlier order is “non-dispositive,” Judge Carter identified and applied the “clearly erroneous or contrary to law” standard of review in rejecting Plaintiffs’ request to overturn the order. Central to Judge Carter’s reasoning is his assertion that any confusion regarding the ESI protocol is immaterial because the protocol “contains standards for measuring the reliability of the process and the protocol builds in levels of participation by Plaintiffs.” In other words, Judge Carter essentially dismisses Plaintiffs’ concerns as premature on the grounds that the current protocol provides a system of checks and balances that protects both parties. To be clear, that doesn’t necessarily mean Plaintiffs won’t get a second bite at the apple if problems with MSL’s productions surface.

For now, however, Judge Carter seems to be saying that although Plaintiffs must live with the current order, they are by no means relinquishing their rights to a fair and just discovery process. In fact, the existing protocol allows Plaintiffs to actively participate in and monitor the entire process closely. For example, Judge Carter writes that, “if the predictive coding software is flawed or if Plaintiffs are not receiving the types of documents that should be produced, the parties are allowed to reconsider their methods and raise their concerns with the Magistrate Judge.”

Judge Carter also specifically addresses Plaintiffs’ concerns related to statistical sampling techniques, which could ultimately prove to be their meatiest argument. A key area of disagreement between the parties is whether or not MSL is reviewing enough documents to ensure relevant documents are not completely overlooked even if this complex process is executed flawlessly. Addressing this point, Judge Carter states that, “If the method provided in the protocol does not work or if the sample size is indeed too small to properly apply the technology, the Court will not preclude Plaintiffs from receiving relevant information, but to call the method unreliable at this stage is speculative.”
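For those curious about the sampling math underlying this dispute, the sketch below (with made-up numbers) shows one common way to estimate how many relevant documents a review process may have left behind, by randomly sampling the documents slated for non-production; the actual Da Silva Moore protocol defines its own sampling steps and thresholds.

import math

def elusion_estimate(sample_size, relevant_in_sample, discard_pile_size, z=1.96):
    """Estimate relevant documents remaining in the discard pile, with a
    rough normal-approximation 95% confidence interval."""
    p = relevant_in_sample / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    low, high = max(0.0, p - margin), min(1.0, p + margin)
    return p * discard_pile_size, (low * discard_pile_size, high * discard_pile_size)

# Made-up numbers: 2,400 documents sampled from a 1,000,000-document
# discard pile, 12 of which turn out to be relevant on manual review.
point, (low, high) = elusion_estimate(2400, 12, 1_000_000)
print(f"Estimated missed documents: {point:,.0f} (roughly {low:,.0f} to {high:,.0f})")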

Although most practitioners are focused on seeing how these novel predictive coding issues play out, it is important not to overlook two key nuggets of information lining Judge Carter’s Opinion and Order. First, Judge Carter’s statement that “[t]here simply is no review tool that guarantees perfection” serves as an acknowledgement that “reasonableness” is the standard by which discovery should be measured, not “perfection.” Second, Judge Carter’s acknowledgement that manual review with keyword searches may be appropriate in certain situations should serve as a wake-up call for those who think predictive coding technology will replace all predecessor technologies. To the contrary, predictive coding is a promising new tool to add to the litigator’s tool belt, but it is not necessarily a replacement for all other technology tools.

Plaintiffs in Da Silva Moore may not have received the ruling they were hoping for, but Judge Carter’s Opinion and Order makes it clear that the courthouse door has not been closed. Given the controversy surrounding this case, one can assume that Plaintiffs are likely to voice many of their concerns at a later date as discovery proceeds. In other words, don’t expect all of these issues to fade away without a fight.

Breaking News: Court Clarifies Duty to Preserve Evidence, Denies eDiscovery Sanctions Motion Against Pfizer

Wednesday, April 18th, 2012

It is fortunately becoming clearer that organizations do not need to preserve information until litigation is “reasonably anticipated.” In Brigham Young University v. Pfizer (D. Utah Apr. 16, 2012), the court denied the plaintiff university’s fourth motion for discovery sanctions against Pfizer, likely ending its chance to obtain a “game-ending” eDiscovery sanction. The case, which involves disputed claims over the discovery and development of prominent anti-inflammatory drugs, is set for trial on May 29, 2012.

In Brigham Young, the university pressed its case for sanctions against Pfizer based on a vastly expanded concept of a litigant’s preservation duty. Relying principally on the controversial Phillip M. Adams & Associates v. Dell case, the university argued that Pfizer’s “duty to preserve runs to the legal system generally.” The university reasoned that just as the defendant in the Adams case was “sensitized” by earlier industry lawsuits to the real possibility of plaintiff’s lawsuit, Pfizer was likewise put on notice of the university’s claims due to related industry litigation.

The court rejected such a sweeping characterization of the duty to preserve, opining that it was “simply too broad.” Echoing the concerns articulated by the Advisory Committee when it framed the 2006 amendments to the Federal Rules of Civil Procedure (FRCP), the court took pains to emphasize the unreasonable burdens that parties such as Pfizer would face if such a duty were imposed:

“It is difficult for the Court to imagine how a party could ever dispose of information under such a broad duty because of the potential for some distantly related litigation that may arise years into the future.”

The court also rejected the university’s argument because such a position failed to appreciate the basic workings of corporate records retention policies. As the court reasoned, “[e]vidence may simply be discarded as a result of good faith business procedures.” When those procedures operate to inadvertently destroy evidence before the duty to preserve is triggered, the court held that sanctions should not issue: “The Federal Rules protect from sanctions those who lack control over the requested materials or who have discarded them as a result of good faith business procedures.”

The Brigham Young case is significant for a number of reasons. First, it reiterates that organizations need not keep electronically stored information (ESI) for legal or regulatory purposes until litigation is reasonably anticipated and the duty to preserve is triggered. As American courts have almost uniformly held since the 1997 case of Concord Boat Corp. v. Brunswick Corp., organizations are not required to keep every piece of paper, every email, every electronic document and every backup tape.

Second, Brigham Young emphasizes that organizations can and should use document retention protocols to rid themselves of data stockpiles. Absent a preservation duty or other exceptional circumstances, paring back ESI pursuant to “good faith business procedures” (such as a neutral retention policy) will be protected under the law.

Finally, Brigham Young narrows the holding of the Adams case to its particular facts. The Adams case has been particularly troublesome to organizations as it arguably expanded their preservation duty in certain circumstances. However, Brigham Young clarified that this expansion was unwarranted in the instant case, particularly given that Pfizer documents were destroyed pursuant to “good faith business procedures.”

In summary, Brigham Young teaches that organizations will be protected from eDiscovery sanctions to the extent they destroy ESI in good faith pursuant to a reasonable records retention policy. This will likely bring a sigh of relief to enterprises struggling with the information explosion since it encourages confident deletion of data when the coast is clear of a discrete litigation event.

eDiscovery Down Under: New Zealand and Australia Are Not as Different as They Sound, Mate!

Thursday, March 29th, 2012

Shortly after arriving in Wellington, New Zealand, I picked up the Dominion Post newspaper and read its lead article: a story involving U.S. jurisdiction being exercised over billionaire NZ resident Mr. Kim Dotcom. The article reinforced the challenges we face with blurred legal and data governance issues presented by the globalization of the economy and the expansive reach of the internet. Originally from Germany, and having changed his surname to reflect the origin of his fortune, Mr. Dotcom has become a familiar figure in NZ of late. He has just purchased two opulent homes in NZ and has become an internationally controversial figure for internet piracy. Mr. Dotcom’s legal troubles arise out of his internet business that enables illegal downloads of pirated material between users, which allegedly is powering the largest copyright infringement in global history. It is estimated that his website constitutes 4% of the internet traffic in the world, which means there could be tons of discovery in this case (or, cases).

The most recent legal problems Mr. Dotcom faces are with U.S. authorities who want to extradite him to face copyright charges, worth an alleged $500 million, stemming from his Megaupload file-sharing website. From a criminal and record-keeping standpoint, Mr. Dotcom’s issues highlight the need for and use of appropriate technologies. In order to establish a case against him, it’s likely that search technologies were deployed by U.S. intelligence agencies to piece together Mr. Dotcom’s activities, banking information, emails and the data transfers on his site. In a case like this, where intelligence agencies would need to collect, search and cull email from so many different geographies and data sources down to just the relevant information, using technologies that link email conversation threads and give insight into a data collection set from a transparent search point of view would provide immense value. Additionally, the Immigration bureau in New Zealand has been required to release hundreds of documents about Mr. Dotcom’s residency application that were requested under the Official Information Act (OIA). The records that Immigration had to produce were likely pulled from their archive or records management system in NZ, and then redacted for private information before production to the public.

The same tools we use here in the U.S. for investigatory and compliance purposes, as well as for litigation, are needed in Australia and New Zealand to build a criminal case or to comply with the OIA. Information governance technology adoption in APAC is trending first toward government agencies, which are purchasing archiving and eDiscovery technologies more rapidly than private companies. Why is this? One reason could be that because the governments in APAC have a larger responsibility for healthcare, education and the protection of privacy, they are more invested in the compliance requirements and in staying off the front page of the news for shortcomings. APAC private enterprises that are small or mid-sized and are not yet doing international business do not have the same archiving and eDiscovery needs large government agencies do, nor do they face litigation in the same way their American counterparts do. Large global companies, no matter where they are based, should assume that they may be subject to litigation wherever they do business.

An interesting NZ use case on the enterprise level is that of Transpower (the quasi-governmental energy agency), where compliance with both “private and public” requirements is mandatory. Transpower is an organisation that is government-owned, yet operates for a profit. Sally Myles, an experienced records manager who recently came to Transpower to head up information governance initiatives, says,

“We have to comply with the Public Records Act of 2005, and public requests for information are frequent as we are under constant scrutiny about where we will develop our plants. We also must comply with the Privacy Act of 1993. My challenge is to get the attention of our leadership to demonstrate why we need to make these changes and show them a plan for implementation as well as cost savings.”

Myles’ comments indicate NZ is facing many of the same information challenges we are here in the US with storage, records management and searching for meaningful information within the organisation.

Australia, New Zealand and U.S. Commonalities

In Australia and NZ, litigation is not seen as a compelling business driver the same way it is in the U.S. This is because many of the information governance needs of organisations are driven by regulatory, statutory and compliance requirements, and the environment is not as litigious as it is in the U.S. The Official Information Act in NZ and the Freedom of Information Act in Australia are analogous to the Freedom of Information Act (FOIA) here in the U.S. The requirements to produce public records alone justify the use of technology to provide the ability to manage large volumes of data and produce appropriately redacted information to the public. This is true regardless of litigation. Additionally, there are now cases like DuPont and Mr. Dotcom’s that make the risk of litigation involving the U.S. very real. The fact that implementing an information governance product suite will also enable a company to be prepared for litigation is a beneficial by-product for many entities, as they need technology for record keeping and privacy reasons anyway. In essence, the same capabilities are achieved at the end of the day, regardless of the impetus for implementing a solution.

The Royal Commission – The Ultimate eDiscovery Vehicle

One way to think about an Australian Royal Commission (RC) is to see it as a version of a U.S. government investigation. A key difference, however, is that in the case of the U.S. government, an investigation is typically into private companies. Conversely, a Royal Commission is typically an investigation into a government body after a major tragedy, and it is initiated by the Head of State. An RC is an ad-hoc, formal, public inquiry into a defined issue with considerable discovery powers. These powers can be greater than those of a judge, but are restricted to the scope and terms of reference of the Commission. RCs are called to look into matters of great importance and usually have very large budgets. The RC is charged with researching the issue, consulting experts both within and outside of government, and developing findings to recommend changes to the law or other courses of action. RCs have immense investigatory powers, including summoning witnesses under oath, offering indemnities, seizing documents and other evidence (sometimes including those normally protected, such as classified information), holding hearings in camera if necessary and—in a few cases—compelling government officials to aid in the execution of the Commission.

These expansive powers give the RC the opportunity to employ state of the art technology and to skip the slow, bureaucratic decision-making processes found within the government when it comes to implementing technological change. For this reason, eDiscovery will initially continue to increase in the government sector at a more rapid pace than in the private sector in the Asia Pacific region. This is because litigation is less prevalent in the Asia Pacific, and because the RC is a unique investigatory vehicle with the most far-reaching authority for discovering information. Moreover, the timeframes for RCs are tight and their scopes are broad, making them hair-on-fire situations that move quickly.

While the APAC information management environment does not have the exact same drivers the U.S. market does, it definitely has the same archiving, eDiscovery and technology needs for different reasons. Another key point is that the APAC archiving and eDiscovery market will likely be driven by the government as records, search and production requirements are the main compliance needs in Australia and NZ. APAC organisations would be well served by beginning to modularly implement key elements of an information governance plan, as globalization is driving us all to a more common and automated approach to data management. 

Computer-Assisted Review “Acceptable in Appropriate Cases,” says Judge Peck in new Da Silva Moore eDiscovery Ruling

Saturday, February 25th, 2012

The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, issued an opinion and order (order) on February 24th in Da Silva Moore v. Publicis Groupe, stating that computer-assisted review in eDiscovery is “acceptable in appropriate cases.”  The order was issued over plaintiffs’ objection that the predictive coding protocol submitted to the court will not provide an appropriate level of transparency into the predictive coding process.  This and other objections will be reviewed by the district court for error, leaving open the possibility that the order could be modified or overturned.  Regardless of whether or not that happens, Judge Peck’s order makes it clear that the future of predictive coding technology is bright, the role of other eDiscovery technology tools should not be overlooked, and the methodology for using any technology tool is just as important as the tool used.

Plaintiffs’ Objections and Judge Peck’s Preemptive Strikes

In anticipation of the district court’s review, the order preemptively rejects plaintiffs’ assertion that defendant MSL’s protocol is not sufficiently transparent.  In so doing, Judge Peck reasons that plaintiffs will be able to see how MSL codes emails.  If they disagree with MSL’s decisions, plaintiffs will be able to seek judicial intervention. (Id. at 16.)  Plaintiffs appear to argue that although this and other steps in the predictive coding protocol are transparent, the overall protocol (viewed in its entirety) is not transparent or fair.  The crux of plaintiffs’ argument is that just because MSL provides a few peeks behind the curtain during this complex process, many important decisions impacting the accuracy and quality of the document production are being made unilaterally by MSL.  Plaintiffs essentially conclude that such unilateral decision-making does not allow them to properly vet MSL’s methodology, which leads to a fox guarding the hen house problem.

Similarly, Judge Peck dismissed plaintiffs’ argument that expert testimony should have been considered during the status conference pursuant to Rule 702 and the Daubert standard.  In one of many references to his article, “Search, Forward: will manual document review and keyword searches be replaced by computer-assisted coding?” Judge Peck explains:

“My article further explained my belief that Daubert would not apply to the results of using predictive coding, but that in any challenge to its use, this Judge would be interested in both the process used and the results.” (Id. at 4.)

The court further hints that results may play a bigger role than science:

“[I]f the use of predictive coding is challenged in a case before me, I will want to know what was done and why that produced defensible results. I may be less interested in the science behind the “black box” of the vendor’s software than in whether it produced responsive documents with reasonably high recall and high precision.” (Id.)
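As a quick refresher on the two metrics Judge Peck references, the short sketch below (with hypothetical counts) shows how recall and precision are calculated for a document production.

def recall_precision(relevant_produced, relevant_total, produced_total):
    """Recall: share of all relevant documents that were produced.
    Precision: share of produced documents that are relevant."""
    return relevant_produced / relevant_total, relevant_produced / produced_total

# Hypothetical counts: 9,000 of 10,000 relevant documents were produced,
# within a total production of 12,000 documents.
recall, precision = recall_precision(9_000, 10_000, 12_000)
print(f"recall = {recall:.0%}, precision = {precision:.0%}")  # recall = 90%, precision = 75%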

Judge Peck concludes that Rule 702 and Daubert are not applicable to how documents are searched for and found in discovery. Instead, both deal with the “trial court’s role as gatekeeper to exclude unreliable testimony from being submitted to the jury at trial.” (Id. at 15.) Despite Judge Peck’s comments, the waters are still murky on this point, as evidenced by the differing views expressed by Judges Grimm and Facciola in O’Keefe, Equity Analytics, and Victor Stanley. For example, in Equity Analytics, Judge Facciola addresses the need for expert testimony to support keyword search technology:

“[D]etermining whether a particular search methodology, such as keywords, will or will not be effective certainly requires knowledge beyond the ken of a lay person (and a lay lawyer) and requires expert testimony that meets the requirements of Rule 702 of the Federal Rules of Evidence.” (Id. at 333.)

Given the uncertainty regarding the applicability of Rule 702 and Daubert, it will be interesting to see if and how the district court addresses the issue of expert testimony.

What This Order Means and Does Not Mean for the Future of Predictive Coding

The order states that “This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” (Id. at 2.)  Recognizing that there have been some erroneous reports, Judge Peck went to great lengths to clarify his order and to “correct the many blogs about this case.” (Id. at 2, fn. 1.)  Some important excerpts are listed below:

The Court did not order the use of predictive coding

“[T]he Court did not order the parties to use predictive coding.  The parties had agreed to defendants’ use of it, but had disputes over the scope and implementation, which the Court ruled on, thus accepting the use of computer-assisted review in this lawsuit.” (Id.)

Computer-assisted review is not required in all cases

“That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review.” (Id. at 25.)

The opinion should not be considered an endorsement of any particular vendors or tools

“Nor does this Opinion endorse any vendor…, nor any particular computer-assisted review tool.” (Id.)

Predictive coding technology can still be expensive

MSL wanted to review and produce only the top 40,000 documents, which it estimated would cost $200,000 (at $5 per document). (1/4/12 Conf. Tr. at 47-48, 51.)

Process and methodology are as important as the technology utilized

“As with keywords or any other technological solution to eDiscovery, counsel must design an appropriate process, including use of available technology, with appropriate quality control testing, to review and produce relevant ESI while adhering to Rule 1 and Rule 26(b)(2)(C) proportionality.” (Id.)

Conclusion

The final excerpt drives home the points made in a recent Forbes article involving this and another predictive coding case (Kleen Products). The first point is that there is a range of technology-assisted review (TAR) tools in the litigator’s tool belt that will often be used together in eDiscovery, and predictive coding technology is one of those tools. Secondly, none of these tools will provide accurate results unless they are relatively easy to use and used properly. In other words, the carpenter is just as important as the hammer. Applying these guideposts and demanding cooperation and transparency between the parties will help the bench usher in a new era of eDiscovery technology that is fair and just for everyone.

Plaintiffs Object to Predictive Coding Order, Argue Lack of Transparency in eDiscovery Process

Friday, February 24th, 2012

The other shoe dropped in the Da Silva Moore v. Publicis Groupe case this week as the plaintiffs filed their objections to a preliminary eDiscovery order addressing predictive coding technology. In challenging the order issued by the Honorable Andrew J. Peck, the plaintiffs argue that the protocol will not provide an appropriate level of transparency into the predictive coding process. In particular, the plaintiffs assert that the ordered process does not establish “the necessary standards” and “quality assurance” levels required to satisfy Federal Rule of Civil Procedure 26(b)(1) and Federal Rule of Evidence 702.

The Rule 26(b) Relevance Standard

With respect to the relevance standard under Rule 26, plaintiffs maintain that there are no objective criteria to establish that defendant’s predictive coding technology will reliably “capture a sufficient number of relevant documents from the total universe of documents in existence.” Unless the technology’s “search methodologies” are “carefully crafted and tested for quality assurance,” there is risk that the defined protocol could “exclude a large number of responsive email” from the defendant’s production. This, plaintiffs assert, is not acceptable in an employment discrimination matter where liberal discovery is typically the order of the day.

Reliability under Rule 702

The plaintiffs also contend that the court abdicated its gatekeeper role under Rule 702 and the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals by not soliciting expert testimony to assess the reliability of the defendant’s predictive coding technology. Such testimony is particularly necessary in this instance, plaintiffs argue, where the technology at issue is new and untested by the judiciary. To support their position, the plaintiffs filed a declaration from their expert witness that challenges the technology’s reliability. Relying on that declaration, the plaintiffs complain that the process lacks “explicit and defined standards.” According to the plaintiffs, such standards would typically include “calculations . . . to determine whether the system is accurate in identifying responsive documents.” They would also include “the standard of acceptance that they are trying to achieve,” i.e., whether the defendant’s “method actually works.” Plaintiffs conclude that without such “quality assurance measurements in place to determine whether the methodology is reliable,” the current predictive coding process is “fundamentally flawed” and should be rejected.

Wait and See

Now that the plaintiffs have filed their objections, the eDiscovery world must wait and see what happens next. The defendant will certainly respond in kind, vigorously defending the ordered process with declarations from its own experts. Whether the plaintiffs or the defendant will carry the day depends on how the district court views these issues, particularly the issue of transparency. Simply put: is the process at issue sufficiently transparent to satisfy Rule 26 and Rule 702? That is the proverbial $64,000 question as we wait and see how this issue plays out in the courts over the coming weeks and months.

Judge Peck Issues Order Addressing “Joint Predictive Coding Protocol” in Da Silva Moore eDiscovery Case

Thursday, February 23rd, 2012

Litigation attorneys were abuzz last week when a few breaking news stories erroneously reported that The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, ordered the parties in a gender discrimination case to use predictive coding technology during discovery. Despite early reports, the parties in the case (Da Silva Moore v. Publicis Groupe, et al.) actually agreed to use predictive coding technology during discovery – apparently of their own accord. The case is still significant because predictive coding technology in eDiscovery is relatively new to the legal field, and many have been reluctant to embrace a new technological approach to document review due to, among other things, a lack of judicial guidance.

Unfortunately, despite this atmosphere of cooperation, the discussion stalled when the parties realized they were miles apart in terms of defining a mutually agreeable predictive coding protocol.  A February status conference transcript reveals significant confusion and complexity related to issues such as random sampling, quality control testing, and the overall process integrity.  In response, Judge Peck ordered the parties to submit a Joint Protocol for eDiscovery to address eDiscovery generally and the use of predictive coding technology specifically.

The parties submitted their proposed protocol on February 22, 2012 and Judge Peck quickly reduced that submission to a stipulation and order.  The stipulation and order certainly provides more clarity and insight into the process than the status conference transcript.  However, reading the stipulation and order leaves little doubt that the devil is in the details – and there are a lot of details.  Equally clear is the fact that the parties are still in disagreement and the plaintiffs do not support the “joint” protocol laid out in the stipulation and order.  Plaintiffs actually go so far as to incorporate a paragraph into the stipulation and order stating that they “object to this ESI Protocol in its entirety” and they “reserve the right to object to its use in the case.”

These problems underscore some of the points made in a Forbes article published earlier this week titled, “Federal Judges Consider Important Issues That Could Shape the Future of Predictive Coding Technology.” The Forbes article relies in part on a recent predictive coding survey to make the point that, while predictive coding technology has tremendous potential, the solutions need to become more transparent and the workflows must be simplified before they go mainstream.

Survey Says… Information Governance and Predictive Coding Adoption Slow, But Likely to Gain Steam as Technology Improves

Wednesday, February 15th, 2012

The biggest legal technology event of the year, otherwise known as LegalTech New York, always seems to have a few common rallying cries and this year was no different.  In addition to cloud computing and social media, predictive coding and information governance were hot topics of discussion that dominated banter among vendors, speakers, and customers.  Symantec conducted a survey on the exhibit show floor to find out what attendees really thought about these two burgeoning areas and to explore what the future might hold.

Information Governance is critical, understood, and necessary – but it is not yet being adequately addressed.

Although 84% of respondents are familiar with the term information governance and 73% believe that an integrated information governance strategy is critical to reducing information risk and cost, only 19% have implemented an information governance solution. These results raise the question: if information governance is critical, then why aren’t more organizations adopting information governance practices?

Perhaps the answer lies in the cross-functional nature of information governance and confusion about who is responsible for the organization’s information governance strategy. For example, the survey also revealed that information governance is a concept that incorporates multiple functions across the organization, including email/records retention, data storage, data security and privacy, compliance, and eDiscovery. Given the broad impact of information governance across the organization, it is no surprise that respondents also indicated that multiple departments within the organization – including Legal, IT, Compliance, and Records Management – have an ownership stake.

These results tend to suggest at least two things.  First, information governance is a concept that touches multiple parts of the organization.  Defining and implementing appropriate information governance policies across the organization should include an integrated strategy that involves key stakeholders within the organization.  Second, recognition that information governance is a common goal across the entire organization highlights the fact that technology must evolve to help address information governance challenges.

The days of relying too heavily on disconnected point solutions to address eDiscovery, storage, data security, and record retention concerns are numbered as organizations continue to mandate internal cost cutting and data security measures. Decreasing the number of point solutions an organization supports and improving integration between the remaining solutions is a key component of a good information governance strategy because it has the effect of driving down technology and labor costs. Similarly, an integrated solution strategy helps streamline the backup, retrieval, and overall management of critical data, which simultaneously increases worker productivity and reduces organizational risk in areas such as eDiscovery and data loss prevention.

The trail that leads from point solutions to an integrated solution strategy is already being blazed in the eDiscovery space and this trend serves as a good information governance roadmap.  More and more enterprises faced with investigations and litigation avoid the cost and time of deploying point solutions to address legal hold, data collection, data processing, and document review in favor of a single, integrated, enterprise eDiscovery platform.  The resulting reduction in cost and risk is significant and is fueling support for even broader information governance initiatives in other areas.  These broader initiatives will still include integrated eDiscovery solutions, but the initiatives will continue to expand the integrated solution approach into other areas such as storage management, record retention, and data security technologies to name a few.

Despite mainstream familiarity, predictive coding technology has not yet seen mainstream adoption but the future looks promising.

Much like the term information governance, most respondents were familiar with predictive coding technology for electronic discovery, but the survey results indicated that adoption of the technology to date has been weak.  Specifically, the survey revealed that while 97% of respondents are familiar with the term predictive coding, only 12% have adopted predictive coding technology.  Another 19% are “currently adopting” or plan to adopt predictive coding technology, but the timeline for adoption is unclear.

When asked what challenges “held back” respondents from adopting predictive coding technology, most cited accuracy, cost, and defensibility as their primary concerns. Concerns about “privilege/confidentiality” and difficulty understanding the technology were also cited as reasons impeding adoption. Significantly, 70% of respondents believe that predictive coding technology would “go mainstream” if it were easier to use, more transparent, and less expensive. These findings are consistent with the observations articulated in my recent blog post (2012: Year of the Dragon and Predictive Coding – Will the eDiscovery Landscape Be Forever Changed?).

The survey results, combined with the potential cost savings associated with predictive coding technology, suggest that the movement toward predictive coding technology is gaining steam. Lawyers are typically reluctant to embrace new technology that is not intuitive because it is difficult to defend a process that is difficult to understand. The complexity and confusion surrounding today’s predictive coding technology was highlighted during a recent status conference in Da Silva Moore v. Publicis Groupe, et al. The case is venued in the Southern District of New York before Judge Andrew Peck and serves as further evidence that predictive coding technology is gaining steam. Expect future proceedings in the Da Silva Moore case to further validate these survey results by revealing both the promise and complexity of current predictive coding technologies. Similarly, expect next generation predictive coding technology to address current complexities by becoming easier to use, more transparent, and less expensive.