
Posts Tagged ‘e-discovery news’

Will Predictive Coding Live Up to the eDiscovery Hype?

Monday, May 14th, 2012

The myriad published material regarding predictive coding technology has almost universally promised reduced costs and lighter burdens for the eDiscovery world. Indeed, until the now famous order was issued in the Da Silva Moore v. Publicis Groupe case “approving” the use of predictive coding, many in the industry had parroted this “lower costs/lighter burdens” mantra like the retired athletes who chanted “tastes great/less filling” during the 1970s Miller Lite commercials. But a funny thing happened on the way to predictive coding satisfying the cost-cutting mandate of Federal Rule of Civil Procedure 1: the same old eDiscovery story of high costs and lengthy delays is plaguing the initial rollout of this technology. The three publicized cases involving predictive coding are particularly instructive on this early, but troubling, development.

Predictive Coding Cases

In Da Silva Moore v. Publicis Groupe, the plaintiffs’ attempt to recuse Judge Peck has diverted the spotlight from the costs and delays associated with the use of predictive coding. Indeed, the parties have been wrangling for months over the parameters of using this technology for defendant MSL’s document review. During that time, each side has incurred substantial attorney fees and other costs to address fairly routine review issues. These delays figure to continue, as the parties now project that MSL’s production will not be complete until September 7, 2012. Even that date may be optimistic, particularly given Judge Peck’s recent observation about the slow pace of production: “You’re now woefully behind schedule already at the first wave.” Moreover, Judge Peck has suggested on multiple occasions that a special master be appointed to address disagreements over relevance designations. Special masters, production delays, additional briefing and related court hearings all lead to the inescapable conclusion that the parties will be saddled with a huge eDiscovery bill (despite presumptively lower review costs) due to the use of predictive coding technology.

The Kleen Products v. Packaging Corporation case is also plagued by cost and delay issues. As explained in our post on this case last month, the plaintiffs are demanding a “do-over” of the defendants’ document production, insisting that predictive coding technology be used instead of keyword search and other analytical tools. Setting aside the merits of plaintiffs’ arguments, the costs the parties have incurred in connection with this motion are quickly mounting. After the parties submitted briefs on the issues, the court held two hearings on the matter, including a full day of testimony from the parties’ experts. With another “Discovery Hearing” now on the docket for May 22nd, predictive coding has essentially turned an otherwise routine document production dispute into an expensive, time-consuming sideshow with no end in sight.

Cost and delay issues may very well trouble the parties in the Global Aerospace v. Landow Aviation matter, too. In Global Aerospace, the court acceded to the defendants’ request to use predictive coding technology over the plaintiffs’ objections. Despite allowing the use of such technology, the court provided plaintiffs with the opportunity to challenge the “completeness or the contents of the production or the ongoing use of predictive coding technology.” Such a condition essentially invites plaintiffs to re-litigate their objections through motion practice. Moreover, like the proverbial “exception that swallows the rule,” the order allows for the possibility that the court could withdraw its approval of predictive coding technology. All of this could lead to seemingly endless discovery motions, production “re-dos” and inevitable cost and delay issues.

Better Times Ahead?

At present, the Da Silva Moore, Kleen Products and Global Aerospace cases do not suggest that predictive coding technology will “secure the just, speedy, and inexpensive determination of every action and proceeding.” Nevertheless, there is room for considerable optimism that predictive coding will ultimately succeed. Technological advances in the industry will provide a level of transparency into the black box of predictive coding that has not existed to date. Additional advances should also lead to easy-to-use workflow management consoles, which will in turn increase the defensibility of the process and satisfy legitimate concerns regarding production results, such as those raised by the plaintiffs in Moore and Global Aerospace.

Technological advances that also increase the accuracy of first generation predictive coding tools should yield greater understanding and acceptance about the role predictive coding can play in eDiscovery. As lawyers learn to trust the reliability of transparent predictive coding, they will appreciate how this tool can be deployed in various scenarios (e.g., prioritization, quality assurance for linear review, full scale production) and in connection with existing eDiscovery technologies. In addition, such understanding will likely facilitate greater cooperation among counsel, a lynchpin for expediting the eDiscovery process. This is evident from the Moore, Kleen Products and Global Aerospace cases, where a lack of cooperation has caused increased costs and delays.

With the promise of transparency and simpler workflows, predictive coding technology should eventually live up to its billing of helping organizations discover their information in an efficient, cost-effective and defensible manner. For now, however, the “promise” of first generation predictive coding tools appears to be nothing more than that, leaving organizations looking like the cash-strapped “Monopoly man,” wondering where their litigation dollars have gone.

District Court Upholds Judge Peck’s Predictive Coding Order Over Plaintiff’s Objection

Monday, April 30th, 2012

In a decision that advances the predictive coding ball one step further, United States District Judge Andrew L. Carter, Jr. upheld Magistrate Judge Andrew Peck’s order in Da Silva Moore, et al. v. Publicis Groupe, et al. despite Plaintiffs’ multiple objections. Although Judge Carter rejected all of Plaintiffs’ arguments in favor of overturning Judge Peck’s predictive coding order, he did not rule on Plaintiffs’ motion to recuse Judge Peck from the current proceedings – a matter that is expected to be addressed separately at a later time. Whether a successful recusal motion would alter this or any other rulings in the case remains to be seen.

Finding that it was within Judge Peck’s discretion to conclude that the use of predictive coding technology was appropriate “under the circumstances of this particular case,” Judge Carter summarized Plaintiffs’ key arguments, listed below, and rejected each of them in his five-page Opinion and Order issued on April 26, 2012:

  • the predictive coding method contemplated in the ESI protocol lacks generally accepted reliability standards;
  • Judge Peck improperly relied on outside documentary evidence;
  • Defendant MSLGroup’s (“MSL’s”) expert is biased because the use of predictive coding will reap financial benefits for his company; and
  • Judge Peck failed to hold an evidentiary hearing and adopted MSL’s version of the ESI protocol on an insufficient record and without proper Rule 702 consideration.

Since Judge Peck’s earlier order is “non-dispositive,” Judge Carter identified and applied the “clearly erroneous or contrary to law” standard of review in rejecting Plaintiffs’ request to overturn the order. Central to Judge Carter’s reasoning is his assertion that any confusion regarding the ESI protocol is immaterial because the protocol “contains standards for measuring the reliability of the process and the protocol builds in levels of participation by Plaintiffs.” In other words, Judge Carter essentially dismisses Plaintiffs’ concerns as premature on the grounds that the current protocol provides a system of checks and balances that protects both parties. To be clear, that doesn’t necessarily mean Plaintiffs won’t get a second bite at the apple if problems with MSL’s productions surface.

For now, however, Judge Carter seems to be saying that although Plaintiffs must live with the current order, they are by no means relinquishing their rights to a fair and just discovery process. In fact, the existing protocol allows Plaintiffs to actively participate in and monitor the entire process closely. For example, Judge Carter writes that, “if the predictive coding software is flawed or if Plaintiffs are not receiving the types of documents that should be produced, the parties are allowed to reconsider their methods and raise their concerns with the Magistrate Judge.”

Judge Carter also specifically addresses Plaintiffs’ concerns related to statistical sampling techniques, which could ultimately prove to be their meatiest argument. A key area of disagreement between the parties is whether MSL is reviewing enough documents to ensure relevant documents are not overlooked, even if this complex process is executed flawlessly. Addressing this point, Judge Carter states that, “If the method provided in the protocol does not work or if the sample size is indeed too small to properly apply the technology, the Court will not preclude Plaintiffs from receiving relevant information, but to call the method unreliable at this stage is speculative.”
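
For readers unfamiliar with the statistics behind this dispute, the sketch below shows, in Python and with entirely hypothetical numbers, how a validation sample size is commonly estimated when testing whether relevant documents have been missed. It is not the calculation from the parties’ protocol, only an illustration of why sample size matters.

    import math

    def required_sample_size(population, confidence_z=1.96, margin_of_error=0.02, expected_prevalence=0.5):
        """Estimate how many randomly selected documents must be reviewed to
        measure the prevalence of relevant material within a given margin of
        error (95% confidence by default), using a finite population correction.
        Illustrative only; any real protocol defines its own parameters."""
        n0 = (confidence_z ** 2) * expected_prevalence * (1 - expected_prevalence) / (margin_of_error ** 2)
        # The finite population correction shrinks the sample for smaller collections.
        n = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n)

    # Hypothetical example: a 3 million document collection, +/- 2% at 95% confidence.
    print(required_sample_size(3_000_000))  # roughly 2,400 documents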

Although most practitioners are focused on seeing how these novel predictive coding issues play out, it is important not to overlook two key nuggets of information lining Judge Carter’s Opinion and Order. First, Judge Carter’s statement that “[t]here simply is no review tool that guarantees perfection” serves as an acknowledgement that “reasonableness” is the standard by which discovery should be measured, not “perfection.” Second, Judge Carter’s acknowledgement that manual review with keyword searches may be appropriate in certain situations should serve as a wake-up call for those who think predictive coding technology will replace all predecessor technologies. To the contrary, predictive coding is a promising new tool to add to the litigator’s tool belt, but it is not necessarily a replacement for all other technology tools.

Plaintiffs in Da Silva Moore may not have received the ruling they were hoping for, but Judge Carter’s Opinion and Order makes it clear that the courthouse door has not been closed. Given the controversy surrounding this case, one can assume that Plaintiffs are likely to voice many of their concerns at a later date as discovery proceeds. In other words, don’t expect all of these issues to fade away without a fight.

Breaking News: Court Clarifies Duty to Preserve Evidence, Denies eDiscovery Sanctions Motion Against Pfizer

Wednesday, April 18th, 2012

It is fortunately becoming clearer that organizations do not need to preserve information until litigation is “reasonably anticipated.” In Brigham Young University v. Pfizer (D. Utah Apr. 16, 2012), the court denied the plaintiff university’s fourth motion for discovery sanctions against Pfizer, likely ending its chance to obtain a “game-ending” eDiscovery sanction. The case, which involves disputed claims over the discovery and development of prominent anti-inflammatory drugs, is set for trial on May 29, 2012.

In Brigham Young, the university pressed its case for sanctions against Pfizer based on a vastly expanded concept of a litigant’s preservation duty. Relying principally on the controversial Phillip M. Adams & Associates v. Dell case, the university argued that Pfizer’s “duty to preserve runs to the legal system generally.” The university reasoned that just as the defendant in the Adams case was “sensitized” by earlier industry lawsuits to the real possibility of plaintiff’s lawsuit, Pfizer was likewise put on notice of the university’s claims due to related industry litigation.

The court rejected such a sweeping characterization of the duty to preserve, opining that it was “simply too broad.” Echoing the concerns articulated by the Advisory Committee when it framed the 2006 amendments to the Federal Rules of Civil Procedure (FRCP), the court took pains to emphasize the unreasonable burdens that parties such as Pfizer would face if such a duty were imposed:

“It is difficult for the Court to imagine how a party could ever dispose of information under such a broad duty because of the potential for some distantly related litigation that may arise years into the future.”

The court also rejected the university’s argument because such a position failed to appreciate the basic workings of corporate records retention policies. As the court reasoned, “[e]vidence may simply be discarded as a result of good faith business procedures.” When those procedures operate to inadvertently destroy evidence before the duty to preserve is triggered, the court held that sanctions should not issue: “The Federal Rules protect from sanctions those who lack control over the requested materials or who have discarded them as a result of good faith business procedures.”

The Brigham Young case is significant for a number of reasons. First, it reiterates that organizations need not keep electronically stored information (ESI) for legal or regulatory purposes until litigation is reasonably anticipated and the duty to preserve is triggered. As American courts have almost uniformly held since the 1997 case of Concord Boat Corp. v. Brunswick Corp., organizations are not required to keep every piece of paper, every email, every electronic document and every backup tape.

Second, Brigham Young emphasizes that organizations can and should use document retention protocols to rid themselves of data stockpiles. Absent a preservation duty or other exceptional circumstances, paring back ESI pursuant to “good faith business procedures” (such as a neutral retention policy) will be protected under the law.
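
As a concrete illustration of the kind of “neutral retention policy” the court describes, here is a minimal Python sketch, with a hypothetical retention schedule and field names, of routine deletion that always yields to a legal hold. It is offered only to show the principle, not any party’s actual procedure.

    from datetime import datetime, timedelta

    # Hypothetical retention schedule, in days, for illustration only.
    RETENTION_PERIODS = {"email": 365 * 3, "contract": 365 * 7}

    def should_delete(record, legal_hold_ids, now=None):
        """Return True if a record may be purged in the ordinary course of business.
        Anything subject to a legal hold is preserved, reflecting the rule that
        routine deletion is protected only before the preservation duty attaches."""
        now = now or datetime.utcnow()
        if record["id"] in legal_hold_ids:  # the preservation duty trumps the schedule
            return False
        age = now - record["created"]
        return age > timedelta(days=RETENTION_PERIODS.get(record["type"], 365 * 7))

    record = {"id": "doc-1", "type": "email", "created": datetime(2008, 1, 15)}
    print(should_delete(record, legal_hold_ids=set()))  # True: past retention, no hold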

Finally, Brigham Young narrows the holding of the Adams case to its particular facts. The Adams case has been particularly troublesome to organizations as it arguably expanded their preservation duty in certain circumstances. However, Brigham Young clarified that this expansion was unwarranted in the instant case, particularly given that Pfizer documents were destroyed pursuant to “good faith business procedures.”

In summary, Brigham Young teaches that organizations will be protected from eDiscovery sanctions to the extent they destroy ESI in good faith pursuant to a reasonable records retention policy. That should bring a sigh of relief to enterprises struggling with the information explosion, since it confirms that data may be confidently deleted when no discrete litigation event is on the horizon.

eDiscovery Down Under: New Zealand and Australia Are Not as Different as They Sound, Mate!

Thursday, March 29th, 2012

Shortly after arriving in Wellington, New Zealand, I picked up the Dominion Post newspaper and read its lead article: a story involving U.S. jurisdiction being exercised over billionaire NZ resident Mr. Kim Dotcom. The article reinforced the challenges we face with blurred legal and data governance issues presented by the globalization of the economy and the expansive reach of the internet. Originally from Germany, and having changed his surname to reflect the origin of his fortune, Mr. Dotcom has become a familiar figure in NZ of late. He has purchased two opulent homes in NZ and has become an internationally controversial figure for internet piracy. Mr. Dotcom’s legal troubles arise out of his internet business, which enables illegal downloads of pirated material between users and allegedly powers the largest copyright infringement in global history. His website is estimated to account for roughly 4% of the world’s internet traffic, which means there could be tons of discovery in this case (or, cases).

The most recent legal problems Mr. Dotcom faces are with U.S. authorities, who want to extradite him to face copyright charges involving an estimated $500 million in harm allegedly caused by his Megaupload file-sharing website. From a criminal and record-keeping standpoint, Mr. Dotcom’s issues highlight the need for and use of appropriate technologies. In order to establish a case against him, it’s likely that search technologies were deployed by U.S. intelligence agencies to piece together Mr. Dotcom’s activities, banking information, emails and the data transfers on his site. In a case like this, where agencies would need to collect, search and cull email from so many different geographies and data sources down to just the relevant information, technologies that link email conversation threads and provide transparent insight into a collection set would be of immense value. Additionally, the Immigration bureau in New Zealand has been required to release hundreds of documents about Mr. Dotcom’s residency application that were requested under the Official Information Act (OIA). The records that Immigration had to produce were likely pulled from their archive or records management system in NZ, and then redacted for private information before production to the public.
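
To make the threading point concrete, here is a minimal Python sketch, with hypothetical message fields, of how review tools group emails into conversation threads by following reply headers. It is only an illustration of the technique referenced above, not any agency’s actual tooling.

    from collections import defaultdict

    def build_threads(messages):
        """Group messages into conversation threads by following In-Reply-To style
        references -- a simplified version of the email threading that review
        platforms use to cull a collection down to relevant conversations.
        `messages` is a list of dicts with hypothetical keys: id, in_reply_to, subject."""
        parent_of = {m["id"]: m.get("in_reply_to") for m in messages}

        def root(msg_id):
            # Walk up the reply chain until we reach the first message in the thread.
            while parent_of.get(msg_id) in parent_of:
                msg_id = parent_of[msg_id]
            return msg_id

        threads = defaultdict(list)
        for m in messages:
            threads[root(m["id"])].append(m)
        return threads

    msgs = [
        {"id": "a", "in_reply_to": None, "subject": "Transfer logs"},
        {"id": "b", "in_reply_to": "a", "subject": "Re: Transfer logs"},
        {"id": "c", "in_reply_to": "b", "subject": "Re: Transfer logs"},
    ]
    print({k: [m["id"] for m in v] for k, v in build_threads(msgs).items()})  # {'a': ['a', 'b', 'c']}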

The same tools we use in the U.S. for investigatory, compliance and litigation purposes are needed in Australia and New Zealand to build a criminal case or to comply with the OIA. Information governance technology adoption in APAC is trending first toward government agencies, which are purchasing archiving and eDiscovery technologies more rapidly than private companies. Why is this? One reason could be that because governments in APAC have a larger responsibility for healthcare, education and the protection of privacy, they are more invested in meeting compliance requirements and staying off the front page of the news for shortcomings. APAC private enterprises that are small or mid-sized and are not yet doing international business do not have the same archiving and eDiscovery needs that large government agencies do, nor do they face litigation in the same way their American counterparts do. Large global companies, no matter where they are based, should assume that they may be subject to litigation wherever they do business.

An interesting NZ use case at the enterprise level is that of Transpower (the quasi-governmental energy agency), where compliance with both “private and public” requirements is mandatory. Transpower is an organisation that is government-owned, yet operates for a profit. Sally Myles, an experienced records manager who recently came to Transpower to head up information governance initiatives, says,

“We have to comply with the Public Records Act of 2005, and public requests for information are frequent, as we are under constant scrutiny about where we will develop our plants. We also must comply with the Privacy Act of 1993. My challenge is to get the attention of our leadership to demonstrate why we need to make these changes and show them a plan for implementation as well as cost savings.”

Myles’ comments indicate NZ is facing many of the same information challenges we face here in the U.S. with storage, records management and searching for meaningful information within the organisation.

Australia, New Zealand and U.S. Commonalities

In Australia and NZ, litigation is not seen as a compelling business driver the way it is in the U.S. This is because many of the information governance needs of organisations are driven by regulatory, statutory and compliance requirements, and the environment is not as litigious as it is in the U.S. The Official Information Act in NZ and the Freedom of Information Act in Australia are analogous to the Freedom of Information Act (FOIA) here in the U.S. The requirement to produce public records alone justifies the use of technology to manage large volumes of data and produce appropriately redacted information to the public, regardless of litigation. Additionally, there are now cases like DuPont and Mr. Dotcom’s that legitimize the risk of litigation with the U.S. The fact that implementing an information governance product suite will also enable a company to be prepared for litigation is a beneficial by-product for many entities, as they need the technology for record-keeping and privacy reasons anyway. In essence, the same capabilities are achieved at the end of the day, regardless of the impetus for implementing a solution.

The Royal Commission – The Ultimate eDiscovery Vehicle

One way to think about an Australian Royal Commission (RC) is as the Australian analogue of a U.S. government investigation. A key difference, however, is that a U.S. government investigation is typically directed at private companies, whereas a Royal Commission is typically an investigation into a government body after a major tragedy, and it is initiated by the Head of State. An RC is an ad-hoc, formal, public inquiry into a defined issue with considerable discovery powers. These powers can be greater than those of a judge, but they are restricted to the scope and terms of reference of the Commission. RCs are called to look into matters of great importance and usually have very large budgets. The RC is charged with researching the issue, consulting experts both within and outside of government and developing findings to recommend changes to the law or other courses of action. RCs have immense investigatory powers, including summoning witnesses under oath, offering indemnities, seizing documents and other evidence (sometimes including materials normally protected, such as classified information), holding hearings in camera if necessary and, in a few cases, compelling government officials to aid in the execution of the Commission.

These expansive powers give the RC the opportunity to employ state-of-the-art technology and to skip the slow bureaucratic decision-making processes found within the government when it comes to implementing technological change. For this reason, eDiscovery will initially continue to grow in the government sector at a more rapid pace than in the private sector in the Asia Pacific region. This is because litigation is less prevalent in the Asia Pacific, and because the RC is a unique investigatory vehicle with the most far-reaching authority for discovering information. Moreover, the timeframes for RCs are tight and their scopes are broad, making them hair-on-fire situations that move quickly.

While the APAC information management environment does not have the exact same drivers the U.S. market does, it definitely has the same archiving, eDiscovery and technology needs for different reasons. Another key point is that the APAC archiving and eDiscovery market will likely be driven by the government as records, search and production requirements are the main compliance needs in Australia and NZ. APAC organisations would be well served by beginning to modularly implement key elements of an information governance plan, as globalization is driving us all to a more common and automated approach to data management. 

Computer-Assisted Review “Acceptable in Appropriate Cases,” says Judge Peck in new Da Silva Moore eDiscovery Ruling

Saturday, February 25th, 2012

The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, issued an opinion and order (order) on February 24th in Da Silva Moore v. Publicis Groupe, stating that computer-assisted review in eDiscovery is “acceptable in appropriate cases.”  The order was issued over plaintiffs’ objection that the predictive coding protocol submitted to the court will not provide an appropriate level of transparency into the predictive coding process.  This and other objections will be reviewed by the district court for error, leaving open the possibility that the order could be modified or overturned.  Regardless of whether or not that happens, Judge Peck’s order makes it clear that the future of predictive coding technology is bright, the role of other eDiscovery technology tools should not be overlooked, and the methodology for using any technology tool is just as important as the tool used.

Plaintiffs’ Objections and Judge Peck’s Preemptive Strikes

In anticipation of the district court’s review, the order preemptively rejects plaintiffs’ assertion that defendant MSL’s protocol is not sufficiently transparent.  In so doing, Judge Peck reasons that plaintiffs will be able to see how MSL codes emails.  If they disagree with MSL’s decisions, plaintiffs will be able to seek judicial intervention. (Id. at 16.)  Plaintiffs appear to argue that although this and other steps in the predictive coding protocol are transparent, the overall protocol (viewed in its entirety) is not transparent or fair.  The crux of plaintiffs’ argument is that, even though MSL provides a few peeks behind the curtain during this complex process, many important decisions impacting the accuracy and quality of the document production are being made unilaterally by MSL.  Plaintiffs essentially conclude that such unilateral decision-making does not allow them to properly vet MSL’s methodology, which leads to a classic fox-guarding-the-henhouse problem.

Similarly, Judge Peck dismissed plaintiffs’ argument that expert testimony should have been considered during the status conference pursuant to Rule 702 and the Daubert standard.  In one of many references to his article, “Search, Forward: will manual document review and keyword searches be replaced by computer-assisted coding?” Judge Peck explains:

“My article further explained my belief that Daubert would not apply to the results of using predictive coding, but that in any challenge to its use, this Judge would be interested in both the process used and the results.” (Id. at 4.)

The court further hints that results may play a bigger role than science:

“[I]f the use of predictive coding is challenged in a case before me, I will want to know what was done and why that produced defensible results. I may be less interested in the science behind the “black box” of the vendor’s software than in whether it produced responsive documents with reasonably high recall and high precision.” (Id.)
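
Since “recall” and “precision” are doing the heavy lifting in that quote, here is a small Python sketch, with hypothetical document counts, showing how the two metrics are computed. It is an illustration of the terminology only, not of any protocol in the case.

    def precision_recall(produced_ids, truly_responsive_ids):
        """Compute the two metrics Judge Peck references: precision (how much of
        what was produced is actually responsive) and recall (how much of the
        responsive material was actually found). Document IDs are hypothetical."""
        produced = set(produced_ids)
        responsive = set(truly_responsive_ids)
        true_positives = len(produced & responsive)
        precision = true_positives / len(produced) if produced else 0.0
        recall = true_positives / len(responsive) if responsive else 0.0
        return precision, recall

    # Suppose the tool returns 100 documents, 80 of which are responsive,
    # out of 120 responsive documents in the whole collection.
    p, r = precision_recall(range(100), range(20, 140))
    print(f"precision={p:.0%}, recall={r:.0%}")  # precision=80%, recall=67%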

Judge Peck concludes that Rule 702 and Daubert are not applicable to how documents are searched for and found in discovery.  Instead, both deal with the “trial court’s role as gatekeeper to exclude unreliable testimony from being submitted to the jury at trial.” (Id. at 15.)  Despite Judge Peck’s comments, the waters are still murky on this point, as evidenced by differing views expressed by Judges Grimm and Facciola in O’Keefe, Equity Analytics, and Victor Stanley.  For example, in Equity Analytics, Judge Facciola addresses the need for expert testimony to support keyword search technology:

“[D]etermining whether a particular search methodology, such as keywords, will or will not be effective certainly requires knowledge beyond the ken of a lay person (and a lay lawyer) and requires expert testimony that meets the requirements of Rule 702 of the Federal Rules of Evidence.” (Id. at 333.)

Given the uncertainty regarding the applicability of Rule 702 and Daubert, it will be interesting to see if and how the district court addresses the issue of expert testimony.

What This Order Means and Does not Mean for the Future of Predictive Coding

The order states that “This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” (Id. at 2.)  Recognizing that there have been some erroneous reports, Judge Peck went to great lengths to clarify his order and to “correct the many blogs about this case.” (Id. at 2, fn. 1.)  Some important excerpts are listed below:

The Court did not order the use of predictive coding

“[T]he Court did not order the parties to use predictive coding.  The parties had agreed to defendants’ use of it, but had disputes over the scope and implementation, which the Court ruled on, thus accepting the use of computer-assisted review in this lawsuit.” (Id.)

Computer-assisted review is not required in all cases

“That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review.” (Id. at 25.)

The opinion should not be considered an endorsement of any particular vendors or tools

“Nor does this Opinion endorse any vendor…, nor any particular computer-assisted review tool.” (Id.)

Predictive coding technology can still be expensive

MSL wanted to only review and produce the top 40,000 documents, which it estimated would cost $200,000 (at $5 per document). (1/4/12 Conf. Tr. at 47-48, 51.)

Process and methodology are as important as the technology utilized

“As with keywords or any other technological solution to eDiscovery, counsel must design an appropriate process, including use of available technology, with appropriate quality control testing, to review and produce relevant ESI while adhering to Rule 1 and Rule 26(b)(2)(C) proportionality.” (Id.)

Conclusion

The final excerpt drives home the points made in a recent Forbes article involving this and another predictive coding case (Kleen Products).  The first point is that there are a range of technology-assisted review (TAR) tools in the litigator’s tool belt that will often be used together in eDiscovery, and predictive coding technology is one of those tools.  Secondly, none of these tools will provide accurate results unless they are relatively easy to use and used properly.  In other words, the carpenter is just as important as the hammer.  Applying these guideposts and demanding cooperation and transparency between the parties will help the bench usher in a new era of eDiscovery technology that is fair and just for everyone.

Plaintiffs Object to Predictive Coding Order, Argue Lack of Transparency in eDiscovery Process

Friday, February 24th, 2012

The other shoe dropped in the Da Silva Moore v. Publicis Groupe case this week as the plaintiffs filed their objections to a preliminary eDiscovery order addressing predictive coding technology. In challenging the order issued by the Honorable Andrew J. Peck, the plaintiffs argue that the protocol will not provide an appropriate level of transparency into the predictive coding process. In particular, the plaintiffs assert that the ordered process does not establish “the necessary standards” and “quality assurance” levels required to satisfy Federal Rule of Civil Procedure 26(b)(1) and Federal Rule of Evidence 702.

The Rule 26(b) Relevance Standard

With respect to the relevance standard under Rule 26, plaintiffs maintain that there are no objective criteria to establish that defendant’s predictive coding technology will reliably “capture a sufficient number of relevant documents from the total universe of documents in existence.” Unless the technology’s “search methodologies” are “carefully crafted and tested for quality assurance,” there is a risk that the defined protocol could “exclude a large number of responsive email” from the defendant’s production. This, plaintiffs assert, is not acceptable in an employment discrimination matter where liberal discovery is typically the order of the day.

Reliability under Rule 702

The plaintiffs also contend that the court abdicated its gatekeeper role under Rule 702 and the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals by not soliciting expert testimony to assess the reliability of the defendant’s predictive coding technology. Such testimony is particularly necessary in this instance, plaintiffs argue, where the technology at issue is new and untested by the judiciary. To support their position, the plaintiffs filed a declaration from their expert witness challenging the technology’s reliability. Relying on that declaration, the plaintiffs complain that the process lacks “explicit and defined standards.” According to the plaintiffs, such standards would typically include “calculations . . . to determine whether the system is accurate in identifying responsive documents.” They would also include “the standard of acceptance that they are trying to achieve,” i.e., whether the defendant’s “method actually works.” Plaintiffs conclude that without such “quality assurance measurements in place to determine whether the methodology is reliable,” the current predictive coding process is “fundamentally flawed” and should be rejected.
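
One example of the kind of “quality assurance measurement” plaintiffs appear to have in mind is an elusion test: reviewing a random sample of the documents the system coded as non-responsive and measuring how many responsive documents slipped through. The Python sketch below is purely illustrative, with made-up documents and a stand-in for human review; it is not drawn from the parties’ filings.

    import random

    def elusion_rate(discard_pile, sample_size, review_fn, seed=42):
        """Review a random sample of documents the system coded as NON-responsive
        and return the fraction a human reviewer judges responsive (the elusion rate).
        review_fn stands in for the human reviewer's call on each document."""
        random.seed(seed)
        sample = random.sample(discard_pile, min(sample_size, len(discard_pile)))
        missed = sum(1 for doc in sample if review_fn(doc))
        return missed / len(sample)

    # Hypothetical discard pile in which roughly 0.75% of documents are actually responsive.
    discards = [{"id": i, "responsive": i % 133 == 0} for i in range(50_000)]
    print(f"estimated elusion rate: {elusion_rate(discards, 400, lambda d: d['responsive']):.1%}")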

Wait and See

Now that the plaintiffs have filed their objections, the eDiscovery world must wait and see what happens next. The defendant will certainly respond in kind, vigorously defending the ordered process with declarations from its own experts. Whether the plaintiffs or the defendant will carry the day depends on how the district court views these issues, particularly the issue of transparency. Simply put, the question is whether the process at issue is sufficiently transparent to satisfy Rule 26 and Rule 702. That is the proverbial $64,000 question as we wait and see how this issue plays out in the courts over the coming weeks and months.

Judge Peck Issues Order Addressing “Joint Predictive Coding Protocol” in Da Silva Moore eDiscovery Case

Thursday, February 23rd, 2012

Litigation attorneys were abuzz last week when a few breaking news stories erroneously reported that The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, ordered the parties in a gender discrimination case to use predictive coding technology during discovery.  Despite early reports, the parties in the case (Da Silva Moore v. Publicis Groupe, et al.) actually agreed to use predictive coding technology during discovery – apparently of their own accord.  The case is still significant because predictive coding technology is relatively new to the legal field, and many have been reluctant to embrace a new technological approach to document review due to, among other things, a lack of judicial guidance.

Unfortunately, despite this atmosphere of cooperation, the discussion stalled when the parties realized they were miles apart in terms of defining a mutually agreeable predictive coding protocol.  A February status conference transcript reveals significant confusion and complexity related to issues such as random sampling, quality control testing, and the overall process integrity.  In response, Judge Peck ordered the parties to submit a Joint Protocol for eDiscovery to address eDiscovery generally and the use of predictive coding technology specifically.

The parties submitted their proposed protocol on February 22, 2012 and Judge Peck quickly reduced that submission to a stipulation and order.  The stipulation and order certainly provides more clarity and insight into the process than the status conference transcript.  However, reading the stipulation and order leaves little doubt that the devil is in the details – and there are a lot of details.  Equally clear is the fact that the parties are still in disagreement and the plaintiffs do not support the “joint” protocol laid out in the stipulation and order.  Plaintiffs actually go so far as to incorporate a paragraph into the stipulation and order stating that they “object to this ESI Protocol in its entirety” and they “reserve the right to object to its use in the case.”

These problems underscore some of the points made in a Forbes article published earlier this week titled, “Federal Judges Consider Important Issues That Could Shape the Future of Predictive Coding Technology.”  The Forbes article relies in part on a recent predictive coding survey to make the point that, while predictive coding technology has tremendous potential, the solutions need to become more transparent and the workflows must be simplified before they go mainstream.

Breaking News: Federal Circuit Denies Google’s eDiscovery Mandamus Petition

Wednesday, February 8th, 2012

The U.S. Court of Appeals for the Federal Circuit dealt Google a devastating blow Monday in connection with Oracle America’s patent and copyright infringement suit against Google involving features of Java and Android. The Federal Circuit affirmed the district court’s order that a key email was not entitled to protection under the attorney-client privilege.

Google had argued that the email was privileged under Upjohn Co. v. United States, asserting that the message reflected discussions about litigation strategy between a company engineer and in-house counsel. While acknowledging that Upjohn would protect such discussions, the court rejected that characterization of the email.  Instead, the court held that the email reflected a tactical discussion about “negotiation strategy” with Google management, not an “infringement or invalidity analysis” with Google counsel.

Getting beyond the core privilege issues, Google might have avoided this dispute had it withheld the eight earlier drafts of the email that it produced to Oracle. As we discussed in our previous post, organizations conducting privilege reviews should consider using robust, next generation eDiscovery technology such as email analytical software that could have isolated the drafts and potentially removed them from production. Other technological capabilities, such as near-duplicate identification, could also have helped identify draft materials and marry them up with finals marked as privileged. As this case shows, in the fast-moving era of eDiscovery, having the right technology is essential for maintaining a strategic advantage in litigation.
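
To illustrate the near-duplicate identification mentioned above, here is a minimal Python sketch using word shingles and Jaccard similarity, with invented example text. Production tools use more sophisticated fingerprinting, but the underlying idea of scoring textual overlap is the same.

    def shingles(text, k=5):
        """Break text into overlapping k-word shingles for comparison."""
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

    def jaccard_similarity(a, b, k=5):
        """Score textual overlap between two documents (1.0 means identical shingle sets).
        Pairs scoring well above the near-zero level of unrelated documents are flagged
        as likely near-duplicates, e.g. successive drafts of the same email."""
        sa, sb = shingles(a, k), shingles(b, k)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    # Invented draft/final text, not the actual email at issue in the case.
    draft = "The team should evaluate licensing alternatives for the platform before any negotiation with the vendor begins."
    final = "The team should carefully evaluate licensing alternatives for the platform before any negotiation with the vendor begins."
    print(round(jaccard_similarity(draft, final), 2))  # far higher than the score for unrelated documents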

Breaking News: Pippins Court Affirms Need for Cooperation and Proportionality in eDiscovery

Tuesday, February 7th, 2012

The long awaited order regarding the preservation of thousands of computer hard drives in Pippins v. KPMG was finally issued last week. In a sharply worded decision, the Pippins court overruled KPMG’s objections to the magistrate’s preservation order and denied its motion for protective order. The firm must now preserve the hard drives of certain former and departing employees unless it can reach an agreement with the plaintiffs on a methodology for sampling data from a select number of those hard drives.

Though it is easy to get caught up in the opinion’s rhetoric (“[i]t smacks of chutzpah (no definition required) to argue that the Magistrate failed to balance the costs and benefits of preservation . . .”), the Pippins case confirms the importance of both cooperation and proportionality in eDiscovery. With respect to cooperation, the court emphasized that parties should take reasonable positions in discovery so as to reach mutually agreeable results. The order also stressed the importance of communicating with the court to clarify discovery obligations.  In that regard, the court faulted the parties and the magistrate for not seeking the court’s clarification with respect to its prior order staying discovery. The court explained that the discovery stay – which KPMG had understood to prevent any sampling of the hard drives – could have been partially lifted to allow for sampling. And this, in turn, could have obviated the costs and delays associated with the motion practice on this matter.

Regarding proportionality, the court confirmed the importance of this doctrine in determining the scope of preservation. Indeed, the court declared that proportionality is typically “determinative” of a motion for protective order. Nevertheless, the court could not engage in a proportionality analysis – weighing the benefits of preserving the hard drives against the burdens of doing so – because the defendant had not yet produced anything from the hard drives that would allow the court to evaluate the nature of the evidence. Only after a sampling of hard drives had been produced and evaluated could such a determination be made.

The Pippins case demonstrates that courts have raised their expectations for how litigants will engage in eDiscovery. Staking out unreasonable positions in the name of zealous advocacy stands in stark contrast to the clear trend that discovery should comply with the cost cutting mandate of Federal Rule 1. Cooperation and proportionality are two of the principal touchstones for effectuating that mandate.

The Top Ten “What NOT to Do” List for LegalTech New York 2012

Thursday, January 26th, 2012

As we approach LegalTech New York next week, oft referred to as the Super Bowl of legal technology events, there are any number of helpful blogs and articles telling new attendees what to expect, where to go, what to say, what to do. Undoubtedly, there’s some utility to this approach, but since we’ll be in New York, I think it’s appropriate to take a more skeptical approach and proffer a list of what *NOT* to do at LTNY.

  1. DON’T get caught up in Buzzword Bingo. There are already dozens of sources attempting to prognosticate what the most popular buzzwords will be at this year’s show.  Leading candidates include “predictive coding,” “technology assisted review,” “information governance,” “big data” and even the pedestrian sounding “sampling.” And, while these terms will undoubtedly be on booths and broadcast repeatedly from the Hilton elevator, it doesn’t mean an attendee should merely parrot them without a deeper dive.  Here, the key is to go behind the green curtain to see what vendors, panelists and tweeters actually mean by these buzzwords, since it’s often surprising to see how the devil really is in the details.
  2. DON’T get a coffee at the Hilton Starbucks. Yes, we all love our morning coffee, but there’s no need to wait in the Justin Bieber-esque queue at the in-hotel Starbucks. There are approximately 49 locations in a ½ mile radius, including one right across the street. There’s also the vendor giving out free coffee on the second floor, so save yourself 30 minutes of needless line waiting.
  3. DON’T ride the Hilton elevator. For those staying or taking meetings at the Hilton, the elevator lines can be excessively long.  Once you finally get on, you’ll wish they’d been even longer as you then find yourself subjected to the brainwashing of vendor announcements while you make multiple stops on your way to your desired floor. Either take the stairs or, if that’s not possible, try to minimize the trips to keep your sanity. Or, plan B – bring your iPod.
  4. DON’T talk to booth models. It’s tempting to gravitate to the most attractive person at a given vendor’s booth, but they’re often hired professionals whose job is to get you in for the all-important “badge scan.” Instead, focus on the person who looks like they’ve been in the same company-branded oxford for 48 hours, because they probably have. While perhaps less aesthetically pleasing, they’ll certainly know more about the product and that’s why you’re there after all, isn’t it?
  5. DON’T pass out your resume on the show floor. While certainly a great networking opportunity, LTNY isn’t the place to blatantly tout your professional wares, at least if you want to keep your nascent job search on the down low. And, if you want to have more private meetings, you’ll need to do better than “hiding out” at the Warwick across the street. For more clandestine purposes, think about the Bronx.
  6. DON’T take tchotchkes without hearing the spiel. There are certain tchotchke hounds out there who roam around LTNY collecting “gifts” for the kids back at home. While I won’t frown on this behavior per se, it’s only courteous to actually listen to the pitch (as a quid pro quo) before you ask for the swag. Anything less is uncivilized.
  7. DON’T get over-served at the B-Discovery Party. After a long day on the show floor you’re probably ready to let loose with some of the eDiscovery practitioners you haven’t seen in a year.  But, in this era of flip cams and instant tweeting, letting your hair down too much can be career limiting. If you haven’t done Jägermeister shots since college, LTNY probably isn’t a good time to resume that dubious practice.
  8. DON’T forget to take your badge off (please!). Yes, it’s cool to let everyone know you’re attending the premier legal technology event of the year, but once you leave the show floor random New Yorkers will heckle you for sporting your badge after hours – particularly the baristas at Starbucks. Plus, if you’ve broken any of the other admonitions above, at least you’ll be more anonymous.
  9. DON’T forget to bring a heavy coat, mittens and scarf. Last year there was the infamous ice storm that stranded folks for days (me included). Even if the weather isn’t that severe this year, anyone from warmer climates will need to bundle up, particularly because it’s easy to unintentionally get caught outside for extended amounts of time – waiting for a cab in the Hilton queue, eating at Symantec’s free food cart, walking to a meeting at a “nearby” hotel that’s “just a block or so away.” Keep in mind those cross town blocks are longer than they appear on a map.
  10. DON’T forget to learn something. Without hyperbole, LTNY has the world’s greatest collection of legal/technology minds in one place for 3 days.  Most folks, even the vaunted panelists, judges and industry luminaries are actually quite accessible. So, at a minimum, attend sessions, ask questions and interact with your peers. Try to ignore the bright lights and signs on the floor and make sure to take some useful information back to your firm, company or governmental agency. You’ll undoubtedly have fun (and maybe a Jägermeister shot, too) along the way.