
Posts Tagged ‘ediscovery’

In Re: Biomet Order Addresses Hot Button Predictive Coding Issue

Friday, December 20th, 2013

United States District Court Judge Robert L. Miller, Jr. of the Northern District of Indiana recently addressed what has arguably become the hottest predictive coding issue since Judge Andrew J. Peck’s February 2012 order in Da Silva Moore v. Publicis Groupe: whether parties who use predictive coding technology to assist with document productions should disclose to the other side the non-responsive documents used to train their system.

Judge Peck Opens the Predictive Coding Door

In Da Silva Moore, Judge Peck became the first judge to state that the use of predictive coding technology is “acceptable in appropriate cases.” Since the decision, some litigation attorneys have criticized the predictive coding protocol the parties established. Central to that criticism is a provision providing for the voluntary disclosure of the non-privileged documents used to train the predictive coding system.

Fearing a judicial trend, many attorneys have argued that the Federal Rules of Civil Procedure (Rules) simply do not require the disclosure of non-responsive documents under any circumstances. Others argue that a little cooperation and transparency between adversaries isn’t a bad thing when one party saves money and time and the other receives a more thorough production. Not surprisingly, both sides have eagerly awaited judicial guidance.

Judge Miller Tackles the Hot Button Issue

In In Re: Biomet, Judge Miller provided that long-awaited guidance by holding that Rule 26 does not require a party to disclose seed set documents used to train a predictive coding system. The order came on the heels of an earlier April 2013 order denying plaintiffs’ motion to compel Biomet to re-do earlier document productions (unless plaintiffs paid). The plaintiffs argued that Biomet’s decision to use keyword search terms and de-duplication techniques to cull 19.5 million documents down to 2.5 million before applying predictive coding technology “tainted” the production process. More specifically, plaintiffs contended that using keywords to filter out documents likely excluded responsive documents that should have been produced. Judge Miller found plaintiffs’ arguments unconvincing, largely because Biomet had already spent approximately $1.07 million on eDiscovery.

Four months later, plaintiffs filed another motion requesting more transparency into Biomet’s predictive coding process. Plaintiffs moved to compel Biomet to disclose and identify the initial seed set documents used to train the predictive coding system to distinguish between responsive and non-responsive documents. Plaintiffs reasoned that knowing which documents Biomet coded as responsive and non-responsive was necessary to measure the accuracy of Biomet’s production. In the order denying plaintiffs’ request, Judge Miller stated:

“As I understand it, a predictive coding algorithm offers up a document, and the user tells the algorithm to find more like that document or that the user doesn’t want more documents like what was offered up. The Steering Committee wants the whole seed set Biomet used for the algorithm’s initial training. That request reaches well beyond the scope of any permissible discovery by seeking irrelevant or privileged documents used to tell the algorithm what not to find. That the Steering Committee has no right to discover irrelevant or privileged documents seems self-evident.”
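Mechanically, the training loop the judge describes, in which a reviewer labels seed documents and the system looks for more like the responsive ones, can be sketched in miniature. This is a toy illustration only, not any vendor’s actual algorithm, and every document, name, and function below is invented:

```python
from collections import Counter

def tokenize(doc):
    return doc.lower().split()

def train(seed_set):
    """seed_set: (text, is_responsive) pairs labeled by a human reviewer."""
    responsive, non_responsive = Counter(), Counter()
    for text, is_responsive in seed_set:
        (responsive if is_responsive else non_responsive).update(tokenize(text))
    return responsive, non_responsive

def score(model, doc):
    """Positive score = looks more like the responsive seed documents."""
    responsive, non_responsive = model
    return sum(responsive[t] - non_responsive[t] for t in tokenize(doc))

# A reviewer codes two seed documents; the system then ranks the rest.
seeds = [("hip implant failure report", True),
         ("cafeteria lunch menu", False)]
model = train(seeds)
corpus = ["implant recall memo", "holiday lunch party"]
ranked = sorted(corpus, key=lambda d: score(model, d), reverse=True)
```

The Biomet dispute is over the labeled seed list itself: the non-responsive examples (here, the lunch menu) shape what the system learns to exclude, and that is exactly what the plaintiffs wanted to inspect.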

Judge Miller continued by acknowledging plaintiffs’ argument that Biomet was not proceeding in the cooperative spirit endorsed by the Sedona Conference Cooperation Proclamation and the 7th Circuit Pilot Program. However, he stated that:

“[N]either the Sedona Conference nor the Seventh Circuit project expands a federal district court’s powers, so they can’t provide me with authority to compel discovery of information not made discoverable by the Federal Rules.”

In particular, Judge Miller pointed to the language contained in FRCP 26(b)(1) as a basis for his decision. He concluded that because the plaintiffs knew of the “existence and location” of each discoverable document Biomet used in the seed set, Biomet had complied with its production obligation. Surprisingly, Judge Miller’s analysis did not specifically address what some may argue is the key language in FRCP 26(b)(1), which states:

“For good cause, the court may order discovery of any matter relevant to the subject matter involved in the action. Relevant information need not be admissible at the trial if the discovery appears reasonably calculated to lead to the discovery of admissible evidence.”

Judge Miller went on to criticize Biomet’s “unexplained lack of cooperation” and urged Biomet to rethink its refusal to at least reveal the responsive documents used in the seed set. His comments indicated that plaintiffs’ position would have been stronger had they requested only the identification of the non-privileged and non-responsive seed set documents. However, he ultimately refused to compel the identification of any of the seed set documents because he lacked “any discretion in this dispute.”

Is the Issue Resolved?

Even though Judge Miller explained that he lacked “any discretion in this dispute,” some future litigants are likely to argue that Rule 26 provides judges with the discretion to order the disclosure of documents that are both non-responsive and non-privileged where appropriate. For example, proponents of disclosure are likely to argue that the coding decisions applied to training documents can have a significant impact on the discovery of admissible evidence: if training documents are coded accurately, the likelihood of discovering admissible evidence, assuming it exists, increases. On the other hand, adversaries are likely to respond sharply that sharing non-responsive documents has not been required in the past and should not be required in the future. In fact, following Da Silva Moore, some have argued that even keywords are work-product protected and should not be disclosed.

Conclusion

In Re: Biomet appears to be the first case addressing whether parties are obligated to share non-responsive documents used to train a predictive coding system — but it likely won’t be the last. First, the decision is not binding. Second, Judge Miller did not thoroughly address key language contained within 26(b)(1), which invites further analysis. Lastly, the legal industry is struggling to define predictive coding best practices and to understand the range of different predictive coding technology solutions. Given the current confusion, demands for more predictive coding transparency are likely to continue as the market evolves. Don’t expect this hot button issue to cool off any time soon.

*Blog post co-authored by Matt Nelson and Adam Kuhn

Music piracy the least of your audio worries; Dodd–Frank forces a closer listen

Wednesday, December 11th, 2013

We’re quickly approaching another milestone in the epic implementation of the Commodity Futures Trading Commission (CFTC) rules associated with the Dodd-Frank Wall Street Reform and Consumer Protection Act (DFA): the expiration of a very contentious exemptive order that provided relief to cross-border swap dealers (SDs), major swap participants (MSPs), and foreign groups of US SDs and MSPs. If you follow the heated debate between Wall Street and the CFTC, it is quite fitting that the order happens to expire on the winter solstice, December 21, 2013. Let’s hope the day on which the sun comes to a standstill in the sky before reversing direction doesn’t forebode a similar experience in the cross-border free markets.

The 848 pages of Dodd-Frank legislation have resulted in (at current count) 67 new rules, exemptive orders, and guidance documents, plus five ‘other’ actions from the CFTC, the regulatory body tasked with enforcing Title VII of the DFA. Prior to the DFA, the CFTC averaged about four rules per year. eDiscovery nerds will appreciate that the complexity and length of the rules issued by the CFTC require a website offering proximity and Boolean search options to navigate. Within these 67 rules are critical adjustments to the way that organizations subject to the CFTC’s scope need to capture, store, manage, search, and produce information related to the many flavors of swaps, which are basically derivatives by which counterparties exchange the cash flows of one financial instrument for another. That information includes all data concerning the swap and the communications leading up to its execution, including any voicemail or phone conversations with relevant information.

While audio discovery is nothing new, especially with regard to criminal investigations, these new regulations, rules, and guidance have anointed audio data into the critical content sources category for many enterprises. Let’s discuss what that means for the eDiscovery technology world.

1. Audio search is now must-have eDiscovery functionality

If your organization is categorized as a swap data repository, derivatives clearing organization, designated contract market, swap execution facility, swap dealer, major swap participant, or non-MSP counterparty (where most organizations outside financial services will be categorized), you are now subject to new rules for swap record keeping.

First, covered organizations must retain the following:

“…all oral and written communications provided or received concerning quotes, solicitations, bids, offers, instructions, trading, and prices, that lead to the conclusion of a related cash or forward transaction, whether communicated by telephone, voicemail, facsimile, instant messaging, chat rooms, electronic mail, mobile device, or other digital or electronic media.” 77 Fed. Reg. 17 CFR Part 45 (December 8, 2010)

Second, this data has specific retention and retrieval requirements. At Symantec, we’re keeping track by categorizing them into the 5 & 5, 5 & 3, and 1 & 5 rules:

  • All of the data above, except audio files, must be retained for a period of 5 years after termination of the underlying swap.
  • For SDs and MSPs, that data must be retrievable and producible within 3 days.
  • For non-MSP counterparties, it must be retrievable and producible within 5 days.
  • Audio files must be kept for 1 year after termination of the swap and must also be retrievable and producible within 5 days.
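Encoded as data, the shorthand above might look like the following sketch (the category keys are our own labels, not CFTC terminology):

```python
# (record type, entity category) -> (retention in years after swap
# termination, days allowed to retrieve and produce)
RETENTION_RULES = {
    ("non_audio", "sd_or_msp"): (5, 3),   # the 5 & 3 rule
    ("non_audio", "non_msp"):   (5, 5),   # the 5 & 5 rule
    ("audio",     "any"):       (1, 5),   # the 1 & 5 rule
}

def rule_for(record_type, entity):
    """Look up (retention years, production deadline in days)."""
    if record_type == "audio":            # the audio rule applies to everyone
        return RETENTION_RULES[("audio", "any")]
    return RETENTION_RULES[(record_type, entity)]
```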

2. A turnkey ‘Dodd-Frank’ solution is unlikely, so a repeatable eDiscovery process is critical

As the CFTC rules were being finalized over the past two years, Symantec invited our customers to discuss the impact of the DFA on their eDiscovery workflows. A primary concern was the belief that the rules required organizations to have a system in place to store and eventually reproduce a trade and associated communications in their entirety. The many lobbyists and organizations that submitted grievances and clarification requests to the CFTC shared this concern. In response, the CFTC adjusted its rules to state that an organization’s swap data need not be categorized and retained in what amounts to a single-swap file, provided that all related information could be retrieved and produced from wherever it resides within the required timeframe.

Although the CFTC isn’t forcing organizations into the implementation of a magical swap data captor, data growth, diversification, and dispersion across the organization could still present major challenges to collecting, searching, and producing requested swap information on an ad hoc basis. For example, sales and marketing data, research on commodity markets, email and instant message communications, and voice data would very often be found in multiple systems.

To comply, organizations should evaluate whether they can collect audio files and other information in a timely manner from multiple data repositories. If data is not retained on a per-swap basis, organizations will need to be able to consolidate all relevant communications and data into a single system so that the review is complete and auditable for requesting regulatory bodies. Pulling from these various sources, however, is likely to sweep in a large amount of non-swap data, so the ability to confidently exclude that material will help organizations curtail the time and costs associated with identifying the proper swap data. Finally, this process should be repeatable for each search, retrieval, and production to the CFTC or Swap Data Repositories.

Side note: I’m writing with an eDiscovery-only lens, but the retention and management angle of this particular challenge lends itself to a proactive information governance discussion, one that our friends at eDiscovery Journal have touched upon already.

3. eDiscovery search capabilities must satisfy the unique nature of swap data

The DFA record-keeping requirements as they pertain to swaps are unique in that they require the combination of both static, database-like structured data (trade value, time, etc.) and unstructured communications (email, Bloomberg messages, voicemail, etc.). These communications will often bridge multiple systems; for instance, multiple emails and Bloomberg IMs may precede the phone call confirming the trade. Teams reviewing data prior to production to the CFTC or Swap Data Repositories will be challenged to make sense of the entire communication thread, especially under a five-day deadline. This review process is not one to be taken lightly, either. Teams need to be extra careful with the search and review of all audio content, as they risk mistakenly producing spoken information unrelated to the trade, which is not as easily identified as written information.

Organizations should consider how quickly they can get the necessary information into a searchable form. Five days to retrieve and produce is slim at best, so even audio processing advantages, like phonetic-based audio indexing as opposed to speech-to-text transcription, could be critical. They should also consider how they can organize swap communications into a coherent form; functionality like discussion threading and topic clustering can help teams quickly understand and identify communications related to a specific swap.
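To see why phonetic indexing matters, note that a spoken name has many plausible spellings, so matching on transcribed text alone can miss hits. Commercial phonetic engines are far more sophisticated, but the classic Soundex algorithm illustrates the principle of indexing by sound rather than spelling:

```python
def soundex(word):
    """Map a word to a letter plus three digits so that similar-sounding
    words collide, e.g. 'Smith' and 'Smyth' both become 'S530'."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    digits = []
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "hw":       # 'h' and 'w' do not separate repeated codes
            prev = code
    return (first + "".join(digits) + "000")[:4]
```

An index built on codes like these lets a search for “Smith” surface a recording transcribed as “Smyth,” without first running a full speech-to-text pass over every hour of audio.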

The Symantec eDiscovery team considered the Dodd-Frank Act and CFTC rules as we developed the latest release of the Clearwell eDiscovery Platform, which now enables advanced audio processing, search, and review capabilities to drastically accelerate audio discovery efforts. In addition to supporting over 400 file types for electronic discovery, these new capabilities leverage a powerful phonetic engine that can index up to 20,000 hours of recorded audio per day. Whether you are investigating voicemails, call-center recordings, or financial transactions, Symantec makes it easy to find what you are looking for.


Kleen Products Update: Is Technology Usage Becoming the New “Proportionality” Factor for Judges?

Wednesday, October 30th, 2013

Readers may recall last year’s expensive battle over the use of predictive coding technology in the 7th Circuit’s Kleen Products case. Although the battle was temporarily resolved in Defendants’ favor (they were not required to redo their production using predictive coding or other “Content Based Advanced Analytics” software), a new eDiscovery battle has surfaced this year between Plaintiffs and a non-party, The Levin Group (“TLG”).

In Kleen, Plaintiffs allege anticompetitive and collusive conduct by a number of companies in the containerboard industry. The Plaintiffs served TLG with a subpoena requesting “every document relating to the containerboard industry.” TLG, a non-party retained as a financial and strategic consultant by two of the Defendants, complied by reviewing 21,000 documents comprising 82,000 pages of material.

Extraordinary Billing Rates for Manual Review?

The wheels began to fall off the bus when Plaintiffs received a $55,000 bill from TLG for the review and production of documents in response to the subpoena. TLG billed $500/hour for 110 hours of document review performed by TLG’s founder (a lawyer) and a non-lawyer employee. Although FRCP 45(c)(3)(C) authorizes “reasonable compensation” of a subpoenaed nonparty and the Court previously ordered the Plaintiffs to “bear the costs of their discovery request,” TLG and the Plaintiffs disagreed over the definition of “reasonable compensation” once the production was complete. Plaintiffs argue that the bill is excessive in light of the $35-$45/hour market rates charged by contract attorneys for review, and they also claim that they never agreed to a billing rate.

Following a great deal of back and forth about the costs, the court decided to defer its decision until December 16, 2013 because discovery in the underlying antitrust action is still ongoing. Regardless of the outcome in Kleen, the current dispute feels a bit like déjà vu all over again. Both disputes highlight the importance of cooperation and the role of technology in reducing eDiscovery costs. For example, better cooperation among the parties during earlier stages of discovery might have helped prevent, or at least minimize, some of the downstream post-production arguments that occurred last year and this year. Although the “cooperation” drum has been beaten loudly for several years by judges and think tanks like the Sedona Conference, cooperation is an issue that will never fully disappear in an adversarial system.

Judges May Increasingly Consider Technology as Part of Proportionality Analysis

A more novel and interesting eDiscovery issue in Kleen relates to the fact that judges are increasingly being asked to consider the use (or non-use) of technology when resolving discovery disputes. Last year in Kleen the issue was whether a producing party should be required to use advanced technology to assure a more thorough production. This year the Kleen court may be asked to consider the role of technology in the context of the disputed document review fees. For example, the court may consider whether TLG could have leveraged de-duplication, domain filtering, document threading, or other tools in the Litigator’s Toolbelt™ to reduce the number of documents requiring costly manual review.
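Of the tools mentioned, de-duplication is the simplest to picture: documents with identical content hash to the same digest, so only one copy needs human review. A minimal sketch:

```python
import hashlib

def deduplicate(docs):
    """Drop exact duplicates, keeping the first copy of each document."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

At hourly review rates like those disputed here, even exact-duplicate culling directly shrinks the bill; near-duplicate detection and email threading go further but are more involved.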

Recent trends indicate that the federal bench is under increasing pressure to consider whether, and how, parties utilize technology when resolving eDiscovery disputes. For example, a 2011 Forbes article titled “Will New Electronic Discovery Rules Save Organizations Millions or Deny Justice?” framed early discussions about amending the Federal Rules of Civil Procedure (Rules) as follows:

“A key question that many feel has been overlooked is whether or not organizations claiming significant eDiscovery costs could have reduced those costs had they invested in better technology solutions. Most agree that technology alone cannot solve the problem or completely eliminate costs. However, many also believe that understanding the extent to which the inefficient or non-use of modern eDiscovery technology solutions impacts overall costs is critical to evaluating whether better practices might be needed instead of new Rules.”

Significant interest in the topic was further sparked in Da Silva Moore v. Publicis Groupe in 2012, when Judge Andrew Peck put parties on notice that technology is increasingly important in evaluating eDiscovery disputes. In Da Silva Moore, Judge Peck famously declared that “computer-assisted review is acceptable in appropriate cases.” Judge Peck’s decision was the first to squarely address the use of predictive coding technology, and a number of cases, articles, and blogs on the topic quickly ensued in what seemed to be the opening of Pandora’s Box with respect to the technology discussion.

More recently, The Duke Law Center for Judicial Studies proposed that the Advisory Committee on Civil Rules add language to the newly proposed amendments to the Federal Rules of Civil Procedure addressing the use of technology-assisted review (TAR). The group advocates adding the following sentence at the end of the first paragraph of the Committee Note to proposed Rule 26(b)(1) dealing with “proportionality” in eDiscovery:

“As part of the proportionality considerations, parties are encouraged, in appropriate cases, to consider the use of advanced analytical software applications and other technologies that can screen for relevant and privileged documents in ways that are at least as accurate as manual review, at far less cost.”

Conclusion

The significant role technology plays in managing eDiscovery risks and costs continues to draw more and more attention from lawyers and judges alike. Although early disputes in Kleen highlight the fact that litigators do not always agree on what technology should be used in eDiscovery, most in the legal community recognize that many technology tools in the Litigator’s Toolbelt™ are available to help reduce the costs of eDiscovery. Regardless of how the court in Kleen resolves the current issue, the use or non-use of technology tools is likely to become a central issue in the Rules debate and a prominent factor in most judges’ proportionality analysis in the future.

*Blog post co-authored by Matt Nelson and Adam Kuhn

Moving Data to the Cloud? Top 5 Tips for Corporate Legal Departments

Monday, September 30th, 2013

One of the hottest information technology (IT) trends is to move data once stored within the corporate firewall into a hosted cloud environment managed by third-party providers. In 2013 alone, the public cloud services market is forecast to grow an astonishing 18.5 percent to $131 billion worldwide, up from $111 billion in 2012. The trend is driven largely by the fact that labor, infrastructure, and software costs can be reduced by sending email and other data to third-party providers for off-site hosting. Although the benefits of cloud computing are real, many organizations make the decision to move to the cloud without thoroughly weighing all the risks and benefits first.

A common problem is that many corporate IT departments fail to consult with their legal department before making the decision to move company data into the cloud, even though the decision may have legal consequences. For example, retrieving information from the cloud in response to eDiscovery requests presents unique challenges that could cause delay and increase costs. Similarly, problems related to data access and even ownership could arise if the cloud provider merges with another company, goes bankrupt, or decides to change its terms of service.

The bad news is that the list of possible challenges is long when data is stored in the cloud. The good news is that many of the risks of moving important company data to the cloud are foreseeable and can be mitigated by negotiating terms with prospective cloud providers in advance. Although not comprehensive, the following highlights some of the key areas attorneys should consider before agreeing to store company data with third-party cloud providers.

1. Who Owns the Data?

The cloud market is hot right now with no immediate end in sight. That means competition is likely for years to come and counsel must be prepared for some market volatility. At a high level, delineating the “customer” as the “data owner” in agreements with cloud service providers is a must. More specifically, terms that address what happens in the event of a merger, bankruptcy, divestiture or any event that leads to the termination or alteration of the relationship should be clearly outlined. Identifying the customer as the data owner and preserving the right to retrieve data within a reasonable time and for a reasonable price will help prevent company data from being used as a bargaining chip if there is a disagreement or change in ownership.

2. eDiscovery Response Times and Capabilities

Many businesses know first-hand that the costs and burdens of complying with eDiscovery requests are significant. In fact, a recent RAND study estimates that every gigabyte of data reviewed costs approximately $18,000. Surprisingly, many businesses do not realize that storing data in the cloud with the wrong provider could increase the risks and costs of eDiscovery significantly. Risks include the possibility of sanctions for overlooking information that should have been produced or failing to produce information in a timely fashion. Costs could be exacerbated by storing data with a cloud provider that lacks the resources or technology to respond to eDiscovery requests efficiently.

That means counsel must understand whether or not the provider has the technology to preserve, collect, and produce copies of data stored in the cloud. If so, is the search technology accurate and thorough? What are the time frames for responding to requests and are there surcharges for expedited versus non-expedited requests for data? Also, is data collected in a forensically sound manner and is the chain of custody recorded to validate the integrity and reasonableness of the process in the event of legal challenges? More than one cloud customer has encountered excessive fees, unacceptable timelines, and mass confusion when relying on cloud providers to help respond to eDiscovery requests. Avoid surprises by negotiating acceptable terms before, not after, moving company data to the cloud.

3. Where is Data Physically Located?

Knowing where your data is physically located is important because there could be legal consequences. For example, several jurisdictions have their own unique privacy laws that may impact where employee data can be physically located, how it must be stored, and how it can be used. This can result in conflicts where data stored in the cloud is subject to discovery in one jurisdiction, but disclosure is prohibited by the laws of the jurisdiction where the data is stored. Failure to comply with these local foreign laws, sometimes known as blocking statutes, could result in penalties and challenges that might not be circumvented by choice of law provisions. That means knowing where your data will be stored and understanding the applicable laws governing data privacy in that jurisdiction is critical.

4. Retention, Backup, & Security

Any good data retention program requires systematically deleting information that the business has no legal or business need to retain. Keeping information longer than necessary increases long-term storage costs and increases the amount of information the organization must search in the event of future litigation. The cloud provider should have the ability to automate the archiving, retention, and disposition of information in accordance with the customer’s preferred policies, as well as the ability to suspend any automated deletion policies during litigation.
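As a sketch of that last requirement (field names are illustrative, not any product’s schema), an automated disposition check must consult active legal holds before deleting anything:

```python
import datetime

def eligible_for_deletion(record, holds, today):
    """record: dict with 'custodian' and 'expires' (a datetime.date).
    holds: set of custodians under an active litigation hold."""
    if record["custodian"] in holds:
        return False              # a legal hold suspends automated deletion
    return record["expires"] <= today

# An expired record: deletable normally, but not while its custodian is held.
expired = {"custodian": "trader_a", "expires": datetime.date(2013, 6, 1)}
today = datetime.date(2013, 12, 1)
```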

Similarly, where and how information is backed up and secured is critical. For many organizations, merely losing access to email for a few hours brings productivity to a screeching halt. Actually losing the company email due to technical problems and/or as the result of insufficient backup technology could cripple some companies indefinitely. Likewise, losing confidential customer data, research and development plans, or other sensitive information due to the lack of adequate data security and encryption technology could result in legal penalties and/or the loss of important intellectual property. Understanding how your data is backed up and secured is critical to choosing a cloud provider. Equally important is determining the consequences of and the process for handling data breaches, losses, and downtime if something goes wrong.

5. Responding to Subpoenas and Third-party Data Requests

Sometimes it’s not just criminals who try to take your data out of the cloud; litigants and investigators might also request information directly from cloud providers without your knowledge or consent. Obviously companies have a vested interest in vetting the reasonableness and legality of any data requests coming from third parties. That interest does not change when data is stored in the cloud. Knowing how your cloud provider will respond to third-party requests for data and obtaining written assurances that you will be notified of requests as appropriate is critical to protecting intellectual property and defending against bogus claims. Furthermore, making sure data in the cloud is encrypted may provide an added safeguard if data somehow slips through the cracks without your permission.

Conclusion

Today’s era of technological innovation moves at lightning speed, and cloud computing technology is no exception. This fast-paced environment sometimes results in organizations making the important decision to move data to the cloud without properly assessing all the potential risks. To minimize these risks, organizations should consider the five tips above and consult with legal counsel before moving data to the cloud.

Judge Scheindlin Blasts Proposed FRCP Amendments in Unconventional Style

Thursday, August 29th, 2013

A prominent federal judge wasted little time airing her dissatisfaction with the proposed amendments to the Federal Rules of Civil Procedure (Rules), doing so on the very day the period for public comment on the Rules opened. Instead of following the formal process of submitting written comments, the Honorable Shira Scheindlin, Federal District Court Judge for the Southern District of New York, provided her feedback in more dramatic fashion: she went out of her way to blast newly proposed Federal Rule 37(e) in a footnote to a recent court order in a case where she sanctioned a party for spoliation of evidence. The order, dated August 15, 2013, conspicuously coincides with the opening day for public comment on the newly proposed amendments and likely riled some attorneys who have lobbied hard for this particular Rule change for years.

The facts relayed in Sekisui American Corporation v. Hart are not uncommon. In fact, most have likely heard this story repeat itself for a decade despite the passage of amendments to the Rules in 2006 and myriad case law warning against such conduct. The short version of the story is that a group of employees leave their company, the company sues the former employees, discovery ensues, and emails are missing. Why? Because emails were deleted long after the duty to preserve electronically stored information (ESI) was triggered, and now those emails are lost. The question then turns to whether and how the judge should rectify the missing email problem. Those familiar with some of Judge Scheindlin’s prior decisions know the answer to that question: sanctions.

Judge Scheindlin is not a renegade who issues sanctions regardless of the facts of the case. However, some believe her attempts to provide clarity for litigants in her courtroom sometimes go too far, which stirs debate. Whether you agree with her decisions or not, Judge Scheindlin takes special care to meticulously articulate the facts of the case, identify the relevant legal authority, present the legal analysis, and then nail the offending party with sanctions when they screw up discovery.

That is exactly what happened in Sekisui, and the order is worth a read. People new to eDiscovery will learn the basics of when to apply a legal hold, while more seasoned eDiscovery veterans will be treated to a stroll down “case law memory lane” that includes stories of eDiscovery train wrecks past, like the Zubulake and Pension Committee decisions. If you don’t have the stomach to read the entire thirty-two-page opinion, then read the nice Law Technology News article aptly titled “Scheindlin Not Charmed When Visiting Spoliation a Third Time” for further background on the case.

What is striking is that Judge Scheindlin used the case and the issues at hand as an opportunity to articulate her displeasure with the proposed amendments to the Rules. In particular, she calls out proposed Rule 37(e) in footnote 51 of the opinion, where she explains:

“the proposed rule would permit sanctions only if the destruction of evidence (1) caused substantial prejudice and was willful or in bad faith or (2) irreparably deprived a party of any meaningful opportunity to present or defend its claims…. The Advisory Committee Note to the proposed rule would require the innocent party to prove that ‘it has been substantially prejudiced by the loss’ of relevant information, even where the spoliating party destroyed information willfully or in bad faith. 5/8/2013 Report of the Advisory Committee on Civil Rules at 47. I do not agree that the burden to prove prejudice from missing evidence lost as a result of willful or intentional misconduct should fall on the innocent party. Furthermore, imposing sanctions only where evidence is destroyed willfully or in bad faith creates perverse incentives and encourages sloppy behavior. Under the proposed rule, parties who destroy evidence cannot be sanctioned (although they can be subject to “remedial curative measures”) even if they were negligent, grossly negligent, or reckless in doing so.”

Judge Scheindlin’s “Footnote 51” is almost certain to become a focal point of debate as the dialogue about the Rules continues. Not only did Judge Scheindlin ignite much of the early eDiscovery debate with her Zubulake line of decisions, she also served on the Federal Rules of Civil Procedure Advisory Committee from 1998 to 2005. The fact that she is known as the Godmother of eDiscovery in some circles illustrates that her influence over the rulemaking process is undeniable. The period for public comment on the Rules closes on February 15, 2014, and the Godmother of eDiscovery has thrown down the gauntlet once again. Let the games begin.

The Top 3 Forensic Data Collection Myths in eDiscovery

Wednesday, August 7th, 2013

Confusion about establishing a legally defensible approach for collecting data from computer hard drives during eDiscovery has existed for years. The confusion stems largely from the fact that traditional methodologies die hard and legal requirements are often misunderstood. The most traditional approach to data collection entails making forensic copies or mirror images of every custodian hard drive that may be relevant to a particular matter. This practice is still commonly followed because many believe collecting every shred of potentially relevant data from a custodian’s computer is the most efficient approach to data collection and the best way to avoid spoliation sanctions.

In reality, courts typically do not require parties to collect every shred of electronically stored information (ESI) as part of a defensible eDiscovery process and organizations wedded to this process are likely wasting significant amounts of time and money. If collecting everything is not required, then why would organizations waste time and money following an outdated and unnecessary approach? The answer is simple – many organizations fall victim to 3 forensic data collection myths that perpetuate inefficient data collection practices. This article debunks these 3 myths and provides insight into more efficient data collection methodologies that can save organizations time and money without increasing risk.

Myth #1: “Forensic Copy” and “Forensically Sound” are Synonymous

For many, the confusion begins with a misunderstanding of the terms “forensic copy” and “forensically sound.” The Sedona Conference, a leading nonprofit research and educational institute dedicated to the advanced study of law, defines a forensic copy as follows:

An exact copy of an entire physical storage media (hard drive, CD-ROM, DVD-ROM, tape, etc.), including all active and residual data and unallocated or slack space on the media. Forensic copies are often called “images” or “imaged copies.” (See: The Sedona Conference Glossary: E-Discovery & Digital Information Management, 3rd Edition, Sept. 2010).

Forensically sound, on the other hand, refers to the integrity of the data collection process and relates to the defensibility of how ESI is collected. Among other things, electronic files should not be modified or deleted during collection and a proper chain of custody should be established in order for the data collection to be deemed forensically sound. If data is not collected in a forensically sound manner, then the integrity of the ESI that is collected may be suspect and could be excluded as evidence.

Somehow over time, many have interpreted the need for a forensically sound collection to require making forensic copies of hard drives. In other words, they believe an entire computer hard drive must be collected for a collection to be legally defensible (forensically sound). In reality, neither entire hard drives (forensic copies) nor even all active user files need be copied as part of a defensible data collection process. What is required is that ESI be collected in a forensically sound manner, regardless of whether an entire drive is copied or only a few files.

Myth #2: Courts Require Forensic Copies for Most Cases

Making forensic copies of custodian hard drives is often important as part of criminal investigations, trade secret theft cases, and other matters where the recovery and analysis of deleted files, internet browsing history, and other non-user generated information is important to a case. However, most large civil matters only require the production of user-generated files like emails, Microsoft Word documents, and other active files (as opposed to deleted files).

Unnecessarily making forensic copies results in more downstream costs in the form of increased document processing, attorney review, and vendor hosting fees because more ESI is collected than necessary. The simple rule of thumb is that the more ESI collected at the beginning of a matter, the higher the downstream eDiscovery costs. That means casting a narrow collection net at the beginning of a case rather than “over-collecting” more ESI than legally required can save significant time and money.

Federal Rule of Civil Procedure 34 and case law help dispel the myth that forensic copies are required for most civil cases. The notes to Rule 34(a)(1) state:

Rule 34(a)…is not meant to create a routine right of direct access to a party’s electronic information system, although such access might be justified in some circumstances. Courts should guard against undue intrusiveness resulting from inspecting or testing such systems.

More than a decade ago, the Tenth Circuit validated the notion that opposing parties should not be routinely entitled to forensic copies of hard drives. In McCurdy Group v. Am. Biomedical Group, Inc., 9 Fed. Appx. 822 (10th Cir. 2001) the court held that skepticism concerning whether a party has produced all responsive, non-privileged documents from certain hard drives is an insufficient reason standing alone to warrant production of the hard drives: “a mere desire to check that the opposition has been forthright in its discovery responses is not a good enough reason.” Id. at 831.

On the other hand, Ameriwood Indus. v. Liberman, 2006 U.S. Dist. LEXIS 93380 (E.D. Mo. Dec. 27, 2006), is a good example of a limited situation where making a forensic copy of a hard drive might be appropriate. In Ameriwood, the court referenced Rule 34(a)(1) to support its decision to order a forensic copy of the defendant’s hard drive in a trade secret misappropriation case because the defendant “allegedly used the computer itself to commit the wrong….” In short, courts expect parties to take a reasonable approach to data collection. A reasonable approach to collection only requires making forensic copies of computer hard drives in limited situations.

Myth #3: Courts Have “Validated” Some Proprietary Collection Tools

Confusion about computer forensics, data collection, and legal defensibility has also been stoked by overzealous claims from technology vendors that courts have “validated” some data collection tools and not others. This has led many attorneys to believe they should play it safe by using only tools that have ostensibly been “validated” by courts. Unfortunately, this myth exacerbates the over-collection of ESI, a problem that frequently costs organizations time and money.

The notion that courts are in the business of validating particular vendors or proprietary technology solutions is a hot topic that has been summarily dismissed by one of the leading eDiscovery attorneys and computer forensic examiners on the planet. In his article titled, We’re Both Part of the Same Hypocrisy, Senator, Craig Ball explains that courts generally are not in the business of “validating” specific companies and products. To make his point, Mr. Ball poignantly states that:

just because a product is named in passing in a court opinion and the court doesn’t expressly label the product a steaming pile of crap does not render the product ‘court validated.’ 

In a nod to the fact that the defensibility of the data collection process is dependent on the methodology as much as the tools used, Mr. Ball goes on to explain that, “the integrity of the process hinges on the carpenter, not the hammer.”

Conclusion

In the past decade, ESI collection tools have evolved dramatically to enable the targeted collection of ESI from multiple data sources in an automated fashion through an organization’s computer network. Rather than manually connecting a collection device to every custodian hard drive or server to identify and collect ESI for every new matter, new tools enable data to be collected from multiple custodians and data sources within an organization using a single collection tool. This streamlined approach saves organizations time and money without sacrificing legal defensibility or forensic soundness.

Choosing the correct collection approach is important for any organization facing regulatory scrutiny or routine litigation because data collection represents an early and important step in the eDiscovery process. If data is overlooked, destroyed, altered, or collected too slowly, the organization could face embarrassment and sanctions. On the other hand, needlessly over-collecting data could result in unnecessary downstream processing and review expenses. Properly assessing the data collection requirements of each new matter and understanding modern collection technologies will help you avoid the top 3 forensic data collection myths and save your organization time and money.

The Proportionality Amendments to the Federal Rules Spotlight the Importance of Efficient, Cost-Effective eDiscovery

Tuesday, July 16th, 2013

One of the most compelling objectives for amending the Federal Rules of Civil Procedure is to make civil discovery more efficient and cost-effective. The proposed amendment to Federal Rule 1 – featured in our introductory post in this series, which provides a comprehensive overview of the proposed amendments – is only one of several measures in the amendment package designed to decrease the costs and delays associated with eDiscovery. Perhaps the most important of those measures are the ones that emphasize proportionality standards.

Proportionality standards, which require that the benefits of discovery be commensurate with its burdens, have been extant in the Federal Rules since 1983. Nevertheless, they have been invoked too infrequently over the past 30 years to address the problems of over-discovery and gamesmanship that permeate the discovery process. In an effort to spotlight this “highly valued” yet “missing in action” doctrine, the Civil Rules Advisory Committee has proposed numerous changes to the current Rules regime. Judicial Conference of the United States, Report of the Advisory Committee on Civil Rules 4 (May 8, 2013) (Report). The most significant of these changes are found in Rules 26(b)(1) and 34(b).

Rule 26(b)(1) – Tightening the Scope of Permissible Discovery

The Committee has proposed that the permissible scope of discovery under Rule 26(b)(1) be modified to spotlight the limitations that proportionality imposes on discovery. Those limitations are presently found in Rule 26(b)(2)(C) and are not readily apparent to many lawyers or judges. The proposed modification (in italics) would address this problem by making clear that discovery must satisfy proportionality standards:

Parties may obtain discovery regarding any non privileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case considering the amount in controversy, the importance of the issues at stake in the action, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.

Report, at 19-20. By moving the proportionality rule directly into the scope of discovery, counsel and the courts should gain a better understanding of the restraints that this concept places on discovery.

Rule 26(b)(1) has additionally been modified to reinforce the notion that discovery is confined to those matters that are relevant to the claims or defenses at issue in a particular case. Even though discovery has been limited in this regard for many years, the Committee felt that this limitation was being “swallowed” by the “reasonably calculated” provision in Rule 26(b)(1). That provision currently provides for the discovery of relevant but inadmissible evidence so long as it is “reasonably calculated to lead to the discovery of admissible evidence.” Despite the narrow purpose of this provision, the Committee found that many judges and lawyers unwittingly extrapolated the “reasonably calculated” wording to broaden discovery beyond the benchmark of relevance. To disabuse courts and counsel of this practice, the “reasonably calculated” phrase has been removed and replaced with the following sentence: “Information within this scope of discovery need not be admissible in evidence to be discoverable.” Report, at 11.

Similarly, the Committee has recommended eliminating the provision in Rule 26(b)(1) that presently allows the court – on a showing of good cause – to order “discovery of any matter relevant to the subject matter.” In its proposed “Committee Note,” the Committee justified this suggested change by reiterating its mantra about the proper scope of discovery: “Proportional discovery relevant to any party’s claim or defense suffices.” Report, at 10-11.

Rule 34(b) – Eliminating Gamesmanship with Document Productions

The three key modifications the Committee has proposed for Rule 34 are designed to eliminate some of the gamesmanship associated with written discovery responses. The first such change is a requirement in Rule 34(b)(2)(B) that any objection made in response to a document request must be stated with specificity. This recommended change is supposed to do away with the assertion of general objections. While such objections have almost universally been rejected in federal discovery practice, they still appear in Rule 34 responses. By including an explicit requirement for specific objections and coupling it with the threat of sanctions for non-compliance under Rule 26(g), the Committee may finally eradicate this practice from discovery.

The second change is calculated to address another longstanding discovery dodge: making a party’s response “subject to” a particular set of objections. Whether such objections are specific or general, the Committee concluded that such a conditional response leaves the party who requested the materials unsure as to whether anything was withheld and if so, on what grounds. To remedy this practice, the Committee added the following provision to Rule 34(b)(2)(C): “An objection must state whether any responsive materials are being withheld on the basis of that objection.” Report, at 15-16. If enforced, such a requirement could make Rule 34 responses more straightforward and less evasive.

The third change is intended to clarify the uncertainty surrounding the responding party’s timeframe for producing documents. As it now stands, Rule 34 does not expressly mandate when the responding party must complete its production of documents. That omission has led to open-ended productions, which can unreasonably lengthen the discovery process and increase litigation expenses. To correct this oversight, the Committee proposed that the responding party complete its production “no later than the time for inspection stated in the request or [at] a later reasonable time stated in the response.” Report, at 26. For so-called “rolling productions,” the responding party “should specify the beginning and end dates of the production.” Id. Such a provision should ultimately provide greater clarity and increased understanding surrounding productions of ESI.

Other Changes – Cost Shifting in Rule 26(c), Reductions in Discovery under Rules 30, 31, 33, 36

There were several additional changes the Committee recommended that are grounded in the concept of proportionality. While space does not allow for a detailed review of all of these changes, practitioners should take note of the new cost shifting provision in Rule 26(c). That change would expressly enable courts to allocate the expenses of discovery among the parties. See Report, at 12, 20-21, 23.

The Committee has also suggested reductions in the number of depositions, interrogatories, and requests for admission. Under the draft amendments, the number of depositions is reduced from 10 to 5. Oral deposition time has also been cut from seven hours to six. As for written discovery, the number of interrogatories would decrease from 25 to 15 and a numerical limit of 25 has been introduced for requests for admission. That limit of 25, however, does not apply to requests that seek to ascertain the genuineness of a particular document. See Report, at 12-15.

The effect of these proportionality amendments on the eDiscovery process could be far-reaching, but their impact remains to be seen. If lawyers continue to ignore proportionality standards and courts fail to counter such non-compliance with sanctions under Federal Rule 26(g), the depressing duo of unreasonable eDiscovery costs and delays will continue unabated. For those who truly wish to reverse this trend, strict enforcement of these proportionality standards must be the rule of the day.

Push or Pull? Deciding How Much Oversight is Required of In-house Counsel in eDiscovery

Tuesday, June 18th, 2013

When Kolon Industries recently found itself on the wrong side of a $919 million verdict, the legal department for the South Korea-based manufacturer probably started taking inventory of what it might have done differently to avoid such a fate. While that list could have included any number of entries, somewhere near the top had to be an action item to revamp its process for supervising the preservation and collection of electronically stored information (ESI) from company executives and employees. Breakdowns in that process led to the destruction of nearly 18,000 pages of ESI, which resulted in an instruction to the jury in E.I. du Pont de Nemours and Co. v. Kolon Industries, Inc. that Kolon had engaged in wholesale destruction of key evidence. All of this culminated in the devastating verdict against the manufacturer.

Most enterprises will likely never have to deal with the fallout from a nearly $1 billion verdict. Nevertheless, many companies still struggle with the same document collection issues that ultimately tripped up Kolon Industries. Indeed, one of the most troubling issues facing in-house counsel is determining the degree of oversight that must be exercised in connection with document preservation and collection in eDiscovery. While this is an issue counsel has always grappled with, the degree of difficulty has substantially increased in the digital age. With the explosion of information, courts have raised their expectations for how organizations and their counsel address ESI in discovery. Now that the stakes have been raised, should counsel allow executives and employees to decide what is relevant and have them “push” the data for production? Or should the legal team collect (i.e., “pull”) the data and then cull and review it for relevancy?

These issues were recently considered in an article published in May 2013 by the ACC Docket. Authored by Shawn Cheadle, General Counsel, Military Space, Lockheed Martin Space Systems Company, and me, the article describes how counsel can balance these countervailing factors to appropriately supervise the inextricably intertwined eDiscovery phases of ESI preservation and collection. In this article, we detail the elements in play, and discuss the leading court cases and their respective factual scenarios, with an eye toward helping in-house counsel understand the dynamics that are driving this trend. We also provide some suggestions for how counsel can meet the required degree of eDiscovery oversight without neglecting its other duties.

A copy of this article is available here for your reading pleasure.

The Gartner 2013 Magic Quadrant for eDiscovery Software is Out!

Wednesday, June 12th, 2013

This week marks the release of the third annual Gartner Magic Quadrant for e-Discovery Software report. In the early days of eDiscovery, most companies outsourced almost every sizeable project to vendors and law firms, so eDiscovery software was barely a blip on the radar screen for technology analysts. Fast forward a few years to an era of explosive information growth and rising eDiscovery costs, and the landscape has changed significantly. Today, much of the outsourced eDiscovery “services” business has been replaced by eDiscovery software solutions that organizations bring in house to reduce risk and cost. As a result, the enterprise eDiscovery software market is forecast to grow from $1.4 billion in total software revenue worldwide in 2012 to $2.9 billion by 2017. (See Forecast: Enterprise E-Discovery Software, Worldwide, 2012 – 2017, Tom Eid, December 2012.)

Not surprisingly, today’s rapidly growing eDiscovery software market has become significant enough to catch the attention of mainstream analysts like Gartner. This is good news for company lawyers who are used to delegating enterprise software decisions to IT departments and outside law firms, because today those same lawyers are involved in eDiscovery and other information management software purchasing decisions for their organizations. While these lawyers understand the company’s legal requirements, they do not necessarily understand how to choose the best technology to address those requirements. Conversely, IT representatives understand enterprise software, but they do not necessarily understand the law. Gartner bridges this information gap by providing in-depth, independent analysis of the top eDiscovery software solutions in the form of the Gartner Magic Quadrant for e-Discovery Software.

Gartner’s methodology for preparing the annual Magic Quadrant report is rigorous. Providers must meet quantitative requirements such as revenue and significant market penetration to be included in the report. If these threshold requirements are met then Gartner probes deeper by meeting with company representatives, interviewing customers, and soliciting feedback to written questions. Providers that make the cut are evaluated across four Magic Quadrant categories as either “leaders, challengers, niche players, or visionaries.” Where each provider ends up on the quadrant is guided by an independent evaluation of each provider’s “ability to execute” and “completeness of vision.” Landing in the “leaders” quadrant is considered a top recognition.

The nine Leaders in this year’s Magic Quadrant share four primary characteristics (see Figure 1 above).

The first is whether the provider has functionality that spans both sides of the Electronic Discovery Reference Model (EDRM) (left side – identification, preservation, litigation hold, collection, early case assessment (ECA), and processing; right side – processing, review, analysis, and production). “While Gartner recognizes that not all enterprises — or even the majority — will want to perform legal-review work in-house, more and more are dictating what review tools will be used by their outside counsel or legal-service providers. As practitioners become more sophisticated, they are demanding that data change hands as little as possible, to reduce cost and risk. This is a continuation of a trend we saw developing last year, and it has grown again in importance, as evidenced both by inquiries from Gartner clients and reports from vendors about the priorities of current and prospective customers.”

We see this as consistent with the theme that providers with archiving solutions designed to automate data retention and destruction policies generally fared better than those without archiving technology. The rationale is that part of a good end-to-end eDiscovery strategy includes proactively deleting data organizations do not have a legal or business need to keep. This approach decreases the amount of downstream electronically stored information (ESI) organizations must review on a case-by-case basis so the cost savings can be significant.

Not surprisingly, whether a provider offers technology-assisted review or predictive coding capabilities was another factor in evaluating each provider’s end-to-end functionality. The industry has witnessed a surge in predictive coding case law since 2012, and judicial interest has helped drive this momentum. However, a key driver for implementing predictive coding technology is the ability to reduce the amount of ESI attorneys need to review on a case-by-case basis. Given that attorney review is the most expensive phase of the eDiscovery process, many organizations are complementing their proactive information reduction (archiving) strategy with a case-by-case information reduction plan that also includes predictive coding.

The second characteristic Gartner considered was that Leaders’ business models clearly demonstrate that their focus is software development and sales, as opposed to the provision of services. Gartner acknowledged that the eDiscovery services market is strong, but explains that the purpose of the Magic Quadrant is to evaluate software, not services. The justification is that “[c]orporate buyers and even law firms are trending towards taking as much e-Discovery process in house as they can, for risk management and cost control reasons. In addition, the vendor landscape for services in this area is consolidating. A strong software offering, which can be exploited for growth and especially profitability, is what Gartner looked for and evaluated.”

Third, Gartner believes the solution provider market is shrinking and that corporations are becoming more involved in buying decisions instead of deferring technology decisions to their outside law firms. Therefore, those in the Leaders category were expected to illustrate a good mix of corporate and law firm buying centers. The rationale behind this category is that law firms often help influence corporate buying decisions so both are important players in the buying cycle. However, Gartner also highlighted that vendors who get the majority of their revenues from the “legal solution provider channel” or directly from “law firms” may soon face problems.

The final characteristic Gartner considered for the Leaders quadrant relates to financial performance and growth. In measuring this component, Gartner explained that a number of factors were considered, primary among them whether the Leaders are keeping pace with, or even exceeding, overall market growth. (See Forecast: Enterprise E-Discovery Software, Worldwide, 2012 – 2017, Tom Eid, December 2012.)

Companies landing in Gartner’s Magic Quadrant for eDiscovery Software have reason to celebrate their position in an increasingly competitive market. To review Gartner’s full report yourself, click here. In the meantime, please feel free to share your own comments below as the industry anxiously awaits next year’s Magic Quadrant Report.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Save the Date: Defensible Deletion Google + Hangout

Wednesday, June 12th, 2013

As information volumes continue to explode, the need for a strategic information governance plan has never been more important. Organizations are struggling to reduce their electronically stored information (ESI) footprint, while at the same time ensuring they are prepared to satisfy eDiscovery requests and comply with retention requirements stemming from Dodd-Frank and FINRA 10-06.

This is where Defensible Deletion comes into play. Defensible Deletion is a comprehensive approach that companies implement to reduce the storage costs and legal risks associated with the retention of electronically stored information. Organizations that establish a systematic methodology for cutting down their information clutter have been successful in avoiding court sanctions and eliminating ESI that has little or no business value.

Please join the Symantec Archiving & eDiscovery team on Wednesday, June 19 at 9:30am PT for an On Air Google+ Hangout and learn techniques for reducing risk and storage costs through the implementation of a Defensible Deletion plan for both active and archived content: http://bit.ly/17by3e6.  Also, join the conversation at #SYMChangout.