
New Gartner Report Spotlights Significance of Email Archiving for Defensible Deletion

Thursday, November 1st, 2012

Gartner recently released a report that spotlights the importance of using email archiving as part of an organization’s defensible deletion strategy. The report – Best Practices for Using Email Archiving to Eliminate PST and Mailbox Quota Headaches (Alan Dayley, September 21, 2012) – specifically focuses on the information retention and eDiscovery challenges associated with email storage on Microsoft Exchange and how email archiving software can help address these issues. As Gartner makes clear in its report, an archiving solution can provide genuine opportunities to reduce the costs and risks of email hoarding.

The Problem: PST Files

The primary challenge that many organizations are experiencing with Microsoft Exchange email is the unchecked growth of messages stored in personal storage table (PST) files. Used to bypass storage quotas on Exchange, PST files are problematic because they increase the costs and risks of eDiscovery while circumventing information retention policies.

That the unrestrained growth of PST files could create problems downstream for organizations should come as no surprise. Various court decisions have addressed this issue, with the DuPont v. Kolon Industries litigation foremost among them. In the DuPont case, a $919 million verdict and 20-year product injunction largely stemmed from the defendant’s inability to prevent the destruction of thousands of pages of email formerly stored in PST files. That spoliation resulted in an adverse inference instruction to the jury and the ensuing verdict against the defendant.

The Solution: Eradicate PSTs with the Help of Archiving Software and Retention Policies

To address the PST problem, Gartner suggests following a three-step process to help manage and then eradicate PSTs from the organization. This includes educating end users regarding both the perils of PSTs and the ease of access to email through archiving software. It also involves disabling the creation of new PSTs, a process that should ultimately culminate with the elimination of existing PSTs.

In connection with this process, Gartner suggests deployment of archiving software with a “PST management tool” to facilitate the eradication process. With the assistance of the archiving tool, existing PSTs can be discovered and migrated into the archive’s central data repository. Once there, email retention policies can begin to expire stale, useless and even harmful messages that were formerly outside the company’s information retention framework.

With respect to the development of retention policies, organizations should consider engaging in a cooperative internal process involving IT, compliance, legal and business units. These key stakeholders must be engaged and collaborate if workable policies are to be created. The actual retention periods should take into account the types of email generated and received by an organization, along with the enterprise’s business, industry and litigation profile.

To ensure successful implementation of such retention policies and also address the problem of PSTs, an organization should explore whether an on-premises or cloud archiving solution is a better fit for its environment. While each method has its advantages, Gartner advises organizations to consider whether certain key features are included with a particular offering:

Email classification. The archiving tool should allow your organization to classify and tag emails in accordance with your retention policy definitions, including user-selected, user/group, or keyword tagging.

User access to archived email. The tool must also give end users appropriate and user-friendly access to their archived email, thus eliminating the concerns about managing mailbox storage that drove them to PSTs in the first place.

Legal and information discovery capabilities. The search, indexing, and e-discovery capabilities of the archiving tool should also match your needs or enable integration into corporate e-discovery systems.

While perhaps not a panacea for the storage and eDiscovery problems associated with email, on-premises or cloud archiving software should provide various benefits to organizations. Indeed, such technologies have the potential to help organizations store, manage and discover their email efficiently, cost-effectively and in a defensible manner. Where such software is properly deployed and fully implemented, organizations should be able to reduce the nettlesome costs and risks connected with email.

Defensible Deletion: The Cornerstone of Intelligent Information Governance

Tuesday, October 16th, 2012

The struggle to stay above the rising tide of information is a constant battle for organizations. Not only are the costs and logistics associated with data storage more troubling than ever, but so are the potential legal consequences. Indeed, the news headlines are constantly filled with horror stories of jury verdicts, court judgments and unreasonable settlements involving organizations that failed to effectively address their data stockpiles.

While there are no quick or easy solutions to these problems, an increasingly popular method for dealing with these issues is an organizational strategy referred to as defensible deletion. A defensible deletion strategy could refer to many things. But at its core, defensible deletion is a comprehensive approach that companies implement to reduce the storage costs and legal risks associated with the retention of electronically stored information (ESI). Organizations that have done so have been successful in avoiding court sanctions while at the same time eliminating ESI that has little or no business value.

The first step to implementing a defensible deletion strategy is for organizations to ensure that they have a top-down plan for addressing data retention. This typically requires that their information governance principals – legal and IT – are cooperating with each other. These departments must also work jointly with records managers and business units to decide what data must be kept and for what length of time. All such stakeholders in information retention must be engaged and collaborate if the organization is to create a workable defensible deletion strategy.

Cooperation between legal and IT naturally leads the organization to establish records retention policies, which carry out the key players’ decisions on data preservation. Such policies should address the particular needs of an organization while balancing them against litigation requirements. Not only will that enable a company to reduce its costs by decreasing data proliferation, it will minimize a company’s litigation risks by allowing it to limit the amount of potentially relevant information available for current and follow-on litigation.

In like manner, legal should work with IT to develop a process for how the organization will address document preservation during litigation. This will likely involve the designation of officials who are responsible for issuing a timely and comprehensive litigation hold to custodians and data sources. This will ultimately help an organization avoid the mistakes that often plague document management during litigation.

The Role of Technology in Defensible Deletion

In the digital age, an essential aspect of a defensible deletion strategy is technology. Indeed, without innovations such as archiving software and automated legal hold acknowledgements, it will be difficult for an organization to achieve its defensible deletion objectives.

On the information management side of defensible deletion, archiving software can help enforce organizational retention policies and thereby reduce data volume and related storage costs. This can be accomplished with classification tools, which intelligently analyze and tag data content as it is ingested into the archive. By so doing, organizations may retain information that is significant or that otherwise must be kept for business, legal or regulatory purposes – and nothing else.
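Classification at ingest can be pictured as a rule table applied to each message as it enters the archive. The sketch below is only illustrative: the field names, rule set, and retention periods are hypothetical, and real archiving products use far richer content analysis than keyword matching.

```python
def classify_on_ingest(message, rules):
    """Return the retention tag for a message entering the archive.
    The first matching rule wins; unmatched mail falls into a default
    short-retention tier so nothing is kept indefinitely by accident."""
    text = (message["subject"] + " " + message["body"]).lower()
    for tag, keywords, retention_days in rules:
        if any(k in text for k in keywords):
            return {"tag": tag, "retention_days": retention_days}
    return {"tag": "general", "retention_days": 365}

# Hypothetical policy: litigation-related mail is held indefinitely
# (signaled here by None), finance mail is kept seven years, and
# everything else expires after a year.
rules = [
    ("legal-hold", ["litigation", "subpoena"], None),
    ("finance", ["invoice", "audit"], 7 * 365),
]
msg = {"subject": "Q3 audit schedule", "body": "Draft attached."}
print(classify_on_ingest(msg, rules))  # {'tag': 'finance', 'retention_days': 2555}
```

Because the rules are ordered, a message mentioning both a subpoena and an invoice lands in the stricter legal-hold tier, which mirrors how retention policies typically let legal obligations trump business convenience.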

An archiving solution can also reduce costs through efficient data storage. By expiring data in accordance with organizational retention policies and by using single-instance storage to eliminate ESI duplicates, archiving software frees up space on company servers for the retention of other materials and ultimately leads to decreased storage costs. Moreover, it lessens litigation risks by removing data that would otherwise be available for future litigation.
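The single-instance storage idea can be sketched in a few lines: identical message bodies are stored once, keyed by a content digest, and each mailbox keeps only a reference. This is a toy model under stated assumptions (SHA-256 as the identity function, whole-message granularity); production archives add chunking, compression, and metadata handling.

```python
import hashlib

class SingleInstanceStore:
    """Toy single-instance store: identical message bodies are kept once,
    and each mailbox entry holds only a reference (the content digest)."""

    def __init__(self):
        self.blobs = {}  # digest -> message body (stored once)
        self.refs = {}   # digest -> reference count across mailboxes

    def ingest(self, body: bytes) -> str:
        digest = hashlib.sha256(body).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = body  # first copy: store the content
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest                  # the mailbox keeps this pointer

    def expire(self, digest: str) -> None:
        # A retention policy expires one reference; the content itself is
        # deleted only when no mailbox still points at it.
        self.refs[digest] -= 1
        if self.refs[digest] == 0:
            del self.blobs[digest]
            del self.refs[digest]

store = SingleInstanceStore()
d1 = store.ingest(b"Q3 forecast attached")
d2 = store.ingest(b"Q3 forecast attached")  # duplicate: no new blob stored
assert d1 == d2 and len(store.blobs) == 1
```

The reference count is what makes policy-driven expiry safe: a message sent to fifty custodians consumes the space of one copy, yet survives until the last retention obligation on it lapses.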

On the eDiscovery side of defensible deletion, an eDiscovery platform with the latest in legal hold technology is often essential for enabling a workable litigation hold process. Effective platforms enable automated legal hold acknowledgements on various custodians across multiple cases. This allows organizations to confidently place data on hold through a single user action and eliminates concerns that ESI may slip through the proverbial cracks of manual hold practices.

Organizations are experiencing every day the costly consequences of delaying implementation of a defensible deletion program. This trend can be reversed through a common sense defensible deletion strategy which, when powered by effective, enabling technologies, can help organizations decrease the costs and risks associated with the information explosion.

Mission Impossible? The eDiscovery Implications of the ABA’s New Ethics Rules

Thursday, August 30th, 2012

The American Bar Association (ABA) recently announced changes to its Model Rules of Professional Conduct that are designed to address digital age challenges associated with practicing law in the 21st century. These changes emphasize that lawyers must understand the ins and outs of technology in order to provide competent representation to their clients. From an eDiscovery perspective, such a declaration is particularly important given the lack of understanding that many lawyers have regarding even the most basic supporting technology needed to effectively satisfy their discovery obligations.

With respect to the actual changes, the amendment to the commentary language from Model Rule 1.1 was most significant for eDiscovery purposes. That rule, which defines a lawyer’s duty of competence, now requires that attorneys discharge that duty with an understanding of the “benefits and risks” of technology:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

This rule certainly restates the obvious for experienced eDiscovery counsel. Indeed, the Zubulake series of opinions from nearly a decade ago laid the groundwork for establishing that competence and technology are irrevocably and inextricably intertwined. As Judge Scheindlin observed in Zubulake V, “counsel has a duty to effectively communicate to her client its discovery obligations so that all relevant information is discovered, retained, and produced.” This includes being familiar with client retention policies, in addition to its “data retention architecture;” communicating with the “client’s information technology personnel” and arranging for the “segregation and safeguarding of any archival media (e.g., backup tapes) that the party has a duty to preserve.”

Nevertheless, Model Rule 1.1 is groundbreaking in that it formally requires lawyers in those jurisdictions following the Model Rules to be up to speed on the impact of eDiscovery technology. In 2012, that undoubtedly means counsel should become familiar with the benefits and risks of predictive coding technology. With its promise of reduced document review costs and decreased legal fees, counsel should closely examine predictive coding solutions to determine whether they might be deployed in some phase of the document review process (e.g., prioritization, quality assurance for linear review, full scale production). Yet caution should also be exercised given the risks associated with this technology, particularly the well-known limitations of early generation predictive coding tools.

In addition to predictive coding, lawyers would be well served to better understand traditional eDiscovery technology tools such as keyword search, concept search, email threading and data clustering. Indeed, there is significant confusion regarding the continued viability of keyword searching given some prominent judicial opinions frowning on so-called blind keyword searches. However, most eDiscovery jurisprudence and authoritative commentators confirm the effectiveness of keyword searches that involve some combination of testing, sampling and iterative feedback.
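The testing-and-sampling workflow that the commentators endorse can be illustrated with a toy precision check: run the keyword search, sample the hits, and measure how many a reviewer judges relevant before refining the terms. The corpus layout and the `relevant` flag (a stand-in for an attorney's judgment call) are assumptions of this sketch, not features of any real eDiscovery platform.

```python
import random

def sample_precision(corpus, keywords, sample_size=5, seed=42):
    """Run a keyword search, sample the hits, and estimate precision from
    reviewer judgments so the keyword list can be tightened or broadened."""
    kws = [k.lower() for k in keywords]
    hits = [doc for doc in corpus if any(k in doc["text"].lower() for k in kws)]
    if not hits:
        return [], 0.0
    sample = random.Random(seed).sample(hits, min(sample_size, len(hits)))
    relevant = sum(1 for doc in sample if doc["relevant"])  # attorney review
    return hits, relevant / len(sample)

corpus = [
    {"text": "Merger agreement draft", "relevant": True},
    {"text": "Merger of two holiday calendars", "relevant": False},
    {"text": "Lunch menu", "relevant": False},
]
hits, precision = sample_precision(corpus, ["merger"])
# A low sampled precision tells the reviewer that "merger" alone is
# overbroad and should be combined with other terms and retested.
```

Each pass through this loop (search, sample, review, revise) is what distinguishes a defensible iterative protocol from the "blind" keyword searches some courts have criticized.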

Whether the technology involves predictive coding, keyword searching, attorney client privilege reviews or other areas of eDiscovery, the revised Model Rules appear to require counsel to understand the benefits and risks of these tools. Moreover, this is not simply a one-time directive. Because technology is always changing, lawyers should continue to stay abreast of changes and developments. This continuing duty of competence is well summarized in The Sedona Conference Best Practices Commentary on the Use of Search & Retrieval Methods in E-Discovery:

Parties and the courts should be alert to new and evolving search and information retrieval methods. What constitutes a reasonable search and information retrieval method is subject to change, given the rapid evolution of technology. The legal community needs to be vigilant in examining new and emerging techniques and methods which claim to yield better search results.

While the challenge of staying abreast of these complex technological changes is difficult, it is certainly not “mission impossible.” Lawyers untrained in the areas of technology have often developed tremendous skill sets required for dealing with other areas of complexities in the law. Perhaps the wise but encouraging reminder from Anthony Hopkins to Tom Cruise in Mission Impossible II will likewise spur reluctant attorneys to accept this difficult, though not impossible task: “Well this is not Mission Difficult, Mr. Hunt, it’s Mission Impossible. Difficult should be a walk in the park for you.”

How to Prepare for eDiscovery under New UK Civil Procedure Rule 31.5

Thursday, August 9th, 2012

Be sure to circle the month of April on your 2013 calendars. That is the projected effective date for new UK Civil Procedure Rule 31.5. CPR 31.5 is designed to modify the disclosure process and usher in a new era of proportional discovery in the UK. These changes figure to be significant, particularly as they come on the heels of new Practice Direction 31B. Between the Practice Direction and CPR 31.5, the bar is now being substantially raised for how UK lawyers handle eDiscovery. That lawyers need to take particular note of these changes and their impact on disclosure was highlighted last year when Lord Justice Jackson declared that “few solicitors and even fewer barristers really understand how to undertake e-disclosure in an effective way.”

Presently, courts may order some form of standard disclosure or dispense with disclosure altogether. However, under the revised version of Rule 31.5, courts in England and Wales will have several options for addressing disclosure of all multi-track claims (except those for personal injuries). This new “menu of possible disclosure orders” will enable courts to tailor a disclosure plan to fit the specific needs of a given case. In particular, courts may provide direction regarding particular searches for documents, the phasing of disclosure in different stages, and the formats for disclosing documents.

Furthermore, new Rule 31.5 emphasizes the importance of early and efficient disclosure. For example, each party must serve a report two weeks before the first case management conference (CMC) that describes existing documents relevant to matters at issue in the case. The report must also specify the location of such documents and cost estimates for standard disclosure. Parties must then meet a week prior to the CMC to reach an agreed upon plan for disclosure that satisfies the “overriding objective.”

In addition, the provisions from the new rule are equally applicable to electronic documents. This is accomplished by having new Rule 31.5 operate in conjunction with Practice Direction 31B. As Lord Justice Jackson explained, these two provisions “fit neatly together” so as to ensure that disclosure of electronic documents meets the overriding objective.

All of this places a demand on lawyers to understand how to undertake eDiscovery in an effective way. While it is essential to understand the particulars of new Rule 31.5, deploying effective, enabling technologies will be just as critical to ensuring compliance with the new disclosure provisions. This is because both the new Rule and the Practice Direction emphasize the role of technology in meeting the overriding objective in disclosure.

In summary, new Rule 31.5 will likely represent a sea change for eDiscovery in the UK. Nevertheless, savvy lawyers who learn the rules and use effective tools to support their disclosure process can rest assured that they will be prepared for the rule’s implementation in April 2013.

FOIA Matters! — 2012 Information Governance Survey Results for the Government Sector

Thursday, July 12th, 2012

At this year’s EDGE Summit in April, Symantec polled attendees about a range of government-specific information governance questions. The attendees of the event consisted primarily of members from IT and Legal, as well as Freedom of Information Act (FOIA) agents, government investigators and records managers. The main purpose of the EDGE survey was to gather attendees’ thoughts on what information governance means for their agencies, discern what actions were being taken to address Big Data challenges, and assess how far along agencies were in their information governance implementations pursuant to the recent Presidential Mandate.

As my colleague Matt Nelson’s blog recounts from the LegalTech conference earlier this year, information governance and predictive coding were among the hottest topics at the LTNY 2012 show and in the industry generally. The EDGE Summit correspondingly held sessions on those two topics and delved deeper into questions that are unique to the government. For example, when asked what the top driver for implementation of an information governance plan in an agency was, three out of four respondents answered “FOIA.”

The fact that FOIA was listed as the top driver for government agencies planning to implement an information governance solution is in line with data reported by the Department of Justice (DOJ) from 2008-2011 on the number of requests received. In 2008, 605,491 FOIA requests were received. This figure grew to 644,165 in 2011. While the increase in FOIA requests is not enormous percentage-wise, what is significant is the reduction in backlogs for FOIA requests. In 2008, there was a backlog of 130,419 requests; by 2011, that backlog had decreased to 83,490. This is likely due to the implementation of newer and better technology, coupled with the fact that the current administration has made FOIA request processing a priority.

In 2009, President Obama directed agencies to adopt “a presumption in favor” of FOIA requests for greater transparency in the government. Agencies have faced pressure from the President to improve the response time to (and completeness of) FOIA requests. Washington Post reporter Ed O’Keefe wrote,

“a study by the National Security Archive at George Washington University and the Knight Foundation, found approximately 90 federal agencies are equipped to process FOIA requests, and of those 90, only slightly more than half have taken at least some steps to fulfill Obama’s goal to improve government transparency.”

Agencies are increasingly more focused on complying with FOIA and will continue to improve their IT environments with archiving, eDiscovery and other proactive records management solutions in order to increase access to data.

Not far behind FOIA requests on the list of reasons to implement an information governance plan were “lawsuits” and “internal investigations.” Fortunately, any comprehensive information governance plan will axiomatically address FOIA requests since the technology implemented to accomplish information governance inherently allows for the storage, identification, collection, review and production of data regardless of the specific purpose. The use of information governance technology will not have the same workflow or process for FOIA that an internal investigation would require, for example, but the tools required are the same.

The survey also found that the top three most important activities surrounding information governance were: email/records retention (73%), data security/privacy (73%) and data storage (72%). These concerns are being addressed modularly by agencies with technology like data classification services, archiving, and data loss prevention technologies. In-house eDiscovery tools are also important as they facilitate the redaction of personally identifiable information that must be removed in many FOIA requests.

It is clear that agencies recognize the importance of managing email/records for the purposes of FOIA. This is an area of concern not only because of the data explosion, but also because 53% of respondents reported they are responsible for classifying their own data. Respondents have connected the concept of information governance with records management and the ability to execute more effectively on FOIA requests. Manual classification is rapidly becoming obsolete as data volumes grow, and is being replaced by automated solutions in successfully deployed information governance plans.

Perhaps the most interesting piece of data from the survey was the disclosures about what was preventing governmental agencies from implementing information governance plans. The top inhibitors for the government were “budget,” “internal consensus” and “lack of internal skill sets.” Contrasted with the LegalTech Survey findings from 2012 on information governance, with respondents predominantly from the private sector, the government’s concerns and implementation timelines are slightly different. In the EDGE survey, only 16% of the government respondents reported that they have implemented an information governance solution, contrasted with 19% of the LegalTech audience. This disparity is partly because the government lacks the budget and the proper internal committee of stakeholders to sponsor and deploy a plan, but the relatively low numbers in both sectors indicate the nascent state of information governance.

In order for a successful information governance plan to be deployed, “it takes a village,” to quote Secretary Clinton. Without prioritizing coordination between IT, legal, records managers, security, and the other necessary departments on data management, merely having the budget only purchases the technology and does not ensure true governance. In this year’s survey, 95% of EDGE respondents were actively discussing information governance solutions. Over the next two years, the percentage of agencies that will deploy a solution is expected to more than triple, from 16% to 52%. With the directive on records management due this month from the National Archives and Records Administration (NARA), government agencies will have clear guidance on best practices for records management, and this will aid the adoption of automated archiving and records classification workflows.

The future is bright with the initiative by the President and NARA’s anticipated directive to examine the state of technology in the government. The EDGE survey results support the forecast, provided budget can be obtained, that agencies will be in an improved state of information governance within the next two years. This will improve FOIA request compliance, make litigation with the government more efficient, and increase agencies’ ability to conduct internal investigations effectively.

Many would have projected that the results of the survey question on what drives information governance in the government would be litigation, internal investigations, and FOIA requests respectively. And yet, FOIA has recently taken on a more important role given the Obama administration’s focus on transparency and the increased number of requests by citizens. While any one of the drivers could have facilitated the updates in process and technology the government clearly needs, FOIA has positive momentum behind it and seems to be the impetus primarily driving information governance. Fortunately, archiving and eDiscovery technology, only two parts of the information governance continuum, can help with all three of the aforementioned drivers with different workflows.

Later this month we will examine NARA’s directive and what the impact will be on the government’s technology environment – stay tuned.

#InfoGov Twitter Chat Homes in on Starting Places and Best Practices

Tuesday, July 3rd, 2012

Unless you’re an octogenarian living in rural Uzbekistan[i], you’ve likely seen the meteoric rise of social media over the last decade. Even beyond hyper-texting teens, businesses too are taking advantage of this relatively new medium to engage with their more technically savvy customers. Recently, Symantec held its first “Twitter Chat” on the topic of information governance (fondly referred to on Twitter as #InfoGov). For those not familiar with the concept, a Twitter Chat is a virtual discussion held on Twitter using a specific hashtag – in this case #IGChat. At a set date and time, parties interested in the topic log into Twitter and start participating in the fireworks on the designated hashtag.

“Fireworks” may be a bit overstated, but given that the moderators (eDiscovery Counsel at Symantec) and participants were limited to 140 characters, the “conversation” was certainly frenetic. Despite the fast pace, one benefit of a Twitter Chat is that you can communicate with shortened web links, as a way to share and discuss content beyond the severely limited character count. During this somewhat staccato discussion, we found the conversation to take some interesting twists and turns, which I thought I’d excerpt (and expound upon[ii]) in this blog.

Whether in a Twitter Chat or otherwise, once the discussion of information governance begins everyone wants to know where to start. The #IGChat was no different.

  • Where to begin?  While there wasn’t consensus per se on a good starting place, one cogent remark out of the blocks was: “The best way to start is to come up with an agreed upon definition — Gartner’s is here t.co/HtGTWN2g.” While the Gartner definition is a good starting place, there are others out there that are more concise. The eDiscovery Journal Group has a good one as well:  “Information Governance is a comprehensive program of controls, processes, and technologies designed to help organizations maximize the value of information assets while minimizing associated risks and costs.”  Regardless of the precise definition, it’s definitely worth the cycles to rally around a set construct that works for your organization.
  • Who’s on board?  The next topic centered around trying to find the right folks organizationally to participate in the information governance initiative. InfoGovlawyer chimed in: “Seems to me like key #infogov players should include IT, Compliance, Legal, Security reps.” Then, PhilipFavro suggested that the “[r]ight team would likely include IT, legal, records managers, pertinent business units and compliance.” Similar to the previous question, at this stage in the information governance maturation process, there isn’t a single, right answer. More importantly, the team needs to have stakeholders from at least Legal and IT, while bringing in participants from other affected constituencies (Infosec, Records, Risk, Compliance, etc.) – basically, anyone interested in maximizing the value of information while reducing the associated risks.
  • Where’s the ROI?  McManusNYLJ queried: “Do you think #eDiscovery, #archiving and compliance-related technology provide ample ROI? Why or why not?”  Here, the comments came in fast and furious. One participant pointed out that case law can be helpful in showing the risk reduction:  “Great case showing the value of an upstream archive – Danny Lynn t.co/dcReu4Qg.” AlliWalt chimed in: “Yes, one event can set your company back millions…just look at the Dupont v. Kolon case… ROI is very real.” Another noted that “Orgs that take a proactive approach to #eDiscovery requests report a 64% faster response time, 2.3x higher success rate.” And, “these same orgs were 78% less likely to be sanctioned and 47% less likely to be legally compromised t.co/5dLRUyq6.” ROI for information governance seemed to be a nut that can be cracked any number of ways, ranging from risk reduction (via sanctions and adverse legal decisions) to better preparation. Here too, an organization’s particular sensitivities should come into play since all entities won’t have the same concerns about risk reduction, for example.
  • Getting Granular. Pegduncan, an active subject matter expert on the topic, noted that showing ROI was the right idea, but not always easy to demonstrate: “But you have to get their attention. Hard to do when IT is facing funding challenges.” This is when granular eDiscovery costs were mentioned: “EDD costs $3 -18k per gig (Rand survey) and should wake up most – adds up w/ large orgs having 147 matters at once.” Peg wasn’t that easily convinced: “Agreed that EDD costs are part of biz case, but .. it’s the problem of discretionary vs non-discretionary spending.”
  • Tools Play a Role. One participant asked: “what about tools for e-mail thread analysis, de-duplication, near de-duplication – are these applicable to #infogov?” A participant noted that “in the future we will see tools like #DLP and #predictivecoding used for #infogov auto-classification – more on DLP here: t.co/ktDl5ULe.” Pegduncan chimed in that “DLP=Data Loss Prevention. Link to Clearwell’s post on Auto-Classification & DLP t.co/ITMByhbj.”
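The email thread analysis raised in the chat is, at bottom, a grouping problem: follow each message's In-Reply-To header back to a root Message-ID. The sketch below assumes a simplified dict shape for messages (hypothetical `id` and `in_reply_to` keys) rather than real RFC 5322 headers.

```python
from collections import defaultdict

def build_threads(messages):
    """Group messages into threads by walking each In-Reply-To link back
    to the earliest known ancestor. Replies to missing messages start
    their own thread, and cycles are broken rather than looped forever."""
    parent = {m["id"]: m.get("in_reply_to") for m in messages}

    def root(mid):
        seen = set()
        # Follow reply links while the parent is a message we actually have.
        while parent.get(mid) in parent and mid not in seen:
            seen.add(mid)
            mid = parent[mid]
        return mid

    threads = defaultdict(list)
    for m in messages:
        threads[root(m["id"])].append(m["id"])
    return dict(threads)

mail = [
    {"id": "a"},                      # thread starter
    {"id": "b", "in_reply_to": "a"},  # reply
    {"id": "c", "in_reply_to": "b"},  # reply to the reply
    {"id": "d"},                      # unrelated message
]
print(build_threads(mail))  # {'a': ['a', 'b', 'c'], 'd': ['d']}
```

Once messages are clustered this way, a reviewer can often read one inclusive message per thread instead of every copy, which is where the review-cost savings mentioned in the chat come from.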

With a concept as broad and complex as information governance, it’s truly amazing that a cogent “conversation” can take place in a series of 140 character tweets. As the Twitter Chat demonstrates, the information governance concept continues to evolve and is doing so through discussions like this one via a social media platform. As with many of the key information governance themes (Ownership, ROI, Definition, etc.) there isn’t a right answer at this stage, but that isn’t an excuse for not asking the critical questions. “Sooner started, sooner finished” is a motto that will serve many organizations well in these exciting times. And, for folks who say they can’t spare the time, they’d be amazed what they can learn in 140 characters.

Mark your calendars and track your Twitter hashtags now: The next #IGChat will be held on July 26 @ 10am PT.



[i] I’ve never been to rural Uzbekistan, but it just sounded remote. So, my apologies if there’s a world class internet infrastructure there where the denizens tweet prolifically. Given that it’s one of only two double landlocked countries in the world, it seemed like an easy target. Uzbeks, please feel free to use the comment field and set me straight.

[ii] Minor edits were made to select tweets, but generally the shortened Twitter grammar wasn’t changed.

APAC eDiscovery Passports: Litigation Basics for the Asia-Pacific Region

Wednesday, June 13th, 2012

Global economic indicators point to increased trade with and outsourcing to emerging markets around the world, specifically the Asia-Pacific (APAC) region. Typical U.S. sectors transacting with the East include manufacturing, business process outsourcing (BPO)/legal process outsourcing (LPO), call centers, and other industries. The Asian Development Bank stated last year that Asia will account for half of all global economic output by 2050 if the region’s collective GDP stays on pace. The next 10 years will likely bring the BRICS countries (Brazil, Russia, India, China and South Africa) and the Four Asian Tigers (Hong Kong, Singapore, South Korea and Taiwan) to the forefront of the global economy. Combining this projected economic growth with the data explosion makes knowledge about the APAC legal system a necessity for litigators and international business people alike.

The convergence of the global economy across different privacy and data protection regimes has increased the complexity of addressing electronically stored information (ESI). Money and data in large volumes cross borders daily in order to conduct international business. This is true not only for Asian countries transacting with each other, but increasingly with Europe and the United States. Moreover, because technology continues to decrease the reliance on data in paper format, data will need to be produced and analyzed in the form in which it was created. This is important from a forensic standpoint, as well as an information management perspective.  This technical push is reason alone for organizations to shift their processes and technologies to focus more on ESI – not only in how data is created, but in how they store, search, retrieve, review and produce it.

Discovery Equals eDiscovery

The world of eDiscovery for the purposes of regulation and litigation is no longer a U.S. anomaly. This is not only because organizations may be subject to the federal and state rules of civil procedure governing pre-trial discovery in U.S. civil litigation, but because under existing Asian laws and regulatory schemes, the ability to search and retrieve data may be necessary.

Regardless of whether the process of searching, retrieving, reviewing and producing data (eDiscovery) is called discovery or disclosure, and whether it occurs before trial or during, the reality in litigation, especially for multinational corporations, is that eDiscovery may be required around the world. The best approach is not only to equip your organization with the most defensible and cost-effective technology in the litigator’s tool belt, but also to know the rules by which one must play.

The Passports

The knowledge level for many lawyers about how to approach a discovery request in APAC jurisdictions is often minimal, but there are resources that provide straightforward answers at no cost to the end-user. For example, Symantec has just released a series of “eDiscovery Passports™” for APAC that focus on discovery in civil litigation, the collision of data privacy laws, questions about the cross-border transfer of data, and the threat of U.S. litigation as businesses globalize.  The Passports are a basic guide that frame key components about a country including the legal system, discovery/disclosure, privacy, international considerations and data protection regulations. The Passports are useful tools to begin the process of exploring what considerations need to be made when litigating in the APAC region.

While the rules governing discovery in common law countries like Australia and New Zealand may be less comprehensive and follow slightly different timing than those of the U.S. and U.K., they do exist under Australia’s UPC and New Zealand’s HCR.  Countries like Hong Kong and Singapore, which also follow a traditional common law system, have several procedural nuances unique to their jurisdictions.  The Philippines, for example, is a hybrid of the civil and common law legal systems, bearing similarities to California law due to history and proximity.  Below are some examples of cases that evidence trends in Asian jurisdictions toward the U.S. Federal Rules of Civil Procedure (FRCP) and the Sedona Principles, and that support the idea that eDiscovery is going global.

  • Hong Kong. In Moulin Global Eyecare Holdings Ltd. v. KPMG (2010), the court held the discovery of relevant documents must apply to both paper and ESI. The court did, however, reject the argument by plaintiffs that overly broad discovery be ordered as this would be ‘tantamount to requiring the defendants to turn over the contents of their filing cabinets for the plaintiffs to rummage through.’ Takeaway: Relevance and proportionality are the key factors in determining discovery orders, not format.
  • Singapore. In Deutsche Bank AG v. Chang Tse Wen (2010), the court acknowledged eDiscovery as particularly useful when the relevant data to be discovered is voluminous.  Because the parties failed to meet and confer in this case, the court ordered parties to take note of the March 2012 Practice Direction which sets out eDiscovery protocols and guidance. Takeaway: Parties must meet and confer to discuss considerations regarding ESI and be prepared to explain why the discovery sought is relevant to the case.
  • U.S. In E.I. du Pont de Nemours v. Kolon Industries (E.D. Va. July 21, 2011), the court held that the defendant failed to issue a timely litigation hold.  The resulting eDiscovery sanctions culminated in a $919 million verdict against the defendant South Korean company. While exposure to the FRCP should not be the only factor a company doing business with the U.S. weighs in determining which eDiscovery processes and technologies to implement, it is an important consideration in light of sanctions. Takeaway:  Although discovery requirements are not currently as expansive in Asia as they are in the U.S., companies conducting business with the U.S. may be subject to U.S. law, which requires that a legal hold be deployed when litigation is reasonably anticipated.

Asia eDiscovery Exchange

On June 6-7 at the Excelsior Hotel in Hong Kong, industry experts from the legal, corporate and technology industries gathered for the Asia eDiscovery Exchange.  Jeffrey Toh of innoXcell, the organizer of the event in conjunction with the American eDJ Group, says “this is still a very new initiative in Asia, nevertheless, regulators in Asia have taken steps to implement practice directions for electronic evidence.” Exchanges like these indicate the market is ready for comprehensive solutions for proactive information governance, as well as reactive eDiscovery.  The three themes the conference touched on were information governance, eDiscovery and forensics.  Key sessions included “Social Media is surpassing email as a means of communication; What does this mean for data collection and your Information Governance Strategy” with Barry Murphy, co-founder and principal analyst, eDiscovery Journal and Chris Dale, founder, e-Disclosure Information Project, as well as “Proactive Legal Management” (with Rebecca Grant, CEO of iCourts in Australia and Philip Rohlik, Debevoise & Plimpton in Hong Kong).

The Asian market is ripe for new technologies, and the Asia eDiscovery Exchange should yield tremendous insight into the unique drivers for the APAC region and how vendors and lawyers alike are adapting their offerings to the market.  The eDiscovery Passports™ are also timely, as they coincide with a marked increase in Asian business and the proposal of new data protection laws in the region.  Because the regional differences regarding discovery are distinct, resources like this can help litigators within the APAC region, as well as lawyers around the world.  Thought leaders in the APAC region have come together to discuss these differences and how technology can best address the unique requirements of each jurisdiction.  The conference has made clear that information governance, archiving and eDiscovery tools are necessary in the region, even if those needs are not necessarily motivated by litigation as in the U.S.

7th Circuit eDiscovery Pilot Program Tackles Technology Assisted Review With Mock Arguments

Tuesday, May 22nd, 2012

The 7th Circuit eDiscovery Pilot Program’s Mock Argument is the first of its kind and is slated for June 14, 2012.  It is not surprising that the Seventh Circuit’s eDiscovery Pilot Program would be the first to host an event like this on predictive coding, as the program has been a progressive model across the country for eDiscovery protocols since 2009.  The predictive coding event is open to the public (registration required) and showcases the expertise of leading litigators, technologists and experts from all over the United States.  Speakers include: Jason R. Baron, Director of Litigation at the National Archives and Records Administration; Maura R. Grossman, Counsel at Wachtell, Lipton, Rosen & Katz; Dr. David Lewis, Technology Expert and co-founder of the TREC Legal Track; Ralph Losey, Partner at Jackson Lewis; Matt Nelson, eDiscovery Counsel at Symantec; Lisa Rosen, President of Rosen Technology Resources; Jeff Sharer, Partner at Sidley Austin; and Tomas Thompson, Senior Associate at DLA Piper.

The eDiscovery 2.0 blog has extensively covered the three recent predictive coding cases currently being litigated, and while real court cases are paramount to the direction of predictive coding, the 7th Circuit program will proactively address a scenario that has not yet been considered by a court.  In Da Silva Moore, the parties agreed to the use of predictive coding, but couldn’t subsequently agree on the protocol.  In Kleen, plaintiffs want defendants to redo their review process using predictive coding even though the production is 99% complete.  And, in Global Aerospace, the defendant proactively petitioned to use predictive coding over plaintiff’s objections.  By contrast, the 7th Circuit’s hypothetical mock argument predicts another likely predictive coding scenario: the instance where a defendant has a deployed in-house solution in place and argues against the use of predictive coding before discovery has begun.

Traditionally, courts have been reluctant to bless or admonish particular technologies, ruling instead on the reasonableness of an organization’s process and relying on expert testimony for issues beyond that scope.  Predictive coding is expected to follow suit; however, because so little is understood about how the technology works, it has generated interest in a way the legal technology industry has not seen before, as evidenced by this tactical program.

* * *

The hypothetical dispute is a complex litigation matter pending in a U.S. District Court involving a large public corporation that has been sued by a smaller high-tech competitor for alleged anticompetitive conduct, unfair competition and various business torts.  The plaintiff has filed discovery requests that include documents and communications maintained by the defendant corporation’s vast international sales force.  To expedite discovery and level the playing field in terms of resources and costs, the plaintiff has requested the use of predictive coding to identify and produce responsive documents.  The defendant, wary of the latest (and untested) eDiscovery technology trends, argues that the organization already has a comprehensive eDiscovery program in place.  The defendant will further argue that its in-house technological investment and defensible processes are more than sufficient for comprehensive discovery, and in fact were designed to implement a repeatable and defensible discovery program.  The defendant’s methodology is estimated to take months and result in the typical massive production set, whereas predictive coding would allegedly make for a shorter discovery period.  Because of the burden, the defendant plans to shift some of these costs to the plaintiff.

Ralph Losey will serve as the Magistrate Judge.  Defense counsel will be Martin T. Tully (partner, Katten Muchin Rosenman LLP), with Karl Schieneman (of Review Less/ESI Bytes) as the corporation’s litigation support manager; plaintiff’s counsel will be Sean Byrne (eDiscovery solutions director at Axiom), with Herb Roitblat (of OrcaTec) as plaintiff’s eDiscovery consultant.

As the hottest topic in the eDiscovery world, the promises of predictive coding include: increased search accuracy for relevant documents, decreased cost and time spent for manual review, and possibly greater insight into an organization’s corpus of data allowing for more strategic decision making with regard to early case assessment.  The practical implications of predictive coding use are still to be determined and programs like this one will flesh out some of those issues before they get to the courts, which is good for practitioners and judges alike.  Stay tuned for an analysis of the arguments, as well as a link to the video.

Courts Increasingly Cognizant of eDiscovery Burdens, Reject “Gotcha” Sanctions Demands

Friday, May 18th, 2012

Courts are becoming increasingly cognizant of the eDiscovery burdens that the information explosion has placed on organizations. Indeed, the cases from 2012 are piling up in which courts have rejected demands that sanctions be imposed for seemingly reasonable information retention practices. The recent case of Grabenstein v. Arrow Electronics (D. Colo. April 23, 2012) is another notable instance of this trend.

In Grabenstein, the court refused to sanction a company for eliminating emails pursuant to a good faith document retention policy. The plaintiff had argued that drastic sanctions (evidentiary, adverse inference and monetary) should be imposed on the company since relevant emails regarding her alleged disability were not retained, in violation of both its eDiscovery duties and an EEOC regulatory retention obligation. The court disagreed, finding that sanctions were inappropriate because the emails were deleted before the duty to preserve was triggered: “Plaintiff has not provided any evidence that Defendant deleted e-mails after the litigation hold was imposed.”

Furthermore, the court declined to issue sanctions of any kind even though it found that the company deleted emails in violation of its EEOC regulatory retention duty. The court adopted this seemingly incongruous position because the emails were overwritten pursuant to a reasonable document retention policy:

“there is no evidence to show that the e-mails were destroyed in other than the normal course of business pursuant to Defendant’s e-mail retention policy or that Defendant intended to withhold unfavorable information from Plaintiff.”

The Grabenstein case reinforces the principle that reasonable information retention and eDiscovery processes can and often do trump sanctions requests. Just like the defendant in Grabenstein, organizations should develop and follow a retention policy that eliminates data stockpiles before litigation is reasonably anticipated. Grabenstein also demonstrates the value of deploying a timely and comprehensive litigation hold process to ensure that relevant electronically stored information (ESI) is retained once a preservation duty is triggered. These principles are consistent with various other recent cases, including a decision last month in which pharmaceutical giant Pfizer defeated a sanctions motion by relying on its “good faith business procedures” to eliminate legacy materials before a duty to preserve arose.

The Grabenstein holding also spotlights the role that proportionality can play in determining the extent of a party’s preservation duties. The Grabenstein court reasoned that sanctions would be inappropriate since plaintiff managed to obtain the destroyed emails from an alternative source. Without expressly mentioning “proportionality,” the court implicitly drew on Federal Rule of Civil Procedure 26(b)(2)(C) to reach its “no harm, no foul” approach to plaintiff’s sanctions request. Rule 26(b)(2)(C)(i) empowers a court to limit discovery when it is “unreasonably cumulative or duplicative, or can be obtained from some other source that is more convenient, less burdensome, or less expensive.” Given that plaintiff actually had the emails in question and there was no evidence suggesting other ESI had been destroyed, proportionality standards tipped the scales against the sanctions request.

The Grabenstein holding is good news for organizations looking to reduce their eDiscovery costs and burdens. By refusing to accede to a tenuous sanctions motion and by following principles of proportionality, the court sustained reasonableness over “gotcha” eDiscovery tactics. If courts adhere to the Grabenstein mantra that preservation and production should be reasonable and proportional, organizations truly stand a better chance of seeing their litigation costs and burdens reduced accordingly.

Will Predictive Coding Live Up to the eDiscovery Hype?

Monday, May 14th, 2012

The myriad of published material regarding predictive coding technology has almost universally promised reduced costs and lighter burdens for the eDiscovery world. Indeed, until the now famous order was issued in the Da Silva Moore v. Publicis Groupe case “approving” the use of predictive coding, many in the industry had parroted this “lower costs/lighter burdens” mantra like the retired athletes who chanted “tastes great/less filling” during the 1970s Miller Lite commercials. But a funny thing happened on the way to predictive coding satisfying the cost cutting mandate of Federal Rule of Civil Procedure 1: the same old eDiscovery story of high costs and lengthy delays is plaguing the initial outlay of this technology. The three publicized cases involving predictive coding are particularly instructive on this early, but troubling, development.

Predictive Coding Cases

In Da Silva Moore v. Publicis Groupe, the plaintiffs’ attempt to recuse Judge Peck has diverted the spotlight from the costs and delays associated with the use of predictive coding. Indeed, the parties have been wrangling for months over the parameters of using this technology for defendant MSL’s document review. During that time, each side has incurred substantial attorney fees and other costs to address fairly routine review issues. This tardiness figures to continue, as the parties now project that MSL’s production will not be complete until September 7, 2012. Even that date seems too sanguine, particularly given Judge Peck’s recent observation about the slow pace of production: “You’re now woefully behind schedule already at the first wave.” Moreover, Judge Peck has suggested on multiple occasions that a special master be appointed to address disagreements over relevance designations. Special masters, production delays, additional briefings and related court hearings all lead to the inescapable conclusion that the parties will be saddled with a huge eDiscovery bill (despite presumptively lower review costs) due to the use of predictive coding technology.

The Kleen Products v. Packing Corporation case is also plagued by cost and delay issues. As explained in our post on this case last month, the plaintiffs are demanding a “do-over” of the defendants’ document production, insisting that predictive coding technology be used instead of keyword search and other analytical tools. Setting aside plaintiffs’ arguments, the costs the parties have incurred in connection with this motion are quickly mounting. After submitting briefings on the issues, the court has now held two hearings on the matter, including a full day of testimony from the parties’ experts. With another “Discovery Hearing” now on the docket for May 22nd, predictive coding has essentially turned an otherwise routine document production query into an expensive, time consuming sideshow with no end in sight.

Cost and delay issues may very well trouble the parties in the Global Aerospace v. Landow Aviation matter, too. In Global Aerospace, the court acceded to the defendants’ request to use predictive coding technology over the plaintiffs’ objections. Despite allowing the use of such technology, the court provided plaintiffs with the opportunity to challenge the “completeness or the contents of the production or the ongoing use of predictive coding technology.” Such a condition essentially invites plaintiffs to re-litigate their objections through motion practice. Moreover, like the proverbial “exception that swallows the rule,” the order allows for the possibility that the court could withdraw its approval of predictive coding technology. All of which could lead to seemingly endless discovery motions, production “re-dos” and inevitable cost and delay issues.

Better Times Ahead?

At present, the Da Silva Moore, Kleen Products and Global Aerospace cases do not suggest that predictive coding technology will “secure the just, speedy, and inexpensive determination of every action and proceeding.” Nevertheless, there is room for considerable optimism that predictive coding will ultimately succeed. Technological advances in the industry will provide greater transparency into the black box of predictive coding technology that to date has not existed. Additional advances should also lead to easy-to-use workflow management consoles, which will in turn increase defensibility of the process and satisfy legitimate concerns regarding production results, such as those raised by the plaintiffs in Moore and Global Aerospace.

Technological advances that also increase the accuracy of first generation predictive coding tools should yield greater understanding and acceptance about the role predictive coding can play in eDiscovery. As lawyers learn to trust the reliability of transparent predictive coding, they will appreciate how this tool can be deployed in various scenarios (e.g., prioritization, quality assurance for linear review, full scale production) and in connection with existing eDiscovery technologies. In addition, such understanding will likely facilitate greater cooperation among counsel, a lynchpin for expediting the eDiscovery process. This is evident from the Moore, Kleen Products and Global Aerospace cases, where a lack of cooperation has caused increased costs and delays.

With the promise of transparency and simpler workflows, predictive coding technology should eventually live up to its billing of helping organizations discover their information in an efficient, cost effective and defensible manner.  As for now, the “promise” of first generation predictive coding tools appears to be nothing more than that, leaving organizations looking like the cash-strapped “Monopoly man,” wondering where their litigation dollars have gone.