
Archive for the ‘inaccessibility’ Category

Look Before You Leap! Avoiding Pitfalls When Moving eDiscovery to the Cloud

Monday, May 7th, 2012

It’s no surprise that the eDiscovery frenzy gripping the American legal system over the past decade has become increasingly expensive.  Particularly costly to organizations is the process of preserving and collecting documents, a fact repeatedly emphasized by the Advisory Committee in its report regarding the 2006 amendments to the Federal Rules of Civil Procedure (FRCP).  These aspects of discovery are often lengthy and can be disruptive to business operations.  Just as troubling, they increase the duration and expense of litigation.

Because these costs and delays affect the courts as well as clients, it comes as no surprise that judges have heightened their expectations for how organizations store, manage and discover their electronically stored information (ESI).  Gone are the days when enterprises could plead ignorance for not preserving or producing their data in an efficient, cost-effective and defensible manner.  Organizations must now follow best practices – both before and during litigation – if they are to safely navigate the stormy seas of eDiscovery.

The importance of deploying such practices applies acutely to those organizations that are exploring “cloud”-based alternatives to traditional methods for preserving and producing electronic information.  Under the right circumstances, the cloud may represent a fantastic opportunity to streamline the eDiscovery process for an organization.  Yet it could also turn into a dangerous liaison if the cloud offering is not properly scrutinized for basic eDiscovery functionality.  Indeed, the City of Los Angeles’s recent decision to partially disengage from its cloud service provider exemplifies this admonition to “look before you leap” to the cloud.  Thus, before selecting a cloud provider for eDiscovery, organizations should be particularly careful to ensure that a provider has the ability both to efficiently retrieve data from the cloud and to issue litigation hold notices.

Effective Data Retrieval Requires Efficient Data Storage

The hype surrounding the cloud has generally focused on the opportunity for cheap and unlimited storage of information.  Storage, however, is only one of many factors to consider in selecting a cloud-based eDiscovery solution.  To be able to meet the heightened expectations of courts and regulatory bodies, organizations must have the actual – not theoretical – ability to retrieve their data in real time.  Otherwise, they may not be able to satisfy eDiscovery requests from courts or regulatory bodies, let alone the day-to-day demands of their operations.

A key step to retrieving company data in a timely manner is to first confirm whether the cloud offering can intelligently organize that information such that organizations can quickly respond to discovery requests and other legal demands.  This includes the capacity to implement and observe company retention protocols.  Just like traditional data archiving software, the cloud must enable automated retention rules and thus limit the retention of information to a designated time period.  This will enable data to be expired once it reaches the end of that period.
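
To make the idea concrete, here is a minimal sketch (in Python, with assumed record categories and retention periods) of how an automated retention rule might flag archived items for expiry once they age past their designated period. It illustrates the concept only and does not describe any particular cloud product.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods by record category; the categories and
# durations are assumptions for this sketch, not any vendor's defaults.
RETENTION_PERIODS = {
    "email": timedelta(days=3 * 365),
    "invoice": timedelta(days=7 * 365),
    "draft": timedelta(days=90),
}

def is_expired(category, created, now=None):
    """True once a record has aged past its category's retention period."""
    now = now or datetime.now(timezone.utc)
    period = RETENTION_PERIODS.get(category)
    if period is None:
        return False  # unknown categories are retained by default
    return now - created > period

def expirable(records):
    """Yield the records an automated retention job could expire."""
    for rec in records:
        if is_expired(rec["category"], rec["created"]):
            yield rec
```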

The pool of data can be further decreased through single instance storage.  This deduplication technology eliminates redundant data by preserving only a master copy of each document placed into the cloud.  This will reduce the amount of data that needs to be identified, preserved, collected and reviewed as part of any discovery process.  For while unlimited data storage may seem ideal now, reviewing unlimited amounts of data will quickly become a logistical and costly nightmare.
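
The deduplication idea can be pictured with a short, purely hypothetical sketch: documents are hashed by content, so identical copies resolve to a single stored master and later copies become mere references. The class and method names are invented for illustration.

```python
import hashlib

class SingleInstanceStore:
    """Toy content-addressed store: one master copy per unique document body."""

    def __init__(self):
        self._masters = {}   # content digest -> document bytes
        self._refs = {}      # item id -> content digest

    def ingest(self, item_id, body):
        digest = hashlib.sha256(body).hexdigest()
        if digest not in self._masters:
            self._masters[digest] = body     # first copy becomes the master
        self._refs[item_id] = digest         # duplicates only add a reference
        return digest

    def unique_count(self):
        return len(self._masters)

store = SingleInstanceStore()
store.ingest("msg-001", b"Q3 forecast attached")
store.ingest("msg-002", b"Q3 forecast attached")  # duplicate: nothing new stored
assert store.unique_count() == 1
```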

Any viable cloud offering should also have the ability to suspend automated document retention/deletion rules to ensure the adequate preservation of relevant information.  This goes beyond placing a hold on archival data in the cloud.  It requires that an organization have the ability to identify the data sources in the cloud that may contain relevant information and then modify aspects of its retention policies to ensure that cloud-stored data is retained for eDiscovery.  Taking this step will enable an organization to create a defensible document retention strategy and be protected from court sanctions under the Federal Rule of Civil Procedure 37(e) “safe harbor.”  The decision from Viramontes v. U.S. Bancorp (N.D. Ill. Jan. 27, 2011) is particularly instructive on this issue.
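
One way to picture the requirement to suspend deletion when a hold applies is the hedged sketch below, which builds on the expiry idea sketched earlier: the purge job consults the active legal holds before deleting anything, so held custodians and matters are carved out of the automated deletion rule. The hold structure is an assumption made purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class LegalHold:
    matter: str
    custodians: set = field(default_factory=set)  # user ids subject to the hold
    keywords: set = field(default_factory=set)    # lowercase terms that flag held content

def on_hold(record, holds):
    """True if any active hold covers the record's custodian or content."""
    text = record.get("text", "").lower()
    return any(
        record.get("custodian") in hold.custodians
        or any(term in text for term in hold.keywords)
        for hold in holds
    )

def purge_expired(records, holds):
    """Yield expired records that are safe to delete; held records are skipped."""
    for rec in records:
        if rec.get("expired") and not on_hold(rec, holds):
            yield rec
```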

In Viramontes, the defendant bank defeated a sanctions motion because it timely modified aspects of its email retention policy.  The bank implemented a policy that kept emails for 90 days, after which the emails were deleted.  That policy was promptly suspended, however, once litigation was reasonably foreseeable.  Because the bank followed that procedure in good faith, it was protected from sanctions under Rule 37(e).

As the Viramontes case shows, an organization can be prepared for eDiscovery disputes by appropriately suspending aspects of its document retention policies.  By creating and then faithfully observing a policy that requires retention policies be suspended on the occurrence of litigation or other triggering event, an organization can develop a defensible retention procedure. Having such eDiscovery functionality in a cloud provider will likely facilitate an organization’s eDiscovery process and better insulate it from litigation disasters.

The Ability to Issue Litigation Hold Notices

To be effective for eDiscovery purposes, a cloud service provider must also enable an organization to deploy a litigation hold to prevent users from destroying data. Unless the cloud has litigation hold technology, the entire discovery process may very well collapse.  For electronic data to be produced in litigation, it must first be preserved.  And it cannot be preserved if the key players or data source custodians are unaware that such information must be retained.  Indeed, employees and data sources may discard and overwrite electronically stored information if they are oblivious to a preservation duty.

A cloud service provider should therefore enable automated legal hold acknowledgements.  Such technology will allow custodians to be promptly and properly notified of litigation and thereby retain information that might otherwise have been discarded.  Inadequate litigation hold technology leaves organizations vulnerable to data loss and court punishment.
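
A rough sketch of what automated hold notification and acknowledgement tracking might look like follows; the delivery mechanism and names are hypothetical, but the workflow (notify each custodian, record acknowledgements, and surface anyone who has not responded) is the point.

```python
from datetime import datetime, timezone

class HoldNotifier:
    """Toy tracker for legal hold notices and custodian acknowledgements."""

    def __init__(self, send_message):
        self.send_message = send_message  # injected delivery function (email, portal, etc.)
        self.notified = {}                # custodian -> time the notice was sent
        self.acknowledged = {}            # custodian -> time of acknowledgement

    def issue_hold(self, matter, custodians):
        for person in custodians:
            self.send_message(person, f"Legal hold: preserve all documents related to {matter}")
            self.notified[person] = datetime.now(timezone.utc)

    def acknowledge(self, custodian):
        self.acknowledged[custodian] = datetime.now(timezone.utc)

    def outstanding(self):
        """Custodians who were notified but have not yet acknowledged the hold."""
        return sorted(set(self.notified) - set(self.acknowledged))
```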

Conclusion

Confirming that a cloud offering can quickly retrieve and efficiently store enterprise data while effectively deploying litigation hold notices will likely address the basic concerns regarding its eDiscovery functionality. Yet these features alone will not make that solution a model eDiscovery cloud offering. Advanced search capabilities should also be included to reduce the amount of data that must be analyzed and reviewed downstream. In addition, the cloud ought to support load files in compatible formats for export to third party review software. The cloud should additionally provide an organization with a clear audit trail establishing that neither its documents nor their metadata were modified when transmitted to the cloud.  Without this assurance, an organization may not be able to comply with key regulations or establish the authenticity of its data in court. Finally, ensure that these provisions are memorialized in the service level agreement governing the relationship between the organization and the cloud provider.

Take Two and Call me in the Morning: U.S. Hospitals Need an Information Governance Remedy

Wednesday, April 11th, 2012

Given the vast amount of sensitive information and legal exposure faced by hospitals today, it’s a mystery why these organizations aren’t taking advantage of enabling technologies to minimize risk. Compliance with both HIPAA and the HITECH Act is often achieved by manual, ad hoc methods, which are hazardous at best. In the past, state and federal auditing environments have not been very aggressive in ensuring compliance, but that is changing. While many hospitals have invested in high tech records management systems (EMR/EHR), those systems do not encompass the entire information and data environment within a hospital. Sensitive information often finds its way into and onto systems outside the reach of EMR/EHR systems, bringing with it increased exposure to security breach and legal liability.

This information overload often metastasizes into email (both hospital and personal), attachments, portable storage devices, file, web and development servers, desktops and laptops, home or affiliated practice’s computers and mobile devices such as iPads and smart phones. These avenues for the dissemination and receipt of information expand the information governance challenge and data security risks. Surprisingly, the feedback from the healthcare sector suggests that hospitals rarely get sued in federal court.

One place hospitals do not want to be is the “Wall of Shame,” otherwise known as the HHS website that, as of June 9, 2011, had detailed 281 Health Insurance Portability and Accountability Act (HIPAA) security breaches, each affecting more than 500 individuals. Overall, physical theft and loss accounted for about 63% of the reported breaches. Unauthorized access/disclosure accounted for another 16%, while hacking was only 6%. While Software Advice reasons that these statistics indicate physical theft has caused the majority of breaches, it should also be considered that, given the lack of data loss prevention technology, many hospitals are unaware of breaches that have occurred and therefore cannot report them.

There are a myriad of reasons hospitals aren’t landing on the front page of the newspaper with the same frequency as other businesses and government agencies when it comes to security breach, and document retention and eDiscovery blunders. But, the underlying contagion is not contained and it certainly is not benign. Feedback from the field reveals some alarming symptoms of the unhealthy state of healthcare information governance, including:

  • uncontrolled .pst files
  • exploding storage growth
  • missing or incomplete data retention rules
  • doctors/nurses storing and sending sensitive data via their personal email, iPads and smartphones
  • encryption rules that rely on individuals to determine what to encrypt
  • data backup policies that differ from data retention and information governance rules
  • little to no compliance training
  • and, in many cases, non-existent data loss prevention efforts.

This results in the need for more storage, while creating larger legal liability, an indefensible eDiscovery posture, and the risk of breach.

The reason this problem remains latent in most hospitals is that they are not yet feeling the pain from massive and multiple lawsuits, large invoices from outside law firms, or the operational challenges and costs incurred from searching through mountains of dispersed data.  The symptoms are observable, the pathology is present, the problem is real and the pain is about to acutely present itself as more states begin to deeply embrace eDiscovery requirements and government regulators increase audit frequency and fine amounts. Another, less talked about, reason hospitals have not had the same pressure to search and produce their data pursuant to litigation is that cases are often settled before they even reach the discovery stage. The lack of well-developed information governance practices leads to cases being settled too soon and for too much money, when they otherwise may not have needed to settle at all.

The Patient’s Symptoms Were Treated, but the Patient’s Data Still Needs Medicine

What is still unclear is why hospitals, given their compliance requirements and tightening IT budgets, aren’t archiving, classifying, and protecting their data with the same type of innovation they are demonstrating in their cutting edge patient care technology. In this realm, two opposite ends of the IT innovation spectrum seem to co-exist in the hospital’s data environment. This dichotomy leaves much of a hospital’s data unprotected, unorganized and uncontrolled. Hospitals are experiencing increasing data security breaches and often are not aware that a breach or data loss has occurred. As more patient data is created and copied in electronic format, used in and exposed by an increasing number of systems and delivered on emerging mobile platforms, the legal and audit risks are compounding on top of a faulty or missing information governance foundation.

Many hospitals have no retention schedules or data classification rules applied to existing information, which often results in a checkbox compliance mentality and a keep-everything-forever practice. Additionally, many hospitals have no ability to apply a comprehensive legal hold across different data sources and lack technology to stop or alert them when there has been a breach.

Information Governance and Data Health in Hospitals

With the mandated push for paper to be converted to digital records, many hospitals are now evaluating the interplay of their various information management and distribution systems. They must consider the newly scanned legacy data (or soon to be scanned), and if they have been operating without an archive, they must now look to implement a searchable repository where they can collectively apply document retention and records management while decreasing the amount of storage needed to retain the data.  We are beginning to see internal counsel leading the way to make this initiative happen across business units. Different departments are coming together to pool resources in tight economic and high regulation times that require collaboration.  We are at the beginning of a widespread movement in the healthcare industry for archiving, data classification and data loss prevention as hospitals link their increasing compliance and data loss requirements with the need to optimize and minimize storage costs. Finally, it comes as no surprise that the amount of data hospitals are generating is crippling their infrastructures, breaking budgets and serving as the primary motivator for change absent lawsuits and audits.

These factors are bringing together various stakeholders into the information governance conversation, helping to paint a very clear picture that putting in place a comprehensive information governance solution is in the entire hospital’s best interest. The symptoms are clear, the problem is treatable, the prescription for information governance is well proven. Hospitals can begin this process by calling an information governance meeting with key stakeholders and pursuing an agenda set around examining their data map and assessing areas of security vulnerability, as well as auditing the present state of compliance with regulations for the healthcare industry.

Editor’s note: This post was co-authored with Eric Heck, Healthcare Account Manager at Symantec.  Eric has over 25 years of experience in applying technology to emerging business challenges, and currently works with healthcare providers and hospitals to manage the evolving threat landscape of compliance, security, data loss and information governance within operational, regulatory and budgetary constraints.

Big Data Decisions Ahead: Government-Sponsored Town Hall Meeting for eDiscovery Industry Coincides With Federal Agency Deadline

Wednesday, February 29th, 2012

Update For Report Submission By Agencies

We are fast approaching the March 27, 2012 deadline for federal agencies to submit their reports to the Office of Management and Budget and the National Archives and Records Administration (NARA) to comply with the Presidential Mandate on records management. We are only at the inception, as we look to a very exciting public town hall meeting in Washington, D.C. – also scheduled for March 27, 2012. This meeting is primarily focused on gathering input from the public sector community, the vendor/IT community, and members of the public at large. Ultimately, NARA will issue a directive that will outline a centralized approach for the federal government for managing records and eDiscovery.

Agencies have been tight-lipped about how far along they are in the process of evaluating their workflows and tools for managing their information (both electronic and paper). There is, however, some empirical data from an InformationWeek survey conducted last year that takes the temperature of where top IT professionals within the government have their sights set, and the Presidential Mandate should bring some of these concerns to the forefront of the reports. For example, the #1 business driver for migrating to the cloud – cited by 62% of respondents – was cost, while 77% of respondents said their biggest concern was security. Nonetheless, 46% were still highly likely to migrate to a private cloud.

Additionally, as part of the Federal Data Center Consolidation Initiative, agencies are looking to eliminate 800 data centers. While the cost savings are clear, from an information governance viewpoint it’s hard not to ask what the government plans to do with all of those records.  Clearly, this shift, should it happen, will force the government into a more service-based management approach, as opposed to the traditional asset-based management approach. Some agencies have already migrated to the cloud. This is squarely in line with the Opex-over-Capex approach emerging for efficiency and cost savings.

Political Climate Unknown

Another major concern that will affect any decisions or policy implementation within the government is, not surprisingly, politics. Luckily, regardless of political party affiliation, it seems to be broadly agreed that the combination of IT spend in Washington, D.C. and the government’s slow move to properly manage electronic records is a problem. Two of the many examples of the problem are the inability to issue effective litigation holds and the failure to respond to Freedom of Information Act (FOIA) requests in a timely and complete manner. Even so, the political agenda of the Republican party may affect the prioritization of the Democratic President’s mandate, and efforts could be derailed by a potential change in administration.

Given the election year and the heavy analysis required to produce the report, there is a sentiment in Washington that all of this work may be for naught if the appropriate resources cannot be secured then allocated to effectuate the recommendations. The reality is that data is growing at an unprecedented rate, and the need for the intelligent management of information is no longer deniable. The long term effects of putting this overhaul on the back burner could be disastrous. The government needs a modular plan and a solid budget to address the problem now, as they are already behind.

VanRoekel’s Information Governance

One issue on which Democrats and Republicans will likely not agree in accomplishing the mandate is the almighty budget, and the technology the government must purchase in order to make the necessary changes is going to cost a pretty penny.  Steven VanRoekel, the Federal CIO, stated upon the release of the FY 2013 $78.8 billion IT budget:

“We are also making cyber security a cross-agency, cross-government priority goal this year. We have done a good job in ramping up on cyber capabilities agency-by-agency, and as we come together around this goal, we will hold the whole of government accountable for cyber capabilities and examine threats in a holistic way.”

His quote indicates a top-down priority on evaluating IT holistically, which dovetails nicely with the presidential mandate, since security and records management are only two parts of the entire information governance picture. Each agency still has its own work cut out for it across the EDRM. One of the most pressing issues in the upcoming reports will be what each agency decides to bring in-house or to continue outsourcing. This decision will in part depend on whether the inefficiencies identified lead agencies to conclude that they can perform those functions for less money and more efficiently than their contractors.  In evaluating their present capabilities, each agency will need to look at what workflows and technologies they currently have deployed across divisions, what they presently outsource, and what the marketplace potentially offers them today to address their challenges.

The reason this question is central is that it raises an all-important question about information governance itself.  Information governance inherently implies that an organization or government controls most or all aspects of the EDRM model in order to derive the benefits of security, storage, records management and eDiscovery capabilities. Presently, the government is outsourcing many of its litigation services to third party companies that have essentially become de facto government agencies.  This is partly due to scalability issues, and partly because the resources and technologies that are deployed in-house within these agencies are inadequate to properly execute a robust information governance plan.

Conclusion

The ideal scenario for each government agency to comply with the mandate would be to deploy automated classification for records management, archiving with expiration appropriately implemented for more than just email, and finally, some level of eDiscovery capability in order to conduct early case assessment and easily produce data for FOIA.  The level of early case assessment needed by each agency will vary, but the general idea would be that before contacting a third party to conduct data collection, the scope of an investigation or matter could be determined in-house.  All things considered, the question remains whether the Obama administration will foot this bill or whether we will have to wait for a bigger price tag later down the road.  Either way, the government will have to come up to speed and make these changes eventually, and the town hall meeting should be an accurate thermometer of where the government stands.

Judge Peck Issues Order Addressing “Joint Predictive Coding Protocol” in Da Silva Moore eDiscovery Case

Thursday, February 23rd, 2012

Litigation attorneys were abuzz last week when a few breaking news stories erroneously reported that The Honorable Andrew J. Peck, United States Magistrate Judge for the Southern District of New York, ordered the parties in a gender discrimination case to use predictive coding technology during discovery.  Despite early reports, the parties in the case (Da Silva Moore v. Publicis Group, et al.) actually agreed to use predictive coding technology during discovery – apparently of their own accord.  The case is still significant because predictive coding technology in eDiscovery is relatively new to the legal field, and many have been reluctant to embrace a new technological approach to document review due to, among other things, a lack of judicial guidance.

Unfortunately, despite this atmosphere of cooperation, the discussion stalled when the parties realized they were miles apart in terms of defining a mutually agreeable predictive coding protocol.  A February status conference transcript reveals significant confusion and complexity related to issues such as random sampling, quality control testing, and the overall process integrity.  In response, Judge Peck ordered the parties to submit a Joint Protocol for eDiscovery to address eDiscovery generally and the use of predictive coding technology specifically.

The parties submitted their proposed protocol on February 22, 2012 and Judge Peck quickly reduced that submission to a stipulation and order.  The stipulation and order certainly provides more clarity and insight into the process than the status conference transcript.  However, reading the stipulation and order leaves little doubt that the devil is in the details – and there are a lot of details.  Equally clear is the fact that the parties are still in disagreement and the plaintiffs do not support the “joint” protocol laid out in the stipulation and order.  Plaintiffs actually go so far as to incorporate a paragraph into the stipulation and order stating that they “object to this ESI Protocol in its entirety” and they “reserve the right to object to its use in the case.”

These problems underscore some of the points made in a Forbes article published earlier this week titled “Federal Judges Consider Important Issues That Could Shape the Future of Predictive Coding Technology.”  The Forbes article relies in part on a recent predictive coding survey to make the point that, while predictive coding technology has tremendous potential, the solutions need to become more transparent and the workflows must be simplified before they go mainstream.

Breaking News: Federal Circuit Denies Google’s eDiscovery Mandamus Petition

Wednesday, February 8th, 2012

The U.S. Court of Appeals for the Federal Circuit dealt Google a devastating blow Monday in connection with Oracle America’s patent and copyright infringement suit against Google involving features of Java and Android. The Federal Circuit affirmed the district court’s order that a key email was not entitled to protection under the attorney-client privilege.

Google had argued that the email was privileged under Upjohn Co. v. United States, asserting that the message reflected discussions about litigation strategy between a company engineer and in-house counsel. While acknowledging that Upjohn would protect such discussions, the court rejected that characterization of the email.  Instead, the court held that the email reflected a tactical discussion about “negotiation strategy” with Google management, not an “infringement or invalidity analysis” with Google counsel.

Getting beyond the core privilege issues, Google might have avoided this dispute had it withheld the eight earlier drafts of the email that it produced to Oracle. As we discussed in our previous post, organizations conducting privilege reviews should consider using robust, next generation eDiscovery technology, such as email analytical software, which could have isolated the drafts and potentially removed them from production. Other technological capabilities, such as Near Duplicate Identification, could also have helped identify draft materials and marry them up with finals marked as privileged. As this case shows, in the fast-moving era of eDiscovery, having the right technology is essential for maintaining a strategic advantage in litigation.

Breaking News: Pippins Court Affirms Need for Cooperation and Proportionality in eDiscovery

Tuesday, February 7th, 2012

The long awaited order regarding the preservation of thousands of computer hard drives in Pippins v. KPMG was finally issued last week. In a sharply worded decision, the Pippins court overruled KPMG’s objections to the magistrate’s preservation order and denied its motion for protective order. The firm must now preserve the hard drives of certain former and departing employees unless it can reach an agreement with the plaintiffs on a methodology for sampling data from a select number of those hard drives.

Though easy to get caught up in the opinion’s rhetoric (“[i]t smacks of chutzpah (no definition required) to argue that the Magistrate failed to balance the costs and benefits of preservation . . .”), the Pippins case confirms the importance of both cooperation and proportionality in eDiscovery. With respect to cooperation, the court emphasized that parties should take reasonable positions in discovery so as to reach mutually agreeable results. The order also stressed the importance of communicating with the court to clarify discovery obligations.  In that regard, the court faulted the parties and the magistrate for not seeking the court’s clarification with respect to its prior order staying discovery. The court explained that the discovery stay – which KPMG had understood to prevent any sampling of the hard drives – could have been partially lifted to allow for sampling. And this, in turn, could have obviated the costs and delays associated with the motion practice on this matter.

Regarding proportionality, the court confirmed the importance of this doctrine in determining the scope of preservation. Indeed, the court declared that proportionality is typically “determinative” of a motion for protective order. Nevertheless, the court could not engage in a proportionality analysis – weighing the benefits of preserving the hard drives against its burdens – as the defendant had not yet produced any evidence from the hard drives to evaluate the nature of the evidence. Only after the evidence from a sampling of hard drives had been produced and evaluated could such a determination be made.

The Pippins case demonstrates that courts have raised their expectations for how litigants will engage in eDiscovery. Staking out unreasonable positions in the name of zealous advocacy stands in stark contrast to the clear trend that discovery should comply with the cost cutting mandate of Federal Rule 1. Cooperation and proportionality are two of the principal touchstones for effectuating that mandate.

Losing Weight, Developing an Information Governance Plan, and Other New Year’s Resolutions

Tuesday, January 17th, 2012

It’s already a few weeks into the new year and it’s easy to spot the big lines at the gym, folks working on fad diets and many swearing off any number of vices.  Sadly perhaps, most popular resolutions don’t even really change year after year.  In the corporate world, though, it’s not good enough to simply recycle resolutions every year since there’s a lot more at stake, often with employees’ bonuses and jobs hanging in the balance.

It’s not too late to make information governance part of the corporate 2012 resolution list.  The reason is pretty simple – most companies need to get out of the reactive firefighting of eDiscovery given the risks of sloppy work, inadvertent productions and looming sanctions.  Yet, so many are caught up in the fog of the eDiscovery war that they’ve failed to see the nexus between proactive, upstream data management hygiene and downstream eDiscovery chaos.

In many cases the root cause is the disconnect between differing functional groups (Legal, IT, Information Security, Records Management, etc.).  This is where the emerging umbrella concept of Information Governance comes into play, serving as a way to tackle these information risks along a unified front. Gartner defines information governance as the:

“specification of decision rights, and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archiving and deletion of information, … [including] the processes, roles, standards, and metrics that ensure the effective and efficient use of information to enable an organization to achieve its goals.”

Perhaps more simply put, what were once a number of distinct disciplines—records management, data privacy, information security and eDiscovery—are rapidly coming together in ways that are important to those concerned with mitigating and managing information risk. This new information governance landscape is comprised of a number of formerly discrete categories:

  • Regulatory Risks – Whether an organization is in a heavily regulated vertical or not, there are a host of regulations that an organization must navigate to successfully stay in compliance.  In the United States these include a range of disparate regimes, including the Sarbanes-Oxley Act, HIPAA, the Securities Exchange Act, the Foreign Corrupt Practices Act (FCPA) and other specialized regulations – any number of which require information to be kept in a prescribed fashion, for specified periods of time.  Failure to turn over information when requested by regulators can have dramatic financial consequences, as well as negative impacts on an organization’s reputation.
  • Discovery Risks – Under the discovery realm there are any number of potential risks as a company moves along the EDRM spectrum (i.e., Identification, Preservation, Collection, Processing, Analysis, Review and Production), but the most lethal risk is typically associated with spoliation sanctions that arise from the failure to adequately preserve electronically stored information (ESI).  There have been literally hundreds of cases where both plaintiffs and defendants have been caught in the judicial crosshairs, resulting in penalties ranging from outright case dismissal to monetary sanctions in the millions of dollars, simply for failing to preserve data properly.  It is in this discovery arena that the failure to dispose of corporate information, where possible, rears its ugly head since the eDiscovery burden is commensurate with the amount of data that needs to be preserved, processed and reviewed.  Some statistics show that it can cost as much as $5 per document just to have an attorney privilege review performed.  And, with every gigabyte containing upwards of 75,000 pages, it is easy to see massive discovery liability when an organization has terabytes and even petabytes of extraneous data lying around (see the rough calculation after this list).
  • Privacy Risks – Even though the US has a relatively lax information privacy climate, there are any number of laws that require companies to notify customers if their personally identifiable information (PII), such as credit card, social security, or credit numbers, has been compromised.  For example, California’s data breach notification law (SB1386) mandates that all subject companies must provide notification if there is a security breach of the electronic database containing PII of any California resident.  It is easy to see how unmanaged PII can increase corporate risk, especially as data moves beyond US borders to the international stage where privacy regimes are much more stringent.
  • Information Security Risks – Data breaches have become so commonplace that the loss/theft of intellectual property has become an issue for every company, small and large, both domestically and internationally.  The cost to businesses of unintentionally exposing corporate information climbed 7 percent last year to over $7 million per incident.  Recently senators asked the SEC to “issue guidance regarding disclosure of information security risk, including material network breaches” since “securities law obligates the disclosure of any material network breach, including breaches involving sensitive corporate information that could be used by an adversary to gain competitive advantage in the marketplace, affect corporate earnings, and potentially reduce market share.”  The senators cited a 2009 survey that concluded that 38% of Fortune 500 companies made a “significant oversight” by not mentioning data security exposures in their public filings.
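
A rough, back-of-the-envelope calculation using the figures cited above ($5 per document for privilege review and roughly 75,000 pages per gigabyte) shows how quickly review costs scale; the average document length is an assumption made only to complete the arithmetic.

```python
COST_PER_DOC = 5          # dollars per document for attorney privilege review (figure cited above)
PAGES_PER_GB = 75_000     # approximate pages contained in one gigabyte (figure cited above)
PAGES_PER_DOC = 3         # assumed average document length, for illustration only

def review_cost(gigabytes):
    documents = gigabytes * PAGES_PER_GB / PAGES_PER_DOC
    return documents * COST_PER_DOC

print(f"1 GB ~ ${review_cost(1):,.0f}")      # roughly $125,000
print(f"1 TB ~ ${review_cost(1024):,.0f}")   # roughly $128,000,000
```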

Information governance as an umbrella concept helps organizations to create better alignment between functional groups as they attempt to solve these complex and interrelated data risk challenges.  This coordination is even more critical given the way that corporate data is proliferating and migrating beyond the firewall.  With even more data located in the cloud and on mobile devices a key mandate is managing data in all types of form factors. A great first step is to determine ownership of a consolidated information governance approach where the owner can:

  • Get C-Level buy-in
  • Have the organizational savvy to obtain budget
  • Be able to define “reasonable” information governance efforts, which requires both legal and IT input
  • Have strong leadership and consensus building skills, because all stakeholders need to be on the same page
  • Understand the nuances of their business, since an overly rigid process will cause employees to work around the policies and procedures

Next, tap into and then leverage IT or information security budgets for archiving, compliance and storage.  In most progressive organizations there are likely ongoing projects that can be successfully massaged into a larger information governance play.  A great place to focus on initially is information archiving, since this is one of the simplest steps an organization can take to improve its information governance hygiene.  With an archive, organizations can systematically index, classify and retain information and thus establish a proactive approach to data management.  It’s this ability to apply retention and (most importantly) expiration policies that allows organizations to start reducing the upstream data deluge that will inevitably impact downstream eDiscovery processes.
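
The index-classify-retain step can be pictured with a small, purely illustrative rule set: each document is assigned a retention category (and therefore an expiration horizon) as it is ingested into the archive. The patterns and periods below are assumptions, not recommendations.

```python
import re

# Illustrative classification rules: first matching pattern wins.
CLASSIFICATION_RULES = [
    (re.compile(r"\b(invoice|purchase order)\b", re.I), ("finance-record", 7)),
    (re.compile(r"\b(patient|diagnosis)\b", re.I), ("health-record", 10)),
    (re.compile(r""), ("general-correspondence", 3)),  # catch-all default
]

def classify(document_text):
    """Assign a retention category and retention period (years) at ingest."""
    for pattern, (category, years) in CLASSIFICATION_RULES:
        if pattern.search(document_text):
            return {"category": category, "retain_years": years}

print(classify("Invoice #1042 for server maintenance"))
# {'category': 'finance-record', 'retain_years': 7}
```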

Once an archive is in place, the next logical step is to couple a scalable, reactive eDiscovery process with the upstream data sources, which will axiomatically include email, but increasingly should encompass cloud content, social media, unstructured data, etc.  It is important to make sure  that a given  archive has been tested to ensure compatibility with the chosen eDiscovery application to guarantee that it can collect content at scale in the same manner used to collect from other data sources.  Overlaying both of these foundational pieces should be the ability to place content on legal hold, whether that content exists in the archive or not.

As we enter 2012, there is no doubt that information governance should be an element in building an enterprise’s information architecture.  And, different from fleeting weight loss resolutions, savvy organizations should vow to get ahead of the burgeoning categories of information risk by fully embracing their commitment to integrated information governance.  And yet, this resolution doesn’t need to encompass every possible element of information governance.  Instead, it’s best to put foundational pieces into place and then build the rest of the infrastructure in methodical and modular fashion.

Lessons Learned for 2012: Spotlighting the Top eDiscovery Cases from 2011

Tuesday, January 3rd, 2012

The New Year has now dawned and with it, the certainty that 2012 will bring new developments to the world of eDiscovery.  Last month, we spotlighted some eDiscovery trends for 2012 that we feel certain will occur in the near term.  To understand how these trends will play out, it is instructive to review some of the top eDiscovery cases from 2011.  These decisions provide a roadmap of best practices that the courts promulgated last year.  They also spotlight the expectations that courts will likely have for organizations in 2012 and beyond.

Issuing a Timely and Comprehensive Litigation Hold

Case: E.I. du Pont de Nemours v. Kolon Industries (E.D. Va. July 21, 2011)

Summary: The court issued a stiff rebuke against defendant Kolon Industries for failing to issue a timely and proper litigation hold.  That rebuke came in the form of an instruction to the jury that Kolon executives and employees destroyed key evidence after the company’s preservation duty was triggered.  The jury responded by returning a stunning $919 million verdict for DuPont.

The spoliation at issue occurred when several Kolon executives and employees deleted thousands of emails and other records relevant to DuPont’s trade secret claims.  The court laid the blame for this destruction on the company’s attorneys and executives, reasoning that they could have prevented the spoliation through an effective litigation hold process.  At issue were three hold notices circulated to the key players and data sources.  The notices were all deficient in some manner.  They were either too limited in their distribution, ineffective since they were prepared in English for Korean-speaking employees, or too late to prevent or otherwise ameliorate the spoliation.

The Lessons for 2012: The DuPont case underscores the importance of issuing a timely and comprehensive litigation hold notice.  As DuPont teaches, organizations should identify what key players and data sources may have relevant information.  A comprehensive notice should then be prepared to communicate the precise hold instructions in an intelligible fashion.  Finally, the hold should be circulated immediately to prevent data loss.

Organizations should also consider deploying the latest technologies to help effectuate this process.  This includes an eDiscovery platform that enables automated legal hold acknowledgements.  Such technology will allow custodians to be promptly and properly apprised of litigation and thereby retain information that might otherwise have been discarded.

Another Must-Read Case: Haraburda v. Arcelor Mittal U.S.A., Inc. (D. Ind. June 28, 2011)

Suspending Document Retention Policies

Case: Viramontes v. U.S. Bancorp (N.D. Ill. Jan. 27, 2011)

Summary: The defendant bank defeated a sanctions motion because it modified aspects of its email retention policy once it was aware litigation was reasonably foreseeable.  The bank implemented a retention policy that kept emails for 90 days, after which the emails were overwritten and destroyed.  The bank also promulgated a course of action whereby the retention policy would be promptly suspended on the occurrence of litigation or other triggering event.  This way, the bank could establish the reasonableness of its policy in litigation.  Because the bank followed that procedure in good faith, it was protected from court sanctions under the Federal Rule of Civil Procedure 37(e) “safe harbor.”

The Lesson for 2012: As Viramontes shows, an organization can be prepared for eDiscovery disputes by timely suspending aspects of its document retention policies.  By modifying retention policies when so required, an organization can develop a defensible retention procedure and be protected from court sanctions under Rule 37(e).

Coupling those procedures with archiving software will only enhance an organization’s eDiscovery preparations.  Effective archiving software will have a litigation hold mechanism, which enables an organization to suspend automated retention rules.  This will better ensure that data subject to a preservation duty is actually retained.

Another Must-Read Case: Micron Technology, Inc. v. Rambus Inc., 645 F.3d 1311 (Fed. Cir. 2011)

Managing the Document Collection Process

Case: Northington v. H & M International (N.D.Ill. Jan. 12, 2011)

Summary: The court issued an adverse inference jury instruction against a company that destroyed relevant emails and other data.  The spoliation occurred in large part because legal and IT were not involved in the collection process.  For example, counsel was not actively engaged in the critical steps of preservation, identification or collection of electronically stored information (ESI).  Nor was IT brought into the picture until 15 months after the preservation duty was triggered. By that time, rank and file employees – some of whom were accused by the plaintiff of harassment – stepped into this vacuum and conducted the collection process without meaningful oversight.  Predictably, key documents were never found and the court had little choice but to promise to inform the jury that the company destroyed evidence.

The Lesson for 2012: An organization does not have to suffer the same fate as the company in the Northington case.  It can take charge of its data during litigation through cooperative governance between legal and IT.  After issuing a timely and effective litigation hold, legal should typically involve IT in the collection process.  Legal should rely on IT to help identify all data sources – servers, systems and custodians – that likely contain relevant information.  IT will also be instrumental in preserving and collecting that data for subsequent review and analysis by legal.  By working together in a top-down fashion, organizations can better ensure that their eDiscovery process is defensible and not fatally flawed.

Another Must-Read Case: Green v. Blitz U.S.A., Inc. (E.D. Tex. Mar. 1, 2011)

Using Proportionality to Dictate the Scope of Permissible Discovery

Case: DCG Systems v. Checkpoint Technologies (N.D. Ca. Nov. 2, 2011)

The court adopted the new Model Order on E-Discovery in Patent Cases recently promulgated by the U.S. Court of Appeals for the Federal Circuit.  The model order incorporates principles of proportionality to reduce the production of email in patent litigation.  In adopting the order, the court explained that email productions should be scaled back since email is infrequently introduced as evidence at trial.  As a result, email production requests will be restricted to five search terms and may only span a defined set of five custodians.  Furthermore, email discovery in DCG Systems will wait until after the parties complete discovery on the “core documentation” concerning the patent, the accused product and prior art.

The Lesson for 2012: Courts seem to be slowly moving toward a system that incorporates proportionality as the touchstone for eDiscovery.  This is occurring beyond the field of patent litigation, as evidenced by other recent cases.  Even the State of Utah has gotten in on the act, revising its version of Rule 26 to require that all discovery meet the standards of proportionality.  While there are undoubtedly deviations from this trend (e.g., Pippins v. KPMG (S.D.N.Y. Oct. 7, 2011)), the clear lesson is that discovery should comply with the cost cutting mandate of Federal Rule 1.

Another Must-Read Case: Omni Laboratories Inc. v. Eden Energy Ltd [2011] EWHC 2169 (TCC) (29 July 2011)

Leveraging eDiscovery Technologies for Search and Review

Case: Oracle America v. Google (N.D. Ca. Oct. 20, 2011)

The court ordered Google to produce an email that it previously withheld on attorney client privilege grounds.  While the email’s focus on business negotiations vitiated Google’s claim of privilege, that claim was also undermined by Google’s production of eight earlier drafts of the email.  The drafts were produced because they did not contain addressees or the heading “attorney client privilege,” which the sender later inserted into the final email draft.  Because those details were absent from the earlier drafts, Google’s “electronic scanning mechanisms did not catch those drafts before production.”

The Lesson for 2012: Organizations need to leverage next generation, robust technology to support the document production process in discovery.  Tools such as email analytical software, which can isolate drafts and offer to remove them from production, are needed to address complex production issues.  Other technological capabilities, such as Near Duplicate Identification, can also help identify draft materials and marry them up with finals that have been marked as privileged.  Last but not least, technology assisted review has the potential of enabling one lawyer to efficiently complete the work that previously took thousands of hours.  Finding the budget and doing the research to obtain the right tools for the enterprise should be a priority for organizations in 2012.

Another Must-Read Case: J-M Manufacturing v. McDermott, Will & Emery (CA Super. Jun. 2, 2011)

Conclusion

There were any number of other significant cases from 2011 that could have made this list.  We invite you to share your favorites in the comments section or contact us directly with your feedback.


Backup Tapes and Archives Bursting at the Seams? The Seven Year Itch Has Technology to Answer the Scratch

Monday, December 12th, 2011

Just like Marilyn Monroe stopped traffic in her white dress in The Seven Year Itch, enterprises are being stopped dead in their tracks by the data explosion, lack of information governance policies and overstuffed IT infrastructures.  During the 2004-05 timeframe, a large number of enterprises began migrating to an archive, and this trend has kept steady pace since.  Archiving historically began with email, but has recently been extended to many other forms of information, including social media, unstructured data and cloud content.  This adoption was somewhat related to the historic Zubulake ruling, which required preservation to attach upon “reasonable anticipation of litigation.”  Another significant driver behind the archive need is the ability to comply with a range of statutes and regulations.  The reality is that it is difficult to preserve efficiently and defensibly without an archive and other automatic classification technologies.  Some companies still complete the information management and eDiscovery processes manually, but not without peril.

There is currently an upsurge in corporations finally starting to shrink the archives that they implemented to manage email, legal preservation requirements and regulatory compliance.  After roughly seven years, over which time there have been many advances in technology, a shift in thinking is taking place with regard to information governance and data retention.  Change has been born of necessity, as infrastructures are suffering under the amount of data they are retaining and the pains associated with searching that data.  This shift will enable companies to delete with confidence, clean up their backup tapes, shrink their archives, and manage and expire data effectively on a go-forward basis.  Collectively, this type of good information governance hygiene allows organizations to minimize the litigation risk that attends bloated information stores.

One reason many archives have become so bloated is that many enterprises purchased archiving software but did not properly enable expiry procedures according to a defensible document retention policy.  This resulted in saving everything for the past seven or so years.  Another reason for retaining all data in the archive was that enterprises were afraid to delete anything, fearing accusations of spoliation and/or the inability to retrieve data that should have been on legal hold.  These two reasons combined have forced companies to address the impact of having to search this massive amount of data in the archive each time a matter arises.  The resulting workflow for data collection is time consuming and expensive, especially for companies that still employ third party vendors for data collection.  For many organizations, the situation has become unsustainable from both a legal and an IT perspective.

In recent years, backup has been given less attention as archives have become more common, storage has become more affordable, and most lawyers argue that tapes are “inaccessible” – making restoration less common.  However, over-retention of backup remains an area of concern, especially when organizations do not have an archive.  They may be required to produce backup tapes, as much of the information relevant to a matter could be contained therein.  This has led to saving large numbers of backup tapes with no real knowledge of what data is on them and no one wanting to be accountable for pulling the trigger on deletion.  When an organization is forced to restore backup tapes, the process can be expensive and an eDiscovery nightmare.

For example, in Moore v. Gilead Sciences (N.D. Ca. Nov. 16, 2011), the plaintiff sought production of “all archived emails” that he sent or received during his five-year tenure with the defendant pharmaceutical company.  The company objected to the request as being unduly burdensome.  The company argued that:

  1. The emails were exclusively stored on its disaster recovery backup tapes;
  2. It would cost $360,000 to index those tapes, exclusive of processing and review costs;
  3. Many of the requested emails would not be retrieved since the company conducted its backups on monthly (not daily) intervals; and
  4. Over 25,000 pages of the plaintiff’s emails had already been produced in the litigation.

It is common for the inaccessibility and unduly burdensome arguments to be made with regard to backup tapes to combat indexing and restoration.  However, where a discovery dispute has merit, courts routinely reject projected cost estimates (such as the company’s $360,000 figure) as being unfounded/speculative and order production nevertheless.  [See Pippins v. KPMG and Escamilla v. SMS Holdings Corp.]  Had the judge gone the other way on restoration in Moore, the outcome could have easily been different, expensive and detrimental to the company.

What does this mean for organizations keeping seven years or more of legacy content?  Firstly, take inventory of where backup tapes reside and determine whether they need to be saved or can be deleted.  Most corporations have amassed many tapes that are only a legal liability at this point.  Technology exists today that can index and search what is on the tapes, enabling educated decisions to then be made about whether to delete them and/or transfer them to the archive for legal hold.  Essentially, new technology can give sight to the blind.  Those decisions must be made according to a plan and documented.  Backup should only be for disaster recovery.
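
As a hedged illustration of this "index, then decide" step, the sketch below triages an already-indexed tape catalog against active legal hold terms, splitting tapes into those that must be kept (or migrated to the archive) and those that are candidates for documented deletion. The catalog format and terms are assumptions for illustration.

```python
def triage_tapes(tape_catalog, hold_terms):
    """Split an indexed tape catalog into tapes to keep and tapes eligible for deletion.

    tape_catalog: mapping of tape id -> list of indexed document texts
    hold_terms:   lowercase terms drawn from active legal holds
    """
    keep, candidates_for_deletion = [], []
    for tape_id, documents in tape_catalog.items():
        held = any(term in doc.lower() for doc in documents for term in hold_terms)
        (keep if held else candidates_for_deletion).append(tape_id)
    return keep, candidates_for_deletion

keep, delete = triage_tapes(
    {
        "TAPE-001": ["Q3 forecast", "Project Phoenix budget"],
        "TAPE-002": ["cafeteria menu archive"],
    },
    hold_terms={"phoenix"},
)
# keep == ['TAPE-001']; delete == ['TAPE-002'], still subject to a documented review
```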

Secondly, purchase an archive if the company does not yet have one and configure the archive to expire data according to the document retention policy that can protect the company’s data decisions under Safe Harbor laws.

Is the company experiencing what many others are right now, which is an archive that is bursting at the seams? If the company does have an archive, check to see if expiry has been properly deployed according to the company’s policy.  If not, initiate a project to free up the archive from information retention that is unnecessary and that should not be subject to discovery.  Again, this must be documented.  Archives are for discovery and they need to be lean, efficient, and executing the information management lifecycle.

Avoid the request for backup tapes in litigation by having a sufficient archive and clearly stating that backup tapes are solely for disaster recovery. Delete tapes when possible by analyzing what is on them with appropriate technology and through a documented process that will eliminate the possibility of them being discoverable in litigation.

In sum, it is very helpful to examine the EDRM model and carve out what technologies and policies will apply to each aspect of the continuum.  For the challenges addressed in this blog, backup tapes fall under information management as does an archive all the way to the left of the model.  Backup tapes need search and expiry in order to retain only what is necessary for legal hold and should be ingested into an archive;  then, the company’s disaster recovery policies should be enforced on a go-forward basis.  Similarly, the archive needs search and expiration according to document retention policies so it does not become overgrown. From left to right, the model logically walks through the lifecycle of data, and many of the responsibilities associated with the data can be automated.  This project will require commitment, resources and time, but in light of the fact that data is only growing, there aren’t any other options.

ECPA, 4th Amendment, and FOIA: A Trident of Laws Collide on the 25th Birthday of the Electronic Communications Privacy Act

Wednesday, November 2nd, 2011

Google has publicly released the number of U.S. Government requests it received for email productions in the six months preceding December 31, 2009.  It had to comply with 94% of these 4,601 requests.  Granted, many of these requests were search warrants or subpoenas, but many were not.  Now take 4,601 and multiply it by at least 3 to account for other social media sources such as Facebook, LinkedIn, and Twitter.  The number is big – and so is the concern over how this information is being obtained.

What has become increasingly common (and alarming at the same time) is the way this electronically stored information (ESI) is being obtained from third party service providers by the U.S. Government. Some of these requests were actually secret court orders; it is unclear how many of the matters were criminal or civil.  Many of these service providers (Sonic, Google, Microsoft, etc.) are challenging these requests and most often losing. They are losing on two fronts:  1) they are not allowed to inform the data owner about the requests or the subsequent production of the emails, and 2) they are forced to actually produce the information.  For example, the U.S. Government obtained one of these secret orders to get WikiLeaks volunteer Jacob Applebaum’s email contact list of the people he has corresponded with over the past two years.  Both Google and Sonic.net were ordered to turn over information; Sonic challenged the order and lost.  This has forced technology companies to band together to lobby Congress to require search warrants in digital investigations.

There are three primary laws operating at this pivotal intersection that affect the discovery of ESI that resides with third party service providers, and these laws are in a car wreck with no ambulance in sight.  First, there is the antiquated Federal Law, the Electronic Communications Privacy Act of 1986, over which there is much debate at present.  To put the datedness of the ECPA in perspective, it was written before the internet.  This law is the basis that allows the government to secretly obtain information from email and cell phones without a search warrant. Not having a search warrant is in direct conflict with the U.S. Constitution’s 4th Amendment protection against unreasonable searches and seizures.  In the secret order scenario, the creator of data is denied their right to know about the search and seizure (as they would if their homes were being searched, for example) as it is transpiring with the third party.

Where a secret order has been issued and emails have been obtained from a third party service provider, we see the courts treating email much differently than traditional mail and telephone lines.  However, the intent of the law was to give electronic communications the same protections that mail and phone calls have enjoyed for some time. Understandably, the law did not anticipate the advent of the technology we have today.  This is the first collision, and the reason the wheels have come off the car, since the standard under the ECPA sets a lower bar for email than for the other two modes of communication.  The government need only show “reasonable grounds” that the records would be “relevant and material” to an investigation, criminal or civil, rather than meet the higher standard that applies to the others.

The third law in this collision is the Freedom of Information Act (FOIA).  While certain exceptions and allowances are made for national security and criminal investigations, these secret orders cannot be seen by the person whose information has been requested.  Additionally, the public wants to see these requests and these orders, especially if they have no chance of fighting them.  What remains to be seen is what our rights are under FOIA to see these orders as a matter of public record, whether as a party to the investigation or as an unrelated individual.  U.S. Senator Patrick Leahy (D-VT), the author of the ECPA, acknowledged in no uncertain terms that the law is “significantly outdated and outpaced by rapid changes in technology.”  He has since introduced a bill with many of the changes that third party service providers have lobbied for to bring the ECPA up to date. The irony of this situation is that the law was intended to provide the same protections for all modes of communication, but in fact makes it easier for the government to request information without the author even knowing.

This is one of the most important issues now facing individuals and the government in the discovery of ESI during investigations and litigation.  A third party service provider of cloud offerings is really no different than a utility company, and the same Fourth Amendment paradigm that requires a warrant to discover information held by the U.S. Postal Service and the telephone companies can apply here as well. The law looks to be changing to reflect this, and FOIA should allow the public to access these orders.  Amendments to the Act have been introduced by Senator Leahy, and we can look forward to the common-sense changes he proposes.  The American people don’t like secrets. Lawyers, get ready to embrace the revisions by reading up on the changes, as they will impact your practices significantly in the near future.