
Posts Tagged ‘Symantec’

Moving Data to the Cloud? Top 5 Tips for Corporate Legal Departments

Monday, September 30th, 2013

One of the hottest information technology (IT) trends is to move data once stored within the corporate firewall into a hosted cloud environment managed by third-party providers. In 2013 alone, the public cloud services market is forecast to grow an astonishing 18.5 percent to $131 billion worldwide, up from $111 billion in 2012. The trend is driven largely by the fact that labor, infrastructure, and software costs can be reduced by sending email and other data to third-party providers for off-site hosting. Although the benefits of cloud computing are real, many organizations make the decision to move to the cloud without thoroughly weighing all the risks and benefits first.

A common problem is that many corporate IT departments fail to consult with their legal department before making the decision to move company data into the cloud, even though the decision may have legal consequences. For example, retrieving information from the cloud in response to eDiscovery requests presents unique challenges that could cause delay and increase costs. Similarly, problems related to data access and even ownership could arise if the cloud provider merges with another company, goes bankrupt, or decides to change its terms of service.

The bad news is that the list of possible challenges is long when data is stored in the cloud. The good news is that many of the risks of moving important company data to the cloud are foreseeable and can be mitigated by negotiating terms with prospective cloud providers in advance. Although not comprehensive, the following highlights some of the key areas attorneys should consider before agreeing to store company data with third-party cloud providers.

1. Who Owns the Data?

The cloud market is hot right now with no immediate end in sight. That means competition is likely for years to come and counsel must be prepared for some market volatility. At a high level, delineating the “customer” as the “data owner” in agreements with cloud service providers is a must. More specifically, terms that address what happens in the event of a merger, bankruptcy, divestiture or any event that leads to the termination or alteration of the relationship should be clearly outlined. Identifying the customer as the data owner and preserving the right to retrieve data within a reasonable time and for a reasonable price will help prevent company data from being used as a bargaining chip if there is a disagreement or change in ownership.

2. eDiscovery Response Times and Capabilities

Many businesses know first-hand that the costs and burdens of complying with eDiscovery requests are significant. In fact, a recent RAND study estimates that every gigabyte of data reviewed costs approximately $18,000. Surprisingly, many businesses do not realize that storing data in the cloud with the wrong provider could increase the risks and costs of eDiscovery significantly. Risks include the possibility of sanctions for overlooking information that should have been produced or failing to produce information in a timely fashion. Costs could be exacerbated by storing data with a cloud provider that lacks the resources or technology to respond to eDiscovery requests efficiently.

That means counsel must understand whether the provider has the technology to preserve, collect, and produce copies of data stored in the cloud. If so, is the search technology accurate and thorough? What are the time frames for responding to requests, and are there surcharges for expedited versus non-expedited requests for data? Also, is data collected in a forensically sound manner, and is the chain of custody recorded to validate the integrity and reasonableness of the process in the event of legal challenges? More than one cloud customer has encountered excessive fees, unacceptable timelines, and mass confusion when relying on cloud providers to help respond to eDiscovery requests. Avoid surprises by negotiating acceptable terms before, not after, moving company data to the cloud.
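
To make the forensic-soundness point concrete, here is a minimal sketch of the kind of hash manifest that underpins a defensible chain of custody. It is illustrative only — the directory path is hypothetical and real collection tools capture far more metadata — but it shows the core idea: fingerprint every file at collection time so the production can later be proven unaltered.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def collect_with_manifest(source_dir: str, manifest_path: str) -> None:
    """Hash every collected file and record a chain-of-custody manifest."""
    entries = []
    for path in sorted(Path(source_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append({
                "file": str(path),
                "sha256": digest,
                "collected_at": datetime.now(timezone.utc).isoformat(),
            })
    Path(manifest_path).write_text(json.dumps(entries, indent=2))

# Re-hashing the files later and comparing against the manifest shows
# nothing changed between collection and production.
collect_with_manifest("./custodian_mailbox_export", "manifest.json")  # hypothetical path
```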

3. Where is Data Physically Located?

Knowing where your data is physically located is important because there could be legal consequences. For example, several jurisdictions have their own unique privacy laws that may impact where employee data can be physically located, how it must be stored, and how it can be used. This can result in conflicts where data stored in the cloud is subject to discovery in one jurisdiction, but disclosure is prohibited by the laws of the jurisdiction where the data is stored. Failure to comply with these foreign laws, sometimes known as blocking statutes, could result in penalties and challenges that might not be circumvented by choice of law provisions. That means knowing where your data will be stored and understanding the applicable data privacy laws in that jurisdiction is critical.

4. Retention, Backup, & Security

Any good data retention program requires systematically deleting information that the business has no legal or business need to retain. Keeping information longer than necessary increases long-term storage costs and increases the amount of information the organization must search in the event of future litigation. The cloud provider should have the ability to automate the archiving, retention, and disposition of information in accordance with the customer’s preferred policies, as well as the ability to suspend any automated deletion policies during litigation.
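
To illustrate the interplay between retention schedules and litigation holds, here is a minimal sketch of the disposition logic. The record classes and retention periods are invented for illustration; any real archiving product implements this far more elaborately.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule -- actual periods are policy decisions.
RETENTION = {
    "email": timedelta(days=3 * 365),
    "contract": timedelta(days=7 * 365),
}

def eligible_for_disposition(record_class: str, created: datetime,
                             on_legal_hold: bool) -> bool:
    """A record may be deleted only when its retention period has lapsed
    AND it is not subject to a litigation hold -- holds always win."""
    if on_legal_hold:
        return False
    return datetime.now(timezone.utc) - created >= RETENTION[record_class]
```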

Similarly, where and how information is backed up and secured is critical. For many organizations, merely losing access to email for a few hours brings productivity to a screeching halt. Actually losing company email due to technical problems or insufficient backup technology could cripple some companies indefinitely. Likewise, losing confidential customer data, research and development plans, or other sensitive information due to the lack of adequate data security and encryption technology could result in legal penalties and/or the loss of important intellectual property. Understanding how your data is backed up and secured is critical to choosing a cloud provider. Equally important is determining the consequences of and the process for handling data breaches, losses, and downtime if something goes wrong.

5. Responding to Subpoenas and Third-party Data Requests

Sometimes it’s not just criminals who try to take your data out of the cloud; litigants and investigators might also request information directly from cloud providers without your knowledge or consent. Obviously companies have a vested interest in vetting the reasonableness and legality of any data requests coming from third parties. That interest does not change when data is stored in the cloud. Knowing how your cloud provider will respond to third-party requests for data and obtaining written assurances that you will be notified of requests as appropriate is critical to protecting intellectual property and defending against bogus claims. Furthermore, making sure data in the cloud is encrypted may provide an added safeguard if data somehow slips through the cracks without your permission.
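
One way to picture that added safeguard: if data is encrypted client-side before it ever reaches the provider, a third party who obtains it from the provider gets only unreadable bytes. Below is a minimal sketch using the Python cryptography package (the payload is invented for illustration):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # keep the key in-house, never with the provider
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"confidential customer record")
# Only the ciphertext goes to the cloud; a subpoena served on the provider
# yields nothing readable without the customer-held key.
assert cipher.decrypt(ciphertext) == b"confidential customer record"
```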

Conclusion

Today’s era of technological innovation moves at lightning speed, and cloud computing technology is no exception. This fast-paced environment sometimes results in organizations making the important decision to move data to the cloud without properly assessing all the potential risks. To minimize those risks, organizations should consider the Top 5 Tips for Corporate Legal Departments above and consult with legal counsel before moving data to the cloud.

Judge Scheindlin Blasts Proposed FRCP Amendments in Unconventional Style

Thursday, August 29th, 2013

A prominent federal judge wasted little time airing her dissatisfaction with the proposed amendments to the Federal Rules of Civil Procedure (Rules), doing so the very day the period for public comment opened. In lieu of following the formal process of submitting written comments on the proposed amendments, the Honorable Shira Scheindlin, Federal District Court Judge for the Southern District of New York, provided her feedback in more dramatic fashion. She went out of her way to blast newly proposed Federal Rule 37(e) in a footnote to a recent court order in a case where she sanctioned a party for spoliation of evidence. The order, dated August 15, 2013, conspicuously coincides with the opening day for public comment on the newly proposed amendments and likely riled some attorneys who have lobbied hard for this particular Rule change for years.

The facts relayed in Sekisui American Corporation v. Hart are not uncommon. In fact, most have likely heard this story repeat itself for a decade, despite the passage of amendments to the Rules in 2006 and myriad case law warning against such conduct. The short version of the story: a group of employees leaves their company, the company sues the former employees, discovery ensues, and emails are missing. Why? Because the emails were deleted long after the duty to preserve electronically stored information (ESI) was triggered, and now they are lost. The question then turns to whether and how the judge should rectify the missing email problem. Those familiar with some of Judge Scheindlin’s prior decisions know the answer to that question – sanctions.

Judge Scheindlin is not a renegade who issues sanctions regardless of the facts of the case. However, some believe her attempts to provide clarity for litigants in her courtroom sometimes go too far, which stirs debate. Whether you agree with her decisions or not, Judge Scheindlin takes special care to meticulously articulate the facts of the case, identify the relevant legal authority, present the legal analysis, and then nail the offending party with sanctions when they screw up discovery.

That is exactly what happened in Sekisui, and the order is worth a read. People new to eDiscovery will learn the basics of when to apply a legal hold, while more seasoned eDiscovery veterans will be treated to a stroll down “case law memory lane” that includes stories of eDiscovery train wrecks past, like the Zubulake and Pension Committee decisions. If you don’t have the stomach to read the entire thirty-two-page opinion, then read the nice article by Law Technology News aptly titled “Scheindlin Not Charmed When Visiting Spoliation a Third Time” for further background on the case.

What is striking about Sekisui is that Judge Scheindlin used the case and the issues at hand as an opportunity to articulate her displeasure with the proposed amendments to the Rules. In particular, she calls out proposed Rule 37(e) in footnote 51 of the opinion, where she explains:

“the proposed rule would permit sanctions only if the destruction of evidence (1) caused substantial prejudice and was willful or in bad faith or (2) irreparably deprived a party of any meaningful opportunity to present or defend its claims…. The Advisory Committee Note to the proposed rule would require the innocent party to prove that ‘it has been substantially prejudiced by the loss’ of relevant information, even where the spoliating party destroyed information willfully or in bad faith. 5/8/2013 Report of the Advisory Committee on Civil Rules at 47. I do not agree that the burden to prove prejudice from missing evidence lost as a result of willful or intentional misconduct should fall on the innocent party. Furthermore, imposing sanctions only where evidence is destroyed willfully or in bad faith creates perverse incentives and encourages sloppy behavior. Under the proposed rule, parties who destroy evidence cannot be sanctioned (although they can be subject to “remedial curative measures”) even if they were negligent, grossly negligent, or reckless in doing so.”

Judge Scheindlin’s “Footnote 51” is almost certain to become a focal point of debate as the dialogue about the Rules continues. Not only did Judge Scheindlin ignite much of the early eDiscovery debate with her Zubulake line of decisions, she also served on the Federal Rules of Civil Procedure Advisory Committee from 1998 to 2005. The fact that she is known in some circles as the Godmother of eDiscovery illustrates that her influence over the rule-making process is undeniable. The time for public comment on the Rules closes on February 15, 2014, and the Godmother of eDiscovery has thrown down the gauntlet once again. Let the games begin.

Symantec Positioned Highest in Execution and Vision in Gartner Archiving MQ

Tuesday, December 18th, 2012

Once again, Gartner has named Symantec a leader in the Enterprise Information Archiving Magic Quadrant. We’ve continued to invest significantly in this market, and it is gratifying to see the recognition for the continued effort we put into archiving both in the cloud and on premises with our Enterprise Vault.cloud and Enterprise Vault products. Symantec has now been rated a leader nine years in a row.

[Graphic: Gartner Magic Quadrant for Enterprise Information Archiving]

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Symantec.

Gartner does not endorse any vendor, product or service depicted in the Magic Quadrant, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This year marks a transition in a couple of respects. We are seeing an acceleration of customers looking for the convenience and simplicity of SaaS-based archiving solutions, with the caveat that they want the security and trust that only a vendor like Symantec can deliver.

Similarly, the market has continued to ask for integrated solutions that deliver information archiving and eDiscovery to quickly address the often complex and time-sensitive process of responding to litigation and regulatory requests. The deep integration we offer between our archiving solutions – Enterprise Vault and Enterprise Vault.cloud – and the Clearwell eDiscovery Platform has led many customers to deploy these together to streamline their eDiscovery workflow.

An archive is inherently deployed with the long term in mind.  Over the history of Gartner’s Enterprise Information Archiving MQ, only Symantec has provided a consistent solution to customers by investing and innovating with Enterprise Vault to lead the industry in performance, functionality, and support without painful migrations or changes. 

We’re excited about what we have planned next for Enterprise Vault and Enterprise Vault.cloud and intend to maintain our leadership in the years to come. Our customers will continue to be able to manage their critical information assets and meet their needs for eDiscovery and Information Governance as we improve our products year after year.

Q&A With Predictive Coding Guru, Maura R. Grossman, Esq.

Tuesday, November 13th, 2012

Can you tell us a little about your practice and your interest in predictive coding?

After a prior career as a clinical psychologist, I joined Wachtell Lipton as a litigator in 1999, and in 2007, when I was promoted to counsel, my practice shifted exclusively to advising lawyers and clients on legal, technical, and strategic issues involving electronic discovery and information management, both domestically and abroad.

I became interested in technology-assisted review (“TAR”) in the 2007/2008 time frame, when I sought to address the fact that Wachtell Lipton had few associates to devote to document review, and contract attorney review was costly, time-consuming, and generally of poor quality.  At about the same time, I crossed paths with Jason R. Baron and got involved in the TREC Legal Track.

What are a few of the biggest predictive coding myths?

There are so many, it’s hard to limit myself to only a few!  Here are my nominations for the top ten, in no particular order:

Myth #1:  TAR is the same thing as clustering, concept search, “find similar,” or any number of other early case assessment tools.
Myth #2:  Seed or training sets must always be random.
Myth #3:  Seed or training sets must always be selected and reviewed by senior partners.
Myth #4:  Thousands of documents must be reviewed as a prerequisite to employing TAR; therefore, it is not suitable for smaller matters.
Myth #5:  TAR is more susceptible to reviewer error than the “traditional approach.”
Myth #6:  One should cull with keywords prior to employing TAR.
Myth #7:  TAR does not work for short documents, spreadsheets, foreign language documents, or OCR’d documents.
Myth #8:  TAR finds “easy” documents at the expense of “hot” documents.
Myth #9:  If one adds new custodians to the collection, one must always retrain the system.
Myth #10:  Small changes to the seed or training set can cause large changes in the outcome, for example, documents that were previously tagged as highly relevant can become non-relevant. 

The bottom line is that your readers should challenge commonly held (and promoted) assumptions that lack empirical support.

Are all predictive coding tools the same?  If not, then what should legal departments look for when selecting a predictive coding tool?

Not at all, and neither are all manual reviews.  It is important to ask service providers the right questions to understand what you are getting.  For example, some TAR tools employ supervised or active machine learning, which require the construction of a “training set” of documents to teach the classifier to distinguish between responsive and non-responsive documents.  Supervised learning methods are generally more static, while active learning methods involve more interaction with the tool and more iteration.  Knowledge engineering approaches (a.k.a. “rule-based” methods) involve the construction of linguistic and other models that replicate the way that humans think about complex problems.  Both approaches can be effective when properly employed and validated.  At this time, only active machine learning and rule-based approaches have been shown to be effective for technology-assisted review.  Service providers should be prepared to tell their clients what is “under the hood.”
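
As a rough illustration of the active learning approach Grossman describes, here is a generic uncertainty-sampling sketch — not any particular vendor’s algorithm. After each training round, the system asks reviewers to code the documents the classifier is least sure about; the random vectors below stand in for featurized documents.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def next_training_batch(model, unlabeled_X, batch_size=50):
    """Uncertainty sampling: pick the documents whose predicted probability
    of responsiveness is closest to 0.5 -- the ones the model is least sure
    about -- for the next round of attorney review."""
    probs = model.predict_proba(unlabeled_X)[:, 1]
    return np.argsort(np.abs(probs - 0.5))[:batch_size]

# Toy demo with random feature vectors standing in for documents.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)
X_pool = rng.normal(size=(1000, 20))

model = LogisticRegression().fit(X_train, y_train)
to_review = next_training_batch(model, X_pool)  # send these to reviewers next
```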

What is the number one mistake practitioners should avoid when using these tools?

Not employing proper validation protocols, which are essential to a defensible process.  There is widespread misunderstanding of statistics and what they can and cannot tell us.  For example, many service providers report that their tools achieve 99% accuracy.  Accuracy is the fraction of documents that are correctly coded by a search or review effort.  While accuracy is commonly advanced as evidence of an effective search or review effort, it can be misleading because it is heavily influenced by prevalence, or the number of responsive documents in the collection.  Consider, for example, a document collection containing one million documents, of which ten thousand (or 1%) are relevant.  A search or review effort that identified 100% of the documents as non-relevant, and therefore, found none of the relevant documents, would have 99% accuracy, belying the failure of that search or review effort to identify a single relevant document.
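
Grossman’s point is easy to verify with a little arithmetic; the numbers below come straight from her hypothetical:

```python
collection = 1_000_000
relevant = 10_000  # 1% prevalence

# A "review" that simply marks every document non-relevant:
true_negatives = collection - relevant      # 990,000 correctly coded
accuracy = true_negatives / collection      # 0.99
recall = 0 / relevant                       # 0.0 -- not one relevant doc found

print(f"accuracy = {accuracy:.0%}, recall = {recall:.0%}")
# accuracy = 99%, recall = 0%
```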

What do you see as the key issues that will confront practitioners who wish to use predictive coding in the near-term?

There are several issues that will be played out in the courts and in practice over the next few years.  They include:  (1) How does one know if the proposed TAR tool will work (or did work) as advertised?; (2) Must seed or training sets be disclosed, and why?; (3) Must documents coded as non-relevant be disclosed, and why?; (4) Should TAR be held to a higher standard of validation than manual review?; and (5) What cost and effort is justified for the purposes of validation?  How does one ensure that the cost of validation does not obliterate the savings achieved by using TAR?

What have you been up to lately?

In an effort to bring order to chaos by introducing a common framework and set of definitions for use by the bar, bench, and vendor community, Gordon V. Cormack and I recently prepared a glossary on technology-assisted review that is available for free download at:  http://cormack.uwaterloo.ca/targlossary.  We hope that your readers will send us their comments on our definitions and additional terms for inclusion in the next version of the glossary.

Maura R. Grossman, counsel at Wachtell, Lipton, Rosen & Katz, is a well-known e-discovery lawyer and recognized expert in technology-assisted review.  Her work was cited in the landmark case Da Silva Moore v. Publicis Groupe (S.D.N.Y. 2012).

Falcon Discovery Ushers in Savings with Transparent Predictive Coding

Tuesday, September 4th, 2012

The introduction of Transparent Predictive Coding to Symantec’s Clearwell eDiscovery Platform helps organizations defensibly reduce the time and cost of document review. Predictive coding refers to machine learning technology that can be used to automatically predict how documents should be classified based on limited human input. As expert reviewers tag documents in a training set, the software identifies common criteria across those documents, which it uses to “predict” the responsiveness of the remaining case documents. The result is that fewer irrelevant and non-responsive documents need to be reviewed manually – thereby accelerating the review process, increasing accuracy and allowing organizations to reduce the time and money spent on traditional page-by-page attorney document review.
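
For readers who want a feel for the underlying mechanics, here is a generic supervised text-classification sketch of the train-then-predict workflow described above. It is a bare-bones illustration using scikit-learn with invented documents, not a description of Clearwell’s actual engine:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorney-coded training set (toy examples): 1 = responsive, 0 = not.
train_docs = [
    "merger term sheet attached",
    "lunch on friday?",
    "draft settlement memo enclosed",
    "fantasy football picks",
]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_docs), train_labels)

# Rank the remaining corpus by predicted responsiveness; low-scoring
# documents may never need page-by-page attorney review.
corpus = ["revised settlement figures", "happy hour tonight"]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
```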

Given the cost, speed and accuracy improvements that predictive coding promises, its adoption may seem to be a no-brainer. Yet predictive coding technology hasn’t been widely adopted in eDiscovery – largely because the technology and process itself still seems opaque and complex. Symantec’s Transparent Predictive Coding was developed to address these concerns and provide the level of defensibility necessary to enable legal teams to adopt predictive coding as a mainstream technology for eDiscovery review. Transparent Predictive Coding provides reviewers with complete visibility into the training and prediction process and delivers context for more informed, defensible decision-making.

Early adopters like Falcon Discovery have already witnessed the benefits of Transparent Predictive Coding. Falcon is a managed services provider that leverages a mix of top legal talent and cutting-edge technologies to help corporate legal departments, and the law firms that serve them, manage discovery and compliance challenges across matters. Recently, we spoke with Don McLaughlin, founder and CEO of Falcon Discovery, on the firm’s experiences with and lessons learned from using Transparent Predictive Coding.

1. Why did Falcon Discovery decide to evaluate Transparent Predictive Coding?

Predictive coding is obviously an exciting development for the eDiscovery industry, and we want to be able to offer Falcon’s clients the time and cost savings that it can deliver. At the same time there is an element of risk. For example, not all solutions provide the same level of visibility into the prediction process, and helping our clients manage eDiscovery in a defensible manner is of paramount importance. Over the past several years we have tested and/or used a number of different software solutions that include some assisted review or prediction technology. We were impressed that Symantec has taken the time and put in the research to integrate best practices into its predictive coding technology. This includes elements like integrated, dynamic statistical sampling, which takes the guesswork out of measuring review accuracy. This ability to look at accuracy across the entire review set provides a more complete picture, and helps address key issues that have come to light in some of the recent predictive coding court cases like Da Silva Moore.

2. What’s something you found unique or different from other solutions you evaluated?

I would say one of the biggest differentiators is that Transparent Predictive Coding uses both content and metadata in its algorithms to capture the full context of an e-mail or document, which we found to be appealing for two reasons. First, you often have to consider metadata during review for sensitive issues like privilege and to focus on important communications between specific individuals during specific time periods. Second, this can yield more accurate results with less work because the software has a more complete picture of the important elements in an e-mail or document. This faster time to evaluate the documents is critical for our clients’ bottom line, and enables more effective litigation risk analysis, while minimizing the chance of overlooking privileged or responsive documents.

3. So what were some of the success metrics that you logged?

Using Transparent Predictive Coding, Falcon was able to achieve extremely high levels of review accuracy with only a fraction of the time and review effort. If you look at academic studies on linear search and review, even under ideal conditions you often get somewhere between 40% and 60% accuracy. With Transparent Predictive Coding we are seeing accuracy measures closer to 90%, which means we are often achieving 90% recall and 80% precision by reviewing only a small fraction – under 10% – of the data population that you might otherwise review document-by-document. For the appropriate case and population of documents, this enables us to cut review time and costs by 90% compared to pure linear review. Of course, this is on top of the significant savings derived from leveraging other technologies to intelligently cull the data to a more relevant review set prior to even using Transparent Predictive Coding. This means that our clients can understand the key issues, and identify potentially ‘smoking gun’ material, much earlier in a case.

4. How do you anticipate using this technology for Falcon’s clients?

I think it’s easy for people to get swept up by the “latest and greatest” technology or gadget and assume this is the silver bullet for everything we’ve been toiling over before. Take, for example, the smartphone camera – great for a lot of (maybe even most) situations, but sometimes you’re going to want that super zoom lens or even (gasp!) regular film. By the same token, it’s important to recognize that predictive coding is not an across-the-board substitute for other important eDiscovery review technologies and targeted manual review. That said, we’ve leveraged Clearwell to help our clients lower the time and costs of the eDiscovery process on hundreds of cases now, and one of the main benefits is that the solution offers the flexibility of using any number of advanced analytics tools to meet the specific requirements of the case at hand. We’re obviously excited to be able to introduce our clients to this predictive coding technology – and the time and cost benefits it can deliver – but this is in addition to other Clearwell tools, like advanced keyword search, concept or topic clustering, domain filtering, discussion threading and so on, that can and should be used together with predictive coding.

5. Based on your experience, do you have advice for others who may be looking to defensibly reduce the time and cost of document review with predictive coding technology?

The goal of the eDiscovery process is not perfection. At the end of the day, whether you employ a linear review approach and/or leverage predictive coding technology, you need to be able to show that what you did was reasonable and achieved an acceptable level of recall and precision. One of the things you notice with predictive coding is that as you review more documents, the recall and precision scores go up but at a decreasing rate. A key element of a reasonable approach to predictive coding is measuring your review accuracy using a proven statistical sampling methodology. This includes measuring recall and precision accurately to ensure the predictive coding technology is performing as expected. We’re excited to be able to deliver this capability to our clients out of the box with Clearwell, so they can make more informed decisions about their cases early-on and when necessary address concerns of proportionality with opposing parties and the court.
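
The sampling math McLaughlin alludes to is standard statistics. As a rough sketch, the classic sample-size formula for estimating a proportion shows why validation is affordable even on huge collections; the confidence level and margin of error below are common choices, not prescriptions:

```python
import math

def sample_size(z: float = 1.96, margin: float = 0.02, p: float = 0.5) -> int:
    """Documents to sample to estimate a proportion at ~95% confidence
    (z = 1.96) within the given margin of error; p = 0.5 is the
    conservative worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())               # 2401 docs for +/-2% at 95% confidence
print(sample_size(margin=0.05))    # 385 docs for +/-5%
```

Notably, the required sample size does not grow with the size of the collection, which is why a statistically sound validation step adds comparatively little cost even on million-document matters.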

To find out more about Transparent Predictive Coding, visit http://go.symantec.com/predictive-coding

FOIA Matters! — 2012 Information Governance Survey Results for the Government Sector

Thursday, July 12th, 2012

At this year’s EDGE Summit in April, Symantec polled attendees about a range of government-specific information governance questions. The attendees were drawn primarily from IT and legal departments, along with Freedom of Information Act (FOIA) agents, government investigators and records managers. The main purpose of the EDGE survey was to gather attendees’ thoughts on what information governance means for their agencies, discern what actions were being taken to address Big Data challenges, and assess how far along agencies were in their information governance implementations pursuant to the recent Presidential Mandate.

As my colleague Matt Nelson recounted in his blog from the LegalTech conference earlier this year, information governance and predictive coding were among the hottest topics at the LTNY 2012 show and in the industry generally. The EDGE Summit correspondingly held sessions on those two topics and delved deeper into questions that are unique to the government. For example, when asked what the top driver for implementation of an information governance plan in an agency was, three out of four respondents answered “FOIA.”

The fact that FOIA was listed as the top driver for government agencies planning to implement an information governance solution is in line with data reported by the Department of Justice (DOJ) from 2008-2011 on the number of requests received. In 2008, 605,491 FOIA requests were received; that figure grew to 644,165 in 2011. While the increase in FOIA requests is not enormous percentage-wise, what is significant is the reduction in the FOIA backlog: the 2008 backlog of 130,419 requests had decreased to 83,490 by 2011. This is likely due to the implementation of newer and better technology, coupled with the fact that the current administration has made FOIA request processing a priority.

In 2009, President Obama directed agencies to adopt “a presumption in favor” of FOIA requests for greater transparency in government. Agencies have been under pressure from the President to improve the response time to (and completeness of) FOIA requests. Washington Post reporter Ed O’Keefe wrote,

“a study by the National Security Archive at George Washington University and the Knight Foundation, found approximately 90 federal agencies are equipped to process FOIA requests, and of those 90, only slightly more than half have taken at least some steps to fulfill Obama’s goal to improve government transparency.”

Agencies are increasingly more focused on complying with FOIA and will continue to improve their IT environments with archiving, eDiscovery and other proactive records management solutions in order to increase access to data.

Not far behind FOIA requests on the list of reasons to implement an information governance plan were “lawsuits” and “internal investigations.” Fortunately, any comprehensive information governance plan will axiomatically address FOIA requests since the technology implemented to accomplish information governance inherently allows for the storage, identification, collection, review and production of data regardless of the specific purpose. The use of information governance technology will not have the same workflow or process for FOIA that an internal investigation would require, for example, but the tools required are the same.

The survey also found that the top three most important activities surrounding information governance were: email/records retention (73%), data security/privacy (73%) and data storage (72%). These concerns are being addressed modularly by agencies with technology like data classification services, archiving, and data loss prevention technologies. In-house eDiscovery tools are also important as they facilitate the redaction of personally identifiable information that must be removed in many FOIA requests.
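
As a rough illustration of the redaction step, here is a minimal pattern-based sketch. The regexes cover only a few obvious U.S. formats; production-grade PII detection in eDiscovery tools is considerably more sophisticated:

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage.
PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach John at 555-867-5309 or john@example.gov, SSN 123-45-6789."))
```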

It is clear that agencies recognize the importance of managing email and records for FOIA purposes, and this is an area of concern in light of not only the data explosion but also the fact that 53% of respondents reported they are responsible for classifying their own data. Respondents have connected the concept of information governance with records management and the ability to execute more effectively on FOIA requests. Manual classification is rapidly becoming obsolete as data volumes grow, and is being replaced by automated solutions in successfully deployed information governance plans.

Perhaps the most interesting finding from the survey concerned what is preventing governmental agencies from implementing information governance plans. The top inhibitors for the government were “budget,” “internal consensus” and “lack of internal skill sets.” Contrasted with the 2012 LegalTech survey findings on information governance, whose respondents were predominantly from the private sector, the government’s concerns and implementation timelines are slightly different. In the EDGE survey, only 16% of the government respondents reported that they have implemented an information governance solution, contrasted with 19% of the LegalTech audience. This disparity is partly because the government lacks the budget and the proper internal committee of stakeholders to sponsor and deploy a plan, but the relatively low numbers in both sectors indicate the nascent state of information governance.

In order for a successful information governance plan to be deployed, “it takes a village,” to quote Secretary Clinton. Without prioritizing coordination on data management between IT, legal, records managers, security, and the other necessary departments, merely having the budget only purchases the technology and does not ensure true governance. In this year’s survey, 95% of EDGE respondents were actively discussing information governance solutions, and over the next two years the percentage of agencies that have implemented a solution is expected to triple, from 16% to 52%. With the directive on records management due this month from the National Archives and Records Administration (NARA), government agencies will have clear guidance on best practices for records management, and this will aid the adoption of automated archiving and records classification workflows.

The future is bright with the President’s initiative and NARA’s anticipated directive to examine the state of technology in the government. The EDGE survey results support the forecast that, provided budget can be obtained, agencies will be in an improved state of information governance within the next two years. This will improve FOIA request compliance, make litigation involving the government more efficient, and increase agencies’ ability to conduct internal investigations effectively.

Many would have projected that litigation, internal investigations, and FOIA requests, in that order, would top the list of what drives information governance in the government. And yet, FOIA has recently taken on a more important role given the Obama administration’s focus on transparency and the increased number of requests by citizens. While any one of the drivers could have prompted the updates in process and technology the government clearly needs, FOIA has positive momentum behind it and seems to be the impetus primarily driving information governance. Fortunately, archiving and eDiscovery technology, only two parts of the information governance continuum, can help with all three of the aforementioned drivers through different workflows.

Later this month we will examine NARA’s directive and what the impact will be on the government’s technology environment – stay tuned.

#InfoGov Twitter Chat Homes in on Starting Places and Best Practices

Tuesday, July 3rd, 2012

Unless you’re an octogenarian living in rural Uzbekistan,[i] you’ve likely seen the meteoric rise of social media over the last decade. Even beyond hyper-texting teens, businesses too are taking advantage of this relatively new medium to engage with their more technically savvy customers. Recently, Symantec held its first “Twitter Chat” on the topic of information governance (fondly referred to on Twitter as #InfoGov). For those not familiar with the concept, a Twitter Chat is a virtual discussion held on Twitter using a specific hashtag – in this case #IGChat. At a set date and time, parties interested in the topic log into Twitter and start participating in the fireworks on the designated hashtag.

“Fireworks” may be a bit overstated, but given that the moderators (eDiscovery Counsel at Symantec) and participants were limited to 140 characters, the “conversation” was certainly frenetic. Despite the fast pace, one benefit of a Twitter Chat is that you can communicate with shortened web links, as a way to share and discuss content beyond the severely limited character count. During this somewhat staccato discussion, we found the conversation to take some interesting twists and turns, which I thought I’d excerpt (and expound upon[ii]) in this blog.

Whether in a Twitter Chat or otherwise, once the discussion of information governance begins everyone wants to know where to start. The #IGChat was no different.

  • Where to begin?  While there wasn’t consensus per se on a good starting place, one cogent remark out of the blocks was: “The best way to start is to come up with an agreed upon definition — Gartner’s is here t.co/HtGTWN2g.” While the Gartner definition is a good starting place, there are others out there that are more concise. The eDiscovery Journal Group has a good one as well:  “Information Governance is a comprehensive program of controls, processes, and technologies designed to help organizations maximize the value of information assets while minimizing associated risks and costs.”  Regardless of the precise definition, it’s definitely worth the cycles to rally around a set construct that works for your organization.
  • Who’s on board?  The next topic centered on finding the right folks organizationally to participate in the information governance initiative. InfoGovlawyer chimed in: “Seems to me like key #infogov players should include IT, Compliance, Legal, Security reps.” Then, PhilipFavro suggested that the “[r]ight team would likely include IT, legal, records managers, pertinent business units and compliance.” Similar to the previous question, at this stage in the information governance maturation process, there isn’t a single, right answer. More importantly, the team needs to have stakeholders from at least Legal and IT, while bringing in participants from other affected constituencies (Infosec, Records, Risk, Compliance, etc.) – basically, anyone interested in maximizing the value of information while reducing the associated risks.
  • Where’s the ROI?  McManusNYLJ queried: “Do you think #eDiscovery, #archiving and compliance-related technology provide ample ROI? Why or why not?”  Here, the comments came in fast and furious. One participant pointed out that case law can be helpful in showing the risk reduction:  “Great case showing the value of an upstream archive – Danny Lynn t.co/dcReu4Qg.” AlliWalt chimed in: “Yes, one event can set your company back millions…just look at the Dupont v. Kolon case… ROI is very real.” Another noted that “Orgs that take a proactive approach to #eDiscovery requests report a 64% faster response time, 2.3x higher success rate.” And, “these same orgs were 78% less likely to be sanctioned and 47% less likely to be legally compromised t.co/5dLRUyq6.” ROI for information governance seemed to be a nut that can be cracked any number of ways, ranging from risk reduction (via sanctions and adverse legal decisions) to better preparation. Here too, an organization’s particular sensitivities should come into play since all entities won’t have the same concerns about risk reduction, for example.
  • Getting Granular. Pegduncan, an active subject matter expert on the topic, noted that showing ROI was the right idea, but not always easy to demonstrate: “But you have to get their attention. Hard to do when IT is facing funding challenges.” This is when granular eDiscovery costs were mentioned: “EDD costs $3 -18k per gig (Rand survey) and should wake up most – adds up w/ large orgs having 147 matters at once.” Peg wasn’t that easily convinced: “Agreed that EDD costs are part of biz case, but .. it’s the problem of discretionary vs non-discretionary spending.”
  • Tools Play a Role. One participant asked: “what about tools for e-mail thread analysis, de-duplication, near de-duplication – are these applicable to #infogov?” A participant noted that “in the future we will see tools like #DLP and #predictivecoding used for #infogov auto-classification – more on DLP here: t.co/ktDl5ULe.” Pegduncan chimed in that “DLP=Data Loss Prevention. Link to Clearwell’s post on Auto-Classification & DLP t.co/ITMByhbj.”

With a concept as broad and complex as information governance, it’s truly amazing that a cogent “conversation” can take place in a series of 140-character tweets. As the Twitter Chat demonstrates, the information governance concept continues to evolve, and is doing so through discussions like this one via a social media platform. As with many of the key information governance themes (Ownership, ROI, Definition, etc.), there isn’t a right answer at this stage, but that isn’t an excuse for not asking the critical questions. “Sooner started, sooner finished” is a motto that will serve many organizations well in these exciting times. And, for folks who say they can’t spare the time, they’d be amazed at what they can learn in 140 characters.

Mark your calendars and track your Twitter hashtags now: The next #IGChat will be held on July 26 @ 10am PT.



[i] I’ve never been to rural Uzbekistan, but it just sounded remote. So, my apologies if there’s a world-class internet infrastructure there where the denizens tweet prolifically. Given that it’s one of only two doubly landlocked countries in the world, it seemed like an easy target. Uzbeks, please feel free to use the comment field and set me straight.

[ii] Minor edits were made to select tweets, but generally the shortened Twitter grammar wasn’t changed.

The Increasing Importance of Cross-Border eDiscovery and Data Protection Awareness

Thursday, June 28th, 2012

Some of the hot news in the legal world suggests that cross-border eDiscovery and data protection issues are gaining increasing importance in the eyes of organizations and their counsel. Recent court cases, international political discussions and industry publications confirm this trend.

On the judicial front, courts appear to weigh in more frequently on cross-border eDiscovery disputes. Unfortunately, their decisions are frequently inconsistent and often fail to provide a clear message for how organizations should approach these complex matters. For example, the U.S. federal court in Manhattan recently issued two diametrically opposed decisions on whether parties must use the Hague Convention for obtaining written discovery from the People’s Republic of China. In Tiffany (NJ) LLC v. Forbse (S.D.N.Y. May 23, 2012), the court ordered the plaintiff to use the Hague Convention to obtain the sought-after discovery in China from two non-party banks. This stands in sharp contrast to the Gucci America, Inc. v. Weixing Li (S.D.N.Y. May 18, 2012) decision, which allowed the plaintiffs to bypass the Hague treaty and obtain discovery through a Rule 45 subpoena. The principal difference between the orders was their respective authors, with each judge reaching a different conclusion as to the merits of proceeding with the Hague Convention. These conflicting decisions, largely the result of the U.S. Supreme Court’s decision in Aerospatiale, confirm that the jurisprudence on cross-border data requests is quite unsettled and will remain an issue of considerable interest for the foreseeable future.

Of no less importance to multinational corporations is the issue of cross-border data protection. In fact, data protection seems to be outpacing eDiscovery in terms of significance to organizations. This was apparent last week at The Sedona Conference Working Group Six (WG6) annual meeting held in Toronto. Though the custom is not to disseminate specifics about Sedona meetings, cross-border data protection laws and their impact on organizations were a significant theme at the conference. This should come as no surprise given the number of data protection laws both enacted and proposed in the past year now confronting organizations. The Philippines, Singapore, Australia, New Zealand, the European Union, the United States and several other jurisdictions have either implemented or are considering new data protection laws. Moreover, even the U.S. and the European Union have announced a desire for rapprochement over their continuing differences on data protection and privacy.

Leading industry analyst firm Gartner also weighed in on these issues in its recently released Magic Quadrant for E-Discovery Software. In that Magic Quadrant report, Gartner concluded that eDiscovery was reaching across international boundaries and impacting organizations across the globe. With cross-border litigation on the rise and new regulations in the United Kingdom affecting the financial services industry, Gartner has predicted that “demand for e-discovery products and services will accelerate.”

All of which suggests the need for greater awareness on the eDiscovery and data protection front. Organizations looking to obtain a better understanding of these issues can find resources at their fingertips through the American Bar Association and The Sedona Conference. These not-for-profit entities have issued publications that provide a good starting point for obtaining an understanding of the issues and highlighting best practices for addressing global eDiscovery and data protection laws. Symantec has likewise made resources available in this regard. In addition to its eDiscovery Passports™, Symantec is recording a series of podcasts with industry thought leaders that spotlight key cross-border considerations for organizations. The first of these podcasts features Chris Dale, a well-known international lawyer in this field, who discusses some key aspects of disclosure in the United Kingdom impacting organizations around the world.

Obtaining a greater awareness of cross-border eDiscovery and data protection should ultimately help companies meet the legal challenges accompanying globalization. And greater awareness will likely lead to better corporate practices, which should reduce risks, fees and lost opportunities.

Gartner’s 2012 Magic Quadrant for E-Discovery Software Looks to Information Governance as the Future

Monday, June 18th, 2012

Gartner recently released its 2012 Magic Quadrant for E-Discovery Software, its annual report analyzing the state of the electronic discovery industry. Many vendors in the Magic Quadrant (MQ) may initially focus on their position and the juxtaposition of their competitive neighbors along the Visionary – Execution axis. While that is a very useful exercise, there are also a number of additional nuggets in the MQ, particularly regarding Gartner’s overview of the market, anticipated rates of consolidation and future market direction.

Context

For those of us who’ve been around the eDiscovery industry since its infancy, it’s gratifying to see it mature. As Gartner concludes, the promise of this industry isn’t off in the future, it’s now:

“E-discovery is now a well-established fact in the legal and judicial worlds. … The growth of the e-discovery market is thus inevitable, as is the acceptance of technological assistance, even in professions with long-standing paper traditions.”

The past wasn’t always so rosy, particularly when the market was dominated by hundreds of service providers that seemed to hold on by maintaining a few key relationships, combined with relatively high margins.

“The market was once characterized by many small providers and some large ones, mostly employed indirectly by law firms, rather than directly by corporations. …  Purchasing decisions frequently reflected long-standing trusted relationships, which meant that even a small book of business was profitable to providers and the effects of customary market forces were muted. Providers were able to subsist on one or two large law firms or corporate clients.”

Consolidation

The Magic Quadrant correctly notes that these “salad days” just weren’t feasible long term. Gartner sees the pace of consolidation heating up even further, with some players striking it rich and some going home empty-handed.

“We expect that 2012 and 2013 will see many of these providers cease to exist as independent entities for one reason or another — by means of merger or acquisition, or business failure. This is a market in which differentiation is difficult and technology competence, business model rejuvenation or size are now required for survival. … The e-discovery software market is in a phase of high growth, increasing maturity and inevitable consolidation.”

Navigating these treacherous waters isn’t easy for eDiscovery providers, nor is it simple for customers to make purchasing decisions if they’re rightly concerned that the solution they buy today won’t be around tomorrow. Yet, despite the prognostication of an inevitable shakeout (Gartner forecasts that the market will shrink 25% in the raw number of firms claiming eDiscovery products/services), Gartner is still very bullish about the sector.

“Gartner estimates that the enterprise e-discovery software market came to $1 billion in total software vendor revenue in 2010. The five-year CAGR to 2015 is approximately 16%.”

This certainly means there’s a window of opportunity for certain players – particularly those who help larger players fill out their EDRM suite of offerings, since the best-of-breed era is quickly falling by the wayside. Gartner notes that end-to-end functionality is now table stakes in the eDiscovery space.

“We have seen a large upsurge in user requests for full-spectrum EDRM functionality. Whether that functionality will be used initially, or at all, remains an open question. Corporate buyers do seem minded to future-proof their investments in this way, by anticipating what they may wish to do with the software and the vendor in the future.”

Information Governance

Not surprisingly, it’s this “full-spectrum” functionality that most closely aligns with marrying the reactive, right side of the EDRM with the proactive, left side.  In concert, this yin and yang is referred to as information governance, and it’s this notion that’s increasingly driving buying behaviors.

“It is clear from our inquiry service that the desire to bring e-discovery under control by bringing data under control with retention management is a strategy that both legal and IT departments pursue in order to control cost and reduce risks. Sometimes the archiving solution precedes the e-discovery solution, and sometimes it follows it, but Gartner clients that feel the most comfortable with their e-discovery processes and most in control of their data are those that have put archiving systems in place …”

As Gartner looks out five years, the analyst firm anticipates more progress on the information governance front, because the “entire e-discovery industry is founded on a pile of largely redundant, outdated and trivial data.”  At some point this digital landfill is going to burst and organizations are finally realizing that if they don’t act now, it may be too late.

“During the past 10 to 15 years, corporations and individuals have allowed this data to accumulate for the simple reason that it was easy — if not necessarily inexpensive — to do so. … E-discovery has proved to be a huge motivation for companies to rethink their information management policies. The problem of determining what is relevant from a mass of information will not be solved quickly, but with a clear business driver (e-discovery) and an undeniable return on investment (deleting data that is no longer required for legal or business purposes can save millions of dollars in storage costs) there is hope for the future.”


The Gartner Magic Quadrant for E-Discovery Software is insightful for a number of reasons, not the least of which is how it portrays the developing maturity of the electronic discovery space. In just a few short years, the niche has sprouted wings, raced to $1B and is seeing massive consolidation. As we enter the next phase of maturation, we’ll likely see the sector morph into a larger information governance play, given customers’ “full-spectrum” functionality requirements and the presence of larger, mainstream software companies. Next on the horizon is the subsuming of eDiscovery into both the bigger information governance umbrella and other larger adjacent plays like “enterprise information archiving, enterprise content management, enterprise search and content analytics.” The rapid maturation of the eDiscovery industry will inevitably result in growing pains for vendors and practitioners alike, but in the end we’ll all benefit.


About the Magic Quadrant
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

APAC eDiscovery Passports: Litigation Basics for the Asia-Pacific Region

Wednesday, June 13th, 2012

Global economic indicators point to increased trade with and outsourcing to emerging markets around the world, specifically the Asia-Pacific (APAC) region. Typical U.S. sectors transacting with the East include manufacturing, business process outsourcing (BPO)/legal process outsourcing (LPO), call centers, and other industries. The Asian Development Bank stated last year that Asia will account for half of all global economic output by 2050 if its collective GDP stays on pace. The next 10 years will likely bring the BRICS countries (Brazil, Russia, India, China and South Africa) and the Four Asian Tigers (Hong Kong, Singapore, South Korea and Taiwan) to the forefront of the global economy. Combining this projected economic growth with the data explosion makes knowledge of the APAC legal system a necessity for litigators and international business people alike.

The convergence of the global economy across different privacy and data protection regimes has increased the complexity of addressing electronically stored information (ESI). Money and data in large volumes cross borders daily in order to conduct international business. This is true not only for Asian countries transacting with each other, but increasingly with Europe and the United States. Moreover, because technology continues to decrease the reliance on data in paper format, data will need to be produced and analyzed in the form in which it was created. This is important from a forensic standpoint, as well as from an information management perspective. This technical push is reason alone that organizations will need to shift their processes and technologies to focus more on ESI – not only in how data is created, but in how those organizations store, search, retrieve, review and produce data.

Discovery Equals eDiscovery

The world of eDiscovery for the purposes of regulation and litigation is no longer a U.S. anomaly. This is not only because organizations may be subject to the federal and state rules of civil procedure governing pre-trial discovery in U.S. civil litigation, but because under existing Asian laws and regulatory schemes, the ability to search and retrieve data may be necessary.

Regardless of whether the process of searching, retrieving, reviewing and producing data (eDiscovery) is called discovery or disclosure or whether these processes occur before trial or during, the reality in litigation, especially for multinational corporations, is that eDiscovery may be required around the world. The best approach is to not only equip your organization with the best technology available for legal defensibility and cost-savings from the litigator’s tool belt, but to know the rules by which one must play.

The Passports

Many lawyers have minimal knowledge of how to approach a discovery request in APAC jurisdictions, but there are resources that provide straightforward answers at no cost to the end-user. For example, Symantec has just released a series of “eDiscovery Passports™” for APAC that focus on discovery in civil litigation, the collision of data privacy laws, questions about the cross-border transfer of data, and the threat of U.S. litigation as businesses globalize. The Passports are a basic guide that frames key aspects of each country, including the legal system, discovery/disclosure, privacy, international considerations and data protection regulations. The Passports are useful tools to begin the process of exploring what considerations need to be made when litigating in the APAC region.

While the rules governing discovery in common law countries like Australia (UPC) and New Zealand (HCR) may be less comprehensive and follow slightly different timing than those of the U.S. and U.K., they do exist. Countries like Hong Kong and Singapore, which also follow a traditional common law system, have several procedural nuances that are unique to their jurisdictions. The Philippines, for example, is a hybrid of civil and common law legal systems, embodying similarities to California law due to history and proximity. Below are some examples of cases that evidence trends in Asian jurisdictions leaning toward the U.S. Federal Rules of Civil Procedure (FRCP) and the Sedona Principles, and that support the idea that eDiscovery is going global.

  • Hong Kong. In Moulin Global Eyecare Holdings Ltd. v. KPMG (2010), the court held the discovery of relevant documents must apply to both paper and ESI. The court did, however, reject the argument by plaintiffs that overly broad discovery be ordered as this would be ‘tantamount to requiring the defendants to turn over the contents of their filing cabinets for the plaintiffs to rummage through.’ Takeaway: Relevance and proportionality are the key factors in determining discovery orders, not format.
  • Singapore. In Deutsche Bank AG v. Chang Tse Wen (2010), the court acknowledged eDiscovery as particularly useful when the relevant data to be discovered is voluminous.  Because the parties failed to meet and confer in this case, the court ordered parties to take note of the March 2012 Practice Direction which sets out eDiscovery protocols and guidance. Takeaway: Parties must meet and confer to discuss considerations regarding ESI and be prepared to explain why the discovery sought is relevant to the case.
  • U.S. In E.I. du Pont de Nemours v. Kolon Industries (E.D. Va. July 21, 2011), the court held that defendants failed to issue a timely litigation hold.  The resulting eDiscovery sanctions culminated in a $919 million verdict against the defendant South Korean company. While exposure to the FRCP for a company doing business with the U.S. should not be the only factor in determining what eDiscovery processes and technologies are implemented, it is an important consideration in light of sanctions. Takeaway:  Although discovery requirements are not currently as expansive in Asia as they are in the U.S., companies conducting business with the U.S. may be subject to U.S. law, which requires that a legal hold be deployed when litigation is reasonably anticipated.

Asia eDiscovery Exchange

On June 6-7 at the Excelsior Hotel in Hong Kong, industry experts from the legal, corporate and technology industries gathered for the Asia eDiscovery Exchange.  Jeffrey Toh of innoXcell, the organizer of the event in conjunction with the American eDJ Group, says “this is still a very new initiative in Asia, nevertheless, regulators in Asia have taken steps to implement practice directions for electronic evidence.” Exchanges like these indicate the market is ready for comprehensive solutions for proactive information governance, as well as reactive eDiscovery.  The three themes the conference touched on were information governance, eDiscovery and forensics.  Key sessions included “Social Media is surpassing email as a means of communication; What does this mean for data collection and your Information Governance Strategy” with Barry Murphy, co-founder and principal analyst, eDiscovery Journal and Chris Dale, founder, e-Disclosure Information Project, as well as “Proactive Legal Management” (with Rebecca Grant, CEO of iCourts in Australia and Philip Rohlik, Debevoise & Plimpton in Hong Kong).

The Asian market is ripe for new technologies, and the Asia eDiscovery Exchange should yield tremendous insight into the unique drivers for the APAC region and how vendors and lawyers alike are adapting their offerings to the market. The eDiscovery Passports™ are also timely, as they coincide with a marked increase in Asian business and the proposal of new data protection laws in the region. Because the regional differences with regard to discovery are distinct, resources like this can help litigators in Asia interregionally, as well as lawyers around the world. Thought leaders in the APAC region have come together to discuss these differences and how technology can best address the unique requirements in each jurisdiction. The conference has made clear that information governance, archiving and eDiscovery tools are necessary in the region, even if those needs are not necessarily motivated by litigation as in the U.S.