Posts Tagged ‘investigation’

APAC eDiscovery Passports: Litigation Basics for the Asia-Pacific Region

Wednesday, June 13th, 2012

Global economic indicators point to increased trade with and outsourcing to emerging markets around the world, particularly the Asia-Pacific (APAC) region. Typical U.S. sectors transacting with the East include manufacturing, business process outsourcing (BPO)/legal process outsourcing (LPO), call centers, and other industries. The Asian Development Bank stated last year that Asia will account for half of all global economic output by 2050 if the region's collective GDP stays on pace. The next 10 years will likely bring the BRICS countries (Brazil, Russia, India, China and South Africa) and the Four Asian Tigers (Hong Kong, Singapore, South Korea and Taiwan) to the forefront of the global economy. Combining this projected economic growth with the data explosion makes knowledge of the APAC legal system a necessity for litigators and international businesspeople alike.

The convergence of the global economy across different privacy and data protection regimes has increased the complexity of addressing electronically stored information (ESI). Money and data in large volumes cross borders daily in order to conduct international business. This is true not only for Asian countries transacting with each other, but increasingly with Europe and the United States. Moreover, because technology continues to decrease the reliance on data in paper format, data will need to be produced and analyzed in the form in which it was created. This is important from a forensic standpoint, as well as an information management perspective. This technical push is reason alone for organizations to shift their processes and technologies to focus more on ESI – not only in how data is created, but in how those organizations store, search, retrieve, review and produce it.

Discovery Equals eDiscovery

The world of eDiscovery for the purposes of regulation and litigation is no longer a U.S. anomaly. This is not only because organizations may be subject to the federal and state rules of civil procedure governing pre-trial discovery in U.S. civil litigation, but because under existing Asian laws and regulatory schemes, the ability to search and retrieve data may be necessary.

Regardless of whether the process of searching, retrieving, reviewing and producing data (eDiscovery) is called discovery or disclosure or whether these processes occur before trial or during, the reality in litigation, especially for multinational corporations, is that eDiscovery may be required around the world. The best approach is to not only equip your organization with the best technology available for legal defensibility and cost-savings from the litigator’s tool belt, but to know the rules by which one must play.

The Passports

The knowledge level for many lawyers about how to approach a discovery request in APAC jurisdictions is often minimal, but there are resources that provide straightforward answers at no cost to the end-user. For example, Symantec has just released a series of “eDiscovery Passports™” for APAC that focus on discovery in civil litigation, the collision of data privacy laws, questions about the cross-border transfer of data, and the threat of U.S. litigation as businesses globalize.  The Passports are a basic guide that frame key components about a country including the legal system, discovery/disclosure, privacy, international considerations and data protection regulations. The Passports are useful tools to begin the process of exploring what considerations need to be made when litigating in the APAC region.

While the rules governing discovery in common law countries like Australia and New Zealand may be less comprehensive and follow slightly different timing than those of the U.S. and U.K., they do exist under Australia's Uniform Civil Procedure Rules (UCPR) and New Zealand's High Court Rules (HCR). Countries like Hong Kong and Singapore, which also follow a traditional common law system, have several procedural nuances that are unique to their jurisdictions. The Philippines, for example, is a hybrid of both civil and common law legal systems, embodying similarities to California law due to history and proximity. Below are some examples of cases that evidence trends in Asian jurisdictions leaning toward the U.S. Federal Rules of Civil Procedure (FRCP) and the Sedona Principles, and that support the idea that eDiscovery is going global.

  • Hong Kong. In Moulin Global Eyecare Holdings Ltd. v. KPMG (2010), the court held the discovery of relevant documents must apply to both paper and ESI. The court did, however, reject the argument by plaintiffs that overly broad discovery be ordered as this would be ‘tantamount to requiring the defendants to turn over the contents of their filing cabinets for the plaintiffs to rummage through.’ Takeaway: Relevance and proportionality are the key factors in determining discovery orders, not format.
  • Singapore. In Deutsche Bank AG v. Chang Tse Wen (2010), the court acknowledged eDiscovery as particularly useful when the relevant data to be discovered is voluminous.  Because the parties failed to meet and confer in this case, the court ordered parties to take note of the March 2012 Practice Direction which sets out eDiscovery protocols and guidance. Takeaway: Parties must meet and confer to discuss considerations regarding ESI and be prepared to explain why the discovery sought is relevant to the case.
  • U.S. In E.I. du Pont de Nemours v. Kolon Industries (E.D. Va. July 21, 2011), the court held that defendants failed to issue a timely litigation hold. The resulting eDiscovery sanctions culminated in a $919 million verdict against the defendant South Korean company. While exposure to the FRCP for a company doing business with the U.S. should not be the only factor in determining what eDiscovery processes and technologies are implemented, it is an important consideration in light of sanctions. Takeaway: Although discovery requirements are not currently as expansive in Asia as they are in the U.S., companies conducting business with the U.S. may be subject to U.S. law, which requires that a legal hold be deployed when litigation is reasonably anticipated.

Asia eDiscovery Exchange

On June 6-7 at the Excelsior Hotel in Hong Kong, industry experts from the legal, corporate and technology industries gathered for the Asia eDiscovery Exchange.  Jeffrey Toh of innoXcell, the organizer of the event in conjunction with the American eDJ Group, says “this is still a very new initiative in Asia, nevertheless, regulators in Asia have taken steps to implement practice directions for electronic evidence.” Exchanges like these indicate the market is ready for comprehensive solutions for proactive information governance, as well as reactive eDiscovery.  The three themes the conference touched on were information governance, eDiscovery and forensics.  Key sessions included “Social Media is surpassing email as a means of communication; What does this mean for data collection and your Information Governance Strategy” with Barry Murphy, co-founder and principal analyst, eDiscovery Journal and Chris Dale, founder, e-Disclosure Information Project, as well as “Proactive Legal Management” (with Rebecca Grant, CEO of iCourts in Australia and Philip Rohlik, Debevoise & Plimpton in Hong Kong).

The Asian market is ripe for new technologies, and the Asia eDiscovery Exchange should yield tremendous insight into the unique drivers for the APAC region and how vendors and lawyers alike are adapting to market with their offerings.  The eDiscovery Passports™ are also timely as they coincide with a marked increase in Asian business and the proposal of new data protection laws in the region.  Because the regional differences are distinct with regard to discovery, resources like this can help litigators in Asia interregionally, as well as lawyers around the world.  Thought leaders in the APAC region have come together to discuss these differences and how technology can best address the unique requirements in each jurisdiction.  The conference has made clear that information governance, archiving and eDiscovery tools are necessary in the region, even if those needs are not necessarily motivated by litigation as in the U.S. 

Look Before You Leap! Avoiding Pitfalls When Moving eDiscovery to the Cloud

Monday, May 7th, 2012

It’s no surprise that the eDiscovery frenzy gripping the American legal system over the past decade has become increasingly expensive.  Particularly costly to organizations is the process of preserving and collecting documents, a fact repeatedly emphasized by the Advisory Committee in its report regarding the 2006 amendments to the Federal Rules of Civil Procedure (FRCP).  These aspects of discovery are often lengthy and can be disruptive to business operations.  Just as troubling, they increase the duration and expense of litigation.

Because these costs and delays affect the courts as well as clients, it comes as no surprise that judges have now heightened their expectation for how organizations store, manage and discover their electronically stored information (ESI).  Gone are the days when enterprises could plead ignorance for not preserving or producing their data in an efficient, cost effective and defensible manner.  Organizations must now follow best practices – both during and before litigation – if they are to safely navigate the stormy seas of eDiscovery.

The importance of deploying such practices applies acutely to those organizations that are exploring “cloud”-based alternatives to traditional methods for preserving and producing electronic information.  Under the right circumstances, the cloud may represent a fantastic opportunity to streamline the eDiscovery process for an organization.  Yet it could also turn into a dangerous liaison if the cloud offering is not properly scrutinized for basic eDiscovery functionality.  Indeed, the City of Los Angeles’s recent decision to partially disengage from its cloud service provider exemplifies this admonition to “look before you leap” to the cloud.  Thus, before selecting a cloud provider for eDiscovery, organizations should be particularly careful to ensure that a provider has the ability both to efficiently retrieve data from the cloud and to issue litigation hold notices.

Effective Data Retrieval Requires Efficient Data Storage

The hype surrounding the cloud has generally focused on the opportunity for cheap and unlimited storage of information.  Storage, however, is only one of many factors to consider in selecting a cloud-based eDiscovery solution.  To be able to meet the heightened expectations of courts and regulatory bodies, organizations must have the actual – not theoretical – ability to retrieve their data in real time.  Otherwise, they may not be able to satisfy eDiscovery requests from courts or regulatory bodies, let alone the day-to-day demands of their operations.

A key step to retrieving company data in a timely manner is to first confirm whether the cloud offering can intelligently organize that information such that organizations can quickly respond to discovery requests and other legal demands.  This includes the capacity to implement and observe company retention protocols.  Just like traditional data archiving software, the cloud must enable automated retention rules and thus limit the retention of information to a designated time period.  This will enable data to be expired once it reaches the end of that period.
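The automated retention rules described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API; the record classes and periods are hypothetical. Each record carries a class, and anything older than that class's designated period becomes eligible for expiry.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: each record class keeps data for a set period.
RETENTION_PERIODS = {
    "email": timedelta(days=90),
    "contract": timedelta(days=365 * 7),
}

def is_expired(record_class: str, created_at: datetime, now: datetime) -> bool:
    """Return True when a record has outlived its retention period."""
    period = RETENTION_PERIODS.get(record_class)
    if period is None:
        return False  # unknown classes are retained by default
    return now - created_at > period

now = datetime(2012, 6, 1, tzinfo=timezone.utc)
created = datetime(2012, 1, 1, tzinfo=timezone.utc)
print(is_expired("email", created, now))     # past 90 days: eligible for expiry
print(is_expired("contract", created, now))  # within 7 years: retained
```

In a real archive the expiry check would run as a scheduled job and would also honor legal holds before deleting anything.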

The pool of data can be further decreased through single instance storage.  This deduplication technology eliminates redundant data by preserving only a master copy of each document placed into the cloud.  This will reduce the amount of data that needs to be identified, preserved, collected and reviewed as part of any discovery process.  For while unlimited data storage may seem ideal now, reviewing unlimited amounts of data will quickly become a logistical and costly nightmare.
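Single instance storage boils down to keying each stored item by a hash of its content, so identical documents share one master copy. A toy sketch under that assumption (class and method names are hypothetical):

```python
import hashlib

class SingleInstanceStore:
    """Toy deduplicating store: one master copy per unique content."""

    def __init__(self):
        self._blobs = {}  # content hash -> bytes (the master copy)
        self._refs = {}   # document id -> content hash

    def put(self, doc_id: str, content: bytes) -> bool:
        digest = hashlib.sha256(content).hexdigest()
        is_new = digest not in self._blobs
        if is_new:
            self._blobs[digest] = content  # store the master copy once
        self._refs[doc_id] = digest        # every duplicate is just a pointer
        return is_new

    def get(self, doc_id: str) -> bytes:
        return self._blobs[self._refs[doc_id]]

store = SingleInstanceStore()
store.put("msg-1", b"quarterly report attached")
store.put("msg-2", b"quarterly report attached")  # duplicate: no new copy stored
print(len(store._blobs))  # 1 master copy serving 2 documents
```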

Any viable cloud offering should also have the ability to suspend automated document retention/deletion rules to ensure the adequate preservation of relevant information.  This goes beyond placing a hold on archival data in the cloud.  It requires that an organization have the ability to identify the data sources in the cloud that may contain relevant information and then modify aspects of its retention policies to ensure that cloud-stored data is retained for eDiscovery.  Taking this step will enable an organization to create a defensible document retention strategy and be protected from court sanctions under the Federal Rule of Civil Procedure 37(e) “safe harbor.”  The decision from Viramontes v. U.S. Bancorp (N.D. Ill. Jan. 27, 2011) is particularly instructive on this issue.

In Viramontes, the defendant bank defeated a sanctions motion because it timely modified aspects of its email retention policy.  The bank implemented a policy that kept emails for 90 days, after which the emails were deleted.  That policy was promptly suspended, however, once litigation was reasonably foreseeable.  Because the bank followed that procedure in good faith, it was protected from sanctions under Rule 37(e).

As the Viramontes case shows, an organization can be prepared for eDiscovery disputes by appropriately suspending aspects of its document retention policies.  By creating and then faithfully observing a policy that requires retention policies be suspended on the occurrence of litigation or other triggering event, an organization can develop a defensible retention procedure. Having such eDiscovery functionality in a cloud provider will likely facilitate an organization’s eDiscovery process and better insulate it from litigation disasters.
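The suspension mechanism the Viramontes court credited can be sketched as a hold flag that overrides the retention clock: deletion is permitted only when the custodian is not on hold and the record has passed its retention period. A minimal illustration, with hypothetical names rather than any product's actual interface:

```python
from datetime import datetime, timedelta, timezone

class RetentionManager:
    """Sketch: automated expiry that is suspended for custodians under legal hold."""

    def __init__(self, period: timedelta):
        self.period = period
        self.holds = set()  # custodians whose deletions are suspended

    def place_hold(self, custodian: str):
        self.holds.add(custodian)

    def may_delete(self, custodian: str, created_at: datetime, now: datetime) -> bool:
        if custodian in self.holds:
            return False  # the hold overrides the retention clock
        return now - created_at > self.period

mgr = RetentionManager(timedelta(days=90))
mgr.place_hold("alice")  # litigation reasonably anticipated for alice's data
now = datetime(2012, 6, 1, tzinfo=timezone.utc)
old = now - timedelta(days=200)
print(mgr.may_delete("alice", old, now))  # hold suspends deletion
print(mgr.may_delete("bob", old, now))    # no hold, past 90 days: may delete
```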

The Ability to Issue Litigation Hold Notices

To be effective for eDiscovery purposes, a cloud service provider must also enable an organization to deploy a litigation hold to prevent users from destroying data. Unless the cloud has litigation hold technology, the entire discovery process may very well collapse.  For electronic data to be produced in litigation, it must first be preserved.  And it cannot be preserved if the key players or data source custodians are unaware that such information must be retained.  Indeed, employees and data sources may discard and overwrite electronically stored information if they are oblivious to a preservation duty.

A cloud service provider should therefore enable automated legal hold acknowledgements.  Such technology will allow custodians to be promptly and properly notified of litigation and thereby retain information that might otherwise have been discarded.  Inadequate litigation hold technology leaves organizations vulnerable to data loss and court punishment.
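At its core, automated hold acknowledgement tracking amounts to recording who was notified and who has confirmed, so unresponsive custodians can be reminded or escalated. A toy sketch with hypothetical names and matters:

```python
class HoldNotice:
    """Toy legal-hold notice tracker: who was notified, who acknowledged."""

    def __init__(self, matter: str, custodians):
        self.matter = matter
        self.acknowledged = {c: False for c in custodians}

    def acknowledge(self, custodian: str):
        self.acknowledged[custodian] = True

    def outstanding(self):
        """Custodians who have not yet confirmed the hold (escalation targets)."""
        return [c for c, ok in self.acknowledged.items() if not ok]

notice = HoldNotice("Doe v. Acme", ["alice", "bob", "carol"])
notice.acknowledge("alice")
print(notice.outstanding())  # ['bob', 'carol'] still need reminders
```

A real system would send the notices, timestamp each acknowledgement, and keep the audit trail needed to prove the hold was deployed in good faith.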

Conclusion

Confirming that a cloud offering can quickly retrieve and efficiently store enterprise data while effectively deploying litigation hold notices will likely address the basic concerns regarding its eDiscovery functionality. Yet these features alone will not make that solution the model of eDiscovery cloud providers. Advanced search capabilities should also be included to reduce the amount of data that must be analyzed and reviewed downstream. In addition, the cloud ought to support load files in compatible formats for export to third party review software. The cloud should additionally provide an organization with a clear audit trail establishing that neither its documents nor their metadata were modified when transmitted to the cloud. Without this assurance, an organization may not be able to comply with key regulations or establish the authenticity of its data in court. Finally, ensure that these provisions are memorialized in the service level agreement governing the relationship between the organization and the cloud provider.

The eDiscovery “Passport”: The First Step to Succeeding in International Legal Disputes

Monday, April 2nd, 2012

The increase in globalization continues to erase borders throughout the world economy. Organizations now routinely conduct business in countries that were previously unknown to their industry vertical.  The trend of global integration is certain to increase, with reports such as the Ernst & Young 2011 Global Economic Survey confirming that 74% of companies believe that globalization, particularly in emerging markets, is essential to their continued vitality.

Not surprisingly, this trend of global integration has also led to a corresponding increase in cross-border litigation. For example, parties to U.S. litigation are increasingly seeking discovery of electronically stored information (ESI) from other litigants and third parties located in Continental Europe and the United Kingdom. Since traditional methods under the Federal Rules of Civil Procedure (FRCP) may be unacceptable for discovering ESI in those forums, the question then becomes how such information can be obtained.

At this point, many clients and their counsel are unaware how to safely navigate these international waters. The short answer for how to address these issues for much of Europe would be to resort to the Hague Convention of March 18, 1970 on the Taking of Evidence Abroad in Civil or Commercial Matters (Hague Convention). Simply referring to the Hague Convention, however, would ignore the complexities of electronic discovery in Europe. Worse, it would sidestep the glaring knowledge gap that exists in the United States regarding the cultural differences distinguishing European litigation from American proceedings.

The ability to bridge this gap with an awareness of the discovery processes in Europe is essential. Understanding that process is similar to holding a valid passport for international travel. Just as a passport is required for travelers to successfully cross into foreign lands, an “eDiscovery Passport™” is likewise necessary for organizations to effectively conduct cross-border discovery.

The Playing Field for eDiscovery in Continental Europe

Litigation in Continental Europe is culturally distinct from American court proceedings. “Discovery,” as it is known in the United States, does not exist in Europe. Interrogatories, categorical document requests and requests for admissions are simply unavailable as European discovery devices. Instead, European countries generally allow only a limited exchange of documents, with parties typically disclosing only that information that supports their claims.

The U.S. Court of Appeals for the Seventh Circuit recently commented on this key distinction between European and American discovery when it observed that “the German legal system . . . does not authorize discovery in the sense of Rule 26 of the Federal Rules of Civil Procedure.” The court went on to explain that “[a] party to a German lawsuit cannot demand categories of documents from his opponent. All he can demand are documents that he is able to identify specifically—individually, not by category.” Heraeus Kulzer GmbH v. Biomet, Inc., 633 F.3d 591, 596 (7th Cir. 2011).

Another key distinction to discovery in Continental Europe is the lack of rules or case law requiring the preservation of ESI or paper documents. This stands in sharp contrast to American jurisprudence, which typically requires organizations to preserve information as soon as they reasonably anticipate litigation. See, e.g., Micron Technology, Inc. v. Rambus Inc., 645 F.3d 1311, 1320 (Fed.Cir. 2011). In Europe, while an implied preservation duty could arise if a court ordered the disclosure of certain materials, the penalties for European non-compliance are typically not as severe as those issued by American courts.

Only the nations of the United Kingdom, from which American notions of litigation are derived, have discovery obligations that are more similar to those in the United States. For example, in the combined legal system of England and Wales, a party must disclose to the other side information adverse to its claims. Moreover, England and Wales also suggest that parties should take affirmative steps to prepare for disclosure. According to the High Court in Earles v Barclays Bank Plc [2009] EWHC 2500 (Mercantile) (08 October 2009), this includes having “an efficient and effective information management system in place to provide identification, preservation, collection, processing, review analysis and production of its ESI in the disclosure process in litigation and regulation.” For organizations looking to better address these issues, a strategic and intelligent information governance plan offers perhaps the best chance to do so.

Hostility to International Discovery Requests

Despite some similarities between the U.S. and the U.K., Europe as a whole retains a certain amount of cultural hostility to pre-trial discovery. Given this fact, it should come as no surprise that international eDiscovery requests made pursuant to the Hague Convention are frequently denied. Requests are often rejected because they are overly broad.  In addition, some countries such as Italy simply refuse to honor requests for pre-trial discovery from common law countries like the United States. Moreover, other countries like Austria are not signatories to the Hague Convention and will not accept requests made pursuant to that treaty. To obtain ESI from those countries, litigants must take their chances with the cumbersome and time-consuming process of submitting letters rogatory through the U.S. State Department. Finally, requests for information that seek email or other “personal information” (i.e., information that could be used to identify a person) must additionally satisfy a patchwork of strict European data protection rules.

Obtaining an eDiscovery Passport

This backdrop of complexity underscores the need for both lawyers and laymen to understand the basic principles governing eDisclosure in Europe. Such a task should not be seen as daunting. There are resources that provide straightforward answers to these issues at no cost to the end-user. For example, Symantec has just released a series of eDiscovery Passports™ that touch on the basic issues underlying disclosure and data privacy in the United Kingdom, France, Germany, Holland, Belgium, Austria, Switzerland, Italy and Spain. Organizations such as The Sedona Conference have also made available materials that provide significant detail on these issues, including its recently released International Principles on Discovery, Disclosure and Data Protection.

These resources can provide valuable information to clients and counsel alike and better prepare litigants for the challenges of pursuing legal rights across international boundaries. By so doing, organizations can moderate the effects of legal risk and more confidently pursue their globalization objectives.

eDiscovery Down Under: New Zealand and Australia Are Not as Different as They Sound, Mate!

Thursday, March 29th, 2012

Shortly after arriving in Wellington, New Zealand, I picked up the Dominion Post newspaper and read its lead article: a story involving U.S. jurisdiction being exercised over billionaire NZ resident Mr. Kim Dotcom. The article reinforced the challenges we face with blurred legal and data governance issues presented by the globalization of the economy and the expansive reach of the internet. Originally from Germany, and having changed his surname to reflect the origin of his fortune, Mr. Dotcom has become all too familiar in NZ of late. He has just purchased two opulent homes in NZ and has become an internationally controversial figure for internet piracy. Mr. Dotcom’s legal troubles arise out of his internet business, which enables illegal downloads of pirated material between users and allegedly powers the largest copyright infringement in global history. It is estimated that his website constitutes 4% of the world’s internet traffic, which means there could be an enormous volume of discovery in this case (or cases).

The most recent legal problems Mr. Dotcom faces are with U.S. authorities who want to extradite him to face charges of copyright infringement, allegedly worth $500 million, arising from his Megaupload file-sharing website. From a criminal and record-keeping standpoint, Mr. Dotcom’s issues highlight the need for and use of appropriate technologies. In order to establish a case against him, it’s likely that search technologies were deployed by U.S. intelligence agencies to piece together Mr. Dotcom’s activities, banking information, emails and the data transfers on his site. In a case like this, where intelligence agencies would need to collect, search and cull email from so many different geographies and data sources down to just the relevant information, technologies that link email conversation threads and give transparent insight into a data collection set would provide immense value. Additionally, the Immigration bureau in New Zealand has been required to release hundreds of documents about Mr. Dotcom’s residency application that were requested under the Official Information Act (OIA). The records that Immigration had to produce were likely pulled from its archive or records management system in NZ, and then redacted for private information before production to the public.
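The email thread linking mentioned above can be approximated by keying each message to the root of its References chain (the thread's first message). This is a deliberately simplified sketch using Python's standard email parser; a production system would also follow In-Reply-To headers and normalize subjects.

```python
import email
from collections import defaultdict

def thread_key(raw_message: str) -> str:
    """Group a message with its conversation thread.

    Uses the first ID in the References header (the thread root) when
    present, otherwise the message's own Message-ID.
    """
    msg = email.message_from_string(raw_message)
    refs = (msg.get("References") or "").split()
    return refs[0] if refs else msg.get("Message-ID", "")

messages = [
    "Message-ID: <root@x>\nSubject: deal\n\nhi",
    "Message-ID: <r1@x>\nReferences: <root@x>\nSubject: Re: deal\n\nreply",
]
threads = defaultdict(list)
for m in messages:
    threads[thread_key(m)].append(m)
print(len(threads))  # both messages fall into one thread
```

Grouping a collection this way lets reviewers read a conversation as a unit and cull whole threads at once instead of message by message.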

The same tools we use here in the U.S. for investigatory, compliance and litigation purposes are needed in Australia and New Zealand to build a criminal case or to comply with the OIA. Adoption of information governance technology in APAC is trending first toward government agencies, which are purchasing archiving and eDiscovery technologies more rapidly than private companies. Why is this? One reason could be that because governments in APAC have a larger responsibility for healthcare, education and the protection of privacy, they are more invested in compliance requirements and in staying off the front page of the news for shortcomings. APAC private enterprises that are small or mid-sized and not yet doing international business do not have the same archiving and eDiscovery needs large government agencies do, nor do they face litigation the way their American counterparts do. Large global companies, no matter where they are based, should assume that they may be subject to litigation wherever they do business.

An interesting NZ use case on the enterprise level is that of Transpower (the quasi-governmental energy agency), where compliance with both private and public requirements is mandatory. Transpower is an organisation that is government-owned, yet operates for a profit. Sally Myles, an experienced records manager who recently came to Transpower to head up information governance initiatives, says,

“We have to comply with the Public Records Act of 2005, and public requests for information are frequent as we are under constant scrutiny about where we will develop our plants. We also must comply with the Privacy Act of 1993. My challenge is to get the attention of our leadership to demonstrate why we need to make these changes and show them a plan for implementation as well as cost savings.”

Myles’ comments indicate NZ is facing many of the same information challenges we are here in the US with storage, records management and searching for meaningful information within the organisation.

Australia, New Zealand and U.S. Commonalities

In Australia and NZ, litigation is not seen as a compelling business driver the way it is in the U.S. This is because many of the information governance needs of organisations are driven by regulatory, statutory and compliance requirements, and the environment is not as litigious as it is in the U.S. The Official Information Act in NZ and the Freedom of Information Act in Australia are analogous to the Freedom of Information Act (FOIA) here in the U.S. The requirements to produce public records alone justify the use of technology to manage large volumes of data and produce appropriately redacted information to the public. This is true regardless of litigation. Additionally, there are now cases like DuPont and Mr. Dotcom’s that legitimize the risk of litigation involving the U.S. The fact that implementing an information governance product suite will also enable a company to be prepared for litigation is a beneficial by-product for many entities, as they need technology for record-keeping and privacy reasons anyway. In essence, the same capabilities are achieved at the end of the day, regardless of the impetus for implementing a solution.

The Royal Commission – The Ultimate eDiscovery Vehicle

One way to think about an Australian Royal Commission (RC) is as a version of a U.S. government investigation. A key difference, however, is that a U.S. government investigation is typically into private companies, whereas a Royal Commission is typically an investigation into a government body after a major tragedy, and it is initiated by the Head of State. An RC is an ad-hoc, formal, public inquiry into a defined issue with considerable discovery powers. These powers can be greater than those of a judge but are restricted to the scope and terms of reference of the Commission. RCs are called to look into matters of great importance and usually have very large budgets. The RC is charged with researching the issue, consulting experts both within and outside of government, and developing findings to recommend changes to the law or other courses of action. RCs have immense investigatory powers, including summoning witnesses under oath, offering indemnities, seizing documents and other evidence (sometimes including those normally protected, such as classified information), holding hearings in camera if necessary and—in a few cases—compelling government officials to aid in the execution of the Commission.

These expansive powers give the RC the opportunity to employ state-of-the-art technology and to skip the slow, bureaucratic decision-making processes found within government when it comes to implementing technological change. For this reason, eDiscovery will initially continue to increase in the government sector at a more rapid pace than in the private sector in the Asia-Pacific region. This is because litigation is less prevalent in the Asia-Pacific, and because the RC is a unique investigatory vehicle with the most far-reaching authority for discovering information. Moreover, the timeframes for RCs are tight and their scopes are broad, making them hair-on-fire situations that move quickly.

While the APAC information management environment does not have the exact same drivers the U.S. market does, it definitely has the same archiving, eDiscovery and technology needs for different reasons. Another key point is that the APAC archiving and eDiscovery market will likely be driven by the government as records, search and production requirements are the main compliance needs in Australia and NZ. APAC organisations would be well served by beginning to modularly implement key elements of an information governance plan, as globalization is driving us all to a more common and automated approach to data management. 

The Social Media Rubik’s Cube: FINRA Solved it First, Are Non-Regulated Industries Next?

Wednesday, January 25th, 2012

It’s no surprise that the first industry to be heavily regulated regarding social media use was the financial services industry. The predominant factor that drove regulators to address the viral qualities of social media was the fiduciary nature of investing that accompanies securities, coupled with the potential detrimental financial impact these offerings could have on investors.

Although there is no explicit language in FINRA’s Regulatory Notices 10-06 (January 2010) or 11-30 (August 2011) requiring archival, the record-keeping component of the notices necessitates social media archiving in most cases due to the sheer volume of data produced on social media sites. Melanie Kalemba, Vice President of Business Development at SocialWare in Austin, Texas, states:

“Our clients in the financial industry have led the way, they have paved the road for other industries, making social media usage less daunting. Best practices for monitoring third-party content, record keeping responsibilities, and compliance programs are available and developed for other industries to learn from. The template is made.”

eDiscovery and Privacy Implications. Privacy laws are an important aspect of social media use that impact discoverability. Discovery and privacy represent layers of the Rubik’s cube in the ever-changing and complex social media environment. No longer are social media cases only personal injury suits or HR incidents, although those are plentiful. For example, in Largent v. Reed the court ruled that information posted by a party on their personal Facebook page was discoverable and ordered the plaintiff to provide user name and password to enable the production of the information. In granting the motion to compel the Defendant’s login credentials, Judge Walsh acknowledged that Facebook has privacy settings, and that users must take “affirmative steps” to keep their information private. However, his ruling determined that no social media privacy privilege exists: “No court has recognized such a privilege, and neither will we.” He further reiterated his ruling by adding, “[o]nly the uninitiated or foolish could believe that Facebook is an online lockbox of secrets.”

Then there are the new cases emerging over social media account ownership, which affect privacy and discoverability. In the recently filed Phonedog v. Kravitz, 11-03474 (N.D. Cal.; Nov. 8, 2011), the line between the “professional” and the “private” user is becoming increasingly blurred. This case also raises questions about proprietary client lists, valuations of followers, and trade secrets – all of which are further complicated when there is no social media policy in place. The financial services industry has been successful in implementing effective social media policies along with technology to comply with agency mandates – not only because it was forced to by regulation, but because it has developed best practices that essentially incorporate social media into document retention policies and information governance infrastructures.

Regulatory Framework. Adding another Rubik’s layer is the multitude of regulatory and compliance issues that many industries face. The most active and vocal US regulators on social media have been FINRA, the SEC and the FTC. FINRA initiated guidance to the financial services industry, and earlier this month the SEC issued its alert. The SEC’s exam alert to registered investment advisers, issued on January 4, 2012, was not meant to be a comprehensive summary of compliance related to the use of social media. Instead, it lays out staff observations in three major categories – third-party content, record keeping and compliance – expounding on FINRA’s notice.

Last year the FTC issued a thorough Preliminary FTC Staff Report on Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers.  Three main components are central to the report. The first is a call for all companies to build privacy and security mechanisms into new products – considering the possible negative ramifications at the outset rather than treating social media and privacy issues as an afterthought. The FTC calls this approach “Privacy by Design.” Second, the “Just-In-Time” notice concept encourages companies to communicate with the public simply, prompting informed decisions about their data in terms that are clear and that require an affirmative action (i.e., checking a box). Finally, the FTC calls for greater transparency around data collection, use and retention. The FTC asserts that consumers have a right to know what kind of data companies collect, and should have access to the sensitivity and intended use of that data. The FTC’s report is intended to inform policymakers, including Congress, as they legislate on privacy – and to motivate companies to self-regulate and develop best practices.

David Shonka, Principal Deputy General Counsel at the FTC in Washington, D.C., warns, “There is a real tension between the situations where a company needs to collect data about a transaction versus the liabilities associated with keeping unneeded data due to privacy concerns. Generally, archiving everything is a mistake.” Shonka arguably reinforces the case for instituting an intelligent archive, whether a company is regulated or not: an archive that is selective about what it ingests based on content, and that applies an appropriate deletion cycle to defined data types and content according to policy. This ensures expiry of private consumer information in a timely manner, while retaining the benefits of retrieval for a defined period if necessary.

The Non-Regulated Use Case. When will comprehensive social media policies, retention and monitoring become more prevalent in the non-regulated sectors? In the case of FINRA and the SEC, regulations were issued to the financial industry. In the case of the FTC, guidance has been given to companies on avoiding false advertising and protecting consumer privacy. The two are not dissimilar in effect. Both require a social media policy, monitoring, auditing, technology, and training. While there is no clear mandate to archive social media in a non-regulated industry, one can’t be too far away. This is evidenced by companies that have already implemented social media monitoring systems for reasons like brand promotion and protection, or healthcare companies that deal with highly sensitive information. If social media is replacing email, and social media is essentially another form of electronic evidence, why would social media not be part of the integral document retention and expiry procedures within an organization?

Content-based monitoring and archiving is possible with technology available today, as the financial sector has demonstrated. Debbi Corej, a compliance expert for the financial sector who has successfully implemented an intensive social media program, says it perfectly: “How do you get to yes? Yes you can use social media, but in a compliant way.” The answer can be found at LegalTech New York, January 30 @ 2:00 p.m.

2012: Year of the Dragon – and Predictive Coding. Will the eDiscovery Landscape Be Forever Changed?

Monday, January 23rd, 2012

2012 is the Year of the Dragon – which is fitting, since no other Chinese Zodiac sign represents the promise, challenge, and evolution of predictive coding technology more than the Dragon.  The few who have embraced predictive coding technology exemplify symbolic traits of the Dragon that include being unafraid of challenges and willing to take risks.  In the legal profession, taking risks typically isn’t in a lawyer’s DNA, which might explain why predictive coding technology has seen lackluster adoption among lawyers despite the hype.  This blog explores the promise of predictive coding technology, why predictive coding has not been widely adopted in eDiscovery, and explains why 2012 is likely to be remembered as the year of predictive coding.

What is predictive coding?

Predictive coding refers to machine learning technology that can be used to automatically predict how documents should be classified based on limited human input.  In litigation, predictive coding technology can be used to rank and then “code” or “tag” electronic documents based on criteria such as “relevance” and “privilege” so organizations can reduce the amount of time and money spent on traditional page-by-page attorney document review during discovery.

Generally, the technology works by ranking the documents and thereby prioritizing the most important ones for review.  In addition to helping attorneys find important documents faster, this prioritization can even eliminate the need to review the lowest-ranked documents in certain situations. Additionally, since computers don’t get tired or daydream, many believe computers can predict document relevance better than their human counterparts.
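To make the ranking idea concrete, here is a deliberately simplified Python sketch. It is not any vendor’s actual algorithm – real predictive coding tools use far more sophisticated statistical models – but it shows the basic loop: an attorney codes a handful of seed documents, a model is derived from them, and the remaining documents are scored and sorted so the most promising ones are reviewed first.

```python
# Toy sketch of predictive ranking: score each unreviewed document by how
# much vocabulary it shares with documents an attorney coded "relevant".

def tokenize(text):
    return set(text.lower().split())

def train(seed_docs):
    """seed_docs: list of (text, is_relevant) pairs coded by an attorney."""
    relevant_words = set()
    for text, is_relevant in seed_docs:
        if is_relevant:
            relevant_words |= tokenize(text)
    return relevant_words

def rank(documents, relevant_words):
    """Return documents ordered by predicted relevance, highest first."""
    def score(text):
        words = tokenize(text)
        return len(words & relevant_words) / max(len(words), 1)
    return sorted(documents, key=score, reverse=True)

seed = [("merger price negotiation antitrust", True),
        ("cafeteria menu for friday", False)]
docs = ["lunch menu posted in cafeteria",
        "draft negotiation points on merger price"]
print(rank(docs, train(seed))[0])  # the merger document ranks first
```

Even this toy version illustrates the workflow the section describes: human judgment goes in at the seed stage, and the machine extends that judgment across the rest of the collection.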

Why hasn’t predictive coding gone mainstream yet?

Given the promise of faster and less expensive document review, combined with higher accuracy rates, many are perplexed as to why predictive coding technology hasn’t been widely adopted in eDiscovery.  The answer really boils down to one simple concept – a lack of transparency.

Difficult to Use

First, early predictive coding tools attempt to apply a complicated new technological approach to a document review process that has traditionally been very simple.  Instead of relying on attorneys to read each and every document to determine relevance, the success of today’s predictive coding technology typically depends on review decisions input into a computer by one or more experienced senior attorneys.  The process commonly involves a complex series of steps that include sampling, testing, reviewing, and measuring results in order to fine tune an algorithm that will eventually be used to predict the relevancy of the remaining documents.

The problem with early predictive coding technologies is that the majority of these complex steps happen in a “black box.”  In other words, the methodology and results are not always clear, which increases the risk of human error and makes the integrity of the electronic discovery process difficult to defend.  For example, the methodology for selecting a statistically valid sample is not always intuitive to the end user.  This fundamental problem can result in improper sampling techniques that taint the accuracy of the entire process.  Similarly, the process must often be repeated several times in order to improve accuracy rates.  Even if accuracy is improved, it may be difficult or impossible to explain how accuracy thresholds were determined, or why coding decisions were applied to some documents and not others.
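For reference, the sample sizes involved come from standard statistics rather than anything proprietary. Assuming simple random sampling, the familiar formula n = z²·p(1−p)/e² gives the minimum sample needed for a given confidence level and margin of error:

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05):
    """Minimum random sample size for estimating a proportion.

    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: expected proportion (0.5 is the most conservative choice)
    margin: acceptable margin of error (0.05 = +/-5%)
    """
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# ~385 sampled documents suffice for a 95% confidence, +/-5% estimate,
# whether the collection holds 100,000 or 10 million documents.
print(sample_size())             # 385
print(sample_size(margin=0.02))  # a tighter margin requires a larger sample
```

The counterintuitive part – that the required sample barely depends on collection size – is exactly the kind of methodology that is opaque to end users when it happens inside a black box.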

Accuracy Concerns

Early predictive coding tools also tend to lack transparency in the way the technology evaluates the language contained in each document.  Instead of evaluating both the text and metadata fields within a document, some technologies actually ignore document metadata.  This omission means a privileged email sent by a client to her attorney, Larry Lawyer, might be overlooked by the computer if the name “Larry Lawyer” appears only in the “recipient” metadata field of the document and not in the document text.  The obvious risk is that privilege could be waived if such a document is inadvertently produced to the opposing party.
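A hypothetical illustration of the point (the record layout, field names and addresses here are invented for the example): a privilege screen that inspects only document text misses an attorney who appears solely in the recipient metadata field.

```python
# Hypothetical email record: "to" is a metadata field, "body" is document text.
email = {
    "to": ["larry.lawyer@firmllp.example"],
    "body": "Please review the attached draft agreement before Friday.",
}

ATTORNEYS = {"larry.lawyer@firmllp.example"}

def privileged_text_only(doc):
    # Text-only screen: misses attorneys who appear only in metadata.
    return any(a in doc["body"] for a in ATTORNEYS)

def privileged_with_metadata(doc):
    # Screens both the document text and the recipient metadata field.
    return privileged_text_only(doc) or any(a in ATTORNEYS for a in doc["to"])

print(privileged_text_only(email))      # False -- risks inadvertent production
print(privileged_with_metadata(email))  # True
```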

Another practical concern is that some technologies do not allow reviewers to make a distinction between relevant and non-relevant language contained within individual documents.  For example, early predictive coding technologies are not intelligent enough to know that only the second paragraph on page 95 of a 100-page document contains relevant language.  The inability to discern which language led to the determination that the document is relevant could skew results when the computer tries to identify other documents with the same characteristics.  This lack of precision increases the likelihood that the computer will retrieve an over-inclusive number of irrelevant documents.  This problem is generally referred to as ‘excessive recall,’ and it is important because the lack of precision increases the number of documents requiring manual review, which directly impacts eDiscovery cost.

Waiver & Defensibility

Perhaps the biggest concern with early predictive coding technology is the risk of waiver and concerns about defensibility.  Notably, there have been no known judicial decisions that specifically address the defensibility of these new technology tools even though some in the judiciary, including U.S. Magistrate Judge Andrew Peck, have opined that this kind of technology should be used in certain cases.

The problem is that today’s predictive coding tools are difficult to use, complicated for the average attorney, and the way they work simply isn’t transparent.  All these limitations increase the risk of human error.  Introducing human error increases the risk of overlooking important documents or unwittingly producing privileged documents.  Similarly, it is difficult to defend a technological process that isn’t always clear in an era where many lawyers are still uncomfortable with keyword searches.  In short, using black box technology that is difficult to use and understand is perceived as risky, and many attorneys have taken a wait-and-see approach because they are unwilling to be the guinea pig.

Why is 2012 likely to be the year of predictive coding?

The word transparency may seem like a vague term, but it is the critical element missing from today’s predictive coding technology offerings.  2012 is likely to be the year of predictive coding because improvements in transparency will shine a light into the black box of predictive coding technology that hasn’t existed until now.  In simple terms, increasing transparency will simplify the user experience and improve accuracy which will reduce longstanding concerns about defensibility and privilege waiver.

Ease of Use

First, transparent predictive coding technology will help minimize the risk of human error by incorporating an intuitive user interface into a complicated solution.  New interfaces will include easy-to-use workflow management consoles to guide the reviewer through a step-by-step process for selecting, reviewing, and testing data samples in a way that minimizes guesswork and confusion.  By automating the sampling and testing process, the risk of human error can be minimized which decreases the risk of waiver or discovery sanctions that could result if documents are improperly coded.  Similarly, automated reporting capabilities make it easier for producing parties to evaluate and understand how key decisions were made throughout the process, thereby making it easier for them to defend the reasonableness of their approach.

Intuitive reports also help the producing party measure and evaluate confidence levels throughout the testing process until appropriate confidence levels are achieved.  Since confidence levels can actually be measured as a percentage, attorneys and judges are in a position to negotiate and debate the desired level of confidence for a production set rather than relying exclusively on the representations or decisions of a single party.  This added transparency allows the type of cooperation between parties called for in the Sedona Cooperation Proclamation and gives judges an objective tool for evaluating each party’s behavior.
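As an illustration of how such a measurement works – this is the standard normal-approximation interval from statistics, not any particular product’s report – a recall figure measured on a random sample carries a confidence interval that both parties, and the court, can inspect and negotiate over:

```python
import math

def confidence_interval(hits, sample_size, z=1.96):
    """Approximate 95% confidence interval (normal approximation) for a
    proportion measured on a random sample, e.g. the share of relevant
    documents the predictive model actually retrieved (recall)."""
    p = hits / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return (p - half_width, p + half_width)

# If the model retrieved 240 of 300 sampled relevant documents,
# measured recall is 80%, with a transparent margin of error.
low, high = confidence_interval(240, 300)
print(f"recall: 80% (95% CI {low:.1%} to {high:.1%})")
```

Because the interval narrows as the sample grows, parties can trade sampling effort for certainty – an objective dial to negotiate, rather than a bare assertion that the production was reasonable.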

Accuracy & Efficiency

2012 is also likely to be the year of transparent predictive coding technology because technical limitations that have impacted the accuracy and efficiency of earlier tools will be addressed.  For example, new technology will analyze both document text and metadata to avoid the risk that responsive or privileged documents are overlooked.  Similarly, smart tagging features will enable reviewers to highlight specific language in documents to determine a document’s relevance or non-relevance so that coding predictions will be more accurate and fewer non-relevant documents will be recalled for review.

Conclusion - Transparency Provides Defensibility

The bottom line is that predictive coding technology has not enjoyed widespread adoption in the eDiscovery process due to concerns about simplicity and accuracy that breed larger concerns about defensibility.  Defending the use of black box technology that is difficult to use and understand is a risk that many attorneys simply are not willing to take, and these concerns have deterred widespread adoption of early predictive coding technology tools.  In 2012, next generation transparent predictive coding technology will usher in a new era of computer-assisted document review that is easy to use, more accurate, and easier to defend. Given these exciting technological advancements, I predict that 2012 will not only be the year of the dragon, it will also be the year of predictive coding.

Losing Weight, Developing an Information Governance Plan, and Other New Year’s Resolutions

Tuesday, January 17th, 2012

It’s already a few weeks into the new year, and it’s easy to spot the big lines at the gym, folks working on fad diets and many swearing off any number of vices.  Sadly perhaps, the most popular resolutions don’t really change year after year.  In the corporate world, though, it’s not good enough to simply recycle resolutions every year, since there’s a lot more at stake – often with employees’ bonuses and jobs hanging in the balance.

It’s not too late to make information governance part of the corporate 2012 resolution list.  The reason is pretty simple – most companies need to get out of the reactive firefighting of eDiscovery, given the risks of sloppy work, inadvertent productions and looming sanctions.  Yet so many are caught up in the fog of eDiscovery war that they’ve failed to see the nexus between upstream, proactive data management hygiene and downstream eDiscovery chaos.

In many cases the root cause is the disconnect between differing functional groups (Legal, IT, Information Security, Records Management, etc.).  This is where the emerging umbrella concept of Information Governance comes into play, serving as a way to tackle these information risks along a unified front. Gartner defines information governance as the:

“specification of decision rights, and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archiving and deletion of information, … [including] the processes, roles, standards, and metrics that ensure the effective and efficient use of information to enable an organization to achieve its goals.”

Perhaps more simply put, what were once a number of distinct disciplines—records management, data privacy, information security and eDiscovery—are rapidly coming together in ways that are important to those concerned with mitigating and managing information risk. This new information governance landscape comprises a number of formerly discrete categories:

  • Regulatory Risks – Whether an organization is in a heavily regulated vertical or not, there are a host of regulations that an organization must navigate to stay in compliance.  In the United States these include a range of disparate regimes, including the Sarbanes-Oxley Act, HIPAA, the Securities Exchange Act, the Foreign Corrupt Practices Act (FCPA) and other specialized regulations – any number of which require information to be kept in a prescribed fashion, for specified periods of time.  Failure to turn over information when requested by regulators can have dramatic financial consequences, as well as negative impacts to an organization’s reputation.
  • Discovery Risks – Under the discovery realm there are any number of potential risks as a company moves along the EDRM spectrum (i.e., Identification, Preservation, Collection, Processing, Analysis, Review and Production), but the most lethal risk is typically associated with spoliation sanctions that arise from the failure to adequately preserve electronically stored information (ESI).  There have been literally hundreds of cases where both plaintiffs and defendants have been caught in the judicial crosshairs, resulting in penalties ranging from outright case dismissal to monetary sanctions in the millions of dollars, simply for failing to preserve data properly.  It is in this discovery arena that the failure to dispose of corporate information, where possible, rears its ugly head since the eDiscovery burden is commensurate with the amount of data that needs to be preserved, processed and reviewed.  Some statistics show that it can cost as much as $5 per document just to have an attorney privilege review performed.  And, with every gigabyte containing upwards of 75,000 pages, it is easy to see massive discovery liability when an organization has terabytes and even petabytes of extraneous data lying around.
  • Privacy Risks – Even though the US has a relatively lax information privacy climate, there are any number of laws that require companies to notify customers if their personally identifiable information (PII), such as credit card or Social Security numbers, has been compromised.  For example, California’s data breach notification law (SB1386) mandates that all subject companies provide notification if there is a security breach of an electronic database containing the PII of any California resident.  It is easy to see how unmanaged PII can increase corporate risk, especially as data moves beyond US borders to the international stage, where privacy regimes are much stricter.
  • Information Security Risks – Data breaches have become so commonplace that the loss/theft of intellectual property has become an issue for every company, small and large, both domestically and internationally.  The cost to businesses of unintentionally exposing corporate information climbed 7 percent last year to over $7 million per incident.  Recently, senators asked the SEC to “issue guidance regarding disclosure of information security risk, including material network breaches” since “securities law obligates the disclosure of any material network breach, including breaches involving sensitive corporate information that could be used by an adversary to gain competitive advantage in the marketplace, affect corporate earnings, and potentially reduce market share.”  The senators cited a 2009 survey that concluded that 38% of Fortune 500 companies made a “significant oversight” by not mentioning data security exposures in their public filings.
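To put the discovery-cost figures above in perspective, here is a back-of-the-envelope sketch using the $5-per-document privilege review and 75,000-pages-per-gigabyte figures cited earlier. The average document length is my assumption for illustration only; real collections vary widely.

```python
# Back-of-the-envelope review-cost estimate using the figures cited above.
PAGES_PER_GB = 75_000
COST_PER_DOC = 5       # dollars, attorney privilege review per document
PAGES_PER_DOC = 3      # assumed average document length (illustrative)

def review_cost(gigabytes):
    documents = gigabytes * PAGES_PER_GB / PAGES_PER_DOC
    return documents * COST_PER_DOC

# One terabyte of unmanaged data implies roughly $128 million of
# privilege-review exposure under these assumptions.
print(f"${review_cost(1024):,.0f}")  # $128,000,000
```

The exact numbers matter less than the shape of the curve: review cost scales linearly with retained volume, which is why disposing of extraneous data upstream pays off downstream.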

Information governance as an umbrella concept helps organizations create better alignment between functional groups as they attempt to solve these complex and interrelated data risk challenges.  This coordination is even more critical given the way corporate data is proliferating and migrating beyond the firewall.  With ever more data located in the cloud and on mobile devices, a key mandate is managing data across all form factors. A great first step is to determine ownership of a consolidated information governance approach, where the owner can:

  • Get C-Level buy-in
  • Have the organizational savvy to obtain budget
  • Be able to define “reasonable” information governance efforts, which requires both legal and IT input
  • Have strong leadership and consensus building skills, because all stakeholders need to be on the same page
  • Understand the nuances of their business, since an overly rigid process will cause employees to work around the policies and procedures

Next, tap into and then leverage IT or information security budgets for archiving, compliance and storage.  In most progressive organizations there are likely ongoing projects that can be successfully folded into a larger information governance play.  A great place to focus on initially is information archiving, since this is one of the simplest steps an organization can take to improve its information governance hygiene.  With an archive, organizations can systematically index, classify and retain information and thus establish a proactive approach to data management.  It’s this ability to apply retention and (most importantly) expiration policies that allows organizations to start reducing the upstream data deluge that will inevitably impact downstream eDiscovery processes.

Once an archive is in place, the next logical step is to couple a scalable, reactive eDiscovery process with the upstream data sources, which will axiomatically include email but increasingly should encompass cloud content, social media, unstructured data, etc.  It is important to make sure that a given archive has been tested for compatibility with the chosen eDiscovery application, to guarantee that it can collect content at scale in the same manner used for other data sources.  Overlaying both of these foundational pieces should be the ability to place content on legal hold, whether that content exists in the archive or not.

As we enter 2012, there is no doubt that information governance should be an element in building an enterprise’s information architecture.  And, different from fleeting weight loss resolutions, savvy organizations should vow to get ahead of the burgeoning categories of information risk by fully embracing their commitment to integrated information governance.  And yet, this resolution doesn’t need to encompass every possible element of information governance.  Instead, it’s best to put foundational pieces into place and then build the rest of the infrastructure in methodical and modular fashion.

Information Governance Gets Presidential Attention: Banking Bailout Cost $4.76 Trillion, Technology Revamp Approaches $240 Billion

Tuesday, January 10th, 2012

On November 28, 2011, The White House issued a Presidential Memorandum that outlines what is expected of the 480 federal agencies of the government’s three branches over the next 240 days.  Up until now, Washington, D.C. has been the Wild West with regard to information governance, as each agency has often unilaterally adopted its own arbitrary policies and systems.  Moreover, some agencies have recently purchased differing technologies.  Unfortunately, given the President’s ultimate goal of uniformity, centralization will be difficult to accomplish across such a range of disparate technological approaches.

Particular pain points for the government traditionally include the retention, search, collection, review and production of vast amounts of data and records.  Specific examples include FOIA requests gone awry, legal holds issued across different agencies leading to spoliation, and the ever-present problem of decentralization.

Why is the government different?

Old Practices. First, in some instances the government is technologically behind its corporate counterparts and is failing to meet the judiciary’s expectation that organizations effectively store, manage and discover their information.  This failing is self-evident from the President’s directive mandating that these agencies develop a plan to attack the problem.  Though different from other corporate entities, the government is nevertheless held to the same standards of eDiscovery under the Federal Rules of Civil Procedure (FRCP).  In practice, the government has been given more leniency until recently, and while equal expectations have not always been the case, the gap between the private and public sectors is no longer possible to ignore.

FOIA.  The government’s arduous obligation to produce information under the Freedom of Information Act (FOIA) has no corresponding analog for private organizations, which respond to more traditional civil discovery requests.  Because the government is so large, with many disparate IT systems, it is cumbersome to work efficiently through the information governance process across agencies – and often still difficult inside one agency with multiple divisions.  Executing this production process is even more difficult, if not impossible, to do manually without properly deployed technology.  Additionally, many of the investigatory agencies that issue requests to the private sector need more efficient ways to manage and review the data they are requesting.  To compound the problem, two opposing interests are at play within the US government, both screaming for a resolution – and that resolution needs to be centralized.  On the one hand, the government needs to retain more than a corporation might in order to satisfy a FOIA request.

Titan Pulled at Both Ends. On the other hand, without classification of the records that are to be kept, technology to organize this vast amount of data, and some amount of expiry, every agency will essentially become its own massive repository.  The “retain everything” mentality, coupled with inefficient search and retrieval of data and records, is where agencies stand today.  Corporations are experiencing this on a smaller scale, and many are collectively further along than the government in this process, without the FOIA complications.

What are agencies doing to address these mandates?

In their plans, agencies must describe how they will improve or maintain their records management programs, particularly with regard to email, social media and other electronic communications.  They must also move away from such a paper-centric existence.  eDiscovery consultants and software companies are helping agencies through this process, essentially writing their plans to match the President’s directive.  The cloud conversation has been revisited, and agencies also have to explain how they will use cloud-based services and storage solutions, as well as identify gaps in existing laws or regulations that presently prevent improved management.  Small innovations are taking place.  In fact, just recently the DOJ added a new search feature on their website to make it easier for the public to find documents that have been posted by agencies on their websites.

The Office of Management and Budget (OMB), National Archives and Records Administration (NARA), and Justice Department will use those reports to come up with a government-wide records management framework that is more efficient, maintains accountability by documenting agency actions and promotes “appropriate” public access to records.  Hopefully, the framework they come up with will be centralized and workable on a realistic timeframe with resources sufficiently allocated to the initiative.

How much will this cost?

The President’s mandate is a great initiative and very necessary, but one cannot help but think about the costs in terms of money, time and resources when considering these crucial changes.  The most recent version of a financial services and general government appropriations bill in the Senate extends $378.8 million to NARA for this initiative.  President Obama appointed Steven VanRoekel as the United States CIO in August 2011 to succeed Vivek Kundra.  After VanRoekel’s speech at the Churchill Club in October of 2011, an audience member asked him what the most surprising aspect of his new job was.  VanRoekel said that it was managing the huge and sometimes unwieldy resources of his $80 billion budget.  It is going to take even more than this to do the job right, however.

Using conservative estimates, assume a $1 billion initial investment for an agency to implement archiving and eDiscovery capabilities.  That approximates $480 billion for all 480 agencies.  Assume a uniform information governance platform is adopted by all agencies at a 50% discount, reflecting the size of the contracts and the smaller sums needed by agencies with lesser needs.  The total now comes to $240 billion.  For context, that figure is 5% of what the Federal Government spent ($4.76 trillion) on the biggest bailout in history in 2008. Measured against VanRoekel’s $80 billion budget, that leaves a need for $160 billion more to get the job done. VanRoekel also commented at the same meeting that he wants to break down massive multi-year information technology projects into smaller, more modular projects in the hopes of saving the government from getting mired in multi-million dollar failures.  His solution, he says, is modular and incremental deployment.

While Rome was not built in a day, this initiative is long overdue yet feasible, as technology exists to address these challenges rather quickly.  After these 240 days are complete and a plan is drawn, the real question is: how are we going to pay now for technology the government needed yesterday?  In a perfect world, the government would select a platform for archiving and eDiscovery, break the project into incremental milestones, and roll out a uniform combination of best-of-breed solutions.

New Utah Rule 26: A Blueprint for Proportionality in eDiscovery

Tuesday, December 20th, 2011

The eDiscovery frenzy that has gripped the American legal system over the past decade has become increasingly expensive.  Particularly costly to both clients and courts is the process of preserving, collecting and producing documents.  This was supposed to change after the Federal Rules of Civil Procedure (FRCP) were amended in 2006.  After all, weren’t the amended rules designed to streamline discovery, allowing parties to focus on the merits while making discovery costs more reasonable?  Instead, it seems the rules have spawned more collateral discovery disputes than ever before about preservation, collection and production issues.

As a solution to these costs, the eDiscovery cognoscenti are emphasizing the concept of “proportionality.”  Proportionality typically requires that the benefits of discovery be commensurate with its corresponding burdens.  Under the Federal Rules of Civil Procedure, the directive that discovery be proportional is found in Rules 26(c), 26(b)(2)(C) and Rule 26(b)(2)(B).  Under Rule 26(c), courts may generally issue protective orders that limit or even proscribe discovery that causes “annoyance, embarrassment, oppression, or undue burden or expense.”  More specifics are set forth in Rule 26(b)(2)(C), which enables courts to restrict discovery if the requests are unreasonably cumulative or duplicative, the discovery can be obtained from an alternative source that is less expensive or burdensome, or the burden or expense of the discovery outweighs its benefit.  In the specific context of electronic discovery, Rule 26(b)(2)(B) restricts the discovery of backup tapes and other electronically stored information that are “not reasonably accessible” due to “undue burden or cost.”

Despite the existence of these provisions, they are often bypassed.  The most recent and notable example of this trend is found in Pippins v. KPMG (S.D.N.Y. Oct. 7, 2011).  In Pippins, the court ordered the defendant accounting firm to continue preserving thousands of employee hard drives.  In so doing, the court sidestepped the firm’s proportionality argument, citing Orbit One v. Numerex (S.D.N.Y. 2010) for the premise that such a standard is “too amorphous” and therefore unworkable.  Regardless of cost or burden, the court reasoned that “prudence” required preservation of all relevant materials “until a more precise definition [of proportionality] is created by rule.”

The Pippins order and its associated costs for the firm – potentially into the millions of dollars – has given new fuel to the argument that an amended federal rule should be implemented to include a more express mandate regarding proportionality.  Surprisingly enough, a blueprint for such an amended rule is already in place in the State of Utah.  Effective November 1, 2011, Utah implemented sweeping changes to civil discovery practice through amended Civil Procedure Rule 26.  The new rule makes proportionality the standard now governing eDiscovery in Utah.

Proportionality Dictates the Scope of Permissible Discovery

Utah Rule 26 has changed the permissible scope of discovery by expressly conditioning all discovery on meeting the standards of proportionality.  That means parties may seek discovery of relevant, non-privileged materials only “if the discovery satisfies the standards of proportionality.”  This effectively shifts the burden of proof on proportionality from the responding party to the requesting party.  Indeed, Utah Rule 26(b)(3) specifically codifies this stunning change:  “The party seeking discovery always has the burden of showing proportionality and relevance.”  This stands in sharp contrast to Federal Rules 26(b)(2) and 26(c), which require the responding party to show the discovery is not proportional.

The “standards of proportionality” that have been read into Utah Rule 26 incorporate those found in Federal Rule 26(b)(2)(C).  In addition, Utah Rule 26 requires that discovery be “reasonable.”  Reasonableness is to be determined by the needs of a given case, such as the amount in controversy, the parties’ resources, the complexity and importance of the issues, and the role of the discovery in addressing those issues.  Last but not least, discovery must expressly comply with the cost-cutting mandate of Rule 1 and thereby “further the just, speedy and inexpensive determination of the case.”

Proportionality Limits the Amount of Discovery

To further address the burdens and costs of disproportionate discovery, Utah Rule 26(c) limits the amount of discovery that parties may conduct as a matter of right based on the amount in controversy.  For matters involving damages of $300,000 or more, each party may propound 20 interrogatories, 20 document requests and 20 requests for admission, with total fact deposition time restricted to a mere 30 hours.  For matters between $50,000 and $300,000, those figures are halved.  And for matters under $50,000, the parties are allotted only five document requests and five requests for admission; fact depositions are curtailed to three hours per side, and interrogatories are eliminated.

If these limits are too restrictive, parties may request “extraordinary discovery” under Rule 26(c)(6).  However, any such request must demonstrate that the sought after discovery is “necessary and proportional” under the rules.  The parties must also certify that a budget for the discovery has been “reviewed and approved.”
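The tiered limits described above can be sketched as a simple lookup.  This is an illustrative summary of the tiers as characterized in this article, not a substitute for the rule text itself:

```python
# Illustrative sketch of Utah Rule 26(c)'s damages-based discovery tiers,
# as summarized in this article; consult the rule text for authoritative limits.
def discovery_limits(amount_in_controversy: int) -> dict:
    """Map the amount in controversy to standard discovery allotments."""
    if amount_in_controversy >= 300_000:
        return {"interrogatories": 20, "document_requests": 20,
                "admission_requests": 20, "deposition_hours": 30}
    if amount_in_controversy >= 50_000:
        # The middle tier halves the top tier's figures.
        return {"interrogatories": 10, "document_requests": 10,
                "admission_requests": 10, "deposition_hours": 15}
    # Lowest tier: no interrogatories, three deposition hours per side.
    return {"interrogatories": 0, "document_requests": 5,
            "admission_requests": 5, "deposition_hours": 3}
```

A party whose case needs more than its tier allows must instead pursue the “extraordinary discovery” route under Rule 26(c)(6), with its certification requirements.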

A Potential Model for Federal Discovery Rule Amendments

Utah Rule 26 could perhaps serve as a model for amending the scope of permissible discovery under the Federal Rules.  Like Utah Rule 26, Federal Rule 26 could be amended to expressly condition discovery on meeting the principles of proportionality.  The Federal Rules could also be modified to ensure the propounding party always has the burden of demonstrating fact-specific good cause for its discovery.  Doing so would undoubtedly force counsel and client to be more precise with their requests and do away with the current regime of “promiscuous discovery.”  Calcor Space Facility, Inc. v. Superior Court, 53 Cal.App.4th 216, 223 (1997) (urging courts to “aggressively” curb discovery abuses which, “like a cancerous growth, can destroy a meritorious cause or defense”).

Tiering the amounts of permitted discovery based on alleged damages could also reduce the costs of discovery.  With limited deposition time and fewer document requests, discovery of necessity would likely focus on the merits instead of eDiscovery sideshows.  Coupling this with an “extraordinary discovery” provision would enable courts to exercise greater control over the process and ensure that genuinely complex matters are litigated efficiently.

If all of this seems like a radical departure from established discovery practice, consider that the new Model Order on E-Discovery in Patent Cases has also incorporated tiered and extraordinary discovery provisions.  See DCG Systems v. Checkpoint Technologies (N.D. Cal. Nov. 2, 2011) (adopting the model order and explaining the benefits of limiting eDiscovery in patent cases).

For those who are seeking a vision of how proportionality might be incorporated into the Federal Rules, new Utah Rule 26 could be a blueprint for doing so.

Watchdog (SEC) v. Watchdog (FINRA): Destruction, Doctoring and Deflection

Monday, November 14th, 2011

In the first settlement of its kind, FINRA settled with the SEC on October 27, 2011 over allegations that, in a 2008 incident, FINRA’s Kansas City regional office doctored documents.  The allegedly doctored documents came from three internal staff meetings, where information was edited or deleted before being provided to the SEC with the “inaccurate and incomplete” changes.  Mary Schapiro, currently the Chairman of the SEC, is in an interesting spot, as she was Chief Executive of FINRA at the time of the alleged wrongdoing.  She apparently had no direct involvement in the decision to take action against FINRA.

The motives for doctoring the documents are unclear, as is whether the alterations led to any material damage beyond FINRA’s diminished credibility.  Ironically, the SEC has had its own struggles in recent months, with a slew of newspaper articles highlighting its own challenges with document retention and the improper destruction of documents.  Both of these scenarios were brought to light by whistleblowers within the respective agencies.

These antics certainly raise the question: is it a good use of taxpayer money to have regulatory agencies fighting each other over document retention and record keeping practices?  The answer is probably no.  But the first question prompts a second: if they don’t do it, who will?  While information management is not the sexiest part of the SEC’s and FINRA’s responsibilities, it is certainly an important one and the foundation of their information intelligence.  Without proper document retention and information governance, the probability of connecting the dots to discover insider trading or other malfeasance is low.  Moreover, for agencies to retain credibility, they need to be able to locate documents with ease and speed, and those documents must be truthful and accurate.

Because FINRA is a self-regulatory organization for the securities industry and is overseen by the SEC, it seems appropriate that the SEC investigate matters like the one at hand.  According to the SEC, the 2008 incident is the third instance in the past eight years in which an employee of FINRA, or its predecessor, the National Association of Securities Dealers, has provided altered or misleading documents to the SEC.  It remains to be seen whether this was intentional on FINRA’s part, to conceal undesirable facts or to promote an item on its agenda, or whether FINRA is simply negligent in its record keeping policies.  Either way, it is a problem for the SEC and the government in general, as it undermines agency credibility and compromises the ability to intelligently leverage information.  The settlement also does no favors for FINRA at a time when it aims to expand its supervisory authority beyond its base of 4,600 firms to include 10,000 more investment advisory firms.

So, what can be done about this behavior and the risks it poses?  Corporations and governments alike face the information governance issues posed by the data explosion and the growing complexity of today’s data sources.  At a minimum, there needs to be a policy in place that governs how data, regardless of form, is handled and disposed of over the information lifecycle.  It also makes sense to form an audit committee within the government that can inspect and assess the information management practices of each agency, as well as serve as a third-party mediator between agencies when these challenges arise.  This is a good idea for two reasons.  One, agencies can focus on their responsibilities instead of getting sidetracked by issues in which they are not expert, like document retention or records management.  Two, the problem has reached a point where an independent group is necessary to audit the government, given the data explosion and the pace of technology today.  We have the SEC and FINRA to watch the financial industry and provide assurance that business is being conducted in a lawful manner.  We don’t need the SEC or FINRA to take up document retention as another responsibility, as other professionals can do that more effectively and independently.

While expansion of government is not the goal of forming yet another committee, such a committee could free up agencies to do more of the work they are charged with.  It would also promote standardization across agencies and regulatory bodies, a giant step in the right direction as data volumes grow.  The actions FINRA took under this settlement were remedial in nature: it aired a podcast about document integrity, scheduled an agency-wide town hall meeting on the same topic for all current and new employees, and hired an independent outside consultant to provide additional staff training on document retention and integrity.  This will be a continual educational process for the private and public sectors, and employee training and auditing of the process will be the linchpins of success.  The element of deflection is also at work here, as the SEC is not a model of document retention best practices at the moment.

The SEC is working through allegations of document destruction and FINRA is accused of document doctoring, but all of these assertions circle back to the central theme of having a document retention policy and complying with it.  That naturally leads to the need for education and training, and ultimately auditing the process for compliance.  In this rare case of watchdog bites watchdog, three points become clear: 1) the SEC has a higher and better use than policing these issues; 2) information management has reached a point where it requires a separate and independent body to monitor and regulate allegations of misconduct; and 3) sometimes it takes a dog biting a dog to truly illustrate the magnitude of a problem.