
Archive for January, 2012

The Top Ten “What NOT to Do” List for LegalTech New York 2012

Thursday, January 26th, 2012

As we approach LegalTech New York next week, oft referred to as the Super Bowl of legal technology events, there are any number of helpful blogs and articles telling new attendees what to expect, where to go, what to say, what to do. Undoubtedly, there’s some utility to this approach, but since we’ll be in New York, I think it’s appropriate to take a more skeptical approach and proffer a list of what *NOT* to do at LTNY.

  1. DON’T get caught up in Buzzword Bingo. There are already dozens of sources attempting to prognosticate what the most popular buzzwords will be at this year’s show.  Leading candidates include “predictive coding,” “technology assisted review,” “information governance,” “big data” and even the pedestrian-sounding “sampling.” And, while these terms will undoubtedly be on booths and broadcast repeatedly from the Hilton elevator, it doesn’t mean an attendee should merely parrot them without a deeper dive. Here, the key is to go behind the green curtain to see what vendors, panelists and tweeters actually mean by these buzzwords, since it’s often surprising to see how the devil really is in the details.
  2. DON’T get a coffee at the Hilton Starbucks. Yes, we all love our morning coffee, but there’s no need to wait in the Justin Bieber-esque queue at the in-hotel Starbucks. There are approximately 49 locations in a ½ mile radius, including one right across the street. There’s also the vendor giving out free coffee on the second floor, so save yourself 30 minutes of needless waiting in line.
  3. DON’T ride the Hilton elevator. For those staying or taking meetings at the Hilton, the elevator lines can be excessively long.  Once you finally get on, you’ll wish they’d been even longer as you then find yourself subjected to the brainwashing of vendor announcements while you make multiple stops on your way to your desired floor. Either take the stairs or, if that’s not possible, try to minimize the trips to keep your sanity. Or, plan B – bring your iPod.
  4. DON’T talk to booth models. It’s tempting to gravitate to the most attractive person at a given vendor’s booth, but they’re often hired professionals whose job is to reel you in for the all-important “badge scan.” Instead, focus on the person who looks like they’ve been in the same company-branded oxford for 48 hours, because they probably have. While perhaps less aesthetically pleasing, they’ll certainly know more about the product, and that’s why you’re there after all, isn’t it?
  5. DON’T pass out your resume on the show floor. While certainly a great networking opportunity, LTNY isn’t the place to blatantly tout your professional wares, at least if you want to keep your nascent job search on the down low. And, if you want to have more private meetings, you’ll need to do better than “hiding out” at the Warwick across the street. For more clandestine purposes, think about the Bronx.
  6. DON’T take tchotchkes without hearing the spiel. There are certain tchotchke hounds out there who roam around LTNY collecting “gifts” for the kids back at home. While I won’t frown on this behavior per se, it’s only courteous to actually listen to the pitch (as a quid pro quo) before you ask for the swag. Anything less is uncivilized.
  7. DON’T get over-served at the B-Discovery Party. After a long day on the show floor you’re probably ready to let loose with some of the eDiscovery practitioners you haven’t seen in a year.  But, in this era of flip cams and instant tweeting, letting your hair down too much can be career limiting. If you haven’t done Jägermeister shots since college, LTNY probably isn’t a good time to resume that dubious practice.
  8. DON’T forget to take your badge off (please!). Yes, it’s cool to let everyone know you’re attending the premier legal technology event of the year, but once you leave the show floor random New Yorkers will heckle you for sporting your badge after hours – particularly the baristas at Starbucks. Plus, if you’ve broken any of the other admonitions above, at least you’ll be more anonymous.
  9. DON’T forget to bring a heavy coat, mittens and scarf. Last year there was the infamous ice storm that stranded folks for days (me included). Even if the weather isn’t that severe this year, anyone from warmer climates will need to bundle up, particularly because it’s easy to unintentionally get caught outside for extended amounts of time – waiting for a cab in the Hilton queue, eating at Symantec’s free food cart, walking to a meeting at a “nearby” hotel that’s “just a block or so away.” Keep in mind those cross town blocks are longer than they appear on a map.
  10. DON’T forget to learn something. Without hyperbole, LTNY has the world’s greatest collection of legal/technology minds in one place for 3 days.  Most folks, even the vaunted panelists, judges and industry luminaries, are actually quite accessible. So, at a minimum, attend sessions, ask questions and interact with your peers. Try to ignore the bright lights and signs on the floor and make sure to take some useful information back to your firm, company or governmental agency. You’ll undoubtedly have fun (and maybe a Jägermeister shot, too) along the way.

The Social Media Rubik’s Cube: FINRA Solved it First, Are Non-Regulated Industries Next?

Wednesday, January 25th, 2012

It’s no surprise that the first industry to be heavily regulated regarding social media use was the financial services industry. The predominant factor that drove regulators to address the viral qualities of social media was the fiduciary nature of securities investing, coupled with the potential for these offerings to have a detrimental financial impact on investors.

Although there is no explicit language in FINRA’s Regulatory Notices 10-06 (January 2010) or 11-30 (August 2011) requiring archival, the record keeping component of the notices necessitates social media archiving in most cases due to the sheer volume of data produced on social media sites. Melanie Kalemba, Vice President of Business Development at SocialWare in Austin, Texas, states:

“Our clients in the financial industry have led the way, they have paved the road for other industries, making social media usage less daunting. Best practices for monitoring third-party content, record keeping responsibilities, and compliance programs are available and developed for other industries to learn from. The template is made.”

eDiscovery and Privacy Implications. Privacy laws are an important aspect of social media use that impacts discoverability. Discovery and privacy represent layers of the Rubik’s cube in the ever-changing and complex social media environment. No longer are social media cases only personal injury suits or HR incidents, although those are plentiful. For example, in Largent v. Reed the court ruled that information posted by a party on their personal Facebook page was discoverable and ordered the plaintiff to provide a user name and password to enable the production of the information. In granting the defense motion to compel the plaintiff’s login credentials, Judge Walsh acknowledged that Facebook has privacy settings, and that users must take “affirmative steps” to keep their information private. However, his ruling determined that no social media privacy privilege exists: “No court has recognized such a privilege, and neither will we.” He further reiterated his ruling by adding, “[o]nly the uninitiated or foolish could believe that Facebook is an online lockbox of secrets.”

Then there are the new cases emerging over social media account ownership, which affect privacy and discoverability. In the recently filed Phonedog v. Kravitz, 11-03474 (N.D. Cal.; Nov. 8, 2011), the lines between the “professional” and the “private” user are becoming increasingly blurred. This case also raises questions about proprietary client lists, the valuation of followers, and trade secrets – all of which are further complicated when there is no social media policy in place. The financial services industry has been successful in implementing effective social media policies along with technology to comply with agency mandates – not only because it was forced to by regulation, but because it has developed best practices that essentially incorporate social media into document retention policies and information governance infrastructures.

Regulatory Framework. Adding another Rubik’s layer is the multitude of regulatory and compliance issues that many industries face. The most active and vocal US regulators on social media have been FINRA, the SEC and the FTC. FINRA initiated guidance to the financial services industry, and earlier this month the SEC issued its alert. The SEC’s exam alert to registered investment advisers, issued on January 4, 2012, was not meant to be a comprehensive summary of compliance related to the use of social media. Instead, it lays out staff observations in three major categories (third party content, record keeping and compliance), expounding on FINRA’s notice.

Last year the FTC issued an extremely well done Preliminary FTC Staff Report on Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers.  Three main components are central to the report. The first is a call for all companies to build privacy and security mechanisms into new products, considering the possible negative ramifications at the outset rather than treating social media and privacy issues as an afterthought; the FTC has cleverly coined this notion “Privacy by Design.” Second, “Just-In-Time” notice encourages companies to communicate with the public in a simple way that prompts informed decisions about their data, in terms that are clear and that require an affirmative action (e.g., checking a box). Finally, the FTC calls for greater transparency around data collection, use and retention. The FTC asserts that consumers have a right to know what kind of data companies collect, how sensitive it is, and how it is intended to be used. The FTC’s report is intended to inform policymakers, including Congress, as they legislate on privacy – and to motivate companies to self-regulate and develop best practices.

David Shonka, Principal Deputy General Counsel at the FTC in Washington, D.C., warns, “There is a real tension between the situations where a company needs to collect data about a transaction versus the liabilities associated with keeping unneeded data due to privacy concerns. Generally, archiving everything is a mistake.” Shonka arguably reinforces the case for instituting an intelligent archive, whether a company is regulated or not: an archive that is selective about what it ingests based on content, and that applies an appropriate deletion cycle to defined data types and content according to policy. This ensures that private consumer information expires in a timely manner, while retaining the benefits of retrieval for a defined period if necessary.
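
To make the “intelligent archive” idea a bit more concrete, here is a rough sketch (in Python) of what content-aware ingestion could look like: classify a record as it arrives and assign a retention class, so private consumer data expires on a short cycle instead of living forever. The pattern, categories and retention periods below are purely illustrative assumptions, not a description of any particular product or compliance standard.

    # Illustrative sketch only: classify content at ingestion and assign an expiry date.
    import re
    from datetime import date, timedelta

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # crude PII check for the example

    def ingest(record_text: str, received: date) -> dict:
        if SSN_PATTERN.search(record_text):
            retention = timedelta(days=365)        # keep sensitive consumer data briefly (assumed period)
            category = "contains_pii"
        else:
            retention = timedelta(days=3 * 365)    # ordinary business content (assumed period)
            category = "general"
        return {"category": category, "expires": received + retention}

    print(ingest("Customer SSN 123-45-6789 on file", date(2012, 1, 25)))
    print(ingest("Minutes from the marketing stand-up", date(2012, 1, 25)))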

The Non-Regulated Use Case. When will comprehensive social media policies, retention and monitoring become more prevalent in the non-regulated sectors? In the case of FINRA and the SEC, regulations were issued to the financial industry. In the case of the FTC, guidance has been given to companies regarding how to avoid false advertising and protect consumer privacy. The two are not dissimilar in effect. Both require a social media policy, monitoring, auditing, technology, and training. While there is no clear mandate to archive social media if you are in a non-regulated industry, that mandate can’t be too far away. This is evidenced by companies that have already implemented social media monitoring systems for reasons like brand promotion and protection, or healthcare companies that deal with highly sensitive information. If social media is replacing email, and social media is essentially another form of electronic evidence, why would social media not be part of an organization’s integral document retention and expiry procedures?

Content-based monitoring and archiving is possible with technology available today, as the financial sector has demonstrated. Debbi Corej, who is a compliance expert for the financial sector and has successfully implemented an intensive social media program, says it perfectly: “How do you get to yes? Yes you can use social media, but in a compliant way.” The answer can be found at LegalTech New York, January 30 @ 2:00 pm.

2012: Year of the Dragon – and Predictive Coding. Will the eDiscovery Landscape Be Forever Changed?

Monday, January 23rd, 2012

2012 is the Year of the Dragon – which is fitting, since no other Chinese Zodiac sign represents the promise, challenge, and evolution of predictive coding technology more than the Dragon.  The few who have embraced predictive coding technology exemplify symbolic traits of the Dragon that include being unafraid of challenges and willing to take risks.  In the legal profession, taking risks typically isn’t in a lawyer’s DNA, which might explain why predictive coding technology has seen lackluster adoption among lawyers despite the hype.  This blog explores the promise of predictive coding technology, why predictive coding has not been widely adopted in eDiscovery, and explains why 2012 is likely to be remembered as the year of predictive coding.

What is predictive coding?

Predictive coding refers to machine learning technology that can be used to automatically predict how documents should be classified based on limited human input.  In litigation, predictive coding technology can be used to rank and then “code” or “tag” electronic documents based on criteria such as “relevance” and “privilege” so organizations can reduce the amount of time and money spent on traditional page by page attorney document review during discovery.

Generally, the technology works by prioritizing the most important documents for review by ranking them.  In addition to helping attorneys find important documents faster, this prioritization and ranking of documents can even eliminate the need to review documents with the lowest rankings in certain situations. Additionally, since computers don’t get tired or daydream, many believe computers can even predict document relevance better than their human counterparts.
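
For readers who like to see the moving parts, the snippet below is a minimal sketch of the general approach using open-source Python tools (scikit-learn), not a description of any vendor’s product: train a text classifier on a small attorney-reviewed seed set, then rank the unreviewed documents by predicted probability of relevance. The documents, labels and scores are invented for illustration.

    # Minimal predictive-coding-style sketch: learn from a seed set, rank the rest.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical seed set reviewed by a senior attorney (1 = relevant, 0 = not relevant)
    seed_docs = [
        "Q3 pricing agreement with Acme regarding fiber licensing",
        "Lunch menu for the employee cafeteria",
        "Draft licensing terms discussed with opposing counsel",
        "Fantasy football standings, week 12",
    ]
    seed_labels = [1, 0, 1, 0]

    # The (much larger, in practice) unreviewed collection
    unreviewed = [
        "Notes from the Acme licensing negotiation call",
        "Reminder: parking garage closed this weekend",
    ]

    vectorizer = TfidfVectorizer()
    X_seed = vectorizer.fit_transform(seed_docs)
    model = LogisticRegression().fit(X_seed, seed_labels)

    # Rank unreviewed documents by predicted probability of relevance
    scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
    for score, doc in sorted(zip(scores, unreviewed), reverse=True):
        print(f"{score:.2f}  {doc}")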

Why hasn’t predictive coding gone mainstream yet?

Given the promise of faster and less expensive document review, combined with higher accuracy rates, many are perplexed as to why predictive coding technology hasn’t been widely adopted in eDiscovery.  The answer really boils down to one simple concept – a lack of transparency.

Difficult to Use

First, early predictive coding tools attempt to apply a complicated new technological approach to a document review process that has traditionally been very simple.  Instead of relying on attorneys to read each and every document to determine relevance, the success of today’s predictive coding technology typically depends on review decisions input into a computer by one or more experienced senior attorneys.  The process commonly involves a complex series of steps that include sampling, testing, reviewing, and measuring results in order to fine tune an algorithm that will eventually be used to predict the relevancy of the remaining documents.

The problem with early predictive coding technologies is that the majority of these complex steps are done in a ‘black box’.  In other words, the methodology and results are not always clear, which increases the risk of human error and makes the integrity of the electronic discovery process difficult to defend.  For example, the methodology for selecting a statistically relevant sample is not always intuitive to the end user.  This fundamental problem could result in improper sampling techniques that could taint the accuracy of the entire process.  Similarly, the process must often be repeated several times in order to improve accuracy rates.  Even if accuracy is improved, it may be difficult or impossible to explain how accuracy thresholds were determined or to explain why coding decisions were applied to some documents and not others.
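
As one example of why sampling methodology matters, the calculation below uses the standard sample-size formula for estimating a proportion. The 95% confidence level and the margins of error shown are illustrative choices for the sketch, not a judicially endorsed standard.

    # Standard sample-size formula for estimating a proportion: n = z^2 * p * (1 - p) / e^2
    import math

    def sample_size(confidence_z: float = 1.96, margin_of_error: float = 0.02,
                    expected_proportion: float = 0.5) -> int:
        """Use p = 0.5 as the most conservative (largest-sample) assumption."""
        p = expected_proportion
        n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
        return math.ceil(n)

    print(sample_size())                        # 2401 documents for 95% confidence, +/- 2%
    print(sample_size(margin_of_error=0.05))    # 385 documents for 95% confidence, +/- 5%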

Accuracy Concerns

Early predictive coding tools also tend to lack transparency in the way the technology evaluates the language contained in each document.  Instead of evaluating both the text and metadata fields within a document, some technologies actually ignore document metadata.  This omission means a privileged email sent by a client to her attorney, Larry Lawyer, might be overlooked by the computer if the name “Larry Lawyer” is only part of the “recipient” metadata field of the document and isn’t part of the document text.  The obvious risk is that this situation could lead to privilege waiver if it is inadvertently produced to the opposing party.
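
One straightforward way to avoid that blind spot is to fold key metadata fields (sender, recipients, subject) into the text the classifier actually sees, as in the hypothetical sketch below. The field names and the example message are invented for illustration.

    # Combine body text with metadata so a name like "Larry Lawyer" that appears
    # only in the recipient field still influences the prediction.
    def features_for(email: dict) -> str:
        return " ".join([
            email.get("from", ""),
            email.get("to", ""),
            email.get("subject", ""),
            email.get("body", ""),
        ])

    msg = {
        "from": "client@example.com",
        "to": "Larry Lawyer <llawyer@lawfirm.example>",
        "subject": "Question about the dispute",
        "body": "Please advise on our options.",   # the attorney's name never appears in the body
    }
    print(features_for(msg))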

Another practical concern is that some technologies do not allow reviewers to make a distinction between relevant and non-relevant language contained within individual documents.  For example, early predictive coding technologies are not intelligent enough to know that only the second paragraph on page 95 of a 100-page document contains relevant language.  The inability to discern what language led to the determination that the document is relevant could skew results when the computer tries to identify other documents with the same characteristics.  This lack of precision increases the likelihood that the computer will retrieve an over-inclusive number of irrelevant documents.  This problem is generally referred to as ‘excessive recall,’ and it is important because this lack of precision increases the number of documents requiring manual review which directly impacts eDiscovery cost.
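
In information-retrieval terms, an over-inclusive result set shows up as low precision even when recall is high. The toy calculation below, using hypothetical counts, illustrates how that lack of precision inflates the manual review burden.

    # Precision and recall in plain terms, with hypothetical counts.
    true_positives = 800     # relevant documents the system flagged
    false_positives = 3200   # irrelevant documents it also flagged
    false_negatives = 200    # relevant documents it missed

    precision = true_positives / (true_positives + false_positives)   # 0.20
    recall = true_positives / (true_positives + false_negatives)      # 0.80

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
    print(f"documents requiring manual review: {true_positives + false_positives}")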

Waiver & Defensibility

Perhaps the biggest concern with early predictive coding technology is the risk of waiver and concerns about defensibility.  Notably, there have been no known judicial decisions that specifically address the defensibility of these new technology tools even though some in the judiciary, including U.S. Magistrate Judge Andrew Peck, have opined that this kind of technology should be used in certain cases.

The problem is that today’s predictive coding tools are difficult to use, complicated for the average attorney, and the way they work simply isn’t transparent.  All these limitations increase the risk of human error.  Introducing human error increases the risk of overlooking important documents or unwittingly producing privileged documents.  Similarly, it is difficult to defend a technological process that isn’t always clear in an era where many lawyers are still uncomfortable with keyword searches.  In short, using black box technology that is difficult to use and understand is perceived as risky, and many attorneys have taken a wait-and-see approach because they are unwilling to be the guinea pig.

Why is 2012 likely to be the year of predictive coding?

The word transparency may seem like a vague term, but it is the critical element missing from today’s predictive coding technology offerings.  2012 is likely to be the year of predictive coding because improvements in transparency will shine a light into the black box of predictive coding technology that hasn’t existed until now.  In simple terms, increasing transparency will simplify the user experience and improve accuracy which will reduce longstanding concerns about defensibility and privilege waiver.

Ease of Use

First, transparent predictive coding technology will help minimize the risk of human error by incorporating an intuitive user interface into a complicated solution.  New interfaces will include easy-to-use workflow management consoles to guide the reviewer through a step-by-step process for selecting, reviewing, and testing data samples in a way that minimizes guesswork and confusion.  By automating the sampling and testing process, the risk of human error can be minimized which decreases the risk of waiver or discovery sanctions that could result if documents are improperly coded.  Similarly, automated reporting capabilities make it easier for producing parties to evaluate and understand how key decisions were made throughout the process, thereby making it easier for them to defend the reasonableness of their approach.

Intuitive reports also help the producing party measure and evaluate confidence levels throughout the testing process until appropriate confidence levels are achieved.  Since confidence levels can actually be measured as a percentage, attorneys and judges are in a position to negotiate and debate the desired level of confidence for a production set rather than relying exclusively on the representations or decisions of a single party.  This added transparency allows the type of cooperation between parties called for in the Sedona Cooperation Proclamation and gives judges an objective tool for evaluating each party’s behavior.
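
As a hypothetical illustration of the kind of number parties might negotiate over, the sketch below estimates the rate of relevant documents left behind in the “predicted not relevant” pile from a random validation sample and attaches a 95% confidence interval (normal approximation; all counts are invented).

    # Illustrative only: estimate the miss rate from a random validation sample.
    import math

    n_sampled = 1500        # documents drawn at random from the "predicted not relevant" set
    relevant_found = 30     # of those, how many a reviewer judged relevant

    p_hat = relevant_found / n_sampled
    z = 1.96                # 95% confidence
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n_sampled)

    print(f"estimated miss rate: {p_hat:.1%} +/- {half_width:.1%} at 95% confidence")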

Accuracy & Efficiency

2012 is also likely to be the year of transparent predictive coding technology because technical limitations that have impacted the accuracy and efficiency of earlier tools will be addressed.  For example, new technology will analyze both document text and metadata to avoid the risk that responsive or privileged documents are overlooked.  Similarly, smart tagging features will enable reviewers to highlight specific language in documents to determine a document’s relevance or non-relevance so that coding predictions will be more accurate and fewer non-relevant documents will be recalled for review.

Conclusion - Transparency Provides Defensibility

The bottom line is that predictive coding technology has not enjoyed widespread adoption in the eDiscovery process due to concerns about simplicity and accuracy that breed larger concerns about defensibility.  Defending the use of black box technology that is difficult to use and understand is a risk that many attorneys simply are not willing to take, and these concerns have deterred widespread adoption of early predictive coding technology tools.  In 2012, next generation transparent predictive coding technology will usher in a new era of computer-assisted document review that is easy to use, more accurate, and easier to defend. Given these exciting technological advancements, I predict that 2012 will not only be the year of the dragon, it will also be the year of predictive coding.

Losing Weight, Developing an Information Governance Plan, and Other New Year’s Resolutions

Tuesday, January 17th, 2012

It’s already a few weeks into the new year and it’s easy to spot the big lines at the gym, folks working on fad diets and many swearing off any number of vices.  Sadly perhaps, most popular resolutions don’t really change year after year.  In the corporate world, though, it’s not good enough to simply recycle resolutions every year since there’s a lot more at stake, often with employees’ bonuses and jobs hanging in the balance.

It’s not too late to make information governance part of the corporate 2012 resolution list.  The reason is pretty simple – most companies need to get out of the reactive firefighting of eDiscovery given the risks of sloppy work, inadvertent productions and looming sanctions.  Yet so many are caught up in the fog of the eDiscovery war that they’ve failed to see the nexus between upstream, proactive data management hygiene and downstream eDiscovery chaos.

In many cases the root cause is the disconnect between differing functional groups (Legal, IT, Information Security, Records Management, etc.).  This is where the emerging umbrella concept of Information Governance comes into play, serving as a way to tackle these information risks along a unified front. Gartner defines information governance as the:

“specification of decision rights, and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archiving and deletion of information, … [including] the processes, roles, standards, and metrics that ensure the effective and efficient use of information to enable an organization to achieve its goals.”

Perhaps more simply put, what were once a number of distinct disciplines—records management, data privacy, information security and eDiscovery—are rapidly coming together in ways that are important to those concerned with mitigating and managing information risk. This new information governance landscape is comprised of a number of formerly discrete categories:

  • Regulatory Risks – Whether an organization is in a heavily regulated vertical or not, there are a host of regulations that an organization must navigate to successfully stay in compliance.  In the United States these include a range of disparate regimes, including the Sarbanes-Oxley Act, HIPAA, the Securities Exchange Act, the Foreign Corrupt Practices Act (FCPA) and other specialized regulations – any number of which require information to be kept in a prescribed fashion, for specified periods of time.  Failure to turn over information when requested by regulators can have dramatic financial consequences, as well as negative impacts to an organization’s reputation.
  • Discovery Risks – Under the discovery realm there are any number of potential risks as a company moves along the EDRM spectrum (i.e., Identification, Preservation, Collection, Processing, Analysis, Review and Production), but the most lethal risk is typically associated with spoliation sanctions that arise from the failure to adequately preserve electronically stored information (ESI).  There have been literally hundreds of cases where both plaintiffs and defendants have been caught in the judicial crosshairs, resulting in penalties ranging from outright case dismissal to monetary sanctions in the millions of dollars, simply for failing to preserve data properly.  It is in this discovery arena that the failure to dispose of corporate information, where possible, rears its ugly head, since the eDiscovery burden is commensurate with the amount of data that needs to be preserved, processed and reviewed.  Some statistics show that it can cost as much as $5 per document just to have an attorney privilege review performed.  And, with every gigabyte containing upwards of 75,000 pages, it is easy to see massive discovery liability when an organization has terabytes and even petabytes of extraneous data lying around (see the back-of-envelope sketch following this list).
  • Privacy Risks – Even though the US has a relatively lax information privacy climate, there are any number of laws that require companies to notify customers if their personally identifiable information (PII) such as credit card, social security, or credit numbers have been compromised.  For example, California’s data breach notification law (SB1386) mandates that all subject companies must provide notification if there is a security breach to the electronic database containing PII of any California resident.  It is easy to see how unmanaged PII can increase corporate risk, especially as data moves beyond US borders to the international stage where privacy regimes are much more stringent.
  • Information Security Risks – Data breaches have become so commonplace that the loss/theft of intellectual property has become an issue for every company, small and large, both domestically and internationally.  The cost to businesses of unintentionally exposing corporate information climbed 7 percent last year to over $7 million per incident.  Recently senators asked the SEC to “issue guidance regarding disclosure of information security risk, including material network breaches” since “securities law obligates the disclosure of any material network breach, including breaches involving sensitive corporate information that could be used by an adversary to gain competitive advantage in the marketplace, affect corporate earnings, and potentially reduce market share.”  The senators cited a 2009 survey that concluded that 38% of Fortune 500 companies made a “significant oversight” by not mentioning data security exposures in their public filings.
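
Here is the back-of-envelope review-cost math referenced in the Discovery Risks bullet above. The $5-per-document and 75,000-pages-per-gigabyte figures come from that bullet; the assumed average of 10 pages per document used to convert between the two is an illustrative assumption, and real costs vary widely.

    # Back-of-envelope privilege review cost as data volume grows.
    pages_per_gb = 75_000
    pages_per_document = 10          # assumed average; varies widely in practice
    cost_per_document = 5            # dollars, attorney privilege review

    def review_cost(gigabytes: float) -> float:
        documents = gigabytes * pages_per_gb / pages_per_document
        return documents * cost_per_document

    for size_gb in (1, 100, 1_000, 1_000_000):   # 1 GB, 100 GB, 1 TB, 1 PB
        print(f"{size_gb:>9,} GB -> ${review_cost(size_gb):>16,.0f}")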

Information governance as an umbrella concept helps organizations to create better alignment between functional groups as they attempt to solve these complex and interrelated data risk challenges.  This coordination is even more critical given the way that corporate data is proliferating and migrating beyond the firewall.  With even more data located in the cloud and on mobile devices, a key mandate is managing data in all types of form factors. A great first step is to determine ownership of a consolidated information governance approach, where the owner can:

  • Get C-Level buy-in
  • Have the organizational savvy to obtain budget
  • Be able to define “reasonable” information governance efforts, which requires both legal and IT input
  • Have strong leadership and consensus building skills, because all stakeholders need to be on the same page
  • Understand the nuances of their business, since an overly rigid process will cause employees to work around the policies and procedures

Next, tap into and then leverage IT or information security budgets for archiving, compliance and storage.  In most progressive organizations there are likely ongoing projects that can be successfully massaged into a larger information governance play.  A great place to focus initially is information archiving, since this is one of the simplest steps an organization can take to improve its information governance hygiene.  With an archive, organizations can systematically index, classify and retain information and thus establish a proactive approach to data management.  It’s this ability to apply retention and (most importantly) expiration policies that allows organizations to start reducing the upstream data deluge that will inevitably impact downstream eDiscovery processes.
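
As a simple illustration of what retention and expiration policies look like when reduced to logic an archive can apply at scale, consider the sketch below. The categories, retention periods and records are hypothetical, and note that nothing is expired while it sits on legal hold.

    # Illustrative retention/expiration sweep; not any particular archiving product.
    from datetime import date, timedelta

    RETENTION_POLICY = {                       # retention period by classification (assumed values)
        "financial_record": timedelta(days=7 * 365),
        "routine_email":    timedelta(days=2 * 365),
        "marketing":        timedelta(days=180),
    }

    archive = [
        {"id": "msg-001", "category": "routine_email",    "ingested": date(2009, 6, 1),  "legal_hold": False},
        {"id": "msg-002", "category": "routine_email",    "ingested": date(2009, 6, 1),  "legal_hold": True},
        {"id": "doc-003", "category": "financial_record", "ingested": date(2006, 1, 15), "legal_hold": False},
    ]

    def expired(item: dict, today: date) -> bool:
        """An item may expire only if its retention period has run AND it is not on legal hold."""
        cutoff = item["ingested"] + RETENTION_POLICY[item["category"]]
        return today > cutoff and not item["legal_hold"]

    today = date(2012, 1, 17)
    for item in archive:
        print(item["id"], "expire" if expired(item, today) else "retain")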

Once an archive is in place, the next logical step is to couple a scalable, reactive eDiscovery process with the upstream data sources, which will naturally include email but increasingly should encompass cloud content, social media, unstructured data, etc.  It is important to make sure that a given archive has been tested for compatibility with the chosen eDiscovery application, to guarantee that it can collect content at scale in the same manner used to collect from other data sources.  Overlaying both of these foundational pieces should be the ability to place content on legal hold, whether that content exists in the archive or not.

As we enter 2012, there is no doubt that information governance should be an element in building an enterprise’s information architecture.  And, different from fleeting weight loss resolutions, savvy organizations should vow to get ahead of the burgeoning categories of information risk by fully embracing their commitment to integrated information governance.  And yet, this resolution doesn’t need to encompass every possible element of information governance.  Instead, it’s best to put foundational pieces into place and then build the rest of the infrastructure in methodical and modular fashion.

Information Governance Gets Presidential Attention: Banking Bailout Cost $4.76 Trillion, Technology Revamp Approaches $240 Billion

Tuesday, January 10th, 2012

On November 28, 2011, The White House issued a Presidential Memorandum that outlines what is expected of the 480 federal agencies of the government’s three branches in the next 240 days.  Up until now, Washington, D.C. has been the Wild West with regard to information governance, as each agency has often unilaterally adopted its own arbitrary policies and systems.  Moreover, some agencies have recently purchased differing technologies.  Unfortunately, given the President’s ultimate goal of uniformity, this centralization will be difficult to accomplish with such a range of disparate technological approaches.

Particular pain points for the government traditionally include the retention, search, collection, review and production of vast amounts of data and records.  These pain points show up as FOIA requests gone awry, legal holds issued across different agencies leading to spoliation, and the ever-present problem of decentralization.

Why is the government different?

Old Practices. First, in some instances the government is technologically behind its corporate counterparts and is failing to meet the judiciary’s expectation that organizations effectively store, manage and discover their information.  This failing is self-evident from the President’s directive mandating that these agencies develop a plan to attack the problem.  Though different from other corporate entities, the government is nevertheless held to the same standards of eDiscovery under the Federal Rules of Civil Procedure (FRCP).  In practice, the government has been given more leniency until recently, and while equal expectations have not always been the case, the gap between the private and public sectors is no longer possible to ignore.

FOIA.  The government’s arduous obligation to produce information under the Freedom of Information Act (FOIA) has no corresponding analog for private organizations, which respond to more traditional civil discovery requests.  Because the government is so large, with many disparate IT systems, it is cumbersome to work efficiently through the information governance process across agencies, and many times still difficult inside one individual agency with multiple divisions.  Executing this production process is even more difficult, if not impossible, to do manually without properly deployed technology.  Additionally, many of the investigatory agencies that issue requests to the private sector need more efficient ways to manage and review the data they are requesting.  To compound problems, within the US government two opposing interests are at play, both screaming for a resolution, and that resolution needs to be centralized.  On the one hand, the government needs to retain more than a corporation may need to in order to satisfy FOIA requests.

Titan Pulled at Both Ends. On the other hand, without classification of the records that are to be kept, technology to organize this vast amount of data, and some amount of expiry, every agency will essentially become its own massive repository.  The “retain everything” mentality, coupled with inefficient search and retrieval of data and records, is where agencies stand today.  Corporations are experiencing this on a smaller scale, and many are collectively further along than the government in this process, without the FOIA complications.

What are agencies doing to address these mandates?

In their plans, agencies must describe how they will improve or maintain their records management programs, particularly with regard to email, social media and other electronic communications.  They must also move away from such a paper-centric existence.  eDiscovery consultants and software companies are helping agencies through this process, essentially writing their plans to match the President’s directive.  The cloud conversation has been revisited, and agencies also have to explain how they will use cloud-based services and storage solutions, as well as identify gaps in existing laws or regulations that presently prevent improved management.  Small innovations are taking place.  In fact, just recently the DOJ added a new search feature on their website to make it easier for the public to find documents that have been posted by agencies on their websites.

The Office of Management and Budget (OMB), National Archives and Records Administration (NARA), and Justice Department will use those reports to come up with a government-wide records management framework that is more efficient, maintains accountability by documenting agency actions and promotes “appropriate” public access to records.  Hopefully, the framework they come up with will be centralized and workable on a realistic timeframe with resources sufficiently allocated to the initiative.

How much will this cost?

The President’s mandate is a great initiative and very necessary, but one cannot help but think about the costs in terms of money, time and resources when considering these crucial changes.  The most recent version of a financial services and general government appropriations bill in the Senate extends $378.8 million to NARA for this initiative.  President Obama appointed Steven VanRoekel as the United States CIO in August 2011 to succeed Vivek Kundra.  After VanRoekel’s speech at the Churchill Club in October of 2011, an audience member asked him what the most surprising aspect of his new job was.  VanRoekel said that it was managing the huge and sometimes unwieldy resources of his $80 billion budget.  It is going to take even more than this to do the job right, however.

Using conservative estimates, assume the initial investment for an agency to implement archiving and eDiscovery capabilities would be $100 million.  That approximates $480 billion for all 480 agencies.  Assume a uniform information governance platform gets adopted by all agencies at a 50% discount, due to the large contracts and also factoring in smaller sums for agencies with lesser needs.  The total now comes to $240 billion.  For context, that figure is 5% of what the Federal Government spent ($4.76 trillion) on the biggest bailout in history in 2008. That leaves a need for $160 billion more to get the job done. VanRoekel also commented at the same meeting that he wants to break down massive multi-year information technology projects into smaller, more modular projects, in the hopes of saving the government from getting mired in multi-million dollar failures; his solution, he says, is modular and incremental deployment.

While Rome was not built in a day, this initiative is long overdue yet feasible, as technology exists to address these challenges rather quickly.  After these 240 days are complete and a plan is drawn, the real question is: how are we going to pay now for technology the government needed yesterday?  In a perfect world, the government would select a platform for archiving and eDiscovery, break the project into incremental milestones and roll out a uniform combination of solutions that are best of breed in their respective areas of expertise.

Lessons Learned for 2012: Spotlighting the Top eDiscovery Cases from 2011

Tuesday, January 3rd, 2012

The New Year has now dawned and with it, the certainty that 2012 will bring new developments to the world of eDiscovery.  Last month, we spotlighted some eDiscovery trends for 2012 that we feel certain will occur in the near term.  To understand how these trends will play out, it is instructive to review some of the top eDiscovery cases from 2011.  These decisions provide a roadmap of best practices that the courts promulgated last year.  They also spotlight the expectations that courts will likely have for organizations in 2012 and beyond.

Issuing a Timely and Comprehensive Litigation Hold

Case: E.I. du Pont de Nemours v. Kolon Industries (E.D. Va. July 21, 2011)

Summary: The court issued a stiff rebuke against defendant Kolon Industries for failing to issue a timely and proper litigation hold.  That rebuke came in the form of an instruction to the jury that Kolon executives and employees destroyed key evidence after the company’s preservation duty was triggered.  The jury responded by returning a stunning $919 million verdict for DuPont.

The spoliation at issue occurred when several Kolon executives and employees deleted thousands of emails and other records relevant to DuPont’s trade secret claims.  The court laid the blame for this destruction on the company’s attorneys and executives, reasoning that they could have prevented the spoliation through an effective litigation hold process.  At issue were three hold notices circulated to the key players and data sources.  The notices were all deficient in some manner: they were either too limited in their distribution, ineffective because they were prepared in English for Korean-speaking employees, or too late to prevent or otherwise ameliorate the spoliation.

The Lessons for 2012: The DuPont case underscores the importance of issuing a timely and comprehensive litigation hold notice.  As DuPont teaches, organizations should identify what key players and data sources may have relevant information.  A comprehensive notice should then be prepared to communicate the precise hold instructions in an intelligible fashion.  Finally, the hold should be circulated immediately to prevent data loss.

Organizations should also consider deploying the latest technologies to help effectuate this process.  This includes an eDiscovery platform that enables automated legal hold acknowledgements.  Such technology will allow custodians to be promptly and properly apprised of litigation and thereby retain information that might otherwise have been discarded.

Another Must-Read Case: Haraburda v. Arcelor Mittal U.S.A., Inc. (D. Ind. June 28, 2011)

Suspending Document Retention Policies

Case: Viramontes v. U.S. Bancorp (N.D. Ill. Jan. 27, 2011)

Summary: The defendant bank defeated a sanctions motion because it modified aspects of its email retention policy once it was aware litigation was reasonably foreseeable.  The bank implemented a retention policy that kept emails for 90 days, after which the emails were overwritten and destroyed.  The bank also promulgated a course of action whereby the retention policy would be promptly suspended on the occurrence of litigation or other triggering event.  This way, the bank could establish the reasonableness of its policy in litigation.  Because the bank followed that procedure in good faith, it was protected from court sanctions under the Federal Rule of Civil Procedure 37(e) “safe harbor.”

The Lesson for 2012: As Viramontes shows, an organization can be prepared for eDiscovery disputes by timely suspending aspects of its document retention policies.  By modifying retention policies when so required, an organization can develop a defensible retention procedure and be protected from court sanctions under Rule 37(e).

Coupling those procedures with archiving software will only enhance an organization’s eDiscovery preparations.  Effective archiving software will have a litigation hold mechanism, which enables an organization to suspend automated retention rules.  This will better ensure that data subject to a preservation duty is actually retained.

Another Must-Read Case: Micron Technology, Inc. v. Rambus Inc., 645 F.3d 1311 (Fed. Cir. 2011)

Managing the Document Collection Process

Case: Northington v. H & M International (N.D.Ill. Jan. 12, 2011)

Summary: The court issued an adverse inference jury instruction against a company that destroyed relevant emails and other data.  The spoliation occurred in large part because legal and IT were not involved in the collection process.  For example, counsel was not actively engaged in the critical steps of preservation, identification or collection of electronically stored information (ESI).  Nor was IT brought into the picture until 15 months after the preservation duty was triggered. By that time, rank and file employees – some of whom were accused by the plaintiff of harassment – stepped into this vacuum and conducted the collection process without meaningful oversight.  Predictably, key documents were never found and the court had little choice but to promise to inform the jury that the company destroyed evidence.

The Lesson for 2012: An organization does not have to suffer the same fate as the company in the Northington case.  It can take charge of its data during litigation through cooperative governance between legal and IT.  After issuing a timely and effective litigation hold, legal should typically involve IT in the collection process.  Legal should rely on IT to help identify all data sources – servers, systems and custodians – that likely contain relevant information.  IT will also be instrumental in preserving and collecting that data for subsequent review and analysis by legal.  By working together in a top-down fashion, organizations can better ensure that their eDiscovery process is defensible and not fatally flawed.

Another Must-Read Case: Green v. Blitz U.S.A., Inc. (E.D. Tex. Mar. 1, 2011)

Using Proportionality to Dictate the Scope of Permissible Discovery

Case: DCG Systems v. Checkpoint Technologies (N.D. Ca. Nov. 2, 2011)

The court adopted the new Model Order on E-Discovery in Patent Cases recently promulgated by the U.S. Court of Appeals for the Federal Circuit.  The model order incorporates principles of proportionality to reduce the production of email in patent litigation.  In adopting the order, the court explained that email productions should be scaled back since email is infrequently introduced as evidence at trial.  As a result, email production requests will be restricted to five search terms and may only span a defined set of five custodians.  Furthermore, email discovery in DCG Systems will wait until after the parties complete discovery on the “core documentation” concerning the patent, the accused product and prior art.

The Lesson for 2012: Courts seem to be slowly moving toward a system that incorporates proportionality as the touchstone for eDiscovery.  This is occurring beyond the field of patent litigation, as evidenced by other recent cases.  Even the State of Utah has gotten in on the act, revising its version of Rule 26 to require that all discovery meet the standards of proportionality.  While there are undoubtedly deviations from this trend (e.g., Pippins v. KPMG (S.D.N.Y. Oct. 7, 2011)), the clear lesson is that discovery should comply with the cost cutting mandate of Federal Rule 1.

Another Must-Read Case: Omni Laboratories Inc. v. Eden Energy Ltd [2011] EWHC 2169 (TCC) (29 July 2011)

Leveraging eDiscovery Technologies for Search and Review

Case: Oracle America v. Google (N.D. Ca. Oct. 20, 2011)

The court ordered Google to produce an email that it previously withheld on attorney client privilege grounds.  While the email’s focus on business negotiations vitiated Google’s claim of privilege, that claim was also undermined by Google’s production of eight earlier drafts of the email.  The drafts were produced because they did not contain addressees or the heading “attorney client privilege,” which the sender later inserted into the final email draft.  Because those details were absent from the earlier drafts, Google’s “electronic scanning mechanisms did not catch those drafts before production.”

The Lesson for 2012: Organizations need to leverage next generation, robust technology to support the document production process in discovery.  Tools such as email analytical software, which can isolate drafts and offer to remove them from production, are needed to address complex production issues.  Other technological capabilities, such as Near Duplicate Identification, can also help identify draft materials and marry them up with finals that have been marked as privileged.  Last but not least, technology assisted review has the potential to enable one lawyer to efficiently complete work that previously took thousands of hours.  Finding the budget and doing the research to obtain the right tools for the enterprise should be a priority for organizations in 2012.
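
To illustrate the underlying idea of near-duplicate identification, the toy sketch below compares two versions of a document using word shingles and Jaccard similarity. Commercial tools are considerably more sophisticated, and the threshold and sample text here are purely illustrative.

    # Toy near-duplicate check: word shingles plus Jaccard similarity.
    def shingles(text: str, k: int = 3) -> set:
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    final = "This draft of the license agreement is attorney client privileged and reflects counsel's advice"
    draft = "This draft of the license agreement is privileged and reflects counsel's advice"

    similarity = jaccard(shingles(final), shingles(draft))
    print(f"similarity = {similarity:.2f}")
    if similarity > 0.5:      # illustrative threshold
        print("flag as near-duplicate of a privileged final; route for review before production")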

Another Must-Read Case: J-M Manufacturing v. McDermott, Will & Emery (CA Super. Jun. 2, 2011)

Conclusion

There were any number of other significant cases from 2011 that could have made this list.  We invite you to share your favorites in the comments section or contact us directly with your feedback.
