
Posts Tagged ‘EDRM’

The Gartner 2013 Magic Quadrant for eDiscovery Software is Out!

Wednesday, June 12th, 2013

This week marks the release of the 3rd annual Gartner Magic Quadrant for e-Discovery Software report. In the early days of eDiscovery, most companies outsourced almost every sizeable project to vendors and law firms, so eDiscovery software was barely a blip on the radar screen for technology analysts. Fast forward a few years to an era of explosive information growth and rising eDiscovery costs and the landscape has changed significantly. Today, much of the outsourced eDiscovery “services” business has been replaced by eDiscovery software solutions that organizations bring in house to reduce risk and cost. As a result, the enterprise eDiscovery software market is forecast to grow from $1.4 billion in total software revenue worldwide in 2012 to $2.9 billion by 2017. (See “Forecast: Enterprise E-Discovery Software, Worldwide, 2012 – 2017,” Tom Eid, December 2012.)

Not surprisingly, today’s rapidly growing eDiscovery software market has become significant enough to catch the attention of mainstream analysts like Gartner. This is good news for company lawyers who are used to delegating enterprise software decisions to IT departments and outside law firms, because today those same lawyers are involved in eDiscovery and other information management software purchasing decisions for their organizations. While these lawyers understand the company’s legal requirements, they do not necessarily understand how to choose the best technology to address those requirements. Conversely, IT representatives understand enterprise software, but they do not necessarily understand the law. Gartner bridges this gap by providing in-depth, independent analysis of the top eDiscovery software solutions in the form of the Gartner Magic Quadrant for e-Discovery Software.

Gartner’s methodology for preparing the annual Magic Quadrant report is rigorous. Providers must meet quantitative requirements such as revenue and significant market penetration to be included in the report. If these threshold requirements are met, Gartner probes deeper by meeting with company representatives, interviewing customers, and soliciting feedback to written questions. Providers that make the cut are evaluated across the four Magic Quadrant categories as “Leaders,” “Challengers,” “Niche Players,” or “Visionaries.” Where each provider ends up on the quadrant is guided by an independent evaluation of each provider’s “ability to execute” and “completeness of vision.” Landing in the “Leaders” quadrant is considered a top recognition.

The nine Leaders in this year’s Magic Quadrant have four primary characteristics (See figure 1 above).

The first is whether the provider has functionality that spans both sides of the Electronic Discovery Reference Model (EDRM) (left side – identification, preservation, litigation hold, collection, early case assessment (ECA) and processing; right side – processing, review, analysis and production). “While Gartner recognizes that not all enterprises — or even the majority — will want to perform legal-review work in-house, more and more are dictating what review tools will be used by their outside counsel or legal-service providers. As practitioners become more sophisticated, they are demanding that data change hands as little as possible, to reduce cost and risk. This is a continuation of a trend we saw developing last year, and it has grown again in importance, as evidenced both by inquiries from Gartner clients and reports from vendors about the priorities of current and prospective customers.”

We see this as consistent with the theme that providers with archiving solutions designed to automate data retention and destruction policies generally fared better than those without archiving technology. The rationale is that part of a good end-to-end eDiscovery strategy includes proactively deleting data organizations do not have a legal or business need to keep. This approach decreases the amount of downstream electronically stored information (ESI) organizations must review on a case-by-case basis so the cost savings can be significant.
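To make the retention-and-destruction idea concrete, here is a minimal sketch of an automated disposition check. The field names, the three-year retention period, and the hold identifiers are all invented for illustration — this is not any vendor’s policy engine — but it shows why expired data is deleted only when no legal hold applies:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365 * 3)  # hypothetical 3-year retention period

def disposition(doc, today, legal_holds):
    """Return 'retain' or 'delete' for a document under a simple policy.

    doc: dict with 'id', 'created' (datetime), 'matter_ids' (set of matters)
    legal_holds: set of matter IDs currently under an active hold
    """
    # Never delete anything subject to an active legal hold.
    if doc["matter_ids"] & legal_holds:
        return "retain"
    # Otherwise delete once the retention period has expired.
    if today - doc["created"] > RETENTION:
        return "delete"
    return "retain"

docs = [
    {"id": "a", "created": datetime(2009, 1, 1), "matter_ids": set()},
    {"id": "b", "created": datetime(2009, 1, 1), "matter_ids": {"m1"}},
    {"id": "c", "created": datetime(2012, 6, 1), "matter_ids": set()},
]
today = datetime(2013, 6, 1)
results = {d["id"]: disposition(d, today, {"m1"}) for d in docs}
```

Document “a” is expired and unencumbered, so it is deleted; “b” is equally old but on hold, so it survives; “c” is still inside the retention window. That hold-before-age ordering is the heart of defensible deletion.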

Not surprisingly, whether a provider offers technology assisted review or predictive coding capabilities was another factor in evaluating each provider’s end-to-end functionality. The industry has witnessed a surge in predictive coding case law since 2012, and judicial interest has helped drive this momentum. However, a key driver for implementing predictive coding technology is the ability to reduce the amount of ESI attorneys need to review on a case-by-case basis. Given that attorney review is the most expensive phase of the eDiscovery process, many organizations are complementing their proactive information reduction (archiving) strategy with a case-by-case information reduction plan that also includes predictive coding.
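At its core, predictive coding is supervised text classification: attorneys code a small sample of documents as relevant or not, and the system uses those examples to rank the rest of the collection for review. The toy sketch below (a tiny Naive Bayes over made-up training snippets) is nothing like a production review platform, but it shows the basic mechanic:

```python
from collections import Counter
import math

def train(docs_by_label):
    """Build word counts per label from attorney-coded example documents."""
    counts, totals, vocab = {}, {}, set()
    for label, docs in docs_by_label.items():
        c = Counter(w for d in docs for w in d.lower().split())
        counts[label], totals[label] = c, sum(c.values())
        vocab |= set(c)
    return counts, totals, vocab

def score(text, counts, totals, vocab, label):
    # Log-probability of the text under one class, with add-one smoothing.
    s = 0.0
    for w in text.lower().split():
        s += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
    return s

# Hypothetical attorney-coded seed set.
training = {
    "relevant": ["contract breach damages", "breach of warranty claim"],
    "not_relevant": ["lunch menu friday", "holiday party friday"],
}
counts, totals, vocab = train(training)

doc = "draft breach of contract memo"
relevant = score(doc, counts, totals, vocab, "relevant") > \
           score(doc, counts, totals, vocab, "not_relevant")
```

In a real workflow the scores would be used to prioritize (not replace) attorney review, with statistical sampling to validate the results.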

The second characteristic Gartner considered was that Leaders’ business models clearly demonstrate that their focus is software development and sales, as opposed to the provision of services. Gartner acknowledged that the eDiscovery services market is strong, but explains that the purpose of the Magic Quadrant is to evaluate software, not services. The justification is that “[c]orporate buyers and even law firms are trending towards taking as much e-Discovery process in house as they can, for risk management and cost control reasons. In addition, the vendor landscape for services in this area is consolidating. A strong software offering, which can be exploited for growth and especially profitability, is what Gartner looked for and evaluated.”

Third, Gartner believes the solution provider market is shrinking and that corporations are becoming more involved in buying decisions instead of deferring technology decisions to their outside law firms. Therefore, those in the Leaders category were expected to illustrate a good mix of corporate and law firm buying centers. The rationale behind this category is that law firms often help influence corporate buying decisions so both are important players in the buying cycle. However, Gartner also highlighted that vendors who get the majority of their revenues from the “legal solution provider channel” or directly from “law firms” may soon face problems.

The final characteristic Gartner considered for the Leaders quadrant is related to financial performance and growth. In measuring this component, Gartner explained that a number of factors were considered. Primary among them is whether the Leaders are keeping pace with or even exceeding overall market growth. (See “Forecast:  Enterprise E-Discovery Software, Worldwide, 2012 – 2017,” Tom Eid, December, 2012).

Companies landing in Gartner’s Magic Quadrant for eDiscovery Software have reason to celebrate their position in an increasingly competitive market. To review Gartner’s full report yourself, click here. In the meantime, please feel free to share your own comments below as the industry anxiously awaits next year’s Magic Quadrant Report.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

ADR Offers Unique Solutions to Address Common eDiscovery Challenges

Friday, May 3rd, 2013

Much of the writing in the eDiscovery community focuses on the consequences of a party failing to adequately accomplish one of the nine boxes of the Electronic Discovery Reference Model. Breaking news posts frequently report on spoliation findings and sanctions issued for failure to suspend auto-deletion or to properly circulate a written litigation hold notice. This raises the question: aside from becoming perfectly adept in all nine boxes of the EDRM, how else can an organization protect itself from discovery wars and sanctions?

One way is to explore the possibilities Alternative Dispute Resolution (ADR) has to offer. While there is no substitute for the proper implementation of information governance processes, technology, and the people who manage them, there are alternative and creative ways to minimize exposure. This is not to say that ESI is less discoverable in ADR, but it is to say that with the proper agreements in place, the way ESI is handled in the event of a dispute can be addressed proactively. That is because although parties are free to use the Federal Rules of Civil Procedure in ADR proceedings, they are not constrained by them. In other words, ADR proceedings can provide parties with the flexibility to negotiate and tailor their own discovery rules to address the specific matter and issues at hand.

Arbitration is a practical and preferred way to resolve disputes because it is quick, relatively inexpensive and commonly binding. With enough foresight, parties can preemptively limit the scope of discovery in their agreements to ensure the just and speedy resolution of a matter. Practitioners who are well versed in electronic discovery will be best positioned to counsel clients in the formation of these agreements up front, obviating protracted discovery. While a similar agreement can be reached, and similar protection achieved, at the Meet and Confer Conference in civil litigation, ADR offers a more private forum that gives the parties more contractual power and fewer unwanted surprises.

For example, JAMS includes this passage in one of its model recommendations:

JAMS recognizes that there is significant potential for dealing with time and other limitations on discovery in the arbitration clauses of commercial contracts. An advantage of such drafting is that it is much easier for parties to agree on such limitations before a dispute has arisen. A drawback, however, is the difficulty of rationally providing for how best to arbitrate a dispute that has not yet surfaced. Thus, the use of such clauses may be most productive in circumstances in which parties have a good idea from the outset as to the nature and scope of disputes that might thereafter arise.

Thus, arbitration is an attractive option for symmetrical litigation where the stakes are high and neither party wants to delve into a discovery war. A fair amount of early case assessment would be necessary as well, so that parties have a full appreciation of what they are agreeing to include or exclude in the way of ESI. Absent a provision to use specific rules (American Arbitration Association or Federal Arbitration Act), the agreement between the parties is the determining factor as to how extensive the scope of discovery will be.

In Mitsubishi Motors v. Soler Chrysler-Plymouth, Inc., 473 U.S. 614, 625 (1985), the U.S. Supreme Court explained that the “liberal federal policy favoring arbitration agreements…is at bottom a policy guaranteeing the enforcement of private contractual agreements. As such, assuming an equal bargaining position or, at least, an informed judgment, courts will enforce stipulations regarding discovery, given the policy of enforcing arbitration agreements by their terms.” Please also see an excellent explanation of Discovery in Arbitration by Joseph L. Forstadt for more information.

Cooperation amongst litigants in discovery has long been a principle of the revered Sedona Conference. ADR practitioners facing complex discovery questions are looking to Sedona’s Cooperation Proclamation for guidance with an eye toward negotiation, educating themselves on ways to further minimize distractions and costs in discovery. One such event is at The Center for Negotiation and Dispute Resolution at UC Hastings, which is conducting a mock Meet and Confer on May 16, 2013. The event highlights the need for all practitioners, whether at the 26(f) conference in litigation or the preliminary hearing in arbitration, to assess electronic discovery issues early in the dispute with the same weight they give claims and damages.

It is also very important that arbitrators, especially given the power they have over a matter, understand the consequences of their rulings. Discovery is typically under the sole control of the arbitrator in a dispute, and only in very select circumstances can relief be granted by the court. An arbitrator who knows nothing about eDiscovery could miss something material and adversely affect the entire outcome. For parties that have identified and addressed these issues proactively, there is more protection and certainty in arbitration. Typically, the primary focus of an arbitrator is enforcing the contract between the parties, not serving as an eDiscovery expert.

It is also important to caution against revoking rights to discovery by entering into mutual agreements that unreasonably limit discovery. This approach is somewhat reminiscent of the days when lawyers would agree not to conduct discovery because neither knew how. Now, while efficiency and cost savings are a priority, we must guard against a similar paradigm emerging as we may come to know too much about how to shield relevant ESI.

As we look to the future, especially for serial litigants, one can imagine a perfect world in arbitration for predictive coding. In the Federal courts, we have seen over the past two years or so an emergence of the use of predictive coding technologies. However, even when the parties agree, which they don’t always, they still struggle to achieve a meeting of the minds on the protocol. These disputes have at times overshadowed the advantage of using predictive coding because discovery disputes and attorneys’ fees have overtaken any savings. In ADR there is a real opportunity for similarly situated parties to agree via contract, up front, on tools, methodologies and scope. Once these contracts are in place, both parties are bound to the same rules and a just and speedy resolution of the matter can take place.

Q & A With the Men Who Made the EDRM Mold: Tom Gelbmann and George Socha

Monday, February 11th, 2013

Q. On a daily basis I discuss the EDRM (Electronic Discovery Reference Model) with customers, of course attributing credit to you two, and I find it is a very actionable and helpful paradigm to describe eDiscovery and Information Governance. Not too long ago, the EDRM was supplemented by the IGRM — can you explain the history of both?

Tom: IGRM (Information Governance Reference Model) is a response to the growing focus on addressing arguably the most significant issue facing organizations with respect to managing internal information – getting their electronic data house in order. Many of the cost components associated with electronic discovery can be lessened by having effective information governance policies and processes in place. The IGRM provides a framework to help organizations develop and implement policies and processes that reflect the active involvement of the key constituencies in an organization – Legal, IT, Records Management, line-of-business entities, and Privacy/Security.

George: As Tom noted, organizations need to get their electronic houses in order. To do that, they need the appropriate people at the table, all equipped with something to use as the starting point for their discussions and as a means of focusing their efforts. Our IGRM framework is meant to be that something. The new IGRM framework grew out of the first box in the EDRM diagram, Information Management. That box started in 2005 as Records Management, but in 2007 we changed the name to Information Management to reflect the broader scope implicated by eDiscovery. We launched the Information Management Reference Model (IMRM) project in 2009 and in 2011 changed the name from Management to Governance.

Q. Can you explain the appropriate use case for each? 

Tom: Best to review materials posted at EDRM.net on IGRM including the IGRM Guide, Using the IGRM Model, and the whitepaper developed jointly between IGRM and ARMA International.

George: Use cases abound for both the EDRM and the IGRM frameworks.  The intent with both frameworks is to provide a structure that allows for many different use cases. For example, one dealing with legal hold issues will approach the EDRM framework from a different perspective and use it in a different fashion than one concerned with choosing appropriate forms of production.

Q. I also see Privacy has been added as a key component to the IGRM. Many of the customers we talk to outside of the U.S. don’t have the litigation drivers that prompt U.S. companies to purchase eDiscovery software. However, when a privacy breach occurs, litigation typically follows, so they benefit from implementing aspects of eDiscovery in-house. Moreover, archiving and document classification can enable compliance with the requirements of data protection and privacy. Can you explain why you added Privacy from your perspective and your thoughts surrounding this issue?

Tom: Privacy and Security were added this past year in recognition of the growing importance of this function within organizations.

George: With the growing attention paid to areas such as data transfer and data privacy, security and privacy issues have been receiving heightened scrutiny.  Were these likely to be short-term issues only, we would not have modified the IGRM diagram to add Privacy and Security.  Indications are that these issues will continue to grow in importance for many years to come, hence the addition.

Q. Finally, since you are both at the forefront of eDiscovery, please enlighten our readership on your 2013 predictions. 

Tom: Increasing focus on Information Governance, defensible disposition of ESI, increasing interest in Computer Assisted Review (EDRM recently published the Computer Assisted Review Reference Model (CARRM) to bring clarity and common understanding of the basic principles involved), and consolidation of providers in the eDiscovery space alongside the arrival of new providers – a continuation of what we have seen in the past couple of years.

George: I suspect that we are going to see dramatic growth in efforts to redirect “traditional” e-discovery tools and techniques, pointing them toward the much larger set of issues encompassed by information governance. While some providers will continue to absorb others, the growth in new providers will continue to outstrip the decrease coming from acquisitions, mergers, or collapses. Costs will remain of great concern, but with luck there will be a greater emphasis on how to make better use of the ever-growing array of tools and techniques to accomplish what we really need to do more of in the e-discovery arena – support and enhance the ability to build and tell a persuasive story.

Legal Tech 2013 Sessions: Symantec explores eDiscovery beyond the EDRM

Wednesday, December 19th, 2012

Having previously predicted the ‘happenings-to-be’ as well as recommended the ‘what not to do’ at LegalTech New York, the veteran LTNY team here at Symantec has decided to build anticipation for the 2013 event via a video series starring the LTNY un-baptized associate.  Get introduced to our eDiscovery-challenged protagonist in the first of our videos (above).

As for this year’s show, we’re pleased to expand our presence and are very excited to introduce eDiscovery without limits, along with a LegalTech that promises sessions, social events and opportunities for attendees in the same vein. With regard to the first aspect – the sessions – the team of Symantec eDiscovery counsel will moderate panel sessions on topics ranging across and beyond the EDRM. Joined by distinguished industry representatives, they’ll push the discussion deeper in 5 sessions, with a potential 6 hours of CLE credits offered to attendees.

Matt Nelson, resident author of Predictive Coding for Dummies, will moderate “How good is your predictive coding poker face?” where panelists tackle the recently controversial subjects of disclosing the use of Predictive Coding technology, statistical sampling and the production of training sets to the opposition.

Allison Walton will moderate, “eDiscovery in 3D: The New Generation of Early Case Assessment Techniques” where panelists will enlighten the crowd on taking ECA upstream into the information creation and retention stages and implementing an executable information governance workflow.  Allison will also moderate “You’re Doing it Wrong!!! How To Avoid Discovery Sanctions Due to a Flawed Legal Hold Process” where panelists recommend best practices towards a defensible legal hold process in light of potential changes in the FRCP and increased judicial scrutiny of preservation efforts.

Phil Favro will moderate “Protecting Your ESI Blindside: Why a “Defensible Deletion” Offense is the Best eDiscovery Defense” where panelists debate the viability of defensible deletion in the enterprise, the related court decisions to consider and quantifying the ROI to support a deletion strategy.

Chris Talbott will moderate a session on “Bringing eDiscovery back to Basics with the Clearwell eDiscovery Platform”, where engineer Anna Simpson will demonstrate Clearwell technology in the context of our panelists’ everyday use on cases ranging from FCPA inquiries to IP litigation.

Please browse our microsite for complete session descriptions and a look at Symantec’s LTNY 2013 presence. We hope you stay tuned to eDiscovery 2.0 throughout January to hear what Symantec has planned for the plenary session, our special event, contest giveaways and product announcements.

Where There’s Smoke There’s Fire: Powering eDiscovery with Data Loss Prevention

Monday, November 12th, 2012

New technologies are being repurposed for Early Case Assessment (ECA) in this ever-changing global economy chock-full of intellectual property theft and cybertheft. These increasingly hot issues are compelling lawyers to become savvier about the technologies they can use to identify IP theft and related issues in eDiscovery. One of the more useful, but often overlooked, tools in this regard is Data Loss Prevention (DLP) technology. Traditionally a data breach and security tool, DLP has emerged as yet another tool in the Litigator’s Tool Belt™ that can be applied in eDiscovery.

DLP technology utilizes Vector Machine Learning (VML) to detect intellectual property, such as product designs, source code and trademarked language that are deemed proprietary and confidential. This technology eliminates the need for developing laborious keyword-based policies or fingerprinting documents. While a corporation can certainly customize these policies, there are off the shelf materials that make the technology easy to deploy.
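Conceptually, this style of detection compares new content against exemplars of protected material rather than matching keyword lists. The sketch below is a highly simplified stand-in (plain term-frequency cosine similarity over invented exemplar text); real VML is far more sophisticated, but the contrast with keyword policies comes through:

```python
from collections import Counter
import math

def tf_vector(text):
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical exemplars of protected content (not a real DLP policy).
exemplars = [tf_vector("aramid fiber spin process temperature profile"),
             tf_vector("polymer solution concentration spin process")]

def flags(message, threshold=0.5):
    """Flag a message whose best similarity to any exemplar crosses the bar."""
    v = tf_vector(message)
    return max(cosine(v, e) for e in exemplars) >= threshold

suspicious = flags("attaching the spin process temperature profile")
routine = flags("see you at the team lunch tomorrow")
```

Because the policy is defined by example documents rather than enumerated keywords, a paraphrased or partially copied design document can still score high against the exemplar set.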

An exemplary use case that spotlights how DLP could have been deployed in the eDiscovery context is the case of E.I. Du Pont de Nemours v. Kolon Industries. In DuPont, a jury issued a $919 million verdict after finding that the defendant manufacturer stole critical elements of the formula for Kevlar, a closely guarded and highly profitable DuPont trade secret. Despite the measures that were taken to protect the trade secret, a former DuPont consultant successfully copied key information relating to Kevlar onto a CD that was later disseminated to the manufacturer’s executives. All of this came to light in the recently unsealed criminal indictments the U.S. Department of Justice obtained against the manufacturer and several of its executives.

Perhaps all of this could have been avoided had a DLP tool been deployed. A properly implemented DLP solution might have detected the copying of proprietary information and other suspicious behavior involving sensitive IP, prompting an internal investigation. At the very least, it could have mitigated the harmful effects of the trade secret theft.

As the DuPont case teaches, DLP can be utilized to detect IP theft and data breaches. In addition, it can act as an early case assessment (ECA) tool for lawyers in both civil and criminal actions. With data breaches, where there is smoke (breach) there is generally fire (litigation). A DLP incident report can be used as a basis for an investigation, and essentially reverse engineer the ECA process with hard evidence underlying the data breach. Thus, instead of beginning an investigation with a hunch or tangential lead, DLP gives hard facts to lawyers, and ultimately serves as a roadmap for effective legal hold implementation for the communications of custodians. Instead of discovering data breaches during the discovery process, DLP allows lawyers to start with this information, making the entire matter more efficient and targeted.
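As a concrete illustration of using an incident report as an ECA roadmap, one could distill a DLP export into a candidate custodian list for a legal hold. The record fields and severity values below are hypothetical, not any console's actual export format:

```python
# Hypothetical incident records, as a DLP console might export them.
incidents = [
    {"custodian": "jdoe", "policy": "source-code", "severity": "high"},
    {"custodian": "asmith", "policy": "pii", "severity": "low"},
    {"custodian": "jdoe", "policy": "design-docs", "severity": "high"},
]

def hold_candidates(incidents, severity="high"):
    """Collect custodians with at least one incident at the given severity."""
    return sorted({i["custodian"] for i in incidents
                   if i["severity"] == severity})

custodians = hold_candidates(incidents)
```

Starting the legal hold from a concrete custodian list like this, rather than from a hunch, is what makes the downstream preservation and collection effort targeted.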

From an information governance point of view, DLP also relates to the left, proactive side of the Electronic Discovery Reference Model. DLP technology can be repurposed as Data Classification Services (DCS) for automated document retention. The combined DCS/DLP policies work in harmony to accomplish appropriate document retention as well as breach prevention and notification. It follows that similar identifiers exist in both policy consoles, and these indicators enable the technology to make intelligent decisions.

Given this backdrop, it behooves both firm lawyers and corporate counsel to consider getting up to speed on the capabilities of DLP tools. The benefits DLP offers in eDiscovery are too important to be ignored.

Federal Directive Hits Two Birds (RIM and eDiscovery) with One Stone

Thursday, October 18th, 2012

The eagerly awaited Directive from The Office of Management and Budget (OMB) and The National Archives and Records Administration (NARA) was released at the end of August. In an attempt to go behind the scenes, we’ve asked the Project Management Office (PMO) and the Chief Records Officer for the NARA to respond to a few key questions. 

We know that the Presidential Mandate was the impetus for the agency self-assessments that were submitted to NARA. Now that NARA and the OMB have distilled those reports, what are the biggest challenges on a go forward basis for the government regarding record keeping, information governance and eDiscovery?

“In each of those areas, the biggest challenge that can be identified is the rapid emergence and deployment of technology. Technology has changed the way Federal agencies carry out their missions and create the records required to document that activity. It has also changed the dynamics in records management. In the past, agencies would maintain central file rooms where records were stored and managed. Now, with distributed computing networks, records are likely to be in a multitude of electronic formats, on a variety of servers, and exist as multiple copies. Records management practices need to move forward to solve that challenge. If done right, good records management (especially of electronic records) can also be of great help in providing a solid foundation for applying best practices in other areas, including in eDiscovery, FOIA, as well as in all aspects of information governance.”    

What is the biggest action item from the Directive for agencies to take away?

“The Directive creates a framework for records management in the 21st century that emphasizes the primacy of electronic information and directs agencies to begin transforming their current processes to identify and capture electronic records. One milestone is that by 2016, agencies must be managing their email in an electronically accessible format (with tools that make this possible, not printing out emails to paper). Agencies should begin planning for the transition, where appropriate, from paper-based records management processes to those that preserve records in an electronic format.

The Directive also calls on agencies to designate a Senior Agency Official (SAO) for Records Management by November 15, 2012. The SAO is intended to raise the profile of records management in an agency to ensure that each agency commits the resources necessary to carry out the rest of the goals in the Directive. A meeting of SAOs is to be held at the National Archives with the Archivist of the United States convening the meeting by the end of this year. Details about that meeting will be distributed by NARA soon.”

Does the Directive holistically address information governance for the agencies, or is it likely that agencies will continue to deploy different technology even within their own departments?

“In general, as long as agencies are properly managing their records, it does not matter what technologies they are using. However, one of the drivers behind the issuance of the Memorandum and the Directive was identifying ways in which agencies can reduce costs while still meeting all of their records management requirements. The Directive specifies actions (see A3, A4, A5, and B2) in which NARA and agencies can work together to identify effective solutions that can be shared.”

Finally, although FOIA requests have increased and the backlog has decreased, how will litigation and FOIA intersect in the next, say, five years? We know from the retracted decision in NDLON that metadata still remains an issue for the government…are we getting to a point where records created electronically will be able to be produced electronically as a matter of course for FOIA litigation/requests?

“In general, an important feature of the Directive is that the Federal government’s record information – most of which is in electronic format – stays in electronic format. Therefore, all of the inherent benefits will remain as well – i.e., metadata being retained, easier and speedier searches to locate records, and efficiencies in compilation, reproduction, transmission, and reduction in the cost of producing the requested information. This all would be expected to have an impact in improving the ability of federal agencies to respond to FOIA requests by producing records in electronic formats.”

Fun Fact: Is NARA really saving every tweet produced?

“Actually, the Library of Congress is the agency that is preserving Twitter. NARA is interested in only preserving those tweets that a) were made or received in the course of government business and b) appraised to have permanent value. We talked about this on our Records Express blog.”

“We think President Barack Obama said it best when he made the following comment on November 28, 2011: ‘The current federal records management system is based on an outdated approach involving paper and filing cabinets. Today’s action will move the process into the digital age so the American public can have access to clear and accurate information about the decisions and actions of the Federal Government.’”

Paul Wester, Chief Records Officer at the National Archives, adds: “This Directive is very exciting for the Federal Records Management community. In our lifetime none of us has experienced the attention to the challenges that we encounter every day in managing our records management programs like we are seeing now. These are very exciting times to be a records manager in the Federal government. Full implementation of the Directive by the end of this decade will take a lot of hard work, but the government will be better off for doing this and we will be better able to serve the public.”

Special thanks to NARA for the ongoing dialogue that is key to transparent government and the effective practice of eDiscovery, Freedom Of Information Act requests, records management and thought leadership in the government sector. Stay tuned as we continue to cover these crucial issues for the government as they wrestle with important information governance challenges. 

 

Gartner’s 2012 Magic Quadrant for E-Discovery Software Looks to Information Governance as the Future

Monday, June 18th, 2012

Gartner recently released its 2012 Magic Quadrant for E-Discovery Software, its annual report analyzing the state of the electronic discovery industry. Many vendors in the Magic Quadrant (MQ) may initially focus on their position and the juxtaposition of their competitive neighbors along the Visionary – Execution axis. While that is a useful exercise, the MQ also contains a number of additional nuggets, particularly in Gartner’s overview of the market, anticipated rates of consolidation and future market direction.

Context

For those of us who’ve been around the eDiscovery industry since its infancy, it’s gratifying to see it mature. As Gartner concludes, the promise of this industry isn’t off in the future; it’s now:

“E-discovery is now a well-established fact in the legal and judicial worlds. … The growth of the e-discovery market is thus inevitable, as is the acceptance of technological assistance, even in professions with long-standing paper traditions.”

The past wasn’t always so rosy, particularly when the market was dominated by hundreds of service providers that seemed to hold on by maintaining a few key relationships, combined with relatively high margins.

“The market was once characterized by many small providers and some large ones, mostly employed indirectly by law firms, rather than directly by corporations. …  Purchasing decisions frequently reflected long-standing trusted relationships, which meant that even a small book of business was profitable to providers and the effects of customary market forces were muted. Providers were able to subsist on one or two large law firms or corporate clients.”

Consolidation

The Magic Quadrant correctly notes that these “salad days” just weren’t sustainable long term. Gartner sees the pace of consolidation heating up even further, with some players striking it rich and others going home empty-handed.

“We expect that 2012 and 2013 will see many of these providers cease to exist as independent entities for one reason or another — by means of merger or acquisition, or business failure. This is a market in which differentiation is difficult and technology competence, business model rejuvenation or size are now required for survival. … The e-discovery software market is in a phase of high growth, increasing maturity and inevitable consolidation.”

Navigating these treacherous waters isn’t easy for eDiscovery providers, nor is it simple for customers to make purchasing decisions if they’re rightly concerned that the solution they buy today won’t be around tomorrow. Yet, despite the prognostication of an inevitable shakeout (Gartner forecasts that the market will shrink by 25% in the raw number of firms claiming eDiscovery products/services), the analysts are still very bullish about the sector.

“Gartner estimates that the enterprise e-discovery software market came to $1 billion in total software vendor revenue in 2010. The five-year CAGR to 2015 is approximately 16%.”
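To put that growth estimate in concrete terms, here is a quick back-of-the-envelope calculation (ours, not Gartner’s) showing what compounding a 16% CAGR on a $1 billion 2010 base looks like:

```python
# Rough sketch: compounding Gartner's ~16% CAGR estimate from a $1B 2010 base.
# Figures are in billions of dollars and are illustrative only.

def project_revenue(base: float, cagr: float, years: int) -> float:
    """Compound annual growth: base * (1 + cagr) ** years."""
    return base * (1 + cagr) ** years

for n in range(1, 6):
    print(2010 + n, round(project_revenue(1.0, 0.16, n), 2))
```

Under these assumptions the market roughly doubles, to about $2.1 billion by 2015, which squares with Gartner’s bullishness about the sector.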

This certainly means there’s a window of opportunity for certain players – particularly those who help larger players fill out their EDRM suite of offerings, since the best-of-breed era is quickly going by the wayside. Gartner notes that end-to-end functionality is now table stakes in the eDiscovery space.

“We have seen a large upsurge in user requests for full-spectrum EDRM functionality. Whether that functionality will be used initially, or at all, remains an open question. Corporate buyers do seem minded to future-proof their investments in this way, by anticipating what they may wish to do with the software and the vendor in the future.”

Information Governance

Not surprisingly, it’s this “full-spectrum” functionality that most closely aligns with marrying the reactive, right side of the EDRM with the proactive, left side.  In concert, this yin and yang is referred to as information governance, and it’s this notion that’s increasingly driving buying behaviors.

“It is clear from our inquiry service that the desire to bring e-discovery under control by bringing data under control with retention management is a strategy that both legal and IT departments pursue in order to control cost and reduce risks. Sometimes the archiving solution precedes the e-discovery solution, and sometimes it follows it, but Gartner clients that feel the most comfortable with their e-discovery processes and most in control of their data are those that have put archiving systems in place …”

As Gartner looks out five years, the analyst firm anticipates more progress on the information governance front, because the “entire e-discovery industry is founded on a pile of largely redundant, outdated and trivial data.”  At some point this digital landfill is going to burst and organizations are finally realizing that if they don’t act now, it may be too late.

“During the past 10 to 15 years, corporations and individuals have allowed this data to accumulate for the simple reason that it was easy — if not necessarily inexpensive — to do so. … E-discovery has proved to be a huge motivation for companies to rethink their information management policies. The problem of determining what is relevant from a mass of information will not be solved quickly, but with a clear business driver (e-discovery) and an undeniable return on investment (deleting data that is no longer required for legal or business purposes can save millions of dollars in storage costs) there is hope for the future.”

 

The Gartner Magic Quadrant for E-Discovery Software is insightful for a number of reasons, not the least of which is how it portrays the developing maturity of the electronic discovery space. In just a few short years, the niche has sprouted wings, raced to $1B and is seeing massive consolidation. As we enter the next phase of maturation, we’ll likely see the sector morph into a larger information governance play, given customers’ “full-spectrum” functionality requirements and the presence of larger, mainstream software companies. Next on the horizon is the subsuming of eDiscovery into the bigger information governance umbrella, as well as into larger adjacent plays like “enterprise information archiving, enterprise content management, enterprise search and content analytics.” The rapid maturation of the eDiscovery industry will inevitably result in growing pains for vendors and practitioners alike, but in the end we’ll all benefit.

 

About the Magic Quadrant
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner’s “2012 Magic Quadrant for E-Discovery Software” Provides a Useful Roadmap for Legal Technologists

Tuesday, May 29th, 2012

Gartner has just released its 2012 Magic Quadrant for E-Discovery Software, an annual report that analyzes the state of the electronic discovery industry and provides a detailed vendor-by-vendor evaluation. For many, particularly those in IT circles, Gartner is an unwavering north star used to divine software market leaders, in topics ranging from business intelligence platforms to wireless LAN infrastructure. When IT professionals are on the cusp of procuring complex software, they look to analysts like Gartner for quantifiable and objective recommendations – as a way to inform and buttress their own internal decision-making processes.

But for some in the legal technology field (particularly attorneys), looking to Gartner for software analysis can seem a bit foreign. Legal practitioners are often more comfortable with the “good ole days,” when the only navigation aid in the eDiscovery world was provided by the dynamic duo of George Socha and Tom Gelbmann, who (beyond creating the EDRM) were pioneers of the first eDiscovery rankings survey. Albeit somewhat short-lived, their Annual Electronic Discovery[i] Survey ranked the hundreds of eDiscovery providers and bucketed the top-tier players in both software and litigation support categories. The scope of their mission was grand, and they were perhaps ultimately undone by the breadth of their task (they stopped the Survey in 2010), particularly as the eDiscovery landscape continued to mature, fragment and evolve.

Gartner, which has perfected the analysis of emerging software markets, appears to have taken on this challenge with an admittedly more narrow (and likely more achievable) focus. Gartner published its first Magic Quadrant (MQ) for the eDiscovery industry last year, and in the 2012 Magic Quadrant for E-Discovery Software report they’ve evaluated the top 21 electronic discovery software vendors. As with all Gartner MQs, their methodology is rigorous; in order to be included, vendors must meet quantitative requirements in market penetration and customer base and are then evaluated upon criteria for completeness of vision and ability to execute.

By eliminating the legion of service providers and law firms, Gartner has made their mission both more achievable and perhaps (to some) less relevant. When talking to certain law firms and litigation support providers, some seem to treat the Gartner initiative (and subsequent Magic Quadrant) like a map from a land they never plan to visit. But, even if they’re not directly procuring eDiscovery software, the Gartner MQ should still be seen by legal technologists as an invaluable tool to navigate the perils of the often confusing and shifting eDiscovery landscape – particularly with the rash of recent M&A activity.

Beyond the quadrant positions[ii], comprehensive analysis and secular market trends, one of the key underpinnings of the Magic Quadrant is that the ultimate position of a given provider is in many ways an aggregate measurement of overall customer satisfaction. Similar to the net promoter concept (a tool that gauges the loyalty of a firm’s customer relationships simply by asking how likely a customer is to recommend a product/service to a colleague), the Gartner MQ can be looked at as the sum total of all customer experiences.[iii] As such, this usage/satisfaction feedback is relevant even for parties that aren’t purchasing or deploying electronic discovery software per se. Outside counsel, partners, litigation support vendors and other interested parties may all end up interacting with a deployed eDiscovery solution (particularly when such solutions have expanded their reach as end-to-end information governance platforms), and they should want their chosen solution to be used happily and seamlessly in a given enterprise. There’s no shortage of stories about unhappy outside counsel (for example) who complain about being hamstrung by a slow, first-generation eDiscovery solution that ultimately makes their job harder (and riskier).
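For readers unfamiliar with the net promoter concept mentioned above, the underlying arithmetic is simple: the share of promoters (scores of 9–10) minus the share of detractors (scores of 0–6). A minimal sketch, with survey responses invented purely for illustration:

```python
# Minimal Net Promoter Score sketch: promoters (9-10) minus detractors (0-6),
# expressed as a percentage of all responses. Survey scores are invented.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(nps(survey))  # 5 promoters, 2 detractors out of 10 -> 30.0
```

Gartner’s customer-experience weighting is of course far more involved than this, but the intuition is the same: aggregate satisfaction, measured across many customers, drives the result.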

Next, the Gartner MQ is also a good shorthand way to understand more nuanced topics like time to value and total cost of ownership. While related to overall satisfaction, the Magic Quadrant indirectly addresses whether the software does what it says it will (delivering on the promise) in the time frame claimed, since these elements are typically subsumed in the satisfaction metric. This kind of detail surfaces in the numerous interviews Gartner conducts to go behind the scenes, querying usage and overall satisfaction.

While no navigation aid ensures that a traveler won’t get lost, the Gartner Magic Quadrant for E-Discovery Software is a useful map of the electronic discovery software world. And, particularly looking at year-over-year trends, the MQ provides a useful way for legal practitioners (beyond the typical IT users) to get a sense of the electronic discovery market landscape as it evolves and matures. After all, staying on top of the eDiscovery industry has a range of benefits beyond just software procurement.

Please register here to access the Gartner Magic Quadrant for E-Discovery Software.




[i] Note, in the good ole days folks still used two words to describe eDiscovery.

[ii] Gartner has a proprietary matrix that it uses to place the entities into four quadrants: Leaders, Challengers, Visionaries and Niche Players.

[iii] Under the Ability to Execute axis Gartner weighs a number of factors including “Customer Experience: Relationships, products and services or programs that enable clients to succeed with the products evaluated. Specifically, this criterion includes implementation experience, and the ways customers receive technical support or account support. It can also include ancillary tools, the existence and quality of customer support programs, availability of user groups, service-level agreements and so on.”

Courts Increasingly Cognizant of eDiscovery Burdens, Reject “Gotcha” Sanctions Demands

Friday, May 18th, 2012

Courts are becoming increasingly cognizant of the eDiscovery burdens that the information explosion has placed on organizations. Indeed, the cases from 2012 are piling up in which courts have rejected demands that sanctions be imposed for seemingly reasonable information retention practices. The recent case of Grabenstein v. Arrow Electronics (D. Colo. April 23, 2012) is another notable instance of this trend.

In Grabenstein, the court refused to sanction a company for eliminating emails pursuant to a good faith document retention policy. The plaintiff had argued that drastic sanctions (evidentiary, adverse inference, and monetary) should be imposed on the company since relevant emails regarding her alleged disability were not retained, in violation of both its eDiscovery duties and an EEOC regulatory retention obligation. The court disagreed, finding that sanctions were inappropriate because no emails were deleted after the duty to preserve was triggered: “Plaintiff has not provided any evidence that Defendant deleted e-mails after the litigation hold was imposed.”

Furthermore, the court declined to issue sanctions of any kind even though it found that the company deleted emails in violation of its EEOC regulatory retention duty. The court adopted this seemingly incongruous position because the emails were overwritten pursuant to a reasonable document retention policy:

“there is no evidence to show that the e-mails were destroyed in other than the normal course of business pursuant to Defendant’s e-mail retention policy or that Defendant intended to withhold unfavorable information from Plaintiff.”

The Grabenstein case reinforces the principle that reasonable information retention and eDiscovery processes can and often do trump sanctions requests. Just like the defendant in Grabenstein, organizations should develop and follow a retention policy that eliminates data stockpiles before litigation is reasonably anticipated. Grabenstein also demonstrates the value of deploying a timely and comprehensive litigation hold process to ensure that relevant electronically stored information (ESI) is retained once a preservation duty is triggered. These principles are consistent with various other recent cases, including a decision last month in which pharmaceutical giant Pfizer defeated a sanctions motion by relying on its “good faith business procedures” to eliminate legacy materials before a duty to preserve arose.

The Grabenstein holding also spotlights the role that proportionality can play in determining the extent of a party’s preservation duties. The Grabenstein court reasoned that sanctions would be inappropriate since plaintiff managed to obtain the destroyed emails from an alternative source. Without expressly mentioning “proportionality,” the court implicitly drew on Federal Rule of Civil Procedure 26(b)(2)(C) to reach its “no harm, no foul” approach to plaintiff’s sanctions request. Rule 26(b)(2)(C)(i) empowers a court to limit discovery when it is “unreasonably cumulative or duplicative, or can be obtained from some other source that is more convenient, less burdensome, or less expensive.” Given that plaintiff actually had the emails in question and there was no evidence suggesting other ESI had been destroyed, proportionality standards tipped the scales against the sanctions request.

The Grabenstein holding is good news for organizations looking to reduce their eDiscovery costs and burdens. By refusing to accede to a tenuous sanctions motion and by following principles of proportionality, the court sustained reasonableness over “gotcha” eDiscovery tactics. If courts adhere to the Grabenstein mantra that preservation and production should be reasonable and proportional, organizations truly stand a better chance of seeing their litigation costs and burdens reduced accordingly.

First State Court Issues Order Approving the Use of Predictive Coding

Thursday, April 26th, 2012

On Monday, Virginia Circuit Court Judge James H. Chamblin issued what appears to be the first state court Order approving the use of predictive coding technology for eDiscovery. On Tuesday, Law Technology News reported that Judge Chamblin issued the two-page Order in Global Aerospace Inc., et al. v. Landow Aviation, L.P. dba Dulles Jet Center, et al., over Plaintiffs’ objection that traditional manual review would yield more accurate results. The case stems from the collapse of three hangars at the Dulles Jet Center (“DJC”) that occurred during a major snow storm on February 6, 2010. The Order was issued at Defendants’ request after opposing counsel objected to their proposed use of predictive coding technology to “retrieve potentially relevant documents from a massive collection of electronically stored information.”

In Defendants’ Memorandum in Support of their motion, they argue that a first pass manual review of approximately two million documents would cost two million dollars and only locate about sixty percent of all potentially responsive documents. They go on to state that keyword searching might be more cost-effective “but likely would retrieve only twenty percent of the potentially relevant documents.” On the other hand, they claim predictive coding “is capable of locating upwards of seventy-five percent of the potentially relevant documents and can be effectively implemented at a fraction of the cost and in a fraction of the time of linear review and keyword searching.”
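Defendants’ recall claims can be framed as simple arithmetic. Here is a hedged sketch using only the percentages from their brief; the 100,000 relevant-document count is a hypothetical chosen for illustration, not a figure from the case:

```python
# The brief's claimed recall rates: manual review 60%, keyword search 20%,
# predictive coding 75%. The relevant-document count below is hypothetical.

claimed_recall = {
    "manual review": 0.60,
    "keyword search": 0.20,
    "predictive coding": 0.75,
}

relevant_docs = 100_000  # hypothetical population of truly relevant documents

for method, recall in claimed_recall.items():
    located = int(relevant_docs * recall)
    missed = relevant_docs - located
    print(f"{method}: locates ~{located:,}, misses ~{missed:,}")
```

Framed this way, the brief’s argument is that manual review would miss roughly 40% of the relevant documents at full cost, while predictive coding would miss about 25% at a claimed fraction of that cost.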

In their Opposition Brief, Plaintiffs argue that Defendants should produce “all responsive documents located upon a reasonable inquiry,” and “not just the 75%, or less, that the ‘predictive coding’ computer program might select.” They also characterize Defendants’ request to use predictive coding technology instead of manual review as a “radical departure from the standard practice of human review” and point out that Defendants cite no case in which a court compelled a party to accept a document production selected by a “‘predictive coding’ computer program.”

Considering that predictive coding technology is new to eDiscovery and that first-generation tools can be difficult to use, it is not surprising that both parties appear to frame some of their arguments curiously. For example, Plaintiffs either mischaracterize or misunderstand Defendants’ proposed workflow, given their statement that Defendants want a “computer program to make the selections for them” instead of having “human beings look at and select documents.” Importantly, predictive coding tools require human input for a computer program to “predict” document relevance. Additionally, the proposed approach includes an additional human review step prior to production that involves evaluating the computer’s predictions.

On the other hand, some of Defendants’ arguments also seem to stray a bit off course. For example, Defendants seem to unduly minimize the value of using other tools in the litigator’s tool belt, like keyword search or topic grouping, to cull data prior to using potentially more expensive predictive coding technology. To broadly state that keyword searching “likely would retrieve only twenty percent of the potentially relevant documents” seems to ignore two facts. First, keyword search for eDiscovery is not dead. To the contrary, keyword searches can be an effective tool for broadly culling data prior to manual review and for conducting early case assessments. Second, the success of keyword searches and other litigation tools depends as much on the end user as on the technology. In other words, the carpenter is just as important as the hammer.

The Order issued by Judge Chamblin, the current Chief Judge for the 20th Judicial Circuit of Virginia, states that “Defendants shall be allowed to proceed with the use of predictive coding for purposes of the processing and production of electronically stored information.” In a handwritten notation, the Order further provides that the processing and production is to be completed within 120 days, with “processing” to be completed within 60 days and “production to follow as soon as practicable and in no more than 60 days.” The Order does not mention whether the parties are required to agree upon a mutually agreeable protocol, an issue that has plagued the court and the parties in the ongoing Da Silva Moore, et al. v. Publicis Groupe, et al. for months.

Global Aerospace is the third known predictive coding case on record, but appears to present yet another set of unique legal and factual issues. In Da Silva Moore, Judge Andrew Peck of the Southern District of New York rang in the New Year by issuing the first known court order endorsing the use of predictive coding technology.  In that case, the parties agreed to the use of predictive coding technology, but continue to fight like cats and dogs to establish a mutually agreeable protocol.

Similarly, in federal court in the Seventh Circuit, Judge Nan Nolan is tackling the issue of predictive coding technology in Kleen Products, LLC, et al. v. Packaging Corporation of America, et al. In Kleen, Plaintiffs essentially ask that Judge Nolan order Defendants to redo their production even though Defendants have spent thousands of hours reviewing documents, have already produced over a million documents, and their review is over 99 percent complete. The parties have already presented witness testimony in support of their respective positions over the course of two full days, and more testimony may be required before Judge Nolan issues a ruling.

What is interesting about Global Aerospace is that Defendants proactively sought court approval to use predictive coding technology over Plaintiffs’ objections. This scenario is different than Da Silva Moore because the parties in Global Aerospace have not agreed to the use of predictive coding technology. Similarly, it appears that Defendants have not already significantly completed document review and production as they had in Kleen Products. Instead, the Global Aerospace Defendants appear to have sought protection from the court before moving full steam ahead with predictive coding technology and they have received the court’s blessing over Plaintiffs’ objection.

A key issue that the Order does not address is whether the parties will be required to decide on a mutually agreeable protocol before proceeding with the use of predictive coding technology. As stated earlier, the inability to define a mutually agreeable protocol is a key issue that has plagued the court and the parties for months in Da Silva Moore, et al. v. Publicis Groupe, et al. Similarly, in Kleen, the court was faced with issues related to the protocol for using technology tools. Both cases highlight the fact that regardless of which eDiscovery technology tools are selected from the litigator’s tool belt, the tools must be used properly in order for discovery to be fair.

Judge Chamblin left the barn door wide open for Plaintiffs to lodge future objections, perhaps setting the stage for yet another heated predictive coding battle. Importantly, the Judge issued the Order “without prejudice to a receiving party” and notes that parties can object to the “completeness or the contents of the production or the ongoing use of predictive coding technology.” Given the ongoing challenges in Da Silva Moore and Kleen, don’t be surprised if the parties in Global Aerospace face some of the same process-based challenges as their predecessors. Hopefully some of the early challenges related to the use of first-generation predictive coding tools can be overcome as case law continues to develop and as next-generation predictive coding tools become easier to use. Stay tuned as the facts, testimony, and arguments in the Da Silva Moore, Kleen Products, and Global Aerospace cases continue to evolve.