Wednesday, December 3, 2014

For Austin - Google Fiber

My family is in a Google Fiber commercial!

Best Practices of eDiscovery

Wednesday, November 5, 2014

Jason Atchley : Information Technology : Five Software Approaches to Harness the Internet of Things

jason atchley

Five Software Approaches To Harness The Internet Of Things

Research is evidence-based. But how much information is too much? The life sciences industry is facing an onslaught of digital data, from the proliferation of mobile devices and genetic sequencing to new biomarkers and diagnostics. For the life sciences industry to make use of all this input – to really succeed in the new “Internet of Things” – technology companies are searching for a holistic approach to harness this vast data.
As the technology industry evolves to meet the needs of life science, software-as-a-service (SaaS) companies – companies that host their software in a cloud infrastructure – add a variety of tools to support research. Some tools are purchased and others built in-house, and this can create a dilemma. How do you get diverse products to behave as a single, logical unit without tearing everything down and starting again?
The manufacturing industry is a far cry from life sciences: it deals with “things,” not people, and its mechanisms, though complicated, are nothing compared to human bodies. Still, there is a lesson to be learned by looking at how Volkswagen solved the problem of supplying parts to a wide variety of models quickly, reliably and at massive volume.
Volkswagen is one of the biggest automobile manufacturers in the world and owns companies like Audi, Seat and Skoda, each with many different models: cars of all shapes and sizes, from super-minis to hulking people carriers. What do they have in common? They are all based on the VW Golf. No matter their size, shape or purpose, they all share the Golf’s MQB platform.
The MQB is a shared modular platform for transverse-engine, front-wheel-drive cars. Using it, VW can produce a wide variety of vehicles quickly and reliably, on a scale that satisfies its market without the waste of over-production in anticipation of demand.
Drawing a comparison, how can a software platform be developed that delivers the same benefits as VW’s standardized component platform? Starting from a common platform is obvious, but creating one from scratch would be too expensive. VW’s route to a platform was evolutionary, with gradual adoption over time. That is the approach Medidata is adopting with service-oriented architecture (SOA).
At Medidata, our products are being designed to behave in concert with one another, as a platform, while still retaining each product’s unique characteristics and intrinsic technology stack (e.g., Java, .NET, Ruby). This can be achieved by adding services into the existing product mix to handle the activities that individual products share, such as authentication, feature authorization, workflow management, and alerts and notifications.
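To make the service-delegation idea concrete, here is a minimal sketch, not Medidata’s actual implementation: the service names, interfaces and the “reporting” feature are invented for illustration, and the in-memory stubs stand in for what would be networked platform services. It shows how a product in any stack could hand authentication, authorization and notifications off to shared services rather than re-implementing them.

```python
"""Illustrative sketch only: AuthService and NotificationService are
hypothetical shared platform services, not real Medidata APIs."""

from dataclasses import dataclass, field


@dataclass
class AuthService:
    """Shared authentication/authorization service used by every product."""
    # username -> set of features that user may access
    grants: dict = field(default_factory=dict)

    def authorize(self, user: str, feature: str) -> bool:
        return feature in self.grants.get(user, set())


@dataclass
class NotificationService:
    """Shared alerts/notifications service; products simply publish events."""
    sent: list = field(default_factory=list)

    def notify(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))


class TrialProduct:
    """Any individual product (Java, .NET, Ruby, ...) would call the same
    services over HTTP; in-memory stubs keep this sketch self-contained."""

    def __init__(self, auth: AuthService, notifier: NotificationService):
        self.auth = auth
        self.notifier = notifier

    def export_report(self, user: str) -> str:
        # The product delegates the shared concerns instead of owning them.
        if not self.auth.authorize(user, "reporting"):
            raise PermissionError(f"{user} may not run reports")
        self.notifier.notify(user, "Your report is ready")
        return "report-data"


if __name__ == "__main__":
    auth = AuthService(grants={"alice": {"reporting"}})
    product = TrialProduct(auth, NotificationService())
    print(product.export_report("alice"))
```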
By separating out the common product functionality and delegating it to services, we’ve reduced complexity, standardized interchangeability and increased product utility, just as VW did. However, developing services, or standard components, to take on the tasks already handled by innate product functionality only gets you so far.
For the whole to be greater than the sum of its parts, each new service has to outperform its original in-product role. This is achieved by the adoption of new cloud methodologies/approaches:
The 12-factor app: Each service must be compliant with the twelve-factor methodology for building software-as-a-service. One of the tenets of the methodology covers a topic called “process formation”: when a developer creates a service, it should be built so that each constituent process can be duplicated onto another cloud server to take on more work should the load increase. This allows the application to scale horizontally and increase performance to match the level of processing required by the users. In the pre-cloud days, greater performance meant buying a bigger server, which was both expensive and slow to spin up.
Auto-scaling and flexibility: It is one thing to spawn another process to carry an ever-increasing user load, but it’s quite another to control it. Auto-scaling allows a service to scale up automatically when certain pre-defined conditions are met and, more importantly, to scale back down and stop paying for the extra nodes when the user load drops off.
12-factor compliance: Make services more reliable by improving the way they are built, tested and deployed into production, keeping each one as self-contained as possible. That way, when a bug does arise, it can be easily traced and rectified by a group of developers working in unison.
Hypermedia service adoption: Application programming interfaces (APIs) should behave like web pages, where a developer can navigate to and visualize data more easily, allowing them to write better-quality integrations (see the sketch after this list).
Unified reporting strategy: Using SOA, the audit trails from all component modules can be combined into a stateless, homogenized, service-driven data feed used for comprehensive, platform-wide reporting and analytics.
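To illustrate the hypermedia idea from the list above, here is a rough sketch; the endpoint paths and field names are invented for this example and do not reflect any real Medidata API. The response embeds links the client can follow, much as a browser follows hyperlinks, instead of forcing the integration to hard-code URLs.

```python
import json


def study_resource(study_id: str, page: int = 1) -> dict:
    """Build a hypermedia-style response: the data plus '_links' telling
    the client where it can navigate next."""
    return {
        "id": study_id,
        "status": "active",
        "_links": {
            "self": {"href": f"/studies/{study_id}?page={page}"},
            "next": {"href": f"/studies/{study_id}?page={page + 1}"},
            "sites": {"href": f"/studies/{study_id}/sites"},
            "audit-trail": {"href": f"/studies/{study_id}/audit"},
        },
    }


if __name__ == "__main__":
    # An integration follows '_links' rather than constructing URLs itself,
    # so the API can evolve without breaking every client.
    print(json.dumps(study_resource("st-001"), indent=2))
```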
In short, to meet the ever-increasing demands for data from regulatory authorities, software-as-a-service providers have to make relatively diverse products work together closely. At Medidata, adopting a common component approach, as VW has done, gives us a flexible platform that is reliable and robust. And using a cloud-based, service-oriented architecture allows us to scale up and down. That helps our customers run complex clinical trials cost-effectively.
[A version of this article appeared as “A Platform In The Cloud” in Pharma Technology Focus.]


Thursday, October 30, 2014

Jason Atchley : Legal Technology : New Discovery Rules to Rein in Litigation Expenses

jason atchley

New Discovery Rules to Rein in Litigation Expenses

Creighton Magid, Corporate Counsel

Ask U.S. businesses about the country’s legal system, and the primary complaint almost always involves cost. Not far behind is the lament that the high cost of litigation forces companies to offer generous settlement payments, even when the merits of the case suggest that the case should be taken to trial or settled for a much smaller amount. Although lawsuits are likely to remain expensive, the federal judiciary has approved new rules that represent an important assault on runaway costs.
Most of the cost involved in a typical business lawsuit is incurred in pretrial discovery. Although discovery has long been expensive, costs exploded with the introduction—and now near-ubiquity—of electronic data, including emails and databases. In larger cases, the cost of document discovery can easily reach into the millions of dollars per lawsuit. Perhaps more important, because the court rules place little restraint on the tendency of lawyers to search through as much data as possible in the hope of finding something useful, the process is not only expensive but inefficient. One survey of large lawsuits found that for every 1,000 pages of documentation produced in discovery, only one page became an exhibit at trial.
This treasure hunt for documents imposes costs on companies in another respect: The current litigation system requires companies to incur significant costs to ensure that documents and data—primarily in electronic form—are preserved for potential litigation. The reason is that, with increasing frequency, the outcome of lawsuits is determined not so much by whether a contract was broken or whether a fraud took place as by how well the litigants preserved electronic data. Earlier this year, for example, a federal judge in Louisiana instructed a jury that because a pharmaceutical company had failed to preserve certain records—even though the records may not have been useful in the lawsuit—the jury was “free to infer those documents and files would have been helpful to the plaintiffs or detrimental to” the pharmaceutical company. The jury went on to slap the pharmaceutical company with a $6 billion punitive damages verdict.
The risk of such an instruction drives many companies to spend huge sums on document preservation and storage that would otherwise be unnecessary for their business operations. A report issued earlier this year by professor William Hubbard of the University of Chicago Law School pegged the “fixed” cost of implementing hardware and software systems to preserve electronic data to be $2.5 million per year for large companies, and the additional, lawsuit-specific costs of preserving data to range from $12,000 per year for small companies to nearly $39 million per year for large companies.
On September 16, the Judicial Conference of the United States—a group of 26 federal judges that serves as the policy-making body for the federal courts—announced several proposed amendments to the rules that govern discovery and trials in the federal courts that address the problem of discovery cost in several significant ways. First, the new rules emphasize that discovery is to be “proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.” The explanatory notes accompanying the proposed rules make it plain that the intent of the rules is to encourage active judicial oversight of the discovery process to ensure that discovery is not excessive, redundant or more expensive than necessary.
Second, the proposed rules give judges more explicit authority to ask the party requesting documents and information to share in the costs of locating and producing such documents and data. Whereas the cost of locating, processing, reviewing and producing information—including electronically stored information—today is almost always borne by the producing party, the new rules portend greater judicial willingness to condition approval of expensive document and data requests on the requesting party’s willingness to pick up the tab, particularly when the importance of the requested data is not obvious or when the likelihood of finding useful information is low relative to the effort and cost of searching for it.
Third, the new rules place explicit limitations on the circumstances under which courts may mete out the most severe sanctions for failure to preserve electronically stored information. At present, some federal courts take the position that only intentional loss or destruction of documents may result in an “adverse inference instruction”—that is, the judge telling the jury that a party has failed to preserve data, and, as a result, the jury may (or even must) presume that the unavailable data or documents are unfavorable to that party. Those courts similarly require willfulness before imposing even more draconian sanctions such as dismissal. Other federal courts are of the view that even inadvertent failures to preserve data (typically electronic data) may give rise to such sanctions.
The new rules make it plain that judges cannot impose any sanctions without first determining that the loss of data or documents prejudiced the opposing party, and that any sanctions may be “no greater than necessary to cure the prejudice.” Moreover, the proposed new rules are explicit that the most severe sanctions, such as adverse inference instructions or dismissal, may be imposed “only upon finding that the party [that failed to preserve data] acted with the intent to deprive another party of the information’s use in the litigation.”
The new rules will help bring clarity and uniformity to the imposition of sanctions for spoliation and should limit the imposition of the most severe sanctions (as well as the expensive motion practice concerning such sanctions). Further, the new rules will likely reduce the cost of corporate document preservation, as companies will no longer have to over-preserve electronically stored data and information as protection against the possibility that merely inadvertent loss of such data and information could be grounds for the most serious sanctions (and billion-dollar punitive damages verdicts).
The new rules will not be a panacea, however. Whether spoliation occurred “with the intent to deprive another party of the information’s use in the litigation” still will be decided by district court judges. In the Louisiana pharmaceutical case, for example, the judge determined that the destruction of documents before the pharmaceutical company anticipated litigation over the particular dispute at issue constituted intentional spoliation, because a litigation hold involving different and previously resolved claims had never been withdrawn (imposing, in the judge’s view, an ongoing preservation obligation). Still, the new rules’ unmistakable directive should give federal judges pause before imposing the more severe types of spoliation sanctions.
The proposed new rules are now before the U.S. Supreme Court, which is likely to approve them. Absent (unlikely) action by Congress to reject or modify the rules, the new rules will go into effect on December 1, 2015. Although that is a year away, the clear statements of the Judicial Conference concerning the need to reduce litigation costs and to cabin the circumstances under which the most severe sanctions may be imposed for failures to preserve (primarily electronic) data and information can be expected to begin influencing the decisions of judges and lawyers much sooner.
Creighton Magid is head of Dorsey & Whitney’s Washington, D.C., office. He’s a member of the electronic discovery practice group and co-chair of the firm-wide products liability practice. He focuses on technology, commercial and products liability litigation.


Read more: http://www.corpcounsel.com/id=1202674945208/New-Discovery-Rules-to-Rein-in-Litigation-Expenses#ixzz3HdUe4RtQ



Wednesday, October 29, 2014

Jason Atchley : Data Security : Hackers Want Your Healthcare Data

jason atchley

Hackers Want Your Healthcare Data

Medical data is more valuable to hackers than credit cards, says Baker Hostetler.
Marlisse Silver Sweeney, Law Technology News

Social Security number or credit card information? According to partner Lynn Sessions and associate Nita Garg of Baker Hostetler, medical information is the most valuable to hackers.
“Hackers have increased their focused attacks on the U.S. healthcare industry,” they said, relying on information from the Ponemon Institute citing a 20 percent increase in healthcare organizations reporting cyberattacks between 2009 and 2013. According to security experts, this increase stems from weak institutional security coupled with profitability of health records. “Unlike credit cards, which may be quickly canceled once fraudulent activity is detected, it often takes months or years before patients or providers discover the theft of medical information,” said Sessions and Garg.
We reported recently that the majority of health care breaches stem from the health care providers themselves. Between 2011 and 2012, protected health information was involved in the majority of breaches in the health care industry, whether through theft, loss, unauthorized access or hacking incidents. The authors noted in their post that many healthcare companies rely on old computer systems. This, along with the switch from paper medical records to electronic ones, adds electronic fuel to the online fire. On the black market, health information is 10 to 20 times more valuable than a credit card number, they said. This information includes names, birthdays, policy numbers, diagnosis codes and billing information, they said.
Attorney Marlisse Silver Sweeney is a freelance writer based in Vancouver. Twitter: @MarlisseSS.


Read more: http://www.lawtechnologynews.com/id=1202674717300/Hackers-Want-Your-Healthcare-Data-#ixzz3HXpXbPl4



Tuesday, October 28, 2014

Jason Atchley : Information Technology : Why CIOs Must Participate in Budget Planning

jason atchley

Why CIOs Must Participate in Budget Planning

CIOs should learn and speak the board's jargon to effectively participate in budget planning.
Mark Gerlach, Law Technology News

CIOs should have a prominent voice in every organization's budget talks—that was the theme of a recent webinar from Gartner Inc. “Every Budget is an IT Budget” featured Michael Smith, vice president and "distinguished analyst" at the Stamford-based research company.
“Every enterprise, every department, every employee could not do their daily tasks without [IT],” said Smith during his 56-minute solo presentation. Last month, Gartner published two papers on the topic: “How CIOs Influence Decisions When Every Budget Is an IT Budget” and “Every Budget Is an IT Budget.”
The types of technologies that fall under the umbrella of IT are expanding, said Smith, to include digital marketing technology; operational technology; and information and infrastructure technology (e.g., information security, data integration, data quality, information sourcing and life cycle management), as well as the Internet of things. Companies spend an average of 2.3 percent of total revenue on their IT operational budgets, said Smith, citing Gartner data. That figure, he said, is likely to double this year due to an increase in marketing technology (e.g., hardware, software and analytics).
Smith's premises were amplified by three members of the legal technology community. 
In law firm environments, “it is critical for CIOs to have a seat at the table for budget processes,” said Scott Christensen, director of technology and information security at Edwards Wildman Palmer, an international law firm with 600 attorneys in 16 offices across Asia, Europe and the U.S. “It comes down to the importance of strategic planning for the firm, and how technology can be an enabler to accomplish the strategic goals of the firm,” he said. If IT is not involved in budget decision-making, it would be “relegated to a maintenance function and unable to meet the important business goals of a modern law firm,” he said.
“Without a CIO/IT director present to articulate the current state of the technology environment, it becomes difficult for a board of directors to evaluate how much investment might be needed,” said Ted Theodoropoulos, president of Charlotte-based Acrowire, a legal technology consultancy. “It is paramount that someone from IT leadership participate in budgetary discussions in order to inform decision makers on when, where, why and how much investment should be made into the firm's technology environment.”
IT professionals at the budget table should use appropriate terminology that all participants can understand, he suggested. (The underlying message for IT: learn leadership's jargon.) For example, expect that technology programs will be discussed in the context of return on investment, Theodoropoulos said. Another term to use is "technical debt," which refers to the consequences of poor system design and can be used to highlight infrastructure gaps.
“There is little doubt that technology is becoming more strategic to the way in which law firms operate and IT professionals will become more relevant in budget discussions as that trend continues to accelerate,” Theodoropoulos said.
A well-designed IT budget will help law firms run more effectively, said Susan Keno, vice president at Keno Kozie Associates. The Chicago-based company provides IT design and support services to law firms. "IT is increasingly important given the current mantra of doing more with less," she said. "Critical to effective technology plans is how the required investments will be prioritized and funded."
"The IT budget can and should be used as a planning document that will help the entire firm prepare for future technology needs and communicate to its partners the priorities that those needs will support," said Keno. "In short, it ensures that scarce resources are appropriately aligned with the strategic vision of the firm. A poor IT budget can create a disconnect between the IT department and all other departments, leading to failed technology implementations—and to technology purchases that might not be a best fit for the firm,” she said.
"However, law firms can implement many steps and processes that improve IT budgeting. Most of these have less to do with the actual budget development and more to do with how decisions are made in the organization regarding IT investments," said Keno. "The first step in developing a sound IT budgeting process is to develop a governance structure for IT," she said. "IT governance specifies the decision rights and accountability framework to encourage desirable behavior in the use of IT."
"Also, incorporating this structure into the IT budgeting process can ensure that future IT investments are based on performance of past projects, help manage risks, optimize resources, and foster the exploration of possible benefits of technology investments," she said.
Mark Gerlach is a staff reporter at LTN. @LTNMarkGerlach


Read more: http://www.lawtechnologynews.com/id=1202674717305/Why-CIOs-Must-Participate-in-Budget-Planning-#ixzz3HTL7DWTx




Friday, October 10, 2014

Jason Atchley : Information Governance : Managing Time, Costs, and Expectations in eDiscovery

jason atchley

Managing Time, Costs and Expectations in E-discovery

Corporate Counsel

Litigation is a complex and costly process. Motions practice, discovery and trial comprise an intricate dance, but it is surprising that the discovery phase of litigation frequently ends up being the most complicated and expensive part of the process. It often steals the show. Discovery—and e-discovery in particular—has a number of moving parts, but from a 10,000-foot view it should be fairly straightforward. Discovery is about fact-gathering in the form of interrogatories, document exchange and depositions.
Document exchange, however, ends up being much more than just gathering and sorting through documents in order to identify those that are responsive and not privileged. In fact, it ends up being unnecessarily complicated. The mechanics of document exchange, oddly, are not the most complicated part of the process. Managing time, money and expectations throughout document exchange are the real tricks in the process.
What if we were able to better manage time, money and expectations? Once those concerns are eliminated (particularly cost), the discovery phase of litigation will no longer steal the show, and we can make litigation a much more predictable process.

Managing Time

Discovery (and document exchange in particular) is most intensive in the first six months of litigation. During this time decisions about custodians, data sets and review are being formed. Much of this time is spent figuring out how to minimize the number of custodians, reduce the overall data sets and minimize the number of documents that need to be reviewed. Why? Because historically, the cost of review was directly proportional to the size of the review . . . and the size of review was a result of the number of documents . . . and the number of documents was a result of the number of custodians. With earlier technology, controlling document sizes and custodian counts saved money. But trying to engineer the most effective size of review consumes a great deal of time.
The less we try to engineer the size of the review and focus on simply getting started with the review at hand, the more time we will save. And now a number of new technologies (discussed below) greatly minimize the time spent on the review process and allow for more effective discovery.

Managing Cost

Document review has long been a costly process. In the past, teams of attorneys (hundreds at times) would pore over documents to determine relevancy and privilege. In that context, it was obviously important to reduce the number of documents to be reviewed in order to reduce attorney costs. As a result, a tremendous amount of technology has been used to reduce data sizes and increase the speed of review. But in the past, all of this data-reduction technology came at a cost, which ended up repeating the negative cycle of increased costs.
Modern e-discovery software no longer needs to repeat that cycle—and here’s why. There are many service providers that will help take all of your custodians’ data, across disparate sources, and immediately reduce data sets by removing explicitly nonresponsive data. This can be accomplished through deduplication and noise-reduction technology that can remove all nonrelevant data at the time of ingestion.
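As a simplified illustration of that ingestion-time reduction (a sketch only, not any particular provider's tool), the snippet below hashes lightly normalized document text so that exact duplicates collected from different custodians are kept only once.

```python
import hashlib


def dedupe(documents: list[dict]) -> list[dict]:
    """Keep one copy of each document whose normalized text is identical,
    regardless of which custodian it came from."""
    seen: set[str] = set()
    unique = []
    for doc in documents:
        # Collapse whitespace and case so trivially different copies match.
        normalized = " ".join(doc["text"].lower().split())
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


if __name__ == "__main__":
    corpus = [
        {"custodian": "jsmith", "text": "Q3 forecast attached."},
        {"custodian": "alee",   "text": "Q3  forecast attached."},  # same message, re-sent
        {"custodian": "jsmith", "text": "Updated contract draft."},
    ]
    print(len(dedupe(corpus)))  # -> 2
```

Commercial platforms typically hash at the file or email-family level and add noise filters (system files, signatures, boilerplate), but the principle is the same: remove exact repeats before anyone has to read them.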
And the cost? With the proper negotiation, this process can be accomplished at a cost less than the fees associated with paying counsel to discuss the best way to minimize custodians and overall data size. In other words, reducing the set of data from a veritable mountain down to a pile of potentially relevant documents now can be accomplished at a wash.
That pile of information still needs to be reviewed, however, and even with substantial reduction in size there still potentially will be hundreds of thousands of documents to review. This is where further technology comes into the picture. Technology-assisted review (TAR, aka “predictive coding”) can enable a small team of attorneys to review the remaining pile of documents in a matter of weeks, not months. And while it used to cost a substantial fee to use the technology, in today’s e-discovery landscape TAR can be applied at minimal-to-no additional cost beyond the cost of hosting the data. The result is a streamlined document review process that enables litigators to collect more data while expending less time and money to review the data at hand. All you need to do is find technology providers that will accept the challenge and provide the technology at a lower cost.
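The statistical core of TAR can be sketched with a toy example; this uses the open-source scikit-learn library rather than any commercial review platform, and the documents and coding decisions are invented. A model trained on a small attorney-coded seed set scores the remaining documents so reviewers can work from most to least likely responsive.

```python
# Toy illustration of technology-assisted review: train on attorney-coded
# seed documents, then rank unreviewed documents by predicted relevance.
# Requires scikit-learn; the documents and labels below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "pricing agreement with distributor, see attached term sheet",
    "lunch menu for the office holiday party",
    "amended distributor contract and rebate schedule",
    "fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive (attorney-coded)

unreviewed = [
    "draft rebate terms for the new distributor agreement",
    "parking garage closed next tuesday",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed documents and present the most likely responsive first.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```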

Managing Expectations

This new discovery workflow will free up a tremendous amount of time spent managing expectations. No longer will it be necessary to determine priority custodians and the most relevant data sources. In the past, much energy was expended managing those two aspects of discovery alone. Today, you simply need to focus on the merits of the litigation and use the technology to work through the data at play in the matter.
With the introduction of such technologies, however, there are new expectations to manage. For example, it is important to make sure that all data relevant to a matter is being collected and that the right TAR process is being followed. But concerns about time and cost will no longer lead the conversation.

Conclusion

Discovery, with its extreme costs and extended timelines, used to steal the show in litigation. Using modern technologies, however, we can move the focus from managing data sizes and document counts to making sure we are concerned with the merits of the case, efficacy of motion practice and the strategy of the overall review.
Let the technology do the heavy lifting. Technology will not resolve every issue that we have in litigation—and new issues may arise—but overall there are enormous economies of scale to gain by applying the right technology. Thankfully, the industry is maturing at a rapid rate, and service providers are rising to meet the challenge to help litigators focus less on time, money and expectations—and more on the case itself.


Read more: http://www.corpcounsel.com/id=1202672877532/Managing-Time-Costs-and-Expectations-in-Ediscovery#ixzz3FluQ2b6U