Tuesday, March 03, 2009

Drake IP Scholars, Panel 6

Panel 6: Information Technology and Patents

Robert A. Heverly, Michigan State University College of Law, "A Spirited Defense: Duty and Liability in Denial of Service Attacks"

When you go someplace online and it's not there, you don't know why; it's not like finding a shop boarded up, or destroyed by a hurricane. Denial of service attacks can prevent people from reaching where they're going in a way with little analogy to the offline world. Botnets: hijacked computers that aren't protected against outside intrusion, used to carry out DoS attacks.

Battle of the metaphors: is having an infected/insecure computer like having a car with a bad tire that you drive on the highway, endangering everyone around you? If you owe a duty to others to avoid driving the car, do you owe a duty to others to avoid having an unsafe computer? Or is it like having an OK car that someone secretly climbs under and uses as a platform for throwing rocks, so that you'd have no duty to prevent it? (Seems to me this is about foreseeability: you may have a duty to foreseeable victims of intentional torts carried out by people you've let onto your property.)

In general, courts won’t imply a duty to the world. Duty usually exists because of some relationship between people, and here the only relationship is that you’re both on the information highway. There’s no general duty to stop third parties from doing bad things to other people. Also, purely economic injury is generally not recoverable in tort, so you’d need some server damage.

Joint and several liability: some courts would be troubled by holding one botnet computer owner liable for the full damage caused by a DoS attack.

Added together, it seems unlikely that a person whose computer had been hijacked would be liable.

How do we preserve a robust internet, then? Could use no-fault. Encourage end-users to protect themselves.


Jon Garon, Hamline University School of Law, "Content, Control and Socially Networked Media"

This is part of a multipart project (see also Reintermediation), focused on following the money and letting creators choose which economy to enter. But this part’s about privacy.

Four trends: the harnessing of social trends/data for predictions about public behavior; the growing role of audiences as curators; the expansion of behavioral advertising; the development of tools to search the “deep web” in databases etc. Slides available here.

People want to be early experiencers—opening night tickets, etc. This is important social behavior. Combined with passive data collection: lots of info out there. Google can anticipate flu outbreaks merely by watching search terms, just as well as the CDC can. Combined with curators who want to have all of some type of material archived on their hard drives. Quid pro quo: owners want permission to mine the data in return for the stuff. Watching isn’t enough: Hulu.com allowed people to embed Obama’s address on their own websites, as if there weren’t already enough places to see it.

But the revenue models aren’t yet adequate. YouTube doesn’t make money; Hulu makes money, but nowhere near the money of broadcast. Tracking: to justify ad fees, companies are increasingly adding data about who watches when and how. Behavioral advertising: the internet now knows where you are and will increasingly become contextual and more specific. The FTC has behavioral advertising guidelines, but they’re voluntary.

Meanwhile, Amazon is consolidating content distribution, cutting out the middlemen—publishers, other ways of getting content (Kindle). Recommends books for us; can it start to look at our Netflix accounts to see what else to recommend? The scary model: relying on what people do, not what they say about themselves. The software can know more about you than you know about yourself.

FTC guidelines must become law, not voluntary. Need more explicit consent, restraints on data-sharing unless there’s explicit opt-in based on meaningful information. (Not clear to me that disclosure ever, ever works. Google’s got great disclosure when you install, for example, Google Desktop, but who reads it?)


Linda M. Beale, Wayne State University Law School, "Is Bilski Likely the Final Answer on Tax Strategy Patents?"

Beale is a tax lawyer, not an IP lawyer.

Why tax strategy patents are bad: the innovation fallacy of patent law. Tax doesn't need incentives for innovation, at least not innovation that helps noncompliance. The IRS devotes considerable effort to closing the tax gap and shutting down innovation in tax practices. The government shouldn't spank tax practitioners for innovating and then hand out the candy of patents.

Institutionally, tax strategy patents allow monopolization of areas of the law, conflicting with congressional control over policy levers of tax. If the stimulus package intends to reduce tax burdens on certain people, tax strategy patents can divert the benefit for other people.

Bilski requires a machine (not a general-purpose computer) or a physical transformation: legal relations aren't physical objects or substances, so maybe Bilski protects us from tax strategy patents. One dissent, though, argues that the majority fails to grapple with the information economy.


Efthimios Parasidis, Hofstra University School of Law, "Stop that Thought!: The Neuroscience Paradigm as Evidence of an Inconsistent Doctrine of Patent Eligibility"

Patents for neuroscience inventions demonstrate that Bilski leads to inconsistent results. Classen v. Biogen: a claim on classifying subjects into an experimental group versus a control group, similar to the claim in Metabolite v. Labcorp, which was about correlating individual blood test results to figure out whether there was a particular deficiency. Bilski seemed to get rid of mental process patents like the one in Metabolite. But what about neuropatents?

Brain fingerprinting: a method for truth detection that measures EEG signals from the brain. Compare one person's brain scan with normalized findings on how memory is stored to determine truth. Such evidence has been admitted in two state courts here, and there has been one conviction in India based on evidence from such a device.

Two patents have been issued on this. The inventive concept is the significance of the correlation between truth and activation in certain regions of the brain, which entails observation of a natural law. Perhaps using the EEG counts as a machine: it takes electrical currents and turns them into a digital signal that can be reproduced on a computer. But under Bilski the machine/transformation has to be an integral component of the claim, and here the integral component is the correlation; the machine is merely a means of observing the correlation.

Better way: ask whether a claim preempts a fundamental principle, law of nature, or natural phenomenon. Neuroscience patents were granted based on correlating brain results with truth. To him, that’s preempting a natural phenomenon and should be unpatentable. Similar to the Metabolite claim.

Prometheus v. Mayo, an S.D. Cal. case decided before Bilski, is now before the Federal Circuit; it's a Metabolite-like case that applied the preemption test.

Steam: the person who discovers/defines steam can’t patent it, but can patent a method of applying it to a useful end. That’s the right distinction.

Andrew W. Torrance, University of Kansas School of Law, "Patent Expertise and the Regress of Useful Arts"

The Patent Game: IRB-approved trials of human users, testing patent, patent/open source, and pure commons regimes. Users possessed specific expertise in patent law and open innovation.

No statistical difference in innovation (unique inventions) among the regimes. Productivity (total number of inventions per unit of time): the pure patent regime generates the least productivity; patent/open source generates an intermediate amount; and pure commons generates the most. Social result (dollars in the system): even more extreme than the productivity results, with lots more under pure commons.

Possible explanations: he did a bad job creating the Patent Game, or choosing participants. Or, maybe, patents don't spur more innovation.


Discussants: Daniel R. Cahoy, Smeal College of Business, Penn State University

For Heverly: What are the nonlegal incentives that still exist against users letting systems get infected? Does shaming work?

For Garon: Students are getting more savvy about privacy, e.g. on Facebook. Are we overly concerned about this as lawyers? Users can understand and react.

For Parasidis: Are these patents good for society, though?

Lars S. Smith, Louis D. Brandeis School of Law, University of Louisville

For Heverly: Think also about nuisance, not just negligence.

For Garon: Do users accept that privacy is dead? (I don’t see why this or Cahoy’s questions are challenges to the project. One huge problem is that data mining generates externalities: the kind of culture we will all live in if behavioral targeting succeeds the way it’s supposed to may not be the kind of culture that we want to live in. This is the typical reason we look to regulation to solve problems that individual choices can’t.)

For Beale: Maybe we should embrace uncertainty. Bilski gets rid of the one thing we still do well in America: business method design! Apple designs in US but makes in China. What’s wrong with designing ways to reduce taxes? Isn’t tax avoidance legitimate?

Greg R. Vetter, University of Houston Law Center

For Torrance: What does open source mean in your study? Open source has different possible meanings, from attribution to other restrictions, and that may make a difference.

Robert Bohrer: Patents are really important for drug development. It may be that many areas sustain creativity without patents, but not all, especially drugs.

Beale: She doesn’t see the positives of tax patents. Almost all the issued ones are obvious to anyone who knows tax; the problem is that patent examiners don’t know tax, and can’t learn it in a couple of hours. Beyond that, customized financial engineering might not be obvious—the dangerous ones aren’t obvious. There ought to be a principle of morality.
