October 13, 2012
GAO Report Looks At Privacy In Wireless Location Information
The Government Accountability Office has released a report called Mobile Device Location Data: Additional Federal Actions Could Help Protect Consumer Privacy. Here's a bit from the summary of what the GAO found:
Using several methods of varying precision, mobile industry companies collect location data and use or share that data to provide users with location-based services, offer improved services, and increase revenue through targeted advertising. Location-based services provide consumers access to applications such as real-time navigation aids, access to free or reduced-cost mobile applications, and faster response from emergency services, among other potential benefits. However, the collection and sharing of location data also pose privacy risks. Specifically, privacy advocates said that consumers: (1) are generally unaware of how their location data are shared with and used by third parties; (2) could be subject to increased surveillance when location data are shared with law enforcement; and (3) could be at higher risk of identity theft or threats to personal safety when companies retain location data for long periods or share data with third parties that do not adequately protect them.
Industry associations and privacy advocates have developed recommended practices for companies to protect consumers’ privacy while using mobile location data, but companies have not consistently implemented such practices. Recommended practices include clearly disclosing to consumers that a company is collecting location data and how it will use them, as well as identifying third parties that companies share location data with and the reasons for doing so. Companies GAO examined disclosed in their privacy policies that the companies were collecting consumers’ location data, but did not clearly state how the companies were using these data or what third parties they may share them with. For example, some companies’ policies stated they collected location data and listed uses for personal information, but did not state clearly whether companies considered location to be personal information. Furthermore, although policies stated that companies shared location data with third parties, they were sometimes vague about which types of companies these were and why they were sharing the data. Lacking clear information, consumers faced with making a decision about whether to allow companies to collect, use, and share data on their location would be unable to effectively judge whether the uses of their location data might violate their privacy.
CALI Lessons for Paralegal Students and Programs
From Sarah Glassmeyer's CALI Spotlight Blog post:
For paralegal students and programs, we just made it a little easier to find lessons that would work for your courses. The CALI for Paralegal Students and Programs page on our website has links to several model paralegal course syllabi. Within those syllabi are links to CALI lessons that would work for those courses.
October 12, 2012
Strange but True: One very heated and arguably prejudicial exchange between a judge and public defender in a murder case
"[R]etired Jefferson Circuit Judge Martin McDonald repeatedly criticizes assistant public advocate David Barron and cuts him off as he tries to offer a few thoughts in response to claims that the attorney is "unethical" and a "backseat driver" who is "making a mountain out of a molehill" rather than a "real" trial lawyer," wrote Martha Neil in See the Video: Angry Judge Blasts ‘Backseat Driver’ Appellate Counsel in High-Profile Murder (ABAJ News). For the Courier-Journal, Andrew Wolfson covers the story and the judge's history on the bench in detail at Judge threatens to 'strangle' attorney in 'ridiculous' case. [JH]
Friday Fun: Engineering the Legal Info Factory?
October 11, 2012
Authors Guild Loses Its Suit Against HathiTrust
The lawsuit filed by the Authors Guild, foreign associations, and individual authors against the HathiTrust and university defendants came to an unceremonious close yesterday. I’ll start with the conclusion of Judge Harold Baer’s opinion:
I have considered the parties’ remaining arguments and find them to be without merit. For the foregoing reasons, Plaintiffs’ motions are DENIED. Defendants’ motion for judgment on the pleadings is GRANTED in part and DENIED in part: the Associational Plaintiffs have Article III standing; the U.S. Associational Plaintiffs lack statutory standing; and Plaintiffs’ OWP claims are not ripe. Defendants’ and Defendant Intervenors’ motions for summary judgment are GRANTED: their participation in the MDP and present application of the HDL are protected under fair use. The two unopposed motions for leave to file briefs as amici are GRANTED. The Clerk of Court is instructed to close the seven open motions, close the case, and remove it from my docket.
How’s that for finality. The case came about through the agreement several university libraries made with Google to scan their collections as part of the Google Book Project. The defendant universities were free to make use of these scans under the agreement. This birthed the HathiTrust Digital Library. The parallel litigation against Google continues, though the plaintiff publishers recently dropped out, leaving the Authors Guild as the principal plaintiff in that case. I note that the publishers declined to join this suit. The case against the HathiTrust was designed to secure a ruling that the scanning and possible distribution of library collections was not fair use. As we can see, that goal failed. Several issues presented themselves in this case.
To be clear about the abbreviations in the quoted paragraph: MDP means Mass Digitization Project, OWP means Orphan Works Project, and HDL means HathiTrust Digital Library. Judge Baer addressed several issues, one of which was whether the trade associations had standing to file suit on behalf of their members. The answer to that question was essentially yes, via the Constitution rather than the Copyright Act itself. While important to the litigation, it is not the central issue for the broader library community.
The Orphan Works Project was intended to make available scanned titles that were in copyright but whose copyright holders could not be identified. One of the individual author plaintiffs was misidentified as not findable by the University of Michigan. The mistake caused the University to halt the program and re-evaluate the process it used to designate a work as an orphan. The Authors Guild sought a holding that the OWP violated the copyright law. The University argued, and Judge Baer agreed, that such a ruling would be speculative since there was no current program in place and no replacement in the offing. This was resolved by the ruling that the issue was not ripe for adjudication.
The first real issue was whether Section 108 of the Copyright Act precluded the library from utilizing fair use as a defense to a prima facie claim of copyright infringement. The section defines what libraries can do with the materials in their collections. Creating archival copies is allowed when an item is damaged or lost. The Guild essentially argued that the scanning project exceeded the allowable uses under Section 108. Section 108(f)(4) states, however:
(f) Nothing in this section—
* * * *
(4) in any way affects the right of fair use as provided by section 107, or any contractual obligations assumed at any time by the library or archives when it obtained a copy or phonorecord of a work in its collections.
Judge Baer said the argument failed on the clear statutory text. The next question was whether the activity of the library qualified as fair use. It did, as Judge Baer balanced the four non-exclusive factors listed in Section 107. I won’t go through the detailed analysis. One of Judge Baer’s main reasons was the transformative nature of the scans. They did not act as an alternative to the book in the market. The transformation came in the digital index created out of the scans. Scholars and researchers could use the index to search for word occurrences and references to pages, with an indication of how many hits appeared on a page. The Guild argued that it contemplated the creation of a licensed service that might offer the same capability. Judge Baer, citing case law, said the courts do not preclude an allowable alternative because something may happen in the future.
One of the other major factors in Judge Baer’s decision is the relevance of the Americans With Disabilities Act to the case. The University used the digital collection to make works more easily available to blind and visually impaired students. While the scanning project was not mandated by the Act, the use of the materials once scanned was authorized. I won’t go into the arguments raised by the Guild, but I will offer part of Judge Baer’s response from footnote 25:
Plaintiffs’ suggestion at oral argument that print-disabled individuals could have “asked permission” of all the rights holders whose works comprise the HDL borders on ridiculous. Aug. 6, 2012 Tr. 11:13–12:8.
There were other parts of the opinion where the Guild’s arguments received similar (and deserved) treatment.
So what is next? I assume the Guild will appeal to the Second Circuit. There is no statement on the Guild’s web site (at least as of this writing) on the outcome. I would suspect this ruling will bolster Google’s defense against the Guild in the parallel litigation in Judge Chin’s courtroom. The case is not quite the same in that Google claims indexing and providing snippets is fair use. The use made by the HathiTrust defendants does not offer snippets or otherwise make the underlying scans available. Assuming this case holds up on appeal, Google would be safe to do the same if it came to that.
Further analysis is available from Kevin Smith on the Scholarly Communications @ Duke blog. There are links to a copy of the opinion. Both are well worth reading. [MG]
If Law School Isn't Worth Attending Unless One Attends a Top School, Are These the Top 50 "Worth It" Law Schools?
See Business Insider's The 50 Best Law Schools in America. And then there is The Princeton Review, "the other, other white meat U.S. News." See Staci Zaretsky's Princeton Review Ranks The Law Schools with The Best Career Prospects on ATL. [JH]
The Wu Recipe for Fixing Legal Education
In Shrinking Law Schools, Frank H. Wu, Chancellor & Dean of UC Hastings College of the Law, writes
Law schools must reduce their J.D. class sizes. They should do so immediately and permanently.
The data are compelling. There are simply too many lawyers and too many law students in the United States nowadays. Only about half of recent graduates of law schools, of which there also are too many, are securing permanent full-time employment in the legal profession at this point.
There, I've said it. Indeed, my law school has taken action.
Ah, OK. What Dean Wu fails to mention anywhere in his opinion piece is that when Hastings cut its class size by 20% it also jacked up its in-state tuition in a single year by 15% (to $48K) and cut its payroll costs -- staff but not law faculty -- by approximately 10%. Fewer students, higher tuition, staff cutbacks but don't touch the faculty ranks ... is this the recipe for reforming the legal academy?
For a more detailed deconstruction of Wu's article, see Inside the Law School Scam's Wu-less. [JH]
October 10, 2012
What Else Can An Online Catalog Do?
There is an article in Library Journal, Librarians As Booksellers, which promotes the idea of libraries partnering with publishers as a sales point for e-books. One mechanism would have catalogs include “buy” buttons in a bibliographic record. A borrower may be a buyer if the book is unavailable for loan, or alternatively may want to acquire a title after having borrowed it. I like the idea in that, as the article suggests, publishers and libraries could easily be partners rather than antagonists. One of the themes running through the Apple e-book pricing case is preserving the local bookstore as a place of literary discovery. The library could easily fill that role if the local bookseller went out of business. By contrast, the local library is not likely going away no matter how much market share Amazon amasses.
This got me thinking about how libraries could further adapt their roles in modern times. The term “information center” is another common way libraries define themselves these days. The heart of the information center is, of course, its catalog. The traditional view of the catalog is Charles Cutter’s Objects as published in his Rules For A Dictionary Catalog (see page 12):
1. To enable a person to find a book of which either
(A.) the author
(B.) the title
(C.) the subject
is known.
2. To show what the library has
(D.) by a given author
(E.) on a given subject
(F.) in a given kind of literature
3. To assist in the choice of a book
(G.) as to its edition (bibliographically)
(H.) as to its character (literary or topical)
Modern catalogs changed in the 1980s and later to include features such as linking to external electronic sources. Many of these will be electronic components of a library’s collection, such as subscriptions to electronic journals and books, videos, or any legitimate external link with a stable URL. Those of us in academia typically promote use of the online catalog during orientation. We want students to use it. The current trend is to overlay the catalog with discovery mechanisms such as WorldCat Local as a way of deep mining subscription information beyond bibliographic content.
I realize that a school’s web site typically contains information about the academic program such as class schedule, texts, faculty, and other details. Why not make some of this information available through the online catalog? It would certainly promote it as a source of institutional information. This may not comport with Cutter’s Objects for a catalog, but the Internet did not exist in 1904 and the way we access and consume information has expanded since then.
I’m well aware of past discussions as to whether libraries should catalog the web. I think that is impossible given the number of pages out there. If anything, that is the purpose of Google and other search sites. Nonetheless, there can be room for curated local information that is not bibliographic. We have research guides and other self-generated content that can be discoverable through the catalog.
Traditionalists may disagree and I understand that. Libraries, however, are doing more than collecting books these days. If the Library Journal article floats the idea of libraries taking on some of the role performed by bookstores, why stop there? The examples I used may or may not be practical. The institution’s web site may be sufficient. But my real point is: how else can the catalog be useful? What other information pointers can be included?
I think any adaptation of the online catalog to include other content is less a technological issue than a financial one. It comes down to whether the money is there to buy the infrastructure and the people to manage it. Who knows? The future may be a discovery service partnered with a large Internet search company.
Look at page 99 of The New Catalogue of Harvard College Library where Cutter describes the mechanisms of the card catalog drawer. He marveled at the utility of the rods that held the cards in place. They prevented accidental spills but allowed orderly rearrangement. I wonder what he would think of today’s catalogs. [MG]
Keynote Sessions from LITA's 2012 National Forum
LITA's 2012 National Forum was held in Columbus, OH Oct. 4-7. Gary Price has embedded the videos of the three keynote presentations on InfoDocket:
- Eric Hellman: “Building a Public Sector for eBooks”
- Ben Shneiderman: “Fresh Thinking about Information Technology: Visual Analytics, Social Discovery & Networked Communities”
- Sarah Houghton: “Library Futures: Star Trek or Starbucks?”
October 9, 2012
Some Thoughts (And A Few Personal Disclosures) On Web Privacy
I’m always fascinated by news concerning online privacy, especially in the context of marketing. I think it’s pretty well established that free services from Google, Facebook, and others are not really free. Our payment is the information we voluntarily provide combined with monitoring of our activities on the Internet. A story in CIO highlights the lack of transparency on the part of the largest Internet companies about how they create and update user profiles. It’s not merely a matter of filling in the blanks on a profile page. That’s too easy and too obvious. I know that search habits and clicks form the basis of the implied interests, but how does that really work? The more important questions may be how we can edit that information and control its use.
All we see are the end results of that inferential profile. I use Gmail and I know I get targeted ads based on the content of mail sent to and from my account. Google does have a link “Why this ad?” The short description says it is based on emails from my inbox and offers links where I may manage or opt out of ads. There is another option called “Ads on the web” where I can view some of the interest categories associated with my profile.
They are not all entirely accurate. One shows I'm interested in “Arts & Entertainment - Music & Audio - Urban & Hip-Hop - Rap & Hip-Hop.” I do have an interest in music but it does not extend to the listed genres. Another category is “Books & Literature - Children's Literature.” I admit a fondness for Scooby-Doo videos, though I can’t believe that outweighs the subject searches I perform as a reference librarian. I assume Google hasn’t found a way to turn extensive searches on competition, marketing, and antitrust into an ad bonanza. Google seems to think I like basketball. I guess all those searches for news on the Chicago Blackhawks do not register with our mechanized overlords.
The point of this outpouring of detail is to illustrate that Google, Facebook, and others pay detailed attention to what we do. We’ve always known that. What I don’t know is how Google selected basketball over hockey. The two sports are nothing alike other than that both are team sports typically played in dual-use arenas. It also illustrates how profoundly wrong some of the algorithms may be in inferring our interests. Some of these inferences may have consequences beyond ads.
The CIO article suggests our profiles contain our political affiliations. That may not matter much to some in the United States. It may make a difference, as CIO notes, for citizens in other countries where politics and violence are heavily associated. Google doesn’t include this in the limited demographic listing it displays for me. For some reason, however, Google suggests following the Obama campaign whenever I log into Google+. I do read a lot of political news, though I find it disturbing that Google is inferring a voting preference on my part based on my web habits. I rarely visit campaign sites.
One suggestion in the article is that the only way to make this process more transparent to the end user is through regulation. I think it is a great idea, though it would be hard to implement. Congress isn’t known for productivity these days. Moreover, there’s money in this, likely generating hard resistance to changing the privacy landscape. CNN reports on how much an individual is worth to Google and others. The amounts change based on, you guessed it, demographics, but in 2010 an individual was worth about $14.70 per thousand searches. There are estimates for Facebook as well.
I wouldn’t suggest that individuals give up Google and the rest. I do suggest, however, that anyone using these services should pay attention to the details they have on us when possible. It’s not only marketing. Who knows, maybe these profiles could become a component of things like credit scores. It seems unlikely, but then again, social security numbers were never meant to be unique personal identifiers. Look how that turned out. [MG]
The "Duplication of Legal Publications" Issue: Recalling a "forgotten moment in the history of law librarianship in which a prominent law librarian provided leadership on a matter of concern throughout the legal profession."
Last weekend I read Dick Danner's very interesting and well-documented history of the ABA, AALL, and AALS joint efforts in the late 1930s to address the problem characterized then as the "duplication of legal publications." Even though the joint effort Danner recounts ultimately failed, I hope a couple of quotes from the opening paragraphs of his The ABA, the AALL, the AALS, and the 'Duplication of Legal Publications' [SSRN; LLJ forthcoming] will stimulate interest by all three associations in joining together in a concerted and coordinated advocacy effort, with law librarian leadership, to address today's issues. One can make the case that the stakes are much higher today.
It was neither new nor unusual for lawyers to complain about having to deal with “too much law.” Historical concerns about too many law books are limited neither to common law legal systems nor to the post-Gutenberg age. Because of their reliance on precedents found in judicial opinions, common law lawyers in particular have complained about too many published opinions at least since the time of Francis Bacon in the early seventeenth century. The problem remains alive today in the background of twenty-first century controversies regarding citation of unpublished opinions in federal and state courts. American lawyers challenged by perhaps two million reported cases in the 1930s would likely be astounded at the number of appellate cases available since the introduction of commercial legal databases in the mid-1970s.
In the 1930s, it was not out of place for the ABA to be concerned about the problem of too many law reports. The problem of “duplication” of legal publications had been of interest to the ABA from the mid-1880s through the first two decades of the twentieth century, especially for the impacts of multiple versions of published law reports on the work of the practicing bar. In his history of the ABA, Edson Sunderland called it “one of the most baffling subjects” with which the Association dealt. By the 1930s, however, two other associations established near the beginning of the twentieth century had matured to where they too might exert influence on this and other matters of mutual concern. The American Association of Law Libraries had grown from thirty-four individual charter members in 1906 to 172 regular members in 1933. The Association of American Law Schools was formed in 1900, with thirty-two law schools as charter members, and had seventy-seven member schools by 1933. More recently, the American Law Institute (ALI) had been founded in 1923 by a group of prominent judges, lawyers, and law professors with the goal of promoting “the clarification and simplification of the law and its better adaptation to social needs.”
The 1930s initiative to solve the problem of the “duplication” of publications was significant not only because it was a joint effort by these organizations, but because it was coordinated within the ABA and with the other associations by Professor James, the Harvard Law Librarian. This article describes those efforts (and their eventual failure) in hopes of shedding light on a forgotten moment in the history of law librarianship in which a prominent law librarian provided leadership on a matter of concern throughout the legal profession.
Highly recommended. [JH]
October 8, 2012
A Few Thoughts On The AAP-Google Settlement
One of the lingering questions about last week’s settlement between Google and the Association of American Publishers is why now after seven years of litigation? The easy answer is both sides simply got tired of the fight. I think there is more to it than that. Even though the terms of the settlement are confidential, there are a few known elements. One is that the publishers can opt out of selling through the Google Play Store. Another is that Google will supply copies of scanned works to the publishers for possible sale through other vendors. The benefit to the public is that a publisher can now offer back catalog items for sale as e-books. None of this explains the “why now” question.
I think the settlement represents a strategic decision by the publishers to create an alternative distribution system as a counterweight to Amazon. Google is perfect for this role. The settlement required give and take between the publishers and Google, which is not the relationship they seem to have with Amazon. The lawsuit the Department of Justice filed against Apple and several publishers at least indicated that Apple did not want to compete against Amazon on price, hence the most favored nation clause in the agency contracts between the parties.
Google, on the other hand, has very deep pockets and a very deep reach with the public. It’s also a company that is known to experiment with concepts and ideas without worrying too much about the ultimate cost. While Apple would prefer not to compete on e-book prices, Google may be more than willing to do so. I think the publishers would be delighted if Google took market share away from Amazon, much like the record labels hoped Amazon and Wal-Mart would take market share away from Apple in music sales. None of this could happen with Google as an adversary. The recent settlement makes Google not only a partner, but a compliant partner. It has a viable ecosystem through Android devices and the Nexus 7 tablet. Unlike Microsoft and Barnes & Noble, Google has a relationship with consumers that goes beyond its store.
The market, it seems, might not be so completely foreclosed to Amazon as the publishers feared. I believe this development may undermine their argument that adopting the agency model and MFN clauses was necessary to prevent Amazon from gaining a monopoly. Getting Google into the market in a big way seems to be a perfectly legal alternative. None of this may address the publishers’ concern that e-book pricing is too low. As the song goes, you can’t always get what you want, but you get what you need. Right now the publishers need a viable alternative to Amazon. Google may be their best bet. [MG]
The No Sacred Cow Models: Sole Provider, Primary Provider, or Multiple Narrow-Focused Providers for Online Legal Search in the Private Sector
On 3 Geeks, Greg Lambert reported that BigLaw firm Foley & Lardner recently went the sole legal search provider route (Lexis).
For many of us in the law library world, we've been waiting for a BigLaw firm (Foley falls in at #45 in the AmLaw100 revenue rankings) to pull the trigger and go to a single legal research provider. Now we have someone to use as an example. Now that the seal is off that bottle, it will be interesting to see how many other BigLaw firms finally start looking seriously at dropping the two-vendor legal research model and start going as a Westlaw, Lexis, or even Bloomberg Law only shop.
I have no idea if Foley is the first AmLaw 100 firm to go the sole provider route. Certainly many mid-size law firms have gone sole provider since the 2008 recession because there are no sacred cows anymore. WEXIS no more; Lexis or Westlaw but not both. Do we add "or BLaw"? Not yet. With only two uber BigLaw firms on board for US-firmwide access and no public information that either has taken BLaw as a sole provider, about all one can say is that BLaw may be either one of two or three search providers or the primary provider offered to users at those firms.
The provider model is evolving from both Lexis and Westlaw to
- Sole Provider: Lexis or Westlaw but not both.
- Primary Provider: a more comprehensive plan for Lexis or Westlaw or BLaw with smaller plans as secondary provider(s), with or without limiting the number of user accounts at the firm level.
When the dust settles, I think there will also be a third model: multiple narrow-focused licenses for Lexis Advance, WestlawNext, and BLaw for select groups of institutional users, based on a selection of in-plan-only access to the secondary sources that each practice group user population requires. Add into the institutional buyer decision matrix (1) any appropriate productivity "solutions" that tie into search needed by specific user populations of institutional buyers and (2) the eventual licensing of enhanced eBooks with their own tie-ins to the vendor's search service, which will certainly cannibalize database inclusions in future search licenses.
Given that all three vendors provide essentially the same primary legal sources, and that none of their current-generation metadata-enhanced search engines is convincingly better than any other despite marketing claims, my hunch is that the dominant model in the private sector, particularly in the BigLaw sector, may eventually end up being the licensing of multiple narrow-focused search plans from Lexis, TR Legal, and BLaw, limited to targeted user populations based on the specific secondary sources and tools needed online.
Of course, this is where the editorial quality of secondary legal sources becomes a competitive advantage, not only in the context of the current WEXIS model of content commoditization (think BLaw-BNA offerings) but also in intelligent, professional-grade workflow solutions (think forthcoming WK offerings for some legal practice group specialties). [JH]
October 7, 2012
Kickstarting Production of Zombie Law: Zombies in the Federal Courts
Joshua Warren has received over $5,000 in pledges to support his Zombie Law: Zombies in the Federal Courts project. From the Kickstarter page:
This edited collection is a serious book compendium of epic real life zombie stories as told by the U.S. Federal Courts. This Kickstarter project funds production of a law school style bound casebook to include case opinions from the over 300 U.S. Federal Court opinions with the word “zombie” (and “zombies”, “zombi”, “zombis”, “zombified”, “zombism”, etc.)