Match Group, LLC v. Bumble Trading Inc.

Western District of Texas, txwd-6:2018-cv-00080

Affidavit of Chris Schmandt

IN THE UNITED STATES DISTRICT COURT
WESTERN DISTRICT OF TEXAS
WACO DIVISION

MATCH GROUP, LLC, Plaintiff,

v.     Civil Action No. 6:18-cv-00080-ADA

BUMBLE TRADING, INC., and BUMBLE HOLDING, LTD., Defendants.

DECLARATION OF MR. CHRISTOPHER M. SCHMANDT IN SUPPORT OF DEFENDANTS' OPENING CLAIM CONSTRUCTION BRIEF

Table of Contents

I. INTRODUCTION
II. EXPERT QUALIFICATIONS
A. Educational Background
B. Relevant Professional Experience
C. Publications
D. Engagement
III. LEGAL STANDARDS
A. Claim Construction
B. Indefiniteness
C. Person of Ordinary Skill in the Art
IV. THE ASSERTED PATENTS AND CLAIMS
V. DISPUTED TERMS
A. "graphical representation"
B. "associated"
C. "the text area"

I, Christopher M. Schmandt, submit this declaration in support of Defendants Bumble Trading, Inc. and Bumble Holding, Ltd.'s (collectively, "Bumble") Opening Claim Construction Brief, and declare as follows:

I. INTRODUCTION

1. I have recently retired from my position as a Principal Research Scientist at the Media Laboratory at the Massachusetts Institute of Technology ("M.I.T."), after 40 years of employment by M.I.T. In that role, I also served as faculty for the M.I.T. Media Arts and Sciences academic program. I have more than 40 years of experience in the field of media technology and am a founder of the M.I.T. Media Laboratory.

2. I have been asked to offer opinions as an expert regarding the construction of terms within the claims of U.S. Patent No. 9,733,811 (the "ʼ811 Patent"), U.S. Patent No. 9,959,023 (the "ʼ023 Patent"), and U.S. Patent No. 10,203,854 (the "ʼ854 Patent") (collectively, the "asserted patents"), which have been asserted against Bumble. I have also been asked to offer an opinion regarding the definiteness of certain terms.

3. I have based my opinions on my analysis of the materials I have received and reviewed during the course of this case, and on the knowledge and experience I have obtained during my forty-plus years of working in the fields of computer science, multimedia, and Web technology. The materials I reviewed include, without limitation, the asserted patents, the file histories of the asserted patents, and various other materials referenced in this declaration, as well as materials reviewed during my involvement in related inter partes review proceedings.

4.
It is my opinion that Bumble's positions regarding the interpretation or indefiniteness of the disputed claim terms described below are correct and should be adopted.

II. EXPERT QUALIFICATIONS

A. Educational Background

5. I received my Bachelor of Science degree in Electrical Engineering and Computer Science from M.I.T. in 1978, and my Master of Science degree in Visual Studies (Computer Graphics), also from M.I.T., in 1980.

B. Relevant Professional Experience

6. I was employed at M.I.T. from 1980, initially at the Architecture Machine Group, which was an early computer graphics research lab. In 1985, I helped found the Media Laboratory and continued to work there until my retirement. I ran a research group titled "Living Mobile."

7. In my faculty position, I taught courses and directly supervised student research and theses at the Bachelor's, Master's, and Ph.D. levels. I oversaw the Master's and Ph.D. thesis programs for the entire Media Arts and Sciences academic program. Based on the above experience and qualifications, I have a solid understanding of the knowledge and perspective of a person of ordinary skill in this technical field since at least 1980.

8. During my career, a number of aspects of my research were directly relevant to the subject matter of the asserted patents. When I worked at the predecessor to the Media Lab, the Architecture Machine Group, I worked on very early interactive computer graphics systems. As this was before the widespread presence of the computer mouse, much of our work used touch-sensitive surfaces and screens, much as modern mobile phones do. My EECS B.S. thesis, titled "Pages Without Paper," was perhaps the first e-book reader and allowed a user to swipe on a touch pad to turn pages: right-to-left to move forward a page and left-to-right to move to the previous page. Although the thesis refers to these gestures as "flipping," it is the same gesture as is sometimes called "swiping."

9.
Some years later, in the mid-1990s, I was a member of the Media Lab's News in the Future project. In this area, I worked on matching user interests in news articles, based on personal preference profiles as well as learning from the history of what the users read. Similar techniques were also applied to filtering email and voice messages. This work is related to matching algorithms that may allow people to meet others who satisfy personal criteria.

10. For many years, my research included pioneering work in location-based services, such as learning where one traveled and what people or resources might be found en route, as well as sharing location information with others in various ways. Additionally, in 1988, with my graduate students, I developed the first real-time spoken driving direction system, a very early precursor to the in-vehicle voice navigation systems in common use today.

11. My Curriculum Vitae is submitted herewith as Exhibit A.

C. Publications

12. My research spans distributed communication and collaborative systems, with an emphasis on multimedia and user interfaces; I have more than 70 published conference and journal papers and one book in these fields.

13. The list of publications of which I am an author or co-author may be found in my Curriculum Vitae, which is attached as Exhibit A.

D. Engagement

14. I have been retained by counsel for Bumble to provide my expert opinion in connection with claim construction in the above-captioned proceeding. I am being compensated at a rate of $450 per hour for my study and other work in this matter. My compensation is not contingent on the outcome of this matter or the specifics of my testimony.

15. Throughout my declaration, all of my opinions are expressed from the point of view of a person of ordinary skill in the art.

III. LEGAL STANDARDS

16. Certain legal standards have been explained to me by counsel for Bumble.
I have applied them in forming the detailed opinions explained below, and I state these legal standards as follows.

A. Claim Construction

17. I have been informed that the claims are to be interpreted according to their meaning to one of ordinary skill in the art at the time of the invention, which I understand means on the effective filing date of the patent application. I have also been informed that the scope of a patent claim is to be determined by the words of the claim itself, read in view of the relevant intrinsic record, i.e., the specification and file history. I also understand that, although one is to consider the specification when interpreting the claims of a patent, it is improper to import into the claims limitations from the embodiments described in the specification that are not required by the claims.

18. I have also been informed that extrinsic sources of information may be used to inform the understanding of one of ordinary skill in the art at the time of patenting, but that such extrinsic information cannot be used to give a term a meaning that is inconsistent with the meaning given by the intrinsic record, consisting of the claims, specification, and file history.

B. Indefiniteness

19. I understand that a patent must be precise enough to afford clear notice of what is claimed, thereby apprising the public of what is still open to them. I understand that a patent claim is invalid for indefiniteness if the claims, read in light of the specification delineating the patent and the prosecution history, fail to inform, with reasonable certainty, those skilled in the art about the scope of the invention. I further understand that definiteness is measured from the viewpoint of a person skilled in the art as of the effective filing date.

C. Person of Ordinary Skill in the Art

20.
I understand that an assessment of the claims of the asserted patents should be made from the perspective of a person of ordinary skill in the art as of the earliest claimed priority date, which I understand is alleged to be June 6, 2012.1

1 I am not offering an opinion that the asserted patents should be entitled to this priority date. I have formed no opinion as to whether the challenged claims can properly be afforded this invention date. My opinions as to the level of ordinary skill in the art, and all of the opinions presented in my declaration, would remain the same whether June 6, 2012, August 6, 2012, or March 15, 2013 was established as the invention date for purposes of the asserted claims.

21. I have been advised that, to determine the appropriate level of skill of a person having ordinary skill in the art, the following factors may be considered: (a) the types of problems encountered by those working in the field and prior art solutions thereto; (b) the sophistication of the technology in question and the rapidity with which inventions occur in the field; (c) the educational level of active workers in the field; and (d) the educational level of the inventor.

22. In my opinion, a person of ordinary skill in the art as of June 2012 (the earliest alleged priority date) would have possessed at least a bachelor's degree in computer science or an equivalent field requiring the learning of computation principles, and two years of experience in building software applications employing client/server communication architectures, database queries, and graphical user interfaces. A person could also have qualified as a person of ordinary skill in the art with some combination of (1) more formal education (such as a master of science degree) and less technical experience, or (2) less formal education and more technical or professional experience in the fields listed above.

23. My opinions regarding the level of ordinary skill in the art are based on, among other things, my over 40 years of experience in the fields of computer science, multimedia, and Web technology; my understanding of the basic qualifications that would be relevant to an engineer or scientist tasked with investigating methods and systems in the relevant area; and my familiarity with the backgrounds of colleagues, co-workers, and employees, both past and present. I also note that the asserted patents confirm that the underlying technology is not overly sophisticated. The specifications note that the alleged invention of the asserted patents "may include software and/or algorithms to achieve the operations for processing, communicating, delivering, gathering, uploading, maintaining, and/or generally managing data," (ʼ811 Patent, 4:40-43), and that users may use a "personal computer," a "cellular telephone, an electronic notebook, a laptop, a personal digital assistant (PDA), or any other suitable device." (Id., 3:63-4:2.) Additionally, the alleged invention "may be achieved by any suitable hardware, component, device, application specific integrated circuit, additional software, field programmable gate array, server, processor, algorithm, erasable programmable ROM, electronically erasable programmable ROM, or any other suitable object that is operable to facilitate such operations." (Id., 4:44-51.)

24. Although my qualifications and experience exceed those of the hypothetical person having ordinary skill in the art defined above, my analysis and opinions regarding the asserted patents are based upon the perspective of a person of ordinary skill in the art as of June 2012.

IV. THE ASSERTED PATENTS AND CLAIMS

25. The asserted patents are all part of the same patent family. I understand that the ʼ023 and ʼ854 Patents are continuations of the ʼ811 Patent. I also understand that all three asserted patents share an identical specification.
Generally, the asserted patents disclose a method and/or system for profile matching. I have been informed that the litigation is in its early stages, and that Plaintiff is currently asserting all claims of all the asserted patents.

V. DISPUTED TERMS

A. "graphical representation"

Bumble's Construction: "summary of information displayed on a graphical user interface"
Plaintiff's Construction: "pictorial portrayal"

26. I understand that all of the asserted patents' independent claims recite a "graphical representation," as does a dependent claim of the ʼ023 Patent. Below is a summary of how the term "graphical representation" appears in the claims:

- "graphical representation of [a/the] [first, second, third, etc.] potential match" (ʼ811 Patent, cls. 1, 4, 7; ʼ854 Patent, cls. 1, 4, 7, 10);
- "graphical representation of [a/the] [first, second] item of information" (ʼ023 Patent, cls. 1, 2, 3, 5);
- "graphical representation of a [first, second] online dating profile" (ʼ023 Patent, cls. 1, 3, 5); and
- "graphical representation of the [first, second] user" (ʼ854 Patent, cls. 1, 4, 7, 10).

27. A person of ordinary skill in the art would understand a "graphical representation" to be a representation that appears on a graphical user interface. There are generally two types of user interfaces: (1) character-based user interfaces (also referred to as text-based user interfaces) and (2) graphical user interfaces. On a character- or text-based user interface, all interface elements are composed only of text. To interact with a character-based interface, the user generally types in commands; hence, such interfaces are often referred to as command-line interfaces. By comparison, a graphical user interface can display windows, menus, icons, text, pictures, or some combination thereof. A graphical user interface is therefore not limited in what it can display. Most user
Nothing precludes the use of text as a display element on a graphical user interface, as we all can see on our cellphones and computers. A user can interact with a graphical user interface through the use of a pointer or in the case of a touchscreen, a finger or stylus. 28. I have reviewed the specification of the asserted patents. The actual phrase "graphical representation" does not appear in the specifications. Nonetheless, based on the specifications, a person of ordinary skill in the art would understand "graphical representation" to mean a "summary of information displayed on a graphical user interface." The specifications disclose that "user 14 may request that matching server 20 present a subset of users from user profile pool 30 based on specified search parameters." (See e.g., ʼ811 Patent, 21:8-10.) "User 14 may be presented with a summary of information regarding a suggested user. The summary may include one or more of: a picture, an icon, name, location information, gender, physical attributes, hobbies, or other profile information." (ʼ811 Patent, 21:18-22.) A person of ordinary skill in the art would understand this disclosure to correspond to the claimed "graphical representation." Because a "graphical representation" is displayed on a graphical user interface, nothing precludes it from including text. In view of this disclosure in the specification, a person of ordinary skill in the art would understand a graphical representation to be a "summary of information displayed on a graphical user interface. 29. A person of ordinary skill in the art would not understand the term "graphical representation" to be limited to a "pictorial portrayal" of a user. A "graphical representation" may include icons or other representations such as a card metaphor in addition to text without containing a picture of a user. 
This understanding is consistent with and supported by the specifications' disclosure that "the summary may include one or more of: a picture, an icon, name, location information, gender, physical attributes, hobbies, or other profile information."

30. In addition, even where a picture is included in a graphical representation along with other images and information, a person of ordinary skill in the art would not understand the graphical representation to be limited to that picture. For example, Figure 1F depicts "the presentation of details of a match result entity to a user." (ʼ811 Patent, 2:65-67.) Based on a request from a user to view potential matches, "[m]atching server 20 would receive this request and respond by displaying Jane Doe's profile (stored in memory 26), as depicted by FIG. 1F." (ʼ811 Patent, 6:20-22.) Figure 1F depicts more than just a picture of the potential match, Jane Doe; it includes information such as Jane Doe's birthday, hometown, likes, and dislikes. The entire figure is a card-based metaphor for a potential match that is displayed on a graphical user interface. (ʼ811 Patent, 6:12-22, Figure 1F.) Figure 1F contains a "summary of information" about Jane Doe that is displayed on the requesting user's graphical user interface. The entire card, not just Jane Doe's picture, is the "graphical representation" of Jane Doe. Similarly, "card 88" of Figure 6 is a "graphical representation" that includes a "summary of information" about Sally, and "card 88" includes text.

B. "associated"

Bumble's Construction: Indefinite
Plaintiff's Construction: No construction necessary / plain and ordinary meaning

31. The claims of the asserted patents use the term "associated" in different contexts throughout the asserted claims. Three exemplar claim limitations are:

- "a [first, second] swiping gesture associated with the graphical representation" (ʼ811 Patent, cls.
1, 4, 7);
- "a positive preference indication associated with the first item of information" (ʼ023 Patent, cls. 1, 3, 5); and
- "a [first, second, third] [positive, negative] preference indication associated with a graphical representation" (ʼ854 Patent, cls. 1, 4, 7, 10).

32. A person of ordinary skill in the art would not be able to determine, with reasonable certainty, the meaning of "associated" as used in these claim phrases. It is my opinion that the term is so vague that it has no discernible meaning, and that the term is therefore indefinite. The term "associated" is generally used relationally, to describe a relationship between two different things. I have reviewed the claims and the specification, and neither makes clear the claimed relationship between: (1) a swiping gesture and a graphical representation; (2) a positive preference indication and an item of information; and (3) a preference indication and a graphical representation.

33. A person of ordinary skill in the art would be unable to determine, for the swiping gesture and the graphical representation, whether the patents are claiming, for example, (a) a swiping gesture that acts upon the graphical representation; (b) a swiping gesture linked to or connected to the graphical representation; or something else.

34. This lack of clarity also applies to the supposed relationship between the positive preference indication and the first item of information, and between the preference indications and the graphical representation.
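The alternative readings of "associated" described above can be contrasted in a short sketch. This is hypothetical, illustrative code only; none of the identifiers come from the asserted patents or any accused product. Under one reading, association depends on where on the screen the gesture begins (a hit test against the representation's region); under the other, the gesture is merely linked to whatever representation is currently displayed, wherever the gesture occurs, and the two readings produce different results for the same swipe.

```python
# Illustrative sketch only -- hypothetical code, not drawn from the
# asserted patents. It contrasts two candidate readings of a swipe
# being "associated" with a displayed graphical representation.
from dataclasses import dataclass

@dataclass
class Representation:
    """A graphical representation occupying a rectangular screen region."""
    x: int
    y: int
    width: int
    height: int

def swipe_acts_upon(rep: Representation, start_x: int, start_y: int) -> bool:
    """Reading (a): the swipe is "associated" only if it begins on top
    of the displayed representation (a hit test on its region)."""
    return (rep.x <= start_x < rep.x + rep.width
            and rep.y <= start_y < rep.y + rep.height)

def swipe_linked_to(rep: Representation, displayed: Representation) -> bool:
    """Reading (b): the swipe is "associated" with whichever
    representation is currently displayed, regardless of where on the
    screen the gesture occurs."""
    return rep is displayed

rep = Representation(x=100, y=100, width=200, height=300)
# A swipe starting at (50, 50), outside the representation's region:
print(swipe_acts_upon(rep, 50, 50))  # not associated under reading (a)
print(swipe_linked_to(rep, rep))     # associated under reading (b)
```

The same physical gesture is "associated" with the representation under one reading but not the other, which is the ambiguity described in the paragraphs above.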
Likewise, a person of ordinary skill in the art would not be able to discern, for the preference indications and the items of information or the graphical representations, whether (a) the preference indication acts upon the item of information or graphical representation; (b) the preference indication is linked to or connected to the item of information or graphical representation; or (c) the preference indication is stored somewhere on the system together with the item of information or the graphical representation.

35. It is also unclear to one of ordinary skill in the art what "associated" means because the claim terms recite user action on a graphical user interface without explaining how the user interacts with the displayed graphical representation. For example, a person of ordinary skill in the art would not be able to understand how the user must interact with the graphical user interface for the claimed swiping gesture to be "associated" with the displayed graphical representation. It is unclear whether the user must swipe directly on top of the graphical representation on the graphical user interface, or whether the user can swipe anywhere on the graphical user interface, for the swipe and the displayed graphical representation to be "associated" with one another. Similarly, a person of ordinary skill in the art could not discern whether the preference indication must occur directly on the item of information or graphical representation displayed on the graphical user interface, or elsewhere on the graphical user interface, for the two to be "associated."

36. There are several possibilities for what "associated" could mean in the context of these limitations, but no one meaning is decipherable from the intrinsic record. Therefore, it is my opinion that the term is indefinite to a person of ordinary skill in the art.

C. "the text area"

Bumble's Construction: Indefinite
Plaintiff's Construction: "a text area"

37.
"The text area" appears in dependent claims of the ʼ811 Patent: "the graphical notification comprising a user interface control enabling the text area to be presented to the first user." (ʼ811 Patent, cls. 2, 5, 8.) A person of ordinary skill in the art could understand "the text area," in the context of the claim language, to mean at least two different things: (1) an area where text is displayed, or (2) a text input area. The words "text area" never appear in the ʼ811 Patent's specification. In light of the claim language and specification, a person of ordinary skill in the art would not be able to determine, with reasonable certainty, which of these two meanings applies. Therefore, it is my opinion that this term is indefinite.

38. I have reviewed Figure 12D and do not believe that it would resolve the ambiguity of the term "text area" for a person of ordinary skill in the art. Figure 12D "illustrates an example communication interface between users of the matching system." (ʼ811 Patent, 24:30-31.) In this communication interface, "[u]ser 14 is presented with chat box 1208 for each of the matches that exist for user 14," and "[u]sers 14 may communicate with each other through chat box 1208." (ʼ811 Patent, 24:31-37.) In Figure 12D, however, chat box 1208 appears next to an icon of two talking bubbles, and for the first match, "Ashok," there appears to be text displayed next to the icons. (See ʼ811 Patent, Figure 12D (depicting text under Ashok's name: "Hey! How's it going?").) I see no place to enter text in Figure 12D. Figure 12D thus does not resolve any ambiguity in the term: it remains unclear whether "the text area" is a text input area (not shown), an area with links to multiple conversations, or merely an area with text on the screen.

I declare under penalty of perjury that the foregoing is true and correct to the best of my knowledge, information, and belief.

Christopher M.
Schmandt

Christopher Schmandt
4 Longfellow Rd, Winchester, MA 01890

Education
M.I.T., Master of Science, Visual Studies (Computer Graphics), 1980
M.I.T., Bachelor of Science, Computer Science, 1978

Professional Experience -- MIT (retired)
MIT Media Laboratory, Principal Research Scientist, 1985-2018
Director, Living Mobile Research Group (formerly Speech + Mobility)
Architecture Machine Group, Research Associate, 1980-1984
Architecture Machine Group, Research Assistant, 1979-1980
Architecture Machine Group, Graphics Programmer, 1977-1979
Departmental Undergraduate Research Opportunities Program Coordinator, 1984-2018
Laboratory Intellectual Property Committee, 2001-2017; chair, 2002-2009
Departmental Committee on Graduate Studies, 1996-2001, 2007-2018

Sponsored Research Activities
Alerting and Mobile Messaging, Digital Life Consortium, MIT Media Lab, 1997-2018
Acoustical Cues to Discourse Structure, National Science Foundation, Principal Investigator, 1995-1998
Parsing Radio, News in the Future Consortium, MIT Media Lab, 1993-199
Desktop Audio, SUN Microsystems, Inc., Principal Investigator, 1989-1996
Voice Interaction in Hand Held Computer, Apple Computer, Principal Investigator, 1991-1993
Voice Interfaces for Network Services, AT&T, Principal Investigator, 1989-1991
Back Seat Driver, NEC, Principal Investigator, 1988-1991
Acoustic and Visual Cues for Speech Recognition, DARPA, co-Principal Investigator, 1986-1988
Personal Computers and Telephony, NTT Public Corporation, Principal Investigator, 1984-1989
Home Telecomputing, Atari, Inc., Principal Investigator, 1983

Publications
Most publications can be found at http://www.media.mit.edu/speech/publications or http://living.media.mit.edu/publications

SkinMorph: Texture-Tunable On-Skin Interface Through Thin, Programmable Gel. ISWC 2018. (with Cindy Kao, M. Banforth, D. Kim)
ARTextiles for Promoting Social Interactions Around Personal Interests. CHI 2018.
(with Anna Fuste)
Technical Interventions to Detect, Communicate, and Deter Sexual Assault. ISWC 2017. (with Manisha Mohan)
Exploring Interactions and Perceptions of Kinetic Wearables. DIS 2017. (with Cindy Hsin-Liu Kao, D. Ajilo, O. Anilionyte, A. Dementyev, I. Choi and S. Follmer)
Rovables: On-Body Robots as Mobile Wearables. UIST 2016. (with Cindy Hsin-Liu Kao, A. Dementyev, I. Choi, D. Ajilo, M. Xu, and S. Follmer)
DuoSkin: Rapidly Prototyping On-Skin User Interfaces Using Skin-Friendly Materials. ISWC 2016. (with Cindy Hsin-Liu Kao, Christian Holz, Asta Roseway, and Andres Calvo)
Immersive Terrestrial Scuba Diving Using Virtual Reality (with Dhruv Jain, Misha Sra, Jingru Go, Rodrigo Margues, Raymond Wu and Justin Chiu) Proceedings of UIST 2016
Expanding social mobile games beyond the device screen (with Misha Sra) Journal of Personal and Ubiquitous Computing, 2015
NailO: Fingernails as an input surface (with Cindy Hsin-Liu Kao, Artem Dementyev, Joseph Paradiso) CHI 2015
Mugshots: A mug display for front and back stage social interaction in the workplace (with Cindy Hsin-Liu Kao) TEI (Tangible and Embedded Interfaces) 2015
Mime: compact, low power 3D gesture sensing for interaction with head mounted displays (with Andrea Colaco, Ahmed Kirmani, Hye Soo Yang, Nan-Wei Gong, and Vivek Goyal) Proceedings of UIST 2013.
Spotz: A location-based approach to self-awareness (with Misha Sra) Proceedings of Persuasive 2013.
Setting the stage for interaction: A tablet application to augment group discussion in a seminar class (with Drew Harry and Eric Gordon) Proceedings of CSCW 2012.
Indoor Location Sensing using Geo-Magnetism (with Jaewoo Chung, Matt Donahoe, Ig-Jae Kim, Pedram Razavai and Micaela Wiseman) Proceedings of International Conference on Mobile Systems, Applications, and Services (Mobisys) 2011.
My second bike: a TV-enabled social and interactive riding experience (with Jaewoo Chung, Kuang Xu, Andrea Colaco, and Victor Li) Proceedings of IEEE Communications and Networking Conference, Jan 2010.
Going my way?: User-aware route planner (with Jaewoo Chung) Proceedings of CHI 2009.
Globetoddler: Designing for remote interaction between preschoolers and their traveling parents (with Paulina Modlitba) CHI 2008 Extended Abstracts.
Are we there yet? - a temporally aware media player (with Matt Adcock and Jaewoo Chung) Australian User Interface Conference (AUIC) 2008.
Physical embodiments for mobile communication agents (with Stefan Marti) UIST 2005.
Giving the caller the finger: collaborative responsibility for cellphone interruptions (with Stefan Marti) Extended Abstracts, CHI 2005.
Active Messenger: email filtering and delivery in a heterogeneous network (with Stefan Marti) Human-Computer Interaction Journal (HCI), Volume 20 (2005).
WatchMe: communication and awareness between members of a closely-knit group (with Natalia Marmasse) Proceedings of Ubicomp 2004.
An audio-based personal memory aid (with S. Vemuri, W. Bender, S. Tellex and B. Lassey) Proceedings of Ubicomp 2004.
Improving speech playback using time-compression and speech recognition (with Sunil Vemuri, Philip DeCamp, and Walter Bender) Proceedings of CHI 2004.
Impromptu: managing networked audio applications for mobile users (with Kwan Lee, Jang Kim, and Mark Ackerman) Proceedings of MobiSys 2004.
TalkBack: a conversational answering machine (with Vidya Lakshmipathy and Natalia Marmasse) Proceedings of UIST 2003.
``ListenIn'' to domestic environments from remote locations (with Gerardo Vallejo) Proceedings of the 2003 International Conference on Auditory Display (ICAD).
Safe & Sound: a wireless leash (with Natalia Marmasse) Extended Abstracts, Proceedings of CHI 2003.
Mediated voice communication via mobile IP (with Jang Kim, Kwan Lee, Gerardo Vallejo, and Mark Ackerman) Proceedings of UIST 2002.
The Audio Notebook: Paper and pen interaction with structured speech (with Lisa Stifelman and Barry Arons), Proceedings of CHI 2001.
Synthetic News Radio (with Keith Emnett) IBM Systems Journal, Vol. 39, Nos. 3-4, pp. 646-659, 2000.
Everywhere messaging (with Natalia Marmasse, Stefan Marti, Nitin Sawhney, and Sean Wheeler) IBM Systems Journal, Vol. 39, Nos. 3-4, pp. 660-677, 2000.
Location-aware information delivery with comMotion (with Natalia Marmasse), Proceedings of the Second International Symposium on Handheld and Ubiquitous Computing, pp. 157-171, Springer, 2000.
Nomadic Radio: Scalable and contextual notification for wearable audio messaging (with Nitin Sawhney), Proceedings of CHI 1999.
Speaking and listening on the run: Design for wearable audio computing (with Nitin Sawhney), Proceedings of International Symposium on Wearable Computing, 1998.
Audio Hallway: A virtual acoustic environment for browsing, Proceedings of UIST 1998.
Dynamic Soundscape: Mapping time to space for audio browsing (with Minoru Kobayashi), Proceedings of CHI 1997.
CLUES: Dynamic personalized message filtering (with Matt Marx), Proceedings of CSCW 1996.
Using acoustic structure in a hand-held audio playback device (with Deb Roy), IBM Systems Journal, Vol. 35, Nos. 3 and 4, 1996.
Mailcall: Message presentation and navigation in a nonvisual environment (with Matt Marx), Proceedings of CHI 1996.
AudioStreamer: Exploiting simultaneity for listening (with Atty Mullins), short paper, CHI 1995.
Multimedia nomadic services on today's hardware, IEEE Network, September/October 1994.
Putting people first: Specifying proper names in speech interfaces (with Matt Marx), Proceedings of UIST 1994.
Chatter: A conversational learning speech interface (with E. Ly), AAAI Workshop on Intelligent Multi-Media Multi-Modal Systems, 1994.
Voice Communication with Computers: Conversational Systems. New York: Van Nostrand Reinhold, 1994.
Capturing, structuring, and representing ubiquitous audio (with D.
Hindus and C. Horner), ACM Transactions on Information Systems, Vol. 11, No. 4, October 1993. Speech Recognition Architectures for Multimedia Environments, (with E. Ly and B. Arons), Proceedings of the 1993 AVIOS Conference, September 1993. Phoneshell: the Telephone as Computer Terminal, Proceedings of the ACM Multimedia Conference, August 1993. Voicenotes: A Speech Interface for a Hand-Held Voice Notetaker (with L. Stifelman, B. Arons, and E. Hulteen), Proceedings of INTERCHI'93, April 1993. From Desktop Audio to Mobile Access: Opportunities for Voice in Computing, book chapter in Advances in Human- Computer Interaction, Vol. 4, H.R. Hartson and D. Hix editors. 1992. Ubiquitous Audio: Capturing Spontaneous Collaboration (with D. Hindus), Proceedings of CSCW'92, November 1992. Integrating Audio and Telephony in a Distributed Workstation Environment (with S. Angebranndt, R. Hyde, D. Luong, and N. Siravara), Proceedings of the Summer 1991 USENIX Conference, June 1991. Augmenting a Window System with Speech Input (with M. Ackerman and D. Hindus), Computer, IEEE Computer Society, Vol. 23, No. 8, August 1990. Observations on Using Speech Input for Window Navigation (with D. Hindus, M. Ackerman, and S. Manandhar), Proceedings, Human-Computer Interaction, Interact '90, IFIP, August 1990. Phonetool: Integrating Telephones and Workstations (with S. Casner), Proceedings, GLOBECOM '89, IEEE Communications Society, November 1989. Desktop Audio (with B. Arons), UNIX Review, October 1989. Synthetic Speech for Real Time Direction-Giving (with J. Davis), IEEE Transactions on Consumer Electronics, IEEE, September 1989. The Back Seat Driver: Real Time Spoken Driving Instructions (with J. Davis), Proceedings, IEEE Vehicle Navigation 4 of 6 2 and Information Systems Conference, IEEE, Toronto, Canada, September 1989. An Audio and Telephone Server for Multi-media Workstations (with M. McKenna), Proceedings, Second IEEE Conference on Workstations, IEEE, Palo Alto, CA., 1988. 
Employing Voice Back Channels to Facilitate Audio Document Retrieval, Proceedings, ACM Conference on Office Information Systems (COIS), Santa Clara, CA, 1988.
Conversational Telecommunications Environments, Proceedings, Second International Conference on Human-Computer Interaction, 1987.
Understanding Speech Without Recognizing Words, Proceedings, American Voice Input/Output Society Conference, AVIOS, 1987.
A Robust Parser and Dialog Generator for a Conversational Office System (with B. Arons and C. Simmons), Proceedings, American Voice Input/Output Society Conference, AVIOS, Palo Alto, CA, 1987.
Integrated Messages and Network Services for a Personal Workstation, IEEE Workshop on Telematics and Message Handling Systems, IEEE, 1986.
Voice Interaction in an Integrated Office and Telecommunications Environment, Proceedings, American Voice Input/Output Society Conference, AVIOS, San Francisco, CA, 1985.
Voice Communication with Computers, book chapter in Advances in Human-Computer Interaction, H. R. Hartson, ed., 1985.
Speech Synthesis Gives Voiced Access to an Electronic Mail System, Speech Technology, Vol. 2, No. 3, Aug/Sept 1984.
A Conversational Telephone Messaging System (with B. Arons), IEEE Transactions on Consumer Electronics, IEEE, Vol. CE-30, August 1984.
Phone Slave: A Graphical Telecommunications Interface (with B. Arons), Proceedings, Society for Information Display International Symposium, SID, San Francisco, CA, June 1984.
Input/Display Registration in a Stereoscopic Workstation, Displays, April 1984.
Remote Access to Voice and Text Messages, Proceedings, American Voice Input/Output Society Conference, AVIOS, Washington D.C., 1984.
Fuzzy Fonts: Analog Models Improve Digital Text Quality, Proceedings, National Computer Graphics Association Conference, Chicago, IL, 1983.
Greyscale Fonts Designed From Video Signal Analysis, Society of Applied Learning Technology, Houston, TX, 1983.
Spatial Input/Display Correspondence in a Stereoscopic Computer Graphic Work Station, Proceedings, ACM/SIGGRAPH, Detroit, MI, 1983.
A Programmable Virtual Vocabulary Speech Processing Peripheral (with W. Bender), Proceedings, American Voice Input/Output Society Conference on Voice Data Entry Systems Applications, AVIOS, 1983.
The Intelligent Voice Interactive Interface (with E. A. Hulteen), Proceedings, Human Factors in Computer Systems, National Bureau of Standards/ACM, Gaithersburg, MD, 1982.
Interactive Three-Dimensional Computer Space, Proceedings, SPIE Conference on Processing and Display of Three-Dimensional Data, SPIE, San Diego, CA, 1982, Vol. 367.
Speech Communications, a Systems' Approach, Proceedings, American Voice Input/Output Society Conference on Entry Systems Applications, 1982.
Voice Interaction: Putting Intelligence into the Interface, Proceedings, IEEE International Conference on Cybernetics and Society, IEEE, Seattle, WA, 1982.
The Intelligent Ear: A Graphical Interface to Digital Audio, Proceedings, IEEE International Conference on Cybernetics and Society, IEEE, Atlanta, GA, 1981.
Soft Typography, Information Processing 1980, IFIPS, S. Lavington, ed., North-Holland Publishing Co., 1980.