Frequently Asked Questions

Vision / Outlook

When will Text 2.0 arrive?
Text 2.0 already exists and works, for example, in our laboratory. The better question is: when will it be widely available? The answer depends on the spread of eye tracking devices, on commercial applications making use of their output, and on sufficient consumer demand.



Are you stating this is (definitely going to be) the future?
No! We are not claiming that any of these techniques will become mainstream at any point in the future. In fact, we are fairly sure that in the long term most of them will be superseded by even more refined algorithms and interfaces. It is also quite likely that some of them will eventually vanish completely because they prove impractical (for example, in daily use). What we are stating is this: 1) diagnostic reading research has come a very long way over the past hundred years, 2) there is a chance that eye tracking will become mainstream and might be used as a common input device, 3) the scientific community (ourselves included) has only scratched the surface of the benefits and problems that the real-time combination of gaze and text has to offer, and 4) we (and the rest of the scientific community) really should do more research on this topic.



Do you claim all of this is your work?
No! Reading research is over a hundred years old; it was also one of the driving forces behind the development of eye tracking devices. Diagnostic eye tracking applications were naturally the first to emerge; interactive applications followed years later. In 1990, Starker and Bolt presented their idea of an interactive, "gaze-responsive, self-disclosing display", a virtual 3D world reacting to gaze in a multimodal way. In 2000, Sibert et al. published "gaze triggered auditory prompting for reading remediation", a system that could speak problematic words aloud in order to assist learners. At the same time, Hyrskykari et al. published their first document about iDict, a gaze-aware reading aid that can provide translations on the fly. As you can see (and depending on the level of abstraction you apply), some of the features we present were invented or shown elsewhere, long before our vision of Text 2.0. However, most publications on (interactive) gaze-augmented reading and text have been sporadic, and to our knowledge there has been no concerted research endeavor to discover the chances and limitations of these topics. With our vision we want to stress that specifically the real-time combination of gaze and reading has tremendous potential that has yet to be uncovered.



Demos / Usage

Where and when can I try it?
If you happen to be near Kaiserslautern, please contact us and we will see what we can arrange. Apart from that, you can try it at any location where we give demos; see the front page for details. Note, however, that we do not show every feature at every location. If you want to see something specific, please contact us beforehand.



How well does it work? Will it work for me?
We have given a couple hundred demos by now, and results have varied. The most general answer is "usually it works". As a rule of thumb, we estimate that about two thirds of all demos were reported as good. The remaining participants had problems at various stages. The most common issues were problems during calibration (due to certain types of glasses or contact lenses), degraded tracking performance (due to varying lighting conditions, resulting in inaccurate tracking output), and bugs in our software (mysterious flickers that occur once in a while). These numbers are for our eyeReader prototype. The Augmented Reading demos appear to be a bit more delicate, which could be caused by their differing interaction paradigm. We will investigate this further.



I am disabled. Does it work for me, too?
This depends on your specific disability. The eyeBook, for example, can be used with the eyes alone and requires no manual interaction. However, some glasses, contact lenses, or deformations of the eyeball can lead to problems. Although we cannot predict whether it will work for you, you are welcome to come and try.



Is there a minimum age to use one of the demo applications?
A few German children under the age of 13 or 14 have tried it, but most of them did not enjoy it, since all of the texts at the time were in English. We have no experience with English-speaking children, though.



Do I have to be able to understand English?
Well, if you have read this far, the demos should be perfectly fine for you ;-). Recently we added a German version of "The Little Prince", and some of our Text 2.0 demos are explicitly aimed at helping the reader comprehend English.



What will you do with my tracking data?
Nothing. First of all, all demos we give are conducted anonymously (you stay anonymous, not us). We do not ask for any personal information that could link your identity to any sort of recorded data. The system does, however, store log files of all performed demos. These log files contain various metrics, errors, and system messages, and might also contain fragments of the corresponding reading process (such as the session start time or reading duration), which we use to improve the quality of our prototype. If this makes you uncomfortable, please refrain from taking a demo.





Code / Downloads / Development

Are there any binaries / tools / ... I can download?
Not at this time. We have not published any downloads, for two reasons. First, for the truly realistic feeling Text 2.0 can offer, you need an eye tracking device, which very few people own right now. Second, we are not yet satisfied with the usability of the code; some things are more difficult to handle than they should be. Once the second issue is fixed, we will consider which form of publication is most suitable.



But I really want to use it!
As we stated on the About page, we are constantly looking for skilled and motivated students to join us in our research and development efforts. If you want to work on the bleeding edge of interactive text right now (and if you would like to visit the southern part of Germany), please contact us.



Can you give me some more technical information about the prototype? What is it implemented in?
The browser we currently use is Safari 4. Inside Safari runs glue code written in JavaScript, which uses jQuery (thanks, John!) for many internal tasks. The eye tracking plugin itself is written in Java and connected through LiveConnect. Internally, the plugin makes heavy use of the Java Simple Plugin Framework (JSPF), the open-sourced backbone of our Text 2.0 framework. All other services, such as a DBpedia database, linguistic data, and so on, are implemented as JSPF plugins, distributed across the network and discovered automatically during startup. To get started, however, basic HTML and JavaScript knowledge should be sufficient.
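To illustrate what "basic JavaScript knowledge" might buy you, here is a minimal, purely hypothetical sketch of a gaze-aware element. The event name "gazeover" and its payload fields are our assumptions for illustration only; the actual Text 2.0 plugin exposes its own API, which may look quite different. The example uses the standard EventTarget interface so it runs outside a browser as well.

```javascript
// Hypothetical sketch of a gaze-aware word element.
// "gazeover", e.word and e.duration are invented names for illustration;
// they are NOT the real Text 2.0 plugin API.

// Stand-in for a DOM element (any EventTarget behaves the same way).
const word = new EventTarget();

let lastFixated = null;

// Register a handler, much as one would with jQuery's .on("gazeover", ...).
word.addEventListener("gazeover", (e) => {
  lastFixated = e.word; // remember which word the reader fixated
  console.log(`fixated: ${e.word} for ${e.duration} ms`);
});

// Simulate the eye tracking plugin dispatching a fixation event.
const ev = new Event("gazeover");
ev.word = "prince";   // the fixated word (hypothetical payload)
ev.duration = 240;    // fixation duration in ms (hypothetical payload)
word.dispatchEvent(ev);
```

In a real page the element would be an actual DOM node and the events would come from the tracking plugin rather than being dispatched by hand, but the handler-registration pattern is the part that only requires ordinary JavaScript skills.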



Dangers

Doesn't eye tracking pose a threat? Won't this become a new form of surveillance?
We are very aware that omnipresent eye tracking bears the potential for great abuse and surveillance. So do cell phones, digital cameras, and personal computers. Kindlegate, for example, was a good demonstration of the threats that malfunctioning systems pose, and it likely destroyed some of the trust that these new technologies require. At the same time, however, many implementations of the aforementioned technologies have managed to emerge as mostly trusted and controllable platforms with a multitude of benefits that, in sum, usually outweigh their potential for misuse. Nevertheless, in the end it is not up to us to balance the usefulness of Text 2.0 against its threats, but up to you, through your support or disapproval.



Is using an eye tracker / taking a demo dangerous?
We are not aware of any health issues that eye tracking poses. Getting to the demo location is probably more dangerous.





Misc

Isn't the eyeReader version of The Little Prince a copy of Starker's virtual 3D world?
No. Starker and Bolt's self-disclosing display was a 3D environment. Their interaction paradigm was based on, roughly speaking, accumulated attention and did not employ any sort of reading detection or text interaction. The fact that both share The Little Prince as their central element is a coincidence. We chose the book because it is one of the most beautiful, yet comprehensible, texts we know.



Why is it Text 2.0 and not Reading 2.0?
Because "reading" is an English word, whereas "text" is understood (more or less) internationally. It is also shorter.