In Kognit (2014–2015), we enter the realm of mixed reality to help dementia patients. Dementia is a general term for a decline in mental ability severe enough to interfere with daily life; memory loss is one example, and Alzheimer’s is the most common type of dementia. Mixed reality refers to the merging of real and virtual worlds to produce new episodic memory visualisations, where physical and digital objects co-exist and interact in real time. Cognitive models are approximations of a patient’s mental abilities and limitations involving conscious mental activities (such as thinking, understanding, learning, and remembering). We are concerned with fundamental research in coupling artificial-intelligence-based situation awareness with augmented cognition for the patient.
Kognit is a BMBF fundamental research pre-project based on ERmed and RadSpeech.
Focus on body sensor interpretation, activity recognition, and pro-active episodic memory aid
Artificial Intelligence learning system for activity recognition
Modelling of the situation
Modelling of the cognitive impairments
Modelling of the augmented cognition
Real-time eye tracker and video camera input
Head-mounted display output
Real-time registration into the field of view
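The activity recognition component above can be sketched in a few lines: body sensor samples are cut into windows, summarised as feature vectors, and matched against per-activity centroids learned from labelled examples. This is an illustrative Python sketch under assumed names and features, not Kognit's actual learning system.

```python
# Illustrative sketch: windowed feature extraction over accelerometer
# magnitudes, followed by a nearest-centroid activity classifier.
# Feature choice and function names are assumptions for illustration.
from statistics import mean, pstdev


def extract_features(window):
    """Summarise a window of accelerometer magnitudes as (mean, std dev)."""
    return (mean(window), pstdev(window))


def train_centroids(labelled_windows):
    """Average the feature vectors per activity label."""
    sums = {}
    for label, window in labelled_windows:
        fx, fy = extract_features(window)
        (sx, sy), n = sums.get(label, ((0.0, 0.0), 0))
        sums[label] = ((sx + fx, sy + fy), n + 1)
    return {lab: (sx / n, sy / n) for lab, ((sx, sy), n) in sums.items()}


def classify(window, centroids):
    """Assign the window to the activity with the closest centroid."""
    fx, fy = extract_features(window)
    return min(centroids,
               key=lambda lab: (fx - centroids[lab][0]) ** 2
                             + (fy - centroids[lab][1]) ** 2)
```

For example, low-variance windows would be learned as "resting" and high-variance ones as "walking"; a fresh window is then labelled by its nearest centroid.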
In order to provide a new foundation for managing dynamic content and to improve the usability of optical see-through HMDs and mobile eye tracker systems, we implement a salience-based activity recognition system and combine it with an intelligent multimodal display management system for text and images.
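A salience-based display manager can be sketched as a ranking problem: each candidate annotation carries a salience score, which is weighted down by its angular distance from the current gaze point, and only the top few items are rendered to keep the see-through view uncluttered. The scoring function and field names below are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch of salience-based display management for an HMD:
# rank candidate annotations by salience, discounted by distance from the
# current gaze point, and render only the top-ranked few.

def select_annotations(candidates, gaze, max_items=3):
    """candidates: dicts with 'label', 'salience', 'pos' (x, y).
    Return the labels of the max_items most relevant annotations."""
    def relevance(a):
        dx = a["pos"][0] - gaze[0]
        dy = a["pos"][1] - gaze[1]
        # Salience discounted by Euclidean distance from the gaze point.
        return a["salience"] / (1.0 + (dx * dx + dy * dy) ** 0.5)

    ranked = sorted(candidates, key=relevance, reverse=True)
    return [a["label"] for a in ranked[:max_items]]
```

With this scheme, a highly salient object near the gaze point outranks an equally salient object in the periphery, which matches the goal of showing memory cues where the patient is actually looking.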
We also integrated the desktop Tobii EyeX eye tracker into Unity3D. For 3D reconstruction we use the Structure sensor (structure.io) in combination with Skanect for capturing a full-colour 3D model of an object, and Faceshift for analysing a patient's facial motions and describing them as a mixture of basic expressions, plus head orientation and gaze.
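The core of such a gaze integration can be sketched as a hit test plus a dwell-time criterion: gaze samples in normalised screen coordinates are matched against object bounding boxes, and an object counts as fixated once gaze rests on it long enough. This is a minimal Python sketch with assumed names and an assumed dwell threshold, standing in for the actual Unity3D/EyeX integration.

```python
# Illustrative sketch of gaze-to-object mapping: gaze samples (normalised
# screen coordinates) are hit-tested against object bounding boxes, and a
# fixation is reported after a minimum dwell time on the same object.
# Names and the 0.3 s threshold are assumptions for illustration.

def hit_test(gaze, objects):
    """Return the name of the first object whose box contains the gaze point."""
    x, y = gaze
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None


def detect_fixation(samples, objects, dwell_s=0.3):
    """samples: list of (timestamp, (x, y)). Return the fixated object or None."""
    current, start = None, None
    for t, gaze in samples:
        hit = hit_test(gaze, objects)
        if hit != current:
            current, start = hit, t      # gaze moved to a different object
        elif current is not None and t - start >= dwell_s:
            return current               # dwelled long enough: fixation
    return None
```

In the real system the hit test would be a ray cast into the reconstructed 3D scene rather than a 2D box check, but the dwell logic is the same.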
We also use an Anoto digital pen and a Galaxy Note 3 for direct digitisation and recognition of handwritten gestures.
For the HMD augmented reality rendering, we are experimenting with Vuforia and Metaio, both vision-based augmented reality software platforms.
The new Google Glass is also used in combination with a depth camera.
Takumi Toyama, Jason Orlosky, Daniel Sonntag and Kiyoshi Kiyokawa: "A Natural Interface for Multi-focal Plane Head Mounted Displays Using 3D Gaze." In Proceedings of the 12th International Working Conference on Advanced Visual Interfaces (AVI 2014), ACM, 2014. (Video: first and second half of Jason Orlosky's presentation at AVI 2014.)