In Kognit (2014–2015), we enter the mixed reality realm to help dementia patients. Dementia is a general term for a decline in mental ability severe enough to interfere with daily life; memory loss is one example, and Alzheimer's disease is the most common type of dementia. Mixed reality refers to the merging of real and virtual worlds to produce new episodic memory visualisations in which physical and digital objects co-exist and interact in real time. Cognitive models are approximations of a patient's mental abilities and limitations involving conscious mental activities (such as thinking, understanding, learning, and remembering). We are concerned with fundamental research in coupling artificial-intelligence-based situation awareness with augmented cognition for the patient.

Kognit is a BMBF-funded fundamental research pre-project building on ERmed and RadSpeech.

The focus is on body sensor interpretation, activity recognition, and pro-active episodic memory aids.

Cognitive Model

  • Artificial Intelligence learning system for activity recognition
  • Modelling of the situation
  • Modelling of the cognitive impairments
  • Modelling of the augmented cognition
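
To make the first bullet concrete, here is a minimal, hypothetical sketch of activity recognition from body-sensor data: a nearest-centroid classifier over two hand-picked accelerometer features. The centroid values and activity labels are illustrative assumptions, not the project's trained models.

```python
import math
from statistics import mean, stdev

# Illustrative centroids in (mean, stdev) feature space for windows of
# acceleration magnitude (in g). A real system would learn these.
CENTROIDS = {
    "resting": (1.0, 0.02),   # ~1 g with little variation
    "walking": (1.1, 0.35),
    "running": (1.4, 0.80),
}

def features(window):
    """Mean and standard deviation of acceleration magnitude over a window."""
    return (mean(window), stdev(window))

def classify(window):
    """Return the activity whose centroid is nearest in feature space."""
    f = features(window)
    return min(CENTROIDS, key=lambda a: math.dist(f, CENTROIDS[a]))

print(classify([1.0, 1.01, 0.99, 1.0, 1.02]))  # → resting
```

In practice such windowed features would be computed continuously over the sensor stream and fed to a learned classifier rather than fixed centroids.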

Mixed Reality

  • Eye tracker and video camera realtime input
  • Head-mounted display output
  • Real-time registration into the field of view
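
The registration step can be sketched as projecting a tracked 3D point into the wearer's field of view with a pinhole camera model. The intrinsics below are illustrative placeholders, not calibrated values for our hardware.

```python
# Illustrative pinhole intrinsics for a hypothetical 1280x720 display.
FX, FY = 800.0, 800.0      # focal lengths in pixels
CX, CY = 640.0, 360.0      # principal point (display centre)

def project(x, y, z):
    """Return the display pixel where a virtual overlay must be drawn so it
    appears anchored to the real-world point (x, y, z) in camera
    coordinates (metres), with z > 0 in front of the camera."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (FX * x / z + CX, FY * y / z + CY)

print(project(0.0, 0.0, 2.0))   # on-axis point lands at the display centre
```

Real-time registration then amounts to re-running this projection (with the tracked head pose folded in) every frame, so overlays stay locked to physical objects.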

The future of care for older people
In order to provide a new foundation for managing dynamic content and to improve the usability of optical see-through HMDs and mobile eye-tracker systems, we implement a salience-based activity recognition system and combine it with an intelligent multimodal display management system for text and images.
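
As a rough, hypothetical sketch of what salience-based display management could look like, the snippet below scores candidate memory cues by importance attenuated by distance from the current gaze point and shows only the winner. The scoring rule, cue labels, and weights are illustrative assumptions, not the project's actual model.

```python
import math

def salience(gaze, item):
    """Importance weight attenuated by distance (pixels) from the gaze point."""
    d = math.dist(gaze, item["pos"])
    return item["weight"] / (1.0 + d / 100.0)

def select_cue(gaze, items):
    """Pick the single most salient cue to render, avoiding display clutter."""
    return max(items, key=lambda it: salience(gaze, it))["label"]

# Hypothetical cues with display positions and importance weights.
cues = [
    {"label": "medication reminder", "pos": (600, 350), "weight": 1.0},
    {"label": "name of visitor",     "pos": (100, 100), "weight": 0.8},
]
print(select_cue((620, 340), cues))  # → medication reminder
```

Showing one cue at a time keeps the augmentation unobtrusive, which matters for users with cognitive impairments.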



Image showing the general structure of our latest hardware prototype: an example of a configuration that allows the user to merge a binocular telescopic video stream into a one-to-one field of view.

  • SMI stereoscopic eye-tracker (later integrated directly into the display),
  • Oculus Rift DK2 head-mounted display,
  • modular attachment plate used to interchange various camera-lens modules,
  • stereo camera pairs with telescopic, fisheye, and ultrawide vision augmentation lenses.

SensoMotoric Instruments (SMI) provides the following products:

  • SMI Eye Tracking Glasses with real-time data streaming for mobile gaze interaction
  • SMI Eye Tracking Upgrade for the Oculus Rift DK2 HMD with a dedicated plug-in for the Unity engine allowing for easy integration of gaze interaction in virtual environments

Pupil Labs provides a mobile eye-tracking headset with open-source software for real-time pupil detection and gaze mapping.

Narrative provides a small wearable camera for scene and context detection.


MetaPro provides a first ergonomic head-mounted display solution.


Epson Moverio provides a binocular display; Brother AiRScouter provides a high-contrast retinal display.

NAO is a small companion robot. 


The Oculus Rift DK2 is a virtual reality head-mounted display.


We also integrated the desktop Tobii EyeX into Unity3D. For 3D reconstruction we use the Structure sensor (structure.io) in combination with Skanect, for capturing a full-colour 3D model of an object, and Faceshift, for analysing the facial motions of a patient and describing them as a mixture of basic expressions, plus head orientation and gaze.
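
The "mixture of basic expressions" output can be sketched as a blendshape model: the tracked face is the neutral mesh plus a weighted sum of per-expression offsets. The tiny three-value "mesh" and the offsets below are toy numbers for illustration, not Faceshift's actual basis.

```python
# Toy neutral "mesh" of three 1-D vertex values.
NEUTRAL = [0.0, 0.0, 0.0]

# Hypothetical per-expression vertex offsets (blendshapes).
BLENDSHAPES = {
    "smile":    [0.2, 0.0, 0.1],
    "brows_up": [0.0, 0.3, 0.0],
}

def blend(weights):
    """Apply blendshape weights (0..1) as offsets from the neutral mesh."""
    mesh = list(NEUTRAL)
    for name, w in weights.items():
        for i, off in enumerate(BLENDSHAPES[name]):
            mesh[i] += w * off
    return mesh

print(blend({"smile": 0.5, "brows_up": 1.0}))
```

A tracker emits one such weight vector per frame, so the patient's expression state is a compact, interpretable signal for the cognitive model.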


We also use Anoto digital pens and the Galaxy Note 3 for direct digitisation and recognition of handwritten gestures.
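
A minimal sketch of the recognition end of this pipeline: classifying a digitised pen stroke by its dominant net displacement. A real recogniser (e.g. template matching against a gesture library) would be far richer; this only illustrates the stroke-in, label-out interface, and the labels are made up for the example.

```python
def classify_stroke(points):
    """Classify a pen stroke, given as a list of (x, y) samples
    (y grows downwards), by its dominant net displacement."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) >= abs(dy):
        return "swipe right" if dx > 0 else "swipe left"
    return "swipe down" if dy > 0 else "swipe up"

print(classify_stroke([(0, 0), (40, 2), (90, 5)]))  # → swipe right
```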


For the HMD augmented reality rendering, we are experimenting with Vuforia and Metaio, both vision-based augmented reality software platforms. 

The new Google Glass is also used in combination with a depth camera.