Our team specializes in theoretical and experimental research on sign language linguistics. The elicitation of specific data is conducted with the assistance of DGS native signers and early learners. For the video sessions, we use picture elicitation tasks, question-answer tasks, retellings of picture stories, narration tasks, translation tasks, and many other methods. We also record natural language data in conversations between native signers at the sign language lab. At least two cameras film simultaneously, one centered on the upper body and the other zoomed in on the face, enabling us to study both the manual and non-manual aspects of signed languages. The raw footage is cut, edited, and then annotated with professional annotation software specifically designed for this task, which guarantees a systematic analysis of the data.

Moreover, we conduct eye-tracking experiments, for example to investigate eye movements during signing. For this we use a head-mounted eye tracker: a helmet-like device with two small cameras mounted on top. One camera films the scene, i.e. the region in which the signing takes place, while the other films the movement of the pupils. The two recordings are then merged on a computer, showing exactly where the signer was looking at any given moment while signing.

Currently, we are conducting EEG studies in a joint project with the University of Mainz. Electroencephalography allows us to investigate the time course of language processing in the brain more precisely. A cap with electrodes measures brain activity while subjects watch filmed sign language sentences.

More detailed information on the respective research methods is available in the individual project descriptions (see projects).