The aim of this practical seminar (7 CP) is to realize a complete project pipeline, from the problem statement to finding solutions with machine learning methods on our Deep Learning cluster. The topics are proposed by various groups from the Mathematics and Computer Science departments and cover, for example, computational linguistics, bioinformatics, computer vision, computer graphics, language processing, and optimization.

Each topic will be supervised by the group that proposed it. Up to three students will work on one topic. The project seminar consists of three parts. In the first part, the student group working on a topic will get acquainted with the topic and the input data and research potential solutions for the given problem. The second part is the implementation and testing of solutions. At the end of the course, the groups will present their topics and solutions in a seminar.

Dates:

  • Registration*: April 10th to April 17th, 2018, HERE
  • Kick-off meeting: April 19th, 2018, 10:00-11:00, Building E 2.1, Room 2.06
  • Deadline to register in HISPOS or de-register from the seminar*: May 10th, 2018
  • Lecture dates:
    • General introduction: April 20th, 2018, 9:00-10:00, Building E 2.1, Room 2.06
    • Technical introduction: April 27th, 2018, 9:00-10:00, Building E 2.1, Room 2.06
  • Intermediate progress meeting: May 18th, 2018, 8:00-10:00, Building E 2.1, Room 2.06
  • Deadline for handing in implementation results: July 13th, 2018
  • Presentations: July 20th, 2018, 10:15-12:30, Building E 2.1, Room 2.06
  • Final report deadline: July 27th, 2018
*If you want to deregister from the seminar, please send the tutor an email, regardless of whether you have (de)registered in HISPOS.

Requirements for participation:

  • at least one of the following courses: Machine Learning, Statistical Learning, or Neural Networks: Implementation and Applications

Certificate requirements:

  • Successful presentation:
    • Talk: 30 minutes
    • Questions from the tutors/audience after the presentation
  • Taking minutes during the practical part to make clear which student worked on which part of the project.
  • Handing in a final report after the presentation along with the protocol of the practical part.

Final grade:

  • Based on the given presentation (see “Certificate requirements”)
  • May be influenced by the submitted report and handling of the practical part


Topics:
Topic 1: miRNA target prediction (Supervisor: Prof. Keller)

miRNAs are small non-coding RNA molecules that regulate gene expression post-transcriptionally [1]. The prediction of miRNA targets is still challenging and relies on feature engineering and extraction [2]. Deep learning approaches may help to resolve these issues by learning complex features directly from the data. One of the proposed approaches [3] uses auto-encoders to learn a new mRNA/miRNA representation from the sequence data and feeds it into a subsequent neural network for target classification. The goal of the project is to implement the described model in Python using Keras [4] and TensorFlow [5]. The model should be trained, tuned, and validated on the given data. Moreover, different model structures should be tested, e.g. CNN (convolutional neural network) and RNN (recurrent neural network) layers.

References:
[1] David P. Bartel. Metazoan MicroRNAs. Cell, 2018.
[2] Sarah M. Peterson, Jeffrey A. Thompson, Melanie L. Ufkin, Pradeep Sathyanarayana, Lucy Liaw, Clare Bates Congdon. Common features of microRNA target prediction tools. Front Genet, 2014.
[3] Byunghan Lee, Junghwan Baek, Seunghyun Park, Sungroh Yoon. deepTarget: End-to-end Learning Framework for microRNA Target Prediction using Deep Recurrent Neural Networks. arXiv, 2016.
[4] Keras
[5] TensorFlow

Participants: Saheli De
Office hours: Fridays, 9:00-10:00
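Before a model in the spirit of [3] can be trained, the raw sequences must be turned into numeric input. Below is a minimal sketch of this preprocessing step; the alphabet, padding length, and helper name are illustrative assumptions, not part of the deepTarget code:

```python
import numpy as np

# Hypothetical helper: one-hot encode an RNA sequence so it can be fed
# into a Keras/TensorFlow sequence model. Alphabet order and the fixed
# padding length are assumptions made for this sketch.
ALPHABET = "ACGU"

def one_hot_encode(seq, max_len=30):
    """Return a (max_len, 4) matrix; positions beyond len(seq) stay zero."""
    mat = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for i, base in enumerate(seq[:max_len]):
        mat[i, ALPHABET.index(base)] = 1.0
    return mat

x = one_hot_encode("UGAGGUAGUAGGUUGUAUAGUU")  # a let-7 family miRNA sequence
```

Stacking such matrices for all miRNA/mRNA pairs yields the 3-D input tensor that recurrent or convolutional Keras layers expect.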
Topic 2: New types of word representations (Supervisor: Prof. Klakow)

Representing words as vectors has been extremely popular since the advent of tools like word2vec. However, vectors have bad (that is, unrealistic) compositional properties. Consider "Only he told me that …" vs. "He told only me that …". Both versions use the same words but have very different meanings. Adding up the vectors representing the words gives the same result for both versions, because vector addition is commutative while word composition in a sentence is not. The simplest possible extension is to represent words by matrices. This has the advantage that matrix multiplication is non-commutative, just as word composition is, so computing sentence meaning by matrix multiplication is more realistic. The goal of this project is to develop a word2mat tool and check its properties on standard language modelling tasks.

Participants: Mossad Helali, Muhammad Ehtisham Ali, Ayan Majumdar, Shahzain Mehboob
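The commutativity argument can be checked directly. The toy sketch below contrasts vector addition with matrix multiplication for two random word representations; the dimensions and values are arbitrary, and this is not the word2mat tool to be developed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vector representations for the words "only" and "he".
v_only, v_he = rng.standard_normal(4), rng.standard_normal(4)
# Vector addition is commutative, so word order is lost.
vec_commutes = np.allclose(v_only + v_he, v_he + v_only)   # True

# Matrix representations of the same words, composed by matrix product.
M_only, M_he = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
# Generic matrices do not commute, so the two word orders differ.
mat_commutes = np.allclose(M_only @ M_he, M_he @ M_only)   # False
```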
Topic 3: Depth-aware dilated convolution networks for RGBD semantic segmentation of traffic scenes (Supervisors: Mario Fritz & Yang He)

Understanding street scenes is a key ingredient to the success of autonomous and assisted driving. One widely used formulation of this task is pixel-wise labeling into relevant semantic classes such as road, car, pedestrian, and so on. State-of-the-art methods rely on contextual information for highly accurate predictions in realistic conditions. Dilated convolutions [1], for example, are widely used in semantic segmentation because they acquire a large receptive field while keeping the feature map resolution. Moreover, it has been shown that dilation factors can be learned automatically for better context modeling [2]. However, [2] exploits only color information to learn dilation factors. When depth is easily captured, depth information is potentially more effective than the color image for determining the size of regions or objects. Therefore, this project will investigate the use of depth information for learning more suitable dilation factors for RGBD semantic segmentation of traffic scenes.

References:
[1] F. Yu and V. Koltun. Multi-Scale Context Aggregation by Dilated Convolutions. ICLR, 2016.
[2] Y. He, M. Keuper, B. Schiele and M. Fritz. Learning Dilation Factors for Semantic Segmentation of Street Scenes. GCPR, 2017.
[3] S. Kong, C. Fowlkes. Recurrent Scene Parsing with Perspective Understanding in the Loop. CVPR, 2018.
[4] W. Wang and U. Neumann. Depth-aware CNN for RGB-D Segmentation. arXiv, 2018.

Participants: Subhabrata Choudhury, Divyam Saran
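To see why dilation enlarges the receptive field without adding parameters, here is a hypothetical 1-D sketch; real segmentation networks use 2-D dilated convolutions from a framework such as TensorFlow, and the function below exists only for illustration:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D convolution with a dilation factor (illustrative only).

    Returns the output signal and the receptive-field span of one output,
    which grows with dilation while the kernel size (parameter count) stays fixed.
    """
    k = len(w)
    span = (k - 1) * dilation + 1          # receptive field of one output value
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        # Sample the input at strided positions instead of adjacent ones.
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out, span

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])              # same 3 weights in both cases
_, span1 = dilated_conv1d(x, w, dilation=1)   # receptive field 3
_, span2 = dilated_conv1d(x, w, dilation=2)   # receptive field 5
```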
Topic 4: Domain adaptation for neural discourse relation classifiers (Supervisors: Vera Demberg & Wei Shi)

Discourse relation classification is the task of determining how two sentences relate to one another. For instance, in a pair of sentences like "It's cold. The radiator is broken.", human comprehenders will usually infer a causal relation between the two sentences. Being able to automatically compute such inferences is an important step in deep language understanding; however, the task is very challenging when explicit cues like "because", "but", or "however" are not present in the text. Current state-of-the-art systems for automatic discourse relation classification rely on neural networks, e.g. LSTMs [1]. Due to the relatively small amount of training data, it is important to train models such that they don't just learn idiosyncrasies of the training data but also generalize well to new domains [2]. The central goal of this project is to apply neural discourse relation classifiers to a new out-of-domain dataset [3] to establish a baseline, and then to improve over this baseline by using neural domain adaptation methods [4] as well as additional weakly labelled data [5].

References:
[1] Attapol T. Rutherford, Vera Demberg, and Nianwen Xue. A systematic study of neural discourse models for implicit discourse relation. In Proceedings of EACL, 2017.
[2] Wei Shi and Vera Demberg. Do We Need Cross Validation for Discourse Relation Classification? In Proceedings of EACL (Volume 2, Short Papers), 2017.
[3] Ines Rehbein, Merel Scholman, and Vera Demberg. Annotating discourse relations in spoken language: A comparison of the PDTB and CCR frameworks. In Proceedings of LREC, Portoroz, Slovenia, 2016.
[4] Man Lan et al. Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification. In Proceedings of EMNLP, 2017.
[5] W. Shi, F. Yung, R. Rubino, and V. Demberg. Using Explicit Discourse Connectives in Translation for Implicit Discourse Relation Classification. In Proceedings of IJCNLP (Volume 1: Long Papers), pp. 484-495, 2017.
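As a rough illustration of how weakly labelled data in the spirit of [5] could be generated, the sketch below drops an explicit connective and keeps its associated relation as a weak label for the resulting implicit pair. The connective-to-relation mapping shown is a simplified assumption, not the PDTB sense inventory:

```python
# Hypothetical mapping from explicit connectives to coarse discourse relations
# (a real system would use a full, sense-disambiguated inventory).
CONNECTIVE_RELATION = {"because": "Cause", "but": "Contrast", "however": "Contrast"}

def weak_label(arg1, connective, arg2):
    """Drop the explicit connective; keep its relation as a weak label."""
    relation = CONNECTIVE_RELATION[connective.lower()]
    return (arg1, arg2), relation

pair, rel = weak_label("It's cold.", "because", "The radiator is broken.")
```

The resulting pairs can then be added to the training data for an implicit-relation classifier.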
Topic 5: Predicting the sensitivity of cancer drugs (Supervisors: Tim Kehl & Kerstin Lenhof)

Cancer is a heterogeneous class of diseases that are caused by an interplay of various genetic and environmental factors and that can be characterized by a common set of features, known as the Hallmarks of Cancer. The high genotypic and phenotypic diversity among tumors makes the treatment of malignant tumors a grand challenge. As a remedy, optimal therapies have to be determined in a personalized manner, based on an in-depth characterization of each tumor's genetic and epigenetic makeup. In this project, machine learning techniques are to be used to predict the sensitivity to certain drugs based on genetic and epigenetic markers of tumors. To this end, we use the dataset of the Genomics of Drug Sensitivity in Cancer (GDSC) project, which contains measurements of gene expression, genetic aberrations, and methylation for over 1000 cancer cell lines, along with their responses to 250 anti-cancer drugs.

Participants: Hasan Md Tusfiqur Alam, Anilkumar Erappanakoppal Swamy, Louisa Schwed
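A natural first baseline for this kind of task is a linear model that predicts drug response from molecular features. The sketch below uses ridge regression on synthetic stand-in data; the real GDSC measurements must be obtained and preprocessed separately, and the feature counts here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for GDSC-style data: rows = cell lines, columns =
# molecular features (e.g. gene expression, methylation); y = drug response.
n_lines, n_features = 100, 20
X = rng.standard_normal((n_lines, n_features))
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.1 * rng.standard_normal(n_lines)   # response with noise

# Ridge regression baseline via its closed-form solution:
# w = (X^T X + lambda I)^(-1) X^T y
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
pred = X @ w_hat
r = np.corrcoef(y, pred)[0, 1]   # correlation of predicted vs. observed response
```

In the project, such a baseline would be evaluated per drug with proper cross-validation before moving to more expressive models.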