Publications

    Nemec, B. and Vuga, R. and Ude, A. (2013).
Efficient sensorimotor learning from multiple demonstrations. Advanced Robotics, 27(13), 1023-1031.
    BibTeX:
    @article{nemecvugaude2013,
      author = {Nemec, B. and Vuga, R. and Ude, A.},
      title = {Efficient sensorimotor learning from multiple demonstrations},
      pages = {1023-1031},
      journal = {Advanced Robotics},
      year = {2013},
      volume= {27},
      number = {13},
  doi = {10.1080/01691864.2013.814211}
}

Abstract: In this paper, we present a new approach to the problem of learning motor primitives, which combines ideas from statistical generalization and error learning. The learning procedure is formulated in two stages. The first stage is based on the generalization of previously trained movements associated with a specific task configuration, which results in a first approximation of a suitable control policy in a new situation. The second stage applies learning in the subspace defined by the previously acquired training data, which results in a learning problem in a constrained domain. We show that reinforcement learning in a constrained domain can be interpreted as an error-learning algorithm. Furthermore, we propose modifications to speed up the learning process. The proposed approach was tested both in simulation and experimentally on two challenging tasks: learning of matchbox flip-up and pouring.

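For readers who want to experiment, the first (generalization) stage described above can be approximated in a few lines of numpy: synthesize DMP shape weights for a new task parameter by Gaussian-kernel weighted averaging over the demonstrated weight vectors. The function name, kernel, and bandwidth below are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def generalize_weights(queries, weight_sets, query_new, bandwidth=0.1):
        """Locally weighted average of demonstrated DMP weight vectors.
        queries     : (N,) task parameters of the N demonstrations
        weight_sets : (N, K) DMP shape weights, one row per demonstration
        query_new   : task parameter of the new situation
        """
        d = np.asarray(queries, dtype=float) - query_new
        k = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
        k /= k.sum()                              # normalize
        return k @ np.asarray(weight_sets)        # (K,) generalized weights

    # Example: three demonstrations recorded at task parameters 0.2, 0.5, 0.8
    W = np.random.default_rng(0).normal(size=(3, 10))
    w_new = generalize_weights([0.2, 0.5, 0.8], W, query_new=0.6)
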
    Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Semantic image search for robotic applications. Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD), 1-8.
    BibTeX:
    @inproceedings{kulviciusmarkelictamosiunaite2013,
      author = {Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic image search for robotic applications},
      pages = {1-8},
      booktitle = {Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2013},
  location = {Portorož, Slovenia},
      month = {September 11-13},
}

Abstract: Generalization in robotics is one of the most important problems. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information according to a query within milliseconds. However, not all of the returned information is task-relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human-labelled data and find that, on average, our approach leads to improved results in comparison to Google searches, and that it can treat the problem of polysemes.

    Markievicz, I. and Vitkute-Adzgauskiene, D. and Tamosiunaite, M. (2013).
Semi-supervised Learning of Action Ontology from Domain-Specific Corpora. Information and Software Technologies, vol. 403, 173-185.
    BibTeX:
    @incollection{markieviczvitkuteadzgauskienetamosi,
      author = {Markievicz, I. and Vitkute-Adzgauskiene, D. and Tamosiunaite, M.},
      title = {Semi-supervised Learning of Action Ontology from Domain-Specific Corpora},
      pages = {173-185},
      booktitle = {Information and Software Technologies},
      year = {2013},
      volume= {403},
      editor = {Skersys, Tomas and Butleris, Rimantas and Butkiene, Rita},
      publisher = {Springer Berlin Heidelberg},
  series = {Communications in Computer and Information Science},
  doi = {10.1007/978-3-642-41947-8_16}
}

Abstract: The paper presents research results showing how unsupervised and supervised ontology learning methods can be combined in an action ontology building approach. A framework for action ontology building from domain-specific corpus texts is suggested, using different natural language processing techniques, such as collocation extraction, frequency lists, and the word space model. The suggested framework employs the additional knowledge sources WordNet and VerbNet, with structured linguistic and semantic information. Results from experiments with crawled chemical laboratory corpus texts are given.

    Ude, A. and Nemec, B. and Petric, T. and Morimoto, J. (2014).
    Orientation in Cartesian space dynamic movement primitives. IEEE International Conference on Robotics and Automation (ICRA), 2997-3004.
    BibTeX:
    @inproceedings{udenemecpetric2014,
      author = {Ude, A. and Nemec, B. and Petric, T. and Morimoto, J.},
      title = {Orientation in Cartesian space dynamic movement primitives},
      pages = {2997-3004},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {May},
  doi = {10.1109/ICRA.2014.6907291}
}

Abstract: Dynamic movement primitives (DMPs) were proposed as an efficient way for learning and control of complex robot behaviors. They can be used to represent point-to-point and periodic movements and can be applied in Cartesian or in joint space. One problem that arises when DMPs are used to define control policies in Cartesian space is that there exists no minimal, singularity-free representation of orientation. In this paper we show how dynamic movement primitives can be defined for non-minimal, singularity-free representations of orientation, such as rotation matrices and quaternions. All of the advantages of DMPs, including ease of learning, the ability to include coupling terms, and scale and temporal invariance, can be adopted in our formulation. We also propose a new phase stopping mechanism to ensure full movement reproduction in case of perturbations.

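Below is a condensed numpy sketch of the quaternion variant of the orientation DMP described above: a goal-directed transformation system integrated with the quaternion exponential map. The gains, the time constant, and the omission of the forcing term are illustrative placeholders, not the paper's exact parameterization.

    import numpy as np

    def q_mul(a, b):
        # Hamilton product; quaternions stored as (w, x, y, z)
        w1, v1 = a[0], a[1:]
        w2, v2 = b[0], b[1:]
        return np.concatenate(([w1 * w2 - v1 @ v2],
                               w1 * v2 + w2 * v1 + np.cross(v1, v2)))

    def q_conj(q):
        return np.concatenate(([q[0]], -q[1:]))

    def q_log(q):
        # unit quaternion -> rotation vector (half-angle axis form)
        n = np.linalg.norm(q[1:])
        if n < 1e-12:
            return np.zeros(3)
        return np.arccos(np.clip(q[0], -1.0, 1.0)) * q[1:] / n

    def q_exp(r):
        n = np.linalg.norm(r)
        if n < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate(([np.cos(n)], np.sin(n) * r / n))

    def orientation_dmp_step(q, eta, g, tau, dt, alpha=48.0, beta=12.0):
        """One Euler step toward goal orientation g (forcing term omitted)."""
        err = 2.0 * q_log(q_mul(g, q_conj(q)))       # orientation error
        eta = eta + dt * (alpha * (beta * err - eta)) / tau
        q = q_mul(q_exp(0.5 * dt * eta / tau), q)    # quaternion update
        return q / np.linalg.norm(q), eta
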
    Petric, T. and Gams, A. and Zlajpah, L. and Ude, A. (2014).
Online learning of task-specific dynamics for periodic tasks. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1790-1795.
    BibTeX:
    @inproceedings{petricgamszlajpah2014,
      author = {Petric, T. and Gams, A. and Zlajpah, L. and Ude, A.},
      title = {Online learning of task-specific dynamics for periodic tasks},
      pages = {1790-1795},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2014},
      month = {Sept},
  doi = {10.1109/IROS.2014.6942797}
}

Abstract: In this paper we address the problem of accurate trajectory tracking while ensuring compliant robotic behaviour for periodic tasks. We propose an approach for on-line learning of task-specific dynamics, i.e. task-specific movement trajectories and corresponding force/torque profiles. The proposed control framework is a multi-step process, where in the first step a human tutor shows how to perform the desired periodic task. A state estimator based on an adaptive frequency oscillator combined with dynamic movement primitives is employed to extract movement trajectories. In the second step, the movement trajectory is accurately executed in the controlled environment under human supervision. In this step, the robot accurately tracks the acquired movement trajectory, using high feedback gains to ensure accurate tracking. Thus it can learn the corresponding force/torque profiles, i.e. the task-specific dynamics. Finally, in the third step, the movement is executed with the learned feedforward task-specific dynamic model, allowing for low position feedback gains, which implies compliant robot behaviour. Thus, it is safe for interaction with humans or the environment. The proposed approach was evaluated on a KUKA LWR robot performing object manipulation and crank turning.

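The state estimator in the first step builds on adaptive frequency oscillators. As a rough illustration (not the paper's implementation), a basic oscillator of this kind, with a Fourier-series estimate of the input signal, can be written as follows; the coupling gain K, learning rate lr, and Fourier order are assumptions.

    import numpy as np

    def afo_step(phi, omega, alphas, y, dt, K=20.0, lr=1.0):
        """One Euler step of an adaptive frequency oscillator that locks
        its phase phi and frequency omega onto a periodic input y(t)."""
        c = np.arange(len(alphas))
        basis = np.cos(c * phi)
        e = y - alphas @ basis                 # input vs. current estimate
        s = np.sin(phi)
        phi = phi + dt * (omega - K * e * s)   # phase locks onto the input
        omega = omega - dt * K * e * s         # frequency adapts likewise
        alphas = alphas + dt * lr * e * basis  # Fourier amplitudes adapt
        return phi, omega, alphas
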
Markievicz, I. and Kapočiūtė-Dzikienė, J. and Tamošiūnaitė, M. and Vitkutė-Adžgauskienė, D. (2015).
Action Classification in Action Ontology Building Using Robot-Specific Texts. Information Technology and Control, 44(2), 155-164.
    BibTeX:
    @article{markieviczkapovciutedzikienetamovsi,
  author = {Markievicz, I. and Kapočiūtė-Dzikienė, J. and Tamošiūnaitė, M. and Vitkutė-Adžgauskienė, D.},
      title = {Action Classification in Action Ontology Building Using Robot-Specific Texts},
      pages = {155--164},
      journal = {Information Technology And Control},
      year = {2015},
      volume= {44},
      number = {2},
  doi = {10.5755/j01.itc.44.2.7322}
}

Abstract: Instructions written in human language cause no perception problems for humans, but become a challenge when translating them into a robot-executable format. This complex translation process covers different phases, including instruction completion by adding obligatory information that is not explicitly given in human-oriented instructions. A robot action ontology is a common source of such additional information, and it is normally structured around a limited number of verbs denoting robot-specific action categories, each of them characterized by a certain action environment. Semi-manual action ontology building procedures are normally based on domain-specific human-language text mining, and one of the problems to be solved is the assignment of action categories to the obtained verbs. Verbs in the English language are highly polysemous, so the action category, referring to different robot capabilities, can be determined only after comprehensive analysis of the verb's context. We formulate the task as a text classification task, where action categories are treated as classes and the respective verb contexts as classification instances. Since all classes are clearly defined, the supervised machine learning paradigm is the natural choice for this problem. We experimentally investigated different context window widths and directions (context on the right, left, or both sides of the analyzed verb) and feature types (symbolic, lexical, morphological, aggregated). All findings were confirmed on two different datasets. The fact that all obtained results are above the random and majority baselines allows us to claim that the proposed method can be used for predicting action categories. The best results were achieved with the Support Vector Machine method using a window width of only 25 symbols on the right and bag-of-words features; this exceeded the random and majority baselines by more than 37%, reaching 60% accuracy.

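The reported best configuration (SVM, bag-of-words, 25 symbols of right-hand context) is easy to mimic with off-the-shelf tools. The toy instances and action categories below are invented for illustration; only the overall setup follows the paper.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # 25 symbols of right-hand context per verb occurrence (toy examples)
    contexts = ["the flask with distilled", "the beaker onto the stand",
                "the solution for five min", "the tube into the centrif"]
    labels = ["fill", "place", "stir", "insert"]   # action categories

    clf = make_pipeline(CountVectorizer(), LinearSVC())  # bag-of-words + SVM
    clf.fit(contexts, labels)
    print(clf.predict(["the bottle with buffer so"]))
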
    Abu-Dakka, F. and Nemec, B. and Jorgensen, J. and Savarimuthu, T. and Krüger, N. and Ude, A. (2015).
Adaptation of manipulation skills in physical contact with the environment to reference force profiles. Autonomous Robots, 39(2), 199-217.
    BibTeX:
    @article{abudakkanemecjorgensen2015,
      author = {Abu-Dakka, F. and Nemec, B. and Jorgensen, J. and Savarimuthu, T. and Krüger, N. and Ude, A.},
      title = {Adaptation of manipulation skills in physical contact with the environment to reference force profiles},
      pages = {199--217},
      journal = {Autonomous Robots},
      year = {2015},
      volume= {39},
      number = {2},
      language = {English},
      publisher = {Springer US},
  doi = {10.1007/s10514-015-9435-2}
}

Abstract: We propose a new methodology for learning and adaptation of manipulation skills that involve physical contact with the environment. Pure position control is unsuitable for such tasks because even small errors in the desired trajectory can cause significant deviations from the desired forces and torques. The proposed algorithm takes a reference Cartesian trajectory and force/torque profile as input and adapts the movement so that the resulting forces and torques match the reference profiles. The learning algorithm is based on dynamic movement primitives and the quaternion representation of orientation, which provide a mathematical machinery for efficient and stable adaptation. Experimentally we show that the robot's performance can be significantly improved within a few iteration steps, compensating for vision and other errors that might arise during the execution of the task. We also show that our methodology is suitable both for robots with admittance and for robots with impedance control.

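The core adaptation loop can be caricatured in one line of numpy: shift the commanded trajectory in proportion to the force error and repeat until the measured profile matches the reference. The scalar gain and the direct position offset are simplifying assumptions; the paper's method operates on DMP parameters and handles orientation via quaternions.

    import numpy as np

    def adapt_trajectory(x_des, F_ref, F_meas, gain=0.002):
        """One adaptation iteration: positions and forces are sampled on a
        common time grid; the admittance-like gain is illustrative."""
        return x_des + gain * (F_ref - F_meas)

    # Repeated over executions i = 1, 2, ..., the commanded trajectory
    # converges so that the measured forces track the reference profile.
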
Lisca, G. and Nyga, D. and Bálint-Benczédi, D. (2015).
    The Chemist Robot Extracting DNA. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Under Review).
    BibTeX:
    @inproceedings{liscanygablintbenczedi2015,
  author = {Lisca, G. and Nyga, D. and Bálint-Benczédi, D.},
      title = {The Chemist Robot Extracting DNA},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      note = {Under Review},
}

Abstract: Autonomous mobile robots are employed to perform increasingly complex tasks, which require appropriate task descriptions, accurate object recognition, and dexterous object manipulation. In this paper we address three key questions: how to obtain appropriate task descriptions from natural language (NL) instructions, how to choose the most competent control program to perform a task description, and how to recognize and manipulate the objects referred to by a task description. We describe an evaluated robotic agent that takes a natural language instruction stating a step of a DNA extraction procedure as a starting point. The system is able to transform the textual instruction into an abstract symbolic plan representation. It can reason about the representation and answer queries about what, how, and why it is done. The robot selects the most appropriate control programs and robustly coordinates all manipulations required by the task description. The execution is based on a perception sub-system which is able to locate and recognize the objects and instruments needed in the DNA extraction procedure.

    Vuga, R. and Aksoy, E. E. and Wörgötter, F. and Ude, A. (2015).
Probabilistic semantic models for manipulation action representation and extraction. Robotics and Autonomous Systems, 65, 40-56.
    BibTeX:
    @article{vugaaksoywoergoetter2015,
      author = {Vuga, R. and Aksoy, E. E. and Wörgötter, F. and Ude, A.},
      title = {Probabilistic semantic models for manipulation action representation and extraction},
  pages = {40--56},
      journal = {Robotics and Autonomous Systems},
      year = {2015},
      volume= {65},
  doi = {10.1016/j.robot.2014.11.012}
}

Abstract: In this paper we present a hierarchical framework for the representation of manipulation actions and its applicability to the problem of top-down action extraction from observation. The framework consists of novel probabilistic semantic models, which encode contact relations as probability distributions over the action phase. The models are action-descriptive and can be used to provide probabilistic similarity scores for newly observed action sequences. The lower level of the representation consists of parametric hidden Markov models, which encode trajectory information.

    Kirk, N. H. and Nyga, D. and Beetz, M. (2014).
    Controlled Natural Languages for language generation in artificial cognition. IEEE International Conference on Robotics and Automation (ICRA), 6667-6672.
    BibTeX:
    @inproceedings{kirknygabeetz2014,
      author = {Kirk, N. H. and Nyga, D. and Beetz, M.},
      title = {Controlled Natural Languages for language generation in artificial cognition},
      pages = {6667-6672},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {May},
  doi = {10.1109/ICRA.2014.6907843}
}

Abstract: In this paper we discuss, within the context of artificial assistants performing everyday activities, a resolution method to disambiguate missing or not satisfactorily inferred action-specific information via explicit clarification. Noting the lack of preexisting robot-to-human linguistic interaction methods, we introduce a novel use of Controlled Natural Languages (CNLs) as a means of output language and sentence construction for doubt verbalization. We additionally provide implemented working scenarios, and state future possibilities and problems related to the verbalization of technical cognition when making use of Controlled Natural Languages.

    Haidu, A. and Kohlsdorf, D. and Beetz, M. (2014).
    Learning task outcome prediction for robot control from interactive environments. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4389-4395.
    BibTeX:
    @inproceedings{haidukohlsdorfbeetz2014,
      author = {Haidu, A. and Kohlsdorf, D. and Beetz, M.},
      title = {Learning task outcome prediction for robot control from interactive environments},
      pages = {4389-4395},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2014},
      month = {Sept},
  doi = {10.1109/IROS.2014.6943183}
}

Abstract: In order to manage complex tasks such as cooking, future robots need to be action-aware and possess common sense knowledge. For example, flipping a pancake requires a robot to know that a spatula has to be under the pancake in order to succeed. We present a novel approach for the extraction and learning of action and common sense knowledge, and developed a game using a robot simulator with realistic physics for data acquisition. The game environment is a virtual kitchen, in which a user has to create a pancake by pouring pancake mix on an oven and flipping it using a spatula. The interaction is done by controlling a virtual robot hand with a 3D input sensor. We incorporate a realistic fluid simulation in order to gather appropriate data of the pouring action. Furthermore, we present a task outcome prediction algorithm for this specific system and show how to learn a failure model for the pouring and flipping actions.

    Kiforenko, L. and Buch, A. G. and Bodenhagen, L. and Krüger, N. (2015).
Object detection using categorised 3D edges. Proc. SPIE, 9445, 94450C-94450C-8.
    BibTeX:
@inproceedings{kiforenkobuchbodenhagen2015,
      author = {Kiforenko, L. and Buch, A. G. and Bodenhagen, L. and Krüger, N.},
      title = {Object detection using categorised 3D edges},
      pages = {94450C-94450C-8},
  booktitle = {Proc. SPIE},
      year = {2015},
      volume= {9445},
  doi = {10.1117/12.2180551}
}

Abstract: In this paper we present an object detection method that uses edge categorisation in combination with a local multi-modal histogram descriptor, all based on RGB-D data. Our target application is robust detection and pose estimation of known objects. We propose to apply a recently introduced edge categorisation algorithm for describing objects in terms of their different edge types. Relying on edge information allows our system to deal with objects with little or no texture or surface variation. We show that edge categorisation improves matching performance due to the higher level of discrimination, which is made possible by the explicit use of edge categories in the feature descriptor. We quantitatively compare our approach with the state-of-the-art template-based Linemod method, which also provides an effective way of dealing with texture-less objects; tests were performed on our own object dataset. Our results show that detection based on the edge local multi-modal histogram descriptor outperforms Linemod with a significantly smaller number of templates.

Markievicz, I. and Vitkutė-Adžgauskienė, D. and Tamošiūnaitė, M. (2014).
Ontology Learning in Practice: Using Semantics for Knowledge Grounding. Advances in Educational Technologies and Instructional Design (IGI Global book series), 158-171.
    BibTeX:
@incollection{markieviczvitkuteadvzgauskienetamov,
  author = {Markievicz, I. and Vitkutė-Adžgauskienė, D. and Tamošiūnaitė, M.},
      title = {Ontology Learning in Practice: Using Semantics for Knowledge Grounding},
  pages = {158--171},
      booktitle = {IGI Global book series Advances in Educational Technologies and Instructional Design},
      year = {2014},
  doi = {10.4018/978-1-4666-6154-7.ch009}
}

Abstract: This chapter presents research results showing the use of ontology learning for knowledge grounding in e-learning environments. The established knowledge representation model is organized around actions, as the main elements linking the acquired knowledge with knowledge-based real-world activities. A framework for action ontology building from domain-specific corpus texts is suggested, utilizing different Natural Language Processing (NLP) techniques, such as collocation extraction, frequency lists, and the word space model. The suggested framework employs the additional knowledge sources WordNet and VerbNet, with structured linguistic and semantic information. Results from experiments with crawled chemical laboratory corpus texts are presented.

Schoeler, M. and Wörgötter, F. and Kulvicius, T. and Papon, J. (2015).
    Unsupervised Generation of Context-Relevant Training-Sets for Visual Object Recognition Employing Multilinguality. IEEE Winter Conference on Applications of Computer Vision (WACV), 805-812.
    BibTeX:
    @inproceedings{schoelerworgotterkulvicius2015,
  author = {Schoeler, M. and Wörgötter, F. and Kulvicius, T. and Papon, J.},
      title = {Unsupervised Generation of Context-Relevant Training-Sets for Visual Object Recognition Employing Multilinguality},
      pages = {805-812},
      booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year = {2015},
      month = {Jan},
  doi = {10.1109/WACV.2015.112}
}

Abstract: Image-based object classification requires clean training data sets. Gathering such sets is usually done manually by humans, which is time-consuming and laborious. On the other hand, directly using images from search engines creates very noisy data due to ambiguous noun-focused indexing. However, in daily speech nouns and verbs are always coupled. We use this for the automatic generation of clean data sets by the here-presented TRANSCLEAN algorithm, which through the use of multiple languages also solves the problem of polysemes (a single spelling with multiple meanings). Thus, we use the implicit knowledge contained in verbs, e.g. in an imperative such as "hit the nail", implying a metal nail and not the fingernail. One type of reference application where this method can automatically operate is human-robot collaboration based on discourse. A second is the generation of clean image data sets, where tedious manual cleaning can be replaced by the much simpler manual generation of a single relevant verb-noun tuple. Here we show the impact of our improved training sets for several widely used and state-of-the-art classifiers, including Multipath Hierarchical Matching Pursuit. All tested classifiers show a substantial boost of about 20% in recognition performance.

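To make the verb-noun idea concrete: translating the full imperative keeps the intended noun sense in each target language, so pooling image-search results over the translated queries suppresses wrong-sense images. The sketch below is schematic; the tiny translation table stands in for a machine-translation service, and the actual TRANSCLEAN algorithm is more involved.

    # Stand-in translation table (a real system would call an MT service).
    TRANSLATIONS = {
        ("hit the nail", "de"): "den Nagel treffen",
        ("hit the nail", "fr"): "frapper le clou",
    }

    def disambiguated_queries(verb, noun, languages=("de", "fr")):
        """The verb pins down the noun sense, so each translated query
        retrieves images of the metal nail, never the fingernail."""
        phrase = f"{verb} the {noun}"
        return [phrase] + [TRANSLATIONS[(phrase, lang)] for lang in languages]

    print(disambiguated_queries("hit", "nail"))
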
    Ude, A. (2014).
Estimation of Cartesian Space Robot Trajectories Using Unit Quaternion Space. International Journal of Advanced Robotic Systems, 11, 137-1-137-8.
    BibTeX:
@article{ude2014,
      author = {Ude, A.},
      title = {Estimation of Cartesian Space Robot Trajectories Using Unit Quaternion Space},
      pages = {137-1-137-8},
      journal = {International Journal of Advanced Robotic Systems},
      year = {2014},
      volume= {11},
  doi = {10.5772/58871}
}

Abstract: The ability to estimate Cartesian space trajectories that include orientation is of great importance for many practical applications. While it is becoming easier to acquire trajectory data by computer vision methods, data measured by general-purpose vision or depth sensors are often rather noisy. Appropriate smoothing methods are thus needed in order to reconstruct smooth Cartesian space trajectories from noisy measurements. In this paper, we propose an optimality criterion for the problem of the smooth estimation of Cartesian space trajectories that include the end-effector orientation. Based on this criterion, we develop an optimization method for trajectory estimation which takes into account the special properties of the orientation space, which we represent by unit quaternions. The efficiency of the developed approach is discussed and experimental results are presented.

    Wolniakowski, A. and Miatliuk, K. and Krüger, N. and Rytz, J. (2014).
Automatic Evaluation of Task-Focused Parallel Jaw Gripper Design. Simulation, Modeling, and Programming for Autonomous Robots, vol. 8810, 450-461.
    BibTeX:
    @incollection{wolniakowskimiatliukkrueger2014,
      author = {Wolniakowski, A. and Miatliuk, K. and Krüger, N. and Rytz, J.},
      title = {Automatic Evaluation of Task-Focused Parallel Jaw Gripper Design},
      pages = {450-461},
      booktitle = {Simulation, Modeling, and Programming for Autonomous Robots},
      year = {2014},
      volume= {8810},
      editor = {Brugali, Davide and Broenink, Jan F. and Kroeger, Torsten and MacDonald, Bruce A.},
      language = {English},
      publisher = {Springer International Publishing},
  series = {Lecture Notes in Computer Science},
  doi = {10.1007/978-3-319-11900-7_38}
}

Abstract: In this paper, we suggest gripper quality metrics that indicate the performance of a gripper given an object CAD model and a task description. These, we argue, can be used in the design and selection of an appropriate gripper when the task is known. We present three different gripper metrics that to some degree build on existing grasp quality metrics and demonstrate these on a selection of parallel jaw grippers. We furthermore demonstrate the performance of the metrics in three different industrial task contexts.

    Schoeler, M. and Papon, J. and Wörgötter, F. (2015).
    Constrained Planar Cuts - Object Partitioning for Point Clouds. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5207-5215.
    BibTeX:
    @inproceedings{schoelerpaponwoergoetter2015,
      author = {Schoeler, M. and Papon, J. and Wörgötter, F.},
      title = {Constrained Planar Cuts - Object Partitioning for Point Clouds},
      pages = {5207-5215},
      booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year = {2015},
      location = {Boston, MA, USA},
  month = {June},
  doi = {10.1109/CVPR.2015.7299157}
}

Abstract: While humans can easily separate unknown objects into meaningful parts, recent segmentation methods can only achieve similar partitionings by training on human-annotated ground-truth data. Here we introduce a bottom-up method for segmenting 3D point clouds into functional parts which does not require supervision and achieves equally good results. Our method uses local concavities as an indicator for inter-part boundaries. We show that this criterion is efficient to compute and generalizes well across different object classes. The algorithm employs a novel locally constrained geometrical boundary model which proposes greedy cuts through a local concavity graph. Only planar cuts are considered and evaluated using a cost function, which rewards cuts orthogonal to concave edges. Additionally, a local clustering constraint is applied to ensure the partitioning only affects relevant locally concave regions. We evaluate our algorithm on recordings from an RGB-D camera as well as the Princeton Segmentation Benchmark, using a fixed set of parameters across all object classes. This stands in stark contrast to most reported results, which require either knowing the number of parts or annotated ground-truth for learning. Our approach outperforms all existing bottom-up methods, reducing the gap to human performance by up to 50%, and achieves scores similar to top-down data-driven approaches.

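The local concavity criterion that drives the cuts can be stated compactly: for two adjacent surface patches, compare how their normals relate to the line connecting their centroids. The sketch below follows the commonly used locally-convex-connection test from the authors' related work on supervoxel segmentation; the angular tolerance value is an illustrative assumption.

    import numpy as np

    def is_concave(p1, n1, p2, n2, tol_deg=10.0):
        """Edge between adjacent patches (centroids p1, p2; unit normals
        n1, n2) is concave when the normals 'close toward' each other
        along the connecting direction; tol_deg absorbs sensor noise."""
        d = (p1 - p2) / np.linalg.norm(p1 - p2)
        return (n1 @ d) - (n2 @ d) < -np.sin(np.radians(tol_deg))
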
    Savarimuthu, R. and Papon, J. and Buch, A. G. and Aksoy, E. E. and Mustafa, W. and Wörgötter, F. and Krüger, N. (2015).
An Online Vision System for Understanding Complex Assembly Tasks. International Conference on Computer Vision Theory and Applications (VISAPP), 1-8.
    BibTeX:
    @inproceedings{savarimuthupaponbuch2015,
      author = {Savarimuthu, R. and Papon, J. and Buch, A. G. and Aksoy, E. E. and Mustafa, W. and Wörgötter, F. and Krüger, N.},
      title = {An Online Vision System for Understanding Complex Assembly Tasks},
  pages = {1--8},
      booktitle = {International Conference on Computer Vision Theory and Applications (VISAPP)},
      year = {2015},
  location = {Berlin, Germany},
      month = {March 11 - 14},
}

Abstract: We present an integrated system for the recognition, pose estimation and simultaneous tracking of multiple objects in 3D scenes. Our target application is a complete semantic representation of dynamic scenes, which requires three essential steps: recognition of objects, tracking their movements, and identification of interactions between them. We address this challenge with a complete system which uses object recognition and pose estimation to initiate object models and trajectories, a dynamic sequential octree structure to allow for full 6DOF tracking through occlusions, and a graph-based semantic representation to distil interactions. We evaluate the proposed method on real scenarios by comparing tracked outputs to ground-truth trajectories, and we compare the results to Iterative Closest Point and Particle Filter based trackers.

    Jorgensen, J. A. and Rukavishnikova, N. and Krüger, N. and Petersen, H. G. (2015).
Spatial constraint identification of parts in SE3 for action optimization. 2015 IEEE International Conference on Industrial Technology (ICIT), 474-480.
    BibTeX:
    @inproceedings{7125144,
      author = {Jorgensen, J. A. and Rukavishnikova, N. and Krüger, N. and Petersen, H. G.},
      title = {Spatial constraint identification of parts in SE3 for action optimization},
      pages = {474-480},
  booktitle = {2015 IEEE International Conference on Industrial Technology (ICIT)},
      year = {2015},
      month = {March},
  doi = {10.1109/ICIT.2015.7125144}
}

Abstract: In this paper we present a method to structure contextual knowledge in spatial regions/manifolds that may be used in action selection for industrial robotic systems. The contextual knowledge is built on relatively few prior task executions, and it may be derived either from teleoperation or from previous action executions. We argue that our contextual representation is able to improve the execution speed of individual actions and demonstrate this on a specific time-consuming action of object detection and pose estimation. Our contextual knowledge representation is especially suited for industrial environments where repetitive tasks such as bin- and belt-picking are plentiful. We present how we classify and detect the contextual information from prior task executions and demonstrate the performance gain on a real industrial pick-and-place problem.

    Savarimuthu, T. R. and Papon, J. and Buch, A. G. and Aksoy, E. E. and Mustafa, W. and Wörgötter, F. and Krüger, N. (2015).
    An Online Vision System for Understanding Complex Assembly Tasks. Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISIGRAPP), 454-461.
    BibTeX:
    @conference{savarimuthupaponbuch2015a,
      author = {Savarimuthu, T. R. and Papon, J. and Buch, A. G. and Aksoy, E. E. and Mustafa, W. and Wörgötter, F. and Krüger, N.},
      title = {An Online Vision System for Understanding Complex Assembly Tasks},
      pages = {454-461},
      booktitle = {Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISIGRAPP)},
      year = {2015},
      location = {Berlin (Germany)},
      month = {Mar},
  doi = {10.5220/0005260804540461}
}

Abstract: We present an integrated system for the recognition, pose estimation and simultaneous tracking of multiple objects in 3D scenes. Our target application is a complete semantic representation of dynamic scenes, which requires three essential steps: recognition of objects, tracking their movements, and identification of interactions between them. We address this challenge with a complete system which uses object recognition and pose estimation to initiate object models and trajectories, a dynamic sequential octree structure to allow for full 6DOF tracking through occlusions, and a graph-based semantic representation to distil interactions. We evaluate the proposed method on real scenarios by comparing tracked outputs to ground-truth part trajectories, and compare the results to Iterative Closest Point and Particle Filter based trackers.

    Agostini, A. and Torras, C. and Wörgötter, F. (2015).
Efficient interactive decision-making framework for robotic applications. Artificial Intelligence, 1-32.
    BibTeX:
    @article{agostinitorraswoergoetter2015,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Efficient interactive decision-making framework for robotic applications},
      pages = {1--32},
      journal = {Artificial Intelligence},
      year = {2015},
  doi = {10.1016/j.artint.2015.04.004}
}

Abstract: The inclusion of robots, such as service robots, in our society is imminent. Robots are now capable of reliably manipulating objects in our daily lives, but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner, and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause-effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.

    Denisa, M. and Gams, A. and Ude, A. and Petric, T. (2015).
    Generalization of discrete Compliant Movement Primitives. International Conference on Advanced Robotics (ICAR), 565-572.
    BibTeX:
    @inproceedings{denisagamsude2015,
      author = {Denisa, M. and Gams, A. and Ude, A. and Petric, T.},
      title = {Generalization of discrete Compliant Movement Primitives},
      pages = {565-572},
      booktitle = {International Conference on Advanced Robotics (ICAR)},
      year = {2015},
      month = {July},
      doi = {10.1109/ICAR.2015.7251512},
      abstract = {This paper addresses the problem of achieving high robot compliance while maintaining low tracking error without the use of dynamical models. The proposed approach uses programming by demonstration to learn new task-related compliant movements. The presented Compliant Movement Primitives are a combination of 1) position trajectories, obtained through human demonstration and encoded as Dynamical Movement Primitives, and 2) corresponding torque trajectories encoded as a linear combination of radial basis functions. A set of example Compliant Movement Primitives is used with statistical generalization in order to execute previously unexplored tasks inside the training space. The proposed control approach and generalization were evaluated with a discrete pick-and-place task on a Kuka LWR robot. The evaluation showed a major decrease in tracking error compared to a classic feedback approach and no significant rise in tracking error while using generalized Compliant Movement Primitives.}}
    		
    Abstract: This paper addresses the problem of achieving high robot compliance while maintaining low tracking error without the use of dynamical models. The proposed approach uses programming by demonstration to learn new task-related compliant movements. The presented Compliant Movement Primitives are a combination of 1) position trajectories, obtained through human demonstration and encoded as Dynamical Movement Primitives, and 2) corresponding torque trajectories encoded as a linear combination of radial basis functions. A set of example Compliant Movement Primitives is used with statistical generalization in order to execute previously unexplored tasks inside the training space. The proposed control approach and generalization were evaluated with a discrete pick-and-place task on a Kuka LWR robot. The evaluation showed a major decrease in tracking error compared to a classic feedback approach and no significant rise in tracking error while using generalized Compliant Movement Primitives.
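    A minimal sketch, assuming NumPy and illustrative parameter values, of the torque-encoding half described above: a demonstrated torque profile fitted as a linear combination of normalized radial basis functions over the movement phase. This is a sketch of the representation, not the authors' code.

      import numpy as np

      def rbf_basis(phase, n_basis=25, width=1250.0):
          # Normalized Gaussian basis functions with centers spread over phase in [0, 1].
          centers = np.linspace(0.0, 1.0, n_basis)
          psi = np.exp(-width * (np.atleast_1d(phase)[:, None] - centers[None, :]) ** 2)
          return psi / psi.sum(axis=1, keepdims=True)

      def fit_torque_weights(phase, torque):
          # Least-squares fit of the demonstrated torque trajectory to the basis.
          w, *_ = np.linalg.lstsq(rbf_basis(phase), torque, rcond=None)
          return w

      def torque_at(phase, w):
          # Reconstruct the feedforward torque at a given phase value.
          return rbf_basis(phase) @ w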
    Review:
    Denisa, M. and Gams, A. and Ude, A. and Petric, T. (2016).
    Learning compliant movement primitives through demonstration and statistical generalization. IEEE/ASME Transactions on Mechatronics, 1-1, PP, 99.
    BibTeX:
    @article{denisagamsude2016,
      author = {Denisa, M. and Gams, A. and Ude, A. and Petric, T.},
      title = {Learning compliant movement primitives through demonstration and statistical generalization},
      pages = {1-1},
      journal = {IEEE/ASME Transactions on Mechatronics},
      year = {2016},
      volume= {PP},
      number = {99},
      doi = {10.1109/TMECH.2015.2510165},
      abstract = {In this paper we address the problem of simultaneously achieving low trajectory tracking errors and compliant control without using explicit mathematical models of task dynamics. To achieve this goal, we propose a new movement representation called Compliant Movement Primitives, which encodes position trajectory and associated torque profiles and can be learned from a single user demonstration. With the proposed control framework, the robot can remain compliant and consequently safe for humans sharing its workspace, even if high trajectory tracking accuracy is required. We developed a statistical learning approach that can use a database of existing Compliant Movement Primitives and compute new ones, adapted for novel task variations. The proposed approach was evaluated on a Kuka LWR-4 robot performing 1) a discrete pick-and-place task with objects of varying weight and 2) a periodic handle turning operation. The evaluation of the discrete task showed a 15-fold decrease of the tracking error while exhibiting compliant behavior compared to the standard feedback control approach. It also indicated no significant rise in the tracking error while using generalized primitives computed by the statistical learning method. With respect to unforeseen collisions, the proposed approach resulted in a 75% drop of contact forces compared to standard feedback control. The periodic task demonstrated on-line use of the proposed approach to accomplish a task of handle turning.}}
    		
    Abstract: In this paper we address the problem of simultaneously achieving low trajectory tracking errors and compliant control without using explicit mathematical models of task dynamics. To achieve this goal, we propose a new movement representation called Compliant Movement Primitives, which encodes position trajectory and associated torque profiles and can be learned from a single user demonstration. With the proposed control framework, the robot can remain compliant and consequently safe for humans sharing its workspace, even if high trajectory tracking accuracy is required. We developed a statistical learning approach that can use a database of existing Compliant Movement Primitives and compute new ones, adapted for novel task variations. The proposed approach was evaluated on a Kuka LWR-4 robot performing 1) a discrete pick-and-place task with objects of varying weight and 2) a periodic handle turning operation. The evaluation of the discrete task showed a 15-fold decrease of the tracking error while exhibiting compliant behavior compared to the standard feedback control approach. It also indicated no significant rise in the tracking error while using generalized primitives computed by the statistical learning method. With respect to unforeseen collisions, the proposed approach resulted in a 75% drop of contact forces compared to standard feedback control. The periodic task demonstrated on-line use of the proposed approach to accomplish a task of handle turning.
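    A minimal sketch of the statistical-generalization step the abstract refers to: given stored weight vectors indexed by a task query (for example, object weight), a new primitive is synthesized by kernel-weighted averaging. The Gaussian kernel and bandwidth are illustrative assumptions; the paper's generalization method is not reproduced here.

      import numpy as np

      def generalize(queries, weights_db, new_query, bandwidth=0.1):
          # queries: (N, q) task parameters; weights_db: (N, n_basis) stored primitives.
          d = np.linalg.norm(np.atleast_2d(queries) - new_query, axis=1)
          k = np.exp(-(d / bandwidth) ** 2)
          return (k / k.sum()) @ weights_db   # convex combination of stored primitives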
    Review:
    Gams, A. and Denisa, M. and Ude, A. (2015).
    Learning of parametric coupling terms for robot-environment interaction. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 304-309.
    BibTeX:
    @inproceedings{gamsdenisaude2015,
      author = {Gams, A. and Denisa, M. and Ude, A.},
      title = {Learning of parametric coupling terms for robot-environment interaction},
      pages = {304-309},
      booktitle = {2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)},
      year = {2015},
      month = {November},
      doi = {10.1109/HUMANOIDS.2015.7363559},
      abstract = {In order to be effective, learning of robotic motion by demonstration should not remain limited to direct repetition of movements, but should enable modifications with respect to the state of the external environment, and generation of actions for previously unencountered situations. In this paper we propose an approach that combines these two features, and applies them in the framework of dynamic movement primitives (DMP). The proposed approach is based on the notion of motion adaptation through the use of coupling terms introduced to the DMPs at the velocity level. The coupling term is learned in a few repetitions of the motion with iterative learning control (ILC). The adaptation, which is based on force feedback, derives from either autonomous contact with the environment, or from human intervention. It can adapt to a given constraint, e.g., to a desired force of contact or to a given position. The major novelty of this paper is in extending this notion with statistical generalization between the coupling terms, allowing online adaptation of motion to a previously unexplored situation. The benefit of the approach is in reduced effort in human demonstration, because a single demonstration can be autonomously adapted to different situations with ILC, and recording the learned coupling terms builds up a database for generalization. A side-effect of learning, which takes a few iterations, is that the coupling terms of the learning attempts can also be stored in the database, allowing for different generalization queries and outcomes. In the paper we provide the details of the approach, followed by simulated and real-world evaluations.}}
    		
    Abstract: In order to be effective, learning of robotic motion by demonstration should not remain limited to direct repetition of movements, but should enable modifications with respect to the state of the external environment, and generation of actions for previously unencountered situations. In this paper we propose an approach that combines these two features, and applies them in the framework of dynamic movement primitives (DMP). The proposed approach is based on the notion of motion adaptation through the use of coupling terms introduced to the DMPs at the velocity level. The coupling term is learned in a few repetitions of the motion with iterative learning control (ILC). The adaptation, which is based on force feedback, derives from either autonomous contact with the environment, or from human intervention. It can adapt to a given constraint, e.g., to a desired force of contact or to a given position. The major novelty of this paper is in extending this notion with statistical generalization between the coupling terms, allowing online adaptation of motion to a previously unexplored situation. The benefit of the approach is in reduced effort in human demonstration, because a single demonstration can be autonomously adapted to different situations with ILC, and recording the learned coupling terms builds up a database for generalization. A side-effect of learning, which takes a few iterations, is that the coupling terms of the learning attempts can also be stored in the database, allowing for different generalization queries and outcomes. In the paper we provide the details of the approach, followed by simulated and real-world evaluations.
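    The iteration-to-iteration update behind such coupling-term learning can be written compactly; the sketch below assumes a standard current-iteration ILC form with illustrative learning gain L and filter factor Q, applied sample-wise to the coupling term driven by the force error.

      import numpy as np

      def ilc_update(c_k, e_k, L=0.5, Q=0.95):
          # c_{k+1}(t) = Q * (c_k(t) + L * e_k(t)) for the sampled coupling term c
          # and force error e of repetition k; after convergence, c is stored in
          # the database used for statistical generalization.
          return Q * (np.asarray(c_k) + L * np.asarray(e_k))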
    Review:
    Buch, A. and Petersen, H. and Krüger, N. (2016).
    Local shape feature fusion for improved matching, pose estimation and 3D object recognition. SpringerPlus, 1--33, 5, 1.
    BibTeX:
    @article{buchpetersenkrueger2016,
      author = {Buch, A. and Petersen, H. and Krüger, N.},
      title = {Local shape feature fusion for improved matching, pose estimation and 3D object recognition},
      pages = {1--33},
      journal = {SpringerPlus},
      year = {2016},
      volume= {5},
      number = {1},
      doi = {10.1186/s40064-016-1906-1},
      abstract = {We provide new insights into the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state-of-the-art features to systematic evaluations based on multiple datasets from different sources in a uniform manner. We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results, namely that the performance of the recognition system does not correlate well with the performance of the descriptor employed by the recognition system. In addition to this, we evaluate several aspects of the matching task, including the efficiency of the different features and the potential in using dimension reduction. To arrive at better generalization properties, we introduce a method for fusing several feature matches with a limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, which is consistent over all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of fused features, both in terms of accuracy and efficiency.}}
    		
    Abstract: We provide new insights into the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state-of-the-art features to systematic evaluations based on multiple datasets from different sources in a uniform manner. We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results, namely that the performance of the recognition system does not correlate well with the performance of the descriptor employed by the recognition system. In addition to this, we evaluate several aspects of the matching task, including the efficiency of the different features and the potential in using dimension reduction. To arrive at better generalization properties, we introduce a method for fusing several feature matches with a limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, which is consistent over all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of fused features, both in terms of accuracy and efficiency.
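    A minimal sketch of one way to fuse feature matches, assuming NumPy and brute-force nearest neighbours: per-descriptor distance matrices are scale-balanced and summed before picking correspondences. This illustrates the idea of fused matching with limited overhead; it is not the paper's exact fusion rule.

      import numpy as np

      def fused_match(scene_feats, model_feats):
          # scene_feats / model_feats: lists of (N, d_i) / (M, d_i) arrays, one per descriptor.
          total = np.zeros((scene_feats[0].shape[0], model_feats[0].shape[0]))
          for s, m in zip(scene_feats, model_feats):
              d = np.linalg.norm(s[:, None, :] - m[None, :, :], axis=2)
              total += d / d.mean()           # balance the scale of each descriptor
          return total.argmin(axis=1)         # fused correspondence per scene point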
    Review:
    Wolniakowski, A. and Jorgensen, J. A. and Miatliuk, K. and Petersen, H. G. and Krüger, N. (2015).
    Task and context sensitive optimization of gripper design using dynamic grasp simulation. 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR), 29-34.
    BibTeX:
    @inproceedings{wolniakowskijorgensenmiatliuk2015,
      author = {Wolniakowski, A. and Jorgensen, J. A. and Miatliuk, K. and Petersen, H. G. and Krüger, N.},
      title = {Task and context sensitive optimization of gripper design using dynamic grasp simulation},
      pages = {29-34},
      booktitle = {2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR)},
      year = {2015},
      month = {August},
      doi = {10.1109/MMAR.2015.7283701},
      abstract = {In this work, we present a generic approach to optimize the design of a parametrized robot gripper, including both gripper parameters and parameters of the finger geometry. We demonstrate our gripper optimization on a parallel-jaw type gripper which we have parametrized in an 11-dimensional space. We furthermore present a parametrization of the grasping task and context, which is essential as input to the computation of gripper performance. We exemplify the feasibility of our approach by computing several optimized grippers on a real-world industrial object in three different scenarios.}}
    		
    Abstract: In this work, we present a generic approach to optimize the design of a parametrized robot gripper, including both gripper parameters and parameters of the finger geometry. We demonstrate our gripper optimization on a parallel-jaw type gripper which we have parametrized in an 11-dimensional space. We furthermore present a parametrization of the grasping task and context, which is essential as input to the computation of gripper performance. We exemplify the feasibility of our approach by computing several optimized grippers on a real-world industrial object in three different scenarios.
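    A minimal sketch of the outer optimization loop, with random search standing in for the actual optimizer and simulate_grasps standing in for the dynamic grasp simulator that scores a normalized 11-dimensional design vector; both names are assumptions for illustration.

      import numpy as np

      def optimize_gripper(simulate_grasps, n_iter=200, dim=11, seed=0):
          rng = np.random.default_rng(seed)
          best_p, best_q = None, -np.inf
          for _ in range(n_iter):
              p = rng.uniform(0.0, 1.0, dim)  # normalized gripper/finger parameters
              q = simulate_grasps(p)          # task- and context-dependent quality score
              if q > best_q:
                  best_p, best_q = p, q
          return best_p, best_q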
    Review:
    Kiforenko, L. and Buch, A. G. and Krüger, N. (2016).
    TriPoD: Triplet Point Descriptor for Robust 3D Object Detection. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (submitted).
    BibTeX:
    @inproceedings{kiforenkobuchkrueger2016,
      author = {Kiforenko, L. and Buch, A. G. and Krüger, N.},
      title = {TriPoD: Triplet Point Descriptor for Robust 3D Object Detection},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2016},
      note = {submitted},
      abstract = {This work presents an extension of point pair features for object detection. Instead of using point pair relations, we propose to use point triplets. Triplets, in comparison to pairs, provide a richer set of features as well as more stable pose estimates. We present preliminary results from several investigations comparing pairs and triplets. We also present an algorithm for object detection based on triplets and perform a quantitative evaluation on two datasets and one practical application.}}
    		
    Abstract: This work presents an extension of point pair features for object detection. Instead of using point pair relations, we propose to use point triplets. Triplets, in comparison to pairs, provide a richer set of features as well as more stable pose estimates. We present preliminary results from several investigations comparing pairs and triplets. We also present an algorithm for object detection based on triplets and perform a quantitative evaluation on two datasets and one practical application.
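    A minimal sketch of the kind of relational feature a point triplet affords: the three pairwise distances plus the angles between surface normals, which over-determine the relative geometry and hence stabilize pose estimates. The feature layout is an assumption for illustration, not the TriPoD definition.

      import numpy as np

      def triplet_feature(p, n):
          # p: (3, 3) array of points, n: (3, 3) array of unit surface normals.
          pairs = ((0, 1), (1, 2), (0, 2))
          dists = [np.linalg.norm(p[i] - p[j]) for i, j in pairs]
          angles = [np.arccos(np.clip(n[i] @ n[j], -1.0, 1.0)) for i, j in pairs]
          return np.array(dists + angles)     # 6-dimensional relational descriptor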
    Review:
    Meyer, K. K. and Wolniakowski, A. and Hagelskjaer, F. and Kiforenko, L. and Buch, A. G. and Krüger, N. and J., J. and Bodenhagen, L. (2016).
    Using online modelled spatial constraints for pose estimation in an industrial setting. IEEE Int. Conf. on Multimedia and Expo (ICME) (accepted).
    BibTeX:
    @inproceedings{meyerwolniakowskihagelskjaer2016,
      author = {Meyer, K. K. and Wolniakowski, A. and Hagelskjaer, F. and Kiforenko, L. and Buch, A. G. and Krüger, N. and J., J. and Bodenhagen, L.},
      title = {Using online modelled spatial constraints for pose estimation in an industrial setting},
      booktitle = {IEEE Int. Conf. on Multimedia and Expo (ICME)},
      year = {2016},
      note = {accepted},
      abstract = {We introduce a vision system that is able to learn spatial constraints on-line to improve pose estimation in terms of correct recognition as well as computational speed. By making use of a simulated industrial robot system performing various pick-and-place tasks, we show the effect of model building when making use of visual knowledge in terms of visually extracted pose hypotheses, as well as action knowledge in terms of pose hypotheses verified by action execution. We show that the use of action knowledge significantly improves the pose estimation process.}}
    		
    Abstract: We introduce a vision system that is able to learn spatial constraints on-line to improve pose estimation in terms of correct recognition as well as computational speed. By making use of a simulated industrial robot system performing various pick-and-place tasks, we show the effect of model building when making use of visual knowledge in terms of visually extracted pose hypotheses, as well as action knowledge in terms of pose hypotheses verified by action execution. We show that the use of action knowledge significantly improves the pose estimation process.
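    A minimal sketch of the constraint-learning loop, with an axis-aligned box over action-verified object positions standing in for the learned spatial model; class and field names are illustrative assumptions.

      import numpy as np

      class SpatialConstraint:
          def __init__(self, margin=0.02):
              self.verified = []              # positions confirmed by action execution
              self.margin = margin

          def add_verified(self, xyz):
              self.verified.append(np.asarray(xyz, dtype=float))

          def admits(self, xyz):
              # Reject pose hypotheses outside the region seen so far, which both
              # removes false positives and shrinks the search space.
              if len(self.verified) < 3:
                  return True                 # too little evidence to constrain
              v = np.stack(self.verified)
              lo, hi = v.min(axis=0) - self.margin, v.max(axis=0) + self.margin
              p = np.asarray(xyz, dtype=float)
              return bool(np.all((p >= lo) & (p <= hi)))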
    Review:
    Wolniakowski, A. and Kramberger, A. and Gams, A. and Chrysostomou, D. and Hagelskjaer, F. and Thulesen, T. N. and Kiforenko, L. and Buch, A. G. and Bodenhagen, L. and Petersen, H. G. and Madsen, O. and Ude, A. and Krüger, N. (2016).
    Optimizing Grippers for Compensating Pose Uncertainties by Dynamic Simulation. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (submitted).
    BibTeX:
    @inproceedings{wolniakowskikrambergergams2016,
      author = {Wolniakowski, A. and Kramberger, A. and Gams, A. and Chrysostomou, D. and Hagelskjaer, F. and Thulesen, T. N. and Kiforenko, L. and Buch, A. G. and Bodenhagen, L. and Petersen, H. G. and Madsen, O. and Ude, A. and Krüger, N.},
      title = {Optimizing Grippers for Compensating Pose Uncertainties by Dynamic Simulation},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2016},
      note = {submitted},
      abstract = {The gripper design process is one of the interesting challenges in the context of grasping within industry. Typically, simple parallel-finger grippers, which are easy to install and maintain, are used in platforms for robotic grasping. The context switches in these platforms require frequent exchange of gripper fingers to accommodate grasping of new products, while subject to numerous constraints, such as workcell uncertainties due to the vision systems used. The design of these fingers consumes the man-hours of experienced engineers and involves a lot of trial-and-error testing. In our previous work, we presented a method to automatically compute the optimal finger shapes for defined task contexts in simulation. In this paper, we show the performance of our method in an industrial grasping scenario. We first analyze the uncertainties of the vision system used, which are the major source of grasping error. Then, we perform the experiments, both in simulation and in a real setting. The experiments confirmed the validity of our approach. The computed finger design was employed in a real industrial assembly scenario.}}
    		
    Abstract: The gripper design process is one of the interesting challenges in the context of grasping within industry. Typically, simple parallel-finger grippers, which are easy to install and maintain, are used in platforms for robotic grasping. The context switches in these platforms require frequent exchange of gripper fingers to accommodate grasping of new products, while subject to numerous constraints, such as workcell uncertainties due to the vision systems used. The design of these fingers consumes the man-hours of experienced engineers and involves a lot of trial-and-error testing. In our previous work, we presented a method to automatically compute the optimal finger shapes for defined task contexts in simulation. In this paper, we show the performance of our method in an industrial grasping scenario. We first analyze the uncertainties of the vision system used, which are the major source of grasping error. Then, we perform the experiments, both in simulation and in a real setting. The experiments confirmed the validity of our approach. The computed finger design was employed in a real industrial assembly scenario.
    Review:
    Nyga, D. and Beetz, M. (2015).
    Cloud-based Probabilistic Knowledge Services for Instruction Interpretation. International Symposium on Robotics Research (ISRR).
    BibTeX:
    @inproceedings{nygabeetz2015a,
      author = {Nyga, D. and Beetz, M.},
      title = {Cloud-based Probabilistic Knowledge Services for Instruction Interpretation},
      booktitle = {International Symposium on Robotics Research (ISRR)},
      year = {2015},
      abstract = {As the tasks of autonomous manipulation robots get more complex, the tasking of the robots using natural-language instructions becomes more important. Executing such instructions in the way they are meant often requires robots to infer missing information, and to disambiguate given information, using lots of common and common-sense knowledge. In this work, we report on Probabilistic Action Cores (PRAC), a framework for learning of and reasoning about action-specific probabilistic knowledge bases that can be learned from hand-labeled instructions to address this problem. In PRAC, knowledge about actions and objects is compactly represented by first-order probabilistic models, which are used to learn a joint probability distribution over the ways in which instructions for a given action verb are formulated. These joint probability distributions are then used to compute the plan instantiation that has the highest probability of producing the intended action given the natural-language instruction. Formulating plan interpretation as a conditional probability is a promising approach because we can at the same time infer the plan that is most appropriate for performing the instruction, the refinement of the parameters of the plan on the basis of the information given in the instruction, and automatically fill in missing parameters by inferring their most probable value from the distribution. PRAC has been implemented as a web-based online service on the cloud-robotics platform openEASE.}}
    		
    Abstract: As the tasks of autonomous manipulation robots get more complex, the tasking of the robots using natural-language instructions becomes more important. Executing such instructions in the way they are meant often requires robots to infer missing information, and to disambiguate given information, using lots of common and common-sense knowledge. In this work, we report on Probabilistic Action Cores (PRAC), a framework for learning of and reasoning about action-specific probabilistic knowledge bases that can be learned from hand-labeled instructions to address this problem. In PRAC, knowledge about actions and objects is compactly represented by first-order probabilistic models, which are used to learn a joint probability distribution over the ways in which instructions for a given action verb are formulated. These joint probability distributions are then used to compute the plan instantiation that has the highest probability of producing the intended action given the natural-language instruction. Formulating plan interpretation as a conditional probability is a promising approach because we can at the same time infer the plan that is most appropriate for performing the instruction, the refinement of the parameters of the plan on the basis of the information given in the instruction, and automatically fill in missing parameters by inferring their most probable value from the distribution. PRAC has been implemented as a web-based online service on the cloud-robotics platform openEASE.
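    The inference the abstract describes can be restated compactly as two MAP queries over one learned joint distribution; the notation below is chosen here for illustration and is not the paper's own:

      \hat{\pi} = \arg\max_{\pi} P(\pi \mid I), \qquad \theta^{*} = \arg\max_{\theta} P(\theta \mid \hat{\pi}, I)

    where I is the natural-language instruction, \pi ranges over plan instantiations for the action verb, and the missing plan parameters \theta are filled in by a second query over the same distribution.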
    Review:
    Nyga, D. and Beetz, M. (2015).
    Reasoning about Unmodelled Concepts - Incorporating Class Taxonomies in Probabilistic Relational Models. arXiv.org, CoRR, abs/1504.05411.
    BibTeX:
    @inproceedings{nygabeetz2015b,
      author = {Nyga, D. and Beetz, M.},
      title = {Reasoning about Unmodelled Concepts - Incorporating Class Taxonomies in Probabilistic Relational Models},
      booktitle = {Arxiv.org},
      journal = {CoRR},
      year = {2015},
      volume= {abs/1504.05411},
      url = {http://arxiv.org/abs/1504.05411},
      abstract = {A key problem in the application of first-order probabilistic methods is the enormous size of graphical models they imply. The size results from the possible worlds that can be generated by a domain of objects and relations. One of the reasons for this explosion is that so far the approaches do not sufficiently exploit the structure and similarity of possible worlds in order to encode the models more compactly. We propose fuzzy inference in Markov logic networks, which enables the use of taxonomic knowledge as a source of imposing structure onto possible worlds. We show that by exploiting this structure, probability distributions can be represented more compactly and that the reasoning systems become capable of reasoning about concepts not contained in the probabilistic knowledge base.}}
    		
    Abstract: A key problem in the application of first-order probabilistic methods is the enormous size of graphical models they imply. The size results from the possible worlds that can be generated by a domain of objects and relations. One of the reasons for this explosion is that so far the approaches do not sufficiently exploit the structure and similarity of possible worlds in order to encode the models more compactly. We propose fuzzy inference in Markov logic networks, which enables the use of taxonomic knowledge as a source of imposing structure onto possible worlds. We show that by exploiting this structure, probability distributions can be represented more compactly and that the reasoning systems become capable of reasoning about concepts not contained in the probabilistic knowledge base.
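    A minimal sketch of how taxonomic structure can yield a graded (fuzzy) truth value for an unmodelled concept, using Wu-Palmer similarity over a simple parent map as an illustrative stand-in for the paper's fuzzy MLN machinery.

      def ancestors(parents, c):
          chain = [c]
          while c in parents:                  # walk upward to the taxonomy root
              c = parents[c]
              chain.append(c)
          return chain

      def wu_palmer(parents, a, b):
          depth = lambda c: len(ancestors(parents, c)) - 1
          chain_a = set(ancestors(parents, a))
          for c in ancestors(parents, b):      # first hit walking up from b is the LCS
              if c in chain_a:
                  total = depth(a) + depth(b)
                  return 2.0 * depth(c) / total if total else 1.0
          return 0.0

      # e.g. wu_palmer({"cup": "container", "bowl": "container", "container": "object"},
      #                "cup", "bowl") == 0.5, usable as soft evidence for the modelled
      # concept "cup" when the query mentions only the unmodelled concept "bowl".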
    Review:
    Lisca, G. and Nyga, D. and Balint-Benczedi, F. and Langer, H. and Beetz, M. (2015).
    Towards robots conducting chemical experiments. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5202-5208.
    BibTeX:
    @inproceedings{liscanygabalintbenczedi2015,
      author = {Lisca, G. and Nyga, D. and Balint-Benczedi, F. and Langer, H. and Beetz, M.},
      title = {Towards robots conducting chemical experiments},
      pages = {5202-5208},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      month = {Sept},
      doi = {10.1109/IROS.2015.7354110},
      abstract = {Autonomous mobile robots are employed to perform increasingly complex tasks which require appropriate task descriptions, accurate object recognition, and dexterous object manipulation. In this paper we will address three key questions: How to obtain appropriate task descriptions from natural language (NL) instructions, how to choose the control program to perform a task description, and how to recognize and manipulate the objects referred to by a task description? We describe an evaluated robotic agent which takes a natural language instruction stating a step of a DNA extraction procedure as a starting point. The system is able to transform the textual instruction into an abstract symbolic plan representation. It can reason about the representation and answer queries about what, how, and why it is done. The robot selects the most appropriate control programs and robustly coordinates all manipulations required by the task description. The execution is based on a perception sub-system which is able to locate and recognize the objects and instruments needed in the DNA extraction procedure.}}
    		
    Abstract: Autonomous mobile robots are employed to perform increasingly complex tasks which require appropriate task descriptions, accurate object recognition, and dexterous object manipulation. In this paper we will address three key questions: How to obtain appropriate task descriptions from natural language (NL) instructions, how to choose the control program to perform a task description, and how to recognize and manipulate the objects referred to by a task description? We describe an evaluated robotic agent which takes a natural language instruction stating a step of a DNA extraction procedure as a starting point. The system is able to transform the textual instruction into an abstract symbolic plan representation. It can reason about the representation and answer queries about what, how, and why it is done. The robot selects the most appropriate control programs and robustly coordinates all manipulations required by the task description. The execution is based on a perception sub-system which is able to locate and recognize the objects and instruments needed in the DNA extraction procedure.
    Review:
    Reich, S. and Aein, M. J. and Wörgötter, F. (2016).
    Context Dependent Action Affordances and their Execution using an Ontology of Actions and 3D Geometric Reasoning. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 1-8 (submitted).
    BibTeX:
    @inproceedings{reichaeinwoergoetter2016,
      author = {Reich, S. and Aein, M. J. and Wörgötter, F.},
      title = {Context Dependent Action Affordances and their Execution using an Ontology of Actions and 3D Geometric Reasoning},
      pages = {1-8},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2016},
      location = {Daejeon, Korea},
      month = {Oct},
      note = {submitted},
      abstract = {When looking at an object, humans can quickly and efficiently assess which actions are possible given the scene context. This task remains hard for machines. Here we focus on manipulation actions and, in the first part of this study, define an object-action linked ontology for such context-dependent affordance analysis. We break down every action into three hierarchical pre-condition layers, starting on top with abstract object relations (which need to be fulfilled) and arriving in three steps at the movement primitives required to execute the action. This ontology is then, in the second part of this work, linked to actual scenes. First, the system looks at the scene and suggests actions for any selected object. Once one action is chosen, a simple geometric reasoning scheme fills its movement primitives with the specific parameter values, which are then executed by the robot. The viability of this approach is demonstrated by analysing several scenes and a large number of manipulations.}}
    		
    Abstract: When looking at an object, humans can quickly and efficiently assess which actions are possible given the scene context. This task remains hard for machines. Here we focus on manipulation actions and, in the first part of this study, define an object-action linked ontology for such context-dependent affordance analysis. We break down every action into three hierarchical pre-condition layers, starting on top with abstract object relations (which need to be fulfilled) and arriving in three steps at the movement primitives required to execute the action. This ontology is then, in the second part of this work, linked to actual scenes. First, the system looks at the scene and suggests actions for any selected object. Once one action is chosen, a simple geometric reasoning scheme fills its movement primitives with the specific parameter values, which are then executed by the robot. The viability of this approach is demonstrated by analysing several scenes and a large number of manipulations.
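    A minimal sketch of the three-layer action entry the abstract describes, with abstract object relations on top, grounded geometric pre-conditions below, and movement primitives at the bottom; all field names and predicates are illustrative assumptions.

      from dataclasses import dataclass, field

      @dataclass
      class ActionEntry:
          name: str
          object_relations: list                  # layer 1: abstract relations to fulfil
          preconditions: list                     # layer 2: grounded geometric checks
          primitives: list = field(default_factory=list)  # layer 3: executable primitives

      pick = ActionEntry(
          "pick",
          ["graspable(obj)", "clear(obj)"],
          ["reachable(pose(obj))", "collision_free(approach_path)"],
          ["approach", "grasp", "lift"],
      )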
    Review:
    Kiforenko, L. and Buch, A. G. and Krüger, N. (2015).
    Object Detection Using a Combination of Multiple 3D Feature Descriptors. Computer Vision Systems: 10th International Conference, ICVS 2015, Copenhagen, Denmark, July 6-9, 2015, Proceedings, 343--353.
    BibTeX:
    @inbook{Kiforenko2015,
      author = {Kiforenko, L. and Buch, A. G. and Krüger, N.},
      title = {Computer Vision Systems: 10th International Conference, ICVS 2015, Copenhagen, Denmark, July 6-9, 2015, Proceedings},
      pages = {343--353},
      year = {2015},
      chapter = {Object Detection Using a Combination of Multiple 3D Feature Descriptors},
      editor = {Nalpantidis, Lazaros and Krüger, Volker and Eklundh, Jan-Olof and Gasteratos, Antonios},
      publisher = {Springer International Publishing},
      url = {http://dx.doi.org/10.1007/978-3-319},
      doi = {10.1007/978-3-319-20904-3_31}}
    		
    Abstract:
    Review:
    Kiforenko, L. and Buch, A. G. and Krüger, N. (2015).
    Object Detection Using a Combination of Multiple 3D Feature Descriptors. Proceedings of the 10th International Conference on Computer Vision Systems (ICVS), 343--353, 39.
    BibTeX:
    @article{kiforenkobuchkrueger2015,
      author = {Kiforenko, L. and Buch, A. G. and Krüger, N.},
      title = {Proceedings of 10th International Conference of Computer Vision Systems (ICVS)},
      pages = {343--353},
      year = {2015},
      volume= {39},
      chapter = {Object Detection Using a Combination of Multiple 3D Feature Descriptors},
      editor = {Nalpantidis, L. and Krüger, V. and Eklundh, J. and Gasteratos, A.},
      language = {English},
      publisher = {Springer International Publishing},
      doi = {10.1007/978-3-319-20904-3_31},
      abstract = {This paper presents an approach for object pose estimation using a combination of multiple feature descriptors. We propose to use a combination of three feature descriptors, capturing both surface and edge information. Those descriptors individually perform well for different object classes. We use scenes from an established RGB-D dataset and our own recorded scenes to justify the claim that by combining multiple features, we in general achieve better performance. We present quantitative results for descriptor matching and object detection for both datasets.}}
    		
    Abstract: This paper presents an approach for object pose estimation using a combination of multiple feature descriptors. We propose to use a combination of three feature descriptors, capturing both surface and edge information. Those descriptors individually perform well for different object classes. We use scenes from an established RGB-D dataset and our own recorded scenes to justify the claim that by combining multiple features, we in general achieve better performance. We present quantitative results for descriptor matching and object detection for both datasets.
    Review:
    Kramberger, A. and Piltaver, R. and Nemec, B. and Gams, M. and Ude, A. (2016).
    Learning of assembly constraints by demonstration and active exploration. Industrial Robot: An International Journal (Submitted).
    BibTeX:
    @article{krambergerpiltavernemec2016,
      author = {Kramberger, A. and Piltaver, R. and Nemec, B. and Gams, M. and Ude, A.},
      title = {Learning of assembly constraints by demonstration and active exploration},
      journal = {Industrial Robot: An International Journal},
      year = {2016},
      note = {Submitted},
      publisher = {MCB UP Ltd}}
    		
    Abstract:
    Review:
    Wolniakowski, A. and Miatliuk, K. and Gosiewski, Z. and Jorgensen, J. A. and Bodenhagen, L. and Petersen, H. G. and Krüger, N. (2016).
    Task and Context Sensitive Gripper Design Learning Using Dynamic Grasp Simulation. Journal of Intelligent and Robotic Systems (Submitted).
    BibTeX:
    @article{wolniakowskimiatliukgosiewski2016,
      author = {Wolniakowski, A. and Miatliuk, K. and Gosiewski, Z. and Jorgensen, J. A. and Bodenhagen, L. and Petersen, H. G. and Krüger, N.},
      title = {Task and Context Sensitive Gripper Design Learning Using Dynamic Grasp Simulation},
      journal = {Journal of Intelligent and Robotic Systems},
      year = {2016},
      note = {Submitted},
      publisher = {Kluwer Academic Publishers}}
    		
    Abstract:
    Review:
    Wolniakowski, A. and Gams, A. and Kiforenko, L. and Kramberger, A. and Chrysostomou, D. and Madsen, O. and Miatliuk, K. and Petersen, H. G. and Hagelskjaer, F. and Buch, A. G. and Ude, A. and Krüger, N. (2016).
    Compensating Pose Uncertainties Through Appropriate Gripper Finger Cutouts. Mechatronic Systems and Materials (Submitted).
    BibTeX:
    @article{wolniakowskigamskiforenko2016,
      author = {Wolniakowski, A. and Gams, A. and Kiforenko, L. and Kramberger, A. and Chrysostomou, D. and Madsen, O. and Miatliuk, K. and Petersen, H. G. and Hagelskjaer, F. and Buch, A. G. and Ude, A. and Krüger, N.},
      title = {Compensating Pose Uncertainties Through Appropriate Gripper Finger Cutouts},
      journal = {Mechatronic Systems and Materials},
      year = {2016},
      note = {Submitted}}
    		
    Abstract:
    Review:
    Jorgensen, J. A. and Rukavishnikova, N. and Krüger, N. and Petersen, H. G. (2015).
    Spatial constraint identification of parts in SE3 for action optimization. IEEE International Conference on Industrial Technology (ICIT), 474-480.
    BibTeX:
    @inproceedings{jorgensenrukavishnikovakrueger2015,
      author = {Jorgensen, J. A. and Rukavishnikova, N. and Krüger, N. and Petersen, H. G.},
      title = {Spatial constraint identification of parts in SE3 for action optimization},
      pages = {474-480},
      booktitle = {IEEE International Conference on Industrial Technology (ICIT)},
      year = {2015},
      month = {March},
      doi = {10.1109/ICIT.2015.7125144},
      abstract = {In this paper we present a method to structure contextual knowledge in spatial regions/manifolds that may be used in action selection for industrial robotic systems. The contextual knowledge is built on relatively few prior task executions, and it may be derived from either teleoperation or previous action executions. We argue that our contextual representation is able to improve the execution speed of individual actions and demonstrate this on a specific time-consuming action of object detection and pose estimation. Our contextual knowledge representation is especially suited for industrial environments where repetitive tasks such as bin- and belt-picking are plentiful. We present how we classify and detect the contextual information from prior task executions and demonstrate the performance gain on a real industrial pick-and-place problem.}}
    		
    Abstract: In this paper we present a method to structure contextual knowledge in spatial regions/manifolds that may be used in action selection for industrial robotic systems. The contextual knowledge is built on relatively few prior task executions, and it may be derived from either teleoperation or previous action executions. We argue that our contextual representation is able to improve the execution speed of individual actions and demonstrate this on a specific time-consuming action of object detection and pose estimation. Our contextual knowledge representation is especially suited for industrial environments where repetitive tasks such as bin- and belt-picking are plentiful. We present how we classify and detect the contextual information from prior task executions and demonstrate the performance gain on a real industrial pick-and-place problem.
    Review:
    Kramberger, A. and Gams, A. and Nemec, B. and Schou, C. and Chrysostomou, D. and Madsen, O. and Ude, A. (2016).
    Generalization of Peg-in-Hole Skill for Intelligent Robotic Assembly. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (submitted).
    BibTeX:
    @inproceedings{krambergergamsnemec2016,
      author = {Kramberger, A. and Gams, A. and Nemec, B. and Schou, C. and Chrysostomou, D. and Madsen, O. and Ude, A.},
      title = {Generalization of Peg-in-Hole Skill for Intelligent Robotic Assembly},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2016},
      location = {Daejeon, Korea},
      month = {Oct},
      note = {submitted}}
    		
    Abstract:
    Review: