Generalisation in Lifelong Reinforcement Learning through Logical Composition | 0 | 0.34 | 2022 |
Improved Action Prediction through Multiple Model Processing of Player Trajectories | 0 | 0.34 | 2022 |
Play-style Identification through Deep Unsupervised Clustering of Trajectories | 1 | 0.40 | 2022 |
Applying A Principle Of Explicability To AI Research In Africa: Should We Do It? | 1 | 0.37 | 2021 |
Fairness and accountability of AI in disaster risk management: Opportunities and challenges | 0 | 0.34 | 2021 |
If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks | 1 | 0.35 | 2020 |
Utilising Uncertainty for Efficient Learning of Likely-Admissible Heuristics | 0 | 0.34 | 2020 |
A Remote Sensing Method To Monitor Water, Aquatic Vegetation, And Invasive Water Hyacinth At National Extents | 0 | 0.34 | 2020 |
A Boolean Task Algebra For Reinforcement Learning | 0 | 0.34 | 2020 |
Understanding Structure Of Concurrent Actions | 0 | 0.34 | 2019 |
Learning Portable Representations for High-Level Planning | 0 | 0.34 | 2019 |
Composing Value Functions in Reinforcement Learning | 0 | 0.34 | 2019 |
Transfer Learning for Prosthetics Using Imitation Learning | 0 | 0.34 | 2019 |
Anticipatory Bayesian Policy Selection for Online Adaptation of Collaborative Robots to Unknown Human Types | 0 | 0.34 | 2019 |
Implementation of A Neural Natural Language Understanding Component for Arabic Dialogue Systems | 1 | 0.48 | 2018 |
Symbol Emergence in Cognitive Developmental Systems: A Survey | 3 | 0.39 | 2018 |
Social Cobots: Anticipatory Decision-Making for Collaborative Robots Incorporating Unexpected Human Behaviors | 2 | 0.40 | 2018 |
Zero-Shot Transfer with Deictic Object-Oriented Representation in Reinforcement Learning | 0 | 0.34 | 2018 |
Reasoning about Unforeseen Possibilities During Policy Learning | 0 | 0.34 | 2018 |
Will it Blend? Composing Value Functions in Reinforcement Learning | 0 | 0.34 | 2018 |
Belief Reward Shaping in Reinforcement Learning | 0 | 0.34 | 2018 |
Learning the Influence Structure between Partially Observed Stochastic Processes Using IoT Sensor Data | 0 | 0.34 | 2018 |
A Non-Linear Manifold Alignment Approach to Robot Learning from Demonstrations | 1 | 0.36 | 2018 |
Real-Time Motion Planning In Changing Environments Using Topology-Based Encoding Of Past Knowledge | 0 | 0.34 | 2018 |
Hierarchical Subtask Discovery With Non-Negative Matrix Factorization | 1 | 0.35 | 2017 |
Hierarchy Through Composition with Multitask LMDPs | 5 | 0.43 | 2017 |
An Analysis of Monte Carlo Tree Search | 2 | 0.39 | 2017 |
Online Constrained Model-Based Reinforcement Learning | 2 | 0.36 | 2017 |
Fingerprint minutiae extraction using deep learning | 6 | 0.55 | 2017 |
Hierarchy through Composition with Linearly Solvable Markov Decision Processes | 0 | 0.34 | 2016 |
A Bayesian Approach for Learning and Tracking Switching, Non-Stationary Opponents (Extended Abstract) | 2 | 0.36 | 2016 |
Identifying and Tracking Switching, Non-Stationary Opponents: A Bayesian Approach | 3 | 0.39 | 2016 |
Enhancing agent safety through autonomous environment adaptation | 0 | 0.34 | 2015 |
Bayesian Policy Reuse | 7 | 0.47 | 2015 |
Action Priors for Learning Domain Invariances | 0 | 0.34 | 2015 |
Behavioural domain knowledge transfer for autonomous agents | 0 | 0.34 | 2014 |
Modelling Primate Control Of Grasping For Robotics Applications | 0 | 0.34 | 2014 |
Giving advice to agents with hidden goals | 2 | 0.45 | 2014 |
On user behaviour adaptation under interface change | 3 | 0.49 | 2014 |
Clustering Markov Decision Processes For Continual Transfer | 3 | 0.40 | 2013 |
What good are actions? Accelerating learning using learned action priors | 8 | 0.53 | 2012 |
Learning spatial relationships between objects | 30 | 1.12 | 2011 |
A game-theoretic procedure for learning hierarchically structured strategies | 1 | 0.37 | 2010 |
Language performance at high school and success in first year computer science | 3 | 0.45 | 2006 |