The crux of human-allied smart machines lies in their capability to communicate back to the human, whether to share knowledge, to query for additional guidance or knowledge, or merely for inconsequential conversation.
We develop AI agents that are able to identify gaps in their knowledge for specific parts of a task (knowing-what-it-knows), communicate and explain themselves to human (non-)experts through various modalities such as text, visuals, and gestures, and actively seek advice, preferences, or guidance accordingly (knowing-when-to-ask). Queries and explanations from the agent to the human are hierarchical and posed at varying levels of generality, making it intuitive and easy for the human ally to help the agent make better choices and learn better models. We build such frameworks both for predictive modeling and for sequential decision making.
- Das, M., Odom, P., Islam, M.R., Doppa, J., Roth, D., & Natarajan, S., “Preference-Guided Planning: An Active Elicitation Approach”, International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2018.
- Hayes, A.L., Das, M., Odom, P., & Natarajan, S., “User Friendly Automatic Construction of Background Knowledge: Mode Construction from ER Diagrams”, Knowledge Capture Conference 2017.
- Odom, P., & Natarajan, S., “Active Advice Seeking for Inverse Reinforcement Learning.”, International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2016.
- Odom, P., & Natarajan, S., “Actively Interacting with Experts: A Probabilistic Logic Approach”, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 2016.
- Odom, P., Kumaraswamy, R., Kersting, K., & Natarajan, S., “Learning through Advice-Seeking via Transfer.”, International Conference on Inductive Logic Programming (ILP) 2016.