The crux of human-allied smart machines lies in their ability to communicate back to the human, whether to share knowledge, to request additional guidance, or simply to converse.

We develop AI agents that can infer a lack of knowledge in specific parts of a task (knowing-what-it-knows), communicate and explain themselves to human experts and non-experts through various modalities such as text, visuals, and gestures, and actively seek advice, preferences, or guidance accordingly (knowing-when-to-ask). Queries and explanations from the agent to the human are hierarchical and posed at varying levels of generality, making it intuitive and easy for the human ally to help the agent make better choices and learn better models. We build such frameworks for both predictive modeling and sequential decision making.
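As an illustration only, the knowing-when-to-ask loop could be sketched with a simple confidence-threshold trigger: the agent queries the human when its most likely prediction falls below a confidence bound, and remembers the advice it receives. All names, the threshold, and the confidence measure here are assumptions for the sketch, not the framework described above.

```python
def should_query_human(probs, threshold=0.6):
    """Ask for advice when the top prediction is not confident
    enough (a minimal knowing-when-to-ask criterion)."""
    return max(probs) < threshold

class AdviceSeekingAgent:
    """Illustrative agent: predicts on its own when confident,
    otherwise defers to a human oracle and caches the advice."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.advice = {}  # human-provided answers, keyed by input

    def act(self, x, probs, ask_human):
        # Prefer previously given human advice for this input.
        if x in self.advice:
            return self.advice[x]
        # Low confidence: query the human and remember the answer.
        if should_query_human(probs, self.threshold):
            answer = ask_human(x)  # e.g. a label, preference, or rule
            self.advice[x] = answer
            return answer
        # Confident: act on the agent's own argmax prediction.
        return probs.index(max(probs))
```

In a fuller system the trigger would be a calibrated uncertainty estimate and the query would be posed at an appropriate level of generality, but the control flow stays the same: defer only where knowledge is lacking.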