Research Interests

The broad focus of my research is the development of artificial intelligence (AI) methods for human-computer/robot interaction, aiming to build systems that adaptively interact with human users by understanding their actions in the environment. Integrating perception and decision-making enables a machine to treat people's actions and responses as feedback on its previous choices during the interactive experience; this is called closed-loop interaction. As I delve further into applications of my past theoretical contributions, I continue to recognize how essential it is for human-aware AI research to learn from and incorporate practices of the human-computer/robot interaction and human factors engineering communities; I am a strong advocate for human-aware design of human-aware AI. People are not simply mathematical models or sources of data that we can assume away in our theory and code. To better facilitate people's interactions with machines, we scientists must also think about and work with the people who will participate in these interactions.

To achieve these goals, my research interests lie at the intersection of several interdisciplinary areas, seeking to understand how autonomous machines can interact with people in ways that feel natural:

Planning. Artificial intelligence is a very broad field with many specialties. Planning aims to develop autonomous problem solvers that can make decisions to accomplish a goal; that is, can a computer solve a problem on its own? The Resource Bounded Reasoning (RBR) Lab's work often involves planning under uncertainty, which can be viewed in many ways.
Plan Recognition. Inverting the planning problem makes the solution the input and the problem the output. Specifically, given an observed sequence of an agent's executed actions and/or a sequence of changes in the world, what was the problem? For example, you walk into a room and see your friend walking around doing various things, so you ask yourself, "What is he/she doing?"

Activity Recognition. Given a collection of raw sensor readings, can we identify higher-level actions? For example, if an accelerometer senses an up-and-down motion (jumping, climbing a stair, lifting weights, and a number of other motions can cause this, depending on the placement of the sensor), then how does the machine explain it in words that a human can interpret?

Goal Recognition and Plan Prediction. Given an observed sequence of an agent's executed actions, what is the agent's goal and/or next action(s)? This task is more predictive than plan recognition, but the two areas have much in common with respect to formulations and approaches.

Human-Computer Interaction. The study of facilitating user experiences with digital entities, both virtual and physical, through the development and refinement of algorithms and systems. This can range from easing the usability of an interface to making engagements between the system and user more pleasant and comfortable.

Knowledge Representation. Identifying features of information that make it practical for artificial intelligence algorithms. That is, what does a piece of information look like in the machine's "brain," and how can it be used for reasoning and understanding?

Statistical Relational AI. Combining nondeterministic/probabilistic and relational approaches in artificial intelligence. This provides the best of both worlds, handling some forms of statistical uncertainty alongside the relational structure of a given domain.
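To make the relationship between planning and plan/goal recognition concrete, here is a minimal, illustrative sketch; it is not from any published RBR Lab system, and the grid world, function names, and beta parameter are all hypothetical. A breadth-first planner solves the forward problem, and a goal recognizer inverts it by scoring each candidate goal on how little the observed actions deviate from an optimal plan toward that goal, in the spirit of plan-recognition-as-planning approaches.

```python
import math
from collections import deque

# Toy grid world (illustrative only): states are (x, y) cells on an open
# grid, and actions move one step in a cardinal direction.
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def plan(start, goal, size=5):
    """Forward planning: breadth-first search for a shortest action
    sequence from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, (dx, dy) in ACTIONS.items():
            nxt = (state[0] + dx, state[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None  # goal unreachable

def goal_posterior(start, observed, candidate_goals, beta=1.0):
    """Inverted planning: weight each candidate goal by how much the
    observed actions deviate from acting optimally toward it (a simple
    cost-difference heuristic), then normalize into a distribution."""
    state = start
    for name in observed:  # replay the observations to the current state
        dx, dy = ACTIONS[name]
        state = (state[0] + dx, state[1] + dy)
    scores = {}
    for g in candidate_goals:
        optimal = len(plan(start, g))  # cheapest plan, ignoring observations
        constrained = len(observed) + len(plan(state, g))  # plan matching them
        scores[g] = math.exp(-beta * (constrained - optimal))
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}
```

For instance, after observing ["right", "right"] from (0, 0), the posterior favors the goal (4, 0) over (0, 4), because the observed motion stays on an optimal path to the former while detouring relative to the latter.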