A central challenge in designing AI agents is how to support people in understanding the agents' responses. Making AI outputs explainable is important for the accountability of AI decisions in critical domains such as law and medicine. In everyday life, however, technical and static AI explanations are often ineffective, and in some cases not even inclusive, as they position the user (and others affected by AI) as passive recipients. There is thus a real need to improve people's everyday understanding of AI, allowing users to develop more implicit ways of understanding AI through interaction and experimentation. In this context, misunderstandings become meaningful interactions through which people can learn what agents can and cannot do.