Building Multi-Agent Systems that Support Behavioral Specialists Working with People with Intellectual and Developmental Disabilities
Written by Yuanchen (Sophie) Bai
Behavioral intervention specialists (BISs) play a vital role in supporting individuals with intellectual and developmental disabilities (I/DD), often through highly detail-oriented work. As behavioral data grows more complex, spanning longer timeframes and more individualized cases, the work becomes harder to manage. My project investigates how artificial intelligence (AI) can support BISs by reducing manual workloads, increasing operational efficiency, and strengthening knowledge bases, ultimately leading to better care.
A Careful Entry into High-Stakes Work
I joined YAI as a Siegel PiTech PhD Impact Fellow in late May, entering into work where data privacy and sensitivity are of the utmost importance. Behavioral data about people with I/DD is deeply personal and highly contextual. Any system working with this information needs to be designed with caution, care, and a clear understanding of practitioners' needs and ethical boundaries.
Through YAI’s onboarding and my early field engagement, I took time to understand the realities of this work and to learn from BISs, engaging them in conversations about trust, ethics, and their lived experiences. This grounding was essential: it shaped not only the direction of the project but my understanding of what it means to “help” in a space where every individual’s needs are unique.
Collaboratively Building AI Support with Behavioral Experts
When I began this project, I conducted a thorough literature review of prior work on behavioral data analysis in I/DD contexts and found very little academic research. Most existing solutions were commercial products with limited transparency and accessibility. This gap made me realize that meaningful progress would require a firsthand understanding of practitioners’ real-world workflows and challenges.
During early field visits, I observed how demanding this work is. BISs manage complex data streams for each person they support (e.g., daily narrative logs, monthly progress summaries, and long-term reports). Every case is different. They track behavioral patterns over months, coordinate with clinicians, and write detailed documentation that justifies every intervention decision.
To address this challenge, I built my first prototype: a multi-agent AI system where different “agents” handle specific types of behavioral data (e.g., daily frequency data, narrative logs) and then analyze and discuss them collaboratively. The system also helps with administrative writing and generates traceable, evidence-based summaries.
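To make the idea concrete, here is a minimal sketch of what such a multi-agent pipeline could look like: one agent per data stream, each producing an observation tied back to its evidence, with a coordinator merging them into a traceable summary. All class names, fields, and data below are illustrative assumptions, not the actual prototype.

```python
from dataclasses import dataclass

# Hypothetical sketch: each "agent" specializes in one behavioral data
# stream and contributes a finding; a coordinator merges the findings
# into one evidence-linked summary. Names and data are illustrative.

@dataclass
class Observation:
    source: str    # which data stream the finding came from
    finding: str   # the agent's interpretation
    evidence: str  # pointer back to the underlying records

class FrequencyAgent:
    """Analyzes daily frequency counts of a target behavior."""
    def analyze(self, counts: list[int]) -> Observation:
        trend = "decreasing" if counts[-1] < counts[0] else "stable or increasing"
        return Observation(
            source="daily_frequency",
            finding=f"Behavior frequency appears {trend} over the period.",
            evidence=f"counts={counts}",
        )

class NarrativeAgent:
    """Scans narrative logs for contextual keywords."""
    def analyze(self, logs: list[str]) -> Observation:
        flagged = [log for log in logs if "agitated" in log.lower()]
        return Observation(
            source="narrative_logs",
            finding=f"{len(flagged)} of {len(logs)} entries mention agitation.",
            evidence="; ".join(flagged) or "none",
        )

def coordinate(observations: list[Observation]) -> str:
    """Merge per-agent findings into one traceable summary."""
    return "\n".join(
        f"[{o.source}] {o.finding} (evidence: {o.evidence})"
        for o in observations
    )

obs = [
    FrequencyAgent().analyze([5, 4, 4, 2]),
    NarrativeAgent().analyze(["Calm morning.", "Agitated before lunch."]),
]
summary = coordinate(obs)
print(summary)
```

The design choice worth noting is that every finding carries an `evidence` pointer back to its source records, which is what makes a generated summary auditable rather than a black-box claim.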
As of this writing, I have completed a new round of interviews using the prototype as a discussion tool. My conversations revealed something very important: in the I/DD field, data alone never drives decisions. When specialists evaluate an intervention plan, they don’t just look at the numbers. They question data accuracy, seek out missing context, and draw on their personal knowledge of each individual. The data is a starting point; the real insight comes from how people reason around it.
This realization inspired me. The tool I’m refining now is not just for data analysis; it supports the reflective, contextual work experts do with the data. It helps BISs think with data, verifying, interpreting, and connecting it to lived experiences, so they can design better, more informed intervention plans.
Challenges That Will Define the Path Forward
One of the biggest challenges in developing tools for the I/DD community is privacy. Unlike fields where data can be openly shared and studied, behavioral and clinical data here are highly sensitive and rarely accessible. As a result, the real challenges the community faces can remain invisible to researchers and technologists who might otherwise help address them.
This realization made me see the importance of finding safe and ethical ways to make problems visible. When testing my prototype, I couldn’t use real behavioral cases, so I created synthetic data that mirrored the structure and patterns of real individual data. Experts reviewed these examples and confirmed they felt realistic and meaningful for exploration.
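As a sketch of this approach, the snippet below generates synthetic daily records that mirror the structure of real behavioral logs (a date, a frequency count, a setting, and a short narrative note) without containing any real individual's information. The field names and vocabulary are my own illustrative assumptions, not the actual schema used in the project.

```python
import random

# Hypothetical sketch: synthetic daily behavior records that mimic the
# *shape* of real data while containing no real personal information.
# Field names and phrasing are illustrative assumptions.

SETTINGS = ["day program", "residence", "community outing"]
NOTES = [
    "Participated calmly in group activity.",
    "Needed one verbal prompt to transition.",
    "Brief episode; de-escalated with a preferred item.",
]

def make_synthetic_record(day: int, rng: random.Random) -> dict:
    """One synthetic daily log entry with a plausible structure."""
    return {
        "day": day,
        "behavior_count": rng.randint(0, 6),
        "setting": rng.choice(SETTINGS),
        "note": rng.choice(NOTES),
    }

def make_synthetic_case(num_days: int, seed: int = 0) -> list[dict]:
    """A month-like run of records for one synthetic individual."""
    # Seeding keeps the examples stable, so experts can review the
    # same synthetic case across sessions.
    rng = random.Random(seed)
    return [make_synthetic_record(d, rng) for d in range(1, num_days + 1)]

case = make_synthetic_case(30)
print(len(case), case[0])
```

Seeding the generator matters here: reviewers need to see the same synthetic case each time to judge whether it feels realistic, which is harder if every run produces different records.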
The process taught me that innovation in sensitive domains must balance protection and participation: safeguarding individuals’ data while enabling collective understanding. To me, this work is about helping the I/DD community become more visible and supported, so that more people can see and contribute to solving challenges that have too often stayed unseen.
Looking Ahead: Building with Care and Intention
Yuanchen (Sophie) Bai, Ph.D. Student in Information Science, Cornell University
Understanding the needs of behavioral experts and the realities of the individuals they support takes time, trust, and close collaboration. Each interview, prototype, and conversation brings us closer to that understanding. That’s why I’m continuing this work into fall 2025 as a PiTech Rubinstein Innovation Fellow. I will further explore collaborations with YAI’s technical teams to integrate these tools into existing systems, and I also hope to share insights with the broader HCI and AI communities to spark dialogue and reflection.
While building responsible AI systems for behavioral support is undoubtedly complex and context-dependent, I believe surfacing the needs, pain points, and lived realities of BISs and the people they serve is a critical first step toward collective understanding and solution-building. I’m proud to be part of this pioneering work.