Currently, there is no way to filter or compare graphs and outcome data based on the session prototype used. Many clients have multiple session types (e.g., school, home, community), and all data is combined, which limits meaningful analysis. Because of this, we often have to create separate goals for each setting (e.g., 2–3 versions of the same goal) just to track performance accurately by environment. This creates unnecessary duplication in programming and makes treatment plans more complex to manage.

In addition to filtering by prototype, it would be highly beneficial to have a comparison feature that allows data to be viewed side by side or overlaid across prototypes. For example, a user could compare behavior frequency or skill acquisition rates between school and home sessions within the same time frame. This would allow users to evaluate how performance varies by environment, staff implementation, or session structure without needing duplicate goals or external data tracking.
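For concreteness, here is a minimal sketch of the kind of analysis we mean, assuming session data could be exported with each row tagged by prototype. The file name, column names, and date range are all hypothetical illustrations, not anything the platform currently provides:

```python
# Sketch of the requested comparison, assuming a hypothetical export where
# each session row carries a "session_prototype" tag (school, home, etc.).
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per session, tagged with its setting.
# Assumed columns: date, session_prototype, behavior_frequency
df = pd.read_csv("sessions.csv", parse_dates=["date"])

# Restrict all prototypes to the same time frame so rates are comparable.
window = df[(df["date"] >= "2024-01-01") & (df["date"] <= "2024-03-31")]

# Side-by-side summary: mean behavior frequency per prototype.
summary = window.groupby("session_prototype")["behavior_frequency"].agg(["mean", "count"])
print(summary)

# Overlaid view: weekly frequency trend for each prototype on one axis,
# in place of maintaining duplicate goals per setting.
weekly = (window
          .set_index("date")
          .groupby("session_prototype")["behavior_frequency"]
          .resample("W")
          .mean()
          .unstack(level=0))  # one column per prototype
weekly.plot(title="Weekly behavior frequency by session prototype")
plt.ylabel("Mean frequency per session")
plt.show()
```

Filtering by prototype would be the single-setting view of this; the comparison feature would be the grouped summary and the overlaid trend lines, all without duplicate goals or external spreadsheets.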