BlendX: Complex Multi-Intent Detection with Blended Patterns

Yejin Yoon1, Jungyeon Lee1, Kangsan Kim1, Chanhee Park2, Taeuk Kim1*
1Hanyang University 2Hyundai Motor Company
LREC-COLING 2024

*Corresponding author
BlendX overview figure

Overview of the BlendX construction framework across ATIS, Banking77, CLINC150, and SNIPS.

Abstract

Task-oriented dialogue (TOD) systems are commonly designed with the presumption that each utterance represents a single intent. However, this assumption may not accurately reflect real-world situations, where users frequently express multiple intents within a single utterance. While there is an emerging interest in multi-intent detection (MID), existing in-domain datasets such as MixATIS and MixSNIPS have limitations in their formulation. To address these issues, we present BlendX, a suite of refined datasets featuring more diverse patterns than their predecessors, elevating both their complexity and diversity. For dataset construction, we utilize both rule-based heuristics and a generative tool—OpenAI’s ChatGPT—which is augmented with a similarity-driven strategy for utterance selection. To ensure the quality of the proposed datasets, we also introduce three novel metrics that assess the statistical properties of an utterance related to word count, conjunction use, and pronoun usage. Extensive experiments on BlendX reveal that state-of-the-art MID models struggle with the challenges posed by the new datasets, highlighting the need to reexamine the current state of the MID field.

Overview

Motivation


Task-oriented dialogue (TOD) systems typically assume that each user utterance corresponds to a single intent, but real-world interactions frequently violate this assumption. Multi-intent detection (MID) aims to address this gap, yet benchmarks such as MixATIS and MixSNIPS often exhibit overly simplistic merging patterns (e.g., limited conjunction templates), which can provide shallow cues for models. BlendX is motivated by the need for a more rigorous and diverse MID testbed that goes beyond naive concatenation and better captures complex blended patterns found in natural conversations.

Motivation for multi-intent detection in BlendX
Figure 1. From MixX to BlendX: beyond naive concatenation.

Dataset Construction

BlendX extends four single-intent datasets: ATIS, SNIPS, Banking77, and CLINC150, producing BlendATIS, BlendSNIPS, BlendBanking77, and BlendCLINC150.

BlendX concatenation patterns
Figure 3. Concatenation taxonomy in BlendX.

The BlendX concatenation space is defined by two axes—complexity (explicit vs. implicit) and methodology (naive/manual vs. generative)—covering diverse blended patterns such as omissions, coreferences, and gerund phrases.

These axes motivate three complementary construction approaches: a naive approach that explicitly concatenates utterances with AND-variant connectors; a manual approach that applies rule-based heuristics with broader conjunction patterns, including omission and gerund phrasing; and a generative approach that uses ChatGPT to produce more natural, implicit merges while preserving the original intents.
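The naive and manual approaches can be sketched as simple template-based merging. The connector pools and function names below are illustrative stand-ins, not the paper's exact rules, and the manual approach's omission/gerund rewrites would need additional heuristics beyond what is shown:

```python
import random

# Illustrative connector pools (not the paper's exact templates).
AND_VARIANTS = ["and", "and also", "and then"]       # naive: explicit AND-style joins
MANUAL_PATTERNS = ["after that", "oh wait", "plus"]  # manual: broader conjunctions

def naive_merge(u1: str, u2: str, rng: random.Random) -> str:
    """Explicitly concatenate two utterances with an AND-variant connector."""
    conj = rng.choice(AND_VARIANTS)
    return f"{u1.rstrip('.?!')} {conj} {u2[0].lower()}{u2[1:]}"

def manual_merge(u1: str, u2: str, rng: random.Random) -> str:
    """Rule-based merge drawing from a wider conjunction pool; omission and
    gerund-style rewrites would require further heuristics not shown here."""
    conj = rng.choice(MANUAL_PATTERNS)
    return f"{u1.rstrip('.?!')}, {conj}, {u2[0].lower()}{u2[1:]}"

rng = random.Random(0)
print(naive_merge("Book a flight to Denver.", "Play some jazz", rng))
print(manual_merge("Book a flight to Denver.", "Play some jazz", rng))
```

The generative approach replaces these templates with an LLM prompt, which is what allows implicit patterns (coreference, omission) that no fixed template covers.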

BlendX construction approaches

To improve LLM-based generation quality, BlendX applies a similarity-based selection strategy: utterance pairs whose SBERT cosine similarity exceeds a threshold are selected for merging. This reduces intent-distortion errors compared to random pair selection.
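A minimal sketch of this selection step, with small toy vectors standing in for SBERT embeddings and an arbitrary threshold value (the paper's actual threshold is not reproduced here):

```python
import math
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def select_pairs(utterances, embeddings, threshold):
    """Return utterance pairs whose embedding similarity is at least
    `threshold` -- the candidates passed on to the merging step."""
    kept = []
    for i, j in combinations(range(len(utterances)), 2):
        if cosine(embeddings[i], embeddings[j]) >= threshold:
            kept.append((utterances[i], utterances[j]))
    return kept

# Toy 2-d embeddings standing in for SBERT sentence vectors.
utts = ["book a flight", "reserve a plane ticket", "play music"]
embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(select_pairs(utts, embs, threshold=0.8))
# Only the two travel-related utterances clear the threshold.
```

In the real pipeline, the embeddings would come from a sentence-transformers model rather than hand-crafted vectors.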

For quality control, BlendX defines three custom metrics for word count change (W), conjunction change (C), and pronoun change (P). These metrics filter low-quality generations, followed by expert review to remove failures such as intent removal, intent change, or unsuccessful merges.
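The spirit of these metrics can be illustrated as deltas between a merged utterance and its two sources. The word lists and exact delta definitions below are simplified stand-ins for the paper's metrics:

```python
# Illustrative word lists -- the paper's metric definitions use their own lexicons.
CONJUNCTIONS = {"and", "but", "then", "also", "after", "while", "plus"}
PRONOUNS = {"it", "there", "that", "them", "one"}

def _count(words, vocab):
    return sum(1 for w in words if w in vocab)

def blend_metrics(source_a, source_b, merged):
    """Deltas between a merged utterance and its two sources on
    word count (W), conjunction use (C), and pronoun use (P)."""
    src = (source_a + " " + source_b).lower().split()
    out = merged.lower().split()
    return (len(out) - len(src),
            _count(out, CONJUNCTIONS) - _count(src, CONJUNCTIONS),
            _count(out, PRONOUNS) - _count(src, PRONOUNS))

w, c, p = blend_metrics(
    "book a flight to denver",
    "what is the weather in denver",
    "book a flight to denver and tell me the weather there",
)
print(w, c, p)  # → 0 1 1
```

A merge that adds a conjunction and a coreferential pronoun without inflating length, as above, is the kind of generation these filters are meant to keep.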

BlendX metric examples
Table 3. Examples of blended patterns and their metric signatures.

Representative explicit and implicit concatenation cases in BlendX (e.g., ambiguity, gerund phrases, omissions, coreferences), with corresponding values of the three custom metrics (W, C, P).

Evaluation

BlendX is designed to stress-test MID systems under distribution shift. Models trained and evaluated on MixX perform well, but their accuracy drops sharply when they are evaluated on BlendX, indicating that MixX is not sufficiently challenging. Even when trained on BlendX, models do not fully recover MixX-level performance, suggesting that BlendX contains intrinsically harder patterns and demands stronger MID modeling.

BlendX evaluation results
Table 6. Benchmark performance on MixX vs. BlendX.

Accuracy of competitive MID models under different train/test splits, highlighting substantial performance drops when evaluating on BlendX and revealing brittleness under distribution shift.

Visualization

We visualize semantic structure with SBERT embeddings and t-SNE projections to compare MixX and BlendX distributions.
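A sketch of this visualization pipeline, with random vectors standing in for SBERT embeddings; a real run would encode the MixX and BlendX utterances with a sentence-transformers model before projecting:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Random 384-d vectors standing in for SBERT utterance embeddings of the
# two corpora; the wider spread on the second set is a toy stand-in for
# BlendX's broader distribution.
mixx_emb = rng.normal(size=(50, 384))
blendx_emb = rng.normal(size=(50, 384)) * 1.5

all_emb = np.vstack([mixx_emb, blendx_emb])
proj = TSNE(n_components=2, perplexity=10, init="random",
            random_state=0).fit_transform(all_emb)
print(proj.shape)  # one 2-d point per utterance: (100, 2)
```

Coloring the projected points by corpus (first 50 vs. last 50 rows) then yields a plot in the spirit of Figure 6.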

BlendX visualization
Figure 6. Embedding-space visualization of MixX vs. BlendX.

The projection shows that BlendX remains semantically grounded while exhibiting broader and more diverse distributions than MixX.

BibTeX

@inproceedings{yoon-etal-2024-blendx-complex,
  title = "{B}lend{X}: Complex Multi-Intent Detection with Blended Patterns",
  author = "Yoon, Yejin and Lee, Jungyeon and Kim, Kangsan and Park, Chanhee and Kim, Taeuk",
  editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen",
  booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
  month = may,
  year = "2024",
  address = "Torino, Italia",
  publisher = "ELRA and ICCL",
  url = "https://aclanthology.org/2024.lrec-main.218",
  pages = "2428--2439"
}