AEC-Bench: A Multimodal Benchmark for Agentic Systems in Architecture, Engineering, and Construction

March 31, 2026
arXiv:2603.29199

Authors

Andriy Mulyar, Harsh Mankodiya, Chase Gallik, Theodoros Galanos

Abstract

AEC-Bench is a multimodal benchmark for evaluating agentic systems on real-world tasks in the Architecture, Engineering, and Construction (AEC) domain. The benchmark covers tasks requiring drawing understanding, cross-sheet reasoning, and construction project-level coordination.

This report describes the benchmark motivation, dataset taxonomy, evaluation protocol, and baseline results across several domain-specific foundation model harnesses. We use AEC-Bench to identify tools and harness-design techniques that consistently improve performance across foundation models running in their native harnesses, such as Claude Code and Codex.

We openly release our benchmark dataset, agent harness, and evaluation code for full replicability at https://github.com/nomic-ai/aec-bench under an Apache 2.0 license.
