Abstract: | Various compression methods have been proposed to tackle the
increasing test-data volume of contemporary core-based systems. Despite
their effectiveness, most approaches based on classical codes (e.g.,
run-length, Huffman) cannot exploit the test-application-time advantage of
multiple-scan-chain cores, since they cannot perform parallel decompression
of the encoded data. In this paper, we take advantage of the inherent
parallelism of Huffman decoding and present a generalized multilevel
Huffman-based compression approach suitable for cores with multiple scan
chains. The size of the encoded
data blocks is independent of the slice size (i.e., the number of scan chains),
and can thus be adjusted to maximize the compression ratio. At the same
time, parallel data-block decoding ensures that most of the scan chains’
parallelism is exploited. The proposed decompression architecture
can be easily modified to suit any Huffman-based compression scheme. |
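To make the role of the block size concrete, the sketch below is a minimal, software-only illustration (not the paper's decompression architecture): it Huffman-encodes a test bitstream that has been cut into fixed-size blocks, where the hypothetical `block_size` parameter is chosen independently of the slice size, i.e., the number of scan chains.

```python
import heapq
from collections import Counter


def build_huffman_code(symbols):
    """Return a {symbol: bitstring} prefix code for the given symbols."""
    freq = Counter(symbols)
    # Heap entries: (weight, tie-breaker, partial code table for the subtree).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        # Prepend a branch bit to every code in each merged subtree.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]


def encode_blocks(test_bits, block_size):
    """Cut the test bitstream into fixed-size blocks and Huffman-encode them.

    block_size is chosen independently of the slice size (number of scan
    chains), which is what allows it to be tuned for compression ratio.
    """
    blocks = [test_bits[i:i + block_size]
              for i in range(0, len(test_bits), block_size)]
    code = build_huffman_code(blocks)
    encoded = "".join(code[b] for b in blocks)
    return encoded, code


if __name__ == "__main__":
    # Toy test set: many all-zero blocks plus a few others (don't-cares
    # assumed already filled); block size 4 regardless of scan-chain count.
    test_bits = "0000" * 20 + "1111" * 5 + "0101" * 3
    encoded, code = encode_blocks(test_bits, block_size=4)
    print(f"{len(test_bits)} bits -> {len(encoded)} bits, code = {code}")
```

In this toy setting the frequent all-zero block receives a short codeword, so shrinking or enlarging `block_size` trades dictionary size against codeword length; the hardware decoder in the paper additionally decodes each block in parallel across the scan chains, which the sketch does not attempt to model.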