This research is concerned with the delivery of scalably compressed data over erasure channels. Recent work has proposed several strategies for assigning optimal code redundancies to the elements of scalable data, which form a linear dependency structure, under the assumption that all source elements are encoded onto a common group of network packets. For large data and small network packets, such schemes require very long channel codes with high computational complexity, yet in networks with high loss rates, small packets are preferable to long ones. A strategy is proposed for optimally assigning elements of the scalable data to clusters of packets according to their dependency structure, subject to constraints on packet size and code complexity. Given a packet-cluster arrangement, the scheme then assigns optimal code redundancies to the source elements, subject to a constraint on total transmission length. The scheme is further modified to accommodate limited retransmission of lost data: an optimization algorithm determines the transmission and level of protection for each source element, using feedback about the success of earlier transmissions.
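To make the redundancy-assignment problem concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm. It assumes a toy model in which K scalable elements with utilities `utilities[j]` are spread over N packets with i.i.d. loss probability `p_loss`, each element is protected by an (N, n_j) MDS erasure code (it decodes iff at least n_j packets arrive), the linear dependency forces nondecreasing code dimensions n_1 <= ... <= n_K, and the expected utility is maximized by exhaustive search under a total-length budget. All function names, the cost model `sizes[j] * N / n_j`, and the search strategy are assumptions chosen for illustration.

```python
from itertools import combinations_with_replacement
from math import comb

def p_receive_at_least(N, k, p_loss):
    """P(at least k of N packets arrive), losses i.i.d. with prob p_loss."""
    q = 1.0 - p_loss
    return sum(comb(N, r) * q**r * p_loss**(N - r) for r in range(k, N + 1))

def best_assignment(utilities, sizes, N, p_loss, budget):
    """Toy exhaustive search over nondecreasing code dimensions n_1 <= ... <= n_K.

    Element j is carried by an (N, n_j) MDS erasure code, so it decodes
    iff at least n_j of the N packets arrive; a smaller n_j means more
    redundancy. Because of the linear dependency, element j is useful only
    if elements 1..j all decode, which with nondecreasing n_j happens
    exactly when at least n_j packets arrive. The channel cost of element j
    is modeled as sizes[j] * N / n_j symbols (an illustrative assumption).
    """
    best_eu, best_ns = -1.0, None
    # combinations_with_replacement yields exactly the nondecreasing tuples.
    for ns in combinations_with_replacement(range(1, N + 1), len(utilities)):
        cost = sum(s * N / n for s, n in zip(sizes, ns))
        if cost > budget:
            continue  # violates the transmission-length constraint
        exp_u = sum(u * p_receive_at_least(N, n, p_loss)
                    for u, n in zip(utilities, ns))
        if exp_u > best_eu:
            best_eu, best_ns = exp_u, ns
    return best_ns, best_eu
```

For example, `best_assignment([10.0, 6.0, 3.0], [4, 4, 4], N=8, p_loss=0.2, budget=60)` returns the code dimensions that trade redundancy between the more and less important elements within the budget; the exhaustive search is exponential in K and stands in for the efficient optimization the work itself develops.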