This paper presents DS-Net++, a framework for efficient inference in neural networks. Its core technique, dynamic weight slicing, scales across multiple architectures, including CNNs and vision transformers. The method delivers up to 61.5% real-world acceleration with minimal accuracy loss on MobileNet, ResNet-50, and Vision Transformer, demonstrating its potential for hardware-efficient dynamic networks.
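To illustrate the core idea, the sketch below shows dynamic weight slicing on a single linear layer: at inference time, only a contiguous prefix of the output units (and their weights) is used, so the reduced sub-network stays dense and hardware-friendly. This is a minimal NumPy illustration under stated assumptions; `sliced_linear` and the fixed slice ratios are hypothetical and not the paper's actual implementation, which selects slices with a learned, input-dependent gate.

```python
import numpy as np

def sliced_linear(x, W, b, ratio):
    """Apply a linear layer using only the first `ratio` fraction of output units.

    Dynamic weight slicing keeps a contiguous prefix of the weight rows, so the
    sliced sub-layer remains a dense matrix multiply (no sparse indexing),
    which is what makes the speedup realizable on real hardware.
    """
    k = max(1, int(W.shape[0] * ratio))  # number of active output units
    return x @ W[:k].T + b[:k]

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # full weight matrix: 8 outputs, 4 inputs
b = np.zeros(8)
x = rng.standard_normal((2, 4))   # batch of 2 input vectors

# In DS-Net++ the slice ratio is chosen per input by a gating module;
# here we simply evaluate two fixed ratios to show the effect.
y_half = sliced_linear(x, W, b, 0.5)  # half the output units active
y_full = sliced_linear(x, W, b, 1.0)  # full network
print(y_half.shape, y_full.shape)  # → (2, 4) (2, 8)
```

Because the slice is a prefix, the half-width output is exactly the first four columns of the full-width output: smaller slices compute a strict subset of the full layer's work, which is how accuracy can degrade gracefully as compute shrinks.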