Flatness Control for Cold Rolling Process of Steel Strip Based on DDPG with Delay Compensation
27 Pages · Posted: 29 Mar 2025
Abstract
Flatness control in cold rolling is a complex industrial control problem characterized by multivariable interactions, strong coupling, nonlinear dynamics, and time delays. Conventional control approaches that rely on static models with fixed parameters exhibit inherent limitations in accuracy and robustness. These limitations become particularly pronounced during unsteady-state rolling phases, including acceleration/deceleration, flying gauge control, and tension loss during shape cutting. To address these challenges, this study develops an intelligent control framework based on the Deep Deterministic Policy Gradient (DDPG) reinforcement learning algorithm. The proposed architecture integrates three core components: a reinforcement learning agent, a real-time state observer, and an adaptive reward computation module. By incorporating an experience replay mechanism with delay compensation strategies, the system achieves faster dynamic response and stronger disturbance rejection than traditional PID controllers, reducing settling time by 42% (from 8.6 s to 5.0 s). Industrial field trials confirm a 10% reduction in root mean square error (from 3.94 I-units to 3.55 I-units) under equivalent disturbance conditions. The approach supports continuous production of thin silicon steel (thickness < 0.5 mm) while maintaining flatness quality during rapid speed transitions, representing a significant advance for high-precision rolling applications.
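To make the delay-compensation idea described above concrete, the following is a minimal Python sketch of an experience replay buffer in which the reward (the flatness measurement) for an action only arrives a fixed number of control periods after the action is applied. The class name `DelayCompensatedReplay`, the parameter `delay_steps`, and the assumption of a fixed, known measurement delay are illustrative and are not taken from the paper; the actual implementation may differ.

```python
import random
from collections import deque


class DelayCompensatedReplay:
    """Replay buffer that re-aligns delayed flatness rewards with the
    transitions (state, action, next_state) that produced them."""

    def __init__(self, capacity=100_000, delay_steps=3):
        self.buffer = deque(maxlen=capacity)   # completed transitions for DDPG updates
        self.delay_steps = delay_steps          # assumed fixed actuation/measurement delay
        self.pending = deque()                  # transitions still waiting for their reward

    def push(self, state, action, next_state):
        # The flatness measurement for this action arrives delay_steps periods
        # later, so the transition is parked until its reward can be attached.
        self.pending.append([state, action, None, next_state])

    def observe_reward(self, reward):
        # A reward measured now belongs to the action taken delay_steps periods
        # ago, i.e. the oldest pending transition once the pipeline is full.
        if len(self.pending) > self.delay_steps:
            transition = self.pending.popleft()
            transition[2] = reward
            self.buffer.append(tuple(transition))

    def sample(self, batch_size=64):
        # Sample a mini-batch of completed (delay-aligned) transitions.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Under this sketch, the agent calls `push` every control period and `observe_reward` whenever a new flatness measurement arrives; only transitions whose rewards have been correctly re-aligned enter the buffer used for DDPG training.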
Keywords: DDPG Algorithm, Cold Rolling Process, Flatness Control, Time-Delay Compensation, Reinforcement Learning, Nonlinear Actuator