This week, I successfully deployed my website via GitHub Pages and made some minor modifications to a few of the pages: https://stonesgate604.github.io/Code-Your-Way/
This project aims to develop an intelligent system that can understand and simulate the human drawing process. It converts complex drawing input into machine-readable sequential data and uses deep neural networks to predict the subsequent creation steps. The system can not only complete an artwork automatically but also assist creators in real time while they draw.

The current logic of text-to-image AI is to take the user’s prompt as input to a model, which outputs pixels and arranges them into an image. The machine is not really drawing like a human; it is closer to rendering an image. The drawback is that users cannot fully control their artwork: each prompt feels like a gamble, and the generated images are not editable project files, so users cannot keep refining and adjusting them.

The core logic of my idea is to build a “perception–decision–action” closed-loop system: a neural network continuously perceives and recognizes the current state of the canvas (Input), predicts the optimal next drawing-action instruction (Decision), and then drives an executor that translates it into concrete operations in the software (Action). In the example, the rabbit ear at the top of the head on the left has not been colored in yet, so the AI’s predicted next action is to color that ear. Strictly speaking, these operations are closer to imitation learning and fall within the scope of embodied intelligence.
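The closed loop above can be sketched as a simple control loop. Everything here is a hypothetical skeleton: `observe`, the policy, and the executor stand in for the perception network, the action-prediction model, and the software driver; only the perceive→decide→act structure comes from the description.

```python
# Hypothetical "perception–decision–action" loop skeleton.

def drawing_loop(canvas, policy, executor, max_steps=100):
    """Repeatedly perceive the canvas, predict an action, and execute it."""
    for _ in range(max_steps):
        state = canvas.observe()    # Input: perceive the current canvas state
        action = policy(state)      # Decision: predict the next drawing action
        if action is None:          # the policy signals the artwork is done
            break
        executor(canvas, action)    # Action: apply it in the software
    return canvas

# Toy stand-ins so the loop is runnable end to end.
class Canvas:
    def __init__(self):
        self.strokes = []
    def observe(self):
        return list(self.strokes)

def toy_policy(state):
    # Toy decision rule: keep adding strokes until three exist, then stop.
    return f"stroke_{len(state)}" if len(state) < 3 else None

def toy_executor(canvas, action):
    canvas.strokes.append(action)

result = drawing_loop(Canvas(), toy_policy, toy_executor)
print(result.strokes)  # → ['stroke_0', 'stroke_1', 'stroke_2']
```

Feeding each action's result back through perception is what makes the loop closed: the policy always decides based on what is actually on the canvas, which is also what would let it notice an uncolored region (like the rabbit ear) and target it next.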
