In Class

  1. Mainly Gemini (to build the program structure and write the prompts) and Copilot (mainly to code from the prompts Gemini produces).
  2. Gemini is good at discussing project ideas and the necessary logic and workflow (sometimes I also paste in code and ask Gemini to explain it). Copilot is very good at coding because, as an AI agent, it can see the whole project, so it knows the dependencies.
  3. Gemini is sometimes bad at coding because it doesn't have a view of the whole project; Copilot is bad at explaining code, and sometimes it's inconvenient to use.
  4. Because I use AI for coding so much, I feel like my coding skills have actually gotten worse. Even though I know the logic and workflow, when I start coding by myself I find I don't remember the syntax or the function names. (Does AI make me a worse coder? Yes, it does.)
  5. The practical aspect, just as I mentioned in Question 4.

Assignment

This week, I successfully deployed my website pages via GitHub Pages and made some minor modifications to a few of them: https://stonesgate604.github.io/Code-Your-Way/

Plan for the Next Half of the Semester

Proposal 1

Concept

This project aims to develop an intelligent system that can understand and simulate the human drawing process. By converting complex drawing input into machine-readable sequential data, it uses deep neural networks to predict subsequent creation steps. The system can not only complete the artwork automatically, but also provide creators with real-time process assistance.

drawingAI.png

The current logic of text-to-image AI is to take the user’s prompt as input to the model, output pixels, and arrange those pixels into an image. The machine is not really drawing like a human; it is more like rendering an image. The drawback is that users cannot fully control their artwork: each prompt feels like a gamble, and the generated images are not editable project files, so users cannot continue to refine and adjust them.

Frame 20.png

The core logic of my idea is to build a “perception–decision–action” closed-loop system: a neural network continuously perceives and recognizes the current state of the canvas (Input), predicts the optimal next drawing-action instruction (Decision), and then drives an executor that translates it into concrete operations in the software (Action). In the example, the rabbit ear at the top of the head on the left has not been colored in yet, so the AI’s predicted next action is to color the rabbit ear. To be precise, these operations are closer to imitation learning, falling within the scope of embodied intelligence.
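The closed loop above can be sketched in a few lines of Python. This is only a toy illustration under my own assumptions: the canvas is a small 2D grid (0 = blank, 1 = outlined but uncolored, 2 = colored), and the `decide` step is a hand-written stub standing in for the trained neural network, which would be the real decision module.

```python
# Toy "perception–decision–action" loop. All names (Action, perceive,
# decide, act, run_loop) are hypothetical sketches, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str              # e.g. "color"
    target: tuple          # (row, col) of the cell to act on

def perceive(canvas):
    """Input: scan the canvas for outlined-but-uncolored cells (value 1)."""
    return [(r, c) for r, row in enumerate(canvas)
                   for c, v in enumerate(row) if v == 1]

def decide(uncolored):
    """Decision: a real system would query a trained model here; this stub
    just picks the first unfinished cell, or None when the drawing is done."""
    return Action("color", uncolored[0]) if uncolored else None

def act(canvas, action):
    """Action: execute the predicted instruction on the canvas."""
    r, c = action.target
    canvas[r][c] = 2       # mark the cell as colored

def run_loop(canvas):
    """Run perceive -> decide -> act until no action is predicted."""
    steps = 0
    while (action := decide(perceive(canvas))) is not None:
        act(canvas, action)
        steps += 1
    return steps

canvas = [[0, 1, 0],
          [1, 2, 1],
          [0, 1, 0]]
print(run_loop(canvas))  # colors the 4 outlined cells, so prints 4
```

In the real project, `perceive` would encode the canvas image, `decide` would be the sequence model predicting the next stroke, and `act` would drive the drawing software, but the loop structure stays the same.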

Frame 201.png