This week, I focused primarily on redesigning and optimizing the UI of my drawing panel. A particularly noteworthy improvement concerns the timeline, a core feature of this project: I replaced the original single slider with a format more akin to the parametric modeling history found in software like Fusion. Specifically, icons now represent individual steps, with distinct colors differentiating user actions from AI-generated actions. Even with these updates, the feature remains in a very rudimentary state. As I mentioned last week, SketchRNN still relies on text prompts as input and outputs a sequence of strokes, rather than taking the current state of the drawing as input, which prevents me from fully realizing the functionality I originally envisioned. Consequently, the current state of this project serves primarily as a conceptual and interactive demonstration. The good news is that this week I managed to get the PyTorch training scripts up and running, a process I will describe in greater detail later on.

https://stonesgate604.github.io/AIdrawing/

Timeline UI Optimization

New User Interface

New Timeline

Since the codebase for this project has become fairly complex, I will focus on explaining the underlying implementation logic rather than walking through the code line by line. (I have also split the JavaScript into several separate files, organized by module functionality.)

The core of the timeline is the session array (located in state.js), where each individual stage is represented by a step object:

```js
{
  step_id: stepCount,
  action: {
    type: 'stroke_end' | 'fill' | 'eraser' | 'ai_stroke_end',
    tool, color, brush_size,
    points: [{x, y}, ...]   // the stroke path, when applicable
  },
  snapshot                  // the canvas state captured at this step
}
```

Whenever a brushstroke ends (mouseReleased), a fill operation occurs, or an AI stroke completes, canvas.js invokes captureStep() to push the current state onto the session stack and then updates stepCount and currentTimelineIndex.
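A minimal sketch of this bookkeeping, with the snapshot treated as an opaque value (the function signature and the truncate-on-rewind behavior are my assumptions; only the names session, stepCount, currentTimelineIndex, and captureStep come from the actual code):

```javascript
// Hypothetical sketch of the timeline state in state.js.
// The real snapshot is presumably canvas image data; here it is opaque.
const session = [];
let stepCount = 0;
let currentTimelineIndex = -1;

function captureStep(action, snapshot) {
  // Assumption: if the user has rewound the timeline, discard the
  // now-overwritten future steps (typical undo/redo behavior).
  session.length = currentTimelineIndex + 1;

  session.push({ step_id: stepCount, action, snapshot });
  stepCount += 1;
  currentTimelineIndex = session.length - 1;
}
```

In this sketch, canvas.js would call captureStep() from mouseReleased(), after a fill, or when an AI stroke finishes.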

Rendering (timeline.js)

updateTimeline() serves as the main entry point; it is called whenever the session state changes and performs three key tasks:

  1. renderTimelineIcons() — It iterates through the session data, creating a .tl-step div for each step. It then assigns a CSS class based on the action type (brush, eraser, fill, or ai) and inserts the corresponding Lucide icon.
  2. updateTimelineCursor() — It identifies the DOM position of the currently active step and uses absolute positioning to move the vertical green cursor line (#tl-cursor) to that location.
  3. ensureActiveStepVisible() — If the active step falls outside the visible viewport, it calls scrollTo on the track to bring the step back into view.
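As an illustration of step 2, the cursor placement reduces to simple arithmetic once every step occupies a fixed-width slot (this helper and the right-edge placement are my assumptions; the post only names updateTimelineCursor() and #tl-cursor):

```javascript
// Hypothetical helper: compute the left offset (in px) of the vertical
// green cursor line for a given active step index.
const STEP_SLOT_WIDTH = 50; // px per step icon, from the post

function cursorLeft(activeIndex, slotWidth = STEP_SLOT_WIDTH) {
  // Assumption: the cursor sits at the right edge of the active slot,
  // separating completed steps from future ones.
  return (activeIndex + 1) * slotWidth;
}
```

updateTimelineCursor() would then apply this value as an absolute `left` style on #tl-cursor, adjusted for the track's current scroll offset.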

The width of each icon is fixed at STEP_SLOT_WIDTH = 50px. The total width of the #tl-icons container is calculated as steps × 50 + 28px; if this width exceeds the available space, the timeline track becomes horizontally scrollable.
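The width rule can be written out directly (the function names are mine; the constants 50 and 28 are the ones given above):

```javascript
const STEP_SLOT_WIDTH = 50; // px per step icon
const TRACK_PADDING = 28;   // px of fixed extra width

// Total width of the #tl-icons container for a given number of steps.
function iconsContainerWidth(steps) {
  return steps * STEP_SLOT_WIDTH + TRACK_PADDING;
}

// The track becomes horizontally scrollable when the icons overflow
// the available space (e.g. the track element's clientWidth).
function isScrollable(steps, availableWidth) {
  return iconsContainerWidth(steps) > availableWidth;
}
```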

https://github.com/StonesGate604/AIdrawing.git

Training

The most significant development in my training process this week is that I successfully ran the Python training script. Using the stroke data I had previously collected via that webpage, I carried out a full model training session. The following is my training script, which I wrote with the assistance of AI; since I do not yet fully understand the code, I am posting it here without offering any interpretation.