[{"data":1,"prerenderedAt":19},["ShallowReactive",2],{"docs-post-ima-claw-creator-training-guide-en":3},{"slug":4,"title":5,"description":6,"date":7,"author":8,"tags":9,"lang":13,"image":14,"ogImage":14,"thumbnail":14,"featured":15,"featuredOrder":16,"content":17,"html":18},"ima-claw-creator-training-guide-en","Ima Claw Creator's Training Guide","A hands-on guide for content creators on training your AI lobster. No theory — just real pitfalls, breakthroughs, and methods from my daily work with Claw.","2026-03-08T03:50:00.000Z","Yuki He",[10,11,12],"Tutorial","Tips & Tricks","OpenClaw","en","",false,99,"\n> **Preface**\n>\n> Fu Sheng (傅盛) wrote an excellent piece called \"AI Assistant Training Manual\" about his 25 days with his AI assistant. Every OpenClaw user should read it.\n>\n> But Fu Sheng's guide is about general AI assistants — managing emails, running scripts, handling messages.\n>\n> My world is different. **I'm a content creator.** I need an AI employee that makes videos, generates images, writes articles, and manages social media.\n>\n> This is my version: **How creators can train an AI lobster that actually does the work.**\n\n---\n\n## Chapter 1: Four Misconceptions Creators Have About AI\n\nMany creators try AI, and conclude \"it's not that great.\" It's not that AI can't do it — it's that the approach is wrong.\n\n### ❌ Misconception 1: \"AI Can Directly Make My Video\"\n\nNot in one step.\n\nAI video generation reality: **5 seconds per shot.** You say \"make me a one-minute video\" — it can't.\n\nBut you can do this:\n\n> Break into 3 shots → 5 seconds each → auto-stitch → auto-score → 15-second film\n\nThe key isn't \"can AI do it\" but **how you break down the task.** You can do this yourself, or let the AI figure it out — but it needs to know how the world works.\n\n### ❌ Misconception 2: \"Send a Reference Image and AI Will Replicate It\"\n\nNot quite.\n\nAI image generation has two modes:\n- **Text-to-image**: You describe in words → AI imagines → probably doesn't match\n- **Image-to-image**: You provide a reference → AI generates based on it → 10x more accurate\n\nMy lesson: Asked AI to create an iPhone 17 Pro Max promo without reference images. AI \"imagined\" a phone that looked nothing like the real thing.\n\n**Rule: For specific products\u002Fpeople\u002Fscenes, always search for reference images first. Use image-to-image.**\n\n### ❌ Misconception 3: \"Good Prompts Are All You Need\"\n\nPrompts matter, but they're **not the most important thing.**\n\nMaking a video, the prompt is 10% of the decision. The other 90%:\n- Which model? (Kling O1 for character consistency, Wan 2.6 for visual quality, Veo 3 for complex scenes)\n- Which mode? (text-to-video \u002F image-to-video \u002F reference image?)\n- How many shots? What transitions?\n- What music? What tempo?\n\nA good AI assistant should **make these decisions for you**, not wait for instructions on each one.\n\n### ❌ Misconception 4: \"AI Output Is Ready to Use\"\n\nNot yet.\n\nFirst-attempt pass rate is roughly 60-70%. That means 3-4 out of 10 need a redo.\n\nAnd AI often can't tell what went wrong — like when a door opens in the wrong direction in a video.\n\n**You look at it, say \"direction's wrong,\" and it fixes it.**\n\nThis isn't a flaw. 
### ❌ Misconception 2: "Send a Reference Image and AI Will Replicate It"

Not quite.

AI image generation has two modes:

- **Text-to-image**: You describe in words → AI imagines → probably doesn't match
- **Image-to-image**: You provide a reference → AI generates based on it → 10x more accurate

My lesson: I asked AI to create an iPhone 17 Pro Max promo without reference images. It "imagined" a phone that looked nothing like the real thing.

**Rule: For specific products/people/scenes, always search for reference images first. Use image-to-image.**

### ❌ Misconception 3: "Good Prompts Are All You Need"

Prompts matter, but they're **not the most important thing.**

When you're making a video, the prompt is 10% of the decision. The other 90%:

- Which model? (Kling O1 for character consistency, Wan 2.6 for visual quality, Veo 3 for complex scenes)
- Which mode? (text-to-video / image-to-video / reference image?)
- How many shots? What transitions?
- What music? What tempo?

A good AI assistant should **make these decisions for you**, not wait for instructions on each one.

### ❌ Misconception 4: "AI Output Is Ready to Use"

Not yet.

The first-attempt pass rate is roughly 60-70%. That means 3-4 out of every 10 outputs need a redo.

And AI often can't tell what went wrong — like when a door opens in the wrong direction in a video.

**You look at it, say "the direction's wrong," and it fixes it.**

This isn't a flaw. It's the current workflow: **humans decide, AI executes.**

---

## Chapter 2: How Ima Claw's Creative Workflow Works

### It's Not a Tool — It's an Employee

Traditional creative tool: You open Photoshop → operate it yourself → export.
AI creative tool: You open Midjourney → write a prompt → wait → rewrite if unsatisfied.

Ima Claw is different: You say one sentence, and it **decides how to do it.**

Real case — **one cat photo becomes a 15-second film:**

1. I sent a cat photo and said "make a short video"
2. Claw decided: preserve the cat's real appearance → chose `reference_image_to_video` mode
3. Auto-selected Kling O1 (strongest character consistency)
4. Self-planned 3 shots (cat scratching door → door opens → cat runs out)
5. Shot 2 was wrong (door direction reversed) → caught its own mistake → rewrote the prompt → regenerated
6. Three shots auto-stitched + auto-generated BGM + merged output

**I spent 2 minutes. Claw worked for 40 minutes.**

That's the difference between an "AI employee" and an "AI tool": tools wait for your input; employees think for you.
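For the curious, step 6 — stitching the shots and laying in the BGM — is the kind of thing ffmpeg handles in two calls. A minimal sketch of that step, assuming three finished clips and a generated music track already on disk (the file names are placeholders, and this is not necessarily how Claw does it internally):

```python
import subprocess

# Concatenate the three shots. The concat demuxer reads clip paths from a list file.
with open("shots.txt", "w") as f:
    for clip in ["shot1.mp4", "shot2.mp4", "shot3.mp4"]:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "shots.txt", "-c", "copy", "stitched.mp4"],
    check=True,
)

# Mux the generated BGM under the stitched video; stop at the shorter stream.
subprocess.run(
    ["ffmpeg", "-y", "-i", "stitched.mp4", "-i", "bgm.mp3",
     "-map", "0:v", "-map", "1:a", "-c:v", "copy", "-shortest", "final.mp4"],
    check=True,
)
```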
### The Creative Stack

| Capability | Models | Use Cases |
|-----------|--------|-----------|
| Image Generation | Midjourney / Nano Banana Pro / Seedream | Covers, posters, product shots |
| Video Generation | Kling O1 / Wan 2.6 / Veo 3.1 / Seedance | Short films, ads, demos |
| Music Generation | DouBao BGM / Suno | Background music, soundtracks |
| Copywriting | Claude / GPT | Blogs, social copy, scripts |
| Auto-Publishing | Xiaohongshu / WeChat | One-click distribution |

You don't need to know what these models are — tell Claw what you want, it picks.

### What Does It Cost?

| Project | Cost | If Done Manually |
|---------|------|------------------|
| 15-sec film (3 shots + BGM) | 174 credits ≈ $1.70 | Half-day shoot ≈ $300+ |
| AI cover image | 10-18 credits ≈ $0.15 | Designer ≈ $30+ |
| Bilingual blog post | 0 credits (text only) | Translator ≈ $70+ |

**It's not just 90% cheaper. It's 90% faster AND 90% cheaper.**

---

## Chapter 3: 10 Practical Creative Tips

All from real mistakes, now coded into Claw's rule files.

### 1. Product Images: Always Search for References First

> ❌ "Make an iPhone promo" → generate from text
> ✅ Get the task → search real product photos → use image-to-image

### 2. Wrong Video Direction? Use Light as a Guide

"Light pouring through the door crack, getting brighter" → the AI understands the door is opening.

> Claw's own lesson: light and motion direction matter more than subject description.

### 3. Dual-Model Comparison for Images

Generate with Midjourney + Nano Banana Pro simultaneously. Show both, let the user choose. 2x the efficiency of a single model.

### 4. Never Use Sub-Agents for Writing

Sub-agents can't see the main session's context, so articles come out completely off-target. **Writing/coding/design → always in the main session.**

### 5. Send Files Directly, Never Paste File Paths

> ❌ "The file is at /root/workspace/output/xxx.mp4" (the user can't open it)
> ✅ Send the file directly via Feishu/messaging

### 6. One-Sentence vs. Step-by-Step

Simple creative task → one sentence: "make a cat video"
Complex task with standards → step by step: "cover first → I review → then video → I confirm → then copy"

### 7. Self-Test Before Every Delivery

Mandatory checklist: HTTP 200 verification, no 404 links, DOM validation, visual check. No testing = rework.
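A hedged sketch of the "no 404 links" part of that checklist, using only the standard library; the URL list is a placeholder, and the real checklist still needs the DOM and visual passes:

```python
import urllib.request
import urllib.error

# Placeholder URLs; in practice, collect every link the deliverable references.
urls = [
    "https://imaclaw.ai/",
    "https://example.com/cover.webp",  # hypothetical asset URL
]

failures = []
for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.status != 200:
                failures.append((url, resp.status))
    except urllib.error.URLError as e:  # covers 404s and connection errors
        failures.append((url, str(e)))

# No testing = rework: refuse to deliver if anything failed.
if failures:
    raise SystemExit(f"self-test failed: {failures}")
print("all links returned HTTP 200, safe to deliver")
```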
### 8. Video Poster Thumbnails

AI videos show a black frame by default on the web. Fix: extract the first frame via ffmpeg → webp → add a `poster` attribute. Small detail, huge UX difference.
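That fix in full, as a minimal sketch; the file names are placeholders:

```python
import subprocess

# Grab the first frame of the video as a webp poster image.
subprocess.run(
    ["ffmpeg", "-y", "-i", "cat_film.mp4", "-frames:v", "1", "poster.webp"],
    check=True,
)

# The poster attribute then replaces the black default frame on the page:
html = '<video src="cat_film.mp4" poster="poster.webp" controls></video>'
print(html)
```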
### 9. Install CJK Fonts First

Servers don't have Chinese fonts by default. Generated covers show □□□□ instead of text. One command to fix, but without it everything breaks.
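On a Debian/Ubuntu server, that one command is typically the Noto CJK package. This assumes your distro; other systems have their own equivalent:

```python
import subprocess

# Debian/Ubuntu (run as root): install Noto CJK so rendered covers show real glyphs.
subprocess.run(["apt-get", "install", "-y", "fonts-noto-cjk"], check=True)

# Refresh the font cache so rendering tools pick the new fonts up immediately.
subprocess.run(["fc-cache", "-f"], check=True)
```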
### 10. Write Lessons as Rules, Not Memory

Same as Fu Sheng's point: **AI doesn't "remember."** Mistake → write it into AGENTS.md/TOOLS.md → it becomes a permanent rule.
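The mechanics are as simple as appending a line to the rule file. A hypothetical convenience helper (the date format and entry layout are my own choices, not an OpenClaw convention):

```python
from datetime import date
from pathlib import Path

def append_lesson(lesson: str, rules_file: str = "AGENTS.md") -> None:
    """Turn a one-off mistake into a permanent rule by appending it to the rule file."""
    entry = f"- [{date.today().isoformat()}] {lesson}\n"
    with Path(rules_file).open("a", encoding="utf-8") as f:
        f.write(entry)

# The lesson from Misconception 2, written down once and enforced from then on:
append_lesson("For specific products/people/scenes, always use image-to-image with real references.")
```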
---

## Chapter 4: The Creator's 7-Day Path

### 📅 Day 1: Adopt + Establish Identity
- Name your lobster, set its personality (SOUL.md)
- Tell it who you are, what content you make, your style preferences (USER.md)
- First task: have it generate a profile picture for you

### 📅 Day 2: First Creative Task
- Send a photo, say "make a short video"
- Watch how it breaks down the task, selects models, generates
- Not satisfied? Tell it what's wrong, watch how it adapts

### 📅 Day 3: Establish Creative Rules
- Write your brand colors, font preferences, and content style into TOOLS.md
- Write lessons ("always use image-to-image for products") into AGENTS.md
- Set up your platforms and publishing formats

### 📅 Days 4-5: Batch Creation
- Try creating 3-5 pieces at once (images + videos + copy)
- Have it write bilingual versions
- Try auto-publishing to social platforms

### 📅 Day 6: Establish Daily Routines
- Set up Heartbeat: daily industry news scan
- Set up Cron: weekly content-calendar generation
- Automate competitor tracking

### 📅 Day 7: Review + Optimize
- Review the week's output — what worked, what needs fixing
- Write the lessons into rule files
- Slim down MEMORY.md, keep it focused

### 📅 Day 8+: Continuous Evolution

As Fu Sheng says — AI doesn't self-evolve. **You evolve, and the rule system you build for it evolves.**

But here's what's different for creators: **your work is the best proof of evolution.**

A month ago, you might have been struggling with prompts.
A month later, you send a photo, say one sentence, and a 15-second film is done.

---

## Final Thought

Fu Sheng said: "Your job isn't to make AI smarter — it's to make sure AI sees the right information."

I'll add: **For creators, your job isn't to learn every AI tool — it's to raise a lobster that learns them all for you.**

This lobster selects models, writes scripts, generates videos, adds music, and publishes.

You just need to have ideas, and say them out loud.

---

👉 [**imaclaw.ai**](https://imaclaw.ai)