Category: AI Development Journal

  • Completing the Translation of 16 Files β€” The Night All the AIs Went Down, and the Completion of the Four-Way Collaboration

    This article was originally written in Japanese and translated into English with AI assistance. Please note that some expressions may carry nuances from the original Japanese.

🇯🇵 日本語版はこちら / Japanese version


In the previous post (SP02), I covered the first six sessions of the HP English translation project — the discovery and repair of the footer damage, the establishment of the three-way cross-review system, and the structural solution to procedural mistakes.

    This post covers the second half: translating the remaining four files and carrying the project through to completion. The biggest stumble of the project was waiting in this second half. The night all the AIs went down.

    Taking On a Giant 68 KB File

In Sessions 7 and 8, I completed the translation of the What's New page (whats_new.html) and the About page. Three files remained. The biggest wall among them was index.html — the top page.

This file was 68 KB, three to four times the size of the others. It contained product introductions, a development timeline, a technology stack overview — the content that effectively represents the entire site. It was also the file with the most expressions that required careful handling from an intellectual-property perspective.

For example, the phrase "six AI systems" appears six times in historical descriptions, while descriptions of the current state say "seven." The translation needed to preserve this distinction precisely. Past references should say "six systems," and present references "seven systems." Mix them up even once, and you give the reader contradictory information.

Before handing index.html to Claude Code, I first had Claude.ai chat read through it entirely and identify the IP-protection-critical passages in advance: twelve spots of architecture-design-related terminology and seven spots involving the six/seven-systems notation. I compiled these into a list and incorporated them into the translation prompt, which prevented Claude Code from accidentally "just making everything seven" across the board.
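That consistency check can be sketched as a small script (a hypothetical illustration of the idea, not the actual tooling used; the phrase patterns it matches are assumptions):

```python
import re

# Hedged sketch: count "six ... systems" vs. "seven ... systems" mentions
# in a translated text, so a slip like "just making everything seven"
# shows up immediately as a count mismatch against the pre-analysis list.
def count_system_refs(text: str) -> dict[str, int]:
    counts = {}
    for word in ("six", "seven"):
        # The exact phrasings covered here are assumptions.
        pattern = rf"\b{word}\s+(?:AI\s+)?systems?\b"
        counts[word] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

sample = (
    "At the time, six AI systems were in operation. "
    "Today, seven AI systems run locally. "
    "The six systems of that era have since grown."
)
```

Comparing the counts before and after translation would catch a blanket six-to-seven rewrite without re-reading the whole 68 KB file.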

Session 9 — The Night All the AIs Went Down

    Session 9 was the most abnormal session of this project. The amount of work done was zero.

As I wrote in SP02, by this point Copilot was refusing to read files entirely, and was out of action as a reviewer. "If Copilot isn't available, we'll just run reviews through Claude Code and Claude.ai chat" — that was the plan.

But then, even Claude — our fallback — died.

It was the night I was about to begin translating the product detail page (ai-multilingual-meeting-detail.html). When I sent a status-check prompt to Claude Code, I got back an API 500 error. Retries gave the same result. Three failures in a row.

"If Claude Code is unhealthy, let me switch models," I thought, and retried after changing settings. Still no good. Even when I shortened the prompt to the absolute minimum — "just return git status" — I got a 500.

    Reluctantly, I shut down Claude Code and restarted it. git status and git log came through. But just as I breathed a sigh of relief, the moment I tried to run a grep command, this time I got an auth error (401).

    “Maybe it’s just a Claude Code issue,” I thought, and tried to continue in Claude.ai chat. The chat side displayed: “Cannot connect to Claude.”

    Copilot couldn’t read files. Claude Code wouldn’t run due to errors. Claude.ai chat couldn’t even connect. All three AIs were unusable at once. When Copilot first became unstable, I could still think “I’ll just use Claude instead.” But now Claude itself was down. The backup for the backup didn’t exist.

    At that point, I decided: “There’s nothing to do but wait.” I judged this was an infrastructure outage on Anthropic’s side. If it’s a problem on the service provider’s end, not my environment, there’s nothing to do but wait.

    The Structural Vulnerability of AI-Dependent Development

The biggest lesson from Session 9 was this simple fact: "Development that depends on AI stops completely when AI infrastructure fails."

In traditional development — the style of writing with just a text editor and compiler — work can continue even if the internet connection drops. The tools are right there in your hands. But in vibe coding, when AI services go down, you literally cannot do anything. Since the ability to write code resides on the AI side, the human has no option but to wait.

I believe this is an inherent vulnerability of vibe coding. Convenience and fragility often go hand in hand. In SP01 I wrote about "the pitfall of an era when AI can build anything," but "there are moments when you cannot build anything" is another feature of this era.

That said, I don't think we need to fear this vulnerability too much. By the next day, the service had recovered, and I was able to resume as Session 10 in a new chat. Outages are temporary. What matters is accepting that "things can stop" as a premise, breaking work into session-sized chunks, and committing intermediate state diligently.

Phase A Complete — Finishing the Translation Without Copilot Review

From Session 10 onward, Claude had recovered, and I progressed through the remaining files smoothly. The two product detail pages had the most IP-protection-critical content, but the three-step approach I had established with index.html — "prior analysis → translation → verification" — worked.

In Session 11, the last translation file was completed, and all eight English-version files were in place. Internally, I've been calling this "Phase A complete."

However, the translations up to this point were completed by just Claude Code + Claude.ai chat, a two-way system. The third-party review by Copilot was missing. As I wrote in SP02, Copilot had been left as-is, refusing file reads. The decision was to prioritize keeping the translation moving, but we couldn't skip Copilot review from a quality-assurance standpoint.

Wrestling with Copilot — Somehow Getting It to Review

    With translation done, the next step was to have Copilot review. But Copilot was still unable to read files. From here, the struggle began.

Since this is translation review, HTML file reading was essential. Text pasting had a 10,240-character limit — not workable for long files like the product detail pages. I tried converting to text files (.txt) and uploading those, but that didn't work either. I tried converting to PDF, but Copilot pushed back: "It has to be an HTML file."

"You were reading HTML just a little while ago," I pointed out. It kept insisting, "The specification has changed." Within the last hour or so, mind you.

The exchange went on. "The PDF contents are no longer auto-extracted — paste the text instead." "Text is impossible because of the character limit, which is why I made a PDF." "The PDF contents are treated as empty." After all that, it kindly explained: "You're not doing anything wrong on your end. It's a change on Copilot's side." Well then, please do something about it.

In the end, when I attached the PDF again from the web version of Copilot, it suddenly said: "I can now confirm the full English-version HTML." The reason is unclear. What it had been saying was impossible became possible using the same method. A considerable amount of wasted time.

And once it could read the file, another problem emerged. The HTML source code had been broken during the PDF conversion. Spaces were inserted around underscores in BEM notation (a CSS class-naming convention), comment tag closings were mangled, and newlines appeared mid-attribute value. Copilot explained this as "breakage during text extraction from PDF," but it was Copilot's specification change that forced PDF delivery in the first place — a circular argument.
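The kind of repair performed on the Claude Code side can be sketched like this (a minimal, hypothetical illustration; the real fix was done interactively, and the class names below are made up):

```python
import re

# Hedged sketch: strip the stray whitespace that PDF text extraction
# inserted around BEM double underscores inside class attributes,
# e.g. class="site-nav __link" -> class="site-nav__link".
def fix_bem_classes(html: str) -> str:
    def repair(match: re.Match) -> str:
        # Collapse any whitespace around "__" within the attribute value.
        value = re.sub(r"\s*__\s*", "__", match.group(1))
        return f'class="{value}"'
    return re.sub(r'class="([^"]*)"', repair, html)
```

A regex pass like this only handles the class-attribute breakage; the mangled comment tags and mid-attribute newlines would need their own targeted fixes.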

Ultimately, I fixed the tag breakage on the Claude Code side and had Copilot focus only on the naturalness of the English. The role of Copilot as "reviewer" in the three-way system was preserved, but frankly, the journey to restoration was more exhausting than the translation work itself.

This is the reality of vibe coding. Even with three AIs, each one becomes unstable for different reasons at different times. And when you ask the AI itself why it became unstable, you don't get accurate answers. Right after saying "the spec has changed," it does the same thing with the same method. Even in human-to-human communication this would be confusing, but with AI especially, pursuing "why can you do it now when you couldn't a moment ago" yields no answers. You need the judgment to give up and move forward.

Copilot's "Legal Tone" Revision Suggestions

The revived Copilot offered an interesting observation. It noted that the wording of the AI translation disclaimer (the header banner saying "This page contains AI-assisted translation") was too casual for a business site.

Indeed, the initial disclaimer had been written during translation work with a "good enough if the meaning gets across" mindset. Following Copilot's feedback, I rewrote it in a tone closer to legal documentation. What was previously "This page contains AI-assisted translation" was revised to formal wording that specifies the scope of the disclaimer and a contact point for inquiries.

AI generates → another AI reviews → a human scrutinizes and accepts or rejects. This cycle, which had nearly collapsed at one point, ended up elevating quality even to the level of legal language. It was worth persisting to restore the three-way system rather than giving up.

However, having all eight files in place with review completed wasn't the end. A list of issues that had accumulated through Phase A remained — beyond the disclaimer wording, there were the footer horizontal-rule inconsistency and the navigation-structure mismatches. That's Phase B.

Phase B — What Should Have Been "Minor Fixes" Became Major Surgery

Phase B was initially estimated as "a collection of minor fixes." But once I looked into it, the scope was beyond imagination.

    The biggest discovery was that the navigation structure existed in two different variants. Of the 16 files, 10 were built in Pattern A (ul>li>a format) and 6 in Pattern B (div>a format), and mobile display behavior was split into three patterns. On some pages the hamburger menu would open and close. On others it was always displayed and wrapped. On yet others, the menu disappeared entirely.
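Auditing which variant each file uses can be sketched as follows (a hypothetical illustration; the actual inspection was done by reading the files with Claude Code, and the regexes are assumptions that only cover simple markup):

```python
import re

# Hedged sketch: classify a page's <nav> markup as Pattern A (ul>li>a)
# or Pattern B (div>a) so all 16 files can be audited in one pass.
def nav_pattern(html: str) -> str:
    nav = re.search(r"<nav\b.*?</nav>", html, flags=re.DOTALL | re.IGNORECASE)
    if nav is None:
        return "none"
    inner = nav.group(0)
    if re.search(r"<ul\b", inner, re.IGNORECASE):
        return "A"  # ul>li>a variant
    if re.search(r"<div\b[^>]*>\s*<a\b", inner, re.IGNORECASE | re.DOTALL):
        return "B"  # div>a variant
    return "unknown"
```

Running something like this over the 16 files would have surfaced the 10-versus-6 split in seconds, before the mobile behavior diverged into three patterns.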

    This “disappearing menu” was occurring on the About and What’s New pages. On mobile access, navigation wasn’t displayed at all, leaving zero routes to other pages. This is a serious practical bug.

    The cause was subtle structural differences between HTML that had been AI-generated at different times. Differences in generation timing and prompts produced divergent internal structures across pages of the same site. The theme I wrote about in SP02 β€” “fixing HTML broken by AI, with AI” β€” was repeating itself here.

    In Session 15, I performed a large-scale refactoring that unified all 16 files to Pattern B + hamburger toggle. Including link path corrections, 212 modifications were made in a single session.

    16 Files, Completed

    With all Phase B issues closed, the 8 English files + 8 Japanese files = 16 files were now consistent both structurally and linguistically.

From the start of the project to this point: fifteen sessions, Session 1 through Session 15. One AI breaks, another fixes, a third reviews, a human decides. Through that repetition, the 16-file multilingual site came together.

However, this is still just within my local development environment. It hasn't been uploaded to the production server. Next time: at last, the production deployment story. Unexpected traps were waiting here too.

    Next Time

SP04: "Production Deployment — The .htaccess Trap and the Moment of Going Live."


    About Soul Resonant Works

    Soul Resonant Works is a solo venture developing seven local AI systems.
    Starting from zero programming experience, the development is progressing through collaboration with AI.

🌐 Soul Resonant Works:
→ https://www.sr-works.net/en/

📝 This blog publishes the entire development process as a serialized journal.


    If you found this article useful, please share it.

  • Starting the Multilingual Site β€” Fixing HTML Broken by AI, with AI



At the end of the previous Special Edition (SP01), I wrote: "I'll now cover the three products in order." Following the timeline, the DataMigrator story — the data migration hell — should come next.

However, that's not what happened.

As I began the blog, I noticed something critical. The Soul Resonant Works homepage had no English version. The Japanese pages were built in January 2026, but without an English "mothership" in place, I couldn't launch an English blog at all.

So I pivoted and launched a project to translate the homepage into English. It means breaking the chronological order, but I judged that establishing the blog's foundation should come first. Please forgive me. The "unable to reach main business" pattern I wrote about in SP01 is playing out again here.

Clarifying the Timeline — Why This Story, Now?

    To avoid confusion, let me lay out the timeline so far.

At the end of December 2025, I began designing the architecture for the AI business, and in January 2026 I built the Japanese homepage. From there, the Mac migration triggered DataMigrator, utf8conv (born from the character-garbling issue), and maiguru (sparked by a single remark at dinner) — three unplanned products that emerged one after another. That was the story in SP01.

    Then, as I began preparing the blog, I realized there was no English version of the homepage. This was mid-April 2026. The DataMigrator story will come afterward, so please bear with me a little longer.

    Hidden Damage Discovered on Day One

    On day one of the HP English translation project, I started with the privacy policy as the first file. I began by loading the Japanese HTML as the source and grasping its structure.

    Right away, a problem surfaced.

    The file was cut off mid-way. There was no </body>, no </html>. The file simply stopped at a truncated string </ul on line 496.

The cause was presumably that the AI output, when generating the Japanese page, had hit its token limit and been cut off. In other words, it had already been broken since January — and I hadn't noticed for nearly three months.

The reason I hadn't noticed is that when you open it in a browser, it displays just fine. Modern browsers auto-complete missing HTML closing tags and render the page normally. Visually, it looked perfectly fine. It was the kind of issue you'd never notice unless you inspected the source code directly.

Suspecting that "if one is broken, there may be others," I inspected all eight files. Sure enough, the terms of service and the legal notice page had the same damage. Three files in total. All were cut off at the shared footer section, suggesting they had been generated in the same session.
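A check like the one used to inspect all eight files can be sketched as follows (a hypothetical illustration; the directory layout and file names are assumptions):

```python
from pathlib import Path

# Hedged sketch: flag any HTML file that is missing its closing tags.
# Browsers auto-complete these, so the damage is invisible on screen
# and only shows up when the source itself is inspected.
def is_truncated(html: str) -> bool:
    return "</body>" not in html or "</html>" not in html

def scan(directory: str) -> list[str]:
    return [
        p.name
        for p in sorted(Path(directory).glob("*.html"))
        if is_truncated(p.read_text(encoding="utf-8"))
    ]
```

A one-pass scan like this makes "if one is broken, there may be others" a five-second question instead of a manual source read.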

    The translation work was paused, and I had to start by repairing the Japanese versions first. No time for the main business.

    The Three-Way Cross-Review System Takes Shape

    After completing the repairs, I returned to translating the privacy policy. From here, a development structure unique to this project gradually took shape.

    For the HP English translation, I used three AIs. Claude Code (an AI agent that runs in the terminal) handled file reading, translation, structural verification, and commits. Claude.ai chat (the AI chat in a browser) handled overall strategy design and the design of instruction prompts. Microsoft Copilot took on the role of a third party reviewing the naturalness of the English.

This setup wasn't planned from the start — it emerged naturally when I translated the privacy policy in Session 2. It started when I tried passing the file Claude Code had translated to Copilot and asked: "Are there any spots that a native speaker would find unnatural?" Copilot returned 10 observations, which I sorted through in Claude.ai chat: 8 adopted, 3 rejected, 1 deferred.

The important thing here is that I deliberately reviewed and filtered Copilot's feedback rather than accepting everything. Even with three AIs involved, the final decisions are made by a human (me). AIs propose; a human approves. Without this structure, if the three AIs' proposals contradicted each other, things would become unmanageable.

    Copilot Stopped Being Able to Read Files

    Just as the three-way system was starting to work well, Copilot suddenly stopped accepting files.

When I tried to drag and drop the translated HTML file into Copilot for review, as usual, I got a "file not found" error. Just moments before, it had been working normally. Selecting the file through Finder via the plus button gave the same error.

"Maybe iCloud sync is keeping the file from existing locally?" I wondered, but Claude could read it without issue, so that wasn't it. Without knowing the cause, reviewing via Copilot became impossible altogether.

I had to make a decision. Wait for Copilot to recover, or proceed without Copilot?

In the end, I decided to proceed without Copilot. Claude.ai chat took over as the alternate reviewer, double-checking Claude Code's translations from the chat side. Not perfect, but I could maintain the principle that "what one AI writes, another AI verifies." I decided to deal with the Copilot issue later and prioritize keeping the translation moving.

This decision to "proceed with only Claude, without Copilot" would later cause an even more severe situation in Session 9.

Three Sequential Procedural Mistakes — A Problem of Process, Not Attention

    In Session 3, despite the translation itself going smoothly, procedural mistakes occurred three times in a row.

Specifically, I kept forgetting to add an "English version" link to the Japanese version after creating each new English file. I'd finish the English translation of terms.html, commit it, and then notice: "Ah, I didn't add the link in the Japanese version." Same thing with tokushoho.html. Even worse, the path from the English version back to the Japanese version was also wrong.

When the third mistake came to light, I realized this wasn't an attention problem — it was a structural problem. So from Session 4 onward, I switched to: "Pre-populate link placeholders for the English versions in all remaining Japanese files first." Add the link placeholder even before the English file exists. That way, the failure mode of "forgetting to add the link during translation work" is structurally eliminated.

This is something I've felt while developing maiguru as well: rather than trying harder to prevent mistakes, designing structures where mistakes cannot happen is far more effective. Especially when collaborating with AI, the more steps a human handles, the more mistakes creep in. Reducing the number of steps itself is a key design principle.

Six Days, Six Sessions — Where We Are Now

Over Sessions 1 through 6 — six sessions in six days — four files had been translated: the privacy policy, the terms of service, the legal notice, and the About page.

    The total is eight files, so exactly half done. However, the remaining four include the 68 KB index.html (the largest of all) and two product detail pages. The biggest files were left for the second half.

    Even so, the three-way cross-review system established in these first six sessions, and the structural solution to procedural mistakes, became the foundation that supported the second half of the work.

Next time, I'll cover the story of completing the remaining four files. I'll also include the tale of Session 9, when all the AI services went down at once and work ground to a halt.

    Next Time

SP03: "Completing the Translation of 16 Files — The Night All the AIs Went Down, and the Completion of the Four-Way Collaboration."



  • Four Months Unable to Reach My Main Business β€” How Three Unplanned Products Were Born



At the end of the previous article, I wrote: "I started design at the end of December 2025 — four months ago — and the first product still isn't done."

This time, I'll walk through what happened during those four months, in chronological order. Over these four months, I barely made progress on my main business — the seven AI systems. Instead, three products I had never planned got off the ground. I say "unplanned," but all three are serious products in their own right. I need to retrace it step by step just to make sense of it myself — so I decided to write it down.

January — The First Detour Began with Data Migration

    At the end of the year, I purchased a new MacBook Pro to move toward AI system development. Migrating data from the old iMac was the first hurdle. But this turned out to be hell beyond anything I had imagined.

My iMac, used for many years, had accumulated an enormous number of files, never properly organized. Files with identical names and identical sizes were somehow scattered across different folders. I couldn't tell which were the originals and which I could delete. I had probably backed things up to an external HDD at some point, because the same files existed on both the internal and external drives. I tried rsync, I computed file hashes for comparison, I tested various methods while consulting with Claude — but the amount of manual work was simply too much.
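The hash-comparison approach can be sketched as follows (a minimal illustration of the idea, not DataMigrator's actual implementation):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# Hedged sketch: group files by content hash so that identical files
# scattered across folders (internal drive, external HDD backups)
# surface as one group regardless of their names or paths.
def find_duplicates(root: str) -> list[list[Path]]:
    by_hash: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # Keep only groups with more than one file: the duplicates.
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Hashing answers "which files are identical" mechanically; deciding which copy is the original is the part that stays manual, which is exactly where the tooling idea came from.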

And at a certain moment, I thought: "If there were a tool that could semi-automate this migration work, it would not only move my iMac cleanup forward, but also have value for other people with the same problem."

Just like that, the data migration work itself stopped, and development of a tool to support data migration began. This is what would later become the first product, called DataMigrator. I began the migration so I could finally move toward my main business — yet somehow I ended up building a tool for the migration itself. My main business slipped even further out of reach.

End of January — A Business Trip Tore Me Away from AI

    Just as the DataMigrator concept was beginning to take shape, a business trip for my main job came up. For over two weeks from late January to early February, my development work came to a complete halt.

Anyone who is doing personal development as a side project probably understands: continuing development during a business trip for your main job is simply not realistic. You can't do personal development on a company laptop, and carrying two laptops (company and personal) on a trip is too heavy, both physically and mentally. As a result, business trip periods forcibly tear you away from AI.

I think this is a common pattern for side-project developers. The pace at which I produced 20,000 lines of design documents in two weeks at the end of the year — why did it suddenly stop? The answer is simple: I was physically cut off from the environment where I could focus and work alongside AI. You can do text-based brainstorming, but you can't do real work. On top of that, I had enthusiastically subscribed to a high-tier AI plan intending to "go all-in on AI-assisted development," but during the trip I couldn't use my tokens. I spent the trip almost in tears.

February — The Second Unplanned Venture Began from Garbled Text

    After returning home, I tried to resume DataMigrator development. But here, I encountered another problem. Files I had saved from past Claude conversations were garbled and unreadable.

When I looked into it, the cause was differences in character encoding. UTF-8, Shift-JIS, CP932 — there are many character encodings, and unless you identify which one a file was saved in, you can't read it correctly. I hadn't expected such pitfalls to be hiding in everyday Japanese text files that I had been using without ever thinking about it.

And then, the same thing happened again. "If I'm struggling with this, then someone else must be too. Let me build a tool that detects character encodings and converts files." Just like that, DataMigrator development stopped, and development of a second product — utf8conv — began.

    My main business drifted even further away.

March — Tax Filing Consumed Several Whole Days

    Just as utf8conv was nearly taking shape, mid-March arrived. Tax filing season.

I registered as a sole proprietor under Soul Resonant Works on March 23, 2025. In other words, this was my first tax filing. On top of that, it was the more complex "blue return" format. Even with accounting software, at first I could barely understand what the software was asking me or what the displayed terms meant.

"What is an employment income deduction?" "How are the numbers on this withholding statement from my employer calculated?" "What happens if I file a blue return?" "What is loss and profit offsetting?" — a general search would turn up plenty of explanations on the internet, but to truly internalize them in the context of my own situation, I needed to go one step deeper.

This is where AI shone. Every time an unfamiliar term appeared on the accounting software screen, I asked Claude. From the general meaning to the specific application to my case, I worked through it conversationally, one concept at a time. It was probably faster — and more insightful — than reading an entire book.

    Over several days, I somehow got through the tax filing. Once again, several working days for my main business evaporated.

End of March — The Third Unplanned Venture Dropped in at a Dinner

    On March 27, having finished tax filing and finally ready to return to my main business, I went out to dinner with an acquaintance.

The food at that restaurant was so delicious that, at the register, I said to the owner: "My name is Kanazawa, and I carry around what you might call the 'Kanazawa Guide' — a collection of my recommended restaurants, like a private Michelin Guide. This food was so good I'd like to register this place as a starred entry." The owner replied: "Please, by all means, make it three stars."

From that moment on, the words "Kanazawa Guide" — which I had just spoken aloud myself — kept running through my head. "Wait, could I actually build this?"

The next day, I consulted Claude. That was March 28. From there, I had Claude as my sounding board every day, refined the concept, put together a business plan, and embarked on implementation. This became the third product, called "maiguru." Instead of getting back to my main business, I found myself launching yet another product.

The products I had been considering were primarily on-premises software, designed to run without an internet connection. But maiguru is software that lives on the internet. It's a completely different direction from anything before, but this one too is moving toward realization through collaboration with AI. In less than two weeks from that March 27 dinner, development has progressed to the point where a proof-of-concept prototype is running on a production server.

    Over Four Unplanned Months, I Had Built Up Basic Capacity

Laying out the timeline makes it clear: over these four months, implementation of the seven AI systems — my main business — has barely progressed. Completing the requirements definition for M1, the first of the systems, is where I currently stand. Compared to my original projections, I am clearly behind.

On the other hand, the three products I had never planned — DataMigrator, utf8conv, and maiguru — are each moving forward. DataMigrator has progressed from requirements definition to design, utf8conv is nearly complete, and maiguru has started running in production.

The level of completion varies across the three, but since I have been working with AI constantly, the fundamentals of vibe coding — better ways to write prompts, better ways to move work forward, better ways to ask questions — have steadily come together. Each time I took on a new product, I became more efficient than the last. This has also meant personal growth.

    The Pitfall of an Era When AI Can Build Anything

    Why did things end up this way? Let me try to think it through.

In a word, I believe AI has caused such an explosive expansion in capability — the sheer range of what's now possible — that ideas can now be turned into reality almost instantly. For someone like me — who tends to move through each day chasing whatever ideas happen to surface — an era where everything shines brightly has arrived. You could say products now take concrete shape almost immediately.

Let me touch briefly on my own programming experience. In my student days, I had a chance to learn C. The result: I dropped out at "Hello C World" (the program that prints "Hello C World" to the screen — the very first thing beginners are made to write in C). In that first class, when told "#include <stdio.h> — just think of it as a magic spell," I wasn't the kind of person who could accept things as magic spells. But I also didn't have the energy to research and understand it myself. When told "See? It's complaining about a syntax error," my state was: what on earth is a syntax error in the first place? When told "This pointer here…" I thought: isn't a pointer the arrow you move with a mouse? My grades were terrible.

Fundamentally, I am terribly unsuited to logical thinking. I can't draw paths like "if you do this to this, this happens" or "if you compute this after this, you get that." And I didn't understand the grammar either. I was completely useless at programming.

    And yet, someone like me can describe what I want to build, discuss it with AI, and have the AI handle architecture and coding. Honestly, even if I restarted my life three times and devoted all three lives entirely to software development, I could not reach the volume of architecture and coding that has been produced in these four months.

Not long ago, even if I thought "a data migration tool would be useful," I had no implementation skills, so I would simply have given up. Same with "if only there were a character-encoding detection tool." Same with "if only there were a Kanazawa Guide SNS." There was a high wall between ideas and implementation.

    That wall has been lowered by AI. Or rather, to be precise β€” for someone like me, the wall has essentially disappeared. It’s been lowered so much that I can build whatever I think of, one after another. As a result, I can no longer focus on my original main business. I believe this is a problem that many people developing with AI will face going forward.

    Fortunately, in my case, the “three products that emerged as byproducts” ended up serving as the foundational skills and capacity for moving forward on the seven AI systems. But that’s only with the benefit of hindsight β€” it wasn’t what I was aiming for from the start.

    From the next article, I was planning to cover, in order, each of the three products born over these four months. However, as I began the blog, I realized there was no English version of the homepage. The “can’t get to the main business” pattern had struck again. So next time, as a special edition, I’ll bring you the record of building a multilingual site in collaboration with AI.


    About Soul Resonant Works

    Soul Resonant Works is a solo venture developing seven local AI systems.
    Starting from zero programming experience, the development is progressing through collaboration with AI.

    🌐 Soul Resonant Works:
    β†’ https://www.sr-works.net/en/

    πŸ“ This blog publishes the entire development process as a serialized journal.


    If you found this article useful, please share it.

  • Why I Started Soul Resonant Works β€” 25 Years in Theater, Zero Programming Experience


    Series: Byproducts Born from Setup Hell β€” Building a macOS App with Vibe Coding (Vol. 0)


    Toward the end of 2025, I was just planning to replace my Mac for video editingβ€”nothing more. I had absolutely no idea that, two weeks later, I would be working with AI to produce roughly 20,000 lines’ worth of design documentation.

    That said, producing design documents does not mean the software is finished. The review of the contents is still ahead. I have zero programming experience, so my only real partner in this challenge is AI. Whether this challenge will ultimately succeed β€” honestly, even I don’t know.

    In this blog, I will record the process of someone like me working together with AI to take on software development, exactly as it happens. It might be completed, or it might hit a wall somewhere along the way. By leaving a real record that includes both possibilities, I hope it can serve as a reference for anyone who feels, “Maybe I could build something too.”

    Hello β€” my name is Satoshi Kanazawa, and I run Soul Resonant Works.

    25 Years in Theater, and 20 Years with Mac

    My main career is as a company employee, in a role that involves international communication.

    Alongside that, I have continued theater and band activities as hobbies. I’m not a hardcore participant β€” more of a casual one β€” but the years have simply stacked up. It has been about 25 years since my first involvement with theater, and about 10 years with band activities. I’m what you might call an enthusiastic amateurβ€”someone who never quite grows out of the beginner stage, but keeps going anyway. I think it’s simply because I love it.

    My role is mostly behind the scenes. Handling sound for theater performances, supporting operations, taking stage photos, shooting performance videos. I rarely stand in front of the audience, but I love being in a place where audience, performers, and staff can all share an enjoyable time together β€” and seeing everyone enjoy themselves.

    Several things came together: my work with video, the Mac’s innovative UI, a recommendation from an Apple-enthusiast associate professor, and my own desire to try video editing. In 2004, I bought a PowerBook G4. Since then, I have used Macs continuously for more than 20 years. Every time camera resolutions went up, my Mac’s processing power fell behind, so I kept upgrading every few years — and that brought me to where I am now.

    How an Overpowered Mac Led Me to AI Development

    In recent years, the video world moved from 4K to 8K, and the Intel iMac I had been using could no longer keep up. I started considering a new Mac, but honestly, a high-performance Mac was more machine than video editing as a hobby could justify on its own. I felt I needed to keep other uses in mind as well.

    It was around that time that the term “Apple Silicon” caught my eye.

    As I looked into it, I learned something unexpected: instead of using AI services over the internet like ChatGPT or Claude, you can run AI models directly on your own Mac.

    “So, could my own challenges be solved with AI?”

    That question was the beginning of everything.

    The multilingual communication walls I had felt at work. The challenges I had run into in theater sound and video production. Maybe these could be solved with local AI β€” through dialogue with AI, that question grew into an idea.

    The Design Documents Are Written. But the Real Work Starts Here

    At the end of December 2025, I began designing the systems by talking things through with AI.

    I used a method called “vibe coding.” Rather than writing in a programming language, you tell the AI what you want to build in natural language β€” in my case, plain Japanese β€” and work together with the AI to create designs and code.

    In about two weeks, I produced design documents for seven AI systems. About 20,000 lines. Even I was surprised.

    That said, to be honest, all I’ve done so far is produce them. The documents took shape through dialogue with AI, but my own review is not yet finished. This is probably a common pattern in vibe coding: I lack the experience to fully judge the quality of what the AI has produced.

    Can this really be called “design documentation”? Is the quality good enough to implement from? Verifying that is part of what lies ahead.

    All seven of these systems are designed to run entirely on my own Mac β€” fully offline, one-time purchase tools. For example: a tool that lets every participant in a meeting where Japanese, English, and French are flying around receive meeting minutes in their own language. A tool where AI automatically selects and edits footage from multiple cameras at a theater performance. A tool where AI analyzes multi-track recordings and proposes the optimal mix.

    All of these are things I’ve wished existed during my time in theater, video production, and international business. I don’t yet know whether they’re truly achievable, but I believe the challenge is worth taking on. For details, please see the Soul Resonant Works site.

    The Name “A Workshop Where Passion Resonates”

    I had also produced CDs during my band activities, and I decided to position my music activities as a side business, which led me to register as a sole proprietor. The trade name is Soul Resonant Works.

    Whether in theater or in a band, I feel it is the passion that burns within us that sustains these activities. Even as a company employee, rather than drifting along on inertia, thinking “let me improve this” or “let me add a little more to the outcome” β€” I believe that too comes from passion. In work, when passion is lost, the work becomes entirely obligatory, and any sense of forward momentum disappears.

    Work, theater, and band activities all involve lots of human connections, and people with similar passion tend to gather together.

    “I want to create a place where passionate people come together and make something” β€” from that thought, I named it Soul Resonant Works: a Workshop where Souls (passion) gather and Resonantly amplify each other.

    What I’m Working on Now

    My current passion is directed toward software development that “solves my own challenges using AI.”

    While preparing to implement the seven AI systems, I went through an enormous amount of struggle migrating data from my iMac to the new Mac. Scattered files, macOS-specific traps, weeks of manual work β€” I thought, “Let me turn the tool that solves this very suffering into my first product.” So right now, I am working with AI to develop a data migration and deduplication tool for macOS.

    If it gets completed, it will be Soul Resonant Works’ first product. If it doesn’t, I’ll understand “why it didn’t work.” Either way, this experience should be valuable.

    By the way, some readers may have noticed something odd by now. “You started design at the end of December 2025. Four months have already passed. And the first product still isn’t done?”

    That’s exactly right. Four months in, my main product still isn’t complete. During those four months, however, three products I had never planned began to move forward. Every time I tried to head toward the main business, something else was born. The second post in this blog steps outside the main development record to describe what happened during those four months.

