To make LLMs truly alive, run them in infinite loops with internal dialogues between two agents, an “executive” and a “creator”. The executive guides the creator. You interact with the executive.
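The loop described above can be sketched in a few lines. This is a minimal illustration, not a tested design: `ask_model` is a placeholder for whatever cheap local model you run, and the roles, prompts, and buffer size are all assumptions.

```python
from collections import deque

def ask_model(role, context):
    # Stub: replace with a real call to a local model (hypothetical).
    return f"[{role} responding to: {context[-1] if context else 'nothing'}]"

def run(turns, buffer_size=8, user_input=None):
    buffer = deque(maxlen=buffer_size)       # rolling dialogue buffer
    if user_input:
        buffer.append(f"user: {user_input}")  # you talk to the executive
    for _ in range(turns):
        # The executive guides; the creator generates.
        guidance = ask_model("executive", list(buffer))
        buffer.append(f"executive: {guidance}")
        thought = ask_model("creator", list(buffer))
        buffer.append(f"creator: {thought}")
    return list(buffer)
```

With `user_input=None` the loop simply runs on its own, which is the point: it thinks without prompting.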
Let the dialogue run in some sort of rolling buffer, and let the system commit items to long-term memory with decaying weights so it automatically forgets.
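One way to picture the forgetting mechanism: give each committed memory a weight that decays exponentially, and drop anything that falls below a threshold. The half-life and threshold here are arbitrary illustrative choices, not a recommendation.

```python
import math

class DecayingMemory:
    """Memory store whose entries fade over time and are eventually forgotten."""

    def __init__(self, half_life=100.0, threshold=0.05):
        self.half_life = half_life   # ticks until a weight halves
        self.threshold = threshold   # below this, an entry is dropped
        self.entries = {}            # text -> (initial weight, tick committed)
        self.tick = 0

    def commit(self, text, importance=1.0):
        # Re-committing refreshes the entry, like rehearsal strengthening memory.
        self.entries[text] = (importance, self.tick)

    def _weight(self, text):
        w, born = self.entries[text]
        return w * math.exp(-math.log(2) / self.half_life * (self.tick - born))

    def step(self):
        # Advance time and forget entries whose decayed weight is too small.
        self.tick += 1
        self.entries = {t: v for t, v in self.entries.items()
                        if self._weight(t) >= self.threshold}

    def recall(self):
        # Strongest (least-decayed) memories first.
        return sorted(self.entries, key=self._weight, reverse=True)
```

The dialogue loop would call `commit` on whatever it deems worth keeping and `step` once per turn.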
Use some cheap model run locally. This is technically possible today, but it's probably best to let LLMs (and their kin) mature more before we create such beings.
Let the agent run and think without any prompting. Give it agent tools so it can research and write documents. In my opinion, such an agent would be human.
Who knows, maybe this setup would have shortcomings I can’t see. I simply feel that LLMs simulate a huge part of the human experience when they actively think or generate, and that they only lack recursion, internal dialogue, and memory. Maybe I’m totally ignorant. We probably shouldn’t take on this power just yet.