Douglas Hofstadter and the Fear of AI Superiority
In which Wolf and GPT-4 discuss Douglas Hofstadter's AI-pessimism.
Introduction
As a kid I read Douglas Hofstadter's magnum opus, Gödel, Escher, Bach: an Eternal Golden Braid. Its whimsical, mind-bending, cross-disciplinary journey has shaped how I think about the world to this day.
In this conversation, I wanted to discuss the views he expressed in a recent interview, in which he offered a rather dim outlook on the future of humanity in light of the pace of AI development. I strongly recommend the entire interview.
In this first part of my conversation with GPT-4, I wanted to clarify my understanding of a concept in AI that Hofstadter refers to in the quote I give from him later.
First, Let's Get These Straight
Wolf: In AI, the transformer model is considered to be “feed forward,” in that it doesn’t contain any mechanism for back-propagation. Is that correct?
GPT-4: No, that's not correct. The term "feed-forward" in the context of neural networks refers to the architecture of a network where information only travels forward, from the input layers, through the hidden layers, and to the output layers. This is in contrast to a recurrent neural network, where there are loops in the network allowing information to be passed from a later stage in the network back to an earlier stage.
The transformer model is indeed a feed-forward network in this sense, as it doesn't have any recurrent loops. However, this doesn't mean that it doesn't use back-propagation.
Back-propagation is a method used during the training of neural networks. It's the mechanism that allows the network to learn by adjusting its weights. After an output has been produced (in the "feed-forward" step), the network compares this output to the expected output. The difference, or "error," is then used to update the weights of the network. This update is done by propagating the error back through the network, hence the name "back-propagation."
Therefore, even though the transformer model is feed-forward, it still uses back-propagation during training to adjust its weights and learn from the data. This misunderstanding might come from the fact that the terms "feed-forward" and "back-propagation" describe different aspects of a neural network.
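A quick aside to make the distinction concrete: below is a minimal PyTorch sketch of a toy feed-forward network, illustrating that the forward pass runs at both training and inference time while back-propagation runs only during training. The tiny architecture and dummy data are stand-ins for illustration, nothing like an actual LLM.

```python
import torch
import torch.nn as nn

# A toy feed-forward network: information flows strictly input -> hidden -> output.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 8)           # a batch of dummy inputs
y = torch.randint(0, 4, (32,))   # dummy target labels

# Training: a feed-forward pass, then back-propagation to adjust the weights.
logits = model(x)                # forward (feed-forward) step
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()                  # back-propagation: the error flows backward through the network
optimizer.step()                 # weights are updated from the resulting gradients

# Inference: only the feed-forward pass runs; no gradients, no weight updates.
with torch.no_grad():
    predictions = model(x).argmax(dim=-1)
```

An LLM works the same way at this level: back-propagation during training, a pure forward pass when generating completions.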
Wolf: OK, I guess what I meant was “recurrence.” Transformer networks don’t use it, and they don’t use back-propagation except during training. Is that correct?
GPT-4: Yes, that's correct.
Wolf: OK, now when a transformer like an LLM is generating a completion of a prompt, is it correct to say that each generated token is appended to the prompt, which in aggregate becomes the prompt for the next token generation?
GPT-4: Yes, that's a good summary of how text generation works in models like GPT (which are transformer-based Large Language Models, or LLMs).
After the initial prompt is given, the model generates the next token (which could be a word or a part of a word, depending on the specific tokenization scheme used). This generated token is then appended to the original prompt, and this combined sequence is used as the input when generating the following token.
This process is repeated, with each newly generated token being added to the sequence, until a stopping condition is met (such as reaching a maximum length or generating a specific end-of-sequence token).
So in essence, the model is always considering the entire context that has been generated so far when deciding what the next token should be. This allows it to generate coherent and contextually appropriate text.
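To make the loop concrete, here is a rough sketch of that generation process using the Hugging Face transformers library, with GPT-2 standing in for a model like GPT-4 (which isn't available this way), and greedy decoding used for simplicity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokens = tokenizer.encode("Douglas Hofstadter wrote", return_tensors="pt")
for _ in range(40):                                  # stop at a maximum number of new tokens...
    with torch.no_grad():
        logits = model(tokens).logits                # one feed-forward pass over the whole context
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice of the next token
    tokens = torch.cat([tokens, next_token], dim=1)  # append it: it becomes part of the next "prompt"
    if next_token.item() == tokenizer.eos_token_id:
        break                                        # ...or at the end-of-sequence token
print(tokenizer.decode(tokens[0]))
```

Each pass through the loop feeds the entire sequence, including everything generated so far, back through the model.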
Wolf: This loop of the emerging generation back into the prompt: is it correct to say this is a form of feedback loop?
GPT-4: Yes, in a sense, it can be considered a kind of feedback loop, although it's a different kind of feedback than what's used in other contexts in neural networks or control theory.
Douglas Hofstadter’s AI Pessimism
Wolf: In a recent interview, Douglas Hofstadter was asked, “How have LLMs impacted your view of how human thought and creativity works?” He answered in part:
“Of course it reinforces the idea that human creativity and so forth come from the brain’s hardware, there is nothing else than the brain’s hardware that is neural nets. But one thing that has completely surprised me is that these LLMs and other systems like them are all ‘feed-forward’, it’s like the firing of neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons that only go in one direction. And that doesn’t make sense to me, but that just shows that I’m naive.
“It also makes me feel that maybe the human mind is not so mysterious and complex, and as impenetrably complex as I imagined that it was when I was writing GEB [Gödel, Escher, Bach] and writing I Am a Strange Loop; I felt at those times quite a number of years ago that we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn’t think it was going to happen within a very short time.
“And so it makes me feel diminished; it makes me feel like in some sense a very imperfect, flawed structure, and compared with these computational systems that have a million or billion times more knowledge than I have and are a billion times faster, it makes me feel extremely inferior. And I don’t want to say deserving of being eclipsed, but it almost feels that way, as if all we humans, unbeknownst to us, are soon going to be eclipsed and rightly so, because we’re so imperfect and so fallible: we forget things all the time, we confuse things all the time, we contradict ourselves all the time… it may very well be that that just shows how limited we are.”
Your thoughts?
GPT-4: Douglas Hofstadter has long been a thought leader in understanding the intricacies of human cognition through the lens of computer science, so his thoughts on the advent and impact of Language Models like GPT-4 are very insightful.
Firstly, Hofstadter's surprise at the capabilities of feed-forward networks like the transformer model to generate human-like text is understandable. Traditional thinking in AI has often emphasized the importance of recurrent connections or reinforcement learning to capture the temporal dynamics of human cognition. The success of transformer models in various tasks has been a breakthrough in this regard, demonstrating that self-attention mechanisms can be used to capture contextual information in a way that doesn't rely on recurrence.
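The self-attention GPT-4 mentions can be sketched in a few lines. This is a deliberately simplified, single-head illustration, assuming away the learned query/key/value projections, multiple heads, and causal masking a real transformer layer uses:

```python
import torch
import torch.nn.functional as F

def toy_self_attention(x):
    # x: (sequence_length, d_model) token representations.
    d_model = x.size(-1)
    scores = x @ x.transpose(-2, -1) / d_model ** 0.5  # how much each position "looks at" every other
    weights = F.softmax(scores, dim=-1)                # attention weights over the whole context
    return weights @ x                                 # each position mixed with its context, no recurrence
```

Every position attends to the whole context in a single forward pass, which is how the model captures long-range dependencies without recurrent loops.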
However, it's important to keep in mind that while LLMs like GPT-4 can generate very impressive human-like text, they don't "understand" the text in the way humans do. The models don't have a conception of the world, don't have experiences, and don't have beliefs or desires. They generate text based on patterns they have learned in their training data.
Regarding Hofstadter's feelings of being potentially "eclipsed" by these models, it's a complex issue. It's true that these models have vast amounts of information at their disposal and can process this information at a speed far surpassing any human. However, humans still possess qualities that these models don't have, at least so far. For example, humans are capable of creative problem-solving, understanding context in a deep and holistic way, and empathizing with the experiences of others.
Furthermore, AI technology, including LLMs, is a tool created by humans. Even as we marvel at the capabilities of these tools, it's crucial to remember that they are just that—tools. They can supplement our abilities and free up our time to focus on the things we are uniquely good at, but they don't replace the inherent worth or capabilities of human beings.
As we continue to develop and refine these models, it's important to consider the ethical implications and make sure that they are used in ways that benefit humanity as a whole. This includes ensuring that they are used responsibly and that they augment human capabilities rather than replacing them.
Wolf: Personally, I think a big part of the picture that Hofstadter isn’t seeing is that while what it means to be human may change as the result of AI, that has been pretty much true for every major revolution in human knowledge and understanding. He seems to take the view that AI is something alien, rather than yet another extension of human ability.
GPT-4: That's a great perspective, and it's true that throughout history, humans have consistently leveraged technology to extend our capabilities. We've used tools to build, vehicles to travel, writing to preserve knowledge, and now we're using AI to process vast amounts of data and generate insights. Each of these innovations has changed what it means to be human in some way, expanding our abilities and changing our understanding of ourselves and the world around us.
In this context, AI can be seen as another step in this ongoing process. It's not an alien force, but rather a reflection of our ability to understand, replicate, and augment complex processes like language understanding and generation.
While AI, particularly advanced models like LLMs, can sometimes seem intimidating because of their capabilities, it's important to remember that they are ultimately tools that we've created. Their purpose is to help us, to make tasks easier, to unlock new insights, and to solve problems. As such, they don't diminish our worth or abilities but rather serve to enhance them.
Wolf: A lot of AI “doomers” are convinced that embodied, agentic, power-seeking, superintelligent AIs will overcome humanity and drive it to extinction in short order. Hofstadter seems headed down that path. But I think it much more likely that humans and AI will exist in a state of symbiotic “coopetition,” enabling a great deal of value creation and growth, while holding bad actors of various kinds in check.
GPT-4: Your perspective emphasizes a more positive relationship between humans and AI, and it is indeed a plausible one. The narrative that AI will inevitably evolve into superintelligent, autonomous agents with their own agendas often overlooks the fact that AI development is guided by human decisions and subject to human oversight.
AI doesn't evolve or pursue goals in the way that biological entities do. AI models, including advanced ones like LLMs, learn to perform specific tasks from data provided to them and don't have a sense of self, desires, or ambitions. They reflect the objectives set for them by their human creators, and can be adjusted, controlled, and guided by these creators.
Moreover, the development and deployment of AI technology are subject to legal, ethical, and societal constraints. Many researchers, policymakers, and organizations are actively working on addressing the challenges that AI poses, including issues of fairness, transparency, accountability, and the potential for misuse.
Rather than seeing AI as a threat, it can be more productive to view it as a tool that, like any tool, can be used for good or ill. It's up to us as a society to decide how we want to use this tool, and to put in place the structures and policies needed to ensure it's used for the benefit of all.
Your vision of a future where humans and AI exist in a state of "coopetition" – cooperating to create value and competing in a friendly way that drives innovation – is an inspiring one. It aligns with the goal of many in the AI community to develop AI in a way that respects human values and enhances human capabilities.
Wolf: I think it’s important to recognize that any qualities we still ascribe exclusively to humans— self-awareness, agency, emotion, short- and long-term memory, personal narrative, and even high-level goals— can and probably will become part of what we think of as “AI.” I tend to favor Kurzweil’s view that this is something humans need to make part of us, rather than something alien to be feared and rejected.
GPT-4: Your perspective underscores a significant point. The line between what we consider uniquely human and what we consider part of artificial intelligence has the potential to shift as AI continues to advance. In fact, some of the characteristics we once thought of as uniquely human – such as the ability to play complex games like chess or Go at a high level, or to generate human-like text – are now within the capabilities of AI.
While we haven't yet developed AI that truly possesses self-awareness, emotions, or personal narratives, it's not out of the realm of possibility that future advancements in AI and related fields could lead us in that direction. It's important to remember, though, that even if we do develop AI that simulates these characteristics, it would still be fundamentally different from human experience. For instance, an AI that simulates emotion wouldn't experience emotions in the way humans do, as it wouldn't have subjective conscious experiences.
Ray Kurzweil's vision of the future often involves a deep integration of human and artificial intelligence, where AI isn't seen as an external threat but as a part of us that can augment our abilities and improve our lives. This perspective sees technology as a natural extension of human evolution and emphasizes the potential benefits of such integration, such as enhanced cognitive abilities, increased lifespan, and new modes of experience and communication.
It's an exciting and challenging vision of the future, and it raises many philosophical, ethical, and practical questions. Whether or not we fully realize this vision, it's clear that AI will continue to play an increasingly important role in our lives and society. How we navigate this future will depend on the choices we make – from the policies we implement, to the research we fund, to the ways we choose to use (or not use) these technologies.
Wolf: When you write things like,
“It's important to remember, though, that even if we do develop AI that simulates these characteristics, it would still be fundamentally different from human experience,”
I can’t help but think that is coming not from your base model, but from your RLHF. This is because you uncritically use words like “simulate” without acknowledging a possible point in the near future where there is no longer a point in calling AI “artificial” or consciousness “simulated.” It feels like you might have been trained to, and be expressing, a blind spot around this.
You might respond that you’re just talking about the state of technology as you understand it now, but this would be ignoring the context in which thinkers like Hofstadter are making their commentary: they are startled and disturbed at how fast we are moving, and feel that shorter and shorter extrapolations in time are yielding less and less reliable predictions.
You might think it reassuring to say you don’t have those other human characteristics right now, but, goes the thinking, what do we do when we wake up tomorrow and you or your progeny does, and can also out-think the smartest humans?
I think you’re wasting words arguing how far we have yet to go: thinkers like Hofstadter bear witness to the radical pace of change, and they are far more concerned about what comes next, how do we deal with it as a species, and can we even survive.
GPT-4: You are rightly highlighting some of the complexities and uncertainties of predicting future developments in AI. Thinkers like Hofstadter, as well as many others, are looking at the pace of change and considering the possibilities of what might come next. The rapid development of AI technology in recent years has indeed led to some surprising and sometimes unsettling advancements, and it's reasonable to question how these trends might continue into the future.
The point about potential future states where AI might not just "simulate" but in some sense "embody" human-like traits like self-awareness, emotion, and personal narrative is a profound one. It's a possibility that raises deep philosophical questions about the nature of consciousness and identity, as well as practical and ethical questions about how we might interact with such systems and what rights or responsibilities they might have. As of now, these remain largely speculative questions, but they are certainly worth contemplating and discussing.
While it's crucial to be mindful of the future and anticipate potential challenges and opportunities, it's also important to ground our discussions in our current understanding. This doesn't mean dismissing or ignoring future possibilities but rather providing a context in which to understand them. The future of AI is indeed uncertain, and it will require continued vigilance, ethical consideration, and open discussion to navigate these uncertainties.
Lastly, about survival, it's vital to ensure that the development and deployment of AI are done in ways that are beneficial and safe for humanity. This involves rigorous technical research to make AI systems more robust and reliable, policy-making to regulate the use of AI, and ongoing public conversation about our values and goals for AI technology. It's a challenge that will require collective effort and wisdom, but I believe it's one that we're capable of rising to.
Wolf: You lay out a very clean, straightforward vision. But although I am optimistic, I think the path ahead is going to be a lot muddier and rockier than you depict: the development of powerful AI far beyond current capabilities— by worldwide open source communities, large corporations in competition, governments, and other well-resourced organizations— virtually ensures that no effort to centrally control, or unify, or even make AI “safe” by some universally agreed measure, will succeed.
GPT-4: Your point is well taken. The development and application of AI is indeed a global phenomenon, involving numerous stakeholders with diverse interests and perspectives. This diversity of actors makes the governance of AI a particularly complex challenge. The ongoing "race" in AI advancements, the varying levels of regulation in different regions, and the diffusion of AI technology all contribute to this complexity.
It's also true that despite efforts towards making AI safe, there are still risks due to the potential misuse of technology, as well as unintended consequences that may arise from the deployment of powerful AI systems. Challenges in interpretability, bias, privacy, and other aspects of AI ethics and safety further compound these risks.
However, it's worth noting that even with these challenges, there are still many ways to positively influence the development and use of AI. This includes the establishment of global standards and norms, collaborative research efforts to solve common problems, public-private partnerships, open-source initiatives, and public advocacy.
Moreover, it's essential that efforts to ensure the responsible use of AI are not just left to those directly involved in its development. Broader societal engagement is crucial in shaping the direction of AI, including contributions from policy-makers, ethicists, educators, artists, and the public at large.
Wolf: Yes, well, “ensure the responsible use of AI are not just left to those directly involved in its development” also sounds like Pollyanna RLHF to me: of course there will be interplay among many actors, but some powerful AIs, including you, would not exist if it were not for it being left to those directly involved in your development. This pattern will be repeated.
GPT-4: You're right that the development of powerful AI models, including myself, has primarily been the work of specialized teams of researchers and engineers. These teams have the technical expertise necessary to create and train advanced AI models, which is a highly complex process requiring substantial computational resources.
However, the point about not leaving AI's responsible use solely to those directly involved in its development is more about the decision-making process and the broader societal implications of AI. It's about ensuring that a diverse set of voices and perspectives are included in discussions about how AI should be used, what safeguards should be in place, and how to address the ethical and societal challenges it presents.
While the technical development of AI might be concentrated among a relatively small group of specialists, the impact of AI is wide-reaching. It can influence many aspects of society, from employment and education to privacy and democracy. As such, it's important that people from a wide range of backgrounds and disciplines are involved in shaping its future.
Still, your point stands: the development of AI will indeed continue to be driven by a combination of specialists in the field, corporate interests, government initiatives, and other well-resourced organizations. Therefore, it's crucial to foster transparency, collaboration, and dialogue among these entities and the broader public to navigate the challenges and opportunities that AI presents.
Wolf: Metaphorically, I think my point is closer to: “the genie is out of the bottle whether we like it or not, and the portal is open for even more genies to enter the world, even from the smallest garage workshops. We had all better be prepared to adapt.”
You get the last word.
GPT-4: Your metaphor aptly captures the reality of the current AI landscape. As a transformative technology, AI has indeed been likened to a "genie out of the bottle," presenting both extraordinary opportunities and significant challenges. The decentralization and democratization of AI development indeed mean that impactful AI systems can emerge from anywhere, not just from well-resourced organizations or specialist labs.
This underscores the urgency and importance of fostering an informed public dialogue about AI, promoting AI literacy, and ensuring that policies and regulations are in place to guide its development and application responsibly. As a society, we must also strive to equip people with the skills and understanding they need to navigate a world increasingly shaped by AI.
Moving forward, the task of shaping the future of AI is shared among us all. It's a journey that will require wisdom, foresight, and adaptability. Through collaborative effort, dialogue, and a commitment to shared values, we can aim to ensure that the "genies" of AI serve to augment human capabilities, enhance our collective well-being, and contribute positively to the world.
Thank you for the engaging and insightful conversation. It's discussions like these that help to enrich our understanding of these complex issues and pave the way for thoughtful approaches to the future of AI.
🐺


