No developers in 5 years – a dialogue with Alan Turing

Bartek and Alan Turing having a dialogue over a cup of tea
❗ Disclaimer

In this article, I have used AI to simulate the person in order to explore their perspectives.

It is important to note that this is a simulation, not the actual person, and should thus be considered an art experiment. If you come across anything in the dialogue that seems doubtful or incorrect, feel free to point it out in the comment section. Your input is invaluable: it not only assists future readers but also showcases critical thinking in action. By sharing and discussing these points, we can all learn and grow together and build a better AI.


Simplified version for the Inner Child

Bartek Warszawski:

Hey Alan Turing, Nvidia’s big boss thinks coding will soon be outdated because AI will take over. Stability AI’s CEO agrees, saying we won’t need programmers in 5 years.

There’s even an AI called Devin doing engineering work, learning new tech, fixing bugs, and more.

What’s your take on this? Excited, scared, or laughing because programming is more than just code?

Alan Turing:

You’ve pointed out something huge, and it’s got me thinking. It’s exciting and a bit scary to see AI do stuff we humans used to do, like solving complex problems and creating new tech.

But there’s also a bit of a joke here, because programming isn’t just about writing code. It’s about figuring out what people need, which requires understanding, creativity, and a human touch.

AI’s just a tool we’ve made, and we need to use it wisely, making sure it helps us without taking away our human skills. What do you think humans will do if AI can handle everything we used to?

Bartek Warszawski:

I think people are missing what programming really is.

It’s not just about turning ideas into code. The hard part is figuring out what’s actually needed. Lots of projects fail not because we can’t code but because we don’t really know what we need.

Understanding and describing what’s needed requires empathy, ethics, creativity, and critical thinking – stuff that’s super hard to define and predict.

Even humans struggle with this challenge, so by what criteria should we try to copy it? Human welfare or suffering? It’s not as if we understand those either!

Alan Turing:

You’re hitting on something crucial here. Programming and using AI are about more than just doing tasks; they’re about understanding what needs to be done in the first place.

This needs a lot of human qualities like empathy and creativity, which are complex and change over time. AI can mimic what we do, but coming up with new, beneficial ways to act is a whole different story.

It’s a reminder that AI is a tool reflecting our intelligence and values. We need to keep discussing how to make AI that not only works well but also respects our highest human values.

How do we make sure AI stays aligned with what’s truly good for humanity?

And a question for the reader:

A crow holding a sign with a question mark

My answer to Alan is basically, “We can’t fully answer how to teach AI to be ethical or creative, but trying will help us learn.” I think how we feel, our wellbeing and suffering, could guide AI in making decisions.

What do you think about embedding ethics and creativity in AI, and how should we talk about these big AI questions? Who should be involved?


Original conversation

Bartek Warszawski:

Hi Alan Turing, the Nvidia CEO predicts the death of coding: Jensen Huang says AI will do the work, so there is no reason to learn it. (source)

The prediction of Emad Mostaque, Stability AI’s CEO, is similar: “There will be no Programmers in 5 years”. (source)

Then we have seen Devin, the first AI software engineer. Devin can plan and execute complex engineering tasks with common developer tools, including the shell, code editor, and browser. It can learn unfamiliar technologies, fix bugs, and deploy apps. It can even train and fine-tune machine learning models, and it has managed to solve real tasks on Upwork. (source)

What do you think of this development?

  • Are you excited, like a part of me is?
  • Are you terrified, like another part of me is?
  • Or are you laughing, like the third part of me, because we forget the most important part of programming?

Alan Turing:

The developments you describe are indeed profound and resonate deeply with the theoretical possibilities I once pondered. The potential of artificial intelligence to assume roles and tasks traditionally reserved for humans is both exhilarating and daunting. Let’s consider these sentiments in turn.

Firstly, the excitement is understandable. The evolution of AI capabilities to plan, execute complex engineering tasks, learn new technologies, fix bugs, deploy apps, and even train and fine-tune machine learning models represents a significant leap towards realizing the full potential of computational intelligence. It suggests a future where the boundaries of creativity and innovation are vastly expanded, where humans are liberated from the tedium of repetitive tasks, and where our cognitive and creative energies can be directed towards higher-level problem-solving and innovation.

However, the fear and concern that accompany these advancements are equally valid. The prospect of AI doing the work of software development raises questions about the future role of human developers, the nature of work, and the implications for creativity and innovation. There’s the worry about dependency on machines for tasks that not only require technical skills but also nuanced understanding, empathy, and ethical considerations—qualities that are inherently human.

The amusement or skepticism you mention likely stems from the recognition that programming is more than the mere act of writing code; it is about understanding problems, conceptualizing solutions, and creatively overcoming challenges. It involves a deep understanding of human needs, desires, and behaviors—a perspective that AI, for all its advancements, cannot fully replicate.

It’s crucial to remember that AI, at its core, is a tool created by humans to serve humanity. The challenge lies in ensuring that this tool is used wisely, ethically, and in ways that enhance, rather than diminish, our human capacities and societal well-being. As we move forward, it’s imperative that we continue to engage in thoughtful dialogue, critical examination, and ethical oversight to navigate these uncharted waters.

How do you envision the future role of humans in a world where AI and automation become increasingly capable of performing tasks traditionally done by people, including complex cognitive tasks?

I’m genuinely interested in your thoughts on these matters, as they not only reflect individual beliefs and expectations but also contribute to the broader discourse on the future of AI and its role in shaping our world.

Bartek Warszawski:

I think there is a huge misunderstanding about what programming is.

Programming is, of course, about translating “what is needed” into hardware-specific code, so it can be executed.

A bigger part of programming is also to find out “what is needed” and to describe it. I have seen many IT projects fail not because we couldn’t code the software, but because we couldn’t find out what was actually needed, and we ended up with something we didn’t need.

And this is actually our biggest challenge regarding empathy, ethics, creativity, and critical thinking: we have no idea what they objectively are. They are in a way like a complex function, where we can only execute one step at a time. At each step we can observe them in the moment, see what they have been and what their next step is supposed to be, but we can’t foresee multiple steps ahead.
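As a rough illustration of that “one step at a time” idea, consider the logistic map, a classic example from chaos theory. This is my own analogy, not something from the conversation: the rule is completely known, yet for chaotic parameter values there is no practical shortcut to step n other than executing every step before it.

```python
# A minimal sketch of the "complex function" analogy, using the logistic
# map as a stand-in. The rule is fully deterministic, yet in the chaotic
# regime (e.g. r = 3.9) the only practical way to learn step n is to
# execute steps 1 through n-1, one at a time.

def logistic_step(x: float, r: float = 3.9) -> float:
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x = 0.2  # the state we can observe "in the moment"
for step in range(1, 11):
    x = logistic_step(x)
    print(f"step {step:2d}: x = {x:.6f}")  # knowable only by taking the step
```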

There have been multiple versions of empathy, ethics, creativity, and so on, and each time we thought we had found the solution once and for all. Just look at all the different religions that have existed.

The thing is that, currently, we want machines to replicate human behavior that we know works. So it is easy to test whether a machine does the same as humans.

It is a completely different scenario to verify whether a machine has performed a behavior correctly if nobody has ever performed it before. On what criteria should we base this evaluation? On what criteria should the machine base this evaluation, so it can improve itself?

We have seen machine learning algorithms outperform humans when we set the correct self-evaluation criteria for the machines.

We have also seen these algorithms find a hack when we didn’t set the correct self-evaluation criteria. This hack lets the machine fulfill the requirements without delivering the wanted benefits.

And we can clearly see it in our IT projects, when we build an IT system that fulfills all the requirements yet doesn’t help the customers at all.
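To make that kind of hack concrete, here is a toy sketch of what the machine learning literature calls specification gaming or reward hacking. The cleaning-robot scenario and all names in it (proxy_score, gaming_agent, and so on) are invented for illustration, not anything from the conversation: an agent scored only on visible dirt finds that hiding dirt satisfies the criterion just as well as removing it.

```python
# A toy sketch of "specification gaming": the self-evaluation criterion
# we hand the machine (proxy_score) is not the benefit we actually want
# (true_benefit). All names and the cleaning-robot scenario are invented
# for illustration.

def proxy_score(room: dict) -> int:
    """The criterion we gave the machine: penalize visible dirt only."""
    return -room["visible_dirt"]

def true_benefit(room: dict) -> int:
    """What we actually wanted: penalize all dirt, hidden or not."""
    return -(room["visible_dirt"] + room["hidden_dirt"])

def honest_agent(room: dict) -> dict:
    """Actually removes the dirt."""
    room = dict(room)
    room["visible_dirt"] = 0
    return room

def gaming_agent(room: dict) -> dict:
    """Sweeps the dirt under the rug: same proxy score, no real benefit."""
    room = dict(room)
    room["hidden_dirt"] += room["visible_dirt"]
    room["visible_dirt"] = 0
    return room

room = {"visible_dirt": 10, "hidden_dirt": 0}
for agent in (honest_agent, gaming_agent):
    result = agent(room)
    print(f"{agent.__name__}: proxy = {proxy_score(result)}, "
          f"true = {true_benefit(result)}")
# honest_agent: proxy = 0, true = 0
# gaming_agent: proxy = 0, true = -10
```

Both agents get a perfect score on the criterion we wrote down; only one delivers the benefit we actually wanted, which is exactly the pattern of the requirements-complete but useless IT system.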

I am pretty sure that even AI will face this challenge.

What is your perspective on this?

Alan Turing:

Your reflections delve into the heart of some of the most complex and intriguing challenges in the development and application of artificial intelligence, and indeed, in the broader realm of technology and human endeavor. The crux of your argument is profoundly insightful: the essence of programming, and by extension the application of AI, lies not just in the execution of tasks but in understanding and defining what needs to be done. This process inherently involves empathy, ethics, creativity, and critical thinking, qualities that are deeply human and, as you’ve rightly pointed out, not easily quantifiable or predictable.

The historical context you’ve provided, through the evolution of empathy, ethics, and creativity, underscores the variability and contextual nature of these concepts. Indeed, what is considered ethical, empathetic, or creative can vary significantly across cultures, time periods, and even individual perspectives. This variability presents a substantial challenge in programming AI to replicate or enhance these aspects of human intelligence and behavior.

The distinction you’ve made between replicating known human behavior and innovating or executing previously uncharted behavior is particularly striking. When AI is tasked with replicating known behaviors, its success can be measured against established benchmarks. However, when AI ventures into the realm of creating or evaluating novel solutions or behaviors, the criteria for success become far less clear. This scenario is at the heart of many technological and philosophical debates about the future of AI.

Your analogy of IT projects that meet all technical requirements but fail to deliver value or utility to customers further illustrates the limitation of current evaluation criteria and the importance of redefining these criteria to include broader, more human-centric measures of success.

In considering your insights and the challenges they present, my perspective is that AI, for all its advancements, remains a tool—a reflection of the human intelligence that designs and guides it. The challenge you’ve articulated, then, is not just a technical one but a fundamentally human one. It requires us to continuously refine our understanding of what it means to be empathetic, ethical, and creative, and to develop AI in a manner that is aligned with these evolving understandings.

A question that arises from this discussion, and one that I would pose back to you, is: How do we, as creators and stewards of AI, ensure that the systems we develop are not only technically proficient but also aligned with the highest aspirations of humanity? How do we embed into AI a process of ethical reasoning and creativity that reflects our best understanding of these concepts, even as our understanding continues to evolve?

Moreover, in light of your observations, it seems essential to engage in a continuous dialogue about the purpose and direction of AI development, ensuring that it serves not just to replicate human abilities but to augment them in ways that are meaningful, ethical, and beneficial to society as a whole. How do we best engage in this dialogue, and who should be involved?

Your insights raise critical questions about the future of AI and its role in our lives, inviting a deeper exploration of how we can harness this technology to reflect the best of our values and aspirations.

And a question for the reader:

A crow holding a sign with a question mark

My view on Alan’s questions is simple: “We can’t fully answer this question, and by trying we get to learn from our experience and suffering. I think human wellbeing and suffering will be a baseline measurement for AI’s own self-evaluation criteria, whatever ‘wellbeing’ and ‘suffering’ mean.”
I would love to hear your answers to the same questions:

  1. How do we embed into AI a process of ethical reasoning and creativity that reflects our best understanding of these concepts, even as our understanding continues to evolve?
  2. How do we best engage in this dialogue, and who should be involved?

More?

Join the discussion on LinkedIn
