Automation is Coming – How Will You Survive?

How will AI disrupt the developer and the IT architect, and what can they do about it?

You might be in your 20s, 30s, or 40s, surfing on the wave of brand-new technology with no fear of "what's to come". We know that AI (Artificial Intelligence), with its machine learning and cognitive automation, is surfacing from the deep unknown waters, and that it will disrupt industries in the transport, fast food, and farming sectors. We even know that the disruption will not only affect blue-collar workers: almost every job profile is affected, from accountants, lawyers, and medical practitioners up to data scientists already in touch with AI. The questions we will treat here are: how will this AI revolution influence software development, and therefore the developer and IT architect? What should you expect from the upcoming world of the 2020s, and how should you prepare for it?

To understand what will happen in the 2020s we need to review what has happened in the last 61 years in the field of AI.


The first machine learning program and neural networks

The concept of "learning machines" is not new. The first electronic device that was able to learn and recognize patterns was built as early as 1957 by Frank Rosenblatt [Cor57]. The first machine learning prototype was built in 1961 by Leonard Uhr and Charles Vossler [Pat61]. Neural networks have existed for a long time, but their learning required a lot of labeled digital data, which was rare and expensive in the 1960s. In today's digital world, social media in particular delivers huge amounts of labeled data every second. Sensors were also analog and not very precise compared to today's sensors in phones, smart homes, and cars. This is probably why chess was so interesting to automate: the sensory input was simple (only 8×8 squares and 12 types of pieces), yet it could result in at least 10^120 possible chess games [Che], which is more than the number of atoms in the observable universe [Uni].

The chess game itself was simple enough to code, but winning a chess game was not. It was obvious that neural networks had potential, but no neural network at the time was able to beat a human chess champion.

The 2000s

Something happened in the 2000s that made AI move forward and solve problems it couldn't solve before. In 2004, DARPA arranged a self-driving car competition, the DARPA Grand Challenge. No car managed to complete the course, and the longest distance driven was 12 km.

DARPA tried again in 2005 with completely different results: 23 of the 24 cars beat the old record of 12 km, and 5 of them completed the whole course of 212 km. What happened? Big data, affordable high-performance computing, and deep architectures had improved incredibly; for example, digital online maps became available and image analysis tools became better.

Deep learning

Software engineers and scientists used to need less data to build, for example, an image recognition algorithm than a neural network required to learn to recognize images. In the late 2000s, enough labeled digital data became available for machine learning software to recreate the same image recognition algorithms that humans had already developed. Deep learning was one of the machine learning methods that split the learning into multiple layers: the first layer learned to recognize simple forms from pixels, and the next layer learned to recognize faces from those simple forms. The state of the art is that machines can, for example, diagnose diseases from images better than the corresponding human specialists.

Not only did deep learning recreate the previous image recognition algorithms, it even improved on them. Machines didn't just recognize items in images but also began to create their own images, called "computer dreams" [Ifl]. Machines even began to change summer to winter on existing photos, and day to night and the other way around [Pix]. The machine learning algorithms were not new, but the new, huge datasets gave them new possibilities.


Software development by humans

Software is all about mapping inputs to the correct outputs to solve specific problems.
A software engineer or a computer scientist can take a complex real-life phenomenon and refine the necessary patterns into an algorithm.

Software development by machines

Most machine learning is only about connecting inputs with outputs, but machine learning based on evolutionary algorithms is able to improve itself by putting pieces of code together into completely new algorithms. This type of evolutionary development is a lot faster than traditional human software development.

In 1997, humans managed to develop Deep Blue, the first computer able to win a chess match against a reigning human world champion. It took 20 more years before AlphaGo, in 2017, was able to win the board game Go against the best player in the world.

AlphaGo Zero (a new version of AlphaGo) was developed based on reinforcement learning. AlphaGo Zero taught itself to play Go from scratch in only 4 hours, after which it was able to beat the original AlphaGo 100 games in a row [Dep].

A machine was able to create a better algorithm in only 4 hours, compared to the 20 years that many developers and architects needed. Developers and architects will be disrupted, but there will still be a need for humans.

It might not be fair to compare human development time with only the machine's training time, since developing the training algorithm also took time. What we have to remember is that the training algorithm can be reused for other projects, while the human development time cannot.

The risk in not understanding

Software in the last 70 years has been built on simple algorithms that are understandable by humans (even if they are still difficult to understand). Machine learning software can create a new type of algorithm that is much more complex and impossible for humans to understand. We are more dependent on our software today than ever before, and not understanding the algorithms behind our software is a huge risk to take.

The consequences are not dire when a weather chatbot mistakes the word "tonight" for a city name; asking the bot "What is the weather tonight?" would only result in a bad answer: "I don't know 'tonight'. Try another city or neighborhood." Word mistakes like that would have far worse consequences in a critical healthcare system, especially if no human were able to fix the problem. Some would say that these mistakes happen because AI is still in the early stages of development, and that it is only a question of time before they stop happening. But will they?

Complex algorithms are not perfect

Kurt Gödel proved with his incompleteness theorems that every sufficiently powerful, consistent formal system is incomplete, even mathematics. I have simplified Gödel's proof to:

  1. For a sentence to be true in math, it must be provable.
  2. But what about the sentence: "This sentence is unprovable"?
    If the sentence is provable, then it is false, and math has proved something false. If it is unprovable, then it is true, but math can never prove it.

The bad news: complex algorithms created by machines are also systems, and therefore incomplete, imperfect, flawed, and containing unexpected bugs.

The good news: a chess algorithm does not have to win all possible games; it just needs to win more often than the current world champion. If humans could design a healthcare system as a board game, then machines would find better and better ways to win this game, just like AlphaGo Zero managed to win the board game Go better than any human could.

Test Driven Reinforcement Learning

In traditional machine learning development, a dataset is split into training data (80 %) and testing data (20 %). The training data is used to train the machine, while the testing data is used to evaluate how precisely the machine predicts or categorizes. Data that reduces the precision is removed from the dataset, while data that increases it is added. Building, navigating, and improving these datasets requires a data scientist, or at least someone with a lot of mathematical, statistical, and technical know-how. This restricts a lot of people from working with machine learning.
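The 80/20 split can be sketched in a few lines of Python (a minimal illustration with a made-up dataset; real projects typically use a library function such as scikit-learn's `train_test_split`):

```python
import random

def split_dataset(dataset, train_fraction=0.8, seed=42):
    """Shuffle a dataset and split it into training and testing parts."""
    shuffled = dataset[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)     # fixed seed makes the split reproducible
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 100 made-up labeled examples: (feature, label)
data = [(i, i % 2) for i in range(100)]
train, test = split_dataset(data)
print(len(train), len(test))  # 80 20
```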

Reinforcement learning is about rewarding and punishing behavior, to either reinforce wanted behavior or diminish unwanted behavior. I have built a prototype: a bot that has to follow a target. The bot has 4 cameras (one in each direction) and a leg (so it can move in a direction). The bot starts by moving the leg randomly. To reinforce a leg movement, a developer could set up a technical test that measures the distance between the bot and its target. If the distance decreases, the probability of that leg movement is reinforced; if the distance increases, the probability is diminished:

Caption: “Visual example of how my prototype learns and unlearns.”
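The learn/unlearn loop can be sketched as follows (a toy model with hypothetical names, not the code of my actual prototype):

```python
import random

# Probability of moving the leg in each direction.
probs = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}

def pick_move():
    """The bot moves randomly, weighted by what it has learned so far."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

def reinforce(move, old_distance, new_distance, step=0.05):
    """Reward a move that brought the bot closer to the target, punish one that didn't."""
    if new_distance < old_distance:
        probs[move] += step                          # reinforce the behavior
    elif new_distance > old_distance:
        probs[move] = max(0.01, probs[move] - step)  # diminish it
    total = sum(probs.values())                      # keep it a probability distribution
    for direction in probs:
        probs[direction] /= total

# One step: moving north reduced the distance from 10 to 8.
reinforce("north", old_distance=10.0, new_distance=8.0)
print(max(probs, key=probs.get))  # north
```

Because punished moves only shrink toward a small floor instead of being deleted, the bot can relearn a movement later, which is what makes the unlearning in the figure possible.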

My prototype is even able to handle changes in the environment, because it constantly learns and unlearns. If two cameras switch places, the bot unlearns the old camera-movement relations and relearns the new ones.

I named this approach TDRL (Test Driven Reinforcement Learning), and it is completely different from traditional machine learning development. Instead of preparing a dataset for the machine, the machine gets access to an unfiltered dataset or environment. The tests govern the development of the machine, and whenever a machine's behavior needs to change or be fixed, a new test case can be written. For example, Microsoft deployed an AI chatbot, "Tay", that became racist within 24 hours. Had Tay been built on TDRL, this could have been fixed by adding a test case that measures the level of racism in a sentence. The chatbot would then fix the problem itself, by governing its own sentences and lowering the probability of writing something racist.

The book "Testing in the digital age" [Tes2018] describes that we will not only need technical tests (like "measuring the distance" in the example above), but also ethical and conceptual tests (like measuring mood, empathy, humor, and charm) to improve robots in the roles of partner, coworker, or assistant. Technical tests will be less complex to design, while ethical and conceptual tests will be more difficult, because they require domain-specific knowledge. For example, the concept of "health" or "happiness" can be perceived differently: something that is healthy for me is not necessarily healthy for my children, or for the planet's environment.

Summary

How will this AI revolution influence software development, and therefore the developer and IT architect? There is no doubt that the AI revolution will disrupt the IT industry in the 2020s. Machines will be able to create better and more complex solutions, in less time, than a human developer or architect ever could. Machines will even disrupt the data scientist.

What should you expect from the upcoming world of the 2020s? We already see the need to govern machine learning bots, and this need will only grow. To govern the machines doing software development, developers and architects will have to design tests and guide the machines in creating software and solutions.

How should you prepare for it? There is already a need to design ethical tests, as Microsoft's AI chatbot "Tay" showed. But before ethical tests can be implemented, a new set of development tools needs to be built from current TDRL prototypes. As a developer or architect, you will have the opportunity in the next few years to become part of this development and gain the experience needed to design not only the technical tests, but also the ethical and conceptual ones. The book "Testing in the digital age" [Tes2018] is a good starting point for dealing with these new challenges. Ethical and conceptual tests will require us to answer difficult domain-specific questions such as "what is a good partner?", "what makes a good product?", and "how do we make the customer happy?". These questions are important, and as developers and architects in the 2020s we will be answering them!

Bartek Rohard Warszawski defines "knowledge" as the ability to predict a future: if you can predict something, then you can prepare for it. He has almost 30 years of experience within software development and testing, which he uses when he speaks at conferences, writes articles, and holds workshops in TDD and test automation.

Email: Bartek.Warszawski@Capgemini.com

[Cor57] “The Perceptron A perceiving and recognizing automaton (project para)”, Cornell Aeronautical Laboratory, inc., Report No. 85-560-1, Frank Rosenblatt, January 1957

[Pat61] “A Pattern Recognition Program That Generates, Evaluates and Adjusts its Own Operators”, Leonard Uhr and Charles Vossler, May 1961






[Tes2018] "Testing in the digital age", Rik Marselis, Tom van de Ven, Humayun Shaukat, 2018, ISBN 9789075414875

This content was first published in magazine OBJEKTspektrum, Germany.

Translate between code and Gherkin with AI

A raven is translating from one language to another

Part 2 of 2

Read Part 1

In this article in the "AI-learning (E-learning 3.0)" series, we delve into the translation of code to make it easier to understand, and show that AI is also about learning, not only automation.

What is Gherkin

Gherkin is a domain-specific language that makes it possible to describe software behaviors in a human readable syntax. It’s primarily used to write structured tests that can be understood by non-technical stakeholders, making it a crucial tool in Behavior-Driven Development (BDD).

We usually write Gherkin using the keywords Given, When, and Then.
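A minimal, illustrative scenario (my own example, not taken from a real feature file) shows the three keywords in action:

```gherkin
Feature: Adding numbers
  Scenario: Two numbers are added together
    Given there are two numbers: 1 and 2
    When they are added together
    Then the result must be 3
```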

Translating code to Gherkin

Let’s take some Python code:

number1 = 1
number2 = 2
result = number1 + number2

Let’s make ChatGPT 3.5 translate the code to Gherkin and back:

I have the following python code, and I would like you to add Given, When, Then comments into it, so they explain the code:
number1 = 1
number2 = 2
result = number1 + number2

And the result is:
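The screenshot of the answer is not reproduced here, but ChatGPT's annotated version comes out roughly like this (reconstructed by hand, so the exact wording of the comments may differ):

```python
# Given two numbers with the values 1 and 2
number1 = 1
number2 = 2

# When the two numbers are added together
result = number1 + number2

# Then the result variable holds the sum
print(result)  # 3
```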

Translating from Gherkin

Let's feed ChatGPT 3.5 with:

Can you please translate the following text into Python?
# Given there are two variables named numbers with values: 1 and 2
# When they are added together into a result variable
# Then the result variable must be printed.

It will give the following output:
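Again, the screenshot is missing here, but the generated Python is along these lines (a hand-made reconstruction; ChatGPT's actual output may vary slightly):

```python
# Given there are two variables named numbers with values: 1 and 2
number1 = 1
number2 = 2

# When they are added together into a result variable
result = number1 + number2

# Then the result variable must be printed.
print(result)  # 3
```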

Isn’t Given, When, Then for testing only?

It is for testing, but not only.

Given, When, Then is a fantastic way to describe processes; this is how LEGO manuals work, and even IKEA's.

  • Given You have the following LEGO bricks (don’t start without them).
  • When some of the bricks are assembled in a specific way.
  • Then a specific result must be achieved.
  • When the next bricks are added.
  • Then the next result must be achieved.
  • Then the final result must be achieved.

It has been working well for LEGO and IKEA for many years.

Code is also an automated process.

Adding Given, When, Then comments to my code makes me structure it much better.

Let’s take the following example:

# given
number1 = 1
number2 = 2

# when
result = number1 + number2

# then

Under the given: I can structure my variables or deal with null-pointers and other errors.

Under the when: I can do what is needed by the code.

Under the then: I can clean up the output and return it.

This makes our code clean and easier to understand.

And when it is not possible to structure code like this, then maybe it is a sign that the code needs to be split into smaller parts and simplified!
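As a sketch, a small function structured by these three sections could look like this (a hypothetical example; the validation rule is my own addition):

```python
def add(number1, number2):
    # Given: structure the inputs and deal with missing values
    if number1 is None or number2 is None:
        raise ValueError("Both numbers are required")

    # When: do what the code is there for
    result = number1 + number2

    # Then: clean up the output and return it
    return int(result)

print(add(1, 2))  # 3
```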

Want to read more?

Follow the discussion on LinkedIn


Translate between programming languages with AI

A raven is translating from one language to another

Part 1 of 2

Read Part 2

In this article in the "AI-learning (E-learning 3.0)" series, we delve into the translation of code to make it easier to understand, and show that AI is also about learning, not only automation.

Transcending Language Barriers in Coding

One of the most formidable challenges in the world of programming is the diversity of languages. Python, JavaScript, Groovy, and C# — each serves its purpose, but fluency across them can be daunting. Enter ChatGPT 3.5, an AI model that acts as a linguistic bridge, effortlessly translating code from one language to another.

So, if I take the following code:

# Given
number1 = 1
number2 = 2

# When 
result = number1 + number2

# Then

and put it into ChatGPT as:

Can you please translate the following Python script into JavaScript:
# Given
number1 = 1
number2 = 2

# When 
result = number1 + number2

# Then

Then the result will be in ChatGPT:

code example in JavaScript

We can translate it into Groovy, C# or any other language!

It’s especially good, when you need to do something in one language, but don’t understand what this method does. Python code example:

pn = str(n).zfill(5)

and put it into ChatGPT 3.5:

Can you please translate the following Python script into JavaScript:
pn = str(n).zfill(5)

and it gives the following result:

and same goes for the other languages:



How cool is that?
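And you can verify what ChatGPT tells you: in Python itself, `zfill` simply left-pads the string representation with zeros until it reaches the requested width:

```python
n = 42
pn = str(n).zfill(5)  # pad "42" with zeros to a width of 5
print(pn)  # 00042
```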

You are welcome to try the code out directly in the browser, using the web editors: 💻 Python / 💻 JavaScript / 💻 Groovy / 💻 C#

You can also try yourself with PHP, Ruby, C, C++, whatever!

Part 2 of 2 – Translate between Code and Gherkin with AI

Read Part 2

Want to read more?

Follow the discussion on LinkedIn


Unlocking Fun in Testing and Coding!

Learn testing and coding with TDD and AI

Hey there, Adventurers!

Have you ever imagined that learning technology could be filled with as much fun and imagination as creating your own art? Welcome to, where we turn that imagination into reality!

Who Am I?

I’m Bartek, a Quality Coach, Test Engineer, and an AI explorer.

I’ve witnessed first-hand the transformative power of AI across different sectors.
However, it was the integration of AI into my own field that illuminated a path less travelled.

My mission is to bring the playful spirit of kawaii (Japanese for lovely, adorable, cute, etc.) into the world of coding, testing, and automation.

Why should testing and coding be all boring and only serious?

Let’s find the fun in it!!!

Crafting Automation Scripts: Your Digital Magic Spells

Remember the wonder of seeing a machine do something amazing for the first time? That’s the thrill of learning to code!

We’ll guide you through creating automation scripts that are not only incredibly fun to build but also fulfilling to watch in action. It’s like when my grandmother got her new washing machine and watched it do its magic.

And why not learn it with the AI-tools of today?

This is not another: “Learn to program” course. It’s a new way to do it with AI!


The Magic of Testing and TDD Explained

But how do you make sure your digital creations do exactly what you want them to do, every single time?

That’s where testing and Test-Driven Development (TDD) come in. We’re here to demystify these concepts, showing you step by step how to be the master of your code, to learn how your ideas actually work and what system behavior actually needs to be implemented!

Precision in Communication: Beyond AI Guesswork

I had a dialog with Alan Turing (simulated with AI) about the future of programming. He pointed out that programming is not dead, but needed more than ever before.

In a world increasingly run by AI, the need for precision in our commands and codes has never been more critical.

Yes, AI can guess what we need from prompts written in everyday language, except it often guesses based on the majority’s needs.

TDD provides us with a tool for communicating our deepest thoughts and wishes to the computer and AI, ensuring our creative vision comes to life exactly as we imagine.

The future of programming is not dead, but programming languages will go from hardware-specific to concept-specific.

Are You In?

Ready to embark on a journey where learning coding and testing becomes a canvas for your creativity?

Aided by the charm of kawaii and the cutting-edge capabilities of AI?

Let’s not just code; let’s inspire and be inspired, blending the artistry of programming with the precision of TDD, to express ourselves and grow in areas we never could have imagined before.

Join the adventure at – where coding meets creativity and imagination!

Looking forward to creating magic together,


Follow the discussion on LinkedIn

Adding web editors for coding

A raven is coding

Let’s talk about learning to code – it’s like unlocking a whole new world of possibilities, right? But wait, here’s the thing: sometimes, getting started can feel like hitting a roadblock, especially when it comes to installing all those fancy tools.

But guess what? I’ve got your back! I’ve whipped up some super cool editors for JavaScript and Python, and I’ve even thrown in some existing ones for Groovy and C#! 🎉


Now, here’s the kicker – we might not be able to code anything just yet, but we’re gonna learn the basics together. It’s like laying down the foundation for a super cool coding adventure! 💻✨

Stay tuned, because I’m about to show you the ropes in these editors. And get this – we’re gonna dive into the world of Test Driven Development and AI too! It’s gonna be an epic journey of learning and discovery. 🌈📚

So, wish me luck on this exciting adventure, and get ready to join me on this coding quest! Let’s do this! 💪🤖🍀

Unlock Your Inner Zen: With coding Music!

Listening to chill music

Hey there, awesome inner child at 40! 🌟

Guess what? I’ve just launched a super cool music page that’s all about helping you chill out and dive deep into your thoughts like a fearless explorer! 🎵✨

Picture this: you’re in your own little world, coding away like a pro. But wait, something’s missing… Ah, that’s it! With our totally free meditation/musication music playing in the background, you’ll feel like you’re on a magical journey to uncover hidden treasures within your mind! 💡🎶

So, what are you waiting for? Grab those headphones and let the adventure begin! Check out our awesome selection of tunes and get ready to embark on a journey of peaceful coding bliss! 🚀💻

Putting the inner child into the text

From intellectual to kawaii

I have a specific way of writing things. It comes from trying to remove emotional meaning from my words and to be as intellectual as I can. (Maybe it also comes from all the programming I have done since my childhood?)

My point is that this is not always easy to read, or fun.

Since I have moved to a kawaii style (adorable, cute & lovely) in my images, maybe I should include the inner child in my texts too?

All of this could be rewritten to:

Let’s chat about something super important – my way of writing stuff! So, you see, I’ve always been kinda serious, trying to keep emotions out and use big brain words instead. Maybe it’s ’cause I’ve been coding since forever?

But guess what? Reading my stuff isn’t always a breeze, and definitely not a party!

Lately, I’ve been thinking: since I’ve gone all in on this super cute kawaii style in my pics – you know, all adorable and sweet – maybe I should sprinkle a little bit of that magic into my words too?

What do you think about that, huh?

Let’s vote: 1) easier, 2) more fun, 3) both, or 4) neither.

Testing of a background

tiled background image

Since I got the new kawaii style (kawaii is Japanese for lovely, cute, and adorable):

I needed to update my old tiled background:

Into a kawaii one. I started with something pretty and detailed, but had to make it less detailed and eye-catching, so it wouldn’t steal attention from the text and other content:

cute office wall

It works well with the content, while the prettiest one took too much attention.

This is an example of why, when we test something, we need to test it in the correct context.

Because in a different context, we might select the wrong solution.

Why the cute style?

Illustration of a what is testing

I have always been fascinated by visuals, just like programming.

In the first version of the website, I made a lot of pretty images:

It’s pretty, but I wanted it to also be more fun, so I got an idea:
Why not include fun and cuteness, like kawaii?
(kawaii in Japanese means lovely, adorable, cute)

Why does testing and programming always have to be so serious?
Maybe I should loosen up a bit and have some fun?

So I made an alternative gallery:

I loved it!

But what would be best for my site?
What would testers, developers, and others think?

An A/B test could give an result.
(An A/B test is a test, where I post both solutions, but the 50% users see the A version, while the other see the B version. Then we can compare which version, works best for the users).

There was also another option to consider:
Who would I prefer to eat lunch with?
I might lose 98% of people if I use the kawaii style.
At the same time, the remaining 2% will be fun to eat lunch with!

So, why not try it?

So this is how this kawaii style came about:

First edition of my new website

front image of bartek.k

🎨 Intro Image Exploration

I use DALL-E 3 because it lets me specify better what my image is about. I used the following prompt:

Can you create an image from Norse Mythology? It should be a cover image for a website about software testing. Here you can find auxiliary tools, theory with AI-learning within test management, test analysis, and test automation. You can also learn about Test Driven Development. And purchase Onsite Keynotes and workshops. There are also dialogues with historical figures, simulated with AI.

No text or runes.

It creates a rather boring image; adding “Norse Mythology” can make it more mystical, but it is still rather boring:

🤔 The Unpromptable Prompt

I also added an “unpromptable prompt”: I asked the AI model to tell me what style and colors it sees in an image, so I can learn how to communicate with this model. It won’t work with other models, but I want something specific, and this is how I learn how the AI understands the world. The image I used is one I had created another time:

Combining these prompts gave me this result:

🚀 Upscaling Adventures

I needed it to be 2000×1200 pixels, but the image generated was only 1024×1024.

I use Upscayl – Free and Open Source AI Image Upscaler for Linux, MacOS and Windows built with Linux-First philosophy.

Which can upscale my image up to 16 times!

And I only have a laptop with an NVIDIA GTX 1650.

🔍 Zooming-Out

I needed it to work on mobile devices, which would zoom in too much on the image:

So I needed to Zoom out.

I copied, flipped, and pasted the image again and again (left, right, up, and down):
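The copy-flip-paste trick can be sketched on a toy pixel grid (plain Python standing in for what the image editor does; a real image would be a grid of color values):

```python
def mirror_tile(image):
    """Surround a pixel grid with mirrored copies of itself (3x3 tiling)."""
    # Mirror every row to the left and the right of the original.
    middle = [row[::-1] + row + row[::-1] for row in image]
    # Mirror the widened band above and below itself.
    return middle[::-1] + middle + middle[::-1]

tiny = [[1, 2],
        [3, 4]]
big = mirror_tile(tiny)
print(len(big), len(big[0]))  # 6 6
```

Because every seam is a mirror line, the edges match up perfectly, which gives the outpainting model a seamless canvas to work on.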

And then I put it into (which runs with Stable Diffusion AI), because there I can add a mask on top to select where the image generation must happen:

It gave me images like this:

I then combined multiple of these images in GIMP 2 (the free & open source image editor) to get the following result:

I had to upscale it again, because the rendered image is only 1344×768, but with Upscayl that was easy.

Incredible what multiple AI tools and open source tools can bring to the table!

🔮 Unlocking AI’s limitations

Midjourney and Stable Diffusion have keyword prompting, but it’s too simple.
Often I feel I can’t get out of a template. Let me give an example of two similar images that have two different prompts:

Dynamic transpose in Excel as a mythological digital art

Another example: I wanted Sisyphus pushing his stone uphill, but couldn’t get it. Either the stone was already at the top, or flying, or there was no stone at all, but a moon!

The image below is a funny example, because Sisyphus is trying to push the stone with his butt. If that were the case, then I understand why the stone keeps rolling down again and again.

DALL-E 3 can take more details into account, which makes it easier to specify.


A man writes "ABC" on a piece of paper. He sits under a tree with his block of paper and beside him is a bike. It is fall and crows fly in the air. The style must be "cute".


It is better at taking specifics into account, but again not all details. That’s why we often need to combine images.

🎨 AI-images require skill

Yea, right!

  • I use AI to produce tons of images, so I can choose what direction I want to take them.
  • I use my GIMP skills (open source Photoshop) to combine these images.
  • I use creative ways to understand my tools, AI included, like unpromptable prompting.
  • Sometimes I simulate historical people with AI, to inspire me or see things from a new angle!

All this required me to:
Test, fail, redo, learn,
test, fail, redo, learn,
test, fail, redo, learn,

Not everything went as I wanted, even though it is pretty:

And that is my point: creative work is not dead. Our tools have just been upgraded, just like when portrait painters complained about photographers.

And this my friends, requires a lot of
testing, learning, and experience
which is the definition of
creativity and skill!

– Bartek / Bartlomiej Rohard Warszawski

End of this post

And my site is not finished yet. I have just made the first step of:

testing, doing, reflecting, changing, and testing some more.