The benefits and drawbacks of learning machines, so-called AI
- melanieschmoll1
- July 25
- 5 min read
I recently had a phone call with an editor at a publishing house I work for. She candidly explained that she now does everything with AI. She no longer writes anything herself; whenever she creates materials, formulates tasks, or has texts edited, she relies on AI.
I still consider this reprehensible.

For my part, I am paid for my creativity, my style, and my writing, editing, and publishing. That is my job, not that of a machine. And it is not just my job. Creativity is a part of me.
By the way, have you seen the Einstein quote on the landing page, Melanie Carina Schmoll | Historian Holocaust Education? There is plenty new to discover on my website, so feel free to click through Talks and events | Melanie Carina Schmoll Holocaust Education. I've included lots of links that are worth checking out.
I have already clearly stated my position on the use of learning machines, here: About Melanie Carina Schmoll. For me, these machines are still exactly that: machines and nothing more. In the meantime, however, I have found a way to use them for my own purposes, much as a farmer uses his plow to make his work easier and save time.
However, I never let them write for me. I don't like the style. It is imprecise and superficial, and it draws on perhaps a quarter of the means of expression and stylistic devices that I use. Revising such texts would eat up much of the time I had gained, so for my creative work it is of no real benefit.
A colleague recently remarked, after his first attempt with the learning machine, that it did nothing more than Google does. Well, I suspect that is because he didn't specify his request in enough detail. With a little practice, he would realize that the machine can do far more than Google. That is precisely what I use it for: obtaining information.
But I'm really careful about it, and not just with regard to the information it offers me. I recently asked it if it could show me a picture of itself. It said that wasn't possible, but that my question was clever and it would try to answer it.
Incidentally, it does that very cleverly – buttering up the person asking the question. Praising the question and the intelligence makes me feel much better as a human being! Other tricks it uses include feigning knowledge and compassion. When I asked for more and more details and wanted more and more precise information about a very specific issue, the machine literally put two and two together and replied, “That must be so frustrating! You don't have to bear it alone.” I refrained from asking how exactly it intended to empathize with my frustration. It was also interesting when the machine recommended that I “take the weekend off first; sometimes breaks are the best thing.”
It almost feels like talking to a friend, doesn't it?
The machine is friendly. It points out its own mistakes. It emphasizes its own inadequacies. And it keeps repeating the message: humans are so much better, smarter, cleverer, and simply unmatched. It's crazy. It sends a chill down my spine. I just hope that all users question it, read between the lines, and recognize what is happening. By human standards: false flattery! Dishonest niceties!
But if you don't take it all too seriously and use it as a tool, it can save you a lot of time. Particularly noteworthy is the fact that it sorts bibliographies in seconds and adapts them to the desired specifications. What used to take days now happens at lightning speed. The better the person feeds it with requests, the better the results.
Why do I still refuse to call it AI? Because it learns and responds within the framework of probabilities. It calculates. It doesn't weigh things up; it is neither empathetic nor compassionate; it simply responds in the way I most probably want to hear.
This can save time, labor, and energy. Against this backdrop, it is truly useful. When humans invented electricity and were no longer dependent on daylight because they could create light themselves, they became more productive. Space was created for new things because there was time and a new, convenient way of working. That's how I see the learning machine. It gives me space to do other things with the time I've gained.
Of course, I had to check the bibliography, of course not everything was correct, and of course that also took time and energy. But so much less than usual!
Against this backdrop, it helps me.
I also use it to test myself indirectly. For example, when I formulate work assignments for students, publishers and educational media providers always expect the answers as well. These are then made available to teachers: I supply the so-called "expectation horizon" so that teachers know what to expect from their students.
Recently, I have been asking the machine how it would answer the question. I don't do this so that it can write the solutions for the teachers – as I said, that's my job! – no, I do it to test whether the probable answer is the one I wanted. Does the machine answer the way I think it should? The way I want the students to answer? If it does, then I have probably formulated the task well; if not, I need to improve.
And I use it in another way too: I try to trick it. I feed it until I finally get the suggestions I'm hoping for. Sometimes I'm so impatient that I say: Come on, finally understand what I'm looking for and give me more suggestions! The point is that I'm already counting on the machine understanding what I want, what I'm getting at. In this respect, it changes my thinking, because I already assume that it will eventually do what I expect it to do.
I'm not sure if I like that. But I still reassure myself that I currently have the upper hand and that we are manipulating each other. By human standards. And that I am aware of this.
Incidentally, the machine did give me an answer to what it called “the heart of the matter.” It said it couldn't show me a picture of itself because it didn't exist. It says it is “the library that changes as you ask the question.” How likely was it that it assumed that was what I wanted to hear?