In a dystopian future, there will exist an AI that monitors all public, private, and even dark-net communication and discussion boards. Whenever the AI detects an emotional HATE-SPEECH or VERY DESTRUCTIVE post, it automatically generates a news story about a crime (e.g. a terrorist attack, assault, massacre, or rape) in which the victim is a virtual person designed and modeled on the person who was attacked in the discussion board. The perpetrator is portrayed to resemble the person who posted the hate speech in the forum.
The problem with this is that in this future there are constant articles about terrorist attacks and also a great many rapes, for example whenever a man with women problems vents disparagingly in a gamer forum.
Originally the purpose was to show emotionally unstable people the ultimate, most terrible consequence of their actions, but this deluge of gruesome articles dulled people more and more, and they overlooked the warning signs of real crimes.