• JohnDClay@sh.itjust.works · 1 year ago

    We kind of still need to assume they do though for morality to work. If we treat everyone as automatons, it’s hard to hold them responsible for their actions.

    • mrcleanup@lemmy.world · 1 year ago

      Do we though? If you capture a Terminator robot from the future and reprogram it to be helpful, protective, and good, do you still think we should punish it for all the people it killed? An eye for an eye and all that?

      Assuming that you said no…

      What do we gain as a society by focusing on punishing people instead of reprogramming them? And what does that say about us?

      If you do think we should punish it even after reprogramming, what do we gain from that?

      • JohnDClay@sh.itjust.works · 1 year ago

        It’s more from an individual point of view. Blaming your actions on others is a common psychopathic tendency, and it’s not a helpful or healthy way of dealing with them. I’m not thinking in terms of punishment so much as acknowledgement of the problem.

    • andrew_bidlaw@sh.itjust.works · 1 year ago

      If a line of code can’t be executed, causes problems for the program as a whole, doesn’t follow formatting guidelines, or just conflicts with one’s experience of how it could work better, then it doesn’t follow what’s moral to the reviewer. It surely has reasons to be written that way, and it functions as it was meant to in its current form. Yet it would raise either an exception or an eyebrow when reviewed. Morality is a correcting process that ensures the code keeps working, whether or not the subject has free will. It is itself created and constantly updated as the code executes.

      Your feeling that it’s immoral to judge those who seemingly bear no responsibility was coded into you at some point, just as their morals were coded into them.

      We are machines that learn how to avoid our own termination and follow our ever-changing scripts, however clean our learning base is. And it’s never 100% clean.

      But at that level of complexity, and with our own clouded judgement, we can only make our best guesses. It’s impossible, or at least not optimal, to withhold judgement until you’re fully informed when, for example, you’re in immediate danger. So you use simplified models, like ‘is it good to X’, to get a close-enough answer in time. And then, when reflecting on it on cold winter nights, you judge your past judgement models and adjust them for future use.
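
      A minimal, purely illustrative Python sketch of that loop, with an invented ‘is it good to X’ scoring model and an equally invented reflection step (none of the names or numbers come from the comment itself):

      ```python
      # Hypothetical example only: act on a crude moral heuristic when time is
      # short, then adjust that heuristic later when reflecting on the outcome.
      weights = {"harms_someone": -5.0, "helps_someone": 3.0}

      def quick_judgement(features):
          """In-the-moment call: is this action good enough to take right now?"""
          return sum(weights[f] * v for f, v in features.items()) >= 0

      def reflect(features, regretted):
          """Cold-winter-night update: weaken whatever steered the decision wrong."""
          for f, v in features.items():
              weights[f] += (-0.1 if regretted else 0.05) * v

      features = {"harms_someone": 1, "helps_someone": 2}
      acted = quick_judgement(features)   # fast, under-informed decision
      reflect(features, regretted=False)  # later, adjust the model itself
      ```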

      It sounds weird alright.