• Telorand@reddthat.com · 1 month ago

    Wow, the text generator that doesn’t actually understand what it’s “writing” is making mistakes? Who could have seen that coming?

    I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could have caught them. Nobody should trust LLMs with anything related to security, FFS.

    • skillissuer@discuss.tchncs.de · 1 month ago

      Nobody should trust LLMs with anything

      ftfy

      also any inputs are probably scrapped and used for training, and none of these people get GDPR

      • mox@lemmy.sdf.org · 1 month ago

        also any inputs are probably scraped

        ftfy

        Let’s hope it’s the bad outputs that are scrapped. <3

      • curbstickle@lemmy.dbzer0.com · 1 month ago

        Eh, I’d say mostly.

        I have one right now that looks at data and says “Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause.”

        Which is pretty well within what they are good at, especially if you are doing the training yourself.

    • SketchySeaBeast@lemmy.ca · 1 month ago

      I wish we could say the students will figure it out, but I’ve had interns ask for help and then I’ve watched them try to solve problems by repeatedly asking ChatGPT. It’s the scariest thing - “Ok, let’s try to think about this problem for a moment before we - ok, you’re asking ChatGPT to think for a moment. FFS.”

        • djsaskdja@reddthat.com · 1 month ago

          Has critical thinking ever been taught? Feel like it’s just something you have or you don’t.

          • Sauerkraut@discuss.tchncs.de · 1 month ago

            Critical thinking is essentially learning to ask good questions and also caring enough to follow the threads you find.

            For example, if mental health is to blame for school shootings then what is causing the mental health crisis and are we ensuring that everyone has affordable access to mental healthcare? Okay, we have a list of factors that adversely impact mental health, what can we do to address each one? Etc.

            Critical thinking isn’t hard, it just takes time and effort.

            • Aceticon@lemmy.world · 1 month ago

              I have the impression that most people (or maybe it’s my faith in Humanity that’s at an all-time low and it’s really just “some people”) just want pre-chewed explanations given to them rather than spend time and energy figuring things out themselves - basically baby pap as ideas food rather than cooking their own ideas food out of raw ingredients.

              Certainly that would help explain the resurgence of Populist sloganeering and the continued popularity of Religion (with its ever-popular simple explanations of “Deity did it” and “it’s the will of Deity”).

          • CheeseNoodle@lemmy.world · 1 month ago

            British primary schools used to have something called “problem solving”. It was usually a simple maths problem described in words that required some degree of critical thinking to solve, e.g.: a frog is at the bottom of a 30m well; it climbs 7m each day, but in the night it slides 3m back down in its sleep. You can’t just calculate 30/(7-3), because that doesn’t account for the day the frog gets over the top and thus doesn’t slide back down in its sleep.

            Not the most complex problem, but pretty good for kids under 10 to start getting the basics (the short simulation below makes the off-by-one concrete).
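            A minimal sketch of that frog puzzle in Python (the depth, climb and slide values are just the ones from the example above; this is an illustration, not anything from the original comment):

              depth, climb, slide = 30, 7, 3   # metres

              position, day = 0, 0
              while True:
                  day += 1
                  position += climb          # daytime climb
                  if position >= depth:      # over the rim before nightfall
                      break
                  position -= slide          # overnight slide

              print(day)  # 7, not the 8 you get by rounding up 30 / (7 - 3)

            The loop stops the moment the frog clears the rim, which is exactly the detail the naive division misses.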

          • Aceticon@lemmy.world · 1 month ago

            Critical thinking, especially Skepticism, does not make for good Consumers or mindless followers of Political Tribes.

          • stephen01king@lemmy.zip · 1 month ago

            Nah, it’s something you’re lucky enough to learn coincidentally or you don’t. And if you found out too late in life, you might be too stubborn to learn it at that point.

      • sugar_in_your_tea@sh.itjust.works · 1 month ago

        I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn’t recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I’ve had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.

        I absolutely think SW engineering is going to be a great career long-term, I just can’t encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you’re passionate about it, you’re going to ignore whatever I say anyway, and you’ll succeed. But if my recommendation changes your mind, then you probably aren’t passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.

        I’m not worried at all about my job or anyone on my team, I’m worried for the next batch of CS grads who chatGPT’d their way through their degree. “Cs get degrees” isn’t going to land you a job anymore, passion about the subject matter will.

        • Aceticon@lemmy.world · 1 month ago

          Outsourcing killed a lot of the junior and even mid-level career opportunities in CS, and AI seems on track to do the same.

          The downside is that going into CS now (and having gone into CS in the last decade or so, especially in English-speaking countries) was basically the career equivalent of just out of the starting line running full speed into a brick wall.

          The upside is that for anybody who now is a senior techie things have never been this good because there are significantly fewer people at that level than there is need for such people, since in the last decade or so a lot of people haven’t had the chance to progress in their careers to that point.

          Whilst personally this benefits me, I’m totally against this shit and what it has done to the kids entering my career.

          • sugar_in_your_tea@sh.itjust.works · 30 days ago

            Yup, and that’s why I’ll discourage people from entering my career, not because it’s a bad gig and it’s going away, but because the bar for competency is about to go up. Do it if you’re passionate and you’ll probably do well for yourself, but don’t do it if you’re just looking for a good job. If you just want a good job, go into nursing, accounting, or the trades.

            • Aceticon@lemmy.world · 30 days ago

              I think it’s even worse than just the bar for competency going up: even for a coding wizard going into the career, it’s a lot harder to squeeze through the bottleneck which is getting an entry-level position nowadays, unless they have some public proof out on the Net of how good they are at coding (say, commits in open source projects, their own public projects, or even YouTube videos about it).

              This is something that will negatively impact perfectly capable young developers who have an introvert personality type (which is most of them, in my experience, even in domains such as Hacking), since some of the upsides of Introversion are a greater capacity for really focusing on things and for detailed analysis - both things that make for the best programmers - and self-publicising isn’t a part of the required skillset for good developers (though sooner or later the best ones will have to learn some “image management” if they end up in the Corporate world).

              I’m a bit torn on this. On one side, salesmanship being more of a criterion determining one’s chances of getting a break at the start of one’s career as a developer is bad news (good coding and good salesmanship tend to be inversely correlated), but on the other side, a junior developer with some experience actually working with other people on real projects with real users (because they contributed to existing open source projects) has already started learning what we have to teach fresh-out-of-Uni developers to make them professionals.

              • sugar_in_your_tea@sh.itjust.works · 30 days ago

                it’s a lot harder to squeeze through the bottleneck

                Eh, I think that’s overblown. As someone involved in hiring, we go through a ton of crappy candidates before finding someone half-decent, and when we see someone who actually knows what they’re doing, we rush them through the process. The problem is that we’re not a big tech company, we’re in manufacturing, but we do interesting things w/ software. So getting on at one of the big tech companies may be challenging, but if you broaden the scope a little, there are tons of jobs waiting. We’ve had junior positions open for months because the hiring pool is so trash, but when we see a good candidate, we can get an offer to them by the end of the week.

                We don’t care too much about broader visibility (though I will look at your code if you provide a link), we expect competency on our relatively simple coding challenges, as well as a host of technical questions. We also don’t mind hiring immigrants, we’ve sponsored a number of immigrants on our team.

                introversion

                As an introvert myself, I totally get it. I got my job because a recruiter reached out to me, not because I was particularly good at following up with applications. And that’s why I tend to tell people to not get into CS. I encourage them to take CS classes if they’re offered, but not to make it a career choice, and this is for two reasons:

                • manage expectations of the future of CS - junior jobs are likely to contract a bit w/ AI
                • thin the field so it’s easier to find the good candidates - we have to go through 5-10 candidates before we find someone we like
      • pirat@lemmy.world · 1 month ago

        Altering the prompt will certainly give a different output, though. Ok, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.

        However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of “reasoning”, thereby arriving at an output that’s more correct or of higher quality.

        This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works; it “thinks” about the prompt behind the scenes, presenting only the resulting output and a hidden (by default) generated summary of the secret raw “thinking” to the user.

        Of course, it’s unnecessary - maybe even stupid - to include nonsense or smalltalk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.
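        As a rough illustration, here is what that difference looks like with the OpenAI Python client; the model name, task and exact wording are placeholders picked for the sketch, not anything from this thread:

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          task = "Write a Python function that returns the second-largest number in a list."

          # Plain prompt: the model tends to jump straight to an answer.
          plain = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": task}],
          )

          # Same task, but explicitly asking for step-by-step reasoning first.
          stepwise = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{
                  "role": "user",
                  "content": task + " Think through the problem step by step before giving the final code.",
              }],
          )

          print(plain.choices[0].message.content)
          print(stepwise.choices[0].message.content)

        The second variant is the style of prompt described above; whether it actually improves the output depends on the model.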

        • Blaster M@lemmy.world · 1 month ago

          And that’s why I’m the one that fixes the PC when it breaks… because even good programmers may consider the PC to be a magic box if they’ve never turned a screwdriver in their life…

    • blackjam_alex@lemmy.world · 1 month ago

      My experience with ChatGPT goes like this:

      • Write me a block of code that makes x thing
      • Certainly, here’s your code
      • Me: This is wrong.
      • You’re right, this is the correct version
      • Me: This is wrong again.
      • You’re right, this is the correct version
      • Me: Wrong again, you piece of junk.
      • I’m sorry, this is the correct version.
      • (even more useless code) … and so on.
      • sugar_in_your_tea@sh.itjust.works · 1 month ago

        I interviewed someone who used AI (CoPilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and AI spat out the same mistake, which the developer again didn’t notice.

        AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you’re doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It’ll do a decent job at certain prompts (i.e. generate me a function/class that…), but you’re going to need to go through line-by-line and make sure it’s actually doing the right thing. I find writing code to be much faster than reading and correcting code so I don’t bother w/ AI, but YMMV.

        An area where it’s probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say, “find the docs for a function in library X that does…” is handy. I know what I want, I just may not remember the name or the module, and I certainly don’t remember the argument order.

        • 9488fcea02a9@sh.itjust.works · 1 month ago

          AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion)

          I’m not a developer and I haven’t touched code for over 10 yrs, but when I heard about my company pushing AI tools on the devs, I thought exactly what you said. It should be a tool for experienced devs who already know what they’re doing…

          Lo and behold they did the opposite… They fired all the senior people and pushed AI on the interns and new grads… and then expected AI to suddenly make the jr devs work like the expensive Sr devs they just fired…

          Wtf

        • slaacaa@lemmy.world · 1 month ago

          AI is like having an intern you can delegate to. If you give it a simple enough task with clear direction, it can come up with something useful, but you need to check.

      • saltesc@lemmy.world · 1 month ago

        All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick which one is least wrong.

        All the while realising you did this to save time but at this point coding from scratch would have been faster.

      • TaintPuncher@lemmy.ml · 1 month ago

        That sums up my experience too, but I have found it good for discussing functions for SQL and Powershell. Sometimes, it’ll throw something into its garbage code and I’ll be like “what does this do?” It’ll explain how it’s supposed to work, I’ll then work out its correct usage and solve my problem. Weirdly, it’s almost MORE helpful than if it just gave me functional code, because I have to learn how to properly use it rather than just copy/paste what it gives me.

        • Telorand@reddthat.com · 1 month ago

          That’s true. The mistakes actually make learning possible!

          Man, designing a CS curriculum will be easy in the future. Just ask it to do something simple, and ask your CS students to correct the code.

    • WalnutLum@lemmy.ml · 1 month ago

      I like using it like a rubber ducky. I even have it respond almost entirely in quacks.

      Note: it’s a local model running for free. Don’t pay anyone for this slop.

    • Terrasque@infosec.pub · 30 days ago

      Which LLM did you use, and how long ago was it? Claude Sonnet usually writes pretty good Python for smaller scripts (a few hundred lines).

      • Telorand@reddthat.com · 30 days ago

        It was ChatGPT from earlier this year. It wasn’t a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I’d have to troubleshoot grafting it into my function, but even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors and still using deprecated syntax.

        All LLMs will fail like this in some way, because they don’t actually understand what they’re generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).

  • ulkesh@lemmy.world · 1 month ago

    Oh geez…who could have seen this coming?

    Oh wait, every single senior developer who is currently railing against their moron AI-bandwagoning CEOs.

    • Aceticon@lemmy.world · 1 month ago

      Middle and upper management are like little children - they’ll only learn that fire hurts by putting their hand in it.

    • Naz@sh.itjust.works · 1 month ago

      I’ve been laughing at this quote for 5 minutes straight

      It’s so good

      He knows he’s right

      Also: I code sometimes, and all of my code is of masterpiece quality. I cannot debug my own code; I ask for outside help, and we have to dismantle the NT kernel to find out what’s gone wrong.

  • Treczoks@lemmy.world · 1 month ago

    Good. This is digital Darwinism at its finest. Weeds out the companies who thought they could save money by relying on a digital monkey instead of actual professionals.

    • ChickenLadyLovesLife@lemmy.world · 1 month ago

      I always claimed in job interviews to be good at debugging, but there are no certifications for debugging and there’s really no way for an interviewer to verify such a claim. So even though it is an incredibly important skill, companies just do not look for it. There is also the hilariously misguided belief that good coders do not produce bugs so there’s no need for debugging.

      • PM_Your_Nudes_Please@lemmy.world · 1 month ago

        There is also the hilariously misguided belief that good coders do not produce bugs so there’s no need for debugging.

        Yeah, fuck this specifically. I’d rather have a good troubleshooter. I work in live events; I don’t care if an audio technician can run a concert and have it sounding wonderful under ideal conditions. I care if they can salvage a concert after the entire fucking rig stops working 5 minutes before the show starts. I judge techs almost solely on their ability to troubleshoot.

        Anyone can run a system that is already built, but a truly good technician can identify where a problem is and work to fix it. I’ve seen too many “good” technicians freeze up and panic at the first sign of trouble, which really just tells me they’re not as good as they say. When you have a show starting in 10 minutes and you have no audio, you can’t waste time with panic.

        • Aceticon@lemmy.world · 1 month ago

          Good programmers (and I don’t mean just at the coding level) make fewer bugs exactly because they want to avoid bug fixing as much as possible.

          They still have to do debugging - and hence have to be good at it - just less often than if they didn’t invest any time into figuring out ways of working that reduce the rate of bugs in their work (and, again, this is at more levels than just coding).

          I think that misconception of “good coders do not produce bugs” is anchored in the totally wrong idea that it’s at all possible to write code without bugs. The way I see it, the path to being a “good coder” must go through being good at debugging and wanting to avoid doing it as much as possible, because of how much more time it takes to go all the way down to the debugger to find bugs than to do things like at least some upfront analysis of the program requirements, using proper naming conventions to reduce the likelihood of the kind of bugs that come from confusing variables, and structuring your code so that you don’t get lost or forget things (especially for code you don’t see for months and later come back to having forgotten the logic you were following with it).

          I’ve done some programming without proper debuggers (embedded stuff in shitty shit microcontrollers, shader programming) and it’s a total PITA.

      • affiliate@lemmy.world · 1 month ago

        There is also the hilariously misguided belief that good coders do not produce bugs so there’s no need for debugging.

        I’m terrified of people who think this way. My experience has been that they are much less inclined to check for bugs in their code and tend to produce much buggier code.

    • Suzune@ani.social · 1 month ago

      AI code is not clever. It’s all developers averaged. Even if it worked properly, you’d get average quality code.

      It’s rather lazy and cheap. This is where the quality is lacking.

    • Aceticon@lemmy.world · 1 month ago

      The pain in the arse which is debugging is what motivated me, as my career progressed, to improve my coding, my software design, my systems design, even my software development process and standards, and eventually that extended to getting those I worked with to improve those things too, as I sometimes ended up having to debug their bugs.

      Debugging definitely makes better techies, IMHO, mainly because of the lengths people will go to in order to avoid having to do it.

  • Jesus@lemmy.world · 1 month ago

    Me and my team take our site down the old-fashioned way: code copied from some rando on the internet.

    • Echo Dot@feddit.uk · 1 month ago

      Reminds me of the time that I took down the corporate website by translating the entire website into German. I’d been asked to do this, but I hadn’t realized that the auto-translation plug-in actually rewrote code into German; I thought it was just going to alter the HTML with JavaScript at runtime, but nope. It actually edited the files.

      It also translated the password into German, which was fun because it was just random characters, so I have no idea what it translated into.

      • send_me_your_ink@lemmynsfw.com · 1 month ago

        Can we take a moment to ask ourselves how the hell piping to shell became OK? We have all kinds of methods for deploying stuff, from the age-old tarball to the new shiny Flatpak. But somehow we also became OK with

        curl foo | sh

        Oftentimes as root.

    • Aceticon@lemmy.world · 1 month ago

      It’s pretty much the same as what AIs do - copy and paste random code from Stack Overflow - but they do it automatically.

    • sunzu2@thebrainbin.org · 1 month ago

      Copy pasting random snippets from search results and chatgpt until something works is how I do my job.

      • BlitzFitz @lemmy.world · 1 month ago

        “Until something works” - at least you’re doing a better job than some people.

        Some leave it at “well, the AI told me so”, and they don’t know better and put that into prod!

  • Snapz@lemmy.world · 1 month ago

    And none of the forced tech support “AI” replacements work. And the companies don’t give a shit.

    • Echo Dot@feddit.uk · 1 month ago

      I’ve had this argument with them a few times at work. They are definitely going to replace this all with AI, probably within the next year, and no amount of us pointing out that it won’t work (and that they’ll end up having to bring us back, at 3x the rate) seems to have any effect on them.

      I’m probably going to have to listen to a lot of arguments about this strawberry thing tomorrow.

      Anyway whatever, severance is severance.

      • stringere@sh.itjust.works · 30 days ago

        I was once in a similar position: company merger and they decided to move support offshore. We got 6 months lead notice and generous severance paid out as long as we stayed to the end. Fast forward a year and they took 85% customer approval to 13%. We got hired back at 1.5x our old pay rate, so not quite the 3x you mentioned. Hoping this works out similar for you in the end.

  • Tylerdurdon@lemmy.world · 1 month ago

    See? AI creates jobs! Granted, it’s specialized mop-up situations, but jobs!

    It’ll be even more interesting in the future! Every now and then a T1000 will lose all hydraulic fluids right out of its prosthetic anus and they’ll need someone there with a mop and bucket! Our economy lives on…

    • andxz@lemmy.world · 1 month ago

      If by economy you mean some of us are needed to mop up hydraulic ass-juices at gunpoint I suppose you’re technically correct. At least they have to feed us, right?

      …right?

  • reka@lemmy.world · 1 month ago

    As stated in the article, this has less to do with using AI, more to do with sloppy code reviews and code quality enforcement. Bad code from AI is just the latest version of mindlessly pasting from Stack Overflow.

    I encourage jrs to use tools such as Phind for solving problems but I also expect them to understand what they’re submitting and be ready to defend it no differently to any other PR. If they’re submitting code they don’t understand that’s incredibly unprofessional and I would come down very hard on them. They don’t do this though because we don’t hire dickheads.

    • MonkderVierte@lemmy.ml · 1 month ago

      Yeah, but… I asked ChatGPT once how to style something in Asciidoctor’s style.yml. It proposed HTML syntax (some inline stuff can be done with HTML tags in Asciidoctor, if the output is HTML). After the usual apology, it suggested some wrong YAML. On the third try, because the formatting was wrong, it mixed the two.

      I mean, sure, it’s a niche use case in a somewhat obscure (lots of moving parts) lightweight markup language. But still, this was a lesson.

    • technocrit@lemmy.dbzer0.com · 30 days ago

      Bad code from AI is just the latest version of mindlessly pasting from Stack Overflow.

      Humans literally can not scan all of SO to make a huge copypasta.

      It takes much more time, effort, and thought to find various solutions on SO and patch them together into something that works well.

    • forrcaho@lemmy.world · 30 days ago

      We used to have these shit developers and I accepted a lot of bad code back then – if it actually worked – because otherwise “code review” is full-on training, which is an entire other job from the one I was hired to do.

      The client ditched that contracting firm, and the devs I work with now are worth putting in time on code review with – but damn, we got hella shit code in our codebase to deal with now. Some of it got tossed, some of it … we live with.

  • WalnutLum@lemmy.ml · 1 month ago

    “When asked about buggy AI, a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”

    That’s… That’s so fucking cool…

  • Ilandar@aussie.zone · 1 month ago

    The point of the article isn’t that AI is outright useless as a coding tool, but that it lulls programmers into a false sense of security regarding the quality and security of their code. They aren’t reviewing their work as frequently because of this new reliance on AI as a time saver, and as such are more likely to miss any mistakes that they or the AI made.

    • snooggums@lemmy.world · 1 month ago

      The point of the article isn’t that AI is outright useless as a coding tool but that it lulls programmers into a false sense of security regarding the quality and security of their code.

      Lulling them into a false sense of security is half of what makes it useless. The fact that it makes shitty code is the other half.

      • cheddar@programming.dev · 1 month ago

        But the job of a software developer is not to write good code, it is to deliver features. People have been writing bad code without any AI for decades. Businesses often prioritize speed over quality, rewarding teams that deliver features quicker.

        • synae[he/him]@lemmy.sdf.org · 30 days ago

          A computer lets you make more mistakes faster than any other invention with the possible exceptions of handguns and Tequila.

          Now Even Faster™ with no exceptions thanks to “AI”

  • dustycups@aussie.zone · 1 month ago

    Sounds like the Sirius cybernetics corporation:

    The fundamental design flaws are obscured by the superficial design flaws.

  • ShittyBeatlesFCPres@lemmy.world · 1 month ago

    If I was still in a senior dev position, I’d ban AI code assistants for anyone with less than around 10 years experience. It’s a time saver if you can read code almost as fluently as you can read your own native language but even besides the A.I. code introducing bugs, it’s often not the most efficient way. It’s only useful if you can tell that at a glance and reject its suggestions as much as you accept them.

    Which, honestly, is how I was when I was first starting out as a developer. I thought I was hot shit and contributing, but I was taking half a day to do tasks an experienced developer could do in minutes. Generative AI is a new developer: irrationally confident, not actually saving time, and rarely doing things the best way.

    • GetOffMyLan@programming.dev · 1 month ago

      I’ve found they’re great as a learning tool where decent docs are available, or as interactive docs you can ask follow-up questions of.

      We mostly use C#, and it’s amazing at digging into the MS docs to pull out useful things from the BCL or common patterns.

      Our new juniors got up to speed so fast by asking it to explain stuff in the existing codebases. Which in turn takes pressure off more senior staff.

      I got productive in vuejs in a large codebase in a couple days that way.

      Using it to generate actual code is insanely shit, haha. It is very similar to just copy-pasting code and hacking it in without understanding it.

      • ShittyBeatlesFCPres@lemmy.world · 1 month ago

        You make a good point about using it for documentation and learning. That’s a pretty good use case. I just wouldn’t want young developers to use it for code completion any more than I’d want college sophomores to use it for writing essays. Professors don’t have you write essays because they like reading essays. Sometimes, doing a task manually is the point of the assignment.

    • sugar_in_your_tea@sh.itjust.works · 1 month ago

      Eh, I’m a senior dev, and I don’t ban it (my boss, the director, does that for me lol; he’s worried about company secrets leaking).

      In fact, we had an interview for a senior dev position, and the applicant asked if they could use AI, and I told them to use whatever tools they normally would for development. It shouldn’t come as a surprise that they totally botched the programming challenge because of it (introduced the same bug twice, then said they were very confident in the correctness of the code…), and that made it so much easier to filter them out from our hiring pool. If you’re going to use a tool in an interview, you’d better feel confident with it. If that dev had solved the problem significantly faster than our other applicants, I would’ve taken that to my boss to have the team experiment with it. We budget 30 minutes for our challenges, and our seniors generally finish in under 20, yet it took them more than our allotted time to get the code to actually run properly (and that’s with us pointing out certain mistakes the AI generated).

      But no, I haven’t seen an actually productive use of AI for software development, beyond searching for docs online (which you can totally do w/ Bing or Google w/o involving our codebase). You may feel more productive because more code is appearing on the screen, but the increase in bugs likely reduces overall productivity. We’re always looking for ways to improve, but when I can solve the same problem in my bare-bones editor (vim) faster than my more junior colleagues can with their fancy IDEs, I really don’t think AI is going to be the thing that improves our productivity; actually understanding logic will. If someone demonstrates that AI does save time, I’ll try it out and campaign for it.

      Anyway, that’s my take as someone who has been in the industry for something like 15 years. Knowing your tools is more important, IMO, than having more tools.

      • ShittyBeatlesFCPres@lemmy.world · 1 month ago

        I had my suspicions before but the moment I realized for certain Elon Musk couldn’t run a software company was when he judged people by lines of code written.

        • sugar_in_your_tea@sh.itjust.works · 1 month ago

          Ew, I would hate to be in charge of code reviews at an org like that.

          The proper metric is success of the actual product. We have our engineers give estimates, then hold them to those estimates and evaluate based on consistency of on-time releases and number of production bugs. At the end of the day, predictable, high quality delivery is usually more valuable than faster time to market, unless you’re in a startup or something and just need to get early adopters on-board. Judge QA by defects discovered in production and devs by defects found by QA and in production. It’s really not that hard.

        • Aceticon@lemmy.world · 1 month ago

          The one time some manager voiced such an idea, I very overtly, in front of everybody, offered to write “loop unrolling” software working at the source level (compilers already do it at the assembly level in some cases for performance) so that my colleagues and I could really boost that code line count (while totally screwing maintainability).

          Mind you, all devs in that meeting were loudly against measuring performance by code lines, but I like to think that suggestion of mine delivered the coup de grâce to that “brilliant” idea.
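          For anyone who hasn’t met the term, a tiny sketch of what source-level loop unrolling does to line count (plain Python, numbers picked arbitrarily; this is just the idea, not the tool described above):

            values = list(range(8))

            # Normal loop: three lines.
            total = 0
            for v in values:
                total += v

            # "Unrolled" by hand: same result, several times the lines, far worse to maintain.
            total_unrolled = 0
            total_unrolled += values[0]
            total_unrolled += values[1]
            total_unrolled += values[2]
            total_unrolled += values[3]
            total_unrolled += values[4]
            total_unrolled += values[5]
            total_unrolled += values[6]
            total_unrolled += values[7]

            assert total == total_unrolled

          Identical behaviour, many more lines: exactly the metric being gamed.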

        • Wappen@lemmy.world · 1 month ago

          Not trying to defend him, but I thought the reasoning behind doing that was to get the least obedient people to leave the company so that there won’t be a delayed push back from the employees.

          • Aceticon@lemmy.world · 1 month ago

            In my experience working for almost 3 decades in software development, passive-aggressive shit from upper management just causes the best people to leave (as they’re the ones who easily find better jobs), leaving behind mainly a mix of the incompetent and those who never worked anywhere else (who are either already incompetent or will become so, as only ever having worked in just one company is far too narrow a professional experience for anything beyond junior/mid level - you need to have seen more than one way of doing things to understand certain higher-level concerns and choices in software development).

            • Wappen@lemmy.world · 30 days ago

              Yeah, and I’d say the people left are exactly those Elon wants: he doesn’t want white guys in their 50s, he wants obedient young guys.

              • Aceticon@lemmy.world · 30 days ago

                Sounds like a variant of the good old saying “pay peanuts, get monkeys”, only using a stick and threats instead of payment.

                Mind you, it does sound like the kind of thing somebody with his kind of personality - a narcissistic, shameless and dishonest salesman - would think is a great idea.

    • Aceticon@lemmy.world · 1 month ago

      I’ve worked as a freelancer (specifically as a Contractor) in Software Development for over a decade and more often than not I ended up having to work with some existing code base, having to deal with the design choices, coding style and bugs of somebody else, often multiple somebody elses.

      There’s nothing quite as “entertaining” as having to deal with 3+ different code and design styles in the same code base because every previous developer thought their own way of doing things was the superior way, so they just added one more layer of their style (not just coding but, worse, software design) on top of what was already there, increasing the mess, rather than work within the existing structure and style and do some refactoring.

      Anyway, in my experience having to read, understand and work with existing code that you yourself did not made is way more time costly and less pleasant than actually doing your stuff from scratch.

  • werefreeatlast@lemmy.world · 1 month ago

    Also, it is pure junk. ChatGPT code may come out fast on the screen, but it’s garbage. I tried Python and C++, both pure garbage. Sure, I got it to do what I wanted, but only after a day of hair-pulling, repetitive madness. Simple task: open an image and invert it. Then, well, it opened the image but didn’t invert it. Or maybe it’s upside down. Can you open the image right side up and invert it… fuck, fuck, why is the window full screen? Did I ask for full screen? Shit heavens, no! Anyway, it’s a fuckin’ idiot just rambling code at me.
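    For what it’s worth, the task described really is tiny; a minimal sketch with Pillow (the filename and the conversion to RGB are assumptions for the sketch, not anything from the comment above):

      from PIL import Image, ImageOps

      img = Image.open("photo.jpg").convert("RGB")   # ImageOps.invert needs RGB or L mode

      inverted = ImageOps.invert(img)       # invert the colours, nothing else
      inverted.save("photo_inverted.jpg")
      inverted.show()                       # opens in the system's default image viewer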