I’ve been using airoboros-l2-70b for writing fiction, and while overall I’d describe the results as excellent and better than any llama1 model I’ve used, it doesn’t seem to be living up to the promise of 4k token sequence length.

Around 2,500 tokens, output quality degrades rapidly: the model either starts repeating previous text verbatim or becomes incoherent (grammar, punctuation and capitalization disappear, leaving a salad of vaguely related words).

Any other experiences with llama2 and long context? Does the base model work better? Are other fine tunes behaving similarly? I’ll try myself eventually, but the 70b models are chunky downloads, and experimentation takes a while at 1 t/s.

(I’m using GGML Q4_K_M on kobold.cpp, with rope scaling off like you’re supposed to do with llama2)
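
In case it helps anyone reproduce this, below is a minimal, backend-agnostic sketch of how I check where the verbatim repetition starts. The word split is only a rough stand-in for real tokens, and `generation.txt` is a placeholder for whatever output dump you have.

```python
# Scan a saved generation for a long window of words that exactly repeats
# an earlier window, and report where that first happens. Whitespace
# splitting only approximates real tokens; "generation.txt" is a placeholder.
def first_verbatim_repeat(text: str, span: int = 20) -> int | None:
    """Return the word index where a span-word window first duplicates an
    earlier window exactly, or None if the text never repeats itself."""
    words = text.split()
    seen = {}
    for i in range(len(words) - span + 1):
        window = " ".join(words[i:i + span])
        if window in seen:
            return i
        seen[window] = i
    return None

with open("generation.txt") as f:  # placeholder: the model output you saved
    print("first verbatim repeat at word", first_verbatim_repeat(f.read()))
```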

  • Sims@lemmy.ml · 1 year ago

    No experience, but just adding that long-context models have a tendency to ‘forget’ what’s in the middle of the text. Worth noting if you work on long texts, I assume. I can’t remember the paper, though. There are so many…
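
    If you want to poke at it yourself, a toy probe in that spirit could look something like the sketch below; `generate()` is a placeholder for whatever backend you use (kobold.cpp’s API, llama-cpp-python, etc.), and the filler/fact strings are made up.

    ```python
    # Toy "lost in the middle" probe: bury one key fact at different relative
    # depths inside filler text and check at which depth the model stops
    # retrieving it. generate() is a placeholder for your own backend call.
    FILLER = "The weather report said nothing unusual happened that day. " * 40
    FACT = "The package was hidden under the old oak bridge."
    QUESTION = "\n\nQuestion: Where was the package hidden?\nAnswer:"

    def build_prompt(depth: float) -> str:
        """Insert FACT at a relative depth (0.0 = start, 1.0 = end) of the filler."""
        cut = int(len(FILLER) * depth)
        return FILLER[:cut] + FACT + " " + FILLER[cut:] + QUESTION

    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        answer = generate(build_prompt(depth))  # placeholder: call your backend here
        print(f"fact at depth {depth:.2f} -> recalled: {'oak bridge' in answer.lower()}")
    ```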

      • h3ndrik@feddit.de · 1 year ago

        But is that a bug or a feature? I think it is plausible that relevant information is most likely either at the beginning of a document or in the previous few lines. So that is where attention should be focused.

        Like when you get an assignment, the important instructions are at the beginning and not somewhere in the middle. And when writing a document or a book, the most important thing is that your current sentence fits in with its paragraph. At that point you don’t worry about remembering exactly what the hobbits did back in the Shire.

        I remember reading some criticism of that paper, but I cannot comment on the technical aspects.

        • noneabove1182@sh.itjust.worksM · 1 year ago

          You raise an interesting point, though: most examples probably do follow exactly the pattern you suggest, so there would have to be a large amount of training data specifically focused on middle content, and there probably just isn’t enough of it in the dataset.

        • flamdragparadiddle@sh.itjust.works · 1 year ago

          In my application (summarising excerpts from several papers) it is a bug. I had assumed the context would be given equal weight throughout, but the distribution of information in the generated summaries suggests it follows the ‘lost in the middle’ shape. This is most evident when the early chunks of text say something that is contradicted in the middle. I’d expect the models to at least mention the contradiction, but it hasn’t come up in any summary I’ve looked at.

          I can see what you mean: when generating text you need to pay most attention to what you just wrote, but you also don’t want to claim the hobbits started out in Mordor. I have no idea how to mitigate it, other than making the context short enough that it is all ‘remembered’.
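
          Roughly what I mean by keeping the context short, as a sketch (`summarize()` is a placeholder for whatever model call you use, and the word-based chunk size is arbitrary):

          ```python
          # Keep each call's context short: summarise the papers chunk by chunk,
          # then summarise the partial summaries, so no single call has to attend
          # over a long middle. summarize() is a placeholder for your model call.
          def chunks(text: str, max_words: int = 1200) -> list[str]:
              words = text.split()
              return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

          def map_reduce_summary(document: str) -> str:
              partials = [summarize(c) for c in chunks(document)]  # each call stays well inside the window
              return summarize("\n\n".join(partials))              # final pass over the short partials
          ```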

          If you remember where you read some criticism, I’d be very grateful for a link. That paper is doing a lot of heavy lifting in how I understand what I’m seeing, so it would be good to know where the holes in it are.

      • Sims@lemmy.ml · 1 year ago

        I was unaware that the smaller-context models exhibited the same effect. It does seem logical that we naturally put broad, important information and conclusions at the ends of a text. I haven’t read the paper yet, but I wonder if the training set (our communication) also contains more information at the ends, so the effect isn’t caused by the algorithm but by the data. I’ll give the paper a read, thanks…

  • h3ndrik@feddit.de · 1 year ago

    I’ve read other people complaining, too. Maybe try the base model; I’m not sure whether it’s the fine-tune or llama2 itself that’s at fault.

    There are ways to measure that: computing perplexity across the context, or whatever people used to check they were headed in the right direction when they pushed the first llama’s context size past 2048. But I didn’t find such measurements for Llama2, at least with a quick google.
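
    If you wanted to check that yourself, a minimal sketch of the measurement with a smaller model could look like this (the model name and input file are just examples, and the 512-token buckets are arbitrary):

    ```python
    # Compute per-token loss across one long context and average it in
    # position buckets; a sharp rise late in the window would point at a
    # context-length problem rather than a sampling problem.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"  # example: a smaller model for a quick test
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    ).eval()

    text = open("long_sample.txt").read()  # placeholder: any long document
    ids = tok(text, return_tensors="pt").input_ids[:, :4096].to(model.device)

    with torch.no_grad():
        logits = model(ids).logits  # shape (1, seq_len, vocab)

    # Token t is predicted from positions < t, so shift logits and targets by one.
    nll = torch.nn.functional.cross_entropy(
        logits[0, :-1].float(), ids[0, 1:], reduction="none"
    )

    for start in range(0, nll.numel(), 512):
        bucket = nll[start:start + 512]
        print(f"tokens {start}-{start + bucket.numel()}: ppl {bucket.mean().exp().item():.2f}")
    ```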

    Edit: People also mentioned that Llama2 uses a different attention mechanism (grouped-query attention) in the 70B version, so this might be specific to 70B. Make sure to use the most recent version of KoboldCPP (or whatever you use) and to configure the scaling correctly. At 4096 it shouldn’t need any context scaling, as far as I understand.
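
    For what it’s worth, the “4096 with no scaling” setup through llama-cpp-python would look roughly like the snippet below (the model path is a placeholder, and the rope parameter names are as I remember them from llama-cpp-python, so double-check them against your version):

    ```python
    # Rough llama-cpp-python equivalent of "4k context, no rope scaling".
    # Model path is a placeholder; rope_freq_base / rope_freq_scale are the
    # parameter names as I remember them - check your library version.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./airoboros-l2-70b.ggmlv3.q4_K_M.bin",  # placeholder path
        n_ctx=4096,             # llama2's native window
        rope_freq_base=10000,   # defaults, i.e. no scaling
        rope_freq_scale=1.0,
    )
    out = llm("Once upon a time", max_tokens=256)
    print(out["choices"][0]["text"])
    ```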

  • creolestudios@sh.itjust.works · 6 months ago

    Yes, the 4k context length of Llama2 is real. Llama2 was developed by Meta, and its ability to understand and generate text with that long a context is one of its notable features.