Web Dev Person / Ex Performance ECU Calibrations Person

  • 8 Posts
  • 41 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • That seems like a pretty naive and biased approach to software to me honestly.

    Ease of use, community support, feature set, CI/CD, and so on should all come into play when deciding what to use.

    Freedom at all costs is great until you cut community development and your potential user base by 90% by hosting on a completely open repo service that 5% of people use, or some small Discord alternative.

    So then the option is to host on multiple platforms/communities, and the management and time investment go up as you try to keep them all in sync and active.

    As with most things in life, it’s best to look at things with nuance rather than a hard stance imo.

    I may stand it up on another service at some point, but also anyone else is totally free to do that as well. There are no restrictions.




  • Thanks!

    Unfortunately there isn’t a true RAG implementation right now, largely because this site/app is fully self-contained, with none of the additional servers or databases that RAG typically requires.

    For now, file uploads are stored in the browser’s own local database, and their content can easily be extracted and added to the current conversation context, roughly like the sketch below.
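
    To illustrate, here’s a minimal sketch of that flow in TypeScript. The names (`openFileDb`, `saveUpload`, `addFileToContext`) are made up for illustration, not the app’s actual code, and it assumes plain IndexedDB:

    ```typescript
    // Hypothetical sketch, not the app's actual code: persist an uploaded
    // file in IndexedDB and later splice its text into the chat context.

    interface ChatMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    function openFileDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("chat-files", 1);
        req.onupgradeneeded = () => {
          req.result.createObjectStore("files", { keyPath: "name" });
        };
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    // Store the raw text of an upload, keyed by file name.
    async function saveUpload(file: File): Promise<void> {
      const db = await openFileDb();
      const text = await file.text();
      await new Promise<void>((resolve, reject) => {
        const tx = db.transaction("files", "readwrite");
        tx.objectStore("files").put({ name: file.name, text });
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }

    // Pull a stored file back out and prepend it as extra context.
    async function addFileToContext(
      name: string,
      messages: ChatMessage[]
    ): Promise<ChatMessage[]> {
      const db = await openFileDb();
      const file = await new Promise<{ name: string; text: string } | undefined>(
        (resolve, reject) => {
          const req = db
            .transaction("files", "readonly")
            .objectStore("files")
            .get(name);
          req.onsuccess = () => resolve(req.result);
          req.onerror = () => reject(req.error);
        }
      );
      if (!file) return messages;
      return [
        { role: "system", content: `Contents of "${name}":\n${file.text}` },
        ...messages,
      ];
    }
    ```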

    I definitely want to add a fuller RAG system, but it’s a process to say the least, and if I implement it I want it to be genuinely effective. My experience with RAG has generally left me unimpressed, with a few decent implementations being the exception.




    This project is entirely web-based using Vue 3. It doesn’t use LangChain, and honestly I hadn’t looked into it before, but I do see they offer a JS library I could utilize. I’ll definitely be looking into that!

    As a result there is no LLM function calling currently, and from what I remember apps like LM Studio don’t support function calling when hosting models locally anyway. Adding the ability to retrieve outside data, like searching the web and generating a response from the results, is definitely on my list; a rough sketch of the idea is below.
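
    The shape I have in mind is a simple retrieve-then-answer loop rather than true function calling. Nothing here is implemented yet, and `searchWeb()` is a placeholder for whatever search API ends up being used:

    ```typescript
    // Hypothetical sketch of retrieve-then-answer: run a web search for the
    // user's question, then ask the model to answer using the results.
    // searchWeb() is a placeholder, not a real API.

    interface ChatMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    declare function searchWeb(query: string): Promise<string[]>;

    async function answerWithWebResults(
      apiUrl: string,
      question: string
    ): Promise<string> {
      const results = await searchWeb(question);
      const messages: ChatMessage[] = [
        {
          role: "system",
          content: `Answer using these search results:\n${results.join("\n")}`,
        },
        { role: "user", content: question },
      ];
      const res = await fetch(`${apiUrl}/chat/completions`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // Most OpenAI-format servers expect a model name; some local ones ignore it.
        body: JSON.stringify({ model: "local-model", messages }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }
    ```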


  • Yep that’s a pretty good comparison!

    I’m curious what you mean by sourcing training data in an ethical way? I know OpenAI has come under well-deserved scrutiny for apparently using paywalled content in their training data without purchasing access themselves, which is quite unethical. Aside from that instance, though, I’m interested in hearing other concerns for my own education.

    In general there are loads of models on places like Hugging Face that are fully open source, and many of them document their training data sources.

    I believe Microsoft actually generated synthetic data themselves to train the new Phi 3 models, which is an interesting approach that seems to yield good results.

    In the open source LLM world, Meta’s new Llama 3 models are the latest and greatest; I haven’t seen any cause for concern with them yet. Might be worth looking into those!



    Local models are indeed already supported! In fact, any API, local or otherwise, that uses the OpenAI response format (the de facto standard) will work.

    So you can use something like LM Studio to host a model locally and connect to it via the local API it spins up, as in the sketch below.
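
    For example, something like this should work against LM Studio’s local server (port 1234 is its default; adjust the URL to your setup):

    ```typescript
    // Minimal sketch: chat with a model hosted by LM Studio's local server.
    // The model name is largely ignored by LM Studio, which serves whatever
    // model you currently have loaded.

    async function chatWithLocalModel(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:1234/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "local-model",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      // Standard OpenAI response shape: choices[0].message.content
      return data.choices[0].message.content;
    }

    chatWithLocalModel("Hello!").then(console.log);
    ```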

    If you want to get crazy… fully local in-browser models are also supported, currently in Chrome and Edge. The app downloads the selected model in full and runs it on your GPU via the browser’s WebGPU support, and lets you chat. It’s more experimental and takes actual hardware power, since you’re fully hosting a model in your browser itself. As seen below.

    [Screenshot: chatting with a model running fully in the browser]


  • Well you’ll probably really enjoy This Video haha!

    The MGU-H was in “motor” mode, so it doesn’t count for this challenge, but that’s still the absolute most I can get out of the car!

    I’m right on the edge of my ability through a lot of this lap, proper sweaty attempt.

    I’ll run in the correct mode for an official lap if I end up needing to, no worries there. Moar people need to join up and give it a go!