this post was submitted on 05 Mar 2024
52 points (93.3% liked)
Asklemmy
you are viewing a single comment's thread
Lawyers who have tried to use AI so far have lost their cases miserably.
That's because we only hear about AI being used by lawyers when they use it wrong and it hallucinates a case that doesn't exist, and then they don't actually verify the case themselves.
I'm sure lawyers are already using it successfully, we just don't hear about successful cases.
And right now they're using general-purpose LLMs. I'm sure we'll get models actually focused on legal knowledge in the future that will do much better than the current ones.
Most of the recent change in AI has been due to OpenAI's approach of combining a fairly primitive transformer with ever more data: going from all the books they could pirate with GPT-3 to the entire text of the internet with GPT-4. Smaller subject-specific models have made relatively little progress in the last ten to fifteen years, so I don't think a chatbot like GPT-4 that regurgitates more specific information with high accuracy is likely to be on the table anytime soon.
A better search engine seems far more suited to such a task than a generative system anyway.
First off, it's not AI, it's an LLM: basically a better way to collate and search data. It's a tool they should be using for research, but they'd better not be using ChatGPT or any of the other publicly available ones. I would hope that by now someone has launched, or is working on, one trained on data from law books, existing case law, etc. Then you could also feed it any discovery documents that come in, and it could help highlight what is important.
[citation needed]
Though I'm sure your LLM could hallucinate some for you!
Do I need to define collate? Maybe it wasn't the best choice of verbiage, but the point still stands. The quality of the output is always relative to the input. That's why a growing number of companies are training their own LLMs on data from their own databases instead of trying to rely on external datasets.
For the record, I'm not talking about ones where you can ask a question and get an answer. I was talking about law firms using a local or privately hosted LLM to scan through discovery documents and find keywords, or related keywords, that may be relevant to the case they're working on. Especially now that a lot of discovery is digital.
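The triage described here doesn't even require an LLM to illustrate; a minimal sketch of scanning a folder of discovery documents for case-relevant terms might look like the following. The keywords, file layout, and function names are all invented for illustration.

```python
# Hypothetical sketch: rank discovery documents by how often they mention
# case-relevant terms. Keywords and .txt layout are assumptions, not a
# real e-discovery tool.
from pathlib import Path

KEYWORDS = {"indemnify", "breach", "termination"}  # assumed terms of interest

def score_document(text: str, keywords: set[str]) -> int:
    """Count keyword occurrences in the document, ignoring case and punctuation."""
    words = text.lower().split()
    return sum(1 for w in words if w.strip(".,;:!?\"'") in keywords)

def triage(folder: str, keywords: set[str] = KEYWORDS) -> list[tuple[str, int]]:
    """Rank the .txt files in a folder by keyword relevance, highest first."""
    scores = [(p.name, score_document(p.read_text(errors="ignore"), keywords))
              for p in Path(folder).glob("*.txt")]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

A real system along the lines the commenter describes would presumably use embeddings or an LLM to catch *related* terms too, not just exact matches; this just shows the exact-match baseline.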
I can't give more detail than the following, because it may not be public yet, but I'm aware of one company working on its own LLM to let clients more easily find information published on its platform that would take longer to skim through than to just use a search engine.
I love that term "hallucinate".
That's as big a euphemism as the word "faith", and like the term "faith", it's used to mask glaring operational deficiencies. It reminds me of the time I test drove a used car and there was a clear steering issue, which the car salesman called a "shimmy".
Because we don't actually have AI. We have people following paint by numbers, not artists.
True AI, and not the sparkling programming we have, will be more effective than any lawyer.
Oh, you mean that thing that hasn’t been proven possible yet?
Two years later he and his brother achieved the first successful test of powered flight. Their flight would last 12 seconds and cover 120ft with a top speed of 6.8mph.
The SR-71 Blackbird, flown 61 years after the first powered flight, had a top speed of 2190mph and had a range of 2,500 miles.
True AI will happen unless temporary stars are all the rage.
Yeah, the Wright brothers were great, but it pains me to say as a Daytonian engineer that they were also completely full of themselves. There was good reason at the time to believe heavier-than-air flight was not only possible but coming soon. Lighter-than-air flight was not only already happening but had been used in conflicts; there was a hot air balloonist involved in the Paris Commune.
But my doubts are about the possibility, immediacy, and practicality of an artificial device having human or greater cognitive power in ways that mimic organic brains. These questions aren't me just being some doubter (though that is valid given the sheer resources being thrown at these systems and the way we're being asked to leave problems to them rather than seeking more immediate alternatives), but are based on discussions with artificial intelligence specialists who don't have a financial stake in the technology.
Who downvoted you? I've been arguing the same thing since AI became the buzzword of the decade. No one seems to understand what artificial intelligence actually is, and these current systems are anything but. They aren't even really a step in that direction, because the underlying software and hardware aren't anywhere near ready to emulate a human or even a lower animal's brain.