The initial data determines the end result. GIGO.

A doctor can put an entire patient's medical history in the prompt, a lawyer an entire case history, etc.
Law, medicine and software engineering all require facts, not possibly incorrect approximations. So unless they can be made to generate only facts, I don't see these generative language models replacing humans. But they can enable people in those fields to be more productive.

This thing is really good. I see how AI taking over jobs is real: lawyers, doctors, coders are at risk, most white-collar jobs, and it's happening so fast.
People have already started losing jobs.
Can it predict which stocks will go up or down, or the movement of the dollar rate?
You don't need facts; you need a consultant who can point you in the right direction and give you a second opinion, which you can explore while verifying with the experts. Take, for instance, the guy who saved his dog's life: this is the perfect use of ChatGPT in the medical field. It's better than googling your symptoms.

I think GPT is good at fooling people too, because it will often state generated nonsense as fact.
Do you trust google 100%, or whatever you find on the internet? The same principles apply.

Note, though, that he needed a vet to verify the diagnosis. In my line of work ChatGPT can be very helpful, but also very wrong in some cases, even though it may give the answer authoritatively. A layman would think it is always correct, but an expert would recognize when it is wrong.
Fundamentally, these generative models are approximation functions: they only approximate the correct answer. A useful tool, but one that cannot be trusted 100%.
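The point above can be sketched in code. This is a minimal toy, assuming a bigram word model stands in for a real LLM (the corpus, function names, and structure are all illustrative, not how any production model works): the model only approximates the distribution of its training text, so it reproduces a false claim just as fluently and confidently as a true one. Garbage in, garbage out.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count word-to-next-word transitions: the 'approximation function'."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Sample a fluent-looking continuation, with no notion of truth."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# The training text contains one true claim and one false one;
# the model treats them identically, because it only models word statistics.
corpus = "the capital of france is paris . the capital of kenya is paris ."
model = train_bigram(corpus)
print(generate(model, "the", 6))
```

Scaled up by many orders of magnitude, the same principle is why an expert can spot when the output is wrong but a layman cannot: the text is equally confident either way.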
True, but google didn't replace lawyers or doctors. It made them more productive. I see the same with ChatGPT and the like.
More like less than 50%.