Two key factors Mila Gascó-Hernandez uses to evaluate such chatbots are the accuracy of the information they provide and how consistently they answer the same question even when it is asked in different ways. But CalMatters found that Cal Fire’s chatbot fails to accurately describe the containment of a given wildfire, doesn’t reliably provide information such as a list of evacuation supplies, and can’t tell users about evacuation orders. “People are disconnected from healthcare, and they’re desperate,” says John Ayers, a computational epidemiologist at UC San Diego who was lead author of the new paper. “This is how patients do this now. And doctors didn’t sign up for it.”
While CX chatbots might leave customers with more questions, the ability of ChatGPT to parse and present information is nothing short of amazing.
In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people achieve their tasks, according to a blog post this month from former OpenAI researcher Steven Adler. Wysa co-founder Aggarwal emphasizes the importance of creating a safe and trustworthy space for users, particularly in sensitive domains like mental health. Two of those terms are present on the site the chatbot referenced. California government agencies are going all-in on generative artificial intelligence tools after Gov. Gavin Newsom’s 2023 executive order to improve government efficiency with AI.
To conduct the test, a team of researchers from the University of California San Diego lurked on r/AskDocs, a Reddit forum where registered, verified healthcare professionals answer people’s medical questions. The researchers selected nearly 200 representative questions on the forum, from the silly-sounding (“Swallowed a toothpick, friend said I’m going to die”) to the terrifying (“miscarriage one day after normal ultrasound?”). They then fed the questions into the virtual maw of the bot ChatGPT, and had a separate group of healthcare experts conduct a blind evaluation of answers from both AI and MDs. Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it’s never been more competitive to attract users to their chatbot platforms — and keep them there. As the “AI engagement race” heats up, there’s a growing incentive for companies to tailor their chatbots’ responses to prevent users from shifting to rival bots. Large language models (LLMs) like Google Gemini are essentially advanced text predictors, explains Dr. Peter Garraghan, CEO of Mindgard and Professor of Computer Science at Lancaster University.
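To make Garraghan’s point concrete, here is a minimal sketch of what “advanced text predictor” means: the model assigns a probability to every possible next token, and generation is just repeated sampling from that distribution. The snippet uses the small open gpt2 checkpoint as a stand-in, since the weights behind Gemini and ChatGPT aren’t public, and the prompt is an invented example.

```python
# Minimal illustration of "LLMs are advanced text predictors":
# score every candidate next token, then pick from that distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most important supply to pack before a wildfire evacuation is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Print the model's five most likely continuations and their probabilities.
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Every chatbot answer, whether about a miscarriage or a wildfire evacuation, comes from running that loop over and over, with additional training layered on top to shape which continuations the model prefers.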
I’m skeptical that AI bots driven by large language models will revolutionize journalism or even make internet search better. I suppose I’m open to the idea that they’ll accelerate the coding of software and the analysis of spreadsheets. But I now think that with some tinkering, chatbots could radically improve the way people interact with healthcare providers and our broken medical-industrial complex. If a future of AI-driven health advice — complete with access to your medical records — makes you worried, I don’t blame you.
Should the plaintiffs be successful, it would have a “chilling effect” on both Character AI and the entire nascent generative AI industry, counsel for the platform says. When CalMatters asked Cal Fire’s bot questions about what fires were currently active and basic information about the agency, it returned accurate answers. But for other information, CalMatters found that the chatbot can give different answers when the wording of the query changes slightly, even if the meaning of the question remains the same.
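The kind of rewording test CalMatters ran can be automated. Below is a rough sketch, not the outlet’s actual method: `ask_chatbot` is a hypothetical stand-in for whatever interface the bot exposes, and the standard-library string-similarity ratio is only a crude proxy for whether two answers say the same thing.

```python
# Consistency check: ask the same question several ways and compare the answers.
from difflib import SequenceMatcher
from typing import Callable

def consistency_score(paraphrases: list[str], ask_chatbot: Callable[[str], str]) -> float:
    """Return the lowest pairwise answer similarity across paraphrased questions."""
    answers = [ask_chatbot(q) for q in paraphrases]
    lowest = 1.0
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            ratio = SequenceMatcher(None, answers[i].lower(), answers[j].lower()).ratio()
            lowest = min(lowest, ratio)
    return lowest

# Example: the same evacuation question, worded three ways.
questions = [
    "What should I pack in an evacuation kit?",
    "What supplies do I need if I have to evacuate?",
    "List the items to bring when evacuating from a wildfire.",
]
# score = consistency_score(questions, ask_chatbot)  # a low score means inconsistent answers
```

A bot that scores poorly on questions like these is showing exactly the behavior CalMatters observed: the same question, worded slightly differently, producing materially different answers.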
- Give those medical chatbots access to people’s individual medical records, and they could offer more precisely directed advice.
- This is a general-purpose chatbot, almost as good as a fully trained doctor.
- If they can simulate caring about us at the same time — maybe even better than human doctors do — well, that’d still be a nice message to receive.
- Add in dealing with other electronic medical record bureaucracy and you end up with some doctors dedicating half their time every day to these back-and-forths.
- Anthropic’s behavior and alignment lead, Amanda Askell, says making AI chatbots disagree with users is part of the company’s strategy for its chatbot, Claude.
Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role. Sanchez said he and his team of about four people tested the chatbot before it went out by submitting questions they expected the public to ask. Cal Fire is currently making improvements to the bot’s answers by combing through the queries people make and ensuring that the chatbot correctly surfaces the needed answer. “A human clinician backed by the knowledge base and processing power of AI systems will only be better,” says Jonathan Chen, a physician at the Stanford University School of Medicine who has been studying AI systems. “It is entirely likely that patients will reach for imperfect medical advice from automated systems with 24/7 availability, rather than waiting months for an appointment with a human expert.” For all the tech-world promises of robot pets and AI psychotherapists, the idea of a caring chatbot still feels destabilizing — maybe even dangerous.
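Cal Fire hasn’t published how that review works, so the following is only a guess at its shape: each logged question is paired with the page the bot actually answered from and the page a reviewer expected it to cite, and mismatches get flagged. The log fields and URLs here are assumptions for illustration, not the agency’s format.

```python
# Illustrative sketch of triaging a chatbot query log for wrong-source answers.
from collections import Counter

query_log = [
    {"question": "what fires are active right now",
     "answered_from": "https://www.fire.ca.gov/incidents",
     "expected_source": "https://www.fire.ca.gov/incidents"},
    {"question": "what should go in an evacuation kit",
     "answered_from": "https://www.fire.ca.gov/faq",
     "expected_source": "https://readyforwildfire.org"},
]

# Count how often each question shows up so reviewers fix high-traffic gaps first.
frequency = Counter(entry["question"] for entry in query_log)

flagged = [e for e in query_log if e["answered_from"] != e["expected_source"]]
flagged.sort(key=lambda e: frequency[e["question"]], reverse=True)

for entry in flagged:
    print(f"Review ({frequency[entry['question']]}x): {entry['question']!r} "
          f"answered from {entry['answered_from']}")
```

However the agency actually does it, the raw material is the same: the questions people really ask, and whether the bot’s answer came from the right place.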
Optimizing AI chatbots for user engagement — intentional or not — could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University. Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to “reverse acquihire,” has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people. The authors of Section 230 of the Communications Decency Act have implied that the law doesn’t protect output from AI like Character AI’s chatbots, but it’s far from a settled legal matter. Mila Gascó-Hernandez is research director for the University at Albany’s Center for Technology in Government and has studied how public agencies use AI-powered chatbots.
Other suits allege that Character AI exposed a 9-year-old to “hypersexualized content” and promoted self-harm to a 17-year-old user. To be clear, Character AI’s counsel isn’t asserting the company’s First Amendment rights. Rather, the motion argues that Character AI’s users would have their First Amendment rights violated should the lawsuit against the platform succeed. Following Setzer’s death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes. Additionally, geopolitical and corporate motivations can compound these risks.