New AI Transparency and Companion Chatbot Laws going into effect statewide January 1

Among the new laws going into effect as of January 1st, 2026, are AB853, which updates the California AI Transparency Act (CAITA) requiring disclosures for AI-generated content and extends its compliance deadline to August 2nd, 2026, and SB243, a new companion chatbot law.
Originally signed into law by Governor Gavin Newsom on September 19th, 2024, the California AI Transparency Act requires providers of generative AI systems to make AI detection tools available; to offer users disclosures that content is AI-generated, for both obvious and ambiguous elements; and to enter into contracts with licensees. AB853 amends CAITA, which previously applied only to "covered providers" (those who create, code, or produce a generative AI system with over one million visitors in California). The law now also covers large online platforms, AI system hosting platforms, and manufacturers of capture devices (video cameras, still photography cameras, and voice recorders). With this wider net, all such providers will have until August 2nd to comply or face a $5,000 fine per violation, enforced by the California Attorney General, with each day past the August 2nd deadline counting as a "discrete" violation (a separate, additional breach of the law).
While the new law might appear a step forward for critics of AI, Jean Paul Garnier, owner of our local Space Cowboy Books and an anti-AI activist, has taken issue with its logic, suggesting that the plagiarism at its foundation makes any such disclosures irrelevant.
“The problem with AB853 is that it’s asking the very people that have created LLM (Large Language Model) tools and so-called artificial intelligence to be responsible for disclosure, yet these are the very companies that have used plagiarism to train their tools, and the trust is being put in them to protect us against the problem that they have created.”
Also going into effect January 1st is SB243, the Companion Chatbot Law, which imposes transparency, safety protocol, and reporting obligations on operators of companion chatbots: natural language interfaces that provide adaptive, human-like responses to user inputs. The law excludes chatbots used for customer service, business functions, video games, and voice command and virtual assistant systems. It appears designed primarily to protect teenagers from exposure to adult or dangerous content, following 2024 cases in which multiple teens used ChatGPT and Character.AI in ways that assisted their suicides. Under SB243, operators must now maintain protocols for preventing suicidal ideation and report annually to the California Office of Suicide Prevention, as well as provide notifications every three hours suggesting that users take a break, with reminders that their companion chatbot is not human.
