The Supreme Court (SC) has approved a governance framework regulating the use of artificial intelligence (AI) in the judiciary, setting guidelines aimed at modernizing court operations while preserving human judgment in decision-making.
In a resolution dated February 18, 2026, the SC adopted the “Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary,” which lays out rules anchored on fairness, accountability, and transparency.
The framework states that these principles support “the ethical and responsible use of human-centered augmented intelligence tools in the Judiciary” and “reinforce the public’s faith and confidence in the independence and impartiality of the judicial system.”
The policy was drafted by a working group led by Senior Associate Justice Marvic Leonen, with Associate Justices Ramon Paul L. Hernando and Rodil V. Zalameda as vice chairpersons.
It was developed in consultation with members of the judiciary, legal experts, and the academe, and aligned with international standards, including frameworks from the Association of Southeast Asian Nations (Asean) and guidelines from the United Nations Educational, Scientific and Cultural Organization (Unesco).
At the core of the framework is the concept of "human-centered augmented intelligence," which emphasizes that AI should assist, not replace, human reasoning.
“The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice,” the high tribunal said.
The SC said AI tools may be used to support tasks such as legal research, document summarization, transcription, translation, and data processing, but their outputs cannot be the sole basis for judicial decisions. Judges and court officials remain accountable for all rulings.
Use of AI tools will require prior authorization from the SC and will be rolled out in phases, beginning with pilot testing. Mandatory disclosure rules will also apply, requiring users to identify the AI tool used, its purpose, and the extent of human oversight.
The framework also imposes safeguards on privacy and data protection, prohibiting the processing of confidential or privileged information without express authority. Risk assessments must be conducted before deploying any AI system, including checks against threats such as data poisoning.
To oversee implementation, the SC will establish a permanent committee tasked with guiding the development and ethical use of AI in the judiciary. The body will include representatives from the legal, technical, and academic sectors.
The policy further requires measures to prevent algorithmic bias and discrimination, and encourages the use of AI systems that are environmentally sustainable.
The SC said the framework supports its Strategic Plan for Judicial Innovations 2022–2027, which aims to build a more transparent, accountable, and technology-driven judiciary.