Publication Date: 2025-09-16
Year in review
Technology
Comment on the most significant or innovative technology advances or use cases in the past year.
Developments in policy and legislation
Currently, no legislation has been enacted regarding artificial intelligence; however, two legislative proposals have been submitted.
The proposal to amend the Penal Code regarding ‘deepfakes’ seeks to address the dangers of technologies that enable the creation of high-quality fake content. The bill defines a ‘deepfake’ as audio or visual content that has been digitally created or altered with the intent of making it difficult to identify as a forgery.
The proposal establishes criminal penalties according to severity: up to five years’ imprisonment for publishing sexual deepfakes, up to seven years for fraud and up to ten years for influencing elections.
The proposal for the Marking of Advertisements Containing AI-Generated Content aims to ensure transparency for the public. If enacted, the law would require clear labelling of AI-generated content in advertisements and sponsorship notices.
According to the proposal, clarification notices would have to be included in content that might otherwise appear authentic. The authority to establish rules on this matter would be granted to the Israeli Public Broadcasting Corporation Council and the Second Authority Council.
Both proposals align with international legislative trends and seek to address the challenges of artificial intelligence in content creation.
Cases
Omer Berger v. The State of Israel
Judge Ido Droyan-Gamliel has issued several decisions in proceedings relating to the charges against a man who was detained at the airport after a profiling system ‘flagged’ him and drugs were seized in his possession.
The profiling method results in searches of a person’s body and belongings without a judicial warrant, without reasonable suspicion of drug smuggling and without specific intelligence information. This appears to be a serious violation of the fundamental rights to privacy and equality, as it is an arbitrary and discriminatory action carried out by a computerised system without human involvement. The Judge clarified that, in this respect, the system is a ‘black box’: no one, not even the police, knows for certain how it works.
Further, at the end of May 2024, the Civil Rights Association petitioned the High Court of Justice, requesting an order that the police stop relying on an artificial intelligence system, which has been operating at the airport in recent years, when deciding whether to detain and search travellers returning from abroad for drugs (Case No. 4271/24).
L.H.S v. Clal Insurance Company Ltd
Civil File (Haifa) 41416-12-23.
In this case, the plaintiff contested the use of an AI-generated medical document in a personal injury claim, arguing that the document manipulated information to favour the defendant. The court ruled that the AI-generated document could compromise the appointed expert’s objectivity and ordered its exclusion. The decision highlights ethical and technical concerns about AI-generated data, such as potential biases and data confidentiality.
Shimon Peri v. National Insurance Institute
Court: Labour Court.
Shimon Peri, a former aircraft mechanic, claimed that prolonged exposure to hazardous substances during his employment caused his non-Hodgkin’s lymphoma. The court-appointed medical expert utilised AI tools to analyse the disease’s latency period, prompting criticism of the reliability of AI-generated data. Ultimately, the court approved the appointment of an additional expert to provide a fresh perspective, emphasising the evolving role of AI in legal and medical decision-making.