Lov&Data

4/2024: Anniversary Articles 1984–2024
17/12/2024

Artificial Intelligence (AI) and Law

By Stein Schjølberg, retired chief district court judge (sorenskriver).(1)


1. The History

Artificial Intelligence (AI) is a term first used at a conference in the United States in the 1950s to describe intelligent reasoning by computers. It was a conference for researchers, with discussions on the possibilities of developing computer programs that would include the special signs and symbols of intelligence. This research continued in the 1960s and 1970s, but became more structured in the mid-1980s:

Artificial Intelligence as applied in the legal field can be subdivided into two categories: expert systems and knowledge-based systems; and enhancements to legal information retrieval systems. Expert and knowledge-based systems are therefore computer applications that contain knowledge and expertise which they can apply in solving problems.
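The expert-systems idea described above can be sketched in a few lines of Python. The rules, facts and thresholds below are invented for illustration only; they are not drawn from any statute or from the systems discussed in this article:

```python
# Minimal sketch of a rule-based legal expert system, using a
# hypothetical traffic-fine domain. Rules and thresholds are
# illustrative, not taken from any real statute.

def evaluate(facts: dict) -> list[str]:
    """Apply simple if-then rules to case facts and return conclusions."""
    conclusions = []
    # Rule 1: exceeding the speed limit implies a violation.
    if facts.get("speed", 0) > facts.get("speed_limit", 0):
        conclusions.append("speeding violation")
        # Rule 2: a large excess aggravates the violation.
        if facts["speed"] - facts["speed_limit"] > 30:
            conclusions.append("aggravated: consider license suspension")
    return conclusions

case = {"speed": 95, "speed_limit": 60}
print(evaluate(case))  # ['speeding violation', 'aggravated: consider license suspension']
```

The knowledge (the rules) is separated from the inference step (applying them to case facts), which is the defining trait of the expert and knowledge-based systems described above.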

Artificial Intelligence (AI) and Law was an object of research in the 1990s. Professor Jon Bing at the University of Oslo, Norway, established research cooperation with other universities, especially with the Law School at the University of Strathclyde, Glasgow, Scotland.

I participated in this research and cooperation, together with Professor Jon Bing.

International Conference on Artificial Intelligence and Law (ICAIL) has been held every second year since the first conference in 1987 at the Northeastern University in Boston, USA.

The Third International Conference on Artificial Intelligence & Law was held at St. Catherine's College, Oxford University, on June 25–28, 1991. One of the speakers was Professor Jon Bing. The conference introduced Case-Based Reasoning as follows:

The field of Artificial Intelligence (AI) and Law seeks both to develop useful applications of computers to law and to investigate fundamental mechanisms of legal reasoning.

An International Conference on Computers and Law was held in Montreal, Canada, September 30–October 3, 1992. Professor Jon Bing was one of the speakers and gave a presentation on Perspectives for the Development of Computers and Law and Computer Law: The Next 10 Years.

The Fifth International Conference on Artificial Intelligence and Law was held at the University of Maryland in the USA on May 21–24, 1995. I was a delegate at this conference. The purpose of the conference was described as follows:

The field of AI and Law employs Artificial Intelligence techniques to study fundamental mechanisms of legal reasoning and to develop practical computer applications for the legal profession.

At the National Conference for Judges in Lillehammer, Norway, on June 22–25, 1995, I gave a presentation on Artificial Intelligence and Law – Expert Systems for Judges.

The Technology Renaissance Courts Conference was held in Singapore in September 1996. Richard Magnus, the leader of the Singapore Subordinate Courts, gave a presentation on The Impact of Global Court Technology – Hyperlinking Global Judicial and Legal Systems. I was also a delegate at the Conference.

Chief Justice Carsten Smith, Supreme Court of Norway, was not able to participate and sent a visual message to the Conference, including the following:(2)

We must never forget that the main element in the judicial process is the human element – combined with the touch of the heart – to balance conflicting interests.

The Sixth International Conference on Artificial Intelligence and Law (ICAIL-97) was held at the University of Melbourne, Australia, on June 30–July 3, 1997. Professor Jon Bing and I both participated in the conference as speakers.

The First International Workshop on Judicial Decision Support Systems (JDSS) was also held at the conference. My presentation at the Workshop was on Judicial Decision Support Systems from a Judge's Perspective, and included:

In this technological environment, Judicial Decision Support Systems have been developed to assist the judge in his decision making. Such systems must not make the decisions, but only be used as a new remedy in the decision process. The judge must always be able to choose the proposed solution or reject it independently.

The National Center for State Courts in the United States organized the Fifth National Court Technology Conference in Williamsburg, September 1997. I gave a presentation in Session No. 1003 on Artificial Intelligence/Expert Systems – Decision Support Systems for Judges on the Internet.

The Seventh International Conference on Artificial Intelligence and Law (ICAIL-99) was held at the University of Oslo on June 14–18, 1999. The conference included The Second International Workshop on Judicial Decision Support Systems (JDSS).

My presentation on Judicial Decision Support Systems from a Judge's Perspective included the following:

It provides the opportunity for Supreme Courts around the world to serve the global communities as a global database, and the integration of Judicial Decision Support Systems on the Internet is an important challenge. All the Supreme Court decisions around the world should have a short case summary, with the possibilities of a retrieval system based on structured classifications or keywords.
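The keyword-based retrieval over short case summaries envisaged above can be sketched as follows. The case identifiers, keyword classifications and summaries below are invented examples, not real decisions or any system described in this article:

```python
# Hypothetical sketch of keyword-based retrieval over short case
# summaries with structured keyword classifications.

decisions = [
    {"case": "CASE-2020-1", "keywords": {"privacy", "surveillance"},
     "summary": "Limits on workplace surveillance."},
    {"case": "CASE-2021-2", "keywords": {"contract", "liability"},
     "summary": "Liability under a breached supply contract."},
]

def retrieve(query_keywords: set[str]) -> list[str]:
    """Return identifiers of cases whose keyword classification overlaps the query."""
    return [d["case"] for d in decisions if d["keywords"] & query_keywords]

print(retrieve({"privacy"}))  # ['CASE-2020-1']
```

Each decision carries a short summary plus a structured keyword set, so retrieval reduces to a simple set intersection, which is the essence of the classification-based retrieval described above.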

The publication International Review of Law, Computers & Technology(3) included a foreword by me, as follows:

From a judge's perspective, we are used to problem solving in our discretionary judgements. We do not need support systems making the final decision, but systems with updated and comprehensive information on previous similar court decisions applied on the facts in the individual case brought before us. The judge should only be assisted in the decision making, leaving the judge to choose the proposed solution or reject it. Decision support systems should only be used as a remedy in the decision process, without affecting judicial independence.

The Third International Symposium on Judicial Decision Support Systems (JDSS) was held at Chicago-Kent College of Law, Chicago, on May 25–26, 2001. The invitation to the Symposium included the following:

As we begin a new century, electronic systems to support decision-making are increasingly envisaged by officials, scholars, lawyers and judges themselves as integral to the armory of the modern judge. Yet, how such judicial systems are designed, implemented and used remains relatively unexplored. Indeed, how do these questions connect with visions of justice?

The Symposium had almost 100 delegates, and included seven Sessions:

  • What can JDSS do to Improve Access to Justice.

  • Assistance in Argumentation and Document Drafting.

  • Knowledge, Science and the Construction of “Intelligence”.

  • Authority of JDSS: how do and ought judges to use “support”?

  • Democracy, Ownership and Public Participation.

  • Perspectives on Judicial Discretion & Evaluation of JDSS.

  • The Future of the Study of JDSS & Close.

I was in charge of the discussions in the concluding Session, The Future of the Study of Judicial Decision Support Systems. In my presentation I concluded as follows:

On behalf of the Program Committee, I thank you very much for all the interesting papers and discussions. You have brought our conference further than the Melbourne and Oslo conferences. We would like to continue developing Judicial Decision Support Systems and would very much appreciate your comments. As a judge I have an open mind as to which systems will be recognized as JDSS. Have a safe trip home.

2. Global organizations on Artificial Intelligence (AI)

2.1. A United Nations Resolution on Artificial Intelligence (AI)

The United Nations General Assembly adopted on 11 March 2024 a landmark Resolution on Artificial Intelligence (AI), entitled Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.(4)

The General Assembly emphasized:(5)

Same rights, online and offline

The Assembly called on all Member States and stakeholders “to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.”

“The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” it affirmed.

The Assembly also urged all States, the private sector, civil society, research organizations and the media, to develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy use of AI.

Closing the digital divide

The Assembly further recognized the “varying levels” of technological development between and within countries, and that developing nations face unique challenges in keeping up with the rapid pace of innovation.

It urged Member States and stakeholders to cooperate with and support developing countries so they can benefit from inclusive and equitable access, close the digital divide, and increase digital literacy.

2.2. A Convention on Artificial Intelligence (AI)

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted on May 17, 2024.

The Framework Convention is presented by the Council of Europe as follows:(6)

The Convention is the first-ever international legally binding treaty in this field. It aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation. The Convention aims to fill any legal gaps that may result from rapid technological advances. In order to stand the test of time, the Framework Convention does not regulate technology and is essentially technology neutral.

The Framework Convention was drafted by the 46 member states of the Council of Europe, with the participation of all observer states: Canada, Japan, Mexico, the Holy See and the United States of America, as well as the European Union, and a significant number of non-member states: Australia, Argentina, Costa Rica, Israel, Peru and Uruguay.

In line with the Council of Europe’s practice of multi-stakeholder engagement, 68 international representatives from civil society, academia and industry, as well as several other international organisations were also actively involved in the development of the Framework Convention.

The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and by private actors.

The Convention offers Parties two modalities to comply with its principles and obligations when regulating the private sector: Parties may opt to be directly obliged by the relevant Convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.

Parties to the Framework Convention are not required to apply the provisions of the treaty to activities related to the protection of their national security interests but must ensure that such activities respect international law and democratic institutions and processes. The Framework Convention does not apply to national defense matters nor to research and development activities, except when the testing of AI systems may have the potential to interfere with human rights, democracy, or the rule of law.

The Framework Convention was opened for signature in Vilnius on September 5, 2024, and includes:

Preamble:

The member States of the Council of Europe and the other signatories hereto,

Considering that the aim of the Council of Europe is to achieve greater unity between its members, based in particular on the respect for human rights, democracy and the rule of law;

Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity and individual autonomy, human rights, democracy and the rule of law;

Chapter I – General provisions and Chapter II – General obligations include:

Article 1 – Object and purpose

The provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.

Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention. These measures shall be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems. This may include specific or horizontal measures that apply irrespective of the type of technology used.

Article 2 – Definition of artificial intelligence systems

For the purposes of this Convention, “artificial intelligence system” means a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.

Article 4 – Protection of human rights

Each Party shall adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law.

Article 5 – Integrity of democratic processes and respect for the rule of law

  1. Each Party shall adopt or maintain measures that seek to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice.

  2. Each Party shall adopt or maintain measures that seek to protect its democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions.

The scope of the Convention is described in Article 3: it covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.

Chapter III includes the principles related to activities within the lifecycle of artificial intelligence systems. Chapter IV includes remedies and procedural safeguards. Chapter V describes the assessment and mitigation of risks and adverse impacts. Chapter VI covers the implementation of the Convention, and Chapter VII describes the follow-up mechanism and co-operation. Finally, Chapter VIII includes the final clauses, such as reservations in Article 34 and denunciation in Article 35.

As of October 2024, the Convention has been signed by 10 signatories, including the European Union (EU).(7) Norway is among the States that have signed.

3. Other International initiatives

3.1. European Union

On June 13, 2024, the European Union adopted:

REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL (Artificial Intelligence Act)

The Artificial Intelligence Act has 113 Articles. The AI Act aims to ensure that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes:

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The new rules:

  • address risks specifically created by AI applications

  • prohibit AI practices that pose unacceptable risks

  • determine a list of high-risk applications

  • set clear requirements for AI systems for high-risk applications

  • define specific obligations for deployers and providers of high-risk AI applications

  • require a conformity assessment before a given AI system is put into service or placed on the market

  • put enforcement in place after a given AI system is placed on the market

  • establish a governance structure at European and national level

3.2. The AI Bletchley Summit 2023

The Bletchley Declaration

The Bletchley Declaration was adopted by the countries attending the AI Safety Summit at Bletchley Park on November 1–2, 2023.(8) The discussions at the Summit brought together stakeholders from the European Union, 27 governments, leading AI companies, civil society and academia.

The UK published the Declaration, noting that it is not a UK government policy document:(9)

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realize this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realize their potential.

3.3. The AI Seoul Summit 2024

On May 21–22, 2024, the Republic of Korea and the United Kingdom co-hosted the second AI summit, following the United Kingdom’s launch of the series in 2023:(10)

The “AI Seoul Summit,” as it was billed, gathered leaders from government, industry, and civil society to discuss global collaboration on AI safety, innovation, and inclusivity. This “minisummit” took place both virtually and in Seoul for two days of back-to-back events. Overall, the event succeeded in its goal to continue momentum after the landmark 2023 UK AI Safety Summit. Technically there were two conferences, the AI Seoul Summit and AI Global Forum, but the events were held concurrently, in the same location, and with mostly the same participants.

Seoul Declaration

The Seoul Declaration(11) aims to enhance international cooperation on AI governance by engaging with various global initiatives:

  1. We, world leaders representing Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America, gathered at the AI Seoul Summit on 21st May 2024, affirm our common dedication to fostering international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies.

4. Artificial Intelligence (AI) and Cybercrime

4.1. INTERPOL

INTERPOL published in June 2023 the Toolkit for Responsible AI Innovation in Law Enforcement (AI Toolkit):(12)

In recent years we have seen Artificial Intelligence (AI) technologies become embedded in society and our daily lives.

AI technologies also have huge potential to support the work of law enforcement agencies. Successful examples of areas where AI systems are successfully used include automatic patrol systems, identification of vulnerable and exploited children, and police emergency call centres.

At the same time, current AI systems have limitations and risks that require awareness and careful consideration by the law enforcement community to either avoid or sufficiently mitigate the issues that can result from their use in police work.

With the recent developmental leaps in AI capabilities, particularly around Generative AI, the public debate around legal and ethical implications of AI systems, as well as the negative effects they could have on society and humanity, has exploded. It is important that these concerns are addressed in a timely fashion, particularly in a law enforcement context.

4.2. Europol

On March 27, 2023 (updated on June 11, 2024),(13) Europol published the following information on ChatGPT – the impact of Large Language Models on Law Enforcement.

Europol has focused on the three areas of criminal activity of most concern:

Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.

Disinformation: ChatGPT excels at producing authentic sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.

Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.

4.3. FBI on Artificial Intelligence (AI)

The FBI has the following approach to Artificial Intelligence (AI):(14)

Our approach to AI fits into three different buckets—identifying and tracking adversarial and criminal use of AI, protecting American innovation, and FBI AI governance and ethical use.

The FBI is focused on anticipating and defending against threats from those who use AI and ML to power malicious cyber activity, conduct fraud, propagate violent crimes, and threaten our national security; we’re working to stop actors who attack or degrade AI/ML systems being used for legitimate, lawful purposes.

The FBI defends the innovators who are building the next generation of technology here in the U.S. from those who would steal it. This effort is also related to defense against malicious cyber activities, since all-too-often our adversaries are stealing our trade secrets, including AI, to turn it against the U.S. and U.S. interests.

The FBI is also looking at how AI can help us further exercise our authorities to protect the American people—for instance, by triaging and prioritizing the complex and voluminous data we collect in our investigations, making sure we’re using those tools responsibly and ethically, under human control, and consistent with law and policy.

On May 8, 2024, the FBI warned of the increasing threat of cyber criminals utilizing Artificial Intelligence:(15)

The FBI San Francisco division is warning individuals and businesses to be aware of the escalating threat posed by cyber criminals utilizing artificial intelligence (AI) tools to conduct sophisticated phishing/social engineering attacks and voice/video cloning scams. The announcement, made today from the RSA cybersecurity conference at the Moscone Center in San Francisco, coincides with the division’s outreach efforts to include an FBI booth at the conference and participation in multiple conference panel sessions during the week of May 6, 2024.

AI provides augmented and enhanced capabilities to schemes that attackers already use and increases cyber-attack speed, scale, and automation. Cybercriminals are leveraging publicly available and custom-made AI tools to orchestrate highly targeted phishing campaigns, exploiting the trust of individuals and organizations alike. These AI-driven phishing attacks are characterized by their ability to craft convincing messages tailored to specific recipients and containing proper grammar and spelling, increasing the likelihood of successful deception and data theft.

5. Judicial Decisions Support Systems

Courts around the world are increasingly using computer technology in their decision making.

Legal information retrieval systems on the Internet now provide access to information relevant to the decisions.

Computers in courtrooms have given judges access to a wide range of information on the Internet, for direct use in decision making, and to Artificial Intelligence (AI).

In this development, Judicial Decision Support Systems have been developed to assist the judge in decision making. Such systems using Artificial Intelligence must not make the decisions, but only be used as a new remedy in the decision process. The judge must always be able to accept the solution proposed by Artificial Intelligence, or independently reject it.
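The principle stated above – the system proposes, but only the judge decides – can be sketched as follows. All function names and the proposal logic are purely illustrative, not any real system:

```python
# Sketch of the propose-not-decide principle for Judicial Decision
# Support Systems: the system suggests, the human judge decides.

def propose_outcome(similar_cases: list[str]) -> str:
    """Suggest an outcome based on (hypothetical) similar precedents."""
    return f"Proposed outcome based on {len(similar_cases)} similar cases"

def decide(proposal: str, judge_accepts: bool) -> str:
    """The final decision always requires the judge's explicit choice."""
    return proposal if judge_accepts else "Proposal rejected; judge decides independently"

proposal = propose_outcome(["Case A", "Case B"])
print(decide(proposal, judge_accepts=False))  # Proposal rejected; judge decides independently
```

The key design choice is that `decide` cannot run without the judge's explicit input: the system has no path to a final decision on its own, preserving judicial independence.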

The wise advice by Chief Justice Carsten Smith, the Supreme Court of Norway, at a conference in Singapore in 1996 was:

We must never forget that the main element in the judicial process is the human element – combined with the touch of the heart – to balance conflicting interests.

The institution Lovdata was established in 1981 by the Department of Justice and the Faculty of Law at the University of Oslo in Norway. Lovdata has published Lov & Data since 1984.

Lov & Data is the leading publication in the Nordic countries on developments in legal information, for all users of the judicial systems. Lov & Data will be one of the leading publications for future discussions of the legal regulation of Artificial Intelligence (AI).

Noter

  1. Based on a presentation at The International Conference on Cyberlaw, Cybercrime & Cybersecurity 2024, November 13–15, 2024, New Delhi, India.
  2. See Stein Schjolberg: Judicial Decision Support Systems from a Judge's Perspective, International Journal of Law and Information Technology, Volume 6, Number 2, Summer 1998, Oxford University Press, England (1998).
  3. See https://www.ingentaconnect.com/content/routledg/cirl/2000/00000014/00000003;jsessionid=24fcfmut1muq0.x-ic-live-01
  4. See https://documents.un.org/doc/undoc/ltd/n24/065/92/pdf/n2406592.pdf
  5. See https://news.un.org/en/story/2024/03/1147831
  6. See https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
  7. See https://www.coe.int/en/web/conventions/full-list?module=treaty-detail&treatynum=225
  8. See https://www.gov.uk/government/publications/ai-safety-summit-2023-roundtable-chairs-summaries-2-november
  9. See https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
  10. See https://www.csis.org/analysis/ai-seoul-summit
  11. See https://aiseoulsummit.kr/press/?uid=41&mod=document
  12. See https://www.interpol.int/How-we-work/Innovation/Artificial-Intelligence-Toolkit
  13. See https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement
  14. See https://www.fbi.gov/investigate/counterintelligence/emerging-and-advanced-technology/artificial-intelligence
  15. See https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
Stein Schjølberg