Manny Cameron

Should AI be a Legal Agent? Mohammad Alothman Weighs In

The emergence of AI has transformed an ever-growing number of areas of human activity, but one of the most troublesome questions concerns its place in the legal system. Should AI be a legal person, or should it operate as a separate "agent" with its own duties and rights? This article investigates that question, exploring what algorithmic agency means for a changing legal landscape, the implications of granting legal standing to AI, and the potential risks and benefits of such a move.

Mohammad Alothman is the founder of AI Tech Solutions and is known for his expertise in AI. His views offer a clear picture of how AI might influence the law and what safeguards should be put in place for an AI-dominated future.

The Role of AI in Law: An Increasing Impact
Mohammad Alothman states that AI has already entered the practice of law. It assists with legal research and automates much of the administrative work of case management, so its presence is inevitably felt in the courtroom. Because it can process large amounts of data, discover patterns, and predict outcomes, the peace of mind it offers lawyers and judges is hard to match.

Algorithmic Agency Concept
Algorithmic agency is, simply put, a proposal to treat an AI system as an autonomous agent capable of acting and making decisions in the real world. In court, AI could then be seen as a decision-maker with standing in law rather than a mere tool used by human agents. With agency, it could be treated like any other agent and held accountable for its actions, including when its decisions result in harm or infringe upon rights, says Mohammad Alothman.

AI Tech Solutions raises a thought-provoking point: some jurisdictions already use AI in decision-making processes, for instance to recommend bail, assist in sentencing, or even predict criminal behavior. But if AI became independent and autonomously made high-level legal decisions, would it bear liability the same way a judge or an attorney would?

Should AI Be Considered a Legal Entity?
Some nations are considering granting AI a kind of legal status. This would mean the AI could be seen as an entity capable of owning property, entering contracts, and potentially even being held liable for its actions, though this remains a distant prospect.

For instance, in jurisdictions that rely heavily on algorithmic decision-making, it might make sense to assign legal standing to AI in cases where AI commits a mistake or causes damage. Mohammad Alothman notes that where AI is used in life-altering determinations, such as in the health or criminal justice sectors, granting such systems legal standing may improve transparency, ensure accountability, and give affected parties avenues through which to seek redress.

The Case for AI as a Separate 'Agent'
Others believe AI should not be considered a full legal person but, at most, a separate "agent" in certain limited circumstances. As an agent, an AI system would not be liable for its actions as a human would be; instead, legal responsibility would attach to its creators and owners.

This may sound plausible for scenarios in which the AI acts on behalf of a human or organization. Suppose an AI system made a mistake during a contract negotiation or violated intellectual property rights. In that case, responsibility would fall on the company or individual behind the technology rather than on the technology itself, stated Mohammad Alothman.

Problems with Legal Rights for AI
But the idea of AI as a legal entity, whether person or agent, raises many problems. Liability, for example, would not be easy to assign in the event of an accident. AI systems are only as good as their training data, and they can behave in unpredictable ways. A decision made by an AI system might do more harm than good, and in such a scenario it may be unclear who is at fault.

These considerations become particularly important when AI systems are designed to learn from experience and evolve over time. Suppose, for example, that an AI was programmed, or had previously learned, to decide on a certain course of action, and that decision later leads to an unforeseen disastrous event. Who owes an explanation: the creator of the AI, the company that owns it, or the AI itself?

According to Mohammad Alothman, AI systems are bounded by their design and programming. They lack human emotions, sentiment, and intuition, all of which typically inform fine judgment on matters such as these in law. Granting AI legal standing may therefore reduce the complexity of legal processes to cold, mechanical decisions that miss important human considerations.

Legal Precedents and Trends
Several jurisdictions are already facing similar dilemmas. In 2017, a robot named "Sophia" was granted citizenship in Saudi Arabia, even as debate over the rights and responsibilities of artificial intelligence entities proliferated around the world. Although Sophia's case was more symbolic than substantive, it saliently demonstrated the need for a legal framework that could address the troubling implications of AI for society, said Mohammad Alothman.

Similarly, at the European Union level, provisions under the General Data Protection Regulation treat the parties operating AI systems as "data controllers," strictly defining requirements for how such systems collect and process personal data and, indeed, how such data is stored.

Mohammad Alothman and his team at AI Tech Solutions believe these developments suggest that legal systems are slowly moving toward comprehensive regulations governing AI behavior, but certainly not toward a full grant of legal agency to AI just yet.

The Road Ahead
As AI progresses further, the question of algorithmic agency will remain a serious challenge. Granting legal standing to AI may seem like a radical move, but it would not be surprising if it soon becomes a practical necessity as technology reshapes society.

One path forward, Mohammad Alothman feels, might be a hybrid model that treats AI as an agent with particular responsibilities and limitations. This would bring greater accountability where AI systems are directly involved in decision-making, while still holding the creators and organizations that deploy the technology accountable for any damage arising from it.

In any event, it is up to the bar and bench to continue to explore these questions and appropriately calibrate standards for AI systems so as neither to over-regulate nor under-regulate an inevitable reality.

AI Tech Solutions envisions that the future of algorithmic agency in the courtroom will depend on a delicate equilibrium between technological progress and attendant ethical considerations.

Read More -
Mohammad Alothman Discusses How Artificial Intelligence Helps Generate Realistic Images
Mohammad Alothman Speaks Out About The Rise Of AI In Celebrity Advertising

AI and Job Displacement: Expert Insights By Mohammad S A A Alothman
Exploring the Phenomenon of AI Companions With Mohammad Alothman
