Leibniz' Project
A definition of intelligence

Presentation

This page formalizes the concept of the intelligence of an agent in a deterministic environment.

The formalization relies on this article.

Throughout this page, let \(T\) be the set of instants in the unfolding of time. If \(t, t' \in T\), \(t \le t'\) means that the instant \(t\) is before (or simultaneous with) \(t'\).
Let \(S\) be the set of states and \(A\) the set of actions the agent can do.

Context {TimePerception : Type}.
Context {SpacePerception : Type}.
Context {Action : Type}.
Context {Before : TotalOrder TimePerception}.

Policies

A policy is a line of conduct: what the agent does depending on the situation.

\(\mathbf{Definition}\)
The set \(P\) of policies is the set of functions from histories (before some instant) to actions.

\(\mathbf{Definition}\)
A history is said to follow a policy if, at each instant, the action performed is the one prescribed by the policy.
With \(t_0 \in T\), a history before \(t_0\) is said to follow a policy before \(t_0\) if, at each instant before \(t_0\), the action performed is the one prescribed by the policy.

Definition Policy : Type :=
  forall (t0 : TimePerception),
    @HistoryBefore TimePerception (SpacePerception * Action) Before t0 -> Action.

Definition follows_policy (h : @History TimePerception (SpacePerception * Action))
  (policy : Policy) : Prop :=
  forall (t : TimePerception),
    snd (h t) = policy t (
      @History_restriction_Before TimePerception (SpacePerception * Action) Before t h
    ).

Definition follows_policy_before {t0 : TimePerception}
  (h0 : @HistoryBefore TimePerception (SpacePerception * Action) Before t0)
  (policy : Policy) : Prop :=
  forall (t : {t : TimePerception | strict Before t t0}),
    snd (h0 t) = policy (proj1_sig t) (
      @HistoryBefore_restriction TimePerception (SpacePerception * Action) Before
        (proj1_sig t) t0
        (strict_implies_non_strict Before (proj1_sig t) t0 (proj2_sig t)) h0
    ).
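As a toy illustration (not part of the original development; `constant_policy` and the lemma name are additions), the simplest policy ignores the history and always performs the same action, and any history whose action component is constantly that action follows it:

```coq
(* Illustrative sketch, not in the original development: a policy that
   disregards the history and always performs the same action. *)
Definition constant_policy (a : Action) : Policy :=
  fun (t0 : TimePerception)
      (_ : @HistoryBefore TimePerception (SpacePerception * Action) Before t0) =>
    a.

(* Any history whose action component is constantly [a] follows it. *)
Lemma constant_policy_followed (a : Action)
  (h : @History TimePerception (SpacePerception * Action)) :
  (forall t, snd (h t) = a) -> follows_policy h (constant_policy a).
Proof.
  intros H t.
  exact (H t).
Qed.
```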

Given a goal, one establishes a policy in order to reach it.

\(\mathbf{Definition}\)
The set \(STR\) of strategizings is the set of functions from events (goals) to policies.

Definition Strategizing : Type :=
  @Event TimePerception (SpacePerception * Action) -> Policy.

Goal fulfillment, adequacy to reach goals, and intelligence

A policy is more adequate than another to reach a goal if its failure implies the failure of the other. An agent is more intelligent than another if, whatever the goal, it elaborates a more adequate policy.

These two definitions do not work in non-deterministic environments. Indeed, if there is some randomness, a goal may be reached thanks to beginner's luck: an action which is not the most likely to achieve the goal may succeed because something unexpected (but still possible) happened. And even in deterministic environments, good decisions taken for bad reasons are regarded as intelligent.

\(\mathbf{Definition}\)
A goal is said to be fulfilled in a history by a policy if it happens while the policy is followed.
A goal is said to be fulfilled in a history before \(t_0 \in T\) by a policy if it happens before \(t_0\) while the policy is followed before \(t_0\).

Definition fulfills_goal
  (h : @History TimePerception (SpacePerception * Action))
  (policy : Policy)
  (goal : @Event TimePerception (SpacePerception * Action)) : Prop :=
  follows_policy h policy /\
  happens_in goal h.

Definition fulfilled_goal {t0 : TimePerception}
  (h0 : @HistoryBefore TimePerception (SpacePerception * Action) Before t0)
  (policy : Policy)
  (goal : @Event TimePerception (SpacePerception * Action)) : Prop :=
  follows_policy_before h0 policy /\
  happened_before goal h0.

\(\mathbf{Definition}\)
A policy may achieve a goal under a scientific theory if there is a history satisfying the scientific theory in which the goal is fulfilled by the policy.

Definition may_achieve (policy : Policy)
  (goal : @Event TimePerception (SpacePerception * Action))
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) : Prop :=
  exists (h : @History TimePerception (SpacePerception * Action)),
    fulfills_goal h policy goal /\
    satisfies h st.

\(\mathbf{Definition}\)
A scientific theory \(st\) being given, a policy \(p_1 \in P\) is said more adequate than \(p_2 \in P\) to achieve a goal \(g\) if \[g \text{ is fulfilled in some history satisfying } st \text{ following } p_2 \Rightarrow g \text{ is fulfilled in some history satisfying } st \text{ following } p_1\]

Definition more_adequate_in_deterministic_environment
  (policy1 policy2 : Policy)
  (goal : @Event TimePerception (SpacePerception * Action))
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) : Prop :=
  may_achieve policy2 goal st ->
  may_achieve policy1 goal st.
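Since more-adequacy is, by definition, a bare implication between `may_achieve` statements, it is reflexive and transitive. These two sanity-check lemmas (their names are additions to the original development) are immediate:

```coq
(* Sanity checks, not in the original development: more-adequacy is an
   implication between [may_achieve] statements, hence a preorder. *)
Lemma more_adequate_refl (policy : Policy)
  (goal : @Event TimePerception (SpacePerception * Action))
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) :
  more_adequate_in_deterministic_environment policy policy goal st.
Proof.
  intro H; exact H.
Qed.

Lemma more_adequate_trans (policy1 policy2 policy3 : Policy)
  (goal : @Event TimePerception (SpacePerception * Action))
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) :
  more_adequate_in_deterministic_environment policy1 policy2 goal st ->
  more_adequate_in_deterministic_environment policy2 policy3 goal st ->
  more_adequate_in_deterministic_environment policy1 policy3 goal st.
Proof.
  intros H12 H23 H3.
  exact (H12 (H23 H3)).
Qed.
```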

\(\mathbf{Definition}\)
A scientific theory \(st\) being given, a strategizing \(str_1 \in STR\) is said more intelligent than \(str_2 \in STR\) if, whatever the goal, it provides a more adequate policy.

Definition more_intelligent_in_deterministic_environment
  (strategizing1 strategizing2 : Strategizing)
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) : Prop :=
  forall (goal : @Event TimePerception (SpacePerception * Action)),
    more_adequate_in_deterministic_environment
      (strategizing1 goal) (strategizing2 goal) goal st.

Definition strictly_more_intelligent_in_deterministic_environment
  (strategizing1 strategizing2 : Strategizing)
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) : Prop :=
  more_intelligent_in_deterministic_environment strategizing1 strategizing2 st /\
  exists (goal : @Event TimePerception (SpacePerception * Action)),
    may_achieve (strategizing1 goal) goal st /\
    ~ may_achieve (strategizing2 goal) goal st.
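A sanity check one may wish to carry out (the lemma below is an addition, not part of the original development): the strict comparison is asymmetric. If the first strategizing fulfills a witness goal that the second cannot, the second cannot in turn be at least as intelligent as the first.

```coq
(* Sanity check, not in the original development: strict comparison of
   strategizings is asymmetric. *)
Lemma strictly_more_intelligent_asymmetric
  (strategizing1 strategizing2 : Strategizing)
  (st : @ScientificTheory TimePerception (SpacePerception * Action)) :
  strictly_more_intelligent_in_deterministic_environment
    strategizing1 strategizing2 st ->
  ~ strictly_more_intelligent_in_deterministic_environment
    strategizing2 strategizing1 st.
Proof.
  (* The witness goal is achievable by [strategizing1] but not by
     [strategizing2]; yet the converse ordering would force [strategizing2]
     to achieve it as well, a contradiction. *)
  intros [_ [goal [Hach Hnot]]] [Hmore _].
  exact (Hnot (Hmore goal Hach)).
Qed.
```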