In the mathematical theory of decisions, decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set classification. First created in 1990 by Yiyu Yao,[1] the extension makes use of loss functions to derive $\alpha$ and $\beta$ region parameters. As in rough sets, the lower and upper approximations of a set are used.
Definitions
The following contains the basic principles of decision-theoretic rough sets.
Conditional risk
Using the Bayesian decision procedure, the decision-theoretic rough set (DTRS) approach allows for minimum-risk decision making based on observed evidence. Let $\{a_1,\ldots,a_m\}$ be a finite set of $m$ possible actions and let $\{w_1,\ldots,w_s\}$ be a finite set of $s$ states. $P(w_j\mid [x])$ is calculated as the conditional probability of an object $x$ being in state $w_j$ given the object description $[x]$. $\lambda(a_i\mid w_j)$ denotes the loss, or cost, for performing action $a_i$ when the state is $w_j$. The expected loss (conditional risk) associated with taking action $a_i$ is given by:

$$R(a_i\mid [x])=\sum_{j=1}^{s}\lambda(a_i\mid w_j)P(w_j\mid [x]).$$
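As a minimal sketch of this computation (the two states, the action's losses, and the probabilities below are illustrative assumptions, not values from the source), the conditional risk is just a weighted sum of losses over states:

```python
# Conditional risk R(a_i | [x]) = sum_j lambda(a_i | w_j) * P(w_j | [x]).
# The states, losses, and probabilities here are illustrative assumptions.

def conditional_risk(losses, probs):
    """losses[j] = lambda(a_i | w_j); probs[j] = P(w_j | [x])."""
    return sum(lam * p for lam, p in zip(losses, probs))

# Two states with P(w_1 | [x]) = 0.7 and P(w_2 | [x]) = 0.3; the action
# costs nothing in state w_1 and 4 units in state w_2:
risk = conditional_risk([0.0, 4.0], [0.7, 0.3])  # 0*0.7 + 4*0.3 = 1.2
```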
Object classification with the approximation operators can be fitted into the Bayesian decision framework. The set of actions is given by $\{a_P,a_N,a_B\}$, where $a_P$, $a_N$, and $a_B$ represent the three actions of classifying an object into POS($A$), NEG($A$), and BND($A$), respectively. To indicate whether an element is in $A$ or not in $A$, the set of states is given by $\{A,A^{c}\}$. Let $\lambda(a_\diamond\mid A)$ denote the loss incurred by taking action $a_\diamond$ when an object belongs to $A$, and let $\lambda(a_\diamond\mid A^{c})$ denote the loss incurred by taking the same action when the object belongs to $A^{c}$.
Loss functions
Let $\lambda_{PP}$ denote the loss function for classifying an object in $A$ into the POS region, $\lambda_{BP}$ denote the loss function for classifying an object in $A$ into the BND region, and let $\lambda_{NP}$ denote the loss function for classifying an object in $A$ into the NEG region. A loss function $\lambda_{\diamond N}$ denotes the loss of classifying an object that does not belong to $A$ into the region specified by $\diamond$.
Taking an individual action can be associated with the expected loss $R(a_\diamond\mid [x])$, which can be expressed as:
$$R(a_P\mid [x])=\lambda_{PP}P(A\mid [x])+\lambda_{PN}P(A^{c}\mid [x]),$$
$$R(a_N\mid [x])=\lambda_{NP}P(A\mid [x])+\lambda_{NN}P(A^{c}\mid [x]),$$
$$R(a_B\mid [x])=\lambda_{BP}P(A\mid [x])+\lambda_{BN}P(A^{c}\mid [x]),$$
where $\lambda_{\diamond P}=\lambda(a_\diamond\mid A)$, $\lambda_{\diamond N}=\lambda(a_\diamond\mid A^{c})$, and $\diamond=P$, $N$, or $B$.
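A short sketch of these three expected losses, picking the minimum-risk action (the six loss values and the probability $P(A\mid [x])=0.8$ below are illustrative assumptions):

```python
# Expected losses R(a_P | [x]), R(a_N | [x]), R(a_B | [x]) from the six
# lambda values; all numbers here are illustrative assumptions.

def region_risks(p_a, lam):
    """p_a = P(A | [x]); lam maps 'PP','PN','NP','NN','BP','BN' to losses."""
    p_ac = 1.0 - p_a  # P(A^c | [x])
    return {
        "P": lam["PP"] * p_a + lam["PN"] * p_ac,
        "N": lam["NP"] * p_a + lam["NN"] * p_ac,
        "B": lam["BP"] * p_a + lam["BN"] * p_ac,
    }

lam = {"PP": 0.0, "PN": 4.0, "NP": 4.0, "NN": 0.0, "BP": 1.0, "BN": 1.0}
risks = region_risks(0.8, lam)
best = min(risks, key=risks.get)  # action with the smallest expected loss
```

With these losses the positive action carries the least risk, so the object would be placed in the POS region.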
Minimum-risk decision rules
If we consider the loss functions with $\lambda_{PP}\leq \lambda_{BP}<\lambda_{NP}$ and $\lambda_{NN}\leq \lambda_{BN}<\lambda_{PN}$, the following decision rules are formulated (P, N, B):

- P: If $P(A\mid [x])\geq \gamma$ and $P(A\mid [x])\geq \alpha$, decide POS($A$);
- N: If $P(A\mid [x])\leq \beta$ and $P(A\mid [x])\leq \gamma$, decide NEG($A$);
- B: If $\beta \leq P(A\mid [x])\leq \alpha$, decide BND($A$);
where

$$\alpha ={\frac {\lambda_{PN}-\lambda_{BN}}{(\lambda_{PN}-\lambda_{BN})+(\lambda_{BP}-\lambda_{PP})}},$$
$$\gamma ={\frac {\lambda_{PN}-\lambda_{NN}}{(\lambda_{PN}-\lambda_{NN})+(\lambda_{NP}-\lambda_{PP})}},$$
$$\beta ={\frac {\lambda_{BN}-\lambda_{NN}}{(\lambda_{BN}-\lambda_{NN})+(\lambda_{NP}-\lambda_{BP})}}.$$
The $\alpha$, $\beta$, and $\gamma$ values define the three different regions, giving us an associated risk for classifying an object. When $\alpha >\beta$, we get $\alpha >\gamma >\beta$ and can simplify (P, N, B) into (P1, N1, B1):

- P1: If $P(A\mid [x])\geq \alpha$, decide POS($A$);
- N1: If $P(A\mid [x])\leq \beta$, decide NEG($A$);
- B1: If $\beta <P(A\mid [x])<\alpha$, decide BND($A$).
When $\alpha =\beta =\gamma$, we can simplify the rules (P, N, B) into (P2, N2, B2), which divide the regions based solely on $\alpha$:

- P2: If $P(A\mid [x])>\alpha$, decide POS($A$);
- N2: If $P(A\mid [x])<\alpha$, decide NEG($A$);
- B2: If $P(A\mid [x])=\alpha$, decide BND($A$).
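The threshold derivation and the simplified three-way rules can be sketched as follows; the six loss values are illustrative assumptions chosen so that $\alpha >\gamma >\beta$ holds:

```python
# Computing the alpha, beta, gamma thresholds from the six losses and applying
# the simplified three-way rules (P1, N1, B1). All numeric loss values are
# illustrative assumptions.

def thresholds(pp, bp, np_, nn, bn, pn):
    """Thresholds derived from the losses lambda_PP .. lambda_PN."""
    alpha = (pn - bn) / ((pn - bn) + (bp - pp))
    gamma = (pn - nn) / ((pn - nn) + (np_ - pp))
    beta = (bn - nn) / ((bn - nn) + (np_ - bp))
    return alpha, beta, gamma

def three_way(p_a, alpha, beta):
    """Compare P(A | [x]) against alpha and beta to pick a region."""
    if p_a >= alpha:
        return "POS"
    if p_a <= beta:
        return "NEG"
    return "BND"

alpha, beta, gamma = thresholds(pp=0.0, bp=1.0, np_=4.0, nn=0.0, bn=1.0, pn=4.0)
# Here alpha = 0.75, gamma = 0.5, beta = 0.25, so alpha > gamma > beta.
region = three_way(0.6, alpha, beta)  # 0.25 < 0.6 < 0.75, so "BND"
```

Raising the boundary losses $\lambda_{BP}$ and $\lambda_{BN}$ narrows the interval $(\beta ,\alpha )$, shrinking the boundary region, which matches the intuition that a costly deferral should be chosen less often.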
Data mining, feature selection, information retrieval, and classification are just some of the applications in which the DTRS approach has been successfully used.
References
- ^ Yao, Y.Y.; Wong, S.K.M.; Lingras, P. (1990). "A decision-theoretic rough set model". Methodologies for Intelligent Systems, 5, Proceedings of the 5th International Symposium on Methodologies for Intelligent Systems. Knoxville, Tennessee, USA: North-Holland: 17–25.