Talks about something similar to our idea: they introduce a notion of cost and a distance between attributes.

However, our idea deals with calculating a cost for each record, assigning a priority to the quasi-identifiers (QIDs), and handling temporal attacks as a result of that.


They calculate the information loss directly without assigning a cost to each record, so information loss is their only way to identify records that are similar.

We do preprocessing to identify the records close to each other, and this measure is absolute.

The information loss used here is a measure of data quality.
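As a rough illustration, below is a minimal sketch of one common information-loss metric for an equivalence class: the normalized value range of each numeric quasi-identifier, summed over attributes and scaled by class size. This is an assumption about the metric, not necessarily the paper's exact definition, and the function and attribute names are hypothetical.

```python
def information_loss(cluster, domain_ranges):
    """Hypothetical IL: for each numeric QID, the class's value range divided
    by the attribute's full domain range, summed and scaled by class size.
    cluster: list of record dicts; domain_ranges: {attr: (lo, hi)}."""
    total = 0.0
    for attr, (lo, hi) in domain_ranges.items():
        values = [rec[attr] for rec in cluster]
        total += (max(values) - min(values)) / (hi - lo)
    return len(cluster) * total

# Toy example with made-up attributes:
records = [{"age": 25, "zip": 53710}, {"age": 31, "zip": 53712}]
domains = {"age": (0, 100), "zip": (53700, 53800)}
print(information_loss(records, domains))  # 2 * (0.06 + 0.02) = 0.16
```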

Initially, pick a random record and keep adding records while the information loss stays below c and the cluster holds fewer than k records.


Then pick the record farthest from the current cluster as the next seed and repeat the same, until fewer than k records remain.

Then, for each remaining record, find the cluster where adding it increases the IL the least, and add it there.

The information loss for each equivalence class should be less than a user-defined constant c, which is not easy to calculate.
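Putting the three steps together, here is a minimal sketch of the greedy clustering loop described above. It reuses the hypothetical information_loss() from the earlier snippet, omits the IL < c threshold for simplicity, and the seeding and "farthest" choices are simplifications rather than the paper's exact method. The nested min/max scans make the cost roughly quadratic in the number of records, which is consistent with the time-consumption note below.

```python
import random

def greedy_k_member(records, k, domain_ranges, il=information_loss):
    """Sketch: grow k-sized clusters greedily by lowest IL, seed the next
    cluster with the 'farthest' record, then place leftovers where they
    raise IL the least."""
    assert len(records) >= k
    pool = list(records)
    clusters = []
    seed = random.choice(pool)  # step 1: start from a random record
    while len(pool) >= k:
        pool.remove(seed)
        cluster = [seed]
        while len(cluster) < k:
            # greedily add the record that keeps information loss lowest
            best = min(pool, key=lambda r: il(cluster + [r], domain_ranges))
            pool.remove(best)
            cluster.append(best)
        clusters.append(cluster)
        if pool:
            # step 2: next seed = record whose addition would raise IL the
            # most (used here as a proxy for "farthest from the cluster")
            seed = max(pool, key=lambda r: il(cluster + [r], domain_ranges))
    for r in pool:
        # step 3: fewer than k remain; add each one to the cluster where
        # the resulting IL increase is minimal
        best = min(clusters, key=lambda c: il(c + [r], domain_ranges))
        best.append(r)
    return clusters

clusters = greedy_k_member(records, 2, domains)  # reusing the toy data above
```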

Very time consuming
