Journal of Applied Mathematics and Physics
Vol. 05, No. 04 (2017), Article ID: 76021, 12 pages
10.4236/jamp.2017.54079

Error Analysis and Variable Selection for Differential Private Learning Algorithm

Weilin Nie1, Cheng Wang2

Huizhou University, Huizhou, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 14, 2017; Accepted: April 27, 2017; Published: April 30, 2017

ABSTRACT

In this paper, we construct a modified least squares regression algorithm that provides privacy protection. A new concentration inequality is applied, and an expected error bound is derived via error decomposition. Furthermore, through the error analysis, we obtain a method for choosing an appropriate parameter ϵ to balance the error and the privacy.

Keywords:

Differential Privacy, Least Squares Regularization, Concentration Inequality, Error Decomposition

1. Introduction

Privacy protection attracts much attention in many branches of computer science. To address this, Dwork et al. proposed differential privacy in [1] . Soon afterwards, [2] built an exponential mechanism, which is a useful approach to constructing differentially private algorithms. The concept was introduced into learning theory in [3] , where the authors consider output perturbation and objective perturbation for ERM algorithms; analysis of privacy and generalization for those algorithms has also been conducted there. P. Jain and his collaborators have since done much work on differentially private learning, e.g., [4] [5] . Recently, in [6] , the authors found that the empirical average of the output of a differentially private algorithm converges to its expectation, and [7] provides another analysis of this convergence, which motivates our work.

In this paper, we consider the following statistical learning model (see [8] [9] for more details): the input space $X$ is a compact metric space and the output space is $Y \subseteq \mathbb{R}$, since we consider a regression problem. Throughout the paper, we assume the output is uniformly bounded, i.e., $|y| \leq M$ for some $M > 0$ almost surely. On the sample space $Z := X \times Y$, we try to find a function $f: X \to Y$ via some algorithm $A$ that reflects the relationship between the input and the output. Algorithm $A$ relies on the randomly chosen sample $\mathbf{z} = \{z_i\}_{i=1}^m = \{(x_i, y_i)\}_{i=1}^m$, drawn according to a distribution $\rho$ on $Z$. Furthermore, we denote by $\rho_X$ the marginal distribution on $X$ and by $\rho(y|x)$ the conditional distribution on $Y$ given $x \in X$.

Now we expect the algorithm to provide some privacy protection. We assume $A$ satisfies the $(\epsilon, \gamma)$-differential privacy condition [1] . Denote the Hamming distance between two sample sets $\mathbf{z}_1, \mathbf{z}_2$ by

$$d(\mathbf{z}_1, \mathbf{z}_2) = \#\{i = 1, \ldots, m : z_{1,i} \neq z_{2,i}\},$$

so that $d(\mathbf{z}_1, \mathbf{z}_2) = 1$ means that exactly one element is different. Then $(\epsilon, \gamma)$-differential privacy is defined as follows:

Definition 1 A randomized algorithm $A: Z^m \to \mathcal{H}$ is $(\epsilon, \gamma)$-differentially private if for every two data sets $\mathbf{z}_1, \mathbf{z}_2$ satisfying $d(\mathbf{z}_1, \mathbf{z}_2) = 1$ and every set $O \subseteq \mathcal{H}$ we have

$$\Pr\{A(\mathbf{z}_1) \in O\} \leq e^{\epsilon} \Pr\{A(\mathbf{z}_2) \in O\} + \gamma.$$

Here $\mathcal{H}$ is a function space from $X$ to $Y$, which is called the hypothesis space. In the sequel, we focus on $(\epsilon, 0)$-differential privacy with some $0 < \epsilon < 1$, which is usually called $\epsilon$-differential privacy for simplicity. How to choose an appropriate $\epsilon$ is a fundamental problem for differentially private algorithms [10] , and we will provide a method through our error estimation in the following sections.
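When $\gamma = 0$, the roles of $\mathbf{z}_1$ and $\mathbf{z}_2$ in Definition 1 can be exchanged, so $\epsilon$-differential privacy amounts to a two-sided likelihood-ratio bound (a restatement for later use, not an extra assumption):

$$e^{-\epsilon} \leq \frac{\Pr\{A(\mathbf{z}_1) \in O\}}{\Pr\{A(\mathbf{z}_2) \in O\}} \leq e^{\epsilon}, \qquad \text{whenever } d(\mathbf{z}_1, \mathbf{z}_2) = 1 \text{ and } \Pr\{A(\mathbf{z}_2) \in O\} > 0.$$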

2. Concentration Inequality

In this section, we study the difference between the empirical average and the expectation of the output of an algorithm $A$ providing $\epsilon$-differential privacy. Our first result can be stated as follows:

Theorem 1 If an algorithm $A$ provides $\epsilon$-differential privacy and outputs a positive function $g_{\mathbf{z},A}: X \times Y \to \mathbb{R}_+$ with bounded expectation $E_{\mathbf{z},A}\, g_{\mathbf{z},A} \leq G$ for some $G > 0$, where the expectation is taken over both the sample and the randomness of the algorithm, then

$$E_{\mathbf{z},A}\left(\frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i) - \int_Z g_{\mathbf{z},A}(z)\, d\rho\right) \leq 2G\epsilon,$$

and

$$E_{\mathbf{z},A}\left(\int_Z g_{\mathbf{z},A}(z)\, d\rho - \frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i)\right) \leq 2G\epsilon.$$

For $j \in \{1, 2, \ldots, m\}$, denote the sample set $\mathbf{w}^j = \{z_1, z_2, \ldots, z_{j-1}, z_j', z_{j+1}, \ldots, z_m\}$, where $z_j'$ is an independent copy of $z_j$, so that $d(\mathbf{z}, \mathbf{w}^j) = 1$. We observe that

$$
\begin{aligned}
E_{\mathbf{z},A}\left(\frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i)\right)
&= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_A\big(g_{\mathbf{z},A}(z_i)\big)
= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_{z_i'} \int_0^{+\infty} \Pr_A\big\{g_{\mathbf{z},A}(z_i) \geq t\big\}\, dt \\
&\leq \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_{z_i'} \int_0^{+\infty} e^{\epsilon}\, \Pr_A\big\{g_{\mathbf{w}^i,A}(z_i) \geq t\big\}\, dt
= e^{\epsilon}\, \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i} E_{z_i} E_A\big(g_{\mathbf{w}^i,A}(z_i)\big) \\
&= e^{\epsilon}\, \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i,A} E_{z_i}\big(g_{\mathbf{w}^i,A}(z_i)\big)
= e^{\epsilon}\, \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i,A} \int_Z g_{\mathbf{w}^i,A}(z)\, d\rho \\
&= e^{\epsilon}\, \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z},A} \int_Z g_{\mathbf{z},A}(z)\, d\rho
= e^{\epsilon}\, E_{\mathbf{z},A} \int_Z g_{\mathbf{z},A}(z)\, d\rho.
\end{aligned}
$$

Then

$$E_{\mathbf{z},A}\left(\frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i) - \int_Z g_{\mathbf{z},A}(z)\, d\rho\right) \leq (e^{\epsilon} - 1)\, E_{\mathbf{z},A}\left(\int_Z g_{\mathbf{z},A}(z)\, d\rho\right) \leq 2G\epsilon.$$
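The last step uses the elementary inequality, valid for $0 < \epsilon < 1$ by convexity of $t \mapsto e^t$ (and used repeatedly below):

$$e^{\epsilon} - 1 \leq (e - 1)\,\epsilon \leq 2\epsilon.$$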

On the other hand,

$$
\begin{aligned}
E_{\mathbf{z},A} \int_Z g_{\mathbf{z},A}(z)\, d\rho
&= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_A \int_Z g_{\mathbf{z},A}(z)\, d\rho
= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i} E_A \int_Z g_{\mathbf{w}^i,A}(z)\, d\rho \\
&= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i} E_A \int_Z g_{\mathbf{w}^i,A}(z_i)\, d\rho(z_i)
= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{w}^i} E_{z_i} E_A\big(g_{\mathbf{w}^i,A}(z_i)\big) \\
&= \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_{z_i'} \int_0^{+\infty} \Pr_A\big\{g_{\mathbf{w}^i,A}(z_i) \geq t\big\}\, dt
\leq \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_{z_i'}\, e^{\epsilon} \int_0^{+\infty} \Pr_A\big\{g_{\mathbf{z},A}(z_i) \geq t\big\}\, dt \\
&= e^{\epsilon}\, \frac{1}{m}\sum_{i=1}^m E_{\mathbf{z}} E_A\big(g_{\mathbf{z},A}(z_i)\big)
= e^{\epsilon}\, E_{\mathbf{z},A}\, \frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i).
\end{aligned}
$$

This leads to

$$E_{\mathbf{z},A}\left(\int_Z g_{\mathbf{z},A}(z)\, d\rho - \frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i)\right) \leq (e^{\epsilon} - 1)\, E_{\mathbf{z},A}\, \frac{1}{m}\sum_{i=1}^m g_{\mathbf{z},A}(z_i) \leq 2G\epsilon.$$

This completes the proof of Theorem 1.

Remark 1 Similar results are proposed in [6] and [7] . However, there the authors restrict the function to take values in $[0, 1]$ or $\{0, 1\}$; our result extends theirs to functions taking values in $\mathbb{R}_+$. This makes the following error analysis feasible.

3. Differential Private Learning Algorithm

In this section we consider the differentially private least squares regularization algorithm. For a Mercer kernel $K$ defined on $X \times X$, the function space $\mathcal{H}_K := \overline{\mathrm{span}\{K(x, \cdot) : x \in X\}}$ is the induced reproducing kernel Hilbert space (RKHS). Denote $K_x(t) = K(x, t)$ for any $x, t \in X$, and $\kappa = \sup_{x, t \in X} \sqrt{K(x, t)}$. It is well known that $f(x) = \langle f, K_x \rangle_K$, which is the reproducing property. In the sequel, we always assume $|y| \leq M$ for some constant $M > 0$. The least squares regularization algorithm, which has been extensively studied in, e.g., [8] [11] [12] , is:

$$f_{\mathbf{z},\lambda} = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m}\sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \|f\|_K^2. \qquad (1)$$

Denote by $\pi$ the projection operator, as we did in [13] [14] :

$$\pi(f(x)) = \begin{cases} M, & f(x) > M, \\ f(x), & -M \leq f(x) \leq M, \\ -M, & f(x) < -M. \end{cases}$$

Then we add a noise term $b$ to the original algorithm (1), in the spirit of the output perturbation algorithm in [3] :

$$f_{\mathbf{z},A}(x) = \pi(f_{\mathbf{z},\lambda}(x)) + b, \qquad (2)$$

where the density of $b$ is independent of $\mathbf{z}$ and will be specified in the following analysis. Moreover, we use the following notation for simplicity:

$$\mathcal{E}(f) = \int_Z (f(x) - y)^2\, d\rho, \qquad \mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m}\sum_{i=1}^m (f(x_i) - y_i)^2.$$
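To make the construction concrete, here is a minimal Python sketch of algorithms (1) and (2), assuming a Gaussian kernel and NumPy; the solution of (1) is computed in its standard representer form, and `noise_scale` is a placeholder for the scale of $b$, which Proposition 1 below fixes as $2R\kappa^2(\kappa+1)/(\lambda m \epsilon)$. All names here are illustrative and not from the paper.

```python
import numpy as np

def gaussian_kernel(x, t, sigma=1.0):
    """A Mercer kernel K(x, t) on a compact input space (illustrative choice)."""
    return np.exp(-np.sum((x - t) ** 2) / (2 * sigma ** 2))

def fit_regularized_ls(X, y, lam, kernel=gaussian_kernel):
    """Algorithm (1): least squares regularization in the RKHS H_K.
    The minimizer has the form f(x) = sum_j c_j K(x_j, x) with
    c = (G + lam * m * I)^{-1} y, where G is the kernel Gram matrix."""
    m = len(y)
    G = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    c = np.linalg.solve(G + lam * m * np.eye(m), y)
    return lambda x: sum(ci * kernel(xi, x) for ci, xi in zip(c, X))

def private_predictor(X, y, lam, M, noise_scale, rng=None):
    """Algorithm (2): project f_{z,lambda} onto [-M, M] and add a single
    noise term b with density proportional to exp(-|b| / noise_scale)."""
    rng = np.random.default_rng() if rng is None else rng
    f = fit_regularized_ls(X, y, lam)
    b = rng.laplace(loc=0.0, scale=noise_scale)   # drawn once, independently of z
    return lambda x: np.clip(f(x), -M, M) + b     # pi(f_{z,lambda}(x)) + b
```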

Definition 2 We denote by $\Delta f_{\mathbf{z}}$ the maximum possible change, in the uniform norm, caused by replacing one sample point of $\mathbf{z}$, i.e., over all $\mathbf{z}, \mathbf{z}'$ with $d(\mathbf{z}, \mathbf{z}') = 1$,

$$\Delta f_{\mathbf{z}} = \sup_{\mathbf{z}, \mathbf{z}'} \|f_{\mathbf{z}} - f_{\mathbf{z}'}\|_\infty.$$

Then we have the following result:

Lemma 1 Assume $\Delta \pi(f_{\mathbf{z},\lambda})$ is bounded, and that $b$ has density function proportional to $\exp\left\{-\frac{\epsilon |b|}{\Delta \pi(f_{\mathbf{z},\lambda})}\right\}$; then algorithm (2) provides $\epsilon$-differential privacy.

The proof follows Theorem 4 in [15] . For any possible output function $r$ and any $\mathbf{z}, \mathbf{z}'$ differing in one element,

$$\Pr\{f_{\mathbf{z},A} = r\} = \Pr_b\{b = r - \pi(f_{\mathbf{z},\lambda})\} \propto \exp\left(-\frac{\epsilon \|r - \pi(f_{\mathbf{z},\lambda})\|_\infty}{\Delta \pi(f_{\mathbf{z},\lambda})}\right),$$

and

$$\Pr\{f_{\mathbf{z}',A} = r\} = \Pr_b\{b = r - \pi(f_{\mathbf{z}',\lambda})\} \propto \exp\left(-\frac{\epsilon \|r - \pi(f_{\mathbf{z}',\lambda})\|_\infty}{\Delta \pi(f_{\mathbf{z},\lambda})}\right).$$

So

$$\Pr\{f_{\mathbf{z},A} = r\} \leq \Pr\{f_{\mathbf{z}',A} = r\} \times e^{\frac{\epsilon \|\pi(f_{\mathbf{z},\lambda}) - \pi(f_{\mathbf{z}',\lambda})\|_\infty}{\Delta \pi(f_{\mathbf{z},\lambda})}} \leq e^{\epsilon} \Pr\{f_{\mathbf{z}',A} = r\}.$$

Then the lemma is proved by a union bound.

Now we will bound the term $\Delta f_{\mathbf{z},\lambda}$.

Lemma 2 For the function $f_{\mathbf{z},\lambda}$ obtained from algorithm (1), assume $\|f_{\mathbf{z},\lambda}\|_K \leq R$ for any $\mathbf{z} \in Z^m$ for some $R \geq M$, and $0 < \lambda \leq 1$. Then we have

$$\Delta f_{\mathbf{z},\lambda} \leq \frac{2R\kappa^2(\kappa+1)}{\lambda m}.$$

Let $f_{\mathbf{z},\lambda}$ and $f_{\mathbf{z}',\lambda}$ be the two outputs of algorithm (1) for any sample sets $\mathbf{z}, \mathbf{z}'$ satisfying $d(\mathbf{z}, \mathbf{z}') = 1$. Without loss of generality, we set $\mathbf{z}' = (z_1, z_2, \ldots, z_{m-1}, z_m')$. Since each function is the minimizer in algorithm (1), taking the functional derivative with respect to $f$ and setting it to zero, we have

$$\frac{2}{m}\sum_{i=1}^m \big(f_{\mathbf{z},\lambda}(x_i) - y_i\big) K_{x_i} + 2\lambda f_{\mathbf{z},\lambda} = 0$$

and

$$\frac{2}{m}\sum_{i=1}^{m-1} \big(f_{\mathbf{z}',\lambda}(x_i) - y_i\big) K_{x_i} + \frac{2}{m}\big(f_{\mathbf{z}',\lambda}(x_m') - y_m'\big) K_{x_m'} + 2\lambda f_{\mathbf{z}',\lambda} = 0.$$

These lead to

$$\frac{1}{m}\sum_{i=1}^m \big(f_{\mathbf{z},\lambda}(x_i) - f_{\mathbf{z}',\lambda}(x_i)\big) K_{x_i} + \lambda \big(f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big) = \frac{1}{m}\Big[\big(f_{\mathbf{z}',\lambda}(x_m') - y_m'\big) K_{x_m'} - \big(f_{\mathbf{z}',\lambda}(x_m) - y_m\big) K_{x_m}\Big].$$

Taking the inner product with $f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}$ on both sides and using the reproducing property, we have

$$\frac{1}{m}\sum_{i=1}^m \big(f_{\mathbf{z},\lambda}(x_i) - f_{\mathbf{z}',\lambda}(x_i)\big)^2 + \lambda \big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_K^2 = \frac{1}{m}\Big[\big(f_{\mathbf{z}',\lambda}(x_m') - y_m'\big)\big(f_{\mathbf{z},\lambda}(x_m') - f_{\mathbf{z}',\lambda}(x_m')\big) - \big(f_{\mathbf{z}',\lambda}(x_m) - y_m\big)\big(f_{\mathbf{z},\lambda}(x_m) - f_{\mathbf{z}',\lambda}(x_m)\big)\Big].$$

This means

$$\lambda \big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_K^2 \leq \frac{1}{m}\Big[\big|f_{\mathbf{z}',\lambda}(x_m') - y_m'\big| + \big|f_{\mathbf{z}',\lambda}(x_m) - y_m\big|\Big] \big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_\infty \leq \frac{1}{m}\big(2\|f_{\mathbf{z}',\lambda}\|_\infty + 2M\big)\, \kappa\, \big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_K.$$

The last inequality is from the fact that

$$\|f\|_\infty = \sup_{x \in X} |f(x)| = \sup_{x \in X} \big|\langle f, K_x \rangle_K\big| \leq \sup_{x \in X} \|K_x\|_K\, \|f\|_K \leq \kappa \|f\|_K.$$

Since $\|f_{\mathbf{z},\lambda}\|_K \leq R$ for every sample set, $\|f_{\mathbf{z}',\lambda}\|_K \leq R$ as well. Therefore,

$$\big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_K \leq \frac{1}{\lambda m}(2R\kappa + 2M)\kappa \leq \frac{2R\kappa(\kappa+1)}{\lambda m}$$

for any $0 < \lambda \leq 1$, where we used $M \leq R$. So

$$\big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_\infty \leq \frac{2R\kappa^2(\kappa+1)}{\lambda m}$$

for any such $\mathbf{z}, \mathbf{z}'$, and our lemma holds.
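As an optional numerical sanity check of Lemma 2, here is a hedged Python sketch reusing `fit_regularized_ls` from the snippet in Section 3; the data set, the kernel and the a priori bound $R = M/\sqrt{\lambda}$ used below are illustrative, the latter being justified at the end of Section 4.4.

```python
import numpy as np

rng = np.random.default_rng(0)
m, lam, M = 50, 0.1, 1.0
X = rng.uniform(-1, 1, size=(m, 1))
y = np.clip(np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m), -M, M)

# Neighboring sample set: replace only the last point (x_m, y_m).
X2, y2 = X.copy(), y.copy()
X2[-1], y2[-1] = rng.uniform(-1, 1, size=1), rng.uniform(-M, M)

f1, f2 = fit_regularized_ls(X, y, lam), fit_regularized_ls(X2, y2, lam)
grid = np.linspace(-1, 1, 200).reshape(-1, 1)
empirical_change = max(abs(f1(t) - f2(t)) for t in grid)

kappa = 1.0                   # for the Gaussian kernel, sup sqrt(K(x, t)) = 1
R = M / np.sqrt(lam)          # a priori bound on ||f_{z,lambda}||_K (Section 4.4)
theoretical_bound = 2 * R * kappa ** 2 * (kappa + 1) / (lam * m)
print(f"sup-norm change {empirical_change:.4f} <= bound {theoretical_bound:.4f}")
```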

It can easily be verified, by discussing the cases in the definition of $\pi$, that

$$\big\|\pi(f_{\mathbf{z},\lambda}) - \pi(f_{\mathbf{z}',\lambda})\big\|_\infty \leq \big\|f_{\mathbf{z},\lambda} - f_{\mathbf{z}',\lambda}\big\|_\infty$$

for any $\mathbf{z}, \mathbf{z}'$, so we obtain the choice of the noise $b$ and the privacy result for algorithm (2).

Proposition 1 Assume $\|f_{\mathbf{z},\lambda}\|_K \leq R$ for any $\mathbf{z} \in Z^m$ for some $R \geq M$, and that $b$ takes values in $(-\infty, +\infty)$. If we choose the density of $b$ to be

$$\frac{1}{\alpha} \exp\left(-\frac{\lambda m \epsilon |b|}{2R\kappa^2(\kappa+1)}\right), \qquad \text{where } \alpha = \frac{4R\kappa^2(\kappa+1)}{\lambda m \epsilon},$$

then algorithm (2) provides $\epsilon$-differential privacy.

The proof follows by combining the two lemmas and the inequality above, and a simple calculation gives the expression of $\alpha$.
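For completeness, $\alpha$ is simply the normalizing constant of a Laplace-type density with scale $s = \frac{2R\kappa^2(\kappa+1)}{\lambda m \epsilon}$:

$$\alpha = \int_{-\infty}^{+\infty} \exp\left(-\frac{\lambda m \epsilon |b|}{2R\kappa^2(\kappa+1)}\right) db = 2\int_0^{+\infty} e^{-b/s}\, db = 2s = \frac{4R\kappa^2(\kappa+1)}{\lambda m \epsilon}.$$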

4. Error Analysis for Differential Private Learning Algorithm

In this section, we study the expectation of the excess error $\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho)$, where $f_\rho(x) = \int_Y y\, d\rho(y|x)$ is the regression function, which minimizes $\mathcal{E}(f)$. Firstly we introduce the error decomposition:

$$
\begin{aligned}
\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho) &\leq \mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2 \\
&= \Big[\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A})\Big] + \Big[\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big)\Big] + \Big[\mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2 - \mathcal{E}(f_\rho)\Big] \\
&\leq \Big[\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A})\Big] + \Big[\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big)\Big] + \Big[\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},\lambda}) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2 - \mathcal{E}(f_\rho)\Big] \\
&\leq \Big[\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A})\Big] + \Big[\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big)\Big] + \Big[\mathcal{E}_{\mathbf{z}}(f_\lambda) + \lambda \|f_\lambda\|_K^2 - \mathcal{E}(f_\rho)\Big] \\
&= R_1 + R_2 + S + D(\lambda), \qquad (3)
\end{aligned}
$$

where $f_\lambda$ is a function in $\mathcal{H}_K$ to be determined and

$$R_1 = \mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}),$$

$$R_2 = \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big),$$

$$S = \mathcal{E}_{\mathbf{z}}(f_\lambda) - \mathcal{E}(f_\lambda),$$

$$D(\lambda) = \mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho) + \lambda \|f_\lambda\|_K^2.$$

Here $R_1$ and $R_2$ involve the function $f_{\mathbf{z},A}$ from the randomized algorithm (2), so we call them random errors. $S$ and $D(\lambda)$ are similar to the classical quantities in the learning theory literature, and we still call them the sample error and the approximation error. In the following, we study these errors respectively.

4.1. Error Bounds for Random Errors

Proposition 2 For the function $f_{\mathbf{z},A}$ obtained from algorithm (2), with the density of $b$ as described in Proposition 1, we have

$$E_{\mathbf{z},A} R_1 \leq 8\epsilon\left(\frac{2R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + M^2\right).$$

Note that

$$R_1 = \int_Z \big(f_{\mathbf{z},A}(x) - y\big)^2\, d\rho - \frac{1}{m}\sum_{i=1}^m \big(f_{\mathbf{z},A}(x_i) - y_i\big)^2,$$

an analysis analogous to the proof of Theorem 1, applied to $g_{\mathbf{z},A}(z) = (f_{\mathbf{z},A}(x) - y)^2$, tells us that

$$
\begin{aligned}
E_{\mathbf{z},A}\left(\int_Z \big(f_{\mathbf{z},A}(x) - y\big)^2 d\rho - \frac{1}{m}\sum_{i=1}^m \big(f_{\mathbf{z},A}(x_i) - y_i\big)^2\right)
&\leq (e^{\epsilon} - 1)\, E_{\mathbf{z}} E_A\, \frac{1}{m}\sum_{i=1}^m \big(\pi(f_{\mathbf{z},\lambda}(x_i)) + b - y_i\big)^2 \\
&\leq 2\epsilon\, E_{\mathbf{z}} E_b\, \frac{1}{m}\sum_{i=1}^m \Big(b^2 + 2b\big(\pi(f_{\mathbf{z},\lambda}(x_i)) - y_i\big) + \big(\pi(f_{\mathbf{z},\lambda}(x_i)) - y_i\big)^2\Big) \\
&\leq 2\epsilon\left(\frac{8R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + 4M^2\right),
\end{aligned}
$$

which verifies the proposition.

For the term $R_2$, we have a similar analysis.

Proposition 3 For the function $f_{\mathbf{z},A}$ obtained from algorithm (2), with the density of $b$ as described in Proposition 1, we have

$$E_{\mathbf{z},A} R_2 \leq \frac{8R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2}.$$

Since

$$
\begin{aligned}
R_2 &= \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},A}) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big)
= \frac{1}{m}\sum_{i=1}^m \Big[\big(f_{\mathbf{z},A}(x_i) - y_i\big)^2 - \big(\pi(f_{\mathbf{z},\lambda}(x_i)) - y_i\big)^2\Big] \\
&= \frac{1}{m}\sum_{i=1}^m b\Big(b + 2\pi(f_{\mathbf{z},\lambda}(x_i)) - 2y_i\Big)
= b^2 + 2b\, \frac{1}{m}\sum_{i=1}^m \big(\pi(f_{\mathbf{z},\lambda}(x_i)) - y_i\big),
\end{aligned}
$$

we have

$$E_{\mathbf{z},A} R_2 = E_{\mathbf{z}} E_b\, b^2 \leq \frac{8R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2}.$$
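Here the cross term vanishes because the density of $b$ is symmetric, so $E_b\, b = 0$, and the bound on $E_b\, b^2$ is the second moment of the density in Proposition 1 (also used in the proof of Proposition 2): with scale $s = \frac{2R\kappa^2(\kappa+1)}{\lambda m \epsilon}$,

$$E_b\, b^2 = \frac{1}{2s}\int_{-\infty}^{+\infty} b^2 e^{-|b|/s}\, db = \frac{1}{s}\int_0^{+\infty} b^2 e^{-b/s}\, db = 2s^2 = \frac{8R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2}.$$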

And the proposition is proved.

4.2. Error Estimates for Sample Error and Approximation Error

Error estimates for the sample error and the approximation error have been studied extensively since [8] . Here we provide the proofs for completeness. It is known from [12] [13] [14] , among others, that $f_\lambda$ in the error decomposition (3) can be chosen arbitrarily in $\mathcal{H}_K$; here we simply choose the classical one,

$$f_\lambda = \arg\min_{f \in \mathcal{H}_K} \mathcal{E}(f) + \lambda \|f\|_K^2.$$

From [16] [17] , the expression for $f_\lambda$ is

$$f_\lambda = (L_K + \lambda I)^{-1} L_K f_\rho,$$

where $L_K$ is the integral operator defined on $L^2_{\rho_X}$ by

$$L_K f(t) = \int_X f(x) K(x, t)\, d\rho_X.$$

It is shown in [8] that $L_K$ has an eigenvalue sequence $\{\mu_i\}_{i \geq 1}$ satisfying $\mu_i > 0$ and $\mu_i \to 0$ as $i \to \infty$, and that $\|L_K\| \leq \kappa^2$. Now we recall the Hoeffding inequality [18] .

Lemma 3 Let $\xi$ be a random variable on a probability space $Z$ satisfying $|\xi(z) - E\xi| \leq B$ for some $B > 0$ for almost all $z \in Z$. Then

$$\Pr\left\{\left|\frac{1}{m}\sum_{i=1}^m \xi(z_i) - E\xi\right| \geq \varepsilon\right\} \leq 2\exp\left\{-\frac{m\varepsilon^2}{2B^2}\right\}.$$

Then we have the following analysis.

Proposition 4 For $f_\lambda$ and $f_\rho$ defined as above, assume $f_\rho \in L_K^r(L^2_{\rho_X})$ for some $r > 0$. Then we have

$$E_{\mathbf{z},A}\big(S + D(\lambda)\big) \leq \frac{8\sqrt{2\pi}\, M^2}{\sqrt{m}} + \lambda^{\min\{2r,1\}}\big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.$$

Firstly we bound the sample error.

$$S = \mathcal{E}_{\mathbf{z}}(f_\lambda) - \mathcal{E}(f_\lambda) = \frac{1}{m}\sum_{i=1}^m \big(f_\lambda(x_i) - y_i\big)^2 - \int_Z \big(f_\lambda(x) - y\big)^2\, d\rho.$$

Let $\xi(z) = (f_\lambda(x) - y)^2$. Since $|f_\rho(x)| = \left|\int_Y y\, d\rho(y|x)\right| \leq M$ and

$$\|f_\lambda\|_\infty = \big\|(L_K + \lambda I)^{-1} L_K f_\rho\big\|_\infty \leq \big\|(L_K + \lambda I)^{-1} L_K\big\|\, \|f_\rho\|_\infty \leq M,$$

we have $|\xi - E\xi| \leq 8M^2$. So from the Hoeffding inequality there holds

$$\Pr_{\mathbf{z}}\left\{\left|\int_Z \big(f_\lambda(x) - y\big)^2\, d\rho - \frac{1}{m}\sum_{i=1}^m \big(f_\lambda(x_i) - y_i\big)^2\right| \geq \varepsilon\right\} \leq 2\exp\left\{-\frac{m\varepsilon^2}{128 M^4}\right\}.$$

Then we have

$$E_{\mathbf{z},A}\, S \leq E_{\mathbf{z}} |S| = \int_0^{+\infty} \Pr_{\mathbf{z}}\{|S| \geq t\}\, dt \leq \int_0^{+\infty} 2\exp\left\{-\frac{m t^2}{128 M^4}\right\} dt = \frac{8\sqrt{2\pi}\, M^2}{\sqrt{m}}.$$
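The last step is the standard Gaussian integral $\int_0^{+\infty} e^{-a t^2}\, dt = \frac{1}{2}\sqrt{\pi/a}$ with $a = \frac{m}{128 M^4}$:

$$\int_0^{+\infty} 2\exp\left\{-\frac{m t^2}{128 M^4}\right\} dt = \sqrt{\frac{128\pi M^4}{m}} = \frac{8\sqrt{2\pi}\, M^2}{\sqrt{m}}.$$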

For the approximation error, note that $\mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho) = \|f_\lambda - f_\rho\|_\rho^2$ [9] , which is independent of $\mathbf{z}$ and $b$. We have

$$
\begin{aligned}
E_{\mathbf{z},A}\big(\mathcal{E}(f_\lambda) - \mathcal{E}(f_\rho)\big) &= \|f_\lambda - f_\rho\|_\rho^2
= \Big\|(L_K + \lambda I)^{-1}\big(L_K - (L_K + \lambda I)\big) f_\rho\Big\|_\rho^2
= \lambda^2 \Big\|(L_K + \lambda I)^{-1} L_K^r\, L_K^{-r} f_\rho\Big\|_\rho^2 \\
&\leq \lambda^2 \big\|(L_K + \lambda I)^{-1} L_K^r\big\|^2\, \big\|L_K^{-r} f_\rho\big\|_\rho^2
\leq \begin{cases} \lambda^{2r}\, \big\|L_K^{-r} f_\rho\big\|_\rho^2, & r \leq 1, \\ \lambda^2 \kappa^{4(r-1)}\, \big\|L_K^{-r} f_\rho\big\|_\rho^2, & r > 1, \end{cases} \\
&\leq \lambda^{\min\{2r,2\}}\big(\kappa^{4(r-1)} + 1\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.
\end{aligned}
$$

On the other hand, in [8] the authors pointed out that $\|f\|_K = \|L_K^{-1/2} f\|_\rho$ for any $f \in \mathcal{H}_K$. So

$$
\begin{aligned}
E_{\mathbf{z},A}\, \lambda \|f_\lambda\|_K^2 &= \lambda \big\|(L_K + \lambda I)^{-1} L_K f_\rho\big\|_K^2
= \lambda \big\|(L_K + \lambda I)^{-1} L_K^{1/2} f_\rho\big\|_\rho^2
\leq \lambda \big\|(L_K + \lambda I)^{-1} L_K^{1/2 + r}\big\|^2\, \big\|L_K^{-r} f_\rho\big\|_\rho^2 \\
&\leq \begin{cases} \lambda^{2r}\, \big\|L_K^{-r} f_\rho\big\|_\rho^2, & r \leq \tfrac{1}{2}, \\ \lambda\, \kappa^{4r-2}\, \big\|L_K^{-r} f_\rho\big\|_\rho^2, & r > \tfrac{1}{2}, \end{cases}
\leq \lambda^{\min\{2r,1\}}\big(\kappa^{4r-2} + 1\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.
\end{aligned}
$$

Combining the three bounds above, we can verify the proposition.

4.3. Convergence Result with Fixed ϵ

In our analysis of $E_{\mathbf{z},A} R_1$ above, we indeed have the following refined bound:

$$E_{\mathbf{z},A} R_1 \leq \frac{16 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon} + 2\epsilon\, E_{\mathbf{z}}\, \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big).$$

Therefore, the error decomposition can be

$$
\begin{aligned}
E_{\mathbf{z},A}\Big(\mathcal{E}(f_{\mathbf{z},A}) - (1 + 2\epsilon)\,\mathcal{E}(f_\rho)\Big)
&\leq E_{\mathbf{z},A}\Big(R_1 + R_2 + S + D(\lambda) - 2\epsilon\, \mathcal{E}(f_\rho)\Big) \\
&\leq \frac{16 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon} + \frac{8 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + 2\epsilon\, E_{\mathbf{z}}\Big(\mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z},\lambda})\big) - \mathcal{E}(f_\rho)\Big) + E_{\mathbf{z}}\big(S + D(\lambda)\big) \\
&\leq \frac{24 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + 2\epsilon\, E_{\mathbf{z}}\Big(\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},\lambda}) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2 - \mathcal{E}(f_\rho)\Big) + E_{\mathbf{z}}\big(S + D(\lambda)\big) \\
&\leq \frac{24 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + 2\epsilon\, E_{\mathbf{z}}\Big(\mathcal{E}_{\mathbf{z}}(f_\lambda) + \lambda \|f_\lambda\|_K^2 - \mathcal{E}(f_\rho)\Big) + E_{\mathbf{z}}\big(S + D(\lambda)\big) \\
&\leq \frac{24 R^2 \kappa^4 (\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + (1 + 2\epsilon)\, E_{\mathbf{z}}\big(S + D(\lambda)\big) \\
&\leq \frac{24 M^2 \kappa^4 (\kappa+1)^2}{\lambda^3 m^2 \epsilon^2} + \frac{8\sqrt{2\pi}\, M^2 (1 + 2\epsilon)}{\sqrt{m}} + \lambda^{\min\{1,\, 2r\}}\big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.
\end{aligned}
$$

In the last step we used $R \leq M/\sqrt{\lambda}$ (verified at the end of Section 4.4) together with Proposition 4.
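With $\epsilon$ fixed, the two slowest-decaying terms above are the random error term of order $\lambda^{-3} m^{-2}$ and the approximation term of order $\lambda^{\min\{2r,1\}}$; equating them explains the choice of $\lambda$ below:

$$\frac{1}{\lambda^{3} m^{2}} = \lambda^{\min\{2r,1\}} \iff \lambda^{3 + \min\{2r,1\}} = m^{-2} \iff \lambda = \left(\frac{1}{m}\right)^{2/\min\{4,\, 3+2r\}},$$

and with this choice both terms are of order $m^{-\min\{1/2,\; 4r/(3+2r)\}}$, while the sample error term of order $m^{-1/2}$ decays at least as fast.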

Then, by choosing $\lambda = \left(\frac{1}{m}\right)^{2/\min\{4,\, 3+2r\}}$ for balance, we have the following result.

Theorem 2 Let $f_{\mathbf{z},A}$ be derived from algorithm (2), with $f_{\mathbf{z},\lambda}$, $f_\lambda$ defined in the above subsections, and assume $f_\rho \in L_K^r(L^2_{\rho_X})$. Taking $\lambda = \left(\frac{1}{m}\right)^{2/\min\{4,\, 3+2r\}}$, there holds

$$E_{\mathbf{z},A}\Big(\mathcal{E}(f_{\mathbf{z},A}) - (1 + 2\epsilon)\,\mathcal{E}(f_\rho)\Big) \leq C_\epsilon \left(\frac{1}{m}\right)^{\min\left\{\frac{1}{2},\, \frac{4r}{3+2r}\right\}},$$

where the constant

$$C_\epsilon = \frac{24 M^2 \kappa^4 (\kappa+1)^2}{\epsilon^2} + 8\sqrt{2\pi}\, M^2 (1 + 2\epsilon) + \big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.$$
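For instance, once $r \geq 1/2$ (in particular when $f_\rho \in \mathcal{H}_K$, which essentially corresponds to $r = 1/2$), we have $4r/(3+2r) \geq 1/2$, so Theorem 2 gives the rate

$$E_{\mathbf{z},A}\Big(\mathcal{E}(f_{\mathbf{z},A}) - (1 + 2\epsilon)\,\mathcal{E}(f_\rho)\Big) \leq C_\epsilon\, m^{-1/2},$$

while for $0 < r < 1/2$ the exponent is $4r/(3+2r)$.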

4.4. Selection of ϵ and Total Error Bound

From the analysis of the random errors, the sample error and the approximation error above, we can obtain the total error bound as follows.

Theorem 3 Let $f_{\mathbf{z},A}$ be derived from algorithm (2), with $f_{\mathbf{z},\lambda}$, $f_\lambda$ defined in the above subsections, and assume $f_\rho \in L_K^r(L^2_{\rho_X})$. Take

$$\lambda = \left(\frac{1}{m\epsilon}\right)^{2/\min\{4,\, 3+2r\}}$$

and

$$\epsilon = \left(\frac{1}{m}\right)^{\min\{1/3,\; 4r/(3+6r)\}};$$

then we have

$$E_{\mathbf{z},A}\big(\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho)\big) \leq \tilde{C}\left(\frac{1}{m}\right)^{\min\left\{\frac{1}{3},\, \frac{4r}{3+6r}\right\}},$$

where the constant

$$\tilde{C} = 8\big(1 + \sqrt{2\pi}\big)M^2 + 24 M^2 \kappa^4 (\kappa+1)^2 + \big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.$$

It can be seen from the error decomposition (3) that

$$
\begin{aligned}
E_{\mathbf{z},A}\big(\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho)\big)
&\leq E_{\mathbf{z},A}\big(\mathcal{E}(f_{\mathbf{z},A}) - \mathcal{E}(f_\rho) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2\big)
\leq E_{\mathbf{z},A}\big(R_1 + R_2 + S + D(\lambda)\big) \\
&\leq 8\epsilon\left(\frac{2R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + M^2\right) + \frac{8R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + \frac{8\sqrt{2\pi}\, M^2}{\sqrt{m}} + \lambda^{\min\{2r,1\}}\big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2 \\
&\leq 8M^2\epsilon + \frac{24 R^2\kappa^4(\kappa+1)^2}{\lambda^2 m^2 \epsilon^2} + \frac{8\sqrt{2\pi}\, M^2}{\sqrt{m}} + \lambda^{\min\{2r,1\}}\big(\kappa^{4r-2} + \kappa^{4r-4} + 2\big)\big\|L_K^{-r} f_\rho\big\|_\rho^2.
\end{aligned}
$$

Since $\lambda \|f_{\mathbf{z},\lambda}\|_K^2 \leq \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},\lambda}) + \lambda \|f_{\mathbf{z},\lambda}\|_K^2 \leq \mathcal{E}_{\mathbf{z}}(0) \leq M^2$, we have $\|f_{\mathbf{z},\lambda}\|_K \leq \frac{M}{\sqrt{\lambda}}$, i.e., we can choose $R = \frac{M}{\sqrt{\lambda}}$. Now take $\lambda = \left(\frac{1}{m\epsilon}\right)^{2/\min\{4,\, 3+2r\}}$ and $\epsilon = \left(\frac{1}{m}\right)^{\min\{1/3,\; 4r/(3+6r)\}}$ for balance, and the result is proved.
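To see how these choices balance the terms (a brief expansion of the balancing argument): with $R = M/\sqrt{\lambda}$, the privacy-related term is of order $\lambda^{-3} m^{-2} \epsilon^{-2}$, so taking $\lambda = (m\epsilon)^{-2/\min\{4,\, 3+2r\}}$ equates it with the approximation term $\lambda^{\min\{2r,1\}}$; both then become of order $(m\epsilon)^{-\theta}$ with $\theta = \min\{1/2,\; 4r/(3+2r)\}$. Balancing this with the remaining term of order $\epsilon$ gives

$$\epsilon = (m\epsilon)^{-\theta} \iff \epsilon^{1+\theta} = m^{-\theta} \iff \epsilon = m^{-\theta/(1+\theta)} = \left(\frac{1}{m}\right)^{\min\left\{\frac{1}{3},\, \frac{4r}{3+6r}\right\}},$$

since $\theta/(1+\theta) = 1/3$ when $\theta = 1/2$, and $\theta/(1+\theta) = 4r/(3+6r)$ when $\theta = 4r/(3+2r)$.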

5. Conclusions

Theorem 2, where $\epsilon$ is taken as a constant, reveals that the generalization error $\mathcal{E}(f_{\mathbf{z},A})$ converges in expectation not to that of the regression function, $\mathcal{E}(f_\rho)$, but to the slightly different quantity $(1 + 2\epsilon)\,\mathcal{E}(f_\rho)$.

It can be seen from the definition of differential privacy that an algorithm provides more privacy as $\epsilon$ tends to 0. However, Theorem 3 shows that $\epsilon$ cannot be too small, since the expected error would then become large. Hence our choice can be regarded as a balance between privacy protection and the expected error. In [19] , the authors show that $\epsilon$ also needs to tend to 0 at a suitable rate to preserve generalization, which matches our result.

Compared with previous learning theory results such as [12] [20] [21] [22] , our learning rate is not as good, since a perturbation term is introduced. However, in Theorem 1 we did not need a capacity condition as in classical error analysis, i.e., conditions on covering numbers, VC or $V_\gamma$ dimensions; instead, the $\epsilon$-differential privacy condition is adopted. So it may be feasible and interesting to apply such a condition to other learning algorithms.

Acknowledgements

This work is supported by NSFC (Nos. 11326096, 11401247), NSF of Guangdong Province in China (No. 2015A030313674), National Social Science Fund in China (No. 15BTJ024), Planning Fund Project of Humanities and Social Science Research in Chinese Ministry of Education (No. 14YJAZH040), Foundation for Distinguished Young Talents in Higher Education of Guangdong, China (No. 2016KQNCX162) and the Major Incubation Research Project of Huizhou University (No. hzux1201619).

Cite this paper

Nie, W.L. and Wang, C. (2017) Error Analysis and Variable Selection for Differential Private Learning Algorithm. Journal of Applied Mathematics and Physics, 5, 900-911. https://doi.org/10.4236/jamp.2017.54079

References

1. Dwork, C., McSherry, F., Nissim, K. and Smith, A. (2006) Calibrating Noise to Sensitivity in Private Data Analysis. In: Halevi, S. and Rabin, T., Eds., Theory of Cryptography, Springer, Berlin, 265-284.

2. McSherry, F. and Talwar, K. (2007) Mechanism Design via Differential Privacy. Proceedings of the 48th Annual Symposium on Foundations of Computer Science, Providence, 21-23 October 2007, 94-103. https://doi.org/10.1109/focs.2007.66

3. Chaudhuri, K., Monteleoni, C. and Sarwate, A.D. (2011) Differentially Private Empirical Risk Minimization. Journal of Machine Learning Research, 12, 1069-1109.

4. Jain, P. and Thakurta, A.G. (2013) Differentially Private Learning with Kernels. JMLR: Workshop and Conference Proceedings, 28, 118-126.

5. Jain, P. and Thakurta, A.G. (2014) Dimension Independent Risk Bounds for Differentially Private Learning. Proceedings of the 31st International Conference on Machine Learning, Beijing, 21-26 June 2014, 476-484.

6. Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O. and Roth, A. (2015) Preserving Statistical Validity in Adaptive Data Analysis. ACM Symposium on the Theory of Computing, Portland, 14-17 June 2015, 117-126. https://doi.org/10.1145/2746539.2746580

7. Bassily, R., Nissim, K., Smith, A., Steinke, T., Stemmer, U. and Ullman, J. (2015) Algorithmic Stability for Adaptive Data Analysis.

8. Cucker, F. and Smale, S. (2002) On the Mathematical Foundations of Learning. Bulletin of the AMS, 39, 1-49. https://doi.org/10.1090/S0273-0979-01-00923-5

9. Cucker, F. and Zhou, D.X. (2007) Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511618796

10. Dwork, C. (2008) Differential Privacy: A Survey of Results. International Conference on Theory and Applications of Models of Computation, Xi’an, 25-29 April 2008, 1-19.

11. Steinwart, I., Hush, D. and Scovel, C. (2009) Optimal Rates for Regularized Least Squares Regression. In: Dasgupta, S. and Klivans, A., Eds., Proceedings of the 22nd Annual Conference on Learning Theory, Montreal, 18-21 June 2009, 79-93.

12. Wu, Q., Ying, Y. and Zhou, D.X. (2006) Learning Rates of Least-Square Regularized Regression. Foundations of Computational Mathematics, 6, 171-192. https://doi.org/10.1007/s10208-004-0155-9

13. Nie, W.L. and Wang, C. (2015) Constructive Analysis for Coefficient Regularization Regression Algorithms. Journal of Mathematical Analysis and Applications, 431, 1153-1171.

14. Wang, C. and Nie, W.L. (2014) Constructive Analysis for Least Squares Regression with Generalized K-Norm Regularization. Abstract and Applied Analysis, 2014, Article ID: 458459. https://doi.org/10.1155/2014/458459

15. Dwork, C. (2006) Differential Privacy. Springer, Berlin, 1-12.

16. Smale, S. and Zhou, D.X. (2003) Estimating the Approximation Error in Learning Theory. Analysis and Applications, 1, 17-41. https://doi.org/10.1142/S0219530503000089

17. Smale, S. and Zhou, D.X. (2007) Learning Theory Estimates via Integral Operators and Their Applications. Constructive Approximation, 26, 153-172. https://doi.org/10.1007/s00365-006-0659-y

18. Hoeffding, W. (1963) Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58, 13-30. https://doi.org/10.1080/01621459.1963.10500830

19. Wang, Y.-X., Lei, J. and Fienberg, S.E. (2015) Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle.

20. Wang, C. and Zhou, D.X. (2011) Optimal Learning Rates for Least Squares Regularized Regression with Unbounded Sampling. Journal of Complexity, 27, 55-67.

21. Hu, T., Fan, J., Wu, Q. and Zhou, D.X. (2015) Regularization Schemes for Minimum Error Entropy Principle. Analysis and Applications, 13, 437-455. https://doi.org/10.1142/S0219530514500110

22. Christmann, A. and Zhou, D.X. (2016) Learning Rates for the Risk of Kernel-Based Quantile Regression Estimators in Additive Models. Analysis and Applications, 14, 449-477. https://doi.org/10.1142/S0219530515500050