
User's Manual - Jieping Ye, Ph.D.



MULTI-TASK LEARNING VIA STRUCTURAL REGULARIZATION
Jiayu Zhou, Jianhui Chen, Jieping Ye
User's Manual, Version 1.0

MALSAR: Multi-tAsk Learning via StructurAl Regularization (Version 1.0)
Jiayu Zhou, Jianhui Chen, Jieping Ye
Computer Science & Engineering
Center for Evolutionary Medicine and Informatics, The Biodesign Institute
Arizona State University, Tempe, AZ 85287
{jiayu.zhou, jianhui.chen, jieping.ye}@asu.edu
Website: http://www.public.asu.edu/~jye02/Software/MALSAR
April 23, 2012

Contents

1 Introduction  5
  1.1 Multi-Task Learning  5
  1.2 Optimization Algorithm  6
2 Package and Installation  7
3 Interface Specification  8
  3.1 Input and Output  8
  3.2 Optimization Options  9
4 Multi-Task Learning Formulations  10
  4.1 l1-norm Regularized Problems  10
    4.1.1 Least_Lasso  10
    4.1.2 Logistic_Lasso  11
  4.2 l2,1-norm Regularized Problems  11
    4.2.1 Least_L21  11
    4.2.2 Logistic_L21  12
[Figure 6: Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks. The task models W are decomposed into a sparse component P and a low-rank component Q.]

The assumption that all models share a common low-dimensional subspace is restrictive in some applications. To this end, an extension that learns incoherent sparse and low-rank patterns simultaneously was proposed in Chen et al. (2010). The key idea is to decompose the task models W into two components: a sparse part P and a low-rank part Q, as shown in Figure 6. It solves the following optimization problem:

    \min_W \mathcal{L}(W) + \rho_1 \|P\|_1 \quad \text{subject to } W = P + Q, \ \|Q\|_* \le \tau

4.5.1 Trace-Norm Regularization with Least Squares Loss (Least_Trace)

The function [W, funcVal] = Least_Trace(X, Y, rho_1, opts) solves the trace-norm regularized multi-task least squares problem

    \min_W \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|W\|_*        (16)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, and the regularization parameter rho_1 controls the rank of W. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag

4.5.2 Trace-Norm Regularization with Logistic Loss (Logistic_Trace)

The function [W, c, funcVal] = Logistic_Trace(X, Y, rho_1, opts) solves the trace-norm regularized multi-task logistic regression problem

    \min_{W,c} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \|W\|_*        (17)

where
    \min_{W,M} \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W^T (\eta I + M)^{-1} W\right)        (27)
    \text{subject to } \mathrm{tr}(M) = k, \ M \preceq I, \ M \in \mathbb{S}_+^t, \ \eta = \rho_2/\rho_1        (28)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, and rho_1 is the regularization parameter. Due to the equality constraint tr(M) = k, the starting point of M is initialized to be M0 = (k/t) * I, satisfying tr(M0) = k. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag

4.7.2 cASO with Logistic Loss (Logistic_CASO)

The function [W, c, funcVal, M] = Logistic_CASO(X, Y, rho_1, rho_2, k, opts) solves the convex relaxed alternating structure optimization (cASO) multi-task logistic regression problem

    \min_{W,c,M} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W^T (\eta I + M)^{-1} W\right)        (29)
    \text{subject to } \mathrm{tr}(M) = k, \ M \preceq I, \ M \in \mathbb{S}_+^t, \ \eta = \rho_2/\rho_1        (30)

where X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, and rho_1 is the regularization parameter. Due to the equality constraint tr(M) = k, the starting point of M is initialized to be M0 = (k/t) * I, satisfying tr(M0) = k. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag

4.8 Dealing with Outlier Tasks: Robust Multi-Task Learning

Most multi-task learning formulations assume that all tasks are relevant,
References

Abernethy, J., Bach, F., Evgeniou, T., & Vert, J. (2006). Low-rank matrix factorization with attributes. Arxiv preprint cs/0611124.
Abernethy, J., Bach, F., Evgeniou, T., & Vert, J. (2009). A new approach to collaborative filtering: Operator estimation with spectral regularization. The Journal of Machine Learning Research, 10, 803-826.
Agarwal, A., Daume III, H., & Gerber, S. (2010). Learning multiple tasks using manifold regularization.
Ando, R., & Zhang, T. (2005). A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6, 1817-1853.
Argyriou, A., Evgeniou, T., & Pontil, M. (2007). Multi-task feature learning. Advances in Neural Information Processing Systems, 19, 41.
Argyriou, A., Evgeniou, T., & Pontil, M. (2008a). Convex multi-task feature learning. Machine Learning, 73, 243-272.
Argyriou, A., Micchelli, C., Pontil, M., & Ying, Y. (2008b). A spectral regularization framework for multi-task structure learning. Advances in Neural Information Processing Systems, 20, 25-32.
Bakker, B., & Heskes, T. (2003). Task clustering and gating for Bayesian multitask learning. The Journal of Machine Learning Research, 4, 83-99.
Bickel, S., Bogojeska, J., Lengauer, T., & Scheffer, T. (2008). Multi-task learning for HIV therapy screening. Proceedings of the 25th International Conference on Machine Learning (pp. 56-63).
log_lam = log(lambda);
for i = 1: length(lambda)
    [W, funcVal] = Least_Lasso(X, Y, lambda(i), opts);
    % set the solution as the next initial point; this gives better efficiency
    opts.init = 1;
    opts.W0 = W;
    sparsity(i) = nnz(W);
end

The algorithm records the number of non-zero entries in the resulting prediction model W. We show the change of the sparsity variable against the logarithm of the regularization parameters in Figure 11. Clearly, when the regularization parameter increases, the sparsity of the resulting model increases (or, equivalently, the number of non-zero elements decreases). The code that generates this figure is from the example file example_Lasso.m.

[Figure 11: Sparsity of the model learnt from l1-norm regularized MTL. y-axis: sparsity of the model (number of non-zero elements in W); x-axis: log rho_1. As the parameter increases, the number of non-zero elements in W decreases and the model W becomes more sparse.]

5.3 Joint Feature Selection (l2,1-norm regularization)

In this example we explore the l2,1-norm regularized multi-task learning using the School data from the data folder.

load('data/school.mat');  % load sample data
% Define a set of regularization parameters and use pathwise computation.
lambda = [200 30
rho_1, rho_2, clus_num, opts);

% recover clustered order
kmCMTL_OrderedModel = zeros(size(W));
OrderedTrueModel = zeros(size(W));
for i = 1: clus_num
    clusModel = W_learn(:, i: clus_num: task_num);
    kmCMTL_OrderedModel(:, (i-1)*clus_task_num+1: i*clus_task_num) = clusModel;
    clusModel = W(:, i: clus_num: task_num);
    OrderedTrueModel(:, (i-1)*clus_task_num+1: i*clus_task_num) = clusModel;
end

We visualize the models in Figure 17. We see that the clustered structure is captured in the learnt model. The code that generates this result is from the example file example_CMTL.m.

[Figure 17: Illustration of the cluster structure learnt from CMTL. Left panel: model correlation of the ground truth; right panel: model correlation of clustered MTL.]

6 Citation and Acknowledgement

Citation. In citing MALSAR in your papers, please use the following reference (Zhou, 2012):

J. Zhou, J. Chen and J. Ye, MALSAR: Multi-tAsk Learning via StructurAl Regularization, Arizona State University, 2012. http://www.public.asu.edu/~jye02/Software/MALSAR

Please use the following in BibTeX:

@MANUAL{zhou2012manual,
  title        = {MALSAR: Multi-tAsk Learning via StructurAl Regularization},
  author       = {J. Zhou and J. Chen and J. Ye},
  organization = {Arizona State University},
  year         = {2012},
  url          = {http://www.public.asu.edu/~jye02/Software/MALSAR}
}

Acknowledgement. The MALSAR software project has been supported by research grants from the National Science Foundation (NSF).
  5.4 Trace-norm Regularization  25
  5.5 Graph Regularization  26
  5.6 Robust Multi-Task Learning  26
  5.7 Robust Multi-Task Feature Learning  27
  5.8 Dirty Multi-Task Learning  29
  5.9 Clustered Multi-Task Learning  30
Bibliography  33
Index  36

List of Figures

1 Illustration of single task learning and multi-task learning  5
2 The input and output variables  8
3 Learning with Lasso  10
4 Learning with the l2,1-norm (Group Lasso)  12
5 Dirty Model for Multi-Task Learning  13
6 Learning Incoherent Sparse and Low-Rank Patterns  15
7 Illustration of clustered tasks  17
8 Illustration of multi-task learning using a shared feature representation  19
9 Illustration of robust multi-task learning  21
10 Illustration of robust multi-task feature learning  22
11 Example: Sparsity of Model Learnt from l1-norm regularized MTL  24
12 Example: Shared Features Learnt from l2,1-norm regularized MTL  25
13 Example: Trace norm and rank of model learnt from trace norm regularization  26
14 Example: Outlier Detected by RMTL  27
15 Example: Outlier Detected by rMTFL  28
Chen, J., Liu, J., & Ye, J. (2010). Learning incoherent sparse and low rank patterns from multiple tasks. Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1179-1188).
Chen, J., Tang, L., Liu, J., & Ye, J. (2009). A convex formulation for learning shared structures from multiple tasks. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 137-144).
Chen, J., Zhou, J., & Ye, J. (2011). Integrating low-rank and group-sparse structures for robust multi-task learning. Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Evgeniou, T., Micchelli, C., & Pontil, M. (2006). Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6, 615.
Evgeniou, T., & Pontil, M. (2004). Regularized multi-task learning. Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 109-117).
Fazel, M. (2002). Matrix rank minimization with applications. Doctoral dissertation, Stanford University.
Friedman, J., Hastie, T., & Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9, 432-441.
Gong, P., Ye, J., & Zhang, C. (2012). Robust multi-task feature learning. Submitted.
Gu, Q., & Zhou, J. (2009). Learning the shared subspace for multi-task clustering and transductive transfer classification. Data Mining, 2009, ICDM '09, Ninth IEEE International Conference on (pp. 159-168).
  4.3 Dirty Model  12
    4.3.1 Least_Dirty  13
  4.4 Graph Regularized Problems  13
    4.4.1 Least_SRMTL  14
    4.4.2 Logistic_SRMTL  14
  4.5 Trace-norm Regularized Problems  15
    4.5.1 Least_Trace  16
    4.5.2 Logistic_Trace  16
    4.5.3 Least_SparseTrace  16
  4.6 Clustered Multi-Task Learning  17
    4.6.1 Least_CMTL  18
    4.6.2 Logistic_CMTL  18
  4.7 Alternating Structure Optimization  19
    4.7.1 Least_CASO  19
    4.7.2 Logistic_CASO  20
  4.8 Robust Multi-Task Learning  20
    4.8.1 Least_RMTL  21
  4.9 Robust Multi-Task Feature Learning  21
    4.9.1 Least_rMTFL  22
5 Examples  23
  5.1 Code Usage and Optimization Setup  23
  5.2 l1-norm Regularization  23
  5.3 l2,1-norm Regularization  24
Nesterov, Y., & Nesterov, I. (2004). Introductory lectures on convex optimization: A basic course, vol. 87. Springer.
Nie, F., Huang, H., Cai, X., & Ding, C. (2010). Efficient and robust feature selection via joint l2,1-norms minimization.
Obozinski, G., Taskar, B., & Jordan, M. (2010). Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20, 231-252.
Thrun, S., & O'Sullivan, J. (1998). Clustering learning tasks and the selective cross-task transfer of knowledge. Learning to Learn, 181-209.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 267-288.
Vandenberghe, L., & Boyd, S. (1996). Semidefinite programming. SIAM Review, 49-95.
Wang, F., Wang, X., & Li, T. (2009). Semi-supervised multi-task learning with task regularizations. 2009 Ninth IEEE International Conference on Data Mining (pp. 562-568).
Xue, Y., Liao, X., Carin, L., & Krishnapuram, B. (2007). Multi-task learning for classification with Dirichlet process priors. The Journal of Machine Learning Research, 8, 35-63.
Zha, H., He, X., Ding, C., Gu, M., & Simon, H. (2002). Spectral relaxation for k-means clustering. Advances in Neural Information Processing Systems, 2, 1057-1064.
Zhang, Y., & Yeung, D. (2010). A convex formulation for learning task relationships in multi-task learning. Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI) (pp. 733-742).
[Figure 2: The main input and output variables. For each task, the feature matrix X (n_i-by-d) and the response Y (n_i-by-1) are inputs; the output is the model W, a d-by-t matrix, plus the t-dimensional vector c for logistic regression.]

where W is a d-by-t matrix, each column of which is a d-dimensional parameter vector for the corresponding task, and c is a t-dimensional vector. For a new input x from task i, the binary prediction y is given by y = sign(x^T W(:, i) + c(i)).

These two loss functions are available for most of the algorithms in the package. The output func_val contains the objective function values at all iterations of the optimization algorithm. In some algorithms there are other output variables that are not directly related to the prediction; for example, in convex relaxed ASO the optimization also gives the shared feature mapping, which is a low-rank matrix. In some scenarios the user may be interested in such variables. These variables are given in the field OTHER_OUTPUT.

3.2 Optimization Options

All optimization algorithms in our package are implemented using iterative methods. Users can use the optional opts input to specify starting points, termination conditions, tolerance, and the maximum iteration number. The input opts is a structure variable. To specify an option, the user can add the corresponding fields. If one or more required fields are not specified, or the opts variable is not given, then default values will be used. The default values can be changed in init_opts.m in the MALSAR/utils folder.
    \min_{W,c} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \|W\|_{2,1} + \rho_{L2} \|W\|_F^2        (7)

where X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, the regularization parameter rho_1 controls group sparsity, and the optional rho_L2 regularization parameter controls the l2-norm penalty. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

4.3 The Dirty Model for Multi-Task Learning

Joint feature learning using the l2,1-norm regularization performs well in ideal cases. In practical applications, however, simply using the l2,1-norm regularization may not be effective for dealing with dirty data, which may not fall into a single structure. To this end, the dirty model for multi-task learning was proposed (Jalali et al., 2010). The key idea of the dirty model is to decompose the model W into two components P and Q, as shown in Figure 5.

[Figure 5: Illustration of the dirty model for multi-task learning. The model W is decomposed into a group sparse component P and an elementwise sparse component Q.]

4.3.1 A Dirty Model for Multi-Task Learning with the Least Squares Loss (Least_Dirty)

The function [W, funcVal, P, Q] = Least_Dirty(X, Y, rho_1, rho_2, opts) solves the dirty multi-task least squares problem
in grouped sparsity, assuming that all tasks share a common set of features. The learnt model is illustrated in Figure 4.

4.2.1 l2,1-Norm Regularization with Least Squares Loss (Least_L21)

The function [W, funcVal] = Least_L21(X, Y, rho_1, opts) solves the l2,1-norm and squared l2-norm regularized multi-task least squares problem

    \min_W \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|W\|_{2,1} + \rho_{L2} \|W\|_F^2        (6)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_1 controls group sparsity, and the optional rho_L2 regularization parameter controls the l2-norm penalty. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

[Figure 4: Illustration of multi-task learning with joint feature selection based on the l2,1-norm regularization (feature X of dimension d and response Y for tasks 1, ..., t; all tasks share a common set of selected features).]

4.2.2 l2,1-Norm Regularization with Logistic Loss (Logistic_L21)

The function [W, c, funcVal] = Logistic_L21(X, Y, rho_1, opts) solves the l2,1-norm and squared l2-norm regularized multi-task logistic regression problem
opts) solves the graph-structure regularized, l1-norm and squared l2-norm regularized multi-task logistic regression problem

    \min_{W,c} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \|WR\|_F^2 + \rho_2 \|W\|_1 + \rho_{L2} \|W\|_F^2        (14)

where X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, the regularization parameter rho_2 controls sparsity, and the optional rho_L2 regularization parameter controls the l2-norm penalty. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

4.5 Low Rank Assumption: Trace-norm Regularized Problems

One way to capture the task relationship is to constrain the models from different tasks to share a low-dimensional subspace, i.e., W is of low rank, resulting in the following rank minimization problem:

    \min_W \mathcal{L}(W) + \lambda\, \mathrm{rank}(W)

The above problem is in general NP-hard (Vandenberghe & Boyd, 1996). One popular approach is to replace the rank function (Fazel, 2002) by the trace norm (or nuclear norm), as follows:

    \min_W \mathcal{L}(W) + \lambda \|W\|_*        (15)

where the trace norm is given by the sum of the singular values: \|W\|_* = \sum_i \sigma_i(W). The trace norm regularization has been studied extensively in multi-task learning (Ji & Ye, 2009; Abernethy et al., 2006; Abernethy et al., 2009; Argyriou et al., 2008a; Obozinski et al., 2010).
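The small sketch below illustrates the quantity penalized in Eq. (15); it is not part of the package, and the matrix sizes are chosen arbitrarily for the illustration. It builds a low-rank model matrix and computes its trace norm as the sum of singular values:

% illustrative only: trace norm (sum of singular values) vs. rank
d = 100; t = 20; r = 3;            % feature dimension, task number, true rank
W = randn(d, r) * randn(r, t);     % a d-by-t model matrix of rank r
tn = sum(svd(W));                  % trace norm ||W||_*
fprintf('rank(W) = %d, trace norm = %.2f\n', rank(W), tn);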
the dark blue color corresponds to a zero entry. The matrices are transposed since the dimensionality is much larger than the task number; after the transpose, each row corresponds to a task. We see that the sparse component S has non-zero rows corresponding to the outlier tasks. The code that generates this result is from the example file example_Robust.m.

[Figure 14: Illustration of RMTL. Visualization of the robust multi-task learning model, plotted against the feature dimension: the sparse component S (outliers) and the low-rank component L. The dark blue color corresponds to a zero entry; the matrices are transposed, so each row corresponds to a task, and the sparse component S has non-zero rows corresponding to the outlier tasks.]

5.7 Joint Feature Learning with Outlier Tasks (rMTFL)

In this example we show how to use robust multi-task feature learning to detect outlier tasks using synthetic data.

rng('default');  % reset random generator
dimension = 500;
sample_size = 50;
task = 50;
X = cell(task, 1);
Y = cell(task, 1);
for i = 1: task
    X{i} = rand(sample_size, dimension);
    Y{i} = rand(sample_size, 1);
end

To generate reproducible results, we reset the random number generator before we use the rand function. We then run the following code:

opts.init = 0;        % guess start point from data
opts.tFlag = 1;       % terminate after relative objective value does not change much
opts.tol = 10^-6;     % tolerance
opts.maxIter = 500;   % maximum iteration number of optimization
0 1500];
sparsity = zeros(length(lambda), 1);
log_lam = log(lambda);
for i = 1: length(lambda)
    [W, funcVal] = Least_L21(X, Y, lambda(i), opts);
    % set the solution as the next initial point; this gives better efficiency
    opts.init = 1;
    opts.W0 = W;
    sparsity(i) = nnz(sum(W, 2) == 0) / d;
end

The statement nnz(sum(W,2)==0) computes the number of features that are not selected by any of the tasks. We can observe from Figure 12 that when the regularization parameter increases, the number of selected features decreases. The code that generates this result is from the example file example_L21.m.

[Figure 12: Joint feature learning via the l2,1-norm regularized MTL. y-axis: row sparsity of the model (percentage of all-zero columns); x-axis: log rho_1. When the regularization parameter increases, the number of selected features decreases.]

5.4 Low-Rank Structure (Trace-norm Regularization)

In this example we explore the trace-norm regularized multi-task learning using the School data from the data folder.

load('data/school.mat');  % load sample data
% Define a set of regularization parameters and use pathwise computation.
tn_val = zeros(length(lambda), 1);
rk_val = zeros(length(lambda), 1);
log_lam = log(lambda);
for i = 1: length(lambda)
16 Example: Dirty Model Learnt from Dirty MTL  29
17 Example: Cluster Structure Learnt from CMTL  31

List of Tables

1 Formulations included in the MALSAR package  6
2 Installation of MALSAR  7

1 Introduction

1.1 Multi-Task Learning

In many real-world applications we deal with multiple related classification/regression tasks. For example, in the prediction of therapy outcome (Bickel et al., 2008), the tasks of predicting the effectiveness of several combinations of drugs are related. In the prediction of disease progression, the prediction of the outcome at each time point can be considered as a task, and these tasks are temporally related (Zhou et al., 2011b). A simple approach is to solve these tasks independently, ignoring the task relatedness. In multi-task learning, these related tasks are learnt simultaneously by extracting and utilizing appropriate shared information across tasks. Learning multiple related tasks simultaneously effectively increases the sample size for each task and improves the prediction performance. Thus multi-task learning is especially beneficial when the training sample size is small for each task. Figure 1 illustrates the difference between traditional single task learning (STL) and multi-task learning (MTL). In STL, each task is considered to be independent and learnt independently. In MTL, multiple tasks are learnt simultaneously by utilizing task relatedness.
Name | Loss function L(W) | Regularization Omega(W) | Main Reference
Lasso | Least Squares, Logistic | rho_1 ||W||_1 | Tibshirani (1996)
Joint Feature Selection | Least Squares, Logistic | rho_1 ||W||_{2,1} | Argyriou et al. (2007)
Dirty Model | Least Squares | rho_1 ||P||_{1,inf} + rho_2 ||Q||_1 | Jalali et al. (2010)
Graph Structure | Least Squares, Logistic | rho_1 ||WR||_F^2 + rho_2 ||W||_1 |
Low Rank | Least Squares, Logistic | rho_1 ||W||_* | Ji & Ye (2009)
Sparse + Low Rank | Least Squares | rho_1 ||P||_1 s.t. W = P + Q, ||Q||_* <= tau | Chen et al. (2010)
Relaxed Clustered MTL | Least Squares, Logistic | rho_1 eta(1+eta) tr(W (eta I + M)^{-1} W^T) s.t. tr(M) = k, M <= I, M in S_+^t, eta = rho_2/rho_1 | Zhou et al. (2011a)
Relaxed ASO | Least Squares, Logistic | rho_1 eta(1+eta) tr(W^T (eta I + M)^{-1} W) s.t. tr(M) = k, M <= I, M in S_+^t, eta = rho_2/rho_1 | Chen et al. (2009)
Robust MTL | Least Squares | rho_1 ||L||_* + rho_2 ||S||_{1,2} s.t. W = L + S | Chen et al. (2011)
Robust Feature Learning | Least Squares | rho_1 ||P||_{2,1} + rho_2 ||Q^T||_{2,1} s.t. W = P + Q | Gong et al. (2012)

2009; Abernethy et al., 2006; Abernethy et al., 2009; Argyriou et al., 2008a; Obozinski et al., 2010; Chen et al., 2010; Argyriou et al., 2008b; Agarwal et al., 2010). The formulations implemented in the MALSAR package are summarized in Table 1.

1.2 Optimization Algorithm

In the MALSAR package, most optimization algorithms are implemented via the accelerated gradient methods (AGM) (Nemirovski, 2001; Nesterov & Nesterov, 2004; Nesterov, 2005; Nesterov, 2007).
X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, and the regularization parameter rho_1 controls the rank of W. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag

4.5.3 Learning with Incoherent Sparse and Low-Rank Components (Least_SparseTrace)

The function [W, funcVal, P, Q] = Least_SparseTrace(X, Y, rho_1, rho_2, opts) solves the incoherent sparse and low-rank multi-task least squares problem

    \min_{W} \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|P\|_1        (18)
    \text{subject to } W = P + Q, \ \|Q\|_* \le \rho_2        (19)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_1 controls the sparsity of the sparse component P, and the rho_2 regularization parameter controls the rank of Q. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.P0, opts.Q0 (set opts.W0 to any non-empty value)
• Termination: opts.tFlag

[Figure 7: Illustration of clustered tasks (cluster 1, cluster 2, ..., cluster k-1, cluster k, with their training data). Tasks with similar colors are similar to each other.]

4.6 Discovery of Clustered Structure: Clustered Multi-Task Learning

Many multi-task learning algorithms assume that all learning tasks are related.
3. Download MALSAR and uncompress | Required for all functions
4. In MATLAB, go to the MALSAR folder and run INSTALL.M in the command window | Required for non-Windows machines

The folder structure of the MALSAR package is:

• manual: the location of this manual.
• MALSAR: the folder containing the main functions and libraries.
  - utils: contains the opts structure initialization and some common libraries. The folder should be in the MATLAB path.
  - functions: contains all the MATLAB functions, organized by category.
  - c_files: all C files are in this folder. It is not necessary to compile them one by one. For Windows users there are precompiled binaries for i386 and x64 CPUs; Unix and Mac OS users can perform the compilation all together by running INSTALL.M.
• examples: many examples are included in this folder for the functions implemented in MALSAR. If you are not familiar with the package, this is the perfect place to start.
• data: popular multi-task learning datasets, such as the School data.

(MATLAB: http://www.mathworks.com/products/matlab; Mosek: http://www.mosek.com; MALSAR: http://www.public.asu.edu/~jye02/Software/MALSAR)

3 Interface Specification

3.1 Input and Output

All functions implemented in MALSAR follow a common specification. For a multi-task learning algorithm NAME, the input and output of the algorithm are in the following format:
[Figure 16: The dirty prediction model W learnt, as well as the joint feature selection component P and the element-wise sparse component Q (visualization of the non-zero entries in the dirty model, plotted against the feature dimension).]

We visualize the non-zero entries in P, Q, and W in Figure 16. The figures are transposed for better visualization. We see that the matrix P has a clear group sparsity property and captures the jointly selected features; the features that do not fit into the group sparsity structure are captured in the matrix Q. The code that generates this result is from the example file example_Dirty.m.

5.9 Learning with Clustered Structures (CMTL)

In this example we show how to use the clustered multi-task learning (CMTL) functions. We use synthetic data generated as follows:

rng('default');
clus_var = 900;       % cluster variance
task_var = 16;        % inter task variance
nois_var = 150;       % variance of noise
clus_num = 2;         % clusters
clus_task_num = 10;   % task number of each cluster
task_num = clus_num * clus_task_num;          % total task number
sample_size = 100;
dimension = 20;       % total dimension
comm_dim = 2;         % independent dimension for all tasks
clus_dim = floor((dimension - comm_dim)/2);   % dimension of cluster
[MODEL_VARS, func_val, OTHER_OUTPUT] = LOSS_NAME(X, Y, rho_1, ..., rho_p, opts)

where LOSS is the name of the loss function and MODEL_VARS are the model variables learnt. In the input fields, X and Y are two t-dimensional cell arrays. Each cell of X contains an n_i-by-d matrix, where n_i is the sample size for task i and d is the dimensionality of the feature space. Each cell of Y contains the corresponding n_i-by-1 response. The relationship among X, Y, and W is given in Figure 2. rho_1, ..., rho_p are algorithm parameters (e.g., regularization parameters). opts contains the optional optimization options that are elaborated in Sect. 3.2.

In the output fields, MODEL_VARS are model variables that can be used for predicting unseen data points. Depending on the loss function, the model variables may be different. Specifically, the following format is used under the least squares loss:

[W, func_val, OTHER_OUTPUT] = Least_NAME(X, Y, rho_1, ..., rho_p, opts)

where W is a d-by-t matrix, each column of which is a d-dimensional parameter vector for the corresponding task. For a new input x from task i, the prediction y is given by y = x^T W(:, i). The following format is used under the logistic loss:

[W, c, func_val, OTHER_OUTPUT] = Logistic_NAME(X, Y, rho_1, ..., rho_p, opts)
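To make the two prediction rules concrete, here is a small sketch (illustrative only; the variables W, c, and x_new are assumed to come from a Least_* call, a Logistic_* call, and the user, respectively):

% W: d-by-t model from a Least_* or Logistic_* function; c: t-by-1 vector (logistic only)
i = 3;                                       % task index of the new sample (illustrative)
x_new = randn(size(W, 1), 1);                % a new d-dimensional input
y_regress = x_new' * W(:, i);                % real-valued prediction under the least squares loss
y_binary  = sign(x_new' * W(:, i) + c(i));   % binary prediction under the logistic loss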
addpath('MALSAR/functions/Lasso/');       % load function
addpath('MALSAR/utils/');                 % load utilities
addpath(genpath('MALSAR/c_files/'));      % load c files

An alternative is to add the entire MALSAR package:

addpath(genpath('MALSAR/'));

The user then needs to set up the optimization options before calling the functions (refer to Section 3.2 for detailed information about opts):

opts.init = 0;          % compute start point from data
opts.tFlag = 1;         % terminate after relative objective value does not change much
opts.tol = 10^-5;       % tolerance
opts.maxIter = 1500;    % maximum iteration number of optimization

[W, funcVal] = Least_Lasso(data_feature, data_response, lambda, opts);

Note: for efficiency it is important to set a proper tolerance, termination condition and, most importantly, the maximum number of iterations, especially for large-scale problems.

5.2 Sparsity in Multi-Task Learning (l1-norm regularization)

In this example we explore the sparsity of the prediction models in l1-norm regularized multi-task learning using the School data. To use the School data, first load it from the data folder:

load('data/school.mat');  % load sample data
% Define a set of regularization parameters and use pathwise computation.
lambda = [1 10 100 200 500 1000 2000];
sparsity = zeros(length(lambda), 1);
Jacob, L., Bach, F., & Vert, J. (2008). Clustered multi-task learning: A convex formulation. Arxiv preprint arXiv:0809.2085.
Jalali, A., Ravikumar, P., Sanghavi, S., & Ruan, C. (2010). A dirty model for multi-task learning. Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS).
Ji, S., & Ye, J. (2009). An accelerated gradient method for trace norm minimization. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 457-464).
Li, C., & Li, H. (2008). Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24, 1175-1182.
Liu, J., Ji, S., & Ye, J. (2009a). Multi-task feature learning via efficient l2,1-norm minimization. Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (pp. 339-348).
Liu, J., Ji, S., & Ye, J. (2009b). SLEP: Sparse Learning with Efficient Projections. Arizona State University.
Nemirovski, A. Efficient methods in convex programming. Lecture Notes.
Nemirovski, A. (2001). Lectures on modern convex optimization. Society for Industrial and Applied Mathematics (SIAM).
Nesterov, Y. (2005). Smooth minimization of non-smooth functions. Mathematical Programming, 103, 127-152.
Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. ReCALL, 76.
The AGM differs from the traditional gradient method in that in every iteration it uses a linear combination of the previous two points as the search point, instead of only using the latest point. The AGM has a convergence rate of O(1/k^2), which is optimal among first-order methods. The key subroutine in AGM is to compute the proximal operator

    \pi(S) = \arg\min_W \ \frac{1}{2}\|W - S\|_F^2 + \frac{1}{\gamma}\,\Omega(W)        (2)

where Omega(W) is the non-smooth regularization term, S is the current search point, and gamma is the step size.

2 Package and Installation

The MALSAR package is currently only available for MATLAB. The user needs MATLAB 2010a or a higher version. Some of the algorithms (i.e., clustered multi-task learning and alternating structure optimization) need Mosek to be installed; the recommended version of Mosek is 6.0. If you are not sure whether Mosek has been installed or not, you can type the following in the MATLAB command window to verify the installation:

help mosekopt

Mosek version information will show up if it is correctly installed. After MATLAB and Mosek are correctly installed, download the MALSAR package from the software homepage and unzip it to a folder. If you are using a Unix-based machine or Mac OS, there is an additional step to build the C libraries: open MATLAB, navigate to the MALSAR folder, and run INSTALL.M. A step-by-step installation guide is given in Table 2.

Table 2: Installation of MALSAR

Step | Comment
1. Install MATLAB 2010a or later | Required for all functions
2. Install Mosek 6.0 or later | Required only for Least_CMTL, Logistic_CMTL, Least_CASO, Logistic_CASO
    \min_{P,Q} \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|P\|_{1,\infty} + \rho_2 \|Q\|_1        (8)
    \text{subject to } W = P + Q        (9)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, P is the group sparsity component and Q is the elementwise sparse component, rho_1 controls the group sparsity regularization on P, and rho_2 controls the sparsity regularization on Q. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.P0, opts.Q0 (set opts.W0 to any non-empty value)
• Termination: opts.tFlag
• Initial Lipschitz Constant: opts.lFlag

4.4 Encoding Graph Structure: Graph Regularized Problems

In some applications the task relationship can be represented using a graph, where each task is a node and two nodes are connected via an edge if they are related. Let E denote the set of edges. We denote edge i, connecting tasks a_i and b_i, as a vector e^i in R^t whose entries e^i_{a_i} and e^i_{b_i} are set to 1 and -1, respectively. The complete graph is encoded in the matrix R = [e^1, e^2, ..., e^{|E|}] in R^{t x |E|}. The following regularization penalizes the differences between all pairs of tasks connected in the graph:

    \|WR\|_F^2 = \sum_{i=1}^{|E|} \|W e^i\|_2^2 = \sum_{i=1}^{|E|} \|W_{a_i} - W_{b_i}\|_2^2        (10)

which can also be represented in the following matrix form:

    \|WR\|_F^2 = \mathrm{tr}\left((WR)(WR)^T\right) = \mathrm{tr}\left(W R R^T W^T\right) = \mathrm{tr}\left(W L W^T\right)        (11)

where L = R R^T, known as the graph Laplacian matrix, is symmetric and positive semi-definite.
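To make the encoding of Eqs. (10)-(11) concrete, the following sketch (illustrative only; the three-task chain graph and all sizes are made up for the example) builds R from an edge list and verifies numerically that the three expressions above agree:

% illustrative: build R for a chain graph over 3 tasks and check (10)-(11)
t = 3; d = 5;
edges = [1 2; 2 3];                       % each row is one edge (a_i, b_i)
R = zeros(t, size(edges, 1));
for i = 1: size(edges, 1)
    R(edges(i, 1), i) =  1;               % +1 at one endpoint of edge i
    R(edges(i, 2), i) = -1;               % -1 at the other endpoint
end
W = randn(d, t);
p1 = norm(W * R, 'fro')^2;                                    % ||W R||_F^2
p2 = sum(sum((W(:, edges(:, 1)) - W(:, edges(:, 2))).^2));    % sum of pairwise differences
p3 = trace(W * (R * R') * W');                                % tr(W L W^T) with L = R R^T
% p1, p2 and p3 agree up to numerical round-off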
Zhou, J., Chen, J., & Ye, J. (2011a). Clustered multi-task learning via alternating structure optimization. Advances in Neural Information Processing Systems.
Zhou, J., Yuan, L., Liu, J., & Ye, J. (2011b). A multi-task learning formulation for predicting disease progression. Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 814-822). New York, NY, USA: ACM.

Index

l2,1-norm, 8
l1-norm, 10
l2-norm, 9
accelerated gradient, 4
ASO, 17
clustered multi-task, 15
dirty data, 10
folder structure, 5
graph, 11
joint sparsity, 9
Lasso, 8
loss function, 6
Mosek, 5
multi-label learning, 3
nuclear norm, 13
spectral relaxed k-means, 15
temporal smoothness, 20
trace norm, 13
transfer learning, 3
imization (ASO) (Ando & Zhang, 2005) is to decompose the predictive model of each task into two components: a task-specific feature mapping and a task-shared feature mapping, as shown in Figure 8. The ASO formulation for linear predictors is given by

    \min_{\{u_t, v_t\}, \Theta} \sum_{t=1}^{m} \left( \frac{1}{n_t} \sum_{i=1}^{n_t} L\left(w_t^T x_i^t,\, y_i^t\right) + \alpha \|u_t\|^2 \right) \quad \text{subject to } \Theta\Theta^T = I, \ w_t = u_t + \Theta^T v_t        (25)

where Theta is the low-dimensional feature map shared across all tasks. The predictor f_t for task t can be expressed as f_t(x) = w_t^T x = u_t^T x + v_t^T Theta x.

[Figure 8: Illustration of Alternating Structure Optimization. The predictive model of each task (task 1, task 2, ..., task m) includes two components: the task-specific feature mapping and the task-shared low-dimensional feature map.]

The formulation in Eq. (25) is not convex. A convex relaxation of ASO, called cASO, is proposed in Chen et al. (2009):

    \min_{W, M} \sum_{t=1}^{m} \frac{1}{n_t} \sum_{i=1}^{n_t} L\left(w_t^T x_i^t,\, y_i^t\right) + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W^T (\eta I + M)^{-1} W\right)
    \quad \text{subject to } \mathrm{tr}(M) = h, \ M \preceq I, \ M \in \mathbb{S}_+, \ \eta = \rho_2/\rho_1        (26)

It has been shown in Zhou et al. (2011a) that there is an equivalence relationship between the clustered multi-task learning formulation in Eq. (20) and cASO when the dimensionality of the shared subspace in cASO equals the cluster number in cMTL.

4.7.1 cASO with Least Squares Loss (Least_CASO)

The function [W, funcVal, M] = Least_CASO(X, Y, rho_1, rho_2, k, opts) solves the convex relaxed alternating structure optimization (cASO) multi-task least squares problem
• Starting Point (init): Users can use this field to specify different starting points.
  - opts.init = 0: the starting point will be initialized to a guess value computed from the data; for example, under the least squares loss the model W(:, i) for the i-th task is initialized from X{i} and Y{i}.
  - opts.init = 1: opts.W0 is used as the starting point. Note that if the value 1 is specified in init but the field W0 is not given, then init will be forced to the default value.
  - opts.init = 2 (default): the starting point will be a zero matrix.

• Termination Condition (tFlag) and Tolerance (tol): In this package there are 4 types of termination conditions supported for all optimization algorithms, chosen by setting opts.tFlag to 0, 1, 2, or 3 together with the tolerance opts.tol.

• Maximum Iteration (maxIter): When the tolerance and/or the termination condition is not properly set, the algorithms may take an unacceptably long time to stop. In order to prevent this situation, users can provide the maximum number of iterations allowed for the solver; the algorithm stops when the maximum number of iterations is reached, even if the termination condition is not satisfied. For example, one can use the following code to specify the maximum iteration number of the optimization problem:

opts.maxIter = 1000;

The algorithm will stop after 1000 iteration steps even if the termination condition is not satisfied.
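Putting these fields together, a typical option structure might look as follows (a sketch; the particular values here are arbitrary and not recommendations):

clear opts;
opts.init    = 2;       % start the solver from a zero matrix (default)
opts.tFlag   = 1;       % type of termination condition (see above)
opts.tol     = 1e-5;    % tolerance used by the termination condition
opts.maxIter = 1000;    % hard limit on the number of iterations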
input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_1 controls the low-rank regularization on the structure L, and the rho_2 regularization parameter controls the l1,2-norm (group sparsity) penalty on S. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.L0, opts.S0 (set opts.W0 to any non-empty value)
• Termination: opts.tFlag

4.9 Joint Feature Learning with Outlier Tasks: Robust Multi-Task Feature Learning

The joint feature learning formulation in Section 4.2 selects a common set of features for all tasks. However, it assumes that there is no outlier task, which may not be the case in practical applications. To this end, a robust multi-task feature learning (rMTFL) formulation was proposed in Gong et al. (2012). rMTFL assumes that the model W can be decomposed into two components: a shared feature structure P that captures task relatedness, and a group sparse structure Q that detects outliers. If a task is not an outlier, then it falls into the joint feature structure P, with its corresponding column in Q being a zero vector; if it is an outlier, then the Q matrix has non-zero entries in the corresponding column. The following formulation learns the two components simultaneously:

    \min_{W} \mathcal{L}(W) + \rho_1 \|P\|_{2,1} + \rho_2 \|Q^T\|_{2,1} \quad \text{subject to } W = P + Q        (34)

The predictive model of rMTFL is illustrated in Figure 10.
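Once P and Q have been estimated (for example with Least_rMTFL, Section 4.9.1), the outlier tasks can be read off from the columns of Q. The sketch below is illustrative only; it assumes Q is the d-by-t group sparse component returned by the solver:

% flag tasks whose column in Q is not (numerically) all zero as outliers
col_norm = sqrt(sum(Q.^2, 1));          % l2 norm of each column of Q
outlier_tasks = find(col_norm > 1e-8);  % indices of the detected outlier tasks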
rho_1 = 90;     % rho1: joint feature learning
rho_2 = 280;    % rho2: detect outliers
[W, funcVal, P, Q] = Least_rMTFL(X, Y, rho_1, rho_2, opts);

We visualize the matrices P and Q in Figure 15. The code that generates this result is from the example file example_rMTFL.m.

[Figure 15: Illustration of the outlier tasks detected by rMTFL. Visualization of the robust multi-task feature learning model, plotted against the feature dimension: the joint feature component P and the outlier component Q. Black means non-zero entries. The matrices are transposed because the dimension number is larger than the task number; after the transpose, each row denotes a task.]

5.8 Learning with Dirty Data (Dirty MTL Model)

In this example we show how to use dirty multi-task learning on synthetic data.

rng('default');  % reset random generator
dimension = 500;
sample_size = 50;
task = 50;
X = cell(task, 1);
Y = cell(task, 1);
for i = 1: task
    X{i} = rand(sample_size, dimension);
    Y{i} = rand(sample_size, 1);
end

We then run the following code:

opts.init = 0;        % guess start point from data
opts.tFlag = 1;       % terminate after relative objective value does not change much
opts.tol = 10^-4;     % tolerance
opts.maxIter = 500;   % maximum iteration number of optimization
rho_1 = 350;          % rho1: group sparsity regularization parameter
rho_2 = 10;           % rho2: elementwise sparsity regularization parameter
[W, funcVal, P, Q] = Least_Dirty(X, Y, rho_1, rho_2, opts);
    \min_{W,M} \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W (\eta I + M)^{-1} W^T\right)        (21)
    \text{subject to } \mathrm{tr}(M) = k, \ M \preceq I, \ M \in \mathbb{S}_+^t, \ \eta = \rho_2/\rho_1        (22)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, and rho_1 is the regularization parameter. Because of the equality constraint tr(M) = k, the starting point of M is initialized to be M0 = (k/t) * I, satisfying tr(M0) = k. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag

4.6.2 Convex Relaxed Clustered Multi-Task Learning with Logistic Loss (Logistic_CMTL)

The function [W, c, funcVal, M] = Logistic_CMTL(X, Y, rho_1, rho_2, k, opts) solves the relaxed k-means clustering regularized multi-task logistic regression problem

    \min_{W,c,M} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W (\eta I + M)^{-1} W^T\right)        (23)
    \text{subject to } \mathrm{tr}(M) = k, \ M \preceq I, \ M \in \mathbb{S}_+^t, \ \eta = \rho_2/\rho_1        (24)

where X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, and rho_1 is the regularization parameter. Because of the equality constraint tr(M) = k, the starting point of M is initialized to be M0 = (k/t) * I, satisfying tr(M0) = k. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag
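As a small numerical check of the constraint set in Eqs. (22) and (24) (a sketch for illustration only; t and k are chosen arbitrarily), the default starting point M0 = (k/t) * I is indeed feasible:

t = 20; k = 3;                       % task number and cluster number (illustrative)
M0 = (k / t) * eye(t);               % default starting point for M
fprintf('tr(M0) = %g, k = %d\n', trace(M0), k);   % equality constraint holds
e = eig(M0);                         % eigenvalues of M0
assert(all(e >= 0) && all(e <= 1));  % M0 is positive semi-definite and M0 <= I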
In practical applications, the tasks may exhibit a more sophisticated group structure, where the models of tasks from the same group are closer to each other than those from a different group. There has been much work along this line of research (Thrun & O'Sullivan, 1998; Jacob et al., 2008; Wang et al., 2009; Xue et al., 2007; Bakker & Heskes, 2003; Evgeniou et al., 2006; Zhang & Yeung, 2010), known as clustered multi-task learning (CMTL). The idea of CMTL is shown in Figure 7. In Zhou et al. (2011a) we proposed a CMTL formulation which is based on the spectral relaxed k-means clustering (Zha et al., 2002):

    \min_{W,\, F:\, F^T F = I_k} \mathcal{L}(W) + \alpha\left(\mathrm{tr}(W^T W) - \mathrm{tr}(F^T W^T W F)\right) + \beta\, \mathrm{tr}(W^T W)        (20)

where k is the number of clusters and F captures the relaxed cluster assignment information. Since the formulation in Eq. (20) is not convex, a convex relaxation, called cCMTL, is also proposed. The formulation of cCMTL is given by

    \min_{W,M} \mathcal{L}(W) + \rho_1 \eta (1+\eta)\, \mathrm{tr}\left(W (\eta I + M)^{-1} W^T\right)
    \quad \text{subject to } \mathrm{tr}(M) = k, \ M \preceq I, \ M \in \mathbb{S}_+^t, \ \eta = \rho_2/\rho_1

There are many optimization algorithms for solving the cCMTL formulation (Zhou et al., 2011a); in our package we include an efficient implementation based on the Accelerated Projected Gradient method.

4.6.1 Convex Relaxed Clustered Multi-Task Learning with Least Squares Loss (Least_CMTL)

The function [W, funcVal, M] = Least_CMTL(X, Y, rho_1, rho_2, k, opts) solves the relaxed k-means clustering regularized multi-task least squares problem
penalty. Note that both the l1-norm and l2-norm penalties are used in the elastic net. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

4.1.2 Multi-Task Lasso with Logistic Loss (Logistic_Lasso)

The function [W, c, funcVal] = Logistic_Lasso(X, Y, rho_1, opts) solves the l1-norm and squared l2-norm regularized multi-task logistic regression problem

    \min_{W,c} \sum_{i=1}^{t} \sum_{j=1}^{n_i} \log\left(1 + \exp\left(-Y_{i,j}(W_i^T X_{i,j} + c_i)\right)\right) + \rho_1 \|W\|_1 + \rho_{L2} \|W\|_F^2        (4)

where X_{i,j} denotes sample j of the i-th task, Y_{i,j} denotes its corresponding label, W_i and c_i are the model for task i, the regularization parameter rho_1 controls sparsity, and the optional rho_L2 regularization parameter controls the l2-norm penalty. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0, opts.C0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

4.2 Joint Feature Selection: l2,1-norm Regularized Problems

One way to capture the task relatedness from multiple related tasks is to constrain all models to share a common set of features. This motivates group sparsity, i.e., the l2,1-norm regularized learning (Argyriou et al., 2007; Argyriou et al., 2008a; Liu et al., 2009a; Nie et al., 2010):

    \min_W \mathcal{L}(W) + \lambda \|W\|_{2,1}        (5)

where \|W\|_{2,1} = \sum_{j=1}^{d} \|w^j\|_2 is the group sparse penalty and w^j denotes the j-th row of W. Compared to Lasso, the l2,1-norm regularization results
[Figure 10: Illustration of robust multi-task feature learning. The predictive model of each task includes two components: the joint feature selection structure P that captures task relatedness and the group sparse structure Q whose non-zero columns correspond to the outlier tasks.]

4.9.1 rMTFL with Least Squares Loss (Least_rMTFL)

The function [W, funcVal, P, Q] = Least_rMTFL(X, Y, rho_1, rho_2, opts) solves the problem of robust multi-task feature learning with the least squares loss:

    \min \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|P\|_{2,1} + \rho_2 \|Q^T\|_{2,1}        (35)
    \text{subject to } W = P + Q        (36)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_1 controls the joint feature learning, and the regularization parameter rho_2 controls the column-wise group sparsity on Q that detects the outliers. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.L0, opts.S0 (set opts.W0 to any non-empty value)
• Termination: opts.tFlag
• Initial Lipschitz Constant: opts.lFlag

5 Examples

In this section we provide running examples for some representative multi-task learning formulations included in the MALSAR package. All figures in these examples can be generated using the corresponding MATLAB scripts in the examples folder.

5.1 Code Usage and Optimization Setup

The users are recommended to add the paths that contain the necessary functions at the beginning:
    [W, funcVal] = Least_Trace(X, Y, lambda(i), opts);
    % set the solution as the next initial point; this gives better efficiency
    opts.init = 1;
    opts.W0 = W;
    tn_val(i) = sum(svd(W));
    rk_val(i) = rank(W);
end

In the code we compute the value of the trace norm of the prediction model as well as its rank. We gradually increase the penalty; the results are shown in Figure 13. The code sum(svd(W)) computes the trace norm (the sum of the singular values).

[Figure 13: The trace norm and rank of the model learnt from trace-norm regularized MTL. Upper panel: trace norm of the predictive model (sum of singular values of W) when changing the regularization parameter; lower panel: rank of the predictive model; x-axis: log rho_1. As the trace-norm penalty increases, we observe a monotonic decrease of both the trace norm and the rank.]

The code that generates this result is from the example file example_Trace.m.

5.5 Graph Structure (Graph Regularization)

In this example we show how to use graph regularized multi-task learning via the SRMTL functions. We use the School data from the data folder.

load('data/school.mat');  % load sample data

For a given graph, we first construct the graph variable R to encode the graph structure:
In Li & Li (2008) the network structure is defined on the features, while in MTL the structure is defined on the tasks. In the multi-task learning formulation proposed by Evgeniou & Pontil (2004), it is assumed that all tasks are related in the sense that the models of all tasks are close to their mean:

    \min_W \mathcal{L}(W) + \rho \sum_{t=1}^{T} \Big\| W_t - \frac{1}{T}\sum_{s=1}^{T} W_s \Big\|^2        (12)

where rho > 0 is the penalty parameter. The regularization term in Eq. (12) penalizes the deviation of each task from the mean (1/T) sum_s W_s. This regularization can also be encoded using the structure matrix R by setting R = eye(t) - ones(t)/t.

4.4.1 Sparse Graph Regularization with Least Squares Loss (Least_SRMTL)

The function [W, funcVal] = Least_SRMTL(X, Y, R, rho_1, rho_2, opts) solves the graph-structure regularized, l1-norm and squared l2-norm regularized multi-task least squares problem

    \min_W \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|WR\|_F^2 + \rho_2 \|W\|_1 + \rho_{L2} \|W\|_F^2        (13)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_2 controls sparsity, and the optional rho_L2 regularization parameter controls the l2-norm penalty. Currently this function supports the following optional fields:

• Starting Point: opts.init, opts.W0
• Termination: opts.tFlag
• Regularization: opts.rho_L2

4.4.2 Sparse Graph Regularization with Logistic Loss (Logistic_SRMTL)

The function [W, c, funcVal] = Logistic_SRMTL(X, Y, R, rho_1, rho_2,
[Figure 1: Illustration of single task learning (STL) and multi-task learning (MTL). In single task learning, each task (task 1, task 2, ..., task t) is trained on its own training data, producing its own model and generalization; in multi-task learning, multiple tasks are learnt simultaneously by utilizing task relatedness.]

In data mining and machine learning, a common paradigm for classification and regression is to minimize the penalized empirical loss:

    \min_W \mathcal{L}(W) + \Omega(W)        (1)

where W is the parameter to be estimated from the training samples, L(W) is the empirical loss on the training set, and Omega(W) is the regularization term that encodes task relatedness. Different assumptions on task relatedness lead to different regularization terms. In the field of multi-task learning there is much prior work that models relationships among tasks using novel regularizations (Evgeniou & Pontil, 2004; Ji & Ye,

Table 1: Formulations included in the MALSAR package; all are of the form min_W L(W) + Omega(W).
which is, however, not the case in many real-world applications. Robust multi-task learning (RMTL) aims at identifying irrelevant (outlier) tasks when learning from multiple tasks. One approach to perform RMTL is to assume that the model W can be decomposed into two components: a low-rank structure L that captures task relatedness and a group sparse structure S that detects outliers (Chen et al., 2011). If a task is not an outlier, then it falls into the low-rank structure L, with its corresponding column in S being a zero vector; if it is an outlier, then the S matrix has non-zero entries in the corresponding column. The following formulation learns the two components simultaneously:

    \min_{W} \mathcal{L}(W) + \rho_1 \|L\|_* + \rho_2 \|S\|_{1,2} \quad \text{subject to } W = L + S        (31)

The predictive model of RMTL is illustrated in Figure 9.

[Figure 9: Illustration of robust multi-task learning. The predictive model of each task includes two components: the low-rank structure L that captures task relatedness and the group sparse structure S that detects outliers (non-zero columns of S correspond to the outlier tasks).]

4.8.1 RMTL with Least Squares Loss (Least_RMTL)

The function [W, funcVal, L, S] = Least_RMTL(X, Y, rho_1, rho_2, opts) solves the incoherent group sparse and low-rank multi-task least squares problem

    \min \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|L\|_* + \rho_2 \|S\|_{1,2}        (32)
    \text{subject to } W = L + S        (33)

where X_i denotes the
4 Multi-Task Learning Formulations

4.1 Sparsity in Multi-Task Learning: l1-norm Regularized Problems

The l1-norm (Lasso) regularized methods are widely used to introduce sparsity into the model and to achieve the goals of reducing model complexity and feature learning (Tibshirani, 1996). We can easily extend the l1-norm regularized STL to MTL formulations. A common simplification of Lasso in MTL is that the parameter controlling the sparsity is shared among all tasks, assuming that different tasks share the same sparsity parameter. The learnt model is illustrated in Figure 3.

[Figure 3: Illustration of multi-task Lasso (feature X of dimension d and response Y for tasks 1, ..., t, with the corresponding sparse models).]

4.1.1 Multi-Task Lasso with Least Squares Loss (Least_Lasso)

The function [W, funcVal] = Least_Lasso(X, Y, rho_1, opts) solves the l1-norm and squared l2-norm regularized multi-task least squares problem

    \min_W \sum_{i=1}^{t} \|W_i^T X_i - Y_i\|_F^2 + \rho_1 \|W\|_1 + \rho_{L2} \|W\|_F^2        (3)

where X_i denotes the input matrix of the i-th task, Y_i denotes its corresponding label, W_i is the model for task i, the regularization parameter rho_1 controls sparsity, and the optional rho_L2 regularization parameter controls the l2-norm
% construct graph structure variable R
R = [];
for i = 1: task_num
    for j = i + 1: task_num
        if graph(i, j) ~= 0
            edge = zeros(task_num, 1);
            edge(i) = 1;
            edge(j) = -1;
            R = cat(2, R, edge);
        end
    end
end

[W_est, funcVal] = Least_SRMTL(X, Y, R, 1, 20);

The code that generates this result is from example_SRMTL.m and example_SRMTL_spcov.m.

5.6 Learning with Outlier Tasks (RMTL)

In this example we show how to use robust multi-task learning to detect outlier tasks using synthetic data.

rng('default');  % reset random generator
dimension = 500;
sample_size = 50;
task = 50;
X = cell(task, 1);
Y = cell(task, 1);
for i = 1: task
    X{i} = rand(sample_size, dimension);
    Y{i} = rand(sample_size, 1);
end

To generate reproducible results, we reset the random number generator before we use the rand function. We then run the following code:

opts.init = 0;        % guess start point from data
opts.tFlag = 1;       % terminate after relative objective value does not change much
opts.tol = 10^-6;     % tolerance
opts.maxIter = 1500;  % maximum iteration number of optimization
rho_1 = 10;           % rho1: low rank component L, trace-norm regularization parameter
rho_2 = 30;           % rho2: sparse component S, L1,2-norm sparsity controlling parameter
[W, funcVal, L, S] = Least_RMTL(X, Y, rho_1, rho_2, opts);

We visualize the matrices L and S in Figure 14. In the figure,
% generate cluster model
cluster_weight = randn(dimension, clus_num) * clus_var;
for i = 1: clus_num
    cluster_weight(randperm(dimension - clus_num) < clus_dim * i, i) = 0;
end
cluster_weight(end - comm_dim: end, :) = 0;
W = repmat(cluster_weight, 1, clus_task_num);
cluster_index = repmat(1: clus_num, 1, clus_task_num);

% generate task and intra-cluster variance
W_it = randn(dimension, task_num) * task_var;
for i = 1: task_num
    W_it(:, i) = cat(1, W_it(1: end - comm_dim, i), zeros(comm_dim, 1));
end
W = W + W_it;

% apply noise
W = W + randn(dimension, task_num) * nois_var;

% Generate Data Sets
X = cell(task_num, 1);
Y = cell(task_num, 1);
for i = 1: task_num
    X{i} = randn(sample_size, dimension);
    xw = X{i} * W(:, i);
    xw = xw + randn(size(xw)) * nois_var;
    Y{i} = sign(xw);
end

We generate a set of tasks as follows. We first generate 2 cluster centers with between-cluster variance N(0, 900), and for each cluster center we generate 10 tasks with intra-cluster variance N(0, 16). We thus generate a total of 20 task models W. We generate the data points with variance N(0, 150). After we generate the data set, we run the following code to learn the CMTL model:

opts.init = 0;        % guess start point from data
opts.tFlag = 1;       % terminate after relative objective value does not change much
opts.tol = 10^-6;     % tolerance
opts.maxIter = 1500;  % maximum iteration number of optimization
rho_1 = 10;
rho_2 = 10^-1;
W_learn = Least_CMTL(X, Y,
