Cssu dw dm



<ul><li>1. Data Warehouses, Data Mining, Business Intelligence Applications</li></ul> <p>2. Outline </p> <ul><li>Why needed? </li></ul> <ul><li>Database → Data Warehouse → Data Mining == Knowledge Discovery in DB </li></ul> <ul><li>Data Warehouses: organization, structuring and presentation of data, oriented to analyses. Data Cube </li></ul> <ul><li>DM: preprocessing </li></ul> <ul><li>DM: characterization and comparison </li></ul> <ul><li>DM: classification and forecasting </li></ul> <ul><li>Conclusion </li></ul> <p>3. Why needed? </p> <ul><li>Necessity is the Mother of Invention </li></ul> <ul><li>Data explosion problem </li></ul> <ul><li><ul><li>Automated data collection tools and mature database technology lead to tremendous amounts of data stored in databases and other information repositories </li></ul></li></ul> <ul><li>We are drowning in data, but starving for knowledge! </li></ul> <ul><li>Solution: Data warehousing and data mining </li></ul> <ul><li><ul><li>Data warehousing and on-line analytical processing (OLAP) </li></ul></li></ul> <ul><li><ul><li>Extraction of interesting knowledge (rules, regularities, patterns, constraints) from data in large databases </li></ul></li></ul> <p>4. Why needed? </p> <ul><li>Evolution of Database Technology </li></ul> <ul><li>1960s: </li></ul> <ul><li><ul><li>Data collection, database creation, IMS and network DBMS </li></ul></li></ul> <ul><li>1970s: </li></ul> <ul><li><ul><li>Relational data model, relational DBMS implementation </li></ul></li></ul> <ul><li>1980s: </li></ul> <ul><li><ul><li>RDBMS, advanced data models (extended-relational, OO, deductive, etc.) and application-oriented DBMS (spatial, scientific, engineering, etc.) </li></ul></li></ul> <ul><li>1990s–2000s: </li></ul> <ul><li><ul><li>Data warehousing and data mining, multimedia databases, and Web databases </li></ul></li></ul> <p>5. What Is Data Mining? 
</p> <ul><li>Data mining (knowledge discovery in databases): </li></ul> <ul><li><ul><li>Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from data in large databases </li></ul></li></ul> <ul><li><ul><li>Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc. </li></ul></li></ul> <ul><li>What is not data mining? </li></ul> <ul><li><ul><li>Query processing. </li></ul></li></ul> <ul><li><ul><li>Expert systems or statistical programs </li></ul></li></ul> <p>6. DB → DW → DM == Knowledge Discovery in DB </p> <ul><li><ul><li>Data mining: the core of the knowledge discovery process. </li></ul></li></ul> <p>[KDD process diagram: Databases → Data Cleaning → Data Integration → Data Warehouse → Selection → Task-relevant Data → Data Mining → Pattern Evaluation → Knowledge] 7. Steps of a KDD Process </p> <ul><li>Learning the application domain: </li></ul> <ul><li><ul><li>relevant prior knowledge and goals of the application </li></ul></li></ul> <ul><li>Creating a target data set: data selection </li></ul> <ul><li>Data cleaning and preprocessing (may take 60% of the effort!) </li></ul> <ul><li>Data reduction and transformation: </li></ul> <ul><li><ul><li>Find useful features, dimensionality/variable reduction, invariant representation. </li></ul></li></ul> <ul><li>Choosing functions of data mining </li></ul> <ul><li><ul><li>summarization, classification, regression, association, clustering. </li></ul></li></ul> <ul><li>Choosing the mining algorithm(s) </li></ul> <ul><li>Data mining: search for patterns of interest </li></ul> <ul><li>Pattern evaluation and knowledge presentation </li></ul> <ul><li><ul><li>visualization, transformation, removing redundant patterns, etc. </li></ul></li></ul> <ul><li>Use of discovered knowledge </li></ul> <p>8. 
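
The KDD steps listed above can be sketched as a chained pipeline of stage functions. This is an illustrative sketch only: the stage names (`select`, `clean`, `transform`, `mine`, `evaluate`) and the toy data are hypothetical placeholders, not part of any real library.

```python
def kdd_pipeline(raw, select, clean, transform, mine, evaluate):
    """Chain the KDD stages: selection -> cleaning -> transformation ->
    mining -> pattern evaluation."""
    data = select(raw)
    data = clean(data)
    data = transform(data)
    patterns = mine(data)
    return evaluate(patterns)

# Toy run: select non-missing values, clean out negatives, square them,
# "mine" the maximum, and wrap it as the evaluated result.
result = kdd_pipeline(
    [3, None, -2, 5],
    select=lambda d: [x for x in d if x is not None],
    clean=lambda d: [x for x in d if x >= 0],
    transform=lambda d: [x * x for x in d],
    mine=lambda d: max(d),
    evaluate=lambda p: {"best": p},
)
# result == {"best": 25}
```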
Data Mining and Business Intelligence [Pyramid diagram, increasing potential to support business decisions from bottom to top: Data Sources (Paper, Files, Information Providers, Database Systems, OLTP; DBA) → Data Warehouses / Data Marts → Data Exploration (OLAP, MDA, Statistical Analysis, Querying and Reporting; Data Analyst) → Data Mining / Information Discovery → Data Presentation / Visualization Techniques (Business Analyst) → Making Decisions (End User)] 9. Data Warehouses: Data Cube, OLAP [Concept hierarchy diagram for the Location dimension: all → region (Europe, North_America) → country (Germany, Spain, Canada, Mexico) → city (Frankfurt, Vancouver, Toronto) → office (M. Wind, L. Chan, ...)] 10. Data Warehouses: Data Cube, OLAP [Cube diagram with dimensions Product, Region, Month. Hierarchical summarization paths: Product: Industry → Category → Product; Location: Region → Country → City → Office; Time: Year → Quarter → Month/Week → Day] 11. Data Warehouses: Data Cube, OLAP [Cube diagram: total annual sales of TV in U.S.A.; dimensions Date (1Qtr–4Qtr), Product (TV, VCR, PC), Country (U.S.A, Canada, Mexico), with sums rolling up to (All, All, All)] 12. </p> <ul><li>Concept description: Characterization and discrimination </li></ul> <ul><li><ul><li>Generalize, summarize, and contrast data characteristics, e.g., dry vs. wet regions </li></ul></li></ul> <ul><li>Association (correlation and causality) </li></ul> <ul><li><ul><li>Multi-dimensional vs. single-dimensional association </li></ul></li></ul> <ul><li><ul><li>age(X, 20..29) ^ income(X, 20..29K) → buys(X, PC) [support = 2%, confidence = 60%] </li></ul></li></ul> <ul><li><ul><li>contains(T, computer) → contains(T, software) [1%, 75%] </li></ul></li></ul> <p>Data Mining Functionalities (1) 13. 
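
The "total annual sales of TV in U.S.A." roll-up described above can be sketched as a sum over a tiny fact table; the rows, the amounts, and the `roll_up` helper are all illustrative assumptions, not data from the slides.

```python
# Illustrative fact table: (product, country, quarter, sales amount).
sales = [
    ("TV", "U.S.A", "1Qtr", 100),
    ("TV", "U.S.A", "2Qtr", 150),
    ("TV", "Canada", "1Qtr", 80),
    ("VCR", "U.S.A", "1Qtr", 60),
]

def roll_up(facts, product, country):
    """Aggregate the Date dimension to 'all', fixing Product and Country."""
    return sum(amount for p, c, _, amount in facts
               if p == product and c == country)

print(roll_up(sales, "TV", "U.S.A"))  # 250
```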
Data Mining Functionalities (2) </p> <ul><li>Classification and Prediction </li></ul> <ul><li><ul><li>Finding models (functions) that describe and distinguish classes or concepts for future prediction </li></ul></li></ul> <ul><li><ul><li>E.g., classify countries based on climate, or classify cars based on gas mileage </li></ul></li></ul> <ul><li><ul><li>Presentation: decision tree, classification rules, neural network </li></ul></li></ul> <ul><li><ul><li>Prediction: predict some unknown or missing numerical values </li></ul></li></ul> <ul><li>Cluster analysis </li></ul> <ul><li><ul><li>Class label is unknown: group data to form new classes, e.g., cluster houses to find distribution patterns </li></ul></li></ul> <ul><li><ul><li>Clustering based on the principle: maximizing the intra-class similarity and minimizing the inter-class similarity </li></ul></li></ul> <p>14. </p> <ul><li>Outlier analysis </li></ul> <ul><li><ul><li>Outlier: a data object that does not comply with the general behavior of the data </li></ul></li></ul> <ul><li><ul><li>It can be considered noise or an exception, but is quite useful in fraud detection and rare-events analysis </li></ul></li></ul> <ul><li>Trend and evolution analysis </li></ul> <ul><li><ul><li>Trend and deviation: regression analysis </li></ul></li></ul> <ul><li><ul><li>Sequential pattern mining, periodicity analysis </li></ul></li></ul> <ul><li><ul><li>Similarity-based analysis </li></ul></li></ul> <ul><li>Other pattern-directed or statistical analyses </li></ul> <p>Data Mining Functionalities (3) 15. Are All the Discovered Patterns Interesting? </p> <ul><li>A data mining system/query may generate thousands of patterns; not all of them are interesting. 
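
Outlier analysis, as described above, can be sketched with a simple rule that flags values far from the mean; the z-score criterion and the threshold value are common conventions assumed here, not taken from the slides.

```python
def outliers(values, threshold=3.0):
    """Return values whose z-score magnitude exceeds `threshold`."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) / std > threshold]

# 100 does not comply with the general behavior of the data:
print(outliers([10, 11, 9, 10, 12, 100], 2.0))  # [100]
```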
</li></ul> <ul><li><ul><li>Suggested approach: Human-centered, query-based, focused mining </li></ul></li></ul> <ul><li>Interestingness measures: A pattern is interesting if it is easily understood by humans, valid on new or test data with some degree of certainty, potentially useful, novel, or validates some hypothesis that a user seeks to confirm </li></ul> <ul><li>Objective vs. subjective interestingness measures: </li></ul> <ul><li><ul><li>Objective: based on statistics and structures of patterns, e.g., support, confidence, etc. </li></ul></li></ul> <ul><li><ul><li>Subjective: based on the user's beliefs about the data, e.g., unexpectedness, novelty, actionability, etc. </li></ul></li></ul> <p>16. Can We Find All and Only Interesting Patterns? </p> <ul><li>Find all the interesting patterns: Completeness </li></ul> <ul><li><ul><li>Can a data mining system find all the interesting patterns? </li></ul></li></ul> <ul><li><ul><li>Association vs. classification vs. clustering </li></ul></li></ul> <ul><li>Search for only interesting patterns: Optimization </li></ul> <ul><li><ul><li>Can a data mining system find only the interesting patterns? </li></ul></li></ul> <ul><li><ul><li>Approaches </li></ul></li></ul> <ul><li><ul><li><ul><li>First generate all the patterns and then filter out the uninteresting ones. </li></ul></li></ul></li></ul> <ul><li><ul><li><ul><li>Generate only the interesting patterns: mining query optimization </li></ul></li></ul></li></ul> <p>17. Data Preprocessing Why Data Preprocessing? </p> <ul><li>Data in the real world is dirty </li></ul> <ul><li><ul><li>incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data </li></ul></li></ul> <ul><li><ul><li>noisy: containing errors or outliers </li></ul></li></ul> <ul><li><ul><li>inconsistent: containing discrepancies in codes or names </li></ul></li></ul> <ul><li>No quality data, no quality mining results! 
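
The "first generate all the patterns, then filter out the uninteresting ones" approach above can be sketched with objective measures (support, confidence) as the filter; the pattern list and the thresholds are illustrative.

```python
# Hypothetical patterns from a mining run, with their objective measures.
patterns = [
    {"rule": "A -> C", "support": 0.50, "confidence": 0.666},
    {"rule": "C -> A", "support": 0.50, "confidence": 1.00},
    {"rule": "B -> D", "support": 0.01, "confidence": 0.90},
]

def filter_interesting(patterns, min_support, min_confidence):
    """Keep only patterns meeting both minimum thresholds."""
    return [p for p in patterns
            if p["support"] >= min_support and p["confidence"] >= min_confidence]

kept = filter_interesting(patterns, min_support=0.05, min_confidence=0.5)
# "B -> D" is dropped: its support (1%) is below the 5% threshold.
```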
</li></ul> <ul><li><ul><li>Quality decisions must be based on quality data </li></ul></li></ul> <ul><li><ul><li>A data warehouse needs consistent integration of quality data </li></ul></li></ul> <p>18. Major Tasks in Data Preprocessing </p> <ul><li>Data cleaning </li></ul> <ul><li><ul><li>Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies </li></ul></li></ul> <ul><li>Data integration </li></ul> <ul><li><ul><li>Integration of multiple databases, data cubes, or files </li></ul></li></ul> <ul><li>Data transformation </li></ul> <ul><li><ul><li>Normalization and aggregation </li></ul></li></ul> <ul><li>Data reduction </li></ul> <ul><li><ul><li>Obtains a reduced representation in volume but produces the same or similar analytical results </li></ul></li></ul> <ul><li>Data discretization </li></ul> <ul><li><ul><li>Part of data reduction but of particular importance, especially for numerical data </li></ul></li></ul> <p>19. Data Cleaning </p> <ul><li>Data cleaning tasks </li></ul> <ul><li><ul><li>Fill in missing values </li></ul></li></ul> <ul><li><ul><li>Identify outliers and smooth out noisy data </li></ul></li></ul> <ul><li><ul><li>Correct inconsistent data </li></ul></li></ul> <p>20. How to Handle Missing Data? </p> <ul><li>Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably. </li></ul> <ul><li>Fill in the missing value manually: tedious + infeasible? </li></ul> <ul><li>Use a global constant to fill in the missing value: e.g., unknown, a new class?! </li></ul> <ul><li>Use the attribute mean to fill in the missing value </li></ul> <ul><li>Use the attribute mean for all samples belonging to the same class to fill in the missing value: smarter </li></ul> <ul><li>Forecast the missing value: use the most probable value vs. the value with the least impact on further analysis </li></ul> <p>21. 
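
Filling in missing values with the attribute mean, as listed above, can be sketched as follows; the column name and the records are illustrative, not data from the slides.

```python
def impute_mean(rows, col):
    """Replace None in column `col` with the mean of the observed values."""
    observed = [r[col] for r in rows if r[col] is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{col: mean if r[col] is None else r[col]}) for r in rows]

data = [{"age": 25}, {"age": None}, {"age": 35}]
filled = impute_mean(data, "age")
# The missing age is filled with the mean of 25 and 35, i.e. 30.0
```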
How to Handle Noisy Data? </p> <ul><li>Binning method: </li></ul> <ul><li><ul><li>first sort data and partition into (equi-depth) bins </li></ul></li></ul> <ul><li><ul><li>then one can smooth by bin means, smooth by bin medians, smooth by bin boundaries, etc. </li></ul></li></ul> <ul><li>Clustering </li></ul> <ul><li><ul><li>detect and remove outliers </li></ul></li></ul> <ul><li>Combined computer and human inspection </li></ul> <ul><li><ul><li>detect suspicious values and have a human check them </li></ul></li></ul> <ul><li>Regression </li></ul> <ul><li><ul><li>smooth by fitting the data to regression functions </li></ul></li></ul> <p>22. Data Integration </p> <ul><li>Data integration: </li></ul> <ul><li><ul><li>combines data from multiple sources into a coherent store </li></ul></li></ul> <ul><li>Schema integration </li></ul> <ul><li><ul><li>integrate metadata from different sources </li></ul></li></ul> <ul><li><ul><li>Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-# </li></ul></li></ul> <ul><li>Detecting and resolving data value conflicts </li></ul> <ul><li><ul><li>for the same real-world entity, attribute values from different sources are different </li></ul></li></ul> <ul><li><ul><li>possible reasons: different representations, different scales, e.g., metric vs. British units </li></ul></li></ul> <p>23. 
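
The binning method above (sort, partition into equi-depth bins, smooth by bin means) can be sketched as follows; the price values are a standard illustrative example, not data from the slides.

```python
def smooth_by_bin_means(values, n_bins):
    """Sort, split into equal-size bins, replace each value by its bin mean."""
    s = sorted(values)
    depth = len(s) // n_bins
    out = []
    for i in range(0, len(s), depth):
        bin_ = s[i:i + depth]
        out.extend([sum(bin_) / len(bin_)] * len(bin_))
    return out

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
smoothed = smooth_by_bin_means(prices, 3)
# First bin {4, 8, 9, 15} -> mean 9.0; second {21, 21, 24, 25} -> 22.75; ...
```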
Data Transformation </p> <ul><li>Smoothing: remove noise from data </li></ul> <ul><li>Aggregation: summarization, data cube construction </li></ul> <ul><li>Generalization: concept hierarchy climbing </li></ul> <ul><li>Normalization: scaled to fall within a small, specified range </li></ul> <ul><li><ul><li>dimensions </li></ul></li></ul> <ul><li><ul><li>Scales: nominal, ordinal and interval scales </li></ul></li></ul> <ul><li><ul><li>min-max normalization </li></ul></li></ul> <ul><li><ul><li>z-score normalization </li></ul></li></ul> <ul><li><ul><li>normalization by decimal scaling </li></ul></li></ul> <ul><li>Attribute/feature construction </li></ul> <ul><li><ul><li>New attributes constructed from the given ones </li></ul></li></ul> <p>24. Data Reduction Strategies </p> <ul><li>A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set </li></ul> <ul><li>Data reduction </li></ul> <ul><li><ul><li>Obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results </li></ul></li></ul> <ul><li>Data reduction strategies </li></ul> <ul><li><ul><li>Data cube aggregation </li></ul></li></ul> <ul><li><ul><li>Dimensionality reduction </li></ul></li></ul> <ul><li><ul><li>Numerosity reduction </li></ul></li></ul> <ul><li><ul><li>Discretization and concept hierarchy generation </li></ul></li></ul> <p>25. Mining Association Rules in Large Databases </p> <ul><li>Association rule mining </li></ul> <ul><li>Mining single-dimensional Boolean association rules from transactional databases </li></ul> <ul><li>Mining multilevel association rules from transactional databases </li></ul> <ul><li>Mining multidimensional association rules from transactional databases and data warehouses </li></ul> <ul><li>From association mining to correlation analysis </li></ul> <ul><li>Constraint-based association mining </li></ul> <p>26. What Is Association Mining? 
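
Min-max and z-score normalization, listed above, can be sketched as follows; the sample income values and the [0, 1] target range are illustrative assumptions.

```python
def min_max(values, new_min=0.0, new_max=1.0):
    """Linearly rescale values into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min
            for v in values]

def z_score(values):
    """Center on the mean and scale by the (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

incomes = [20, 30, 40, 50, 60]
print(min_max(incomes))  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(z_score(incomes))  # zero mean, unit variance
```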
</p> <ul><li>Association rule mining: </li></ul> <ul><li><ul><li>Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories. </li></ul></li></ul> <ul><li>Applications: </li></ul> <ul><li><ul><li>Basket data analysis, cross-marketing, catalog design, loss-leader analysis, clustering, classification, etc. </li></ul></li></ul> <ul><li>Examples. </li></ul> <ul><li><ul><li>Rule form: Body → Head [support, confidence]. </li></ul></li></ul> <ul><li><ul><li>buys(x, diapers) → buys(x, beers) [0.5%, 60%] </li></ul></li></ul> <ul><li><ul><li>major(x, CS) ^ takes(x, DB) → grade(x, A) [1%, 75%] </li></ul></li></ul> <p>27. Rule Measures: Support and Confidence </p> <ul><li>Find all the rules X &amp; Y → Z with minimum confidence and support </li></ul> <ul><li><ul><li>support, s: probability that a transaction contains {X, Y, Z} </li></ul></li></ul> <ul><li><ul><li>confidence, c: conditional probability that a transaction containing {X, Y} also contains Z </li></ul></li></ul> <ul><li>With minimum support 50% and minimum confidence 50%, we have </li></ul> <ul><li><ul><li>A → C (50%, 66.6%) </li></ul></li></ul> <ul><li><ul><li>C → A (50%, 100%) </li></ul></li></ul> <p>[Venn diagram: customers buying beer, customers buying diapers, and customers buying both] 28. Association Rule Mining </p> <ul><li>Boolean vs. quantitative associations </li></ul> <ul><li>buys(x, SQLServer) ^ buys(x, DMBook) → buys(x, DBMiner) [0.2%, 60%] </li></ul> <ul><li><ul><li>age(x, 30..39) ^ income(x, 42..48K) → buys(x, PC) [1%, 75%] </li></ul></li></ul> <ul><li>Single-dimensional vs. multi-dimensional associations </li></ul> <ul><li>Single-level vs. multiple-level analysis </li></ul> <ul><li><ul><li>What brands of beers are associated with what brands of diapers? 
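
Support and confidence as defined above can be computed directly over a toy transaction list; the transactions echo the diapers-and-beers example, and `support`/`confidence` are hypothetical helpers, not a library API.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Conditional probability that a transaction with `lhs` also has `rhs`."""
    return support(transactions, lhs | rhs) / support(transactions, lhs)

T = [{"beer", "diapers"}, {"beer", "diapers", "milk"}, {"beer"}, {"milk"}]
print(support(T, {"beer", "diapers"}))       # 0.5 (2 of 4 transactions)
print(confidence(T, {"beer"}, {"diapers"}))  # 2/3
```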
</li></ul></li></ul> <ul><li>Various extensions </li></ul> <ul><li><ul><li>Correlation, causality analysis </li></ul></li></ul> <ul><li><ul><li><ul><li>Association does not necessarily imply correlation or causality </li></ul></li></ul></li></ul> <ul><li><ul><li>Constraints enforced </li></ul></li></ul> <ul><li><ul><li><ul><li>E.g., small sales (sum &lt; 100) trigger big buys (sum &gt; 1,000)? </li></ul></li></ul></li></ul> <p>29. Concept Description: Characterization and Comparison </p> <ul><li><ul><li>Concept description: </li></ul></li></ul> <ul><li><ul><li>Characterization: provides a concise and succinct summarization of the given collection of data </li></ul></li></ul> <ul><li><ul><li>Comparison: provides descriptions comparing two or more collections of data </li></ul></li></ul> <p>30. Classification and Prediction </p> <ul><li>Classification: </li></ul> <ul><li><ul><li>predicts categorical class labels </li></ul></li></ul> <ul><li><ul><li>classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it in classifying new data </li></ul></li></ul> <ul><li>Prediction: </li></ul> <ul><li><ul><li>models continuous-valued functions, i.e., predicts unknown or missing values </li></ul></li></ul> <p>31. 
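
The classification task just described (build a model from labeled training data, then use it on new data) can be sketched with a hard-coded rule standing in for the learned model; the rule mirrors the slides' tenure example, while the test records and names are illustrative assumptions.

```python
def model(rank, years):
    """The slides' example rule, standing in for a learned classifier:
    IF rank = professor OR years > 6 THEN tenured = yes."""
    return "yes" if rank == "professor" or years > 6 else "no"

# Estimate accuracy on a held-out test set (records are illustrative).
test_set = [
    ("Tom", "assistant prof", 2, "no"),
    ("Merlisa", "associate prof", 7, "no"),
    ("George", "professor", 5, "yes"),
    ("Joseph", "assistant prof", 7, "yes"),
]
correct = sum(model(rank, years) == label
              for _name, rank, years, label in test_set)
accuracy = correct / len(test_set)  # 3 of 4 correct -> 0.75
```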
Classification: A Two-Step Process </p> <ul><li>Model construction: describing a set of predetermined classes </li></ul> <ul><li><ul><li>Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute </li></ul></li></ul> <ul><li><ul><li>The set of tuples used for model construction: training set </li></ul></li></ul> <ul><li><ul><li>The model is represented as classification rules, decision trees, or mathematical formulae </li></ul></li></ul> <ul><li>Model usage: for classifying future or unknown objects </li></ul> <ul><li><ul><li>Estimate the accuracy of the model </li></ul></li></ul> <ul><li><ul><li><ul><li>The known label of a test sample is compared with the classified result from the model </li></ul></li></ul></li></ul> <ul><li><ul><li><ul><li>Accuracy rate is the percentage of test set samples that are correctly classified by the model </li></ul></li></ul></li></ul> <ul><li><ul><li><ul><li>The test set is independent of the training set, otherwise over-fitting will occur </li></ul></li></ul></li></ul> <p>32. Classification Process (1): Model Construction [Diagram: Training Data → Classification Algorithms → Classifier (Model); example rule: IF rank = professor OR years &gt; 6 THEN tenured = yes] 33. Classification Process (2): Use the Model in Prediction [Diagram: Testing Data → Classifier → prediction; unseen data (Jeff, Professor, 4) → Tenured?] 34. Supervised vs. Unsupervised Learning </p> <ul><li>Supervised learning (classification) </li></ul> <ul><li><ul><li>Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations </li></ul></li></ul> <ul><li><ul><li>New data is classified based on the training set </li></ul></li></ul> <ul><li>Unsupervised learning (clustering) </li></ul> <ul><li><ul><li>The class labels of training data are unknown </li></ul></li></ul> <ul><li><ul><li>Given a set of measurements, observations, etc. 
with the aim of establishing the existence of classes or clusters in the data </li></ul></li></ul> <p>35. Q and A Thank you !!! </p>
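
Unsupervised learning as described above can be sketched with a minimal one-dimensional k-means that forms clusters without any class labels; the algorithm choice, the crude initialization, and the data points are all illustrative assumptions.

```python
def kmeans_1d(points, k=2, iters=10):
    """Group 1-D points into k clusters by iteratively reassigning each
    point to its nearest center and moving centers to cluster means."""
    # Crude initialization: spread starting centers across the sorted data.
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

clusters = kmeans_1d([1, 2, 3, 10, 11, 12])
# Two natural groups emerge without labels: [1, 2, 3] and [10, 11, 12]
```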