70-452 Exam

DESCRIPTION

CertMagic.com offers a wide range of preparation material for the 70-452 certification exam. CertMagic's study material helps you prepare fully for the 70-452 exam and enter the exam centre with confidence. We provide simple, easy-to-use and regularly updated study material, and we guarantee that after preparing with the CertMagic 70-452 material you will pass the certification exam and become a certified professional.

CERT MAGIC

Demo Edition

PRO: MS SQL Server® 2008, Designing a Business Intelligence

Exam: 70-452

http://www.certmagic.com

QUESTION: 1
You design a Business Intelligence (BI) solution by using SQL Server 2008. You plan to create a SQL Server 2008 Reporting Services (SSRS) solution that contains five sales dashboard reports. Users must be able to manipulate the reports' parameters to analyze data. You need to ensure that the following requirements are met:
- Users can manipulate the parameters for data analysis in a single trip to the data source.
- Reports are automatically rendered as soon as they are accessed for the first time.
Which two tasks should you perform? (Each correct answer presents part of the solution. Choose two.)

A. Filter data by using expressions.
B. Specify the default values for each parameter.
C. Create an available values list for each parameter.
D. Create report parameters by using query parameters to filter data at the data source.

Answer: A, B

QUESTION: 2
You design a SQL Server 2008 Reporting Services (SSRS) solution. You create a report by using Microsoft Visual Studio .NET 2008. The report contains the following components:
- A dataset named Customer that lists all active customers and their details. The dataset accepts no parameters.
- A dataset named SalesHistory that lists all sales transactions for a specified time period and accepts year and month as parameters.
You need to ensure that a summary of sales transactions is displayed for each customer after the customer details. Which component should you add to the report?

A. List
B. Table
C. Matrix
D. Subreport

Answer: D

Explanation:
http://msdn.microsoft.com/en-us/library/ms160348(SQL.100).aspx
How to: Add a Subreport and Parameters (Reporting Services)
Add subreports to a report when you want to create a main report that is a container for multiple related reports. A subreport is a reference to another report. To relate the reports through data values (for example, to have multiple reports show data for the same customer), you must design a parameterized report (for example, a report that shows the details for a specific customer) as the subreport. When you add a subreport to the main report, you can specify parameters to pass to the subreport. You can also add subreports to dynamic rows or columns in a table or matrix. When the main report is processed, the subreport is processed for each row. In this case, consider whether you can achieve the desired effect by using data regions or nested data regions.
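For context, the SalesHistory dataset in Question 2 only needs to accept year and month as query parameters so that the main report can pass values into the subreport. The query below is an illustrative sketch, not part of the exam scenario: the Sales.SalesOrderHeader table and its column names are assumptions, and @Year and @Month are supplied by SSRS as report parameters.

-- Hypothetical SalesHistory dataset query (table and column names assumed)
SELECT soh.CustomerID,
       soh.SalesOrderID,
       soh.OrderDate,
       soh.TotalDue
FROM   Sales.SalesOrderHeader AS soh
WHERE  YEAR(soh.OrderDate)  = @Year
  AND  MONTH(soh.OrderDate) = @Month;

In the main report, the subreport control would then map values to the Year and Month parameters when it is processed for each customer row.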

QUESTION: 3
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server 2008 Analysis Services (SSAS) database. The database contains a data mining structure that uses a SQL Server 2008 table as a data source. A table named OrderDetails contains detailed information on product sales. The OrderDetails table includes a column named Markup. You build a data mining model by using the Microsoft Decision Trees algorithm. You classify Markup as discretized content. The algorithm produces a large number of branches for Markup and results in low confidence ratings on predictable columns. You need to verify whether the Markup values include inaccurate data. What should you do?

A. Modify the content type of Markup as Continuous.
B. Create a data mining dimension in the SSAS database from OrderDetails.
C. Create a data profile by using SQL Server 2008 Integration Services (SSIS).
D. Create a cube in SSAS. Use OrderDetails as a measure group. Recreate the data mining structure and mining model from the cube data.

Answer: C

Explanation:
Discretized
The column has continuous values that are grouped into buckets. Each bucket is considered to have a specific order and to contain discrete values. Possible values for the discretization method are automatic, equal areas, or clusters. Automatic means that SSAS determines which method to use. Equal areas results in the input data being divided into partitions of equal size. This method works best with data with regularly distributed values. Clusters means that SSAS samples the data to produce a result that accounts for "clumps" of data values. Because of this sampling, Clusters can be used only with numeric input columns. You can use the date, double, long, or text data type with the Discretized content type.
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility (decision trees work with both discrete and continuous attributes) and also because of the richness of its included viewers. It's quite easy to understand the output via these viewers. This algorithm is used both to view and to predict. It is also used (usually in conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees algorithm processes input data by splitting it into recursive (related) subsets. In the default viewer, the output is shown as a recursive tree structure. If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with particular predictable values, producing a result that shows which columns are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses standard linear regression to determine where the splits in the decision tree occur. Clicking a node displays detailed information in the Mining Legend window. You can configure the view using the various drop-down lists at the top of the viewer, such as Tree, Default Expansion, and so on. Finally, if you've enabled drillthrough on your model, you can display the drillthrough information, either columns from the model or (new to SQL Server 2008) columns from the mining structure, whether or not they are included in this model.
Data Profiling
The control flow Data Profiling task relates to business problems that are particularly prominent in BI projects: how to deal with huge quantities of data and what to do when this data originates from disparate sources. Understanding source data quality in BI projects, when scoping, early in prototyping, and during package development, is critical when estimating the work involved in building the ETL processes to populate the OLAP cubes and data mining structures. It's common to underestimate the amount of work involved in cleaning the source data before it is loaded into the SSAS destination structures. The Data Profiling task helps you to understand the scope of the source-data cleanup involved in your projects. Specifically, this cleanup involves deciding which methods to use to clean up your data. Methods can include the use of advanced package transformations (such as fuzzy logic) or more staging areas (relational tables) so that fewer in-memory transformations are necessary during the transformation processes. Other considerations include the total number of tasks in a single package, or overall package size.
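The SSIS Data Profiling task is what the exam answer calls for, but while investigating the suspect Markup values it can also help to run a quick ad hoc profile directly against the source table. The query below is only a hand-rolled sketch: the OrderDetails table and Markup column come from the question, while the dbo schema and the numeric data type are assumptions.

-- Rough ad hoc profile of the Markup column (schema and data type assumed)
SELECT  COUNT(*)                                        AS TotalRows,
        COUNT(DISTINCT Markup)                          AS DistinctMarkups,
        SUM(CASE WHEN Markup IS NULL THEN 1 ELSE 0 END) AS NullMarkups,
        SUM(CASE WHEN Markup < 0 THEN 1 ELSE 0 END)     AS NegativeMarkups,
        MIN(Markup)                                     AS MinMarkup,
        MAX(Markup)                                     AS MaxMarkup,
        AVG(CAST(Markup AS DECIMAL(18, 4)))             AS AvgMarkup
FROM    dbo.OrderDetails;

An unexpectedly wide MIN/MAX range or a very large distinct count is exactly the kind of signal that the Data Profiling task's Column Statistics and Column Value Distribution profiles report in a more systematic, repeatable way.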

QUESTION: 4
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution contains a SQL Server 2008 Analysis Services (SSAS) database. A measure group in the database contains log entries of manufacturing events. These events include accidents, machine failures, production capacity metrics, and other activities. You need to implement a data mining model that meets the following requirements:
- Predict the frequency of different event types.
- Identify short-term and long-term patterns.
Which algorithm should the data mining model use?

A. the Microsoft Time Series algorithm
B. the Microsoft Decision Trees algorithm
C. the Microsoft Linear Regression algorithm
D. the Microsoft Logistic Regression algorithm

Answer: A

Explanation:
Microsoft Time Series Algorithm
Microsoft Time Series is used to address a common business problem: accurate forecasting. This algorithm is often used to predict future values, such as rates of sale for a particular product. Most often the inputs are continuous values. To use this algorithm, your source data must contain at least one column marked as Key Time. Any predictable columns must be of type Continuous. You can select one or more inputs as predictable columns when using this algorithm. Time series source data can also contain an optional Key Sequence column.
Function
The ARTxp algorithm has proved to be very good at short-term prediction. The ARIMA algorithm is much better at longer-term prediction. By default, the Microsoft Time Series algorithm blends the results of the two algorithms to produce the best prediction for both the short and long term.
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility (decision trees work with both discrete and continuous attributes) and also because of the richness of its included viewers. It's quite easy to understand the output via these viewers. This algorithm is used both to view and to predict. It is also used (usually in conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees algorithm processes input data by splitting it into recursive (related) subsets. In the default viewer, the output is shown as a recursive tree structure. If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with particular predictable values, producing a result that shows which columns are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses standard linear regression to determine where the splits in the decision tree occur. Clicking a node displays detailed information in the Mining Legend window. You can configure the view using the various drop-down lists at the top of the viewer, such as Tree, Default Expansion, and so on. Finally, if you've enabled drillthrough on your model, you can display the drillthrough information, either columns from the model or (new to SQL Server 2008) columns from the mining structure, whether or not they are included in this model.
Microsoft Linear Regression Algorithm
Microsoft Linear Regression is a variation of the Microsoft Decision Trees algorithm, and works like classic linear regression: it fits the best possible straight line through a series of points (the sources being at least two columns of continuous data). This algorithm calculates all possible relationships between the attribute values and produces more complete results than other (non-data mining) methods of applying linear regression. In addition to a key column, you can use only columns of the continuous numeric data type. Another way to understand this is that it disables splits. You use this algorithm to be able to visualize the relationship between two continuous attributes. For example, in a retail scenario, you might want to create a trend line between physical placement locations in a retail store and rate of sale for items. The algorithm result is similar to that produced by any other linear regression method in that it produces a trend line. Unlike most other methods of calculating linear regression, the Microsoft Linear Regression algorithm in SSAS calculates all possible relationships between all input dataset values to produce its results. This differs from other methods of calculating linear regression, which generally use progressive splitting techniques between the source inputs.
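To make the Key Time requirement concrete, a time series model is usually fed one row per series per time slice. The view below is a purely hypothetical sketch: the EventLog table, its columns, and the weekly grain are invented for illustration and do not come from the exam scenario.

-- Hypothetical source view for a time series model: one row per event type per week
CREATE VIEW dbo.vEventFrequencyByWeek
AS
SELECT  EventType,
        DATEADD(WEEK, DATEDIFF(WEEK, 0, EventDate), 0) AS EventWeek,  -- start of the week
        COUNT(*)                                       AS EventCount
FROM    dbo.EventLog
GROUP BY EventType,
         DATEADD(WEEK, DATEDIFF(WEEK, 0, EventDate), 0);

One common layout in the mining structure treats EventType as the series key, EventWeek as the Key Time column, and EventCount as the Continuous predictable column.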

QUESTION: 5
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server 2008 Analysis Services (SSAS) database.
A cube in the database contains a large dimension named Customers. The database uses a data source that is located on a remote server. Each day, an application adds millions of fact rows and thousands of new customers. Currently, a full process of the cube takes several hours. You need to ensure that queries return the most recent customer data with the minimum amount of latency. Which cube storage model should you use?

A. hybrid online analytical processing (HOLAP)
B. relational online analytical processing (ROLAP)
C. multidimensional online analytical processing (MOLAP)
D. automatic multidimensional online analytical processing (automatic MOLAP)

Answer: A

Explanation:
Relational OLAP
Relational OLAP (ROLAP) stores the cube structure in a multidimensional database. The leaf-level measures are left in the relational data mart that serves as the source of the cube. The preprocessed aggregates are also stored in a relational database table. When a decision maker requests the value of a measure for a certain set of dimension members, the ROLAP system first checks to determine whether the dimension members specify an aggregate or a leaf-level value. If an aggregate is specified, the value is selected from the relational table. If a leaf-level value is specified, the value is selected from the data mart. Also, because the ROLAP architecture retrieves leaf-level values directly from the data mart, the leaf-level values returned by the ROLAP system are always as up-to-date as the data mart itself. In other words, the ROLAP system does not add latency to leaf-level data. The disadvantage of a ROLAP system is that the retrieval of the aggregate and leaf-level values is slower than the other OLAP architectures.
Multidimensional OLAP
Multidimensional OLAP (MOLAP) also stores the cube structure in a multidimensional database. However, both the preprocessed aggregate values and a copy of the leaf-level values are placed in the multidimensional database as well. Because of this, all data requests are answered from the multidimensional database, making MOLAP systems extremely responsive. Additional time is required when loading a MOLAP system because all the leaf-level data is copied into the multidimensional database. Because of this, times occur when the leaf-level data returned by the MOLAP system is not in sync with the leaf-level data in the data mart itself. A MOLAP system, therefore, does add latency to the leaf-level data. The MOLAP architecture also requires more disk space to store the copy of the leaf-level values in the multidimensional database. However, because MOLAP is extremely efficient at storing values, the additional space required is usually not significant.
Hybrid OLAP
Hybrid OLAP (HOLAP) combines ROLAP and MOLAP storage. This is why we end up with the word "hybrid" in the name. HOLAP tries to take advantage of the strengths of each of the other two architectures while minimizing their weaknesses.
HOLAP stores the cube structure and the preprocessed aggregates in a multidimensional database. This provides the fast retrieval of aggregates present in MOLAP structures. HOLAP leaves the leaf-level data in the relational data mart that serves as the source of the cube. This leads to longer retrieval times when accessing the leaf-level values. However, HOLAP does not need to take time to copy the leaf-level data from the data mart. As soon as the data is updated in the data mart, it is available to the decision maker. Therefore, HOLAP does not add latency to the leaf-level data. In essence, HOLAP sacrifices retrieval speed on leaf-level data to prevent adding latency to leaf-level data and to speed the data load. (McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION: 6
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server 2008 Analysis Services (SSAS) database. The database contains a cube named Financials. The cube contains objects as shown in the exhibit.

A calculated member named Gross Margin references both Sales Details and Product Costs. You need to ensure that the solution meets the following requirements:
- Managers must be able to view only their cost center's percentage of the company's gross margin.
- The impact on query performance is minimal.
What should you do?

A. Add dimension-level security and enable the Visual Totals option.
B. Add cell-level security that has read permissions on the Gross Margin measure.
C. Add cell-level security that has read contingent permissions on the Gross Margin measure.
D. Change the permissions on the Managers dimension level from Read to Read/Write.

Answer: A

Explanation:
http://msdn.microsoft.com/en-us/library/ms174927.aspx
User Access Security Architecture
Microsoft SQL Server Analysis Services relies on Microsoft Windows to authenticate users. By default, only authenticated users who have rights within Analysis Services can establish a connection to Analysis Services. After a user connects to Analysis Services, the permissions that user has within Analysis Services are determined by the rights that are assigned to the Analysis Services roles to which that user belongs, either directly or through membership in a Windows role.

Dimension-Level Security
A database role can specify whether its members have permission to view or update dimension members in specified database dimensions. Moreover, within each dimension to which a database role has been granted rights, the role can be granted permission to view or update specific dimension members only instead of all dimension members. If a database role is not granted permissions to view or update a particular dimension and some or all the dimension's members, members of the database role have no permission to view the dimension or any of its members.
Note: Dimension permissions that are granted to a database role apply to the cube dimensions based on the database dimension, unless different permissions are explicitly granted within the cube that uses the database dimension.
Cube-Level Security
A database role can specify whether its members have read or read/write permission to one or more cubes in a database. If a database role is not granted permissions to read or read/write at least one cube, members of the database role have no permission to view any cubes in the database, despite any rights those members may have through the role to view dimension members.
Cell-Level Security
A database role can specify whether its members have read, read contingent, or read/write permissions on some or all cells within a cube. If a database role is not granted permissions on cells within a cube, members of the database role have no permission to view any cube data. If a database role is denied permission to view certain dimensions based on dimension security, cell-level security cannot expand the rights of the database role members to include cell members from that dimension. On the other hand, if a database role is granted permission to view members of a dimension, cell-level security can be used to limit the cell members from the dimension that the database role members can view.

QUESTION: 7
You design a Business Intelligence (BI) solution by using SQL Server 2008. The solution includes a SQL Server 2008 Reporting Services (SSRS) infrastructure in a scale-out deployment. All reports use a SQL Server 2008 relational database as the data source. You implement row-level security. You need to ensure that all reports display only the expected data based on the user who is viewing the report. What should you do?

A. Store the credential of a user in the data source.
B. Configure the infrastructure to support Kerberos authentication.
C. Configure the infrastructure to support anonymous authentication by using a custom authentication extension.
D. Ensure that all report queries add a filter that uses the User.UserID value as a hidden parameter.

Answer: B
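Note: Kerberos matters in a scale-out deployment because constrained delegation lets each report query reach the SQL Server data source under the identity of the user viewing the report, so row-level filtering applied in the database sees the correct login. The view below is only an illustrative sketch of that kind of filtering; the SalesByRegion and RegionSecurity tables and their columns are assumptions, not part of the exam scenario.

-- Hypothetical row-level security view: each Windows login sees only its own rows
CREATE VIEW dbo.vSalesByRegionSecure
AS
SELECT  s.RegionID,
        s.OrderDate,
        s.SalesAmount
FROM    dbo.SalesByRegion AS s
        INNER JOIN dbo.RegionSecurity AS rs
            ON rs.RegionID = s.RegionID
WHERE   rs.WindowsLogin = SUSER_SNAME();  -- the delegated report user's login under Kerberos

If the reports instead connected with a single stored credential (option A), SUSER_SNAME() would return that shared login for every viewer and the row-level filter would no longer distinguish users.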

QUESTION: 8
You design a Business Intelligence (BI) solution by using SQL Server 2008. You need to load data into your online transaction processing (OLTP) database once a week by using data from a flat file. The file contains all the details about new employees who joined your company last week. The data must be loaded into the tables shown in the exhibit. (Click the Exhibit button.) Employee.EmployeeID is an identity.

A SQL Server 2008 Integration Services (SSIS) package contains one data flow for each of the destination tables. In the Employee Data Flow, an OLE DB Command transformation executes a stored procedure that loads the Employee record and returns the EmployeeID value. You need to accomplish the following tasks:
- Ensure that the EmployeeID is used as a foreign key (FK) in all child tables for the correct Employee record.
- Minimize the number of round trips to the database.
- Ensure that the package performs in the most efficient manner possible.
What should you do?

A. Use a Lookup Transformation in each of the child table data flows to find the EmployeeID based on first name and last name.
B. Store the EmployeeID values in SSIS variables and use the variables to populate the FK columns in each of the child tables.
C. After the Employee table is loaded, write the data to a Raw File Destination and use the raw file as a source for each of the subsequent Data Flows.
D. After the Employee table is loaded, write the data to a Flat File Destination and use the flat file as a source for each of the subsequent Data Flows.

Answer: C

Explanation:
http://technet.microsoft.com/en-us/library/ms141661.aspx
Raw File Destination
The Raw File destination writes raw data to a file. Because the format of the data is native to the destination, the data requires no translation and little parsing. This means that the Raw File destination can write data more quickly than other destinations such as the Flat File and the OLE DB destinations. You can configure the Raw File destination in the following ways:
- Specify an access mode, which is either the name of the file or a variable that contains the name of the file to which the Raw File destination writes.
- Indicate whether the Raw File destination appends data to an existing file that has the same name or creates a new file.
The Raw File destination is frequently used to write intermediary results of partly processed data between package executions. Storing raw data means that the data can be read quickly by a Raw File source and then further transformed before it is loaded into its final destination. For example, a package might run several times, and each time write raw data to files. Later, a different package can use the Raw File source to read from each file, use a Union All transformation to merge the data into one data set, and then apply additional transformations that summarize the data before loading the data into its final destination such as a SQL Server table.
Raw File Source
The Raw File source lets us utilize data that was previously written to a raw data file by a Raw File destination. The raw file format is the native format for Integration Services. Because of this, raw files can be written to disk and read from disk rapidly. One of the goals of Integration Services is to improve processing efficiency by moving data from the original source to the ultimate destination without making any stops in between. However, on some occasions, the data must be staged to disk as part of an Extract, Transform, and Load process. When this is necessary, the raw file format provides the most efficient means of accomplishing this task. (McGraw-Hill - Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))

QUESTION: 9
You design a Business Intelligence (BI) solution by using SQL Server 2008. You create a SQL Server 2008 Integration Services (SSIS) package to perform an extract, transform, and load (ETL) process to load data to a DimCustomer dimension table that contains 1 million rows. Your data flow uses the following components:
- A SQL Destination data flow task to insert new customers
- An OLE DB Command transform that updates existing customers
On average, 25 percent of existing customer records are updated each night. You need to reduce the amount of time required to update customer records. What should you do?

A. Modify the UPDATE statement in the OLE DB Command transform to use the PAGLOCK table hint.
B. Modify the UPDATE statement in the OLE DB Command transform to use the TABLOCK table hint.
C. Stage the data in the data flow. Replace the OLE DB Command transform in the data flow with an Execute SQL task in the control flow.
D. Stage the data in the data flow. Replace the UPDATE statement in the OLE DB Command transform with a DELETE statement followed by an INSERT statement.

Answer: C

Explanation:
Data Flow
Once we set the precedence constraints for the control flow tasks in the package, we can define each of the data flows. This is done on the Data Flow Designer tab. Each data flow task that was added to the control flow has its own layout on the Data Flow Designer tab. We can switch between different data flows using the Data Flow Task drop-down list located at the top of the Data Flow tab. The Data Flow Toolbox contains three types of items: data flow sources, data flow transformations, and data flow destinations. However, on some occasions, the data must be staged to disk as part of an Extract, Transform, and Load process. When this is necessary, the Raw File format provides the most efficient means of accomplishing this task.
Execute SQL Task
The Execute SQL task enables us to execute SQL statements or stored procedures. The contents of variables can be used for input, output, or input/output parameters and the return value. We can also save the result set from the SQL statements or stored procedure in a package variable. This result set could be a single value, a multirow/multicolumn result set, or an XML document.
http://www.sqlservercentral.com/blogs/dknight/archive/2008/12/29/ssis-avoid-ole-db-command.aspx
SSIS – Avoid OLE DB Command
The OLE DB Command runs insert, update or delete statements for each row, while the Execute SQL Task does a Bulk Insert in this instance. That means every single row that goes through your package would have an insert statement run when it gets to an OLE DB Command. So if you know you are dealing with more than just a couple hundred rows per run, then I would highly suggest using a staging table vs. the OLE DB Command.
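To make the staging pattern concrete: the data flow bulk-loads the changed rows into a staging table, and a single Execute SQL task in the control flow then applies one set-based UPDATE instead of a statement per row. This is only a sketch; the Staging.CustomerUpdates table and the DimCustomer column names are assumptions rather than part of the exam scenario.

-- Hypothetical set-based update issued by the Execute SQL task after staging
UPDATE  dim
SET     dim.FirstName    = stg.FirstName,
        dim.LastName     = stg.LastName,
        dim.EmailAddress = stg.EmailAddress
FROM    dbo.DimCustomer AS dim
        INNER JOIN Staging.CustomerUpdates AS stg
            ON stg.CustomerAlternateKey = dim.CustomerAlternateKey;

-- Clear the staging table for the next nightly run
TRUNCATE TABLE Staging.CustomerUpdates;

One UPDATE touching roughly 250,000 rows is typically far cheaper than 250,000 row-by-row UPDATE statements issued through the OLE DB Command transform.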

QUESTION: 10
You design a SQL Server 2008 Analysis Services (SSAS) solution. The data source view has tables as shown in the exhibit. (Click the Exhibit button.) The FactInternetSales measure will be queried frequently based on the city and country of the customer. You need to design a cube that will provide optimal performance for queries. Which design should you choose?

A. Create two dimensions named Customer and Geography from the DimCustomer table and the DimGeography table, respectively. Create a materialized reference relationship between the Geography dimension and the FactInternetSales measure by using the Customer dimension as an intermediate dimension.
B. Create two dimensions named Customer and Geography from the DimCustomer table and the DimGeography table, respectively. Create an unmaterialized reference relationship between the Geography dimension and the FactInternetSales measure by using the Customer dimension as an intermediate dimension.
C. Create a dimension named Customer by joining the DimGeography and DimCustomer tables. Add an attribute relationship from CustomerKey to City and from City to Country. Create a regular relationship in the cube between the Customer dimension and the FactInternetSales measure.
D. Create a dimension named Customer by joining the DimGeography and DimCustomer tables. Add an attribute relationship from CustomerKey to City and from CustomerKey to Country. Create a regular relationship in the cube between the Customer dimension and the FactInternetSales measure.

Answer: C
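The winning design flattens DimCustomer and DimGeography into a single dimension table with a natural CustomerKey to City to Country attribute chain. A data source view named query along the lines below could back that Customer dimension; it is a hedged sketch that borrows AdventureWorks-style column names, and the GeographyKey join column is an assumption because the exhibit is not reproduced here.

-- Hypothetical named query behind the single Customer dimension
SELECT  c.CustomerKey,
        c.FirstName,
        c.LastName,
        g.City,
        g.StateProvinceName,
        g.EnglishCountryRegionName AS Country
FROM    dbo.DimCustomer AS c
        INNER JOIN dbo.DimGeography AS g
            ON g.GeographyKey = c.GeographyKey;

With attribute relationships defined from CustomerKey to City and from City to Country on this dimension, aggregations built at the City or Country level can answer the frequent geography queries without scanning the customer-grain data.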
