Social Web Analytics – Solution Answers

Answers for UWS Social Web Analytics, covering a sample exam paper and the Spring 2013 exam paper. These notes set out the working for each question.

SWA Sample Exam Solution Answers

Q1]

a) Expected Counts

To find the expected counts in a reach table, each entry is calculated using the formula

E_ij = (row i total * column j total) / grand total

Hence the expected counts would be:

          13-24    25-34    35-44    45+
Female    11.625   23.625   32.625   7.125
Male      19.375   39.375   54.375   11.875

b) Chi-squared Statistic and Degrees of Freedom

To calculate the chi-squared statistic of the above data, the following formula is applied:

X^2 = sum over cells of (O_ij - E_ij)^2 / E_ij

where O_ij are the observed counts and E_ij the expected counts. Substituting these in, the value is 8.706 (3 d.p.).

To find the degrees of freedom, we use the formula df = (rows - 1) * (columns - 1). Hence:

df = (2 - 1) * (4 - 1) = 3

c) Proportion Confidence Interval

To calculate a confidence interval for the proportion p of a group, we use the formula

p_hat +/- 1.96 * sqrt( p_hat * (1 - p_hat) / n )

Hence the 95% confidence interval for the proportion is:

p in (0.3079, 0.4421)
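These bounds can be reproduced with a short script; a minimal sketch, assuming the proportion in question is the female share of users, 75 out of the n = 200 implied by the expected-count table in part a):

```python
import math

# Assumed inputs: 75 female users out of n = 200, the row and grand
# totals implied by the expected-count table in part a).
successes, n = 75, 200
p_hat = successes / n                     # 0.375
z = 1.96                                  # 97.5% quantile of N(0, 1)
half = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(round(p_hat - half, 4), round(p_hat + half, 4))  # 0.3079 0.4421
```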

Q2]

a) Removing Stop-words, Removing Punctuation, Case-folding and Stemming

Document   Before                                 After
i          Go dog, go!                            dog
ii         Stop cat, stop                         stop cat stop
iii        The dog stops the cat and the bird.    dog stop cat bird
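A minimal preprocessing sketch that reproduces the table. The stop-word list is an assumption (it must contain "go", "the" and "and", but not "stop", to match the answers above), and a crude strip-final-s stemmer stands in for a real Porter stemmer:

```python
import string

# Assumed stop-word list: it must include "go", "the" and "and" (but not
# "stop") to reproduce the answers in the table above.
STOP_WORDS = {"go", "the", "and"}

def preprocess(text: str) -> str:
    text = text.lower()                                    # case-folding
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    tokens = [t[:-1] if t.endswith("s") else t for t in tokens]  # toy stemmer
    return " ".join(tokens)

docs = ["Go dog, go!", "Stop cat, stop", "The dog stops the cat and the bird."]
for d in docs:
    print(preprocess(d))   # dog / stop cat stop / dog stop cat bird
```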

b) Document-Term Index

       dog   stop   cat   bird
i      1     0      0     0
ii     0     2      1     0
iii    1     1      1     1

c) Cosine Similarity Calculation

Using the term order [dog, stop, cat, bird], the query "stop cat" has the frequency vector [0 1 1 0]. The dot product of each document with the query is calculated as follows:

i.   [1 0 0 0] . [0 1 1 0] = 0
ii.  [0 2 1 0] . [0 1 1 0] = 3
iii. [1 1 1 1] . [0 1 1 0] = 2

Now, to find the cosine similarity score, we need the length (Euclidean norm) of the query vector:

|q| = sqrt(0 + 1 + 1 + 0) = sqrt(2)

With the same process, the length of each document vector is:

i. sqrt(1) = 1    ii. sqrt(4 + 1) = sqrt(5)    iii. sqrt(4) = 2

Finally, the cosine similarity score is each dot product divided by the product of the two lengths:

i. 0 / (1 * sqrt(2)) = 0    ii. 3 / (sqrt(5) * sqrt(2)) = 0.949    iii. 2 / (2 * sqrt(2)) = 0.707
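A quick numerical check of these scores (a sketch using numpy; the rows follow the document-term matrix above):

```python
import numpy as np

# Rows i, ii, iii of the document-term matrix; columns [dog, stop, cat, bird].
docs = np.array([[1, 0, 0, 0],
                 [0, 2, 1, 0],
                 [1, 1, 1, 1]], dtype=float)
query = np.array([0, 1, 1, 0], dtype=float)   # "stop cat"

dots = docs @ query                            # [0. 3. 2.]
cosine = dots / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
print(cosine.round(3))                         # [0.    0.949 0.707]
```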

d) TF-IDF Calculation

To find the TF-IDF value we use the formula

w = ln(1 + tf) * ln(N / df)

where tf is the term's frequency in the document, N the total number of documents, and df the number of documents containing the term. Given that there are 3 documents, N is 3; the term-frequency vector of document iii is [1 1 1 1] and the document-frequency vector is [2 2 2 1].

           Dog             Stop            Cat             Bird
Working    ln(2) ln(3/2)   ln(2) ln(3/2)   ln(2) ln(3/2)   ln(2) ln(3/1)
Answer     0.2810          0.2810          0.2810          0.7615
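A sketch of the same calculation in Python; natural logarithms and the ln(1 + tf) term weighting are assumed, since they reproduce the stated answers:

```python
import math

N = 3                      # total number of documents
tf = [1, 1, 1, 1]          # term frequencies in document iii
df = [2, 2, 2, 1]          # document frequencies of [dog, stop, cat, bird]

weights = [math.log(1 + f) * math.log(N / d) for f, d in zip(tf, df)]
print([round(w, 4) for w in weights])   # [0.281, 0.281, 0.281, 0.7615]
```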

Q3]

a) Adjacency Matrix

     A   B   C   D
A    0   1   0   0
B    1   0   1   1
C    0   1   0   0
D    0   1   0   0

b) Graph Diameter

Path      Length
A -> B    1
A -> C    2
A -> D    2
B -> C    1
B -> D    1
C -> D    2

The diameter of a graph is the longest shortest path. Hence the diameter is 2.

c) Betweenness Centrality

Path           Central Node
A -> B         -
A -> B -> C    B
A -> B -> D    B
B -> C         -
B -> D         -
C -> B -> D    B

Since B is the central node on every shortest path that passes through an intermediate vertex, B is the vertex with the highest betweenness centrality in the graph.
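These results can be checked with networkx (a sketch; the edge list follows the adjacency matrix in part a)):

```python
import networkx as nx

# Star-like graph from part a): B is connected to A, C and D.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D")])

print(nx.diameter(G))                                  # 2
print(nx.betweenness_centrality(G, normalized=False))  # B has the only non-zero score
```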

d) Graph comparison.

From the graph, the degree distribution is as follows:

Connections   0   1   2   3   4
Frequency     0   3   0   1   0

(Bar plot of the above degree distribution omitted.)

By observation, this distribution, with most vertices of low degree and a single hub, is similar to that of a Barabasi-Albert graph.

Q4]

a) Missing Values

        PC1    PC2    PC3    PC4    PC5    PC6    PC7
S.D.    3.763  2.522  2.374  2.224  2.155  1.724  1.438
P.V.    0.347  0.156  0.138  0.121  0.114  0.073  0.051
C.P.    *      0.503  0.641  *      0.876  0.949  1.000

The value of the * at PC1 is 0.347 (the first cumulative proportion equals the first proportion of variance), and the value of the * at PC4 is 0.641 + 0.121 = 0.762.
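A sketch of the check: each proportion of variance is a squared standard deviation over their total, and the cumulative proportions are the running sum:

```python
import numpy as np

sd = np.array([3.763, 2.522, 2.374, 2.224, 2.155, 1.724, 1.438])
pv = sd**2 / np.sum(sd**2)        # proportion of variance per component
cp = np.cumsum(pv)                # cumulative proportion

print(pv.round(3))                # [0.347 0.156 0.138 0.121 0.114 0.073 0.051]
print(cp.round(3))                # [0.347 0.503 0.641 0.762 0.876 0.949 1.   ]
```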

b) Binary Metric

Words     Remembering  Lou  Reed  Lifes  Work  Rock  Musician  Proved  Career  Mean  Striving  Publicity
Tweet 1   1            1    1     1      1     1     1         0       0       0     0         0
Tweet 2   0            1    1     0      0     1     0         1       1       1     1         1

To compute the binary metric, we count the words on which the two tweets disagree, out of all the unique words that appear in either tweet. The tweets share only "Lou", "Reed" and "Rock", disagreeing on the other 9 of the 12 unique words, so the binary metric is 9/12 = 0.75. If stemming were applied to the tweets, it would affect the result: for example, if "Musician" were stemmed to a root shared with a word in the other tweet, the number of common words would increase and the metric would decrease.
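A sketch of the calculation, treating the binary metric as the proportion of disagreements among words present in at least one tweet (the convention of R's dist(method = "binary")):

```python
t1 = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
t2 = [0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# Binary metric: proportion of words (present in at least one tweet)
# on which the two tweets disagree.
differ = sum(a != b for a, b in zip(t1, t2))
either = sum(a or b for a, b in zip(t1, t2))
print(differ / either)   # 9/12 = 0.75
```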

Q5]

Given the tweet counts below:

         Day 1  Day 2  Day 3  Day 4  Day 5  Day 6  Day 7
Week 1   36     49     57     74     74     54     61
Week 2   58     89     115    89     117    109    93
Week 3   98     145    140    140    156    115    124

The trend and periodic components below were obtained by applying a square-root transformation to the counts, followed by a moving average.

a) Computing Trends

We are given:

Trends

         Day 1  Day 2  Day 3  Day 4  Day 5  Day 6  Day 7
Week 1                        7.56   7.79   8.14   8.59
Week 2   8.71   9.03   9.47   *      10.06  10.43  10.59
Week 3   10.93  11.17  11.21  11.42

To compute the missing trend at Week 2 Day 4, we take the average of the square roots of the counts over a 7-day window centred on Week 2 Day 4, using the formula

T_t = (1/7) * sum over i = -3..3 of sqrt(x_{t+i})

Hence the missing value can be found:

T = (sqrt(58) + sqrt(89) + sqrt(115) + sqrt(89) + sqrt(117) + sqrt(109) + sqrt(93)) / 7 = 9.73

Therefore the missing value at Week 2 Day 4 is 9.73.
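A sketch of the computation:

```python
import math

week2 = [58, 89, 115, 89, 117, 109, 93]   # counts for Week 2, Days 1-7

# Centred 7-day moving average of the square-rooted counts, at Day 4.
window = [math.sqrt(x) for x in week2]
trend_day4 = sum(window) / 7
print(round(trend_day4, 2))               # 9.73
```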

b) Computing Periodic

We are given:

Periodic

           Day 1   Day 2  Day 3  Day 4  Day 5  Day 6  Day 7
Periodic   -1.125  0.578  0.877  0.323  0.724  *      -0.925

To compute the missing periodic value at Day 6, we use the fact that the periodic components must sum to zero. The known values sum to

-1.125 + 0.578 + 0.877 + 0.323 + 0.724 - 0.925 = 0.452

so the missing value is 0 - 0.452 = -0.452.

Therefore the missing value at Day 6 is -0.452.

Q6]

a) Explanation

The reason a square-root transformation is advisable for count data is that count data is most likely Poisson distributed, and the variance of a Poisson distribution equals its mean. This means a sample with a high mean will also have a high variance while a sample with a low mean will have a low variance, so groups being compared do not share a common variance. The standard hypothesis tests assume equal variances across groups, so applying them to raw counts gives unreliable test values.

Hence, if we square-root the count data, the variance is stabilised (it becomes approximately constant, regardless of the mean).
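A quick simulation illustrates the stabilisation (a sketch; the means 5 and 50 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for mean in (5, 50):
    x = rng.poisson(mean, size=100_000)
    # Raw variance tracks the mean; the sqrt-transformed variance is
    # close to 1/4 for both samples.
    print(mean, x.var().round(2), np.sqrt(x).var().round(3))
```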

b) Sum of Squares Interaction Calculation

Given the cell means:

              Before   After
Company       54.91    60.20
Competitor    49.87    50.15

We first label the cell means:

                  Before   After
Competitor (C)    C_B      C_A
Company (I)       I_B      I_A

Now we need to check whether the competitor's sentiment also changed, i.e. whether C_A - C_B = 0: if C_A - C_B is not zero, then we know there are external influences at work that are not related to the company itself.

To find the interaction contrast we use the formula

L = (I_A - I_B) - (C_A - C_B)

Hence L = (60.20 - 54.91) - (50.15 - 49.87) = 5.29 - 0.28 = 5.01, so the interaction contrast is 5.01.

To find the Sum of Squares for the interaction, the formula SS_I = n * L^2 / 4 is needed, where n is the number of observations per cell. With n = 4 per cell (consistent with the 12 error degrees of freedom in part c)), SS_I = 4 * 5.01^2 / 4 = 25.10.

c) F-Statistic Calculation and Degree of Freedom

Since we are given the error sum of squares, the F-statistic can be found using the formula

F = MS_I / MS_E

where MS_I = SS_I / df_I with df_I = 1, and MS_E = SS_E / df_E with df_E = N - 4 = 12.

Therefore the F-statistic value is 2.3797 and the degrees of freedom are 1 and 12.
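A sketch of the contrast arithmetic; n = 4 observations per cell is an assumption inferred from the 12 error degrees of freedom:

```python
company = {"before": 54.91, "after": 60.20}
competitor = {"before": 49.87, "after": 50.15}

# Interaction contrast: the company's change minus the competitor's change.
L = (company["after"] - company["before"]) - (competitor["after"] - competitor["before"])
print(round(L, 2))               # 5.01

n = 4                            # assumed observations per cell (df_error = 12)
ss_interaction = n * L**2 / 4    # sum of squares for a +/-1 contrast over 4 cells
print(round(ss_interaction, 2))  # 25.1
```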

Q7]

a) Sum of Squares Between Cluster Calculation

Given the centre points of the clusters and the cluster associated with each point, with 10 centred data points:

         x1       x2       Cluster
[1, ]    -5.902    1.5777   2
[2, ]    -6.111    2.9893   2
[3, ]    -4.946    0.5736   2
[4, ]    -4.788    2.1324   2
[5, ]    -4.699    1.0887   2
[6, ]     6.237   -1.2161   1
[7, ]     4.104   -3.8206   1
[8, ]     5.850   -1.6509   1
[9, ]     4.709   -0.6621   1
[10, ]    5.546   -1.0120   1

And Between Cluster Sum of Squares:

k     1     2   3      4      5      6
SSB   0.0   *   315.2  318.3  319.6  320.9

If the cluster centres are not given, they can be calculated as the mean of the data points in each cluster:

c_k = (1/n_k) * sum of the x_i assigned to cluster k

Hence the formula generates the cluster centres shown below:

     x1      x2
1     5.289  -1.672
2    -5.289   1.672

The Between-Cluster Sum of Squares is given by

SSB = sum over clusters k of n_k * ||c_k - x_bar||^2

where n_k is the number of data points in cluster k and x_bar is the overall mean (here [0, 0], since the data are centred). Cluster 1 and Cluster 2 both have 5 data points, so

SSB = 5 * (5.289^2 + 1.672^2) + 5 * (5.289^2 + 1.672^2) = 307.7

Therefore the value of SSB at k = 2 is 307.7.

b) Plot Elbow Graph

Now we know the missing value in the above question we can plot these data:

k     1     2      3      4      5      6
SSB   0.0   307.7  315.2  318.3  319.6  320.9
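A sketch of the SSB computation for k = 2 (centres and cluster sizes from above; the data are centred, so the overall mean is the origin):

```python
import numpy as np

centers = np.array([[ 5.289, -1.672],
                    [-5.289,  1.672]])
sizes = np.array([5, 5])               # data points per cluster
overall_mean = np.zeros(2)             # data are centred

ssb = np.sum(sizes * np.sum((centers - overall_mean)**2, axis=1))
print(round(ssb, 1))                   # 307.7
```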

c) Cluster Determination and Explanation

The number of clusters most suitable for this data is k = 2: the Between-Cluster Sum of Squares rises sharply from 0.0 to 307.7 in going from 1 to 2 clusters, and afterwards shows only a small, steady increase (307.7 up to 320.9). The elbow of the curve is therefore at k = 2.

Q8]

a) Probability Transition Matrix

In the graph shown, the Probability Transition Matrix would be:

        [ ,1]   [ ,2]   [ ,3]   [ ,4]   [ ,5]
A       0       1/3     1/3     1       0
B       1/3     0       1/3     0       1/2
C       1/3     1/3     0       0       1/2
D       1/3     0       0       0       0
E       0       1/3     1/3     0       0

Columns 1 to 5 correspond to vertices A to E: column j holds the probabilities of stepping from vertex j, so each column sums to 1.

b) Graph Explanation

The graph has no arrows on its edges, indicating that it is undirected, and there is a path between every pair of vertices. A random walk can therefore reach every vertex and keep moving around the network indefinitely without being confined to any vertex, so the graph is ergodic.

c) State Distribution and Random Walks of 2 Steps

Given the probability transition matrix from part a), let T denote that matrix.


Since we begin at vertex A, the initial state distribution is

p_0 = (1, 0, 0, 0, 0)^T

Each step of the random walk is found by the matrix multiplication

p_{t+1} = T p_t

Hence the first step of the random walk is p_1 = T p_0, which simply picks out the first column of T:

p_1 = (0, 1/3, 1/3, 1/3, 0)^T

The second step is performed with the same process, p_2 = T p_1:

p_2 = (5/9, 1/9, 1/9, 0, 2/9)^T

NOTE: matrix multiplication is performed as (T p)_i = sum over j of T_ij p_j.
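A sketch of both steps with numpy:

```python
import numpy as np

# Column-stochastic transition matrix from part a); column j holds the
# probabilities of stepping away from vertex j (vertex order A..E).
T = np.array([[0,   1/3, 1/3, 1, 0  ],
              [1/3, 0,   1/3, 0, 1/2],
              [1/3, 1/3, 0,   0, 1/2],
              [1/3, 0,   0,   0, 0  ],
              [0,   1/3, 1/3, 0, 0  ]])

p0 = np.array([1, 0, 0, 0, 0])   # walk starts at vertex A
p1 = T @ p0                      # first step:  [0, 1/3, 1/3, 1/3, 0]
p2 = T @ p1                      # second step: [5/9, 1/9, 1/9, 0, 2/9]
print(p1, p2, sep="\n")
```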

d) Stationary Distribution

The stationary distribution is the probability vector p satisfying

T p = p

Because the graph is undirected, the stationary probability of each vertex is proportional to the number of edges connected to that vertex. The vertex degrees are A = 3, B = 3, C = 3, D = 1 and E = 2, and the graph has 6 edges (12 edge endpoints in total), so

p = (3/12, 3/12, 3/12, 1/12, 2/12)^T = (1/4, 1/4, 1/4, 1/12, 1/6)^T

Matrix multiplication confirms that T p = p. Hence the answer is p = (1/4, 1/4, 1/4, 1/12, 1/6)^T.

END OF EXAMINATION PAPER

SWA Spring 2013 Exam Solution Answers

Q1]

a) Problem Statement

Given the data about the counts of reach by age groups and gender:

        13-17  18-24  25-34  35-44  45-54  55-65  65+  Total
F       5      9      8      10     6      2      4    44
M       8      20     22     21     6      7      12   96
Total   13     29     30     31     12     9      16   140

One problem with using the chi-squared test on this data is that some of the expected counts are less than 5. The test is only reliable when all the expected counts are larger than about 5.

b) Reduced Table

        13-24  25-34  35-44  45+  Total
F       14     8      10     12   44
M       28     22     21     25   96
Total   42     30     31     37   140

c) Expected Counts

To find the expected counts in a reach table, each entry is calculated using the formula

E_ij = (row i total * column j total) / grand total

Hence the expected counts would be:

          13-24   25-34    35-44    45+
Female    13.2    9.4286   9.7429   11.6286
Male      28.8    20.5714  21.2571  25.3714

d) Chi-squared Statistic and Degrees of Freedom

To calculate the chi-squared statistic of the above data, the following formula is applied:

X^2 = sum over cells of (O_ij - E_ij)^2 / E_ij

Substituting the observed counts from part b) and the expected counts from part c), the value is 0.4136 (4 d.p.).

To find the degrees of freedom, we use the formula df = (rows - 1) * (columns - 1). Hence:

df = (2 - 1) * (4 - 1) = 3
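The whole test can be reproduced with scipy (a sketch; the observed counts come from the reduced table in part b)):

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[14,  8, 10, 12],    # Female: 13-24, 25-34, 35-44, 45+
                     [28, 22, 21, 25]])   # Male

stat, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(round(stat, 4), dof)   # 0.4136 3
print(expected.round(4))     # matches the table in part c)
```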

Q2]

a) Adjacency Matrix

     A   B   C   D
A    0   1   1   1
B    1   0   0   1
C    1   0   0   0
D    1   1   0   0

b) Degree Distribution

From the graph, the degree distribution is as follows:

Connections   0   1   2   3   4
Frequency     0   1   2   1   0

(Bar plot of the above degree distribution omitted.)

c) Closeness Centrality

To find the closeness centrality of each vertex, we consider the smallest total number of steps needed to reach every other vertex in the network, i.e. the sum of the shortest-path distances from that vertex:

Vertex            A   B   C   D
Total distance    3   4   5   4
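A sketch of the distance totals with networkx (edges taken from the adjacency matrix in part a)):

```python
import networkx as nx

G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "D")])

# Total shortest-path distance from each vertex to all others;
# the most central vertex has the smallest total.
for v in G:
    dist = nx.single_source_shortest_path_length(G, v)
    print(v, sum(dist.values()))   # A 3, B 4, C 5, D 4
```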

d) Central Decision

The most central vertex of the network can be read straight from the table in part c): it is the vertex with the lowest total distance.

In this case, the answer is "A".

Q3]

a) Missing Values

        PC1    PC2    PC3    PC4    PC5    PC6    PC7
S.D.    3.923  3.069  2.867  2.579  2.038  1.887  1.125
P.V.    0.316  0.194  0.169  0.137  0.085  0.073  0.026
C.P.    *      0.510  0.679  *      0.901  0.974  1.000

The value of the * at PC1 is 0.316 (the first cumulative proportion equals the first proportion of variance), and the value of the * at PC4 is 0.679 + 0.137 = 0.816.

b) Binary Metric

Words     assault  assistance  disadvantaged  university  students  begins  believe  more  doing  better
Tweet 1   1        1           1              1           1         1       0        0     0      0
Tweet 2   0        0           0              1           1         0       1        1     1      1

To compute the binary metric, we count the words on which the two tweets disagree, out of all the unique words that appear in either tweet. The tweets share only "university" and "students", disagreeing on the other 8 of the 10 unique words, so the binary metric is 8/10 = 0.8.

Q4]

a) Sum of Squares Within Cluster Calculation

Given the centre points of the clusters and the cluster associated with each point, with 10 centred data points:

         x1        x2      Cluster
[1, ]    -1.7016   -3.522   1
[2, ]    -1.9107   -2.111   1
[3, ]    -0.7456   -4.526   1
[4, ]    -0.5877   -2.968   1
[5, ]    -3.4993    2.989   1
[6, ]    -2.5633    3.684   1
[7, ]    -4.6964    1.079   1
[8, ]    -2.9502    3.249   1
[9, ]     8.9087    1.238   2
[10, ]    9.745     0.888   2

And Within Cluster Sum of Squares:

k     1        2   3       4      5      6
SSW   314.077  *   11.381  5.511  3.076  1.875

If the cluster centres are not given, they can be calculated as the mean of the data points in each cluster:

c_k = (1/n_k) * sum of the x_i assigned to cluster k

Hence the formula generates the cluster centres shown below:

     x1      x2
1    -2.332  -0.2657
2     9.327   1.0629

The Between-Cluster Sum of Squares is given by

SSB = sum over clusters k of n_k * ||c_k - x_bar||^2

where x_bar is the overall mean (here [0, 0], since the data are centred). Cluster 1 has 8 data points and Cluster 2 has 2 data points, so

SSB = 8 * (2.332^2 + 0.2657^2) + 2 * (9.327^2 + 1.0629^2) = 220.3

Therefore the value of SSB is 220.3.

To find the missing SSW value we use SSW = SST - SSB, where SST = 314.077 is the total sum of squares (the SSW at k = 1). Hence SSW = 314.077 - 220.3 = 93.75.

b) Plot Elbow Graph

Now we know the missing value in the above question we can plot these data:

k     1        2      3       4      5      6
SSW   314.077  93.75  11.381  5.511  3.076  1.875

c) Cluster Determination and Explanation

The number of clusters most suitable for this data is k = 3: the Within-Cluster Sum of Squares drops sharply from 314.077 to 93.75 and then to 11.381 as the number of clusters increases to 3, after which adding further clusters gives only small, steady decreases. The elbow of the curve is therefore at k = 3.

Q5]

a) Probability Transition Matrix

In the graph shown, the Probability Transition Matrix would be:

        [ ,1]   [ ,2]   [ ,3]   [ ,4]   [ ,5]
A       0       1       1/2     1/2     1/2
B       1       0       1/2     0       0
C       0       0       0       0       0
D       0       0       0       0       1/2
E       0       0       0       1/2     0

Columns 1 to 5 correspond to vertices A to E: column j holds the probabilities of stepping from vertex j, so each column sums to 1.

b) Graph Explanation

Since the graph is directed, it cannot be assumed to be ergodic; we must check whether every vertex can be reached from every other vertex. With close observation of the graph: vertices A and B can only travel to each other; vertex C can travel to A and B, but no vertex can travel to C; and vertices D and E can travel to A and B but not to C. Therefore the graph is deemed non-ergodic, as neither A, B, D nor E is able to travel to vertex C.

c) Random Surfer Probability Transition Matrix

Given the probability transition matrix T from part a), the Random Surfer Probability Transition Matrix also requires the jump matrix J, whose entries are all 1/5 (there are 5 vertices), and the jump probability alpha given in the question.

The Random Surfer Probability Transition Matrix is then obtained by the formula

M = (1 - alpha) T + alpha J

Substituting T, J and alpha yields the Random Surfer Probability Matrix.
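A sketch of the construction and of finding its stationary distribution by power iteration; the damping value alpha = 0.15 is an assumption for illustration, since the exam supplies its own:

```python
import numpy as np

# Column-stochastic transition matrix from part a) (columns = from A..E).
T = np.array([[0, 1, 1/2, 1/2, 1/2],
              [1, 0, 1/2, 0,   0  ],
              [0, 0, 0,   0,   0  ],
              [0, 0, 0,   0,   1/2],
              [0, 0, 0,   1/2, 0  ]])

alpha = 0.15                       # assumed jump probability
J = np.full((5, 5), 1/5)           # jump matrix: teleport anywhere uniformly
M = (1 - alpha) * T + alpha * J    # random-surfer transition matrix

p = np.full(5, 1/5)                # start from the uniform distribution
for _ in range(100):               # power iteration towards the stationary p
    p = M @ p
print(p.round(4))                  # stationary distribution of the surfer
```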

d) Stationary Distribution

The stationary distribution is the probability vector p satisfying

M p = p

where M is the Random Surfer Transition Matrix from part c). Multiplying the hypothesised stationary distribution given in the question by M and comparing the result with the hypothesised vector shows that the two differ in all of their values. This indicates that the hypothesised vector is not preserved by the matrix, and hence it is not the stationary distribution of the Random Surfer Probability Transition Matrix.

Q6]

a) Computing Trends

Since the count aggregates are gathered in 4 periods per day, it is safe to take the moving-average window to be 4 points wide.

We are given:

Trend

        Period 1  Period 2  Period 3  Period 4
Day 1                       6.78      *
Day 2   7.65      7.87      8.08      8.17
Day 3   8.42      8.68

To calculate the missing trend with an even window size of 4, a centred moving average is used; it spans 5 points, with the two end points given half weight:

T_t = (0.5 x_{t-2} + x_{t-1} + x_t + x_{t+1} + 0.5 x_{t+2}) / 4

Applying this to the square-rooted counts centred on Day 1 Period 4 gives the missing value.

Therefore the missing value at Day 1 Period 4 is 7.3301.

b) Computing Periodic

We are given:

Periodic

           Period 1  Period 2  Period 3  Period 4
Periodic   -0.610    0.235     0.541     *

To compute the missing periodic value at Period 4, we use the fact that the periodic components must sum to zero. The known values sum to

-0.610 + 0.235 + 0.541 = 0.166

so the missing value is 0 - 0.166 = -0.166.

Therefore the missing value at Period 4 is -0.166.

Q7]

a) Explanation

The reason a square-root transformation is advisable for count data is that count data is most likely Poisson distributed, and the variance of a Poisson distribution equals its mean. This means a sample with a high mean will also have a high variance while a sample with a low mean will have a low variance, so groups being compared do not share a common variance. The standard hypothesis tests assume equal variances across groups, so applying them to raw counts gives unreliable test values.

Hence, if we square-root the count data, the variance is stabilised (it becomes approximately constant, regardless of the mean).

b & c) Sum of Squares Interaction Calculation

Given the cell means:

              Before   After
Company       29.27    33.98
Competitor    24.86    23.92

We first label the cell means:

                  Before   After
Competitor (C)    C_B      C_A
Company (I)       I_B      I_A

Now we need to check whether the competitor's sentiment also changed, i.e. whether C_A - C_B = 0: if C_A - C_B is not zero, then we know there are external influences at work that are not related to the company itself.

To find the interaction contrast we use the formula

L = (I_A - I_B) - (C_A - C_B)

Hence L = (33.98 - 29.27) - (23.92 - 24.86) = 4.71 + 0.94 = 5.65; using the unrounded means the value is 5.6572.

To find the Sum of Squares for the interaction, the formula SS_I = n * L^2 / 4 is needed, where n is the number of observations per cell. With n = 3 per cell (consistent with the 8 error degrees of freedom in part d)), SS_I = 3 * 5.6572^2 / 4 = 24.00.

d) F-Statistic Calculation and Degrees of Freedom

Since we are given the error sum of squares, the F-statistic can be found using the formula

F = MS_I / MS_E

where MS_I = SS_I / df_I with df_I = 1, and MS_E = SS_E / df_E with df_E = N - 4 = 8.

Therefore the F-statistic value is 1.46538 and the degrees of freedom are 1 and 8.

Q8]

a) Word Distribution

Given the tweets shown:

Positive

My teeth shine #funfun

#funfun love my fun teeth

#funfun is fun fun

Negative

No shine #funfun

No love fun fun

Where is my teeth shine #funfun

Now to tabulate the words:

           #funfun  fun  is  love  my  no  shine  teeth  where
Positive   3        2    1   1     2   0   1      2      0
Negative   2        1    1   1     1   2   2      1      1

b) Word Sentiment

The sentiment of the tweet "fun teeth shine" is computed from the probability of each word's state (present, or absent, written ~) given the class:

           ~#funfun  fun  ~is  ~love  ~my  ~no  shine  teeth  ~where
Positive   0/3       2/3  2/3  2/3    1/3  3/3  1/3    2/3    3/3
Negative   1/3       1/3  2/3  2/3    2/3  1/3  2/3    1/3    2/3

NOTE: WE ARE ONLY ACCOUNTING FOR "FUN", "TEETH" AND "SHINE" AS PRESENT, WHEREAS ALL OTHER WORDS CONTRIBUTE ABSENCE PROBABILITY VALUES.

Now to apply the Rule of Succession:

           ~#funfun  fun  ~is  ~love  ~my  ~no  shine  teeth  ~where
Positive   1/5       2/3  2/3  2/3    1/3  4/5  1/3    2/3    4/5
Negative   1/3       1/3  2/3  2/3    2/3  1/3  2/3    1/3    2/3

NOTE: THE RULE OF SUCCESSION (x + 1) / (n + 2) IS ONLY APPLIED TO PROBABILITIES OF 0 AND 1, HENCE THE CHANGED VALUES AT "~#funfun", "~no" AND "~where" IN THE POSITIVE ROW.

To determine the probability ratio of each word state, the following formula is applied:

ratio = P(word state | positive) / P(word state | negative)

Hence the values are:

        ~#funfun  fun  ~is  ~love  ~my  ~no  shine  teeth  ~where
Ratio   0.6       2.0  1.0  1.0    0.5  2.4  0.5    2.0    1.2

Now to find the log probability ratio of each value, we take the natural logarithm of each ratio:

log ratio = ln( P(word state | positive) / P(word state | negative) )

Hence the values are:

            ~#funfun  fun     ~is     ~love   ~my      ~no     shine    teeth   ~where
Log ratio   -0.5108   0.6931  0.0000  0.0000  -0.6931  0.8755  -0.6931  0.6931  0.1823

NOTE: IT MUST BE DONE WITH LOG NATURAL (LN).

The log likelihood ratio of the tweet is given by summing the log probability ratios of all the word states:

LLR = -0.5108 + 0.6931 + 0.0000 + 0.0000 - 0.6931 + 0.8755 - 0.6931 + 0.6931 + 0.1823 = 0.547

Hence the answer is 0.547.
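A sketch of the whole classification pipeline (counts from part a); the rule of succession is applied exactly where the working above applies it, i.e. only to probabilities of 0 or 1):

```python
import math

words = ["#funfun", "fun", "is", "love", "my", "no", "shine", "teeth", "where"]
pos_counts = [3, 2, 1, 1, 2, 0, 1, 2, 0]   # positive tweets containing each word (of 3)
neg_counts = [2, 1, 1, 1, 1, 2, 2, 1, 1]   # negative tweets containing each word (of 3)
present = {"fun", "teeth", "shine"}         # words of the tweet being classified
n = 3                                       # tweets per class

def prob(count, word):
    # Probability of the observed state (presence or absence) of the word.
    x = count if word in present else n - count
    if x in (0, n):                         # rule of succession for 0/n and n/n
        return (x + 1) / (n + 2)
    return x / n

llr = sum(math.log(prob(p, w) / prob(q, w))
          for w, p, q in zip(words, pos_counts, neg_counts))
print(round(llr, 3))   # 0.547
```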

c) Tweet explanation

Since the log likelihood ratio (0.547) is greater than 0, the positive class is the more likely, and the tweet is classified as positive.

END OF EXAMINATION PAPER