Course link: CS224W: Machine Learning with Graphs

Course video: [Course] Stanford CS224W: Machine Learning with Graphs (2019 Autumn | English)


The Web (Internet) can essentially be viewed as a directed graph: the nodes are web pages, and the edges are hyperlinks between pages.

For such a directed graph, we consider two questions:

- Given a node $v$, which nodes can reach $v$?
- Starting from node $v$, which nodes can be reached?

For the first question we use the set $In(v) = \{w \mid w \text{ can reach } v\}$; in the figure below, for example, $In(A) = \{A, B, C, E, G\}$. For the second we use $Out(v) = \{w \mid v \text{ can reach } w\}$; here $Out(A) = \{A, B, C, D, F\}$.

Any directed graph (including the web) can be described in terms of the two types below:

Type | Description
---|---
Strongly connected graph | Every node can reach every other node via a path; e.g. $In(A) = Out(A) = \{A, B, C, D, E\}$.
Directed acyclic graph (DAG) | No cycles: if node $u$ can reach node $v$, then $v$ cannot reach node $u$.

This leads us to the key notion for complex networks: the **Strongly Connected Component (SCC)**. An SCC is a set of nodes $S$ in which every pair of nodes can reach each other, and $S$ is maximal: it is not contained in any larger set with the same property.

For example, the SCCs of the figure above are: $\{A, B, C, G\}$, $\{D\}$, $\{E\}$, $\{F\}$.

**Fact: every directed graph is a DAG on its SCCs**, i.e. if we collapse each SCC into a single super-node, the resulting graph has no cycles. (Sketch of proof: if two SCCs lay on a common cycle, every node on that cycle could reach every other, so the two SCCs would merge into one, contradicting maximality.)

So how do we find the SCC containing a given node $v$? By definition, $In(v)$ is the set of all nodes that can reach $v$, and $Out(v)$ is the set of all nodes reachable from $v$. The SCC containing $v$ is therefore

$SCC(v) = Out(v) \cap In(v) = Out(v, G) \cap Out(v, G')$

where $G'$ is $G$ with every edge reversed. Both $Out(v, G)$ and $Out(v, G')$ can be computed with depth-first search (DFS) or breadth-first search (BFS).
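As a sketch of this recipe, here is a minimal implementation: one BFS on $G$, one BFS on the reversed graph $G'$, then the intersection. The toy graph is hypothetical and only loosely mirrors the figures above.

```python
from collections import deque

def bfs_reachable(adj, start):
    """BFS from `start`; returns Out(start): every node reachable from it."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def scc_of(adj, v):
    """SCC(v) = Out(v, G) ∩ Out(v, G'), where G' reverses every edge of G."""
    radj = {}
    for u, nbrs in adj.items():
        for w in nbrs:
            radj.setdefault(w, []).append(u)
    return bfs_reachable(adj, v) & bfs_reachable(radj, v)

# Hypothetical toy graph: A, B, C, G form a cycle; D, E, F do not.
g = {"A": ["B"], "B": ["C"], "C": ["G", "D"], "G": ["A"],
     "D": ["E"], "E": [], "F": ["D"]}
print(sorted(scc_of(g, "A")))  # ['A', 'B', 'C', 'G']
print(sorted(scc_of(g, "D")))  # ['D']
```

Note that computing all SCCs this way costs one BFS pair per node; linear-time algorithms such as Tarjan's or Kosaraju's do better, but the intersection view above is the one used in the lecture.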

When we compute $In(v)$ and $Out(v)$ with BFS, the number of nodes visited differs dramatically from node to node: BFS either visits many nodes or very few.

Experiments on the web graph show that for a random node $v$, $Out(v) \approx 100$ million nodes (about 50% of all nodes), and likewise $In(v) \approx 100$ million (about 50%). The largest strongly connected component contains 56 million nodes, roughly 28% of all nodes.

Putting this together, the structure of the web graph can be drawn in the famous "bowtie" form:

The starting point of PageRank is that the nodes in a network are not all equally important. We will introduce the following link-analysis methods for computing the importance of nodes in a graph:

- PageRank
- Personalized PageRank
- Random Walk with Restarts

Naturally, a node's importance is reflected by its links: a page is more important if it has more links. Links come in two kinds, in-coming links and out-going links. Moreover, the importance of the pages that link to a node also reflects that node's own importance: links from important pages count more. This makes importance a recursive problem.

In short:

*A "vote" from an important page is worth more.*

Each node passes its importance evenly to the nodes it links to. If node $i$ has importance $r_i$ and $d_i$ out-links, then each out-link carries importance $r_i / d_i$. For example, node $j$ in the figure has importance $r_j = \frac{r_i}{3} + \frac{r_k}{4}$.

That is, *a page is important if it is pointed to by other important pages*. We can then define the rank of node $j$:

$r_j = \sum_{i \to j} \frac{r_i}{d_i}$

where $d_i$ is the out-degree of node $i$.

Let’s give an example:

From the figure we can read off the flow equations:

$r_y = \frac{r_y}{2} + \frac{r_a}{2}, \quad r_a = \frac{r_y}{2} + r_m, \quad r_m = \frac{r_a}{2}$

This gives three equations in three unknowns; the equations are not independent, so we also impose the normalization $r_y + r_a + r_m = 1$, after which the system can be solved.
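Concretely, the equation $r_y = \frac{r_y}{2} + \frac{r_a}{2}$ gives $r_y = r_a$, and $r_m = \frac{r_a}{2}$; substituting into $r_y + r_a + r_m = 1$ yields $\frac{5}{2} r_a = 1$, i.e.

$r_y = \frac{2}{5}, \quad r_a = \frac{2}{5}, \quad r_m = \frac{1}{5}$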

We can write the flow equations above in matrix form:

$r = M \cdot r, \quad r_j = \sum_{i \to j} \frac{r_i}{d_i}$

where $M$ is a column-stochastic matrix: if node $j$ has out-degree $d_j$ and $j \to i$, then $M_{ij} = \frac{1}{d_j}$, so every column of $M$ sums to 1. Here $r_i$ is the importance of page $i$, with $\sum_i r_i = 1$.
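As a small sanity check, we can build $M$ explicitly for the example above (assuming, as the flow equations suggest, the edges y→{y, a}, a→{y, m}, m→{a}) and verify that its columns sum to 1 and that $r = (2/5, 2/5, 1/5)$ satisfies $r = M \cdot r$:

```python
# Column-stochastic matrix M for the assumed y/a/m graph (order: y, a, m).
# Edges: y→y, y→a (d_y = 2); a→y, a→m (d_a = 2); m→a (d_m = 1).
M = [
    [0.5, 0.5, 0.0],  # row y: receives r_y/2 from y and r_a/2 from a
    [0.5, 0.0, 1.0],  # row a: receives r_y/2 from y and r_m from m
    [0.0, 0.5, 0.0],  # row m: receives r_a/2 from a
]

# Every column sums to 1: each node splits its importance over its out-links.
for j in range(3):
    assert abs(sum(M[i][j] for i in range(3)) - 1.0) < 1e-12

# r = (2/5, 2/5, 1/5) solves r = M·r (and sums to 1).
r = [0.4, 0.4, 0.2]
Mr = [sum(M[i][j] * r[j] for j in range(3)) for i in range(3)]
print(Mr)  # [0.4, 0.4, 0.2]
```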

Now imagine someone surfing the web at random. At time $t$ they are on some page $i$; at time $t + 1$ they pick one of page $i$'s out-links uniformly at random and follow it to a page $j$; as they keep surfing, this process repeats indefinitely.

Let the vector $p(t)$ denote the state probabilities: $p(t)_i$ is the probability of being on page $i$ at time $t$, so $p(t)$ is a probability distribution over pages.

Then at time $t + 1$ we have $p(t+1) = M \cdot p(t)$. If $p(t+1) = M \cdot p(t) = p(t)$, we call $p(t)$ a stationary distribution of the random walk.

Since the flow equations in matrix form read $r = M \cdot r$, the vector $r$ is an eigenvector of the matrix $M$ (with eigenvalue 1). We can solve for $r$ using **power iteration**.

**Power Iteration**

Suppose we have a web graph with $N$ nodes, where the nodes are pages and the edges are hyperlinks. Power iteration is a simple iterative scheme for computing $r$: assign $r$ an initial value, then repeat the following steps until convergence ($\|r^{(t+1)} - r^{(t)}\|_1 < \varepsilon$):

- Initialization: $r^{(0)} = [\frac{1}{N}, \cdots, \frac{1}{N}]^T$.
- Iteration: $r^{(t+1)} = M \cdot r^{(t)}$, i.e. $r_j = \sum_{i \to j} \frac{r_i}{d_i}$.
- Termination: stop when $\|r^{(t+1)} - r^{(t)}\|_1 < \varepsilon$.

But does this algorithm converge? Does it converge to what we want? And are the results reasonable?

In fact, there are two problems with this algorithm:

*(1) Some pages are dead ends: they have no out-links.* Importance "leaks out" at such pages, since the corresponding column of $M$ is all zeros.

Solution: from a dead end, teleport to a uniformly random page with probability 1.

*(2) Spider traps: a group of pages whose out-links all stay within the group.* The trap eventually absorbs all of the importance.

Solution: at each step, the surfer follows a random out-link with probability $\beta$ and teleports to a uniformly random page with probability $1 - \beta$.

Combining these fixes, we arrive at the PageRank equation:

$r_j = \sum_{i \to j} \beta \frac{r_i}{d_i} + (1 - \beta) \frac{1}{N}$

Written in matrix form:

$r = A \cdot r$

where $A$ is the Google matrix, $A = \beta M + (1 - \beta) \left[\frac{1}{N}\right]_{N \times N}$. In practice $\beta$ is usually taken to be 0.8 or 0.9.
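To make this concrete, here is a small sketch that builds the Google matrix $A$ for an assumed spider-trap variant of the y/a/m example (m now links only to itself) and runs power iteration. Without teleportation, m would absorb all the importance; with $\beta = 0.8$ the stationary vector is $(7/33, 5/33, 21/33)$:

```python
N, beta = 3, 0.8
# Assumed spider-trap variant (order y, a, m): y→{y, a}, a→{y, m}, m→{m}.
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 1.0]]  # m keeps all of its weight: a spider trap

# Google matrix: A = beta*M + (1 - beta) * (1/N) in every entry.
A = [[beta * M[i][j] + (1 - beta) / N for j in range(N)] for i in range(N)]

r = [1.0 / N] * N
for _ in range(200):  # fixed iteration count is plenty for a 3-node graph
    r = [sum(A[i][j] * r[j] for j in range(N)) for i in range(N)]
print([round(x, 3) for x in r])  # [0.212, 0.152, 0.636], i.e. (7/33, 5/33, 21/33)
```

The trap node m still ranks highest, but y and a retain nonzero importance thanks to the $(1 - \beta)$ teleport term.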

The following is an example:

The main computation in PageRank is the matrix-vector multiplication $r_{new} = A \cdot r_{old}$. The problem is that $A$ is a dense matrix, so when the number of nodes is large it takes an enormous amount of memory.

We can rearrange the PageRank equation into the form:

$r = \beta M \cdot r + \left[\frac{1 - \beta}{N}\right]_N$

Here the matrix $M$ is sparse, while $\left[\frac{1 - \beta}{N}\right]_N$ is just a constant vector. So in each iteration we compute $r_{new} = \beta M \cdot r_{old}$ using the sparse $M$, and then add the constant $(1 - \beta) / N$ to every entry of $r_{new}$.
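This sparse update can be sketched by streaming over an edge list, so that neither $M$ nor $A$ is ever materialized. The sketch below assumes the graph has no dead ends (every node has at least one out-link), and reuses the hypothetical spider-trap example from above:

```python
def pagerank_sparse(edges, n, beta=0.8, eps=1e-10, max_iters=1000):
    """PageRank via r_new = beta*M·r_old plus the constant (1-beta)/N term.

    edges: list of (src, dst) pairs; nodes are 0..n-1.
    Assumes no dead ends: every node must appear as a source.
    """
    out_deg = [0] * n
    for src, _ in edges:
        out_deg[src] += 1
    r = [1.0 / n] * n
    for _ in range(max_iters):
        r_new = [(1 - beta) / n] * n           # constant teleport term
        for src, dst in edges:                 # sparse beta*M·r term
            r_new[dst] += beta * r[src] / out_deg[src]
        if sum(abs(a - b) for a, b in zip(r_new, r)) < eps:
            return r_new
        r = r_new
    return r

# Assumed y/a/m spider-trap example: y→y, y→a, a→y, a→m, m→m (nodes 0, 1, 2).
edges = [(0, 0), (0, 1), (1, 0), (1, 2), (2, 2)]
r = pagerank_sparse(edges, 3, beta=0.8)
print([round(x, 3) for x in r])  # [0.212, 0.152, 0.636]
```

Per iteration this costs $O(|E|)$ time and $O(N)$ memory, which is exactly why the rearranged form matters at web scale.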