Why Python and NetworkX?

While R has powerful network libraries in the form of igraph and network, Python has its fair share of excellent libraries as well. For example, graph-tool and Snap.py are highly optimized and therefore well suited to massive data sets, and igraph also provides a Python interface.

In this tutorial, we will focus on NetworkX. You will learn how to access the NetworkX network object and how to calculate some interesting network properties. As NetworkX is relatively simple to use, it is great for researchers who spend more time developing their analysis than running it. Moreover, NetworkX contains dozens of algorithms specifically targeted towards network analysis.

Install biom

To install the Python biom package, you can use pip. The biom package is called biom-format.

python3 -m pip install biom-format
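
This tutorial also uses NetworkX and pandas, and later on the powerlaw package; if these are not yet available in your environment, they can presumably be installed the same way:

python3 -m pip install networkx pandas powerlaw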

Importing a network

To play with NetworkX, we first need a network. For this tutorial, we will use the networks from the FlashWeave tutorial (Running FlashWeave on sponge networks).
If you were unable to run FlashWeave, please find the networks here: ZIP file with networks.

import networkx as nx
import pandas as pd
import biom
import os
network_loc = 'C:/Users/user/Documents/workshop/sponges_networks/'  # Change to the folder where you downloaded the files
network = nx.read_gml(network_loc + 'Axinellida.gml')
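
As an optional sanity check (not part of the original workflow), you can confirm that the network loaded correctly by printing its size:

print(network.number_of_nodes(), network.number_of_edges())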

Accessing the NetworkX object

What information is already in the network? We can see the node and edge metadata by accessing a single node or edge. Fill in the nodename and edgename variables yourself. Do not forget the round brackets; with NetworkX, we index the network object with a tuple of the two nodes that participate in the edge.

print(network.nodes)
print(network.edges)
nodename =   # fill in one of the node IDs printed above
edgename =   # fill in one of the edges printed above, as a tuple of two node IDs

print(network.nodes[nodename]) 
print(network.edges[edgename]) 

Unfortunately, the taxonomy is not contained in the node metadata. We can address this by reading the original BIOM file. You can download the processed BIOM files from the Github repository: link to zip BIOM files. Unzip the files in a location of your choice.
We first iterate to create a dictionary of dictionaries; the top-level dictionary contains the taxonomy levels, while the lower-level dictionaries map each node to its taxonomy value.

The command below constructs the dict of dicts from the BIOM file; can you print the complete taxonomy for taxon c4357330?

# read BIOM file 
# Change to the folder where you downloaded the files
biom_loc = 'C:/Users/user/Documents/workshop/sponges/'  
biom_file = biom.load_table(biom_loc + "Axinellida.biom")
tax_table = biom_file.metadata_to_dataframe(axis='observation')
# make dict of network attributes
tax = dict()
for col in tax_table.columns:
    tax[col] = dict()
for taxon_id, row in tax_table.iterrows():
    for col in tax_table.columns:
        tax[col][str(taxon_id)] = row[col]

print(tax['taxonomy_4']['c4357330'])
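
As a side note, pandas can build the same nested dictionary in a single call; the sketch below assumes the observation IDs in tax_table are already strings, which is the case here, and stores the result under a hypothetical name tax_alt:

tax_alt = tax_table.to_dict()  # {column: {observation ID: value}}, equivalent to the loop above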

NetworkX can add node metadata with a dictionary. We can iterate over the top-level dictionary to add all the taxonomic levels. Confirm that the taxonomy addition worked by checking the node metadata again.

for key in tax:
    nx.set_node_attributes(network, tax[key], key)
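
To confirm that the taxonomy addition worked, you can for example inspect the same node you picked earlier; nodename here is the variable you filled in above:

print(network.nodes[nodename])  # the taxonomy columns should now appear in the node metadata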

Of course, we can repeat this for all sponge networks. First, we make a list of all network names from the files in the folder containing the networks. Then we find the matching BIOM files. Finally, we store the annotated networks as graphml files instead, so they can be imported into other software. You can load these files into Cytoscape to visualize the networks and colour nodes by taxonomic information.

# Get a list of all files and remove the .gml extension
network_names = [x[:-4] for x in os.listdir(network_loc) if x.endswith('.gml')]
network_dict = dict.fromkeys(network_names)
for name in network_dict:
    network = nx.read_gml(network_loc + name + '.gml')
    # read BIOM file
    biom_file = biom.load_table(biom_loc + name + ".biom")
    tax_table = biom_file.metadata_to_dataframe(axis='observation')
    # make dict of network attributes
    tax = dict()
    for col in tax_table.columns:
        tax[col] = dict()
    for taxon_id, row in tax_table.iterrows():
        for col in tax_table.columns:
            tax[col][str(taxon_id)] = row[col]
    for key in tax:
        nx.set_node_attributes(network, tax[key], key)
    network_dict[name] = network
    # write network to graphml
    nx.write_graphml(network, network_loc + name + ".graphml")


We can do some interesting analyses with NetworkX now that we have the complete network. For example, we can test whether the degree distribution follows a power law. For this, we first need the degree distribution. NetworkX reports degrees as a DegreeView, and for the distribution it does not matter which node has which degree. Since we cannot access the values from the DegreeView directly, we first convert it to a dictionary, extract the values and convert them to a list.

Now that we have the list of degrees, we can use the powerlaw package to extract the parameters of the power law distribution fitted to the data, and then test whether a power law or another distribution fits the data better. Each comparison returns a log-likelihood ratio (first value) and a p-value (second value). If the ratio is positive, the first distribution is more likely; if it is negative, the second distribution fits the data better. The p-value indicates whether the sign of the ratio is statistically significant. Which distribution do you think has the best fit?

deg = dict(nx.degree(network))
deg = list(deg.values())

import powerlaw
fit = powerlaw.Fit(deg, discrete=True)
fit.distribution_compare('power_law', 'lognormal')
fit.distribution_compare('power_law', 'exponential')
fit.distribution_compare('power_law', 'lognormal_positive')
fit.distribution_compare('power_law', 'stretched_exponential')
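
To keep the output readable, you could also print the fitted parameters and run the comparisons in a loop; this is merely one way of organising the calls above:

print(fit.power_law.alpha, fit.power_law.xmin)  # fitted exponent and estimated lower cutoff
for alternative in ['lognormal', 'exponential', 'lognormal_positive', 'stretched_exponential']:
    ratio, p = fit.distribution_compare('power_law', alternative)
    print('power_law vs ' + alternative + ':', ratio, p)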

There are also several types of assortativity implemented in NetworkX. The degree assortativity is perhaps the most standard one: are nodes with a specific degree more or less likely to have edges to nodes with a similar degree?

nx.degree_assortativity_coefficient(network)

In this case, the degree assortativity is not so informative. However, we can also calculate the assortativity for node attributes. Since the taxonomy was added as a node attribute before, we can calculate taxonomic assortativity. The taxonomic assortativity at the kingdom level is a bit daft, since all nodes except a single dummy node belong to the Bacteria kingdom. What are the taxonomic assortativities for the other taxonomic levels? Can you think of a biological explanation why the trend changes at a certain point?

nx.attribute_assortativity_coefficient(network, 'taxonomy_0')  # kingdom level
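
To answer the question above, you could loop over all of the taxonomy attributes that were added earlier; this is a sketch that assumes the tax dictionary from the annotation step is still in memory, so that its keys match the node attribute names:

for level in sorted(tax):
    print(level, nx.attribute_assortativity_coefficient(network, level))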

There are also many functions for cluster analysis in NetworkX. We have one problem though: many algorithms can only work on connected components, as some calculations are not possible on disconnected nodes.

How many connected components does the network contain? What is the size of each component? Why is it not a problem if we just take the largest connected component?

If we want to extract the component, we can get the list of nodes from the connected_components function. Since this is a generator function, it is an iterable and does not actually produce an output when you call it; it only produces output when you iterate over it. This is beneficial when each value in the generator would be so large that the entire generator output would clog up memory, but we are only working with a small network. Therefore, we can just place the results in a list.

We actually still need to extract the subgraph once we have the node list.

nx.number_connected_components(network)
components = nx.connected_components(network)
for component in components:
  print(len(component))

component = max(nx.connected_components(network), key=len)  # node set of the largest component
component = nx.subgraph(network, component)

Now we have the connected component, we can run connectivity-based algorithms. The connectivity is the number of nodes or edges (in this implementation, the number of nodes) that need to be removed to break a graph into smaller components.

In this case, we will see which maximal subgraph of the network has a connectivity of at least k. This means that at least k nodes need to be removed to further fragment the subgraph. Such a subgraph is called a k-component. This function can take some time to run!

The k-components are returned as a dictionary that maps each tested connectivity level k to a list of node sets. To count the number of nodes in each component, we need to iterate slightly differently.

Which levels of k were tested? How many components are found per k? What is the size of the component? Try extracting these values yourself.

components = nx.k_components(component)
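
One possible way to inspect the result, based on the dictionary structure described above:

for k, comps in components.items():
    print('k =', k, '| number of components:', len(comps), '| sizes:', [len(c) for c in comps])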

For many applications, shortest paths can be interesting. A shortest path between two nodes is a path of minimum length between them; hence, there can be more than one shortest path per node pair. Extracting all shortest paths from the network is rather straightforward.

Can you get the shortest path from node 545306 to node 1088618? How long is it? Are there other shortest paths between these nodes? And between node 545306 and node 1002658?

paths = dict(nx.shortest_path(network))  # dict() also works for NetworkX versions where this returns an iterator

paths['545306']['1002658']
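
As a sketch of how you could answer the questions above (the node IDs come from the exercise; note that a path with n nodes has n - 1 edges, and that node pairs in different components do not appear in paths at all):

path = paths['545306']['1088618']
print(path, len(path) - 1)  # the path and its length in edges
# there can be more than one shortest path per node pair
print(list(nx.all_shortest_paths(network, '545306', '1088618')))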

Several centralities and algorithms are derived from shortest paths. For example, small-worldness coefficients use shortest paths. Would you say that this is a small world network based on the sigma or omega coefficients?

Be warned: these functions can take rather long! Do not forget to set the niter and nrand parameters so they take less time.

nx.sigma(component, niter=10, nrand=5)
nx.omega(component, niter=10, nrand=5)

For an overview of the functions in NetworkX, check out the reference. In addition to algorithms for graph analysis, NetworkX contains several algorithms for constructing graphs with specific topologies.
