
Citation Analysis: an Introduction

This notebook shows how to extract citation data using the Dimensions Analytics API.

Two approaches are considered: one that is best suited for smaller analyses, and one that is more query-efficient and hence better suited for analyses involving lots of publications.

Prerequisites

Please install the latest versions of these libraries to run this notebook.

[1]:
# @markdown Click the 'play' button on the left (or shift+enter) after entering your API credentials

username = "" #@param {type: "string"}
password = "" #@param {type: "string"}
endpoint = "https://app.dimensions.ai"

!pip install dimcli  -U --quiet
import json
import dimcli
from dimcli.shortcuts import *
dimcli.login(username, password, endpoint)
dsl = dimcli.Dsl()
DimCli v0.6.1.2 - Succesfully connected to <https://app.dimensions.ai> (method: dsl.ini file)

Method A: getting citations for one publication at a time

By using the field reference_ids we can easily look up citations for individual publications (= incoming links). For example, here are the papers citing “pub.1053279155”:

[2]:
%dsldf search publications where reference_ids in [ "pub.1053279155" ] return publications[id+doi+title+year]
Returned Publications: 5 (total = 5)
[2]:
title id year doi
0 Towards ontology-based multilingual URL filter... pub.1103275659 2018 10.1007/s11227-018-2338-1
1 Das Experteninterview als zentrale Methode der... pub.1012651711 2015 10.1515/iwp-2015-0057
2 Challenges for Ontological Engineering in the ... pub.1005502446 2015 10.1007/978-3-319-24129-6_3
3 Transforming a Flat Metadata Schema to a Seman... pub.1008922470 2012 10.1007/978-3-642-24809-2_10
4 Practice-Based Ontologies: A New Approach to A... pub.1053157726 2011 10.1007/978-3-642-24731-6_38

Let’s try another paper, “pub.1103275659”: in this case there are 3 citations.

[3]:
%dsldf search publications where reference_ids in [ "pub.1103275659" ] return publications[id+doi+title+year]
Returned Publications: 3 (total = 3)
[3]:
title id year doi
0 Perception layer security in Internet of Things pub.1113878770 2019 10.1016/j.future.2019.04.038
1 A Fault Tolerant Approach for Malicious URL Fi... pub.1109815383 2018 10.1109/isncc.2018.8530984
2 Social Internet of Vehicles: Complexity, Adapt... pub.1107354292 2018 10.1109/access.2018.2872928

Using this simple approach, if we start with a list of publications (our ‘seed’), we can set up a simple loop that goes through all of them and launches a ‘get citations’ query for each one.

TIP The json.dumps function easily transforms a list of objects into a string that can be used directly in our query, e.g.

> json.dumps(seed)
'["pub.1053279155", "pub.1103275659"]'
[4]:
seed = [ "pub.1053279155" , "pub.1103275659"]
q = """search publications where reference_ids in [{}] return publications[id+doi+title+year]"""
results = {}
for p in seed:
  data = dsl.query(q.format(json.dumps(p)))
  results[p] = [x['id'] for x in data.publications]
Returned Publications: 5 (total = 5)
Returned Publications: 3 (total = 3)
[5]:
results
[5]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1113878770', 'pub.1109815383', 'pub.1107354292']}

Comments about this method

  • this approach is straightforward and quick, but it is best used with small datasets

  • we create one query per publication (and so on, for an N-degree network)

  • if you have lots of publications, this leads to lots of queries, which may not be very efficient

Method B: Getting citations for multiple publications via a single query

We can use the same query template, but instead of looking up a single publication ID we can pass a list of IDs.

So if we combine the two citation lists for “pub.1053279155” and “pub.1103275659”, we get 5 + 3 = 8 results in total.

However, it is then up to us to figure out which paper is citing which!

[6]:
%dsldf search publications where reference_ids in [ "pub.1053279155" , "pub.1103275659"] return publications[id+doi+title+year]
Returned Publications: 8 (total = 8)
[6]:
year doi title id
0 2019 10.1016/j.future.2019.04.038 Perception layer security in Internet of Things pub.1113878770
1 2018 10.1007/s11227-018-2338-1 Towards ontology-based multilingual URL filter... pub.1103275659
2 2018 10.1109/isncc.2018.8530984 A Fault Tolerant Approach for Malicious URL Fi... pub.1109815383
3 2018 10.1109/access.2018.2872928 Social Internet of Vehicles: Complexity, Adapt... pub.1107354292
4 2015 10.1515/iwp-2015-0057 Das Experteninterview als zentrale Methode der... pub.1012651711
5 2015 10.1007/978-3-319-24129-6_3 Challenges for Ontological Engineering in the ... pub.1005502446
6 2012 10.1007/978-3-642-24809-2_10 Transforming a Flat Metadata Schema to a Seman... pub.1008922470
7 2011 10.1007/978-3-642-24731-6_38 Practice-Based Ontologies: A New Approach to A... pub.1053157726

In order to resolve the citations data we got above, we must also extract the full reference list of each citing paper (by including reference_ids in the results) and then recreate the citation graph programmatically, e.g.:

[7]:
seed = [ "pub.1053279155" , "pub.1103275659"]
[8]:
data = dsl.query(f"""search publications where reference_ids in {json.dumps(seed)} return publications[id+doi+title+year+reference_ids]""")
Returned Publications: 8 (total = 8)
[9]:
def build_network_dict(seed, pubs_list):
  network = {x: [] for x in seed} # seed a dictionary: one empty list per starting publication
  for pub in pubs_list:
    for key in network:
      # if a seed paper appears among this pub's references, pub cites it
      if pub.get('reference_ids') and key in pub['reference_ids']:
        network[key].append(pub['id'])
  return network

A simple way to represent the citation network is a dictionary data structure mapping each cited paper to the list of papers citing it: {'cited_paper': [citing_papers]}

[10]:
network1 = build_network_dict(seed, data.publications)
network1
[10]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1113878770', 'pub.1109815383', 'pub.1107354292']}

Creating a second-level citations network

Let’s now create a second-level citation network!

This means going through all the publications citing the two seed papers, and retrieving their citing publications as well.

[11]:
all_citing_papers = []
for x in network1.values():
  all_citing_papers += x
all_citing_papers = list(set(all_citing_papers))
[12]:
all_citing_papers
[12]:
['pub.1053157726',
 'pub.1005502446',
 'pub.1107354292',
 'pub.1012651711',
 'pub.1103275659',
 'pub.1113878770',
 'pub.1008922470',
 'pub.1109815383']

Now let’s extract the network structure as previously done

[13]:
data2 = dsl.query(f"""search publications where reference_ids in {json.dumps(all_citing_papers)} return publications[id+doi+title+year+reference_ids]""")
network2 = build_network_dict(all_citing_papers, data2.publications)
network2
Returned Publications: 20 (total = 24)
[13]:
{'pub.1053157726': ['pub.1109914120',
  'pub.1113063906',
  'pub.1099624152',
  'pub.1104531912',
  'pub.1011868512'],
 'pub.1005502446': [],
 'pub.1107354292': ['pub.1122261154',
  'pub.1113878770',
  'pub.1113902569',
  'pub.1113065837'],
 'pub.1012651711': ['pub.1101318936'],
 'pub.1103275659': ['pub.1113878770', 'pub.1109815383', 'pub.1107354292'],
 'pub.1113878770': ['pub.1121687821', 'pub.1122863207', 'pub.1121692873'],
 'pub.1008922470': ['pub.1089701016',
  'pub.1026187633',
  'pub.1002394460',
  'pub.1012381129',
  'pub.1046653745'],
 'pub.1109815383': []}

Finally we can merge the two levels into one single dataset (note: keys that appear in both dictionaries are merged automatically, with the second dictionary’s values taking precedence)

[14]:
final = dict(network1, **network2)
final
[14]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1113878770', 'pub.1109815383', 'pub.1107354292'],
 'pub.1053157726': ['pub.1109914120',
  'pub.1113063906',
  'pub.1099624152',
  'pub.1104531912',
  'pub.1011868512'],
 'pub.1005502446': [],
 'pub.1107354292': ['pub.1122261154',
  'pub.1113878770',
  'pub.1113902569',
  'pub.1113065837'],
 'pub.1012651711': ['pub.1101318936'],
 'pub.1113878770': ['pub.1121687821', 'pub.1122863207', 'pub.1121692873'],
 'pub.1008922470': ['pub.1089701016',
  'pub.1026187633',
  'pub.1002394460',
  'pub.1012381129',
  'pub.1046653745'],
 'pub.1109815383': []}

Building a Simple Dataviz

We can build a simple visualization using the pyvis library.

NOTE: the mygraph.html file will be saved in the current working directory (in Colab, open the ‘Files’ panel on the left and download it to your computer to view it).

[15]:
!pip install pyvis --quiet
from pyvis.network import Network
[16]:
net = Network()

# collect all unique node IDs (both cited and citing papers)
nodes = []
for x in final:
  nodes.append(x)
  nodes += final[x]
nodes = list(set(nodes))

net.add_nodes(nodes) # node IDs double up as labels

# add one edge per citation: cited paper -> citing paper
for x in final:
  for target in final[x]:
    net.add_edge(x, target)

net.show("mygraph.html")

Final considerations

Querying for more than 1000 results

Each API query can return a maximum of 1000 records, so you must use the limit/skip syntax to get more.

See the paginating results section in the docs for more info.
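As a minimal sketch (reusing the dsl object and seed list from above), we can page through results manually with limit/skip; dimcli’s query_iterative helper can also do the paging for us, provided the query contains no limit/skip clause:

q = """search publications where reference_ids in {} return publications[id+reference_ids] limit 1000 skip {}"""

all_pubs, skip = [], 0
while True:
  page = dsl.query(q.format(json.dumps(seed), skip))
  all_pubs += page.publications
  if len(page.publications) < 1000:
    break # a short (or empty) page means we are done
  skip += 1000

# alternatively, let dimcli do the paging (no limit/skip in the query):
# data = dsl.query_iterative(f"""search publications where reference_ids in {json.dumps(seed)} return publications[id+reference_ids]""")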

Querying for more than 50K results

Even with limit/skip, a single query can only download up to 50k records in total.

So if your list of publication IDs is getting too long (e.g. > 300), you should consider splitting it up into chunks and creating an extra loop that goes through all of them without hitting the upper limit, as in the sketch below.
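As a minimal sketch (assuming the dsl object from above; chunks_of is a small helper defined here, not part of the API):

# split a list into chunks of at most `size` elements
def chunks_of(lst, size=300):
  for i in range(0, len(lst), size):
    yield lst[i:i + size]

results = []
for chunk in chunks_of(all_citing_papers, size=300):
  data = dsl.query(f"""search publications where reference_ids in {json.dumps(chunk)} return publications[id+reference_ids]""")
  results += data.publications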

Dealing with highly cited publications

Some publications can have lots of citations: for example, here is a single paper with 200K+ citations: https://app.dimensions.ai/details/publication/pub.1076750128

That’s quite an exceptional case, but there are several publications with more than 10k citations each. With such cases you will hit the 50k limit pretty quickly, so you need to keep an eye out for them and possibly ‘slice’ the data in different ways, e.g. by year or journal (so as to get fewer results per query), as sketched below.
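As a sketch, one could fetch the citations of a highly cited paper one year at a time, so that each slice stays well below the 50k ceiling (the year range here is purely illustrative; each yearly slice may itself still need the limit/skip pagination shown above):

big_paper = "pub.1076750128" # the highly cited example above
citations_by_year = {}
for year in range(2010, 2021):
  data = dsl.query(f"""search publications where reference_ids in ["{big_paper}"] and year={year} return publications[id] limit 1000""")
  citations_by_year[year] = [p['id'] for p in data.publications]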

Pre-checking citations counts

The times_cited and recent_citations fields of publications can be used to check how many citations a paper has (note: recent_citations counts only citations received in the last two years).

So, by using these aggregate figures, we can get a sense of the amount of citation data we will have to deal with, before setting up a full data extraction pipeline. For example:
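As a minimal sketch (reusing the seed list from above):

data = dsl.query(f"""search publications where id in {json.dumps(seed)} return publications[id+times_cited+recent_citations]""")
for p in data.publications:
  print(p['id'], p.get('times_cited'), p.get('recent_citations'))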



Note

The Dimensions Analytics API allows you to carry out sophisticated research data analytics tasks like the ones described on this website. Also check out the associated GitHub repository for examples, the source code of these tutorials and much more.
