
Citation Analysis: an Introduction

This notebook shows how to extract citation data using the Dimensions Analytics API.

Two approaches are considered: one that is best suited for smaller analyses, and one that is more query-efficient and hence better suited for analyses involving a large number of publications.

Prerequisites

This notebook assumes you have installed the Dimcli library and are familiar with the Getting Started tutorial.

[1]:
!pip install dimcli -U --quiet

import dimcli
from dimcli.shortcuts import *
import os, sys, time, json

print("==\nLogging in..")
# https://github.com/digital-science/dimcli#authentication
ENDPOINT = "https://app.dimensions.ai"
if 'google.colab' in sys.modules:
  import getpass
  USERNAME = getpass.getpass(prompt='Username: ')
  PASSWORD = getpass.getpass(prompt='Password: ')
  dimcli.login(USERNAME, PASSWORD, ENDPOINT)
else:
  USERNAME, PASSWORD  = "", ""
  dimcli.login(USERNAME, PASSWORD, ENDPOINT)
dsl = dimcli.Dsl()
==
Logging in..
Dimcli - Dimensions API Client (v0.7.4.2)
Connected to: https://app.dimensions.ai - DSL v1.27
Method: dsl.ini file

Method A: getting citations for one publication at a time

By using the field reference_ids we can easily look up citations for individual publications (= incoming links). For example, here are the papers citing “pub.1053279155”:

[2]:
%dsldf search publications where reference_ids in [ "pub.1053279155" ] return publications[id+doi+title+year]
Returned Publications: 5 (total = 5)
Time: 0.47s
[2]:
year doi id title
0 2018 10.1007/s11227-018-2338-1 pub.1103275659 Towards ontology-based multilingual URL filter...
1 2015 10.1515/iwp-2015-0057 pub.1012651711 Das Experteninterview als zentrale Methode der...
2 2015 10.1007/978-3-319-24129-6_3 pub.1005502446 Challenges for Ontological Engineering in the ...
3 2012 10.1007/978-3-642-24809-2_10 pub.1008922470 Transforming a Flat Metadata Schema to a Seman...
4 2011 10.1007/978-3-642-24731-6_38 pub.1053157726 Practice-Based Ontologies: A New Approach to A...

Let’s try another paper, i.e. “pub.1103275659” - in this case there are 7 citations.

[3]:
%dsldf search publications where reference_ids in [ "pub.1103275659" ] return publications[id+doi+title+year]
Returned Publications: 7 (total = 7)
Time: 0.49s
[3]:
doi year id title
0 10.1016/j.childyouth.2020.105134 2020 pub.1128314811 Cyber parental control: A bibliometric study
1 10.1007/s11042-020-08764-2 2020 pub.1125691748 OBAC: towards agent-based identification and c...
2 10.1155/2020/8545128 2020 pub.1125056530 Calculating Trust Using Multiple Heterogeneous...
3 10.1016/j.future.2019.04.038 2019 pub.1113878770 Perception layer security in Internet of Things
4 10.1109/access.2019.2918196 2019 pub.1115224509 Spammer Detection and Fake User Identification...
5 10.1109/isncc.2018.8530984 2018 pub.1109815383 A Fault Tolerant Approach for Malicious URL Fi...
6 10.1109/access.2018.2872928 2018 pub.1107354292 Social Internet of Vehicles: Complexity, Adapt...

Using this simple approach, if we start with a list of publications (our ‘seed’), we can set up a simple loop that goes through all of them and launches a ‘get-citations’ query for each one.

TIP The json.dumps function easily transforms a list of objects into a string which can be used directly in our query, e.g.

> json.dumps(seed)
'["pub.1053279155", "pub.1103275659"]'
[4]:
seed = [ "pub.1053279155" , "pub.1103275659"]
q = """search publications where reference_ids in [{}] return publications[id+doi+title+year]"""
results = {}
for p in seed:
  data = dsl.query(q.format(json.dumps(p)))
  results[p] = [x['id'] for x in data.publications]
Returned Publications: 5 (total = 5)
Time: 0.46s
Returned Publications: 7 (total = 7)
Time: 0.47s
[5]:
results
[5]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1128314811',
  'pub.1125691748',
  'pub.1125056530',
  'pub.1113878770',
  'pub.1115224509',
  'pub.1109815383',
  'pub.1107354292']}

Comments about this method

  • this approach is straightforward and quick, but it is best suited to small datasets

  • we create one query per publication (and so on, for an N-degree network)

  • if you have lots of publications, this results in lots of queries, which may not be very efficient

Method B: Getting citations for multiple publications via a single query

We can use the same query template but instead of looking for a single publication ID, we can put multiple ones in a list.

So if we combine the citation lists for “pub.1053279155” and “pub.1103275659”, we will get 5 + 7 = 12 results in total.

However, it is then up to us to figure out which paper is citing which!

[6]:
%dsldf search publications where reference_ids in [ "pub.1053279155" , "pub.1103275659"] return publications[id+doi+title+year]
Returned Publications: 12 (total = 12)
Time: 0.50s
[6]:
title year id doi
0 Cyber parental control: A bibliometric study 2020 pub.1128314811 10.1016/j.childyouth.2020.105134
1 OBAC: towards agent-based identification and c... 2020 pub.1125691748 10.1007/s11042-020-08764-2
2 Calculating Trust Using Multiple Heterogeneous... 2020 pub.1125056530 10.1155/2020/8545128
3 Perception layer security in Internet of Things 2019 pub.1113878770 10.1016/j.future.2019.04.038
4 Spammer Detection and Fake User Identification... 2019 pub.1115224509 10.1109/access.2019.2918196
5 Towards ontology-based multilingual URL filter... 2018 pub.1103275659 10.1007/s11227-018-2338-1
6 A Fault Tolerant Approach for Malicious URL Fi... 2018 pub.1109815383 10.1109/isncc.2018.8530984
7 Social Internet of Vehicles: Complexity, Adapt... 2018 pub.1107354292 10.1109/access.2018.2872928
8 Das Experteninterview als zentrale Methode der... 2015 pub.1012651711 10.1515/iwp-2015-0057
9 Challenges for Ontological Engineering in the ... 2015 pub.1005502446 10.1007/978-3-319-24129-6_3
10 Transforming a Flat Metadata Schema to a Seman... 2012 pub.1008922470 10.1007/978-3-642-24809-2_10
11 Practice-Based Ontologies: A New Approach to A... 2011 pub.1053157726 10.1007/978-3-642-24731-6_38

In order to resolve the citation data we obtained above, we must also extract the full references of each citing paper (by including reference_ids in the results) and then rebuild the citation graph programmatically, e.g.:

[7]:
seed = [ "pub.1053279155" , "pub.1103275659"]
[8]:
data = dsl.query(f"""search publications where reference_ids in {json.dumps(seed)} return publications[id+doi+title+year+reference_ids]""")
Returned Publications: 12 (total = 12)
Time: 0.65s
[9]:
def build_network_dict(seed, pubs_list):
  network = {x: [] for x in seed} # seed the dictionary: one empty list per cited paper
  for pub in pubs_list:
    for key in network:
      # `pub` cites `key` if the seed ID appears among its references
      if pub.get('reference_ids') and key in pub['reference_ids']:
        network[key].append(pub['id'])
  return network

A simple way to represent the citation network is a dictionary data structure of the form 'cited_paper': [citing_papers].

[10]:
network1 = build_network_dict(seed, data.publications)
network1
[10]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1128314811',
  'pub.1125691748',
  'pub.1125056530',
  'pub.1113878770',
  'pub.1115224509',
  'pub.1109815383',
  'pub.1107354292']}

Creating a second-level citations network

Let’s now create a second-level citation network!

This means going through all the publications citing the two seed papers, and retrieving the publications that cite them as well.

[11]:
all_citing_papers = []
for x in network1.values():
  all_citing_papers += x
all_citing_papers = list(set(all_citing_papers))
[12]:
all_citing_papers
[12]:
['pub.1115224509',
 'pub.1012651711',
 'pub.1053157726',
 'pub.1005502446',
 'pub.1008922470',
 'pub.1125691748',
 'pub.1125056530',
 'pub.1103275659',
 'pub.1128314811',
 'pub.1113878770',
 'pub.1109815383',
 'pub.1107354292']

Now let’s extract the network structure as we did previously.

[13]:
data2 = dsl.query(f"""search publications where reference_ids in {json.dumps(all_citing_papers)} return publications[id+doi+title+year+reference_ids]""")
network2 = build_network_dict(all_citing_papers, data2.publications)
network2
Returned Publications: 20 (total = 66)
Time: 1.20s
[13]:
{'pub.1115224509': ['pub.1130502573',
  'pub.1129513039',
  'pub.1130518902',
  'pub.1129302386',
  'pub.1117695595',
  'pub.1126679217'],
 'pub.1012651711': ['pub.1130168753'],
 'pub.1053157726': [],
 'pub.1005502446': [],
 'pub.1008922470': [],
 'pub.1125691748': ['pub.1129412317'],
 'pub.1125056530': [],
 'pub.1103275659': ['pub.1128314811'],
 'pub.1128314811': [],
 'pub.1113878770': ['pub.1127814760',
  'pub.1128444159',
  'pub.1129925269',
  'pub.1127416559',
  'pub.1127173019',
  'pub.1128794002',
  'pub.1129698755',
  'pub.1129698601',
  'pub.1127252959'],
 'pub.1109815383': [],
 'pub.1107354292': ['pub.1127855566', 'pub.1128759793']}

Finally, we can merge the two levels into a single dataset (note: keys appearing in both levels are merged automatically, with the second-level values taking precedence).

[14]:
final = dict(network1, **network2 )
final
[14]:
{'pub.1053279155': ['pub.1103275659',
  'pub.1012651711',
  'pub.1005502446',
  'pub.1008922470',
  'pub.1053157726'],
 'pub.1103275659': ['pub.1128314811'],
 'pub.1115224509': ['pub.1130502573',
  'pub.1129513039',
  'pub.1130518902',
  'pub.1129302386',
  'pub.1117695595',
  'pub.1126679217'],
 'pub.1012651711': ['pub.1130168753'],
 'pub.1053157726': [],
 'pub.1005502446': [],
 'pub.1008922470': [],
 'pub.1125691748': ['pub.1129412317'],
 'pub.1125056530': [],
 'pub.1128314811': [],
 'pub.1113878770': ['pub.1127814760',
  'pub.1128444159',
  'pub.1129925269',
  'pub.1127416559',
  'pub.1127173019',
  'pub.1128794002',
  'pub.1129698755',
  'pub.1129698601',
  'pub.1127252959'],
 'pub.1109815383': [],
 'pub.1107354292': ['pub.1127855566', 'pub.1128759793']}

Building a Simple Dataviz

We can build a simple visualization using the excellent pyvis library. A custom version of pyvis is already included in dimcli.core.extras and is called NetworkViz (note: this custom version only fixes a bug that prevents pyvis graphs from being displayed within Google Colab).

[17]:
# load custom version of pyvis
from dimcli.core.extras import NetworkViz
[18]:
net = NetworkViz(notebook=True, width="100%", height="800px")
net.heading = "A simple citation network"

nodes = []
for x in final:
  nodes.append(x)
  nodes += final[x]
nodes = list(set(nodes))

net.add_nodes(nodes) # node IDs double up as labels

for x in final:
  for target in final[x]:
    net.add_edge(x, target)

net.show("citation.html")
[18]:
(interactive citation network rendered inline as citation.html)
Final considerations

Querying for more than 1000 results

Each API query can return a maximum of 1000 records, so you must use the limit/skip syntax to get more.

See the paginating results section in the docs for more info.
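As a rough sketch, a limit/skip loop over the Method B query might look like the following (assuming the seed list and dsl client from above; dimcli also provides dsl.query_iterative, which automates this pattern):

template = """search publications where reference_ids in {} return publications[id+reference_ids] limit 1000 skip {}"""

all_pubs, skip = [], 0
while True:
  page = dsl.query(template.format(json.dumps(seed), skip))
  if not page.publications:
    break  # no more results to fetch
  all_pubs += page.publications
  skip += 1000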

Querying for more than 50K results

Even with limit/skip, only 50k records can be downloaded per query.

So if your list of publication IDs gets too long (e.g. > 300), you should consider splitting it into chunks and adding an extra loop to go through all of them without hitting the upper limit, as in the sketch below.
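A minimal chunking sketch (the 300-item chunk size is an arbitrary choice; this reuses the all_citing_papers list and dsl client from above):

def chunks_of(lst, size=300):
  # yield successive `size`-sized slices of `lst`
  for i in range(0, len(lst), size):
    yield lst[i:i + size]

all_results = []
for batch in chunks_of(all_citing_papers):
  q = f"""search publications where reference_ids in {json.dumps(batch)} return publications[id+reference_ids] limit 1000"""
  all_results += dsl.query(q).publications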

Dealing with highly cited publications

Some publications can have lots of citations: for example, here is a single paper with 200K+ citations: https://app.dimensions.ai/details/publication/pub.1076750128

That’s quite an exceptional case, but there are several publications with more than 10k citations each. When you encounter such cases you will hit the 50k limit pretty quickly, so you need to keep an eye out for them and possibly ‘slice’ the data in different ways, e.g. by year or journal, so that each query returns fewer results. See the sketch below.
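For instance, a hypothetical year-by-year loop for the paper above (the year range is arbitrary; each slice would still need pagination if it exceeds 1000 records):

highly_cited = "pub.1076750128"
citations_by_year = {}
for y in range(2015, 2021):
  # one query per publication year keeps each result set well below the 50k cap
  q = f"""search publications where reference_ids in ["{highly_cited}"] and year = {y} return publications[id] limit 1000"""
  citations_by_year[y] = dsl.query(q).publications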

Pre-checking citations counts

The times_cited and recent_citations fields of publications can be used to check how many citations a paper has (note: recent_citations counts only citations received in the last two years).

So, by using these aggregated figures, we can get a sense of the volume of citation data we will have to deal with before setting up a full data extraction pipeline.
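For example, reusing the seed list from above:

q = f"""search publications where id in {json.dumps(seed)} return publications[id+times_cited+recent_citations]"""
for p in dsl.query(q).publications:
  print(p['id'], "times_cited:", p.get('times_cited'), "recent_citations:", p.get('recent_citations'))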



Note

The Dimensions Analytics API allows you to carry out sophisticated research data analytics tasks like the ones described on this website. Also check out the associated GitHub repository for examples, the source code of these tutorials, and much more.
