
Working with concepts in the Dimensions API

This Python notebook shows how to use the Dimensions Analytics API in order to extract concepts from documents and use them as the basis for more advanced topic-analysis tasks.

Prerequisites

This notebook assumes you have installed the Dimcli library and are familiar with the Getting Started tutorial.

[1]:
!pip install dimcli plotly -U --quiet

import dimcli
from dimcli.shortcuts import *
import json
import sys
import pandas as pd
import plotly.express as px
if 'google.colab' not in sys.modules:
  # make js dependencies local (needed by html exports)
  from plotly.offline import init_notebook_mode
  init_notebook_mode(connected=True)
#

print("==\nLogging in..")
# https://github.com/digital-science/dimcli#authentication
ENDPOINT = "https://app.dimensions.ai"
if 'google.colab' in sys.modules:
  import getpass
  USERNAME = getpass.getpass(prompt='Username: ')
  PASSWORD = getpass.getpass(prompt='Password: ')
  dimcli.login(USERNAME, PASSWORD, ENDPOINT)
else:
  USERNAME, PASSWORD  = "", ""
  dimcli.login(USERNAME, PASSWORD, ENDPOINT)
dsl = dimcli.Dsl()
==
Logging in..
Dimcli - Dimensions API Client (v0.7.4.2)
Connected to: https://app.dimensions.ai - DSL v1.27
Method: dsl.ini file

1. Background: What are concepts?

Concepts are normalized noun phrases describing the main topics of a document (see also the official documentation). Concepts are automatically derived from document abstracts using machine learning techniques, and are ranked based on their relevance.

In the JSON data, concepts are available as an ordered list (= the first items are the most relevant), each including a relevance score. E.g. for the publication with ID ‘pub.1122072646’:

{'id': 'pub.1122072646',
 'concepts_scores': [{'concept': 'acid', 'relevance': 0.07450046286579201},
                     {'concept': 'conversion', 'relevance': 0.055053872555463006},
                     {'concept': 'formic acid', 'relevance': 0.048144671935356},
                     {'concept': 'CO2', 'relevance': 0.032150964737607},
                     [........]
                    ]
}

Please note that (as of version 1.25 of the DSL API) it is possible to return either concepts_scores or concepts with Publications queries, but only concepts with Grants queries.
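For instance, the two variants can be requested as follows (a minimal sketch reusing the Dimcli session opened above; the variable names are just illustrative):

# Publications accept both fields; concepts_scores also includes the relevance values
pubs_sample = dsl.query("""search publications for "graphene"
                           return publications[id+concepts_scores] limit 5""")

# Grants only expose the plain concepts field (an ordered list of strings)
grants_sample = dsl.query("""search grants for "graphene"
                             return grants[id+concepts] limit 5""")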

1.1 From concepts to dataframes: Dimcli’s as_dataframe_concepts method

A Dimensions API query normally returns a list of documents (publications, grants), where each document includes a list of concepts.

In order to analyse concepts more easily, it is useful to ‘unnest’ concepts into a new structure where each concept is a top level item. In other words, we want to transform the original documents table into a concepts table.

The Dimcli library provides a method that does exactly that: as_dataframe_concepts().

[2]:
q = """search publications for "graphene"
            where year=2019
       return publications[id+title+year+concepts_scores] limit 100"""
concepts = dsl.query(q).as_dataframe_concepts()
concepts.head(5)
Returned Publications: 100 (total = 101443)
Time: 1.37s
[2]:
year id title concepts_count concept score frequency score_avg
0 2019 pub.1123764889 Smart Non-Woven Fiber Mats with Light-Induced ... 63 non-woven fiber mats 0.78424 1 0.78424
1 2019 pub.1123764889 Smart Non-Woven Fiber Mats with Light-Induced ... 63 polymer matrix 0.72761 3 0.64380
2 2019 pub.1123764889 Smart Non-Woven Fiber Mats with Light-Induced ... 63 atom transfer radical polymerization 0.72668 1 0.72668
3 2019 pub.1123764889 Smart Non-Woven Fiber Mats with Light-Induced ... 63 transfer radical polymerization 0.70781 1 0.70781
4 2019 pub.1123764889 Smart Non-Woven Fiber Mats with Light-Induced ... 63 ray photoelectron spectroscopy 0.69869 4 0.65301

The as_dataframe_concepts() method internally uses pandas to explode the concepts list, and it also adds some extra metrics that are handy for carrying out further analyses:

  1. concepts_count: the total number of concepts for each document. E.g., if a document has 35 concepts, concepts_count=35.

  2. frequency: how often a concept occurs within the dataset, i.e. how many documents include that concept. E.g., if a concept appears in 5 documents, frequency=5.

  3. score: the relevance of a concept in the context of the document it is extracted from. Concept scores go from 0 (= not relevant) to 1 (= very relevant). NOTE: if concepts are returned without scores, these are generated automatically by normalizing each concept’s rank against the total number of concepts of the document (see the sketch at the end of this section). E.g., if a document has 10 concepts in total, the first concept gets score=1, the second score=0.9, etc.

  4. score_avg: the average (mean) value of all scores of a concept across all the documents in the dataset.

As we will see, by sorting and segmenting data using these parameters, it is possible to filter out common-name concepts and highlight more interesting ones.
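To make the normalization rule above concrete, here is a minimal sketch (not part of Dimcli, purely illustrative) of how scores can be derived from a plain, score-less concepts list:

# Hypothetical helper: assign a score to each concept based on its rank,
# so that the first concept gets 1.0 and scores decrease in steps of 1/total.
def normalized_scores(concept_list):
    total = len(concept_list)
    return [{'concept': c, 'score': round(1 - i / total, 2)}
            for i, c in enumerate(concept_list)]

normalized_scores(['acid', 'conversion', 'formic acid', 'CO2'])
# => [{'concept': 'acid', 'score': 1.0}, {'concept': 'conversion', 'score': 0.75}, ...]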

1.2 Extracting concepts from any text

This tutorial focuses on concepts obtained from publications available via Dimensions. However, it is also possible to take advantage of Dimensions NLP infrastructure to extract concepts from any text.

This can be achieved by using the DSL function extract_concepts and passing an abstract-length text as an argument.

For example:

[3]:
abstract = """We describe monocrystalline graphitic films, which are a few atoms thick but are nonetheless stable under ambient conditions,
metallic, and of remarkably high quality. The films are found to be a two-dimensional semimetal with a tiny overlap between
valence and conductance bands, and they exhibit a strong ambipolar electric field effect such that electrons and
holes in concentrations up to 10 per square centimeter and with room-temperature mobilities of approximately 10,000 square
centimeters per volt-second can be induced by applying gate voltage.
"""
res = dsl.query(f"""extract_concepts("{abstract}", return_scores=true)""")
pd.DataFrame(res['extracted_concepts'])
[3]:
concept relevance
0 ambipolar electric field effect 0.298596
1 two-dimensional semimetal 0.293348
2 room-temperature mobility 0.285300
3 electric field effects 0.279084
4 field effects 0.254481
5 graphitic films 0.254320
6 gate voltage 0.253241
7 conductance band 0.234315
8 square centimeter 0.226343
9 films 0.212759
10 electrons 0.203926
11 semimetals 0.200813
12 ambient conditions 0.200779
13 atoms 0.195437
14 holes 0.189890
15 centimeters 0.187298
16 metallic 0.180655
17 voltage 0.166822
18 band 0.164475
19 high quality 0.163201
20 valence 0.158688
21 mobility 0.150092
22 overlap 0.125722
23 effect 0.114412
24 conditions 0.104444
25 concentration 0.087683
26 quality 0.074386
27 monocrystalline graphitic films 0.000000
28 tiny overlap 0.000000
29 strong ambipolar electric field effect 0.000000

2. Data acquisition: retrieving publications and all their associated concepts

Let’s pull all publications from University College London classified with the FOR code “16 Studies in Human Society”.

Tip: you can experiment by changing the parameters below as you want, e.g. by choosing another GRID organization.

[4]:
GRIDID = "grid.83440.3b" #@param {type:"string"}
FOR = "16 Studies in Human Society" #@param {type:"string"}

query = f"""
search publications
    where research_orgs.id = "{GRIDID}"
    and category_for.name= "{FOR}"
    return publications[id+doi+concepts_scores+year]
"""

print("===\nQuery:\n", query)
print("===\nRetrieving Publications.. ")
data = dsl.query_iterative(query)
===
Query:

search publications
    where research_orgs.id = "grid.83440.3b"
    and category_for.name= "16 Studies in Human Society"
    return publications[id+doi+concepts_scores+year]

===
Retrieving Publications..
Starting iteration with limit=1000 skip=0 ...
0-1000 / 8450 (7.88s)
1000-2000 / 8450 (6.28s)
2000-3000 / 8450 (6.41s)
3000-4000 / 8450 (5.49s)
4000-5000 / 8450 (6.74s)
5000-6000 / 8450 (4.99s)
6000-7000 / 8450 (4.17s)
7000-8000 / 8450 (3.14s)
8000-8450 / 8450 (1.84s)
===
Records extracted: 8450

Let’s turn the results into a dataframe and have a quick look at the data. You’ll see a column concepts_scores that contains a list of concepts for each of the publications retrieved.

[5]:
pubs = data.as_dataframe()
pubs.head(5)
[5]:
year doi concepts_scores id
0 2020 10.1080/01419870.2018.1544651 [{'concept': 'second-generation migrants', 're... pub.1110346939
1 2020 10.1186/s12961-020-00581-1 [{'concept': 'national health research systems... pub.1128669838
2 2020 10.1007/s41109-019-0246-9 [{'concept': 'specific industrial sectors', 'r... pub.1123927393
3 2020 10.1140/epjds/s13688-020-00225-y [{'concept': 'development indicators', 'releva... pub.1126594457
4 2020 10.1016/j.ijdrr.2020.101812 [{'concept': 'police-community relations', 're... pub.1130204522

2.1 Processing concept data

Now it’s time to start digging into the ‘concepts’ column of our publications data.

Each publication has an associated list of concepts, so in order to analyse them we need to ‘explode’ that list into a new table with one row per concept.

[6]:
concepts = data.as_dataframe_concepts()

print("===\nConcepts Found (total):", len(concepts))
print("===\nPreview:")
display(concepts)
===
Concepts Found (total): 326539
===
Preview:
year doi id concepts_count concept score frequency score_avg
0 2020 10.1080/01419870.2018.1544651 pub.1110346939 50 second-generation migrants 0.69108 1 0.69108
1 2020 10.1080/01419870.2018.1544651 pub.1110346939 50 children of migrants 0.68700 1 0.68700
2 2020 10.1080/01419870.2018.1544651 pub.1110346939 50 Donald Trump’s election 0.68565 1 0.68565
3 2020 10.1080/01419870.2018.1544651 pub.1110346939 50 second-generation group 0.68345 2 0.68013
4 2020 10.1080/01419870.2018.1544651 pub.1110346939 50 feelings of exclusion 0.67114 1 0.67114
... ... ... ... ... ... ... ... ...
326534 1870 10.1038/002374c0 pub.1032460376 28 importance 0.01678 367 0.28686
326535 1870 10.1038/002374c0 pub.1032460376 28 matter 0.01657 126 0.26236
326536 1870 10.1038/002374c0 pub.1032460376 28 additional remarks 0.01414 1 0.01414
326537 1870 10.1038/002374c0 pub.1032460376 28 letter 0.01358 16 0.19310
326538 1870 10.1038/002374c0 pub.1032460376 28 engineers 0.01144 17 0.15923

326539 rows × 8 columns

If we drop the publication metadata from the concepts table and remove duplicate rows, we obtain a new table of unique concepts.

[7]:

concepts_unique = concepts.drop_duplicates("concept")[['concept', 'frequency', 'score_avg']]
print("===\nUnique Concepts Found:", len(concepts_unique))
print("===\nPreview:")
display(concepts_unique)
===
Unique Concepts Found: 87556
===
Preview:
concept frequency score_avg
0 second-generation migrants 1 0.69108
1 children of migrants 1 0.68700
2 Donald Trump’s election 1 0.68565
3 second-generation group 2 0.68013
4 feelings of exclusion 1 0.67114
... ... ... ...
326518 same subjects 1 0.02285
326519 personal opinions 1 0.02257
326520 engineering colleges 1 0.02195
326531 serious objections 1 0.01875
326536 additional remarks 1 0.01414

87556 rows × 3 columns

3. Exploring our dataset: basic statistics about Publications / Concepts

In this section we’ll show how to get an overview of the concepts data we obtained.

These statistics are important because they will help us contextualize more in-depth analyses of the concepts data we’ll do later on.

3.1 Documents With concepts VS Without

You’ll soon discover that not all documents have associated concepts (e.g. because, in some cases, there is no text to extract them from).

Let’s see how many:

[8]:
CONCEPTS_FIELD = "concepts_scores"

df = pd.DataFrame({
    'type': ['with_concepts', 'without_concepts'] ,
    'count': [pubs[CONCEPTS_FIELD].notnull().sum(), pubs[CONCEPTS_FIELD].isnull().sum()]
             })

px.pie(df,
       names='type', values="count",
      title = "How many documents have concepts?")

3.2 Yearly breakdown of Documents With concepts VS Without

It’s also useful to look at whether the ratio of with/without concepts is stable across the years.

To this end we can use:

  • the publications id column to count the total number of publications per year

  • the concepts column to count the ones that have concepts
[9]:
temp1 = pubs.groupby('year', as_index=False).count()[['year', 'id', CONCEPTS_FIELD]]
temp1.rename(columns={'id': "documents", CONCEPTS_FIELD: "with_concepts"}, inplace=True)

# reorder cols/rows
temp1 = temp1.melt(id_vars=["year"],
         var_name="type",
        value_name="count")

px.bar(temp1, title="How many documents have concepts? Yearly breakdown.",
       x="year", y="count",
       color="type",
       barmode="group")

3.3 Concepts frequency

It is useful to look at how many concepts appear more than once in our dataset. As you’ll discover, it is often the case that only a subset of concepts appear more than once. That is because documents tend to be highly specialised, hence a large number of the extracted noun phrases are not very common across the dataset.

By looking at these basic frequency statistics we can determine a useful frequency threshold for our analysis - i.e. one that screens out concepts that are not representative of the overall dataset we have.

Tip: change the value of THRESHOLD to explore the data.

[10]:
THRESHOLD = 2

df = pd.DataFrame({
    'type': [f'freq<{THRESHOLD}',
             f'freq={THRESHOLD}',
             f'freq>{THRESHOLD}'] ,
    'count': [concepts_unique.query(f"frequency < {THRESHOLD}")['concept'].count(),
              concepts_unique.query(f"frequency == {THRESHOLD}")['concept'].count(),
              concepts_unique.query(f"frequency > {THRESHOLD}")['concept'].count()]
             })

px.pie(df,
       names='type', values="count",
      title = f"Concepts with a frequency major than: {THRESHOLD}")

3.4 Distribution of Concepts Frequency

It is useful to chart the overall distribution of how frequent concepts are.

The bottom-left section of the chart shows the segment we are most likely to focus on, so as to avoid both the concepts that appear only once and the long tail of highly frequent concepts that are likely to be common words of little interest.

[11]:
temp = concepts_unique.groupby('frequency', as_index=False)['concept'].count()
temp.rename(columns={'concept' : 'concepts with this frequency'}, inplace=True)
px.scatter(temp,
           x="frequency",
           y="concepts with this frequency",
          title="Distribution of concepts frequencies")

3.5 Yearly breakdown: unique VS repeated concepts

It is also useful to look at the total number of concepts per year VS the number of unique concepts.

This will give us a sense of whether the distribution of repeated concepts is stable across the years.

[12]:
series1 = concepts.groupby("year")['concept'].count().rename("All concepts")
series2 = concepts.groupby("year")['concept'].nunique().rename("Unique concepts")
temp2 = pd.concat([series1, series2], axis=1).reset_index()
temp2 = temp2.melt(id_vars=["year"],
         var_name="type",
        value_name="count")

px.bar(temp2,
       title="Yearly breakdown: Tot concepts VS Unique concepts",
       x="year", y="count",
       color="type", barmode="group",
       color_discrete_sequence=px.colors.carto.Antique)

4. Isolating ‘interesting’ concepts using frequency and score_avg

In this section we will take a deep dive into the concepts themselves, in particular by using the two metrics obtained above: frequency and score_avg.

The main thing to keep in mind is that only the combination of these two metrics leads to interesting results. If we used only frequency, we would end up with common keywords that are not very relevant; on the other hand, using only relevance would surface concepts that are important, but only to one or two documents.

4.1 The problem: frequent concepts are not that interesting!

For example, let’s see what happens if we get the top concepts based on frequency only:

[13]:
top = concepts_unique.sort_values("frequency", ascending=False)[:20]

px.bar(top,
       title="Concepts sorted by frequency",
       x="concept", y="frequency",
       color="score_avg")

Not very interesting at all! Those keywords are obviously very common in the scientific literature (e.g. study or development), but of very little semantic interest.

4.2 Solution 1: prefiltering by score_avg and sorting by frequency

By doing so, we aim to extract concepts that are both frequent and tend to be highly relevant (within their documents).

[14]:
temp = concepts_unique.query("score_avg > 0.6").sort_values("frequency", ascending=False)

px.bar(temp[:50],
       title="Concepts with high average score, sorted by frequency",
       x="concept", y="frequency",
       color="score_avg")

4.3 Solution 2: prefiltering by frequency and sorting by score_avg

This method also lets us isolate interesting concepts, even ones that do not appear very frequently in our dataset.

[15]:
temp = concepts_unique.query("frequency > 10 & frequency < 100").sort_values(["score_avg", "frequency"], ascending=False)

px.bar(temp[:100],
       title="Concepts with medium frequency, sorted by score_avg",
       x="concept", y="score_avg",
       height=600,
       color="frequency")

5. Analyses By Year

In this section we will show how to use the methods above together with a yearly segmentation of the documents data. This will allow us to draw some interesting comparisons of concepts/topics across the years.

5.1 Adding year-based metrics to the concepts dataframe

These are the steps:

  • recalculate frequency and score_avg for each year (as frequency_year and score_avg_year), using the original concepts dataset from section 2.1

  • note that this will result in duplicates (one row per appearance of a concept within the same year), which of course we should remove

[16]:
concepts['frequency_year'] = concepts.groupby(['year', 'concept'])['concept'].transform('count')
concepts['score_avg_year'] = concepts.groupby(['year', 'concept'])['score'].transform('mean').round(5)

concepts_by_year = concepts.copy().drop_duplicates(subset=['concept', 'year'])\
                    [['year', 'concept', 'frequency_year', 'score_avg_year']]
concepts_by_year.head()
[16]:
year concept frequency_year score_avg_year
0 2020 second-generation migrants 1 0.69108
1 2020 children of migrants 1 0.68700
2 2020 Donald Trump’s election 1 0.68565
3 2020 second-generation group 1 0.68345
4 2020 feelings of exclusion 1 0.67114

For example, let’s look at the yearly distribution of a specific concept: migrants

[17]:
concepts_by_year[concepts_by_year['concept'] == "migrants"]
[17]:
year concept frequency_year score_avg_year
15 2020 migrants 16 0.54113
34007 2019 migrants 21 0.54642
67173 2018 migrants 19 0.56283
100277 2017 migrants 10 0.45743
125102 2016 migrants 11 0.48838
142707 2015 migrants 3 0.50477
162782 2014 migrants 4 0.57564
176593 2013 migrants 3 0.58176
194523 2012 migrants 3 0.51278
206878 2011 migrants 3 0.51742
213468 2010 migrants 4 0.35890
229204 2009 migrants 2 0.56712
233276 2008 migrants 5 0.47707
243782 2007 migrants 4 0.60750
254056 2006 migrants 4 0.52949
283604 2000 migrants 1 0.49055
286053 1999 migrants 2 0.55131
292877 1997 migrants 1 0.65821
297699 1996 migrants 1 0.24544
300682 1994 migrants 1 0.59848
306503 1991 migrants 1 0.52219
318482 1980 migrants 1 0.53460
321046 1976 migrants 1 0.03168

5.2 Charting the variation: multi-year visualization

We can use Plotly’s ‘facets’ to create sub-plots that show the variation across years. Plotly will plot all the values retrieved, which makes it easy to spot upward and downward trends.

  • tip: to have an equal representation for each year, we take the top N concepts across a chosen span of years and then look at their frequency distribution over those years

In order to isolate interesting concepts, we can use the same formula as above (filter by score, then sort by frequency). Only this time using yearly values of course!

[18]:
MAX_CONCEPTS = 50
YEAR_START = 2015
YEAR_END = 2019
SCORE_MIN = 0.4

segment = concepts_by_year.query(f"year >= {YEAR_START} & year <= {YEAR_END}").copy()

# create frequency / average-score metrics for the selected years only
segment_concepts = concepts.query(f"year >= {YEAR_START} & year <= {YEAR_END}")
segment['frequency'] = segment_concepts.groupby('concept')['concept'].transform('count')
segment['score_avg'] = segment_concepts.groupby('concept')['score'].transform('mean').round(5)

# get top N concepts for the dataviz
top_concepts = segment.drop_duplicates('concept')\
        .query(f"score_avg > {SCORE_MIN}")\
        .sort_values("frequency", ascending=False)[:MAX_CONCEPTS]

# use yearly data only for the top N concepts
segment_subset = segment[segment['concept'].isin(top_concepts['concept'].tolist())]

px.bar(segment_subset,
       x="concept",
       y="frequency_year",
       facet_row="year",
       title=f"Top concepts {YEAR_START}-{YEAR_END} with score_avg > {SCORE_MIN}, sorted by frequency",
       height=1000,
       color="frequency_year")

6. Conclusion

In this tutorial we have demonstrated how to query for concepts using the Dimensions Analytics API.

The main takeaways

  • concepts can be easily extracted by using the as_dataframe_concepts() method

  • concepts have an implicit score relative to the document they belong to - but we can create more absolute metrics by normalizing these scores

  • it is useful to look at the frequency of concepts in the context of the entire dataset we have

  • there can be a long tail of concepts that are very infrequent, hence it’s useful to filter those out

  • by using a combination of frequency and score_avg metrics, we can filter out uninteresting concepts

What next

Using these methods, you can take advantage of concepts data in a number of real-world scenarios. Here are some ideas:

  • you can segment publications using other criteria: e.g. by journal or by field of research, in order to identify more specific trends;

  • concepts extracted can be used to create new DSL searches - using the in concepts search syntax;

  • concepts data can be grouped further using semantic similarity or clustering techniques;

  • you can look at the co-occurrence of concepts within the same document, in order to build a semantic network (as sketched below).
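For instance, here is a minimal sketch of the last idea, building a co-occurrence edge list from the concepts dataframe of section 2.1 (the 0.6 score threshold is an arbitrary choice):

# Pair up concepts appearing in the same publication, via a self-join on the publication id
strong = concepts.query("score > 0.6")[['id', 'concept']]
pairs = strong.merge(strong, on='id')
pairs = pairs[pairs['concept_x'] < pairs['concept_y']]   # keep each unordered pair once
edges = (pairs.groupby(['concept_x', 'concept_y'])
              .size().reset_index(name='weight')
              .sort_values('weight', ascending=False))
edges.head(10)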



Note

The Dimensions Analytics API allows you to carry out sophisticated research data analytics tasks like the ones described on this website. Also check out the associated GitHub repository for examples, the source code of these tutorials and much more.
