Query Expansion Based on Crowd Knowledge for Code Search
Abstract—As code search is a frequent developer activity in software development practices, improving the performance of
code search is a critical task. In the text retrieval based search techniques employed in code search, the term mismatch
problem is a critical language issue for retrieval effectiveness. By reformulating queries, query expansion provides effective
ways to solve the term mismatch problem. In this paper, we propose Query Expansion based on Crowd Knowledge (QECK), a
novel technique to improve the performance of code search algorithms. QECK identifies software-specific expansion words
from the high quality pseudo relevance feedback question and answer pairs on Stack Overflow to automatically generate the
expansion queries. Furthermore, we incorporate QECK into the classic Rocchio's model, and propose a QECK based code search
method, QECKRocchio. We conduct three experiments to evaluate our QECK technique and investigate QECKRocchio on a large-scale
corpus containing real-world code snippets and a question and answer pair collection. The results show that QECK
improves the performance of three code search algorithms by up to 64% in Precision, and 35% in NDCG. Meanwhile, compared
with the state-of-the-art query expansion method, the improvement of QECKRocchio is 22% in Precision, and 16% in NDCG.
Index Terms—Code search, crowd knowledge, query expansion, information retrieval, question & answer pair.
INTRODUCTION
Code search is a frequent developer activity in
software development practices, which has been a
part of software development for decades [47]. As
repositories containing billions of lines of code become
available [1], [3], [6], [33], [43], the search mechanisms
have evolved to provide better recommendations for
given queries. On Google Code Search, a developer
composes 12 search queries per weekday on average
[41]. Meanwhile, developers search for sample code
more than anything else: 34% of queries are conducted to
find sample code, and almost a third of searches are
performed incrementally through query reformulation
[41].
The performance of text retrieval based search
techniques used in code search strongly depends on
the text contained in queries and the code snippets (a
method is viewed as a code snippet [22]). The term
mismatch problem, also known as the vocabulary
problem [13], is a critical language issue for retrieval
effectiveness, as the queries given by users and the
code snippets often do not use the same words [10].
Meanwhile, the length of queries is usually short.
Sadowski et al. report that the average number of
words per query is 1.85 for the queries submitted to
Google search for code [41]. Obviously, it is not an
easy task to formulate a good query, which depends
greatly on the experience of the developer and his/her
knowledge of the software system [37]. Query expansion
methods offer effective ways to address the vocabulary
problem by reformulating queries [10], [36].
In recent years, several query expansion based code
search approaches have been presented. For example, Wang
et al. [58] incorporate users’ opinions on the feedback
code snippets returned by a code search engine to
refine result lists. Hill et al. [40] suggest alternative
query words by calculating how frequently candidate
words co-occur with the words in the queries. Lu et al.
[29] propose a query expansion method denoted as
PWordNet by leveraging the Part-Of-Speech (POS) of each
word in queries and WordNet [30] to expand queries.
Lemos et al. [25] automatically expand test cases based
on WordNet and a code-related thesaurus.
In this paper, we propose Query Expansion based
on Crowd Knowledge (QECK) to improve the
performance of code search. Specifically, QECK
retrieves relevant Question & Answer (Q&A) pairs in a
collection extracted from Stack Overflow as the
Pseudo Relevance Feedback (PRF) documents for a
given free-form query, identifies the software-specific
words from these documents, and generates an
expansion query by adding words to the original
query. The advantages of QECK are threefold. First, it
automatically generates expansion queries without
human intervention, as QECK employs PRF to
automatically generate expansion queries. Second, it
generates high quality PRF Q&A pairs by considering
textual similarity and the quality of both questions and
answers. Third, it identifies software-specific words
from Q&A pairs using a TF-IDF weighting function.
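The third step can be sketched as follows. This is a minimal illustrative sketch of TF-IDF-based expansion-word selection over PRF documents, not the paper's exact weighting; the function name, the smoothed IDF formula, and the toy documents are assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_expansion_words(prf_docs, k):
    """Rank candidate expansion words from pseudo-relevance-feedback
    (PRF) documents by a TF-IDF weight and return the top k.
    `prf_docs` is a list of tokenized Q&A pairs."""
    n = len(prf_docs)
    # Document frequency: in how many PRF documents each word occurs.
    df = Counter()
    for doc in prf_docs:
        df.update(set(doc))
    # Accumulate each word's TF-IDF weight over all PRF documents.
    scores = Counter()
    for doc in prf_docs:
        tf = Counter(doc)
        for word, freq in tf.items():
            # Normalized term frequency times a smoothed IDF.
            scores[word] += (freq / len(doc)) * math.log(n / df[word] + 1)
    return [w for w, _ in scores.most_common(k)]

docs = [["parse", "json", "android", "json"],
        ["read", "json", "file", "android"],
        ["parse", "xml", "file"]]
print(tfidf_expansion_words(docs, 3))  # → ['json', 'parse', 'file']
```

Words that are frequent within the PRF Q&A pairs but not spread across all of them score highest, which is what makes them useful, software-specific additions to the query.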
The underlying idea behind QECK is to utilize the
software-specific words contained in Q&A pairs to
further improve the likelihood of retrieving relevant
code snippets. In Q&A pairs, the questions and
answers, denoted as posts on Stack Overflow, are
submitted and voted by developers. Therefore, the
Q&A pairs contain useful knowledge about software
development, which is called crowd knowledge in our
study. The knowledge can be extracted in the form of
software-specific words [54], [62]. Obviously, these
software-specific words are more useful for software
engineering tasks than the general words of WordNet
used in previous studies [25], [29].
Pseudo relevance feedback is a local query
expansion approach, and the classic Rocchio's model
is a standard implementation of pseudo relevance feedback in
information retrieval. We incorporate QECK into the
classic Rocchio's model, and propose a QECK based
code search method denoted as QECKRocchio. To evaluate
the effectiveness of QECK and investigate the
performance of QECKRocchio, we explore three Research
Questions (RQs) in three experiments, respectively.
These experiments are conducted on a Q&A pair
collection containing 312,941 Q&A pairs labeled with
the “android” tag, and a real-world code snippet
corpus containing 921,713 code snippets extracted
from 1,538 open source app projects on the Android
platform. A code snippet refers to a method in Java
files of app projects [22].
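For reference, the classic Rocchio model (with positive feedback only) reformulates the query vector as q' = α·q + (β/|Dr|)·Σ_{d∈Dr} d, where Dr is the set of PRF documents. A minimal sketch follows; the dictionary representation and the α, β defaults are illustrative assumptions, not the paper's tuned parameters.

```python
def rocchio_expand(query_vec, prf_vecs, alpha=1.0, beta=0.75):
    """Classic Rocchio update with positive feedback only:
        q' = alpha * q + (beta / |Dr|) * sum of PRF document vectors.
    Vectors are word -> weight dicts; alpha and beta are
    illustrative defaults."""
    # Start from the original query terms, scaled by alpha.
    expanded = {w: alpha * v for w, v in query_vec.items()}
    # Add each PRF document's terms, scaled by beta / |Dr|.
    for doc in prf_vecs:
        for w, v in doc.items():
            expanded[w] = expanded.get(w, 0.0) + beta * v / len(prf_vecs)
    return expanded

query = {"parse": 1.0, "json": 1.0}
prf = [{"json": 0.5, "android": 0.8},
       {"gson": 0.6, "json": 0.4}]
new_query = rocchio_expand(query, prf)
```

Terms shared between the query and the feedback documents (here "json") are reinforced, while new terms from the feedback documents (here "android" and "gson") enter the query with smaller weights.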
The three RQs and their conclusions are as follows.
RQ1: Can QECK improve the performance of
code search algorithms?
We employ three code search algorithms to verify
the effectiveness of QECK by comparing the
recommendation performance before and after QECK
is applied. From comparative results in the experiment,
we verify that our QECK technique can indeed
improve the retrieval performance for code search.
Specifically, QECK improves the performance of the three
code search algorithms by up to 64% in Precision, and
35% in NDCG.
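NDCG, one of the two evaluation metrics used above, discounts the graded relevance of each result by its log rank and normalizes by the ideal ordering. A minimal sketch of one common formulation (the paper may use a variant):

```python
import math

def ndcg_at_k(rel_scores, k):
    """NDCG@k for one query: `rel_scores` are graded relevance
    labels of the results in ranked order."""
    def dcg(scores):
        # Rank i (0-based) is discounted by log2(i + 2).
        return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))
    ideal = dcg(sorted(rel_scores, reverse=True))
    return dcg(rel_scores) / ideal if ideal > 0 else 0.0

# Relevance of results as returned; ideal ordering would be [3, 2, 1, 0].
print(round(ndcg_at_k([3, 2, 0, 1], 4), 3))  # → 0.985
```

Because it rewards placing highly relevant code snippets near the top of the result list, NDCG complements Precision, which only counts how many retrieved snippets are relevant.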
RQ2: How do the parameters affect the performance of
QECK?
For the parameters (i.e. the number of PRF
documents and the number of expansion words) in
QECK, we further study the influence of parameter
variation on the performance of QECK. As it is a
time-consuming task to label relevance scores for code
snippets, we only discuss the situation where we fix one
parameter and explore the performance trend of
each code search algorithm by varying the other
parameter. Our results indicate that, after employing
QECK, the performance of the three algorithms is
generally better; there is a unique optimal
performance value for each code search algorithm; and too
many or too few expansion words are undesirable.
Based on the results, we recommend that, in QECK,
the default value for the number of PRF documents is
5, and the default value for the number of expansion
words is 9.
RQ3: Is our query expansion based code search
method, QECKRocchio, better than the state-of-the-art
method?
We compare QECKRocchio against PWordNet, a
state-of-the-art query expansion method [29]. The experimental
results show that QECKRocchio is a better method to aid
mobile app development than the comparative method.
Specifically, for Precision, the improvement is 22%,
and for NDCG, the improvement is 16%.
This paper makes the following contributions:
- We propose QECK, a novel technique leveraging
crowd knowledge on Stack Overflow to improve
the performance of code search algorithms.
- We explore the performance and verify the
effectiveness of QECK with three code search
algorithms and a comparative method in terms of
Precision and NDCG.
- We construct a Q&A pair collection from Stack
Overflow and a code snippet corpus from open
source app projects.
The next section outlines the background of our study.
Section 3 elaborates our technique. Section 4 provides
details about the experimental setup. Experimental results
and analysis are presented in Section 5. Section 6 states
the threats to validity. Related work is discussed in
Section 7. In Section 8, we conclude this paper and
introduce future work.