ABSTRACT
Introduction: The discussion on the use of artificial intelligence is expanding and now spans all fields of knowledge. In university libraries, these tools are increasingly being used in activities that constitute reference services, such as the development of search strategies.
Objective: To assess the potential of three artificial intelligence tools in executing direct commands related to the construction of search expressions.
Methodology: This study is characterized as applied, empirical research employing a comparative method. Three artificial intelligence tools were selected: ChatGPT-4, Copilot, and Gemini. Based on this selection, descriptors, free terms, and search expressions were developed according to a fictional research objective. Prompts were designed to specify the expected outputs from the tools concerning the search process, and the analysis focused on evaluating their comprehension of commands as well as the presence or absence of syntax errors.
Results: In more mechanical requests, ChatGPT, followed by Copilot, outperformed Gemini, with minimal or no distortion. Conversely, in more context-dependent requests, Gemini yielded better results. Errors were observed in the use of Boolean and other advanced search operators, as well as in the execution of unintended actions.
Conclusion: These tools are currently valuable allies for librarians and researchers in repetitive and corrective tasks. However, the tests conducted indicate that the tools alone do not generate highly sensitive search expressions, requiring supervision and adjustment before the strategy is implemented.
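The syntax errors mentioned above (Boolean and other advanced search operators) can be caught mechanically before a strategy is run against a database. The sketch below is a hypothetical illustration, not part of the study: a minimal checker for a few common faults in a PubMed-style search expression (unbalanced parentheses, lowercase operators, dangling operators).

```python
import re

def check_search_expression(expr: str) -> list[str]:
    """Flag common syntax errors in a Boolean search expression."""
    errors = []
    # Unbalanced parentheses break nested clause grouping
    if expr.count("(") != expr.count(")"):
        errors.append("unbalanced parentheses")
    # Many databases require Boolean operators in uppercase
    if re.search(r"\b(and|or|not)\b", expr):
        errors.append("lowercase Boolean operator")
    # An operator at the start or end of the expression has no operand
    if re.search(r"^\s*(AND|OR|NOT)\b|\b(AND|OR|NOT)\s*$", expr):
        errors.append("dangling Boolean operator")
    return errors

# Hypothetical strategy on AI in libraries, with deliberate faults:
expr = '("artificial intelligence" OR "machine learning") and (librar*'
print(check_search_expression(expr))
# → ['unbalanced parentheses', 'lowercase Boolean operator']
```

A checker like this handles only the mechanical layer; judging whether the expression is sensitive enough for the research objective remains, as the study concludes, a task for the librarian.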
KEYWORDS:
Literature Reviews; Artificial Intelligence; Information Retrieval; Librarianship; Librarians

