Showing 1–5 of 5 results
Advanced filters: Author: Ilia Sucholutsky
  • Large language models (LLMs) are already transforming the study of individual cognition, but their application to studying collective cognition has been underexplored. We lay out how LLMs may be able to address the complexity that has hindered the study of collectives, and we raise possible risks that warrant new methods.

    • Ilia Sucholutsky
    • Katherine M. Collins
    • Robert D. Hawkins
    Comments & Opinion
    Nature Computational Science
    Volume: 5, P: 704-707
  • Large language models (LLMs) can synthesize vast amounts of information. Luo et al. show that LLMs—especially BrainGPT, an LLM the authors tuned on the neuroscience literature—outperform experts in predicting neuroscience results and could assist scientists in making future discoveries.

    • Xiaoliang Luo
    • Akilles Rechardt
    • Bradley C. Love
    Research, Open Access
    Nature Human Behaviour
    Volume: 9, P: 305-315
  • In this Perspective, the authors advance a vision for a science of collaborative cognition aimed at engineering systems that can serve as thought partners: systems built to meet our expectations and complement our limitations.

    • Katherine M. Collins
    • Ilia Sucholutsky
    • Thomas L. Griffiths
    Reviews
    Nature Human Behaviour
    Volume: 8, P: 1851-1863