Laboratory of Computational Fluid Intelligence

We study how neural circuit dynamics support flexible, generalizable behavior.

Research

There is an ancient saying from China: “The wise delight in water; the kind delight in mountains” (智者乐水,仁者乐山). Indeed, the essence of intelligence lies in its fluidity—it allows us not only to master specific skills, but also to adapt and apply what we know across a wide variety of situations.

Imagine the first time you learned how to add two numbers. The teacher showed you a few simple examples and a set of rules. Once you grasped the rules, you could apply them to any numbers, solving infinitely many new problems. How can we do so much from learning so little? In other words, how do brains generalize from limited experience?

Our lab seeks to understand the neural circuit basis of fluid intelligence—our ability to use learned knowledge to solve new problems.

We approach this question through computational modeling and theory. We build neural network models that are informed by neurobiology and cognitive theory, and that can generalize to solve new tasks. By analyzing these models, we aim to uncover the mechanisms that support flexible reasoning and abstraction. We work closely with experimental collaborators to test and refine the hypotheses generated by our models.

This is a deeply interdisciplinary problem—one that has intrigued psychologists, linguists (how can we generate infinite sentences from a finite vocabulary?), neuroscientists, and AI researchers (how can artificial systems generalize out of distribution?). We believe that understanding the neural mechanisms of fluid intelligence will require insights from all of these disciplines. Such understanding could ultimately inspire the next generation of AI systems and inform new approaches to treating psychiatric and cognitive disorders.

  • Task generalization

    How can a neural circuit learn abstract rules from a small set of examples and apply them broadly?

  • Multi-timescale integration

    What mechanisms support decision-making and memory across multiple timescales?

  • Brain-inspired artificial intelligence

    Using insights from neuroscience to build more flexible AI systems.

News

We are hiring!

Nov 2025

We are looking for postdocs and students interested in the computational/theoretical neuroscience of cognition. If you are interested, please contact us at yueliu@fau.edu.

Hello world!

Nov 3, 2025

Lab Members

Yue Liu, Ph.D.

Principal Investigator

Publications

  1. Liu, Y., & Wang, X. J. (2024). Flexible gating between subspaces in a neural network model of internally guided task switching. Nature Communications, 15(1), 6497.
  2. Goudar, V., Kim, J. W., Liu, Y., Dede, A. J., Jutras, M. J., Skelin, I., Michael, R., Chang, W., Ram, B., Fairhall, A., Lin, J., Knight, R., Buffalo, E., & Wang, X. J. (2024). A comparison of rapid rule-learning strategies in humans and monkeys. Journal of Neuroscience, 44(28).
  3. Liu, Y., Levy, S., Mau, W., Geva, N., Rubin, A., Ziv, Y., ... & Howard, M. (2022). Consistent population activity on the scale of minutes in the mouse hippocampus. Hippocampus, 32(5), 359-372.
  4. Liu, Y., Brincat, S. L., Miller, E. K., & Hasselmo, M. E. (2020). A geometric characterization of population coding in the prefrontal cortex and hippocampus during a paired-associate learning task. Journal of Cognitive Neuroscience, 32(8), 1455-1465.
  5. Liu, Y., & Howard, M. W. (2020). Generation of scale-invariant sequential activity in linear recurrent networks. Neural Computation, 32(7), 1379-1407.
  6. Liu, Y., Tiganj, Z., Hasselmo, M. E., & Howard, M. W. (2019). A neural microcircuit model for a scalable scale-invariant representation of time. Hippocampus, 29(3), 260-274.
  7. Liu, Y., Malafarina, D., Modesto, L., & Bambi, C. (2014). Singularity avoidance in quantum-inspired inhomogeneous dust collapse. Physical Review D, 90(4), 044040.

Contact

Florida Atlantic University – Department of Biomedical Engineering, Stiles-Nicholson Brain Institute
Email: yueliu@fau.edu