Hi, I'm Noor. I am a theoretical neuroscience researcher focused on understanding and emulating biological adaptation in health and disease from a first-principles account.
My long-term goal is to help design personalised (neural and behavioural) interventions that can support functional recovery after brain damage, e.g., stroke or traumatic brain injury.
In service of this agenda, I focus on understanding adaptive mechanisms that underwrite biological learning.
Overview
My research targets fundamentally distinct, but interdependent, levels of Bayesian optimisation and investigates the computations required for:
building flexible generative models for life-long learning that can aptly encode (temporal and spatial) information even when the embodied environment changes, e.g., Fountas, Sajid, et al., NeurIPS, 2020 (DAI-MC); Yuan, Sajid, et al., 2021;
supporting adaptive planning strategies. These can be induced through information-based exploration, e.g., Sajid, Ball, et al., Neural Computation, 2021; Sajid, Da Costa, et al., The Drive for Knowledge, 2022. Conversely, we have formulated latent state preference learning to induce adaptive behaviour under static deep generative models, e.g., Sajid, Tigas, et al., ICML URL Workshop, 2021 (PEPPER); Sajid, Tigas, et al., CoLLAs Workshop, 2022 (NORE);
affording appropriate optimisation of perceptual learning and inference, i.e., determining the right level of approximation for posterior estimation and the ensuing message-passing schemes for different contexts, e.g., Sajid, Faccio, et al., Neural Computation, 2022; Sajid, Convertino, Friston, Entropy, 2021; Parr, Sajid, Friston, Entropy, 2020 (formal sketches of the last two points follow this list).
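To give the last two points a formal flavour, here is a minimal sketch in the notation common to this literature (the symbols $o$, $x$ and $\pi$ for observations, latent states and policies are shorthand, not lifted from any one paper). Information-based exploration typically scores a policy $\pi$ by its expected free energy, which decomposes into risk and ambiguity, while the Rényi divergence generalises the KL divergence that underwrites standard variational inference:

\[
G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(o \mid \pi) \,\|\, p(o)\,\big]}_{\text{risk}} \;+\; \underbrace{\mathbb{E}_{q(x \mid \pi)}\big[\mathrm{H}[\,p(o \mid x)\,]\big]}_{\text{ambiguity}}
\]

\[
D_{\alpha}\big[\,q(x) \,\|\, p(x \mid o)\,\big] \;=\; \frac{1}{\alpha - 1}\,\log \int q(x)^{\alpha}\, p(x \mid o)^{1-\alpha}\, dx, \qquad \alpha > 0,\ \alpha \neq 1,
\]

with $\lim_{\alpha \to 1} D_{\alpha} = D_{\mathrm{KL}}[q \,\|\, p]$. Loosely, $\alpha > 1$ yields zero-forcing (mode-seeking) posterior approximations and $\alpha < 1$ yields mass-covering ones, which is the sense in which different contexts may call for different levels of approximation.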
Additionally, I investigate how these generative models may adapt, adjust and (re-)learn after perturbation, using computational lesions (a toy sketch follows the list below). For this, we have introduced:
a theoretical understanding of functional degeneracy, e.g., Sajid, Parr, et al., Cerebral Cortex, 2020;
a quantitative understanding of how distributed structural lesions disrupt functional outcomes, e.g., Sajid, Holmes, et al., Scientific Reports, 2021; Sajid, Parr, et al., Brain Communications, 2021;
empirical evidence of a degenerate functional architecture, e.g., Sajid, Gajardo-Vidal, et al., 2022.
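As a toy illustration of what a computational lesion involves, here is a minimal, hypothetical sketch (my own illustration, not code from the papers above): degrade part of a discrete generative model's likelihood mapping and observe how posterior inference deteriorates.

```python
# Hypothetical illustration of an in-silico ("computational") lesion on a
# toy discrete generative model. All names and numbers are my own choices,
# not taken from any of the papers above.
import numpy as np

# Likelihood matrix A: p(observation | hidden state); each column sums to 1.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
prior = np.ones(3) / 3  # flat prior over the three hidden states


def posterior(A, prior, obs):
    """Exact posterior over hidden states after observing `obs` (Bayes' rule)."""
    unnorm = A[obs] * prior
    return unnorm / unnorm.sum()


def lesion(A, state, severity):
    """Degrade the likelihood column for one hidden state by blending it
    towards a uniform (uninformative) distribution."""
    A_les = A.copy()
    uniform = np.full(A.shape[0], 1.0 / A.shape[0])
    A_les[:, state] = (1 - severity) * A[:, state] + severity * uniform
    return A_les


obs = 0
print("intact  :", posterior(A, prior, obs))                  # sharp belief in state 0
print("lesioned:", posterior(lesion(A, 0, 0.9), prior, obs))  # belief flattens
```

Lesioning here simply blends one likelihood column towards an uninformative distribution; the same idea scales to removing connections or reducing precision in richer generative models.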
Highlights
Featured Papers
Sajid, N.*, Faccio, F.*, Da Costa, L., Parr, T., Schmidhuber, J., & Friston, K. (2022). Bayesian brains and the Rényi divergence. Neural Computation. [paper, code]
Meera, A., Novicky, F., Parr, T., Friston, K., Lanillos, P., & Sajid, N. (2022). Reclaiming saliency: rhythmic precision-modulated action and perception. Frontiers in Neuroscience. [paper]
Sajid, N., Tigas, P., Zakharov, A., Fountas, Z., & Friston, K. (2021). Exploration and preference satisfaction trade-off in reward-free learning. Proceedings of the Unsupervised Reinforcement Learning Workshop at the 38th International Conference on Machine Learning. [paper]
Sajid, N., Ball, P. J., Parr, T., & Friston, K. (2021). Active inference: demystified and compared. Neural Computation, 33(3), 674-712. [paper, code]
Fountas, Z., Sajid, N., Mediano, P. A., & Friston, K. (2020). Deep active inference agents using Monte-Carlo methods. Advances in Neural Information Processing Systems, 33. [paper, code]
Full list here.
Featured Talks
Rising Stars in AI Symposium 2023 at KAUST
Information-Theoretic Principles in Cognitive Systems (InfoCog) workshop at NeurIPS 2022
Free Energy Principle: Science, Tech and Philosophy Conference 2022
Active Inference Symposium on Robotics 2022
Data Science Forum at Zebra Technologies
Dynamical Systems & Computations in the Brain Seminar at TU Dresden
Defence Science and Technology Laboratory, UK Government, 2018
Panelist: AI for Good at CogX 2017
Active Inference Lab Stream
ALBA Lab, UCSF
Active Inference Symposium
Workshops
Co-organiser: Temporal representations in reinforcement learning (TRiRL) workshop at RLDM 2022
Co-organiser: International Workshop on Active Inference (IWAI) 2022 at ECML/PKDD 2022; IWAI 2023
Co-organiser: WiML UnWorkshop 2022 at ICML 2022
Co-organiser: Postgraduate Research Conference 2022 at UCL IoN