Kelvin Xu
I am a researcher at Google DeepMind interested in large language model pre-training. I most recently led scaling laws work that became part of the Gemini 1.5 model family. I previously worked on Gemini 1 and PaLM-2, and trained some of the first preference models used in Project Bard (now also known as Gemini). I am interested in all things related to creating useful AI systems.
I did my PhD at UC Berkeley in the EECS department, where I was fortunate to be advised by Prof. Sergey Levine. Before starting at Berkeley, I had the pleasure of working at Google as part of the inaugural Brain Residency Program (now the AI Residency).
 
I did my MSc at the MILA lab at the Université de Montréal under the supervision of (Turing Award winner) Prof. Yoshua Bengio and Prof. Aaron Courville. I also worked closely with Prof. Kyunghyun Cho. Prior to that, I did my B.A.Sc. in the Engineering Science Program at the University of Toronto, where it all started.
 
 
 
email / google scholar