Associate Professor with Habilitation

Researcher at NOVA-LINCS,
Universidade NOVA de Lisboa.

PhD in Computer Science (2008),
Imperial College London.

Contacts

Departamento de Informática
Faculdade de Ciências e Tecnologia
Universidade Nova de Lisboa
2829-516 Caparica
Portugal

email: jmag@fct.unl.pt

Research Interests

My research interests are in the area of vision and language information analysis and search.

Projects@NOVASearch group

I coordinate the NOVASearch group at FCT/UNL. Our group's applied research targets the Web, social media, and clinical domains, and a number of research and industry projects have funded the team's work.

Selected Publications

  1. D. Semedo, J. Magalhaes, Diachronic cross-modal embeddings, ACM Multimedia 2019.
  2. F. Martins, J. Magalhaes, J. Callan, Modeling temporal evidence from external collections, ACM Web Search and Data Mining (WSDM) 2019.
  3. D. Semedo, J. Magalhaes, Temporal cross-media retrieval with soft constraints, ACM Multimedia 2018.
  4. G. Marcelino, R. Pinto, J. Magalhaes, Ranking news-quality multimedia, ACM International Conference on Multimedia Retrieval (ICMR) 2018. (Best paper nomination.)
  5. I. Arapakis, F. Peleja, B. B. Cambazoglu, J. Magalhaes, Linguistic benchmarks of online news article quality, Association for Computational Linguistics (ACL) 2016.
  6. F. Martins, J. Magalhaes, J. Callan, Barbara Made the News: mining the behavior of crowds for time-aware learning to rank, ACM Web Search and Data Mining (WSDM) 2016.
  7. A. Mourao, F. Martins, J. Magalhaes, Multimodal medical information retrieval with unsupervised rank fusion, Computerized Medical Imaging and Graphics 2015.
  8. A. Mourao, J. Magalhaes, Competitive affective gaming: winning with a smile, ACM Multimedia 2013.
  9. J. Magalhaes, F. Ciravegna, S. M. Rueger, Exploring multimedia in a keyword space, ACM Multimedia 2008.
  10. J. Magalhaes, S. M. Rueger, Information-theoretic semantic-multimedia retrieval, ACM Int'l Conference on Image and Video Retrieval (CIVR) 2007. (Best paper award.)

My full list of publications is available on DBLP.

Graduate Student Advising

If you are motivated and willing to work hard (and play hard) on vision and language understanding research, please get in touch.

Teaching

Office hours: Thursdays 17h00 - 19h00

Courses:

Chairing and Service

Biography

Joao Magalhaes is an Associate Professor at the Department of Computer Science of the Universidade NOVA de Lisboa (FCT NOVA). He holds a Ph.D. degree (2008) in Computer Science from Imperial College London, UK. His research interests span the problems of vision and language understanding, in particular multimedia search and summarization, Web and social media mining, and multimodal information embeddings. He was the General Chair of the 42nd European Conference on Information Retrieval (ECIR). Currently, he is the Honorary Chair of ACM Multimedia Asia 2021 and the General Chair of ACM Multimedia 2022 in Lisbon.

He is regularly involved in international program committees and EU project review panels. He has participated in several national, EU-FP7, and H2020 research projects, and has coordinated several others, including international projects with the University of Texas at Austin and Carnegie Mellon University, USA. Recent projects include COGNITUS, an H2020 project led by the BBC, and iFetch, led by Farfetch.

His team's work is published in top-tier venues, including ACM Multimedia, ACM WSDM, ACL, ECIR, and ACM ICMR. His group's work has been awarded, or nominated for, several best paper awards: the best paper award at the Int'l Conference on the Computational Processing of Portuguese 2020, best paper nominations at the ACM Int'l Conference on Multimedia Retrieval 2018 and the ACM Int'l Conference on the Theory of Information Retrieval 2018, and the best poster paper award at Advances in Computer Entertainment 2016. His PhD work received the ACM Int'l Conference on Image and Video Retrieval 2007 Best Paper Award, and his MSc work received the Young Engineer Award of the Portuguese national engineering association in 2002.