Logically, a global tech company using advanced AI to tackle misinformation, and the Indraprastha Institute of Information Technology Delhi (IIIT-Delhi), a leading research-oriented state university specialising in computer science, have extended their existing research partnership on combating online harms until 2026.
As part of the partnership, the two organisations will conduct further research on developing advanced technologies to counter hate speech and online mis- and disinformation. The partnership will also enhance multimedia analysis capabilities to cover video, images and memes, and build multilingual models that understand regional languages in India.
Since 2020, Logically and the Laboratory for Computational Social Systems (LCS2) at IIIT-Delhi have been collaborating on fundamental technical research into the provenance, motivations, and psychology of online misinformation. Over the last two years, the research has focused on how society can identify and impede online misinformation and its spread.
Research from the first two years of the collaboration has already been converted into multilingual capabilities deployed in Logically’s flagship threat intelligence platform – Logically Intelligence – to detect and analyse mis- and disinformation and online harms more quickly. In 2021, outputs from the research secured recognition at prestigious academic conferences such as SIGKDD in Singapore and IEEE ICDM in Auckland.
Commenting on the partnership, Dr Anil Bandhakavi, Head of Data Science at Logically, said, “We are thrilled with the impact from the first two years of our research collaboration with IIIT-Delhi. As expected, we have been able to show quantifiable results in research to curb hate speech and mis/disinformation. Given the success of the first phase of our collaboration, we are excited to further strengthen our partnership with a prestigious institution like IIIT-Delhi.”
Dr Tanmoy Chakraborty, Associate Professor, Director of the Laboratory for Computational Social Systems, and head of the Center for AI at IIIT-Delhi, said, “We look forward to building further on our research successes and growing our research teams in the next phase of the collaboration. Our research capabilities and Logically’s industry experience will enable us to develop better insights into online harm and its prevention across languages and various forms of media.”
Combining IIIT-Delhi’s AI research capabilities and Logically’s industry-leading technological expertise, the research partnership has designed predictive models that forecast the likelihood of a social media post attracting harmful content, enabling content moderators to identify more quickly the posts that may invite online harm. Additionally, to better understand and identify community-level threats, the teams modelled how hateful online echo chambers form, observing that a small number of echo chambers are responsible for spreading the majority of harmful online content.