I’ve spent the last 18 months working extensively on various forms of social media automation and manipulation. While everyone is now writing about ‘bots’ and ‘trolls’, these concepts are not well understood, and the research landscape (especially for social scientists) remains nascent. At the moment, I’m particularly interested in pursuing foundational work on social media automation. First, what are bots, really? What do competing definitions of the term amongst different groups of scholars tell us, and what challenges do these competing definitions and understandings of exactly what ‘bots’ are pose for policy and research? My colleague Doug Guilbeault and I have a paper (pre-print coming soon) in which we examine this very question.
I’m interested in other critical questions: how do these bots actually work, do they really influence opinion formation, as is commonly claimed, and how do we best study them? These questions seem obvious, but far more work is needed on the basics (studying disinformation empirically is really, really difficult)! I wrote my Master’s thesis at the OII to address some of these questions.
I have conducted an in-depth study of political automation in Poland, published as a working paper at the OII. I am currently working on a comprehensive overview of bot detection methods (pre-print coming soon).