Bot Comments Generated by ChatGPT
We have identified a substantial number of bot accounts commenting on and sharing posts from the FTNN News fan page, and we have further determined that some of these comments were generated by ChatGPT.
Compiling suspicious accounts
First, we collected the 50 posts with the highest comment volume on the FTNN News fan page over a three-month period, then extracted all of their comments for analysis.
We gathered every comment under these posts and plotted the distribution of comments per account. Of the 517 distinct accounts (excluding the fan page's own account), 139 contributed more than five comments each; these accounts were selected for further analysis.
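The filtering step described above can be sketched in a few lines of Python. The data structure and account IDs below are hypothetical stand-ins for the scraped comment records:

```python
from collections import Counter

# Hypothetical comment records scraped from the top 50 posts:
# each entry is (account_id, comment_text). Real data would have
# 517 distinct accounts after excluding the fan page itself.
comments = [("acct_001", f"comment {i}") for i in range(6)]
comments.append(("acct_002", "single comment"))

# Count comments per account.
counts = Counter(account for account, _ in comments)

# Keep only accounts that left more than five comments.
suspicious = {acct for acct, n in counts.items() if n > 5}
print(suspicious)
```

On the real data set, this yields the 139 accounts that were carried forward for manual review.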
Signals of suspicious accounts: mass creation, falsified avatars
Because Facebook does not display account creation dates, we analyzed each account's avatar-change history and timeline instead. Roughly half of the accounts updated their profile images in near synchrony. Of the 139 accounts, 67 changed their avatars on shared dates: 18 accounts on December 19, 2022; 14 on April 13, 2024; 13 on March 25, 2024; 11 on December 26, 2022; and 11 on March 23, 2023.
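Detecting these synchronized avatar changes amounts to grouping accounts by the date of their last avatar update. A minimal sketch, assuming a hypothetical mapping of account to observed change date:

```python
from collections import Counter

# Hypothetical mapping of account -> date the avatar was last changed,
# as observed on each profile's timeline.
avatar_changes = {
    "acct_001": "2022-12-19",
    "acct_002": "2022-12-19",
    "acct_003": "2024-04-13",
    "acct_004": "2022-12-19",
}

# Count how many accounts changed avatars on each date; dates shared by
# many accounts are a signal of mass account creation or refurbishment.
by_date = Counter(avatar_changes.values())
synchronized = {date: n for date, n in by_date.items() if n >= 2}
print(synchronized)
```

Applied to the 139 accounts, this grouping surfaces the five dates listed above.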
Some of the avatars were taken from existing images online, while others were generated by AI tools such as Stable Diffusion.
Coordinated Inauthentic Behavior of Bot Accounts
After reviewing the posts disseminated by these bot accounts, we found that the content they share is strikingly similar: they predominantly share posts from FTNN News and from fan pages in the automobile sales sector. The top five source fan pages are FTNN News (FTNN 新聞網) (33.54%), YIHAN International Automobiles (易漢國際汽車) (19.09%), Bruce Yeh (18.33%), Jiuye International Auto Limited Company (九葉國際車業有限公司) (18.05%), and Business Times (商務時報) (1.33%).
We mapped the relationship between the bot accounts and the content they disseminate using Gephi. The visualization shows that the accounts fall into three clusters, each disseminating a distinct set of posts.
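Gephi accepts a simple edge-list CSV, so the account-to-post graph can be prepared as a bipartite edge list. The records below are hypothetical examples of the sharing data:

```python
import csv
import io

# Hypothetical sharing records: (bot_account, shared_post_id).
shares = [
    ("acct_001", "post_A"),
    ("acct_002", "post_A"),
    ("acct_003", "post_B"),
]

# Gephi imports an edge list with Source/Target columns; accounts and
# posts become the two node types of a bipartite graph, and clusters of
# accounts sharing the same posts emerge in the layout.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target"])
writer.writerows(shares)
print(buf.getvalue())
```

Running a community-detection layout (e.g. modularity plus ForceAtlas2) on this graph is what separates the accounts into the three clusters described above.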
We then examined the content-sharing behavior of these accounts and identified bot accounts that shared identical content at exactly the same time, which strongly suggests Coordinated Inauthentic Behavior.
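Finding accounts that post the same content at the same moment reduces to grouping the share log by (post, timestamp). A sketch with hypothetical records:

```python
from collections import defaultdict

# Hypothetical share log: (account, post_id, timestamp).
shares = [
    ("acct_001", "post_A", "2024-03-25 10:00"),
    ("acct_002", "post_A", "2024-03-25 10:00"),
    ("acct_003", "post_B", "2024-03-26 09:30"),
]

# Group shares by (post, timestamp); any group containing more than one
# account means identical content was posted simultaneously, a hallmark
# of coordinated inauthentic behavior.
groups = defaultdict(list)
for account, post, ts in shares:
    groups[(post, ts)].append(account)

coordinated = {key: accounts for key, accounts in groups.items() if len(accounts) > 1}
print(coordinated)
```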
Comments presumed to be generated by ChatGPT
Upon examining the comments under posts on the FTNN News fan page, we found that many do not read as if written by ordinary users.
We prompted ChatGPT to generate responses to the same posts and found the results remarkably similar to the unnatural comments, leading us to hypothesize that some of the comments were generated by ChatGPT.
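One simple way to quantify such a comparison is a text-similarity ratio between an observed comment and a freshly prompted ChatGPT response. The two example strings below are hypothetical illustrations, not actual comments from the case:

```python
from difflib import SequenceMatcher

# Hypothetical pair: an observed comment and a response we prompted
# an LLM to generate for the same post.
observed = "This article offers a truly insightful perspective on the topic."
generated = "This article offers a very insightful perspective on this topic."

# Character-level similarity ratio in [0, 1]; unusually high similarity
# across many comment pairs supports the LLM-origin hypothesis.
ratio = SequenceMatcher(None, observed, generated).ratio()
print(round(ratio, 2))
```

A high ratio on a single pair proves little; the signal comes from the pattern repeating across many comments and accounts.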
Furthermore, analysis of the comments posted by two of the bot accounts yielded evidence suggesting that some of them were generated via ChatGPT.
DISARM analysis
We applied the DISARM Framework to examine the Tactics, Techniques, and Procedures (TTPs) relevant to this case. The bot accounts first built spurious profiles (T0090: Create Inauthentic Accounts), some with AI-generated profile images (T0086.002: Develop AI-Generated Images (Deepfakes)); they then shared posts from the fan page (T0115: Post Content) or commented under posts (T0116: Comment or Reply on Content). Notably, a number of these comments are AI-generated (T0085.001: Develop AI-Generated Text). The overarching intent of this sequence of actions is to manipulate the platform's algorithm (T0121: Manipulate Platform Algorithm), thereby increasing the fan page's exposure and traffic.
Conclusion
Our evaluation confirms the existence of bot accounts that use ChatGPT to generate comments. At present, such generated comments are relatively easy to identify because of their stylistic idiosyncrasies. However, as LLMs continue to develop, their output will likely become more natural, making detection harder and raising the risk of misleading the public.
Thank you for reading this investigative report. If you have any comments on this report or wish to discuss it further, please feel free to contact me on Facebook or LinkedIn. For more of my investigative reports, you can visit the Information Manipulation Forensics Hub by billy3321.