Shaokang's Blog

Introduction

TikTok is a popular social media platform for creating and sharing short videos. It has become especially popular among young people and is now a significant source of news and information. However, there is limited research on political content on TikTok and on how the platform’s algorithms may influence its spread. This project analyzes political content on TikTok and investigates whether the platform contains politically biased content. It also explores how the TikTok community reacts to content of different political leanings and whether videos of different political leanings differ in lexical diversity. The analysis uses a public multimodal dataset of TikTok videos related to the 2024 US presidential election, which contains video IDs, author IDs, author follower counts, heart counts, comments, and transcripts. We apply sentiment analysis, emotion analysis, and political bias analysis models to the data, and lexical diversity analysis to the video transcripts. The results offer insight into political content on TikTok and how the platform’s algorithms may influence its spread.
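One of the transcript measures, lexical diversity, can be illustrated with the simple type-token ratio (unique words divided by total words). The sketch below is a minimal illustration of that idea, not the project’s actual pipeline; the naive tokenizer and the sample transcript are assumptions.

```python
import re

def type_token_ratio(transcript: str) -> float:
    """Lexical diversity as unique words / total words (type-token ratio)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())  # naive word tokenizer
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Hypothetical transcript, for illustration only:
print(type_token_ratio("Every vote counts, so get out and vote."))  # 0.875
```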

Note: This is a group project, and I am only responsible for a portion of the work. Citations have been removed to adjust for website rendering. The original PDF is at the end of this page.

ChatGPT’s potential for human-like communication is noteworthy, but the mental health implications of giving it a human-like identity remain understudied. This research introduces AvatarGPT, a human-like avatar added to the ChatGPT interface, to explore these effects. A between-subject study (N=10) was conducted to investigate users’ responses and to evaluate and compare the effectiveness of AvatarGPT and ChatGPT before and after a conversation, using the UCLA Loneliness Scale. Results show that neither using the avatar (p≈0.39) nor conversing without it (p≈0.11) significantly improved loneliness scores. Additionally, using the avatar neither enhanced willingness to speak, as measured by word counts, nor significantly reduced loneliness scores in percentage terms (p≈0.59). We believe that with a larger participant pool and a longer experimental period, we would observe a more significant emotional change.
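For context on how such comparisons are typically computed (the paper’s exact statistical test is not restated here), the sketch below runs an independent-samples t-test on pre-to-post loneliness score changes using SciPy. All numbers are hypothetical placeholders, not the study’s data.

```python
from scipy import stats

# Hypothetical pre-to-post UCLA Loneliness Scale changes (post - pre);
# illustrative values only, not the actual study data.
avatar_group = [-2, 0, -1, -3, 1]    # conversed with AvatarGPT
control_group = [-1, -2, 0, -1, -2]  # conversed with plain ChatGPT

t_stat, p_value = stats.ttest_ind(avatar_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```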

The original paper is under review; citation:

Shaokang Jiang and Michael Coblenz. An Analysis of the Costs and Benefits of Autocomplete in IDEs. In review, ACM International Symposium on Foundations of Software Engineering (FSE 2024).

Key points:

  • Worked with Michael Coblenz on the usability analysis of autocomplete.
  • Designed and executed an experiment with 32 participants, using an eye tracker, to evaluate the costs and benefits of IDE-based autocomplete features for programmers using an unfamiliar API; analyzed the data using JMP; and wrote a paper on the study.
  • Found that participants who used autocomplete learned more about the API while spending less time reading the documentation; also found that autocomplete did not significantly reduce the number of keystrokes required to finish tasks.

This page contains JavaScript implementations of some common algorithms for handling raw eye-tracking data, such as generating fixations and correcting vertical drift, which supported the tracking portion of the autocomplete research. Examples include IDT, attach, and K-cluster.js. It also includes some data visualization scripts we used in internal testing. All scripts are designed to run under Node.js.
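To give a flavor of the fixation-generation step, below is a minimal Python sketch of the classic I-DT (dispersion-threshold identification) algorithm; the repository implementations are JavaScript, and the thresholds here are illustrative defaults, not the values used in the study.

```python
def idt_fixations(points, dispersion_px=35.0, min_duration_ms=100.0):
    """I-DT: group gaze samples into fixations whenever a window of samples
    stays within a dispersion threshold for a minimum duration.
    `points` is a list of (t_ms, x, y) gaze samples sorted by time."""
    fixations = []
    i = 0
    while i < len(points):
        j = i
        # Grow the window while its dispersion stays under the threshold.
        while j + 1 < len(points):
            window = points[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                break
            j += 1
        if points[j][0] - points[i][0] >= min_duration_ms:
            window = points[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((points[i][0], points[j][0], cx, cy))
            i = j + 1  # continue after the fixation
        else:
            i += 1     # slide the window start forward
    return fixations
```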

Note: This is a group project for a course. I was responsible for coding the actual model and optimizing its structure, making the model run over 200 times faster. I was also responsible for scraping data from the official site and setting up a GitHub Action to regularly update the data source using Puppeteer, which simulates real user behavior and is therefore difficult to block. You can find the relevant repository at ShaokangJiang/CSE-203B-crapping (github.com). My code can be seen at the end of this post.

New computer components are introduced to the retail market every year, and the abundance of choices can make it difficult for customers to pick the optimal combination of hardware components when building a new computer. This project takes the main components of a computer hardware build as input and provides several of the highest-performing options that fit a consumer’s budget.
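At its core this is a constrained selection problem: pick one part per category to maximize a performance score subject to a budget. The brute-force sketch below illustrates the problem shape; the part names, prices, and scores are hypothetical, and a real build selector would need a smarter formulation than enumeration as the catalog grows.

```python
from itertools import product

# Hypothetical candidates: (name, price_usd, performance_score) per category.
PARTS = {
    "cpu": [("cpu_a", 300, 90), ("cpu_b", 200, 70)],
    "gpu": [("gpu_a", 500, 95), ("gpu_b", 350, 80)],
    "ram": [("ram_a", 120, 60), ("ram_b", 80, 50)],
}

def best_builds(budget: int, top_k: int = 3):
    """Return the top_k builds (one part per category) that fit the budget."""
    builds = []
    for combo in product(*PARTS.values()):
        price = sum(part[1] for part in combo)
        score = sum(part[2] for part in combo)
        if price <= budget:
            builds.append((score, price, [part[0] for part in combo]))
    return sorted(builds, reverse=True)[:top_k]

for score, price, names in best_builds(budget=800):
    print(score, price, names)
```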

To help people verify the correctness of their code in two of my research projects, I built a VSCode extension. Here is the script for adding a simple code lens to VSCode, making it clickable so participants can check the correctness of their code; the result is shown in the terminal area. Participants could also click a fixed button or run a command to test correctness with this extension.

Are Python Type Hints helpful in competitive programming?

Abstract

Type hinting, a way to statically indicate the type of a value in code, was introduced in Python 3.5. Before then, Python had no standard way to annotate types, so variable types could only be determined at runtime. Competitive programmers (CP) are a neglected group of people who need to solve algorithmic challenges as fast as possible. In this project, we assess whether using type hints is helpful for competitive programmers. We conducted a pilot study with five programmers from different backgrounds, one of whom provided only interview data. We found that type hints may not be useful for CP in Python, and that autocomplete suggestions are often not good enough to be accepted. The survey indicates that people also dislike type hints in Python for CP tasks.

Keywords: type hints, competitive programming, Python, autocomplete.
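For readers unfamiliar with the feature, here is a small illustration of the kind of annotations the study concerns; the function is an invented example, not one of the study tasks.

```python
from typing import List

# Without type hints: the parameter and return types are only known at runtime.
def mex_untyped(nums):
    ...

# With type hints (Python 3.5+): annotations let editors and type checkers
# reason about types statically and drive autocomplete suggestions.
def mex(nums: List[int]) -> int:
    """Return the smallest non-negative integer not present in nums."""
    seen = set(nums)
    i = 0
    while i in seen:
        i += 1
    return i
```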