g4_01140.mp4

The video file g4_01140.mp4 is a sample from the GTEA Gaze+ (Georgia Tech Egocentric Activities) dataset. This dataset is widely used in computer vision research for egocentric (first-person) action recognition and hand-object interaction analysis.

The primary research paper associated with this dataset is:
Title: "Reaping the Benefits of Global Context for Action Recognition"
Authors: Yin Li, Zhefan Ye, and James M. Rehg
Venue: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015

Key Dataset Details
Content: Activities of daily living (cooking) recorded with a head-mounted camera and an eye-tracker.
Usage: It is often used to benchmark models that simultaneously predict where a person is looking and what action they are performing.
Naming: The naming convention g4_01140.mp4 typically identifies the subject or session (e.g., "g4" for group/subject 4) and the specific activity or sequence number.
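Assuming the "g<subject>_<sequence>" convention described above, a minimal sketch of splitting such a filename into its parts (the field names "subject" and "sequence" are illustrative labels of my own, not official dataset terminology):

```python
import re

def parse_gtea_filename(filename: str) -> dict:
    """Split a GTEA Gaze+ style filename like 'g4_01140.mp4' into parts.

    Assumes the 'g<subject>_<sequence>.mp4' pattern; the returned field
    names are illustrative, not taken from the dataset documentation.
    """
    match = re.fullmatch(r"g(\d+)_(\d+)\.mp4", filename.lower())
    if match is None:
        raise ValueError(f"unrecognized filename: {filename!r}")
    # Keep the sequence as a string to preserve leading zeros.
    return {"subject": int(match.group(1)), "sequence": match.group(2)}

print(parse_gtea_filename("g4_01140.mp4"))
# {'subject': 4, 'sequence': '01140'}
```

Such a helper is handy when grouping clips by subject, e.g. to build leave-one-subject-out evaluation splits.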