This content is auto-generated by Google Gemini.
The exponential growth in the creation and consumption of video content has generated a significant demand for efficient and sophisticated video editing tools. Traditional video editing processes often require considerable time, effort, and expertise.
In response to these demands, artificial intelligence (AI) has emerged as a transformative force, offering the potential to automate and significantly enhance various aspects of video editing, particularly the crucial tasks of cutting and trimming. These processes, which involve selecting and sequencing video segments to create a cohesive and engaging narrative, are fundamental to effective video production. AI-based techniques promise to streamline these workflows, reduce manual effort, and even introduce new levels of creative possibility.
This report provides a comprehensive review of important research papers in the field of AI-based video cutting and trimming, focusing specifically on contributions from arXiv and top-tier computer science conferences such as the Conference on Computer Vision and Pattern Recognition (CVPR) and Neural Information Processing Systems (NeurIPS).
This review aims to provide a detailed analysis of the current state-of-the-art in this domain, highlighting key advancements, identifying existing challenges, and exploring potential future research directions in the application of AI to video editing. The scope of this report encompasses a range of AI methodologies applied to video cutting and trimming, from fully automated systems to collaborative tools designed to augment human creativity.
The paradigm of human-AI collaboration in video editing represents a significant shift from purely automated approaches. In this model, AI acts as an intelligent assistant, offering suggestions and generating alternatives while retaining the human creator's control over the final output [1]. A key challenge arises when generative AI models, capable of rapidly producing multiple variations of an edit, present creators with numerous options. While the ability to quickly generate diverse content is a strength, the sheer volume of alternatives can overwhelm creators, making it difficult to effectively compare and select the most suitable option [1]. This challenge highlights a bottleneck in the creative process, where the abundance of AI-generated choices can hinder rather than enhance productivity and satisfaction.
To address this core problem of comparing multiple AI recommendations, researchers have developed tools like VideoDiff [1]. This AI video editing tool is specifically designed for editing with alternatives, aiming to simplify the process of generating and reviewing multiple AI recommendations for common editing tasks such as creating a rough cut, inserting B-rolls, and adding text effects [1].
The novel approach of VideoDiff lies in its ability to support easy comparison by aligning videos and highlighting differences through intuitive interfaces like timelines, transcripts, and video previews [1]. When a user uploads a video, VideoDiff initiates the process by generating ten rough cut recommendations [1]. The interface then allows users to visualize these variations, skim through the differences using synchronized timeline and transcript views, and toggle between showing only the edited content or the entire source content for additional context [1].
Furthermore, users can efficiently organize these variations by sorting, re-ordering, pinning, and archiving them, providing a flexible workspace for managing multiple editing options [1]. The tool also empowers creators to further refine or recombine existing variations, or even regenerate new ones using text prompts, fostering an iterative editing workflow [1].
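The transcript-level comparison that VideoDiff exposes can be approximated with standard sequence alignment. The sketch below is a toy illustration, not VideoDiff's implementation: it aligns a source transcript against a rough cut's transcript and labels each chunk as kept, cut, or inserted, which is the raw material a diff-style timeline view needs.

```python
import difflib

def diff_transcripts(source_words, variant_words):
    """Align two transcripts and mark which source words a rough cut keeps.

    Returns a list of (tag, words) chunks, where tag is 'equal' (kept),
    'delete' (cut from the source), or 'insert' (added in the variant).
    """
    matcher = difflib.SequenceMatcher(a=source_words, b=variant_words)
    chunks = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            chunks.append(("equal", source_words[i1:i2]))
        elif tag == "delete":
            chunks.append(("delete", source_words[i1:i2]))
        elif tag == "insert":
            chunks.append(("insert", variant_words[j1:j2]))
        else:  # 'replace': treat as a deletion plus an insertion
            chunks.append(("delete", source_words[i1:i2]))
            chunks.append(("insert", variant_words[j1:j2]))
    return chunks

source = "we shot this on a rainy tuesday in the old harbor".split()
cut_a = "we shot this in the old harbor".split()
for tag, words in diff_transcripts(source, cut_a):
    print(tag, " ".join(words))
```

The same alignment applied to shot identifiers rather than words would yield a timeline-level diff between two cut variations.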
A user study involving twelve video creators evaluated the effectiveness of VideoDiff by comparing it with a baseline interface that encompassed features of existing AI video editing tools [1]. The key experimental results indicated that participants rated VideoDiff as significantly more useful for quickly understanding the differences between multiple video variations, which in turn helped them to create and consider a more diverse range of editing options [1]. Moreover, participants expressed higher satisfaction with the final videos they created using VideoDiff [1]. The development of tools like VideoDiff signifies a trend towards empowering users with greater control over AI-generated content. Rather than aiming for fully automated editing, the focus is on creating systems where AI acts as a powerful assistant, augmenting human creativity by providing a range of options and tools for informed decision-making [1].
This approach acknowledges that while AI can excel at generating variations, the ultimate artistic and narrative choices often reside with the human creator. Human-AI co-creation tools such as VideoDiff hold the potential to democratize video editing, making it more accessible and efficient for a broader spectrum of users, regardless of their technical expertise [1].
As the volume of user-generated video content continues to increase, viewers face the growing challenge of sifting through vast amounts of footage to find valuable insights [6]. This trend underscores the critical need for algorithms capable of efficiently extracting key information from videos.
While significant advancements have been made in related areas such as highlight detection, moment retrieval, and video summarization, these existing approaches primarily focus on selecting specific time intervals, often overlooking the crucial aspects of relevance between segments and the potential for arranging these segments into a coherent narrative [6].
To address this limitation, the task of video trimming (VT) has emerged as a novel area of research, focusing on the comprehensive process of detecting wasted footage, selecting valuable segments, and composing them into a final video that tells a coherent story [6].
A pioneering approach to tackle this task is Agent-based Video Trimming (AVT). AVT [6] introduces a novel three-phase structure designed to mimic the intelligent decision-making of a human editor. The first phase, Video Structuring, employs a Video Captioning Agent, likely leveraging multimodal large language models (MLLMs), to convert the input video into smaller, manageable units and generate structured textual descriptions for each segment [7].
These descriptions enable detailed semantic analysis of the video content, going beyond simple visual features [7]. The second phase, Clip Filtering, utilizes a Filtering Module to dynamically assess the quality and relevance of each clip based on the structured information generated in the previous phase [7]. This module can identify and discard low-quality footage or irrelevant content, ensuring that only valuable segments proceed to the next stage [7].
Finally, the Story Composition phase employs a Video Arrangement Agent to select and compile the filtered clips into a coherent final narrative [7]. This agent aims to ensure a logical flow and engaging storyline in the trimmed video, addressing the limitations of previous methods that often overlooked the relationships between segments [7]. To evaluate the effectiveness of the trimmed videos, AVT also incorporates a Video Evaluation Agent to autonomously assess their quality based on criteria aligned with user preferences [7]. Experimental results have shown that AVT receives more favorable evaluations in user studies and demonstrates superior performance in terms of mAP and precision on highlight detection tasks compared to existing methods [6].
The modular design of AVT, with its specialized agents for captioning, filtering, arrangement, and evaluation, represents a significant trend towards creating more sophisticated and interpretable AI systems for complex video processing tasks [7]. By breaking down the video trimming problem into these distinct stages, each handled by an agent optimized for its specific role, AVT offers a more targeted and potentially more effective solution for efficiently condensing long-form video content while maintaining narrative coherence [7].
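The three-phase structure above can be sketched as a simple pipeline. Everything in this example is illustrative: the captions and quality scores are supplied by hand, whereas AVT produces them with MLLM-based captioning, filtering, and arrangement agents.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start: float     # seconds
    end: float
    caption: str     # structured description from the captioning agent
    quality: float   # 0..1 relevance/quality score from the filtering module

def structure_video(segments):
    """Phase 1 (Video Structuring): wrap raw segments as captioned clips.
    In AVT an MLLM captioning agent generates the descriptions."""
    return [Clip(s, e, cap, q) for (s, e, cap, q) in segments]

def filter_clips(clips, min_quality=0.5):
    """Phase 2 (Clip Filtering): drop low-quality or irrelevant footage."""
    return [c for c in clips if c.quality >= min_quality]

def compose_story(clips):
    """Phase 3 (Story Composition): order the surviving clips. A real
    arrangement agent reasons over captions; this stand-in just keeps
    chronological order."""
    return sorted(clips, key=lambda c: c.start)

segments = [
    (0.0, 4.0, "shaky ground before the ceremony", 0.2),
    (4.0, 9.0, "bride walks down the aisle", 0.9),
    (9.0, 12.0, "lens cap accidentally left on", 0.1),
    (12.0, 18.0, "exchange of rings", 0.95),
]
final = compose_story(filter_clips(structure_video(segments)))
print([c.caption for c in final])
```

The value of the modular design is visible even at this scale: each phase can be swapped or tuned independently without touching the others.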
In the pursuit of more versatile and user-friendly video editing solutions, researchers have explored frameworks that offer flexibility across various editing tasks while maintaining efficiency. AnyV2V [10] stands out as a novel tuning-free paradigm designed to simplify a wide range of video-to-video editing tasks.
Recognizing the limitations of existing models that often require extensive fine-tuning and struggle with the desired level of quality and control, AnyV2V introduces a fundamentally different approach [10]. This framework elegantly decomposes the complex video editing process into two primary steps: first, employing an off-the-shelf image editing model to modify the initial frame of the video, and second, utilizing an existing image-to-video generation model to propagate these edits throughout the video sequence via temporal feature injection [10].
This innovative two-stage process allows AnyV2V to leverage the vast capabilities of readily available image editing tools to support an extensive array of video editing tasks that prior methods could not handle [11]. These tasks include not only traditional prompt-based editing but also more advanced techniques such as reference-based style transfer, subject-driven editing (where specific objects can be modified based on a reference image), and even identity manipulation within the video [11].
Notably, AnyV2V can also handle videos of any length, overcoming limitations often found in other video editing models [11]. Evaluation of AnyV2V has demonstrated its effectiveness, achieving CLIP scores comparable to other baseline methods in terms of text alignment and temporal consistency [10]. Furthermore, in human evaluations, AnyV2V significantly outperformed these baselines, showcasing notable improvements in maintaining visual consistency with the source video while producing high-quality edits across all tested editing tasks [10]. The success of AnyV2V in performing diverse video editing tasks without requiring any fine-tuning highlights the significant potential of leveraging the power of pre-trained image and video models for efficient and versatile video manipulation [11].
This approach not only simplifies the editing workflow but also makes advanced editing techniques more accessible to a wider range of users by eliminating the need for specialized AI training or extensive computational resources [11].
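The two-step decomposition reduces to a short control flow. In this sketch, `edit_first_frame` and `i2v_generate` are placeholder callables standing in for an off-the-shelf image editor and an image-to-video model; the real method additionally injects temporal features from a DDIM inversion of the source video to preserve its motion, which is only hinted at here.

```python
def anyv2v_edit(video_frames, edit_first_frame, i2v_generate):
    """Sketch of AnyV2V's two-step decomposition (the arguments are
    stand-ins, not the paper's API):
      1. edit the first frame with any off-the-shelf image editor;
      2. regenerate the clip with an image-to-video model conditioned
         on the edited frame, carrying source features along so the
         original motion and layout can be preserved.
    """
    edited_frame0 = edit_first_frame(video_frames[0])
    # The real method injects inverted source features here; this toy
    # version only records how many frames to produce.
    source_features = {"n_frames": len(video_frames)}
    return i2v_generate(edited_frame0, source_features)

frames = ["frame0", "frame1", "frame2"]
edited = anyv2v_edit(
    frames,
    edit_first_frame=lambda f: f + "+style",
    i2v_generate=lambda f0, feats: [f0] * feats["n_frames"],
)
print(edited)
```

Because both stages are pluggable, any improvement in image editors or image-to-video models transfers to the video editor for free, which is the core appeal of the tuning-free design.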
To further enhance the accessibility and intuitiveness of video editing, researchers have explored the use of multimodal interfaces that combine different forms of user input. ExpressEdit [29] presents a system that enables video editing through the natural modalities of natural language (NL) text and sketching directly on the video frame.
This approach directly addresses the challenges often faced by novice video editors who may struggle to articulate and implement their editing ideas using traditional interfaces [29]. The core innovation of ExpressEdit lies in its ability to interpret and execute editing commands expressed through this multimodal input, leveraging the power of large language models (LLMs) and computer vision models [29].
To understand how multimodality could best support video editors, the researchers conducted a formative study with ten video editors of varying expertise levels, collecting 176 expressions of editing commands in the form of NL text, sketches, and media assets [30]. The findings revealed that editors felt comfortable using NL text to express their general editing requests and frequently used sketching on top of the video frame to indicate specific locations or regions of interest for the edits [30]. Based on these insights, ExpressEdit was designed to interpret temporal (when the edit should occur), spatial (where in the video frame), and operational (what specific edit action) references from the multimodal edit command [30].
The system's technical pipeline involves preprocessing the video to extract frame-level and clip-level metadata, parsing the NL input using GPT-4 to classify references, interpreting temporal references by matching them with metadata clips, interpreting spatial references using sketches and text, and finally, interpreting the edit operation and parameters to modify the video accordingly [31].
An observational study with ten novice video editors demonstrated that ExpressEdit significantly enhanced their ability to express and implement their editing ideas [29]. The system allowed participants to perform edits more efficiently and generate more ideas by providing AI-driven interpretations of their multimodal commands and supporting iterations on these commands [29]. The development of ExpressEdit highlights the potential of combining natural language with direct manipulation through sketching to create more intuitive and expressive video editing systems [32].
This multimodal approach can significantly lower the barrier to entry for video editing, making it more accessible to individuals who may not possess extensive technical skills but have creative visions they wish to realize [31].
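The temporal/spatial/operational decomposition can be illustrated with a toy parser. ExpressEdit performs this step with GPT-4; the regular expression and keyword lists below are hypothetical stand-ins used purely to show the shape of the output.

```python
import re

def parse_command(text):
    """Toy decomposition of an edit command into the three reference
    types ExpressEdit distinguishes: when (temporal), where (spatial),
    and what (operational)."""
    refs = {"temporal": None, "spatial": None, "operation": None}
    # temporal reference: a mm:ss timestamp, converted to seconds
    m = re.search(r"\b(\d{1,2}):(\d{2})\b", text)
    if m:
        refs["temporal"] = int(m.group(1)) * 60 + int(m.group(2))
    # spatial reference: a coarse frame region keyword
    for word in ("top", "bottom", "left", "right", "center"):
        if word in text.lower():
            refs["spatial"] = word
            break
    # operational reference: a known edit operation keyword
    for op in ("blur", "zoom", "add text", "cut", "crop"):
        if op in text.lower():
            refs["operation"] = op
            break
    return refs

print(parse_command("At 1:30, blur the license plate in the bottom left"))
```

In the full system, the spatial slot is typically filled by a sketch on the frame rather than a keyword, and the LLM resolves far more varied phrasings than this toy parser can.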
Recent advancements in generative AI, particularly diffusion models, have revolutionized the field of image and video generation, and their application to video editing has garnered significant attention [45]. Diffusion models, known for their ability to generate realistic and high-quality content through a process of iterative denoising, have shown immense promise in various video editing tasks, from style transfer to content manipulation [45].
CCEdit [16] is a versatile generative video editing framework built upon diffusion models, specifically designed to balance controllability and creativity while accommodating a wide range of editing requirements.
Addressing the challenges inherent in generative video editing, such as handling diverse editing requests and achieving fine-grained control, CCEdit introduces a novel trident network structure [48]. This network effectively decouples structure and appearance control, ensuring precise and creative editing capabilities [48]. The framework utilizes the foundational ControlNet architecture to maintain the structural integrity of the video during editing [48]. Simultaneously, it incorporates an additional appearance branch that enables users to exert fine-grained control over the edited keyframe, allowing for precise modifications to the video's visual style and content [48].
These two side branches, dedicated to structure and appearance manipulation, are seamlessly integrated into the main text-to-video generation branch through learnable temporal layers, ensuring temporal coherence across the generated video frames [48].
The versatility of CCEdit is further demonstrated through its support for various types of structural information (such as line drawings, boundaries, and depth maps) and its compatibility with personalized text-to-image models, offering a wide range of creative possibilities [48].
To facilitate a comprehensive evaluation of generative video editing methods, the researchers also introduced the BalanceCC benchmark dataset, comprising 100 diverse videos with four target prompts for each video [48].
Extensive user studies comparing CCEdit with eight state-of-the-art video editing methods demonstrated its substantial superiority across various editing tasks [48]. The architecture of CCEdit, which separates structure and appearance control, signifies a trend towards providing users with more direct and intuitive ways to guide the creative process of video editing using diffusion models [48].
FateZero [59] presents a novel approach to zero-shot text-based video editing, addressing the inherent challenge of maintaining temporal consistency when applying diffusion models to video. Unlike image editing, video editing requires ensuring that modifications are coherent across all frames, which is not naturally learned by text-to-image models [59]. FateZero tackles this problem by introducing several innovative techniques based on pre-trained diffusion models [59].
Instead of relying solely on the standard DDIM inversion technique, FateZero captures intermediate attention maps during the inversion process [59]. These attention maps effectively retain both structural and motion information from the original video [59]. Crucially, these captured attention maps are directly fused into the editing process rather than being generated during the denoising steps [59]. To further minimize semantic leakage from the source video, FateZero employs a technique of fusing self-attentions with a blending mask that is obtained from cross-attention features derived from the source prompt [59].
Additionally, the method implements a reform of the self-attention mechanism within the denoising UNet architecture by introducing spatial-temporal attention, which further ensures frame consistency in the edited video [59].
Despite its succinct design, FateZero is the first method to demonstrate the capability of zero-shot text-driven video style and local attribute editing directly from a trained text-to-image model [59]. It also exhibits a better zero-shot shape-aware editing ability when leveraging text-to-video models [59]. Extensive experiments have validated FateZero's superior temporal consistency and overall editing capabilities compared to previous works in the field [59]. The emphasis on manipulating attention maps during the diffusion process highlights the critical role of these mechanisms in achieving coherent and localized video edits without requiring per-prompt training or user-specific masks [59].
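The mask-guided attention fusion can be shown in miniature. The helper names and the 2x2 toy maps below are hypothetical; FateZero operates on real cross- and self-attention tensors inside the diffusion UNet, but the blending arithmetic follows the same pattern.

```python
def blend_mask_from_cross_attention(cross_attn, threshold=0.5):
    """Binarize cross-attention weights for the edited token into a
    blending mask, analogous to how FateZero derives its mask from
    cross-attention features of the source prompt."""
    return [[1.0 if w >= threshold else 0.0 for w in row] for row in cross_attn]

def fuse_self_attention(source_attn, edited_attn, mask):
    """Inside the mask, keep the attention of the editing path; outside
    it, keep the attention captured during inversion of the source
    video, so unedited regions stay faithful to the original."""
    return [
        [m * e + (1.0 - m) * s for m, e, s in zip(mrow, erow, srow)]
        for mrow, erow, srow in zip(mask, edited_attn, source_attn)
    ]

cross = [[0.9, 0.1], [0.2, 0.8]]   # toy cross-attention of the edited word
mask = blend_mask_from_cross_attention(cross)
fused = fuse_self_attention(
    source_attn=[[1.0, 1.0], [1.0, 1.0]],
    edited_attn=[[2.0, 2.0], [2.0, 2.0]],
    mask=mask,
)
print(fused)
```

The takeaway is that the mask localizes the edit: positions with strong cross-attention to the edited concept take the new attention values, everything else is pinned to the inverted source.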
VidEdit [16] introduces a zero-shot and spatially aware approach to text-driven video editing, addressing the limitations of existing diffusion-based methods that often struggle with precise control and maintaining temporal consistency, especially in longer videos [74].
VidEdit's novel method combines the strengths of atlas-based video representations with pre-trained text-to-image diffusion models to achieve training-free, efficient video editing that preserves temporal smoothness by construction [74]. Atlas-based methods excel at providing strong temporal consistency by decomposing a video into a set of layered neural atlases, which offer a unified representation of the video content over time [75]. To grant precise user control over the generated content, VidEdit leverages conditional information extracted from off-the-shelf panoptic segmenters and edge detectors, which guide the diffusion sampling process [74]. This ensures fine spatial control over targeted regions of interest while strictly preserving the structure of the original video in untargeted areas [74].
The framework is remarkably efficient, capable of editing a full video in approximately one minute [74]. Extensive quantitative and qualitative experiments on the DAVIS dataset have demonstrated that VidEdit outperforms state-of-the-art methods in terms of semantic faithfulness to the target text query, preservation of the original content, and temporal consistency [74]. The combination of atlas-based representations and conditional diffusion models in VidEdit showcases an effective strategy for achieving high-quality, controllable, and efficient zero-shot video editing [74].
Videoshop [87] introduces a training-free video editing algorithm specifically designed for localized semantic edits. Unlike existing methods that often rely on imprecise textual instructions, Videoshop empowers users to make direct and precise modifications to the first frame of a video using any image editing software, including professional tools like Photoshop and generative inpainting techniques [87]. The core innovation of Videoshop lies in its ability to automatically propagate these changes, maintaining semantic, spatial, and temporally consistent motion throughout the remaining frames of the video [87].
This is achieved through a process of image-based video editing that involves inverting the video's latent representation using noise extrapolation, followed by a generative process conditioned on the edited first frame [87]. The technique of noise extrapolation is based on the observation that latent trajectories during the denoising process exhibit a near-linear pattern, allowing for a more accurate estimation of denoised latents without accumulating errors [89]. Videoshop also employs a latent normalization technique to ensure consistency and quality in the generated video [89].
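The near-linearity observation behind noise extrapolation can be shown in one dimension per latent channel. This is a toy illustration of linear extrapolation, not Videoshop's implementation, which operates on full diffusion latents.

```python
def extrapolate_latent(z_curr, z_prev, steps):
    """Estimate a latent `steps` denoising steps ahead by extending the
    line through the last two observed latents. This exploits the
    observation that latent trajectories are near-linear, avoiding
    error accumulation from repeated inversion."""
    delta = [c - p for c, p in zip(z_curr, z_prev)]
    return [c + steps * d for c, d in zip(z_curr, delta)]

z_prev = [0.5, 1.5]   # toy 2-channel latent at step t-1
z_curr = [1.0, 2.0]   # toy 2-channel latent at step t
print(extrapolate_latent(z_curr, z_prev, steps=2))
```

If the trajectory really is close to linear, the extrapolated point lands near the true latent, which is why the estimate stays accurate without walking every intermediate step.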
Extensive experiments conducted on multiple editing benchmarks have demonstrated that Videoshop produces higher quality edits compared to several baseline methods across various evaluation metrics, including edit fidelity, source faithfulness, and temporal consistency [87].
This approach allows users to perform a wide spectrum of semantic edits with fine-grained control over locations and appearance, such as adding or removing objects, semantically changing objects, and inserting stock photos into videos [87]. By focusing the editing effort on a single keyframe and leveraging the properties of latent space inversion and noise extrapolation within diffusion models, Videoshop offers a powerful and flexible way to achieve localized and consistent video edits without the need for any training [87].
Beyond generative models, data-driven approaches that learn from existing video content have also proven effective in tackling the task of video cutting. Learning To Cut by Watching Movies [37] introduces a new task in computational video editing: ranking the plausibility of video cuts.
This research addresses the challenge of automating the selection of optimal cut points, a task that typically requires significant video editing expertise [100]. The core idea behind this approach is to leverage the vast amount of already edited video content available to learn the fine-grained audiovisual patterns that commonly trigger cuts in professional filmmaking [100].
To achieve this, the researchers collected a large-scale dataset comprising over 10,000 videos, from which they extracted more than 255,000 cuts [100]. They then devised a model that learns to discriminate between real cuts (those found in professionally edited videos) and artificial cuts (random alignments) through a process of contrastive learning [100]. This involved training an audio-visual model to rank pairs of consecutive shots based on how likely they are to form a natural and effective cut [100]. The study established a new task and a set of baseline methods to benchmark the performance of video cut generation models [100].
Experimental results demonstrated that the proposed model significantly outperformed these baselines in ranking the plausibility of video cuts, even in human studies conducted on a collection of unedited videos [100]. This data-driven approach highlights the potential of learning from existing edited content to automate the time-consuming and skill-intensive process of selecting appropriate cut points in video editing [100].
By learning the subtle audiovisual cues that characterize good cuts, AI models can assist video editors by suggesting optimal transitions, thereby speeding up the editing process and potentially enhancing the overall quality and flow of the final video [100].
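The contrastive ranking objective amounts to a softmax over one real cut and several artificial ones. The function below is a generic InfoNCE-style loss on scalar scores, sketched as an assumption about how such a ranking model could be trained rather than the paper's exact formulation.

```python
import math

def contrastive_cut_loss(real_score, fake_scores):
    """Negative log softmax probability of the real cut among artificial
    cuts. Minimizing this pushes the model to score professionally
    edited cuts above random re-alignments of the same shots."""
    logits = [real_score] + list(fake_scores)
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_denom - real_score

# The loss shrinks as the real cut's score rises above the fakes'.
print(contrastive_cut_loss(2.0, [0.0, -1.0]))
print(contrastive_cut_loss(5.0, [0.0, -1.0]))
```

At inference time, the same scoring network can simply rank candidate cut points by their scores, with no softmax needed.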
Sketches, as an intuitive and direct form of visual input, offer a promising avenue for enhancing user control in video manipulation tasks. SketchVideo [136] introduces a novel framework that aims to achieve sketch-based spatial and motion control for video generation and support fine-grained editing of both real and synthetic videos [137]. While text prompts are effective for conveying high-level semantics, they often lack the precision needed to control the detailed layout and geometry of a scene [137]. Similarly, using images as conditions can raise questions about how to generate these input images and achieve detailed editing [137].
To address these limitations, SketchVideo leverages sketches drawn on one or two keyframes as a more direct and intuitive way to guide the spatial arrangement and motion within a video [137]. Built upon the DiT video generation model, the framework proposes a memory-efficient control structure that utilizes sketch control blocks to predict residual features of skipped DiT blocks [137].
To propagate the temporally sparse sketch conditions across all frames of the video, SketchVideo employs an inter-frame attention mechanism that analyzes the relationship between the keyframes where sketches are drawn and each individual video frame [137]. For sketch-based video editing, the framework designs an additional video insertion module that ensures consistency between the newly edited content and the original video's spatial features and dynamic motion [137].
During inference, a latent fusion technique is used to accurately preserve the unedited regions of the video while seamlessly integrating the sketch-based modifications [137]. Extensive experiments have demonstrated that SketchVideo achieves superior performance in both controllable video generation and editing compared to existing approaches [137]. The use of sketches as a control mechanism offers a more intuitive and precise way for users to manipulate video content, bridging the gap between abstract textual descriptions and concrete visual outcomes [137].
This approach can open up new creative possibilities for users who prefer visual input methods and desire a high degree of control over the spatial and geometric aspects of their videos [137].
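The inter-frame propagation step can be caricatured as attention-weighted mixing: each video frame attends to the sketched keyframes and receives a blend of their features. The scalar features and similarity scores below are hypothetical; SketchVideo computes real attention inside DiT blocks, but the weighting pattern is the same.

```python
import math

def propagate_keyframe_features(keyframe_feats, frame_similarities):
    """For every frame, softmax its similarities to the sketched
    keyframes and mix the keyframe features with those weights, so
    sparse sketch conditions reach all frames."""
    mixed = []
    for sims in frame_similarities:  # one similarity row per video frame
        m = max(sims)
        weights = [math.exp(s - m) for s in sims]
        total = sum(weights)
        weights = [w / total for w in weights]
        mixed.append(sum(w * f for w, f in zip(weights, keyframe_feats)))
    return mixed

feats = [1.0, 3.0]        # toy features of two sketched keyframes
sims = [[5.0, 0.0],       # frame 0: close to keyframe 0
        [0.0, 5.0],       # frame 1: close to keyframe 1
        [1.0, 1.0]]       # frame 2: equally distant from both
mixed = propagate_keyframe_features(feats, sims)
print(mixed)
```

Frames near a sketched keyframe inherit mostly that keyframe's condition, while in-between frames receive a smooth interpolation, which is what keeps the propagated edits temporally coherent.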
Prompt learning has emerged as a powerful technique for adapting pre-trained language models to various downstream tasks with minimal training. Researchers have also explored its potential in enhancing video editing capabilities. Prompt Learning Based Adaptor for Enhanced Video Editing with Pretrained Text-to-Image Diffusion Models [155] addresses the challenge of achieving temporal consistency in video editing when using pre-trained text-to-image (T2I) diffusion models.
While T2I models have shown remarkable success in generating and editing still images, their frame-independent nature can lead to inconsistencies and flickering in the resulting videos when applied directly to video editing [157]. To overcome this limitation, this research proposes a lightweight adaptor that utilizes prompt learning to enhance video editing performance while requiring minimal training [157].
The core idea is to introduce shared prompt tokens that improve the overall editing capabilities of the model, along with unshared frame-specific tokens that impose consistency constraints across the different frames of the video [157]. This adaptor is designed to seamlessly integrate into existing video editing pipelines that are built upon pre-trained T2I diffusion models [157].
By learning optimal prompt representations, the adaptor guides the diffusion process to generate more coherent and temporally consistent videos without requiring extensive fine-tuning of the underlying T2I model [157].
Experimental results have demonstrated that this approach offers significant improvements in temporal coherence and overall video quality, benefiting a broad spectrum of downstream video editing algorithms [157]. The efficiency and effectiveness of prompt learning in this context highlight its potential as a valuable strategy for enhancing the performance of video editing models, making them more practical and accessible for real-world applications [157].
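The shared/unshared token split can be illustrated by how per-frame prompt sequences might be assembled. The token names below are placeholder strings; in the actual adaptor these are learned embeddings optimized during the lightweight training stage.

```python
def build_frame_prompts(base_tokens, shared, frame_specific):
    """Assemble one prompt token sequence per frame: learned shared
    tokens are prepended to every frame (improving overall editing
    quality), while one frame-specific token per frame imposes
    cross-frame consistency constraints."""
    return [
        shared + [frame_specific[i]] + base_tokens
        for i in range(len(frame_specific))
    ]

prompts = build_frame_prompts(
    base_tokens=["a", "watercolor", "city"],      # the user's text prompt
    shared=["<s0>", "<s1>"],                      # shared learned tokens
    frame_specific=["<f0>", "<f1>", "<f2>"],      # one token per frame
)
print(prompts[0])
```

Only the small sets of shared and frame-specific tokens are trained; the T2I backbone stays frozen, which is what keeps the adaptor lightweight.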
Automating the video clipping process for specific types of content, such as sports highlights, presents a unique set of challenges and opportunities. Research in this area often leverages domain-specific knowledge to achieve efficient and accurate results. AI-Based Video Clipping of Soccer Events [159] serves as a compelling case study in automating the generation of highlight clips for soccer games.
The manual process of annotating and clipping key events from soccer matches is typically tedious, time-consuming, and expensive, often rendering it infeasible for lower league games with limited resources [159]. To address this, researchers have explored automating the process of highlight generation by leveraging specific visual and temporal cues unique to soccer broadcasts [159].
One approach involves using logo transition detection, scene boundary detection, and optional scene removal to identify appropriate time intervals for extracting goal events and other highlights [159]. These systems often employ neural network architectures such as ResNet and VGG for logo detection and TransNet V2 for scene boundary detection, trained on datasets like ClipShots and IACC.3 [159].
Experimental results have demonstrated the high accuracy of these AI-based systems in detecting logo and scene transitions, and subjective assessments from viewers indicate a high level of acceptability for the automatically generated highlight clips [159]. The success of these domain-specific approaches highlights the effectiveness of leveraging unique features and patterns within a particular type of video content to automate the clipping process [159].
By focusing on cues like logo transitions that often signify the beginning or end of important segments, and scene changes that indicate shifts in the game's action, AI models can be trained to intelligently identify and extract key moments from soccer games, significantly reducing the need for manual intervention and enabling faster, more cost-effective highlight generation [159].
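The interval-extraction logic described above can be sketched from two detector outputs: logo-transition timestamps and scene-boundary timestamps. Both input lists and the pairing heuristic are illustrative assumptions, not the cited system's exact rules.

```python
import bisect

def snap(t, boundaries):
    """Snap a timestamp to the nearest detected scene boundary, so clip
    edges fall on natural shot changes."""
    i = bisect.bisect_left(boundaries, t)
    candidates = boundaries[max(0, i - 1):i + 1]
    return min(candidates, key=lambda b: abs(b - t))

def highlight_intervals(logo_transitions, scene_boundaries):
    """Pair consecutive logo transitions (which bracket replays in many
    broadcasts) and snap each end to a scene boundary -- a toy version
    of the logo + scene-change pipeline."""
    out = []
    for start, end in zip(logo_transitions[::2], logo_transitions[1::2]):
        out.append((snap(start, scene_boundaries),
                    snap(end, scene_boundaries)))
    return out

logo_times = [62.4, 95.1]                              # detector output (s)
scene_times = [0.0, 30.0, 61.9, 80.0, 94.8, 120.0]     # detector output (s)
print(highlight_intervals(logo_times, scene_times))
```

In a deployed system, the two input lists would come from the logo-detection network (e.g., a ResNet classifier) and the scene-boundary detector (e.g., TransNet V2) mentioned above.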
This report has provided a comprehensive review of significant research papers focusing on AI-based video cutting and trimming techniques. The analysis reveals a vibrant and rapidly evolving field characterized by diverse approaches and notable advancements across several key areas.
Human-AI collaborative editing tools like VideoDiff are empowering creators by facilitating the comparison and refinement of AI-generated suggestions. Intelligent automation of video trimming, exemplified by Agent-based Video Trimming, demonstrates the potential for efficiently condensing long-form content while maintaining narrative coherence through modular and specialized AI systems.
Flexible and efficient frameworks such as AnyV2V are making advanced video editing tasks accessible without the need for extensive training, leveraging the power of pre-trained image and video models. Intuitive multimodal interfaces, as seen in ExpressEdit, are lowering the barrier to entry for video editing by combining natural language and sketching for more expressive user interaction. Generative video editing with diffusion models, explored in CCEdit, FateZero, VidEdit, and Videoshop, showcases remarkable capabilities in style transfer, content manipulation, and localized editing, often achieving zero-shot performance with high temporal and spatial consistency.
Data-driven approaches like Learning To Cut by Watching Movies are successfully leveraging existing edited video content to predict optimal cut points.
Sketch-based frameworks like SketchVideo offer intuitive spatial and motion control for video generation and editing. Prompt learning strategies, as demonstrated by the Prompt Learning Based Adaptor, provide efficient ways to enhance video editing performance by learning optimal prompt representations for pre-trained models. Finally, domain-specific solutions, such as those for AI-Based Video Clipping of Soccer Events, highlight the effectiveness of leveraging unique features within particular video types to automate the clipping process.
Several common themes and trends emerge from the reviewed research. The increasing prominence of diffusion models across various video editing tasks underscores their effectiveness in generating high-quality and coherent results.
There is a growing focus on developing zero-shot and tuning-free methods, aiming to reduce the computational cost and increase the accessibility of advanced AI techniques for video editing. Furthermore, the importance of user control and intuitive interfaces is evident in the development of collaborative tools and multimodal interaction systems.
Despite these significant advancements, several challenges remain in the field. Improving temporal consistency in generative models, particularly for long videos and complex scenes, continues to be a key area of research. Enhancing the quality, controllability, and efficiency of generative models for diverse editing tasks also requires further investigation. The development of robust and comprehensive evaluation metrics that accurately reflect human perception of video editing quality is crucial for driving progress in the field.
Looking ahead, several potential avenues for future research can be identified. Exploring more advanced deep learning architectures and training techniques tailored for video editing models holds promise for further improving performance and efficiency. Developing more sophisticated methods for human-AI interaction and collaboration will be essential for creating truly empowering video editing tools.
Investigating the integration of multimodal information, such as audio and text, alongside visual data, could lead to more intelligent and context-aware video editing systems. The creation of larger and more diverse datasets specifically designed for training and evaluating video editing models will be crucial for advancing the field. Finally, addressing the ethical implications of AI-based video editing, such as the potential for misuse in creating deepfakes and misinformation, will be paramount for responsible innovation in this domain.
In conclusion, the field of AI in video cutting and trimming is poised for continued growth and innovation, with the potential to significantly transform how video content is created and consumed in the future.
| Task | Focus | Considers Segment Arrangement? |
|---|---|---|
| Highlight Detection | Content Extraction | No |
| Moment Retrieval | Query-Based Retrieval | No |
| Video Summarization | Keyframe/Segment Compilation | No |
| Video Trimming | Wasted Footage Detection, Valuable Segment Selection, Coherent Story Composition | Yes |
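The distinction in the last column can be made concrete: summarization-style tasks return an unordered selection of segments, while trimming must additionally arrange that selection into a watchable timeline. A hypothetical illustration, with segments as `(start, end)` pairs:

```python
def summarize(segments, scores, k):
    """Summarization-style selection: pick the k highest-scoring segments.

    The result is effectively a set; arrangement is not considered.
    """
    ranked = sorted(zip(scores, segments), reverse=True)
    return {seg for _, seg in ranked[:k]}

def compose_trim(selected):
    """Trimming additionally composes the selection into a coherent timeline:
    restore source order and merge segments that touch, so the output plays
    as a continuous story rather than a bag of clips."""
    timeline = []
    for start, end in sorted(selected):
        if timeline and start <= timeline[-1][1]:
            timeline[-1] = (timeline[-1][0], max(timeline[-1][1], end))
        else:
            timeline.append((start, end))
    return timeline
```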
| Method | Core Problem Addressed | Novel Approach | Key Strength | Limitations | Training Requirement |
|---|---|---|---|---|---|
| FateZero | Temporal Consistency in Diffusion-Based Editing | Attention Map Fusion | Temporal Consistency | Difficulty with New Concepts | None (Zero-Shot) |
| VidEdit | Temporal and Spatial Consistency in Long Videos | Atlas-Based Representation with Conditional Diffusion | Spatial Control and Efficiency | Costly Atlas Computation | None (Zero-Shot) |
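The "Attention Map Fusion" entry names FateZero's core trick: attention maps stored during inversion of the source video are blended back in during editing, so regions outside the edit keep their original structure. A toy, library-free sketch of the blending step (real FateZero fuses self/cross-attention tensors inside a diffusion U-Net; the nested-list "tensors" and binary mask here are hypothetical simplifications):

```python
def fuse_attention(src_attn, edit_attn, edit_mask):
    """Blend two attention maps of identical shape element-wise:
    keep the source attention where the edit mask is 0 (preserve structure),
    and use the editing attention where the mask is 1 (apply the edit)."""
    return [
        [e if m else s for s, e, m in zip(src_row, edit_row, mask_row)]
        for src_row, edit_row, mask_row in zip(src_attn, edit_attn, edit_mask)
    ]
```

The same masked-blend idea generalizes: localized editing methods differ mainly in how the mask and the two attention sources are obtained.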
- VideoDiff: Human-AI Video Co-Creation with Alternatives - arXiv, accessed May 7, 2025, https://arxiv.org/html/2502.10190v1
- VideoDiff: Human-AI Video Co-Creation with Alternatives | AI Research Paper Details, accessed May 7, 2025, https://www.aimodels.fyi/papers/arxiv/videodiff-human-ai-video-co-creation-alternatives
- Papers by Dingzeyu Li - AIModels.fyi, accessed May 7, 2025, https://www.aimodels.fyi/author-profile/dingzeyu-li-01e271aa-45aa-48f3-9dcf-03bd74e3d525
- [2502.10190] VideoDiff: Human-AI Video Co-Creation with Alternatives - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2502.10190
- [Literature Review] VideoDiff: Human-AI Video Co-Creation with Alternatives - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/en/review/videodiff-human-ai-video-co-creation-with-alternatives
- [2412.09513] Agent-based Video Trimming - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2412.09513
- Agent-based Video Trimming - arXiv, accessed May 7, 2025, https://arxiv.org/html/2412.09513v1
- [Paper Review] Agent-based Video Trimming - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/fr/review/agent-based-video-trimming
- [Literature Review] Agent-based Video Trimming - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/en/review/agent-based-video-trimming
- [2403.14468] AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2403.14468
- AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks - arXiv, accessed May 7, 2025, https://arxiv.org/html/2403.14468v3
- [PDF] AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks | Semantic Scholar, accessed May 7, 2025, https://www.semanticscholar.org/paper/8e489ab4bcc959a12f0bcacf2975cba9a6395561
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing, accessed May 7, 2025, https://www.semanticscholar.org/paper/UniEdit%3A-A-Unified-Tuning-Free-Framework-for-Video-Bai-He/66a05b7405aa3591a8fb74e5958c8d6dc994606e
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" (TMLR 2024) - GitHub, accessed May 7, 2025, https://github.com/TIGER-AI-Lab/AnyV2V
- [Literature Review] AnyV2V: A Tuning-Free Framework For Any, accessed May 7, 2025, https://www.themoonlight.io/en/review/anyv2v-a-tuning-free-framework-for-any-video-to-video-editing-tasks
- wenhao728/awesome-diffusion-v2v: Awesome diffusion Video-to-Video (V2V). A collection of paper on diffusion model-based video editing, aka. video-to-video (V2V) translation. And a video editing benchmark code. - GitHub, accessed May 7, 2025, https://github.com/wenhao728/awesome-diffusion-v2v
- Enhancing Temporal Consistency in Video Editing by Reconstructing Videos with 3D Gaussian Splatting | OpenReview, accessed May 7, 2025, https://openreview.net/forum?id=s1zfBJysbI
- AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks - Hugging Face, accessed May 7, 2025, https://huggingface.co/papers/2403.14468
- AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks - AIModels.fyi, accessed May 7, 2025, https://www.aimodels.fyi/papers/arxiv/anyv2v-tuning-free-framework-any-video-to
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing, accessed May 7, 2025, https://openreview.net/forum?id=Nifg2fQMGW
- AnyV2V: A Plug-and-Play Framework For Any Video-to-Video, accessed May 7, 2025, https://vladbogo.substack.com/p/anyv2v-a-plug-and-play-framework
- Open Source AI Video Framework Edit Video Styling With Consistency - AnyV2V - YouTube, accessed May 7, 2025, https://www.youtube.com/watch?v=G0N6ZBpr-4Y
- tiger-ai-lab/anyv2v – Run with an API on Replicate, accessed May 7, 2025, https://replicate.com/tiger-ai-lab/anyv2v
- AnyV2V - GitHub Pages, accessed May 7, 2025, https://tiger-ai-lab.github.io/AnyV2V/
- AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/paper/anyv2v-a-plug-and-play-framework-for-any
- Max Ku vinesmsuic - GitHub, accessed May 7, 2025, https://github.com/vinesmsuic
- Cong Wei lim142857 - GitHub, accessed May 7, 2025, https://github.com/lim142857
- prepare_video.py - TIGER-AI-Lab/AnyV2V - GitHub, accessed May 7, 2025, https://github.com/TIGER-AI-Lab/AnyV2V/blob/main/prepare_video.py
- [2403.17693] ExpressEdit: Video Editing with Natural Language and Sketching - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2403.17693
- ExpressEdit: Video Editing with Natural Language and Sketching - arXiv, accessed May 7, 2025, https://arxiv.org/html/2403.17693v1
- [Paper Review] ExpressEdit: Video Editing with Natural Language and Sketching - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/fr/review/expressedit-video-editing-with-natural-language-and-sketching
- ExpressEdit, accessed May 7, 2025, https://expressedit.kixlab.org/
- ExpressEdit: Video Editing with Natural Language and Sketching - CEUR-WS.org, accessed May 7, 2025, https://ceur-ws.org/Vol-3660/paper5.pdf
- ExpressEdit: Video Editing with Natural Language and Sketching | Request PDF, accessed May 7, 2025, https://www.researchgate.net/publication/379629199_ExpressEdit_Video_Editing_with_Natural_Language_and_Sketching
- ExpressEdit: Video Editing with Natural Language and Sketching | Request PDF, accessed May 7, 2025, https://www.researchgate.net/publication/379634381_ExpressEdit_Video_Editing_with_Natural_Language_and_Sketching
- HAI-GEN 2024 - ExpressEdit: Video Editing with Natural Language and Sketching, accessed May 7, 2025, https://www.youtube.com/watch?v=t16Se9rNLLQ
- wentianli/awesome-video-editing - GitHub, accessed May 7, 2025, https://github.com/wentianli/awesome-video-editing
- ExpressEdit: Video Editing with Natural Language and Sketching - KIXLAB, accessed May 7, 2025, https://kixlab.github.io/website-files/2024/iui2024-ExpressEdit-paper.pdf
- ExpressEdit: Video Editing with Natural Language and Sketching - HAI-GEN 2024, accessed May 7, 2025, https://hai-gen.github.io/2024/papers/7484-Tilekbay-Poster.pdf
- [PDF] Write-a-video - Semantic Scholar, accessed May 7, 2025, https://www.semanticscholar.org/paper/Write-a-video-Wang-Yang/84d6edefee7d491b890f5df133e5ec7006d7481f
- Saelyne Yang | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/search?q=author%3ASaelyne+Yang&order_by=date
- ExpressEdit: Video Editing with Natural Language and Sketching - HAI-GEN 2024, accessed May 7, 2025, https://hai-gen.github.io/2024/papers/7484-Tilekbay.pdf
- ExpressEdit: Video Editing with Natural Language and Sketching - OUCI, accessed May 7, 2025, https://ouci.dntb.gov.ua/en/works/4wBaqEZ7/
- Alex Suryapranata | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/search?q=author%3AAlex+Suryapranata&order_by=stars
- A Survey on Generative AI and LLM for Video Generation, Understanding, and Streaming, accessed May 7, 2025, https://arxiv.org/html/2404.16038v1
- [2407.07111] Diffusion Model-Based Video Editing: A Survey - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2407.07111
- MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers - CVPR 2024 Open Access Repository, accessed May 7, 2025, https://openaccess.thecvf.com/content/CVPR2024/html/Ma_MaskINT_Video_Editing_via_Interpolative_Non-autoregressive_Masked_Transformers_CVPR_2024_paper.html
- CCEdit: Creative and Controllable Video Editing via Diffusion Models - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/CVPR2024/papers/Feng_CCEdit_Creative_and_Controllable_Video_Editing_via_Diffusion_Models_CVPR_2024_paper.pdf
- CVPR Poster CCEdit: Creative and Controllable Video Editing via Diffusion Models, accessed May 7, 2025, https://cvpr.thecvf.com/virtual/2024/poster/31363
- CCEdit: Creative and Controllable Video Editing via Diffusion Models - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/384169308_CCEdit_Creative_and_Controllable_Video_Editing_via_Diffusion_Models
- CCEdit: Creative and Controllable Video Editing via Diffusion Models - arXiv, accessed May 7, 2025, https://arxiv.org/html/2309.16496v3
- CCEdit: Creative and Controllable Video Editing via Diffusion Models | Bytez, accessed May 7, 2025, https://bytez.com/docs/cvpr/31363/paper
- CCEdit: Creative and Controllable Video Editing via Diffusion Models (Supplementary Material) - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/CVPR2024/supplemental/Feng_CCEdit_Creative_and_CVPR_2024_supplemental.pdf
- CCEdit: Creative and Controllable Video Editing via Diffusion Models, accessed May 7, 2025, https://ruoyufeng.github.io/CCEdit.github.io/
- [2309.16496] CCEdit: Creative and Controllable Video Editing via Diffusion Models - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2309.16496
- Paper page - CCEdit: Creative and Controllable Video Editing via Diffusion Models, accessed May 7, 2025, https://huggingface.co/papers/2309.16496
- CCEdit: Creative and Controllable Video Editing via Diffusion Models - GitHub, accessed May 7, 2025, https://github.com/RuoyuFeng/CCEdit
- README.md - AlonzoLeeeooo/awesome-video-generation - GitHub, accessed May 7, 2025, https://github.com/AlonzoLeeeooo/awesome-video-generation/blob/main/README.md
- [2303.09535] FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2303.09535
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" - GitHub, accessed May 7, 2025, https://github.com/ChenyangQiQi/FateZero
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/377432906_FateZero_Fusing_Attentions_for_Zero-shot_Text-based_Video_Editing
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - Qifeng Chen, accessed May 7, 2025, https://cqf.io/papers/FateZero_ICCV2023.pdf
- [R] FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - Reddit, accessed May 7, 2025, https://www.reddit.com/r/MachineLearning/comments/11uzioo/r_fatezero_fusing_attentions_for_zeroshot/
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - ICCV 2023 Open Access Repository, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2023/html/QI_FateZero_Fusing_Attentions_for_Zero-shot_Text-based_Video_Editing_ICCV_2023_paper.html
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/369300718_FateZero_Fusing_Attentions_for_Zero-shot_Text-based_Video_Editing
- Video Editing | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/task/video-editing?page=5&q=
- InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot Text-Based Video Editing - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2023W/CVEU/papers/Khandelwal_InFusion_Inject_and_Attention_Fusion_for_Multi_Concept_Zero-Shot_Text-Based_ICCVW_2023_paper.pdf
- Add ICCV 2023 paper FateZero: Fusing Attentions for Zero-shot Text-based Video Editing #56 - GitHub, accessed May 7, 2025, amusi/ICCV2025-Papers-with-Code#56
- Supplementary Material - RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models, accessed May 7, 2025, https://rave-video.github.io/supp/supp.html
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing Supplemental Material - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2023/supplemental/QI_FateZero_Fusing_Attentions_ICCV_2023_supplemental.pdf
- FateZero/colab_fatezero.ipynb at main - GitHub, accessed May 7, 2025, https://github.com/ChenyangQiQi/FateZero/blob/main/colab_fatezero.ipynb
- Chenyang QI ChenyangQiQi - GitHub, accessed May 7, 2025, https://github.com/ChenyangQiQi
- FateZero/README.md · chenyangqi/FateZero at ... - Hugging Face, accessed May 7, 2025, https://huggingface.co/spaces/chenyangqi/FateZero/blame/17a220f78da8c2142caa68802a562a2680733419/FateZero/README.md
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing - NeurIPS 2023, accessed May 7, 2025, https://neurips.cc/virtual/2023/74845
- VidEdit: Zero-shot and Spatially Aware Text-driven Video Editing - arXiv, accessed May 7, 2025, https://arxiv.org/html/2306.08707v3
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing | OpenReview, accessed May 7, 2025, https://openreview.net/forum?id=i02A009I5a
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing, accessed May 7, 2025, https://videdit.github.io/
- (PDF) VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/371605763_VidEdit_Zero-Shot_and_Spatially_Aware_Text-Driven_Video_Editing
- [2306.08707] VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2306.08707
- NeurIPS 2023 Workshop on Diffusion Models, accessed May 7, 2025, https://neurips.cc/virtual/2023/workshop/66539
- ChenHsing/Awesome-Video-Diffusion-Models - GitHub, accessed May 7, 2025, https://github.com/ChenHsing/Awesome-Video-Diffusion-Models
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing - arXiv, accessed May 7, 2025, https://arxiv.org/html/2306.08707v4
- baaivision/vid2vid-zero: Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models - GitHub, accessed May 7, 2025, https://github.com/baaivision/vid2vid-zero
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/paper/videdit-zero-shot-and-spatially-aware-text
- Paper page - VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing, accessed May 7, 2025, https://huggingface.co/papers/2306.08707
- VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing, accessed May 7, 2025, https://sl.zhuanzhi.ai/paper/e394084d66e7e6714f39607ef8516340
- [2403.14617] Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2403.14617
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion - arXiv, accessed May 7, 2025, https://arxiv.org/html/2403.14617v1
- [Literature Review] Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/en/review/videoshop-localized-semantic-video-editing-with-noise-extrapolated-diffusion-inversion
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion, accessed May 7, 2025, https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/01890.pdf
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion - GitHub, accessed May 7, 2025, https://github.com/sfanxiang/videoshop
- Contextually Harmonious Local Video Editing - OpenReview, accessed May 7, 2025, https://openreview.net/forum?id=GwJXJSCH1S
- GuideEdit: Enhancing Face Video Editing with Fine-grained Control | OpenReview, accessed May 7, 2025, https://openreview.net/forum?id=gWOANrFJ0t
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion, accessed May 7, 2025, https://arxiv.org/html/2403.14617v3
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion, accessed May 7, 2025, https://videoshop-editing.github.io/
- Supplementary Material - Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion, accessed May 7, 2025, https://videoshop-editing.github.io/static/supplementary/
- Supplementary Material for Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion, accessed May 7, 2025, https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/01890-supp.pdf
- video-generation-survey/video-generation.md at main · yzhang2016/video-generation-survey - GitHub, accessed May 7, 2025, https://github.com/yzhang2016/video-generation-survey/blob/main/video-generation.md
- Kobaayyy/Awesome-CVPR2025-CVPR2024-ECCV2024-AIGC - GitHub, accessed May 7, 2025, https://github.com/Kobaayyy/Awesome-CVPR2024-ECCV2024-AIGC/blob/main/ECCV2024.md
- Learning To Cut by Watching Movies - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2021/papers/Pardo_Learning_To_Cut_by_Watching_Movies_ICCV_2021_paper.pdf
- Learning to Cut by Watching Movies | Request PDF - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/359001853_Learning_to_Cut_by_Watching_Movies
- Learning to Cut by Watching Movies Supplementary Material - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2021/supplemental/Pardo_Learning_To_Cut_ICCV_2021_supplemental.pdf
- Learning Where To Cut From Edited Videos - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/ICCV2021W/CVEU/papers/Huang_Learning_Where_To_Cut_From_Edited_Videos_ICCVW_2021_paper.pdf
- arXiv:2308.09775v1 [cs.CV] 18 Aug 2023, accessed May 7, 2025, https://arxiv.org/pdf/2308.09775
- [2108.04294] Learning to Cut by Watching Movies - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2108.04294
- PardoAlejo/LearningToCut: Official Code of ICCV 2021 Paper: Learning to Cut by Watching Movies - GitHub, accessed May 7, 2025, https://github.com/PardoAlejo/LearningToCut
- [D] ICCV 19 - The state of (some) ethically questionable papers : r/MachineLearning - Reddit, accessed May 7, 2025, https://www.reddit.com/r/MachineLearning/comments/dp389c/d_iccv_19_the_state_of_some_ethically/
- 2021 IEEE/CVF International Conference on Computer Vision (ICCV), accessed May 7, 2025, https://www.computer.org/csdl/proceedings/iccv/2021/1BmEezmpGrm
- Match Cutting: Finding Cuts with Smooth Visual Transitions - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/364519039_Match_Cutting_Finding_Cuts_with_Smooth_Visual_Transitions
- Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books - DSpace@MIT, accessed May 7, 2025, https://dspace.mit.edu/bitstream/handle/1721.1/112996/Torralba_Aligning%20books.pdf?sequence=1&isAllowed=y
- Most Influential ICCV Papers (2024-05 Version), accessed May 7, 2025, https://www.paperdigest.org/2024/05/most-influential-iccv-papers-2024-05/
- Detours for Navigating Instructional Videos - UT Computer Science, accessed May 7, 2025, https://www.cs.utexas.edu/~grauman/papers/video-detours.pdf
- MovieCuts: A New Dataset and Benchmark for Cut Type Recognition - European Computer Vision Association, accessed May 7, 2025, https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136670659.pdf
- Adobe Research at ICCV 2021, accessed May 7, 2025, https://research.adobe.com/news/adobe-research-at-iccv-2021/
- The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing - Joon-Young Lee, accessed May 7, 2025, https://joonyoung-cv.github.io/assets/paper/22_eccv_the_anatomy.pdf
- Alejandro Pardo PardoAlejo - GitHub, accessed May 7, 2025, https://github.com/PardoAlejo
- PardoAlejo/MovieCuts: Learning to cut end-to-end pretrained modules - GitHub, accessed May 7, 2025, https://github.com/PardoAlejo/MovieCuts
- familyfriendlymikey/mpv-cut: An mpv plugin for cutting videos incredibly quickly. - GitHub, accessed May 7, 2025, https://github.com/familyfriendlymikey/mpv-cut
- Breakthrough/PySceneDetect: 🎥 Python and OpenCV-based scene cut/transition detection program & library. - GitHub, accessed May 7, 2025, https://github.com/Breakthrough/PySceneDetect
- mifi/lossless-cut: The swiss army knife of lossless video/audio editing - GitHub, accessed May 7, 2025, https://github.com/mifi/lossless-cut
- (PDF) Film Editing and Emotional Resonance: The Psychology of Cut - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/379598550_Film_Editing_and_Emotional_Resonance_The_Psychology_of_Cut
- Concentration crisis for watching movies : r/TrueFilm - Reddit, accessed May 7, 2025, https://www.reddit.com/r/TrueFilm/comments/10dkjmj/concentration_crisis_for_watching_movies/
- Does anyone else have trouble enjoying movies and tv after specializing in filmmaking? - Reddit, accessed May 7, 2025, https://www.reddit.com/r/Filmmakers/comments/16lkxew/does_anyone_else_have_trouble_enjoying_movies_and/
- Cinematic Study: How to Watch Movies Like a Filmmaker - The Film Fund, accessed May 7, 2025, https://www.thefilmfund.co/learning-from-films-how-to-watch-movies-like-a-filmmaker/
- ART OF THE CUT – Walter Murch, ACE with clarifications on his books - ProVideo Coalition, accessed May 7, 2025, https://www.provideocoalition.com/aotc-murch-books/
- Editing Theory: Balancing The Method With Emotion - LIVING IN CINE, accessed May 7, 2025, http://www.livingincine.com/2011/06/editing-theory-balancing-method-with.html
- Watching movies: a few tips | Michigan Today, accessed May 7, 2025, https://michigantoday.umich.edu/2010/03/10/a7627/
- A Perplexing Guide to Movie-Watching - Books and Culture, accessed May 7, 2025, https://www.booksandculture.com/articles/2015/novdec/perplexing-guide-to-movie-watching.html
- Art of the Cut: Hullfish, Steve: 9781138238664: Amazon.com: Books, accessed May 7, 2025, https://www.amazon.com/Art-Cut-Steve-Hullfish/dp/113823866X
- Observations on film art : Watching movies very, very slowly - David Bordwell, accessed May 7, 2025, https://www.davidbordwell.net/blog/2007/07/22/watching-movies-very-very-slowly/
- MovieCuts: A New Dataset and Benchmark for Cut Type Recognition - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/367576933_MovieCuts_A_New_Dataset_and_Benchmark_for_Cut_Type_Recognition
- Movie editing influences spectators' time perception - PMC - PubMed Central, accessed May 7, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9684412/
- How the Brain Affects the Way We Perceive Movies - Neuroscience News, accessed May 7, 2025, https://neurosciencenews.com/brain-perception-movies-20701/
- Art of the Cut with Dan Zimmerman, ACE of “Mazerunner: The Scorch Trials”, accessed May 7, 2025, https://www.provideocoalition.com/art-of-the-cut-with-dan-zimmerman-ace-of-mazerunner-the-scorch-trials/
- A fast, free, simple video cutter/splitter? : r/software - Reddit, accessed May 7, 2025, https://www.reddit.com/r/software/comments/rt4qzi/a_fast_free_simple_video_cuttersplitter/
- SketchVideo: Sketch-based Video Generation and Editing - CVPR 2025, accessed May 7, 2025, https://cvpr.thecvf.com/virtual/2025/poster/33517
- [2503.23284] SketchVideo: Sketch-based Video Generation and Editing - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2503.23284
- [Literature Review] SketchVideo: Sketch-based Video Generation and Editing - Moonlight, accessed May 7, 2025, https://www.themoonlight.io/en/review/sketchvideo-sketch-based-video-generation-and-editing
- IGLICT/SketchVideo - GitHub, accessed May 7, 2025, https://github.com/IGLICT/SketchVideo
- Video Editing | Papers With Code, accessed May 7, 2025, https://paperswithcode.com/task/video-editing?page=8&q=
- Breathing Life Into Sketches Using Text-to-Video Priors - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/CVPR2024/papers/Gal_Breathing_Life_Into_Sketches_Using_Text-to-Video_Priors_CVPR_2024_paper.pdf
- CVPR 2025 Accepted Papers, accessed May 7, 2025, https://cvpr.thecvf.com/Conferences/2025/AcceptedPapers
- showlab/Awesome-Video-Diffusion: A curated list of recent diffusion models for video generation, editing, and various other applications. - GitHub, accessed May 7, 2025, https://github.com/showlab/Awesome-Video-Diffusion
- SketchVideo: Sketch-based Video Generation and Editing | AI Research Paper Details, accessed May 7, 2025, https://www.aimodels.fyi/papers/arxiv/sketchvideo-sketch-based-video-generation-editing
- Sketch Video Synthesis, accessed May 7, 2025, https://diglib.eg.org/bitstream/handle/10.1111/cgf15044/v43i2_47_15044.pdf
- Sketch-Based Video Object Localization - CVF Open Access, accessed May 7, 2025, https://openaccess.thecvf.com/content/WACV2024/papers/Woo_Sketch-Based_Video_Object_Localization_WACV_2024_paper.pdf
- SketchVideo: Sketch-based Video Generation and Editing - arXiv, accessed May 7, 2025, https://arxiv.org/html/2503.23284v1
- AirSketch: Generative Motion to Sketch - NIPS papers, accessed May 7, 2025, https://proceedings.neurips.cc/paper_files/paper/2024/file/f4b6ef2a78684dca2fb3f1c09372e041-Paper-Conference.pdf
- MarkMoHR/Awesome-Sketch-Based-Applications - GitHub, accessed May 7, 2025, https://github.com/MarkMoHR/Awesome-Sketch-Based-Applications
- Okrin/SketchVideo - Hugging Face, accessed May 7, 2025, https://huggingface.co/Okrin/SketchVideo
- Paper page - SketchVideo: Sketch-based Video Generation and Editing - Hugging Face, accessed May 7, 2025, https://huggingface.co/papers/2503.23284
- VidSketch: Hand-drawn Sketch-Driven Video Generation with Diffusion Control - GitHub, accessed May 7, 2025, https://github.com/CSfufu/VidSketch
- yudianzheng/SketchVideo: [EG 2023] Sketch Video Synthesis - GitHub, accessed May 7, 2025, https://github.com/yudianzheng/SketchVideo
- Weicai Ye 叶伟才, accessed May 7, 2025, https://ywcmaike.github.io/
- Prompt Learning Based Adaptor for Enhanced Video Editing with Pretrained Text-to-Image Diffusion Models - NeurIPS 2024, accessed May 7, 2025, https://neurips.cc/virtual/2024/105002
- NeurIPS Poster Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models, accessed May 7, 2025, https://neurips.cc/virtual/2024/poster/93631
- Enhancing Low-Cost Video Editing with Lightweight Adaptors and Temporal-Aware Inversion - arXiv, accessed May 7, 2025, https://arxiv.org/html/2501.04606v3
- Prompt Learning Based Adaptor for Enhanced Video Editing with Pretrained Text-to-Image Diffusion Models | OpenReview, accessed May 7, 2025, https://openreview.net/forum?id=4bPH07bP4A&referrer=%5Bthe%20profile%20of%20Yangfan%20He%5D(%2Fprofile%3Fid%3D~Yangfan_He1)
- AI-Based Video Clipping of Soccer Events - MDPI, accessed May 7, 2025, https://www.mdpi.com/2504-4990/3/4/49
- A Review of Computer Vision Technology for Football Videos - MDPI, accessed May 7, 2025, https://www.mdpi.com/2078-2489/16/5/355
- AI-based clipping of booking events in soccer - OsloMet ODA, accessed May 7, 2025, https://oda.oslomet.no/oda-xmlui/handle/11250/3101178
- (PDF) AI-Based Video Clipping of Soccer Events - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/356944354_AI-Based_Video_Clipping_of_Soccer_Events
- Mach. Learn. Knowl. Extr., Volume 3, Issue 4 (December 2021) – 14 articles - MDPI, accessed May 7, 2025, https://www.mdpi.com/2504-4990/3/4
- Automated Event Detection and Classification in Soccer: The Potential of Using Multiple Modalities - MDPI, accessed May 7, 2025, https://www.mdpi.com/2504-4990/3/4/51
- Prediction of Shooting Events in Soccer Videos Using Complete Bipartite Graphs and Players' Spatial-Temporal Relations - MDPI, accessed May 7, 2025, https://www.mdpi.com/1424-8220/23/9/4506
- AI-Based Video Clipping of Soccer Events - Simula Research Laboratory, accessed May 7, 2025, https://www.simula.no/research/ai-based-video-clipping-soccer-events
- Automated Event Detection and Classification in Soccer: The Potential of Using Multiple Modalities - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/357120615_Automated_Event_Detection_and_Classification_in_Soccer_The_Potential_of_Using_Multiple_Modalities
- Integrated AI System for Real-Time Sports Broadcasting: Player, accessed May 7, 2025, https://colab.ws/articles/10.3390%2Fapp15031543
- Integrated AI System for Real-Time Sports Broadcasting: Player Behavior, Game Event Recognition, and Generative AI Commentary in Basketball Games - MDPI, accessed May 7, 2025, https://www.mdpi.com/2076-3417/15/3/1543
- Sports Video Classification Method Based on Improved Deep Learning - MDPI, accessed May 7, 2025, https://www.mdpi.com/2076-3417/14/2/948
- Diagnostic Applications of AI in Sports: A Comprehensive Review of Injury Risk Prediction Methods - PMC - PubMed Central, accessed May 7, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11592714/
- A novel approach to a hybrid security system using Operator Machine Augmentation Resource (OMAR), dissertation by Mohammed Ameen - Iowa State University Digital Repository, accessed May 7, 2025, https://dr.lib.iastate.edu/bitstreams/0584d4d8-371d-4e8f-a7a4-fde6381b1737/download
- Fully Automatic Camera for Personalized Highlight Generation in Sporting Events - PMC, accessed May 7, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10857372/
- Multimodal Shot Prediction Based on Spatial-Temporal Interaction between Players in Soccer Videos - MDPI, accessed May 7, 2025, https://www.mdpi.com/2076-3417/14/11/4847
- Prediction of Shooting Events in Soccer Videos Using Complete Bipartite Graphs and Players' Spatial-Temporal Relations - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/370577842_Prediction_of_Shooting_Events_in_Soccer_Videos_Using_Complete_Bipartite_Graphs_and_Players'_Spatial-Temporal_Relations
- AI-Based Video Clipping of Soccer Events | Notes - MAKE, MDPI, accessed May 7, 2025, https://www.mdpi.com/2504-4990/3/4/49/notes
- AI-Based Cropping of Soccer Videos for Different Social Media Representations - CORE, accessed May 7, 2025, https://core.ac.uk/download/630833888.pdf
- AI-Based Cropping of Sport Videos using SmartCrop - Munin, accessed May 7, 2025, https://munin.uit.no/bitstream/handle/10037/36766/article.pdf?sequence=4&isAllowed=y
- AI / Automated Video Processing : r/SoccerCoachResources - Reddit, accessed May 7, 2025, https://www.reddit.com/r/SoccerCoachResources/comments/1fkzz3k/ai_automated_video_processing/
- Automated Clipping of Soccer Events using Machine Learning | Request PDF, accessed May 7, 2025, https://www.researchgate.net/publication/357729185_Automated_Clipping_of_Soccer_Events_using_Machine_Learning
- The AI era in sports broadcasting: Data-driven storytelling, personalised highlights, and automated audio - SportsPro, accessed May 7, 2025, https://www.sportspro.com/insights/features/sports-broadcasting-artificial-intelligence-production-content/
- Enhancing live football broadcasts by eliminating camera operator distractions with AI, accessed May 7, 2025, https://www.sciencedaily.com/releases/2024/07/240710130909.htm
- AI-Based Cropping of Sport Videos Using SmartCrop, accessed May 7, 2025, https://worldscientific.com/doi/pdf/10.1142/S1793351X24450028?download=true
- [2202.01031] MMSys'22 Grand Challenge on AI-based Video Production for Soccer - ar5iv, accessed May 7, 2025, https://ar5iv.labs.arxiv.org/html/2202.01031
- AI-Based Cropping of Soccer Videos for Different Social Media Representations - Munin, accessed May 7, 2025, https://munin.uit.no/bitstream/10037/35843/2/article.pdf
- AI-PRODUCER: AI-based Video Clipping and Summarization of Sport Events, accessed May 7, 2025, https://prosjektbanken.forskningsradet.no/en/project/FORISS/327717
- Automatic summarization of soccer highlights using audio-visual descriptors - Semantic Scholar, accessed May 7, 2025, https://www.semanticscholar.org/paper/Automatic-summarization-of-soccer-highlights-using-Raventos-Quijada/a1a120c6ea23e024c202ba6b89dec4e6d34bc787
- AI in Sports: Soccer Analytics from Gameplay Videos | Object Detective Blog Series (Part 3), accessed May 7, 2025, https://www.macnica.co.jp/en/business/ai/blog/142069/
- [2202.01031] MMSys'22 Grand Challenge on AI-based Video Production for Soccer - arXiv, accessed May 7, 2025, https://arxiv.org/abs/2202.01031
- AI-Based Cropping of Soccer Videos for Different Social Media Representations, accessed May 7, 2025, https://oda.oslomet.no/oda-xmlui/bitstream/handle/11250/3164019/2024_mmm_demo_smartcrop.pdf?sequence=4&isAllowed=y
- SmartCrop: AI-Based Cropping of Soccer Videos | Request PDF - ResearchGate, accessed May 7, 2025, https://www.researchgate.net/publication/379132826_SmartCrop_AI-Based_Cropping_of_Soccer_Videos
- PlayerTV: Advanced Player Tracking and Identification for Automatic Soccer Highlight Clips - arXiv, accessed May 7, 2025, https://arxiv.org/html/2407.16076v1
- Prediction of Shooting Events in Soccer Videos Using Complete Bipartite Graphs and Players' Spatial-Temporal Relations - PMC, accessed May 7, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10181557/
- AI Video Editor - VIDIO, accessed May 7, 2025