Sketches are created by humans through an iterative process and reflect one's sketching skills, taste, world perception, and even character in just a set of sparse lines.
Being the result of semantic, perceptual, or conceptual processing, sketches are distinct from photos.
While the Computer Vision and Machine Learning communities have firmly invested in reasoning with
photos, sketch data has only recently come into the spotlight. This shift of focus toward sketch data has
already begun to have a profound impact on many facets of research in computer vision, computer graphics,
machine learning, HCI, and artificial intelligence at large. Sketches have been used not only for image
retrieval, 3D modeling, and user interface design, but also as a key enabler
of our fundamental understanding of visual abstraction,
creativity, and expressivity.
Developing such an understanding is impossible with photos alone.
This workshop (series) aims to bring together researchers from these interdisciplinary fields
to consolidate cross-discipline insights, identify and encourage new research directions,
and ultimately foster the growth of the sketch research community.
CALL FOR PAPERS
TOPICS
The topics of primary interest for this workshop are those that study and exploit the expressive power of
sketches. We encourage interdisciplinary research at the intersection of computer vision (CV),
machine learning (ML), computer graphics (CG), human-computer interaction (HCI), and user experience (UX).
Specifically, topics include, but are not limited to:
Sketch understanding and representation:
- sketch and human perception
- sketch social and cultural impact
- 2D/3D sketch synthesis
- sketch abstraction
- synthetic sketch quality assessment
- sketch recognition
- sketch segmentation/grouping
- sketch captioning
- sketch representation
Sketch and human creativity:
- sketch for human creativity
- sketch-based ideation and product prototyping
- sketch-based modeling
- sketch-based editing
- sketching interfaces
- sketching in AR/VR, gesture recognition for sketching
- style transfer for scene sketches, portraits, or animation
Multi-modal sketch applications:
- 2D sketch-based image retrieval (SBIR)
- 2D/3D sketch-based 3D shape retrieval
- novel sketch-based datasets and their applications
SUBMISSIONS
Submitted papers must present unpublished work on the topics listed above.
Papers will undergo full double-blind peer review by 3-5 program committee members.
There will be neither a rebuttal phase nor a second review cycle.
Accepted papers will be published in conjunction with the ICCV 2021 proceedings.
The Best Paper will be selected by the program committee.
HOW TO SUBMIT YOUR WORK
Please submit your work via the CMT website.
For formatting instructions and LaTeX templates, please refer to the ICCV submission guidelines.
The page limit is 8 pages. We accept supplemental material in PDF or ZIP format.
IMPORTANT DATES
Paper submission deadline: 3rd August, 11:59 pm Pacific Time
Supplemental material deadline: 3rd August, 11:59 pm Pacific Time
Acceptance notification to authors: 11th August
Camera-ready deadline: 17th August
BEST PAPER AWARD
A GeForce RTX 3080 GPU, sponsored by the SketchX Research Lab,
will be awarded to the Best Paper.