Typography Day 2014 - Typography and Culture

Technique for optimization of font color in subtitling of modern media

Dhvanil Patel, Indian Institute of Technology Guwahati, India, dhvanilpatel2012@gmail.com
Ojas Deshpande, Indian Institute of Technology Guwahati, India, 31ojas@gmail.com

Abstract: Subtitles have become an integral part of modern media. From television to video games, almost every audio-visual medium in the world uses captioning in one way or another. However, very little research has been done in this field. Traditionally, the color of subtitles is plain white or yellow with a black outline stroke. Although white or yellow text with a black stroke works well across a wide range of visual media, here we have devised a method that takes into account the dominant colors of the film and chooses the most effective subtitle colors (both the fill color and the stroke color), so as to make the subtitles more contrasting and easily readable.

Key words: Subtitles, Color Theory, Video Processing, Image Processing

1. Introduction

Subtitles are not only essential but an indispensable part of today's media. Be it television, film, video games or any other audio-visual medium, subtitling or captioning is used almost everywhere. Subtitles bridge language barriers and increase the reach and accessibility of the media. They also make a film more comprehensible for deaf and hard-of-hearing viewers.

Until now, subtitled movies have used a constant subtitle color such as white, yellow or green. This sometimes produces an unpleasant view of the subtitles when the background color is similar to the subtitle color, because humans perceive color relatively. The default, and most widely used, color scheme for subtitles is white (#FFFFFF) with a black (#000000) stroke; in some cases a faint shadow is also applied. However, this generalization is not particularly effective for many movies, as we show in the subsequent sections.

Most European cinema and TV channels use yellow subtitles, while their Hollywood counterparts use white subtitles. There has been a longstanding debate on the choice between white and yellow subtitles.

Yellow subtitles are more visible than white subtitles, especially on lighter backgrounds. For example, in the film Vertical Limit, yellow subtitles work better than white ones because the overall color scheme of the film is whitish, as it is set in the snow-clad Himalayan ranges. On the other hand, yellow subtitles disrupt the visual harmony of the movie: they make the rest of the frame appear too blue. White subtitles are used for exactly the opposite reasons. Therefore, white subtitles are preferred at international film festivals, where it is necessary to preserve the color palette of the film, while yellow subtitles are generally used on cable TV.

The main aim of this project was to devise a method that takes into account the dominant colors of the film and chooses the most effective subtitle colors (fill color as well as stroke color), so as to make the subtitles either more readable and legible, or more neutral, i.e. non-disruptive of the existing color scheme of the movie.

Figure 1. White versus yellow subtitles for the movie Avatar (2009)

2. Detailed Method

The entire program was written in the Python programming language. It performs all the necessary functions and provides the most suitable subtitle colors for both scenarios. A similar program that uses fuzzy logic to select the most pleasant color had been developed earlier; however, it differs greatly from ours with respect to the application and the post-extraction operations [1]. The detailed working of the program is explained below.

2.1 Extracting the frames of the film

In this step, the program extracts frames from the film and stores them in a folder. The extracted frames are in JPG format. The program is versatile and can be tweaked in a variety of ways. Some of the parameters that can be changed are:

1. The frame extraction rate: The frame extraction rate determines how frequently frames are extracted from the video. A frame extraction rate of 10 means that every tenth frame is extracted. The lower the frame extraction rate, the more frames are sampled and the more representative the result, at the cost of processing time.

2. The frame resolution: The resolution of the extracted frames can also be changed to suit the application. A higher extraction resolution increases the processing time of the program.

3. The frame dimensions: Besides the resolution, the region of the frame that is extracted can also be changed; for example, one can extract only the bottom one-third of each frame. This gives a choice between improving the legibility of the subtitles and preserving the visual harmony of the movie, which is exactly what the yellow-versus-white subtitle debate is about. Since only the bottom one-third of the frame contains subtitles, extracting only that portion yields a more contrasting subtitle color, whereas extracting the entire frame yields a more harmonious subtitle color that preserves the color scheme of the movie.

Figure 2. Full-frame extraction versus bottom one-third frame extraction

2.2 Analyzing the colors of the frame

The colors of the frames were then analyzed with a Python script. The script automatically takes in the frames and stores the hexadecimal codes of the three most dominant colors of each frame in a text file. It uses k-means clustering to group the pixels by color; the centers of the resulting clusters are the dominant colors. The total number of colors (hexadecimal codes) extracted was around 12,000. For more accuracy, the number of colors can be increased to more than one million (frame extraction rate = 1 and number of k-means clusters = 5).
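
The extraction and clustering steps described in Sections 2.1 and 2.2 can be sketched as follows. This is an illustrative reconstruction rather than the authors' original code; it assumes OpenCV, NumPy and scikit-learn, and the parameter values (extraction rate of 10, 320x180 downscaling, three clusters, bottom one-third crop) and the output file name dominant_colors.txt are ours.

# Illustrative sketch of Sections 2.1-2.2 (not the authors' original code).
import cv2
import numpy as np
from sklearn.cluster import KMeans

EXTRACTION_RATE = 10       # keep every 10th frame (Section 2.1, parameter 1)
FRAME_SIZE = (320, 180)    # downscale for speed (Section 2.1, parameter 2)
BOTTOM_THIRD_ONLY = True   # analyze only the subtitle region (Section 2.1, parameter 3)

def dominant_colors(frame, k=3):
    # Cluster the pixels of one frame and return the k cluster centers as hex codes.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pixels = rgb.reshape(-1, 3).astype(np.float32)
    centers = KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_
    hex_codes = []
    for c in centers:
        r, g, b = (int(round(v)) for v in c)
        hex_codes.append("#{:02x}{:02x}{:02x}".format(r, g, b))
    return hex_codes

def analyze_film(video_path, out_path="dominant_colors.txt"):
    cap = cv2.VideoCapture(video_path)
    index = 0
    with open(out_path, "w") as out:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % EXTRACTION_RATE == 0:
                frame = cv2.resize(frame, FRAME_SIZE)
                if BOTTOM_THIRD_ONLY:
                    frame = frame[2 * frame.shape[0] // 3:, :]   # bottom one-third
                out.write(" ".join(dominant_colors(frame)) + "\n")
            index += 1
    cap.release()

# Example (hypothetical file name):
# analyze_film("avatar.mp4")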

2.3 Making the color palette of the film

In the previous step, the hexadecimal codes of the dominant colors of the frames were stored in a text file. This text file is then fed back into the Python color extraction code and a color palette is generated. This palette is a generalization of the entire film; in a way, it represents the entire color scheme of the movie. Each pixel on the palette corresponds to a dominant color of one of the extracted frames.

Figure 3. Color palette of Moneyball (2011)

Figure 4. Color palette of Avatar (2009)

2.4 Selection of the color of the subtitles

The next step is the selection of the optimum colors for the subtitles. Color processing is performed on the movie color palette generated earlier, and two colors are obtained: the fill color and the stroke color [2]. Depending on the region of the frame that was extracted, the resulting colors are either more legible or more harmonious. The color-choosing algorithm is based on earlier research in the fields of visual perception and color psychology.

Figure 5. In this frame, the color of the subtitle is complementary to the color of the background, making it more legible.
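
The paper does not spell out the color processing of Section 2.4, so the following sketch is only one plausible reading of Sections 2.3 and 2.4: the per-frame dominant colors are clustered once more into a film-level palette, the fill color is taken as a light complement of the palette's overall color (in the spirit of Figure 5), and the stroke is whichever of near-black or near-white contrasts more with the fill. The selection rule and all thresholds are our assumptions, not the authors' algorithm.

# Hypothetical reading of Sections 2.3-2.4; the selection rule is an assumption,
# not the authors' algorithm.
import colorsys
import numpy as np
from sklearn.cluster import KMeans

def hex_to_rgb(code):
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#{:02x}{:02x}{:02x}".format(*(int(round(v)) for v in rgb))

def film_palette(txt_path, k=5):
    # Section 2.3: cluster the per-frame dominant colors into a film-level palette.
    with open(txt_path) as f:
        colors = [hex_to_rgb(c) for line in f for c in line.split()]
    km = KMeans(n_clusters=k, n_init=10).fit(np.array(colors, dtype=float))
    return [tuple(c) for c in km.cluster_centers_]

def pick_subtitle_colors(palette):
    # Section 2.4 (assumed rule): complementary fill, high-contrast stroke.
    base = np.mean(palette, axis=0) / 255.0            # overall palette color
    h, l, s = colorsys.rgb_to_hls(*base)
    # Complementary hue, pushed towards a light tone so the text stays readable.
    fill = colorsys.hls_to_rgb((h + 0.5) % 1.0, max(l, 0.85), s)
    fill = tuple(255 * v for v in fill)
    # Stroke: near-black or near-white, whichever differs more from the fill.
    stroke = (10, 10, 10) if sum(fill) / 3 > 128 else (240, 240, 240)
    return rgb_to_hex(fill), rgb_to_hex(stroke)

# Example usage (file produced by the extraction sketch above):
# fill, stroke = pick_subtitle_colors(film_palette("dominant_colors.txt"))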

3. Algorithm

The algorithm used to calculate the dominant colors of every image is k-means clustering [3]. Every color is represented by its RGB value as a point in a three-dimensional space. The algorithm selects k random colors from the list of colors extracted from the image to serve as the initial cluster centers [4]. Then, looping over all the colors, it computes the distance of each color from every center and assigns the color to the nearest cluster. Once this has been done for all the colors, it recalculates the center of every cluster. The same process is repeated until the centers stop moving. The centers of these clusters are then the dominant colors of the image. (A short NumPy sketch of this loop is given after Table 1.)

4. Results

The code was run on more than thirty films with varying parameters and levels of detail. Some of the results are shown in the table below.

Film Name        | Type   | Dominant colors           | Subtitle fill | Subtitle stroke
Moneyball        | Full   | #515447 #8a8f7c #1f201c   | #F5F0F4       | #1E0518
                 | Bottom | #171d0d #070605 #070605   | #E9DCE6       | #090007
Avatar           | Full   | #546b6f #223437 #93a6a8   | #E9E4D4       | #090700
                 | Bottom | #283a41 #090d0f #060606   | #E9CF80       | #070605
Reservoir Dogs   | Full   | #504749 #8c848e #241c1a   | #E9EAC1       | #1E1D1A
                 | Bottom | #191414 #020201 #383838   | #E9DAAE       | #110D02
October Sky      | Full   | #18191b #7189a8 #3c4550   | #F0E0C7       | #211402
                 | Bottom | #090f0f #070918 #262626   | #E2D9CC       | #0D0801
Gravity          | Full   | #2f3232 #2d4079 #0f1c19   | #EDE9E4       | #2A1702
                 | Bottom | #283a41 #090d0f #060606   | #E4DAD1       | #191714
Sin City         | Full   | #5d5a56 #191818 #b5b0a5   | #DBDFE3       | #080E16
                 | Bottom | #161615 #0c0c0e #1c0a07   | #D5D9DB       | #010810
The Hunger Games | Full   | #767e66 #1b1b17 #474a39   | #EDDAE6       | #17010D
                 | Bottom | #0c230f #1b1c1c #0f160f   | #EDD6D6       | #0D0101

Table 1: Results of the code. "Full" denotes analysis of the entire frame; "Bottom" denotes analysis of only the bottom one-third of the frame.
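
As noted at the end of Section 3, the clustering loop itself is short. The following is an illustrative NumPy reimplementation of the loop described there, not the authors' original code:

# Illustrative NumPy version of the k-means loop described in Section 3.
import numpy as np

def kmeans_colors(pixels, k=3, max_iter=100, seed=0):
    # pixels: (N, 3) array of RGB values. Returns the k cluster centers.
    rng = np.random.default_rng(seed)
    # Pick k random colors from the image as the initial cluster centers.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(max_iter):
        # Assign every color to the nearest center (Euclidean distance in RGB space).
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recalculate the center of every cluster (keep the old center if a cluster is empty).
        new_centers = np.array([pixels[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Stop once the centers no longer move.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers  # the dominant colors

In practice a library implementation such as scikit-learn's KMeans (used in the sketch in Section 2.2) behaves the same way, but adds smarter initialization and multiple restarts.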

Some important inferences were drawn from the results:

1. In order to preserve the color scheme of the movie, white subtitles with a black outline are not the best choice (as is commonly believed).

2. In most cases, the palette of dominant colors obtained from the entire frame had higher saturation and brightness than the palette obtained from only the bottom one-third of the frame. This is probably because the upper parts of the frame usually contain lighter colors, such as blue and white, while the bottom part is usually darker.

3. Consequently, the fill color obtained in the second case (one-third frame extraction) was generally more saturated and brighter than the one obtained in the first case (entire-frame extraction).

5. Future Scope

There is immense scope for further improvement of this project. The algorithms can be refined so that the color extraction process becomes even more efficient. Moreover, detailed research can be done on the legibility and readability of text on non-static backgrounds.

One interesting application of this method is varying subtitle colors over time [5]. Current subtitle file formats already support changing the color for a particular time span. This feature can be exploited to increase the readability of the subtitles even further: the subtitle text may change its color or hue slightly, so that the viewer does not consciously notice it, yet it makes a noticeable difference to the reading experience. With a slight extension to our code, a frame-based color scheme can be produced and used to color the subtitles (a small illustration is given at the end of this section).

Apart from typography, this kind of video color extraction has uses in many other fields. It can be applied to movie content analysis, for example to estimate the amount of violent or explicit content in a movie, or to characterize its overall visual style and color scheme. It can also be used in the interaction design of websites and video players.
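
As a minimal illustration of the colored-cue idea above, fill colors such as those in Table 1 can already be injected into an ordinary SubRip (.srt) file through the de facto <font color> tag, which most desktop players honor. The cue text, timestamps and file name below are invented for the example; stroke colors would require a richer format such as Advanced SubStation Alpha.

# Illustration only: write one .srt cue whose fill color comes from Table 1.
# The <font color="..."> tag is a de facto SubRip extension; player support varies.
def srt_cue(index, start, end, text, fill):
    return (f"{index}\n"
            f"{start} --> {end}\n"
            f'<font color="{fill}">{text}</font>\n\n')

with open("avatar_recolored.srt", "w", encoding="utf-8") as srt:
    # Fill color for Avatar (full-frame analysis) taken from Table 1.
    srt.write(srt_cue(1, "00:00:01,000", "00:00:04,000",
                      "Sample subtitle line", "#E9E4D4"))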

6. Conclusion

This paper proposes a simple yet effective method that selects subtitle colors for two important scenarios: preserving the color scheme of the movie, or improving the legibility of the subtitles. These simple choices go a long way in shaping the experience of watching visual media. We believe this method can improve the viewing experience of television, film, video games and other audio-visual media.

References

[1] Davoudi, M., Menhaj, M. B., Naraghi, N. S., Aref, A., Davoodi, M., & Davoudi, M. (2012) A fuzzy logic-based video subtitle and caption coloring system. Advances in Fuzzy Systems, 2012, 7.

[2] Kyrnin, J. (2006) Using Color Wheels and Color Theory to Design Harmonious Pages, Color Harmony, About.com Guide.

[3] Hsieh, I. S., and Fan, K. C. (2000) An adaptive clustering algorithm for color quantization. Pattern Recognition Letters, 21(4), pp 337-346.

[4] Chuang, K. S., Tzeng, H. L., Chen, S., Wu, J., and Chen, T. J. (2006) Fuzzy c-means clustering with spatial information for image segmentation. Computerized Medical Imaging and Graphics, 30(1), pp 9-15.

[5] Davoudi, M., & SeifNaraghi, N. (2009, March) Adaptive subtitle and caption coloring using fuzzy analysis. In Computer Science and Information Engineering, 2009 WRI World Congress on (Vol. 4, pp. 764-768). IEEE.