
US 20140178045 A1

(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2014/0178045 A1
ABE (43) Pub. Date: Jun. 26, 2014

(54) VIDEO PLAYBACK DEVICE, VIDEO PLAYBACK METHOD, NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREON VIDEO PLAYBACK PROGRAM, VIDEO PLAYBACK CONTROL DEVICE, VIDEO PLAYBACK CONTROL METHOD AND NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREON VIDEO PLAYBACK CONTROL PROGRAM

(71) Applicant: Casio Computer Co., Ltd., Tokyo (JP)
(72) Inventor: Takatoshi ABE, Tokyo (JP)
(73) Assignee: CASIO COMPUTER CO., LTD., Tokyo (JP)
(21) Appl. No.: 14/133,136
(22) Filed: Dec. 18, 2013
(30) Foreign Application Priority Data: Dec. 20, 2012 (JP) 2012-277569; Jun. 10, 2013 (JP) 2013-121379

Publication Classification
(51) Int. Cl.: H04N 5/93 (2006.01)
(52) U.S. Cl.: CPC H04N 5/9305 (2013.01); USPC 386/244

(57) ABSTRACT
A video playback device includes a sound-attached video storage unit, a text storage unit, a text list display control unit, a text specification unit and a sound-attached video portion playback control unit. In the sound-attached video storage unit, a sound-attached video is stored. In the text storage unit, texts for sounds of sound-attached video portions of the sound-attached video are stored in such a way as to be correlated with the respective sound-attached video portions. The text list display control unit controls to display the texts as a list. The text specification unit specifies a text in the displayed list as a specific text on the basis of a user operation. The sound-attached video portion playback control unit controls to hide the list and play a sound-attached video portion for the specific text.

[Representative drawing (FIG. 2): block diagram of the electronic dictionary showing the CPU 20, input unit 30 (key set 2, touch panel 110), display unit 40 (main display 10, secondary display 11), sound output unit 70 (speaker 13), storage medium reading unit 60 with external information storage medium 12a, and storage unit 80 holding the video playback program 81, dictionary database set 82, sound teaching material content set 83 (sound-attached video 91, sound-attached video segments 910, sound text 92, sound text segments 920) and the set-content-for-video-repeat-learning storage table 84.]

Patent Application Publication Jun. 26, 2014 Sheet 1 of 13 US 2014/0178045 A1 [drawing]

Patent Application Publication Jun. 26, 2014 Sheet 2 of 13 US 2014/0178045 A1 [FIG. 2: block diagram of the electronic dictionary 1, showing the CPU 20, input unit 30 (key set 2, touch panel 110), display unit 40 (main display 10, secondary display 11), sound output unit 70 (speaker 13), storage medium reading unit 60 with external information storage medium 12a, and storage unit 80 (video playback program 81, dictionary database set 82, sound teaching material content set 83, set-content-for-video-repeat-learning storage table 84)]

Patent Application Publication Jun. 26, 2014 Sheet 3 of 13 US 2014/0178045 A1 [FIG. 3: flowchart of the sound learning processing]

Patent Application Publication Jun. 26, 2014 Sheet 4 of 13 US 2014/0178045 A1 [FIG. 4: flowchart of the sound learning processing, continued (sound text display mode)]

Patent Application Publication Jun. 26, 2014 Sheet 5 of 13 US 2014/0178045 A1 [FIG. 5: flowchart of the sound learning processing, continued (video repeat learning mode)]

Patent Application Publication Jun. 26, 2014 Sheet 6 of 13 US 2014/0178045 A1 [FIG. 6: flowchart of the conversational sentence video playback execution processing]

Patent Application Publication Jun. 26, 2014 Sheet 7 of 13 US 2014/0178045 A1 [FIG. 7: flowchart of the text-display-for-conversation-playback-time processing]

Patent Application Publication Jun. 26, 2014 Sheet 10 of 13 US 2014/0178045 A1 [FIGS. 10A and 10B: example screens on the main display during video learning and video playback]

Patent Application Publication Jun. 26, 2014 Sheet 12 of 13 US 2014/0178045 A1 [drawing]

VIDEO PLAYBACK DEVICE, VIDEO PLAYBACK METHOD, NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREON VIDEO PLAYBACK PROGRAM, VIDEO PLAYBACK CONTROL DEVICE, VIDEO PLAYBACK CONTROL METHOD AND NON-TRANSITORY STORAGE MEDIUM HAVING STORED THEREON VIDEO PLAYBACK CONTROL PROGRAM

CROSS REFERENCE TO RELATED APPLICATION

[0001] This application is based upon and claims the benefit of priority under 35 USC 119 of Japanese Patent Applications No. 2012-277569 filed on Dec. 20, 2012 and No. 2013-121379 filed on Jun. 10, 2013, the entire disclosure of which, including the descriptions, claims, drawings, and abstracts, is incorporated herein by reference in its entirety.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present invention relates to a video playback device, a video playback method, a video playback control device, a video playback control method and so forth.

[0004] 2. Background Art

[0005] A conventional device for language learning outputs a sound for a text when a user specifies the text in a displayed list of texts to study.

[0006] In recent years, this kind of device displays a series of conversation texts and an image for the texts together and outputs sounds for the contents of the texts in order. The texts are displayed in such a way that a text for a sound which is being output is highlighted, and the displayed image is changed to another according to a sound to be output. (For example, refer to Japanese Patent Application Laid-Open Publication No. 2004-185680.)

[0007] Meanwhile, a video-displayable device can display, while displaying a video, the content of conversation the sound of which is being output as subtitles.

SUMMARY OF THE INVENTION

[0008] However, with such conventional devices, a user studies by listening to a sound while looking at its text. This is as if a user answers a question while looking at its answer. Hence, the learning effect is low.

[0009] Objects of the present invention include providing a video playback device, a video playback method, a non-transitory storage medium having stored thereon a video playback program, a video playback control device, a video playback control method and a non-transitory storage medium having stored thereon a video playback control program, each of which can increase the learning effect of a sound-attached video which a user watches and listens to.

[0010] In order to achieve at least one of the objects, according to a first aspect of the present invention, there is provided a video playback device including: a sound-attached video storage unit in which a sound-attached video is stored; a text storage unit in which texts for sounds of sound-attached video portions of the sound-attached video are stored in such a way as to be correlated with the respective sound-attached video portions; a text list display control unit which controls to display the texts as a list; a text specification unit which specifies a text in the displayed list of the texts as a specific text on the basis of a user operation; and a sound-attached video portion playback control unit which controls to hide the list of the texts and play a sound-attached video portion for the specific text.
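For illustration only, and not part of the original specification: a minimal Python sketch of the structure recited in the first aspect, namely a playback device that stores a sound-attached video and correlated texts, displays the texts as a list, specifies one text from a user operation, then hides the list and plays the corresponding portion. All class, method and attribute names here are the editor's assumptions, not terms used by the applicant.

```python
# Illustrative sketch of the first aspect (editor's assumption, not the applicant's code).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SoundAttachedVideoPortion:
    video_clip: bytes          # video data with its sound for one portion
    text: str                  # text correlated with the sounds of this portion


@dataclass
class VideoPlaybackDevice:
    # sound-attached video storage unit / text storage unit
    portions: List[SoundAttachedVideoPortion] = field(default_factory=list)
    _specific_index: Optional[int] = None

    # text list display control unit: controls to display the texts as a list
    def display_text_list(self) -> List[str]:
        return [p.text for p in self.portions]

    # text specification unit: specifies a text on the basis of a user operation
    def specify_text(self, selected_index: int) -> None:
        self._specific_index = selected_index

    # sound-attached video portion playback control unit:
    # hides the list and plays the portion for the specific text
    def play_specific_portion(self) -> None:
        if self._specific_index is None:
            return
        self.hide_text_list()
        self.play(self.portions[self._specific_index].video_clip)

    def hide_text_list(self) -> None:
        print("(text list hidden)")

    def play(self, clip: bytes) -> None:
        print(f"(playing {len(clip)} bytes of sound-attached video)")
```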
[0011] In order to achieve at least one of the objects, according to a second aspect of the present invention, there is provided a video playback control device including: a sound-attached video obtaining unit which obtains sound-attached video portions of a sound-attached video; a text obtaining unit which obtains texts for sounds of the sound-attached video portions; a text list display control unit which controls to display the texts as a list; a text specification unit which specifies a text in the displayed list of the texts as a specific text on the basis of a user operation; and a sound-attached video portion playback control unit which controls to hide the list of the texts and play a sound-attached video portion for the specific text.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The present invention will become more fully understood from the detailed description given hereinafter and the appended drawings, which are given by way of illustration only, and thus are not intended as a definition of the limits of the present invention, wherein:

[0013] FIG. 1A is a plan view schematically showing an electronic dictionary according to an embodiment of the present invention;
[0014] FIG. 1B is a plan view schematically showing a tablet personal computer (or a smartphone);
[0015] FIG. 1C is a plan view schematically showing a personal computer connected to an external playback device;
[0016] FIG. 2 is a block diagram showing the internal configuration of the electronic dictionary;
[0017] FIG. 3 is a flowchart of sound learning processing;
[0018] FIG. 4 is a flowchart of the sound learning processing;
[0019] FIG. 5 is a flowchart of the sound learning processing;
[0020] FIG. 6 is a flowchart of conversational sentence video playback execution processing in the sound learning processing;
[0021] FIG. 7 is a flowchart of text-display-for-conversation-playback-time processing in the sound learning processing;
[0022] FIGS. 8A to 8D show contents displayed on a display unit of the electronic dictionary;
[0023] FIGS. 9A to 9D show contents displayed on the display unit;
[0024] FIGS. 10A and 10B show contents displayed on the display unit;
[0025] FIGS. 11A to 11D show contents displayed on the display unit;
[0026] FIGS. 12A to 12D show contents displayed on the display unit; and
[0027] FIG. 13 is a block diagram showing the internal configuration of an electronic dictionary and so forth according to a modification of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0028] In the following, an embodiment in which a video playback device of the present invention is applied to an electronic dictionary is described with reference to the drawings in detail.

[External Configuration]

[0029] FIG. 1A is a plan view of an electronic dictionary 1. As shown in FIG. 1A, the electronic dictionary 1 includes a main display 10, a secondary display 11, a card slot 12, a speaker 13 and a key set 2.

[0030] The main display 10 and the secondary display 11 display thereon various data such as letters and symbols in color on the basis of user operations with the key set 2 and are each constituted of, for example, an LCD (Liquid Crystal Display) or an ELD (Electronic Luminescence Display). In the embodiment, the main display 10 and the secondary display 11 are integrally formed with a touch panel 110 (see FIG. 2) to receive operations such as handwriting input.

[0031] An external information storage medium 12a (see FIG. 2) in which various pieces of information are stored is attachable/detachable to/from the card slot 12.

[0032] The speaker 13 outputs sounds on the basis of user operations with the key set 2.

[0033] The key set 2 includes various keys to receive operations to operate the electronic dictionary 1 from a user. More specifically, the key set 2 includes a decision key 2b, letter keys 2c, a cursor key 2e and a return key 2g.

[0034] The decision key 2b is used by a user, for example, to carry out search and decide a headword. The letter keys 2c are used by a user, for example, to input letters and are constituted of A to Z keys in the embodiment.

[0035] The cursor key 2e is used by a user, for example, to move a highlighted part displayed in a screen, namely, to move a cursor therein. In the embodiment, any of the up direction, the down direction, the left direction and the right direction can be specified with the cursor key 2e. The return key 2g is used by a user, for example, to return to screens previously displayed.

[Internal Configuration]

[0036] Next, the internal configuration of the electronic dictionary 1 is described. FIG. 2 is a block diagram showing the internal configuration of the electronic dictionary 1.

[0037] As shown in FIG. 2, the electronic dictionary 1 includes a display unit 40, an input unit 30, a sound output unit 70, a storage medium reading unit 60, a CPU (Central Processing Unit) 20 and a storage unit 80, and these units are connected to each other via a bus to perform data communication therebetween.

[0038] The display unit 40 includes the main display 10 and the secondary display 11, and the main display 10 and the secondary display 11 each display various pieces of information thereon on the basis of display signals input from the CPU 20.

[0039] The input unit 30 includes the key set 2 and the touch panel 110 and outputs signals corresponding to pressed keys or pressed points on the touch panel 110 to the CPU 20.

[0040] The sound output unit 70 includes the speaker 13, and the speaker 13 outputs sounds on the basis of sound output signals input from the CPU 20.

[0041] The storage medium reading unit 60 includes the card slot 12 and reads information from the external information storage medium 12a attached to the card slot 12 or stores (records) information in the external information storage medium 12a.

[0042] The external information storage medium 12a stores therein a dictionary database(s) 820 and a sound teaching material content(s) 9.
The data structures of the dictionary database 820 and the sound teaching material content 9 are the same as those of a dictionary database 820 and a sound teaching material content 9 stored in the storage unit 80 described below, and hence details thereof are omitted herein.

[0043] The storage unit 80 is a memory in which programs and data to realize various functions of the electronic dictionary 1 are stored and which functions as a work area of the CPU 20. In the embodiment, the storage unit 80 stores a video playback program 81, a dictionary database set 82, a sound teaching material content set 83, a set-content-for-video-repeat-learning storage table 84 and the like.

[0044] The video playback program 81 is a program for the CPU 20 to perform sound learning processing (see FIGS. 3 to 5) described below.

[0045] The dictionary database set 82 includes a plurality of dictionary databases 820. The dictionary databases 820 each include a plurality of pieces of headword information in each of which a headword is correlated with its explanation information.

[0046] The sound teaching material content set 83 includes a plurality of sound teaching material contents 9.

[0047] The sound teaching material contents 9 each include a sound-attached video 91 and a sound text 92.

[0048] The sound-attached video 91 is a video including sounds and, in the embodiment, constituted of a plurality of sound-attached video segments 910 which are continuous in terms of time. In the embodiment, the sound-attached video 91 is divided by sentences (sentence by sentence) of the sounds included therein, whereby the sound-attached video segments 910 are formed.

[0049] The sound text 92 is text data corresponding to the sounds included in the sound-attached video 91 and is formed by converting the sounds into texts in the language of the sounds. In the embodiment, the sound text 92 is constituted of a plurality of sound text segments 920 corresponding to the sound-attached video segments 910 one-to-one. It is unnecessary that the content of each sound text segment 920 exactly match the sound content of its corresponding sound-attached video segment 910. Hence, the content of each sound text segment 920 may be an abbreviated version formed by omitting parts irrelative to learning (i.e. language learning) from the complete content thereof. Further, the sound text segments 920 may include, in addition to the texts in the language of the sounds included in the sound-attached video 91, texts translated from the texts in the language of the sounds to another language.

[0050] The set-content-for-video-repeat-learning storage table 84 stores therein the set contents of setting items for a learning mode (hereinafter a video repeat learning mode, see FIGS. 4 and 5) in which a predetermined sound-attached video segment 910 is played one or multiple times. In the embodiment, the setting items for the video repeat learning mode include a playback number and a with-or-without repetition. The playback number is a setting item about the number of times the predetermined sound-attached video segment 910 is played, and the with-or-without repetition is a setting item about whether or not a silent time for a user to do repetition is provided after each time the sound-attached video segment 910 is played.

The silent time is provided when the with-or-without repetition is ON, and the silent time is not provided when the with-or-without repetition is OFF. In the embodiment, in the case in which the silent time is provided after each time a sound-attached video segment 910 is played, the sound text segment 920 for the sound-attached video segment 910 is displayed on the main display 10 during the silent time (Step S42 in FIG. 6 described below).

[0051] The CPU 20 performs various types of processing based on predetermined programs on the basis of commands input thereinto, transfers the commands and/or data to functional units and controls the electronic dictionary 1 as a whole. More specifically, the CPU 20 reads a program from various programs stored in the storage unit 80 on the basis of, for example, an operation signal input from the input unit 30 and performs processing in accordance with the read program. Then, the CPU 20 stores the result of the processing in the storage unit 80 and also outputs the result to the sound output unit 70 and/or the display unit 40 as needed.

[Action]

[0052] Next, the action of the electronic dictionary 1 is described with reference to the drawings.

[Sound Learning Processing]

[0053] FIGS. 3 to 5 are flowcharts of the sound learning processing performed by the CPU 20 reading the video playback program 81.

[0054] As shown in FIG. 3, in the sound learning processing, first, the CPU 20 displays titles of sound-attached videos 91 included in the sound teaching material content set 83 on the main display 10 in a list form and specifies a title (i.e. a sound-attached video 91) in the list of the titles of the sound-attached videos 91 on the basis of a user operation (Step S1).

[0055] Next, the CPU 20 moves to a normal playback mode for sound-attached videos and reads the sound-attached video 91, the title of which is specified (hereinafter a specific sound-attached video 91S), from the storage unit 80 to make the display unit 40 and the sound output unit 70 play the specific sound-attached video 91S (Step S2). At the time, the CPU 20 forms an information display area E1 at the edge part on the right on the main display 10 and forms an icon display area E2 at the edge part on the left on the main display 10 (see FIG. 8A).

[0056] Next, the CPU 20 displays a display item Ha (see FIG. 8A) about a state of video playback in the information display area E1 (Step S3). The display item Ha about the state of video playback includes: time (hereinafter an elapsed playback time) having elapsed since start of playback of the specific sound-attached video 91S, namely, time having been required to play the specific sound-attached video 91S from the beginning to a point currently being played; time (hereinafter a required playback time) required to play the whole specific sound-attached video 91S; and a volume level.

[0057] Next, the CPU 20 displays an explanation Hb (see FIG. 8A) about operations specific to the normal playback mode in the information display area E1 (Step S4). The explanation Hb about the operations specific to the normal playback mode includes an explanation that an operation on the right arrow of the cursor key 2e corresponds to an operation for a fast-forwarding command and an explanation that an operation on the left arrow of the cursor key 2e corresponds to an operation for a rewinding command.
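As an editorial aid only, not part of the specification: a small Python sketch of the data relationships described in paragraphs [0047] to [0050], in which a sound-attached video 91 is split sentence by sentence into segments 910, each segment corresponds one-to-one to a sound text segment 920, and the repeat-learning settings hold a playback number and a with-or-without-repetition flag. The class and field names are the editor's assumptions, chosen to mirror the reference numerals.

```python
# Editorial sketch of the content data model (assumed names mirroring reference numerals).
from dataclasses import dataclass
from typing import List


@dataclass
class SoundAttachedVideoSegment:      # 910: one sentence of video together with its sound
    clip: bytes
    required_playback_time_s: float   # time needed to play the whole segment


@dataclass
class SoundTextSegment:               # 920: text for the sounds of one segment
    text: str
    translation: str = ""             # optional translation into another language


@dataclass
class SoundTeachingMaterialContent:   # 9: one sound-attached video 91 plus its sound text 92
    title: str
    video_segments: List[SoundAttachedVideoSegment]
    text_segments: List[SoundTextSegment]

    def __post_init__(self) -> None:
        # the segments 910 and 920 correspond one-to-one
        assert len(self.video_segments) == len(self.text_segments)


@dataclass
class RepeatLearningSettings:         # 84: set-content-for-video-repeat-learning storage table
    playback_number: int = 1          # how many times a segment is played
    with_repetition: bool = False     # whether a silent time for repetition follows each playback
```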
[0058] Next, the CPU 20 displays an explanation Hc (see FIG. 8A) about operations during video playback in the information display area E1 (Step S5). The explanation Hc about the operations during video playback includes an explanation that an operation on the return key 2g corresponds to an operation for a pause command.

[0059] Next, the CPU 20 determines whether or not a pause command is made through an operation on the return key 2g (Step S6). When determining that a pause command is not made (Step S6; NO), the CPU 20 determines whether or not the specific sound-attached video 91S has been played to the end (Step S7).

[0060] When determining that the specific sound-attached video 91S has not been played to the end yet (Step S7; NO), the CPU 20 moves to Step S2. On the other hand, when determining that the specific sound-attached video 91S has been played to the end (Step S7; YES), the CPU 20 ends the sound learning processing.

[0061] When determining that a pause command is made through an operation on the return key 2g (Step S6; YES), the CPU 20 pauses the specific sound-attached video 91S to stop the sound with an image (still image) of the paused point displayed on the main display 10 (Step S11). At the time, in the information display area E1 of the main display 10, the display item Ha about the state of video playback (the elapsed playback time, the required playback time, the volume level and the like), the explanation Hb about the operations specific to the normal playback mode (the explanation that an operation on the right arrow of the cursor key 2e corresponds to an operation for a fast-forwarding command and the like) and the explanation Hc about the operations during video playback (the explanation that an operation on the return key 2g corresponds to an operation for a pause command and the like) are still displayed.

[0062] Next, the CPU 20 deletes (hides) the explanation Hc about the operations during video playback, which is displayed in the information display area E1, therefrom and displays an explanation Hd (see FIG. 8B) about operations during pause therein instead (Step S12). The explanation Hd about the operations during pause includes an explanation that an operation on the decision key 2b corresponds to an operation for a playback restart command.

[0063] Next, the CPU 20 temporarily stores information (for example, the elapsed playback time) about the paused point of the specific sound-attached video 91S in the storage unit 80 and then displays a playback execution icon Ia and a text display icon Ib in the icon display area E2 (Step S14, see FIG. 8B).

[0064] The playback execution icon Ia is an icon which is operated to restart playing the specific sound-attached video 91S. In the embodiment, as indicated by the explanation Hd about the operations during pause, the specific sound-attached video 91S restarts through not only a touch operation on the playback execution icon Ia but also an operation on the decision key 2b.

[0065] The text display icon Ib is an icon which is operated to display a sound text segment(s) 920 for a sound-attached video segment(s) 910.

[0066] Next, the CPU 20 determines whether or not a touch operation on the playback execution icon Ia or an operation on the decision key 2b is performed (Step S15). When determining that either of them is performed (Step S15; YES), the CPU 20 restarts the specific sound-attached video 91S from the paused point, at which the specific sound-attached video 91S is paused at Step S11, and then moves to Step S2.

[0067] On the other hand, when determining that neither of them is performed (Step S15; NO), the CPU 20 determines whether or not a touch operation on the text display icon Ib is performed (Step S17).

[0068] When determining that a touch operation on the text display icon Ib is not performed (Step S17; NO), the CPU 20 moves to Step S15.

[0069] On the other hand, when determining that a touch operation on the text display icon Ib is performed (Step S17; YES), as shown in FIG. 4, the CPU 20 sets a sound text segment 920 for a sound-attached video segment 910 including the paused point as a target for first display (Step S20).

[0070] Next, the CPU 20 deletes, among the displayed contents on the main display 10, the displayed contents (the video (still image) and the information display area E1 or a list of sound text segments 920) except for the icon display area E2 from the main display 10 to move to a sound text display mode and reads the sound text segments 920 from the storage unit 80 and then displays the sound text segments 920 for the respective sound-attached video segments 910 on the main display 10 in a list form in order, namely, in order of the sound-attached video segments 910 being played, with the sound text segment 920 as the target for first display displayed first (Step S21, see FIG. 8C). When the information display area E1 is deleted from the main display 10, the display item Ha about the state of video playback, the explanation Hb about the operations specific to the normal playback mode and the explanation Hd about the operations during pause are deleted from the main display 10 accordingly.

[0071] Next, the CPU 20 once deletes the icons (the text display icon Ib and the like), which are displayed in the icon display area E2, therefrom and displays the playback execution icon Ia and a video repeat learning icon Ic therein instead (Step S22, see FIG. 8C). At the time, the CPU 20 also displays the video repeat learning icons Ic at the beginnings of the sound text segments 920, which are displayed on the main display 10 in a list form. The video repeat learning icons Ic are each an icon which is operated to move the action mode of the electronic dictionary 1 to the above-described video repeat learning mode (the mode in which a predetermined sound-attached video segment 910 is played one or multiple times) or the like.

[0072] Next, the CPU 20 determines whether or not a touch operation on the playback execution icon Ia or an operation on the decision key 2b is performed (Step S23). When determining that either of them is performed (Step S23; YES), the CPU 20 deletes the sound text segments 920, which are displayed on the main display 10, from the main display 10 and restarts the specific sound-attached video 91S from the paused point (Step S24), at which the specific sound-attached video 91S is paused at Step S11, and then moves to Step S2 to move to the normal playback mode. Consequently, the sound text segments 920 are prevented from being displayed during video playback.
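For illustration only (editor's sketch, not from the specification): Step S20 needs the sound text segment whose sound-attached video segment contains the paused point. Assuming each segment's required playback time is known, the containing segment can be found from the elapsed playback time as below; the function name and the use of cumulative durations are assumptions.

```python
from typing import Sequence


def segment_index_at(elapsed_playback_time_s: float,
                     segment_durations_s: Sequence[float]) -> int:
    """Return the index of the sound-attached video segment that contains
    the paused point, given the elapsed playback time of the whole video."""
    cumulative = 0.0
    for index, duration in enumerate(segment_durations_s):
        cumulative += duration
        if elapsed_playback_time_s < cumulative:
            return index
    return len(segment_durations_s) - 1   # paused at (or past) the end


# Example: pausing 7.5 s into a video whose segments last 3 s, 4 s and 5 s
# falls inside the third segment (index 2), so that segment's text would be
# displayed first in the list (Step S21).
assert segment_index_at(7.5, [3.0, 4.0, 5.0]) == 2
```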
[0073] On the other hand, when determining that neither of them is performed (Step S23; NO), the CPU 20 determines whether or not an up/down flick operation on the list of the sound text segments 920 displayed on the main display 10 or an operation on the up arrow or the down arrow of the cursor key 2e is performed (Step S25).

[0074] When determining that either of them is performed (Step S25; YES), the CPU 20 scrolls the list of the sound text segments 920 displayed on the main display 10 in a direction specified through the operation (Step S26, see FIG. 8D) and then moves to Step S23.

[0075] On the other hand, when determining that neither of them is performed (Step S25; NO), the CPU 20 determines whether or not a touch operation on any of the video repeat learning icons Ic is performed (Step S30).

[0076] When determining that a touch operation on any of the video repeat learning icons Ic is not performed (Step S30; NO), the CPU 20 moves to Step S23.

[0077] On the other hand, when determining that a touch operation on any of the video repeat learning icons Ic is performed (Step S30; YES), the CPU 20 deletes the icons (the video repeat learning icon Ic and the playback execution icon Ia), which are displayed in the icon display area E2, therefrom to move to the video repeat learning mode. Then, the CPU 20 displays and highlights a specific conversational sentence video playback execution icon Id in the icon display area E2, reads the set contents for the video repeat learning mode from the set-content-for-video-repeat-learning storage table 84 and then displays the set content of the with-or-without repetition with a repetition ON/OFF icon Ie and the set content of the playback number with a playback number icon If in the icon display area E2 (Step S31, see FIG. 9A). At the time, the CPU 20 displays an explanation window W1 for explaining functions of these icons (the specific conversational sentence video playback execution icon Id, the repetition ON/OFF icon Ie and the playback number icon If) at the bottom on the main display 10.

[0078] The specific conversational sentence video playback execution icon Id is an icon which is operated to play one or multiple times a sound-attached video segment 910 (hereinafter a specific sound-attached video segment 910S) for a sound text segment 920 (hereinafter a specific sound text segment 920S) specified in the list of the sound text segments 920 through a user operation. In the embodiment, the specific sound-attached video segment 910S is played one or multiple times through not only a touch operation on the specific conversational sentence video playback execution icon Id but also an operation on the decision key 2b. The repetition ON/OFF icon Ie is an icon which is operated to switch with-repetition and without-repetition (namely, to determine whether or not to provide the silent time with the specific sound text segment 920S displayed for a user to do repetition after each time the specific sound-attached video segment 910S is played). In the embodiment, the repetition ON/OFF icon Ie is highlighted when the set content of the with-or-without repetition is ON, and the repetition ON/OFF icon Ie is displayed as usual (not highlighted) when the set content of the with-or-without repetition is OFF. The playback number icon If is an icon which is operated to change the playback number. In the embodiment, each time the playback number icon If is operated, the playback number changes from one to three, five, one, three and so on in the order named.
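A minimal sketch (editor's assumption, not part of the specification) of the two setting operations described for the repetition ON/OFF icon Ie and the playback number icon If: toggling the repetition flag, and cycling the playback number through 1, 3, 5, 1, ... each time the icon is operated.

```python
# Editorial sketch of the video-repeat-learning setting updates (Steps S33-S34 behavior).
PLAYBACK_NUMBER_CYCLE = (1, 3, 5)     # the order named in the embodiment


def next_playback_number(current: int) -> int:
    """Cycle the playback number 1 -> 3 -> 5 -> 1 on each operation of icon If."""
    if current not in PLAYBACK_NUMBER_CYCLE:
        return PLAYBACK_NUMBER_CYCLE[0]           # fall back to the first value (assumption)
    position = PLAYBACK_NUMBER_CYCLE.index(current)
    return PLAYBACK_NUMBER_CYCLE[(position + 1) % len(PLAYBACK_NUMBER_CYCLE)]


def toggle_repetition(with_repetition: bool) -> bool:
    """Switch with-repetition / without-repetition on each operation of icon Ie."""
    return not with_repetition


assert next_playback_number(5) == 1
assert toggle_repetition(False) is True
```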
[0079] Next, the CPU 20 specifies, in the list of the sound text segments 920 displayed on the main display 10, a sound text segment 920 displayed first (on the top of the list) on the main display 10 as the specific sound text segment 920S and highlights the video repeat learning icon Ic displayed at the beginning of the specific sound text segment 920S (Step S32).

[0080] Next, as shown in FIG. 5, the CPU 20 determines whether or not a touch operation on the repetition ON/OFF icon Ie or a touch operation on the playback number icon If is performed (Step S33). When determining that neither of them is performed (Step S33; NO), the CPU 20 moves to Step S35.

[0081] On the other hand, when determining that at least one of them is performed (Step S33; YES), the CPU 20 changes the set content(s) for the video repeat learning mode in response to the touch operation(s) and updates the contents stored in the set-content-for-video-repeat-learning storage table 84 and the contents displayed on the main display 10 with respect to the repetition ON/OFF icon Ie and/or the playback number icon If (Step S34). Thus, the playback number is changed and specified on the basis of a user operation. Also, the with-repetition and the without-repetition, namely, permission and forbiddance of the processing at Step S42 described below, are switched (i.e. whether or not to provide the silent time with the specific sound text segment 920S displayed for a user to do repetition after each time the specific sound-attached video segment 910S is played is determined) on the basis of a user operation.

[0082] Next, the CPU 20 determines whether or not an operation on the return key 2g is performed (Step S35). When determining that an operation on the return key 2g is performed (Step S35; YES), the CPU 20 deletes the explanation window W1, which is displayed on the main display 10, from the main display 10 and then, as shown in FIG. 4, moves to Step S22 to move to the sound text display mode.

[0083] On the other hand, as shown in FIG. 5, when determining that an operation on the return key 2g is not performed (Step S35; NO), the CPU 20 determines whether or not an up/down flick operation on the list of the sound text segments 920 displayed on the main display 10 or an operation on the up arrow or the down arrow of the cursor key 2e is performed (Step S36).

[0084] When determining that neither of them is performed (Step S36; NO), the CPU 20 moves to Step S38.

[0085] On the other hand, when determining that either of them is performed (Step S36; YES), the CPU 20 scrolls the list of the sound text segments 920 displayed on the main display 10 in a direction specified through the operation to newly specify another sound text segment 920 as the specific sound text segment 920S and highlights a video repeat learning icon Ic which is for the new specific sound text segment 920S instead of the video repeat learning icon Ic highlighted so far (Step S37).

[0086] Next, the CPU 20 determines whether or not a touch operation on the specific conversational sentence video playback execution icon Id, a touch operation on any of the video repeat learning icons Ic or an operation on the decision key 2b is performed (Step S38). When determining that none of them is performed (Step S38; NO), the CPU 20 moves to Step S33.

[0087] On the other hand, when determining that any of them is performed (Step S38; YES), the CPU 20 determines whether or not the set content of the with-or-without repetition is ON (Step S40). At Step S38, when the user touches the video repeat learning icon Ic displayed at the beginning of a sound text segment 920, the sound text segment 920 is specified as the specific sound text segment 920S.
[0088] When determining that the set content of the with-or-without repetition is ON (Step S40; YES), the CPU 20 performs conversational sentence video playback execution processing (Step S41).

[0089] More specifically, as shown in FIG. 6, in the conversational sentence video playback execution processing, first, the CPU 20 specifies a sound-attached video segment 910 for the specific sound text segment 920S and sets the sound-attached video segment 910 as the specific sound-attached video segment 910S (Step T1). At the time, the CPU 20 deletes the explanation window W1 and the list of the sound text segments 920, which are displayed on the main display 10, from the main display 10. Consequently, the sound text segments 920 are prevented from being displayed during video playback at Step T2. Further, the CPU 20 forms the information display area E1 at the edge part on the right on the main display 10 and deletes the icons (the specific conversational sentence video playback execution icon Id, the repetition ON/OFF icon Ie and the playback number icon If), which are displayed in the icon display area E2, therefrom.

[0090] Next, the CPU 20 plays the specific sound-attached video segment 910S (Step T2) and displays the display item Ha about the state of video playback (the elapsed playback time, the required playback time, the volume level and the like) in the information display area E1 (Step T3).

[0091] Next, the CPU 20 displays the explanation Hc about the operations during video playback (the explanation that an operation on the return key 2g corresponds to an operation for a pause command and the like) in the information display area E1 (Step T4). In the video repeat learning mode, the explanation Hb about the operations specific to the normal playback mode (the explanation that an operation on the right arrow of the cursor key 2e corresponds to an operation for a fast-forwarding command and the like) is not displayed in the information display area E1, which is different from the normal playback mode. Consequently, the explanation Hb about the operations specific to the normal playback mode functions as a mark to recognize which is played, a sound-attached video 91 or a sound-attached video segment 910.

[0092] Next, the CPU 20 determines whether or not a pause command is made through an operation on the return key 2g (Step T5). When determining that a pause command is made (Step T5; YES), the CPU 20 pauses the specific sound-attached video segment 910S to stop the sound with the image (still image) of the paused point displayed on the main display 10 (Step T11). At the time, in the information display area E1 of the main display 10, the display item Ha about the state of video playback (the elapsed playback time, the required playback time, the volume level and the like) and the explanation Hc about the operations during video playback (the explanation that an operation on the return key 2g corresponds to an operation for a pause command and the like) are still displayed.

[0093] Next, the CPU 20 deletes the explanation Hc about the operations during video playback (the explanation that an operation on the return key 2g corresponds to an operation for a pause command and the like), which is displayed in the information display area E1, therefrom and displays the explanation Hd about the operations during pause (the explanation that an operation on the decision key 2b corresponds to an operation for a playback restart command) therein instead (Step T12).
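Purely illustrative (editor's sketch under stated assumptions, not the applicant's method): the pause handling in Steps T5 and T11 to T15 amounts to remembering the paused position within the specific sound-attached video segment and restarting playback from it. A minimal state holder for that bookkeeping:

```python
from dataclasses import dataclass


@dataclass
class SegmentPlaybackState:
    """Editorial sketch of pause/restart bookkeeping for one segment (Steps T5, T11-T15)."""
    restart_point_s: float = 0.0   # playback restart point within the segment
    paused: bool = False

    def pause(self, elapsed_in_segment_s: float) -> None:
        # Step T11: stop the sound, keep the still image; remember the paused point
        self.paused = True
        self.restart_point_s = elapsed_in_segment_s

    def resume(self) -> float:
        # Step T15: the paused point becomes the playback restart point, then play again (Step T2)
        self.paused = False
        return self.restart_point_s


state = SegmentPlaybackState()
state.pause(elapsed_in_segment_s=2.4)
assert state.resume() == 2.4
```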
[0094] Next, the CPU 20 displays the playback execution icon Ia in the icon display area E2 (Step T13). In the video repeat learning mode, the text display icon Ib is not displayed in the icon display area E2 during pause, which is different from the normal playback mode. Consequently, the text display icon Ib functions as a mark to recognize which is paused, a sound-attached video 91 or a sound-attached video segment 910.

[0095] Next, the CPU 20 determines whether or not a touch operation on the playback execution icon Ia or an operation on the decision key 2b is performed (Step T14). When determining that either of them is performed (Step T14; YES), the CPU 20 sets the paused point, at which the specific sound-attached video segment 910S is paused at Step T5, as a playback restart point of the specific sound-attached video segment 910S in the conversational sentence video playback execution processing (Step T15) and then moves to Step T2.

[0096] On the other hand, when determining that neither of them is performed (Step T14; NO), the CPU 20 determines whether or not an operation on the return key 2g is performed (Step T16).

[0097] When determining that an operation on the return key 2g is not performed (Step T16; NO), the CPU 20 moves to Step T14.

[0098] On the other hand, when determining that an operation on the return key 2g is performed (Step T16; YES), the CPU 20 sets the specific sound text segment 920S as a target for first display (Step T17).

[0099] Next, the CPU 20 deletes the video (still image), which is displayed on the main display 10, from the main display 10 to move to the sound text display mode, reads the sound text segments 920 from the storage unit 80 and displays the sound text segments 920 for the respective sound-attached video segments 910 on the main display 10 in a list form in order with the sound text segment 920 as the target for first display displayed first (Step T18) and then ends the conversational sentence video playback execution processing and moves to Step S31 (see FIG. 4). At the time, the CPU 20 deletes the information display area E1, which is formed on the main display 10, from the main display 10. Accordingly, the display item Ha about the state of video playback and the explanation Hd about the operations during pause are deleted from the main display 10. Further, the CPU 20 deletes the playback execution icon Ia, which is displayed in the icon display area E2, therefrom.

[0100] When determining that a pause command is not made (Step T5; NO), the CPU 20 determines whether or not the specific sound-attached video segment 910S has been played to the end (Step T6).

[0101] When determining that the specific sound-attached video segment 910S has not been played to the end yet (Step T6; NO), the CPU 20 moves to Step T2. On the other hand, when determining that the specific sound-attached video segment 910S has been played to the end (Step T6; YES), the CPU 20 ends the conversational sentence video playback execution processing.

[0102] When ending the conversational sentence video playback execution processing (Step S41), as shown in FIG. 5, the CPU 20 performs text-display-for-conversation-playback-time processing (Step S42).

[0103] More specifically, as shown in FIG. 7, in the text-display-for-conversation-playback-time processing, first, the CPU 20 deletes the video (still image), which is displayed on the main display 10, from the main display 10, reads the sound text segments 920 from the storage unit 80 and displays the sound text segments 920 for the respective sound-attached video segments 910 on the main display 10 in a list form in order with the specific sound text segment 920S displayed first (Step U1).
Consequently, each time the specific sound-attached video segment 910S is played by the conversational sentence video playback execution processing, the sound text segments 920 are displayed on the main display 10 in a list form. At the time, the CPU 20 deletes the information display area E1, which is formed on the main display 10, from the main display 10. Accordingly, the display item Ha about the state of video playback and the explanation Hd about the operations during pause are deleted from the main display 10. Further, the CPU 20 displays and highlights the specific conversational sentence video playback execution icon Id in the icon display area E2 and displays the repetition ON/OFF icon Ie and the playback number icon If therein. Further, the CPU 20 displays the explanation window W1 for explaining the functions of these icons (the specific conversational sentence video playback execution icon Id, the repetition ON/OFF icon Ie and the playback number icon If) at the bottom on the main display 10.

[0104] Next, the CPU 20 displays a message to urge a user to repeat the specific sound text segment 920S, for example, a message window W2 for a message "Repeating: Repeat the played conversation." (see FIG. 12C), on the main display 10 (Step U2). Following the message, the user reads aloud or silently the specific sound text segment 920S, thereby repeating the sound content of the specific sound-attached video segment 910S.

[0105] Next, the CPU 20 determines whether or not an operation on the return key 2g is performed (Step U3). When determining that an operation on the return key 2g is performed (Step U3; YES), the CPU 20 ends the text-display-for-conversation-playback-time processing and moves to Step S20 to move to the normal playback mode (see FIG. 4).

[0106] On the other hand, when determining that an operation on the return key 2g is not performed (Step U3; NO), the CPU 20 determines whether or not a time (time length) equal to a required playback time for the specific sound-attached video segment 910S has elapsed (Step U4). At Step U4, the CPU 20 may determine whether or not a time length corresponding to the required playback time (for example, a time length increased or decreased by predetermined seconds from the required playback time) for the specific sound-attached video segment 910S has elapsed, or may determine whether or not a time length corresponding to the length (the number of letters/words) of the specific sound text segment 920S has elapsed.

[0107] When determining that a time equal to the required playback time for the specific sound-attached video segment 910S has not elapsed yet (Step U4; NO), the CPU 20 moves to Step U3. On the other hand, when determining that a time equal to the required playback time for the specific sound-attached video segment 910S has elapsed (Step U4; YES), the CPU 20 deletes the message window W2, which is displayed on the main display 10, from the main display 10 and ends the text-display-for-conversation-playback-time processing.

[0108] When ending the text-display-for-conversation-playback-time processing, as shown in FIG. 5, the CPU 20 determines whether or not the specific sound-attached video segment 910S has been played for the playback number stored in the set-content-for-video-repeat-learning storage table 84 (Step S43).
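To summarize Steps S40 to S43 and U1 to U4 in executable form, an editorial sketch (assumed function names, simplified blocking waits, no pause or return-key handling): the specific segment is played the set number of times, and when repetition is ON each playback is followed by a silent time, equal to the segment's required playback time, during which the sound text segments are listed and the "Repeating" message is shown.

```python
import time
from typing import Callable


def run_video_repeat_learning(play_segment: Callable[[], None],
                              show_text_list: Callable[[], None],
                              show_repeat_message: Callable[[], None],
                              required_playback_time_s: float,
                              playback_number: int,
                              with_repetition: bool) -> None:
    """Editorial sketch of the video repeat learning loop (Steps S40-S43, simplified)."""
    for _ in range(playback_number):                 # Step S43: repeat for the set playback number
        play_segment()                               # Step S41 / T2: play the specific segment 910S
        if with_repetition:                          # Step S40: repetition set to ON?
            show_text_list()                         # Step U1: list the texts, specific segment first
            show_repeat_message()                    # Step U2: "Repeating: Repeat the played conversation."
            time.sleep(required_playback_time_s)     # Step U4: silent time equal to the required playback time


# Example use with stub callbacks:
run_video_repeat_learning(
    play_segment=lambda: print("playing segment 910S"),
    show_text_list=lambda: print("showing sound text segments 920 as a list"),
    show_repeat_message=lambda: print("Repeating: Repeat the played conversation."),
    required_playback_time_s=0.1,
    playback_number=3,
    with_repetition=True,
)
```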
[0109] When determining that the specific sound-attached video segment 910S has not been played for the playback number stored in the set-content-for-video-repeat-learning storage table 84 yet (Step S43; NO), the CPU 20 moves to Step S40. Consequently, the specific sound-attached video