METHOD, COMPUTER PROGRAM AND APPARATUS FOR DETERMINING MOTION INFORMATION


FIELD OF THE INVENTION

The present invention relates to motion tracking. More particularly, the present invention relates to a method, a computer program and an apparatus for determining motion information of an object from a video signal.

BACKGROUND OF THE INVENTION

Motion capture is a term that describes the process of recording motion and translating the motion onto a digital model. Presently different kinds of commercial products are available which are based on various technologies, e.g. optical systems and non-optical systems. Some of the optical systems are based on the usage of special optical markers whose movements are recorded on video. To reproduce the movement, the video is analyzed using various techniques. Examples of the non-optical systems include e.g. various kinds of inertial sensors. When using inertial sensors, the measurement information has to be transmitted to a receiving entity either in a wired or a wireless manner.

The existing motion capture systems have several drawbacks. The systems, e.g. optical systems, are often complex, require much processing power and need special equipment to work. Furthermore, if non-optical systems are used, a special receiver is needed for receiving the transmission from the wired or wireless transmitter.

Based on the above there is an obvious need for a solution that would mitigate and/or alleviate the above drawbacks.

SUMMARY OF THE INVENTION

According to an aspect there is provided a method for determining motion information of an object from a video signal. The method comprises: receiving the video signal, the video signal comprising a plurality of frames representing motion; analysing the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and determining motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

According to another aspect of the invention there is provided a computer program comprising program code which, when executed on a processor, implements the method of any of claims 1-10.

According to another aspect of the invention there is provided an apparatus for determining motion information of an object from a video signal. The apparatus comprises: a processor; means for receiving the video signal; and a memory comprising a computer program, which when executed by the processor is configured to: receive the video signal, the video signal comprising a plurality of frames representing motion; analyse the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and determine motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

According to yet another aspect of the invention there is provided a system for determining motion information of an object from a video signal. The system comprises: a video camera; at least one identifiable object attachable to a target; an

apparatus comprising a processor; means for receiving the video signal; and a memory comprising a computer program, which when executed by the processor is configured to: receive the video signal, the video signal comprising a plurality of frames representing motion; analyse the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and determine motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

The advantages of the invention relate to simplicity. With the invention it is possible to track three-dimensional motion by using only one video camera. Furthermore, since the motion information is coded into a video signal, the decoding can be done regardless of the location.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this specification, illustrate embodiments of the invention and together with the description help to explain the principles of the invention. In the drawings:

Figure 1 discloses a flow diagram of a method according to one embodiment of the invention,

Figure 2 discloses an initial arrangement according to one embodiment of the invention,

Figures 3A and 3B disclose an embodiment for decoding three-dimensional movement from a video signal,

Figures 4A and 4B disclose another embodiment for decoding three-dimensional movement from a video signal,

Figures 5A and 5B disclose another embodiment for decoding three-dimensional movement from a video signal,

Figure 6 discloses an apparatus according to one embodiment of the invention, and

Figure 7 discloses a system according to one embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings.

Figure 1 discloses a block diagram illustrating one embodiment of the invention. At step 100, a video signal is received. The video signal comprises a plurality of frames. The signal itself may have any suitable frame rate, e.g. 24 fps, 30 fps, higher or even lower. Furthermore, the pixel size of a single frame depends on the camera used to record the video signal. At step 102 the video signal is analysed to determine at least one identifiable object in the plurality of frames of the video signal. The identifiable object refers e.g. to a particular shape, colour or optical element which is identifiable from a frame of the video signal. At step 104 motion information of the object orthogonal to the two-dimensional planar motion of the video signal is determined by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal. In general this means that motion information of the object orthogonal to the two-dimensional planar motion of the video signal can be determined from a two-dimensional video signal. This makes it possible e.g. to capture and determine movement information of the object in all three dimensions based on a video signal from a single camera.
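For illustration only, the three steps of Figure 1 could be realised for a simple case as in the following Python sketch. The sketch is not part of the claimed embodiments; it assumes the OpenCV library (cv2) is available, that the identifiable object is the brightest blob in each frame, and that its apparent area is the tracked optical property. All function and variable names are illustrative.

import cv2

def track_orthogonal_motion(video_path, brightness_threshold=200):
    # Step 100: receive the video signal as a sequence of frames.
    capture = cv2.VideoCapture(video_path)
    previous_area = None
    orthogonal_motion = []  # per-frame relative size change of the object
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Step 102: determine one identifiable object, here the largest bright blob.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        marker = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(marker)
        # Step 104: track changes in an optical property (apparent size); growth is
        # read as motion toward the camera, shrinkage as motion away from it.
        if previous_area is not None and previous_area > 0:
            orthogonal_motion.append(area / previous_area - 1.0)
        previous_area = area
    capture.release()
    return orthogonal_motion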

Figure 2 illustrates an initial arrangement of the invention at hand. A video camera 24 is recording movements 20 of at least one object. The movements happen three-dimensionally, but the video camera 24 captures only a two-dimensional motion plane 22 of the movements 20. Thus, the objective of the invention is how to capture three-dimensional movements from a video signal presenting only a two-dimensional motion plane.

Figures 3A and 3B disclose an embodiment in which motion information of an identifiable object 30 orthogonal to the two-dimensional planar motion component of the object 30 is determined by tracking changes in optical properties of the object 30. In the embodiment of Figures 3A and 3B, the two-dimensional motion component of the object 30 is determined from the video signal by using e.g. one or more markers (not shown in Figures 3A and 3B). The marker, e.g. a passive marker, is coated e.g. with a retroreflective material to reflect light back to a camera. The two-dimensional motion can then be tracked by tracking the position of the marker in the two-dimensional planar video signal.

Figures 3A and 3B represent consecutive frames of the video signal, and motion information of the object 30 orthogonal to the two-dimensional planar motion component is calculated from changes in the relative size of the at least one identifiable object 30 in the frames. In Figure 3A the object 30 is represented with a black circle having position coordinates x1, y1, z1. It is evident to a skilled person that there might not be any fixed coordinate system in use. A more important piece of information is the difference in coordinates (positions) between two consecutive frames. In Figure 3B the object 30 has moved to position (x2, y2, z2) compared to the position in the previous frame (Figure 3A).

In this embodiment only the object 30 moves and the camera stays in place.

In Figure 3A the relative size of the object 30 in the video signal has a first value. In Figure 3B the relative size of the object 30 is bigger. Since the camera stays in place, the change in the relative size of the identifiable object 30 represents movement of the object in a direction orthogonal to the two-dimensional planar motion component determined e.g. from the earlier mentioned marker. In short, the bigger the object 30 (the black circle in this case) is in a data frame, the closer it is to the camera.

Furthermore, the accuracy of determining motion in the direction orthogonal to the two-dimensional planar motion component may also depend on the total pixel size and the frame rate of the camera. For example, an inexpensive web camera may have a frame rate of 30 frames per second (fps) and a total pixel size of 640x480. It is evident that if the object moves fast, the movement is captured more accurately with a higher fps value. Furthermore, the size of a single pixel in a 640x480 web camera is much bigger than e.g. in a better three-megapixel camera (e.g. 2048x1536 pixels). Thus, the motion information of the object 30 orthogonal to the two-dimensional planar motion component becomes more accurate as the total pixel size and the frame rate of the camera increase.

Furthermore, it was mentioned above that a marker is separate from the light source. In another embodiment, the marker and the light source comprise a single element, i.e. the light source also acts as a marker.
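The relation between relative size and distance described above can be made concrete under a pinhole-camera assumption (an assumption of this sketch, not a statement of the embodiment): the apparent size of the object is roughly inversely proportional to its distance from the camera, so a size ratio between two frames gives a depth ratio. The reference depth and pixel sizes below are hypothetical example values.

def depth_change_from_size(size_prev_px, size_curr_px, depth_prev_m):
    # Pinhole approximation: apparent size s is proportional to 1/Z, hence
    # Z_curr = Z_prev * (s_prev / s_curr).
    depth_curr_m = depth_prev_m * (size_prev_px / size_curr_px)
    # Negative change = motion toward the camera, positive = away from it.
    return depth_curr_m, depth_curr_m - depth_prev_m

# Example: a marker 120 px wide at an assumed 2.0 m grows to 150 px wide,
# corresponding to roughly 0.4 m of motion toward the camera.
new_depth, dz = depth_change_from_size(120.0, 150.0, 2.0)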

Figures 4A and 4B disclose another embodiment of the invention at hand. In the embodiment of Figures 4A and 4B, motion of an object 40 in a direction orthogonal to the two-dimensional planar motion component is determined based on the optical properties of the object 40.

In the embodiment of Figures 4A and 4B, the two-dimensional motion component of the object 40 is determined from the video signal by using e.g. one or more markers 42. The marker 42, e.g. a passive marker, is coated e.g. with a retroreflective material to reflect light back to a camera. The two-dimensional motion can then be tracked by tracking the position of the marker 42 in the two-dimensional planar video signal.

In the embodiment of Figures 4A and 4B, the object 40 is a light source. The light source is e.g. a light emitting diode (LED). Depth information, i.e. the third motion component of the object 40, is determined from the signals of the light source in the video signal. The object 40 (light source) is attached to a target. The target is e.g. a hand, a fist, a foot or any other target the movement of which is to be tracked. As said above, the movement of the target is coded in a predetermined manner into signals of the light source 40 so that it is possible to determine the last needed motion component from the video signal. The coding is based on e.g. the usage of at least one accelerometer. The accelerometer itself may be based on any possible and applicable technology. The measurements of the accelerometer are coded into light signals so that it is possible to decode the previously encoded motion information afterwards. The coding is executed so that both the quantity and the direction can be decoded from the signals of the light source 40. Furthermore, the intensity of light may also be used in the coding procedure, e.g. to indicate whether the motion is away from or toward the camera.

In short, a video camera is used to record movements of a target. The target is e.g. a body part of a human being, e.g. a fist.

The body part is equipped with at least one accelerometer which provides information about the moving body part. The accelerometer tracks movements of the body part, e.g. a fist. Measurements of the accelerometer are coded into a light signal of at least one light source 40 in a predetermined manner. It is evident that the target may be equipped with multiple accelerometers in different locations and thus also with multiple light sources 40 providing coded movement information of the multiple accelerometers.

Since the video camera is used to record movements of the target, it also tracks the light source 40. In other words, the coded light signal of the light source 40 appears on the video signal. The video signal is then transmitted to a recipient via a communication network, e.g. the Internet. Alternatively, the video camera may be locally connected to a receiving processing device. At the receiving end, a data processing device, e.g. a computer, is provided with a software application which is arranged to analyse the video signal to track the two-dimensional planar motion component of the object. The two-dimensional planar motion component is determined e.g. based on signals provided by the at least one passive marker or by any other suitable technique. Several techniques for tracking two-dimensional planar motion of a target are known in the art, and therefore they are not described here in more detail. In addition to the two-dimensional planar motion component, the software application determines motion in a direction orthogonal to the two-dimensional planar motion component from the signals of the light source, the signals of the light source having been modulated based on motion of the object, e.g. the fist already mentioned above.
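The description above leaves the exact coding scheme open. Purely as one hypothetical example, the sketch below maps the accelerometer reading linearly onto an 8-bit LED intensity (mid-scale meaning zero acceleration), and the receiving software inverts that mapping from the per-frame brightness of the light source 40 and integrates it into the orthogonal motion component. The scale values and names are assumptions, not part of the described embodiment.

MAX_ACCEL = 20.0   # assumed full-scale acceleration, m/s^2
MID_LEVEL = 128    # LED intensity encoding zero acceleration (8-bit scale)

def encode_acceleration(accel):
    # Transmitter side: clamp and map a signed acceleration to an LED intensity.
    accel = max(-MAX_ACCEL, min(MAX_ACCEL, accel))
    return int(round(MID_LEVEL + (accel / MAX_ACCEL) * 127))

def decode_acceleration(intensity):
    # Receiver side: recover the signed acceleration from the intensity of the
    # light source observed in one video frame.
    return (intensity - MID_LEVEL) / 127.0 * MAX_ACCEL

def integrate_orthogonal_motion(frame_intensities, frame_rate):
    # Double-integrate the decoded accelerations to obtain displacement along
    # the direction orthogonal to the two-dimensional planar motion component.
    dt = 1.0 / frame_rate
    velocity = 0.0
    position = 0.0
    for level in frame_intensities:
        velocity += decode_acceleration(level) * dt
        position += velocity * dt
    return position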

Based on the two-dimensional planar motion component and the modulated light signals, the software application is able to determine movements three-dimensionally. The decoding algorithm used in the software application to decode light signals from the video signal is apparent to a skilled person. In short, the software application combines the two-dimensional information and the motion information from the light source into three-dimensional motion information.

Furthermore, it was mentioned above that a marker is separate from the light source. In another embodiment, the marker and the light source comprise a single element, i.e. the light source also acts as a marker.

A clear advantage of the solution disclosed in Figures 4A and 4B is that three-dimensional motion tracking can be achieved with only one camera and that there are no special requirements for the camera itself. Of course, it is evident that more accurate cameras may provide a more accurate video signal. A further advantage is that decoding can be done practically anywhere without special equipment; only the software application is needed to decode the video signal.

Figures 5A and 5B disclose another embodiment of the invention at hand. In the embodiment of Figures 5A and 5B, motion of an object in all three dimensions is determined based on the optical properties of the object.

In the embodiment of Figures 5A and 5B, the object is a set of three light sources: a red light source 50, a green light source 52 and a blue light source 54. Each of the light sources comprises e.g. a light emitting diode (LED). The individual light sources may be set so close to each other that they appear as a single point of light to an outside observer. Instead of three LED light sources it is possible to use e.g. layered displays (light sources) where the three RGB components are superimposed.

The light sources 50, 52, 54 are attached to a target. The target is e.g. a hand, a foot or any other target the movement of which is to be tracked. The movement of the target is coded in a predetermined manner into signals of the light sources 50, 52, 54 so that it is possible to determine the last needed motion component from the video signal.

The coding is based on e.g. the usage of at least one accelerometer. The object may comprise a built-in three-axis accelerometer for tracking motion. In another embodiment, the accelerometer is a component external to the object. The accelerometer itself may be based on any possible and applicable technology. The measurements of the accelerometer are coded into light signals so that it is possible to decode the previously encoded motion information afterwards. In this embodiment, each axis (X, Y, Z) is coded into a corresponding light source. For example, the X-axis is coded into the red light source 50, the Y-axis is coded into the green light source 52 and the Z-axis is coded into the blue light source 54. The coding is executed so that both the quantity and the direction can be decoded from the signals of the light sources 50, 52, 54. Furthermore, the intensity of light may also be used in the coding procedure, e.g. to indicate whether the motion is away from or toward the camera. In another embodiment, the intensities of the three light sources may be used in the coding procedure; the coding procedure may then e.g. use the proportions of the intensities of the light sources.

In short, a video camera is used to record movements of a target. The target is e.g. a body part of a human being, e.g. a fist. The body part is equipped with a three-axis accelerometer which provides information about the moving body part. The accelerometer tracks movements of the body part, e.g. a fist.

Measurements of the accelerometer are coded into light signals of the three light sources 50, 52, 54 in a predetermined manner.

Since the video camera is used to record movements of the target, it also tracks the light sources 50, 52, 54. In other words, the coded light signals of the light sources 50, 52, 54 appear on the video signal. The video signal is then transmitted to a recipient via a communication network, e.g. the Internet. Alternatively, the video camera may be locally connected to a receiving processing device. At the receiving end, a data processing device, e.g. a computer, is provided with a software application which is arranged to analyse the video signal to determine motion in a first direction from signals of the red light source, the signals of the red light source having been modulated based on motion in the first direction. Furthermore, the software application determines motion in a second direction from signals of the green light source, the signals of the green light source having been modulated based on motion in the second direction, and motion in a third direction from signals of the blue light source, the signals of the blue light source having been modulated based on motion in the third direction.

Based on the decoded three-dimensional motion components, the software application is able to determine movements three-dimensionally. The decoding algorithm used in the software application to decode light signals from the video signal itself is apparent to a skilled person. In short, the software application decodes all the needed three-dimensional motion information from the video signal, where the video signal comprises RGB-coded motion information.
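As above, the precise RGB coding is not fixed by the description. The following sketch assumes, for illustration only, that each colour channel carries a linearly intensity-coded acceleration for its axis (X on red, Y on green, Z on blue) and that the mean colour of the light-source region has already been extracted for every frame; decoding then reduces to inverting the assumed mapping per channel and integrating. The names and scale values are hypothetical.

MAX_ACCEL = 20.0   # assumed full-scale acceleration, m/s^2
MID_LEVEL = 128    # channel intensity encoding zero acceleration (8-bit scale)

def decode_channel(intensity):
    # Invert the assumed linear intensity coding of one colour channel.
    return (intensity - MID_LEVEL) / 127.0 * MAX_ACCEL

def integrate_three_axes(colour_samples, frame_rate):
    # colour_samples: per-frame (mean_red, mean_green, mean_blue) of the region
    # covered by the light sources 50, 52, 54.  X is decoded from red, Y from
    # green and Z from blue; the accelerations are integrated twice into a
    # three-dimensional displacement of the tracked target.
    dt = 1.0 / frame_rate
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    for sample in colour_samples:
        for axis in range(3):
            velocity[axis] += decode_channel(sample[axis]) * dt
            position[axis] += velocity[axis] * dt
    return position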

A clear advantage of the solution disclosed in Figures 5A and 5B is that three-dimensional motion tracking can be achieved with only one camera and that there are no special requirements for the camera itself. Of course, it is evident that more accurate cameras may provide a more accurate video signal. A further advantage is that decoding can be done practically anywhere without special equipment; only the software application is needed to decode the video signal.

In one embodiment of Figures 5A and 5B, decoding the motion information from the three light sources is performed a bit differently. Motion is first determined from signals of a first light source, the signals of the first light source having previously been modulated as a function of the absolute sum of acceleration. The second light source acts as a static reference signal. The direction of the motion is then determined from signals of a third light source. Thus, the three-dimensional motion is determined based on the decoded signals from the first, second and third light sources.

Figure 6 discloses an apparatus 60 according to one embodiment of the invention. The apparatus 60 comprises a processor 64 to which a memory 62 and a video signal receiver 66 are connected. The video signal receiver receives the video signal e.g. via a local connection or via a data interface connection, e.g. via a local area network interface of the apparatus. Although Figure 6 discloses only one memory 62 connected to the processor 64, the apparatus may also include other memories. The memory 62 comprises a software application which is configured to implement the motion decoding steps disclosed in the description of Figures 3-5.

The apparatus 60 may also include a receiver configured to receive a radio transmission.

The processor 64 is then configured to determine motion in a direction orthogonal to the two-dimensional planar motion component by simultaneously recording motion data received by means of the radio transmission from the object. The information received from the radio transmission can be used to combine the planar motion information from the video signal with the motion data received by the radio transmission and to correct and reference the motion information sent by the radio transmission. The receiver may also receive the same information via a wired interface. In the case of radio transmission, the earlier disclosed identifiable object may itself comprise a radio transmitter transmitting information provided by the accelerometer. In another embodiment, the radio transmitter may be a separate entity from the object.

Figure 7 discloses a system according to one embodiment of the invention. The system comprises one or more identifiable objects 70. The objects have been described in more detail previously in the description. The objects are tracked with a video camera 72. The video camera 72 has a connection to a processing apparatus 74 decoding motion information from the video signal. The connection may be wireless or wired. Furthermore, the decoding process itself is implemented e.g. with a software application running in the processing apparatus. The processing apparatus itself may be any possible device, e.g. a personal computer, a laptop computer, a hand-held computer, a mobile terminal, a mobile phone, a gaming console, a personal digital assistant etc.

The solution disclosed in the invention can be used e.g. in gaming applications and/or apparatuses, various sports, computer vision applications, telerehabilitation etc.

The exemplary embodiments can include, for example, any suitable servers, workstations, PCs, laptop computers, personal digital assistants (PDAs), Internet appliances, handheld devices, cellular telephones, smart phones, wireless devices, other devices,

and the like, capable of performing the processes of the exemplary embodiments. The devices and subsystems of the exemplary embodiments can communicate with each other using any suitable protocol and can be implemented using one or more programmed computer systems or devices.

One or more interface mechanisms can be used with the exemplary embodiments, including, for example, Internet access, telecommunications in any suitable form (e.g., voice, modem, and the like), wireless communications media, and the like. For example, employed communications networks or links can include one or more wireless communications networks, cellular communications networks, 3G communications networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, a combination thereof, and the like.

It is to be understood that the exemplary embodiments are for exemplary purposes, as many variations of the specific hardware used to implement the exemplary embodiments are possible, as will be appreciated by those skilled in the hardware and/or software art(s). For example, the functionality of one or more of the components of the exemplary embodiments can be implemented via one or more hardware and/or software devices.

The exemplary embodiments can store information relating to various processes described herein. This information can be stored in one or more memories, such as a hard disk, optical disk, magneto-optical disk, RAM, and the like. One or more databases can store the information used to implement the exemplary embodiments of the present inventions. The databases can be organized using data structures (e.g., records, tables, arrays, fields, graphs, trees, lists, and the like) included in one or more memories or storage devices listed herein. The processes described

with respect to the exemplary embodiments can include appropriate data structures for storing data collected and/or generated by the processes of the devices and subsystems of the exemplary embodiments in one or more databases.

All or a portion of the exemplary embodiments can be conveniently implemented using one or more general purpose processors, microprocessors, digital signal processors, micro-controllers, and the like, programmed according to the teachings of the exemplary embodiments of the present inventions, as will be appreciated by those skilled in the computer and/or software art(s). Appropriate software can be readily prepared by programmers of ordinary skill based on the teachings of the exemplary embodiments, as will be appreciated by those skilled in the software art. In addition, the exemplary embodiments can be implemented by the preparation of application-specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be appreciated by those skilled in the electrical art(s). Thus, the exemplary embodiments are not limited to any specific combination of hardware and/or software.

Stored on any one or on a combination of computer readable media, the exemplary embodiments of the present inventions can include software for controlling the components of the exemplary embodiments, for driving the components of the exemplary embodiments, for enabling the components of the exemplary embodiments to interact with a human user, and the like. Such software can include, but is not limited to, device drivers, firmware, operating systems, development tools, applications software, and the like. Such computer readable media further can include the computer program product of an embodiment of the present inventions for performing all or a portion (if processing is distributed) of the processing performed in implementing

the inventions. Computer code devices of the exemplary embodiments of the present inventions can include any suitable interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs), Java classes and applets, complete executable programs, Common Object Request Broker Architecture (CORBA) objects, and the like. Moreover, parts of the processing of the exemplary embodiments of the present inventions can be distributed for better performance, reliability, cost, and the like.

As stated above, the components of the exemplary embodiments can include computer readable medium or memories for holding instructions programmed according to the teachings of the present inventions and for holding data structures, tables, records, and/or other data described herein. Computer readable medium can include any suitable medium that participates in providing instructions to a processor for execution. Such a medium can take many forms, including but not limited to, non-volatile media, volatile media, transmission media, and the like. Non-volatile media can include, for example, optical or magnetic disks, magneto-optical disks, and the like. Volatile media can include dynamic memories, and the like. Transmission media can include coaxial cables, copper wire, fiber optics, and the like. Transmission media also can take the form of acoustic, optical, electromagnetic waves, and the like, such as those generated during radio frequency (RF) communications, infrared (IR) data communications, and the like. Common forms of computer-readable media can include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other suitable magnetic medium, a CD-ROM, CD-R, CD-RW, DVD, DVD-ROM, DVD±RW, DVD±R, any other suitable optical medium, punch cards, paper tape, optical mark sheets, any other suitable physical medium with

patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other suitable memory chip or cartridge, a carrier wave or any other suitable medium from which a computer can read.

While the present inventions have been described in connection with a number of exemplary embodiments and implementations, the present inventions are not so limited, but rather cover various modifications and equivalent arrangements, which fall within the purview of prospective claims.

CLAIMS:

1. A method for determining motion information of an object from a video signal, the method comprising:
receiving the video signal, the video signal comprising a plurality of frames representing motion;
analysing the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and
determining motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

2. The method according to claim 1, further comprising:
analysing the video signal to track two-dimensional planar motion component of the object, and wherein the determining comprises:
determining relative size of the at least one identifiable object in the frames of the video signal; and
calculating motion information of the object orthogonal to the two-dimensional planar motion component from the changes in the relative size of the at least one identifiable object in the frames of the video signal.

3. The method according to claim 1, wherein the at least one identifiable object comprises at least one light source.

4. The method according to claim 3, wherein the determining comprises:
analysing the video signal to track two-dimensional planar motion component of the object, wherein the identifiable object is a light source and the determining comprises:

determining motion in a direction orthogonal to the two-dimensional planar motion component from signals of the light source, the signals of the light source having been modulated based on motion of the object.

5. The method according to claim 3, wherein the at least one identifiable object comprises a red light source, a green light source and a blue light source, and the determining comprises:
determining motion in a first direction from signals of the red light source, the signals of the red light source having been modulated based on motion of the object in the first direction;
determining motion in a second direction from signals of the green light source, the signals of the green light source having been modulated based on motion of the object in the second direction; and
determining motion in a third direction from signals of the blue light source, the signals of the blue light source having been modulated based on motion of the object in the third direction.

6. The method according to any of claims 1-5, wherein the at least one identifiable object comprises three light sources, and wherein the determining comprises:
determining motion from signals of a first light source, the signals of the first light source having been modulated as a function of absolute sum of acceleration;
determining a static reference signal from signals of a second light source;
determining direction of the motion from signals of a third light source; and
determining the three-dimensional motion based on the decoded signals from the first, second and third light sources.

7. The method according to any of claims 1-6, further comprising:
determining motion in a direction orthogonal to the two-dimensional planar motion component by simultaneously recording motion data received by means of radio transmission from the object.

8. The method according to claim 7, further comprising:
combining the planar motion information from the video signal with the motion data received by the radio transmission, and correcting and referencing the motion information sent by the radio transmission.

9. The method according to any of claims 1-6, further comprising:
determining motion in a direction orthogonal to the two-dimensional planar motion component by simultaneously recording motion data received by means of wire transmission from the tracked object.

10. The method according to claim 9, further comprising:
combining the planar motion information from the video signal with the motion data received by the wire transmission, and correcting and referencing the motion information sent by the wire transmission.

11. A computer program comprising program code, which when executed on a processor implements the method of any of claims 1-10.

12. The computer program according to claim 11, wherein the computer program is embodied on a computer-readable medium.

13. An apparatus for determining motion information of an object from a video signal, the apparatus comprising:
a processor;

means for receiving the video signal; and
a memory comprising a computer program, which when executed by the processor is configured to:
receive the video signal, the video signal comprising a plurality of frames representing motion;
analyse the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and
determine motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

14. The apparatus according to claim 13, wherein the processor is configured to:
analyse the video signal to track two-dimensional planar motion component of the object;
determine relative size of the at least one identifiable object in the frames of the video signal;
calculate motion information of the object orthogonal to the two-dimensional planar motion component from the changes in the relative size of the at least one identifiable object in the frames of the video signal.

15. The apparatus according to claim 13, wherein the at least one identifiable object comprises at least one light source.

16. The apparatus according to claim 15, wherein the processor is configured to:
analyse the video signal to track two-dimensional planar motion component of the object; and
determine motion in a direction orthogonal to the two-dimensional planar motion component from signals of the light source, the signals of the

light source having been modulated based on motion of the object, wherein the identifiable object is a light source.

17. The apparatus according to claim 15, wherein the at least one identifiable object comprises a red light source, a green light source and a blue light source, and wherein the processor is configured to:
determine motion in a first direction from signals of the red light source, the signals of the red light source having been modulated based on motion of the object in the first direction;
determine motion in a second direction from signals of the green light source, the signals of the green light source having been modulated based on motion of the object in the second direction; and
determine motion in a third direction from signals of the blue light source, the signals of the blue light source having been modulated based on motion of the object in the third direction.

18. The apparatus according to any of claims 13-17, wherein the at least one identifiable object comprises three light sources, and wherein the processor is configured to:
determine motion from signals of a first light source, the signals of the first light source having been modulated as a function of absolute sum of acceleration;
determine a static reference signal from signals of a second light source;
determine direction of the motion from signals of a third light source; and
determine the three-dimensional motion based on the decoded signals from the first, second and third light sources.

19. The apparatus according to any of claims 13-18, further comprising:

a receiver configured to receive a radio transmission;
wherein the processor is configured to:
determine motion in a direction orthogonal to the two-dimensional planar motion component by simultaneously recording motion data received by means of the radio transmission from the object.

20. The apparatus according to claim 19, wherein the processor is configured to:
combine the planar motion information from the video signal with the motion data received by the radio transmission; and
correct and reference the motion information sent by the radio transmission.

21. The apparatus according to any of claims 13-18, further comprising:
a receiver configured to receive a wire transmission;
wherein the processor is configured to:
determine motion in a direction orthogonal to the two-dimensional planar motion component by simultaneously recording motion data received by means of wire transmission from the tracked object.

22. The apparatus according to claim 21, wherein the processor is configured to:
combine the planar motion information from the video signal with the motion data received by the wire transmission; and
correct and reference the motion information sent by the wire transmission.

23. A system for determining motion information of an object from a video signal, the system comprising:
a video camera;
at least one identifiable object attachable to a target;

an apparatus comprising a processor;
means for receiving the video signal; and
a memory comprising a computer program, which when executed by the processor is configured to:
receive the video signal, the video signal comprising a plurality of frames representing motion;
analyse the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and
determine motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal.

ABSTRACT

The invention provides a method for determining motion information of an object from a video signal. The method comprises: receiving the video signal, the video signal comprising a plurality of frames representing motion; analysing the video signal to determine at least one identifiable object in the plurality of frames of the video signal; and determining motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object in the plurality of frames of the video signal. (FIG. 1)

FIG. 1 (flow diagram): 100 - receiving the video signal, the video signal comprising a plurality of frames; 102 - analysing the video signal to determine at least one identifiable object; 104 - determining motion information of the object orthogonal to the two-dimensional planar motion of the video signal by tracking changes in optical properties of the at least one identifiable object.

FIG. 2: movements 20, two-dimensional motion plane 22, video camera 24.

FIG. 3A: object 30 at position (x1, y1, z1). FIG. 3B: object 30 at position (x2, y2, z2).

FIG. 4A: marker 42 and light source 40 at position (x1, y1, z1). FIG. 4B: marker 42 and light source 40 at position (x2, y2, z2).

FIG. 5A: red (R) light source 50, green (G) light source 52 and blue (B) light source 54 at position (x1, y1, z1). FIG. 5B: the same light sources 50, 52, 54 at position (x2, y2, z2).

FIG. 6: apparatus 60 comprising memory 62, processor 64 and video signal receiver 66. FIG. 7: identifiable object(s) 70, video camera 72 and processing apparatus 74.