patents

System, method and/or computer readable medium for non-invasive workflow augmentation, WO Application Number WO2018094534A1, Priority Date 26 November 2016 (pdf).

The present invention is directed to a system, method and/or computer readable medium for non-invasive workflow augmentation. An image of an object that contains information is obtained and supplemented with related information from a database to augment the information from the object. The augmented information is displayed to a user in association with the object.
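
A minimal sketch of the flow described above, assuming the information on the object is a machine-readable identifier and the database is a simple in-memory lookup (both assumptions; the abstract specifies neither, and decode_identifier and OBJECT_DATABASE are hypothetical placeholders):

    # Sketch: read information from an image of an object, augment it with related
    # database records, and return the result for display alongside the object.

    OBJECT_DATABASE = {
        "LOT-1234": {"product": "Rapid test, type A", "expiry": "2026-01-31"},
    }

    def decode_identifier(image) -> str:
        """Hypothetical placeholder: read an identifier printed on the object."""
        return "LOT-1234"

    def augment(image) -> dict:
        object_id = decode_identifier(image)
        related = OBJECT_DATABASE.get(object_id, {})
        # Information read from the object, augmented with related database records.
        return {"id": object_id, **related}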

System, method and/or computer-readable medium for identifying and/or localizing one or more rapid diagnostic tests, WO Application Number WO2018094533A1, Priority Date 26 November 2016 (pdf).

The present invention is directed to a system, method and/or computer readable medium for pattern recognition. Efficiency in identifying a sample object is increased by comparing an image of the sample object with a plurality of reference images corresponding to a plurality of reference objects to determine a match for the sample object. Models of each reference image and of the sample object image are provided, along with hypotheses, and associated confidence measures, that the sample object image corresponds to a particular reference image model. Successive refinement of each confidence measure across the plurality of hypotheses allows hypotheses to be removed, which increases the efficiency of identifying the match.
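
A rough illustration of the pruning idea, assuming each hypothesis carries a scalar confidence that is refined round by round and discarded below a threshold; refine_confidence, the threshold, and the number of rounds are placeholders, not the patented scoring:

    import random

    # Sketch of successive hypothesis refinement: every reference model starts as a
    # hypothesis for the sample object; confidences are refined and weak hypotheses
    # removed, so later rounds compare against fewer candidates.

    def refine_confidence(name, confidence, round_index):
        """Placeholder evidence update; a real system would compare image features."""
        return 0.5 * confidence + 0.5 * random.random()

    def identify(reference_models, rounds=5, keep_threshold=0.3):
        hypotheses = {name: 1.0 / len(reference_models) for name in reference_models}
        for r in range(rounds):
            hypotheses = {name: refine_confidence(name, conf, r)
                          for name, conf in hypotheses.items()}
            # Drop weak hypotheses so subsequent refinement touches fewer candidates.
            hypotheses = {n: c for n, c in hypotheses.items() if c >= keep_threshold}
            if len(hypotheses) == 1:
                break
        return max(hypotheses, key=hypotheses.get) if hypotheses else None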

Visual pattern recognition system, method and/or computer-readable medium, WO Application Number WO2018094532A1, Priority Date 26 November 2016 (pdf).

The present invention is directed to a system, method and/or computer readable medium for visual pattern recognition using a binary operator. Patterns are recognized by their overlap with identified distinctive and/or prominent regions found in a pattern library generated through analysis of multiple samples of reference patterns.
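
A minimal sketch of overlap-based matching, assuming the pattern library stores one binary mask of distinctive regions per reference pattern and that a simple overlap ratio serves as the score (assumptions; the abstract does not define the library format or the operator):

    import numpy as np

    # Sketch: score a binarized sample pattern by its overlap with the distinctive
    # regions stored for each library pattern, then pick the best-scoring pattern.

    def overlap_score(sample: np.ndarray, region_mask: np.ndarray) -> float:
        hits = np.logical_and(sample, region_mask).sum()
        return hits / max(region_mask.sum(), 1)

    def recognize(sample, library):
        """library: mapping of pattern name -> binary mask of distinctive regions."""
        return max(library, key=lambda name: overlap_score(sample, library[name]))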

Systems and methods for tracker characterization and verification, US Application Number US20170345177A1, Priority Date 27 May 2016 (pdf).

The present application relates to systems and methods used to characterize or verify the accuracy of a tracker comprising optically detectable features. The tracker may be used in spatial localization using an optical sensor. Characterization results in the calculation of a Tracker Definition that includes geometrical characteristics of the tracker. Verification results in an assessment of accuracy of a tracker against an existing Tracker Definition.
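
A small sketch of the verification step, assuming the Tracker Definition stores the nominal 3D positions of the optically detectable features and that accuracy is summarized by inter-feature distance errors (assumptions; the abstract describes the definition only as geometrical characteristics):

    import numpy as np

    # Sketch: compare measured feature geometry against an existing Tracker Definition
    # by the worst inter-feature distance error.

    def pairwise_distances(points: np.ndarray) -> np.ndarray:
        diffs = points[:, None, :] - points[None, :, :]
        return np.linalg.norm(diffs, axis=-1)

    def verify(measured: np.ndarray, definition: np.ndarray, tol_mm: float = 0.25):
        errors = np.abs(pairwise_distances(measured) - pairwise_distances(definition))
        max_error = float(errors.max())
        return max_error <= tol_mm, max_error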

Systems, methods and devices to scan 3d surfaces for intra-operative localization, International Publication Number WO2017185170A1, Priority Date 28 April 2016 (pdf).

Systems and methods are described herein to generate a 3D surface scan of a surface profile of a patient’s anatomy. The 3D surface scan may be generated by reflections of structured light off the surface profile of the anatomy. The 3D surface scan may be used during intra-operative surgical navigation by a localization system. Optionally, a pre-operative medical image may also be registered to the localization system or used to enhance the 3D surface scan.
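
A minimal structured-light triangulation sketch, assuming a detected pixel lies on a known projected light plane, so the 3D surface point is the intersection of the camera ray with that plane; the intrinsics and plane equation are assumed known from calibration (the abstract does not specify the reconstruction math):

    import numpy as np

    # Sketch: back-project a pixel on a projected light plane to a camera ray and
    # intersect it with the plane n·X + d = 0 to get a 3D surface point.

    def ray_plane_point(pixel, K, plane_n, plane_d):
        """pixel: (u, v); K: 3x3 intrinsics; plane given in the camera frame."""
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        t = -plane_d / float(plane_n @ ray)   # scale so the point lies on the plane
        return t * ray                        # 3D point in the camera frame

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    point = ray_plane_point((350, 260), K, np.array([0.0, -0.5, 1.0]), -400.0)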

Method for object pose estimation, apparatus for object pose estimation, method for object pose estimation refinement and computer readable medium, Japanese Patent JP2013050947A, Publication Date 19 October 2016 (pdf).

PROBLEM TO BE SOLVED: To enable a robot to recognize an object regardless of its pose as seen from its on-board camera. SOLUTION: An image containing an object is input, a binary mask of the input image is created, and a set of singlets is extracted from the binary mask, each singlet representing points on an inner or outer contour of the object in the input image. The set of singlets is connected into a mesh represented as a duplex matrix, and two duplex matrices are compared to produce a set of candidate poses, from which an object pose estimate is produced and stored. The estimated pose of the object is refined by: inputting the parameters of the camera used; projecting a model of the object into a virtual image of the object; and updating the initial pose parameters to new pose parameters so as to minimize an energy function.
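
A sketch of one step of the SOLUTION above, extracting candidate singlets as sampled points on the inner and outer contours of the binary mask; using OpenCV's two-level contour hierarchy (OpenCV 4.x) is an interpretation for illustration, not necessarily the patented procedure:

    import cv2
    import numpy as np

    # Sketch: sample points along inner and outer contours of a binary mask and tag
    # each point as outer (no parent contour) or inner.

    def extract_singlets(binary_mask: np.ndarray, step: int = 10):
        contours, hierarchy = cv2.findContours(binary_mask, cv2.RETR_CCOMP,
                                               cv2.CHAIN_APPROX_NONE)
        if hierarchy is None:
            return []
        singlets = []
        for contour, info in zip(contours, hierarchy[0]):
            is_outer = info[3] == -1      # no parent contour -> outer contour
            for point in contour[::step]:
                x, y = point[0]
                singlets.append((int(x), int(y), is_outer))
        return singlets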

HMD Calibration with Direct Geometric Modeling, US Patent US20160012643A1, Publication Date 14 January 2016 (pdf).

An optical see-through (OST) head-mounted display (HMD) uses a calibration matrix in which a fixed subset of its parameters is adjustable. Initial values for the calibration matrix are based on a model head. A predefined set of incremental adjustment values is provided for each adjustable parameter. During calibration, the calibration matrix is cycled through its predefined incremental parameter changes, and a virtual object is projected for each incremental change. The resultant projected virtual object is aligned to a reference real object, and the projected virtual object having the best alignment is identified. The setting values of the calibration matrix that resulted in the best aligned virtual object are deemed the final calibration matrix to be used with the OST HMD.
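
A sketch of the calibration search described above, reading "cycled through its predefined incremental parameter changes" as an exhaustive sweep over the combinations of increments (one possible reading); project and alignment_error are placeholders for the display pipeline and the alignment assessment:

    import itertools

    # Sketch: try each combination of predefined increments for the adjustable
    # calibration parameters, project the virtual object for each setting, and keep
    # the setting whose projection aligns best with the real reference object.

    def calibrate(initial_params, increments, project, alignment_error):
        """increments: mapping of parameter name -> list of candidate deltas."""
        best_params, best_error = dict(initial_params), float("inf")
        for deltas in itertools.product(*increments.values()):
            candidate = dict(initial_params)
            for name, delta in zip(increments.keys(), deltas):
                candidate[name] += delta
            error = alignment_error(project(candidate))
            if error < best_error:
                best_params, best_error = candidate, error
        return best_params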

HMD Calibration with Direct Geometric Modeling, European Patent Application No. 15175799.4 – 1902, Filing Date 8 July 2015 (pdf).

An optical see-through (OST) head-mounted display (HMD) uses a calibration matrix in which a fixed subset of its parameters is adjustable. Initial values for the calibration matrix are based on a model head. A predefined set of incremental adjustment values is provided for each adjustable parameter. During calibration, the calibration matrix is cycled through its predefined incremental parameter changes, and a virtual object is projected for each incremental change. The resultant projected virtual object is aligned to a reference real object, and the projected virtual object having the best alignment is identified. The setting values of the calibration matrix that resulted in the best aligned virtual object are deemed the final calibration matrix to be used with the OST HMD.

System generating three-dimensional model, method and program, Japanese Patent JP2015176600A, Publication Date 5 October 2015 (pdf).

PROBLEM TO BE SOLVED: To generate a three-dimensional representation that allows a user to select any observation point. SOLUTION: A HOLOCAM system uses a plurality of input devices, referred to as "orbs", to capture images of a scene from different observation points or viewpoints. Each orb captures three-dimensional (3D) information, including depth information and visible image information of an object, and the 3D data captured by the plurality of orbs is combined to form a single combined 3D model. Using the combined 3D model, an observer can view the scene from any observation point or viewpoint, including ones that differ from those of the input orbs.

Holocam Systems and Methods, US Patent US20150261184, Publication Date 17 September 2015 (pdf).

Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as "orbs," are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a frame of reference different from that of any single orb.
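
A sketch of combining per-orb captures into one composite model, assuming each orb provides a depth map, camera intrinsics, and a pose in a shared world frame (assumptions; the abstract describes the combination mechanism only at a high level):

    import numpy as np

    # Sketch: back-project each orb's depth map to a point cloud in its own camera
    # frame, transform into the shared world frame, and merge into one model.

    def depth_to_points(depth, K):
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
        return (rays * depth.ravel()).T       # N x 3 points in the orb's camera frame

    def composite_model(orbs):
        """orbs: iterable of (depth, K, R, t); returns one merged N x 3 point cloud."""
        clouds = [depth_to_points(depth, K) @ R.T + t for depth, K, R, t in orbs]
        return np.vstack(clouds)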

Method and Apparatus for Improved Training of Object Detecting System, US Patent US20140079314, Publication Date 20 March 2014 (pdf).

An adequate solution for computer vision applications is arrived at more efficiently and with more automation, enabling users with limited or no special image processing and pattern recognition knowledge to create reliable vision systems for their applications. Computer rendering of CAD models is used to automate the dataset acquisition and labeling process. To speed up training data preparation while maintaining data quality, a number of processed samples are generated from one or a few seed images.
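
A sketch of the "many processed samples from a few seeds" idea, using simple photometric and geometric perturbations; the patent's pipeline renders and labels CAD models automatically, so the specific perturbations below are illustrative assumptions:

    import numpy as np

    # Sketch: expand a few seed images into many labeled training samples with simple
    # perturbations (rotation, flips, brightness, noise).

    def perturb(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        out = np.rot90(image, k=int(rng.integers(0, 4)))
        if rng.random() < 0.5:
            out = np.fliplr(out)
        out = out.astype(np.float32) * rng.uniform(0.8, 1.2)    # brightness change
        out = out + rng.normal(0.0, 3.0, out.shape)              # sensor noise
        return np.clip(out, 0, 255).astype(np.uint8)

    def expand_dataset(seeds, label, samples_per_seed=50, seed=0):
        rng = np.random.default_rng(seed)
        return [(perturb(img, rng), label)
                for img in seeds for _ in range(samples_per_seed)]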

Method for simulating impact printer output, evaluating print quality, and creating teaching print samples, US Patent 8654398, Publication Date 18 February 2014 (pdf).

An automated printout inspection system identifies glyphs in an image by calculating a connectedness score for each foreground pixel and comparing this score with a specified threshold. The system further generates training images by simulating printouts from an impact printer, including specifying particular error types and their magnitudes. The simulated printouts are combined with scan images of real-world printouts to train the automated printout inspection system. The inspection results of the automated system are compared with inspection results from human inspectors, and test parameters of the automated system are adjusted so that it renders inspection results within a specified range of the average human inspector.
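
A sketch of one plausible connectedness score, counting foreground neighbors in each pixel's 8-neighborhood and thresholding; this neighbor-count definition is an assumption standing in for the patented score, which the abstract does not spell out:

    import numpy as np

    # Sketch: for each foreground pixel, count foreground neighbors in the
    # 8-neighborhood and keep pixels whose count meets a threshold.

    def connectedness(foreground: np.ndarray) -> np.ndarray:
        fg = foreground.astype(np.int32)
        padded = np.pad(fg, 1)
        score = np.zeros_like(fg)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                score += padded[1 + dy: 1 + dy + fg.shape[0],
                                1 + dx: 1 + dx + fg.shape[1]]
        return score * fg                   # score is only defined on foreground pixels

    def glyph_pixels(foreground: np.ndarray, threshold: int = 3) -> np.ndarray:
        return connectedness(foreground) >= threshold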

Method and apparatus for object pose estimation, US Patent 8467596, Publication Date 18 June 2013 (pdf).

A pose of an object is estimated from an input image and an object pose estimate is then stored by: inputting an image containing an object; creating a binary mask of the input image; extracting a set of singlets from the binary mask of the input image, each singlet representing points on an inner or outer contour of the object in the input image; connecting the set of singlets into a mesh represented as a duplex matrix; comparing two duplex matrices to produce a set of candidate poses; and producing an object pose estimate and storing it. The estimated pose of the object is refined by: inputting an image of the object in an estimated pose, a model of the object, and parameters of the camera used to take the image of the object in the estimated pose; projecting the model of the object into a virtual image of the object using the camera parameters and initial pose parameters to obtain a binary mask image and image depth information; and updating the initial pose parameters to new pose parameters using the binary mask image and image depth information, iterating until an energy function is minimized or a maximum number of iterations is reached.
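
A sketch of the refinement loop structure described above, assuming render projects the model under the current pose and energy scores it against the observed binary mask and depth; both are placeholders, and the finite-difference update is an illustrative choice rather than the patented optimizer:

    import numpy as np

    # Sketch: project the model with the current pose, evaluate an energy, and update
    # the pose parameters until the energy stops improving or the iteration cap is hit.

    def refine_pose(pose, render, energy, step=1e-3, max_iters=100, tol=1e-6):
        pose = np.asarray(pose, dtype=float)
        last = energy(render(pose))
        for _ in range(max_iters):
            grad = np.zeros_like(pose)
            for i in range(pose.size):               # finite-difference gradient
                trial = pose.copy()
                trial[i] += step
                grad[i] = (energy(render(trial)) - last) / step
            pose = pose - step * grad
            current = energy(render(pose))
            if abs(last - current) < tol:
                break
            last = current
        return pose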