25th week of 2017 patent application highlights part 53 |
Patent application number | Title | Published |
20170177265 | DATA STORAGE IN A MULTI-LEVEL MEMORY DEVICE USING ONE-PASS PROGRAMMING - A method for data storage includes preparing first data having a first size for storage in a memory device that stores data having a nominal size larger than the first size, by programming a group of memory cells to multiple predefined levels using a one-pass program-and-verify scheme. The first data is combined with dummy data to produce first combined data having the nominal size, and is sent to the memory device for storage in the group. The dummy data is chosen to limit the levels to which the memory cells in the group are programmed to a partial subset of the predefined levels. In response to identifying second data to be stored in the group, the second data is combined with the first data to obtain second combined data having the nominal size, and is sent to the memory device for storage, in place, in the group. | 2017-06-22 |
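The one-pass scheme in this abstract can be sketched in Python. The 2-bit Gray-style level mapping, the choice of dummy upper-page bits, and all names below are illustrative assumptions, not taken from the application:

```python
# Hypothetical 2-bit-per-cell device: each cell stores (lower, upper) bits.
# (lower, upper) -> programmed level, Gray-coded so a later write only raises levels.
LEVELS = {(1, 1): 0, (0, 1): 1, (0, 0): 2, (1, 0): 3}

def first_pass(lower_bits):
    # Combine first data with dummy upper bits of 1, which limits the cells
    # to the partial subset {0, 1} of the four predefined levels.
    return [LEVELS[(b, 1)] for b in lower_bits]

def second_pass(lower_bits, upper_bits):
    # Second data replaces the dummy bits; the write happens in place.
    return [LEVELS[(lo, up)] for lo, up in zip(lower_bits, upper_bits)]

cells = first_pass([1, 0, 1, 0])
assert set(cells) <= {0, 1}   # only a subset of levels is used after pass one
```

Because the first pass keeps every cell in the two lowest-numbered states of this mapping, the second in-place write can only move cells to equal or higher levels, which is what makes the one-pass program-and-verify flow workable in this sketch.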
20170177266 | DATA AWARE DEDUPLICATION OBJECT STORAGE (DADOS) - Embodiments include a data aware deduplicating object store. The data aware deduplicating data store includes a consistent hashing logic that manages a consistent hashing architecture for the object store. The consistent hashing architecture includes a metadata ring and a bulk ring. The consistent hashing architecture may be a multiple ring architecture comprising a metadata ring and two or more bulk rings. A bulk ring may include a key/value (k/v) data store, where a k/v data store stores a shard of an index and a reference count that facilitates the individual approach to garbage collection or data reclamation. The data aware deduplicating data store also includes a deduplication logic that provides data deduplication for data to be stored in the object store. The deduplication logic performs variable length deduplication and provides a shared nothing approach. | 2017-06-22 |
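The consistent hashing architecture this abstract describes can be illustrated with a minimal hash ring that maps a chunk fingerprint to an owning node. The node names, virtual-node count, and use of SHA-256 are assumptions for the sketch, not details from the application:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: keys map to the first node clockwise."""
    def __init__(self, nodes, vnodes=8):
        # Each node gets several virtual positions to even out the load.
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Wrap around the ring when the key hashes past the last position.
        i = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("chunk-fingerprint-0042")
assert owner in {"node-a", "node-b", "node-c"}
```

In a multiple-ring layout of the kind described, one such ring could hold metadata while separate bulk rings hold deduplicated chunk data, each k/v store on a bulk ring keeping its shard of the index and reference counts.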
20170177267 | APPARATUS AND METHOD FOR FILE RECORDING BASED ON NON-VOLATILE MEMORY - The present disclosure includes a non-volatile memory having a boot region, a file allocation table (FAT) region and a data region; a memory configured to store a program for managing file recording; and a processor configured to execute the program. When executing the program, the processor allocates metadata corresponding to a file to be stored in the non-volatile memory to the FAT region, allocates a plurality of clusters in the data region based on file-size information included in the metadata, and writes the file to the plurality of allocated clusters. If the size of the written file differs from the size of the plurality of allocated clusters, the processor updates the metadata based on the size of the file. | 2017-06-22 |
20170177268 | DATA STORAGE SYSTEMS, COMPUTING SYSTEMS, METHODS FOR CONTROLLING A DATA STORAGE SYSTEM, AND METHODS FOR CONTROLLING A COMPUTING SYSTEM - According to various embodiments, a data storage system may be provided. The data storage system may include: a first storage device; a second storage device; a data receiver configured to receive data to be stored in the data storage system and an indicator indicating a storage profile for the data; and a storage controller configured to determine based on the indicator whether to store the data on the first storage device or to store the data on the second storage device. | 2017-06-22 |
20170177269 | MEMORY SYNCHRONIZATION FILTER - Data synchronization between memories of a data processing system is achieved by transferring the data blocks from a first memory to a second memory, forming a hash list from addresses of data blocks that are written to the second memory or modified in the second memory. The hash list may be used to identify a set of data blocks that are possibly written to or modified. Data blocks that are possibly modified may be written back from the second memory to the first memory in response to a synchronization event. The hash list may be updated by computing, in hardware or software, hash functions of an address of the transferred or modified data block to determine bit positions to be set. The hash list may be queried by computing hash functions of an address to determine bit positions, and checking bits in the hash list at those bit positions. | 2017-06-22 |
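The hash list described here behaves like a Bloom filter over block addresses: set bits on write, check bits on query, with possible false positives but no false negatives. The two hash mixes and the filter size below are invented for illustration:

```python
class AddressFilter:
    """Bloom-filter-style hash list over block addresses."""
    def __init__(self, nbits=1024):
        self.nbits = nbits
        self.bits = bytearray(nbits // 8)

    def _positions(self, addr):
        # Two cheap integer hash functions; real hardware would differ.
        h1 = (addr * 2654435761) % self.nbits
        h2 = (addr * 40503 + 17) % self.nbits
        return (h1, h2)

    def record(self, addr):
        # Block was written to or modified in the second memory: set its bits.
        for p in self._positions(addr):
            self.bits[p // 8] |= 1 << (p % 8)

    def possibly_modified(self, addr):
        # True means "possibly modified"; False is definitive.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(addr))

f = AddressFilter()
f.record(0x1000)
assert f.possibly_modified(0x1000)   # no false negatives
```

On a synchronization event, every block whose address queries as possibly modified would be written back to the first memory; the false-positive cost is a few unnecessary write-backs, never a missed one.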
20170177270 | STORAGE SYSTEM - The storage system has one or more storage drives, and one or more controllers for receiving processing requests from a superior device. Each controller has a processor for executing the processing request and an accelerator, and the accelerator has multiple internal data memories and an internal control memory. If the processing request is a read I/O request, the accelerator stores control information regarding the request in the internal control memory and reads the data targeted by the request from at least one of the storage drives; the data is temporarily stored in one or more of the internal data memories and transferred sequentially to the superior device from whichever internal data memory already holds data. | 2017-06-22 |
20170177271 | CONTENT ALIGNED BLOCK-BASED DEDUPLICATION - A content alignment system according to certain embodiments aligns a sliding window at the beginning of a data segment. The content alignment system performs a block alignment function on the data within the sliding window. A deduplication block is established if the output of the block alignment function meets a predetermined criteria. At least part of a gap is established if the output of the block alignment function does not meet the predetermined criteria. The predetermined criteria is changed if a threshold number of outputs fail to meet the predetermined criteria. | 2017-06-22 |
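The sliding-window chunking loop this abstract outlines can be sketched as follows. The window sum stands in for the block alignment function (real systems use a rolling hash), and the mask, target, and gap threshold are invented; the abstract's adaptive criteria change is approximated here by forcing a boundary once the gap reaches a maximum:

```python
def chunk_boundaries(data, window=4, mask=0x0F, target=0x07, max_gap=64):
    boundaries = []
    gap = 0
    for i in range(window, len(data) + 1):
        # Stand-in block alignment function over the sliding window.
        h = sum(data[i - window:i]) & 0xFF
        if (h & mask) == target or gap >= max_gap:
            boundaries.append(i)   # establish a deduplication block boundary
            gap = 0
        else:
            gap += 1               # part of a gap; criteria not met here
    return boundaries

cuts = chunk_boundaries(bytes(range(200)))
assert cuts == sorted(cuts)
```

Cutting on content rather than at fixed offsets keeps block boundaries stable when bytes are inserted or deleted upstream, which is what makes the resulting blocks deduplicate well.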
20170177272 | METHODS AND SYSTEMS FOR MEMORY SUSPECT DETECTION - This disclosure relates generally to memory suspect detection, and more particularly to a system and method for detection of memory suspects in an application runtime environment. The method includes systematically executing a plurality of transactions associated with an application. Executing the plurality of transactions results in generation of metrics. Said metrics include application memory information and memory allocation information associated with the transactions. Said metrics are periodically captured. Based on the metrics that are periodically captured, a set of transactions is detected from amongst the plurality of transactions that are impacted due to suspected memory allocations. | 2017-06-22 |
20170177273 | STATISTICS MANAGEMENT FOR SCALE-OUT STORAGE - Systems and processes for statistics management in a distributed storage system using a flat cluster architecture. Statistics for managed objects are collected using virtual statistics groups across multiple storage nodes. The systems and processes are compatible with storage systems that utilize microservice architectures. | 2017-06-22 |
20170177274 | Ensuring that Memory Device Actions are Valid using Reference Values - Described herein are techniques to ensure that an action (e.g., a read or a write by a host device) associated with an element of a memory device that stores a value is valid compared to a reference value. The reference value is associated with an actual characteristic of a memory. The element storing the value can be stored in a region of memory that is configured to store metadata. The element can be re-programmed after the memory device is manufactured, and thus, the value stored in the element can be modified by a host device so that it incorrectly or inaccurately reflects a characteristic of the memory. In contrast, the reference value is stored in a separate region of memory and the reference value is a true value. | 2017-06-22 |
20170177275 | TRACKING PIPELINED ACTIVITY DURING OFF-CORE MEMORY ACCESSES TO EVALUATE THE IMPACT OF PROCESSOR CORE FREQUENCY CHANGES - A processor system tracks, in at least one counter, a number of cycles in which at least one execution unit of at least one processor core is idle and at least one thread of the at least one processor core is waiting on at least one off-core memory access during run-time of the at least one processor core during an interval comprising multiple cycles. The processor system evaluates an expected performance impact of a frequency change within the at least one processor core based on the current run-time conditions for executing at least one operation tracked in the at least one counter during the interval. | 2017-06-22 |
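The evaluation this abstract describes amounts to a first-order model: core-bound cycles rescale with frequency, while time the core spends idle waiting on off-core memory accesses does not. A sketch under that assumption (the counter values and function name are invented):

```python
def predicted_runtime_s(busy_cycles, mem_stall_cycles, f_current_hz, f_new_hz):
    # Execution-unit-busy cycles shrink with a faster clock; cycles counted
    # while threads wait on off-core memory represent wall time that does not.
    return busy_cycles / f_new_hz + mem_stall_cycles / f_current_hz

# Counters sampled over one interval at 2 GHz: 8G busy cycles, 2G waiting cycles.
t_now = predicted_runtime_s(8e9, 2e9, 2.0e9, 2.0e9)   # 5.0 s
t_fast = predicted_runtime_s(8e9, 2e9, 2.0e9, 4.0e9)  # 3.0 s, not 2.5 s
```

The gap between the naive expectation (halving the clock period halves runtime) and the modeled 3.0 s is exactly why tracking idle-while-waiting cycles matters before committing to a frequency change.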
20170177276 | DUAL BUFFER SOLID STATE DRIVE - A solid state drive includes a dual buffer for buffering incoming write data prior to committal to a non-volatile memory. The buffer is operated to provide a temporary backup of dirty data pending successful completion of a host transfer. The dual buffer may be operated as a primary buffer and a secondary buffer. The primary buffer may be used as the default buffer during normal operation. The secondary buffer is written to during a host transfer that is a cache write to dirty data. A copying process may be used to copy data between the primary and the secondary buffer to preserve the backup data pending successful completion of the host transfer. | 2017-06-22 |
20170177277 | MANAGING DATA OPERATIONS IN A QUORUM-BASED DATA REPLICATION SYSTEM - A request is received to perform a data operation requiring interaction with any one of multiple data replicas stored on one or more data storage devices and managed by a quorum-based data management protocol, in which completion of a data update is reported to its initiator once a majority of the data replicas report acceptance of the update. The data operation is routed to be performed using one of a predefined minority of the data replicas if it requires less than strong consistency, is a read-only operation, and meets a predefined criterion of being computationally time-intensive or resource-intensive. Otherwise, if the operation requires strong consistency, requires a data write, or does not meet the predefined criterion, it is routed to be performed using a predefined majority of the data replicas. | 2017-06-22 |
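The routing rule reduces to a small predicate. The return labels and parameter names below are invented for the sketch:

```python
def route_operation(needs_strong_consistency, is_read_only, is_expensive):
    # Only expensive, read-only operations tolerating relaxed consistency
    # may be served by the predefined minority of replicas.
    if (not needs_strong_consistency) and is_read_only and is_expensive:
        return "minority-replicas"
    return "majority-replicas"

assert route_operation(False, True, True) == "minority-replicas"
assert route_operation(True, True, True) == "majority-replicas"
```

Steering only the heavy, relaxed-consistency reads to the minority keeps those replicas busy with work that would otherwise slow quorum traffic, while every write and every strongly consistent read still goes through the majority.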
20170177278 | Pre-Loading a Parameter to a Media Accessor to Support a Data Request - Support for both reading and writing data on storage media is provided. A data request is received and a storage medium to support the data request is identified. A parameter related to the data request is retrieved and pre-loaded to an associated media accessor prior to loading the storage medium. The parameter includes a setting adjustment of the media accessor in support of the data request. The media accessor performs the data request in compliance with the setting adjustment. | 2017-06-22 |
20170177279 | DOCUMENT PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - A PrintTicket does not necessarily exist in each page of an XPS document. Regarding a page where no PrintTicket exists, the page is printed by referring to a PrintTicket in a higher hierarchical level. Here, when a plurality of XPS documents are combined, a user's intended print result may not be obtained when the PrintTicket in the higher hierarchical level to which the page refers changes before and after combining. Before a combining process is executed, print setting information of a page element is generated based on print setting information of an element in a higher hierarchical level than the page hierarchical level. A combined document is generated by combining a structured document including a page element to which the generated print setting information is added and another structured document including a page element to which the generated print setting information is added. | 2017-06-22 |
20170177280 | METHODS AND SYSTEMS FOR ON-DEMAND PUBLISHING OF RELIGIOUS WORKS - Systems and methods for on-demand publication of religious works include receiving user selections for exterior customization options, interior customization options, and a prompt to add user-added or user-created content. The disclosed systems and methods compile customized religious works and optionally print some portion of a hard copy of the religious work on thin paper of 28-50 grams per square meter. In some examples, the printing process includes light-fusion printing processes, and/or an electronic copy of the customized religious work is created. The example printing process can print single or low volumes of hard copies of customized religious works using the light-fusion printing processes. | 2017-06-22 |
20170177281 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR CONTROLLING THE INFORMATION PROCESSING APPARATUS IN A MAINTENANCE MODE, AND STORAGE MEDIUM - In an information processing apparatus and a method of controlling the same, a setting for prohibiting access to a removable medium is applied, and even if the setting is set, access to the removable medium is permitted in a case where the information processing apparatus is activated in the maintenance mode. | 2017-06-22 |
20170177282 | METHOD OF CAUSING A PRINTER TO CARRY OUT A PRINTING OPERATION WITH DECOLORABLE MATERIAL - A printer driver that is executable in a computer causes the computer to carry out a process including the steps of reading a first number of copies of a document to be printed with a decolorable material and a second number of copies of the document to be printed with a non-decolorable material, and generating a print command for a printer that causes the printer to print the first number of copies with the decolorable material and the second number of copies with the non-decolorable material. | 2017-06-22 |
20170177283 | IMAGE FORMING APPARATUS, IMAGE FORMING SYSTEM, AND NON-TRANSITORY STORAGE MEDIUM - An image forming apparatus comprising: a receiver for receiving a print job; a printing unit; a storage unit; an input interface for receiving a print execution command from a user; a power source for supplying an electric power; and a controller configured to: control the power source to stop or reduce the power supply to the printing unit when the receiver has not received a next print job within an after-printing standby time from completion of the printing; and control the power source to stop or reduce the power supply to the printing unit when the print job is a print-execution-command-input required print job requiring the print execution command and the receiver has not received a next print job within an after-print-job-receipt standby time from the receipt of the print-execution-command-input required print job, the after-print-job-receipt standby time being longer than the after-printing standby time. | 2017-06-22 |
20170177284 | ELECTRONIC DEVICE CAPABLE OF PERFORMING OVERWRITE ERASURE OF OBSOLETE FILE AND COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM - Provided is an electronic device that can restrict delayed execution of a process whose existence an overwrite erasure thread cannot confirm. An MFP includes a job execution part that executes a job and an overwrite erasure thread that performs overwrite erasures of obsolete files. The overwrite erasure thread performs the overwrite erasures intermittently even while the job is in execution if the obsolete files have a size greater than a threshold size. The job execution part performs the overwrite erasures while the job is in execution if the obsolete files have a size less than the threshold size. | 2017-06-22 |
20170177285 | MULTI-FUNCTION PERIPHERAL AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING COMPUTER-READABLE INSTRUCTIONS CAUSING DEVICE TO EXECUTE WORKFLOW - A non-transitory computer-readable recording medium stores computer-readable instructions which are readable by a controller of a device provided with a communication interface and a notification interface. The computer-readable instructions, when executed by the controller, cause the device to perform determining whether pre-operations defined in multiple jobs contained in a workflow have successfully completed. In response to determination that the pre-operations have successfully completed, the device is caused to execute a main-process in which main-operations respectively defined in the multiple jobs contained in the workflow are executed in order. In response to determination that at least one of the pre-operations has abended, the notification interface is caused to notify that the multiple jobs defined in the workflow are inexecutable. | 2017-06-22 |
20170177286 | ACQUISITION OF LINKED VERSIONS OF PRINT CONTENT HAVING MACHINE-READABLE LINKS - Examples disclosed herein relate to acquisition of linked versions of print content having machine-readable links. Examples include acquisition of a message requesting that print content be printed at a destination printing device via the remote printing service, the message comprising link information specifying a selected type of optically machine-readable link and a digital content payoff. Examples further include acquisition, from a linking service, of a linked version of the print content comprising an optically machine-readable link that is associated with the digital content payoff via the linking service. | 2017-06-22 |
20170177287 | APPARATUS AND METHOD FOR DYNAMICALLY OBTAINING AND DISPLAYING SURVEILLANCE IMAGES AND TRACKED EVENTS - A method for dynamically obtaining and displaying surveillance images and dynamically tracking events, including receiving event occurrence information corresponding to one or more surveillance cameras; classifying the surveillance cameras detecting an event occurrence into one or more groups of event detection cameras based on the event occurrence information; dynamically arranging and displaying one or more real-time images acquired by the event detection cameras on a first screen, the real-time images being dynamically arranged on the first screen according to the one or more groups, and the arrangement of the real-time images on the first screen being dynamically variable according to a number of the detected events; and displaying an event image acquired within a preset time range before and after a first event detection time point by the event detection camera that first detects an event among the event detection cameras belonging to each group. | 2017-06-22 |
20170177288 | APPARATUS AND METHOD FOR TRANSITIONING CONTENT BETWEEN DISPLAYS - The present disclosure generally relates to an apparatus and method for transitioning content between multiple displays. The apparatus includes two or more displays disposed within a vehicle cockpit, where the two or more displays include a first display and a second display. The content, in the form of an image, is displayed on the first display. The first display is disposed adjacent to the second display with a gap formed therebetween. One or more light emitting diodes (LEDs) are disposed within the gap, and one or more lightguides are disposed adjacent to the LEDs and within the gap. The LEDs and lightguides are designed to emit or radiate light corresponding to the image such that the image appears to be blended between the displays when it shifts between them, giving the illusion that the image moved from one display to the other. | 2017-06-22 |
20170177289 | MULTI-DISPLAY APPARATUS - A multi-display apparatus may include a first display panel and a second display panel at least partially overlapping the first display panel. The first display panel may include a first display region in which a plurality of first pixels are disposed, and a first non-display region adjacent to the first display region. The second display panel may include a second display region in which a plurality of second pixels are disposed. The first non-display region may overlap the second display region, and first transmitting windows may be disposed in the first non-display region over the second pixels. | 2017-06-22 |
20170177290 | DISPLAY REDISTRIBUTION BETWEEN A PRIMARY DISPLAY AND A SECONDARY DISPLAY - An aspect includes a computer implemented method for display redistribution between a personal display and an external display. The method includes initiating, by a primary device, a wireless connection between a primary device and a secondary device. The primary device includes a primary display and the secondary device includes a secondary display. A confirmation is received at the primary device from the secondary device in response to the initiating. Based on receiving the confirmation, the wireless connection between the primary device and the secondary device is executed. The executing includes utilizing, by the primary device, the secondary display in place of the primary display. | 2017-06-22 |
20170177291 | MOBILE DEVICE PAIRING - Systems and methods for pairing electronic devices are provided. In an example embodiment, first motion capture data corresponding to a physical user motion is received from a first device. Second motion capture data corresponding to the physical user motion is received from a second device. Features are extracted from the first motion capture data and the second motion capture data. An association between the first device and the second device is determined based on a comparison of the extracted features. In response to identifying the association between the first and second device, a communicative coupling between the first device and the second device is initiated. | 2017-06-22 |
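The comparison step this abstract describes can be approximated by correlating the two motion traces: devices that captured the same physical motion (e.g., being shaken together) produce highly correlated signals. The feature choice, threshold, and names below are invented for illustration:

```python
import math

def correlation(a, b):
    # Pearson correlation of two equal-length motion traces.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def should_pair(motion_a, motion_b, threshold=0.9):
    # Initiate the communicative coupling only when the traces match closely.
    return correlation(motion_a, motion_b) >= threshold

shake = [0.0, 1.2, 3.1, 1.9, 0.3]
assert should_pair(shake, [x * 1.1 for x in shake])   # same motion, scaled
```

A real system would extract richer features (peaks, timing, frequency content) before comparing, but the association decision reduces to a similarity score against a threshold as above.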
20170177292 | SYSTEM CONFIGURING A HUMAN MACHINE INTERFACE ON MULTIPLE DISPLAYS - A system for operating a display as a Human Machine Interface suitable to use in an automated vehicle includes a display and a controller. The display is positioned in a vehicle so as to be viewable by an occupant of the vehicle, said display characterized by a display-resolution and a display-size. The controller is in communication with the display. The controller is configured to determine how many widgets can be shown on the display based on the display-size, determine a list of selected-widgets from a list of possible-widgets, wherein each of the possible-widgets is characterized by a critical-factor, and the selected-widgets have a critical-factor greater than critical-threshold, determine a configuration of each selected-widget based on the display-resolution, wherein the configuration includes an aspect-ratio, and operate the display to show the selected-widgets to the occupant. | 2017-06-22 |
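The widget-selection step can be sketched directly from the abstract's description: filter the possible-widgets by critical-factor against the threshold, then keep as many as the display-size allows. Field names and sample values are assumptions:

```python
def select_widgets(possible_widgets, critical_threshold, max_count):
    # Keep widgets whose critical-factor exceeds the threshold, most
    # critical first, capped by how many fit on the display.
    eligible = [w for w in possible_widgets
                if w["critical_factor"] > critical_threshold]
    eligible.sort(key=lambda w: w["critical_factor"], reverse=True)
    return eligible[:max_count]

widgets = [
    {"name": "speed", "critical_factor": 0.9},
    {"name": "radio", "critical_factor": 0.2},
    {"name": "fuel",  "critical_factor": 0.7},
]
chosen = select_widgets(widgets, critical_threshold=0.5, max_count=2)
```

Per the abstract, each chosen widget would then get a configuration (including an aspect-ratio) computed from the display-resolution before being shown.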
20170177293 | TECHNOLOGIES FOR PROTECTING AUDIO DATA WITH TRUSTED I/O - Technologies for cryptographic protection of I/O audio data include a computing device with a cryptographic engine and an audio controller. A trusted software component may request an untrusted audio driver to establish an audio session with the audio controller that is associated with an audio codec. The trusted software component may verify that a stream identifier associated with the audio session received from the audio driver matches a stream identifier received from the codec. The trusted software may program the cryptographic engine with a DMA channel identifier associated with the codec, and the audio controller may assert the channel identifier in each DMA transaction associated with the audio session. The cryptographic engine cryptographically protects audio data associated with the audio session. The audio controller may lock the controller topology after establishing the audio session, to prevent re-routing of audio during a trusted audio session. Other embodiments are described and claimed. | 2017-06-22 |
20170177294 | DYNAMIC AUDIO CODEC ENUMERATION - Techniques related to dynamic audio codec enumeration and dynamically providing communication between drivers are discussed. Such techniques may include providing back door communication between the drivers via mailbox registers in audio codec hardware. | 2017-06-22 |
20170177295 | MUSIC SYNCHRONIZATION ARRANGEMENT - The invention generally pertains to a hand-held computing device. More particularly, the invention pertains to a computing device that is capable of controlling the speed of the music so as to affect the mood and behavior of the user during an activity such as exercise. By way of example, the speed of the music can be controlled to match the pace of the activity (synching the speed of the music to the activity of the user) or alternatively it can be controlled to drive the pace of the activity (increasing or decreasing the speed of the music to encourage a greater or lower pace). One aspect of the invention relates to adjusting the tempo (or some other attribute) of the music being outputted from the computing device. By way of example, a song's tempo may be increased or decreased before or during playing. | 2017-06-22 |
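The tempo-matching idea reduces to scaling playback rate by the ratio of the user's activity cadence to the song's tempo. The clamping bounds below are invented to keep the adjusted tempo listenable; they are not from the application:

```python
def playback_rate(song_bpm, cadence_spm, lo=0.8, hi=1.25):
    # Rate 1.0 leaves the song unchanged; >1 speeds it up to match a
    # faster activity pace (steps per minute), <1 slows it down.
    return max(lo, min(hi, cadence_spm / song_bpm))

assert playback_rate(160, 160) == 1.0     # pace already matches the song
assert playback_rate(160, 180) == 1.125   # speed up to meet a faster pace
```

Driving the pace rather than matching it would invert the logic: pick a cadence target slightly above the measured one and scale the music toward that target instead.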
20170177296 | SYSTEMS AND METHODS TO OPTIMIZE MUSIC PLAY IN A SCROLLING NEWS FEED - Systems, methods, and non-transitory computer readable media are configured to receive metadata for audio content associated with an audio content item for presentation in a news feed to be displayed on a screen of a computing device associated with a user. The metadata is transformed for display in the audio content item. The transformed metadata is displayed in the audio content item. In addition, systems, methods, and non-transitory computer readable media are configured to present an audio content item in a news feed to be displayed on a screen of a computing device associated with a user. An input by the user for scrolling the news feed and the audio content item on the screen is received. A pop out player is presented in response to disappearance of the audio content item from the screen based on the scrolling. | 2017-06-22 |
20170177297 | Cadence and Media Content Phase Alignment - Systems, devices, apparatuses, components, methods, and techniques for cadence and media content phase alignment are provided. An example media-playback device includes a content output device that operates to output media content, a cadence-acquiring device, a phase-delay calibration engine, a cadence-based media content selection engine, and a phase-aligned media playback engine. The cadence-acquiring device includes a movement-determining device and a cadence-determination engine configured to determine a cadence based on movement data captured by the movement-determining device. The phase-delay calibration engine is configured to determine phase delay values for at least one cadence value. The cadence-based media content selection engine is configured to identify a media content item based on the cadence determined by the cadence-acquiring device. The phase-aligned media playback engine is configured to align the identified media content item to the repetitive-motion activity and cause the content output device to output the aligned media content item. | 2017-06-22 |
20170177298 | INTERACTING WITH A PROCESSING SYSTEM USING INTERACTIVE MENU AND NON-VERBAL SOUND INPUTS - Examples of techniques for interacting with a processing system using an interactive menu and a non-verbal sound input are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method may include receiving a command to initiate the interactive menu. The method may further include presenting the interactive menu to a user of the processing system, the interactive menu comprising a plurality of interactive menu options. The method may further include performing an action on the processing system based on receiving a non-verbal sound input from the user responsive to at least one of the plurality of interactive menu options presented to the user. | 2017-06-22 |
20170177299 | Electronic Device with a Plurality of Sound Acquisition Units for Operating under Different Operating Modes - An electronic device comprising a display unit operable under different operating modes based on a structure of the display unit, and first and second sound acquisition units that acquire sound in a first or second operating mode based on a first or second structure of the display unit. Also provided is a method for controlling an electronic device comprising steps of determining whether a display unit is in a first or second mode, where the display unit is in the first mode when all parts on the display unit are on a same plane, and in the second mode when at least two parts on the display unit are not on the same plane; controlling a first sound acquisition unit to work when the display unit is in the first mode; and controlling a second sound acquisition unit to work when the display unit is in the second mode. | 2017-06-22 |
20170177300 | DATA PROCESSING DEVICE, DATA PROCESSING METHOD, AND COMPUTER PROGRAM - A device for temporarily storing data output from a register or data obtained by processing the output data, a processing method therefor, a program, and the like are provided. A circuit (hereinafter referred to as a selective memory cell) in which a plurality of switches and a signal storing circuit are connected is provided in a data processing device. The selective memory cell can selectively store necessary data. A result of a frequently performed process is stored in the selective memory cell. A process whose result is stored can be performed by only outputting the stored data instead of performing the whole process; thus, input data does not need to be transferred, which can result in a reduction in processing time. | 2017-06-22 |
20170177301 | ASYMMETRIC CHIP-TO-CHIP INTERCONNECT - Methods and apparatus to transfer data between a first device and a second device, is disclosed. An apparatus according to various embodiments may comprise a first device and a second device. The first device may comprise at least one first non-differential transmitter coupled to a first channel, at least one second non-differential transmitter coupled to a second channel, and at least one differential receiver to receive a data bit and its complement on the first and second channels in parallel. The second device may comprise at least one first non-differential receiver coupled to the first channel, at least one second non-differential receiver coupled to the second channel, and at least one differential transmitter to transmit a data bit and its complement on the first and second channels in parallel. | 2017-06-22 |
20170177302 | GENERATION OF DISTINCTIVE VALUE BASED ON TRUE RANDOM INPUT - Aspects of the disclosure are directed to solutions for generating a distinctive value in a computing device. A captured data gathering module is to interface with the plurality of data capture devices and to read data output from each of them. The data output has a randomness characteristic. A captured data aggregation module is to combine the data output from at least two different data capture devices to produce an aggregated output. A transformation module is to compute a transformation of the aggregated output to produce a distinctive value that is based on the randomness characteristic. | 2017-06-22 |
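The aggregate-then-transform flow this abstract describes can be sketched by concatenating the raw outputs of several capture devices and hashing the result. SHA-256 is an illustrative transformation choice here, and the sample byte strings stand in for real capture-device output:

```python
import hashlib

def distinctive_value(capture_outputs):
    # Aggregation: combine the data output of two or more capture devices.
    aggregated = b"".join(capture_outputs)
    # Transformation: compress the aggregate into a fixed-size distinctive
    # value that inherits the randomness of the inputs.
    return hashlib.sha256(aggregated).hexdigest()

v = distinctive_value([b"\x03\x51\x7f", b"sensor-noise", b"\x00\xff"])
assert len(v) == 64   # 256-bit value, hex-encoded
```

Because the hash mixes every input bit into the output, any genuine randomness in even one capture device carries through to the distinctive value.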
20170177303 | GENERATING A NATIVE ACCESS PLAN FOR DYNAMIC ENTITY CALLING - Disclosed herein are system, method, and computer program product embodiments for generating a native access plan from a query execution plan for dynamic entity calling. An embodiment operates by receiving the query execution plan comprising at least one call to an entity, the entity being implemented by a plurality of classes, and generating source code of a native access plan that implements the query execution plan. The source code of the native access plan includes instructions to translate a run-time call to the entity to a call to a corresponding implementation of the entity based on an identifier of the called implementation of the entity. | 2017-06-22 |
20170177304 | DYNAMIC SETUP OF DEVELOPMENT ENVIRONMENTS - A computer-implemented method includes receiving a request from a user at a local machine to access a project. One or more programming languages used in the project are identified. Resource availability at the local machine is analyzed. An integrated development environment (IDE) is selected for the project, based at least in part on the one or more programming languages and the resource availability of the local machine. The IDE is provisioned automatically, by a computer processor, for the user in response to the request to access the project. | 2017-06-22 |
20170177305 | METHOD AND SYSTEM FOR THE DEFINITION OF A MODEL - The disclosure generally describes methods, software, and systems, including a method for defining and using models. A model definition language is provided for defining models. The model definition language includes elements of a meta-model. The elements define, for a model, a root element of the model and plural participant instances of the model. Each participant instance is linked with the root element. Each participant instance defines at least one of plural participants of the model. Each participant instance is an instance of a participant class. A relation port for the model defines plural relations and flows among the plural participants. Each relation is defined by a relation instance being of a relation class and defining a relationship between participants. | 2017-06-22 |
20170177306 | System and Method for Executing User Channel Programs on Mainframe Computers - A method, apparatus and computer program product, the method comprising: opening a storage volume associated with a mainframe computer executing the z/OS operating system; obtaining access to a required area of the storage volume; receiving a call from a program programmed in a high level programming language and executed on the mainframe computer, to execute a user channel program, wherein the user channel program may refer to any location within the required area of the storage volume; and processing the user channel program to obtain channel command words and provide the channel command words to Execute Channel Program (EXCP). | 2017-06-22 |
20170177307 | RULE-BASED AUTOMATIC CLASS GENERATION FROM A JSON MESSAGE - A method, system, and computer program product for Java development environments. The method commences upon receiving a set of one or more rules to be applied to one or more JSON messages, then generating one or more Java classes corresponding to the received JSON messages. The received JSON messages can be retrieved from a repository for JSON message files, or the JSON messages can be received by sniffing a message transmitted over a network link. The rules can be applied according to one or more precedence regimes, and applying the precedence regimes over the two or more rules can be considered in a pre-processing step performed before receiving a JSON message or can be considered after receiving a JSON message. | 2017-06-22 |
20170177308 | SOFTWARE DEVELOPMENT USING MULTI-DOMAIN DECISION MANAGEMENT - A multi-domain decision manager facilitates software development of a software application across knowledge domains, based on relationships between a first knowledge domain and a second knowledge domain. The multi-domain decision manager includes an assessment engine configured to construct a first assessment as an instantiation of a first knowledge base model of the first knowledge domain, and a second assessment as an instantiation of a second knowledge base model of the second knowledge domain. A relationship engine may be configured to characterize relationships between the first assessment and the second assessment, wherein the relationships characterize a likelihood that inclusion of a first selectable assessment option of the first assessment is associated with inclusion of a second selectable assessment option of the second assessment. A relationship analyzer may be configured to provide a relationship analysis characterizing a cumulative impact of the relationships on the first assessment and the second assessment. | 2017-06-22 |
20170177309 | System and Method for Rapid Development and Deployment of Reusable Analytic Code for Use in Computerized Data Modeling and Analysis - A system and method for rapid development and deployment of reusable analytic code for use in computerized data modeling and analysis is provided. The system includes a centralized, continually updated environment to capture pre-processing steps used in analyzing big data, such that the complex transformations and calculations become continually fresh and accessible to those investigating business opportunities. The system incorporates deep domain expertise as well as ongoing expertise in data science, big data architecture, and data management processes. In particular, the system allows for rapid development and deployment of analytic code that can easily be re-used in various data analytics applications, and on multiple computer systems. | 2017-06-22 |
20170177310 | SOFTWARE DEVELOPMENT USING RE-USABLE SOFTWARE COMPONENTS - A component selector may select a first software component stored in a software component library in conjunction with a first annotation, the first annotation being linked to a second annotation of a second software component via a link. An evaluation engine may evaluate a property expressed by the first annotation relative to a requirement expressed by the second annotation, and thereby verify compliance of the first software component and the second software component for inclusion within a software application being developed. A component update monitor may re-verify the compliance, based on an update to at least one of the first software component and the second software component. | 2017-06-22 |
20170177311 | SERVICE EXTRACTION AND APPLICATION COMPOSITION - Service extraction and application composition may include preprocessing and instrumenting an existing application that is to be converted to a service-oriented application. The existing application may be executed to generate traces related to a runtime behavior of services related to the existing application. The traces may be used to generate calling code related to the services related to the existing application. Representational state transfer (REST) application programming interfaces (APIs) that include the calling code to call the services related to the existing application may be generated. Refactored code for the existing application may be generated for invocation of the services related to the existing application by the REST APIs. | 2017-06-22 |
20170177312 | DYNAMIC RECOMPILATION TECHNIQUES FOR MACHINE LEARNING PROGRAMS - The embodiments described herein relate to recompiling an execution plan of a machine-learning program during runtime. An execution plan of a machine-learning program is compiled. In response to identifying a directed acyclic graph of high-level operations (HOP DAG) for recompilation during runtime, the execution plan is dynamically recompiled. The dynamic recompilation includes updating statistics and dynamically rewriting one or more operators of the identified HOP DAG, recomputing memory estimates of operators of the rewritten HOP DAG based on the updated statistics and rewritten operators, constructing a directed acyclic graph of low-level operations (LOP DAG) corresponding to the rewritten HOP DAG based in part on the recomputed memory estimates, and generating runtime instructions based on the LOP DAG. | 2017-06-22 |
20170177313 | METHOD FOR COMPILING A SOURCE CODE - The invention relates to a method for compiling a source code to a program code, the method comprising: providing a pattern graph based on the source code, the pattern graph corresponding to an intermediate representation of the source code according to a set of rules in a first programming language, wherein the set of rules comprises a specific replacement rule directing a pattern graph to be replaced by a corresponding replacement graph assigned to the pattern graph, replacing the pattern graph by the replacement graph assigned to the pattern graph, and generating the program code based on the replacement graph. | 2017-06-22 |
20170177314 | APPLICATION RANDOMIZATION MECHANISM - An example method includes generating, by a computing system, first unique configuration information, generating, by the computing system and based on the first unique configuration information, a first unique instance of a software component, generating second unique configuration information, wherein the second unique configuration information is different from the first unique configuration information, and generating, based on the second unique configuration information, a second unique instance of the software component that is executable on a runtime computing system. The first and second unique instances of the software component comprise different instances of the same software component that each are configured to have uniquely different operating characteristics during execution on the runtime computing system. | 2017-06-22 |
20170177315 | COMPOSING A MODULE SYSTEM AND A NON-MODULE SYSTEM - A bridge module is generated to bridge standard modules in a module system and non-module code in a non-module system. The bridge module includes explicit dependencies associated with a namespace, such as a dependency path corresponding to the non-module code. The bridge module exposes packages of the non-module code at least to the standard modules. Operations are performed on a code base that uses standard modules, bridge modules, and non-module code. | 2017-06-22 |
20170177316 | MOBILE APPLICATION DEPLOYMENT FOR DISTRIBUTED COMPUTING ENVIRONMENTS - Embodiments of the present invention provide a method, system, and computer program product for ensuring the veracity of a mobile application for deployment in a distributed computing environment. In an embodiment of the invention, a method for ensuring the veracity of a mobile application for deployment in a distributed computing environment is provided. The method includes detecting a mobile application being uploaded for deployment to a mobile computing device in the distributed computing environment, creating and then storing a fingerprint for the uploaded mobile application, calculating an offset value according to the fingerprint for the uploaded mobile application, and storing the offset value for the uploaded mobile application. The method further includes, prior to deploying the uploaded mobile application to the mobile computing device, validating the offset value for the uploaded mobile application to determine that the uploaded mobile application is an unaltered version of the uploaded mobile application. | 2017-06-22 |
20170177317 | Dependency-Aware Transformation of Multi-Function Applications for On-Demand Execution - An on-demand executable system includes an application acquisition engine configured to acquire a first application that is programmed to perform a first function and a second function. An applet extractor includes a function analyzer configured to analyze the first application to identify functions that the first application is programmed to perform. The identified functions include the first function. The applet extractor includes a code analyzer configured to analyze code of the first application to identify first code segments that implement the first function. The applet extractor includes an applet packager configured to package the first code segments into a first executable. An executable request servicer is configured to, in response to a request, transmit the first executable to a user device. | 2017-06-22 |
20170177318 | Dependency-Aware Transformation of Multi-Function Applications for On-Demand Execution - A mobile device includes a user interface allowing a user to enter search parameters. A query wrapper module generates a query wrapper based on the entered search parameters. A search system communication module transmits the query wrapper to a search system and receives a set of results from the search system. A first result corresponds to a first applet. A results presentation module displays the set of results to the user. An access mechanism module, in response to the user selecting the first result, selectively triggers an applet request to be sent to an applet distribution system. The applet request instructs the applet distribution system to transmit the first applet to the mobile device. The first applet includes native code for execution on an operating system of the mobile device. The first applet from the applet distribution system is executed. | 2017-06-22 |
20170177319 | Dependency-Aware Transformation Of Multi-Function Applications For On-Demand Execution - An on-demand executable system includes an application acquisition engine configured to acquire a first application that is programmed to perform a first function and a second function. An applet extractor includes a function analyzer configured to analyze the first application to identify functions that the first application is programmed to perform. The identified functions include the first function. The applet extractor includes a code analyzer configured to analyze code of the first application to identify first code segments that implement the first function. The applet extractor includes an applet packager configured to package the first code segments into a first executable. An executable request servicer is configured to, in response to a request, transmit the first executable to a user device. | 2017-06-22 |
20170177320 | UPDATING EXTENSION IN RESPONSE TO OPENING ASSOCIATED DOCUMENT - A non-transitory computer-readable storage medium may comprise instructions stored thereon. When executed by at least one processor, the instructions may be configured to cause a backend server to at least receive, from an administrator webserver, an extension, store the extension and an associated timestamp in a repository, the associated timestamp indicating a time at which the extension was received from the administrator webserver, receive a request for the extension from a customer webserver, the request for the extension identifying the extension, and in response to receiving the request for the extension, fetch the extension from the repository, and send the extension to the customer webserver. | 2017-06-22 |
20170177321 | TECHNIQUE FOR EFFICIENTLY UPGRADING SOFTWARE IN A VIDEO CONTENT NETWORK - At a carousel origin server, an indication is obtained that at least one of a plurality of consumer premises equipment connected to a video content network requires a software upgrade. Responsive to obtaining the indication, the carousel origin server loads onto a carousel at least one image required for the software upgrade. The at least one image required for the software upgrade is broadcast from the carousel to the at least one of the plurality of consumer premises equipment, for a predetermined period. Subsequent to the predetermined period, the at least one image required for the software upgrade is removed from the carousel. | 2017-06-22 |
20170177322 | MONITORING APPLICATION STATES FOR DEPLOYMENT DURING RUNTIME OPERATIONS - Interaction between development environments and runtime environments to ensure that underlying process components are in an acceptable state before deploying application updates. A deploy state monitor in a development environment interacts with runtime values in executing applications to manage deployment requests and states of executing applications. | 2017-06-22 |
20170177323 | AUTOMATIC ONLINE SYSTEM UPGRADE - Automatically upgrading a computing environment system may include automatically identifying a set of timeframes and nodes running user applications on physical machines, containers, or virtual machines (VMs) whose disruption during the identified timeframes minimally impacts the user applications. The timeframes may be intelligently determined by leveraging the monitoring data obtained automatically and/or the hints supplied by the user. | 2017-06-22 |
20170177324 | MAINTAINING DEPLOYMENT PIPELINES FOR A PRODUCTION COMPUTING SERVICE USING LIVE PIPELINE TEMPLATES - Techniques are presented for managing a deployment pipeline using an inheritable and extensible source code template—generally referred to as a live pipeline template (LPT). As described, live pipeline templates may be used to manage deployment pipelines which, in turn, are used to launch, maintain, and update the services and systems used to host and provide computing services. | 2017-06-22 |
20170177325 | DYNAMIC DATA DIFFERENCE GENERATION AND DISTRIBUTION - A method of updating data may include receiving an update request from a computing device, the update request including a profile of a current set of data stored on the computing device; determining, based at least in part on the profile of the current set of data, that an updated set of data is available; determining if a delta set of data has previously been generated to transform the current set of data to the updated set of data; and based on determining that the delta set of data has not been previously generated: generating the delta set of data; and transmitting an address, to the computing device, for obtaining the delta set of data by the computing device. | 2017-06-22 |
20170177326 | SYSTEMS AND METHODS FOR EXPORTING, PUBLISHING, BROWSING AND INSTALLING ON-DEMAND APPLICATIONS IN A MULTI-TENANT DATABASE ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for creating, exporting, viewing and testing, and importing custom applications in a multitenant database environment. These mechanisms and methods can enable embodiments to provide a vehicle for sharing applications across organizational boundaries. The ability to share applications across organizational boundaries can enable tenants in a multi-tenant database system, for example, to easily and efficiently import and export, and thus share, applications with other tenants in the multi-tenant environment. | 2017-06-22 |
20170177327 | DYNAMIC SETUP OF DEVELOPMENT ENVIRONMENTS - A computer-implemented method includes receiving a request from a user at a local machine to access a project. One or more programming languages used in the project are identified. Resource availability at the local machine is analyzed. An integrated development environment (IDE) is selected for the project, based at least in part on the one or more programming languages and the resource availability of the local machine. The IDE is provisioned automatically, by a computer processor, for the user in response to the request to access the project. | 2017-06-22 |
20170177328 | IDENTIFYING USER MANAGED SOFTWARE MODULES - A computer program product for identifying user managed software modules includes program instructions for: receiving a request for a directed load of a software module into memory, wherein the request includes an address; storing the software module at the address in the received request; adding a name and an address range of the stored software module to a data structure identifying software modules that have been loaded into memory via directed loads; receiving a query that includes an input module name or an input address range; and responsive to determining that the input module name or input address range of the received query is not stored in one or more data structures identifying one or more software modules that have been loaded into memory without directed loads, searching the data structure identifying software modules that have been loaded into memory via directed loads for the respective query. | 2017-06-22 |
20170177329 | IDENTIFYING USER MANAGED SOFTWARE MODULES - A method for identifying user managed software modules includes: receiving a query that includes an input module name or an input address range. The method further includes, responsive to determining that the input module name or input address range of the received query is not stored in one or more data structures identifying one or more software modules that have been loaded into memory without a directed load, searching a data structure identifying software modules that have been loaded into memory via directed loads for the respective input module name or input address range. | 2017-06-22 |
20170177330 | LOGICAL LEVEL DIFFERENCE DETECTION BETWEEN SOFTWARE REVISIONS - A comparison system includes a memory including a first compiled version and a second compiled version of a target application, at least one processor, and a comparison engine, executing on the at least one processor. The comparison engine is configured to identify a method in the first compiled version, locate the method in the second compiled version, compare the method in the first compiled version to the method in the second compiled version, and provide an indication that the method is an altered method from the first compiled version to the second compiled version of the target application. | 2017-06-22 |
20170177331 | METHOD AND APPARATUS FOR EXECUTION OF DISTRIBUTED WORKFLOW PROCESSES - The system provides a method and apparatus for the dynamic distribution, deployment, and configuration of optimizable code modules for use with software workflows running on a single compute device or across a network connected grid of compute devices. The system comprises one or more collections of software and data modules stored in a content catalog, conforming to a defined interface, and having metadata conforming to a schema that enables the modules to be statically or dynamically optimized by the controlling workflow and a workflow manager. The system provides a service that enables code modules to be located, deployed, configured, and updated by the controlling workflow, the workflow manager, or a remote manager. | 2017-06-22 |
20170177332 | MANAGING CHANGE-SET DELIVERY - An approach that analyzes and manages unresolved (i.e., pending, outgoing) change-sets is provided. Specifically, this approach parses the change-set into a plurality (i.e., one or more) of changes having interdependencies within a Java class file to determine the impact each change may have. More specifically, a change-set management tool provides this capability. The change-set management tool includes a parsing module configured to receive an outgoing change-set and to parse the change-set into a plurality of changes having interdependencies within a Java class file. The change-set management tool further comprises an evaluation module configured to evaluate an impact that each of the plurality of changes within the change-set has on source code external to the change-set in the Java class file based on the interdependencies. | 2017-06-22 |
20170177333 | MAINTAINING AND UPDATING SOFTWARE VERSIONS VIA HIERARCHY - The described technology is directed towards maintaining and using a version-based hierarchy of software resources (e.g., file system files) to return version-specific responses to clients. A client sends its version information with each data request, and gets back a response based upon that version. Version changes are made by maintaining the current version of each software code resource and overriding the current version with a previous version for clients as needed. The technology allows updates (e.g., for new devices and new software resource versions) to be supported by inserting resources into the resource hierarchy and moving resources therein based upon versioning. A system based on deltas is also contemplated, in which only parts of a file may be changed relative to a different version, instead of overriding the entire file. | 2017-06-22 |
20170177334 | GENERATING AND MANAGING APPLICATIONS USING ANY NUMBER OF DIFFERENT PLATFORMS - At least one application is received from a user. The at least one application is stored on a communication platform. A catalog is received. The catalog includes at least one service. Each service of the at least one service is associated with a platform. An indication of a selection, from the user, is received. The selection comprises a first service associated with a first platform, and a second service associated with a second platform. The first service stores the at least one application from the user. The second service runs the at least one application from the user. Responsive to receiving the indication, the at least one application is deployed to the indicated first platform. Additionally, responsive to receiving the indication, a service bridge from the communication platform to the second platform is deployed. The at least one application is run on the first platform utilizing the service bridge. | 2017-06-22 |
20170177335 | SYSTEM AND METHOD OF RECONSTRUCTING COMPLEX CUSTOM OBJECTS - A system and method is provided for reconstructing one or more collections of objects across platforms. More particularly, Java Annotations are used to assist a Web Services Description Language (WSDL) wizard in reconstructing a collection of objects. In implementation, the system and method parses the object types such that a wizard can recreate or reconstruct the collection of objects for use by a receiving service. The method includes reconstructing a collection using one or more annotations that document a base object of the collection. | 2017-06-22 |
20170177336 | HARDWARE CANCELLATION MONITOR FOR FLOATING POINT OPERATIONS - In an embodiment, a processor includes a plurality of cores, with at least one core including a cancellation monitor unit. The cancellation monitor unit comprises circuitry to: detect an execution of a floating point (FP) instruction in the core, wherein the execution of the FP instruction uses a set of FP inputs and generates an FP output; determine a maximum exponent value associated with the set of FP inputs to the FP instruction; subtract an exponent value of the FP output from the maximum exponent value to obtain an exponent difference; and in response to a determination that the exponent difference meets or exceeds a threshold level, increment a cancellation event count. Other embodiments are described and claimed. | 2017-06-22 |
20170177337 | WEIGHTED PSEUDO-RANDOM DIGITAL CONTENT SELECTION - Briefly, embodiments disclosed herein may relate to digital content selection, and more particularly to weighted pseudo-random digital content selection for use in and/or with online digital content delivery, such as online advertising, for example. | 2017-06-22 |
20170177338 | MANAGEMENT OF ASYNCHRONOUS INTERRUPTS IN A TRANSACTIONAL MEMORY MULTIPROCESSOR ENVIRONMENT - A list of instructions is received. The list of instructions includes two or more instructions. Processing of all interrupts is assigned to a first core of multiple cores of at least one processor. Processing of the list of instructions is assigned to the remaining cores of the multiple cores. The remaining cores of the multiple cores do not include the first core of the multiple cores. One or more of the instructions of the list of instructions are processed with the remaining cores of the multiple cores. | 2017-06-22 |
20170177339 | HARDWARE APPARATUSES AND METHODS TO SWITCH SHADOW STACK POINTERS - Methods and apparatuses relating to switching of a shadow stack pointer are described. In one embodiment, a hardware processor includes a hardware decode unit to decode an instruction, and a hardware execution unit to execute the instruction to: pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread, remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor. | 2017-06-22 |
20170177340 | VECTOR STORE/LOAD INSTRUCTIONS FOR ARRAY OF STRUCTURES - A processor comprises a plurality of vector registers, and an execution unit, operatively coupled to the plurality of vector registers, the execution unit comprising a logic circuit implementing a load instruction for loading, into two or more vector registers, two or more data items associated with a data structure stored in a memory, wherein each one of the two or more vector registers is to store a data item associated with a certain position number within the data structure. | 2017-06-22 |
20170177341 | APPARATUS AND METHOD FOR RETRIEVING ELEMENTS FROM A LINKED STRUCTURE - An apparatus and method are described for retrieving elements from a linked structure. For example, one embodiment of an apparatus comprises: a decode unit to decode a first instruction, the first instruction to utilize a current address value, an end address value, and an offset; and an execution unit to execute the first instruction to cause the execution unit to compare the current address value with the end address value, the execution unit to perform no additional operation with respect to the first instruction if the current address value is equal to the end address value; and if the current address value is not equal to the end address value, then the execution unit to add the offset value to the current address value to identify a next address pointer within an element structure, the execution unit to further set the current address value equal to the next address pointer. | 2017-06-22 |
20170177342 | Instructions and Logic for Vector Bit Field Compression and Expansion - A processor includes a core to execute an instruction for conversion between an element array and a packed bit array. The core includes logic to identify one or more bit-field lengths to be used by the packed bit array, identify a width of elements of the element array, and simultaneously for elements of the element array and for bit-fields of the packed bit array, convert between the element array and the packed bit array based upon the bit-field length and the width of elements of the element array. | 2017-06-22 |
20170177343 | HARDWARE APPARATUSES AND METHODS TO FUSE INSTRUCTIONS - Methods and apparatuses relating to a fusion manager to fuse instructions are described. In one embodiment, a hardware processor includes a hardware binary translator to translate an instruction stream into a translated instruction stream, a hardware fusion manager to fuse multiple instructions of the translated instruction stream into a single fused instruction, a hardware decode unit to decode the single fused instruction into a decoded, single fused instruction, and a hardware execution unit to execute the decoded, single fused instruction. | 2017-06-22 |
20170177344 | Instructions and Logic for Blend and Permute Operation Sequences - A processor includes a core to execute an instruction and logic to determine that the instruction will require strided data converted from source data in memory. The strided data is to include corresponding indexed elements from structures in the source data to be loaded into a same register to be used to execute the instruction. The core also includes logic to load source data into preliminary vector registers. The source data is to be unaligned as resident in the vector registers. The core includes logic to apply blend instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into respective interim vector registers, and to apply further blend instructions to contents of the interim vector registers to cause additional indexed elements from the structures to be loaded into respective source vector registers. | 2017-06-22 |
20170177345 | Instruction and Logic for Permute with Out of Order Loading - A processor includes a core to execute an instruction and logic to determine that the instruction will require strided data converted from source data in memory. The strided data is to include corresponding indexed elements from a plurality of structures in the source data to be loaded into a same register to be used to execute the instruction. The core also includes logic to load source data into a plurality of preliminary vector registers with a first indexed layout of elements and a second indexed layout of elements. A plurality of the preliminary vector registers are to be loaded with the first indexed layout of elements. A common register of the preliminary vector registers is to be loaded with the second indexed layout of elements. The core also includes logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the plurality of structures to be loaded into respective source vector registers. | 2017-06-22 |
20170177346 | Instructions and Logic for Load-Indices-and-Prefetch-Scatters Operations - A processor includes an execution unit to execute instructions to load indices from an array of indices, optionally perform scatters, and prefetch (to a specified cache) contents of target locations for future scatters from arbitrary locations in memory. The execution unit includes logic to load, for each target location of a scatter or prefetch operation, an index value to be used in computing the address in memory for the operation. The index value may be retrieved from an array of indices identified for the instruction. The execution unit includes logic to compute the addresses based on the sum of a base address specified for the instruction, the index value retrieved for the location, and a prefetch offset (for prefetch operations), with optional scaling. The execution unit includes logic to retrieve data elements from contiguous locations in a source vector register specified for the instruction to be scattered to the memory. | 2017-06-22 |
20170177347 | Instruction and Logic for Detecting the Floating Point Cancellation Effect - A processor includes a front end to decode an instruction and an allocator to assign the instruction to an execution unit to execute the instruction to compute a floating point result subject to a cancellation effect. The execution unit includes a threshold to control notification of the cancellation effect, a logic to compute the maximum exponent from a source value, a logic to compute the floating point exponent, a logic to compute the detected cancellation value, and a logic to compare the detected cancellation value to the threshold. | 2017-06-22 |
20170177348 | Instruction and Logic for Compression and Rotation - A processor includes an execution unit to execute an instruction. The execution unit includes logic to compress a plurality of masked elements from a source vector to a destination vector. The execution unit also includes logic to place the masked elements into the destination vector at a rotatable index within the destination vector. The rotatable index is to indicate an offset created by elements previously entered into the destination vector. The execution unit further includes logic to determine whether compression of the plurality of masked elements will cause the rotatable index to exceed a size of the destination vector. The execution unit also includes logic to reset the rotatable index with respect to the beginning of the destination vector to compress at least one of the plurality of masked elements relative to the beginning of the destination vector. | 2017-06-22 |
20170177349 | Instructions and Logic for Load-Indices-and-Prefetch-Gathers Operations - A processor includes an execution unit to execute instructions to load indices from an array of indices, optionally perform a gather, and prefetch (to a specified cache) elements for a future gather from arbitrary locations in memory. The execution unit includes logic to load, for each element to be gathered or prefetched, an index value to be used in computing the address in memory for the element. The index value may be retrieved from an array of indices that is identified for the instruction. The execution unit includes logic to compute the address based on the sum of a base address that is specified for the instruction and the index value that was retrieved for the data element, with or without scaling. The execution unit includes logic to store gathered data elements in contiguous locations in a destination vector register that is specified for the instruction. | 2017-06-22 |
20170177350 | Instructions and Logic for Set-Multiple-Vector-Elements Operations - A processor includes an execution unit to execute instructions to set data elements of different types, from different source vector registers, in destination vectors of multiple-element data structures, each including elements of multiple types. The execution unit includes logic to extract data elements from specific positions within each source vector register dependent on an instruction encoding or parameter. A vector SET3 instruction encoding specifies that respective data elements be extracted from the same positions within first, second, and third source vector registers to assemble multiple XYZ-type data structures. A vector SET4 instruction encoding specifies that respective data elements be extracted from the same positions within two source vector registers to assemble half the elements of multiple XYZW-type data structures. The execution unit includes logic to place the reorganized data elements in contiguous locations (SET3 operations), or successive even or odd locations (SET4 operations) in the destination vector. | 2017-06-22 |
20170177351 | Instructions and Logic for Even and Odd Vector Get Operations - A processor includes an execution unit to execute even and odd vector GET instructions. The execution unit includes logic to extract data elements from even numbered locations or from odd numbered locations within two source vector registers. The execution unit includes logic to place the extracted even or odd data elements in contiguous locations in a destination vector. The execution unit includes logic to store the destination vector to a destination vector register specified in the instruction. The data elements stored next to each other in the source vector registers may be respective components of a data structure. A sequence of even and odd vector GET instructions may be executed to extract vectors of data elements of the same type from an array of structures with four strides. The execution unit may include a Single Instruction Multiple Data (SIMD) coprocessor to execute the even and odd vector GET instructions. | 2017-06-22 |
20170177352 | Instructions and Logic for Lane-Based Strided Store Operations - A processor includes an execution unit to execute lane-based strided store instructions. The execution unit includes logic to extract a first data element from each of multiple lanes within a source vector register and to extract a second data element from each lane. The execution unit includes logic to place, in a destination vector, the first data element extracted from the second lane next to the first data element extracted from the first lane, and the second data element extracted from the second lane next to the second data element extracted from the first lane. The execution unit includes logic to store the destination vector in memory, beginning at a location specified in the instruction, such that data elements placed next to each other in the destination vector are stored in contiguous locations. The data elements placed next to each other may be respective components of a data structure. | 2017-06-22 |
20170177353 | Instructions and Logic for Get-Multiple-Vector-Elements Operations - A processor includes an execution unit to execute instructions to get data elements of the same type from multiple data structures packed in vector registers. The execution unit includes logic to extract data elements from specific positions within each data structure dependent on an instruction encoding. A vector GET3 instruction encoding specifies that data elements be extracted from the first, second, or third position in each XYZ-type data structure. A vector GET4 instruction encoding specifies that data elements be extracted from the first, second, third, or fourth position in each XYZW-type data structure and that the extracted data elements be placed in the upper or lower half of a destination vector. The execution unit includes logic to place the extracted data elements in contiguous locations in the destination vector. The execution unit includes logic to store the destination vector to a destination vector register specified in the instruction. | 2017-06-22 |
20170177354 | Instructions and Logic for Vector-Based Bit Manipulation - A processor includes a front end to receive an instruction to perform a vector-based bit manipulation, a decoder to decode the instruction, and a source vector register to store multiple data elements. The processor also includes an execution unit to execute the instruction with a first logic to apply a bit manipulation to each of the multiple data elements within the source vector register in parallel. In addition, the processor includes a retirement unit to retire the instruction. | 2017-06-22 |
20170177355 | Instruction and Logic for Permute Sequence - A processor includes a core to execute an instruction and logic to determine that the instruction will require strided data converted from source data in memory. The strided data is to include corresponding indexed elements from structures in the source data to be loaded into a final register to be used to execute the instruction. The core also includes logic to load source data into a plurality of preliminary vector registers to align a defined element of one of the preliminary vector registers in a position that corresponds to a required position in the final register for execution. The core includes logic to apply permute instructions to contents of the preliminary vector registers to cause corresponding indexed elements from the structures to be loaded into respective source vector registers. | 2017-06-22 |
20170177356 | Systems, Apparatuses, and Method for Strided Access - Systems, methods, and apparatuses for strided access are described. In some embodiments, a plurality of registers are loaded with data from an array of structures. Then data elements that are not needed in a permute operation are overwritten with index values using a write mask. The register now contains a mix of data and index values. When this same write mask is passed to the permute instruction, which overwrites the index register as destination, the data values are preserved and the index values are overwritten with data coming from the other two source registers as controlled by the index values. | 2017-06-22 |
20170177357 | Instruction and Logic for Vector Permute - A processor includes a front end to decode an instruction and an allocator to assign the instruction to an execution unit to execute the instruction to permute vector data into a destination register for storing elements. The execution unit includes logic to compute an element count, logic to compute an index size, logic to compute a byte count, a temporary destination, an index from an index vector, an offset, logic to determine a subset of the temporary destination, and logic to store the subset in one element in the destination register. | 2017-06-22 |
20170177358 | Instruction and Logic for Getting a Column of Data - A processor includes a front end to decode an instruction, a temporary destination, and an allocator to assign the instruction to an execution unit to execute the instruction to get a selected column of data into a destination register. The execution unit includes an element counter, a logic to determine an index from an index vector based on the element count, a logic to compute an address of the data, a row to be loaded into the temporary destination, and a data processing unit to copy a portion of the temporary destination into the element of the destination register. | 2017-06-22 |
20170177359 | Instructions and Logic for Lane-Based Strided Scatter Operations - A processor includes an execution unit to execute lane-based strided scatter instructions. The execution unit includes logic to extract a first data element from each of multiple lanes within a source vector register and to extract a second data element from each lane. The execution unit includes logic to place, in a destination vector, the first data element extracted from the second lane next to the first data element extracted from the first lane, and the second data element extracted from the second lane next to the second data element extracted from the first lane. The execution unit includes logic to store each collection of data elements placed next to each other in the destination vector in contiguous locations beginning at an address computed from a base address and a respective element of an index register specified in the instruction. Each collection of data elements represents a data structure. | 2017-06-22 |
20170177360 | Instructions and Logic for Load-Indices-and-Scatter Operations - A processor includes an execution unit to execute instructions to load indices from an array of indices and scatter elements to locations in sparse memory based on those indices. The execution unit includes logic to load, for each data element to be scattered by the instruction, as needed, an index value to be used in computing the address in memory at which a particular data element is to be written. The index values may be retrieved from an array of indices identified for the instruction. The execution unit includes logic to compute the addresses based on the sum of a base address specified for the instruction and the index values retrieved for the data element locations, with optional scaling. The execution unit includes logic to retrieve data elements from contiguous locations in a source vector register specified for the instruction and store them to the computed locations. | 2017-06-22 |
20170177361 | APPARATUS AND METHOD FOR ACCELERATING GRAPH ANALYTICS - An apparatus and method are described for accelerating graph analytics. For example, one embodiment of a processor comprises: an instruction fetch unit to fetch program code including set intersection and set union operations; a graph accelerator unit (GAU) to execute at least a first portion of the program code related to the set intersection and set union operations and generate results; and an execution unit to execute at least a second portion of the program code using the results provided from the GAU. | 2017-06-22 |
20170177362 | ADJOINING DATA ELEMENT PAIRWISE SWAP PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes a decode unit to decode an adjoining data element pairwise swap instruction. The instruction is to indicate a source packed data that is to include pairs of adjoining data elements, and is to indicate a destination storage location. An execution unit is coupled with the packed data registers and the decode unit. The execution unit, in response to the instruction, is to store a result packed data in the destination storage location, the result packed data to include pairs of adjoining data elements. Each pair of adjoining data elements of the result packed data is to correspond to a different pair of adjoining data elements of the source packed data. The adjoining data elements in each pair of the result packed data to have been swapped in position relative to the adjoining data elements in each corresponding pair of the source packed data. | 2017-06-22 |
20170177363 | Instructions and Logic for Load-Indices-and-Gather Operations - A processor includes an execution unit to execute instructions to load indices from an array of indices and gather elements from random locations or locations in sparse memory based on those indices. The execution unit includes logic to load, for each data element to be gathered by the instruction, as needed, an index value to be used in computing the address in memory of a particular data element to be gathered. The index value may be retrieved from an array of indices that is identified for the instruction. The execution unit includes logic to compute the address as the sum of a base address that is specified for the instruction and the index value that was retrieved for the data element, with or without scaling. The execution unit includes logic to store the gathered data elements in contiguous locations in a destination vector register that is specified for the instruction. | 2017-06-22 |
20170177364 | Instruction and Logic for Reoccurring Adjacent Gathers - A processor includes a front end to decode an instruction and an allocator to assign the instruction to an execution unit to execute the instruction to gather scattered data from a memory into a destination register, and a cache with cache lines. The execution unit includes logic to compute the number of elements to gather and the address in memory for an element, and logic to fetch a cache line corresponding to the computed address into the cache, and logic to load the destination register from the cache. | 2017-06-22 |
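Several of the gather and scatter entries above (for example 20170177360 and 20170177363) describe the same addressing scheme: for each lane, an index value is loaded from an array of indices, the element's address is computed as the sum of a base address and that index with optional scaling, and the gathered elements are stored in contiguous destination locations. A minimal scalar sketch of that load-indices-and-gather semantics follows; the function name, the dict-as-sparse-memory model, and the parameter names are illustrative assumptions, not taken from the filings:

```python
def load_indices_and_gather(memory, base, index_array, scale=1):
    """Scalar model of a load-indices-and-gather operation.

    For each lane: fetch an index from the index array, compute the
    element address as base + index * scale, read the element at that
    address, and append it to a contiguous destination vector.
    """
    dest = []
    for idx in index_array:
        addr = base + idx * scale  # base address plus scaled index
        dest.append(memory[addr])  # gather from an arbitrary location
    return dest

# Sparse "memory": elements live at arbitrary, non-contiguous addresses.
memory = {100: 'a', 104: 'b', 116: 'c'}

# Gathered elements land next to each other in the destination.
result = load_indices_and_gather(memory, base=100,
                                 index_array=[0, 1, 4], scale=4)
print(result)  # → ['a', 'b', 'c']
```

A hardware implementation would perform the per-lane loads in parallel and write the result into a destination vector register; the scalar loop above only models the address arithmetic and the contiguous packing of the gathered elements.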