53rd week of 2015 patent application highlights part 51
Patent application number | Title | Published |
20150378905 | CO-PROCESSOR MEMORY ACCESSES IN A TRANSACTIONAL MEMORY - Monitoring, by a processor having a cache, addresses accessed by a co-processor associated with the processor during transactional execution of a transaction by the processor. The processor executes a transactional memory (TM) transaction, including receiving, by the processor, a memory address range of data that a co-processor may access to perform a co-processor operation. The processor saves the memory address range. Based on receiving, by the processor, a cache coherency request that conflicts with the saved address range, the processor aborts the TM transaction. | 2015-12-31 |
20150378906 | CONDITIONAL INCLUSION OF DATA IN A TRANSACTIONAL MEMORY READ SET - Determining, by a processor having a cache, if data in the cache is to be monitored for cache coherency conflicts in a transactional memory (TM) environment. A processor executes a TM transaction, that includes the following. Executing a memory data access instruction that accesses an operand at an operand memory address. Based on either a prefix instruction associated with the memory data access instruction, or an operand tag associated with the operand of the memory data access instruction, determining whether a cache entry having the operand is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction. Based on determining that the cache entry is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction, marking the cache entry for monitoring for conflicts. | 2015-12-31 |
20150378907 | DYNAMIC PREDICTOR FOR COALESCING MEMORY TRANSACTIONS - A transactional memory system predicts the outcome of coalescing outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction, the method comprising. A processor of the transactional memory system determines whether a first plurality of outermost transactions from an associated program that were coalesced experienced an abort, the first plurality of outermost transactions including a first instance of a first transaction. The processor updates a history of the associated program to reflect the results of the determination. The processor coalesces a second plurality of outermost transactions from the associated program, based, at least in part, on the updated history. | 2015-12-31 |
20150378908 | ALLOCATING READ BLOCKS TO A THREAD IN A TRANSACTION USING USER SPECIFIED LOGICAL ADDRESSES - A processor in a multi-processor configuration is configured to execute an instruction that specifies a virtual address range to be monitored to protect reads in a transaction. The processor translates the virtual address range to a series of real pages. The real starting address and ending address pairs for each real page are stored for use later on to resolve a potential cross-interrogation (XI) conflict with a real address on the XI bus. | 2015-12-31 |
20150378909 | PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed. | 2015-12-31 |
20150378910 | TRANSACTIONAL EXECUTION IN A MULTI-PROCESSOR ENVIRONMENT THAT MONITORS MEMORY CONFLICTS IN A SHARED CACHE - A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected. | 2015-12-31 |
20150378911 | ALLOWING NON-CACHEABLE LOADS WITHIN A TRANSACTION - A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can not-perform the cache miss operation on the cache based on a cache line associated with the second memory data being not-cached, and load an address of the second memory data into a non-cache-monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor. | 2015-12-31 |
20150378912 | SPECULATION CONTROL FOR IMPROVING TRANSACTION SUCCESS RATE, AND INSTRUCTION THEREFOR - Throttling instruction execution in a transaction operating in a processor configured to execute memory instructions out-of-order in a pipelined processor, wherein memory instructions are instructions for accessing operands in memory is provided. Included is executing, by the processor, instructions of a transaction comprising determining whether the transaction is in throttling mode and based on the transaction being in throttling mode, executing memory instructions in-program-order. Also included is based on the transaction not-being in throttling mode, executing memory instructions out-of-program order. | 2015-12-31 |
20150378913 | CACHING DATA IN A MEMORY SYSTEM HAVING MEMORY NODES AT DIFFERENT HIERARCHICAL LEVELS - A memory system includes a plurality of memory nodes provided at different hierarchical levels of the memory system, each of the memory nodes including a corresponding memory storage and a cache. A memory node at a first of the different hierarchical levels is coupled to a processor with lower communication latency than a memory node at a second of the different hierarchical levels. The memory nodes are to cooperate to decide which of the memory nodes is to cache data of a given one of the memory nodes. | 2015-12-31 |
20150378914 | IMPLEMENTING ADVANCED CACHING - Embodiments are disclosed for implementing a priority queue in a storage device, e.g., a solid state drive. At least some of the embodiments can use an in-memory set of blocks to store items until the block is full, and commit the full block to the storage device. Upon storing a full block, a block having a lowest priority can be deleted. An index storing correspondences between items and blocks can be used to update priorities and indicate deleted items. By using the in-memory blocks and index, operations transmitted to the storage device can be reduced. | 2015-12-31 |
20150378915 | MEMORY PERFORMANCE WHEN SPECULATION CONTROL IS ENABLED, AND INSTRUCTION THEREFOR - Throttling execution in a transaction operating in a processor configured to execute memory instructions out-of-program-order in a pipelined processor, wherein memory instructions are instructions for accessing operands in memory. Included is executing instructions of a transaction. Also included is determining whether the transaction is in throttling mode and based on determining that a transaction is in throttling mode, executing memory instructions in-program-order and dynamically prefetching memory operands of memory instructions. | 2015-12-31 |
20150378916 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments mitigate busy time in a hierarchical store-through memory cache structure including a cache directory associated with a memory cache. The cache directory is divided into a plurality of portions each associated with a portion of memory cache. A determination is made that a first subpipe of a shared cache pipeline comprises a non-store request. The shared pipeline is communicatively coupled to the plurality of portions of the cache directory. A store command is prevented from being placed in a second subpipe of the shared cache pipeline based on determining that a first subpipe of the shared cache pipeline comprises a non-store request. Simultaneous cache lookup operations are supported between the plurality of portions of the cache directory and cache write operations. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 2015-12-31 |
20150378917 | PREFETCHING OF DISCONTIGUOUS STORAGE LOCATIONS IN ANTICIPATION OF TRANSACTIONAL EXECUTION - Discontiguous storage locations are prefetched by a prefetch instruction. Addresses of the discontiguous storage locations are provided by a list directly or indirectly specified by a parameter of the prefetch instruction, along with metadata and information about the list entries. Fetching of corresponding data blocks to cache lines is initiated. A processor may enter transactional execution mode and memory instructions of a program may be executed using the prefetched data blocks. | 2015-12-31 |
20150378918 | PREFETCHING OF DISCONTIGUOUS STORAGE LOCATIONS AS PART OF TRANSACTIONAL EXECUTION - Transactional execution of a transaction beginning instruction initiates prefetching, by a CPU, of discontiguous storage locations specified by a list. The list includes entries specifying addresses and may also include corresponding metadata. The list may be specified by levels of indirection. Fetching of corresponding discontiguous cache lines is initiated while in TX mode. Additional instructions in the transaction may be executed and use the prefetched cache lines. | 2015-12-31 |
20150378919 | SELECTIVE PREFETCHING FOR A SECTORED CACHE - A memory subsystem includes memory hierarchy that performs selective prefetching based on prefetch hints. A lower level memory detects a cache miss for a requested cache line that is part of a superline. The lower level memory generates a request vector for the cache line that triggered the cache miss, including a field for each cache line of the superline. The request vector includes a demand request for the cache line that caused the cache miss, and the lower level memory modifies the request vector with prefetch hint information. The prefetch hint information can indicate a prefetch request for one or more other cache lines in the superline. The lower level memory sends the request vector to the higher level memory with the prefetch hint information, and the higher level memory services the demand request and selectively either services a prefetch hint or drops the prefetch hint. | 2015-12-31 |
20150378920 | GRAPHICS DATA PRE-FETCHER FOR LAST LEVEL CACHES - In one embodiment, an improved graphics data cache prefetcher includes a cache prefetch unit and a prefetch determination unit (PDU). The PDU determines if there is space available to retrieve some or all of the resources for an upcoming graphics operation while a current graphics operation is processed and whether the retrieval can be performed without impacting the performance of the current operation. If there is space available to retrieve some or all of the upcoming operation's resources into one or more GPU caches the prefetch determination unit programs the cache prefetch unit to retrieve the data into the one or more caches before performing the upcoming operation. | 2015-12-31 |
20150378921 | SYSTEMS AND METHODS FOR STORAGE SERVICE AUTOMATION - A cache automation module detects the deployment of storage resources in a virtual computing environment and, in response, automatically configures cache services for the detected storage resources. The automation module may detect new storage resources by monitoring storage operations and/or requests, by use of an interface provided by virtualization infrastructure, and/or the like. The cache automation module may deterministically identify storage resources that are to be cached and automatically configure caching services for the identified storage resources. | 2015-12-31 |
20150378922 | COLLECTING MEMORY OPERAND ACCESS CHARACTERISTICS DURING TRANSACTIONAL EXECUTION - A transactional execution of a set of instructions in a transaction of a program may be initiated to collect memory operand access characteristics of a set of instructions of a transaction during the transactional execution. The memory operand access characteristics may be stored upon a termination of the transactional execution of the set of instructions. The memory operand access characteristics may include an address of an accessed storage location, a count of a number of times the storage location is accessed, a purpose value indicating whether the storage location is accessed for a fetch, store, or update operation, a count of a number of times the storage location is accessed for one or more of a fetch, store, or update operation; a translation mode in which the storage location is accessed; and an addressing mode. | 2015-12-31 |
20150378923 | Data Bus Efficiency Via Cache Line Usurpation - Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory to the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor. | 2015-12-31 |
20150378924 | EVICTING CACHED STORES - A tool for determining eviction of store cache entries based on store pressure. The tool determines, by one or more computer processors, a count value for one or more new store cache entry allocations. The tool determines, by one or more computer processors, whether a new store cache entry allocation limit is exceeded. Responsive to determining the new store cache entry allocation limit is exceeded, the tool determines, by one or more computer processors, an allocation value for one or more existing store cache entries, the allocation value indicating an allocation class for each of the one or more existing store cache entries. The tool determines, by one or more computer processors based, at least in part, on the allocation value for the one or more existing store cache entries, at least one allocation class for eviction. The tool program determines, by one or more computer processors, an eviction request setting for evicting the one or more existing store cache entries. | 2015-12-31 |
20150378925 | INVALIDATION DATA AREA FOR CACHE - The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations. | 2015-12-31 |
20150378926 | TRANSACTIONAL EXECUTION IN A MULTI-PROCESSOR ENVIRONMENT THAT MONITORS MEMORY CONFLICTS IN A SHARED CACHE - A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected. | 2015-12-31 |
20150378927 | ALLOWING NON-CACHEABLE LOADS WITHIN A TRANSACTION - A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can not-perform the cache miss operation on the cache based on a cache line associated with the second memory data being not-cached, and load an address of the second memory data into a non-cache-monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor. | 2015-12-31 |
20150378928 | MANAGING READ TAGS IN A TRANSACTIONAL MEMORY - Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, determining whether a cache entry that is a read-set cache line can be replaced by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. Then invalidating the identified cache entry of the transaction. Then loading the fetched memory data into the identified cache entry, and then marking the identified cache entry as a read-set cache line of the transaction. | 2015-12-31 |
20150378929 | SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. | 2015-12-31 |
20150378930 | VALIDATING VIRTUAL ADDRESS TRANSLATION - Systems and methods for validating virtual address translation. An example processing system comprises: a processing core to execute a first application associated with a first privilege level and a second application associated with a second privilege level, wherein a first set of privileges associated with the first privilege level includes a second set of privileges associated with the second privilege level; and an address validation component to validate, in view of an address translation data structure maintained by the first application, a mapping of a first address defined in a first address space of the second application to a second address defined in a second address space of the second application. | 2015-12-31 |
20150378931 | SHARED REFERENCE COUNTERS AMONG A PLURALITY OF VIRTUAL STORAGE DEVICES - A system, method, and computer program product are provided for implementing shared reference counters among a plurality of virtual storage devices. The method includes the steps of allocating a first portion of a real storage device to store data, wherein the first portion is divided into a plurality of blocks of memory and allocating a second portion of the real storage device to store a plurality of reference counters that correspond to the plurality of blocks of memory. The reference counters may be updated by two or more virtual storage devices hosted in one or more nodes to manage the allocation of the blocks of memory in the real storage device. | 2015-12-31 |
20150378932 | SYSTEM AND METHOD FOR EXECUTING NATIVE CLIENT CODE IN A STORAGE DEVICE - A system and method for executing user-provided code securely on a solid state drive (SSD) to perform data processing on the SSD. In one embodiment, a user uses a security-oriented cross-compiler to compile user-provided source code for a data processing task on a host computer containing, or otherwise connected to, an SSD. The resulting binary is combined with lists of input and output file identifiers and sent to the SSD. A central processing unit (CPU) on the SSD extracts the binary and the lists of file identifiers. The CPU obtains from the host file system the addresses of storage areas in the SSD containing the data in the input files, reads the input data, executes the binary using a container, and writes the results of the data processing task back to the SSD, in areas corresponding to the output file identifiers. | 2015-12-31 |
20150378933 | STORAGE MANAGEMENT APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN STORAGE MANAGEMENT PROGRAM, AND CONTROL METHOD - A storage management apparatus configured to allocate physical addresses in a physical storage area, to virtual addresses in a virtual storage area for storing data is provided. The storage management apparatus includes a processor that executes a process to define, in the physical area, a continuous area having a plurality of continuous physical addresses, and define, based on a virtual address to which a physical address in the continuous area has initially been allocated, an allocation range of virtual addresses for allocating the defined continuous area; and allocate a physical address in the defined continuous area to a virtual address in the defined allocation range. | 2015-12-31 |
20150378934 | CONTEXT BASED CACHE EVICTION - A method, medium, and system to receive a request to add a resource to a cache, the resource including a data object and a context item key associated with the resource and uniquely identifying a context of use referenced by the context item key; determine whether the resource is stored in the cache; store, in response to the determination that the resource is not stored in the cache, the resource in the cache; and add the context item key of the resource stored in the cache to a record of reference list of resources. | 2015-12-31 |
20150378935 | STORAGE TABLE REPLACEMENT METHOD - A storage table replacement method uses an index table, a storage table containing multiple rows of storage cells, and a correlation table. The method includes storing information in one or more rows of storage cells in the storage table; and storing track addresses of the storage cells in the storage table in the index table. Every track address includes a row address and a column address. The method further includes recording, in every row in the correlation table, a total number of index rows/index table memory cells that use the row as an index target in the index table and addresses of a certain number of index rows/index table memory cells, where the correlation table and the storage table have a same number of rows; and, when a row of new information is generated, based on the correlation table, selecting and replacing a row in the storage table. | 2015-12-31 |
20150378936 | DYNAMIC MEMORY ACCESS MANAGEMENT - A system, a method and a computer program product for managing memory access of an avionics control system having at least one control computer having at least one memory control device. The method includes assigning a memory access of at least one unique memory region of at least one memory unit to each of at least one application task or task set. A memory access of at least one application data update task is assigned to at least one subregion of one or more of the at least one unique memory region. At least one data parameter is written to the at least one subregion and the assigned memory access of the at least one application data update task de-activated. | 2015-12-31 |
20150378937 | SYSTEMS AND METHODS FOR LOCKING CACHED STORAGE - The present disclosure relates to systems and methods for locking a storage device to prevent inadvertent modification when the device is mounted on a different system or different host. The method can include selecting, on a storage device, a location and contents of a byte region for locking, where the byte region comprises a boot sector of the device. The method can also include encoding the selected contents of the byte region, and locking the device by replacing the contents of the identified byte region with the encoded byte region at the identified location on the device. In some embodiments, encoding the selected contents of the byte region can include inverting the contents of the selected byte region using a binary not operation. In some embodiments, encoding the selected contents of the byte region can include modifying the selected contents of the byte region based on a generated unique identifier. | 2015-12-31 |
20150378938 | WEARABLE COMPUTER WITH EXPANDABLE LINK CAPABILITIES - A wearable computer system comprising one or more processors, memory, and an attachment accessory is disclosed. The wearable computer system includes one or more removable link components; the attachment accessory operatively secures the system to the person of a user. The wearable computer system is configured such that the removable link components can be added to or removed from the attachment band, and the capabilities of the wearable computer system change as components are added or removed. | 2015-12-31 |
20150378939 | MEMORY MECHANISM FOR PROVIDING SEMAPHORE FUNCTIONALITY IN MULTI-MASTER PROCESSING ENVIRONMENT - A memory mechanism for providing semaphore functionality in a multi-master processing environment is disclosed. An exemplary memory unit includes a memory controller that manages access to a shared memory. The memory controller includes a semaphore context monitor associated with each master having access to the shared memory. A semaphore context monitor associated with a semaphore-capable master is activated by the semaphore-capable master (for example, by exclusive request signal(s) received by memory controller from semaphore-capable master). A semaphore context monitor associated with a non-semaphore-capable master is activated by the memory controller (for example, by exclusive request signal(s) generated by the memory controller). The memory controller can include a semaphore address command mechanism configured to derive a semaphore command from a memory access request received from the non-semaphore-capable master and activate the semaphore context monitor when the semaphore command specifies exclusive access. | 2015-12-31 |
20150378940 | TRANSACTIONAL EXECUTION ENABLED SUPERVISOR CALL INTERRUPTION WHILE IN TX MODE - A computer can manage an interruption while a processor is executing a transaction in a transactional-execution (TX) mode. Execution, in a program context, of the transaction is begun by a processor in TX mode. An interruption request is detected for an interruption, by the processor, in TX mode. The interruption is accepted by the processor to execute a TX compatible routine in a supervisor context for changing supervisor resources. The TX compatible routine is executed within the TX mode. The processor returns to the program context to complete the execution of the transaction. Based on the transaction aborting, the processor does not commit changes to the supervisor resources. | 2015-12-31 |
20150378941 | INSTRUCTIONS AND LOGIC TO INTERRUPT AND RESUME PAGING IN A SECURE ENCLAVE PAGE CACHE - Instructions and logic interrupt and resume paging in secure enclaves. Embodiments include instructions that specify page addresses allocated to a secure enclave; the instructions are decoded for execution by a processor. The processor includes an enclave page cache to store secure data in a first cache line and in a last cache line for a page corresponding to the page address. A page state is read from the first or last cache line for the page when an entry in an enclave page cache mapping for the page indicates only a partial page is stored in the enclave page cache. The entry for a partial page may be set, and a new page state may be recorded in the first cache line when writing-back, or in the last cache line when loading the page when the instruction's execution is being interrupted. Thus the writing-back or loading can be resumed. | 2015-12-31 |
20150378942 | TRANSACTIONAL EXECUTION ENABLED SUPERVISOR CALL INTERRUPTION WHILE IN TX MODE - A computer can manage an interruption while a processor is executing a transaction in a transactional-execution (TX) mode. Execution, in a program context, of the transaction is begun by a processor in TX mode. An interruption request is detected for an interruption, by the processor, in TX mode. The interruption is accepted by the processor to execute a TX compatible routine in a supervisor context for changing supervisor resources. The TX compatible routine is executed within the TX mode. The processor returns to the program context to complete the execution of the transaction. Based on the transaction aborting, the processor does not commit changes to the supervisor resources. | 2015-12-31 |
20150378943 | DELAYING FLOATING INTERRUPTION WHILE IN TX MODE - A computer implemented method and system for delaying a floating interruption while a processor is in a transactional-execution mode. A floating interruption mechanism can detect a floating interruption request for one or more floating interruption eligible processors. Based on each eligible processor being in TX mode, the method and system can delay, using a predetermined period of time, performing the floating interruption at a selected processor of the one or more of the processors. A first processor of the one or more processors can be selected based on the first processor exiting the transactional execution mode within the predetermined period of time. Based on the predetermined period of time expiring, the method and system can cause an interrupt to one of the plurality of processors, and the interrupt can cause the processor to abort a transaction. | 2015-12-31 |
20150378944 | A METHOD OF AND CIRCUITRY FOR CONTROLLING ACCESS BY A MASTER TO A PERIPHERAL, A METHOD OF CONFIGURING SUCH CIRCUITRY, AND ASSOCIATED COMPUTER PROGRAM PRODUCTS - A method of controlling access by a master to a peripheral includes receiving one or more interrupt priority levels from one or more interrupt controllers associated with the peripheral, comparing the one or more interrupt priority levels with respective one or more pre-established interrupt access levels to obtain an interrupt level comparison result, establishing whether an access condition is satisfied in dependence on at least the interrupt level comparison result, and if the access condition is satisfied, granting access. If the access condition is not satisfied, access is denied. Further, a circuitry is described including one or more masters, one or more peripherals, and an access control circuitry including one or more interrupt controllers associated with the one or more peripherals. The access control circuitry is arranged to perform a method of controlling access by a master of the one or more masters to a peripheral of the one or more peripherals. | 2015-12-31 |
20150378945 | EVADING FLOATING INTERRUPTION WHILE IN THE TRANSACTIONAL-EXECUTION MODE - A computer implemented method and system for evading a floating interruption while a processor is in a transactional-execution (TX) mode. A floating interruption request can be detected, by a floating interrupt control mechanism, for a plurality of processors for execution by any one of the plurality of processors. An evasive action can be initiated for at least one of the plurality of processors in a transactional-execution mode, for evading the floating interruption such that another one of the plurality of processors can execute the floating interruption. | 2015-12-31 |
20150378946 | HIGH THROUGHPUT REGISTER FILE MEMORY - Pipelining is included inside a register file memory. A register file memory device includes a static bitcell, and pipelined combinational logic. The combinational logic pipeline couples the I/O (input/output) node to the static bitcell. The pipeline includes multiple stages, where each stage includes a static logic element and a register element, where the operation of each stage transfers data through to a subsequent stage. The number of stages can be different for a read than a write. The multiple stages perform the operations to execute the read or write request. | 2015-12-31 |
20150378947 | CACHE LOAD BALANCING IN STORAGE CONTROLLERS - Methods and structure are provided for cache load balancing in storage controllers that utilize Solid State Drive (SSD) caches. One embodiment is a storage controller of a storage system. The storage controller includes a host interface operable to receive Input and Output (I/O) operations from a host computer. The storage controller also includes a cache memory that includes multiple SSDs. Further, the storage controller includes a cache manager that is distinct from the cache memory. The cache manager is able to determine physical locations in the multiple SSDs that are unused, to identify an unused location that was written to longer ago than the other unused locations, and to store a received I/O operation in the identified physical location. Further, the cache manager is able to trigger transmission of the stored I/O operations to storage devices of the storage system for processing. | 2015-12-31 |
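The selection rule described in the abstract above (store into the unused SSD cache location that was written to longest ago) can be sketched roughly as follows; the data layout, field names, and function name are all invented for illustration and are not from the patent.

```python
# Hypothetical sketch: pick the unused SSD cache location whose last
# write is oldest (smallest timestamp), as a wear-leveling heuristic.
def pick_cache_location(locations):
    """locations: list of dicts with 'id', 'in_use', 'last_write' keys.
    Returns the id of the chosen location, or None if all are in use."""
    unused = [loc for loc in locations if not loc["in_use"]]
    if not unused:
        return None  # no free location; caller must evict or stall
    # Oldest last-write first -> the least recently written free slot.
    return min(unused, key=lambda loc: loc["last_write"])["id"]

locs = [
    {"id": 0, "in_use": True,  "last_write": 5},
    {"id": 1, "in_use": False, "last_write": 9},
    {"id": 2, "in_use": False, "last_write": 3},
]
```

Here location 2 would be chosen: it is free, and its last write is older than location 1's.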
20150378948 | Auxiliary Interface for Non-Volatile Memory System - A non-volatile memory system is formed of a plurality of memory banks and a controller, where the controller has an auxiliary memory interface for use with an additional non-volatile memory bank, where the additional memory bank and interface are used for metadata, such as logical-to-physical translation data. The other banks are used for user data. In an exemplary embodiment, a non-volatile memory could include a controller and (N+1) NAND flash memories, where N of these memories would store user data, but the remaining memory with its own controller interface would be dedicated to the storage of metadata. This allows the metadata to be kept in non-volatile memory, yet remain quite readily accessible relative to the paging/overlay arrangement for metadata that is typically used in many non-volatile memory systems. | 2015-12-31 |
20150378949 | Method of Transaction and Event Ordering within the Interconnect - The disclosure includes embodiments that apply to an interconnect architecture having multiple system masters and at least one shared resource. The disclosure provides a system and method for providing synchronization for transactions in a multi-master interconnect architecture that employs at least one shared resource, or slave component. | 2015-12-31 |
20150378950 | METHOD, APPARATUS AND SYSTEM FOR CONFIGURING AN INTEGRATED CIRCUIT - Techniques and mechanisms for configuring an integrated circuit to couple to, and exchange data with, a hardware interface. In an embodiment, the integrated circuit comprises a data channel including a plurality of bits, configuration logic, and a plurality of contacts including a first contact group and a second contact group. In response to a signal indicating connectivity of the integrated circuit to the interface, a mode of the configuration logic is selected to couple the plurality of bits to one of the first contact group and the second contact group. | 2015-12-31 |
20150378951 | DATA TRANSFER SYSTEM AND METHOD OF CONTROLLING THE SAME - A data transfer system is disclosed, which comprises a serial controller and a switch device. The switch device includes a first serial port, a second serial port, and a transferring unit. The first serial port and the second serial port are individually configured to transmit a first type signal to the transferring unit. The transferring unit selectively switches a transmission of the first type signal from either the first serial port or the second serial port to the serial controller. The first serial port and the second serial port are individually configured to transmit a second type signal to the serial controller, wherein the first type signal is faster than the second type signal in transmission rate. | 2015-12-31 |
20150378952 | METHOD AND APPARATUS OF USB 3.1 RETIMER PRESENCE DETECT AND INDEX - An apparatus for retimer presence detection is described herein. The apparatus includes at least one retimer, wherein an algorithm is to enable the at least one retimer to announce its presence by asserting a bit of a presence message during link initialization. The at least one retimer can declare an index and is accessible via the index. | 2015-12-31 |
20150378953 | OPTIMIZED CREDIT RETURN MECHANISM FOR PACKET SENDS - Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and the lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned. In one embodiment an absolute credit return count is implemented for each send context, with an associated absolute credit sent count tracked via software that writes to the PIO send memory, with the two absolute credit counts used for flow control. | 2015-12-31 |
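The coalescing rule in the abstract above (start at the lowest slot not yet credited, extend through the contiguous run of freed slots, and return credits for the whole run at once) can be sketched as follows; the function name and data representation are hypothetical, not from the patent.

```python
# Hypothetical sketch of the coalesced credit-return rule: starting at the
# lowest slot whose credit has not yet been returned, walk forward through
# the contiguous run of freed slots and return one credit per slot.
def credits_to_return(freed, next_uncredited, num_slots):
    """freed: set of freed slot indices.
    Returns (credits_to_return_now, new_next_uncredited_slot)."""
    slot = next_uncredited
    while slot < num_slots and slot in freed:
        slot += 1  # slot is freed and uncredited; include it in the run
    return slot - next_uncredited, slot

# Slots 0..2 are a contiguous freed run; slot 4 is freed but not yet
# reachable because slot 3 is still outstanding.
count, nxt = credits_to_return({0, 1, 2, 4}, 0, 8)
```

In this example three credits are returned at once (slots 0, 1, 2), and slot 4's credit is deferred until slot 3 frees, which is the batching behavior the mechanism aims for.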
20150378954 | DYNAMICALLY CONFIGURABLE ANALOG FRONTEND CIRCUITRY - An analog frontend (AFE) interface is dynamically programmable. A single AFE circuit can interface with multiple different analog devices, and dynamically configure its input for efficient interfacing with each different analog device. The AFE receives multiple unprocessed analog input signals and samples the analog input signals. A preprocessor element in the AFE analyzes the input signals and generates control signals based on the analysis. The control signals dynamically adjust how the AFE samples the analog input signals, and can improve the efficiency of the operation of the AFE. | 2015-12-31 |
20150378955 | GENERATING COMBINED BUS CLOCK SIGNALS USING ASYNCHRONOUS MASTER DEVICE REFERENCE CLOCKS IN SHARED BUS SYSTEMS, AND RELATED METHODS, DEVICES, AND COMPUTER-READABLE MEDIA - Generating combined bus clock signals using asynchronous master device reference clocks in shared bus systems, and related methods, devices, and computer-readable media are disclosed. In one aspect, a method for generating combined bus clock signals comprises detecting a start event by each master device of multiple master devices communicatively coupled to a shared clock line of a shared bus. Each master device samples a plurality of shared clock line values of the shared clock line at a corresponding plurality of transitions of a reference clock signal for the master device. Each master device determines whether the plurality of shared clock line values is identical. If the shared clock line values are identical, each master device drives a shared clock line drive value inverse to the plurality of shared clock line values to the shared clock line at a next transition of the reference clock signal for the master device. | 2015-12-31 |
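The per-master rule in the abstract above (sample the shared clock line on reference-clock transitions; if every sample is identical, drive the inverse value at the next transition) can be sketched as a small decision function. This is a loose illustration of the sampling logic only; the function name and value encoding are invented, and the real mechanism operates on hardware clock edges.

```python
# Hypothetical sketch of one master's decision: given the shared-clock-line
# values sampled at its own reference-clock transitions, drive the inverse
# value if all samples agree, otherwise leave the line undriven (None).
def next_drive_value(samples):
    """samples: list of 0/1 values sampled from the shared clock line.
    Returns 0 or 1 to drive at the next transition, or None to not drive."""
    if samples and all(s == samples[0] for s in samples):
        return 1 - samples[0]  # inverse of the stable sampled value
    return None  # line still transitioning; wait for agreement
```

Because each master only toggles the line after seeing it stable across its own (asynchronous) reference clock, the slowest master effectively paces the combined bus clock.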
20150378956 | MEMORY PHYSICAL LAYER INTERFACE LOGIC FOR GENERATING DYNAMIC RANDOM ACCESS MEMORY (DRAM) COMMANDS WITH PROGRAMMABLE DELAYS - A plurality of registers implemented in association with a memory physical layer interface (PHY) can be used to store one or more instruction words that indicate one or more commands and one or more delays. A training engine implemented in the memory PHY can generate at-speed programmable sequences of commands for delivery to an external memory and to delay the commands based on the one or more delays. The at-speed programmable sequences of commands can be generated based on the one or more instruction words. | 2015-12-31 |
20150378957 | EMPLOYING MULTIPLE I2C DEVICES BEHIND A MICROCONTROLLER IN A DETACHABLE PLATFORM - Methods and apparatus relating to employing multiple I2C (Interface to Communicate) devices behind a microcontroller in a detachable platform are described. In an embodiment, first logic receives a first message via a serial single ended (such as an Interface to Communicate (I2C)) bus. The first logic generates a second message to be transmitted to second logic in response to a determination that the first message is not directed to an address space assigned to the first logic. The second message includes information from the first message. Other embodiments are also disclosed. | 2015-12-31 |
20150378958 | ARBITRATING USAGE OF SERIAL PORT IN NODE CARD OF SCALABLE AND MODULAR SERVERS - A system and method for provisioning of modular compute resources within a system design are provided. In one embodiment, a node card or a system board may be used. | 2015-12-31 |
20150378959 | MULTI-PROTOCOL SERIAL NONVOLATILE MEMORY INTERFACE - An electronic device including a multi-protocol serial nonvolatile memory interface is disclosed. The interface includes: a first line operative to perform functions of a first chip select line when the interface operates as a SPI of the electronic device; a second line operative to perform functions of a second chip select line when the interface operates as the SPI of the electronic device; a third line operative to perform functions of a clock line when the interface operates as either the SPI or an I2C interface of the electronic device, and a fourth line configured to perform functions of a master-out-slave-in (MOSI) line and a master-in-slave-out (MISO) line when the interface operates as the SPI of the electronic device, the fourth line further operative to perform functions of a serial data line when the interface operates as the I2C interface of the electronic device. | 2015-12-31 |
20150378960 | METHOD AND APPARATUS FOR THE PROCESSOR INDEPENDENT EMBEDDED PLATFORM - A method comprises identifying resource needs of a plurality of peripherals and resource requirements of a plurality of microcontrollers. The method includes comparing the resource needs of the plurality of peripherals with the resource requirements of the plurality of microcontrollers to identify generic resources common to the plurality of microcontrollers, wherein a first microcontroller and a second microcontroller of the plurality of microcontrollers provide the generic resources to processor pin locations according to differing architectures. The method includes assigning each resource of the generic resources to a fixed motherboard location, the assigning including assigning the fixed location to an interface pin. The method includes identifying for each resource of the generic resources a processor pin location of the first microcontroller providing the resource, and routing the processor pin location providing the resource to the assigned interface pin, wherein the interface pin provides the resource to the fixed motherboard location. | 2015-12-31 |
20150378961 | Extended Fast Memory Access in a Multiprocessor Computer System - A multiprocessor computer system comprises a first node operable to access memory local to a remote node by receiving a virtual memory address from a requesting entity in node logic in the first node. The first node creates a network address from the virtual address received in the node logic, where the network address is in a larger address space than the virtual memory address, and sends a fast memory access request from the first node to a network node identified in the network address. | 2015-12-31 |
20150378962 | Approach For More Efficient Use Of Computing Resources While Calculating Cross Product Or Its Approximation For Logistic Regression On Big Data Sets - According to one technique, a modeling computer computes a Hessian matrix by determining whether an input matrix contains more than a threshold number of dense columns. If so, the modeling computer computes a sparsified version of the input matrix and uses the sparsified matrix to compute the Hessian. Otherwise, the modeling computer identifies which columns are dense and which columns are sparse. The modeling computer then partitions the input matrix by column density and uses sparse matrix format to store the sparse columns and dense matrix format to store the dense columns. The modeling computer then computes component parts which combine to form the Hessian, wherein component parts that rely on dense columns are computed using dense matrix multiplication and component parts that rely on sparse columns are computed using sparse matrix multiplication. | 2015-12-31 |
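The column-partitioning step described in the abstract above (classify each column of the input matrix as dense or sparse, so the Hessian's component parts can use dense or sparse multiplication accordingly) can be sketched in plain Python. The threshold, names, and pure-Python representation are illustrative assumptions; a real implementation would use dedicated sparse/dense matrix formats.

```python
# Hypothetical sketch: classify each column of a matrix as dense or sparse
# by its fraction of non-zero entries, returning the two index lists that
# a dense-path / sparse-path Hessian computation could then use.
def partition_columns(matrix, density_threshold=0.5):
    """matrix: list of equal-length rows.
    Returns (dense_column_indices, sparse_column_indices)."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    dense, sparse = [], []
    for j in range(n_cols):
        nonzeros = sum(1 for i in range(n_rows) if matrix[i][j] != 0)
        # Columns above the density threshold go to the dense partition.
        (dense if nonzeros / n_rows > density_threshold else sparse).append(j)
    return dense, sparse

m = [[1, 0, 3],
     [2, 0, 0],
     [4, 0, 5]]
```

For `m`, columns 0 and 2 are mostly non-zero and would be stored in dense format, while the all-zero column 1 would be stored in sparse format.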
20150378963 | DETECTING AN EVENT FROM TIME-SERIES DATA SEQUENCES - The present subject matter discloses a system and a method for detecting an event from time-series data sequences. The system receives time-series data sequences generated by sensors, wherein the time-series data sequences comprise sample points. The system pairs the sample points with one another for determining pairs of the sample points. The system computes Euclidean distances and angles between the sample points for determining a distance matrix and an angle matrix corresponding to the sample points. Further, the system determines a global distribution of the plurality of pairs of sample points, wherein the global distribution of the plurality of pairs of sample points represents a 2D shape histogram for the time-series data sequence. Further, the system concatenates the 2D shape histogram for each time-series data sequence to generate a concatenated shape histogram. Finally, the system matches the concatenated shape histogram to pre-stored shape histograms for determining the event. | 2015-12-31 |
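The core computation in the abstract above (pairwise Euclidean distances and angles between sample points, binned into a 2D shape histogram) can be sketched as follows. The bin counts, normalization, and function name are invented for illustration; the patent does not specify them.

```python
import math

# Hypothetical sketch: for every pair of sample points (t, value) in a time
# series, compute the Euclidean distance and angle between them, then bin
# the pairs into a small 2D (distance, angle) histogram -- a "shape histogram".
def shape_histogram(points, d_bins=2, a_bins=2, d_max=10.0):
    """points: list of (t, value) tuples. Returns d_bins x a_bins counts."""
    hist = [[0] * a_bins for _ in range(d_bins)]
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            dist = math.hypot(dx, dy)
            angle = math.atan2(dy, dx)  # in (-pi, pi]
            # Clamp into the last bin so out-of-range pairs are still counted.
            di = min(int(dist / d_max * d_bins), d_bins - 1)
            ai = min(int((angle + math.pi) / (2 * math.pi) * a_bins), a_bins - 1)
            hist[di][ai] += 1
    return hist

pts = [(0, 0), (1, 1), (2, 0)]
```

Every one of the n·(n-1)/2 pairs contributes exactly one count, so the histogram captures the global distribution of pair geometry that is then matched against pre-stored histograms.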
20150378964 | Embedded Document Within an Application - Data structures, methods, program products and systems for creating and executing an executable file for the Binary Runtime Environment for Wireless (BREW) where the file is capable of causing presentation of a document embedded in the file on a BREW system. | 2015-12-31 |
20150378965 | Method And Device For Text Input And Display Of Intelligent Terminal - A method and apparatus for text input and display of an intelligent terminal are provided. The method comprises: when a user performs text input on a text editing interface of the intelligent terminal, the intelligent terminal collecting a contact area of the user with a touch screen; and when the collected contact area reaches a set first area threshold, the intelligent terminal displaying and/or storing the text input by the user according to a first font attribute, wherein the first font attribute is different from a default font attribute of the text editing interface. By adopting the method and apparatus of the present disclosure, the width or size of the input font can be changed flexibly, so as to achieve different input and display effects for the input text. | 2015-12-31 |
20150378966 | FAST CSS PARSER ENGINEERED FOR RUNTIME USE - The technology disclosed relates to systems and methods for providing a CSS parser engineered for runtime usage to improve the maintainability of code that displays data to users. The technology disclosed also improves the performance and consistency of the code that delivers a user's experience. | 2015-12-31 |
20150378967 | Method and System for Customization of Rich Media - In at least one embodiment, a system and method place data on a user interface, wherein the user interface is a medium for interaction between a user and an internet-capable device such as a web page or an application. The method and system include extracting colour information from at least one of the web page and the application. Extracting the colour information includes the step of classifying each pixel of at least one of the web page and the application into a cluster of a plurality of clusters. Further, the method and system include assigning an attribute from a plurality of attributes to the cluster of the plurality of clusters. Furthermore, the method and system include creating a plurality of Cascading Style Sheets (CSS) classes. In addition, the method and system include customizing the data based on the plurality of CSS classes. Additionally, the method and system include placing the data on at least one of the web page and the application. | 2015-12-31 |
20150378968 | AUTOMATICALLY DETERMINING WHETHER A PAGE OF A WEB SITE IS BROKEN DESPITE ELEMENTS ON THE PAGE THAT MAY CHANGE - In an embodiment, a method comprises rendering a first image of a first user interface based on a first set of instructions; rendering a second image of a second user interface based on a second set of instructions; generating a first mask comprising a plurality of points, wherein each point in the first mask indicates whether a first point in the first image and a second point in the second image are different; rendering a third image of a third user interface based on a third set of instructions, wherein the first set of instructions are different than the third set of instructions and the first image is different than the third image; determining that the first image is equivalent to the third image based on the first image, the first mask, and the third image. | 2015-12-31 |
20150378969 | UNIFIED GRAPHICAL USER INTERFACE FOR DISPLAYING A PLAN OF OPERATIONS IN A DATACENTER - In a computer-implemented method for a unified graphical user interface for displaying a plan of operations in a datacenter, metadata is accessed from a plurality of disparate software bundles for updating targets in a datacenter. A unified visualization of a plan of operations on the targets is displayed via a unified graphical user interface based on the accessed metadata, wherein the unified graphical user interface displays the plan of operations with a common look and feel. | 2015-12-31 |
20150378970 | METHOD FOR DISPLAYING WEBPAGES - A method for displaying webpages comprises: submitting, by a client, a link to a webpage providing a perspective on a subject to a service provider; extracting, by the service provider, the perspective from the webpage, indexing the extracted perspective on the subject, and storing the indexed perspective in a perspective database which stores all indexed perspectives on the subject from different webpages; retrieving, by the client, references to other perspectives on the subject from the service provider; and displaying, by the client, the references to other perspectives on the subject when displaying the webpage. | 2015-12-31 |
20150378971 | AUTOMATED DOCUMENT REVISION MARKUP AND CHANGE CONTROL - Automated comparison of Darwin Information Typing Architecture (DITA) documents for revision mark-up includes reading document data from first and second DITA documents into respective document object model trees of nodes, and identifying and collapsing emphasis subtree nodes in the trees into their parent nodes, the collapsing caching emphasis data from the identified subtree nodes. A traversal transforms the model trees into respective node lists and captures adjacent sibling emphasis subtree nodes as single text nodes. The node lists are merged into a merged node list that recognizes matching node pairs having primary sort key information and document structure metadata meeting a match threshold, with differences between matching tokens of the node pairs saved. A merged document object model built from the refined merged node list is transformed into a hypertext mark-up language document. | 2015-12-31 |
20150378972 | INTELLIGENT CONFLICT DETECTION AND SEMANTIC EXPRESSION OF DOCUMENT EDITS - An intelligent conflict detection system. The system provides semantic expression of edits and history-aware conflict detection in a large-scale service allowing multiple users to simultaneously work with the same document, which may result in conflicting edits. When submitted, changes to a version of a document are compared to all versions of the document created since the document was sourced. Edits to documents are expressed as intents describing the changes in terms of an action and object of that action based on a characteristic of the data being edited. Comparing the intent of current edits against the historical intent of the edits made in prior versions originating from the same source document allows the system to intelligently assess whether the edits are in conflict. History-aware semantic analysis results in users being prompted less frequently to resolve conflicts, which improves the user experience. | 2015-12-31 |
20150378973 | ELECTRONIC DOCUMENT CONTENT REDACTION - Systems and methods for redacting certain content (e.g., content representing private, privileged, confidential, or otherwise sensitive information) from electronic documents. An example method comprises: identifying, by a computing device, two or more layers in an electronic document; processing each of the identified layers to produce a layer text representing one or more objects comprised by the layer; combining the produced layer texts to produce a combined text of the electronic document; and identifying, within the combined text of the electronic document, a target character string corresponding, in view of a specified search function, to a specified character string. | 2015-12-31 |
20150378974 | SYSTEM AND METHOD FOR SYNCHRONIZING BI-DIRECTIONAL DOCUMENT MANAGEMENT - Systems and methods consistent with various disclosed embodiments provide for collaborating information over a computer network. In one embodiment, a system is disclosed for collaborating information over a network. The system may include a storage device and one or more processors. The processor(s) may maintain documents in the storage device and publish content links to the documents in a workspace rendered by a collaboration platform. The processor(s) may provide content to the collaboration platform in response to a content link selection by a reviewer and receive the reviewer's changes, with the collaboration platform temporarily storing the document in a virtual memory for rendering to the reviewer and deleting it after the communication session ends. The processor(s) may synchronize the changes bi-directionally from the workspace with the original document through a collaboration document, such that the original document remains unaltered, and by re-publishing an updated content link to the workspace. | 2015-12-31 |
20150378975 | ATTRIBUTE FILL USING TEXT EXTRACTION - Systems and methods involve filling missing attribute values from unstructured text. A computing device may provide a plurality of items, such as an item catalog for an electronic marketplace. When an item is found to have a missing attribute value, a plurality of existing values for that attribute is compiled by mining other items. Text associated with the item is parsed to determine possible values for the attribute. From those possible values, the most likely value is identified and the missing attribute value is populated with that value. | 2015-12-31 |
20150378976 | METHODS AND SYSTEMS FOR PROVIDING AN ELECTRONIC FORM - A method and system for providing an electronic form are described. The method and system include identifying a visible portion of the electronic form. The electronic form can include a control component at a component location of the electronic form that is operable to receive an input from a user. The method and system can then determine an accessibility state of the control component based on the component location and at least one of a display property of the display and the visible portion. The accessibility state can be a convenient state when the component location is suitable for the display but is in an inconvenient state when the component location is not suitable for the display. When it is determined that the accessibility state is the inconvenient state, the method and system involves displaying a transient control component, or a version of the control component, on the display. | 2015-12-31 |
20150378977 | SYSTEM AND METHOD FOR OPERATING A COMPUTER APPLICATION WITH SPREADSHEET FUNCTIONALITY - The present invention provides a method for operating a computer application with spreadsheet functionality. The method comprises receiving one or more inputs in one or more cells by the spreadsheet application, parsing the received inputs for the one or more cells of the spreadsheet, constructing a dependency graph for the one or more parsed input cells, evaluating at least one of the one or more parsed input cells based on one or more criteria in the dependency graph, reconstructing the dependency graph until all of the one or more input cells are evaluated, and returning an output to the spreadsheet application. | 2015-12-31 |
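The evaluate-and-reconstruct loop described in the abstract above can be sketched as repeatedly evaluating whichever cells have all their dependencies resolved. The cell representation (a `(dependencies, function)` pair standing in for a parsed formula) is an invented simplification, not the patent's actual data model.

```python
# Hypothetical sketch: evaluate spreadsheet cells in dependency order.
# Each formula cell is modeled as (dependency_names, function); literal
# cells are plain values. The loop runs until every cell has a value.
def evaluate_cells(cells):
    """cells: {name: value} or {name: (deps, fn)}. Returns {name: value}."""
    values = {k: v for k, v in cells.items() if not isinstance(v, tuple)}
    pending = {k: v for k, v in cells.items() if isinstance(v, tuple)}
    while pending:
        # A cell is ready once all cells it references have values.
        ready = [k for k, (deps, _) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise ValueError("circular reference")
        for k in ready:
            deps, fn = pending.pop(k)
            values[k] = fn(*(values[d] for d in deps))
    return values

sheet = {
    "A1": 2,
    "A2": 3,
    "B1": (("A1", "A2"), lambda a, b: a + b),  # stands in for =A1+A2
    "C1": (("B1",), lambda b: b * 10),         # stands in for =B1*10
}
```

On the first pass only B1 is ready; once it is evaluated, C1 becomes ready on the next pass, mirroring the "reconstruct until all cells are evaluated" loop.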
20150378978 | SUMMARY DATA AUTOFILL - Technologies are described herein for summary data autofill. A device executes an application program configured to receive data input. The application program may determine that a portion of the data may be aggregated or is conducive to being summarized. Upon the detection of a user input of additional data into a document having data contained therein, the application program may display a suggested complete summary of the data. An input may be received to accept the suggested complete summary, whereby the manner in which the suggested complete summary is displayed may be changed to indicate the acceptance of the suggested complete summary. A confidence level that the suggested complete summary is a correct summary of the data may be determined. The confidence level may be adjusted based on further input of data or additional data. | 2015-12-31 |
20150378979 | STREAM-ENABLED SPREADSHEET AS A CIRCUIT - Converting data transformations entered in a spreadsheet program into a circuit representation of those transformations. The circuit representation can run independently of the spreadsheet program to transform input data into output data. In some cases the circuit representation is in the form of hardware, accepts and/or produces data streams, and/or the circuit and/or output data or data streams can be shared among multiple users and/or subscribers. Where data streams are processed, the transformations may include well-specified timing semantics, supporting operations that involve rate-based rate manipulation, value-based rate manipulation, and/or access to past cell values. | 2015-12-31 |
20150378980 | DATABASE MANAGEMENT SYSTEM BASED ON A SPREADSHEET CONCEPT DEPLOYED IN AN OBJECT GRID - A method for interacting with a database stored in an object grid is described. The database is given attributes of a spreadsheet. Elements stored in the database are represented and addressed as cells of a spreadsheet. Cells can store data objects, including formulas, and executable scripts. The spreadsheet can evaluate formulas, carry out the program instructions of executable scripts, and perform complex event processing. Interaction with the spreadsheet is accomplished through the use of structured data messages which include instructions, spreadsheet and cell addressing and, optionally, data elements. | 2015-12-31 |
20150378981 | REFERRING TO CELLS USING HEADER CELL VALUES - Referring to cells using header cell values is disclosed. In some embodiments, a header cell value of a header cell is allowed to be used to refer to one or more other cells that are associated with the header cell. The header cell may be included in a header row or column included in a table. A header row cell value may be employed to refer to one or more other cells in a corresponding column, and a header column cell value may be employed to refer to one or more other cells in a corresponding row. | 2015-12-31 |
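The header-reference idea in the abstract above (use a header cell's value, such as "Price", to refer to the cells in that header's column) can be sketched with a minimal lookup helper. The table layout and names are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: resolve a header-row cell value (e.g. "Price") to
# the list of data cells in the corresponding column, so a formula could
# reference the column by name instead of by a range like B2:B3.
def cells_for_header(table, header):
    """table: list of rows, where the first row is the header row."""
    col = table[0].index(header)  # which column this header labels
    return [row[col] for row in table[1:]]

t = [["Item", "Price"],
     ["pen",      2],
     ["book",     7]]
```

A formula such as `SUM(Price)` could then be evaluated as `sum(cells_for_header(t, "Price"))`; the analogous row-wise lookup for a header column follows the same pattern with rows and columns swapped.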
20150378982 | CHARACTER ENTRY FOR AN ELECTRONIC DEVICE USING A POSITION SENSING KEYBOARD - The present disclosure provides a method and apparatus for entering characters into an electronic device. Character inputs from a keyboard are displayed on a display of an electronic device and a set of suggested character sequences are also presented on the display in proximity to the displayed text. When a user digit position is sensed in a region of the keyboard, a suggested character sequence of the set of suggested character sequences that is associated with that region of the keyboard is visually indicated or highlighted. Responsive to a sensed motion gesture beginning at the sensed user digit position, the suggested character sequence indicated by the sensed user digit position is selected for input to the electronic device. | 2015-12-31 |
20150378983 | INCREMENTAL MULTI-WORD RECOGNITION - In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words. | 2015-12-31 |
20150378984 | AUGMENTING SEMANTIC MODELS BASED ON MORPHOLOGICAL RULES - A computer processor determines a root of a first element of a semantic model, in which a first relationship of the first element to a second element of the semantic model, is unknown. The computer processor generates a search token, based on applying morphological rules to the root of the first element and appending a preposition. The computer processor determines one or more regular expressions by applying the search token to search a source of unstructured data. The one or more regular expressions are in a form of a triple, having a subject, a predicate, and an object, and the computer processor applies the predicate of the triple as the first relationship of the first element of the semantic model to a second element of the semantic model. | 2015-12-31 |
20150378985 | METHOD AND SYSTEM FOR PROVIDING SEMANTICS BASED TECHNICAL SUPPORT - A method and system for providing semantics based technical support. The embodiments herein relates to providing semantics based technical support, and more particularly to providing semantics based technical support based on available knowledge sources and similarity of technical support issues. Embodiments disclosed herein provide users with requisite information in real time while an issue is being reported. | 2015-12-31 |
20150378986 | CONTEXT-AWARE APPROACH TO DETECTION OF SHORT IRRELEVANT TEXTS - Systems and methods are disclosed for determining whether a short amount of text is irrelevant. Initially, an article is selected having one or more comments of varying length. Depending on the number of comments available, a native context may be constructed based on a given comment and other neighboring comments. In other embodiments, a transferred context may be constructed from the given comment and topically similar comments extracted from other, topically similar articles. A native context-aware feature may be determined from the constructed native context and a transferred context-aware feature may be determined from the constructed transferred context. These features may be leveraged by a language classifier to determine whether a given comment is irrelevant. | 2015-12-31 |
20150378987 | INSIGHT ENGINE - Embodiments of the invention provide systems and methods for generating natural language insights about a set of data. More specifically, embodiments of the present invention are directed to methods and systems that transform data into insights or actionable information. The output generated by embodiments of the present invention would be equivalent to that of an observation made or insights gathered by a qualified data scientist presented with the same data. Embodiments as described herein can include an insight engine that can analyze both structured and unstructured data and generate information in a natural language of the user's choice. Insights provided by embodiments described herein can be supported by an ability to drill down to graphs/tables and atomic data and provide a good starting point for further analysis. | 2015-12-31 |
20150378988 | AUTOMATIC QUESTION DETECTION IN NATURAL LANGUAGE - Systems and methods may provide for separating a sentence into a plurality of clauses and applying a set of question detection rules to each of the plurality of clauses. Additionally, the sentence may be automatically designated as a question if the question detection rules indicate that at least one of the plurality of clauses is a question. In one example, at least one of the question detection rules defines an order of a plurality of parts of speech. | 2015-12-31 |
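The clause-splitting and rule-application flow of the question-detection abstract above can be sketched as follows. This is a minimal illustration, not the patented rule set: the clause splitter, the interrogative-word list, and the subject-auxiliary-inversion rule (an example of a rule defined as an order of parts of speech) are all assumptions.

```python
import re

# Illustrative word lists -- assumptions, not the patent's rules.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "which"}
AUX_VERBS = {"is", "are", "do", "does", "did", "can", "could", "will", "would", "should"}

def split_clauses(sentence):
    # Naively split on commas, semicolons, and coordinating conjunctions.
    return [c.strip() for c in re.split(r"[,;]|\b(?:and|but|or)\b", sentence) if c and c.strip()]

def clause_is_question(clause):
    words = clause.lower().rstrip("?.!").split()
    if not words:
        return False
    # Rule 1: clause opens with an interrogative word.
    if words[0] in QUESTION_WORDS:
        return True
    # Rule 2: subject-auxiliary inversion ("did you ..."), i.e. a rule
    # expressed as an order of parts of speech.
    if words[0] in AUX_VERBS and len(words) > 1:
        return True
    return False

def is_question(sentence):
    # The sentence is designated a question if ANY clause matches a rule.
    return any(clause_is_question(c) for c in split_clauses(sentence))
```

Note how the second clause of a compound sentence can trigger the designation even when the sentence does not open as a question.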
20150378989 | TECHNIQUES FOR ON-THE-SPOT TRANSLATION OF WEB-BASED APPLICATIONS WITHOUT ANNOTATING USER INTERFACE STRINGS - A computer-implemented technique can include executing a web-based application and receiving a request to translate at least a portion of the web-based application. In response to receiving the request, the technique can include identifying text portions in the web-based application, transmitting the text portions to a server, wherein receipt of the text portions causes the server to match the text portions to entries in a database associated with the server to obtain UI strings, and receiving the UI strings from the server. In response to receiving the UI strings, the technique can include providing an indicator of a particular UI string when the particular UI string is displayed during execution of the web-based application. The technique can also include receiving a selection of the particular UI string, and outputting metadata associated with the particular UI string, the metadata representing context information for assisting a human translator. | 2015-12-31 |
20150378990 | Measuring Linguistic Markers and Linguistic Noise of a Machine-Human Translation Supply Chain - An approach is provided in which a linguistic analyzer engine generates a leverage value of a language translation supply chain that corresponds to an amount of suggested translations that are accepted by a professional linguist. The linguistic analyzer engine generates a factor value of the language translation supply chain that indicates a productivity of the user to convert the set of accepted translations into a set of final translations. In turn, the linguistic analyzer engine determines a performance efficiency of the language translation supply chain based upon the generated leverage value and the generated factor value, and evaluates the language translation supply chain accordingly. In one embodiment, the linguistic analyzer engine determines a performance efficiency of the language translation supply chain based on 'n' distinct metric values associated with final translated segments. | 2015-12-31 |
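The leverage/factor computation described above reduces to simple ratios. The abstract does not give the exact formulas, so the acceptance ratio, the productivity ratio, and the product used to combine them below are all assumptions for illustration.

```python
def leverage_value(suggested, accepted):
    """Fraction of machine-suggested translations accepted by the linguist."""
    return accepted / suggested if suggested else 0.0

def factor_value(accepted, finals_per_hour):
    """Illustrative productivity factor: final segments produced per hour
    relative to the number of accepted segments (an assumed definition)."""
    return finals_per_hour / accepted if accepted else 0.0

def performance_efficiency(leverage, factor):
    # Combine the two metrics; a simple product is assumed here.
    return leverage * factor
```

In this sketch a chain whose suggestions are rarely accepted, or whose accepted suggestions still require heavy rework, scores low on the combined efficiency.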
20150378991 | DETERMINING DELAY FOR LANGUAGE TRANSLATION IN VIDEO COMMUNICATION - Disclosed are various embodiments for translation of speech in a video messaging application. A segment of streaming video is decoded to separate the visual component from the audio component. The audio component is then converted to text, which may then be translated and converted to a translation output comprising a new language. In response, the translation output may be encoded with the previously separated visual component. A delay is imposed on the visual component to account for any delays that may arise in translation. The translated video may then be streamed to participants giving the appearance of real-time video conferencing. | 2015-12-31 |
20150378992 | METHOD AND APPARATUS FOR MOVING DATA IN DATABASE MANAGEMENT SYSTEM - Provided are a method of moving data from a memory to a disk by using a single structured query language (SQL) statement and a method of moving data between tables. | 2015-12-31 |
20150378993 | SYSTEM AND METHOD FOR IMPLEMENTING A QUOTA SYSTEM IN A DISTRIBUTED FILE SYSTEM - A system and method for implementing a quota system in a distributed file system is provided. Each node manages a quota database tracking available quota for the node. Should additional quota be required, a node queries a remote node to obtain a lock over the remote quota database. The additional quota is shifted and remaining free quota is reallocated between the local and remote nodes. | 2015-12-31 |
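The quota-shifting step above (lock the remote quota database, shift the needed quota, rebalance the remainder) can be sketched in-process. This is a hedged illustration: real implementations lock a remote node's quota database over the network, whereas here both databases are plain objects, and the even rebalancing policy is an assumption.

```python
import threading

class QuotaNode:
    """One node's quota database, guarded by a lock."""
    def __init__(self, name, free_quota):
        self.name = name
        self.free = free_quota
        self.lock = threading.Lock()

    def request_quota(self, amount, remote):
        """Satisfy locally if possible; otherwise lock the remote node's
        database, shift quota, and rebalance the remaining free quota."""
        with self.lock:
            if self.free >= amount:
                self.free -= amount
                return True
        with remote.lock:
            with self.lock:
                total_free = self.free + remote.free
                if total_free < amount:
                    return False
                # Shift the needed quota, then split the remainder evenly
                # between the local and remote nodes (assumed policy).
                remaining = total_free - amount
                self.free = remaining // 2
                remote.free = remaining - remaining // 2
                return True
```

A production version would also need a consistent lock-acquisition order (or a distributed lock service) to avoid deadlock when two nodes request quota from each other concurrently.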
20150378994 | SELF-DOCUMENTATION FOR REPRESENTATIONAL STATE TRANSFER (REST) APPLICATION PROGRAMMING INTERFACE (API) - An approach is provided for documenting a representational state transfer (REST) resource. A processor monitors input JavaScript Object Notation (JSON) data and output JSON data of a REST resource of an application. A processor converts a set of data from the monitored input JSON data and output JSON data of the REST resource to a self-documenting interchange format. A processor stores the converted set of data from the monitored input JSON data and output JSON data of the REST resource. | 2015-12-31 |
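The conversion step above (observed JSON in and out, emitted as a self-documenting format) can be sketched by inferring a type description from sample payloads. The JSON-Schema-like output shape is an assumption; the patent does not specify the interchange format.

```python
import json

def infer_schema(value):
    """Map one observed JSON value to a simple type description."""
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    if isinstance(value, list):
        # Infer the item type from the first element, if any.
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, bool):   # check bool before int: bool subclasses int
        return {"type": "boolean"}
    if isinstance(value, (int, float)):
        return {"type": "number"}
    if value is None:
        return {"type": "null"}
    return {"type": "string"}

def document_resource(request_body, response_body):
    """Combine monitored input and output JSON into one schema document."""
    return {"request": infer_schema(json.loads(request_body)),
            "response": infer_schema(json.loads(response_body))}
```

Running this against live traffic for each REST endpoint and storing the result yields documentation that tracks what the API actually exchanges, rather than what was hand-written.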
20150378995 | MANAGING PUBLIC NOTES AND PRIVATE NOTES PERTAINING TO A DOCUMENT WHICH IS SHARED DURING AN ONLINE MEETING - A technique manages notes pertaining to a document during an online meeting. The technique involves, while displaying contents of the document to participants of the online meeting, accessing, by processing circuitry, public notes and private notes pertaining to the contents of the document, the public notes and the private notes having been provided by a particular participant. The technique further involves sharing, by the processing circuitry, the public notes with other participants of the online meeting. The technique further involves concealing, by the processing circuitry, the private notes from the other participants of the online meeting. | 2015-12-31 |
20150378996 | SYSTEMS AND METHODS FOR KEY PHRASE CHARACTERIZATION OF DOCUMENTS - Systems and methods are disclosed for key phrase characterization of documents. In accordance with one implementation, a method is provided for key phrase characterization of documents. The method includes obtaining a first plurality of documents based at least on a user input, obtaining a statistical model based at least on the user input, and obtaining, from content of the first plurality of documents, a plurality of segments. The method also includes determining statistical significance of the plurality of segments based at least on the statistical model and the content, and providing for display a representative segment from the plurality of segments, the representative segment being determined based at least on the statistical significance. | 2015-12-31 |
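The significance-scoring step above can be sketched by comparing each segment's observed word frequencies against a background statistical model. The background table, the default frequency, and the observed-vs-expected log-ratio score are illustrative assumptions, not the patented model.

```python
import math
from collections import Counter

# Assumed background word frequencies (the "statistical model").
BACKGROUND = {"the": 0.05, "data": 0.002, "transactional": 0.00001}
DEFAULT_FREQ = 0.0001  # assumed frequency for unlisted words

def segment_score(segment, doc_counts, total_words):
    """Score a segment: words far more frequent in the document corpus
    than in the background model contribute the most."""
    score = 0.0
    for word in segment.lower().split():
        observed = doc_counts[word] / total_words
        expected = BACKGROUND.get(word, DEFAULT_FREQ)
        if observed > 0:
            score += observed * math.log(observed / expected)
    return score

def representative_segment(segments, documents):
    """Return the statistically most significant segment for display."""
    words = [w for doc in documents for w in doc.lower().split()]
    counts = Counter(words)
    return max(segments, key=lambda s: segment_score(s, counts, len(words)))
```

Common function words score near zero under this ratio, so the displayed segment tends to carry the corpus's distinctive vocabulary.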
20150378997 | ANALYZING DOCUMENT REVISIONS TO ASSESS LITERACY - A system and method are provided for receiving, from a document storage, a document having multiple revisions; conducting an analysis of the document by comparing the multiple revisions to identify differences between them; attributing a set of revisions to an author of the document; and analyzing text of the set of revisions to determine literacy metrics for the author. | 2015-12-31 |
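The revision-comparison step above can be sketched with a word-level diff over successive revisions. The specific metrics computed here (insertion/deletion counts and a churn ratio) are illustrative assumptions; the abstract does not enumerate its literacy metrics.

```python
import difflib

def revision_diff(old, new):
    """Return the words inserted and deleted between two revisions."""
    old_w, new_w = old.split(), new.split()
    matcher = difflib.SequenceMatcher(None, old_w, new_w)
    inserted, deleted = [], []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "insert"):
            inserted.extend(new_w[j1:j2])
        if op in ("replace", "delete"):
            deleted.extend(old_w[i1:i2])
    return inserted, deleted

def literacy_metrics(revisions):
    """Aggregate per-revision edits into crude per-author metrics."""
    total_ins = total_del = 0
    for old, new in zip(revisions, revisions[1:]):
        ins, dele = revision_diff(old, new)
        total_ins += len(ins)
        total_del += len(dele)
    words_final = len(revisions[-1].split())
    return {"insertions": total_ins, "deletions": total_del,
            "churn_ratio": (total_ins + total_del) / max(words_final, 1)}
```

A high churn ratio relative to the final document length suggests an author who revises heavily, which a fuller system could refine into spelling- or grammar-specific metrics.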
20150378998 | METHODS AND APPARATUS FOR MERGING MEDIA CONTENT - A computerized method and apparatus are disclosed for merging content segments from a number of discrete media content (e.g., audio/video podcasts) in preparation for playback. The method and apparatus obtain metadata corresponding to a plurality of discrete media content. The metadata identifies the content segments and their corresponding timing information, such that the metadata of at least one of the plurality of discrete media content is derived using one or more media processing techniques. A number of the content segments are selected to be merged for playback using the timing information from the metadata. The merged media content can be implemented as a playlist identifying the content segments to be merged for playback. The merged media content can also be generated by extracting the content segments to be merged for playback from each of the media files/streams and then merging the extracted segments into one or more merged media files/streams. | 2015-12-31 |
20150378999 | DETERMINING AFFILIATED COLORS FROM KEYWORD SEARCHES OF COLOR PALETTES - Systems and methods are described herein to determine data associated with affiliated color palettes identified from keyword searches of color palettes. Color palettes may be searched by name or other data associated with the color palettes. Affiliated color palettes may be determined based at least in part on an input color. Furthermore, affiliated colors can be determined based at least in part on votes and/or rankings. The items and/or images associated with affiliated color palettes may be identified. Various user interfaces may be based at least in part on the keyword searches of color palettes and/or determination of affiliated color palettes. | 2015-12-31 |
20150379000 | GENERATING VISUALIZATIONS FROM KEYWORD SEARCHES OF COLOR PALETTES - Systems and methods are described herein to generate visualizations associated with color palettes identified from keyword searches. Color palettes may include colors determined by human color preferences. Color palettes may be searched by name or other data associated with the color palettes based at least in part on text or audio data. Visualizations such as mood lighting and/or atmosphere colors may be based at least in part on the searched color palettes. | 2015-12-31 |
20150379001 | AUTOMATIC COLOR VALIDATION OF IMAGE METADATA - Systems and methods are described that validate color information in metadata associated with an image. Color information associated with images, in metadata, may need to be validated because there may be mistakes in the metadata. Color names associated with colors may be based on human generated data or human surveys. Individual colors may be extracted from images, which may be associated with the color names. Furthermore, the color names may be retrieved from a data store via a fast index color search. The color information in the metadata may be validated against the color names. | 2015-12-31 |
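The validation step above (extracted color versus the name recorded in metadata) can be sketched with a nearest-named-color lookup. The small name-to-RGB table and the Euclidean-distance matching are assumptions for illustration; they stand in for the survey-derived names and the fast index color search described in the abstract.

```python
# Assumed name-to-RGB table standing in for survey-derived color names.
COLOR_NAMES = {
    "red":   (255, 0, 0),
    "green": (0, 128, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color_name(rgb):
    """Name of the table entry closest to rgb in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(COLOR_NAMES, key=lambda name: dist(COLOR_NAMES[name], rgb))

def validate_metadata(extracted_rgb, metadata_name):
    """True when the metadata color name matches the name of the color
    actually extracted from the image."""
    return nearest_color_name(extracted_rgb) == metadata_name.lower()
```

Mismatches flag metadata entries whose stated color disagrees with the image content, which is the validation failure the abstract describes.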
20150379002 | DETERMINING COLOR NAMES FROM KEYWORD SEARCHES OF COLOR PALETTES - Systems and methods are described herein to determine data, including color names, associated with color palettes identified from keyword searches. Color palettes may be searched by name or other data associated with the color palettes. Images and/or items may be retrieved based at least in part on the colors of the color palettes. Individual colors may be associated with color names based at least in part on human surveys and/or color names may be retrieved. Furthermore, the color names of individual colors may be retrieved based at least in part on a fast color search and/or associated with human votes. Various user interfaces may provide color palettes, images, and/or color names to users based at least in part on keyword searching of color palettes. | 2015-12-31 |
20150379003 | IDENTIFYING DATA FROM KEYWORD SEARCHES OF COLOR PALETTES AND COLOR PALETTE TRENDS - Systems and methods are described herein to determine data associated with color palettes identified from keyword searches. Color palettes may be searched by name or other data associated with the color palettes. Color palettes may include colors determined by human color preferences and/or may be associated with human votes. Furthermore, color palettes may be filtered by trends and/or times of the color palettes. Various user interfaces may be based at least in part on the keyword searching and/or trending techniques for color palettes. | 2015-12-31 |
20150379004 | IDENTIFYING DATA FROM KEYWORD SEARCHES OF COLOR PALETTES AND KEYWORD TRENDS - Systems and methods are described herein to determine data associated with keyword searches of color palettes based at least in part on keyword trends. A keyword trend may include popular colors of color palettes associated with the keyword. Color palettes may be searched by name or other data associated with the color palettes. Furthermore, color palettes associated with a keyword may be filtered by color trends and/or keyword trends. The items and/or images associated with the filtered color palettes may be identified and presented to a user. | 2015-12-31 |