42nd week of 2017 patent application highlights part 47
Patent application number | Title | Published |
20170300330 | ISA EXTENSIONS FOR SYNCHRONOUS COALESCED ACCESSES - Global synchrony changes the way computers can be programmed. A new class of ISA-level instructions (the globally-synchronous load-store) of the present invention is presented. In the context of multiple load-store machines, the globally synchronous load-store architecture allows the programmer to think about a collection of independent load-store machines as a single load-store machine. These ISA instructions may be applied to a distributed matrix transpose or other data that exhibit a high degree of data non-locality and difficulty in efficiently parallelizing on modern computer system architectures. Included in the new ISA instructions are a setup instruction and a synchronous coalescing access instruction (“sca”). The setup instruction configures a head processor to set up a global map that maps processor data contiguously to the memory. The “sca” instruction configures processors to block processor threads until respective times on a global clock, derived from the global map, to access the memory. | 2017-10-19 |
20170300331 | THREAD TRANSITION MANAGEMENT - A system and process for managing thread transitions includes determining that a transition is to be made regarding the relative use of two data register sets where the two data register sets are used by a processor as first-level registers for thread execution. Based on the transition determination, a determination is made whether to move thread data in at least one of the first-level registers to second-level registers. Responsive to determining to move the thread data, a portion of main memory or cache memory is assigned as the second-level registers where the second-level registers serve as registers of at least one of the two data register sets for executing a thread. The thread data from the at least one first-level register is moved to the second-level registers based on the move determination. | 2017-10-19 |
20170300332 | APPARATUS AND METHOD OF IMPROVED INSERT INSTRUCTIONS - An apparatus is described having instruction execution logic circuitry to execute first, second, third, and fourth instructions. Both the first instruction and the second instruction insert a first group of input vector elements into one of multiple first non-overlapping sections of respective first and second resultant vectors. The first group has a first bit width. Each of the multiple first non-overlapping sections has the same bit width as the first group. Both the third instruction and the fourth instruction insert a second group of input vector elements into one of multiple second non-overlapping sections of respective third and fourth resultant vectors. The second group has a second bit width that is larger than said first bit width. Each of the multiple second non-overlapping sections has the same bit width as the second group. The apparatus also includes masking layer circuitry to mask the first and third instructions at a first resultant vector granularity, and mask the second and fourth instructions at a second resultant vector granularity. | 2017-10-19 |
20170300333 | RECONFIGURABLE MICROPROCESSOR HARDWARE ARCHITECTURE - A reconfigurable, multi-core processor includes a plurality of memory blocks and programmable elements, including units for processing, memory interface, and on-chip cognitive data routing, all interconnected by a self-routing cognitive on-chip network. In embodiments, the processing units perform intrinsic operations in any order, and the self-routing network forms interconnections that allow the sequence of operations to be varied and both synchronous and asynchronous data to be transmitted as needed. A method for programming the processor includes partitioning an application into modules, determining whether the modules execute in series, program-driven parallel, or data-driven parallel, determining the data flow required between the modules, assigning hardware resources as needed, and automatically generating machine code for each module. In embodiments, a Time Field is added to the instruction format for all programming units that specifies the number of clock cycles for which only one instruction fetch and decode will be performed. | 2017-10-19 |
20170300334 | METHOD AND APPARATUS FOR IMPLEMENTING A DYNAMIC OUT-OF-ORDER PROCESSOR PIPELINE - A hardware/software co-design for an optimized dynamic out-of-order Very Long Instruction Word (VLIW) pipeline. For example, one embodiment of an apparatus comprises: an instruction fetch unit to fetch Very Long Instruction Words (VLIWs) in their program order from memory, each of the VLIWs comprising a plurality of reduced instruction set computing (RISC) instruction syllables grouped into the VLIWs in an order which removes data-flow dependencies and false output dependencies between the syllables; a decode unit to decode the VLIWs in their program order and output the syllables of each decoded VLIW in parallel; and an out-of-order execution engine to execute the syllables preferably in parallel with other syllables, wherein at least some of the syllables are to be executed in a different order than the order in which they are received from the decode unit, the out-of-order execution engine having one or more processing stages which do not check for data-flow dependencies and false output dependencies between the syllables when performing operations. | 2017-10-19 |
20170300335 | METHOD, APPARATUS AND INSTRUCTIONS FOR PARALLEL DATA CONVERSIONS - Method, apparatus, and program means for performing a conversion. In one embodiment, a disclosed apparatus includes a destination storage location corresponding to a first architectural register. A functional unit operates, responsive to a control signal, to convert a first packed first-format value selected from a set of packed first-format values into a plurality of second-format values. Each of the first-format values has a plurality of sub-elements having a first number of bits. The second-format values have a greater number of bits. The functional unit stores the plurality of second-format values into an architectural register. | 2017-10-19 |
20170300336 | FPSCR STICKY BIT HANDLING FOR OUT OF ORDER INSTRUCTION EXECUTION - A hardware execution unit within a processor core executes a second instruction, which is part of a software thread, and which is executed out of order within the software thread. A sticky bit flip detection hardware device detects a change to a sticky bit in a floating-point status and control register (FPSCR) within the processor core. An instruction issue hardware unit identifies a first instruction that is in the software thread that is capable of reading or clearing the sticky bit. A flushing execution unit flushes all results of instructions from an instruction completion table (ICT) that include and are after the first instruction in the software thread. A hardware dispatch device dispatches all instructions that include and are after the first instruction in the software thread for execution by one or more hardware execution units within the processor core in a next-to-complete (NTC) sequential order. | 2017-10-19 |
20170300337 | PIPELINED CASCADED DIGITAL SIGNAL PROCESSING STRUCTURES AND METHODS - Circuitry operating under a floating-point mode or a fixed-point mode includes a first circuit accepting a first data input and generating a first data output. The first circuit includes a first arithmetic element accepting the first data input, a plurality of pipeline registers disposed in connection with the first arithmetic element, and a cascade register that outputs the first data output. The circuitry further includes a second circuit accepting a second data input and generating a second data output. The second circuit is cascaded to the first circuit such that the first data output is connected to the second data input via the cascade register. The cascade register is selectively bypassed when the first circuit is operated under the fixed-point mode. | 2017-10-19 |
20170300339 | TECHNIQUE FOR REORDERING HARD DRIVE ACTIVATION REPORTS TO ACHIEVE SEQUENTIAL HARD DRIVE ORDERING - A storage enclosure includes a plurality of hard drives coupled to a logic device that, in turn, communicates with an operating system (OS) executing on a host computer system. When the storage enclosure is powered on, the hard drives become active at different times and transmit activation reports to the logic device in a random order. The logic device receives these activation reports, then reorders them to align with the bay numbers where the hard drives are mounted. The logic device then transmits the reordered activation reports, and the OS assigns logical IDs to the hard drives that match the bay numbers. | 2017-10-19 |
20170300340 | SECURE COMPUTER ACCESS USING REMOVABLE BOOTABLE DRIVES - Systems, methods, and non-transitory computer-readable media for providing access to small form factor (SFF) and laptop computers using removable bootable drives (RBD(s)) are disclosed herein. In some implementations, a physical RBD vault that contains RBD(s) is provided instructions to release an RBD for a specific secured usage that corresponds to a usage context of a user of a specific secured system. The RBD may comprise an SFF RBD that is configured to be inserted into an RBD enclosure created for an SFF laptop. The RBD may comprise a laptop RBD that is configured to be inserted into an RBD enclosure that is, in turn, built into a battery pack of the laptop computer. | 2017-10-19 |
20170300341 | Initialize Programmable Components - A programming file including a first module is loaded to a programmable component. The programmable component is then released from reset. Subsequently, first data is loaded to a memory connected to the programmable component, to enable the first module in the programmable component to convert the first data of the memory into second data. After the first module of the programmable component converts the first data of the memory into the second data, a second module is loaded to the programmable component. The first module in the programming file is then replaced with the second module, to enable the second module to access the second data. | 2017-10-19 |
20170300342 | TECHNIQUES FOR SWITCHING BETWEEN OPERATING SYSTEMS - Various embodiments are generally directed to an apparatus, method and other techniques for receiving information to invoke a transition from a first operating system to a second operating system, copying a system context for the second operating system from a location of a non-volatile memory to a volatile memory, the location associated with the second operating system and transitioning from the first operating system to the second operating system using the system context for the second operating system. | 2017-10-19 |
20170300343 | Connection Device for a Modular Computing System - Systems and methods of controlling operation of a connection device associated with a modular computing system are disclosed. For instance, data indicative of a connection between a first connection device and a second connection device can be obtained. The first connection device can be associated with a modular computing device, and the second connection device can be associated with a modular component to be implemented within the modular computing device. Each connection device can include a plurality of connector elements. Data indicative of one or more configuration parameters of the second connection device can be obtained. An operating configuration of the first connection device can be determined based at least in part on the data indicative of the one or more configuration parameters. Operation of the first connection device can be controlled based at least in part on the operating configuration. | 2017-10-19 |
20170300344 | OPTIMIZED USER INTERFACE RENDERING - A device identifies one or more functional elements, and one or more device characteristics. The device determines a selection index based on one or more device characteristics. The device determines a first functional element of the one or more functional elements that has a highest priority level. The device determines whether there is an appropriate technology layer for the first functional element based on comparing the selection index to one or more technology layer ranges corresponding to one or more technology layers associated with the first functional element. | 2017-10-19 |
20170300345 | MOBILE ASSISTANT - One or more computing devices, systems, and/or methods for assisting a user in performing a task are provided. For example, a request may be received from the user via a messaging interface, and the task (e.g., make a reservation) may be determined (e.g., identified) based upon the request. Questions associated with information required (e.g., name, location, dates, etc.) to perform the task may be determined and provided. Visual elements (e.g., selectable buttons) corresponding to answer choices associated with the questions may be provided. A selection of a first visual element of the visual elements may be received, and the task may be performed based upon the selection of the first visual element. | 2017-10-19 |
20170300346 | ESTIMATION RESULTS DISPLAY SYSTEM, ESTIMATION RESULTS DISPLAY METHOD, AND ESTIMATION RESULTS DISPLAY PROGRAM - Provided is an estimation results display system capable of displaying an estimation result so that a person can intuitively recognize at a glance which learning model was selected when deriving the estimation result. Input means | 2017-10-19 |
20170300347 | TECHNIQUES FOR CHECKPOINTING/DELIVERY BETWEEN PRIMARY AND SECONDARY VIRTUAL MACHINES - Examples may include determining a checkpointing/delivery policy for primary and secondary virtual machines based on output-packet similarities. The output-packet similarities may be based on a comparison of the time intervals during which content matched for packets output from the primary and secondary virtual machines. A checkpointing/delivery mode may then be selected based, at least in part, on the determined checkpointing/delivery policy. | 2017-10-19 |
20170300348 | VIRTUAL DEVICE BASED SYSTEMS - An embodiment includes a system, comprising: a device configured to present a logical device and enable a virtual device in response to a control signal; and a processor coupled to the device and configured to: present the logical device through a first device interface; transmit the control signal to the device to enable the virtual device; and after the virtual device is enabled, present the virtual device through a second device interface. | 2017-10-19 |
20170300349 | STORAGE OF HYPERVISOR MESSAGES IN NETWORK PACKETS GENERATED BY VIRTUAL MACHINES - Techniques for storing hypervisor messages in a network packet are described. In one aspect, a hypervisor of a computing device obtains a network packet generated by a virtual machine. The hypervisor may then identify available space within the network packet that can store data relating to a hypervisor message. The hypervisor may then store the hypervisor message in the available space within the network packet. The hypervisor may cause a physical network interface controller to transmit the network packet to a destination device through a network path that includes a message logging device. | 2017-10-19 |
20170300350 | MATCHING RESOURCES ASSOCIATED WITH A VIRTUAL MACHINE TO OFFERED RESOURCES - A request to instantiate one or more virtual machines in a cloud may be received. The request may specify a service level agreement (SLA). A specification for resources to instantiate the virtual machine in view of a type of the virtual machine and the SLA may be determined. A value and specifications offered for the resources to instantiate the type of the virtual machines may be received. A value for at least one specification in view of an amount of time for providing the resources and a comparison with other values for resources of other clouds may be determined. The specification for the resources to instantiate the type of the virtual machines and the value offered for the resources to instantiate the type of the virtual machines may be matched with at least one specification for resources offered and the value determined for the at least one specification. | 2017-10-19 |
20170300351 | Optimizations and Enhancements of Application Virtualization Layers - Methods, systems, and computer-readable media for optimizing and enhancing delivery of application virtualization layers to client computing devices are described herein. In various embodiments, an application virtualization layer optimization service may identify a first and a second application virtualization layer to be delivered to one or more client computing devices. Each application virtualization layer may represent a package of one or more applications. A layer analysis service may analyze the first and second application virtualization layers to determine conflicts between the layers, using predetermined conflict analysis rules, and generate an actionable conflict resolution report based on the analysis. Based on the actionable conflict resolution report, the application virtualization layer optimization service may resolve conflicts between the first and second application virtualization layers, order the first and second application virtualization layers, and deliver the ordered layers to the one or more client computing devices. | 2017-10-19 |
20170300352 | Method and Apparatus for Deploying Virtual Machine Instance, and Device - A method, a corresponding apparatus, and a device for deploying a virtual machine instance in order to lower requirements for the communication capability of a virtualized value-added server (VAS) and improve the processing efficiency of a service chain, where the method includes obtaining communication relationships between VAS instances and service switch (SSW) instances from a service template, where the VAS instances and the SSW instances provide services in a service chain, and the service chain and the communication relationships between the VAS instances and the SSW instances are defined in the service template, and deploying, according to the communication relationships, an SSW instance and a VAS instance, among the SSW instances and the VAS instances, that need to communicate with each other on a same physical machine. | 2017-10-19 |
20170300353 | Method for Allocating Communication Path in Cloudified Network, Apparatus, and System - A method of communication in a cloudified network, an apparatus, and a system, where the method includes receiving, by a software-defined networking (SDN) controller, a communication path request message from a policy management apparatus, allocating, by the SDN controller from a network resource managed by the SDN controller, a communication path meeting a bandwidth requirement and a quality of service (QoS) requirement to both the communications nodes according to the address information of the different communications nodes, and the bandwidth requirement information and the QoS requirement information of the communication path between the different communications nodes, and sending, by the SDN controller, a message carrying the communication path to a bearer network forwarding device. | 2017-10-19 |
20170300354 | AUTOMATED NETWORK CONFIGURATION OF VIRTUAL MACHINES IN A VIRTUAL LAB ENVIRONMENT - Methods, systems, and computer programs for creating virtual machines (VM) and associated networks in a virtual infrastructure are presented. The method defines virtual network templates in a database, where each virtual network template includes network specifications. A configuration of a virtual system is created, which includes VMs, virtual lab networks associated with virtual network templates, and connections from the VMs to the virtual lab networks. Further, the configuration is deployed in the virtual infrastructure resulting in a deployed configuration. The deployment of the configuration includes instantiating in the virtual infrastructure the VMs of the configuration, instantiating in the virtual infrastructure the virtual lab networks, retrieving information from the database, and creating and executing programming instructions for the VMs. The database information includes the network specifications from the virtual network templates associated with the virtual lab networks, and network resources for the virtual lab networks from a pool of available network resources. The programming instructions are created for the particular Guest Operating System (GOS) running in each VM based on the GOS and on the retrieved database information. When executed in the corresponding VM GOS, the programming instructions configure the VMs network interfaces with the corresponding network specifications. | 2017-10-19 |
20170300355 | TASK SWITCHING ASSISTING METHOD AND INFORMATION PROCESSING APPARATUS - A task switching assisting method includes storing, by an information processing apparatus, task information regarding a task executed by a user. An element related to execution of the task is stored in correlation with the task information, an operation related to the task is determined, a work status with respect to the correlated element related to the execution of the task is determined based on the determined operation related to the task, the task is switched by saving or restoring the work status, based on the task information and the work status with respect to the correlated element related to the execution of the task, task execution resource information for executing the task and task saving resource information at a time of saving the task are analyzed on a task basis, and the task information is generated based on a result of the analysis, by the information processing apparatus. | 2017-10-19 |
20170300356 | FINE-GRAIN SYNCHRONIZATION IN DATA-PARALLEL JOBS - A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby. | 2017-10-19 |
20170300357 | Priority Trainer For Many Core Processing System - A method of a priority trainer of a many core processing system comprising a plurality of cores is disclosed. The many core processing system is configured to execute one or more computer jobs, and the priority trainer comprises a controller, the method comprising: | 2017-10-19 |
20170300358 | MEDICAL IMAGING DISTRIBUTION SYSTEM AND DEVICE - Improved systems and devices for medical imaging distribution are provided. A medical imaging order may be received from a medical facility that includes medical imaging. A configuration may be selected and applied based on a body site and an urgency field associated with the order that defines queueing rules for the medical imaging order. Utilization factors for queues associated with radiologists may also be determined. The configuration and the utilization factors may be used to determine a subset of queues associated with a subset of radiologists. The subset of queues may be prioritized based on certain requirements, such as how many medical imaging reports a particular radiologist is required to review, how many medical imaging reports are required to be allocated to a particular radiologist, and the like. The highest prioritized queue may be selected and the medical imaging order may be transmitted to the radiologist associated with that queue. | 2017-10-19 |
20170300359 | POLICY BASED WORKLOAD SCALER - In one implementation, a system for policy-based workload scaling includes a parameters engine to define external factors for a number of resources providing a number of cloud service workloads, a threshold engine to define a threshold value for the cloud service workloads from the number of resources, a priority engine to assign a priority to each of the number of cloud service workloads, and a service engine to reclaim resources from a first portion of cloud service workloads with a first priority and allocate the reclaimed resources to a second portion of cloud service workloads when the threshold value is exceeded and the external factors are exceeded. | 2017-10-19 |
20170300360 | RESOURCE ALLOCATION IN DISTRIBUTED PROCESSING SYSTEMS - A distributed processing system is disclosed herein. The distributed processing system includes a server, a database server, and an application server that are interconnected via a network, and connected via the network to a plurality of independent processing units. The independent processing units can include an analysis engine that is machine-learning-capable, and thus uniquely completes its processing tasks. The server can provide one or several pieces of data to one or several of the independent processing units, can receive analysis results from these one or several independent processing units, and can update the result based on a value characterizing the machine learning of the independent processing unit. | 2017-10-19 |
20170300361 | EMPLOYING OUT OF ORDER QUEUES FOR BETTER GPU UTILIZATION - Methods and apparatus relating to employing out-of-order queues for improved GPU (Graphics Processing Unit) utilization are described. In an embodiment, logic is used to employ out-of-order queues for improved GPU (Graphics Processing Unit) utilization. Other embodiments are also disclosed and claimed. | 2017-10-19 |
20170300362 | PERFORMANCE OPTIMIZATION OF HARDWARE ACCELERATORS - Example embodiments of the present disclosure provide methods and devices for optimizing performance of hardware accelerators. The accelerator device may detect status information of a current acceleration task being executed. The detected status information is provided to a host associated with the accelerator device. The host makes preparation for a subsequent acceleration task based on the status information before termination of the current running acceleration task. The accelerator device may execute the subsequent acceleration task based on the preparation. In this way, the performance of hardware accelerator is optimized. | 2017-10-19 |
20170300363 | Modular Electronic Devices with Contextual Task Management and Performance - The present disclosure provides modular electronic devices that are capable of managing task performance based on a particular context of computing resources currently available from the ad hoc combination of devices. | 2017-10-19 |
20170300364 | Modular Electronic Devices with Prediction of Future Tasks and Capabilities - The present disclosure provides modular electronic devices that are capable of predicting future availability of module combinations and associated computing resources and/or capable of predicting future tasks. Based on such predictions, the module or modular electronic device can choose to schedule or delay certain tasks, alter resource negotiation behavior/strategy, or select from among various different resource providers. As an example, a modular electronic device of the present disclosure can identify one or more computing tasks to be performed; predict one or more future sets of computing resources that will be respectively available to the modular electronic device at one or more future time periods; and determine a schedule for performance of the one or more computing tasks based at least in part on the prediction of the one or more future sets of computing resources that will be respectively available at the one or more future time periods. | 2017-10-19 |
20170300365 | Task Management System for a Modular Electronic Device - Systems and methods are provided for managing task performance for a modular electronic device. In one implementation, a modular electronic device can include one or more electronic modular components. The modular electronic device can identify a computational task associated with the modular electronic device and identify one or more computing devices that are available to perform at least a portion of the computational task. The modular electronic device can obtain one or more sets of data associated with one or more computational resources of the computing devices. The modular electronic device can determine a potential benefit to the modular electronic device associated with the performance of the computational task by the computing devices. The modular electronic device can perform at least a portion of the computational task with the computing devices based, at least in part, on the sets of data associated with the computational resources and the potential benefit. | 2017-10-19 |
20170300366 | Determining Tasks to be Performed by a Modular Entity - Systems and methods of determining tasks to be performed by a modular entity are disclosed. For instance, data associated with one or more tasks performed by one or more first modular entities within one or more modular computing environments can be obtained. Each first modular entity includes at least one modular component. A performance score can be determined for each task performed by each first modular entity. The performance scores can provide a measure of efficiency of a performance of a task by a first modular entity. An entity profile can be determined for each first modular entity based at least in part on the determined performance scores. At least one task to be performed by at least one modular entity can be determined based at least in part on the determined entity profiles for the one or more first modular entities. | 2017-10-19 |
20170300367 | Streaming Graph Optimization Method and Apparatus - A streaming graph optimization method and apparatus are disclosed, relating to the stream processing field. A stream application streaming graph provided by a user is received, the streaming graph is parsed, and a streaming graph described by operator nodes and data stream edges is constructed. Additionally, the streaming graph is disassembled according to a maximum atom division principle to obtain at least one streaming subgraph, and adjacency operator combination is performed on the at least one streaming subgraph according to a combination algorithm to obtain an optimized streaming graph. | 2017-10-19 |
20170300368 | PROCESS MIGRATION IN DATA CENTER NETWORKS - There is provided a method and system for process migration in a data center network. The method includes selecting processes to be migrated from a number of overloaded servers within a data center network based on an overload status of each overloaded server. Additionally, the method includes selecting, for each selected process, one of a number of underloaded servers to which to migrate the selected process based on an underload status of each underloaded server, and based on a parameter of a network component by which the selected process is to be migrated. The method also includes migrating each selected process to the selected underloaded server such that a migration finishes within a specified budget. | 2017-10-19 |
20170300369 | METHOD FOR COMBINING UNIFIED MATTERS IN PERSONAL WORKSPACE AND SYSTEM USING THE SAME - A method for combining unified matters in a personal workspace and a system using the method are provided. The method includes the following steps. Firstly, at least one matterizer is provided to the personal workspace, wherein at least one unified information unit corresponding to at least one original information and/or at least one unified tool corresponding to at least one original tool is acquired from at least one information source via the at least one matterizer. Then, the at least one unified information unit and/or the at least one unified tool is provided to the personal workspace via the at least one matterizer. Then, through the at least one unified tool and/or the at least one unified information unit, a task is performed. | 2017-10-19 |
20170300370 | Method and Apparatus for Downsizing the Diagnosis Scope for Change-Inducing Errors - The scope of the system changes to be considered when searching for problematic changes is reduced in order to allow focusing on highly suspicious drifts caused by change sequences. The method and system include a data cleaning module to remove irrelevant changes, a feature extraction and normalization module to extract the features of change objects, a data annotation module to remove irrelevant changes based on patterns, and a clustering module to obtain groups for further analysis. Data cleaning is simplified using domain-independent rules. Additional sources of change sequences are removed by applying pattern-based techniques, narrowing down the problematic system changes to analyze during root cause analysis. Change-error sequences, the degree of temporal correlation between system changes and errors, and change behavior patterns may be used for downsizing the diagnosis scope. | 2017-10-19 |
20170300371 | KVM having Blue Screen of Death Detection and Warning Functions - A device, apparatus, system and method for determining failure of a computer host among a plurality of hosts. The host failure detection device may be integrated in a KVM apparatus. The device monitors the video output of the plurality of hosts and, if it identifies a Blue Screen of Death or a BIOS-failure black screen, issues a warning and logs the details of the discovered failure. The device may attempt to recover the failed host by transmitting emulated keyboard and mouse commands to the failed host. | 2017-10-19 |
20170300372 | METHOD AND APPARATUS FOR THE DETECTION OF FAULTS IN DATA COMPUTATIONS - A method and apparatus for detecting and mitigating faults in numerical computations of M input data streams is claimed. | 2017-10-19 |
20170300373 | MANAGING FAULTS IN A HIGH AVAILABILITY SYSTEM - An approach is provided for managing a failure of a critical high availability (HA) component. Weights are received and assigned to categories of critical HA components in a HA system. A current value indicating a performance of a component included in the identified components is obtained by periodically monitoring the components. A reference value for the performance of the component is received. A deviation between the current value and the reference value is determined. Based on the deviation, the component is determined to have failed. Based in part on the failed component, the categories, and the weights, a health index is determined in real-time. The health index indicates in part how much the component having failed affects a measure of health of the HA system. | 2017-10-19 |
20170300374 | PRIORITIZED DATA REBUILDING IN A DISPERSED STORAGE NETWORK - A method begins with a processing module querying distributed storage network (DSN) storage units regarding storage errors associated with a data segment. The method continues with the processing module receiving query responses and, depending on the responses, assigning a first threshold priority or a second threshold priority to encoded data slices (EDSs) associated with the data segment. The method proceeds with the processing module, depending on the assigned threshold priority, issuing read slice requests and rebuilding the EDSs associated with the data segment. | 2017-10-19 |
20170300375 | TRANSCEIVER PARAMETER SOLUTION SPACE VISUALIZATION TO REDUCE BIT ERROR RATE - Techniques and mechanisms provide a solution space visualization of bit error rates (BER) for combinations of parameter settings of transceivers. Different types of visualizations may be generated. | 2017-10-19 |
20170300376 | METHOD AND SYSTEM TO PROCESS ISSUE DATA PERTAINING TO A SYSTEM - A method to process issue data in a system is provided. A rules engine monitors a computer network for a violation of a parameter defined by rules in a database and automatically detects the violation. In response, the rules engine generates an issue report indicating the violation. An issue report is also received from a human user indicating a potential violation in the computer network. The issue reports are parsed. Based on data parsed from the issue reports, a determination is made that a common issue is being reported by both the rules engine and the human user. In response, the issue report from the rules engine is reconciled with the issue report from the human user, resulting in a single issue entry in an issue queue, whereby the reconciling includes incrementing a count for each instance of the common issue being reported. | 2017-10-19 |
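The reconciliation step amounts to deduplicating reports from two sources into one queue entry with an instance count. A minimal sketch, with all names invented here rather than drawn from the patent:

```python
# Illustrative sketch: reconcile issue reports from an automated rules engine and
# from human users into a single issue-queue entry per common issue, incrementing
# a count for each reported instance.

def reconcile(reports):
    """reports: iterable of (source, issue_key) tuples."""
    queue = {}
    for source, key in reports:
        entry = queue.setdefault(key, {"count": 0, "sources": set()})
        entry["count"] += 1               # one increment per reported instance
        entry["sources"].add(source)
    return queue
```

An issue whose entry lists both sources is the "common issue" case the abstract describes.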
20170300377 | MONITORING ERROR CORRECTION OPERATIONS PERFORMED IN MEMORY - The present disclosure includes apparatuses and methods for monitoring error correction operations performed in memory. A number of embodiments include a memory and circuitry configured to determine a quantity of erroneous data corrected during an error correction operation performed on soft data associated with a sensed data state of a number of memory cells of the memory, determine a quality of soft information associated with the erroneous data corrected during the error correction operation performed on the soft data, and determine whether to take a corrective action on the sensed data based on the quantity of the erroneous data corrected during the error correction operation and the quality of the soft information associated with the erroneous data corrected during the error correction operation. | 2017-10-19 |
20170300378 | Apparatus and Method for Read Time Control in ECC-Enabled Flash Memory - In a flash semiconductor memory, sense and contiguous ECC coding operations are carried out over a range of V | 2017-10-19 |
20170300379 | DATA CORRECTING METHOD, MEMORY CONTROL CIRCUIT UNIT, AND MEMORY STORAGE DEVICE - A data correcting method for a rewritable non-volatile memory module is provided. The method includes: if a first user data read from a first physical programming unit cannot be corrected by a corresponding first parity code, reading at least one group parity code of a first encoded group that the first physical programming unit belongs to into a buffer, sending the group parity code to a correcting circuit, and reading a user data from physical programming units belonging to the first encoded group into the buffer and sending the user data and the group parity code to the correcting circuit in batches to obtain a corrected first user data corresponding to the first user data. | 2017-10-19 |
20170300380 | USING RELIABILITY INFORMATION FROM MULTIPLE STORAGE UNITS AND A PARITY STORAGE UNIT TO RECOVER DATA FOR A FAILED ONE OF THE STORAGE UNITS - Provided are a method, system, and apparatus using reliability information from multiple storage units and a parity storage unit to recover data for a failed one of the storage units. A decoding operation of the codeword is performed in each of the storage units comprising the data storage units other than the target data storage unit and the parity storage unit to produce reliability information. In response to the decoding operation failing for at least one additional failed storage unit comprising the data and/or parity storage units other than the target data storage unit that failed to decode, reliability information is obtained for the data portion of the at least one additional failed storage unit. The reliability information obtained from the storage units other than the target data storage unit is used to produce corrected data for the data unit in the target data storage unit. | 2017-10-19 |
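The patent's contribution is using soft reliability information from the surviving decoders; the underlying hard-decision fallback it builds on is plain XOR parity recovery, which can be sketched as follows (the function name is invented for illustration):

```python
# Minimal hard-decision sketch: with single-parity protection, the failed unit's
# data is the XOR of all surviving data units and the parity unit. The patented
# scheme additionally weights this with per-bit reliability (soft) information,
# which is omitted here.

def xor_recover(surviving_units):
    """surviving_units: equal-length byte strings (remaining data units + parity)."""
    out = bytes(len(surviving_units[0]))
    for unit in surviving_units:
        out = bytes(a ^ b for a, b in zip(out, unit))
    return out
```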
20170300381 | MEMORY CONTROLLER AND DATA CONTROL METHOD - A memory controller includes an error check correction circuit that performs calculations regarding an error correction code for data, and a processor that uses the error check correction circuit to write the data with the error correction code to a non-volatile memory (NVM) when writing the data to the NVM, and to perform error correction of the data using the error correction code when reading the data from the NVM. The processor counts the number of error bits of the data stored in a block, which is the unit of batch erasure of the data, stores the data in the block with a first error correction code having an error correction ability, and stores the data in the block with a second error correction code having a higher error correction ability than the first error correction code when the number of error bits is larger than a threshold value. | 2017-10-19 |
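The escalation policy in that abstract, counting error bits per erase block and switching to the stronger code past a threshold, can be sketched as below. The class and code labels are hypothetical, not from the patent:

```python
# Hedged sketch of the two-tier ECC policy: track observed error bits per block
# and select the stronger second code once the count exceeds a threshold.

class EccPolicy:
    def __init__(self, threshold):
        self.threshold = threshold
        self.error_bits = {}              # block id -> accumulated error-bit count

    def record_errors(self, block, bits):
        self.error_bits[block] = self.error_bits.get(block, 0) + bits

    def code_for(self, block):
        # Escalate to the higher-ability second code for worn blocks.
        if self.error_bits.get(block, 0) > self.threshold:
            return "second_ecc"
        return "first_ecc"
```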
20170300382 | SYSTEMS AND METHODS FOR IMPROVING EFFICIENCIES OF A MEMORY SYSTEM - A memory device includes a memory component that stores data. The memory device also includes a processor that receives a signal indicating that the memory component is coupled to the processor and retrieves information from the memory component. The information may include one or more algorithms capable of being performed by the memory component. The processor may then receive one or more packets associated with one or more data operations regarding the memory component. The processor may then perform the one or more data operations by using the memory component to employ the one or more algorithms. | 2017-10-19 |
20170300383 | CONTROL DEVICE FOR A STORAGE APPARATUS, SYSTEM, AND METHOD OF CONTROLLING A STORAGE APPARATUS - A control device for a storage apparatus including a first storage device, a second storage device, and a third storage device, the control device includes a memory, and a processor coupled to the memory and configured to store, in the third storage device, first parity data generated based on first data stored in the first storage device and second data stored in the second storage device, store, in the first storage device, third data as update data of the first data, execute reading the first data and the third data from the first storage device and reading the first parity data from the third storage device when garbage collection for the first storage device is performed, and execute generating second parity data based on the read first data, the read third data, and the read first parity data. | 2017-10-19 |
20170300384 | FAILURE-DECOUPLED VOLUME-LEVEL REDUNDANCY CODING TECHNIQUES - Techniques described and suggested herein include systems and methods for storing, indexing, and retrieving original data of data archives on data storage systems using redundancy coding techniques. For example, redundancy codes, such as erasure codes, may be applied to archives (such as those received from a customer of a computing resource service provider) so as to allow the storage of original data of the individual archives on a minimum number of volumes, such as those of a data storage system, while retaining availability, durability, and other guarantees imparted by the application of the redundancy code. Sparse indexing techniques may be implemented so as to reduce the footprint of indexes used to locate the original data, once stored. The volumes may be apportioned into failure-decorrelated subsets, and archives stored thereto may be apportioned to such subsets. | 2017-10-19 |
20170300385 | Impact Analysis-Based Task Redoing Method, Impact Analysis Calculation Apparatus, and One-Click Resetting Apparatus - An impact analysis-based task redoing method using an impact analysis calculation apparatus and a one-click resetting apparatus includes receiving an impact analysis request, where the impact analysis request includes a source procedure identifier, an impact start time, and an impact end time; obtaining a dependency list and a procedure information list of a source procedure according to the source procedure identifier; obtaining a period of the source procedure and a period of a target procedure according to the source procedure identifier and the target procedure identifier in the dependency list; obtaining, according to the period of the source procedure, the period of the target procedure, the impact start time, and the impact end time, a procedure instance list corresponding to each procedure identifier included in the dependency list; and sending the procedure instance list and the procedure information list. | 2017-10-19 |
20170300386 | SYSTEM AND METHOD FOR AGENTLESS BACKUP OF VIRTUAL MACHINES - A system and method is disclosed for performing agentless backup of a virtual machine using a temporary attached virtual disk. An example method includes creating a virtual machine disk in a datastore, loading a software application in the virtual machine disk, the software application being configured to collect metadata relating to at least one application executing in an operating system of the virtual machine, communicatively coupling the datastore to the virtual machine, collecting the metadata relating to the at least one application executing in the operating system of the virtual machine, generating a snapshot of the virtual machine, and storing a backup of the virtual machine in a backup archive based on the snapshot. | 2017-10-19 |
20170300387 | Always Current backup and recovery method on large databases with minimum resource utilization. - A method to generate and maintain an always-current backup copy of a database system with minimum system resources in a very large RDBMS or other database environment. Only one lifetime full backup is required, followed by periodic differential backups, unlike the periodic full backups required today. The backup files can be used to recover to a point in time, reducing the time and resource utilization of backing up very large databases. This method eliminates the need to take periodic full backup copies of a database. | 2017-10-19 |
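The "one full backup plus differentials" recovery model reduces to a simple rule: restore the full backup, then apply the newest differential taken at or before the target time (a differential, unlike an incremental, captures all changes since the full backup). A sketch under those assumptions, with invented names and a dict standing in for database state:

```python
# Illustrative point-in-time restore for a full-plus-differential backup scheme.
# Assumes each differential contains ALL changes since the one lifetime full backup.

def restore_point_in_time(full_backup, differentials, target_time):
    """differentials: list of (timestamp, changes-dict), any order."""
    state = dict(full_backup)
    usable = [d for t, d in sorted(differentials) if t <= target_time]
    if usable:
        state.update(usable[-1])   # only the latest applicable differential is needed
    return state
```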
20170300388 | NVRAM LOSS HANDLING - A technique restores a file system of a storage input/output (I/O) stack to a deterministic point-in-time state in the event of failure (loss) of non-volatile random access memory (NVRAM) of a node. The technique enables restoration of the file system to a safepoint stored on storage devices, such as solid-state drives (SSDs), of the node with minimum data and metadata loss. The safepoint is a point-in-time during execution of I/O requests (e.g., write operations) at which data and related metadata of the write operations prior to the point-in-time are safely persisted on SSD such that the metadata relating to an image of the file system on SSD (on-disk) is consistent and complete. Upon reboot after NVRAM loss, the technique identifies (i) the most recent safepoint, as well as (ii) the inflight writes that were persistently stored on disk after the most recent safepoint. The data and metadata of those inflight writes are then deleted to place the on-disk file system to its state at the most recent safepoint. | 2017-10-19 |
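The rollback described above is essentially a truncation: keep everything persisted at or before the most recent safepoint, discard the inflight writes after it. A minimal sketch, with the timestamped-record representation invented here:

```python
# Hedged sketch of safepoint rollback after NVRAM loss: on-disk records written
# after the most recent safepoint are deleted to restore a deterministic state.

def restore_to_safepoint(records, safepoints):
    """records: list of (timestamp, payload); safepoints: timestamps known consistent."""
    last = max(safepoints)                 # (i) most recent safepoint
    # (ii) inflight writes after it are dropped; everything else survives.
    return [(ts, payload) for ts, payload in records if ts <= last]
```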
20170300389 | DEVICES AND METHODS FOR RECEIVING A DATA FILE IN A COMMUNICATION SYSTEM - Devices and methods for receiving a data file in a communication system. In one embodiment, the wireless communication device includes a transceiver, a memory, and an electronic processor. The transceiver is configured to send and receive data over a wireless communication network. The electronic processor is electrically coupled to the transceiver and the memory and configured to receive, with the transceiver, a first seed, a sequence of blocks, and a subsequent seed, cause the memory to save the sequence of blocks in the memory, and determine whether the subsequent seed is aligned with the first seed. When the subsequent seed is not aligned with the first seed, the electronic processor is configured to cause the memory to delete the sequence of blocks. When the subsequent seed is aligned with the first seed, the electronic processor is configured to cause the memory to maintain the sequence of blocks. | 2017-10-19 |
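The abstract does not define what makes a subsequent seed "aligned" with the first, so the sketch below assumes, purely for illustration, a hash-chain check in which the subsequent seed must equal a digest of the first seed and the block count; the keep-or-delete logic is the part taken from the abstract:

```python
import hashlib

# Illustrative only: "aligned" is modeled here as a SHA-256 hash-chain check,
# which is an assumption, not the patent's definition.

def aligned(first_seed, n_blocks, subsequent_seed):
    expect = hashlib.sha256(first_seed + bytes([n_blocks])).digest()
    return expect == subsequent_seed

def receive(first_seed, blocks, subsequent_seed):
    saved = list(blocks)                          # save the sequence of blocks
    if not aligned(first_seed, len(saved), subsequent_seed):
        saved.clear()                             # misaligned: delete the blocks
    return saved                                  # aligned: maintain the blocks
```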
20170300390 | DATABASE PROTECTION USING BLOCK-LEVEL MAPPING - A system according to certain aspects may include a client computing device including: a database application configured to output a database file in a primary storage device(s), the database application outputting the database file as a series of application-level blocks; and a data agent configured to divide the database file into a plurality of first blocks having a first granularity larger than a second granularity of the application-level blocks such that each of the first blocks spans a plurality of the application-level blocks. The system may include a secondary storage controller computer(s) configured to: in response to instructions to create a secondary copy of the database file: copy the plurality of first blocks to a secondary storage device(s) to create a secondary copy of the database file; and create a table that provides a mapping between the copied plurality of first blocks and corresponding locations on the secondary storage device(s). | 2017-10-19 |
20170300391 | Scalable Log Partitioning System - Embodiments include an improved database logging system where transactions are allocated to multiple different partitions of a database log file and log records for transactions are written to different partitions of a database log. Each partition can store log records for a separate transaction in a separate log cache memory. Writing log records to a page of the database log can be prevented until previous log records modifying that same page have been written to disk. A sequential timestamp that is unique across the plurality of partitions may be assigned to the log records for this purpose, and a log record containing a modification to a page can be flushed after previous modifications to the page have been written to disk. Restore operations can then be performed by copying the log records of the multiple partitions into a priority data structure and ordered into a merged list based on timestamp. | 2017-10-19 |
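Because each partition's records carry globally unique sequential timestamps and are locally ordered, the restore-side merge is a k-way merge through a priority structure. A small sketch (the representation is invented; Python's `heapq.merge` plays the role of the priority data structure):

```python
import heapq

# Sketch of the restore merge: log records from multiple partitions, each locally
# sorted by a globally unique timestamp, are merged into one replay-ordered list.

def merge_partitions(partitions):
    """partitions: list of lists of (timestamp, record), each sorted by timestamp."""
    return [record for _, record in heapq.merge(*partitions)]
```

`heapq.merge` keeps only one record per partition in the heap at a time, so the merge is streaming rather than a full sort.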
20170300392 | TEST CIRCUIT FOR 3D SEMICONDUCTOR DEVICE AND METHOD FOR TESTING THEREOF - Disclosed herein is a test circuit for a 3D semiconductor device for detecting soft errors and a method for testing thereof. The test circuit includes a first Multiple Input Signature Register (MISR) disposed in a first semiconductor chip, the first MISR compressing a first test result signal corresponding to a test pattern, a second MISR disposed in a second semiconductor chip stacked on or under the first semiconductor chip, the second MISR compressing a second test result signal corresponding to the test pattern, and a first error detector to detect a soft error by comparing a first output signal output from the first MISR with a second output signal output from the second MISR. | 2017-10-19 |
20170300393 | RAID REBUILD ALGORITHM WITH LOW I/O IMPACT - A disclosed storage management method includes detecting an unrecoverable failure associated with a logical block of a first physical storage device that is one of a plurality of storage devices within a redundant virtual drive that also includes a hot spare drive. Data for the unrecoverable block may be rebuilt from data in the remaining storage devices and stored in a logical block of the hot spare drive. One or more logical block maps may be maintained to identify unrecoverable logical blocks and to indicate the logical blocks and storage devices to which each of the unrecoverable logical blocks is relocated. I/O operations that access "good" logical blocks proceed normally, while accesses to unrecoverable logical blocks are rerouted according to the logical block map. One or more unrecoverable thresholds may be supported to initiate operations to replace storage devices containing unrecoverable blocks exceeding an applicable threshold. | 2017-10-19 |
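The rerouting rule can be shown with a toy layout (dicts standing in for drives, names invented for illustration): good blocks read from the original device, remapped blocks from their rebuilt copy on the hot spare.

```python
# Sketch of read routing through a logical block map: accesses to blocks recorded
# as unrecoverable are redirected to the rebuilt copy on the hot spare drive.

def route_read(lba, drive, spare, block_map):
    """block_map: {unrecoverable lba on drive -> lba of rebuilt copy on spare}."""
    if lba in block_map:
        return spare[block_map[lba]]      # rerouted to the rebuilt block
    return drive[lba]                     # normal path for "good" blocks
```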
20170300394 | FAULT TOLERANCE FOR CONTAINERS IN A VIRTUALIZED COMPUTING ENVIRONMENT - Example methods are described to provide fault tolerance for a container in a virtualized computing environment that includes a first virtual machine and a second virtual machine. The method may comprise detecting a failure at the first virtual machine. The container may be supported by the first virtual machine to run an application on a first operating system of the first virtual machine. The method may further comprise providing data relating to the container to the second virtual machine; and based on the data relating to the container, resuming the container in the second virtual machine to run the application on a second operating system of the second virtual machine. | 2017-10-19 |
20170300395 | CHUNK REDUNDANCY ARCHITECTURE FOR MEMORY - An integrated circuit (IC) includes addressable blocks of memory, and at least one redundant block of memory. A block of memory includes two or more chunks of memory. The IC also includes redundancy control cells. Control circuitry is included to access a first chunk of a redundant block of memory in place of a first remapped chunk of one of the addressable blocks of memory, and a second chunk of a redundant block of memory in place of a second remapped chunk of one of the addressable blocks of memory, based on the redundancy control cells. | 2017-10-19 |
20170300396 | ALLOCATING DATA BASED ON HARDWARE FAULTS - A data storage service receives a request to store data into a data storage system that consists of many physical data storage locations, each location having various physical characteristics. The data storage service determines a proper location for the data based on data placement rules applied to the physical data storage locations such that a set of proper locations is identified. The data storage service can place the data according to data placement rules. | 2017-10-19 |
20170300397 | DEVICE WITH LOW-OHMIC CIRCUIT PATH - A device, including a low-ohmic circuit path; a normal operation circuit path coupled in parallel with the low-ohmic circuit path; and a circuit element configured to select between the low-ohmic circuit path and the normal operation circuit path. | 2017-10-19 |
20170300398 | VOLTAGE REGULATOR POWER REPORTING OFFSET SYSTEM - A voltage regulator power reporting offset system includes a monitored power reporting subsystem that determines a monitored power level, offsets the monitored power level using voltage regulator operation offset information to provide a first offset monitored power level, and reports the first offset monitored power level to voltage regulator operation components. A processor power reporting component receives the report of the first offset monitored power level from the monitored power reporting subsystem. A processor power reporting offset subsystem receives the report of the first offset monitored power level from the processor power reporting component, offsets the first offset monitored power level using the processor operation offset information to provide a second offset monitored power level that is different than the first offset monitored power level, and reports the second offset monitored power level to a processing system. | 2017-10-19 |
20170300399 | DATA TRANSMISSION METHOD, NON-TRANSITORY STORAGE MEDIUM, DATA TRANSMISSION DEVICE, LITHOGRAPHY APPARATUS, AND METHOD OF MANUFACTURING PRODUCT - A data transmission method of transmitting data of log information recorded in log data of a manufacturing apparatus to an external device includes: storing correspondence information between information of a first identifier and information of a second identifier, the first identifier being an identifier used to identify a thing about a process performed by the manufacturing apparatus and being shared by the manufacturing apparatus and the external device, the second identifier being an identifier used to identify a thing about a process performed by the manufacturing apparatus and being used by the manufacturing apparatus; and obtaining, based on the correspondence information, information of the first identifier corresponding to information of the second identifier recorded in log information, and transmitting data of the log information to which the obtained information of the first identifier has been added. The storing and the obtaining are executed by an information processing device. | 2017-10-19 |
20170300400 | DIAGNOSTIC WORKFLOW FOR PRODUCTION DEBUGGING - A diagnostic workflow file can be used to control the future diagnostic actions taken by a debugger without user interaction with the debugger when it executes. The diagnostic workflow file is used by a debugger during a debug session. The debugger performs the actions directed by the diagnostic workflow file to simulate an interactive live debug session. The diagnostic workflow file can include conditional diagnostic operations whose execution depends on the state of program variables, diagnostic variables and diagnostic primitives in the debug session. | 2017-10-19 |
20170300401 | METHODS AND SYSTEMS THAT IDENTIFY PROBLEMS IN APPLICATIONS - Methods that use marking, leveling and linking (“MLL”) processes to identify problems and dynamically correlate events recorded in various log files generated for a use case of an application are described. The marking process determines fact objects associated with the use-case from events recorded in the various log files, database dumps, captured user actions, network traffic, and third-party component logs in order to identify non-predefined problems with running the application in a distributed computing environment. The MLL methods do not assume a predefined input format and may be used with any data structure and plain log files. The MLL methods present results in a use-case trace in a graphical user interface. The use-case trace enables human users to monitor and troubleshoot execution of the application. The use-case trace identifies the types of non-predefined problems that have occurred and points in time when the problems occurred. | 2017-10-19 |
20170300402 | MOCK SERVER AND EXTENSIONS FOR APPLICATION TESTING - Techniques are described for employing a mock server that executes on a client to facilitate negative testing of an application and/or other types of testing. The mock server may intercept OData requests sent from an application toward a backend server. For at least some of the intercepted requests, the mock server may determine a mock response to be returned to the application instead of a response that would be generated by the backend server. In some examples, the mock server may employ various mock server extension components to generate the mock response. The mock response may include an error message, warning message, and/or other content, and may be provided to enable negative testing of the application. In some instances, the application employs a user interface (UI) model to provide UI elements. | 2017-10-19 |
20170300403 | RECORDATION OF USER INTERFACE EVENTS FOR SCRIPT GENERATION - An example method of generating one or more scripts specific to an application programming interface (API) type and language and in accordance with user-selected events includes receiving an API type and a language in which to implement a script. Events selected by a user via a graphical user interface in response to receiving a request to record the events may be recorded. Additionally, the user-selected events may be mapped to a set of commands specific to the API type and the language. Additionally, a script including a first command to import a set of modules specific to the API type and language, a second command to create a computing session, and the set of commands is generated. | 2017-10-19 |
20170300404 | SOFTWARE INTEGRATION TESTING WITH UNSTRUCTURED DATABASE - According to examples, software integration testing with an unstructured database may include retrieving a configuration file stored in memory, and parsing the configuration file to identify configuration details of an unstructured database. A connection may be established between an integration testing tool and the unstructured database based on the configuration details. Software integration testing with an unstructured database may further include identifying a transaction file specifying a database operation to be performed by the unstructured database to retrieve data stored in the unstructured database responsive to the application performing a function. A query may be generated based on the database operation. The query may be sent, via an interface, to the unstructured database for execution. Results of the query may be received via the interface. The query results may be compared to validation data to determine whether the function operates in a determined manner. | 2017-10-19 |
20170300405 | METHOD, APPARATUS, AND COMPUTER-READABLE MEDIUM FOR PERFORMING FUNCTIONAL TESTING OF SOFTWARE - A system, method and computer-readable medium for performing functional testing of software, including storing a plurality of statements in a plurality of cells, the plurality of cells being organized in a plurality of columns, the plurality of columns including a verification column and each statement in the verification column corresponding to an acceptance criterion for a step of a functional test of the software, storing a plurality of mappings linking the plurality of cells to a plurality of blocks of executable code, each block of executable code configured to execute commands on the software, executing the plurality of blocks of executable code to produce an output by iterating through the plurality of cells and executing each executable block of code linked to each corresponding cell, and transmitting an indication of whether the software meets acceptance criteria in the verification column based at least in part on the output. | 2017-10-19 |
20170300406 | UNIVERSAL PROTOCOL FOR POWER TOOLS - A system and method for communicating with power tools using a universal protocol. The universal protocol may be implemented using a universal core module that is installed across a variety of power tools and other devices to enable communications therewith. Communications to and from the power tools are translated to a universal protocol once received. The translated communications are handled by the universal core module of a particular tool according to a set of rules. In response, the universal core module outputs communications according to the universal protocol and the set of rules, which may be translated to another protocol for receipt by components of the tool or an external device. The communications may be used, for example, to obtain tool performance data from the tools and to provide firmware updates. | 2017-10-19 |
20170300407 | MANAGING DATABASE INDEX BY LEVERAGING KEY-VALUE SOLID STATE DEVICE - According to one general aspect, an apparatus may include a host interface layer, a translation data structure, and a non-volatile memory. The host interface layer may be configured to receive a multi-association command that associates two or more keys with a common value. The translation data structure may be configured to: maintain a key-value index that represents a plurality of key-value descriptors stored within a non-volatile memory, and associate the two or more keys with the common value. The non-volatile memory may be configured to store a plurality of key-value descriptors each including a respective value and at least one respective key, wherein at least one key-value descriptor includes a plurality of keys, wherein each of the plurality of keys is associated with the common value, and wherein the at least one key-value descriptor further includes either the common value or a pointer to the common value. | 2017-10-19 |
20170300408 | SYSTEM AND METHOD FOR REDUCING STRESS ON MEMORY DEVICE - A system for reducing stress on a memory device that has multiple memory blocks. The system includes a counting unit for incrementing count values respectively associated with the memory blocks. Each of the count values indicates the number of times the associated memory block has been erased. A controller monitors the count values. Upon detecting that a count value associated with a first memory block reaches a predefined threshold, the controller selects a second memory block from the memory blocks to be swapped with the first memory block based on a count value associated with the second memory block. | 2017-10-19 |
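The erase-count scheme above can be illustrated with a few lines of code: once a block's count hits a threshold, a swap partner is chosen by its own count. This is a hedged sketch, assuming the partner is simply the least-erased block; the names and threshold are hypothetical.

```python
# Illustrative sketch of threshold-triggered block swapping driven by
# per-block erase counts (names and policy are assumptions, not from
# the patent).

THRESHOLD = 100  # hypothetical erase-count limit

def select_swap(erase_counts, hot_block):
    """When hot_block's erase count reaches THRESHOLD, pick the block
    with the lowest erase count as its swap partner; otherwise None."""
    if erase_counts[hot_block] < THRESHOLD:
        return None
    candidates = [b for b in range(len(erase_counts)) if b != hot_block]
    return min(candidates, key=lambda b: erase_counts[b])
```

Swapping a heavily erased block with a lightly erased one spreads wear, which is the stress reduction the abstract claims.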
20170300409 | METHOD FOR MANAGING A MEMORY APPARATUS - A memory apparatus includes at least one non-volatile memory element, which includes a plurality of physical blocks. A method for managing the memory apparatus includes: obtaining a first host address from a received first access command; linking the first host address to a first page of the physical block; obtaining a second host address from a received second access command; linking the second host address to a second page of the physical block; and selectively erasing a portion of the blocks according to a valid/invalid page count of the physical block corresponding to accessing pages of the physical block. A difference value of the first host address and the second host address is greater than a number of pages of the physical block. | 2017-10-19 |
20170300410 | Method and System for Optimizing Deterministic Garbage Collection in Nand Flash Storage Systems - A method for partial garbage collection in a NAND flash storage system is disclosed. The method includes receiving a real time data request task in a NAND flash storage system; executing the real time data request task in the NAND flash storage system; determining a condition whether a number of free pages in the NAND flash storage system is below a pre-determined threshold; for the condition that the number of free pages in the NAND flash storage system is below a pre-determined threshold, determining whether a partial garbage collection list is empty; for the condition that the partial garbage collection list is empty, selecting a victim block from a plurality of blocks in the NAND flash storage system; creating partial garbage collection tasks in the NAND flash storage system; and putting the partial garbage collection tasks in the partial garbage collection list. | 2017-10-19 |
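The step sequence in the abstract (check free pages, check the pending list, pick a victim, enqueue small tasks) maps directly onto a short function. A minimal sketch follows, assuming the victim is the block with the most invalid pages and that reclamation splits into a fixed number of bounded tasks; both assumptions are illustrative, not from the patent.

```python
# Minimal sketch of the partial-garbage-collection flow: after a
# real-time request, enqueue partial GC tasks only if free space is
# low and no tasks are already pending. All names are illustrative.

def maybe_schedule_partial_gc(free_pages, threshold, gc_list, blocks, steps=4):
    if free_pages >= threshold:
        return gc_list               # enough free pages: do nothing
    if gc_list:
        return gc_list               # tasks already pending
    # Victim selection: block with the most invalid pages (assumption).
    victim = max(blocks, key=lambda b: b["invalid_pages"])
    # Split reclamation of the victim into small bounded tasks so each
    # fits between real-time requests.
    for step in range(steps):
        gc_list.append((victim["id"], step))
    return gc_list
```

Bounding each task is what makes the garbage collection "deterministic": no single GC step can delay a real-time request for long.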
20170300411 | MEMORY CONTROLLER AND DATA STORAGE APPARATUS INCLUDING THE SAME - A data storage apparatus, a memory controller, and/or an operating method thereof may be disclosed. The memory controller may include an address generator configured to generate an operation target address and a destination address. The memory controller may be configured to output the operation target address and the destination address. The memory controller may include a data processor configured to receive the operation target address, read data by accessing the corresponding address of the operation target address, perform an operation on the read data, access the destination address, and write a result of the operation to the accessed destination address. | 2017-10-19 |
20170300412 | PAGE MODIFICATION - Systems and methods associated with page modification are disclosed. One example method may be embodied on a non-transitory computer-readable medium storing computer-executable instructions. The instructions, when executed by a computer, may cause the computer to fetch a page to a buffer pool in a memory. The page may be fetched from at least one of a log and a backup using single page recovery. The instructions may also cause the computer to store a modification of the page to the log. The modification may be stored to the log as a log entry. The instructions may also cause the computer to evict the page from memory when the page is replaced in the buffer pool. Page writes associated with the eviction may be elided. | 2017-10-19 |
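The elided-write behavior above — log the modification, drop the page on eviction, and rebuild it later from log replay — can be modeled in a few lines. This sketch is an assumption-laden toy (whole-page log entries, linear replay), not the patent's actual recovery mechanism.

```python
# Hedged sketch: modifications go to a log instead of being written
# back; eviction simply drops the page (the write is elided); a later
# fetch replays the page's log entries over the backup copy.

class LoggedBufferPool:
    def __init__(self, backup):
        self.backup = dict(backup)   # page_id -> base page contents
        self.log = []                # (page_id, new_value) log entries
        self.pool = {}               # in-memory buffer pool

    def fetch(self, page_id):
        if page_id not in self.pool:
            value = self.backup[page_id]
            for pid, v in self.log:          # single-page recovery:
                if pid == page_id:           # replay only this page's
                    value = v                # log entries
            self.pool[page_id] = value
        return self.pool[page_id]

    def modify(self, page_id, value):
        self.fetch(page_id)
        self.pool[page_id] = value
        self.log.append((page_id, value))    # logged, not written back

    def evict(self, page_id):
        self.pool.pop(page_id, None)         # page write elided
```

The backup is never overwritten, yet a re-fetch still sees the latest modification, illustrating why the eviction-time write can be skipped.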
20170300413 | APPARATUSES AND METHODS FOR PROVIDING DATA TO A CONFIGURABLE STORAGE AREA - Apparatuses and methods for providing data to a configurable storage area are disclosed herein. An example apparatus may include an extended address register including a plurality of configuration bits indicative of an offset and a size, an array having a storage area, a size and offset of the storage area based, at least in part, on the plurality of configuration bits, and a buffer configured to store data, the data including data intended to be stored in the storage area. A memory control unit may be coupled to the buffer and configured to cause the buffer to store the data intended to be stored in the storage area in the storage area of the array responsive, at least in part, to a flush command. | 2017-10-19 |
20170300414 | Delayed Write Through Cache (DWTC) and Method for Operating the DWTC - A cache and a method for operating a cache are disclosed. In an embodiment, the cache includes a cache controller, a data cache, and a delayed write through cache (DWTC), wherein the data cache is separate and distinct from the DWTC, wherein cacheable write accesses are split into shareable cacheable write accesses and non-shareable cacheable write accesses, wherein the shareable cacheable write accesses are allocated only to the DWTC, and wherein the non-shareable cacheable write accesses are not allocated to the DWTC. | 2017-10-19 |
20170300415 | ASYMMETRICAL MEMORY MANAGEMENT - Described herein are embodiments of asymmetric memory management to enable high bandwidth accesses. In embodiments, a high bandwidth cache or high bandwidth region can be synthesized using the bandwidth capabilities of more than one memory source. In one embodiment, memory management circuitry includes input/output (I/O) circuitry coupled with a first memory and a second memory. The I/O circuitry is to receive memory access requests. The memory management circuitry also includes logic to determine if the memory access requests are for data in a first region of system memory or a second region of system memory, and in response to a determination that one of the memory access requests is to the first region and a second of the memory access requests is to the second region, access data in the first region from the cache of the first memory and concurrently access data in the second region from the second memory. | 2017-10-19 |
20170300416 | ARITHMETIC PROCESSING APPARATUS AND CONTROL METHOD OF THE ARITHMETIC PROCESSING APPARATUS - An arithmetic processing apparatus includes a prefetch unit configured to send a prefetch request to a subordinate cache memory for prefetching data of a main storage device into a primary cache memory. The arithmetic processing apparatus further includes a count unit configured to count a hit count of how many times it is detected that prefetch request target data is retained in the subordinate cache memory when executing a response process to respond to the prefetch request sent from the prefetch unit. The arithmetic processing apparatus yet further includes an inhibition unit configured to inhibit the prefetch unit from sending the prefetch request when the counted hit count reaches a threshold value. | 2017-10-19 |
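The count-and-inhibit logic above is small enough to state directly: count responses where the prefetch target was already in the subordinate cache, and stop prefetching once a threshold is reached. The class below is an illustrative model; its names are assumptions, not from the patent.

```python
# Illustrative model of prefetch inhibition: stop issuing prefetch
# requests once the subordinate-cache hit count for prefetch targets
# reaches a threshold (names are assumptions, not from the patent).

class PrefetchUnit:
    def __init__(self, threshold):
        self.threshold = threshold
        self.hits = 0
        self.inhibited = False

    def on_prefetch_response(self, hit_in_subordinate_cache):
        # Count responses where the target data was already cached.
        if hit_in_subordinate_cache:
            self.hits += 1
            if self.hits >= self.threshold:
                # Frequent hits mean prefetches add little value and
                # only consume bandwidth, so inhibit them.
                self.inhibited = True

    def should_prefetch(self):
        return not self.inhibited
```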
20170300417 | Multi-Way Set Associative Cache and Processing Method Thereof - A multi-way set associative cache and a processing method thereof, where the cache includes M pipelines, a controller, and a data memory, where any one of the pipelines includes an arbitration circuit, a tag memory, and a determining circuit, where the arbitration circuit receives at least one lookup request at an N | 2017-10-19 |
20170300418 | DYNAMIC POWERING OF CACHE MEMORY BY WAYS WITHIN MULTIPLE SET GROUPS BASED ON UTILIZATION TRENDS - A set associative cache memory comprises an M×N memory array of storage entries arranged as M sets by N ways, both M and N are integers greater than one. Within each group of P mutually exclusive groups of the M sets, the N ways are separately powerable. A controller, for each group of the P groups, monitors a utilization trend of the group and dynamically causes power to be provided to a different number of ways of the N ways of the group during different time instances based on the utilization trend. | 2017-10-19 |
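A toy controller can make the per-group way powering above concrete. The abstract does not say how the utilization trend is computed, so this sketch assumes a simple rising/falling comparison; that rule, and all names, are hypothetical.

```python
# Toy model of per-set-group way powering: each of P groups tracks a
# utilization trend and powers between 1 and N of its ways. The trend
# rule (rising -> one more way, falling -> one fewer) is an assumption
# of this sketch, not the patent's method.

class WayPowerController:
    def __init__(self, n_ways, n_groups):
        self.n_ways = n_ways
        self.powered = [n_ways] * n_groups   # ways powered per group

    def update(self, group, utilization, previous):
        if utilization > previous and self.powered[group] < self.n_ways:
            self.powered[group] += 1         # demand rising: add a way
        elif utilization < previous and self.powered[group] > 1:
            self.powered[group] -= 1         # demand falling: drop a way
        return self.powered[group]
```

Because each group adjusts independently, a hot group can run fully powered while a cold group saves power, which is the point of splitting the M sets into P groups.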
20170300419 | MEMORY ACCESS METHOD, STORAGE-CLASS MEMORY, AND COMPUTER SYSTEM - A memory access method, a storage-class memory, and a computer system are provided. The computer system includes a memory controller and a hybrid memory, and the hybrid memory includes a dynamic random access memory (DRAM) and a storage-class memory (SCM). The memory controller sends a first access instruction to the DRAM and the SCM. When determining that a first memory cell set that is of the DRAM and to which a first address in the received first access instruction points includes a memory cell whose retention time is shorter than a refresh cycle of the DRAM, the SCM may obtain a second address having a mapping relationship with the first address. Further, the SCM converts, according to the second address, the first access instruction into a second access instruction for accessing the SCM, to implement access to the SCM. | 2017-10-19 |
20170300420 | NO-LOCALITY HINT VECTOR MEMORY ACCESS PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a no-locality hint vector memory access instruction. The no-locality hint vector memory access instruction is to indicate a packed data register of the plurality of packed data registers that is to have source packed memory indices. The source packed memory indices are to have a plurality of memory indices. The no-locality hint vector memory access instruction is to provide a no-locality hint to the processor for data elements that are to be accessed with the memory indices. The processor also includes an execution unit coupled with the decode unit and the plurality of packed data registers. The execution unit, in response to the no-locality hint vector memory access instruction, is to access the data elements at memory locations that are based on the memory indices. | 2017-10-19 |
20170300421 | Hybrid Tracking of Transaction Read and Write Sets - Tracking of processor instructions is provided to limit speculative mis-prediction. A non-speculative read set indication and/or write set indication is maintained for a transaction. In addition, a queue of at least one address corresponding to a speculatively executed instruction is maintained. For a received request from a remote processor, a transaction resolution process takes place, and a resolution is performed if an address match in the queue is detected. The resolution includes holding a response to the received request until the speculative instruction is committed or flushed. | 2017-10-19 |
20170300422 | MEMORY DEVICE WITH DIRECT READ ACCESS - Several embodiments of memory devices with direct read access are described herein. In one embodiment, a memory device includes a controller operably coupled to a plurality of memory regions forming a memory. The controller is configured to store a first mapping table at the memory device and also to provide the first mapping table to a host device for storage at the host device as a second mapping table. The controller is further configured to receive a direct read request sent from the host device. The read request includes a memory address that the host device has selected from the second mapping table stored at the host device. In response to the direct read request, the controller identifies a memory region of the memory based on the selected memory address in the read request and without using the first mapping table stored at the memory device. | 2017-10-19 |
20170300423 | WEAR LEVELING IN STORAGE DEVICES - A system may include a plurality of memory cells and a processor. The plurality of memory cells may include a plurality of physical locations at which data is stored. The processor may be configured to determine whether to swap physical locations of data stored at logical block addresses in the first logical block address collection and physical locations of data stored at logical block addresses in the second logical block address collection. The processor may be further configured to, in response to determining to swap the physical locations of the data, swap the physical locations of the data stored at the logical block addresses in the first logical block address collection and the physical locations of the data stored at the logical block addresses in the second logical block address collection. | 2017-10-19 |
20170300424 | EFFICIENT METADATA IN A STORAGE SYSTEM - A method for managing metadata in a storage system is disclosed. The system includes a processor, a storage medium, a first metadata table that maps every data block's LBN to its unique content ID, and a second metadata table that maps every content ID to its PBN on the storage medium. During a data movement process, the processor is configured to determine the content ID of the data block and update its entry in the second metadata table without accessing the first metadata table. A method is also disclosed to reduce the size of the first metadata table. Only content ID is stored in the first metadata table and its LBN is determined by the metadata entry's relative position in the table. Metadata entries are stored in metadata blocks and deduplicated. | 2017-10-19 |
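The two-table split above (LBN → content ID by table position, content ID → PBN) is what lets data movement skip the first table entirely. The sketch below illustrates that property under stated assumptions: a truncated SHA-256 stands in for the content ID, and all class and function names are hypothetical.

```python
# Sketch of the two-table metadata scheme: table 1 maps an LBN to a
# content ID implicitly by position; table 2 maps content ID -> PBN.
# Moving data on the medium touches table 2 only. Names and the use
# of SHA-256 as the content ID are assumptions of this sketch.

import hashlib

def content_id(data):
    return hashlib.sha256(data).hexdigest()[:16]

class MetadataTables:
    def __init__(self):
        self.lbn_to_cid = []   # list index IS the LBN (saves storing it)
        self.cid_to_pbn = {}   # content ID -> physical block number

    def write(self, data, pbn):
        cid = content_id(data)
        self.lbn_to_cid.append(cid)
        self.cid_to_pbn[cid] = pbn
        return len(self.lbn_to_cid) - 1     # the new block's LBN

    def move(self, data, new_pbn):
        # Data movement: recompute the content ID and update table 2;
        # table 1 is never consulted or modified.
        self.cid_to_pbn[content_id(data)] = new_pbn

    def resolve(self, lbn):
        return self.cid_to_pbn[self.lbn_to_cid[lbn]]
```

Because the LBN is encoded as the entry's position, the first table shrinks to a bare list of content IDs, matching the size-reduction claim in the abstract.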
20170300425 | TRANSLATION LOOKASIDE BUFFER SWITCH BANK - Example devices are disclosed. For example, a device may include a processor, a plurality of translation lookaside buffers, a plurality of switches, and a memory management unit. Each of the translation lookaside buffers may be assigned to a different process of the processor, each of the plurality of switches may include a register for storing a different process identifier, and each of the plurality of switches may be associated with a different one of the translation lookaside buffers. The memory management unit may be for receiving a virtual memory address and a process identifier from the processor and forwarding the process identifier to the plurality of switches. Each of the plurality of switches may be for connecting the memory management unit to the translation lookaside buffer associated with the switch when there is a match between the process identifier and the different process identifier stored by the register of the switch. | 2017-10-19 |
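The switch bank behavior — broadcast the process identifier, and only the switch whose register matches connects its TLB — can be modeled functionally. This is an illustrative abstraction; the data-structure names are assumptions based on the abstract, not on the patent's hardware.

```python
# Illustrative model of the switch bank: each switch holds a process
# identifier register and connects its per-process TLB to the MMU only
# on a match. Structure names are assumptions based on the abstract.

class TLBSwitchBank:
    def __init__(self, assignments):
        # assignments: process_id -> that process's TLB (dict VA -> PA)
        self.switches = [(pid, tlb) for pid, tlb in assignments.items()]

    def translate(self, process_id, vaddr):
        # The MMU forwards the process identifier to every switch.
        for stored_pid, tlb in self.switches:
            if stored_pid == process_id:   # this switch closes
                return tlb.get(vaddr)
        return None                        # no switch matched
```

Note how the same virtual address can translate differently per process, since each process identifier selects a different TLB.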
20170300426 | READ CACHE MANAGEMENT METHOD AND APPARATUS BASED ON SOLID STATE DRIVE - A read cache management method and apparatus based on a solid state drive, and the method includes: determining whether a read request hits a first queue and a second queue (S | 2017-10-19 |
20170300427 | MULTI-PROCESSOR SYSTEM WITH CACHE SHARING AND ASSOCIATED CACHE SHARING METHOD - A multi-processor system with cache sharing has a plurality of processor sub-systems and a cache coherence interconnect circuit. The processor sub-systems have a first processor sub-system and a second processor sub-system. The first processor sub-system includes at least one first processor and a first cache coupled to the at least one first processor. The second processor sub-system includes at least one second processor and a second cache coupled to the at least one second processor. The cache coherence interconnect circuit is coupled to the processor sub-systems, and used to obtain a cache line data from an evicted cache line in the first cache, and transfer the obtained cache line data to the second cache for storage. | 2017-10-19 |
20170300428 | METHODS AND SYSTEMS FOR SELECTIVE ENCRYPTION AND SECURED EXTENT QUOTA MANAGEMENT FOR STORAGE SERVERS IN CLOUD COMPUTING - Methods and systems for selective encryption and secured extent quota management for storage servers in cloud computing are provided. A method includes associating at least one secure storage disk and at least one non-secure storage disk to a virtual disk, and associating the virtual disk to an application to allow access of the at least one secure storage disk and the at least one non-secure storage disk. The method further includes accessing the at least one secure storage disk and the at least one non-secure storage disk based on the associating of the virtual disk to the application, to write or read confidential and non-confidential data associated with the application into a respective one of the at least one secure storage disk and the at least one non-secure storage disk. | 2017-10-19 |
20170300429 | Determining Whether a Data Storage Is Encrypted - A method, program and/or system reads first data through a first path from a location in a data storage. Second data is read through a second path from the same location in the data storage. The first data is compared to the second data. A match between the first data and the second data indicates that the first path did not encrypt the first data. A mismatch between the first data and the second data indicates that the first path encrypted the first data. | 2017-10-19 |
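The dual-path comparison above reduces to a single predicate: read the same location through both paths and compare. The sketch below treats the two paths as caller-supplied read functions, which is a simplifying assumption of this illustration, not the patent's mechanism.

```python
# Minimal sketch of the dual-path encryption check: read the same
# location through a possibly-encrypting path and through a raw path,
# then compare. The path functions are hypothetical stand-ins.

def path_encrypts(read_via_path, read_raw, location):
    """Return True if the first path transformed (encrypted) the data:
    a mismatch means the path encrypted it, a match means it did not."""
    return read_via_path(location) != read_raw(location)
```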
20170300430 | Techniques for Protecting Memory Pages of a Virtual Computing Instance - Mechanisms to protect the integrity of memory of a virtual machine are provided. The mechanisms involve utilizing certain capabilities of the hypervisor underlying the virtual machine to monitor writes to memory pages of the virtual machine. A guest integrity driver communicates with the hypervisor to request such functionality. Additional protections are provided for protecting the guest integrity driver and associated data, as well as for preventing use of these mechanisms by malicious software. These additional protections include an elevated execution mode, termed “integrity mode,” which can only be entered from a specified entry point, as well as protections on the memory pages that store the guest integrity driver and associated data. | 2017-10-19 |