39th week of 2018 patent application highlights part 51 |
Patent application number | Title | Published |
20180276017 | SCALABLE POLICY MANAGEMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - A device includes a memory that is configured to store instructions. The device includes a processor configured to execute the instructions to: validate a message including virtual machine (VM) information using a value of a virtual station interface (VSI) type identification (ID) to perform a lookup of a fetched VSI database. The VM information for the VM includes VSI type ID and virtual local area network (VLAN) ID. The processor further generates a first table for multiple different VM types with at least a portion of information from the VSI database, retrieves an address of the VM from the first table for the multiple different VM types based on using VSI type ID and network ID, retrieves rules associated with the retrieved address of the VM and the VSI type ID from a second table including VM information, and applies the associated rules for the VM. | 2018-09-27 |
20180276018 | Hardware Independent Interface for Cognitive Data Migration - A method for off-board data migration. Responsive to receiving a request to migrate a virtual machine image, a memory location of the source virtual machine is identified. Using a generalized pathing interface, a range of logical blocks is accessed for both the source and destination virtual machines. The memory location of the source virtual machine is copied to a memory location of the destination virtual machine. The destination virtual machine is started, and the source virtual machine is stopped. | 2018-09-27 |
20180276019 | ENSURING OPERATIONAL INTEGRITY AND PERFORMANCE OF DEPLOYED CONVERGED INFRASTRUCTURE INFORMATION HANDLING SYSTEMS - An operational integrity and performance validation module performs operational integrity operations, performance validation operations, or both. Operational integrity operations may include: accessing an operational integrity schema comprising a task manifest identifying operational tasks and a performance manifest including performance thresholds for the tasks. The OIPV module communicates with different types of infrastructure managers and each schema may be associated with a particular infrastructure manager. The OIPV module may invoke the applicable infrastructure manager to perform the applicable tasks. For each task, the module may poll the infrastructure manager for status information and record completion status and time-to-complete information. Performance validation operations may include accessing a performance validation manifest, configuring an image manifest of a benchmark image in accordance with the performance validation operations, deploying the benchmark image to a node under test, and instructing an image agent of the benchmark image to execute performance tests in accordance with the image manifest. | 2018-09-27 |
20180276020 | INFORMATION PROCESSING SYSTEM AND VIRTUAL MACHINE - An information processing system includes one or more virtual machines, a container scaling apparatus, and a virtual-machine scaling apparatus. The container scaling apparatus performs autoscaling processing of a container that runs on a virtual machine among the one or more virtual machines. The virtual-machine scaling apparatus performs autoscaling processing of the one or more virtual machines and, when performing scale-in, stops a virtual machine whose protective state with respect to scale-in has been cancelled among the one or more virtual machines. Each of the one or more virtual machines includes a controller that performs control in such a manner that the virtual machine is set to a protective state with respect to the scale-in performed by the virtual-machine scaling apparatus if one or more containers are running on the virtual machine. | 2018-09-27 |
20180276021 | INFORMATION PROCESSING SYSTEM, AUTOSCALING ASSOCIATION APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing system includes one or more virtual machines, a container scaling apparatus, a virtual-machine scaling apparatus, a calculating unit, and a reflecting unit. The container scaling apparatus performs autoscaling of a container that runs on the one or more virtual machines. The virtual-machine scaling apparatus performs autoscaling of the virtual machines in accordance with a difference in a necessary number of virtual machines and a number of virtual machines that are currently running. The calculating unit calculates, from a necessary number of containers calculated by the container scaling apparatus, a number of virtual machines that is necessary to cause the necessary number of containers to run thereon. The reflecting unit reflects the number calculated by the calculating unit to the necessary number of virtual machines to be used by the virtual-machine scaling apparatus. | 2018-09-27 |
20180276022 | CONSISTENT VIRTUAL MACHINE REPLICATION - Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing modified data blocks and without replacing the entire virtual machine disk. | 2018-09-27 |
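The bitmap-based revert described in 20180276022 above can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation; the block size, the byte-string disks, and all names are assumptions.

```python
# Sketch of reverting a virtual disk using a modified-block bitmap:
# only blocks flagged as modified are copied back, rather than the
# whole disk. BLOCK_SIZE and the in-memory "disks" are illustrative.

BLOCK_SIZE = 4

def build_bitmap(current_disk, recovery_disk):
    """Mark each block that differs between the two disk versions."""
    n_blocks = len(current_disk) // BLOCK_SIZE
    return [current_disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] !=
            recovery_disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i in range(n_blocks)]

def revert(current_disk, recovery_disk, bitmap):
    """Copy back only the blocks flagged in the bitmap."""
    disk = bytearray(current_disk)
    for i, modified in enumerate(bitmap):
        if modified:
            disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] = \
                recovery_disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
    return bytes(disk)
```

The point of the bitmap is that a revert touches O(modified blocks) of data instead of O(disk size).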
20180276023 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including generating a virtual machine when a processing load regarding one or more virtual machines on which a software runs exceeds a threshold, determining whether a number of first type licenses of the software used for the one or more virtual machines is less than a predetermined number, the first type licenses being flat-rate licenses for a given period or permanently, obtaining a second type license of the software when the number of first type licenses is no less than the predetermined number, the second type license being a pay-for-use billing license, and applying the second type license to the generated virtual machine. | 2018-09-27 |
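The license-selection step in 20180276023 above reduces to a simple decision: use a flat-rate license while the pool lasts, otherwise obtain a pay-per-use license for the newly generated VM. The pool size and function name below are illustrative assumptions.

```python
# Sketch of choosing a license type when scaling out: prefer flat-rate
# licenses up to a predetermined pool size, then fall back to
# pay-per-use billing. FLAT_RATE_POOL is an assumed value.

FLAT_RATE_POOL = 3  # predetermined number of flat-rate licenses

def pick_license(flat_rate_in_use):
    """Return the license type to apply to a newly generated VM."""
    if flat_rate_in_use < FLAT_RATE_POOL:
        return "flat-rate"
    return "pay-per-use"
```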
20180276024 | HOT-PLUG HARDWARE AND SOFTWARE IMPLEMENTATION - A network device may include various cards and modules, such as management modules, line cards, and switch fabric modules. In various implementations, these components can be “hot-plugged,” meaning that the components can be inserted into and removed from the network device while the network device is powered on. In various implementations, hardware in the network device can identify when a component has been added. The hardware can notify a virtual machine, which can then notify the host operating system. The host operating system can add the component, and then notify the virtual machine to also add the component. Once the virtual machine has added the component, the component becomes available for use by processes executing on the virtual machine. | 2018-09-27 |
20180276025 | VIRTUAL MACHINE SUSPENSION - A method and system for suspending operation of a virtual machine executed by a processing device of a host computing system. The method and system include storing state information of the virtual machine to a memory of the guest operating system. A notification is sent to an application of the virtual machine to enable the application to flush data prior to suspension of the operation of the virtual machine. Upon receipt of a confirmation that the state information is stored in the memory of the guest operating system, the state information is migrated to storage of the host computing system. | 2018-09-27 |
20180276026 | SCALABLE POLICY ASSIGNMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - One embodiment includes obtaining virtual machine (VM) information for at least one VM. The VM information includes a VSI type identification (ID) associated with each VM. A policy discriminator (PD) is associated for each VSI type ID, where the PD represents scalable policy assignment. At least one rule and bandwidth filter information associated with a VSI type ID is retrieved from virtual station interface (VSI) database (DB) information and PD for each VSI type ID. The associated at least one rule and filter information is applied based on one of multiple PD types. The multiple PD types comprise a VM type and a virtual local area network (vLAN) type. | 2018-09-27 |
20180276027 | COMPACTED CONTEXT STATE MANAGEMENT - Embodiments of an invention related to compacted context state management are disclosed. In one embodiment, a processor includes instruction hardware and state management logic. The instruction hardware is to receive a first save instruction and a second save instruction. The state management logic is to, in response to the first save instruction, save context state in an un-compacted format in a first save area. The state management logic is also to, in response to the second save instruction, save a compaction mask and context state in a compacted format in a second save area and set a compacted-save indicator in the second save area. The state management logic is also to, in response to a single restore instruction, determine, based on the compacted-save indicator, whether to restore context from the un-compacted format in the first save area or from the compacted format in the second save area. | 2018-09-27 |
20180276028 | SINGLE-HOP TWO-PHASE TRANSACTION RESOLUTION - A coordinator transaction processing monitor determines a transaction coordinator identifier associated with a transaction that spans transaction processing monitors distributed between transaction processing systems. The coordinator transaction processing monitor attaches the transaction coordinator identifier as part of a transaction request of an application flow of the transaction. The transaction request from the coordinator transaction processing monitor is transmitted to a next transaction processing monitor to sequentially propagate through the transaction processing monitors. A response from the next transaction processing monitor is received. The response includes a transaction resolution endpoint identifier for each of the transaction processing monitors participating in the transaction. Transaction resolution calls of a transaction resolution flow of the transaction are sent in parallel from the coordinator transaction processing monitor to the transaction processing monitors participating in the transaction as identified based on the transaction resolution endpoint identifier of each of the participating transaction processing monitors. | 2018-09-27 |
20180276029 | TRANSACTION REQUEST EPOCHS - A method may include receiving a first transaction request. The method may further include transmitting a retry response to the transaction request, which includes a first epoch identifier associated with a current epoch. The method may further include receiving a second transaction request, which includes a second epoch identifier associated with a previous epoch. The second transaction request may be fulfilled using a transaction resource reserved for the previous epoch. | 2018-09-27 |
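The epoch mechanism in 20180276029 above can be sketched as a server that rejects a first attempt with a retry response naming the current epoch, reserves a resource for that epoch, and then fulfills the retried request using the reservation. The class and field names are illustrative assumptions.

```python
# Sketch of epoch-tagged transaction retries: the retry response carries
# an epoch identifier, and a resource is reserved for that epoch so the
# retried request is not starved by newer traffic. Names are illustrative.

class EpochServer:
    def __init__(self):
        self.epoch = 0
        self.reserved = {}  # epoch id -> reserved slots

    def handle(self, request):
        epoch = request.get("epoch")
        if epoch in self.reserved and self.reserved[epoch] > 0:
            # Retried request from a previous epoch: consume its reservation.
            self.reserved[epoch] -= 1
            return {"status": "ok"}
        # First attempt: reserve a slot for the current epoch and ask the
        # client to retry with that epoch identifier.
        self.reserved[self.epoch] = self.reserved.get(self.epoch, 0) + 1
        response = {"status": "retry", "epoch": self.epoch}
        self.epoch += 1  # later first attempts belong to a newer epoch
        return response
```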
20180276030 | OFF-THE-SHELF SOFTWARE COMPONENT REUSE IN A CLOUD COMPUTING ENVIRONMENT - A distributed data processing method, system, and computer program product provide for automatically distributing production software that was not designed for such distribution by its developers, and for breaking down a production software application, automatically or otherwise, into components whose execution can be distributed across nodes. | 2018-09-27 |
20180276031 | TASK ALLOCATION METHOD AND SYSTEM - Embodiments of the present application provide a task allocation method and system. The method includes: analyzing at least one query pattern of a target task to acquire expected response time of the query pattern; estimating system cost information and estimated response time according to the query pattern and service description information; estimating node cost information of each processing node of a set of processing nodes in a computing system; selecting a processing node of the set of processing nodes according to the node cost information to allocate subtasks of the target task to the selected processing node; and determining an unallocated subtask in the target task to schedule the unallocated subtask according to the expected response time, the system cost information, and the estimated response time. | 2018-09-27 |
20180276032 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - An electronic apparatus, in which a memory is efficiently managed, and a control method thereof are provided. The electronic apparatus includes a processor configured to allocate at least one of a general memory and a kernel memory to a process corresponding to a program in response to execution of the program; calculate a total capacity of the general memory and the kernel memory allocated to each of a plurality of processes; and erase a selected process, among the plurality of processes, which is determined as having a low priority based on the calculated total capacity of the general memory and the kernel memory. | 2018-09-27 |
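The erasure step in 20180276032 above selects a victim process from the combined general and kernel allocations. One plausible reading, sketched below, treats the largest combined consumer as the low-priority process to erase; that interpretation, and all names, are assumptions.

```python
# Sketch of picking a process to erase based on its combined general +
# kernel memory allocation. Treating the largest consumer as the
# low-priority victim is an assumption, not the patent's exact rule.

def total_memory(proc):
    """Combined general + kernel allocation for one process, in KB."""
    return proc["general_kb"] + proc["kernel_kb"]

def pick_victim(processes):
    """Return the process with the largest combined allocation."""
    return max(processes, key=total_memory)
```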
20180276033 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR INFORMATION PROCESSING APPARATUS, AND APPLICATION MANAGEMENT METHOD - An information processing apparatus manages a plurality of operating applications. The information processing apparatus determines whether an activated user interface (UI) application is able to receive a clear execution notification from a system and executes reactivation processing of the UI application to release resources at the end of the UI application. | 2018-09-27 |
20180276034 | NEURAL NETWORK UNIT THAT INTERRUPTS PROCESSING CORE UPON CONDITION - A programmable apparatus includes a program memory that holds instructions of a program fetched and executed by the apparatus, a data memory that holds data processed by the instructions, a status register that holds a status having fields: a program memory address at which a most recent instruction is fetched from the program memory, a data memory access address at which data has most recently been accessed in the data memory by the apparatus and a repeat count that indicates a number of times an operation specified in a current program instruction remains to be performed. A condition register has condition fields corresponding to the status register fields. Control logic generates an interrupt request to a processing core in response to detecting that the status held in the status register satisfies the condition specified in the condition register. | 2018-09-27 |
20180276035 | NEURAL NETWORK UNIT THAT INTERRUPTS PROCESSING CORE UPON CONDITION - A method for operating an apparatus that includes a program memory, a data memory and a status register that holds a status, wherein the status has fields including: a program memory address at which a most recent instruction is fetched from the program memory, a data memory access address at which data has most recently been accessed in the data memory by the apparatus and a repeat count indicating a number of times an operation specified in a current program instruction remains to be performed, the apparatus further including a condition register having condition fields corresponding to the status fields held in the status register, the method including: writing the condition register with a condition including the condition fields; and generating an interrupt request to a processing core in response to detecting that the status held in the status register satisfies the condition specified in the condition register. | 2018-09-27 |
20180276036 | SYSTEM AND METHOD FOR PROVIDING A NATIVE JOB CONTROL LANGUAGE EXECUTION ENGINE IN A REHOSTING PLATFORM - In accordance with an embodiment, described herein is a system and method for providing a native JCL execution engine in a mainframe rehosting platform. A batch application runtime can comprise a JCL execution engine and one or more other types of execution engines. The JCL execution engine can provide a framework for supporting an internal JCL mechanism, a simulation of a plurality of mainframe utilities commonly used in JCL jobs, and a simulation of commonly used database utilities. When the JCL execution engine receives a JCL job, it can generate a statement sequence from the JCL job, order statements in the sequence into a doubly-linked sequence, and parse the doubly-linked sequence to construct a job element hierarchy tree for execution. A plurality of job initiators are provided to dequeue jobs from a job queue and dispatch the jobs to the JCL execution engine or another type of execution engine. | 2018-09-27 |
20180276037 | DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD - A data processing system includes a plurality of calculation processors cascaded and a plurality of counters connected to the plurality of calculation processors, respectively. The plurality of calculation processors process a task in an order in which the plurality of calculation processors are cascaded. A count value of an individual one of the plurality of counters is incremented when a corresponding one of the calculation processors starts to process a task and is decremented when a calculation processor in a lowermost stage among the plurality of calculation processors ends the task. | 2018-09-27 |
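The counter discipline in 20180276037 above can be sketched directly: each stage's counter is incremented when that stage starts a task, and every counter is decremented when the lowermost stage finishes the task. The class and method names are illustrative assumptions.

```python
# Sketch of cascaded per-stage counters: counter i tracks tasks that
# have entered stage i but not yet left the final stage. Names are
# illustrative, not from the patent.

class Pipeline:
    def __init__(self, n_stages):
        self.counters = [0] * n_stages

    def start(self, stage):
        """A task begins processing at the given stage."""
        self.counters[stage] += 1

    def finish_last(self):
        """The lowermost stage ends a task: a task that reached the end
        has passed through every stage, so every counter is decremented."""
        self.counters = [c - 1 for c in self.counters]
```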
20180276038 | VIRTUAL MACHINE CONSOLIDATION - Systems, methods and tools for managing the job queues of virtual machines, maintaining a low energy profile and a quality of service within the contractual service agreement. The systems migrate jobs to a new VM queue when an assigned VM has failed. The systems employ machine learning techniques to decide whether or not to reallocate the job to a VM running in an active mode (non-scalable mode) or a VM operating under a dynamic voltage and frequency scaling (DVFS) mode. The systems reconcile job failures, transfer and/or complete jobs using the network of VMs without degrading the service quality, maintaining a lower power consumption policy through scalable modes, including idle, busy, sleep, DVFS gradient and DVFS maximum modes, improving the overall reliability of the data center by switching the jobs to scalable nodes, increasing the recoverability of the systems in the virtualized environments. | 2018-09-27 |
20180276039 | Load sharing between wireless earpieces - A method for off-loading tasks between a set of wireless earpieces in an embodiment of the present invention may have one or more of the following steps: (a) monitoring battery levels of the set of wireless earpieces, (b) determining the first wireless earpiece battery level and the second wireless battery level, (c) communicating the battery levels of each wireless earpiece to the other wireless earpiece of the set of wireless earpieces, (d) assigning a first task involving one or more of the following: computing tasks, background tasks, audio processing tasks, and sensor data analysis tasks from one of the set of wireless earpieces to the other wireless earpiece if the battery level of the one of the set of wireless earpieces falls below a critical threshold, (e) communicating data for use in performing a second task to the other wireless earpiece if the second task is communicated to the first wireless earpiece. | 2018-09-27 |
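The handoff rule in 20180276039 above is essentially threshold-driven: when one earpiece's battery drops below a critical level and its peer's has not, reassignable tasks move to the peer. The threshold value, side labels, and task names below are illustrative assumptions.

```python
# Sketch of battery-based task offloading between two earpieces: tasks
# move off any earpiece below the critical threshold, provided the peer
# is above it. CRITICAL and all names are illustrative assumptions.

CRITICAL = 0.15  # 15% battery, an assumed critical threshold

def assign_tasks(levels, tasks):
    """levels: {'left': float, 'right': float} battery fractions;
    tasks: {side: [task, ...]}. Returns a new assignment."""
    out = {side: list(ts) for side, ts in tasks.items()}
    for side, level in levels.items():
        peer = "right" if side == "left" else "left"
        if level < CRITICAL and levels[peer] >= CRITICAL:
            out[peer].extend(out[side])  # hand everything to the peer
            out[side] = []
    return out
```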
20180276040 | EVENT-DRIVEN SCHEDULING USING DIRECTED ACYCLIC GRAPHS - Methods, systems, and computer-readable media for event-driven scheduling using directed acyclic graphs are disclosed. A directed acyclic graph is generated that comprises a plurality of nodes and a plurality of edges. The nodes represent jobs, and the edges represent dependency relationships between individual jobs. Based (at least in part) on one or more events, a job scheduler determines that one of the nodes represents a runnable job. One or more of the dependency relationships for the runnable job are satisfied by the one or more events. An execution schedule is determined for the runnable job. Based (at least in part) on the execution schedule, execution of the runnable job is initiated using one or more computing resources. | 2018-09-27 |
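The readiness condition in 20180276040 above — a node is runnable once events have satisfied all of its dependency edges — can be sketched as a subset test over completed parents. The data layout and names are illustrative assumptions.

```python
# Sketch of event-driven readiness in a job DAG: a job is runnable when
# every parent it depends on has reported a completion event and the job
# itself has not yet completed. The dict-of-sets layout is illustrative.

def runnable_jobs(dependencies, completed):
    """dependencies: {job: set of parent jobs}; completed: set of jobs
    whose completion events have arrived. Returns the runnable jobs."""
    return {job for job, parents in dependencies.items()
            if job not in completed and parents <= completed}
```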
20180276041 | DYNAMIC DISPATCHING OF WORKLOADS SPANNING HETEROGENEOUS SERVICES - A system for executing a workload that includes a plurality of transactions for a first time slot determines whether a metered cloud service has a sufficient quota of operations available to execute respective metered transactions. For the first time slot, the system determines whether a non-metered cloud service has a sufficient processing load to execute respective non-metered transactions. The system executes the plurality of transactions during the first time slot when each metered cloud service has the sufficient quota and each non-metered cloud service has the sufficient processing load. Further, the system waits to execute the plurality of transactions of the workload during a time slot subsequent to the first time slot when any of the metered cloud services does not have the sufficient quota or any of the non-metered cloud services does not have a sufficient processing load. | 2018-09-27 |
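The per-slot gating decision in 20180276041 above is a conjunction: run the workload in this slot only if every metered service has quota for its transactions and every non-metered service has processing headroom; otherwise defer. The dict shapes and names below are illustrative assumptions.

```python
# Sketch of the dispatch gate for a mixed metered/non-metered workload:
# all metered quotas and all non-metered load checks must pass before
# the slot's transactions execute. Field names are illustrative.

def can_run(metered, non_metered, workload):
    """metered: {name: {'quota_left': int}}; non_metered:
    {name: {'load': float, 'max_load': float}};
    workload: {'ops': {metered service name: required operations}}."""
    quota_ok = all(svc["quota_left"] >= workload["ops"][name]
                   for name, svc in metered.items())
    load_ok = all(svc["load"] <= svc["max_load"]
                  for svc in non_metered.values())
    return quota_ok and load_ok
```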
20180276042 | TECHNOLOGIES FOR IDENTIFYING THREAD MEMORY ALLOCATION - Systems, methods, and computer-readable media for identifying and managing memory allocation for one or more threads are described. A computer system may detect that a threshold memory utilization has been met, and may determine an aggregate memory allocation for a thread. The aggregate memory allocation may be a difference between a first memory allocation for the thread at a first time that the threshold memory utilization was met and a second memory allocation for the thread at a second time that the threshold memory utilization was met. The computer device may provide an indication that the thread has met or exceeded a threshold memory allocation when the aggregate memory allocation is greater than or equal to the threshold memory allocation. The computer device may disable the thread when the aggregate memory allocation is greater than or equal to the threshold memory allocation. Other embodiments may be described and/or claimed. | 2018-09-27 |
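The aggregate-allocation check in 20180276042 above compares a thread's allocation at two successive threshold-crossing events and flags the thread if the difference exceeds a per-thread limit. The limit value and names below are illustrative assumptions.

```python
# Sketch of per-thread aggregate allocation tracking: the aggregate is
# the difference between the thread's allocation at the first and second
# times overall utilization hit the threshold. THREAD_LIMIT is assumed.

THREAD_LIMIT = 1024  # per-thread allocation threshold, in KB (assumed)

def aggregate_allocation(first_kb, second_kb):
    """Allocation growth between the two threshold events."""
    return second_kb - first_kb

def should_disable(first_kb, second_kb):
    """Flag the thread once its aggregate allocation meets the limit."""
    return aggregate_allocation(first_kb, second_kb) >= THREAD_LIMIT
```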
20180276043 | ANTICIPATORY COLLECTION OF METRICS AND LOGS - A system includes a processor and machine readable instructions stored on a tangible machine readable medium, which when executed by the processor, configure the processor to collect data regarding resource use within a computing system, the data being collected periodically, without running a diagnostic program, and before occurrence of a diagnosis worthy event; and provide the collected data to the diagnostic program executed after the occurrence of the diagnosis worthy event so that the diagnostic program has data from before the occurrence of the diagnosis worthy event to enable determination of a cause of the diagnosis worthy event. | 2018-09-27 |
20180276044 | COORDINATED, TOPOLOGY-AWARE CPU-GPU-MEMORY SCHEDULING FOR CONTAINERIZED WORKLOADS - A workload scheduling method, system, and computer program product include analyzing a resource scheduling requirement for processes of a workload including the communication patterns among CPUs and accelerators, creating feasible resources based on static resource information of the resources for the processes of the workload, and selecting an available resource of the feasible resources to assign the workload based on the resource scheduling requirement, such that the CPU and GPU connection topology of the selection matches the communication patterns of the workload. | 2018-09-27 |
20180276045 | USER INTERFACE AND SYSTEM SUPPORTING USER DECISION MAKING AND READJUSTMENTS IN COMPUTER-EXECUTABLE JOB ALLOCATIONS IN THE CLOUD - A visual tool may be provided to display information associated with computer job allocation and to allow a user to explore different job configurations. Jobs executing on a computing environment comprising a shared pool of configurable computing resources may be monitored. Cost and duration estimates may be determined, along with the uncertainty associated with those estimates. A sandbox environment may be provided that allows users to manipulate one or more different job configuration options for executing the jobs in the computing environment. | 2018-09-27 |
20180276046 | HARDWARE THREAD SCHEDULING - An apparatus has processing circuitry to execute instructions from multiple threads and hardware registers to store context data for the multiple threads concurrently. At a given time a certain number of software-scheduled threads may be scheduled for execution by software executed by the processing circuitry. Hardware thread scheduling circuitry is provided to select one or more active threads to be executed from among the software-scheduled threads. The hardware thread scheduling circuitry adjusts the number of active threads in dependence on at least one performance metric indicating performance of the threads. | 2018-09-27 |
20180276047 | SYSTEMS AND METHODS FOR DYNAMIC LOW LATENCY OPTIMIZATION - Systems and methods which provide low latency optimization configured to perform from the hardware layer across the operating system to an application. Low latency operation implemented in accordance with embodiments is optimized for a specific application, which interfaces with specific hardware, executing on a host processor-based system configured for low latency optimization according to the concepts herein. For example, a low latency optimization implementation may comprise various modules implemented in both the user space and Kernel space, wherein the modules cooperate to obtain information regarding the services and hardware utilized by an application and to provide such information for facilitating low latency operation with respect to the application. In operation according to embodiments, low latency operation is dynamically enabled or disabled by a low latency optimization implementation, such as to facilitate low latency operation on an application by application basis as appropriate or as desired. | 2018-09-27 |
20180276048 | Task Management System for a Modular Electronic Device - Systems and methods are provided for managing task performance for a modular electronic device. In one implementation, a modular electronic device can include one or more electronic modular components. The modular electronic device can identify a computational task associated with the modular electronic device and identify one or more computing devices that are available to perform at least a portion of the computational task. The modular electronic device can obtain one or more sets of data associated with one or more computational resources of the computing devices. The modular electronic device can determine a potential benefit to the modular electronic device associated with the performance of the computational task by the computing devices. The modular electronic device can perform at least a portion of the computational task with the computing devices based, at least in part, on the sets of data associated with the computational resources and the potential benefit. | 2018-09-27 |
20180276049 | SYSTEMS AND METHODS FOR ESTIMATING COMPUTATION TIMES A-PRIORI IN FOG COMPUTING ROBOTICS - In order to make use of computational resources available at runtime through fog networked robotics paradigm, it is critical to estimate average performance capacities of deployment hardware that is generally heterogeneous. It is also not feasible to replicate runtime deployment framework, collected sensor data and realistic offloading conditions for robotic environments. In accordance with an embodiment of the present disclosure, computational algorithms are dynamically profiled on a development testbed, combined with benchmarking techniques to estimate compute times over the deployment hardware. Estimation in accordance with the present disclosure is based both on Gustafson's law as well as embedded processor benchmarks. Systems and methods of the present disclosure realistically capture parallel processing, cache capacities and differing processing times across hardware. | 2018-09-27 |
20180276050 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIUMS FOR WORKLOAD CLUSTERING - Methods, systems, and computer readable mediums for optimizing a system configuration are disclosed. In some examples, a method includes determining whether a system configuration for executing a workload using a distributed computer system is optimizable and in response to determining that the system configuration is optimizable, modifying the system configuration such that at least one storage resource for storing workload data is located at a server node that is executing the workload in the distributed computer system. | 2018-09-27 |
20180276051 | PROCESSOR AND TASK PROCESSING METHOD THEREFOR, AND STORAGE MEDIUM - A processor and a task processing method therefor, and a storage medium. The method comprises: a scalar calculation module executing parameter calculation of a current task, and storing a parameter obtained through calculation in a PBUF; when the parameter calculation of the current task is completed, executing a first instruction or second instruction for inter-core synchronization, and storing the first instruction or the second instruction in the PBUF. | 2018-09-27 |
20180276052 | DEADLOCK DETECTOR, SYSTEM INCLUDING THE SAME AND ASSOCIATED METHOD - A system includes a plurality of hardware blocks, a deadlock detector and an interconnect device. The hardware blocks include a processor executing instructions and a storage device storing data. The deadlock detector monitors operations of a target hardware block among the plurality of hardware blocks in realtime to store debugging information in the storage device. The interconnect device electrically connects the deadlock detector and the plurality of hardware blocks. The interconnect device includes a system bus electrically connecting the plurality of hardware blocks and a debugging bus electrically connecting the deadlock detector to the target hardware block and the storage device. | 2018-09-27 |
20180276053 | DYNAMICALLY INTEGRATING A CLIENT APPLICATION WITH THIRD-PARTY SERVICES - Disclosed are various approaches for dynamically integrating a client application with multiple third-party services. An integration service receives a request to perform an action relative to a particular third-party service from a client device. The request is received through a first application programming interface (API) generic to a plurality of third-party services. The integration service communicates with the particular third-party service to perform the action using a second API specific to the particular third-party service. The integration service sends a response to the client device through the first API. The response is based at least in part on a result of the action and includes an indication of one of a plurality of predefined user interfaces. | 2018-09-27 |
20180276054 | INFORMATION SHARING AMONG MOBILE APPARATUS - A method executed by a mobile apparatus for verifying event information to be shared is disclosed. The method includes communicating with a nearby mobile or immobile apparatus to generate a verification in response to encountering the nearby mobile or immobile apparatus. The method also includes verifying existence of an incident event in response to arriving at a place of the incident event. The method further includes publishing a verified incident event in order to add into an incident event distributed ledger used for managing event information related to the incident event. | 2018-09-27 |
20180276055 | Information Sharing Among Mobile Apparatus - A method executed by a mobile apparatus for verifying event information to be shared is disclosed. The method includes communicating with a nearby mobile or immobile apparatus to generate a verification in response to encountering the nearby mobile or immobile apparatus. The method also includes verifying existence of an incident event in response to arriving at a place of the incident event. The method further includes publishing a verified incident event in order to add into an incident event distributed ledger used for managing event information related to the incident event. | 2018-09-27 |
20180276056 | EVENT NOTIFICATION - A server computer determines that an application of a plurality of applications in a cloud subscribes to an event with respect to a plurality of attributes of event data. The server computer, responsive to determining that at least one of the plurality of attributes has changed, executes the application workflow and sends a notification to the application, the notification indicating that at least one of the plurality of attributes has changed. | 2018-09-27 |
20180276057 | ENHANCED COPY-AND-PASTE - An enhanced copy-and-paste function copies multiple logical and physical software objects from a source computing environment to a distinct target computing environment. A physical object can be any software-data entity, such as a document, a container, a database, or a disk image. A logical object contains a hierarchy of two or more physical or logical objects. Objects are copied to a logical copy clipboard, where they may be assembled into logical objects. Each physical object is then transferred one at a time to a conventional physical clipboard, transmitted to a corresponding physical clipboard in a corresponding target environment, and then forwarded to a logical paste clipboard, where the original logical objects are reconstructed and pasted into the target environment. Each logical object may be pasted into multiple target environments and may contain physical objects copied from multiple source environments. Multiple logical objects may contain the same physical object. | 2018-09-27 |
20180276058 | In-Product Notifications Targeting Specific Users Selected Via Data Analysis - Systems and methods for sending in-product notifications to individual users of a software product or a specifically identified subset of users of the software product selected via their previously observed interactions with the software product. | 2018-09-27 |
20180276059 | PROGRAMMING LANGUAGE-INDEPENDENT TRANSACTION CORRELATION - A method for inserting application performance management hooks in dynamically linked system libraries in a programming language independent way includes determining a dynamically linked library called by a plurality of applications to transmit and receive messages to and from other applications, and overriding a first method, in the dynamically linked system library, used in transmitting messages to include a transaction identifier in messages transmitted by any of the plurality of applications. The method also includes overriding a second method, in the dynamically linked system library, used in receiving messages to extract the transaction identifier from any messages received by any of the plurality of applications. The method further includes determining a first application that transmitted the first message based on the identifier, determining a second application that the first message is intended for, and logging information associated with transmission and receipt of the first message for the first and second applications. | 2018-09-27 |
20180276060 | SYSTEM AND METHOD FOR INTEROPERABLE CLOUD DSL TO ORCHESTRATE MULTIPLE CLOUD PLATFORMS AND SERVICES - In one aspect, a computerized method utilizing an interoperable cloud domain-specific language (DSL) to orchestrate multiple cloud platforms and services includes the step of parsing one or more DSL inputs. The computerized method includes the step of validating a content and a syntax of the one or more DSL inputs based on a DSL type and a type of associated cloud platform. The computerized method includes the step of chaining the one or more DSLs to orchestrate resources in various cloud platforms by using cloud-platform native DSLs and by orchestrating operational tools using third-party custom DSLs. The computerized method includes the step of parsing an input data structure from the one or more DSLs together. The computerized method includes the step of appending the input data structure from the one or more DSLs together to capture input values together for execution. | 2018-09-27 |
20180276061 | DEVICE LIFESPAN ESTIMATION METHOD, DEVICE DESIGN METHOD, AND COMPUTER READABLE STORAGE MEDIUM - A device lifespan prediction method includes executing software loaded on a target device, using a user scenario case selected from a user scenario pool including one or more user scenario cases, collecting usage information for respective constituent block units of the target device based on execution of the software, and predicting a lifespan of the target device by analyzing the collected usage information. | 2018-09-27 |
20180276062 | MEMORY STORE ERROR CHECK - Techniques for memory store error checks are provided. In one aspect, a process running on a processor may execute an instruction to store a first value in memory. The processor may store a plurality of values, including the first value, from a plurality of processes to the memory. A check on a synchronous error notification path may be performed to determine whether an error in storing at least one of the plurality of values occurred. | 2018-09-27 |
20180276063 | SITUATION ANALYSIS - A method for performing root cause analysis of failures in a computer network is provided. The method includes receiving an Adaptive Service Intelligence (ASI) data set related to one or more failures reported in the computer network from a plurality of interfaces. One or more impact events associated with the reported failures are identified based on the received ASI data set. Each of the identified impact events is correlated with one or more cause events. A situation record is selectively generated based on the correlation results. | 2018-09-27 |
20180276064 | DIAGNOSIS DEVICE, DIAGNOSIS METHOD, AND NON-VOLATILE RECORDING MEDIUM - The diagnosis device specifies a progression degree relating to a first information processing device for output information output by a first detection device at a first timing with respect to the first information processing device, based on device information that indicates a progression degree representing a degree to which the information processing device is abnormal; determines whether or not information in which a first detection device identifier of the first detection device and the specified progression degree are associated with each other is included in progression-degree information in which a detection device identifier capable of identifying a detection device and the progression degree are associated with each other; and calculates the progression degree relating to the first information processing device according to the specified progression degree when the information is determined to be included in the progression-degree information. | 2018-09-27 |
20180276065 | DATA LIFECYCLE MANAGEMENT - A method and technique for data lifecycle management includes identifying a fault from a monitored system. A time period window associated with the fault is defined based on when the fault occurred. One or more metrics that are related to the fault and that fall within the time period window are identified and stored in a memory. A lifespan condition associated with the fault is identified, and varying levels of lifespans are assigned to the one or more metrics based on a level of relationship between the respective one or more metrics. The one or more metrics are removed from the memory if their associated lifespans are over. | 2018-09-27 |
20180276066 | DISK DEVICE AND NOTIFICATION METHOD OF THE DISK DEVICE - A disk device includes: a sensor that, when a driver that records and reads data in the disk device is in a non-operation state, detects an influence on the disk device; a determination circuit that determines, based on a detection result of the sensor, whether the detection result satisfies a condition leading to a failure of the disk device; and a transmitter that transmits information relating to a fact that the detection result satisfies the condition to another device. | 2018-09-27 |
20180276067 | DATA PROTECTING METHOD AND MEMORY STORAGE DEVICE - A data protecting method and a memory storage device are provided. The data protecting method includes reading a first string from the rewritable non-volatile memory module to obtain a data string; performing a decoding operation based on the data string to obtain block information corresponding to a plurality of physical erasing units; inputting the block information to an error checking and correcting (ECC) circuit of the memory storage device to generate a second string; and storing the second string into the rewritable non-volatile memory module. | 2018-09-27 |
20180276068 | ERROR CORRECTION CODE IN MEMORY - In one example in accordance with the present disclosure, a system comprises a plurality of memory dies, a first region of memory allocated for primary ECC spread across a first subset of at least one memory die belonging to the plurality of memory dies, wherein a portion of the primary ECC is allocated to each data block, and a second region of memory allocated for secondary ECC spread across a second subset of at least one memory die included in the plurality of memory dies. The system also comprises a memory controller configured to determine that an error within a first data block cannot be corrected using a first portion of the primary ECC allocated to the first data block, access the second region allocated for secondary ECC stored on the at least one memory die belonging to the plurality of memory dies, and attempt to correct the error using the primary and secondary ECC. | 2018-09-27 |
20180276069 | MEMORY CONTROLLER, MEMORY SYSTEM, AND CONTROL METHOD - A memory controller includes an encoder configured to generate parity from input data, a randomizer circuit configured to generate first and second data portions using a first random number and input data and parity, a program interface configured to write the first and second data portions to a nonvolatile memory, a reading interface configured to read data from the nonvolatile memory, a conversion circuit configured to convert read data into an LLR sequence, each LLR being generated based on a value of one bit of the read data and a value of a corresponding bit of a second random number, and a decoder configured to decode the LLR sequence to generate output data. User data stored in the nonvolatile memory as part of a codeword is restored from the codeword by reading the codeword from the nonvolatile memory and setting the second random number to be equal to the first random number. | 2018-09-27 |
20180276070 | SEMICONDUCTOR STORAGE DEVICE - According to an embodiment, a semiconductor storage device includes a detection circuit configured to detect an error in data read from a first memory cell array. The read data of a size corresponding to a page unit is subjected to detection of an error for each of a plurality of first units into which the page unit is divided. When performing a first operation of concurrently executing outputting of first data read from the first memory cell array to an outside and reading of second data different from the first data from the first memory cell array, an interface circuit is configured to output information based on the error detected with respect to the first data to the outside. | 2018-09-27 |
20180276071 | MEMORY SYSTEM AND RESISTANCE CHANGE TYPE MEMORY - According to one embodiment, a memory system includes a resistance change type memory including a memory cell configured to hold first data and an ECC circuit configured to detect and to correct an error in the first data; and a controller configured to control an operation of the resistance change type memory. In a read operation for the memory, when the first data from the memory cell includes an error, the memory transmits second data in which the error is corrected and a first signal to the controller. The controller transmits a control signal and a write command to the memory based on the first signal. The memory writes the second data to the memory cell based on the control signal and the write command. | 2018-09-27 |
20180276072 | MEMORY CONTROLLER AND DATA READING METHOD - According to one embodiment, a memory controller includes one or more processors configured to function as a writing unit and a reading unit. The writing unit writes data as threshold voltages of individual memory cells. The reading unit reads the written data by detecting threshold voltages of the individual memory cells. The reading unit includes a selecting unit, a detecting unit, and an estimating unit. The selecting unit selects a read-target memory cell. The detecting unit detects a first threshold voltage at a time of reading of the read-target memory cell, and a second threshold voltage at a time of reading of at least one of adjacent memory cells that are adjacent to the read-target memory cell. The estimating unit estimates a third threshold voltage as a threshold voltage at a time of writing in the read-target memory cell based on the first threshold voltage and the second threshold voltage. | 2018-09-27 |
20180276073 | MEMORY SYSTEM - A memory system includes a nonvolatile memory, a memory controller included in a first package, and a memory interface circuit included in a second package that is different from the first package. The memory controller includes an encoder for performing encoding for error correction. The memory controller is configured to encode first data into second data using the encoder, and program the second data into a location in the nonvolatile memory. The memory interface circuit is interposed between the memory and the memory controller. The memory interface circuit includes a decoder for performing decoding for error correction. The memory interface circuit is configured to read third data from a first location in the nonvolatile memory, diagnose the third data by decoding the third data using the decoder, and convey a result of the diagnosis to the memory controller. | 2018-09-27 |
20180276074 | METHOD FOR TENANT ISOLATION IN A DISTRIBUTED COMPUTING SYSTEM - A method begins by processing modules of a dispersed storage network (DSN) allocating a plurality of DSN address ranges to DSN memories of the DSN and assigning DSN address ranges of the plurality of DSN address ranges to first and second tenants of the DSN. The method continues by the processing modules receiving a write request for a data object segmented into first data segments from a first tenant of the DSN, and encoding the first data segments in accordance with first error encoding parameters. The method continues by the processing modules receiving a write request for a second data object segmented into second data segments from a second tenant of the DSN, and encoding the second data segments in accordance with second error encoding parameters. The method then continues with the processing modules transmitting the first encoded data segments and the second encoded data segments to the DSN memories. | 2018-09-27 |
20180276075 | APPARATUS AND METHOD FOR MANAGING DATA STORAGE - Provided are an apparatus and method for managing data storage. A first log structured array stores data in a storage device. A second log structured array in the storage device stores metadata for the data in the first log structured array, wherein the second log structured array storing the metadata for the first log structured array is nested within the first log structured array, and wherein the first and second log structured arrays comprise separate instances of log structured arrays. Address space is allocated in the second log structured array for metadata when the allocation of address space is required for metadata for data stored in the first log structured array. | 2018-09-27 |
20180276076 | MONITORING CIRCUIT - Provided is a monitoring circuit equipped with a first abnormality detection circuit which detects a first abnormal state of a semiconductor device under surveillance, a second abnormality detection circuit which detects a second abnormal state of the semiconductor device under surveillance, a reset circuit which outputs a reset signal based on a logical sum of a first abnormality detection signal output from the first abnormality detection circuit and a second abnormality detection signal output from the second abnormality detection circuit to a first output terminal, and an output holding circuit which stores which of the first abnormality detection signal and the second abnormality detection signal is supplied, and outputs an abnormality discrimination signal corresponding thereto to a second output terminal. | 2018-09-27 |
20180276077 | MEMORY RESIDENT STORAGE RECOVERY DURING COMPUTER SYSTEM FAILURE - An approach for virtual machine (VM) random access memory (RAM) disk preservation during VM failure. A RAM disk manager receives a VM identifier and attributes for connecting a RAM disk to the VM, where the RAM disk includes a memory region separate from memory region(s) associated with the VM. The RAM disk manager creates a RAM disk VM driver for interfacing the RAM disk between a disk driver and virtual drive adapter. The RAM disk manager detects an output action based on the disk driver operation and responds to detecting an output action by storing output data to the RAM disk and marking synchronization status as pending. The RAM disk manager synchronizes the output data, asynchronously with non-volatile storage and detects a failed VM, responding by disconnecting the RAM disk and can re-assign the RAM disk to a next VM. | 2018-09-27 |
20180276078 | REVERSAL OF THE DIRECTION OF REPLICATION IN A REMOTE COPY ENVIRONMENT BY TRACKING CHANGES ASSOCIATED WITH A PLURALITY OF POINT IN TIME COPIES - A secondary volume of a remote computational device stores an asynchronous copy of a primary volume of a local computational device. The remote computational device generates a target volume that stores consistent data from the secondary volume, and also generates a plurality of point in time copies at a plurality of instants of time from the target volume. A restoration is made of data in the primary volume to at least one of the plurality of instants of time by using one or more data structures that provide identification of all tracks from the target volume that are to be written to the primary volume for restoring the data in the primary volume. | 2018-09-27 |
20180276079 | SYSTEM AND METHOD FOR DETERMINING THE SUCCESS OF A CROSS-PLATFORM APPLICATION MIGRATION - In accordance with an embodiment, described herein is a system and method for determining the migration success of an application (e.g., a batch application) from a second computing platform (e.g., a mainframe platform) to a first computing platform (e.g., an open platform). A first database associated with the first computing platform and a second database associated with the second computing platform can include the same data baseline. A set of triggers can be created on each database to capture database modification events generated by the execution of a job associated with the application on each computing platform, and to store the database modification events in a table in each database. The database modification events from each computing platform can be downloaded and compared to determine the success of the application migration. | 2018-09-27 |
20180276080 | MEMORY RESIDENT STORAGE RECOVERY DURING COMPUTER SYSTEM FAILURE - An approach for virtual machine (VM) random access memory (RAM) disk preservation during VM failure. A RAM disk manager receives a VM identifier and attributes for connecting a RAM disk to the VM, where the RAM disk includes a memory region separate from memory region(s) associated with the VM. The RAM disk manager creates a RAM disk VM driver for interfacing the RAM disk between a disk driver and virtual drive adapter. The RAM disk manager detects an output action based on the disk driver operation and responds to detecting an output action by storing output data to the RAM disk and marking synchronization status as pending. The RAM disk manager synchronizes the output data, asynchronously with non-volatile storage and detects a failed VM, responding by disconnecting the RAM disk and can re-assign the RAM disk to a next VM. | 2018-09-27 |
20180276081 | Method and Apparatus for Generating Virtual Machine Snapshot - A method and an apparatus for generating a virtual machine snapshot, where the method includes suspending a virtual machine at a first moment according to a received snapshot command, starting to perform a storage operation on a memory page in memory of the virtual machine and a contamination interception operation on the memory page in the memory, storing a device status, which is at the first moment, of the virtual machine to a snapshot file, and restoring the virtual machine from a suspended state to a running state after the device status is stored. | 2018-09-27 |
20180276082 | SATISFYING RECOVERY SERVICE LEVEL AGREEMENTS (SLAs) - Examples provided herein describe a system and method for satisfying recovery service level agreements (SLAs). For example, a first entity may determine that a first recovery operation is to be performed at a first storage device. The first entity may then determine that the first storage device is available. Responsive to determining that the first storage device is available, the first entity may establish a data connection with a first storage device and may perform a first recovery operation at the first storage device. The first entity may receive a second storage device availability message from a second entity that requests a second recovery operation at the first storage device and may facilitate communication with the second entity. The first entity may then perform the second recovery operation at the first storage device and communicate the recovered data to the second entity. | 2018-09-27 |
20180276083 | BUFFERED VIRTUAL MACHINE REPLICATION - Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing modified data blocks and without replacing the entire virtual machine disk. | 2018-09-27 |
20180276084 | VIRTUAL MACHINE RECOVERY POINT SELECTION - Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing modified data blocks and without replacing the entire virtual machine disk. | 2018-09-27 |
20180276085 | VIRTUAL MACHINE RECOVERY POINT GENERATION - Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing modified data blocks and without replacing the entire virtual machine disk. | 2018-09-27 |
20180276086 | INSTANT DATA CENTER RECOVERY - Facility for providing backup and restore of all data center components including physical machines, virtual machines, routers, networks, sub-networks, switches, firewall, directory lookup, DNS, DHCP and internet access. Virtual or physical machines are associated to data center components and a software defined network, storage, and compute infrastructure is provided. | 2018-09-27 |
20180276087 | REBUILD ROLLBACK SUPPORT IN DISTRIBUTED SDS SYSTEMS - Methods, computing systems and computer program products implement embodiments of the present invention that include detecting a loss of communication with a given storage node among multiple storage nodes in a distributed computing system. Upon detecting the loss of communication, a log including updates to the data stored in the given storage node is recorded, and the recorded updates can be applied to the given storage node upon communication with the given storage node being reestablished. In some embodiments, the distributed storage system may be configured as a software defined storage system where the storage nodes can be implemented as either virtual machines or software containers. In additional embodiments, upon detecting the loss of communication, a redistribution of the mirrored data among remaining storage nodes is initiated, and the redistribution is rolled back upon reestablishing the communication. | 2018-09-27 |
20180276088 | CONTROLLING DEVICE, CONTROLLING METHOD, AND FAULT TOLERANT APPARATUS - A controlling device | 2018-09-27 |
20180276089 | AFTER SWAPPING FROM A FIRST STORAGE TO A SECOND STORAGE, MIRRORING DATA FROM THE SECOND STORAGE TO THE FIRST STORAGE FOR DATA IN THE FIRST STORAGE THAT EXPERIENCED DATA ERRORS - Provided are a computer program product, system, and method for, after swapping from a first storage to a second storage, mirroring data from the second storage to the first storage for data in the first storage that experienced data errors. A swap operation redirects host Input/Output (I/O) requests to data from the first server to the second server in response to a health condition at the first server. A determination is made of data in the first storage that experienced data errors. The second server is instructed to mirror data in the second storage to the first server, including data corresponding to the data in the first storage that experienced the data errors, to store in the first storage in response to determining that the first server is available for data mirroring operations. | 2018-09-27 |
20180276090 | CROSS-PLATFORM REPLICATION - One or more techniques and/or computing devices are provided for cross-platform replication. For example, a replication relationship may be established between a first storage endpoint and a second storage endpoint, where at least one of the storage endpoints, such as the first storage endpoint, lacks or has incompatible functionality to perform and manage replication because the storage endpoints have different storage platforms that store data differently, use different control operations and interfaces, etc. Accordingly, replication destination workflow, replication source workflow, and/or a proxy representing the first storage endpoint may be implemented at the second storage endpoint comprising the replication functionality. In this way, replication, such as snapshot replication, may be implemented between the storage endpoints by the second storage endpoint using the replication destination workflow, the replication source workflow, and/or the proxy that either locally executes tasks or routes tasks to the first storage endpoint such as for data access. | 2018-09-27 |
20180276091 | APPLICATION SERVICE-LEVEL CONFIGURATION OF DATALOSS FAILOVER - Application service configuration of a timeframe for performing dataloss failover (failover that does not attempt full data replication to the secondary data store) from a primary data store to the secondary data store. A data-tier service, such as perhaps a database as a service (DBaaS), could receive that configuration from the application service and automatically perform any dataloss failover as configured by the application service. This relieves the application service from having to manage the failover workflow while still allowing it to appropriately balance the timing of dataloss failover, which depends on an application-specific balance between the negative effects of operational latency and dataloss. | 2018-09-27 |
20180276092 | RECOVERING USING WRITE DATA STORED BY A POWERLOSS DATA PROTECTION TECHNIQUE - In some examples, a storage system dynamically selects, in response to a determined type of power backup arrangement used in the storage system, from among different powerloss data protection techniques corresponding to different types of power backup arrangements. Each of the different powerloss data protection techniques is responsive to loss of power by saving write data according to a common format. In response to loss of power in the storage system, the selected powerloss data protection technique is used to store write data corresponding to write operations in the storage system according to the common format to a recovery storage medium. The storage system recovers from the loss of power in the storage system by using the write data according to the common format stored using the selected powerloss data protection technique. | 2018-09-27 |
20180276093 | MEMORY SYSTEM - A memory system is disclosed, comprising a primary memory module, a secondary memory module, and a controller. The controller is configured to identify addresses in the primary memory module requiring correction, and is further configured to receive a memory access request identifying an address in the primary memory module. The controller is configured to determine whether the address is identified as requiring correction and, if it is not, to direct the memory access request to the primary memory module. If the address is identified as requiring correction, the controller is configured to direct the memory access request to the secondary memory module. | 2018-09-27 |
20180276094 | MAINTAINING IO BLOCK OPERATION IN ELECTRONIC SYSTEMS FOR BOARD TESTING - Embodiments are generally directed to maintaining IO block operation in electronic systems for board testing. An embodiment of a system includes a processor; a power management block for the system; a plurality of IO (input/output) blocks; and a read only memory for storage of firmware for the processor. The system is to provide support for a board-level test of the system, including testing of one or more IO blocks of the plurality of IO blocks, and the firmware includes elements to stall a reset sequence of the system, including the system branching to a mode that maintains power to the one or more IO blocks. | 2018-09-27 |
20180276095 | MONITORING AN INTEGRITY OF A TEST DATASET - Provided are a method, a crypto-arrangement, and a computer program product for monitoring an integrity of a test dataset, wherein a random sample of a test dataset is checked for integrity. The method for monitoring an integrity of a test dataset includes the following steps: random sample-type selection of the test dataset from a dataset to be transferred via a communications connection; cryptographically protected provision of the selected test dataset to a test unit, wherein a communication via the communications connection is carried out uninfluenced by the selection and provision; and testing of the cryptographically protected test dataset for integrity by the test unit, based on cryptographic calculations and plausibility information. | 2018-09-27 |
20180276096 | ON DEMAND MONITORING MECHANISM TO IDENTIFY ROOT CAUSE OF OPERATION PROBLEMS - A monitoring mechanism is used to detect, via client side monitoring, malfunctions of services within a cloud environment. Additional monitors are activated against the problem-related services in the system. Recursively, the monitored problem-related services act as the client to other services inside the cloud environment and can be used to detect more services which need to be monitored until all the problem-related services are monitored. After the problem is fixed, the monitoring can be disabled automatically or manually. | 2018-09-27 |
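The recursive activation of monitors described above can be sketched as a walk over a service dependency graph. The dependency map and function names are illustrative assumptions, not details from the application.

```python
# Sketch: recursively enable monitors on a problem-related service and on
# every downstream service it acts as a client to, until the closure of
# problem-related services is covered.
def activate_monitors(service, deps, monitored=None):
    """deps: {service: [downstream services it calls]}."""
    if monitored is None:
        monitored = set()
    if service in monitored:
        return monitored
    monitored.add(service)
    for downstream in deps.get(service, []):
        # Each monitored service acts as the client to its own dependencies.
        activate_monitors(downstream, deps, monitored)
    return monitored

deps = {"frontend": ["auth", "catalog"], "catalog": ["db"], "auth": []}
```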
20180276097 | PROCESSOR PERFORMANCE MONITOR - One example aspect of the present disclosure is directed to a method for monitoring performance of a plurality of processors, wherein the plurality of processors are arranged in a daisy-chained ring configuration. The method includes receiving, by a first processor from the plurality of processors, a first signal from a second processor of the plurality of processors. The method includes determining, by the first processor, a status of the second processor based at least in part on whether the first received signal was received at a first expected interval. The method includes transmitting, by the first processor, a second signal to a third processor of the plurality of processors, wherein the third processor determines a status of the first processor based at least in part on whether the second signal was received at the third processor at a second expected interval. | 2018-09-27 |
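The interval-based status check at the core of this abstract can be sketched in a few lines. The interval and tolerance values, and the function name, are hypothetical; the application does not specify them.

```python
# Sketch: each processor in the ring judges its upstream neighbor's status
# by whether the neighbor's signal arrived within an expected interval.
EXPECTED_INTERVAL = 1.0   # assumed seconds between signals
TOLERANCE = 0.25          # assumed allowed jitter

def neighbor_status(last_arrival, now, expected=EXPECTED_INTERVAL, tol=TOLERANCE):
    """Return 'ok' if the signal arrived roughly on schedule, else 'suspect'."""
    elapsed = now - last_arrival
    return "ok" if elapsed <= expected + tol else "suspect"
```

In the daisy-chained configuration, each processor would apply a check like this to the signal from its predecessor while emitting its own signal to its successor.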
20180276098 | COMPUTING RESIDUAL RESOURCE CONSUMPTION FOR TOP-K DATA REPORTS - System and method for providing the capability to resample computer system metrics, while providing improved accuracy over conventional techniques. One method may comprise monitoring and measuring metrics of system resource consumption of a plurality of entities to generate resource consumption data, generating a report of the resource consumption data for the plurality of entities for each of a plurality of time periods, identifying a number, k, of the plurality of entities as top-k consumers of resources for each of the plurality of time periods, identifying at least one residual entity of the plurality of entities whose resource consumption is not included in the top-k entities based on residual resource consumption data of the entity, and resampling the reports of the resource consumption data corresponding to the top-k entities and to the at least one residual entity to form at least one report covering a time period. | 2018-09-27 |
20180276099 | COMPUTING RESIDUAL RESOURCE CONSUMPTION FOR TOP-K DATA REPORTS - Methods for providing the capability to resample computer system metrics, while providing improved accuracy over conventional techniques. One method may comprise monitoring and measuring metrics of system resource consumption of a plurality of entities to generate resource consumption data, generating a report of the resource consumption data for the plurality of entities for each of a plurality of time periods, identifying a number, k, of the plurality of entities as top-k consumers of resources for each of the plurality of time periods, identifying at least one residual entity of the plurality of entities whose resource consumption is not included in the top-k entities based on residual resource consumption data of the entity, and resampling the reports of the resource consumption data corresponding to the top-k entities and to the at least one residual entity to form at least one report covering a time period. | 2018-09-27 |
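The top-k-plus-residual idea shared by these two applications can be shown with a minimal sketch: rank entities per period, keep the top k, and fold the remainder into a single residual figure. Names and data are illustrative.

```python
# Sketch: identify the top-k consumers for one period and compute the
# residual consumption of all remaining entities.
def top_k_with_residual(consumption, k):
    """consumption: {entity: amount} for a single time period."""
    ranked = sorted(consumption.items(), key=lambda kv: kv[1], reverse=True)
    top = dict(ranked[:k])
    residual = sum(amount for _, amount in ranked[k:])
    return top, residual

period = {"a": 50, "b": 30, "c": 10, "d": 5}
top, residual = top_k_with_residual(period, 2)
```

Carrying the residual alongside the top-k entries is what lets later resampling of the reports stay accurate: the off-report consumption is preserved rather than dropped.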
20180276100 | OPTIMIZATION OF POWER AND COMPUTATIONAL DENSITY OF A DATA CENTER - Techniques for optimizing power and computational density of data centers are described. According to various embodiments, a benchmark test is performed by a computer data center system. Thereafter, transaction information and power consumption information associated with the performance of the benchmark test are accessed. A service efficiency metric value is then generated based on the transaction information and the power consumption information, the service efficiency metric value indicating a number of transactions executed via the computer data center system during a specific time period per unit of power consumed in executing the transactions during the specific time period. The generated service efficiency metric value is then compared to a target threshold value. Thereafter, a performance summary report indicating the generated service efficiency metric value, and indicating a result of the comparison of the generated service efficiency metric value to the target value, is generated. | 2018-09-27 |
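The service efficiency metric defined in this abstract reduces to transactions per unit of power over the same period, compared against a target. A minimal sketch, with all figures and names assumed for illustration:

```python
# Sketch: service efficiency = transactions executed during a period
# per unit of power consumed executing them in that period.
def service_efficiency(transactions, power_consumed_kwh):
    return transactions / power_consumed_kwh

def meets_target(metric, target):
    # Comparison step: does the measured efficiency reach the threshold?
    return metric >= target

metric = service_efficiency(transactions=120000, power_consumed_kwh=40.0)
```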
20180276101 | SYSTEM AND METHOD FOR ANALYZING BIG DATA ACTIVITIES - A system and method for analyzing big data activities are disclosed. According to one embodiment, a system comprises a distributed file system for the entities and applications, wherein the applications include one or more of script applications, structured query language (SQL) applications, Not Only (NO) SQL applications, stream applications, search applications, and in-memory applications. The system further comprises a data processing platform that gathers, analyzes, and stores data relating to entities and applications. The data processing platform includes an application manager having one or more of a MapReduce manager, a script applications manager, a structured query language (SQL) applications manager, a Not Only (NO) SQL applications manager, a stream applications manager, a search applications manager, and an in-memory applications manager. The application manager identifies if the applications are one or more of slow-running, failed, killed, unpredictable, and malfunctioning. | 2018-09-27 |
20180276102 | MULTI-THREAD SEQUENCING - Systems, methods and tools for identifying potential errors or inconsistencies occurring during the runtime of multi-threaded applications and reporting the errors to a user, administrator or developer for correction and adjustments to the program code or thread timings. Embodiments of the disclosure capture thread sequences during a runtime or simulation environment and store the thread sequences as a matrix or tabular representation in a file. Multi-threaded application runs having an error free thread sequence, may be used as benchmarks for identifying potential errors and mis-runs of variations to the multi-threaded application as changes occur to the application code or new threads are added to the application code. This comparison may be performed by comparing the captured thread sequences of both the passing run and the mis-run of the multi-threaded application for differences in the thread sequences that may have caused the mis-run to occur. | 2018-09-27 |
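The benchmark-versus-mis-run comparison described here can be sketched as a positional diff of two captured thread sequences. The event labels and function name are hypothetical.

```python
# Sketch: compare a captured thread sequence from a mis-run against a
# known-good benchmark sequence and report where they diverge.
def diff_sequences(benchmark, candidate):
    """Return a list of (index, expected, actual) where the sequences differ."""
    diffs = []
    for i, (expected, actual) in enumerate(zip(benchmark, candidate)):
        if expected != actual:
            diffs.append((i, expected, actual))
    if len(benchmark) != len(candidate):
        # Length mismatch: mark the first index past the shorter sequence.
        diffs.append((min(len(benchmark), len(candidate)), None, None))
    return diffs

good = ["t1:lock", "t2:read", "t1:write", "t1:unlock"]
bad  = ["t1:lock", "t1:write", "t2:read", "t1:unlock"]
```

The divergence points are candidates for the timing difference that caused the mis-run.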
20180276103 | ENHANCING SOFTWARE DEVELOPMENT USING BUG DATA - For each detected bug, historical code with similar characteristics and bug corrections from a historical bug dataset can be displayed in a source code editor. Relevant training and/or testing data can be found by comparing an internal representation of the code under development with an internal representation of the original buggy code in the historical bug dataset. Training and/or testing data that is relevant to the current code can be distinguished from irrelevant training and/or testing data by determining that the code syntax tokens from the current and historical data overlap to at least a specified percentage. Code can be devolved into a set of metrics. The degree of overlap between the metric sets can be determined. If a computed risk factor for the bug correction meets or exceeds a specified threshold, the bug correction can be automatically applied. Additional testing can be automatically performed on and/or added to the corrected code. | 2018-09-27 |
20180276104 | Targeted User Notification of Bug Fixes - Systems and methods for sending in-product notifications to individual users of a software product or a specifically identified subset of users of the software product selected via their previously observed interactions with the software product. In addition, targeted notifications of bug fixes can be sent to specific users who have encountered an error condition or performance issue that a particular bug fix is designed to correct. | 2018-09-27 |
20180276105 | ACTIVE LEARNING SOURCE CODE REVIEW FRAMEWORK - Technologies are described to provide an active learning source code review framework. In some examples, a method to review source code under this framework may include extracting semantic code features from a source code under review. The method may also include training an error classifier based on the extracted semantic code features, and selecting a candidate code section of the source code under review for discrete review. The method may further include facilitating discrete review of the selected candidate code section, updating the error classifier based on a result of the discrete review of the selected candidate code section, and generating an automated review of the source code under review based on the updating of the error classifier. | 2018-09-27 |
20180276106 | TRACE DATA REPRESENTATION - Trace circuitry | 2018-09-27 |
20180276107 | METHOD FOR MESSAGE-PROCESSING - A method for message-processing adapted to a server and a device under test is disclosed. The device provides a plurality of messages, and the server receives the plurality of messages through a first interface. In the method, a plurality of first predetermined triggering conditions is received, each having a first character string. The plurality of messages is then read sequentially, and it is determined whether at least one of the plurality of first predetermined triggering conditions is enabled. The plurality of messages is checked sequentially for the first character string of the enabled one of the plurality of first predetermined triggering conditions. A first filtering group is then formed having contents comprising the part of the plurality of messages that contains the first character string of the enabled one of the plurality of first triggering conditions, and the contents of the first filtering group are displayed. | 2018-09-27 |
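The string-triggered filtering group described here can be sketched directly. The message contents, condition pairs, and function name are hypothetical stand-ins for the claimed triggering conditions.

```python
# Sketch: form a filtering group from the messages that contain the
# character string of an enabled triggering condition.
def build_filter_group(messages, conditions):
    """conditions: list of (character_string, enabled) pairs."""
    enabled_strings = [s for s, enabled in conditions if enabled]
    # Keep only messages containing at least one enabled condition's string.
    return [m for m in messages if any(s in m for s in enabled_strings)]

msgs = ["boot ok", "ERROR: timeout", "link up", "ERROR: checksum"]
conds = [("ERROR", True), ("WARN", False)]
group = build_filter_group(msgs, conds)
```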
20180276108 | PATTERN-BASED AUTOMATED TEST DATA GENERATION - Systems and methods described herein are directed towards a test data generator. In some examples, a reference polygon may be received from an application. Additionally, a control parameter may be received from the application. Two points on a map may be selected and a path between the two points may be generated. Additional points may be created along the path and test data may be generated by processing the additional points. The test data may be provided to the application. | 2018-09-27 |
20180276109 | DISTRIBUTED SYSTEM TEST DEVICE - Aspects capture test coverage in a distributed system, wherein a processor instigates execution of a unique hypertext transfer request protocol test case within a distributed system of different, networked servers. The header of the unique test case includes a unique name for the unique test case, and the distributed system servers are each configured to, in response to processing a test case, generate a time-stamped log entry that includes header data for the processed test case and a uniform resource locator address of the processing server. The processor thus maps the unique test case to a subset of the distributed system servers as endpoint servers of the unique test case, in response to determining that the uniform resource locator addresses of each of the subset endpoint servers are listed within generated log entries of the endpoint servers in association with the unique test case name. | 2018-09-27 |
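The mapping step in this abstract, associating a uniquely named test case with the servers whose logs record it, can be sketched as a scan over collected log entries. The log schema and URLs are illustrative assumptions.

```python
# Sketch: map a uniquely named test case to its endpoint servers by
# collecting the URL of every server whose log entries mention that name.
def endpoints_for(test_name, log_entries):
    """log_entries: list of dicts with 'server_url' and 'test_case' keys."""
    return sorted({e["server_url"] for e in log_entries
                   if e["test_case"] == test_name})

log_entries = [
    {"server_url": "http://a.example", "test_case": "TC-001"},
    {"server_url": "http://b.example", "test_case": "TC-002"},
    {"server_url": "http://c.example", "test_case": "TC-001"},
]
```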
20180276110 | SYSTEM AND METHOD FOR ASSISTING A USER IN AN APPLICATION DEVELOPMENT LIFECYCLE - The present disclosure relates to system(s) and method(s) for assisting a user in the application development lifecycle. The system is configured to receive a new use case from a user device and identify a sub-set of development solutions from a set of development solutions, stored in a historical data repository, that are applicable for developing code corresponding to the new use case. Furthermore, the system is configured to receive a set of test cases corresponding to each development solution from the historical data repository. Furthermore, the system is configured to generate a problem report and a false failure report based on analysis of the set of test cases. The system is further configured to rank the sub-set of development solutions based on analysis of the problem report and the false failure report. Further, the system is configured to generate a decision template based on the ranking of the sub-set of development solutions. | 2018-09-27 |
20180276111 | MOCK SERVICES FOR SOFTWARE INFRASTRUCTURES - A load test environment computing system may include an electronic data store configured to store a configuration tool to generate a software infrastructure and an error analysis utility, and one or more hardware processors configured to execute specific computer-executable instructions to cause the configuration tool to generate a configurable mock service. The configurable mock service may include an executable file and a service component of the mock service. The executable file may identify a hardware property of the service component, an operative functionality of the service component, and an electronic communication between the configurable mock service and at least one of: a client device, a database, or an external service. The one or more hardware processors may further be configured to execute specific computer-executable instructions to cause the configuration tool to simulate the software infrastructure. | 2018-09-27 |
20180276112 | BALANCING MEMORY PRESSURE ACROSS SYSTEMS - A memory balancing method, system, and computer program product include determining page fault rate metrics for guest operating systems, with page fault rates determined using a paravirtual memory manager component for each guest. Embodiments can use these metrics to determine total guest page allocations among a set of virtual machines, virtual machine placement, and/or candidates for host-to-host migration of virtual machines. | 2018-09-27 |
20180276113 | Storage System and Method for Predictive Block Allocation for Efficient Garbage Collection - A storage system and method for predictive block allocation for efficient garbage collection are provided. One method involves determining whether a memory in a storage system is being used in a first usage scenario or a second usage scenario; in response to determining that the memory is being used in the first usage scenario, using a first block allocation method; and in response to determining that the memory is being used in the second usage scenario, using a second block allocation method, wherein the first block allocation method allocates blocks that are closer to needing garbage collection than the second block allocation method. | 2018-09-27 |
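The scenario-dependent choice between two allocation methods can be sketched as follows. The scenario labels, block lists, and selection policy are illustrative assumptions; the application does not disclose how scenarios are detected.

```python
# Sketch: choose a block allocation method based on the detected usage
# scenario. The first method deliberately allocates blocks that are
# closer to needing garbage collection; the second prefers fresh blocks.
def allocate_block(scenario, nearly_full_blocks, fresh_blocks):
    if scenario == "first":
        # First method: fill blocks nearing garbage collection so they
        # can be reclaimed together soon.
        return nearly_full_blocks[0] if nearly_full_blocks else fresh_blocks[0]
    # Second method: spread new writes across fresh blocks.
    return fresh_blocks[0] if fresh_blocks else nearly_full_blocks[0]
```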
20180276114 | MEMORY CONTROLLER - A memory controller controls first and second memory, and includes a control unit. In response to a first write command from a host, which designates a logical address for first data to be written to the first memory, the control unit determines whether mapping of the logical address is presently being managed in a first mode with a first cluster size or a second mode with a second cluster size that is smaller than the first cluster size, changes first mapping data for the logical address stored in a first table in the second memory, from the first cluster size to the second cluster size, if the mapping of the logical address is being managed in the first mode and the first mapping data can be compressed at a ratio lower than a first compression ratio, and writes the first data to a physical address of the first memory. | 2018-09-27 |
20180276115 | MEMORY SYSTEM - A memory system includes a nonvolatile memory having a plurality of blocks, and a memory controller. The memory controller is configured to control the nonvolatile memory, record an association between a first stream ID and a first block in which first data corresponding to the first stream ID is written, collect information on the first data written into the first block, and invalidate the association between the first stream ID and the first block based on the collected information. | 2018-09-27 |
20180276116 | Storage System and Method for Adaptive Scheduling of Background Operations - A storage system and method for adaptive scheduling of background operations are provided. In one embodiment, after a storage system completes a host operation in the memory, the storage system remains in a high power mode for a period of time, after which the storage system enters a low-power mode. The storage system estimates whether there will be enough time to perform a background operation in the memory during the period of time without the background operation being interrupted by another host operation. In response to estimating that there will be enough time to perform the background operation in the memory without the background operation being interrupted by another host operation, the storage system performs the background operation in the memory. | 2018-09-27 |
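The scheduling decision in this abstract reduces to a fit test: run the background operation only if it is expected to complete within the remaining high-power window. The millisecond values and safety margin are assumed for illustration.

```python
# Sketch: perform a background operation only when the estimated idle
# window exceeds the operation's expected duration, so the operation is
# unlikely to be interrupted by the next host operation.
def should_run_background(idle_window_ms, op_duration_ms, margin_ms=5):
    """True when the operation fits in the idle window with a safety margin."""
    return op_duration_ms + margin_ms <= idle_window_ms
```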