31st week of 2019 patent application highlights part 51 |
Patent application number | Title | Published |
20190235858 | APPARATUS AND METHOD FOR CONFIGURING OR UPDATING PROGRAMMABLE LOGIC DEVICE - An apparatus and a method for configuring or updating a programmable logic device are provided. The apparatus includes a control module and a storage module connected to the control module. The control module includes: a JTAG interface for connecting the control module to a JTAG host, and a configuration interface compatible with a to-be-configured programmable logic device. The control module is configured to: after receiving a first control instruction including configuration information via the JTAG interface, store the configuration information into the storage module; and after receiving a configuration instruction, read the configuration information to configure the to-be-configured programmable logic device. A configuration clock used in a process that the control module configures the to-be-configured programmable logic device is generated from the to-be-configured programmable logic device, the control module or an external clock source. | 2019-08-01 |
20190235859 | METHOD AND DEVICE FOR INCREMENTAL UPGRADE - A method for incremental upgrade is provided. The method is used in a device and includes: receiving an incremental update package corresponding to an application, wherein the incremental update package at least includes an incremental and differential file and the size of a target-version file; obtaining idle resource of a memory in the device and a current-version file corresponding to the application; comparing the idle resource of the memory with a maximum upgrade resource requirement to choose an upgrade process for upgrading the application, wherein the maximum upgrade resource requirement is a capacity sum of the size of the current-version file, the size of the incremental and differential file, and the size of the target-version file; and restoring the target-version file according to the upgrade process, and installing the target-version file. | 2019-08-01 |
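The resource comparison in 20190235859 can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: the function name `choose_upgrade_process`, the strategy labels, and the byte-sized inputs are all assumptions.

```python
# Hypothetical sketch of the resource check described in 20190235859.
# Names and strategy labels are illustrative, not from the patent.

def choose_upgrade_process(idle_memory: int, current_size: int,
                           diff_size: int, target_size: int) -> str:
    """Pick an upgrade strategy by comparing idle memory with the
    maximum upgrade resource requirement (sum of all three file sizes)."""
    max_requirement = current_size + diff_size + target_size
    if idle_memory >= max_requirement:
        # Enough room to hold the current file, the incremental and
        # differential file, and the restored target file at once.
        return "full-restore"
    # Otherwise fall back to a process with a smaller memory footprint,
    # e.g. restoring the target file in chunks.
    return "chunked-restore"

print(choose_upgrade_process(idle_memory=500, current_size=200,
                             diff_size=50, target_size=220))  # full-restore
```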
20190235860 | Feature Decoupling Level - Enabling quick feature delivery is essential for product success and is therefore a goal of software architecture design. But how may we determine if and to what extent an architecture is “good enough” to support feature addition and modification, or determine if a refactoring effort is successful in that features may be added more easily? The applications may use Feature Space and Feature Dependency, derived from a software project's revision history that capture the dependency relations among the features of a system in a feature dependency structure matrix (FDSM), using features as first-class design elements. The applications may also use a Feature Decoupling Level (FDL) metric that may be used to measure the level of independence among features. | 2019-08-01 |
20190235861 | SOFTWARE CONTAINER REGISTRY SERVICE - A request to store a container image is received from a device associated with a customer of a computing resource service provider. Validity of a security token associated with the request is authenticated using a cryptographic key maintained as a secret by the computing resource service provider. One or more layers of the container image is built based at least in part on at least one build artifact to form a set of built layers. The software image including the set of built layers is stored in a repository associated with the customer. A manifest of metadata for the set of built layers is stored in a database of a structured data store. The container image is obtained in the form of an obtained container image. The obtained container image is deployed as the software container in at least one virtual machine instance associated with the customer. | 2019-08-01 |
20190235862 | TECHNIQUES FOR UTILIZING AN EXPRESSION LANGUAGE IN SERVICE CONFIGURATION FILES - Described are examples for processing a configuration file having a certain file format for defining static values. One or more static data values defined in the configuration file based on the JSON format can be determined. One or more expressions, defined according to an expression language, can be detected in the configuration file based on the file format. Via a platform engine and based on the expression language, the one or more expressions can be interpreted. The one or more static data values and the one or more expressions can be stored in memory as an in-memory representation of the configuration. An instance of a service for resolving one or more values related to the one or more expressions can be executed by the platform engine and based on the representation of the configuration. | 2019-08-01 |
20190235863 | SORT INSTRUCTIONS FOR RECONFIGURABLE COMPUTING CORES - According to various aspects, a sorting instruction described herein may advantageously be implemented using intrinsic properties of a reconfigurable computing engine. For example, the reconfigurable computing engine may comprise an arithmetic logic unit (ALU) or other suitable operational unit(s) that can perform one or more comparisons among a given plurality of inputs and output a plurality of select signals that at least indicate maximum and minimum values among the given plurality of inputs. In addition, the reconfigurable computing engine may comprise various multiplexers that make up an interconnect fabric coupled to the ALU or other suitable operational units, wherein the multiplexers may be arranged to receive the plurality of inputs and the plurality of select signals such that the plurality of multiplexers can be dynamically configured to perform the permutations to sort the plurality of inputs. | 2019-08-01 |
20190235864 | GENERATING AND VERIFYING HARDWARE INSTRUCTION TRACES INCLUDING MEMORY DATA CONTENTS - Embodiments of the present invention are directed to a computer-implemented method for generating and verifying hardware instruction traces including memory data contents. The method includes initiating an in-memory trace (IMT) data capture for a processor, the IMT data being an instruction trace collected while instructions flow through an execution pipeline of the processor. The method further includes capturing contents of architected registers of the processor by: storing the contents of the architected registers to a predetermined memory location, and causing a load-store unit (LSU) to read contents of the predetermined memory location. | 2019-08-01 |
20190235865 | SOLVING CONSTRAINT SATISFACTION PROBLEMS COMPRISING VECTORS OF UNKNOWN SIZE - A method, apparatus and product for solving CSP comprising vectors of unknown size. The method comprises generating a structural skeleton tree of a problem description, wherein the structural skeleton tree comprises a node representing a vector of unknown size and a node representing a size of the vector; determining a vector size Constraint Satisfaction Problem (CSP) based on the structural skeleton tree, wherein said determining comprises projecting over-approximated constraints on the size of the vector based on operators used on the vector or elements thereof; solving the vector size CSP to determine the size of the vector; modifying the structural skeleton tree to set the size of the vector and to include nodes for each element in the vector, whereby obtaining a CSP; and solving the CSP. | 2019-08-01 |
20190235866 | INSTRUCTION SET ARCHITECTURE FOR A VECTOR COMPUTATIONAL UNIT - A microprocessor system comprises a vector computational unit and a control unit. The vector computational unit includes a plurality of processing elements. The control unit is configured to provide at least a single processor instruction to the vector computational unit. The single processor instruction specifies a plurality of component instructions to be executed by the vector computational unit in response to the single processor instruction and each of the plurality of processing elements of the vector computational unit is configured to process different data elements in parallel with other processing elements in response to the single processor instruction. | 2019-08-01 |
20190235867 | COMPACT ARITHMETIC ACCELERATOR FOR DATA PROCESSING DEVICES, SYSTEMS AND METHODS - Disclosed are methods, devices and systems for all-in-one signal processing, linear and non-linear vector arithmetic accelerator. The accelerator, which in some implementations can operate as a companion co-processor and accelerator to a main system, can be configured to perform various linear and non-linear arithmetic operations, and is customized to provide shorter execution times and fewer task operations for corresponding arithmetic vector operation, thereby providing an overall energy saving. The compact accelerator can be implemented in devices in which energy consumption and footprint of the electronic circuits are important, such as in Internet of Things (IoT) devices, in sensors and as part of artificial intelligence systems. | 2019-08-01 |
20190235868 | SYSTEM AND METHOD FOR DIVIDE-AND-CONQUER CHECKPOINTING - A system and method which allows the basic checkpoint-reverse-mode AD strategy (of recursively decomposing the computation to reduce storage requirements of reverse-mode AD) to be applied to arbitrary programs: not just programs consisting of loops, but programs with arbitrarily complex control flow. The method comprises (a) transforming the program into a formalism that allows convenient manipulation by formal tools, and (b) introducing a set of operators to allow computations to be decomposed by running them for a given period of time then pausing them, while treating the paused program as a value subject to manipulation. | 2019-08-01 |
20190235869 | SUPPRESSING BRANCH PREDICTION ON A REPEATED EXECUTION OF AN ABORTED TRANSACTION - Branch prediction is suppressed for branch instructions executing in a transaction of a transactional memory (TM) environment in transactions that are re-executions of previously aborted transactions. | 2019-08-01 |
20190235870 | INSTRUCTION AND LOGIC FOR PROCESSING TEXT STRINGS - Method, apparatus, and program means for performing a string comparison operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively. | 2019-08-01 |
20190235871 | OPERATION DEVICE AND METHOD OF OPERATING SAME - Aspects for processing data segments in neural networks are described herein. The aspects may include a computation module capable of performing operations between two vectors with a limited count of elements. When a data I/O module receives neural network data represented in a form of vectors that includes elements more than the limited count, a data adjustment module may be configured to divide the received vectors into shorter segments such that the computation module may be configured to process the segments sequentially to generate results of the operations. | 2019-08-01 |
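The segment-wise processing in 20190235871 amounts to splitting long vectors into chunks the computation module can handle and combining the partial results. A minimal sketch, assuming a dot-product operation and an invented `MAX_ELEMENTS` hardware limit:

```python
# Illustrative sketch of 20190235871's data adjustment: a compute unit that
# only handles MAX_ELEMENTS at a time processes longer vectors by splitting
# them into segments and accumulating partial results. Names are assumptions.

MAX_ELEMENTS = 4  # hardware limit on elements per operation (assumed)

def segmented_dot(a, b):
    """Dot product of two equal-length vectors, computed in
    MAX_ELEMENTS-sized segments processed sequentially."""
    assert len(a) == len(b)
    total = 0
    for i in range(0, len(a), MAX_ELEMENTS):
        seg_a = a[i:i + MAX_ELEMENTS]
        seg_b = b[i:i + MAX_ELEMENTS]
        # Each segment fits within the computation module's element limit.
        total += sum(x * y for x, y in zip(seg_a, seg_b))
    return total

print(segmented_dot([1, 2, 3, 4, 5, 6], [1, 1, 1, 1, 1, 1]))  # 21
```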
20190235872 | PROCESSOR CACHE WITH A DATA PREFETCHER - A method includes storing a first address of a first instruction executed by a processor core in a first table, where the first instruction writes a value into a register for utilization in addressing memory. The method stores the first address of the first instruction executed by the processor core in a second table with multiple entries, where a register value loaded into the register is utilized as a second address by a second instruction executed by the processor core to access a main memory. The method determines whether an instruction address associated with an instruction executed by the processor core is present in the second table, where the instruction address is the second address. Responsive to determining the instruction address is present in the second table, the method prefetches data from the main memory, where the register value is utilized as the second address in the main memory. | 2019-08-01 |
20190235873 | SYSTEM AND METHOD OF REDUCING COMPUTER PROCESSOR POWER CONSUMPTION USING MICRO-BTB VERIFIED EDGE FEATURE - According to one general aspect, an apparatus may include a front end logic section comprising a main-branch target buffer (BTB). The apparatus may also include a micro-BTB separate from the main BTB, and configured to produce prediction information associated with a branching instruction and mark prediction information as verified when one or more conditions are satisfied. Wherein the front end logic section is configured to be, at least partially, powered down when the data stored by the micro-BTB that results in the prediction information is marked as previously verified. | 2019-08-01 |
20190235874 | BRANCH LOOK-AHEAD INSTRUCTION DISASSEMBLING, ASSEMBLING, AND DELIVERING SYSTEM APPARATUS AND METHOD FOR MICROPROCESSOR SYSTEM - A method and system of the branch look-ahead (BLA) instruction disassembling, assembling, and delivering are designed for improving speed of branch prediction and instruction fetch of microprocessor systems by reducing the amount of clock cycles required to deliver branch instructions to a branch predictor located inside the microprocessors. The invention is also designed for reducing run-length of the instructions found between branch instructions by disassembling the instructions in a basic block as a BLA instruction and a single or plurality of non-BLA instructions from the software/assembly program. The invention is also designed for dynamically reassembling the BLA and the non-BLA instructions and delivering them to a single or plurality of microprocessors in a compatible sequence. In particular, the reassembled instructions are concurrently delivered to a single or plurality of microprocessors in a timely and precise manner while providing compatibility of the software/assembly program. | 2019-08-01 |
20190235875 | METHODS FOR SCHEDULING MICRO-INSTRUCTIONS AND APPARATUS USING THE SAME - A method for scheduling micro-instructions, performed by a qualifier, is provided. The method includes the following steps: detecting a load write-back signal broadcasted by a load execution unit; determining whether to trigger a load-detection counting logic according to content of the load write-back signal; determining whether an execution status of a load micro-instruction is cache hit when the triggered load-detection counting logic reaches a predetermined value; and driving a release circuit to remove the first micro-instruction in a reservation station queue when the execution status of the load micro-instruction is cache hit and the first micro-instruction has been dispatched to an arithmetic and logic unit for execution. | 2019-08-01 |
20190235876 | METHODS FOR SCHEDULING MICRO-INSTRUCTIONS AND APPARATUS USING THE SAME - A method for scheduling micro-instructions, performed by a first qualifier, is provided. The method includes the following steps: detecting a write-back signal broadcasted by a second qualifier; determining whether a value of a first load-detection counting logic is to be synchronized with a value of a second load-detection counting logic carried by the write-back signal according to content of the write-back signal; determining whether execution statuses of all load micro-instructions are cache hit when the synchronized value of the first load-detection counting logic reaches a predetermined value; and driving a release circuit to remove a micro-instruction in a reservation station queue when the execution statuses of all the load micro-instructions are cache hit and the micro-instruction has been dispatched to an arithmetic and logic unit for execution. | 2019-08-01 |
20190235877 | METHOD FOR IMPLEMENTING A LINE SPEED INTERCONNECT STRUCTURE - A method and apparatus including a cache controller coupled to a cache memory, wherein the cache controller receives a plurality of cache access requests, performs a pre-sorting of the plurality of cache access requests by a first stage of the cache controller to order the plurality of cache access requests, wherein the first stage functions by performing a presorting and pre-clustering process on the plurality of cache access requests in parallel to map the plurality of cache access requests from a first position to a second position corresponding to ports or banks of a cache memory, performs the combining and splitting of the plurality of cache access requests by a second stage of the cache controller, and applies the plurality of cache access requests to the cache memory at line speed. | 2019-08-01 |
20190235878 | VIRTUAL REALITY DEVICE AND METHOD FOR CONFIGURING THE SAME - The embodiments of the present disclosure disclose a virtual reality device and a method for configuring a virtual reality device. The virtual reality device comprises: at least one switching circuit and a display circuit. Each of the switching circuits comprises a first input port, a second input port, and an output port. The first input port and the second input port are both configured to input signals, respectively, and each of the switching circuits is configured to control an output port of the switching circuit to output a signal corresponding to a first input port or a second input port of the switching circuit to the display circuit. | 2019-08-01 |
20190235879 | SYSTEM AND METHOD TO TRANSFORM AN IMAGE OF A CONTAINER TO AN EQUIVALENT, BOOTABLE VIRTUAL MACHINE IMAGE - A system and method include creating a bootable virtual machine (VM) image for a container image. The method includes a controller machine creating a single partition within an output VM disk file where the single partition comprises a master boot record and a partition table, forming a valid file system in a main partition of the output VM disk file, arranging an input set of container image definitions as a list where a base image forms a head of the list and subsequent images follow in the list, sequentially processing the list for each image by adding the input set of container image definitions to the output VM file; and applying a final networking configuration over the output VM file. | 2019-08-01 |
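The layering step in 20190235879 — base image at the head of the list, subsequent images applied in order, then a final networking configuration — can be sketched with a dict standing in for the output disk's file system. The dict representation and the `/etc/network` path are illustrative assumptions; real container images hold filesystem layers, not flat key-value maps.

```python
# Sketch of the sequential layering described in 20190235879. A dict stands in
# for the main partition of the output VM disk file; real images carry
# filesystem layers. Paths and values are invented for illustration.

def build_vm_image(image_layers: list) -> dict:
    """Apply layers in list order; later layers override earlier files."""
    disk = {}
    for layer in image_layers:  # head of the list is the base image
        disk.update(layer)
    # Apply the final networking configuration over the output file.
    disk["/etc/network"] = "final-config"
    return disk

base = {"/bin/sh": "v1", "/etc/os": "alpine"}
app = {"/app/run": "entrypoint", "/etc/os": "alpine-patched"}
print(build_vm_image([base, app])["/etc/os"])  # alpine-patched
```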
20190235880 | SCHEME FOR AUTOMATICALLY CONTROLLING DONGLE DEVICE AND/OR ELECTRONIC DEVICE ENTER WAITING STATE OF DEVICE PAIRING IN WHICH THE DONGLE DEVICE AND THE ELECTRONIC DEVICE EXCHANGE/SHARE PAIRING INFORMATION - A method for controlling a dongle device to enter a waiting state of device pairing to perform automatic device pairing includes: commanding the dongle device to enter the waiting state when the dongle device is powered up, the dongle device having a storage circuit which is used for storing specific information of at least one electronic device that has been paired with the dongle device; checking the storage circuit of the dongle device; and transmitting a pairing request from the dongle device to the electronic device according to a result of checking the storage circuit. | 2019-08-01 |
20190235881 | Automatic Import Of Third Party Analytics - Techniques to facilitate acquisition of analytics associated with an application are disclosed herein. In at least one implementation, an analytics function call from main program code of the application to a third party analytics function is monitored. Responsive to the analytics function call, the analytics function call is intercepted and a call handler function is invoked instead of the third party analytics function. The call handler function processes the analytics function call to extract analytics data from the analytics function call. A copy of the third party analytics function is then called. | 2019-08-01 |
20190235882 | SYSTEMS AND METHODS OF DYNAMIC PORT ASSIGNMENT - A system provides a listener application which can be notified about specific ports used by specific instances of a Web Socket application. A Web Socket application opens multiple dynamic ports in certain scenarios with a dynamic context. When an application is executed, a listener application is made aware of the context and port information. A system rewrites a reverse proxy configuration on the fly so that any request coming into the reverse proxy will read the change and assign the correct port. A notification to the listener is received across multiple nodes, and the configuration can be updated on all nodes based on the data provided in the configuration. | 2019-08-01 |
20190235883 | Dynamically Loaded Plugin Architecture - A method and architecture for using dynamically loaded plugins is described herein. The dynamically loaded plugin architecture comprises a parent context and a plugin repository. The parent context may define one or more reusable software components. The plugin repository may store one or more plugins. When a plugin is loaded, a child context may be created dynamically. The child context is associated with the plugin and inherits the one or more reusable software components from the parent context. | 2019-08-01 |
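The parent/child context inheritance in 20190235883 maps naturally onto a chained lookup: a child context created at plugin load time sees its own components first and falls back to the parent's reusable ones. This sketch uses Python's `collections.ChainMap` as a stand-in for the context tree; the component names are invented.

```python
# Sketch of 20190235883's context pattern: a child context created when a
# plugin loads inherits reusable components from the parent context while
# keeping plugin-local additions isolated. ChainMap stands in for the
# context hierarchy; component names are illustrative.

from collections import ChainMap

parent_context = {"logger": "shared-logger", "db": "shared-db"}

def load_plugin(plugin_components: dict) -> ChainMap:
    """Create a child context: plugin-local components shadow, and fall
    back to, the parent's reusable components."""
    return ChainMap(plugin_components, parent_context)

child = load_plugin({"handler": "plugin-handler"})
print(child["logger"])   # shared-logger (inherited from the parent context)
print(child["handler"])  # plugin-handler (plugin-local)
```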
20190235884 | METHOD AND SYSTEM FOR DISABLING NON-DELETABLE APPLICATION IN MOBILE TERMINAL - A method and a system for disabling a non-deletable application in a mobile terminal includes detecting that an application icon on a screen of the mobile terminal is long-pressed, determining whether the application is a non-deletable application or not; when the long-pressed application is a non-deletable application, determining whether the non-deletable application is a system application or not; and prompting a user, via a prompt box in the screen, to confirm whether to disable the application or not, when the non-deletable application is not a system application but is an application in a preset core application list; responsive to confirming to disable the application, disabling the application and hiding the application icon on the screen. | 2019-08-01 |
20190235885 | METHOD AND SYSTEM FOR CONTROLLING USER INTERFACE BASED ON CHARACTERISTICS OF WEARABLE DEVICE - The present invention relates to technology for controlling a user interface (UI) of a wearable device, and a system for properly controlling a UI based on characteristics of a wearable device according to an embodiment of the present invention which includes a wearable device and a terminal device configured to interwork with each other, wherein the wearable device provides device specification information when interworking with the terminal device, and the terminal device sets a UI and situation notification method to be controlled in response to a notification situation, based on the device specification information, and uses the set situation notification method to control the set UI in response to the notification situation. | 2019-08-01 |
20190235886 | VIDEO MONITORING - One or more computing devices, systems, and/or methods for monitoring a video are provided. For example, the video may be rendered within a canvas overlaying a webpage within a web browser. The video may comprise an opaque portion (e.g., a bike) and a transparent portion (e.g., a transparent background such that the bike appears to be driving across the webpage as the video plays). User input associated with the canvas may be evaluated to determine whether the user input occurs over the opaque portion or the transparent portion. Responsive to the user input occurring over the opaque portion, the web browser may be transitioned from the webpage to a biking website linked to by the video. Responsive to the user input occurring over the transparent portion, the canvas may be closed to terminate the video. | 2019-08-01 |
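The routing decision in 20190235886 reduces to an alpha-channel hit test on the canvas pixel under the user input. A minimal sketch, assuming a tiny RGBA frame buffer in place of real canvas pixel data; the frame contents and threshold are invented:

```python
# Illustrative sketch of the hit test in 20190235886: decide whether input on
# the canvas lands on an opaque pixel (navigate to the linked site) or a
# transparent one (close the canvas). The 2x2 RGBA "frame" is an assumption.

# frame[y][x] = (r, g, b, a); alpha 0 means fully transparent background
FRAME = [
    [(255, 0, 0, 255), (0, 0, 0, 0)],
    [(255, 0, 0, 255), (0, 0, 0, 0)],
]

def handle_click(x: int, y: int, alpha_threshold: int = 0) -> str:
    """Route user input based on the alpha of the pixel it lands on."""
    alpha = FRAME[y][x][3]
    if alpha > alpha_threshold:
        return "navigate"    # clicked the visible object (e.g. the bike)
    return "close-canvas"    # click fell on the transparent background

print(handle_click(0, 0))  # navigate
print(handle_click(1, 0))  # close-canvas
```

In a browser the same check would read the alpha byte from the canvas's RGBA pixel data at the event coordinates.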
20190235887 | PERSONALIZED DIGITAL ASSISTANT DEVICE AND RELATED METHODS - Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and techniques for disambiguating commands based on personalized usage of a digital assistant device, among other things. In various embodiments, the digital assistant device can use personal data, collected device usage data, and other types of collected contextual information, to disambiguate received commands for the proper selection and execution of operations on the digital assistant device. The digital assistant can process and interpret ambiguous commands and even unique user dialects without requiring extensive training to recognize and act on the received commands, even if the particular phraseology of the command has not previously been encountered by the digital assistant. | 2019-08-01 |
20190235888 | NONDETERMINISTIC TASK INITIATION BY A PERSONAL ASSISTANT MODULE - Techniques are described herein for leveraging information about a user to enable a personal assistant module to make various inferences about what actions that may be responsive to a user declaration. In various implementations, upon identifying a user declaration received at a computing device, a plurality of candidate responsive actions that can be initiated by the computing device in response to the user declaration may be identified. A single candidate responsive action may then be non-deterministically (e.g., randomly, stochastically) selected to be exclusively initiated on the computing device in response to the user declaration. | 2019-08-01 |
20190235889 | MEMRISTIVE DOT PRODUCT ENGINE VIRTUALIZATION - An example system includes at least one memristive dot product engine (DPE) having at least one resource, the DPE further having a physical interface and a controller, the controller being communicatively coupled to the physical interface, the physical interface to communicate with the controller to access the DPE, and at least one replicated interface, each replicated interface being associated with a virtual DPE, the replicated interface being communicatively coupled to the controller. The controller is to allocate timeslots to the virtual DPE through the associated replicated interface to allow the virtual DPE access to the at least one resource. | 2019-08-01 |
20190235890 | METHOD FOR DYNAMICALLY PROVISIONING VIRTUALIZED FUNCTIONS IN A USB DEVICE BY MEANS OF A VIRTUAL USB HUB - Methods and apparatus for dynamically provisioning virtualized functions in a Universal Serial Bus (USB) device by means of a virtual USB hub. The virtual USB hub includes a USB upstream port configured to be connected to a host system and at least one external bus or external interface to which devices including non-USB devices or computing devices in which non-USB devices are embedded may be connected. The virtual USB hub is configured to detect the non-USB devices and/or functions performed by the non-USB devices and generate corresponding virtual USB configuration information under which virtual USB devices and/or functions are connected to downstream virtual ports in the virtual USB hub. The virtual USB configuration is presented to the host computer to enable the host computer to communicate with the non-USB devices and/or their functions. Also disclosed is an I3C probe having an embedded virtual USB hub and configured to communicate with I3C devices and/or functions embedded within a target system under debug. USB devices may also be virtualized in a similar manner. | 2019-08-01 |
20190235891 | CHARGEBACK SYSTEM AND METHOD USING CHARGEBACK DATA STRUCTURES - Systems, methods, and other embodiments associated with chargeback systems are described. In one embodiment, a chargeback application includes instructions for reading a data structure that defines at least attributes that identify a resource type and resource items of the resource type, and chargeback rules for calculating a cost for usage of the resource items. The data structure is parsed to identify the attributes and the chargeback rules. Chargeback functions of the chargeback application are configured based on at least in part the identified attributes and the chargeback functions are caused to retrieve metered data related to usage of the resource items in a computing system. The chargeback application translates the retrieved metered data in accordance with the chargeback rules to generate a usage cost for the resource items. | 2019-08-01 |
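The chargeback flow in 20190235891 — parse a data structure defining resource items and rules, then translate metered usage into a cost — can be sketched with a dict-based configuration. The schema, item names, and per-unit rates below are all assumptions for illustration:

```python
# Hedged sketch of 20190235891's flow: a parsed data structure defines a
# resource type, its resource items, and per-item chargeback rules; metered
# usage is translated into a usage cost. Schema and rates are invented.

chargeback_config = {
    "resource_type": "vm",
    "items": ["cpu_hours", "gb_stored"],
    "rules": {"cpu_hours": 0.05, "gb_stored": 0.02},  # cost per metered unit
}

def usage_cost(config: dict, metered: dict) -> float:
    """Apply each resource item's chargeback rule to its metered usage."""
    return sum(config["rules"][item] * metered.get(item, 0)
               for item in config["items"])

metered_data = {"cpu_hours": 100, "gb_stored": 50}
print(round(usage_cost(chargeback_config, metered_data), 2))  # 6.0
```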
20190235892 | JUST-IN-TIME HARDWARE FOR FIELD PROGRAMMABLE GATE ARRAYS - A system and method are disclosed for executing a component of a design in a hardware engine. The component is compiled to include an interface that supports an ‘open_loop(n)’ function which, when invoked, requests that the hardware engine run for a specified number of steps before communicating with other hardware or software engines via a runtime system. After the compiled hardware component is transferred to the hardware engine, the hardware engine runs for the specified number of steps unless and until it encounters a system function, such as a ‘display(s)’ function, in the code of the component that requires the runtime system to intervene. The hardware engine pauses while awaiting the completion of the system function and then continues its execution. The ‘open_loop(n)’ operation of the hardware engine permits components in hardware engines to run at a speed close to the native speed of the target programmable hardware fabric. | 2019-08-01 |
20190235893 | JUST-IN-TIME HARDWARE FOR FIELD PROGRAMMABLE GATE ARRAYS - A system and method are disclosed for executing a hardware component of a design in a hardware engine, where the component includes a pre-compiled library component. The hardware component is compiled to include an interface that supports a ‘forward( )’ function which, when invoked, requests that the hardware engine running the hardware component run such that interactions between the library component and the hardware component occur without communicating with the runtime system because interactions between the library component and the hardware component are handled locally by the hardware engine and not the runtime system. Handling the library component without the runtime system intervening allows the library component to run at a speed that is close to the native speed of the target re-programmable hardware fabric. In addition, library components targeted to the specific reprogrammable hardware fabric are available to the design without compilation. | 2019-08-01 |
20190235894 | THROTTLING CPU UTILIZATION BY IMPLEMENTING A RATE LIMITER - An approach for a hypervisor to throttle CPU utilization based on a CPU utilization throttling request received for a data flow is presented. A method comprises receiving a request for a CPU utilization throttling. The request is parsed to extract a CPU utilization level and a data flow identifier of the data flow. Upon receiving a data packet that belongs to the data flow identified by the data flow identifier, a packet size of the data packet is determined, and a rate limit table is accessed to determine, based on the CPU utilization level and the packet size, a rate limit for the data packet. If it is determined, based at least on the rate limit, that the CPU utilization level for the data flow would be exceeded if the data packet is transmitted toward its destination, then a recommendation is generated to drop the data packet. | 2019-08-01 |
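The hypervisor-side check in 20190235894 can be sketched as a table lookup keyed by CPU utilization level and packet size, followed by a drop/transmit recommendation. The table values, size buckets, and counting scheme below are invented for illustration:

```python
# Hypothetical sketch of the throttling check in 20190235894: look up a rate
# limit by (CPU utilization level, packet-size bucket) and recommend dropping
# packets that would exceed it. All table values are invented.

# rate_limit_table[cpu_level][size_bucket] -> allowed packets per interval
RATE_LIMIT_TABLE = {
    "low":  {"small": 100,  "large": 20},
    "high": {"small": 1000, "large": 200},
}

def size_bucket(packet_size: int) -> str:
    return "small" if packet_size <= 512 else "large"

def recommend(cpu_level: str, packet_size: int,
              sent_this_interval: int) -> str:
    """Return 'drop' when transmitting would exceed the data flow's limit."""
    limit = RATE_LIMIT_TABLE[cpu_level][size_bucket(packet_size)]
    return "drop" if sent_this_interval >= limit else "transmit"

print(recommend("low", 1400, sent_this_interval=20))  # drop
print(recommend("low", 1400, sent_this_interval=5))   # transmit
```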
20190235895 | ORCHESTRATION ENGINE - Migration configuration data for an organization migration to move application data and application services of a to-be-migrated organization hosted at a source system instance to a target system instance is received. Migration components respectively representing to-be-migrated systems of record in a to-be-migrated organization are registered. In response to receiving an instruction to enter a specific organization migration state, migration steps for each migration component in the migration components are identified for execution in the specific organization migration state. Each migration component in the migration components automatically executes migration steps determined for each such migration component for execution in the specific organization migration state. | 2019-08-01 |
20190235896 | DISTRIBUTION OF APPLICATIONS AMONG MACHINES IN A CLOUD - A system includes at least one processor configured to host virtual machines in a cloud. Each virtual machine executes a plurality of instances of a first application. Each virtual machine also executes a distributor. The distributor is configured for accessing a profile of the application and a distribution of the first application, wherein the distribution identifies a respective first number of instances of the first application to execute in each respective virtual machine. After launch of the first application, the distributor is configured for computing an updated distribution that includes a respective second number of instances of the first application to execute in each respective virtual machine. The distributor is also configured for determining whether the second number of instances is different from the first number of instances. The distributor is configured for storing the updated distribution in a database in response to receiving a lock for accessing the distribution. | 2019-08-01 |
20190235897 | SYSTEMS AND METHODS FOR UPDATING CONTAINERS - The disclosed computer-implemented method for updating containers may include (i) identifying an application container that is instantiated from a static application container image, (ii) identifying ancillary code that is designed to modify execution of the application executing in the application container, (iii) packaging the ancillary code into a data volume container image to be deployed to the host system that hosts the application container, (iv) discovering, by the application container, a data volume container instantiated from the data volume container image on the host system, and (v) modifying, by the application container, the execution of the application executing in the application container with the ancillary code, without modifying the static application container image, at least in part by instantiating the application container with a pointer to the location of the data volume container that contains the ancillary code. Various other methods, systems, and computer-readable media are also disclosed. | 2019-08-01 |
20190235898 | STATIC IP RETENTION FOR MULTI-HOMED VMS ON MIGRATION - An illustrative embodiment disclosed herein is a method, by a migration virtual machine, including determining whether a first target network interface card is configured by dynamic host configuration protocol and sending a first address resolution protocol request for a first source Internet Protocol gateway to the first target network interface card. Sending the first address resolution protocol request is based on determining that the first target network interface card is not configured by dynamic host configuration protocol. The method further includes determining whether the first target network interface card responds to the first address resolution protocol request of the migration virtual machine and applying an Internet Protocol configuration of a first source network interface card to the first target network interface card. Applying the Internet Protocol configuration is based on receiving a response from the first target network interface card to the first address resolution protocol request of the migration virtual machine. | 2019-08-01 |
20190235899 | TRACKING VIRTUAL MACHINE DATA - Disclosed herein are a method, a system, and a non-transitory computer readable medium for tracking data objects associated with a virtual machine. In one approach, an object container of the virtual machine is generated. The object container includes data objects associated with the virtual machine. For each of the data objects, a corresponding tag is generated. Each tag is indicative of a corresponding data object. Each tag includes a global identification of the corresponding data object. The global identification is unique across a distributed database. The tags are stored at the distributed database. | 2019-08-01 |
20190235900 | AUTOMATED DATA MIGRATION OF SERVICES OF A VIRTUAL MACHINE TO CONTAINERS - Examples described herein may include migration of data associated with a service to a container. An example method includes creating a user virtual machine associated with a service and an associated virtual disk storing data associated with running the service, and creating a volume group and an associated storage container at a node of a computing system. The example method further includes storing a cloned version of the virtual disk into the storage container, and, in response to discovery of the cloned version of the virtual disk in the storage container, mounting the cloned version of the virtual disk on the volume group to provide access to clients running the service. | 2019-08-01 |
20190235901 | SYSTEMS AND METHODS FOR ORGANIZING ON-DEMAND MIGRATION FROM PRIVATE CLUSTER TO PUBLIC CLOUD - Systems and methods for migrating a plurality of virtual machines (VMs) from a private cluster to a public cloud include identifying the plurality of VMs currently residing in the private cluster to be migrated to the public cloud. A communication graph indicative of communications involving the plurality of VMs is determined. A migration sequence for the plurality of VMs based on the communication graph is generated. The plurality of VMs is migrated from the private cluster to the public cloud according to the migration sequence. | 2019-08-01 |
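The abstract above derives a migration order from a communication graph but does not specify the ordering algorithm. One plausible reading, sketched here as an assumption, is a greedy order that migrates next whichever VM communicates most with the VMs already migrated, so that heavily communicating VMs cross the private/public boundary close together:

```python
def migration_sequence(vms, traffic):
    """Greedy migration order over a communication graph (an illustrative
    heuristic, not necessarily the patented sequencing).
    `traffic[(a, b)]` is the traffic volume between VMs a and b."""
    def t(a, b):
        # Treat the graph as undirected.
        return traffic.get((a, b), 0) + traffic.get((b, a), 0)

    remaining = set(vms)
    # Seed with the most-communicative VM overall.
    first = max(remaining, key=lambda v: sum(t(v, u) for u in remaining if u != v))
    order = [first]
    remaining.remove(first)
    while remaining:
        # Next, migrate the VM with the most traffic to already-migrated VMs.
        nxt = max(remaining, key=lambda v: sum(t(v, m) for m in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

With this heuristic, a chatty pair is migrated back-to-back, minimizing the window in which their traffic must traverse the cluster-to-cloud link.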
20190235902 | BULLY VM DETECTION IN A HYPERCONVERGED SYSTEM - An illustrative embodiment disclosed herein is a method, by a data analytics chip, including finding a contention within a first predetermined amount of time, sorting user virtual machines based on the consumption of each of the user virtual machines, and identifying a first subset of the user virtual machines that satisfies consumption criteria. | 2019-08-01 |
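The sort-and-filter step described above is simple enough to sketch directly. The consumption criterion used here (exceeding a fixed threshold) is an assumption; the abstract only says the subset "satisfies consumption criteria":

```python
def find_bully_vms(vm_consumption, threshold):
    """Sort user VMs by resource consumption (highest first) and return
    the subset satisfying the criteria. The criterion here, consumption
    above `threshold`, is an illustrative assumption."""
    ranked = sorted(vm_consumption.items(), key=lambda kv: kv[1], reverse=True)
    return [vm for vm, usage in ranked if usage > threshold]
```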
20190235903 | DISTRIBUTED COMPUTING SYSTEMS INCLUDING SYSTEM TRAFFIC ANALYZERS TO REQUEST CO-LOCATION OF VIRTUAL MACHINES - Examples described herein include distributed computing systems having a system traffic analyzer. The system traffic analyzer may receive sampled packets sent to a network from a number of virtual machines hosted by computing nodes in the distributed computing system. The packets may be sampled, for example, by network flow monitors in hypervisors of the computing nodes. The system traffic analyzer may request co-location of virtual machines having greater than a threshold amount of traffic between them. The request for co-location may result in the requested virtual machines being hosted on a same computing node, which may in some examples conserve network bandwidth. | 2019-08-01 |
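The analyzer's core aggregation, counting sampled traffic per VM pair and flagging pairs above a threshold, can be sketched as follows. The packet representation and threshold semantics are illustrative assumptions:

```python
from collections import Counter

def colocation_requests(sampled_packets, threshold):
    """Aggregate sampled packets into per-pair traffic counts and return
    the VM pairs whose mutual traffic exceeds the threshold. A sampled
    packet is modeled as a (src_vm, dst_vm) tuple; this representation
    is an assumption for illustration."""
    pair_traffic = Counter()
    for src, dst in sampled_packets:
        # Direction is irrelevant for co-location, so key on the unordered pair.
        pair_traffic[frozenset((src, dst))] += 1
    return [tuple(sorted(pair)) for pair, n in pair_traffic.items() if n > threshold]
```

A scheduler consuming this list could then place each returned pair on the same computing node.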
20190235904 | CLONING SERVICES IN VIRTUALIZED COMPUTING SYSTEMS - Examples of virtualized systems are described which may include cloning services. Cloning services described herein may facilitate the generation of cloned virtual machines which may be made available (e.g., run and/or accessed) before all data utilized by the cloned virtual machine has been copied into local storage of the computing node hosting the cloned virtual machine. This may facilitate more expeditious availability of a cloned virtual machine while providing for data transfer at a later time. | 2019-08-01 |
20190235905 | AUTOMATIC DETECTION OF NETWORK HOTSPOTS IN A CLOUD INFRASTRUCTURE VIA AGGREGATE GEOLOCATION INFORMATION OF USER DEVICES - A method of detecting hotspots in a cloud infrastructure via aggregate geolocation information of user devices is described. The method includes receiving a request to launch a virtual machine executing on behalf of a first user device and retrieving a first set of identifiers of recovery data from a first data center and a second set of identifiers of recovery data from a second data center. The recovery data may be associated with a plurality of virtual machines previously executed on behalf of a plurality of user devices. The method further includes generating a first distribution of geolocations based on the first set of identifiers and a second distribution of geolocations based on the second set of identifiers. The method includes selecting the first data center and replicating, at the first data center, recovery data associated with the virtual machine executing on behalf of the first user device. | 2019-08-01 |
20190235906 | MONITORING APPLICATIONS RUNNING ON CONTAINERS - Embodiments disclosed herein relate to a method, system, and computer-readable medium for monitoring an application executing across a plurality of containers. A performance monitor requests a list of containers created on at least one computing system. The performance monitor retrieves information associated with a creation of each container in the list. The performance monitor parses the information associated with each container in the list to identify a cluster of related containers that are running the application. The performance monitor displays a topology that relates the containers of the cluster to resources in the at least one computing system. The performance monitor identifies a pair of containers that are negatively correlated based on the topology. The performance monitor adjusts the application to remove the negative correlation between the pair of containers. | 2019-08-01 |
20190235907 | EFFICIENT DISTRIBUTED ARRANGEMENT OF VIRTUAL MACHINES ON PLURAL HOST MACHINES - An apparatus determines a similarity of names of a plurality of virtual machines, and divides the plurality of virtual machines into clusters based on a result of the determination such that virtual machines having a value that represents the similarity of the names that is equal to or less than a given threshold are included in a first cluster and virtual machines having a value that represents the similarity of the names that is greater than the given threshold are included in a second cluster. The apparatus places virtual machines included in the first cluster on different host machines. | 2019-08-01 |
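The name-similarity clustering above can be sketched in Python. Reading the abstract's "value that represents the similarity" as a distance (small means similar, e.g. replicas named web-01 and web-02), similar-named VMs form the first cluster and are spread across different hosts. The distance function, anchor-based partitioning, and round-robin placement are illustrative assumptions:

```python
import difflib

def name_distance(a, b):
    """A small value means similar names. Interpreting the abstract's
    similarity value as a distance is an assumption."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def split_clusters(names, threshold):
    """Partition VMs by name similarity to the first VM: similar names
    (distance <= threshold) form the first cluster, the rest the second.
    Comparing against a single anchor is a simplification."""
    anchor = names[0]
    first = [n for n in names if name_distance(anchor, n) <= threshold]
    second = [n for n in names if name_distance(anchor, n) > threshold]
    return first, second

def place_on_hosts(cluster, hosts):
    """Anti-affinity: spread the first cluster's VMs across different
    hosts round-robin."""
    return {vm: hosts[i % len(hosts)] for i, vm in enumerate(cluster)}
```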
20190235908 | LIVE UPDATES FOR VIRTUAL MACHINE MONITOR - Generally described, aspects of the present disclosure relate to a live update process of the virtual machine monitor during the operation of the virtual machine instances. An update to a virtual machine monitor can be a difficult process to execute because of the operation of the virtual machine instances. Generally, in order to update the virtual machine monitor, the physical computing device needs to be rebooted, which interrupts operation of the virtual machine instances. The live update process provides for a method of updating the virtual machine monitor without rebooting the physical computing device. | 2019-08-01 |
20190235909 | FORWARDING POLICY CONFIGURATION - A method of configuring a forwarding policy, a cloud management platform and an intelligent network management center are provided in the present disclosure. In an example, the cloud management platform obtains a first mapping between a virtual machine and a network device, and transmits a first notification message to an intelligent network management center associated with the network device in a way that the intelligent network management center configures a forwarding policy associated with the virtual machine for the network device according to the first notification message, wherein the first notification message comprises virtual machine information of the virtual machine and network device information of the network device, and the forwarding policy instructs the network device to perform processing for a packet associated with the virtual machine. | 2019-08-01 |
20190235910 | AUTHENTICATION AND INFORMATION SYSTEM FOR REUSABLE SURGICAL INSTRUMENTS - An authentication and information system for use in a surgical stapling device includes a handle assembly having a controller, the controller having at least one program and a memory, an adapter assembly, and a loading unit having a tool assembly mounted for articulation and a member for actuating articulation of the tool assembly, the loading unit having at least one chip assembly having a chip storing data indicating a position of the member when the tool assembly is in a fully articulated position. | 2019-08-01 |
20190235911 | UTILIZING PHYSICAL SYSTEMS AND VIRTUAL SYSTEMS FOR VIRTUAL NETWORK FUNCTIONS - A method includes provisioning a first Virtual Network Function (VNF) component on a first virtual machine, the first virtual machine being supported by a first physical computing system, provisioning a second VNF component directly on a second physical computing system, and using, within a telecommunications network, a VNF that includes both the first VNF component running on the first virtual machine and the second VNF component running directly on the second physical computing system. The method further includes, with a VNF manager, determining that a third VNF component should be provisioned, and in response to determining that the third VNF component is capable of utilizing a hardware accelerator associated with a third physical computing system, implementing the third VNF component on the third physical computing system. | 2019-08-01 |
20190235912 | CLIENT CONTROLLED TRANSACTION PROCESSING INVOLVING A PLURALITY OF PARTICIPANTS - Methods and systems are provided for client controlled transaction processing. The method may be carried out at a transaction server, and include: receiving a transaction request from a transaction initiator and allocating a transaction identifier to the transaction; receiving notification of the number of jobs to be completed in the transaction; maintaining a transaction status indicating the current status of the transaction; receiving job status updates from one or more participants processing the jobs included in the transaction and updating a transaction record reflecting the status of each of the jobs included in the transaction; updating the transaction status when required based on the job status updates of the jobs included in the transaction; and receiving and responding to transaction status polling to provide a current transaction status, where the transaction status polling originates from the transaction initiator and the participants processing the jobs. | 2019-08-01 |
20190235913 | FAIR AND EFFICIENT CONCURRENCY MANAGEMENT FOR GRAPH PROCESSING - Techniques are described herein for concurrently evaluating graph processing tasks in a fair and efficient manner. In an embodiment, a request to execute a graph processing task is received. A first mapping associates each graph processing task of a plurality of graph processing tasks to a set of workload characteristics of a plurality of sets of workload characteristics. A second mapping associates each set of workload characteristics of the plurality of sets of workload characteristics to a set of execution parameters of a plurality of sets of execution parameters. Using the first mapping, a set of workload characteristics is determined based on the graph processing task. Using the second mapping, a set of execution parameters is determined based on the determined set of workload characteristics. The graph processing task is executed based on the determined set of execution parameters. | 2019-08-01 |
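The two-level indirection described above (task to workload characteristics, workload characteristics to execution parameters) reduces to two map lookups. The mapping contents and key names below are assumptions; the abstract specifies only the indirection itself:

```python
# Hypothetical first mapping: graph processing task -> workload characteristics.
TASK_TO_WORKLOAD = {
    "pagerank": "compute_heavy",
    "bfs": "memory_heavy",
}

# Hypothetical second mapping: workload characteristics -> execution parameters.
WORKLOAD_TO_PARAMS = {
    "compute_heavy": {"threads": 16, "priority": "low"},
    "memory_heavy": {"threads": 4, "priority": "high"},
}

def execution_parameters(task):
    """Resolve a graph processing task to its execution parameters via
    the two mappings, as the abstract describes."""
    workload = TASK_TO_WORKLOAD[task]
    return WORKLOAD_TO_PARAMS[workload]
```

The indirection lets an operator retune execution parameters for a whole class of workloads without touching the per-task mapping.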
20190235914 | Programmatic Implicit Multithreading - A mechanism is provided for programmatic implicit multithreading. A first operation is executed on a first thread in a processor, where the first operation is from a set of operations within a block of code of an application that are distinct and process unrelated data. A determination is made as to whether a time limit associated with executing the first operation has been exceeded. Responsive to the time limit being exceeded, a determination is made as to whether there is one or more unexecuted operations in the set of operations. Responsive to one or more unexecuted operations existing in the set of operations, a new thread is spawned off on the processor to execute a next unexecuted operation of the one or more unexecuted operations. | 2019-08-01 |
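The control flow above, run operations on one thread until a time limit is exceeded, then spawn a new thread for the remaining operations, can be mirrored in software. The patent targets a processor mechanism; this Python sketch only illustrates the scheduling logic, and the recursive hand-off is an assumption:

```python
import threading
import time

def run_operations(operations, time_limit):
    """Run distinct, data-independent operations on the current thread.
    Once the time limit is exceeded and unexecuted operations remain,
    spawn a new thread for the remaining operations (a software sketch
    of the mechanism, not the patented processor implementation)."""
    spawned = []
    start = time.monotonic()
    for i, op in enumerate(operations):
        if time.monotonic() - start > time_limit:
            # Time limit exceeded: hand the unexecuted tail to a new thread.
            t = threading.Thread(target=run_operations,
                                 args=(operations[i:], time_limit))
            t.start()
            spawned.append(t)
            break
        op()
    for t in spawned:
        t.join()
```

Because the operations are required to be distinct and to process unrelated data, no synchronization between the original and spawned threads is needed beyond the final join.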
20190235915 | TECHNIQUES FOR ORDERING ATOMIC OPERATIONS - In various embodiments, an ordered atomic operation enables a parallel processing subsystem to execute an atomic operation associated with a memory location in a specified order relative to other ordered atomic operations associated with the memory location. A level 2 (L2) cache slice includes an atomic processing circuit and a content-addressable memory (CAM). The CAM stores an ordered atomic operation specifying at least a memory address, an atomic operation, and an ordering number. In operation, the atomic processing circuit performs a look-up operation on the CAM, where the look-up operation specifies the memory address. After the atomic processing circuit determines that the ordering number is equal to a current ordering number associated with the memory address, the atomic processing circuit executes the atomic operation and returns the result to a processor executing an algorithm. Advantageously, the ordered atomic operation enables the algorithm to achieve a deterministic result while optimizing latency. | 2019-08-01 |
20190235916 | METHODS TO PRESENT THE CONTEXT OF VIRTUAL ASSISTANT CONVERSATION - A method, a system, and a computer program product for indicating a dialogue status of a conversation thread between a user of an electronic device and a virtual assistant capable of maintaining conversational context of multiple threads at a time. The method includes receiving, at an electronic device providing functionality of a virtual assistant (VA), a user input that corresponds to a task to be performed by the VA. The method includes determining, from among a plurality of selectable threads being concurrently maintained by the VA and based on content of the user input, one target thread to which the user input is associated. The method includes performing the task within the target thread. | 2019-08-01 |
20190235917 | CONFIGURABLE SCHEDULER IN A GRAPH STREAMING PROCESSING SYSTEM - Systems, apparatuses and methods are disclosed for scheduling threads composed of code blocks in a graph streaming processor (GSP) system. One system includes a scheduler for scheduling a plurality of threads, where the plurality of threads includes a set of instructions operating on the graph streaming processors of the GSP system. The scheduler comprises a plurality of stages, where each stage is coupled to an input command buffer and an output command buffer. A portion of the scheduler is implemented in hardware and comprises a command parser operative to interpret commands within a corresponding input command buffer, a thread generator coupled to the command parser and operative to generate the plurality of threads, and a thread scheduler coupled to the thread generator for dispatching the plurality of threads to operate on the plurality of graph streaming processors. | 2019-08-01 |
20190235918 | SCHEDULING FRAMEWORK FOR ORGANIZATION MIGRATIONS - A request for an organization migration to move application data and application services of an organization hosted at a source system instance in a multi-tenant computing system to a target system instance in the multi-tenant computing system is received. Based on operational parameters, a time window is selected to execute the organization migration. Computing resource usages of one or both of the source and target system instances in the selected time window are monitored. If computing resources are available, the organization migration is enqueued. | 2019-08-01 |
20190235919 | Managing the Processing of Streamed Data in a Data Streaming Application Using Query Information from a Relational Database - Queries are monitored in a database which receives input from a stream computing application to identify data of interest. Parameters defining the data of interest, which are preferably expressed as a logical query, are sent to the stream computing application, which then processes the in-flight streamed data satisfying the parameters in some special manner. In some embodiments, the stream computing application increases the processing priority of in-flight data satisfying the parameters. In some embodiments, the stream computing application applies additional processing steps to the in-flight data satisfying the parameters to provide enhanced data or metadata. | 2019-08-01 |
20190235920 | SYSTEMS AND METHODS FOR TASK SCHEDULING - A computer-implemented method is disclosed. The method comprises receiving a notification from a job scheduler that an execution time for a job registered with the job scheduler is at or before a first time being a current time. The method also comprises identifying, in response to receiving the notification, at least one task from a task data structure with a target runtime that is at or before the first time. The task data structure stores task data for one or more tasks received from one or more client computers, and the task data associates each of the one or more tasks with a target runtime. The method further comprises initiating execution for each of the at least one task and determining whether there is a specific task from the task data structure with a specific target runtime after the first time. In addition, the method comprises in response to determining that there is a specific task from the task data structure with a specific target runtime after the first time, registering a future job with the job scheduler with a runtime that is at or about the specific target runtime. | 2019-08-01 |
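The notification-handling step above splits the task data structure into tasks due now and a future job to register. A minimal sketch, modeling the task data structure as a name-to-target-runtime map (an assumption; the abstract does not fix the representation):

```python
def handle_notification(tasks, now):
    """On a job-scheduler notification at time `now`, return the tasks
    whose target runtime is at or before `now` (to be executed), and the
    earliest future target runtime to register as a new job with the
    scheduler, or None if no future task exists."""
    due = [name for name, target in tasks.items() if target <= now]
    future = [target for target in tasks.values() if target > now]
    next_job = min(future) if future else None
    return due, next_job
```

Registering only the earliest future runtime keeps a single pending job with the scheduler at a time, matching the abstract's one-future-job flow.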
20190235921 | SYSTEM FOR ALLOCATING RESOURCES FOR USE IN DATA PROCESSING OPERATIONS - A system, method, and the like for allocating computing resources to data processing services/applications based on the current or foreseen usage/load of the computing resources. The elastic nature of the computing resource grid allows for expansion or contraction of ancillary use of the computing resources depending on the data processing requirements and computer resource usage. Further, virtual binary codes are deployed on the computing resources, which are executed at the application layer and configured to be removed upon completion of a job or in the event that the usage state of the computing resource dictates such. The removal of the virtual binary codes from the computing resources provides for no residual effect on the computing resources (i.e., no code remains in computing resource memory and, as such, no processing capabilities are subsequently used). | 2019-08-01 |
20190235922 | Controlling Resource Allocation in a Data Center - A method of controlling resource allocation in a data center, the data center comprising a plurality of servers connected by a plurality of network links. The method comprises monitoring… | 2019-08-01 |
20190235923 | Computational Assessment of a Real-World System and Allocation of Resources of the System - Among other things, there is coordination of the timing and execution (a) of allocations of amounts of real-world resources to competing uses of the resources, relative to (b) dynamic assessments of an overall state of a changing real-world system, in order to alter the overall state of the real-world system. A request is received for an allocation of an amount of a real-world resource of the real-world system to a particular use of the resource as of a particular allocation time. From time to time, information is ingested representing states of facets of the real-world system, to maintain the information current relative to the particular allocation time. A current overall state of the real-world system is dynamically assessed, based on the current information representing states of the facets of the real-world system, as of a time that is current relative to the particular allocation time. Based on the assessed current overall state of the real-world system and on the request for the allocation, and as of a time that is current relative to the particular allocation time, a determination is made of an amount of the real-world resource to allocate to the particular use of the resource. The allocation to the particular use is executed no later than the particular allocation time. | 2019-08-01 |
20190235924 | DYNAMIC PARTITIONING OF EXECUTION RESOURCES - Embodiments of the present invention set forth techniques for allocating execution resources to groups of threads within a graphics processing unit. A compute work distributor included in the graphics processing unit receives an indication from a process that a first group of threads is to be launched. The compute work distributor determines that a first subcontext associated with the process has at least one processor credit. In some embodiments, CTAs may be launched even when there are no processor credits, if one of the TPCs that was already acquired has sufficient space. The compute work distributor identifies a first processor included in a plurality of processors that has a processing load that is less than or equal to the processor loads associated with all other processors included in the plurality of processors. The compute work distributor launches the first group of threads to execute on the first processor. | 2019-08-01 |
20190235925 | SYSTEMS, METHODS, AND INTERFACES FOR VECTOR INPUT/OUTPUT OPERATIONS - Data of a vector storage request pertaining to one or more disjoint, non-adjacent, and/or non-contiguous logical identifier ranges are stored contiguously within a log on a non-volatile storage medium. A request consolidation module modifies one or more sub-requests of the vector storage request in response to other, cached storage requests. Data of an atomic vector storage request may comprise persistent indicators, such as persistent metadata flags, to identify data pertaining to incomplete atomic storage requests. A restart recovery module identifies and excludes data of incomplete atomic operations. | 2019-08-01 |
20190235926 | SORTING APPARATUS - A sorter receives a list of elements to be sorted. An element of the list is supplied to a selected one of a plurality of processing units to be processed. The selected one of the processing units sends the element to one of a plurality of list element cells, which rank orders the elements among other elements in the same list element storage as well as storing the position of each element from the original list. Each of the plurality of list element cells processes and stores a different range of element values. The element being processed is stored in sorted order in the list element cell that has an element value range that encompasses the value of the element of the list. | 2019-08-01 |
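The patent describes a hardware sorter, but its cell-based scheme has a direct software analogue: each "list element cell" owns a value range, keeps its elements in sorted order, and records each element's original position. The range partitioning and the binary-insertion step below are illustrative choices:

```python
import bisect

class ListElementCell:
    """Stores elements whose values fall in [lo, hi), in sorted order,
    together with each element's position in the original list."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.items = []  # sorted (value, original_position) pairs

    def accepts(self, value):
        return self.lo <= value < self.hi

    def insert(self, value, position):
        # Rank-order the element among the others stored in this cell.
        bisect.insort(self.items, (value, position))

def cell_sort(elements, ranges):
    """Route each element to the cell covering its value, then read the
    cells out in range order to obtain the sorted list. Assumes the
    ranges together cover every element value."""
    cells = [ListElementCell(lo, hi) for lo, hi in ranges]
    for pos, value in enumerate(elements):
        next(c for c in cells if c.accepts(value)).insert(value, pos)
    return [value for c in cells for value, _ in c.items]
```

Because each cell handles a disjoint value range, the cells can (as in the hardware design) operate independently and in parallel.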
20190235927 | DATA SYNCHRONIZATION FOR IMAGE AND VISION PROCESSING BLOCKS USING PATTERN ADAPTERS - A hardware thread scheduler (HTS) is provided for a multiprocessor system. The HTS is configured to schedule processing of multiple threads of execution by resolving data dependencies between producer modules and consumer modules for each thread. Pattern adaptors may be provided in the scheduler that allows mixing of multiple data patterns across blocks of data. Transaction aggregators may be provided that allow re-using the same image data by multiple threads of execution while the image data remains in a given data buffer. Bandwidth control may be provided using programmable delays on initiation of thread execution. Failure and hang detection may be provided using multiple watchdog timers. | 2019-08-01 |
20190235928 | DYNAMIC PARTITIONING OF EXECUTION RESOURCES - Embodiments of the present invention set forth techniques for allocating execution resources to groups of threads within a graphics processing unit. A compute work distributor included in the graphics processing unit receives an indication from a process that a first group of threads is to be launched. The compute work distributor determines that a first subcontext associated with the process has at least one processor credit. In some embodiments, CTAs may be launched even when there are no processor credits, if one of the TPCs that was already acquired has sufficient space. The compute work distributor identifies a first processor included in a plurality of processors that has a processing load that is less than or equal to the processor loads associated with all other processors included in the plurality of processors. The compute work distributor launches the first group of threads to execute on the first processor. | 2019-08-01 |
20190235929 | RECONFIGURABLE COMPUTING CLUSTER WITH ASSETS CLOSELY COUPLED AT THE PHYSICAL LAYER BY MEANS OF AN OPTICAL CIRCUIT SWITCH - Reconfigurable computing clusters, compute nodes within reconfigurable computing clusters, and methods of operating a reconfigurable computing cluster are disclosed. A reconfigurable computing cluster includes an optical circuit switch, and a plurality of computing assets, each of the plurality of computing assets connected to the optical circuit switch by two or more bidirectional fiber optic communications paths. | 2019-08-01 |
20190235930 | CAPACITY AND LOAD ANALYSIS USING STORAGE ATTRIBUTES - A method includes determining a capacity model that configures computing resource capacity for a capacity container. The method also includes estimating an available capacity in a capacity container based on a capacity of host devices in the capacity container. The method also includes generating, based on a selection of a visualization method, a visualization of a trend curve and a forecast curve, the trend curve representing historical capacity usage of the host devices. Implementations may include selecting an average virtual machine unit display or a raw units display and determining an average virtual machine based on averaging an attribute of one or more virtual machines. | 2019-08-01 |
20190235931 | PARTIAL TASK ALLOCATION IN A DISPERSED STORAGE NETWORK - A processing system in a dispersed storage and task (DST) network operates by receiving data and a corresponding task; identifying candidate DST execution units for executing partial tasks of the corresponding task; receiving distributed computing capabilities of the candidate DST execution units; selecting a subset of DST execution units of the candidate DST execution units to favorably execute the partial tasks of the corresponding task; determining task partitioning of the corresponding task into the partial tasks based on one or more of the distributed computing capabilities of the subset of DST execution units; determining processing parameters of the data based on the task partitioning; partitioning the tasks based on the task partitioning to produce the partial tasks; processing the data in accordance with the processing parameters to produce slice groupings; and sending the slice groupings and the partial tasks to the subset of DST execution units. | 2019-08-01 |
20190235932 | AUTOSCALING OF DATA PROCESSING COMPUTING SYSTEMS BASED ON PREDICTIVE QUEUE LENGTH - Described herein are systems, methods, and software to enhance the scaling of data processing systems in a computing environment. In one implementation, a method of operating a data processing management system includes monitoring a queue length in an allocation queue for a data processing system, and generating a prediction of the allocation queue length based on the monitored queue length. Once the prediction is generated, the data processing management system may modify an operational state of at least one data processing system based on the prediction of the queue length and a processing time requirement for data objects in the allocation queue. | 2019-08-01 |
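The abstract above leaves the prediction model and the scaling rule unspecified. One simple instantiation, offered purely as an assumption, is linear extrapolation of the queue length plus a capacity calculation against the processing-time requirement:

```python
def predict_queue_length(samples, horizon):
    """Linearly extrapolate the queue length `horizon` steps ahead from
    the last two samples. The linear model is an illustrative choice;
    the application does not specify the predictor."""
    if len(samples) < 2:
        return samples[-1] if samples else 0
    slope = samples[-1] - samples[-2]
    return max(0, samples[-1] + slope * horizon)

def workers_needed(predicted_length, per_worker_rate, processing_deadline):
    """Scale to enough workers that the predicted backlog drains within
    the processing time requirement (ceiling division)."""
    capacity_per_worker = per_worker_rate * processing_deadline
    return -(-predicted_length // capacity_per_worker)
```

The management system would then start or stop data processing instances until the active count matches `workers_needed`.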
20190235933 | Index Structure Using Atomic Multiword Update Operations - A computer implemented method includes receiving multiple requests to update a data structure stored in non-volatile memory (NVM) and applying an atomic multiword update to the data structure to arbitrate access to the NVM. In a further embodiment, a computer implemented method includes allocating a descriptor for a persistent multi-word compare-and-swap operation (PMwCAS), specifying targeted addresses of words to be modified, returning an error if one of the targeted addresses contains a value not equal to a corresponding compare value, executing the operation atomically if the targeted addresses contain values that match the corresponding compare values, and aborting the operation responsive to the returned error. | 2019-08-01 |
20190235934 | PERFORMING PROCESS CONTROL SERVICES ON ENDPOINT MACHINES - Some embodiments of the invention provide a method for performing services on an endpoint machine in a datacenter. On the endpoint machine, the method installs a guest introspection (GI) agent and a service engine. In some embodiments, the GI agent and the service engine are part of one monitor agent that is installed on the endpoint machine. The method then registers the GI agent with a set of one or more notification services on the endpoint machine to receive notifications regarding new data message flow events on the endpoint machine. Through the notifications, the GI agent captures contextual data items regarding new data message flows, and stores the captured contextual data items. The service engine then performs a service for the data message flow based on the captured contextual data. | 2019-08-01 |
20190235935 | SYSTEM AND METHOD FOR TAGGING AND TRACKING EVENTS OF AN APPLICATION - A system and method for providing delegated metric tools within a partially closed communication platform that includes receiving a tag identifier linked to at least a first identified platform interaction in the communication platform; associating the tag identifier with at least one logged event of an account associated with the first identified platform interaction; defining a tracking resource with at least one tag identifier; measuring platform interactions tracked by a tracking resource; and providing access to measured platform interactions through an application. | 2019-08-01 |
20190235936 | PERSONALIZED NOTIFICATION BROKERING - Aspects of the technology described herein are directed towards systems, methods, and computer storage media for, among other things, providing personalized notification management. Notifications can be communicated to a user upon receipt or queued for subsequent handling based on a probability that the user will interact with the notification within a threshold elapsed time from presentation, if it is presented. The probability is determined based on a user's past interactions with similar notifications. The interactions of other users with notifications can also be considered to determine the probability. The notifications can be managed by a notification broker. | 2019-08-01 |
20190235937 | EXTENSIBLE SYSTEMATIC REPRESENTATION OF OBJECTS AND OPERATIONS APPLIED TO THEM - Disclosed is a technique for communicating message objects from a first process to a second process in a transport node of a virtualized network, the message objects specifying a change to a status of a virtualized network object in the virtualized network. In the technique, message objects are separated from operation objects, which have fields corresponding to the fields of the message objects, a field of the operation object being capable of specifying a change to or a status of a field of the message object to which it corresponds. Yet another object combines a message object and an operation object so that the protocol for communication between the first and second process is the same regardless of the contents of the actual message. | 2019-08-01 |
20190235938 | ENHANCED ADDRESS SPACE LAYOUT RANDOMIZATION - One embodiment provides an apparatus. The apparatus includes a linear address space, metadata logic and enhanced address space layout randomization (ASLR) logic. The linear address space includes a metadata data structure. The metadata logic is to generate a metadata value. The enhanced ASLR logic is to combine the metadata value and a linear address into an address pointer and to store the metadata value to the metadata data structure at a location pointed to by at least a portion of the linear address. The address pointer corresponds to an apparent address in an enhanced address space. A size of the enhanced address space is greater than a size of the linear address space. | 2019-08-01 |
20190235939 | HEARTBEAT FAILURE DETECTION - A heartbeat monitor detects a heartbeat failure by accumulating overage time beyond an expected time interval for each heartbeat in a sliding window of time for a connection. The connection is considered unreliable when the total overage time exceeds a threshold. The total overage time is determined by accumulating all overage time beyond the expected interval over a sliding window of time. In an illustrated example, the heartbeat monitor resides in a hypervisor to track a heartbeat of a network link to provide failover capability to a backup when the network link is no longer reliable. | 2019-08-01 |
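The heartbeat-failure entry above accumulates per-heartbeat overage (time beyond the expected interval) over a sliding window and declares the connection unreliable once the total exceeds a threshold. A minimal sketch of that accumulation, with illustrative class and parameter names:

```python
from collections import deque

class HeartbeatMonitor:
    """Sketch: track overage beyond the expected heartbeat interval
    within a sliding time window; the link is unreliable when the
    accumulated overage exceeds the threshold."""
    def __init__(self, expected_interval, window, threshold):
        self.expected = expected_interval   # seconds between heartbeats
        self.window = window                # sliding window length, seconds
        self.threshold = threshold          # max tolerated total overage
        self.samples = deque()              # (timestamp, overage) pairs

    def record(self, timestamp, actual_interval):
        overage = max(0.0, actual_interval - self.expected)
        self.samples.append((timestamp, overage))
        # evict samples that have aged out of the window
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def reliable(self):
        return sum(o for _, o in self.samples) <= self.threshold
```

Accumulating overage rather than counting missed beats lets a run of slightly-late heartbeats trip the failover, while a single late beat does not.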
20190235940 | SELF-REGULATING POWER MANAGEMENT FOR A NEURAL NETWORK SYSTEM - A neural network runs a known input data set using an error free power setting and using an error prone power setting. The differences in the outputs of the neural network using the two different power settings determine a high level error rate associated with the output of the neural network using the error prone power setting. If the high level error rate is excessive, the error prone power setting is adjusted to reduce errors by changing voltage and/or clock frequency utilized by the neural network system. If the high level error rate is within bounds, the error prone power setting can remain, allowing the neural network to operate with an acceptable error tolerance and improved efficiency. The error tolerance can be specified by the neural network application. | 2019-08-01 |
20190235941 | SELF-MONITOR FOR COMPUTING DEVICES OF A DISTRIBUTED COMPUTING SYSTEM - Systems and methods are disclosed for monitoring features of a computing device of a distributed computing system using a self-monitoring module. The self-monitoring module can include multiple feature-specific monitoring modules and one or more parent nodes for the feature-specific monitoring modules. A feature-specific monitoring module can identify or detect a fault status change, such as a fault condition or fault resolution, for one or more features. Based on the identified fault conditions or fault resolutions, the feature-specific monitoring module can determine an internal status and communicate an updated status to a parent node. | 2019-08-01 |
20190235942 | INDIVIDUAL BUG FIXED MESSAGES FOR SOFTWARE USERS - A method provides individual bug-fixed messages for software users and includes determining an occurrence of an error in software executing on a user processor. A unique error report identifier is stored in a memory accessible by the user processor and the error is reported. The reporting includes transmitting the unique error report identifier and error data that describes the error to a developer server. The error data is analyzed to determine a fix to correct the error. A message regarding the fix to correct the error is stored in a fixed error database. The software is launched and it is determined that the error was previously reported. The fixed error database is queried by the software with the unique error report identifier to locate the message. Based on locating the message, the message is downloaded and displayed by the user processor. | 2019-08-01 |
20190235943 | QUANTITATIVE SOFTWARE FAILURE MODE AND EFFECTS ANALYSIS - Systems and methods may be used to perform a software failure mode and effects analysis (SW FMEA) for a software component. The SW FMEA may include a quantitative approach, for example based on a risk priority number for the software component. The risk priority number may be based on a severity of a failure in the software component, an occurrence likelihood of a failure in the software component, or a detectability of a failure in the software component. A safety integrity level may be determined for the software component based on the risk priority number. | 2019-08-01 |
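The SW FMEA entry above derives a risk priority number from severity, occurrence, and detectability and maps it to a safety integrity level. In classical FMEA practice the risk priority number is the product of the three ratings, each on a 1-10 scale; the banding into integrity levels below is purely illustrative, since the patent does not disclose concrete thresholds:

```python
def risk_priority_number(severity, occurrence, detectability):
    """Classical FMEA risk priority number: product of three ratings,
    each conventionally scored 1 (best) to 10 (worst)."""
    for rating in (severity, occurrence, detectability):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be in 1..10")
    return severity * occurrence * detectability

def safety_integrity_level(rpn):
    """Hypothetical banding of RPN into an integrity level; the cutoffs
    here are illustrative assumptions, not from the patent."""
    if rpn >= 500:
        return "SIL-high"
    if rpn >= 100:
        return "SIL-medium"
    return "SIL-low"
```

For example, a severity of 8, occurrence of 5, and detectability of 6 yields an RPN of 240, falling into the middle band under these assumed cutoffs.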
20190235944 | Anomaly Detection using Circumstance-Specific Detectors - The technology disclosed relates to learning how to efficiently display anomalies in performance data to an operator. In particular, it relates to assembling performance data for a multiplicity of metrics across a multiplicity of resources on a network and training a classifier that implements at least one circumstance-specific detector used to monitor a time series of performance data or to detect patterns in the time series. The training includes producing a time series of anomaly event candidates including corresponding event information used as input to the detectors, generating feature vectors for the anomaly event candidates, selecting a subset of the candidates as anomalous instance data, and using the feature vectors for the anomalous instance data and implicit and/or explicit feedback from users exposed to a visualization of the monitored time series annotated with visual tags for at least some of the anomalous instance data to train the classifier. | 2019-08-01 |
20190235945 | PREVENTING CASCADE FAILURES IN COMPUTER SYSTEMS - A method prevents a cascading failure in a complex stream computer system. The method includes receiving binary data that identifies multiple subcomponents in a complex stream computer system. These identified multiple subcomponents include upstream subcomponents that generate multiple outputs and a downstream subcomponent that executes a downstream computational process that uses the multiple outputs. The method dynamically adjusts which of multiple inputs are used by the downstream subcomponent in an attempt to generate an output from the downstream subcomponent that meets a predefined trustworthiness level for making a first type of prediction. If no variations of execution of one or more functions used by the downstream subcomponent ever produce an output that meets the predefined trustworthiness level for making a first type of prediction, then computer hardware executes a new downstream computational process that produces a different second type of prediction. | 2019-08-01 |
20190235946 | DISTRIBUTED SYSTEM, MESSAGE PROCESSING METHOD, NODES, CLIENT, AND STORAGE MEDIUM - The present disclosure discloses a distributed system and a message processing method. The distributed system includes a client and a plurality of nodes. The client includes processing circuitry that is configured to send a message including a digital signature of the client. The distributed system is in a first consensus mode for reaching a consensus on the message. The processing circuitry obtains results from a subset of the nodes that receive the message. The results have respective digital signatures of the subset of the nodes. After verifying the digital signatures of the subset of the nodes, the processing circuitry of the client determines, based on the results, whether one or more of the nodes in the distributed system is malfunctioning. | 2019-08-01 |
20190235947 | TRANSFER APPARATUS AND TRANSFER METHOD - A transfer apparatus for performing transmission and reception of data using a plurality of lanes includes: a transmission control unit configured to, upon receiving a transmission instruction for performing a data transfer in a redundant mode in which the same data is transferred using a plurality of lanes, output transmission data as first data and second data without renegotiation with another transfer apparatus; a first transmission unit configured to transmit the first data output by the transmission control unit via a first lane; and a second transmission unit configured to transmit the second data output by the transmission control unit via a second lane. | 2019-08-01 |
20190235948 | HARDWARE APPARATUSES AND METHODS FOR MEMORY CORRUPTION DETECTION - Methods and apparatuses relating to memory corruption detection are described. In one embodiment, a hardware processor includes an execution unit to execute an instruction to request access to a block of a memory through a pointer to the block of the memory, and a memory management unit to allow access to the block of the memory when a memory corruption detection value in the pointer is validated with a memory corruption detection value in the memory for the block, wherein a position of the memory corruption detection value in the pointer is selectable between a first location and a second, different location. | 2019-08-01 |
20190235949 | DATA ERROR DETECTION IN COMPUTING SYSTEMS - Embodiments of ensuring data integrity in computing devices and associated methods of operations are disclosed herein. In one embodiment, a method includes receiving, at a memory controller, a data request from a persistent storage to copy data from a memory. In response to the received data request, the requested data is retrieved from the memory. The retrieved data contains data bits and corresponding error correcting bits. The method can also include determining, at the memory controller, whether the retrieved data bits contain one or more data integrity errors based on the error correcting bits associated with the data bits. In response to determining that the retrieved data bits contain one or more data integrity errors, the memory controller can write data representing existence of the one or more data integrity errors into a memory location accessible by a processor for ensuring data integrity. | 2019-08-01 |
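The data-error-detection entry above checks retrieved data bits against stored error-correcting bits and records any mismatch for the processor to act on. A minimal stand-in using a single even-parity check bit (real memory controllers use stronger ECC such as SECDED Hamming codes; function names are illustrative):

```python
def parity_bit(data_bits):
    """Even parity over a list of 0/1 data bits: 0 if the count of
    ones is even, 1 if odd."""
    return sum(data_bits) % 2

def has_integrity_error(data_bits, stored_check_bit):
    """True when the recomputed parity disagrees with the check bit
    stored alongside the data, i.e. at least one bit flipped.
    Single-bit parity detects any odd number of flipped bits."""
    return parity_bit(data_bits) != stored_check_bit
```

On a mismatch, the controller in the abstract does not silently correct; it writes an error record to a location the processor can read, deferring the policy decision to software.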
20190235950 | DISPLAY AND DISPLAY SYSTEM - A display configured to detect an error in display data without a parallel-serial conversion circuit is provided. The display includes a display area, a control unit, and a plurality of first CRC circuits. The control unit receives whole display data to control the display area. The whole display data includes a plurality of unit display data and a plurality of CRC data. The plurality of unit display data are each composed of a predetermined count of bits. A count of the plurality of CRC data is identical to the predetermined count of bits. The plurality of first CRC circuits correspond to the respective plurality of CRC data. | 2019-08-01 |
20190235951 | APPARATUS AND CONTROL METHOD THEREOF - According to one embodiment, an apparatus is capable of exchanging a frame with an external apparatus in a packet mode of serial attached small computer system interface (SAS). The apparatus includes a controller configured to transmit a frame to the external apparatus, and to transmit a PACKET_SYNC extended binary primitive to the external apparatus when the frame is not correctly received by the external apparatus. | 2019-08-01 |
20190235952 | MEMORY SYSTEM AND METHOD - A memory system includes a plurality of memory cells and a controller. During a write operation to write data to the memory cells, the controller encodes first data to be written at a first code rate. During a read operation to read data from the memory cells, the controller decodes second data read from the memory cells at the first code rate. The controller changes the first code rate to a second code rate that is less than the first code rate upon determining that the number of error bits during the read operation of the second data is above a threshold number for error bits or upon determining that the number of memory cells having a threshold voltage that is in a voltage range that includes a read voltage is above a threshold number for memory cells. | 2019-08-01 |
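The memory-system entry above lowers the code rate (adding redundancy) when either of two triggers fires: too many error bits during a read, or too many cells with threshold voltages near the read voltage. A compact sketch of that rule, where the step size and floor are illustrative assumptions:

```python
def next_code_rate(current_rate, error_bits, error_bit_threshold,
                   cells_near_read_voltage, cell_threshold,
                   rate_step=0.05, min_rate=0.5):
    """Return the code rate to use for subsequent writes: drop to a
    lower (stronger) rate when either trigger from the abstract fires,
    otherwise keep the current rate. Step and floor are illustrative."""
    if (error_bits > error_bit_threshold
            or cells_near_read_voltage > cell_threshold):
        return max(min_rate, current_rate - rate_step)
    return current_rate
```

A lower code rate means fewer data bits per codeword and more parity, trading capacity for correction strength as the cells wear.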
20190235953 | SYSTEM AND METHOD FOR PROTECTING GPU MEMORY INSTRUCTIONS AGAINST FAULTS - A system and method for protecting memory instructions against faults are described. The system and method include converting the slave instructions to dummy operations, modifying a memory arbiter to issue up to N master and N slave global/shared memory instructions per cycle, sending master memory requests to the memory system, using slave requests for error checking, entering master requests into the GM/LM FIFO, storing slave requests in a register, and comparing the entered master requests with the stored slave requests. | 2019-08-01 |
20190235954 | MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME - Provided herein may be a memory controller and an operating method thereof. The memory controller may include: a read fail control circuit configured to perform, when the read operation fails, an assist read operation of determining optimal read voltages to be used to read the selected memory cells, and determine whether a threshold voltage distribution of the selected memory cells is an abnormal distribution based on read-related information obtained by the read operation and the assist read operation; and an error correction code (ECC) engine configured to perform an ECC decoding operation on hard decision data obtained by reading the selected memory cells using the optimal read voltages based on whether the threshold voltage distribution of the selected memory cells is the abnormal distribution. | 2019-08-01 |
20190235955 | ADJUSTING DISPERSED STORAGE ERROR ENCODING PARAMETERS BASED ON PATH PERFORMANCE - A method includes determining, by a computing device of a dispersed storage network (DSN), routing path performance information of a set of routing paths with respect to a set of storage units of the DSN. The method further includes adjusting a pillar width to decode threshold ratio of a dispersed storage error encoding function when the routing path performance information deviates from a performance threshold. The performance threshold includes a first error rate threshold and a second error rate threshold. The method further includes dispersed storage error encoding a data object using the adjusted pillar width to decode threshold ratio to produce a plurality of sets of encoded data slices. The method further includes sending the plurality of sets of encoded data slices to the set of storage units via the set of routing paths for storage therein. | 2019-08-01 |
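The dispersed-storage entry above adjusts the pillar width to decode threshold ratio when observed routing-path performance deviates from a threshold bounded by two error rates. A sketch of one plausible adjustment rule, widening the pillar width (more redundancy) on high error rates and narrowing it on low ones; the unit steps and guard are illustrative assumptions, not the patent's disclosed policy:

```python
def adjust_ratio(pillar_width, decode_threshold, error_rate,
                 first_error_threshold, second_error_threshold):
    """Adjust the pillar-width-to-decode-threshold ratio of the erasure
    code: add a pillar (slice per set) when the path error rate exceeds
    the first threshold; drop one when it falls below the second, while
    keeping at least one slice of redundancy above the decode threshold."""
    if error_rate > first_error_threshold:
        pillar_width += 1        # more redundancy to ride out lossy paths
    elif (error_rate < second_error_threshold
          and pillar_width > decode_threshold + 1):
        pillar_width -= 1        # reclaim overhead on healthy paths
    return pillar_width, decode_threshold
```

With a width of 16 and a decode threshold of 10, any 10 of 16 slices recover the data object; raising the width to 17 tolerates one more lost routing path at the cost of one extra slice per set.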
20190235956 | DATA STORAGE METHOD, APPARATUS, AND SYSTEM - A storage client needs to store to-be-written data into a distributed storage system, and storage nodes corresponding to a first data unit assigned for the to-be-written data by a management server are only some nodes in a storage node group. When receiving a status of the first data unit returned by the management server, the storage client may determine quantities of data blocks and parity blocks needing to be generated during EC coding on the to-be-written data. The storage client stores the generated data blocks and parity blocks into some storage nodes designated by the management server in a partition where the first data unit is located. Accordingly, dynamic adjustment of an EC redundancy ratio is implemented, and the management server may exclude some nodes in the partition from a storage range of the to-be-written data based on a requirement, thereby reducing a data storage IO amount. | 2019-08-01 |
20190235957 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA STORAGE IN DATA STORAGE SYSTEMS - Techniques are disclosed for managing data storage. In one embodiment, the techniques determine one or more RAID extents having a disk extent supported by an extent of storage on a storage device in an inoperative state. Each of the RAID extents contains a respective set of disk extents allocated to that RAID extent and each disk extent is supported by an extent of storage on a storage device of the set of storage devices. The techniques also comprise evaluating a set of values, wherein each value indicates, for a corresponding pair of storage devices from the set of storage devices, a number of RAID extents which contain disk extents belonging to both storage devices of the pair. The techniques also comprise selecting, based on said evaluation and for each of the one or more RAID extents, a free disk extent for facilitating rebuild of that RAID extent, wherein said free disk extent is supported by an extent of storage of one of the set of storage devices other than one of the storage devices associated with that RAID extent. | 2019-08-01 |