4th week of 2020 patent application highlights part 43 |
Patent application number | Title | Published |
20200026508 | FAULT RESISTANT 24x7 TOPOLOGY FOR BUSINESS PROCESS MANAGEMENT ECOSYSTEM - A system and computer-implemented method for providing a load-balanced server architecture to end users and allowing software on the server architecture to be updated without downtime during a transition to the new software version. Run-time errors due to incompatibilities between datatypes, interfaces, deserialization methods, and classes loaded by class loaders in object oriented server software may be avoided by using the system to track a software version used in association with a particular task. By routing requests related to a particular task to a particular server running the same software version, compatibility is maintained and efforts to migrate data across software servers or add code to handle cross-version compatibility are unnecessary. | 2020-01-23 |
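The version-affinity routing described in this abstract can be sketched roughly as follows; the class and server names are hypothetical, not taken from the patent:

```python
# Hypothetical sketch of version-affinity routing: requests for a task are
# pinned to a server running the software version the task was started under,
# avoiding cross-version datatype/deserialization incompatibilities.

class VersionRouter:
    def __init__(self, servers):
        # servers: mapping of server name -> software version it runs
        self.servers = servers
        self.task_version = {}   # task id -> version it was started under

    def start_task(self, task_id, version):
        self.task_version[task_id] = version

    def route(self, task_id):
        """Return a server whose version matches the task's pinned version."""
        version = self.task_version[task_id]
        for name, v in self.servers.items():
            if v == version:
                return name
        raise LookupError(f"no server runs version {version}")

router = VersionRouter({"srv-a": "1.0", "srv-b": "2.0"})
router.start_task("task-1", "1.0")   # started before the upgrade
router.start_task("task-2", "2.0")   # started after the upgrade
print(router.route("task-1"))  # srv-a
print(router.route("task-2"))  # srv-b
```

Because each task only ever sees one software version, no data migration or cross-version compatibility code is needed during a rolling upgrade.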
20200026509 | METHOD AND DEVICE FOR UPDATING A PROGRAM - A method for updating a program in a flash memory includes executing a first image of the program while an address space of the program is imaged onto the memory blocks, which are operated in a single-level mode; copying part of the first image from a range within the address space, which is imaged onto one of the blocks, into a backup block; setting the one of the blocks to a multi-level mode; while the address range is imaged onto the backup block, programming the one of the blocks with part of a second image in addition to the part of the first image; switching the address range back to the block while the block remains in the multi-level mode; as long as the second image is incomplete, repeating the copying, programming, and switching with further parts of the second image; and subsequently executing the second image instead of the first image. | 2020-01-23 |
20200026510 | SYSTEM AND METHOD FOR DISTRIBUTED LEDGER-BASED SOFTWARE SUPPLY CHAIN MANAGEMENT - Systems and methods for distributed ledger-based software supply chain management are disclosed. According to one embodiment, in an information processing apparatus comprising at least one computer processor, a method for distributed ledger-based software supply chain management may include: (1) receiving, from a software tool, a metadata artifact for a software development lifecycle event; (2) writing the metadata artifact to a metadata store; and (3) updating a present state database with values for metadata keys referencing the metadata artifact in the metadata store. | 2020-01-23 |
20200026511 | SOURCE CODE FILE RECOMMENDATION NOTIFICATION - A computing device is provided, including a non-volatile storage device and a processor configured to execute a distributed version control system. The processor may, via the distributed version control system, receive a pull request to apply a first set of one or more source code files to a project database. In response to receiving the pull request, the processor may identify a second set of one or more source code files based at least in part on a recommendation ruleset including one or more association rules identified for a plurality of training pull requests applied to a training project database. The recommendation ruleset may be determined based at least in part on a respective last iteration of each training pull request. The processor may output a source code file recommendation notification including an indication of each source code file of the second set. | 2020-01-23 |
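A minimal sketch of the association-rule recommendation above, assuming "files changed together in past pull requests" as the rule source; the function names and support threshold are illustrative, and the patent's actual ruleset derivation is more involved:

```python
# Mine simple association rules from past pull requests and recommend
# companion files for a new pull request.

from itertools import combinations
from collections import Counter

def mine_rules(training_prs, min_support=2):
    """Count file pairs co-occurring in the last iteration of each training PR."""
    pair_counts = Counter()
    for files in training_prs:
        for a, b in combinations(sorted(files), 2):
            pair_counts[(a, b)] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_support}

def recommend(rules, pr_files):
    """Files associated with the PR's files but not already in the PR."""
    suggestions = set()
    for a, b in rules:
        if a in pr_files and b not in pr_files:
            suggestions.add(b)
        if b in pr_files and a not in pr_files:
            suggestions.add(a)
    return suggestions

history = [{"api.c", "api.h"}, {"api.c", "api.h", "docs.md"}, {"main.c"}]
rules = mine_rules(history)
print(recommend(rules, {"api.c"}))  # {'api.h'}
```

A pull request touching `api.c` triggers a notification suggesting `api.h`, since the two files historically change together.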
20200026512 | OPEN-SOURCE-LICENSE ANALYZING METHOD AND APPARATUS - Embodiments of the present disclosure relate to the field of computer technologies and, in particular, to an open-source-license analyzing method and apparatus, including: receiving a file-to-be-tested and a planning condition; detecting an open-source license involved in the file-to-be-tested; matching the detected open-source license with the planning condition to determine a first conflict between the detected open-source license and the planning condition; and generating a first risk assessment report based on the first conflict. The embodiments of the present disclosure are used to analyze and evaluate the risk of using open-source licenses. | 2020-01-23 |
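The conflict-matching step above can be sketched as follows; the policy structure (a "forbidden" list) and the naive substring detection are assumptions, as the patent does not specify the planning-condition format:

```python
# Match detected open-source licenses against a planning condition and
# produce a simple risk assessment.

def detect_licenses(file_text):
    """Naive detection: look for well-known license identifiers in the file."""
    known = ["GPL-3.0", "AGPL-3.0", "MIT", "Apache-2.0"]
    return [lic for lic in known if lic in file_text]

def assess(file_text, planning_condition):
    detected = detect_licenses(file_text)
    conflicts = [lic for lic in detected
                 if lic in planning_condition.get("forbidden", [])]
    return {"detected": detected, "conflicts": conflicts,
            "risk": "high" if conflicts else "low"}

policy = {"forbidden": ["GPL-3.0", "AGPL-3.0"]}
report = assess("Licensed under GPL-3.0 and MIT terms.", policy)
print(report["conflicts"], report["risk"])  # ['GPL-3.0'] high
```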
20200026513 | TECHNIQUES FOR DECOUPLED ACCESS-EXECUTE NEAR-MEMORY PROCESSING - Techniques for decoupled access-execute near-memory processing include examples of first or second circuitry of a near-memory processor receiving instructions that cause the first circuitry to implement system memory access operations to access one or more data chunks and the second circuitry to implement compute operations using the one or more data chunks. | 2020-01-23 |
20200026514 | HIERARCHICAL GENERAL REGISTER FILE (GRF) FOR EXECUTION BLOCK - In an example, an apparatus comprises a plurality of execution units, and a first general register file (GRF) communicatively coupled to the plurality of execution units, wherein the first GRF is shared by the plurality of execution units. Other embodiments are also disclosed and claimed. | 2020-01-23 |
20200026515 | SYSTEMS, APPARATUSES, AND METHODS FOR FUSED MULTIPLY ADD - In some embodiments, packed data elements of first and second packed data source operands are of a first, different size than a second size of packed data elements of a third packed data operand. Execution circuitry executes a decoded single instruction to perform, for each packed data element position of a destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N. | 2020-01-23 |
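A worked example of the packed fused multiply-add described above, using plain Python lists in place of SIMD registers: byte (N = 8-bit) elements in sources 1 and 2, dword (32-bit) elements in source 3, so M = 32 / 8 = 4 byte-products are accumulated per destination dword:

```python
# Simulate the abstract's packed multiply-add: for each dword position j of
# the third source, multiply the m byte pairs from sources 1 and 2 that line
# up with j, sum the products, and add the sum to the dword accumulator.

def packed_fma(src1, src2, src3, m=4):
    dest = []
    for j, acc in enumerate(src3):
        pairs = zip(src1[j*m:(j+1)*m], src2[j*m:(j+1)*m])
        total = acc + sum(a * b for a, b in pairs)
        dest.append(total & 0xFFFFFFFF)   # wrap to 32 bits
    return dest

src1 = [1, 2, 3, 4, 5, 6, 7, 8]    # eight 8-bit elements
src2 = [1, 1, 1, 1, 2, 2, 2, 2]
src3 = [100, 200]                   # two 32-bit accumulators
print(packed_fma(src1, src2, src3))  # [110, 252]
```

Position 0 accumulates 1+2+3+4 = 10 onto 100; position 1 accumulates 10+12+14+16 = 52 onto 200.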
20200026516 | Systems and Methods For Rendering Vector Data On Static And Dynamic-Surfaces Using Screen Space Decals And A Depth Texture - Systems, methods, devices, and non-transitory media of various embodiments render vector data on static and dynamic surfaces by a computing device for a graphic display or for a separate computing device and/or algorithm to generate an image. Complex vector data associated with a surface for rendering may be rendered. The complex vector data may be decomposed into one or more vector subunits. A geometry corresponding to a volume and a mathematical description of an extrusion of each corresponding vector subunit may be generated. The volume and the mathematical description of the extrusion may intersect a surface level-of-detail of the surface. The geometry may be rasterized as a screen-space decal. Also, a surface depth texture may be compared for the surface against the extrusion using at least the screen-space decal. In addition, geometry batching may be performed for drawing simultaneously a plurality of the one or more vector subunits. | 2020-01-23 |
20200026517 | SURFACE DEVICES WITHIN A VERTICAL POWER DEVICE - A semiconductor device comprises a vertical power device, such as a superjunction MOSFET, an IGBT, a diode, and the like, and a surface device that comprises one or more lateral devices that are electrically active along a top surface of the semiconductor device. | 2020-01-23 |
20200026518 | PACKED DATA ELEMENT PREDICATION PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes a first mode where the processor is not to use packed data operation masking, and a second mode where the processor is to use packed data operation masking. A decode unit to decode an unmasked packed data instruction for a given packed data operation in the first mode, and to decode a masked packed data instruction for a masked version of the given packed data operation in the second mode. The instructions have a same instruction length. The masked instruction has bit(s) to specify a mask. Execution unit(s) are coupled with the decode unit. The execution unit(s), in response to the decode unit decoding the unmasked instruction in the first mode, to perform the given packed data operation. The execution unit(s), in response to the decode unit decoding the masked instruction in the second mode, to perform the masked version of the given packed data operation. | 2020-01-23 |
20200026519 | PROCESSOR TRACE EXTENSIONS TO FACILITATE REAL-TIME SECURITY MONITORING - Embodiments described herein provide for a computing device comprising a hardware processor including a processor trace module to generate trace data indicative of an order of instructions executed by the processor, wherein the processor trace module is configurable to selectively output a processor trace packet associated with execution of a selected non-deterministic control flow transfer instruction. | 2020-01-23 |
20200026520 | SPECULATIVE EXECUTION OF BOTH PATHS OF A WEAKLY PREDICTED BRANCH INSTRUCTION - Systems, methods, and computer-readable media are described for performing speculative execution of both paths/branches of a weakly predicted branch instruction. A branch instruction may be fetched from an instruction queue and determined to be a weakly predicted branch instruction, in which case, both paths of the branch instruction may be dispatched and speculatively executed. When the actual path taken becomes known, instructions corresponding to the path not taken may be flushed. Instructions from both paths of a weakly predicted branch instruction that are speculatively executed may be dispatched and executed in an interleaved manner. | 2020-01-23 |
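A toy model of the idea above: when the branch prediction confidence is below a threshold, dispatch both paths in interleaved order, then flush the not-taken path once the branch resolves. The threshold and data structures are purely illustrative:

```python
# Simulate dual-path dispatch for a weakly predicted branch.

WEAK_THRESHOLD = 0.6

def dispatch(branch_confidence, taken_path, not_taken_path, actual_taken):
    if branch_confidence < WEAK_THRESHOLD:
        # Weak prediction: speculatively interleave instructions of both paths.
        speculative = [i for pair in zip(taken_path, not_taken_path) for i in pair]
        keep = taken_path if actual_taken else not_taken_path
        flushed = [i for i in speculative if i not in keep]
        return keep, flushed
    # Strong prediction: follow the predicted path only.
    return taken_path, []

kept, flushed = dispatch(0.5, ["t1", "t2"], ["n1", "n2"], actual_taken=False)
print(kept, flushed)  # ['n1', 'n2'] ['t1', 't2']
```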
20200026521 | INSTRUCTION COMPLETION TABLE CONTAINING ENTRIES THAT SHARE INSTRUCTION TAGS - Systems, methods, and computer-readable media are described for performing instruction execution using an instruction completion table (ICT) that is configured to accommodate shared ICT entries. A shared ICT entry maps to multiple instructions such as, for example, two instructions. Each shared ICT entry may be referenced by an even instruction tag (ITAG) and an odd ITAG that correspond to respective instructions that have been grouped together in the shared ICT entry. The instructions corresponding to a given shared ICT entry can be executed and finished independently of one another. A shared ICT entry is completed when each execution of each instruction corresponding to the shared ICT entry has finished and when all prior ICT entries have completed. Also described herein are systems, methods, and computer-readable media for flushing instructions in shared ICT entries in response to execution of a branch instruction. | 2020-01-23 |
20200026522 | Microprocessor Code Stitching - Techniques and computing devices related to modifying images are provided. A computing device can receive an order to modify pixels of an image. The computing device can include at least a pixel processor and software snippets that are executable on the pixel processor. The computing device can determine parameter values based on the order. The computing device can select a set of software snippets from the software snippets based on the parameter values. The computing device can load the set of software snippets onto the pixel processor. The pixel processor can execute the loaded set of software snippets to modify the pixels. The computing device can generate an output that includes a depiction of the image that includes at least one of the modified pixels. | 2020-01-23 |
20200026523 | SYSTEM AND METHOD FOR LIMITING MAXIMUM RUN TIME FOR AN APPLICATION - An application or other non-transitory computer-readable medium for storing instructions is disclosed. The instructions are executed by at least one processing device which is configured to store at least one user preference. The at least one user preference comprises a selection of one or more monitored applications and one or more warning thresholds corresponding to the one or more monitored applications. The one or more warning thresholds comprise a user selected time period. Further, time information associated with a use of the one or more monitored applications, is obtained. The time information is determined according to an accessibility event notifications function, on an operating system of a device. Further, the at least one user preference and the obtained time information are compared to determine whether the one or more warning thresholds have been exceeded. Thereafter, a notification is delivered to the user, upon exceeding the one or more warning thresholds. | 2020-01-23 |
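The threshold check above reduces to comparing per-app usage time (as the accessibility event notifications would report it) against the user-selected limits; the function and field names here are made up for illustration:

```python
# Compare monitored-app usage against user-selected warning thresholds and
# report which apps should trigger a notification.

def check_thresholds(preferences, usage_minutes):
    """Return the apps whose warning threshold has been exceeded."""
    exceeded = []
    for app, limit in preferences.items():
        if usage_minutes.get(app, 0) > limit:
            exceeded.append(app)
    return exceeded

prefs = {"video_app": 30, "game_app": 60}       # minutes per day
usage = {"video_app": 45, "game_app": 20}
for app in check_thresholds(prefs, usage):
    print(f"Notification: {app} exceeded its daily limit")
```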
20200026524 | SYSTEM AND METHOD FOR PERFORMING AN IMAGE-BASED UPDATE - A target device operating on a first operating system can receive an incremental update for a second operating system and store in a first data set a snapshot of the second operating system based on the incremental update. The target device may then export an image of the second operating system to a second data set and boot into the second image. The target device may receive operating system updates, build operating system images in the background, and boot into the updated operating system when the process is complete. Storing snapshots of the incremental updates and previous operating system images allows for reversion to old operating systems. | 2020-01-23 |
20200026525 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR DUAL-PROCESSOR STORAGE SYSTEM - In accordance with certain techniques, at a first processor of a dual-processor storage system, a change in an initial logical unit corresponding to a storage area in a physical storage device of the storage system is detected. Based on the change in the initial logical unit, a plurality of update operations to be performed on a mapped logical unit driver mapping a plurality of initial logical units including the initial logical unit to a plurality of mapped logical units are determined. An indication of the plurality of update operations is sent to a second processor of the storage system, to cause the second processor to perform the plurality of update operations on a peer mapped logical unit driver associated with the mapped logical unit driver. Accordingly, there is improved performance of the dual-processor storage system. | 2020-01-23 |
20200026526 | METHOD, DEVICE, AND STORAGE MEDIUM FOR PROCESSING DRIVER ON TERMINAL DEVICE SIDE - Method, device, and storage medium for processing a driver on a terminal device side are provided. A driver processing method includes displaying a driver installation interface. The driver installation interface provides a connection method prompt message prompting a user to make a connection between a terminal device and a peripheral device. | 2020-01-23 |
20200026527 | CLOUD INTEGRATION OF APPLICATION RUNTIME - A system includes reception of a first request from a client application to create an application runtime associated with a tenant of the client application, creation, in response to the first request, of metadata describing configuration information of the application runtime, reception of a second request from the client application to start a session of the application runtime, starting, in response to the second request, of a first application runtime instance in a first container of a first virtual machine based on the configuration information of the application runtime, and return of first connection information to the client application, the first connection information usable by the client application to communicate with the first application runtime instance. | 2020-01-23 |
20200026528 | MESSAGE BASED DISCOVERY AND MANAGEMENT OF APPLICATIONS - A system can receive a message intended to be received by a device. The system can implement an application discovery service to identify keywords in the message. The keywords can be used to determine what applications are required to access content in the message. The system can determine that a required application is not available on the device from a list of managed applications. The system can cause the required application to be made available on the device before, at the same time as, or after the device receives the message. | 2020-01-23 |
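The discovery step above can be sketched with a simple keyword-to-application table; the keyword table and app names are assumptions for illustration:

```python
# Map keywords found in a message to required applications, then report
# which required applications are missing from the device.

KEYWORD_APPS = {"spreadsheet": "sheets", "invoice": "pdf-viewer",
                "meeting": "calendar"}

def required_apps(message):
    words = message.lower().split()
    return {app for kw, app in KEYWORD_APPS.items() if kw in words}

def missing_apps(message, installed):
    return required_apps(message) - set(installed)

msg = "Please review the attached invoice before the meeting"
print(missing_apps(msg, installed=["calendar"]))  # {'pdf-viewer'}
```

The management service would then push `pdf-viewer` to the device so the content is accessible when the message arrives.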
20200026529 | APPLICATIONS START BASED ON TARGET DISTANCES - In example implementations, a method for starting a companion application on a mobile endpoint device and an apparatus for performing the same is provided. The method is performed by a processor of the mobile endpoint device. The method includes detecting that a distance between the mobile endpoint device and a main computer is less than a target distance. The distance is based on a signal strength of a wireless communication signal between the mobile endpoint device and the main computer. An application that is being executed on the main computer is detected. A companion application is started on the mobile endpoint device that is associated with the application. | 2020-01-23 |
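A common way to turn signal strength into an approximate distance is the log-distance path-loss model; the abstract only says the distance "is based on" signal strength, so this particular model and its calibration constants are assumptions:

```python
# Estimate distance from RSSI and start the companion app when the
# estimated distance falls below the target distance.

def estimate_distance(rssi, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Approximate distance in meters from a received signal strength (dBm)."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def should_start_companion(rssi, target_distance_m=2.0):
    return estimate_distance(rssi) < target_distance_m

print(round(estimate_distance(-59), 1))   # 1.0  (at the calibration point)
print(should_start_companion(-62))        # True: roughly 1.4 m away
print(should_start_companion(-79))        # False: roughly 10 m away
```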
20200026530 | TYPE-CONSTRAINED OPERATIONS FOR PLUG-IN TYPES - Techniques for performing type-constrained operations for plug-in types are disclosed. A runtime environment encounters a request to perform a type-constrained operation that requires evaluating a type constraint associated with a particular plug-in type. The runtime environment lacks sufficient native instructions to evaluate type constraints associated with plug-in types. The runtime environment accesses a plug-in type framework to obtain a particular type descriptor instance associated with the particular plug-in type. The plug-in type framework is designated, prior to encountering any request to perform the type-constrained operation, for obtaining type descriptor instances which define constraints on plug-in types, to an extent that any such constraints exist. The particular type descriptor instance defines a particular type constraint that does not match any single built-in type. The runtime environment performs the type-constrained operation, which comprises using the particular type descriptor instance to evaluate the particular type constraint associated with the particular plug-in type. | 2020-01-23 |
20200026531 | Apparatus and Method for Dynamic Modification of Machine Branding of Information Handling Systems Based on Hardware Inventory - An apparatus executes a boot operation, and determines a planar type associated with a motherboard. The apparatus queries an electronic database for the planar type associated with the motherboard, and identifies a branding identity that is electronically associated with the planar type. | 2020-01-23 |
20200026532 | METHOD AND USER INTERFACE FOR DATA MAPPING - Embodiments of apparatus, systems, and methods are described for creating, arranging, and displaying data mappings between two different data schemas in a graphical user interface (GUI). The GUI allows scaling of a data schema, automatic data sorting and grouping of objects in a schema, dynamic spacing of data mappings in the GUI, and customizable data map transformations to entities of a canonical data model. The GUI can limit the display of objects and fields to those that have been mapped into entity groups. The GUI can display mapped or unmapped fields to facilitate the mapping of additional fields or objects. The GUI displays visual logic connectors between objects and entities to summarize the relationship and number of mappings between the objects and entities. Objects and entities can be expanded and collapsed to show more granular relationship information. Instance-enabled canonical entities can be created to conceptually group fields. | 2020-01-23 |
20200026533 | Immediate Views - Techniques described herein provide an organization information distribution system. At least some implementations connect to a system server associated with an organization information distribution system over a communication network. In response to receiving a notification from the system server, various implementations present default content associated with operating in a default mode. The default content can be presented in a persistent manner and/or for a predetermined time period. Upon receiving a second notification from the system server, one or more implementations transition out of the default mode and present different content associated with the second notification, such as an immediate view, audio content, video content, and so forth. | 2020-01-23 |
20200026534 | METHOD FOR PROCESSING A USER INPUT AND MOTOR VEHICLE HAVING A DATA-PROCESSING DEVICE - The disclosure relates to a method for processing a user input in a vehicle, in which a suitable subset is selected, based on a result of an evaluation of the user input, from a superset of personal data records. For each data record of the subset, a frequency value is then determined corresponding to a respective frequency of use. The subset is output to the user in a manner adapted to the frequency values. To offer particularly efficient operability, the superset is retrieved from a mobile terminal and stored in a memory device of the vehicle before the subset is selected. To adapt the output to the user, the determined frequency values of the data records of the subset are checked against a predetermined selection criterion; when the criterion is fulfilled, the data record with the highest frequency value is designated as a VIP data record and marked and/or preselected in the output to the user. | 2020-01-23 |
20200026535 | Converting Presentations into and Making Presentations from a Universal Presentation Experience - A computer-based system processes presentations into a universal display protocol with full fidelity to the original files. The system launches a presentation utilizing native presentation software on a recorded machine, sends a series of common user actions to the presentation software, detects the beginning and end of each action's effect, and records each action's associated video segment for future playback. Playback involves playing the recorded video segments in response to user inputs received during that playback. | 2020-01-23 |
20200026536 | SYSTEMS AND METHODS FOR USER INTERFACE DETECTION - Embodiments described include systems and methods for user interface (UI) anomaly detection. One or more processors of a client device can execute an application undergoing UI anomaly detection. The application can be injected with a detection engine. The detection engine can determine, while executing as a thread of the application on the one or more processors of the client device, that a dimension of a text-designated region of a first user interface element of the application is less than that of corresponding text for rendering on the user interface element. The detection engine can provide, to a server responsive to the determination, an indication of a first UI anomaly. The indication can include information about a position and size of the first user interface element. | 2020-01-23 |
20200026537 | PROVIDING USER INTERFACE LANGUAGE OPTIONS - User interface (UI) language options are provided. A code statement in an object code of an application retrieves human language bundle value(s) for use in a UI element. Code is injected into the object code of the application to transmit a resource bundle name and key to the UI element for storage at a user value area of a control of the UI element. | 2020-01-23 |
20200026538 | MACHINE LEARNING PREDICTION OF VIRTUAL COMPUTING INSTANCE TRANSFER PERFORMANCE - The disclosure provides an approach for preventing the failure of virtual computing instance transfers across data centers. In one embodiment, a flow control module collects performance information primarily from components in a local site, as opposed to components in a remote site, during the transfer of a virtual machine (VM) from the local site to the remote site. The performance information that is collected may include various performance metrics, each of which is considered a feature. The flow control module performs feature preparation by normalizing feature data and imputing missing feature data, if any. The flow control module then inputs the prepared feature data into machine learning model(s) which have been trained to predict whether a VM transfer will succeed or fail, given the input feature data. If the prediction is that the VM transfer will fail, then remediation actions may be taken, such as slowing down the VM transfer. | 2020-01-23 |
20200026539 | Method and System for Implementing Virtual Machine (VM) Management Using Hardware Compression - Novel tools and techniques are provided for implementing virtual machine (“VM”) management, and, more particularly, to methods, systems, and apparatuses for implementing VM management using hardware compression. In various embodiments, a computing system might identify one or more first virtual machines (“VM's”) among a plurality of VM's that are determined to be currently inactive and might identify one or more second VM's among the plurality of VM's that are determined to be currently active. The computing system might compress a virtual hard drive associated with each of the identified one or more first VM's that are determined to be currently inactive. The computing system might also perform or continue to perform one or more operations using each of the identified one or more second VM's that are determined to be currently active. | 2020-01-23 |
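A minimal sketch of the compression step, assuming a "virtual hard drive" can be modeled as raw bytes: the disks of inactive VMs are compressed while active ones are left untouched. The data layout is hypothetical:

```python
# Compress the virtual hard drives of currently inactive VMs.

import zlib

def manage_vms(vms):
    """vms: list of dicts with 'name', 'active', and raw 'disk' bytes."""
    for vm in vms:
        if not vm["active"] and not vm.get("compressed"):
            vm["disk"] = zlib.compress(vm["disk"])
            vm["compressed"] = True
    return vms

fleet = [
    {"name": "vm1", "active": False, "disk": b"\x00" * 4096},
    {"name": "vm2", "active": True,  "disk": b"\x00" * 4096},
]
manage_vms(fleet)
print(len(fleet[0]["disk"]) < 4096, len(fleet[1]["disk"]) == 4096)  # True True
```

In practice the compression would be offloaded to hardware, as the title suggests; zlib stands in here only to make the sketch runnable.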
20200026540 | DYNAMIC PROVISIONING AND DELIVERY OF VIRTUAL APPLICATIONS - The disclosure provides an approach for mounting a virtual disk to a virtual computing instance (VCI). The method comprises obtaining a set of required applications for each VCI in a set of VCIs. The method comprises obtaining constraints of each VCI in the set of VCIs. The method further comprises determining pair-wise application overlap between each pair of VCIs of the set of VCIs, wherein the overlap complies with the constraints of the two VCIs for which the overlap is determined. The method also comprises placing applications of at least one of the application overlaps into a virtual disk file, associating the virtual disk with the virtual disk file, and mounting the virtual disk to a first VCI of the set of VCIs. | 2020-01-23 |
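The overlap step above amounts to set intersection per VCI pair, filtered by each VCI's constraints (modeled here, as an assumption, as a set of disallowed applications):

```python
# Compute constraint-respecting pair-wise application overlap between VCIs.

from itertools import combinations

def pairwise_overlap(required, constraints):
    """required/constraints: dicts of VCI name -> set of app names."""
    overlaps = {}
    for a, b in combinations(sorted(required), 2):
        shared = required[a] & required[b]
        shared -= constraints.get(a, set()) | constraints.get(b, set())
        if shared:
            overlaps[(a, b)] = shared
    return overlaps

required = {"vci1": {"chrome", "office"}, "vci2": {"chrome", "gimp"},
            "vci3": {"office"}}
constraints = {"vci2": {"office"}}
print(pairwise_overlap(required, constraints))
# {('vci1', 'vci2'): {'chrome'}, ('vci1', 'vci3'): {'office'}}
```

Applications in an overlap can then be packed into one shared virtual disk file and mounted to both VCIs.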
20200026541 | SYSTEM AND METHOD FOR DISTRIBUTED SECURITY FORENSICS - A host device and method for efficient distributed security forensics. The method includes creating, at a first host device configured to run a first virtualization entity, a first event index for the first virtualization entity; encoding at least one event related to the first virtualization entity; updating the first event index based on the encoded at least one event; and sending the first event index to a master console, wherein the master console is configured to receive a plurality of event indices created by a plurality of host devices with respect to a plurality of virtualization entities. | 2020-01-23 |
20200026542 | VIRTUAL NETWORK FUNCTION MANAGEMENT APPARATUS, VIRTUAL INFRASTRUCTURE MANAGEMENT APPARATUS, AND VIRTUAL NETWORK FUNCTION CONFIGURATION METHOD - A virtual network function management apparatus includes: a physical machine candidate query part configured to query, when creating a virtual network function, about a physical machine candidate in which a virtual machine configuring a virtual network function can be deployed, with respect to a virtual infrastructure management apparatus that manages a virtual infrastructure configured by using 2 or more types of physical machine; a physical machine selection part configured to select a physical machine that can satisfy performance required by the virtual network function, from among physical machine candidates received from the virtual infrastructure management apparatus; and a virtual machine creation instruction part configured to instruct the virtual infrastructure management apparatus to specify the selected physical machine and create a virtual machine configuring the virtual network function. | 2020-01-23 |
20200026543 | METHOD AND SYSTEM OF HYDRATING OF VIRTUAL MACHINES - Embodiments disclosed herein generally relate to a method and system for automatically updating a virtual machine image of one or more virtual machines of an auto-scaling group. A computing system receives an indication to update a virtual machine image of a plurality of virtual machines in a plurality of auto-scaling groups. The computing system identifies a subset of the plurality of auto-scaling groups that contains a hydration tag. The computing system locates a version of the virtual machine image different from a current version of the virtual machine image. For each auto-scaling group in the subset of auto-scaling groups, the computing system clones a launch configuration for the virtual machines in the auto-scaling group. The computing system stores data associated with each auto-scaling group in a remote location. The computing system updates the virtual machine image of the virtual machines in each auto-scaling group with the new version of the virtual machine image. | 2020-01-23 |
20200026544 | HYPERVISOR EXCHANGE WITH VIRTUAL-MACHINE CONSOLIDATION - A hypervisor exchange, e.g., an upgrade, can include consolidating resident virtual machines into a single host virtual machine, exchanging an old hypervisor with a new (upgraded) hypervisor, and disassociating the resident virtual machines by migrating them to the new hypervisor. The consolidating can involve migrating the resident virtual machines from the old hypervisor to a guest hypervisor on the host virtual machine. The exchange can involve: 1) suspending the host virtual machine before the exchange; and 2) resuming the host virtual machine after the exchange; or migrating the host virtual machine from a partition including the old hypervisor to a partition hosting the new hypervisor. Either way, an exchange (upgrade) is achieved without requiring a bandwidth-consuming migration over a network to a standby machine. | 2020-01-23 |
20200026545 | CONTAINER LOGIN METHOD, APPARATUS, AND STORAGE MEDIUM - A container login method, a container login apparatus, and a storage medium are provided. In an example embodiment, a target container login request from a browser is received; a first connection between a server and the browser is established based on the target container login request; an address of a control node corresponding to a container cluster in which a target container is located is obtained based on an identifier of the container cluster; and a second connection between the server and the target container is established based on the address of the control node and an identifier of the target container, to log in to the target container. | 2020-01-23 |
20200026546 | METHOD AND APPARATUS FOR CONTROLLING VIRTUAL MACHINE RELATED TO VEHICLE - One or more of an autonomous vehicle, a user terminal, and a server of the present disclosure may be linked or converged with an artificial intelligence (AI) module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, virtual reality (VR), a 5G service-related device, and the like. A method for providing information in a server according to an embodiment of the present disclosure includes receiving a request message including information related to generation of a virtual machine (VM) from an operating apparatus, generating a VM corresponding to the operating apparatus based on the request message, receiving information acquired at the operating apparatus, performing computation corresponding to the acquired information by use of the VM, and transmitting information related to a result of the computation to the operating apparatus. | 2020-01-23 |
20200026547 | GENERATING A VIRTUAL MACHINES RELOCATION PROTOCOL - Relocation of virtual machines is facilitated by obtaining, by a first controller, first power-related information from a first power system set that provides power to a first set of powered hardware components, where the first set of powered hardware components are running a first plurality of virtual machines. The first controller generates a relocation protocol for migrating the first plurality of virtual machines based, at least in part, upon the first power-related information. The relocation protocol includes: a migration of a first subset of one or more virtual machines so that the first subset of virtual machine(s) is to be migrated to and run on a second set of powered hardware components in a manner such that the first subset of virtual machine(s) continues to operate in a substantially continuous manner through the migration; and a snapshotting of a second subset of one or more virtual machines. | 2020-01-23 |
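A hedged sketch of the protocol generation: split the affected VMs into a subset that is live-migrated (and keeps running through the move) and a subset that is snapshotted. The priority attribute and capacity limit are assumptions; the patent bases the split on power-related information:

```python
# Build a relocation protocol partitioning VMs into live-migrate and
# snapshot subsets, bounded by the target's capacity.

def build_relocation_protocol(vms, capacity_on_target):
    """vms: list of (name, priority); higher priority is migrated first."""
    by_priority = sorted(vms, key=lambda v: v[1], reverse=True)
    live_migrate = [n for n, _ in by_priority[:capacity_on_target]]
    snapshot = [n for n, _ in by_priority[capacity_on_target:]]
    return {"live_migrate": live_migrate, "snapshot": snapshot}

protocol = build_relocation_protocol(
    [("db", 9), ("web", 5), ("batch", 1)], capacity_on_target=2)
print(protocol)  # {'live_migrate': ['db', 'web'], 'snapshot': ['batch']}
```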
20200026548 | BLOCKCHAIN SHARDING WITH PARALLEL THREADS - A method comprises receiving from a distributed app (dApp), a shard creation transaction in a blockchain block of a blockchain, the block comprising multiple shards; collecting, with a join block in the blockchain, transactions, the join block adjacent the blockchain block; encapsulating the shard creation transaction; applying the block including the shard creation transaction to yield a new shard in the block; and broadcasting the block. | 2020-01-23 |
20200026549 | TRANSACTION SCHEDULING FOR A USER DATA CACHE BY ASSESSING UPDATE CRITERIA - Transaction scheduling is described for a user data cache by assessing update criteria. In one example, an event records memory stores a list of events each corresponding to performance of a transaction at a remote resource for a user. The memory has criteria for each event and a criterion value for each criterion and event combination. An event manager assesses criteria for each event by performing an operation on the stored criterion value for each criterion and event combination, assigning a score for each criterion and event combination, and compiling the assigned scores to generate a composite score for each event. The events are ordered based on the respective composite scores and executed in the ordered sequence by performing a corresponding transaction at the remote resource. Updated criterion values are stored for executed events. | 2020-01-23 |
20200026550 | Dataflow Execution Time Estimation for In-Memory Distributed Processing Framework - Techniques are provided for dataflow execution time estimation for distributed processing frameworks. An exemplary method comprises: obtaining an input dataset for a dataflow for execution; determining a substantially minimal data unit for a given operation of the dataflow processed by the given operation; estimating a number of rounds required to execute a number of data units in the input dataset using nodes assigned to execute the given operation; determining an execution time spent by the given operation to process one data unit; estimating the execution time for the given operation based on the execution time spent by the given operation to process one data unit and the number of rounds required to execute the number of data units in the input dataset; and executing the given operation with the input dataset. A persistent cost model is optionally employed to record the execution times of known dataflow operations. | 2020-01-23 |
20200026551 | QUANTUM HYBRID COMPUTATION - Technologies are described herein to implement quantum hybrid computations. Embodiments include receiving a hybrid program, assigning respective functions corresponding to the hybrid program to either of CPU processing or QPU processing, scheduling processing for the respective functions, initiating execution of the hybrid program, and collating results of the execution of the classical-quantum hybrid program. | 2020-01-23 |
20200026552 | METHOD AND APPARATUS FOR MANAGING EFFECTIVENESS OF INFORMATION PROCESSING TASK - Disclosed are a method and apparatus for managing effectiveness of an information processing task in a decentralized data management system. The method comprises: sending requests for multiple information processing tasks by a client to multiple execution subjects, transmitting information processing tasks in a sequential information processing task list in an order to the multiple execution subjects; caching the requested information processing tasks to a task cache queue, caching the sequential information processing task list as a whole to the task cache queue; judging whether each information processing task in the task cache queue satisfies a predetermined conflict condition; moving the information processing task to a conflict task queue if it is determined that the task satisfies the predetermined conflict condition, deleting the task from the conflict task queue and caching the task to the task cache queue when the predetermined conflict condition is not satisfied. | 2020-01-23 |
20200026553 | METHOD AND APPARATUS FOR PROCESSING DATA - Embodiments of the present disclosure disclose a method and apparatus for processing data. A specific embodiment of the method comprises: acquiring a to-be-adjusted number of target execution units, the target execution unit referring to a unit executing a target program segment in a stream computing system; adjusting a number of the target execution units in the stream computing system based on the to-be-adjusted number; determining, for a target execution unit in at least one target execution unit after the adjustment, an identifier set corresponding to the target execution unit, an identifier in the identifier set being used to indicate to-be-processed data; and processing, through the target execution unit, the to-be-processed data indicated by the identifier in the corresponding identifier set. | 2020-01-23 |
20200026554 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus switches alteration detection processing depending on timing of execution of alteration detection to perform alteration detection processing for each file to be accessed to detect an alteration in an extended application, and switches alteration detection execution determination processing depending on a type of access to an extended application package. | 2020-01-23 |
20200026555 | METHOD TO SET UP AND TEAR DOWN CLOUD ENVIRONMENTS BASED ON A SCHEDULE OBTAINED FROM ONE OR MORE HOSTED CALENDARS - Described embodiments automatically and dynamically deploy and undeploy virtual computing environments by using a schedule obtained from a user's personal, work, or shared team calendars. By using data such as out-of-office or auto-reply statuses, calendar items marked as “Out of Office”, and calendar items with specific keywords, the system may dynamically determine when a user is likely to be “present” (or capable of accessing or likely to access a virtual computing environment) or “absent” (or incapable of accessing or unlikely to access the virtual computing environment). The virtual computing environment may be dynamically deployed or undeployed responsive to and/or in anticipation of a user's presence or absence, providing cost savings and reduced bandwidth, power, and processor consumption, without increasing user frustration or requiring extra tasks. | 2020-01-23 |
20200026556 | METHODS AND APPARATUS FOR ACCELERATING VIRTUAL MACHINE MIGRATION - A server having a host processor coupled to a programmable coprocessor is provided. One or more virtual machines may run on the host processor. The coprocessor may be coupled to an auxiliary memory that stores virtual machine (VM) states. During live migration, the coprocessor may determine when to move the VM states from the auxiliary memory to a remote server node. The coprocessor may include a coherent protocol home agent and state tracking circuitry configured to track data modification at a cache line granularity. Whenever a particular cache line has been modified, only the data associated with that cache line will be moved to the remote server without having to copy over the entire page, thereby substantially reducing the amount of data that needs to be transferred during migration events. | 2020-01-23 |
20200026557 | NETWORK INTERFACE DEVICE AND HOST PROCESSING DEVICE - A network interface device has an input configured to receive data from a network. The data is for one of a plurality of different applications. The network interface device also has at least one processor configured to determine into which of a plurality of available different caches in a host system the data is to be injected by accessing a receive queue comprising at least one descriptor indicating a cache location in one of said plurality of caches to which data is to be injected, wherein said at least one descriptor, which indicates the cache location, has an effect on subsequent descriptors of said receive queue until a next descriptor indicates another cache location. The at least one processor is also configured to cause the data to be injected to the cache location in the host system. | 2020-01-23 |
20200026558 | REGULATING HARDWARE SPECULATIVE PROCESSING AROUND A TRANSACTION - A transaction is detected. The transaction has a begin-transaction indication and an end-transaction indication. If it is determined that the begin-transaction indication is not a no-speculation indication, then the transaction is processed. | 2020-01-23 |
20200026559 | DYNAMIC UPDATE OF THE NUMBER OF ARCHITECTED REGISTERS ASSIGNED TO SOFTWARE THREADS USING SPILL COUNTS - A computer system includes a processor, main memory, and controller. The processor includes a plurality of hardware threads configured to execute a plurality of software threads. The main memory includes a first register table configured to contain a current set of architected registers for the currently running software threads. The controller is configured to change a first number of the architected registers assigned to a given one of the software threads to a second number of architected registers when a result of monitoring current usage of the registers by the software threads indicates that the change will improve performance of the computer system. The processor includes a second register table configured to contain a subset of the architected registers and a mapping table for each software thread indicating whether the architected registers referenced by the corresponding software thread are located in the first register table or the second register table. | 2020-01-23 |
20200026560 | DYNAMIC WORKLOAD CLASSIFICATION FOR WORKLOAD-BASED RESOURCE ALLOCATION - Techniques for managing virtualized entities in computing systems. In a method embodiment, processing commences upon receiving I/O activity trace data associated with virtualized entities running in a computing system. Specific I/O activity attributes are extracted from the I/O activity trace data, and the I/O activity attributes are used to form a workload classification model. The workload classification model serves to assign one or more workload classifications to a respective one or more observed workloads running on the computing system. Based on the determined workload classification or classifications, recommended resource allocation operations are formed for further consideration. Considered resource allocation operations include migrations of virtualized entities from a source computing resource to a target computing resource. Consideration of the resource allocation operations include considering homogeneity of workloads at a target computing resource and/or matching specific workload resource demands to availability of specific types of resources at a candidate target computing resource. | 2020-01-23 |
20200026561 | INTELLIGENT CONTENTIONLESS PROCESSING RESOURCE REDUCTION IN A MULTIPROCESSING SYSTEM - Computer program products and a system for managing processing resource usage at a workload manager and an application are described. The workload manager and application may utilize safe stop points to reduce processing resource usage during high cost processing periods while preventing contention in the processing resources. The workload manager and application may also implement lazy resumes of processing resource utilization at the application to allow for continued reduced usage of the processing resources. | 2020-01-23 |
20200026562 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER THAT IDENTIFIES AND OPTIMIZES HORIZONTALLY SCALABLE WORKLOADS - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026563 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER WITH SCHEDULING REDUNDANCY AND SITE FAULT ISOLATION - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026564 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER WITH DYNAMIC WORKLOAD TERMINATION BASED ON COST-BENEFIT ANALYSIS - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026565 | GENERATING METRICS FOR QUANTIFYING COMPUTING RESOURCE USAGE - Various examples are disclosed for generating metrics for quantifying computing resource usage. A computing environment can identify a computing function that utilizes a plurality of computing services hosted in at least one virtual machine. The computing environment can determine a first cost metric for the at least one virtual machine based on hardware resources used by the at least one virtual machine and determine a second cost metric for individual ones of the computing services based on virtual machine resources used by the individual ones of the computing services and the first cost metric. A third cost metric can be determined for the computing function as a function of the second cost metric and a utilization ratio. | 2020-01-23 |
20200026566 | WORKLOAD IDENTIFICATION AND DISPLAY OF WORKLOAD-SPECIFIC METRICS - An architecture for implementing a mechanism for automatically displaying metrics specific to a type of workload being processed by a computer system is provided. The mechanism predicts a classification of the workload based on attributes that characterize the workload using a set of workload profiles and/or a set of classification rules that correlate different combinations of attributes of workloads with different classifications of workloads. Based on the predicted classification of the workload, one or more templates including one or more metrics specific to the classification of workload are identified. The template(s) including the metric(s) specific to the classification of the workload may be identified using a set of rules that associate the metric(s) with the predicted classification of the workload. A user interface including the metric(s) is generated based on the template(s). The user interface may then be displayed to a user of the computer system. | 2020-01-23 |
20200026567 | FAST, LOW MEMORY, CONSISTENT HASH USING AN INITIAL DISTRIBUTION - Embodiments of the present systems and methods may provide a consistent hash function that provides reduced memory use and complexity, reduced computational complexity, and relatively low numbers of keys that must be reshuffled compared to current techniques. For example, in an embodiment, a computer-implemented method for controlling computing resources may comprise storing a set of labels of potential resources comprising a plurality of labels of working resources allocated to actual resources and a plurality of labels of reserved resources available to be allocated, generating an initial assignment to one of the set of labels of potential resources, when the assignment to one of a set of labels of potential resources is to one of the labels of reserved resources, reassigning the request to another label of a resource selected from a subset of the labels of potential resources, and repeating the reassigning until the request is assigned to a label of a working resource. | 2020-01-23 |
20200026568 | Fine-Grained Scheduling of Work in Runtime Systems - A runtime system for distributing work between multiple threads in multi-socket shared memory machines that may support fine-grained scheduling of parallel loops. The runtime system may implement a request combining technique in which a representative thread requests work on behalf of other threads. The request combining technique may be asynchronous; a thread may execute work while waiting to obtain additional work via the request combining technique. Loops can be nested within one another, and the runtime system may provide control over the way in which hardware contexts are allocated to the loops at the different levels. An “inside out” approach may be used for nested loops in which a loop indicates how many levels are nested inside it, rather than a conventional “outside in” approach to nesting. | 2020-01-23 |
20200026569 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER WITH CYCLICAL SERVICE LEVEL TARGET (SLT) OPTIMIZATION - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026570 | Predicting Time-To-Finish of a Workflow Using Deep Neural Network With Biangular Activation Functions - Techniques are provided for predicting a time-to-finish of at least one workflow in a shared computing environment using a deep neural network with a biangular activation function. An exemplary method comprises: obtaining a specification of an executing workflow of multiple concurrent workflows in a shared computing environment, wherein the specification comprises states of past executions of the executing workflow; obtaining a trained deep neural network, wherein the trained deep neural network is trained to predict one or more future states of the executing workflow using the states of past executions and wherein the trained deep neural network employs a biangular activation function comprising multiple parameters that define a position and a slope associated with two angles of the biangular activation function for a range of input values; and estimating, using the at least one trained deep neural network, a time-to-finish of the executing workflow of the multiple concurrent workflows. | 2020-01-23 |
20200026571 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER WITH WORKLOAD RE-EXECUTION FUNCTIONALITY FOR BAD EXECUTION RUNS - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026572 | APPARATUSES AND METHODS TO DETERMINE TIMING OF OPERATIONS - The present disclosure includes apparatuses and methods to determine timing of operations. An example method includes performing a first operation type that uses a shared resource in a memory device. The method includes applying a scheduling policy for timing of continued performance of the first operation type based upon receipt of a request to the memory device for performance of a second operation type that uses the shared resource. | 2020-01-23 |
20200026573 | SYSTEM AND METHOD FOR MANAGING NETWORK ACCESS CONTROL PRIVILEGES BASED ON COMMUNICATION CONTEXT AWARENESS - Methods and systems for managing NAC privileges based on communication context awareness are provided. In one aspect, a method includes detecting, by a network server, a collaboration session between at least first and second users and retrieving, by a network server from a directory server, a first role of the first user that corresponds with privileges including a first quality of service level. Further, the network server retrieves a second role of the second user that corresponds with privileges including a second quality of service level. Also, the method includes determining that the first quality of service level is greater than the second quality of service level and assigning the second user at least the privileges corresponding to the first role. The collaboration session is conducted between the first user and the second user based on the privileges assigned to the second user that correspond to the first role. | 2020-01-23 |
20200026574 | HYBRID CONFIGURATION ENGINE - A hybrid configuration engine and associated method for reducing the complexity and burden of configuring rich coexistence between an on-premise solution and a cloud-based solution is described herein and illustrated in the accompanying figures. The hybrid configuration engine determines the current state of the on-premise solution and the cloud-based solution and learns the desired configuration state. After obtaining the current and desired configuration state information, the hybrid configuration engine determines and automatically performs steps to reach the desired configuration state. Finally, the hybrid configuration engine provides instructions describing the manual steps needed to reach the desired configuration state. | 2020-01-23 |
20200026575 | AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS - Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement. | 2020-01-23 |
20200026576 | DETERMINING A NUMBER OF NODES REQUIRED IN A NETWORKED VIRTUALIZATION SYSTEM BASED ON INCREASING NODE DENSITY - An architecture for implementing a system planner for determining a number of nodes required in a networked virtualization system based on increasing node density is provided. The system planner receives various inputs describing a current networked virtualization system, an analysis period during which the workload of the current networked virtualization system is expected to increase, and a projected increase in node density during the analysis period. Based on the inputs, the system planner generates a new configuration of the networked virtualization system that includes a number of new nodes that are added to the current networked virtualization system to provide the resources necessary to support the increase in workload during the specified analysis period. | 2020-01-23 |
20200026577 | Allocation of Shared Computing Resources Using Source Code Feature Extraction and Clustering-Based Training of Machine Learning Models - Techniques are provided for allocation of shared computing resources using source code feature extraction and cluster-based training of machine learning models. An exemplary method comprises: obtaining a source code corpus with source code segments for execution in a shared computing environment; extracting discriminative features from the source code segments in the source code corpus; obtaining a trained machine learning model, wherein the trained machine learning model is trained using samples of source code segments from clusters derived from clustering the source code corpus based on (i) a term frequency metric, and/or (ii) observed values of execution metrics; and generating, using the trained model, a prediction of an allocation of one or more resources of the shared computing environment needed to satisfy service level agreement requirements for source code to be executed in the shared computing environment. The plurality of discriminative features are extracted from the source code corpus, for example, by natural language processing techniques and/or pattern-based techniques. | 2020-01-23 |
20200026578 | Basic Runtime Environment - A computer implemented method for providing workload resource management to applications in an embedded system. The method includes receiving, by an application-specific basic runtime environment (BRE), workload resource requirements of an application installed on the embedded system. The method includes obtaining, by the application-specific BRE, the workload resource requirements from an operating system of the embedded system. The method includes providing, by the application-specific BRE, the workload resource requirements to the application. The method includes initiating, by the application-specific BRE, the execution of the application on the embedded system. | 2020-01-23 |
20200026579 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER THAT IDENTIFIES AND CONSUMES GLOBAL VIRTUAL RESOURCES - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026580 | SYSTEMS, METHODS, AND APPARATUSES FOR IMPLEMENTING A SCHEDULER AND WORKLOAD MANAGER WITH SNAPSHOT AND RESUME FUNCTIONALITY - In accordance with disclosed embodiments, there are provided systems, methods, and apparatuses for implementing a stateless, deterministic scheduler and work discovery system with interruption recovery. For instance, according to one embodiment, there is disclosed a system to implement a stateless scheduler service, in which the system includes: a processor and a memory to execute instructions at the system; a compute resource discovery engine to identify one or more computing resources available to execute workload tasks; a workload discovery engine to identify a plurality of workload tasks to be scheduled for execution; a cache to store information on behalf of the compute resource discovery engine and the workload discovery engine; a scheduler to request information from the cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution; and further in which the scheduler is to schedule at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested. Other related embodiments are disclosed. | 2020-01-23 |
20200026581 | OPTIMIZING ACCESSES TO READ-MOSTLY VOLATILE VARIABLES - A computer-implemented method, computer program product, and computer processing system are provided for eliminating a memory fence for reading a read-mostly volatile variable of a computer system. The read-mostly variable is read from more than written to. The method includes writing data to the read-mostly volatile variable only during a Stop-The-World (STW) state of the computer system. The method further includes executing the memory fence in any mutator threads and thereafter exiting the STW state. The method also includes reading the read-mostly volatile variable by the mutator threads without executing the memory fence after the STW state. | 2020-01-23 |
20200026582 | SYNCHRONIZATION OBJECT WITH WATERMARK - A storage system includes a plurality of storage devices, a data structure, and a storage controller that is configured to obtain a threshold value for a synchronization object associated with the data structure. The storage controller is further configured to activate a plurality of threads. Each thread is configured to determine a count value of the synchronization object corresponding to a number of entries in the data structure and determine whether the count value of the synchronization object exceeds the threshold value plus a predetermined number of entries. In response to determining that the count value of the synchronization object exceeds the threshold value plus the predetermined number of entries, the thread is configured to perform an action. | 2020-01-23 |
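The watermark check described above can be sketched minimally as follows; the class name, the slack parameter, and the placeholder "action" are assumptions for illustration, not the patent's actual data structure or behavior:

```python
import threading

class WatermarkSyncObject:
    """Hypothetical synchronization object with a watermark: threads compare
    its count against a threshold plus a predetermined number of extra
    entries, and perform an action when the count exceeds that watermark."""

    def __init__(self, threshold, slack):
        self.threshold = threshold   # threshold value for the sync object
        self.slack = slack           # predetermined number of extra entries
        self.count = 0               # mirrors the number of entries tracked
        self.lock = threading.Lock()
        self.actions_fired = 0

    def add_entry(self):
        with self.lock:
            self.count += 1

    def check(self):
        # Each worker thread determines the count and compares it to
        # threshold + slack; when exceeded, an action is performed.
        with self.lock:
            if self.count > self.threshold + self.slack:
                self.actions_fired += 1   # placeholder for the real action
                return True
            return False
```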
20200026583 | AUTOMATIC CORRECTION OF CRYPTOGRAPHIC APPLICATION PROGRAM INTERFACES - A computer system may identify a cryptographic application programming interface (API) call for a program. The cryptographic API call may include a first variable. The computer system may determine that the first variable is a static value. The computer system may tag the first variable. The computer system may determine that the cryptographic API call will be executed. The computer system may replace the first variable with a second variable during execution of the program. The computer system may execute the cryptographic API call with the second variable. | 2020-01-23 |
20200026584 | COMPOSE APPLICATION EXTENSION ACTIVATION - Activating an extension includes opening a first compose application by a first computing device. A composed document is received, and an extension is activated in response to the document. The extension may be activated as the document is being composed. | 2020-01-23 |
20200026585 | FACILITY MONITORING SENSOR - A facility monitoring system and application receives and interprets readings from multiple sensors for determining a condition, alert or alarm and initiating appropriate notifications. Low-cost, deployed sensors detect or receive a parameter pertaining to a condition in a physical space, such as temperature, electric flow, or an open door. A channel to a remote monitoring server or application receives and reports the reading via a user accessible GUI (graphical user interface). A compound or aggregate value is computed from multiple channels based on readings from a plurality of sensors, thus allowing reporting of conclusory conditions based on several related readings, rather than a single scalar sensor value that imposes a burden on the user to deduce or investigate other related sensor readings. A GUI receives an aggregation processor defining computations to be performed on readings from the multiple sensors, and an alert processor defining conditions and alerts representative of issues requiring attention. | 2020-01-23 |
20200026586 | Discovery and Chaining of Virtual Functions - Concepts and technologies are disclosed herein for discovery and chaining of virtual functions. An application request can be received from a requestor. The application request can include a request to create an application. Based upon the application request, an application topology associated with the application can be determined. The application topology can define virtual functions and a data flow among the virtual functions. Creation of the plurality of virtual functions in a computing environment can be triggered and an indication of capabilities of the virtual functions can be received. The virtual functions can be chained together to provide functionality associated with the application. | 2020-01-23 |
20200026587 | ASYNCHRONOUS APPLICATION INTERACTIONS IN DISTRIBUTED SYSTEMS - Systems and methods for managing communication between applications (e.g., apps) in a host computing environment. Apps are published to a globally-accessible site. Upon download of an app to a host computing environment, apps can register themselves with a communication gateway as being discoverable and permissive to inter-app communications. Message queues are created to facilitate asynchronous communications between apps. After registration, any of the apps can send and receive messages using the communication gateway. The messages can be directed to any other app that is registered with the communication gateway. Using the message queues, the communication gateway facilitates asynchronous app interactions such that any app can communicate with any other discoverable and permissive app. Aspects of operation, discoverability and other attributes can be codified in an application manifest that is processed by the communication gateway. Discoverability, source of origination, payload contents, permissions and other attributes are carried in the application manifest. | 2020-01-23 |
20200026588 | REAL-TIME DATA PROCESSING PIPELINE AND PACING CONTROL SYSTEMS AND METHODS - A data processing system includes a transaction bus, a console application in communication with the transaction bus, and a view predictor subsystem in communication with the transaction bus. The transaction bus receives, from a user application executing on a client device, a call for visual information to be provided to the user application. The view predictor subsystem determines a likelihood that the visual information will be viewable within a viewport of the user application, and a plurality of respective values for a plurality of sources of the visual information are computed based on the likelihood and a respective priority for each source. The console application provides to the transaction bus the set of potential sources of the visual information, and the transaction bus selects, based on the computed values, one of the potential sources of the visual information to be the result, which is provided to the user application. | 2020-01-23 |
20200026589 | DATA PROCESSING FOR COMPONENT FAILURE DETERMINATION - A component analysis platform may communicate with one or more devices to obtain prediction data relating to a type of component. The component analysis platform may process the prediction data to determine a set of predictors for failure of an instance of the component, and may generate a model for failure of the instance of the component based on the set of predictors. The component analysis platform may monitor the instance of the component to obtain component data relating to the instance of the component. The component analysis platform may determine, using the model and based on the component data relating to the instance of the component, a predicted failure for the instance of the component. The component analysis platform may perform a response action related to the predicted failure. | 2020-01-23 |
20200026590 | COMPONENT FAILURE PREDICTION - Example systems may relate to component failure prediction. A non-transitory computer readable medium may contain instructions to analyze a plurality of features corresponding to a component of a system. The non-transitory computer readable medium may further contain instructions to determine which of the plurality of features to use to model a failure of the component. The non-transitory computer readable medium may contain instructions to generate a plurality of models to model the failure of the component and assemble the plurality of models into a single model for predicting component failure. The non-transitory computer readable medium may further contain instructions to extract data associated with a component failure predicted by the single model and correlate the data associated with the predicted component failure with the single model. | 2020-01-23 |
20200026591 | OPPORTUNISTIC OFFLINING FOR FAULTY DEVICES IN DATACENTERS - Embodiments relate to determining whether to take a resource distribution unit (RDU) of a datacenter offline when the RDU becomes faulty. RDUs in a cloud or datacenter supply a resource such as power, network connectivity, and the like to respective sets of hosts that provide computing resources to tenant units such as virtual machines (VMs). When an RDU becomes faulty some of the hosts that it supplies may continue to function and others may become unavailable for various reasons. This can make a decision of whether to take the RDU offline for repair difficult, since in some situations countervailing requirements of the datacenter may be at odds. To decide whether to take an RDU offline, the potential impact on availability of tenant VMs, unused capacity of the datacenter, a number or ratio of unavailable hosts on the RDU, and other factors may be considered to make a balanced decision. | 2020-01-23 |
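The balanced offline decision can be illustrated with a toy scoring function over the factors the abstract names; the particular weights, normalization, and cutoff below are assumptions, not the patent's actual policy:

```python
def should_take_offline(vm_impact, spare_capacity, unavailable_ratio,
                        weights=(0.5, 0.3, 0.2), cutoff=0.5):
    """Toy balanced decision for taking a faulty resource distribution
    unit (RDU) offline for repair.

    All inputs are normalized to 0..1:
      vm_impact         -- fraction of tenant VMs that would lose availability
      spare_capacity    -- unused capacity elsewhere in the datacenter
      unavailable_ratio -- ratio of hosts on the RDU that are already down
    """
    w_impact, w_capacity, w_fault = weights
    score = (w_impact * (1.0 - vm_impact)     # low tenant impact favors repair
             + w_capacity * spare_capacity    # spare capacity absorbs the loss
             + w_fault * unavailable_ratio)   # a mostly-dead RDU favors repair
    return score >= cutoff
```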
20200026592 | SYSTEM AND METHOD FOR AUTOMATIC ROOT CAUSE ANALYSIS AND AUTOMATIC GENERATION OF KEY METRICS IN A MULTIDIMENSIONAL DATABASE ENVIRONMENT - In accordance with an embodiment, described herein are systems and methods for automatic root cause analysis and generation of key metrics in a multidimensional database. A system can comprise a computer and a multidimensional database server executing on the computer, wherein the multidimensional database server supports at least one hierarchical structure of data dimensions. One or more user logs are created, the user logs representing a plurality of operations performed by a plurality of users of the multidimensional database server and accessing the at least one hierarchical structure of data dimensions. Based upon historical data of the at least one hierarchical structure of data dimensions, a change in a query result of a user is detected. Based upon the detection of a change, a set of data dimensions can be provided to the user that contains the data dimensions most contributing to the change. | 2020-01-23 |
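A simplified illustration of surfacing the dimensions most contributing to a detected change; the flat per-member totals below stand in for the patent's hierarchical dimension structure, and the function name is hypothetical:

```python
def top_contributing_dimensions(before, after, k=2):
    """Rank dimension members by absolute contribution to the change
    between two query results (hypothetical simplification)."""
    deltas = {dim: after[dim] - before.get(dim, 0) for dim in after}
    return sorted(deltas, key=lambda d: abs(deltas[d]), reverse=True)[:k]
```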
20200026593 | USER INTERFACE FOR MONITORING CRASHES OF A MOBILE APPLICATION - Various methods and systems for tracking incomplete purchases in correlation with application performance, such as application errors or crashes, are provided. In this regard, aspects of the invention facilitate monitoring transaction and application error events and analyzing data associated therewith to identify data indicating an impact of incomplete purchases in relation to an error(s) such that application performance can be improved. In various implementations, application data associated with an application installed on a mobile device is received. The application data is used to determine that an error that occurred in association with the application installed on the mobile device correlates with an incomplete monetary transaction initiated via the application. Based on the error correlating with the incomplete monetary transaction, a transaction attribute associated with the error is determined. | 2020-01-23 |
20200026594 | SYSTEM AND METHOD FOR REAL-TIME DETECTION OF ANOMALIES IN DATABASE USAGE - A system and method for real-time detection of anomalies in database or application usage is disclosed. Embodiments provide a mechanism to detect anomalies in database or application usage, such as data exfiltration attempts, first by identifying correlations (e.g., patterns of normalcy) in events across different heterogeneous data streams (such as those associated with ordinary, authorized and benign database usage, workstation usage, user behavior or application usage) and second by identifying deviations/anomalies from these patterns of normalcy across data streams in real-time as data is being accessed. An alert is issued upon detection of an anomaly, wherein a type of alert is determined based on a characteristic of the detected anomaly. | 2020-01-23 |
20200026595 | WRITE BUFFER MANAGEMENT - A read operation to retrieve data from a memory component and that bypasses a prior search for the data at a buffer in a read data path associated with the read operation can be performed. Responsive to performing the read operation that bypasses the prior search for the data at the buffer, the data is returned to a host system. | 2020-01-23 |
20200026596 | I/O RECOVERY AND DIAGNOSTICS - A method for monitoring I/O is disclosed. In one embodiment, such a method includes identifying various stages of an I/O process. The method further monitors progress of an I/O operation as it advances through the stages of the I/O process. The method records, in a data structure associated with the I/O operation, timing information indicating time spent in each of the stages. This timing information may include, for example, entry and exit times of the I/O operation relative to each of the stages. In the event the I/O operation exceeds a maximum allowable time spent in one or more of the stages, the method generates an error. Various recovery actions may be taken in response to the error. A corresponding system and computer program product are also disclosed. | 2020-01-23 |
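The per-stage timing record described above can be sketched as follows; the stage names, limits, and exception type are illustrative assumptions, not taken from the patent:

```python
import time

class IOTimingError(Exception):
    """Raised when an I/O operation overstays a stage."""

# Assumed maximum allowable time per stage, in seconds.
MAX_STAGE_SECONDS = {"queue": 0.5, "transfer": 1.0, "commit": 0.5}

class IOOperation:
    """Record entry/exit times for each stage of an I/O process and
    generate an error when a stage exceeds its maximum allowable time."""

    def __init__(self):
        self.timings = {}  # stage -> (entry_time, exit_time)

    def enter(self, stage):
        self.timings[stage] = (time.monotonic(), None)

    def exit(self, stage):
        entry, _ = self.timings[stage]
        now = time.monotonic()
        self.timings[stage] = (entry, now)
        if now - entry > MAX_STAGE_SECONDS[stage]:
            raise IOTimingError(f"stage {stage!r} exceeded its limit")
```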
20200026597 | Method and System For Managing Memory Device - The subject technology provides for managing a data storage system. A data operation error for a data operation initiated in a first non-volatile memory die of a plurality of non-volatile memory die in the data storage system is detected. An error count for an error type of the data operation error for the first non-volatile memory die is incremented. It is determined that the incremented error count satisfies a first threshold value for the error type of the data operation error. The first non-volatile memory die is marked for exclusion from subsequent data operations. | 2020-01-23 |
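The count-and-exclude flow can be sketched minimally; the class name, error types, and threshold values are hypothetical:

```python
from collections import defaultdict

class DieErrorTracker:
    """Per-die, per-error-type counters: a die whose count reaches the
    first threshold value for an error type is marked for exclusion
    from subsequent data operations."""

    def __init__(self, thresholds):
        self.thresholds = thresholds    # error_type -> threshold value
        self.counts = defaultdict(int)  # (die, error_type) -> error count
        self.excluded = set()

    def record_error(self, die, error_type):
        self.counts[(die, error_type)] += 1
        if self.counts[(die, error_type)] >= self.thresholds[error_type]:
            self.excluded.add(die)

    def is_excluded(self, die):
        return die in self.excluded
```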
20200026598 | TWO DIE SYSTEM ON CHIP (SOC) FOR PROVIDING HARDWARE FAULT TOLERANCE (HFT) FOR A PAIRED SOC - Apparatuses and systems that provide Safety Integrity Levels (SILs) and Hardware Fault Tolerance (HFT) include a first die, the first die including first processing logic connected to a first connection and the first connection connected to second processing logic of a second die. The first die may further include a second connection to an input/output (I/O) channel where the second connection is coupled to the first processing logic. The apparatuses may further include a second die, the second die including second processing logic and a third connection from a secondary device coupled to the second processing logic. The secondary device is outside the system. The second processing logic is configured to select among three configurations based on signals from the second processing logic and the secondary device: sending first output data on the I/O output channel, sending second output data on the I/O output channel, or de-energizing the I/O channel. | 2020-01-23 |
20200026599 | METHOD OF ENCODING DATA - Techniques for encoding data are described herein. The method includes receiving a block payload at a physical layer to be transmitted via a data bus. The method further includes establishing a block header comprising an arrangement of bits, the block header defining two block header types, wherein a Hamming distance between the block header types is at least four. | 2020-01-23 |
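The distance requirement is easy to check concretely. The two 8-bit codewords below are hypothetical; the patent only requires that the two header types differ in at least four bit positions, which lets up to three bit flips be detected before one type could be mistaken for the other:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two codewords differ."""
    return bin(a ^ b).count("1")

# Two hypothetical 8-bit codewords for the two block header types.
HEADER_TYPE_A = 0b00001111
HEADER_TYPE_B = 0b11110000
```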
20200026600 | DIE-LEVEL ERROR RECOVERY SCHEME - Methods, apparatuses, and systems for error recovery in memory devices are described. A die-level redundancy scheme may be employed in which parity data associated with a particular die may be stored. An example apparatus may include a printed circuit board that has memory devices each disposed on a planar surface of the printed circuit board. Each memory device may include two or more memory die, a plurality of channels communicatively coupled to the two or more memory die, and a memory controller communicatively coupled to the plurality of channels. The memory controller may deterministically maintain a die-level redundancy scheme via data transmission through the plurality of channels. The memory controller may also generate parity data associated with the two or more memory die in response to a data write event. | 2020-01-23 |
20200026601 | Method of Using Common Storage of Parity Data for Unique Copy Recording - A disclosed method is performed at a fault-tolerant object-based storage system including M data storage entities, each of which is configured to store data on an object basis. The method includes obtaining a request to store N copies of a data object and in response, storing the N copies of the data object across the M data storage entities, where the N copies are distributed across the M data storage entities. The method additionally includes generating a first parity object for a first subset of M copies of the N copies of the data object, where the first parity object is stored on a first parity storage entity separate from the M data storage entities. The method also includes generating a manifest linking the first parity object with one or more other subsets of M copies of the N copies of the data object. | 2020-01-23 |
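Generic XOR parity illustrates the underlying recovery mechanism; the patent applies parity to subsets of object copies and tracks them in a manifest, which is omitted here, and byte-wise XOR is an assumption rather than the patent's stated construction:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks (a common parity construction)."""
    return bytes(reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks))

def make_parity_object(copies):
    # The parity object summarizes a subset of stored copies.
    return xor_blocks(copies)

def recover_copy(parity, surviving):
    # XOR of the parity with all surviving members reproduces the lost one.
    return xor_blocks([parity] + surviving)
```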
20200026602 | HYBRID ITERATIVE ERROR CORRECTING AND REDUNDANCY DECODING OPERATIONS FOR MEMORY SUB-SYSTEMS - Data stored on each of a set of memory components can be read. Corresponding data stored on a number of the set of memory components that cannot be decoded using an error correction code (ECC) decoding operation can be identified. A determination can be made whether the number of the set of memory components that include the corresponding data that cannot be decoded from the ECC decoding operation satisfies a threshold condition. Responsive to determining that the number of the set of memory components that include the corresponding data that cannot be decoded from the ECC decoding operation satisfies the threshold condition, a processing device can perform a redundancy error correction decoding operation to correct the data stored on each of the set of memory components. | 2020-01-23 |
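A skeletal sketch of the fallback flow, with placeholder decoder callbacks; interpreting the threshold condition as "at most max_failures failed components" is an assumption, since the abstract does not define it:

```python
def hybrid_decode(components, ecc_decode, redundancy_decode, max_failures):
    """Attempt per-component ECC decoding; if some components fail and the
    failure count satisfies the threshold condition (here assumed to be
    at most max_failures), fall back to a redundancy decode across the
    components. The decoder callbacks are placeholders."""
    results, failed = [], []
    for data in components:
        ok, decoded = ecc_decode(data)
        results.append(decoded if ok else None)
        if not ok:
            failed.append(len(results) - 1)
    if failed and len(failed) <= max_failures:
        return redundancy_decode(results, failed)
    return results
```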
20200026603 | DISTRIBUTED MEMORY CHECKPOINTING USING STORAGE CLASS MEMORY SYSTEMS - Systems and methods are provided for implementing memory checkpointing using a distributed non-volatile memory system. For example, an application runs on a plurality of server nodes in a server cluster. Each server node includes system memory having volatile system memory and non-volatile system memory. A current application state of the application is maintained in the system memory of one or more server nodes. A checkpoint operation is performed to generate a distributed checkpoint image of the current application state of the application. The distributed checkpoint image is stored in the non-volatile system memory of the plurality of server nodes. Fault-tolerant parity data is generated for the distributed checkpoint image, and the fault-tolerant parity data for the distributed checkpoint image is stored in the non-volatile system memory of one or more of the plurality of server nodes. | 2020-01-23 |
20200026604 | AUTOMATED FAILOVER OF DATA TRAFFIC ROUTES FOR NETWORK-BASED APPLICATIONS - The disclosure facilitates rerouting data traffic of applications. A failover request is received by a failover application, the failover request including an application identifier of a main application and indicating at least one sub-application and a target data source. The failover application selects a configuration data set of the main application based on the application identifier, wherein the selected configuration data set defines an address mask of the target data source associated with the at least one sub-application. The failover application generates failover instructions for activating data traffic routing of the at least one sub-application to the target data source based on the address mask of the target data source. The failover application provides the generated failover instructions to a data traffic manager associated with the main application, whereby data traffic of the at least one sub-application is routed to the target data source by the data traffic manager. | 2020-01-23 |
20200026605 | CONTROLLING PROCESSING ELEMENTS IN A DISTRIBUTED COMPUTING ENVIRONMENT - A computer system controls processing elements associated with a stream computing application. A stream computing application is monitored for the occurrence of one or more conditions. One or more processing element groups are determined to be restarted based on occurrence of the one or more conditions, wherein the processing element groups each include a plurality of processing elements associated with the stream computing application. Each processing element of the determined one or more processing element groups is concurrently restarted. Embodiments of the present invention further include a method and program product for controlling processing elements within a stream computing application in substantially the same manner described above. | 2020-01-23 |
20200026606 | CLIENT SESSION RECLAIM FOR A DISTRIBUTED STORAGE SYSTEM - The technology disclosed herein may enable a client of a distributed storage system to recover a storage session after a failure occurs. An example method may include: identifying a storage session of a distributed storage service, the storage session comprising session data that corresponds to a storage object of the distributed storage service; providing, by a processing device of a client, an indication that the client is recovering the storage session; and obtaining, by the client, the session data of the storage session from one or more devices that accessed the storage object of the distributed storage service. | 2020-01-23 |
20200026607 | ELECTRONIC APPARATUS AND OPERATIVE METHOD - This invention introduces an electronic apparatus and an operative method thereof which are capable of triggering an initialization operation for the electronic apparatus correctly. The electronic apparatus includes a plurality of latches and a power-on-reset generator. The plurality of latches are coupled to memory cells and are configured to monitor memory data of the memory cells. The power-on-reset generator is coupled to the plurality of latches and is configured to generate a power-on-reset pulse to reset the electronic apparatus in response to a data corruption on at least one of the memory cells. The data corruption is detected during an initialization operation of the electronic apparatus according to memory data of the memory cells and corresponding hardwired code data. | 2020-01-23 |