7th week of 2019 patent application highlights part 46 |
Patent application number | Title | Published |
20190050209 | METHOD AND SYSTEM TO DEVELOP, DEPLOY, TEST, AND MANAGE PLATFORM-INDEPENDENT SOFTWARE - Some embodiments described herein provide a system for creating platform-independent software application programs. During operation, the system receives a configuration program and an application program, the application program including conditional and unconditional components. The system creates a configuration executable binary and loads this binary into a configuration execution space. The system creates a parse tree of the application program. Subsequently, the system evaluates each component of the application program in the configuration execution space and generates a modified parse tree of the application program. Semantic analysis is performed on this modified parse tree to generate an executable binary and a composition map for the application program. | 2019-02-14 |
20190050210 | SYSTEM AND METHOD FOR PROVIDING CLOUD OPERATING SYSTEM VERIFICATIONS FOR A DOMAIN-SPECIFIC LANGUAGE FOR CLOUD SERVICES INFRASTRUCTURE - A system and method for providing and executing a domain-specific programming language for cloud services infrastructure is provided. The system may be used to integrate references to external entities, such as cloud service compute instances, directly into a domain-specific programming language, allowing developers to easily integrate cloud services directly using the domain-specific programming language. A compiler stored within a cloud operating system can include one or more validations that can check instantiations of types within the domain-specific language for compliance with one or more policies set by a system administrator of a computing enterprise. | 2019-02-14 |
20190050211 | CONTEXT INFORMATION BASED ON TYPE OF ROUTINE BEING CALLED - Optimizations are provided for sibling calls. A sibling caller is marked to indicate that it may call a sibling routine or that it may call an external sibling routine. Based on the marking, certain processing is performed to facilitate use of sibling calls, particularly when the sibling routine being called is external to the caller. | 2019-02-14 |
20190050212 | TECHNOLOGIES FOR INDIRECTLY CALLING VECTOR FUNCTIONS - Technologies for indirectly calling vector functions include a compute device that includes a memory device to store source code and a compiler module. The compiler module is to identify a set of declarations of vector variants for scalar functions in the source code, generate a vector variant address map for each set of vector variants, generate an offset map for each scalar function, and identify, in the source code, an indirect call to the scalar functions, wherein the indirect call is to be vectorized. The compiler module is also to determine, based on a context of the indirect call, a vector variant to be called and store, in object code and in association with the indirect call, an offset into one of the vector variant address maps based on (i) the determined vector variant to be called and (ii) the offset map that corresponds to each scalar function. | 2019-02-14 |
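The offset-map dispatch described in 20190050212 can be sketched roughly as follows. This is a minimal model, not the patented compiler machinery: the function names, the context key, and the shapes of the two maps are all illustrative assumptions.

```python
# Model of vectorizing an indirect call: each scalar function gets an offset
# map keyed by call context, and each set of vector variants gets an address
# map; the "compiler" stores an offset, resolved here at call time.

def scalar_add1(x):
    return x + 1

def vector_add1(xs):
    # Stand-in for a declared vector variant of scalar_add1.
    return [x + 1 for x in xs]

variant_address_map = {0: vector_add1}   # offset -> vector variant "address"
offset_map = {("simd", 4): 0}            # call context -> offset

def call_indirect_vectorized(context, xs):
    # Determine the variant from the call context, then dispatch through
    # the variant address map using the stored offset.
    off = offset_map[context]
    return variant_address_map[off](xs)
```

A call such as `call_indirect_vectorized(("simd", 4), [1, 2])` would resolve the offset for that context and invoke the vector variant on the whole lane group at once.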
20190050213 | SYSTEM AND METHOD FOR GENERATING A DOMAIN-SPECIFIC PROGRAMMING LANGUAGE PROGRAM FROM A CLOUD-BASED COMPUTING SYSTEM - A system and method for generating a domain-specific programming language program for cloud services infrastructure from a pre-existing cloud-based computing system is provided. In one example, a transcriber tool can generate a plurality of queries directed at a cloud service provider application program interface. The results of those queries can then be used to generate a symbolic representation of the domain-specific language. Once the symbolic representation has been rendered, the symbolic representation can then be converted into a domain-specific language program. In one or more examples, the generated code can be used to clone the pre-existing cloud-based computing system. In another example, the generated code can be used to take control of the pre-existing cloud-based computing system. | 2019-02-14 |
20190050214 | METHOD AND APPARATUS TO DEPLOY APPLICATIONS ON PROPER IT RESOURCES BASED ON FREQUENCY AND AMOUNT OF CHANGES OF APPLICATIONS - Methods and apparatuses described herein are directed to a management program that manages IT infrastructures and deploys applications on them while taking the maturity level of the applications into consideration. Example implementations also involve a management program that modifies configurations of IT resources while considering the maturity level and usage frequency of the application during application resizing. | 2019-02-14 |
20190050215 | EXTERNAL RECORDING MEDIUM, MOBILE TERMINAL INCLUDING THE EXTERNAL RECORDING MEDIUM, AND COMMUNICATION CONTROL METHOD - An external recording medium ( | 2019-02-14 |
20190050216 | SYSTEMS AND METHODS FOR MAINTAINING OPERATING CONSISTENCY FOR MULTIPLE USERS DURING FIRMWARE UPDATES - Systems and methods for maintaining operating consistency for multiple users during firmware updates. According to an aspect, a method includes receiving, at a predetermined time interval, a request from one or more users of an application to carry out a result on a computing device. The method also includes analyzing a plurality of sessions of the application on the computing device servicing one or more users. The method also includes determining if one of the plurality of sessions contains an updated data. Further, the method includes creating at least one of a plurality of consistency groups based on the updated data. Further, the method includes updating, after the predetermined time interval, the application based on the at least one of the plurality of consistency groups. | 2019-02-14 |
20190050217 | SYSTEMS, METHODS AND APPARATUS FOR DISTRIBUTED SOFTWARE/FIRMWARE UPDATE AND SOFTWARE VERSIONING SYSTEM FOR AUTOMATED VEHICLES - The disclosed embodiments generally relate to methods, systems and apparatuses for dynamic firmware/software (FW/SW) update distribution in highly and fully autonomous or automated vehicles. In one embodiment, the disclosure relates to an apparatus to dynamically upgrade code in a vehicle. The apparatus may include: a communication module for one or more of wireless or landline communication; a central processing unit (CPU) in communication with the communication module, the CPU configured to receive an indication requiring a code upgrade to an existing vehicle code software and receive the code upgrade; store the code upgrade; execute the code upgrade in parallel with the existing vehicle code software; log one or more error indications resulting from execution of the code upgrade; replace the existing vehicle code with the code upgrade if the logged error indication is less than a first threshold; and direct the code upgrade to a second vehicle to update the second vehicle code. The disclosed embodiments may be implemented in autonomous driving (AD) vehicles as well as vehicles having operating code or software/firmware. | 2019-02-14 |
20190050218 | METHOD AND SYSTEM FOR A CLIENT TO SERVER DEPLOYMENT VIA AN ONLINE DISTRIBUTION PLATFORM - An apparatus and a method for a client to server deployment via an online distribution platform can include a mechanism to update at least part of a system software or server-side software via a parallel client software update. Online distribution platforms such as mobile application stores can be utilized in embodiments of the apparatus and method to provide not only the client update, but also the system software update in the underlying system (e.g. server-side version). | 2019-02-14 |
20190050219 | PROGRAM UPDATING SYSTEM, PROGRAM UPDATING METHOD, AND COMPUTER PROGRAM - A system according to one aspect of the present disclosure is a program updating system including a plurality of control devices installed in a vehicle, and a gateway capable of performing in-vehicle communication with the plurality of control devices. The gateway includes: a communication unit that receives a plurality of update programs for the control devices from an external device; a storage unit that stores therein the received plurality of update programs; an in-vehicle communication unit that transmits the stored plurality of update programs to the corresponding control devices, respectively; and a processing unit that prioritizes storage of a first program defined below into the storage unit over storage of a second program defined below into the storage unit. | 2019-02-14 |
20190050220 | METHOD AND SYSTEM FOR LOCOMOTIVE SOFTWARE MANAGEMENT - A method includes receiving, at a data hub onboard an asset, a new configuration file, a service program, and a software update of a software application of the asset from a remote location. The data hub includes a current configuration file that indicates a current configuration state of the software application. The new configuration file indicates an updated configuration state of the software application with the software update. The service program includes work instructions for applying the updated configuration state to the software application. The method includes displaying the current configuration file and the new configuration file onboard the asset using the data hub. The method also includes updating the software application with the updated configuration state according to the work instructions of the service program using the data hub. | 2019-02-14 |
20190050221 | SYSTEMS AND METHODS FOR USAGE DRIVEN DETERMINATION OF UPDATE CRITICALITY - In accordance with embodiments of the present disclosure, an information handling system may include a host system comprising a host system processor and a management controller communicatively coupled to the host system processor and configured to provide management of the information handling system. The management controller may be further configured to read a features-to-fixes database having one or more entries, each entry of the features-to-fixes database setting forth an association between an information handling resource feature and one or more firmware fixes, read a usage database having one or more entries, each entry of the usage database setting forth usage of information handling resource features by the information handling system, and compare entries of the features-to-fixes database and the usage database to determine at least one of a criticality and an applicability of the one or more firmware fixes to the information handling system. | 2019-02-14 |
20190050222 | TRANSFORMING DATA MANIPULATION CODE INTO DATA WORKFLOW - Aspects extend to methods, systems, and computer program products for transforming data manipulation code into data workflow. Data manipulation code for a data science process is written in a data manipulation programming language. The data manipulation code defines input instructions, data manipulation instructions, and output instructions. A learning module automatically transforms the data manipulation code into a data workflow representative of the data science process. The level of detail for a data workflow can be tailored for an intended audience and/or for subsequent editing with an editor program (e.g., a drawing program). Aspects of the invention address the disconnection between designing a data science process and documenting the data science process. The creation of data workflows is automated, virtually eliminating manual operations and providing significant productive gains for data scientists, data engineers, developers, and program managers. | 2019-02-14 |
20190050223 | COMBINED INSTRUCTION FOR ADDITION AND CHECKING OF TERMINALS - A processor core comprising in its set of instructions, a combined addition and bound-checking instruction (ADDCK) defining an integer n implicitly, or explicitly as a parameter of the instruction; an adder having a width p strictly greater than n bits; and a processing circuit (MUX, | 2019-02-14 |
20190050224 | PROCESSING CORE WITH METADATA ACTUATED CONDITIONAL GRAPH EXECUTION - A processing core and associated methods for the efficient execution of a directed graph are disclosed. A disclosed processing core comprises a memory and a first data tile stored in the memory. The first data tile includes a first set of data elements and metadata stored in association with the first set of data elements. The processing core also comprises a second data tile stored in the memory. The second data tile includes a second set of data elements. The processing core also comprises an arithmetic logic unit configured to conduct an arithmetic logic operation using data from the first set of data elements and the second set of data elements. The processing core also comprises a control unit configured to evaluate the metadata and control the arithmetic logic unit to conditionally execute the arithmetic logic operation based on the evaluation of the metadata. | 2019-02-14 |
20190050225 | MULTI-THREADED CONSTRAINT SATISFACTION SOLVER - An object-oriented method for multi-threading in a constraint satisfaction solver is provided. A master thread establishes a first solution state of a constraint problem. The master thread establishes a plurality of solver threads, each solver thread having an initial solution state that is identical to a first solution state of the master thread, and a plurality of cloned planning entity objects that are clones of a plurality of planning entity objects. The master thread communicates a first plurality of temporary incremental state changes to the plurality of solver threads that alters the initial solution state of each solver thread to a different solution state. The master thread receives, from each respective solver thread of the plurality of solver threads, a first score associated with the different solution state of the respective solver thread. | 2019-02-14 |
20190050226 | VECTOR PREDICATION INSTRUCTION - An apparatus comprises processing circuitry ( | 2019-02-14 |
20190050227 | COMPARE AND DELAY INSTRUCTIONS - A delay facility is provided in which program execution may be delayed until a predefined event occurs, such as a comparison of memory locations results in a true condition, a timeout is reached, an interruption is made pending or another condition exists. The delay facility includes one or more compare and delay machine instructions used to delay execution. The one or more compare and delay instructions may include a 32-bit compare and delay (CAD) instruction and a 64-bit compare and delay (CADG) instruction. | 2019-02-14 |
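The semantics of a compare-and-delay instruction as described in 20190050227 can be modeled in a few lines. This is a software sketch of the behavior only: the function name, the callable standing in for a load of the compared storage location, and the polling loop are assumptions, since the real facility delays in hardware rather than by polling.

```python
import time

def compare_and_delay(read_word, expected, compare, timeout_s=0.05, poll_s=0.001):
    """Model of compare-and-delay semantics: block until comparing a loaded
    memory word against the expected value yields true, or until a timeout
    is reached. read_word stands in for a load of the compared location."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if compare(read_word(), expected):
            return True   # comparison became true; resume execution
        time.sleep(poll_s)
    return False          # timed out; resume execution anyway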
20190050228 | ATOMIC INSTRUCTIONS FOR COPY-XOR OF DATA - Apparatus and associated methods for implementing atomic instructions for copy-XOR of data. An atomic-copy-xor instruction is defined having a first operand comprising an address of a first cacheline and a second operand comprising an address of a second cacheline. The atomic-copy-xor instruction, which may be included in an instruction set architecture (ISA) of a processor, performs a bitwise XOR operation on copies of data retrieved from the first cacheline and second cacheline to generate an XOR result, and replaces the data in the first cacheline with a copy of data from the second cacheline when the XOR result is non-zero. In addition to implementation using a processor core, the atomic-copy-xor instruction may be implemented using various offloading schemes under which the processor core executing the atomic-copy-xor instruction offloads operations to other components in the processor or system in which the processor is implemented, including offloading operations to a last level cache (LLC) engine, a memory controller, or a DIMM controller. | 2019-02-14 |
20190050229 | SIMILARITY SCORES OF RULES IN INFORMATION TECHNOLOGY WORKFLOWS - In some examples, first segment of computer language text in a first rule in IT workflow data and a second segment of computer language text in a second rule in the IT workflow data may be identified. In some examples, a similarity score may be determined between the first and the second rules based on a comparison of the first segment with the second segment. | 2019-02-14 |
20190050230 | EFFICIENT MITIGATION OF SIDE-CHANNEL BASED ATTACKS AGAINST SPECULATIVE EXECUTION PROCESSING ARCHITECTURES - The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side-channel based attack, such as one or more classes of an attack commonly known as Spectre. Novel instruction prefixes, and in certain embodiments one or more corresponding instruction prefix parameters, may be provided to enforce a serialized order of execution for particular instructions without serializing an entire instruction flow, thereby improving performance and mitigation reliability over existing solutions. In addition, improved mitigation of such attacks is provided by randomizing both the execution branch history as well as the source address of each vulnerable indirect branch, thereby eliminating the conditions required for such attacks. | 2019-02-14 |
20190050231 | SLAVE PROCESSOR WITHIN A SYSTEM-ON-CHIP - An integrated circuit can include a slave processor configured to execute instructions. The slave processor can be implemented in programmable circuitry of the integrated circuit. The integrated circuit also can include a processor coupled to the slave processor. The processor can be hardwired and configured to control operation of the slave processor. | 2019-02-14 |
20190050232 | METHOD AND APPARATUS TO GATHER PLATFORM CONFIGURATION PROFILE IN A TRUSTWORTHY MANNER - Various embodiments are generally directed to an apparatus, method and other techniques for gathering configuration information of a computer system during a system management mode of the computer system and exposing the gathered configuration information to securely attest to the configuration of the system. | 2019-02-14 |
20190050233 | ELECTRONIC DEVICE, METHOD FOR CONTROLLING ELECTRONIC DEVICE, AND PROGRAM - There is provided an electronic device including a manipulation unit configured to acquire manipulation by a user, and a control unit configured to selectively execute one of a plurality of controls of the electronic device which are associated with a duration of the manipulation and to perform switching of at least one of the plurality of controls according to information indicating a state of the electronic device. | 2019-02-14 |
20190050234 | AUTOMATED PREBOOT PERFORMANCE MEASUREMENT - A system includes a central processing unit (CPU) and components, a particular one of including logic to participate in a portion of a boot sequence of the system, where the portion of the boot sequence begins prior to activation of the CPU. The particular component is to send one or more signals to interact with another one of the components in the system during the portion of the boot sequence. The particular component includes a timer block to generate a set of timestamps during the portion of the boot sequence, where the set of timestamps indicates an amount of execution time of the particular component. The particular component sends the set of timestamps to the other component in a particular one of the one or more signals, where the set of timestamps are used to determine execution time of system components to complete the boot sequence. | 2019-02-14 |
20190050235 | KERNEL-INTEGRATED INSTANCE-SPECIFIC OPERATIONAL RESOURCES WITH VIRTUALIZATION - A network boot of a platform-specific operating system kernel is performed from a compressed platform-specific operating system kernel. The platform-specific operating system kernel, when booted, dynamically builds from the compressed platform-specific operating system kernel a bootable file system and boots application code. An application is loaded from the bootable file system. | 2019-02-14 |
20190050236 | BRANCH REWRITING DEVICE FEATURE OPTIMIZATION - Systems and methods for branch rewriting device feature optimization are disclosed. An example method may include identifying, by a processing device of a computing device, an occurrence of a configuration change associated with a device driver of the computing device, responsive to identification of the configuration change, evaluating one or more devices supported by the device driver and installed on the computing device, determining, in view of the evaluating, that a feature is implemented by each of the one or more devices, the feature corresponding to a conditional branch of the device driver, and responsive to the determining, modifying the device driver to execute an unconditional branch corresponding the feature. | 2019-02-14 |
20190050237 | VIRTUAL ATTRACTION CONTROLLER - A ride system includes a first ride vehicle and a second ride vehicle positioned within a course and configured to travel within the course. The ride system also includes a control system having at least one controller and at least one position tracking system, where the at least one controller is configured to control movement of the first and second ride vehicles, and where the at least one position tracking system is configured to facilitate identification of a first location and a second location of the first and second ride vehicles, respectively, within the course. The ride system also includes a wireless network configured to enable communication between components of the ride system. The at least one controller is configured to receive data indicative of the first and second locations of the first and second ride vehicles, respectively, where the at least one controller determines a control loop for the first and second ride vehicles based on the data indicative of the first and second locations, and where the at least one controller is configured to process the data indicative of the first and second locations to synchronize one or more show elements with the first and second locations. | 2019-02-14 |
20190050238 | PRIORITIZING DIGITAL ASSISTANT RESPONSES - A method and apparatus for providing a response/suggestion to a user by a digital assistant is provided herein. During operation the digital assistant will have knowledge of the status of devices connected to form a personal-area network (PAN), processed sensor data, and/or a current incident type. The digital assistant will then prioritize any responses/suggestions to the user based on the status of associated PAN devices and/or the incident type. | 2019-02-14 |
20190050239 | AUTOMATED TROUBLESHOOT AND DIAGNOSTICS TOOL - This disclosure describes a support user interface for a customer support application that allows a customer support representative to categorize and subcategorize a customer service issue in order to populate a set of probing questions, wherein selected answers to the probing questions can filter from multiple potential root causes, the most likely root cause of the customer service issue. Upon identifying the potential root cause to the customer service issue, one or more potential solutions can be implemented to resolve the customer service issue. | 2019-02-14 |
20190050240 | BIOS STARTUP METHOD AND APPARATUS - A BIOS startup method is disclosed, the method includes: in a first access mode, allocating, by a current node, a local MMCFG in a space below a local access address of the current node that is a first address, and completing memory initialization; and when performing unified memory addressing of a system, moving positions of addresses of a part or an entirety of the MMCFG space of the current node from the original space below the first address in a global access address of the system to a space that is above the first address and can be accessed in a second access mode. | 2019-02-14 |
20190050241 | AUDIO PLAYBACK DEVICE AND METHOD FOR CONTROLLING OPERATION THEREOF - Provided are an audio reproduction device and a method of controlling an operation thereof, which involve a user interface that allows a user to more effectively control various functions. The audio reproduction device includes a processor configured to obtain function information of the audio reproduction device corresponding to a user input received using mapping information for mapping between a user input received based on at least one of at least one wheel region rotatable clockwise or counterclockwise and at least one touch region and function information of the audio reproduction device, and to control the audio reproduction device according to the obtained function information of the audio reproduction device. | 2019-02-14 |
20190050242 | COMPUTER SYSTEMS, COMPUTER IMPLEMENTED METHODS AND COMPUTER EXECUTABLE CODE CONFIGURED TO PROVIDE SECURE PC SOLUTIONS BASED ON A VIRTUAL DESKTOP INFRASTRUCTURE (VDI), INCLUDING IPTV VIA VDI - The present invention relates to computer systems, computer implemented methods and computer executable code configured to provide secure PC solutions based on a virtual desktop infrastructure (VDI) including IPTV via VDI to secure locations such as prison cells. The invention is embodied in a networked system that provides a plurality of virtual machines that are hosted in a Virtual Environment, with data stored in a shared Storage Area Network (SAN). Secure application streaming technology is applied as a broker to deliver a secure user experience to the end users. The solution is based on providing a complete end-to-end solution delivering published apps to users on in-cell devices. The system is configured to provide functionalities including: (i) streaming of TV services to the cells via IPTV; and (ii) eLearning via the VDI environment. | 2019-02-14 |
20190050243 | Systems and Methods for Providing Globalization Features in a Service Management Application Interface - The disclosure can provide systems and methods for providing globalization features in a service management application interface. In one embodiment, a method can include receiving a definition comprising at least one function written in a first language; embedding, within the definition, the at least one function written in a second language translatable to the first language; retrieving the at least one function written in the second language translatable to the first language; and based at least in part on the at least one function written in the second language, converting a text string from the first language to the second language, wherein the text string comprises at least a portion of the definition. | 2019-02-14 |
20190050244 | SERVER APPARATUS, IMAGE FORMING APPARATUS, INFORMATION PROCESSING APPARATUS, IMAGE FORMING CONTROL METHOD, AND IMAGE FORMING CONTROL PROGRAM - A server apparatus provides a social networking service (SNS). The server apparatus stores a specific relation between a terminal apparatus and a printer that are allowed for communication using the SNS. The server apparatus stores relations between one or more icons and commands each corresponding to a respective one of the icons and including an image forming condition and an image forming instruction. Upon receiving a selected icon having been selected on the terminal apparatus, information indicating a selected printer having been selected on the terminal apparatus out of printers in specific relations with the terminal apparatus, and a piece of image data from the terminal apparatus, the server apparatus transmits a command corresponding to the selected icon and the piece of image data to the selected printer. | 2019-02-14 |
20190050245 | SCHEDULING FRAMEWORK FOR TIGHTLY COUPLED JOBS - Managing execution of a job in a computing environment. A method establishes, for a job to be executed in the computing environment, an execution plan for processing the job. The execution plan identifies computationally intensive tasks of the job and data intensive tasks of the job. The method selects a virtual machine of the computing environment to process the identified computationally intensive tasks of the job and identified data intensive tasks of the job. The method assigns the identified computationally intensive tasks of the job for foreground processing of the virtual machine and assigns the identified data intensive tasks of the job for background processing of the virtual machine. Execution of the job executes the identified computationally intensive tasks of the job in foreground processing of the virtual machine and executes the identified data intensive tasks of the job in background processing of the virtual machine. | 2019-02-14 |
20190050246 | PROPAGATING EXTERNAL ROUTE CHANGES INTO A CLOUD NETWORK - An internal route usage information from a set of internal route usage information is analyzed to determine an encoding structure used in the internal route usage information and an external route that is referenced in internal route usage information. Using the set of internal route usage information, a subset of external route change information is selected from a set of external route change information, where each changed external route represented in the subset is usable to reach a currently used destination on an external network. A first external route change information from the subset is encoded according to the encoding structure, forming a first encoded route change data. Using the first encoded route change data, an internal router in an internal network is caused to recognize a status change in a first external route. | 2019-02-14 |
20190050247 | DISK ENCRYPTION - A computer implemented method of providing whole disk encryption for a virtualized computer system including providing a software component executing in a first virtual machine for instantiation in a first hypervisor, the software component invoking a second hypervisor within the first virtual machine for instantiating a disk image of the virtualized computer system as a second virtual machine, and the software component being configured to install a software agent in the second virtual machine, the software agent being adapted to: a) encrypt the instantiated disk image; b) encrypt data written, by the second virtual machine, to the instantiated disk image at a runtime of the second virtual machine; and c) decrypt data read, by the second virtual machine, from the instantiated disk image at a runtime of the second virtual machine, wherein the software component is configured to migrate the second virtual machine at a runtime of the second virtual machine to the first hypervisor so as to provide a wholly encrypted disk image for the second virtual machine executing in the first hypervisor. | 2019-02-14 |
20190050248 | CONTROL APPARATUS, VNF DEPLOYMENT DESTINATION SELECTION METHOD AND PROGRAM - A control apparatus is provided with: a first part configured to hold hardware accelerator requirements that indicate hardware accelerator conditions required by a VNF; a second part configured to hold hardware accelerator configuration information that indicates configuration information of each of a plurality of hardware accelerators that are available; and a third part configured to refer to the hardware accelerator requirements and the hardware accelerator configuration information and selecting, from among the plurality of hardware accelerators, a hardware accelerator to be allocated to the VNF. | 2019-02-14 |
20190050249 | ASSIGNMENT OF PROXIES FOR VIRTUAL-MACHINE SECONDARY COPY OPERATIONS INCLUDING STREAMING BACKUP JOBS - A comprehensive approach to streaming backups for virtual machines (“VMs”) in a storage management system comprises improvements to the assignment of data agent proxies for VM secondary copy operations. New considerations in performing a VM streaming backup job include without limitation: determining and enforcing a system-wide per-proxy limit of concurrent data streams; generating an ordered priority list of the VMs to be backed up as a basis for choosing which proxies will back up the respective VM, though the illustrative system may not strictly adhere to the priority list based on further considerations; identifying a next available proxy based on data stream utilization at the proxy; and dynamically re-generating the priority list and re-evaluating considerations if some VMs become “stranded” due to a failure to be backed up. Secondary copy operations are distributed to proxies in ways that improve the chances of successfully completing VM streaming backups. | 2019-02-14 |
20190050250 | SYSTEMS AND METHODS FOR INTROSPECTIVE APPLICATION REPORTING TO FACILITATE VIRTUAL MACHINE MOVEMENT BETWEEN CLOUD HOSTS - A processor from an introspection daemon running on a virtual machine can receive an introspection report comprising configuration state data of the virtual machine. The virtual machine can comprise a guest operating system hosting the introspection daemon. The configuration state data can comprise an execution state of an application running on the guest operating system of the virtual machine. The processor can generate a virtual machine image of the virtual machine in view of the introspection report. The processor can further initiate a migration of the virtual machine to at least one target cloud in view of the virtual machine image. | 2019-02-14 |
20190050251 | DYNAMICALLY DETERMINE THE TRANSACTION COORDINATOR IN MULTITIER HYBRID TRANSACTION PROCESSING MIDDLEWARE SYSTEMS - A technique relates to dynamically determining a transaction coordinator. A transaction processing middleware (TPM) coordinator receives TPM weightages of TPM participants, where the TPM coordinator has a TPM coordinator weightage, and where the TPM coordinator and TPM participants are executing a transaction instance. The TPM coordinator individually compares the TPM coordinator weightage to each of the TPM weightages of the TPM participants. In response to none of the TPM weightages of the TPM participants being greater than the TPM coordinator weightage, the TPM coordinator is kept unchanged. In response to a given TPM weightage of the TPM weightages of the TPM participants being greater than the TPM coordinator weightage, the TPM coordinator changes a TPM coordinator function to a given TPM participant having the given TPM weightage such that the given TPM participant is an interim TPM coordinator for the transaction instance. | 2019-02-14 |
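The weightage comparison above reduces to a simple selection rule; a minimal sketch, assuming weightages are plain numbers and participant identifiers are hypothetical:

```python
def choose_coordinator(coordinator_id, weightages):
    """Return the id that should act as (interim) coordinator.

    weightages: dict mapping participant id -> weightage; it must include
    the current coordinator's own weightage.  Only a strictly greater
    weightage displaces the current coordinator.
    """
    best_id = coordinator_id
    best_weight = weightages[coordinator_id]
    for pid, w in weightages.items():
        if w > best_weight:
            best_id, best_weight = pid, w
    return best_id
```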
20190050252 | ADAPTIVE QUALITY OF SERVICE CONTROL CIRCUIT - Disclosed approaches for controlling quality of service in servicing memory transactions include periodically reading, by a quality of service management (QM) circuit, respective first data rate metrics and respective latency metrics from requester circuits while the requester circuits are actively transmitting memory transactions to a memory controller. The QM circuit periodically reads a second data rate metric from the memory controller while the memory controller is processing the memory transactions, and determines, while the requester circuits are actively transmitting memory transactions to the memory controller, whether or not the respective first data rate metrics, respective latency metrics, and second data rate metric satisfy a quality of service metric. In response to determining that the operating metrics do not satisfy the quality of service metric, the QM circuit dynamically changes the value(s) of control parameter(s) of the requester circuit(s) and of the memory controller. | 2019-02-14 |
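One periodic pass of such a QM circuit can be sketched in software terms. The specific thresholds (a minimum data rate, a maximum latency) and the throttling response are assumptions for illustration only:

```python
def qos_step(requester_rates, requester_latencies, controller_rate,
             min_rate, max_latency, params):
    """Check all collected metrics against a quality-of-service target and,
    if any metric fails, adjust the control parameters (here: naively bump
    every parameter, standing in for a real tuning policy)."""
    ok = (all(r >= min_rate for r in requester_rates)
          and all(l <= max_latency for l in requester_latencies)
          and controller_rate >= min_rate)
    if not ok:
        params = {k: v + 1 for k, v in params.items()}
    return ok, params
```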
20190050253 | MEMORY REGISTER INTERRUPT BASED SIGNALING AND MESSAGING - In an example, memory register interrupt based signaling and messaging may include receiving, at a control register of a receiver, a signal number from a sender, and copying, by a memory register interrupt management device of the receiver, the signal number to an associated status register of the receiver. Further, memory register interrupt based signaling and messaging may include generating, independently of the signal number from the status register, an interrupt to a central processing unit of the receiver, and triggering, based on the interrupt, an interrupt handler of the receiver to perform an action associated with the signal number. | 2019-02-14 |
20190050254 | SYSTEMS AND METHODS FOR RECOMPUTING SERVICES - Systems, methods, and media are presented that are used to recompute a service model to match data in a configuration management database. Recomputing includes detecting a change to a configuration item in a configuration management database and marking a recomputing environment to be recomputed based on the change. Using a recomputation job, a service environment database is queried and a response is received from the service environment database indicating at least the recomputing environment. The recomputation job then recomputes the service environment to match a service model to the change in the configuration management database. | 2019-02-14 |
20190050255 | DEVICES, SYSTEMS, AND METHODS FOR LOCKLESS DISTRIBUTED OBJECT INPUT/OUTPUT - An object node apparatus, system, and method are described. An apparatus can include a lockless-mode controller configured to communicatively couple to a plurality of storage resources and to a plurality of processor cores each preassigned to process a specific type of sub-task at a different preassigned storage resource that is configured to receive object input/output (I/O) only from the preassigned core, the lockless-mode controller being further configured to receive a plurality of object I/O messages from one or more clients, each to perform an object I/O task, divide each object I/O task into a plurality of sub-tasks, identify a specific sub-task type for each sub-task, and send each sub-task for each specific sub-task type to a processor core preassigned to process the specific sub-task type, wherein the sub-tasks include storage operations related to storing sub-object data in, or retrieving sub-object data from, the preassigned storage resource for each processor core. | 2019-02-14 |
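The divide-and-route step described above maps naturally onto per-core queues. The sub-task type names and the core assignments below are hypothetical placeholders for whatever preassignment a real controller would use:

```python
from collections import defaultdict

# Hypothetical static preassignment: each sub-task type is bound to one core.
CORE_BY_SUBTASK = {"read_meta": 0, "read_data": 1, "write_meta": 2, "write_data": 3}

def dispatch(io_tasks):
    """Divide each object I/O task into its sub-tasks and enqueue every
    sub-task on the core preassigned to that sub-task type, so no two
    cores ever contend for the same storage resource (the lockless idea)."""
    queues = defaultdict(list)
    for task in io_tasks:
        for sub in task["subtasks"]:
            queues[CORE_BY_SUBTASK[sub["type"]]].append(sub)
    return queues
```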
20190050256 | SYSTEMS AND METHODS FOR DISTRIBUTED MANAGEMENT OF COMPUTING RESOURCES - A computer-implemented method for distributed management of computing resources may include (i) performing, by a computing device, an initial configuration of one or more computing resources connected to a network, (ii) detecting a request for a computing resource from a client daemon, (iii) based on the request, initializing a computing environment on the computing resource, (iv) maintaining an active state of the computing resource for a usage session by a client device, (v) detecting, from the client daemon, a notification of completion of the usage session, and (vi) in response to the notification of completion, reverting the computing resource to an initial state. Various other methods, systems, and computer-readable media are also disclosed. | 2019-02-14 |
20190050257 | SYSTEM AND METHOD FOR STRUCTURING SELF-PROVISIONING WORKLOADS DEPLOYED IN VIRTUALIZED DATA CENTERS - The system and method for structuring self-provisioning workloads deployed in virtualized data centers described herein may provide a scalable architecture that can inject intelligence and embed policies into managed workloads to provision and tune resources allocated to the managed workloads, thereby enhancing workload portability across various cloud and virtualized data centers. In particular, the self-provisioning workloads may have a packaged software stack that includes resource utilization instrumentation to collect utilization metrics from physical resources that a virtualization host allocates to the workload, a resource management policy engine to communicate with the virtualization host to effect tuning the physical resources allocated to the workload, and a mapping that the resource management policy engine references to request tuning the physical resources allocated to the workload from a management domain associated with the virtualization host. | 2019-02-14 |
20190050258 | SYSTEM FOR PROVIDING FUNCTION AS A SERVICE (FAAS), AND OPERATING METHOD OF SYSTEM - A system for providing a function as a service (FaaS) is provided. The system includes a communicator which receives a request for setting resources to execute the function, a memory which stores one or more instructions, and a processor. The processor executes the stored instructions. When the processor executes the instructions, it analyzes characteristics of the function and provides recommendation information related to the setting of the resources to execute the function based on a result of the analyzing. | 2019-02-14 |
20190050259 | DATA USAGE EFFECTIVENESS DETERMINATION - Examples disclosed herein relate to determining data usage effectiveness. A processor may determine data usage effectiveness information related to an entity's workflow based on workflow data collected related to the entity's operations and metrics determined based on the data. The determination may be based on a comparison of the information related to the entity's workflow to target workflow information and priority information associated with the target workflow information. The processor may output information related to the determined data usage effectiveness. | 2019-02-14 |
20190050260 | ALLOCATION OF RESOURCES WITH TIERED STORAGE - A computing system includes a computer in communication with a tiered storage system. The computing system identifies a set of data transferring to a storage tier within the storage system. The computing system identifies a program to which the data set is allocated and determines to increase or reduce resources of the computer allocated to the program, based on the set of data transferring to the storage tier. The computing system discontinues transferring the set of data to the storage tier if a resource allocated to the program cannot be increased. | 2019-02-14 |
20190050261 | ARBITRATION ACROSS SHARED MEMORY POOLS OF DISAGGREGATED MEMORY DEVICES - Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities. | 2019-02-14 |
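A priority arbitration unit of this kind is essentially a priority queue keyed by compute-resource priority. A minimal sketch, assuming numeric priorities where a lower number means more urgent (the class and method names are illustrative):

```python
import heapq

class PriorityArbitrationUnit:
    def __init__(self, priorities):
        # priorities: compute resource -> assigned priority (lower = more urgent)
        self.priorities = priorities
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order within a priority

    def submit(self, resource, request):
        heapq.heappush(self._heap, (self.priorities[resource], self._seq, request))
        self._seq += 1

    def next_request(self):
        """Return the next data request to turn into a memory access command."""
        return heapq.heappop(self._heap)[2]
```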
20190050262 | AUTOMATED SYSTEM FOR OPTIMIZING BATCH PROCESSING TIME - A method and system are disclosed herein for optimizing the batch processing time required for executing one or more batch jobs received in real time, while adhering to service level agreement (SLA) compliance in one batch job arrangement of an information technology service management (ITSM). A batch job system is characterized by the set of jobs and the dependencies between jobs. Each job is in turn characterized by run-time, from-time, and SLA definitions. SLAs can be of two kinds: Start-time and End-time. A Start-time SLA requires that job execution starts before the specified time, while an End-time SLA necessitates that the job finishes its execution before the specified time. To optimize the processing time required for executing one or more batch jobs, the disclosure identifies SLA violations and solves them to produce a set of actionable levers. | 2019-02-14 |
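The two SLA kinds lend themselves to a direct violation check; a minimal sketch, assuming times are plain numbers (e.g. minutes past midnight) and the field names are hypothetical:

```python
def sla_violations(jobs):
    """Return the names of jobs breaching their SLA.

    jobs: list of dicts with 'name', actual 'start' and 'end' times, and
    an optional 'start_sla' (must start before it) or 'end_sla' (must
    finish before it).
    """
    bad = []
    for j in jobs:
        if "start_sla" in j and j["start"] > j["start_sla"]:
            bad.append(j["name"])  # Start-time SLA breached
        elif "end_sla" in j and j["end"] > j["end_sla"]:
            bad.append(j["name"])  # End-time SLA breached
    return bad
```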
20190050263 | TECHNOLOGIES FOR SCHEDULING ACCELERATION OF FUNCTIONS IN A POOL OF ACCELERATOR DEVICES - Technologies for scheduling acceleration in a pool of accelerator devices include a compute device. The compute device includes a compute engine to execute an application. The compute device also includes an accelerator pool including multiple accelerator devices. Additionally, the compute device includes an acceleration scheduler logic unit to obtain, from the application, a request to accelerate a function, determine a capacity of each accelerator device in the accelerator pool, schedule, in response to the request and as a function of the determined capacity of each accelerator device, acceleration of the function on one or more of the accelerator devices to produce output data, and provide, to the application and in response to completion of acceleration of the function, the output data to the application. Other embodiments are also described and claimed. | 2019-02-14 |
20190050264 | EDGE COMPUTING PLATFORM - A method for provisioning a computer includes providing a graph that defines relationships between one or more hardware components of a plurality of computers and component characteristics of the one or more hardware components, and relationships between one or more applications and requirements of the one or more applications. The method further includes receiving a selection of an application and determining, via the graph, whether at least one computer with hardware components capable of meeting the requirements of the application exists. If such a computer exists, the method also includes communicating the application to the computer; triggering the computer to execute the application; and communicating, from the computer, data processed by the application to an external system. | 2019-02-14 |
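The capability check at the heart of this provisioning flow can be reduced to comparing component characteristics against application requirements. Representing the graph as nested dicts is an assumption for the sketch; the characteristic names are hypothetical:

```python
def meets_requirements(components, requirements):
    """True if every required characteristic is present at sufficient level."""
    return all(components.get(k, 0) >= v for k, v in requirements.items())

def find_capable(computers, requirements):
    """computers: name -> {characteristic: value}.  Return every computer
    whose hardware can meet the selected application's requirements."""
    return [name for name, comps in computers.items()
            if meets_requirements(comps, requirements)]
```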
20190050265 | METHODS AND APPARATUS FOR ALLOCATING A WORKLOAD TO AN ACCELERATOR USING MACHINE LEARNING - Methods, apparatus, systems, and articles of manufacture for allocating a workload to an accelerator using machine learning are disclosed. An example apparatus includes a workload attribute determiner to identify a first attribute of a first workload and a second attribute of a second workload. An accelerator selection processor causes at least a portion of the first workload to be executed by at least two accelerators, accesses respective performance metrics corresponding to execution of the first workload by the at least two accelerators, and selects a first accelerator of the at least two accelerators based on the performance metrics. A neural network trainer trains a machine learning model based on an association between the first accelerator and the first attribute of the first workload. A neural network processor processes, using the machine learning model, the second attribute to select one of the at least two accelerators to execute the second workload. | 2019-02-14 |
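The profiling step of the approach above (run the first workload on multiple accelerators, keep the best) can be sketched directly. The accelerators are modeled as callables returning a performance metric where lower is better, which is an assumption of this sketch rather than the patented interface:

```python
def profile_and_select(workload, accelerators):
    """Execute `workload` on every candidate accelerator, collect the
    performance metrics, and select the best accelerator.  In the
    described system, the (accelerator, workload-attribute) association
    would then train a machine learning model for future workloads."""
    metrics = {name: run for name, run in
               ((n, fn(workload)) for n, fn in accelerators.items())}
    best = min(metrics, key=metrics.get)
    return best, metrics
```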
20190050266 | SOFTWARE APPLICATION RUNTIME HAVING DYNAMIC EVALUATION FUNCTIONS AND PARAMETERS - The present disclosure generally relates to a software application runtime having dynamic evaluation functions and parameters. A dynamic evaluation engine of an application's runtime accepts evaluation requests from methods of a software application. A request can be associated with an identifier for a method of the software application. The method identifier can be associated with one or more dynamic evaluation functions having one or more dynamic evaluation parameters. When the dynamic evaluation engine receives a request from an application method, the dynamic evaluation engine can determine the one or more current dynamic evaluation functions or parameters to use with the request. The dynamic evaluation engine can return an evaluation result to the method. The dynamic evaluation engine can be in communication with a repository, such as a central repository, providing a unique mechanism to look up dynamic evaluation functions and parameters, which can be imported into the application's runtime. | 2019-02-14 |
20190050267 | PROVISIONING OF DEVICES - A provisioning machine may receive a request that an application be executed while distributed according to a distribution constraint among various devices. The provisioning machine may access a topological model that represents multiple devices configured as a single cloud-based application server and defines a first group of devices that have the same redundancy status (e.g., active or backup). In addition, the topological model may define a second group of devices that have the same functional role (e.g., executing a particular component of the application). A device may be a member of both the first group and the second group. The provisioning machine may determine a size of the first group according to the distribution constraint. Based on the determined size of the first group, the provisioning machine may configure (e.g., provision) the first group of devices as a subset of the multiple devices of the server. | 2019-02-14 |
20190050268 | COMPOSING BY NETWORK ATTRIBUTES - The present disclosure provides a system and method for automatically composing resources in a data center using a management application. The management application can communicate with data center management software to collect information on the computing, storage, and network resources of the data center. Based at least upon the network resource information of the data center, the management application can generate a network topology of the data center. In response to receiving a request from a specific user, the management application can compose suitable resources of the data center to match the request. | 2019-02-14 |
20190050269 | ROBOT SWARM PROPAGATION USING VIRTUAL PARTITIONS - Systems, apparatus, methods, and articles of manufacture to propagate a robot swarm using virtual partitions are disclosed. An example apparatus includes a transceiver to broadcast the availability of the apparatus to host one or more bots from a swarm of bots and to receive a copy request from a bot in the swarm of bots. The example apparatus also includes an evaluator to evaluate instructions from the bot and determine if the apparatus is equipped to propagate the bot. In addition, the example apparatus includes a virtual partition to provide an interface for executing a copy of the bot. | 2019-02-14 |
20190050270 | SIMULTANEOUS MULTITHREADING WITH CONTEXT ASSOCIATIONS - Disclosed herein are systems, devices, and methods for simultaneous multithreading (SMT) with context associations. For example, in some embodiments, a computing device may include: one or more physical cores; and SMT logic to manage multiple logical cores per physical core such that operations of a first computing context are to be executed by a first logical core associated with the first computing context and operations of a second computing context are to be executed by a second logical core associated with the second computing context, wherein the first logical core and the second logical core share a common physical core. | 2019-02-14 |
20190050271 | ADJUSTING VARIABLE LIMIT ON CONCURRENT CODE EXECUTIONS - Systems and methods are described for adjusting a number of concurrent code executions allowed to be performed for a given user on an on-demand code execution environment or other distributed code execution environments. Such environments utilize pre-initialized virtual machine instances to enable execution of user-specified code in a rapid manner, without delays typically caused by initialization of the virtual machine instances. However, to improve utilization of computing resources, such environments may temporarily restrict the number of concurrent code executions performed on behalf of the given user to a number less than the maximum number of concurrent code executions allowed for the given user. Such environments may adjust the temporary restriction on the number of concurrent code executions based on the number of incoming code execution requests associated with the given user. | 2019-02-14 |
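The adjustment of the temporary concurrency restriction can be sketched as a simple feedback rule. The doubling/halving policy and the 90% saturation threshold below are illustrative assumptions, not the disclosed algorithm:

```python
def adjust_limit(current_limit, incoming_rate, utilization, hard_max):
    """Grow the temporary concurrent-execution limit while incoming
    request demand exceeds it and the current allowance is saturated;
    shrink it when demand falls well below the limit.  `hard_max` is the
    user's absolute maximum number of concurrent executions."""
    if incoming_rate > current_limit and utilization >= 0.9:
        return min(current_limit * 2, hard_max)
    if incoming_rate < current_limit // 2:
        return max(current_limit // 2, 1)
    return current_limit
```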
20190050272 | CONTAINER BASED SERVICE MANAGEMENT - A method, computer system, and a computer program product for migrating a service from one container to another container is provided. The present invention may include obtaining a first group of requests that are accessing a service launched in a first container instance and a second group of requests that are waiting for accessing the service. The present invention may also include generating a migrated service in a second container instance by migrating the service from the first container instance to the second container instance based on the obtained first and second groups of requests. The present invention may further include directing the second group of requests to the migrated service in the second container instance. | 2019-02-14 |
20190050273 | EXTENDING BERKELEY PACKET FILTER SEMANTICS FOR HARDWARE OFFLOADS - Examples include registering a device driver with an operating system, including registering available hardware offloads. The operating system receives a call to a hardware offload, inserts a binary filter representing the hardware offload into a hardware component and causes the execution of the binary filter by the hardware component when the hardware offload is available, and executes the binary filter in software when the hardware offload is not available. | 2019-02-14 |
20190050274 | TECHNOLOGIES FOR SYNCHRONIZING TRIGGERED OPERATIONS - Technologies for synchronizing triggered operations include a host fabric interface (HFI) of a compute device configured to receive an operation execution command associated with a triggered operation that has been fired and determine whether the operation execution command includes an instruction to update a table entry of a table managed by the HFI. Additionally, the HFI is configured to issue, in response to a determination that the operation execution command includes the instruction to update the table entry, a triggered list enable (TLE) operation and a triggered list disable (TLD) operation to a table manager of the HFI and to disable, in response to the TLD operation having been triggered, the identified table entry. The HFI is further configured to execute one or more command operations associated with the received operation execution command and re-enable, in response to the TLE operation having been triggered, the table entry. Other embodiments are described herein. | 2019-02-14 |
20190050275 | METHOD FOR MANAGING CONTROL-LOSS PROCESSING DURING CRITICAL PROCESSING SECTIONS WHILE MAINTAINING TRANSACTION SCOPE INTEGRITY - An aspect includes receiving a transaction scope generated for a process in response to processing in a critical section and receiving collected data related to the process. Requests are generated using the collected data. The requests and data are stored as pending items chained together to form an ordered list in a private storage during critical section processing. The requests are processed based on the transaction scope, the processing including implementing a check of the process for any pending items in response to a transaction scope application programming interface being called or other processing relating to the pending items. The pending items are processed in the order they are created by using the ordered list. One of the requests is a rollback request that includes at least one of removing the pending items from the private storage, releasing the private storage for all pending items, and resuming normal processing. | 2019-02-14 |
20190050276 | METHOD FOR PROVIDING TELEMATICS SERVICE USING VIRTUAL VEHICLE AND TELEMATICS SERVER USING THE SAME - A method for providing a telematics service by using a virtual vehicle is provided. The method includes steps of: (a) a telematics server, if a request for registering the vehicle is acquired from a third-party system linked with the telematics service, creating a vehicle ID, and providing a telematics API to the third-party system; (b) the telematics server, if a request is received from the third-party system, creating a token ID corresponding to the system by referring to information on the system and the vehicle ID, and then transmitting it to the system; and (c) the telematics server, if a telematics service request using the token ID is transmitted from the system through the telematics API, confirming the vehicle ID corresponding to the token ID, simulating the virtual vehicle in response to the service request, and transmitting the simulation result to the third-party system. | 2019-02-14 |
20190050277 | ROUTER MANAGEMENT BY AN EVENT STREAM PROCESSING CLUSTER MANAGER - A computing device manages a router to route events between a plurality of computing devices based on a manager configuration file. A manager engine is instantiated based on a manager engine definition and instantiates a manager ESPE based on a created manager ESP model. A router configuration file is created based on mapping information read from the manager configuration file that describes connectivity between an event publishing source and a source window of the manager ESPE. A router engine is instantiated based on the created router configuration file. A connector is started to receive an event based on the router configuration file. The event is received in the source window of the manager ESPE defined by the manager ESP model and processed based on the manager ESP model. A third computing device is selected by the router engine. The processed event is published to the third computing device. | 2019-02-14 |
20190050278 | AUTOMATIC RESOURCE DEPENDENCY TRACKING AND STRUCTURE FOR MAINTENANCE OF RESOURCE FAULT PROPAGATION - A computing environment includes an originating system, a plurality of networked communication channels each configured to communicate one or more of a plurality of instructions for calling one or more downstream applications in response to calling of an originating application by the originating system, and a resource dependency system for providing automatic resource dependency tracking and maintenance of resource fault propagation. The resource dependency system performs a query configured to identify any application calls performed in a predetermined period of time; for each identified application call, builds a corresponding transaction paragraph comprising a list of all sub-application calls performed in response to the application call; from each transaction paragraph, extracts a chronological sequence of sub-application calls found in the transaction paragraph; forms a tier pathway for each transaction paragraph; and stores each tier pathway in an accessible file. | 2019-02-14 |
20190050279 | FUNCTIONAL SAFETY ERROR REPORTING AND HANDLING INFRASTRUCTURE - Various systems and methods for error handling are described herein. A system for error reporting and handling includes a common error handler that handles errors for a plurality of hardware devices, where the common error handler is operable with other parallel error reporting and handling mechanisms. The common error handler may be used to receive an error message from a hardware device, the error message related to an error; identify a source of the error message; identify a class of the error; identify an error definition of the error; determine whether the error requires a diagnostics operation as part of the error handling; initiate the diagnostics operation when the error requires the diagnostics operation; and clear the error at the hardware device. | 2019-02-14 |
20190050280 | SELECTING STORAGE UNITS OF A DISPERSED STORAGE NETWORK - A method begins by a processing module of a computing device in a dispersed storage network (DSN) receiving a read request for a data segment, where the data segment is dispersed error encoded to produce a set of encoded data slices (EDSs) that are stored in a plurality of storage units (SUs) in a storage unit (SU) set. The method continues with the computing device determining loading information for each SU of the SU set and identifying a read threshold number of SUs of the SU set based on the loading information and a pattern selection scheme. The method continues with the processing module transmitting a read slice request to each SU of the read threshold number of SUs that are identified. | 2019-02-14 |
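Identifying a read-threshold number of storage units from their loading information can be sketched with one simple pattern selection scheme, picking the least-loaded units (the scheme and the load representation are assumptions of this sketch):

```python
def select_read_sus(loads, read_threshold):
    """loads: dict mapping SU id -> current load metric.  Return the
    `read_threshold` least-loaded storage units, which would then each
    receive a read slice request for their encoded data slice."""
    ranked = sorted(loads, key=lambda su: loads[su])
    return ranked[:read_threshold]
```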
20190050281 | Long-Running Storage Manageability Operation Management - Serving resources. A method includes sending a message to a client indicating that the client should attempt to obtain status information for one or more asynchronous read/write operations on a datastore, requested by the client but not yet completed, at a later time. A request is received from the client for status information about the asynchronous, read/write, storage operations on the datastore. A message is sent to the client indicating that the asynchronous read/write operations are in progress and that the client should attempt to obtain status information for the asynchronous read/write operations on the datastore at a later time. Requests are received from the client for status information about the operations until the asynchronous read/write operations are complete, after which, an indication is provided to the client indicating that the asynchronous read/write operations have been completed. | 2019-02-14 |
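From the client's side, the protocol above is a poll-until-complete loop. A minimal sketch, assuming the server's status responses are the strings "in_progress" and "complete" (hypothetical values standing in for the real message format):

```python
import itertools

def poll_until_done(check_status, max_polls=100):
    """Keep requesting status for an asynchronous read/write operation,
    as the server's 'try again later' message instructs, until the server
    reports completion.  Returns the number of polls that were needed."""
    for attempt in itertools.count(1):
        if check_status() == "complete":
            return attempt
        if attempt >= max_polls:
            raise TimeoutError("operation still in progress")
```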
20190050282 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing device including a memory, and a processor coupled to the memory and the processor configured to execute a process, the process including generating data indicating a relationship between a processing load and a communication load of a first computer which executes a specified process in a second information processing system which is the same as or similar to a first information processing system in which a failure occurs, and calculating a processing load of a second computer which executes the specified process in the first information processing system based on the generated data and a communication load of the second computer, the calculated processing load being a processing load before the failure occurs in the first information processing system. | 2019-02-14 |
20190050283 | SERVER RAS LEVERAGING MULTI-KEY ENCRYPTION - An embodiment of a semiconductor package apparatus may include technology to determine if an access request (e.g., a read or write request) to a memory location would result in an integrity failure and, if so determined, read previous data from the memory location, set an indicator to indicate the integrity failure, and store the previous data together with the indicator and previous authentication information. Other embodiments are disclosed and claimed. | 2019-02-14 |
20190050284 | SHARED ADDRESS COUNTERS FOR MULTIPLE MODES OF OPERATION IN A MEMORY DEVICE - As described above, certain modes of operation, such as the Fast Zero mode and the ECS mode, may facilitate sequential access to individual cells of a memory array. To facilitate this functionality, a command controller may be provided, including one or more individual controllers to control the address sequencing when a particular mode entry command (e.g., Fast Zero or ECS) is received. In order to generate internal addresses to be accessed sequentially, one or more counters may also be provided. Advantageously, the counters may be shared such that they can be used in any mode of operation that may require address sequencing of all or large portions of the memory array, such as the Fast Zero mode or the ECS mode. | 2019-02-14 |
20190050285 | DATA WRITE METHOD AND MEMORY STORAGE DEVICE USING THE SAME - A data write method for writing data is provided. The data write method is adapted to a memory controller adopting an ECC scheme and includes: encoding the data to generate a codeword; writing the codeword into the memory array according to a first write condition; and performing a verify operation. The step of performing the verify operation includes: reading the codeword from the memory array; comparing the read codeword with the codeword and obtaining an error bit number of the read codeword; decoding the read codeword to generate a decoded data by an ECC decoder; comparing the decoded data with the data; and comparing the error bit number of the read codeword with a pass threshold if the decoded data is identical to the data. If the error bit number of the read codeword is greater than the pass threshold, the data write method further comprises writing the codeword into the memory array according to a second write condition, where the second write condition is different from the first write condition. In addition, a memory storage device using the data write method is also provided. | 2019-02-14 |
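The write-verify-rewrite flow above can be sketched with the encoder, decoder, and memory access passed in as callables; everything here (the toy codec, the condition labels "first"/"second") is a stand-in for the real ECC scheme and write conditions:

```python
def write_with_verify(data, encode, decode, write, read, pass_threshold):
    """Write a codeword, read it back, and if too many raw bit errors
    remain (or the decode does not reproduce the data), rewrite it under
    a second, presumably more conservative, write condition."""
    codeword = encode(data)
    write(codeword, condition="first")
    readback = read()
    # Error bit count: positions where the read codeword differs.
    errors = sum(a != b for a, b in zip(readback, codeword))
    if decode(readback) == data and errors <= pass_threshold:
        return "ok"
    write(codeword, condition="second")
    return "rewritten"
```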
20190050286 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A method for operating a memory system includes: performing a read operation in response to a first tag; performing a read operation in response to a second tag; performing a defense code operation corresponding to the first tag; performing an error correction code (ECC) operation on data output through the defense code operation corresponding to the first tag; and performing a defense code operation corresponding to the second tag, wherein the read operation in response to the second tag is started before the ECC operation corresponding to the first tag is completed, and wherein the defense code operation corresponding to the second tag is performed using a result of the defense code operation corresponding to the first tag. | 2019-02-14 |
20190050287 | DECODING METHOD, ASSOCIATED FLASH MEMORY CONTROLLER AND ELECTRONIC DEVICE - The present invention provides a decoding method of a flash memory controller, wherein the decoding method includes the steps of: reading first data from a flash memory module; decoding the first data, and recording at least one specific address of the flash memory module according to decoding results of the first data, wherein said at least one specific address corresponds to a bit having high reliability errors (HRE) of the first data; reading second data from the flash memory module; and decoding the second data according to said at least one specific address. | 2019-02-14 |
20190050288 | METHODS AND SYSTEMS FOR IMPLEMENTING REDUNDANCY IN MEMORY CONTROLLERS - The present disclosure relates to methods and systems for implementing redundancy in memory controllers. The disclosed systems and methods utilize a row of memory blocks, such that each memory block in the row is associated with an independent media unit. Failures of the media units are not correlated, and therefore, a failure in one unit does not affect the data stored in the other units. Parity information associated with the data stored in the memory blocks is stored in a separate memory block. If the data in a single memory block has been corrupted, the data stored in the remaining memory blocks and the parity information is used to retrieve the corrupted data. | 2019-02-14 |
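The single-parity recovery in 20190050288 is the classic XOR construction. A minimal sketch, assuming fixed-size blocks and one parity block (the block layout is illustrative, not the patent's exact design):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity_of(blocks):
    # Parity block, stored in a separate memory block on its own media unit.
    p = bytes(len(blocks[0]))
    for blk in blocks:
        p = xor_bytes(p, blk)
    return p

def recover(blocks, parity, corrupted_index):
    # XOR of the parity with every surviving block reproduces the lost one,
    # because each surviving block cancels itself out of the running XOR.
    acc = parity
    for i, blk in enumerate(blocks):
        if i != corrupted_index:
            acc = xor_bytes(acc, blk)
    return acc
```

The independence of media units is what makes this work: a single-unit failure corrupts at most one block per row, which is exactly the erasure pattern one parity block can repair.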
20190050289 | SYSTEM AND METHOD FOR DISTRIBUTED ERASURE CODING - A system and method for distributed erasure coding. A plurality of storage devices is directly connected to one or more host computers, without an intervening central controller distributing data to the storage devices and providing data protection. Parity codes are stored in one or more dedicated storage devices or distributed over a plurality of the storage devices. When a storage device receives a write command, it calculates a partial parity code, and, if the parity code for the data being written is on another storage device, sends the partial parity code to the other storage device, which updates the parity code using the partial parity code. | 2019-02-14 |
20190050290 | FALLBACK DELEGATES FOR MODIFICATION OF AN INDEX STRUCTURE - A method includes identifying a fallback delegate device of a plurality of delegate devices for changing one or more nodes of a plurality of nodes of a hierarchical index structure, where a primary delegate device of the plurality of delegate devices is responsible for changing the one or more nodes and where each delegate device of the plurality of delegate devices is assigned an individual global namespace address that is partially based on a location within the dispersed storage network (DSN). The method further includes determining to process a change to a node of the one or more nodes using the fallback delegate device. | 2019-02-14 |
20190050291 | STANDARD AND NON-STANDARD DISPERSED STORAGE NETWORK DATA ACCESS - A method includes receiving, by a computing device of a dispersed storage network (DSN), a non-standard data access request regarding a set of encoded data slices, where the non-standard data access request includes a set of network identifiers of a set of storage units, a data identifier corresponding to data, and a data access function. The method further includes the computing device converting the non-standard data access request into one or more DSN slice names. The method further includes the computing device determining that the one or more DSN slice names are within a slice name range allocated to the computing device. When the one or more DSN slice names are within the slice name range, the method further includes the computing device executing the data access function regarding one or more encoded data slices corresponding to the one or more DSN slice names. | 2019-02-14 |
20190050292 | LARGE OBJECT PARALLEL WRITING - A method includes partitioning a data object into a plurality of data partitions. The method further includes dispersed storage error encoding a first data partition of the plurality of data partitions into a first plurality of sets of encoded data slices. The method further includes generating a first segment allocation table (SAT) regarding storage of the first plurality of sets of encoded data slices in a first set of storage units of the DSN. The method further includes dispersed storage error encoding the first SAT to produce a first set of SAT slices. The method further includes sending the first plurality of sets of encoded data slices and the first set of SAT slices to the first set of storage units. The method further includes updating a directory with information regarding the first SAT. | 2019-02-14 |
20190050293 | UPDATING AN ENCODED DATA SLICE - A distributed storage (DS) processing unit distributes an initial set of encoded data slices and an initial parity slice, for storage in multiple DS units. The initial parity slice is associated with an initial encoded data slice stored in a first DS unit. The DS processing unit transmits an updated encoded data slice reflecting changes to the initial encoded data slice, and obtains, from the first DS unit, delta parity information associated with a delta parity slice. The delta parity slice reflects differences between parity values calculated using the updated data slice and the initial data slice. An updated parity slice is generated by performing an exclusive OR (XOR) operation on the initial parity slice and the delta parity slice. A message transmitted to a second DS unit, which currently stores the initial parity slice, directs the second DS unit to store the updated parity slice. | 2019-02-14 |
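The delta-parity update in 20190050293 exploits the linearity of XOR-based parity: the parity holder never needs to see any data slice, only the delta. A sketch under that assumption (function names are illustrative):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def delta_parity(initial_slice: bytes, updated_slice: bytes) -> bytes:
    # Computed locally by the first DS unit; depends only on its own
    # old and new slice, not on any other unit's data.
    return xor_bytes(initial_slice, updated_slice)

def apply_delta(initial_parity: bytes, delta: bytes) -> bytes:
    # The second DS unit updates its stored parity slice with one XOR,
    # without reading the data slices held by other units.
    return xor_bytes(initial_parity, delta)
```

Because XOR is its own inverse, `initial_parity XOR (old XOR new)` removes the old slice's contribution and adds the new one in a single operation, which is what lets the update proceed with one small message instead of a full parity recomputation.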
20190050294 | CONTEXT AWARE SOFTWARE UPDATE FRAMEWORK FOR AUTONOMOUS VEHICLES - In one example a system to manage software updates for one or more devices on a vehicle comprises a communication interface to receive one or more software updates for the one or more devices on the vehicle, and a controller communicatively coupled to one or more devices and comprising processing circuitry to receive one or more software updates for at least one of the one or more devices, start a software update process for at least one of the one or more devices, detect a fault condition that corrupted the software update process, and in response to the fault condition, to implement a software update process fault protocol. Other examples may be described. | 2019-02-14 |
20190050295 | MANAGING FUNCTION LEVEL RESET IN AN IO VIRTUALIZATION-ENABLED STORAGE DEVICE - A data storage device comprises a non-volatile semiconductor memory device and a solid-state drive controller communicatively coupled to the non-volatile semiconductor memory device, including a function level reset manager. The function level reset manager can receive a function level reset request from a host system, generate a function level reset bitmap based on the function level reset request, and broadcast the function level reset request to a command processing pipeline. The function level reset bitmap can indicate which functions are in a reset state. Further, the function level reset manager can determine which functions are in the reset state and instruct the command processing pipeline to cancel commands associated with the functions in the reset state. | 2019-02-14 |
20190050296 | AUTO-UPGRADE OF REMOTE DATA MANAGEMENT CONNECTORS - Methods and systems for automatically upgrading or synchronizing a remote data management agent running on a remote host machine (e.g., a hardware server) to a particular version that is in-sync with a corresponding version used by a cluster of data storage nodes controlling the remote data management agent are described. The remote agent may be initially installed on the remote host and subsequent updates to the remote agent may be performed using the remote agent itself without requiring intervention by the remote host. The remote agent may comprise a backup agent and a bootstrap agent that are each exposed on different network ports or associated with different port numbers or networking addresses. The backup agent may perform data backup related tasks for backing up files stored on the remote host and the bootstrap agent may perform upgrade related tasks for upgrading the backup agent. | 2019-02-14 |
20190050297 | MEMORY DEVICES AND SYSTEMS WITH SECURITY CAPABILITIES - Several embodiments of systems incorporating memory devices are disclosed herein. In one embodiment, a memory device can include a controller, a main memory operably coupled to the controller, and security hardware operably coupled to the controller and to the main memory. The main memory can include a plurality of memory regions and at least one reserved memory region configured to store genuine backups of memory content stored in the plurality of memory regions. In operation, the security hardware is configured to measure memory content of the plurality of memory regions before startup, shutdown, and reset of the memory device; compare the measured value to an expected value; and direct the controller to replace the memory content with a genuine backup of the memory content stored in the at least one reserved memory region if the measured value and the expected value are not in accord. | 2019-02-14 |
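The measure-compare-restore cycle in 20190050297 can be sketched as below. The choice of SHA-256 as the measurement function, and the dict-based region model, are assumptions; the patent does not specify how the security hardware computes its measured value.

```python
import hashlib

def measure(region: bytes) -> str:
    # Illustrative measurement: a SHA-256 digest of the region's content.
    return hashlib.sha256(region).hexdigest()

def verify_and_repair(regions: dict, expected: dict, backups: dict):
    """Compare each region's measured value to its expected value and
    replace mismatching content with the genuine backup held in the
    reserved memory region. Returns the names of repaired regions."""
    repaired = []
    for name in regions:
        if measure(regions[name]) != expected[name]:
            regions[name] = backups[name]   # restore genuine backup
            repaired.append(name)
    return repaired
```

Running this at startup, shutdown, and reset (as the abstract describes) bounds how long tampered content can persist: any modification outside the reserved backup region is rolled back at the next checkpoint.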
20190050298 | METHOD AND APPARATUS FOR IMPROVING DATABASE RECOVERY SPEED USING LOG DATA ANALYSIS - Disclosed is a method for improving a database recovery speed using a log data analysis according to a first exemplary embodiment of the present disclosure, including: reading at least one redo log file and loading recovery log data on a storage unit; analyzing the loaded recovery log data and generating a plurality of sub log data groups, the plurality of respective sub log data groups being associated with specific data blocks and the specific data blocks associated with the plurality of respective sub log data groups being different from each other; and generating at least one adjacent log data group based on positional information of the specific data blocks associated with the plurality of respective sub log data groups, each of the at least one adjacent log data group including at least one sub log data group. | 2019-02-14 |
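The two grouping stages in 20190050298 — per-block sub log data groups, then positionally adjacent groups — can be sketched as follows. The record shape `(block_id, redo_op)` and the gap threshold are illustrative assumptions, not the patent's actual log format.

```python
def group_by_block(records):
    # Stage 1: build sub log data groups, one per distinct data block,
    # preserving the order of redo records within each block.
    groups = {}
    for block_id, redo_op in records:
        groups.setdefault(block_id, []).append(redo_op)
    return groups

def adjacent_groups(groups, max_gap=1):
    # Stage 2: merge sub-groups whose data blocks are positionally close,
    # so each merged group can be replayed with near-sequential I/O.
    blocks = sorted(groups)
    merged, run = [], [blocks[0]]
    for b in blocks[1:]:
        if b - run[-1] <= max_gap:
            run.append(b)
        else:
            merged.append(run)
            run = [b]
    merged.append(run)
    return merged
```

Grouping by block position is what buys the recovery speedup the title promises: replaying each adjacent group touches a contiguous run of blocks instead of seeking per log record.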
20190050299 | Method of Secure Storage Medium Backup and Recovery - In a method of backup and recovery of file(s) and/or folder(s) stored on a non-volatile computer readable storage medium, a serial number of a recovery drive connected in operative communication with a processor of a computer system, but not mounted to the operating system, is used to mount the recovery drive to the operating system. The processor copies file(s) and/or folder(s) stored on an internal drive to the recovery drive and then unmounts the recovery drive using the serial number. Following an encrypting ransomware attack, the same or another instance of the serial number of the recovery drive is used to mount the recovery drive, whereafter the file(s) and/or folder(s) stored on the recovery drive are copied or restored to the internal drive. Finally, the recovery drive is unmounted using the serial number. | 2019-02-14 |
20190050300 | SYSTEMS AND METHODS FOR SAFETY ANALYSIS INCLUDING CONSIDERATION OF DEPENDENT FAILURES - A method for performing safety analysis includes determination of the diagnostic coverage of safety mechanisms. The method includes estimating failure rates for different scenarios and potential sources of failure. The method also includes considering and quantifying the effect of dependent failures that arise from other errors that may already be accounted for by existing safety mechanisms. | 2019-02-14 |
20190050301 | CHUNK ALLOCATION - Methods and systems for identifying a set of disks within a cluster and then storing a plurality of data chunks into the set of disks such that the placement of the plurality of data chunks within the cluster optimizes failure tolerance and storage system performance for the cluster are described. The plurality of data chunks may be generated using replication of data (e.g., n-way mirroring) or application of erasure coding to the data (e.g., using a Reed-Solomon code or a Low-Density Parity-Check code). The topology of the cluster including the physical arrangement of the nodes and disks within the cluster and status information for the nodes and disks within the cluster (e.g., information regarding disk fullness, disk performance, and disk age) may be used to identify the set of disks in which to store the plurality of data chunks. | 2019-02-14 |
20190050302 | CHUNK ALLOCATION - Methods and systems for identifying a set of disks within a cluster and then storing a plurality of data chunks into the set of disks such that the placement of the plurality of data chunks within the cluster optimizes failure tolerance and storage system performance for the cluster are described. The plurality of data chunks may be generated using replication of data (e.g., n-way mirroring) or application of erasure coding to the data (e.g., using a Reed-Solomon code or a Low-Density Parity-Check code). The topology of the cluster including the physical arrangement of the nodes and disks within the cluster and status information for the nodes and disks within the cluster (e.g., information regarding disk fullness, disk performance, and disk age) may be used to identify the set of disks in which to store the plurality of data chunks. | 2019-02-14 |
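The placement policy in the two CHUNK ALLOCATION abstracts above (20190050301 and 20190050302 share an abstract) can be sketched as a greedy selection: spread chunks across distinct nodes for failure tolerance, preferring emptier disks for performance. The disk schema (`id`, `node`, `fullness`) and the one-chunk-per-node constraint are assumptions for illustration, not the patents' actual data model.

```python
def place_chunks(chunks, disks, max_per_node=1):
    # Greedy placement: for each chunk, pick the least-full disk on a node
    # that has not yet reached its per-node chunk budget. Uncorrelated node
    # failures then lose at most max_per_node chunks of the stripe.
    placement = {}
    per_node = {}
    for chunk in chunks:
        candidates = sorted(
            (d for d in disks
             if per_node.get(d["node"], 0) < max_per_node
             and d["id"] not in placement.values()),
            key=lambda d: d["fullness"])
        if not candidates:
            raise RuntimeError("not enough independent disks for placement")
        disk = candidates[0]
        placement[chunk] = disk["id"]
        per_node[disk["node"]] = per_node.get(disk["node"], 0) + 1
    return placement
```

A production allocator would also weigh disk performance and age, as the abstract notes; fullness alone keeps the sketch short while showing how topology and status information jointly drive the choice.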
20190050303 | USING DISPERSED COMPUTATION TO CHANGE DISPERSAL CHARACTERISTICS - A method includes determining an encoding modification for a set of encoded data slices where a data segment of data is dispersed storage error encoded into the set of encoded data slices based on dispersed storage error encoding parameters. The method further includes determining a plurality of tasks for executing the encoding modification, where the encoding modification includes altering one or more parameters of the dispersed storage error encoding parameters. The method further includes assigning a first task of the plurality of tasks to a first storage unit and assigning remaining tasks of the plurality of tasks to a set of storage units. The method further includes executing, by the first storage unit and at least some storage units of the set of storage units, the first task and the remaining tasks of the plurality of tasks, respectively, to produce a modified set of encoded data slices. | 2019-02-14 |
20190050304 | Method and Apparatus for Indirectly Assessing a Status of an Active Entity - A method and system permit a backup entity of a redundant apparatus of a communication system, which shares control of hardware resources or other network resources with an active entity, to indirectly determine a status of the active entity based upon behavior and reactions to actions the backup entity takes in connection with the resources it shares with the active entity. Such a method and system permit the backup entity to deduce the state of the active entity without any hardware connection or other communication connection to the active entity. | 2019-02-14 |
20190050305 | REPLACEABLE MEMORY - The present disclosure includes apparatuses comprising replaceable memory. An example apparatus may include a controller and a memory package coupled to the controller and including a plurality of memory dies. At least one of the memory package and the controller may be a replaceable unit that is removable from the apparatus and replaceable with a different replaceable unit while maintaining operation of the apparatus. | 2019-02-14 |
20190050306 | DATA REDUNDANCY AND ALLOCATION SYSTEM - This disclosure describes techniques for monitoring network node traffic and dynamically re-directing network node traffic from an active repository that has a non-operational data cluster, to a standby repository with an operational alternate data cluster. Particularly, a “Data Redundancy Allocation” (DRA) system is described that can monitor the operational integrity of an active repository and dynamically co-ordinate and re-direct network node traffic to a standby, redundant data repository in response to detecting that the active repository is no longer operational. In doing so, the data redundancy allocation system may ensure a continuous communication stream of data traffic from network nodes to data repositories (i.e., active repository or a designated standby repository) in spite of a data repository inadvertently becoming non-operational, or intentionally brought offline for a planned upgrade. | 2019-02-14 |
20190050307 | MULTILEVEL FAULT SIMULATIONS FOR INTEGRATED CIRCUITS (IC) - Embodiments include apparatuses, methods, and systems for testing an IC of an in-vehicle system of a CA/AD vehicle; the system includes a storage device and processing circuitry coupled with the storage device. A gate level fault group is provided to include one or more gate level faults of a fault model associated with a gate level circuit element of the gate level netlist of the IC with substantially the same fault controllability or observability characteristics. A correlated RTL fault group is determined to be associated with an RTL circuit node, where the RTL circuit node of the RTL netlist corresponds to the gate level circuit element. Other embodiments may also be described and claimed. | 2019-02-14 |
20190050308 | FUNCTIONAL SAFETY SYSTEM ERROR INJECTION TECHNOLOGY - Systems, apparatuses and methods may provide for technology that detects a startup of a system on chip (SoC) and injects, during the startup, one or more domain startup errors into a plurality of domains on the SoC. Additionally, the technology may determine whether the domain startup error(s) were detected during the startup. In one example, the plurality of domains include one or more fabric interfaces. | 2019-02-14 |