19th week of 2016 patent application highlights part 41 |
Patent application number | Title | Published |
20160132269 | METHOD AND APPARATUS FOR SETTING HIGH ADDRESS BITS IN A MEMORY MODULE - Provided are a method and apparatus for setting high address bits in a memory module. A memory module controller in the memory module, having pins to communicate on a bus, determines whether high address bits are available for the memory module, uses a predetermined value for at least one high address bit with addresses communicated from a host memory controller in response to determining that the high address bits are not available to address a first address space in the memory module, and uses values communicated from the host memory controller on at least one of the pins used for the at least one high address bit in response to determining that the high address bits are available to address a second address space, wherein the second address space is larger than the first address space. | 2016-05-12 |
20160132270 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device including an NV memory that is a non-volatile recording medium, a file system unit that manages one or more files stored in the NV memory, and a memory management unit that allocates one or more areas of the NV memory, that are ready to be used by the file system unit to store a file, to a running process in response to a request from the running process. The file system unit accesses areas of the NV memory storing unused area management data sets for managing unused areas of the NV memory. The unused area management data sets have a data structure that is suitable for characteristic features of the NV memory. When the NV memory is used both as a main memory and a storage, the time required for allocating memory blocks to a process is shortened. | 2016-05-12 |
20160132271 | COMPUTER SYSTEM - In a computer system having a storage controller that receives a read request or a write request, a processor is configured to send to an interface device either a read-support indication, which is an indication to execute either all or a portion of read processing for read-data of the read request, or a write-support indication, which is an indication for either all or a portion of write processing for write-data of the write request. Then, the interface device, in accordance with either the read-support indication or the write-support indication, is configured to execute either all or a portion of the read processing for the read-data, or all or a portion of the write processing for the write-data, and to send to a host computer either a first response to the effect that the read processing has been completed, or a second response that the write processing has been completed. | 2016-05-12 |
20160132272 | INFORMATION PROCESSING APPARATUS, COMMUNICATION METHOD AND INFORMATION PROCESSING SYSTEM - An information processing apparatus, among a plurality of information processing apparatuses, to which one of pieces of local data is assigned, the pieces of local data having been obtained by dividing global data shared by the plurality of information processing apparatuses, includes: a storage unit that includes a first storage area sectioned into prescribed units, and stores local data; a processor that executes a process including: detecting a plurality of continuous sections to which the target local data is to be written in a second storage area that is sectioned into the prescribed units in the different information processing apparatus, on the basis of storage area information that identifies data to which the target local data corresponds in the global data; and extracting as many pieces of local data as specified by the number of the continuous sections and transmitting the data to the different information processing apparatus. | 2016-05-12 |
20160132273 | TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a distributed computing storage environment by a processor device, the distributed computing environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments and clumped hot ones of the data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage; uniformly hot groups of data segments are determined using a first heat map for a selected one of the group of the data segments; and a second heat map is used to determine the clumped hot groups. | 2016-05-12 |
20160132274 | PRINTING IN A DISTRIBUTED COMMUNICATIONS NETWORK - A method and system is provided for printing jobs received from enterprise customers through a global printing network. One aspect relates to an architecture that interfaces customers, communication service firms (CSFs), and downstream digital print service providers (PSPs) in a global communications network. Such an architecture permits last-mile production functions that allow the distribution of print jobs to be optimized, containing costs, maintaining quality, and performing billing functions that improve the quality of such networks and make a global print network feasible. As a result, Enterprise customers benefit from lower costs and global sourcing while print service providers and graphics service firms benefit from increased revenue due to increased utilization of the overall global network. | 2016-05-12 |
20160132275 | Methods and Systems for Enhancement of Game Creativity - Disclosed herein are methods and systems for enhancing creativity in an interactive online environment through a system. The disclosed systems comprise 3D printers, computers, and interfaces between the computers and the printers. In addition, the disclosed methods and systems allow for creating one or more digital objects and generating one or more digital representations of the one or more three-dimensional objects; in addition, redistribution of digital representations can be performed in the digital environment. | 2016-05-12 |
20160132276 | COPYRIGHT INFRINGEMENT PREVENTION - In an approach for determining printability of an electronic file, a computer electronically receives a file for printing. The computer parses the file for one or more of text, images, and formatting indicative of potential copyrighted material. The computer, in response to identifying any text, images, or formatting indicative of potential copyrighted material, identifies potential copyrighted material within the file. The computer determines whether the file may be printed based, at least in part, on the identified potential copyrighted material. In another approach for determining printability of an electronic document, a computer electronically receives a document for printing. The computer locates attributes associated with the document and stored in a separate database, which includes one or more of the following: ownership, licensing information, printability, and number of prints allowed. The computer determines the document is printable based on the attributes and prints the document. | 2016-05-12 |
20160132277 | PRINT PATH OBFUSCATION METHOD AND SYSTEM FOR DOCUMENT CONTENT ANALYTICS ASSESSMENT - Disclosed is a method and system of differentially processing a print job including one or more original documents to render an obfuscated version of the print job. According to an exemplary method, the differential process replaces letters of an original document with randomly selected characters of substantially the same size and location as the original document, and objects such as images/graphics are replaced with blurred versions of substantially the same size and locations as the objects in the original document. The differential process creates an obfuscated version of the print job which is illegible yet useful for further processing where privacy of documents included in the print job is required. | 2016-05-12 |
20160132278 | METHOD, SERVER, CLIENT AND SOFTWARE - A client device is disclosed. The client device comprises: a communication unit configured to receive a plurality of parameters and an image comprised of a plurality of segments of a captured scene, wherein the parameters define at least a section of the segments of the image and associate co-ordinates of a cut-out view of the segment with the image; a decoder operable to decode the image; a processing unit configured to receive the co-ordinates of the cut-out view for display on the client device and to define an area of the image to be displayed using the parameters; and a display configured to display the area of the image. | 2016-05-12 |
20160132279 | UNIFIED DESKTOP BIG BROTHER APPLICATION POOLS - Methods and devices for selectively presenting a user interface or “desktop” across two devices are provided. More particularly, a unified desktop is presented across a device and a computer system that comprise a unified system. The unified desktop acts as a single user interface that presents data and receives user interaction in a seamless environment that emulates a personal computing environment. To function within the personal computing environment, the unified desktop includes a process for docking and undocking the device with the computer system. The unified desktop presents a new user interface to allow access to functions of the unified desktop. | 2016-05-12 |
20160132280 | IMAGE TRANSMISSION SYSTEM AND IMAGE TRANSMISSION METHOD - Provided is an image transmission system including an image control device, and at least two signal processing devices. The signal processing devices each include an image receiver configured to selectively receive one or more images transmitted using multicast based on image control information transmitted from the image control device, one or more image processing units configured to perform an image process on an image received by the image receiver based on the image control information, and an image sender configured to transmit an image subjected to the image process by the image processing unit based on the image control information, the image being transmitted using multicast. | 2016-05-12 |
20160132281 | DISPLAY SYSTEM AND DISPLAY DEVICE - Provided is a display system or a display device that is suitable for increasing in size. The display system includes a first display panel, a second display panel, a detection means, and a compensation means. The first display panel includes a first display region. The second display panel includes a second display region. The first display region and the second display region include a first region where they overlap. The detection means has a function of detecting the size of the first region. The compensation means has a function of compensating an image displayed on the first display region in accordance with the change in the size of the first region. | 2016-05-12 |
20160132282 | DISPLAY APPARATUS AND DISPLAY METHODS THEREOF - A display apparatus which constitutes a display system configured of a plurality of display apparatuses is provided. The display apparatus includes a communication unit configured to receive screen change parameters in which image information of partial images divided from one image is analyzed from one or more other display apparatuses, and a controller configured to calculate a screen change value using the received screen change parameters and control the communication unit to transmit the calculated screen change value to the one or more other display apparatuses. The image information is screen change information according to change in frames of the divided partial images. | 2016-05-12 |
20160132283 | Modular Multi-Panel Display System using Integrated Data and Power Cables - A modular multi-panel display system includes a mechanical support structure and a number of display panels mounted to the mechanical support structure so as to form an integrated display panel. A number of integrated data and power cables electrically connect the display panels to one another. The display system is cooled passively and includes no air conditioning, fans, or heating units. | 2016-05-12 |
20160132284 | SYSTEMS AND METHODS FOR PERFORMING DISPLAY MIRRORING - A method for display mirroring is described. The method includes computing an updating region size for one or more application layers of a screen image, the updating region size being the area of the regions of interest being updated on the screen image, less any overlap between the regions of interest. The method also includes determining that the updating region size plus a previous frame size is less than a frame buffer size. The method further includes determining that there are sufficient resources available to combine the previous frame with the updating region. The method additionally includes generating a current frame by combining the previous frame and the updating region. The method also includes sending the current frame to a mirrored display. | 2016-05-12 |
20160132285 | PORTABLE ELECTRONIC DEVICE INCLUDING TOUCH-SENSITIVE DISPLAY AND METHOD OF CONTROLLING AUDIO OUTPUT - A method of controlling audio output from a portable electronic device having a touch-sensitive display includes detecting a touch on the touch-sensitive display when the portable electronic device is in an audio output mode in which audio is output through a speaker of the portable electronic device, identifying a location of the touch and magnitude of touch signals utilizing signals received from touch sensors of the touch-sensitive display during detecting the touch, and, based on the identified location and magnitude of the touch signals, adjusting audio output from the speaker. | 2016-05-12 |
20160132286 | METHOD, APPARATUS AND SYSTEM FOR MULTIMEDIA PLAYBACK CONTROL - Methods, apparatuses and systems for control of multimedia playback, comprising receiving an instruction data packet from a wearable smart device, wherein the instruction data packet includes a gesture instruction obtained by the wearable smart device; according to a correspondence relation between the gesture instruction and a playback control instruction, retrieving the playback control instruction associated with the gesture instruction; controlling multimedia playback according to the playback control instruction. By receiving the gesture instruction from the wearable smart device and retrieving the playback control instruction associated with the gesture instruction to control multimedia playback, a user may control the multimedia playback by gestures only, without interacting with any button. | 2016-05-12 |
20160132287 | Dynamic Reconfiguration of Audio Devices - In one example, a shared buffer acting as an audio communication channel for an audio interaction device may be reconfigured to allow audio communication channel sharing between audio data streams. An audio interaction device may execute a conversion between an initial audio data stream and an audio signal audibly detectable by a user. A shared buffer may act as an audio communication channel between an operating system and the audio interaction device. The digital audio system may execute an initial audio application with the operating system to process the initial audio data stream. The digital audio system may load the initial audio data stream into the shared buffer. The digital audio system may alter the audio communication channel into a restructured audio communication channel for a subsequent audio data stream while maintaining the initial audio data stream. The digital audio system may load the subsequent audio data stream into the restructured audio communication channel. | 2016-05-12 |
20160132288 | USING A PROCESSING DEVICE AS A DOCK FOR A MEDIA PLAYER - By integrating multiple electronic devices, it is possible to increase the functionality of the devices individually. For example it is possible to improve media playback functionality, create media playlists “on-the-go” and to use a first device power supply to charge the power supply of the second device. By integrating the devices, it is possible to address some of the shortcomings of devices that are decreasing in size with increasing power requirements, while still maintaining the advantages that these devices offer. | 2016-05-12 |
20160132289 | SYSTEMS AND METHODS FOR PROVIDING AUDIO TO A USER BASED ON GAZE INPUT - According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display. | 2016-05-12 |
20160132290 | GAZE TRIGGERED VOICE RECOGNITION - One embodiment provides a method, involving: detecting, at an electronic device, a location of user gaze; activating, based on the location of the user gaze, a voice input module; detecting, at the electronic device, a voice input; evaluating, using the voice input module, the voice input, and performing, based on evaluation of the voice input, at least one action. Other aspects are described and claimed. | 2016-05-12 |
20160132291 | INTENT DRIVEN COMMAND PROCESSING - A computing device receives a voice command to perform an action within a document. An interpretation of the voice command is mapped to a set of commands. Disambiguation is automatically performed by conducting a user experience to receive additional information. | 2016-05-12 |
20160132292 | Method for Controlling Voice Emoticon in Portable Terminal - Disclosed is a method for controlling voice emoticons in a portable terminal for providing a recipient portable terminal with various voice files according to the emotions and feelings of the user in place of text-based emoticons, thereby enabling the various voice files to be played and to express rich emotions compared to the existing monotonous and dry TTS-based voice files. The present invention comprises the steps of: displaying a voice emoticon call menu for calling a voice emoticon menu on one area of a touch screen; displaying the voice emoticon menu provided with a voice emoticon list after the voice emoticon call menu is user-selected; and transmitting a voice emoticon user-selected from the voice emoticon list to a recipient portable terminal in place of the voice of the user. | 2016-05-12 |
20160132293 | Multi-Modal Input on an Electronic Device - A computer-implemented input-method editor process includes receiving a request from a user for an application-independent input method editor having written and spoken input capabilities, identifying that the user is about to provide spoken input to the application-independent input method editor, and receiving a spoken input from the user. The spoken input corresponds to input to an application and is converted to text that represents the spoken input. The text is provided as input to the application. | 2016-05-12 |
20160132294 | ADDER DECODER - The present disclosure relates to an add and decode hardware logic circuit for adding two n bit inputs, A and B. A series of n logic stages are each configured to perform a first operation of propagating a result of a preceding stage on the condition that the sum of A[m] and B[m] is equal to 0, wherein 0<=m | 2016-05-12 |
20160132295 | EFFICIENT IMPLEMENTATION OF A MULTIPLIER/ACCUMULATOR WITH LOAD - This invention is a multiply-accumulate circuit supporting a load of the accumulator. During multiply-accumulate operation a partial product generator forms partial products from the product inputs. An adder tree sums the partial products and the accumulator value. The sum is stored back in the accumulator overwriting the prior value. During load operation an input gate forces one of the product inputs to all 0's. Thus the partial product generator generates partial products corresponding to a zero product. The adder tree adds this zero product to the external load value. The sum, which corresponds to the external load value, is stored back in the accumulator overwriting the prior value. A multiplexer at the side input of the adder tree selects the accumulator value for normal operation or the external load value for load operation. | 2016-05-12 |
20160132296 | APPARATUS AND METHOD FOR GENERATING DIGITAL VALUE - Provided is an apparatus for generating a digital value, including: an identification value generator including a plurality of unit cells; and an identification value extractor outputting an identification value of a plurality of bits by using output values of the plurality of unit cells, wherein each of the plurality of unit cells includes an identification value generating element including a first upper electrode and a second upper electrode formed on the same layer, and determines the output value according to electrical connection or cut-off of the first upper electrode and the second upper electrode. | 2016-05-12 |
20160132297 | METHOD AND COMPUTER PROGRAM FOR GENERATING OR MANIPULATING SOURCE CODE - The invention relates to a computer-implemented method of generating or manipulating source code for a software development project. The computer-implemented method includes the steps of generating a map comprising a table having a plurality of cells arranged in one or more columns and one or more rows, populating a cell in the table with an attribute from a set of attributes, wherein the cell is populated either by a user inputting an attribute or, automatically, by an attribute generated from an existing source code, wherein a rule is applied to the attribute in the cell and the application of the rule to the attribute automatically generates or manipulates source code. The invention also relates to a computer program for generating or manipulating source code for a software development project, the computer program being configured to express algorithms in tabular form and apply one or more transformations against the algorithms. | 2016-05-12 |
20160132298 | RAPID PROTOTYPING OF BACKEND SERVICES - A device may determine use case information associated with a use case for a development project. The device may determine a set of use case objects associated with the use case based on the use case information. The device may select an abstract machine model. The abstract machine model may be associated with modeling the development project based on the set of use case objects. The abstract machine model may be selected from a set of abstract machine models associated with modeling development projects. The device may generate program code for the development project based on the abstract machine model and the use case information. The device may provide information associated with the generated program code. | 2016-05-12 |
20160132299 | DYNAMICALLY CONFIGURABLE WORKFLOW IN A MOBILE ENVIRONMENT - Embodiments are directed to a mobile application that enables a completely and dynamically configurable workflow. Once installed on a mobile computer, the application is completely configurable without re-compiling the application. A user may configure the “look & feel,” as well as the workflow of a particular instance of the application, via configuration templates. Once the application is downloaded and installed in an executable form, the user may configure and/or reconfigure the workflow and the “look and feel” of the application without a re-compiling operation and/or generating new machine-code to enable the configuration. To configure and/or reconfigure the application, the user need only to edit and/or receive additional configuration templates. The execution of the configured workflow is not dependent upon the mobile computer being in communication with another network computer. The mobile application may be a native application. Accordingly, the completely customizable mobile application may be executed in an “offline” mode. | 2016-05-12 |
20160132300 | CONTRACTION AWARE PARSING SYSTEM FOR DOMAIN-SPECIFIC LANGUAGES - Aspects of the present invention disclose a method, computer program product, and system for parsing a domain-specific language (DSL) statement. The method includes one or more processors accessing a DSL statement that includes contracted phrases. The method further includes one or more processors identifying one or more contracted phrases in the DSL statement utilizing an annotated domain vocabulary for a DSL associated with the DSL statement and grammar rules for the DSL. The method further includes one or more processors determining expanded phrases corresponding to the identified one or more contracted phrases based on the annotated domain vocabulary and the grammar rules. The method further includes one or more processors creating an expanded abstract syntax tree (AST) that is representative of the DSL statement with the determined expanded phrases replacing the identified one or more contracted phrases. | 2016-05-12 |
20160132301 | PROGRAMMATIC USER INTERFACE GENERATION BASED ON DISPLAY SIZE - Non-limiting examples of the present disclosure describe programmatic generation of a user interface for display on a processing device. A display class is determined from a plurality of display classes based on a detected display size of a processing device on which the user interface is to display. Prior to instantiating a user interface window, a stored user interface definition is identified and interpreted. The stored user interface definition comprises at least one programmed command object. A displayed user interface is instantiated on the processing device, where the displayed user interface comprises at least one user interface element. The user interface element is programmatically generated by translating the programmed command object of the user interface definition into the user interface element based on operations set in accordance with the determined display class. Other examples are also described. | 2016-05-12 |
20160132302 | CONDITIONAL STACK FRAME ALLOCATION - A method for allocating memory includes an operation that determines whether a prototype of a callee function is within a scope of a caller. The caller is a module containing a function call to the callee function. In addition, the method includes determining whether the function call includes one or more unnamed parameters when a prototype of the callee function is within the scope of the caller. Further, the method may include inserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters. | 2016-05-12 |
20160132303 | MULTI-SIZED DATA TYPES FOR MANAGED CODE - Embodiments are directed towards generating applications that include multi-sized types running in managed code. During the compilation of an intermediate language version of an application, if a multi-size type is encountered, a runtime engine may perform actions to process the multi-size types. Accordingly, architecture information associated with the target computer may be determined. Data types corresponding to the architecture of the target computer and the multi-sized types may be determined based on the architecture information. Native code calls associated with an intermediate language code calls may be determined such that the parameters of the native code calls match the architecture dependent data types. And, a machine code version of the intermediate language code call may be generated. The generated machine code version of the intermediate language code may be executed with the data types specific to the target computer. | 2016-05-12 |
20160132304 | CONTRACTION AWARE PARSING SYSTEM FOR DOMAIN-SPECIFIC LANGUAGES - Aspects of the present invention disclose a method, computer program product, and system for parsing a domain-specific language (DSL) statement. The method includes one or more processors accessing a DSL statement that includes contracted phrases. The method further includes one or more processors identifying one or more contracted phrases in the DSL statement utilizing an annotated domain vocabulary for a DSL associated with the DSL statement and grammar rules for the DSL. The method further includes one or more processors determining expanded phrases corresponding to the identified one or more contracted phrases based on the annotated domain vocabulary and the grammar rules. The method further includes one or more processors creating an expanded abstract syntax tree (AST) that is representative of the DSL statement with the determined expanded phrases replacing the identified one or more contracted phrases. | 2016-05-12 |
20160132305 | PROGRAM GRAPH DISPLAY DEVICE, PROGRAM GRAPH DISPLAY METHOD, AND PROGRAM GRAPH DISPLAY PROGRAM - A command code extraction part extracts a command code indicated in an extraction target code list, from an instrument control program. A sub-control program creation part creates a sub-control program including the command code extracted. A sub-control parameter list creation part extracts, from each command code included in the sub-control program, each of one or more elements constituting the command code, as a parameter. A sub-control parameter graph display part creates data of a sub-control parameter graph in which one or more parameters of each command code that have been extracted are associated with each other, and displays the created sub-control parameter graph. | 2016-05-12 |
20160132306 | Purity Analysis Using White List/Black List Analysis - Memoizable functions may be identified by analyzing a function's side effects. The side effects may be evaluated using a white list, black list, or other definition. The side effects may also be classified into conditions which may or may not permit memoization. Side effects that may have de minimus or trivial effects may be ignored in some cases where the accuracy of a function may not be significantly affected when the function may be memoized. | 2016-05-12 |
20160132307 | Leveraging Legacy Applications for Use with Modern Applications - An apparatus of one embodiment translates computer code from a first programming language to a second programming language. The apparatus includes an interface, a memory, and a processor. The interface is operable to receive a compiler output that is associated with source code written in a first programming language. The memory is operable to store the compiler output. The processor is communicatively coupled to the interface and the memory and is operable to analyze the data structures within the compiler output, build an internal representation of the source code based on the compiler output, and create a source code template associated with a second programming language. | 2016-05-12 |
20160132308 | LEVERAGING LEGACY APPLICATIONS FOR USE WITH MODERN APPLICATIONS - An apparatus of one embodiment translates computer code from a first programming language to a second programming language. The apparatus includes an interface, a memory, and a processor. The interface is operable to receive a compiler output that is associated with source code written in a first programming language. The memory is operable to store the compiler output. The processor is communicatively coupled to the interface and the memory and is operable to analyze the data structures within the compiler output, build an internal representation of the source code based on the compiler output, and create a source code template associated with a second programming language. | 2016-05-12 |
20160132309 | Efficient Framework for Deploying Middleware Services - A system, method, and computer program product provide computerized services to multiple enterprises. A developer creates each service according to a template, which includes both core functionality common to all services, and individualized functionality specific to the service. The developer either deactivates, or activates and configures, each function in the core based on a service level agreement with the particular enterprise for which the service was created. The template provides a wide variety of core functions, including dynamic data transformation, auditing, logging, exception handling, performance monitoring, service availability, reporting, security, and dynamic reconfiguring. After the service is deployed, it begins to report performance and usage data to a monitoring system. Based on these data, the system calculates an amount to charge the enterprise for use of the given service. | 2016-05-12 |
20160132310 | DYNAMIC RECONSTRUCTION OF APPLICATION STATE UPON APPLICATION RE-LAUNCH - A service provider system may include an application fulfillment platform that delivers desktop applications on demand to desktops on physical computing devices or virtual desktop instances of end users. An application delivery agent installed on an end user's computing resource instance may store application state data (e.g., configuration data, runtime settings, or application templates) or scratch data that is generated by an application executing on the computing resource instance to a secure location on service provider storage resources. After a machine failure or change, or a rebuilding of a virtualized computing resource instance or virtual desktop instance, an application delivery agent installed on the new machine or instance may reinstall the application, retrieve the stored application state or scratch data from service provider resources, and restore the application to the last known persisted state. Upon request, the application delivery agent may restore the application to any earlier persisted state. | 2016-05-12 |
20160132311 | Client Application with Embedded Server - Embodiments provide a web-based editing tool that intelligently leverages certain functionality of a browser, web client, desktop client, and native software at the client side to provide seamless user experience when editing a file over a network. Responsive to a user selecting a file for editing, the web client may send a passive content request to a web server embedded in the desktop client at a specific address on the client device. If no response, the web client prompts the user to start or install the desktop client on the client device. If a response is received, the web client sends a request to the desktop client with a user identifier and authorization to download the file from a server. The desktop client downloads the file, opens it in the native software, monitors the file being edited, and updates a delta associated with the file to the server. | 2016-05-12 |
20160132312 | METHOD AND DEVICE FOR PUBLISHING AND IMPLEMENTING WIRELESS APPLICATION - Embodiments of the present application relate to a method of publishing a wireless application, a method of implementing a wireless application, a device for publishing a wireless application, a device for implementing a wireless application, and a computer program product for publishing a wireless application. A method of publishing a wireless application is provided. The method includes integrating a permanent interface layer of a software development kit (SDK) into a wireless application, publishing the integrated wireless application, and installing the dynamic implementation layer of the SDK onto a server. The SDK includes the permanent interface layer and a dynamic implementation layer, the permanent interface layer including an interface protocol to be invoked by the wireless application and the dynamic implementation layer including an interface implementation corresponding to the interface protocol. | 2016-05-12 |
20160132313 | CANCEL AND ROLLBACK UPDATE STACK REQUESTS - Techniques for cancel and rollback of update stack requests are disclosed herein. At a time after receiving a request to cancel and roll back an update request for a computer system, one or more computer resources within the computer system invoke one or more computer system capabilities at least to cancel computer system operations to update the computer system. When the computer system operations to update the computer system are cancelled, one or more computer resources within the computer system invoke one or more computer system capabilities at least to roll back the computer system to a previous good state. | 2016-05-12 |
20160132314 | REMOTE CONFIGURATION MANAGEMENT OF APPLICATIONS - Disclosed are systems, methods, and computer-readable media for remotely updating deployed applications by changing the values of modifiable variables incorporated in the applications. Developers can define segments with attributes and deliver customized configurations for those segments. Also disclosed is a method for resolving conflicts, based on prioritization, if an application instance matches more than one segment. | 2016-05-12 |
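The conflict-resolution step described in the row above — choosing one configuration when an application instance matches more than one segment — reduces to matching on attributes and then picking by priority. A minimal sketch (segment names, attributes, and the priority convention are hypothetical illustrations, not taken from the application):

```python
def resolve_config(instance_attrs, segments):
    """Return the config of the highest-priority matching segment.

    A segment matches when every one of its attributes equals the
    corresponding attribute of the application instance. Lower
    priority number = higher priority (an assumed convention).
    Returns None when no segment matches.
    """
    matching = [s for s in segments
                if all(instance_attrs.get(k) == v
                       for k, v in s["attrs"].items())]
    if not matching:
        return None
    return min(matching, key=lambda s: s["priority"])["config"]

# Hypothetical segments defined by a developer:
segments = [
    {"attrs": {"region": "eu"}, "priority": 2,
     "config": {"theme": "blue"}},
    {"attrs": {"region": "eu", "tier": "beta"}, "priority": 1,
     "config": {"theme": "green"}},
]

# An instance in region "eu" on tier "beta" matches both segments;
# the priority-1 segment wins the conflict.
resolve_config({"region": "eu", "tier": "beta"}, segments)
```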
20160132315 | Method, Apparatus, and Communication Device for Updating Firmware - A method, an apparatus, a device, and a mobile terminal for updating firmware. The method for updating firmware includes obtaining an update start command; after the update start command is obtained, reading a firmware update file from a secure data memory; and writing the firmware update file to a Flash memory. When a Secure Digital (SD) interface does not support a Secure Digital Input and Output (SDIO) function, a mobile terminal and a communication device having an SD interface can only read data in the secure data memory in units of files. In this application, special files are defined in the secure data memory to store an update start command and update data in different files. Therefore, the communication device having the SD interface can obtain the update start command from a command swap file, and obtain a firmware update file in a firmware update process to perform an update. | 2016-05-12 |
20160132316 | TIMING REPORT FRAMEWORK FOR DISTRIBUTED SOFTWARE UPGRADES - Techniques for concurrently upgrading one or more software applications hosted by one or multiple hosts. Checkpoint data associated with the upgrade processes executing on the multiple hosts may be generated during the overall upgrade operation. The checkpoint data may be stored in a shared storage that can be accessed by the upgrade processes. A reporting tool may generate a timing report using the checkpoint data. The timing report may indicate execution timing data of all hosts executing the upgrade processes such as the total time spent for each upgrade process, when an upgrade process started execution, when an upgrade process stopped and/or completed execution, and the like. | 2016-05-12 |
20160132317 | Secure Application Distribution Systems and Methods - Systems and methods are described that use software diversification techniques to improve the security of mobile applications. Embodiments of the disclosed systems and methods may, among other things, facilitate secure application distribution through deployment of diverse applications in an application distribution channel. Software diversification consistent with certain disclosed embodiments may mitigate large-scale automated circumvention of security protections by presenting attacking malware with moving and/or otherwise unpredictable, diverse targets. | 2016-05-12 |
20160132318 | NOTIFICATIONS FRAMEWORK FOR DISTRIBUTED SOFTWARE UPGRADES - Techniques for managing an upgrade operation comprising multiple upgrade processes executing on multiple host machines (or hosts) for upgrading software applications on the multiple hosts. Techniques are disclosed for managing notifications that are generated by the multiple upgrade processes during execution, and more particularly, techniques for reducing the number of notifications that are sent to a user. The techniques include: only sending a subset of the generated notifications to a user, the subset being selected at the host machines based upon notification-level criteria specified by the user for the host machines; consolidating multiple generated notifications into a fewer number of consolidated notifications and only sending consolidated notifications to the user; and a combination of criteria-based selection and notification consolidation. | 2016-05-12 |
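The two notification-reduction techniques named in the row above — level-based filtering at the host and consolidation of many notifications into a few summaries — can be sketched as follows (the notification fields, level names, and summary format are assumptions for illustration):

```python
# Hypothetical severity ordering for user-specified level criteria.
LEVELS = {"debug": 0, "info": 1, "warning": 2, "error": 3}

def filter_by_level(notifications, min_level):
    """Criteria-based selection: keep only notifications at or above
    the level the user specified for the host."""
    threshold = LEVELS[min_level]
    return [n for n in notifications if LEVELS[n["level"]] >= threshold]

def consolidate(notifications):
    """Consolidation: collapse notifications into one summary per
    (host, level) pair, so the user receives fewer messages."""
    buckets = {}
    for n in notifications:
        buckets.setdefault((n["host"], n["level"]), []).append(n["msg"])
    return [{"host": host, "level": level, "msg": f"{len(msgs)} event(s)"}
            for (host, level), msgs in buckets.items()]
```

The combined technique in the abstract would simply apply `filter_by_level` first and `consolidate` to its result.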
20160132319 | END USER PROGRAMMING FOR A MOBILE DEVICE - A tool for creating and editing applications on a mobile device. The tool searches the mobile device for one or more exposed features of a plurality of currently installed applications on the mobile device. The tool exposes a workspace using a graphical programming language on the mobile device. The tool receives a plurality of selections in the workspace. The tool receives a configuration of the plurality of received selections in the workspace. The tool determines, based on the configuration of the received selections in the workspace, that the application is complete. The tool prompts to save the completed application. | 2016-05-12 |
20160132320 | Deploying Updates to an Application During Periods of Off-Peak Demand - Update preferences might be utilized to specify that an update to an application should not be applied until the demand for the application falls below a certain threshold. Demand for the application is monitored. The update to the application is applied when the actual demand for the application falls below the specified threshold. The threshold might be set such that updates are deployed during the off-peak periods of demand encountered during a regular demand cycle, such as a diurnal, monthly, or yearly cycle. | 2016-05-12 |
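The decision rule in the row above — apply a pending update only while measured demand sits below a specified threshold — amounts to a simple comparison against a monitored metric. A sketch under assumed units (requests per hour; all values hypothetical):

```python
def should_apply_update(current_demand, threshold, update_pending):
    """Deploy the pending update only during off-peak demand,
    i.e. while the monitored demand is below the threshold."""
    return update_pending and current_demand < threshold

# Simulated hourly request rates over part of a diurnal cycle:
hourly_demand = [5, 3, 2, 2, 4, 10, 40, 80, 120, 150, 140, 90]
deploy_hours = [hour for hour, demand in enumerate(hourly_demand)
                if should_apply_update(demand, threshold=20,
                                       update_pending=True)]
# The quiet early hours qualify for deployment; the daytime peak does not.
```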
20160132321 | Methods For Cross-Mounting Devices And Apparatus Utilizing The Same - A technique, as well as select implementations thereof, pertaining to cross-mounting a device is described. The technique may involve an apparatus detecting a presence of a device not a part of the apparatus. The technique may also involve the apparatus performing an update in response to the detecting of the presence of the device. The technique may additionally involve the apparatus establishing a communication connection with the device. The technique may further involve the apparatus utilizing the device to perform one or more tasks. | 2016-05-12 |
20160132322 | METHOD AND SYSTEM FOR UPDATING FIRMWARE - An example method of updating firmware includes receiving a memory map of a memory. The method also includes determining, based on the memory map, a set of memory regions storing a bundle of drivers in the memory, the bundle of drivers residing in firmware and being in an executable format. The method further includes for one or more drivers in the bundle of drivers (i) building, based on the memory map, a header that describes the respective driver, and (ii) creating an object file including the header and the respective driver, where the object file is in the executable format. The method also includes storing one or more of the object files in non-volatile memory. | 2016-05-12 |
20160132323 | UPDATING SOFTWARE BASED ON UTILIZED FUNCTIONS - A method for managing updates for a software product includes receiving a request to install a software product update, wherein the software product update modifies a software product on a computing device. The method further includes identifying a first set of one or more functions of the software product that are to be modified by the software product update. The method further includes identifying historical usage information corresponding to the software product, wherein the historical usage information indicates a second set of one or more functions of the software product and a number of times each respective function of the second set of one or more functions of the software product has been used by the computing device. The method further includes determining whether the software product update modifies at least one function of the software product that corresponds to historical usage information that exceeds a minimum usage threshold condition. | 2016-05-12 |
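The final determination in the row above — does the update modify at least one function whose historical usage exceeds a minimum threshold — can be sketched like this (the function names, counts, and threshold are invented for illustration):

```python
def update_is_relevant(modified_functions, usage_counts, min_uses):
    """True when the update modifies at least one function that the
    computing device has used more than min_uses times."""
    return any(usage_counts.get(fn, 0) > min_uses
               for fn in modified_functions)

# Hypothetical historical usage information for a software product:
usage = {"export_pdf": 42, "print": 0, "spell_check": 3}

update_is_relevant({"print", "spell_check"}, usage, min_uses=5)  # not relevant
update_is_relevant({"export_pdf"}, usage, min_uses=5)            # relevant
```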
20160132324 | VISUALIZING A CONGRUENCY OF VERSIONS OF AN APPLICATION ACROSS PHASES OF A RELEASE PIPELINE - A system for visualizing a congruency of versions of an application across phases of a release pipeline includes a selecting engine to select a phase from a number of phases; a representing engine to represent, via a user interface (UI), a congruency for a number of versions of an application compared against a target version of the application across the phases of a release pipeline, the congruency for the number of versions of the application represented with identifiers; a differentiating engine to differentiate a latest-deployed version of the application against a planned version of the application in a particular environment; and a comparing engine to compare, based on a selection, properties of the versions of the application. | 2016-05-12 |
20160132325 | VISUALIZING A CONGRUENCY OF VERSIONS OF AN APPLICATION ACROSS PHASES OF A RELEASE PIPELINE - A method for visualizing a congruency of versions of an application across phases of a release pipeline includes a selecting engine to select a phase from a number of phases; a representing engine to represent, via a user interface (UI), a congruency for a number of versions of an application compared against a target version of the application across the phases of a release pipeline, the congruency for the number of versions of the application represented with identifiers; a differentiating engine to differentiate a latest-deployed version of the application against a planned version of the application in a particular environment; and a comparing engine to compare, based on a selection, properties of the versions of the application. | 2016-05-12 |
20160132326 | SOURCE CODE VIOLATION MATCHING AND ATTRIBUTION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for matching and attributing code violations. One of the methods includes receiving a snapshot S of a code base of source code and a different snapshot T of the code base. Data representing first violations in the snapshot S and second violations in the snapshot T is received. Pairs of matching violations are determined by performing two or more matching processes, including performing a first matching process, the first matching process determining first pairs of matching violations according to a first matching algorithm, and performing a second matching process, the second matching process determining second pairs of matching violations according to a second matching algorithm from violations not matched by the first matching process. The first pairs of matching violations and the second pairs of matching violations are included in the determined pairs of matching violations. | 2016-05-12 |
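The cascaded matching in the row above — a first pass under one algorithm, then a second pass under a looser algorithm over whatever the first pass left unmatched — can be outlined as below. The strict (file, line, message) and loose (file, message) keys are simplified stand-ins for the patent's matching algorithms, not the actual algorithms:

```python
def match_violations(violations_s, violations_t):
    """Pair violations between snapshots S and T in two passes:
    a strict pass on (file, line, msg), then a looser pass on
    (file, msg) restricted to violations the first pass missed."""
    pairs = []
    unmatched_s = list(violations_s)
    remaining_t = list(violations_t)

    def run_pass(key):
        nonlocal unmatched_s
        index = {}
        for v in remaining_t:
            index.setdefault(key(v), []).append(v)
        still_unmatched = []
        for v in unmatched_s:
            bucket = index.get(key(v))
            if bucket:
                t = bucket.pop(0)
                remaining_t.remove(t)
                pairs.append((v, t))
            else:
                still_unmatched.append(v)
        unmatched_s = still_unmatched

    run_pass(lambda v: (v["file"], v["line"], v["msg"]))  # first algorithm
    run_pass(lambda v: (v["file"], v["msg"]))             # second algorithm
    return pairs
```

With this structure, a violation whose line number shifted between snapshots escapes the strict pass but is still recovered by the looser one.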
20160132327 | VISUAL TOOL FOR REVERSE ENGINEERING SOFTWARE COMPONENTS - A system and method of displaying a software application using a software architecture tool that includes: receiving a portion of an existing software application at the software architecture tool; identifying one or more software components of the existing software application from the received portion; automatically identifying a tier and layer location for each standard software component; and presenting one or more images that each represent the standard software component to a user, wherein the images visually identify a tier and layer location of each standard software component. | 2016-05-12 |
20160132328 | Configuration Packages for Software Products - A configuration package receives user-generated input that configures a decision service to generate decision data. The configuration package includes artifacts and the user-generated input selects the artifacts from an artifact library in a configuration database. A configured decision service is generated, where the generating includes receiving, by a decision service factory, the configuration package. Also, the decision service factory receives a decision template including configurable decision elements and non-configurable decision elements. Further, the decision service factory receives a user configuration specifying a parameter in the corresponding artifact. The artifact from the configuration package, the user configuration and the decision template are combined to generate the configured decision service. The configured decision service receives, from a client computer, input for each of the configurable decision elements. Based on the received input, the decision data is generated by the configured decision service. The generated decision data is transmitted to the client computer. | 2016-05-12 |
20160132329 | PARALLEL PROCESSING IN HARDWARE ACCELERATORS COMMUNICABLY COUPLED WITH A PROCESSOR - In an embodiment, a device including a processor, a plurality of hardware accelerator engines and a hardware scheduler is disclosed. The processor is configured to schedule an execution of a plurality of instruction threads, where each instruction thread includes a plurality of instructions associated with an execution sequence. The plurality of hardware accelerator engines performs the scheduled execution of the plurality of instruction threads. The hardware scheduler is configured to control the scheduled execution such that each hardware accelerator engine is configured to execute a corresponding instruction and the plurality of instructions are executed by the plurality of hardware accelerator engines in a sequential manner. The plurality of instruction threads are executed by the plurality of hardware accelerator engines in a parallel manner based on the execution sequence and an availability status of each of the plurality of hardware accelerator engines. | 2016-05-12 |
20160132330 | INSTRUCTION AND LOGIC FOR BOYER-MOORE SEARCH OF TEXT STRINGS - Instructions and logic provide extended vector suffix comparisons for Boyer-Moore searches. Some embodiments, responsive to an instruction specifying: a pattern source operand and a target source operand, compare each of m data elements of the pattern operand with each data element of the target operand. A first and second equal ordered aggregation operation are performed from the comparisons according to the m data elements of the pattern source operand. A result of the first and second aggregation operations indicating whether or not a possible match exists between the m data elements of the pattern source operand and d data element positions relative to data elements of the target source operand is stored. Ordering of the data elements of the pattern and the target operands may be reversed for the second aggregation operation, and d may be a sum of m−1 and the quantity of target operand elements in some embodiments. | 2016-05-12 |
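The vector instructions in the row above accelerate the pattern-versus-text comparisons at the heart of a Boyer-Moore scan. For reference, the scalar search those comparisons speed up looks like the following (this is the simplified Boyer-Moore-Horspool variant with only the bad-character shift, not the patent's vectorized mechanism):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool search: compare the pattern at each
    alignment and skip ahead using the bad-character shift table.
    Returns the index of the first occurrence of pattern, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # For each character in the pattern (except its last position),
    # record how far that character lies from the pattern's end.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # Skip by the shift of the text character under the
        # pattern's last position (full length if unseen).
        i += shift.get(text[i + m - 1], m)
    return -1
```

The skip on a mismatch is what makes the family sublinear on average; the patented instructions replace the per-alignment comparison loop with wide vector compares.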
20160132331 | Computer Processor Employing Instruction Block Exit Prediction - A computer processor is provided that executes sequences of instructions stored in memory. The sequences of instructions are organized as one or more instruction blocks each having an entry point and at least one exit point offset from the entry point. An apparatus for predicting control flow through sequences of instructions includes a table storing a plurality of entries each associated with an instruction block or part thereof. At least one entry of the table corresponding to a given instruction block or part thereof includes a predictor corresponding to a predicted execution path that exits the given instruction block or part thereof. The table is queried in order to generate a chain of predictors corresponding to a sequence of instruction blocks or parts thereof that is predicted to be executed by the computer processor. | 2016-05-12 |
20160132332 | SIGNAL PROCESSING DEVICE AND METHOD OF PERFORMING A BIT-EXPAND OPERATION - A signal processing device comprising at least one control unit arranged to receive at least one bit-expand instruction, decode the received at least one bit-expand instruction, and output at least one control signal in accordance with the received at least one bit-expand instruction. The signal processing device further includes at least one execution unit component arranged to receive at least one source register value comprising at least one data bit to be expanded, extract at least one data bit from the at least one source register value located at an offset position according to the at least one control signal, expand the at least one extracted data bit into at least one multi-bit data type, and output the at least one multi-bit data type to at least one destination register. | 2016-05-12 |
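The bit-expand operation in the row above — extracting data bits from a source register at an offset and widening them into a multi-bit data type — corresponds to a mask-and-sign-extend, sketched here in software (the field width, output width, and two's-complement convention are illustrative assumptions, not the device's specification):

```python
def bit_expand(src, offset, width=1, out_bits=16):
    """Extract `width` bits of `src` starting at bit `offset` and
    sign-extend the field to an `out_bits`-wide two's-complement
    value, returned as an unsigned bit pattern."""
    field = (src >> offset) & ((1 << width) - 1)
    sign = 1 << (width - 1)
    value = (field ^ sign) - sign          # sign-extend the field
    return value & ((1 << out_bits) - 1)   # wrap into out_bits bits

bit_expand(0b1000, offset=3)  # set bit expands to all ones (16-bit -1)
bit_expand(0b0000, offset=3)  # clear bit expands to all zeros
```

A single flag bit thus becomes a full-width mask, which is the typical use of such an instruction in predicated or masked arithmetic.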
20160132333 | Method, Apparatus, And System For Speculative Abort Control Mechanisms - An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions. | 2016-05-12 |
20160132334 | Method, apparatus, and system for speculative abort control mechanisms - An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions. | 2016-05-12 |
20160132335 | Method, apparatus, and system for speculative abort control mechanisms - An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions. | 2016-05-12 |
20160132336 | Method, apparatus, and system for speculative abort control mechanisms - An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions. | 2016-05-12 |
20160132337 | Method, apparatus, and system for speculative abort control mechanisms - An apparatus and method is described herein for providing robust speculative code section abort control mechanisms. Hardware is able to track speculative code region abort events, conditions, and/or scenarios, such as an explicit abort instruction, a data conflict, a speculative timer expiration, a disallowed instruction attribute or type, etc. And hardware, firmware, software, or a combination thereof makes an abort determination based on the tracked abort events. As an example, hardware may make an initial abort determination based on one or more predefined events or choose to pass the event information up to a firmware or software handler to make such an abort determination. Upon determining an abort of a speculative code region is to be performed, hardware, firmware, software, or a combination thereof performs the abort, which may include following a fallback path specified by hardware or software. And to enable testing of such a fallback path, in one implementation, hardware provides software a mechanism to always abort speculative code regions. | 2016-05-12 |
20160132338 | DEVICE AND METHOD FOR MANAGING SIMD ARCHITECTURE BASED THREAD DIVERGENCE - Provided are an apparatus and a method for effectively managing threads diverged by a conditional branch based on Single Instruction Multiple Data (SIMD). The apparatus includes: a plurality of Front End Units (FEUs) configured to fetch, for execution by SIMD lanes, instructions of thread groups of a program flow; and a controller configured to schedule a thread group based on SIMD lane availability information, activate an FEU of the plurality of FEUs, and control the activated FEU to fetch an instruction for processing the scheduled thread group. | 2016-05-12 |
20160132339 | SYSTEMS AND METHODS INVOLVING CONTROL-I/O BUFFER ENABLE CIRCUITS AND/OR FEATURES OF SAVING POWER IN STANDBY MODE - Systems and methods are disclosed involving control I/O buffer enable circuitry and/or features of saving power in standby mode. In illustrative implementations, aspects of the present innovations may be directed to providing low standby power consumption, such as providing low standby power consumption in high-speed synchronous SRAM and RLDRAM devices. | 2016-05-12 |
20160132340 | DUAL-PROCESSOR ELECTRONIC DEVICE AND METHOD FOR QUICK BOOT UP - A dual-processor electronic device is provided. The dual-processor electronic device includes a first processor, a second processor and a dynamic random access memory. The first processor sends a wake-up command to the second processor to wake up the second processor after performing local initialization. The second processor wakes up, then performs local initialization, copies and decompresses the image file to the dynamic random access memory and sends a ready message to the first processor after the image file is decompressed. The first processor delays startup, and starts the startup process according to the decompressed image file when the ready message is received and the delay-start time is expired. The present disclosure also provides a method for booting up dual-processor electronic device quickly. | 2016-05-12 |
20160132341 | WAKE UP SYSTEM FOR ELECTRONIC DEVICE - A wake up system for electronic device includes a detecting circuit, an amplifier circuit, a switch circuit, and a south bridge chip. The detecting circuit detects an ambient temperature change as a result of the physical proximity of a user, converts the temperature change to a weak voltage signal, and amplifies the voltage signal for the first time. The amplifier circuit receives the amplified voltage signal and amplifies the voltage signal for the second time. The switch circuit receives the voltage signal that is amplified for the second time, and outputs a wake up signal when the voltage signal amplified for the second time is greater than a turn-on voltage. The south bridge chip receives the wake up signal, and wakes up the electronic device accordingly. | 2016-05-12 |
20160132342 | CONTEXT-BASED COMMAND SURFACING - A computing device receives a trigger to surface commands. A possible set of commands is identified and broken into categories. A category is surfaced for user interaction. | 2016-05-12 |
20160132343 | INFORMATION PROCESSING DEVICE, LIBRARY LOADING METHOD, AND COMPUTER READABLE MEDIUM - Provided is an information processing device and others in which a plurality of applications are capable of appropriately using a plurality of libraries requested to be loaded with an identical name and including different contents. The information processing device includes an identifier generation unit which generates identifier information used for identifying contents of a library file and generates load request association information representing a relationship between the identifier information and request target information; a load request interpretation unit which obtains identifier information about the library file including a target of a load request; and a load unit which loads at least a part corresponding to the target of the load request from the library file indicated by the obtained identifier information when the load unit determines that the part corresponding to the target of the load request is not loaded. | 2016-05-12 |
20160132344 | SYSTEM AND METHOD FOR FAST STARTING AN APPLICATION - A system and method for fast starting a channel application is disclosed herein. The method includes: starting one or more applications in suspend mode during a boot up sequence of the operating system; adding the one or more applications to a suspended list; monitoring a plurality of application programming interface (API) calls made from the application layer to one or more graphic rendering modules; and processing the plurality of API calls to the one or more graphic rendering modules based on whether each API call belongs to an application on the suspended list. Fast starting a channel application can also be done in a predictive manner via the search function or based on feeds in a notification area. | 2016-05-12 |
20160132345 | Processing a guest event in a hypervisor-controlled system - The embodiments relate to processing a guest event in a hypervisor-controlled system. A guest event triggers a first firmware service for the guest event in firmware. The guest event is associated with a guest, a guest key, and with a guest state and protected guest memory accessible only by the guest and the firmware. The firmware processes information associated with the guest event. The processed information includes information of the guest state and the protected guest memory. A subset of the processed information is received by a hypervisor to process the guest event, and a non-received portion of the information is retained by the firmware. The hypervisor processes the guest event based on the received subset and sends a process result to the firmware, triggering a second firmware service for the guest event. The firmware processes the process result together with the retained information to generate a modification associated with the guest event. The firmware performs the generated modification associated with the guest event at the protected guest memory. | 2016-05-12 |
20160132346 | Memory Space Mapping Techniques for Server Based Graphics Processing - The server based graphics processing techniques, described herein, include loading a given instance of a guest shim layer and loading a given instance of a guest display device interface that calls back into the given instance of the guest shim layer, in response to loading the given instance of the guest shim layer, wherein the guest shim layer and the guest display device interface are executing under control of a virtual machine guest operating system. The given instance of the shim layer requests a communication channel between the given instance of the guest shim layer and a host-guest communication manager (D3D HGCM) service module from a host-guest communication manager (HGCM). In response to the request, the D3D HGCM service module is loaded and a communication channel between the given instance of the shim layer and the D3D HGCM service module is created by the HGCM. The given instance of the shim layer maps the graphics buffer memory space of a host D3D DDI binary executing under control of a host operating system. Thereafter, function calls are sent from the given instance of the guest shim layer through the communication channel to the D3D HGCM service module utilizing the graphics buffer memory space mapping. | 2016-05-12 |
20160132347 | MANAGING VIRTUAL COMPUTING NODES USING ISOLATION AND MIGRATION TECHNIQUES - Systems and methods for the management of virtual machine instances are provided. A network data transmission analysis system can use contextual information in the execution of virtual machine instances to isolate and migrate virtual machine instances onto physical computing devices. The contextual information may include information obtained by observing the execution of virtual machine instances and information obtained from requests submitted by users, such as system administrators. Still further, the network data transmission analysis system can also include information collection and retention for identified virtual machine instances. | 2016-05-12 |
20160132348 | DEPLOYMENT CONTROL DEVICE AND DEPLOYMENT CONTROL METHOD - A deployment control device includes a processor. The processor is configured to receive, from a first terminal device, a deployment request for requesting deployment of a virtual machine. The processor is configured to generate, in response to the received deployment request, the virtual machine configured to hold first permission information corresponding to unique information of the first terminal device, and selectively allow an access from a terminal device having permission information identical to the first permission information. The processor is configured to transmit the first permission information to the first terminal device. | 2016-05-12 |
20160132349 | Processing a guest event in a hypervisor-controlled system - The embodiments relate to a method for processing a guest event in a hypervisor-controlled system. A guest event triggers a first firmware service for the guest event in firmware. The guest event is associated with a guest, a guest key, and with a guest state and protected guest memory accessible only by the guest and the firmware. The firmware processes information associated with the guest event. The processed information includes information of the guest state and the protected guest memory. A subset of the processed information is received by a hypervisor to process the guest event, and a non-received portion of the information is retained by the firmware. The hypervisor processes the guest event based on the received subset and sends a process result to the firmware triggering a second firmware service for the guest event. The firmware processes the process result together with the retained information to generate modification associated with the guest event. The firmware performs the generated modification associated with the guest event at the protected guest memory. | 2016-05-12 |
20160132350 | VIRTUAL MACHINE MANAGEMENT METHOD, VIRTUAL MACHINE MANAGEMENT APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN VIRTUAL MACHINE MANAGEMENT PROGRAM - A virtual machine management method includes: permitting movement of a virtual machine to a first information processing apparatus that controls a virtual machine using a first instruction set, from a second information processing apparatus that controls a virtual machine using a second instruction set; issuing a notification of information relating to the first instruction set to the virtual machine to be moved from the second information processing apparatus to the first information processing apparatus; and executing, by the first information processing apparatus, control for the moved virtual machine using the first instruction set. | 2016-05-12 |
20160132351 | MICRO-VIRTUAL MACHINE FORENSICS AND DETECTION - The execution of a process within a VM may be monitored, and when a trigger event occurs, additional monitoring is initiated, including storing behavior data describing the real-time events taking place inside the VM. This behavior data may then be compared to information about the expected behavior of that type of process in order to determine whether malware has compromised the VM. | 2016-05-12 |
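The comparison of observed in-VM behavior against an expected-behavior profile might be sketched like this. The profile contents and event names are invented for illustration; real forensic monitoring captures far richer event data than a flat set of names.

```python
# Expected-behavior profiles per process type (illustrative data).
EXPECTED = {"pdf_reader": {"open_file", "render_page", "read_config"}}

def detect_anomalies(process_type: str, observed_events) -> list:
    """Flag observed events that fall outside the expected profile."""
    allowed = EXPECTED.get(process_type, set())
    return sorted(set(observed_events) - allowed)

# A shell spawn from a document reader is not in its expected profile.
alerts = detect_anomalies("pdf_reader",
                          ["open_file", "spawn_shell", "render_page"])
```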
20160132352 | MAINTAINING VIRTUAL MACHINES FOR CLOUD-BASED OPERATORS IN A STREAMING APPLICATION IN A READY STATE - A streams manager monitors performance of a streaming application, and when the performance needs to be improved, the streams manager automatically requests virtual machines from a cloud manager. The cloud manager provisions one or more virtual machines in a cloud with the specified streams infrastructure and streams application components. The streams manager then modifies the flow graph so one or more portions of the streaming application are hosted by the virtual machines in the cloud. When performance of the streaming application indicates a virtual machine is no longer needed, the virtual machine is maintained and placed in a ready state so it can be quickly used as needed in the future without the overhead of deploying a new virtual machine. | 2016-05-12 |
20160132353 | IMAGE INSTANCE MAPPING - A method and system for image instance mapping is provided. The method includes receiving, from change agents on virtual machine instances, periodic monitoring data indicating changes for each virtual machine instance. The periodic monitoring data is analyzed and unique updates are applied to the virtual machine instances. High level semantic updates to the virtual machine instances are identified and updates associated with a golden master image are tracked. High level semantic updates to the golden master image are identified and, in response, a version tree configured to track the drift of each virtual machine instance with respect to the golden master image is maintained. | 2016-05-12 |
20160132354 | APPLICATION SCHEDULING IN HETEROGENEOUS MULTIPROCESSOR COMPUTING PLATFORMS - Methods and apparatus to schedule applications in heterogeneous multiprocessor computing platforms are described. In one embodiment, information regarding performance (e.g., execution performance and/or power consumption performance) of a plurality of processor cores of a processor is stored (and tracked) in counters and/or tables. Logic in the processor determines which processor core should execute an application based on the stored information. Other embodiments are also claimed and disclosed. | 2016-05-12 |
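A crude version of core selection from tracked per-core counters might look like the sketch below. The counter values, profile labels, and selection policy are all assumptions; the embodiment performs this in processor logic, not in software.

```python
def pick_core(cores: dict, app_profile: str) -> str:
    """cores maps core_id -> {"perf": ops/s, "power": watts}.
    Power-sensitive apps get the best perf-per-watt core; others get raw perf."""
    if app_profile == "power_sensitive":
        return max(cores, key=lambda c: cores[c]["perf"] / cores[c]["power"])
    return max(cores, key=lambda c: cores[c]["perf"])

# A heterogeneous platform: one fast, power-hungry core and one efficient core.
cores = {"big0": {"perf": 100, "power": 10},
         "little0": {"perf": 40, "power": 2}}
```

Here `little0` wins for power-sensitive work (20 ops/s per watt vs 10), while `big0` wins on raw throughput.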
20160132355 | PROCESS GROUPING FOR IMPROVED CACHE AND MEMORY AFFINITY - A multiprocessor computer system and method for use therein are provided for assigning processes to processor nodes. The system can determine a first pair of processes and a second pair of processes, each process of the first pair of processes executing on different nodes and each process of the second pair of processes executing on different nodes. The system can determine a first priority value of the first pair of processes, based at least in part on a first resource access rate of the first pair of processes; and determine a second priority value of the second pair of processes, based at least in part on a second resource access rate of the second pair of processes. The system can determine the first priority value is greater than the second priority value; and determine to reassign a first process of the first pair of processes to a first node, wherein a second process of the first pair of processes is executing on the first node. | 2016-05-12 |
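The pair-priority reassignment above can be illustrated with a small sketch. Treating the resource access rate directly as the priority value is an assumption for brevity; the names and tuple layout are invented.

```python
def plan_reassignment(pairs):
    """pairs: list of (proc_a, proc_b, node_of_b, access_rate).
    Pick the pair with the highest access rate (highest priority) and
    move its first process onto the node where its partner runs."""
    proc_a, proc_b, node_b, _rate = max(pairs, key=lambda p: p[3])
    return (proc_a, node_b)

# p1/p2 communicate heavily across nodes; p3/p4 much less so.
pairs = [("p1", "p2", "node0", 500),
         ("p3", "p4", "node1", 120)]
```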
20160132356 | MANAGEMENT APPARATUS AND METHOD FOR SYSTEM CONFIGURATION - A management apparatus includes (A) an acceptance unit to accept an instruction to dynamically change a processor configuration in a system that includes plural processors, and (B) a processing unit to identify a performance value of a system corresponding to a processor configuration caused by instructed dynamic change, determine whether or not the identified performance value is equal to or greater than a requested performance value for the system, and perform a processing to change the processor configuration instructed by the accepted instruction, upon determining that the identified performance value is equal to or greater than the requested performance value. | 2016-05-12 |
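The gating logic of the processing unit, apply a configuration change only when the identified performance still meets the requested performance, reduces to a simple check. This sketch is illustrative; how the performance value is identified for a candidate configuration is the substance the abstract leaves open.

```python
def apply_change(predicted_perf: float, required_perf: float, change: str):
    """Apply the processor-configuration change only if the performance
    value identified for the new configuration meets the requested value."""
    if predicted_perf >= required_perf:
        return ("applied", change)
    return ("rejected", change)
```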
20160132357 | DATA STAGING MANAGEMENT SYSTEM - Batch job data staging combining synchronous and asynchronous staging. In pre-processing, a stage-in source file and a stage-out target file, both in permanent storage, are identified using a batch script. From the data amounts, the times for stage-in to and stage-out from temporary storage are estimated. Stage-in is scheduled based on the estimated time, stage-out is asynchronous, and each asynchronous staging is classified as short or long term depending on the time, with each staging recorded in a table. If a source file is modified, an incremental staging is added to the table. Stage-in is performed with a staging list scheduled for batch jobs, progress is monitored in the table, and resources may be allocated to the job's nodes without waiting for stage-in to complete. The job generates results in the temporary storage, and in post-processing, stage-out transfers the results to the target file in permanent storage. | 2016-05-12 |
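Estimating staging time from the data amount and classifying the transfer as short or long term can be sketched as below. The 60-second threshold and function names are invented assumptions, not values from the application.

```python
def estimate_and_classify(bytes_to_move: float, bandwidth_bps: float,
                          threshold_s: float = 60.0):
    """Estimate transfer time from data amount and bandwidth, then
    classify the asynchronous staging as short or long term."""
    t = bytes_to_move / bandwidth_bps
    return t, ("long" if t > threshold_s else "short")

# 700 MB over a 10 MB/s link takes 70 s: a long-term staging.
time_s, term = estimate_and_classify(700e6, 10e6)
```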
20160132358 | PERIPHERAL DEVICE SHARING ACROSS VIRTUAL MACHINES RUNNING ON DIFFERENT HOST COMPUTING SYSTEMS - Techniques for sharing a peripheral device connected to a first host computing system in a cluster are disclosed. In one embodiment, a request to access the peripheral device connected to the first host computing system is received from a virtual machine running on a second host computing system. Further, a bandwidth requirement associated with the peripheral device is determined. Furthermore, one of enabling the virtual machine to remotely access the peripheral device over a network and recommending migration of the virtual machine to the first host computing system to locally access the peripheral device is performed based on the bandwidth requirement of the peripheral device. | 2016-05-12 |
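The bandwidth-based decision between remote access and migration can be reduced to a sketch like this. Comparing the device's bandwidth requirement directly against available network bandwidth is an assumed policy for illustration only.

```python
def placement_decision(device_bw_mbps: float, network_bw_mbps: float) -> str:
    """A device whose bandwidth requirement exceeds what the network can
    carry favors migrating the VM to the device's host; otherwise the VM
    can access the device remotely over the network."""
    if device_bw_mbps > network_bw_mbps:
        return "migrate_vm_to_device_host"
    return "remote_access_over_network"
```

A high-throughput GPU-like device would trigger migration, while a low-rate device such as a USB dongle would be shared over the network.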
20160132359 | ABNORMALITY DETECTION APPARATUS, CONTROL METHOD, AND PROGRAM - An abnormality detection apparatus ( | 2016-05-12 |
20160132360 | Stream Schema Resolution and Stream Tuple Processing in a Distributed Stream-Processing System - A task worker running on a worker server receives a process specification over a network. The process specification specifies a task to be executed by the task worker. The executed task includes generating an output data object for an output data stream based in part on an input data object from an input data stream. The process specification is accessed to specify the required fields to be read for executing the task and the fields in the data object that will be written to during or subsequent to the execution of the task. The task worker executes the task and generates the output data object. The output data object is then transmitted to the output stream based on the stream configuration. | 2016-05-12 |
20160132361 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 2016-05-12 |
20160132362 | AUTOMATIC ADMINISTRATION OF UNIX COMMANDS - Various techniques for automatically administering UNIX commands to target systems are disclosed. One method involves receiving information identifying a UNIX command and additional information identifying one or more target systems. The method then issues N instances of the UNIX command in parallel to the one or more target systems, where N is an integer greater than one. The N instances of the UNIX command are issued automatically, in response to receipt of the information and the additional information. In some situations, issuing the N instances of the UNIX command in parallel involves creating N threads, where each of the N threads is configured to issue a respective one of the N instances of the UNIX command to a respective one of the target systems. | 2016-05-12 |
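The thread-per-instance pattern described above, N threads each issuing one command instance to one target, can be sketched directly. For a self-contained example the command runs locally instead of being dispatched to remote systems; in a real tool the worker would connect to each target.

```python
import subprocess
import threading

def run_parallel(command: str, targets) -> dict:
    """Issue one instance of `command` per target, each on its own thread."""
    results = {}

    def worker(target):
        # A real administration tool would execute on `target` (e.g. via
        # ssh); here we run the command locally for illustration.
        out = subprocess.run(command, shell=True,
                             capture_output=True, text=True)
        results[target] = out.stdout.strip()

    threads = [threading.Thread(target=worker, args=(t,)) for t in targets]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

res = run_parallel("echo up", ["hostA", "hostB", "hostC"])
```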
20160132363 | Migrating Processes Operating On One Platform To Another Platform In A Multi-Platform System - Embodiments of the claimed subject matter are directed to methods and a system that allows the optimization of processes operating on a multi-platform system (such as a mainframe) by migrating certain processes operating on one platform to another platform in the system. In one embodiment, optimization is performed by evaluating the processes executing in a partition operating under a proprietary operating system, determining a collection of processes from the processes to be migrated, calculating a cost of migration for rating the collection of processes, prioritizing the collection of processes in an order of migration and incrementally migrating the processes according to the order of migration to another partition in the mainframe executing a lower cost (e.g., open-source) operating system. | 2016-05-12 |
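The cost-rated, prioritized migration order might be sketched as below. Representing each process by a single scalar migration cost, and migrating cheapest-first, is an invented simplification of the rating and prioritization the abstract describes.

```python
def migration_order(migration_costs: dict) -> list:
    """migration_costs maps process name -> estimated cost of migrating it
    off the proprietary-OS partition. Incremental migration proceeds from
    the cheapest process to the most expensive."""
    return sorted(migration_costs, key=migration_costs.get)

# Example: the batch job is cheap to move, the database is not.
order = migration_order({"db": 9, "batch": 2, "web": 5})
```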
20160132364 | LOCK MANAGEMENT METHOD AND SYSTEM, METHOD AND APPARATUS FOR CONFIGURING LOCK MANAGEMENT SYSTEM - A lock management method and system, and a method and an apparatus for configuring a lock management system, are provided. A corresponding level of a lock management system is set for each service execution node according to the number of service execution nodes included in a distributed system, the number of system instances on all service execution nodes, the number of handling processes on all the service execution nodes, and a delay of access of each service execution node to a central control node of the distributed system. At least one lock manager is allocated to each service execution node separately according to the level, which is corresponding to each service execution node, of the lock management system. A lock level context is configured for each lock manager, where the lock level context is used to determine an adjacent lock manager of each lock manager. | 2016-05-12 |
20160132365 | MECHANISM FOR INTERPOSING ON OPERATING SYSTEM CALLS - A method for interposing on operating system calls in a host is provided. The method includes patching an operating system kernel function, the patching comprising adding a first pointer that invokes an agent function, the patching performed by an agent. The method includes executing the agent function, responsive to a system call stub calling the operating system kernel function, which invokes the agent function via the first pointer, wherein at least one action of the method is performed by a processor of a host having an operating system. | 2016-05-12 |
20160132366 | SYSTEM AND METHOD FOR CONTROLLING THE SALE AND MANUFACTURE OF EQUIPMENT AND THE TRANSITION THEREBETWEEN - A software-implementable system that provides bi-directional communication between engineering, through software add-ins, and other applications within an ERP system. Such a system provides efficiency enhancements and improved data flow and communication between engineering and other departments. While not required, the system is well suited for application in manufacturing of equipment, and in particular, manufacturing of custom equipment. | 2016-05-12 |
20160132367 | System and Method for Linearizing Messages from Data Sources for Optimized High-Performance Processing in a Stream Processing System - A data object from a data source is received by a distributed process in a data stream. The distributed process has a sequence of categories, each category containing one or more tasks that operate on the data object. The data object includes files that can be processed by the tasks. If the task is able to operate on the data object, then the data object is passed to the task. If the task is unable to operate on the data object, then the files in the data object are passed to a file staging area of the distributed process and stored in memory. The files in the file staging area are passed, in sequence, from the file staging area to the task that was unable to operate on the data object. The data object is outputted to a next category or data sink after being operated on by the task. | 2016-05-12 |
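The fallback path above, pass the whole data object when the task can handle it, otherwise stage its files and feed them to the task one by one in sequence, can be sketched as follows. The object layout and the boolean capability flag are invented simplifications.

```python
from collections import deque

def process(data_object: dict, task_handles_objects: bool) -> list:
    """Route a data object to a task directly, or linearize it through
    a file staging area when the task cannot operate on whole objects."""
    staging = deque()   # the file staging area, held in memory
    outputs = []
    if task_handles_objects:
        outputs.append(("object", data_object["name"]))
    else:
        staging.extend(data_object["files"])      # stage the object's files
        while staging:                            # pass files in sequence
            outputs.append(("file", staging.popleft()))
    return outputs

staged = process({"name": "batch1", "files": ["a.csv", "b.csv"]}, False)
```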
20160132368 | EVENT PROCESSING DEVELOPMENT ENVIRONMENT - Embodiments described herein are directed to methods and systems for generating event processing language code in a development environment using an event processing compiler. A query in event processing language is received in a development environment. The query can be associated with sample data from input files or an input data source. An event processing compiler compiles the query, transforming it from event processing language code to development environment script language code. In particular, the compiler transforms the code based on event processing attributes that are closely aligned in syntax and semantics between the event processing language and the development environment script language. The query is executed as a development environment script using the sample data. Executing the query generates output comprising final results data and intermediate results data, and displays warnings when mismatches exist between the results data and the output specifications. | 2016-05-12 |