Patent application number | Description | Published |
20120218268 | ANALYSIS OF OPERATOR GRAPH AND DYNAMIC REALLOCATION OF A RESOURCE TO IMPROVE PERFORMANCE - An operator graph analysis mechanism analyzes an operator graph corresponding to an application for problems as the application runs, and determines potential reallocations from a reallocation policy. The reallocation policy may specify potential reallocations depending on whether one or more operators in the operator graph are compute bound, memory bound, communication bound, or storage bound. The operator graph analysis mechanism includes a resource reallocation mechanism that can dynamically change allocation of resources in the system at runtime to address problems detected in the operator graph. The operator graph analysis mechanism thus allows an application represented by an operator graph to dynamically evolve over time to optimize its performance at runtime. | 08-30-2012 |
20130024442 | SYSTEM LOAD QUERY GOVERNOR - Techniques are disclosed for query processing. In one embodiment, a query is received for execution in a processing environment. Based on a measure of current load of the processing environment, a maximum amount of a resource that the query is allowed to consume is determined. An amount of the resource that the query is to consume is estimated. Execution of the query is managed based on a comparison between the maximum amount and the estimated amount. | 01-24-2013 |
20130031124 | MANAGEMENT SYSTEM FOR PROCESSING STREAMING DATA - Techniques are disclosed for evaluating tuples for processing by a stream application having a plurality of processing elements. In one embodiment, at least one tuple to be processed by at least one processing element of the stream application is identified. A maximum duration for which the at least one processing element is allowed to process the at least one tuple is determined. A duration for which the at least one processing element is likely to process the at least one tuple is also estimated. Processing of the at least one tuple is managed based on a comparison between the maximum duration and the estimated duration. | 01-31-2013 |
20130031263 | DYNAMIC RUNTIME CHOOSING OF PROCESSING COMMUNICATION METHODS - Techniques are described for assigning and changing communication protocols for a pair of processing elements. The communication protocol determines how the pair of processing elements transmits data in a stream application. The pair may be assigned a communication protocol (e.g., TCP/IP or a protocol that uses a relational database, shared file system, or shared memory) before the operator graph begins to stream data. This assignment may be based on a priority of the processing elements and/or a priority of the communication protocols. After the operator graph begins to stream data, the pair of processing elements may switch to a different communication protocol. The decision to switch the communication protocol may be based on whether the pair of processing elements or assigned communication protocol is meeting established performance standards for the stream application. | 01-31-2013 |
20130031335 | USING PREDICTIVE DETERMINISM WITHIN A STREAMING ENVIRONMENT - Techniques are described for transmitting predicted output data on a processing element in a stream computing application instead of processing currently received input data. The stream computing application monitors the output of a processing element and determines whether its output is predictable, for example, if the previously transmitted output values are within a predefined range or if one or more input values correlate with the same one or more output values. The application may then generate a predicted output value to transmit from the processing element instead of transmitting a processed output value based on current input values. The predicted output value may be, for example, an average of the previously transmitted output values or a previously transmitted output value that was transmitted in response to a previously received input value that is similar to a currently received input value. Moreover, the processing element or elements that transmit the predicted output data may be upstream from the processing element with the predictable output. | 01-31-2013 |
20130080600 | MANAGEMENT SYSTEM FOR PROCESSING STREAMING DATA - Techniques are disclosed for evaluating tuples for processing by a stream application having a plurality of processing elements. In one embodiment, at least one tuple to be processed by at least one processing element of the stream application is identified. A maximum duration for which the at least one processing element is allowed to process the at least one tuple is determined. A duration for which the at least one processing element is likely to process the at least one tuple is also estimated. Processing of the at least one tuple is managed based on a comparison between the maximum duration and the estimated duration. | 03-28-2013 |
20130080652 | DYNAMIC RUNTIME CHOOSING OF PROCESSING COMMUNICATION METHODS - Techniques are described for assigning and changing communication protocols for a pair of processing elements. The communication protocol determines how the pair of processing elements transmits data in a stream application. The pair may be assigned a communication protocol (e.g., TCP/IP or a protocol that uses a relational database, shared file system, or shared memory) before the operator graph begins to stream data. This assignment may be based on a priority of the processing elements and/or a priority of the communication protocols. After the operator graph begins to stream data, the pair of processing elements may switch to a different communication protocol. The decision to switch the communication protocol may be based on whether the pair of processing elements or assigned communication protocol is meeting established performance standards for the stream application. | 03-28-2013 |
20130080653 | USING PREDICTIVE DETERMINISM WITHIN A STREAMING ENVIRONMENT - Techniques are described for transmitting predicted output data on a processing element in a stream computing application instead of processing currently received input data. The stream computing application monitors the output of a processing element and determines whether its output is predictable, for example, if the previously transmitted output values are within a predefined range or if one or more input values correlate with the same one or more output values. The application may then generate a predicted output value to transmit from the processing element instead of transmitting a processed output value based on current input values. The predicted output value may be, for example, an average of the previously transmitted output values or a previously transmitted output value that was transmitted in response to a previously received input value that is similar to a currently received input value. | 03-28-2013 |
20130081046 | ANALYSIS OF OPERATOR GRAPH AND DYNAMIC REALLOCATION OF A RESOURCE TO IMPROVE PERFORMANCE - An operator graph analysis mechanism analyzes an operator graph corresponding to an application for problems as the application runs, and determines potential reallocations from a reallocation policy. The reallocation policy may specify potential reallocations depending on whether one or more operators in the operator graph are compute bound, memory bound, communication bound, or storage bound. The operator graph analysis mechanism includes a resource reallocation mechanism that can dynamically change allocation of resources in the system at runtime to address problems detected in the operator graph. The operator graph analysis mechanism thus allows an application represented by an operator graph to dynamically evolve over time to optimize its performance at runtime. | 03-28-2013 |
20130166617 | ENHANCED BARRIER OPERATOR WITHIN A STREAMING ENVIRONMENT - Techniques are described for processing data. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Additionally, a first processing element in the operator graph includes a barrier operator that joins the output of one or more upstream operators included in one or more of the plurality of processing elements. Embodiments initiate one or more timeout conditions at the barrier operator. Embodiments also determine, at the first processing element, that one or more timeout conditions have been satisfied before data has been received from each of the one or more upstream operators. Upon determining that the one or more timeout conditions have been satisfied, embodiments generate output data at the barrier operator without the data from at least one of the one or more upstream operators. | 06-27-2013 |
20130166618 | PREDICTIVE OPERATOR GRAPH ELEMENT PROCESSING - Techniques are described for predictively starting a processing element. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Embodiments determine a historical startup time for a first processing element in the operator graph, where, once started, the first processing element begins normal operations once the first processing element has received a requisite amount of data from one or more upstream processing elements. Additionally, embodiments determine an amount of time the first processing element takes to receive the requisite amount of data from the one or more upstream processing elements. The first processing element is then predictively started at a first startup time based on the determined historical startup time and the determined amount of time historically taken to receive the requisite amount of data. | 06-27-2013 |
20130166620 | ENHANCED BARRIER OPERATOR WITHIN A STREAMING ENVIRONMENT - Techniques are described for processing data. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Additionally, a first processing element in the operator graph includes a barrier operator that joins the output of one or more upstream operators included in one or more of the plurality of processing elements. Embodiments initiate one or more timeout conditions at the barrier operator. Embodiments also determine, at the first processing element, that one or more timeout conditions have been satisfied before data has been received from each of the one or more upstream operators. Upon determining that the one or more timeout conditions have been satisfied, embodiments generate output data at the barrier operator without the data from at least one of the one or more upstream operators. | 06-27-2013 |
20130166888 | PREDICTIVE OPERATOR GRAPH ELEMENT PROCESSING - Techniques are described for predictively starting a processing element. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Embodiments determine a historical startup time for a first processing element in the operator graph, where, once started, the first processing element begins normal operations once the first processing element has received a requisite amount of data from one or more upstream processing elements. Additionally, embodiments determine an amount of time the first processing element takes to receive the requisite amount of data from the one or more upstream processing elements. The first processing element is then predictively started at a first startup time based on the determined historical startup time and the determined amount of time historically taken to receive the requisite amount of data. | 06-27-2013 |
20130166942 | UNFUSING A FAILING PART OF AN OPERATOR GRAPH - Techniques for managing a fused processing element are described. Embodiments receive streaming data to be processed by a plurality of processing elements. Additionally, an operator graph of the plurality of processing elements is established. The operator graph defines at least one execution path, and at least one of the processing elements of the operator graph is configured to receive data from at least one upstream processing element and transmit data to at least one downstream processing element. Embodiments detect that an error condition has been satisfied at a first one of the plurality of processing elements, wherein the first processing element contains a plurality of fused operators. At least one of the plurality of fused operators is selected for removal from the first processing element. Embodiments then remove the selected at least one fused operator from the first processing element. | 06-27-2013 |
20130166961 | DETECTING AND RESOLVING ERRORS WITHIN AN APPLICATION - Techniques for managing errors within an application are provided. Embodiments monitor errors occurring in each of a plurality of portions of the application while the application is executing. An error occurring in a first one of the plurality of portions of the application is detected. Additionally, upon detecting the error occurring in the first portion, embodiments determine whether to prevent subsequent executions of the first portion of the application. | 06-27-2013 |
20130166962 | DETECTING AND RESOLVING ERRORS WITHIN AN APPLICATION - Techniques for managing errors within an application are provided. Embodiments monitor errors occurring in each of a plurality of portions of the application while the application is executing. An error occurring in a first one of the plurality of portions of the application is detected. Additionally, upon detecting the error occurring in the first portion, embodiments determine whether to prevent subsequent executions of the first portion of the application. | 06-27-2013 |
20130179585 | TRIGGERING WINDOW CONDITIONS BY STREAMING FEATURES OF AN OPERATOR GRAPH - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130179586 | TRIGGERING WINDOW CONDITIONS USING EXCEPTION HANDLING - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130179591 | TRIGGERING WINDOW CONDITIONS BY STREAMING FEATURES OF AN OPERATOR GRAPH - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130198318 | PROCESSING ELEMENT MANAGEMENT IN A STREAMING DATA SYSTEM - Stream applications may inefficiently use the hardware resources that execute the processing elements of the data stream. For example, a compute node may host four processing elements and execute each using a CPU. However, other CPUs on the compute node may sit idle. To take advantage of these available hardware resources, a stream programmer may identify one or more processing elements that may be cloned. The cloned processing elements may be used to generate a different execution path that is parallel to the execution path that includes the original processing elements. Because the cloned processing elements contain the same operators as the original processing elements, the data stream that was previously flowing through only the original processing element may be split and sent through both the original and cloned processing elements. In this manner, the parallel execution path may use underutilized hardware resources to increase the throughput of the data stream. | 08-01-2013 |
20130198366 | DEPLOYING AN EXECUTABLE WITH HISTORICAL PERFORMANCE DATA - Techniques for incorporating performance data into an executable file for an application are described. Embodiments monitor performance of an application while the application is running. Additionally, historical execution characteristics of the application are determined based upon the monitored performance and one or more system characteristics of a node on which the application was executed. Embodiments also incorporate the historical execution characteristics into the executable file for the application, such that the historical execution characteristics can be used to manage subsequent executions of the application. | 08-01-2013 |
20130198371 | DEPLOYING AN EXECUTABLE WITH HISTORICAL PERFORMANCE DATA - Techniques for incorporating performance data into an executable file for an application are described. Embodiments monitor performance of an application while the application is running. Additionally, historical execution characteristics of the application are determined based upon the monitored performance and one or more system characteristics of a node on which the application was executed. Embodiments also incorporate the historical execution characteristics into the executable file for the application, such that the historical execution characteristics can be used to manage subsequent executions of the application. | 08-01-2013 |
20130198489 | PROCESSING ELEMENT MANAGEMENT IN A STREAMING DATA SYSTEM - Stream applications may inefficiently use the hardware resources that execute the processing elements of the data stream. For example, a compute node may host four processing elements and execute each using a CPU. However, other CPUs on the compute node may sit idle. To take advantage of these available hardware resources, a stream programmer may identify one or more processing elements that may be cloned. The cloned processing elements may be used to generate a different execution path that is parallel to the execution path that includes the original processing elements. Because the cloned processing elements contain the same operators as the original processing elements, the data stream that was previously flowing through only the original processing element may be split and sent through both the original and cloned processing elements. In this manner, the parallel execution path may use underutilized hardware resources to increase the throughput of the data stream. | 08-01-2013 |
20130290394 | MONITORING STREAMS BUFFERING TO OPTIMIZE OPERATOR PROCESSING - Method, system and computer program product for performing an operation, including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples, establishing an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator, providing each operator a buffer configured to hold data tuples requiring processing by the respective operator, wherein the buffer is a first-in-first-out buffer, receiving a plurality of data tuples in a first buffer associated with an operator, the data tuples comprising at least one attribute, selecting at least one data tuple from the first buffer, examining an attribute of the selected data tuples to identify a candidate tuple, and performing a second operation on the candidate tuple. | 10-31-2013 |
20130290489 | MONITORING STREAMS BUFFERING TO OPTIMIZE OPERATOR PROCESSING - Method, system and computer program product for performing an operation, including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples, establishing an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator, providing each operator a buffer configured to hold data tuples requiring processing by the respective operator, wherein the buffer is a first-in-first-out buffer, receiving a plurality of data tuples in a first buffer associated with an operator, the data tuples comprising at least one attribute, selecting at least one data tuple from the first buffer, examining an attribute of the selected data tuples to identify a candidate tuple, and performing a second operation on the candidate tuple. | 10-31-2013 |
20130290966 | OPERATOR GRAPH CHANGES IN RESPONSE TO DYNAMIC CONNECTIONS IN STREAM COMPUTING APPLICATIONS - A stream computing application may permit one job to connect to a data stream of a different job. As more and more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the job that generates the data stream. Accordingly, a variety of metrics and statistics (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If so, the stream computing system may be optimized to mitigate the effects of the dynamic connections. For example, particular operators may be unfused from a processing element and moved to a compute node that has available computing resources. Additionally, the stream computing application may clone the data stream in order to distribute the workload of transmitting the data stream to the connected jobs. | 10-31-2013 |
20130305032 | ANONYMIZATION OF DATA WITHIN A STREAMS ENVIRONMENT - Streams applications may decrypt encrypted data even though the decrypted data is not used by an operator. Operator properties are defined to permit decryption of data within the operator based on a number of criteria. By limiting the number of operators that decrypt encrypted data, the anonymous nature of the data is further preserved. Operator properties also indicate whether an operator should send encrypted or decrypted data to a downstream operator. | 11-14-2013 |
20130305034 | ANONYMIZATION OF DATA WITHIN A STREAMS ENVIRONMENT - Streams applications may decrypt encrypted data even though the decrypted data is not used by an operator. Operator properties are defined to permit decryption of data within the operator based on a number of criteria. By limiting the number of operators that decrypt encrypted data, the anonymous nature of the data is further preserved. Operator properties also indicate whether an operator should send encrypted or decrypted data to a downstream operator. | 11-14-2013 |
20130305225 | STREAMS DEBUGGING WITHIN A WINDOWING CONDITION - Method, system and computer program product for performing an operation, the operation including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples. The operation then establishes an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator of the plurality of operators is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator. The operation then defines a breakpoint, the breakpoint comprising a condition, the condition based on attribute values of data tuples in a window of at least one operator, the window comprising a plurality of data tuples in an operator. The operation, upon detecting occurrence of the condition, triggers the breakpoint to halt processing by each of the plurality of operators in the operator graph. | 11-14-2013 |
20130305227 | STREAMS DEBUGGING WITHIN A WINDOWING CONDITION - Method, system and computer program product for performing an operation, the operation including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples. The operation then establishes an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator of the plurality of operators is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator. The operation then defines a breakpoint, the breakpoint comprising a condition, the condition based on attribute values of data tuples in a window of at least one operator, the window comprising a plurality of data tuples in an operator. The operation, upon detecting occurrence of the condition, triggers the breakpoint to halt processing by each of the plurality of operators in the operator graph. | 11-14-2013 |
20140089351 | HANDLING OUT-OF-SEQUENCE DATA IN A STREAMING ENVIRONMENT - Computer-implemented method, system, and computer program product for processing data in an out-of-order manner in a streams computing environment. A windowing condition is defined such that incoming data tuples are processed within a specified time or count of each other. Additionally, the windowing condition may be based on a specified attribute of the data tuples. If the tuples are not processed within the constraints specified by the windowing condition, the unprocessed tuples may be discarded, i.e., not processed, to optimize operator performance. | 03-27-2014 |
20140089352 | HANDLING OUT-OF-SEQUENCE DATA IN A STREAMING ENVIRONMENT - Computer-implemented method, system, and computer program product for processing data in an out-of-order manner in a streams computing environment. A windowing condition is defined such that incoming data tuples are processed within a specified time or count of each other. Additionally, the windowing condition may be based on a specified attribute of the data tuples. If the tuples are not processed within the constraints specified by the windowing condition, the unprocessed tuples may be discarded, i.e., not processed, to optimize operator performance. | 03-27-2014 |
20140132763 | Distributed Control of a Heterogeneous Video Surveillance Network - A surveillance video broker arbitrates access by multiple clients to multiple surveillance video sources. Both clients and sources register with the broker. Each source independently specifies respective clients permitted real-time access to its video and conditions of access, if any. Preferably, the video source is a local surveillance domain having one or more cameras, one or more sensors, and a local controller, the source specifying clients or client groups permitted access, and independently specifying conditions of access for each client or client group, where conditions may include scheduled events, non-scheduled events, such as alarms or emergencies, and/or physical proximity. The broker automatically authorizes real-time access according to pre-specified conditions. Preferably, the broker can also arbitrate alert notifications to the clients based on pre-specified notification criteria. | 05-15-2014 |
20140132764 | Providing Emergency Access to Surveillance Video - Real-time access by a public authority emergency responder to surveillance video of a privately-controlled source is conditionally pre-authorized dependent on the existence of at least one pre-specified emergency condition, and recorded in a data processing system. A public authority emergency responder subsequently requests real-time access to the surveillance video (e.g., during an emergency), and if the pre-specified emergency condition is met, access is automatically granted, i.e., without the need for manual intervention. A pre-specified emergency condition could, e.g., be an alarm condition detected by a sensor at the site of the video surveillance, or a declared state of emergency, properly declared by an appropriate public official. | 05-15-2014 |
20140132765 | Automated Authorization to Access Surveillance Video Based on Pre-Specified Events - Real-time access by a requestor to surveillance video is conditionally pre-authorized dependent on the existence of at least one pre-specified automatically detectable condition, and recorded in a data processing system. A requestor subsequently requests real-time access to the surveillance video (e.g., as a result of an alarm), and if the pre-specified automatically detectable condition is met, access is automatically granted, i.e., without the need for manual intervention. An automatically detectable condition could, e.g., be an alarm condition detected by a sensor at the site of the video surveillance. Alternatively, it could be a locational proximity of the requestor to the site of the video surveillance. Alternatively, it could be a previously defined time interval. | 05-15-2014 |
20140132772 | Automated Authorization to Access Surveillance Video Based on Pre-Specified Events - Real-time access by a requestor to surveillance video is conditionally pre-authorized dependent on the existence of at least one pre-specified automatically detectable condition, and recorded in a data processing system. A requestor subsequently requests real-time access to the surveillance video (e.g., as a result of an alarm), and if the pre-specified automatically detectable condition is met, access is automatically granted, i.e., without the need for manual intervention. An automatically detectable condition could, e.g., be an alarm condition detected by a sensor at the site of the video surveillance. Alternatively, it could be a locational proximity of the requestor to the site of the video surveillance. Alternatively, it could be a previously defined time interval. | 05-15-2014 |
20140133831 | Providing Emergency Access to Surveillance Video - Real-time access by a public authority emergency responder to surveillance video of a privately-controlled source is conditionally pre-authorized dependent on the existence of at least one pre-specified emergency condition, and recorded in a data processing system. A public authority emergency responder subsequently requests real-time access to the surveillance video (e.g., during an emergency), and if the pre-specified emergency condition is met, access is automatically granted, i.e., without the need for manual intervention. A pre-specified emergency condition could, e.g., be an alarm condition detected by a sensor at the site of the video surveillance, or a declared state of emergency, properly declared by an appropriate public official. | 05-15-2014 |
20140136701 | Distributed Control of a Heterogeneous Video Surveillance Network - A surveillance video broker arbitrates access by multiple clients to multiple surveillance video sources. Both clients and sources register with the broker. Each source independently specifies respective clients permitted real-time access to its video and conditions of access, if any. Preferably, the video source is a local surveillance domain having one or more cameras, one or more sensors, and a local controller, the source specifying clients or client groups permitted access, and independently specifying conditions of access for each client or client group, where conditions may include scheduled events, non-scheduled events, such as alarms or emergencies, and/or physical proximity. The broker automatically authorizes real-time access according to pre-specified conditions. Preferably, the broker can also arbitrate alert notifications to the clients based on pre-specified notification criteria. | 05-15-2014 |
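Several of the abstracts above (notably the "Triggering Window Conditions" family, 20130179585/20130179586/20130179591) describe windows that fire when a buffer reaches a maximum size or when the observed tuple rate crosses a threshold. A minimal sketch of that idea follows; the class name `WindowTrigger`, its parameters, and the rate estimate over recent arrivals are hypothetical simplifications, not the patented implementations:

```python
import time
from collections import deque


class WindowTrigger:
    """Buffers incoming tuples and fires a window when either the buffer
    reaches max_size or the recent tuple arrival rate exceeds rate_threshold
    (tuples per second). Hypothetical sketch of a windowing condition."""

    def __init__(self, max_size=4, rate_threshold=100.0):
        self.buffer = deque()
        self.max_size = max_size
        self.rate_threshold = rate_threshold
        # Timestamps of the most recent arrivals, used to estimate tuple rate.
        self.timestamps = deque(maxlen=10)

    def offer(self, tup, now=None):
        """Add a tuple; return the window's tuples if a trigger fires, else None."""
        now = time.monotonic() if now is None else now
        self.buffer.append(tup)
        self.timestamps.append(now)

        # Size-based trigger: buffer reached the maximum window size.
        if len(self.buffer) >= self.max_size:
            return self.flush()

        # Rate-based trigger: estimate arrivals per second over recent tuples.
        if len(self.timestamps) >= 2:
            span = self.timestamps[-1] - self.timestamps[0]
            if span > 0 and (len(self.timestamps) - 1) / span > self.rate_threshold:
                return self.flush()
        return None

    def flush(self):
        """Emit the current window contents and reset the buffer."""
        window = list(self.buffer)
        self.buffer.clear()
        return window
```

A downstream operator would call `offer` per arriving tuple and evaluate the returned list whenever it is non-`None`; a time-based trigger (window age exceeding a deadline) could be added to `offer` in the same style.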