SILICON GRAPHICS INTERNATIONAL CORP. Patent applications |
Patent application number | Title | Published |
20150312165 | TEMPORAL BASED COLLABORATIVE MUTUAL EXCLUSION CONTROL OF A SHARED RESOURCE - The present invention relates to a temporal-based method of mutual exclusion control of a shared resource. The invention will usually be implemented by a plurality of host computers sharing a shared resource, where each host computer will read a reservation memory that is associated with the shared resource. Typically a first host computer will perform an initial read of the reservation memory and, when the reservation memory indicates that the shared resource is available, the first host computer will write to the reservation memory. After a time delay, the host computer will read the reservation memory again to determine whether it has won access to the resource. The first host computer may determine that it has won access to the shared resource by checking that data in the reservation memory includes an identifier corresponding to the first host computer. | 10-29-2015 |
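The read/write/delay/re-read sequence in this abstract lends itself to a short sketch. This is a minimal single-process illustration, assuming a hypothetical shared reservation word and integer host identifiers; a real implementation would span multiple hosts sharing the reservation memory:

```python
import time

AVAILABLE = 0                    # sentinel: no host holds the reservation
reservation_memory = AVAILABLE   # hypothetical stand-in for the shared word

def try_acquire(host_id: int, delay_s: float = 0.01) -> bool:
    """Temporal mutual exclusion: read, write, wait, then verify."""
    global reservation_memory
    if reservation_memory != AVAILABLE:      # initial read
        return False                         # resource already claimed
    reservation_memory = host_id             # write our identifier
    time.sleep(delay_s)                      # let competing writes settle
    return reservation_memory == host_id     # won only if our id survived

if try_acquire(host_id=42):
    print("host 42 won access to the shared resource")
```

In a single process the final read always succeeds; across real hosts, a competing writer could overwrite the identifier during the delay, which is exactly what the verification read is there to detect.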
20150280746 | Low Latency Serial Data Encoding Scheme For Enhanced Burst Error Immunity and Long Term Reliability - A high performance computing system and method communicate data packets between computing nodes on a multi-lane communications link using a modified header bit encoding. Each data packet is provided with flow control information and error detection information, then divided into per-lane payloads. Sync header bits for each payload are added to the payloads in non-adjacent locations, thereby decreasing the probability that a single correlated burst error will invert both header bits. The encoded blocks that include the payload and the interspersed header bits are then simultaneously transmitted on the multiple lanes for reception, error detection, and reassembly by a receiving computing node. | 10-01-2015 |
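A rough sketch of the header-bit interspersing idea, assuming bit lists for the payload and arbitrarily chosen non-adjacent offsets (the abstract does not fix the positions):

```python
def encode_block(payload: list[int], header: tuple[int, int]) -> list[int]:
    """Place one sync header bit before the payload and the other in the
    middle, so the two bits are never adjacent on the wire and a single
    short burst error is unlikely to invert both."""
    mid = len(payload) // 2
    return [header[0]] + payload[:mid] + [header[1]] + payload[mid:]

def decode_block(block: list[int]) -> tuple[tuple[int, int], list[int]]:
    """Recover the header pair and the payload from an encoded block."""
    mid = (len(block) - 2) // 2
    header = (block[0], block[mid + 1])
    return header, block[1:mid + 1] + block[mid + 2:]

hdr, data = decode_block(encode_block([1, 0, 1, 1], (0, 1)))
assert hdr == (0, 1) and data == [1, 0, 1, 1]
```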
20150278040 | High Speed Serial Link In-Band Lane Fail Over for RAS and Power Management - A system and method provide a communications link having a plurality of lanes, and an in-band, real-time physical layer protocol that keeps all lanes on-line, while failing lanes are removed, for continuous service during fail over operations. Lane status is monitored real-time at the physical layer receiver, where link error rate, per lane error performance, and other channel metrics are known. If a lane failure is established, a single round trip request/acknowledge protocol exchange with the remote port completes the fail over. If a failing lane meets an acceptable performance level, it remains on-line during the round trip exchange, resulting in uninterrupted link service. Lanes may be brought in or out of service to meet reliability, availability, and power consumption goals. | 10-01-2015 |
20150199420 | VISUALLY APPROXIMATING PARALLEL COORDINATES DATA - A data visualization system with the capability of viewing large amounts of data in a parallel coordinates system. Large amounts of data are displayed in parallel coordinates by grouping data points into bins and representing grouped data with fewer graphical elements. The fewer graphical elements simplify the graphical representation of the data while still providing information about the density or volume of data occupying a particular space. Bins are determined for each axis. The volume of connections between a pair of neighboring bins may be represented by modifying an aspect of the connection based on the volume. | 07-16-2015 |
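A minimal sketch of the binning step for one pair of adjacent axes, with hypothetical axis ranges and bin counts; the resulting counts would drive the visual weight of each drawn connection:

```python
from collections import Counter

def binned_links(xs, ys, n_bins, x_range, y_range):
    """Map each record to an (x-bin, y-bin) pair and count the pairs, so
    one weighted line per pair can replace one line per record."""
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
    return Counter((bin_of(x, *x_range), bin_of(y, *y_range))
                   for x, y in zip(xs, ys))

links = binned_links([1, 2, 9], [4, 5, 1], n_bins=4,
                     x_range=(0, 10), y_range=(0, 10))
print(links)   # e.g. draw each connection with width proportional to count
```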
20150199105 | AUTOMATIC SELECTION OF CENTER OF ROTATION FOR GRAPHICAL SCENES - A center of rotation may automatically be selected for graphically displayed data. The rotation center may be automatically selected based on what is determined to be of interest to the user, the current display of the data, and other parameters. For example, if a user has selected a portion of data, the center of rotation may be within the center of the selected data. If a user has positioned a cursor within a portion of displayed data, the center of rotation may be the center of the data portion including the cursor. If the data as a whole is approximately centered about the graphical coordinate origin, or within a threshold of the origin, the data may be rotated about the origin. If the data as a whole is approximately centered at least a certain distance away from the graphical coordinate origin, the data may be rotated about the center of the data as a whole. | 07-16-2015 |
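The selection rules read naturally as a cascade. The abstract lists the cases without fixing an order, so the priority below is one plausible reading, with precomputed 3-D centers passed in and a hypothetical distance threshold:

```python
import math

def pick_rotation_center(selection_center, cursor_data_center,
                         scene_center, threshold=1.0):
    """Choose a center of rotation: selection first, then the data portion
    under the cursor, then origin versus whole-scene center."""
    if selection_center is not None:         # user selected a data subset
        return selection_center
    if cursor_data_center is not None:       # cursor rests within data
        return cursor_data_center
    if math.dist(scene_center, (0, 0, 0)) <= threshold:
        return (0.0, 0.0, 0.0)               # data roughly centered on origin
    return scene_center                      # otherwise rotate about the data

print(pick_rotation_center(None, None, (5.0, 0.0, 0.0)))  # (5.0, 0.0, 0.0)
```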
20150160702 | Hot Swappable Computer Cooling System - A computer system has a liquid cooling system with a main portion, a cold plate, and a closed fluid line extending between the main portion and the cold plate. The cold plate has an internal liquid chamber fluidly connected to the closed fluid line. The computer system also has a hot swappable computing module that is removably connectable with the cold plate. The cold plate and computing module are configured to maintain the closed fluid line between the main portion and the cold plate when the computing module is being connected to or removed from the cold plate. | 06-11-2015 |
20150046862 | MODIFYING BINNING OPERATIONS - A data visualization technique is provided with the capability of manipulating bins of data through an interactive graphical presentation of displayed data. When a histogram is generated from stored data, a user may interact directly with the histogram columns to change a column's position, width, and height. A user, for example, may click and drag a particular side of a bin to change the lower or upper limit of the bin, click and drag the top of a bin to change the size/height of the bin (i.e., the number of data points/elements within the bin), or click and drag the center of the bin to move or reposition the bin. The techniques may be applied to other graphical representations of data as well, such as splat graphical displays of data. | 02-12-2015 |
20140331239 | DEPLOYING BIG DATA SOFTWARE IN A MULTI-INSTANCE NODE - A system for deploying big data software in a multi-instance node. The optimal CPU memory and core configuration for a single-instance database is determined. After determining an optimal core-memory ratio for a single-instance execution, the software is deployed in multi-instance mode on a single machine by applying the optimal core-memory ratio for each of the instances. The multi-instance database may then be deployed and data may be loaded in parallel for the instances. | 11-06-2014 |
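The deployment arithmetic can be shown in a few lines, assuming the optimum was measured as a cores-to-gigabytes pair (the numbers below are hypothetical):

```python
def plan_instances(total_cores, total_mem_gb, opt_cores, opt_mem_gb):
    """Apply the measured single-instance optimum (opt_cores per
    opt_mem_gb) across one machine; the instance count is bounded by
    whichever resource runs out first."""
    n = min(total_cores // opt_cores, total_mem_gb // opt_mem_gb)
    return {"instances": n, "cores_each": opt_cores, "mem_gb_each": opt_mem_gb}

print(plan_instances(total_cores=256, total_mem_gb=2048,
                     opt_cores=16, opt_mem_gb=128))
# {'instances': 16, 'cores_each': 16, 'mem_gb_each': 128}
```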
20140331014 | Scalable Matrix Multiplication in a Shared Memory System - High performance computing systems perform complex or data-intensive calculations using a large number of computing nodes and a shared memory. Disclosed methods and systems provide nodes having a special-purpose coprocessor to perform these calculations, along with a general-purpose processor to direct the calculations. Computational data transfer from the shared memory to the coprocessor incurs a data copying latency. To reduce this latency as experienced by the coprocessor, a complex computation is divided into work units, and one or more threads executing on the processor copy the work units from the shared memory to a local buffer memory of a computing node. By buffering these data for transfer from the local memory to coprocessor memory, and by ensuring that new data are copied while the coprocessor operates on older data, data copying latency is hidden from the coprocessor. | 11-06-2014 |
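The latency-hiding scheme is a classic producer/consumer overlap. A sketch using a bounded queue as the node-local buffer, with byte strings standing in for work units and a length computation standing in for the coprocessor kernel (all hypothetical):

```python
import queue
import threading

def copier(work_units, buf):
    """Processor thread: copy work units from 'shared memory' into the
    local buffer while the coprocessor consumes earlier units."""
    for unit in work_units:
        buf.put(bytes(unit))          # stands in for shared-to-local copy
    buf.put(None)                     # sentinel: no more work

def coprocessor(buf, results):
    """Consume buffered units; new data is copied while this runs."""
    while (unit := buf.get()) is not None:
        results.append(len(unit))     # stands in for the real computation

buf, results = queue.Queue(maxsize=2), []   # small buffer: overlap, not hoard
t = threading.Thread(target=copier, args=([b"a" * 10, b"b" * 20], buf))
t.start()
coprocessor(buf, results)
t.join()
print(results)   # [10, 20]
```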
20140330867 | SOFTWARE DESIGN PATTERN FOR ADAPTING A GRAPH DATABASE VISUALIZATION SOFTWARE - An adapter retrieves graph data from one or more graph databases and adapts the data to be shown through a visualization tool. The adapter may be used to convert multiple formats of graph data into a format which is readable and useable by the visualization tool. The adapter module may make a connection with a graph database and query the database for particular graph data. Once retrieved, the stream of graph data may be used to populate a template in Java form. From the template, the visualization tool may provide a visualization of the retrieved data. | 11-06-2014 |
20140330851 | PLATFORM AND SOFTWARE FRAMEWORK FOR DATA INTENSIVE APPLICATIONS IN THE CLOUD - A system deploys visualization tools, business analytics software, and big data software in a multi-instance mode on a large, coherent shared memory many-core computing system. The single machine solution provides for high performance and scalability and may be implemented remotely as a large capacity server (i.e., in the cloud) or locally to a user. Most big data software running in a single instance mode has limitations in scalability when running on a many-core and large coherent shared memory system. A configuration and deployment technique using a multi-instance approach, which also includes visualization tools and business analytics software, maximizes system performance and resource utilization, reduces latency, and provides scalability as needed for end-user applications in the cloud. | 11-06-2014 |
20140282584 | Allocating Accelerators to Threads in a High Performance Computing System - A method of distributing threads among accelerators in a high performance computing system receives a request to assign an accelerator in the computing system to a thread. The request includes a mode indicative of location and exclusivity of the accelerator for use by the thread. The method selects the accelerator according to a processor assigned to the thread. The method also assigns the accelerator to the thread with the exclusivity specified in the request. | 09-18-2014 |
20140282478 | TCP SERVER BOOTLOADER - A bootloader uses a TCP server to install and verify upgrades on a networked computing device such as a storage enclosure. A data management server client may connect to a bootloader on the storage enclosure using TCP. Once the connection is established, an upgrade image (upgrade data) can be provided directly to the bootloader and installed by the bootloader at the storage enclosure. The TCP server allows for the upgrade to be installed with minimal steps and a simple interface. | 09-18-2014 |
20140281656 | Global Synchronous Clock - Processor clock signals are generated for each processor in an HPC system, such that all the processor clock signals are of the same frequency. Furthermore, as part of a startup (boot) procedure, a process sets all time stamp counters (TSCs) of the processors such that they indicate identical times. Each blade of the HPC system recovers a recovered clock signal from a synchronous communication network to which the blade is coupled. The blade generates a processor clock from the recovered clock signal and provides the processor clock to the processor(s) on the blade. Each chassis is coupled to a second, system-wide, synchronous communication network, and each chassis synchronizes its chassis synchronous communication network with the system-wide synchronous communication network. Thus, all the processor clock signals are generated with the same frequency. | 09-18-2014 |
20140281640 | INTELLIGENT FRONT PANEL - The front panel includes intelligence for controlling power, reset and power down functions for a storage enclosure having multiple servers, service processors, and enclosure management devices. The front panel may display information pertaining to system power state, disk activity, Ethernet activity, and other information. The front panel may implement sequencing rules for changes in power state. The front panel provides information for multiple servers and other devices through a single panel. | 09-18-2014 |
20140281606 | DATA STORAGE POWER CONSUMPTION THRESHOLD - A power consumption threshold is implemented to manage power consumed by a plurality of devices. A power consumption threshold may be selected for a data storage system having multiple drives. Policies may control operation of storage devices such as hard disk drives to ensure the power consumption threshold is not exceeded. The policies may implement procedures for scheduling hard disk drive operations based on disk drive power characteristics, scheduling maintenance tasks, managing device power states, and strategically scheduling device operations based on their current state. The policies may be implemented by a data manager application in communication with multiple tiers of a data storage system. | 09-18-2014 |
20140281355 | VIRTUAL STORAGE POOL - Virtual storage pool creation is simplified by allowing a user to specify what devices to include in virtual storage pool by physical location. The virtual storage pool may be automatically generated based on the simplified user specifications. The user may specify the virtual pool configuration in a configuration file. A configuration application generates the virtual storage pool based on the configuration file. The configuration application utilizes the physical locations of block devices contained in the configuration file to generate the pool. As a result, virtual pool configuration and creation is automated, more efficient and is less error prone than previous methods that involve manually linking between physical device locations and computer generated names. | 09-18-2014 |
20140281334 | Address Resource Mapping In A Shared Memory Computer System - An algorithm for mapping memory and a method for using a high performance computing (“HPC”) system are disclosed. The algorithm takes into account the number of physical nodes in the HPC system, and the amount of memory in each node. Some of the nodes in the HPC system also include input/output (“I/O”) devices like graphics cards and non-volatile storage interfaces that have on-board memory; the algorithm also accounts for the number of such nodes and the amount of I/O memory they each contain. The algorithm maximizes certain parameters in priority order, including the number of mapped nodes, the number of mapped I/O nodes, the amount of mapped I/O memory, and the total amount of mapped memory. | 09-18-2014 |
20140281322 | Temporal Hierarchical Tiered Data Storage - Embodiments of the invention include identifying the priority of data sets based on how frequently they are accessed by data center compute resources or by other measures, assigning latency metrics to the data storage resources accessible by the data center, moving data sets with the highest priority metrics to the data storage resources with the fastest latency metrics, and moving data sets with lower priority metrics to slower data storage resources. The invention also may be compatible with or enable new forms of related applications and methods for managing the data center. | 09-18-2014 |
20140281301 | ELASTIC HIERARCHICAL DATA STORAGE BACKEND - A multi-tiered data management system utilizes vertical storage tiers, each with one or more horizontal data storage elements, to provide a dynamic and configurable system for managing the storing, archiving, and retrieval of data. The system provides the ability to automatically copy data in parallel to multiple types of storage systems horizontally within a tier and vertically between tiers, transparently from the host system or user perspective. Users may decide how many backend systems will be utilized and managed, and provide information to define rules or policies for the movement of data into, among, and from the backend systems and tiers of storage devices. Data is managed by these set policies, which determine how long the data will stay in each medium, when it is migrated between mediums, and how it is otherwise managed. When a user retrieves data, the present system determines which data storage source would best suit the user's request. | 09-18-2014 |
20140281300 | Opportunistic Tier in Hierarchical Storage - A system reduces the impact of constrained bandwidth to long-term data storage without adding new data storage resources to the data center, typically by temporarily storing data on data storage devices that are contained within a desktop computer, a notebook computer, or other computing device. The invention stores lower priority data sets temporarily on data storage devices that are already purchased or expensed until lower priority data sets can be migrated to long-term data storage. The invention relieves the performance impact of congestion caused by slow communication interfaces, recording channels, and mechanical systems that move tape cartridges around. The invention may also be configured with security functions that restrict where or how certain data sets are stored temporarily. | 09-18-2014 |
20140281266 | Maintaining Coherence When Removing Nodes From a Directory-Based Shared Memory System - A high performance computing system and methods are disclosed. The system includes logical partitions with physically removable nodes that each have at least one processor, and memory that can be shared with other nodes. Node hardware may be removed or allocated to another partition without a reboot or power cycle. Memory sharing is tracked using a memory directory. Cache coherence operations on the memory directory include a test to determine whether a given remote node has been removed. If the remote node is not present, system hardware simulates a valid response from the missing node. | 09-18-2014 |
20140281219 | Storage Zoning Tool - A system which semi-automates the assignment of data storage device controllers to data storage devices in a system that contains a plurality of data storage device controllers and a plurality of data storage devices. The object of the invention is to programmatically control which data storage device controllers control which specific data storage devices. The invention eliminates the need for an engineer to travel to a data center to manually reconfigure cables or interconnections between data storage device controllers and data storage devices. | 09-18-2014 |
20140281214 | TOTAL QUOTAS FOR DATA STORAGE SYSTEM - Quotas are tracked for user usage of hard disk drive space and offline backup storage space. The quota is enforced against the total space utilized by a user, not just high tier hard drive space usage. When data is migrated from hard disk drive space to backup storage space, data metadata is updated to reflect data kept offline for the user. As such, when users request to store new data, the data usage of hard disk space and backup storage space is determined from the metadata that reflects both data types, and the user's total storage space is used to grant or reject the request to store more data in the system. | 09-18-2014 |
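A sketch of the admission check, assuming per-user metadata already carries both on-disk and migrated (offline) byte counts as the abstract describes; the field names are hypothetical:

```python
def grant_store(user_meta: dict, new_bytes: int, quota_bytes: int) -> bool:
    """Charge disk and offline usage against one total quota, both read
    from the user's metadata, before admitting new data."""
    used = user_meta["disk_bytes"] + user_meta["offline_bytes"]
    return used + new_bytes <= quota_bytes

meta = {"disk_bytes": 40 * 2**30, "offline_bytes": 70 * 2**30}
print(grant_store(meta, new_bytes=5 * 2**30, quota_bytes=100 * 2**30))
# False: 110 GiB already used across both tiers exceeds the 100 GiB quota
```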
20140281211 | FAST MOUNT CACHE - A fast mount cache is provided by any offline storage media for fast volume mount access. The fast mount cache may be used as the first level in a hierarchical storage configuration after the high performance tier for data having high access rates shortly after creation but decreases sharply as the data ages. The fast mount cache stores migrated data from online hard disk drive storage and maintains the data on a volume basis as opposed to a file basis. As the fast mount cache capacity fills, or other events occur triggering a volume change, the fast mount cache erases the volume having the oldest data. While data is maintained on the fast mount cache for periods of time soon after it is migrated, the data may be accessed quickly. After the initial period of time has expired, the data only exists on tape storage or low tier data. | 09-18-2014 |
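The volume-granular eviction policy can be sketched with an ordered map, using insertion order as volume age; capacity pressure is the only eviction trigger modeled here:

```python
from collections import OrderedDict

class FastMountCache:
    """Admit migrated volumes until full, then erase whole volumes
    oldest-first, as the abstract describes for a filling cache."""
    def __init__(self, capacity):
        self.capacity, self.used = capacity, 0
        self.volumes = OrderedDict()            # insertion order = age

    def admit(self, name, size):
        while self.used + size > self.capacity and self.volumes:
            _, freed_size = self.volumes.popitem(last=False)  # oldest out
            self.used -= freed_size
        self.volumes[name] = size
        self.used += size

cache = FastMountCache(capacity=100)
for vol, size in [("v1", 60), ("v2", 50), ("v3", 30)]:
    cache.admit(vol, size)
print(list(cache.volumes))   # ['v2', 'v3'] -- v1 held the oldest data
```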
20140281208 | Associative Look-up Instruction for a Processor Instruction Set Architecture - An associative look-up instruction for an instruction set architecture (ISA) of a processor and methods for use of an associative look-up instruction. The associative look-up instruction of the ISA specifies one or more fields within a data unit that are used as a pattern of bits for identifying data content in a memory structure to be loaded into hardware registers or other storage components of the ISA. Specified parameters of the associative operation may be explicit within the instruction or indirectly pointed to via hardware registers or other storage components of the ISA. The memory structure may be content addressable memory (CAM). | 09-18-2014 |
20140281181 | Enhanced Performance Monitoring Method and Apparatus - A high-performance computing system includes a statistics accumulation apparatus configured to efficiently accumulate system performance data from a variety of system components and periodically write such data to processor-local memory for efficient subsequent software processing, thereby reducing the system hardware and software overhead needed for collection of such data as compared to prior art systems. | 09-18-2014 |
20140281046 | BLOCK DEVICE MANAGEMENT - Embodiments of the present invention perform a method for reading data from, writing data to, powering on, or configuring a block device without the kernel translating a file system operation into a block device operation. This is implemented by using a core module to couple applications running in user space to a character device through a character device driver; the core module configures the character device to communicate with a block device through a block device driver without the kernel translating a file system command into a block device command. | 09-18-2014 |
20140281036 | Synchronizing Scheduler Interrupts Across Multiple Computing Nodes - A method, system and program code for synchronizing scheduler interrupts across multiple nodes of a cluster. Network timers and local scheduling timers are clocked off a system source clock. A processor in each computing node repeatedly reads a network time of day counter. The start of scheduler interrupts is synchronized when the time of day counter is at an integer multiple of a synchronizing integer number of network timer ticks. The processor sends an interprocessor scheduler interrupt to other processors in the node to synchronize scheduling timers in the computing node and throughout the cluster. | 09-18-2014 |
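The synchronization condition itself is simple modular arithmetic on the shared time-of-day counter; the interval below is a hypothetical tuning value:

```python
def at_sync_boundary(tod_counter: int, sync_interval: int) -> bool:
    """Scheduler interrupts start only when the network time-of-day
    counter is an integer multiple of the synchronizing interval, so
    every node arms its scheduling timers in the same tick."""
    return tod_counter % sync_interval == 0

# each node polls the shared counter and arms its timers on the boundary
for tod in range(997, 1003):
    if at_sync_boundary(tod, sync_interval=1000):
        print(f"arm local scheduling timers at tick {tod}")   # tick 1000
```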
20140280957 | Dynamic Assembly and Dispatch of Controlling Software - Embodiments of the invention include software that provides an operator or a system service the ability to access, control, or configure a plurality of different data center resources using common sets of functions or commands even though those data center resources natively require different commands to access, control, or configure them. The invention is configured to accept common commands and then translate them from a common command format into device specific commands or command sets. The invention simplifies how data center equipment is controlled and configured. | 09-18-2014 |
20140280663 | Apparatus and Methods for Providing Performance Data of Nodes in a High Performance Computing System - In accordance with one embodiment of the invention, a method of providing performance data for nodes in a high performance computing system receives a request for performance data for a node in the high performance computing system. According to the method, a driver in kernel space causes the performance data for the node to be stored in kernel memory. The kernel memory is accessible in userspace via a first system file. | 09-18-2014 |
20140279926 | ACTIVE ARCHIVE BRIDGE - A primary data storage system is connected with a separate and external active archive storage system to consolidate data and allow active archive data to be managed based on primary storage system events. The primary data storage system may be managed and maintained by an external entity, and may include a manager module such as a resource manager. The active archive system may include several tiers of storage in a hierarchical storage system and logic for moving data between and among the tiers. As data processing milestones are completed or the state of data changes in projects stored in the primary data storage system, task milestone or state change events are detected. Event detection can trigger data movement in the active archive solution. One or more software modules implementing the present invention may detect the events and trigger active archive operations based on the events. | 09-18-2014 |
20140279919 | HIERARCHICAL SYSTEM MANAGER ROLLBACK - Data state rollover is performed based on data state snapshots and deltas. A series of snapshots is taken of the current data state, an original data state, and data states in between. Deltas are then generated between two sequential snapshots. This results in numerous deltas which represent the difference between consecutive snapshots. Once the deltas are acquired, the deltas may be stored along with the snapshot of the present data state. As such, previous data states may be rolled back to by determining the number of deltas to apply to the current data state to achieve the desired previous data state. In cases where the rollback or rollover fails, deltas may be played against the current data state to a point where the last known trusted and working data point existed. | 09-18-2014 |
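A toy version of the snapshot/delta mechanics over dictionary states, where a delta records each key's older value (None marks a key that did not exist in the older state); names and representation are illustrative only:

```python
def delta(older: dict, newer: dict) -> dict:
    """Record, for every key that changed, its value in the older state."""
    return {k: older.get(k) for k in set(older) | set(newer)
            if older.get(k) != newer.get(k)}

def roll_back(state: dict, deltas: list) -> dict:
    """Apply deltas newest-first to walk the current state backwards."""
    state = dict(state)
    for d in deltas:
        for key, old_value in d.items():
            if old_value is None:
                state.pop(key, None)    # key did not exist back then
            else:
                state[key] = old_value
    return state

s0, s1, s2 = {"a": 1}, {"a": 2}, {"a": 2, "b": 3}
deltas = [delta(s1, s2), delta(s0, s1)]    # newest delta first
print(roll_back(s2, deltas))               # {'a': 1}, the original state
```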
20140273601 | MICRO ETHERNET CONNECTOR - In an embodiment, a micro Ethernet connector includes an outer housing that has a recessed front end and a back end. The micro Ethernet connector further includes an inner housing that is disposed within the recessed front end of the outer housing. The inner housing has an exposed end. The exposed end includes a recessed channel. The volume of the recessed channel is substantially equal to the volume of a correspondingly shaped protruding printed circuit board of a male micro Ethernet connector. A plurality of spring-biased connectors are disposed within the recessed channel of the inner housing. | 09-18-2014 |
20140269342 | Scalable Infiniband Interconnect Performance and Diagnostic Tool - In accordance with some implementations, a method for evaluating large scale computer systems based on performance is disclosed. A large scale, distributed memory computer system receives topology data, wherein the topology data describes the connections between the plurality of switches and lists the nodes associated with each switch. Based on the received topology data, the system performs a data transfer test for each pair of switches. The test includes transferring data between a plurality of nodes and determining a respective overall test result value reflecting overall performance of a respective pair of switches for a plurality of component tests. The system determines that the pair of switches meets minimum performance standards by comparing the overall test result value against an acceptable test value. If the overall test result value does not meet the minimum performance standards, the system reports the respective pair of switches as underperforming. | 09-18-2014 |
20140269324 | Bandwidth On-Demand Adaptive Routing - An adaptive router anticipates possible future congestion and enables selection of an alternative route before the congestion occurs, thereby avoiding the congestion. The adaptive router may use a primary route until it predicts congestion will occur. The adaptive router measures packet traffic volume, such as flit volume, on a primary network interface to anticipate the congestion. The adaptive router maintains a trailing sum of the number of flits handled by the primary network interface over a trailing time period. If the sum exceeds a threshold value, the adaptive router assumes the route will become congested, and the adaptive router enables considering routing future packets, or at least the current packet, over possible secondary routes. | 09-18-2014 |
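The trailing-sum test is the heart of the mechanism. A sketch with a fixed-size window of per-period flit counts (window length and threshold are hypothetical tuning values):

```python
from collections import deque

class AdaptiveRouter:
    """Track a trailing sum of flits on the primary interface; once it
    exceeds the threshold, allow secondary routes to be considered."""
    def __init__(self, window_len, threshold):
        self.counts = deque(maxlen=window_len)   # trailing time period
        self.threshold = threshold

    def route(self, flits_this_period: int) -> str:
        self.counts.append(flits_this_period)
        if sum(self.counts) > self.threshold:
            return "consider-secondary"          # congestion anticipated
        return "primary"

router = AdaptiveRouter(window_len=4, threshold=100)
for load in [10, 20, 30, 50]:
    print(router.route(load))   # primary x3, then consider-secondary
```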
20140268552 | SERVER WITH HEAT BAFFLE COOLING - A server provides for improved cooling using one or more baffles. The baffles allow for increased cooling efficiencies by directing heat in such a manner as to reduce heat exposure for temperature sensitive hardware and data center employees. The baffle may be disposed within a server and direct hot air through the server away from temperature sensitive devices. The baffle may include an inlet that receives hot air and an outlet through which hot air may exit. One or more fans may be used to direct air through the baffle. For example, the baffle may direct heat from the baffle inlet to the baffle outlet, directing heat away from temperature sensitive devices within the server. | 09-18-2014 |
20140268551 | ENCLOSURE HIGH PRESSURE PUSH-PULL AIRFLOW - High pressure fans are mounted in the middle of an enclosure to create a low pressure zone and a high pressure zone within the enclosure. The high pressure fans pull air through high density sets of hard disk drives in the back of an enclosure and push air through high density disk drives in the front of the enclosure. Being positioned in the middle of an enclosure allows the high pressure fans to mix hot air pulled through the low pressure zone with cool air existing on the other side of the fans. The fans then push the cool mixed air through the next set of hard drives, forming a high pressure zone and allowing the air to exit at the front of the enclosure. | 09-18-2014 |
20140268550 | SERVER WITH HEAT PIPE COOLING - A server includes a tray that has a front portion and a back portion. A motherboard is disposed in the front portion of the tray and the motherboard is coupled to a heat sink. A fan is disposed in the back portion of the tray. A hard drive is disposed between the motherboard and the fan and the hard drive is operatively connected to the motherboard. The server also includes a heat pipe that has a body longitudinally bounded by an inlet and an outlet. The inlet is coupled to the heat sink, while the outlet is coupled to the fan. The body of the heat pipe extends past the hard drive. A power supply is also disposed in the tray and is operatively connected to the motherboard, the fan, and the hard drive. | 09-18-2014 |
20140268539 | TOOLLESS HOT SWAPPABLE STORAGE MODULE - A toolless hot-swappable storage module system includes a base plate for mounting within a computer enclosure and a toolless hot-swappable storage module. The storage module includes a sled that is removably coupled to the base plate. The storage module further includes a printed circuit board (PCB) that is disposed on the sled. The PCB includes a plurality of storage media connectors and PCB signal and power connectors. The storage module also includes a support frame disposed on the PCB. The support frame includes a plurality of support members that are disposed perpendicular to the PCB. Each support member has a first edge and a second edge and includes a plurality of dividers disposed in parallel rows. The support frame also includes a sidewall that is disposed across the first edge of the support members. | 09-18-2014 |
20140268393 | LOGICAL BLOCK PROTECTION FOR TAPE INTERCHANGE - A two-part process is used for modifying records to be written to and retrieved from tape devices. A record is appended with a cyclic redundancy check and a string of zeros. Submitting the entire record to tape drives which are logical block protection (LBP) enabled will result in no change. For drives that are not LBP enabled, the string of zeros at the end of the record is removed. In addition to determining whether a drive is LBP compliant, a determination may be made as to whether a drive is a linear tape open drive from a particular manufacturer. Linear tape open drives may behave similarly to drives that are not enabled with logical block protection. | 09-18-2014 |
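A sketch of the two-part record handling, with CRC-32 and a four-byte zero pad chosen purely for illustration (the patent's CRC and pad length may differ):

```python
import zlib

def wrap_record(record: bytes, pad_len: int = 4) -> bytes:
    """Append a cyclic redundancy check and a string of zero bytes."""
    crc = zlib.crc32(record).to_bytes(4, "big")
    return record + crc + b"\x00" * pad_len

def record_for_drive(wrapped: bytes, lbp_enabled: bool,
                     pad_len: int = 4) -> bytes:
    """LBP-enabled drives take the record unchanged; for other drives
    the trailing zeros are removed before the write."""
    return wrapped if lbp_enabled else wrapped[:-pad_len]

wrapped = wrap_record(b"block data")
assert record_for_drive(wrapped, lbp_enabled=True) == wrapped
assert record_for_drive(wrapped, lbp_enabled=False) == wrapped[:-4]
```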
20140265793 | BIDIRECTIONAL SLIDE RAIL - Computing components housed in a chassis interact with a bidirectional slide rail system. The bidirectional slide rail system allows the computing components to be maneuvered in two directions: they may be pulled about halfway out of the front of the cabinet and/or about halfway out of the back of the cabinet. This allows for easier access to components of the storage server and allows such components to be serviced from the top of the chassis. | 09-18-2014 |
20140265314 | QUICK CONNECT COUPLER RAPID DISENGAGEMENT EXTENSION MECHANISM - A coupler engagement mechanism includes a female member that detachably couples to a corresponding male member. The female member may include a central axis opening and a telescopically slidable outer sleeve. The outer sleeve may be spring-biased in a forward direction towards the central axis opening. A pull member may be coupled to the outer sleeve and may extend from the outer sleeve in a direction other than parallel to the radius of the outer sleeve. | 09-18-2014 |
20140258679 | Reconfigurable Protocol Tables Within An ASIC - A high performance computing system is provided with an ASIC that communicates with another device in the system according to a protocol defined by the other device. The ASIC is coupled to a reconfigurable protocol table, in the form of a high speed content-addressable memory (“CAM”). The CAM includes instructions to control the execution of the protocol by the ASIC. The CAM may include instructions to control the ASIC in the event that unanticipated signals or other errors are encountered while executing the protocol. Internal ASIC state data may be routed to the CAM to permit the ASIC to generate a reasonable response to errors either in the design or fabrication of the ASIC or the device with which it is communicating. | 09-11-2014 |
20140250252 | First-in First-Out (FIFO) Modular Memory Structure - A modular first-in first-out circuit including at least three non-addressable memory blocks forming a data pipeline is disclosed. At least two of the memory blocks include a data storage structure for receiving as input data from a global data bus and a control logic structure including logic for determining whether data should be added to the data storage structure from the global data bus and whether any data within the data storage structure should be transferred to the output of the memory block. The data storage structure of the at least two memory blocks includes a first data input for selectively receiving data from the global data bus and a second data input for selectively receiving data from a previous memory block in the modular first-in first-out circuit. | 09-04-2014 |
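A behavioral sketch of the pipeline: each block is a register plus control logic that picks its input from the global bus or from the previous stage. The cycle modeling is simplified, with all control decisions made by the driving loop:

```python
class Stage:
    """One non-addressable memory block: a data register and a mux that
    selects the global bus or the previous stage as input each cycle."""
    def __init__(self):
        self.data = None

    def clock(self, from_bus, from_prev, select_bus: bool):
        self.data = from_bus if select_bus else from_prev

stages = [Stage(), Stage(), Stage()]      # at least three blocks
for word in ["w0", "w1", "w2"]:
    # clock tail-first so each stage latches its predecessor's old value
    for i in reversed(range(1, len(stages))):
        stages[i].clock(word, stages[i - 1].data, select_bus=False)
    stages[0].clock(word, None, select_bus=True)   # head takes the bus

print([s.data for s in stages])   # ['w2', 'w1', 'w0']: first in sits deepest
```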
20140245079 | System and Method for Error Logging - Error data is read from error registers and written into a buffer. A computing node uses a BIOS to read the error data, rearm the error register and write the data into a memory mapped buffer. A hub chip supports creation of a shared memory system of computing nodes. A management controller in the computing node extracts error data from the buffer. The error data preferably consists essentially of the error register identifiers and the contents of the error registers. A system management node receives the error data from the management controllers in the computing nodes. The system management node may be coupled to but separate from the computing nodes. | 08-28-2014 |
20140188955 | CLUSTERED FILESYSTEM WITH MEMBERSHIP VERSION SUPPORT - A computer system with read/write access to storage devices creates a snapshot of a data volume at a point in time while continuing to accept access requests to the mirrored data volume by copying before making changes to the base data volume. Multiple snapshots may be made of the same data volume at different points in time. Only data that is not stored in a previous snapshot volume or in the base data volume are stored in the most recent snapshot volume. | 07-03-2014 |
20140126143 | Independent Removable Computer Rack Power Distribution System for High-Density Clustered Computer System - A high performance computing system includes one or more blade enclosures configured to hold a plurality of computing blades, a connection interface, coupled to the one or more blade enclosures, having one or more connectors and a shared power bus that distributes power to the one or more blade enclosures, and at least one power shelf removably coupled to the one or more connectors and configured to hold one or more power supplies. The system may further include the computing blades and the power supplies. The power shelf may include a power distribution board configured to connect the power supplies together on the shared power bus. | 05-08-2014 |
20140126141 | On-Blade Cold Sink For High-Density Clustered Computer System - A high performance computing system includes one or more blade enclosures having a cooling manifold and configured to hold a plurality of computing blades, and a plurality of computing blades in each blade enclosure with at least one computing blade including two computing boards. The system further includes two or more cooling plates with each cooling plate between two corresponding computing boards within the computing blade, and a fluid connection coupled to the cooling plate(s) and in fluid communication with the fluid cooling manifold. | 05-08-2014 |
20140108736 | SYSTEM AND METHOD FOR REMOVING DATA FROM PROCESSOR CACHES IN A DISTRIBUTED MULTI-PROCESSOR COMPUTER SYSTEM - A processor ( | 04-17-2014 |
20140108458 | NETWORK FILESYSTEM ASYNCHRONOUS I/O SCHEDULING - Resource acquisition requests for a filesystem are executed under user configurable metering. Initially, a system administrator sets a ratio of N:M for executing N read requests for M write requests. As resource acquisition requests are received by a filesystem server, the resource acquisition requests are sorted into queues, e.g., where read and write requests have at least one queue for each type, plus a separate queue for metadata requests as they are executed ahead of any waiting read or write request. The filesystem server controls execution of the filesystem resource acquisition requests to maintain the ratio set by the system administrator. | 04-17-2014 |
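A sketch of the metering loop, assuming simple FIFO queues per request type and a hypothetical 3:1 read/write ratio; metadata requests drain first, as the abstract states:

```python
from collections import deque

def next_batch(meta_q, read_q, write_q, n_reads=3, m_writes=1):
    """Order pending requests: metadata first, then N reads per M writes."""
    order = []
    while meta_q:
        order.append(meta_q.popleft())            # metadata jumps the line
    while read_q or write_q:
        for _ in range(n_reads):
            if read_q:
                order.append(read_q.popleft())
        for _ in range(m_writes):
            if write_q:
                order.append(write_q.popleft())
    return order

print(next_batch(deque(["meta1"]),
                 deque(["r1", "r2", "r3", "r4"]),
                 deque(["w1", "w2"])))
# ['meta1', 'r1', 'r2', 'r3', 'w1', 'r4', 'w2']
```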
20140068627 | DYNAMIC RESOURCE SCHEDULING - Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically, in accordance with a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include but are not limited to those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance. | 03-06-2014 |
20140068201 | TRANSACTIONAL MEMORY PROXY - Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node. The transactional memory agent acts as a proxy for those processors. A first benefit of the invention includes decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond the transactional capabilities found in computer systems today. | 03-06-2014 |
20140032958 | CLUSTERED FILESYSTEMS FOR MIX OF TRUSTED AND UNTRUSTED NODES - A cluster of computer system nodes share direct read/write access to storage devices via a storage area network using a cluster filesystem. At least one trusted metadata server assigns a mandatory access control label as an extended attribute of each filesystem object regardless of whether required by a client node accessing the filesystem object. The mandatory access control label indicates the sensitivity and integrity of the filesystem object and is used by the trusted metadata server(s) to control access to the filesystem object by all client nodes. | 01-30-2014 |
20140032766 | REAL-TIME STORAGE AREA NETWORK - A cluster of computing systems is provided with guaranteed real-time access to data storage in a storage area network. Processes issue request for bandwidth reservation which are initially handled by a daemon on the same node as the requesting processes. The local daemon determines whether bandwidth is available and, if so, reserves the bandwidth in common hardware on the local node, then forwards requests for shared resources to a master daemon for the cluster. The master daemon makes similar determinations and reservations for resources shared by the cluster, including data storage elements in the storage area network and grants admission to the requests that don't exceed total available bandwidth. | 01-30-2014 |
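The two-level admission control can be sketched as chained daemons, each tracking remaining bandwidth; the units and totals below are hypothetical:

```python
class BandwidthDaemon:
    """Admit a reservation only if it fits the remaining bandwidth; for
    shared resources, also ask the cluster's master daemon."""
    def __init__(self, total_mb_s, master=None):
        self.free, self.master = total_mb_s, master

    def reserve(self, mb_s: int) -> bool:
        if mb_s > self.free:
            return False                   # would exceed available bandwidth
        if self.master is not None and not self.master.reserve(mb_s):
            return False                   # shared SAN resources refused
        self.free -= mb_s                  # record the local reservation
        return True

master = BandwidthDaemon(total_mb_s=500)
local = BandwidthDaemon(total_mb_s=200, master=master)
print(local.reserve(150), local.reserve(100))   # True False
```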
20140028691 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR REMOTE GRAPHICS PROCESSING - A system, method, and computer program product are provided for remote rendering of computer graphics. The system includes a graphics application program resident at a remote server. The graphics application is invoked by a user or process located at a client. The invoked graphics application proceeds to issue graphics instructions. The graphics instructions are received by a remote rendering control system. Given that the client and server differ with respect to graphics context and image processing capability, the remote rendering control system modifies the graphics instructions in order to accommodate these differences. The modified graphics instructions are sent to graphics rendering resources, which produce one or more rendered images. Data representing the rendered images is written to one or more frame buffers. The remote rendering control system then reads this image data from the frame buffers. The image data is transmitted to the client for display or processing. In an embodiment of the system, the image data is compressed before being transmitted to the client. In such an embodiment, the steps of rendering, compression, and transmission can be performed asynchronously in a pipelined manner. | 01-30-2014 |
20130346371 | CLUSTERED FILESYSTEM WITH DATA VOLUME SNAPSHOT - A computer system with read/write access to storage devices creates a snapshot of a data volume at a point in time while continuing to accept access requests to the mirrored data volume by copying before making changes to the base data volume. Multiple snapshots may be made of the same data volume at different points in time. Only data that is not stored in a previous snapshot volume or in the base data volume are stored in the most recent snapshot volume. | 12-26-2013 |
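A toy copy-on-write model of the rule that only data absent from every earlier snapshot (and about to change in the base volume) is copied into the most recent snapshot; the block-map representation is illustrative:

```python
class SnapshotVolume:
    """Copy-before-write: preserve a block's old contents in the newest
    snapshot only if no earlier snapshot already holds that block."""
    def __init__(self):
        self.base, self.snaps = {}, []          # block number -> data

    def snapshot(self):
        self.snaps.append({})                   # new snapshot starts empty

    def write(self, block, data):
        if self.snaps and all(block not in s for s in self.snaps):
            if block in self.base:
                self.snaps[-1][block] = self.base[block]   # copy old data
        self.base[block] = data                 # then accept the write

    def read_at(self, snap_idx, block):
        for snap in self.snaps[snap_idx:]:      # nearest preserved copy wins
            if block in snap:
                return snap[block]
        return self.base.get(block)             # unchanged since snapshot

vol = SnapshotVolume()
vol.write(0, "v0"); vol.snapshot(); vol.write(0, "v1")
print(vol.read_at(0, 0), vol.base[0])   # v0 v1
```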
20130212339 | Data Coherence Method and Apparatus for Multi-Node Computer System - A method for maintaining data coherency in a shared-memory computer system having a plurality of nodes divides the local memory of a given node into one or more blocks and stores a data record for each block indicating a plurality of node groups and a selection of the node groups. Each selected node group represents a number of nodes, and selected node groups represent at least one node that has requested access to the block. In response to receiving an access request from a requesting node that may or may not be in a selected node group, the method and system update the data record to indicate the correct selection. If the requesting node is not in any node group, the data record is adjusted to have new node groups, one of which represents the requesting node. | 08-15-2013 |
20110133620 | RACK MOUNTED COMPUTER SYSTEM - A rack mounted computer system. In one variation the computer rack is configured for side-by-side placement of computers. In another variation, the computer rack includes flanges for supporting the placement of computer units within the rack. In another variation the computer rack is configured with retaining clips. In yet another variation, the computer rack is configured to receive computers with chassis that are adapted for side-by-side placement. | 06-09-2011 |