TechVirtuoso

HP ProLiant G6 Q&A

October 14th, 2009 at 1:53 PM  4 Comments

Earlier this week I had the opportunity to join a discussion with Greg Huff, Chief Technologist for HP’s ProLiant server team, as a follow-up to the HP ProLiant Tech Day and Web Jam event that we attended back in March. While the discussion focused on some of the material we had gone over in March, a few points were raised that I wasn’t aware of, and that in my opinion HP should be emphasizing more in its marketing and advertising. Chief among them: the amount of HP’s intellectual property that makes its way into technologies most people probably aren’t aware of.

For example, we discussed some of the intellectual property (IP) that HP has had a hand in developing and has licensed to manufacturers for inclusion in their products. One case was a series of network adapters, made by a variety of hardware vendors, that include some HP IP in their design. These adapters are sold in systems from just about every vendor, and are fully functional network adapters with the same basic performance specs across the board. However, because HP participated in developing the technology, some functions are only available when the adapter is in an HP product. So take two different servers, one from HP and one from another vendor, each with the exact same NIC. The core functionality of the network interface is identical on both systems, but the HP system could have capabilities that don’t show up at all on the other vendor’s system, such as eliminating extraneous cabling by controlling data flow at the core level of the NIC itself. I asked Greg about other examples of these core hardware differences, and while some of the details are out there in individual white papers, there isn’t a list that points out the differences across the hardware spectrum.

As a follow-up to this discussion, HP has presented us with an opportunity to participate in a Q&A session with their ProLiant G6 folks, and we would like to get some participation from you, our readers. So if there’s anything that you’ve ever wanted to know about the HP ProLiant G6 line, or any suggestions or concerns that you feel should be addressed, please feel free to submit them here. HP will collect your submissions and they could make it into an upcoming interview and blog series that HP plans to kick off soon.

HP Superdome Tech Day: Superdome in an adaptive infrastructure demo

October 8th, 2009 at 7:56 PM  No Comments

Earlier this week we joined several other sites at HP’s Cupertino, California campus for HP’s Superdome Tech Day. One of the scheduled events focused on some of the configuration and management of an HP Superdome solution in an adaptive infrastructure. HP solutions architect Richard Warham took us through several scenarios including how to rapidly scale up an application server in the event of a sudden surge in transaction volumes, and how to maintain service availability in the event of a server failure.

[flv:http://www.iseetechpeople.com/old/SuperdomeDemo.flv 560 373]

HP Superdome Tech Day – 10 years in mission critical enterprise

October 5th, 2009 at 1:47 PM  1 Comment

The name “Superdome” alone invokes a sense of something enormous and powerful, and coming from HP, one can only envision a system at the top end of the power and capability scale. In fact, that’s just what the HP Superdome systems aim to be. For the last decade, HP has developed the Superdome platform to provide mission-critical solutions for datacenter environments where downtime can be not only costly but disastrous. HP Superdome provides the uptime demanded by services like emergency call centers, major financial centers, and online ordering systems, as well as mission-critical infrastructures for major corporations around the world.

Over the last decade, HP has developed the Superdome platform to provide mainframe-class performance and stability. According to a 2008 Dataquest Insight survey, the average cost per hour of downtime for mission-critical business systems within large organizations (2,500+ users) jumped from $40,000 in 2005 to $128,000 in 2008, a more than threefold increase. These same companies reported that the amount of downtime they experienced during the 2005-2008 time frame had also increased 69%. With statistics like that, it becomes painfully obvious that IT downtime directly affects the bottom line. Throughout the growth of the HP Superdome platform, features like redundant cell board components, double chip spare memory, and hot-swap I/O have been developed to provide resiliency and prevent downtime, all with the goal of providing near-perfect availability.

(more…)

Wi-Fi Alliance launches updated 802.11n certification program

October 1st, 2009 at 9:58 AM  No Comments

Following the ratification of the 802.11n standard, the Wi-Fi Alliance has kicked off the Wi-Fi Certified n program, an updated version of the Wi-Fi Certified 802.11n draft 2.0 program. The ‘new’ program keeps all the draft 2.0 requirements and adds testing for optional features like:

– Simultaneous transmission of up to three spatial streams
– Packet aggregation (A-MPDU), to make data transfers more efficient
– Space-time Block Coding (STBC), a multiple-antenna encoding technique to improve reliability in some environments
– Channel coexistence measures for “good neighbor” behavior when using 40 MHz operation in the 2.4 GHz band
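Of these, STBC has a particularly compact mathematical core. As an illustrative sketch (not the exact 802.11n formulation, and `alamouti_encode` is a hypothetical helper name), the classic 2x1 Alamouti code sends each pair of symbols over two antennas across two time slots, so a receiver with a single antenna can still recover the data if one spatial path fades:

```python
def alamouti_encode(symbols):
    """Encode complex symbols with the 2x1 Alamouti space-time block code.

    Returns one (antenna_1, antenna_2) tuple per time slot; each pair of
    input symbols occupies two slots, giving transmit diversity at rate 1.
    Illustrative sketch only, not the 802.11n wire format.
    """
    if len(symbols) % 2:
        raise ValueError("Alamouti encoding consumes symbols in pairs")
    slots = []
    for i in range(0, len(symbols), 2):
        s1, s2 = symbols[i], symbols[i + 1]
        slots.append((s1, s2))                           # slot 1: send s1, s2
        slots.append((-s2.conjugate(), s1.conjugate()))  # slot 2: send -s2*, s1*
    return slots
```

Because the slot-2 transmissions are orthogonal to slot 1’s, the receiver can separate the two symbols with simple linear combining, which is why STBC improves reliability without consuming extra bandwidth.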

“Wi-Fi Certified n builds on the success of our draft-n certification program and marks a point of maturity in 802.11n technology,” said Wi-Fi Alliance executive director Edgar Figueroa. “Our expanded testing and branding program helps ensure the best user experience in the context of the Wi-Fi industry’s continued innovation and the evolving landscape of products implementing next-generation Wi-Fi.”

More info about the Wi-Fi Certified programs can be found here.