Thursday, April 8, 2010

NSF Campus Bridging Workshop

I spent yesterday at the IUPUI Conference Center attending an NSF "Campus Bridging" workshop.  Your first response is likely the same as that of everyone I've spoken to: "What the heck is that?"

Well, this was the first one I attended, but I believe this is one track in a series of workshops to help the NSF decide how to structure its future Cyberinfrastructure (CI) funding programs.  The focus was on how to get campuses ramped up to support the data deluge generated by scientific instruments, from gene sequencers to the LHC.  Obviously networking is a big part of that equation, but certainly not the only part.  There was a lot of discussion about data storage and indexing, metadata, federated identity and so on.

Here are a couple of good presentations that I think hit the nail on the head in terms of how we should be building campus networks to handle big data science applications...

Network Architecture for High Performance (Joe Metzger - ESNET)
The Data Intensive Network (Guy Almes - TAMU)

Incidentally, IU started building our campus networks this way in about 2003-04 and I think this is one of the reasons we've been so successful with projects like the Data Capacitor.

Thursday, April 1, 2010

Visit to Ball State

I spent yesterday afternoon at Ball State University in Muncie, IN. For those non-Hoosiers out there, Ball State is named after the Ball family, as in the jars you can tomatoes in! I met with Steve Jones, who is the director of their CICS program. Hopefully I don't butcher the acronym, but IIRC it stands for Center for Information and Communications Sciences. It's a very cool program, and as someone who grew up right down the road from the university, I had no idea it existed. They have some very bright and motivated students, and hopefully some of them will eventually come join the team at the GlobalNOC!


-- Post From My iPhone

Wednesday, March 24, 2010

Announcing the GlobalNOC Summer of Networking Program

The GlobalNOC has a long history of hiring students to work on projects during the summer.  In fact, many of our software developers and system administrators started with us as students.  This summer we anticipate having about 8-10 students working in multiple areas of the GlobalNOC including Systems Engineering, Service Desk and Network Architecture.

With such a large group of students, we decided to pilot a program to provide additional training opportunities in a group forum.  Our plan is to hold group training/seminar sessions one afternoon a week.  The initial sessions will include presentations and training by GlobalNOC staff for the students.  Toward the end of the summer, the sessions will focus on the students presenting to each other, either their work from the summer or a networking topic they've been researching.  Since the students will be split between the IUPUI and IUB campuses, most of the group sessions will be conducted via high-definition video conferencing, but at least two of the sessions will be conducted face-to-face with all the students.

There will also be opportunities for students to shadow GlobalNOC staff in areas other than the area in which they are working.  So a student working on software development in the Systems Engineering group would have a chance to learn about the Service Desk, Network Engineering and Network Architecture groups by shadowing someone in each of those areas.

This is a pilot, so we may need to make adjustments during the summer, but I think this will be a great opportunity for students to get hands-on experience managing large-scale networks.

Monday, March 22, 2010

Openflow Trip

Nothing like back-to-back weeks of travel!  This week we're headed to Silicon Valley for a series of meetings related to Openflow, including stops at Stanford University and HP Labs.  Actually, last week's trip to GEC7 at Duke University was related to Openflow as well.  If you haven't checked out Openflow yet, I'd encourage you to do so (www.openflowswitch.org).  It's a standard API that allows external systems (think PC servers) to manipulate the forwarding ASICs in switches and routers.  IU was recently awarded an NSF grant through the GENI program to help get Openflow deployed on campuses.
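To make the match/action idea a little more concrete, here's a toy Python sketch of what a flow entry looks like and how controller software running on an ordinary server might push one into a switch's flow table. To be clear, this isn't any real controller's API; the class and field names are made up purely for illustration.

```python
# Toy illustration of the Openflow idea: a controller (software on a plain
# server) pushes match/action "flow entries" down to a switch, which programs
# them into its forwarding hardware.  Hypothetical names, illustration only.

from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    """One rule in a switch's flow table: if a packet matches, apply the actions."""
    match: dict             # e.g. {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
    actions: list           # e.g. ["output:4"]
    priority: int = 100
    idle_timeout: int = 60  # seconds of inactivity before the rule is removed

@dataclass
class Switch:
    """Stand-in for an Openflow switch's flow table."""
    dpid: str               # datapath ID identifying the switch
    flow_table: list = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        # In real Openflow this would be a flow-mod message sent over the
        # controller-to-switch channel rather than a local method call.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: e.priority, reverse=True)

# "Controller" logic running on a PC server:
sw = Switch(dpid="00:00:00:23:9c:0a:11:01")
sw.install(FlowEntry(
    match={"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},
    actions=["output:4"],
))
print(sw.flow_table)
```

The point of the abstraction is that the forwarding decision (the match and actions) is computed by external software, while the switch hardware just executes the rules it's given.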

Monday, March 15, 2010

Heading to GEC 7

I'm heading to GEC 7, the 7th GENI Engineering Conference, tomorrow with several colleagues from the IU GlobalNOC.  IU has received multiple GENI grants so far, including one for the Openflow Campus Trials, which I'm working on along with the PI, Chris Small.  Tomorrow night we'll be doing a demo of our current Openflow deployment, which includes six HP switches running Openflow-capable code along with the NOX and SNAC Openflow controllers.  You can check out our project page on the GENI Wiki for more information.

Tuesday, January 5, 2010

Back in the Saddle Again!

Happy New Year!  Hopefully everyone enjoyed the holidays.  I hardly looked at email for two full weeks, which was very nice!

2010 promises to be as busy and eventful as 2009, if not more so!  We are in the midst of two separate beta-testing programs right now, along with an RFP.  I'm actively working on two grant proposals and a major project to provide a more seamless networking experience across the Clarian (hospital) and IU facilities, and I'm trying to finish up a Legacy RSA with ARIN.  In all, my group has about 20 active projects on our plate right now!

Thursday, December 10, 2009

The Lab Experiment

I've mentioned our new test lab in a couple of tweets, so I thought I'd post some more information about what we're doing. The MDF in our new data center is quite spacious and well equipped. It includes 45 heavy-duty two-post Panduit racks, overhead infrastructure for power cables, low-voltage copper cables (i.e., Cat5/6) and fiber, a 36-inch raised floor and 1,800 amps of DC power. The production equipment is being built out from the front of the room toward the back, so we reserved the last couple of rows (10 racks total) for "test" equipment.

We've compiled a fair amount of equipment that can be used for testing, and we also have a lot of equipment that moves through here to be "burned in" and configured before it's sent into the field. All this equipment needs a place to live, either temporarily or permanently. We have equipment from Ciena, Juniper, Infinera, Cisco, HP and others. Up until now it's been spread across several facilities, most of which had inadequate space, power and/or cooling. So we're very excited about having a wonderful new facility!



It's been amazing how much demand there is for this kind of testing environment. Equipment has been moved in quickly, and as soon as people found out it was there, they wanted to use it. It's very clear that we'll need to designate a "lab czar" to make sure we maintain some semblance of organization in the lab, and it's clear that the lab czar had better not be me! The grand vision is a lab environment where engineers can "check out" specific devices, automatically build cross-connects between devices to create the topology they need, and have the device configs reset to default when their work is completed. We're a long way from that, but we'll hopefully keep moving steadily in that direction over the next 12-24 months.
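Just to give a flavor of what that vision might look like, here's a rough Python sketch of a check-out / cross-connect / check-in workflow. None of this exists in the lab today; every name in it (LabInventory, Device and so on) is hypothetical, purely to show the shape of the idea.

```python
# Hypothetical sketch of the lab reservation workflow described above.
# Nothing here reflects an existing system or vendor API.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Device:
    name: str
    vendor: str
    checked_out_by: Optional[str] = None

class LabInventory:
    """Imaginary inventory/reservation system for the test lab."""

    def __init__(self, devices: List[Device]):
        self.devices = {d.name: d for d in devices}
        self.cross_connects: List[Tuple[str, str]] = []

    def check_out(self, device_name: str, engineer: str) -> Device:
        dev = self.devices[device_name]
        if dev.checked_out_by:
            raise RuntimeError(f"{device_name} is already in use by {dev.checked_out_by}")
        dev.checked_out_by = engineer
        return dev

    def connect(self, a: str, b: str) -> None:
        # In the full vision, this step would drive an automated cross-connect
        # system to build the requested topology on demand.
        self.cross_connects.append((a, b))

    def check_in(self, device_name: str) -> None:
        # ...and this step would also push a default config back to the device.
        self.devices[device_name].checked_out_by = None

# Example of how an engineer might use it:
lab = LabInventory([Device("juniper-test1", "Juniper"), Device("cisco-test1", "Cisco")])
lab.check_out("juniper-test1", "engineer1")
lab.connect("juniper-test1", "cisco-test1")
lab.check_in("juniper-test1")
```

The hard parts, of course, are the two steps that are only comments here: physically (or optically) building the cross-connects and reliably resetting device configs, which is why this is a 12-24 month direction rather than a feature we have now.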