The University of Texas at Austin
As part of the Web Central Services Retirement project, the ITS website will be retired. Information about IT services will be replaced by two new sites, IT@UT and UT ServiceNow, and ITS departmental information will be migrated to a new location. All changes will be completed by 7/28/2016.

University Data Center

Last Updated: July 7, 2014 @ 1:47 pm
Next Review Date: 01/01/2017
Service Manager: Michael A Wilson
Governance Group: IT Architecture & Infrastructure
Document Status: Published

Key Metrics

  • Co-location service availability: 24 hours a day, 365 days a year (excepting defined maintenance windows and events outside the control of ITS)
  • UDC-C Data Center infrastructure availability: 99.98%
  • UDC-C Network availability: 99.96%
  • UDC-B Data Center infrastructure availability: 99.9%
  • UDC-B Network availability: 99.9%
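The availability targets above imply concrete annual downtime budgets. A quick sketch of the arithmetic (the function name is illustrative), using 8,760 hours per year:

```python
def max_downtime_hours(availability_pct: float, hours_per_year: float = 8760.0) -> float:
    """Maximum unavailable hours per year implied by an availability percentage."""
    return hours_per_year * (1.0 - availability_pct / 100.0)

# Downtime budgets implied by the SLA metrics above:
for label, pct in [("UDC-C infrastructure", 99.98),
                   ("UDC-C network", 99.96),
                   ("UDC-B infrastructure", 99.9)]:
    print(f"{label}: {max_downtime_hours(pct):.2f} h/yr")
```

For example, 99.98% availability allows roughly 1.75 hours of downtime per year, while 99.9% allows about 8.8 hours.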


This document defines the service level agreement for the University Data Center (UDC).

Service description

The UDC provides state-of-the-art co-location facilities for campus departments and researchers at the University of Texas at Austin. UDC-C maintains an extremely high availability infrastructure and is suitable for all critical production systems. UDC-B maintains a high availability infrastructure suitable for backup and disaster recovery systems. They also house systems running enterprise services critical to students, faculty, and staff. These secure facilities provide:

  • Server co-location space: Raised floor room equipped with server cabinets, network switches, network connections, and power distribution equipment.
  • Campus and Internet connectivity.
  • IT facilities to support system administrators in setting up and supporting systems.
  • Systems monitoring and notification at the customer's request.
  • Physical security and restricted access.
  • Information security in accordance with university standards.
  • Electrical and mechanical infrastructure designed and built to be concurrently maintainable: engineered for zero downtime.
  • Management of climate control, fire suppression, and power systems.
  • Consultation and assessment for academic and administrative units that want to move equipment into the UDC.

Server Co-Location

The raised floor space in the UDC provides an environmentally controlled facility for housing servers and related IT equipment. UDC-C East & West Hall raised floor areas are completely equipped with:

  • Server cabinets.
  • Top-of-rack switches for redundant connections to the campus network.
  • Power strips and power distribution equipment for redundant connections to power supply.
  • Patch cables for connecting servers to top-of-rack switches.

Because both UDC-C and UDC-B are fully equipped with cabinets, all IT equipment housed in either facility must be 19-inch, four-post rack mountable (as outlined in the "Supported computing environment" section of this document). UDC staff determine server locations based on power and cooling management and customer business needs.

Power and Backup Power

UDC-C power is provided from an A and B source to every server for redundancy. Each server equipped with dual power supplies is connected redundantly to the power source. The redundant power connections allow servers to retain power even during maintenance and unplanned events. Maintenance will be performed on only one side of the electrical system at a time.

UDC-C primary power is provided by the University of Texas at Austin's own power plant, which is backed up by City of Austin power. The UDC is equipped with uninterruptible power supplies (UPSs). In the event of an electrical disruption, the UPSs maintain power to the IT equipment until the backup generators come online. Backup generators are connected to the A and B feeds and have at least 24-hour fuel supplies.

UDC-B power is provided from an A and B source to every server for redundancy. However, only the A side is backed up by limited UPS capacity. When maintenance is performed on the A side, any unexpected electrical disruption to the B side will result in loss of power to the IT equipment.

UDC-B primary power is provided by the University of Texas at Austin’s own power plant, with a direct secondary connection to Austin Energy via an automatic transfer switch upstream of the A side UPS.

Servers that DO NOT have dual power supplies connected redundantly to the power source will go down during scheduled electrical maintenance and unplanned events on the side to which they are connected.


Network

The network equipment connects co-located servers to the campus network, providing on- and off-campus bandwidth and Internet connectivity. Connections are available at two speeds:

  • Standard connection: 1 gigabit per second (Gbps) Ethernet connection (up to four per device at no cost; additional connections for an extra cost).
  • High speed connection: 10 Gbps Ethernet connection (up to two 10GBase-T connections per device at no cost; fiber or twinax SFP+ available for an extra cost).

UDC-C network connectivity is provided from A and B sources to every server for redundancy. Each server equipped with dual network interface cards (NICs) is connected redundantly to both the A and B sides of the network via the top of rack switches. The redundant network connection allows properly configured servers to retain network connectivity during most regularly scheduled maintenance and unplanned events. Maintenance will be performed on only one side of the network at a time when possible.

Under the following conditions, servers can go down during scheduled maintenance and unplanned events on the side to which they are connected:

  • If they do not have dual NICs connected redundantly to the network.
  • If they are redundantly connected but not properly configured for Link Aggregation Control Protocol (LACP) bonding.
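On a Linux server, the LACP bonding requirement can be verified by inspecting the bonding driver's status file (typically `/proc/net/bonding/<interface>`). A minimal sketch of such a check; the parsing and the sample text are illustrative, and exact field names vary by kernel version:

```python
def bond_summary(status_text: str) -> dict:
    """Parse a /proc/net/bonding/<iface>-style dump: report the bonding mode
    and how many slave interfaces are currently up."""
    mode = None
    slaves_up = 0
    current_slave = None
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            mode = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            current_slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current_slave is not None:
            # Only count per-slave MII status lines, not the bond's own.
            if line.split(":", 1)[1].strip() == "up":
                slaves_up += 1
            current_slave = None
    return {"mode": mode, "slaves_up": slaves_up}

# Illustrative sample in the style of the Linux bonding driver's output:
SAMPLE = """\
Ethernet Channel Bonding Driver
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up

Slave Interface: eth0
MII Status: up

Slave Interface: eth1
MII Status: down
"""
```

A server properly configured for the redundant UDC-C network should report 802.3ad mode with both slaves up; anything less means it can lose connectivity during single-side maintenance.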

UDC-B network connectivity is provided by two individual network switches, but it does not provide the resiliency of UDC-C's configuration; systems placed in UDC-B are therefore more likely to encounter network-related problems. The network provides 1 Gbps connectivity, an out-of-band management network, and firewall services. The UDC-B network is not intended to support primary systems; only systems that provide redundancy or geographically separated backups for systems in UDC-C or other locations will be placed in the facility.


Cooling

Cooling is provided by a redundant cooling system that maintains the raised floor space at a temperature between 70 and 74 degrees Fahrenheit and relative humidity between 35 and 60 percent.
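As a simple illustration of these thresholds, a monitoring check might flag readings outside the stated envelope. The function and defaults below are illustrative, not the UDC's actual monitoring logic:

```python
def climate_alert(temp_f: float, humidity_pct: float,
                  temp_range=(70.0, 74.0), rh_range=(35.0, 60.0)) -> list:
    """Return alert messages for readings outside the SLA climate envelope."""
    alerts = []
    if not (temp_range[0] <= temp_f <= temp_range[1]):
        alerts.append(f"temperature {temp_f} F outside {temp_range[0]}-{temp_range[1]} F")
    if not (rh_range[0] <= humidity_pct <= rh_range[1]):
        alerts.append(f"humidity {humidity_pct}% outside {rh_range[0]}-{rh_range[1]}%")
    return alerts
```

A reading of 72 F at 45% humidity produces no alerts; 78 F at 30% humidity would flag both conditions.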

IT Facilities

UDC-C is equipped with facilities for system administrators:

  • A visitor work area and break room outside of the raised floor space, equipped with workstations, network ports, wireless access, and power outlets.
  • A server build room equipped with server cabinets, an A/B network, and a UPS, where staff can install, build, and configure systems prior to their installation in cabinets in the raised floor space. The server build room has only a single power source. Systems are not considered "in production" when they are in the server build room. Systems that are staged in the server build room must be installed in the raised floor space within 14 calendar days from staging. Space in the server build area is subject to availability and should be scheduled in advance with UDC staff.


Physical Security

The data center has a state-of-the-art security system and stringent security processes. Physical security for the UDC is governed by the University Data Center Security Policy.

Information Security

Intrusion detection and data exfiltration detection services are provided by the Information Security Office (ISO); after-hours and weekend coverage are included. In computer terminology, "exfiltration" refers to the unauthorized release of data from within a computer system, including copying the data out through covert network channels or copying it to unauthorized media.

Remote Server Access

The UDC has been designed with an "out of band" network (OOBN) that provides system console access to system administrators from remote locations. All servers are required to have a Remote Management port (port types vary by manufacturer, but all should be Ethernet capable).

The OOBN is configured such that all management networks will be in private address space. Access to the OOBN will be provided in general through the campus VPN system. The OOBN network is "best effort," and no production services should attempt to use this network for regular service.
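The private-address requirement can be checked with Python's standard `ipaddress` module; note that `is_private` matches RFC 1918 space plus other reserved ranges, so this sketch is illustrative only:

```python
import ipaddress

def is_private_mgmt_address(addr: str) -> bool:
    """True if addr falls in private/reserved space, as expected for the OOBN.

    is_private covers RFC 1918 (10/8, 172.16/12, 192.168/16) as well as
    loopback, link-local, and other reserved ranges.
    """
    return ipaddress.ip_address(addr).is_private
```

A management interface addressed in 10.0.0.0/8 passes this check; a publicly routable campus address does not.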

Disaster Recovery

UDC is responsible for maintaining a Disaster Facilities Plan consisting of a customer list, customer contacts, power and rack requirements, key facility related contacts, and escalation and communication paths related to the equipment in its facilities. Customers will be responsible for developing their own disaster recovery plans for servers and services.

Ownership of Data/Data Management

See the Glossary for a definition of terms in quotes.

Information Technology Services runs the University Data Center (UDC) as an "Information Technology Resources Facility." It houses "Information Technology Resources" and "Information Systems" for colleges, schools and units (customers) within the university. The facility includes a number of "Access Controls" to help ensure the physical and information security of systems.

The UDC staff includes "Network Custodians" to assist in the implementation of network controls within the facility as directed by the owner and data steward. Customers may employ "System Administrators" directly to manage their Information Technology Resources and Systems within the UDC, or customers may contract with ITS to provide System Administrators. In the event that a customer contracts with ITS for System Administration, ITS also assumes a joint "Custodian" role with the customer. The customer maintains responsibility as the "Data Steward" and "Owner" in all cases.

Ownership of Data: All data stored, collected, and maintained on servers or other equipment placed at the UDC on behalf of a college, school, or unit shall remain the property of the college, school, or unit, and Data Steward and Data Owner responsibilities reside with the customer. The customer understands and agrees that any subpoena, court order, request under the Texas Public Information Act, or other third-party demand for access to the customer's data that ITS receives must be directed, within one business day of its receipt by ITS, to the college, school, or unit. ITS is not authorized to and shall not release a college, school, or unit's data, or permit a third party to access it, without the express permission of the college, school, or unit. ITS may not access or use a college, school, or unit's data for any reason not specifically authorized in this agreement without the express permission of the college, school, or unit.

Security of Data: The Data Owner for the college, school or unit is responsible for identifying the category of data stored on Information Technology Resources and Systems and for assigning a level of protection appropriate to that category, in compliance with the Minimum Security Standards for Systems and related policies and standards.

Intended users

The University Data Center can be used by students, faculty, and staff.

Supported computing environment

The UDC maintains minimum standards for equipment that can be accommodated in its facility. These minimum system standards preserve power and cooling requirements for use by all departments and allow the most efficient use of campus resources. UDC server cabinets can accommodate systems that conform to the following specifications:

  • Rack Support: 19-inch, four-post rack-mountable equipment
  • Rails: Order rails with NO cable management
  • Power Supply: Dual 208/220 volt power supplies for redundant power connectivity
  • Power Cords: Order C13 (limited C19 capability)
  • Dual Network Interface Cards (NICs) specifically for redundant network connectivity
  • Remote Management: Ethernet-capable remote server management port, for example, Dell Remote Access Controller (DRAC) or Integrated Lights Out Manager (ILOM)

Systems that are rack mountable and meet Minimum Security Standards for Systems but do not have dual NICs, dual power supplies, or remote management capabilities can be installed with a corresponding effect on redundancy and remote management capabilities (refer to "Power and Backup Power," "Network," and "Remote Server Access" in this document for an explanation of the impacts). These configurations will be discussed on a case-by-case basis and documented. It may also be possible to modify existing systems to add power supplies, NICs and remote management ports.

All ancillary equipment should be located near the equipment it serves, ideally within the same rack. Examples of ancillary equipment include:

  • Firewalls
  • Load balancers
  • Tape backup systems (internal or external)
  • External storage/fibre channel connections
  • IP-based KVM (keyboard, video, mouse) switches
  • IP-based serial console servers
  • Network switches

Ancillary equipment will be discussed on a case-by-case basis and documented.

Building Maintenance/Custodial

Climate Control

  • Maintenance of the data center chilled water and environmental systems.
  • Monitoring and control of climate conditions in the data center.
  • Responding to climate-related alerts triggered when conditions exceed system thresholds (for example, excessive temperatures or hot spots).

Fire Detection and Suppression

  • Early warning detection system.
  • Individually zoned, double-interlocked pre-action fire suppression system.
  • Dry pipe with water suppression in the event of a fire.
  • Maintenance of all sensors and alarms.
  • Maintenance of the fire suppression system.
  • Monitoring of, and response to, all related alerts and alarms.
  • Compliance with relevant certification and fire code requirements.


Power

  • Maintenance of the uninterruptible power supply system.
  • Maintenance of the back-up generators and related systems.
  • Connection to the electrical source.


Grounds and building maintenance.

Building Insurance

The university maintains building insurance to cover the cost of facilities and equipment should an insurable event occur, subject to standard deductibles.

Technical support

All requests for technical support will be logged using the ITS centralized ticketing system.

Tier 1 Support

Each request is logged in the ITS centralized ticketing system so it can be appropriately assigned and tracked. Staff will coordinate with customers to complete all tasks, and customers will be informed by e-mail from the ticketing system when requests have been completed.

  • For service requests that do not require immediate attention, send an e-mail message to the UDC support address. This will create a ticket in Footprints and will follow the standard ITS SLA initial response time of 4 hours.
  • For any issue requiring immediate attention (less than 4 hours), such as the reboot of a server or other troubleshooting activity, call 471-0007. The operators are on duty 24/7 to support you and will respond within 20 minutes. You may be asked to follow up with an e-mail. Customers must provide a contact method when they contact the UDC.

General Support

  • Initial support ticket acknowledgment response time – 4 hours (follows the standard ITS SLA)
  • General Inquiries & Service Questions – 1 business day

Availability: 8 a.m. to 5 p.m., Monday-Friday

Customer Migration Projects

  • Co-location Estimates – 2 business days
  • Engagement of ITS units such as Managed Server Support and Shared Services & Infrastructure (Virtual Machines, Storage, etc.) – 2 business days
  • System moves requiring a shutdown of services – 20 business days

Availability: 8 a.m. to 5 p.m., Monday-Friday

Remote Hands

Typically simple tasks that can be performed in a short timeframe and might otherwise require on-site visits by system administrators. Services include:

  • Power cycling equipment (rebooting) – 20 minutes
  • Swapping removable media (tapes, CDs, DVD, etc.) – 1 hour for onsite media / 2 hours for offsite media requiring pickup/delivery
  • Visual verification of equipment state (indicators, displays, etc.) – 20 minutes
  • Console connections – 20 minutes
  • Reseating cables – 20 minutes
  • On-site assistance during customer visits – As requested/required

Smart Hands

Involves handling hardware or performing invasive tasks in the back of a cabinet. Invasive tasks require advance notification to other cabinet occupants and must be supported by ITS staff.

  • Hardware failure component replacements (NICs, hard drives, power supplies, etc.) – 1 business day
  • Installation of new hardware components involving advance scheduling of downtime – 5 business days
  • OOBM/DRAC/ILOM configuration – 1 business day
  • Equipment pickup and delivery – 3 business days
  • Inventorying equipment – 5 business days
  • Surplus of computer equipment – 5 business days
  • Installation / De-installation of equipment in the raised floor and server build room – 5 business days
  • Relocation of equipment between raised floor and server build room and vice versa – 2 business days
  • Hard drive degaussing and destruction (for devices in the UDC) – 3 business days
  • Shipping and receiving as it relates to data center activities – 3 business days
  • Vendor support activities – As requested/required
Availability: 7 a.m. to 7 p.m., Monday-Friday

After hours: 7 to 10 a.m., first Saturday of the month

Networking

Any network configuration change or recabling.

  • VLAN, ACL or FWSM changes – 2 business days
  • LACP bonding – 2 business days
  • Moving, securing or terminating cables – 3 business days
  • Labeling equipment and cables – 2 business days
  • Other activities requiring networking support – Determined by complexity of request
Availability: 8 a.m. to 5 p.m., Monday-Friday

After hours: 7 to 10 a.m., first Saturday of the month; 5 to 8 p.m., third Thursday of the month

To schedule support in an after-hours window, open a ticket at least five days in advance; if no activities are logged five days before a window, it will be closed and staff released. Customers may conduct their own maintenance not requiring ITS staff at any time.

Urgent Support After Hours

Urgent requests should be communicated by telephone to 512-471-0007 and followed up with an e-mail. If the UDC Operators cannot resolve the issue, they will escalate to the appropriate tier 2 staff.

Monitoring and Notification

Monitoring software in the UDC will monitor publicly available ports on co-located systems as defined by customers. Examples of ports that can be monitored are 80 (HTTP) and 22 (SSH). Customers will be notified in the event of a failed response according to agreed-upon escalation paths.
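A port check of this kind amounts to a timed TCP connection attempt. The sketch below is illustrative only; the UDC's actual monitoring is performed by its monitoring software, not this code:

```python
import socket

def port_responds(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Mimics a basic reachability probe for monitored ports such as
    80 (HTTP) or 22 (SSH).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

A failed probe would then trigger notification along the customer's agreed-upon escalation path.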

Recurring maintenance on monitoring software is scheduled for the first and third Thursday of each month, from noon to 2 p.m. If maintenance activities are scheduled to occur, the UDC will notify customers. Support for critical outages in the Zenoss environment is 24x7x365 “best-effort” via a documented escalation path.

Moving to the Data Center

UDC staff will provide a process, consulting, and move assistance for customers who wish to move equipment into the data center. If the equipment exceeds what ITS staff can move with existing vehicles, the customer may be asked to bear the cost of moving it to the facility. ITS can provide recommendations for vendors to perform this work.

For existing customers being relocated to UDC-C from another ITS operated facility, the cost of a move vendor is borne by ITS.


Maintenance

ITS will notify customers of both scheduled and unscheduled maintenance, as well as service availability and service delivery issues, using the ITS Services Status page. Services may not be available during maintenance periods.

Scheduled maintenance

To the maximum extent possible, installation of service, application, and security updates will be performed during scheduled maintenance.

Network
  • Schedule during ramp-up period (begins at building occupancy): follows the Backbone, External Connectivity and Services Maintenance schedule (impact is campus wide) outlined in the Networks for Departments SLA.
  • Schedule in production (September 1, 2011): monthly or quarterly, on only one side of the network at a time (estimated).

Power and cooling infrastructure
  • Schedule during ramp-up period: none.
  • Schedule in production: quarterly, on only one side of the infrastructure at a time (estimated).

Unscheduled maintenance

Unscheduled maintenance tasks that require service downtime will be announced as soon as possible on the ITS Services Status page.

Change notification

ITS will maintain a mailing list of customer contacts who will be notified of planned maintenance and unplanned events. Customers must notify ITS of any changes to contact information as part of providing escalation path information. Contact lists will be reviewed periodically.


Access to Facilities

Access to facilities is governed by the UDC Security Policy, and access by unauthorized personnel is prohibited.

  • Visitor work space — Business hours — Required after hours — No
  • Server build room — Business hours — Required after hours — No
  • Raised floor — 24 hours — Always required (see Tier 1 support) — Yes

Normal service availability is defined in the following table:

  • Co-location service: 24 hours a day, 365 days a year (excepting defined maintenance windows and events outside the control of ITS)
  • Remote Hands: 24 hours a day, 365 days a year
  • Smart Hands: 7 a.m. to 7 p.m., Monday-Friday, on demand; can be scheduled in advance for all other hours

Customer responsibilities

In addition to the services provided by ITS, subscribers (users) of the service and identified owners/administrators agree to certain important responsibilities. All parties agree to be aware of and adhere to the university's Acceptable Use Policy.

Customers agree to:

  • Provide vendor name, model number, and specifications for equipment to be co-located.
  • Follow documented communications and ticketing process.
  • Order systems that are consistent with the supported system standards defined in the Supported Computing Environment section. When considering systems that may not conform to the standard, customer agrees to consult with ITS prior to purchasing.
  • Properly configure systems to use the redundant network equipment provided in the facility.
  • Develop disaster recovery plans for servers and services.
  • Identify staff who are authorized for remote and on-site (escorted) access to the facility and systems co-located therein.
  • Define an escalation path outlining who should be contacted and when in the event of problems with systems that are monitored by UDC staff.
  • Abide by security procedures that control access to the facility.
  • Maintain systems according to the Minimum Security Standards for Systems.
  • Manage the hardware lifecycle of systems co-located in the UDC.

Cost of Service

Cost information for this service can be found on the University Data Center Web site.

Glossary Terms for Data Ownership

Access Controls

Access controls are the means by which the ability to use, create, modify, view, etc., is explicitly enabled or restricted in some way (usually through physical and system-based controls).


Custodian

Guardian or caretaker; the holder of data, the agent charged with implementing the controls specified by the owner. The custodian is responsible for the processing and storage of information. The custodians of information resources, including entities providing outsourced information resources services to the university, must:

  • Implement the controls specified by the owner(s).
  • Provide physical and procedural safeguards for the information resources.
  • Assist owners in evaluating the cost-effectiveness of controls and monitoring.
  • Implement the monitoring techniques and procedures for detecting, reporting, and investigating incidents.

Data Steward

University representatives, such as faculty, staff, or researchers, who are tasked with managing administrative and/or research data owned by the university. Such data is to be managed by a data steward as a university resource and asset. The data steward has the responsibility of ensuring that the appropriate steps are taken to protect the data and that respective policies and guidelines are being properly implemented. Data Stewards may delegate the implementation of university policies and guidelines to professionally trained campus or departmental IT custodians.

Information Technology Resources

Any and all computer printouts, online display devices, mass storage media, and all computer-related activities involving any device capable of receiving e-mail, browsing web sites, or otherwise capable of receiving, storing, managing, or transmitting data including, but not limited to, mainframes, servers, personal computers, notebook computers, hand-held computers, PDAs, pagers, distributed processing systems, network attached and computer controlled medical and laboratory equipment (that is, embedded technology), telecommunication resources, network environments, telephones, fax machines, printers, and service bureaus. Additionally, it is the procedures, equipment, facilities, software, and data that are designed, built, operated, and maintained to create, collect, record, process, store, retrieve, display, and transmit information.

Information Technology Resources Facilities

Any location that houses information technology resource equipment (includes servers, hubs, switches, and routers). Facilities are usually dedicated rooms or mechanical/wiring closets in the buildings.

Information System

An interconnected set of information resources under the same direct management control that shares common functionality. An Information System normally includes hardware, software, information, data, applications, communications and people.

Networking Custodian

Network manager or analyst; the holder of network configuration data, the agent charged with implementing the network controls and services specified by the owner or the university. This custodian is responsible for the transfer of information. These custodians, including entities providing outsourced information resources services to the university, must:

  • Implement the network controls specified by the owner or the university.
  • Provide physical and procedural safeguards for the network infrastructure.
  • Assist owners in evaluating the cost-effectiveness of controls and monitoring.
  • Implement the monitoring techniques and procedures for detecting, reporting, and investigating or troubleshooting network incidents.


Owner

The authoritative head of the respective college, school, or unit. The owner is responsible for the function that is supported by the resource or for carrying out the program that uses the resources. The owner of a collection of information is the person responsible for the business results of that system or the business use of the information. Where appropriate, ownership may be shared by managers of different departments. The owner or his designated representatives are responsible for and authorized to:

  • Approve access and formally assign custody of an information resources asset.
  • Determine the asset's value.
  • Specify and establish data control requirements that provide security, and convey them to users and custodians.
  • Specify appropriate controls, based on risk assessment, to protect the state's information resources from unauthorized modification, deletion, or disclosure. Controls shall extend to information resources outsourced by the university.
  • Confirm that controls are in place to ensure the accuracy, authenticity, and integrity of data.
  • Confirm compliance with applicable controls.
  • Assign custody of information resources assets and provide appropriate authority to implement security controls and procedures.
  • Review access lists based on documented security risk management decisions.

System Administrator

Person responsible for the effective operation and maintenance of Information Technology Resources, including implementation of standard procedures and controls, to enforce the university's security policy.
