
Saturday 27 February 2010

Exchange 2010 Performing Database Management Pt 4

 

Manage Database Redundancy
Mailbox databases have now moved to the organizational level, as mentioned in Pt 1. Exchange 2010 introduces a radical new method of providing database redundancy: databases can now be kept as multiple copies on different servers.

  • A logical group of mailbox servers is called a 'Database Availability Group', or DAG for short.
  • A DAG can contain up to 16 mailbox servers (which can also hold other Exchange roles)
  • These servers can be on different subnets
  • A DAG can have up to 16 copies of a database (with up to 100 databases per server)
  • Within a DAG, one copy of the database is active while the other copies are passive.
  • When a change is made to the active copy, it is recorded in the transaction log. When the log becomes full, it is closed and replicated to the passive copies on other servers. The replicated transaction logs are replayed into the passive databases, which keeps the passive copies up to date (log shipping and replay)
  • If the active database is lost, a passive copy automatically becomes the active copy; this is a failover. An administrator can also activate a passive copy manually, which is called a switchover.
This section deals with how to create a DAG and how to configure database failover using two servers.

Create a DAG

A DAG consists of three primary components:

  1. Name
  2. IP address
  3. Witness location

The name follows NetBIOS conventions, and the IP address can be assigned by DHCP (not personally recommended) or set statically. If the servers are on different subnets, the DAG's IP addresses should include those networks. Because the DAG uses Windows Server 2008 failover clustering, the quorum model is based on a file share witness. The file share witness is used when the number of nodes in the cluster is even: it provides the tie-breaking vote that decides which nodes remain active. The witness can be located on any server (but not one in the DAG) and its path is configurable.

The following commands can be used to construct a DAG:


[PS] New-DatabaseAvailabilityGroup -Name DAG1 -DatabaseAvailabilityGroupIPAddress 192.168.2.100

You might now receive an error like the following:
WARNING: The operation wasn't successful because an error was encountered. You may find more details in log file "C:\ExchangeSetupLogs\DagTasks\dagtask_2010-02-28_21-56-49.338_new-databaseavailabiltygroup.log". The task was unable to find any Hub Transport servers without the Mailbox server role in the local Active Directory site. Please manually specify a witness server.

You will have to establish the witness location manually. If you place the witness on a DC, the "Exchange Trusted Subsystem" security group has to be added to the local Administrators group of that server, and the DC's computer account must be added to the Exchange Trusted Subsystem group. This is not ideal. Best practice recommends placing the witness on a Hub Transport server (not one in the DAG!). The other thing to mention is do NOT create the folder or share ahead of time; just let the cmdlet do all the work.
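If you do end up using a DC, the group changes can be scripted. This is a hedged sketch only: COMPULINX is a hypothetical domain NetBIOS name, DC1 a hypothetical witness server, and Add-ADGroupMember assumes the Active Directory module of a 2008 R2 DC is available:

# COMPULINX and DC1 are placeholder names - substitute your own
[PS] net localgroup Administrators "COMPULINX\Exchange Trusted Subsystem" /add
[PS] Import-Module ActiveDirectory
[PS] Add-ADGroupMember "Exchange Trusted Subsystem" -Members 'DC1$'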

To set the witness location as well as creating a DAG, type the following:

[PS] New-DatabaseAvailabilityGroup DAG1 -WitnessServer SRV1.compulinx.com -WitnessDirectory c:\DAG1witness -DatabaseAvailabilityGroupIPAddress 192.168.2.100

 

Add Servers to the DAG

We must now add mailbox servers to the DAG just created. As mentioned, these Exchange servers can in fact hold multiple roles. The servers should have two network cards: one NIC is used to transfer replication traffic between servers and the other carries MAPI traffic. Remember that the CAS role is now the RPC endpoint for Outlook clients. Email clients connect to CAS servers, which in turn communicate with the mailbox servers by RPC (very different from Exchange 2007). This connection to CAS servers is possible because they now run the RPC Client Access service. This will be discussed in another post (CAS arrays).

Type the following to add a server to your DAG:

[PS] Add-DatabaseAvailabilityGroupServer DAG1 -MailboxServer SRV210

This should take about 20-40 seconds to complete and will automatically install the Failover Clustering component. This feature is unavailable in the Standard edition of Windows Server 2008 R2; you will require Enterprise edition servers.
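If you want to confirm the feature was installed, you can check it from PowerShell (a quick sanity check, nothing more):

[PS] Import-Module ServerManager
[PS] Get-WindowsFeature Failover-Clustering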

Now add your second server to the same DAG:

[PS] Add-DatabaseAvailabilityGroupServer DAG1 -MailboxServer SRV211

If you need to change the location of the witness resource:

[PS] Set-DatabaseAvailabilityGroup DAG1 -WitnessServer SRV212 -WitnessDirectory c:\somelocation


So far we have created a DAG, defined the location of the witness and added two servers to the DAG. The next step is to include a database that needs to be replicated between servers. Since SRV210 already has a database (DB1, which we created earlier), a database copy exists; this needs to be replicated to SRV211.

[PS] Add-MailboxDatabaseCopy DB1 -MailboxServer SRV211
[PS] Get-MailboxDatabaseCopyStatus -Identity DB1


The last command should show you that the database has replicated: notice the replica is 'Healthy' while the original, active copy is 'Mounted'.

DAG Networks


When we created the DAG, a network for replication was automatically established, called a DatabaseAvailabilityGroupNetwork. Because we have two network cards in our servers, you should see two networks: DAGNetwork01 and DAGNetwork02. To see this, type the following:

[PS] Get-DatabaseAvailabilityGroupNetwork | ft name,identity,replicationenabled,subnets,interfaces -au

This will show you that both networks are enabled for replication of database information. Since our servers have two network cards, we can dedicate one of the networks to MAPI traffic, i.e. traffic from CAS servers.
To reserve DAGNetwork02 for MAPI traffic, disable replication on it:

[PS] Set-DatabaseAvailabilityGroupNetwork "DAG1\DAGNetwork02" -ReplicationEnabled $False

Manually Seeding Database Replicas

 

There are times when you will be required to force a replication to database copies because they are out of sync with the active original. This can happen in the following situations:

  • When a replica is brought back online after extended downtime
  • Log file corruption
  • Database corruption
  • Extended WAN outage (assuming the replicas are in different sites)
To reseed a replica database, follow these steps:
  1. Suspend replication
  2. Update the replica (reseeding)
  3. Start replication

1. To suspend replication use the following cmdlet. This will suspend replication to the database replica on SRV211:

[PS] Suspend-MailboxDatabaseCopy DB1\SRV211


2. To manually reseed the replica database type the following. Notice that you have to delete any existing files for it to work:

[PS] Update-MailboxDatabaseCopy DB1\SRV211 -SourceServer SRV210 -DeleteExistingFiles $True


3. To resume replication type the following:

[PS] Resume-MailboxDatabaseCopy DB1\SRV211
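Once the reseed has finished, it is worth checking the copy status again; the SRV211 replica should return to 'Healthy':

[PS] Get-MailboxDatabaseCopyStatus DB1\SRV211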

There are two additional settings that affect how database copies handle logs and failover:

  1. ReplayLagTime
  2. TruncationLagTime

ReplayLagTime

This is the time that passes before replicated logs are replayed into the passive replicas. This can be useful if you are concerned about replaying a corrupted log into a passive copy.

TruncationLagTime

This is the amount of time that passes before a replayed log file can be deleted on a passive database copy. The following example sets the ReplayLagTime to a day and the TruncationLagTime to a week (the format is days.hours:minutes:seconds):

[PS] Set-MailboxDatabaseCopy DB01\SRV211 -ReplayLagTime 1.0:0:0 -TruncationLagTime 7.0:0:0


Failovers (When It All Goes Wrong!)
Failover occurs automatically with no administrator intervention. You can also swap your active and passive databases manually with the following cmdlet, provided both replicas are healthy. A manually activated move like this is called a switchover.

[PS] Move-ActiveMailboxDatabase "DB01" -ActivateOnServer SRV211 -MountDialOverride:None
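After the switchover, you can confirm which copy is now active; the copy on SRV211 should report 'Mounted' while the old active copy reports 'Healthy':

[PS] Get-MailboxDatabaseCopyStatus -Identity DB01 | ft Name,Status -au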

Exchange 2010 Performing Database Management Pt 3

Manage Database Settings
  • Configure Exchange Search
  • Configure Database Size Limits
Configuring Exchange Search
Exchange Search creates a full-text index on mailbox databases, allowing users to search their email very quickly. Items are added to the index as they arrive, so the index is always up to date.

This feature can be enabled or disabled on a per-database or per-server basis.

The following disables indexing on a per-database basis:

[PS] set-mailboxdatabase DB01 -IndexEnabled $False

The following disables indexing for all databases on the server:


[PS] stop-service MSExchangeSearch

[PS] set-service MSExchangeSearch -StartupType Disabled

You should remember that although you can disable Exchange Search, it's probably not a good idea to: the 'discovery' feature (I will come to this) relies on it being enabled.

You may have to rebuild the index when you are recovering from data loss. A script is included to help out, which can be found in the following location:

program files\microsoft\exchange server\scripts\resetsearchindex.ps1

The following rebuilds the index on a database:

[PS] cd "C:\Program Files\Microsoft\Exchange Server\Scripts"
[PS] .\ResetSearchIndex.ps1 -Force DB01

Configure Limits on a Database

You can set a size limit on your database; if it is reached, the database is dismounted. Exchange Server Standard edition has a default limit of 50 GB while Enterprise edition has none. However, a limit can be established, and you can raise the 50 GB limit that Standard edition imposes. To do this, open the registry using regedit and browse to the following path:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-<DatabaseGUID>

The GUID can be found by using the following cmdlet:

[PS] Get-MailboxDatabase DB01 | ft name,GUID -au

Look for the registry value 'Database Size Limit in GB'. If it exists, change it to the value you require; if not, create a new DWORD (32-bit) value with the same name.
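If you prefer to script the change, here is a hedged sketch. It assumes the Private-<DatabaseGUID> key naming shown above and sets a 75 GB limit for DB01 on the local server; verify the actual key path in regedit before relying on it:

# Assumes the Private-<GUID> key convention described above - check in regedit first
[PS] $guid = (Get-MailboxDatabase DB01).Guid
[PS] $key = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\$env:COMPUTERNAME\Private-$guid"
[PS] New-ItemProperty -Path $key -Name "Database Size Limit in GB" -Value 75 -PropertyType DWord -Force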

Thursday 25 February 2010

Exchange 2010 Performing Database Management Pt 2

With the database created in Pt 1, we can now perform the following management tasks:
  1. Configure an Online Maintenance Window
  2. Change the Timing for Database Checksumming

1. Configure an Online Maintenance Window

Exchange performs several tasks to clean up the data in the database: defragmentation, database compaction and database contiguity maintenance. Because this now happens all the time, we no longer need downtime to perform these tasks on an offline database.

Online Maintenance (OLM) of the database occurs every night for 4 hours. By default it begins at 1:00 AM but can be changed to suit your requirements. Remember that OLM does not include Online Defragmentation (OLD); that process occurs in the background. In Exchange 2007, OLD was part of OLM and added a lot of time to the OLM process. OLM includes the following processes:
  • Cleanup (deleted items/mailboxes). Cleanup also happens during OLD.
  • Space compaction. The database is compacted and space is reclaimed at run time.
  • Maintain contiguity. The database is analysed for contiguity and space at run time and defragmented in the background.
  • Database checksum. This runs against both active and passive copies of a database. Checksumming can occur within an OLM schedule or as a background process similar to OLD.
The schedule, or window, of maintenance can be set in the following way:

[PS] set-mailboxdatabase DB01 -MaintenanceSchedule "sat.1:00 AM-sat.5:00 AM"

This sets maintenance to start at 1:00 AM on Saturday and finish at 5:00 AM.
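To double-check the window you have set:

[PS] Get-MailboxDatabase DB01 | ft Name,MaintenanceSchedule -au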

2. Change the Timing for Database Checksumming

As mentioned, checksumming can run within OLM or in the background. Background is the default, but it can be moved into the OLM window in the following way:

[PS] set-mailboxdatabase DB01 -BackgroundDatabaseMaintenance $False


Exchange 2010 Performing Database Management Pt 1

Databases are at the heart of information storage in Exchange. Two types of database are considered:
1. Mailbox
2. Public Folder

Microsoft has de-emphasized the use of public folder databases in favour of SharePoint, so I will focus mainly on mailbox databases.

Write-Ahead Logging
Data is written to and read from the database by the Extensible Storage Engine (ESE), whose purpose is to allow applications to store and retrieve data via indexed and sequential access. ESE does not write data to the database directly; it holds the data in RAM and writes it to transaction logs first. The data in memory and the transaction logs are then written to the database periodically. This is write-ahead logging: changes to data files (where tables and indexes reside) may be written only after those changes have been logged, that is, once the log records have been flushed to permanent storage.

When we follow this procedure, we do not need to flush data pages to disk on every transaction commit, because we know that in the event of a crash we can recover the database using the log. Any changes that have not been applied to the data pages are first redone from the log records (roll-forward recovery, also known as REDO), and then changes made by uncommitted transactions are removed from the data pages (roll-backward recovery, also known as UNDO).
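The ordering is easier to see in a toy example. The following is a minimal, hypothetical PowerShell sketch of write-ahead ordering and REDO replay; ESE's real implementation (pages, checkpoints, log sequence numbers) is of course far more sophisticated:

# Toy demonstration of write-ahead ordering - NOT how ESE actually stores data
$log  = "$env:TEMP\demo.log"
$data = "$env:TEMP\demo.db"
New-Item -Path $log, $data -ItemType File -Force | Out-Null

function Commit-Change([string]$change) {
    Add-Content -Path $log  -Value $change   # 1. the change is logged (made durable) first
    Add-Content -Path $data -Value $change   # 2. only then may the data file be updated
}

Commit-Change "set mailbox quota = 2GB"

# Crash recovery (REDO): replay any logged change missing from the data file
$applied = @(Get-Content $data)
Get-Content $log | Where-Object { $applied -notcontains $_ } | Add-Content -Path $data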

Creating a New Mailbox Database
Unlike previous versions of Exchange, the mailbox database has moved up to a global (or organizational) level. This means the database has to be carefully (and uniquely) named. Because databases can be replicated between servers (this will be discussed at length later), you should not name the database after the server on which it is housed.


[PS] new-mailboxdatabase DB01 -edbfilepath "D:\DB01.edb" -logfolderpath "E:\DB01Logs"

Unlike creating the database in the console, when you create it with the shell you need to mount the database yourself:

[PS] mount-database "DB01"

(Use the Dismount-Database cmdlet to dismount the database!)

Although you have specified the file path for the new database, you can change the path by moving the database:

[PS] move-databasepath "DB01" -edbfilepath "F:\DB01\DB01.edb" -logfolderpath "F:\DB01" -force

(The -force parameter bypasses the prompt to first dismount the database.)

You should note that you cannot move database files while they exist as copies on other Exchange servers. You must first remove the replicas on those servers, move the database to the new path as shown above, and then re-create the replicas, as sketched below.
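As a sketch, using the same hypothetical server names (SRV210/SRV211) that appear in the DAG posts above, the sequence would look something like this:

[PS] Remove-MailboxDatabaseCopy DB01\SRV211
[PS] move-databasepath "DB01" -edbfilepath "F:\DB01\DB01.edb" -logfolderpath "F:\DB01" -force
[PS] Add-MailboxDatabaseCopy DB01 -MailboxServer SRV211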


Configuring a Cluster Using iSCSI

Ensuring High Availability using Clustering

  • Failover clustering

  • Network Load Balancing

Clustering is available in the Windows Server® 2008 Enterprise and Datacenter editions (supporting up to 16 nodes per cluster on x64 systems, and 8 on x86)

Failover cluster with two nodes connected to a storage unit


In this exercise, we will build a two-node failover cluster using an iSCSI SAN.

Requirements for a two-node failover cluster
You will need the following hardware for a two-node failover cluster:

1. Two Windows Server 2008 (Enterprise ed.) servers, which will serve as cluster nodes
2. A cluster storage volume. This will be a third server running StarWind iSCSI software (the iSCSI target)

1. Configuring the iSCSI Storage Target

StarWind offers an excellent iSCSI target solution for free. Download the software from https://www.starwindsoftware.com/download-free-version.
Once installed, connect, register, and add two iSCSI targets. One will be the Witness Disk (quorum) and the other will be the Data Disk. The 'disks' will in fact be file images of disks.
Ensure that the Data Disk is around 5 GB, the Witness Disk is 500 MB, and that clustering is available on both.

2. Configuring the Failover Cluster on the Cluster Nodes pt 1

a. Ensure that the Failover Clustering feature is installed on both cluster nodes. (Don't bother running the initial configuration test!)
b. On server 1, open the Failover Cluster Management console from Administrative Tools
c. Right-click Failover Cluster Management and select Create a Cluster
d. Within the wizard, add the two servers that will be the cluster nodes (server 1 and server 2)
e. Say no to the validation warning
f. Give the cluster a suitable name and supply an IP address that the cluster can be reached on
g. Finish the wizard

Now that the basic cluster has been created, notice that you don't have any storage yet. We will next configure the iSCSI initiators.

3. Configuring the Failover Cluster on the Cluster Nodes pt 2 (iSCSI initiators)

a. On both cluster nodes, open the iSCSI Initiator from Administrative Tools
b. On the Discovery tab, click Add Portal and enter the name or IP address of the iSCSI target (the port should be 3260); a command-line alternative is sketched after this list
c. This should complete with no errors. If errors occur, don't forget to check the firewall settings on the iSCSI target server.
d. Select the Targets tab. Two targets should be visible: one will be the Witness Disk and the other will be the Data Disk
e. Log on to both targets, making sure the check box for automatic connections on computer restart is selected
f. Follow the same procedure on server 2
g. In Disk Management, select each disk, bring it online, and format it with NTFS
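If you prefer the command line, the built-in iscsicli tool can handle discovery and login. A hedged sketch: the portal IP and target IQN below are hypothetical, and note that QLoginTarget creates a non-persistent session, so for the restart-persistent behaviour of step e you would still use PersistentLoginTarget (or the GUI):

# 192.168.2.50 and the IQN are placeholders - use your own target's values
iscsicli QAddTargetPortal 192.168.2.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:server3-data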

4. Configuring the Failover Cluster on the Cluster Nodes pt 3 (add storage)

a. On server 1, open the Failover Cluster Management console from Administrative Tools
b. Find and select the storage section
c. Right-click, select Add Storage, and add the iSCSI disks created earlier

5. Configuring the Failover Cluster on the Cluster Nodes pt 4 (quorum/witness configuration)

a. In the Failover Cluster Management console, select the cluster name
b. Select More Actions and choose 'Configure Cluster Quorum Settings'
c. Select the 'Node and Disk Majority' option (the second radio button)
d. Select the disk that will act as the witness disk

The above configuration should provide you with automatic failover. Shut down one machine and, in the nodes section, you will see that the other node is still available.

6. Configuring the Failover Cluster on the Cluster Nodes pt 5 (add a file server service)

a. In the Failover Cluster Management console, select 'Services and Applications'
b. Right-click and select 'Configure a Service or Application'
c. Select File Server
d. Provide a name (users will use this name to access the Data Disk) and a unique IP address
e. Select the Data Disk as the storage
f. Finish the wizard
Clients should be able to get to the shared data even if a node goes down.
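As a quick test from any client, list the shares published by the clustered file server (FILECLUSTER here is the hypothetical name you supplied in step d):

[PS] net view \\FILECLUSTER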