How to set default SMTP addresses for Active Directory contacts using PowerShell

May 15, 2014

Hi again,

We are currently working on a critical migration project from a Windows 2003 platform to Windows 2008/2012/R2. We also had to migrate Exchange 2003 to Exchange 2010. We encountered some troubles, but everything is working fine after some fixes (thank God!).

One of the problems we encountered concerned a large number of Exchange 2003 contacts (created in Active Directory) that could not be opened in Exchange 2010. The cause was that the default primary SMTP address of these contacts was not set. Consequently, it was not possible to send mail to those contacts. We had more than 700 contacts to update. On our 2012 domain controllers, we had to modify the proxyAddresses attribute of each contact, which initially contains two addresses: X400 and smtp.

To set a default SMTP address, the lowercase "smtp:" prefix has to be replaced with the uppercase "SMTP:" prefix.
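For example, for a contact with the hypothetical address john.doe@contoso.com, the attribute changes like this:

proxyAddresses before: { "X400:…", "smtp:john.doe@contoso.com" }
proxyAddresses after : { "X400:…", "SMTP:john.doe@contoso.com" }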

Let us suppose that we have an OU named "EXP Contacts" in the "contoso.com" domain that contains all our contacts. You can find some PowerShell scripts for Active Directory users; the idea is the same, but with contacts things are a bit different.

The PowerShell script to execute on a domain controller or an Exchange server is as follows:

Import-Module ActiveDirectory

# OU that contains all the contacts
$EXPOUPath = "OU=EXP Contacts,DC=contoso,DC=com"

# Get every contact in the OU together with its proxyAddresses attribute
$EXPContacts = Get-ADObject -Filter 'objectClass -eq "contact"' -SearchBase $EXPOUPath -Properties proxyAddresses

foreach ($EXPContact in $EXPContacts)
{
    $proxyAddresses = $EXPContact.proxyAddresses

    foreach ($EXPContactPrxAddress in $proxyAddresses)
    {
        # -cmatch is case-sensitive: match only lowercase "smtp:" entries,
        # otherwise addresses already in "SMTP:" form would be rewritten too
        if ($EXPContactPrxAddress -cmatch "^smtp:")
        {
            # Build the primary address, then swap it in place of the secondary one
            $EXPContactDefaultAddress = "SMTP:" + $EXPContactPrxAddress.Split(":")[1]
            Set-ADObject -Identity $EXPContact.DistinguishedName -Remove @{proxyAddresses=$EXPContactPrxAddress}
            Set-ADObject -Identity $EXPContact.DistinguishedName -Add @{proxyAddresses=$EXPContactDefaultAddress}
        }
    }
}
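To check the result, here is a small sketch (assuming the same $EXPOUPath variable as above) that lists any contact left without exactly one primary SMTP address:

Get-ADObject -Filter 'objectClass -eq "contact"' -SearchBase $EXPOUPath -Properties proxyAddresses |
    Where-Object { @($_.proxyAddresses -cmatch "^SMTP:").Count -ne 1 } |
    Select-Object Name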

Hope it helps.

SCCM ConfigMgr 2012 Support Center Viewer, a useful tool, is available!

February 5, 2014


Hi SCCM Geeks!

Troubleshooting SCCM is not easy work. SCCM is both centralized (SCCM site roles) and distributed (managed clients), so it is not obvious where to find the source of your issue.

SCCM generates a lot of log files on the servers and the clients, and studying them is difficult and time consuming.

Microsoft offers us this great tool: SCCM ConfigMgr 2012 Support Center Viewer.

This tool allows you to collect and consolidate logs for troubleshooting purposes and to simulate some client actions, like requesting client policies.
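As an aside, the same "request client policy" action can also be triggered by hand on a managed client. A minimal sketch, using the well-known machine policy schedule ID (run locally on the client; this is the manual equivalent, not the tool itself):

# Trigger the Machine Policy Retrieval & Evaluation Cycle on an SCCM client
Invoke-WmiMethod -Namespace "root\ccm" -Class SMS_Client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000021}"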

You can find further information in this blog.

Hope it helps!

Probable Double Take 7 bug during the failover of SQL Server 2005, 2008 or 2008 R2!

February 4, 2014


Hi again!

I am currently working on a replication project using the famous Double Take to replicate SQL Server 2000/2005 databases and file shares between two remote sites.


I have some experience with Storage Mirroring (Double Take) 5.3 from HP, and I really liked its different consoles. As we did not have the necessary license for the Enterprise Edition, we were obliged to use the new Double Take 7.0 delivered by Vision Solutions. And unfortunately, my expectations were not met.

Unlike Storage Mirroring 5.3, when configuring a SQL replication job, there is no way to configure which SQL services to start during the failover process. I searched for a file in the Double Take repository that contains the services to start, but no luck!


And the final result: when you launch your failover process, the job fails! Just take a look at the job log to find the cause: the MSSQLServerADHelper service cannot be started!

This service is only launched when a SQL Server object needs to be created in Active Directory to register an instance of SQL Server, and it is configured to start manually. So the only way to start MSSQLServerADHelper is from SQL Server itself.

But our famous Double Take fails when starting this service! And this is very logical!

What is the solution? If the MSSQLServerADHelper service is not crucial for you, you can delete it! Yes, this is the solution we applied in our case, using this command: sc delete MSSQLServerADHelper.
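If you prefer PowerShell, here is a minimal sketch of the same cleanup (run from an elevated prompt on the SQL server; note that plain "sc" is an alias for Set-Content in PowerShell, hence sc.exe):

# Confirm the helper service exists before removing it
Get-Service -Name MSSQLServerADHelper -ErrorAction SilentlyContinue
# Remove the service so Double Take no longer tries to start it during failover
sc.exe delete MSSQLServerADHelper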

By doing so, the failover process finally worked for us!

NB: the MSSQLServerADHelper service was removed from SQL Server 2012 and later (I suppose), so there is no problem when replicating those versions with Double Take 7.0.

After the Forefront family, InfoPath will also die!

February 1, 2014


Hi again!

Microsoft is taking strange decisions, but this one, about the InfoPath technology, is not really strange, as they are working on a next forms technology!

Just a few hours ago, Microsoft announced that this beautiful technology has reached its end of life!

This decision concerns the InfoPath client and InfoPath Forms Services on SharePoint Server 2013 and Office 365 SharePoint Online.

Just wait for the SharePoint Conference event coming next month, God willing, to discover the next forms technology.


Advanced Hyper-V replication configuration

August 9, 2013

Hi,

In the last post, I presented the Capacity Planner for Hyper-V Replica, and in its documentation, I discovered that this tool can suggest a value for the number of machines to be transferred in parallel.

I quickly asked Google about this parameter and bingo! A very nice article from Microsoft describes more parameters to configure:

  • DisableCertRevocationCheck
  • MaximumActiveTransfers
  • ApplyVHDLimit
  • ApplyVMLimit
  • ApplyChangeReplicaDiskThrottle

All these parameters can be configured through the registry.
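As a sketch, here is how one of these values could be set with PowerShell. The key path is the one I have seen in Hyper-V Replica tuning articles, and the value 4 is only an illustration; check the Microsoft article for the recommended value in your environment:

# Create the Hyper-V Replica tuning key if it does not exist yet
$regPath = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication"
New-Item -Path $regPath -Force | Out-Null
# Set the number of parallel transfers (illustrative value)
New-ItemProperty -Path $regPath -Name "MaximumActiveTransfers" -Value 4 -PropertyType DWord -Force | Out-Null
# Restart the Virtual Machine Management Service so the change is taken into account
Restart-Service vmms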

Enjoy!

Eid Mubarak!

August 7, 2013

Eid Mubarak to all Muslims over the world!


Capacity Planner for Hyper-V Replica: a long story from SCCM!

August 7, 2013

Hi Geeks,

For a customer with about 1500 users, I designed an SCCM 2012 platform using a single primary site, since there was no important subordinate site (to use as a secondary site or another primary site), with these elements:

  1. A site server on a DL 360 G7
  2. A site system server with duplicated roles on a DL 360 G7
  3. 2 SQL Servers configured using the AlwaysOn feature on 2 DL 360 G7s

For 1500 users, the proposed architecture is all right: it is highly available. However, the customer changed his mind: SCCM is so critical for him that he wants to have it on the secondary site as well.

My challenge: with the same servers, I had to find a solution, since SCCM 2012 does not support disaster recovery capabilities.

So I thought about virtualization to offer:

  • High availability through a Hyper-V cluster
  • Disaster Recovery capabilities through Hyper-V Replica

The architecture changed, and the following schema describes the involved elements:


  • 2 servers used as Hyper-V cluster nodes. Each node can host two machines: SCCM (a primary site server) and SQL (also configured as a site system server with some duplicated roles)
  • 1 server as a SAN (yes!). The cluster was based on SMB 3!
  • 1 server as the Hyper-V Replica server

Very nice! The designed architecture was deployed successfully (thank God). However, I encountered some issues with the Hyper-V replication, which worked fine locally but suffered big disruptions over the WAN.

My problem was that I was not able to estimate the necessary resources (especially WAN bandwidth) for my workload.

Fortunately, Microsoft has released a great tool, the Capacity Planner for Hyper-V Replica, which can be downloaded from this link.


After configuring and running the tool, it is possible to consult a rich report that covers (from the tool documentation):

1) Virtual Machine

The table lists a set of VMs and VHDs which were considered for capacity planning guidance.

2) Processor

The table captures the estimated CPU impact on the primary and replica servers, after enabling replication on the selected VMs.

3) Memory

The table captures the estimated memory requirements on the primary and replica servers, after enabling replication on the selected VMs.

4) IOPS

There are two tables in this section – one for the primary storage subsystem and the other for the replica storage subsystem. The attributes for the primary storage subsystem are:

a) Write IOPS before enabling replication – This captures the write IOPS observed across all the selected VMs for the duration of the run.

b) Estimated additional IOPS during initial replication – Once replication is enabled, the VHD is transferred to the replica server/cluster as part of the ‘Initial Replication’ (IR) operation, which can be completed over the network. The IOPS required during this duration is captured in this row.

c) Estimated additional IOPS during delta replication – Once IR completes, Hyper-V Replica attempts to send the tracked changes every 5 minutes. The additional IOPS required during this operation is captured in this row.

The attributes for the replica storage subsystem are:

a) Estimated IOPS during IR – During the course of IR, the IOPS impact on the replica storage subsystem is captured in this row.

b) Estimated IOPS when only the latest point is preserved – While enabling replication, customers have the option to store only the latest recovery point or up to 15 additional recovery points (spaced at a 1-hour granularity). This row captures the IOPS impact when storing only the latest recovery point.

c) Estimated IOPS impact when multiple recovery points are used – This row captures the IOPS impact when replication is configured to store multiple recovery points. Hyper-V recovery snapshots are used to store each recovery point. The IOPS impact is independent of the number of points.

5) Storage

This section captures the disk space requirements on the primary and replica storage. The first table, which captures the primary storage subsystem, contains the following details:

a) Additional space required on the primary storage: Hyper-V Replica tracks the changes to the virtual machine in a log file. The size of the log file is proportional to the workload “churn”. When the log file is being transferred (at the end of a replication interval) from the primary to the replica server, the next set of “writes” to the virtual machine is captured in another log file. This row captures the space required across all the ‘replicating’ VMs.

b) Total churn in 5 minutes: This row captures the workload “churn” (or the writes to the VM) across all the VMs on which replication will be enabled.

The following metrics are reported on the replica storage:

a) Estimated storage to store the initial copy: Irrespective of the replication configuration around additional points (latest vs. storing more than one point), this row captures the storage required to store the initial copy.

b) Additional storage required on the replica server when only the latest recovery point is preserved: Over and above the storage required to store the initial copy, when replication is enabled with only the latest point, the tracked changes from the primary server are written to the replica VM directly. Storage (equal to the churn seen in a replication interval) is required to store the log file before it is written to the replica VM.

c) Additional storage required per recovery point on the replica server when multiple recovery points are preserved: Over and above the storage required to store the initial copy, each additional recovery point (stored as a Hyper-V snapshot on the replica server) requires additional space, which is captured in this row. This is an estimate based on the total VHD size across all the VMs; the final size depends on parameters such as the write pattern.

6) Network

The network parameters are captured in the table. These are:

a) Estimated WAN bandwidth between the primary and replica site: This is the input provided to the capacity planning tool.

b) Average network bandwidth required: Based on the workload churn observed during the duration of the run, this row captures the average network bandwidth required to meet Hyper-V Replica’s attempt at sending the tracked changes every 5 minutes. This is a rough estimate, as factors not accounted for by this tool (such as compression of the payload, latencies in the network pipe, etc.) could impact the results.

c) MaximumActiveTransfers: In a multi-VM replication scenario, if the log file of each replicating VM is transferred sequentially, this could starve or delay the transmission of the change log file of some other replicating VM. On the other hand, if the change log files of all the replicating VMs are transferred in parallel, it would affect the transfer time of all the VMs due to network resource contention. In either scenario, the Recovery Point Objective (RPO) of the replicating VMs is affected. An optimal value for the number of parallel transfers is obtained by dividing the available WAN bandwidth by the TCP throughput of your link. The tool calculates the TCP throughput by replicating a temporary VM which it creates, and makes a recommendation for a registry key which is taken into account by Hyper-V Replica. It is worth noting that the value captures the number of parallel network transfers and *not* the number of VMs which are enabled for replication.
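For example, with purely illustrative numbers (100 Mbps of available WAN bandwidth and 25 Mbps of measured TCP throughput per transfer, both assumptions):

# Illustrative calculation of MaximumActiveTransfers (both inputs are assumptions)
$wanBandwidthMbps  = 100  # available WAN bandwidth between the sites
$tcpThroughputMbps = 25   # TCP throughput of a single transfer, as measured by the tool
$wanBandwidthMbps / $tcpThroughputMbps  # => 4 parallel transfers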

A great tool really!