Biggest AD Forest

Currently working on a single forest with 300,000+ users - the scale can be daunting at first, but it's really just the same sound principles multiplied up.
 
The two biggest I've seen were 1.3m and 1.6m - one in the UK (government-related) and one in the US (defense-related). Thankfully I didn't have to admin them :)
 
I don't know exactly how many DCs, but it certainly ran into Zulu territory (faaasands of 'em!). I wasn't involved in the AD design at all, but I worked on a solution to sync objects to and from various systems, including AD, so I got a very good feel for the number of objects in each directory/system.

AD can scale, but it isn't particularly graceful at that kind of level. It also isn't fully LDAP compliant (despite what MS might have you think), which can make life pretty miserable if you're developing against it.
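For what it's worth, a lot of that misery tends to be mundane stuff like result-set limits: AD won't hand back more than its MaxPageSize (1000 entries by default) for a single search, so anything enumerating users has to drive the paged-results control itself. A minimal sketch, assuming the Python ldap3 library and made-up server, account and base-DN values:

Code:
# Paged LDAP search against AD - a rough sketch assuming the ldap3 library.
# The server name, account and base DN below are placeholders.
from ldap3 import Server, Connection, SUBTREE, NTLM

server = Server("dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc_sync", password="secret",
                  authentication=NTLM, auto_bind=True)

# AD caps a single search at MaxPageSize (1000 by default), so let ldap3
# drive the simple paged-results control and stream entries back.
results = conn.extend.standard.paged_search(
    search_base="DC=example,DC=com",
    search_filter="(objectClass=user)",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "mail"],
    paged_size=500,
    generator=True)

for entry in results:
    if entry.get("type") == "searchResEntry":
        print(entry["dn"])

conn.unbind()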
 
The large AD Forest that I work on is a public sector project in education. I know it's one of the largest in Europe (a single forest rather than a mesh of trusts) and it contains only 3 domains.

Every user has an Exchange mailbox.

1500 Domain Controllers and nearly 6000 servers in total.
 
1500 DCs - you're having a laugh...

What OS are the DCs on? When we did some AD work at SSE they moved up to the x64 version and reduced the number of DCs.

That must be a nightmare to manage in Sites and Services - WAN links, replication and so on... eek
 

Yes, I was thinking 1500 DCs is a bit of overkill!

Stelly
 

A lot depends on the site topology though - lots of sites generally means lots of DCs (although not in every case). The performance of the machine often isn't the issue; it's proximity to the user logging on and the reliability/performance of the WAN links. I've seen plenty of relatively small domains/forests with lots of DCs because of the branch-office nature of their sites and (relatively) poor WAN links.
 
It's a managed service for schools, so each one gets at least one DC (1,200 schools). The contract is written in such a way that a school must remain partially functional even if its WAN link is down, and there are a lot of AD-enabled apps - hence at least one DC in each site.

And yes, it's a nightmare! If it wasn't for a particular set of apps it would probably be a more traditional corporate setup with DCs at a regional datacentre. No such luck.
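The reason one local DC is enough to keep a school limping along is that Windows clients are site-aware: the DC locator asks DNS for SRV records scoped to the client's own AD site before falling back to the domain-wide records, so logons and AD-enabled apps can still find a DC with the WAN down. A rough sketch of that lookup, assuming the Python dnspython library and made-up domain/site names:

Code:
# Rough sketch of the site-aware DC locator DNS lookup, assuming the
# dnspython library; the domain and site names are placeholders.
import dns.resolver

domain = "schools.example.net"   # hypothetical forest DNS name
site = "School-0042"             # hypothetical AD site for one school

# Clients try the site-specific SRV record first, which is why a DC in
# the local site keeps logons working when the WAN link is down.
site_record = f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}"
fallback_record = f"_ldap._tcp.dc._msdcs.{domain}"

for name in (site_record, fallback_record):
    try:
        for srv in dns.resolver.resolve(name, "SRV"):
            print(f"{name} -> {srv.target}:{srv.port}")
        break  # stop at the first record set that answers
    except dns.resolver.NXDOMAIN:
        continue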
 

I should imagine you have a really 'nice' replication topology there! So where do the schools store their data, local server or central store? I'm just trying to picture what 'partial functionality' they will have. ;)
 
Currently all data is stored locally in each school. The only things done centrally in the datacentre are Exchange, internet access and content filtering. Oh, and some Virtual Learning Environment Web 2.0 apps.

It's not that bad really in terms of replication - each school is an AD site, of course, but the WAN is tiered: maybe ten central schools run 200Mbit fibre to the datacentre, then another tier of ten schools hangs off each of these on a slower connection.

Each of the hub schools hosts bridgehead servers and Global Catalogues, and all the FSMO roles are held in the datacentre. On an average day you would expect Spotlight to report maybe 50 servers with replication issues (usually a journal wrap or low disk space). These are fairly easily fixed.

The DCs are now five years old and due for replacement in the near future.
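Spotlight aside, a first-pass sweep of that kind of replication trouble can be done with the in-box tools: repadmin /replsummary rolls up deltas and failure counts per DC. A quick sketch (not how this project actually monitors things), assuming Python on a Windows host with the AD DS admin tools installed:

Code:
# Quick-and-dirty replication sweep - a sketch only, assuming it runs on
# a Windows host with the AD DS admin tools (repadmin.exe) installed.
import subprocess

# repadmin /replsummary rolls up the largest replication deltas and the
# per-DC failure counts for the whole forest into one table.
summary = subprocess.run(
    ["repadmin", "/replsummary"],
    capture_output=True, text=True, check=True)

# Non-zero counts in the "fails" column are the DCs worth chasing; a
# real monitor would parse the table rather than just print it.
print(summary.stdout)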
 

Spotlight is cool. I've tried many a time to get places to use it, but nobody has ever bitten. I'd love to work somewhere that uses it.
 