
Building the perfect CMDB with System Centre Part 2

D Walsham • Nov 27, 2020

High Level Design Look - Painting the picture of how it would look at various levels

Recap of previous part

To see the first part of this series, you can view it right here.

We looked at how to analyse all of the toolsets within our estate from which we can pull CMDB data, and noted that some of them, ranging from bespoke in-house tools through to legacy systems, may need an alternative way of connecting in order to pull that data.

Introduction to Part 2

This part covers the HLD (high level design) of how the overall CMDB solution would look, with multiple scenarios accounted for. We will also dive a little into integrations with other toolsets that carry a CMDB role of their own, where a synchronous or asynchronous relationship can be made between them.

These should hopefully cover most kinds of setups and environments, while allowing room to scale the reach of the CMDB connections across all of the toolsets that may be available.

Scenario 1 - Design of CMDB with default connectors

Here we have what would be an out of the box setup that you can utilise with System Centre Service Manager, which already contains the following connectors by default:

  • Active Directory
  • Microsoft Endpoint Manager Configuration Manager
  • System Centre Operations Manager (CI and Alert Connectors respectively)
In the diagram each area is tiered, with information aligned next to it to give an idea of the entire workflow and how data is passed from the toolsets all the way up to the CMDB.

This would be considered a base CMDB setup, which should be able to account for everything such as all computer objects as well as hardware/software inventory, on the assumption that all CIs are indeed managed and maintained across the main System Centre products as well as Active Directory, the backbone of your infrastructure.

The connectors for both SCCM/MEMCM and SCOM connect to the database servers which hold the operational database for each tool respectively. The data feeds through the connectors to populate Service Manager with the CI information, which in turn populates the CMDB.

Active Directory, on the other hand, connects through LDAP to the domain/forest you choose to pull from, which feeds into its base connector within Service Manager.
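
As a quick sanity check for Scenario 1, the CMDB can be queried directly from PowerShell once the default connectors have run. Below is a minimal sketch assuming the community SMLets module is available on the Service Manager management server; the Windows Computer class used here is the one the Active Directory and SCCM/MEMCM connectors populate.

    # Minimal sketch: confirm that computer CIs have landed in the CMDB.
    # Assumes the community SMLets module is installed on the management server.
    Import-Module SMLets

    # Microsoft.Windows.Computer is the class populated with computer objects
    # by the Active Directory and SCCM/MEMCM connectors.
    $computerClass = Get-SCSMClass -Name Microsoft.Windows.Computer$

    # List every computer CI with its last modified time, which gives a rough
    # view of whether the connectors are keeping the CMDB fresh.
    Get-SCSMObject -Class $computerClass |
        Select-Object DisplayName, LastModified |
        Sort-Object LastModified -Descending |
        Format-Table -AutoSize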

Scenario 2 - Design of CMDB with connections to all toolsets

This scenario is where we promote more of the scalability within our CMDB solution, as we are now able to broaden the scope to every single toolset that we have globally.

It is a very similar structure to the previous scenario, however we have now introduced another piece to the solution which will help with the scalability of reaching the toolsets.

If we look at Tier 5 there is an object which represents all "Bespoke Tools". This is my way of grouping not only solutions which may be developed in house, but also Microsoft products which don't have a default connector, such as Exchange (there is one for SMTP channels), SharePoint and various other products.

When we look at the next tier up we see a picture of multiple products. These represent a good example of the types of methods which would be used to pull data into Service Manager, such as the following (a short sketch of the CSV route follows the list):

  • Excel - .CSV, XLSX
  • Access - Access Database Files
  • PowerShell - Connection via PowerShell Scripting + SDK integration
  • VBScript - Utilising a VBScript for more legacy based toolsets
  • Python - Another scripting alternative, which may be able to interrogate sources much further
  • SQL Database - Though this acts as a somewhat default method for the connectors which already come with Service Manager, other toolsets wouldn't have a direct plugin, so a custom SQL query or ODBC connection may be required.
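
To make the Excel/CSV route above concrete, here is a minimal sketch of pulling a bespoke tool's export into Service Manager with SMLets. The file path, column names and the use of the Windows Computer class are assumptions for illustration; a real implementation would map onto whichever CI class you extend the CMDB with.

    # Minimal sketch: import a CSV export from a bespoke tool into the CMDB.
    # The file path and column names are hypothetical placeholders.
    Import-Module SMLets

    $computerClass = Get-SCSMClass -Name Microsoft.Windows.Computer$
    $existingCIs   = Get-SCSMObject -Class $computerClass
    $rows          = Import-Csv -Path 'C:\CMDB\Exports\bespoke-tool-assets.csv'

    foreach ($row in $rows) {
        # Look for an existing CI before creating a duplicate record.
        $existing = $existingCIs | Where-Object { $_.DisplayName -eq $row.HostName }

        if (-not $existing) {
            # Create a new CI from the exported row.
            New-SCSMObject -Class $computerClass -PropertyHashtable @{
                DisplayName   = $row.HostName
                PrincipalName = $row.FQDN
            }
        }
        else {
            # Refresh the existing CI so the record does not drift out of date.
            Set-SCSMObject -SMObject $existing -PropertyHashtable @{
                PrincipalName = $row.FQDN
            }
        }
    }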
How would we connect with these methods? This is where Tier 2 comes into play: we would use System Centre Orchestrator, which was also designed to provide custom connectors for System Centre. We will investigate this side further once we get into Part 3 of the series.
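
To give a flavour of that Tier 2 piece ahead of Part 3, an Orchestrator runbook can carry a PowerShell body inside its Run .Net Script activity. The sketch below shows the kind of body that could sit there, using a hypothetical ODBC DSN, table and columns for a legacy toolset; the resulting rows would then be published to the runbook data bus or written into Service Manager with the same SMLets approach shown earlier.

    # Sketch of a PowerShell body for an Orchestrator "Run .Net Script" activity.
    # The DSN name, table and columns are hypothetical placeholders.
    $connection = New-Object System.Data.Odbc.OdbcConnection
    $connection.ConnectionString = 'DSN=LegacyAssetTool;'
    $connection.Open()

    $command = $connection.CreateCommand()
    $command.CommandText = 'SELECT HostName, SerialNumber, Owner FROM Assets'

    # Fill a DataTable with the legacy tool's asset records.
    $adapter = New-Object System.Data.Odbc.OdbcDataAdapter $command
    $table   = New-Object System.Data.DataTable
    [void]$adapter.Fill($table)

    $connection.Close()

    # $table now holds the rows, ready to pass along the runbook or into SMLets.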

Scenario 3 - Design of CMDB with all toolsets with Integration to another CMDB

Not necessarily a popular scenario or setup as such, but it is indeed a means to keep an automated CMDB in one space which can then synchronize to another.

Again this scenario is very similar to the second, however here we are detailing a high level design where the fully automated CMDB solution may not live in the primary technology used for all Service Management, as many organisations use a technology such as ServiceNow. Even with the power of ServiceNow, you may not necessarily have an easy, direct connection point into it from many products, though there are products such as SCCM/MEMCM which can have a connector to ServiceNow, SCOM with tools around Event Management, and others such as Evanios.

This scenario provides an idea of how it can be done. With that being said, it is somewhat out of scope, as the series will focus primarily on building the actual workflow within System Centre. But at least once your CMDB automation solution is built you can indeed go that route to expand across; Kelverion integration packs, for example, are one of a few solutions which can provide a connection between the two.
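
For illustration only, here is a minimal sketch of that synchronization direction using ServiceNow's Table API, with a placeholder instance, account and field mapping; in practice a packaged integration such as the Kelverion integration packs would handle this at scale.

    # Minimal sketch: push one computer CI into ServiceNow's cmdb_ci_computer table.
    # The instance URL, account and field values are placeholders.
    $instance = 'https://example-instance.service-now.com'
    $pair     = 'svc_cmdb_sync:PLACEHOLDER_PASSWORD'
    $token    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
    $headers  = @{ Authorization = "Basic $token" }

    # One computer CI, mapped to fields on the cmdb_ci_computer table.
    $body = @{
        name          = 'SERVER01'
        serial_number = 'ABC123456'
        os            = 'Windows Server 2019'
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post `
        -Uri "$instance/api/now/table/cmdb_ci_computer" `
        -Headers $headers `
        -ContentType 'application/json' `
        -Body $body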

Overview of all Scenarios - Target Scenario

Overall we have analysed the tools we have, and we have now laid out a high level design of what we feel the solution will look like. Scenario 1 is ideally where we want to start as a foundation, but we know that in a real world scenario other tools would fall by the wayside with this setup, so we want to allow room to scale and capture everything in Scenario 2.

With all the toolsets we have, I would summarise it as follows: for the tools we don't have connectors for, we simply create custom ones to bring them all together, whilst for those which already have a default connector, we simply need to enrich the data and prepare for quality control before and after setting up a successful connector synchronization.

Next on Part 3

Part 3 will focus on more of the low level design of how we establish the connectors to products which already contain one. It will also look at best practices and ways to enrich and apply quality control to the integrity of everything being synchronized into Service Manager, so we don't create an issue of out of date or orphaned CI records.
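
Ahead of that, here is one very rough way to spot CIs that may be going stale, as a taste of the kind of quality control Part 3 will cover in detail. The 30 day threshold and the Windows Computer class are assumptions for the sketch.

    # Minimal sketch: flag computer CIs that have not been updated recently,
    # a rough first pass at spotting out of date or orphaned records.
    Import-Module SMLets

    $computerClass = Get-SCSMClass -Name Microsoft.Windows.Computer$
    $staleBefore   = (Get-Date).AddDays(-30)   # threshold chosen for illustration

    Get-SCSMObject -Class $computerClass |
        Where-Object { $_.LastModified -lt $staleBefore } |
        Select-Object DisplayName, LastModified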
