
Building the perfect CMDB with System Centre Part 1

D Walsham • Nov 15, 2020

Analysis of the tools in your environment

Introduction

This will be a multi-part series investigating how we can scale our System Centre toolsets and integrate them all together to build a powerhouse: the perfect CMDB.

There tend to be a lot of challenges, not only in maintaining an at least 99.99% accurate CMDB, but also in having one which is actually automated. Across the various projects and customers I have dealt with over the years, most of the processes have been mainly manual; the most I've seen goes as far as semi-automated, where some tools automate part of the work whilst the rest is done manually.

Solutions such as ServiceNow are great, and the integration with SCCM to populate its CMDB is also quite handy, but the caveat is that it is truly dependent on a constantly healthy SCCM in which objects are not orphaned or out of date. Not to mention that SCCM may not be the sole source for all the other toolsets which may reside in your environment, so you are still left with a semi-automated approach to keeping your CMDB up to date.

This series will help to shape and explain the strategies which can be used with System Centre to provide a fully automated approach to keeping a clean, healthy and accurate CMDB. Within System Centre, the main technology representing the CMDB infrastructure is SCSM (System Centre Service Manager), which will be the heart of the solution.

Purpose for the series

Designing, planning and building, as well as maintaining, an accurate CMDB is not an easy project by any means, and its complexity is the main reason the approaches have usually been manual or semi-automated. Though the manual maintenance approach can be the more accurate of the two, it still leaves space for errors, as the environment changes at a pace which user interaction cannot keep up with, even if you were to work on it on a monthly, weekly or even daily basis.

This series will explain in detail, over several parts, how a CMDB can be fully automated with the right planning and the right strategies.

What this part will focus on

Part 1 focuses on the analysis of the toolsets we have, to understand how they can all be brought together into one centralised approach and prepped for automation. I would say this is the most important part of all, as it captures the discovery phase: what is actually out there, and how we will work to bring everything together.

Planning

Analysis of Toolsets

If we take a look at a technology such as SCSM (System Centre Service Manager), which has its own CMDB function, we see that SCSM already contains its own connectors, such as:

  • Active Directory
  • SCCM (System Centre Configuration Manager)
  • SCOM (System Centre Operations Manager)
  • SCVMM (System Centre Virtual Machine Manager)
  • Orchestrator (SCORCH)

These standard connectors are a great foundational base for capturing CIs (Configuration Items) within the environment. However, they may not account for every real-world scenario, in which other tools exist that have no connectors to go by.
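Before adding anything new, it is worth confirming what is already wired up. Here is a minimal sketch, assuming the community SMLets module is installed and with "scsm01" standing in for your SCSM management server, which lists the existing connectors:

```powershell
# Minimal sketch: list the connectors SCSM already has and whether
# they are enabled. Assumes the community SMLets module is installed;
# "scsm01" is a placeholder SCSM management server name.
Import-Module SMLets

Get-SCSMConnector -ComputerName "scsm01" |
    Select-Object DisplayName, Enabled |
    Sort-Object DisplayName |
    Format-Table -AutoSize
```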

Other tools could exist, for example:

  • Anti-virus solutions, e.g. Sophos, Symantec, standalone versions of Endpoint Protection/Defender
  • VMware
  • Exchange
  • Bespoke Toolsets

These all hold a part of the information for each CI which would otherwise get left out of the equation.
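For tools like these, with no native connector, CI data generally has to be pushed into SCSM programmatically. A rough sketch using SMLets, where the class choice and every property value are purely illustrative:

```powershell
# Rough sketch: push a CI discovered by a toolset with no native
# connector (e.g. an anti-virus console export) into the SCSM CMDB.
# The class and all values below are illustrative placeholders.
Import-Module SMLets

$class = Get-SCSMClass -Name "Microsoft.Windows.Computer$"

New-SCSMObject -Class $class -PropertyHashtable @{
    PrincipalName = "server01.contoso.local"  # placeholder FQDN
    DisplayName   = "server01"
}
```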

Let's also expand on some of the other toolsets, such as the ones below, so that we understand not only the work of connecting them, but also how to ensure the integrity of the data is correct; certain risks need to be highlighted as well as addressed.

SCCM

Your SCCM estate may be split into multiple environments, or may not receive a lot of maintenance or upkeep, so even though SCCM may be the main point from which to source most information for your CIs, it is a single point of failure to some degree.

We also have to take into account the types of devices SCCM can manage: primarily Windows, but also Linux/UNIX. With Linux/UNIX machines we have to remember that hardware/software inventory only gets shown in Resource Explorer within SCCM; it doesn't get imported into SCSM by default. That's also assuming your Linux/UNIX estate is managed by SCCM, of course.
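To gauge how much of the estate this affects, you can ask SCCM itself how many clients fall under each operating system. A sketch via CIM, where "sccm01" and the site code "PS1" are placeholders for your SMS provider and site:

```powershell
# Sketch: count SCCM clients per operating system so Linux/UNIX
# (and other non-Windows) devices aren't silently missed.
# "sccm01" and site code "PS1" are placeholders.
Get-CimInstance -ComputerName "sccm01" `
    -Namespace "root\sms\site_PS1" `
    -ClassName "SMS_R_System" |
    Group-Object OperatingSystemNameandVersion |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```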

Apple devices could also be managed by SCCM, but what if you are using a tool such as Casper? That's then another tool you have to obtain CI information from.

SCOM

SCOM essentially has two connectors: one for alerting and another for CMDB information.

Where alerting is concerned, the first worry is of course the health state of SCOM itself, especially configuration churn: non-tailored or untuned environments can end up transferring that churn into the CMDB, which could cause catastrophic issues. There are other tools which can pull in raw alert data, such as Event Management based modules and tools like Evanios, but again that is the semi-automated approach we are looking to move away from.
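One way to get ahead of that churn is to baseline the noisiest monitors and rules before the alert connector is ever switched on. A sketch using the OperationsManager module, where "scom01" is a placeholder management server and the 7-day window is arbitrary:

```powershell
# Sketch: baseline which SCOM workflows raised the most alerts in the
# last 7 days, to spot configuration churn before it reaches the CMDB.
# "scom01" is a placeholder management server name.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "scom01"

Get-SCOMAlert |
    Where-Object { $_.TimeRaised -ge (Get-Date).AddDays(-7) } |
    Group-Object Name |
    Sort-Object Count -Descending |
    Select-Object -First 20 Count, Name
```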

The other challenge is the CMDB connector: in order to get all of the information from SCOM into SCSM, we need to add every single dependent management pack which exists in SCOM as a reference. Mostly these consist of library and discovery management packs, so that SCSM understands the various classes.
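To scope that work, you can inspect a management pack's references to see which library and discovery packs it depends on. A sketch, with an illustrative pack name; the References property comes from the SCOM SDK's ManagementPack object:

```powershell
# Sketch: list the references of a SCOM management pack so you know
# which library/discovery packs the SCSM CMDB connector will need too.
# The pack name below is illustrative.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "scom01"

$mp = Get-SCOMManagementPack -Name "Microsoft.SQLServer.Core.Library"
$mp.References.Values |
    Select-Object Name, Version |
    Sort-Object Name
```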

SCVMM

The connector in SCSM for this is not what you might think. It imports information such as VM templates and other objects which aren't necessarily CI information around VMs, hosts, datastores etc.

That CI information comes primarily from the SCOM CMDB connector instead, and even then we still require the management packs for SCVMM to be present, as well as their various dependencies.
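It is therefore worth verifying that the SCVMM packs have actually landed in SCSM before trusting the connector to bring VM data across. A sketch using SMLets, assuming your SMLets build exposes Get-SCSMManagementPack, and with an illustrative name pattern:

```powershell
# Sketch: check whether the SCVMM management packs are present in
# SCSM before relying on the SCOM CMDB connector for VM data.
# Assumes SMLets provides Get-SCSMManagementPack; the name pattern
# is illustrative.
Import-Module SMLets

Get-SCSMManagementPack -ComputerName "scsm01" |
    Where-Object { $_.Name -like "*VirtualMachineManager*" } |
    Select-Object Name, Version
```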

Active Directory

You might be wondering why I've placed this one last. Well, Active Directory is not only the backbone of an infrastructure in general; it most certainly is for all of the tools mentioned above.

This is why it's imperative to keep the integrity of the data right, because inaccurate data in Active Directory will essentially spread into tools such as SCCM/SCOM and eventually be brought into SCSM.

Now, the tools above do have their own means of filtering data from Active Directory, but this can only go so far. Not to mention scenarios in which we have build rooms or test environments where objects are constantly being rebuilt, deleted or templated, which can also have a long-lasting effect. So designing a strategy not only to ensure data integrity, but also to account for, and even label, the objects which reside in this space is just as important, perhaps more so.
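A simple starting point is flagging stale computer objects before they propagate downstream. A sketch using the ActiveDirectory module, with a 90-day threshold chosen purely for illustration:

```powershell
# Sketch: flag AD computer objects with no logon in 90 days, the kind
# of stale/build-room objects that would otherwise leak into SCCM,
# SCOM and eventually the SCSM CMDB. The threshold is illustrative.
Import-Module ActiveDirectory

$cutoff = (Get-Date).AddDays(-90)
Get-ADComputer -Filter { LastLogonTimeStamp -lt $cutoff } `
    -Properties LastLogonTimeStamp |
    Select-Object Name, @{
        Name       = "LastLogon"
        Expression = { [DateTime]::FromFileTime($_.LastLogonTimeStamp) }
    }
```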

Legacy Toolsets

Whilst I imagine most organisations may be trying to use the latest and greatest technologies available, unfortunately this may not always be the case. Most of the time there will be some legacy tools around which are hanging by a thread but are still very business critical, managing other legacy systems.

They can be anything from old versions of SCCM (SMS 2000, SMS 2003, SCCM 2007; even SCCM 2012 would be considered legacy now) to old versions of SCOM (MOM 2005, SCOM 2007 etc.); we have to account for everything which is being utilised within the business.

The challenges we face are not just the time span of the legacy estate, but also the technology gap between the tools, which may make them more challenging to integrate with or pull data from; and that's just from a manual perspective, let alone an automation perspective.

Where the toolsets centralise data

Most tools these days utilise a SQL Server database or some other form of database (MySQL, NoSQL, Oracle DB).

But some tools are different, and perhaps keep their information in a file within an installation directory, or only within the realms of PowerShell via their SDK.

Legacy tools may use some of those mentioned above, but may sit more in the realms of having to use VBScript, or even something more universally legacy than that.

Overall, we need to capture all of the different avenues by which a connection or possible integration can be made, by investigating each point where communication can be reached.
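Where a tool is SQL-backed, that investigation can be as direct as reading its inventory tables. A sketch using the SqlServer module's Invoke-Sqlcmd, in which every server, database, table and column name is purely illustrative:

```powershell
# Sketch: when a toolset has no API or connector but stores data in
# SQL Server, its inventory tables can often be read directly.
# All names below (server, database, table, columns) are illustrative.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "toolsql01" -Database "AVConsole" -Query @"
SELECT HostName, AgentVersion, LastCheckIn
FROM dbo.ManagedEndpoints
"@
```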

Possibility Of Bi-Directional Synchronization

This is perhaps one of the most debated decisions when it comes to the synchronization of CI objects.

In terms of the discovery phase, this will depend heavily on the role the CMDB will play and how it will fit into your environment, especially where a user operations team and the various tier-based technical teams fit in.

With a one-way synchronization I can see the benefits, as it allows for some user-based validation and integrity checking, whereas a bi-directional method would of course support the fully automated approach we are discussing; but then we may not be able to fully trace the history to see whether a given synchronization decision was indeed accurate or not.

If we look at a technology such as SCOM, its monitoring objects have the same kind of structure as this one-way versus bi-directional scenario. Monitors in SCOM mostly hold a bi-directional configuration, so an alert can be created for, let's say, spiked CPU performance, and before you can even get to the ticket or alert it may already have auto-resolved. Rules and manual-reset monitors, however, take a non-auto-resolution approach, so they are really one-way. I guess one way you can look at it is that full automation is convenient, whilst a manual/semi-automated approach leaves more of a security footprint and a baseline of how and why.

With the fully automated approach we will also be maintaining a historical baseline and security footprint, which we will expand on within the next parts.

Next up on Part 2

Part 2 will contain all of the technical side and a breakdown of how to design a fitting solution. A high-level diagram will also be included to show all of the integration points and how they will be tackled to bring everything together.
