Thursday, April 6, 2017

Domain Discovery - Finding Login Scripts

Background
     With my current contract, I'm helping a company in the process of collapsing multiple domains into one new domain while also tidying up their OU structure, Group Policy Objects, and converting old login scripts to Group Policy Preferences.  A quick query of the SYSVOL and NETLOGON folders shows hundreds of login scripts, some dating back to the early 2000s.  Early on, I suspected that we weren't actually using all of them, so here's the process for discovering exactly which ones you need to concern yourself with. 

The Tools
    All you need to make this work is the ActiveDirectory module. Specifically, we can do almost all of this with the Get-ADUser cmdlet. 


This command pulls a list of all users that are Enabled and have a login script specified. This can take quite a while to run if you have a large domain, but this gives us the data set we need to get started. 
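The original screenshot is gone, but a query along those lines might look like this (a sketch using standard ActiveDirectory-module syntax; the variable name is my own):

```powershell
# Pull every enabled user that has a login script assigned.
# -Properties is needed because ScriptPath isn't returned by default.
$allUsers = Get-ADUser -Filter {Enabled -eq $true -and ScriptPath -like "*"} -Properties ScriptPath
```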


What this does is take just the ScriptPath value, convert it to uppercase (which is important later), and assign those to a new array. 
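A sketch of that step, assuming the `$allUsers` result from the earlier query:

```powershell
# ToUpper() normalizes the case so the -unique comparison later behaves.
$allScripts = $allUsers | ForEach-Object { $_.ScriptPath.ToUpper() }
```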


Now what we're doing is taking the $allScripts array and querying it for unique values. The ToUpper() from earlier is important here because "select -unique" is case sensitive. Had we not converted everything to uppercase, we'd get a ton more unique values because LoginScript.bat, LOGINSCRIPT.bat, and loginscript.bat would all have unique entries.  
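That step is a one-liner against the array from before:

```powershell
# Case was already normalized, so this gives a true distinct list.
$uniqueScripts = $allScripts | select -Unique
```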


This isn't necessary, but it lets you do a quick "does this sound right" check.  In a healthy domain, your count of unique scripts should be very small compared to your number of users with scripts. 
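The sanity check itself can be as simple as comparing the two counts:

```powershell
# Total users with a script vs. distinct scripts actually in use.
$allScripts.Count
$uniqueScripts.Count
```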


This piece counts through every unique script we found and tallies up the number of users with that script applied.  From there, you have options for how you want to output it. For example:
$all | sort -property Instances -descending | ft
Will output the array, sorted from highest to lowest number of instances.  This gives you an immediate look at what your most common, and therefore most important, scripts are to migrate over. Alternatively, it can be extremely helpful to see what scripts only have one or two users assigned to them. In the case of my domain, most of those appear to be typos (i.e. LoginScirpt.bat).  
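For reference, the tallying described above might be sketched like this, assuming the `$allScripts` and `$uniqueScripts` arrays built in the earlier steps (the `Instances` property name matches the sort example; the rest is my own naming):

```powershell
# One object per unique script, with a count of how many users have it.
$all = foreach ($script in $uniqueScripts) {
    New-Object PSObject -Property @{
        Script    = $script
        Instances = ($allScripts | Where-Object { $_ -eq $script }).Count
    }
}
```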
If you need to present a report to your manager, it's a nothing task to run 
$all | export-csv -notypeinformation -path C:\LoginScripts.csv
and fire that off in an email. 

I hope this has been useful. Let me know if there's anything else you'd like to see. 

Monday, March 27, 2017

WMI for Dummies

Part 1 – Basic Queries
To query WMI, you need to use the Get-WmiObject cmdlet.  An example is below:
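The screenshot isn't reproduced here, but a bare-bones call looks like this (using the Win32_NetworkAdapter class this post comes back to later):

```powershell
# Simplest possible WMI query: every network adapter on the local machine.
Get-WmiObject -Class Win32_NetworkAdapter
```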

No special formatting, filtering, output, etc.   A call to WMI like this will only query your local machine. If you’re trying to get information from a remote computer, then you need to use the –ComputerName flag.  Be warned that the credentials that work for your computer might be completely worthless on another machine.  If that happens, you’ll see an error like this:


You have some options when this happens, you can either open a new PowerShell window with the “RunAs” right click option and enter your credentials at that point, or you can use the –Credential flag to enter them on the fly.
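The –Credential option might look like this ("Server01" is a placeholder machine name):

```powershell
# Get-Credential pops a prompt, then the query runs as that account.
Get-WmiObject -Class Win32_NetworkAdapter -ComputerName Server01 -Credential (Get-Credential)
```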


That should be enough to give you a basic understanding of how to query WMI on your computer and remote machines. 

Part 2 – Filtering Results
Sometimes when you query WMI, you’ll get a ton of results back. Easy examples of that are the Win32_Product class (a list of everything in Add/Remove programs), or our example, the Win32_NetworkAdapter class.

There are three typical ways of filtering WMI results. The first is with a pipe to the where-object command, usually aliased to where or just ?
Here’s what the where command looks like in action:
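Something along these lines ("Intel" is just an illustrative match string):

```powershell
# Generate the full adapter list, then filter it client-side.
Get-WmiObject -Class Win32_NetworkAdapter | where { $_.Name -like "*Intel*" }
```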

The downside to using where to filter your results is that the first command will complete before the filtering is performed. This means that if you’re filtering against a giant list, that entire list has to be generated before it can even start filtering.   For things like our Win32_NetworkAdapter class, that isn’t a big deal, but if you’re filtering a few machines out of your entire SCCM SMS_R_Device database, it’s highly inefficient.  When you have giant datasets to filter against, you’re usually better off filtering the list as it’s being generated. The first example of this is the –filter flag.
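The same filter, applied while WMI generates the list instead of after:

```powershell
# WQL syntax: no dash on "like", and "%" is the wildcard.
Get-WmiObject -Class Win32_NetworkAdapter -Filter "Name like '%Intel%'"
```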

Notice that when we use the where command, the filtering is done with “-like” and the “*” symbol, while WMI filtering has no “-“ and uses “%” instead.  We can also use the –query flag to construct the entire command as a WQL query statement.
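The full-WQL version of the same query:

```powershell
# The entire request expressed as a WQL statement.
Get-WmiObject -Query "SELECT * FROM Win32_NetworkAdapter WHERE Name like '%Intel%'"
```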

That should give you a basic idea of how to filter your WMI search results.

Part 3 – Selecting Output
Up until now, we’ve just taken our output however WMI decided to give it to us, which was usually List format.  When looking for a lot of detail about one or two specific results, List is a great format, but when you’re getting data about more than a couple items, you usually want them as a Table.   If you just want to select one data point about your results, there are three main ways of doing this. The first is to pipe your output to the select-object or select command.
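For example:

```powershell
# Keep only the Name property of each adapter.
Get-WmiObject -Class Win32_NetworkAdapter | select Name
```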

Now, we just have the Name value.  You can also do your selection from within a WQL query statement.
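The WQL equivalent:

```powershell
# Selecting the property inside the query itself.
Get-WmiObject -Query "SELECT Name FROM Win32_NetworkAdapter"
```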

What you’ll notice with both of those options is that there’s some header information that comes along with the value you wanted. Personally, I like to use this method of calling out one specific value:
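That method is just wrapping the call in parentheses and grabbing the property directly:

```powershell
# Returns the bare Name values, no headers.
(Get-WmiObject -Class Win32_NetworkAdapter).Name
```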

As you can see, that just returns the value we requested with no other information coming along for the ride.

Part 4 – Formatting and Exporting
Now, we’ve done some basic formatting by only selecting values we needed, but let’s say your SCCM architect really wants a nice report to show off and has come to you about it.  For this example, let’s say he wants a list of all the network adapters on your computer that actually have physical addresses.  Here’s what that statement looks like.
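One way to express "actually has a physical address" in WQL (IS NOT NULL is the WQL null test):

```powershell
# Adapters with no MAC address get filtered out server-side.
Get-WmiObject -Class Win32_NetworkAdapter -Filter "MACAddress IS NOT NULL"
```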

That gives us our dataset to work with, but it’s unformatted.  To format it in a report-friendly way, we pipe it to the format-table command.
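For example:

```powershell
# Table output, showing only the columns the report needs.
Get-WmiObject -Class Win32_NetworkAdapter -Filter "MACAddress IS NOT NULL" |
    format-table -property Name, MACAddress
```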

Note that we can use the –property flag to specify which values from the dataset we want to display. Now, if you want to export this as a CSV, you can’t just pipe your format-table command to the export-csv command.   Instead, we use the select command from earlier to pick out the values we want, and pipe that to the export-csv command.
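Put together (the output path is hypothetical):

```powershell
# select picks the columns; export-csv writes the file.
Get-WmiObject -Class Win32_NetworkAdapter -Filter "MACAddress IS NOT NULL" |
    select Name, MACAddress | export-csv -path C:\AdapterReport.csv -notypeinformation
```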

You’ll get no output on your screen (assuming you typed it all in correctly), but when you open the CSV you created, you’ll see your data.

You may be wondering about the –notypeinformation flag. That stops your CSV file from having weird header information in the first row (a “#TYPE …” line describing the .NET type of the exported objects).


Hopefully, that gives you some basic information to start with. WMI is one of those areas that just doesn't have great documentation within the Get-Help command because each class has its own quirks, values, etc.  Microsoft does have excellent documentation for each class if you have a basic understanding of how WMI works, so just go to Google and type in the exact name of the class you're querying to find it. 
Any further questions, just let me know! 

Thursday, March 2, 2017

Distribution Point Migration Tool-Kit

The toolkit can be downloaded from my Technet Gallery HERE
This post has been a long time coming, but creating something robust enough to work in most environments that's still user friendly (with associated documentation) can take a little bit of time.  In the course of one contract I've worked, we realized that we needed a way to convert old Secondary SCCM sites into Distribution Points, but we wouldn't be given any new servers to migrate to. We also knew that the WAN links connecting these remote sites back to our headquarters were severely lacking.  Our solution was to prestage all the content currently stored on the content libraries so we could strip off all the roles (which would clear the SCCM content library), remove unneeded programs and features, add the servers back as Distribution Points, and then reload the prestaged content so it wouldn't have to transfer over our unspeakably slow WAN connection. We got a peek at this work with my last post of the SCCM Universal Prestage script, but this post will give you the other pieces of the puzzle.  

The Core Functions


Initialize-Toolkit
                This is the first function you call if you’re running the Migration Kit from a PowerShell window you didn’t summon up from inside the Configuration Manager console.  This function will verify that you have Administrator rights, will seek out and import the Configuration Manager module, and will map your CMSite PSDrive if you don’t already have it mapped. This function is also called within every other function after a quick check to make sure that the CMSite drive is mapped.  If it isn't mapped, it calls the Initialize-Toolkit function and maps it. 
Console run without admin rights

After the drive has been created
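The toolkit download has the full implementation, but the checks described above might be sketched like this (the site code and server name are placeholders for your environment):

```powershell
# Admin check
$identity = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
$isAdmin  = $identity.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

# Import the Configuration Manager module from the console install path
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"

# Map the CMSite drive if it isn't already there
if (-not (Get-PSDrive -Name "TST" -ErrorAction SilentlyContinue)) {
    New-PSDrive -Name "TST" -PSProvider CMSite -Root "PriServer1.contoso.com"
}
```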

Get-DPContent
                The second function in the toolkit will query our primary site server and return a list of all content that is assigned to the distribution point we provided.  There are multiple ways to get this information. I’ve seen it done with the Get-CMDeploymentPackage cmdlet, since that will also return the package type information we’ll need later.   However, I chose to do it via the SMS_DPContentInfo WMI class because I find that it returns the same level of information, but does so in roughly 1/3 the time.  It also means that you can run the command without needing to be connected to the CMSite drive if you don’t want to fully initialize everything. 
 A simple report of package IDs and names

An example of the data stored by SMS_DPContentInfo
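The WMI approach described above might be sketched like this (site specifics and the DP name are placeholders; NALPath is the property SMS_DPContentInfo uses to record which DP holds the content):

```powershell
# Adjust to your site code, primary server, and DP hostname.
$namespace  = "root/SMS/site_TST"
$siteServer = "PriServer1"

Get-WmiObject -Namespace $namespace -ComputerName $siteServer -Class SMS_DPContentInfo |
    Where-Object { $_.NALPath -like "*dp01.contoso.com*" } |
    select PackageID, Name
```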
Prestage-Content
                This is one of the ‘heavy lifters’ of the toolkit.  This function requires a package ID number, the Distribution Point containing the package, and the folder you want it dumped to after creation. What this creates is a PKGX file named with the package ID of whatever you prestaged.  The way it decides what to prestage is based on the PackageType value that comes from WMI’s SMS_PackageBaseClass. Again, you can get a package type identifier from Get-CMDeploymentPackage if you’d rather go that way, but I like WMI.  Once it’s pulled the PackageType value, it runs it through a SWITCH command and runs the appropriate Publish-CMPrestageContent command.  I don’t do any special logging with this function since Publish-CMPrestageContent already does a good job of it.
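The shape of that switch might look like the sketch below. The PackageType values and the Publish-CMPrestageContent parameter sets shown here are as commonly documented, not lifted from the toolkit itself — verify them against SMS_PackageBaseClass and Get-Help Publish-CMPrestageContent before relying on them. `$packageID`, `$dpName`, and `$outFile` are the function's inputs.

```powershell
# Look up the package type, then pick the matching prestage parameter set.
$pkg = Get-WmiObject -Namespace $namespace -ComputerName $siteServer `
    -Class SMS_PackageBaseClass -Filter "PackageID='$packageID'"

switch ($pkg.PackageType) {
    0   { Publish-CMPrestageContent -PackageId $packageID -DistributionPointName $dpName -FileName $outFile }
    3   { Publish-CMPrestageContent -DriverPackageId $packageID -DistributionPointName $dpName -FileName $outFile }
    257 { Publish-CMPrestageContent -OperatingSystemImageId $packageID -DistributionPointName $dpName -FileName $outFile }
    258 { Publish-CMPrestageContent -BootImageId $packageID -DistributionPointName $dpName -FileName $outFile }
}
```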
 Prestaging a single file

Prestaging multiple packages with a For Loop

Restage-Content
                This function is one of the main reasons I like to save my prestage files with the PackageID as the name.  You input the folder containing the prestage files as well as the name of the Distribution Point they need to be assigned to, and this will get the package type information for each package, run the same switch as Prestage-Content, and issue the Start-CMContentDistribution command with the appropriate flags.  Just to save time, it will also use the Get-DPContent function to make sure that it isn't trying to reassign packages that are already assigned.
Packages were already assigned in SCCM

 Package distribution in progress

Extract-Content
                This function calls upon Microsoft’s ExtractContent.exe tool to run, and is designed to be run locally from whatever DP you’re importing the package to.  The only flag you need to specify is the location of the prestaged content folder.  It takes the hostname of the computer it’s running from and makes a WMI query to see any packages assigned to it that aren’t in State 0.  If the package shows as state 0, then there’s no further work to be done, and we can just work on the others.  There are multiple ways you can run the extractcontent.exe tool, but I’ve found some to work better than others.   Whether you run it specifying a single package to extract or you run it with an entire folder targeted, I’ve found that when I check the Distribution Point Configuration Status in the SCCM console, there’s always some that still show “waiting for prestage content.”  In almost every case where that’s happened, just re-prestaging the content cleared it up. I don’t know if this is a limitation of the extractcontent.exe tool, my impatience, or what, but it works for me.  Because of that, I actually have my Extract-Content function run through the Prestage content folder one item at a time, so you can re-run the function, it will re-query for unsuccessful packages, and only attempt to extract the packages that didn’t make it the first time.
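The "anything not in State 0" query described above might look like the sketch below; the class and property names are as I recall them from the SCCM WMI schema, so verify them in your site before use:

```powershell
# State 0 means the content is already installed on this DP; anything else
# still needs its prestage file extracted.
Get-WmiObject -Namespace $namespace -ComputerName $siteServer `
    -Class SMS_PackageStatusDistPointsSummarizer `
    -Filter "ServerNALPath like '%$($env:COMPUTERNAME)%' and State <> 0"
```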
 
ExtractContent running

Example

Stage-LocalDPContent
                I put this together for our SCCM architect who wanted something that he could quickly and easily run while logged into our Secondary Site Server that was being migrated.  What this does is query the local DP for all assigned content, export it with the Prestage-Content function, and give you a progress bar to show you how far along you are. 

Thursday, February 23, 2017

Universal SCCM Content Prestager

PS1 file can be found at my TechNet gallery: HERE

The Use Case
    Prestaging content is a fact of life in the SCCM world.  Whether you're standing up a new site, cloning a DP, or sending packages to a site with really bad bandwidth, there are a variety of reasons you need to create prestage packages.  At multiple contracts I worked, these packages were created by finding the content in the Configuration Manager GUI, right clicking, selecting Create Prestage Content File, and going through the wizard.  While this is technically a correct way to do things, it's cumbersome, requires you to remember where everything is in the menu structure, and ties up your console while you do packages one at a time.   I could see using this method for one or two packages every now and then, but you're on a PowerShell blog. Here, we're all about scale. 

Not pictured: efficiency


Making it happen
     The Configuration Manager module actually comes with a prestage cmdlet built right in, but this cmdlet is one of the most poorly written ones in all of PowerShell.  It has no real intelligence of its own, requiring you to spell out exactly what package type you want to back up. Since every package has a unique PackageID value, I never understood why they didn't just make it use that number and get on with life, but they didn't.  Feel free to download the script and follow along.
I actually couldn't fit them all on one screen


The Script
     The first thing we need to do is declare our variables. This script needs to know what the package ID number is (which can be found literally everywhere the package is mentioned in the SCCM console or WMI interface),  what DP we're pulling the content from, and where the file needs to be saved. All of these are mandatory, so you'll be prompted for them if you don't put them inline.  Also, depending on the prestage location, you'll need admin rights to move files there, so we just check for those at the outset.  We'll also make sure you're connected to your CMSite PSDrive. I also have the script make sure the package isn't already there.  Something else to keep in mind is that the DP name needs to be a FQDN when the command runs. If you enter it with just the hostname, the script will sort that out for you, so no worries.



     After all the pre-reqs have checked out, the real logic comes in. The first thing it does is try to run Get-CMPackage with the package ID. If that comes back as $NULL, meaning there was no regular package with that ID, it starts down the list of available content types. It checks whether it's a software update package, application, boot image, OS image, or driver package, and keeps checking until it finds something that actually returns an object. Then, it makes a mental note of what it found so we can use it later.  For troubleshooting and debugging reasons, I like to output the name of the package as well as its source path, but this isn't necessary. It will also tell you where the file is going and what it will be named.  Then, we use a Switch statement with the count variable (our mental note from earlier) to select the correct Publish-CMPrestageContent flags that actually do the prestaging work for us.
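A condensed sketch of that fallback chain is below. The cmdlet names are from the ConfigurationManager module, but the application lookup is omitted (applications aren't cleanly queried by package ID), and the full, working version is in the linked script:

```powershell
# Try each content type in turn until one returns an object,
# keeping a "mental note" ($count) of which type matched.
$count = 0
$pkg = Get-CMPackage -Id $packageID
if ($pkg -eq $null) { $count = 1; $pkg = Get-CMSoftwareUpdateDeploymentPackage -Id $packageID }
if ($pkg -eq $null) { $count = 2; $pkg = Get-CMBootImage -Id $packageID }
if ($pkg -eq $null) { $count = 3; $pkg = Get-CMOperatingSystemImage -Id $packageID }
if ($pkg -eq $null) { $count = 4; $pkg = Get-CMDriverPackage -Id $packageID }

# $count then drives the Switch that picks the right
# Publish-CMPrestageContent parameter set.
Write-Host "Found $($pkg.Name) (type $count)"
```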

Figuring out what you wanted


Actually prestaging it




Actually Using It
     On its own, it's a nice function to have loaded in my shell. Being able to generate a prestage file without tying up my GUI is always handy, and depending on the organizational skills of the previous SCCM admins, not having to dig around to find a package in their menus can be a real time saver.  Where this script comes into its own, however, is when it's chained together with other commands, which is what we'll discuss in our next post. 

Tuesday, February 21, 2017

2-21-2017 - Working with CSV Files


Hey everyone, it's been a little while since my last post, but work's been busy.  A recurring issue I see on TechNet is that plenty of people have trouble working with importing/exporting CSV files.  Specifically, I see questions about how to modify CSV files. Personally, I don't see any value in modifying the CSV file directly, but rather importing the data from said file and working with the dataset natively in PowerShell.  Using one person's thread as an example, he had a CSV file full of IP addresses that corresponded to his VMs.  He wanted to add data about the VM based on the IP address, but wanted to know how to "modify the line" in the CSV file.    He had already written part of the script to create the CSV by pinging each computer in his IP range and outputting that to CSV.  For our test, here's what our CSV looks like:
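(The original screenshot is gone; for illustration, assume a layout along these lines, with hypothetical column names and addresses:)

```
IPAddress,Status
10.0.0.10,Online
10.0.0.11,Online
10.0.0.12,Offline
```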
Now, let's say we want to find the hostnames for each of these computers and add that to a new column. There are a couple ways we could do this, but in an enterprise environment, a good way to do it is to just ask AD who it is.   First, we create the empty "column" for our Hostname value.
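A sketch of that step (the file path is hypothetical):

```powershell
# Import the CSV, then add an empty Hostname property to every row.
# Add-Member operates on each object coming down the pipe.
$dataset = Import-Csv -Path C:\PingResults.csv
$dataset | Add-Member -MemberType NoteProperty -Name "Hostname" -Value ""
```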

Once that's created, we need to fill it with values:
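One way to do that, assuming an IPAddress column (adjust the name to your file's header):

```powershell
# Ask AD for the computer whose IPv4Address matches each row.
foreach ($row in $dataset) {
    $adComp = Get-ADComputer -Filter "IPv4Address -eq '$($row.IPAddress)'" -Properties IPv4Address
    if ($adComp) { $row.Hostname = $adComp.Name }
    else { Write-Host -ForegroundColor Red "No AD entry for $($row.IPAddress)" }
}
```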

Keep in mind that you will see a lot of red text for any computer that doesn't have an entry in AD. This could be a printer, a switch, etc.  If you're using Windows 8.1+ or Server 2012R2+, you can use the resolve-dnsname cmdlet. 

You can add more columns to the CSV by using the Add-Member cmdlet to your heart's content, and when you're done, you pipe your $dataset variable back to the export-csv cmdlet.   
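The round trip back to disk is just:

```powershell
# Write the modified dataset back out (path is hypothetical).
$dataset | export-csv -path C:\PingResults.csv -notypeinformation
```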

Really, once you start looking at CSV files as arrays of custom objects, they're pretty easy to work with.

Friday, February 10, 2017

2-10-2017: Importing drivers into SCCM in bulk

This is taken from my TechNet gallery here: https://goo.gl/n1QT89

     When you're tasked with something like a Windows 10 upgrade, you'll find yourself spending lots of time downloading and importing drivers into SCCM.   While this script won't go out and download them for you (like the Dell and HP driver import tools I've seen out there), it is manufacturer, model, and architecture agnostic; it won't get you caught up trying to negotiate your way past your firewall and proxy teams; and it runs in a bit under 50 lines of code (including comments). Rather than pasting in the entire thing, I'll do a screenshot and walk through from there.

     For this script to work, there's some groundwork required on your part. When you download the drivers, they need to be downloaded into a folder that has whatever name you want for your driver package later.  If you're like me, you're already doing this as you download. If I need drivers for an HP Z230 desktop, the folder they're saved in is already called "HP Z230 Windows 10 x64" or something similar so I can find them later.  The way this script works, whatever your folders' names are is what names your driver packages will end up with.
    Aside from that, all you need to do is plug in the path to the file share that has all your make/model folders in the root, as well as the location where you want to store your driver packages.
    Something you will notice in this script is that I bounce between my C:\ drive and my SCCM drive. This is because UNC paths don't always work as expected when you're on the SCCM drive, and SCCM cmdlets don't play nice running from anything other than the SCCM drive.  To guarantee they both work when needed, I just switch between locations, and it's no big deal. 
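The loop described above might be sketched like this. The share paths and site code are placeholders, and the New-CMDriverPackage/Import-CMDriver parameters shown are as I recall them from the ConfigurationManager module — check Get-Help in your environment rather than trusting this sketch:

```powershell
# One driver package per make/model folder, named after the folder.
$driverRoot  = "\\FileServer\Drivers"        # folders named per make/model
$packageRoot = "\\FileServer\DriverPackages" # where driver packages will live

foreach ($folder in (Get-ChildItem -Path $driverRoot -Directory)) {
    Set-Location C:\    # UNC paths behave better off the site drive
    $infFiles = Get-ChildItem -Path $folder.FullName -Recurse -Filter *.inf

    Set-Location TST:   # SCCM cmdlets only run from the site drive
    $pkg = New-CMDriverPackage -Name $folder.Name -Path "$packageRoot\$($folder.Name)"
    foreach ($inf in $infFiles) {
        Import-CMDriver -UncFileLocation $inf.FullName -EnableAndAllowInstall $true -DriverPackage $pkg
    }
}
```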
    This script can take a little while to run, but it will give you feedback as it goes, and it doesn't lock you out of the SCCM GUI while it runs.

Monday, February 6, 2017

02-06-2017: SCCM Powershell Tools

I'll get this out of the way right now: I think SCCM's PowerShell module is garbage.  It's slow, it doesn't work like you'd expect a lot of the time, and it's got absolutely nothing on the ease of use and functionality of something like the ActiveDirectory module.  Thankfully, WMI is still a thing, and that gives us an excellent way to interact with SCCM via PowerShell. With that in mind, here's some quick and dirty (but still useful) PowerShell functions I've put together. Most, if not all, PowerShell tools I've written for SCCM can also be done with SQL queries and commands, so if that's more your thing, have at.

Find-ClientByMac
File can be downloaded from my technet gallery here
This is a simple enough tool. It will find any clients in the SCCM database that match a "like" query against the MAC address you provide.  This is handy for finding duplicate objects with the same MAC, which can royally screw with trying to PXE boot and image a computer.

Function Find-ClientByMAC ($inputMac){
    # Adjust these to your site code and primary site server
    $namespace = "root/SMS/Site_TST"
    $siteServer = "testServer"

    # Normalize the input: strip spaces, convert dashes to colons
    $inputMac = $inputMac.Replace(" ","")
    $inputMac = $inputMac.Replace("-",":")

    # If there are no separators at all, insert a ":" after every pair
    if ($inputMac -notlike "*:*")
    {
        $count = 0
        while ($count -lt ($inputMac.Length - 2))
        {
            $inputMac = $inputMac.Insert($count + 2,':')
            $count += 3
        }
        Write-Host "Searching for $inputMac"   # show the normalized MAC
    }
    Get-WmiObject -Namespace $namespace -ComputerName $siteServer -Class SMS_R_System -Filter "MACAddresses like '%$inputMac%'"
}


Something to keep in mind with SCCM is that it stores MAC addresses with ":" between each pair, but your computer's IPConfig command will give it to you with "-"'s.  This will replace a - with a :, and if you just feed it a straight set of text, it will insert the ":" between each pair.
This will find any computer that matches the MAC you provided and returns a WMI object of the SMS_R_System class.  If you want to delete the duplicate objects, you can simply pipe to the remove-wmiobject command to delete.

Thursday, February 2, 2017

02-02-2017 - Finding Empty Device Collections in SCCM

     Today's script is going to be almost a one-liner, but a very useful one.  I had found SQL queries that would do this same thing, but I frequently find myself working on a computer that does not have the SQL management software installed on it, and this query doesn't run cleanly through the SCCM Management software.  However, I'm always on a computer with PowerShell and since this runs through WMI, you don't even need to connect to the SCCM site code drive.
     Having the right amount of device collections in SCCM is a bit of a balancing act. You don't want so few collections that you don't have a good way to sort out your clients logically, but you don't want so many that you've gunked up your database with entries you're never actually going to use. On top of that, some of the companies I've contracted with have had many collections that never held a single client. This script will go through and find those empty collections for you.   If you just want the PS1 file, click HERE.

Step 1: Initializing a few variables
Typically, if I'm working with SCCM through WMI, I like to set up a few variables in my shell right away to make my life easier and save some keystrokes throughout the day.
$siteServer = "PriServer1" # whatever the name of your primary server is
$namespace = "root/SMS/site_TST" # root/SMS/site_  whatever your site code is

If you're constantly making WMI calls to SCCM, being able to just enter those variables instead of typing it all out can make things easier. At least, that's what I've found. 

Step 2: Get your SCCM Device Collections
This script is going to be done on a single line, but I'll break it out into chunks first. The first thing we need to do is pull all of the device collections from SCCM.
Get-WmiObject -Namespace $namespace -ComputerName $siteServer -Class SMS_Collection -filter 'CollectionType = 2'
 
If you wanted to pull user collections, you would change 'CollectionType = 2' to 'CollectionType = 1'
If you want to get both sets of collections, just leave out the -filter portion entirely.

Step 3: Check Those Collections for Members
There are a couple ways you could do this. You could pipe your collections to a ForEach loop with an if statement in it. Alternatively, I choose to just pipe the first cmdlet into a where statement that does it for me.
| where {$colID = $_.CollectionID;(Get-WmiObject -Namespace $namespace -ComputerName $siteServer -class SMS_FullCollectionMembership -filter "CollectionID='$colID'").count -eq 0}

What this is doing is setting a variable called "$colID" to be the collection ID number from the collection passed through the pipe.  Then, we're going to do a WMI query to the SMS_FullCollectionMembership class, which is a class that maintains a list of every device/user to collection association. We're adding a filter to only show collection IDs that match the one we want, counting up all the members, and if that number equals 0, we know the collection has nothing in it.
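Putting the two pieces together, the whole one-liner reads:

```powershell
# Device collections whose full membership count is zero.
Get-WmiObject -Namespace $namespace -ComputerName $siteServer -Class SMS_Collection -filter 'CollectionType = 2' |
    where {$colID = $_.CollectionID; (Get-WmiObject -Namespace $namespace -ComputerName $siteServer -class SMS_FullCollectionMembership -filter "CollectionID='$colID'").count -eq 0}
```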

Step 4: Profit
If you just stop there, you have a set of list-formatted data that, while not pretty, does contain all the information and objects you need to do work. It's not the easiest thing to read, but it does give you a very solid information dump about each empty collection.

Chances are, if you're generating this report for a manager, customer, etc, they're going to want it formatted a bit nicer, and they'll probably want it in an Excel spreadsheet.  The easiest way to do this is:
| select Name,CollectionID | export-csv C:\users\MyUser\Documents\EmptyCollections.csv -noTypeInformation

 Alternatively, if you get the go-ahead to remove these device collections, instead of piping to a select/export-csv statement, you can pipe to remove-wmiobject instead. 
For the love of god, run it with -whatif first.  

Thanks for reading, and if you have any questions, feel free to post them in the comments. 

Monday, January 30, 2017

1-30-2017: Using Custom Objects for Fun and Profit

    Custom objects were one of those things I never really saw the point of when I read about them. It wasn't until I actually messed around with them that I really understood their uses. They're especially good for generating reports that pull data from multiple sources.  For example, let's say your manager has requested a report of all the computers in your Accounting department. He wants to know their hostname, their IP Address, their Make/Model, and their Serial Number.  Now, there's no built in function or class (that I know of) that will return all of those pieces of information, so we'll have to pull from multiple data sets.  For this example, we're going to need Active Directory, the Win32_BIOS class, and the Win32_ComputerSystem class. For the sake of argument, let's say we'll also need the Get-CompOU script from the previous post and the hostname of a computer in the Accounting department's OU to get started.
An example asset report script can be downloaded here:
https://goo.gl/5bB03A

Step 1: Generate a list of all the computers needed for this report
$allComps = Get-ADComputer -SearchBase (Get-CompOU AccountingPC1) -filter * -properties * 

      This gives us an array of all the computer objects hosted in the same OU as the Accounting PC we started with. We can get the hostname and IP information just from these objects, but we need more.

Step 2: Enumerate all the computer objects and start data mining
ForEach ($comp in $allComps)
{
        $SWMI = Get-WmiObject -ComputerName $comp.Name -class Win32_ComputerSystem
        $BWMI = Get-WmiObject -ComputerName $comp.name -class Win32_BIOS


We're not quite done with the loop yet, but this shows how we're invoking the WMI classes needed for each computer.  Step 3 takes place within the same loop.

Step 3: Create the custom object to hold our required data
$object = New-Object -TypeName PSObject
$object | Add-Member -MemberType NoteProperty -Name "Hostname" -Value $comp.Name
$object | Add-Member -MemberType NoteProperty -Name "DNSName" -Value $comp.DNSHostName
$object | Add-Member -MemberType NoteProperty -Name "Serial" -Value $BWMI.SerialNumber
$object | Add-Member -MemberType NoteProperty -Name "IPAddress" -Value $comp.IPV4Address 
 
$object | Add-Member -MemberType NoteProperty -Name "Make" -Value $SWMI.Manufacturer
$object | Add-Member -MemberType NoteProperty -Name "Model" -Value $SWMI.Model 

$allObj += $object
 }
 This one is pretty straightforward. It's a lot of text just to say that you're adding properties to an item you created, assigning those properties values from the classes queried in Step 2, and then adding that object to an array of objects.  You can type $allObj = @() at the beginning of your script, but it isn't required.

Step 4: Profit
    At this point, we can dump our report out to a CSV, or we can just output to the screen.  Typing $allObj will just output to the screen, but if we want to make a report our management will be proud of:
$allObj | export-csv -path C:\users\Admin\Documents\AssetReport.csv -NoTypeInformation

And we're done!  Now, if you're working in an environment with SCCM or other management software, these reports might be more easily generated by querying that database. However, this will give you up to the minute accuracy as the WMI queries are done live.  If there's anything you want to see, leave me a comment, and I'll add that to my next post.  

Wednesday, January 25, 2017

01-25-2017: A quick (and useful) PowerShell script



    Whether you've been in the IT game for years or are just starting out, there are a few simple tools you'll find yourself relying on almost every day to do tasks so basic, you'll wonder why Microsoft doesn't make them just a little easier to do.   The object of this post is to give you a couple small tools that will make your life easier.

Cmdlet 1: Get-CompOU - return the Organizational Unit of any hostname
     If you manage an Active Directory network of any size, you'll probably wind up troubleshooting Group Policy, and one of the major things that decides what policies your computer receives is the Organizational Unit (OU) where your computer resides.  Personally, I'm a lazy IT guy. I don't like opening ADUC, right-clicking the domain, clicking search, typing my computer name (only to realize I forgot to select "Computer" from the drop down), clicking Search, right clicking my computer name, selecting properties, and then finding out the Object tab isn't there because I forgot to turn on Advanced Features, which means I get to close out of my search and start the whole thing over again.  Even once you've found it, now you have to memorize or copy it so you can find your problem child computer and move it to the right OU.
    Alternatively, you can have some simple PowerShell scripts attached to your profile that will do it all in one line.  You can download the script here: https://goo.gl/Bv90kk, but I'll also go through what makes it tick.

Code:

Function Get-CompOU ($computerName){
    $comp = Get-ADComputer $computerName
    if ($comp -eq $null) {write-host -ForegroundColor Red "Computer object $computerName was not found"; return}
    else {
        # Everything after the first comma of the DistinguishedName is the parent OU's DN
        $compOU = Get-ADOrganizationalUnit ($comp.DistinguishedName.Substring($comp.DistinguishedName.IndexOf(",")+1))
        return $compOU
        }
} 
 

First off, I make all of my scripts that aren't quick and dirty one-liners as Functions. This means that once I've imported my profile, I can just type "get-compou [somehostname]," and my computer will know what to do just the same as if I had typed "get-childitem" or "get-wmiobject."  The "$computerName" variable is what's passed as the first argument.   The second line calls get-adcomputer to return the AD computer object of the host name. Then, we check to make sure the computer name was actually found. If it wasn't, it'll tell you and return nothing.  If it was found, then we find the OU that matches the one given in the Distinguished Name of the computer and return that object.
      In case you're wondering why we don't just return the OU name that we extracted from the Distinguished Name, it's because I want to return an actual AD OrganizationalUnit object instead of just a string. That gives us a lot more power and freedom with what we can do. It's also why we use return at the end instead of a write-host.
     For example:  TechPC1 is able to run regedit no problem.  TechPC2 gets an error stating that regedit is disabled by his system administrator, even though 1 and 2 should be getting the same policies.  Our Sys Admin runs Get-CompOU TechPC1 and finds that PC1 is in the correct OU.  Running Get-CompOU TechPC2 reveals that PC2 was accidentally placed in the same OU as the regular production user machines.  From here, our Sys Admin types  get-adcomputer TechPC2 | move-adobject -TargetPath (Get-CompOU TechPC1). 
    Voila! That computer has been moved.  Personally, I find this easier than all the right clicking, searching, copying, and manually moving. The nice thing with PowerShell being so extensible is that the foreach-object cmdlet gives you the ability to run this script against an entire text file of names, or generate a report of every computer in a given OU.
    I hope you've found this useful. Stay tuned for the next post.