Monday, October 14, 2013

Working with SharePoint web parts and the web part configuration

I was tasked recently with making a configurable option for a SharePoint web part. Here are a few interesting issues I ran into along the way while making this option to select between two features.

Creating a Dropdown menu in the Web Part Config

For web part config pages, particular data types each get their own treatment in how they appear on the resulting configuration form. I needed a dropdown menu, which, as I learned, requires an enum. This was simple enough:
[WebBrowsable(true), Category("My Category"),
WebPartStorage(Storage.Shared),
WebDisplayName("Default Viewer")]
public ViewerType ViewerSelection
{
    get
    {
        return viewerSelection.Value;
    }
    set
    {
        viewerSelection = value;
    }
}

An explanation of the attributes:


  • The WebBrowsable attribute makes this property actually show up on the form

  • The Category attribute dictates the category this option will appear in

  • The WebPartStorage attribute defines how this setting is shared between users. My choice of Storage.Shared from the Storage enumeration means the setting is shared between all users, and additionally that only users with Contribute or greater permission can change it.

  • The WebDisplayName attribute defines a title under which the property appears

Nullable backing fields

At this point you’re probably wondering what is in the enum ViewerType, and why in the “get” I retrieve the Value of the backing field instead of returning the field directly. Here’s what it looks like:


public enum ViewerType
{
    HTML,
    Silverlight
};
private ViewerType? viewerSelection = null;


This was necessary because of an interesting situation where if the web part hadn’t been configured yet (first time use), it should simply use an option configured in my Service Application. I would have just made the property itself and the backing field nullable, but doing so would cause the enum to be interpreted as a string and my web part config would lose the dropdown menu functionality. Yuck. By making only the backing field nullable, I’m able to test for a null value in the constructor of the web part config and grab the appropriate value from the Service Application instead while maintaining the dropdown menu.

An enum is a value type and cannot simply be set to null like so:


private ViewerType viewerSelection = null;


There is also no implicit conversion between a nullable enum and an enum, which is why I had to implement the “get” on my property to retrieve the actual value from the nullable backing field.
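The nullable-backing-field idea isn't SharePoint-specific. Here's a minimal Python sketch of the same pattern; the class name and the SERVICE_APP_DEFAULT stand-in are mine, purely for illustration of "null backing field means fall back to a service-level default":

```python
from enum import Enum

class ViewerType(Enum):
    HTML = 0
    SILVERLIGHT = 1

# Stand-in for the value the Service Application would provide.
SERVICE_APP_DEFAULT = ViewerType.HTML

class WebPartConfig:
    def __init__(self):
        # Nullable backing field: None means "not configured yet".
        self._viewer_selection = None

    @property
    def viewer_selection(self):
        # First-time use: fall back to the service-level default,
        # while the property itself always yields a concrete value.
        if self._viewer_selection is None:
            return SERVICE_APP_DEFAULT
        return self._viewer_selection

    @viewer_selection.setter
    def viewer_selection(self, value):
        self._viewer_selection = value
```

The public property is never null, so callers keep their strongly-typed dropdown-style choice, while the constructor (or here, the getter) can still detect the unconfigured case.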

In the end, the result of this code is a dropdown menu for my web part config with a choice between two viewers. If it hasn’t been configured yet, I have the ability to test an enum for a null value and configure it appropriately. Once it’s been configured, even if you navigate away or close the browser, SharePoint will store this value and re-set it when you return!

Monday, September 16, 2013

Using PowerShell to shutdown, apply snapshots, and restart VMs… remotely!

I’ve been working on a project recently that required me to remotely apply snapshots to multiple VMs. This was on Windows Server 2008 R2, so I didn’t have access to the nice Server 2012 PowerShell cmdlets like Get-VM; it all had to be done through WMI instead.
First I set up credentials and PowerShell session:

$pwd = ConvertTo-SecureString -AsPlainText -Force -String "your password"
$cred= New-Object System.Management.Automation.PSCredential ("domain\user", $pwd)

$sesh = new-pssession -computername "remoteComputer" -credential $cred

Now that my session is set up, I can send over a block of commands using Invoke-Command, like so:

invoke-command -session $sesh -scriptblock {
    #script to be run on remote computer
}


Next, I set up the VM names that I want to revert:

$servers = "server1", "server2", "server3"

Now I need to set a variable for the snapshot name. It’s important to note here that for this specific script, all three servers have a snapshot with the exact same name that I’ll be reverting to.

$SnapshotName = "dat snapshot"

Next I need to issue a shutdown command to each server. As far as I know, you CANNOT apply a snapshot to a running VM this way. You can do so manually through Hyper-V, but I ran into issues applying snapshots through PowerShell while the target VM is running.




foreach ($server in $servers) {
    $query = "SELECT * FROM Msvm_ComputerSystem WHERE ElementName='" + $server + "'"
    $VM = get-wmiobject -query $query -namespace "root\virtualization" -computername "."
    $query = "SELECT * FROM Msvm_ShutdownComponent WHERE SystemName='" + $VM.name + "'"
    $Shutdown = get-wmiobject -query $query -namespace "root\virtualization" -computername "."
    $Shutdown.InitiateShutdown($true,"Because I said so")
}


Wait for each server to shut down:



foreach ($server in $servers) {
    $query = "SELECT * FROM Msvm_ComputerSystem WHERE ElementName='" + $server + "'"
    $VM = get-wmiobject -query $query -namespace "root\virtualization" -computername "."

    while ($VM.OnTimeInMilliseconds -ne 0)
    {
        $VM = get-wmiobject -query $query -namespace "root\virtualization" -computername "."
        Start-Sleep -s 5
    }
}


At this point, I needed to code in a 60-second delay once this loop finished (you can do that with Start-Sleep -s 60). This is due to how I had to query the VMs’ status: I simply looked at the VM’s OnTimeInMilliseconds property and figured that when it was reset to 0, the VM would be shut down. That’s not quite the case, however. When OnTimeInMilliseconds is set to 0, the VM has reached a stopping state, not a stopped state. I assume the stopping state will only last a few seconds at worst, but I give it 60 seconds for good measure. I haven’t run into any issues with this yet, and I’m not in any hurry.
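The wait-then-grace-period logic above boils down to a generic polling pattern. Here's a minimal Python sketch of it; the function name and parameters are my own, not part of the actual script:

```python
import time

def wait_until(condition, interval=5, grace=60, sleep=time.sleep):
    """Poll condition() until it returns True, then wait out a grace period.

    The grace period covers the gap between the system reporting a
    'stopping' state and it actually being stopped, as described above.
    The sleep function is injectable so the loop can be tested without
    real delays.
    """
    while not condition():
        sleep(interval)
    sleep(grace)
```

In the VM script, the condition would be "OnTimeInMilliseconds equals 0", polled every 5 seconds, with the 60-second grace period tacked on at the end.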

The last step is to apply the snapshot and start the VM back up. I do both of those in one step for each VM:



$VMMS = gwmi -namespace root\virtualization Msvm_VirtualSystemManagementService -computername "."

foreach ($server in $servers)
{
    # Get the virtual machine object
    $VM = gwmi MSVM_ComputerSystem -filter "ElementName='$server'" -namespace "root\virtualization" -computername "."

    # Find the snapshot that we want to apply
    $Snapshot = gwmi -Namespace root\virtualization -Query "Associators Of {$VM} Where AssocClass=Msvm_ElementSettingData ResultClass=Msvm_VirtualSystemSettingData" | where {$_.ElementName -eq $SnapshotName} | select -first 1

    # Apply the snapshot
    $VMMS.ApplyVirtualSystemSnapshot($VM, $Snapshot)

    # Start the VM back up (state 2 = Enabled/Running)
    $VM.requeststatechange(2)
}


And we’re done!

Credit to Ben Armstrong for parts of this script.

Wednesday, September 11, 2013

A quick lesson in floating point arithmetic


Consider the following code snippet:

double a = 15.95;
double b = 19.95;
double c = a + a; //evaluates to 31.9
c = c + b; //evaluates to 51.849999999999994

This is a classic case where floating point arithmetic falls down. It’s working as designed, but not as expected. It can easily be fixed by using decimal rather than double, but let’s explore the issue.

The crux of the issue is that some base-10 numbers, like 0.9, cannot be represented in base-2 accurately. Since most of us don’t speak base-2 (well, if you’re reading this you probably do), you could compare this to a fraction like 1/3, which in base-10 would be “0.3333333….”. We of course just truncate the rest of the 3s when we’re in base-10. It’s close enough, but what about for practical use?

For large financial calculations, this can cause issues. Remember the movie Office Space? They exploited this floating point vulnerability by taking those fractional remainders. For example, when we truncate “0.333333….”, they would take the “0.000000333…” that gets truncated from the number.

So back to our earlier example with 0.9. In base-2, it’s represented by the repeating pattern “0.111001100110011…” which, when converted back to base-10 (which is exactly what’s happening above), is essentially “0.8999…”. In our case, that’s how 31.9 eventually becomes 31.89999... instead. That’s exactly where the issue presents itself, and why you should use decimal in these kinds of situations.
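The same behavior reproduces in any language that uses IEEE-754 doubles, so here's the example again as a Python sketch, along with the fix using a base-10 decimal type:

```python
from decimal import Decimal

a = 15.95
b = 19.95
c = a + a   # evaluates to 31.9 (doubling is exact in binary floating point)
c = c + b   # evaluates to 51.849999999999994 -- the base-2 rounding error surfaces

# A base-10 decimal type keeps the arithmetic exact:
d = Decimal("15.95") + Decimal("15.95") + Decimal("19.95")  # exactly 51.85
```

Note that the Decimal values are constructed from strings; building them from float literals would bake the binary rounding error right back in.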

For more on decimals, Jon Skeet has a great writeup about it.








Monday, September 9, 2013

Useful tip on determining if a version represented as a string is greater than another

The Version class allows you to do something like this:

Version a = new Version("1.0.0.0");
Version b = new Version("1.0.0.1");

if (b>a) //evaluates to true
    blah blah blah

Being able to use pretty much any comparison operator on a Version object like this is really useful.
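As an aside, Python doesn't ship a direct System.Version analog, but the same comparison can be sketched by parsing the strings into integer tuples, which compare element by element:

```python
def parse_version(s):
    # "1.0.0.1" -> (1, 0, 0, 1); tuples compare element by element,
    # which is exactly the ordering you want for version numbers.
    return tuple(int(part) for part in s.split("."))

a = parse_version("1.0.0.0")
b = parse_version("1.0.0.1")
print(b > a)  # True
```

The reason to parse rather than compare the raw strings: plain string comparison would rank "1.9.0.0" above "1.10.0.0", while integer tuples get it right.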

The MSDN page also includes this useful bit on getting the version of the current running assembly:

     Assembly assem = Assembly.GetEntryAssembly();
  AssemblyName assemName = assem.GetName();
  Version ver = assemName.Version;
  Console.WriteLine("Application {0}, Version {1}", assemName.Name, ver.ToString());

PowerShell remoting and memory issues

I ran into an issue recently with running out of memory in a PowerShell remote session. Apparently the default for a remote session is only 150MB. Here's how to increase it (Credit) via PowerShell:

Either/Or:

Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024

winrm set winrm/config @{MaxMemoryPerShellMB="1024"}

I also found that if you increase the memory by too much, you receive a cryptic and totally unhelpful error message when attempting to open a remote shell: 

The WSMan provider host process did not return a proper response.  A provider in the host process may have behaved improperly. For more information, see the about_Remote_Troubleshooting Help topic.

At that point, I had increased the shell memory size to 4096MB. Decreasing it back down to 2048MB fixed the issue.

Obfuscating passwords from nosy coworkers in PowerShell and batch script

First things first: Obfuscation IS NOT SECURITY! My use case assumes I’m not worried about intrusions from malicious external sources. I have some scripts that run particular tasks automatically, and I don't want them to prompt me for a login. That requires credentials and passwords. I could leave plain-text passwords lying around, but I don't want nosy coworkers finding them.

PowerShell

Let's obfuscate some passwords! First with PowerShell (Credit to Frank Richard):

Say your password is "ABCD" and you want to obfuscate it, let's start with the first letter, "A", in its byte character hexadecimal format. That's "41" (hex), or "65" in decimal.

The super-simple obfuscation will just add 1 to the decimal number and store it in a text file:

$pwd = "ABCD"
$pwdEncoded = ""
$pwd.ToCharArray() | Foreach { $pwdEncoded = $pwdEncoded + ([BYTE][CHAR]($_)+1) + " " }

And that's it. Now $pwdEncoded contains each character of my password as its decimal representation + 1:

66 67 68 69

Store that string in a text file, and all you have to do to retrieve it again is this:

$strEncoded = Get-Content C:\pwdEncoded.txt
$pwd = ""

$strEncoded.Trim().Split(" ") | Foreach { $pwd = $pwd + [CHAR][BYTE](($_)-1) }

Now, $pwd contains your password and you can use it to create PowerShell credentials and so on.
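For illustration, the same add-one scheme can be sketched in Python (the function names are mine):

```python
def obfuscate(password):
    # Each character becomes its code point + 1, space-separated.
    return " ".join(str(ord(ch) + 1) for ch in password)

def deobfuscate(encoded):
    # Reverse the shift: split on whitespace, subtract 1, rebuild the string.
    return "".join(chr(int(tok) - 1) for tok in encoded.split())

encoded = obfuscate("ABCD")  # "66 67 68 69", same as the PowerShell output
```

Again, this is deterrence for shoulder-surfers, not encryption; anyone who reads the script can reverse it in seconds.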


Batch Script

I found that the easiest way to accomplish obfuscation in a batch script was using CertUtil.

Assuming your password is currently sitting in a text file, you can encode it to Base64 (using the cmd console):

CertUtil -f -v -encode C:\pwd.txt C:\pwdout.txt

Using our previous example of "ABCD", our output would look like this in "pwdout.txt":

-----BEGIN CERTIFICATE-----
QUJDRA==
-----END CERTIFICATE-----

Now all we have to do is read the obfuscated password back by decoding it again, then get rid of the evidence:

CertUtil -f -v -decode C:\pwdout.txt C:\pwdin.txt
set /p var=<C:\pwdin.txt
del C:\pwdout.txt
del C:\pwdin.txt

Now the password is stored in "var". Take careful note of the spacing on that particular line: 

var=<C:\pwdin.txt

It's important to notice that there are NO SPACES after the "=" sign. If there were, there would be an extra space in my password and I'd have to trim it out instead. This way I avoid that.
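Under the hood, CertUtil is just doing a Base64 round trip. Here's the same transformation sketched in Python, reproducing the "QUJDRA==" payload from above:

```python
import base64

# "ABCD" encodes to the same payload CertUtil wraps in the
# BEGIN/END CERTIFICATE lines.
encoded = base64.b64encode(b"ABCD").decode("ascii")  # "QUJDRA=="

# Decoding recovers the original password.
decoded = base64.b64decode(encoded).decode("ascii")  # "ABCD"
```

Base64 is a pure encoding, which is exactly why this is obfuscation rather than security: there's no key involved.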

Programmatically activating SharePoint features

I've been attempting to programmatically activate the features for a custom solution on particular SharePoint site collections.
Given an SPSite object, there are a number of blogs you can find via Google that offer the following code snippet to activate a feature:

    site.Features.Add(some GUID);

I was attempting to do this locally from the Central Admin box of our development farm, while the targeted site collection was hosted on one of our Web Front Ends.
This resulted in an exception:

Feature '<some GUID>' is not installed in this farm, and can not be added to this scope.

What the heck? I can see the feature sitting right there and I'm pretty sure that GUID is correct.
Let's make sure though. Since the SPFeatureCollection on SPSite.Features won't have deactivated features, I'm going to go ahead and manually activate the features so I can see them and access their GUIDs.
Again, given a SPSite object, we can loop through all the features on it and output their names and IDs in a little console app:
SPFeatureCollection featureColl = siteCollection.Features;

foreach (SPFeature feature in featureColl)
{
    try
    {
        Console.WriteLine("ID:{0}; Name:{1}",
            feature.Definition.Id.ToString(),
            feature.Definition.GetTitle(System.Globalization.CultureInfo.CurrentCulture));
    }
    catch (Exception ex)
    {
        Console.WriteLine("Caught ex " + ex.ToString());
    }
}

Here I ran into an even more interesting problem. The features from my custom solution didn't show up, yet every single other feature did. Here's where the lightbulb flicked on: we were missing all 6 of the custom features from the list, and there were exactly 6 null reference exceptions caught and written to the console. That's awfully strange.

It turns out that since these features are pushed out by our solution and only exist on that site collection level and not farm-wide, they can only be activated locally from that Web Front End.

Once I ran the code on the Web Front End instead of the Central Admin box, the exceptions stopped happening and my features were activated.

So, if you want to do it from the Central Admin box anyway, I found it's much easier to send the commands over to the target box with a remote PowerShell session. Here's how you can activate the View feature from PowerShell inside an invoke-command script block:

Enable-SPFeature Solution.Featurename -Url $SiteURL

Arguably a whole lot easier to just activate them by name than the whole mess I went through before.