Sysinternals Freeware - Mark Russinovich & Bryce Cogswell

Mark's Sysinternals Blog

Inside the WMF Backdoor

Steve Gibson (of SpinRite fame) proposed a theory in his weekly Thursday-night podcast last week that, if true, would be the biggest scandal ever to hit Microsoft: that the Windows Metafile (WMF) vulnerability that drew so much media attention last month is actually a backdoor programmed intentionally by Microsoft for unknown reasons. Slashdot picked up the story the next day and I received a flood of emails asking me to look into it. I finished my analysis, which Steve aided by sending me the source code to his WMF-vulnerability tester program (KnockKnock), over the weekend. In my opinion the backdoor is one caused by a security flaw, not one made for subterfuge. I sent my findings to both Steve and Microsoft Monday morning, but because the issue continues to draw media attention I’ve decided to publicly document my investigation.

Understanding the WMF vulnerability requires a brief background in WMF files. A WMF file is a script for executing graphics commands, called graphics device interface (GDI) functions. Each command is stored as a record in the WMF file and examples of GDI functions include ones to draw lines, fill rectangles, and copy bitmaps. Image files like bitmaps, GIFs, or JPEGs, on the other hand, store the representation of pre-rendered images. Because an application can direct a WMF file’s execution at different devices, like screens and printers, with different graphics resolutions and color depths, their advantage over pre-rendered formats is that they scale to the capabilities of the target device. For this reason, many clipart images, including those used by Microsoft Office, are stored in WMF files.
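To make the record structure concrete, here is a minimal sketch in Python of the on-disk layout per the published WMF format: an 18-byte METAHEADER followed by variable-length records, with all sizes expressed in 16-bit words. The parameter bytes below are placeholders, not real GDI parameters:

```python
import struct

META_SETABORTPROC = 0x0626  # GDI function number for SetAbortProc
META_EOF = 0x0000           # function number of the terminating record

def make_record(function, parm_bytes=b""):
    # A record is rdSize (u32, in 16-bit words), rdFunction (u16), parameters.
    size_words = 3 + len(parm_bytes) // 2
    return struct.pack("<IH", size_words, function) + parm_bytes

def make_wmf(records):
    # 18-byte METAHEADER followed by the records; all sizes in 16-bit words.
    body = b"".join(records)
    max_record = max(struct.unpack_from("<I", r)[0] for r in records)
    header = struct.pack(
        "<HHHIHIH",
        1,                      # mtType: in-memory metafile
        9,                      # mtHeaderSize, in words
        0x0300,                 # mtVersion
        (18 + len(body)) // 2,  # mtSize: total size, in words
        0,                      # mtNoObjects
        max_record,             # mtMaxRecord: largest record, in words
        0,                      # mtNoParameters
    )
    return header + body

# A two-record metafile: a SetAbortProc record (placeholder parameter
# bytes, not real abort-procedure code) followed by the EOF record.
wmf = make_wmf([make_record(META_SETABORTPROC, b"\x90\x90"),
                make_record(META_EOF)])
```

The point to notice is that a record is nothing but a size, a GDI function number, and raw bytes; the player trusts all three.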

WMF files originated with early 16-bit versions of Windows that implemented single-threaded cooperative multitasking. In that programming environment a process can’t perform two tasks, such as printing a document and displaying a print-cancel dialog, concurrently. Instead, they have to manually interleave the tasks, periodically polling to see if the user has asked to cancel the printing. The programming model for printing in Windows therefore has the concept of an abort procedure that an application can set before calling the printing API. If such a procedure is registered Windows periodically calls it to give an application a chance to signal that it wants the print job cancelled. Otherwise there would be no way to abort a long-running print job.
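The cooperative model can be sketched as follows. This is a Python analogue of the programming model, not actual Windows code: a long-running job polls the registered abort procedure between units of work, and a FALSE return cancels the job.

```python
def print_document(pages, abort_proc=None):
    # Sketch of the cooperative model: between units of work the system
    # polls the registered abort procedure; returning False cancels the job.
    printed = []
    for page in pages:
        if abort_proc is not None and not abort_proc():
            break                # application asked to cancel
        printed.append(page)     # "render" the page
    return printed

# An abort procedure that allows two polls, then asks to cancel:
state = {"polls": 0}
def abort_after_two():
    state["polls"] += 1
    return state["polls"] <= 2

cancelled_run = print_document(["p1", "p2", "p3", "p4"], abort_after_two)
# cancelled_run == ["p1", "p2"]: the job stopped when the procedure said so
```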

The WMF vulnerability stems from three facts: WMF supports the SetAbortProc API, which is the GDI call to set an abort procedure; Windows expects the abort-procedure code to be stored directly in the SetAbortProc WMF record; and Windows will invoke the procedure under certain conditions immediately after processing the record. Thus, if an attacker can get your computer to execute their WMF file, through Internet Explorer or Outlook, for example, they can make your system execute arbitrary Windows commands, including downloading malicious applications and launching them.

Steve Gibson’s intentional backdoor theory is based on four suspicious observations he made regarding the vulnerability and the behavior of his tests with WMF files that contain a SetAbortProc record:

  1. There is no need for WMF files to include support for the SetAbortProc API.
  2. Even if an abort procedure is set by a WMF file, Windows shouldn’t execute it unless some abort condition is triggered, which should never occur when executing a WMF file.
  3. He could only get his WMF file’s abort procedure to execute when he specified certain invalid values for the size of the record containing the SetAbortProc command.
  4. Windows executes code embedded within the SetAbortProc record rather than expecting the record to reference a procedure within the application executing the WMF file.
Steve’s belief that WMF files should not support the SetAbortProc API comes from the documentation for how Windows calls an abort procedure registered via SetAbortProc:

It [the abort proc] is called when a print job is to be cancelled during spooling.

The statement implies that Windows detects that a user or printer wants to cancel a print job and informs an application by executing the registered abort procedure. Steve echoes this understanding in a posting on his website’s news group:

[the abort proc] is the address of an application-provided "callback" -- a subroutine provided by the application that is expressly designed to asynchronously accept the news and notification of a printing job being aborted for whatever reason.

Steve reasoned that WMF files execute to screens, not printers, and so it makes no sense to abort their execution. Further, his tests showed that Windows calls the abort procedure registered by a WMF file immediately, when there’s no apparent cause for cancellation.

WMF files can, however, be directed at a printer, and moreover the abort procedure documentation is misleading. Its correct description is in the Microsoft documentation that describes the basic steps for writing code that prints to a printer:

After the application registers the AbortProc abort procedure, GDI calls the function periodically during the printing process to determine whether to cancel the job.

Thus, the abort procedure really works both ways, providing Windows a way to notify an application of printing errors and the application a way to notify Windows that it wants to cancel printing. With this interpretation Windows’ execution of the abort procedure immediately after one is registered makes sense: Windows is calling the procedure to ask whether playback of the rest of the metafile should be aborted.

Still, the question remains as to why WMF files implement the SetAbortProc GDI function at all. My belief is that Microsoft developers decided to implement as much of the GDI function set as possible. Including SetAbortProc makes sense for the same reason that abort procedures for printing make sense: WMF files can consist of many records containing complex GDI commands that can take a long time to execute, especially when sent to a printer and on old hardware like the kind on which the cooperatively multitasked Windows 3.1 operating system ran. The abort procedure gives applications the ability to monitor the progress of a playback and to unilaterally abort it if a user makes UI choices that make a complete playback unnecessary. In addition, if a WMF file is sent to a printer and there’s a printer error, Windows must have a way to know that an application wants to cancel WMF playback, which is another reason to invoke the abort procedure from within the PlayMetaFile loop. This Microsoft article from 1992 confirms the behavior as designed.

I’ve addressed the first two of Steve’s observations, but what about his claim that the abort procedure only executes when the SetAbortProc record contains certain invalid record sizes? I’ve analyzed the control flow of the PlayMetaFile function that executes WMF file records and found that, if an abort procedure is registered, PlayMetaFile calls it after executing each record except the last record of the file. That behavior makes sense, since there’s no need to ask an application if playback should be aborted when playback has already completed.

Steve’s example WMF file contains only one record, the one that specifies SetAbortProc, so under normal circumstances PlayMetaFile will never call his abort procedure. The record sizes that he found trigger its execution cause PlayMetaFile to incorrectly increment its pointer into the WMF file such that it believes that there are more records to process, whereas the values he used that don’t trigger the execution land it on data values that indicate there are no more records. So his assertion that only certain magic values open the backdoor is wrong.
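A toy model makes the mechanics clear. This is not the real PlayMetaFile control flow, just a sketch of a loop that trusts rdSize when advancing through the records: with the correct size, the lone record is also the last one and the abort procedure is never polled, while a bogus size leaves the pointer somewhere that makes the loop believe more records follow.

```python
import struct

def play_metafile(body, abort_proc):
    # Toy model only: walk the records of a WMF body, advancing by the
    # rdSize the file claims, and poll the abort procedure after a record
    # whenever the pointer suggests another record follows.
    executed = polled = 0
    pos = 0
    while pos + 6 <= len(body):
        size_words, _function = struct.unpack_from("<IH", body, pos)
        executed += 1                  # "execute" the record
        pos += size_words * 2          # trust the size stored in the file
        if pos + 6 <= len(body):       # looks like more records remain
            polled += 1
            if not abort_proc():
                break
    return executed, polled

good = struct.pack("<IH", 4, 0x0626) + b"\x90\x90"  # correct size: 4 words
bad  = struct.pack("<IH", 1, 0x0626) + b"\x90\x90"  # bogus size: 1 word

play_metafile(good, lambda: True)  # -> (1, 0): sole record, never polled
play_metafile(bad,  lambda: True)  # bogus size: the abort proc gets polled
```

Any undersized rdSize that strands the pointer mid-file has the same effect, which is why there is nothing magic about the particular values Steve found.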

The remaining question is why PlayMetaFile expects the abort procedure to be in-lined in the metafile. It’s that fact that allows a hacker to transport malicious code within a WMF file. The actual reason is lost with the original developer of the API, but my guess is that he or she was being as flexible as possible. When a WMF file is generated in memory and played back by the application in the same run, as it would be to create a graphics macro and use it multiple times, it generally makes no difference whether the procedure is copied into the metafile or not.

For the code in on-disk WMF files to work, any references it makes to data or code, such as Windows functions, must be to hard-coded addresses. This means that WMF file code won’t work if Windows system DLLs change in size or load into different locations in memory, and therefore WMF vulnerability exploits only work on specific patch levels of Windows. While this might make an argument against a design that includes the abort code in the WMF file, things were different when the format was architected. In the Windows 3.1 “large” memory model code is inherently location-independent and Windows was never patched, so both Windows and an application could simply copy an application function into the WMF file and assume it would work when played back by the same application in a later session. In any case, it’s not clear that the developers envisioned applications creating on-disk metafiles with abort procedures. Also, as Microsoft’s Stephen Toulouse pointed out in Microsoft’s rebuttal to Steve’s claims, the security landscape in the early 1990s was very different than today and all code, including that stored in a WMF file, was inherently trusted.

The vulnerability is subtle enough that the WINE project, whose intent is to implement the Windows API for non-Windows environments, copied it verbatim in their implementation of PlayMetaFile. A secret backdoor would probably have been noticed by the WINE group, and given a choice of believing there was malicious intent or poor design behind this implementation, I’ll pick poor design. After all, there are plenty of such examples all throughout the Windows API, especially in the part of the API that has its roots in Windows 3.1. The bottom line is that I'm convinced that this behavior, while intentional, is not a secret backdoor.

posted by Mark Russinovich @ 11:05 PM 103 comments

Rootkits in Commercial Software

By now many of you have heard that Symantec released a security advisory last Tuesday that reported its use of rootkit-like cloaking technology in its SystemWorks product. The Symantec use of rootkit-like cloaking raises the question of what exactly defines a “rootkit” and whether or not there is ever a justifiable reason to use cloaking. I’ll first describe Symantec’s cloaking and then I’ll move on to trying to answer these two questions.

SystemWorks includes a feature called Norton Protected Recycle Bin that serves as an extension of the standard Windows Recycle Bin, saving copies of deleted files that the standard Recycle Bin doesn’t capture, such as those deleted by applications. The saved files are stored in a directory named NPROTECT that SystemWorks creates under the standard Windows Recycle Bin directory, RECYCLER, of each volume. Symantec was originally concerned that end-users might stumble across the directory, not realize its purpose, and inadvertently permanently delete the backups of their already-deleted files. The cloaking therefore uses a file system filter driver to mask the presence of the NPROTECT directories from Windows directory enumeration APIs.

I learned of the cloaking several months ago when users of our RootkitRevealer rootkit detection tool sent us log files asking whether there was evidence of malware (others have posted logs in the Sysinternals forums). A little research showed that it was generally known that SystemWorks creates NPROTECT directories that show up as “false positives” in RootkitRevealer scans.

After the Sony rootkit story died down I turned my attention to commercial use of rootkits, purchased a copy of SystemWorks and installed it. I deleted some files, ran RootkitRevealer, and verified that the NPROTECT directory was being cloaked:

[Screenshot: RootkitRevealer scan showing the cloaked NPROTECT directory]

On a whim I decided to see if the Symantec cloak had the same behavior as the Sony XCP rootkit. Sony’s rootkit only masked the directory from directory enumeration APIs, but allowed access to the directory if specified explicitly by name. To my amazement, Symantec’s cloak let me into the NPROTECT directory when I entered its full path into Explorer or a command prompt change-directory command. I confirmed that a security vulnerability similar to Sony’s exists in the cloaking by copying files into the directory and noting that they did not appear in the SystemWorks interface that displays the contents of the NPROTECT directories. I contacted Symantec and they quickly agreed to remove the cloaking altogether.
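The behavior is easy to model in user mode. The sketch below is plain Python standing in for what the kernel-mode filter driver does: it strips a name out of directory listings while leaving direct access by full path untouched, which is exactly the gap that lets foreign files hide inside NPROTECT.

```python
import os
import tempfile

CLOAKED = {"NPROTECT"}

def cloaked_listdir(path):
    # Strip the cloaked name from enumeration results, the way the filter
    # driver intercepts directory-enumeration requests.
    return [name for name in os.listdir(path) if name not in CLOAKED]

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "NPROTECT"))
os.mkdir(os.path.join(root, "Other"))

listing = cloaked_listdir(root)                         # NPROTECT is absent
exists = os.path.isdir(os.path.join(root, "NPROTECT"))  # direct access works
```

Anything that only filters enumeration, as both Sony’s and Symantec’s cloaks did, leaves the by-name path wide open.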

So does Symantec’s cloaking classify it as a rootkit? I arrived at my working definition for the word rootkit several years ago as:

Software that hides itself or other objects, such as files, processes, and Registry keys, from view of standard diagnostic, administrative, and security software.

My definition is based largely on the way I’ve seen the term used by developers of cloaking technologies, including those that frequent www.rootkit.com, the central gathering place of both cloaking and anti-cloaking developers. The only text on the subject, Hoglund (founder of Rootkit.com) and Butler’s “Rootkits: Subverting the Windows Kernel”, has a definition similar to mine:

A rootkit is a set of programs and code that allows a permanent or consistent, undetectable presence on a computer.

Symantec disagrees with my definition, and presumably Hoglund and Butler’s, because they feel that malicious intent has become attached to the word through general use in the media and that aspect is missing from the definition. But if we turn to the Rootkit.com community for guidance, intent is also absent from their usage: Hoglund has even recently argued in an article he posted on Rootkit.com that Sony was justified in its use of the XCP rootkit.

Despite the evidence I see to back my definition I’m willing to concede, especially after the Sony debacle, that malicious intent has been associated with the term. The security industry is just now debating a formal definition for the word so in the meantime I’ve agreed to use “rootkit-like” when discussing cloaking that’s intended to benefit the consumer rather than the software publisher (or music publisher, in the case of the Sony rootkit).

Symantec’s use of a rootkit technique is a great place to start examining the question as to whether there’s ever justifiable use of rootkit technology. I strongly believe that there is never a case for its use, and I’ve had several discussions with Dave Cutler, Senior Technical Fellow at Microsoft and the original architect of Windows NT, and he agrees unequivocally with my view.

The obvious risk rootkits present, which has been demonstrated by both Sony’s and Symantec’s implementations, is malware being able to hide beneath the cloak. Even if a vendor has ensured with certainty that that’s not possible, the cloak makes it impossible for a security administrator to verify that the cloaked objects have correctly configured security and, if they consist of executable code, that they are updated with the latest security patches.

Another detrimental effect of cloaking is that it changes the way Windows operates, making it difficult or impossible for users and systems administrators to understand the behavior of modified systems and to diagnose issues that arise as a result of altered behavior. Cloaking can make it impossible to account for resource usage like disk space, memory, or CPU, to perform a complete inventory of a system, to understand incompatibilities between Windows or other software and the cloaked objects, and even to make a functional backup. As I said when originally discussing the Sony case, a cloaked driver that crashes a computer can cause a misdiagnosis of the problem and can be extremely difficult to remove or update.

If a software developer ever believes a rootkit is a necessary part of their architecture they should go back and re-architect their solution. Microsoft was faced with a predicament very similar to Symantec’s when they implemented Windows XP System Restore. Instead of cloaking the directory that stores restore points they create a directory in the root of each volume named System Volume Information and set permissions on it so that only the Local System account has access. Even power users that check the “Show hidden files and folders” and deselect “Hide protected operating system files” settings in Explorer cannot enter the directory and inadvertently delete their backup files. There are any number of non-cloaking approaches that Symantec could have chosen.

I hope that the publicity generated by the Sony and Symantec examples have sent a strong message to the software and music industries and that they follow Symantec’s lead by removing the use of rootkit techniques from their applications and avoiding them in the future.

posted by Mark Russinovich @ 5:54 PM 56 comments

The Antispyware Conspiracy

Since the release of the first antivirus products many people have believed in a conspiracy theory where antivirus companies generate their own market by paying virus writers to develop and release viruses. I don’t subscribe to that theory and trust the major security vendors, but recent trends show that there’s a fuzzy line between second-tier antispyware vendors and the malware they clean.

The most innocuous of malware-like antimalware behaviors is to advertise with web site banners and popups that mislead average users into thinking that they have a malware problem. Most of the advertisements look like Windows error dialogs complete with Yes and No buttons, and although the word “advertisement” sometimes appears on the dialog background, the notice is usually small, faded and far from the area where users focus their attention. Even more unlike Windows dialogs, however, is the fact that clicking anywhere on the image, even the part that looks like a No button, results in the browser following the underlying link to the target page. Here’s an example I ran across recently on a popular web site:

[Screenshot: banner advertisement styled as a Windows error dialog]

A click on the image took me to a page at www.myspywarecleaner.com. The page looks like an Internet Explorer error message, again probably to mislead unsophisticated surfers into following its directions, and it guides visitors to download and install an antispyware utility called Spyware Cleaner:

[Screenshot: Myspywarecleaner download page styled as an Internet Explorer error message]

Even on a freshly installed copy of Windows XP, Spyware Cleaner reports close to a dozen “extreme risk” and “high risk” infections that include innocuous items like cookies left by MSN.com and several built-in Windows COM components, including RDSHost.exe, the Remote Desktop Service control, and Shdocvw.dll, a Windows shell COM object, both of which Spyware Cleaner identifies as spyware. It also lists each COM component twice, reporting their presence in HKLM\Software\Classes as well as HKCR, which for those objects is a symbolic link to HKLM\Software\Classes.

[Screenshot: Spyware Cleaner scan results on a freshly installed copy of Windows XP]

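The double-counting is mechanical: for machine-registered COM objects, HKCR is just a view of HKLM\Software\Classes, so the two paths name the same key. A scanner that wanted honest numbers would canonicalize before counting, as in this sketch (the CLSID is a hypothetical placeholder):

```python
def canonical_key(path):
    # For machine-registered COM objects, HKCR is a view of
    # HKLM\Software\Classes, so fold the two spellings together.
    prefix = "HKCR\\"
    if path.upper().startswith(prefix):
        return "HKLM\\Software\\Classes\\" + path[len(prefix):]
    return path

# The same (hypothetical) COM object reported under both roots:
findings = [
    "HKLM\\Software\\Classes\\CLSID\\{hypothetical-clsid}",
    "HKCR\\CLSID\\{hypothetical-clsid}",
]
unique = {canonical_key(f) for f in findings}  # one object, not two
```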
Of course, to remove the “infections” a user has to pay to register the software. Who makes Spyware Cleaner? You won’t find out on the Myspywarecleaner web site, which consists of only a handful of pages like the download page, a FAQ page, and one for affiliates. A Whois lookup of the domain name shows that it belongs to Gary Preston of Secure Computer LLC. The only reference I found on the web to the owner or his company was a thread at CastleCops from June of 2004 that complains of one of their tools falsely identifying systems as being infected with the Sasser worm.

A few days later I ran into the same banner on another site, one for Windows systems administrators that would be embarrassed if revealed, and clicked again. This time I was taken to www.spywarestormer.com. I downloaded their spyware cleaner, ran it on the same clean Windows XP install, and it reported 7 different "infections":

[Screenshot: Spyware Stormer reporting 7 false-positive infections]

Once again, the infections were false positives. One group was the Registry keys associated with the Windows Internet Configuration Wizard, which Spyware Stormer reported as the "Surfairy" spyware package, and the other was a set of COM objects involved in the per-user configuration of Explorer that the tool labeled "WinAD" adware. The Whois report for spywarestormer.com lists it as registered by Domains by Proxy, Inc. through GoDaddy.com, so whoever is behind Spyware Stormer apparently wants to remain anonymous.

The user interfaces of both of these antispyware tools look the same, just with different skins and icons, which leads me to believe that Myspywarecleaner and Spyware Stormer are licensing core "antispyware" technology from someone else. It looks like the unscrupulous antispyware vendors are part of a ring.

Unfortunately, sleazy antispyware vendors aren’t just stopping with misleading banners and false infection reports. Either they, or partners that have a vested interest in sales of their products, are actually infecting machines so that users are essentially blackmailed into purchasing.

The most trafficked threads on the Sysinternals forums are ones related to an infection dubbed “Spyaxe.” It gets its name because it continuously pops up tray balloons informing users that their systems are infected. Clicking on a balloon opens the Spyaxe web site. Spyaxe of course denies any connection with the underhanded advertising, but it’s hard to believe someone would promote Spyaxe this way without some financial incentive.

SpySheriff is another antispyware vendor promoted in the same way as Spyaxe. About a week ago someone sent me a link to a web page that, if visited using a version of Internet Explorer that hasn’t been patched with December’s security updates, slams the system with a deluge of malware (several sites download the same malware package using the recently discovered WMF vulnerability). The infection is so extensive that it takes close to five minutes to complete, after which the system is loaded with 8 viruses, 8 spyware packages and 7 adware products. Subsequent to the installation, Internet browsing is made virtually impossible by the constant popups and popovers, and processes are constantly connecting to remote SMTP servers and web pages.

You can watch the initial infection process in a movie I made (the movie is only about three minutes long because I’ve deleted sequences with no visible change). Here’s a chronology of events:

0:00 The malware has started to download through a script visible in the script prompt dialog.
0:05 The first evidence of the infection appears as a grammatically-challenged tray icon and balloon announcing that Windows has detected that the computer is infected:

[Screenshot: tray icon and balloon announcing the infection]

0:10 Internet Explorer crashes and exits, leaving visible the changed desktop background that also announces that the system is infected.
0:20 More evidence of infection shows up as items on the left side of the desktop.
0:30 I open Process Explorer, which is paused, perform a refresh and new processes show up in green. I navigate the mouse over the image names to reveal their image paths, most of which are under the \Windows directory. Later I refresh the display and the result is this:

[Screenshot: Process Explorer showing the new malware processes highlighted in green]

Note that the malicious executables have some or all of the characteristics I described as common to malware in my Understanding and Fighting Malware TechEd presentation: they have no company name, description, are packed (shown as a purple highlight), and reside under the \Windows directory.
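Those cues can be expressed as simple heuristics. The sketch below is illustrative only; real tools weigh many more signals, and no single flag, such as merely residing under \Windows, is damning by itself:

```python
def suspicion_flags(proc):
    # Each heuristic from the presentation contributes one flag; it is the
    # combination that is suspicious, not any single flag on its own.
    path = proc.get("path", "").lower()
    flags = []
    if not proc.get("company"):
        flags.append("no company name")
    if not proc.get("description"):
        flags.append("no description")
    if proc.get("packed"):
        flags.append("packed image")
    if path.startswith("c:\\windows") and "\\system32\\" not in path:
        flags.append("image under \\Windows")
    return flags

paytime = {"company": "", "description": "", "packed": True,
           "path": "C:\\Windows\\Paytime.exe"}
suspicion_flags(paytime)  # trips all four heuristics
```

Note that a legitimate image like Explorer also lives under \Windows, which is exactly why the combination of flags matters.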

0:55 I highlight the fact that one of the malware processes, Paytime.exe, identifies itself as Explorer from Microsoft Corporation.
1:00 After unpausing Process Explorer purple highlighting appears on most of the malware processes.
1:10 I open the process properties for Paytime and click the Verify button to check it for a digital signature. Unlike most Microsoft images, it doesn’t have one:

[Screenshot: Process Explorer properties showing Paytime.exe fails signature verification]

1:20 The appearance and disappearance of new processes shows that the infection is still underway.
1:30 I open Autoruns and perform a scan with the Verify Signatures and Hide Signed Microsoft Entries options checked, which reveals a dozen different malware autostart items:

[Screenshot: Autoruns listing a dozen malware autostart entries]

2:00 A click on the tray icon causes the installation of SpySheriff. It shows up as shortcut on the desktop and the CPU usage goes to 100% as it starts scanning the system in the background.
2:05 I double-click on the SpySheriff tray icon and its control panel opens.
2:10 SpySheriff begins to identify some of the many spyware and adware infections:

[Screenshot: SpySheriff scan identifying spyware and adware infections]

2:20 A Windows shutdown dialog box briefly appears and then disappears, followed by a crash and restart of Explorer.
2:30 Explorer processes its autostart entries during its re-launch, which direct it to execute one of the malware programs. A Windows security warning appears because the image has an Internet Zone alternate data stream attached to it that associates it with the Internet, an untrusted zone.
2:40 I click on the “Remove found threats” button and discover that I have to purchase the product:

[Screenshot: SpySheriff prompt requiring purchase to remove the found threats]

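Incidentally, the security warning at the 2:30 mark is Windows reading the Zone.Identifier alternate data stream that Internet Explorer attaches to downloaded files. The stream’s contents are just a tiny INI fragment, where a ZoneId of 3 denotes the untrusted Internet zone:

```ini
[ZoneTransfer]
ZoneId=3
```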
Not surprisingly, the SpySheriff website reveals little about the company behind it. A Whois of the domain points to Popandopulos Ltd in Greece as the owner, but the associated email address is crystaljones@list.ru, which is a Russia-based domain. List.ru appears to be an ISP from its Whois information, so it’s doubtful that the Spysheriff domain registration is accurate.

Is the connection between the infestation and SpySheriff simply one created by a SpySheriff fan, or is this evidence of an antispyware conspiracy? It’s hard to believe the former, and if it’s the latter then companies like Secure Computer LLC, which registered the myspywarecleaner.com domain in 2004, and Popandopulos Ltd, which registered spysheriff.com in May 2005, have been in business long enough to show that their business model is working – and that’s far too long. I know that at least one state Attorney General’s office is investigating the Spyaxe case and I hope that this blog post spurs more action. Misleading and outright malicious advertising for antispyware casts a shadow on the entire industry.

posted by Mark Russinovich @ 6:38 AM 118 comments
