Saturday, August 28, 2010

Search Google in Real Time




Google users can get in on the action with Google's realtime search, which, as the name implies, lets you search what's being posted online in real time.

The search is eventually going to show up at google.com/realtime, but for now, you can check it out at its experimental URL. Originally introduced in December, the search has received "significant improvements" that allow you to narrow your results in useful ways, such as by geographic region. The example that Google provides on its blog is one that I find particularly useful: if you're traveling somewhere, you can narrow the search down to that city to see what people are talking about, whether it's current events, new restaurants, construction, or whatever else is going on.

You can also follow conversations—basically, if a tweet or a Facebook update sparks a number of back-and-forth replies from multiple users, Google lets you see all of those posts to get the full context.



The graph across the top shows you spikes and valleys in activity for your search term, too. For example, a quick search for "tamales" shows me that conversations about the tasty treats peaked around 2pm today, and again at 4pm. Mentions of Ars Technica peaked at around 11am. My name got mentioned the most right around the time an article of mine was posted to the site.

Unlike some of Google's recent feature rollouts (we're looking at you, Google Voice in Gmail), the realtime search isn't just limited to the US or English speakers. The search itself is available in 40 languages, and the geographic refinements are available in English, Japanese, Russian, and Spanish so far.

Source: Jacqui Cheng, Ars Technica

Wednesday, August 25, 2010

Windows DLL-loading security flaw puts Microsoft in a bind

Last week, HD Moore, creator of the Metasploit penetration testing suite, tweeted about a newly patched iTunes flaw. The tweet said that many other (unspecified) Windows applications were susceptible to the same issue—40 at the time, but probably hundreds.

The problem has been named, or rather, renamed, "Binary Planting," and it stems from an interaction between the way Windows loads DLLs and the way it handles the "current directory." Every program on Windows has a notion of a "current directory"; any attempt to load a file using a relative path (that is, a path that does not start with a drive letter or a UNC-style "\\server" name) looks in the current directory for the named file. This concept is pretty universal—Unix-like systems have the same concept, called a "working directory"—and it's a decades-old feature of operating systems.
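To make this concrete, here's a minimal C++ sketch (the notes.txt file name is hypothetical) of how a relative path resolves against the current directory:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // Ask Windows for this process's current directory.
    wchar_t cwd[MAX_PATH];
    GetCurrentDirectoryW(MAX_PATH, cwd);
    wprintf(L"Current directory: %s\n", cwd);

    // "notes.txt" is a relative path (no drive letter, no "\\server"
    // prefix), so Windows resolves it against the current directory:
    // this call opens <current directory>\notes.txt.
    HANDLE h = CreateFileW(L"notes.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        wprintf(L"notes.txt not found in the current directory\n");
    else
        CloseHandle(h);
    return 0;
}
```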

Windows, again in common with other operating systems, has the ability to load DLLs at runtime, during the execution of a program.

Where Windows is different from other operating systems is that it combines these two features; when a program instructs Windows to load a DLL, Windows looks in several different places for the library, including the current directory. Critically, it searches the current directory before looking in more likely locations such as the System32 directory, where most system libraries reside.
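A single innocuous-looking call is all it takes to kick off that search. A minimal sketch, assuming a hypothetical library name helper.dll:

```cpp
#include <windows.h>

int main()
{
    // "helper.dll" is a relative name, so Windows walks its DLL search
    // path looking for it. Under the legacy behavior described above,
    // the current directory is searched before System32, so a DLL
    // planted there wins over the genuine system copy.
    HMODULE lib = LoadLibraryW(L"helper.dll");
    if (lib != NULL)
    {
        // ... resolve entry points with GetProcAddress() and call them ...
        FreeLibrary(lib);
    }
    return 0;
}
```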

It's this that opens up the problem. When a file is opened in Windows by double-clicking it, with file associations used to start the right program, Windows sets the current directory of the newly started program to the directory that contains the file. In the course of opening the file, many programs also try to load one or more DLLs. In normal circumstances, these DLLs will be loaded from the System32 directory. However, if an attacker knows the name of any DLL the program tries to load, and puts a DLL with that name adjacent to a data file, he can ensure that his DLL will be loaded whenever the program tries to open the data file. Programs can also change their current directory manually, so the current directory will often appear to "follow" the last data file loaded.

Hence, "binary planting." An attacker creates a data file—which can be perfectly harmless in and of itself—and "plants" a DLL in the same directory. He then entices someone into opening the data file, for example through a link on a webpage or an e-mail. The vulnerable program will then attempt to open the data file, and in so doing will load the malicious DLL, allowing the attacker to do whatever he likes.

So, for example, an MP3 and a malicious DLL are placed alongside each other. The user double-clicks the MP3, causing iTunes to start and to use the folder containing the MP3 as its current directory. During its startup, iTunes loads a number of DLLs, and one of them will be the planted, malicious DLL.

The newly reported issue is a slightly new take on this old problem; instead of placing the DLL and data file on a local or LAN disk, they're placed on Internet-accessible file servers, using either HTTP-based WebDAV or Windows' usual SMB/CIFS file-sharing protocols. In this way, a link to a data file (with its side-by-side DLL) can be put into webpages or sent in mass e-mails.

Everything old is new again

The peculiar thing about all this is that the vulnerability has been known for a long time. The order in which directories are searched has been documented for many years (the documentation dates back to at least 1998, and there are likely older references still), and the dangers of using the current directory for loading libraries were explicitly highlighted a decade ago. As well as warning about the dangers in the documentation, Microsoft bloggers have written about the issue in the past, telling developers how to avoid the problem.

To reduce the impact of this problem, Windows XP (and Windows 2000 Service Pack 4) changed the DLL-loading behavior by introducing a new mode named "SafeDllSearchMode." With this mode enabled, the current directory is searched after the Windows directories, rather than before. The new mode was made the default in Windows XP Service Pack 2 and in all subsequent operating systems from Microsoft. It is not, however, a complete solution; if a program tries to load a DLL that does not exist in the Windows directories, the search will still fall back to the current directory, and so will still load a malicious DLL.

To address this, Windows has, since Windows XP SP1, provided a mechanism for programs to opt out of loading DLLs from the current directory completely. This has to be something that the program explicitly chooses to do, however, as there is the possibility that some software will depend on loading libraries from the current directory; making an OS-wide change to prevent the current directory being used in this way would break those programs.
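That opt-out mechanism is, to my knowledge, the SetDllDirectory API added in Windows XP SP1: passing it an empty string removes the current directory from the calling process's DLL search path. A minimal sketch:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // An empty string removes the current directory from this process's
    // DLL search path (available since Windows XP SP1).
    if (!SetDllDirectoryW(L""))
    {
        fprintf(stderr, "SetDllDirectory failed: %lu\n", GetLastError());
        return 1;
    }

    // From here on, relative LoadLibrary calls no longer consult the
    // current directory, so a DLL planted next to a data file is ignored.
    HMODULE lib = LoadLibraryW(L"helper.dll"); // hypothetical library name
    if (lib != NULL)
        FreeLibrary(lib);
    return 0;
}
```

This is presumably the "single line of code" fix referred to later on.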

All of which adds up to a tricky situation. Programs that are vulnerable to these problems are buggy. The operating system's standard behavior may not be ideal, but it provides the ability to write safe applications—though the design is plainly poor, it's not a fatal flaw, and it certainly can't be described as a bug. It's just an artifact of Windows' long history. The behavior made sense in the security-unconcerned world of single-user, non-networked 16-bit Windows, where it was first implemented, but it's plainly undesirable in the modern world.

The fact that programs may legitimately depend on the behavior is another complexity: Microsoft can't easily make a unilateral decision to remove the current directory from the DLL search path, because the impact of such a change on legitimate programs could be substantial, even crippling. Microsoft has been telling developers to be careful, and to make sure they do the right thing, for many years now.

This places the company in an awkward position. A systematic flaw in Windows applications certainly looks bad for Windows, but the operating system vendor is not in a position to provide a fix. Though most programs will be able to get away with disabling DLL loading from the current directory completely, that's a determination that must be left to the program's authors, not Redmond. Unfortunately, not all vendors will do this in a timely manner, or possibly even at all.

Fixing the problem

Microsoft has produced a hotfix that allows system administrators to forcibly disable DLL loading from the current directory for applications of their choosing, or systemwide. This still runs the risk of breaking those applications, but the approach allows administrators to test whether their applications keep working and to apply the fix where it proves harmless.

To try to maximize its usefulness, the fix is a little more complicated than simply disabling DLL loading from the current directory completely. It has three levels of protection: it can disable DLL loading from the current directory when the current directory is a WebDAV directory, it can disable it when the current directory is any kind of remote directory (whether it be WebDAV, SMB/CIFS, or anything else), and it can disable the use of the current directory for DLL loading completely. In this way, programs which legitimately need to load libraries from the current directory can still be made safe from network-based attacks.
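As a sketch of how an administrator might enable the strictest level system-wide: the CWDIllegalInDllSearch value name and the level semantics below are drawn from Microsoft's documentation for this hotfix as best I recall, so treat them as assumptions and verify against the official advisory before deploying anything.

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // Assumed semantics of the hotfix's registry value (verify against
    // Microsoft's advisory):
    //   1          = block DLL loads when the current directory is WebDAV
    //   2          = block DLL loads when the current directory is remote
    //   0xFFFFFFFF = never load DLLs from the current directory at all
    DWORD level = 0xFFFFFFFF;

    // Writing to HKLM requires administrator rights.
    HKEY key;
    LONG rc = RegOpenKeyExW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\Session Manager",
        0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS)
    {
        fprintf(stderr, "RegOpenKeyEx failed: %ld\n", rc);
        return 1;
    }

    rc = RegSetValueExW(key, L"CWDIllegalInDllSearch", 0, REG_DWORD,
                        (const BYTE *)&level, sizeof(level));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```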

This is not a perfect solution—disabling network-based libraries still permits exploitation via USB key, for example—but it does allow people to protect themselves. In conjunction with disabling WebDAV and blocking SMB at the network perimeter, it should offer reasonably robust protection against untargeted mass attacks. But it places a substantial burden on administrators to create the necessary registry entries to enable the protective behavior, and for many the only practical answer may be to enable the safe behavior system-wide and just hope it doesn't break anything.

If developers are paying attention, we should expect to see a spate of fixes that tackle this problem. In its own security bulletin, Microsoft says that it's currently investigating whether any of its programs are susceptible to the issue, and as mentioned, Apple has already updated iTunes to avoid the problem; there are sure to be many other companies with similar work to do.

In that sense, the extra publicity that this old problem has received is good news: it should raise awareness, ensuring that more developers take care to address it—and for many, the fix is simple, as disabling the use of the current directory for library loading will suffice.

But it does highlight a bigger issue. If something isn't safe by default, and requires extra development effort to be made safe, programmers are going to write unsafe programs. It's an issue seen time and time again with more conventional security flaws such as buffer overflows: trusting programmers to do the right thing doesn't work. They may do the right thing some of the time, perhaps even most of the time, but they won't do the right thing all of the time, and software will contain exploitable flaws as a result.

Microsoft has made great strides in eliminating common exploitable bugs from its own code, and in reducing the exploitability of the bugs that do slip through, but even these efforts have not been 100 percent successful, and bugs are still found. And as the (re)discovery of this problem shows, getting that message out to third-party developers is an uphill struggle.

The company says that it is looking into ways to make it easier for developers to avoid this mistake in future, but short of making it impossible—by removing DLL loading from the current directory entirely, or at least requiring it to be explicitly enabled—it's hard to see what the company could do to improve the situation, as for most applications the problem is already easily avoided with just a single line of code.

In a world where developers can't be trusted to follow best practices, perhaps the correct response should have been for the company to make the new hotfix opt-out, rather than opt-in; make it establish a system-wide default, and allow administrators to revert to the old behavior for those programs that require it. This means disabling documented, standard functionality, but there's little practical alternative.

Microsoft has said that it is willing to break backwards compatibility if it's necessary to solve security issues, but typically only when the backward compatible behavior is irredeemably unsafe, and not required for any legitimate purpose. This is, after all, why the company changed the search order in the past, to search the current directory after the Windows directories instead of before.

The situation here is certainly trickier—programs might quite legitimately depend on the current, dangerous behavior—but if exploitation using this attack vector becomes widespread, the company may be faced with little alternative but to break compatibility and provide bulletproof protection after all. History has shown that the ability to write safe programs is not enough: software needs to be safe by default, as painful as that may be.

Source: Peter Bright, Ars Technica

Tuesday, August 24, 2010

Why Intel bought McAfee

There's been quite a bit of head-scratching over Intel's decision to purchase McAfee, but, despite all the breathless talk about mobile security and ARM and virus-fighting processors, the chipmaker's motivations for the purchase are actually fairly straightforward. First, Intel's management has decided, in the wake of Operation Aurora, to move security to the top of Intel's priority list. Second, secure systems require a lot more than just hardware support—security is about the whole stack, plus the network, plus policies and practices. Third, Intel has waited for ages for its ecosystem partners to come up with ways to give consumers access to vPro's security benefits, and little has really panned out, so now Intel is just going to take vPro (and any newer security technologies) directly to consumers via McAfee.

Let's take a look at each of these reasons in turn.

Security is Job One

At the most recent Intel R&D day, Intel CTO Justin Rattner did a Q&A session with the press in which he was asked something to the effect of, "What do you spend most of your time working on these days?" Rattner didn't hesitate in answering "security."

He then told an anecdote about how he was watching Intel CEO Paul Otellini being interviewed by Charlie Rose, and Otellini told Rose, "I've given our company a charter to make [security] job one." Rattner laughed and told us that this statement seemed to come out of the blue, taking him and other Intel execs by surprise. But from that day forward, Rattner was focused on security.

Rattner then went on to discuss just what a complex problem security is, and how the company is turning over every rock to come up with ways that it can contribute to making systems more secure. And, as Otellini did in the Charlie Rose interview, he referenced the Aurora attacks against Google and other tech companies as a kind of call to arms for Intel.

From Rattner's comments about the Aurora attacks, it was clear that he and his team at Intel had looked into them closely. He told us that the attacks, both Aurora and others he has seen more recently, show such a high degree of sophistication that they're clearly not the work of garden-variety criminals and vandals. He also said that the attackers are constantly upping their game.

Rattner described a few chip-specific efforts that Intel was making in the security arena, such as an on-chip random number generator and a crypto acceleration module. But these were just a small glimpse of what Intel had in mind for security.

Moving up the stack, and then off the stack

Intel's years of experience with vPro and its predecessors have no doubt confirmed to the company that providing silicon-level support for advanced security and remote management technologies is a waste of time if no systems integrator or popular software vendor implements them in some kind of consumer- or business-facing product or service.

At the 2008 Intel Developer Forum, I interviewed Intel's Andy Tryba, who was the director of marketing for the digital office platform division. The interview is worth revisiting from the perspective of 2010 to see what Intel's expectations for vPro were and how they have yet to pan out.

I asked Tryba how I, as a consumer, was supposed to use vPro to do basic troubleshooting and support for family and friends, given that, at the time, there were no consumer-facing services built on top of it. "My point," I said, "is that this isn't just a technology issue; it's a broader ecosystem issue. How are you guys trying to address that?"

Tryba responded: "I 100 percent agree with you, and what we're trying to do is offer the building blocks for services also. If you take a look at the embedded security and manageability on the box itself, that's great, but you do need some type of service to run on top of it. So what we do is go one layer up also and provide building blocks—not trying to touch the end users—but to work with people who are trying to build a business model. So we work with a lot of the guys who are going toward home IT automation and services to build a business model and use our building blocks to take advantage of the hardware capabilities."

I pressed him to name names, and to give examples of services that were going to be announced soon that would bring the power of vPro to the general public, but he wouldn't give details.

Two years have gone by since that interview, and vPro still isn't in common use for remote troubleshooting and general software security. Much of this is Intel's fault, of course, for making users pay extra for vPro-enabled processors (the technology should come standard across its product line), but I haven't really seen much of what Tryba described—that is, people building new home IT automation and tech support services and business models on top of vPro.

However, one of the big software vendors that did take up vPro and try to build consumer-facing products and services around it was McAfee.

Why they did it

In explaining its purchase of McAfee, Intel has clearly indicated that the real impact of the purchase won't really be felt in the computer market until later in the coming decade—this is a long-term, strategic buy. This statement fits with the idea that acquiring McAfee is Intel's way of bringing vPro and subsequent security efforts directly to businesses and consumers by simply buying out the middleman. The McAfee purchase gives Intel an instant foothold on countless PCs, a foothold that Intel itself would have to spend years building (if it were even possible).

Intel's decision to keep the McAfee brand intact and run the company as a wholly owned subsidiary lends further support to the idea that Intel has just bought its way up the stack and directly onto the consumer's hard drive.

This new foothold on the end-user's hard drive is exactly that—a small place from which Intel can now advance, pushing further out into the end-user's networked computing experience by offering as-yet unannounced and undeveloped applications and services that will (ideally) make that experience safer.

In the end, the McAfee move isn't some triple bank shot, where Intel is trying to out-security ARM in the mobile space, or whatever else the pundits have dreamed up to explain the purchase. No, it's pretty much what Intel's press release says it is: Intel wants to be (and feels that it needs to be) in the security business, period. The company thinks that it can do security better than a software vendor alone could, and it believes this because it knows that security is about systems—not just hardware or software, but services, practices, policies, and user experiences and expectations.

And to make secure systems happen, Intel has to get closer to the user and to have a more pervasive part in more aspects of the user experience than it can as a parts provider. McAfee gives Intel that missing consumer-facing piece, and that's why Intel is buying the company at such a large premium.

Author: Jon Stokes, Ars Technica