Saturday, August 28, 2010

Search Google in Real Time

Google users can get in on the action with Google's realtime search which, as per the name, lets you search what's being posted online in real time.

The search is eventually going to show up at google.com/realtime, but for now, you can check it out at its experimental URL. Originally introduced in December, the search has received "significant improvements" that allow you to narrow your search down in useful ways, such as by region or geography. The example that Google provides on its blog is one that I find particularly useful: if you're traveling somewhere, you can narrow down the search to that city to see what people are talking about, whether it's current events, new restaurants, construction, or whatever else is going on.

You can also follow conversations—basically, if a tweet or a Facebook update sparks a number of back and forth replies from multiple users, Google allows you to see all of those posts to get the full context.



The graph across the top shows you spikes and valleys in your search, too. For example, a quick search for "tamales" shows me that conversations about the tasty treats peaked around 2pm today, and again at 4pm. Mentions of Ars Technica peaked at around 11am. My name got mentioned the most right around the time an article of mine got posted to the site.

Unlike some of Google's recent feature rollouts (we're looking at you, Google Voice in Gmail), the realtime search isn't just limited to the US or English speakers. The search itself is available in 40 languages, and the geographic refinements are available in English, Japanese, Russian, and Spanish so far.

Source: Jacqui Cheng, Ars Technica

Wednesday, August 25, 2010

Windows DLL-loading security flaw puts Microsoft in a bind

Last week, HD Moore, creator of the Metasploit penetration testing suite, tweeted about a newly patched iTunes flaw. The tweet said that many other (unspecified) Windows applications were susceptible to the same issue—40 at the time, but probably hundreds.

The problem has been named, or rather, renamed, "Binary Planting," and it stems from an interaction between the way Windows loads DLLs and the way it handles the "current directory." Every program on Windows has a notion of a "current directory"; any attempt to load a file using a relative path (that is, a path that does not start with a drive letter or a UNC-style "\\server" name) looks in the current directory for the named file. This concept is pretty universal—Unix-like systems have the same thing, called a "working directory"—and it's a decades-old feature of operating systems.

Windows, again in common with other operating systems, has the ability to load DLLs at runtime, during the execution of a program.

Where Windows is different from other operating systems is that it combines these two features; when a program instructs Windows to load a DLL, Windows looks in several different places for the library, including the current directory. Critically, it searches the current directory before looking in more likely locations such as the System32 directory, where most system libraries reside.

It's this that opens up the problem. When a file is opened in Windows by double-clicking it, using file associations to start the right program, Windows sets the current directory of the newly started program to the directory that contains the file. In the course of opening the file, many programs also try to load one or more DLLs. In normal circumstances, these DLLs will be loaded from the System32 directory. However, if an attacker knows the names of any of the DLLs the program tries to load, and puts a DLL with one of those names adjacent to a data file, he can ensure that his DLL will be loaded whenever the program tries to open the data file. Programs can also change their current directory manually, so the current directory will often appear to "follow" the last data file loaded.

Hence, "binary planting." An attacker creates a data file—which can be perfectly harmless in and of itself—and "plants" a DLL in the same directory. He then entices someone into opening the data file, for example through a link on a webpage or an e-mail. The vulnerable program will then attempt to open the data file, and in so doing will load the malicious DLL, allowing the attacker to do whatever he likes.

So, for example, an MP3 and a malicious DLL are placed alongside each other. The user double-clicks the MP3 to load it, causing iTunes to start and to use the folder containing the MP3 as its current directory. During its startup, iTunes loads a number of DLLs, and one of them will be the malicious DLL.
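To make the mechanics concrete, here is a minimal sketch of the vulnerable pattern, assuming a hypothetical library name ("codec.dll" stands in for whatever DLL the player actually requests by a bare, relative name):

/* Illustration only: loading a DLL by a bare, relative name makes
 * Windows walk its DLL search path, which includes the current
 * directory. "codec.dll" is a hypothetical name standing in for any
 * library the application requests this way. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* If the attacker's codec.dll sits next to the double-clicked data
     * file, and the search reaches the current directory before finding
     * a legitimate copy, the planted DLL is loaded and its code runs. */
    HMODULE mod = LoadLibraryW(L"codec.dll");
    if (mod == NULL)
        printf("LoadLibrary failed: error %lu\n", GetLastError());
    else
        printf("codec.dll was loaded from somewhere on the search path\n");

    return 0;
}

The safe pattern is to load system libraries by their full path, or to remove the current directory from the search path entirely, as discussed below.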

The newly reported issue is a slightly new take on this old problem; instead of placing the DLL and data file on a local or LAN disk, they're placed on Internet-accessible file servers, using either the HTTP-based WebDAV or Windows' usual SMB/CIFS file-sharing protocols. In this way, a link to a data file (with its side-by-side DLL) can be put into webpages or sent in mass e-mails.

Everything old is new again

The peculiar thing about all this is that the vulnerability has been known for a long time. The order in which directories are searched has been documented for many years (that documentation dates back to 1998, and there are likely references older still, if one has any decade-old developer documents handy), and the dangers of using the current directory for loading libraries were explicitly highlighted a decade ago. In addition to the warnings in the documentation, Microsoft bloggers have written about the issue in the past, telling developers how to avoid the problem.

To reduce the impact of this problem, Windows XP (and Windows 2000 Service Pack 4) changed the DLL loading behavior, by introducing a new mode named "SafeDllSearchMode." With this mode enabled, the current directory is only searched after the Windows directories, rather than before. This new mode was made the default in Windows XP Service Pack 2, and all subsequent operating systems from Microsoft. It is not, however, a complete solution; if a program tries to load a DLL that does not exist in the Windows directories, it will still fall back to attempting to load from the current directory, and so will still load a malicious DLL.

To address this, Windows has, since Windows XP SP1, provided a mechanism for programs to opt out of loading DLLs from the current directory completely. This has to be something that the program explicitly chooses to do, however, as there is the possibility that some software will depend on loading libraries from the current directory; making an OS-wide change to prevent the current directory being used in this way would break those programs.
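The opt-out is, as far as I know, the SetDllDirectory call added in XP SP1: passing it an empty string removes the current directory from the DLL search path for the rest of the process. A minimal sketch, assuming the program has no legitimate need to load libraries from its current directory:

/* Sketch of the opt-out: one call early in start-up removes the current
 * directory from the process's DLL search path (Windows XP SP1 and later). */
#define _WIN32_WINNT 0x0502   /* ensure SetDllDirectory is declared on older SDKs */
#include <windows.h>

int main(void)
{
    /* The "single line of code" fix: after this, LoadLibrary calls that
     * use relative names no longer consult the current directory. */
    if (!SetDllDirectoryW(L""))
        return 1;   /* call failed (e.g. pre-SP1 Windows) */

    /* ... normal application start-up continues here ... */
    return 0;
}

This is exactly the kind of change an application author can make, but that Microsoft cannot safely make on every program's behalf.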

All of which adds up to a tricky situation. Programs that are vulnerable to these problems are buggy. The operating system's standard behavior may not be ideal, but it provides the ability to write safe applications; though the design is poor, it's not a fatal flaw, and certainly can't be described as a bug. It's just an artifact of Windows' long history. The behavior made sense in the security-unconcerned world of single-user, un-networked 16-bit Windows, where it was first implemented, but it is plainly undesirable in the modern world.

The fact that programs may legitimately depend on the behavior is another complexity: Microsoft can't easily make a unilateral decision to remove the current directory from the DLL search path, because the impact of such a change on legitimate programs could be substantial, and crippling. Microsoft has been telling developers to be careful, and to make sure they do the right thing, for many years now.

This places the company in an awkward position. A systematic flaw in Windows applications certainly looks bad for Windows, but the operating system vendor is not in a position to provide a fix. Though most programs will be able to get away with disabling DLL loading from the current directory completely, that's a determination that must be left to the program's authors, not Redmond. Unfortunately, not all vendors will do this in a timely manner, or possibly even at all.

Fixing the problem

Microsoft has produced a hotfix that allows system administrators to forcibly disable DLL loading from the current directory for applications of their choosing, or system-wide. This still runs the risk of breaking those applications, but it allows users to test whether their applications keep working and apply the fix where it proves harmless.

To try to maximize its usefulness, the fix is a little more complicated than simply disabling DLL loading from the current directory completely. It has three levels of protection: it can disable DLL loading from the current directory when the current directory is a WebDAV directory, it can disable it when the current directory is any kind of remote directory (whether it be WebDAV, SMB/CIFS, or anything else), and it can disable the use of the current directory for DLL loading completely. In this way, programs which legitimately need to load libraries from the current directory can still be made safe from network-based attacks.

This is not a perfect solution—disabling network-based libraries still permits exploitation via USB key, for example—but it does allow people to protect themselves. In conjunction with disabling WebDAV and blocking SMB at the network perimeter, it should offer reasonably robust protection against untargeted mass attacks. But it places a substantial burden on administrators to create the necessary registry entries to enable the protective behavior, and for many the only practical answer may be to enable the safe behavior system-wide and just hope it doesn't break anything.
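For what it's worth, my understanding is that the hotfix is driven by a registry value named CWDIllegalInDllSearch, set either system-wide under the Session Manager key or per application, with the three levels described above; treat the name, location, and numeric levels in the sketch below as assumptions to be checked against Microsoft's knowledge base article rather than gospel.

/* Hedged sketch: programmatically applying the hotfix's strictest,
 * system-wide setting. ASSUMPTIONS: the value is CWDIllegalInDllSearch
 * under the Session Manager key, and 0xFFFFFFFF blocks DLL loads from
 * the current directory entirely (with 1 = block when the current
 * directory is remote, 2 = block when it is a WebDAV share). */
#include <windows.h>

int main(void)
{
    HKEY key;
    DWORD block_everywhere = 0xFFFFFFFF;

    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SYSTEM\\CurrentControlSet\\Control\\Session Manager",
                      0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return 1;   /* needs administrative rights */

    RegSetValueExW(key, L"CWDIllegalInDllSearch", 0, REG_DWORD,
                   (const BYTE *)&block_everywhere, sizeof(block_everywhere));
    RegCloseKey(key);
    return 0;
}

In practice most administrators would push the same value out with a .reg file or Group Policy rather than code, but the effect is the same.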

If developers are paying attention, we should expect to see a spate of fixes that tackle this problem. In its own security bulletin, Microsoft says that it's currently investigating whether any of its programs are susceptible to the issue, and as mentioned, Apple has already updated iTunes to avoid the problem; there are sure to be many other companies with work to do for the same problem.

In that sense, the extra publicity that this old problem has been given is good news: it should raise awareness, ensuring that more developers take care to address it. And for many, the fix should be simple, as disabling the use of the current directory for library loading will suffice.

But it does highlight a bigger issue. If something isn't safe by default, and requires extra development effort to be made safe, programmers are going to write unsafe programs. It's an issue seen time and time again with more conventional security flaws such as buffer overflows: trusting programmers to do the right thing doesn't work. They may do the right thing some of the time, perhaps even most of the time, but they won't do the right thing all of the time, and software will contain exploitable flaws as a result.

Microsoft has made great strides in eliminating common exploitable bugs from its own code, and in reducing the exploitability of those bugs should they be discovered, but even these efforts have not been 100 percent successful, and bugs are still found. And as the (re)discovery of this problem shows, getting that message out to third-party developers is an uphill struggle.

The company says that it is looking into ways to make it easier for developers to avoid this mistake in future, but short of making it impossible—by removing DLL loading from the current directory entirely, or at least requiring it to be explicitly enabled—it's hard to see what the company could do to improve the situation, as for most applications the problem is already easily avoided with just a single line of code.

In a world where developers can't be trusted to follow best practices, perhaps the correct response should have been for the company to make the new hotfix opt-out, rather than opt-in; make it establish a system-wide default, and allow administrators to revert to the old behavior for those programs that require it. This means disabling documented, standard functionality, but there's little practical alternative.

Microsoft has said that it is willing to break backwards compatibility if it's necessary to solve security issues, but typically only when the backward compatible behavior is irredeemably unsafe, and not required for any legitimate purpose. This is, after all, why the company changed the search order in the past, to search the current directory after the Windows directories instead of before.

The situation here is certainly trickier—programs might quite legitimately depend on the current, dangerous behavior—but if exploitation using this attack vector becomes widespread, the company may be faced with little alternative but to break compatibility and provide bulletproof protection after all. History has shown that the ability to write safe programs is not enough: software needs to be safe by default, as painful as that may be.

Source: Peter Bright, Ars Technica

Tuesday, August 24, 2010

Why Intel bought McAfee

There's been quite a bit of head-scratching over Intel's decision to purchase McAfee, but, despite all the breathless talk about mobile security and ARM and virus-fighting processors, the chipmaker's motivations for the purchase are actually fairly straightforward. First, Intel's management has decided, in the wake of Operation Aurora, to move security up to the top of Intel's priority list. Second, secure systems require a lot more than just hardware support—security is about the whole stack, plus the network, plus policies and practices. Third, Intel has waited for ages for its ecosystem partners to come up with ways to give consumers access to vPro's security benefits, and little has really panned out, so now it's just going to take vPro (and any newer security technologies) directly to consumers via McAfee.

Let's take a look at each of these reasons in turn.

Security is Job One

At the most recent Intel R&D day, Intel CTO Justin Rattner did a Q&A session with the press in which he was asked something to the effect of, "What do you spend most of your time working on these days?" Rattner didn't hesitate in answering "security."

He then told an anecdote about how he was watching Intel CEO Paul Otellini being interviewed by Charlie Rose, and Otellini told Rose, "I've given our company a charter to make [security] job one." Rattner laughed and told us that this statement seemed to come out of the blue, and took him and other Intel execs by surprise. But from that day forward, Rattner was focused on security.

Rattner then went on to discuss just what a complex problem security is, and how the company is turning over every rock to come up with ways that it can contribute to making systems more secure. And, as Otellini did in that Charlie Rose interview, he referenced the Aurora attacks against Google and other tech companies as a kind of call to arms for Intel.

From Rattner's comments about the Aurora attacks, it was clear that he and his team at Intel had looked into them closely. He told us that the attacks—both the Aurora attacks and others he has seen more recently—show such a high degree of sophistication that they're clearly not carried out by garden-variety criminals and vandals. He also said that the attackers are constantly upping their game.

Rattner described a few chip-specific efforts that Intel was making in the security arena, such as an on-chip random number generator and a crypto acceleration module. But these were just a small glimpse of what Intel had in mind for security.

Moving up the stack, and then off the stack

Intel's years of experience with vPro and its predecessors have no doubt confirmed to the company that providing silicon-level support for advanced security and remote management technologies is a waste of time if no systems integrator or popular software vendor implements them in some kind of consumer- or business-facing product or service.

At the 2008 Intel Developer Forum, I interviewed Intel's Andy Tryba, who was the director of marketing for the digital office platform division. The interview is worth revisiting from the perspective of 2010 to see what Intel's expectations for vPro were and how they have yet to pan out.

I asked Tryba how I, as a consumer, was supposed to use vPro to do basic troubleshooting and support for family and friends, given that, at the time, there were no consumer-facing services built on top of it. "My point," I said, "is that this isn't just a technology issue; it's a broader ecosystem issue. How are you guys trying to address that?"

Tryba responded: "I 100 percent agree with you, and what we're trying to do is offer the building blocks for services also. If you take a look at the embedded security and manageability on the box itself, that's great, but you do need some type of service to run on top of it. So what we do is go one layer up also and provide building blocks—not trying to touch the end users—but to work with people who are trying to build a business model. So we work with a lot of the guys who are going toward home IT automation and services to build a business model and use our building blocks to take advantage of the hardware capabilities."

I pressed him to name names, and to give examples of services that were going to be announced soon that would bring the power of vPro to the general public, but he wouldn't give details.

Two years have gone by since that interview, and vPro still isn't in common use for remote troubleshooting and general software security. Much of this is Intel's fault, of course, for making users pay extra for vPro-enabled processors (it should come standard across their product line), but I haven't really seen much in the way of what Tryba described—i.e., people building new home IT automation and tech support services and business models on top of vPro.

However, one of the big software vendors that did take up vPro and try to build consumer-facing products and services around it was McAfee.

Why they did it

In explaining its purchase of McAfee, Intel has clearly indicated that the real impact of the purchase won't really be felt in the computer market until later in the coming decade—this is a long-term, strategic buy. This statement fits with the idea that acquiring McAfee is Intel's way of bringing vPro and subsequent security efforts directly to businesses and consumers by just buying out the middle-man. The McAfee purchase gives Intel an instant foothold on countless PCs, a foothold that Intel itself would have to spend years building (if it were even possible).

Intel's decision to keep the McAfee brand intact and run the company as a wholly owned subsidiary lends further support to the idea that Intel has just bought its way up the stack and directly onto the consumer's hard drive.

This new foothold on the end-user's hard drive is exactly that—a small place from which Intel can now advance, pushing further out into the end-user's networked computing experience by offering as-yet unannounced and undeveloped applications and services that will (ideally) make that experience safer.

In the end, the McAfee move isn't some triple bank shot, where Intel is trying to out-security ARM in the mobile space, or whatever else the pundits have dreamed up to explain the purchase. No, it's pretty much what Intel's press release says it is: Intel wants to be (and feels that it needs to be) in the security business, period. The company thinks that it can do security better than a software vendor alone could, and it believes this because it knows that security is about systems—not just hardware or software, but services, practices, policies, and user experiences and expectations.

And to make secure systems happen, Intel has to get closer to the user and play a more pervasive part in more aspects of the user experience than it can as a parts provider. McAfee gives Intel that missing consumer-facing piece, and that's why it's buying the company at such a large premium.

Author: Jon Stokes, Ars Technica

Tuesday, July 13, 2010

Beware! 3D televisions may damage your eyesight

While it may be years until most consumers own a 3D device, the warnings about what the technology may do to your eyes are coming from every angle. Sony is the latest company to tell us what to avoid, following Nintendo and Samsung.

"Some people may experience discomfort (such as eye strain, eye fatigue or nausea) while watching 3D video images or playing stereoscopic 3D games on 3D televisions," the 3D section of the PlayStation 3 Terms of Service reads. "If you experience such discomfort, you should immediately discontinue use of your television until the discomfort subsides." The other paragraphs consist of standard warnings: if your eyes hurt, stop playing until they stop hurting. Ask your doctor about younger children using 3D devices.

Reggie Fils-Aime, the President and COO of Nintendo of America, issued a similar warning when talking to Kotaku. "We will recommend that very young children not look at 3D images," he said. "That's because, [in] young children, the muscles for the eyes are not fully formed... This is the same messaging that the industry is putting out with 3D movies, so it is a standard protocol. We have the same type of messaging for the Virtual Boy, as an example."

Just to make sure we hit all the notes, Samsung also warns consumers about its 3D televisions. "We do not recommend watching 3D if you are in bad physical condition, need sleep or have been drinking alcohol," one section says. "Watching TV while sitting too close to the screen for an extended period of time may damage your eyesight. The ideal viewing distance should be at least three times the height of the TV screen. It is recommended that the viewer's eyes are level with the screen."

Tuesday, May 18, 2010

Linux kernel 2.6.34 introduces new filesystems

Linus Torvalds announced this week the official release of version 2.6.34 of the Linux kernel. The update introduces two new filesystems and brings a number of other technical improvements and bug fixes.

Among the most significant additions are two new filesystems: Ceph and LogFS.

Ceph is a distributed network filesystem. It is built from the ground up to scale seamlessly and gracefully from gigabytes to petabytes and beyond. Scalability is considered in terms of workload as well as total storage. Ceph is designed to handle workloads in which tens of thousands of clients or more simultaneously access the same file or write to the same directory, usage scenarios that bring typical enterprise storage systems to their knees.

Some of the key features that make Ceph different from existing file systems:

* Seamless scaling: A Ceph filesystem can be seamlessly expanded by simply adding storage nodes (OSDs); the system proactively migrates data onto the new devices in order to maintain a balanced distribution of data.
* Strong reliability and fast recovery: All data in Ceph is replicated across multiple OSDs. If any OSD fails, data is automatically re-replicated to other devices.
* Adaptive MDS: The Ceph metadata server (MDS) is designed to dynamically adapt its behavior to the current workload. As the size and popularity of the file system hierarchy changes over time, that hierarchy is dynamically redistributed among available metadata servers in order to balance load and most effectively use server resources. Similarly, if thousands of clients suddenly access a single file or directory, that metadata is dynamically replicated across multiple servers to distribute the workload.

Project web site: ceph.newdream.net


LogFS is a filesystem designed for storage devices based on flash memory (SSDs, USB sticks, and so on). It aims to scale efficiently to large devices. In comparison to JFFS2, it offers significantly faster mount times and potentially lower RAM usage. In its current state it is still experimental.

Project web site: www.logfs.org

Wednesday, March 10, 2010

New hard disks will stop working in XP


A rather surprising article hit the front page of the BBC on Tuesday: the next generation of hard disks could cause slowdowns for XP users. Not normally the kind of thing you'd expect to be placed so prominently, but the warning it gives is a worthy one, if timed a bit oddly. The world of hard disks is set to change, and the impact could be severe. In the remarkably conservative world of PC hardware, it's not often that a 30-year-old convention gets discarded. Even this change has been almost a decade in the making.

The problem is hard disk sectors. A sector is the smallest unit of a hard disk that software can read or write. Even though a file might only be a single byte long, the operating system has to read or write at least 512 bytes to read or write that file.

512-byte sectors have been the norm for decades. The 512-byte size was itself inherited from floppy disks, making it an even older historical artifact. The age of this standard means that it's baked in to a lot of important software: PC BIOSes, operating systems, and the boot loaders that hand control from the BIOS to the operating system. All of this makes migration to a new standard difficult.

Given such entrenchment, the obvious question is, why change? We all know that the PC world isn't keen on migrating away from long-lived, entrenched standards—the continued use of IPv4 and the PC BIOS are two fine examples of 1970s and 1980s technology sticking around long past their prime, in spite of desirable replacements (IPv6 and EFI, respectively) being available. But every now and then, a change is forced on vendors in spite of their naturally conservative instincts.

Hard disks are unreliable

In this case, there are two reasons for the change. The first is that hard disks are not actually very reliable. We all like to think of hard disks as neatly storing the 1s and 0s that make up our data and then reading them back with perfect accuracy, but unfortunately the reality is nothing like as neat.

Instead of a nice digital signal written to the magnetic surface—little groups of magnets pointing "all north" or "all south"—what we actually have is groups pointing "mostly north" or "mostly south." Converting this imprecise analog data back into the crisp digital ones and zeroes that represent our data requires the analog signal to be processed.

That processing isn't enough to reliably restore the data, though. Fundamentally, it produces only educated guesses; it's probably right, but could be wrong. To counter this, hard disks store a substantial amount of error-checking data alongside each sector. This data is invisible to software, but is checked by the drive's firmware. It gives the drive a substantial ability to reconstruct data that is missing or damaged using clever math, but this comes with considerable storage overhead. In a 2004-vintage disk, for every 512 bytes of data, typically 40 bytes of error-checking data are also required, along with a further 40 bytes used to locate and indicate the start of the sector and to provide space between sectors. This means that 80 bytes of overhead accompany every 512 bytes of user data, so about 13% of the theoretical capacity of a hard disk is gone automatically, just to account for the inevitable errors that come up when reading and interpreting the analog signal stored on the disk. With this 40-byte overhead, the drive can correct something like 400 consecutive unreadable bits. Longer codes could recover from longer errors, but the trade-off is that they eat further into storage capacity.

Higher areal density is a blessing and a curse

This has been the status quo for many years. What's changing to make that a problem now? Throughout that period, areal density—the amount of data stored in a given disk area—has been on the rise. Current disks have an areal density typically around 400 Gbit/square inch; five years ago, the number would be closer to 100. The problem with packing all these bits into ever decreasing areas is that it's making the analog signal on the disk get increasingly worse. The signals are weaker, there's more interference from adjacent data, and the disk is more sensitive to minor fluctuations in voltages and other suboptimal conditions when writing.

This weaker analog signal in turn places greater demands on the error-checking data. More errors are happening more of the time, with the result that those 40 bytes are not going to be enough for much longer. Typical consumer-grade hard drives have a target of one unreadable bit for every 10^14 bits read from disk (10^14 bits is about 12 TB, so if you have six 2 TB disks in an array, that array probably has an error on it); enterprise drives and some consumer disks claim one in every 10^15 bits, which is substantially better. The increased areal densities mean that the probability of 400 consecutive errors is increasing, which means that if manufacturers want to hit that one-in-10^14 target, they're going to need better error checking. An 80-byte error-checking block per sector would double the number of errors that can be corrected, up to 800 bits, but would also mean that about 19% of the disk's capacity was taken up by overheads, with only 81% available for user data.

In the past, enlarging the error correction data was viable; the increasing areal densities offered more space than the extra correction data used, for a net growth in available space. A decade ago, only 24 bytes were needed per sector, with 40 bytes necessary in 2004, and probably more in more recent disks. As long as the increase in areal density is greater than the increase in error correcting overhead (to accommodate signal loss from the increase in areal density), hard drives can continue to get larger. But hard drive manufacturers are now getting close to the point where each increase in areal density requires such a large increase in error correcting data that the areal density improvement gets cancelled out anyway!

Making 4096 bytes the new standard

Instead of storing 512-byte sectors, hard disks will start using 4096-byte sectors. 4096 is a good size for this kind of thing. For one, it matches the standard size of allocation units in the NTFS filesystem, which nowadays is probably the most widely used filesystem on personal computers. For another, it matches the standard size of memory pages on x86 systems. Memory allocations on x86 systems are generally done in multiples of 4096 bytes, and correspondingly, many disk operations (such as reading from or writing to the pagefile, or reading in executable programs), which interact intimately with the memory system, are likewise done in multiples of 4096 bytes.

4096-byte sectors don't solve the analog problem—signals are getting weaker, and noise is getting stronger, and only reduced densities or some breakthrough in recording technology are going to change that—but they help substantially with the error-correcting problem. Due to the way error-correcting codes work, larger sectors require relatively less error-correcting data to protect against errors of the same size. A 4096-byte sector is equivalent to eight 512-byte sectors. With 40 bytes per sector for finding sector starts and 40 bytes for error correction, protecting against 400 error bits, storing 4096 bytes as eight legacy sectors requires (8 x 512 + 8 x 40 + 8 x 40) = 4736 bytes; 4096 of data, 640 of overhead.

With 4096-byte sectors, only one 40-byte sector start is needed, and achieving protection against 400-bit errors still requires only 40 bytes of error-checking data, for a total of (1 x 4096 + 1 x 40 + 1 x 40) = 4176 bytes; 4096 of data, 80 of overhead. That would be more efficient, but wouldn't protect against errors any better. (In fact, it would be somewhat worse, since the eight-sector version could protect against the unlikely event of eight such errors, as long as they were one per sector.) So instead, vendors might use, say, 100-byte error checking per sector. 100 bytes per sector can correct up to 1000 consecutive bad bits; for the foreseeable future, this should be "good enough" to achieve the specified error rates. With 100 bytes per sector, plus the 40-byte spacing (etc.), this gives an overhead of just 140 bytes per sector, allowing about 96% of the disk's capacity to be used.
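For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope calculation using the article's illustrative figures (a 40-byte sector start/spacer throughout, 40 bytes of error checking per 512-byte sector, 100 bytes per 4096-byte sector); real drives will differ in the details:

/* Back-of-the-envelope check of the overhead figures quoted above.
 * The byte counts are the article's illustrative numbers, not a drive spec. */
#include <stdio.h>

int main(void)
{
    const double spacer = 40.0, ecc_512 = 40.0, ecc_4k = 100.0;

    /* Eight legacy 512-byte sectors: 8 x (512 + 40 + 40) = 4736 bytes on disk. */
    double legacy_on_disk = 8.0 * (512.0 + spacer + ecc_512);
    printf("512-byte sectors:  %.1f%% of the platter holds user data\n",
           100.0 * (8.0 * 512.0) / legacy_on_disk);

    /* One 4096-byte sector: 4096 + 40 + 100 = 4236 bytes on disk. */
    double big_on_disk = 4096.0 + spacer + ecc_4k;
    printf("4096-byte sectors: %.1f%% of the platter holds user data\n",
           100.0 * 4096.0 / big_on_disk);

    return 0;
}

The numbers come out to roughly 86.5 percent usable for legacy formatting and roughly 96.7 percent for 4096-byte sectors, in line with the "about 13%" overhead and "about 96%" figures quoted above.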

In one fell swoop, this change provides greater robustness against the problems caused by increasing areal density, and more efficient encoding of the data on disk. That's good news, except for that whole "legacy" thing. The 512 byte sector assumption is built in to a lot of software.

A 512-byte leaden albatross

As far back as 1998, IBM started indicating to the hard disk manufacturing community that sectors would have to be enlarged to allow for robust error correction. In 2000, IDEMA, the International Disk Drive Equipment and Materials Association, put together a task force to establish a large sector standard, the Long Data Block Committee. After initially considering, but ultimately rejecting, a 1024-byte interim format, in March 2006 the committee finalized its specification and committed to 4096-byte sectors. Phoenix produced preliminary BIOS support for the specification in 2005, and Microsoft, for its part, ensured that Windows Vista would support the new sector size; Windows Server 2008, Windows 7, and Windows Server 2008 R2 all support it as well. Mac OS X supports it, and Linux kernels since September 2009 also support it.

The big obvious name missing from this list is Windows XP (and its server counterpart, Windows Server 2003). Windows XP (along with old Linux kernels) has, somewhere within its code, a fixed assumption of 512 byte sectors. Try to use it with hard disks with 4096 byte sectors and failure will ensue. Cognizant of this problem, the hard disk vendors responded with, well, a long period of inaction. Little was done to publicize the issue, no effort was made to force the issue by releasing large sector disks; the industry just sat on its hands doing nothing.

Source: Ars Technica

Friday, June 12, 2009

E-Ink: a revolutionary technology still waiting to be tapped

The recent release of Amazon's e-reader, the Kindle, has left everyone awed and prompted industry leaders to release similar devices. Apart from e-book readers, e-ink is also used in the manufacture of various displays, such as flexible readers, hoardings, and mobile displays. But how many of us know about this highly power-efficient technology? Below is an excerpt from the official website of E Ink.

The E Ink Corporation was founded in 1997 by Joseph Jacobson, a professor in the MIT Media Lab.
Electronic ink is a proprietary material that is processed into a film for integration into electronic displays. Although revolutionary in concept, electronic ink is a straightforward fusion of chemistry, physics and electronics to create this new material. The principal components of electronic ink are millions of tiny microcapsules, about the diameter of a human hair. In one incarnation, each microcapsule contains positively charged white particles and negatively charged black particles suspended in a clear fluid. When a negative electric field is applied, the white particles move to the top of the microcapsule where they become visible to the user. This makes the surface appear white at that spot. At the same time, an opposite electric field pulls the black particles to the bottom of the microcapsules where they are hidden. By reversing this process, the black particles appear at the top of the capsule, which now makes the surface appear dark at that spot.



To form an E Ink electronic display, the ink is printed onto a sheet of plastic film that is laminated to a layer of circuitry. The circuitry forms a pattern of pixels that can then be controlled by a display driver. These microcapsules are suspended in a liquid "carrier medium" allowing them to be printed using existing screen printing processes onto virtually any surface, including glass, plastic, fabric and even paper. Ultimately electronic ink will permit most any surface to become a display, bringing information out of the confines of traditional devices and into the world around us.