Wednesday, 19 December 2012

Invisible QR codes tackle counterfeit bank notes

Sep. 12, 2012 — Researchers have created an invisible quick response (QR) code in an attempt to increase the security of printed documents and reduce the possibility of counterfeiting, a problem that costs governments and private industries billions of pounds each year.

Publishing their research today, 12 September, in IOP Publishing's journal Nanotechnology, the researchers from the University of South Dakota and South Dakota School of Mines and Technology believe the new style of QR code could also be used to authenticate virtually any solid object.

The QR code is made of tiny nanoparticles combined with blue and green fluorescent inks, which are invisible until illuminated with laser light. It is generated using computer-aided design (CAD) and printed onto a surface with an aerosol jet printer. The development process can be viewed in this video: http://www.youtube.com/watch?v=5eqtQq1Ol14

According to the researchers, the QR code adds a level of security beyond existing anti-counterfeiting measures, because the complexity of the production process makes it very difficult to replicate.

The combination of the blue and green inks also enabled the researchers to experiment with a variety of characters and symbols in different colours and sizes, varying from microscopic to macroscopic. Embedding these into the QR code further increases the level of security.

Under normal lighting conditions the QR code is invisible but becomes visible when near infra-red light is passed over it. This process, known as upconversion, involves the absorption of photons by the nanoparticles at a certain wavelength and the subsequent emission of photons at a shorter wavelength.
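
For a sense of the energies involved: in two-photon upconversion, the energies of the absorbed photons add, so the emitted light sits at roughly half the excitation wavelength. A back-of-the-envelope check in Python (the 980 nm excitation value is an assumption for illustration; the article does not state the laser wavelength used):

```python
# Back-of-the-envelope upconversion arithmetic (illustrative values only).
# Photon energy E = h*c / wavelength; in two-photon upconversion the
# nanoparticle absorbs two photons and emits one with (at most) their
# combined energy, i.e. at a shorter wavelength.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_nm: float) -> float:
    """Energy of a single photon in joules."""
    return H * C / (wavelength_nm * 1e-9)

excitation_nm = 980.0  # assumed near-infrared excitation wavelength
e_two_photons = 2 * photon_energy(excitation_nm)

# Ideal (lossless) emission wavelength after combining two photons:
emission_nm = H * C / e_two_photons * 1e9
print(f"Two {excitation_nm:.0f} nm photons can yield one ~{emission_nm:.0f} nm photon")
# -> ~490 nm, blue-green visible light, consistent with the inks described
```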

Once illuminated by the near infra-red light, the QR code can be read by a smartphone in the conventional manner.

QR codes can hold one hundred times more information than conventional barcodes and have traditionally been used in advertising and marketing. For example, simply scanning a QR code on a commercial product with a smartphone will take the user to a company's website, giving them more information about the product they are scanning.

The nanoparticles used to print the QR code are both chemically and mechanically stable, meaning they can withstand the stresses and strains of being placed on paper. To prove this, the researchers printed the QR code onto a piece of paper and randomly folded it fifty times; the code remained readable.

In addition to being printed on paper, the QR code has also been printed on glass and a flexible plastic film, demonstrating its applicability to a wide variety of solid commercial goods. The fact that the QR code is invisible is also beneficial as it would not interfere with the physical appearance of the goods.

The whole procedure took one-and-a-half hours, from the CAD process to the printing and then the scanning; however, the researchers are confident that once the QR file has been created, the printing en masse for commercial use would take around 10-15 minutes.

Lead author of the study, Jeevan Meruga, said: "The QR code is tough to counterfeit. We can also change our parameters to make it even more difficult to counterfeit, such as controlling the intensity of the upconverting light or using inks with a higher weight percentage of nanoparticles.

"We can take the level of security from covert to forensic by simply adding a microscopic message in the QR code, in a different coloured upconverting ink, which then requires a microscope to read the upconverted QR code."

Story Source:

The above story is reprinted from materials provided by Institute of Physics (IOP), via AlphaGalileo.


Journal Reference:

Jeevan M Meruga, William M Cross, P Stanley May, QuocAnh Luu, Grant A Crawford, Jon J Kellar. Security printing of covert quick response codes using upconverting nanoparticle inks. Nanotechnology, 2012; 23 (39): 395201 DOI: 10.1088/0957-4484/23/39/395201


Cell network security holes revealed, with an app to test your carrier

May 21, 2012 — Popular firewall technology designed to boost security on cellular networks can backfire, unwittingly revealing data that could help a hacker break into Facebook and Twitter accounts, a new study from the University of Michigan shows.

The researchers also developed an Android app that tells phone users when they're on a vulnerable network. They will present their work May 22 at the IEEE Symposium on Security and Privacy in San Francisco.

Using Android smartphones, computer science associate professor Z. Morley Mao and doctoral student Zhiyun Qian revealed how an attacker could hijack a TCP Internet connection by taking advantage of publicly available information on smartphones; users' willingness to download untrusted apps; and network firewall middleboxes, which block data bundles that don't appear to be part of the flow of information traffic.

The researchers detected these middleboxes on 32 percent of the nearly 150 networks they tested worldwide.

"Firewall middleboxes are supposed to protect against this kind of attack, but it turns out they do the opposite," Qian said. "Most vendors and carriers that deploy such firewall middleboxes still believe they are safe and we want them to be aware of this design flaw."

Middleboxes monitor the "sequence numbers" of data packets on their way to mobile devices. When you snap and share a photo with a friend, for example, it gets chopped into numerous packets before it's sent across the network. Your friend's smartphone looks to the sequence numbers to put the picture back together. Middleboxes could help hackers use the process of elimination to home in on a number in the right range.

"An attacker can try to guess at sequence numbers. It's usually hard to get feedback on whether a guessed number is correct, but the firewall middlebox makes this possible," Qian said. "The attacker can try a range of sequence numbers. The firewall will only allow one through if it is in the valid range."

In their test, the researchers used a binary search process that can rule out half of the possible numbers at a time. In 32 rounds, which take just seconds to complete, this process guarantees that they'll arrive at a valid number and get a packet through.
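
To see why 32 rounds suffice: TCP sequence numbers are 32-bit values, and each probe that the firewall either passes or drops eliminates half of the remaining candidates. Below is a minimal sketch of the search logic, with the firewall's pass/drop feedback abstracted into an oracle function (in the real attack this feedback comes from the spyware reading the phone's packet counters; the function names here are invented):

```python
# Simplified model of off-path TCP sequence-number inference.
# Assumption: the attacker has an oracle telling whether the connection's
# expected sequence number falls in a queried interval. In practice this
# is realized by sending probe packets and watching, via on-phone
# spyware, whether the firewall middlebox let them through.

import random

SEQ_SPACE = 2 ** 32  # TCP sequence numbers are 32-bit

def make_oracle(true_seq: int):
    def in_range(lo: int, hi: int) -> bool:
        return lo <= true_seq < hi
    return in_range

def infer_sequence_number(in_range) -> int:
    lo, hi = 0, SEQ_SPACE
    rounds = 0
    while hi - lo > 1:              # halve the candidate range each round
        mid = (lo + hi) // 2
        if in_range(lo, mid):
            hi = mid
        else:
            lo = mid
        rounds += 1
    print(f"found after {rounds} rounds")  # always 32 for a 2**32 space
    return lo

true_seq = random.randrange(SEQ_SPACE)
assert infer_sequence_number(make_oracle(true_seq)) == true_seq
```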

How does the attacker know he has succeeded? That's where the Android spyware comes in (smartphone malware is already very popular, the researchers say, and it wouldn't be hard for an attacker to add this capability into an existing program). The intelligence the spyware needs is not privileged information. It doesn't need special administrator or root access. It would just read a couple of the phone's publicly available incoming packet counters and let the attacker know when the counters advanced.

Armed with a valid sequence number, the hacker could spoof Facebook or Twitter's HTTP (as opposed to the more secure HTTPS) login page and capture the user's password.

The attack Qian and Mao propose illustrates a susceptibility in the so-called sandboxing safety mechanism that smartphone platforms utilize. Sandboxing isolates an app to a certain piece of memory, with the intention of protecting the rest of the phone from any tampering.

"What's surprising here is that this shows how malware can, in a sense, reach out of its sandbox and tamper with other legitimate apps such as your browser," Qian said.

Qian's app, Firewall Middlebox Detection, is available free of charge at https://play.google.com/store/apps/details?id=edu.umich.eecs.firewall

The paper is called "Off-Path TCP Sequence Number Inference Attack: How Firewall Middleboxes Reduce Security."

Story Source:

The above story is reprinted from materials provided by University of Michigan.


Risk-based passenger screening could make air travel safer

Jan. 31, 2012 — Anyone who has flown on a commercial airline since 2001 is well aware of increasingly strict measures at airport security checkpoints. A study by Illinois researchers demonstrates that intensive screening of all passengers actually makes the system less secure by overtaxing security resources.

University of Illinois computer science and mathematics professor Sheldon H. Jacobson, in collaboration with Adrian J. Lee at the Central Illinois Technology and Education Research Institute, explored the benefit of matching passenger risk with security assets. The pair detailed their work in the journal Transportation Science.

"A natural tendency, when limited information is available about from where the next threat will come, is to overestimate the overall risk in the system," Jacobson said. "This actually makes the system less secure by over-allocating security resources to those in the system that are low on the risk scale relative to others in the system."

When the population risk is overestimated, a larger proportion of high-risk passengers receive too little screening while a larger proportion of low-risk passengers are subjected to too much. With security resources devoted to the many low-risk passengers, fewer resources remain to identify and address high-risk passengers. Nevertheless, current policies favor broad screening.

"One hundred percent checked baggage screening and full-body scanning of all passengers is the antithesis of a risk-based system," Jacobson said. "It treats all passengers and their baggage as high-risk threats. The cost of such a system is prohibitive, and it makes the air system more vulnerable to successful attacks by sub-optimally allocating security assets."

In an effort to address this problem, the Transportation Security Administration (TSA) introduced a pre-screening program in 2011, available to select passengers on a trial basis. Jacobson's previous work has indicated that resources could be more effectively invested if the lowest-risk segments of the population -- frequent travelers, for instance -- could pass through security with less scrutiny since they are "known" to the system.

A challenge with implementing such a system is accurately assessing the risk of each passenger and using such information appropriately. In the new study, Jacobson and Lee developed three algorithms dealing with risk uncertainty in the passenger population. Then, they ran simulations to demonstrate how their algorithms, applied to a risk-based screening method, could estimate risk in the overall passenger population -- instead of focusing on each individual passenger -- and how errors in this estimation procedure can be mitigated to reduce the risk to the overall system.
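
A toy model conveys the intuition, though it is not the authors' algorithms: fix a screening budget and compare spreading it uniformly against allocating it in proportion to estimated risk. The detection model below (diminishing returns in screening effort) and all of the numbers are assumptions made purely for illustration:

```python
# Toy comparison of uniform vs. risk-based allocation of a fixed
# screening budget. Illustrative only; the detection model is an
# assumption, not the model from the Transportation Science paper.

import random

random.seed(1)
N = 10_000              # passengers
BUDGET = N * 1.0        # total units of screening effort

# Assumed risk scores: most passengers low risk, a few high risk.
risks = [random.random() ** 8 for _ in range(N)]
total_risk = sum(risks)

def detection_prob(effort: float) -> float:
    """Chance screening catches a threat; diminishing returns in effort."""
    return 1.0 - 0.5 ** effort

# Expected missed threats under each policy, weighting each passenger's
# miss probability by their risk score.
uniform = sum(r * (1 - detection_prob(BUDGET / N)) for r in risks)
risk_based = sum(
    r * (1 - detection_prob(BUDGET * r / total_risk)) for r in risks
)

print(f"expected misses, uniform screening:    {uniform:.1f}")
print(f"expected misses, risk-based screening: {risk_based:.1f}")
# With skewed risks, the risk-based policy misses far fewer threats.
```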

They found that risk-based screening, such as the TSA's new Pre-Check program, increases the overall expected security. Rating a passenger's risk relative to the entire flying population allows more resources to be devoted to passengers with a high risk relative to the passenger population.

The paper also discusses scenarios of how terrorists may attempt to thwart the security system -- for example, blending in with a high-risk crowd so as not to stand out -- and provides insights into how risk-based systems can be designed to mitigate the impact of such activities. "The TSA's move toward a risk-based system is designed to more accurately match security assets with threats to the air system," Jacobson said. "The ideal situation is to create a system that screens passengers commensurate with their risk. Since we know that very few people are a threat to the system, relative risk rather than absolute risk provides valuable information."

The National Science Foundation and the U.S. Air Force Office of Scientific Research supported this work.

Story Source:

The above story is reprinted from materials provided by University of Illinois at Urbana-Champaign.


Journal Reference:

A. J. Lee, S. H. Jacobson. Addressing Passenger Risk Uncertainty for Aviation Security Screening. Transportation Science, 2011; DOI: 10.1287/trsc.1110.0384


Data storage: Going with the grain

Oct. 25, 2012 — Shrinking each bit of information stored in a magnetic thin film to the physical size of a single grain could improve computer hard drives.

Despite the increasing competition from alternative technologies such as solid-state drives, magnetic disks remain an important data-storage technology. They are not only reliable and inexpensive, but their storage density has potential for even further improvement. One method under current investigation is storing each data bit in a single magnetic grain of the thin film of the recording medium, rather than in several grains as in conventional hard drives. Storage in single grains only would increase stability and reduce the magnetic fields required to write bits.

By modeling write processes in hard disks, Melissa Chua and her co-workers at the A*STAR Data Storage Institute, Singapore, have demonstrated how this is possible in practice. "The hope is that such a grain-based magnetic recording can extend storage densities by an order of magnitude, to achieve ten terabits per square inch," she says.

Thin magnetic films for data storage coat the surface of the platters in hard-disk drives and consist of many neighboring nanometer-sized grains. As the storage density of magnetic films has increased over the years, the area used to store each bit has become comparable to the size of these grains.

Achieving single-grain storage requires a solid understanding of the write processes. Two theoretical models are available to describe these processes. One is an analytical model that uses a simplified description of the magnetic fields within the grains and within the write head of the hard disk. This model achieves fast and easy-to-implement modeling of the recording process, Chua notes.

The second model is a statistical approach that uses tabulated parameter values describing how a grain's magnetic orientation switches when information is written to the disk. These parameters are derived from detailed simulations of the magnetic fields in the grains and in the drive's write head. From them, the researchers computed the probability that a grain switches under given conditions. This detailed approach is more accurate, but also more time-intensive, than the analytical one.
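
For a flavor of what such tabulated switching parameters describe, the sketch below evaluates a generic thermal-activation (Néel-Arrhenius) switching probability. This is a textbook-style placeholder chosen for illustration, not the A*STAR team's model, and every numeric value in it is an assumption:

```python
# Illustrative grain-switching probability from thermal activation
# (Neel-Arrhenius). A real study tabulates such probabilities from
# detailed field simulations; this generic model just shows the shape.

import math

def switch_probability(h_over_hk: float, ku_v_over_kt: float,
                       t_ns: float = 1.0, f0_hz: float = 1e9) -> float:
    """Probability that a grain reverses within t_ns nanoseconds.

    h_over_hk:    applied write field relative to the grain's
                  anisotropy (switching) field, 0..1
    ku_v_over_kt: zero-field energy barrier KuV/kBT (thermal
                  stability factor, ~60 for archival media)
    """
    barrier = ku_v_over_kt * max(0.0, 1.0 - h_over_hk) ** 2
    rate = f0_hz * math.exp(-barrier)   # attempt frequency * Boltzmann factor
    return 1.0 - math.exp(-rate * t_ns * 1e-9)

for h in (0.6, 0.8, 0.9, 1.0):
    print(f"H/Hk = {h:.1f}: P(switch) = {switch_probability(h, 60):.3f}")
# Switching stays negligible until the write field approaches the
# grain's switching field, then rises sharply -- the behavior a
# write model has to capture grain by grain.
```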

Chua and her co-workers successfully applied both models to the grain-based storage process. They simulated the switching of single grains with each method and compared the results. By adjusting the relevant process parameters, they achieved good agreement between the two models. Having shown that both are suitable, they note that the choice between them comes down to specifics such as the desired accuracy. Either way, Chua says, "Both models enable the system-level testing of future magnetic recording technologies."

The A*STAR-affiliated researchers contributing to this research are from the Data Storage Institute.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR), via ResearchSEA.


Frankenstein programmers test a cybersecurity monster

Aug. 27, 2012 — In order to catch a thief, you have to think like one.

UT Dallas computer scientists are trying to stay one step ahead of cyber attackers by creating their own monster. Their monster can cloak itself as it steals and reconfigures information in a computer program.

In part because of the potentially destructive nature of their technology, its creators have named the software system Frankenstein, after the monster-creating scientist in Mary Shelley's novel Frankenstein; or, The Modern Prometheus.

"Shelley's story is an example of a horror that can result from science, and similarly, we intend our creation as a warning that we need better detections for these types of intrusions," said Dr. Kevin Hamlen, associate professor of computer science at UT Dallas who created the software, along with his doctoral student Vishwath Mohan. "Criminals may already know how to create this kind of software, so we examined the science behind the danger this represents, in hopes of creating counter measures."

Frankenstein is not a computer virus, which is a program that can multiply and take over other machines. But, it could be used in cyber warfare to provide cover for a virus or another type of malware, or malicious software.

In order to avoid antivirus software, malware typically mutates every time it copies itself onto another machine. Antivirus software figures out the pattern of change and continues to scan for sequences of code that are known to be suspicious.

Frankenstein evades this scanning mechanism. It takes code from programs already on a computer and repurposes it, stringing it together to accomplish the malware's malicious task with new instructions.
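
Conceptually, this resembles "gadget" harvesting in return-oriented programming: scan benign binaries for short code sequences with known effects, then chain sequences whose combined effect performs the desired computation. The sketch below shows the stitching idea on toy "instructions" rather than real machine code; every name in it is invented:

```python
# Toy illustration of stitching a computation out of code fragments
# ("gadgets") found in benign host programs. Real systems work on actual
# machine-code sequences; here each gadget is just a tiny function with
# a known semantic effect.

# Fragments "discovered" in benign binaries, indexed by their effect.
harvested_gadgets = {
    "increment": lambda x: x + 1,
    "double":    lambda x: x * 2,
    "negate":    lambda x: -x,
}

def stitch(effects):
    """Compose harvested gadgets into one function that applies the
    desired sequence of effects."""
    def program(x):
        for effect in effects:
            x = harvested_gadgets[effect](x)
        return x
    return program

# The same semantic task ("2*x + 2") can be realized by different
# gadget chains, so each generated instance looks different on disk
# even though it computes the same thing.
variant_a = stitch(["increment", "double"])               # (x + 1) * 2
variant_b = stitch(["double", "increment", "increment"])  # 2x + 2
assert variant_a(5) == variant_b(5) == 12
```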

"We wanted to build something that learns as it propagates," Hamlen said. "Frankenstein takes from what is already there and reinvents itself."

"Just as Shelley's monster was stitched from body parts, our Frankenstein also stitches software from original program parts, so no red flags are raised," he said. "It looks completely different, but its code is consistent with something normal."

Hamlen said Frankenstein could be used to aid government counterterrorism efforts by providing cover for infiltration of terrorist computer networks. He is part of the Cyber Security Research and Education Center in the Erik Jonsson School of Engineering and Computer Science.

The UT Dallas research is the first published example describing this type of stealth technology, Hamlen said.

"As a proof-of-concept, we tested Frankenstein on some simple algorithms that are completely benign," Hamlen said. "We did not create damage to anyone's systems."

The next step, Hamlen said, is to create more complex versions of the software.

Frankenstein was described in a paper published online (https://www.usenix.org/conference/woot12/frankenstein-stitching-malware-benign-binaries) in conjunction with a presentation at a recent USENIX Workshop on Offensive Technologies.

The research was supported by the National Science Foundation and Air Force Office of Scientific Research.

Story Source:

The above story is reprinted from materials provided by University of Texas, Dallas.


Major step taken towards 'unbreakable' message exchange

Aug. 3, 2012 — Single particles of light, also known as photons, have been produced and implemented into a quantum key distribution (QKD) link, paving the way for unbreakable communication networks.

The results of the experiment, undertaken by a close collaboration of researchers based in Wuerzburg, Munich and Stuttgart, have been published August 2, in the Institute of Physics and German Physical Society's New Journal of Physics.

The single photons were produced using two devices made of semiconductor nanostructures that emitted a photon each time they were excited by an electrical pulse. The two devices were made up of different semiconductor materials so they emitted photons with different colours.

QKD is not a new phenomenon and has been in commercial use for several years; one of its first uses was to encode the national election ballot results in Switzerland in 2007. The techniques currently being used on a commercial scale rely on lasers to create the source of photons; however, researchers hope to further increase the efficiency of QKD by returning to the original concept of using single photons for generating a secure key.

One of the project coordinators, Dr Sven Hoefling, said: "The nature of light emitted by lasers is very different from light emitted by single-photon sources. Whereas the emission events in lasers occur completely at random in time, an ideal single-photon source emits exactly one photon upon a trigger event, which in our case is an electrical pulse.

"The random nature of emission events from strongly attenuated lasers sometimes results in the emission of two photons very close to each other. Such multiple photon events can be utilized by an eavesdropper to extract information.

"Single photon sources, such as the ones used in our study, are predestined for use in the secure communication systems using quantum communication protocols."

QKD is a process that enables two parties, 'Alice' and 'Bob', to share a secret key that can then be used to protect data they want to send to each other. The secret key is made up of a stream of photons that 'spin' in different directions -- vertically, horizontally or diagonally -- according to the sender's preferences.

The laws of physics state that it is not possible to measure the state, or 'spin', of a particle like a photon without altering it, so if 'Eve' attempted to intercept the key that was sent between 'Alice' and 'Bob', it would become instantly noticeable.
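
The scheme sketched here is essentially the BB84 protocol. The following simulation shows the sifting logic -- random bases, public basis comparison, and a shared key from the matching positions -- with photons modeled as simple bit/basis pairs (a modeling assumption; the physics is not simulated):

```python
# Minimal BB84-style key exchange simulation. Photons are modeled as
# (bit, basis) pairs; 'basis' stands in for the rectilinear/diagonal
# polarization choices described above.

import random

N = 32

# Alice encodes random bits in random bases.
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]

# Bob measures each photon in a randomly chosen basis; when his basis
# differs from Alice's, quantum mechanics gives him a random outcome.
bob_bases = [random.choice("+x") for _ in range(N)]
bob_bits = [
    bit if a_basis == b_basis else random.randint(0, 1)
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: publicly compare bases and keep only matching positions.
sifted = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
key_alice = [alice_bits[i] for i in sifted]
key_bob   = [bob_bits[i] for i in sifted]

# With no eavesdropper the sifted keys agree. An intercept-resend
# attack by 'Eve' would corrupt about 25% of the sifted bits and be
# noticed when Alice and Bob publicly compare a random sample.
assert key_alice == key_bob
print(f"shared {len(key_alice)}-bit key: {''.join(map(str, key_alice))}")
```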

In their experiment, the single photons were produced with high efficiency, then made into a key and successfully transmitted from the sender to the receiver across 40 cm of free space in the laboratory.

The researchers are aware that to make this experiment more practical and commercially viable, it needs to be scaled up so that quantum keys can be sent over larger distances. To do this, quantum repeater stations need to be incorporated into the network to 'amplify' the message.

"Meanwhile, quantum keys have been sent over 500 metres of free space on top of the roofs in the centre of Munich, Germany. Several projects have also been funded to develop this technology further," continued Hoefling.

Story Source:

The above story is reprinted from materials provided by Institute of Physics (IOP), via AlphaGalileo.


Journal Reference:

Tobias Heindel, Christian A Kessler, Markus Rau, Christian Schneider, Martin Fürst, Fabian Hargart, Wolfgang-Michael Schulz, Marcus Eichfelder, Robert Roßbach, Sebastian Nauerth, Matthias Lermer, Henning Weier, Michael Jetter, Martin Kamp, Stephan Reitzenstein, Sven Höfling, Peter Michler, Harald Weinfurter, Alfred Forchel. Quantum key distribution using quantum dot single-photon emitting diodes in the red and near infrared spectral range. New Journal of Physics, 2012; 14 (8): 083001 DOI: 10.1088/1367-2630/14/8/083001


Self-adapting computer network that defends itself against hackers?

May 10, 2012 — In the online struggle for network security, Kansas State University cybersecurity experts are adding an ally to the security force: the computer network itself.

Scott DeLoach, professor of computing and information sciences, and Xinming "Simon" Ou, associate professor of computing and information sciences, are researching the feasibility of building a computer network that could protect itself against online attackers by automatically changing its setup and configuration.

DeLoach and Ou were recently awarded a five-year grant of more than $1 million from the Air Force Office of Scientific Research to fund the study "Understanding and quantifying the impact of moving target defenses on computer networks." The study, which began in April, will be the first to document whether this type of adaptive cybersecurity, called moving-target defense, can be effective. If it can work, researchers will determine if the benefits of creating a moving-target defense system outweigh the overhead and resources needed to build it.

Helping Ou and DeLoach in their investigation and research are Kansas State University students Rui Zhuang and Su Zhang, both doctoral candidates in computing and information sciences from China, and Alexandru Bardas, doctoral student in computing and information sciences from Romania.

As the study progresses the computer scientists will develop a set of analytical models to determine the effectiveness of a moving-target defense system. They will also create a proof-of-concept system as a way to experiment with the idea in a concrete setting.

"It's important to investigate any scientific evidence that shows that this approach does work so it can be fully researched and developed," DeLoach said. He started collaborating with Ou to apply intelligent adaptive techniques to cybersecurity several years ago after a conversation at a university open house.

The term moving-target defense -- a subarea of adaptive security in the cybersecurity field -- was first coined around 2008, although similar concepts have been proposed and studied since the early 2000s. The idea behind moving-target defense in the context of computer networks is to create a computer network that is no longer static in its configuration. Instead, as a way to thwart cyber attackers, the network automatically and periodically randomizes its configuration through various methods -- such as changing the addresses of software applications on the network; switching between instances of the applications; and changing the location of critical system data.
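
One way to picture a moving-target defense is a mapping from stable service names to network locations that is reshuffled on a timer: authorized users resolve names through the defender's current mapping, while an attacker's scan results go stale before they can be exploited. A minimal sketch, with all addresses, ports and intervals invented:

```python
# Toy moving-target defense: services keep logical names, but their
# network locations are periodically re-randomized. Authorized users
# resolve names through the defender's mapping; an attacker's earlier
# reconnaissance quickly becomes stale.

import random

SERVICES = ["web", "db", "auth"]
ADDRESS_POOL = [f"10.0.0.{i}" for i in range(2, 250)]
PORT_RANGE = range(1024, 65535)

def randomize_layout(rng: random.Random) -> dict:
    """Assign every service a fresh (address, port) pair."""
    addresses = rng.sample(ADDRESS_POOL, len(SERVICES))
    return {
        svc: (addr, rng.choice(PORT_RANGE))
        for svc, addr in zip(SERVICES, addresses)
    }

rng = random.Random()
layout = randomize_layout(rng)

def resolve(service: str) -> tuple:
    """What an authorized user sees: a stable name, current location."""
    return layout[service]

# Defender's reconfiguration loop (run every few minutes in practice):
for epoch in range(3):
    layout = randomize_layout(rng)
    print(f"epoch {epoch}: web service now at {resolve('web')}")
```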

Ou and DeLoach said the key is to make the network appear to an attacker to be changing chaotically while, for authorized users, the system operates normally.

"If you have a Web server, pretty much anybody in the world can figure out where you are and what software you're running," DeLoach said. "If they know that, they can figure out what vulnerabilities you have. In a typical scenario, attackers scan your system and find out everything they can about your server configuration and what security holes it has. Then they select the best time for them to attack and exploit those security holes in order to do the most damage. This could change that."

Creating a computer network that could automatically detect and defend itself against cyber attacks would substantially increase the security of online data for universities, government departments, corporations and businesses -- all of which have been the targets of large-scale cyber attacks.

In February 2011 it was discovered that the Nasdaq Stock Market's computer network had been infiltrated by hackers. Although federal investigators concluded that it was unlikely the hackers stole any information, the network's security had been left vulnerable for more than a year while the hackers visited it numerous times.

According to Ou, a moving-target defense system would shift the power imbalance that currently favors hackers -- who need only find a single security hole to exploit -- back to network administrators, whose systems would frequently wipe out whatever security privileges attackers had gained, leaving them to start again from a clean slate.

"This is a game-changing idea in cybersecurity," Ou said. "People feel that we are currently losing against online attackers. In order to fundamentally change the cybersecurity landscape and reduce that high risk we need some big, fundamental changes to the way computers and networks are constructed and organized."

Story Source:

The above story is reprinted from materials provided by Kansas State University.


Single-atom writer a landmark for quantum computing

Sep. 19, 2012 — A research team led by Australian engineers has created the first working quantum bit based on a single atom in silicon, opening the way to ultra-powerful quantum computers of the future.

In a landmark paper published September 19 in the journal Nature, the team describes how it was able to both read and write information using the spin, or magnetic orientation, of an electron bound to a single phosphorus atom embedded in a silicon chip.

"For the first time, we have demonstrated the ability to represent and manipulate data on the spin to form a quantum bit, or 'qubit', the basic unit of data for a quantum computer," says Scientia Professor Andrew Dzurak. "This really is the key advance towards realising a silicon quantum computer based on single atoms."

Dr Andrea Morello and Professor Dzurak of the UNSW School of Electrical Engineering and Telecommunications lead the team, which includes researchers from the University of Melbourne and University College London.

"This is a remarkable scientific achievement -- governing nature at its most fundamental level -- and has profound implications for quantum computing," says Dzurak.

Dr Morello says that quantum computers promise to solve complex problems that are currently impossible on even the world's largest supercomputers: "These include data-intensive problems, such as cracking modern encryption codes, searching databases, and modelling biological molecules and drugs."

The new finding follows on from a 2010 study also published in Nature, in which the same UNSW group demonstrated the ability to read the state of an electron's spin. Discovering how to write the spin state now completes the two-stage process required to operate a quantum bit.

The new result was achieved by using a microwave field to gain unprecedented control over an electron bound to a single phosphorus atom, which was implanted next to a specially designed silicon transistor. Professor David Jamieson, of the University of Melbourne's School of Physics, led the team that precisely implanted the phosphorus atom into the silicon device.
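
The "writing" step drives what is known as a Rabi oscillation: on resonance, the microwave field rotates the spin continuously between its two states, so the pulse duration selects the qubit state. A small numeric sketch of that relationship (the 1 MHz Rabi frequency is an arbitrary illustrative value, not a figure from the paper):

```python
# Resonant Rabi oscillation of a spin qubit: the probability of finding
# the spin flipped after a microwave pulse of duration t is
# P(t) = sin^2(pi * f_rabi * t). Pulse timing therefore "writes" the state.

import math

F_RABI = 1e6  # assumed Rabi frequency, 1 MHz (illustrative only)

def flip_probability(t_s: float) -> float:
    return math.sin(math.pi * F_RABI * t_s) ** 2

pi_pulse = 1 / (2 * F_RABI)   # duration that fully flips the spin: 500 ns
half_pi = pi_pulse / 2        # half that duration: equal superposition

print(f"pi pulse ({pi_pulse*1e9:.0f} ns):  P(flip) = {flip_probability(pi_pulse):.2f}")
print(f"pi/2 pulse ({half_pi*1e9:.0f} ns): P(flip) = {flip_probability(half_pi):.2f}")
```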

UNSW PhD student Jarryd Pla, the lead author on the paper, says: "We have been able to isolate, measure and control an electron belonging to a single atom, all using a device that was made in a very similar way to everyday silicon computer chips."

As Dr Morello notes: "This is the quantum equivalent of typing a number on your keyboard. This has never been done before in silicon, a material that offers the advantage of being well understood scientifically and more easily adopted by industry. Our technology is fundamentally the same as is already being used in countless everyday electronic devices, and that's a trillion-dollar industry."

The team's next goal is to combine pairs of quantum bits to create a two-qubit logic gate -- the basic processing unit of a quantum computer.

Story Source:

The above story is reprinted from materials provided by University of New South Wales, via EurekAlert!, a service of AAAS.


Journal Reference:

Jarryd J. Pla, Kuan Y. Tan, Juan P. Dehollain, Wee H. Lim, John J. L. Morton, David N. Jamieson, Andrew S. Dzurak, Andrea Morello. A single-atom electron spin qubit in silicon. Nature, 2012; DOI: 10.1038/nature11449


Georgia Tech Releases Cyber Threats Forecast for 2013

Nov. 14, 2012 — The year ahead will feature new and increasingly sophisticated means to capture and exploit user data, escalating battles over the control of online information and continuous threats to the U.S. supply chain from global sources. Those were the findings made by the Georgia Tech Information Security Center (GTISC) and the Georgia Tech Research Institute (GTRI) in today's release of the Georgia Tech Emerging Cyber Threats Report for 2013. The report was released at the annual Georgia Tech Cyber Security Summit, a gathering of industry and academic leaders who have distinguished themselves in the field of cyber security.

According to GTISC, GTRI and the experts cited in the report, specific threats to follow over the coming year include, among others:

• Cloud-based Botnets -- The ability to create vast, virtual computing resources will further convince cyber criminals to look for ways to co-opt cloud-based infrastructure for their own ends. One possible example is for attackers to use stolen credit card information to purchase cloud computing resources and create dangerous clusters of temporary virtual attack systems.

• Search History Poisoning -- Cyber criminals will continue to manipulate search engine algorithms and other automated mechanisms that control what information is presented to Internet users. Moving beyond typical search-engine poisoning, researchers believe that manipulating users' search histories may be a next step in ways that attackers use legitimate resources for illegitimate gains.

• Mobile Browser and Mobile Wallet Vulnerabilities -- While only a very small number of U.S. mobile devices show signs of infection, the explosive proliferation of smartphones will continue to tempt attackers to exploit user and technology-based vulnerabilities, particularly in the browser and in digital wallet apps.

• Malware Counteroffensive -- The developers of malicious software will employ various methods to hinder malware detection, such as hardening their software with techniques similar to those employed in Digital Rights Management (DRM), and exploiting the wealth of new interfaces and novel features on mobile devices.

"Every year, security researchers and experts see new evolutions in cyber threats to people, businesses and governments," said Wenke Lee, director of GTISC. "In 2013, we expect the continued movement of business and consumer data onto mobile devices and into the cloud will lure cyber criminals into attacking these relatively secure, but extremely tempting, technology platforms. Along with growing security vulnerabilities within our national supply chain and healthcare industry, the security community must remain proactive, and users must maintain vigilance, over the year ahead."

"Our adversaries, whether motivated by monetary gain, political/social ideology or otherwise, know no boundaries, making cyber security a global issue," said Bo Rotoloni, director of GTRI's Cyber Technology and Information Security Laboratory (CTISL). "Our best defense on the growing cyber warfront is found in cooperative education and awareness, best-of-breed tools and robust policy developed collaboratively by industry, academia and government."

Today's Georgia Tech Cyber Security Summit is one forum where the IT security ecosystem can gather to discuss and debate the evolving nature of cyber threats, and to chart the course for creating solutions through collaboration among industry, government and academia. The 2012 Summit was keynoted by Brendan Hannigan of IBM Internet Security and included a panel of security experts from Damballa, AirWatch, E*TRADE, MAAWG, Pindrop Security and Symantec Research Lab.

The Georgia Institute of Technology is one of the nation's leading public research universities and the home of groundbreaking cyber security research and academic initiatives through GTISC, GTRI and other facilities across campus. These efforts are focused on producing technology and innovation that will help drive economic growth, while improving human life on a global scale.

The report can be downloaded by visiting http://www.gtsecuritysummit.com/report.html.

Story Source:

The above story is reprinted from materials provided by Georgia Institute of Technology, via Newswise.


Disentangling information from photons

July 12, 2012 — Theoretical physicist Filippo Miatto and colleagues from the University of Strathclyde, Glasgow, UK, have found a new method of reliably assessing the information contained in photon pairs used for applications in cryptography and quantum computing. The findings, published in The European Physical Journal D, are so robust that they enable access to the information even when the measurements on photon pairs are imperfect.

The authors focused on photon pairs described as being in a state of quantum entanglement: i.e., made up of many superimposed pairs of states. This means that these photon pairs are intimately linked by common physical characteristics such as a spatial property called orbital angular momentum, which can display a different value for each superimposed state.

Miatto and his colleagues relied on a tool capable of decomposing the photon pairs' superimposed states onto the multiple dimensions of a Hilbert space, an abstract mathematical space. This approach allowed them to quantify the photon pairs' level of entanglement.

The authors showed that the higher the degree of entanglement, the more accessible the information that photon pairs carry. This means that generating entangled photon pairs with a sufficiently high dimension -- that is with a high enough number of decomposed photon states that can be measured -- could help reveal their information with great certainty.
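
The decomposition referred to is the Schmidt decomposition, which in practice amounts to a singular value decomposition of the two-photon amplitude matrix; the spread of the resulting coefficients measures the effective dimensionality of the entanglement. A small numpy sketch with an invented amplitude matrix (the values are not from the paper):

```python
# Schmidt decomposition of a toy two-photon state via SVD. The Schmidt
# coefficients (singular values) quantify entanglement: the more evenly
# they are spread, the higher the effective dimensionality.

import numpy as np

# Invented joint amplitude matrix c[m, n] over a few orbital-angular-
# momentum modes; rows index photon A's mode, columns photon B's.
c = np.array([
    [0.8, 0.1, 0.0],
    [0.1, 0.5, 0.1],
    [0.0, 0.1, 0.3],
])
c = c / np.linalg.norm(c)          # normalize the state

coeffs = np.linalg.svd(c, compute_uv=False)
probs = coeffs ** 2                # Schmidt probabilities, sum to 1

schmidt_number = 1 / np.sum(probs ** 2)    # effective number of modes
entropy = -np.sum(probs * np.log2(probs))  # entanglement entropy in bits

print(f"Schmidt probabilities: {np.round(probs, 3)}")
print(f"effective dimension K = {schmidt_number:.2f}")
print(f"entanglement entropy  = {entropy:.2f} bits")
```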

As a result, even an imperfect measurement of the photons' physical characteristics does not reduce the amount of information that can be gained, as long as the level of entanglement is initially strong. These findings could lead to quantum information applications with greater resilience to errors and higher information density per photon pair. They could also lead to cryptography applications in which fewer photons carry more information about complex quantum encryption keys.

Story Source:

The above story is reprinted from materials provided by Springer.


Journal Reference:

F. M. Miatto, T. Brougham, A. M. Yao. Cartesian and polar Schmidt bases for down-converted photons. The European Physical Journal D, 2012; 66 (7) DOI: 10.1140/epjd/e2012-30063-y


Quantum physics enables perfectly secure cloud computing

Jan. 29, 2012 — Researchers have succeeded in combining the power of quantum computing with the security of quantum cryptography and have shown that perfectly secure cloud computing can be achieved using the principles of quantum mechanics. They have performed an experimental demonstration of quantum computation in which the input, the data processing, and the output remain unknown to the quantum computer.

The international team of scientists will publish the results of the experiment, carried out at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and the Institute for Quantum Optics and Quantum Information (IQOQI), in the forthcoming issue of Science.

Quantum computers are expected to play an important role in future information processing since they can outperform classical computers at many tasks. Considering the challenges inherent in building quantum devices, it is conceivable that future quantum computing capabilities will exist only in a few specialized facilities around the world -- much like today's supercomputers. Users would then interact with those specialized facilities in order to outsource their quantum computations. The scenario follows the current trend of cloud computing: central remote servers are used to store and process data -- everything is done in the "cloud." The obvious challenge is to make globalized computing safe and ensure that users' data stays private.

The latest research, to appear in Science, reveals that quantum computers can provide an answer to that challenge. "Quantum physics solves one of the key challenges in distributed computing. It can preserve data privacy when users interact with remote computing centers," says Stefanie Barz, lead author of the study. This newly established fundamental advantage of quantum computers enables the delegation of a quantum computation from a user who does not hold any quantum computational power to a quantum server, while guaranteeing that the user's data remain perfectly private. The quantum server performs calculations, but has no means to find out what it is doing -- a functionality not known to be achievable in the classical world.

The scientists in the Vienna research group have demonstrated the concept of "blind quantum computing" in an experiment: they performed the first known quantum computation during which the user's data stayed perfectly encrypted. The experimental demonstration uses photons, or "light particles," to encode the data. Photonic systems are well suited to the task because quantum computation operations can be performed on them and they can be transmitted over long distances.

The process works in the following manner. The user prepares qubits -- the fundamental units of quantum computers -- in a state known only to himself and sends these qubits to the quantum computer. The quantum computer entangles the qubits according to a standard scheme. The actual computation is measurement-based: the processing of quantum information is implemented by simple measurements on qubits. The user tailors measurement instructions to the particular state of each qubit and sends them to the quantum server. Finally, the results of the computation are sent back to the user who can interpret and utilize the results of the computation. Even if the quantum computer or an eavesdropper tries to read the qubits, they gain no useful information, without knowing the initial state; they are "blind."
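
A single-qubit cartoon captures the blinding trick: the client folds a secret preparation angle and a secret flip bit into the measurement instruction it sends, so the instruction the server sees is uniformly random. The sketch below illustrates only that one ingredient, not the full measurement-based protocol; all names and values are invented for illustration:

```python
# Toy single-qubit illustration of blinding. The client hides its true
# measurement angle phi inside a uniformly random instruction delta, so
# the server learns nothing from what it is asked to do or from its
# outcome. A cartoon of one ingredient of blind quantum computing.

import math
import random

def server_measure(theta: float, delta: float) -> int:
    """Server measures the qubit (|0> + e^{i*theta}|1>)/sqrt(2) in the
    equatorial basis at angle delta; Born rule gives
    P(0) = cos^2((theta - delta)/2)."""
    p0 = math.cos((theta - delta) / 2) ** 2
    return 0 if random.random() < p0 else 1

# Client side: a secret preparation angle theta (encoded in the qubit it
# sends) and a secret flip bit r turn the real instruction phi into an
# instruction delta that looks uniformly random to the server.
phi = math.pi / 4                        # measurement the client wants
theta = random.uniform(0, 2 * math.pi)   # secret preparation angle
r = random.randint(0, 1)                 # secret result-flip bit
delta = (phi + theta + r * math.pi) % (2 * math.pi)  # what the server sees

outcome = server_measure(theta, delta)
true_outcome = outcome ^ r               # client undoes its own blinding
print(f"server saw delta = {delta:.2f} rad; client's recovered bit: {true_outcome}")
```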

The research at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences was undertaken in collaboration with the scientists who originally invented the protocol, based at the University of Edinburgh, the Institute for Quantum Computing (University of Waterloo), the Centre for Quantum Technologies (National University of Singapore), and University College Dublin.

Story Source:

The above story is reprinted from materials provided by University of Vienna.


Journal Reference:

S. Barz, E. Kashefi, A. Broadbent, J. F. Fitzsimons, A. Zeilinger, P. Walther. Demonstration of Blind Quantum Computing. Science, 2012; 335 (6066): 303 DOI: 10.1126/science.1214707


Mimicking public health strategies could improve cyber security

Nov. 29, 2012 — Mimicking public health strategies, such as maintaining good "cyber hygiene," could improve cyber security, according to a new paper by a team of economists and public health researchers at RTI International.

The paper, published in the November/December issue of CrossTalk, provides a substantive look at how public health strategies and research methodologies could be used to guide cyber security strategies.

Currently, no centralized approach has been successfully used to coordinate action in improving cyber security. The government has played a relatively limited role, developing standards for industry and, more recently, distributing education materials to schools and civic organizations, but most of the focus has been on business security.

"The public health community has been very successful in identifying, monitoring, and reducing the health impacts of many types of threats," said Brent Rowe, a senior economist at RTI and the paper's lead author. "Given the many similarities between public health and cyber security, the cyber security community would be wise to leverage relevant public health strategies and analysis techniques."

The paper takes a detailed look at public health frameworks that can be used to identify and describe specific cyber security threats and potential solutions.

According to the authors, some of the key lessons from the public health community include:

• Introduce potential solutions to individuals in a way that establishes a measure of trust

• Provide solutions in a convenient and attractive framework (individuals will not engage if participation is difficult, expensive or inconvenient)

• Communicate the nature of threats and interventions to a wide variety of audiences

• Involve multiple organizations (government and nongovernment) in responding to a threat or set of threats

• Consider the unpredictability of individual behavior

"Although the idea of organizing the community of cyber security stakeholders similar to the complexity and scale of public health is daunting, public health research, implementation, and evaluation strategies offer a wealth of well-tested approaches that could be easily leveraged to study cyber security topics, such as how to better understand cyber security risk preferences," said Michael Halpern, Ph.D., a senior public health researcher and an RTI Senior Fellow.

Story Source:

The above story is reprinted from materials provided by RTI International.


Zappos breach goes beyond credit cards: Consumers face identity theft if hackers correlate other penetrated databases

Jan. 17, 2012 — Stephen B. Wicker, Cornell professor of Electrical and Computer Engineering at Cornell University, comments on the Zappos web site breach by hackers.

Wicker conducts research in wireless information networks. He focuses on networking technology, law, and sociology, and on how regulation can affect privacy and speech rights. He is the author of the book "Cellular Convergence and the Death of Privacy," to be published by Oxford University Press at the end of 2012.

He says: "Though Zappos has not stated how security was breached, this event is a reminder that security is not a fix or an overlay, it is an ongoing process that must be intrinsic to the design and maintenance of an Internet presence.

"Zappos said that credit card information was not stolen, but acknowledged that email addresses, billing and shipping addresses, phone numbers, and the last four digits from credit cards may have been compromised. This is a lopsided outcome for the customer.

The bigger problem Zappos faces is that large databases of consumer information can be used for identity theft. As Zappos acknowledged, users who reuse the same or similar passwords on other sites such as Amazon or eBay are at risk of having those accounts accessed as well.

"More generally, information about a customer can be used to 'de-anonymize' other databases on other Web sites, further invading customer privacy. Correlation attacks enabled by such data have been shown to strip anonymity from NetFlix, AOL and other databases that were assumed safe. Thus, the information used can include customer preferences, beliefs and practices that are far harder to change than a credit card number.

"Zappos' response is admirable for its forthrightness and immediacy, but this is a reminder of the risk run when online service providers maintain databases of user data. This is a practice that many, many web site and service providers engage in for convenience and, in some cases, for profit. This is a practice that a networked society cannot afford for the long term if individual privacy is to be preserved."

Story Source:

The above story is reprinted from materials provided by Cornell University, via Newswise.


Better security for web and mobile applications

July 20, 2012 — A team led by Harvard computer scientists, including two undergraduate students, has developed a new tool that could lead to increased security and enhanced performance for commonly used web and mobile applications.

Called RockSalt, the clever bit of code can verify that native computer programming languages comply with a particular security policy.

Presented at the ACM Conference on Programming Language Design and Implementation (PLDI) in Beijing in June, RockSalt was created by Greg Morrisett, Allen B. Cutting Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS); two of his undergraduate students, Edward Gan '13 and Joseph Tassarotti '13; former postdoctoral fellow Jean-Baptiste Tristan (now at Oracle); and Gang Tan of Lehigh University.

"When a user opens an external application, such as Gmail or Angry Birds, web browsers such as Google Chrome typically run the program's code in an intermediate and safer language such as JavaScript," says Morrisett. "In many cases it would be preferable to run native machine code directly."

The use of native code, especially in an online environment, however, opens up the door to hackers who can exploit vulnerabilities and readily gain access to other parts of a computer or device. An initial solution to this problem was offered over a decade ago by computer scientists at the University of California, Berkeley, who developed software fault isolation (SFI).

SFI forces native code to "behave" by rewriting the machine code so that its memory accesses and control transfers are confined within set bounds. This "sandboxing" process sets up a contained environment for running native code. A separate "checker" program can then ensure that the executable code adheres to the rules before the program runs.
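
The classic SFI construction rewrites the code so that every memory access is immediately preceded by an instruction that masks the target address into the sandbox's region; the checker then only has to verify that syntactic invariant, never to run the code. The cartoon below works on an invented toy instruction set (real SFI, and RockSalt, operate on x86 machine code):

```python
# Cartoon of software fault isolation (SFI) on an invented toy ISA.
# Rule: every 'store' must be immediately preceded by an 'and' that
# masks the address register into the sandbox region. The checker only
# verifies that syntactic invariant; it never executes the code.

SANDBOX_MASK = 0x00FFFFFF   # confines addresses to a 16 MB region

def rewrite(program):
    """Writer side: insert a masking instruction before every store."""
    out = []
    for op, *args in program:
        if op == "store":               # args = (addr_reg, value_reg)
            out.append(("and", args[0], SANDBOX_MASK))
        out.append((op, *args))
    return out

def check(program) -> bool:
    """Checker side: accept only if every store follows a correct mask."""
    for i, (op, *args) in enumerate(program):
        if op == "store":
            if i == 0 or program[i - 1] != ("and", args[0], SANDBOX_MASK):
                return False
    return True

unsafe = [("load", "r1", 0xDEADBEEF), ("store", "r1", "r2")]
assert not check(unsafe)          # rejected: unmasked store
assert check(rewrite(unsafe))     # rewritten version passes
```

RockSalt's contribution is a machine-checked proof, written in Coq, that a checker of this kind accepts only programs that respect the sandbox policy.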

While considered a major breakthrough, the solution was limited to devices using RISC chips, a processor more common in research than in consumer computing. In 2006, Morrisett developed a way to implement SFI on the more popular CISC-based chips, like the Intel x86 processor. The technique was adopted widely. Google modified the routine for Google Chrome, eventually developing it into Google Native Client (or "NaCl").

When bugs and vulnerabilities were found in the checker for NaCl, Google sent out a call to arms. Morrisett once again took on the challenge, turning the problem into an opportunity for his students. The result was RockSalt, an improvement over NaCl, built using Coq, a proof development system.

"We built a simple but incredibly powerful system for proving a hypothesis -- so powerful that it's likely to be overlooked. We want to prove that if the checker says 'yes,' the code will indeed respect the sandbox security policy," says Joseph Tassarotti '13, who built and tested a model of the execution of x86 instructions. "We wanted to get a guarantee that there are no bugs in the checker, so we set out to construct a rigorous, machine-checked proof that the checker is correct."

"Our proofs about the correctness of our own tool say that if you run the tool on a program, and it says it's safe to run, then according to the model, this program can only do certain things," Tassarotti adds. "Our proof, however, was only as good as this model. If the model was wrong, then the tool could potentially have an error."

In other words, he explains, think of an analogy in physics. While you might mathematically prove that according to Newton's laws, a moving object will follow a certain trajectory, the proof is only meaningful to the degree that Newton's laws accurately model the world.

"Since the x86 architecture is very complicated, it was essential to test the model by running programs on a real chip, then simulating them with the model, and seeing whether the results matched. I specified the meanings of many of these instructions and developed the testing infrastructure to check for errors in the model," Tassarotti says.

Even more impressively, RockSalt comprises a mere 80 lines of code, as compared to the 600 lines of the original Google native code checker. The new checker is also faster, and, to date, no vulnerabilities have been uncovered. The tool offers tremendous advantages to programmers and users alike, allowing programmers to code in any language, compile it to native executable code, and secure it without going through intermediate languages such as JavaScript, and even to cross back and forth between Java and native code. This allows coders to choose the benefits of multiple languages, such as using one to ensure portability while using others to enhance performance.

"The biggest benefit may be that users can have more peace of mind that a piece of software works as they want it to," says Morrisett. "For users, the impact of such a tool is slightly more tangible; it allows users to safely run, for example, games, in a web browser without the painfully slow speeds that translated code traditionally provides."

Previous efforts to develop a robust, error-free checker have resulted in some success, but RockSalt has the potential to be scaled to software widely used by the general public. The researchers expect that their tool might end up being adopted and integrated into future versions of common web browsers. Morrisett and his team also have plans to adapt the tool for use in a broader variety of processors.

Reflecting on how the class project has been transformative, Tassarotti says, "I plan to pursue a Ph.D. in computer science, and I hope to work on projects like this that can improve the correctness of software. As computers are so prevalent now in fields like avionics and medical devices, I believe that this type of research is essential to ensure safety."

Story Source:

The above story is reprinted from materials provided by Harvard University.

Researchers make quantum processor capable of factoring a composite number into prime factors

Aug. 19, 2012 — Computing prime factors may sound like an elementary math problem, but try it with a large number, say one that contains more than 600 digits, and the task becomes enormously challenging and impossibly time-consuming. Now, a group of researchers at UC Santa Barbara has designed and fabricated a quantum processor capable of factoring a composite number -- in this case the number 15 -- into its constituent prime factors, 3 and 5.

Although modest compared to a 600-digit number, the achievement represents a milestone on the road map to building a quantum computer capable of factoring much larger numbers, with significant implications for cryptography and cybersecurity. The results are published in the advance online issue of the journal Nature Physics.

"Fifteen is a small number, but what's important is we've shown that we can run a version of Peter Shor's prime factoring algorithm on a solid state quantum processor. This is really exciting and has never been done before," said Erik Lucero, the paper's lead author. Now a postdoctoral researcher in experimental quantum computing at IBM, Lucero was a doctoral student in physics at UCSB when the research was conducted and the paper was written.

"What is important is that the concepts used in factoring this small number remain the same when factoring much larger numbers," said Andrew Cleland, a professor of physics at UCSB and a collaborator on the experiment. "We just need to scale up the size of this processor to something much larger. This won't be easy, but the path forward is clear."

Practical applications motivated the research, according to Lucero, who explained that factoring very large numbers is at the heart of cybersecurity protocols, such as the most common form of encoding, known as RSA encryption. "Anytime you send a secure transmission -- like your credit card information -- you are relying on security that is based on the fact that it's really hard to find the prime factors of large numbers," he said. Using a classical computer and the best-known classical algorithm, factoring something like RSA Laboratories' largest published number -- which contains over 600 decimal digits -- would take longer than the age of the universe, he continued.
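
A toy example with deliberately tiny primes shows why RSA's security rests on factoring: anyone who can factor the public modulus can reconstruct the private key. The numbers below are illustrative only; real moduli run to hundreds of digits precisely so that the brute-force factoring step is infeasible.

    # Toy RSA (requires Python 3.8+ for the modular inverse via pow).
    p, q = 61, 53
    n = p * q                   # public modulus (3233)
    e = 17                      # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

    msg = 42
    cipher = pow(msg, e, n)             # encrypt with the public key
    assert pow(cipher, d, n) == msg     # decrypt with the private key

    # An attacker who factors n rebuilds the private key from public data:
    f = next(k for k in range(2, n) if n % k == 0)
    d_attack = pow(e, -1, (f - 1) * (n // f - 1))
    assert pow(cipher, d_attack, n) == msg   # secret recovered by factoring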

A quantum computer could reduce this wait time to a few tens of minutes. "A quantum computer can solve this problem faster than a classical computer by about 15 orders of magnitude," said Lucero. "This has widespread effect. A quantum computer will be a game changer in a lot of ways, and certainly with respect to computer security."

So, if quantum computing makes RSA encryption no longer secure, what will replace it? The answer, Lucero said, is quantum cryptography. "It's not only harder to break, but it allows you to know if someone has been eavesdropping, or listening in on your transmission. Imagine someone wiretapping your phone, but now, every time that person tries to listen in on your conversation, the audio gets jumbled. With quantum cryptography, if someone tries to extract information, it changes the system, and both the transmitter and the receiver are aware of it."

To conduct the research, Lucero and his colleagues designed and fabricated a quantum processor to map the problem of factoring the number 15 onto a purpose-built superconducting quantum circuit. "We chose the number 15 because it is the smallest composite number that satisfies the conditions appropriate to test Shor's algorithm -- it is a product of two prime numbers, and it's not even," he explained.
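
Shor's algorithm wraps a quantum period-finding step inside classical pre- and post-processing. The sketch below runs that classical skeleton for N = 15, finding the period by brute force where the quantum processor would take over; the gcd steps that turn a period into factors are the standard ones.

    from math import gcd

    # Classical skeleton of Shor's algorithm for N = 15. The one step a
    # quantum processor accelerates -- finding the period r of a^x mod N --
    # is done by brute force here, feasible only because N is tiny.

    def find_period(a, N):
        """Smallest r > 0 with a**r % N == 1 (the quantum subroutine's job)."""
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    N = 15
    for a in [2, 4, 7, 8, 11, 13]:      # trial bases coprime to 15
        r = find_period(a, N)
        if r % 2:                       # odd period: base unusable
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:                  # trivial square root: try another base
            continue
        print(f"a={a}: period {r} gives factors {gcd(y - 1, N)} x {gcd(y + 1, N)}")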

The quantum processor was implemented using a quantum circuit composed of four superconducting phase qubits -- the quantum equivalents of transistors -- and five microwave resonators. The complexity of operating these nine quantum elements required building a control system that allows for precise operation and a significant degree of automation -- a prototype that will facilitate scaling up to larger and more complex circuits. The research represents a significant step toward a scalable quantum architecture while meeting a benchmark for quantum computation, as well as having historical relevance for quantum information and cryptography.

"After repeating the experiment 150,000 times, we showed that our quantum processor got the right answer just under half the time" Lucero said. "The best we can expect from Shor's algorithm is to get the right answer exactly 50 percent of the time, so our results were essentially what we'd expect theoretically."

The next step, according to Lucero, is to increase the quantum coherence times and go from nine quantum elements to hundreds, then thousands, and on to millions. "Now that we know 15=3x5, we can start thinking about how to factor larger -- dare I say -- more practical numbers," he said.

Other UCSB researchers participating in the study include John Martinis, professor of physics; Rami Barends, Yu Chen, Matteo Mariantoni, and Y. Yin, postdoctoral fellows in physics; and physics graduate students Julian Kelly, Anthony Megrant, Peter O'Malley, Daniel Sank, Amit Vainsencher, Jim Wenner, and Ted White.

Story Source:

The above story is reprinted from materials provided by University of California - Santa Barbara.

Journal Reference:

Erik Lucero, R. Barends, Y. Chen, J. Kelly, M. Mariantoni, A. Megrant, P. O’Malley, D. Sank, A. Vainsencher, J. Wenner, T. White, Y. Yin, A. N. Cleland and John M. Martinis. Computing prime factors with a Josephson phase qubit quantum processor. Nature Physics, 19 August 2012. DOI: 10.1038/nphys2385

Protecting computers at start-up: New guidelines

Dec. 21, 2011 — A new draft computer security publication from the National Institute of Standards and Technology (NIST) provides guidance to help vendors and security professionals protect personal computers as they start up.

The first software that runs when a computer is turned on is the "Basic Input/Output System" (BIOS). This fundamental system software initializes the hardware before the operating system starts. Since it works at such a low level, before other security protections are in place, unauthorized changes to the BIOS -- malicious or accidental -- can pose a significant security threat.

"Unauthorized changes in the BIOS could allow or be part of a sophisticated, targeted attack on an organization, allowing an attacker to infiltrate an organization's systems or disrupt their operations," said Andrew Regenscheid, one of the authors of BIOS Integrity Measurement Guidelines (NIST Special Publication 800-155). In September, 2011, a security company discovered the first malware designed to infect the BIOS, called Mebromi. "We believe this is an emerging threat area," said Regenscheid. These developments underscore the importance of detecting changes to the BIOS code and configurations, and why monitoring BIOS integrity is an important element of security.

SP 800-155 explains the fundamentals of BIOS integrity measurement -- a way to determine if the BIOS has been modified -- and how to report any changes. The publication provides detailed guidelines to hardware and software vendors that develop products that can support secure BIOS integrity measurement mechanisms. It may also be of interest to organizations that are developing deployment strategies for these technologies.
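
In outline, integrity measurement amounts to hashing the firmware image and comparing the digest against a "golden" measurement recorded when the system was known to be good. Below is a minimal sketch of that comparison; the file path is a stand-in for the flash region that a real implementation, typically with hardware support such as a TPM, would measure.

    import hashlib

    def measure(path):
        """SHA-256 digest of a firmware image (stand-in for the BIOS flash)."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def check_integrity(path, golden_digest):
        """Re-measure at boot and report any change from the golden value."""
        digest = measure(path)
        if digest != golden_digest:
            print("ALERT: BIOS measurement changed:", digest)
        return digest == golden_digest

    # Provisioning: golden = measure("bios_image.bin") on a known-good system.
    # Each boot:    check_integrity("bios_image.bin", golden)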

This publication is the second in a series of BIOS documents. BIOS Protection Guidelines (NIST SP 800-147), issued in April 2011, provides guidelines for computer manufacturers to build in features that secure the BIOS against unauthorized modifications. The detection mechanisms in SP 800-155 complement the protection mechanisms outlined in SP 800-147 to provide greater assurance of the security of the BIOS.

Copies of the publication can be downloaded from http://csrc.nist.gov/publications/drafts/800-155/draft-SP800-155_Dec2011.pdf.

Story Source:

The above story is reprinted from materials provided by the National Institute of Standards and Technology (NIST).

Consumers should be vigilant in wake of Zappos cyberattack

Jan. 17, 2012 — As an estimated 24 million Zappos.com customers begin receiving notifications that some of their personal data have been compromised in a massive cyberattack, an Indiana University cybersecurity expert is warning those affected to be on the lookout for targeted fraud attempts.

The recent announcement by Zappos that customer accounts had been compromised by an unknown attacker poses serious risks for consumers, according to Maurer School of Law Distinguished Professor Fred H. Cate.

Efforts by Zappos CEO Tony Hsieh to reassure affected customers of his online shopping site that "customers' critical credit card and other payment data was not affected" run the risk of misfocusing public attention and understating the risk, Cate said.

"Credit cards are covered by a federal law that limits consumer liability in the case of fraud up to $50, and card issuers universally waive even that small amount," he said. "Compromised credit card data is not the major area for concern."

Instead, according to Cate, who also serves as director of the IU Center for Applied Cybersecurity Research, the data that were reportedly accessed in the Zappos breach -- customer names, addresses, phone numbers, email addresses and encrypted passwords, in addition to the last four digits of customer credit card numbers -- pose the greatest risk to affected individuals. That risk falls into three categories.

First, this information is precisely that used by fraud perpetrators to send fraudulent phishing emails purporting to come from legitimate businesses to individuals. "Think about it," Cate said. "If you get an email from a company that includes your correct name and contact information and refers to the last four digits of your credit card number, wouldn't you think it is real?

"In fact," Cate continued, "it is not at all clear how customers will be able to distinguish real messages from fraudulent emails claiming to come from Zappos itself."

Second, this is exactly the information necessary to locate other data about individuals in public and commercial records.

"If I have your name, address and phone number, in many states I can get your property tax records, marriage license and other publicly available information," Cate said. "With that additional information a criminal is in an even better position to commit frauds in your name or to access password-protected sites by using the extra information to answer password-reset questions."

Third, since the information included email addresses and encrypted passwords, it poses a serious risk to other online accounts held by affected Zappos customers.

"Almost all consumers reuse passwords, and email addresses often serve as default account names for online sites, so depending upon the quality of encryption being used by Zappos, it is entirely possible that the perpetrators will have access to a wide range of online accounts," Cate said.

Fortunately, most major breaches do not result in extensive fraud. In addition, there are practical steps consumers can take to protect themselves, including:

-- Changing passwords on all accounts that used the same passwords compromised on the Zappos site.
-- Using unique passwords on all online sites (a short sketch of generating such passwords follows this list).
-- Monitoring account, credit card and bank statements carefully.
-- Paying special attention to emails received, especially those claiming to be from businesses for which the consumer may have used the same credentials.
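
On the unique-passwords point, a cryptographically secure generator is enough to give every site its own strong password. The sketch below uses Python's standard secrets module; it is a generic illustration, not advice specific to the Zappos incident.

    import secrets
    import string

    # Generate a strong, unique password per site using cryptographically
    # secure randomness from the standard-library secrets module.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def new_password(length=20):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    for site in ["shop.example", "mail.example", "bank.example"]:
        print(site, new_password())

In practice, a password manager automates exactly this kind of bookkeeping.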

For practical tips on setting strong new passwords, as well as other helpful cybersecurity tutorials, visit the Center for Applied Cybersecurity Research's Security Matters website at www.securitymatters.iu.edu.

Story Source:

The above story is reprinted from materials provided by Indiana University.

Multi-photon approach in quantum cryptography implemented

Oct. 3, 2012 — Move over money, a new currency is helping make the world go round. As increasing volumes of data become accessible, transferable and, therefore, actionable, information is the treasure companies want to amass. To protect this wealth, organizations use cryptography, or coded messages, to secure information from "technology robbers." This group of hackers and malware creators increasingly is becoming more sophisticated at breaking encrypted information, leaving everyone and everything, including national security and global commerce, at risk.

But the threat of an information breach may be drastically reduced as a result of a technology breakthrough that combines quantum mechanics and cryptography. University of Oklahoma electrical and computer engineering professor Pramode Verma and his colleagues Professor Subhash Kak from Oklahoma State University and Professor Yuhua Chen from the University of Houston have, at the OU-Tulsa College of Engineering labs, demonstrated a novel technique for cryptography that offers the potential of unconditional security.

"Unfortunately, all commercial cryptography techniques used today are based on what is known as computational security," Verma said. "This means that as computing power increases, they are increasingly susceptible to brute force and other attacks based on mathematical principles that can recover information without knowing the key to decode the information." Cryptography techniques based on quantum mechanics are not susceptible to such attacks under any imaginable condition.

In 2006, Kak postulated a theory known as the three-stage protocol, which relies on the unpredictability of photons to ensure hackers can't locate or replicate the information being transmitted. The first laboratory demonstration of Kak's concept took place at the College of Engineering labs at the OU-Tulsa Schusterman Center. This is an important step toward the widespread adoption of Kak's discovery and may lead to a future in which, Verma said, "Basically, no matter how long or how hard they try, technology robbers can no longer decrypt or hack transmitted information."
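
In outline, the three-stage protocol exploits the fact that rotations about a common axis commute: Alice applies a secret rotation to the quantum state and sends it; Bob adds his own secret rotation and sends it back; Alice undoes hers and returns it; Bob undoes his and recovers the original state. An eavesdropper only ever sees states scrambled by at least one unknown rotation. The numerical sketch below illustrates that algebra on a single qubit; it models the idea, not the optical hardware.

    import numpy as np

    # The commutation trick behind Kak's three-stage protocol: rotations
    # about a common axis commute, so each party can undo its own secret
    # rotation even though the rotations are interleaved. The real protocol
    # applies such transformations to photon polarization.

    def rotation(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    rng = np.random.default_rng()
    Ua = rotation(rng.uniform(0, 2 * np.pi))   # Alice's secret rotation
    Ub = rotation(rng.uniform(0, 2 * np.pi))   # Bob's secret rotation

    state = np.array([1.0, 0.0])               # state Alice wants to send

    stage1 = Ua @ state        # Alice -> Bob, scrambled by Ua
    stage2 = Ub @ stage1       # Bob -> Alice, scrambled by Ua and Ub
    stage3 = Ua.T @ stage2     # Alice removes Ua (transpose = inverse here)
    received = Ub.T @ stage3   # Bob removes Ub and recovers the state

    print(np.allclose(received, state))        # True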

This breakthrough has widespread economic and global applications. Quantum cryptography has been used in rare instances, primarily by Swiss banks, but is limited by its short transmission distance and slow speed. Verma and his research team's demonstration suggests the potential for breaking those barriers.

"As we continue to test this promising method of quantum cryptology, we can demonstrate its value and accelerate the adoption in the business world," Verma said.

The widespread application of quantum cryptology could someday ensure that technology robbers won't be able to break into the information bank.

Story Source:

The above story is reprinted from materials provided by University of Oklahoma, via EurekAlert!, a service of AAAS.

Speedy ions could add zip to quantum computers

Aug. 13, 2012 — Take that, sports cars! Physicists at the National Institute of Standards and Technology (NIST) can accelerate their beryllium ions from zero to 100 miles per hour and stop them in just a few microseconds. What's more, the ions come to a complete stop and hardly feel the effects of the ride. And they're not just good for submicroscopic racing -- NIST physicists think their zippy ions may be useful in future quantum computers.

The ions (electrically charged atoms) travel 100 times faster than was possible before across a few hundred micrometers in an ion trap -- a single ion can go 370 micrometers in 8 microseconds, to be exact (about 100 miles per hour).

Although ions can go much faster in accelerators, the NIST ions demonstrate precision control of fast acceleration and sudden stops in an ion trap. A close analogy is a marble resting at the bottom of a bowl that suddenly accelerates. During the transport, the marble will oscillate back and forth relative to the center of the bowl. If the bowl is stopped suddenly at the right time, the marble will come to rest together with the bowl. Furthermore, the NIST researchers ensured that their atomic marble's electron energy levels are not affected, which is important for a quantum computer, where information stored in these energy levels would need to be moved around without compromising the information content.
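
A toy numerical version of the marble-and-bowl picture makes the timing condition visible: integrate a particle in a harmonic "bowl" that accelerates for half the trip and decelerates for the rest, and the residual sloshing nearly vanishes when each half of the trip lasts a whole number of oscillation periods. Every parameter below is invented for illustration and has nothing to do with the actual NIST trap.

    import math

    # Toy model of transport in an accelerating harmonic "bowl". u is the
    # particle's offset from the bowl center; the bowl accelerates at +a for
    # the first half of the trip and -a for the second. Residual energy in u
    # at the end measures how shaken up the transport left the particle.

    def residual_energy(T, omega=2 * math.pi, a=1.0, steps=20000):
        u, v, dt = 0.0, 0.0, T / steps
        for i in range(steps):
            acc = a if i < steps // 2 else -a   # bowl's acceleration profile
            v += (-omega**2 * u - acc) * dt     # semi-implicit Euler step
            u += v * dt
        return 0.5 * v**2 + 0.5 * (omega * u) ** 2

    # With omega = 2*pi the oscillation period is 1, so transports whose
    # halves are whole periods (T = 2, 4) leave almost no residual motion.
    for T in [1.5, 2.0, 2.5, 3.0, 4.0]:
        print(f"T={T}: residual energy {residual_energy(T):.2e}")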

For a quantum computer to solve important problems that are intractable today, the information carried by many quantum bits, or qubits, needs to be moved around in the processor. With ion qubits, this can be accomplished by physically moving the ions. In the past, moving ions took much longer than the duration of logic operations on the ions. Now these timescales are nearly equivalent. This reduces processing overhead, making it possible to move ions and prepare them for reuse much faster than before.

As described in Physical Review Letters, NIST researchers cooled trapped ions to their lowest quantum energy state of motion and, in separate experiments, transported one and two ions across hundreds of micrometers in a multi-zone trap. Rapid acceleration excites the ions' oscillatory motion, which is undesirable, but researchers controlled the deceleration well enough to return the ions to their original quantum state when they came to a stop. A research group from Mainz, Germany, reports similar results.

The secret to the speed and control is custom electronics. NIST researcher Ryan Bowler used fast FPGA (field programmable gate array) technology to program the voltage levels and durations applied to various electrodes in the ion trap. The smooth voltage supply can move the ions very fast while also keeping them from getting too excited.

With advances in precision control, researchers think ions could be transported even more quickly and yet still return to their original quantum states when they stop. Researchers must also continue to work on the many practical challenges, such as suppressing unwanted heating of the ion motion from noisy electric fields in the environment. The research is supported by the Intelligence Advanced Research Projects Activity, National Security Agency, Office of Naval Research, and Defense Advanced Research Projects Agency.

Story Source:

The above story is reprinted from materials provided by the National Institute of Standards and Technology (NIST).

Journal Reference:

R. Bowler, J. Gaebler, Y. Lin, T. R. Tan, D. Hanneke, J. D. Jost, J. P. Home, D. Leibfried and D. J. Wineland. Coherent diabatic ion transport and separation in a multi-zone trap array. Physical Review Letters, 2012 (forthcoming).

Opening the gate to robust quantum computing: New technique for solid-state quantum info processing

Apr. 9, 2012 — Scientists have overcome a major hurdle facing quantum computing: how to protect quantum information from degradation by the environment while simultaneously performing computation in a solid-state quantum system. The research was reported in the April 5 issue of Nature.

A group led by U.S. Department of Energy's Ames Laboratory physicist Viatcheslav Dobrovitski and including scientists at Delft University of Technology; the University of California, Santa Barbara; and the University of Southern California made this big step forward on the path to using the motions of single electrons and nuclei for quantum information processing. The discovery opens the door to robust quantum computation with solid-state devices and to using quantum technologies for magnetic measurements with single-atom precision at the nanoscale.

Quantum information processing relies on the combined motion of microscopic elements, such as electrons, nuclei, photons, ions, or tiny oscillating joists. In classical information processing, information is stored and processed in bits, and the data included in each bit is limited to two values (0 or 1), which can be thought of as a light switch being either up or down. But, in a quantum bit, called a qubit, data can be represented by how these qubits orient and move in relationship with each other, introducing the possibility for data expression in many tilts and movements.

This power of quantum information processing also poses a major challenge: even a minor "bump" off course causes qubits to lose data. And qubits tend to interact quite sensitively with their environment, where multiple forces bump them off track.

But, because the key to quantum information processing is in the relationship between qubits, the solution is not as easy as isolating a single qubit from its environment.

"The big step forward here is that we were able to decouple individual qubits from the environment, so they retain their information, while preserving the coupling between the qubits themselves," said Dobrovitski.

Solid-state hybrid systems are useful for quantum information processing because they are made up of different types of qubits that each perform different functions, much like different parts of a car combine to move it down the road. In the case of Dobrovitski's work, the hybrid system includes magnetic moments of an electron and a nucleus.

"This type of hybrid system may be particularly good for quantum information processing because electrons move fast, can be manipulated easily, but they also lose quantum information quickly. Nuclei move very slow, are difficult to manipulate, but they also retain information well," said Dobrovitski. "You can see an analogy between this hybrid quantum system and the parts of a classical computer: the processor works fast but doesn't keep information long, while the memory works slowly but stores information for a long time."

Usually, when you decouple qubits from their environment to protect their quantum data, you decouple them from everything, even from each other.

But, Dobrovitski found a narrow window of opportunity where both the electron and nucleus can be decoupled from their environment, while retaining their relationship to each other.

"The solution is applying a certain pattern of kicks to the electron's magnetic moment, so that tiny rotations between each kick accumulate and coincide with the rotation of the nucleus," said Dobrovitski. "We can separate out this particular single electron movement from thousands of others because it is synchronized with the motion of the nuclear magnetic moment."

As a result, the electron's and nucleus' movements stay linked, while they are both protected from being bumped off course and retain their quantum information processing capabilities.

Experiments carried out by a team of scientists from Delft University of Technology in the Netherlands and University of California, Santa Barbara, showed that theoretical development of this technique worked well in practice.

The researchers took the technique one step further and showed that it can be used for small-scale quantum information processing. Scientists at Delft and UCSB successfully carried out Grover's quantum search algorithm, a method for searching unsorted lists. In this case, they used the solid-state hybrid system to correctly search a list of four random items.

"This is the first time a robust quantum computation has been demonstrated using a solid-state system with individual spins," said Dobrovitski. "We showed that even with the inevitable imperfections of experiments, we can use this system to do quantum information processing in a way that beats its classical counterpart. Indeed, for a list of four items, the quantum device finds with certainty the desired entry by looking into the list only once, while classically we must inspect all four items one by one."

While a four-item list is a small list, consider the possibility of an unsorted list of a million entries. Using classical computing, 500,000 queries would be needed on average. But using quantum information processing, only about 1,000 queries would be required, showing just how much faster tomorrow's quantum information processing will be than today's classical computers.
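
The four-item claim is easy to verify with a small state-vector simulation: a single oracle query followed by one "inversion about the mean" step drives all of the probability onto the marked entry. The sketch below simulates the textbook algorithm, not the hybrid spin hardware used in the experiment.

    import numpy as np

    # Grover search over a four-item list: one oracle query plus one
    # diffusion (inversion about the mean) step finds the marked item with
    # probability 1.

    N, marked = 4, 2                        # list size and marked index

    state = np.full(N, 1 / np.sqrt(N))      # uniform superposition

    state[marked] *= -1                     # oracle: one "look" at the list
    state = 2 * state.mean() - state        # diffusion step

    print(state**2)                         # [0. 0. 1. 0.] -- certainty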

The research conducted at Ames Laboratory was funded by the DOE's Office of Science.

Story Source:

The above story is reprinted from materials provided by DOE/Ames Laboratory.

Journal Reference:

T. van der Sar, Z. H. Wang, M. S. Blok, H. Bernien, T. H. Taminiau, D. M. Toyli, D. A. Lidar, D. D. Awschalom, R. Hanson, V. V. Dobrovitski. Decoherence-protected quantum gates for a hybrid solid-state spin register. Nature, 2012; 484 (7392): 82. DOI: 10.1038/nature10900

Quantum cryptography theory has a demonstrated security defect

Aug. 7, 2012 — Researchers at Tamagawa University announced August 10 that they had demonstrated the incompleteness and limits of the security theory in quantum key distribution. The present theory cannot guarantee unconditional security. Details will be given at the SPIE conference on Quantum Communication and Quantum Imaging on August 15, 2012.

Many papers claim that the trace distance, d, guarantees unconditional security in quantum key distribution (QKD). In our paper, we first explain explicitly the main misconception in the claim of unconditional security for QKD theory. In general terms, the cause of the misunderstanding is a lemma in Renner's paper. It suggests that the generation of a perfect random key is assured with probability (1-d), and that the failure probability is d. It thus concludes that the generated key is a perfectly random sequence whenever the protocol succeeds, and that QKD therefore provides perfect secrecy (unconditional security) to a type of encryption termed the one-time pad.
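
For reference, the criterion at issue is the standard trace-distance security parameter; the definition below is the textbook one, and its operational meaning is precisely what the dispute concerns.

    d = \tfrac{1}{2} \left\| \rho_{\mathrm{real}} - \rho_{\mathrm{ideal}} \right\|_1,
    \qquad
    \rho_{\mathrm{ideal}} = \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} |k\rangle\langle k| \otimes \rho_E

Here \rho_{\mathrm{real}} is the joint state of the generated key and the eavesdropper's system, \rho_{\mathrm{ideal}} replaces the key with a uniformly random one decoupled from the eavesdropper's state \rho_E, and \| \cdot \|_1 is the trace norm.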

H. P. Yuen at Northwestern University proved that the trace distance quantity does not give the probability of such an event. If d is not small enough, the generated key sequence is never perfectly random. The evaluation of the trace distance now requires reconstruction if it is to be used. However, QKD theory groups have not accepted this criticism, and have invented many upper-bound evaluation theories for the trace distance.

We clarified that even the most recent upper-bound theories for the trace distance are built on the same reasoning of Renner, who originally introduced the concept. The trace distance is thus unsuitable for quantifying the information-theoretic security of QKD, and the unconditional security defined by Shannon is not satisfied.

Consequently, Yuen's theory is correct, and at present there is no theoretical proof of the unconditional security for any QKD.

Background

Quantum information science holds enormous promise for entirely new kinds of computing and communications, including solutions to problems that are intractable with conventional digital technology. The field carrying the highest expectations is quantum cryptography. But realizing that promise will depend on a theoretical guarantee of security and on the ability to transfer an extremely fragile quantum state. It has sometimes been pointed out that scientists are, in general, not familiar with practical applications; quantum cryptography (quantum key distribution, or QKD) is a typical example of these stern realities.

Now, despite enormous progress in theoretical QKD, many theory groups are still debating the security proof for QKD based on Renner's trace distance theory. One reason is that H. P. Yuen (Northwestern University) pointed out that the present theory does not guarantee the security of real QKD systems [1,2].

Recently, Renner and colleagues acknowledged that in any practical implementation the generated key length is limited by the available resources, and that the present security proofs are not rigorously established in that situation; they published their own improved result in Nature Communications in 2012 [3]. However, without reviewing the incompleteness of the theory, it is repeatedly and persistently claimed that a specific trace distance criterion guarantees unconditional security in QKD, and, unfortunately, almost all QKD theory groups have ignored the criticisms.

This is unhealthy for the development of science and technology. Researchers are obliged to clarify "what is going on" in any dispute over a scientific theory.

At present, there is no review of this dispute. Our purpose is to clarify the story of the argument over the recent theory of QKD and the criticisms leveled against it. We introduced the Shannon theory of cryptography to confirm the basis of the concept of unconditional security, and we compared the fundamental concepts of the current security theory of QKD, due to R. Renner, with the outline of Yuen's criticism. Finally, we provided evidence that there is no theoretical proof of the unconditional security of any QKD, even though many theoretical papers have claimed a complete proof.

[1] H. P. Yuen, Key generation: foundation and a new quantum approach, IEEE Journal of Selected Topics in Quantum Electronics, vol. 15, no. 6, pp. 1630-1645, 2009.

[2] H. P. Yuen, Fundamental quantitative security in quantum key distribution, Physical Review A, vol. 82, 062304, 2010.

[3] M. Tomamichel, C. Lim, N. Gisin, and R. Renner, Tight finite-key analysis for quantum cryptography, Nature Communications, vol. 3, article 634, 2012.

Story Source:

The above story is reprinted from materials provided by ResearchSEA.

New tool aims to ensure software security policies reflect user needs

Oct. 30, 2012 — Researchers from North Carolina State University and IBM Research have developed a new natural language processing tool that businesses or other customers can use to ensure that software developers have a clear idea of the security policies to be incorporated into new software products.

Specifically, the research focuses on access control policies (ACPs), which are the security requirements that software developers need to bear in mind when developing new software. For example, an ACP for a university grading program needs to allow professors to give grades to students, but should not allow students to change the grades.

"These ACPs are important, but are often buried amidst a lengthy list of other requirements that customers give to developers," says Dr. Tao Xie, an associate professor of computer science at NC State and co-author of a paper on the research. These requirements are written in "natural language," which is the conversational language that people use when talking or corresponding via the written word.

Incomplete or inaccurate ACP requirements can crop up, for example, if the customer writing the ACP requirements makes a mistake or doesn't have enough technical know-how to accurately describe a program's security needs.

A second problem is that programmers may misinterpret some ACP requirements, or overlook them entirely.

In collaboration with IBM Research, Xie's research team has developed a solution that uses a natural language processing program to extract the ACP requirements from a customer's overall list of requirements and translate them into machine-readable language that computers can understand and enforce.

After the ACPs are extracted, they can be run through the Access Control Policy Tool (ACPT) -- also developed by Xie's research team in collaboration with the National Institute of Standards and Technology (NIST) -- which verifies and tests the ACPs and determines whether they are adequate to meet the security needs of the program.

Once the ACP requirements have been translated into machine-readable language, they can also be incorporated into a policy-enforcement "engine" in the final software product -- which ensures that ACPs cannot be overlooked by programmers.

"In general, developing a program that understands natural language text is very challenging," Xie says. "However, ACP requirements in software documents usually follow a certain style, using terms such as 'cannot be edited' or 'does not have the ability to edit.' Because ACPs tend to use such a limited number of phrases, it is much easier to develop a program that effectively translates natural language texts in this context."

Story Source:

The above story is reprinted from materials provided by North Carolina State University.