Industry has its head in the clouds. Analysts expect that in the next five to ten years, 95% of companies will use some kind of cloud technology. But the successful burgeoning of the Industrial Internet of Things is about to supercharge the cloud. Accommodating billions of devices will require new solutions and innovative technologies to keep the cloud from turning dark and stormy.
By allowing vast numbers of simultaneous calculations far beyond what conventional computers can handle, quantum computing in the cloud promises to revolutionize science, medicine and the Internet of Things.
Classical digital computers use transistors to process information as sequences of zeros and ones, which limits the calculations they can carry out. Quantum computers instead exploit the laws of quantum mechanics. Because particles such as electrons and photons can be in multiple states—one, zero, or both at the same time—they offer many more calculation possibilities than traditional machines with only two options, on or off. This could allow very complex calculations in areas such as genome modeling, drug research and weather forecasting.
For certain problems, quantum computers can operate up to 100 million times faster than traditional computers. A 500-qubit (quantum bit) computer could represent more states in a single step than there are atoms in the observable universe. And it is in the cloud that quantum computers will typically operate to make these massive calculations available. Cue the quantum internet.
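The exponential state space behind these claims can be illustrated with a short classical simulation. This is only a sketch, assuming NumPy is available; it simulates qubit amplitudes on an ordinary computer and involves no quantum hardware:

```python
import numpy as np

# An n-qubit register is described by one complex amplitude per basis
# state, so the state vector grows as 2**n -- the source of the
# "more states than atoms in the universe" comparison for ~500 qubits.
def n_qubit_state(n):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start in the all-zeros state |00...0>
    return state

# A Hadamard gate puts one qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposed = H @ np.array([1, 0], dtype=complex)

print(len(n_qubit_state(10)))   # 1024 amplitudes for just 10 qubits
print(np.abs(superposed) ** 2)  # [0.5 0.5] -- equal odds of 0 and 1
```

Simulating even 50 qubits this way would require petabytes of memory, which is precisely why large quantum machines cannot be emulated classically.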
A Quantum Platform in the Cloud
Five qubit processor, Courtesy of IBM
Dr. Matthias Keller is senior lecturer in Atomic, Molecular and Optical Physics at the University of Sussex. He stated that “a quantum network would basically work similarly to a classical fiber network. But instead of using strong optical signals, the signal is carried by a single photon.” These are the individual particles of light that transmit information between nodes.
IBM has developed the world’s first quantum computing platform at the IBM Watson Research Center in New York. Online since last May, it has a five-qubit quantum processor and is accessible to everyone via the IBM cloud.
Scientists hope that such powerful computers will enable them to model genomes and ecosystems. The human genome could be unraveled to expand drug development, while a model of Earth’s weather systems would make forecasting much more accurate.
Quantum computers also could search massive databases instantaneously, and handle large amounts of data from sensors in industrial plants and on connected machinery. This makes them perfect for the swiftly developing Industrial Internet of Things.
Courtesy of D-Wave
However, quantum computers also constitute a threat to today’s internet, warns Andersen Cheng, CEO of Post-Quantum:
Quantum computers will be able to crack the most commonly used encryption protocol today, which will make the internet as we know it totally unusable. We won't be able to tell if information came from, or will go to, the right person, which will completely destroy the trust we have in the internet.
This is because current online cryptographic systems—such as those protecting messaging services, email and cloud sync software—will be very simple for quantum computers to crack, and attackers could do so while covering their tracks.
Cheng therefore thinks we need to investigate quantum-safe identity authentication, which his company is developing. Quantum key distribution is one solution: it sends an encryption key across a network encoded in quantum states that cannot be copied without detection, and that key is then used to decipher a conventionally transmitted message or file.
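The threat Cheng describes can be made concrete with a toy version of RSA, the public-key scheme underlying much of today's internet traffic. The primes below are deliberately tiny for illustration; real keys use primes hundreds of digits long:

```python
# Toy RSA: security rests entirely on the difficulty of factoring
# the public modulus n back into its secret primes p and q.
p, q = 61, 53            # real deployments use enormous primes
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message

# Anyone who can factor n = 3233 into 61 * 53 can recompute d and
# read everything -- exactly what Shor's algorithm enables at scale.
```

Shor's algorithm, run on a sufficiently large quantum computer, factors n efficiently and thereby recovers the private key; quantum-safe schemes instead rely on problems believed hard even for quantum machines.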
Cryogenic fridge, Courtesy of D-Wave Systems
Another complication is the strict environmental conditions required. Such computers must be kept extremely cold—within a fraction of a degree of absolute zero. They also require shielding from electromagnetic interference (EMI) to preserve the fragile quantum states. Only then are huge calculations possible.
So far, a universal quantum computer does not exist, but there are already projects that go beyond mere prototypes. Canada's D-Wave Systems recently released its third-generation D-Wave 2X, which uses a 1,098-qubit processor. The one located at NASA's Ames Research Center in California is housed in a vacuum and shielded to an EMI level 50,000 times weaker than Earth's magnetic field. It's also cryogenically cooled to -460 degrees Fahrenheit, about 180 times colder than interstellar space.
IBM predicts the appearance of medium-sized quantum processors of 50-100 qubits in the next decade. Universal quantum computers could be one of the greatest milestones in the history of IT.
Edward Snowden's revelations about the NSA make it imperative for businesses storing customer data in the cloud to be aware of the EU's new Privacy Shield Framework.
The European Union’s new Privacy Shield Framework aims at protecting personal data from non-EU organizations and foreign governments, but its effect could be to regionalize data. A hybrid cloud could be the only solution.
The cloud is quickly becoming an engine of growth, its location-free design perfectly suited to a globalized economy where political boundaries are increasingly irrelevant. However, data politics is on the rise. The EU Privacy Shield Framework is designed to make transatlantic data transfer safe by encrypting and anonymizing everything, and auditing exactly who is accessing and using personal information. But most of the major business and industry cloud providers—among them Google, Amazon Web Services and Microsoft—are US-based. Should data from EU citizens be kept inside EU boundaries? One way to achieve this is a private cloud solution.
Private Clouds vs. Hybrid Cloud
Michael Connaughton, Director of Big Data at Oracle, says:
Organizations want to move to the cloud, but then lose a degree of control over where the data center holding the data is located.
However, companies choosing to keep everything in a private, on-premises cloud data center lose the advantage of the massive computing power available on the public cloud. A private cloud is also expensive to maintain and upgrade. The answer is to deploy a hybrid cloud, a mix of local cloud servers and third-party public services using the remote servers run by the big cloud providers.
The problem here is ‘spanning’, says Frank Krueger, Director of Compliance at enterprise cloud hosting provider iland.
Does the provider send you to a cloud that spans multiple data centers? If so, verify that those spanned data centers are in the right data regions. It’s not uncommon that lower-cost carriers will perform spanning, whereas others are dedicated to specific and approved geo-locations.
Even US-based companies are beginning to open EU data centers in places such as London, Frankfurt and Paris.
Another option is to keep everything as local as possible, using cloud providers inside the EU only. With companies able to self-certify their compliance with the Privacy Shield Framework, this is an understandable choice. The legislation also requires that customers be able to choose what happens to their personal data, which adds a further complication.
It is essential to know what customer data you are collecting and where it is being collected so that data can be handled in accordance with the laws of the country from which it is sourced.
However, the hybrid cloud comes with controls that allow restricted personal data to be kept in the EU.
Can a company remain compliant by storing information on Europeans in a local data center? Taking advantage of the cloud’s massive computing power requires division of labor. One option is to store data in private clouds in their own data centers, and to use computing resources in the remote or ‘public’ cloud. This involves uploading the data for processing, and immediately retrieving it for storage. That underscores the importance of how cloud service providers protect personal data.
To make this work, companies will have to invest additional time and effort in cleansing data of all personally identifiable information before sending it out for processing.
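One way to approach that cleansing step can be sketched as follows. The field names and salt are purely illustrative assumptions, not any provider's API; the idea is to pseudonymize PII before a record leaves the private cloud:

```python
import hashlib

# Hypothetical PII fields and salt -- illustrative assumptions only.
PII_FIELDS = {"name", "email", "address"}
SALT = b"replace-with-a-secret-per-deployment-salt"

def pseudonymize(record):
    """Replace PII values with salted hash tokens; keep other fields."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            cleaned[key] = digest[:16]  # stable, non-reversible token
        else:
            cleaned[key] = value  # non-PII metrics survive for analysis
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.eu", "usage_kwh": 312}
print(pseudonymize(record))
```

Salted hashing is only one option; tokenization or outright removal may be required, depending on the data involved and the applicable rules.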
He adds that companies should look for private clouds that integrate easily with public clouds. Though it appears complex, the end result should be a more capable, hybrid cloud that automates everything, including compliance.
U.K.-based Ross Robotics develops and commercializes the modular Robosynthesis robotic platform. Director of research and development Philip Norman talked to DirectIndustry e-magazine about the platform, which is resilient enough for deployment in places such as Chernobyl.
DirectIndustry e-magazine: Robotics is ubiquitous and is expected to become even more so, performing different tasks in varied environments. How does Robosynthesis fit into this trend?
Philip Norman: The Robosynthesis platform makes these varied applications possible using a standardized system, dramatically improving operational results and reducing cost. It has a unique coaxial and repositionable power and data connector, biomimetic traction and non-magnetic metallized polymers for low mass, and compliant resilient structures. Additive manufacturing processes are used to create three-dimensional power and data looms within the robot’s structure and there is also a suite of artificial intelligence ranging from autonomous operation through to topology recognition, where the robot recognizes its configuration as it is assembled.
DirectIndustry e-magazine: Robosynthesis is also meant to be customizable and easy to use.
Philip Norman: The end user defines the shape, form and function of the robot system depending on the task and the environment. The physical robot itself can be transformed by plugging modules together in different combinations. No tools or specialist know-how are required. IP-addressed sensors and tools can be plugged onto the robot in a theoretically unlimited number of combinations. AI modules give the robot the ability to analyze its environment, take action and operate without outside intervention, if this is required.
DirectIndustry e-magazine: Where are Robosynthesis modules deployed?
Philip Norman: From the first concept drawings, the aim has been to operate in environments where people cannot go. Robot modules must operate in the presence of strong magnetic fields, background radiation, chemicals and extreme temperature fluctuations. Ross Robotics is working with end users who have major operational challenges to meet and urgently need robots to replace people. Collaboration with CERN, the European Organization for Nuclear Research in Geneva, involves deploying robots in the Large Hadron Collider. There is also a focus on first responders working in hazardous zones such as Chernobyl and on security applications where robots are used for remote detection of radioactive, explosive or biological threats.
DirectIndustry e-magazine: How does Robosynthesis resist nuclear environments?
Philip Norman: By reducing the time spent by the robot inside the irradiated zone, [which minimizes] the risk of system failure due to the radiation. This can be achieved by deploying an autonomously operating robot without a tether that is capable of moving relatively quickly and reliably over difficult terrain. Ours achieve this due to the low mass penalty for their size (the metallized plastics construction) and the elevated torque developed by the motors and our hybrid traction systems. They also achieve exceptional mission endurance (up to 11 hours with standard batteries), allowing us to run sophisticated navigation systems as well as a large number of sensors on board. But we do also need to harden the robot, particularly the electronics, for operation in strongly irradiated operating conditions. The objective will be to harden the electronics to a point at which they could be sent into somewhere like the disabled Fukushima Daiichi nuclear power plant (where a robot was recently stranded after stalling) for more extended periods. This is a challenge that is new to the robotic technology development community and no one currently has the answer to prolonged operation in this sort of environment.
RoboInspector by iTronic is an optical control and image processing platform designed to inspect industrial parts. It was one of the company's must-see products at the Motek 2016 trade fair. DirectIndustry e-magazine spoke with CEO Ingmar Troniarsky.
DirectIndustry e-magazine: What does RoboInspector do?
Mr. Troniarsky: The RoboInspector is a robotic device designed for inspection processes. It is equipped with a camera and a lighting system that enable it to check manufactured parts from various angles and positions. We offer cameras with resolution ranging from 1000 to 4000 pixels which can take 2- or 3-dimensional pictures and videos. It is safe to use the system cage-free at speeds up to 250 mm per second. With safety enhancement, it can work up to 1 m per second. The operator can teach the robot different positions by moving the arm manually.
DI e-mag: Why was it important to develop this technology?
Mr. Troniarsky: We developed RoboInspector to enable customers in the automotive industry to test door panels, bumpers and other parts to make sure they are perfectly manufactured. We also increased flexibility by removing the housing so that the operator can take the Inspector onto the production line. Our system should interest every company that produces parts that need to be checked because it can be freely adapted to the customer’s needs.
DI e-mag: Can it check large parts?
Mr. Troniarsky: The Inspector typically examines parts of about 1.3 meters long, but it can also inspect parts up to two meters. Checking takes about 1.2 to 1.5 seconds from each position. So, to take photos from 30 positions you need about 45 seconds.
DI e-mag: The Inspector is mounted on a Universal Robots machine. Is it specifically designed for this robot or can it be used with any brand?
Mr. Troniarsky: The existing device is indeed for Universal Robots equipment because to move a camera you don’t need robots capable of very high speed or very high payloads. We decided to use inexpensive robots offering maximum positioning freedom. This is why we chose Universal Robots. But the system can be adapted to KUKA or Fanuc robots. So far, we haven’t had demand for these other configurations.
DI e-mag: What is its price?
Mr. Troniarsky: The smallest version starts at around 60,000 euros and for the biggest ones, about 150,000 euros.
Camille Rustici is a Video Journalist and the Editor-in-Chief for DirectIndustry e-magazine. She has years of experience in business issues for various media including France 24, Associated Press, Radio France…