How Does the Internet Come From Underwater Cables in the Ocean?

Shopping Friend
Alpha.Barood

In ancient Palestine (a.k.a. His-Bulla land), it came to pass that a trader by the name of Abraham
Com did take unto himself a young wife by the name of Dot.

And Dot Com was a comely woman, broad of shoulder and long of leg. Indeed,
she was often called Amazon Dot Com.

And she said unto Abraham, her husband, “Why dost thou travel so far from
town to town with thy goods when thou canst trade without ever leaving thy
tent?”
And Abraham did look at her as though she were several saddle bags short of
a camel load, but simply said, “How, dear?”
And Dot replied, “I will place drums in all the towns and drums in between
to send messages saying what you have for sale, and they will reply telling
you who hath the best price. And the sale can be made on the drums and
delivery made by Uriah’s Pony Stable (UPS).”

Abraham thought long and decided he would let Dot have her way with the
drums.
And the drums rang out and were an immediate success.
Abraham sold all the goods he had at the top price, without ever having to
move from his tent.
To prevent neighbouring countries from overhearing what the drums were
saying, Dot devised a system that only she and the drummers knew. It was
known as Must Send Drum Over Sound (MSDOS), and she also developed a
language to transmit ideas and pictures – Hebrew To The People (HTTP).

And the young men did take to Dot Com’s trading as doth the greedy horsefly
take to camel dung. They were called Nomadic Ecclesiastical Rich Dominican
Sybarites, or NERDS.

And lo, the land was so feverish with joy at the new riches and the
deafening sound of drums that no one noticed that the real riches were
going to that enterprising drum dealer, Brother William of Gates, who
bought off every drum maker in the land. And indeed did insist on drums to
be made that would work only with Brother Gates’ drumheads and drumsticks.
And Dot did say, “Oh, Abraham, what we have started is being taken over by
others.”

And Abraham looked out over the Bay of Ezekiel, or eBay as it came to be
known. He said, “We need a name that reflects what we are.”

And Dot replied, “Young Ambitious Hebrew Owner Operators.” “YAHOO,” said
Abraham. And because it was Dot’s idea, they named it YAHOO Dot Com.
Abraham’s cousin, Joshua, being the young Gregarious Energetic Educated Kid
(GEEK) that he was, soon started using Dot’s drums to locate things around
the countryside. It soon became known as God’s Own Official Guide to
Locating Everything (GOOGLE).

That is how it all began. And that’s the truth. Hallelujah and glory be to
all on the internet.

Pro Entertainer

Subhan Allah…. :D

Shopping Friend
Did You Know 99 Percent Of Our Internet Comes From Underwater Cables In The Ocean?

While we keep debating the benefits of cloud computing, only a few of us know that it’s actually the ocean that fuels our Internet addiction. Thin cables that lie on the seafloor, connecting the world, transmit 99 percent of international data.

The importance of the cables can be seen in the map of cables that connect the world. The map shows 299 cables that are active, under construction, or planned to be completed by the end of 2016.

https://i.imgur.com/LfilK7V.jpg

Shopping Friend

These underwater cables are generally preferred over satellite transmission because of their speed and reliability: they offer far lower latency and far higher capacity, and on the rare occasions when a cable does fail, traffic can be rerouted over backup routes.
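As a rough back-of-the-envelope illustration of that speed advantage (my own sketch; the fibre path length is an assumed figure, not from the article):

```python
# Back-of-the-envelope latency comparison: submarine fibre vs geostationary satellite.
# Illustrative figures only; real routes add switching, routing and protocol overhead.

C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum (km/s)
C_FIBRE_KM_S = C_VACUUM_KM_S / 1.47  # light travels at roughly 2/3 c inside glass fibre

fibre_path_km = 10_000               # assumed length of a long submarine cable route
geo_altitude_km = 35_786             # geostationary orbit altitude

fibre_one_way_ms = fibre_path_km / C_FIBRE_KM_S * 1000
sat_one_way_ms = (2 * geo_altitude_km) / C_VACUUM_KM_S * 1000  # ground -> satellite -> ground

print(f"fibre one-way delay:     ~{fibre_one_way_ms:.0f} ms")   # ~49 ms
print(f"satellite one-way delay: ~{sat_one_way_ms:.0f} ms")     # ~239 ms
```

Even before counting routing hops, the geostationary hop alone dwarfs the fibre delay, which is a big part of why bulk traffic stays on the seafloor.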

This comparison of a vintage map of trade routes in 1912 with a map of today’s submarine cables is a good illustration of how things have evolved.

1912 v/s 2016

https://i.imgur.com/wJY8l8h.jpg
https://i.imgur.com/b1m1U93.jpg

Shopping Friend

Submarine cables attract the massive investments they require from companies looking to exploit what have now become the ‘modern trade routes’.

https://i.imgur.com/h433aTt.jpg

That is why a tech giant like Google invested $300 million in a trans-Pacific cable system to move its data faster. Facebook, too, has put money into an Asian cable system.

Submarine cables are not a new idea. They’re actually 150 years old!

https://i.imgur.com/5KQKwMT.jpg

Shopping Friend

Not much has changed in all these years. The ship carrying the cable moves slowly across the ocean, unwinding the cable as it sinks to the ocean floor. The SS Great Eastern laid the first successful trans-Atlantic cable back in 1866. Back then it was used to transmit telegraph messages; from 1956, trans-Atlantic cables began carrying telephone signals as well.

Modern cables, in comparison, are much thinner – about 3 inches across. At the deepest point in the Japan Trench, cables lie as deep as 8,000 metres below the surface. That’s nearly the height of Mt. Everest!

A submarine cable looks something like this: many optical fibres bundled within a larger protective shell.

https://i.imgur.com/VFYG8c0.jpg

The shell includes:

Polyethylene

Mylar tape

Stranded metal (steel) wires

Aluminum water barrier

Polycarbonate

Copper or aluminum tube

Petroleum jelly (this helps protect the cables from the water)

Optical fibers

These cables transmit videos, GIFs, information and articles like the one you’re reading right now. They are the reason we get to download and stream things in a matter of milliseconds. Incredible.

Deal Cadet
@B@R_0_0_D wrote:

[quotes the submarine-cable post above in full]

as seen on Discovery channel😁

Deal Subedar

TATA is one of the Tier-1 network companies (i.e. it has settlement-free interconnection).

One might read the answer below on Quora; it wonderfully explains how India is connected to the submarine cables:

https://www.quora.com/How-is-India-connected-to...

Pro Entertainer

Veeru aka @B@R_0_0_D paji.. Where are you? You seem busy… You didn’t tag anyone today.

Shopping Friend
@A2Zdeals wrote:

Veeru aka @B@R_0_0_D paji.. Where are you? You seem busy… You didn’t tag anyone today.


Ha..ha..
Must be the shock of the results..

@vj wala .

These devils keep changing their look every little while..

@rockst@r

Shopping Friend

https://i.imgur.com/yNodNQX.jpg

Shopping Friend

Sony Just Patented Contact Lenses That Can Secretly Record What You See

https://i.imgur.com/ycpQgJQ.jpg

Our memories are fallible things. We remember something one way, but the reality can be quite different. Imagine, though, contact lenses that are also tiny cameras, recording and storing whatever you see, and even playing it back before your very eyes. According to the patent, deliberate motions activate the mechanisms of the camera. What was really said at last week’s meeting? Play it back and see. Want to cherish forever some treasured moment, when you first saw your future spouse, or the birth of a child, or some other formative event? You may be able to soon.

And it gets better. Imagine how it might change the criminal justice system, with such infallible eyewitnesses. Grandpa has an incredible Bigfoot, ghost, or UFO story? Maybe all three? Let’s see the playback, Gramps; let’s see the proof. It’s an intriguing concept, and a little frightening. And now, Sony is muscling its way into a game that already boasts such heavyweight players as Google and Samsung. The company has filed a patent for a “smart” contact lens.
Helpful

Why doesn’t the world make the Internet free?
Will Google’s balloons make that come true?

Shopping Friend

Google plans to get rid of passwords

Google is set to begin testing a new login method which replaces passwords with a ‘trust-based’ system that monitors the way you typically use your phone.

The ‘Trust API’, part of ‘Project Abacus’, was unveiled at Google’s I/O developer conference and is set to be tested by a number of large financial institutions in June, the Guardian reports.

So, no more passwords?

The system is designed to be used on smartphones and works by constantly checking a number of personal indicators which can grant access to accounts or to the phone itself.

Instead of asking for a password, the phone might analyse your face, your voice, how you type, how you swipe, how you move and where you are.

All of these bits of data are fed into the API, which then generates a ‘trust score’ indicating how likely it is that it’s actually you carrying the phone.
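As a very rough sketch of how such a score could be combined (purely illustrative; the signal names, weights and thresholds below are hypothetical, not Google’s actual Trust API):

```python
# Toy illustration of a "trust score": combine several behavioural/biometric
# signals (each scored 0.0-1.0) into one weighted score and compare it with a
# threshold that depends on how sensitive the requested action is.
# The signal names and weights here are made up for illustration.

SIGNAL_WEIGHTS = {
    "face_match": 0.30,
    "voice_match": 0.20,
    "typing_pattern": 0.20,
    "location_familiarity": 0.15,
    "gait_and_motion": 0.15,
}

def trust_score(signals: dict) -> float:
    """Weighted average over whatever signals are currently available."""
    available = {k: v for k, v in signals.items() if k in SIGNAL_WEIGHTS}
    total_weight = sum(SIGNAL_WEIGHTS[k] for k in available)
    if total_weight == 0:
        return 0.0
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in available.items()) / total_weight

def is_allowed(signals: dict, required: float) -> bool:
    """A sensitive app could demand a higher required score than a casual one."""
    return trust_score(signals) >= required

observations = {"face_match": 0.9, "typing_pattern": 0.8, "location_familiarity": 1.0}
print(trust_score(observations))          # ~0.89
print(is_allowed(observations, 0.85))     # True  (e.g. unlock the phone)
print(is_allowed(observations, 0.95))     # False (e.g. authorise a bank transfer)
```

Reports on Project Abacus suggested that sensitive apps such as banking could demand a higher trust score than, say, a game; the `required` parameter stands in for that idea.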

Shopping Friend

Google DeepMind is an artificial intelligence division within Google that was created after Google bought University College London spinout DeepMind for a reported £400 million in January 2014.

The division, which employs around 140 researchers at its lab in a new building at Kings Cross, London, is on a mission to solve general intelligence and make machines capable of learning things for themselves. It plans to do this by creating a set of powerful general-purpose learning algorithms that can be combined to make an AI system or “agent”.

Suleyman explains

These are systems that learn automatically. They’re not pre-programmed; they’re not handcrafted features. We try to provide as large a set of raw information to our algorithms as possible so that the systems themselves can learn the very best representations in order to use those for action or classification or predictions.

The systems we design are inherently general. This means that the very same system should be able to operate across a wide range of tasks.

That’s why we’ve started as we have with the Atari games. We could have done lots of really interesting problems in narrow domains had we spent time specifically hacking our tools to fit the real world problems – that could have been very, very valuable.

Instead we’ve taken the principled approach of starting with tools that are inherently general.

AI has largely been about pre-programming tools for specific tasks: in these kinds of systems, the intelligence lies mostly in the smart human who programmed all of the intelligence into the system, and consequently these are of course rigid and brittle, don’t really handle novelty very well or adapt to new settings, and are fundamentally very limited as a result.

We characterise AGI as systems and tools which are flexible and adaptive and that learn.

We use the reinforcement learning architecture, which is largely a design approach that characterises the way we develop our systems. This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, a trading environment, a real-world robotics environment or an Atari environment. The agent takes actions in this environment, gets feedback from the environment in the form of observations, and uses these observations to update its policy of behaviour or its model of the world.
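To make that loop concrete, here is a minimal toy version of the agent-environment interaction in Python (my own simplification; the environment, agent and numbers are invented for illustration and are not DeepMind’s code):

```python
import random

# Minimal agent-environment loop in the spirit described above: the agent takes
# actions, receives rewards, and nudges its policy toward what worked.

class GuessEnvironment:
    """Tiny environment: reward 1 when the agent picks the hidden 'good' action."""
    def __init__(self, n_actions=4):
        self.n_actions = n_actions
        self.good_action = random.randrange(n_actions)

    def step(self, action):
        reward = 1.0 if action == self.good_action else 0.0
        observation = None  # this toy task has no state to observe
        return observation, reward

class Agent:
    """Keeps a value estimate per action and acts epsilon-greedily."""
    def __init__(self, n_actions=4, epsilon=0.1, lr=0.1):
        self.values = [0.0] * n_actions
        self.epsilon, self.lr = epsilon, lr

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))                        # explore
        return max(range(len(self.values)), key=self.values.__getitem__)     # exploit

    def update(self, action, reward):
        # Move the value estimate for this action toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

env, agent = GuessEnvironment(), Agent()
for _ in range(500):
    a = agent.act()
    _, r = env.step(a)
    agent.update(a, r)
print("learned action values:", [round(v, 2) for v in agent.values])
```

After a few hundred interactions the value estimate for the rewarding action climbs towards 1 while the others stay near 0, which is the “update its policy from observations” part of the description above, stripped down to its bare bones.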

https://cdn3.techworld.com/cmsdata/features/3615354/Mustafa_Suleyman_Google_DeepMind_2.JPG

How does it work?

The technology behind DeepMind is complex to say the least but that didn’t stop Suleyman from trying to convey some of the fundamental deep learning principles that underpin it. The audience – a mixture of software engineers, AI specialists, startups, investors and media – seemed to follow.

Suleyman explains

You’ve probably heard quite a bit about deep learning. I’m going to give you a very quick high-level overview because this is really important to get intuition for how these systems work and what they basically do.

These are hierarchical networks, initially conceived back in the 80s but recently resuscitated by a bunch of really smart guys from Toronto and New York.

The basic intuition is that at one end we take the raw pixel data or the raw sensory stream data of things we would like to classify or recognise.

This seems to be a very effective way of learning to find structure in very large data sets. Right at the very output we’re able to impose on the network some requirement to produce some set of labels or classifications that we recognise and find useful as humans.
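For intuition only, the “raw pixels in, useful labels out” pipeline can be sketched in a few lines (assuming PyTorch is available; the layer sizes and the 10 output classes are arbitrary choices, not DeepMind’s architecture):

```python
import torch
from torch import nn

# Raw pixels go in at one end; a small stack of layers transforms them; the
# output end is forced to produce class scores ("labels we find useful").
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level features from raw pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # 10 class scores at the output end
)

fake_batch = torch.randn(4, 1, 28, 28)           # four 28x28 single-channel "images"
print(net(fake_batch).shape)                     # torch.Size([4, 10])
```

Training then adjusts the weights so that those 10 output numbers line up with labels humans find useful, which is the “requirement imposed at the output” Suleyman describes.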

How is DeepMind being tested?

DeepMind found a suitably quirky way to test what its team of roughly 140 people have been busy building.

The intelligence of DeepMind’s systems was put through its paces on an arcade gaming platform that dates back to the 1970s.

Suleyman demoed DeepMind playing one of these games during his talk – Space Invaders. In his demo he illustrated how a DeepMind agent learns to play the game with each go it takes.

Suleyman explains

We use the Atari test bed to develop and test and train all of our systems…or at least we have done so far.

There are somewhere on the order of 100 different Atari games from the 70s and 80s.

The agents only get the raw pixel inputs and the score, so this is something like 30,000 inputs per frame. They’re wired up to the action buttons but they’re not really told what the action buttons do, so the agent has to discover what these new tools actually mean and how it can extract value from them.

The goal that we give them is very simply to maximise score; it gets a 1 or a 0 when the score comes in, just as a human would.

Everything is learned completely from scratch – there’s absolutely zero pre-programmed knowledge so we don’t tell the agent these are Space Invaders or this is how you shoot. It’s really learnt from the raw pixel inputs.

For every set of inputs the agent is trying to assess which action is optimal given that set of inputs and it’s doing that repeatedly over time in order to optimise some longer term goal, which in Atari’s sense, is to optimise score. This is one agent with one set of parameters that plays all of the different games.
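The real agents approximate this with a deep network over raw frames, but the core update they build on can be sketched in tabular form (a simplified stand-in, not DeepMind’s DQN code):

```python
import random
from collections import defaultdict

# Tabular sketch of the Q-learning update at the heart of the approach described
# above: for each (state, action) pair, keep an estimate of long-term score and
# move it toward reward + discounted value of the best next action.

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated future score

def choose_action(state, actions):
    if random.random() < EPSILON:
        return random.choice(actions)                    # explore
    return max(actions, key=lambda a: Q[(state, a)])     # exploit current estimates

def q_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# One illustrative update: in state "s0", pressing "fire" earned 1 point and led to "s1".
actions = ["left", "right", "fire"]
q_update("s0", "fire", reward=1.0, next_state="s1", actions=actions)
print(Q[("s0", "fire")])   # 0.1 after a single update (ALPHA * reward)
```

In the Atari setting the “state” is a stack of recent frames and Q is a convolutional network rather than a dictionary, but the target (reward plus the discounted value of the best next action) is the same idea.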

Live space invaders demo

Before training, an agent playing Space Invaders struggles to hide behind the orange obstacles and fires fairly randomly. It seems to get killed all of the time and doesn’t really know what to do in the environment.

After training, the agent learns to control the robot and barely wastes any bullets. It aims for the space invaders that are right at the top because it finds those the most rewarding. It barely gets hit; it hides behind the obstacles; it can make really good predictive shots, like the one on the mothership that came in at the top there.

As those of you who have played this game will know, it sort of speeds up towards the end, so the agent has to do a little bit more planning and predicting than it had done previously; as you can see, there’s a really good predictive shot right at the end there.

100 games vs 500 games

After 100 games the agent doesn’t really know what the paddle does; it sort of randomly moves it from one side to the other. Occasionally it accidentally hits the ball back and finds that to be a rewarding action. It learns that it should repeat that action in order to get reward.

After about 300 games it’s pretty good and it basically doesn’t really miss.

But then after about 500 games, really quite unexpectedly to our coders, the agent learns that the optimal strategy is to tunnel up the sides and then send them all around the back to get maximum score with minimum effort – this was obviously very impressive to us.

We’ve now achieved human performance in 49/57 games that we’ve tested on and this work was recently rewarded with a front cover of Nature for our paper that we submitted so we were very proud of that.

How is it being used across Google?

Google didn’t buy DeepMind for nothing. Indeed, it’s using certain DeepMind algorithms to make many of its best-known products and services smarter than they were previously.

Suleyman explains

Our deep learning tool has now been deployed in many environments, particularly across Google in many of our production systems.

In image recognition, it was famously used in 2012 to achieve very accurate recognition on around a million images with about 16 percent error rate. Very shortly after that it was reduced dramatically to about 6 percent and today we’re at about 5.5 percent. This is very much comparable with the human level of ability and it’s now deployed in Google+ Image Search and elsewhere in Image Search across the company.

As you can see on Google Image Search on G+, you’re now able to type a word into the search box and it will recall images from your photographs that you’ve never actually hand labelled yourself.

We’ve also used it for text transcription. We use it to identify text on shopfronts and maybe alert people to a discount that’s available in a particular shop or what the menu says in a given restaurant. We do that with an extremely high level of accuracy today. It’s being used in Local Search and elsewhere across the company.

We also use the same core system across Google for speech recognition. It trains in less than five days. In 2012 it delivered a 30 percent reduction in error rate against the existing old-school system. This was the biggest single improvement in speech recognition in 20 years, again using the same very general deep learning system across all of these.

Across Google we use what we call Tool AI or Deep Learning Networks for fraud detection, spam detection, hand writing recognition, image search, speech recognition, Street View detection, translation.

Sixty handcrafted rule-based systems have now been replaced with deep learning based networks. This gives you a sense of the kind of generality, flexibility and adaptiveness of the kind of advances that have been made across the field and why Google was interested in DeepMind.

Should we be scared?

https://cdn3.techworld.com/cmsdata/features/3615354/Mustafa_Suleyman_media_AGI.JPG

The number of scientists and world-famous entrepreneurs speaking out on the potential dangers of AI is increasing week by week, with renowned physicist Stephen Hawking and PayPal billionaire Elon Musk being two of the most outspoken voices warning about AI.

The pair, along with several others including Bill Gates and Skype cofounder Jaan Tallinn, believe that machines will soon become more intelligent than humans, just as they do in the recent Hollywood blockbuster Ex Machina.

Despite this, Google is keen to develop its AI algorithms as much as possible in order to improve its offerings and boost its profits.

Suleyman tried to put people’s minds at ease and explain the logic behind all the hype.

Suleyman explains

Over the last 18 months or so, AI breakthroughs have, I think, created a sense of anxiety or in some cases hype around the potential long term direction of the field.

This of course is not least induced by Elon [Musk], who recently tweeted that we need to be super careful with AI because it’s “potentially more dangerous than nukes”, and that’s obviously backed up by various publications including Nick Bostrom’s – all culminating in this kind of sense that AI has the potential to end all humankind.

If you didn’t really pay attention to the field and all you did was read, as I think the vast majority of people do, descriptions of the kind of work that we do on the web then you could be forgiven for believing that AI is actually about this. Whether it’s Terminator coming to blow us up or societies of AIs or mad scientists looking to create quite perverted women robots.

This narrative has somehow managed to dominate the entire landscape, which I think we find really quite remarkable.

It’s true that AI has in some sense really arrived. This isn’t just a summer. These are very concrete production breakthroughs that really do make a big difference, but it’s also sad how quickly we adapt to this new reality. We rarely take time to acknowledge the magic and the potential of these advances and the kind of good that they can bring. In some sense, the narrative has shifted from isn’t it terrible that AI has been such a failure to isn’t it terrible that AI has been such a success.

Just to address directly this question of existential risk. Our perspective on this is that it’s become a real distraction from the core ethics and safety issues and that it’s completely overshadowed the debate.

The way we think about AI is that it’ll be a hugely powerful tool that we control and direct whose capabilities we limit, just as we do with any other tool that we have in the world around us, whether they’re washing machines or tractors.

https://cdn3.techworld.com/cmsdata/features/3615354/Ex_Machina_Ava_on_table.png

Ex Machina sees a powerful CEO create super-intelligent AIs that turn against him ©Universal

These are tools that we designed that we can control. We should explicitly be designing these systems such that we are able to control them and where we fear there’s a risk that we’re not able to control them, then that’s I think when we should be slowing down, just as we have in many other sectors, from nuclear development to chemical weapons or the like.

We’re building them to empower humanity, absolutely not to destroy us. I think our technology has the potential to really positively transform the world if we can steward it in the right direction and imagine new mechanisms of governance and accountability and transparency that involve a broader group in the process of directing the application of our technology.

There are many, many much more urgent concerns we need to direct our attention to. This conversation around whether we’ll have human-like intelligences wandering around, absorbing all the information that’s ever been created, and giving them rights and being conscious – these are just so… there are a few engineers in the room who will know how difficult it is to get these things to do anything. The idea that we should be spending these moments now talking about consciousness and robot rights is really quite preposterous.

I don’t mean to be dismissive, these are serious concerns and we put a great deal of effort and a great deal of our negotiating capital into establishing what we think is a reasonably effective process. We’re introducing additional oversight and accountability into the way that we steward our technology with the establishment of our ethics and safety board.

These are important issues and we do need to discuss them, but we also need to focus on what else is at stake in the world today. How can these tools be useful elsewhere? 800 million people don’t have access to clean water, rising to 1.8 billion in the next decade alone – one of the most fundamental rights, beyond the reach of so many people on our planet. 800 million people are malnourished, yet a third of the food we produce is wasted every year.

Looking ahead

Google has pledged to set up an ethics board to monitor its internal AI developments. Interestingly, this was one of DeepMind’s prerequisites to signing the acquisition papers, suggesting that Suleyman knows AI has potential to do harm.

A number of people have already been appointed to the board but Google has refused to reveal who they are. Suleyman said he wants the names to be revealed.

“We will [publicise the names], but that isn’t the be-all and end-all. It’s one component of the whole apparatus,” he said, adding that he was impressed a 100-strong company like DeepMind was able to convince Google to set up the board in the first place.

Some believe that the board should be appointed with the help of other organisations and public oversight but Google has so far decided to act independently.


The event was organised by Playfair Capital, an early stage technology VC based in London with a thematic focus on machine intelligence.

Deal Subedar
@B@R_0_0_D wrote:

[quotes the full Google DeepMind post above]

Brother, please don’t post such long, long messages at night… one has to scroll for ages, and there are other posts to look at too.. have some mercy :D

Shopping Friend

If you were really so concerned, you wouldn’t have quoted the full post, would you??

Shopping Friend

https://i.imgur.com/RBkIfqN.jpg

Shopping Friend

A piece of software that allows the user to make telephone calls over the Internet via a computer…

So many have not grasped the real power of the people’s INTERNET. It is far more than a forum for pushing personal agendas, no matter how seemingly noble. More than my absurd rants, pornography, gossip, game playing, or any of that childlike stuff. The INTERNET is an asset that the people need to seize and protect.

You can use the internet to break the back of the establishment. To seal up the cash leaks that big businesses have so ingrained into the American economy. Solve problems which otherwise have no solution.

I got AT&T off my back and stopped them from charging me up to $35 per month just to have a landline and telephone… and I did so without needing a cell phone or account. Some sit and worry about cell-phone radiation yet fail to suggest even one solution. Have you got the least clue what this is?
soft·phone

a piece of software that allows the user to make telephone calls over the Internet via a computer.

https://i.imgur.com/zttrkpN.png

Installed on a mobile device, laptop, etc., it replaces a cell phone and need not be held near your “brain”, Dave. By the way, you can replace cable and satellite with the many Internet streaming services, buy precisely the information and entertainment you want, and avoid the tremendous cost and waste of a package deal. YOU DON’T EVEN WATCH HALF THE CRAP YOU PAY FOR ANYWAY.
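Real softphones speak standard protocols such as SIP (for signalling) and RTP (for the audio itself); purely to illustrate the underlying idea that a call is just digitised audio travelling as packets, here is a toy UDP sketch in Python (not a working phone and not tied to any particular product):

```python
import socket

# Toy illustration of the softphone idea: audio, once digitised, is just bytes,
# and bytes can be sent over the Internet like any other data. Real softphones
# add SIP call setup, RTP framing, codecs and jitter buffers on top of this.

DEST = ("127.0.0.1", 50007)

def send_fake_call(frames=5):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(frames):
        frame = bytes([i % 256]) * 160   # stand-in for ~20 ms of 8 kHz audio
        sock.sendto(frame, DEST)
    sock.sendto(b"", DEST)               # empty datagram marks "hang up"
    sock.close()

def receive_call():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(DEST)
    while True:
        frame, _ = sock.recvfrom(2048)
        if not frame:
            break                        # caller hung up
        print(f"received audio frame of {len(frame)} bytes")
    sock.close()
```

Run receive_call() in one terminal and send_fake_call() in another on the same machine to watch the “frames” arrive; everything a real softphone adds sits on top of this basic packet flow.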

I have watched for decades as the general public gets bogged down with the INTERNET’s face plate and never pierces deeply into it – harvesting only the part of the iceberg that is above water. They spend their lives on Facebook and never do anything else with the INTERNET except harass one another and look up pictures and articles to pepper their poor captive audience with. In short, they pollute the INTERNET.

In the future, stop and think what additional uses you can make of this powerful technology which gives you options and great possibilities you have never experienced before. Stop letting some use it as their own personal soap box, day after day. Eradicate the incessant “bore pollution”.

CAN YOU HEAR ME NOW !!! ~ – Electrical Engineer

Shopping Friend

https://i.imgur.com/L3g99Dr.jpg

@Navneet

Pro Tech Guru
@B@R_0_0_D wrote:

https://i.imgur.com/L3g99Dr.jpg

@Navneet


read almost 4 years back ;)
