Honda's robot, familiar from the TV ads, has recently been updated.

iCub, an open source project backed by the EU.


These are just two examples of the latest in robot technology; there's plenty more on the Plastic Pals site.


In the past, fears that technology would lead to structural unemployment haven't been borne out: population rises and an ever-increasing standard of living have helped fuel increasing demand. The other factor is the switch from manufacturing, which machines are good at, to services, which machines aren't so good at. We appear to be on the brink of producing machines that can outperform us at pretty much everything, though.


I have two questions for the Drawing Room:


Firstly, can an economy function with machines producing all the wealth and humanity merely consuming it? If so, how will that wealth be distributed, if not in return for labour?


Secondly,

http://www.jeffbots.com/twiki3.jpg

Was this the worst prediction of the future ever?

25th century, my arse. It hasn't even got fingers.


Self-replicating, fully autonomous robots mining moons, asteroids and captive comet nuclei could create a new economic paradigm in which virtually unlimited quantities of raw materials and manufactured products are produced without capital investment, energy costs or human intervention beyond building and launching the initial 'bootstrap' robot into orbit.


The ultimate "easy life," although I don't think it will happen anytime soon. Certainly not in my present lifetime.


The subject has been explored academically and philosophically under the subject heading self-replicating (aka Von Neumann) machines.

An economy with "machines producing all the wealth and humanity merely consuming it" will probably never occur. Even in a fairly extreme scenario, computers and machines need to be manufactured, programmed, and maintained by people.


The idea of machines which can do these three tasks themselves is interesting, but I doubt it will materialise, for two simple reasons. Firstly, because the complexity of the machines/software we can build is limited by the capacity of the human brain. Secondly, because mankind will never allow the construction of a machine which has the slightest possibility of becoming a threat to us.

... computers and machines need to be manufactured, programmed, and maintained by people.


People only need to manufacture and program one self-replicating and self-maintaining machine, ever.


... the complexity of the machines/software we can build is limited by the capacity of the human brain


Software programmes are almost always capable of exceeding the capacity of their authors' brains in terms of memory, accuracy and computational speed. We wouldn't bother to write them otherwise.


In any event, machine learning could eliminate any limitations imposed by the human brain.


Machines are rapidly acquiring more and more human-like mechanical abilities, a trend that continues to advance without any physical limits in sight.


... mankind will never allow the construction a machine which has the slightest possibility of becoming a threat to us.


The history of technological progress contradicts you here.


The risk/reward ratio is so great; I doubt we could resist exploiting such a technology.

HAL9000 Wrote:

> People only need to manufacture and program one self-replicating and self-maintaining machine, ever.


Yes, but I did state that this was "in a fairly extreme scenario"... the actual proliferation of self-replicating machines would be an "extremely extreme scenario"!


> Software programmes are almost always capable of exceeding the capacity of their authors' brains in terms of memory, accuracy and computational speed. We wouldn't bother to write them otherwise.


Speed/memory/etc... yes, of course. Complexity? Not even close. The human brain could never design anything as complex as the human brain.


> In any event, machine learning could eliminate any limitations imposed by the human brain.


Machine learning... artificial neural networks, genetic programming... all well and good for computer science postgrads. In the real world - we'll see. These things still operate within a framework designed by humans, and are therefore still limited by our own abilities.


> Machines are rapidly acquiring more and more human-like mechanical abilities, a trend that continues to advance without any physical limits in sight.


Agree there, the physical human-like abilities will come along centuries before the "intelligence" (if the latter ever comes along at all).


> The risk/reward ratio is so great; I doubt we could resist exploiting such a technology.


Developing something with the capacity to override its own "off switch"... it will never happen! We've all seen the Terminator movies (at least 1 & 2).

The human brain could never design anything as complex as the human brain.


That is a very bold assertion: to limit human ingenuity and technological progress for all time henceforth in a field that you (and everyone else) know so little about.


I suggest we cannot even speculate meaningfully on that question until we learn whether we are dealing with a substrate-dependent or non-computable property - in the first instance.


As an aside, you've obviously not considered machines equipped with artificial biological brains grown in vitro, for example?

I don't mind making bold assertions, if they seem logical (at least to me)! For a brain to understand the brain, it would have to be more complex than the brain. And while algorithms can "learn" and "evolve", somebody has to develop the framework and define a problem domain.


In short - we'll always need (and indeed want) humans to set the goals.

Implementing logic (or code) is something many of us do, and I don't know anyone who is able to implement bug-free logic even after many iterations.


Obviously, if we know a particular outcome is undesirable we can error-trap the hell out of it, but it is still possible for complex code to behave in unpredictable ways.

For a brain to understand the brain, it would have to be more complex than the brain.


I'm not sure that is a logical deduction. The brain appears to be composed of many small, self-similar structures such as neurons, which themselves resolve into even simpler synapses.


So, functionally a brain appears to be a synaptic network. The 'complexity' you perceive could be merely an emergent property of a relatively simple substrate configuration.


Leaving aside Hameroff-Penrose Orch-OR, Gödel's theorem and other Quantum Mind/Consciousness hypotheses, I am not aware of any classical physical reason that would prevent a human mind from understanding the biological function of its own brain.


Autonomous robot labourers won't have to compose poetry or appreciate art or beauty or contemplate love or justice. All they'll need to do is locate, identify and pick up objects and insert round ends into round holes: fixed instructions and simple manoeuvres within controlled environments. We are almost there!
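For what it's worth, that "fixed instructions within controlled environments" point can be caricatured in a few lines of Python. This is purely a toy sketch: the peg and hole lists stand in for whatever a real vision system would report, and the arm is reduced to print statements.

```python
# A toy sketch of "fixed instructions and simple manoeuvres within controlled
# environments": no learning, no judgement, just locate, pick up and insert.
# The coordinate lists are made up for illustration.

pegs = [(120, 40), (95, 60)]    # (x, y) positions a camera might report
holes = [(300, 40), (310, 62)]  # round holes of matching diameter

def pick_and_place(pegs, holes):
    """Pair each located peg with a free hole and 'insert' it."""
    for peg, hole in zip(pegs, holes):
        print(f"move to {peg}, close gripper")   # locate and pick up
        print(f"move to {hole}, open gripper")   # round end into round hole

pick_and_place(pegs, holes)
```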

Not sure I followed much of that, HAL. But I think I agree with the bit at the end about the simple stuff. Before we get ahead of ourselves pondering whether a robot could ever have a soul/religion (parking the OP's question for a moment), there is the intermediate step of having machines basically deal with the tedious crap and thus free up humans to either (a) sit in ivory towers contemplating the robot/soul dilemma; or (b) watch X Factor / Chelsea slags whilst eating Domino's pizza.


That is presumably coming sooner than the replicant T9000 scenario and probably boils down to a matter of cost and energy.


I wonder whether we will notice any difference when basically it boils down to what we have today, with Eastern Europeans replaced by machines.

The basic building blocks of the brain - nodes comprised of synapses and neurons - would appear to be fairly simple. But the connectivity is massively complex. This is all really beside the point, though.


If a useful worker robot works with "fixed instructions... within controlled environments", humans still need to determine the instructions and control/monitor the environment. Therefore, humans are still very much needed. A far cry from "machines producing all the wealth and humanity merely consuming it".


Actually it would probably be fixed goals rather than fixed instructions... a distinguishing feature between AI and conventional software.

I understood with AI that the objective with recent systems has not been to define the connectivity in advance, but to allow the AI to define the decisions needed to reach those goals.


Hence it's possible for complex systems to flourish that are outside the capacity of humans to interpret - effectively to outstrip our own abilities.


Some of the more intriguing current work has actually been about replacing the setting of goals with the delivery of rewards.


If you imagine that all the complexity of evolution has been driven by organisms seeking the fairly simple rewards of continuance and reproduction, it doesn't seem very peculiar to me that Artificial Intelligences could design systems for delivering these rewards that far outstrip our ability to comprehend them.
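As a toy illustration of reward delivery rather than goal setting (not any particular real system - the hidden target and the reward function below are invented for the example), a few lines of Python can converge on an answer they were never explicitly given:

```python
import random

# The program is never told the target value, only how much reward each guess
# earns. The reward function stands in for whatever signal a real system
# would receive.

def reward(x, target=42.0):
    return -abs(x - target)  # closer to the (hidden) target earns more reward

def hill_climb(steps=1000):
    best = random.uniform(-100, 100)
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)  # small random variation
        if reward(candidate) > reward(best):      # keep whatever pays better
            best = candidate
    return best

print(hill_climb())  # typically ends up near 42 without ever being told "find 42"
```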


It doesn't scare me at all - simply on the basis that as humanity becomes more intelligent it increases its desire to collaborate with internal and external systems and preserve parallel and interdependent ecosystems (such as our environment).


In that sense, the risk with AI, as with people, seems to lie with the stupid ones, not the intelligent ones.

Huguenot Wrote:

> I understood with AI that the objective with recent systems has not been to define the connectivity in advance, but to allow the AI to define the decisions needed to reach those goals.
>
> Hence it's possible for complex systems to flourish that are outside the capacity of humans to interpret - effectively to outstrip our own abilities.


If we're talking about artificial neural nets... yes, the equivalent of synaptic connectivity is developed over time as the system "learns", but the framework is very much human-designed.


Note, I am by no means saying that useful AI is not attainable (it already is), but it will always be a tool for us to use. The high-level decisions will always be in our hands.
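A toy sketch of that framework-versus-connectivity distinction: a single artificial neuron learning logical AND, nothing to do with any real system. The loop, the learning rate and the wiring are all human choices; only the weights (the "connectivity") change as it learns.

```python
import random

# The *framework* (one neuron, two inputs, a fixed learning rate and a fixed
# training loop) is chosen by a human; only the weights are adjusted by the
# learning process itself.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w = [random.random(), random.random()]
bias = random.random()
LEARNING_RATE = 0.1  # human-chosen, like everything else in the framework

for _ in range(100):
    for (x1, x2), target in data:
        output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        error = target - output
        w[0] += LEARNING_RATE * error * x1
        w[1] += LEARNING_RATE * error * x2
        bias += LEARNING_RATE * error

print(w, bias)  # the learned weights; the surrounding scaffolding was all ours
```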

The erudite and informed Senor Chevalier just observed "That's the problem with computers, they just process data. They're no good at sense checking."


Which led me to wonder what 'sense checking' was...


I came to the conclusion that it was nothing more than comparing a result to previously established and generally accepted solutions whilst checking for variance that exceeds, say, +/-10%.


In that sense, for a computer to sense check you'd need nothing more than to ask it to explore a large enough database?
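In Python, that version of sense checking really is only a few lines - assuming the "database" can be reduced to a list of previously accepted values for similar cases:

```python
# Compare a new result against previously accepted results and flag anything
# outside a +/-10% band. The history list stands in for the "large enough
# database".

TOLERANCE = 0.10  # +/-10%

def sense_check(new_value, history):
    """Return True if new_value is within 10% of the historical average."""
    if not history:
        return False  # nothing to compare against, so it can't be sense checked
    expected = sum(history) / len(history)
    return abs(new_value - expected) <= TOLERANCE * abs(expected)

print(sense_check(105, [98, 102, 100, 101]))  # True: within 10% of ~100
print(sense_check(150, [98, 102, 100, 101]))  # False: flag for a human
```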

Erudite and informed (tee hee).


...but I would say that sense checking can include your approach, though if done properly it is a bit broader than that. The best sense checkers are those that use a combination of approaches, as diverse and imaginative as possible, to come at a problem and interrogate its correctness. Some take a structured approach that could be collapsed to a series of rules run through a database, as you suggest.


Is the result what I was expecting?

Is it similar to previous similar instances?

Do small changes in data input give the expected change in data output?

Are they directionally correct, and is the size of the change in line with expectations?

If I make wild changes in my assumptions, does the system generate the right answers, or was it only working within the previously observed range?

etc. etc.


Essentially, sense checking is about understanding cause-and-effect relationships and using them to predict outcomes, though sometimes it is less structured and comes down to whether the answer "feels" right. Code that.
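For what it's worth, a couple of the checks in that list can be sketched mechanically too - here against a made-up price_model stand-in, with the expected direction and a plausibility ratio supplied by a human who understands the cause-and-effect relationship:

```python
# Nudge an input and verify that the output moves in the expected direction
# and by a plausible amount. 'model' is any callable; the expectations come
# from a human.

def directional_check(model, x, nudge=0.01, expect_increase=True):
    """Does a small increase in input move the output the way we expect?"""
    before, after = model(x), model(x * (1 + nudge))
    return (after > before) if expect_increase else (after < before)

def sensitivity_check(model, x, nudge=0.01, max_ratio=5.0):
    """Is the size of the output change roughly in line with the input change?"""
    before, after = model(x), model(x * (1 + nudge))
    if before == 0:
        return False
    output_change = abs(after - before) / abs(before)
    return output_change <= max_ratio * nudge

def price_model(demand):
    """Stand-in for the real system under test."""
    return 20 + 0.5 * demand

print(directional_check(price_model, 100))  # True: price rises with demand
print(sensitivity_check(price_model, 100))  # True: the change is proportionate
```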

Hmmm - I think we do.


I can spot my own spelling mistake in the previous post a mile away now that I'm not editing it - but that's because I was principally in semantic mode I guess.


The whole thing seems to boil down to standard deviation.


Expectation is governed by previous experience, but I don't think it should get confused with hope. If we accept expectation is about convergence with experience then it should be easy to code, no?
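As a minimal sketch of that "convergence with experience" idea - assuming expectation can be summarised as the mean and standard deviation of previous results, with the 2-SD threshold picked arbitrarily:

```python
import statistics

# Treat previous results as the experience, and flag a new result that falls
# more than a few standard deviations away from their mean.

def within_expectation(result, experience, threshold=2.0):
    """Return True if the result converges with prior experience."""
    mean = statistics.mean(experience)
    sd = statistics.stdev(experience)
    return abs(result - mean) <= threshold * sd

experience = [98, 102, 100, 101, 99]
print(within_expectation(100.5, experience))  # True: in line with experience
print(within_expectation(130, experience))    # False: expectation violated
```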


Plenty of humans draw disastrous conclusions due to poor coding or early experiences - viz. religious indoctrination or child abuse.


Similarity with previous results is a comparable issue.


Small changes in data input vs data output I don't think is a particularly human exercise, but I think that's about experience/SD too.


Directionally correct seems to be a mathematical function. Recent self-taught electronic cockroaches worked out pretty quickly whether they were getting further from their destination.


'Wild' changes seem about convergence again.


I think broadly humans work on some sort of standard deviation - and if too many components are out of sync, then it leads to a reassessment ('flagging') of the assumptions in the original data points until we either find one that's wrong, or we accept the result and put it in the 'implausible' bag until it's reinforced by other independent observations?


Isn't this then about 'weighting' output?


I've seen video of worms crawling out of pork when soaked in coke. Since I've seen 'distressed' pork not generate 'worms' then I don't deny that the worms crawled out - I just weight it as 'exceptional' and consequently don't allow it undue influence when frying myself a mustard chop.


The most likely outcome of AI development is that an effective AI will be 'in two minds' about many issues - and, just as humans do, it will make decisions based on the balance of probability from its own experiences.
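Extending the earlier sketch, the 'implausible bag' might look something like this: the exceptional observation isn't denied, it's just weighted down until independent observations reinforce it. The 3-SD flag point is again an arbitrary illustrative choice.

```python
import statistics

# Weight an observation by its distance from prior experience rather than
# rejecting it outright.

def observation_weight(value, experience, flag_at=3.0):
    """Return a weight between 0 and 1 based on distance from prior experience."""
    mean = statistics.mean(experience)
    sd = statistics.stdev(experience)
    z = abs(value - mean) / sd if sd else 0.0
    if z > flag_at:
        return 1.0 / (1.0 + z)  # exceptional: not denied, just heavily discounted
    return 1.0                  # consistent with experience: full weight

experience = [10, 11, 9, 10, 12, 10]
print(observation_weight(10.5, experience))  # 1.0: unremarkable
print(observation_weight(40, experience))    # ~0.03: exceptional, held lightly
```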

Well if we revert to 'reward' rather than objective, then we increase the likelihood that an AI will 'approach' the required result rather than solve or terminate.


I've been running over the concept of SD as regards human behaviour, and I see a lot of that in forum debates.


Many of the positions that we take are based on our private datapoints weighted by the perceived 'penalty' of getting it wrong. They infect both commitment and vehemence.


Thinking that we could probably hand over most of the EDf to smart tech... ;-)
