(Source: ihopeyousmiled)

housebuiltbyghosts:

fer1972:

The Textile Moths of Yumi Okita

i fucking hate butterflies and moths but also love them

(via hifructosemag)

wmagazine:

Balmain and His Muses
Photograph by Emma Summerton; styled by Edward Enninful; W magazine September 2014. 

Balmain’s Designer Olivier Rousteing
http://www.wmagazine.com/fashion/2014/08/balmain-rihanna-iman-naomi-campbell/photos/slide/1

(Source: dope-daydreams, via theislandkidd)

The Creative Directors Behind Your Favorite Music Artists

(Source: hallucinatoryobservation, via thelookingglassgallery)

KEYWORDS: stoner, purple flower, sunsets with gran-gran, blood sisters, talamak, sexual assault, empathy, love

CREDITS:
Writing/Direction by @domalene 
Graphic Design by lounahill0 
Illustration by versiaabeda 

Published via Fly Books Issuu on Wednesday, August 13th, 2014. Released online via Fresh Milk.

 

versiaabeda:

#art #digital #drawing #trees #versiaharris
http://instagram.com/p/rI2-wiAxeL/

vicemag:

We Need to Stop Killer Robots from Taking Over the World
Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically-modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing.

In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse, and conventional warfare.

As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily named academic institutions tackling these “existential risks”: the Centre for the Study of Existential Risk at the University of Cambridge, the Future of Life Institute at MIT, and the Machine Intelligence Research Institute at Berkeley. Their tools are philosophy, physics, and lots and lots of hard math.

Five years ago he started writing a book for the layman on a selection of existential risks, but quickly realized that the chapter dealing with the dangers of artificial intelligence development was getting fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling—if scary—reading.

The basic thesis is that developments in artificial intelligence will gather pace, so that within this century it’s conceivable we will be able to replicate human-level machine intelligence (HLMI).

Once HLMI is reached, things move pretty quickly: intelligent machines will be able to design even more intelligent machines, leading to what mathematician I.J. Good called back in 1965 an “intelligence explosion” that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by supercomputers we have brought into being.
Continue

Further reading: http://www.nickbostrom.com/
http://www.vice.com/read/how-to-stop-killer-robots-taking-over-the-world-212?utm_source=vicetumblrus

gravesandghouls:

Clara Bow

The hunt for typos

lifeasaneditor:

Truth.

(Source: kyleplatts)

fer1972:

Photography by Brooke Shaden

starkpayshisdebt:

Hooray! In a few hours’ time my mental health will be shattered for the next year!
