Human civilisation is built upon our ability to create our own tailored environments and ecosystems. This ability stems not only from the skill of creating specialised tools out of our environment but from being able to create tools that make other tools. This chain relies on our superior intelligence, which allows us to be self-aware and conscious to the point where we can view ourselves as objects, and therefore as tools to integrate with and create other tools around; in other words, to design. Our ability to design our environments has allowed humanity to change the very nature of what it means to be human. Creativity and curation are core to being able to design. So far we have mastered Artificial Narrow Intelligence systems that can use bottom-up, machine-learning approaches to evolve their own algorithms according to answers found in vast data sets. These algorithms can be layered to form brain-like neural networks able to decompose and then solve specialised problems. In many ways AIs already outstrip human cognitive ability; examples like the Continuator show a machine's ability to be exploratorily creative in ways that not even its creator can fully explain. We have reached a stage in tool design where neither the human nor the machine can understand how the tool functions. But while machines are demonstrating their creativity within a limited scope, they can in no way design without human propulsion, as they possess no conscious understanding. AIs have nothing innate to say and no reason to say it. We have no way to tell whether a machine will ever become self-aware and conscious, and even then, whether we will understand it when we see it. Design, for the current future, seems to be a human-only playing field, as perhaps only humans can understand what it is like to be human.
Human civilization’s success is built upon our ability to create environments to suit our needs and desires. The latest incarnation of this ability is referred to as design. From transport design, city design and environmental design to product design, user experience design and engineering design, our ability to design at all is a definitive characteristic of our species. But the ability not only to carry out augmentation of our environment but to actively and systematically pursue the evolution of design itself is one of the key pillars of how humans have risen to become the technologically dominant species on Earth (Fry, 2012).
This dissertation explores both the evolution of humanity’s ability to design and our venture to replicate our unique creative and design abilities in artificial intelligence systems that can not only help us design but surpass our own ability. It is an attempt to enable creative individuals to understand and recognize their own biological advantages and disadvantages in their field, while also examining the possibilities and problems their future craft could encounter as humans and machines grow ever more linked.
The focus of this literature study is to explore the links and possible implications of artificial intelligence in design by analysing questions such as: why and how did humans start designing? Is design inherently human? How is creativity linked to design? How do modern artificial intelligences challenge the human mind? And can an artificial intelligence be creative and, in turn, design? Answering these questions serves an exploratory look at whether design will always need humans.
Many animals, including humans, create and use tools (Fry, 2012, Loc 1527). For example, Australian bottlenose dolphins will find and gather sea sponges to wear on their rostrum (beak), providing protection when foraging around the ocean floor. Gorillas observed in the wild can use sticks and branches to rake in items (Shumaker, 2011). We can observe a wide selection of animals that actively use tools to aid their survival. Animals that do this are uncommon but not unheard of; rarer cases, however, involve animals actively making specialized tools. Crows have been observed, in a human-devised puzzle environment, to piece together multiple short elements such as plungers and plastic syringes to form a longer tool able to push food out of a box (Von Bayern, 2018). This environment-specific, compound tool manufacture and use is extremely rare in the animal kingdom. Indeed, the equivalent cognitive problem solving is not observed in young children until ages 5 to 9. The unique differential between humans and animals is that humans are the only current species that not only makes environment-specific tools but manufactures and designs tools that make tools, and in turn creates objects and technologies (Fry, 2012).
This chain of events that humans have forged is currently unique to the species. However, unique may not be the correct word, as it implies that the ability is unlike anything else (Cambridge, 2019) and therefore unobtainable by any other species, which, when looking at animal brain structures, is clearly not the case. The human brain is still considered the most complicated machine in the known universe, but analysis of the neuron structures of both human and animal brains shows that its structure is similar to that of other primates in terms of efficiency, neuron density and layout, with primate brains being the densest, most efficient structure in comparison to all other animals. The only clear advantage human brains currently hold is that they are linearly scaled-up primate brains, with on average around three times the neuron count (Herculano-Houzel, 2009). To present humans and animals as binary categories is therefore misleading to the point of falsehood. A better analogy is that of a material phase change: the transition from animal to human is similar to that of ice turning to water. The influences of our evolutionary history can still be seen throughout design desires today, just as many of the properties of ice can be seen by examining water.
Donald Norman breaks down human cognition into three key levels: visceral, behavioural and reflective (Norman, 2004). The visceral level is the prewired layer responsible for rapid judgements and decisions; it is biologically determined and has evolved from the correct evolutionary decisions humans have made. The visceral cognitive level can be seen everywhere today in our love of beauty. For example, no matter the culture or time period, there is a human preference for symmetry, recurring patterns and the golden ratio. Each of these helped our ancestors survive before there was human civilization. The ability to interpret patterns helped our ancestors predict the weather and identify the correct species of edible plants; greater skill in identifying correct and rewarding patterns meant higher chances of successful reproduction (Kurzgesagt, 2018). Like patterns, symmetry is everywhere in nature: plants and animals grow symmetrically, so a symmetrical human face is much more likely to belong to a healthy mating partner, increasing the likelihood of that genetic line continuing (Foo, 2017). Due to the rate of transition to modern civilization, our version of beauty has evolved to a point where there is very little tangible about it; it exists in our heads, but we can spot it when we see it. Yet it still has powerful control over our decisions and wellbeing. Things that help us survive, or at least that the visceral part of the brain thinks are helping us survive, activate the reward centres in our brain (Wald, 2015). A very real example of this can be seen when hospital patients are given aesthetically pleasing wards and accommodation over unaesthetic accommodation: those in the newer, more aesthetically pleasing ward recovered on average 21% faster (Lawson, 2003).
Here we can synthesize a few things. Firstly, the environment humans lived in shaped their very nature and thinking, to the point where we now seek out and benefit from replicating parts of those original environments. Secondly, our advantage in intelligence over other animals does not free us from the same instinctual cognition they also experience. This suggests that we hit a cognitive tipping point as animals, from which we were able to pull far ahead in terms of our ability and control over design. Research suggests that this tipping point was reached when our excess intelligence manifested itself in a higher level of self-awareness and meta-consciousness (Allen, 2016). This self-awareness in turn allowed us to manipulate our tools and tool chain.
Most animals, including all mammals and birds, are deemed conscious. But unlike most animals, humans are also deemed self-aware. The level of this self-awareness is debated, with some animals such as primates and elephants being self-aware to an extent (Jabr, 2012). However, the general consensus is that humans possess a level of self-awareness far beyond that of any known animal. Whereas consciousness is awareness of one’s own form and environment, self-awareness is recognition of that awareness: being able to think about your own thoughts and processes. This can be clearly observed in young children, who in their initial years go through the stages of being merely conscious, to a sense of self that can recognize itself, to being able to distinguish their own point of view and perspective, and finally being able to recognize others’ points of view and reflect these onto themselves with emotions such as empathy (ibid).
A typical conscious animal does not and cannot discern something as ‘something’, whereas a human can; this cognitive jump is crucial in linking the use of tools to the creation of tools (Fry, 2012). In order for humans to create a tool that is then used to create a new tool, they must be able to comprehend and view themselves as a resource. Being self-aware to the extent that we are able to view ourselves as objects, and in due course as tools, is crucial to how we have developed our skills to design. It stands to reason that the ability to view ourselves as beings and tools in turn allowed us to specify and create tools with ourselves, a user, and therefore part of the function of that tool in mind. In other words, design (ibid).
Being able to design then launches us into a crucial compound-interest improvement cycle. With our linking of tools, we are soon able to create an ecosystem of tools and processes that in turn creates artificial environments, made by humans for humans. These designed environments essentially serve as an ever-growing network of tools and processes. As Tony Fry states, “the rate of world transformation has been in direct ratio to the increased volume and power of tools” (Fry, 2012, Loc 1585).
We have seen, with our own visceral cognitive sense of beauty, how the environments we live in shape who we are as humans and the intangible nature of being human. In turn, our ability to design and augment our own environments has also contributed to shaping and evolving ourselves. This quickly becomes a cycle of augmenting our own environments to suit ourselves, which changes what humans are, which in turn requires more environmental change. Examples of our own environments can be seen in creations such as the internet and the international economy, where the forces governing what happens within them are no longer within the control of any individual or nation (Fry, 2012).
To conclude, while the ability to design is a key pillar of what distinguishes the phase change from animals to humans, that does not inherently make it human. In fact, being able to locate the initial construction of our ability to design makes design even less unique to humans, as there is nothing special about humanity that another animal could not or would not have evolved to become. However, humans have been uniquely shaped by their ability to design, which in turn has made them more human and unique in this ability. Does this mean that only a human can truly design for humans? Whether a similar species or intelligence would be able to develop in the same way remains to be seen; however, it is clear that humans’ higher self-awareness, leading them to create tools to make tools, was a key evolutionary step in design.
In the previous chapter we established that true, intentional human design requires not only consciousness but self-awareness. Given this, what other human features have lent themselves to our ability to design? Creativity? Is creativity even required for design in the first place?
To answer this, we first need to analyse what it even means to be creative. Marcus du Sautoy establishes that “Creativity is the drive to come up with something that is new and surprising and that has value” (Du Sautoy, 2019, Loc 81). While these three main properties encompass the conclusions of many, Margaret Boden expands on this premise by identifying key types of creativity. Firstly, she points out that there is both historical creativity and psychological creativity. Psychological creativity covers a creative act that is new and surprising only to the person at hand, but which, to history and a wider audience, is old news. A historically creative act, in turn, is one that is ultimately new: the first of its kind. Due to humans’ biological inability to remember every historically creative act, it is only through repeated psychological creativity that one can hope to achieve a truly historically creative act (Boden, 2004). Of historically creative acts, 97% fall within what Boden describes as exploratory creativity: taking what already exists in a current field and pushing its existing known rules to the outer edge in order to facilitate new, surprising and valuable (creative) results. A second, rarer form is combinational creativity, involving the cross-pollination of different ideas and tools between fields into an act that is meaningful. The difficulty with combinational creativity lies mainly in finding the value aspect, as creating new and even surprising things via combination is relatively easy, the ‘ingredients’ of the system already being available for mixing. The final type of creativity is referred to by Boden as transformational creativity; these creative acts are both the rarest and the most impactful.
(ibid) Often, as was the case with the iPhone announcement in 2007 by Steve Jobs, transformational creative acts require moving away from our own set rules and systems. The iPhone broke rules such as the need for a physical keyboard and a week-long battery life, as well as the price point analysts thought consumers would be willing to pay for a phone (Ritchie, 2019).
Defining creativity allows us to consider whether it is a requirement of design. Du Sautoy’s three pillars of creativity (new, surprising and of value) lead to the logical conclusion that it is, as without creativity no new ways of thinking and acting would appear. Du Sautoy furthers his definition by pointing out that “Creativity is not an absolute but a relative activity. We are creative within our culture and frame of mind” (Du Sautoy, 2019, Loc 201). Taking this to be broadly true, we can see that while design requires creativity, the inverse is also true. As design implements our creativity, it shapes our culture and artificial environments, which in turn influence how we generate and view our creative acts. Just as creativity relies on all three creative types to expand each other, so too is design required to power and shape creativity. Humans thus far have been the only known intelligent species to use both their self-awareness and creativity to power design. However, if no other biological species can do this, can we also be the only species able to design another system of intelligence that can?
As far as we can analyse our own biological history, it is evident that our evolution, while extremely unlikely, was not a unique, one-off event (Fry, 2012). This allows us to infer that the evolution of another intelligent species could happen again, and that we could be the orchestrators. Artificial intelligence is an example of our very own attempt at such an evolution. But what is artificial intelligence, and could it ever be creative in the way humans are? The term artificial intelligence was coined in 1956 by John McCarthy, whose main complaint about the wider use of the term was that as soon as a product using a form of artificial intelligence worked, people stopped referring to it as artificial intelligence. This has given AI an ever-moving set of goalposts in terms of its meaning and classification, in turn leading to the perception that true AI is always a mythical future reality that has never come to fruition (Urban, 2015). The existing definition of artificial intelligence stands as “the theory and development of computer systems to perform tasks normally requiring human intelligence such as visual perception, speech recognition, decision making and translation between languages” (Lexico, 2019). Nick Bostrom sets out a clear AI tier scale to help us define our overall intelligence progress. He defines the first tier as Artificial Narrow Intelligence (ANI): an artificial intelligence that specializes in only one area, such as playing the world’s best game of chess; however, asking an ANI to compute anything besides its intended specialization will draw blank results. Bostrom’s second intelligence tier addresses this weakness: Artificial General Intelligence (AGI), often referred to as human-level intelligence.
This tier refers to an AI that would be able to perform any intellectual task that a human brain can, indicating a wide, general problem-solving ability as well as being able to ‘think’/compute in several different ways, such as abstractly and precisely. The final intelligence tier is Artificial Super Intelligence (ASI). This is arguably the broadest category, as it refers to any artificial intelligence greater than our own, often discussed as the singularity. The definition inherently encompasses everything from an AI that is 5% smarter than the human average to one trillions of times more intelligent than us. The ASI definition also dictates superior ability in every type of intelligence, from social and emotional intelligence to scientific creativity and wisdom (Bostrom, 2006). An understanding of these divisions allows us to more clearly define AI’s implications for both creativity and design. However, defining AI brings us no closer to understanding one.
Modern artificial intelligence runs on algorithms and data, much like our own intelligence. Algorithms at their core rely on interlinked ‘if this then that’ clauses that in turn interact with variables that can represent any value. Ideally an algorithm follows three key rules: it should contain a precise and explicit set of instructions; it should always finish regardless of the values inputted; and ideally it should be fast. Algorithms both exploit the process by which we solve problems and are at the same time perfect for a computer to execute, as they simply require computing the set instructions against data (Du Sautoy, 2019). Algorithms are nothing new to mathematics; the oldest, the Euclidean algorithm, was written in approximately 300 BC (Berry, 2015). What has enabled algorithms to become so powerful in our everyday lives in the 21st century is data. Humanity now produces and stores more data in 48 hours than we did between the start of civilization and 2003 (Du Sautoy, 2019). Marcus du Sautoy describes the notion of not having enough data for an algorithm to process as similar to denying all sensory input to a child, leading them never to develop language or other perceived baseline human skills (ibid). This new wealth of data has allowed machine learning to turn algorithms into neural networks. Machine learning by itself is simply the ability of an algorithm to learn from its mistakes, adjusting its own or other algorithms’ thresholds/values so as never to arrive at that wrong answer again. Human programmers have been able to create machine-learning ‘meta-algorithms’ that are able to write their own algorithms for a basic task such as image recognition; for instance, being able to tell the difference between a picture of a bee and a picture of a 3.
Using the data provided, this meta-algorithm, or teacher algorithm as CGP Grey refers to it, is then able to test each of its generated algorithms, picking and tweaking the most successful ones. This process of tweaking and testing can be computed to happen billions of times, using as much data to inform the algorithm as humans can provide it (Grey, 2017).
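The teacher-algorithm loop described above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not Grey's actual system: the four-pixel "images", their labels, and the mutation parameters are all invented for the example. The "teacher" repeatedly scores mutated candidate classifiers against labelled data and keeps whichever performs best.

```python
import random

# Toy training data: each "image" is a 4-pixel vector, labelled 1 ("bee") or 0 ("3").
# These vectors and labels are invented purely for illustration.
DATA = [([0.9, 0.8, 0.1, 0.2], 1), ([0.8, 0.9, 0.2, 0.1], 1),
        ([0.1, 0.2, 0.9, 0.8], 0), ([0.2, 0.1, 0.8, 0.9], 0)]

def accuracy(weights):
    """Score a candidate classifier: weighted pixel sum > 0 predicts 'bee'."""
    correct = 0
    for pixels, label in DATA:
        prediction = 1 if sum(w * p for w, p in zip(weights, pixels)) > 0 else 0
        correct += (prediction == label)
    return correct / len(DATA)

def train(generations=200, population=20, seed=0):
    rng = random.Random(seed)
    # Start from one random candidate "student" algorithm (a weight vector).
    best = [rng.uniform(-1, 1) for _ in range(4)]
    for _ in range(generations):
        # The "teacher" tests mutated copies and keeps whichever scores best.
        candidates = [best] + [[w + rng.gauss(0, 0.3) for w in best]
                               for _ in range(population)]
        best = max(candidates, key=accuracy)
        if accuracy(best) == 1.0:
            break
    return best

weights = train()
print(accuracy(weights))  # fraction of the toy set classified correctly
```

A real system replaces the weighted sum with a neural network and the toy vectors with millions of labelled images, but the test-tweak-keep cycle is the same.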
Figure 1 – Multi-layer sigmoid neural network consisting of 784 input neurons, 16 hidden neurons and 10 output neurons (Kang, 2017)
AI programmers and designers have been stunned by how effectively these bottom-up systems perform, as their core logic is similar to that of our own brain, which has taken millions of years to evolve to its current state (Schachner, 2013). The term neural network is used because of this: as in the brain, smart self-learning neural groups/algorithms can be layered so as to answer parts of a problem in steps (Du Sautoy, 2019). Figure 1 shows a multi-layered sigmoid neural network aimed at recognising single-digit numbers drawn inside a 28x28 pixel box. The network uses known data given to it by humans to form the thresholds and values needed to arrive at a confident final answer. By adding hidden neuron layers, hundreds of analysis questions can be answered by the network, in turn leading to correct number recognition (Kang, 2017).
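A forward pass through the kind of network shown in Figure 1 can be sketched as follows. The layer sizes (784 inputs, 16 hidden, 10 outputs) match the figure, but the weights here are random stand-ins: a trained network would have tuned them against labelled digit data, which is the part this sketch omits.

```python
import math
import random

def sigmoid(x):
    # Squashes any activation into the 0..1 range used by Figure 1's neurons.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron outputs sigmoid(weighted sum of the previous layer + bias).
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out, rng):
    # Untrained (random) parameters: training would tune these values.
    weights = [[rng.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [rng.gauss(0, 0.1) for _ in range(n_out)]
    return weights, biases

rng = random.Random(0)
w1, b1 = make_layer(784, 16, rng)   # 784 pixels -> 16 hidden neurons
w2, b2 = make_layer(16, 10, rng)    # 16 hidden  -> 10 digit outputs

image = [rng.random() for _ in range(784)]   # stand-in for a 28x28 drawing
hidden = layer(image, w1, b1)
output = layer(hidden, w2, b2)
guess = output.index(max(output))            # most confident digit, 0-9
print(guess)
```

Each of the ten output values can be read as the network's confidence that the drawing is that digit, which is what makes the final answer "confident" in the sense described above.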
Current artificial intelligence based on neural networks and machine learning has already far outstripped our ability to understand it. This has led to the recent use of the term ‘black box’, owing to the fact that we increasingly supply AIs with the data, the questions and the metrics for testing their own success, with the AI spitting out its answers (Bathaee, 2018). There is no human programmer alive who fully understands today’s neural-network-driven AIs; while parts and sections of the networks can be understood in isolation, the sheer quantity of numbers and questions at play when the whole is considered is beyond the human mind (Grey, 2017). While the progress in AI during the 21st century has been breath-taking, how does it fare with creativity: being able to suggest not only something new or correct, but to curate it to be of creative value to humans?
In order to test whether an AI is able to be creative, Marcus du Sautoy has proposed the Lovelace Test. He states: “To pass the Lovelace test, an algorithm must originate a creative work such that the process is repeatable (not the result of a hardware error) and yet the programmer is unable to explain how the algorithm produced its output” (Du Sautoy, 2019, Loc 134). The key point of Du Sautoy’s Lovelace test is that the human programmer (known to be capable of creativity) must also be surprised at the repeatable results of the AI, to the extent of being unable to explain them. This ensures, to a reasonable extent, that the programmer has not, consciously at least, already generated the possible creative answers for the machine. As of writing, several AI systems have ‘passed’ the Lovelace test to some degree, such as AlphaGo from Google’s DeepMind division (Alphabet, 2019). But, just like the Turing test, the results are debatable, as opinions of what classifies as new, surprising and of value vary. The most widely acknowledged passing system is known as the Continuator, an AI jazz system created by Francois Pachet. Pachet wanted to create an AI improviser that would be able to respond back and forth with a jazz player in real time, empowering the jazz artist to explore their music more quickly. To do this, Pachet turned to a Markov chain at the core of his algorithm. In brief, Markov chains are a form of algorithm that calculates the next result in the chain by focusing on the state of the most recent value. The latest result heavily weights the calculation of the next outcome; however, over the long term the chain satisfies the law of large numbers, with the chain’s dependence on each new result disappearing (Myers, 2017). Applied to the Continuator, this allowed the AI to counteract the problem of simply reproducing the training data it was given.
The Markov-based AI allowed the Continuator to require only an initial snippet of what the artist was playing before responding in real time. The Continuator then analysed itself after each note, using both the probabilities learned from the training data and the previous note to choose what it thought the next note should be. While relatively simple for an AI, it proved to be extremely effective (Pachet, 2010). Renowned jazz musician Bernard Lubat tested the Continuator and said: “The system shows me ideas I could have developed, but that would have taken me years to actually develop. It is years ahead of me, yet everything it plays is unquestionably me” (Du Sautoy, 2019, Loc 3056). Here the Continuator is clearly demonstrating exploratory creativity. It has managed to surmise Lubat’s style and is bouncing back new ideas, sounds and musical questions. The responses it gives are not only surprisingly human-like (Sony, 2012) but new and of value to the musicians who play with it. It explores their soundscapes to, subjectively, a deeper level than they do themselves, while also allowing the musicians to analyse and compose new ideas based on the paths the Continuator shows them. Unlike a human, the Continuator is not physically limited by the instrument or body; its lack of embodiment, it seems, aids its creativity rather than restricting it.
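The note-by-note mechanism described above can be illustrated with a first-order Markov chain in Python. This is only a sketch of the general technique, not Pachet's implementation: the training phrase is an invented stand-in for a musician's recorded playing, and the model learns nothing beyond which note tends to follow which.

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    """First-order Markov model: count which note follows which."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(melody, melody[1:]):
        counts[current][nxt] += 1
    return counts

def continue_melody(snippet, counts, length, seed=0):
    """Respond to a snippet: each new note depends only on the previous one."""
    rng = random.Random(seed)
    notes = list(snippet)
    for _ in range(length):
        followers = counts.get(notes[-1])
        if not followers:
            break  # the model has never seen this note lead anywhere
        choices, weights = zip(*followers.items())
        notes.append(rng.choices(choices, weights=weights)[0])
    return notes

# Invented training phrase standing in for a musician's recorded playing.
training = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]
model = learn_transitions(training)
response = continue_melody(["C", "E"], model, length=6)
print(response)  # a short phrase continuing from the snippet
```

Because each choice is weighted by the learned counts rather than copied verbatim, the output stays in the style of the training material without reproducing it exactly, which is precisely the reproduction problem the Markov approach counteracts.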
The Continuator’s initial limitations were not hard to observe: its reliance on Markov chains meant that it had poor overall structure, leading to lacklustre compositions (Pachet, 2010). From this, Pachet developed what he described as the Flow Machine. This meta-algorithm would learn from an artist’s whole repertoire in order to deduce patterns in their structure, then apply these patterns as constraints on the Continuator’s playing choices. This not only greatly improved the Continuator’s overall composition but also allowed mixing and matching of structural styles from different genres with the playing style of an artist from another genre (Du Sautoy, 2019). The results are an initial stepping stone beyond exploratory creativity and into combinational creativity. The key downfall was the AI’s inability to curate its genre mixing into a composition of value (one that sounded pleasing to a human): ultimately a human mixer discerned which combinations worked and which did not, while the AI had no idea, it just played the algorithm. The relative success of exploratory creativity in an ANI opens up a host of new questions. Most importantly, can examples like the Continuator be scaled up beyond a tool to challenge human creativity?
In terms of raw computational power, we have already eclipsed our own brain. Ray Kurzweil has estimated the average human brain’s power at around 10 quadrillion calculations per second (CPS) (Kurzweil, 2006). The world’s top supercomputer, Summit, developed for the United States by IBM, weighs in at a peak of 200 quadrillion CPS. However, while beating us computationally, Summit also requires more than 13 megawatts of power and takes up hundreds of square metres, while the brain runs on just 20 watts and fits inside our heads (Sverdlik, 2018; Urban, 2015). This does not change the fact that we have now built machines with the physical capability to mentally outstrip us. Progressing from ANI to AGI is a whole different problem from just scaling up the number of connected CPUs, though. Our current neural-network approach gets unwieldy very quickly when trying to go beyond a focused ANI tool. What are the questions we want it to answer, and what values and outcomes should we set for it to curate itself by?
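The figures quoted above can be put side by side as a quick back-of-envelope calculation, which makes the efficiency gap concrete: Summit computes roughly 20 times faster than the brain's estimated rate, yet per watt the brain comes out tens of thousands of times ahead.

```python
# Figures as quoted above (Kurzweil, 2006; Sverdlik, 2018; Urban, 2015).
brain_cps = 10e15          # ~10 quadrillion calculations per second
brain_watts = 20           # the brain's approximate power draw
summit_cps = 200e15        # Summit's peak, ~200 quadrillion CPS
summit_watts = 13e6        # ~13 megawatts

speed_ratio = summit_cps / brain_cps
print(speed_ratio)         # Summit is ~20x faster in raw CPS

# Calculations per second per watt, brain vs supercomputer.
efficiency_gap = (brain_cps / brain_watts) / (summit_cps / summit_watts)
print(round(efficiency_gap))  # the brain is ~32,500x more efficient per watt
```

These are order-of-magnitude estimates, of course: Kurzweil's CPS figure for the brain is itself contested, so the exact ratios matter less than the scale of the gap.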
One approach aiming to sidestep these issues is simply to reverse engineer our own brain. By creating virtual neurons and sequencing them in a structure and scale similar to our own, scientists hope that a virtual brain will at some point learn not only the thought processes of humans but become conscious. So far, we have achieved whole-brain emulation on the scale of a 1mm flatworm brain, containing a total of 302 neurons; the human brain, for context, is estimated at 100 billion neurons. The approach already seems hopeless, before even considering the far greater number of connections those 100 billion neurons possess (Urban, 2015). However, just like our initial AI ventures, progress on this is unlikely to be linear, and we are exploiting the knowledge that the process humans evolved from was not unique or one-off. If whole-brain emulation does eventually become feasible, an AGI digital human replica seems possible.
To truly progress beyond an ANI, machine consciousness, or at least simulated consciousness, seems an inevitable requirement for an AGI, and in turn for a designer AI rather than just a designer’s AI tool. Scientists already have difficulty assessing which creatures possess which levels of consciousness, let alone defining how consciousness works to a degree that could be programmed into a neural network. All of this also assumes that we will be able to recognise machine consciousness when it happens: what happens when a neural network has processed and learned from so much human-generated data that it is able to appear self-aware and conscious without actually being so (Urban, 2015)? John Searle has proposed, in his Chinese room thought experiment, that no computer could ever be conscious or have a true understanding, as the human mind is more than just an information-processing system. Searle argues that while an AI may be able to simulate many abilities, such as self-awareness, at the end of the day it is simply running a program, and a simulation is not the same as understanding or thinking; in the same way, if he were to read out a passage of Chinese, it might appear that he understood the conversation when in actuality he was simply processing the information on the page. At the core of the Chinese room experiment is the notion that informational computation, no matter how great, does not equal understanding (Cole, 2014). On the other side of this argument, however, is the notion that humanity itself became self-aware and conscious through no other change than increased computational power. In the same way that one molecule of water is not wet, but when a large collection of water molecules is present the property of wetness suddenly applies, the concept that the whole is greater than the sum of its parts may hold for computational power as well (Du Sautoy, 2019). Will we ever know until an AGI, and in turn an ASI, is created? Even then, will we be able to know?
Ever since humans created the first tools, there were humans who knew how they worked; artificial intelligence marks the first point in human history where a tool’s creators no longer understand how it does what it does. This is in some regard similar to the point in time when humans became self-aware and were able to design tools with themselves and other processes in mind. However, while the results from modern AIs have been breath-taking, on a fundamental level they do not yet seem to pose an existential danger to the human creative code. How can this be, when we have seen algorithms able to break down some of the key historic arguments against AI? Arguments such as that learning from examples will only lead to duplicated results have been proved false by the ability of meta-algorithms to facilitate rule breaking with neural networks. So too has the assumption that AI cannot reflect on what it is producing: already, to a limited extent, it is able to curate its answers.
The main problem with AI, one that undercuts all of these achievements, is that at the end of the day it has nothing to say. Unless and until an artificial consciousness and self-awareness is developed, AI seems likely to remain a tool that humans must propel and initiate with their own creativity. We have seen that self-awareness is directly involved in the ability to be creative, and therefore to design, to the point that, for argument's sake, all three seem to have manifested at the same time. While AIs such as the Continuator have emerged able to produce exploratory and combinational creativity, these types of creativity lend themselves well to the greater computational style of a machine, yet they remain an order of magnitude less impressive than the transformational creative acts that have defined humanity's self-evolution.
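The Continuator learns Markov-style transition models from a musician's own playing and uses them to improvise continuations in the same style. The toy sketch below illustrates only that underlying idea with a first-order model over an invented note phrase; it is not Pachet's actual variable-order system, and the phrase, function names and dead-end restart rule are assumptions made for illustration:

```python
import random
from collections import defaultdict

def learn_transitions(notes):
    """Record which note follows which in the input phrase (a first-order Markov model)."""
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def continue_phrase(notes, length, seed=0):
    """Improvise a continuation 'in the style of' the phrase by walking the learned transitions."""
    transitions = learn_transitions(notes)
    rng = random.Random(seed)
    current = notes[-1]
    continuation = []
    for _ in range(length):
        options = transitions.get(current)
        if not options:  # dead end: restart from a randomly chosen learned state
            current = rng.choice(list(transitions))
            options = transitions[current]
        current = rng.choice(options)
        continuation.append(current)
    return continuation

phrase = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5"]
print(continue_phrase(phrase, 8))
```

Because every generated note is drawn only from transitions observed in the input, the output stays within the player's vocabulary — exploratory and combinational, but never transformational, which is exactly the limit the argument above describes.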
Considering the possibility that AI could gain consciousness and self-awareness opens up a trove of questions about how, and what, it would design. However, the meta-question should ask not what, but why. A conscious AI would have no reason or drive to design. Design at its core is symbiotic with mortality: it is an attempt by humans not only to demonstrate their intellect but also to leave something in the world that will outlive them, an attempt to defy death and pass on their ideas and thoughts. Would we consider a conscious AI to be alive and mortal? Or even alive and immortal? Even if an AI were alive and conscious, it would still have no ingrained pursuit to pass on its ideas, because it would not age; instead of having offspring, it would simply improve and evolve itself. Perhaps consciousness is a result not only of intellect but of biological evolution. Just as with the Continuator, applying artificial limits can improve complexity and depth rather than suffocate them. Consciousness of mortality could in fact be an evolutionarily positive trait, as beings aware of their limited lifespan will seek to create the most successful life they can, while they can. If this were the case, then AI would forever remain a tool for humans to design with and drive themselves. While it is impossible to say with any degree of confidence, design may always need humans, at least to design with other humans in mind. Just as we now do not understand the machines we have created, they may never be able to truly understand at all.
Thank you if you read this far.
Allen, C. and Trestman, M. (2016). "Animal Consciousness". [online] Plato.stanford.edu. Available at: https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=consciousness-animal [Accessed 3 Jun. 2019].
Alphabet (2019). AlphaGo | DeepMind. [online] DeepMind. Available at: https://deepmind.com/research/alphago/ [Accessed 21 Jun. 2019].
Bathaee, Y. (2018). The artificial intelligence black box and the failure of intent and causation. [online] Jolt.law.harvard.edu. Available at: https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf [Accessed 15 Jun. 2019].
von Bayern, A., Danel, S., Auersperg, A., Mioduszewska, B. and Kacelnik, A. (2018). Compound tool construction by New Caledonian crows. Scientific Reports, 8(1).
Berry, B. (2015). The Euclidean Algorithm - Math Hacks. [online] Available at: https://medium.com/i-math/the-euclidean-algorithm-631d7ddf2382 [Accessed 19 Jun. 2019].
Boden, M. (2004). The Creative Mind: Myths and mechanisms. 2nd ed. London: Routledge.
Boden, M. (2016). AI: Its Nature and Future. 1st ed. Oxford: Oxford University Press.
Bostrom, N. (2006). How long before superintelligence? [online] Nickbostrom.com. Available at: https://nickbostrom.com/superintelligence.html [Accessed 19 Jun. 2019].
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Cambridge (2019). UNIQUE | definition in the Cambridge English Dictionary. [online] Dictionary.cambridge.org. Available at: https://dictionary.cambridge.org/us/dictionary/english/unique [Accessed 4 Jun. 2019].
Carykh (2016). Evolution Simulators - YouTube. [online] Available at: https://www.youtube.com/playlist?list=PLrUdxfaFpuuK0rj55Rhc187Tn9vvxck7t [Accessed 14 Jun. 2019].
Cole, D. (2014). The Chinese Room Argument. [online] Plato.stanford.edu. Available at: https://plato.stanford.edu/entries/chinese-room/ [Accessed 21 Jun. 2019].
Du Sautoy, M. (2019). The Creativity Code - How AI is learning to write, paint and think. Oxford: 4th Estate.
Foo, Y., Simmons, L. and Rhodes, G. (2017). Predictors of facial attractiveness and health in humans. Scientific Reports, 7(1).
Fry, T. (2012). Becoming human by design. London: Berg.
Grey, C. (2014). Humans Need Not Apply. [online] Available at: https://www.youtube.com/watch?v=7Pq-S557XQU&ab_channel=CGPGrey [Accessed 10 Jun. 2019].
Grey, C. (2017). How Machines Learn. [online] Available at: https://www.youtube.com/watch?v=R9OHn5ZF4Uo [Accessed 10 Jun. 2019].
Herculano-Houzel, S. (2009). The human brain in numbers: a linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3.
Jabr, F. (2012). Self-Awareness with a Simple Brain. [online] Scientificamerican.com. Available at: https://www.scientificamerican.com/article/self-awareness-with-a-simple-brain/?redirect=1 [Accessed 4 Jun. 2019].
Kang (2017). Multi-Layer Neural Networks with Sigmoid Function— Deep Learning for Rookies (2). [online] Towards Data Science. Available at: https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f [Accessed 14 Jun. 2019].
Kurzweil, R. (2006). The Singularity is Near. New York: Viking.
Lawson, B. (2003). The Architectural Healthcare Environment and its Effects on Patient Health Outcomes. [online] Wales.nhs.uk. Available at: http://www.wales.nhs.uk/sites3/documents/254/ArchHealthEnv.pdf [Accessed 7 Jun. 2019].
Lexico (2019). Artificial Intelligence | Definition of artificial intelligence in English. [online] Lexico Dictionaries. Available at: https://www.lexico.com/en/definition/artificial_intelligence [Accessed 11 Jun. 2019].
Moore, A. (2016). Do design. Cardigan: Do Book Co.
Myers, D. (2017). An introduction to Markov chains and their applications within finance. [online] Math.chalmers.se.
Norman, D. (2004). Why we love (or hate) everyday things. New York: Perseus Books Group.
Norman, D. (2013). The Design of Everyday Things. London: The MIT Press.
Pachet, F. (2010). The Continuator: Musical Interaction With Style. Journal of New Music Research, 32(3), pp.333-341.
Pachet, F. (2019). François Pachet - Director of Spotify Creator Technology Research Lab. [online] francoispachet.fr. Available at: https://www.francoispachet.fr [Accessed 17 Jun. 2019].
Ritchie, R. (2019). The Secret History of iPhone. [online] iMore. Available at: https://www.imore.com/history-iphone-original [Accessed 11 Jun. 2019].
Sanderson, G. (2017). But what *is* a Neural Network? | Deep learning, chapter 1. [online] Available at: https://www.youtube.com/watch?v=aircAruvnKk [Accessed 14 Jun. 2019].
Schacher, E. (2013). How Has the Human Brain Evolved? [online] Available at: https://www.scientificamerican.com/article/how-has-human-brain-evolved/ [Accessed 17 Jun. 2019].
Shumaker, R., Walkup, K. and Beck, B. (2011). Animal tool behavior. Baltimore: Johns Hopkins University Press.
Sony (2012). Musical Turing test with the Continuator. [online] YouTube. Available at: https://www.youtube.com/watch?v=ynPWOMzossI&feature=youtu.be [Accessed 21 Jun. 2019].
Sverdlik, Y. (2018). IBM, Nvidia Build “World’s Fastest Supercomputer” for US Government. [online] Data Centre Knowledge. Available at: https://www.datacenterknowledge.com/supercomputers/ibm-nvidia-build-world-s-fastest-supercomputer-us-government [Accessed 17 Jun. 2019].
Urban, T. (2015). The Artificial Intelligence Revolution: The Road to Superintelligence. [online] Available at: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [Accessed 1 Jun. 2019].
Wald, C. (2015). Neuroscience: The aesthetic brain. Nature, 526(7572), pp. S2-S3.