TUM scientific essay

Note: When applying to TUM, I had to write a scientific essay on a specific prompt to demonstrate my domain knowledge and critical thinking skills. I’m publishing it here to give other applicants ideas on how to structure their own essays.

In March of 2016, Lee Sedol, a world champion of the ancient Chinese board game Go, was beaten by “AlphaGo,” an Artificial Intelligence (AI) computer program [1]. This represented the first time in history that a computer had bested one of the world’s top Go masters. With the game having roughly 250^150 possible sequences of moves (that is, more possible games than there are atoms in the observable universe), the AI won not by enumerating all of the possibilities, but by simply making the better decision based on the current layout of the board. This upset represents a fundamental change in humanity’s relationship with computers, as the win demonstrates the unabashed superiority of Artificial Intelligence within this specific domain. And while the achievement may seem trivial, it foreshadows how Artificial Intelligence will continue its penetration into the fabric of society.

Beginning its life as an academic discipline in 1956, AI has gone through many periods of hype and disillusionment as novel approaches were tried, tested, and discarded [2]. Since the late 20th century, however, deep learning, a type of AI that trains itself on large datasets, has broken new barriers and redefined what is possible [3]. This shift has been significant, and it has fueled AI’s rapid spread.

One area in which AI has gained traction is disease recognition. Deep learning models have been used to detect retinal diseases from fundus images, tuberculosis from chest radiographs, and malignant melanoma from skin images [4]. Moreover, because it is the nature of deep learning algorithms to train on large swaths of data, every new image a model ‘practices’ its diagnoses on feeds a self-reinforcing cycle of learning and improving. And while these models may not be suited to assign diagnoses with 100% confidence, they serve as valuable screening tools for doctors, flagging potential problems before they become acute. Especially with the rise of ‘wearables’ like the Apple Watch with built-in electrocardiogram sensors, real-time data can be fed to these models for instant analysis [5].
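To make this concrete, the sketch below shows roughly how such an image-based model can be built: a convolutional network pretrained on generic images is fine-tuned on labeled medical images. This is a minimal illustration, not the method behind the systems cited in [4]; the dataset path, class layout, and hyperparameters are hypothetical placeholders.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained CNN to classify
# skin-lesion images. The directory "data/skin_lesions/<class>/*.jpg" is a
# hypothetical placeholder, as are the hyperparameters.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing, matching what the backbone was trained on.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("data/skin_lesions", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and swap in a new classifier
# head sized to the number of diagnostic classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The self-reinforcing cycle described above falls out of this setup naturally: every newly labeled image simply extends the training directory, and the next round of fine-tuning absorbs it.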

However, alongside all these advancements in deep learning and its benefits, AI has also allowed different and exploitative business models to flourish. Most notably, Meta, through its products like Facebook, Instagram, and WhatsApp, collects data on people to create intricate profiles, which can then be sold and used to predict personal habits, preferences, and consumer behaviors [6]. Moreover, in 2018, a company called Cambridge Analytica (CA) was put in the spotlight for acquiring and using personal data about Facebook users [7], such as demographics, internet activity, and consumer behavior, to try to influence the 2016 US elections [8]. It was later found that a Cambridge Analytica contractor passed this sensitive information to Russian intelligence, a known malicious actor in the 2016 presidential elections [9]. The implication of these events is that companies can create intricate profiles of their users, feed them to AI programs, and serve personalized ads and other content designed to influence each user’s behavior. This is fundamentally manipulative and therefore poses a risk to humanity. Similarly, the data AI needs to create these profiles is extremely sensitive and dangerous in the wrong hands. It is therefore paramount that, as AI continues its spread through society, this data is kept safe and collected only from consenting individuals.

While Artificial Intelligence has mostly held a supporting role in its various applications, it will increasingly move towards full autonomy in the future, as can already be seen with self-driving cars. In February of 2022, Cruise, a subsidiary of General Motors, was approved by the city of San Francisco to begin offering driverless taxi rides to the general public [10]. With this service, the role of the taxi driver becomes obsolete, foreshadowing a larger trend in which AI replaces human services. Moreover, the cost of doing research in these domains is prohibitively high: in 2021, Alphabet, the parent company of Google, spent $31.562 billion on R&D, much of which was dedicated to Artificial Intelligence [11]. For most companies, the costs of AI research are simply too great, meaning only larger, more established companies will be able to develop the technology. The implication is that AI will rest in the hands of a few, whose political ambitions and ethics may be questionable.

Another risk to humans is the bias embedded in AI systems. As most deep learning models are trained on data sourced from humans – who are naturally biased – the resulting models will also carry these biases. In fact, in 2019, it was shown that Facebook’s ad-serving algorithm discriminated based on users’ race, gender, and religion [12]. This presents an obvious danger, as unsavory prejudices risk creeping their way into important decision-making utilities. And while these systems can be improved by providing supplemental data in the areas where the models fall short, it is unlikely that they will ever be perfectly unbiased, and by the time a bias is caught, the damage may already be done.
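The supplemental-data fix mentioned above can be as simple as re-weighting how often underrepresented groups appear during training. Below is a minimal sketch of that idea using PyTorch’s WeightedRandomSampler; the dataset and group labels are synthetic stand-ins, not real user data.

```python
# Rebalancing sketch: oversample minority-group examples so the model sees
# all groups at roughly equal rates during training. Data here is synthetic.
from collections import Counter

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy imbalanced dataset: 900 examples from group 0, 100 from group 1.
features = torch.randn(1000, 8)
groups = torch.tensor([0] * 900 + [1] * 100)
dataset = TensorDataset(features, groups)

# Weight each example inversely to its group's frequency.
counts = Counter(groups.tolist())
weights = torch.tensor([1.0 / counts[g.item()] for g in groups])

sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=100, sampler=sampler)

# Each batch now contains roughly equal numbers from both groups.
_, batch_groups = next(iter(loader))
print(Counter(batch_groups.tolist()))
```

Note that resampling only balances exposure: it cannot remove prejudice already encoded in the labels themselves, which is why the caveat above still stands.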

AI is fundamentally a tool which expands the scope of what is possible. From medical imaging to autonomous cars and even seemingly trivial applications like music recommendation, AI is increasingly rooting itself within many facets of society. But it is a tool to be heeded carefully. Not only is it centralized within the hands of a few big actors, but AI models also contain prejudices, which may lead to unsavory outcomes that could, for example, undermine democracy. However, with proper regulation and better representation of underrepresented subjects in training data, AI can be reined in to extract more benefits than detriments [13].

References:

[1]: D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature, 27-Jan-2016. [Online]. Available: https://www.nature.com/articles/nature16961. [Accessed: 09-Nov-2022].
[2]: D. Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: Basic Books, 1995.
[3]: T. Mitchell, B. Buchanan, G. DeJong, T. Dietterich, P. Rosenbloom, and A. Waibel, “Machine learning,” Annual Review of Computer Science, 1990. [Online]. Available: https://www.annualreviews.org/doi/10.1146/annurev.cs.04.060190.002221. [Accessed: 09-Nov-2022].
[4]: D. S. W. Ting, Y. Liu, P. Burlina, X. Xu, N. M. Bressler, and T. Y. Wong, “AI for medical imaging goes deep,” Nature Medicine, 07-May-2018. [Online]. Available: https://www.nature.com/articles/s41591-018-0029-3. [Accessed: 09-Nov-2022].
[5]: J.-Y. Sun, H. Shen, Q. Qu, W. Sun, and X.-Q. Kong, “The application of deep learning in electrocardiogram: Where we came from and where we should go?,” International Journal of Cardiology, 14-May-2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167527321008299. [Accessed: 09-Nov-2022].
[6]: S. C. Matz, R. E. Appel, and M. Kosinski, “Privacy in the age of psychological targeting,” 2019. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/31563799/. [Accessed: 09-Nov-2022].
[7]: M. Rosenberg, N. Confessore, and C. Cadwalladr, “How Trump consultants exploited the Facebook data of millions,” The New York Times, 17-Mar-2018. [Online]. Available: https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html. [Accessed: 09-Nov-2022].
[8]: “Ted Cruz campaign using firm that harvested data on millions of unwitting Facebook users,” The Guardian, 11-Dec-2015. [Online]. Available: https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data. [Accessed: 09-Nov-2022].
[9]: W. Siegelman, “Did Cambridge Analytica collude with Russia’s intelligence services to interfere in US elections?,” Byline Times, 21-Apr-2021. [Online]. Available: https://bylinetimes.com/2021/04/20/did-cambridge-analytica-collude-with-russias-intelligence-services-to-interfere-in-us-elections/. [Accessed: 09-Nov-2022].
[10]: V. Vijayenthiran, “Cruise opens up driverless taxi service to public in San Francisco,” Motor Authority, 02-Feb-2022. [Online]. Available: https://www.motorauthority.com/news/1132494_cruise-opens-up-driverless-taxi-service-to-public-in-san-francisco. [Accessed: 09-Nov-2022].
[11]: S. Rosenbush, “Big Tech is spending billions on AI research. Investors should keep an eye out,” The Wall Street Journal, 08-Mar-2022. [Online]. Available: https://www.wsj.com/articles/big-tech-is-spending-billions-on-ai-research-investors-should-keep-an-eye-out-11646740800. [Accessed: 09-Nov-2022].
[12]: M. Ali, P. Sapiezynski, M. Bogen, A. Korolova, A. Mislove, and A. Rieke, “Discrimination through optimization: How Facebook’s ad delivery can lead to biased outcomes,” Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. CSCW, 01-Nov-2019. [Online]. Available: https://dl.acm.org/doi/10.1145/3359301. [Accessed: 09-Nov-2022].
[13]: C. Metz, “We teach A.I. systems everything, including our biases,” The New York Times, 11-Nov-2019. [Online]. Available: https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html. [Accessed: 09-Nov-2022].