ARTIFICIAL INTELLIGENCE IN ARCHITECTURE AND BUILT ENVIRONMENT DEVELOPMENT 2024: A CRITICAL REVIEW AND OUTLOOK, 6th part: Lawsuits

Obviously, the quality of a training dataset, in terms of size, comprehensiveness, and relevance, is critical to an AI application's performance. Often, though not always, the items that make up such a set are the intellectual property of their respective authors. Those authors feel wronged when someone - no matter whether a human or an AI - takes and compiles their creations (or their digital representations) to put them on display or to redistribute them individually.

At issue, mainly, is generative AI’s tendency to replicate images, text, and more — including copyrighted content — from the data used to train it. Indeed, image-generating AI models like Midjourney, DALL-E, and Stable Diffusion replicate aspects of images from their training data. As a result, with generative AI entering the mainstream, each new day brings a new lawsuit. Microsoft, GitHub, and OpenAI are currently being sued in a class-action lawsuit that accuses them of violating copyright law by allowing licensed code snippets to be reproduced without attribution. Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case alleging they infringed the rights of millions of artists by training their tools on web-scraped images. And stock image supplier Getty Images took Stability AI to court for reportedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI [196].

Let us zoom in: what is the issue? Is it the use of other authors' work – even if an adaptive use? Or is it the use of other authors' work by a machine – without human creative input? Such use, commonly labeled paraphrasing, abounds in history and the present alike: William-Adolphe Bouguereau and Alexandre Cabanel paraphrasing Botticelli's Birth of Venus, Joos van Cleve's paraphrasing-slash-counterfeiting of Leonardo da Vinci's Mona Lisa, Michal Ozibko paraphrasing Girl with a Pearl Earring by Jan Vermeer van Delft, Tadao Cern paraphrasing a self-portrait by Vincent van Gogh, Peter Lindbergh featuring Julianne Moore in paraphrases of Gustav Klimt's and Egon Schiele's portraits, Paul Cezanne paraphrasing Édouard Manet's Olympia, itself a paraphrase of Francisco Goya's Maja, and many others paraphrasing many other works by many other authors. Hardly any of these paraphrases has provoked rejection, either from the original authors (not least because they are often long dead) or from the professional public; on the contrary, a paraphrase is often perceived as a tribute to the original author.

It looks as if the problem is the machine – the AI taking whatever it can get. Let us leave aside that this is not entirely true either – AI only takes what its trainers and supervisors feed it directly or allow it to reach. The anonymity of the „independent machine's“ appropriation appears to be the core of the issue, underlined by the black-box nature of AI, which hides the author even further. No wonder, then: how should AI cope with expectations, and with legal paradigms, that emerged and evolved without the slightest notion of anything like AI? Perhaps the general understanding and the legal framework simply need to catch up with a new, unprecedented, and unexpected phenomenon.

Nevertheless, the latest developments do not come close to solving the problem; rather the opposite. In late December 2023, the New York Times, publisher of the famous newspaper of the same name, filed a lawsuit against OpenAI and Microsoft for massive copyright infringement in their ChatGPT and Copilot products. The complaint grounds its claim in the crucial role independent journalism plays in a democratic society and in how The New York Times finances its activities: through subscriptions and licensing arrangements. Also deploying a subscription model, ChatGPT competes directly with the New York Times. The defendants' activities are far from unprofitable: „The use of valuable content belonging to others was extremely lucrative for the defendants, … OpenAI being on track to generate $1 billion in sales by 2023, and Microsoft increasing its market value to a record $2.8 trillion (up $1 trillion from a year ago),“ the lawsuit states [197].

The infringement is twofold, according to New York Times representatives. Primarily, ChatGPT can reproduce copyrighted content „word for word“ at more than a trivial scale. The difference between how search engines treat newspaper content and how a chatbot treats it is obvious: ChatGPT helps a user bypass the protection of a locked article. Of course, internet search engines also show parts of the text; that is what the current online information environment is built on, and the New York Times benefits from it, too. However, the snippets that ChatGPT or Bing Chat (later Microsoft Copilot) produces are much longer and can therefore be argued to infringe significantly on editorial copyright. The second type of infringement is no less serious, according to the NY Times: the „hallucinations“ that occur in LLMs by the very principle of their operation, generating plausible-looking text that is nevertheless out of touch with reality [198].

The New York Times lawsuits are not isolated; more news organizations—The Intercept, Raw Story, and AlterNet among others—have filed separate lawsuits against OpenAI, alleging ChatGPT-related copyright infringement [199].

In addition, the allegations are not coming only from outside the AI industry. On the rare 29th of February of the leap year 2024, Elon Musk filed a lawsuit against OpenAI and its CEO Sam Altman, alleging they have abandoned the company’s founding agreement to pursue AI research not for profit but for the benefit of humanity, as he, Altman, Greg Brockman, and other co-founders had agreed [199]. The issue is that, upon releasing its GPT-4 model, OpenAI Inc. transformed into a closed-source, de facto subsidiary of the largest technology company in the world – Microsoft. The true motivation of the lawsuit may be revealed later; for the time being, a business motive cannot be ruled out.

Also, Verses (see section (3) of this paper) asserts that generative AI market leader OpenAI is breaching its charter by not engaging with Verses' revolutionary approach. Having taken out a full-page advertisement in the New York Times in December 2023 announcing its progress and methods [200], Verses challenged OpenAI to cooperate, asking the nonprofit to fulfill its charter promise to „stop competing“ with any „value-aligned, safety-conscious project that comes close to building AGI before we do.“ Verses believes it qualifies for such cooperation and offers collaboration to ensure safe and beneficial AGI development. Competition, then, is the friction point: with its roughly 100 employees and $65 million in revenue, Verses understandably feels threatened entering a battlefield of tycoons. Even if its founders are right in theory, there is no guarantee that their approach will take off in the market.

Big expectations – and challenges

During the 2024 World Economic Forum gathering in Davos, AI was given a big say. Meta's Chief AI Scientist Yann LeCun spoke on the power of open-source AI and how to keep AI progressing fast. Cohere's CEO Aidan Gomez assured the audience that AI is about to accelerate even further thanks to improvements in architecture and hardware. Microsoft's CEO Satya Nadella presaged how AI can improve lives, from healthcare through education to products and services. Rwanda's Innovation Minister Paula Ingabire reported an economic boost to her country of up to 6% from leveraging AI. IBM's CEO Arvind Krishna forecast that AI will generate 4 trillion dollars of annual productivity in economies worldwide. U.S. Senator Mike Rounds of South Dakota asserted that AI will be transformational in healthcare and that the U.S. public will become optimistic about AI after seeing the quality-of-life improvements. Qualcomm's CEO Cristiano Amon explained AI advancements presented very recently at CES 2024 [https://www.ces.tech], such as the ability to have a conversation with one's car. And Google's CFO Ruth Porat recalled the importance of cybersecurity and of preventing misinformation in the age of AI [201].

Exploring the transformative potential of AI for the global economy and humanity, the International Monetary Fund expects AI to redraw the map of the world economy. The Fund's research estimates that AI will affect up to 40% of jobs worldwide and 60% in advanced economies, primarily skilled jobs. Advanced economies are expected to experience both significant risks and significant opportunities from AI, while emerging markets and developing countries, being less exposed to AI, may face less immediate disruption; the latter, however, are assumed to be at risk of falling behind due to insufficient infrastructure and a shortage of skilled labor. Within countries, AI can exacerbate income and wealth inequality: it can boost the productivity and wages of those who can use it while leaving behind those who cannot. Policymakers are urged to address the challenges posed by AI and to focus on comprehensive social safety nets and retraining programs. The need for both developed and developing economies to adapt to the AI era is regarded as urgent: advanced economies are encouraged to focus on AI innovation and regulatory frameworks, while emerging and developing economies should invest in digital infrastructure and workforce skills.

However, some skepticism remains as to whether the application of artificial intelligence will usher in a new era of sustained acceleration in productivity. US Federal Reserve Chair Jerome Powell’s take last month was “probably not in the short run,” though “probably maybe in the longer run.” What deserves attention is the emerging major difference between the world’s two largest economies in how AI research and applications will be funded. And that, in turn, may affect the extent to which experimentation with and diffusion of generative AI – AI that creates new content – evolve in the US versus China.

The Chinese Communist Party is making it increasingly clear that it wants to call the shots on how capital gets allocated. Beijing reported this week that President Xi Jinping led a call for the party’s Central Committee to set up a mechanism to steer technology work in the country. In the meantime, a debate surfaced on whether to close the domestic stock market to initial public offerings—something hardly thinkable in the US. American capital markets are just the opposite: a forum that allows anything to happen, including hype and mania, as witnessed this week with Nvidia Corp.’s record surge. While that makes US markets vulnerable to crashes and volatility, it also means they are a venue for funding big dreams. And that may prove decisive in determining which of the two big rivals enjoys the productivity gains that—sooner or later—materialize from AI.

And Europe? Rather unexpectedly, Europe shows the capability to attract AI talent, and European education systems prove proficient enough to produce about the same number of top AI experts as the US. This good news dispels the notion that Europe is unable to produce experts. However, so far only half of them stay to work in Europe. Among other things, this relates to the strength of the capital market, the business environment, and the flexibility of jobs and entrepreneurship [202].

AI market leaders naturally have their own ideas about what comes next. Microsoft and OpenAI (which is substantially co-owned by Microsoft [203]) have ambitious plans to collaborate on a massive data center project with an estimated cost of up to $100 billion. At the heart of the project, which aims to create cutting-edge infrastructure for AI R&D, is the AI supercomputer „Stargate“, anticipated to be operational by 2028. The project budget is not only hundreds of times larger than the cost of any existing „competitor“; for comparison, it equals the 2023 gross domestic product of Bulgaria [204] and exceeds the gross domestic product of two-thirds of the world’s national economies.

With its renowned customer drive and unique ability to connect research and development with marketing, Apple Inc. has recently set up teams to investigate a push into personal AI-driven robotics, a field with the potential to become one of the company’s ever-shifting “next big things.” After the electric vehicle project was nixed, the search is on for new sources of growth: naturally, AI-driven robotics is the goal – and, this being Apple, where else than in the field of home devices [205]?

AI has become a phenomenon in economic, cultural, and also political terms. In March 2024, Demis Hassabis, co-founder and CEO of Google’s AI subsidiary DeepMind, was knighted for his contributions to the field of AI. His work at DeepMind—particularly the development of the AI system AlphaGo—was recognized as pivotal in advancing AI technology, and the knighthood was declared to reflect his role in positioning the U.K. as a leader in AI research and development. Hassabis' acknowledgment concurrently reflects the broader impact of AI on the global stage, highlighting the significance of AI innovation in contemporary society [206].

Concurrently, OpenAI's co-founder and CEO Sam Altman worries about upcoming hardware shortages that would prevent overall computing power from growing quickly enough to meet the needs of ever-newer and mightier AI algorithms. Proactively, he seeks to raise seven billion USD to establish a bold network of chipmakers [207]. In the shadow of AI, the future appears not only intelligent but groundbreakingly expansive, too. Symptomatically, the magnitude 7.4 Hualien earthquake in Taiwan on April 3, 2024, showed that Altman's worries are not so detached from reality, though for different reasons [208].

To fully appreciate the expectations placed on AI, a look at the amounts being invested is needed. Global corporate investment in AI from 2013 to 2022 runs into the trillions and is starting to be backed by the public sector, too (if Saudi Arabian wealth funds are considered public) [209]. Venture capital investors have lavishly funded a pipeline of additional upstarts; eight of the most prominent were recently valued at an average of 83 times their projected annual revenue. In March 2024, the markets impatiently awaited the leading companies' 2023 economic results – and breathed a sigh of relief when Nvidia proved to have overtaken Aramco (after surpassing Amazon and Alphabet six months earlier) to become the third-most-valuable company in the world. The largest technology firms in the world, known as the „Magnificent Seven“ – Microsoft, Apple, Nvidia, Alphabet, Amazon, Meta, and Tesla – together have such a huge market value that by themselves they would constitute the second-largest stock market in the world (the largest being the American New York Stock Exchange). The comparison can go further: Microsoft and Apple are each separately valued at roughly the combined value of all firms listed on the stock exchanges of France, Saudi Arabia, or Great Britain. Deutsche Bank warns that such a high concentration of stocks has come dangerously close to the levels of 1929 and 2000, when markets suffered huge falls [210].

As recalled two subsections above, the clashes between tech tycoons may signal an economic „bubble“ in the sector, and current economic data may point the same way. The market-cap-to-GDP ratio, nicknamed the Buffett indicator, currently stands at 193%, suggesting that the (US) stock market is overvalued relative to GDP [211,212]; see the sketch below. In addition, the high-performing stocks of the „Magnificent Seven“ have been driving much of the market growth while other stocks have been stagnating or underperforming. Market concentration risk is becoming apparent, strongly advising spreading investments across various sectors and companies; a „bubble“ may be looming. Could turning AI R&D toward new application areas – such as architecture and the development of the built environment – help avert such negative prospects?
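For illustration, the Buffett indicator is simply the total stock-market capitalization divided by gross domestic product, expressed as a percentage. The minimal Python sketch below computes it; the input figures are round, hypothetical numbers chosen only to reproduce a reading near 193%, not values taken from the cited sources.

# Minimal sketch of the Buffett indicator (market-cap-to-GDP ratio).
# The inputs are illustrative assumptions in trillions of USD,
# not data from the cited sources [211,212].
def buffett_indicator(total_market_cap: float, gdp: float) -> float:
    """Total stock-market capitalization as a percentage of GDP."""
    return total_market_cap / gdp * 100.0

# Hypothetical round numbers: a market worth roughly twice the GDP
# yields a reading close to the ~193% mentioned above.
print(f"Buffett indicator: {buffett_indicator(52.0, 27.0):.0f}%")  # ~193%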

References

Introduction figure: Labbe, J.: Diverse AI-tools deployment in multi-level design development. Luka Development Scenario, Prague. MS architekti, Prague, 2024. Author's archive.

Michal Sourek
