
Navigating the Intersection of Sustainable AI and Real-World Implementation: Challenges and Considerations

In the realm of sustainable AI, there is a growing emphasis on using artificial intelligence to advance sustainability objectives. Recently, I participated in a conference hosted by Responsible AI UK (RAi UK) and UK Research and Innovation (UKRI), which delved into the application of AI to help the UK reach its net zero goals. The discussions showcased intriguing initiatives led by researchers, ranging from using AI to rejuvenate ecosystems by identifying optimal tree species based on environmental factors such as soil quality and climate, to employing AI for carbon capture and storage and for improving the energy efficiency of vehicles.

Against that backdrop, the following paragraphs offer a personal perspective on the complexities surrounding energy efficiency in AI development and the trade-offs between accuracy and sustainability. While innovative ideas abound, a significant hurdle lies in bridging the gap between theoretical models and practical implementation in real-world scenarios.

Many AI projects encounter challenges in transitioning from development to market or public deployment due to the inherent limitations of models in fully capturing the complexities of their operational environments. Moreover, from a sustainability perspective, the computational demands of complex models for data generation, training, and operation pose significant energy consumption concerns, particularly in the absence of readily available data. Some argue that researchers should quantify the energy costs associated with developing, deploying, and operating AI models—a consideration that extends to all organizations involved in AI system development and deployment. However, measuring energy consumption accurately remains a formidable task, exacerbated by the reliance on cloud infrastructure. While cloud providers could offer insights into energy consumption at the instance or virtual CPU level, the feasibility and business benefits of such endeavors remain uncertain. Furthermore, many organizations engaged in AI development lack the incentive to prioritize energy efficiency in their operations.
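Quantifying those costs does not have to wait for perfect instrumentation. As a very rough illustration only (the power draw, data-centre overhead, and grid carbon intensity figures below are assumptions rather than measured values, and the helper function is hypothetical), a back-of-envelope estimate can be derived from device count, power draw, and training time:

```python
# Rough, illustrative estimate of training energy and emissions.
# All figures (power draw, overhead, carbon intensity) are assumptions
# for the sake of the example, not measured values.

def training_footprint(num_devices: int,
                       avg_power_watts: float,
                       training_hours: float,
                       pue: float = 1.5,
                       grid_kg_co2_per_kwh: float = 0.2) -> dict:
    """Estimate energy (kWh) and emissions (kg CO2e) for one training run."""
    # Device energy: power draw x time, converted from watt-hours to kWh.
    device_kwh = num_devices * avg_power_watts * training_hours / 1000
    # Data-center overhead (cooling, networking) via power usage effectiveness.
    total_kwh = device_kwh * pue
    # Emissions depend on the carbon intensity of the local grid.
    emissions_kg = total_kwh * grid_kg_co2_per_kwh
    return {"energy_kwh": round(total_kwh, 1),
            "emissions_kg_co2e": round(emissions_kg, 1)}

# Example: 8 accelerators drawing ~300 W each over a 72-hour training run.
print(training_footprint(num_devices=8, avg_power_watts=300, training_hours=72))
```

Even such a crude estimate makes the energy cost of a model a visible number in project discussions, which is a modest first step towards the kind of accounting described above.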

Suggestions have been made to adopt energy-efficient algorithms or less power-intensive models; however, these proposals often overlook the profit-driven nature of most organizations and the inherent trade-off between energy efficiency and model accuracy. Balancing energy efficiency with accuracy necessitates a case-by-case evaluation aligned with the specific needs and goals of each business or adopter. Ultimately, the success of sustainable AI initiatives hinges on their alignment with the operational standards and business practices of organizations. The value of an AI system and its associated services lies in their ability to integrate seamlessly with an organization’s existing frameworks and operational norms.


AI – Assess your foundations

Nick Poole


AI is 100% For Tech Start-ups Only – Right? Actually, Wrong

Nick Poole


Aligning your AI Strategy to your business values with the Sustainable AI Framework (SAIF)

Artificial Intelligence (AI) continues to infiltrate various aspects of daily life. While some are sceptical of its potential, others believe AI can provide faster and more effective approaches to a wide range of problems, from policing and crime prevention to personalised healthcare and streamlining the judiciary system. This belief is likely motivated by the success of AI in industries such as retail and manufacturing. It is therefore not surprising that many organisations, including public institutions, are trying to adopt AI and related technologies in some shape or form.

Unfortunately, adopting a technology with as much potential as AI generally requires many more inputs than the technology itself. In other words, recruiting data-related professionals and having them develop AI systems is unlikely to be enough to meet every objective.

We observed similar trends with the advent of the Internet.

As use of the Internet has increased throughout the 21st century, users have faced abusive practices such as unwanted commercial email (spam), identity theft, and, more recently, user tracking. Experts have responded by providing technical solutions to mitigate some of these challenges, while regulators and governments have stepped in to prohibit certain practices and, in some cases, now require organisations to inform their users of their practices in advance. Importantly, we all expect the organisations we interact with online to mitigate our exposure to abusive practices. This means that, from an organisation’s perspective, simply creating an online presence is probably not the biggest challenge. The real complexity lies in answering the following questions: Are all potential challenges addressed adequately? What is the potential return on investment in building an online presence? And what is the impact of that presence on the organisation’s ability to create value in the short, medium and long term? In conclusion, organisations need to manage the risks around their online presence.

Investing in AI is not so different: successfully developing AI systems that meet the constantly changing needs of today’s society requires a holistic, methodical and adaptive approach designed to assess and manage the risks around AI. Such an approach should ensure that the AI systems an organisation develops are aligned with its way of conducting business and meet certain social standards. The importance of this is corroborated by the growing prominence of principles such as privacy, fairness and social equality in discussions around AI. For an organisation, failing to meet those standards can give rise to lost opportunities. Worse yet, it may even lead to an organisation’s demise, as the example of Cambridge Analytica demonstrates.

Our Sustainable AI Framework (SAIF) is designed to help decision makers such as policy makers, boards, C-suite executives, managers and data scientists create AI systems that meet business and social principles. By focusing on four pillars related to the socio-economic and political impact of AI, SAIF creates an environment through which an organisation learns to understand its risk and exposure to any undesired consequences of AI, and the impact of AI on its ability to create value in the short, medium and long term.

Do you want your business’s AI systems to reflect your organisation’s vision and way of conducting business? SAIF is the framework you need to realise that objective.


What is Artificial Intelligence?

A comprehensive definition of AI is indispensable for the development and deployment of AI systems that meet the ever-changing dynamics of today’s society. This short article attempts to develop such a definition.

John McCarthy, one of the founding fathers of the Artificial Intelligence (AI) discipline, defined AI in 1955 as

“[…] the science and engineering of making intelligent machines, especially intelligent computer programs […]” [1]

where

“intelligence is the computational part of the ability to achieve goals in the world” [1].

McCarthy further provided an alternative definition or interpretation of AI as

“[…] making a machine behave in ways that would be called intelligent if a human were so behaving […]” [1].

Over the years, and especially within the business world, the definition or interpretation of the term AI has evolved or has been altered to incorporate development and progress made within the discipline. For example, Accenture in its Boost Your AIQ report defines AI as

“[…] a constellation of technologies that extend human capabilities by sensing, comprehending, acting and learning – allowing people to do much more” [2].

Likewise, PwC’s “Bot.Me: A Revolutionary Partnership” Consumer Intelligence Series report defines AI as

“[…] technologies emerging today that can understand, learn, and then act based on that information” [3].

Other definitions exist: For example, the Oxford dictionary defines AI as

“[…] the theory and development of computer systems able to perform tasks normally requiring human intelligence such as visual perception, speech recognition, decision making, and translation between languages” [4].

A common theme from these definitions is the emphasis on “human-like” characteristics and behaviours requiring a certain degree of autonomy, such as learning, understanding, sensing, and acting. They do not, however, provide a framework to underpin such behaviours. This is problematic because humans, whose behaviour AI systems or agents are supposed to mimic, or on whose behalf they in certain cases act, behave according to a number of principles and standards such as social norms. Social norms can be argued to provide a framework for navigating among all the behaviours that are possible in any given situation. They introduce the notion of acceptable behaviour, because they determine the behaviours that “others (as a group, as a community, as a society …) think are the correct ones for one reason or another” [5]. As a result, socially accepted behaviour is central to how we act in a given context. This suggests that the definitions of AI presented above are somewhat incomplete, because without such an equivalent framework an AI agent or system has no way of determining which behaviour is acceptable among those that are possible.

While fictional, Asimov’s three laws of robotics probably represent one of the first attempts to provide artificially intelligent systems or agents with such a framework. Attempts to create new guidelines for robots’ behaviours generally follow similar principles [6]. However, numerous arguments suggest that Asimov’s laws are inadequate. This can be attributed to the complexity involved in translating explicitly formulated robot guidelines into a format the robots understand. In addition, explicitly formulated principles, while allowing the development of safe and compliant AI agents, may be perceived as unacceptable depending on the environment in which they operate. Consequently, a comprehensive definition of AI must also provide a flexible framework that allows AI agents or systems to operate within the accepted boundaries of the community, group, or society in which they operate. In this way, AI agents or systems designed under such a framework naturally allow other stakeholders to regulate their activities as well. As a consequence, we choose to adopt a new, extended definition: By AI, we understand any system (such as software and/or hardware) that, given an objective and some context in the form of data, performs a range of operations to provide the best course of action(s) to achieve that objective, while simultaneously maintaining certain human/business values and principles.
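One possible way to make this extended definition concrete, offered here only as a sketch rather than as part of any of the cited definitions, is as a constrained optimisation problem:

$$ a^{*} = \arg\max_{a \in \mathcal{A}} \; U(a \mid \text{objective}, \text{data}) \quad \text{subject to} \quad C_{i}(a) \geq 0, \quad i = 1, \dots, k $$

where \(U\) scores how well a candidate course of action \(a\) serves the stated objective given the available data, and each constraint \(C_{i}\) encodes one of the human or business values or principles the system must not violate.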

References

[1] http://www-formal.stanford.edu/jmc/whatisai/node1.html

[2] https://www.accenture.com/t20170614T050454__w__/us-en/_acnmedia/Accenture/next-gen-5/event-g20-yea-summit/pdfs/Accenture-Boost-Your-AIQ.pdf

[3] https://www.pwc.in/assets/pdfs/consulting/digital-enablement-advisory1/pwc-botme-booklet.pdf

[4] https://www.lexico.com/definition/artificial_intelligence

[5] Lahlou, S. (2018). Installation Theory. In Installation Theory: The Societal Construction and Regulation of Behaviour (pp. 93-174). Cambridge: Cambridge University Press.

[6] https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/


ChatGPT: Beyond the excitement

There has been a lot of excitement following the release of ChatGPT to the public. In simple terms, ChatGPT is a virtual assistant that operates over an extremely vast amount of digital content. The service undoubtedly demonstrates the ability of Artificial Intelligence (AI) to push existing boundaries and achieve new levels of innovation. ChatGPT seemingly opens the door to “endless” possibilities, whether it is seeking help with a resume for a job application, writing a speech, assessing a CV, debugging a computer programme, or doing some homework for primary school students, to name just a few examples. This has led many organisations, including public institutions, to discuss the integration of such services into their day-to-day operations and the impact on them.

From the ChatGPT provider’s perspective, the economic advantage it yields is probably unrivalled. At the time of writing, it is probably not yet fully understood by authorities, by the public, and perhaps even by the service provider itself. The only guarantee is that there is a lot to gain through such a service. This is corroborated by Microsoft’s recent investment in OpenAI, the company behind ChatGPT. More importantly, we can expect other tech giants to follow suit and release similar services over the coming months.

It can be argued that one of the key advantages such a service yields to its users is that they can appear more knowledgeable while having done only the absolute minimum of research at best and little to none at worst. Despite this benefit, the long-term impact on its users’ ability to formulate independent thoughts is probably not receiving enough attention.

Importantly, to a great extent, a service like ChatGPT has the ability to “control” what its users express, and can therefore influence their thoughts and beliefs, affecting the overall “Diversity of Thoughts”. One could argue that using a search engine today equally shapes one’s thoughts through the sometimes seemingly arbitrary relevance of the content it returns. However, the key difference is that the search engine does not provide its users with a completely formulated thought; it offers leads that they need to investigate by themselves.

To provide some context, in 2013, Deloitte released a research report in which it describes the concept of Diversity of Thoughts as:

The idea that our thinking is shaped by our culture, background, experiences, and personalities

It is often argued that a key element of a successful business or society is diversity – an idea that is well established in fields such as economics. This begs a multitude of questions. For example, can we continue to diversify our thoughts when most of the thought process, if not all of it, has been outsourced to a service like ChatGPT? Does the service incorporate enough of our individual background, experiences, and personalities to fully account for what we stand for? How much control will we give away by embracing such services? And how much control and power does the service provider gain by operating such a service, and what controls are in place to prevent them from using it to their full advantage?

Regulatory bodies will eventually need to intervene, not only to identify the boundaries and limitations of these tools but, more importantly, to define as clearly as possible the rights and responsibilities of the parties involved. This will require answers to various practical questions beyond the existential ones above: for example, who bears the responsibility for unintended consequences when one acts on the advice of such services?


Are we forgetting the Why in AI?

Reading the news recently, I saw that Amazon have launched Q, the latest in the line of great chatbots that will change our world. Since ChatGPT we have had Anthropic's Claude, Bard and Meta AI, to name but a few. Keeping up with these tech titans made me wonder whether Elemental Concept should launch "BimBot". This would be a great idea and really put us on the map; the only problem is that it wouldn't do much. It might be useful for spitting out the odd banal and slightly eccentric blog – but even that is questionable.

So you might ask why ‘BimBot’ wouldn't be special (it would be, because it's a great name). The answer is fairly complex, but in my simple view we can't replicate the tech titans because of a combination of the following:

 

  • The models used are large (the number of parameters is far greater than anything we have seen before),
  • They have had years of reinforcement learning from human feedback and supervised learning,
  • They need immense amounts of computing power, and
  • They have been trained on very large amounts of data.

 

They all have points of IP differentiation – e.g. whether they use a Generative Pre-trained Transformer (“GPT”) – and they are trained on differing data.

It’s the bit about the data they are trained on that is the actual reason for this blog.

At EC we are often engaged to perform technical due diligence reviews (see https://www.elementalconcept.com/tech-due-diligence/). These may be requested by the companies themselves, potential investors or acquirers.

In these DDs we essentially take a thorough look at the company’s existing technology and its plans to improve that technology to meet the business’s aspirations.

For us this is a privileged position, as we get to see some amazing technology and understand how it has been put together. The outcome of the review is a report which details our thoughts on short, medium and long term technology improvements. It was reading one such report that prompted this blog.

As you would guess, a number of the companies we look at have implemented some form of AI. We will of course look at how it’s implemented and the integrity of the data being used, but the thing that surprised me in the report was how incredibly perceptive our team was.

 

The question being raised in our report was whether the output of the models reflected the values of the company we were reviewing.

Whether company values exist, and whether people know what they are, is of course another debate, but for the purpose of this blog let’s assume they are how a company presents itself to the world. They are the ethical or moral code that its customers and stakeholders understand it adheres to, and that influences its decisions.

In this case it wasn’t clear to us whether the models truly reflected said company’s values and purpose.

The reason we were asking this question was that our review looked in depth at the data being used to train the company’s AI. What we found was that the historic data used for training did not reflect where the company wants to be. There is no doubt that historic data is biased: it records what the outcome was historically, not where things are now or are likely to be. There has been a catalogue of failed AI models in justice, recruitment, social media and finance. It isn’t quite garbage in = garbage out – it’s more that an existing bias is simply perpetuated.
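As a purely illustrative sketch of what a first-pass check on historical training data can look like (the column names and figures below are hypothetical, not taken from the engagement described above), comparing outcome rates across a sensitive attribute before any model is trained already reveals the bias a model would go on to perpetuate:

```python
import pandas as pd

# Hypothetical historical hiring data; the column names and values are
# illustrative only, not taken from any real engagement.
history = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "M", "F", "M", "M"],
    "hired":  [0,    1,   0,   1,   1,   0,   1,   0],
})

# Outcome rate per group: a large gap suggests the historical process was
# biased, and a model trained on this data will tend to reproduce that gap.
rates = history.groupby("gender")["hired"].mean()
print(rates)
print("Selection-rate gap:", abs(rates["M"] - rates["F"]))
```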

What we did well in this report was to suggest other sets of data that could be incorporated. We also recommended ways that the AI Team could become more educated about the ethical AI dilemma and be more in tune with the Company’s Values.

The data being used is key, and our understanding of the circumstances that produced that data has to be reflected in what we expect the AI to do. This means understanding your business historically and making sure your team understands how the business, and the environment in which it operates, are changing. Without this you won’t get any real benefit from using AI.

So bringing this back to the never to be launched ‘BimBot’. This can’t happen as we don’t have the IP and infinite computing power. Furthermore, there isn’t enough data (even if you include my English GCSE coursework) that will reflect the values I believed in then and how with experience my views will change.

What I can say though is at Elemental Concept we have a team that can help you review and improve the implementation of your AI and ascertain whether the outcome will be in line with your Company Values.


Unveiling the Complexity of Guiding AI Systems

Artificial Intelligence emerges as a potent and dynamic technology with vast potential for evolution and self-learning. However, the endeavor to control the learning process of AI systems proves to be a complex feat, necessitating the imposition of boundaries and constraints. Despite efforts to restrict an AI system’s access to information, interaction parameters, and core functions, guaranteed adherence to prescribed boundaries remains elusive, compounded by the inherently subjective nature of these limitations (often we have no idea of what we don’t want the system to learn).

In the domain of AI, fast decision-making holds pivotal significance, rooted in the principle of objectivity. The objective function, a cornerstone of AI systems, embodies the property we seek for the system to possess, typically oriented towards minimizing prediction errors. While the objective is predetermined, the pathway to its achievement remains unconstrained, akin to a game where the objective is clear, but the means of attainment are flexible within certain bounds.

Illustrating this concept is a simple game where a bucket stands 2 meters away from each player, and the objective is to throw a ball into the bucket as many times as possible within a fixed time frame. If successful, the player obtains a new ball, but if the ball misses the bucket, the player can only retrieve it from outside a 2-meter radius around the bucket. This scenario showcases how individuals, in pursuit of the objective, may devise imaginative strategies, resulting in unintended side effects. This demonstrates that even with a clear objective, individuals may take actions that appear “negative” but serve to help them achieve the objective from their perspective. The culmination of these individual actions may often lead to objectives far different from the original, altering the intended behavior (and overall objective) of the game.

Analogous to playing the above-described game with peers, aligning objectives with intended behavior necessitates the introduction of additional constraints to limit individual flexibility. Similarly, the fundamentally incomplete nature of the objective function in AI systems leaves room for them to alter their core behavior in unintended ways. Consequently, the perpetual endeavor to define new rules reflects a quest for heightened control and safety in these systems.
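A minimal sketch of that idea, with an illustrative boundary and penalty weight that are assumptions rather than a prescription: instead of optimizing the raw objective alone, the system optimizes the objective plus an explicit penalty for straying outside the boundaries we care about.

```python
def prediction_error(y_true, y_pred):
    """The raw objective: mean squared error, i.e. what the system is asked to minimize."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def boundary_violation(y_pred, lower=0.0, upper=1.0):
    """Illustrative constraint: predictions should stay within [lower, upper].
    Returns the average amount by which predictions stray outside that range."""
    overshoot = [max(0.0, lower - p) + max(0.0, p - upper) for p in y_pred]
    return sum(overshoot) / len(y_pred)

def constrained_objective(y_true, y_pred, penalty_weight=10.0):
    # The unconstrained objective only says "minimize error"; the penalty term
    # makes the cost of 'creative' but out-of-bounds behavior explicit.
    return prediction_error(y_true, y_pred) + penalty_weight * boundary_violation(y_pred)

# A wildly out-of-range prediction may look tolerable on error alone,
# but is heavily penalized once the boundary term is included.
print(prediction_error([0.2, 0.8], [0.1, 1.6]))        # 0.325
print(constrained_objective([0.2, 0.8], [0.1, 1.6]))   # 3.325
```

Just as in the ball-and-bucket game, the penalty does not change the objective itself; it narrows the space of strategies the system can profitably use to reach it.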

In conclusion, the design of AI systems requires a paramount focus on safety and control. Delving into the complexities of building safe and controllable AI systems unveils the intricate yet pivotal considerations that underpin the development of this transformative technology. For a comprehensive understanding of this topic, further insights can be gleaned from my book “Towards Sustainable Artificial Intelligence”. Alternatively, reach out for a chat.


Towards sustainable AI: The Data Science development process

Elemental Concept Blog - Ghislain Landry Tsafack

Whilst generally organisation specific, typical data science processes involve the following key phases: business understanding, data collection, model building, evaluation, and deployment. The “performance metrics” phase introduced here provides a template for the integration of business requirements.
Although performance measurement is generally performed as part of evaluation, it often fails to integrate business requirements not directly related to the AI system’s accuracy. For example, an AI recruitment system satisfying traditional performance metrics may fail to meet an organisation’s prioritised objectives, such as avoiding gender bias. We refer to such objectives as “soft performance” metrics.
Defining performance metrics as an additional phase allows the business and the AI team to agree on the objectives of the project. Non-functional requirements and expected behaviour of the system are identified and documented, leaving the development team with clear objectives. Based on this, the development team can design a system that meets the organisation’s expectations. It may also result in a project being deemed infeasible, given its dependence on data consistent with these objectives.
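As a minimal sketch of what agreeing such metrics up front might look like in practice (the thresholds, the demographic-parity choice and the toy data below are illustrative assumptions), the evaluation reports a traditional metric alongside a soft performance metric and checks both against the agreed targets:

```python
def evaluate(y_true, y_pred, group):
    """Report a traditional metric alongside an agreed 'soft performance' metric."""
    # Traditional metric: accuracy of the predictions.
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # Soft metric (illustrative): demographic parity gap, i.e. the difference
    # in positive-prediction rates between the groups in `group`.
    groups = sorted(set(group))
    rate = {g: sum(p for p, gr in zip(y_pred, group) if gr == g) /
               sum(1 for gr in group if gr == g) for g in groups}
    parity_gap = abs(rate[groups[0]] - rate[groups[-1]])
    return {"accuracy": accuracy, "parity_gap": parity_gap}

# Targets agreed with the business up front (hypothetical figures):
# accuracy >= 0.80 AND parity_gap <= 0.10.
result = evaluate(y_true=[1, 0, 1, 0, 1, 0],
                  y_pred=[1, 0, 1, 1, 1, 0],
                  group=["F", "F", "M", "M", "M", "F"])
print(result, "meets targets:", result["accuracy"] >= 0.80 and result["parity_gap"] <= 0.10)
```

In this toy run the model clears the accuracy target but fails the parity target, which is exactly the situation the additional phase is meant to surface before deployment.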

#sustainableai #aiprocesses
